When sound hunts down drones
Published by Joseph SARDIN
Summary
- Acoustic detection identifies drones and missiles by their sound signature.
- A network of microphones triangulates the position in real time.
- The system is passive: it listens without emitting anything.
- AI tells a drone apart from a thunderstorm or a tractor.
- Ukrainian systems like Zvook, Sky Fortress, and FENEK illustrate the approach.
I don't enjoy writing about war. On this blog, sound usually means the sensitive material of landscapes, professions, and scientific discoveries. But when current events put listening at the heart of a conflict, I can't ignore it. Wars are everywhere these days, in Ukraine, in the Middle East, in the Caucasus, and the drone has become the everyday weapon of those confrontations. And for several months now, microphones have been learning to track them. Sound isn't only there to soothe or move us: whether we like it or not, it also plays a part in defense logic. Better to talk about it openly.
An ear pointed at the sky
The idea is almost disarmingly simple. Instead of emitting a radar signal, which can be jammed or pinpointed, you set up a network of microphones on the ground. They listen, around the clock, to the noise of the sky. When a drone passes overhead, its engine, its rotors, or its whistle leave behind an identifiable acoustic signature. By comparing the precise instant each microphone picks up that signature, an algorithm calculates the target's bearing, range, and altitude. It's acoustic triangulation, on a large scale.
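To make the idea concrete, here is a minimal sketch of that triangulation, with everything invented for illustration: hypothetical microphone positions, a hypothetical target, and arrival times computed from the geometry rather than measured. Real systems contend with noise, clock synchronization, and moving targets, but the core calculation looks like this.

```python
# Minimal 2-D TDOA localization sketch (illustrative, not any system's actual code).
# Assumes synchronized microphones at known positions and a known speed of sound.
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound in air, m/s (approximate, at about 20 °C)

# Hypothetical microphone positions (metres) and a true source to simulate.
mics = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
true_source = np.array([1200.0, 800.0])

# Simulated arrival times: distance / speed of sound, referenced to mic 0.
arrivals = np.linalg.norm(mics - true_source, axis=1) / C
tdoa = arrivals - arrivals[0]  # time differences of arrival

def residuals(pos):
    """Mismatch between predicted and measured TDOAs for a candidate position."""
    dists = np.linalg.norm(mics - pos, axis=1)
    predicted = (dists - dists[0]) / C
    return predicted - tdoa

# Solve for the source position from a rough initial guess.
solution = least_squares(residuals, x0=np.array([100.0, 100.0]))
print("estimated source:", solution.x)  # converges to ~[1200, 800]
```

With more than the minimum number of microphones, the same least-squares fit also averages out measurement noise, which is why networks of masts beat any single sensor.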
The principle isn't new. During World War I, huge horn-shaped listening devices were already used to spot aircraft by ear. A century later, the microphone has replaced the human ear, and the algorithm has replaced the operator, but the gesture is the same: cocking an ear toward the sky.
Anatomy of an acoustic sensor
A typical sensor looks like a mast fitted with an array of microphones, often six or seven, arranged in a known geometry. That layout makes it possible to measure the time delay between the arrivals of the same wave at each capsule. From that delay, the system figures out the direction of the source, both in bearing and elevation. With a single, well-calibrated mast, some units claim an angular accuracy of around two degrees, which is impressive for a purely acoustic system. Several masts working as a network can then cross-reference their data and pinpoint the target precisely in three dimensions, in real time.
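For a single mast, a common way to picture the math is the far-field approximation: the wave from a distant target arrives as a plane, so each capsule hears it with a delay proportional to its position along the direction of arrival. The sketch below is my illustration of that idea, with an invented six-capsule layout, not a real unit's geometry; it recovers bearing and elevation from the inter-capsule delays by least squares.

```python
# Far-field direction-of-arrival sketch for a compact array (illustrative only).
# A plane wave from unit direction u reaches mic i with delay
# tau_i = -((m_i - m_0) . u) / c relative to mic 0; solving for u gives
# bearing and elevation.
import numpy as np

C = 343.0  # speed of sound, m/s

# Hypothetical six-capsule layout on a mast (metres), mic 0 at the origin.
mics = np.array([
    [0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.0, 0.3, 0.0],
    [-0.3, 0.0, 0.0], [0.0, -0.3, 0.0], [0.0, 0.0, 0.4],
])

# Simulate a plane wave arriving from 40° bearing, 15° elevation.
az, el = np.radians(40.0), np.radians(15.0)
u_true = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
tau = -(mics - mics[0]) @ u_true / C  # delays relative to mic 0

# Least-squares recovery of the direction from the measured delays.
A = -(mics[1:] - mics[0]) / C
u_est, *_ = np.linalg.lstsq(A, tau[1:], rcond=None)
u_est /= np.linalg.norm(u_est)

bearing = np.degrees(np.arctan2(u_est[1], u_est[0]))
elevation = np.degrees(np.arcsin(u_est[2]))
print(f"bearing ≈ {bearing:.1f}°, elevation ≈ {elevation:.1f}°")
```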
Local processing is key. Each sensor packs a computing module that filters out ambient noise (wind, traffic, animals) and only forwards pre-processed information to the central station. That cuts down on network load and lets units keep working even without an internet connection, a crucial detail out in the field.
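What that on-sensor gating might look like, in a deliberately simplified form: band-pass the audio around the frequencies where rotors and small engines live, then forward only the frames whose in-band energy clears a threshold. The band edges, frame length, and threshold below are assumptions for illustration, not any vendor's actual values.

```python
# Sketch of edge-side gating (my illustration, not the manufacturers' pipeline).
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16_000  # sample rate in Hz (assumed)

# Band-pass roughly covering small-engine and rotor harmonics (assumed band).
sos = butter(4, [80, 2000], btype="bandpass", fs=FS, output="sos")

def frames_worth_forwarding(audio, frame_len=FS // 2, threshold_db=-45.0):
    """Yield only the half-second frames loud enough in-band to be interesting."""
    filtered = sosfilt(sos, audio)
    for start in range(0, len(filtered) - frame_len + 1, frame_len):
        frame = filtered[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2)) + 1e-12
        if 20 * np.log10(rms) > threshold_db:
            yield start / FS, frame  # timestamp + pre-filtered audio

# Example on synthetic input: one second of faint noise, then a 200 Hz "engine".
t = np.arange(FS) / FS
audio = np.concatenate([0.001 * np.random.randn(FS),
                        0.2 * np.sin(2 * np.pi * 200 * t)])
for ts, _ in frames_worth_forwarding(audio):
    print(f"forwarding frame at t = {ts:.1f} s")  # only the tone frames pass
```

Everything else, the wind, the traffic, the animals, dies at the sensor and never touches the network.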
Silence as a tactical edge
The major strength of these systems is that they emit nothing at all. No radio waves, no electromagnetic signature an enemy can exploit. And for fiber-optic-guided drones, which never talk over the air with an operator, radio-frequency detectors are simply blind: passive listening is one of the few ways to see them coming. Modern electronic warfare sometimes feels like a game of hide-and-seek where both sides shut off their emissions to avoid getting spotted. In a landscape saturated with jamming, a sensor that just listens becomes invisible.
The detection ranges announced by Ukrainian manufacturers are around 1.9 miles (3 km) for Shahed-type drones and around 3.1 miles (5 km) for cruise missiles. Shahed drones have a well-known acoustic quirk: a low, continuous drone, often compared to an old moped, that Ukrainian residents have learned to recognize on instinct. That texture, paradoxically, makes them an ideal target for passive listening.
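A back-of-the-envelope calculation shows why those ranges top out at a few kilometres. Under free-field spherical spreading, level falls by 6 dB per doubling of distance; the source level below (110 dB SPL at one metre) is a hypothetical figure I chose for illustration, since the real value varies by airframe.

```python
# Back-of-the-envelope spreading loss (illustrative; the source level is assumed).
# Free-field spherical spreading: SPL(r) = SPL_ref - 20*log10(r / r_ref).
import math

def spl_at(distance_m, spl_at_1m=110.0):
    """Received level in dB SPL, ignoring wind, ground effects, and air absorption."""
    return spl_at_1m - 20 * math.log10(distance_m)

for d in (1000, 3000, 5000):
    print(f"{d/1000:.0f} km: ~{spl_at(d):.0f} dB SPL")
# ~50 dB at 1 km, ~40 dB at 3 km: approaching quiet ambient noise,
# which is roughly where a few kilometres of passive range tops out.
```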
Learning to recognize each sound
The other key piece is the software. Development teams build out databases of acoustic signatures: civilian drones, military drones, helicopters, missiles, fighter jets. Machine learning models are trained to classify those signatures in real time. The challenge is telling a Shahed apart from a tractor, a thunderstorm, or even, as a Telegraph article put it back in 2024, a mooing cow. Ground units then receive the information on a tablet (target type, heading, speed, position) and can trigger the right response.
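As a toy illustration of signature matching (not any system's actual model, which would be a trained neural network fed far richer features and data), one can reduce a sound to its average spectrum and classify by nearest stored signature:

```python
# Toy signature classifier: average log spectra as fingerprints,
# nearest-centroid matching as the "model". Purely illustrative.
import numpy as np
from scipy.signal import spectrogram

FS = 16_000

def signature(audio):
    """Average log-power spectrum: a crude acoustic fingerprint."""
    _, _, sxx = spectrogram(audio, fs=FS, nperseg=1024)
    return np.log(sxx.mean(axis=1) + 1e-12)

def make_tone(freqs, seconds=1.0):
    """Synthetic stand-in for a recorded class example (sum of harmonics + noise)."""
    t = np.arange(int(FS * seconds)) / FS
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) + 0.05 * np.random.randn(len(t))

# Hypothetical labeled database: harmonic stacks as crude class stand-ins.
database = {
    "drone":   signature(make_tone([180, 360, 540])),  # moped-like hum
    "tractor": signature(make_tone([40, 80, 120])),    # lower, slower harmonics
}

def classify(audio):
    sig = signature(audio)
    return min(database, key=lambda label: np.linalg.norm(sig - database[label]))

print(classify(make_tone([180, 360, 540])))  # -> drone
```

The real difficulty is exactly the one the article names: the cow, the thunderstorm, and the tractor all live in the same low-frequency band as the drone, which is why the training databases matter more than the model architecture.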
This passive, distributed logic looks a lot like the one behind the industrial acoustic cameras I've covered before in the context of predictive maintenance. The tools are cousins; the uses differ.
Several systems, one shared principle
Ukraine is now the full-scale testing ground for this approach. Several systems coexist there. Zvook, one of the pioneers, started as a project run by civilian engineers and uses microcomputers mounted on radio towers. Sky Fortress, the most massive of the three, fields more than fourteen thousand sensors across the country, with units built for a few hundred dollars apiece, sometimes from plain smartphones tucked into weatherproof enclosures. FENEK, the newest of the bunch, relies on seven-microphone masts and a proprietary sound-filtering algorithm. All three follow the same principle: listen, triangulate, classify. Several European countries, including Lithuania, have announced plans to roll out this kind of acoustic shield starting in 2026.
Sound, again and again
This story joins a series of pieces I've already devoted to the more sensitive uses of sound: psychological warfare through loudspeakers in North Korea, and the sonic weapons reported in Belgrade. Each time, sound shows up as an ambiguous tool: it informs, it manipulates, it harms, it protects. With these listening networks, it becomes a passive shield, made affordable by necessity, and one that could tomorrow protect targets other than war-torn cities. Do you think this passive listening logic has a future beyond the battlefield, say to protect airports, power plants, or major events?
"Any news, information to share or writing talents? Contact me!"
♥ - Joseph SARDIN - Founder of BigSoundBank.com