James Hopgood: Acoustic Sensing
From Billy Rosendale on October 5th, 2016
In this video, James describes his research on signal processing as a core technology when a sensor, such as a microphone, is placed at a distance from the physical event being observed. He designs algorithms to detect, localise, enhance, or classify these events.
A number of acoustic sensing problems can be summarised in terms of detection, localisation, and classification (DLC) of acoustic events from moving targets using audio data from a mobile sensor platform. These include:
1. Audio sensing of weapons fire from helicopters and other moving vehicles;
2. Audio sensing of acoustic anomalies from autonomous unmanned aerial vehicles (UAVs);
3. Audio sensing from a mobile platform for human-robot interaction.
Each of these applications involves DLC, and each depends on its specific operating environment. While helicopters and UAVs tend to self-generate significant interference through their rotor noise, robotic mobile platforms are likely to be used in noisy environments such as factories. These problems are therefore closely related and can largely be addressed with the same techniques.
Generally Related Projects
There are a number of projects that I work on related to the three application areas described above:
1. Sniper Localisation using Acoustic Sensors;
2. Distant Speech Recognition and Audio-Visual Fusion;
3. Acoustic Sensing using Off-the-Shelf UAVs;
4. Acoustic Localisation and Classification of Multiple Simultaneous Acoustic Events;
5. Real-time Suppression of Engine Noise Components for Audio Processing on a UAV;
6. Image Processing for Self-Tracking and Localisation using Cameras for UAVs (monoSLAM);
7. Audio Localisation of Nonconcurrent Acoustic Events using Sensors on Mobile Platforms.
Audio Localisation on Mobile Platforms
Audio localisation techniques are generally based on time-difference of arrival (TDOA) estimation and delay-and-sum beamforming (for example, the steered-response power of a beamformer). These techniques are particularly appealing because they can be implemented to run in real time. However, despite the plethora of sound source localisation methods, and because mobile-platform applications raise intrinsic integration issues (e.g., real-time performance, a moving base, changing environmental conditions), no currently established system has been clearly demonstrated to be the best available. This therefore remains an active research area.
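As a concrete illustration of the TDOA approach, the delay between a pair of microphones is commonly estimated with the phase transform (GCC-PHAT), which whitens the cross-spectrum so that only phase information drives the correlation peak. The sketch below is a minimal NumPy implementation under assumed conditions; the function name, sampling rate, and synthetic impulse signals are illustrative and not taken from the project:

```python
import numpy as np

def gcc_phat(x, y, fs, max_tau=None):
    """Estimate the delay of y relative to x (in seconds) via GCC-PHAT."""
    n = len(x) + len(y)                      # zero-pad to avoid circular wrap-around
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = Y * np.conj(X)                       # cross-spectrum
    R /= np.abs(R) + 1e-12                   # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Reorder so that index 0 corresponds to lag -max_shift
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

# Synthetic check: an impulse in y delayed by 5 samples relative to x
fs = 16000
x = np.zeros(1024); x[100] = 1.0
y = np.zeros(1024); y[105] = 1.0
tau = gcc_phat(x, y, fs)                     # about 5 / 16000 s
```

In a steered-response power localiser, this correlation is evaluated (or the equivalent beamformer output power computed) over a grid of candidate source positions, and the maximising position is taken as the estimate.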
Sniper Localisation using Acoustic Sensors
Acoustic gunshot and sniper localisation systems have important applications in both the civilian and defence sectors. Civilian applications include forensics, homeland security, law enforcement, and urban monitoring; military applications include peacekeeping and crisis situations. There are several commercial sniper localisation systems, as well as many reported in the open literature. However, most of these techniques are designed under specific assumptions, such as knowledge of the bullet speed. Most rely on TDOA estimates obtained from a generalised cross-correlation (GCC) between signals at different microphone positions. A number of technical challenges remain to be addressed here.
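To indicate how such TDOA estimates are put to use, the far-field bearing of an impulsive event such as a muzzle blast can be recovered from a single microphone pair via the plane-wave relation sin(θ) = cτ/d. This is a generic illustrative sketch rather than any of the commercial systems mentioned above; the speed of sound and the baseline spacing are assumed values:

```python
import numpy as np

C_SOUND = 343.0  # assumed nominal speed of sound in air at 20 °C, m/s

def farfield_bearing(tau, mic_spacing):
    """Bearing (radians) of a far-field source from the TDOA between two
    microphones separated by mic_spacing metres: sin(theta) = c * tau / d."""
    s = np.clip(C_SOUND * tau / mic_spacing, -1.0, 1.0)  # guard against noisy TDOAs
    return np.arcsin(s)

# A TDOA of 0.5 ms across a 0.5 m baseline gives a bearing of roughly 20 degrees
theta_deg = np.degrees(farfield_bearing(0.5e-3, 0.5))
```

With more microphone pairs, several such bearings (or the raw TDOAs) can be combined by least squares to triangulate the shooter position, which is where assumptions such as a known bullet speed typically enter.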
Find out more:
University Defence Research Collaboration in Signal Processing (UDRC): http://www.udrc.eng.ed.ac.uk
Dr James Hopgood, School of Engineering profile: http://www.eng.ed.ac.uk/about/people/dr-james-r-hopgood
Edinburgh Research Explorer: http://www.research.ed.ac.uk/portal/jhopgoo1