A research team in Taiwan has combined several technologies, including computer vision, specialized algorithms, and microphone arrays, to give users a better ear for where sound is coming from. The proposed design includes an innovative dual-layer microphone array worn on the ears and a necklace-style wearable device that incorporates a camera with computer vision AI. An algorithm helps the computer vision component locate faces in the scene and predict which face the sound is coming from. When the speaker is out of range of the computer vision system, a second algorithm kicks in that predicts the sound's origin from its angle and time of arrival at the microphones.
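The article does not describe the team's localization algorithm in detail, but estimating a sound's angle from its time of arrival at two microphones is a classic technique. The sketch below is a minimal, hypothetical illustration of the far-field time-difference-of-arrival model, not the researchers' actual method; the microphone spacing and speed of sound are assumed values.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C (assumed)

def doa_from_tdoa(delay_s: float, mic_spacing_m: float) -> float:
    """Estimate direction of arrival, in degrees from broadside, for a
    two-microphone pair using the far-field model:
        delay = (spacing / c) * sin(theta)
    A positive delay means the sound reached the reference mic later.
    """
    ratio = SPEED_OF_SOUND * delay_s / mic_spacing_m
    # Clamp against measurement noise so asin() stays in its domain.
    ratio = max(-1.0, min(1.0, ratio))
    return math.degrees(math.asin(ratio))

# A sound arriving head-on produces no delay between the microphones,
# so the estimated angle is 0 degrees (straight ahead).
print(doa_from_tdoa(0.0, 0.2))
```

In a real wearable, the inter-microphone delay itself would be estimated from the audio streams (for example by cross-correlation) before being converted to an angle.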
In the last step, a mixing algorithm modifies the sound that users hear so they can better detect its directionality, then adjusts the volume to achieve an immersive auditory experience. The researchers tested the hearing aid in a group of 30 patients. They found that study participants correctly identified the source of sounds using the computer vision component of the hearing aid with 94 percent or higher accuracy, at distances typical of conversation (160 centimeters or less). When a sound originated from an area detectable by the microphones but not by the computer vision device, users were still able to locate its source with more than 90 percent accuracy.
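The article does not specify how the mixing algorithm renders direction, but one standard way to convey a source angle over two output channels is constant-power stereo panning, which trades level between the ears while keeping total power steady. The sketch below is a generic illustration of that idea, not the team's implementation; the angle convention (negative = left, in degrees) is an assumption.

```python
import math

def pan_gains(angle_deg: float) -> tuple[float, float]:
    """Constant-power panning: map a source angle in [-90, 90] degrees
    (negative = left, 0 = center, positive = right) to (left, right)
    channel gains satisfying left**2 + right**2 == 1, so perceived
    loudness stays constant as the source moves.
    """
    # Map the angle onto the pan arc [0, pi/2].
    t = (angle_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(t), math.sin(t)

# A centered source feeds both ears equally; a hard-left source
# feeds only the left channel.
print(pan_gains(0.0))
print(pan_gains(-90.0))
```

Scaling each output sample by these gains gives the listener a level cue for direction; a production system would typically add timing and spectral cues as well.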