Linköping University uses Spresense for next-generation hearing aid technology

Linköping University is working with the Danish hearing aid company Oticon to explore the next generation of hearing aid technology. As with all hearing aids, the aim is to let the user perceive relevant sound sources in their immediate surroundings more clearly while suppressing unwanted signals and noise. The conceptual solution is to sample all sound reaching the wearer, convert the analog signals to digital data, identify which data comes from the desired sound sources, and discard the rest. This is conceptually simple but requires a substantial amount of mathematics to process inputs from multiple microphones in real time. The field of extracting sound sources by identifying the direction of arrival of signals dates back several decades, and all of these calculations can be done on computers with sufficiently powerful hardware. However, when the goal is equipment worn in an earpiece or integrated into a pair of glasses, the algorithms must be far more efficient, so that the computation fits on hardware with a small footprint and low power consumption.
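To make the idea concrete, the sketch below shows delay-and-sum beamforming, one of the classic building blocks in this field: the channels of a microphone array are time-aligned for a chosen direction and averaged, so sound arriving from that direction adds up coherently while off-axis noise partially cancels. This is a minimal illustration of the general technique, not the researchers' actual algorithm; the function name and array geometry are assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate speed of sound in air

def delay_and_sum(frames, mic_positions, direction, fs):
    """Time-align the array channels for a plane wave arriving from
    `direction` and average them.

    frames        : (n_mics, n_samples) time-domain audio
    mic_positions : (n_mics, 3) mic coordinates in meters
    direction     : unit 3-vector pointing from the array toward the source
    fs            : sample rate in Hz
    """
    n_mics, n_samples = frames.shape
    # A mic further along `direction` hears the wavefront earlier;
    # these per-channel delays line the signals back up.
    delays = mic_positions @ direction / SPEED_OF_SOUND
    # Apply the (fractional) delays as phase shifts in the frequency domain.
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(frames, axis=1)
    phase = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    aligned = np.fft.irfft(spectra * phase, n=n_samples, axis=1)
    # Coherent average: the steered source reinforces, off-axis noise does not.
    return aligned.mean(axis=0)
```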

This is where the Spresense microcontroller board fits well, with its strong computing ability and low power consumption. The researchers at Linköping University use Spresense as the base for exploring new algorithms that deliver results equivalent to conventional methods at a much lower computational cost.

Using Spresense with an array of multiple microphones

The test unit prototype comprises the Spresense board fitted with an 8-mic array, WiFi, and an inertial/magnetic sensor (IMU). The mic array captures the sound field around the wearer and feeds these streams to the Spresense unit. The WiFi module lets developers read out real-time data and adjust settings, while the IMU helps track the unit's position and orientation in space. All sensor data is collected, filtered, and processed on the Spresense unit.
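One role the IMU can play in a setup like this is compensating for the wearer's head motion: a direction-of-arrival estimate made in the rotating microphone frame can be mapped into a fixed world frame using the orientation the IMU provides. The sketch below shows only the yaw (heading) part of that idea; it is an assumption about how the orientation data could be used, not the team's published code.

```python
import numpy as np

def yaw_rotation(yaw_rad):
    """Rotation matrix about the vertical axis. A full solution would
    fuse accelerometer/gyro/magnetometer data into a 3-D orientation;
    this keeps only the heading angle for brevity."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def to_world_frame(doa_device, yaw_rad):
    """Map a direction-of-arrival unit vector from the (moving) device
    frame into a fixed world frame, so a tracked source keeps a stable
    angle while the wearer turns their head."""
    return yaw_rotation(yaw_rad) @ doa_device

# Example: a source straight ahead in the device frame, with the head
# yawed by 30 degrees, maps to a rotated direction in world coordinates.
print(to_world_frame(np.array([1.0, 0.0, 0.0]), np.radians(30)))
```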

This test unit is used to explore new computing models that accurately and efficiently calculate where different sources are located in 3D space. The next step is to segment these sources and let the user hear only what's relevant (e.g. a conversation across the table). The microphones are sampled at 8000 Hz, and samples are processed in batches of 16000 with an overlap of 6000 samples, i.e. 10000 new samples per batch. The IMU sensor is sampled at 100 Hz. The software components used for this solution are open source.
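As a concrete illustration of that batching scheme, the snippet below slides a window over a multichannel recording using the figures quoted above (16000-sample batches with 6000 samples of overlap at 8000 Hz). It is a minimal sketch of the framing arithmetic, not the project's actual buffering code.

```python
import numpy as np

FS = 8000              # audio sample rate (Hz)
BATCH = 16000          # samples per batch (2 s of audio)
OVERLAP = 6000         # samples shared with the previous batch
HOP = BATCH - OVERLAP  # 10000 new samples per batch (1.25 s)

def batches(stream):
    """Yield overlapping (n_mics, BATCH) windows from a continuous
    (n_mics, n_samples) recording, matching the batching above."""
    start = 0
    while start + BATCH <= stream.shape[1]:
        yield stream[:, start:start + BATCH]
        start += HOP

# Example: 10 s of 8-channel audio yields 7 overlapping batches.
audio = np.zeros((8, FS * 10))
print(sum(1 for _ in batches(audio)))
```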

Testing the new algorithms against real sound sources and environments

To test the computing models, the research team considered three scenarios:

  • Scenario 1: Simulation: Stationary mic frame listening to two sound sources at different angles (a man and a woman)
  • Scenario 2: Experiment: Stationary mic frame listening to two sound sources at different angles (a man and a woman)
  • Scenario 3: Moving mic frame with one man talking

Six different beamforming methods were evaluated on these scenarios. In one of the result outputs, two gray lines mark the true angles of the two sound sources, while the colored pattern shows the direction-of-arrival estimate over time.
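Plots of this kind can be produced with a steered-response-power scan: for each batch, the array is steered to a grid of candidate azimuth angles and the output power is recorded, so peaks trace the source directions over time. The sketch below is a generic version of that idea, reusing the delay-and-sum alignment from earlier; it is not one of the six methods the team evaluated.

```python
import numpy as np

def srp_map(frames, mic_positions, fs, n_angles=180, c=343.0):
    """Steered-response power over a grid of azimuth angles for one
    batch: steer the array to each candidate angle and measure the
    output power. Peaks in the returned vector indicate likely source
    directions; stacking one vector per batch yields the kind of
    angle-versus-time pattern described above.
    """
    n_mics, n_samples = frames.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(frames, axis=1)
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    power = np.empty(n_angles)
    for i, a in enumerate(angles):
        direction = np.array([np.cos(a), np.sin(a), 0.0])
        delays = mic_positions @ direction / c
        phase = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
        # Coherent sum of the phase-aligned channel spectra.
        steered = (spectra * phase).sum(axis=0)
        power[i] = np.sum(np.abs(steered) ** 2)
    return np.degrees(angles), power
```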

The researchers' development, testing, and analysis go far beyond this summary and show that the results can be improved further with additional development and experimentation. The team concludes: "Results show that sound sources can be localized and tracked robustly and accurately while rotating the platform and that the proposed method outperforms standard methods at reconstructing the signals."

To learn more about the specific calculations and algorithms used during testing, please see the links under 'More information' below.

More information:

Spresense