Advanced sound processing applications, particularly those based on array processing, are critically sensitive to the environment’s acoustic response, because their design does not account for the complex propagation phenomena that produce it. Reverberation is usually treated as a liability to take countermeasures against, yet nature teaches us that the information carried by the acoustic interaction with the environment can become a valuable asset, one that enables complex navigational tasks and more. Turning the acoustic response from a liability into an asset requires a thorough understanding of propagation phenomena and an accurate acoustic model of the environment. This can be obtained by listening to how the environment renders controlled sound emissions, as long as such emissions exhibit a temporal as well as a spatial “structure”.
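The idea of probing the environment with a controlled emission can be illustrated with a toy sketch (not part of the project; the room response, probe length and tap positions below are hypothetical, and NumPy is assumed): when the emitted probe is white noise, cross-correlating the recorded signal with the probe recovers an estimate of the room’s impulse response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "room": a short FIR impulse response
# (direct path plus two discrete reflections).
h_true = np.zeros(64)
h_true[[0, 20, 45]] = [1.0, 0.5, 0.25]

# Controlled emission: a long white-noise probe signal.
probe = rng.standard_normal(200_000)

# What a microphone would record: the probe filtered by the room.
recorded = np.convolve(probe, h_true)[: len(probe)]

# For a white probe, the cross-correlation of recording and probe,
# normalized by the probe energy, estimates the impulse response.
n = len(h_true)
h_est = np.array(
    [np.dot(recorded[k:], probe[: len(probe) - k]) for k in range(n)]
) / np.dot(probe, probe)

print(np.max(np.abs(h_est - h_true)))  # small residual estimation error
```

Real measurements typically use structured probes such as swept sines or maximum-length sequences rather than plain noise, but the principle is the same: a known, well-conditioned emission makes the environment’s response observable.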
The SCENIC project aims to develop a comprehensive set of methodologies and analysis tools that will enable acoustic systems to become aware of their own characteristics and geometry, as well as those of the environment in which they operate, and will enable advanced space-time processing solutions to take advantage of the additional information provided by the environment’s acoustic response. A key point of the project is that, in order to achieve this awareness, sensors and sources will be used together in a synergistic fashion, while taking into account requirements of flexibility, cost and real-time operation.
The Consortium will focus on three research directions: two based on acoustic wavefield decomposition (modal and geometric) and one based on a point-to-point representation (channel identification). The joint use of these methodologies in space-time acoustic processing will pave the way to novel applications that go beyond what is possible today.
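The point-to-point (channel identification) direction can be sketched with a standard adaptive-filtering scheme; the snippet below uses a normalized LMS (NLMS) filter, a common but here illustrative choice, with a hypothetical 32-tap channel and a noiseless observation, assuming NumPy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical point-to-point channel to identify: a decaying 32-tap FIR.
L = 32
h_true = rng.standard_normal(L) * np.exp(-0.15 * np.arange(L))

x = rng.standard_normal(50_000)        # known source signal
d = np.convolve(x, h_true)[: len(x)]   # signal observed at the sensor

# NLMS adaptive filter: iteratively adjusts the tap weights w so that
# w applied to the input reproduces the observed output.
mu, eps = 0.5, 1e-8
w = np.zeros(L)
for t in range(L, len(x)):
    u = x[t - L + 1 : t + 1][::-1]     # most recent L input samples
    e = d[t] - w @ u                   # a-priori estimation error
    w += mu * e * u / (u @ u + eps)    # normalized gradient step

print(np.max(np.abs(w - h_true)))      # w converges toward h_true
```

Unlike the wavefield-decomposition directions, which model the sound field over a region of space, this representation characterizes only the transfer between one source and one sensor, which is what makes it directly identifiable from source and sensor signals alone.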