A computational framework for modelling active exploratory listening that assigns meaning to auditory scenes
The Two!Ears Auditory Model consists of several stages for modelling active human listening. These stages comprise not only classical signal-driven processing steps, such as those found in the Auditory Modeling Toolbox; the model also comes with a complete acoustic simulation framework for creating binaural ear signals for specified acoustic scenes. The classical auditory signal processing is further accompanied by a blackboard architecture that allows top-down processing and the inclusion of world knowledge in the model.
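As an illustration of the signal-driven part of the chain, the following minimal sketch requests an interaural-level-difference representation from a pair of ear signals via the Auditory Front-End's dataObject/manager interface described in the online documentation; the input file name is a placeholder, and property names may differ between releases:

```matlab
% Sketch of the signal-driven processing stage, assuming the
% dataObject/manager interface of the Two!Ears Auditory Front-End.
% The input file is a placeholder for any two-channel (binaural) recording.
[earSignals, fsHz] = audioread('binaural_scene.wav');

dataObj    = dataObject(earSignals, fsHz);  % container holding all signal representations
managerObj = manager(dataObj);              % instantiates processors and routes signals
managerObj.addProcessor('ild');             % request interaural level differences
managerObj.processSignal();                 % run the processing chain

% The computed representation can then be read back from the data object,
% e.g. dataObj.ild{1}, and handed over to knowledge sources on the blackboard.
```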
The current release of the model is available from its official release page. For extensive documentation on how to install and use the model, have a look at its online documentation.
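Getting started then amounts to adding the model's main folder to the Matlab path and starting it; the folder path below is only an example and has to be adjusted to where the downloaded release was extracted:

```matlab
% Minimal start-up sketch; the release folder name is an assumption.
addpath('~/TwoEars-1.0');  % main folder of the extracted release
startTwoEars;              % initialises the sub-modules of the Two!Ears model
```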
The model is being developed by the partners of the EU project Two!Ears:
- Audiovisual Technology Group, TU Ilmenau, Germany
- Neural Information Processing Group, TU Berlin, Germany
- Department of Electrical Engineering-Hearing Systems, Technical University of Denmark
- Institute of Communication Acoustics, Ruhr University Bochum, Germany
- The Institute for Intelligent Systems and Robotics, UPMC, France
- Robotics, Action and Perception Group, LAAS, France
- Institute of Communications Engineering, University Rostock, Germany
- Department of Computer Science, University of Sheffield, UK
- Human-Technology Interaction Group, Eindhoven University of Technology, Netherlands
- The Center for Cognition, Communication, and Culture, Rensselaer, USA
A list of all project publications can be found on the project homepage; additional material for some of them is provided in the papers repository.
Unless stated otherwise in individual files, the Two!Ears Auditory Model is licensed under the GNU General Public License, version 2; the parts in the RoboticPlatform and Tools folders are licensed under the BSD 2-Clause License.
If you are interested in getting involved in the development of the Two!Ears model, please visit the development documentation page.
This project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no. 618075.