Our research focuses on the hypothesis that the initial stages of auditory processing are largely concerned with converting incoming sound into an auditory image, which subsequently serves as the basis for more central processes like stream segregation and source identification. The auditory image is assumed to be the first representation of a sound of which we can be aware. Whereas the auditory processing involved in auditory image formation is essentially data-driven signal processing, the processing of the auditory image by more central systems has more the character of pattern recognition, where contextual information and feedback from higher centres play a larger role.
The auditory image hypothesis suggests that image construction
- takes place in the sub-cortical auditory pathway (brainstem and thalamus),
- is the product of the main temporal integration process in hearing, which stabilizes our perception of communication sounds, and
- initiates the process of figure/ground separation in hearing.
The division of auditory function into image construction and image processing, and the specification of a location for image construction, together provide a framework for understanding the function of the modules in the auditory pathway up to and including auditory cortex (Heschl's gyrus and planum temporale). Research at the CNBH is a combination of physiological experiments and single-cell modelling at the micro level, and perceptual experiments with functional modelling and brain imaging at the macro level.
Software Packages for AIM
aim2009: a real-time version of AIM written in C++ by Tom Walters.
aim2006: a MATLAB version of AIM with a GUI to facilitate auditory model development, written by Stefan Bleeck.
AIM is a time-domain model of auditory processing intended to simulate the auditory images we hear when presented with complex sounds such as music, speech and animal calls.* The auditory image is constructed in three stages:
- An auditory filterbank is used to simulate the basilar membrane motion (BMM) produced by a sound in the cochlea.
- A bank of haircell simulators converts the BMM into a simulation of the Neural Activity Pattern (NAP) produced at the level of the auditory nerve or cochlear nucleus.
- Finally, a form of Strobed Temporal Integration (STI) is applied to each channel of the NAP to stabilize any repeating patterns in the NAP and convert it into a simulation of our auditory image of the sound.
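The three stages above can be sketched in outline. The following is a minimal, hypothetical Python sketch, not the CNBH implementation: it uses a single gammatone channel to stand in for the auditory filterbank, half-wave rectification with compression as a crude hair-cell stage, and a toy local-maximum strobe detector for the temporal integration. All function names, parameter values and the strobe criterion are illustrative assumptions.

```python
import numpy as np

def gammatone_channel(signal, fs, cf):
    """Stage 1 (sketch): one gammatone filter, applied by convolution with
    its impulse response, approximating basilar membrane motion (BMM) at
    centre frequency cf."""
    erb = 24.7 * (4.37 * cf / 1000.0 + 1.0)    # equivalent rectangular bandwidth
    b = 1.019 * erb
    t = np.arange(0, 0.05, 1.0 / fs)           # 50-ms impulse response
    ir = t**3 * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * cf * t)
    ir /= np.max(np.abs(ir))
    return np.convolve(signal, ir)[:len(signal)]

def haircell(bmm):
    """Stage 2 (sketch): half-wave rectification plus compression, giving a
    crude Neural Activity Pattern (NAP) for the channel."""
    return np.maximum(bmm, 0.0) ** 0.5

def strobed_integration(nap, fs, frame=0.035):
    """Stage 3 (sketch): treat prominent local maxima as strobe points and
    average the NAP segments that follow them, so a repeating pattern in the
    NAP is stabilised into a static image."""
    n = int(frame * fs)
    strobes = [i for i in range(1, len(nap) - n - 1)
               if nap[i] > nap[i - 1] and nap[i] >= nap[i + 1]
               and nap[i] > 0.5 * nap.max()]
    if not strobes:
        return np.zeros(n)
    return np.mean([nap[s:s + n] for s in strobes], axis=0)

# A 100-Hz click train, roughly periodic like a voiced communication sound.
fs = 16000
click_train = np.zeros(int(0.2 * fs))
click_train[::fs // 100] = 1.0
bmm = gammatone_channel(click_train, fs, cf=1000.0)
nap = haircell(bmm)
image = strobed_integration(nap, fs)   # one channel of a stabilised image
```

Because the input repeats every 10 ms, the averaged segments reinforce each other and the resulting single-channel image shows the same 10-ms pattern as a stable shape; a full model would run many channels in parallel and use a more principled strobe criterion.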
The main concepts of AIM are described in:
Patterson, R. D., Robinson, K., Holdsworth, J., McKeown, D., Zhang, C. and Allerhand, M. (1992). "Complex sounds and auditory images," in: Auditory Physiology and Perception, Proceedings of the 9th International Symposium on Hearing, Y. Cazals, L. Demany and K. Horner (eds), Pergamon, Oxford, 429-446.
Patterson, R. D. (1994b). "The sound of a sinusoid: Time-interval models," J. Acoust. Soc. Am. 96, 1419-1428.
Patterson, R. D., Allerhand, M. H. and Giguere, C. (1995). "Time-domain modelling of peripheral auditory processing: A modular architecture and a software platform," J. Acoust. Soc. Am. 98, 1890-1894.
Patterson, R. D. (2000). "Auditory images: How complex sounds are represented in the auditory system," J. Acoust. Soc. Japan (E) 21 (4), 183-190.
aim2000: AIM as an 'application' of DSAM/AMS, written by Lowel O'Mard at the CNBH with Roy Patterson.
aim1992: the original version of AIM for historical reference, written by John Holdsworth, Paul Mason and Mike Allerhand at the MRC APU with Roy Patterson. It is this version of AIM that is described in Patterson et al. (1995).