Project

Project data

Musical Scene Analysis and Synthesis for Hearing-Impaired Listeners

Initiative: Freigeist Fellowships
Approval: 2 July 2019
Duration: 5 years

Project information

Musical sounds rarely occur alone. In fact, the interplay and mixing of instruments or voices is at the heart of music composition, performance, and production. Listeners separate polyphonic mixtures into foreground and background, or melody and accompaniment, through so-called auditory scene analysis. But what happens when the ears become imprecise, as is the case for hearing-impaired individuals? Is it still possible to hear out a solo violin in the midst of an orchestra? In contrast to speech perception, research on music perception has traditionally not addressed hearing loss. As the first large-scale research campaign on this topic, the project aims to explore what makes music listening difficult for hearing-impaired listeners and why current hearing aids transmit music only poorly, whether by Beethoven or The Beatles. Specifically, the project will characterize the scene analysis abilities of normal-hearing and hearing-impaired listeners using psychophysics and brain imaging, explore the acoustical determinants of musical sound clarity, and develop scene synthesis algorithms tailored to the needs of hearing-impaired listeners. By integrating methods from music psychology, psychophysics, signal processing, and computational neuroscience, the project will reveal groundbreaking insights into elementary principles of music listening and lay the foundations for future breakthroughs in hearing technology for music.

Project participants

  • Dr. Kai Siedenburg

    Universität Oldenburg
    Fakultät VI - Medizin u. Gesundheitswissenschaften
    Department für Medizinische Physik und Akustik
    Oldenburg

Open Access publications