Time: 6:30pm
Venue: Mason Lecture Theatre, Francis Bancroft Building, Mile End Campus
Sound and music surround us all the time. Often we hear all this without really noticing: it just forms part of the background to our lives. Listening to sounds in the way humans do is something computers find very hard, but we are now beginning to build sound processing methods that can help us. In this lecture I will introduce some of these techniques, which can separate out different sound sources from a mixture, follow the notes and the beats in a piece of music, or show us sound in new visual ways. These “machine listening” algorithms offer the potential to make sense of the huge amount of sound and music in our digital world: helping us to find the music we want in the collection of tracks on our iPods, to create music in new ways, or to analyze non-musical sounds like heartbeats or birdsong.
Mark Plumbley joined Queen Mary, University of London, in 2002, becoming Professor of Machine Learning and Signal Processing in 2008, and he currently leads the Centre for Digital Music. His research interests include the analysis of audio and music signals, covering beat tracking, automatic music transcription and source separation, using techniques such as neural networks, information theory, and sparse representations. Mark is an EPSRC Leadership Fellow and an occasional choral singer.