Scientists from Queen Mary University of London have found a successful way of identifying bird sounds from large audio collections, which could be useful for expert and amateur bird-watchers alike.
The analysis used recordings of individual birds and of dawn choruses to identify characteristics of bird sounds. It took advantage of large datasets of sound recordings provided by the British Library Sound Archive and online sources such as the Dutch archive Xeno Canto.
Publishing in the journal PeerJ, the authors describe an approach that combines feature learning, an automatic analysis technique, with a classification algorithm to create a system that can identify which birds are present in a large collection of recordings.
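As a rough illustration of this kind of two-stage design (not the authors' exact pipeline), the idea is to learn a dictionary of short sound patterns from the recordings themselves, summarise each recording in terms of those patterns, and then train a standard classifier on the summaries. A minimal sketch, assuming Python with librosa and scikit-learn, and with placeholder file paths, labels and parameter choices:

import numpy as np
import librosa
from sklearn.cluster import MiniBatchKMeans
from sklearn.ensemble import RandomForestClassifier

def mel_frames(path, sr=22050, n_mels=40):
    """Load a recording and return its Mel-spectrogram frames (one row per time frame)."""
    y, sr = librosa.load(path, sr=sr)
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(S).T

# Placeholder training data: paths to labelled recordings and their species labels.
train_paths = ["rec1.wav", "rec2.wav"]
train_labels = ["species_a", "species_b"]

# 1. Feature learning: cluster spectrogram frames pooled from the training recordings,
#    so each cluster centre acts as a learned "acoustic word".
all_frames = np.vstack([mel_frames(p) for p in train_paths])
codebook = MiniBatchKMeans(n_clusters=100, random_state=0).fit(all_frames)

def encode(path):
    """Summarise a recording as a histogram of learned acoustic-word assignments."""
    assignments = codebook.predict(mel_frames(path))
    hist = np.bincount(assignments, minlength=codebook.n_clusters)
    return hist / max(hist.sum(), 1)

# 2. Classification: train a random forest on the per-recording summaries.
X_train = np.array([encode(p) for p in train_paths])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, train_labels)

# Predict the species present in a new recording (placeholder path).
print(clf.predict([encode("unknown_bird.wav")]))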
“Automatic classification of bird sounds is useful when trying to understand how many and what type of birds you might have in one location,” commented lead author Dr Dan Stowell from QMUL’s School of Electronic Engineering and Computer Science and Centre for Digital Music.
Dr Stowell was recently awarded a prestigious five-year fellowship from the Engineering and Physical Sciences Research Council (EPSRC) to develop computerised processes to detect multiple bird sounds in large sets of audio recordings.
“Birdsong has a lot in common with human language, even though it evolved separately. For example, many songbirds go through similar stages of vocal learning as we do, as they grow up, which makes them interesting to study. From them we can understand more about how human language evolved and social organisation in animal groups,” said Dr Stowell.
He added: “The attraction of fully automatic analysis is that we can create a really large evidence base to address these big questions.”
The classification system created by the authors performed well in a public contest based on thousands of recordings covering more than 500 bird species from Brazil. It was judged the best-performing audio-only classifier and placed second overall among entries from 10 research groups.
The researchers hope to drill down into more detail for their next project.
Dr Stowell says: “I'm working on techniques that can transcribe all the bird sounds in an audio scene: not just who is talking, but when, in response to whom, and what relationships are reflected in the sound, for example who is dominating the conversation.”
Want to know more? Read the paper.