There are several closely linked areas involving speech, language and music that are covered within the Laboratory. For many years, we have conducted fundamental research on speech and language processing algorithms (e.g. speech recognition in noise, formulaic language modelling, language processing for speech synthesis) and developed applications of speech processing (e.g. call-routing, recognition of speech transmitted over VoIP, recognition of dysarthric speech). A recent application area of interest is distributed speech recognition (DSR), an emerging technology in which recognition is divided between a client and a server, and we have been active in defining the international DSR standard. We have developed methods of estimating speech information lost in transmission across networks and of compensating for this loss in the recognition process.
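The text does not specify how the lost information is compensated; a minimal sketch of one common baseline, linear interpolation of feature frames (e.g. MFCC vectors) dropped in network transmission, is shown below. The function name and representation (a `None` entry marking a lost frame) are assumptions for illustration, not the Laboratory's actual method.

```python
def interpolate_lost_frames(frames):
    """Fill in feature frames lost in transmission.

    frames: list of feature vectors (lists of floats), with None marking
    a frame lost on the network. Each run of lost frames is replaced by
    linear interpolation between its nearest received neighbours; runs
    at either edge repeat the nearest received frame.
    """
    n = len(frames)
    out = list(frames)
    i = 0
    while i < n:
        if out[i] is None:
            # Find the end of this run of lost frames.
            j = i
            while j < n and out[j] is None:
                j += 1
            left = out[i - 1] if i > 0 else None   # last received frame
            right = out[j] if j < n else None      # next received frame
            for k in range(i, j):
                if left is None:
                    out[k] = list(right)           # lost at the start
                elif right is None:
                    out[k] = list(left)            # lost at the end
                else:
                    # Interpolation weight grows across the gap.
                    t = (k - i + 1) / (j - i + 1)
                    out[k] = [(1 - t) * a + t * b
                              for a, b in zip(left, right)]
            i = j
        else:
            i += 1
    return out
```

In practice a DSR back-end would apply a scheme like this (or a model-based estimate) to the decoded feature stream before passing it to the recogniser; interpolation is only a simple stand-in for the estimation methods the text refers to.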
We were one of the first groups to research audio-visual speech synthesis, and have recently begun an EPSRC-funded project in collaboration with the University of Surrey to investigate automatic lip-reading. The 2009 Auditory-Visual Speech Processing (AVSP) conference will take place at UEA, Norwich, in September 2009.
Our music processing systems, developed in conjunction with IMIRSEL at the University of Illinois, won the Genre Classification, Artist Identification and Classical Composer Identification tasks at the 2007 MIREX competition. The algorithms behind these systems have been patented, and we are currently commercialising them, with venture-capital funding, as FindTunes.