The Audio and Speech Group at UEA currently consists of four faculty members, two Research Associates and ten PhD students.
The group has been active for many years in fundamental research into speech processing algorithms (e.g. speech recognition in noise, speech enhancement, speaker adaptation, confidence measures for speech recognition) and in the development of speech processing applications (e.g. call-routing, recognition of speech transmitted over VoIP, dysarthric speech).
More recently, we have been investigating how visual information can be incorporated into several aspects of speech and audio processing.
An important current focus is research into automatic lip-reading algorithms, which has been funded by the EPSRC and the Home Office. We are also interested in exploiting visual speech information to improve traditionally audio-only methods of speech enhancement and speaker separation, and in combining audio and visual information to understand events such as sports games (with EPSRC funding).
We have also been active in developing the use of avatars for sign language. Our research into avatar speech animation is producing avatars capable of expressive speech. We collaborate with Apple and Disney Research, as well as with many small companies.