Oxford Develops AI System for Significantly Improved Lip-reading
Ron Perillo / 1 year ago
Using thousands of hours of BBC News programmes, including Breakfast, Newsnight, Question Time and more, scientists at Oxford have developed an artificial intelligence system that can lip-read better than humans. Developed in collaboration with Google’s DeepMind AI division, the system, dubbed “Watch, Attend and Spell”, now boasts a 50% lip-reading hit rate. In comparison, professional lip-readers shown the same tests achieved an accuracy of only 12%.
The neural network, which combines speech recognition and image recognition algorithms, built a 17,500-word vocabulary by examining 118,000 sentences in the clips. Since it is fed mostly news programmes, it has learned the likelihood of certain words following others within a given topic, such as “minister” after “prime”. However, this also means it is limited and cannot recognise many words that are not spoken by newsreaders.
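The “minister after prime” association described above is essentially a bigram language model: the system learns, from its training transcripts, how often each word follows another. As a rough, hypothetical illustration only (the mini-corpus and function names below are invented and are not part of the Oxford system), such word-following statistics can be sketched like this:

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus standing in for the broadcast transcripts;
# the real system was trained on 118,000 sentences.
corpus = [
    "the prime minister spoke today",
    "the prime minister answered questions",
    "the prime example was clear",
]

# Count how often each word follows another (bigram counts).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word` and its conditional probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(most_likely_next("prime"))  # prints ('minister', 0.6666666666666666)
```

Because the counts come entirely from the training material, a word never seen after “prime” gets zero probability, which mirrors the limitation noted above: vocabulary outside the newsreaders’ speech is effectively invisible to the system.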
As good as the system is now, the researchers say a great deal of work is still needed before it can be put to practical use. Nevertheless, many groups, such as advocates for the hearing impaired, are very excited about the development.
“AI lip-reading technology would be able to enhance the accuracy and speed of speech to text,” says Jesal Vishnuram, Action on Hearing Loss technology research manager. “This would help people with subtitles on TV, and with hearing in noisy surroundings.”
The next objective for the Oxford researchers is to make the system work in real time; for now, it can only operate on full sentences from recorded video. According to Joon Son Chung, a doctoral student at Oxford University’s Department of Engineering, this is actually a simpler task than refining the accuracy of the AI system, so it is not as challenging as it sounds.