Hear your avatar laugh like you on your PC

London, July 12: It won't be long before your computer avatar can break into laughter after a good joke, or sneeze under a cold spell, just as you do, thanks to new software with the built-in ability to recognise "non-linguistic" sounds, such as laughter, and generate an appropriate facial animation sequence.

While animated characters are already "learning" to lip-sync to human speech, the new software could improve the quality of web-based avatars or computer-animated movies.

However, that is only one side of the picture: computers are not yet equipped to simulate the facial expressions linked to other sounds, such as a laugh, cry, yawn or sneeze.

But now, Darren Cosker at the University of Bath, UK, and Cathy Holt at the University of Cardiff, UK, have developed software that can automatically recognise some of these vocalisations and generate appropriate animation sequences.

Using optical motion capture, the researchers measured the facial expressions of four participants as they performed a series of laughs, sobs, sneezes and yawns, and also recorded the participants' voices during the performances.

Later, they developed software that could link the key audio features of each non-linguistic vocalisation with the relevant facial motion-capture data, which was used to animate a standardised facial model.

The result is a software model which, when played a new laugh or cry, can automatically animate an appropriate avatar.
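The paper's actual method is not described in detail here, but the idea of pairing audio features of a vocalisation with recorded facial motion-capture frames can be illustrated with a rough sketch. Everything below, including the feature choice and the class and function names, is a hypothetical stand-in, not the researchers' code; it simply retrieves, for each frame of a new laugh or cry, the recorded face pose whose audio descriptor is most similar.

```python
# Hypothetical sketch, not the authors' implementation: map crude per-frame
# audio descriptors of a non-linguistic vocalisation to facial motion-capture
# poses by nearest-neighbour lookup over previously recorded training pairs.
import numpy as np

def audio_features(frame: np.ndarray) -> np.ndarray:
    """Very simple per-frame descriptor: log-energy plus spectral centroid.
    Stands in for whatever 'key audio features' the real system extracts."""
    spectrum = np.abs(np.fft.rfft(frame))
    energy = np.log(np.sum(spectrum ** 2) + 1e-8)
    centroid = np.sum(np.arange(len(spectrum)) * spectrum) / (np.sum(spectrum) + 1e-8)
    return np.array([energy, centroid])

class VocalisationAnimator:
    """Stores (audio feature, face pose) pairs from motion-capture sessions and,
    for a new vocalisation, returns the closest recorded face pose per frame."""

    def __init__(self):
        self.features = []   # one descriptor per training audio frame
        self.poses = []      # matching facial marker positions, flattened

    def add_training_frame(self, audio_frame, face_pose):
        self.features.append(audio_features(audio_frame))
        self.poses.append(np.asarray(face_pose, dtype=float))

    def animate(self, audio_frames):
        """Return a sequence of face poses, one per incoming audio frame."""
        feats = np.stack(self.features)
        out = []
        for frame in audio_frames:
            f = audio_features(frame)
            idx = np.argmin(np.linalg.norm(feats - f, axis=1))
            out.append(self.poses[idx])
        return out

# Usage with dummy data: 100 training frames of 512 audio samples each, paired
# with 30 flattened 3-D marker positions (90 numbers per pose).
rng = np.random.default_rng(0)
anim = VocalisationAnimator()
for _ in range(100):
    anim.add_training_frame(rng.standard_normal(512), rng.standard_normal(90))
poses = anim.animate([rng.standard_normal(512) for _ in range(25)])
```

A real system would use richer audio features and a learned mapping rather than raw lookup, but the sketch shows the basic pipeline the article describes: recorded voice in, recorded facial motion out.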

"Providing a person laughs with this standard structure, the computer can take their voice and create an animation sequence," New Scientist quoted Cosker, as saying.

However, he said the technique still has limitations: a loud guffaw produces a different sound from a snigger, and the software cannot yet cope with that level of variation.

"There is also some ambiguity in the audio. One person''s laugh can sound similar to another person''s crying. In terms of classifying actions on the basis of audio alone, we still need to do more work, said Cosker. 

The researchers presented their work at the 8th International Symposium on Computer Methods in Biomechanics and Biomedical Engineering held in Porto, Portugal. (ANI)
