Two different types of avatars were recently developed to improve communication and information transfer in computer and telecommunication applications for people with a hearing impairment.
The first is a computer animation of a person producing sign language. Information accessibility is improved when deaf computer users who have sign language as their 'mother tongue' have information presented in sign language. This can be a signed rendering of information that is already available as text on the screen, or additional information that explains what is visible. Whereas earlier types of signing avatars produced a resynthesis of what was signed by a human (thus reducing the stream of information to be sent over the communication line), the present avatars are able to sign messages that were never recorded: the sign, or the complete sign language message, is synthesised from input commands that describe it in terms of handshape, position with respect to the body, and direction of movement. Although automatic sign language production based on text-to-sign algorithms is the ultimate goal, at present these input commands still have to be provided by expert users of the system.
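The parametric description outlined above can be illustrated with a minimal sketch. All names here (SignSpec, to_avatar_commands, the parameter values and the command syntax) are hypothetical illustrations, not the actual interface of any signing-avatar system.

```python
from dataclasses import dataclass

@dataclass
class SignSpec:
    """Hypothetical description of one sign: the three parameter types
    mentioned in the text (handshape, body-relative position, movement)."""
    handshape: str
    location: str   # position with respect to the body
    movement: str   # direction of movement

def to_avatar_commands(signs):
    """Serialise a sequence of sign specifications into the kind of
    input commands an expert user would provide to the avatar."""
    return [f"SIGN {s.handshape} AT {s.location} MOVE {s.movement}"
            for s in signs]

# A two-sign message, expressed as parameters rather than recorded video.
message = [SignSpec("flat-hand", "chest", "outward"),
           SignSpec("index-finger", "chin", "forward")]
for command in to_avatar_commands(message):
    print(command)
```

The point of the sketch is the data flow: the message is transmitted and stored as a compact list of parameters, and the animation is generated from those parameters at the receiving end.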
The second type of avatar is an animation of a so-called 'talking head'. It is known that seeing a speaker makes speech recognition easier. In this application, which is used in telephone communication to support the hard-of-hearing conversation partner at the receiving end, a display shows an animated representation of a head. The animated mouth moves simultaneously with the speech produced by the (normal-hearing) person at the other end of the line. Speech analysis and recognition algorithms process the spoken words and send control parameters to the animated head, which shows the mouth movements of the normal-hearing speaker. As this analysis and recognition process takes time, the analogue speech signal that runs over the telephone line has to be somewhat delayed in order to synchronise with the image of the talking head.
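The synchronisation step can be sketched as a simple fixed delay line: the audio is held back by the number of frames the analysis pipeline needs, so that each audio frame is played alongside the mouth shape computed from it. This is a minimal sketch under that assumption; the class and its parameters are illustrative, not part of any actual talking-head system.

```python
from collections import deque

class DelayLine:
    """Delay audio frames by a fixed number of frames so the audible
    speech stays in step with the (later-arriving) mouth animation."""

    def __init__(self, delay_frames, silence=0):
        # Pre-fill with silence: the first `delay_frames` outputs are
        # silent while the recognition pipeline catches up.
        self.buffer = deque([silence] * delay_frames)

    def push(self, frame):
        """Accept one incoming audio frame, emit one delayed frame."""
        self.buffer.append(frame)
        return self.buffer.popleft()

# Assume the analysis and recognition of a frame takes 3 frame periods.
delay = DelayLine(delay_frames=3)
played = [delay.push(f) for f in [1, 2, 3, 4, 5]]
print(played)  # three silent frames, then the delayed speech: [0, 0, 0, 1, 2]
```

In the real application the delay would be chosen to match the measured latency of the speech analysis, so that mouth movement and sound coincide at the receiving end.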
The analysis and recognition algorithms are to a certain extent language-dependent, although they also include language-universal features. The speech recognition algorithms were therefore trained with different languages. Evaluation experiments demonstrated that, with this application, speech recognition improved through the supportive use of the talking head.