Speech-Based Gesture Generation Using Deep Learning
One of the unsolved problems in research on virtual agents is the generation of high-fidelity body language and gestures that correspond to the agent's utterances.
Body language, which includes gestures, facial expressions, body posture, and movement, constitutes the non-verbal portion of the communication process and is an essential component of effective communication. Evolutionarily, spoken language emerged as a secondary aid to body language.
The project is a collaboration between Dr. Pietroszek of the Institute for IDEAS, Dr. Xiao of the Department of Computer Science, and Dr. Guetl of the University of Graz in Austria.