Institute for IDEAS

Institute for Immersive Designs, Experiences, Applications, and Stories

Speech-Driven Gestures:

Speech-Based Gesture Generation Using Deep Learning.

One of the unsolved problems in research on virtual agents is the generation of high-fidelity body language and gestures that correspond to the agent's utterances.
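At its core, the task is a sequence-to-sequence mapping: per-frame speech features in, per-frame gesture motion out. The toy recurrent model below illustrates the shape of that mapping only; the layer sizes, joint count, and random weights are placeholders for illustration, not the trained architecture used in the project.

```python
import numpy as np

def generate_gestures(audio_features, rng=None):
    """Illustrative sketch: map a sequence of per-frame speech features
    (e.g. MFCC-like vectors) to per-frame joint rotations with a tiny
    recurrent network. Weights are random placeholders, not a trained model."""
    rng = np.random.default_rng(0) if rng is None else rng
    n_feat = audio_features.shape[1]
    n_hidden, n_joints = 32, 14                  # hypothetical sizes
    W_in = 0.1 * rng.standard_normal((n_feat, n_hidden))
    W_h = 0.1 * rng.standard_normal((n_hidden, n_hidden))
    W_out = 0.1 * rng.standard_normal((n_hidden, n_joints * 3))
    h = np.zeros(n_hidden)
    poses = []
    for x in audio_features:                     # one audio frame at a time
        h = np.tanh(x @ W_in + h @ W_h)          # hidden state carries speech context
        poses.append(h @ W_out)                  # 3 rotation angles per joint
    return np.stack(poses)                       # shape: (frames, n_joints * 3)

# 100 frames of 13-dimensional speech features
feats = np.random.default_rng(1).standard_normal((100, 13))
poses = generate_gestures(feats)
print(poses.shape)  # (100, 42)
```

In a real system the random weights would be learned from paired speech-and-motion data, and the output angles would drive a rigged virtual character.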

Body language, including gestures, facial expressions, body posture, and movement, constitutes the non-verbal portion of the communication process and is an essential component of effective communication. Evolutionarily, spoken language emerged as a secondary aid to body language.

The project is a collaboration between Dr. Pietroszek of the Institute for IDEAS, Dr. Xiao of the Department of Computer Science, and Dr. Gütl of the University of Graz in Austria.

Publications:

  1. Manuel Rebol, Christian Gütl, and Krzysztof Pietroszek. 2021. Real-time Gesture Animation Generation from Speech for Virtual Human Interaction. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA '21). Association for Computing Machinery, New York, NY, USA, Article 197, 1–4. https://doi.org/10.1145/3411763.3451554
  2. Manuel Rebol, Christian Gütl, and Krzysztof Pietroszek. 2021. Passing a Non-verbal Turing Test: Evaluating Gesture Animations Generated from Speech. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Lisboa, Portugal, 573–581. https://doi.org/10.1109/VR50410.2021.00082