
K. Karpouzis, S. Kollias
Facial Animation and Affective Human-Computer Interaction
B. Furht (ed.), Encyclopedia of Multimedia, pp. 246-251, Springer US, 2006.
ABSTRACT
Although everyday human-to-human communication is thought to rely mainly on vocal and lexical content, people base much of their expressive and cognitive ability on facial expressions and body gestures. Related research in both the analysis and synthesis fields attempts to recreate the way the human mind recognizes emotion. Since this process is inherently multimodal, robust results require taking into account features such as speech, facial and hand gestures, and body pose, as well as the interaction between them. In the case of speech, features can come from both linguistic and paralinguistic analysis; facial and body gestures, on the other hand, convey messages in a much more expressive and definite manner than wording, which can be misleading or ambiguous, especially when users are not visible to each other. While considerable effort has been invested in examining these aspects of human expression individually, recent research has shown that even single-modality analysis benefits from taking multimodal information into account.
09 May, 2006
K. Karpouzis, S. Kollias, "Facial Animation and Affective Human-Computer Interaction", B. Furht (ed.), Encyclopedia of Multimedia, pp. 246-251, Springer US, 2006.
