IVML  

N. Tsapatsoulis, A. Raouzaiou, S. Kollias, R. Cowie and E. Douglas-Cowie
Emotion Recognition and Synthesis based on MPEG-4 FAPs
in MPEG-4 Facial Animation, Igor Pandzic, R. Forchheimer (eds), John Wiley & Sons, UK, 2002.
ABSTRACT
The framework of MPEG-4 hybrid coding of natural and synthetic data streams encompasses teleconferencing and tele-presence applications in which a synthetic proxy or a virtual agent can substitute for the actual user. Such agents can interact with each other, analysing textual input entered by the user as well as multi-sensory data, including human emotions, facial expressions and non-verbal speech. This not only enhances interactivity, by replacing single-media representations with dynamic multimedia renderings, but also improves human-computer interaction, allowing the system to adapt to the current needs and feelings of the user. Practical applications of this technology are expected in educational environments, 3D video conferencing and collaborative workplaces, online shopping and gaming, virtual communities and interactive entertainment. Facial expression synthesis and animation has gained much interest within the MPEG-4 framework; explicit Facial Animation Parameters (FAPs) have been dedicated to this purpose. However, FAP implementation remains an open research area. In this chapter we describe a method for generating emotionally enriched human-computer interaction, focusing on the analysis and synthesis of primary and intermediate facial expressions. To achieve this goal we utilize both MPEG-4 Facial Definition Parameters (FDPs) and FAPs. The contribution of the work is two-fold: it proposes a way of modelling primary expressions using FAPs, and it describes a rule-based technique for analysing both archetypal and intermediate expressions; for the latter we propose an innovative model generation framework. In particular, a relation is established between FAPs and the activation parameter proposed in classical psychological studies, extending the archetypal expression studies on which the computing community has concentrated.
The overall scheme leads to a parameterised approach to facial expression synthesis that is compatible with the MPEG-4 standard and can be used for emotion understanding.
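The parameterised approach described above can be illustrated with a minimal sketch: an archetypal expression is represented as a profile of FAP values, and an intermediate (lower-intensity) expression of the same family is derived by scaling that profile with an activation level. The FAP numbers and displacement values below are hypothetical placeholders for illustration only, not the profiles defined in the chapter.

```python
def scale_profile(profile, activation):
    """Derive an intermediate expression by scaling an archetypal FAP
    profile with an activation level in [0, 1] (a simplifying assumption;
    the chapter's actual mapping may differ)."""
    return {fap: value * activation for fap, value in profile.items()}

# Hypothetical FAP profile for an archetypal expression, keyed by FAP
# number, with values as normalised displacements (illustrative only).
joy_profile = {12: 120, 13: 120, 33: 80, 34: 80}

# A lower-activation variant of the same expression family.
mild_joy = scale_profile(joy_profile, 0.4)
```

In this sketch the activation parameter acts as a single scalar controlling expression intensity, which is one simple way to realise the FAP-activation relation the abstract mentions.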
31 May 2002
