
M. Pantic, G. Caridakis, E. Andre, J. Kim, K. Karpouzis, S. Kollias
Multimodal emotion recognition from low-level cues
P. Petta et al. (Eds.), Emotion-Oriented Systems, Cognitive Technologies, pp. 115–132, Springer
ABSTRACT
Emotional intelligence is an indispensable facet of human intelligence and one of the most important factors for a successful social life. Endowing machines with this kind of intelligence to enable affective human–machine interaction, however, is not an easy task. The task is further complicated by the fact that human beings interpret affective states by combining several modalities, since emotion affects almost all modes: audio-visual (facial expression, voice, gesture, posture, etc.), physiological (respiration, skin temperature, etc.), and contextual (goal, preference, environment, social situation, etc.). Compared to common unimodal approaches, multimodal emotion recognition raises a number of specific problems, especially concerning the architecture used to fuse the multimodal information. In this chapter, we first give a short review of these problems and then present research results for various multimodal architectures based on the combined analysis of facial expression, speech, and physiological signals. Finally, we introduce the design of an adaptive neural network classifier that can decide whether adaptation is necessary in response to environmental changes.
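The central technical theme of the chapter is the fusion of modality-specific cues. As a minimal illustrative sketch (not taken from the chapter itself), the Python snippet below shows one common decision-level (late) fusion scheme: each modality produces its own emotion-class probabilities, which are combined by a confidence-weighted average; the class labels, modality names, and weights are hypothetical.

import numpy as np

# Hypothetical emotion label set; the chapter's actual classes may differ.
CLASSES = ["anger", "joy", "sadness", "neutral"]

def fuse_decisions(modality_probs, weights=None):
    """Decision-level fusion: combine per-modality class probabilities
    with a weighted average and renormalise.

    modality_probs: dict mapping modality name -> probability vector over CLASSES
    weights:        optional dict mapping modality name -> reliability weight
                    (defaults to equal weighting)
    """
    if weights is None:
        weights = {m: 1.0 for m in modality_probs}
    fused = np.zeros(len(CLASSES))
    total = 0.0
    for modality, probs in modality_probs.items():
        w = weights.get(modality, 0.0)
        fused += w * np.asarray(probs, dtype=float)
        total += w
    fused /= total  # renormalise so the fused vector sums to 1
    return CLASSES[int(np.argmax(fused))], fused

# Purely illustrative numbers: facial analysis weighted more heavily
# than speech and physiological cues.
label, probs = fuse_decisions(
    {"face":   [0.10, 0.70, 0.10, 0.10],
     "speech": [0.20, 0.40, 0.20, 0.20],
     "physio": [0.25, 0.35, 0.20, 0.20]},
    weights={"face": 0.5, "speech": 0.3, "physio": 0.2},
)
print(label, probs)

Feature-level (early) fusion, by contrast, would concatenate the low-level cues from all modalities into a single vector before classification; the chapter compares architectures of both kinds.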
11 May, 2011
