IVML  

G. Caridakis, G. Castellano, L. Kessous, A. Raouzaiou, L. Malatesta, S. Asteriadis, K. Karpouzis,
Expressive faces, gestures and speech in multimodal affective analysis
in C. Boukis, A. Pnevmatikakis and L. Polymenakos (eds.), Artificial Intelligence and Innovations: from Theory to Applications, pp 375-388
ABSTRACT
This work presents a multimodal approach that integrates information from facial expressions, body movement and gestures, and speech. A Bayesian classifier was trained and tested on a multimodal corpus covering eight emotions and ten subjects, validating the emotion classification approach. Individual classifiers were first trained for each modality in a unimodal setting; the data were then fused both at the feature level and at the decision level in a multimodal setting. Fusing the multimodal data significantly increased recognition rates compared with the unimodal systems. Furthermore, fusion performed at the feature level outperformed fusion performed at the decision level.
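The two fusion strategies compared in the abstract can be sketched as follows. This is not the authors' code: it uses synthetic two-modality data (stand-ins for the facial, gesture, and speech features of the paper's corpus) and a minimal Gaussian naive Bayes classifier, with four classes instead of the paper's eight emotions. Feature-level fusion concatenates the per-modality feature vectors and trains one classifier; decision-level fusion trains one classifier per modality and combines their posterior scores.

```python
# Sketch (not the authors' implementation): feature-level vs decision-level
# fusion with a Gaussian naive Bayes classifier on synthetic "modalities".
import numpy as np

rng = np.random.default_rng(0)

class GaussianNB:
    """Minimal Gaussian naive Bayes classifier."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def log_proba(self, X):
        # log p(c) + sum_d log N(x_d | mu_cd, var_cd), up to a constant
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return ll + np.log(self.prior)

    def predict(self, X):
        return self.classes[self.log_proba(X).argmax(axis=1)]

# Two synthetic modalities (hypothetical stand-ins for, e.g., face and
# speech feature vectors); class means differ so both are informative.
n, n_classes = 400, 4
y = rng.integers(0, n_classes, n)
face = rng.normal(y[:, None], 1.0, (n, 5))
speech = rng.normal(y[:, None], 1.2, (n, 3))

train, test = slice(0, 300), slice(300, n)

# Feature-level fusion: concatenate features, train a single classifier.
fused = np.hstack([face, speech])
feat_acc = (GaussianNB().fit(fused[train], y[train])
            .predict(fused[test]) == y[test]).mean()

# Decision-level fusion: per-modality classifiers, sum of log-posteriors.
clf_f = GaussianNB().fit(face[train], y[train])
clf_s = GaussianNB().fit(speech[train], y[train])
scores = clf_f.log_proba(face[test]) + clf_s.log_proba(speech[test])
dec_acc = (clf_f.classes[scores.argmax(axis=1)] == y[test]).mean()

print(f"feature-level fusion accuracy:  {feat_acc:.2f}")
print(f"decision-level fusion accuracy: {dec_acc:.2f}")
```

On real corpora the two strategies can rank either way; the paper reports feature-level fusion performing better on its eight-emotion, ten-subject data.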
18 November, 2007

© The Image, Video and Multimedia Systems Laboratory - v1.12