IVML  

K. Rapantzikos, N. Tsapatsoulis, Y. Avrithis and S. Kollias
A Bottom-Up Spatiotemporal Visual Attention Model for Video Analysis
IET Image Processing, vol. 1, no. 2, pp. 237-248, Jun 2007
ABSTRACT
The Human Visual System (HVS) has the ability to fixate quickly on the most informative (salient) regions of a scene, thereby reducing the inherent visual uncertainty. Computational visual attention (VA) schemes have been proposed to account for this important characteristic of the HVS. In this paper, a video analysis framework based on a spatiotemporal VA model is presented. We propose a novel scheme for generating saliency in video sequences that takes into account both the spatial extent and the dynamic evolution of regions. Towards this goal, we extend a common image-oriented computational model of saliency-based visual attention to handle spatiotemporal analysis of video in a volumetric framework. The main claim is that attention acts as an efficient preprocessing step for obtaining a compact representation of the visual content in the form of salient events/objects. The model has been implemented, and qualitative as well as quantitative examples illustrating its performance are shown.
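The core idea of the paper, treating the video as a 3D (time, height, width) volume and measuring how strongly each voxel differs from its spatiotemporal neighbourhood, can be illustrated with a minimal sketch. This is not the authors' implementation; it is a simplified center-surround difference over a video volume, with the Gaussian scales (`center_sigma`, `surround_sigma`) and the normalization chosen here purely for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatiotemporal_saliency(volume, center_sigma=1.0, surround_sigma=4.0):
    """Toy center-surround saliency on a video volume shaped (t, y, x).

    Smooths the volume with a fine ("center") and a coarse ("surround")
    3D Gaussian and takes their absolute difference, so voxels that
    stand out from their spatiotemporal surroundings score high.
    """
    v = volume.astype(np.float64)
    center = gaussian_filter(v, sigma=center_sigma)
    surround = gaussian_filter(v, sigma=surround_sigma)
    s = np.abs(center - surround)
    # Normalize to [0, 1] so maps from different volumes are comparable.
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else s

# Synthetic example: a dark static background with one bright blob
# moving along the diagonal; the blob is the salient event.
T, H, W = 8, 32, 32
video = np.zeros((T, H, W))
for t in range(T):
    video[t, 10 + t, 10 + t] = 1.0

sal = spatiotemporal_saliency(video)
t_max, y_max, x_max = np.unravel_index(sal.argmax(), sal.shape)
# The saliency peak tracks the moving blob, i.e. it lies on (or next
# to) the diagonal position 10 + t of the frame where it occurs.
```

Because the surround scale also extends along the time axis, a region that moves relative to its neighbourhood keeps a high score even when its appearance is static, which is the intuition behind extending image saliency to a volumetric setting.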
29 June 2007

© The Image, Video and Multimedia Systems Laboratory - v1.12