Improving bag-of-visual-words model with spatial-temporal correlation for video retrieval.
WANG, L., SONG, D. and ELYAN, E., 2012. Improving bag-of-visual-words model with spatial-temporal correlation for video retrieval. In: Proceedings of the 21st ACM International Conference on Information and Knowledge Management, 29 October – 2 November 2012. New York, NY: ACM, pp. 1303-1312.
Most state-of-the-art approaches to Query-by-Example (QBE) video retrieval are based on the Bag-of-visual-Words (BovW) representation of visual content. This representation, however, ignores spatial-temporal information, which is important for similarity measurement between videos. Direct incorporation of such information into the video data representation for a large-scale data set is computationally expensive in terms of storage and similarity measurement. It is also static, regardless of how the discriminative power of visual words changes across queries. To tackle these limitations, in this paper we propose to discover the Spatial-Temporal Correlations (STC) imposed by the query example to improve the BovW model for video retrieval. The STC, in terms of spatial proximity and relative motion coherence between different visual words, is crucial for identifying the discriminative power of the visual words. We develop a novel technique to emphasize the most discriminative visual words for similarity measurement, and incorporate this STC-based approach into the standard inverted index architecture. Our approach is evaluated on the TRECVID 2002 and CC_WEB_VIDEO datasets for two typical QBE video retrieval tasks respectively. The experimental results demonstrate that it substantially improves the BovW model as well as a state-of-the-art method that also utilizes spatial-temporal information for QBE video retrieval.
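The abstract's core idea, re-weighting visual words per query and scoring candidates through a standard inverted index, can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the authors' implementation: the function names, the TF-IDF scoring, and the stc_weight values stand in for the paper's STC-derived discriminativeness measure, whose exact formulation is not given in this abstract.

# Hypothetical sketch of query-by-example retrieval over a BovW inverted
# index with per-query visual word re-weighting. The STC weights here are
# placeholders; the paper derives them from spatial proximity and relative
# motion coherence between visual words in the query example.

from collections import defaultdict
from math import log

def build_inverted_index(videos):
    """videos: {video_id: {visual_word: term_frequency}} -> postings lists."""
    index = defaultdict(list)
    for vid, bow in videos.items():
        for word, tf in bow.items():
            index[word].append((vid, tf))
    return index

def retrieve(query_bow, stc_weight, index, num_videos, top_k=10):
    """Score videos by a weighted TF-IDF dot product.

    stc_weight: {visual_word: weight} -- hypothetical per-query weights
    emphasizing visual words that are spatially/temporally coherent in
    the query example (higher = more discriminative for this query).
    """
    scores = defaultdict(float)
    for word, q_tf in query_bow.items():
        postings = index.get(word, [])
        if not postings:
            continue
        idf = log(1 + num_videos / len(postings))
        w = stc_weight.get(word, 1.0)          # weight of 1.0 = plain BovW
        for vid, tf in postings:
            scores[vid] += w * (q_tf * idf) * (tf * idf)
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

if __name__ == "__main__":
    videos = {
        "v1": {"w1": 3, "w2": 1},
        "v2": {"w2": 4, "w3": 2},
        "v3": {"w1": 1, "w3": 5},
    }
    index = build_inverted_index(videos)
    query = {"w1": 2, "w3": 1}
    # Suppose STC analysis found w1 highly coherent in the query example.
    stc = {"w1": 2.0, "w3": 0.5}
    print(retrieve(query, stc, index, num_videos=len(videos)))

One consequence of this design, which matches the abstract's claim: the spatial-temporal analysis only changes the per-query weights, so the stored index remains an ordinary inverted index and no spatial-temporal data needs to be kept per posting.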