Semantic feature mining for video event understanding

X. Yang, T. Zhang, C. Xu. ACM Transactions on Multimedia Computing, Communications, and Applications, 2016.
Content-based video understanding is extremely difficult due to the semantic gap between low-level vision signals and the various semantic concepts (object, action, and scene) in videos. Although video feature extraction has made significant progress, most previous methods rely only on low-level features, such as appearance and motion. Recently, visual feature extraction has improved significantly with machine learning, especially deep learning. However, little work has focused on extracting semantic features directly from videos. The goal of this article is to exploit unlabeled videos, aided by their text descriptions, to learn an embedding function that extracts more effective semantic features from videos when only a few labeled samples are available for video recognition. To achieve this goal, we propose a novel embedding convolutional neural network (ECNN). We evaluate our algorithm by comparing its performance with several popular state-of-the-art methods on three challenging benchmarks. Extensive experimental results show that the proposed ECNN consistently and significantly outperforms the existing methods.
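The abstract does not specify the ECNN architecture or its training objective. As a rough illustration of the general idea of learning a shared video-text embedding space from videos paired with descriptions, here is a minimal PyTorch sketch. It assumes precomputed per-video CNN features and text vectors as inputs; all names, dimensions, and the contrastive loss are illustrative assumptions, not the authors' ECNN.

```python
# Hypothetical two-tower video-text embedding (illustrative only; the
# abstract does not describe the authors' actual ECNN architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    def __init__(self, video_dim=4096, text_dim=300, embed_dim=256):
        super().__init__()
        # Project each modality into a shared semantic space.
        self.video_proj = nn.Sequential(
            nn.Linear(video_dim, 1024), nn.ReLU(), nn.Linear(1024, embed_dim))
        self.text_proj = nn.Sequential(
            nn.Linear(text_dim, 1024), nn.ReLU(), nn.Linear(1024, embed_dim))

    def forward(self, video_feats, text_feats):
        # L2-normalize so cosine similarity reduces to a dot product.
        v = F.normalize(self.video_proj(video_feats), dim=-1)
        t = F.normalize(self.text_proj(text_feats), dim=-1)
        return v, t

def contrastive_loss(v, t, temperature=0.07):
    # Symmetric InfoNCE: matched (video, description) pairs lie on the
    # diagonal of the similarity matrix.
    logits = v @ t.T / temperature
    labels = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))

# Toy usage: random tensors stand in for real video/text features.
model = JointEmbedding()
v, t = model(torch.randn(8, 4096), torch.randn(8, 300))
loss = contrastive_loss(v, t)
loss.backward()
```

Once trained this way, the video tower can be used on its own as a semantic feature extractor for downstream recognition with few labeled samples, which matches the stated goal of the article.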