Video captioning aims to generate natural language descriptions of video content, a task in which representation learning plays a crucial role. Existing methods are developed mainly within the supervised learning framework, comparing the generated caption word by word against the ground-truth text without fully exploiting linguistic semantics. In this work, we propose a hierarchical modular network that bridges video representations and linguistic semantics at three levels before generating captions. In particular, the
hierarchy is composed of: (I) Entity level, which highlights objects that are
most likely to be mentioned in captions. (II) Predicate level, which learns the
actions conditioned on highlighted objects and is supervised by the predicate
in captions. (III) Sentence level, which learns the global semantic
representation and is supervised by the whole caption. Each level is
implemented by one module. Extensive experiments show that the proposed method performs favorably against state-of-the-art models on two widely used benchmarks, achieving CIDEr scores of 104.0% on MSVD and 51.5% on MSR-VTT.
Authors: Hanhua Ye, Guorong Li, Yuankai Qi, Shuhui Wang, Qingming Huang, Ming-Hsuan Yang
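
To make the three-level design concrete, below is a minimal sketch of how such a hierarchy could be wired together in PyTorch. It is an illustration only: the module names, feature dimensions, the softmax-based object highlighting, and the GRU decoder are all assumptions for the sketch, not the paper's actual implementation.

# Minimal sketch of a three-level hierarchical modular network for video
# captioning. All names, dimensions, and wiring are illustrative assumptions;
# the paper's actual architecture may differ.
import torch
import torch.nn as nn


class EntityModule(nn.Module):
    """Entity level: scores per-video object features and re-weights those
    most likely to be mentioned in the caption."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, obj_feats):  # obj_feats: (B, N_obj, D)
        weights = torch.softmax(self.score(obj_feats).squeeze(-1), dim=-1)
        return weights.unsqueeze(-1) * obj_feats  # highlighted objects


class PredicateModule(nn.Module):
    """Predicate level: learns an action representation conditioned on the
    highlighted objects (supervised by the predicate in the caption)."""
    def __init__(self, dim):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, entities, motion_feats):  # (B, N_obj, D), (B, D)
        pooled = entities.mean(dim=1)
        return torch.relu(self.fuse(torch.cat([pooled, motion_feats], dim=-1)))


class SentenceModule(nn.Module):
    """Sentence level: builds a global semantic representation of the video
    (supervised by the whole caption)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, predicate, global_feats):  # (B, D), (B, D)
        return torch.relu(self.proj(torch.cat([predicate, global_feats], dim=-1)))


class HierarchicalModularNetwork(nn.Module):
    """Chains the three modules, then rolls out a stand-in caption decoder."""
    def __init__(self, dim=512, vocab_size=10000):
        super().__init__()
        self.entity = EntityModule(dim)
        self.predicate = PredicateModule(dim)
        self.sentence = SentenceModule(dim)
        self.decoder = nn.GRUCell(dim, dim)   # placeholder caption decoder
        self.vocab = nn.Linear(dim, vocab_size)

    def forward(self, obj_feats, motion_feats, global_feats, steps=20):
        ent = self.entity(obj_feats)
        pred = self.predicate(ent, motion_feats)
        sent = self.sentence(pred, global_feats)
        h, logits = sent, []
        for _ in range(steps):                # fixed-length rollout skeleton
            h = self.decoder(sent, h)
            logits.append(self.vocab(h))
        return torch.stack(logits, dim=1)     # (B, steps, vocab_size)


if __name__ == "__main__":
    net = HierarchicalModularNetwork()
    out = net(torch.randn(2, 8, 512), torch.randn(2, 512), torch.randn(2, 512))
    print(out.shape)  # torch.Size([2, 20, 10000])

The point the sketch tries to capture is the structure stated in the abstract: each module produces a representation supervised at its own linguistic granularity (entities, predicate, full sentence), and the sentence-level representation conditions the caption decoder.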