Understanding Pixel-level 2D Image Semantics with 3D Keypoint Knowledge Engine
Pixel-level 2D object semantic understanding is an important topic in
computer vision and could help machines deeply understand objects (e.g.,
functionality and affordance) in daily life. However, most previous methods
train directly on correspondences in 2D images, which is end-to-end but
discards much of the information available in 3D space. In this paper, we
propose a new method that predicts image semantics in the 3D domain and then
projects them back onto 2D images to achieve pixel-level understanding. In
order to obtain reliable 3D semantic labels, which are absent in current image
datasets, we build a large-scale keypoint knowledge engine called KeypointNet,
which contains 103,450 keypoints and 8,234 3D models from 16 object
categories. Our method leverages the advantages of 3D vision and can
explicitly reason about object self-occlusion and visibility. We show that our
method gives comparable and even superior results on standard semantic
benchmarks.
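As a rough illustration of the projection and visibility-reasoning step described in the abstract, the sketch below projects labeled 3D keypoints onto an image with a pinhole camera and uses a rendered depth map as a z-buffer to discard self-occluded points. This is a minimal sketch under assumed inputs; the function name, arguments (K, R, t, depth_map), and occlusion tolerance are hypothetical and do not come from the paper or its released code.

```python
import numpy as np

def project_labels_to_image(points_3d, labels, K, R, t, depth_map, img_hw, tol=0.02):
    """Project labeled 3D keypoints into a 2D image, keeping only visible ones.

    points_3d: (N, 3) keypoint coordinates in the world frame (assumed input).
    labels:    (N,)   semantic label per keypoint (assumed input).
    K, R, t:   pinhole intrinsics (3x3), rotation (3x3), translation (3,).
    depth_map: (H, W) rendered object depth, used as a z-buffer to reject
               self-occluded keypoints.
    """
    H, W = img_hw
    cam = (R @ points_3d.T).T + t      # world frame -> camera frame
    uvw = (K @ cam.T).T                # camera frame -> homogeneous pixels
    uv = uvw[:, :2] / uvw[:, 2:3]      # perspective divide
    u, v, z = uv[:, 0], uv[:, 1], cam[:, 2]

    visible = []
    for i in range(len(points_3d)):
        x, y = int(round(u[i])), int(round(v[i]))
        if z[i] <= 0 or not (0 <= x < W and 0 <= y < H):
            continue                   # behind the camera or outside the image
        # z-buffer test: keep the keypoint only if it is (nearly) the closest
        # surface at that pixel, i.e., not self-occluded.
        if z[i] <= depth_map[y, x] + tol:
            visible.append((x, y, labels[i]))
    return visible
```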
Authors
Yang You, Chengkun Li, Yujing Lou, Zhoujun Cheng, Liangwei Li, Lizhuang Ma, Weiming Wang, Cewu Lu