We propose deep virtual markers, a framework for estimating dense and
accurate positional information for various types of 3D data. We construct a
framework that maps 3D points of articulated models, such as humans, to
virtual marker labels. To realize the framework, we adopt a
sparse convolutional neural network and classify 3D points of an articulated
model into virtual marker labels. We propose to use soft labels for the
classifier to learn rich and dense interclass relationships based on geodesic
distance. To measure the localization accuracy of the virtual markers, we
evaluate on the FAUST challenge, and our results outperform the state of the art. We also
observe outstanding performance on the generalizability test, unseen data
evaluation, and different 3D data types (meshes and depth maps). We show
additional applications using the estimated virtual markers, such as non-rigid
registration, texture transfer, and real-time dense marker prediction from depth
maps.
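The geodesic soft-label idea can be illustrated with a minimal sketch. The code below is a hypothetical construction, not the paper's exact formulation: given precomputed geodesic distances from each surface point to each virtual marker, it forms a per-point soft label distribution with a Gaussian kernel, so nearby markers receive higher probability and interclass relationships reflect surface proximity. The function name, the `sigma` bandwidth, and the kernel choice are all illustrative assumptions.

```python
import numpy as np

def geodesic_soft_labels(geodesic_dists, sigma=0.1):
    """Turn geodesic distances into soft classification targets.

    geodesic_dists: (N, K) array of geodesic distances from each of
    N surface points to K virtual marker locations.
    Returns an (N, K) array of per-point label distributions.

    Hypothetical sketch: the paper's actual soft-label definition
    may differ (kernel, bandwidth, normalization).
    """
    # Gaussian kernel on geodesic distance.
    logits = -geodesic_dists ** 2 / (2.0 * sigma ** 2)
    # Subtract the per-point max for numerical stability.
    logits -= logits.max(axis=1, keepdims=True)
    weights = np.exp(logits)
    # Normalize so each point's soft label sums to 1.
    return weights / weights.sum(axis=1, keepdims=True)
```

Compared with one-hot targets, such soft labels penalize confusions between geodesically distant markers more than confusions between neighboring ones, which is one way the classifier can learn dense interclass structure.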
Authors
Hyomin Kim, Jungeon Kim, Jaewon Kam, Jaesik Park, Seungyong Lee