3D object reconstruction and 6D-pose estimation from 2D shape for robotic grasping of objects
Marcell Wolnitza, Osman Kaya, Tomas Kulvicius, Florentin Wörgötter, Babette Dellen
We propose a method for 3D object reconstruction and 6D-pose estimation from
2D images that uses knowledge about object shape as the primary cue. In the
proposed pipeline, recognition and labeling of objects in 2D images deliver 2D
segment silhouettes that are compared with the 2D silhouettes of projections
obtained from various views of a 3D model representing the recognized object
class. By computing transformation parameters directly from the 2D images, the
number of free parameters required during the registration process is reduced,
making the approach feasible. Furthermore, 3D transformations and projective
geometry are employed to arrive at a full 3D reconstruction of the object in
camera space using a calibrated setup. Inclusion of a second camera allows
resolving remaining ambiguities. The method is quantitatively evaluated using
synthetic data and tested with real data, and additional results for the
well-known Linemod data set are shown. In robot experiments, successful
grasping of objects demonstrates the method's usability in real-world environments,
and a comparison with other methods is provided where possible. The method is
applicable to scenarios where 3D object models, e.g., CAD-models or point
clouds, are available and precise pixel-wise segmentation maps of 2D images can
be obtained. Unlike other methods, it does not use 3D depth information for
training, which widens its domain of application.
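
As a conceptual illustration of the silhouette-comparison step described above, the following Python sketch scores how well the silhouette of a projected 3D model view matches an observed 2D segment silhouette. The function names, the use of intersection-over-union as the similarity measure, and the view-selection loop are illustrative assumptions and are not taken from the paper itself.

```python
import numpy as np

def silhouette_iou(observed_mask: np.ndarray, rendered_mask: np.ndarray) -> float:
    """Intersection-over-union between an observed 2D segment silhouette
    and the silhouette of one projected view of the 3D object model.
    Both inputs are boolean masks of identical shape."""
    intersection = np.logical_and(observed_mask, rendered_mask).sum()
    union = np.logical_or(observed_mask, rendered_mask).sum()
    return float(intersection) / float(union) if union > 0 else 0.0

def best_matching_view(observed_mask: np.ndarray, rendered_views: dict):
    """Pick the pre-rendered model view whose silhouette best matches the
    observed silhouette. `rendered_views` maps a view identifier
    (e.g., a rotation hypothesis) to its boolean silhouette mask."""
    scores = {view_id: silhouette_iou(observed_mask, mask)
              for view_id, mask in rendered_views.items()}
    best_id = max(scores, key=scores.get)
    return best_id, scores[best_id]
```

In this sketch, the selected view identifier stands in for the orientation hypothesis that the remaining transformation parameters would then be estimated against; the actual similarity measure and search strategy used in the paper may differ.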