3D endoscopic depth estimation using 3D surface-aware constraints
Shang Zhao, Ce Wang, Qiyuan Wang, Yanzhe Liu, S Kevin Zhou
Robotic-assisted surgery allows surgeons to conduct precise surgical
operations with stereo vision and flexible motor control. However, the lack of
3D spatial perception limits situational awareness during procedures and
hinders the mastery of surgical skills in the narrow abdominal space. Depth
estimation, a representative perception task, is typically formulated as an
image reconstruction problem. In this work, we show that depth estimation can
be reformulated from a 3D surface perspective. We propose a loss function for
depth estimation that integrates surface-aware constraints, leading to faster
and better convergence by exploiting valid spatial information. In
addition, camera parameters are incorporated into the training pipeline to
increase the control and transparency of depth estimation. We also integrate
a specularity removal module to recover image information obscured by specular
reflections. Quantitative experimental results on endoscopic datasets and user
studies with medical professionals demonstrate the effectiveness of our method.
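The abstract does not spell out the surface-aware loss, so the following is only a minimal sketch of one common way to impose a surface-level constraint on predicted depth: back-project the depth map through the camera intrinsics, estimate surface normals by finite differences, and penalize normal disagreement with a reference depth. The function names (backproject, surface_normals, surface_aware_loss) and the toy camera and scene are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def backproject(depth, K):
    """Lift a depth map (H, W) to a 3D point map (H, W, 3) using intrinsics K."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)  # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                                      # camera-frame rays
    return rays * depth[..., None]                                       # scale rays by depth

def surface_normals(points):
    """Estimate per-pixel normals from a 3D point map via finite differences."""
    dx = points[:, 1:, :] - points[:, :-1, :]   # horizontal tangent vectors
    dy = points[1:, :, :] - points[:-1, :, :]   # vertical tangent vectors
    dx, dy = dx[:-1], dy[:, :-1]                # crop both to a common (H-1, W-1) grid
    n = np.cross(dx, dy)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8)

def surface_aware_loss(pred_depth, ref_depth, K):
    """Penalize disagreement between surface normals of predicted and reference depth."""
    n_pred = surface_normals(backproject(pred_depth, K))
    n_ref = surface_normals(backproject(ref_depth, K))
    cos = np.sum(n_pred * n_ref, axis=-1)       # per-pixel cosine similarity of normals
    return np.mean(1.0 - cos)                   # 0 when the two surfaces agree everywhere

# Toy usage with a hypothetical pinhole camera and a tilted-plane scene.
K = np.array([[500.0,   0.0, 64.0],
              [  0.0, 500.0, 48.0],
              [  0.0,   0.0,  1.0]])
ref = 1.0 + 0.001 * np.arange(96)[:, None] + np.zeros((96, 128))
pred = ref + 0.01 * np.random.randn(96, 128)
print(surface_aware_loss(pred, ref, K))
```

Because the constraint is expressed on back-projected geometry rather than raw pixel intensities, it also makes explicit where the camera intrinsics enter the training signal, which is consistent with the abstract's point about control and transparency.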