FUN-SIS: a Fully UNsupervised approach for Surgical Instrument Segmentation
Automatic surgical instrument segmentation of endoscopic images is a crucial building block of many computer-assistance applications for minimally invasive surgery.
So far, state-of-the-art approaches completely rely on the availability of a ground-truth supervision signal, obtained via manual annotation and thus expensive to collect at large scale.
In this paper, we present FUN-SIS, a fully-unsupervised approach for binary surgical instrument segmentation.
FUN-SIS trains a per-frame segmentation model on completely unlabelled endoscopic videos, relying solely on implicit motion information and instrument shape-priors.
Shape-priors are realistic segmentation masks of the instruments, not necessarily coming from the same dataset/domain as the videos.
We leverage them as part of a novel generative-adversarial approach, which enables unsupervised instrument segmentation of optical-flow images during training.
We then use the obtained instrument masks as pseudo-labels to train a per-frame segmentation model; to this aim, we develop a learning-from-noisy-labels architecture, designed to extract a clean supervision signal from these pseudo-labels by leveraging their peculiar noise properties.
We validate the proposed contributions on three surgical datasets, including the MICCAI 2017 EndoVis Robotic Instrument Segmentation Challenge dataset.
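The abstract describes deriving instrument masks from motion cues in optical flow. The paper's actual mechanism is an adversarial model trained against shape-priors; purely as a loose, hypothetical illustration of the underlying intuition (instruments move relative to the background, so flow magnitude can seed a pseudo-mask), a naive thresholding baseline might look like:

```python
import numpy as np

def flow_pseudo_mask(flow: np.ndarray, thresh: float = 1.0) -> np.ndarray:
    """Toy pseudo-label generator: mark pixels whose optical-flow
    magnitude exceeds a threshold as 'instrument' (moving) pixels.

    NOTE: this thresholding is NOT the FUN-SIS method; the paper uses a
    generative-adversarial model guided by instrument shape-priors.
    `flow` is an (H, W, 2) array of per-pixel (dx, dy) displacements.
    """
    magnitude = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)
    return (magnitude > thresh).astype(np.uint8)

# Toy flow field: a small moving patch in an otherwise static scene.
flow = np.zeros((4, 4, 2), dtype=np.float32)
flow[1:3, 1:3] = [2.0, 0.0]  # horizontal motion inside the patch

mask = flow_pseudo_mask(flow)  # binary (4, 4) mask, 1 where motion occurs
```

Masks produced this way would be noisy, which is precisely why the paper pairs pseudo-label generation with a learning-from-noisy-labels architecture rather than training on such masks directly.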
Authors
Luca Sestini, Benoit Rosa, Elena De Momi, Giancarlo Ferrigno, Nicolas Padoy