Large image datasets: A pyrrhic win for computer vision?
In this paper we investigate problematic practices and their consequences in
large-scale vision datasets. We examine broad issues, such as questions of consent
and justice, as well as specific concerns, such as the inclusion of verifiably
pornographic images in datasets. Taking the ImageNet-ILSVRC-2012 dataset as an
example, we perform a cross-sectional model-based quantitative census covering
factors such as age, gender, NSFW content scoring, class-wise accuracy,
human-cardinality-analysis, and the semanticity of the image class information
in order to statistically investigate the extent and subtleties of ethical
transgressions. We then use the census to help hand-curate a look-up table of
images in the ImageNet-ILSVRC-2012 dataset that fall into the categories of
verifiably pornographic, shot in a non-consensual setting (up-skirt), beach
voyeuristic, and exposed private parts. We survey the landscape of harms and
threats that both society at large and individuals face due to uncritical and
ill-considered dataset curation practices. We then propose possible courses of
correction and weigh their pros and cons. We have duly open-sourced
all of the code and the census meta-datasets generated in this endeavor for the
computer vision community to build on. By unveiling the severity of these
threats, we hope to motivate the establishment of mandatory Institutional
Review Boards (IRBs) for large-scale dataset curation processes.
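As a rough illustration of the kind of model-based census described above, the sketch below scores an ImageNet-style directory tree (one sub-directory per class) with a pretrained NSFW classifier and aggregates per-class statistics. This is a minimal sketch under stated assumptions, not the authors' released pipeline: the `score_nsfw` stub, the directory layout, and the flagging threshold are all hypothetical placeholders.

```python
# Minimal sketch: per-class NSFW-scoring census over an ImageNet-style tree.
# The scoring function is a hypothetical stand-in; the authors' released code
# and models may differ substantially.
import csv
from pathlib import Path
from statistics import mean

from PIL import Image


def score_nsfw(image: Image.Image) -> float:
    """Hypothetical stand-in for a pretrained NSFW classifier.

    Placeholder that always returns 0.0; swap in any model that maps an
    RGB image to a probability in [0, 1].
    """
    return 0.0


def census(dataset_root: str, out_csv: str, flag_threshold: float = 0.8) -> None:
    rows = []
    for class_dir in sorted(Path(dataset_root).iterdir()):
        if not class_dir.is_dir():
            continue
        scores = []
        for img_path in class_dir.glob("*.JPEG"):
            with Image.open(img_path) as img:
                scores.append(score_nsfw(img.convert("RGB")))
        if scores:
            rows.append({
                "class": class_dir.name,
                "n_images": len(scores),
                "mean_nsfw": mean(scores),
                "n_flagged": sum(s >= flag_threshold for s in scores),
            })
    # Write the per-class census so flagged classes can be reviewed by hand.
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["class", "n_images", "mean_nsfw", "n_flagged"]
        )
        writer.writeheader()
        writer.writerows(rows)


if __name__ == "__main__":
    census("ILSVRC2012/train", "nsfw_census.csv")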