Conventional algorithmic fairness is Western in its sub-groups, values, and
optimizations. In this paper, we ask how portable the assumptions of this
largely Western take on algorithmic fairness are to a different geo-cultural
context such as India. Based on 36 expert interviews with Indian scholars and
an analysis of emerging algorithmic deployments in India, we identify three
clusters of challenges that span the large distance between machine learning
models and oppressed communities in India. We argue that a mere translation of
technical fairness work to Indian subgroups may serve only as window dressing;
instead, we call for a collective re-imagining of Fair-ML by
re-contextualising data and models, empowering oppressed communities, and,
more importantly, enabling ecosystems.
Authors: Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Vinodkumar Prabhakaran