Machine learning (ML) currently exerts an outsized influence on the world,
increasingly affecting communities and institutional practices. It is therefore
critical that we question vague conceptions of the field as value-neutral or
universally beneficial, and investigate what specific values the field is
advancing. In this paper, we present a rigorous examination of the values of
the field by quantitatively and qualitatively analyzing 100 highly cited ML
papers published at premier ML conferences, ICML and NeurIPS. We annotate key
features of these papers that reveal their values: how they justify their choice of
project, which aspects they uplift, their consideration of potential negative
consequences, and their institutional affiliations and funding sources. We find
that societal needs are typically very loosely connected to the choice of
project, if mentioned at all, and that consideration of negative consequences
is extremely rare. We identify 67 values that are uplifted in machine learning
research, and, of these, we find that papers most frequently justify and assess
themselves based on performance, generalization, efficiency, researcher
understanding, novelty, and building on previous work. We present extensive
textual evidence and analysis of how these values are operationalized. Notably,
we find that each of these top values is currently being defined and applied
with assumptions and implications generally supporting the centralization of
power. Finally, we find that these highly cited papers have increasingly close
ties to tech companies and elite universities.
Authors: Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, Michelle Bao