Accuracy and Fairness Trade-offs in Machine Learning: A Stochastic Multi-Objective Approach
Suyun Liu, Luis Nunes Vicente
In the application of machine learning to real-life decision-making systems,
e.g., credit scoring and criminal justice, the prediction outcomes might
discriminate against people on the basis of sensitive attributes, leading to unfairness.
The commonly used strategy in fair machine learning is to include fairness as a
constraint or a penalization term in the minimization of the prediction loss,
which, by committing to a single trade-off level, ultimately limits the
information given to decision-makers. In this
paper, we introduce a new approach to handle fairness by formulating a
stochastic multi-objective optimization problem for which the corresponding
Pareto fronts uniquely and comprehensively define the accuracy-fairness
trade-offs. We then apply a stochastic approximation-type method to
efficiently compute well-spread and accurate Pareto fronts, which in turn
allows us to handle training data arriving in a streaming fashion.
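As a minimal illustration of the bi-objective setup described above (and not the paper's own algorithm), the sketch below sweeps weighted-sum scalarizations of a logistic prediction loss and a simple group-fairness penalty with stochastic gradient steps on synthetic data, producing an approximate accuracy-fairness Pareto front. The data generator, the particular fairness measure (squared gap in mean predicted scores between two groups), and all constants are assumptions made purely for illustration.

```python
# Illustrative sketch only: trace an approximate accuracy-fairness Pareto front
# by sweeping weighted-sum scalarizations of (prediction loss, fairness penalty)
# with stochastic gradient steps. Synthetic data and the fairness measure are
# assumptions for illustration, not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features x, sensitive attribute a in {0, 1}, binary label y.
n, d = 5000, 5
a = rng.integers(0, 2, size=n)
x = rng.normal(size=(n, d)) + 0.8 * a[:, None]           # features correlated with a
logits_true = x @ rng.normal(size=d) + 0.5 * a           # labels also depend on a
y = (logits_true + rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def stochastic_grads(w, idx):
    """Stochastic gradients of the two objectives on a minibatch `idx`."""
    xb, yb, ab = x[idx], y[idx], a[idx]
    p = sigmoid(xb @ w)
    # f1: logistic (cross-entropy) prediction loss.
    g_loss = xb.T @ (p - yb) / len(idx)
    # f2: fairness penalty = (mean score | a=1  -  mean score | a=0)^2.
    if ab.sum() == 0 or ab.sum() == len(idx):             # minibatch has one group only
        return g_loss, np.zeros_like(w)
    c = ab / ab.sum() - (1 - ab) / (1 - ab).sum()          # group-mean contrast weights
    gap = c @ p
    dgap = (xb * (p * (1 - p))[:, None]).T @ c            # d(gap)/dw via chain rule
    return g_loss, 2.0 * gap * dgap

pareto = []
for lam in np.linspace(0.0, 1.0, 11):                     # sweep trade-off weights
    w = np.zeros(d)
    for t in range(2000):                                  # SGD on the scalarization
        idx = rng.integers(0, n, size=64)
        g1, g2 = stochastic_grads(w, idx)
        w -= 0.5 / np.sqrt(t + 1) * ((1 - lam) * g1 + lam * g2)
    p = sigmoid(x @ w)
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    gap = abs(p[a == 1].mean() - p[a == 0].mean())
    pareto.append((lam, loss, gap))

for lam, loss, gap in pareto:
    print(f"lambda={lam:.1f}  prediction loss={loss:.3f}  group score gap={gap:.3f}")
```

Each weight choice commits to one point on the trade-off curve; collecting the points over the sweep yields the kind of Pareto front the abstract refers to, whereas a single constraint or penalty would report only one such point.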