by Hoda Heidari, Claudio Ferrari, Krishna P. Gummadi, and Andreas Krause
Abstract:
We draw attention to an important, yet largely overlooked aspect of evaluating fairness for automated decision making systems—namely risk and welfare considerations. Our proposed family of measures corresponds to the long-established formulations of cardinal social welfare in economics, and is justified by the Rawlsian conception of fairness behind a veil of ignorance. The convex formulation of our welfare-based measures of fairness allows us to integrate them as a constraint into any convex loss minimization pipeline. Our empirical analysis reveals interesting trade-offs between our proposal and (a) prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of individual fairness. Furthermore and perhaps most importantly, our work provides both heuristic justification and empirical evidence suggesting that a lower-bound on our measures often leads to bounded inequality in algorithmic outcomes; hence presenting the first computationally feasible mechanism for bounding individual-level inequality.
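The abstract's claim that a lower bound on a welfare measure can be folded into any convex loss minimization pipeline can be illustrated with a small sketch. The concrete choices below (logistic loss, an affine "benefit" for each individual, a CRRA/Atkinson-style concave welfare aggregation, and the threshold tau) are illustrative assumptions for this sketch, not the paper's exact formulation.
Example (Python, using cvxpy):
import cvxpy as cp
import numpy as np

# Illustrative data: n individuals with d features and binary labels in {-1, +1}.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = 2.0 * rng.integers(0, 2, size=n) - 1.0

theta = cp.Variable(d)

# Any convex loss works; here, the average logistic loss of a linear model.
loss = cp.sum(cp.logistic(cp.multiply(-y, X @ theta))) / n

# Hypothetical "benefit" each individual receives from the decision rule,
# taken to be affine in theta so the welfare constraint below stays convex.
benefits = X @ theta + 1.0

# Cardinal social welfare: a concave (here CRRA/Atkinson-style) aggregation of
# individual benefits; rho and the functional form are illustrative choices.
rho = 0.5
welfare = cp.sum(cp.power(benefits, 1 - rho))

# Lower-bounding a concave welfare function is a convex constraint, so it can
# be added to any convex loss minimization. tau is an illustrative threshold.
tau = 0.9 * n
problem = cp.Problem(cp.Minimize(loss), [welfare >= tau])
problem.solve()

print("optimal loss:", problem.value)
print("welfare at optimum:", welfare.value)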
Reference:
H. Heidari, C. Ferrari, K. P. Gummadi, and A. Krause. Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making. In Neural Information Processing Systems (NeurIPS), 2018.
Bibtex Entry:
@inproceedings{heidari2018fairness,
	author = {Hoda Heidari and Claudio Ferrari and Krishna P. Gummadi and Andreas Krause},
	booktitle = {Neural Information Processing Systems (NeurIPS)},
	month = {December},
	title = {Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making},
	year = {2018}}