by Hoda Heidari, Claudio Ferrari, Krishna P. Gummadi, and Andreas Krause
Abstract:
We draw attention to an important, yet largely overlooked aspect of evaluating fairness for automated decision making systems—namely risk and welfare considerations. Our proposed family of measures corresponds to the long-established formulations of cardinal social welfare in economics. We come to this proposal by taking the perspective of a rational, risk-averse individual who is going to be subject to algorithmic decision making and is faced with the task of choosing between several algorithmic alternatives behind a Rawlsian veil of ignorance. The convex formulation of our measures allows us to integrate them as a constraint into any convex loss minimization pipeline. Our empirical analysis reveals interesting trade-offs between our proposal and (a) prediction accuracy, (b) group discrimination, and (c) Dwork et al.’s notion of individual fairness. Furthermore, and perhaps most importantly, our work provides both theoretical and empirical evidence suggesting that a lower bound on our measures often leads to bounded inequality in algorithmic outcomes, hence presenting the first computationally feasible mechanism for bounding individual-level (un)fairness.
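To make the idea of a cardinal welfare measure concrete, here is a minimal sketch (not the paper's exact formulation): individual "benefits" are aggregated through a concave utility function, so that, for a fixed average benefit, more equal outcome profiles score higher. The power-utility form and the risk-aversion parameter `rho` are illustrative assumptions.

```python
def welfare(benefits, rho=0.5):
    """Average cardinal welfare of a list of positive benefits.

    Illustrative sketch: applies the concave CRRA-style utility
    u(b) = b**(1 - rho) / (1 - rho) to each individual benefit and
    averages. rho in (0, 1) encodes risk aversion: higher rho weights
    the worst-off individuals more heavily.
    """
    assert 0 < rho < 1 and all(b > 0 for b in benefits)
    return sum(b ** (1 - rho) / (1 - rho) for b in benefits) / len(benefits)

# Two benefit profiles with the same mean: concavity rewards equality.
equal = [1.0, 1.0]
unequal = [0.5, 1.5]
print(welfare(equal) > welfare(unequal))  # True
```

Because the utility is concave, `welfare` is a concave function of the benefit vector, which is what lets a lower bound on it be imposed as a convex constraint in a loss minimization pipeline, as the abstract describes.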
Reference:
Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making. H. Heidari, C. Ferrari, K. P. Gummadi, and A. Krause. In Neural Information Processing Systems (NIPS), 2018.
Bibtex Entry:
@inproceedings{heidari2018fairness,
  title     = {Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making},
  author    = {Hoda Heidari and Claudio Ferrari and Krishna P. Gummadi and Andreas Krause},
  booktitle = {Neural Information Processing Systems (NIPS)},
  year      = {2018},
  month     = {December}
}