by D. Lindner, H. Heidari, A. Krause
Abstract:
Machine Learning (ML) increasingly informs the allocation of opportunities to individuals and communities in areas such as lending, education, employment, and beyond. Such decisions often impact their subjects' future characteristics and capabilities in an a priori unknown fashion. The decision-maker, therefore, faces exploration-exploitation dilemmas akin to those in multi-armed bandits. Following prior work, we model communities as arms. To capture the long-term effects of ML-based allocation decisions, we study a setting in which the reward from each arm evolves every time the decision-maker pulls that arm. We focus on reward functions that are initially increasing in the number of pulls but may become (and remain) decreasing after a certain point. We argue that an acceptable sequential allocation of opportunities must take an arm's potential for growth into account. We capture these considerations through the notion of policy regret, a much stronger notion than the often-studied external regret, and present an algorithm with provably sub-linear policy regret for sufficiently long time horizons. We empirically compare our algorithm with several baselines and find that it consistently outperforms them, in particular for long time horizons.
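To make the setting concrete, below is a minimal simulation sketch of the kind of bandit the abstract describes: each arm's reward depends on how often that arm has been pulled so far, rising at first and falling after a peak. Everything here is illustrative and hypothetical, not taken from the paper: the triangular reward curves, the peaks parameter, the deterministic mean rewards, and the greedy baseline are all assumptions made for the example, and the comparator class (committing to a single fixed arm) is only one simple instance of a fixed policy.

def mean_reward(arm, n_pulls, peaks):
    """Toy reward curve (hypothetical): increases in the pull count up to
    peaks[arm], then decreases -- the shape described in the abstract."""
    p = peaks[arm]
    return max(0.0, 1.0 - abs(n_pulls - p) / p)

def run_policy(policy, horizon, peaks):
    """Simulate a policy; each arm's reward evolves with its own pull count."""
    pulls = [0] * len(peaks)
    total = 0.0
    for t in range(horizon):
        arm = policy(t, list(pulls))
        total += mean_reward(arm, pulls[arm], peaks)
        pulls[arm] += 1
    return total

peaks = [20, 100]   # arm 1 matures slowly but has far more potential for growth
horizon = 300

# Myopic baseline: always pull the arm whose current reward looks best.
def greedy(t, pulls):
    return max(range(len(peaks)), key=lambda a: mean_reward(a, pulls[a], peaks))

# Policy regret scores the comparator on the reward sequence *its own*
# actions would have induced (here, by re-simulating each fixed-arm policy),
# unlike external regret, which scores it on the realized reward sequence.
best_fixed = max(run_policy(lambda t, pulls, a=a: a, horizon, peaks)
                 for a in range(len(peaks)))
policy_regret = best_fixed - run_policy(greedy, horizon, peaks)
print(f"policy regret of greedy over {horizon} rounds: {policy_regret:.1f}")

Under these toy curves, the greedy learner locks onto the fast-maturing arm and never develops the slow-growing one, incurring large policy regret against the fixed policy that commits to arm 1; this is precisely why the abstract argues that an acceptable allocation must account for an arm's potential for growth.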
Reference:
D. Lindner, H. Heidari, A. Krause. Addressing the Long-term Impact of ML Decisions via Policy Regret. In Proc. International Joint Conference on Artificial Intelligence (IJCAI), 2021.
BibTeX Entry:
@inproceedings{lindner2021addressing,
  author = {Lindner, David and Heidari, Hoda and Krause, Andreas},
  booktitle = {Proc. International Joint Conference on Artificial Intelligence (IJCAI)},
  month = {August},
  title = {Addressing the Long-term Impact of ML Decisions via Policy Regret},
  video = {https://www.youtube.com/watch?v=WBUFl1r7iFg},
  year = {2021}
}