by W. Yang, L. Lorch, M. Graule, H. Lakkaraju, F. Doshi-Velez
Abstract:
Domains where supervised models are deployed often come with task-specific constraints, such as prior expert knowledge on the ground-truth function, or desiderata like safety and fairness. We introduce a novel probabilistic framework for reasoning with such constraints and formulate a prior that enables us to effectively incorporate them into Bayesian neural networks (BNNs), including a variant that can be amortized over tasks. The resulting Output-Constrained BNN (OC-BNN) is fully consistent with the Bayesian framework for uncertainty quantification and is amenable to black-box inference. Unlike typical BNN inference in uninterpretable parameter space, OC-BNNs widen the range of functional knowledge that can be incorporated, especially for model users without expertise in machine learning. We demonstrate the efficacy of OC-BNNs on real-world datasets, spanning multiple domains such as healthcare, criminal justice, and credit scoring.
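For intuition only, below is a minimal sketch (not the authors' implementation) of the general idea the abstract describes: an output constraint enters the BNN's unnormalized log-posterior as an extra prior factor that softly penalizes sampled network outputs violating a user-specified interval at chosen inputs, so that any black-box sampler can target the resulting density. The function names, the interval form of the constraint, the quadratic penalty, and the `gamma` strength parameter are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def log_constraint_prior(f_vals, lower, upper, gamma=10.0):
    """Soft log-penalty on function outputs that leave [lower, upper].

    f_vals: network outputs at the constrained inputs (array).
    gamma:  penalty strength (hypothetical hyperparameter).
    """
    violation = np.maximum(lower - f_vals, 0.0) + np.maximum(f_vals - upper, 0.0)
    return -gamma * np.sum(violation ** 2)

def log_unnormalized_posterior(theta, log_likelihood, log_weight_prior,
                               forward, x_constrained, lower, upper):
    """Standard BNN terms plus an output-constraint term.

    log_likelihood, log_weight_prior, forward are user-supplied callables;
    a black-box sampler (e.g. HMC) or variational method can target this.
    """
    f_vals = forward(theta, x_constrained)
    return (log_likelihood(theta)
            + log_weight_prior(theta)
            + log_constraint_prior(f_vals, lower, upper))
```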
Reference:
W. Yang, L. Lorch, M. Graule, H. Lakkaraju, F. Doshi-Velez. Incorporating interpretable output constraints in Bayesian neural networks. In Proc. Neural Information Processing Systems (NeurIPS), volume 33, pages 12721–12731, 2020. Spotlight presentation.
BibTeX Entry:
@inproceedings{yang2020incorporating,
author = {Yang, Wanqian and Lorch, Lars and Graule, Moritz and Lakkaraju, Himabindu and Doshi-Velez, Finale},
booktitle = {Proc. Neural Information Processing Systems (NeurIPS)},
pages = {12721--12731},
title = {Incorporating interpretable output constraints in Bayesian neural networks},
volume = {33},
year = {2020}}