by M. Turchetta, F. Berkenkamp, A. Krause
Abstract:
In interactive machine learning (IML), we iteratively make decisions and obtain noisy observations of an unknown function. While IML methods such as Bayesian optimization and active learning have been successful in applications, when deployed on real-world systems they must provably avoid unsafe decisions. To this end, safe IML algorithms must carefully learn about a priori unknown constraints without making unsafe decisions. Existing algorithms for this problem learn about the safety of all decisions to ensure convergence. This is sample-inefficient, as it explores decisions that are not relevant for the original IML objective. In this paper, we introduce a novel framework that renders any existing unsafe IML algorithm safe. Our method works as an add-on that takes suggested decisions as input and exploits regularity assumptions, in the form of a Gaussian process prior, to efficiently learn about their safety. As a result, we expand the safe set only when necessary for the IML problem. We apply our framework to safe Bayesian optimization and to safe exploration in deterministic Markov Decision Processes (MDPs), which have previously been analyzed separately. Our method empirically outperforms existing algorithms.
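The add-on idea in the abstract can be illustrated with a small sketch. This is my own simplification, not the paper's actual algorithm: a GP posterior over the unknown safety constraint yields confidence bounds; a suggested decision is executed only if its lower bound certifies safety, is rejected if its upper bound rules safety out, and otherwise the wrapper evaluates already-certified decisions to grow the safe set toward the suggestion. The class name, the constant confidence scaling beta, the finite candidate set, the initial safe seed set, and the nearest-safe-point expansion rule are all assumptions made for illustration; the paper derives its confidence intervals and expansion strategy formally.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

class SafetyWrapper:
    """Hypothetical GP-based safety add-on (illustrative sketch only)."""

    def __init__(self, candidates, safe_seed_x, safe_seed_y, threshold, beta=2.0):
        self.candidates = np.asarray(candidates)  # finite decision set, shape (n, d)
        self.threshold = threshold                 # decision x is safe iff g(x) >= threshold
        self.beta = beta                            # confidence scaling (assumed constant here)
        self.X = [np.asarray(x) for x in safe_seed_x]  # initial safe seed decisions
        self.y = list(safe_seed_y)                  # noisy constraint observations
        self.gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
        self.gp.fit(np.array(self.X), np.array(self.y))

    def _bounds(self, X):
        # Lower/upper confidence bounds on the safety constraint.
        mu, std = self.gp.predict(np.array(X), return_std=True)
        return mu - self.beta * std, mu + self.beta * std

    def certify(self, x, observe, max_expansions=20):
        """Try to certify the suggested decision x as safe.

        `observe` is a callback that evaluates the constraint at a decision
        already certified safe (an assumption of this sketch).
        """
        x = np.asarray(x)
        for _ in range(max_expansions):
            lo, hi = self._bounds([x])
            if lo[0] >= self.threshold:   # pessimistically safe: execute x
                return True
            if hi[0] < self.threshold:    # optimistically unsafe: reject x
                return False
            # Uncertain: measure at a certified-safe candidate to shrink the
            # uncertainty near x (here simply the safe point closest to x).
            lo_c, _ = self._bounds(self.candidates)
            safe = self.candidates[lo_c >= self.threshold]
            if len(safe) == 0:
                return False
            nearest = safe[np.argmin(np.linalg.norm(safe - x, axis=1))]
            self.X.append(nearest)
            self.y.append(observe(nearest))
            self.gp.fit(np.array(self.X), np.array(self.y))
        return False

An outer IML algorithm (e.g., a Bayesian optimization routine) would call certify on each suggested decision and execute only those that return True, asking for an alternative suggestion otherwise.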
Reference:
M. Turchetta, F. Berkenkamp, A. Krause. Safe Exploration for Interactive Machine Learning. In Proc. Neural Information Processing Systems (NeurIPS), 2019.
BibTeX Entry:
@inproceedings{turchetta19goose,
  author = {Turchetta, Matteo and Berkenkamp, Felix and Krause, Andreas},
  booktitle = {Proc. Neural Information Processing Systems (NeurIPS)},
  month = {December},
  title = {Safe Exploration for Interactive Machine Learning},
  year = {2019}
}