by F. Berkenkamp, M. Turchetta, A. P. Schoellig, A. Krause
Abstract:
Reinforcement learning is a powerful paradigm for learning optimal policies from experimental data. However, to find optimal policies, most reinforcement learning algorithms explore all possible actions, which may be harmful for real-world systems. As a consequence, learning algorithms are rarely applied to safety-critical systems in the real world. In this paper, we present a learning algorithm that explicitly considers safety, defined in terms of stability guarantees. Specifically, we extend control-theoretic results on Lyapunov stability verification and show how to use statistical models of the dynamics to obtain high-performance control policies with provable stability certificates. Moreover, under additional regularity assumptions in terms of a Gaussian process prior, we prove that one can effectively and safely collect data in order to learn about the dynamics and thus both improve control performance and expand the safe region of the state space. In our experiments, we show how the resulting algorithm can safely optimize a neural network policy on a simulated inverted pendulum, without the pendulum ever falling down.
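The sketch below illustrates (in a heavily simplified form, and not using the authors' code) the certification idea from the abstract: check a Lyapunov decrease condition against every dynamics model consistent with a statistical model's confidence bounds, and certify the largest level set of the Lyapunov function on which the condition holds. The pendulum constants, the linear stand-in policy, and the crude `model_error_bound` replacing the paper's Gaussian-process confidence intervals are all illustrative assumptions.

```python
import numpy as np

def nominal_dynamics(x, u, dt=0.01, g=9.81, l=1.0, m=1.0):
    """One Euler step of an inverted pendulum; x = (angle, angular velocity)."""
    theta, omega = x
    domega = (g / l) * np.sin(theta) + u / (m * l ** 2)
    return np.array([theta + dt * omega, omega + dt * domega])

def model_error_bound(x):
    """Stand-in for a GP confidence interval on the unknown part of the dynamics."""
    return 1e-3 * np.linalg.norm(x)

def lyapunov(x):
    """Quadratic Lyapunov candidate v(x) = x^T P x."""
    P = np.array([[2.0, 0.5], [0.5, 1.0]])
    return float(x @ P @ x)

def policy(x):
    """Fixed linear controller (stand-in for the learned neural-network policy)."""
    return float(-np.array([15.0, 4.0]) @ x)

def decrease_holds(x):
    """Does v decrease at x for every model consistent with the error bound?"""
    x_next = nominal_dynamics(x, policy(x))
    eps = model_error_bound(x)
    # Coarse worst case over the corners of the model-error box.
    worst = max(lyapunov(x_next + np.array([dx, dw]))
                for dx in (-eps, eps) for dw in (-eps, eps))
    return worst < lyapunov(x)

# Certified level set: the largest c such that the decrease condition holds at
# every grid point with 0 < v(x) <= c (a coarse stand-in for the paper's
# discretization-based region-of-attraction estimate).
grid = np.array([[a, w] for a in np.linspace(-0.5, 0.5, 41)
                        for w in np.linspace(-1.0, 1.0, 41)])
vals = np.array([lyapunov(x) for x in grid])
ok = np.array([decrease_holds(x) for x in grid])
violating = vals[(~ok) & (vals > 0)]
c_safe = violating.min() if violating.size else vals.max()
print(f"certified sublevel set: v(x) < {c_safe:.4f}")
```

In the paper, the statistical model (and hence the certified region) is updated as new data are collected safely inside the current region; the sketch only shows a single verification pass for a fixed model.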
Reference:
Safe Model-based Reinforcement Learning with Stability Guarantees. F. Berkenkamp, M. Turchetta, A. P. Schoellig, A. Krause. In Proc. Neural Information Processing Systems (NeurIPS), 2017.
Bibtex Entry:
@inproceedings{berkenkamp17saferl,
  author    = {Felix Berkenkamp and Matteo Turchetta and Angela P. Schoellig and Andreas Krause},
  booktitle = {Proc. Neural Information Processing Systems (NeurIPS)},
  month     = {December},
  title     = {Safe Model-based Reinforcement Learning with Stability Guarantees},
  video     = {https://www.youtube.com/watch?v=UDLI9K6b9G8&start=19146},
  year      = {2017}}