by Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Andreas Krause
Abstract:
Learning-based methods have been successful in solving complex control tasks without significant prior knowledge about the system. However, these methods typically do not provide any safety guarantees, which prevents their use in safety-critical, real-world applications. In this paper, we present a learning-based model predictive control scheme that provides provable high-probability safety guarantees. To this end, we exploit regularity assumptions on the dynamics in terms of a Gaussian process prior to construct provably accurate confidence intervals on predicted trajectories. Unlike previous approaches, we do not assume that model uncertainties are independent. Based on these predictions, we guarantee that trajectories satisfy safety constraints. Moreover, we use a terminal set constraint to recursively guarantee the existence of safe control actions at every iteration. In our experiments, we show that the resulting algorithm can be used to safely and efficiently explore and learn about dynamic systems.
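The abstract's core ingredient, confidence intervals derived from a Gaussian process model of the dynamics, can be illustrated with a minimal scalar-output sketch. This is only a standard GP regression posterior with a mean-plus-or-minus-beta-times-standard-deviation interval; the paper's multi-output dynamics model, its provable choice of the scaling factor, and its trajectory-wise propagation of non-independent uncertainties are not reproduced here. All function and variable names below are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_confidence_interval(X, y, X_star, beta=2.0, noise=1e-2):
    """GP posterior mean and a +/- beta * std interval at test inputs X_star.

    Illustrative only: treats outputs as scalar and independent across
    test points, unlike the trajectory-level guarantees in the paper.
    """
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    K_s = rbf_kernel(X_star, X)
    K_ss = rbf_kernel(X_star, X_star)
    # Cholesky-based solve for numerical stability.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = np.diag(K_ss - v.T @ v)
    std = np.sqrt(np.maximum(var, 0.0))
    return mean, mean - beta * std, mean + beta * std

# Toy one-dimensional "dynamics" observations.
X = np.linspace(0.0, 1.0, 8)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
mean, lo, hi = gp_confidence_interval(X, y, np.array([[0.5]]))
```

A safe MPC scheme would intersect such intervals, propagated over the prediction horizon, with the state constraints before committing to a control input.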
Reference:
Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning. T. Koller, F. Berkenkamp, M. Turchetta, A. Krause. Technical report, arXiv, 2018.
Bibtex Entry:
@techreport{koller18safempc,
	Author = {Torsten Koller and Felix Berkenkamp and Matteo Turchetta and Andreas Krause},
	Institution = {arXiv},
	Month = {March},
	Title = {Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning},
	Year = {2018}}