by Matteo Turchetta, Andrey Kolobov, Shital Shah, Andreas Krause, Alekh Agarwal
Abstract:
In safety-critical applications, autonomous agents may need to learn in an environment where mistakes can be very costly. In such settings, the agent needs to behave safely not only after but also while learning. To achieve this, existing safe reinforcement learning methods make an agent rely on priors that let it avoid dangerous situations during exploration with high probability, but both the probabilistic guarantees and the smoothness assumptions inherent in those priors are not viable in many scenarios of interest, such as autonomous driving. This paper presents an alternative approach inspired by human teaching, where an agent learns under the supervision of an automatic instructor that keeps it from violating constraints during learning. In this model, we introduce a monitor that neither needs to know how to do well at the task the agent is learning nor how the environment works. Instead, it has a library of reset controllers that it activates when the agent starts behaving dangerously, preventing it from doing damage. Crucially, the choice of which reset controller to apply in which situation affects how quickly the agent learns. By observing the agents' progress, the teacher itself learns a policy for choosing the reset controllers, i.e., a curriculum, that optimizes the agent's final policy reward. Our experiments use this framework in two environments to induce curricula for safe and efficient learning.
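The abstract describes a teacher-student loop: a student learns in the environment while a monitor intervenes with reset controllers whenever the student behaves dangerously, and the teacher learns which controller to deploy based on the student's progress. Below is a minimal, self-contained Python sketch of that loop. The CorridorEnv, ResetController, and epsilon-greedy teacher are illustrative placeholders of my own, not the paper's implementation.

```python
# Illustrative sketch only; all names here are hypothetical stand-ins.
import random

class CorridorEnv:
    """Toy 1-D corridor: start at 0, goal at +5, states <= -2 are 'dangerous'."""
    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):            # action in {-1, +1}
        self.state += action
        done = self.state >= 5
        reward = 1.0 if done else -0.01
        dangerous = self.state <= -2   # monitor flags an imminent violation
        return self.state, reward, done, dangerous

class ResetController:
    """Reset controller: when triggered, it places the agent at a safe state."""
    def __init__(self, name, safe_state):
        self.name, self.safe_state = name, safe_state

    def reset(self, env):
        env.state = self.safe_state
        return env.state

def run_episode(env, policy, controller, max_steps=200):
    """One student episode; the chosen reset controller intervenes whenever
    the monitor flags danger, so the constraint is never actually violated."""
    state, total = env.reset(), 0.0
    for _ in range(max_steps):
        state, reward, done, dangerous = env.step(policy(state))
        total += reward
        if dangerous:
            state = controller.reset(env)
        if done:
            break
    return total

def induce_curriculum(env, policy, controllers, rounds=20, eps=0.2):
    """Toy teacher: epsilon-greedy choice over reset controllers, using the
    student's episode return as feedback."""
    value = {c.name: 0.0 for c in controllers}
    count = {c.name: 0 for c in controllers}
    chosen_names = []
    for _ in range(rounds):
        c = (random.choice(controllers) if random.random() < eps
             else max(controllers, key=lambda k: value[k.name]))
        ret = run_episode(env, policy, c)
        count[c.name] += 1
        value[c.name] += (ret - value[c.name]) / count[c.name]
        chosen_names.append(c.name)
    return chosen_names

if __name__ == "__main__":
    env = CorridorEnv()
    policy = lambda s: random.choice([-1, 1])          # untrained student
    controllers = [ResetController("to_start", 0),
                   ResetController("near_goal", 3)]
    print(induce_curriculum(env, policy, controllers))
```

This sketch reduces the teacher to a running average of returns per controller; per the abstract, the paper's teacher instead learns a policy for choosing reset controllers from the observed progress of learning agents.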
Reference:
M. Turchetta, A. Kolobov, S. Shah, A. Krause, A. Agarwal. Safe Reinforcement Learning via Curriculum Induction. In Proc. Neural Information Processing Systems (NeurIPS), 2020. Spotlight presentation.
Bibtex Entry:
@inproceedings{turchetta20cisr,
	author = {Turchetta, Matteo and Kolobov, Andrey and Shah, Shital and Krause, Andreas and Agarwal, Alekh},
	booktitle = {Proc. Neural Information Processing Systems (NeurIPS)},
	month = {December},
	title = {Safe Reinforcement Learning via Curriculum Induction},
	year = {2020}}