by M. Wendl, Y. As, M. Prajapat, A. Pollak, S. Coros, A. Krause
Abstract:
Safe exploration is a key requirement for reinforcement learning agents to learn and adapt online, beyond controlled (e.g., simulated) environments. In this work, we tackle this challenge by utilizing suboptimal yet conservative policies (e.g., obtained from offline data or simulators) as priors. Our approach, SOOPER, uses probabilistic dynamics models to optimistically explore, yet pessimistically fall back to the conservative policy prior if needed. We prove that SOOPER guarantees safety throughout learning, and establish convergence to an optimal policy by bounding its cumulative regret. Extensive experiments on key safe RL benchmarks and real-world hardware demonstrate that SOOPER is scalable and outperforms the state of the art, and validate our theoretical guarantees in practice.
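A minimal sketch of the optimistic-exploration / pessimistic-fallback idea described in the abstract, assuming a scalar toy system. All names here (rollout_cost, act, the toy constraint and budget) are hypothetical illustrations, not the paper's actual algorithm; see the linked PDF for the real SOOPER method.

import numpy as np

# Hypothetical sketch: explore with the learned policy when a pessimistic
# safety check over a probabilistic model ensemble passes, otherwise fall
# back to the conservative policy prior.

rng = np.random.default_rng(0)

def rollout_cost(model, policy, state, horizon):
    """Simulate `policy` under one sampled dynamics model and return the
    cumulative safety cost of the resulting trajectory."""
    total_cost = 0.0
    for _ in range(horizon):
        state = model(state, policy(state))
        total_cost += max(0.0, abs(state) - 1.0)  # toy constraint |s| <= 1
    return total_cost

def act(state, learned_policy, prior_policy, models, horizon, budget):
    """Act optimistically with the learned policy, but fall back to the
    conservative prior if the worst-case model predicts a violation."""
    worst_case = max(rollout_cost(m, learned_policy, state, horizon)
                     for m in models)
    if worst_case <= budget:          # pessimistic safety check passed
        return learned_policy(state)  # optimistic exploration
    return prior_policy(state)        # conservative fallback to the prior

# Toy usage: an ensemble of noisy linear models, an aggressive learned
# policy, and a conservative prior that damps the state toward zero.
models = [lambda s, a, w=rng.normal(1.0, 0.05): w * s + a for _ in range(5)]
learned = lambda s: 0.5          # explores aggressively
prior = lambda s: -0.5 * s       # conservative stabilizing prior
print(act(0.2, learned, prior, models, horizon=10, budget=0.1))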
Reference:
Safe Exploration via Policy Priors. M. Wendl, Y. As, M. Prajapat, A. Pollak, S. Coros, A. Krause. In The Fourteenth International Conference on Learning Representations, 2026.
BibTeX Entry:
@inproceedings{wendl2026safe,
  title={Safe Exploration via Policy Priors},
  author={Manuel Wendl and Yarden As and Manish Prajapat and Anton Pollak and Stelian Coros and Andreas Krause},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  month={April},
  pdf={https://openreview.net/pdf?id=JC8xYAADHL},
  blog={https://yardenas.github.io/sooper/}
}