by Yarden As, Bhavya Sukhija, Lenart Treven, Carmelo Sferrazza, Stelian Coros, Andreas Krause
Abstract:
Reinforcement learning (RL) is ubiquitous in the development of modern AI systems. However, state-of-the-art RL agents require extensive, and potentially unsafe, interactions with their environments to learn effectively. These limitations confine RL agents to simulated environments, hindering their ability to learn directly in real-world settings. In this work, we present ActSafe, a novel model-based RL algorithm for safe and efficient exploration. ActSafe learns a well-calibrated probabilistic model of the system and plans optimistically w.r.t. the epistemic uncertainty about the unknown dynamics, while enforcing pessimism w.r.t. the safety constraints. Under regularity assumptions on the constraints and dynamics, we show that ActSafe guarantees safety during learning while also obtaining a near-optimal policy in finite time. In addition, we propose a practical variant of ActSafe that builds on the latest advancements in model-based RL and enables safe exploration even in high-dimensional settings such as visual control. We empirically show that ActSafe obtains state-of-the-art performance on difficult exploration tasks in standard safe deep RL benchmarks while ensuring safety during learning.
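The abstract's core idea, being optimistic about reward under epistemic uncertainty while being pessimistic about safety constraints, can be illustrated with a short sketch. The snippet below is our own minimal illustration, not the authors' implementation: it assumes a hypothetical ensemble of learned dynamics models whose disagreement stands in for epistemic uncertainty, and hypothetical `reward_fn`, `cost_fn`, and `cost_budget` names.

```python
import numpy as np

def select_action(ensemble, reward_fn, cost_fn, state, candidate_actions,
                  cost_budget=0.0, beta=1.0):
    """Pick the candidate action with the best optimistic value estimate,
    subject to a pessimistic (uncertainty-inflated) constraint estimate.

    Hypothetical sketch of the optimism/pessimism principle described in the
    abstract; names and the ensemble-based uncertainty are our assumptions.
    """
    best_action, best_value = None, -np.inf
    for action in candidate_actions:
        # Predict the next state with every ensemble member.
        next_states = np.stack([model(state, action) for model in ensemble])
        rewards = np.array([reward_fn(s) for s in next_states])
        costs = np.array([cost_fn(s) for s in next_states])

        # Pessimism w.r.t. safety: inflate the expected cost by the epistemic spread.
        pessimistic_cost = costs.mean() + beta * costs.std()
        if pessimistic_cost > cost_budget:
            continue  # reject actions that could violate the constraint under a plausible model

        # Optimism w.r.t. reward: credit the epistemic spread to the value estimate.
        optimistic_value = rewards.mean() + beta * rewards.std()
        if optimistic_value > best_value:
            best_action, best_value = action, optimistic_value
    return best_action
```

Filtering by the pessimistic cost before ranking by the optimistic value keeps exploration directed toward informative actions that remain plausibly safe under every model in the confidence set.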
Reference:
ActSafe: Active Exploration with Safety Constraints for Reinforcement Learning. Y. As*, B. Sukhija*, L. Treven, C. Sferrazza, S. Coros, A. Krause. In Proc. International Conference on Learning Representations (ICLR), 2025.
BibTeX Entry:
@inproceedings{as2024actsafe,
	title = {ActSafe: Active Exploration with Safety Constraints for Reinforcement Learning},
	author = {As*, Yarden and Sukhija*, Bhavya and Treven, Lenart and Sferrazza, Carmelo and Coros, Stelian and Krause, Andreas},
	booktitle = {Proc. International Conference on Learning Representations (ICLR)},
	pdf = {https://arxiv.org/pdf/2410.09486},
	month = {April},
	year = {2025}
}