by Sebastian Curi, Ilija Bogunovic, Andreas Krause
Abstract:
In real-world tasks, reinforcement learning (RL) agents frequently encounter situations that were not seen during training. To ensure reliable performance, RL agents need to exhibit robustness to such worst-case situations. The robust-RL framework addresses this challenge via a minimax optimization between an agent and an adversary. Previous robust RL algorithms are either sample-inefficient, lack robustness guarantees, or do not scale to large problems. We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem while attaining near-optimal sample complexity guarantees. RH-UCRL is a model-based reinforcement learning (MBRL) algorithm that effectively distinguishes between epistemic and aleatoric uncertainty and efficiently explores both the agent and the adversary decision spaces during policy learning. We scale RH-UCRL to complex tasks via neural network ensemble models as well as neural network policies. Experimentally, we demonstrate that RH-UCRL outperforms other robust deep RL algorithms in a variety of adversarial environments.
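The sketch below is a minimal illustration (not the authors' implementation) of the two ingredients the abstract mentions: separating epistemic from aleatoric uncertainty with a model ensemble, and combining optimism for the agent with pessimism against the adversary. All function names, shapes, and constants (e.g. `ensemble_predict`, `beta`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_predict(state, action, n_members=5):
    """Stand-in for an ensemble of learned dynamics models: each member
    returns a mean next-state prediction and a learned (aleatoric) noise std."""
    means = np.stack([state + action + 0.01 * rng.standard_normal(state.shape)
                      for _ in range(n_members)])
    aleatoric_stds = np.full_like(means, 0.05)
    return means, aleatoric_stds

state, action = np.zeros(3), np.full(3, 0.1)
means, aleatoric_stds = ensemble_predict(state, action)

# Epistemic uncertainty: disagreement between ensemble members (reducible with data).
epistemic_std = means.std(axis=0)
# Aleatoric uncertainty: average predicted noise (irreducible).
aleatoric_std = aleatoric_stds.mean(axis=0)
mean_prediction = means.mean(axis=0)

beta = 1.0  # confidence multiplier (illustrative value)
# Optimism for the agent: hallucinate the most favorable plausible dynamics.
optimistic_next_state = mean_prediction + beta * epistemic_std
# Pessimism against the adversary: hallucinate the least favorable plausible dynamics.
pessimistic_next_state = mean_prediction - beta * epistemic_std

print("optimistic:", optimistic_next_state)
print("pessimistic:", pessimistic_next_state)
```

In this toy setting, the agent plans against the optimistic dynamics while the adversary is evaluated against the pessimistic ones, mirroring the optimism/pessimism combination in the paper's title.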
Reference:
Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning. S. Curi, I. Bogunovic, A. Krause. In Proc. International Conference on Machine Learning (ICML), 2021.
Bibtex Entry:
@inproceedings{curi21combining,
	author = {Sebastian Curi and Ilija Bogunovic and Andreas Krause},
	booktitle = {Proc. International Conference on Machine Learning (ICML)},
	month = {July},
	pdf = {http://proceedings.mlr.press/v139/curi21a/curi21a.pdf},
	title = {Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning},
	year = {2021}}