by D. Lindner, A. Krause, G. Ramponi
Abstract:
Inverse Reinforcement Learning (IRL) is a powerful paradigm for inferring a reward function from expert demonstrations. Many IRL algorithms require a known transition model and sometimes even a known expert policy, or they at least require access to a generative model. However, these assumptions are too strong for many real-world applications, where the environment can be accessed only through sequential interaction. We propose a novel IRL algorithm: Active exploration for Inverse Reinforcement Learning (AceIRL), which actively explores an unknown environment and expert policy to quickly learn the expert's reward function and identify a good policy. AceIRL uses previous observations to construct confidence intervals that capture plausible reward functions and find exploration policies that focus on the most informative regions of the environment. AceIRL is the first approach to active IRL with sample-complexity bounds that does not require a generative model of the environment. AceIRL matches the sample complexity of active IRL with a generative model in the worst case. Additionally, we establish a problem-dependent bound that relates the sample complexity of AceIRL to the suboptimality gap of a given IRL problem. We empirically evaluate AceIRL in simulations and find that it significantly outperforms more naive exploration strategies.
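The abstract describes the core idea of AceIRL at a high level: maintain confidence sets from past observations and direct exploration toward the most informative regions. The snippet below is a minimal conceptual sketch of that idea in a toy tabular setting, not the authors' algorithm: the chain environment, the Hoeffding-style confidence width, and the bonus-driven planner are illustrative assumptions chosen only to mirror the uncertainty-directed exploration described above.

```python
# A minimal, self-contained sketch of uncertainty-driven exploration for IRL
# in a small tabular MDP. This is NOT the AceIRL algorithm from the paper:
# the environment, the confidence-width formula, and the bonus-driven planner
# are illustrative assumptions that mirror the idea in the abstract
# (explore where the agent is still uncertain, without a generative model).

import numpy as np

rng = np.random.default_rng(0)

# --- Assumed toy environment: a 5-state chain MDP with 2 actions ----------
S, A, H = 5, 2, 10          # states, actions, episode horizon
true_P = np.zeros((S, A, S))
for s in range(S):
    true_P[s, 0, max(s - 1, 0)] = 1.0      # action 0: move left
    true_P[s, 1, min(s + 1, S - 1)] = 1.0  # action 1: move right
expert_policy = np.ones(S, dtype=int)      # assumed expert: always move right

# --- Statistics gathered from sequential interaction ----------------------
counts = np.zeros((S, A, S))               # transition counts
expert_seen = np.zeros(S, dtype=bool)      # states where the expert was observed

def confidence_width(n, delta=0.1):
    """Hoeffding-style width; shrinks as a state-action pair is visited more."""
    return np.sqrt(np.log(2 * S * A / delta) / max(n, 1))

def plan_exploration_policy():
    """Finite-horizon value iteration on an uncertainty bonus instead of a reward."""
    P_hat = counts / np.maximum(counts.sum(axis=2, keepdims=True), 1)
    bonus = np.array([[confidence_width(counts[s, a].sum())
                       for a in range(A)] for s in range(S)])
    V = np.zeros(S)
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = bonus + P_hat @ V              # (S, A): bonus plus expected future value
        pi[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi

# --- Active exploration loop ----------------------------------------------
for episode in range(50):
    pi = plan_exploration_policy()
    s = 0
    for h in range(H):
        a = pi[h, s]
        s_next = rng.choice(S, p=true_P[s, a])
        counts[s, a, s_next] += 1
        expert_seen[s] = True              # stand-in for observing the expert at s
        s = s_next

print("state-action visit counts:\n", counts.sum(axis=2))
print("states with expert observations:", expert_seen)
```

In the paper, the confidence intervals are over plausible reward functions and the exploration objective accounts for how much each region could change the recovered policy; the sketch above only keeps the simpler ingredient of planning against visitation-based uncertainty.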
Reference:
D. Lindner, A. Krause, G. Ramponi. Active Exploration for Inverse Reinforcement Learning. In Proc. Neural Information Processing Systems (NeurIPS), 2022.
Bibtex Entry:
@inproceedings{lindner2022active,
  author    = {Lindner, David and Krause, Andreas and Ramponi, Giorgia},
  title     = {Active Exploration for Inverse Reinforcement Learning},
  booktitle = {Proc. Neural Information Processing Systems (NeurIPS)},
  month     = {December},
  year      = {2022},
  video     = {https://www.youtube.com/watch?v=4qUCa0TyFec}
}