by D. Lindner, M. Turchetta, S. Tschiatschek, K. Ciosek, A. Krause
Abstract:
For many reinforcement learning (RL) applications, specifying a reward is difficult. This paper considers an RL setting where the agent obtains information about the reward only by querying an expert that can, for example, evaluate individual states or provide binary preferences over trajectories. From such expensive feedback, we aim to learn a model of the reward that allows standard RL algorithms to achieve high expected returns with as few expert queries as possible. To this end, we propose Information Directed Reward Learning (IDRL), which uses a Bayesian model of the reward and selects queries that maximize the information gain about the difference in return between plausibly optimal policies. In contrast to prior active reward learning methods designed for specific types of queries, IDRL naturally accommodates different query types. Moreover, it achieves similar or better performance with significantly fewer queries by shifting the focus from reducing the reward approximation error to improving the policy induced by the reward model. We support our findings with extensive evaluations in multiple environments and with different query types.
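To make the query-selection idea concrete, below is a minimal sketch of IDRL-style query selection under simplifying assumptions (not the paper's exact formulation): the reward is a Bayesian linear model r(s) = w^T phi(s) with a Gaussian prior over w, each query is a noisy linear observation of w, and candidate policies are summarized by their feature expectations. All function and variable names here are illustrative, not from the paper's code.

```python
import numpy as np

def posterior(Phi, y, sigma, prior_cov):
    """Gaussian posterior over reward weights from noisy linear observations
    y = Phi w + eps, eps ~ N(0, sigma^2 I). Purely illustrative."""
    prior_prec = np.linalg.inv(prior_cov)
    prec = prior_prec + Phi.T @ Phi / sigma**2
    cov = np.linalg.inv(prec)
    mean = cov @ (Phi.T @ y / sigma**2)
    return mean, cov

def info_gain_about_return_diff(delta, x, cov, sigma):
    """Information gain about the return difference d = delta^T w from
    observing x^T w + noise. Since d and the observation are jointly
    Gaussian, the gain is 0.5 * log(Var[d] / Var[d | observation])."""
    var_d = delta @ cov @ delta
    var_x = x @ cov @ x + sigma**2
    cov_dx = delta @ cov @ x
    var_d_post = var_d - cov_dx**2 / var_x
    return 0.5 * np.log(var_d / var_d_post)

def select_query(candidate_queries, plausible_policies, cov, sigma):
    """Pick the candidate query that is most informative about the return
    difference between some pair of plausibly optimal policies, where each
    policy is represented by its feature expectations."""
    best_query, best_gain = None, -np.inf
    for x in candidate_queries:
        gain = max(
            info_gain_about_return_diff(phi_i - phi_j, x, cov, sigma)
            for i, phi_i in enumerate(plausible_policies)
            for phi_j in plausible_policies[i + 1:]
        )
        if gain > best_gain:
            best_query, best_gain = x, gain
    return best_query
```

The key design choice this sketch captures is that queries are scored by how much they reduce uncertainty about *differences in policy returns*, not by how much they reduce the reward model's overall approximation error; under the linear-Gaussian assumptions above, that reduction has the closed form used in `info_gain_about_return_diff`.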
Reference:
D. Lindner, M. Turchetta, S. Tschiatschek, K. Ciosek, A. Krause. Information Directed Reward Learning for Reinforcement Learning. In Proc. Neural Information Processing Systems (NeurIPS), 2021.
BibTeX Entry:
@inproceedings{lindner2021information,
  author    = {Lindner, David and Turchetta, Matteo and Tschiatschek, Sebastian and Ciosek, Kamil and Krause, Andreas},
  title     = {Information Directed Reward Learning for Reinforcement Learning},
  booktitle = {Proc. Neural Information Processing Systems (NeurIPS)},
  month     = {December},
  year      = {2021},
  blog      = {https://las.inf.ethz.ch/information-directed-reward-learning},
  video     = {https://www.youtube.com/watch?v=1RpiZrxhV90}
}