by J. Gehring, G. Synnaeve, A. Krause, N. Usunier
Abstract:
In reinforcement learning, pre-trained low-level skills have the potential to greatly facilitate exploration. However, prior knowledge of the downstream task is required to strike the right balance between generality (fine-grained control) and specificity (faster learning) in skill design. In previous work on continuous control, the sensitivity of methods to this trade-off has not been addressed explicitly, as locomotion provides a suitable prior for navigation tasks, which have been of foremost interest. In this work, we analyze this trade-off for low-level policy pre-training with a new benchmark suite of diverse, sparse-reward tasks for bipedal robots. We alleviate the need for prior knowledge by proposing a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner. For utilization on downstream tasks, we present a three-layered hierarchical learning algorithm to automatically trade off between general and specific skills as required by the respective task. In our experiments, we show that our approach performs this trade-off effectively and achieves better results than current state-of-the-art methods for end-to-end hierarchical reinforcement learning and unsupervised skill discovery. Code and videos are available at https://facebookresearch.github.io/hsd3.
Reference:
Hierarchical Skills for Efficient Exploration. J. Gehring, G. Synnaeve, A. Krause, N. Usunier. In Proc. Neural Information Processing Systems (NeurIPS), 2021.
Bibtex Entry:
@inproceedings{gehring21hierarchical,
author = {Jonas Gehring and Gabriel Synnaeve and Andreas Krause and Nicolas Usunier},
booktitle = {Proc. Neural Information Processing Systems (NeurIPS)},
month = {December},
title = {Hierarchical Skills for Efficient Exploration},
year = {2021}}