by M. Bagatella, J. Hübotter, G. Martius, A. Krause
Abstract:
Pre-trained generalist policies are rapidly gaining relevance in robot learning due to their promise of fast adaptation to novel, in-domain tasks. This adaptation often relies on collecting new demonstrations for a specific task of interest and applying imitation learning algorithms, such as behavioral cloning. However, as soon as several tasks need to be learned, we must decide which tasks should be demonstrated, and how often. We study this multi-task problem and explore an interactive framework in which the agent adaptively selects the tasks to be demonstrated. We propose AMF (Active Multi-task Fine-tuning), an algorithm that maximizes multi-task policy performance under a limited demonstration budget by collecting demonstrations that yield the largest information gain on the expert policy. We derive performance guarantees for AMF under regularity assumptions and demonstrate its empirical effectiveness for efficiently fine-tuning neural policies in complex and high-dimensional environments.
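The core idea of selecting demonstrations by information gain can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: it assumes tasks are represented by embeddings, models the unknown expert policy as a Gaussian process over those embeddings, and greedily demonstrates the task whose observation most increases the information gain of the observed set.

```python
# Hypothetical illustration of information-gain-based task selection
# (an assumption-laden sketch, not the AMF algorithm from the paper).
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    """Squared-exponential kernel over task embeddings X (n x d)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def select_task(task_embeddings, observed, noise=0.1):
    """Greedily pick the unobserved task whose demonstration maximizes
    the GP information gain 0.5 * logdet(I + K_S / noise^2) of the
    enlarged observation set S."""
    K = rbf_kernel(task_embeddings)
    best_gain, best_task = -np.inf, None
    for i in range(len(task_embeddings)):
        if i in observed:
            continue
        S = list(observed) + [i]
        K_S = K[np.ix_(S, S)]
        gain = 0.5 * np.linalg.slogdet(np.eye(len(S)) + K_S / noise**2)[1]
        if gain > best_gain:
            best_gain, best_task = gain, i
    return best_task
```

Under this model, tasks far from already-demonstrated ones (in embedding space) are preferred, since their expert behavior is most uncertain; repeating the loop until the demonstration budget is exhausted yields an adaptive schedule over tasks.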
Reference:
Active Fine-Tuning of Generalist Policies. M. Bagatella, J. Hübotter, G. Martius, A. Krause. In arXiv preprint arXiv:2410.05026, 2024.
Bibtex Entry:
@article{bagatella2024active,
author = {Bagatella, Marco and H{\"u}botter, Jonas and Martius, Georg and Krause, Andreas},
journal = {arXiv preprint arXiv:2410.05026},
pdf = {https://arxiv.org/pdf/2410.05026},
title = {Active Fine-Tuning of Generalist Policies},
year = {2024}}