Abstract:
Meta-learning promises more data-efficient inference by harnessing previous experience from related learning tasks. While existing meta-learning methods help improve the accuracy of predictions in the face of data scarcity, they fail to supply reliable uncertainty estimates and are often grossly overconfident in their predictions. Addressing these shortcomings, we introduce a novel meta-learning framework, called F-PACOH, that treats meta-learned priors as stochastic processes and performs meta-level regularization directly in the function space. This allows us to directly steer the probabilistic predictions of the meta-learner towards high epistemic uncertainty in regions of insufficient meta-training data and, thus, obtain well-calibrated uncertainty estimates. Finally, we showcase how our approach can be integrated with sequential decision making, where reliable uncertainty quantification is imperative. In our benchmark study on meta-learning for Bayesian Optimization (BO), F-PACOH significantly outperforms all other meta-learners and standard baselines. Even in a challenging lifelong BO setting, where optimization tasks arrive one at a time and the meta-learner must build up informative prior knowledge incrementally, our proposed method demonstrates strong positive transfer.
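The core mechanism described above — regularizing the meta-learned prior in function space — can be illustrated with a small sketch. Below, a hypothetical meta-learned prior (represented by its mean and covariance functions over a set of random measurement points) is penalized by its KL divergence to a vanilla zero-mean GP hyper-prior, which pulls predictions back toward the hyper-prior's uncertainty away from the meta-training data. All function names (`functional_kl`, `rbf_kernel`, `prior_mean_fn`) are illustrative, not the paper's implementation:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel, a standard choice for the GP hyper-prior
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    # KL( N(mu_p, cov_p) || N(mu_q, cov_q) ) between multivariate Gaussians
    k = len(mu_p)
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(cov_q_inv @ cov_p) + diff @ cov_q_inv @ diff - k
                  + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))

def functional_kl(prior_mean_fn, prior_kernel_fn, X_measure, jitter=1e-6):
    """Sketch of a function-space regularizer: KL between the (hypothetical)
    meta-learned prior and a zero-mean GP hyper-prior, both evaluated at
    randomly sampled measurement points X_measure."""
    n = len(X_measure)
    mu_p = prior_mean_fn(X_measure)
    K_p = prior_kernel_fn(X_measure, X_measure) + jitter * np.eye(n)
    mu_q = np.zeros(n)                                  # zero-mean hyper-prior
    K_q = rbf_kernel(X_measure, X_measure) + jitter * np.eye(n)
    return gaussian_kl(mu_p, K_p, mu_q, K_q)
```

During meta-training, a term like this would be added to the meta-learning objective, so that in regions without meta-training data the meta-learned prior cannot collapse to overconfident predictions.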
Reference:
Meta-Learning Reliable Priors in the Function Space. J. Rothfuss, D. Heyn, J. Chen, A. Krause. arXiv, 2021.
Bibtex Entry:
@misc{rothfuss21fpacoh,
    author = {Jonas Rothfuss and Dominique Heyn and Jinfan Chen and Andreas Krause},
    title = {Meta-Learning Reliable Priors in the Function Space},
    archiveprefix = {arXiv},
    eprint = {2106.03195},
    month = {June},
    year = {2021},
    primaryclass = {cs.LG},
    publisher = {ArXiv}}