by Krunoslav Lehman Pavasovic, Jonas Rothfuss, Andreas Krause
Abstract:
Meta-learning aims to extract useful inductive biases from a set of related datasets. In Bayesian meta-learning, this is typically achieved by constructing a prior distribution over neural network parameters. However, specifying families of computationally viable prior distributions over the high-dimensional neural network parameters is difficult. As a result, existing approaches resort to meta-learning restrictive diagonal Gaussian priors, severely limiting their expressiveness and performance. To circumvent these issues, we approach meta-learning through the lens of functional Bayesian neural network inference, which views the prior as a stochastic process and performs inference in the function space. Specifically, we view the meta-training tasks as samples from the data-generating process and formalize meta-learning as empirically estimating the law of this stochastic process. Our approach can seamlessly acquire and represent complex prior knowledge by meta-learning the score function of the data-generating process marginals instead of parameter space priors. In a comprehensive benchmark, we demonstrate that our method achieves state-of-the-art performance in terms of predictive accuracy and substantial improvements in the quality of uncertainty estimates.
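To make the core computational idea concrete — estimating the score of the data-generating process's marginals from meta-training tasks — the snippet below sketches one way this could look using denoising score matching. It is a minimal, hypothetical illustration, not the authors' implementation: the ScoreNet architecture, the sinusoid stand-in tasks, and all hyperparameters are assumptions, and the pointwise MLP ignores the permutation structure over measurement points that a practical score estimator would respect.

```python
# Hypothetical sketch: meta-learning a functional-prior score via denoising
# score matching. Not the MARS implementation; all names are illustrative.
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Pointwise estimate of the score of the marginal p(f(X)) at a
    measurement set X. A practical estimator would be permutation-aware."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, y):
        # x, y: (batch, m, 1) -> score estimate of shape (batch, m, 1)
        return self.net(torch.cat([x, y], dim=-1))

def dsm_loss(score_net, x, y, sigma=0.1):
    """Denoising score matching: at the optimum, score_net approximates
    the score of the sigma-smoothed marginal of function values."""
    noise = torch.randn_like(y) * sigma
    y_tilde = y + noise
    target = -noise / sigma ** 2  # score of the Gaussian perturbation kernel
    pred = score_net(x, y_tilde)
    return ((pred - target) ** 2).mean()

def sample_task_batch(batch=32, m=8):
    """Toy meta-training tasks (random sinusoids) standing in for samples
    from a real data-generating process."""
    amp = torch.rand(batch, 1, 1) * 2 + 0.5
    phase = torch.rand(batch, 1, 1) * 3.14
    x = torch.rand(batch, m, 1) * 10 - 5       # random measurement points
    y = amp * torch.sin(x + phase)             # function values at x
    return x, y

score_net = ScoreNet()
opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)
for step in range(2000):
    x, y = sample_task_batch()
    loss = dsm_loss(score_net, x, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Under these assumptions, the trained score network could then stand in for the gradient of the functional prior during posterior inference, e.g. in a functional SVGD-style update, instead of specifying a parameter-space prior.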
Reference:
MARS: Meta-Learning as Score Matching in the Function Space. K. L. Pavasovic*, J. Rothfuss*, A. Krause. In International Conference on Learning Representations (ICLR), 2023. Spotlight presentation.
Bibtex Entry:
@inproceedings{pavasovic2022mars,
	author = {Pavasovic*, Krunoslav Lehman and Rothfuss*, Jonas and Krause, Andreas},
	booktitle = {International Conference on Learning Representations (ICLR)},
	month = {May},
	title = {MARS: Meta-Learning as Score Matching in the Function Space},
	year = {2023}
}