by P. Kassraie, N. Emmenegger, A. Krause, A. Pacchiano
Abstract:
Model selection in the context of bandit optimization is a challenging problem, as it requires balancing exploration and exploitation not only for action selection, but also for model selection. One natural approach is to rely on online learning algorithms that treat different models as experts. The regret of existing methods, however, scales poorly (polynomially in $M$) with the number of models $M$. Our key insight is that, for model selection in linear bandits, we can emulate full-information feedback to the online learner with a favorable bias-variance trade-off. This allows us to develop ALEXP, whose regret has an exponentially improved ($\log M$) dependence on $M$. ALEXP has anytime guarantees on its regret, and neither requires knowledge of the horizon $n$, nor relies on an initial purely exploratory stage. Our approach utilizes a novel time-uniform analysis of the Lasso, establishing a new connection between online learning and high-dimensional statistics.
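The snippet below is a minimal, self-contained sketch (in Python, on a synthetic toy problem) of the recipe the abstract describes: an exponential-weights learner over $M$ candidate models, where a single Lasso fit on the concatenated features of all models emulates full-information feedback for every expert at each round. All problem constants, feature maps, and hyperparameters (learning rate, Lasso penalty, the greedy base strategy) are illustrative assumptions, not the paper's actual ALEXP algorithm or its tuned parameters.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy problem (illustrative, not from the paper): M candidate feature
# maps over a shared finite action set; exactly one map explains the reward.
M, d, n_actions, horizon = 5, 3, 20, 300
actions = rng.normal(size=(n_actions, d))
feature_maps = [rng.normal(size=(d, d)) for _ in range(M)]  # model j: x -> A_j x
theta_true = rng.normal(size=d)
true_model = 2

def reward(x):
    return feature_maps[true_model] @ x @ theta_true + 0.1 * rng.normal()

log_w = np.zeros(M)      # log-weights of the exponential-weights learner
X_cat, y_obs = [], []    # concatenated-feature design and observed rewards

for t in range(1, horizon + 1):
    eta = 0.5 / np.sqrt(t)                     # anytime learning rate
    q = np.exp(log_w - log_w.max())
    q /= q.sum()
    j = rng.choice(M, p=q)                     # sample a model ("expert")

    # Greedy action under model j's least-squares estimate; a real bandit
    # base algorithm would add its own exploration on top of this.
    if y_obs:
        Phi_j = np.asarray(X_cat)[:, j * d:(j + 1) * d]
        theta_j, *_ = np.linalg.lstsq(Phi_j, np.asarray(y_obs), rcond=None)
    else:
        theta_j = np.zeros(d)
    x = actions[np.argmax(actions @ feature_maps[j].T @ theta_j)]
    y = reward(x)

    # Record this action's features under *all* M models (concatenated).
    X_cat.append(np.concatenate([feature_maps[k] @ x for k in range(M)]))
    y_obs.append(y)

    # One sparse regression on the concatenated design yields an estimated
    # reward for every model at once: this emulates full-information
    # feedback, so each expert is updated although only one model acted.
    beta = Lasso(alpha=0.05, fit_intercept=False, max_iter=5000).fit(
        np.asarray(X_cat), np.asarray(y_obs)).coef_
    est_best = np.array([
        (actions @ feature_maps[k].T @ beta[k * d:(k + 1) * d]).max()
        for k in range(M)
    ])
    log_w += eta * est_best                    # full-information update

q = np.exp(log_w - log_w.max()); q /= q.sum()
print("weights over models:", np.round(q, 3), "| true model:", true_model)
```

The point mirrored from the abstract is the feedback structure: a single sparse regression on the concatenated design provides reward estimates for all $M$ experts simultaneously, letting the online learner perform a full-information update even though only bandit feedback is observed, and without knowing the horizon in advance.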
Reference:
Anytime Model Selection in Linear Bandits. P. Kassraie, N. Emmenegger, A. Krause, A. Pacchiano. In Proc. Neural Information Processing Systems (NeurIPS), 2023. Oral presentation at the PAC-Bayes Meets Interactive Learning Workshop at ICML, and at the Royal Statistical Society International Conference.
BibTeX Entry:
@inproceedings{kassraie2023anytime,
  author    = {Kassraie, Parnian and Emmenegger, Nicolas and Krause, Andreas and Pacchiano, Aldo},
  booktitle = {Proc. Neural Information Processing Systems (NeurIPS)},
  month     = {December},
  pdf       = {https://arxiv.org/pdf/2307.12897.pdf},
  title     = {Anytime Model Selection in Linear Bandits},
  year      = {2023}}