by J. Kirschner, T. Lattimore, C. Vernade, C. Szepesvári
Abstract:
We introduce a simple and efficient algorithm for stochastic linear bandits with finitely many actions that is asymptotically optimal and worst-case rate optimal in finite time. The approach is based on the frequentist information-directed sampling (IDS) framework, with a surrogate for the information gain that is informed by the optimization problem that defines the asymptotic lower bound. Our analysis sheds light on how IDS balances the trade-off between regret and information. Moreover, we uncover a surprising connection between the recently proposed primal-dual methods and the Bayesian IDS algorithm. We demonstrate empirically that IDS is competitive with UCB in finite time, and can be significantly better in the asymptotic regime.
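To illustrate the kind of decision rule the abstract describes, here is a minimal Python sketch of one frequentist IDS step for a finite-action linear bandit. It assumes a ridge-regression estimate theta_hat with inverse design matrix V_inv, plug-in gap estimates, and a generic log-determinant surrogate for the information gain; the paper's actual surrogate is derived from the asymptotic lower-bound optimization problem, so the function ids_action and its ingredients are illustrative, not the authors' exact algorithm.

import numpy as np

def ids_action(actions, theta_hat, V_inv, rng, n_grid=101):
    # One IDS-style step: choose a distribution over actions minimizing
    # (expected estimated gap)^2 / (expected information gain), then sample it.
    # actions: (K, d) feature vectors; theta_hat: (d,) estimate; V_inv: (d, d).
    rewards = actions @ theta_hat
    gaps = rewards.max() - rewards  # plug-in regret (gap) estimates
    # Surrogate information gain (an assumption, not the paper's choice):
    # log-det increase of the design matrix from playing each action once.
    info = np.log1p(np.einsum('kd,de,ke->k', actions, V_inv, actions))

    # The information-ratio minimizer is supported on at most two actions,
    # so searching over pairs with a grid of mixing weights suffices.
    ps = np.linspace(0.0, 1.0, n_grid)
    best_ratio, best_mix = np.inf, (0, 0, 1.0)
    for i in range(len(actions)):
        for j in range(len(actions)):
            gap = ps * gaps[i] + (1 - ps) * gaps[j]
            gain = np.maximum(ps * info[i] + (1 - ps) * info[j], 1e-12)
            ratio = gap ** 2 / gain
            k = ratio.argmin()
            if ratio[k] < best_ratio:
                best_ratio, best_mix = ratio[k], (i, j, ps[k])

    i, j, p = best_mix
    return i if rng.random() < p else j

In a bandit loop one would play the returned action, observe the reward, and update theta_hat and V_inv (for instance with a rank-one Sherman-Morrison update) before the next call.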
Reference:
Asymptotically Optimal Information-Directed Sampling. J. Kirschner, T. Lattimore, C. Vernade, C. Szepesvári. In Proc. Conference on Learning Theory (COLT), 2021.
Bibtex Entry:
@inproceedings{kirschner21asymptotically,
  archiveprefix = {arXiv},
  author = {Johannes Kirschner and Tor Lattimore and Claire Vernade and Csaba Szepesv{\'a}ri},
  booktitle = {Proc. Conference on Learning Theory (COLT)},
  eprint = {2011.05944},
  month = {August},
  primaryclass = {stat.ML},
  title = {Asymptotically Optimal Information-Directed Sampling},
  year = {2021}
}