by Mohammad Reza Karimi, Andreas Krause, Silvio Lattanzi, Sergei Vassilvitskii
Abstract:
Modern online learning algorithms achieve low (sublinear) regret in a variety of settings. These algorithms, however, update their solution at every time step. While each update is computationally efficient, the very requirement of frequent updates makes the algorithms untenable in some practical applications. In this work we develop online learning algorithms that update only a sublinear number of times. We give a meta algorithm based on non-homogeneous Poisson processes that provides a smooth trade-off between regret and frequency of updates. Empirically, we show that in many cases we can significantly reduce the number of updates with only a minimal increase in regret.
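A minimal sketch of the idea, under assumptions of our own: we wrap plain online gradient descent and apply an update only at time steps drawn from a discretized non-homogeneous Poisson process. The function name, the intensity lam(t) = t**(-rate), and the OGD update rule are illustrative choices, not the construction from the paper.

import numpy as np

def lazy_online_gradient_descent(grad, x0, T, rate=0.5, seed=0):
    """Illustrative 'lazy' online learner (an assumption-laden sketch,
    not the paper's meta algorithm): the iterate is only updated at
    steps sampled from a discretized non-homogeneous Poisson process."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    updates = 0
    for t in range(1, T + 1):
        # Discrete analogue of an NHPP: fire an update at step t with
        # probability lam(t) = t**(-rate); a decaying intensity makes
        # the total number of updates sublinear in T.
        if rng.random() < t ** (-rate):
            eta = 1.0 / np.sqrt(t)        # standard OGD step size
            x = x - eta * grad(x, t)      # gradient step on the current loss
            updates += 1
    return x, updates

For rate in (0, 1) the expected number of updates is roughly T**(1 - rate) / (1 - rate), i.e., sublinear in the horizon T, which is the regret-versus-updates trade-off the abstract refers to.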
Reference:
Consistent Online Optimization: Convex and Submodular. M. R. Karimi, A. Krause, S. Lattanzi, S. Vassilvitskii. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2019.
Bibtex Entry:
@inproceedings{karimi2018consistent,
	author = {Mohammad Reza Karimi and Andreas Krause and Silvio Lattanzi and Sergei Vassilvitskii},
	booktitle = {International Conference on Artificial Intelligence and Statistics (AISTATS)},
	month = {April},
	title = {Consistent Online Optimization: Convex and Submodular},
	year = {2019}}