by K. Y. Levy, A. Yurtsever, V. Cevher
Abstract:
We present a novel method for convex unconstrained optimization that, without any modifications, ensures: (i) accelerated convergence rate for smooth objectives, (ii) standard convergence rate in the general (non-smooth) setting, and (iii) standard convergence rate in the stochastic optimization setting. To the best of our knowledge, this is the first method that simultaneously applies to all of the above settings. At the heart of our method is an adaptive learning rate rule that employs importance weights, in the spirit of adaptive online learning algorithms (Duchi et al., 2011; Levy, 2017), combined with an update that linearly couples two sequences, in the spirit of Allen-Zhu and Orecchia (2017). An empirical examination of our method demonstrates its applicability to the above-mentioned scenarios and corroborates our theoretical findings.
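For concreteness, the sketch below illustrates how an update of this kind can be put together: an importance-weighted adaptive step size driving two linearly coupled sequences. This is a minimal Python sketch, not the authors' reference implementation; the weight schedule alpha_t, the step rule eta_t, and the parameters D (a distance bound) and G (a gradient-norm bound) are assumptions based on the abstract's description.

import numpy as np

def adaptive_coupled_gd(grad, x0, D=1.0, G=1.0, T=1000):
    # Minimal sketch (assumed details): minimize an unconstrained convex f
    # via two linearly coupled sequences y_t, z_t and an adaptive step size
    # built from importance-weighted gradient norms.
    y = x0.astype(float).copy()
    z = x0.astype(float).copy()
    sum_sq = 0.0                     # running sum of alpha_t^2 * ||g_t||^2
    y_avg = np.zeros_like(y)
    w_sum = 0.0
    for t in range(T):
        alpha = 1.0 if t <= 2 else (t + 1) / 4.0  # assumed importance-weight schedule
        tau = 1.0 / alpha
        x = tau * z + (1.0 - tau) * y             # linear coupling of the two sequences
        g = grad(x)
        sum_sq += alpha ** 2 * float(g @ g)
        eta = 2.0 * D / np.sqrt(G ** 2 + sum_sq)  # adaptive learning rate
        z = z - alpha * eta * g                   # importance-weighted step on z
        y = x - eta * g                           # plain gradient step from the coupled point
        y_avg += alpha * y                        # output is a weighted average of the y iterates
        w_sum += alpha
    return y_avg / w_sum

# Usage example on a simple convex quadratic, f(x) = 0.5 * ||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_hat = adaptive_coupled_gd(lambda x: A.T @ (A @ x - b), np.zeros(2), D=2.0, G=10.0, T=2000)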
Reference:
Online Adaptive Methods, Universality and Acceleration. K. Y. Levy, A. Yurtsever, V. Cevher. In Neural Information Processing Systems (NeurIPS), 2018.
BibTeX Entry:
@inproceedings{levy2018online,
  author    = {Levy, Kfir Y. and Yurtsever, Alp and Cevher, Volkan},
  booktitle = {Neural Information Processing Systems (NeurIPS)},
  month     = {December},
  title     = {Online Adaptive Methods, Universality and Acceleration},
  year      = {2018}
}