by Mohammad Reza Karimi, Ya-Ping Hsieh, Panayotis Mertikopoulos, and Andreas Krause
Many important learning algorithms, such as stochastic gradient methods, are often deployed to solve nonlinear problems on Riemannian manifolds. Motivated by these applications, we propose a family of Riemannian algorithms generalizing and extending the seminal stochastic approximation framework of Robbins and Monro (1951). Compared to their Euclidean counterparts, Riemannian iterative algorithms are much less understood due to the lack of a global linear structure on the manifold. We overcome this difficulty by introducing an extended Fermi coordinate frame, which allows us to map the asymptotic behavior of the proposed Riemannian Robbins-Monro (RRM) class of algorithms to that of an associated deterministic dynamical system, under very mild assumptions on the underlying manifold. In so doing, we provide a general template of almost sure convergence results that mirrors and extends the existing theory for Euclidean Robbins-Monro schemes, albeit with a significantly more involved analysis that requires a number of new geometric ingredients. We showcase the flexibility of the proposed RRM framework by using it to establish the convergence of a retraction-based analogue of the popular optimistic/extra-gradient methods for solving minimization problems and games, providing a unified treatment of both settings.
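To make the scheme concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of a Riemannian Robbins-Monro iterate on the unit sphere: at each step, a noisy tangent-vector estimate is formed and the update is applied via a retraction (here, simple renormalization), with step sizes satisfying the usual Robbins-Monro summability conditions. The toy objective, step-size schedule, and noise level are all illustrative assumptions.

```python
import numpy as np

def retract(x, v):
    """Projection retraction on the unit sphere S^{n-1}:
    move in the ambient direction, then renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def project_tangent(x, g):
    """Project an ambient vector g onto the tangent space at x
    (remove the radial component)."""
    return g - np.dot(g, x) * x

def riemannian_robbins_monro(grad_oracle, x0, n_steps=1000, seed=0):
    """Illustrative RRM-style iteration x_{t+1} = R_{x_t}(-gamma_t v_t),
    where v_t is a noisy tangent-vector estimate and gamma_t = 1/(t+1)
    satisfies sum gamma_t = inf, sum gamma_t^2 < inf."""
    rng = np.random.default_rng(seed)
    x = x0 / np.linalg.norm(x0)
    for t in range(n_steps):
        gamma = 1.0 / (t + 1)
        noise = 0.1 * rng.standard_normal(x.shape)  # zero-mean oracle noise
        v = project_tangent(x, grad_oracle(x) + noise)
        x = retract(x, -gamma * v)
    return x

# Toy problem (assumed for illustration): minimize f(x) = x^T A x on the
# sphere; the minimizer is the eigenvector of A with smallest eigenvalue.
A = np.diag([3.0, 2.0, 1.0])
x_star = riemannian_robbins_monro(lambda x: 2.0 * A @ x, np.ones(3))
```

The retraction keeps every iterate exactly on the manifold, which is the role retractions play in the retraction-based extra-gradient analogue discussed in the paper.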
The Dynamics of Riemannian Robbins-Monro Algorithms. M. R. Karimi, Y.-P. Hsieh, P. Mertikopoulos, and A. Krause. In Proc. of the Thirty Fifth Conference on Learning Theory (COLT), 2022.
Bibtex Entry:
	@inproceedings{karimi2022dynamics,
	author = {Karimi, Mohammad Reza and Hsieh, Ya-Ping and Mertikopoulos, Panayotis and Krause, Andreas},
	booktitle = {Proc. of Thirty Fifth Conference on Learning Theory (COLT)},
	month = {July},
	title = {The Dynamics of Riemannian Robbins-Monro Algorithms},
	year = {2022}}