by Y. P. Hsieh, M. R. Karimi, A. Krause, P. Mertikopoulos
Abstract:
Many modern machine learning applications, from online principal component analysis to covariance matrix identification and dictionary learning, can be formulated as minimization problems on Riemannian manifolds, typically solved with a Riemannian stochastic gradient method (or some variant thereof). However, in many cases of interest, the resulting minimization problem is _not_ geodesically convex, so the convergence of the chosen solver to a desirable solution, i.e., a local minimizer, is by no means guaranteed. In this paper, we study precisely this question: whether stochastic Riemannian optimization algorithms are guaranteed to avoid saddle points with probability $1$. For generality, we study a family of retraction-based methods which, in addition to having a potentially much lower per-iteration cost relative to Riemannian gradient descent, include other widely used algorithms, such as natural policy gradient methods and mirror descent in ordinary convex spaces. In this general setting, we show that, under mild assumptions on the ambient manifold and on the oracle providing gradient information, the methods under study avoid strict saddle points / submanifolds with probability $1$, from any initial condition. This result provides an important sanity check for the use of gradient methods on manifolds, as it shows that, almost always, the end state of a stochastic Riemannian algorithm can only be a local minimizer.
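To make the setting concrete: the retraction-based methods studied in the paper iterate updates of the generic form $x_{t+1} = R_{x_t}(-\gamma_t \hat g_t)$, where $R_x$ is a retraction, $\gamma_t$ is a step size, and $\hat g_t$ is a stochastic estimate of the Riemannian gradient. The sketch below (not taken from the paper; all function names, the step-size schedule, and the test data are illustrative assumptions) instantiates the simplest member of this family: stochastic gradient steps with the metric-projection retraction on the unit sphere, applied to the online PCA problem mentioned in the abstract.

```python
import numpy as np

# A minimal sketch, assuming the simplest retraction-based setup:
# Riemannian SGD on the sphere S^{d-1} for online PCA, i.e. maximizing
# E[(x^T z)^2] over unit vectors x. Names and schedules are illustrative,
# not the paper's exact construction.

def tangent_project(x, g):
    """Project a Euclidean gradient g onto the tangent space T_x S^{d-1}."""
    return g - np.dot(x, g) * x

def retract(x, v):
    """Metric-projection retraction on the sphere: map x + v back to S^{d-1}."""
    y = x + v
    return y / np.linalg.norm(y)

def riemannian_sgd_pca(samples, x0, step=lambda t: 1.0 / (1.0 + t)):
    """Run x_{t+1} = R_{x_t}(-gamma_t * grad_t) with a stochastic gradient oracle."""
    x = x0 / np.linalg.norm(x0)
    for t, z in enumerate(samples):
        euclid_grad = -2.0 * np.dot(x, z) * z   # gradient of f(x) = -(x^T z)^2
        riem_grad = tangent_project(x, euclid_grad)
        x = retract(x, -step(t) * riem_grad)
    return x

# Usage: recover the leading eigenvector direction of a covariance matrix.
rng = np.random.default_rng(0)
d = 5
cov = np.diag([5.0, 1.0, 0.5, 0.2, 0.1])
samples = rng.multivariate_normal(np.zeros(d), cov, size=20_000)
x_hat = riemannian_sgd_pca(samples, rng.standard_normal(d))
print(np.abs(x_hat))  # should concentrate on the first coordinate
```

Every non-optimal critical point of this objective on the sphere (i.e., every non-leading eigenvector) is a strict saddle point, so this is exactly the kind of instance where the paper's almost-sure avoidance guarantee applies: the iterates should not get trapped at the minor eigenvectors from (almost) any initialization.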
Reference:
Riemannian stochastic optimization methods avoid strict saddle points. Y. P. Hsieh, M. R. Karimi, A. Krause, and P. Mertikopoulos. In Proc. of the Thirty-Seventh Conference on Neural Information Processing Systems (NeurIPS), 2023.
Bibtex Entry:
@inproceedings{hsieh2022riemannian,
author = {Hsieh, Ya-Ping and Karimi, Mohammad Reza and Krause, Andreas and Mertikopoulos, Panayotis},
booktitle = {Proc. of Thirty Seventh Conference on Neural Information Processing Systems (NeurIPS)},
month = {December},
title = {Riemannian stochastic optimization methods avoid strict saddle points},
year = {2023}}