by Mojmir Mutný, Michał Dereziński, and Andreas Krause
Abstract:
We analyze the convergence rate of the randomized Newton-like method introduced by Qu et al. (2016) for smooth and convex objectives, which uses random coordinate blocks of a Hessian over-approximation matrix $\bM$ instead of the true Hessian. The convergence analysis of the algorithm is challenging because of its complex dependence on the structure of $\bM$. However, we show that when the coordinate blocks are sampled with probability proportional to their determinant, the convergence rate depends solely on the eigenvalue distribution of the matrix $\bM$ and has an analytically tractable form. To do so, we derive a fundamental new expectation formula for determinantal point processes. We show that determinantal sampling allows us to reason about the optimal block size in terms of the spectrum of $\bM$. Additionally, we provide a numerical evaluation of our analysis, demonstrating cases where determinantal sampling is superior to or on par with uniform sampling.
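To make the analyzed update concrete, here is a minimal Python sketch (our illustration, not the authors' code): a size-$k$ block $S$ is drawn with probability proportional to $\det(\bM_{SS})$, and the step on $S$ uses the inverse of that principal submatrix in place of the true Hessian. The quadratic objective, the brute-force enumeration sampler, and the names sample_k_dpp and block_newton are assumptions made for illustration only.

import itertools
import numpy as np

def sample_k_dpp(M, k, rng):
    # Draw a size-k subset S with P(S) proportional to det(M[S, S]).
    # Brute-force enumeration: only feasible for small n; efficient
    # k-DPP samplers exist but are beside the point of this sketch.
    n = M.shape[0]
    subsets = list(itertools.combinations(range(n), k))
    weights = np.array([np.linalg.det(M[np.ix_(S, S)]) for S in subsets])
    idx = rng.choice(len(subsets), p=weights / weights.sum())
    return np.array(subsets[idx])

def block_newton(A, b, M, k, iters=200, seed=0):
    # Newton-like block descent on f(x) = 0.5 x^T A x - b^T x, using the
    # principal submatrix M[S, S] as the local metric on the sampled block.
    rng = np.random.default_rng(seed)
    x = np.zeros(len(b))
    for _ in range(iters):
        S = sample_k_dpp(M, k, rng)
        grad = A @ x - b                      # full gradient of f
        x[S] -= np.linalg.solve(M[np.ix_(S, S)], grad[S])
    return x

# Usage: a small positive definite quadratic with M = A, which trivially
# over-approximates the Hessian (the Hessian of f is exactly A here).
rng = np.random.default_rng(1)
Q = rng.standard_normal((8, 8))
A = Q @ Q.T + np.eye(8)
b = rng.standard_normal(8)
x = block_newton(A, b, M=A, k=3)
print(np.linalg.norm(A @ x - b))              # residual shrinks toward 0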
Reference:
M. Mutný, M. Dereziński, A. Krause. Convergence Analysis of Block Coordinate Algorithms with Determinantal Sampling. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
Bibtex Entry:
@inproceedings{Mutny2020b,
	author = {Mutn\'{y}, Mojmir and Derezi\'{n}ski, Micha\l{} and Krause, Andreas},
	booktitle = {Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS)},
	month = {June},
	title = {Convergence Analysis of Block Coordinate Algorithms with Determinantal Sampling},
	year = {2020}}