by M. R. Karimi, Y. P. Hsieh, A. Krause
Abstract:
Many problems in machine learning can be formulated as solving entropy-regularized optimal transport on the space of probability measures. The canonical approach involves the Sinkhorn iterates, renowned for their rich mathematical properties. Recently, the Sinkhorn algorithm has been recast within the mirror descent framework, thus benefiting from classical optimization theory insights. Here, we build upon this result by introducing a continuous-time analogue of the Sinkhorn algorithm. This perspective allows us to derive novel variants of Sinkhorn schemes that are robust to noise and bias. Moreover, our continuous-time dynamics offers a unified perspective on several recently discovered dynamics in machine learning and mathematics, such as the "Wasserstein mirror flow" of Deb et al. (2023) or the "mean-field Schrödinger equation" of Claisse et al. (2023).
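The Sinkhorn iterates mentioned in the abstract are the classical alternating marginal-matching updates for entropy-regularized optimal transport. As a point of reference for the discrete setting (the paper itself works on the space of probability measures), a minimal sketch of the standard Sinkhorn-Knopp algorithm, with illustrative marginals and cost matrix chosen here:

```python
import numpy as np

def sinkhorn(a, b, C, eps=1.0, n_iters=200):
    """Classical Sinkhorn-Knopp iterations for entropy-regularized OT.

    a, b   : source / target marginals (probability vectors)
    C      : cost matrix
    eps    : entropic regularization strength
    Returns the approximate optimal transport plan P.
    """
    K = np.exp(-C / eps)  # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        v = b / (K.T @ u)  # scale to match column marginals
        u = a / (K @ v)    # scale to match row marginals
    return u[:, None] * K * v[None, :]

# toy example: two distributions on 3 points (values are illustrative)
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.4, 0.4, 0.2])
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
P = sinkhorn(a, b, C)
```

The alternating u/v updates are exactly the discrete-time scheme whose continuous-time limit the paper studies through the mirror-flow lens.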
Reference:
M. R. Karimi, Y. P. Hsieh, A. Krause. Sinkhorn Flow as Mirror Flow: A Continuous-Time Framework for Generalizing the Sinkhorn Algorithm. In Proc. International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.
Bibtex Entry:
@inproceedings{karimi2024sinkhorn,
author = {Karimi, Mohammad Reza and Hsieh, Ya-Ping and Krause, Andreas},
booktitle = {Proc. International Conference on Artificial Intelligence and Statistics (AISTATS)},
month = {may},
pdf = {https://arxiv.org/pdf/2311.16706.pdf},
title = {{Sinkhorn} Flow as Mirror Flow: A Continuous-Time Framework for Generalizing the {Sinkhorn} Algorithm},
year = {2024}}