by B. Mirzasoleiman, A. Karbasi, R. Sarkar, A. Krause
Abstract:
Many large-scale machine learning problems, such as clustering, non-parametric learning, and kernel machines, require selecting a small yet representative subset from a large dataset. Such problems can often be reduced to maximizing a submodular set function subject to various constraints. Classical approaches to submodular optimization require centralized access to the full dataset, which is impractical for truly large-scale problems. In this paper, we consider the problem of submodular function maximization in a distributed fashion. We develop a simple, two-stage protocol, GreeDi, that is easily implemented using MapReduce-style computations. We theoretically analyze our approach and show that, under certain natural conditions, performance close to the centralized approach can be achieved. We begin with monotone submodular maximization subject to a cardinality constraint, and then extend this approach to obtain approximation guarantees for (not necessarily monotone) submodular maximization subject to more general constraints, including matroid or knapsack constraints. In our extensive experiments, we demonstrate the effectiveness of our approach on several applications, including sparse Gaussian process inference and exemplar-based clustering on tens of millions of examples using Hadoop.
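The following is a minimal Python sketch of the two-stage protocol described in the abstract: partition the data across machines, run the standard greedy algorithm on each part, then run greedy once more on the union of the local selections. The toy coverage objective and all function and variable names (greedy, greedi, coverage, etc.) are illustrative assumptions, not the authors' reference implementation or its MapReduce/Hadoop code.

def greedy(candidates, f, k):
    """Standard greedy: repeatedly add the element with the largest marginal gain."""
    selected = []
    remaining = set(candidates)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda e: f(selected + [e]) - f(selected))
        selected.append(best)
        remaining.remove(best)
    return selected

def greedi(dataset, f, k, num_machines):
    """Two-stage distributed greedy in the spirit of GreeDi (sketch only).

    Stage 1: partition the data and run greedy on each part (map step).
    Stage 2: run greedy on the union of the local solutions (reduce step),
    then return the best solution seen.
    """
    # Stage 1: arbitrary partition into num_machines parts; in a real system
    # each part would be processed in parallel on its own machine.
    parts = [dataset[i::num_machines] for i in range(num_machines)]
    local_solutions = [greedy(part, f, k) for part in parts]

    # Stage 2: merge the local picks and run greedy once more on the merged set.
    merged = [e for sol in local_solutions for e in sol]
    final = greedy(merged, f, k)

    # Return the best among the second-stage and first-stage solutions.
    return max(local_solutions + [final], key=f)

if __name__ == "__main__":
    # Toy monotone submodular objective: coverage of a small ground set.
    universe_of = {
        "a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6},
        "d": {1, 6}, "e": {7}, "f": {2, 7, 8},
    }
    coverage = lambda S: len(set().union(*(universe_of[e] for e in S)))
    print(greedi(list(universe_of), coverage, k=3, num_machines=2))

Under the conditions analyzed in the paper, this two-round scheme can come close to the quality of running greedy centrally on the full dataset, while each machine only ever touches its own partition plus the small merged candidate set.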
Reference:
Distributed Submodular Maximization. B. Mirzasoleiman, A. Karbasi, R. Sarkar, A. Krause. In Journal of Machine Learning Research (JMLR), 2016.
Bibtex Entry:
@article{mirzasoleiman16distributed,
  author  = {Baharan Mirzasoleiman and Amin Karbasi and Rik Sarkar and Andreas Krause},
  journal = {Journal of Machine Learning Research (JMLR)},
  title   = {Distributed Submodular Maximization},
  year    = {2016}
}