by A. Gotovos, S. H. Hassani, A. Krause
Abstract:
Submodular and supermodular functions have found wide applicability in machine learning, capturing notions such as diversity and regularity, respectively. These notions have deep consequences for optimization, and the problem of (approximately) optimizing submodular functions has received much attention. However, beyond optimization, these notions allow specifying expressive probabilistic models that can be used to quantify predictive uncertainty via marginal inference. Prominent, well-studied special cases include Ising models and determinantal point processes, but the general class of log-submodular and log-supermodular models is much richer and little studied. In this paper, we investigate the use of Markov chain Monte Carlo sampling to perform approximate inference in general log-submodular and log-supermodular models. In particular, we consider a simple Gibbs sampling procedure, and establish two sufficient conditions, the first guaranteeing polynomial-time, and the second fast (O(n log n)) mixing. We also evaluate the efficiency of the Gibbs sampler on three examples of such models, and compare against a recently proposed variational approach.
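Example (illustrative sketch, not the authors' code): the Gibbs sampling procedure mentioned in the abstract can be pictured as single-site updates on a distribution p(S) proportional to exp(beta * F(S)) over subsets of a ground set. The sketch below assumes a hypothetical set function F (here a toy coverage function) and hypothetical parameters; it only illustrates the general flip-one-element mechanism.

    import math
    import random

    def gibbs_sample(F, n, num_steps, beta=1.0, rng=random):
        """Single-site Gibbs sampler for p(S) proportional to exp(beta * F(S))
        over subsets of {0, ..., n-1}. F is a placeholder set function."""
        S = set()  # start from the empty set; any initial subset works
        for _ in range(num_steps):
            i = rng.randrange(n)    # pick an element uniformly at random
            S_in = S | {i}          # state with element i included
            S_out = S - {i}         # state with element i excluded
            # Conditional probability of including i given the rest of S
            delta = beta * (F(S_in) - F(S_out))
            p_include = 1.0 / (1.0 + math.exp(-delta))
            S = S_in if rng.random() < p_include else S_out
        return S

    # Toy log-submodular model with a coverage-style function (illustrative only)
    groups = [{0, 1}, {1, 2, 3}, {3, 4}]
    def coverage(S):
        return sum(1 for g in groups if g & S)

    sample = gibbs_sample(coverage, n=5, num_steps=1000, beta=0.5)
    print(sample)

In practice F would be a model of interest such as a facility-location or graph-cut function; the coverage function above is only a stand-in.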
Reference:
A. Gotovos, S. H. Hassani, A. Krause. Sampling from Probabilistic Submodular Models. In Proc. Neural Information Processing Systems (NeurIPS), 2015. Oral presentation.
Bibtex Entry:
@inproceedings{gotovos15sampling,
  author    = {Alkis Gotovos and S. Hamed Hassani and Andreas Krause},
  booktitle = {Proc. Neural Information Processing Systems (NeurIPS)},
  month     = {December},
  title     = {Sampling from Probabilistic Submodular Models},
  video     = {http://www.youtube.com/watch?v=kRgCUp33uLs},
  year      = {2015}
}