by Josip Djolonga and Andreas Krause
Abstract:
Can we incorporate discrete optimization algorithms within modern machine learning models? For example, is it possible to incorporate into deep architectures a layer whose output is the minimal cut of a parametrized graph? Given that these models are trained end-to-end by leveraging gradient information, introducing such layers seems very challenging because their output is not continuous. In this paper, we focus on the problem of submodular minimization, for which we show that such layers are indeed possible. The key idea is that we can continuously relax the output without sacrificing guarantees. We provide an easily computable approximation to the Jacobian, complemented with a complete theoretical analysis. Finally, these contributions let us experimentally learn probabilistic log-supermodular models via a bi-level variational inference formulation.
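To make the relaxation concrete, below is a minimal, illustrative PyTorch sketch; it is not code from the paper, and the names cut_function and lovasz_extension are hypothetical. It evaluates the Lovász extension, the standard convex continuous extension of a submodular set function, on a toy graph-cut, and relies on autograd to obtain subgradients with respect to both the input point and the cut's edge weights. The paper's contribution goes further, differentiating through the minimizer of the relaxed problem via an approximate Jacobian; this sketch only shows the kind of continuous relaxation such layers build on.

    import torch

    def cut_function(subset_mask, weights):
        # Graph-cut set function on a chain graph: weights[i] is the weight of
        # the edge between nodes i and i+1, and the cut counts edges with exactly
        # one endpoint inside the set. Cut functions are submodular.
        m = subset_mask.float()
        return (weights * (m[:-1] - m[1:]).abs()).sum()

    def lovasz_extension(set_fn, x):
        # Lovász extension of a normalized set function (F(empty) = 0) at x in R^n,
        # computed by sorting x in decreasing order and accumulating marginal gains.
        # Autograd through this computation yields a subgradient w.r.t. x and
        # gradients w.r.t. any parameters inside set_fn (here, the edge weights).
        n = x.numel()
        order = torch.argsort(x, descending=True)
        value = x.new_zeros(())
        prev = x.new_zeros(())          # F(empty set) = 0
        mask = torch.zeros(n, dtype=torch.bool)
        for k in range(n):
            mask[order[k]] = True       # grow the set by the next-largest coordinate
            curr = set_fn(mask)
            value = value + x[order[k]] * (curr - prev)
            prev = curr
        return value

    # Toy usage: gradients flow both to the relaxed point x and to the graph weights,
    # which is what end-to-end learning of the submodular model requires.
    weights = torch.tensor([1.0, 0.5, 2.0], requires_grad=True)
    x = torch.tensor([0.9, 0.2, 0.7, 0.1], requires_grad=True)
    value = lovasz_extension(lambda s: cut_function(s, weights), x)
    value.backward()
    print(value.item(), x.grad, weights.grad)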
Reference:
Differentiable Learning of Submodular Models. J. Djolonga, A. Krause. In Neural Information Processing Systems (NeurIPS), 2017. Spotlight presentation.
Bibtex Entry:
@inproceedings{djolonga17learning,
	author = {Josip Djolonga and Andreas Krause},
	booktitle = {Neural Information Processing Systems (NeurIPS)},
	month = {December},
	title = {Differentiable Learning of Submodular Models},
	year = {2017}}