Schedule

8.35-8.45 Welcome
8.45-9.30 Volkan Cevher: A totally unimodular view of structured sparsity [video]
9.30-9.50 Spotlights I
9.50-10.30 ----- Coffee Break and Posters -----

10.30-12.00 S. Thomas McCormick: A survey of minimizing submodular functions on lattices [video]
12.00-15.00 ----- Lunch and Posters -----

15.00-15.20 Spotlights II
15.20-16.05 Yaron Singer: Adaptive seeding: a new framework for stochastic submodular optimization [video]
16.05-16.25 Spotlights III
16.25-17.00 ----- Coffee Break and Posters -----

17.00-17.45 Sebastian Nowozin: Trade-offs in Structured Prediction [video]
17.45-18.30 Dhruv Batra: Submodular meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets [video]

Overview

Discrete optimization problems and combinatorial structures are becoming increasingly important in machine learning. In the endeavour to process more and more complex data, one quickly finds oneself working with graphs, relations, partitions, or sparsity structure. In addition, we may want to predict combinatorial objects such as permutations, trees or other graphs, or group structure, or to build structured, sparse estimators.

While complex structures enable much richer applications, they often come with the caveat that the associated learning and inference problems become computationally very hard. When scaling to large data, combinatorial problems also pose further challenges for compact representation, streaming, and distributed computation.

Despite discouraging theoretical worst-case results, many practically interesting problems can be much better behaved (when modeled appropriately), or are well modeled by assuming properties that make them so. Such properties are most often structural and include symmetry, exchangeability, sparsity, or submodularity. Still, not all of these structures and their ranges of application are well understood. The DISCML workshop revolves around such structures in machine learning, their applications, and their theory.

Accepted Papers

Spotlights I

Spotlights II

Spotlights III

Invited Talks

Volkan Cevher: A totally unimodular view of structured sparsity

We describe a simple framework for structured sparse recovery based on convex optimization. We show that many interesting structured sparsity models can be naturally represented by linear matrix inequalities on the support of the unknown parameters, where the constraint matrix has a totally unimodular (TU) structure. For such structured models, tight convex relaxations can be obtained in polynomial time via linear programming. Our modeling framework unifies the prevalent structured sparsity norms in the literature, introduces new interesting ones, and renders their tightness and tractability arguments transparent.

Based on:
http://infoscience.epfl.ch/record/202767?ln=en
http://infoscience.epfl.ch/record/184981?ln=en
Web: http://lions.epfl.ch/publications
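
To make the LP-relaxation claim concrete, here is a minimal sketch (our illustration, not the authors' code). It encodes a toy sliding-window sparsity model as linear inequalities on the 0/1 support indicator; the constraint matrix has consecutive ones in each row, a classic totally unimodular family, so the LP relaxation over the box [0,1]^n has an integral optimum. The weights and window size are made up.

```python
# Minimal sketch (assumptions, not the authors' code): the model "at most
# one nonzero per window of 3 consecutive indices" written as A s <= b on
# the support indicator s. A has consecutive ones in each row, hence is
# totally unimodular, so the LP relaxation over [0,1]^n is tight.
import numpy as np
from scipy.optimize import linprog

n, window = 8, 3
w = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4])  # hypothetical gains

# One row per window, with consecutive ones.
A = np.zeros((n - window + 1, n))
for i in range(n - window + 1):
    A[i, i:i + window] = 1.0
b = np.ones(n - window + 1)

# Maximize w.s over the relaxed box [0,1]^n (linprog minimizes, so negate w).
res = linprog(-w, A_ub=A, b_ub=b, bounds=[(0.0, 1.0)] * n)
print("LP optimum:", np.round(res.x, 3))      # integral, since A is TU
print("support:", np.where(res.x > 0.5)[0])   # here: indices 0, 4, 7
```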


S. Thomas McCormick: A survey of minimizing submodular functions on lattices

We usually talk about submodular functions on the subsets of a finite set, the so-called Boolean lattice, where they have many applications. Much progress has been made in the last thirty years in understanding the complexity of, and algorithms for, submodular function minimization (SFM) on the Boolean lattice.

But there are other applications where more general notions of submodularity are important. Suppose that we take the usual definition of submodularity and replace the notion of intersection and union of subsets by the "meet" and "join" in a finite lattice. For example, we could take a rectangular "box" of integer points in n-space, with meet being component-wise min, and join being component-wise max. Then we would like to generalize the results about SFM from the Boolean lattice to more general lattices.
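
As a concrete illustration of this generalized definition (our own sketch, not from the talk), the following brute-force check tests the lattice submodular inequality f(x ∧ y) + f(x ∨ y) ≤ f(x) + f(y) on an integer box, with meet and join taken componentwise; the toy functions are made up.

```python
# Brute-force check of lattice submodularity on the box {0,...,k}^n, with
# meet = componentwise min and join = componentwise max (a toy sketch).
import itertools
import math

def is_lattice_submodular(f, k, n, tol=1e-12):
    points = list(itertools.product(range(k + 1), repeat=n))
    for x in points:
        for y in points:
            meet = tuple(map(min, x, y))   # componentwise min
            join = tuple(map(max, x, y))   # componentwise max
            if f(meet) + f(join) > f(x) + f(y) + tol:
                return False
    return True

# g(sum(x)) for concave g is lattice submodular, since min + max preserves
# the componentwise sum and concavity rewards keeping the two sums close.
print(is_lattice_submodular(lambda x: math.sqrt(sum(x)), k=3, n=2))  # True

# Coordinate-wise convexity alone does not imply lattice submodularity:
print(is_lattice_submodular(lambda x: (x[0] + x[1]) ** 2, k=3, n=2))  # False
```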

There has been a lot of work on such questions in the last ten years, and this talk will survey some of this work. We start with the case where the lattice is distributive. Here Birkhoff's Theorem allows a clever reduction from SFM on a distributive lattice to ordinary SFM on the Boolean lattice. We then consider lattices that are "oriented" or "signed" generalizations of the Boolean lattice, leading to "bisubmodularity", where we do have good characterizations and algorithms. We also briefly cover recent efforts to further generalize bisubmodularity to "k-submodularity", and to product lattices of "diamonds".

Finally we come back to lattices of integer points in n-space. We demonstrate that submodularity by itself is not enough to allow for efficient minimization, even if we also assume coordinate-wise convexity. This leads to considering concepts from "discrete convexity" such as "L-natural convexity", where we do gain enough structure to allow for efficient minimization.


Yaron Singer: Adaptive seeding: a new framework for stochastic submodular optimization

In this talk we will introduce a new framework for stochastic optimization called adaptive seeding. The framework was originally designed to enable substantial improvements to influence maximization by leveraging a remarkable structural phenomenon in social networks known as the "friendship paradox" (or "your friends have more friends than you"). At a high level, adaptive seeding is the task of making choices in the present that lead to favorable outcomes in the future, and it may be of independent interest to those curious about stochastic optimization, submodular maximization, and machine learning. In the talk we will give a taste of some of the problems that arise in this rich domain, and discuss key algorithmic ideas as well as experimental results.
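
As a rough illustration of the two-stage idea (a toy sketch under our own simplifying assumptions, not the algorithm from the talk), the following compares seeding the directly accessible users against spending the budget on their randomly realized neighbors; the graph, probabilities, and coverage objective are all made up.

```python
# Toy sketch of two-stage adaptive seeding (our simplification): stage one
# reaches a few accessible users; stage two seeds among their neighbors
# that happen to show up, exploiting the friendship paradox (neighbors tend
# to be better connected). The random graph and all numbers are made up.
import random

random.seed(0)
n = 30
nbrs = {u: set() for u in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < 0.15:       # hypothetical social graph
            nbrs[u].add(v)
            nbrs[v].add(u)

def coverage(S):
    """Influence proxy: nodes in S plus all of their neighbors."""
    covered = set(S)
    for u in S:
        covered |= nbrs[u]
    return len(covered)

def greedy(candidates, k):
    """Greedily maximize the (submodular) coverage function."""
    S = set()
    for _ in range(k):
        best = max((c for c in candidates if c not in S),
                   key=lambda c: coverage(S | {c}) - coverage(S),
                   default=None)
        if best is None:
            break
        S.add(best)
    return S

accessible = random.sample(range(n), 6)  # users reachable in stage one
budget, p, trials = 4, 0.5, 200

# Baseline: spend the whole budget directly on the accessible users.
print("direct seeding:", coverage(greedy(accessible, budget)))

# Two-stage (simplified): recruit the accessible users, then seed among
# their neighbors, each of whom realizes independently with probability p.
avg = 0.0
for _ in range(trials):
    realized = {v for u in accessible for v in nbrs[u] if random.random() < p}
    avg += coverage(greedy(realized, budget)) / trials
print("two-stage seeding (avg):", round(avg, 1))
```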


Sebastian Nowozin: Trade-offs in Structured Prediction

Structured learning and prediction problems often turn out to be computationally challenging. Yet there are different sources of structure, and it pays to examine them more closely. One source is fixed, given, or physical constraints on feasible decisions, for example when an actual physical system is being controlled. Another source is the assumption that modelling the structured domain through latent variables and constraints provides statistical advantages: a small number of parameters can then describe the target domain more accurately, and by exposing this structure we can learn more effectively from fewer samples. A third source of combinatorial structure is often the loss function that is used; a structured loss invariably leads to a structured decision problem. When designing our models we are often free to select trade-offs in terms of model structure, capacity, number of parameters, and difficulty of inference. In this talk I will argue that by thinking about the above sources of combinatorial structure we can make more sensible trade-offs, and I will illustrate these using example applications from computer vision.


Dhruv Batra: Submodular meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets

Perception problems are hard and notoriously ambiguous. Robust prediction methods in Computer Vision and Natural Language Processing often search for a diverse set of high-quality candidate solutions or proposals. In structured prediction problems, this becomes a daunting task, as the solution space (image labelings, sentence parses, etc.) is exponentially large.

We study greedy algorithms for finding a diverse subset of solutions in structured-output spaces by drawing new connections between submodular functions over combinatorial item sets and High-Order Potentials (HOPs) studied for graphical models. Specifically, we show that when marginal gains of submodular diversity functions allow structured representations, this enables efficient (sub-linear time) approximate maximization by reducing the greedy augmentation step to inference in a factor graph with appropriately constructed HOPs.

Joint work with Adarsh Prasad (UT-Austin) and Stefanie Jegelka (UC Berkeley).
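
To give a flavor of the greedy scheme (a toy sketch under our own assumptions, not the paper's implementation), the following selects diverse binary labelings of a tiny chain. The greedy augmentation step, which the paper solves by HOP inference in a factor graph, is done here by brute-force enumeration, feasible only because the toy space is small.

```python
# Toy sketch of greedy diverse-subset selection over a structured item set
# (our illustration; the paper replaces the enumeration below with factor-
# graph inference using high-order potentials). Items are all 2^n binary
# labelings of a chain; the scores and the diversity term are made up.
import itertools

n = 6
unary = [(0.2, 0.8), (0.7, 0.3), (0.4, 0.6),
         (0.9, 0.1), (0.5, 0.5), (0.3, 0.7)]   # hypothetical unary scores

def quality(y):
    return sum(unary[i][yi] for i, yi in enumerate(y))

def diversity(S):
    # Submodular coverage term: distinct (position, label) pairs used by S.
    return len({(i, yi) for y in S for i, yi in enumerate(y)})

def objective(S, lam=0.5):
    return sum(quality(y) for y in S) + lam * diversity(S)

items = list(itertools.product((0, 1), repeat=n))  # 2^n structured items

S = []
for _ in range(3):
    # Greedy augmentation: argmax of the marginal gain, here by enumeration.
    best = max((y for y in items if y not in S),
               key=lambda y: objective(S + [y]) - objective(S))
    S.append(best)
    print("picked:", best)
```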



Organizers

Previous workshops


This workshop is a continuation of DISCML 2009, 2010, 2011, 2012, 2013.