Videos

The workshop talks were recorded and are available at videolectures.net.

Schedule

7.30-7.40 Opening remarks
7.40-8.15 Tamir Hazan: Mixing sum-product and max-product type updates to tighten tree-reweighted upper bounds for the log-partition function
8.15-9.05 Satoru Iwata: Submodular Function Minimization
9.05-9.20 BREAK (Posters)
9.20-10.10 Jan Vondrak: Multilinear relaxation: a tool for maximization of submodular functions
10.10-10.40 Spotlight presentations
10.40-15.10 BREAK (Posters)
15.10-15.35 Stefanie Jegelka: Online submodular minimization with combinatorial constraints
15.35-16.10 Yuri Boykov: Energy Minimization with Label costs and Applications in Multi-Model Fitting
16.10-16.45 Michael Collins: Dual decomposition for inference in natural language processing
16.45-17.00 BREAK (Posters)
17.00-17.35 Joachim Buhmann: Information Theoretic Model Validation by Approximate Optimization
17.35-18.10 Carlos Guestrin: Taming Information Overload
18.10-18.35 Daniel Golovin: Adaptive Submodularity: A New Approach to Active Learning and Stochastic Optimization


Overview

Solving optimization problems with discrete solutions is becoming increasingly important in machine learning: At the core of statistical machine learning is the task of inferring conclusions from data, and when the variables underlying the data are discrete, both inferring the model from data and performing predictions with the estimated model are discrete optimization problems. Many of the resulting optimization problems are NP-hard, and standard off-the-shelf optimization procedures typically become intractable as the problem size increases.

Fortunately, most discrete optimization problems that arise in machine learning have specific structure, which can be leveraged in order to develop tractable exact or approximate optimization procedures. For example, consider the case of a discrete graphical model over a set of random variables. For the task of prediction, a key structural object is the "marginal polytope," a convex bounded set characterized by the underlying graph of the graphical model. Properties of this polytope, as well as its approximations, have been successfully used to develop efficient algorithms for inference. For the task of model selection, a key structural object is the discrete graph itself. Another problem structure is sparsity: While estimating a high-dimensional model for regression from a limited amount of data is typically an ill-posed problem, it becomes solvable if it is known that many of the coefficients are zero. Another problem structure, submodularity, a discrete analog of convexity, has been shown to arise in many machine learning problems, including structure learning of probabilistic models, variable selection and clustering. One of the primary goals of this workshop is to investigate how to leverage such structures.
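As an illustration of how such structure can be exploited, the sketch below (purely illustrative, not taken from any of the talks) runs the classical greedy algorithm for maximizing a monotone submodular function, here a set-coverage objective, under a cardinality constraint; the greedy rule is known to achieve a (1 - 1/e)-approximation (Nemhauser, Wolsey & Fisher, 1978). The coverage objective and the small ground set are made-up examples.

```python
def coverage(selected, sets):
    """Submodular objective: number of elements covered by the chosen sets."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy_max(sets, k):
    """Pick k sets, each time adding the one with the largest marginal gain."""
    chosen = []
    for _ in range(k):
        gains = {i: coverage(chosen + [i], sets) - coverage(chosen, sets)
                 for i in range(len(sets)) if i not in chosen}
        chosen.append(max(gains, key=gains.get))
    return chosen

# Illustrative ground set: five "sensors" covering regions 0..9
sets = [{0, 1, 2}, {2, 3, 4, 5}, {5, 6}, {6, 7, 8, 9}, {1, 4, 8}]
print(greedy_max(sets, 2))   # [1, 3] -> covers 8 of the 10 regions
```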

There are two major classes of approaches to solving such discrete optimization problems in machine learning: Combinatorial algorithms and continuous relaxations. In the first, the discrete optimization problems are solved directly in the discrete constraint space of the variables. Typically these take the form of search-based procedures, where the discrete structure is exploited to limit the search space. In the second, the discrete problems are transformed into continuous, often tractable convex problems by relaxing the integrality constraints; the fractional solutions are then "rounded" back to the discrete domain. Another goal of this workshop is to bring researchers from these two communities together in order to discuss (a) the tradeoffs and respective benefits of the existing approaches, and (b) the problem structures suited to each approach. For instance, submodular problems can be solved tractably using combinatorial algorithms; similarly, in certain cases, continuous relaxations yield discrete solutions that are either exact or whose objective value is within a multiplicative factor of the optimum.
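As a minimal sketch of the relax-and-round paradigm (again illustrative, not an algorithm from the workshop program), the following solves the linear programming relaxation of minimum vertex cover with scipy and rounds every coordinate of value at least 1/2 up to 1. The rounded set covers every edge and costs at most twice the LP optimum, hence at most twice the true optimum. The example graph is made up.

```python
import numpy as np
from scipy.optimize import linprog

def lp_vertex_cover(n_vertices, edges):
    # minimize sum_v x_v  s.t.  x_u + x_v >= 1 for every edge, 0 <= x <= 1
    c = np.ones(n_vertices)
    A_ub = np.zeros((len(edges), n_vertices))
    for row, (u, v) in enumerate(edges):
        A_ub[row, u] = A_ub[row, v] = -1.0         # -(x_u + x_v) <= -1
    b_ub = -np.ones(len(edges))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n_vertices)
    x = res.x                                       # fractional LP solution
    return [v for v in range(n_vertices) if x[v] >= 0.5]   # rounding step

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]            # small example graph
print(lp_vertex_cover(4, edges))                    # prints a valid cover, e.g. [1, 2]
```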

Finally, an algorithm proves its value on the problems it solves. In addition to studying discrete structures and algorithms, the workshop will therefore put particular emphasis on novel applications of discrete optimization in machine learning; interesting discrete algorithms often arise from specific applications. The workshop will bring together researchers from the application side, machine learning, and discrete optimization to foster an exchange of ideas and techniques and to identify relevant open problems on both sides. It is vital for progress in machine learning to make discrete mathematicians aware of the problems arising in machine learning, and machine learning researchers aware of recent developments in discrete optimization.

Format

Broadly, this one-day workshop aims to explore current challenges in discrete optimization for machine learning. These topics will be covered in tutorials and invited talks. In addition, a poster session with spotlight presentations will provide a platform for presenting new contributions.

Organizers

Other related workshops


This workshop is a continuation of DISCML '09. This year's workshop has a particular focus on discrete structures and efficient algorithms, with an emphasis on their application in machine learning. The workshop is also related to the workshop series Optimization in Machine Learning.