A wide range of AI problems, such as sensor placement, active learning, and network influence maximization, require sequentially selecting elements from a large set with the goal of optimizing the utility of the selected subset. Moreover, each element that is picked may provide stochastic feedback, which can be used to make smarter decisions about future selections. Finding efficient policies for this general class of adaptive optimization problems can be extremely hard. However, when the objective function is adaptive monotone and adaptive submodular, a simple greedy policy attains a 1-1/e approximation ratio in terms of expected utility. Unfortunately, many practical objective functions are naturally non-monotone; to our knowledge, no existing policy has provable performance guarantees when the assumption of adaptive monotonicity is lifted. We propose the adaptive random greedy policy for maximizing adaptive submodular functions, and prove that it retains the aforementioned 1-1/e approximation ratio for functions that are also adaptive monotone, while it additionally provides a 1/e approximation ratio for non-monotone adaptive submodular functions. We showcase the benefits of adaptivity on three real-world network data sets using two non-monotone functions, representative of two classes of commonly encountered non-monotone objectives.
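To make the idea concrete, below is a minimal, non-adaptive sketch of the random greedy scheme that the paper builds on (Buchbinder et al.-style): at each step, rank the remaining elements by marginal gain and pick one of the top-k uniformly at random. This is an illustrative assumption, not the paper's algorithm; the adaptive random greedy policy of the paper replaces marginal gains with *expected* marginal gains conditioned on the stochastic feedback observed so far, and pads the candidate set with dummy elements so that low-gain picks can be skipped. The `coverage` objective and the set names are hypothetical examples for demonstration only.

```python
import random

def random_greedy(ground_set, f, k, seed=0):
    """Sketch of random greedy for submodular maximization under a
    cardinality constraint k: at each of k rounds, rank remaining
    elements by marginal gain and pick one of the top k uniformly
    at random. (Hedged sketch; the paper's adaptive version uses
    conditional expected gains and dummy elements instead.)"""
    rng = random.Random(seed)
    selected = []
    for _ in range(k):
        remaining = [e for e in ground_set if e not in selected]
        if not remaining:
            break
        # Rank remaining elements by marginal gain f(S + e) - f(S).
        ranked = sorted(remaining,
                        key=lambda e: f(selected + [e]) - f(selected),
                        reverse=True)
        # Sample uniformly among the top-k candidates.
        choice = rng.choice(ranked[:k])
        # Skip picks with non-positive gain (stands in for the
        # dummy-element trick used in the full algorithm).
        if f(selected + [choice]) - f(selected) > 0:
            selected.append(choice)
    return selected

# Hypothetical toy objective: set coverage (monotone submodular),
# just to exercise the sketch.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 5}}
def coverage(S):
    return len(set().union(*(sets[e] for e in S))) if S else 0

sel = random_greedy(sorted(sets), coverage, 2)
```

For non-monotone objectives (e.g., coverage minus a cost term, or graph cuts), the same loop applies unchanged; only the gain computation differs.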
Non-monotone Adaptive Submodular Maximization
A. Gotovos, A. Karbasi, A. Krause
In International Joint Conference on Artificial Intelligence (IJCAI), 2015
Bibtex Entry:
@inproceedings{gotovos2015nonmonotone,
	Author = {Alkis Gotovos and Amin Karbasi and Andreas Krause},
	Booktitle = {International Joint Conference on Artificial Intelligence (IJCAI)},
	Title = {Non-monotone Adaptive Submodular Maximization},
	Year = {2015}}