by Yuxin Chen, Andreas Krause
Abstract:
Active learning can lead to a dramatic reduction in labeling effort. However, in many practical implementations (such as crowdsourcing, surveys, and high-throughput experimental design), it is preferable to query labels for batches of examples to be labeled in parallel. While several heuristics have been proposed for batch-mode active learning, little is known about their theoretical performance. We consider batch-mode active learning and more general information-parallel stochastic optimization problems that exhibit adaptive submodularity, a natural diminishing-returns condition. We prove that for such problems, a simple greedy strategy is competitive with the optimal batch-mode policy. In some cases, surprisingly, the use of batches incurs competitively low cost, even when compared to a fully sequential strategy. We demonstrate the effectiveness of our approach on batch-mode active learning tasks, where it outperforms the state of the art, as well as on the novel problem of multi-stage influence maximization in social networks.
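To illustrate the kind of strategy the abstract refers to, here is a minimal sketch (not the paper's algorithm verbatim) of greedy batch selection for a submodular set function: each step adds the candidate with the largest marginal gain. The `utility` function and the coverage example below are hypothetical stand-ins for an application-specific objective.

```python
def greedy_batch(candidates, utility, batch_size):
    """Greedily build a batch by repeatedly adding the candidate with
    the largest marginal gain under `utility` (assumed submodular)."""
    batch = []
    remaining = list(candidates)
    for _ in range(min(batch_size, len(remaining))):
        best = max(remaining, key=lambda x: utility(batch + [x]) - utility(batch))
        batch.append(best)
        remaining.remove(best)
    return batch

# Toy example: a coverage utility (submodular) -- the value of a batch
# is the number of distinct elements its members cover.
coverage = {1: {"a", "b", "c"}, 2: {"d", "e"}, 3: {"e"}}
utility = lambda s: len(set().union(*(coverage[i] for i in s))) if s else 0
print(greedy_batch([1, 2, 3], utility, 2))  # [1, 2]
```

For submodular objectives, this greedy rule is the standard way to obtain near-optimality guarantees of the flavor the paper proves for the batch-mode setting.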
Reference:
Near-optimal Batch Mode Active Learning and Adaptive Submodular Optimization. Y. Chen, A. Krause. In International Conference on Machine Learning (ICML), 2013.
Bibtex Entry:
@inproceedings{chen13near,
	author = {Yuxin Chen and Andreas Krause},
	booktitle = {International Conference on Machine Learning (ICML)},
	month = {June},
	title = {Near-optimal Batch Mode Active Learning and Adaptive Submodular Optimization},
	year = {2013}}