by A. Badanidiyuru, B. Mirzasoleiman, A. Karbasi, A. Krause
Abstract:
How can one summarize a massive data set "on the fly", i.e., without even having seen it in its entirety? In this paper, we address the problem of extracting representative elements from a large stream of data. That is, we would like to select a subset of, say, k data points from the stream that are most representative according to some objective function. Many natural notions of "representativeness" satisfy submodularity, an intuitive notion of diminishing returns. Thus, such problems can be reduced to maximizing a submodular set function subject to a cardinality constraint. Classical approaches to submodular maximization require full access to the data set. We develop the first efficient streaming algorithm with a constant-factor (1/2) approximation guarantee to the optimum solution, requiring only a single pass through the data and memory independent of the data size. In our experiments, we extensively evaluate the effectiveness of our approach on several applications, including training large-scale kernel methods and exemplar-based clustering, on millions of data points. We observe that our streaming method, while achieving practically the same utility value, runs about 100 times faster than existing approaches.
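To make the setting concrete, here is a minimal sketch (not the paper's exact algorithm) of threshold-based single-pass selection for a monotone submodular objective under a cardinality constraint. It assumes an estimate `opt_est` of the optimal value is available; the function names and the set-coverage objective are hypothetical illustrations.

```python
def coverage(selected):
    """Monotone submodular objective: number of distinct items covered."""
    covered = set()
    for s in selected:
        covered |= s
    return len(covered)

def stream_select(stream, k, opt_est):
    """Single pass over `stream`: keep element e only if its marginal
    gain clears the threshold (opt_est/2 - f(S)) / (k - |S|).
    Memory usage is O(k), independent of the stream length."""
    S = []
    for e in stream:
        if len(S) >= k:
            break
        gain = coverage(S + [e]) - coverage(S)
        if gain >= (opt_est / 2 - coverage(S)) / (k - len(S)):
            S.append(e)
    return S

# Each stream element is a set of items it covers; the stream is seen once.
stream = [{1, 2}, {2, 3}, {4}, {1, 2, 3, 4, 5}, {6, 7}]
summary = stream_select(stream, k=2, opt_est=7)
print(coverage(summary))  # utility of the selected summary
```

With a good estimate of the optimum, this kind of thresholding rule guarantees the selected set achieves at least half the optimal value; the paper removes the need to know the optimum in advance.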
Reference:
A. Badanidiyuru, B. Mirzasoleiman, A. Karbasi, A. Krause. Streaming Submodular Optimization: Massive Data Summarization on the Fly. In Proc. ACM Conference on Knowledge Discovery in Databases (KDD), 2014.
Bibtex Entry:
@inproceedings{badanidiyuru14streaming,
	Author = {Ashwinkumar Badanidiyuru and Baharan Mirzasoleiman and Amin Karbasi and Andreas Krause},
	Booktitle = {Proc. ACM Conference on Knowledge Discovery in Databases (KDD)},
	Month = {August},
	Title = {Streaming Submodular Optimization: Massive Data Summarization on the Fly},
	Year = {2014}}