In recent years, a fundamental problem structure has emerged as very useful in a variety of machine learning applications: Submodularity is an intuitive diminishing returns property, stating that adding an element to a smaller set helps more than adding it to a larger set. Similarly to convexity, submodularity allows one to efficiently find provably (near-) optimal solutions for large problems. We present SFO, a toolbox for use in MATLAB or Octave that implements algorithms for minimization and maximization of submodular functions. A tutorial script illustrates the application of submodularity to machine learning and AI problems such as feature selection, clustering, inference and optimized information gathering.
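The diminishing-returns property in the abstract can be stated formally: for a ground set V and f : 2^V → R, f is submodular if f(A ∪ {e}) − f(A) ≥ f(B ∪ {e}) − f(B) whenever A ⊆ B ⊆ V and e ∉ B. As a quick illustration (a standalone Python sketch, not part of the SFO toolbox itself; the function and variable names here are hypothetical), the set-cover function is submodular, and we can observe the marginal gains shrink as the base set grows:

```python
# Illustration of diminishing returns on a coverage function,
# which is a classic example of a submodular set function.
# (Hypothetical example code; SFO itself is a MATLAB/Octave toolbox.)

def coverage(sets, selected):
    """f(S) = number of ground-set elements covered by the chosen sets."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

# Three candidate sets over the ground set {a, b, c, d, e}.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}

A = frozenset({1})       # smaller selection
B = frozenset({1, 2})    # superset of A
e = 3                    # element not yet selected

# Marginal gain of adding e to the smaller vs. the larger selection.
gain_A = coverage(sets, A | {e}) - coverage(sets, A)  # 5 - 2 = 3
gain_B = coverage(sets, B | {e}) - coverage(sets, B)  # 5 - 3 = 2

# Submodularity: the gain on the smaller set is at least as large.
assert gain_A >= gain_B
```

This is exactly the structure the toolbox's maximization routines exploit: because marginal gains can only shrink, greedy-style algorithms come with provable near-optimality guarantees.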
SFO: A Toolbox for Submodular Function Optimization. A. Krause. In Journal of Machine Learning Research (JMLR), volume 11, 2010.
Bibtex Entry:
@article{krause10sfo,
	Author = {Andreas Krause},
	Journal = {Journal of Machine Learning Research (JMLR)},
	Pages = {1141-1144},
	Title = {SFO: A Toolbox for Submodular Function Optimization},
	Volume = {11},
	Year = {2010}}