by Oren Anava and Kfir Y. Levy
Abstract:
The weighted k-nearest neighbors algorithm is one of the most fundamental non-parametric methods in pattern recognition and machine learning. The question of setting the optimal number of neighbors as well as the optimal weights has received much attention throughout the years; nevertheless, this problem seems to have remained unsettled. In this paper we offer a simple approach to locally weighted regression/classification, in which we make the bias-variance tradeoff explicit. Our formulation enables us to phrase a notion of optimal weights, and to find these weights, as well as the optimal number of neighbors, efficiently and adaptively for each data point whose value we wish to estimate. The applicability of our approach is demonstrated on several datasets, showing superior performance over standard locally weighted methods.
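The paper's exact weight optimization is not reproduced here; the following is a minimal Python sketch of the general idea the abstract describes: choosing the number of neighbors separately for each query point by an explicit bias-variance style criterion. It assumes Euclidean distances and uniform weights over the selected neighbors; the function name adaptive_knn_predict and the trade-off parameter beta are illustrative, not taken from the paper.

import numpy as np

def adaptive_knn_predict(X_train, y_train, x_query, beta=1.0, k_max=20):
    """Pick k per query by trading a bias proxy against a variance proxy.

    Bias proxy:     average distance of the selected neighbors (closer neighbors -> lower bias).
    Variance proxy: ||alpha||_2 = 1/sqrt(k) for uniform weights (fewer neighbors -> higher variance).
    The query-specific k minimizing beta * bias_proxy + variance_proxy is used for prediction.
    """
    dists = np.linalg.norm(X_train - x_query, axis=1)
    order = np.argsort(dists)
    k_max = min(k_max, len(order))

    best_k, best_score = 1, np.inf
    for k in range(1, k_max + 1):
        bias_proxy = dists[order[:k]].mean()
        variance_proxy = 1.0 / np.sqrt(k)
        score = beta * bias_proxy + variance_proxy
        if score < best_score:
            best_k, best_score = k, score

    # Predict with the locally chosen neighborhood (uniform weights in this sketch).
    return y_train[order[:best_k]].mean(), best_k

# Toy usage: regression on noisy 1-D data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
pred, k_used = adaptive_knn_predict(X, y, np.array([3.0]), beta=2.0)
print(f"prediction={pred:.3f}, neighbors used={k_used}")

In the paper the weights themselves are also optimized per query, rather than held uniform as in this simplified illustration.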
Reference:
O. Anava and K. Y. Levy. k*-Nearest Neighbors: From Global to Local. In Proc. Neural Information Processing Systems (NeurIPS), 2016.
Bibtex Entry:
@inproceedings{anava2016k,
	author = {Anava, Oren and Levy, Kfir Y},
	booktitle = {Proc. Neural Information Processing Systems (NeurIPS)},
	month = {December},
	title = {k*-Nearest Neighbors: From Global to Local},
	video = {https://www.youtube.com/watch?v=cafZRFZMl1k},
	year = {2016}}