TY  - GEN
AB  - Regression and classification methods based on similarity of the input to stored examples have not been widely used in applications involving very large sets of high-dimensional data. Recent advances in computational geometry and machine learning, however, may alleviate the problems in using these methods on large data sets. This volume presents theoretical and practical discussions of nearest-neighbor (NN) methods in machine learning and examines computer vision as an application domain in which the benefit of these advanced methods is often dramatic. It brings together contributions from researchers in theory of computation, machine learning, and computer vision with the goals of bridging the gaps between disciplines and presenting state-of-the-art methods for emerging applications. The contributors focus on the importance of designing algorithms for NN search, and for the related classification, regression, and retrieval tasks, that remain efficient even as the number of points or the dimensionality of the data grows very large. The book begins with two theoretical chapters on computational geometry and then explores ways to make the NN approach practicable in machine learning applications where the dimensionality of the data and the size of the data sets make the naive methods for NN search prohibitively expensive. The final chapters describe successful applications of an NN algorithm, locality-sensitive hashing (LSH), to vision tasks.
AU  - Shakhnarovich, Gregory
AU  - Darrell, Trevor
AU  - Indyk, Piotr
CN  - QA278.2
CY  - Cambridge, Mass.
DA  - 2005
ID  - 1385552
KW  - Nearest neighbor analysis (Statistics)
KW  - Machine learning
KW  - Algorithms
KW  - Geometry
KW  - COMPUTER SCIENCE/Machine Learning & Neural Networks
LK  - https://univsouthin.idm.oclc.org/login?url=https://doi.org/10.7551/mitpress/4908.001.0001?locatt=mode:legacy
LK  - http://www.oclc.org/content/dam/oclc/forms/terms/vbrl-201703.pdf
N1  - "... held in Whistler, British Columbia ... annual conference on Neural Information Processing Systems (NIPS) in December 2003"--Preface.
N2  - Regression and classification methods based on similarity of the input to stored examples have not been widely used in applications involving very large sets of high-dimensional data. Recent advances in computational geometry and machine learning, however, may alleviate the problems in using these methods on large data sets. This volume presents theoretical and practical discussions of nearest-neighbor (NN) methods in machine learning and examines computer vision as an application domain in which the benefit of these advanced methods is often dramatic. It brings together contributions from researchers in theory of computation, machine learning, and computer vision with the goals of bridging the gaps between disciplines and presenting state-of-the-art methods for emerging applications. The contributors focus on the importance of designing algorithms for NN search, and for the related classification, regression, and retrieval tasks, that remain efficient even as the number of points or the dimensionality of the data grows very large. The book begins with two theoretical chapters on computational geometry and then explores ways to make the NN approach practicable in machine learning applications where the dimensionality of the data and the size of the data sets make the naive methods for NN search prohibitively expensive. The final chapters describe successful applications of an NN algorithm, locality-sensitive hashing (LSH), to vision tasks.
PB  - MIT Press
PP  - Cambridge, Mass.
PY  - 2005
SN  - 9780262256957
SN  - 0262256959
T1  - Nearest-neighbor methods in learning and vision: theory and practice
TI  - Nearest-neighbor methods in learning and vision: theory and practice
UR  - https://univsouthin.idm.oclc.org/login?url=https://doi.org/10.7551/mitpress/4908.001.0001?locatt=mode:legacy
UR  - http://www.oclc.org/content/dam/oclc/forms/terms/vbrl-201703.pdf
ER  - 