001387399 000__ 03637cam\a2200529Ma\4500
001387399 001__ 1387399
001387399 003__ MaCbMITP
001387399 005__ 20240325105113.0
001387399 006__ m\\\\\o\\d\\\\\\\\
001387399 007__ cr\cn\nnnunnun
001387399 008__ 010226s1994\\\\maua\\\\ob\\\\001\0\eng\d
001387399 020__ $$a0585350531$$q(electronic bk.)
001387399 020__ $$a9780585350530$$q(electronic bk.)
001387399 020__ $$a0262276860$$q(electronic bk.)
001387399 020__ $$a9780262276863$$q(electronic bk.)
001387399 020__ $$z0262111934
001387399 020__ $$z9780262111935
001387399 035__ $$a(OCoLC)47009798$$z(OCoLC)827012857$$z(OCoLC)880334531$$z(OCoLC)970468460$$z(OCoLC)971581758$$z(OCoLC)971951737$$z(OCoLC)972096855$$z(OCoLC)1058052922
001387399 035__ $$a(OCoLC-P)47009798
001387399 040__ $$aOCoLC-P$$beng$$epn$$cOCoLC-P
001387399 050_4 $$aQ325.5$$b.K44 1994eb
001387399 072_7 $$aCOM$$x005030$$2bisacsh
001387399 072_7 $$aCOM$$x004000$$2bisacsh
001387399 08204 $$a006.3$$220
001387399 1001_ $$aKearns, Michael J.
001387399 24513 $$aAn introduction to computational learning theory /$$cMichael J. Kearns, Umesh V. Vazirani.
001387399 260__ $$aCambridge, Mass. :$$bMIT Press,$$c©1994.
001387399 300__ $$a1 online resource (xii, 207 pages) :$$billustrations
001387399 336__ $$atext$$btxt$$2rdacontent
001387399 337__ $$acomputer$$bc$$2rdamedia
001387399 338__ $$aonline resource$$bcr$$2rdacarrier
001387399 506__ $$aAccess limited to authorized users.
001387399 520__ $$aEmphasizing issues of computational efficiency, Michael Kearns and Umesh Vazirani introduce a number of central topics in computational learning theory for researchers and students in artificial intelligence, neural networks, theoretical computer science, and statistics. Computational learning theory is a new and rapidly expanding area of research that examines formal models of induction with the goals of discovering the common methods underlying efficient learning algorithms and identifying the computational impediments to learning. Each topic in the book has been chosen to elucidate a general principle, which is explored in a precise formal setting. Intuition has been emphasized in the presentation to make the material accessible to the nontheoretician while still providing precise arguments for the specialist. This balance is the result of new proofs of established theorems and new presentations of the standard proofs. The topics covered include the motivation, definitions, and fundamental results, both positive and negative, for the widely studied L. G. Valiant model of Probably Approximately Correct Learning; Occam's Razor, which formalizes a relationship between learning and data compression; the Vapnik-Chervonenkis dimension; the equivalence of weak and strong learning; efficient learning in the presence of noise by the method of statistical queries; relationships between learning and cryptography, and the resulting computational limitations on efficient learning; reducibility between learning problems; and algorithms for learning finite automata from active experimentation.
001387399 588__ $$aOCLC-licensed vendor bibliographic record.
001387399 650_0 $$aMachine learning.
001387399 650_0 $$aArtificial intelligence.
001387399 650_0 $$aAlgorithms.
001387399 650_0 $$aNeural networks (Computer science)
001387399 653__ $$aCOMPUTER SCIENCE/Machine Learning & Neural Networks
001387399 655_0 $$aElectronic books
001387399 7001_ $$aVazirani, Umesh Virkumar.
001387399 852__ $$bebk
001387399 85640 $$3MIT Press$$uhttps://univsouthin.idm.oclc.org/login?url=https://doi.org/10.7551/mitpress/3897.001.0001?locatt=mode:legacy$$zOnline Access through The MIT Press Direct
001387399 85642 $$3OCLC metadata license agreement$$uhttp://www.oclc.org/content/dam/oclc/forms/terms/vbrl-201703.pdf
001387399 909CO $$ooai:library.usi.edu:1387399$$pGLOBAL_SET
001387399 980__ $$aBIB
001387399 980__ $$aEBOOK
001387399 982__ $$aEbook
001387399 983__ $$aOnline
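The record above is a line-per-field text export of a MARC 21 bibliographic record: each line repeats the record identifier (001387399), then a three-character field tag plus two indicator positions (e.g. 24513, 650_0), then the field body, with subfields delimited by $$. Below is a minimal, hypothetical Python sketch of how such an export could be read; the parse_textmarc helper and the record.txt filename are assumptions for illustration, not part of any catalog system's API.

    # Minimal sketch for reading a line-per-field MARC text export like the one above.
    # Assumes each line is "<record id> <tag+indicators> <field body>" and that
    # subfields in variable fields are delimited by "$$". Hypothetical helper code.
    from collections import defaultdict

    def parse_textmarc(lines):
        """Map each field tag (e.g. '245') to a list of its occurrences."""
        record = defaultdict(list)
        for line in lines:
            line = line.strip()
            if not line:
                continue
            parts = line.split(" ", 2)
            if len(parts) < 3:
                continue  # record id and tag only, no field body
            _recid, tag_ind, body = parts
            tag = tag_ind[:3]  # drop the two indicator characters
            if body.startswith("$$"):
                # Variable field: keep (subfield code, value) pairs, e.g. ('a', 'Kearns, Michael J.')
                record[tag].append([(chunk[0], chunk[1:]) for chunk in body.split("$$") if chunk])
            else:
                # Leader / control field (000-008): keep the raw value
                record[tag].append(body)
        return record

    # Example use: title proper (245 $$a) and all ISBNs (020 $$a / $$z).
    with open("record.txt", encoding="utf-8") as fh:
        rec = parse_textmarc(fh)
    title = next(value for code, value in rec["245"][0] if code == "a")
    isbns = [value for field in rec.get("020", []) for code, value in field if code in ("a", "z")]
    print(title)
    print(isbns)

Run against the record above, this would yield the 245 $$a title and the six ISBNs from the repeated 020 fields; repeated subfield codes (such as the multiple $$z values in the 035 field) are preserved because occurrences are kept as lists of pairs rather than collapsed into a dictionary.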