Learnability in optimality theory / Bruce Tesar and Paul Smolensky.
2000
P158.42 .T47 2000eb
Details
Title
Learnability in optimality theory / Bruce Tesar and Paul Smolensky.
Author
Tesar, Bruce.
ISBN
0585354677 (electronic bk.)
9780585354675 (electronic bk.)
9780262284790 (electronic bk.)
0262284790 (electronic bk.)
0262201267
9780262201261
Publication Details
Cambridge, Mass. : MIT Press, ©2000.
Copyright
©2000
Language
English
Description
1 online resource (vi, 140 pages) : illustrations
Call Number
P158.42 .T47 2000eb
Dewey Decimal Classification
401/.93
Summary
Highlighting the close relationship between linguistic explanation and learnability, Bruce Tesar and Paul Smolensky examine the implications of Optimality Theory (OT) for language learnability. They show how the core principles of OT lead to the learning principle of Constraint Demotion, the basis for a family of algorithms that infer constraint rankings from linguistic forms. Of primary concern to the authors are the ambiguity of the data received by the learner and the resulting interdependence of the core grammar and the structural analysis of overt linguistic forms. The authors argue that iterative approaches to interdependencies, inspired by work in statistical learning theory, can be successfully adapted to address the interdependencies of language learning. Both OT and Constraint Demotion play critical roles in this adaptation. The authors support their findings both formally and through simulations. They also illustrate how their approach could be extended to other language learning issues, including subset relations and the learning of phonological underlying forms.
Access Note
Access limited to authorized users.
Source of Description
OCLC-licensed vendor bibliographic record.
Added Author
Smolensky, Paul.