An introduction to optimal control theory : the dynamic programming approach / Onésimo Hernández-Lerma, Leonardo Ramiro Laura-Guarachi, Saúl Mendoza-Palacios, David González-Sánchez.
2023
QA402.3
Linked e-resources
Concurrent users
Unlimited
Authorized users
Authorized users
Document Delivery Supplied
Can lend chapters, not whole ebooks
Details
Title
An introduction to optimal control theory : the dynamic programming approach / Onésimo Hernández-Lerma, Leonardo Ramiro Laura-Guarachi, Saúl Mendoza-Palacios, David González-Sánchez.
ISBN
9783031211393 (electronic bk.)
3031211391 (electronic bk.)
9783031211386
3031211383
Published
Cham : Springer, 2023.
Language
English
Description
1 online resource (254 pages) : illustrations (black and white, and color).
Item Number
10.1007/978-3-031-21139-3 doi
Call Number
QA402.3
Dewey Decimal Classification
515/.642
Summary
This book introduces optimal control problems for large families of deterministic and stochastic systems with discrete or continuous time parameter. These families include most of the systems studied in many disciplines, including Economics, Engineering, Operations Research, and Management Science, among many others. The main objective is to give a concise, systematic, and reasonably self-contained presentation of some key topics in optimal control theory. To this end, most of the analyses are based on the dynamic programming (DP) technique, which is applicable to almost all control problems that appear in theory and applications, including finite- and infinite-horizon problems in which the underlying dynamic system follows either a deterministic or stochastic difference or differential equation. In the infinite-horizon case, the book also uses DP to study undiscounted problems, such as the ergodic or long-run average cost criterion. After a general introduction to control problems, the book is divided into four parts, each devoted to a different class of dynamical systems: control of discrete-time deterministic systems, discrete-time stochastic systems, ordinary differential equations, and finally general continuous-time Markov control processes (MCPs) with applications to stochastic differential equations. The first and second parts should be accessible to undergraduate students with some knowledge of elementary calculus, linear algebra, and basic concepts from probability theory (random variables, expectations, and so forth), whereas the third and fourth parts are appropriate for advanced undergraduates or graduate students with a working knowledge of mathematical analysis (derivatives, integrals, and so on) and stochastic processes.
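The DP technique described above can be illustrated by a backward recursion. The following is a minimal sketch, not drawn from the book, for a finite-horizon, discrete-time deterministic problem; the state set, action set, dynamics f, and costs c and c_T in the toy example are invented assumptions for illustration only.

# Sketch of backward dynamic programming for a finite-horizon,
# discrete-time deterministic control problem:
#   minimize  sum_{t=0}^{T-1} c(x_t, a_t) + c_T(x_T)
#   subject to  x_{t+1} = f(x_t, a_t).
# All problem data below are illustrative assumptions, not from the book.

def backward_dp(states, actions, f, c, c_T, T):
    """Return value functions V[t] and a greedy policy for each stage."""
    V = [dict() for _ in range(T + 1)]
    policy = [dict() for _ in range(T)]
    for x in states:
        V[T][x] = c_T(x)                           # terminal cost
    for t in range(T - 1, -1, -1):                 # backward induction
        for x in states:
            best_a, best_val = None, float("inf")
            for a in actions:
                # one-step cost plus cost-to-go at the next state
                val = c(x, a) + V[t + 1][f(x, a)]
                if val < best_val:
                    best_a, best_val = a, val
            V[t][x] = best_val
            policy[t][x] = best_a
    return V, policy

# Toy example: steer an integer state toward 0 with moves in {-1, 0, +1}.
if __name__ == "__main__":
    states = list(range(-3, 4))
    actions = (-1, 0, 1)
    f = lambda x, a: max(-3, min(3, x + a))        # clipped dynamics
    c = lambda x, a: x * x + abs(a)                # stage cost
    c_T = lambda x: 10 * x * x                     # terminal cost
    V, policy = backward_dp(states, actions, f, c, c_T, T=5)
    print(V[0][3], policy[0][3])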
Bibliography, etc. Note
Includes bibliographical references and index.
Access Note
Access limited to authorized users.
Source of Description
Description based on print version record.
Series
Texts in applied mathematics ; v. 76.
Table of Contents
Introduction: optimal control problems
Discrete-time deterministic systems
Discrete-time stochastic control systems
Continuous-time deterministic systems
Continuous-time Markov control processes
Controlled diffusion processes
Appendices
Bibliography
Index.