A Retrospective on Adaptive Dynamic Programming for Control
Sponsor
This work was partially supported by the U.S. National Science Foundation Grant ECS-0301022.
Published In
Proceedings of the International Joint Conference on Neural Networks
ISBN
9781424435531
Document Type
Citation
Publication Date
11-18-2009
Abstract
Some three decades ago, certain computational intelligence methods of reinforcement learning were recognized as implementing an approximation of Bellman's Dynamic Programming method, which is known in the controls community as an important tool for designing optimal control policies for nonlinear plants and sequential decision making. Significant theoretical and practical developments have occurred within this arena, mostly in the past decade, with the methodology now usually referred to as Adaptive Dynamic Programming (ADP). The objective of this paper is to provide a retrospective of selected threads of such developments. In addition, a commentary is offered concerning the present status of ADP, and threads for future research and development within the controls field are suggested. © 2009 IEEE.
Rights
©2009 IEEE
Locate the Document
https://doi.org/10.1109/IJCNN.2009.5178716
DOI
10.1109/IJCNN.2009.5178716
Persistent Identifier
https://archives.pdx.edu/ds/psu/37266
Citation Details
Lendaris, G. G. (2009, June). A retrospective on adaptive dynamic programming for control. In 2009 International Joint Conference on Neural Networks (pp. 1750-1757). IEEE.