Higher-level Application of Adaptive Dynamic Programming/Reinforcement Learning - A Next Phase for Controls and System Identification?
Sponsor
This work was supported in part by the NSF Grant no. ECS-0301022.
Published In
2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL)
Document Type
Citation
Publication Date
7-28-2011
Abstract
In previous work it was shown that Adaptive-Critic-type Approximate Dynamic Programming could be applied in a “higher-level” way to create autonomous agents capable of using experience to discern context and select optimal, context-dependent control policies. Early experiments with this approach were based on full a priori knowledge of the system being monitored. The experiments reported in this paper, using small neural networks representing families of mappings, were designed to explore what happens when knowledge of the system is less precise. Results of these experiments show that agents trained with this approach perform well even when subject to large amounts of noise or when employing (slightly) imperfect models. The results also suggest that aspects of this method of context discernment are consistent with our intuition about human learning. The insights gained from these explorations can be used to guide further efforts to develop this approach into a general methodology for solving arbitrary identification and control problems.
Rights
Copyright 2011 IEEE
Locate the Document
https://doi.org/10.1109/ADPRL.2011.5967395
DOI
10.1109/ADPRL.2011.5967395
Persistent Identifier
https://archives.pdx.edu/ds/psu/37309
Citation Details
Lendaris, G. G. (2011, April). Higher-level application of Adaptive Dynamic Programming/Reinforcement Learning - a next phase for controls and system identification? In 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL) (pp. x-xix). IEEE.