Higher-level Application of Adaptive Dynamic Programming/Reinforcement Learning - A Next Phase for Controls and System Identification?
This work was supported in part by the NSF Grant no. ECS-0301022.
2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL)
In previous work it was shown that Adaptive-Critic-type Approximate Dynamic Programming could be applied in a "higher-level" way to create autonomous agents capable of using experience to discern context and select optimal, context-dependent control policies. Early experiments with this approach were based on full a priori knowledge of the system being monitored. The experiments reported in this paper, using small neural networks representing families of mappings, were designed to explore what happens when knowledge of the system is less precise. Results of these experiments show that agents trained with this approach perform well even when subjected to large amounts of noise or when employing (slightly) imperfect models. The results also suggest that aspects of this method of context discernment are consistent with our intuition about human learning. The insights gained from these explorations can be used to guide further efforts for developing this approach into a general methodology for solving arbitrary identification and control problems.
Copyright 2011 IEEE
Lendaris, G. G. (2011, April). Higher-level application of Adaptive Dynamic Programming/Reinforcement Learning - a next phase for controls and system identification? In 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL) (pp. x-xix). IEEE.