Controller Design via Adaptive Critic and Model Reference Methods

Published In

Proceedings of the International Joint Conference on Neural Networks

Document Type

Citation

Publication Date

9-25-2003

Abstract

Dynamic Programming (DP) is a principled way to design optimal controllers for certain classes of nonlinear systems; unfortunately, DP is computationally very expensive. The Reinforcement Learning methods known as Adaptive Critics (AC) provide computationally feasible means for performing approximate Dynamic Programming (ADP). The term 'adaptive' in AC refers to the critic's improving estimates of the Value Function used by DP. To apply DP, the user must craft a Utility function that embodies all the problem-specific design specifications/criteria. Model Reference Adaptive Control methods have been successfully used in the control community to effect on-line redesign of a controller in response to variations in plant parameters, with the idea that the resulting closed-loop system dynamics will mimic those of a Reference Model. The work reported here 1) uses a reference model in ADP as the key information input to the Utility function, and 2) uses ADP off-line to design the desired controller. Future work will extend this to on-line application. This method is demonstrated for a hypersonic-shaped airplane called LoFLYTE®; its handling characteristics are natively a little "hotter" than a pilot would desire. A control augmentation subsystem is designed using ADP to make the plane "feel like" a better-behaved one, as specified by a Reference Model. The number of inputs to the successfully designed controller is among the largest seen in the literature to date.
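As a minimal sketch of the central idea (the quadratic Utility, the function names, and the discount factor below are illustrative assumptions; the paper does not publish its implementation), the Reference Model supplies the target state that defines the Utility, and the critic's Value Function estimate is trained toward the resulting Bellman-style target:

    import numpy as np

    def utility(x, x_ref, Q):
        # Utility = negative quadratic tracking error against the Reference
        # Model's state x_ref; Q is a hypothetical positive-definite weighting
        # matrix encoding the design criteria.
        e = x - x_ref
        return -e @ Q @ e

    def critic_target(x, x_ref, Q, J_next, gamma=0.95):
        # Bellman-style target for the Adaptive Critic's Value Function
        # estimate: J(t) = U(t) + gamma * J(t+1).
        return utility(x, x_ref, Q) + gamma * J_next

Under this reading, the controller (actor) would then be tuned off-line to maximize the critic's Value Function estimate, matching the paper's off-line controller design step.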

Locate the Document

https://doi.org/10.1109/IJCNN.2003.1224080

DOI

10.1109/IJCNN.2003.1224080

Persistent Identifier

https://archives.pdx.edu/ds/psu/37279
