Integration of Agent Models and Meta Reinforcement Learning (Meta-RL) Algorithms for Car Racing Experiment

Presentation Type

Oral Presentation

Start Date

5-8-2024 1:00 PM

End Date

5-8-2024 3:00 PM

Subjects

Computer Science, Reinforcement Learning

Advisor

Dr. Banafsheh Rekabdar

Student Level

Masters

Abstract

Introduction: Achieving optimal performance in 2D racing games presents unique challenges, requiring adaptive strategies and advanced learning algorithms. This research explores the integration of sophisticated agent models with Meta Reinforcement Learning (Meta-RL) techniques, specifically Model-Agnostic Meta-Learning (MAML) and Proximal Policy Optimization (PPO), to enhance decision-making and adaptability within these simulated environments. We hypothesize that this innovative approach will lead to marked improvements in game performance and learning efficiency.

Methods: In our experimental setup, we applied MAML for its rapid adaptation capabilities and PPO for optimizing the agents' policy decisions within a 2D racing game simulator. The objective was to reduce lap times and improve the agents' ability to adapt to new tracks and environmental conditions. Performance metrics were recorded across varied track designs, with particular attention to learning speed and adaptability.
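
For concreteness, the sketch below shows the general structure this method implies: a MAML outer loop wrapped around a PPO-style clipped-surrogate inner update. It is a minimal illustration in PyTorch, not the study's actual implementation; the Policy network, the hyperparameters, and the collect_rollout callback (which would interface with the 2D racing simulator and return observations, actions, old log-probabilities, and advantage estimates) are assumptions made for the example.

import torch
import torch.nn as nn
from torch.distributions import Categorical
from torch.func import functional_call  # requires PyTorch 2.x

class Policy(nn.Module):
    """Small discrete-action policy over a flattened track observation (illustrative)."""
    def __init__(self, obs_dim=32, n_actions=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                 nn.Linear(64, n_actions))
    def forward(self, obs):
        return self.net(obs)  # action logits

def ppo_loss(logits, actions, old_log_probs, advantages, clip_eps=0.2):
    # PPO clipped surrogate objective (Schulman et al., 2017).
    new_log_probs = Categorical(logits=logits).log_prob(actions)
    ratio = torch.exp(new_log_probs - old_log_probs)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()

def maml_meta_step(policy, tracks, meta_opt, collect_rollout, inner_lr=0.1):
    """One MAML meta-update: adapt to each sampled track with a single PPO
    gradient step, then backpropagate the post-adaptation loss through that
    step so the shared initialization becomes easy to fine-tune."""
    params = dict(policy.named_parameters())
    meta_loss = 0.0
    for track in tracks:
        # collect_rollout is a hypothetical helper: it runs the simulator with the
        # given weights and returns (obs, actions, old_log_probs, advantages)
        # as detached tensors.
        obs, act, logp, adv = collect_rollout(params, track)
        inner = ppo_loss(functional_call(policy, params, (obs,)), act, logp, adv)
        grads = torch.autograd.grad(inner, list(params.values()), create_graph=True)
        fast = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
        # Evaluate the adapted ("fast") weights on fresh data from the same track.
        obs_q, act_q, logp_q, adv_q = collect_rollout(fast, track)
        meta_loss = meta_loss + ppo_loss(
            functional_call(policy, fast, (obs_q,)), act_q, logp_q, adv_q)
    meta_opt.zero_grad()
    (meta_loss / len(tracks)).backward()
    meta_opt.step()

In this sketch the meta-optimizer could simply be torch.optim.Adam(policy.parameters()); each sampled track plays the role of a MAML task, and at evaluation time the same one-step adaptation would be run on an unseen track to measure how quickly lap times improve.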

Results: The integrated Meta-RL and agent model approach yielded a significant reduction in lap times relative to baseline models. Furthermore, agents demonstrated a noticeable enhancement in adaptability to new track configurations and environmental challenges. The models also showed unexpected proficiency in navigating dynamically changing obstacles, underscoring the potential of Meta-RL in complex, rapidly evolving game scenarios.

Conclusion: This study underscores the effectiveness of combining Meta-RL algorithms with agent models to advance the state of AI in 2D racing games. The observed improvements in lap times and adaptability support our hypothesis and suggest broader applications for Meta-RL techniques in other areas of gaming and simulation. Future research will extend these methods to more complex 3D environments and examine their potential for real-world applications requiring rapid decision-making and adaptability.

Creative Commons License or Rights Statement

Creative Commons Attribution 4.0 License
This work is licensed under a Creative Commons Attribution 4.0 License.

