Replication of Multi-Agent Reinforcement Learning for "Hide & Seek" Problem / Muhammad Haider Kamal

By: Kamal, Muhammad Haider
Contributor(s): Supervisor: Dr. Muaz Ahmed Khan Niazi
Material type: Text
Publisher: Rawalpindi: MCS, NUST, 2023
Description: viii, 79 p.
Subject(s): MSCSE / MSSE-27 | MSCSE / MSSE
DDC classification: 005.1,KAM
Contents:
Reinforcement learning generates policies based on reward functions and hyperparameters; slight changes in either can significantly affect results. The lack of documentation and reproducibility in reinforcement learning research makes it difficult to replicate previously discovered strategies. While earlier work has identified strategies based on grounded maneuvers, there is limited work in more complex environments. The agents in this study are simulated similarly to OpenAI's hide-and-seek agents, with the addition of a flying mechanism that enhances their mobility and expands their range of possible actions and strategies. This added functionality reduces the training required for agents to develop the chasing strategy from approximately 2 million to 1.6 million steps, and the hiders' shelter strategy from approximately 25 million to 2.3 million steps, while using a smaller batch size of 3,072 instead of 64,000. We also discuss the importance of reward function design and its deployment in a curriculum-based environment to encourage agents to learn basic skills, along with the challenges of replicating these reinforcement learning strategies. We demonstrate that the results of the reinforcement learning agents can be replicated in a more complex environment and that similar strategies evolve, including "running and chasing" and "fort building".
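The abstract stresses that reward function design drives which strategies emerge. The thesis does not publish its reward code here, so the following is only an illustrative sketch of how a shaped hide-and-seek reward with a preparation (grace) phase might look; the distance terms, weights, and function names are assumptions, not the thesis's actual values.

```python
import math

# Hypothetical shaped rewards for a hide-and-seek agent (illustration only).
# Weights and the tanh distance shaping are assumed, not taken from the thesis.

def hider_reward(seeker_dist: float, is_seen: bool, grace_period: bool) -> float:
    """Hider: penalized when seen, mildly rewarded for keeping distance."""
    if grace_period:              # preparation phase: no reward signal yet
        return 0.0
    if is_seen:
        return -1.0               # caught in a seeker's line of sight
    # small shaping bonus that saturates with distance from the nearest seeker
    return 0.1 * math.tanh(seeker_dist / 10.0)

def seeker_reward(hider_dist: float, sees_hider: bool, grace_period: bool) -> float:
    """Seeker: the mirror image of the hider's reward."""
    if grace_period:
        return 0.0
    if sees_hider:
        return 1.0
    return -0.1 * math.tanh(hider_dist / 10.0)
```

Zeroing the signal during the grace period is one common way to let hiders build shelters before seekers are scored, which matches the curriculum-style setup the abstract describes.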
Item type: Thesis
Home library: Military College of Signals (MCS)
Shelving location: Thesis
Call number: 005.1,KAM
Status: Available
Barcode: MCSTCS-544
Total holds: 0

