Reinforcement learning has typically been the model of choice for game playing (Go, Atari games, etc.) because such problems map naturally onto the standard reinforcement learning framework: an agent is situated in an environment and interacts with it in an action-observation-reward loop. Two approaches have been taken to implementing the environment in such models: (1) logical representations of the game environment built from manually engineered features, as in early chess engines and other board-game programs, or (2) the more general pixel-to-control approach, in which the model receives only raw pixel data and must learn its own internal representation of the game, as first demonstrated in DeepMind's deep Q-learning paper on Atari games.

The purpose of the Unity project is to build a game-playing agent for complex environments that closely model or simulate the nuances of the real world, in particular first-person shooter and strategy games. The main thesis is that the pixel-to-control approach, in which the model is forced to learn everything end to end, is not feasible for environments of this complexity. Instead, a large part of the project will involve developing a good logical representation of a game, given the full source code of the game engine, that is amenable to deep RL. The hope is that, with full access to the game's code, a combination of environment features can be used to help the agent learn to play the game at human-level performance. This will be followed by the implementation of the RL agent itself, and may potentially explore new ideas in RL to achieve strong performance.
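The action-observation-reward loop described above can be sketched in a few lines. The code below is a minimal illustrative example, not part of the project: `ToyGridEnv` is a hypothetical stand-in for a real game environment, exposing the Gym-style `reset`/`step` interface that both logical-representation and pixel-to-control setups ultimately reduce to.

```python
class ToyGridEnv:
    """Hypothetical toy environment: the agent moves along a 1-D track
    of length 5; reaching the rightmost cell yields reward 1 and ends
    the episode."""

    def __init__(self, length=5):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):
        # action: 0 = stay, 1 = move right
        self.pos = min(self.pos + action, self.length - 1)
        done = self.pos == self.length - 1
        reward = 1.0 if done else 0.0
        return self.pos, reward, done  # observation, reward, done flag


def run_episode(env, policy):
    """One episode of the standard agent-environment interaction loop:
    observe, act, receive reward, repeat until the episode terminates."""
    obs = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = policy(obs)          # agent chooses an action
        obs, reward, done = env.step(action)  # environment responds
        total_reward += reward
    return total_reward


ret = run_episode(ToyGridEnv(), policy=lambda obs: 1)  # always move right
print(ret)  # → 1.0
```

In a logical-representation setup, `obs` would be a vector of engineered game features; in a pixel-to-control setup, it would be the raw frame buffer.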