The idea is to create a deep policy network that is intelligent enough to generalize to most games in OpenAI's Gym.
- To run this code, first install OpenAI's Gym: https://github.com/openai/gym
- Download this repo and run `python run_cartpole.py` to run the agent (or any other game in this repo, like `python run_lunarlander.py`) and see it improve over time.
- To run a Box2D game like LunarLander, you also have to install the Box2D physics engine from inside your Gym checkout with `pip install -e '.[box2d]'` (a quick sanity check is sketched right after this list).
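If the installation worked, a random rollout should run without errors. Below is a minimal sanity-check sketch, assuming the classic Gym API where `env.step` returns four values; it is not one of this repo's scripts:

```python
import gym

# One episode with random actions, just to verify the install.
env = gym.make('LunarLander-v2')
observation = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()                  # pick a random action
    observation, reward, done, info = env.step(action)  # advance one frame
    total_reward += reward
print('Episode finished with total reward:', total_reward)
```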
Reward for moving from the top of the screen to the landing pad with zero speed is about 100 to 140 points. If the lander moves away from the landing pad, it loses reward. The episode finishes if the lander crashes or comes to rest, receiving an additional -100 or +100 points respectively. Each leg's ground contact is +10 points. Firing the main engine costs -0.3 points per frame. Solved is 200 points. Landing outside the landing pad is possible, and fuel is infinite.
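To see how those numbers can add up to the 200-point bar, here is a back-of-the-envelope tally for a hypothetical clean landing (every figure below is illustrative, not measured):

```python
# Hypothetical clean landing, using the reward spec above.
descent = 120         # top of screen to pad with zero speed: ~100..140
legs    = 2 * 10      # +10 per leg ground contact
rest    = 100         # episode ends with the lander at rest
engine  = -0.3 * 100  # main engine fired for ~100 frames (assumed)
print(descent + legs + rest + engine)  # 210.0 -> clears the 200-point bar
```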
Initially, the agent is as good as randomly picking the next action:
After several hundred episodes, the agent starts learning how to fly and hover around:
Finally, after about 3,000 episodes, the agent can land pretty well:
A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center.
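For reference, this is what the environment's interface looks like (a sketch, assuming the classic Gym API):

```python
import gym

env = gym.make('CartPole-v0')
print(env.observation_space)  # Box(4,): cart position/velocity, pole angle/velocity
print(env.action_space)       # Discrete(2): push the cart left (0) or right (1)
```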
Initially, the agent is quite dumb, but it's exploring the state/action/reward space:
As more episodes go by, it starts to get better by learning from experience, using a reward-guided loss (sketched at the end of this section):
Eventually, the agent masters the game (trained on my MacBook Pro for ~10 minutes):
After 297 episodes the agent scored 617,332!
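For the curious, here is roughly what a reward-guided loss boils down to in a REINFORCE-style policy gradient method. This is a minimal sketch under that assumption, with illustrative function names, not necessarily this repo's exact implementation:

```python
import numpy as np

def discount_rewards(rewards, gamma=0.99):
    """Turn per-step rewards into discounted returns, so actions taken
    early in an episode get credit for the rewards that follow them."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    # Normalizing the returns is a common stabilization trick (assumed here).
    return (returns - returns.mean()) / (returns.std() + 1e-8)

def reward_guided_loss(action_log_probs, rewards):
    """REINFORCE-style loss: the log-probability of each action taken,
    weighted by the discounted return that followed it. Minimizing this
    makes actions that led to high reward more likely."""
    return -np.sum(action_log_probs * discount_rewards(rewards))
```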