Q-Learning Space Invaders using the Arcade Learning Environment (ALE)
The following papers were used as reference for the implementation:
- Multi-agent reinforcement learning: independent versus cooperative agents
The state is the laser cannon's X coordinate. The screen is 160 pixels wide, but invisible walls bound the playing field to roughly X = 38 through X = 120, so only ~82 of the 160 states are visited in practice, with some outliers. With 4 actions, the Q-table has 160 x 4 entries.
The full ALE Space Invaders action space has 6 actions; the combined RIGHT-FIRE (4) and LEFT-FIRE (5) actions were removed, reducing it to the following 4:

| Value | Meaning |
|---|---|
| 0 | NOOP |
| 1 | FIRE |
| 2 | RIGHT |
| 3 | LEFT |
The Atari ROM `space_invaders.bin` is required for proper operation; it was obtained from Atari Mania.
Run the following commands in the root directory of the repository to compile all executables. The project uses the CMake build system, generating Makefiles by default.

```
cmake ./
make
```

The primary executable is `qsi`, the multi-agent hedonic simulation environment.
The programs are implemented using GNU Argp; each provides a `--help` menu describing the arguments it accepts and which are required or optional.
Credits and thanks for resources referenced and used in this repository, including some code and/or project structure, go to the following:
- Introducing Q-Learning
- Mastering Atari Games with Q-Learning
- Approximate Q-Learning With Atari Game: A first Approach To Reinforcement Learning
- Space Invaders challenge: a Reinforcement Learning competition
- Object Detection in 2D Video Games Using the cv2 Match Function in Python
- Template Matching in OpenCV