AI Flappy Bird Game Solved using Deep Q-Learning and Double Deep Q-Learning

BhanuPrakashPebbeti/AI-Flappy-Bird


DQN and Double-DQN

  • The Flappy Bird game is used as a reference to build the environment.
  • Unnecessary graphics, such as wing-flapping animations, are removed to make rendering and training faster.
  • The background is replaced with solid black, which simplifies the input frames and helps the model converge faster.
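As an illustration of the simplifications above, a DQN-style frame pipeline typically reduces each rendered frame to a small grayscale array. The 84×84 target size and the nearest-neighbour resize below are common DQN conventions, not details taken from this repository:

```python
import numpy as np

def preprocess(frame, size=(84, 84)):
    """Reduce an RGB frame to a small normalised grayscale array."""
    gray = frame.mean(axis=2)                  # drop colour: RGB -> grayscale
    h, w = gray.shape
    th, tw = size
    rows = np.arange(th) * h // th             # nearest-neighbour downsample,
    cols = np.arange(tw) * w // tw             # a placeholder for a real resize
    small = gray[rows][:, cols]
    return (small / 255.0).astype(np.float32)  # scale pixel values to [0, 1]

# A dummy 400x300 RGB frame with an all-black background:
frame = np.zeros((400, 300, 3), dtype=np.uint8)
state = preprocess(frame)
print(state.shape)  # (84, 84)
```

A black background means most of each processed frame is exactly zero, so the network only has to attend to the bird and the pipes.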

Deep Q Learning

A core difference between Deep Q-Learning and vanilla Q-Learning is how the Q-function is represented. Deep Q-Learning replaces the Q-table with a neural network: rather than looking up a Q-value for each state-action pair, the network takes a state as input and outputs a Q-value for every possible action.
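This difference can be sketched with a tiny stand-in network: the state vector goes in once, and one Q-value per action (do nothing / flap) comes out. The state features, layer sizes, and random weights below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, HIDDEN, N_ACTIONS = 4, 16, 2  # e.g. bird y, velocity, pipe dx, pipe dy

# A randomly initialised two-layer network standing in for the Q-table.
W1 = rng.normal(0, 0.1, (STATE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(state):
    h = np.maximum(0.0, state @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2                    # one Q-value per action

state = np.array([0.5, -0.1, 0.3, 0.0])
q = q_values(state)
print(q.shape)              # (2,): Q(s, no-op) and Q(s, flap)
action = int(np.argmax(q))  # greedy action
```

A single forward pass replaces a table lookup, which is what lets Q-learning scale to state spaces far too large to enumerate.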

Deep Q-Learning Pseudocode
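A minimal sketch of the two ingredients the pseudocode revolves around, an ε-greedy action choice and the Bellman target for the network update, with made-up reward and Q-values:

```python
import random
import numpy as np

GAMMA = 0.99  # discount factor (illustrative hyperparameter)

def epsilon_greedy(q_values, epsilon):
    # With probability epsilon take a random action (explore),
    # otherwise take the greedy action (exploit).
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return int(np.argmax(q_values))

def td_target(reward, next_q_values, done, gamma=GAMMA):
    # Bellman target: y = r if terminal, else y = r + gamma * max_a' Q(s', a').
    if done:
        return reward
    return reward + gamma * float(np.max(next_q_values))

q = np.array([0.2, 0.8])                 # Q-values for the next state (made up)
a = epsilon_greedy(q, epsilon=0.0)       # epsilon=0 -> purely greedy
y = td_target(reward=1.0, next_q_values=q, done=False)
print(a, y)
```

In full DQN the network is then regressed toward `y` on mini-batches drawn from a replay buffer, which is omitted here for brevity.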

Double Deep Q-Learning

The implementation of Double Q-Learning with deep neural networks is called Double Deep Q-Network (Double DQN). Inspired by Double Q-Learning, Double DQN uses two neural networks: the online Deep Q-Network (DQN), which selects the best next action, and the Target Network, which evaluates it. Decoupling selection from evaluation reduces the overestimation of Q-values that plain DQN suffers from.
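The change from plain DQN can be sketched as follows: DQN lets the target network both select and evaluate the next action, while Double DQN lets the online network select and the target network evaluate. The Q-value arrays below are made up to show the effect:

```python
import numpy as np

GAMMA = 0.99

def dqn_target(r, target_q_next, done, gamma=GAMMA):
    # Plain DQN: the target network both selects and evaluates the action.
    return r if done else r + gamma * float(np.max(target_q_next))

def double_dqn_target(r, online_q_next, target_q_next, done, gamma=GAMMA):
    # Double DQN: the online network selects the action, the target
    # network evaluates it, reducing the upward bias of max-based targets.
    if done:
        return r
    a_star = int(np.argmax(online_q_next))
    return r + gamma * float(target_q_next[a_star])

online_q = np.array([0.9, 0.5])  # online net (over)estimates action 0
target_q = np.array([0.3, 0.6])  # target net's more conservative estimates
y_dqn = dqn_target(1.0, target_q, False)                    # uses max(target_q)
y_ddqn = double_dqn_target(1.0, online_q, target_q, False)  # uses target_q[0]
print(y_dqn, y_ddqn)
```

When the two networks disagree, the Double DQN target is never larger than the DQN target, which is exactly the overestimation reduction the method is designed for.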

Reward statistics while training the Deep Q-Network

Flappy Bird

flappy_bird_gif
