DQN

Just a pleasure-seeking, fun-loving, Atari-playing Deep Reinforcement Learning agent.

This project is an implementation of DeepMind's DQN algorithm and its associated bag of tricks, built on TensorFlow and the OpenAI Gym. It currently includes the vanilla DQN and Double DQN algorithms; most notably, prioritized experience replay is not yet implemented.
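
The two variants differ only in how the bootstrap action for the Bellman target is chosen. As an illustrative NumPy sketch (the function and argument names below are assumptions for exposition, not code from this repository):

```python
import numpy as np

def dqn_targets(rewards, dones, gamma, q_target_next):
    # Standard DQN: bootstrap from the target network's own maximum Q-value.
    return rewards + gamma * (1.0 - dones) * q_target_next.max(axis=1)

def double_dqn_targets(rewards, dones, gamma, q_online_next, q_target_next):
    # Double DQN: the online network selects the action, the target network
    # evaluates it, which reduces the overestimation bias of the plain max.
    best_actions = q_online_next.argmax(axis=1)
    batch = np.arange(len(best_actions))
    return rewards + gamma * (1.0 - dones) * q_target_next[batch, best_actions]
```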

Requirements

Python 3 or later

TensorFlow 1.0 or later

Getting Started

Training

To train an agent (on Breakout by default):

> python dqn/train.py --name [name_for_this_run]

All summaries, videos, and checkpoints will go to the results directory.

Demos

You can record videos with a trained model by running:

> python dqn/demo.py

Configuration

To customize a training or demo run (for example to use a different game), change the available settings in dqn/config.py.
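
For orientation, here is a purely hypothetical sketch of the kind of settings such a config file typically exposes; the names and values below are assumptions, not the actual contents of dqn/config.py:

```python
# Hypothetical example only -- check dqn/config.py for the real setting
# names and default values.
env_name = 'BreakoutDeterministic-v3'  # Gym environment to train and demo on
discount_factor = 0.99                 # gamma in the Bellman target
replay_memory_size = 1000000           # transitions kept in the replay buffer
target_update_frequency = 10000        # steps between target network syncs
double_dqn = True                      # use the Double DQN target
```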

Analysis

Since every run has a name, TensorBoard summaries are automatically written to a corresponding subdirectory under results/stats. Algorithmic variations can then be compared with graphical overlays in TensorBoard:

> tensorboard --logdir=results/stats

Sample Stats

Running vanilla DQN on the OpenAI Gym environment BreakoutDeterministic-v3:

[Plot: Average Score]
