Episodic Memory Deep Q-Networks (EMDQN) for Atari Games

ShAw7ock/emdqn_torch

Episodic Memory DQN by PyTorch


This repo is a simple test implementation of "Episodic Memory Deep Q-Networks" by Zichuan Lin, Tianqi Zhao, Guangwen Yang, and Lintao Zhang. Their original EMDQN source code, written in TensorFlow, can be found here.

This code is a simplified version using PyTorch.
Users can modify the code to suit their own testing environments (DISCRETE ACTION SPACES only).
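The paper's core idea is to regularize the standard TD loss toward the best returns recalled from an episodic memory. A minimal framework-agnostic sketch of that combined loss (function and parameter names here are illustrative, not the repo's actual API; `lam` stands in for the memory-weight coefficient from the paper):

```python
import numpy as np

def emdqn_loss(q_sa, td_target, memory_target, lam=0.1):
    """EMDQN loss sketch: TD error plus an episodic-memory regularizer.

    q_sa:          Q(s, a) predicted by the network
    td_target:     r + gamma * max_a' Q_target(s', a')
    memory_target: H(s, a), the best return recalled from episodic memory
    lam:           weight of the memory term (hypothetical default)
    """
    td_loss = (q_sa - td_target) ** 2
    mem_loss = (q_sa - memory_target) ** 2
    return float(np.mean(td_loss + lam * mem_loss))
```

With `lam=0` this reduces to the ordinary DQN squared TD error; the memory term pulls Q-values toward the highest returns observed so far.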

Episodic Memory DQN

Requirements:

  • Python >= 3.6.0
  • PyTorch == 1.7.0 (other versions may also work)
  • OpenAI Gym[Atari]
  • Scikit-Learn == 1.0.2 (other versions may also work)

NOTE:

  • To run this code, cd into the root directory and run: python main.py --env PongNoFrameskip-v4
  • The core update logic for the EMDQN algorithm: ./core/emdqn.py
  • The episodic memory using LRU_KNN: ./utils/lru_knn.py
  • The off-policy replay buffer: ./components/replay_buffer.py
  • The networks, including the base Q-network and the Dueling Q-network: ./components/networks.py
  • The hyper-parameters can be modified in ./components/arguments.py; see the original paper for details.
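The episodic memory in ./utils/lru_knn.py stores state embeddings with their best observed returns and answers queries by k-nearest-neighbour lookup, evicting least-recently-used entries when full. A minimal sketch of that idea using scikit-learn's KDTree (class and method names here are illustrative, not the repo's actual API):

```python
import numpy as np
from sklearn.neighbors import KDTree

class EpisodicMemory:
    """Sketch of an LRU k-NN episodic memory (illustrative, not lru_knn.py's API)."""

    def __init__(self, capacity=10000, k=4):
        self.capacity, self.k = capacity, k
        self.keys, self.values, self.lru = [], [], []
        self.tree, self.time = None, 0.0

    def add(self, state_embedding, ret):
        """Store a state embedding with its observed return."""
        self.time += 1
        if len(self.keys) >= self.capacity:
            # evict the least-recently-used entry
            idx = int(np.argmin(self.lru))
            self.keys.pop(idx); self.values.pop(idx); self.lru.pop(idx)
        self.keys.append(np.asarray(state_embedding, dtype=np.float64))
        self.values.append(float(ret))
        self.lru.append(self.time)
        self.tree = KDTree(np.stack(self.keys))  # rebuild the index

    def lookup(self, state_embedding):
        """Return the mean stored value of the k nearest stored states."""
        k = min(self.k, len(self.keys))
        _, idx = self.tree.query(np.asarray(state_embedding)[None], k=k)
        for i in idx[0]:
            self.time += 1
            self.lru[i] = self.time  # mark neighbours as recently used
        return float(np.mean([self.values[i] for i in idx[0]]))
```

Rebuilding the KDTree on every insert keeps the sketch short; a practical implementation would batch rebuilds or use an incremental index.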

TODO LIST:

  • DQN, DuelingDQN, EMDQN
  • Save and load pre-trained models.
  • CUDA support.
  • Adapt to Atari games.
  • Modify the prioritized replay buffer.
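The off-policy replay buffer in ./components/replay_buffer.py stores transitions and samples minibatches uniformly for training. A minimal stdlib-only sketch of that pattern (names are illustrative, not the repo's actual API):

```python
import random
from collections import deque

class ReplayBuffer:
    """Sketch of a uniform off-policy replay buffer (not the repo's API)."""

    def __init__(self, capacity=100000):
        # deque with maxlen silently drops the oldest transitions when full
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform sampling without replacement
        batch = random.sample(self.buffer, batch_size)
        # transpose list-of-transitions into (states, actions, rewards, next_states, dones)
        return tuple(zip(*batch))

    def __len__(self):
        return len(self.buffer)
```

The prioritized variant on the TODO list would replace the uniform `random.sample` with sampling proportional to TD error, typically via a sum-tree.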
