
MiniHack RL Project


In this repository, a group of four students explores various methods for training an agent to successfully complete tasks in the MiniHack environment. The chosen tasks escalate in difficulty, and therefore in processing and training time: the agent first learns to move around the map without dying, and then attempts to complete levels.

Two methods were explored: a value-based method (DQN) and a policy-gradient method (PPO); both are model-free. The two methods live in separate folders, and training and playback must each be run manually.
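For concreteness, here is a minimal NumPy sketch (not the repository's code) contrasting the core quantity each method optimises: DQN's one-step TD target versus PPO's clipped surrogate objective. Function names and default values are illustrative.

```python
import numpy as np

def dqn_td_target(reward, next_q_values, gamma=0.99, done=False):
    """One-step TD target used by DQN: r + gamma * max_a' Q(s', a')."""
    return reward + (0.0 if done else gamma * np.max(next_q_values))

def ppo_clipped_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate: min(r * A, clip(r, 1-eps, 1+eps) * A)."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1 - eps, 1 + eps) * advantage)
```

DQN regresses Q-values toward the TD target, while PPO limits how far the new policy's probability ratio can move the objective in a single update.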

Get Started

Please see the requirements.txt file for the Python libraries required to run this project, or alternatively create a conda environment using environment.yml.
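The setup above can be sketched as the following commands (the conda environment name is defined inside environment.yml, so `minihack-rl` here is only an assumed placeholder):

```shell
# Option 1: install the pinned libraries with pip
pip install -r requirements.txt

# Option 2: create and activate a conda environment from environment.yml
# (replace "minihack-rl" with the name defined in the file)
conda env create -f environment.yml
conda activate minihack-rl
```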

Note: MiniHack is not supported on Windows. If you only have access to Windows, set up a Docker container running a Linux OS, since installation will be much easier there.
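One way to do this, sketched below under the assumption that Docker Desktop is installed, is to mount the repository into a Linux container and install the dependencies inside it (the image tag is illustrative):

```shell
# Start an interactive Linux container with the repo mounted at /work
docker run -it --rm -v "$PWD":/work -w /work python:3.10-slim bash

# Then, inside the container:
#   pip install -r requirements.txt
```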

To run the DQN agent:

  • To train the agent: python train.py
  • To watch the agent in action: python play.py
  • The DQN agent iterates through a list of environment names and a matching list of environment action spaces (env_names and env_action_spaces). Choose the environments to train on, in order.
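The pairing of env_names with env_action_spaces can be sketched as follows; the environment IDs and action counts below are illustrative, not the repository's exact lists:

```python
# Hypothetical curriculum: each env name is paired with its action count.
env_names = [
    "MiniHack-Room-5x5-v0",
    "MiniHack-Room-15x15-v0",
    "MiniHack-LavaCross-Full-v0",
]
env_action_spaces = [4, 4, 8]  # assumed number of discrete actions per env

curriculum = list(zip(env_names, env_action_spaces))
for name, n_actions in curriculum:
    # In the real training script, gym.make(name) would be called here;
    # it is omitted so this sketch runs without MiniHack installed.
    print(f"train on {name} with {n_actions} actions")
```

Iterating over the zipped lists trains the agent on each environment in order, which is what makes the escalating-difficulty curriculum work.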

To run the PPO agent:

  • To train the agent: python Multi_room_PPO_training.py
  • To watch the agent in action: python video.py

Videos

DQN 5x5 Room

DQN 15x15 Room

DQN Lava Crossing

PPO 5x5 Room

PPO 15x15 Room

PPO Lava Crossing
