
header image

Welcome to the AWS-JPL Open Source Rover Challenge repository.

Here you will find everything you need to begin the challenge.

The main sections of this document are:

  1. [What is the challenge?](#what-is-the-challenge)

  2. [What are the rules to the challenge?](#what-are-the-rules)

  3. [Getting Started](#getting-started)

  4. [Asset manifest and descriptions](#asset-manifest-and-descriptions)

  5. [Help and support](#help-and-support)

## What is the Challenge?

The “AWS / JPL Open-Source Rover Challenge” will be held online, starting on Monday, December 2, 2019 and ending on Friday, February 21, 2020. It is sponsored by Amazon Web Services, Inc. (“The Sponsor” or “AWS”) and held in collaboration with JPL and AngelHack LLC (“Administrator”).

## What are the rules?

Simply put - you must train an RL agent to successfully navigate the Rover to the checkpoint on Mars.

The image below shows the NASA-JPL Open Source Rover (on the left) and your digital version of the Rover (on the right):

osr

To win the challenge, your RL agent must navigate the Rover to the checkpoint **with the highest score**.

The mission is complete when the Rover reaches the checkpoint without collisions. The score is then computed as follows:

+ Begin with 10,000 basis points
+ Subtract the number of steps required to reach the checkpoint
+ Subtract the distance travelled to reach the checkpoint (in meters)
+ Subtract the average linear acceleration (in m/s^2) of the Rover
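
As a concrete illustration, here is a minimal sketch of that arithmetic. The function and argument names are illustrative, not part of the challenge code:

```python
def final_score(steps, distance_traveled_m, avg_linear_accel):
    """Illustrative only: start from 10,000 basis points, then subtract
    steps, distance traveled (meters), and average linear acceleration
    (m/s^2), per the scoring rules above."""
    return 10000 - steps - distance_traveled_m - avg_linear_accel
```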

## Getting Started

While familiarity with ROS and Gazebo is not required for this challenge, you will be required to submit your entry in the form of an AWS RoboMaker simulation job. All of the Martian world environment variables and Rover sensor data are captured for you and made available via global Python variables. At a minimum, you must populate the function known as `reward_function()`. The challenge ships with several examples of how to populate the reward function, but no level of accuracy or performance is guaranteed.

If you wish to learn more about how the Rover interacts with its environment, you can look at the "Training Grounds" world that also ships with this repo. It is a very basic world with monolith-type structures that the Rover must learn to navigate around. You are free to edit this world to learn more about how the Rover maneuvers. Submissions to the challenge must use an unedited Rover and an unedited Martian world.

## Asset manifest and descriptions

Project Structure: There are three primary components of the solution:

    + A ROS package describing the Open Source Rover - this package is NOT editable
    + A ROS/Gazebo package that describes and runs the simulated world
    + A Python3 module that contains a custom OpenAI Gym environment as well as wrapper code to initiate an rl_coach training session. Within this module is a dedicated reward function, `reward_function()`, that you must populate.


These three components work together to allow the Rover to navigate the Martian surface and send observation <-> reward tuples
back to the RL agent, which then uses a TensorFlow algorithm to learn how to optimize actions.
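
To make that loop concrete, here is a minimal sketch of a custom OpenAI Gym environment showing where observations and rewards flow. The class name, spaces, and shapes are illustrative assumptions; the real environment ships in mars_env.py:

```python
import gym
import numpy as np
from gym import spaces

class SketchMarsEnv(gym.Env):
    """Toy stand-in for the shipped environment, for illustration only."""

    def __init__(self):
        # Toy spaces; the real Rover exposes richer sensor observations.
        self.action_space = spaces.Discrete(4)  # e.g. drive/turn commands
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(3,), dtype=np.float32)

    def reset(self):
        # Return the first observation of a new episode.
        return np.zeros(3, dtype=np.float32)

    def step(self, action):
        # In the real environment, the observation comes from the Rover's
        # sensors and (reward, done) come from your reward_function().
        observation = np.zeros(3, dtype=np.float32)
        reward, done = 0.0, False
        return observation, reward, done, {}
```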

Custom Gym Environment: This Gym environment exists as a single Python file at src/rl-agent/environments/mars_env.py.

mars_env.py is where you will create your reward function. There is already a class method for you called:
`def reward_function(self)`

While you are free to add your own code to this method, you cannot change the signature of the method or change the return types.

The method must return a boolean value indicating whether the episode has ended (see more about episode-ending events below).
The method must also return a reward value for that time step.

If you believe they are warranted, you are free to add additional global variables in the environment. However, keep in mind
that if they are episodic values (values that should be reset after each episode), you will need to reset those values within the
reward_function method once you have determined the episode should end.
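
Here is a minimal sketch of a conforming method, assuming a hypothetical episodic global named `episode_steps`. The return order shown is illustrative; keep whatever signature and return types mars_env.py ships with:

```python
episode_steps = 0  # hypothetical episodic global

def reward_function(self):
    global episode_steps
    episode_steps += 1

    done = episode_steps >= 500  # placeholder episode-ending condition
    reward = 1.0                 # placeholder per-step reward

    if done:
        # Reset episodic values here, once the episode is over.
        episode_steps = 0

    return done, reward
```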

Recommended episode-ending scenarios: There are two scenarios that should automatically end an episode:

1. The Rover collides with an object
2. The Rover's power supply is drained

You are free to build in additional episode-ending scenarios of your own.

Reward Function: the following data is available to create custom reward functions:

+ Episode steps
+ Current distance to checkpoint
+ Distance traveled
+ Collision threshold
+ Current location
+ Checkpoint location
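
Putting the pieces together, here is a hedged sketch of a shaped reward that uses the data above and ends the episode on the two recommended scenarios. Every attribute name below (`self.closest_object_distance`, `self.collision_threshold`, `self.power_remaining`, `self.current_distance_to_checkpoint`, `self.last_distance_to_checkpoint`) is an assumption for illustration; substitute the actual variables exposed by mars_env.py:

```python
def reward_function(self):
    # All attribute names are assumed for illustration; the real names
    # are defined in mars_env.py.
    done = False
    reward = 0.0

    if self.closest_object_distance < self.collision_threshold:
        # Recommended scenario 1: the Rover collided with an object.
        done = True
        reward = -1.0
    elif self.power_remaining <= 0:
        # Recommended scenario 2: the power supply is drained.
        done = True
    else:
        # Reward progress made toward the checkpoint this step.
        reward = (self.last_distance_to_checkpoint
                  - self.current_distance_to_checkpoint)
        if self.current_distance_to_checkpoint < 1.0:
            done = True      # checkpoint reached
            reward += 100.0  # arbitrary completion bonus

    self.last_distance_to_checkpoint = self.current_distance_to_checkpoint
    return done, reward
```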

## Help and Support

Slack channel:

Email:
