A custom Gym environment for use with OpenAI-style algorithms, showing how you can create more test environments for your own agents. Once your environment follows the OpenAI Gym format, it is easy to switch between different algorithms.
- To use Stable Baselines you will need Python 3.6 or 3.7.
- Then install TensorFlow 1.15.0 (the GPU build if you have a GPU available, otherwise the CPU build):

```shell
pip install tensorflow==1.15      # CPU
pip install tensorflow-gpu==1.15  # GPU
```
- Install OpenAI Gym and stable-baselines:

```shell
pip install gym
pip install stable-baselines[mpi]
```
- Install the rest of the dependencies listed in requirements.txt.
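Since the TensorFlow 1.15 / stable-baselines combination only works on Python 3.6 or 3.7, it can help to check the interpreter version before installing anything. A minimal sketch (not part of this repo):

```python
import sys

# Stable Baselines with TensorFlow 1.15 requires Python 3.6 or 3.7;
# warn early instead of failing deep inside a pip install.
supported = sys.version_info[:2] in ((3, 6), (3, 7))
print("Python %d.%d - supported: %s" % (sys.version_info[0], sys.version_info[1], supported))
```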
- The inputs to the neural network (observations) are the game's pixels, including frames from the past. The shape is (252, 84, 1).
- This allows our network to see which way enemies are moving, so it can avoid collisions and shoot to score points.
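The (252, 84, 1) shape suggests three 84x84 grayscale frames stacked vertically; the frame size and stack count here are assumptions inferred from that shape, not confirmed details of the repo. A NumPy sketch of the idea:

```python
import numpy as np

def stack_frames(frames):
    """frames: list of three (84, 84) grayscale arrays, oldest first.

    Stacking past frames lets the network infer motion direction
    from a single observation.
    """
    stacked = np.vstack(frames)      # -> (252, 84)
    return stacked[..., np.newaxis]  # -> (252, 84, 1), channel-last

frames = [np.zeros((84, 84), dtype=np.uint8) for _ in range(3)]
obs = stack_frames(frames)
print(obs.shape)  # (252, 84, 1)
```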
- The environment provides a reward so the network can learn from its actions.
- The reward I chose is fairly simple: at the start of each frame the reward is -0.001. This means that if the AI just stays alive and never shoots enemies, it ends up with a low reward score.
- If an enemy is hit, the reward is increased by 1.
- If you touch an enemy, the game is over and your reward is -2.
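The reward rules above can be sketched as a small helper function. Whether the -2 collision reward replaces or adds to the per-frame reward is an assumption here (I treat -2 as the final reward for that frame):

```python
def frame_reward(enemies_hit, collided):
    """Hypothetical per-frame reward matching the scheme above."""
    if collided:
        return -2.0               # collision ends the episode
    reward = -0.001               # small living penalty each frame
    reward += 1.0 * enemies_hit   # +1 for each enemy destroyed
    return reward
```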
Breakdown of the files in this GitHub repo
- All the pre-trained models I use in the tutorial video are stored in the Trained-Models directory.
- The game source code is all stored inside the src directory.
- This will save backups of your model as you are training it, in case the program crashes.
- This is the actual environment file that is used to create our project.
- This must follow the OpenAI Gym format.
- More Info (stable-baselines)
- Examples of how to train your environment with different algorithms.
- Examples of how to run your trained models.
- There is also an example of using a random agent, to get a baseline against which to compare your trained models.
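To make the random-agent baseline concrete, here is a minimal, hypothetical environment that mimics the classic Gym API shape (`reset()` returns an observation; `step(action)` returns `obs, reward, done, info`) without importing gym itself, plus the standard random-rollout loop. None of these names or numbers come from this repo:

```python
import random

import numpy as np

class ToyShooterEnv:
    """Hypothetical stand-in following the Gym API shape."""

    N_ACTIONS = 4  # e.g. left, right, shoot, no-op (assumed)

    def reset(self):
        self.steps = 0
        return np.zeros((252, 84, 1), dtype=np.uint8)

    def step(self, action):
        self.steps += 1
        reward = -0.001           # living penalty, as in the reward scheme above
        done = self.steps >= 100  # arbitrary episode cap for the sketch
        obs = np.zeros((252, 84, 1), dtype=np.uint8)
        return obs, reward, done, {}

# Random-agent baseline: act uniformly at random and total up the reward.
env = ToyShooterEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    action = random.randrange(ToyShooterEnv.N_ACTIONS)
    obs, reward, done, info = env.step(action)
    total += reward
print("random-agent episode reward:", round(total, 3))
```

The same loop works unchanged against any real Gym-format environment, which is exactly why the format makes comparing algorithms easy.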
Reach out to me at one of the following places!
- YouTube Clarity Coders
- Chat with me! Discord