It's a collection of multi-agent environments based on OpenAI gym. You can also use minimal-marl to warm-start training of agents.
- Setup (important):

  ```bash
  pip install 'pip<24.1'
  pip install 'setuptools<=66'
  pip install 'wheel<=0.38.4'
  ```
- Install package:
  - Using PyPI:

    ```bash
    pip install ma-gym
    ```

  - Directly from source (recommended):

    ```bash
    git clone https://github.com/koulanurag/ma-gym.git
    cd ma-gym
    pip install -e .
    ```
Please use this bibtex if you would like to cite it:

```bibtex
@misc{magym,
  author = {Koul, Anurag},
  title = {ma-gym: Collection of multi-agent environments based on OpenAI gym.},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/koulanurag/ma-gym}},
}
```
```python
import gym

env = gym.make('ma_gym:Switch2-v0')
done_n = [False for _ in range(env.n_agents)]
ep_reward = 0

obs_n = env.reset()
while not all(done_n):
    env.render()
    obs_n, reward_n, done_n, info = env.step(env.action_space.sample())
    ep_reward += sum(reward_n)
env.close()
```
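Each of `obs_n`, `reward_n`, and `done_n` in the loop above is a per-agent list, so episode bookkeeping reduces to plain list aggregation. A minimal sketch with hypothetical reward values (no environment required):

```python
# Hypothetical per-agent step outputs, in the list form ma-gym returns
reward_n = [1.0, -0.5]   # one reward per agent
done_n = [False, True]   # one done flag per agent

step_reward = sum(reward_n)   # team reward for this step
episode_over = all(done_n)    # episode ends once every agent is done
```

Summing `reward_n` treats the agents as a single team; for independent learners you would instead keep each agent's reward separate.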
Please refer to the Wiki for complete usage details.
- Checkers
- Combat
- PredatorPrey
- Pong Duel (two-player pong game)
- Switch
- Lumberjacks
- TrafficJunction
Note: OpenAI's environments can be accessed in multi-agent form by prefixing the environment ID with "ma_", e.g. ma_CartPole-v0.
This returns an instance of CartPole-v0 in a "multi-agent wrapper" having a single agent.
These environments are helpful during debugging.
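The idea behind the "ma_" prefix is a thin wrapper: the single-agent env's observation, reward, and done flag are boxed into one-element lists so a multi-agent training loop runs unchanged. A sketch of that pattern (the class and dummy env below are illustrative assumptions, not ma-gym's actual implementation):

```python
class MultiAgentWrapper:
    """Expose a single-agent env through the per-agent-list API
    (illustrative sketch of what the "ma_" prefix provides)."""

    def __init__(self, env):
        self.env = env
        self.n_agents = 1

    def reset(self):
        return [self.env.reset()]  # one observation per agent

    def step(self, action_n):
        # Unbox the single agent's action, box the results back into lists
        obs, reward, done, info = self.env.step(action_n[0])
        return [obs], [reward], [done], info


# Hypothetical stand-in for CartPole-v0, just to show the call pattern
class DummyEnv:
    def reset(self):
        return [0.0, 0.0]

    def step(self, action):
        return [0.1, 0.0], 1.0, True, {}


env = MultiAgentWrapper(DummyEnv())
obs_n = env.reset()
obs_n, reward_n, done_n, info = env.step([0])
```

With `n_agents = 1`, the same `while not all(done_n)` loop used for Switch2-v0 drives this wrapped env without modification.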
Please refer to the Wiki for more details.
| Checkers-v0 | Combat-v0 | Lumberjacks-v0 |
|---|---|---|
| PongDuel-v0 | PredatorPrey5x5-v0 | PredatorPrey7x7-v0 |
| Switch2-v0 | Switch4-v0 | TrafficJunction4-v0 |
| TrafficJunction10-v0 | | |
- Install:

  ```bash
  pip install -e ".[test]"
  ```

- Run:

  ```bash
  pytest
  ```
- This project was initially developed to complement my research internship @ SAS (Summer 2019).