Paper: https://link.springer.com/article/10.1007/s10994-022-06142-7
Extended abstract: https://www.ijcai.org/proceedings/2022/0742.pdf
IJCAI presentation: https://www.ijcai.org/proceedings/2022/742
IJCLR presentation: https://www.youtube.com/watch?v=1gsLt-zFXiY&ab_channel=Inst.Informatics%26Telecomms%2CNCSRDemokritos
Supplementary materials from the paper may be found in `/neuro/supplementary_materials`.
This is the main repository for the novel DUA system. It is a fork of the main AnimalAI branch and contains the modifications necessary for DUA to function. Modifications to the original AnimalAI code are found in `/animalai`, `/animalai_train` and `ml-agents-envs`.
For all DUA-specific code, please refer to the `/neuro` directory, which contains the implementation of the whole system. It also contains its own `Readme.md` describing the various modules.
The Animal-AI Testbed introduces the study of animal cognition to the world of AI. It provides an environment for testing agents on tasks taken from, or inspired by, the animal cognition literature. Decades of research in this field allow us to train and test for cognitive skills in Artificial Intelligence agents.
This repo contains the training environment, a training library, and 900 tasks for testing and/or training agents. The experiments are divided into categories meant to reflect various cognitive skills. Details can be found on the website.
We ran a competition using this environment and the associated tests; more details about the results can be found here.
The environment is built using Unity ml-agents and contains an agent enclosed in a fixed-size arena. Objects can spawn in this arena, including positive and negative rewards (green, yellow and red spheres) that the agent must obtain (or avoid). All of the tests are made from combinations of objects in the training environment.
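To illustrate how objects are combined into tests, arenas are described in YAML files with custom tags. The snippet below is a hedged sketch of such a configuration: the item names (`GoodGoal`, `BadGoal`) and field names follow the format used by the example configs, but coordinates and episode length are illustrative values, not taken from any shipped test.

```yaml
!ArenaConfig
arenas:
  0: !Arena
    t: 250                          # episode length in steps (illustrative)
    items:
    - !Item
      name: GoodGoal                # green sphere: positive reward to obtain
      positions:
      - !Vector3 {x: 10, y: 0, z: 10}
      sizes:
      - !Vector3 {x: 1, y: 1, z: 1}
    - !Item
      name: BadGoal                 # red sphere: negative reward to avoid
      positions:
      - !Vector3 {x: 30, y: 0, z: 30}
```

Configurations like this can be swapped between episodes, which is how the test battery reuses the same training environment for many different tasks.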
Just want to get started? Then:
- Clone this repo
- In the `examples` folder, run `pip install -r requirements.txt`
- Get the environment executable for your platform
- In the `examples` folder, start `jupyter notebook` and go through the environment and the training notebooks!
For more examples to run, see the `examples` folder.
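The steps above can be sketched as a single shell session. This is a hedged sketch: the repository URL is a placeholder, and the `chmod` line assumes the Linux build of the executable described below.

```shell
# Clone the repository (placeholder URL; use this repo's actual address)
git clone <this-repo-url> animalai-dua
cd animalai-dua/examples

# Install the Python dependencies used by the example notebooks/scripts
pip install -r requirements.txt

# Download the environment archive for your platform (links below) and
# unzip it into examples/env; on Linux, mark the binary executable:
chmod +x env/AnimalAI.x86_64

# Open the environment and training notebooks
jupyter notebook
```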
First download the environment for your system:
| OS | Environment link |
| --- | --- |
| Linux | download v2.0.1 |
| MacOS | download v2.0.1 |
| Windows | download v2.0.1 |
Unzip the entire content of the archive to the `examples/env` folder. On Linux you may have to make the file executable by running `chmod +x env/AnimalAI.x86_64`.
The Animal-AI packages work on Linux, Mac and Windows and require Python 3.
- The main package is an API for interfacing with the Unity environment. It contains both a gym environment and an extension of Unity's ml-agents environments. You can install it via pip: `pip install animalai`. Or you can install it from source by running `pip install -e animalai` from the repo folder.
- We also provide a package that can be used as a starting point for training, and which is required to run most of the example scripts found in the `examples/` folder. It contains an extension of ml-agents' training environment that relies on OpenAI's PPO and BAIR's SAC. You can also install this package using pip: `pip install animalai-train`. Or you can install it from source by running `pip install -e animalai_train` from the repo folder.
The Unity source files for the environment can be found on our ml-agents fork.
If you launch the environment directly from the executable or through the `load_config_and_play.py` script, it will launch in player mode. Here you can control the agent with the following:
| Keyboard Key | Action |
| --- | --- |
| W | move agent forwards |
| S | move agent backwards |
| A | turn agent left |
| D | turn agent right |
| C | switch camera |
| R | reset environment |
If you use the Animal-AI environment in your work you can cite the environment paper:
Beyret, B., Hernández-Orallo, J., Cheke, L., Halina, M., Shanahan, M., Crosby, M. The Animal-AI Environment: Training and Testing Animal-Like Artificial Cognition, arXiv preprint
@inproceedings{Beyret2019TheAE,
title={The Animal-AI Environment: Training and Testing Animal-Like Artificial Cognition},
author={Benjamin Beyret and Jos{\'e} Hern{\'a}ndez-Orallo and Lucy Cheke and Marta Halina and Murray Shanahan and Matthew Crosby},
year={2019}
}
A paper with all the details of the test battery will be released after the competition has finished.
The Animal-AI Olympics was built using Unity's ML-Agents Toolkit.
The Python library located in `animalai` extends ml-agents v0.15.0. The main addition is the ability to change the arena configuration between episodes.
Juliani, A., Berges, V., Vckay, E., Gao, Y., Henry, H., Mattar, M., Lange, D. (2018). Unity: A General Platform for Intelligent Agents. arXiv preprint arXiv:1809.02627
The competition was kindly hosted on EvalAI, an open source web application for AI competitions. Special thanks to Rishabh Jain for his help in setting this up. We will aim to reopen submissions with new hidden files in order to keep some form of competition going.
Deshraj Yadav, Rishabh Jain, Harsh Agrawal, Prithvijit Chattopadhyay, Taranjeet Singh, Akash Jain, Shiv Baran Singh, Stefan Lee and Dhruv Batra (2019) EvalAI: Towards Better Evaluation Systems for AI Agents
- v2.0.1 (env only):
  - movable objects are lighter and easier to move
- v2.0.0:
  - fixes small bugs
  - adds tutorial notebooks
  - bumps ml-agents from 0.7 to 0.15, which:
    - allows multiple parallel environments for training
    - adds a Soft Actor-Critic (SAC) trainer
    - introduces a new actions/observations loop (on-demand decisions)
    - removes brains and some protobufs
    - adds side-channels to replace some protobufs
  - refactors the codebase
  - GoodGoalMulti spheres are now yellow with the same light-emitting texture as GoodGoal and BadGoal
  - the whole project, including the Unity source, is now available on our ml-agents fork
For earlier versions see here