- We use large language models (LLMs) to bridge natural language and behavior analysis.
- This work was published at NeurIPS 2023! Read the paper, AmadeusGPT: a natural language interface for interactive animal behavioral analysis by Shaokai Ye, Jessy Lauer, Mu Zhou, Alexander Mathis & Mackenzie W. Mathis.
- Like this project? Please consider giving us a star ⭐️!
Developed by part of the same team that brought you DeepLabCut, AmadeusGPT is a natural language interface that turns natural language descriptions of behaviors into machine-executable code. Quantifying and analyzing animal behavior means translating the naturally occurring, descriptive language of their actions into machine-readable code, yet codifying behavior analysis is often challenging without a deep understanding of animal behavior and technical machine learning knowledge, so we wanted to ease this jump. In short, we provide a "code-free" interface for you to analyze video data of animals. If you are a DeepLabCut user, this means you can upload your videos and .h5 keypoint files and then ask questions such as "How much time does the mouse spend in the middle of the open field?". In our original work (NeurIPS 2023) we used GPT-3.5 and GPT-4 as part of our agent. We continue to support the latest OpenAI models, and we are actively developing this project.
Conda is an easy-to-use Python package and environment manager that supports launching Jupyter Notebooks. If you are completely new to this, we recommend checking out the docs here for getting conda installed. Otherwise, proceed to use one of our supplied conda files. As you will see, we have minimal dependencies to get started, and here is a simple step-by-step guide you can reference for setting it up (or see BONUS below). Here is the quick-start command:
conda env create -f amadeusGPT.yml
Note that some modules AmadeusGPT can use benefit from GPU support, so we recommend also having an NVIDIA GPU available and installing CUDA.
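If you want to quickly confirm that your environment can actually see the GPU, here is a minimal check. This is a hedged sketch: it assumes PyTorch is installed in your env, which the GPU-dependent modules (e.g., SuperAnimal, SAM) rely on.

```python
# Quick GPU sanity check (assumes PyTorch is installed; only the GPU-dependent
# modules need it, AmadeusGPT itself also runs on CPU).
import torch

if torch.cuda.is_available():
    print("CUDA device found:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device found; GPU-dependent modules will fall back to CPU.")
```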
Why is an OpenAI API key needed? AmadeusGPT relies on OpenAI API calls (we will add more LLM options in the future) for language understanding and code writing. Sign up for an OpenAI API key here.
Then, you can add this to your environment by passing the following in the terminal after you have launched your conda env:
export OPENAI_API_KEY='your API key'
Or inside a Python script or Jupyter Notebook, add this if you did not set it at the terminal stage:
import os
os.environ["OPENAI_API_KEY"] = 'your api key'
See below on how to get started!
We provide a Streamlit app, or you can use AmadeusGPT in any Python interface, such as Jupyter notebooks. For the latter, we suggest getting started from our demos:
You can git clone (or download) this repo to grab a copy and go. We provide example notebooks here!
- Draw a region of interest (ROI) and ask, "when is the animal in the ROI?"
- Use your own data (make sure you use a GPU to run SuperAnimal if you don't have corresponding DeepLabCut keypoint files already!)
- Write your own integration modules and use them. Bonus: source code. Make sure you delete the cached modules_embedding.pickle if you add new modules!
- Multi-Animal social interactions
- Reuse the task program generated by the LLM and run it on different videos
- You can ask one query across multiple videos. Put your keypoint files and video files (pairs) in the same folder and specify the `data_folder` as shown in this Demo. Make sure your video file and keypoint file follow the standard DeepLabCut convention, i.e., `prefix.mp4` and `prefix*.h5` (a quick way to check this is sketched below).
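If you are unsure whether your folder follows this convention, the snippet below is a minimal sketch (standard library only; `temp_data_folder` is just a placeholder) that lists each video and its matching keypoint file:

```python
# Hedged sketch: check that each video has a DeepLabCut keypoint file with the
# same prefix (prefix.mp4 paired with prefix*.h5) in the same folder.
from pathlib import Path

data_folder = Path("temp_data_folder")  # placeholder; point this at your own folder
for video in sorted(data_folder.glob("*.mp4")):
    keypoint_files = sorted(data_folder.glob(f"{video.stem}*.h5"))
    if keypoint_files:
        print(f"{video.name} -> {keypoint_files[0].name}")
    else:
        print(f"{video.name}: no keypoint file found (SuperAnimal would be used)")
```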
import os
from amadeusgpt import create_project
from amadeusgpt import AMADEUS
from amadeusgpt.utils import parse_result
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = "your key"
# data folder contains video files and optionally keypoint files
# please pay attention to the naming convention as described above
data_folder = "temp_data_folder"
# where the results are saved
result_folder = "temp_result_folder"
# Create a project
config = create_project(data_folder, result_folder, video_suffix = ".mp4")
# Create an AMADEUS instance
amadeus = AMADEUS(config)
query = "Plot the trajectory of the animal using the animal center and color it by time"
qa_message = amadeus.step(query)
# we made it easier to parse the result
parse_result(amadeus, qa_message)
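# (Optional) The same AMADEUS instance can answer follow-up queries on the same
# project; this is a hedged sketch reusing only the calls shown above
# (amadeus.step and parse_result), with an example query from this README.
follow_up = "How much time does the mouse spend in the middle of the open field?"
qa_message = amadeus.step(follow_up)
parse_result(amadeus, qa_message)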
- You will need to git clone this repo and have a copy locally. Then in your env run:
pip install 'amadeusGPT[streamlit]'
- Then you can open the terminal and within the directory run:
make app
If you want to set up your own env,
conda create -n amadeusGPT python=3.10
the key dependencies that need to be installed are:
pip install notebook
conda install hdf5
conda install pytables==3.8
# pip install deeplabcut==3.0.0rc4 if you want to use SuperAnimal on your own videos
pip install amadeusgpt
If you use ideas or code from this project in your work, please cite us using the following BibTeX entry. 🙏
@article{ye2023amadeusGPT,
title={AmadeusGPT: a natural language interface for interactive animal behavioral analysis},
author={Shaokai Ye and Jessy Lauer and Mu Zhou and Alexander Mathis and Mackenzie Weygandt Mathis},
journal={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=9AcG3Tsyoq},
}
- arXiv preprint version AmadeusGPT: a natural language interface for interactive animal behavioral analysis by Shaokai Ye, Jessy Lauer, Mu Zhou, Alexander Mathis & Mackenzie W. Mathis.
AmadeusGPT is licensed under the Apache-2.0 license.
- 🚨 Please note several key dependencies have their own licensing. Please carefully check the license information for DeepLabCut (LGPL-3.0 license), SAM (Apache-2.0 license), etc.
- If you only provide a video file, we use SuperAnimal models to predict the animal(s) in your video. While we highly recommend GPU installation, we are working on faster, lightweight SuperAnimal models that work on your CPU.
- If you already have a keypoint file corresponding to the video file, see how we set up the config file in the Notebooks. Right now we only support keypoint output from DeepLabCut (see the sketch below if you want to quickly inspect such a file).
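Here is a minimal sketch for inspecting a DeepLabCut keypoint file before handing it to AmadeusGPT. It assumes the standard DLC .h5 output, which is a pandas DataFrame with a scorer/bodyparts/coords column MultiIndex; the file path is a placeholder.

```python
# Hedged sketch: peek at a standard DeepLabCut .h5 keypoint file with pandas
# (requires pytables, as installed in the BONUS env steps above).
import pandas as pd

df = pd.read_hdf("temp_data_folder/prefix.h5")  # placeholder path
print(df.columns.names)  # typically ['scorer', 'bodyparts', 'coords']
print(df.head())         # x, y, likelihood per bodypart for the first frames
```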
- July 2024: v0.1.1 is released! This is a major code update ...
- June 2024 as part of the CZI EOSS, The Kavli Foundation now supports this work! ✨
- 🤩 Dec 2023, code released!
- 🔥 Our work was accepted to NeurIPS 2023
- 🧙♀️ Open-source code coming in the fall of 2023
- 🔮 arXiv paper and demo released July 2023
- 🪄Contact us