- MegaDetector overview
- Our ask to MegaDetector users
- Who is using MegaDetector?
- How fast is MegaDetector, and can I run it on my giant/small computer?
- Downloading the model (optional)
- Using the model
- OK, but is that how the MD devs run the model?
- Is there a GUI?
- How do I use the results?
- Have you evaluated MegaDetector's accuracy?
- What is MegaDetector bad at?
- Pro tips for coaxing every bit of accuracy out of MegaDetector
- Citing MegaDetector
- Tell me more about why detectors are a good first step for camera trap images
- Pretty picture
- Mesmerizing video
- Can you share the training data?
- What if I just want to run non-MD scripts from this repo?
- What if I want to use MD without all the baggage of your very specific package versions?
Conservation biologists invest a huge amount of time reviewing camera trap images, and a huge fraction of that time is spent reviewing images they aren't interested in. This primarily includes empty images, but for many projects, images of people and vehicles are also "noise", or at least need to be handled separately from animals.
Machine learning can accelerate this process, letting biologists spend their time on the images that matter.
To this end, this page hosts a model we've trained - called "MegaDetector" - to detect animals, people, and vehicles in camera trap images. It does not identify animals to the species level, it just finds them.
Before you read the rest of this page...
- If you are looking for a convenient tool to run MegaDetector, you don't need anything from this page: check out EcoAssist.
- If you're just considering the use of AI in your workflow, and you aren't even sure yet whether MegaDetector would be useful to you, we recommend reading the much shorter "getting started with MegaDetector" page.
- If you're a programmer-type looking to use tools from this repo, check out the Python package that provides access to everything in this repo (yes, you guessed it, "pip install megadetector").
- If you're already familiar with MegaDetector and you're ready to run it on your data, and you're looking for instructions on running MegaDetector, read on!
- If you have any questions, or you want to tell us that MegaDetector was amazing/terrible on your images, email us!
MegaDetector is just one of many tools that aim to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our overview of the field, affectionately titled “Everything I know about machine learning and camera traps”.
MegaDetector is free, and it makes us super-happy when people use it, so we put it out there as a downloadable model that is easy to use in a variety of conservation scenarios. That means we don't know who's using it unless you contact us (or we happen to run into you), so please please pretty-please email us at cameratraps@lila.science if you find it useful!
We often run MegaDetector on behalf of users as a free service; see our "Getting started with MegaDetector" page for more information. But there are many reasons to run MegaDetector on your own; how practical this is will depend in part on how many images you need to process and what kind of computer hardware you have available. MegaDetector is designed to favor accuracy over speed, and we typically run it on GPU-enabled computers. That said, you can run anything on anything if you have enough time, and we're happy to support users who run MegaDetector on their own GPUs (in the cloud or on their own PCs), on their own CPUs, or even on embedded devices. If you only need to process a few thousand images per week, for example, a typical laptop will be just fine. If you want to crunch through 20 million images as fast as possible, you'll want at least one GPU.
Here are some rules of thumb to help you estimate how fast you can run MegaDetector on different types of hardware.
- On a decent laptop (without a fancy deep learning GPU) that is neither the fastest nor slowest laptop you can buy in 2023, MegaDetector v5 can process somewhere between 25,000 and 50,000 images per day. This might be totally fine for scenarios where you have even hundreds of thousands of images, as long as you can wait a few days.
- On a dedicated deep learning GPU that is neither the fastest nor slowest GPU you can buy in 2023, MegaDetector v5 can process between 300,000 and 1,000,000 images per day. We include a few benchmark timings below on some specific GPUs.
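If it helps to turn these rules of thumb into wall-clock time, here's a trivial back-of-the-envelope calculator (nothing MegaDetector-specific is assumed; the example rates are just the figures quoted on this page, so substitute whatever you measure on your own hardware):

# Rough estimate of how long a batch will take at a given throughput
def estimated_days(n_images, images_per_second):
    seconds = n_images / images_per_second
    return seconds / (60 * 60 * 24)

# E.g. 2 million images at ~0.5 images/second (a typical laptop CPU core)...
print('{:.1f} days'.format(estimated_days(2_000_000, 0.5)))  # ~46 days

# ...vs. the same 2 million images at ~11 images/second (roughly an RTX 3090)
print('{:.1f} days'.format(estimated_days(2_000_000, 11)))   # ~2 days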
We don't typically recommend running MegaDetector on embedded devices, although some folks have done it! More commonly, for embedded scenarios, it probably makes sense to use MegaDetector to generate bounding boxes on lots of images from your specific ecosystem, then use those boxes to train a smaller model that fits your embedded device's compute budget.
These results are based on a test batch of around 13,000 images from the public Snapshot Karoo and Idaho Camera Traps datasets. These were chosen to be "typical", and anecdotally they are, though FWIW we have seen very high-resolution images that run around 30% slower than these, and very low-resolution images (typically video frames) that run around 100% faster than these.
Some of these results were measured by "team MegaDetector", and some are user-reported; YMMV.
- An RTX 4090 processes around 17.6 images per second, or around 1,500,000 images per day (for MDv5)
- An RTX 3090 processes around 11.4 images per second, or around 985,000 images per day (for MDv5)
- An RTX 3080 processes around 9.5 images per second, or around 820,800 images per day (for MDv5)
- A desktop RTX 3050 processes around 4.2 images per second, or around 363,000 images per day (for MDv5)
- A laptop RTX 3050 processes around 3.0 images per second, or around 250,000 images per day (for MDv5)
- A Quadro P2000 processes around 2.1 images per second, or around 180,000 images per day (for MDv5)
- A 2024 M3 MacBook Pro (18 GPU cores) averages around 4.61 images per second, or around 398,000 images per day (for MDv5)
- A 2020 M1 MacBook Pro (8 GPU cores) averages around 1.85 images per second, or around 160,000 images per day (for MDv5)
- An Intel Core i7-12700 CPU processes around 0.5 images per second on a single core (43,000 images per day) (multi-core performance is... complicated) (for MDv5)
- An Intel Core i7-13700K CPU processes around 0.8 images per second on a single core (69,000 images per day) (multi-core performance is... complicated) (for MDv5)
FWIW, MDv5 is consistently 3x-4x faster than MDv4, so if you see a device listed here and want to estimate MDv5 performance, assume 3x-4x speedup.
- An NVIDIA V100 processes around 2.79 images per second, or around 240,000 images per day (for MDv4)
- An NVIDIA RTX 3090 processes ~3.24 images per second, or ~280,000 images per day (for MDv4)
- An NVIDIA RTX 2080 Ti processes ~2.48 images per second, or ~214,000 images per day (for MDv4)
- An NVIDIA RTX 2080 processes ~2.0 images per second, or ~171,000 images per day (for MDv4)
- An NVIDIA RTX 2060 SUPER processes ~1.64 images per second, or ~141,000 images per day (for MDv4)
- An NVIDIA Titan V processes ~1.9 images per second, or ~167,000 images per day (for MDv4)
- An NVIDIA Titan Quadro T2000 processes ~0.64 images per second, or ~55,200 images per day (for MDv4)
If you want to run this benchmark on your own, here are azcopy commands to download those 13,226 images, and we're happy to help you get MegaDetector running on your setup. Or if you're using MegaDetector on other images with other GPUs, we'd love to include that data here as well. Email us!
Speed can vary widely based on image size, hard drive speed, etc., and in these numbers we're just taking what users report without asking what the deal was with the data, so... YMMV.
- A GTX 1080 processed 699,530 images in just over 44 hours through MDv5 (4.37 images per second, or ~378,000 images per day)
- An RTX 3050 processes ~4.6 images per second, or ~397,000 images per day through MDv5
- An RTX 3090 processes ~11 images per second, or ~950,000 images per day through MDv5
See this list on the repo's main page.
In previous versions of these instructions, you had to download MegaDetector to your PC before running it. The scripts we use to run MegaDetector can now automatically download MegaDetector, so this whole download step is optional now, and if you're going to follow the instructions on this page, you can probably ignore this section and skip to the "Using the model" section.
That said, in this section, we provide download links for lots of MegaDetector versions. Unless you have a very esoteric scenario, you want MegaDetector v5, and you can ignore all the other MegaDetector versions. The rest of this section, after the MDv5 download links, is more like a mini-MegaDetector-museum than part of the User Guide.
This release incorporates additional training data, specifically aiming to improve our coverage of:
- Boats and trains in the "vehicle" class
- Artificial objects (e.g. bait stations, traps, lures) that frequently overlap with animals
- Rodents, particularly at close range
- Reptiles and small birds
This release also represents a change in MegaDetector's architecture, from Faster-RCNN to YOLOv5. Our inference scripts have been updated to support both architectures, so the transition should be mostly seamless.
MDv5 is actually two models (MDv5a and MDv5b), differing only in their training data (see the training data section for details). Both appear to be more accurate than MDv4, and both are 3x-4x faster than MDv4, but each MDv5 model can outperform the other slightly, depending on your data. When in doubt, for now, try them both. If you really twist our arms to recommend one... we recommend MDv5a. But try them both and tell us which works better for you! The pro tips section contains some additional thoughts on when to try multiple versions of MD.
See the release page for more details, and in particular, be aware that the range of confidence values produced by MDv5 is very different from the range of confidence values produced by MDv4! Don't use your MDv4 confidence thresholds with MDv5!
This release incorporates additional training data from Borneo, Australia and the WCS Camera Traps dataset, as well as images of humans in both daytime and nighttime. We also have a preliminary "vehicle" class for cars, trucks, and bicycles.
- Frozen model (.pb)
- TFODAPI config file
- Last checkpoint (for resuming training)
- TensorFlow SavedModel for TFServing (inputs in uint8 format, serving_default output signature)
If you're not sure which format to use, you want the "frozen model" file (the first link).
In addition to incorporating additional data, this release adds a preliminary "human" class. Our animal training data is still far more comprehensive than our humans-in-camera-traps data, so if you're interested in using our detector but find that it works better on animals than people, stay tuned.
- Frozen model (.pb)
- TFODAPI config file
- Last checkpoint (for resuming training)
- TensorFlow SavedModel (inputs in TF common image format, default output signature)
- TensorFlow SavedModel for TFServing (inputs in uint8 format, serving_default output signature)
First MegaDetector release! Yes, that's right, v2 was the first release. If there was a v1, we don't remember it.
You may want to skip the rest of this page and use the MegaDetector Python package (pip install megadetector). There are examples on the package home page, and the package is documented here.
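If you go the package route, a batch run looks roughly like the sketch below. Treat this as a minimal illustration rather than official package documentation: the module path matches the run_detector_batch script used later on this page, but the function names and exact signatures are assumptions here, so double-check them against the package docs. The folder and output paths are just placeholders.

from megadetector.detection.run_detector_batch import load_and_run_detector_batch, write_results_to_file
from megadetector.utils import path_utils

image_folder = '/some/image/folder'
output_file = '/some/image/folder/megadetector_results.json'

# Recursively find images in the folder
image_file_names = path_utils.find_images(image_folder, recursive=True)

# 'MDV5A' tells the package to download MegaDetector v5a automatically,
# just like the command-line scripts described below
results = load_and_run_detector_batch('MDV5A', image_file_names)

# Write results in the standard MegaDetector output format (which Timelapse
# and our postprocessing tools can read)
write_results_to_file(results, output_file, relative_path_base=image_folder)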
If you are new to Python, you are in the right place, read on...
We provide two ways to run MegaDetector on your images:
- A simple test script that makes neat pictures with bounding boxes, but doesn't produce a useful output file (run_detector.py)
- A script for running large batches of images on a local GPU (run_detector_batch.py)
Also see the “Is there a GUI?” section for graphical options and other ways of running MD, including real-time APIs, Docker environments, and other goodies.
The remainder of this section provides instructions for running our "official" scripts, including installing all the necessary Python dependencies.
All of the instructions that follow assume you have installed Miniforge. Miniforge is an environment for installing and running Python stuff.
If you know what you're doing, or you already have Anaconda installed, you can use either Anaconda or Miniforge; the environment files work with both. But our experiences have been best with Miniforge, so, if you just want to get up and running, start by installing Miniforge. If you're using Anaconda and you're staring at a "solving environment" prompt that's been running for like a day, consider switching to Miniforge.
To install Miniforge on Windows, just download and run the Miniforge installer. If you get a "Windows protected your PC" warning, you might have to click "More info" and "run anyway". You can leave everything at the default value during installation.
All the instructions below will assume you are running at the Miniforge command prompt, which is basically just like a regular command prompt, but it has access to all the Python stuff. On Windows, once you've installed Miniforge, you can start your Miniforge command prompt by launching the shortcut called "Miniforge prompt".
You will know you are at a Miniforge prompt (as opposed to a run-of-the-mill command prompt) if you see an environment name in parentheses before your current directory, e.g. something like "(base) C:\Users\yourname>".
The list of Miniforge installers has links for Linux and macOS. If you're installing on a Mac, be sure to download the right installer: "x86_64" if you are on an Intel Mac, "arm64 (Apple Silicon)" if you are on an Apple-silicon (M1/M2) Mac. In all of these cases, you will be downloading a .sh file; after you run it to install Miniforge, you should see an environment name in parentheses at your prompt, just like in the example above.
The instructions will also assume you have git installed. If you're not familiar with git, and you are on a Windows machine, we recommend installing Git for Windows. If you're on a Linux machine or a Mac, there's like a 99.9% chance you already have git installed.
If you have a deep-learning-friendly GPU, you will also need to have a recent NVIDIA driver installed. If you don't have an Nvidia GPU, it's OK, you can still run MegaDetector on your CPU, and you don't need to install any special drivers.
This step is optional; in fact, the only reason to run this step is if you will not have an Internet connection later when you need to run MegaDetector.
Otherwise, when you run MegaDetector later in these instructions, the model file will get downloaded automatically.
That said, if you want to save MegaDetector to a particular folder, download one or more MegaDetector model files (typically MDv5a, but you can also download MDv5b and/or MDv4) to your computer. You can put them anywhere; later in these instructions, you'll tell the relevant scripts where to find the model file.
You will need the contents of two git repos to make everything work: this repo and the YOLOv5 repo (more specifically, a fork of that repo). You will also need to set up a Python environment with all the Python packages that our code depends on. In this section, we provide Windows, Linux, and Mac instructions for doing all of this stuff.
The first time you set all of this up, open your Miniforge prompt, and run:
mkdir c:\git
cd c:\git
git clone https://github.com/agentmorris/MegaDetector
git clone https://github.com/ecologize/yolov5
cd c:\git\MegaDetector
mamba env create --file envs\environment-detector.yml
mamba activate megadetector
set PYTHONPATH=c:\git\MegaDetector;c:\git\yolov5
Your environment is set up now! In the future, when you open your Miniforge prompt, you only need to run:
cd c:\git\MegaDetector
mamba activate megadetector
set PYTHONPATH=c:\git\MegaDetector;c:\git\yolov5
Pro tip: if you have administrative access to your machine, rather than using the "set PYTHONPATH" steps, you can also create a permanent PYTHONPATH environment variable. Here's a good page about editing environment variables in Windows. But if you just want to "stick to the script" and do it exactly the way we recommend above, that's fine.
If you have installed Miniforge on Linux, you are probably always at a Miniforge prompt; i.e., you should see "(base)" at your command prompt. Assuming you see that, the first time you set all of this up, run:
mkdir ~/git
cd ~/git
git clone https://github.com/ecologize/yolov5
git clone https://github.com/agentmorris/MegaDetector
cd ~/git/MegaDetector
mamba env create --file envs/environment-detector.yml
mamba activate megadetector
export PYTHONPATH="$HOME/git/MegaDetector:$HOME/git/yolov5"
If you want to use MDv4 (which you probably don't, unless you have a really good reason to), there's one extra setup step (this will not break your MDv5 setup, you can run both in the same environment):
mamba activate megadetector
pip install tensorflow
Your environment is set up now! In the future, whenever you start a new shell, you just need to do:
cd ~/git/MegaDetector
mamba activate megadetector
export PYTHONPATH="$HOME/git/MegaDetector:$HOME/git/yolov5"
Pro tip: rather than updating your PYTHONPATH every time you start a new shell, you can add the "export" line to your .bashrc file.
If you have installed Miniforge on Mac, you are probably always at a Miniforge prompt; i.e., you should see "(base)" at your command prompt. Assuming you see that, the first time you set all of this up, run:
mkdir ~/git
cd ~/git
git clone https://github.com/ecologize/yolov5
git clone https://github.com/agentmorris/MegaDetector
cd ~/git/MegaDetector
./envs/md-mac-env-setup.sh
mamba activate megadetector
export PYTHONPATH="$HOME/git/MegaDetector:$HOME/git/yolov5"
If you want to use MDv4 (which you probably don't, unless you have a really good reason to), there's one extra setup step (this will not break your MDv5 setup, you can run both in the same environment):
mamba activate megadetector
pip install tensorflow
Your environment is set up now! In the future, whenever you start a new shell, you just need to do:
cd ~/git/MegaDetector
mamba activate megadetector
export PYTHONPATH="$HOME/git/MegaDetector:$HOME/git/yolov5"
Pro tip: rather than updating your PYTHONPATH every time you start a new shell, you can add the "export" line to your .bashrc file.
To test MegaDetector out on small sets of images and get super-satisfying visual output, we provide run_detector.py, an example script for invoking this detector on new images. This isn't how we recommend running lots of images through MegaDetector (see run_detector_batch.py below for "real" usage), but it's a quick way to test things out. Let us know how it works on your images!
The following examples assume you have your Miniforge prompt open, and have put things in the same directories we put things in the above instructions. If you put things in different places, adjust these examples to match your folders, and most importantly, adjust these examples to point to your images.
To use run_detector.py on Windows, when you open a new Miniforge prompt, don't forget to do this:
cd c:\git\MegaDetector
mamba activate megadetector
set PYTHONPATH=c:\git\MegaDetector;c:\git\yolov5
Then you can run the script like this:
python megadetector\detection\run_detector.py MDV5A --image_file "some_image_file.jpg" --threshold 0.1
"MDV5A" tells this script to automatically download MegaDetector v5a; if you already downloaded it, you can replace this with the full path to your MegaDetector model file (e.g. "c:\megadetector\md_v5a.0.0.pt").
Change "some_image_file.jpg" to point to a real image on your computer.
If you ran this script on "some_image_file.jpg", it will produce a file called "some_image_file_detections.jpg", which - if everything worked right - has boxes on objects of interest.
If you have an Nvidia GPU, and it's being utilized correctly, near the beginning of the output, you should see:
GPU available: True
If you have an Nvidia GPU and you see "GPU available: False", your GPU environment may not be set up correctly. 95% of the time, this is fixed by updating your Nvidia driver and rebooting. If you have an Nvidia GPU, and you've installed the latest driver, and you've rebooted, and you're still seeing "GPU available: False", email us.
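If you want to check GPU visibility directly, independent of the MegaDetector scripts, you can ask PyTorch from a Python prompt in the megadetector environment (this is just standard PyTorch, nothing MegaDetector-specific):

import torch

# True if PyTorch can see a CUDA-capable (Nvidia) GPU
print('CUDA available: {}'.format(torch.cuda.is_available()))

# On Apple-silicon Macs, PyTorch uses Metal (MPS) instead of CUDA; this check
# only exists in newer PyTorch versions, hence the hasattr guard
if hasattr(torch.backends, 'mps'):
    print('MPS available: {}'.format(torch.backends.mps.is_available()))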
This is really just a test script; you will mostly use it to make sure your environment is set up correctly. run_detector_batch.py (see below) is where the interesting stuff happens.
You can see all the options for this script by running:
python megadetector\detection\run_detector.py
To use this script on Linux/Mac, when you open a new Miniforge prompt, don't forget to do this:
cd ~/git/MegaDetector
mamba activate megadetector
export PYTHONPATH="$HOME/git/MegaDetector:$HOME/git/yolov5"
Then you can run the script like this:
python megadetector/detection/run_detector.py MDV5A --image_file "some_image_file.jpg" --threshold 0.1
Don't forget to change "some_image_file.jpg" to point to a real image on your computer.
To apply this model to larger image sets on a single machine, we recommend a different script, run_detector_batch.py. This outputs data in the MegaDetector results format, so you can work with the results in tools like Timelapse.
To use run_detector_batch.py on Windows, when you open a new Miniforge prompt, don't forget to do this:
cd c:\git\MegaDetector
mamba activate megadetector
set PYTHONPATH=c:\git\MegaDetector;c:\git\yolov5
Then you can run the script like this:
python megadetector\detection\run_detector_batch.py MDV5A "c:\some_image_folder" "c:\megadetector\test_output.json" --output_relative_filenames --recursive --checkpoint_frequency 10000 --quiet
"MDV5A" tells this script to automatically download MegaDetector v5a; if you already downloaded it, you can replace this with the full path to your MegaDetector model file (e.g. "c:\megadetector\md_v5a.0.0.pt").
Change "c:\some_image_folder" to point to the real folder on your computer where your images live.
This will produce a file called "c:\megadetector\test_output.json", which - if everything worked right - contains information about where objects of interest are in your images. You can use that file with any of our postprocessing scripts, but most users will read this file into Timelapse.
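If you'd rather work with the results programmatically than in Timelapse, the output file is plain JSON in the MegaDetector results format. Here's a small sketch of reading it and listing the images with above-threshold animal detections (the filename and the 0.2 threshold are just examples; pick a threshold appropriate for your data and your MegaDetector version):

import json

# Load the output file written by run_detector_batch.py
with open(r'c:\megadetector\test_output.json') as f:
    results = json.load(f)

# Maps category IDs to names, e.g. {'1': 'animal', '2': 'person', '3': 'vehicle'}
categories = results['detection_categories']

threshold = 0.2
animal_images = [
    im['file'] for im in results['images']
    if any(d['conf'] >= threshold and categories[d['category']] == 'animal'
           for d in im.get('detections', []))
]

print('{} of {} images contain likely animals'.format(
    len(animal_images), len(results['images'])))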
You can see all the options for this script by running:
python megadetector\detection\run_detector_batch.py
If you are running very large batches, we strongly recommend adding the --checkpoint_frequency option to save checkpoints every N images (you don't want to lose all the work your PC has done if your computer crashes!). 10000 is a good value for checkpoint frequency; that will save the results every 10000 images. This is what we've used in the example above. When you include this option, you'll see a line in the output that looks like this:
The checkpoint file will be written to c:\megadetector\md_checkpoint_20230305232323.json
The default checkpoint file will be in the same folder as your output file; in this case, because we told the script to write the final output to c:\megadetector\test_output.json, the checkpoint will be written in the c:\megadetector folder. If everything goes smoothly, the checkpoint file will be deleted when the script finishes. If your computer crashes/reboots/etc. while the script is running, you can pick up where you left off by running exactly the same command you ran the first time, but adding the "--resume_from_checkpoint" option, with the checkpoint file you want to resume from, or you can just say "auto" to use the most recent checkpoint. So, in this case, you would run:
python megadetector\detection\run_detector_batch.py MDV5A "c:\some_image_folder" "c:\megadetector\test_output.json" --output_relative_filenames --recursive --checkpoint_frequency 10000 --quiet --resume_from_checkpoint auto
You will see something like this at the beginning of the output:
Restored 80000 entries from the checkpoint
In the extremely unlikely event that your computer happens to crash while a checkpoint is getting written... don't worry, you're still safe, but recovering from that is a bit outside the scope of this tutorial, so just email us in that case.
If you have an Nvidia GPU, and it's being utilized correctly, near the beginning of the output, you should see:
GPU available: True
If you have an Nvidia GPU and you see "GPU available: False", your GPU environment may not be set up correctly. 95% of the time, this is fixed by updating your Nvidia driver and rebooting. If you have an Nvidia GPU, and you've installed the latest driver, and you've rebooted, and you're still seeing "GPU available: False", email us.
To use this script on Linux/Mac, when you open a new Miniforge prompt, don't forget to do this:
cd ~/git/MegaDetector
mamba activate megadetector
export PYTHONPATH="$HOME/git/MegaDetector:$HOME/git/yolov5"
Then you can run the script like this:
python megadetector/detection/run_detector_batch.py MDV5A "/some/image/folder" "$HOME/megadetector/test_output.json" --output_relative_filenames --recursive --checkpoint_frequency 10000
Almost... we run MegaDetector on a lot of images, and in addition to the main "run_detector_batch" script described in the previous section, a large batch job usually also includes:
- Dividing images into chunks for running on multiple GPUs
- Making sure that the number of failed/corrupted images was reasonable
- Eliminating frequent false detections using the repeat detection elimination process
- Visualizing the results using postprocess_batch_results.py to make "results preview" pages like this one
...and, less frequently:
- Running a species classifier on the MD crops
- Moving images into folders based on MD output
- Various manipulation of the output files, e.g. splitting .json files into smaller .json files for subfolders
- Running and comparing multiple versions of MegaDetector
There are separate scripts to do all of these things, but things would get chaotic if we ran each of these steps separately. So in practice we almost always run MegaDetector using manage_local_batch.py, a script broken into cells for each of those steps. We run this in an interactive console in Spyder, but we also periodically export this script to a notebook that does exactly the same thing.
So, if you find yourself keeping track of lots of steps like this to manage large MD jobs, try the notebook out! And let us know if it's useful/broken/wonderful/terrible.
Many of our users either use our Python tools to run MegaDetector or have us run MegaDetector for them (see this page for more information about that), and most of those users then work with their MegaDetector results in Timelapse as part of an image review workflow.
But we recognize that Python tools can be a bit daunting, so we're excited that a variety of tools that let you run MegaDetector in a GUI have emerged from the community. We're interested in users' perspectives on all of these tools, so if you find them useful - or if you know of others - let us know, and thank those developers!
- EcoAssist is a GUI-based tool for running MegaDetector (supports MDv5) and running some postprocessing functions (e.g. folder separation)
- CamTrap Detector is a GUI-based tool for running MegaDetector (supports MDv5)
- MegaDetector-GUI is a GUI-based tool for running MegaDetector in Windows environments (MDv4 only as far as we know)
- Hendry Lydecker set up a Hugging Face app for running MDv5
- Ben Evans set up a Web-based MegaDetector demo at replicate.com
It's not quite as simple as "these platforms all run MegaDetector on your images", but to varying degrees, all of the following online platforms use MegaDetector:
- Wildlife Insights
- TrapTagger
- WildTrax
- Agouti
- Trapper
- Camelot
- WildePod
- wpsWatch
- TNC Animl (code)
- Cam-WON
- Zooniverse ML Subject Assistant
- Dudek AI Image Toolkit
- Zamba Cloud
- OCAPI
- Mega-Efficient Wildlife Classifier (MEWC) (tools for training classifiers on MD crops) (github.com/zaandahl/mewc)
- MegaDetectorLite (ONNX/TensorRT conversions for MD) (github.com/timmh/MegaDetectorLite)
- MegaDetector-FastAPI (MD serving via FastAPI/Streamlit) (github.com/abhayolo/megadetector-fastapi)
- MegaDetector UI (tools for server-side invocation of MegaDetector) (github.com/NINAnor/megadetector_ui)
- MegaDetector Container (Docker image for running MD) (github.com/bencevans/megadetector-contained)
- MegaDetector V5 - ONNX (tools for exporting MDv5 to ONNX) (github.com/parlaynu/megadetector-v5-onnx)
- CamTrapML (Python library for camera trap ML) (github.com/bencevans/camtrapml)
- WildCo-Faceblur (MD-based human blurring tool for camera traps) (github.com/WildCoLab/WildCo_Face_Blur)
- CamTrap Detector (MDv5 GUI) (github.com/bencevans/camtrap-detector)
- SDZG Animl (package for running MD and other models via R) (github.com/conservationtechlab/animl)
- SpSeg (WII Species Segregator) (github.com/bhlab/SpSeg)
- Wildlife ML (detector/classifier training with active learning) (github.com/slds-lmu/wildlife-ml)
- BayDetect (GUI and automation pipeline for running MD) (github.com/enguy-hub/BayDetect)
- Automated Camera Trapping Identification and Organization Network (ACTION) (github.com/humphrem/action)
- TigerVid (animal frame/clip extraction from videos) (github.com/sheneman/tigervid)
- Trapper AI (AI backend for the TRAPPER platform) (gitlab.com/trapper-project/trapper-ai)
- video-processor (MD workflow for security camera footage) (github.com/evz/video-processor)
- Declas (client-side tool for running MD and classifiers) (github.com/stangandaho/declas)
- AI for Wildlife Monitoring (real-time alerts using 4G camera traps) (github.com/ratsakatika/camera-traps)
- Kaggle notebook for fine-tuning MegaDetector to add additional classes
- Colab notebook (open in Colab) for running MDv5 on images stored in Google Drive.
- Real-time MegaDetector API using Flask. This is deployed via Docker, so the Dockerfile provided for the real-time API may be a good starting point for other Docker-based MegaDetector deployments as well.
- Batch processing API that runs images on many GPUs at once on Azure. There is no public instance of this API, but the code allows you to stand up your own endpoint.
See the "How do people use MegaDetector results?" section of our "getting started" page.
Internally, we track metrics on a validation set when we train MegaDetector, but we can't stress enough how much performance of any AI system can vary in new environments, so if we told you "99.9999% accurate" or "50% accurate", etc., we would immediately follow that up with "but don't believe us: try it in your environment!"
Consequently, when we work with new users, we always start with a "test batch" to get a sense for how well MegaDetector works for your images. We make this as quick and painless as possible, so that in the (hopefully rare) cases where MegaDetector will not help you, we find that out quickly.
All of those caveats aside, we are aware of some external validation studies... and we'll list them here... but still, try MegaDetector on your images before you assume any performance numbers!
These are not necessarily papers specifically about evaluating MegaDetector, but they at least include a standalone MD evaluation.
- WildEye. MegaDetector Version 5 evaluation.
- Clarfeld LA, Sirén AP, Mulhall BM, Wilson TL, Bernier E, Farrell J, Lunde G, Hardy N, Gieder KD, Abrams R, Staats S. Evaluating a tandem human-machine approach to labelling of wildlife in remote camera monitoring. Ecological Informatics. 2023 Aug 10:102257.
- Aguirre I, Hood GA, Westbrook CJ. Short-term dynamics of beaver dam flow states. Science of The Total Environment. 2024 Feb 9:170825.
- Mitterwallner V, Peters A, Edelhoff H, Mathes G, Nguyen H, Peters W, Heurich M, Steinbauer MJ. Automated visitor and wildlife monitoring with camera traps and machine learning. Remote Sensing in Ecology and Conservation. 2023.
- Fennell M, Beirne C, Burton AC. Use of object detection in camera trap image identification: assessing a method to rapidly and accurately classify human and animal detections for research and application in recreation ecology. Global Ecology and Conservation. 2022 Mar 25:e02104.
- Vélez J, Castiblanco-Camacho PJ, Tabak MA, Chalmers C, Fergus P, Fieberg J. Choosing an Appropriate Platform and Workflow for Processing Camera Trap Data using Artificial Intelligence. arXiv. 2022 Feb 4.
- github.com/FFI-Vietnam/camtrap-tools (includes an evaluation of MegaDetector)
Bonus... this paper is not a formal review, but includes a thorough case study around MegaDetector:
- Tuia D, Kellenberger B, Beery S, Costelloe BR, Zuffi S, Risse B, Mathis A, Mathis MW, van Langevelde F, Burghardt T, Kays R. Perspectives in machine learning for wildlife conservation. Nature Communications. 2022 Feb 9;13(1):1-5.
If you know of other validation studies that have been published, let us know!
Really, don't trust results from one ecosystem and assume they will hold in another. This paper is about just how catastrophically bad AI models for camera trap images can fail to generalize to new locations. We hope that's not the case with MegaDetector! But don't assume.
While MegaDetector works well in a variety of terrestrial ecosystems, it's not perfect, and we can't stress enough how important it is to test MegaDetector on your own data before trusting it. We can help you do that; email us if you have questions about how to evaluate MegaDetector on your own data, even if you don't have images you've already reviewed.
But really, we'll answer the question... MegaDetector v5's biggest challenges are with reptiles. This is an area where accuracy has dramatically improved since MDv4, but reptiles are still under-represented in camera trap training data, and an AI model is only as good as its training data. That doesn't mean MDv5 doesn't support reptiles; sometimes it does amazingly well on reptile-heavy datasets. But sometimes it drives you bonkers by missing obvious reptiles.
If you want to read more about our favorite MD failure cases, check out the MegaDetector challenges page.
tl;dr: always test on your own data!
As per the training data section, MDv5 is actually two models (MDv5a and MDv5b), differing only in their training data. In fact, MDv5a's training data is a superset of MDv5b's training data. So, when should you use each? What should you do if MegaDetector is working, but not quite well enough for a difficult scenario, like the ones on our MegaDetector challenges page? Or what if MegaDetector is working great, but you're a perfectionist who wants to push the envelope on precision? This section is a very rough flowchart for how the MegaDetector developers choose MegaDetector versions/enhancements when presented with a new dataset.
1. The first thing we always run is MDv5a... 95% of the time, the flowchart stops here. That's in bold because we want to stress that this whole section is about the unusual case, not the typical case. There are enough complicated things in life, don't make choosing MegaDetector versions more complicated than it needs to be. Though FWIW, we're not usually trying to squeeze every bit of precision out of a particular dataset; we're almost always focused on recall (i.e., not missing animals). So if MDv5a is finding all the animals and the number of false positives is "fine", we don't usually run MDv5b, for example, just to see whether it would slightly further reduce the number of false positives.
2. If things are working great, but you're going to be using MegaDetector a lot and you want to add a step to your process that has a bit of a learning curve, but can eliminate a bunch of false positives once you get used to it, consider the repeat detection elimination process.
3. If anything looks off, specifically if you're missing animals that you think MegaDetector should be getting, or if you just want to see if you can squeeze a little more precision out, try MDv5b. Usually, we've found that MDv5a works at least as well as MDv5b, but every dataset is different. For example, WildEye did a thorough MegaDetector v5 evaluation and found slightly better precision with MDv5b. MDv5a is trained on everything MDv5b was trained on, plus some non-camera-trap data, so as a general rule, MDv5a may do slightly better on reptiles, birds, and distant vehicles. MDv5b may do slightly better on very dark or low-contrast images.
4. If you're still missing animals, but one or both models look close, try again using YOLOv5's test-time augmentation tools via this alternative MegaDetector inference script, which produces output in the same format as the standard inference script, but uses YOLOv5's native inference tools. It will run a little more slowly, and still lacks some of the bells and whistles of the standard inference script, but sometimes augmentation helps.
5. If something still looks off, try MDv4.
6. If none of the above are quite working well enough, but two or three of the above are close, try using merge_detections.py to get the best of both worlds, i.e. to take the high-confidence detections from multiple MegaDetector results files.
7. If things are still not good enough, we have a case where MD just seems not to work; that's what the MegaDetector challenges page is all about. Now we're in DIY territory.
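If you're curious what that merging step (step 6) amounts to conceptually: for each image, keep a base set of detections and pull in high-confidence detections from a second results file. The sketch below illustrates the idea only; it is not the actual merge_detections.py implementation (which, among other things, avoids adding duplicate boxes for the same animal), and the filenames are hypothetical.

import json

def merge_results(base_file, other_file, merged_file, other_threshold=0.5):
    # Load two MegaDetector results files for the same image set
    with open(base_file) as f:
        base = json.load(f)
    with open(other_file) as f:
        other = json.load(f)

    # Index the second file's detections by filename
    other_detections = {im['file']: im.get('detections', []) for im in other['images']}

    # Append high-confidence detections from the second file to the base results.
    # A real merge would also de-duplicate heavily-overlapping boxes.
    for im in base['images']:
        extra = [d for d in other_detections.get(im['file'], [])
                 if d['conf'] >= other_threshold]
        im['detections'] = im.get('detections', []) + extra

    with open(merged_file, 'w') as f:
        json.dump(base, f, indent=1)

# Hypothetical usage, e.g. merging MDv5a and MDv5b results:
# merge_results('mdv5a_results.json', 'mdv5b_results.json', 'merged_results.json')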
And please please please, if you find you need to do anything other than step 1 (simple MDv5a), please let us know! It's really helpful for us to hear about cases where MegaDetector either doesn't work well or requires extra tinkering.
If you use MegaDetector in a publication, please cite:
Beery S, Morris D, Yang S. Efficient pipeline for camera trap image review. arXiv preprint arXiv:1907.06772. 2019 Jul 15.
Please include the version of MegaDetector you used. If you are including any analysis of false positives/negatives, please be sure to specify the confidence threshold you used as well.
The same citation, in BibTex format:
@article{beery2019efficient,
title={Efficient Pipeline for Camera Trap Image Review},
author={Beery, Sara and Morris, Dan and Yang, Siyu},
journal={arXiv preprint arXiv:1907.06772},
year={2019}
}
Can do! See these slides.
Here's a "teaser" image of what detector output looks like:
Image credit University of Washington.
Here's a neat video of MDv2 running in a variety of ecosystems, on locations unseen during training.
Image credit eMammal. Video created by Sara Beery.
This model is trained on bounding boxes from a variety of ecosystems, and many of the images we use in training can't be shared publicly. But in addition to the private training data we use, we also use many of the bounding boxes available on lila.science:
https://lila.science/category/camera-traps/
Each version of MegaDetector uses all the training data from the previous version, plus a bunch of new stuff. Specifically...
MegaDetector v2 was trained on... actually, we don't remember, that was before the dawn of time.
MegaDetector v3 was trained on private data, plus public data from:
MegaDetector v4 was trained on all MDv3 training data, plus new private data, and new public data from:
MegaDetector v5b was trained on all MDv4 training data, plus new private data, and new public data from:
- Orinoquía Camera Traps
- SWG Camera Traps
- ENA24
- Wellington Camera Traps
- Several datasets from Snapshot Safari
The total dataset for MDv5b (including train/val/test) was around 2.3M boxes on 2.7M images, all of which are camera trap images.
MegaDetector v5a was trained on all MDv5b training data, and new (non-camera-trap) public data from:
So if MegaDetector performs really well on any of the above data sets, that's great, but it's a little bit cheating, because we haven't published the set of locations from those data sets that we use during training.
If you want to run scripts from this repo, but you won't actually be running MegaDetector, you can install a lighter-weight version of the same environment by doing the following:
- Install Miniforge, an environment for installing and running Python stuff. If you already have Anaconda installed, you can use that instead.
- Install git. If you're not familiar with git, we recommend installing git from git-scm (Windows link) (Mac link).
The remaining steps will assume you are running at a Miniforge prompt. You will know you are at a Miniforge prompt (as opposed to a run-of-the-mill command prompt) if you see an environment name in parentheses before your current directory, e.g. something like "(base) C:\Users\yourname>".
- In your Miniforge prompt, run the following to create your environment (on Windows):
mkdir c:\git
cd c:\git
git clone https://github.com/agentmorris/MegaDetector
cd c:\git\MegaDetector
mamba env create --file envs\environment.yml
mamba activate cameratraps
set PYTHONPATH=c:\git\MegaDetector
...or the following (on MacOS):
mkdir ~/git
cd ~/git
git clone https://github.com/agentmorris/MegaDetector
cd ~/git/MegaDetector
mamba env create --file envs/environment.yml
mamba activate cameratraps
export PYTHONPATH="$HOME/git/MegaDetector"
- Whenever you want to start this environment again, run the following (on Windows):
cd c:\git\MegaDetector
mamba activate cameratraps
set PYTHONPATH=c:\git\MegaDetector
...or the following (on MacOS):
cd ~/git/MegaDetector
mamba activate cameratraps
export PYTHONPATH="$HOME/git/MegaDetector"
Also, the environment file we're referring to in this section (envs/environment.yml, the one without all the MegaDetector stuff) doesn't get quite the same level of TLC that our MegaDetector environment does, so if anyone tries to run scripts that don't directly involve MegaDetector using this environment, and packages are missing, let us know.
We've historically gone a little bonkers making sure that MegaDetector results are absolutely repeatable, so have been very wary of changing PyTorch/YOLOv5 versions, or even Pillow versions. On top of that, various combinations of YOLOv5 and PyTorch versions were unable to load models trained with the specific versions that existed when MDv5 was created. The result of this is that our recommended environment uses older versions of PyTorch (1.10) and YOLOv5.
But... all of those incompatibilities have worked themselves out with only minimal changes to MegaDetector-related code, so as of 2023.09, you can run MegaDetector in the newest versions of Python (3.11.5), PyTorch (2.0.1), and YOLOv5, without having to clone the YOLOv5 repo separately. Results are very slightly different than they are in the recommended environment, typically around the third decimal place in both confidence values and box coordinates. But if you are OK living on the cutting edge with us, you can now set up MegaDetector like this, using a requirements.txt file that doesn't pin any package versions:
mkdir c:\git
cd c:\git
git clone https://github.com/agentmorris/MegaDetector
cd c:\git\MegaDetector
mamba create -n megadetector-pip python=3.11 pip -y
mamba activate megadetector-pip
pip install -r envs\requirements.txt
set PYTHONPATH=c:\git\MegaDetector
mkdir ~/git
cd ~/git
git clone https://github.com/agentmorris/MegaDetector
cd ~/git/MegaDetector
mamba create -n megadetector-pip python=3.11 pip -y
mamba activate megadetector-pip
pip install -r envs/requirements.txt
export PYTHONPATH="$HOME/git/MegaDetector"
YMMV.
If you're feeling even more experimental, this also works:
mamba create -n megadetector-pip python=3.11 pip -y
mamba activate megadetector-pip
pip install megadetector --upgrade
python -m megadetector.detection.run_detector_batch --help
This comes with the same caveats as above: this will not produce results that are literally identical to the training environment, so, YMMV. If you use this route, make sure the MegaDetector and YOLOv5 folders are not on your PYTHONPATH.