Major dependencies: PyQt5 and OpenCV 2
Usage:
cd into the root (master) directory
python labelme/main.py --nodata
(--nodata excludes image data from .json, more details below)
In labelme:
Ctrl+R to draw a rectangle.
File > Save will save your progress so far to a <filename>.json file. Make sure to save it in the same directory as the images.
Click edit polygon to duplicate bounding boxes or edit their sizes.
See *Note below about bbox labels
Hold checkbox checked - keeps all OpenCV modifications stacked, i.e. you can edge detect on top of gamma correction and vice versa.
Hold checkbox unchecked - edge detection/gamma correction is applied to the original image rather than stacked on the other.
Edge detection - edge detection is very sensitive to the gamma level, so you'll need to adjust the thresholds in labelme/app.py, line 1768 (Ctrl+F cv.Canny); see the sketch after this list.
TODO - add textboxes in the labelme Qt GUI for the thresholds above
Clear - clears all OpenCV modifications
Gamma slider - do not click directly on the slider's final position; drag the slider from its original position to the new one so that Qt correctly registers the change.
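For reference, a minimal sketch of the kind of OpenCV calls involved, written as a standalone script rather than the code in labelme/app.py; the gamma value, the Canny thresholds (100, 200), and the filename grape.jpg are placeholders to adjust, not the repo's actual values.

import cv2 as cv
import numpy as np

def apply_gamma(img, gamma=1.5):
    # Map 8-bit pixel values through a gamma curve via a lookup table.
    table = np.array([255.0 * (i / 255.0) ** (1.0 / gamma) for i in range(256)]).astype(np.uint8)
    return cv.LUT(img, table)

img = cv.imread("grape.jpg")                        # placeholder input image
corrected = apply_gamma(img, gamma=1.5)             # "Hold" checked: gamma first, then edges on top
gray = cv.cvtColor(corrected, cv.COLOR_BGR2GRAY)
edges = cv.Canny(gray, 100, 200)                    # placeholder thresholds; tune these for your gamma level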
See create_grape_tf_record_json.py for creating tfrecord from jpgs and jsons.\
python create_grape_tf_record_json.py [labeled images path] [tfrecord output path]\
You can also hardcode paths in main() of create_grape_tf_record_json.py instead of using the command line. Be careful, as they currently default to my absolute paths.\
*Note: Object labels weren't specified before and the all_classes list was hardcoded as ['zero_class', 'object'], but the create_tf_record script builds the all_classes list from the JSON files. So if you use Grape as your label it will be ['zero_class', 'Grape'], and likewise if you use object as the label. I think this is relevant because there is a separate file for the TF model where you specify the classes.
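As a rough illustration (not the actual code in create_grape_tf_record_json.py), building the class list from the labels found in the labelme JSON files could look like the following; the directory path is a placeholder:

import glob
import json

all_classes = ["zero_class"]
for json_path in glob.glob("labeled_images/*.json"):      # placeholder path to your labeled images
    with open(json_path) as f:
        for shape in json.load(f)["shapes"]:
            if shape["label"] not in all_classes:
                all_classes.append(shape["label"])
print(all_classes)    # e.g. ['zero_class', 'Grape'] if 'Grape' was the label used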
Labelme is a graphical image annotation tool inspired by http://labelme.csail.mit.edu.
It is written in Python and uses Qt for its graphical interface.
VOC dataset example of instance segmentation.
Other examples (semantic segmentation, bbox detection, and classification).
Various primitives (polygon, rectangle, circle, line, and point).
- Image annotation for polygon, rectangle, circle, line and point. (tutorial)
- Image flag annotation for classification and cleaning. (#166)
- Video annotation. (video annotation)
- GUI customization (predefined labels / flags, auto-saving, label validation, etc). (#144)
- Exporting VOC-format dataset for semantic/instance segmentation. (semantic segmentation, instance segmentation)
- Exporting COCO-format dataset for instance segmentation. (instance segmentation)
- Ubuntu / macOS / Windows
- Python2 / Python3
- PyQt4 / PyQt5 / PySide2
There are options:
- Platform agnostic installation: Anaconda, Docker
- Platform specific installation: Ubuntu, macOS, Windows
You need to install Anaconda, then run the following:
# python2
conda create --name=labelme python=2.7
source activate labelme
# conda install -c conda-forge pyside2
conda install pyqt
pip install labelme
# if you'd like to use the latest version, run:
# pip install git+https://github.com/wkentaro/labelme.git
# python3
conda create --name=labelme python=3.6
source activate labelme
# conda install -c conda-forge pyside2
# conda install pyqt
pip install pyqt5 # pyqt5 can be installed via pip on python3
pip install labelme
You need to install Docker, then run the following:
wget https://raw.githubusercontent.com/wkentaro/labelme/master/labelme/cli/on_docker.py -O labelme_on_docker
chmod u+x labelme_on_docker
# Maybe you need http://sourabhbajaj.com/blog/2017/02/07/gui-applications-docker-mac/ on macOS
./labelme_on_docker examples/tutorial/apc2016_obj3.jpg -O examples/tutorial/apc2016_obj3.json
./labelme_on_docker examples/semantic_segmentation/data_annotated
# Ubuntu 14.04 / Ubuntu 16.04
# Python2
# sudo apt-get install python-qt4 # PyQt4
sudo apt-get install python-pyqt5 # PyQt5
sudo pip install labelme
# Python3
sudo apt-get install python3-pyqt5 # PyQt5
sudo pip3 install labelme
# macOS Sierra
brew install pyqt # maybe pyqt5
pip install labelme # both python2/3 should work
# or install standalone executable / app
brew install wkentaro/labelme/labelme
brew cask install wkentaro/labelme/labelme
First, follow the instructions in the Anaconda section above.
# Pillow 5 causes dll load error on Windows.
# https://github.com/wkentaro/labelme/pull/174
conda install pillow=4.0.0
Run labelme --help for details.
The annotations are saved as a JSON file.
labelme # just open gui
# tutorial (single image example)
cd examples/tutorial
labelme apc2016_obj3.jpg # specify image file
labelme apc2016_obj3.jpg -O apc2016_obj3.json # close window after the save
labelme apc2016_obj3.jpg --nodata # do not include image data, only the relative image path, in the JSON file
labelme apc2016_obj3.jpg \
--labels highland_6539_self_stick_notes,mead_index_cards,kong_air_dog_squeakair_tennis_ball # specify label list
# semantic segmentation example
cd examples/semantic_segmentation
labelme data_annotated/ # Open directory to annotate all images in it
labelme data_annotated/ --labels labels.txt # specify label list with a file
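As a quick illustration of the saved format, a minimal sketch of inspecting one of these annotation files from Python (field names follow the labelme JSON format; the filename is the tutorial example's):

import json

with open("apc2016_obj3.json") as f:
    data = json.load(f)

print(data["imagePath"])                      # relative path to the annotated image
for shape in data["shapes"]:
    print(shape["label"], shape["points"])    # each shape carries a label and a list of [x, y] points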
For more advanced usage, please refer to the examples:
- Tutorial (Single Image Example)
- Semantic Segmentation Example
- Instance Segmentation Example
- Video Annotation Example
- --output specifies the location that annotations will be written to. If the location ends with .json, a single annotation will be written to this file; only one image can be annotated if a location is specified with .json. If the location does not end with .json, the program will assume it is a directory. Annotations will be stored in this directory with a name that corresponds to the image that the annotation was made on.
- The first time you run labelme, it will create a config file in ~/.labelmerc. You can edit this file and the changes will be applied the next time that you launch labelme. If you would prefer to use a config file from another location, you can specify this file with the --config flag.
- Without the --nosortlabels flag, the program will list labels in alphabetical order. When the program is run with this flag, it will display labels in the order that they are provided.
- Flags are assigned to an entire image. Example
- Labels are assigned to a single polygon. Example (see the sketch below for where each lives in the JSON)
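To make the flag/label distinction concrete, a hedged sketch of where each lives in the saved JSON (field names follow the labelme format; the flag name and values here are made up):

annotation = {
    "flags": {"needs_review": True},                                     # image-level flags
    "shapes": [
        {"label": "Grape", "points": [[10, 20], [30, 40], [10, 40]]},    # per-polygon label
    ],
}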
- How to convert JSON file to numpy array? See examples/tutorial (and the sketch after this list).
- How to load label PNG file? See examples/tutorial.
- How to get annotations for semantic segmentation? See examples/semantic_segmentation.
- How to get annotations for instance segmentation? See examples/instance_segmentation.
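For example, if the file was saved without --nodata, the base64-encoded imageData field can be decoded back into a numpy array as sketched below (the tutorial also covers this conversion):

import base64
import json

import cv2
import numpy as np

with open("apc2016_obj3.json") as f:
    data = json.load(f)

# imageData holds the base64-encoded bytes of the original image file.
img_bytes = base64.b64decode(data["imageData"])
img = cv2.imdecode(np.frombuffer(img_bytes, dtype=np.uint8), cv2.IMREAD_COLOR)
print(img.shape)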
pip install hacking pytest pytest-qt
flake8 .
pytest -v tests
git clone https://github.com/wkentaro/labelme.git
cd labelme
# Install anaconda3 and labelme
curl -L https://github.com/wkentaro/dotfiles/raw/master/local/bin/install_anaconda3.sh | bash -s .
source .anaconda3/bin/activate
pip install -e .
Below shows how to build the standalone executable on macOS, Linux and Windows.
Also, there are pre-built executables in
the release section.
# Setup conda
conda create --name labelme python==3.6.0
conda activate labelme
# Build the standalone executable
pip install .
pip install pyinstaller
pyinstaller labelme.spec
dist/labelme --version
This repo is a fork of mpitid/pylabelme, whose development has already stopped.
If you use this project in your research or wish to refer to the baseline results published in the README, please use the following BibTeX entry.
@misc{labelme2016,
author = {Kentaro Wada},
title = {{labelme: Image Polygonal Annotation with Python}},
howpublished = {\url{https://github.com/wkentaro/labelme}},
year = {2016}
}