Training a YOLOv8 model for detection of intestinal organoids in brightfield images

(Figure: workflow diagram)

The goal of this repository is to train a custom YOLOv8 model that segments and classifies intestinal organoids and spheroids in brightfield images acquired with a widefield microscope. The trained model is used in the Instance Segmentation of Intestinal organoids and Spheroids from BrightField images using YOLOv8 (ISIS-BF) repository.

As a starting point, the ground-truth annotations for each raw image (.czi) are stored in a .tiff file, where each "channel" contains a binary mask defining the instances of one class. The training dataset can be downloaded here

In our particular dataset we have 3 classes of intestinal organoids: dead (or overgrown) organoids, differentiated (developed) organoids, and undifferentiated organoids (also known as spheroids). The resulting model will detect, segment, and classify each of these instances.

(Figure: example brightfield images of the three organoid classes)
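
Each of these classes needs an integer ID in the YOLO label files. A possible mapping is sketched below; the class names come from this dataset, but the specific IDs and their order are an assumption, since the actual assignment happens in the conversion notebooks.

    # Hypothetical class-ID mapping: names from this dataset, IDs assumed.
    # The conversion notebooks in this repository may use a different order.
    CLASS_IDS = {"dead": 0, "differentiated": 1, "undifferentiated": 2}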

In order to train YOLOv8, the initial binary masks defining the instances of each class must first be converted to COCO polygon .json files and then into YOLO-style polygon .txt files. Executing the notebooks and .py files in sequential order (1 to 5) performs this conversion; a sketch of the mask-to-polygon step is shown below.
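
As an illustration of that step, the minimal sketch below extracts the contour of each connected instance in one binary mask channel with OpenCV and writes it as a normalized YOLO segmentation line (class_id x1 y1 x2 y2 ...). The function name and the min_area threshold are hypothetical; the notebooks in this repository implement the actual conversion, including the intermediate COCO .json stage.

    import cv2
    import numpy as np

    def mask_channel_to_yolo(mask, class_id, min_area=10):
        """Convert one binary mask channel (one class) into YOLO polygon label lines."""
        h, w = mask.shape
        # Each connected blob of non-zero pixels is treated as one instance.
        contours, _ = cv2.findContours(
            (mask > 0).astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
        )
        lines = []
        for cnt in contours:
            if cv2.contourArea(cnt) < min_area or len(cnt) < 3:
                continue  # skip specks and degenerate contours
            pts = cnt.reshape(-1, 2).astype(float)
            pts[:, 0] /= w  # normalize x to [0, 1]
            pts[:, 1] /= h  # normalize y to [0, 1]
            coords = " ".join(f"{x:.6f} {y:.6f}" for x, y in pts)
            lines.append(f"{class_id} {coords}")
        return lines

One .txt label file per image would then concatenate the lines produced from all three class channels. Note that RETR_EXTERNAL merges touching blobs into a single contour, so a real pipeline has to treat overlapping organoids more carefully.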

Instructions

  1. To run these Jupyter notebooks you will need to familiarize yourself with the use of Python virtual environments via Mamba. See instructions here.

  2. Then you will need to create a virtual environment (venv), either by running the following command or by recreating the environment from the .yml file found in the envs folder:

    mamba create -n int_organoids python=3.9 devbio-napari cellpose pytorch torchvision plotly pyqt ultralytics -c conda-forge -c pytorch

  3. To recreate the venv from the environment.yml file stored in the envs folder (recommended), navigate into the envs folder using cd in your console and then execute:

    mamba env create -f environment.yml

  4. The resulting environment will allow you to run the model training on the CPU. If you want to leverage your CUDA GPU, you will need to check CUDA and cuDNN version compatibility with your hardware. In my case I have CUDA 12.1 and cuDNN 8.0, hence I can use the following command to create a working venv that leverages the CUDA cores in my GPU for training the YOLOv8 model:

    mamba create -n int_organoids_GPU python=3.9 devbio-napari cellpose pytorch=2.1.2=py3.9_cuda12.1_cudnn8_0 torchvision plotly pyqt ultralytics python-kaleido -c conda-forge -c pytorch -c nvidia

  5. If you happen to have the same CUDA and cuDNN versions, you can recreate the venv from the environment_GPU.yml file stored in the envs folder, following the same procedure as in step 3. A quick way to verify that the GPU environment works is shown below.
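
After activating the environment (mamba activate int_organoids_GPU), the following snippet, an illustrative sanity check rather than part of the repository, confirms that PyTorch was installed with CUDA support before you launch training:

    import torch

    print(torch.__version__)          # should report a CUDA build, e.g. 2.1.2
    print(torch.cuda.is_available())  # True if the GPU environment is set up correctly
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # name of the detected CUDA device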
