This repository provides inference code for CellSAM. CellSAM is described in more detail in the preprint and is publicly deployed at cellsam.deepcell.org. CellSAM achieves state-of-the-art segmentation performance across a variety of cellular targets (bacteria, tissue, yeast, cell culture, etc.) and imaging modalities (brightfield, fluorescence, phase, etc.). Feel free to reach out for support or questions! The full dataset used to train CellSAM is available here.
The easiest way to get started with CellSAM is with pip:
pip install git+https://github.com/vanvalenlab/cellSAM.git
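A quick way to confirm the install is to import the package in Python (the module name is cellSAM, as used in the example below); a minimal check:
import cellSAM  # should import without errors if the install succeeded
print(cellSAM.__file__)  # shows where the package was installed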
CellSAM requires python>=3.10, but otherwise uses pure PyTorch. A sample image is included in this repository; segmentation can be performed as follows:
import numpy as np
from cellSAM import segment_cellular_image

# Load the bundled sample image and run CellSAM inference on the GPU.
img = np.load("sample_imgs/yeaz.npy")
mask, _, _ = segment_cellular_image(img, device='cuda')  # use device='cpu' if no GPU is available
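The returned mask is an instance segmentation of the input. A minimal sketch for inspecting it, assuming the mask is an integer label image with 0 as background and that matplotlib is installed (it is not a CellSAM dependency):
import matplotlib.pyplot as plt
# Each nonzero label is assumed to correspond to one segmented cell.
print(f"Detected {len(np.unique(mask)) - 1} cells")
# Quick visual check of the label image.
plt.imshow(mask)
plt.axis("off")
plt.show()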
For more details, see cellsam_introduction.ipynb.
CellSAM includes a basic napari package for annotation. To install the additional napari dependencies, use pip:
pip install git+https://github.com/vanvalenlab/cellSAM.git#egg=cellsam[napari]
To launch the napari app, run cellsam napari.
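Independently of the cellsam napari app, you can also inspect results in a plain napari viewer once the extra dependencies are installed. A minimal sketch, reusing img and mask from the segmentation example above:
import napari
# Display the raw image with the CellSAM mask overlaid as a labels layer.
viewer = napari.Viewer()
viewer.add_image(img, name="image")
viewer.add_labels(mask.astype(int), name="cellsam mask")
napari.run()  # starts the napari event loop; not needed inside Jupyter/IPython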
Please cite us if you use CellSAM.
@article{israel2023foundation,
  title={A Foundation Model for Cell Segmentation},
  author={Israel, Uriah and Marks, Markus and Dilip, Rohit and Li, Qilin and Schwartz, Morgan and Pradhan, Elora and Pao, Edward and Li, Shenyi and Pearson-Goulart, Alexander and Perona, Pietro and others},
  journal={bioRxiv},
  year={2023},
  publisher={Cold Spring Harbor Laboratory Preprints},
  doi={10.1101/2023.11.17.567630},
}