
napari-affinities


A plugin for creating, visualizing, and processing affinities


This napari plugin was generated with Cookiecutter using @napari's cookiecutter-napari-plugin template.

Installation

You will need a conda environment for everything to run smoothly. Supported Python versions are 3.7, 3.8, and 3.9.

pip

You can install napari-affinities via pip:

`pip install napari-affinities`

To install the latest development version:

`pip install git+https://github.com/pattonw/napari-affinities.git`

Install PyTorch according to your system (follow the instructions on the PyTorch website). For example, with CUDA 10.2 available, run:

`conda install pytorch torchvision cudatoolkit=10.2 -c pytorch`
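
A quick, optional sanity check that the PyTorch build you installed can actually see the GPU (a minimal sketch using the standard PyTorch API; nothing here is specific to this plugin):

```python
# Confirm that PyTorch is installed and can see a CUDA device.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # Name of the first visible CUDA device.
    print("device:", torch.cuda.get_device_name(0))
```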

Install conda requirements:

`conda install -c conda-forge affogato`
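
You can likewise confirm that affogato is importable in the same environment. The `compute_mws_segmentation` import below reflects the function name in current affogato releases and is what mutex watershed post-processing relies on:

```python
# Confirm that affogato is importable in the active conda environment.
import affogato
from affogato.segmentation import compute_mws_segmentation  # mutex watershed entry point

print("affogato imported from:", affogato.__file__)
```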

conda

If you install via conda, there are fewer steps since affogato and pytorch will be installed for you.

You can install napari-affinities via conda:

`conda install -c conda-forge napari-affinities`

Download example models:

2D:

Download the epithelial example model and place the model zip file wherever you want. You can open it in the plugin with the "load from file" button.

3D:

Download the lightsheet example model and unpack the tar file into test data (lightsheet_nuclei_test_data, an hdf5 file) and a model (LightsheetNucleusSegmentation.zip, a bioimageio model). Move the data into sample_data, which will enable you to load the "Lightsheet Sample" data in napari. Place the model zip file anywhere you want; you can open it in the plugin with the "load from file" button.
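
If you want to look at the test data before loading it in napari, a short h5py snippet can list its contents. The path below is an assumption; adjust it to wherever you unpacked the tar file and to whatever filename the hdf5 file actually has:

```python
# List the datasets inside the unpacked lightsheet test data (an hdf5 file).
import h5py

# Hypothetical path: adjust to your unpacked location and actual filename.
path = "sample_data/lightsheet_nuclei_test_data.h5"

with h5py.File(path, "r") as f:
    def show(name, obj):
        # Print every dataset with its shape and dtype.
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)

    f.visititems(show)
```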

Workarounds to be fixed:
  1. You need to update the rdf.yaml in LightsheetNucleusSegmentation.zip as follows:
    • The "shape" for "input0" should be updated with a larger minimum input size, and "output0" should be updated with a larger halo. If this is not fixed, there will be significant tiling artifacts.
    • (Optional) "output0" should be renamed to "affinities". The plugin supports multiple outputs and relies on names to figure out which one is which. If unrecognized names are provided, we assume the outputs are ordered (affinities, fgbg, lsds), but this is less reliable than explicit names.
  2. This model also generates foreground in the same array as affinities, i.e. a 10-channel output (fgbg, [-1, 0, 0], [0, -1, 0], [0, 0, -1], [-2, 0, 0], ...). Although predictions will work, post-processing such as mutex watershed will break unless you manually separate the first channel, as sketched below.
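
A minimal sketch of the manual workaround for item 2, assuming the prediction arrives as a single numpy array with the foreground channel first and the nine affinity channels after it. The call to `compute_mws_segmentation` follows affogato's mutex watershed API with its three positional arguments; the long-range offsets beyond `[-2, 0, 0]` are placeholders here and should be taken from the model's rdf.yaml:

```python
# Split the 10-channel prediction into foreground and affinities before post-processing.
import numpy as np
from affogato.segmentation import compute_mws_segmentation

# Stand-in for a real prediction of shape (10, z, y, x): channel 0 is fgbg, 1-9 are affinities.
prediction = np.random.rand(10, 32, 64, 64)

fgbg = prediction[0]          # foreground/background channel
affinities = prediction[1:]   # the nine affinity channels

# Offsets matching the channel order above. Only the first four are listed in this README;
# the rest are illustrative placeholders, so use the offsets from the model's rdf.yaml.
offsets = [
    [-1, 0, 0], [0, -1, 0], [0, 0, -1],
    [-2, 0, 0], [0, -2, 0], [0, 0, -2],
    [-4, 0, 0], [0, -4, 0], [0, 0, -4],
]

# Mutex watershed on the affinity channels only; the three nearest-neighbor offsets are attractive.
segmentation = compute_mws_segmentation(affinities, offsets, 3)
print(segmentation.shape, segmentation.max())
```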

Use

Requirements for the model:

  1. A bioimageio-packaged PyTorch model
  2. Outputs with names "affinities", "fgbg" (optional), and "lsds" (optional)
    • If these names are not used, the outputs are assumed to be affinities, fgbg, then lsds, in that order (a quick way to inspect a packaged model's output names is sketched below)
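
If you want to check a packaged model against these requirements before loading it, you can inspect its metadata with bioimageio.core. This is a hedged sketch: `load_resource_description` and the `outputs[i].name` attribute follow the 0.4-era bioimage.io model spec; newer bioimageio.core releases may expose different loader and field names, so adjust accordingly:

```python
# Report the output names of a bioimageio model zip before loading it in the plugin.
from bioimageio.core import load_resource_description

# Hypothetical path: point this at your downloaded model zip.
model = load_resource_description("LightsheetNucleusSegmentation.zip")

output_names = [out.name for out in model.outputs]
print("outputs:", output_names)

# The plugin looks for these names; anything else falls back to positional ordering.
for expected in ("affinities", "fgbg", "lsds"):
    status = "found" if expected in output_names else "missing (output order will be assumed)"
    print(f"{expected}: {status}")
```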

Contributing

Contributions are very welcome. Tests can be run with tox; please ensure that coverage at least stays the same before you submit a pull request.

License

Distributed under the terms of the MIT license, "napari-affinities" is free and open source software.

Issues

If you encounter any problems, please file an issue along with a detailed description.