If you are new to Scivision, start with the website.
The Scivision project is building:
- Several components hosted in this GitHub repository:
  - A catalog of community-curated computer vision models and datasets from the sciences and humanities
  - A Python package for conveniently downloading and using these models and datasets from Python (scivision on PyPI)
  - Documentation for the above (documentation website)
  - A gallery of notebooks using Scivision models and datasets
- A community of computer vision practitioners in the sciences and humanities (mailing list, get a Slack invitation)
- A (nascent) ecosystem of computer vision tools and utilities
Submit a bug or feature request via the repository's issue tracker.

If you would like a link to a model or datasource to be listed in the catalog, such a contribution would be gratefully received: see the Contributing Guide for how to set up and submit a new entry. Pull requests for code changes are also welcome.
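For orientation, the sketch below shows the kind of metadata a catalog entry carries. The field names are inferred from the catalog columns displayed later in this README, not from the actual submission schema, so treat every name and value as illustrative; the Contributing Guide documents the real format.

```python
# Hypothetical illustration only: field names are inferred from the catalog
# columns shown later in this README; the Contributing Guide has the real schema.
example_model_entry = {
    "name": "my-classifier",                                   # hypothetical name
    "description": "An example image classifier",
    "tasks": ["classification"],
    "url": "https://github.com/example-org/my-classifier",     # hypothetical URL
    "pkg_url": "git+https://github.com/example-org/my-classifier@main",
    "format": "image",
    "pretrained": True,
    "labels_required": True,
    "institution": ["example-org"],
    "tags": ["2D", "demo"],
}
```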
The Scivision project is funded by the Alan Turing Institute.
The main project repository on GitHub hosts:
- development of the Python package (in the root directory)
- development of the website (in `frontend`)
- the documentation sources (in `docs`)
A quick overview of using the Scivision.Py Python package.
$ pip install scivision
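To confirm the installation, a quick check that uses only the standard library (it assumes nothing about Scivision beyond it being an installed package):

```python
# Report the installed scivision version
from importlib.metadata import version

print(version("scivision"))
```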
Load a pretrained model from its Scivision repository:

from scivision import load_pretrained_model

resnet18 = load_pretrained_model(
    # The model URL
    "https://github.com/alan-turing-institute/scivision_classifier",
    # A Scivision model can contain several variants -- below we select the one to use
    model_selection='resnet18',
    # Allow the model and its dependencies to be installed if they are not already
    # (including tensorflow in this example)
    allow_install=True,
)
We can give an image as input to the model. Any image data compatible with numpy (an 'Array_like') is accepted. We can obtain some image data by loading a Scivision datasource.
from scivision import load_dataset

dataset = load_dataset('https://github.com/alan-turing-institute/scivision-test-data')

# 'dataset' provides several named arrays. This datasource provides one named 'test_image';
# the keys can be looked up with `list(dataset)` (or by consulting the datasource documentation).
test_image = dataset['test_image'].read()
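As a small sketch of exploring an unfamiliar datasource (assuming, as with this test datasource, that each named entry reads into an array-like object):

```python
# List the named arrays this datasource provides, read each one,
# and print its shape (if it has one) to see what we are working with
for key in list(dataset):
    array = dataset[key].read()
    print(key, getattr(array, "shape", "no shape attribute"))
```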
Optionally, inspect the image (with matplotlib, for example):
import matplotlib.pyplot as plt

plt.imshow(test_image)
plt.show()  # needed when running as a script rather than in a notebook
# Run the classifier on the image
resnet18.predict(test_image)
Output: koala : 99.78%
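Since the model accepts any array_like image, a Scivision datasource is not required. As a minimal sketch, assuming scikit-image is installed (its bundled sample photo stands in for your own data):

```python
import skimage.data

# chelsea() returns an RGB numpy array (a sample cat photo shipped with scikit-image)
cat_image = skimage.data.chelsea()

# Pass the plain array straight to the model, exactly as with the datasource image above
resnet18.predict(cat_image)
```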
The default Scivision catalog of models and datasources can also be queried from Python:

from scivision import default_catalog

# The datasource catalog as a Pandas dataframe
default_catalog.datasources.to_dataframe()

# Similarly for the model catalog
default_catalog.models.to_dataframe()
Output:
|   | name | description | tasks | url | pkg_url | format | scivision_usable | pretrained | labels_required | institution | tags |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | stardist | Single class object detection and segementation of star-convex polygons | (<TaskEnum.object_detection: 'object-detection'>, <TaskEnum.segmentation: 'segmentation'>) | https://github.com/stardist/stardist | git+https://github.com/stardist/stardist.git@master | image | False | True | True | ('epfl',) | ('2D', '3D', 'optical-microscopy', 'xray', 'microtomography', 'cell-counting', 'plant-phenotyping', 'climate-change-and-agriculture') |
| 1 | PlantCV | Open-source image analysis software package targeted for plant phenotyping | (<TaskEnum.segmentation: 'segmentation'>, <TaskEnum.thresholding: 'thresholding'>, <TaskEnum.object_detection: 'object-detection'>) | https://github.com/danforthcenter/plantcv | git+https://github.com/danforthcenter/plantcv@main | image | False | True | True | ('danforthcenter',) | ('2D', 'hyperspectral', 'multispectral', 'near-infrared', 'infrared', 'plant-phenotyping', 'climate-change-and-agriculture') |
| ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
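The catalog dataframes are ordinary pandas objects, so they can be filtered with standard pandas operations. A minimal sketch based on the columns shown above (the 'tasks' column holds a tuple of task enums, so we match on their string values):

```python
from scivision import default_catalog

models_df = default_catalog.models.to_dataframe()

# Keep only the models whose declared tasks include segmentation
segmentation_models = models_df[
    models_df["tasks"].apply(lambda tasks: any("segmentation" in str(t) for t in tasks))
]
print(segmentation_models[["name", "url"]])
```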
- Contributing an entry to the catalog: see the Contributing Guide
- The catalogs are browsable online
Thanks goes to these wonderful people (emoji key).
This project follows the all-contributors specification. Contributions of any kind welcome!