🆕 Integrate Foundation Models Available via timm: UNI, Prov-GigaPath, H-optimus-0 #856
Conversation
Only added UNI and Prov-GigaPath for now. Will add more after initial comments. I do not like that
I found this file for testing: https://github.com/TissueImageAnalytics/tiatoolbox/blob/develop/tests/models/test_arch_vanilla.py There might be a problem regarding memory and compute resources when running some of the larger feature extractors through GitHub Actions, e.g. Prov-GigaPath needs a considerable amount of memory just to be loaded.
As this is just testing the functionality and not loading weights, I hope this will work.
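A minimal sketch of such a weights-free smoke test, assuming only `timm` and `torch` are available (a plain ViT stands in for the gated foundation models so that CI needs no HuggingFace Hub access):

```python
import timm
import torch


def test_timm_forward_without_weights():
    """Check the forward-pass wiring without downloading any pretrained weights."""
    # pretrained=False builds only the architecture; no checkpoint is fetched,
    # so memory and network use in CI stays small.
    model = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=0)
    model.eval()

    dummy_batch = torch.rand(2, 3, 224, 224)  # two fake 224x224 RGB patches
    with torch.no_grad():
        features = model(dummy_batch)

    # num_classes=0 removes the classifier head, so the output is pooled features.
    assert features.shape == (2, model.num_features)
```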
In that case, you can inherit
`PatchPredictor` uses all the functionalities of `EngineABC` other than the ones defined explicitly.
This allows reusing the `infer_batch` method of `CNNBackbone`.
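A rough sketch of how a timm-based backbone could reuse a shared `infer_batch` through inheritance; `SimpleBackboneBase` and `TimmBackboneSketch` are stand-ins for the actual tiatoolbox classes, whose real signatures may differ:

```python
import timm
import torch
from torch import nn


class SimpleBackboneBase(nn.Module):
    """Stand-in for CNNBackbone; the real tiatoolbox class may differ."""

    @staticmethod
    def infer_batch(model: nn.Module, batch_data: torch.Tensor, device: str = "cpu"):
        """Shared inference logic: move data, run the forward pass, return numpy features."""
        model = model.to(device).eval()
        with torch.no_grad():
            output = model(batch_data.to(device))
        return output.cpu().numpy()


class TimmBackboneSketch(SimpleBackboneBase):
    """Feature extractor built via timm; reuses the inherited infer_batch as-is."""

    def __init__(self, backbone: str, pretrained: bool = True) -> None:
        super().__init__()
        # num_classes=0 strips the classification head, leaving pooled features.
        self.feat_extract = timm.create_model(
            backbone, pretrained=pretrained, num_classes=0
        )

    def forward(self, imgs: torch.Tensor) -> torch.Tensor:
        return self.feat_extract(imgs)


# Usage: the subclass never re-implements infer_batch.
model = TimmBackboneSketch("vit_base_patch16_224", pretrained=False)
feats = TimmBackboneSketch.infer_batch(model, torch.rand(1, 3, 224, 224))
```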
timm: UNI, Virchow, Hibou, H-optimus-0
I have updated this branch to make sure that tests pass on Ubuntu-24 before we merge it into `develop`.
- remove explicit assert statement for `timm` version
- add `timm` version check into the if statement for Prov-GigaPath
- add comment about `timm` version for timm-based models

Co-authored-by: Shan E Ahmed Raza <13048456+shaneahmed@users.noreply.github.com>
was not removed while accepting suggested changes
…fied during the init of super(), which is CNNModel
- As the requirements specify `timm>=1.0.3`, there is no need to check this separately.

Co-authored-by: Shan E Ahmed Raza <13048456+shaneahmed@users.noreply.github.com>
1. Split `test_functional` into `test_engine` and `test_full_inference`
2. In `test_full_inference`, use `@pytest.mark.parametrize` for `CNNBackbone` and `TimmBackbone` instead of making two copies of the function
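A hedged sketch of the `@pytest.mark.parametrize` idea; the import path, constructor signatures, and the `"UNI"` backbone string are assumptions taken from this discussion rather than the exact test code in the PR:

```python
import pytest
import torch

# Assumed import location, following this PR's changes to vanilla.py.
from tiatoolbox.models.architecture.vanilla import CNNBackbone, TimmBackbone


@pytest.mark.parametrize(
    ("backbone_cls", "backbone_kwargs"),
    [
        (CNNBackbone, {"backbone": "resnet18"}),
        # pretrained=False so CI does not need access to gated HuggingFace weights
        # (whether this works offline depends on the wrapper implementation).
        (TimmBackbone, {"backbone": "UNI", "pretrained": False}),
    ],
)
def test_full_inference(backbone_cls, backbone_kwargs):
    """One test body covers both backbone families instead of two near-copies."""
    model = backbone_cls(**backbone_kwargs)
    batch = torch.rand(2, 3, 224, 224)
    with torch.no_grad():
        features = model(batch)
    assert features.ndim == 2  # (batch_size, feature_dim)
```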
I think we should add a notebook to show how to
This is precisely what I wanted to know how to do, and there was no end-to-end example notebook. Alternatively, we can add code for saving a mask into the Masking Notebook, which currently does not show how to save the masks:

```python
import os

import numpy as np

from tiatoolbox.utils.misc import imwrite
from tiatoolbox.wsicore.wsireader import WSIReader

# slide_path and slide_name are defined earlier in the notebook.
# Read the slide and compute a low-resolution tissue mask.
wsi = WSIReader.open(slide_path)
mask = wsi.tissue_mask(resolution=1.25, units="power")
mask_thumbnail = mask.slide_thumbnail(resolution=1.25, units="power")

# Save the binary mask as an 8-bit PNG.
mask_thumbnail_path = os.path.join(f"{slide_name}_mask.png")
imwrite(mask_thumbnail_path, np.uint8(mask_thumbnail * 255))
```

And add
There are 3 more families of models that can be integrated, but they require their own files to be created: Hibou (-B and -L), Phikon (v1 and v2), and Virchow (v1 and v2). Should I add those files and call them from …? Hibou requires trusting remote code when creating the model, which I do not really like.
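For reference, this is roughly what loading Hibou via HuggingFace `transformers` involves; the repo id `histai/hibou-b` and the keyword arguments are taken from the public model card and should be double-checked:

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, AutoModel

# Assumed repo id; see the Hibou model card for the exact name and licence terms.
REPO_ID = "histai/hibou-b"

# trust_remote_code=True executes Python files shipped inside the model repo,
# which is exactly the part of this integration flagged as undesirable above.
processor = AutoImageProcessor.from_pretrained(REPO_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(REPO_ID, trust_remote_code=True)

image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Which output to use as the patch embedding depends on the model definition.
features = outputs.last_hidden_state[:, 0]
```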
…et50") into the full pipelines version of slide-graph
@shaneahmed, I think this branch is ready to be merged into `develop`.

I added code for saving a mask into the Masking Notebook, which currently does not show how to save the masks.

I also added a comment showing that `TimmBackbone` could be used as an alternative:

```python
model = CNNBackbone("resnet50")  # TimmBackbone(backbone="UNI", pretrained=True)
```

in both slide graph notebooks.
Thanks @GeorgeBatch! This would be very helpful.
Making a pull request as discussed in issue #855
Copied from the issue:
I think it would be useful to integrate pre-trained foundation models from other labs into `tiatoolbox.models.architecture.vanilla.py`.
Currently, the `_get_architecture()` function allows the use of models from `torchvision.models`. But another function, `_get_timm_architecture()`, could be made to incorporate foundation models which are available from `timm` with weights on the HuggingFace Hub. All the models from `timm` that I've used require users to sign a licence agreement with the authors, so the licensing question seems to solve itself, since there is no way users will get access to the model weights through Tiatoolbox without first having their access request approved by the authors.
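To make the proposal concrete, here is a hedged sketch of what such a `_get_timm_architecture()` helper could look like; the `hf-hub` ids and extra keyword arguments follow the public model cards for UNI, Prov-GigaPath and H-optimus-0 and may need verification:

```python
import timm
from torch import nn


def _get_timm_architecture(arch_name: str, pretrained: bool = True) -> nn.Module:
    """Map a friendly model name onto a timm model hosted on the HuggingFace Hub."""
    # Hub ids and kwargs below are assumptions based on the respective model cards.
    registry = {
        "UNI": {
            "model_name": "hf-hub:MahmoodLab/UNI",
            "init_values": 1e-5,
            "dynamic_img_size": True,
        },
        "prov-gigapath": {"model_name": "hf_hub:prov-gigapath/prov-gigapath"},
        "H-optimus-0": {
            "model_name": "hf-hub:bioptimus/H-optimus-0",
            "init_values": 1e-5,
            "dynamic_img_size": False,
        },
    }
    if arch_name not in registry:
        msg = f"Backbone {arch_name!r} is not supported."
        raise ValueError(msg)

    kwargs = dict(registry[arch_name])
    model_name = kwargs.pop("model_name")
    # num_classes=0 removes any classification head: the model returns features only.
    return timm.create_model(model_name, pretrained=pretrained, num_classes=0, **kwargs)
```

Even with such a helper, the gated weights will only download for users who have accepted the licence on HuggingFace and authenticated, e.g. via `huggingface-cli login`.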