Commit

Nnunet demo (#40)

* uncommented one line pytest

* uncommented one line pytest

* self.existing, dataset.csv fixed (#10)

* Added test autopipeline and modalities, solved some autopipeline bugs, read_dicom_series and pet now supports series_id

* PT/RTDOSE metadata to csv

* fixed some bugs in autopipeline.py

* now the pipeline saves on exit

* deleted data

* now checks for existing subject id

* uncommented one line pytest

* uncommented one line pytest

Co-authored-by: Vishwesh <vishweshramanathan@gmail.com>

* Added dataset class which can load from nrrds or directly from the dataset and convert to pytorch dataset

* Create build-ci.yml

* Update build-ci.yml

* Update requirements.txt

* bug fixes_1.0

* test and autopipe fixed

* bug fixes 2

* bug fixes 2

* added visualizations and some more bug fixes

* Create manual-test.yml

* Update build-ci.yml

* Update manual-test.yml

* PR tests - macos/ubuntu failing (#13)

* Added test autopipeline and modalities, solved some autopipeline bugs, read_dicom_series and pet now supports series_id

* PT/RTDOSE metadata to csv

* fixed some bugs in autopipeline.py

* now the pipeline saves on exit

* deleted data

* now checks for existing subject id

* uncommented one line pytest

* uncommented one line pytest

* Added dataset class which can load from nrrds or directly from the dataset and convert to pytorch dataset

* bug fixes_1.0

* test and autopipe fixed

* bug fixes 2

* fixed pipeline tests

* clean tests

* added workflow

* yml

* yml

* matplotlib

* trying other patient to avoid memoryerror

* set roi_names to avoid memoryerror

* cave

* indents

* Update manual-test.yml

Co-authored-by: Vishwesh <vishweshramanathan@gmail.com>

* fixed bugs regarding multiple connections, saving of metadata and loading of metadata

* small bug fix

* added demo.py

* Ready for

* Create main.yml (#15)

* Changed dataset class returns

* fix conflicts

* fixed test autopipe

* merging new features (#16)

* Added test autopipeline and modalities, solved some autopipeline bugs, read_dicom_series and pet now supports series_id

* PT/RTDOSE metadata to csv

* fixed some bugs in autopipeline.py

* now the pipeline saves on exit

* deleted data

* now checks for existing subject id

* uncommented one line pytest

* uncommented one line pytest

* Added dataset class which can load from nrrds or directly from the dataset and convert to pytorch dataset

* bug fixes_1.0

* test and autopipe fixed

* bug fixes 2

* bug fixes 2

* added visualizations and some more bug fixes

* fixed bugs regarding multiple connections, saving of metadata and loading of metadata

* small bug fix

* added demo.py

* Changed dataset class returns

* fix conflicts

* fixed test autopipe

Co-authored-by: Vishwesh <vishweshramanathan@gmail.com>

* fix path backslash issues

* fix path backslashes (#17)

* Added test autopipeline and modalities, solved some autopipeline bugs, read_dicom_series and pet now supports series_id

* PT/RTDOSE metadata to csv

* fixed some bugs in autopipeline.py

* now the pipeline saves on exit

* deleted data

* now checks for existing subject id

* uncommented one line pytest

* uncommented one line pytest

* Added dataset class which can load from nrrds or directly from the dataset and convert to pytorch dataset

* bug fixes_1.0

* test and autopipe fixed

* bug fixes 2

* bug fixes 2

* added visualizations and some more bug fixes

* fixed bugs regarding multiple connections, saving of metadata and loading of metadata

* small bug fix

* added demo.py

* Changed dataset class returns

* fix conflicts

* fixed test autopipe

* fix path backslash issues

Co-authored-by: Vishwesh <vishweshramanathan@gmail.com>

* Update main.yml

* Update main.yml

* Update README.md

* Update main.yml (#18)

* Update main.yml

* Update requirements.txt

* Update main.yml

* Update main.yml

* build binary/dist

* removed linter

* Update setup.py

* Update README.md

* Update README.md (#19)

* Update README.md

* added tests for Dataset class

* added tests for Dataset class

* Create LICENSE (#20)

* Create LICENSE

* Update setup.py

* Seg.nrrd quick fix

* Minor bug fixes

* test fix

* Added demo

* Update setup.py (#23)

* updated README

* Update README.md (#24)

* preliminary MRI functionality (MR-RTSTRUCT pairs)

* Skim2257 quick fix (#26)

* Updated crawler to force String on all meta fields

* Update setup.py

* quick fixes

* first commit

* removed test files, changed gitignore

* changed file directory structure for imageautooutput

* split mask up into each contour

* change kwargs in put for basesubjectwriter

* still kinda failing...

* brought back basesubjectwriter

* .imgtools directory

* changed absolute paths to relative paths

* changed os.path.join to pathlib.Path.as_posix()

* removed unused cv2 import

* removed cv2 import

* append is deprecated, changed to concat

* debug print

* removed debug print

* added sparse mask class and generating function

* testing out sparse mask

* funky NaN problem

* commented sparse mask

* overwrite all subjects

* space

* overwrite false

* metadata stuffs

* metadata in dataset.csv

* added modalities, num rtstruct and pixel size to metadata

* metadata bugfix

* a

* fixed wrong variable names for metadata stuff

* fixed pathlib float error

* relative paths and output folder paths for dataset.csv

* put metadata stuff into a util file

* deal with empty metadata

* messing around with sparse mask

* tried to save sparse mask, did some stuff with nnunet output format

* compliant with nnunet directory structure

* CLI Interface, argparse moved to utils

* fixed formatting problems with folder names

* train test split

* train size and random state optional

* merge conflicts

* changed warnings to not interrupt

* changed to warnings.warn for generate_sparse_mask

* merge

* resolving conflicts?

* args

* changes for roi names as a dict

* added regex dictionary option for non nnunet runs

* sparse mask global labelling for contour name: index

* got rid of file_name_convention stuff

* conflicts resolved

* yaml thing

* added list capabilities for the roi names dictionary

* dataset.json for nnunet

* CLI "autonew"

* changed all mutable defaults to None

* moved autotest changes to autopipeline and added a few CLI args

* getting ready for merge to live

* test_components, test_modalities works with new AutoPipeline

* overwrite changes and error fix for nan paths again

* fixed if statement

* joblib parallel

* warnings for missing patients

* summary messages

* updated, passing tests. Updated version to 0.4

* update test

* yaml path cli

* yaml error check

* pandas error

* Fixed read_dicom_auto

* skips series check if series is None

* updated readme to reflect v0.4 changes

* updated readme

* minor change

* remove .idea

* remote .idea

* git ignore

* transpose for nnunet

* skip if no mask for nnunet and nnunet folder names

* bugs in nnunet folder stuff

* rename nnunetutil (again)

* sorting the output streams

* shell script

* shell script in autopipeline

* bug

* forgot CopyInformation for nnunet

* ignore deprecation warning

* moved sh file down one dir

* set -e for stopping execution if there is an error

* broken checker for missing labels

* train test fix for 1 patient

* fix for broken self.existing_roi_names caused by joblib

* added autotest to gitignore

* sorted list for train test split

Co-authored-by: Vishwesh <vishweshramanathan@gmail.com>
Co-authored-by: Sejin Kim <40668167+skim2257@users.noreply.github.com>
Co-authored-by: Sejin Kim <hello@sejin.kim>
Co-authored-by: Vishwesh Ramanathan <vishwesh@Vishweshs-MacBook-Air.local>
Co-authored-by: Kevin Qu <kqu@uhnslurmbuildbox.uhnh4h.cluster>
Co-authored-by: Kevin Qu <kqu@node90.uhnh4h.cluster>
7 people authored Jul 6, 2022
1 parent c39ffe6 commit 31cecbe
Showing 7 changed files with 340 additions and 50 deletions.
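
Most of the new behaviour lives in examples/autotest.py below, which teaches AutoPipeline to emit an nnUNet-ready dataset. As orientation only — the keyword names come from the __main__ example in that diff, the paths are placeholders, and AutoPipeline here means the class defined in that file — a run with the new options looks roughly like this:

    pipeline = AutoPipeline(
        input_directory="/data/HNSCC",            # folder of DICOM studies (placeholder path)
        output_directory="/data/HNSCC_output",    # the nnUNet folder tree is created inside this
        modalities="CT,RTSTRUCT",
        is_nnunet=True,                           # write an nnUNet-style TaskXXX_<study> layout
        train_size=1,                             # 1 puts every patient into imagesTr/labelsTr
        read_yaml_label_names=True,               # read ROI-name regexes from a YAML file,
        roi_yaml_path="/data/roi_names.yaml",     # e.g. {"GTV": "GTV.*", "Brainstem": "Brainstem.*"}
        ignore_missing_regex=True,                # warn and skip, rather than fail, when no ROI matches
    )
    pipeline.run()

The commit list also mentions a CLI wrapper with similar arguments, but its exact flags are not part of the diffs shown here.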
36 changes: 36 additions & 0 deletions .github/workflows/build-ci.yml
@@ -0,0 +1,36 @@
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: Python application

on:
  pull_request:
    branches: [ master ]
  workflow_dispatch:

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python-version: [3.7, 3.8, 3.9]

    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install flake8 pytest
          pip install -e .
          pip install -r requirements.txt
      - name: Import checking
        run: |
          python -c "import imgtools"
57 changes: 57 additions & 0 deletions .github/workflows/main.yml
@@ -0,0 +1,57 @@
name: master-tests

on:
  pull_request:
  workflow_dispatch:

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os:
          - ubuntu-latest
          - macos-latest
          - windows-latest
        python-version:
          - 3.7
          - 3.8
          - 3.9

    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install flake8 pytest setuptools wheel twine
          pip install -e .
          pip install -r requirements.txt
      - name: Import checking
        run: |
          python -c "import imgtools"
      - name: Run pytest
        run: |
          pytest tests
      - name: Build binary wheel and a source tarball
        run: |
          python setup.py install
          python setup.py sdist bdist_wheel
      - name: Build app (Windows)
        if: success() && startsWith(matrix.os,'Windows')
        env:
          USERNAME: ${{ secrets.pypi_username }}
          KEY: ${{ secrets.pypi_pw }}
        run: |
          python -m twine upload --skip-existing -u $env:USERNAME -p $env:KEY dist/*
      - name: Build app (Ubuntu / macOS)
        if: success() && startsWith(matrix.os, 'Windows') == false
        env:
          USERNAME: ${{ secrets.pypi_username }}
          KEY: ${{ secrets.pypi_pw }}
        run: |
          python -m twine upload --skip-existing -u $USERNAME -p $KEY dist/*
43 changes: 43 additions & 0 deletions .github/workflows/manual-test.yml
@@ -0,0 +1,43 @@
name: manual-test

on:
  workflow_dispatch:

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os:
          - ubuntu-latest
          - macos-latest
          - windows-latest
        python-version:
          - 3.7
          - 3.8
          - 3.9

    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install flake8 pytest
          pip install -e .
          pip install -r requirements.txt
      - name: Import checking
        run: |
          python -c "import imgtools"
      - name: Run pytest
        run: |
          pytest tests
      - name: Slack Notification
        if: ${{always() && matrix.os == 'ubuntu-latest'}}
        uses: rtCamp/action-slack-notify@v2
        env:
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_URL }}
5 changes: 1 addition & 4 deletions .gitignore
@@ -1,13 +1,10 @@
# test scripts
examples/adsf.py
examples/autotest.py
.idea

#vscode files
/.idea

#vscode files
/.idea

# data
data
examples/data/tcia_n*
134 changes: 109 additions & 25 deletions examples/autotest.py
@@ -4,6 +4,7 @@
import glob
import pickle
import struct
from matplotlib.style import available
import numpy as np
import sys
import warnings
@@ -95,8 +96,30 @@ def __init__(self,
warn_on_error=warn_on_error)
self.overwrite = overwrite
# pipeline configuration
self.input_directory = pathlib.Path(input_directory).as_posix()
self.input_directory = pathlib.Path(input_directory).as_posix()
self.output_directory = pathlib.Path(output_directory).as_posix()
study_name = os.path.split(self.input_directory)[1]
if is_nnunet:
self.base_output_directory = self.output_directory
if not os.path.exists(pathlib.Path(self.output_directory, "nnUNet_preprocessed").as_posix()):
os.makedirs(pathlib.Path(self.output_directory, "nnUNet_preprocessed").as_posix())
if not os.path.exists(pathlib.Path(self.output_directory, "nnUNet_trained_models").as_posix()):
os.makedirs(pathlib.Path(self.output_directory, "nnUNet_trained_models").as_posix())
self.output_directory = pathlib.Path(self.output_directory, "nnUNet_raw_data_base",
"nnUNet_raw_data").as_posix()
if not os.path.exists(self.output_directory):
os.makedirs(self.output_directory)
all_nnunet_folders = glob.glob(pathlib.Path(self.output_directory, "*", " ").as_posix())
available_numbers = list(range(500, 1000))
for folder in all_nnunet_folders:
folder_name = os.path.split(os.path.split(folder)[0])[1]
if folder_name.startswith("Task") and folder_name[4:7].isnumeric() and int(folder_name[4:7]) in available_numbers:
available_numbers.remove(int(folder_name[4:7]))
if len(available_numbers) == 0:
raise Error("There are not enough task ID's for the nnUNet output. Please make sure that there is at least one task ID available between 500 and 999, inclusive")
task_folder_name = f"Task{available_numbers[0]}_{study_name}"
self.output_directory = pathlib.Path(self.output_directory, task_folder_name).as_posix()
self.task_id = available_numbers[0]
self.spacing = spacing
self.existing = [None] #self.existing_patients()
self.is_nnunet = is_nnunet
@@ -109,9 +132,12 @@ def __init__(self,
self.label_names = {}
self.ignore_missing_regex = ignore_missing_regex

if roi_yaml_path != "" and not read_yaml_label_names:
warnings.warn("The YAML will not be read since it has not been specified to read them. To use the file, run the CLI with --read_yaml_label_names")

roi_path = pathlib.Path(self.input_directory, "roi_names.yaml").as_posix() if roi_yaml_path == "" else roi_yaml_path
if read_yaml_label_names:
if os.path.exists(roi_yaml_path):
if os.path.exists(roi_path):
with open(roi_path, "r") as f:
try:
self.label_names = yaml.safe_load(f)
@@ -173,6 +199,7 @@ def __init__(self,
os.mkdir(pathlib.Path(self.output_directory,".temp").as_posix())

self.existing_roi_names = {"background": 0}
# self.existing_roi_names.update({k:i+1 for i, k in enumerate(self.label_names.keys())})

def glob_checker_nnunet(self, subject_id):
folder_names = ["imagesTr", "labelsTr", "imagesTs", "labelsTs"]
@@ -216,7 +243,7 @@ def process_one_subject(self, subject_id):
subject_modalities = set() # all the modalities that this subject has
num_rtstructs = 0

for i, colname in enumerate(self.output_streams):
for i, colname in enumerate(sorted(self.output_streams)): #CT comes before MR before PT before RTDOSE before RTSTRUCT
modality = colname.split("_")[0]
subject_modalities.add(modality) #set add

@@ -300,21 +327,41 @@ def process_one_subject(self, subject_id):
else:
raise ValueError("You need to pass a reference CT or PT/PET image to map contours to.")

if mask is None: #ignored the missing regex
return

if mask is None: #ignored the missing regex, and exit the loop
if self.is_nnunet:
image_test_path = pathlib.Path(self.output_directory, "imagesTs").as_posix()
image_train_path = pathlib.Path(self.output_directory, "imagesTr").as_posix()
if os.path.exists(image_test_path):
all_files = glob.glob(pathlib.Path(image_test_path, "*.nii.gz").as_posix())
# print(all_files)
for file in all_files:
if subject_id in os.path.split(file)[1]:
os.remove(file)
if os.path.exists(image_train_path):
all_files = glob.glob(pathlib.Path(image_train_path, "*.nii.gz").as_posix())
# print(all_files)
for file in all_files:
if subject_id in os.path.split(file)[1]:
os.remove(file)
warnings.warn(f"Patient {subject_id} is missing a complete image-label pair")
return
else:
break

for name in mask.roi_names.keys():
if name not in self.existing_roi_names.keys():
self.existing_roi_names[name] = len(self.existing_roi_names)
mask.existing_roi_names = self.existing_roi_names
# print(self.existing_roi_names,"alskdfj")

# save output
print(mask.GetSize())
mask_arr = np.transpose(sitk.GetArrayFromImage(mask))

if self.is_nnunet:
sparse_mask = mask.generate_sparse_mask().mask_array
sparse_mask = np.transpose(mask.generate_sparse_mask().mask_array)
sparse_mask = sitk.GetImageFromArray(sparse_mask) #convert the nparray to sitk image
sparse_mask.CopyInformation(image)
if "_".join(subject_id.split("_")[1::]) in self.train:
self.output(subject_id, sparse_mask, output_stream, nnunet_info=self.nnunet_info, label_or_image="labels") #rtstruct is label for nnunet
else:
@@ -379,24 +426,44 @@ def save_data(self):
subject_id = os.path.splitext(filename)[0]
with open(file,"rb") as f:
metadata = pickle.load(f)
self.output_df.loc[subject_id, list(metadata.keys())] = list(metadata.values())
np.warnings.filterwarnings('ignore', category=np.VisibleDeprecationWarning)
self.output_df.loc[subject_id, list(metadata.keys())] = list(metadata.values()) #subject id targets the rows with that subject id and it is reassigning all the metadata values by key
folder_renames = {}
for col in self.output_df.columns:
if col.startswith("folder"):
self.output_df[col] = self.output_df[col].apply(lambda x: x if not isinstance(x, str) else pathlib.Path(x).as_posix().split(self.input_directory)[1][1:]) # rel path, exclude the slash at the beginning
folder_renames[col] = f"input_{col}"
self.output_df.rename(columns=folder_renames, inplace=True) #append input_ to the column name
self.output_df.to_csv(self.output_df_path)
self.output_df.to_csv(self.output_df_path) #dataset.csv

shutil.rmtree(pathlib.Path(self.output_directory, ".temp").as_posix())
if self.is_nnunet:

if self.is_nnunet: #dataset.json for nnunet and .sh file to run to process it
imagests_path = pathlib.Path(self.output_directory, "imagesTs").as_posix()
images_test_location = imagests_path if os.path.exists(imagests_path) else None
# print(self.existing_roi_names)
generate_dataset_json(pathlib.Path(self.output_directory, "dataset.json").as_posix(),
pathlib.Path(self.output_directory, "imagesTr").as_posix(),
images_test_location,
tuple(self.nnunet_info["modalities"].keys()),
{v:k for k, v in self.existing_roi_names.items()},
os.path.split(self.input_directory)[1])
os.path.split(self.input_directory)[1])
_, child = os.path.split(self.output_directory)
shell_path = pathlib.Path(self.output_directory, child.split("_")[1]+".sh").as_posix()
if os.path.exists(shell_path):
os.remove(shell_path)
with open(shell_path, "w", newline="\n") as f:
output = "#!/bin/bash\n"
output += "set -e"
output += f'export nnUNet_raw_data_base="{self.base_output_directory}/nnUNet_raw_data_base"\n'
output += f'export nnUNet_preprocessed="{self.base_output_directory}/nnUNet_preprocessed"\n'
output += f'export RESULTS_FOLDER="{self.base_output_directory}/nnUNet_trained_models"\n\n'
output += f'nnUNet_plan_and_preprocess -t {self.task_id} --verify_dataset_integrity\n\n'
output += 'for (( i=0; i<5; i++ ))\n'
output += 'do\n'
output += f' nnUNet_train 3d_fullres nnUNetTrainerV2 {os.path.split(self.output_directory)[1]} $i --npz\n'
output += 'done'
f.write(output)


def run(self):
@@ -411,7 +478,11 @@ def run(self):
if subject_id.split("_")[1::] not in patient_ids:
patient_ids.append("_".join(subject_id.split("_")[1::]))
if self.is_nnunet:
self.train, self.test = train_test_split(patient_ids, train_size=self.train_size, random_state=self.random_state)
if self.train_size == 1:
self.train = patient_ids
self.test = []
else:
self.train, self.test = train_test_split(sorted(patient_ids), train_size=self.train_size, random_state=self.random_state)
else:
self.train, self.test = [], []
# Note that returning any SimpleITK object in process_one_subject is
@@ -420,16 +491,17 @@
print("Dataset already processed...")
shutil.rmtree(pathlib.Path(self.output_directory, ".temp").as_posix())
else:
Parallel(n_jobs=self.n_jobs, verbose=verbose)(
Parallel(n_jobs=self.n_jobs, verbose=verbose, require='sharedmem')(
delayed(self._process_wrapper)(subject_id) for subject_id in subject_ids)
# for subject_id in subject_ids:
# self._process_wrapper(subject_id)
self.save_data()
all_patient_names = glob.glob(pathlib.Path(self.input_directory, "*"," ").as_posix()[0:-1])
all_patient_names = [os.path.split(os.path.split(x)[0])[1] for x in all_patient_names]
for e in all_patient_names:
if e not in patient_ids:
warnings.warn(f"Patient {e} does not have proper DICOM references")
if not self.is_nnunet:
all_patient_names = glob.glob(pathlib.Path(self.input_directory, "*"," ").as_posix()[0:-1])
all_patient_names = [os.path.split(os.path.split(x)[0])[1] for x in all_patient_names]
for e in all_patient_names:
if e not in patient_ids:
warnings.warn(f"Patient {e} does not have proper DICOM references")


if __name__ == "__main__":
@@ -451,17 +523,29 @@ def run(self):
# overwrite=True,
# nnunet_info={"study name": "NSCLC-Radiomics-Interobserver1"})

pipeline = AutoPipeline(input_directory="C:/Users/qukev/BHKLAB/larynx/radcure",
output_directory="C:/Users/qukev/BHKLAB/larynx_output",
pipeline = AutoPipeline(input_directory="C:/Users/qukev/BHKLAB/hnscc_testing/HNSCC",
output_directory="C:/Users/qukev/BHKLAB/hnscc_testing_output",
modalities="CT,RTSTRUCT",
visualize=False,
overwrite=True,
# is_nnunet=True,
# train_size=0.5,
is_nnunet=True,
train_size=1,
# label_names={"GTV":"GTV.*", "Brainstem": "Brainstem.*"},
# read_yaml_label_names=True, # "GTV.*",
# ignore_missing_regex=True
)
read_yaml_label_names=True, # "GTV.*",
ignore_missing_regex=True,
roi_yaml_path="C:/Users/qukev/BHKLAB/roi_names.yaml")
# pipeline = AutoPipeline(input_directory="C:/Users/qukev/BHKLAB/larynx/radcure",
# output_directory="C:/Users/qukev/BHKLAB/larynx_output",
# modalities="CT,RTSTRUCT",
# visualize=False,
# overwrite=True,
# is_nnunet=True,
# train_size=1,
# # label_names={"GTV":"GTV.*", "Brainstem": "Brainstem.*"},
# read_yaml_label_names=True, # "GTV.*",
# ignore_missing_regex=True,
# roi_yaml_path="C:/Users/qukev/BHKLAB/roi_names.yaml"
# )

# pipeline = AutoPipeline(input_directory="C:/Users/qukev/BHKLAB/dataset/manifest-1598890146597/NSCLC-Radiomics-Interobserver1",
# output_directory="C:/Users/qukev/BHKLAB/autopipelineoutput",
(The diffs for the remaining changed files were not loaded.)
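
Because only part of examples/autotest.py is visible above, the output layout that the is_nnunet branch assembles is easy to miss. Pieced together from the __init__ and save_data hunks (the task number and study name below are illustrative: the code picks the first free task ID between 500 and 999 and takes the study name from the input folder):

    <output_directory>/
        nnUNet_preprocessed/
        nnUNet_trained_models/
        nnUNet_raw_data_base/
            nnUNet_raw_data/
                Task500_<study_name>/
                    imagesTr/  labelsTr/    # training images and sparse-label masks (.nii.gz)
                    imagesTs/  labelsTs/    # test split, used when train_size < 1
                    dataset.json            # written by generate_dataset_json() in save_data()
                    <study_name>.sh         # exports nnUNet_raw_data_base, nnUNet_preprocessed and
                                            # RESULTS_FOLDER, then runs nnUNet_plan_and_preprocess
                                            # and five nnUNet_train 3d_fullres folds

Each voxel in the label files carries the integer index assigned in the pipeline's global ROI-name map (background = 0), and the inverse of that map becomes the label dictionary in dataset.json.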
