
ENH: Nosetest for wavelets on different ROIs #348

Closed

Conversation

michaelschwier
Contributor

Follow up on PR #346.

This test checks equality of features extracted after wavelet filtering on the same image but with different ROIs (with padding large enough that the wavelet filter doesn't run into the image border for pixels under the mask).

Test data is a small, simple generated image containing a checkered sphere surrounded by noise.

if not column.startswith("general"):
    featList = dfFeatures[column].tolist()
    maxDiff = max(abs(a) - abs(b) for a, b in itertools.combinations(featList, 2))
    assert maxDiff == 0
Collaborator


Asserting equality with 0 would not account for machine-precision errors. For general feature tests in PyRadiomics we allow a maximum difference of 3% compared to baseline. See also here.
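The tolerance-based check suggested above could look like the following minimal sketch; the function name `assert_features_close` and the 3% figure applied pairwise are illustrative, not PyRadiomics API.

```python
import itertools

def assert_features_close(feat_list, rel_tol=0.03):
    """Assert all pairwise feature values agree within a relative tolerance."""
    for a, b in itertools.combinations(feat_list, 2):
        denom = max(abs(a), abs(b))
        if denom == 0:
            # both values are exactly zero; nothing to compare
            continue
        assert abs(a - b) / denom <= rel_tol, (a, b)

# values that differ by well under 3% pass
assert_features_close([1.000, 1.0001, 0.9999])
```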

Collaborator


Moreover, it might be a good idea to work with a csv-defined baseline. This will ensure that you're calculating the same values each time. By recalculating your baseline here, some systematic errors might not be caught, i.e. if for some reason the filter added 1 to each result, it would not be caught here.
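A csv-defined baseline check could be sketched as follows; the baseline content, feature names, and the 3% tolerance are illustrative stand-ins, not actual PyRadiomics baseline values.

```python
import csv
import io
import math

# Hypothetical baseline content; in practice this would live in a .csv file
# checked into the repository.
BASELINE_CSV = """original_firstorder_Mean,42.5
wavelet-LLH_glcm_Contrast,0.87
"""

def check_against_baseline(features, baseline_text, rel_tol=0.03):
    """Compare computed features against fixed baseline values."""
    baseline = {name: float(val)
                for name, val in csv.reader(io.StringIO(baseline_text))}
    for name, expected in baseline.items():
        assert math.isclose(features[name], expected, rel_tol=rel_tol), name

# features within tolerance of the stored baseline pass
check_against_baseline(
    {"original_firstorder_Mean": 42.6, "wavelet-LLH_glcm_Contrast": 0.88},
    BASELINE_CSV,
)
```

Because the expected values are fixed rather than recomputed, a systematic offset (e.g. the filter adding 1 to every result) would now fail the test.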


features = []
for extractor in extractors:
    for data in self.testData:
Collaborator


These two loops sort of specify different parameters of testing. In PyRadiomics we split these out as well.
This is achieved by the @parameterized.expand(<generatorFunction>()), where <generatorFunction> is a function that uses the yield statement (i.e. is a Python generator). Inside that function are the loops that define your test cases, and the yield statement returns the same number of arguments as are consumed by your test function.
See here for an example.
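The generator pattern described above can be sketched like this; `generate_cases`, `extractors`, and `roi_variants` are hypothetical stand-ins, and the `@parameterized.expand` decorator (from the `parameterized` package) is shown in a comment since the test class itself is not reproduced here.

```python
def generate_cases():
    extractors = ["default", "wavelet"]               # stand-in extractor configs
    roi_variants = ["centered", "shifted", "padded"]  # stand-in ROI variants
    for extractor in extractors:
        for roi in roi_variants:
            # one yielded tuple == one independent test case; the test
            # function consumes the same number of arguments
            yield extractor, roi

# In the real test class this would be used as:
# @parameterized.expand(generate_cases())
# def test_wavelet(self, extractor, roi): ...

cases = list(generate_cases())  # 2 extractors x 3 ROI variants -> 6 cases
```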

Contributor Author


The loops don't specify different test cases/scenarios. It is just one case which compares 4 different ROI variations for computational equality. Parameterized creates independent test cases, which wouldn't help. The only thing I could do is create parameterized test cases that each check a pair of my 4 variations. I don't see the benefit though :/

Contributor Author


Also, we could think about parameterizing the features, so one case per feature. However, the actual intention is to check the wavelet filter for equality; using the feature values is just a means to make that easier. To implement the intention more directly I could rewrite this to get the wavelet-filtered images and compare the pixels under the mask for equality ... but I am not sure it's worth the effort since this does the job.
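The direct comparison mentioned above could be sketched as follows: compare two (hypothetically wavelet-filtered) image arrays only at voxels under the mask. The array names and the exact-equality check are illustrative.

```python
import numpy as np

def masked_voxels_equal(filtered_a, filtered_b, mask):
    """True if both filtered images agree at every voxel inside the mask."""
    m = mask.astype(bool)
    # boolean indexing flattens each image to a 1D array of in-ROI voxels
    return bool(np.array_equal(filtered_a[m], filtered_b[m]))

image = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
shifted = image + 1.0                     # differs at every voxel
mask = np.zeros((4, 4, 4), dtype=np.uint8)
mask[1:3, 1:3, 1:3] = 1                   # small cubic ROI

assert masked_voxels_equal(image, image.copy(), mask)
assert not masked_voxels_equal(image, shifted, mask)
```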


dfFeatures = pd.DataFrame(features)
for column in dfFeatures:
    if not column.startswith("general"):
Collaborator


@michaelschwier, you can prevent the addition of the general info columns by calling addProvenance(False) on each extractor when you instantiate it in getTestExtractor()

@JoostJM JoostJM added this to the PyRadiomics 2.0 Release milestone Mar 14, 2018
JoostJM added a commit to JoostJM/pyradiomics that referenced this pull request Jun 5, 2018
Add a test to check if precropping and different size input images still produce the same output for the wavelet filter.
This test uses 2 new test cases that have been added to the 1.0.0 release of PyRadiomics and enabled for download by `getTestCase()`.
Additionally ensures temporal consistency by using a baseline file that defines the 1D-array of voxels in the ROI for each wavelet level (This array can be obtained by slicing the image array with a boolean cast of the mask array).

The baseline is generated using the large (64x64x64) image without precropping. The function to generate this baseline is added to `add_baseline.py`.

Finally, apply some style changes to the other test files to make them more consistent with pyradiomics style.

Supersedes AIM-Harvard#348.
@JoostJM JoostJM mentioned this pull request Jun 5, 2018
@JoostJM
Collaborator

JoostJM commented Jun 5, 2018

Superseded by #387

@JoostJM JoostJM closed this Jun 5, 2018