
figure out adding new algorithms / how to smooth #35

Open
wants to merge 15 commits into base: master

Conversation

chalkie666

Hi folks,

This isn't a proper pull request - I totally don't expect anyone to accept this code as is, since it would break everything.
Rather, you can see that I'm just trying to get a different algorithm working: Gold's method (ratio) with Agard's acceleration trick,
which should converge much faster than RL: 5-10 iterations.
I didn't see a way to add extra algorithms, so I just hacked at the RL algorithm in restoration.py. The Gold (ratio) method bit alone works,
but it is missing the required Gaussian smoothing filter, and also Agard's acceleration trick of inverse filtering (Wiener) the initial difference image.
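For reference, the update being hacked in can be sketched like this (plain NumPy for clarity; the function names are illustrative, not the restoration.py code, and the Gaussian smoothing / Wiener steps mentioned above are omitted):

```python
# Sketch of one Gold (ratio) iteration next to a Richardson-Lucy iteration,
# both with FFT-based circular convolution. Illustrative only.
import numpy as np

def fft_conv(img, kernel_fft):
    # Circular convolution via FFT; kernel_fft is the precomputed FFT of the PSF.
    return np.real(np.fft.ifftn(np.fft.fftn(img) * kernel_fft))

def gold_step(estimate, observed, psf_fft, eps=1e-12):
    # Gold's ratio method: estimate <- estimate * observed / (estimate (x) PSF)
    blurred = fft_conv(estimate, psf_fft)
    return estimate * observed / np.maximum(blurred, eps)

def rl_step(estimate, observed, psf_fft, psf_fft_conj, eps=1e-12):
    # Richardson-Lucy: estimate <- estimate * (PSF^T (x) (observed / (estimate (x) PSF)))
    blurred = fft_conv(estimate, psf_fft)
    ratio = observed / np.maximum(blurred, eps)
    return estimate * fft_conv(ratio, psf_fft_conj)
```

Note the Gold update needs only one convolution per iteration versus two for RL, which is part of why the accelerated variant can be cheaper per step as well as needing fewer steps.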

Where I am stuck is pretty dumb... I can't figure out how to do a Gaussian filter by making a Gaussian kernel with TF methods and then applying it. You can see I added some code to try to do that and test it, but I'm stuck, because I'm not familiar enough with how TensorFlow does things, I guess...
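One minimal way to do this (a sketch assuming TF 2.x eager mode, not taken from this PR's code) is to build the kernel as a NumPy array, wrap it as a constant, and feed it to `tf.nn.conv3d`:

```python
# Hypothetical sketch: 3D Gaussian smoothing via tf.nn.conv3d.
# Kernel construction is done in NumPy; only the convolution runs in TF.
import numpy as np
import tensorflow as tf

def gaussian_kernel_1d(sigma, truncate=3.0):
    # Normalized 1D Gaussian, truncated at `truncate` standard deviations.
    radius = int(truncate * sigma + 0.5)
    x = np.arange(-radius, radius + 1, dtype=np.float32)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_smooth_3d(volume, sigma):
    # volume: rank-3 array (z, y, x). conv3d wants NDHWC, so add batch
    # and channel dims; kernel shape is (D, H, W, in_ch, out_ch).
    k1 = gaussian_kernel_1d(sigma)
    k3 = k1[:, None, None] * k1[None, :, None] * k1[None, None, :]
    kernel = tf.constant(k3[:, :, :, None, None])
    v = tf.cast(volume, tf.float32)[None, :, :, :, None]
    out = tf.nn.conv3d(v, kernel, strides=[1, 1, 1, 1, 1], padding='SAME')
    return out[0, :, :, :, 0]
```

Since the kernel is normalized, smoothing a constant volume leaves interior voxels unchanged, which makes a handy sanity check.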

Anyhow - feel free to disregard this as rubbish, or if you like I would be happy to chat about how best to add alternative algorithms the way you intended, and how to add trivial stuff like Gaussian smoothing and a Wiener filter.

cheers

Dan

@eric-czech
Collaborator

Sorry for the delay on this. I think a good first step for any change like this would be to demonstrate the faster convergence: plot SSIM or PSNR (or some other quality measure) against iteration count for a few simulated image datasets where the ground truth is known. That would go a long way toward validating the implementation. Comparing execution times with and without the optimizations would also be useful, to check that the extra computations don't outweigh the decrease in iterations.
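The kind of validation curve meant here could be sketched like this (plain NumPy, with a hypothetical `step(estimate, observed) -> estimate` callable standing in for whichever deconvolution update is being tested; none of these names are from this repo):

```python
# Minimal sketch: track PSNR against a known ground truth per iteration,
# so RL and Gold/Agard variants can be compared on the same plot.
import numpy as np

def psnr(truth, estimate):
    # Peak signal-to-noise ratio in dB; higher is better.
    mse = np.mean((truth - estimate) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(truth.max() ** 2 / mse)

def convergence_curve(truth, observed, step, niter=10):
    # `step` is one iteration of the update under test (hypothetical signature).
    estimate = observed.copy()
    curve = []
    for _ in range(niter):
        estimate = step(estimate, observed)
        curve.append(psnr(truth, estimate))
    return curve
```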

I don't know off-hand how to do Gaussian filtering in 3D via TF, but it would definitely be atypical to see it done with a full 3D kernel as in this PR. Gaussian filters are separable, so successive 1D (or 2D) filters are how it's normally done - that's much more efficient, and 1D or 2D kernels are easier to create.
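The separability point can be shown concretely: applying the same 1D kernel along each axis in turn is mathematically identical to convolving with the full 3D kernel, but costs O(3n) rather than O(n³) per voxel for a kernel of width n. A NumPy sketch (illustrative, not TF and not this repo's code):

```python
# Separable 3D Gaussian smoothing: one 1D pass per axis.
import numpy as np

def gaussian_kernel_1d(sigma, truncate=3.0):
    radius = int(truncate * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def separable_gaussian_3d(volume, sigma):
    k = gaussian_kernel_1d(sigma)
    out = volume.astype(np.float64)
    for axis in range(3):
        # Convolve every 1D line along this axis with the same kernel.
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode='same'), axis, out)
    return out
```

Smoothing a centered delta gives back the outer product of the three 1D kernels, which is exactly the full 3D Gaussian kernel.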

@chalkie666
Author

chalkie666 commented Jul 26, 2020

The faster convergence was proved in the literature decades ago (David Agard et al.), but the only implementations of Agard's accelerated Gold (ratio) method were proprietary, in softWoRx and Priism, so it never caught on in the open.

Would it make sense to take the FT of the Gaussian kernel and then do the operation in frequency space, as is done for the convolutions of the image with the PSF? I think that's what I was aiming at...?
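Since the Fourier transform of a Gaussian is itself a Gaussian, the transfer function can even be built analytically in frequency space, with no spatial kernel at all. A NumPy sketch of that idea (hypothetical helper names, not this repo's code):

```python
# Gaussian smoothing as a multiplication in frequency space.
import numpy as np

def gaussian_transfer_function(shape, sigma):
    # FT of a unit-area Gaussian with std sigma: exp(-2 pi^2 sigma^2 f^2),
    # with f in cycles per sample along each axis.
    grids = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing='ij')
    f2 = sum(g ** 2 for g in grids)
    return np.exp(-2.0 * (np.pi * sigma) ** 2 * f2)

def smooth_in_frequency_space(volume, sigma):
    # In a deconvolution loop the forward FFT of the image is often
    # already available, so only the multiply and inverse FFT are extra.
    H = gaussian_transfer_function(volume.shape, sigma)
    return np.real(np.fft.ifftn(np.fft.fftn(volume) * H))
```

Because the transfer function is 1 at zero frequency, the total intensity of the image is preserved, which matters for intensity-quantitative deconvolution.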

Maybe with huge GPU RAM, the Fourier-transform trick for faster image × kernel convolution no longer wins over just doing a real-space (separable?) convolution on the GPU? Just thinking out loud...

@chalkie666
Author

I guess the question was actually:
How should the code be structured to add multiple deconvolution algorithms, and/or basic vs. accelerated versions of deconvolution algorithms?

@eric-czech
Collaborator

The faster convergence was proved in the literature decades ago

Makes sense, but I mean that validation and benchmarks (or unit tests we can parameterize with larger images) should show that this specific implementation is correct, and that it decreases the overall time taken to reach similar results even after the extra filtering steps. Sadly I don't have time to work on these things anymore, and the most common issue users have is excessive memory usage (particularly with PSFs that have to be padded to the same size as the input images), so I doubt any of the other contributors would be likely to pick it up. I wanted to put that on your radar for this PR, because it would probably be a lot more work than the changes themselves.

Would it make sense to take the FT of the Gaussian kernel and then do the operation in frequency space, as is done for the convolutions of the image with the PSF?

Good point - IIRC, FFT + multiplication is typically slower than separable 1D kernel convolutions, but since we have the image in frequency space already, that probably does make more sense.

How should the code be structured to add multiple deconvolution algorithms and/or basic vs accelerated versions of decovolution algorithms?

For different algorithms, new functions in restoration.py. For accelerated versions of existing ones, extra flags/params on the existing RL function would be good.
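That layout could look something like the following sketch. Every name here is hypothetical (none of it is the actual flowdec API); it only illustrates "one function per algorithm, flags for accelerated variants", plus a small registry so callers can select by name:

```python
# Hypothetical structure for restoration.py: a public function per
# algorithm, acceleration as keyword flags, and a name-based dispatcher.

def richardson_lucy(observed, psf, niter=10, accelerated=False):
    # Existing algorithm; the accelerated variant hangs off a flag.
    return "rl(accelerated=%s, niter=%d)" % (accelerated, niter)

def gold_ratio(observed, psf, niter=10, smoothing_sigma=1.0):
    # A new algorithm gets its own top-level function.
    return "gold(sigma=%.1f, niter=%d)" % (smoothing_sigma, niter)

ALGORITHMS = {"richardson-lucy": richardson_lucy, "gold": gold_ratio}

def deconvolve(observed, psf, algorithm="richardson-lucy", **kwargs):
    # Single entry point; extra kwargs flow through to the algorithm.
    return ALGORITHMS[algorithm](observed, psf, **kwargs)
```

The bodies here just return strings so the structure is visible; real implementations would return the deconvolved image.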
