figure out adding new algorithms / how to smooth #35
Conversation
add blurred model image conv1 to observer
add conv1 to observer for use in algorithm convergence analysis
add some descriptions in the body of the algorithm
disambiguate data and inputData objects
This reverts commit 5b4a3f1.
This reverts commit d571d6e.
needed to normalise tensors to prevent overflows when multiplying etc.; needed to add a delta parameter to the numerator and denominator to prevent division by values close to zero
… with tf.nn.conv3d
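For reference, the stabilisation those commit messages describe amounts to something like the following sketch. The function names and the default `delta` are illustrative, not the actual commit code:

```python
import tensorflow as tf

def stabilized_ratio(data, blurred_estimate, delta=1e-6):
    """Gold-style ratio with a small delta added to numerator and
    denominator so near-zero values in the blurred estimate don't
    blow up the quotient."""
    return (data + delta) / (blurred_estimate + delta)

def normalize(t):
    """Scale a tensor to [0, 1] so repeated multiplications across
    iterations don't overflow."""
    t = tf.cast(t, tf.float32)
    return t / tf.reduce_max(t)
```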
Sorry for the delay on this, but I think a good first step for any change like these would be to demonstrate the faster convergence. Plotting SSIM or PSNR (or some other quality measure) against number of iterations for a few simulated image datasets where the ground truth is known would be a good way to validate the implementation, and comparing execution times with and without the optimizations would also be useful, to check that the extra computations don't outweigh the decrease in iterations. I don't know off-hand how to do Gaussian filtering in 3D via TF, but it would definitely be atypical to see it done with an actual 3D kernel as in the PR. Those filters are separable, so applying successive 2D or 1D filters is how it's normally done, since that's much more efficient (and 1D or 2D kernels are easier to create).
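For illustration, a separable 3D Gaussian blur along those lines could look roughly like this in TF. This is only a sketch: the `sigma`/`radius` values, the helper names, and the assumed `[batch, depth, height, width, 1]` layout are all illustrative, not code from this repo:

```python
import numpy as np
import tensorflow as tf

def gaussian_kernel_1d(sigma, radius):
    """Normalized 1D Gaussian kernel as a float32 numpy array."""
    x = np.arange(-radius, radius + 1, dtype=np.float32)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_blur_3d(volume, sigma=2.0, radius=4):
    """Separable 3D Gaussian blur: one 1D convolution per axis.

    `volume` is assumed float32 with shape [batch, depth, height,
    width, 1], as expected by tf.nn.conv3d (NDHWC layout).
    """
    k = gaussian_kernel_1d(sigma, radius)
    n = len(k)
    out = volume
    # Apply the same 1D kernel along depth, then height, then width.
    for shape in ([n, 1, 1, 1, 1], [1, n, 1, 1, 1], [1, 1, n, 1, 1]):
        kern = tf.constant(k.reshape(shape))
        out = tf.nn.conv3d(out, kern, strides=[1] * 5, padding='SAME')
    return out
```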
The faster convergence was demonstrated in the literature decades ago (David Agard et al.), but the only implementations of Agard's accelerated Gold (ratio) method were proprietary, in softWoRx and Priism, so it never caught on in the open. Would it make sense to take the FT of the Gaussian kernel and then do the operation in frequency space, as is done for the convolutions of the image with the PSF? I think that's what I was aiming at...? Maybe with huge GPU RAM the Fourier transform method for image x kernel convolutions no longer wins over just doing a real-space (separable?) convolution on the GPU? Just thinking out loud....
I guess the question was actually:
Makes sense, but I mean that validation and benchmarks (or unit tests we can parameterize with larger images) should show that this specific implementation is correct and decreases the overall time taken to obtain similar results, even after the extra filtering steps. Sadly I don't have time to work on these things anymore, and the most common issue users have is excessive memory usage (particularly with PSFs that have to be padded to the same size as the input images), so I doubt any of the other contributors would be likely to pick this up. I wanted to put that on your radar for this PR because it would probably be a lot more work than the changes themselves.
Good point. IIRC, FFT + multiplication is typically slower than separable 1D kernel convolutions, but since we already have the image in frequency space, that probably does make more sense.
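As a rough sketch of that idea, assuming the image's 3D FFT is already at hand and the smoothing kernel has been padded to the image shape with its centre shifted to index (0, 0, 0); both of those are assumptions about the surrounding code, not how this repo necessarily stores things:

```python
import tensorflow as tf

def blur_in_frequency_space(img_fft, kernel):
    """Multiply an image's existing 3D FFT by the FFT of a padded,
    origin-centred smoothing kernel, then invert.

    `img_fft` is assumed complex64; `kernel` is assumed to be a real
    tensor already padded to the image shape.
    """
    kernel_fft = tf.signal.fft3d(tf.cast(kernel, tf.complex64))
    return tf.math.real(tf.signal.ifft3d(img_fft * kernel_fft))
```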
For different algorithms, new functions in restoration.py would be the place to add them.
Hi folks,
This isn't a proper pull request. I totally don't expect anyone to accept this code as is, since it would break everything.
Rather, you can see that I'm just trying to get a different algorithm working: Gold's method (Ratio) with Agard's acceleration trick.
This should converge much faster than RL, in 5-10 iterations.
I didn't see a way to add extra algorithms, so I just hacked at the RL algorithm in restoration.py. The Gold (ratio) method bit alone works,
but it is missing the required Gaussian smoothing and also Agard's acceleration trick of inverse filtering (Wiener) the initial difference image.
Where I am stuck is pretty dumb... I can't figure out how to do a Gaussian filter by building a Gaussian kernel with TF methods and then applying it. You can see I added some code to try to do that and test it, but I'm stuck because I'm not familiar enough with how TensorFlow does things, I guess. The sketch below is roughly what I was aiming for.
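This is only a sketch with made-up names and parameters, not the code in the PR; it builds a dense 3D Gaussian kernel from plain TF ops and assumes a `[batch, z, y, x, 1]` input tensor:

```python
import tensorflow as tf

def gaussian_kernel_3d(sigma, radius):
    """Dense 3D Gaussian kernel built from TF ops, returned with
    shape [k, k, k, 1, 1] ready for tf.nn.conv3d."""
    r = tf.range(-radius, radius + 1, dtype=tf.float32)
    z, y, x = tf.meshgrid(r, r, r, indexing='ij')
    g = tf.exp(-(x**2 + y**2 + z**2) / (2.0 * sigma**2))
    g = g / tf.reduce_sum(g)  # normalize so the blur preserves intensity
    return tf.reshape(g, [2 * radius + 1] * 3 + [1, 1])

def smooth(volume, sigma=2.0, radius=4):
    """Apply the dense kernel with tf.nn.conv3d ('SAME' keeps shape)."""
    kern = gaussian_kernel_3d(sigma, radius)
    return tf.nn.conv3d(volume, kern, strides=[1] * 5, padding='SAME')
```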
Anyhow, feel free to disregard this as rubbish, or if you like I would be happy to chat about how best to add alternative algorithms the way you intended, and how to add trivial stuff like Gaussian smoothing and a Wiener filter.
cheers
Dan