Daubechies Wavelets #10

Conversation
Thanks @landmanbester 🦆 🦆 🦆
Oh, nice! Glad to be a good little rubber duck ;-)
This also gets the waverecn test case working.
@landmanbester Could you please review?
I've made an effort to reproduce the pywavelets interfaces on the provided functions. On the reconstruction side of things, pywavelets supports missing approximation/detail coefficients. This PR doesn't support that, but with some type mangling it may be possible if needed. I can only imagine this happening if coefficients were deleted from the dictionary produced by a wavedecn call and then fed into a waverecn call.
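For reference, a minimal sketch of the pywavelets interface being reproduced (plain pywt, not this PR's code); pywt additionally accepts missing/None coefficient entries on reconstruction, which this PR does not reproduce:

```python
import numpy as np
import pywt

data = np.random.random((50, 24, 63))

# Multilevel n-dimensional decomposition:
# [cA_n, {detail dict, level n}, ..., {detail dict, level 1}]
coeffs = pywt.wavedecn(data, "db1", mode="symmetric", level=2)

# coeffs[0] is the approximation array; for a 3D input the detail dicts
# are keyed by strings such as 'aad', 'ada', ..., 'ddd'
print(coeffs[0].shape, sorted(coeffs[1].keys()))

# Reconstruction from the full coefficient list
rec = pywt.waverecn(coeffs, "db1", mode="symmetric")
```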
Cool, thanks. I'll have a look asap |
I don't see this being an issue |
assert_array_almost_equal(output, pywt_out)

@pytest.mark.parametrize("data_shape", [(50, 24, 63)])
Can we test this with multiple levels as well please?
Yes, it should be possible to parametrize this on level.
pfb/test/test_wavelets.py
for k, v in d1.items():
    assert_array_almost_equal(v, d2[k])

pywt_rec = pywt.waverecn(out, "db1", "symmetric")
I usually use this with mode='periodization' but I see below that it has not been implemented. I don't know how big a difference this makes. Will have to test
Would it not be possible to use zeropad, in light of email conversations?
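For context on why the mode matters, here is a small pywt-only illustration (not this PR's code) of how coefficient lengths differ between zero-padding and periodization:

```python
import numpy as np
import pywt

x = np.random.random(63)

cA_z, cD_z = pywt.dwt(x, "db2", mode="zero")            # zero-padding
cA_p, cD_p = pywt.dwt(x, "db2", mode="periodization")   # periodization

# zero-padding: floor((63 + dec_len - 1) / 2) = 33 coefficients per band
# periodization: ceil(63 / 2) = 32 coefficients per band (non-redundant)
print(len(cA_z), len(cA_p))
print(pywt.dwt_coeff_len(63, pywt.Wavelet("db2").dec_len, mode="zero"))
```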
raise NotImplementedError("constant_edge downsampling not implemented") | ||
elif mode is Modes.smooth: | ||
raise NotImplementedError("smooth downsampling not implemented") | ||
elif mode is Modes.periodic: |
See the comment above: I think I need mode='periodic' support? How hard would it be to add this?
Unimplemented modes need their loops filled in from here.
I think zeropad is implemented (although I haven't tested parity)
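To give a feel for what filling in one of these branches could look like, here is a standalone sketch of a downsampling convolution with periodic extension. It is an assumption-laden illustration (hypothetical function name, same (input, output, filter, step) layout as the snippet above), not the PR's implementation:

```python
import numba
import numpy as np


@numba.njit(nogil=True, cache=True)
def periodic_downsampling_convolution(input, output, filter, step):
    # Convolve input with filter, keep every `step`-th sample, and wrap
    # out-of-range indices back into the signal (periodic extension).
    N = input.shape[0]
    F = filter.shape[0]

    for o in range(output.shape[0]):
        i = o * step + (step - 1)  # last input index under the filter for this output
        acc = 0.0

        for f in range(F):
            acc += filter[f] * input[(i - f) % N]

        output[o] = acc
```

A real implementation would likely split the boundary and bulk regions into separate loops, as the pywavelets C code does, to keep the modulo out of the hot path.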
Note that periodisation is actually implemented in a separate function but it might no longer be strictly required based on email conversations?
Thanks @sjperkins this looks good and it seems like it was quite a bit of work. The only thing I would need is to make sure that we get consistent results with varying decomposition levels. We can probably test this by parametrising over level in the waverecn test. I need to follow up on the importance of the mode='periodisation' argument. I'll do some tests once this is merged and also check with Audrey
@numba.njit(nogil=True, cache=True)
def downsampling_convolution(input, output, filter, mode, step):
    if mode is Modes.periodisation:
        raise NotImplementedError("periodisation downsampling not implemented")
Periodisation is actually handled here and should defer to a function specifically for it.
Parametrising the decomposition tests over level should be easy enough. I suspect we can just use the zeropad mode now. This probably needs a test.
Yep, we just need zeropad mode. This should be the simplest, so hopefully not too much work to add it in?
I think it might be done; I put the code paths in. I just haven't tested the mode yet.
@landmanbester I've parametrised the wavedecn/waverecn tests over (1) level and (2) axes.
All the test cases pass. Note that in the case of (1), the number of levels can be too high. pywavelets warns, but in numba this would require dropping to objmode to log the warning. It looks like it may be possible with bleeding-edge numba (numba/numba#5674), but I've commented that out for now, since the wavedecn/waverecn compiles take a while and I'd prefer the compilations to be cached.
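For the record, a sketch of the kind of parametrised round-trip test discussed here; the pfb.wavelets import path and the pywt-mirroring signatures are assumptions, not necessarily the PR's exact layout:

```python
import numpy as np
import pytest
import pywt

from pfb.wavelets import wavedecn, waverecn  # assumed import path


@pytest.mark.parametrize("data_shape", [(50, 24, 63)])
@pytest.mark.parametrize("level", [1, 2, 3])
def test_wavedecn_waverecn_levels(data_shape, level):
    data = np.random.random(data_shape)

    out = wavedecn(data, "db1", "symmetric", level)
    pywt_out = pywt.wavedecn(data, "db1", "symmetric", level)

    # approximation coefficients match pywavelets
    np.testing.assert_array_almost_equal(out[0], pywt_out[0])

    # detail coefficient dictionaries match at every level
    for lvl in range(1, len(pywt_out)):
        for k, v in pywt_out[lvl].items():
            np.testing.assert_array_almost_equal(out[lvl][k], v)

    # reconstructions match too
    np.testing.assert_array_almost_equal(
        waverecn(out, "db1", "symmetric"),
        pywt.waverecn(pywt_out, "db1", "symmetric"))
```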
Awesome! I'm gonna merge and test how it parallelises. Thanks @sjperkins for this large and valuable contribution.
(Later merge commits referenced this pull request. The squashed commit history of Daubechies Wavelets (#10) itself was:)

* checkpoint
* More tests
* fix
* Add row slicing
* Generic axis slicing. Segfaults in some cases
* This seems to resolve the segfaults
* Modularise
* Test flags are the same
* Seems to fix segfaults
* Contiguity handling
* Remove debug in jit decorators
* tidy up
* Mode enums
* Outline, segfaults when compiling
* utils.py -> numba_llvm.py
* Use step in downsampling convolution
* Fixes and more tests, but not working yet
* numba_llvm.py => intrinsics.py
* Extend test_slice_axis
* dwt working
* simplify
* idwt_axis working
* Working idwtn
* Argument promotion fixups
* More test case
* Complain about periodisation downsampling
* cache decorators
* waverecn in place, coeff trimming not yet done
* Add extent to slice_axis
* Clip coefficients in idwt_axis. This also gets the waverecn test case working.
* cache=True, remove level arg from waverecn
* Add axes and level tests to waverecn/wavedecn tests
* Add fastmath decorators
* Use numba typed Lists
* test zeropad and levels
* Disable wavelet level warnings for now
Still a WIP