
Daubechies Wavelets #10

Merged: 36 commits into ratt-ru:project_init on Aug 20, 2020

Conversation

@sjperkins (Member) commented Jul 20, 2020

Still a WIP

@sjperkins changed the title from "checkpoint" to "Daubechies Wavelets" on Jul 20, 2020
@sjperkins marked this pull request as a draft on July 20, 2020 11:02
@sjperkins (Member, Author):

Thanks @landmanbester 🦆 🦆 🦆

@landmanbester (Collaborator):

Oh, nice! Glad to be a good little rubber duck ;-)

@sjperkins marked this pull request as ready for review on August 18, 2020 10:43
@sjperkins (Member, Author):

@landmanbester Could you please review?

@sjperkins (Member, Author):

I've made an effort to reproduce the pywavelets interfaces in the provided functions.

On the reconstruction side of things, pywavelets supports missing approximation/detail coefficients. This PR doesn't, but with some type mangling it may be possible if needed. I can only imagine this happening if coefficients were deleted from the dictionary produced by a wavedecn call and then fed into a waverecn call.
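
For reference, here is the pywt round trip these functions mirror. A minimal sketch; the array shape, wavelet and level are illustrative only:

import numpy as np
import pywt

data = np.random.randn(64, 32)

# Decompose: returns [cA_n, {'ad': ..., 'da': ..., 'dd': ...}, ...]
coeffs = pywt.wavedecn(data, "db1", mode="symmetric", level=2)

# Deleting e.g. coeffs[1]["dd"] here is the "missing coefficient"
# case that pywavelets tolerates but this PR does not.
rec = pywt.waverecn(coeffs, "db1", mode="symmetric")
np.testing.assert_array_almost_equal(data, rec)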

@landmanbester (Collaborator):

Cool, thanks. I'll have a look ASAP.

@landmanbester (Collaborator):

> I've made an effort to reproduce the pywavelets interfaces in the provided functions.
>
> On the reconstruction side of things, pywavelets supports missing approximation/detail coefficients. This PR doesn't, but with some type mangling it may be possible if needed. I can only imagine this happening if coefficients were deleted from the dictionary produced by a wavedecn call and then fed into a waverecn call.

I don't see this being an issue.

assert_array_almost_equal(output, pywt_out)


@pytest.mark.parametrize("data_shape", [(50, 24, 63)])
@landmanbester (Collaborator):

Can we test this with multiple levels as well, please?

@sjperkins (Member, Author):

Yes, it should be possible to parametrize this on level.
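
Something along these lines; a sketch only, with pywt standing in for this PR's wavedecn/waverecn, which are assumed to share the pywt call signature:

import numpy as np
import pytest
import pywt


@pytest.mark.parametrize("data_shape", [(50, 24, 63)])
@pytest.mark.parametrize("level", [1, 2, 3])
def test_round_trip(data_shape, level):
    data = np.random.randn(*data_shape)
    coeffs = pywt.wavedecn(data, "db1", "symmetric", level=level)
    rec = pywt.waverecn(coeffs, "db1", "symmetric")
    # Defensively trim in case the reconstruction overshoots odd axes
    rec = rec[tuple(slice(s) for s in data.shape)]
    np.testing.assert_array_almost_equal(data, rec)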

for k, v in d1.items():
    assert_array_almost_equal(v, d2[k])

pywt_rec = pywt.waverecn(out, "db1", "symmetric")
@landmanbester (Collaborator):

I usually use this with mode='periodization', but I see below that it has not been implemented. I don't know how big a difference this makes; I'll have to test.

@sjperkins (Member, Author):

Would it not be possible to use zeropad, in light of email conversations?
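
For context, the mode changes per-band coefficient counts in pywt; a quick sketch of the difference:

import numpy as np
import pywt

x = np.random.randn(100)
for mode in ("periodization", "zero", "symmetric"):
    cA, cD = pywt.dwt(x, "db4", mode=mode)
    print(mode, len(cA))

# 'periodization' keeps ceil(n / 2) = 50 coefficients per band, while
# 'zero' (zero-padding) and 'symmetric' give
# floor((n + filter_len - 1) / 2) = 53 for db4's length-8 filters.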

    raise NotImplementedError("constant_edge downsampling not implemented")
elif mode is Modes.smooth:
    raise NotImplementedError("smooth downsampling not implemented")
elif mode is Modes.periodic:
@landmanbester (Collaborator):

See my comment above: I think I need mode='periodic' support? How hard would it be to add this?

@sjperkins (Member, Author):

Unimplemented modes need their loops filled in from here.

I think zeropad is implemented (although I haven't tested parity).
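
For the record, the zeropad branch amounts to treating out-of-range samples as zero. A minimal single-axis sketch under the same layout as the snippet above, not the PR's actual code:

import numba
import numpy as np


@numba.njit(nogil=True, cache=True)
def zeropad_downsampling_convolution(input, output, filter, step):
    """Zero-padding downsampling convolution (float64 sketch)."""
    N = input.shape[0]
    F = filter.shape[0]

    # Evaluate the convolution at every step-th position
    for o, i in enumerate(range(step - 1, N + F - 1, step)):
        if o >= output.shape[0]:
            break
        acc = 0.0
        for f in range(F):
            j = i - f
            if j >= 0 and j < N:  # samples outside the signal contribute zero
                acc += filter[f] * input[j]
        output[o] = acc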

@sjperkins (Member, Author):

Note that periodisation is actually implemented in a separate function, but it might no longer be strictly required, based on email conversations?

@landmanbester (Collaborator) left a review:

Thanks @sjperkins, this looks good and it seems like it was quite a bit of work. The only thing I would need is to make sure that we get consistent results with varying decomposition levels. We can probably test this by parametrising over level in the waverecn test. I need to follow up on the importance of the mode='periodisation' argument. I'll do some tests once this is merged and also check with Audrey.

@numba.njit(nogil=True, cache=True)
def downsampling_convolution(input, output, filter, mode, step):
    if mode is Modes.periodisation:
        raise NotImplementedError("periodisation downsampling not implemented")
@sjperkins (Member, Author):

Periodisation is actually handled here and should defer to a function specifically for it.
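
The distinguishing feature is wrap-around indexing. A sketch of what that dedicated routine's inner loop looks like, for even-length signals and illustrative only (pywt additionally extends odd-length signals by repeating the last sample):

import numba


@numba.njit(nogil=True, cache=True)
def periodisation_downsampling_convolution(input, output, filter, step):
    """Periodisation downsampling convolution (float64 sketch)."""
    N = input.shape[0]
    F = filter.shape[0]

    # Indices wrap modulo the signal length, so each band
    # keeps N // step coefficients
    for o in range(output.shape[0]):
        acc = 0.0
        for f in range(F):
            acc += filter[f] * input[(step * o + step - 1 - f) % N]
        output[o] = acc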

@sjperkins (Member, Author):

> Thanks @sjperkins, this looks good and it seems like it was quite a bit of work. The only thing I would need is to make sure that we get consistent results with varying decomposition levels. We can probably test this by parametrising over level in the waverecn test. I need to follow up on the importance of the mode='periodisation' argument. I'll do some tests once this is merged and also check with Audrey.

Parametrising the decomposition tests over level should be easy enough.

I suspect we can just use the zeropad mode now. This probably needs a test.

@landmanbester (Collaborator):

Yep, we just need zeropad mode. It should be the simplest, so hopefully not too much work to add it in?

@sjperkins (Member, Author):

> Yep, we just need zeropad mode. It should be the simplest, so hopefully not too much work to add it in?

I think it might be done -- I put the code paths in. I just haven't tested the mode yet.

@sjperkins (Member, Author):

@landmanbester I've parametrised the wavedecn/waverecn tests over

  1. 10 levels
  2. real and complex data
  3. 3 Daubechies wavelets
  4. symmetric and zero-padding modes.

All the test cases pass. Note that in the case of (1), the number of levels can be too high. pywavelets warns in that case, but in numba this would require dropping to objmode to log the warning. It looks like it may be possible with bleeding-edge numba (numba/numba#5674), but I've commented that out for now, since the wavedecn/waverecn compiles take a while and I'd prefer the compilations to be cached.
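
The objmode escape hatch would look something like this sketch (check_level and its arguments are hypothetical, not names from this PR); at present it doesn't combine with cache=True, which is why it stays commented out:

import warnings

import numba


@numba.njit(nogil=True)
def check_level(level, max_level):
    # warnings needs the Python runtime, so hop out to object mode
    if level > max_level:
        with numba.objmode():
            warnings.warn("Level is too high: coefficients will be zero")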

@landmanbester (Collaborator):

Awesome! I'm gonna merge and test how it parallelises. Thanks @sjperkins for this large and valuable contribution.

@landmanbester merged commit 8c110a0 into ratt-ru:project_init on Aug 20, 2020
landmanbester pushed a commit that referenced this pull request Sep 29, 2020
* Testing on ESO137

* Something broke pfb_freq_chunked. Diverging after miter 5

* Factor of 2 tracked down

* Elastic net updates

* Elastic Net working with full measuremnet operator

* Project structure

* Testing alternate form of hpd

* Added solve_x0 script for fine tuning

* Adding more utilities

* Add prep_data script

* prep_data and make_dirty ready

* Testing fits orientation

* Added cgverbosity argument

* Fixed keywords not getting passed into function

* Make freq chunking the default

* Fixed fits orientation

* Added pfb.py script

* Tweaks

* Adaptive sig_21

* Different reweight strategy

* good result with solve_x0

* Fix missing sara

* Added missing nlevel keyword

* Begin adding PSI operator class

* New psi operator working

* Added GP prior term to gradient

* Merged in SARA dict

* Using fista for tidy

* Fixed bug in prep_data

* Changes to prep_data

* Commit scratch.py for prox_21 demo

* Remove undefined real_type in scratch.py

* Restructure solve_x0

* Reweight percentile

* Consistent column specification

* Update README

* Update README

* Update README

* [WIP] dask PSI operator (#4)

* dask functionally working

not sure about the actual results yet

* Fix real type issue in DaskPSI.dot

* test case matching psi and dask psi operators

* nchan -> nband

* Get dask prox_21 operator working

* Don't apply sig_21 twice

* Further graph simplication with blockwise

* ratio fix

* Refactored PSI functions. Dask results agree but no speedup

* Also clean up prox_21

* Refactored code producing identical results

* Test new prox (#5)

* Refactored PSI functions. Dask results agree but no speedup

* Also clean up prox_21

* Refactored code producing identical results

* Added spi example script from africanus

* Changed args.pp to args.psf_pars in simple_spi_fitter

* Refactor prox

* New prox seems to be working. Testing on ESO137

* Added spi scripts to setup.py

* Change git to https in requirements.txt

* Testing fix for requirements.txt

* Removed CUNIT from headers

* Add imaging scripts to setup.py and MANIFEST.in

* Change value of alpha -> 1e-8

* Add reweight_alpha_min option

* Add testing solve_x0 with tidy

* Still no cigar

* pre wsl2

* Ok

* Daubechies Wavelets (#10)

* checkpoint

* More tests

* fix

* Add row slicing

* Generic axis slicing. Segfaults in some cases

* This seems to resolve the segfaults

* Modularise

* Test flags are the same

* Seems to fix segfaults

* Contiguity handling

* Remove debug in jit decorators

* tidy up

* Mode enums

* Outline, segfaults when compiling

* utils.py -> numba_llvm.py

* Use step in downsampling convolution

* Fixes and more tests, but not working yet

* numba_llvm.py => intrinsics.py

* Extend test_slice_axis

* dwt working

* simplify

* idwt_axis working

* Working idwtn

* Argument promotion fixups

* More test case

* Complain about periodisation downsampling

* cache decorators

* waverecn in place, coeff trimming not yet done

* Add extent to slice_axis

* Clip coefficients in idwt_axis

This also gets the waverecn test case working.

* cache=True, remove level arg from waverecn

* Add axes and level tests to waverecn/wavedecn tests

* Add fastmath decorators

* Use numba typed Lists

* test zeropad and levels

* Disable wavelet level warnings for now

* Parallel psi (#12)

* Refactored PSI functions. Dask results agree but no speedup

* Also clean up prox_21

* Refactored code producing identical results

* Added some checks for exponential model. Seems to cope much better with point sources

* Refactor prox

* New prox seems to be working. Testing on ESO137

* Change value of alpha -> 1e-8

* Add reweight_alpha_min option

* Add testing solve_x0 with tidy

* Still no cigar

* Looks like exponential model works best for point source fields

* Ok

* Add logic for spatial 'taper' in prior

* Try convolving expsq kernel

* Try convolving expsq kernel

* K.iconvolve working

* Try new K in exponential model

* Trying K.iconvolve in exponential model

* Use pywt.ravel_coeffs instead of doing it manually

* New PSI working

* Remove exponential scripts

* Move scripts into subdirectory

* Change project structure

* Remove old tests. Getting pfb is not a package error

* Rename pfb.py to pfbclean.py

* Do not put pfb executable in pfb package

* Parallel pywt correct but not really much faster

* Rewriting gridder

* Dirty accumulation

* All gridder functions working

* pfbclean.py working wth new gridder

* Added make_mask script

* Printing pixel sizes in make_dirty

* Ensure even pixel sizes

* nx -> args.nx

* Use np.dot instead of freqmul

* Modify readme

* More readme fiddling

* .md -> .rst

* Added ci.yml

Co-authored-by: Simon Perkins <simon.perkins@gmail.com>
landmanbester pushed a commit that referenced this pull request Oct 1, 2020
(identical commit list to the Sep 29 push above, through "* More readme fiddling", then:)

* Fix README

* Change defaults scripts

Co-authored-by: Simon Perkins <simon.perkins@gmail.com>