
Support batch 1-d convolution in ht.signal.convolve #1515

Merged
merged 18 commits into from
Jul 4, 2024

Conversation

ClaudiaComito
Contributor

@ClaudiaComito ClaudiaComito commented Jun 7, 2024

Due Diligence

  • General:
  • Implementation:
    • unit tests: all split configurations tested
    • unit tests: multiple dtypes tested
    • documentation updated where needed

Description

This PR expands ht.signal.convolve to perform 1-D convolutions on a batch of 1-D arrays, i.e.

  • if signal.shape = (..., n) and kernel.shape = (m,): signal's last dimension will be convolved with kernel.

  • if e.g. signal.shape = (i, j, n) and kernel.shape = (i, j, m): each element signal[i, j] of size n will be convolved with the corresponding element kernel[i, j] of size m.

  • Signal and kernel can be any number of dimensions, as long as the batch dimensions match.

  • Signal and kernel can be distributed along any of the batch dimensions.
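The batched semantics described above can be sketched with a plain NumPy analogue. This is a hypothetical helper for illustration only, not heat's distributed implementation; the name `batch_convolve` and its signature are assumptions:

```python
import numpy as np

def batch_convolve(signal: np.ndarray, kernel: np.ndarray, mode: str = "full") -> np.ndarray:
    """Convolve along the last axis, batching over all leading dimensions (sketch)."""
    if kernel.ndim == 1:
        # one shared kernel, applied to every 1-D signal in the batch
        return np.apply_along_axis(lambda s: np.convolve(s, kernel, mode), -1, signal)
    # matching batch dimensions: pair each signal[i, j] with the kernel[i, j] of the same index
    flat_s = signal.reshape(-1, signal.shape[-1])
    flat_k = kernel.reshape(-1, kernel.shape[-1])
    out = np.stack([np.convolve(s, k, mode) for s, k in zip(flat_s, flat_k)])
    return out.reshape(*signal.shape[:-1], out.shape[-1])

signal = np.arange(24.0).reshape(2, 3, 4)  # a (2, 3) batch of length-4 signals
kernel = np.ones((2, 3, 2))                # a matching batch of length-2 kernels
print(batch_convolve(signal, kernel).shape)  # (2, 3, 5): full-mode output length n + m - 1
```

In heat itself the batch dimensions can additionally be distributed across processes; the sketch only captures the shape semantics on a single node.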

Issue/s resolved: #1514
Related: #1396

Changes proposed:

  • enable batch 1-D convolution
  • expand documentation, examples
  • expand tests

Type of change

  • New feature (non-breaking change which adds functionality)

Memory requirements

Performance

Does this change modify the behaviour of other functions? If so, which?

Yes: instead of always raising a ValueError when signal is n-D, ht.convolve now raises a ValueError only if batch convolution isn't possible (shape mismatch, unsuitable distribution).
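The two failure conditions can be sketched as a small validation step. The function name and message texts below are hypothetical, not heat's actual code; it only illustrates when a ValueError is now warranted:

```python
def check_batch_convolve(signal_shape, kernel_shape, split=None):
    """Raise ValueError only when batch convolution is impossible (sketch).

    split mimics heat's distribution axis: it must be a batch dimension,
    never the last (convolved) axis.
    """
    if len(kernel_shape) > 1 and signal_shape[:-1] != kernel_shape[:-1]:
        raise ValueError(
            f"batch dimensions must match: {signal_shape[:-1]} vs. {kernel_shape[:-1]}"
        )
    if split is not None and split == len(signal_shape) - 1:
        raise ValueError("signal must be distributed along a batch dimension, not the last axis")

check_batch_convolve((2, 3, 10), (2, 3, 4), split=0)  # OK: split along a batch dimension
```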

@ClaudiaComito ClaudiaComito marked this pull request as ready for review June 7, 2024 09:01
@ClaudiaComito ClaudiaComito modified the milestones: 2.0.0, 1.5.0 Jun 7, 2024
Contributor

Thank you for the PR!


codecov bot commented Jun 11, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 91.89%. Comparing base (064f495) to head (332ae51).
Report is 239 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1515      +/-   ##
==========================================
+ Coverage   91.85%   91.89%   +0.03%     
==========================================
  Files          80       80              
  Lines       11878    11916      +38     
==========================================
+ Hits        10911    10950      +39     
+ Misses        967      966       -1     
Flag Coverage Δ
unit 91.89% <100.00%> (+0.03%) ⬆️


@ClaudiaComito ClaudiaComito requested review from mrfh92 and JuanPedroGHM and removed request for mrfh92 June 12, 2024 09:33
Review comments on heat/core/signal.py (outdated) and heat/core/tests/test_signal.py were resolved.

@ClaudiaComito ClaudiaComito removed the request for review from krajsek July 1, 2024 08:12
@ClaudiaComito ClaudiaComito requested a review from mrfh92 July 1, 2024 08:12
Collaborator

@mrfh92 mrfh92 left a comment


Except for the missing test for ndim > 3 (see comment), this looks very good :)
Thx 👍

v.resplit_(axis=None)
else:
v.resplit_(axis=a.split)
batch_processing = True
Member


Why is batch_processing = True hard coded?

Contributor Author


Why is batch_processing = True hard coded?

Hi @krajsek, do you mean it would be better to let the user set it as a keyword argument? If so, I agree.

Collaborator


As far as I understand, batch_processing is initialized to True and then, depending on some conditions, set to False later on. In my opinion this is fine, and I would not suggest introducing a kwarg, since the correct value of batch_processing is already uniquely determined by the other inputs.

Member


OK, I agree; I also do not advocate for a kwarg. Hard-coding variables to specific values always gives me pause, but after reading the code again it makes sense. Maybe we should add a comment explaining why it is hard-coded here.
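The pattern under discussion, with the kind of comment suggested here, might look like the following. This is a simplified sketch, not heat's actual code; the function name and the ndim-based condition are assumptions for illustration:

```python
def is_batch_convolution(signal_ndim: int, kernel_ndim: int) -> bool:
    # batch_processing is hard-coded to True here and falsified below when the
    # inputs describe a plain (non-batched) 1-D convolution; its correct value
    # is fully determined by the inputs, so no user-facing kwarg is needed.
    batch_processing = True
    if signal_ndim == 1 and kernel_ndim == 1:
        batch_processing = False
    return batch_processing
```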

Further review comments on heat/core/signal.py were resolved.

Contributor Author

ClaudiaComito commented Jul 3, 2024

@JuanPedroGHM @krajsek @mrfh92 thanks for reviewing, I think I addressed all points, let me know if something's missing.

Collaborator

@mrfh92 mrfh92 left a comment


All change requests have been addressed.
Thanks for the work! 👍

Member

@krajsek krajsek left a comment


good work!

@mrfh92 mrfh92 mentioned this pull request Jul 3, 2024
@ClaudiaComito ClaudiaComito dismissed JuanPedroGHM’s stale review July 4, 2024 06:27

All change requests addressed, thanks JP!

@ClaudiaComito ClaudiaComito merged commit 225dc96 into main Jul 4, 2024
97 checks passed
@ClaudiaComito ClaudiaComito deleted the features/1514-Support_batch_1D_convolution branch July 4, 2024 06:27
@ClaudiaComito ClaudiaComito added the enhancement New feature or request label Aug 22, 2024
Labels
enhancement New feature or request signal

Successfully merging this pull request may close these issues.

Support batch 1D convolution
4 participants