
Bnb/fno #45

Merged: 6 commits merged into main from bnb/fno on Nov 21, 2023
Conversation

@bnb32 (Collaborator) commented Nov 20, 2023

Adds an FNO layer from 10.48550/arXiv.2202.11214. Unfortunately it's slow during forward pass (fwp) on CPU.

@codecov-commenter commented Nov 20, 2023

Codecov Report

Attention: 3 lines in your changes are missing coverage. Please review.

Comparison is base (42c914a) 87.58% compared to head (c422acf) 87.78%.

Files                            Patch %   Lines
phygnn/layers/custom_layers.py   94.33%    3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #45      +/-   ##
==========================================
+ Coverage   87.58%   87.78%   +0.19%     
==========================================
  Files          27       27              
  Lines        3037     3110      +73     
==========================================
+ Hits         2660     2730      +70     
- Misses        377      380       +3     
Flag        Coverage Δ
unittests   87.78% <96.05%> (+0.19%) ⬆️


@grantbuster (Member) left a comment


This looks fine to me, although I'm not going to read the paper past the abstract :)

Curious what you're up to and what the compute bottleneck is in the layer. It is a complicated layer so maybe there's not really an opportunity to optimize further. The repeated transpose blocks seem like they could be optimized out but those probably don't take that much time.

@bnb32 (Collaborator, Author) commented Nov 21, 2023

> This looks fine to me although i'm not going to read the paper past the abstract :)
>
> Curious what you're up to and what the compute bottleneck is in the layer. It is a complicated layer so maybe there's not really an opportunity to optimize further. The repeated transpose blocks seem like they could be optimized out but those probably don't take that much time.

Seems like FNO blocks are used a lot in the ML forecasting space: they are good at learning resolution-invariant and global properties, as opposed to convolutions, which learn local properties. I messed around with replacing one of the Conv3D blocks with an FNO block and it looked promising, but the fwp is very slow on the CPU. Was wondering if the FFT is optimized for the GPU.
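For context, the core of an FNO layer is a spectral convolution: FFT the input along the spatial axis, keep only the lowest Fourier modes, mix channels with learned complex weights, and inverse-FFT back. The mode truncation is what gives the resolution-invariant, global receptive field mentioned above. A minimal 1D NumPy sketch of that idea (hypothetical names; not phygnn's actual implementation, which operates on higher-dimensional inputs):

```python
import numpy as np

def spectral_conv1d(x, weights, n_modes):
    """Core FNO operation (sketch): multiply the lowest Fourier modes
    of the input by learned complex weights, zeroing the rest.

    x:       (batch, length, channels) real-valued input
    weights: (n_modes, channels, channels) complex weights (assumed learned)
    """
    batch, length, channels = x.shape
    # Real FFT along the spatial axis -> (batch, length // 2 + 1, channels)
    x_ft = np.fft.rfft(x, axis=1)
    # Keep only the lowest n_modes frequencies; mix channels per mode
    out_ft = np.zeros_like(x_ft)
    out_ft[:, :n_modes, :] = np.einsum(
        "bmc,mcd->bmd", x_ft[:, :n_modes, :], weights)
    # Inverse FFT back to physical space; output length matches input
    return np.fft.irfft(out_ft, n=length, axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 64, 4))                                 # toy input
w = rng.normal(size=(8, 4, 4)) + 1j * rng.normal(size=(8, 4, 4))  # toy weights
y = spectral_conv1d(x, w, n_modes=8)
print(y.shape)  # (2, 64, 4)
```

Because the learned weights live on a fixed number of Fourier modes rather than a fixed grid, the same weights apply at any input resolution, which is plausibly why the FFT cost dominates on CPU.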

@bnb32 bnb32 merged commit a7ca20d into main Nov 21, 2023
10 checks passed
@bnb32 bnb32 deleted the bnb/fno branch November 21, 2023 16:41
github-actions bot pushed a commit that referenced this pull request Nov 21, 2023