Bnb/fno #45
Conversation
Codecov Report

@@            Coverage Diff             @@
##             main      #45      +/-   ##
==========================================
+ Coverage   87.58%   87.78%   +0.19%
==========================================
  Files          27       27
  Lines        3037     3110      +73
==========================================
+ Hits         2660     2730      +70
- Misses        377      380       +3

View full report in Codecov by Sentry.
This looks fine to me, although I'm not going to read the paper past the abstract :)
Curious what you're up to and what the compute bottleneck is in the layer. It's a complicated layer, so maybe there's not really an opportunity to optimize further. The repeated transpose blocks look like they could be optimized out, but they probably don't take much time.
Seems like FNO blocks are used a lot in the ML forecasting space: they are good at learning resolution-invariant and global properties, as opposed to convolutions, which learn local properties. I messed around with replacing one of the conv3d blocks with an FNO block and it looked promising, but the forward pass is very slow on the CPU. Was wondering if the FFT was optimized for the GPU.
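For context, here is a minimal sketch of the spectral-convolution idea behind an FNO block, in NumPy only (the names `SpectralConv1d` and `n_modes` are illustrative, not from this PR). It shows why the layer is global and resolution-invariant: channels are mixed in Fourier space on a fixed number of low-frequency modes, and the FFT/inverse-FFT pair is where most of the CPU time goes.

```python
import numpy as np

class SpectralConv1d:
    """Illustrative 1D FNO spectral layer: FFT -> mix low modes -> inverse FFT."""

    def __init__(self, channels, n_modes, seed=0):
        rng = np.random.default_rng(seed)
        self.n_modes = n_modes
        # Complex weights mixing input/output channels for each retained mode.
        scale = 1.0 / channels
        self.weights = scale * (
            rng.standard_normal((channels, channels, n_modes))
            + 1j * rng.standard_normal((channels, channels, n_modes))
        )

    def __call__(self, x):
        # x: (batch, channels, length). The FFTs dominate the cost on CPU;
        # a cuFFT-backed implementation is typically much faster on GPU.
        x_ft = np.fft.rfft(x, axis=-1)          # to frequency space
        out_ft = np.zeros_like(x_ft)
        # Keep only the lowest n_modes frequencies: a global receptive field
        # independent of the input resolution, unlike a local conv kernel.
        out_ft[:, :, : self.n_modes] = np.einsum(
            "bim,iom->bom", x_ft[:, :, : self.n_modes], self.weights
        )
        return np.fft.irfft(out_ft, n=x.shape[-1], axis=-1)  # back to physical space


layer = SpectralConv1d(channels=4, n_modes=8)
y = layer(np.random.default_rng(1).standard_normal((2, 4, 64)))
print(y.shape)  # (2, 4, 64)
```

Because the weights only index Fourier modes, the same layer applies unchanged to inputs of any length, which is the resolution-invariance mentioned above.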
FNO layer from 10.48550/arXiv.2202.11214. Slow during the forward pass on CPU, unfortunately.