[WIP] Add quasi-3D Integrated Green Functions solver #5089
base: development
Conversation
for more information, see https://pre-commit.ci
@@ -0,0 +1,20 @@
{
Interesting. Didn't we already have a checksum file for this test before?
I thought so too, but there isn't one in development right now. Something must have happened when we moved to ctest, maybe? Anyway, the main problem here is that the heffte test never runs in CI, so that's why we didn't notice.
Actually, from here it seems that the checksums were saved in the wrong file, overwriting the ones for the test without heffte. That also explains why I had to change Regression/Checksum/benchmarks_json/test_3d_open_bc_poisson_solver.json too.
Source/FieldSolver/ElectrostaticSolvers/RelativisticExplicitES.H
fft_size, tmp_rho[local_boxid].dataPtr(),
reinterpret_cast<ablastr::math::anyfft::Complex*>(tmp_rho_fft.dataPtr()),
ablastr::math::anyfft::direction::R2C, AMREX_SPACEDIM-1,
nrz, nullptr, 1, nrx*nry, nullptr, 1, nsx*nsy);
Could we modify the interface of CreatePlanMany so that the user does not need to pass nullptr, 1?
We could, but I wanted to keep the wrapper as flexible and general as possible. What if someone someday needs batched FFTs that use those additional parameters? That was my idea, but I can make CreatePlanMany less flexible if you think that's best.
ablastr::math::anyfft::FFTplan forward_plan_rho = ablastr::math::anyfft::CreatePlanMany(
fft_size, tmp_rho[local_boxid].dataPtr(),
reinterpret_cast<ablastr::math::anyfft::Complex*>(tmp_rho_fft.dataPtr()),
ablastr::math::anyfft::direction::R2C, AMREX_SPACEDIM-1,
Could you add a comment to explain why this is AMREX_SPACEDIM-1 and not AMREX_SPACEDIM? (i.e. we are doing the FFT only in the transverse direction.)
How about:
// The 2D slice solver performs, for every z, a 2D transform in the transverse plane (x,y).
// Hence, we use AMREX_SPACEDIM-1=2 and not AMREX_SPACEDIM=3 in the batched FFT calculations.
?
const std::size_t lengths[] = {AMREX_D_DECL(std::size_t(real_size[0]),
                                            std::size_t(real_size[1]),
                                            std::size_t(real_size[2]))};
In this particular application, real_size[2] is not defined. Maybe instead of using AMREX_D_DECL, we should do a loop over dim-1 elements.
#endif
    dim, lengths,
    howmany, // number of transforms,
    desc);
What happens if we don't pass a desc at all?
The plan description is not needed anymore.
However, the rocFFT wrapper is currently less flexible than that of FFTW and cuFFT.
std::size_t(ostride*real_size[2]))};
rocfft_plan_description desc = nullptr; | ||
rocfft_status result = rocfft_plan_description_set_data_layout( |
The documentation for rocFFT seems to indicate that rocfft_plan_description_create should be called first.
The plan description is not needed anymore.
However, the rocFFT wrapper is currently less flexible than that of FFTW and cuFFT.
This PR implements a quasi-3D Poisson solver based on Integrated Green's functions. It solves the 2D Poisson equation in the (x,y) plane for every slice z. It is useful for beam-beam simulations. See PR #4648 for the full 3D solver.
-DWarpX_FFT should be necessary with the 3D IGF heFFTe solver (technically it isn't). Edit: actually it is, since we decided to select the distributed solver at runtime with the input parameter is_3d_distributed. 🔪 🐟