From cec979ac464ab563fe4ce901fbb244d62ea2cb25 Mon Sep 17 00:00:00 2001
From: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com>
Date: Tue, 14 Dec 2021 11:45:00 -0800
Subject: [PATCH] Memaster update to latest upstream development (#118)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* C++17, CMake 3.17+ (#2300)
* C++17, CMake 3.17+
  Update C++ requirements to compile with C++17 or newer.
* Superbuild: C++17 in AMReX/PICSAR/openPMD-api
* Summit: `cuda/11.0.3` -> `cuda/11.3.1`
  When compiling AMReX in C++17 on Summit, the `cuda/11.0.3` module (`nvcc 11.0.2211`) dies with:
  ```
  ... Base/AMReX_TinyProfiler.cpp
  nvcc error : 'cicc' died due to signal 11 (Invalid memory reference)
  nvcc error : 'cicc' core dumped
  ```
  Although this usually is a memory issue, it also appears in `-j1` compiles.
* Replace AMREX_SPACEDIM: Evolve & FieldSolver (#2642)
* AMREX_SPACEDIM : Boundary Conditions
* AMREX_SPACEDIM : Parallelization
* Fix compilation
* AMREX_SPACEDIM : Initialization
* Fix Typo
* space
* AMREX_SPACEDIM : Particles
* AMREX_SPACEDIM : Evolve and FieldSolver
* C++17: structured bindings to replace "std::tie(x,y,z) = f()" (#2644) (see sketch below)
* use structured bindings
* std::ignore equivalent in structured bindings
  Co-authored-by: Axel Huebl
* Perlmutter: December Update (#2645)
  Update the Perlmutter instructions for the major update from December 8th, 2021.
* 1D tests for plasma acceleration (#2593)
* modify requirements.txt and add input file for 1D Python pwfa
* add 1D Python plasma acceleration test to CI
* picmi version
* USE_PSATD=OFF for 1D
* Update Examples/Physics_applications/plasma_acceleration/PICMI_inputs_plasma_acceleration_1d.py
  Co-authored-by: Axel Huebl
* Update Regression/WarpX-tests.ini
  Co-authored-by: Axel Huebl
* Cartesian1D class in pywarpx/picmi.py
* requirements.txt: update picmistandard
* update picmi version
* requirements.txt: revert unintended changes
* 1D Laser Acceleration Test
* Update Examples/Physics_applications/laser_acceleration/inputs_1d
  Co-authored-by: Axel Huebl
* Update Examples/Physics_applications/plasma_acceleration/PICMI_inputs_plasma_acceleration_1d.py
  Co-authored-by: Axel Huebl
* add data_list to PICMI laser_acceleration test
* increase max steps and fix bug in pywarpx/picmi.py 1DCartesian moving window direction
* add data_list to Python laser acceleration test
* picmistandard update
  Co-authored-by: Prabhat Kumar
  Co-authored-by: Axel Huebl
* CMake 3.22+: Policy CMP0127 (#2648)
  Fix a warning with CMake 3.22+. We use simple syntax in `cmake_dependent_option`, so we are compatible with the extended syntax in CMake 3.22+:
  https://cmake.org/cmake/help/v3.22/policy/CMP0127.html
* run_test.sh: Own virtual env (#2653)
  Isolate builds locally, so we don't overwrite a developer's setup anymore. This also avoids a couple of nifty problems that can occur by mixing those envs. Originally part of #2556
* GNUmake: Fix Python Install (force) (#2655)
  Local developers and cached CI installs never installed `pywarpx` if an old version existed. The `--force` must be with us.
* Add: Regression/requirements.txt
  Forgotten in #2653
* Azure: `set -eu -o pipefail`
  Lol, that's not the default. We previously had `script`, where it was the default. Introduced in #2615
* GNUmake & `WarpX-test.ini`: `python` -> `python3`
  Consistent with all other calls to Python in tests.
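  For reference, a minimal sketch of the structured-bindings change in #2644 above. The function `f` and the variables here are hypothetical illustrations, not actual WarpX code:
  ```
  #include <tuple>

  std::tuple<int, double, char> f () { return {1, 2.0, 'a'}; }

  void demo ()
  {
      // C++11/14 style: declare the variables first, then unpack via std::tie
      int i; double d; char c;
      std::tie(i, d, c) = f();

      // C++17 structured bindings: declare and unpack in one statement
      auto [i2, d2, c2] = f();

      // structured bindings have no std::ignore; a common equivalent is to
      // name the unused binding and silence the unused-variable warning
      auto [i3, d3, unused] = f();
      (void) unused;
  }
  ```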
* Fix missing checksums1d (#2657)
* Docs: Fix missing Checksum Ref
* Checksum: LaserAcceleration_1d
* Checksum: Python_PlasmaAcceleration_1d
* Regression/requirements.txt: openpmd-api
  Follow-up to 8f93e010ccf2bb2c5b1236330ebd27d104732890
* Azure: pre-install `setuptools` upgrade
  Might fix:
  ```
  - installing setuptools_scm using the system package manager to ensure consistency
  - migrating from the deprecated setup_requires mechanism to pep517/518 and using a pyproject.toml to declare build dependencies which are reliably pre-installed before running the build tools
  warnings.warn(
  TEST FAILED: /home/vsts/.local/lib/python3.8/site-packages/ does NOT support .pth files
  You are attempting to install a package to a directory that is not on PYTHONPATH and which Python does not read ".pth" files from. The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was:

      /home/vsts/.local/lib/python3.8/site-packages/

  and your PYTHONPATH environment variable currently contains:

      ''

  Here are some of your options for correcting the problem:

  * You can choose a different installation directory, i.e., one that is on PYTHONPATH or supports .pth files
  * You can add the installation directory to the PYTHONPATH environment variable. (It must then also be on PYTHONPATH whenever you run Python and want to use the package(s) you are installing.)
  * You can set up the installation directory to support ".pth" files by using one of the approaches described here: https://setuptools.readthedocs.io/en/latest/easy_install.html#custom-installation-locations

  Please make the appropriate changes for your system and try again.
  ```
* GNUmake `installwarpx`: `mv` -> `cp`
  No reason to rebuild. Make will detect the dependency when needed.
* Python GNUmake: Remove Prefix Hacks
  FREEEEDOM. venv power.
* Azure: Ensure latest venv installed
* Python/setup.py: picmistandard==0.0.18
  Forgotten in #2593
* Fix: analysis_default_regression.py
  Mismatched checksum file due to crude hard-coding.
* PWFA 1D: Fix output name
  Hard-coded, undocumented convention: turns out this must be the name of the test that we define in the ini file. Logical, isn't it. Not. Follow-up to #2593
* Docs: `python3 -m pip` & Virtual Env (#2656)
* Docs: `python3 -m pip`
  Use `python3 -m pip`:
  - works independent of PATH
  - always uses the right Python
  - is the recommended way to use `pip`
* Dependencies: Python incl. venv
  Backported from #2556. Follow-up to #2653
* CMake: 3.18+ (#2651)
  With the C++17 switch, we required CMake 3.17+, since that version introduced the `cuda_std_17` target compile feature. It turns out that one of the many CUDA improvements in CMake 3.18+ is to fix that feature for good, so we bump our CMake requirement. Since CMake is easy to install, it's easier to require a clean, newer version than to work around a broken old one. Spotted first by Phil on AWS instances, thx!
* fix check for absolute library install path (#2646)
  Co-authored-by: Hannes T
* use if constexpr to replace template specialization (#2660) (see sketch below)
* fix for setting the boundary condition potentials in 1D ES simulations (#2649)
* `use_default_v_` Only w/ Boosted Frame (#2654)
* ICC CI: Unbound Vars (`setvars.sh`) (#2663)
  Ignore:
  ```
  /opt/intel/oneapi/compiler/latest/env/vars.sh: line 236: OCL_ICD_FILENAMES: unbound variable
  ```
* QED openPMD Tests: Specify H5 Backend (#2661)
  We default to ADIOS `.bp` if available; thus, specify the HDF5 backend explicitly.
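  Likewise, a minimal sketch of the `if constexpr` pattern adopted in #2660 above and #2659 below. The shape function and its expressions are placeholders, not the actual `ShapeFactors.H` code:
  ```
  // Before C++17, each supported order needed its own template specialization.
  // With if constexpr, a single template keeps only the branch matching the
  // compile-time parameter; discarded branches are never instantiated.
  #include <cmath>

  template <int order>
  double shape_factor (double x)
  {
      if constexpr (order == 1) {
          return 1.0 - std::abs(x);   // placeholder expression
      } else if constexpr (order == 2) {
          return 0.75 - x*x;          // placeholder expression
      } else {
          // the actual code reports unsupported orders at runtime via
          // amrex::Abort(), per "Replace static_assert with amrex::Abort" below
          return 0.0;
      }
  }
  ```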
* C++17: if constexpr for templates in ShapeFactors (#2659)
* use if constexpr to replace template specialization
* Remove Interface Annotations
* Replace static_assert with amrex::Abort
* Add includes & authors
  Co-authored-by: Axel Huebl
* ABLASTR Library (#2263)
* [Draft] ABLASTR Library
  - CMake object library
  - include FFTW wrappers to start with
* Add: MPIInitHelpers
* Enable ABLASTR-only builds
* Add alias WarpX::ablastr
* ABLASTR: openPMD forwarding
* make_third_party_includes_system: Avoid Collision
* WarpX: depend on `ablastr`
* Definitions: WarpX -> ablastr
* CMake: Reduce build objects for ABLASTR
  Skip all object files that we do not use in builds.
* CMake: app/shared links all object targets
  Our `PRIVATE` source/objects are not PUBLICly propagated themselves.
* Docs: Fix Warning Logger Typo (#2667)
* Python: Add 3.10, Relax upper bound (#2664)
  There are no breaking changes in Python 3.10 that affect us. Given the version compatibility of Python and its ABI stability, there is no need at the moment to impose an upper limit. Thus, it is relaxed now in general.
* Fixing the initialization of the EB data in ghost cells (#2635)
* Using ng_FieldSolver ghost cells in the EB data
* Removed an unused variable
* Fixed makeEBFabFactory also in WarpXRegrid.cpp
* Fixed end-of-line whitespace
* Undoing #2607
* Add PML Support for multi-J Algorithm (#2603)
* Add PML Support for multi-J Algorithm
* Add CI Test
* Fix the scope of profiler for SYCL (#2668) (see sketch below)
  In main.cpp, the destructor of the profiler was called after amrex::Finalize. This caused an error in SYCL due to a device synchronization call in the destructor, because the SYCL queues in amrex had already been deleted. In this commit, we limit the scope of the profiler so that its destructor is called before the queues are deleted. Note that this was never an issue for CUDA/HIP, because the device synchronization calls in those backends do not need any amrex objects.
* Add high energy asymptotic fit for Proton-Boron total cross section (#2408)
* Add high energy asymptotic fit for Proton-Boron total cross section
* Write keV and MeV instead of kev and mev
* Add @return doxystrings
* Add anisotropic mesh refinement example (#2650)
* Add anisotropic mesh refinement example
* Update benchmark
* AMReX/PICSAR: Weekly Update (#2666)
* AMReX: Weekly Update
* Reset: PEC_particle, RepellingParticles, subcyclingMR
  New AMReX grid layout routines split grids until they truly reach the number of MPI ranks, if the blocking factor allows. This changes some of our particle orders slightly.
* Add load balancing test (#2561)
* Added embedded_circle test
* Add embedded_circle test files
* Removed diag files
* removed PICMI input file
* Update to use default regression analysis
* Added line breaks for spacing
  Co-authored-by: Axel Huebl
* Added description
* Fixed benchmark file
* Added load balancing to test
* Commented out load_balancing portion of test. This will be added back in once load balancing is fixed.
* Add load balancing to embedded_boundary test
* Updated checksum
* Added embedded_circle test
* Add embedded_circle test files
* removed PICMI input file
* Update to use default regression analysis
* Added load balancing to test
* Commented out load_balancing portion of test. This will be added back in once load balancing is fixed.
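  And a minimal sketch of the profiler scoping fix from #2668 above, assuming AMReX's `BL_PROFILE_VAR` macros; this is an illustration of the pattern, not the patch's actual main.cpp:
  ```
  #include <AMReX.H>
  #include <AMReX_BLProfiler.H>

  int main (int argc, char* argv[])
  {
      amrex::Initialize(argc, argv);
      {
          // Keep the profiler inside this inner scope so that its destructor
          // (which may trigger a device synchronization on SYCL) runs before
          // amrex::Finalize() tears down the device queues it depends on.
          BL_PROFILE_VAR("main()", pmain);
          // ... run the simulation ...
          BL_PROFILE_VAR_STOP(pmain);
      }
      amrex::Finalize();
      return 0;
  }
  ```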
* Add load balancing to embedded_boundary test
* added analysis.py file in order to relax the tolerance on the test
* Ensure that timers are used to update the load balancing algorithm
* Updated test name retrieval
  Co-authored-by: Axel Huebl
  Co-authored-by: Roelof
  Co-authored-by: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com>
* Adding EB multifabs to the Python interface (#2647)
* Adding edge_lengths and face_areas to the Python interface
* Added wrappers for the two new arrays of data
* Adding a CI test
* Fixed test name
* Added customRunCmd
* Added mpi in test
* Refactor DepositCharge so it can be called from ImpactX. (#2652)
* Refactor DepositCharge so it can be called from ImpactX.
* change thread_num
* Fix namespace
* remove all static WarpX:: members and methods from DepositChargeDoIt.
* fix unused
* Don't access ref_ratio unless lev != depos_lev
* more unused
* move function to its own file / namespace
* don't need a CMakeLists.txt for this
* lower case namespace, rename file
* Refactor: Profiler Wrapper
  Explicit control for synchronization instead of global state.
  Co-authored-by: Axel Huebl
* ABLASTR: Fix Doxygen in `DepositCharge`
* update version number and changelog

Co-authored-by: Axel Huebl
Co-authored-by: Prabhat Kumar <89051199+prkkumar@users.noreply.github.com>
Co-authored-by: Luca Fedeli
Co-authored-by: Prabhat Kumar
Co-authored-by: s9105947 <80697868+s9105947@users.noreply.github.com>
Co-authored-by: Hannes T
Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com>
Co-authored-by: Phil Miller
Co-authored-by: Lorenzo Giacomel <47607756+lgiacome@users.noreply.github.com>
Co-authored-by: Weiqun Zhang
Co-authored-by: Neïl Zaim <49716072+NeilZaim@users.noreply.github.com>
Co-authored-by: Remi Lehe
Co-authored-by: Kevin Z.
Zhu <86268612+KZhu-ME@users.noreply.github.com> Co-authored-by: Andrew Myers --- .azure-pipelines.yml | 39 +-- .github/workflows/cuda.yml | 8 +- .github/workflows/dependencies/icc.sh | 4 +- .github/workflows/intel.yml | 4 +- .github/workflows/ubuntu.yml | 21 ++ CMakeLists.txt | 121 +++++---- Docs/README.md | 2 +- Docs/source/dataanalysis/openpmdviewer.rst | 2 +- Docs/source/dataanalysis/picviewer.rst | 6 +- Docs/source/dataanalysis/yt.rst | 4 +- Docs/source/developers/documentation.rst | 2 +- Docs/source/developers/testing.rst | 9 +- Docs/source/developers/warning_logger.rst | 2 +- Docs/source/install/dependencies.rst | 21 +- Docs/source/install/hpc/cori.rst | 8 +- Docs/source/install/hpc/perlmutter.rst | 18 +- Docs/source/install/hpc/quartz.rst | 2 +- Docs/source/install/hpc/summit.rst | 2 +- Docs/source/usage/parameters.rst | 4 +- .../PICMI_inputs_EB_API.py | 252 ++++++++++++++++++ .../embedded_boundary_python_API/analysis.py | 10 + .../PICMI_inputs_laser_acceleration.py | 3 +- .../laser_acceleration/inputs_1d | 66 +++++ .../PICMI_inputs_plasma_acceleration_1d.py | 71 +++++ Examples/Tests/embedded_circle/analysis.py | 15 ++ Examples/Tests/embedded_circle/inputs_2d | 9 +- Examples/Tests/multi_J/inputs_2d_pml | 133 +++++++++ Examples/analysis_default_regression.py | 3 +- Python/pywarpx/_libwarpx.py | 145 ++++++++++ Python/pywarpx/fields.py | 42 +++ Python/pywarpx/picmi.py | 51 ++++ Python/setup.py | 2 +- README.md | 2 +- .../Langmuir_multi_2d_MR_anisotropic.json | 44 +++ .../benchmarks_json/LaserAcceleration_1d.json | 23 ++ .../benchmarks_json/PEC_particle.json | 32 +-- .../Python_PlasmaAcceleration1d.json | 20 ++ .../benchmarks_json/RepellingParticles.json | 20 +- .../benchmarks_json/embedded_circle.json | 40 +-- .../benchmarks_json/multi_J_2d_psatd_pml.json | 51 ++++ .../benchmarks_json/subcyclingMR.json | 28 +- Regression/WarpX-GPU-tests.ini | 2 +- Regression/WarpX-tests.ini | 176 +++++++++--- Regression/requirements.txt | 6 + Source/ABLASTR/DepositCharge.H | 182 +++++++++++++ Source/ABLASTR/Make.package | 1 + Source/ABLASTR/ProfilerWrapper.H | 47 ++++ Source/Diagnostics/WarpXOpenPMD.cpp | 21 +- Source/EmbeddedBoundary/WarpXInitEB.cpp | 8 +- Source/Evolve/WarpXComputeDt.cpp | 10 +- Source/Evolve/WarpXEvolve.cpp | 34 ++- Source/FieldSolver/ElectrostaticSolver.cpp | 42 +-- .../FiniteDifferenceSolver/EvolveEPML.cpp | 2 +- .../MacroscopicProperties.cpp | 4 +- .../FieldSolver/SpectralSolver/CMakeLists.txt | 6 +- .../ComovingPsatdAlgorithm.H | 2 +- .../ComovingPsatdAlgorithm.cpp | 20 +- .../SpectralAlgorithms/PMLPsatdAlgorithm.cpp | 8 +- .../SpectralAlgorithms/PsatdAlgorithm.H | 2 +- .../SpectralAlgorithms/PsatdAlgorithm.cpp | 38 +-- .../SpectralBaseAlgorithm.H | 2 +- .../SpectralBaseAlgorithm.cpp | 8 +- .../SpectralSolver/SpectralFieldData.H | 2 +- .../SpectralSolver/SpectralFieldData.cpp | 48 ++-- Source/FieldSolver/WarpX_QED_K.H | 2 +- Source/Initialization/WarpXInitData.cpp | 17 -- .../LaserProfileFromTXYEFile.cpp | 3 +- Source/Make.WarpX | 11 +- Source/Parallelization/WarpXRegrid.cpp | 3 +- .../ProtonBoronFusionCrossSection.H | 98 +++++-- .../Particles/Deposition/ChargeDeposition.H | 3 +- Source/Particles/ShapeFactors.H | 234 ++++++---------- Source/Particles/WarpXParticleContainer.cpp | 195 ++++---------- Source/Python/WarpXWrappers.H | 12 + Source/Python/WarpXWrappers.cpp | 12 + Source/Utils/CMakeLists.txt | 6 +- Source/Utils/MsgLogger/MsgLogger.cpp | 8 +- .../Utils/MsgLogger/MsgLoggerSerialization.H | 173 +++++------- Source/Utils/WarpXProfilerWrapper.H | 42 +-- Source/WarpX.H | 5 +- 
Source/WarpX.cpp | 103 +++---- Source/main.cpp | 10 +- cmake/WarpXFunctions.cmake | 30 ++- cmake/dependencies/AMReX.cmake | 2 +- cmake/dependencies/FFT.cmake | 8 +- mewarpx/changelog.csv | 8 +- mewarpx/mewarpx/_version.py | 2 +- pyproject.toml | 2 +- requirements.txt | 2 +- run_test.sh | 20 +- setup.py | 9 +- 91 files changed, 2139 insertions(+), 893 deletions(-) create mode 100644 Examples/Modules/embedded_boundary_python_API/PICMI_inputs_EB_API.py create mode 100644 Examples/Modules/embedded_boundary_python_API/analysis.py create mode 100644 Examples/Physics_applications/laser_acceleration/inputs_1d create mode 100644 Examples/Physics_applications/plasma_acceleration/PICMI_inputs_plasma_acceleration_1d.py create mode 100755 Examples/Tests/embedded_circle/analysis.py create mode 100644 Examples/Tests/multi_J/inputs_2d_pml create mode 100644 Regression/Checksum/benchmarks_json/Langmuir_multi_2d_MR_anisotropic.json create mode 100644 Regression/Checksum/benchmarks_json/LaserAcceleration_1d.json create mode 100644 Regression/Checksum/benchmarks_json/Python_PlasmaAcceleration1d.json create mode 100644 Regression/Checksum/benchmarks_json/multi_J_2d_psatd_pml.json create mode 100644 Regression/requirements.txt create mode 100644 Source/ABLASTR/DepositCharge.H create mode 100644 Source/ABLASTR/Make.package create mode 100644 Source/ABLASTR/ProfilerWrapper.H diff --git a/.azure-pipelines.yml b/.azure-pipelines.yml index 6b4796a0c6a..1ff1b58679b 100644 --- a/.azure-pipelines.yml +++ b/.azure-pipelines.yml @@ -50,43 +50,41 @@ jobs: # - for a refresh the key has to change, e.g., hash of a tracked file in the key - task: Cache@2 inputs: - key: 'Ccache | "$(System.JobName)" | cmake/dependencies/AMReX.cmake | run_test.sh' + key: 'Ccache | "$(System.JobName)" | .azure-pipelines.yml | cmake/dependencies/AMReX.cmake | run_test.sh' restoreKeys: | - Ccache | "$(System.JobName)" | cmake/dependencies/AMReX.cmake | run_test.sh - Ccache | "$(System.JobName)" | cmake/dependencies/AMReX.cmake - Ccache | "$(System.JobName)" - Ccache + Ccache | "$(System.JobName)" | .azure-pipelines.yml | cmake/dependencies/AMReX.cmake | run_test.sh + Ccache | "$(System.JobName)" | .azure-pipelines.yml | cmake/dependencies/AMReX.cmake + Ccache | "$(System.JobName)" | .azure-pipelines.yml path: /home/vsts/.ccache cacheHitVar: CCACHE_CACHE_RESTORED displayName: Cache Ccache Objects - task: Cache@2 inputs: - key: 'Python3 | "$(System.JobName)" | run_test.sh' + key: 'Python3 | "$(System.JobName)" | .azure-pipelines.yml | run_test.sh' restoreKeys: | - Python3 | "$(System.JobName)" | run_test.sh - Python3 | "$(System.JobName)" - Python3 + Python3 | "$(System.JobName)" | .azure-pipelines.yml | run_test.sh + Python3 | "$(System.JobName)" | .azure-pipelines.yml path: /home/vsts/.local/lib/python3.8 cacheHitVar: PYTHON38_CACHE_RESTORED displayName: Cache Python Libraries - bash: | + set -eu -o pipefail cat /proc/cpuinfo | grep "model name" | sort -u df -h - sudo apt install -y ccache gcc gfortran g++ openmpi-bin libopenmpi-dev \ + sudo apt install -y ccache curl gcc gfortran git g++ openmpi-bin libopenmpi-dev \ libfftw3-dev libfftw3-mpi-dev libhdf5-openmpi-dev pkg-config make \ python3 python3-pip python3-venv python3-setuptools libblas-dev liblapack-dev ccache --set-config=max_size=10.0G - sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 2 - sudo update-alternatives --set python /usr/bin/python3 - python -m pip install --upgrade pip - python -m pip install --upgrade wheel - python -m pip install --upgrade pipx - 
python -m pipx install cmake - python -m pipx ensurepath + python3 -m pip install --upgrade pip + python3 -m pip install --upgrade setuptools + python3 -m pip install --upgrade wheel + python3 -m pip install --upgrade virtualenv + python3 -m pip install --upgrade pipx + python3 -m pipx install cmake + python3 -m pipx ensurepath export PATH="$HOME/.local/bin:$PATH" - python -m pip install --upgrade matplotlib mpi4py numpy scipy yt sudo curl -L -o /usr/local/bin/cmake-easyinstall https://git.io/JvLxY sudo chmod a+x /usr/local/bin/cmake-easyinstall if [ "${WARPX_CI_OPENPMD:-FALSE}" == "TRUE" ]; then @@ -95,14 +93,16 @@ jobs: -DCMAKE_CXX_COMPILER_LAUNCHER=$(which ccache) \ -DCMAKE_VERBOSE_MAKEFILE=ON \ -DopenPMD_USE_PYTHON=OFF -DBUILD_TESTING=OFF -DBUILD_EXAMPLES=OFF -DBUILD_CLI_TOOLS=OFF - python -m pip install --upgrade openpmd-api + python3 -m pip install --upgrade openpmd-api fi if [[ "${WARPX_CI_RZ_OR_NOMPI:-FALSE}" == "TRUE" || "${WARPX_CI_PYTHON_MAIN:-FALSE}" == "TRUE" ]]; then cmake-easyinstall --prefix=/usr/local git+https://bitbucket.org/icl/blaspp.git \ -DCMAKE_CXX_COMPILER_LAUNCHER=$(which ccache) \ + -DCMAKE_CXX_STANDARD=17 \ -Duse_openmp=OFF -Dbuild_tests=OFF -DCMAKE_VERBOSE_MAKEFILE=ON cmake-easyinstall --prefix=/usr/local git+https://bitbucket.org/icl/lapackpp.git \ -DCMAKE_CXX_COMPILER_LAUNCHER=$(which ccache) \ + -DCMAKE_CXX_STANDARD=17 \ -Duse_cmake_find_lapack=ON -Dbuild_tests=OFF -DCMAKE_VERBOSE_MAKEFILE=ON fi rm -rf ${CEI_TMP} @@ -110,6 +110,7 @@ jobs: displayName: 'Install dependencies' - bash: | + set -eu -o pipefail df -h ./run_test.sh rm -rf ${WARPX_CI_TMP} diff --git a/.github/workflows/cuda.yml b/.github/workflows/cuda.yml index 0ad0199cdef..23129d91b0f 100644 --- a/.github/workflows/cuda.yml +++ b/.github/workflows/cuda.yml @@ -24,7 +24,13 @@ jobs: run: | .github/workflows/dependencies/nvcc11.sh export CEI_SUDO="sudo" - cmake-easyinstall --prefix=/usr/local git+https://github.com/openPMD/openPMD-api.git@0.14.3 -DopenPMD_USE_PYTHON=OFF -DBUILD_TESTING=OFF -DBUILD_EXAMPLES=OFF -DBUILD_CLI_TOOLS=OFF + cmake-easyinstall --prefix=/usr/local \ + git+https://github.com/openPMD/openPMD-api.git@0.14.3 \ + -DopenPMD_USE_PYTHON=OFF \ + -DBUILD_TESTING=OFF \ + -DBUILD_EXAMPLES=OFF \ + -DBUILD_CLI_TOOLS=OFF \ + -DCMAKE_VERBOSE_MAKEFILE=ON - name: build WarpX run: | export PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH} diff --git a/.github/workflows/dependencies/icc.sh b/.github/workflows/dependencies/icc.sh index 72d697c1139..68d5a403a91 100755 --- a/.github/workflows/dependencies/icc.sh +++ b/.github/workflows/dependencies/icc.sh @@ -26,9 +26,9 @@ sudo apt-get update sudo apt-get install -y intel-oneapi-compiler-dpcpp-cpp-and-cpp-classic # activate now via -set +e +set +eu source /opt/intel/oneapi/setvars.sh -set -e +set -eu # cmake-easyinstall sudo curl -L -o /usr/local/bin/cmake-easyinstall https://git.io/JvLxY diff --git a/.github/workflows/intel.yml b/.github/workflows/intel.yml index 46fdcdc6370..793eceaf996 100644 --- a/.github/workflows/intel.yml +++ b/.github/workflows/intel.yml @@ -23,9 +23,9 @@ jobs: .github/workflows/dependencies/icc.sh - name: build WarpX run: | - set +e + set +eu source /opt/intel/oneapi/setvars.sh - set -e + set -eu export CXX=$(which icpc) export CC=$(which icc) diff --git a/.github/workflows/ubuntu.yml b/.github/workflows/ubuntu.yml index bc7505c5dbf..d640f1bc7e4 100644 --- a/.github/workflows/ubuntu.yml +++ b/.github/workflows/ubuntu.yml @@ -34,6 +34,27 @@ jobs: -DWarpX_QED=OFF cmake --build build_RZ -j 2 + build_gcc_ablastr: + name: 
GCC ABLASTR w/o MPI + runs-on: ubuntu-20.04 + if: github.event.pull_request.draft == false + env: + CMAKE_GENERATOR: Ninja + CXXFLAGS: "-Werror" + steps: + - uses: actions/checkout@v2 + - name: install dependencies + run: | + .github/workflows/dependencies/gcc.sh + sudo apt-get install -y libopenmpi-dev openmpi-bin + - name: build WarpX + run: | + cmake -S . -B build \ + -DCMAKE_VERBOSE_MAKEFILE=ON \ + -DWarpX_APP=OFF \ + -DWarpX_LIB=OFF + cmake --build build -j 2 + build_pyfull: name: Clang pywarpx runs-on: ubuntu-20.04 diff --git a/CMakeLists.txt b/CMakeLists.txt index 61b515ead2a..c0b660ed527 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -1,6 +1,6 @@ # Preamble #################################################################### # -cmake_minimum_required(VERSION 3.15.0) +cmake_minimum_required(VERSION 3.18.0) project(WarpX VERSION 21.12) include(${WarpX_SOURCE_DIR}/cmake/WarpXFunctions.cmake) @@ -19,13 +19,25 @@ endif() # AMReX 21.06+ supports CUDA_ARCHITECTURES with CMake 3.20+ # CMake 3.18+: CMAKE_CUDA_ARCHITECTURES # https://cmake.org/cmake/help/latest/policy/CMP0104.html -if(CMAKE_VERSION VERSION_LESS 3.20) - if(POLICY CMP0104) - cmake_policy(SET CMP0104 OLD) - endif() +if(POLICY CMP0104) + cmake_policy(SET CMP0104 OLD) +endif() + +# We use simple syntax in cmake_dependent_option, so we are compatible with the +# extended syntax in CMake 3.22+ +# https://cmake.org/cmake/help/v3.22/policy/CMP0127.html +if(POLICY CMP0127) + cmake_policy(SET CMP0127 NEW) endif() +# C++ Standard in Superbuilds ################################################# +# +# This is the easiest way to push up a C++17 requirement for AMReX, PICSAR and +# openPMD-api until they increase their requirement. +set_cxx17_superbuild() + + # CCache Support ############################################################## # # this is an optional tool that stores compiled object files; allows fast @@ -104,7 +116,7 @@ option(WarpX_IPO "Compile WarpX with interprocedu # builds AMReX from source (default) or finds an existing install include(${WarpX_SOURCE_DIR}/cmake/dependencies/AMReX.cmake) # suppress warnings in AMReX headers (use -isystem instead of -I) -make_third_party_includes_system(AMReX::amrex AMReX) +warpx_make_third_party_includes_system(AMReX::amrex AMReX) # PICSAR # builds PICSAR from source @@ -128,14 +140,14 @@ endif() # Targets ##################################################################### # -if(NOT WarpX_APP AND NOT WarpX_LIB) - message(FATAL_ERROR "Need to build at least WarpX app or " - "library/Python bindings") -endif() - # collect all objects for compilation add_library(WarpX OBJECT) -set(_ALL_TARGETS WarpX) +add_library(ablastr OBJECT) + +# ABLASTR library +set(_BUILDINFO_SRC ablastr) +set(_ALL_TARGETS WarpX ablastr) +add_library(WarpX::ablastr ALIAS ablastr) # executable application # note: we currently avoid a dependency on a core library @@ -143,7 +155,7 @@ set(_ALL_TARGETS WarpX) if(WarpX_APP) add_executable(app) add_executable(WarpX::app ALIAS app) - target_link_libraries(app PRIVATE WarpX) + target_link_libraries(app PRIVATE WarpX ablastr) set(_BUILDINFO_SRC app) list(APPEND _ALL_TARGETS app) endif() @@ -152,11 +164,11 @@ endif() if(WarpX_LIB) add_library(shared MODULE) add_library(WarpX::shared ALIAS shared) - target_link_libraries(shared PUBLIC WarpX) + target_link_libraries(shared PUBLIC WarpX ablastr) set(_BUILDINFO_SRC shared) list(APPEND _ALL_TARGETS shared) - set_target_properties(WarpX shared PROPERTIES + set_target_properties(WarpX ablastr shared PROPERTIES 
POSITION_INDEPENDENT_CODE ON WINDOWS_EXPORT_ALL_SYMBOLS ON ) @@ -166,6 +178,10 @@ endif() target_include_directories(WarpX PUBLIC $ ) +target_include_directories(ablastr PUBLIC + # future: own directory root + $ +) # if we include we will need to call: include(AMReXBuildInfo) @@ -179,6 +195,7 @@ if(WarpX_APP) target_sources(app PRIVATE Source/main.cpp) endif() +#add_subdirectory(Source/ABLASTR) add_subdirectory(Source/BoundaryConditions) add_subdirectory(Source/Diagnostics) add_subdirectory(Source/EmbeddedBoundary) @@ -192,9 +209,9 @@ add_subdirectory(Source/Particles) add_subdirectory(Source/Python) add_subdirectory(Source/Utils) -# C++ properties: at least a C++14 capable compiler is needed +# C++ properties: at least a C++17 capable compiler is needed foreach(warpx_tgt IN LISTS _ALL_TARGETS) - target_compile_features(${warpx_tgt} PUBLIC cxx_std_14) + target_compile_features(${warpx_tgt} PUBLIC cxx_std_17) endforeach() set_target_properties(${_ALL_TARGETS} PROPERTIES CXX_EXTENSIONS OFF @@ -207,44 +224,50 @@ if(WarpX_IPO) endif() # link dependencies -target_link_libraries(WarpX PUBLIC WarpX::thirdparty::AMReX) +target_link_libraries(ablastr PUBLIC WarpX::thirdparty::AMReX) +target_link_libraries(WarpX PUBLIC ablastr) if(WarpX_PSATD) - target_link_libraries(WarpX PUBLIC WarpX::thirdparty::FFT) + target_link_libraries(ablastr PUBLIC WarpX::thirdparty::FFT) if(WarpX_DIMS STREQUAL RZ) - target_link_libraries(WarpX PUBLIC blaspp) - target_link_libraries(WarpX PUBLIC lapackpp) + target_link_libraries(ablastr PUBLIC blaspp) + target_link_libraries(ablastr PUBLIC lapackpp) endif() endif() if(WarpX_OPENPMD) - target_compile_definitions(WarpX PUBLIC WARPX_USE_OPENPMD) - target_link_libraries(WarpX PUBLIC openPMD::openPMD) + target_compile_definitions(ablastr PUBLIC WARPX_USE_OPENPMD) + target_link_libraries(ablastr PUBLIC openPMD::openPMD) endif() if(WarpX_QED) - target_compile_definitions(WarpX PUBLIC WARPX_QED) + target_compile_definitions(ablastr PUBLIC WARPX_QED) if(WarpX_QED_TABLE_GEN) - target_compile_definitions(WarpX PUBLIC WARPX_QED_TABLE_GEN) + target_compile_definitions(ablastr PUBLIC WARPX_QED_TABLE_GEN) endif() - target_link_libraries(WarpX PUBLIC PXRMP_QED::PXRMP_QED) + target_link_libraries(ablastr PUBLIC PXRMP_QED::PXRMP_QED) endif() # AMReX helper function: propagate CUDA specific target & source properties if(WarpX_COMPUTE STREQUAL CUDA) - setup_target_for_cuda_compilation(WarpX) foreach(warpx_tgt IN LISTS _ALL_TARGETS) setup_target_for_cuda_compilation(${warpx_tgt}) endforeach() - if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.17) - foreach(warpx_tgt IN LISTS _ALL_TARGETS) - target_compile_features(${warpx_tgt} PUBLIC cuda_std_14) - endforeach() - set_target_properties(${_ALL_TARGETS} PROPERTIES - CUDA_EXTENSIONS OFF - CUDA_STANDARD_REQUIRED ON - ) - endif() + foreach(warpx_tgt IN LISTS _ALL_TARGETS) + target_compile_features(${warpx_tgt} PUBLIC cuda_std_17) + endforeach() + set_target_properties(${_ALL_TARGETS} PROPERTIES + CUDA_EXTENSIONS OFF + CUDA_STANDARD_REQUIRED ON + ) +endif() + +# avoid building all object files if we are only used as ABLASTR library +if(NOT WarpX_APP AND NOT WarpX_LIB) + set_target_properties(WarpX PROPERTIES + EXCLUDE_FROM_ALL 1 + EXCLUDE_FROM_DEFAULT_BUILD 1 + ) endif() # fancy binary name for build variants @@ -254,43 +277,43 @@ set_warpx_binary_name() # Defines ##################################################################### # get_source_version(WarpX ${CMAKE_CURRENT_SOURCE_DIR}) -target_compile_definitions(WarpX PUBLIC 
WARPX_GIT_VERSION="${WarpX_GIT_VERSION}") +target_compile_definitions(ablastr PUBLIC WARPX_GIT_VERSION="${WarpX_GIT_VERSION}") if(WarpX_QED) - target_compile_definitions(WarpX PUBLIC PICSAR_GIT_VERSION="${PXRMP_QED_GIT_VERSION}") + target_compile_definitions(ablastr PUBLIC PICSAR_GIT_VERSION="${PXRMP_QED_GIT_VERSION}") endif() if(WarpX_DIMS STREQUAL 3) - target_compile_definitions(WarpX PUBLIC WARPX_DIM_3D WARPX_ZINDEX=2) + target_compile_definitions(ablastr PUBLIC WARPX_DIM_3D WARPX_ZINDEX=2) elseif(WarpX_DIMS STREQUAL 2) - target_compile_definitions(WarpX PUBLIC WARPX_DIM_XZ WARPX_ZINDEX=1) + target_compile_definitions(ablastr PUBLIC WARPX_DIM_XZ WARPX_ZINDEX=1) elseif(WarpX_DIMS STREQUAL 1) - target_compile_definitions(WarpX PUBLIC WARPX_DIM_1D_Z WARPX_ZINDEX=0) + target_compile_definitions(ablastr PUBLIC WARPX_DIM_1D_Z WARPX_ZINDEX=0) elseif(WarpX_DIMS STREQUAL RZ) - target_compile_definitions(WarpX PUBLIC WARPX_DIM_RZ WARPX_ZINDEX=1) + target_compile_definitions(ablastr PUBLIC WARPX_DIM_RZ WARPX_ZINDEX=1) endif() if(WarpX_GPUCLOCK) - target_compile_definitions(WarpX PUBLIC WARPX_USE_GPUCLOCK) + target_compile_definitions(ablastr PUBLIC WARPX_USE_GPUCLOCK) endif() if(WarpX_OPENPMD) - target_compile_definitions(WarpX PUBLIC WARPX_USE_OPENPMD) + target_compile_definitions(ablastr PUBLIC WARPX_USE_OPENPMD) endif() if(WarpX_QED) - target_compile_definitions(WarpX PUBLIC WARPX_QED) + target_compile_definitions(ablastr PUBLIC WARPX_QED) if(WarpX_QED_TABLE_GEN) - target_compile_definitions(WarpX PUBLIC WarpX_QED_TABLE_GEN) + target_compile_definitions(ablastr PUBLIC WarpX_QED_TABLE_GEN) endif() endif() if(WarpX_PSATD) - target_compile_definitions(WarpX PUBLIC WARPX_USE_PSATD) + target_compile_definitions(ablastr PUBLIC WARPX_USE_PSATD) endif() # : M_PI if(WIN32) - target_compile_definitions(WarpX PRIVATE _USE_MATH_DEFINES) + target_compile_definitions(ablastr PUBLIC _USE_MATH_DEFINES) endif() @@ -343,7 +366,7 @@ if(WarpX_LIB) else() set(mod_ext "so") endif() - if(IS_ABSOLUTE CMAKE_INSTALL_LIBDIR) + if(IS_ABSOLUTE ${CMAKE_INSTALL_LIBDIR}) set(ABS_INSTALL_LIB_DIR ${CMAKE_INSTALL_LIBDIR}) else() set(ABS_INSTALL_LIB_DIR ${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_LIBDIR}) diff --git a/Docs/README.md b/Docs/README.md index 72f3dfd0d54..fbc98b5bb6d 100644 --- a/Docs/README.md +++ b/Docs/README.md @@ -8,7 +8,7 @@ This explains how to generate the documentation for Warpx, and contribute to it. Install the Python requirements for compiling the documentation: ``` -pip install sphinx sphinx_rtd_theme +python3 -m pip install sphinx sphinx_rtd_theme ``` ### Compiling the documentation diff --git a/Docs/source/dataanalysis/openpmdviewer.rst b/Docs/source/dataanalysis/openpmdviewer.rst index 39b550df7b7..5db1ebe368b 100644 --- a/Docs/source/dataanalysis/openpmdviewer.rst +++ b/Docs/source/dataanalysis/openpmdviewer.rst @@ -21,7 +21,7 @@ openPMD-viewer can be installed via ``conda`` or ``pip``: .. 
code-block:: bash - pip install openPMD-viewer openPMD-api + python3 -m pip install openPMD-viewer openPMD-api Usage ----- diff --git a/Docs/source/dataanalysis/picviewer.rst b/Docs/source/dataanalysis/picviewer.rst index d767ec2ea5a..2772d9aa5e2 100644 --- a/Docs/source/dataanalysis/picviewer.rst +++ b/Docs/source/dataanalysis/picviewer.rst @@ -43,7 +43,7 @@ Required software :: - pip install git+https://github.com/yt-project/yt.git --user + python3 -m pip install git+https://github.com/yt-project/yt.git --user * numba @@ -53,7 +53,7 @@ Installation :: - pip install picviewer + python3 -m pip install picviewer You need to install yt and PySide separately. @@ -61,7 +61,7 @@ You can install from the source for the latest update, :: - pip install git+https://bitbucket.org/ecp_warpx/picviewer/ + python3 -m pip install git+https://bitbucket.org/ecp_warpx/picviewer/ To install manually diff --git a/Docs/source/dataanalysis/yt.rst b/Docs/source/dataanalysis/yt.rst index 285199054d5..a040918bc6a 100644 --- a/Docs/source/dataanalysis/yt.rst +++ b/Docs/source/dataanalysis/yt.rst @@ -19,8 +19,8 @@ From the terminal, install the latest version of yt: .. code-block:: bash - pip install cython - python -m pip install --upgrade yt + python3 -m pip install cython + python3 -m pip install --upgrade yt Alternatively, yt can be installed via their installation script, see `yt installation web page `__. diff --git a/Docs/source/developers/documentation.rst b/Docs/source/developers/documentation.rst index 2277aef9c4a..e8cad8e1b7d 100644 --- a/Docs/source/developers/documentation.rst +++ b/Docs/source/developers/documentation.rst @@ -55,7 +55,7 @@ First, change into ``Docs/`` and install the Python requirements: .. code-block:: sh cd Docs/ - pip install -r requirements.txt + python3 -m pip install -r requirements.txt You will also need Doxygen (macOS: ``brew install doxygen``; Ubuntu: ``sudo apt install doxygen``). diff --git a/Docs/source/developers/testing.rst b/Docs/source/developers/testing.rst index cdbfecfc4ce..23f9521fbfb 100644 --- a/Docs/source/developers/testing.rst +++ b/Docs/source/developers/testing.rst @@ -77,8 +77,9 @@ Add a test to the suite There are three steps to follow to add a new automated test (illustrated here for PML boundary conditions): * An input file for your test, in folder `Example/Tests/...`. For the PML test, the input file is at ``Examples/Tests/PML/inputs_2d``. You can also re-use an existing input file (even better!) and pass specific parameters at runtime (see below). -* A Python script that reads simulation output and tests correctness versus theory or calibrated results. For the PML test, see ``Examples/Tests/PML/analysis_pml_yee.py``. It typically ends with Python statement `assert( error<0.01 )`. -* Add an entry to [Regression/WarpX-tests.ini](./Regression/WarpX-tests.ini), so that a WarpX simulation runs your test in the continuous integration process, and the Python script is executed to assess the correctness. For the PML test, the entry is +* A Python script that reads simulation output and tests correctness versus theory or calibrated results. For the PML test, see ``Examples/Tests/PML/analysis_pml_yee.py``. It typically ends with Python statement ``assert( error<0.01 )``. +* If you need a new Python package dependency for testing, add it in ``Regression/requirements.txt`` +* Add an entry to ``Regression/WarpX-tests.ini``, so that a WarpX simulation runs your test in the continuous integration process, and the Python script is executed to assess the correctness. 
For the PML test, the entry is .. code-block:: @@ -99,6 +100,10 @@ There are three steps to follow to add a new automated test (illustrated here fo If you re-use an existing input file, you can add arguments to ``runtime_params``, like ``runtime_params = amr.max_level=1 amr.n_cell=32 512 max_step=100 plasma_e.zmin=-200.e-6``. +.. note:: + + If you added ``analysisRoutine = Examples/analysis_default_regression.py``, then run the new test case locally and add the :ref:`checksum ` file for the expected output. + Useful tool for plotfile comparison: ``fcompare`` -------------------------------------------------- diff --git a/Docs/source/developers/warning_logger.rst b/Docs/source/developers/warning_logger.rst index a00f848272e..505782fcd11 100644 --- a/Docs/source/developers/warning_logger.rst +++ b/Docs/source/developers/warning_logger.rst @@ -40,7 +40,7 @@ On the contrary, if warning messages are raised, the list should look as follows ******************************************************************************** Here, ``GLOBAL`` indicates that warning messages are gathered across all the MPI ranks (specifically -after the ``FIRSR STEP``). +after the ``FIRST STEP``). Each entry of warning list respects the following format: diff --git a/Docs/source/install/dependencies.rst b/Docs/source/install/dependencies.rst index c6fa204f675..4dff1ec6634 100644 --- a/Docs/source/install/dependencies.rst +++ b/Docs/source/install/dependencies.rst @@ -6,8 +6,8 @@ Dependencies WarpX depends on the following popular third party software. Please see installation instructions below. -- a mature `C++14 `__ compiler: e.g. GCC 5, Clang 3.6 or newer -- `CMake 3.15.0+ `__ +- a mature `C++17 `__ compiler, e.g., GCC 7, Clang 6, NVCC 11.0, MSVC 19.15 or newer +- `CMake 3.18.0+ `__ - `Git 2.18+ `__ - `AMReX `__: we automatically download and compile a copy of AMReX - `PICSAR `__: we automatically download and compile a copy of PICSAR @@ -15,7 +15,7 @@ Please see installation instructions below. Optional dependencies include: - `MPI 3.0+ `__: for multi-node and/or multi-GPU execution -- `CUDA Toolkit 9.0+ `__: for Nvidia GPU support (see `matching host-compilers `_) +- `CUDA Toolkit 11.0+ `__: for Nvidia GPU support (see `matching host-compilers `_) - `OpenMP 3.1+ `__: for threaded CPU execution (currently not fully accelerated) - `FFTW3 `_: for spectral solver (PSATD) support - `BLAS++ `_ and `LAPACK++ `_: for spectral solver (PSATD) support in RZ geometry @@ -25,6 +25,13 @@ Optional dependencies include: - see `optional I/O backends `__ - `CCache `__: to speed up rebuilds (For CUDA support, needs version 3.7.9+ and 4.2+ is recommended) - `Ninja `__: for faster parallel compiles +- `Python 3.6+ `__ + + - `mpi4py `__ + - `numpy `__ + - `periodictable `__ + - `picmistandard `__ + - see our ``requirements.txt`` file for compatible versions Install @@ -67,7 +74,7 @@ If you also want to run runtime tests and added Python (``spack add python`` and .. code-block:: bash - python -m pip install matplotlib==3.2.2 yt scipy numpy openpmd-api + python3 -m pip install matplotlib==3.2.2 yt scipy numpy openpmd-api virtualenv If you want to run the ``./run_test.sh`` :ref:`test script `, which uses our legacy GNUmake build system, you need to set the following environment hints after ``spack env activate warpx-dev`` for dependent software: @@ -118,7 +125,7 @@ Without MPI: .. 
code-block:: bash - conda create -n warpx-dev -c conda-forge blaspp ccache cmake compilers git lapackpp openpmd-api python numpy scipy yt fftw matplotlib mamba ninja + conda create -n warpx-dev -c conda-forge blaspp ccache cmake compilers git lapackpp openpmd-api python numpy scipy yt fftw matplotlib mamba ninja pip virtualenv source activate warpx-dev # compile WarpX with -DWarpX_MPI=OFF @@ -127,7 +134,7 @@ With MPI (only Linux/macOS): .. code-block:: bash - conda create -n warpx-dev -c conda-forge blaspp ccache cmake compilers git lapackpp "openpmd-api=*=mpi_openmpi*" python numpy scipy yt "fftw=*=mpi_openmpi*" matplotlib mamba ninja openmpi + conda create -n warpx-dev -c conda-forge blaspp ccache cmake compilers git lapackpp "openpmd-api=*=mpi_openmpi*" python numpy scipy yt "fftw=*=mpi_openmpi*" matplotlib mamba ninja openmpi pip virtualenv source activate warpx-dev For legacy ``GNUmake`` builds, after each ``source activate warpx-dev``, you also need to set: @@ -145,7 +152,7 @@ Apt (Debian/Ubuntu) .. code-block:: bash sudo apt update - sudo apt install build-essential ccache cmake g++ git libfftw3-mpi-dev libfftw3-dev libhdf5-openmpi-dev libopenmpi-dev pkg-config python3 python3-matplotlib python3-numpy python3-scipy + sudo apt install build-essential ccache cmake g++ git libfftw3-mpi-dev libfftw3-dev libhdf5-openmpi-dev libopenmpi-dev pkg-config python3 python3-matplotlib python3-numpy python3-pip python3-scipy python3-venv # optional: # for CUDA, either install diff --git a/Docs/source/install/hpc/cori.rst b/Docs/source/install/hpc/cori.rst index 5c8d313ea34..cd88c890d00 100644 --- a/Docs/source/install/hpc/cori.rst +++ b/Docs/source/install/hpc/cori.rst @@ -70,13 +70,13 @@ And install ADIOS2, BLAS++ and LAPACK++: # BLAS++ (for PSATD+RZ) git clone https://bitbucket.org/icl/blaspp.git src/blaspp rm -rf src/blaspp-knl-build - cmake -S src/blaspp -B src/blaspp-knl-build -Duse_openmp=ON -Duse_cmake_find_blas=ON -DBLAS_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DCMAKE_INSTALL_PREFIX=$HOME/sw/blaspp-master-knl-install + cmake -S src/blaspp -B src/blaspp-knl-build -Duse_openmp=ON -Duse_cmake_find_blas=ON -DBLAS_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=$HOME/sw/blaspp-master-knl-install cmake --build src/blaspp-knl-build --target install --parallel 16 # LAPACK++ (for PSATD+RZ) git clone https://bitbucket.org/icl/lapackpp.git src/lapackpp rm -rf src/lapackpp-knl-build - CXXFLAGS="-DLAPACK_FORTRAN_ADD_" cmake -S src/lapackpp -B src/lapackpp-knl-build -Duse_cmake_find_lapack=ON -DBLAS_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DLAPACK_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DCMAKE_INSTALL_PREFIX=$HOME/sw/lapackpp-master-knl-install + CXXFLAGS="-DLAPACK_FORTRAN_ADD_" cmake -S src/lapackpp -B src/lapackpp-knl-build -Duse_cmake_find_lapack=ON -DBLAS_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DLAPACK_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=$HOME/sw/lapackpp-master-knl-install cmake --build src/lapackpp-knl-build --target install --parallel 16 For PICMI and Python workflows, also install a virtual environment: @@ -132,13 +132,13 @@ And install ADIOS2, BLAS++ and LAPACK++: # BLAS++ (for PSATD+RZ) git clone https://bitbucket.org/icl/blaspp.git src/blaspp rm -rf src/blaspp-haswell-build - cmake -S src/blaspp -B src/blaspp-haswell-build -Duse_openmp=ON -Duse_cmake_find_blas=ON -DBLAS_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a 
-DCMAKE_INSTALL_PREFIX=$HOME/sw/blaspp-master-haswell-install + cmake -S src/blaspp -B src/blaspp-haswell-build -Duse_openmp=ON -Duse_cmake_find_blas=ON -DBLAS_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=$HOME/sw/blaspp-master-haswell-install cmake --build src/blaspp-haswell-build --target install --parallel 16 # LAPACK++ (for PSATD+RZ) git clone https://bitbucket.org/icl/lapackpp.git src/lapackpp rm -rf src/lapackpp-haswell-build - CXXFLAGS="-DLAPACK_FORTRAN_ADD_" cmake -S src/lapackpp -B src/lapackpp-haswell-build -Duse_cmake_find_lapack=ON -DBLAS_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DLAPACK_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DCMAKE_INSTALL_PREFIX=$HOME/sw/lapackpp-master-haswell-install + CXXFLAGS="-DLAPACK_FORTRAN_ADD_" cmake -S src/lapackpp -B src/lapackpp-haswell-build -Duse_cmake_find_lapack=ON -DBLAS_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DLAPACK_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=$HOME/sw/lapackpp-master-haswell-install cmake --build src/lapackpp-haswell-build --target install --parallel 16 For PICMI and Python workflows, also install a virtual environment: diff --git a/Docs/source/install/hpc/perlmutter.rst b/Docs/source/install/hpc/perlmutter.rst index 0901bf3b36f..e8b532ea863 100644 --- a/Docs/source/install/hpc/perlmutter.rst +++ b/Docs/source/install/hpc/perlmutter.rst @@ -39,16 +39,16 @@ We use the following modules and environments on the system (``$HOME/perlmutter_ export proj= # LBNL/AMP: m3906_g # required dependencies - module load cmake/git-20210830 # 3.22-dev + module load cmake/3.22.0 module swap PrgEnv-nvidia PrgEnv-gnu - module swap gcc gcc/9.3.0 - module load cuda + module load cudatoolkit # optional: just an additional text editor # module load nano # TODO: request from support # optional: for openPMD support module load cray-hdf5-parallel/1.12.0.7 + export CMAKE_PREFIX_PATH=$HOME/sw/perlmutter/c-blosc-1.21.1:$CMAKE_PREFIX_PATH export CMAKE_PREFIX_PATH=$HOME/sw/perlmutter/adios2-2.7.1:$CMAKE_PREFIX_PATH # optional: Python, ... @@ -83,9 +83,17 @@ And since Perlmutter does not yet provide a module for it, install ADIOS2: .. 
code-block:: bash + # c-blosc (I/O compression) + git clone -b v1.21.1 https://github.com/Blosc/c-blosc.git src/c-blosc + rm -rf src/c-blosc-pm-build + cmake -S src/c-blosc -B src/c-blosc-pm-build -DBUILD_TESTS=OFF -DBUILD_BENCHMARKS=OFF -DDEACTIVATE_AVX2=OFF -DCMAKE_INSTALL_PREFIX=$HOME/sw/perlmutter/c-blosc-1.21.1 + cmake --build src/c-blosc-pm-build --target install --parallel 32 + + # ADIOS2 git clone -b v2.7.1 https://github.com/ornladios/ADIOS2.git src/adios2 - cmake -S src/adios2 -B src/adios2-build -DADIOS2_USE_Fortran=OFF -DADIOS2_USE_Python=OFF -DCMAKE_INSTALL_PREFIX=$HOME/sw/perlmutter/adios2-2.7.1 - cmake --build src/adios2-build --target install -j 32 + rm -rf src/adios2-pm-build + cmake -S src/adios2 -B src/adios2-pm-build -DADIOS2_USE_Blosc=ON -DADIOS2_USE_Fortran=OFF -DADIOS2_USE_Python=OFF -DADIOS2_USE_ZeroMQ=OFF -DCMAKE_INSTALL_PREFIX=$HOME/sw/perlmutter/adios2-2.7.1 + cmake --build src/adios2-pm-build --target install -j 32 Then, ``cd`` into the directory ``$HOME/src/warpx`` and use the following commands to compile: diff --git a/Docs/source/install/hpc/quartz.rst b/Docs/source/install/hpc/quartz.rst index 6b50d8ddedc..94aa3a24159 100644 --- a/Docs/source/install/hpc/quartz.rst +++ b/Docs/source/install/hpc/quartz.rst @@ -34,7 +34,7 @@ We use the following modules and environments on the system (``$HOME/warpx.profi # required dependencies module load cmake/3.20.2 - module load intel/19.1.2 + module load intel/2021.4 module load mvapich2/2.3 # optional: for PSATD support diff --git a/Docs/source/install/hpc/summit.rst b/Docs/source/install/hpc/summit.rst index 6bab186baef..13fa48f7c29 100644 --- a/Docs/source/install/hpc/summit.rst +++ b/Docs/source/install/hpc/summit.rst @@ -41,7 +41,7 @@ We use the following modules and environments on the system (``$HOME/warpx.profi # required dependencies module load cmake/3.20.2 module load gcc/9.3.0 - module load cuda/11.0.3 + module load cuda/11.3.1 # optional: faster re-builds module load ccache diff --git a/Docs/source/usage/parameters.rst b/Docs/source/usage/parameters.rst index 74ebad0fe4a..a3b8482edad 100644 --- a/Docs/source/usage/parameters.rst +++ b/Docs/source/usage/parameters.rst @@ -1707,8 +1707,8 @@ Numerics and algorithms value here will make the simulation unphysical, but will allow QED effects to become more apparent. Note that this option will only have an effect if the ``warpx.use_Hybrid_QED`` flag is also triggered. -* ``warpx.do_device_synchronize`` (`int`) optional (default `1`) - When running in an accelerated platform, whether to call a deviceSynchronize around profiling regions. +* ``warpx.do_device_synchronize`` (`bool`) optional (default `1`) + When running in an accelerated platform, whether to call a ``amrex::Gpu::synchronize()`` around profiling regions. This allows the profiler to give meaningful timers, but (hardly) slows down the simulation. 
* ``warpx.sort_intervals`` (`string`) optional (defaults: ``-1`` on CPU; ``4`` on GPU) diff --git a/Examples/Modules/embedded_boundary_python_API/PICMI_inputs_EB_API.py b/Examples/Modules/embedded_boundary_python_API/PICMI_inputs_EB_API.py new file mode 100644 index 00000000000..daabfece554 --- /dev/null +++ b/Examples/Modules/embedded_boundary_python_API/PICMI_inputs_EB_API.py @@ -0,0 +1,252 @@ +from pywarpx import picmi, _libwarpx, fields +import numpy as np + +max_steps = 1 +unit = 1e-3 + +################################## +# Define the mesh +################################## +# mesh cells per direction +nx = 64 +ny = 64 +nz = 64 + +# mesh bounds for domain +xmin = -32*unit +xmax = 32*unit +ymin = -32*unit +ymax = 32*unit +zmin = -32*unit +zmax = 32*unit + +########################## +# numerics components +########################## +lower_boundary_conditions = ['open', 'dirichlet', 'periodic'] +upper_boundary_conditions = ['open', 'dirichlet', 'periodic'] + +grid = picmi.Cartesian3DGrid( + number_of_cells = [nx, ny, nz], + lower_bound = [xmin, ymin, zmin], + upper_bound = [xmax, ymax, zmax], + lower_boundary_conditions = lower_boundary_conditions, + upper_boundary_conditions = upper_boundary_conditions, + lower_boundary_conditions_particles = ['absorbing', 'absorbing', 'periodic'], + upper_boundary_conditions_particles = ['absorbing', 'absorbing', 'periodic'], + moving_window_velocity = None, + warpx_max_grid_size = 64 +) + + +flag_correct_div = False + +solver = picmi.ElectromagneticSolver(grid=grid, method='Yee', cfl=1.) + +n_cavity=30 +L_cavity = n_cavity*unit + +embedded_boundary = picmi.EmbeddedBoundary( + implicit_function="max(max(max(x-L_cavity/2,-L_cavity/2-x),max(y-L_cavity/2,-L_cavity/2-y)),max(z-L_cavity/2,-L_cavity/2-z))", + L_cavity=L_cavity +) + + +########################## +# diagnostics +########################## + +field_diag = picmi.FieldDiagnostic( + name = 'diag1', + grid = grid, + period = 1, + data_list = ['Ex'], + write_dir = '.', + warpx_file_prefix = "embedded_boundary_python_API_plt" +) + +########################## +# simulation setup +########################## + +sim = picmi.Simulation( + solver = solver, + max_steps = max_steps, + warpx_embedded_boundary=embedded_boundary, + verbose = 1 +) + +sim.add_diagnostic(field_diag) + +sim.initialize_inputs() + +sim.step(1) + +print("======== Testing _libwarpx.get_mesh_edge_lengths =========") + +ly_slice_x = np.array(_libwarpx.get_mesh_edge_lengths(0,1,include_ghosts=False)[0])[int(nx/2),:,:] +lz_slice_x = np.array(_libwarpx.get_mesh_edge_lengths(0,2,include_ghosts=False)[0])[int(nx/2),:,:] + +n_edge_y_lo = int((ny - 30)/2) +n_edge_y_hi = int(ny - (ny - 30)/2) +n_edge_z_lo = int((nz - 30)/2) +n_edge_z_hi = int(nz - (nz - 30)/2) + +perimeter_slice_x = (np.sum(ly_slice_x[n_edge_y_lo:n_edge_y_hi, n_edge_z_lo+1]) + + np.sum(ly_slice_x[n_edge_y_lo:n_edge_y_hi, n_edge_z_hi-1]) + + np.sum(lz_slice_x[n_edge_y_lo+1, n_edge_z_lo:n_edge_z_hi]) + + np.sum(lz_slice_x[n_edge_y_hi-1, n_edge_z_lo:n_edge_z_hi])) + +perimeter_slice_x_true = L_cavity*4 + +print("Perimeter of the middle x-slice:", perimeter_slice_x) +assert np.isclose(perimeter_slice_x, perimeter_slice_x_true, rtol=1e-05, atol=1e-08) + + +lx_slice_y = np.array(_libwarpx.get_mesh_edge_lengths(0,0,include_ghosts=False)[0])[:,int(ny/2),:] +lz_slice_y = np.array(_libwarpx.get_mesh_edge_lengths(0,2,include_ghosts=False)[0])[:,int(ny/2),:] + +n_edge_x_lo = int((nx - 30)/2) +n_edge_x_hi = int(nx - (nx - 30)/2) +n_edge_z_lo = int((nz - 30)/2) +n_edge_z_hi = int(nz - (nz 
- 30)/2) + +perimeter_slice_y = (np.sum(lx_slice_y[n_edge_x_lo:n_edge_x_hi, n_edge_z_lo+1]) + + np.sum(lx_slice_y[n_edge_x_lo:n_edge_x_hi, n_edge_z_hi-1]) + + np.sum(lz_slice_y[n_edge_x_lo+1, n_edge_z_lo:n_edge_z_hi]) + + np.sum(lz_slice_y[n_edge_x_hi-1, n_edge_z_lo:n_edge_z_hi])) + +perimeter_slice_y_true = L_cavity*4 + + +print("Perimeter of the middle y-slice:", perimeter_slice_y) +assert np.isclose(perimeter_slice_y, perimeter_slice_y_true, rtol=1e-05, atol=1e-08) + + +lx_slice_z = np.array(_libwarpx.get_mesh_edge_lengths(0,0,include_ghosts=False)[0])[:,:,int(nz/2)] +ly_slice_z = np.array(_libwarpx.get_mesh_edge_lengths(0,1,include_ghosts=False)[0])[:,:,int(nz/2)] + +n_edge_x_lo = int((nx - 30)/2) +n_edge_x_hi = int(nx - (nx - 30)/2) +n_edge_y_lo = int((ny - 30)/2) +n_edge_y_hi = int(ny - (ny - 30)/2) + +perimeter_slice_z = (np.sum(lx_slice_z[n_edge_x_lo:n_edge_x_hi, n_edge_y_lo+1]) + + np.sum(lx_slice_z[n_edge_x_lo:n_edge_x_hi, n_edge_y_hi-1]) + + np.sum(ly_slice_z[n_edge_x_lo+1, n_edge_y_lo:n_edge_y_hi]) + + np.sum(ly_slice_z[n_edge_x_hi-1, n_edge_y_lo:n_edge_y_hi])) + +perimeter_slice_z_true = L_cavity*4 + +print("Perimeter of the middle z-slice:", perimeter_slice_z) +assert np.isclose(perimeter_slice_z, perimeter_slice_z_true, rtol=1e-05, atol=1e-08) + +print("======== Testing _libwarpx.get_mesh_face_areas =========") + +Sx_slice = np.sum(np.array(_libwarpx.get_mesh_face_areas(0,0,include_ghosts=False)[0])[int(nx/2),:,:]) +dx = (xmax-xmin)/nx +Ax = dx*dx +Sx_slice_true = L_cavity*L_cavity - 2*Ax +print("Area of the middle x-slice:", Sx_slice) +assert np.isclose(Sx_slice, Sx_slice_true, rtol=1e-05, atol=1e-08) + + +Sy_slice = np.sum(np.array(_libwarpx.get_mesh_face_areas(0,1,include_ghosts=False)[0])[:,int(ny/2),:]) +dy = (ymax-ymin)/ny +Ay = dy*dy +Sy_slice_true = L_cavity*L_cavity - 2*Ay +print("Area of the middle y-slice:", Sx_slice) +assert np.isclose(Sy_slice, Sy_slice_true, rtol=1e-05, atol=1e-08) + + +Sz_slice = np.sum(np.array(_libwarpx.get_mesh_face_areas(0,2,include_ghosts=False)[0])[:,:,int(nz/2)]) +dz = (zmax-zmin)/nz +Az = dz*dz +Sz_slice_true = L_cavity*L_cavity - 2*Az +print("Area of the middle z-slice:", Sz_slice) +assert np.isclose(Sz_slice, Sz_slice_true, rtol=1e-05, atol=1e-08) + +print("======== Testing the wrappers of m_edge_lengths =========") + +ly_slice_x = np.array(fields.EdgeLengthsyWrapper().get_fabs(0,1,include_ghosts=False)[0])[int(nx/2),:,:] +lz_slice_x = np.array(fields.EdgeLengthszWrapper().get_fabs(0,2,include_ghosts=False)[0])[int(nx/2),:,:] + +n_edge_y_lo = int((ny - 30)/2) +n_edge_y_hi = int(ny - (ny - 30)/2) +n_edge_z_lo = int((nz - 30)/2) +n_edge_z_hi = int(nz - (nz - 30)/2) + +perimeter_slice_x = (np.sum(ly_slice_x[n_edge_y_lo:n_edge_y_hi, n_edge_z_lo+1]) + + np.sum(ly_slice_x[n_edge_y_lo:n_edge_y_hi, n_edge_z_hi-1]) + + np.sum(lz_slice_x[n_edge_y_lo+1, n_edge_z_lo:n_edge_z_hi]) + + np.sum(lz_slice_x[n_edge_y_hi-1, n_edge_z_lo:n_edge_z_hi])) + +perimeter_slice_x_true = L_cavity*4 + +print("Perimeter of the middle x-slice:", perimeter_slice_x) +assert np.isclose(perimeter_slice_x, perimeter_slice_x_true, rtol=1e-05, atol=1e-08) + + +lx_slice_y = np.array(fields.EdgeLengthsxWrapper().get_fabs(0,0,include_ghosts=False)[0])[:,int(ny/2),:] +lz_slice_y = np.array(fields.EdgeLengthszWrapper().get_fabs(0,2,include_ghosts=False)[0])[:,int(ny/2),:] + +n_edge_x_lo = int((nx - 30)/2) +n_edge_x_hi = int(nx - (nx - 30)/2) +n_edge_z_lo = int((nz - 30)/2) +n_edge_z_hi = int(nz - (nz - 30)/2) + +perimeter_slice_y = (np.sum(lx_slice_y[n_edge_x_lo:n_edge_x_hi, 
n_edge_z_lo+1]) + + np.sum(lx_slice_y[n_edge_x_lo:n_edge_x_hi, n_edge_z_hi-1]) + + np.sum(lz_slice_y[n_edge_x_lo+1, n_edge_z_lo:n_edge_z_hi]) + + np.sum(lz_slice_y[n_edge_x_hi-1, n_edge_z_lo:n_edge_z_hi])) + +perimeter_slice_y_true = L_cavity*4 + + +print("Perimeter of the middle y-slice:", perimeter_slice_y) +assert np.isclose(perimeter_slice_y, perimeter_slice_y_true, rtol=1e-05, atol=1e-08) + + +lx_slice_z = np.array(fields.EdgeLengthsxWrapper().get_fabs(0,0,include_ghosts=False)[0])[:,:,int(nz/2)] +ly_slice_z = np.array(fields.EdgeLengthsyWrapper().get_fabs(0,1,include_ghosts=False)[0])[:,:,int(nz/2)] + +n_edge_x_lo = int((nx - 30)/2) +n_edge_x_hi = int(nx - (nx - 30)/2) +n_edge_y_lo = int((ny - 30)/2) +n_edge_y_hi = int(ny - (ny - 30)/2) + +perimeter_slice_z = (np.sum(lx_slice_z[n_edge_x_lo:n_edge_x_hi, n_edge_y_lo+1]) + + np.sum(lx_slice_z[n_edge_x_lo:n_edge_x_hi, n_edge_y_hi-1]) + + np.sum(ly_slice_z[n_edge_x_lo+1, n_edge_y_lo:n_edge_y_hi]) + + np.sum(ly_slice_z[n_edge_x_hi-1, n_edge_y_lo:n_edge_y_hi])) + +perimeter_slice_z_true = L_cavity*4 + +print("Perimeter of the middle z-slice:", perimeter_slice_z) +assert np.isclose(perimeter_slice_z, perimeter_slice_z_true, rtol=1e-05, atol=1e-08) + +print("======== Testing the wrappers of m_face_areas =========") + +Sx_slice = np.sum(np.array(fields.FaceAreasxWrapper().get_fabs(0,0,include_ghosts=False)[0])[int(nx/2),:,:]) +dx = (xmax-xmin)/nx +Ax = dx*dx +Sx_slice_true = L_cavity*L_cavity - 2*Ax +print("Area of the middle x-slice:", Sx_slice) +assert np.isclose(Sx_slice, Sx_slice_true, rtol=1e-05, atol=1e-08) + +Sy_slice = np.sum(np.array(fields.FaceAreasyWrapper().get_fabs(0,1,include_ghosts=False)[0])[:,int(ny/2),:]) +dy = (ymax-ymin)/ny +Ay = dy*dy +Sy_slice_true = L_cavity*L_cavity - 2*Ay +print("Area of the middle y-slice:", Sx_slice) +assert np.isclose(Sy_slice, Sy_slice_true, rtol=1e-05, atol=1e-08) + +Sz_slice = np.sum(np.array(fields.FaceAreaszWrapper().get_fabs(0,2,include_ghosts=False)[0])[:,:,int(nz/2)]) +dz = (zmax-zmin)/nz +Az = dz*dz +Sz_slice_true = L_cavity*L_cavity - 2*Az +print("Area of the middle z-slice:", Sz_slice) +assert np.isclose(Sz_slice, Sz_slice_true, rtol=1e-05, atol=1e-08) + +sim.step(1) + diff --git a/Examples/Modules/embedded_boundary_python_API/analysis.py b/Examples/Modules/embedded_boundary_python_API/analysis.py new file mode 100644 index 00000000000..b6b2955cf0f --- /dev/null +++ b/Examples/Modules/embedded_boundary_python_API/analysis.py @@ -0,0 +1,10 @@ +#! /usr/bin/env python + +# This script just checks that the PICMI file executed successfully. +# If it did there will be a plotfile for the final step. 
+ +import sys + +step = int(sys.argv[1][-5:]) + +assert step == 2 diff --git a/Examples/Physics_applications/laser_acceleration/PICMI_inputs_laser_acceleration.py b/Examples/Physics_applications/laser_acceleration/PICMI_inputs_laser_acceleration.py index 87d02472c93..7e75e14de26 100755 --- a/Examples/Physics_applications/laser_acceleration/PICMI_inputs_laser_acceleration.py +++ b/Examples/Physics_applications/laser_acceleration/PICMI_inputs_laser_acceleration.py @@ -114,7 +114,8 @@ part_diag1 = picmi.ParticleDiagnostic(name = 'diag1', period = 10, - species = [electrons]) + species = [electrons], + data_list = ['ux', 'uy', 'uz', 'weighting']) ########################## # simulation setup diff --git a/Examples/Physics_applications/laser_acceleration/inputs_1d b/Examples/Physics_applications/laser_acceleration/inputs_1d new file mode 100644 index 00000000000..55a55969a14 --- /dev/null +++ b/Examples/Physics_applications/laser_acceleration/inputs_1d @@ -0,0 +1,66 @@ +################################# +####### GENERAL PARAMETERS ###### +################################# +max_step = 1000 +amr.n_cell = 512 +amr.max_grid_size = 64 # maximum size of each AMReX box, used to decompose the domain +amr.blocking_factor = 32 # minimum size of each AMReX box, used to decompose the domain +geometry.coord_sys = 0 # 0: Cartesian +geometry.prob_lo = -56.e-6 # physical domain +geometry.prob_hi = 12.e-6 +amr.max_level = 0 # Maximum level in hierarchy (1 might be unstable, >1 is not supported) + +################################# +####### Boundary condition ###### +################################# +boundary.field_lo = pec +boundary.field_hi = pec + +################################# +############ NUMERICS ########### +################################# +warpx.verbose = 1 +warpx.do_dive_cleaning = 0 +warpx.use_filter = 1 +warpx.cfl = 0.9 # if 1., the time step is set to its CFL limit +warpx.do_moving_window = 1 +warpx.moving_window_dir = z # Only z is supported for the moment +warpx.moving_window_v = 1.0 # units of speed of light + +# Order of particle shape factors +algo.particle_shape = 3 + +################################# +############ PLASMA ############# +################################# +particles.species_names = electrons + +electrons.species_type = electron +electrons.injection_style = "NUniformPerCell" +electrons.num_particles_per_cell_each_dim = 10 +electrons.zmin = 10.e-6 +electrons.profile = constant +electrons.density = 2.e23 # number of electrons per m^3 +electrons.momentum_distribution_type = "at_rest" +electrons.do_continuous_injection = 1 + +################################# +############ LASER ############## +################################# +lasers.names = laser1 +laser1.profile = Gaussian +laser1.position = 0. 0. 9.e-6 # This point is on the laser plane +laser1.direction = 0. 0. 1. # The plane normal direction +laser1.polarization = 0. 1. 0.
# The main polarization vector +laser1.e_max = 16.e12 # Maximum amplitude of the laser field (in V/m) +laser1.profile_waist = 5.e-6 # The waist of the laser (in m) +laser1.profile_duration = 15.e-15 # The duration of the laser (in s) +laser1.profile_t_peak = 30.e-15 # Time at which the laser reaches its peak (in s) +laser1.profile_focal_distance = 100.e-6 # Focal distance from the antenna (in m) +laser1.wavelength = 0.8e-6 # The wavelength of the laser (in m) + +# Diagnostics +diagnostics.diags_names = diag1 +diag1.intervals = 200 +diag1.diag_type = Full +diag1.fields_to_plot = Ex Ey Ez Bx By Bz jx jy jz rho diff --git a/Examples/Physics_applications/plasma_acceleration/PICMI_inputs_plasma_acceleration_1d.py b/Examples/Physics_applications/plasma_acceleration/PICMI_inputs_plasma_acceleration_1d.py new file mode 100644 index 00000000000..33cc5bc05be --- /dev/null +++ b/Examples/Physics_applications/plasma_acceleration/PICMI_inputs_plasma_acceleration_1d.py @@ -0,0 +1,71 @@ +from pywarpx import picmi +#from warp import picmi + +constants = picmi.constants + +nz = 64 + +zmin = -200.e-6 +zmax = +200.e-6 + +moving_window_velocity = [0., 0., constants.c] + +number_per_cell_each_dim = [10] + +max_steps = 1000 + +grid = picmi.Cartesian1DGrid(number_of_cells = [nz], + lower_bound = [zmin], + upper_bound = [zmax], + lower_boundary_conditions = ['dirichlet'], + upper_boundary_conditions = ['dirichlet'], + lower_boundary_conditions_particles = ['absorbing'], + upper_boundary_conditions_particles = ['absorbing'], + moving_window_velocity = moving_window_velocity, + warpx_max_grid_size=32) + +solver = picmi.ElectromagneticSolver(grid=grid, cfl=0.999) + +beam_distribution = picmi.UniformDistribution(density = 1.e23, + lower_bound = [None, None, -150.e-6], + upper_bound = [None, None, -100.e-6], + directed_velocity = [0., 0., 1.e9]) + +plasma_distribution = picmi.UniformDistribution(density = 1.e22, + lower_bound = [None, None, 0.], + upper_bound = [None, None, None], + fill_in = True) + +beam = picmi.Species(particle_type='electron', name='beam', initial_distribution=beam_distribution) +plasma = picmi.Species(particle_type='electron', name='plasma', initial_distribution=plasma_distribution) + +sim = picmi.Simulation(solver = solver, + max_steps = max_steps, + verbose = 1, + warpx_current_deposition_algo = 'esirkepov', + warpx_use_filter = 0) + +sim.add_species(beam, layout=picmi.GriddedLayout(grid=grid, n_macroparticle_per_cell=number_per_cell_each_dim)) +sim.add_species(plasma, layout=picmi.GriddedLayout(grid=grid, n_macroparticle_per_cell=number_per_cell_each_dim)) + +field_diag = picmi.FieldDiagnostic(name = 'diag1', + grid = grid, + period = max_steps, + data_list = ['Ex', 'Ey', 'Ez', 'Jx', 'Jy', 'Jz', 'part_per_cell'], + write_dir = '.', + warpx_file_prefix = 'Python_PlasmaAcceleration1d_plt') + +part_diag = picmi.ParticleDiagnostic(name = 'diag1', + period = max_steps, + species = [beam, plasma], + data_list = ['ux', 'uy', 'uz', 'weighting']) + +sim.add_diagnostic(field_diag) +sim.add_diagnostic(part_diag) + +# write_inputs will create an inputs file that can be used to run +# with the compiled version. +#sim.write_input_file(file_name = 'inputs_from_PICMI') + +# Alternatively, sim.step will run WarpX, controlling it from Python +sim.step() diff --git a/Examples/Tests/embedded_circle/analysis.py b/Examples/Tests/embedded_circle/analysis.py new file mode 100755 index 00000000000..257d36afc8d --- /dev/null +++ b/Examples/Tests/embedded_circle/analysis.py @@ -0,0 +1,15 @@ +#! 
/usr/bin/env python + +import os +import sys +sys.path.insert(1, '../../../../warpx/Regression/Checksum/') +import checksumAPI + +# this will be the name of the plot file +fn = sys.argv[1] + +# Get name of the test +test_name = os.path.split(os.getcwd())[1] + +# Run checksum regression test +checksumAPI.evaluate_checksum(test_name, fn, rtol=1e-2) diff --git a/Examples/Tests/embedded_circle/inputs_2d b/Examples/Tests/embedded_circle/inputs_2d index b1362ad94eb..4750bd00f55 100644 --- a/Examples/Tests/embedded_circle/inputs_2d +++ b/Examples/Tests/embedded_circle/inputs_2d @@ -10,10 +10,11 @@ warpx.eb_implicit_function = -((x-0.00005)**2+(z-0.00005)**2-1e-05**2) warpx.eb_potential(x,y,z,t) = -10 warpx.self_fields_absolute_tolerance = 0.02 -# algo.load_balance_intervals = 5 -# algo.load_balance_efficiency_ratio_threshold = 1.001 -# algo.load_balance_with_sfc = 0 -# algo.load_balance_knapsack_factor = 2 +algo.load_balance_intervals = 5 +algo.load_balance_costs_update = timers +algo.load_balance_efficiency_ratio_threshold = 1.001 +algo.load_balance_with_sfc = 0 +algo.load_balance_knapsack_factor = 2 amr.n_cell = 128 128 amr.max_grid_size = 16 diff --git a/Examples/Tests/multi_J/inputs_2d_pml b/Examples/Tests/multi_J/inputs_2d_pml new file mode 100644 index 00000000000..0b79db00fc4 --- /dev/null +++ b/Examples/Tests/multi_J/inputs_2d_pml @@ -0,0 +1,133 @@ +# Iterations +max_step = 150 + +# Domain decomposition +amr.n_cell = 128 256 +warpx.numprocs = 1 2 + +# Mesh refinement and geometry +amr.max_level = 0 +geometry.coord_sys = 0 +geometry.prob_lo = -100e-6 -220e-6 +geometry.prob_hi = 100e-6 10e-6 + +# Boundary condition +boundary.field_lo = periodic pml +boundary.field_hi = periodic pml + +# Algorithms +algo.current_deposition = direct +algo.field_gathering = energy-conserving +algo.maxwell_solver = psatd +algo.particle_pusher = vay +algo.particle_shape = 3 + +# Numerics +warpx.cfl = 3.19 +warpx.do_nodal = 1 +warpx.use_filter = 1 +warpx.verbose = 1 + +# Boosted frame +warpx.boost_direction = z +warpx.gamma_boost = 2.870114028490 + +# Moving window +warpx.do_moving_window = 1 +warpx.moving_window_dir = z +warpx.moving_window_v = 1. + +# Spectral solver +psatd.do_time_averaging = 0 +psatd.J_linear_in_time = 1 +psatd.update_with_rho = 1 + +# Multi-J scheme +warpx.do_multi_J = 1 +warpx.do_multi_J_n_depositions = 2 +warpx.do_dive_cleaning = 1 +warpx.do_divb_cleaning = 1 + +# Particles +particles.species_names = driver driver_back plasma_e plasma_p +particles.use_fdtd_nci_corr = 0 +particles.rigid_injected_species = driver driver_back + +# Driver (electrons) +driver.species_type = electron +driver.injection_style = "gaussian_beam" +driver.x_rms = 5e-6 +driver.y_rms = 5e-6 +driver.z_rms = 20.1e-6 +driver.x_m = 0. +driver.y_m = 0. +driver.z_m = -80e-6 +driver.npart = 100000 +driver.q_tot = -1e-10 +driver.momentum_distribution_type = "constant" +driver.ux = 0. +driver.uy = 0. +driver.uz = 2e9 +driver.zinject_plane = 2e-3 +driver.rigid_advance = true +driver.initialize_self_fields = 0 +driver.do_symmetrize = 1 + +# Driver (positrons) +driver_back.species_type = positron +driver_back.injection_style = "gaussian_beam" +driver_back.x_rms = 5e-6 +driver_back.y_rms = 5e-6 +driver_back.z_rms = 20.1e-6 +driver_back.x_m = 0. +driver_back.y_m = 0. +driver_back.z_m = -80e-6 +driver_back.npart = 100000 +driver_back.q_tot = 1e-10 +driver_back.momentum_distribution_type = "constant" +driver_back.ux = 0. +driver_back.uy = 0. 
+driver_back.uz = 2e9 +driver_back.zinject_plane = 2e-3 +driver_back.rigid_advance = true +driver_back.initialize_self_fields = 0 +driver_back.do_symmetrize = 1 +driver_back.do_backward_propagation = true + +# Electrons +plasma_e.species_type = electron +plasma_e.injection_style = "NUniformPerCell" +plasma_e.zmin = 0. +plasma_e.zmax = 0.05 +plasma_e.xmin = -90e-6 +plasma_e.xmax = 90e-6 +plasma_e.ymin = -90e-6 +plasma_e.ymax = 90e-6 +plasma_e.profile = constant +plasma_e.density = 1e23 +plasma_e.num_particles_per_cell_each_dim = 1 1 1 +plasma_e.momentum_distribution_type = "at_rest" +plasma_e.do_continuous_injection = 1 + +# Hydrogen +plasma_p.species_type = hydrogen +plasma_p.injection_style = "NUniformPerCell" +plasma_p.zmin = 0. +plasma_p.zmax = 0.05 +plasma_p.xmin = -90e-6 +plasma_p.xmax = 90e-6 +plasma_p.ymin = -90e-6 +plasma_p.ymax = 90e-6 +plasma_p.profile = constant +plasma_p.density = 1e23 +plasma_p.num_particles_per_cell_each_dim = 1 1 1 +plasma_p.momentum_distribution_type = "at_rest" +plasma_p.do_continuous_injection = 1 + +# Diagnostics +diagnostics.diags_names = diag1 +diag1.intervals = 150 +diag1.diag_type = Full +diag1.fields_to_plot = Ex Ey Ez Bx By Bz jx jy jz F G divE rho rho_driver rho_driver_back rho_plasma_e rho_plasma_p +diag1.write_species = 1 +diag1.species = driver plasma_e plasma_p diff --git a/Examples/analysis_default_regression.py b/Examples/analysis_default_regression.py index c920393fb55..1c22fb73898 100755 --- a/Examples/analysis_default_regression.py +++ b/Examples/analysis_default_regression.py @@ -1,5 +1,6 @@ #! /usr/bin/env python +import os import sys import re sys.path.insert(1, '../../../../warpx/Regression/Checksum/') @@ -9,7 +10,7 @@ fn = sys.argv[1] # Get name of the test -test_name = fn[:-9] # Could also be os.path.split(os.getcwd())[1] +test_name = os.path.split(os.getcwd())[1] # Run checksum regression test if re.search( 'single_precision', fn ): diff --git a/Python/pywarpx/_libwarpx.py b/Python/pywarpx/_libwarpx.py index 64ab419ad31..f43f5a9b36e 100755 --- a/Python/pywarpx/_libwarpx.py +++ b/Python/pywarpx/_libwarpx.py @@ -214,6 +214,10 @@ def _array1d_from_pointer(pointer, dtype, size): libwarpx.warpx_getGfieldCPLoVects_PML.restype = _LP_c_int libwarpx.warpx_getGfieldFP_PML.restype = _LP_LP_c_real libwarpx.warpx_getGfieldFPLoVects_PML.restype = _LP_c_int +libwarpx.warpx_getEdgeLengths.restype = _LP_LP_c_real +libwarpx.warpx_getEdgeLengthsLoVects.restype = _LP_c_int +libwarpx.warpx_getFaceAreas.restype = _LP_LP_c_real +libwarpx.warpx_getFaceAreasLoVects.restype = _LP_c_int libwarpx.warpx_getParticleBoundaryBufferSize.restype = ctypes.c_int libwarpx.warpx_getParticleBoundaryBufferStructs.restype = _LP_LP_c_particlereal libwarpx.warpx_getParticleBoundaryBuffer.restype = _LP_LP_c_particlereal @@ -234,6 +238,12 @@ def _array1d_from_pointer(pointer, dtype, size): libwarpx.warpx_getG_nodal_flag.restype = _LP_c_int libwarpx.warpx_getF_pml_nodal_flag.restype = _LP_c_int libwarpx.warpx_getG_pml_nodal_flag.restype = _LP_c_int +libwarpx.warpx_get_edge_lengths_x_nodal_flag.restype = _LP_c_int +libwarpx.warpx_get_edge_lengths_y_nodal_flag.restype = _LP_c_int +libwarpx.warpx_get_edge_lengths_z_nodal_flag.restype = _LP_c_int +libwarpx.warpx_get_face_areas_x_nodal_flag.restype = _LP_c_int +libwarpx.warpx_get_face_areas_y_nodal_flag.restype = _LP_c_int +libwarpx.warpx_get_face_areas_z_nodal_flag.restype = _LP_c_int #libwarpx.warpx_getPMLSigma.restype = _LP_c_real #libwarpx.warpx_getPMLSigmaStar.restype = _LP_c_real @@ -1756,6 +1766,60 @@ def 
get_mesh_G_fp_pml(level, include_ghosts=True): raise Exception('PML not initialized') +def get_mesh_edge_lengths(level, direction, include_ghosts=True): + ''' + + This returns a list of numpy arrays containing the mesh edge lengths + data on each grid for this process. This version returns the edge lengths on + the fine patch for the given level. + + The data for the numpy arrays are not copied, but share the underlying + memory buffer with WarpX. The numpy arrays are fully writeable. + + Parameters + ---------- + + level : the AMR level to get the data for + direction : the component of the data you want + include_ghosts : whether to include ghost zones or not + + Returns + ------- + + A List of numpy arrays. + + ''' + + return _get_mesh_field_list(libwarpx.warpx_getEdgeLengths, level, direction, include_ghosts) + + +def get_mesh_face_areas(level, direction, include_ghosts=True): + ''' + + This returns a list of numpy arrays containing the mesh face areas + data on each grid for this process. This version returns the face areas on + the fine patch for the given level. + + The data for the numpy arrays are not copied, but share the underlying + memory buffer with WarpX. The numpy arrays are fully writeable. + + Parameters + ---------- + + level : the AMR level to get the data for + direction : the component of the data you want + include_ghosts : whether to include ghost zones or not + + Returns + ------- + + A List of numpy arrays. + + ''' + + return _get_mesh_field_list(libwarpx.warpx_getFaceAreas, level, direction, include_ghosts) + + def _get_mesh_array_lovects(level, direction, include_ghosts=True, getlovectsfunc=None): assert(0 <= level and level <= libwarpx.warpx_finestLevel()) @@ -2381,6 +2445,51 @@ def get_mesh_G_fp_lovects_pml(level, include_ghosts=True): except ValueError: raise Exception('PML not initialized') + +def get_mesh_edge_lengths_lovects(level, direction, include_ghosts=True): + ''' + + This returns a list of the lo vectors of the arrays containing the mesh edge lengths + data on each grid for this process. + + Parameters + ---------- + + level : the AMR level to get the data for + direction : the component of the data you want + include_ghosts : whether to include ghost zones or not + + Returns + ------- + + A 2d numpy array of the lo vector for each grid with the shape (dims, number of grids) + + ''' + return _get_mesh_array_lovects(level, direction, include_ghosts, libwarpx.warpx_getEdgeLengthsLoVects) + + +def get_mesh_face_areas_lovects(level, direction, include_ghosts=True): + ''' + + This returns a list of the lo vectors of the arrays containing the mesh face areas + data on each grid for this process. + + Parameters + ---------- + + level : the AMR level to get the data for + direction : the component of the data you want + include_ghosts : whether to include ghost zones or not + + Returns + ------- + + A 2d numpy array of the lo vector for each grid with the shape (dims, number of grids) + + ''' + return _get_mesh_array_lovects(level, direction, include_ghosts, libwarpx.warpx_getFaceAreasLoVects) + + def _get_nodal_flag(getdatafunc): data = getdatafunc() if not data: @@ -2483,6 +2592,42 @@ def get_G_nodal_flag(): ''' return _get_nodal_flag(libwarpx.warpx_getG_nodal_flag) +def get_edge_lengths_x_nodal_flag(): + ''' + This returns a 1d array of the nodal flags for the x edge lengths along each direction. A 1 means node centered, and 0 cell centered.
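+ For example, in 3D the x edge lengths live on x edges, which are cell centered along x and node centered along y and z, so the expected return value is [0, 1, 1].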
+ ''' + return _get_nodal_flag(libwarpx.warpx_get_edge_lengths_x_nodal_flag) + +def get_edge_lengths_y_nodal_flag(): + ''' + This returns a 1d array of the nodal flags for the y edge lengths along each direction. A 1 means node centered, and 0 cell centered. + ''' + return _get_nodal_flag(libwarpx.warpx_get_edge_lengths_y_nodal_flag) + +def get_edge_lengths_z_nodal_flag(): + ''' + This returns a 1d array of the nodal flags for the z edge lengths along each direction. A 1 means node centered, and 0 cell centered. + ''' + return _get_nodal_flag(libwarpx.warpx_get_edge_lengths_z_nodal_flag) + +def get_face_areas_x_nodal_flag(): + ''' + This returns a 1d array of the nodal flags for the x face areas along each direction. A 1 means node centered, and 0 cell centered. + ''' + return _get_nodal_flag(libwarpx.warpx_get_face_areas_x_nodal_flag) + +def get_face_areas_y_nodal_flag(): + ''' + This returns a 1d array of the nodal flags for the y face areas along each direction. A 1 means node centered, and 0 cell centered. + ''' + return _get_nodal_flag(libwarpx.warpx_get_face_areas_y_nodal_flag) + +def get_face_areas_z_nodal_flag(): + ''' + This returns a 1d array of the nodal flags for the z face areas along each direction. A 1 means node centered, and 0 cell centered. + ''' + return _get_nodal_flag(libwarpx.warpx_get_face_areas_z_nodal_flag) + def get_F_pml_nodal_flag(): ''' This returns a 1d array of the nodal flags for F in the PML along each direction. A 1 means node centered, and 0 cell centered. diff --git a/Python/pywarpx/fields.py b/Python/pywarpx/fields.py index 612ea32c0e0..cb63a8af4e0 100644 --- a/Python/pywarpx/fields.py +++ b/Python/pywarpx/fields.py @@ -682,6 +682,48 @@ def GFPWrapper(level=0, include_ghosts=False): get_nodal_flag=_libwarpx.get_G_nodal_flag, level=level, include_ghosts=include_ghosts) +def EdgeLengthsxWrapper(level=0, include_ghosts=False): + return _MultiFABWrapper(direction=0, + get_lovects=_libwarpx.get_mesh_edge_lengths_lovects, + get_fabs=_libwarpx.get_mesh_edge_lengths, + get_nodal_flag=_libwarpx.get_edge_lengths_x_nodal_flag, + level=level, include_ghosts=include_ghosts) + +def EdgeLengthsyWrapper(level=0, include_ghosts=False): + return _MultiFABWrapper(direction=1, + get_lovects=_libwarpx.get_mesh_edge_lengths_lovects, + get_fabs=_libwarpx.get_mesh_edge_lengths, + get_nodal_flag=_libwarpx.get_edge_lengths_y_nodal_flag, + level=level, include_ghosts=include_ghosts) + +def EdgeLengthszWrapper(level=0, include_ghosts=False): + return _MultiFABWrapper(direction=2, + get_lovects=_libwarpx.get_mesh_edge_lengths_lovects, + get_fabs=_libwarpx.get_mesh_edge_lengths, + get_nodal_flag=_libwarpx.get_edge_lengths_z_nodal_flag, + level=level, include_ghosts=include_ghosts) + +def FaceAreasxWrapper(level=0, include_ghosts=False): + return _MultiFABWrapper(direction=0, + get_lovects=_libwarpx.get_mesh_face_areas_lovects, + get_fabs=_libwarpx.get_mesh_face_areas, + get_nodal_flag=_libwarpx.get_face_areas_x_nodal_flag, + level=level, include_ghosts=include_ghosts) + +def FaceAreasyWrapper(level=0, include_ghosts=False): + return _MultiFABWrapper(direction=1, + get_lovects=_libwarpx.get_mesh_face_areas_lovects, + get_fabs=_libwarpx.get_mesh_face_areas, + get_nodal_flag=_libwarpx.get_face_areas_y_nodal_flag, + level=level, include_ghosts=include_ghosts) + +def FaceAreaszWrapper(level=0, include_ghosts=False): + return _MultiFABWrapper(direction=2, + get_lovects=_libwarpx.get_mesh_face_areas_lovects, + get_fabs=_libwarpx.get_mesh_face_areas, + 
get_nodal_flag=_libwarpx.get_face_areas_z_nodal_flag, + level=level, include_ghosts=include_ghosts) + def ExCPPMLWrapper(level=1, include_ghosts=False): assert level>0, Exception('Coarse patch only available on levels > 0') return _MultiFABWrapper(direction=0, diff --git a/Python/pywarpx/picmi.py b/Python/pywarpx/picmi.py index 460b21602d7..2efe6c1d4eb 100644 --- a/Python/pywarpx/picmi.py +++ b/Python/pywarpx/picmi.py @@ -463,6 +463,57 @@ def initialize_inputs(self): pywarpx.amr.max_level = 0 +class Cartesian1DGrid(picmistandard.PICMI_Cartesian1DGrid): + def init(self, kw): + self.max_grid_size = kw.pop('warpx_max_grid_size', 32) + self.max_grid_size_x = kw.pop('warpx_max_grid_size_x', None) + self.blocking_factor = kw.pop('warpx_blocking_factor', None) + self.blocking_factor_x = kw.pop('warpx_blocking_factor_x', None) + + self.potential_xmin = None + self.potential_xmax = None + self.potential_ymin = None + self.potential_ymax = None + self.potential_zmin = kw.pop('warpx_potential_lo_z', None) + self.potential_zmax = kw.pop('warpx_potential_hi_z', None) + + def initialize_inputs(self): + pywarpx.amr.n_cell = self.number_of_cells + + # Maximum allowable size of each subdomain in the problem domain; + # this is used to decompose the domain for parallel calculations. + pywarpx.amr.max_grid_size = self.max_grid_size + pywarpx.amr.max_grid_size_x = self.max_grid_size_x + pywarpx.amr.blocking_factor = self.blocking_factor + pywarpx.amr.blocking_factor_x = self.blocking_factor_x + + # Geometry + pywarpx.geometry.coord_sys = 0 # Cartesian + pywarpx.geometry.prob_lo = self.lower_bound # physical domain + pywarpx.geometry.prob_hi = self.upper_bound + + # Boundary conditions + pywarpx.boundary.field_lo = [BC_map[bc] for bc in [self.bc_xmin]] + pywarpx.boundary.field_hi = [BC_map[bc] for bc in [self.bc_xmax]] + pywarpx.boundary.particle_lo = [self.bc_xmin_particles] + pywarpx.boundary.particle_hi = [self.bc_xmax_particles] + + if self.moving_window_velocity is not None and np.any(np.not_equal(self.moving_window_velocity, 0.)): + pywarpx.warpx.do_moving_window = 1 + if self.moving_window_velocity[2] != 0.: + pywarpx.warpx.moving_window_dir = 'z' + pywarpx.warpx.moving_window_v = self.moving_window_velocity[2]/constants.c # in units of the speed of light + + if self.refined_regions: + assert len(self.refined_regions) == 1, Exception('WarpX only supports one refined region.') + assert self.refined_regions[0][0] == 1, Exception('The one refined region can only be level 1') + pywarpx.amr.max_level = 1 + pywarpx.warpx.fine_tag_lo = self.refined_regions[0][1] + pywarpx.warpx.fine_tag_hi = self.refined_regions[0][2] + # The refinement_factor is ignored (assumed to be [2,2]) + else: + pywarpx.amr.max_level = 0 + class Cartesian2DGrid(picmistandard.PICMI_Cartesian2DGrid): def init(self, kw): self.max_grid_size = kw.pop('warpx_max_grid_size', 32) diff --git a/Python/setup.py b/Python/setup.py index a15585f834d..4811966b412 100644 --- a/Python/setup.py +++ b/Python/setup.py @@ -59,7 +59,7 @@ package_dir = {'pywarpx': 'pywarpx'}, description = """Wrapper of WarpX""", package_data = package_data, - install_requires = ['numpy', 'picmistandard==0.0.16', 'periodictable'], + install_requires = ['numpy', 'picmistandard==0.0.18', 'periodictable'], python_requires = '>=3.6', zip_safe=False ) diff --git a/README.md b/README.md index 3accfaf7060..76a0aaa13ad 100644 --- a/README.md +++ b/README.md @@ -9,7 +9,7 @@ [![Supported 
Platforms](https://img.shields.io/badge/platforms-linux%20|%20osx%20|%20win-blue)](https://warpx.readthedocs.io/en/latest/install/users.html) [![GitHub commits since last release](https://img.shields.io/github/commits-since/ECP-WarpX/WarpX/latest/development.svg)](https://github.com/ECP-WarpX/WarpX/compare/development) [![Exascale Computing Project](https://img.shields.io/badge/supported%20by-ECP-orange)](https://www.exascaleproject.org/research/) -[![Language: C++14](https://img.shields.io/badge/language-C%2B%2B14-orange.svg)](https://isocpp.org/) +[![Language: C++17](https://img.shields.io/badge/language-C%2B%2B17-orange.svg)](https://isocpp.org/) [![Language: Python](https://img.shields.io/badge/language-Python-orange.svg)](https://python.org/) [![License WarpX](https://img.shields.io/badge/license-BSD--3--Clause--LBNL-blue.svg)](https://spdx.org/licenses/BSD-3-Clause-LBNL.html) [![DOI (source)](https://img.shields.io/badge/DOI%20(source)-10.5281/zenodo.4571577-blue.svg)](https://doi.org/10.5281/zenodo.4571577) diff --git a/Regression/Checksum/benchmarks_json/Langmuir_multi_2d_MR_anisotropic.json b/Regression/Checksum/benchmarks_json/Langmuir_multi_2d_MR_anisotropic.json new file mode 100644 index 00000000000..c79b4dbbf77 --- /dev/null +++ b/Regression/Checksum/benchmarks_json/Langmuir_multi_2d_MR_anisotropic.json @@ -0,0 +1,44 @@ +{ + "electrons": { + "particle_cpu": 32768.0, + "particle_id": 1123057664.0, + "particle_momentum_x": 4.2409233886523047e-20, + "particle_momentum_y": 0.0, + "particle_momentum_z": 4.239636641708783e-20, + "particle_position_x": 0.6553604498033957, + "particle_position_y": 0.6553602617123965, + "particle_weight": 3200000000000000.5 + }, + "lev=0": { + "Bx": 0.0, + "By": 29.033843714475076, + "Bz": 0.0, + "Ex": 7575734617759.523, + "Ey": 0.0, + "Ez": 7575399948801.469, + "jx": 7296519320465208.0, + "jy": 0.0, + "jz": 7297090426947096.0 + }, + "lev=1": { + "Bx": 0.0, + "By": 71.46369739075365, + "Bz": 0.0, + "Ex": 4602033610493.496, + "Ey": 0.0, + "Ez": 7017735833493.598, + "jx": 4492590664379721.0, + "jy": 0.0, + "jz": 6825856952745953.0 + }, + "positrons": { + "particle_cpu": 32768.0, + "particle_id": 3371204608.0, + "particle_momentum_x": 4.240515207391037e-20, + "particle_momentum_y": 0.0, + "particle_momentum_z": 4.2396984750798293e-20, + "particle_position_x": 0.6553601899702647, + "particle_position_y": 0.6553597467035968, + "particle_weight": 3200000000000000.5 + } +} \ No newline at end of file diff --git a/Regression/Checksum/benchmarks_json/LaserAcceleration_1d.json b/Regression/Checksum/benchmarks_json/LaserAcceleration_1d.json new file mode 100644 index 00000000000..12f6f0c8189 --- /dev/null +++ b/Regression/Checksum/benchmarks_json/LaserAcceleration_1d.json @@ -0,0 +1,23 @@ +{ + "electrons": { + "particle_cpu": 0.0, + "particle_id": 0.0, + "particle_momentum_x": 0.0, + "particle_momentum_y": 0.0, + "particle_momentum_z": 0.0, + "particle_position_x": 0.0, + "particle_weight": 0.0 + }, + "lev=0": { + "Bx": 178016.7504669478, + "By": 0.0, + "Bz": 0.0, + "Ex": 0.0, + "Ey": 40878227583310.83, + "Ez": 568254685.6950157, + "jx": 0.0, + "jy": 30442928969125.46, + "jz": 1108530282155.6707, + "rho": 3127749.1976868743 + } +} \ No newline at end of file diff --git a/Regression/Checksum/benchmarks_json/PEC_particle.json b/Regression/Checksum/benchmarks_json/PEC_particle.json index eee1a754937..fb6fed1ff05 100644 --- a/Regression/Checksum/benchmarks_json/PEC_particle.json +++ b/Regression/Checksum/benchmarks_json/PEC_particle.json @@ -2,34 +2,34 @@ 
"electron": { "particle_cpu": 0.0, "particle_id": 1.0, - "particle_momentum_x": 4.561563069992995e-31, - "particle_momentum_y": 4.735240262495866e-34, - "particle_momentum_z": 1.1663276912624509e-48, + "particle_momentum_x": 4.561563069992461e-31, + "particle_momentum_y": 4.735240262497721e-34, + "particle_momentum_z": 2.071049171405733e-48, "particle_position_x": 3.199800000000243e-05, - "particle_position_y": 6.591777047777489e-21, - "particle_position_z": 9.312874618739578e-36, + "particle_position_y": 6.5917770477795185e-21, + "particle_position_z": 8.226638814151006e-36, "particle_weight": 1.0 }, "lev=0": { - "Bx": 5.661370479374975e-05, - "By": 1.3668681064411596e-16, - "Bz": 0.00011100031731970013, - "Ex": 26731.847337630617, - "Ey": 29057.333990451483, - "Ez": 16060.200852127742, - "jx": 4.476492463177985e-05, - "jy": 43090052.26648687, + "Bx": 5.6613704793749595e-05, + "By": 1.3914875815016033e-16, + "Bz": 0.00011100031731969847, + "Ex": 26731.84733762923, + "Ey": 29057.3339904507, + "Ez": 16060.200852126594, + "jx": 4.476492463221237e-05, + "jy": 43090052.26648572, "jz": 0.0 }, "proton": { "particle_cpu": 0.0, "particle_id": 2.0, - "particle_momentum_x": 5.254805380844202e-32, + "particle_momentum_x": 5.254805380842948e-32, "particle_momentum_y": 1.002878875615426e-18, - "particle_momentum_z": 1.4307076091364584e-48, + "particle_momentum_z": 4.1182431708325955e-49, "particle_position_x": 3.199799999999955e-05, "particle_position_y": 6.5726706900619935e-06, - "particle_position_z": 2.9858862553558572e-36, + "particle_position_z": 8.144837844277877e-37, "particle_weight": 1.0 } } \ No newline at end of file diff --git a/Regression/Checksum/benchmarks_json/Python_PlasmaAcceleration1d.json b/Regression/Checksum/benchmarks_json/Python_PlasmaAcceleration1d.json new file mode 100644 index 00000000000..c44ab3f8af0 --- /dev/null +++ b/Regression/Checksum/benchmarks_json/Python_PlasmaAcceleration1d.json @@ -0,0 +1,20 @@ +{ + "lev=0": { + "Ex": 0.0, + "Ey": 0.0, + "Ez": 0.0, + "jx": 0.0, + "jy": 0.0, + "jz": 0.0, + "part_per_cell": 640.0 + }, + "plasma": { + "particle_cpu": 0.0, + "particle_id": 0.0, + "particle_momentum_x": 0.0, + "particle_momentum_y": 0.0, + "particle_momentum_z": 0.0, + "particle_position_x": 0.0, + "particle_weight": 0.0 + } +} \ No newline at end of file diff --git a/Regression/Checksum/benchmarks_json/RepellingParticles.json b/Regression/Checksum/benchmarks_json/RepellingParticles.json index f7b382591d3..21b4f41f455 100644 --- a/Regression/Checksum/benchmarks_json/RepellingParticles.json +++ b/Regression/Checksum/benchmarks_json/RepellingParticles.json @@ -4,9 +4,9 @@ "particle_id": 1.0, "particle_momentum_x": 7.291372825198814e-23, "particle_momentum_y": 0.0, - "particle_momentum_z": 1.546448492277708e-36, - "particle_position_x": 1.297993341485761e-05, - "particle_position_y": 1.779311800015497e-19, + "particle_momentum_z": 1.5464484922777076e-36, + "particle_position_x": 1.2979933414857606e-05, + "particle_position_y": 1.7793118000154973e-19, "particle_weight": 5000000000000.0 }, "electron2": { @@ -15,22 +15,22 @@ "particle_momentum_x": 7.291372825198502e-23, "particle_momentum_y": 0.0, "particle_momentum_z": 1.566783697283594e-36, - "particle_position_x": 1.297993341485725e-05, + "particle_position_x": 1.2979933414857252e-05, "particle_position_y": 1.79978846968537e-19, "particle_weight": 5000000000000.0 }, "lev=0": { "Bx": 0.0, - "By": 10242.82951116687, + "By": 10242.829286325377, "Bz": 0.0, - "Ex": 11290112090935.82, + "Ex": 11290112090935.816, "Ey": 0.0, - 
"Ez": 15386357337389.65, + "Ez": 15386357337389.652, "F": 8585.384852879466, - "divE": 1.234698921124232e+18, - "jx": 495277180757311.8, + "divE": 1.2346989211242322e+18, + "jx": 495277180757311.75, "jy": 0.0, - "jz": 10.64185986759511, + "jz": 10.641859867595105, "rho": 6408706.535999998 } } \ No newline at end of file diff --git a/Regression/Checksum/benchmarks_json/embedded_circle.json b/Regression/Checksum/benchmarks_json/embedded_circle.json index 68a9087a7d6..6bb0faa05ff 100644 --- a/Regression/Checksum/benchmarks_json/embedded_circle.json +++ b/Regression/Checksum/benchmarks_json/embedded_circle.json @@ -1,27 +1,27 @@ { "ar_ions": { - "particle_cpu": 31744.0, - "particle_id": 3220043046.0, - "particle_momentum_x": 2.673023392216285e-18, - "particle_momentum_y": 2.6733267061000188e-18, - "particle_momentum_z": 2.667060599749979e-18, - "particle_position_x": 3.1743018682048367, - "particle_position_y": 3.1742959462007088, - "particle_weight": 988093872.0703125 + "particle_cpu": 31743.0, + "particle_id": 3219974926.0, + "particle_momentum_x": 2.673080656628151e-18, + "particle_momentum_y": 2.6734826129917346e-18, + "particle_momentum_z": 2.6677137825404595e-18, + "particle_position_x": 3.174244144020173, + "particle_position_y": 3.1742742523212426, + "particle_weight": 988078308.1054688 }, "electrons": { - "particle_cpu": 30724.0, - "particle_id": 1040144086.0, - "particle_momentum_x": 2.991377867057318e-20, - "particle_momentum_y": 3.014091741533624e-20, - "particle_momentum_z": 3.022811783218703e-20, - "particle_position_x": 3.0722092755241888, - "particle_position_y": 3.072232836690298, - "particle_weight": 956467895.5078125 + "particle_cpu": 30723.0, + "particle_id": 1040042009.0, + "particle_momentum_x": 2.99271246971674e-20, + "particle_momentum_y": 3.014893117483374e-20, + "particle_momentum_z": 3.016015662279529e-20, + "particle_position_x": 3.072306870914145, + "particle_position_y": 3.072501289015288, + "particle_weight": 956421203.6132812 }, "lev=0": { - "phi": 56898.52308944405, - "rho_ar_ions": 257.80642870099507, - "rho_electrons": 250.17704417223325 + "phi": 56898.115832092146, + "rho_ar_ions": 257.8023434408326, + "rho_electrons": 250.15834020610757 } -} +} \ No newline at end of file diff --git a/Regression/Checksum/benchmarks_json/multi_J_2d_psatd_pml.json b/Regression/Checksum/benchmarks_json/multi_J_2d_psatd_pml.json new file mode 100644 index 00000000000..d4d56eea570 --- /dev/null +++ b/Regression/Checksum/benchmarks_json/multi_J_2d_psatd_pml.json @@ -0,0 +1,51 @@ +{ + "driver": { + "particle_cpu": 0.0, + "particle_id": 5000050000.0, + "particle_momentum_x": 1.4499934898508315e-16, + "particle_momentum_y": 0.0, + "particle_momentum_z": 9.822790382300442e-09, + "particle_position_x": 0.40000376757269557, + "particle_position_y": 30.100865423602578, + "particle_weight": 124830181489215.27 + }, + "lev=0": { + "Bx": 0.0, + "By": 921660.3791780534, + "Bz": 0.0, + "Ex": 270876373466331.25, + "Ey": 0.0, + "Ez": 92624942849290.58, + "F": 6183.089569602618, + "G": 0.0, + "divE": 2.519019708946362e+19, + "jx": 1.0172819575614014e+16, + "jy": 0.0, + "jz": 6.385273746740672e+16, + "rho": 220331681.67046365, + "rho_driver": 2562225.119933118, + "rho_driver_back": 0.0, + "rho_plasma_e": 1359499350.2687225, + "rho_plasma_p": 1361225460.3586755 + }, + "plasma_e": { + "particle_cpu": 29647.0, + "particle_id": 1194856401.0, + "particle_momentum_x": 7.187004715206369e-19, + "particle_momentum_y": 0.0, + "particle_momentum_z": 2.346497069529724e-17, + "particle_position_x": 
1.3919451702129872, + "particle_position_y": 10.07681584795493, + "particle_weight": 6.641904330784497e+16 + }, + "plasma_p": { + "particle_cpu": 29696.0, + "particle_id": 1202173888.0, + "particle_momentum_x": 1.4494684060818139e-18, + "particle_momentum_y": 0.0, + "particle_momentum_z": 4.005707216626881e-14, + "particle_position_x": 1.3456480686756227, + "particle_position_y": 10.082154799552143, + "particle_weight": 6.652881944445523e+16 + } +} \ No newline at end of file diff --git a/Regression/Checksum/benchmarks_json/subcyclingMR.json b/Regression/Checksum/benchmarks_json/subcyclingMR.json index f765e39a0f4..1e1b3022632 100644 --- a/Regression/Checksum/benchmarks_json/subcyclingMR.json +++ b/Regression/Checksum/benchmarks_json/subcyclingMR.json @@ -2,10 +2,10 @@ "beam": { "particle_cpu": 0.0, "particle_id": 150005000.0, - "particle_momentum_x": 4.341975879582419e-19, + "particle_momentum_x": 4.341970536840506e-19, "particle_momentum_y": 0.0, - "particle_momentum_z": 4.854125491397747e-17, - "particle_position_x": 0.0006315950294961043, + "particle_momentum_z": 4.854125969816336e-17, + "particle_position_x": 0.000631595029496105, "particle_position_y": 0.08491248490855219, "particle_weight": 62415090744607.65 }, @@ -25,7 +25,7 @@ "Bz": 0.0, "Ex": 388788109636517.25, "Ey": 0.0, - "Ez": 475734042024292.5, + "Ez": 475734042024292.44, "jx": 3.8453833238655616e+17, "jy": 0.0, "jz": 6.358844776737537e+17 @@ -34,29 +34,29 @@ "Bx": 0.0, "By": 1869002.1405830402, "Bz": 0.0, - "Ex": 876808143853004.9, + "Ex": 876808143853005.0, "Ey": 0.0, - "Ez": 918405586302947.4, + "Ez": 918405586302947.2, "jx": 1.536628245883072e+17, "jy": 0.0, "jz": 4.129776951264712e+17 }, "plasma_e": { - "particle_cpu": 0.0, - "particle_id": 1653014286.0, - "particle_momentum_x": 1.102408072879855e-18, + "particle_cpu": 30217.0, + "particle_id": 1048674286.0, + "particle_momentum_x": 1.10240793316972e-18, "particle_momentum_y": 0.0, - "particle_momentum_z": 1.4306919600099368e-18, - "particle_position_x": 0.23607785290428884, + "particle_momentum_z": 1.430691945336632e-18, + "particle_position_x": 0.23607785290428887, "particle_position_y": 0.290297172085528, "particle_weight": 5532897949218750.0 }, "plasma_p": { - "particle_cpu": 0.0, - "particle_id": 1695657024.0, + "particle_cpu": 31872.0, + "particle_id": 1058217024.0, "particle_momentum_x": 1.8564730642732005e-18, "particle_momentum_y": 0.0, - "particle_momentum_z": 2.4167645961331673e-18, + "particle_momentum_z": 2.416764596133168e-18, "particle_position_x": 0.23904668587287342, "particle_position_y": 0.31004658592211964, "particle_weight": 5835937500000001.0 diff --git a/Regression/WarpX-GPU-tests.ini b/Regression/WarpX-GPU-tests.ini index 7697aac5a8a..01639c64f83 100644 --- a/Regression/WarpX-GPU-tests.ini +++ b/Regression/WarpX-GPU-tests.ini @@ -48,7 +48,7 @@ emailBody = Check https://ccse.lbl.gov/pub/GpuRegressionTesting/WarpX/ for more [AMReX] dir = /home/regtester/git/amrex/ -branch = 60fe729fe2ba65ebffc88c0af18743c254d3992c +branch = 9373709e34b23add981551d5446bc4810fd3b688 [source] dir = /home/regtester/git/WarpX diff --git a/Regression/WarpX-tests.ini b/Regression/WarpX-tests.ini index 0d4d1e99736..bb2144a5888 100644 --- a/Regression/WarpX-tests.ini +++ b/Regression/WarpX-tests.ini @@ -48,7 +48,7 @@ emailBody = Check https://ccse.lbl.gov/pub/RegressionTesting/WarpX/ for more det [AMReX] dir = /home/regtester/AMReX_RegTesting/amrex/ -branch = 60fe729fe2ba65ebffc88c0af18743c254d3992c +branch = 9373709e34b23add981551d5446bc4810fd3b688 [source] dir = 
/home/regtester/AMReX_RegTesting/warpx @@ -577,6 +577,25 @@ analysisRoutine = Examples/Tests/Langmuir/analysis_langmuir_multi_2d.py analysisOutputImage = Langmuir_multi_2d_MR.png tolerance = 1.e-14 +[Langmuir_multi_2d_MR_anisotropic] +buildDir = . +inputFile = Examples/Tests/Langmuir/inputs_2d_multi_rt +runtime_params = algo.maxwell_solver = ckc warpx.use_filter = 1 amr.max_level = 1 amr.ref_ratio_vect = 4 2 warpx.fine_tag_lo = -10.e-6 -10.e-6 warpx.fine_tag_hi = 10.e-6 10.e-6 diag1.electrons.variables = w ux uy uz diag1.positrons.variables = w ux uy uz +dim = 2 +addToCompileString = +restartTest = 0 +useMPI = 1 +numprocs = 2 +useOMP = 1 +numthreads = 1 +compileTest = 0 +doVis = 0 +compareParticles = 1 +particleTypes = electrons positrons +analysisRoutine = Examples/Tests/Langmuir/analysis_langmuir_multi_2d.py +analysisOutputImage = Langmuir_multi_2d_MR.png +tolerance = 1.e-14 + [Langmuir_multi_2d_MR_psatd] buildDir = . inputFile = Examples/Tests/Langmuir/inputs_2d_multi_rt @@ -808,9 +827,9 @@ tolerance = 1.e-14 buildDir = . inputFile = Examples/Tests/Langmuir/PICMI_inputs_langmuir_rz_multimode_analyze.py runtime_params = -customRunCmd = python PICMI_inputs_langmuir_rz_multimode_analyze.py +customRunCmd = python3 PICMI_inputs_langmuir_rz_multimode_analyze.py dim = 2 -addToCompileString = USE_PYTHON_MAIN=TRUE USE_RZ=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE USE_RZ=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -827,9 +846,9 @@ tolerance = 1.e-14 buildDir = . inputFile = Examples/Tests/restart/PICMI_inputs_runtime_component_analyze.py runtime_params = -customRunCmd = python PICMI_inputs_runtime_component_analyze.py +customRunCmd = python3 PICMI_inputs_runtime_component_analyze.py dim = 2 -addToCompileString = USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE restartTest = 1 restartFileNum = 5 useMPI = 1 @@ -912,6 +931,24 @@ particleTypes = electrons analysisRoutine = Examples/analysis_default_regression.py tolerance = 1.e-14 +[LaserAcceleration_1d] +buildDir = . +inputFile = Examples/Physics_applications/laser_acceleration/inputs_1d +runtime_params = warpx.do_dynamic_scheduling=0 amr.n_cell=256 max_step=100 electrons.zmin=10.e-6 warpx.serialize_ics=1 +dim = 1 +addToCompileString = +restartTest = 0 +useMPI = 1 +numprocs = 2 +useOMP = 1 +numthreads = 1 +compileTest = 0 +doVis = 0 +compareParticles = 1 +particleTypes = electrons +analysisRoutine = Examples/analysis_default_regression.py +tolerance = 1.e-14 + [LaserAcceleration_single_precision_comms] buildDir = . inputFile = Examples/Physics_applications/laser_acceleration/inputs_3d @@ -1008,9 +1045,9 @@ particle_tolerance = 1.e-12 buildDir = . inputFile = Examples/Tests/Langmuir/PICMI_inputs_langmuir_rt.py runtime_params = -customRunCmd = python PICMI_inputs_langmuir_rt.py +customRunCmd = python3 PICMI_inputs_langmuir_rt.py dim = 3 -addToCompileString = USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 1 @@ -1367,7 +1404,7 @@ tolerance = 1.e-14 buildDir = . inputFile = Examples/Modules/qed/breit_wheeler/inputs_2d aux1File = Examples/Modules/qed/breit_wheeler/analysis_core.py -runtime_params = diag1.format = openpmd +runtime_params = diag1.format = openpmd diag1.openpmd_backend = h5 dim = 2 addToCompileString = QED=TRUE USE_OPENPMD=TRUE restartTest = 0 @@ -1385,7 +1422,7 @@ tolerance = 1.e-14 buildDir = . 
inputFile = Examples/Modules/qed/breit_wheeler/inputs_3d aux1File = Examples/Modules/qed/breit_wheeler/analysis_core.py -runtime_params = diag1.format = openpmd +runtime_params = diag1.format = openpmd diag1.openpmd_backend = h5 dim = 3 addToCompileString = QED=TRUE USE_OPENPMD=TRUE restartTest = 0 @@ -1517,10 +1554,10 @@ tolerance = 1.e-14 [Python_gaussian_beam] buildDir = . inputFile = Examples/Modules/gaussian_beam/PICMI_inputs_gaussian_beam.py -customRunCmd = python PICMI_inputs_gaussian_beam.py +customRunCmd = python3 PICMI_inputs_gaussian_beam.py runtime_params = dim = 3 -addToCompileString = USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -1536,10 +1573,10 @@ tolerance = 1.e-14 [Python_gaussian_beam_opmd] buildDir = . inputFile = Examples/Modules/gaussian_beam/PICMI_inputs_gaussian_beam.py -customRunCmd = python PICMI_inputs_gaussian_beam.py --diagformat=openpmd +customRunCmd = python3 PICMI_inputs_gaussian_beam.py --diagformat=openpmd runtime_params = dim = 3 -addToCompileString = USE_OPENPMD=TRUE USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_OPENPMD=TRUE USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -1572,9 +1609,9 @@ tolerance = 1.e-14 buildDir = . inputFile = Examples/Physics_applications/plasma_acceleration/PICMI_inputs_plasma_acceleration.py runtime_params = -customRunCmd = python PICMI_inputs_plasma_acceleration.py +customRunCmd = python3 PICMI_inputs_plasma_acceleration.py dim = 3 -addToCompileString = USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -1591,9 +1628,28 @@ tolerance = 1.e-14 buildDir = . inputFile = Examples/Physics_applications/plasma_acceleration/PICMI_inputs_plasma_acceleration_mr.py runtime_params = -customRunCmd = python PICMI_inputs_plasma_acceleration_mr.py +customRunCmd = python3 PICMI_inputs_plasma_acceleration_mr.py dim = 3 -addToCompileString = USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE +restartTest = 0 +useMPI = 1 +numprocs = 2 +useOMP = 1 +numthreads = 1 +compileTest = 0 +doVis = 0 +compareParticles = 1 +particleTypes = beam +analysisRoutine = Examples/analysis_default_regression.py +tolerance = 1.e-14 + +[Python_PlasmaAcceleration1d] +buildDir = . +inputFile = Examples/Physics_applications/plasma_acceleration/PICMI_inputs_plasma_acceleration_1d.py +runtime_params = +customRunCmd = python3 PICMI_inputs_plasma_acceleration_1d.py +dim = 1 +addToCompileString = USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -1694,9 +1750,9 @@ tolerance = 1.e-14 buildDir = . inputFile = Examples/Physics_applications/laser_acceleration/PICMI_inputs_laser_acceleration.py runtime_params = -customRunCmd = python PICMI_inputs_laser_acceleration.py +customRunCmd = python3 PICMI_inputs_laser_acceleration.py dim = 3 -addToCompileString = USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -1713,9 +1769,9 @@ tolerance = 1.e-14 buildDir = . 
inputFile = Examples/Tests/Langmuir/PICMI_inputs_langmuir2d.py runtime_params = -customRunCmd = python PICMI_inputs_langmuir2d.py +customRunCmd = python3 PICMI_inputs_langmuir2d.py dim = 2 -addToCompileString = USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -2218,6 +2274,24 @@ particleTypes = driver driver_back plasma_e plasma_p analysisRoutine = Examples/analysis_default_regression.py tolerance = 1e-14 +[multi_J_2d_psatd_pml] +buildDir = . +inputFile = Examples/Tests/multi_J/inputs_2d_pml +runtime_params = +dim = 2 +addToCompileString = USE_PSATD=TRUE +restartTest = 0 +useMPI = 1 +numprocs = 2 +useOMP = 1 +numthreads = 1 +compileTest = 0 +doVis = 0 +compareParticles = 1 +particleTypes = +analysisRoutine = Examples/analysis_default_regression.py +tolerance = 1e-14 + [multi_J_rz_psatd] buildDir = . inputFile = Examples/Tests/multi_J/inputs_rz @@ -2258,9 +2332,9 @@ tolerance = 1.e-12 buildDir = . inputFile = Examples/Tests/ElectrostaticSphereEB/PICMI_inputs_3d.py runtime_params = -customRunCmd = python PICMI_inputs_3d.py +customRunCmd = python3 PICMI_inputs_3d.py dim = 3 -addToCompileString = USE_EB=TRUE USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_EB=TRUE USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -2353,8 +2427,7 @@ compileTest = 0 doVis = 0 compareParticles = 1 particleTypes = electrons ar_ions -analysisRoutine = Examples/analysis_default_regression.py -tolerance = 1.e-12 +analysisRoutine = Examples/Tests/embedded_circle/analysis.py [initial_distribution] buildDir = . @@ -2489,9 +2562,9 @@ analysisRoutine = Examples/Tests/ElectrostaticDirichletBC/analysis.py buildDir = . inputFile = Examples/Tests/ElectrostaticDirichletBC/PICMI_inputs_2d.py runtime_params = -customRunCmd = python PICMI_inputs_2d.py +customRunCmd = python3 PICMI_inputs_2d.py dim = 2 -addToCompileString = USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -2556,9 +2629,9 @@ tolerance = 1.e-14 buildDir = . inputFile = Examples/Tests/pass_mpi_communicator/PICMI_inputs_2d.py runtime_params = -customRunCmd = python PICMI_inputs_2d.py +customRunCmd = python3 PICMI_inputs_2d.py dim = 2 -addToCompileString = USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -2629,9 +2702,9 @@ tolerance = 1.0e-4 buildDir = . inputFile = Examples/Physics_applications/capacitive_discharge/PICMI_inputs_2d.py runtime_params = -customRunCmd = python PICMI_inputs_2d.py +customRunCmd = python3 PICMI_inputs_2d.py dim = 2 -addToCompileString = USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -2682,9 +2755,9 @@ tolerance = 1.0e-4 buildDir = . inputFile = Examples/Modules/ParticleBoundaryScrape/PICMI_inputs_scrape.py runtime_params = -customRunCmd = python PICMI_inputs_scrape.py +customRunCmd = python3 PICMI_inputs_scrape.py dim = 3 -addToCompileString = USE_EB=TRUE USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_EB=TRUE USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -2701,9 +2774,9 @@ tolerance = 1.0e-4 buildDir = . 
inputFile = Examples/Modules/ParticleBoundaryProcess/PICMI_inputs_reflection.py runtime_params = -customRunCmd = python PICMI_inputs_reflection.py +customRunCmd = python3 PICMI_inputs_reflection.py dim = 2 -addToCompileString = USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 1 @@ -2717,9 +2790,9 @@ analysisRoutine = Examples/Modules/ParticleBoundaryProcess/analysis_reflection.py buildDir = . inputFile = Examples/Tests/ParticleDataPython/PICMI_inputs_2d.py runtime_params = -customRunCmd = python PICMI_inputs_2d.py +customRunCmd = python3 PICMI_inputs_2d.py dim = 2 -addToCompileString = USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -2733,9 +2806,9 @@ analysisRoutine = Examples/Tests/ParticleDataPython/analysis.py buildDir = . inputFile = Examples/Tests/ParticleDataPython/PICMI_inputs_2d.py runtime_params = -customRunCmd = python PICMI_inputs_2d.py --unique +customRunCmd = python3 PICMI_inputs_2d.py --unique dim = 2 -addToCompileString = USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -2749,9 +2822,9 @@ analysisRoutine = Examples/Tests/ParticleDataPython/analysis.py buildDir = . inputFile = Examples/Tests/ParticleDataPython/PICMI_inputs_prev_pos_2d.py runtime_params = -customRunCmd = python PICMI_inputs_prev_pos_2d.py +customRunCmd = python3 PICMI_inputs_prev_pos_2d.py dim = 2 -addToCompileString = USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -2875,9 +2948,9 @@ tolerance = 1.0e-4 buildDir = . inputFile = Examples/Tests/PythonWrappers/PICMI_inputs_2d.py runtime_params = -customRunCmd = python PICMI_inputs_2d.py +customRunCmd = python3 PICMI_inputs_2d.py dim = 2 -addToCompileString = USE_PSATD=TRUE USE_PYTHON_MAIN=TRUE PYINSTALLOPTIONS="--user --prefix=" +addToCompileString = USE_PSATD=TRUE USE_PYTHON_MAIN=TRUE restartTest = 0 useMPI = 1 numprocs = 2 @@ -2887,3 +2960,20 @@ compileTest = 0 doVis = 0 compareParticles = 0 analysisRoutine = Examples/analysis_default_regression.py + +[embedded_boundary_python_API] +buildDir = . +inputFile = Examples/Modules/embedded_boundary_python_API/PICMI_inputs_EB_API.py +runtime_params = +customRunCmd = python3 PICMI_inputs_EB_API.py +dim = 3 +addToCompileString = USE_EB=TRUE +restartTest = 0 +useMPI = 1 +numprocs = 1 +useOMP = 1 +numthreads = 1 +compileTest = 0 +doVis = 0 +compareParticles = 0 +analysisRoutine = Examples/Modules/embedded_boundary_python_API/analysis.py \ No newline at end of file diff --git a/Regression/requirements.txt b/Regression/requirements.txt new file mode 100644 index 00000000000..a281ad2b206 --- /dev/null +++ b/Regression/requirements.txt @@ -0,0 +1,6 @@ +matplotlib +mpi4py +numpy +openpmd-api +scipy +yt diff --git a/Source/ABLASTR/DepositCharge.H b/Source/ABLASTR/DepositCharge.H new file mode 100644 index 00000000000..c7a2fc9d880 --- /dev/null +++ b/Source/ABLASTR/DepositCharge.H @@ -0,0 +1,182 @@ +/* Copyright 2019-2021 Axel Huebl, Andrew Myers + * + * This file is part of WarpX.
+ * + * License: BSD-3-Clause-LBNL + */ +#ifndef ABLASTR_DEPOSIT_CHARGE_H_ +#define ABLASTR_DEPOSIT_CHARGE_H_ + +#include "ABLASTR/ProfilerWrapper.H" +#include "Parallelization/KernelTimer.H" +#include "Particles/Pusher/GetAndSetPosition.H" +#include "Particles/ShapeFactors.H" +#include "Particles/Deposition/ChargeDeposition.H" +#ifdef WARPX_DIM_RZ +# include "Utils/WarpX_Complex.H" +#endif + +#include + + +namespace ablastr { + +/** Perform charge deposition for the particles on a tile. + * + * \tparam PC a type of amrex::ParticleContainer + * + * \param pti an amrex::ParIter pointing to the tile to operate on + * \param wp vector of the particle weights for those particles. + * \param ion_lev pointer to array of particle ionization level. This is + required to have the charge of each macroparticle + since q is a scalar. For non-ionizable species, + ion_lev is a null pointer. + * \param rho MultiFab of the charge density + * \param icomp component in MultiFab to start depositing to + * \param nc number of components to deposit + * \param offset index to start at when looping over particles to depose + * \param np_to_depose number of particles to depose + * \param local_rho temporary FArrayBox for deposition with OpenMP + * \param lev the level of the particles we are on + * \param depos_lev the level to deposit the particles to + * \param charge charge of the particle species + * \param nox shape factor in the x direction + * \param noy shape factor in the y direction + * \param noz shape factor in the z direction + * \param ng_rho number of ghost cells to use for rho + * \param dx cell spacing at level lev + * \param xyzmin lo corner of the current tile in physical coordinates. + * \param ref_ratio mesh refinement ratio between lev and depos_lev + * \param cost pointer to (load balancing) cost corresponding to box where present particles deposit current. If nullptr, costs are not updated. + * \param n_rz_azimuthal_modes number of azimuthal modes in use, irrelevant outside RZ geometry. + * \param load_balance_costs_update_algo selected method for updating load balance costs. + * \param do_device_synchronize call amrex::Gpu::synchronize() for tiny profiler regions + */ +template <typename PC> +void DepositCharge (typename PC::ParIterType& pti, + typename PC::RealVector& wp, + const int * const ion_lev, + amrex::MultiFab* rho, const int icomp, const int nc, + const long offset, const long np_to_depose, + amrex::FArrayBox& local_rho, const int lev, const int depos_lev, + const amrex::Real charge, const int nox, const int noy, const int noz, + const amrex::IntVect& ng_rho, const std::array<amrex::Real, 3>& dx, + const std::array<amrex::Real, 3>& xyzmin, + const amrex::IntVect& ref_ratio, + amrex::Real* cost, const int n_rz_azimuthal_modes, + const long load_balance_costs_update_algo, + const bool do_device_synchronize) +{ + AMREX_ALWAYS_ASSERT_WITH_MESSAGE((depos_lev==(lev-1)) || + (depos_lev==(lev )), + "Deposition buffers only work for lev-1"); + + // If no particles, do not do anything + if (np_to_depose == 0) return; + + // Extract deposition order and check that particles shape fits within the guard cells. + // NOTE: In specific situations where the staggering of rho and the charge deposition algorithm + // are not trivial, this check might be too strict and we might need to relax it, as currently + // done for the current deposition.
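+ // For a particle shape factor of order n, the deposition stencil can reach up to n/2+1 cells (integer division) beyond the tile in each direction, e.g. noz = 3 gives a shape_extent of 2 cells along z.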
+ +#if defined(WARPX_DIM_1D_Z) + amrex::ignore_unused(nox); + amrex::ignore_unused(noy); + const amrex::IntVect shape_extent = amrex::IntVect(static_cast<int>(noz/2+1)); +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + amrex::ignore_unused(noy); + const amrex::IntVect shape_extent = amrex::IntVect(static_cast<int>(nox/2+1), + static_cast<int>(noz/2+1)); +#elif defined(WARPX_DIM_3D) + const amrex::IntVect shape_extent = amrex::IntVect(static_cast<int>(nox/2+1), + static_cast<int>(noy/2+1), + static_cast<int>(noz/2+1)); +#endif + + // On CPU: particles deposit on tile arrays, which have a small number of guard cells ng_rho + // On GPU: particles deposit directly on the rho array, which usually have a larger number of guard cells +#ifndef AMREX_USE_GPU + const amrex::IntVect range = ng_rho - shape_extent; +#else + const amrex::IntVect range = rho->nGrowVect() - shape_extent; +#endif + + AMREX_ALWAYS_ASSERT_WITH_MESSAGE( + amrex::numParticlesOutOfRange(pti, range) == 0, + "Particles shape does not fit within tile (CPU) or guard cells (GPU) used for charge deposition"); + + ABLASTR_PROFILE_VAR_NS("WarpXParticleContainer::DepositCharge::ChargeDeposition", blp_ppc_chd, do_device_synchronize); + ABLASTR_PROFILE_VAR_NS("WarpXParticleContainer::DepositCharge::Accumulate", blp_accumulate, do_device_synchronize); + + // Get tile box where charge is deposited. + // The tile box is different when depositing in the buffers (depos_lev<lev) + // than when depositing inside the tile (depos_lev=lev) + amrex::Box tilebox; + if (lev == depos_lev) { + tilebox = pti.tilebox(); + } else { + tilebox = amrex::coarsen(pti.tilebox(), ref_ratio); + } + +#ifndef AMREX_USE_GPU + // Staggered tile box + amrex::Box tb = amrex::convert( tilebox, rho->ixType().toIntVect() ); +#endif + + tilebox.grow(ng_rho); + +#ifdef AMREX_USE_GPU + amrex::ignore_unused(local_rho); + // GPU, no tiling: rho_fab points to the full rho array + amrex::MultiFab rhoi(*rho, amrex::make_alias, icomp*nc, nc); + auto & rho_fab = rhoi.get(pti); +#else + tb.grow(ng_rho); + + // CPU, tiling: rho_fab points to local_rho + local_rho.resize(tb, nc); + + // local_rho is set to zero + local_rho.setVal(0.0); + + auto & rho_fab = local_rho; +#endif + + const auto GetPosition = GetParticlePosition(pti, offset); + + // Indices of the lower bound + const amrex::Dim3 lo = lbound(tilebox); + + ABLASTR_PROFILE_VAR_START(blp_ppc_chd, do_device_synchronize); + + if (nox == 1){ + doChargeDepositionShapeN<1>(GetPosition, wp.dataPtr()+offset, ion_lev, + rho_fab, np_to_depose, dx, xyzmin, lo, charge, + n_rz_azimuthal_modes, cost, + load_balance_costs_update_algo); + } else if (nox == 2){ + doChargeDepositionShapeN<2>(GetPosition, wp.dataPtr()+offset, ion_lev, + rho_fab, np_to_depose, dx, xyzmin, lo, charge, + n_rz_azimuthal_modes, cost, + load_balance_costs_update_algo); + } else if (nox == 3){ + doChargeDepositionShapeN<3>(GetPosition, wp.dataPtr()+offset, ion_lev, + rho_fab, np_to_depose, dx, xyzmin, lo, charge, + n_rz_azimuthal_modes, cost, + load_balance_costs_update_algo); + } + ABLASTR_PROFILE_VAR_STOP(blp_ppc_chd, do_device_synchronize); + +#ifndef AMREX_USE_GPU + // CPU, tiling: atomicAdd local_rho into rho + ABLASTR_PROFILE_VAR_START(blp_accumulate, do_device_synchronize); + (*rho)[pti].atomicAdd(local_rho, tb, tb, 0, icomp*nc, nc); + ABLASTR_PROFILE_VAR_STOP(blp_accumulate, do_device_synchronize); +#endif +} + +} // namespace ablastr + +#endif // ABLASTR_DEPOSIT_CHARGE_H_ + diff --git a/Source/ABLASTR/Make.package b/Source/ABLASTR/Make.package new file mode 100644 index 00000000000..c63fbc6b467 --- /dev/null +++ b/Source/ABLASTR/Make.package @@ -0,0 +1 @@ +VPATH_LOCATIONS += $(WARPX_HOME)/Source/ABLASTR diff --git a/Source/ABLASTR/ProfilerWrapper.H b/Source/ABLASTR/ProfilerWrapper.H new file mode 100644 index 00000000000..c017fcb1d65 --- /dev/null +++ 
b/Source/ABLASTR/ProfilerWrapper.H @@ -0,0 +1,47 @@ +/* Copyright 2020-2021 Axel Huebl, Maxence Thevenet + * + * This file is part of WarpX. + * + * License: BSD-3-Clause-LBNL + */ + +#ifndef ABLASTR_PROFILERWRAPPER_H_ +#define ABLASTR_PROFILERWRAPPER_H_ + +#include <AMReX_BLProfiler.H> +#include <AMReX_GpuDevice.H> + + +namespace ablastr { + + AMREX_FORCE_INLINE + void doDeviceSynchronize(bool const do_device_synchronize = false) { + if (do_device_synchronize) + amrex::Gpu::synchronize(); + } + + // Note that objects are destructed in the reverse order of declaration + struct SynchronizeOnDestruct { + SynchronizeOnDestruct(bool const do_device_synchronize = false) + : m_do_device_synchronize(do_device_synchronize) {} + + AMREX_FORCE_INLINE + ~SynchronizeOnDestruct() { + doDeviceSynchronize(m_do_device_synchronize); + } + + bool m_do_device_synchronize = false; + }; + +} // namespace ablastr + +// `BL_PROFILE_PASTE(SYNC_SCOPE_, __COUNTER__)` and `SYNC_V_##vname` are used to make unique names for +// SynchronizeOnDestruct objects, like `SYNC_SCOPE_0` and `SYNC_V_pmain` +#define ABLASTR_PROFILE(fname, sync) ablastr::doDeviceSynchronize(sync); BL_PROFILE(fname); ablastr::SynchronizeOnDestruct BL_PROFILE_PASTE(SYNC_SCOPE_, __COUNTER__){sync} +#define ABLASTR_PROFILE_VAR(fname, vname, sync) ablastr::doDeviceSynchronize(sync); BL_PROFILE_VAR(fname, vname); ablastr::SynchronizeOnDestruct SYNC_V_##vname{sync} +#define ABLASTR_PROFILE_VAR_NS(fname, vname, sync) BL_PROFILE_VAR_NS(fname, vname); ablastr::SynchronizeOnDestruct SYNC_V_##vname{sync} +#define ABLASTR_PROFILE_VAR_START(vname, sync) ablastr::doDeviceSynchronize(sync); BL_PROFILE_VAR_START(vname) +#define ABLASTR_PROFILE_VAR_STOP(vname, sync) ablastr::doDeviceSynchronize(sync); BL_PROFILE_VAR_STOP(vname) +#define ABLASTR_PROFILE_REGION(rname, sync) ablastr::doDeviceSynchronize(sync); BL_PROFILE_REGION(rname); ablastr::SynchronizeOnDestruct BL_PROFILE_PASTE(SYNC_R_, __COUNTER__){sync} + +#endif // ABLASTR_PROFILERWRAPPER_H_ diff --git a/Source/Diagnostics/WarpXOpenPMD.cpp b/Source/Diagnostics/WarpXOpenPMD.cpp index f7de8778e45..acd3e831d64 100644 --- a/Source/Diagnostics/WarpXOpenPMD.cpp +++ b/Source/Diagnostics/WarpXOpenPMD.cpp @@ -751,8 +751,7 @@ WarpXOpenPMDPlot::SetupRealProperties (openPMD::ParticleSpecies& currSpecies, // auto const getComponentRecord = [&currSpecies](std::string const comp_name) { // handle scalar and non-scalar records by name - std::string record_name, component_name; - std::tie(record_name, component_name) = detail::name2openPMD(comp_name); + const auto [record_name, component_name] = detail::name2openPMD(comp_name); return currSpecies[record_name][component_name]; }; auto const real_counter = std::min(write_real_comp.size(), real_comp_names.size()); @@ -773,13 +772,11 @@ WarpXOpenPMDPlot::SetupRealProperties (openPMD::ParticleSpecies& currSpecies, auto ii = m_NumAoSRealAttributes + idx; // jump over AoS names if (write_real_comp[ii]) { // handle scalar and non-scalar records by name - std::string record_name, component_name; - std::tie(record_name, component_name) = detail::name2openPMD(real_comp_names[ii]); + const auto [record_name, component_name] = detail::name2openPMD(real_comp_names[ii]); auto currRecord = currSpecies[record_name]; // meta data for ED-PIC extension - bool newRecord = false; - std::tie(std::ignore, newRecord) = addedRecords.insert(record_name); + [[maybe_unused]] const auto [_, newRecord] = addedRecords.insert(record_name); if( newRecord ) { currRecord.setUnitDimension( detail::getUnitDimension(record_name) ); if( record_name == 
"weighting" ) @@ -797,13 +794,11 @@ WarpXOpenPMDPlot::SetupRealProperties (openPMD::ParticleSpecies& currSpecies, auto ii = m_NumAoSIntAttributes + idx; // jump over AoS names if (write_int_comp[ii]) { // handle scalar and non-scalar records by name - std::string record_name, component_name; - std::tie(record_name, component_name) = detail::name2openPMD(int_comp_names[ii]); + const auto [record_name, component_name] = detail::name2openPMD(int_comp_names[ii]); auto currRecord = currSpecies[record_name]; // meta data for ED-PIC extension - bool newRecord = false; - std::tie(std::ignore, newRecord) = addedRecords.insert(record_name); + [[maybe_unused]] const auto [_, newRecord] = addedRecords.insert(record_name); if( newRecord ) { currRecord.setUnitDimension( detail::getUnitDimension(record_name) ); currRecord.setAttribute( "macroWeighted", 0u ); @@ -843,8 +838,7 @@ WarpXOpenPMDPlot::SaveRealProperty (ParticleIter& pti, for( auto idx=0; idxixType().toIntVect() ); + amrex::Box box = mfi.tilebox(m_edge_lengths[maxLevel()][idim]->ixType().toIntVect(), + m_edge_lengths[maxLevel()][idim]->nGrowVect()); amrex::FabType fab_type = flags[mfi].getType(box); - box.grow(m_edge_lengths[maxLevel()][idim]->nGrowVect()); auto const &edge_lengths_dim = m_edge_lengths[maxLevel()][idim]->array(mfi); if (fab_type == amrex::FabType::regular) { @@ -210,9 +210,9 @@ WarpX::ComputeFaceAreas () { #else amrex::Abort("ComputeFaceAreas: Only implemented in 2D3V and 3D3V"); #endif - amrex::Box box = mfi.tilebox(m_face_areas[maxLevel()][idim]->ixType().toIntVect()); + amrex::Box box = mfi.tilebox(m_face_areas[maxLevel()][idim]->ixType().toIntVect(), + m_face_areas[maxLevel()][idim]->nGrowVect()); amrex::FabType fab_type = flags[mfi].getType(box); - box.grow(m_face_areas[maxLevel()][idim]->nGrowVect()); auto const &face_areas_dim = m_face_areas[maxLevel()][idim]->array(mfi); if (fab_type == amrex::FabType::regular) { // every cell in box is all regular diff --git a/Source/Evolve/WarpXComputeDt.cpp b/Source/Evolve/WarpXComputeDt.cpp index 08d12dde6d9..94edf5b9867 100644 --- a/Source/Evolve/WarpXComputeDt.cpp +++ b/Source/Evolve/WarpXComputeDt.cpp @@ -38,9 +38,9 @@ WarpX::ComputeDt () if (maxwell_solver_id == MaxwellSolverAlgo::PSATD) { // Computation of dt for spectral algorithm // (determined by the minimum cell size in all directions) -#if (AMREX_SPACEDIM == 1) +#if defined(WARPX_DIM_1D_Z) deltat = cfl * dx[0] / PhysConst::c; -#elif (AMREX_SPACEDIM == 2) +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) deltat = cfl * std::min(dx[0], dx[1]) / PhysConst::c; #else deltat = cfl * std::min(dx[0], std::min(dx[1], dx[2])) / PhysConst::c; @@ -86,12 +86,12 @@ WarpX::PrintDtDxDyDz () for (int lev=0; lev <= max_level; lev++) { const amrex::Real* dx_lev = geom[lev].CellSize(); amrex::Print() << "Level " << lev << ": dt = " << dt[lev] -#if (defined WARPX_DIM_1D_Z) +#if defined(WARPX_DIM_1D_Z) << " ; dz = " << dx_lev[0] << '\n'; -#elif (defined WARPX_DIM_XZ) || (defined WARPX_DIM_RZ) +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) << " ; dx = " << dx_lev[0] << " ; dz = " << dx_lev[1] << '\n'; -#elif (defined WARPX_DIM_3D) +#elif defined(WARPX_DIM_3D) << " ; dx = " << dx_lev[0] << " ; dy = " << dx_lev[1] << " ; dz = " << dx_lev[2] << '\n'; diff --git a/Source/Evolve/WarpXEvolve.cpp b/Source/Evolve/WarpXEvolve.cpp index b3b2462b1e0..1a693269661 100644 --- a/Source/Evolve/WarpXEvolve.cpp +++ b/Source/Evolve/WarpXEvolve.cpp @@ -10,6 +10,7 @@ */ #include "WarpX.H" +#include "BoundaryConditions/PML.H" #include 
"Diagnostics/BackTransformedDiagnostic.H" #include "Diagnostics/MultiDiagnostics.H" #include "Diagnostics/ReducedDiags/MultiReducedDiags.H" @@ -610,23 +611,50 @@ WarpX::OneStep_multiJ (const amrex::Real cur_time) } } - // Transform fields back to real space and exchange guard cells + // Transform fields back to real space if (WarpX::fft_do_time_averaging) { // We summed the integral of the field over 2*dt PSATDScaleAverageFields(1._rt / (2._rt*dt[0])); PSATDBackwardTransformEBavg(); } + + // Evolve fields in PML + for (int lev = 0; lev <= finest_level; ++lev) + { + if (do_pml && pml[lev]->ok()) + { + pml[lev]->PushPSATD(lev); + } + ApplyEfieldBoundary(lev, PatchType::fine); + if (lev > 0) ApplyEfieldBoundary(lev, PatchType::coarse); + ApplyBfieldBoundary(lev, PatchType::fine, DtType::FirstHalf); + if (lev > 0) ApplyBfieldBoundary(lev, PatchType::coarse, DtType::FirstHalf); + } + + // Damp fields in PML before exchanging guard cells + if (do_pml) + { + DampPML(); + } + + // Exchange guard cells FillBoundaryE(guard_cells.ng_alloc_EB); FillBoundaryB(guard_cells.ng_alloc_EB); - if (WarpX::do_dive_cleaning) FillBoundaryF(guard_cells.ng_alloc_F); - if (WarpX::do_divb_cleaning) FillBoundaryG(guard_cells.ng_alloc_G); + if (WarpX::do_dive_cleaning || WarpX::do_pml_dive_cleaning) FillBoundaryF(guard_cells.ng_alloc_F); + if (WarpX::do_divb_cleaning || WarpX::do_pml_divb_cleaning) FillBoundaryG(guard_cells.ng_alloc_G); // Synchronize E, B, F, G fields on nodal points NodalSync(Efield_fp, Efield_cp); NodalSync(Bfield_fp, Bfield_cp); if (WarpX::do_dive_cleaning) NodalSync(F_fp, F_cp); if (WarpX::do_divb_cleaning) NodalSync(G_fp, G_cp); + + // Synchronize fields on nodal points in PML + if (do_pml) + { + NodalSyncPML(); + } } else { diff --git a/Source/FieldSolver/ElectrostaticSolver.cpp b/Source/FieldSolver/ElectrostaticSolver.cpp index 26393d0750b..5dd18b7d802 100644 --- a/Source/FieldSolver/ElectrostaticSolver.cpp +++ b/Source/FieldSolver/ElectrostaticSolver.cpp @@ -402,16 +402,16 @@ WarpX::computePhiCartesian (const amrex::Vector // get the potential at the current time amrex::Array phi_bc_values_lo; amrex::Array phi_bc_values_hi; + phi_bc_values_lo[WARPX_ZINDEX] = field_boundary_handler.potential_zlo(gett_new(0)); + phi_bc_values_hi[WARPX_ZINDEX] = field_boundary_handler.potential_zhi(gett_new(0)); +#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + phi_bc_values_lo[0] = field_boundary_handler.potential_xlo(gett_new(0)); + phi_bc_values_hi[0] = field_boundary_handler.potential_xhi(gett_new(0)); +#elif defined(WARPX_DIM_3D) phi_bc_values_lo[0] = field_boundary_handler.potential_xlo(gett_new(0)); phi_bc_values_hi[0] = field_boundary_handler.potential_xhi(gett_new(0)); -#if (AMREX_SPACEDIM==2) - phi_bc_values_lo[1] = field_boundary_handler.potential_zlo(gett_new(0)); - phi_bc_values_hi[1] = field_boundary_handler.potential_zhi(gett_new(0)); -#elif (AMREX_SPACEDIM==3) phi_bc_values_lo[1] = field_boundary_handler.potential_ylo(gett_new(0)); phi_bc_values_hi[1] = field_boundary_handler.potential_yhi(gett_new(0)); - phi_bc_values_lo[2] = field_boundary_handler.potential_zlo(gett_new(0)); - phi_bc_values_hi[2] = field_boundary_handler.potential_zhi(gett_new(0)); #endif setPhiBC(phi, phi_bc_values_lo, phi_bc_values_hi); @@ -422,9 +422,9 @@ WarpX::computePhiCartesian (const amrex::Vector // Set the value of beta amrex::Array beta_solver = -# if (AMREX_SPACEDIM==1) +# if defined(WARPX_DIM_1D_Z) {{ beta[2] }}; // beta_x and beta_z -# elif (AMREX_SPACEDIM==2) +# elif defined(WARPX_DIM_XZ) || 
defined(WARPX_DIM_RZ) {{ beta[0], beta[2] }}; // beta_x and beta_z # else {{ beta[0], beta[1], beta[2] }}; @@ -487,19 +487,19 @@ WarpX::computePhiCartesian (const amrex::Vector<std::unique_ptr<amrex::MultiFab> > if (do_electrostatic == ElectrostaticSolverAlgo::LabFrame) { for (int lev = 0; lev <= max_level; ++lev) { -#if (AMREX_SPACEDIM==1) +#if defined(WARPX_DIM_1D_Z) mlmg.getGradSolution( {amrex::Array<amrex::MultiFab*,AMREX_SPACEDIM>{ get_pointer_Efield_fp(lev, 2) }} ); -#elif (AMREX_SPACEDIM==2) +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) mlmg.getGradSolution( {amrex::Array<amrex::MultiFab*,AMREX_SPACEDIM>{ get_pointer_Efield_fp(lev, 0),get_pointer_Efield_fp(lev, 2) }} ); -#elif (AMREX_SPACEDIM==3) +#elif defined(WARPX_DIM_3D) mlmg.getGradSolution( {amrex::Array<amrex::MultiFab*,AMREX_SPACEDIM>{ get_pointer_Efield_fp(lev, 0),get_pointer_Efield_fp(lev, 1), @@ -608,11 +608,11 @@ WarpX::computeE (amrex::Vector<std::array<std::unique_ptr<amrex::MultiFab>, 3> > #endif for ( MFIter mfi(*phi[lev], TilingIfNotGPU()); mfi.isValid(); ++mfi ) { -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const Real inv_dx = 1._rt/dx[0]; const Real inv_dy = 1._rt/dx[1]; const Real inv_dz = 1._rt/dx[2]; -#elif (AMREX_SPACEDIM == 2) +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) const Real inv_dx = 1._rt/dx[0]; const Real inv_dz = 1._rt/dx[1]; #else @@ -621,7 +621,7 @@ WarpX::computeE (amrex::Vector<std::array<std::unique_ptr<amrex::MultiFab>, 3> > #if (AMREX_SPACEDIM >= 2) const Box& tbx = mfi.tilebox( E[lev][0]->ixType().toIntVect() ); #endif -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const Box& tby = mfi.tilebox( E[lev][1]->ixType().toIntVect() ); #endif const Box& tbz = mfi.tilebox( E[lev][2]->ixType().toIntVect() ); @@ -630,7 +630,7 @@ WarpX::computeE (amrex::Vector<std::array<std::unique_ptr<amrex::MultiFab>, 3> > #if (AMREX_SPACEDIM >= 2) const auto& Ex_arr = (*E[lev][0])[mfi].array(); #endif -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const auto& Ey_arr = (*E[lev][1])[mfi].array(); #endif const auto& Ez_arr = (*E[lev][2])[mfi].array(); @@ -641,7 +641,7 @@ WarpX::computeE (amrex::Vector<std::array<std::unique_ptr<amrex::MultiFab>, 3> > // Calculate the electric field // Use discretized derivative that matches the staggering of the grid. -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) amrex::ParallelFor( tbx, tby, tbz, [=] AMREX_GPU_DEVICE (int i, int j, int k) { Ex_arr(i,j,k) += @@ -668,7 +668,7 @@ WarpX::computeE (amrex::Vector<std::array<std::unique_ptr<amrex::MultiFab>, 3> > +(beta_y*beta_z-1)*inv_dz*( phi_arr(i,j,k+1)-phi_arr(i,j,k) ); } ); -#elif (AMREX_SPACEDIM == 2) +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) amrex::ParallelFor( tbx, tbz, [=] AMREX_GPU_DEVICE (int i, int j, int k) { Ex_arr(i,j,k) += @@ -725,11 +725,11 @@ WarpX::computeB (amrex::Vector<std::array<std::unique_ptr<amrex::MultiFab>, 3> > #endif for ( MFIter mfi(*phi[lev], TilingIfNotGPU()); mfi.isValid(); ++mfi ) { -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const Real inv_dx = 1._rt/dx[0]; const Real inv_dy = 1._rt/dx[1]; const Real inv_dz = 1._rt/dx[2]; -#elif (AMREX_SPACEDIM == 2) +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) const Real inv_dx = 1._rt/dx[0]; const Real inv_dz = 1._rt/dx[1]; #else @@ -752,7 +752,7 @@ WarpX::computeB (amrex::Vector<std::array<std::unique_ptr<amrex::MultiFab>, 3> > // Calculate the magnetic field // Use discretized derivative that matches the staggering of the grid. 
-#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) amrex::ParallelFor( tbx, tby, tbz, [=] AMREX_GPU_DEVICE (int i, int j, int k) { Bx_arr(i,j,k) += inv_c * ( @@ -776,7 +776,7 @@ WarpX::computeB (amrex::Vector, 3> > + phi_arr(i+1,j+1,k)-phi_arr(i,j+1,k))); } ); -#elif (AMREX_SPACEDIM == 2) +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) amrex::ParallelFor( tbx, tby, tbz, [=] AMREX_GPU_DEVICE (int i, int j, int k) { Bx_arr(i,j,k) += inv_c * ( diff --git a/Source/FieldSolver/FiniteDifferenceSolver/EvolveEPML.cpp b/Source/FieldSolver/FiniteDifferenceSolver/EvolveEPML.cpp index b182a836be4..b8c43190f6d 100644 --- a/Source/FieldSolver/FiniteDifferenceSolver/EvolveEPML.cpp +++ b/Source/FieldSolver/FiniteDifferenceSolver/EvolveEPML.cpp @@ -192,7 +192,7 @@ void FiniteDifferenceSolver::EvolveEPMLCartesian ( const Real* sigmaj_y = sigba[mfi].sigma[1].data(); const Real* sigmaj_z = sigba[mfi].sigma[2].data(); int const x_lo = sigba[mfi].sigma[0].lo(); -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) int const y_lo = sigba[mfi].sigma[1].lo(); int const z_lo = sigba[mfi].sigma[2].lo(); #else diff --git a/Source/FieldSolver/FiniteDifferenceSolver/MacroscopicProperties/MacroscopicProperties.cpp b/Source/FieldSolver/FiniteDifferenceSolver/MacroscopicProperties/MacroscopicProperties.cpp index 3cd60739218..8076bdf8ff7 100644 --- a/Source/FieldSolver/FiniteDifferenceSolver/MacroscopicProperties/MacroscopicProperties.cpp +++ b/Source/FieldSolver/FiniteDifferenceSolver/MacroscopicProperties/MacroscopicProperties.cpp @@ -177,7 +177,7 @@ MacroscopicProperties::InitData () Ez_IndexType[idim] = Ez_stag[idim]; macro_cr_ratio[idim] = 1; } -#if (AMREX_SPACEDIM==2) +#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) sigma_IndexType[2] = 0; epsilon_IndexType[2] = 0; mu_IndexType[2] = 0; @@ -208,7 +208,7 @@ MacroscopicProperties::InitializeMacroMultiFabUsingParser ( // Shift x, y, z position based on index type amrex::Real fac_x = (1._rt - iv[0]) * dx_lev[0] * 0.5_rt; amrex::Real x = i * dx_lev[0] + real_box.lo(0) + fac_x; -#if (AMREX_SPACEDIM==2) +#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) amrex::Real y = 0._rt; amrex::Real fac_z = (1._rt - iv[1]) * dx_lev[1] * 0.5_rt; amrex::Real z = j * dx_lev[1] + real_box.lo(1) + fac_z; diff --git a/Source/FieldSolver/SpectralSolver/CMakeLists.txt b/Source/FieldSolver/SpectralSolver/CMakeLists.txt index ab66d5d7eb4..39e8e86a4f6 100644 --- a/Source/FieldSolver/SpectralSolver/CMakeLists.txt +++ b/Source/FieldSolver/SpectralSolver/CMakeLists.txt @@ -6,11 +6,11 @@ target_sources(WarpX ) if(WarpX_COMPUTE STREQUAL CUDA) - target_sources(WarpX PRIVATE WrapCuFFT.cpp) + target_sources(ablastr PRIVATE WrapCuFFT.cpp) elseif(WarpX_COMPUTE STREQUAL HIP) - target_sources(WarpX PRIVATE WrapRocFFT.cpp) + target_sources(ablastr PRIVATE WrapRocFFT.cpp) else() - target_sources(WarpX PRIVATE WrapFFTW.cpp) + target_sources(ablastr PRIVATE WrapFFTW.cpp) endif() if(WarpX_DIMS STREQUAL RZ) diff --git a/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/ComovingPsatdAlgorithm.H b/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/ComovingPsatdAlgorithm.H index cc9e7e4fc84..b6d46095ee1 100644 --- a/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/ComovingPsatdAlgorithm.H +++ b/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/ComovingPsatdAlgorithm.H @@ -89,7 +89,7 @@ class ComovingPsatdAlgorithm : public SpectralBaseAlgorithm // k vectors KVectorComponent kx_vec; -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) KVectorComponent ky_vec; #endif KVectorComponent kz_vec; diff 
--git a/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/ComovingPsatdAlgorithm.cpp b/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/ComovingPsatdAlgorithm.cpp index ba9613d3b35..0e295362406 100644 --- a/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/ComovingPsatdAlgorithm.cpp +++ b/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/ComovingPsatdAlgorithm.cpp @@ -35,7 +35,7 @@ ComovingPsatdAlgorithm::ComovingPsatdAlgorithm (const SpectralKSpace& spectral_k // Initialize the infinite-order k vectors (the argument n_order = -1 selects // the infinite order option, the argument nodal = false is then irrelevant) kx_vec(spectral_kspace.getModifiedKComponent(dm, 0, -1, false)), -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) ky_vec(spectral_kspace.getModifiedKComponent(dm, 1, -1, false)), kz_vec(spectral_kspace.getModifiedKComponent(dm, 2, -1, false)), #else @@ -86,7 +86,7 @@ ComovingPsatdAlgorithm::pushSpectralFields (SpectralFieldData& f) const // Extract pointers for the k vectors const amrex::Real* modified_kx_arr = modified_kx_vec[mfi].dataPtr(); -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) const amrex::Real* modified_ky_arr = modified_ky_vec[mfi].dataPtr(); #endif const amrex::Real* modified_kz_arr = modified_kz_vec[mfi].dataPtr(); @@ -111,7 +111,7 @@ ComovingPsatdAlgorithm::pushSpectralFields (SpectralFieldData& f) const // k vector values const amrex::Real kx_mod = modified_kx_arr[i]; -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) const amrex::Real ky_mod = modified_ky_arr[j]; const amrex::Real kz_mod = modified_kz_arr[k]; #else @@ -169,7 +169,7 @@ void ComovingPsatdAlgorithm::InitializeSpectralCoefficients (const SpectralKSpac // Extract pointers for the k vectors const amrex::Real* kx_mod = modified_kx_vec[mfi].dataPtr(); const amrex::Real* kx = kx_vec[mfi].dataPtr(); -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) const amrex::Real* ky_mod = modified_ky_vec[mfi].dataPtr(); const amrex::Real* ky = ky_vec[mfi].dataPtr(); #endif @@ -187,7 +187,7 @@ void ComovingPsatdAlgorithm::InitializeSpectralCoefficients (const SpectralKSpac // Store comoving velocity const amrex::Real vx = m_v_comoving[0]; -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) const amrex::Real vy = m_v_comoving[1]; #endif const amrex::Real vz = m_v_comoving[2]; @@ -198,7 +198,7 @@ void ComovingPsatdAlgorithm::InitializeSpectralCoefficients (const SpectralKSpac // Calculate norm of finite-order k vector const amrex::Real knorm_mod = std::sqrt( std::pow(kx_mod[i], 2) + -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) std::pow(ky_mod[j], 2) + std::pow(kz_mod[k], 2)); #else @@ -207,7 +207,7 @@ void ComovingPsatdAlgorithm::InitializeSpectralCoefficients (const SpectralKSpac // Calculate norm of infinite-order k vector const amrex::Real knorm = std::sqrt( std::pow(kx[i], 2) + -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) std::pow(ky[j], 2) + std::pow(kz[k], 2)); #else @@ -224,7 +224,7 @@ void ComovingPsatdAlgorithm::InitializeSpectralCoefficients (const SpectralKSpac // Calculate dot product of k vector with comoving velocity const amrex::Real kv = kx[i]*vx + -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) ky[j]*vy + kz[k]*vz; #else kz[j]*vz; @@ -441,7 +441,7 @@ ComovingPsatdAlgorithm::CurrentCorrection (const int lev, // Extract pointers for the k vectors const amrex::Real* const modified_kx_arr = modified_kx_vec[mfi].dataPtr(); const amrex::Real* const kx_arr = kx_vec[mfi].dataPtr(); -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) const amrex::Real* const 
modified_ky_arr = modified_ky_vec[mfi].dataPtr(); const amrex::Real* const ky_arr = ky_vec[mfi].dataPtr(); #endif @@ -469,7 +469,7 @@ ComovingPsatdAlgorithm::CurrentCorrection (const int lev, // k vector values, and coefficients const amrex::Real kx_mod = modified_kx_arr[i]; const amrex::Real kx = kx_arr[i]; -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) const amrex::Real ky_mod = modified_ky_arr[j]; const amrex::Real kz_mod = modified_kz_arr[k]; const amrex::Real ky = ky_arr[j]; diff --git a/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/PMLPsatdAlgorithm.cpp b/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/PMLPsatdAlgorithm.cpp index 14ffc2d91b8..bfe02a23810 100644 --- a/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/PMLPsatdAlgorithm.cpp +++ b/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/PMLPsatdAlgorithm.cpp @@ -78,7 +78,7 @@ PMLPsatdAlgorithm::pushSpectralFields(SpectralFieldData& f) const { // Extract pointers for the k vectors const Real* modified_kx_arr = modified_kx_vec[mfi].dataPtr(); -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) const Real* modified_ky_arr = modified_ky_vec[mfi].dataPtr(); #endif const Real* modified_kz_arr = modified_kz_vec[mfi].dataPtr(); @@ -155,7 +155,7 @@ PMLPsatdAlgorithm::pushSpectralFields(SpectralFieldData& f) const { // k vector values, and coefficients const Real kx = modified_kx_arr[i]; -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) const Real ky = modified_ky_arr[j]; const Real kz = modified_kz_arr[k]; #else @@ -362,7 +362,7 @@ void PMLPsatdAlgorithm::InitializeSpectralCoefficients ( // Extract pointers for the k vectors const Real* modified_kx = modified_kx_vec[mfi].dataPtr(); -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) const Real* modified_ky = modified_ky_vec[mfi].dataPtr(); #endif const Real* modified_kz = modified_kz_vec[mfi].dataPtr(); @@ -376,7 +376,7 @@ void PMLPsatdAlgorithm::InitializeSpectralCoefficients ( ParallelFor(bx, [=] AMREX_GPU_DEVICE(int i, int j, int k) noexcept { const Real kx = modified_kx[i]; -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) const Real ky = modified_ky[j]; const Real kz = modified_kz[k]; #else diff --git a/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/PsatdAlgorithm.H b/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/PsatdAlgorithm.H index d0dce0fef62..954c313ee4a 100644 --- a/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/PsatdAlgorithm.H +++ b/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/PsatdAlgorithm.H @@ -156,7 +156,7 @@ class PsatdAlgorithm : public SpectralBaseAlgorithm // Centered modified finite-order k vectors KVectorComponent modified_kx_vec_centered; -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) KVectorComponent modified_ky_vec_centered; #endif KVectorComponent modified_kz_vec_centered; diff --git a/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/PsatdAlgorithm.cpp b/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/PsatdAlgorithm.cpp index 1f3c67629ce..fe9562dbace 100644 --- a/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/PsatdAlgorithm.cpp +++ b/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/PsatdAlgorithm.cpp @@ -49,7 +49,7 @@ PsatdAlgorithm::PsatdAlgorithm( // these are computed always with the assumption of centered grids // (argument nodal = true), for both nodal and staggered simulations modified_kx_vec_centered(spectral_kspace.getModifiedKComponent(dm, 0, norder_x, true)), -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) 
modified_ky_vec_centered(spectral_kspace.getModifiedKComponent(dm, 1, norder_y, true)), modified_kz_vec_centered(spectral_kspace.getModifiedKComponent(dm, 2, norder_z, true)), #else @@ -183,7 +183,7 @@ PsatdAlgorithm::pushSpectralFields (SpectralFieldData& f) const // Extract pointers for the k vectors const amrex::Real* modified_kx_arr = modified_kx_vec[mfi].dataPtr(); -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const amrex::Real* modified_ky_arr = modified_ky_vec[mfi].dataPtr(); #endif const amrex::Real* modified_kz_arr = modified_kz_vec[mfi].dataPtr(); @@ -220,7 +220,7 @@ PsatdAlgorithm::pushSpectralFields (SpectralFieldData& f) const // k vector values const amrex::Real kx = modified_kx_arr[i]; -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const amrex::Real ky = modified_ky_arr[j]; const amrex::Real kz = modified_kz_arr[k]; #else @@ -447,7 +447,7 @@ void PsatdAlgorithm::InitializeSpectralCoefficients ( // Extract pointers for the k vectors const amrex::Real* kx_s = modified_kx_vec[mfi].dataPtr(); const amrex::Real* kx_c = modified_kx_vec_centered[mfi].dataPtr(); -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const amrex::Real* ky_s = modified_ky_vec[mfi].dataPtr(); const amrex::Real* ky_c = modified_ky_vec_centered[mfi].dataPtr(); #endif @@ -471,7 +471,7 @@ void PsatdAlgorithm::InitializeSpectralCoefficients ( // Extract Galilean velocity amrex::Real vg_x = m_v_galilean[0]; -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) amrex::Real vg_y = m_v_galilean[1]; #endif amrex::Real vg_z = m_v_galilean[2]; @@ -482,7 +482,7 @@ void PsatdAlgorithm::InitializeSpectralCoefficients ( // Calculate norm of k vector const amrex::Real knorm_s = std::sqrt( std::pow(kx_s[i], 2) + -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) std::pow(ky_s[j], 2) + std::pow(kz_s[k], 2)); #else std::pow(kz_s[j], 2)); @@ -501,7 +501,7 @@ void PsatdAlgorithm::InitializeSpectralCoefficients ( // modified k vectors, to work correctly for both nodal and staggered simulations. // w_c = 0 always with standard PSATD (zero Galilean velocity). const amrex::Real w_c = kx_c[i]*vg_x + -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) ky_c[j]*vg_y + kz_c[k]*vg_z; #else kz_c[j]*vg_z; @@ -646,7 +646,7 @@ void PsatdAlgorithm::InitializeSpectralCoefficientsAveraging ( // Extract pointers for the k vectors const amrex::Real* kx_s = modified_kx_vec[mfi].dataPtr(); const amrex::Real* kx_c = modified_kx_vec_centered[mfi].dataPtr(); -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const amrex::Real* ky_s = modified_ky_vec[mfi].dataPtr(); const amrex::Real* ky_c = modified_ky_vec_centered[mfi].dataPtr(); #endif @@ -663,7 +663,7 @@ void PsatdAlgorithm::InitializeSpectralCoefficientsAveraging ( // Extract Galilean velocity amrex::Real vg_x = m_v_galilean[0]; -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) amrex::Real vg_y = m_v_galilean[1]; #endif amrex::Real vg_z = m_v_galilean[2]; @@ -674,7 +674,7 @@ void PsatdAlgorithm::InitializeSpectralCoefficientsAveraging ( // Calculate norm of k vector const amrex::Real knorm_s = std::sqrt( std::pow(kx_s[i], 2) + -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) std::pow(ky_s[j], 2) + std::pow(kz_s[k], 2)); #else std::pow(kz_s[j], 2)); @@ -692,7 +692,7 @@ void PsatdAlgorithm::InitializeSpectralCoefficientsAveraging ( // modified k vectors, to work correctly for both nodal and staggered simulations. // w_c = 0 always with standard PSATD (zero Galilean velocity). 
const amrex::Real w_c = kx_c[i]*vg_x + -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) ky_c[j]*vg_y + kz_c[k]*vg_z; #else kz_c[j]*vg_z; @@ -836,7 +836,7 @@ void PsatdAlgorithm::InitializeSpectralCoefficientsAvgLin ( // Extract pointers for the k vectors const Real* kx_s = modified_kx_vec[mfi].dataPtr(); -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) const Real* ky_s = modified_ky_vec[mfi].dataPtr(); #endif const Real* kz_s = modified_kz_vec[mfi].dataPtr(); @@ -853,7 +853,7 @@ void PsatdAlgorithm::InitializeSpectralCoefficientsAvgLin ( // Calculate norm of k vector const Real knorm_s = std::sqrt( std::pow(kx_s[i], 2) + -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) std::pow(ky_s[j], 2) + std::pow(kz_s[k], 2)); #else std::pow(kz_s[j], 2)); @@ -924,7 +924,7 @@ PsatdAlgorithm::CurrentCorrection ( // Extract pointers for the k vectors const amrex::Real* const modified_kx_arr = modified_kx_vec[mfi].dataPtr(); const amrex::Real* const modified_kx_arr_c = modified_kx_vec_centered[mfi].dataPtr(); -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const amrex::Real* const modified_ky_arr = modified_ky_vec[mfi].dataPtr(); const amrex::Real* const modified_ky_arr_c = modified_ky_vec_centered[mfi].dataPtr(); #endif @@ -952,7 +952,7 @@ PsatdAlgorithm::CurrentCorrection ( // k vector values, and coefficients const amrex::Real kx = modified_kx_arr[i]; const amrex::Real kx_c = modified_kx_arr_c[i]; -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const amrex::Real ky = modified_ky_arr[j]; const amrex::Real kz = modified_kz_arr[k]; const amrex::Real ky_c = modified_ky_arr_c[j]; @@ -1040,7 +1040,7 @@ PsatdAlgorithm::VayDeposition ( // Extract pointers for the modified k vectors const amrex::Real* const modified_kx_arr = modified_kx_vec[mfi].dataPtr(); -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const amrex::Real* const modified_ky_arr = modified_ky_vec[mfi].dataPtr(); #endif const amrex::Real* const modified_kz_arr = modified_kz_vec[mfi].dataPtr(); @@ -1050,7 +1050,7 @@ PsatdAlgorithm::VayDeposition ( { // Shortcuts for the values of D const Complex Dx = fields(i,j,k,Idx.Jx); -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const Complex Dy = fields(i,j,k,Idx.Jy); #endif const Complex Dz = fields(i,j,k,Idx.Jz); @@ -1060,7 +1060,7 @@ PsatdAlgorithm::VayDeposition ( // Modified k vector values const amrex::Real kx_mod = modified_kx_arr[i]; -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const amrex::Real ky_mod = modified_ky_arr[j]; const amrex::Real kz_mod = modified_kz_arr[k]; #else @@ -1071,7 +1071,7 @@ PsatdAlgorithm::VayDeposition ( if (kx_mod != 0._rt) fields(i,j,k,Idx.Jx) = I * Dx / kx_mod; else fields(i,j,k,Idx.Jx) = 0._rt; -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) // Compute Jy if (ky_mod != 0._rt) fields(i,j,k,Idx.Jy) = I * Dy / ky_mod; else fields(i,j,k,Idx.Jy) = 0._rt; diff --git a/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/SpectralBaseAlgorithm.H b/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/SpectralBaseAlgorithm.H index 4a35871cd8f..ad9f9925e8e 100644 --- a/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/SpectralBaseAlgorithm.H +++ b/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/SpectralBaseAlgorithm.H @@ -102,7 +102,7 @@ class SpectralBaseAlgorithm // Modified finite-order vectors KVectorComponent modified_kx_vec; -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) KVectorComponent modified_ky_vec; #endif KVectorComponent modified_kz_vec; diff --git 
a/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/SpectralBaseAlgorithm.cpp b/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/SpectralBaseAlgorithm.cpp index 92e3fcf9484..4ecbb9000fa 100644 --- a/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/SpectralBaseAlgorithm.cpp +++ b/Source/FieldSolver/SpectralSolver/SpectralAlgorithms/SpectralBaseAlgorithm.cpp @@ -37,14 +37,14 @@ SpectralBaseAlgorithm::SpectralBaseAlgorithm(const SpectralKSpace& spectral_kspa m_spectral_index(spectral_index), // Compute and assign the modified k vectors modified_kx_vec(spectral_kspace.getModifiedKComponent(dm,0,norder_x,nodal)), -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) modified_ky_vec(spectral_kspace.getModifiedKComponent(dm,1,norder_y,nodal)), modified_kz_vec(spectral_kspace.getModifiedKComponent(dm,2,norder_z,nodal)) #else modified_kz_vec(spectral_kspace.getModifiedKComponent(dm,1,norder_z,nodal)) #endif { -#if (AMREX_SPACEDIM!=3) +#if !defined(WARPX_DIM_3D) amrex::ignore_unused(norder_y); #endif } @@ -77,7 +77,7 @@ SpectralBaseAlgorithm::ComputeSpectralDivE ( Array4 fields = field_data.fields[mfi].array(); // Extract pointers for the k vectors const Real* modified_kx_arr = modified_kx_vec[mfi].dataPtr(); -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) const Real* modified_ky_arr = modified_ky_vec[mfi].dataPtr(); #endif const Real* modified_kz_arr = modified_kz_vec[mfi].dataPtr(); @@ -92,7 +92,7 @@ SpectralBaseAlgorithm::ComputeSpectralDivE ( const Complex Ez = fields(i,j,k,Idx.Ez); // k vector values const Real kx = modified_kx_arr[i]; -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) const Real ky = modified_ky_arr[j]; const Real kz = modified_kz_arr[k]; #else diff --git a/Source/FieldSolver/SpectralSolver/SpectralFieldData.H b/Source/FieldSolver/SpectralSolver/SpectralFieldData.H index aac4035a1a3..16f8e179c36 100644 --- a/Source/FieldSolver/SpectralSolver/SpectralFieldData.H +++ b/Source/FieldSolver/SpectralSolver/SpectralFieldData.H @@ -142,7 +142,7 @@ class SpectralFieldData // a cell-centered grid in real space, instead of a nodal grid SpectralShiftFactor xshift_FFTfromCell, xshift_FFTtoCell, zshift_FFTfromCell, zshift_FFTtoCell; -#if (AMREX_SPACEDIM==3) +#if defined(WARPX_DIM_3D) SpectralShiftFactor yshift_FFTfromCell, yshift_FFTtoCell; #endif diff --git a/Source/FieldSolver/SpectralSolver/SpectralFieldData.cpp b/Source/FieldSolver/SpectralSolver/SpectralFieldData.cpp index f16fc3aa2e7..9d8563b6bc8 100644 --- a/Source/FieldSolver/SpectralSolver/SpectralFieldData.cpp +++ b/Source/FieldSolver/SpectralSolver/SpectralFieldData.cpp @@ -123,7 +123,7 @@ SpectralFieldData::SpectralFieldData( const int lev, ShiftType::TransformFromCellCentered); xshift_FFTtoCell = k_space.getSpectralShiftFactor(dm, 0, ShiftType::TransformToCellCentered); -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) yshift_FFTfromCell = k_space.getSpectralShiftFactor(dm, 1, ShiftType::TransformFromCellCentered); yshift_FFTtoCell = k_space.getSpectralShiftFactor(dm, 1, @@ -200,12 +200,12 @@ SpectralFieldData::ForwardTransform (const int lev, #if (AMREX_SPACEDIM >= 2) const bool is_nodal_x = (stag[0] == amrex::IndexType::NODE) ? true : false; #endif -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const bool is_nodal_y = (stag[1] == amrex::IndexType::NODE) ? true : false; const bool is_nodal_z = (stag[2] == amrex::IndexType::NODE) ? true : false; -#elif (AMREX_SPACEDIM == 2) +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) const bool is_nodal_z = (stag[1] == amrex::IndexType::NODE) ? 
true : false; -#elif (AMREX_SPACEDIM == 1) +#elif defined(WARPX_DIM_1D_Z) const bool is_nodal_z = (stag[0] == amrex::IndexType::NODE) ? true : false; #endif @@ -254,7 +254,7 @@ SpectralFieldData::ForwardTransform (const int lev, #if (AMREX_SPACEDIM >= 2) const Complex* xshift_arr = xshift_FFTfromCell[mfi].dataPtr(); #endif -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const Complex* yshift_arr = yshift_FFTfromCell[mfi].dataPtr(); #endif const Complex* zshift_arr = zshift_FFTfromCell[mfi].dataPtr(); @@ -268,12 +268,12 @@ SpectralFieldData::ForwardTransform (const int lev, #if (AMREX_SPACEDIM >= 2) if (is_nodal_x==false) spectral_field_value *= xshift_arr[i]; #endif -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) if (is_nodal_y==false) spectral_field_value *= yshift_arr[j]; if (is_nodal_z==false) spectral_field_value *= zshift_arr[k]; -#elif (AMREX_SPACEDIM == 2) +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) if (is_nodal_z==false) spectral_field_value *= zshift_arr[j]; -#elif (AMREX_SPACEDIM == 1) +#elif defined(WARPX_DIM_1D_Z) if (is_nodal_z==false) spectral_field_value *= zshift_arr[i]; #endif // Copy field into the right index @@ -306,26 +306,26 @@ SpectralFieldData::BackwardTransform (const int lev, #if (AMREX_SPACEDIM >= 2) const bool is_nodal_x = mf.is_nodal(0); #endif -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const bool is_nodal_y = mf.is_nodal(1); const bool is_nodal_z = mf.is_nodal(2); -#elif (AMREX_SPACEDIM == 2) +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) const bool is_nodal_z = mf.is_nodal(1); -#elif (AMREX_SPACEDIM == 1) +#elif defined(WARPX_DIM_1D_Z) const bool is_nodal_z = mf.is_nodal(0); #endif #if (AMREX_SPACEDIM >= 2) const int si = (is_nodal_x) ? 1 : 0; #endif -#if (AMREX_SPACEDIM == 1) +#if defined(WARPX_DIM_1D_Z) const int si = (is_nodal_z) ? 1 : 0; const int sj = 0; const int sk = 0; -#elif (AMREX_SPACEDIM == 2) +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) const int sj = (is_nodal_z) ? 1 : 0; const int sk = 0; -#elif (AMREX_SPACEDIM == 3) +#elif defined(WARPX_DIM_3D) const int sj = (is_nodal_y) ? 1 : 0; const int sk = (is_nodal_z) ? 
1 : 0; #endif @@ -353,7 +353,7 @@ SpectralFieldData::BackwardTransform (const int lev, #if (AMREX_SPACEDIM >= 2) const Complex* xshift_arr = xshift_FFTtoCell[mfi].dataPtr(); #endif -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const Complex* yshift_arr = yshift_FFTtoCell[mfi].dataPtr(); #endif const Complex* zshift_arr = zshift_FFTtoCell[mfi].dataPtr(); @@ -367,12 +367,12 @@ SpectralFieldData::BackwardTransform (const int lev, #if (AMREX_SPACEDIM >= 2) if (is_nodal_x==false) spectral_field_value *= xshift_arr[i]; #endif -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) if (is_nodal_y==false) spectral_field_value *= yshift_arr[j]; if (is_nodal_z==false) spectral_field_value *= zshift_arr[k]; -#elif (AMREX_SPACEDIM == 2) +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) if (is_nodal_z==false) spectral_field_value *= zshift_arr[j]; -#elif (AMREX_SPACEDIM == 1) +#elif defined(WARPX_DIM_1D_Z) if (is_nodal_z==false) spectral_field_value *= zshift_arr[i]; #endif // Copy field into temporary array @@ -394,25 +394,25 @@ SpectralFieldData::BackwardTransform (const int lev, // Total number of cells, including ghost cells (nj represents ny in 3D and nz in 2D) const int ni = mf_box.length(0); -#if (AMREX_SPACEDIM == 1) +#if defined(WARPX_DIM_1D_Z) constexpr int nj = 1; constexpr int nk = 1; -#elif (AMREX_SPACEDIM == 2) +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) const int nj = mf_box.length(1); constexpr int nk = 1; -#elif (AMREX_SPACEDIM == 3) +#elif defined(WARPX_DIM_3D) const int nj = mf_box.length(1); const int nk = mf_box.length(2); #endif // Lower bound of the box (lo_j represents lo_y in 3D and lo_z in 2D) const int lo_i = amrex::lbound(mf_box).x; -#if (AMREX_SPACEDIM == 1) +#if defined(WARPX_DIM_1D_Z) constexpr int lo_j = 0; constexpr int lo_k = 0; -#elif (AMREX_SPACEDIM == 2) +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) const int lo_j = amrex::lbound(mf_box).y; constexpr int lo_k = 0; -#elif (AMREX_SPACEDIM == 3) +#elif defined(WARPX_DIM_3D) const int lo_j = amrex::lbound(mf_box).y; const int lo_k = amrex::lbound(mf_box).z; #endif diff --git a/Source/FieldSolver/WarpX_QED_K.H b/Source/FieldSolver/WarpX_QED_K.H index db8158d700c..a20dde8b9a7 100644 --- a/Source/FieldSolver/WarpX_QED_K.H +++ b/Source/FieldSolver/WarpX_QED_K.H @@ -97,7 +97,7 @@ constexpr amrex::Real c2i = 1._rt/c2; const amrex::Real dxi = 1._rt/dx; const amrex::Real dzi = 1._rt/dz; -#if (AMREX_SPACEDIM == 3) +#if defined(WARPX_DIM_3D) const amrex::Real dyi = 1._rt/dy; // Picking out points for stencil to be used in curl function of M diff --git a/Source/Initialization/WarpXInitData.cpp b/Source/Initialization/WarpXInitData.cpp index cfb0525cc6e..ada88daf74e 100644 --- a/Source/Initialization/WarpXInitData.cpp +++ b/Source/Initialization/WarpXInitData.cpp @@ -886,25 +886,8 @@ void WarpX::InitializeEBGridData (int lev) ScaleEdges(); ScaleAreas(); - const auto &period = Geom(lev).periodicity(); - WarpXCommUtil::FillBoundary(*m_edge_lengths[lev][0], guard_cells.ng_alloc_EB, period); - WarpXCommUtil::FillBoundary(*m_edge_lengths[lev][1], guard_cells.ng_alloc_EB, period); - WarpXCommUtil::FillBoundary(*m_edge_lengths[lev][2], guard_cells.ng_alloc_EB, period); - WarpXCommUtil::FillBoundary(*m_face_areas[lev][0], guard_cells.ng_alloc_EB, period); - WarpXCommUtil::FillBoundary(*m_face_areas[lev][1], guard_cells.ng_alloc_EB, period); - WarpXCommUtil::FillBoundary(*m_face_areas[lev][2], guard_cells.ng_alloc_EB, period); - if (WarpX::maxwell_solver_id == MaxwellSolverAlgo::ECT) { - 
WarpXCommUtil::FillBoundary(*m_area_mod[lev][0], guard_cells.ng_alloc_EB, period); - WarpXCommUtil::FillBoundary(*m_area_mod[lev][1], guard_cells.ng_alloc_EB, period); - WarpXCommUtil::FillBoundary(*m_area_mod[lev][2], guard_cells.ng_alloc_EB, period); MarkCells(); - WarpXCommUtil::FillBoundary(*m_flag_info_face[lev][0], guard_cells.ng_alloc_EB, period); - WarpXCommUtil::FillBoundary(*m_flag_info_face[lev][1], guard_cells.ng_alloc_EB, period); - WarpXCommUtil::FillBoundary(*m_flag_info_face[lev][2], guard_cells.ng_alloc_EB, period); - WarpXCommUtil::FillBoundary(*m_flag_ext_face[lev][0], guard_cells.ng_alloc_EB, period); - WarpXCommUtil::FillBoundary(*m_flag_ext_face[lev][1], guard_cells.ng_alloc_EB, period); - WarpXCommUtil::FillBoundary(*m_flag_ext_face[lev][2], guard_cells.ng_alloc_EB, period); ComputeFaceExtensions(); } } diff --git a/Source/Laser/LaserProfilesImpl/LaserProfileFromTXYEFile.cpp b/Source/Laser/LaserProfilesImpl/LaserProfileFromTXYEFile.cpp index d36386591e8..a0bcb474733 100644 --- a/Source/Laser/LaserProfilesImpl/LaserProfileFromTXYEFile.cpp +++ b/Source/Laser/LaserProfilesImpl/LaserProfileFromTXYEFile.cpp @@ -121,8 +121,7 @@ WarpXLaserProfiles::FromTXYEFileLaserProfile::fill_amplitude ( } //Find left and right time indices - int idx_t_left, idx_t_right; - std::tie(idx_t_left, idx_t_right) = find_left_right_time_indices(t); + const auto [idx_t_left, idx_t_right] = find_left_right_time_indices(t); if(idx_t_left < m_params.first_time_index){ Abort("Something bad has happened with the simulation time"); diff --git a/Source/Make.WarpX b/Source/Make.WarpX index c031a1509ba..08a9e695087 100644 --- a/Source/Make.WarpX +++ b/Source/Make.WarpX @@ -7,11 +7,7 @@ USE_PARTICLES = TRUE USE_RPATH = TRUE USE_GPU_RDC = FALSE BL_NO_FORT = TRUE -ifeq ($(USE_DPCPP),TRUE) - CXXSTD = c++17 -else - CXXSTD = c++14 -endif +CXXSTD = c++17 # required for AMReX async I/O MPI_THREAD_MULTIPLE = TRUE @@ -74,6 +70,7 @@ endif -include Make.package include $(WARPX_HOME)/Source/Make.package +include $(WARPX_HOME)/Source/ABLASTR/Make.package include $(WARPX_HOME)/Source/BoundaryConditions/Make.package include $(WARPX_HOME)/Source/Diagnostics/Make.package include $(WARPX_HOME)/Source/EmbeddedBoundary/Make.package @@ -238,8 +235,8 @@ else endif installwarpx: libwarpx.$(PYDIM).so - mv libwarpx.$(PYDIM).so Python/pywarpx - cd Python; python setup.py install --with-libwarpx $(PYDIM) $(PYINSTALLOPTIONS) + cp libwarpx.$(PYDIM).so Python/pywarpx + cd Python; python3 setup.py install --force --with-libwarpx $(PYDIM) $(PYINSTALLOPTIONS) libwarpx.$(PYDIM).a: $(objForExecs) @echo Making static library $@ ... 
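The `std::tie` conversions in WarpXOpenPMD.cpp and LaserProfileFromTXYEFile.cpp above all apply the same C++17 idiom. A minimal standalone sketch of the pattern, with hypothetical names (`find_indices` and its data are illustrative, not WarpX code):

```
#include <algorithm>
#include <utility>
#include <vector>

// Hypothetical helper: returns the indices bracketing time t in a sorted vector.
std::pair<int, int> find_indices (const std::vector<double>& times, double t)
{
    const auto it = std::lower_bound(times.begin(), times.end(), t);
    const int right = static_cast<int>(it - times.begin());
    return {right > 0 ? right - 1 : 0, right};
}

int main ()
{
    const std::vector<double> times{0.0, 1.0, 2.0};
    // C++14 style, as removed by this patch:
    //   int idx_t_left, idx_t_right;
    //   std::tie(idx_t_left, idx_t_right) = find_indices(times, 0.5);
    // C++17 structured binding, as introduced by this patch:
    const auto [idx_t_left, idx_t_right] = find_indices(times, 0.5);
    // [[maybe_unused]] plays the role of std::ignore when a binding is unused.
    [[maybe_unused]] const auto [left, right] = find_indices(times, 1.5);
    return idx_t_right - idx_t_left;
}
```

The bindings are declared `const` in one shot, which the two-step `std::tie` assignment could not express.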
diff --git a/Source/Parallelization/WarpXRegrid.cpp b/Source/Parallelization/WarpXRegrid.cpp index c96d5e1c782..7c1371f2955 100644 --- a/Source/Parallelization/WarpXRegrid.cpp +++ b/Source/Parallelization/WarpXRegrid.cpp @@ -197,8 +197,9 @@ WarpX::RemakeLevel (int lev, Real /*time*/, const BoxArray& ba, const DistributionMapping& dm) #ifdef AMREX_USE_EB RemakeMultiFab(m_distance_to_eb[lev], dm, false); + int max_guard = guard_cells.ng_FieldSolver.max(); m_field_factory[lev] = amrex::makeEBFabFactory(Geom(lev), ba, dm, - {1,1,1}, // Not clear how many ghost cells we need yet + {max_guard, max_guard, max_guard}, amrex::EBSupport::full); InitializeEBGridData(lev); diff --git a/Source/Particles/Collision/BinaryCollision/ProtonBoronFusionCrossSection.H b/Source/Particles/Collision/BinaryCollision/ProtonBoronFusionCrossSection.H index fad4d87de9a..ee81a620ac0 100644 --- a/Source/Particles/Collision/BinaryCollision/ProtonBoronFusionCrossSection.H +++ b/Source/Particles/Collision/BinaryCollision/ProtonBoronFusionCrossSection.H @@ -15,29 +15,27 @@ #include /** - * \brief Computes the total proton-boron fusion cross section using the analytical fits given in - * W.M. Nevins and R. Swain, Nuclear Fusion, 40, 865 (2000). The result is returned in SI units - * (square meters). + * \brief Computes the total proton-boron fusion cross section in the range 0 < E < 3.5 MeV using + * the analytical fits given in W.M. Nevins and R. Swain, Nuclear Fusion, 40, 865 (2000). * For the record, note that there is a typo in equation (1) of this paper: the total cross section - * should read S(E)/E*exp(-sqrt(E_G/G)) instead of S(E)/E*exp(sqrt(E_G/G)) (minus sign in the + * should read S(E)/E*exp(-sqrt(E_G/E)) instead of S(E)/E*exp(sqrt(E_G/E)) (minus sign in the * exponential). * - * @param[in] E_kin_star the kinetic energy of the proton-boron pair in its center of mass frame, - * in SI units. + * @param[in] E_keV the kinetic energy of the proton-boron pair in its center of mass frame, in + * keV. + * @return The total cross section in barn. */ AMREX_GPU_HOST_DEVICE AMREX_INLINE -amrex::ParticleReal ProtonBoronFusionCrossSection (const amrex::ParticleReal& E_kin_star) +amrex::ParticleReal ProtonBoronFusionCrossSectionNevins (const amrex::ParticleReal& E_keV) { using namespace amrex::literals; // If kinetic energy is 0, return a 0 cross section and avoid later division by 0. 
- if (E_kin_star == 0._prt) {return 0._prt;} + if (E_keV == 0._prt) {return 0._prt;} - // Fits use energy in keV and MeV - constexpr amrex::ParticleReal joule_to_kev = 1.e-3_prt/PhysConst::q_e; - constexpr amrex::ParticleReal joule_to_mev = 1.e-6_prt/PhysConst::q_e; - const amrex::ParticleReal E_kev = E_kin_star*joule_to_kev; - const amrex::ParticleReal E_mev = E_kin_star*joule_to_mev; + // Fits also use energy in MeV + const amrex::ParticleReal E_MeV = E_keV*1.e-3_prt; + constexpr amrex::ParticleReal joule_to_MeV = 1.e-6_prt/PhysConst::q_e; // Compute Gamow factor, in MeV constexpr auto one_pr = 1._prt; @@ -49,13 +47,13 @@ amrex::ParticleReal ProtonBoronFusionCrossSection (const amrex::ParticleReal& E_ (2._prt*PhysConst::ep0*PhysConst::hbar)) * (PhysConst::q_e*PhysConst::q_e * Z_boron / (2._prt*PhysConst::ep0*PhysConst::hbar)) * - joule_to_mev; + joule_to_MeV; // Compute astrophysical factor, in MeV barn, using the fits constexpr auto E_lim1 = 400._prt; // Limits between the different fit regions constexpr auto E_lim2 = 642._prt; amrex::ParticleReal astrophysical_factor; - if (E_kev < E_lim1) + if (E_keV < E_lim1) { constexpr auto C0 = 197._prt; constexpr auto C1 = 0.24_prt; @@ -63,16 +61,16 @@ amrex::ParticleReal ProtonBoronFusionCrossSection (const amrex::ParticleReal& E_ constexpr auto AL = 1.82e4_prt; constexpr auto EL = 148._prt; constexpr auto dEL_sq = 2.35_prt*2.35_prt; - astrophysical_factor = C0 + C1*E_kev + C2*E_kev*E_kev + - AL/((E_kev - EL)*(E_kev - EL) + dEL_sq); + astrophysical_factor = C0 + C1*E_keV + C2*E_keV*E_keV + + AL/((E_keV - EL)*(E_keV - EL) + dEL_sq); } - else if (E_kev < E_lim2) + else if (E_keV < E_lim2) { constexpr auto D0 = 330._prt; constexpr auto D1 = 66.1_prt; constexpr auto D2 = -20.3_prt; constexpr auto D5 = -1.58_prt; - const amrex::ParticleReal E_norm = (E_kev-400._prt) * 1.e-2_prt; + const amrex::ParticleReal E_norm = (E_keV-400._prt) * 1.e-2_prt; astrophysical_factor = D0 + D1*E_norm + D2*E_norm*E_norm + D5*std::pow(E_norm,5); } else @@ -90,18 +88,66 @@ amrex::ParticleReal ProtonBoronFusionCrossSection (const amrex::ParticleReal& E_ constexpr auto dE2_sq = 138._prt*138._prt; constexpr auto dE3_sq = 309._prt*309._prt; constexpr auto B = 4.38_prt; - astrophysical_factor = A0 / ((E_kev-E0)*(E_kev-E0) + dE0_sq) + - A1 / ((E_kev-E1)*(E_kev-E1) + dE1_sq) + - A2 / ((E_kev-E2)*(E_kev-E2) + dE2_sq) + - A3 / ((E_kev-E3)*(E_kev-E3) + dE3_sq) + B; + astrophysical_factor = A0 / ((E_keV-E0)*(E_keV-E0) + dE0_sq) + + A1 / ((E_keV-E1)*(E_keV-E1) + dE1_sq) + + A2 / ((E_keV-E2)*(E_keV-E2) + dE2_sq) + + A3 / ((E_keV-E3)*(E_keV-E3) + dE3_sq) + B; } // Compute cross section, in barn - const amrex::ParticleReal cross_section_b = astrophysical_factor/E_mev* - std::exp(-std::sqrt(gamow_factor/E_mev)); + return astrophysical_factor/E_MeV*std::exp(-std::sqrt(gamow_factor/E_MeV)); +} + +/** + * \brief Computes the total proton-boron fusion cross section in the range E > 3.5 MeV using a + * simple power law fit of the data presented in Buck et al., Nuclear Physics A, 398(2), 189-202 + * (1983) (data can also be found in the EXFOR database). + * + * @param[in] E_keV the kinetic energy of the proton-boron pair in its center of mass frame, in + * keV. + * @return The total cross section in barn. 
+ */ +AMREX_GPU_HOST_DEVICE AMREX_INLINE +amrex::ParticleReal ProtonBoronFusionCrossSectionBuck (const amrex::ParticleReal& E_keV) +{ + using namespace amrex::literals; + + constexpr amrex::ParticleReal E_start_fit = 3500._prt; // Fit starts at 3.5 MeV + // cross section at E = E_start_fit, in barn + constexpr amrex::ParticleReal cross_section_start_fit = 0.2168440845211521_prt; + constexpr amrex::ParticleReal slope_fit = -2.661840717596765; + + // Compute fitted value + return cross_section_start_fit*std::pow(E_keV/E_start_fit, slope_fit); +} + +/** + * \brief Computes the total proton-boron fusion cross section. When E_kin_star < 3.5 MeV, we use + * the analytical fits given in W.M. Nevins and R. Swain, Nuclear Fusion, 40, 865 (2000). When + * E_kin_star > 3.5 MeV, we use a simple power law fit of the data presented in Buck et al., + * Nuclear Physics A, 398(2), 189-202 (1983). Both fits return the same value for + * E_kin_star = 3.5 MeV. + * + * @param[in] E_kin_star the kinetic energy of the proton-boron pair in its center of mass frame, + * in SI units. + * @return The total cross section in SI units (square meters). + */ +AMREX_GPU_HOST_DEVICE AMREX_INLINE +amrex::ParticleReal ProtonBoronFusionCrossSection (const amrex::ParticleReal& E_kin_star) +{ + using namespace amrex::literals; + + // Fits use energy in keV + constexpr amrex::ParticleReal joule_to_keV = 1.e-3_prt/PhysConst::q_e; + const amrex::ParticleReal E_keV = E_kin_star*joule_to_keV; + constexpr amrex::ParticleReal E_threshold = 3500._prt; + + const amrex::ParticleReal cross_section_b = (E_keV <= E_threshold) ? + ProtonBoronFusionCrossSectionNevins(E_keV) : + ProtonBoronFusionCrossSectionBuck(E_keV); // Convert cross section to SI units: barn to square meter - constexpr auto barn_to_sqm = amrex::ParticleReal(1.e-28); + constexpr auto barn_to_sqm = 1.e-28_prt; return cross_section_b*barn_to_sqm; } diff --git a/Source/Particles/Deposition/ChargeDeposition.H b/Source/Particles/Deposition/ChargeDeposition.H index 259cd050f89..bc8094e5f7a 100644 --- a/Source/Particles/Deposition/ChargeDeposition.H +++ b/Source/Particles/Deposition/ChargeDeposition.H @@ -1,4 +1,4 @@ -/* Copyright 2019 Axel Huebl, David Grote, Maxence Thevenet +/* Copyright 2019 Axel Huebl, Andrew Myers, David Grote, Maxence Thevenet * Weiqun Zhang * * This file is part of WarpX. @@ -12,6 +12,7 @@ #include "Particles/Pusher/GetAndSetPosition.H" #include "Particles/ShapeFactors.H" #include "Utils/WarpXAlgorithmSelection.H" +#include "Utils/WarpXProfilerWrapper.H" #ifdef WARPX_DIM_RZ # include "Utils/WarpX_Complex.H" #endif diff --git a/Source/Particles/ShapeFactors.H b/Source/Particles/ShapeFactors.H index 835d1237ea6..09f96634012 100644 --- a/Source/Particles/ShapeFactors.H +++ b/Source/Particles/ShapeFactors.H @@ -1,4 +1,4 @@ -/* Copyright 2019 Maxence Thevenet, Michael Rowan +/* Copyright 2019-2021 Maxence Thevenet, Michael Rowan, Luca Fedeli, Axel Huebl * * This file is part of WarpX. * @@ -7,10 +7,14 @@ #ifndef SHAPEFACTORS_H_ #define SHAPEFACTORS_H_ +#include <AMReX.H> +#include <AMReX_Extension.H> + + /** * Compute shape factor and return index of leftmost cell where * particle writes. - * Specialized templates are defined below for orders 0 to 3. + * Specializations are defined for orders 0 to 3 (using "if constexpr"). * Shape factor functors may be evaluated with double arguments * in current deposition to ensure that current deposited by * particles that move only a small distance is still resolved. 
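The ShapeFactors.H hunks below fold the per-order specializations of `Compute_shape_factor` and `Compute_shifted_shape_factor` into a single `if constexpr` chain per functor. A minimal, self-contained sketch of that pattern (the `ShapeWeight` functor and its weights are illustrative only, not WarpX code):

```
#include <cstdio>

// One class template; the order is dispatched at compile time.
template <int order>
struct ShapeWeight
{
    template <typename T>
    T operator() (T xint) const
    {
        if constexpr (order == 0) {
            return T(1.0);              // top-hat: full weight in one cell
        } else if constexpr (order == 1) {
            return T(1.0) - xint;       // linear ramp
        } else if constexpr (order == 2) {
            return T(0.75) - xint*xint; // quadratic spline, center weight
        } else {
            // The condition depends on the template parameter, so this
            // static_assert only fires when the else branch is instantiated.
            static_assert(order >= 0 && order <= 2, "Unknown shape order");
            return T(0.0);
        }
    }
};

int main ()
{
    std::printf("%f\n", ShapeWeight<2>{}(0.25)); // prints 0.687500
    return 0;
}
```

Discarded `if constexpr` branches are never instantiated, so each instantiation compiles to exactly the code of the matching former specialization.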
@@ -23,170 +27,100 @@ struct Compute_shape_factor { template< typename T > AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE - int operator()(T* const /*sx*/, T /*xint*/) const { return 0; } -}; - -/** - * Compute shape factor and return index of leftmost cell where - * particle writes. - * Specialization for order 0 - */ -template <> -struct Compute_shape_factor< 0 > -{ - template< typename T > - AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE - int operator()(T* const sx, T xmid) const + int operator()( + T* const sx, + T xmid) const { - const auto j = static_cast<int>(xmid + T(0.5)); - sx[0] = T(1.0); - return j; + if constexpr (depos_order == 0){ + const auto j = static_cast<int>(xmid + T(0.5)); + sx[0] = T(1.0); + return j; + } + else if constexpr (depos_order == 1){ + const auto j = static_cast<int>(xmid); + const T xint = xmid - T(j); + sx[0] = T(1.0) - xint; + sx[1] = xint; + return j; + } + else if constexpr (depos_order == 2){ + const auto j = static_cast<int>(xmid + T(0.5)); + const T xint = xmid - T(j); + sx[0] = T(0.5)*(T(0.5) - xint)*(T(0.5) - xint); + sx[1] = T(0.75) - xint*xint; + sx[2] = T(0.5)*(T(0.5) + xint)*(T(0.5) + xint); + // index of the leftmost cell where particle deposits + return j-1; + } + else if constexpr (depos_order == 3){ + const auto j = static_cast<int>(xmid); + const T xint = xmid - T(j); + sx[0] = (T(1.0))/(T(6.0))*(T(1.0) - xint)*(T(1.0) - xint)*(T(1.0) - xint); + sx[1] = (T(2.0))/(T(3.0)) - xint*xint*(T(1.0) - xint/(T(2.0))); + sx[2] = (T(2.0))/(T(3.0)) - (T(1.0) - xint)*(T(1.0) - xint)*(T(1.0) - T(0.5)*(T(1.0) - xint)); + sx[3] = (T(1.0))/(T(6.0))*xint*xint*xint; + // index of the leftmost cell where particle deposits + return j-1; + } + else{ + amrex::Abort("Unknown particle shape selected in Compute_shape_factor"); + amrex::ignore_unused(sx, xmid); + return 0; + } } }; -/** - * Compute shape factor and return index of leftmost cell where - * particle writes. - * Specialization for order 1 - */ -template <> -struct Compute_shape_factor< 1 > -{ - template< typename T > - AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE - int operator()(T* const sx, T xmid) const - { - const auto j = static_cast<int>(xmid); - const T xint = xmid - T(j); - sx[0] = T(1.0) - xint; - sx[1] = xint; - return j; - } -}; -/** - * Compute shape factor and return index of leftmost cell where - * particle writes. - * Specialization for order 2 - */ -template <> -struct Compute_shape_factor< 2 > -{ - template< typename T > - AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE - int operator()(T* const sx, T xmid) const - { - const auto j = static_cast<int>(xmid + T(0.5)); - const T xint = xmid - T(j); - sx[0] = T(0.5)*(T(0.5) - xint)*(T(0.5) - xint); - sx[1] = T(0.75) - xint*xint; - sx[2] = T(0.5)*(T(0.5) + xint)*(T(0.5) + xint); - // index of the leftmost cell where particle deposits - return j-1; - } -}; - -/** - * Compute shape factor and return index of leftmost cell where - * particle writes. 
- * Specialization for order 3 - */ -template <> -struct Compute_shape_factor< 3 > -{ - template< typename T > - AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE - int operator()(T* const sx, T xmid) const - { - const auto j = static_cast<int>(xmid); - const T xint = xmid - T(j); - sx[0] = (T(1.0))/(T(6.0))*(T(1.0) - xint)*(T(1.0) - xint)*(T(1.0) - xint); - sx[1] = (T(2.0))/(T(3.0)) - xint*xint*(T(1.0) - xint/(T(2.0))); - sx[2] = (T(2.0))/(T(3.0)) - (T(1.0) - xint)*(T(1.0) - xint)*(T(1.0) - T(0.5)*(T(1.0) - xint)); - sx[3] = (T(1.0))/(T(6.0))*xint*xint*xint; - // index of the leftmost cell where particle deposits - return j-1; - } -}; /** * Compute shifted shape factor and return index of leftmost cell where * particle writes, for Esirkepov algorithm. - * Specialized templates are defined below for orders 1, 2 and 3. + * Specializations are defined below for orders 1, 2 and 3 (using "if constexpr"). */ template <int depos_order> struct Compute_shifted_shape_factor { template< typename T > AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE - int operator()(T* const sx, const T x_old, const int i_new) const; -}; - -/** - * Compute shifted shape factor and return index of leftmost cell where - * particle writes, for Esirkepov algorithm. - * Specialization for order 1 - */ -template <> -struct Compute_shifted_shape_factor< 1 > -{ - template< typename T > - AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE - int operator()(T* const sx, const T x_old, const int i_new) const - { - const auto i = static_cast<int>(x_old); - const int i_shift = i - i_new; - const T xint = x_old - T(i); - sx[1+i_shift] = T(1.0) - xint; - sx[2+i_shift] = xint; - return i; - } -}; - -/** - * Compute shifted shape factor and return index of leftmost cell where - * particle writes, for Esirkepov algorithm. - * Specialization for order 2 - */ -template <> -struct Compute_shifted_shape_factor< 2 > -{ - template< typename T > - AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE - int operator()(T* const sx, const T x_old, const int i_new) const - { - const auto i = static_cast<int>(x_old + T(0.5)); - const int i_shift = i - (i_new + 1); - const T xint = x_old - T(i); - sx[1+i_shift] = T(0.5)*(T(0.5) - xint)*(T(0.5) - xint); - sx[2+i_shift] = T(0.75) - xint*xint; - sx[3+i_shift] = T(0.5)*(T(0.5) + xint)*(T(0.5) + xint); - // index of the leftmost cell where particle deposits - return i - 1; - } -}; - -/** - * Compute shifted shape factor and return index of leftmost cell where - * particle writes, for Esirkepov algorithm. 
- * Specialization for order 3
- */
-template <>
-struct Compute_shifted_shape_factor< 3 >
-{
-    template< typename T >
-    AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE
-    int operator()(T* const sx, const T x_old, const int i_new) const
+    int operator()(
+        T* const sx,
+        const T x_old,
+        const int i_new) const
     {
-        const auto i = static_cast<int>(x_old);
-        const int i_shift = i - (i_new + 1);
-        const T xint = x_old - i;
-        sx[1+i_shift] = (T(1.0))/(T(6.0))*(T(1.0) - xint)*(T(1.0) - xint)*(T(1.0) - xint);
-        sx[2+i_shift] = (T(2.0))/(T(3.0)) - xint*xint*(T(1.0) - xint/(T(2.0)));
-        sx[3+i_shift] = (T(2.0))/(T(3.0)) - (T(1.0) - xint)*(T(1.0) - xint)*(T(1.0) - T(0.5)*(T(1.0) - xint));
-        sx[4+i_shift] = (T(1.0))/(T(6.0))*xint*xint*xint;
-        // index of the leftmost cell where particle deposits
-        return i - 1;
+        if constexpr (depos_order == 1){
+            const auto i = static_cast<int>(x_old);
+            const int i_shift = i - i_new;
+            const T xint = x_old - T(i);
+            sx[1+i_shift] = T(1.0) - xint;
+            sx[2+i_shift] = xint;
+            return i;
+        }
+        else if constexpr (depos_order == 2){
+            const auto i = static_cast<int>(x_old + T(0.5));
+            const int i_shift = i - (i_new + 1);
+            const T xint = x_old - T(i);
+            sx[1+i_shift] = T(0.5)*(T(0.5) - xint)*(T(0.5) - xint);
+            sx[2+i_shift] = T(0.75) - xint*xint;
+            sx[3+i_shift] = T(0.5)*(T(0.5) + xint)*(T(0.5) + xint);
+            // index of the leftmost cell where particle deposits
+            return i - 1;
+        }
+        else if constexpr (depos_order == 3){
+            const auto i = static_cast<int>(x_old);
+            const int i_shift = i - (i_new + 1);
+            const T xint = x_old - i;
+            sx[1+i_shift] = (T(1.0))/(T(6.0))*(T(1.0) - xint)*(T(1.0) - xint)*(T(1.0) - xint);
+            sx[2+i_shift] = (T(2.0))/(T(3.0)) - xint*xint*(T(1.0) - xint/(T(2.0)));
+            sx[3+i_shift] = (T(2.0))/(T(3.0)) - (T(1.0) - xint)*(T(1.0) - xint)*(T(1.0) - T(0.5)*(T(1.0) - xint));
+            sx[4+i_shift] = (T(1.0))/(T(6.0))*xint*xint*xint;
+            // index of the leftmost cell where particle deposits
+            return i - 1;
+        }
+        else{
+            amrex::Abort("Unknown particle shape selected in Compute_shifted_shape_factor");
+            amrex::ignore_unused(sx, x_old, i_new);
+            return 0;
+        }
     }
 };
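For readers who have not used C++17 yet, here is a minimal, self-contained sketch of the `if constexpr` dispatch pattern adopted in the hunk above (illustrative only, not WarpX code). Untaken branches are discarded at compile time, so a single primary template replaces the per-order specializations at zero runtime cost:

```
// Minimal sketch of compile-time dispatch with "if constexpr" (C++17).
// Each instantiation keeps only its own branch, exactly as a hand-written
// specialization would.
#include <cstdio>

template <int depos_order, typename T>
int shape_factor (T* const sx, T xmid)
{
    if constexpr (depos_order == 0) {
        const auto j = static_cast<int>(xmid + T(0.5));
        sx[0] = T(1.0);
        return j;
    }
    else if constexpr (depos_order == 1) {
        const auto j = static_cast<int>(xmid);
        const T xint = xmid - T(j);
        sx[0] = T(1.0) - xint;
        sx[1] = xint;
        return j;
    }
    else {
        // compile-time counterpart of the runtime fallback in the hunk above
        static_assert(depos_order <= 1, "unsupported deposition order");
        return 0;
    }
}

int main ()
{
    double sx[2];
    const int j = shape_factor<1>(sx, 3.75);
    std::printf("j=%d sx=[%g, %g]\n", j, sx[0], sx[1]);  // j=3 sx=[0.25, 0.75]
}
```

One difference by design: this sketch rejects an unsupported order at compile time via `static_assert`, whereas the hunk above keeps a runtime `amrex::Abort` fallback.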
diff --git a/Source/Particles/WarpXParticleContainer.cpp b/Source/Particles/WarpXParticleContainer.cpp
index 5905ccdb711..b52e31e4287 100644
--- a/Source/Particles/WarpXParticleContainer.cpp
+++ b/Source/Particles/WarpXParticleContainer.cpp
@@ -9,6 +9,7 @@
  */
 #include "WarpXParticleContainer.H"

+#include "ABLASTR/DepositCharge.H"
 #include "Deposition/ChargeDeposition.H"
 #include "Deposition/CurrentDeposition.H"
 #include "Pusher/GetAndSetPosition.H"
@@ -598,146 +599,62 @@ WarpXParticleContainer::DepositCharge (WarpXParIter& pti, RealVector& wp,
                                        const long offset, const long np_to_depose,
                                        int thread_num, int lev, int depos_lev)
 {
-    AMREX_ALWAYS_ASSERT_WITH_MESSAGE((depos_lev==(lev-1)) ||
-                                     (depos_lev==(lev  )),
-                                     "Deposition buffers only work for lev-1");
-
-    // If no particles, do not do anything
-    if (np_to_depose == 0) return;
-
-    // If user decides not to deposit
-    if (do_not_deposit) return;
-
-    // Number of guard cells for local deposition of rho
-    WarpX& warpx = WarpX::GetInstance();
-    const amrex::IntVect& ng_rho = warpx.get_ng_depos_rho();
-
-    // Extract deposition order and check that particles shape fits within the guard cells.
-    // NOTE: In specific situations where the staggering of rho and the charge deposition algorithm
-    // are not trivial, this check might be too strict and we might need to relax it, as currently
-    // done for the current deposition.
-
-#if defined(WARPX_DIM_1D_Z)
-    const amrex::IntVect shape_extent = amrex::IntVect(static_cast<int>(WarpX::noz/2+1));
-#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ)
-    const amrex::IntVect shape_extent = amrex::IntVect(static_cast<int>(WarpX::nox/2+1),
-                                                       static_cast<int>(WarpX::noz/2+1));
-#elif defined(WARPX_DIM_3D)
-    const amrex::IntVect shape_extent = amrex::IntVect(static_cast<int>(WarpX::nox/2+1),
-                                                       static_cast<int>(WarpX::noy/2+1),
-                                                       static_cast<int>(WarpX::noz/2+1));
-#endif
-
-    // On CPU: particles deposit on tile arrays, which have a small number of guard cells ng_rho
-    // On GPU: particles deposit directly on the rho array, which usually have a larger number of guard cells
-#ifndef AMREX_USE_GPU
-    const amrex::IntVect range = ng_rho - shape_extent;
-#else
-    const amrex::IntVect range = rho->nGrowVect() - shape_extent;
-#endif
-
-    AMREX_ALWAYS_ASSERT_WITH_MESSAGE(
-        amrex::numParticlesOutOfRange(pti, range) == 0,
-        "Particles shape does not fit within tile (CPU) or guard cells (GPU) used for charge deposition");
-
-    const std::array<Real,3>& dx = WarpX::CellSize(std::max(depos_lev,0));
-    const Real q = this->charge;
-
-    WARPX_PROFILE_VAR_NS("WarpXParticleContainer::DepositCharge::ChargeDeposition", blp_ppc_chd);
-    WARPX_PROFILE_VAR_NS("WarpXParticleContainer::DepositCharge::Accumulate", blp_accumulate);
-
-    // Get tile box where charge is deposited.
-    // The tile box is different when depositing in the buffers (depos_lev<lev)
-    // than when depositing inside the tile (depos_lev=lev)
-    amrex::Box tilebox;
-    if (lev == depos_lev) {
-        tilebox = pti.tilebox();
-    } else {
-        const amrex::IntVect& ref_ratio = WarpX::RefRatio(depos_lev);
-        tilebox = amrex::coarsen(pti.tilebox(), ref_ratio);
-    }
-
-#ifndef AMREX_USE_GPU
-    // Staggered tile box
-    amrex::Box tb = amrex::convert( tilebox, rho->ixType().toIntVect() );
-#endif
-
-    tilebox.grow(ng_rho);
-
-    const int nc = WarpX::ncomps;
-
-#ifdef AMREX_USE_GPU
-    amrex::ignore_unused(thread_num);
-    // GPU, no tiling: rho_fab points to the full rho array
-    MultiFab rhoi(*rho, amrex::make_alias, icomp*nc, nc);
-    auto & rho_fab = rhoi.get(pti);
-#else
-    tb.grow(ng_rho);
-
-    // CPU, tiling: rho_fab points to local_rho[thread_num]
-    local_rho[thread_num].resize(tb, nc);
-
-    // local_rho[thread_num] is set to zero
-    local_rho[thread_num].setVal(0.0);
-
-    auto & rho_fab = local_rho[thread_num];
-#endif
-
-    const auto GetPosition = GetParticlePosition(pti, offset);
-
-    // Lower corner of tile box physical domain
-    // Note that this includes guard cells since it is after tilebox.ngrow
-    Real cur_time = warpx.gett_new(lev);
-    Real dt = warpx.getdt(lev);
-    const auto& time_of_last_gal_shift = warpx.time_of_last_gal_shift;
-    // Take into account Galilean shift
-    Real time_shift_rho_old = (cur_time - time_of_last_gal_shift);
-    Real time_shift_rho_new = (cur_time + dt - time_of_last_gal_shift);
-    amrex::Array<amrex::Real,3> galilean_shift;
-    if (icomp==0){
-        galilean_shift = {
-            m_v_galilean[0]*time_shift_rho_old,
-            m_v_galilean[1]*time_shift_rho_old,
-            m_v_galilean[2]*time_shift_rho_old };
-    } else{
-        galilean_shift = {
-            m_v_galilean[0]*time_shift_rho_new,
-            m_v_galilean[1]*time_shift_rho_new,
-            m_v_galilean[2]*time_shift_rho_new };
-    }
-    const std::array<Real,3>& xyzmin = WarpX::LowerCorner(tilebox, galilean_shift, depos_lev);
-
-    // Indices of the lower bound
-    const Dim3 lo = lbound(tilebox);
-
-    WARPX_PROFILE_VAR_START(blp_ppc_chd);
-    amrex::LayoutData<amrex::Real>* costs = WarpX::getCosts(lev);
-    amrex::Real* cost = costs ? &((*costs)[pti.index()]) : nullptr;
-
-    if (WarpX::nox == 1){
-        doChargeDepositionShapeN<1>(GetPosition, wp.dataPtr()+offset, ion_lev,
-                                    rho_fab, np_to_depose, dx, xyzmin, lo, q,
-                                    WarpX::n_rz_azimuthal_modes, cost,
-                                    WarpX::load_balance_costs_update_algo);
-    } else if (WarpX::nox == 2){
-        doChargeDepositionShapeN<2>(GetPosition, wp.dataPtr()+offset, ion_lev,
-                                    rho_fab, np_to_depose, dx, xyzmin, lo, q,
-                                    WarpX::n_rz_azimuthal_modes, cost,
-                                    WarpX::load_balance_costs_update_algo);
-    } else if (WarpX::nox == 3){
-        doChargeDepositionShapeN<3>(GetPosition, wp.dataPtr()+offset, ion_lev,
-                                    rho_fab, np_to_depose, dx, xyzmin, lo, q,
-                                    WarpX::n_rz_azimuthal_modes, cost,
-                                    WarpX::load_balance_costs_update_algo);
+    if (!do_not_deposit) {
+        WarpX& warpx = WarpX::GetInstance();
+        const amrex::IntVect& ng_rho = warpx.get_ng_depos_rho();
+        const std::array<Real,3>& dx = WarpX::CellSize(std::max(depos_lev,0));
+        amrex::IntVect ref_ratio;
+        if (lev == depos_lev) {
+            ref_ratio = IntVect(AMREX_D_DECL(1, 1, 1 ));
+        } else {
+            ref_ratio = WarpX::RefRatio(depos_lev);
+        }
+        const int nc = WarpX::ncomps;
+
+        // Get tile box where charge is deposited.
+        // The tile box is different when depositing in the buffers (depos_lev<lev)
+        // than when depositing inside the tile (depos_lev=lev)
+        amrex::Box tilebox;
+        if (lev == depos_lev) {
+            tilebox = pti.tilebox();
+        } else {
+            tilebox = amrex::coarsen(pti.tilebox(), ref_ratio);
+        }
+
+        // Lower corner of tile box physical domain
+        // Note that this includes guard cells since it is after tilebox.ngrow
+        const Real cur_time = warpx.gett_new(lev);
+        const Real dt = warpx.getdt(lev);
+        const auto& time_of_last_gal_shift = warpx.time_of_last_gal_shift;
+        // Take into account Galilean shift
+        const Real time_shift_rho_old = (cur_time - time_of_last_gal_shift);
+        const Real time_shift_rho_new = (cur_time + dt - time_of_last_gal_shift);
+        amrex::Array<amrex::Real,3> galilean_shift;
+        if (icomp==0){
+            galilean_shift = {
+                m_v_galilean[0]*time_shift_rho_old,
+                m_v_galilean[1]*time_shift_rho_old,
+                m_v_galilean[2]*time_shift_rho_old };
+        } else{
+            galilean_shift = {
+                m_v_galilean[0]*time_shift_rho_new,
+                m_v_galilean[1]*time_shift_rho_new,
+                m_v_galilean[2]*time_shift_rho_new };
+        }
+        const auto& xyzmin = WarpX::LowerCorner(tilebox, galilean_shift, depos_lev);
+
+        // pointer to costs data
+        amrex::LayoutData<amrex::Real>* costs = WarpX::getCosts(lev);
+        amrex::Real* cost = costs ? &((*costs)[pti.index()]) : nullptr;
+
+        ablastr::DepositCharge
+            (pti, wp, ion_lev, rho, icomp, nc, offset, np_to_depose,
+             local_rho[thread_num], lev, depos_lev, this->charge,
+             WarpX::nox, WarpX::noy, WarpX::noz, ng_rho, dx, xyzmin, ref_ratio,
+             cost, WarpX::n_rz_azimuthal_modes, WarpX::load_balance_costs_update_algo,
+             WarpX::do_device_synchronize);
     }
-    WARPX_PROFILE_VAR_STOP(blp_ppc_chd);
-
-#ifndef AMREX_USE_GPU
-    // CPU, tiling: atomicAdd local_rho into rho
-    WARPX_PROFILE_VAR_START(blp_accumulate);
-    (*rho)[pti].atomicAdd(local_rho[thread_num], tb, tb, 0, icomp*nc, nc);
-    WARPX_PROFILE_VAR_STOP(blp_accumulate);
-#endif
 }

 void
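The removed CPU branch above relies on a reduction pattern worth noting: each OpenMP thread deposits into its own buffer (`local_rho[thread_num]`) and the result is atomically accumulated into the shared `rho` once per tile, confining contention to a single pass. A stripped-down sketch of that strategy, with a plain `std::vector` standing in for `amrex::FArrayBox` (illustrative names, not the ablastr implementation):

```
// Per-thread buffer plus one atomic accumulation pass (illustrative only).
#include <cstdio>
#include <vector>

int main ()
{
    const int ncells = 8;
    std::vector<double> rho(ncells, 0.0);        // shared field, like *rho

    #pragma omp parallel
    {
        // thread-private buffer, like local_rho[thread_num]
        std::vector<double> local_rho(ncells, 0.0);

        #pragma omp for
        for (int p = 0; p < 1000; ++p) {
            local_rho[p % ncells] += 1.0;        // "deposit" particle p
        }

        // accumulate once into the shared array, like FArrayBox::atomicAdd
        for (int i = 0; i < ncells; ++i) {
            #pragma omp atomic
            rho[i] += local_rho[i];
        }
    }

    std::printf("rho[0] = %g\n", rho[0]);        // 125 for any thread count
}
```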
diff --git a/Source/Python/WarpXWrappers.H b/Source/Python/WarpXWrappers.H
index a1c335b485f..0cdfed7db3b 100644
--- a/Source/Python/WarpXWrappers.H
+++ b/Source/Python/WarpXWrappers.H
@@ -172,6 +172,12 @@ extern "C" {
     int* warpx_getCurrentDensityCPLoVects (int lev, int direction, int *return_size, int **ngrowvect);
     int* warpx_getCurrentDensityFPLoVects (int lev, int direction, int *return_size, int **ngrowvect);

+    amrex::Real** warpx_getEdgeLengths (int lev, int direction, int *return_size, int *ncomps, int **ngrowvect, int **shapes);
+    int* warpx_getEdgeLengthsLoVects (int lev, int direction, int *return_size, int **ngrowvect);
+
+    amrex::Real** warpx_getFaceAreas (int lev, int direction, int *return_size, int *ncomps, int **ngrowvect, int **shapes);
+    int* warpx_getFaceAreasLoVects (int lev, int direction, int *return_size, int **ngrowvect);
+
     int* warpx_getEx_nodal_flag ();
     int* warpx_getEy_nodal_flag ();
     int* warpx_getEz_nodal_flag ();
@@ -185,6 +191,12 @@ extern "C" {
     int* warpx_getPhi_nodal_flag ();
     int* warpx_getF_nodal_flag ();
     int* warpx_getG_nodal_flag ();
+    int* warpx_get_edge_lengths_x_nodal_flag ();
+    int* warpx_get_edge_lengths_y_nodal_flag ();
+    int* warpx_get_edge_lengths_z_nodal_flag ();
+    int* warpx_get_face_areas_x_nodal_flag ();
+    int* warpx_get_face_areas_y_nodal_flag ();
+    int* warpx_get_face_areas_z_nodal_flag ();

     amrex::Real** warpx_getChargeDensityCP (int lev, int *return_size, int *ncomps, int **ngrowvect, int **shapes);
     amrex::Real** warpx_getChargeDensityFP (int lev, int *return_size, int *ncomps, int **ngrowvect, int **shapes);
diff --git a/Source/Python/WarpXWrappers.cpp b/Source/Python/WarpXWrappers.cpp
index 140fa1b5738..8b0a2232e26 100644
--- a/Source/Python/WarpXWrappers.cpp
+++ b/Source/Python/WarpXWrappers.cpp
@@ -324,6 +324,9 @@ namespace
     WARPX_GET_FIELD(warpx_getBfieldCP, WarpX::GetInstance().get_pointer_Bfield_cp)
     WARPX_GET_FIELD(warpx_getBfieldFP, WarpX::GetInstance().get_pointer_Bfield_fp)

+    WARPX_GET_FIELD(warpx_getEdgeLengths, WarpX::GetInstance().get_pointer_edge_lengths)
+    WARPX_GET_FIELD(warpx_getFaceAreas, WarpX::GetInstance().get_pointer_face_areas)
+
     WARPX_GET_FIELD(warpx_getCurrentDensity, WarpX::GetInstance().get_pointer_current_fp)
     WARPX_GET_FIELD(warpx_getCurrentDensityCP, WarpX::GetInstance().get_pointer_current_cp)
     WARPX_GET_FIELD(warpx_getCurrentDensityFP, WarpX::GetInstance().get_pointer_current_fp)
@@ -340,6 +343,9 @@ namespace
     WARPX_GET_LOVECTS(warpx_getCurrentDensityCPLoVects, WarpX::GetInstance().get_pointer_current_cp)
     WARPX_GET_LOVECTS(warpx_getCurrentDensityFPLoVects, WarpX::GetInstance().get_pointer_current_fp)

+    WARPX_GET_LOVECTS(warpx_getEdgeLengthsLoVects, WarpX::GetInstance().get_pointer_edge_lengths)
+    WARPX_GET_LOVECTS(warpx_getFaceAreasLoVects, WarpX::GetInstance().get_pointer_face_areas)
+
     int* warpx_getEx_nodal_flag() {return getFieldNodalFlagData( WarpX::GetInstance().get_pointer_Efield_aux(0,0) );}
     int* warpx_getEy_nodal_flag() {return getFieldNodalFlagData( WarpX::GetInstance().get_pointer_Efield_aux(0,1) );}
     int* warpx_getEz_nodal_flag() {return getFieldNodalFlagData( WarpX::GetInstance().get_pointer_Efield_aux(0,2) );}
@@ -353,6 +359,12 @@ namespace
     int* warpx_getPhi_nodal_flag() {return getFieldNodalFlagData( WarpX::GetInstance().get_pointer_phi_fp(0) );}
     int* warpx_getF_nodal_flag() {return getFieldNodalFlagData( WarpX::GetInstance().get_pointer_F_fp(0) );}
     int* warpx_getG_nodal_flag() {return getFieldNodalFlagData( WarpX::GetInstance().get_pointer_G_fp(0) );}
+    int* warpx_get_edge_lengths_x_nodal_flag() {return getFieldNodalFlagData( WarpX::GetInstance().get_pointer_edge_lengths(0, 0) );}
+    int* warpx_get_edge_lengths_y_nodal_flag() {return getFieldNodalFlagData( WarpX::GetInstance().get_pointer_edge_lengths(0, 1) );}
+    int* warpx_get_edge_lengths_z_nodal_flag() {return getFieldNodalFlagData( WarpX::GetInstance().get_pointer_edge_lengths(0, 2) );}
+    int* warpx_get_face_areas_x_nodal_flag() {return getFieldNodalFlagData( WarpX::GetInstance().get_pointer_face_areas(0, 0) );}
+    int* warpx_get_face_areas_y_nodal_flag() {return getFieldNodalFlagData( WarpX::GetInstance().get_pointer_face_areas(0, 1) );}
+    int* warpx_get_face_areas_z_nodal_flag() {return getFieldNodalFlagData( WarpX::GetInstance().get_pointer_face_areas(0, 2) );}

 #define WARPX_GET_SCALAR(SCALAR, GETTER) \
     amrex::Real** SCALAR(int lev, \
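The `WARPX_GET_FIELD`/`WARPX_GET_LOVECTS` helpers used above are macros (defined outside this hunk) that stamp out one `extern "C"` accessor per field for the Python layer. As a rough sketch of the technique only, with invented names and a simplified signature (this is not the actual WarpX macro):

```
// Hypothetical, simplified version of a getter-generating macro.
#include <cstdio>

struct Field { double* data; int size; };

#define DEMO_GET_FIELD(FUNC, GETTER)               \
    extern "C" double* FUNC (int lev, int* size) { \
        Field f = GETTER(lev);                     \
        *size = f.size;                            \
        return f.data;                             \
    }

static double storage[4] = {1., 2., 3., 4.};
static Field lookup_edge_lengths (int /*lev*/) { return {storage, 4}; }

DEMO_GET_FIELD(demo_getEdgeLengths, lookup_edge_lengths)

int main ()
{
    int n = 0;
    double* p = demo_getEdgeLengths(0, &n);
    std::printf("%d values, first = %g\n", n, p[0]);
}
```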
diff --git a/Source/Utils/CMakeLists.txt b/Source/Utils/CMakeLists.txt
index 26982fb91ab..3f5c46411d9 100644
--- a/Source/Utils/CMakeLists.txt
+++ b/Source/Utils/CMakeLists.txt
@@ -4,7 +4,6 @@ target_sources(WarpX
     CoarsenMR.cpp
     Interpolate.cpp
     IntervalsParser.cpp
-    MPIInitHelpers.cpp
     ParticleUtils.cpp
     RelativeCellPosition.cpp
     WarnManager.cpp
@@ -15,3 +14,8 @@ target_sources(WarpX
 )

 add_subdirectory(MsgLogger)
+
+target_sources(ablastr
+  PRIVATE
+    MPIInitHelpers.cpp
+)
diff --git a/Source/Utils/MsgLogger/MsgLogger.cpp b/Source/Utils/MsgLogger/MsgLogger.cpp
index b58cf398b67..9b310a6b786 100644
--- a/Source/Utils/MsgLogger/MsgLogger.cpp
+++ b/Source/Utils/MsgLogger/MsgLogger.cpp
@@ -251,9 +251,7 @@ Logger::collective_gather_msgs_with_counter_and_ranks() const
     // Find out who is the "gather rank" and how many messages it has
     const auto my_msgs = get_msgs();
     const auto how_many_msgs = my_msgs.size();
-    int gather_rank = 0;
-    std::int64_t gather_rank_how_many_msgs = 0;
-    std::tie(gather_rank, gather_rank_how_many_msgs) =
+    const auto [gather_rank, gather_rank_how_many_msgs] =
         find_gather_rank_and_its_msgs(how_many_msgs);

     // If the "gather rank" has zero messages there are no messages at all
@@ -273,9 +271,7 @@ Logger::collective_gather_msgs_with_counter_and_ranks() const
             m_messages, is_gather_rank);

     // Send back all the data to the "gather rank"
-    auto all_data = std::vector<char>{};
-    auto displacements = std::vector<int>{};
-    std::tie(all_data, displacements) =
+    const auto [all_data, displacements] =
         ::gather_all_data(
             package_for_gather_rank, gather_rank, m_rank);
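For comparison, a minimal standalone example of the C++17 structured-bindings idiom applied in this hunk: the bound names are declared, `const`-qualified and initialized in one statement, with no default-constructed temporaries as `std::tie` required (illustrative only, not WarpX code):

```
// std::tie (pre-C++17) vs. structured bindings (C++17).
#include <cstdio>
#include <tuple>

static std::tuple<int, long> find_rank_and_count () { return {3, 42L}; }

int main ()
{
    // pre-C++17 pattern (what the removed lines did):
    int rank = 0;
    long count = 0;
    std::tie(rank, count) = find_rank_and_count();

    // C++17 pattern (what the added lines do):
    const auto [rank17, count17] = find_rank_and_count();

    std::printf("%d %ld / %d %ld\n", rank, count, rank17, count17);
}
```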
diff --git a/Source/Utils/MsgLogger/MsgLoggerSerialization.H b/Source/Utils/MsgLogger/MsgLoggerSerialization.H
index fba8bb0d13c..ed642a1487f 100644
--- a/Source/Utils/MsgLogger/MsgLoggerSerialization.H
+++ b/Source/Utils/MsgLogger/MsgLoggerSerialization.H
@@ -22,7 +22,6 @@ namespace MsgLogger{
     * This function transforms a variable of type T into a vector of chars holding its
     * byte representation and it appends this vector at the end of an
     * existing vector of chars. T must be either a trivially copyable type or an std::string
-    * (see specialization)
     *
     * @tparam T the variable type
     * @param[in] val a variable of type T to be serialized
     * @param[in, out] vec a reference to the vector to which the byte representation of val is appended
     */
     template <typename T>
     void put_in(const T& val, std::vector<char>& vec)
     {
-        static_assert(std::is_trivially_copyable<T>(),
-            "Cannot serialize non-trivially copyable types, except std::string.");
-
-        const auto* ptr_val = reinterpret_cast<const char*>(&val);
-        vec.insert(vec.end(), ptr_val, ptr_val+sizeof(T));
-    }
-
-    /**
-    * This function transforms a string into a vector of chars holding its
-    * byte representation and it appends this vector at the end of an
-    * existing vector of chars (specialization of put_in).
-    *
-    * @param[in] val a std::string to be serialized
-    * @param[in, out] vec a reference to the vector to which the byte representation of val is appended
-    */
-    template <>
-    inline void put_in (const std::string& val, std::vector<char>& vec)
-    {
-        const char* c_str = val.c_str();
-        const auto length = static_cast<int>(val.size());
-
-        put_in(length, vec);
-        vec.insert(vec.end(), c_str, c_str+length);
+        if constexpr (std::is_same<T, std::string>()){
+            const char* c_str = val.c_str();
+            const auto length = static_cast<int>(val.size());
+
+            put_in(length, vec);
+            vec.insert(vec.end(), c_str, c_str+length);
+        }
+        else{
+            static_assert(std::is_trivially_copyable<T>(),
+                "Cannot serialize non-trivially copyable types, except std::string.");
+
+            const auto* ptr_val = reinterpret_cast<const char*>(&val);
+            vec.insert(vec.end(), ptr_val, ptr_val+sizeof(T));
+        }
     }

     /**
@@ -69,36 +59,26 @@ namespace MsgLogger{
     template <typename T>
     inline void put_in_vec (const std::vector<T>& val, std::vector<char>& vec)
     {
-        static_assert(std::is_trivially_copyable<T>() || std::is_same<T, std::string>(),
-            "Cannot serialize vectors of non-trivially copyable types"
-            ", except vectors of std::string.");
-
-        put_in(static_cast<int>(val.size()), vec);
-        for (const auto& el : val)
-            put_in(el, vec);
-    }
-
-    /**
-    * This function transforms an std::vector<char> into a vector of chars holding its
-    * byte representation and it appends this vector at the end of an
-    * existing vector of chars (specialization of put_in_vec).
-    *
-    * @tparam T the variable type
-    * @param[in] val a variable of type T to be serialized
-    * @param[in, out] vec a reference to the vector to which the byte representation of val is appended
-    */
-    template <>
-    inline void put_in_vec (const std::vector<char>& val, std::vector<char>& vec)
-    {
-        put_in(static_cast<int>(val.size()), vec);
-        vec.insert(vec.end(), val.begin(), val.end());
+        if constexpr (std::is_same<T, char>()){
+            put_in(static_cast<int>(val.size()), vec);
+            vec.insert(vec.end(), val.begin(), val.end());
+        }
+        else{
+            static_assert(std::is_trivially_copyable<T>() || std::is_same<T, std::string>(),
+                "Cannot serialize vectors of non-trivially copyable types"
+                ", except vectors of std::string.");
+
+            put_in(static_cast<int>(val.size()), vec);
+            for (const auto& el : val)
+                put_in(el, vec);
+        }
     }

     /**
     * This function extracts a variable of type T from a byte vector, at the position
     * given by a std::vector<char> iterator. The iterator is then advanced according to
     * the number of bytes read from the byte vector. T must be either a trivially copyable type
-    * or an std::string (see specialization below).
+    * or an std::string.
     *
     * @tparam T the variable type (must be trivially copyable)
     * @param[in, out] it the iterator to a byte vector
     * @return the variable extracted from the byte array
     */
     template <typename T>
     T get_out(std::vector<char>::const_iterator& it)
     {
-        static_assert(std::is_trivially_copyable<T>(),
-            "Cannot extract non-trivially copyable types from char vectors,"
-            " with the exception of std::string.");
-
-        auto temp = std::array<char, sizeof(T)>{};
-        std::copy(it, it + sizeof(T), temp.begin());
-        it += sizeof(T);
-        T res;
-        std::memcpy(&res, temp.data(), sizeof(T));
+        if constexpr (std::is_same<T, std::string>()){
+            const auto length = get_out<int> (it);
+            const auto str = std::string{it, it+length};
+            it += length;
+
+            return str;
+        }
+        else{
+            static_assert(std::is_trivially_copyable<T>(),
+                "Cannot extract non-trivially copyable types from char vectors,"
+                " with the exception of std::string.");
+
+            auto temp = std::array<char, sizeof(T)>{};
+            std::copy(it, it + sizeof(T), temp.begin());
+            it += sizeof(T);
+            T res;
+            std::memcpy(&res, temp.data(), sizeof(T));
             return res;
-    }
-
-    /**
-    * This function extracts an std::string from a byte vector, at the position
-    * given by a std::vector<char> iterator. The iterator is then advanced according to
-    * the number of bytes read from the byte vector. This is a specialization of
-    * get_out
-    *
-    * @param[in, out] it the iterator to a byte vector
-    * @return the std::string extracted from the byte array
-    */
-    template<>
-    inline std::string get_out (std::vector<char>::const_iterator& it)
-    {
-        const auto length = get_out<int> (it);
-        const auto str = std::string{it, it+length};
-        it += length;
-
-        return str;
+        }
     }

     /**
@@ -152,37 +122,28 @@ namespace MsgLogger{
     template <typename T>
     inline std::vector<T> get_out_vec (std::vector<char>::const_iterator& it)
     {
-        static_assert(std::is_trivially_copyable<T>() || std::is_same<T, std::string>(),
-            "Cannot extract non-trivially copyable types from char vectors,"
-            " with the exception of std::string.");
-
-        const auto length = get_out<int> (it);
-        std::vector<T> res(length);
-        for (int i = 0; i < length; ++i)
-            res[i] = get_out<T>(it);
-
-        return res;
+        if constexpr (std::is_same<T, char>()){
+            const auto length = get_out<int> (it);
+            std::vector<char> res(length);
+            std::copy(it, it+length, res.begin());
+            it += length;
+
+            return res;
+        }
+        else
+        {
+            static_assert(std::is_trivially_copyable<T>() || std::is_same<T, std::string>(),
+                "Cannot extract non-trivially copyable types from char vectors,"
+                " with the exception of std::string.");
+
+            const auto length = get_out<int> (it);
+            std::vector<T> res(length);
+            for (int i = 0; i < length; ++i)
+                res[i] = get_out<T>(it);
+
+            return res;
+        }
     }
-
-    /**
-    * This function extracts an std::vector<char> from a byte vector, at the position
-    * given by a std::vector<char> iterator. The iterator is then advanced according to
-    * the number of bytes read from the byte vector. This is a specialization of get_out_vec.
-    *
-    * @param[in, out] it the iterator to a byte vector
-    * @return the variable extracted from the byte array
-    */
-    template<>
-    inline std::vector<char> get_out_vec (std::vector<char>::const_iterator& it)
-    {
-        const auto length = get_out<int> (it);
-        std::vector<char> res(length);
-        std::copy(it, it+length, res.begin());
-        it += length;
-
-        return res;
-    }
-
 }
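A self-contained sketch of the length-prefixed byte format these helpers implement (illustrative only; the hypothetical `put_int`/`put_string` below mimic, but are not, the `put_in`/`get_out` functions above). A string is stored as an `int` length followed by its raw characters, so heterogeneous data can share one byte vector:

```
// Length-prefixed serialization round trip (illustrative sketch).
#include <cstdio>
#include <cstring>
#include <string>
#include <vector>

static void put_int (int v, std::vector<char>& vec)
{
    const char* p = reinterpret_cast<const char*>(&v);
    vec.insert(vec.end(), p, p + sizeof(int));
}

static void put_string (const std::string& s, std::vector<char>& vec)
{
    put_int(static_cast<int>(s.size()), vec);   // length prefix
    vec.insert(vec.end(), s.begin(), s.end());  // raw payload
}

int main ()
{
    std::vector<char> buf;
    put_string("warning: unused parameter", buf);

    // read back: length first, then the characters
    auto it = buf.cbegin();
    int len = 0;
    std::memcpy(&len, &*it, sizeof(int));
    it += sizeof(int);
    const std::string msg{it, it + len};

    std::printf("%s (%d bytes)\n", msg.c_str(), len);
}
```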
  *
@@ -8,41 +8,17 @@
 #ifndef WARPX_PROFILERWRAPPER_H_
 #define WARPX_PROFILERWRAPPER_H_

-#include <AMReX_BLProfiler.H>
-#include <AMReX_Extension.H>
-#include <AMReX_GpuDevice.H>
+#include "WarpX.H"
+#include "ABLASTR/ProfilerWrapper.H"

-template<int detail_level>
-AMREX_FORCE_INLINE
-void doDeviceSynchronize ()
-{
-    if ( WarpX::do_device_synchronize >= detail_level )
-        amrex::Gpu::synchronize();
-}
-
-// Note that objects are destructed in the reverse order of declaration
-template<int detail_level>
-struct synchronizeOnDestruct {
-    AMREX_FORCE_INLINE
-    ~synchronizeOnDestruct () {
-        doDeviceSynchronize<detail_level>();
-    }
-};

 // `BL_PROFILE_PASTE(SYNC_SCOPE_, __COUNTER__)` and `SYNC_V_##vname` used to make unique names for
 // synchronizeOnDestruct objects, like `SYNC_SCOPE_0` and `SYNC_V_pmain`
-#define WARPX_PROFILE(fname) doDeviceSynchronize<1>(); BL_PROFILE(fname); synchronizeOnDestruct<1> BL_PROFILE_PASTE(SYNC_SCOPE_, __COUNTER__){}
-#define WARPX_PROFILE_VAR(fname, vname) doDeviceSynchronize<1>(); BL_PROFILE_VAR(fname, vname); synchronizeOnDestruct<1> SYNC_V_##vname{}
-#define WARPX_PROFILE_VAR_NS(fname, vname) BL_PROFILE_VAR_NS(fname, vname); synchronizeOnDestruct<1> SYNC_V_##vname{}
-#define WARPX_PROFILE_VAR_START(vname) doDeviceSynchronize<1>(); BL_PROFILE_VAR_START(vname)
-#define WARPX_PROFILE_VAR_STOP(vname) doDeviceSynchronize<1>(); BL_PROFILE_VAR_STOP(vname)
-#define WARPX_PROFILE_REGION(rname) doDeviceSynchronize<1>(); BL_PROFILE_REGION(rname); synchronizeOnDestruct<1> BL_PROFILE_PASTE(SYNC_R_, __COUNTER__){}
-
-#define WARPX_DETAIL_PROFILE(fname) doDeviceSynchronize<2>(); BL_PROFILE(fname); synchronizeOnDestruct<2> BL_PROFILE_PASTE(SYNC_SCOPE_, __COUNTER__){}
-#define WARPX_DETAIL_PROFILE_VAR(fname, vname) doDeviceSynchronize<2>(); BL_PROFILE_VAR(fname, vname); synchronizeOnDestruct<2> SYNC_V_##vname{}
-#define WARPX_DETAIL_PROFILE_VAR_NS(fname, vname) BL_PROFILE_VAR_NS(fname, vname); synchronizeOnDestruct<2> SYNC_V_##vname{}
-#define WARPX_DETAIL_PROFILE_VAR_START(vname) doDeviceSynchronize<2>(); BL_PROFILE_VAR_START(vname)
-#define WARPX_DETAIL_PROFILE_VAR_STOP(vname) doDeviceSynchronize<2>(); BL_PROFILE_VAR_STOP(vname)
-#define WARPX_DETAIL_PROFILE_REGION(rname) doDeviceSynchronize<2>(); BL_PROFILE_REGION(rname); synchronizeOnDestruct<2> BL_PROFILE_PASTE(SYNC_R_, __COUNTER__){}
+#define WARPX_PROFILE(fname) ABLASTR_PROFILE(fname, WarpX::do_device_synchronize)
+#define WARPX_PROFILE_VAR(fname, vname) ABLASTR_PROFILE_VAR(fname, vname, WarpX::do_device_synchronize)
+#define WARPX_PROFILE_VAR_NS(fname, vname) ABLASTR_PROFILE_VAR_NS(fname, vname, WarpX::do_device_synchronize)
+#define WARPX_PROFILE_VAR_START(vname) ABLASTR_PROFILE_VAR_START(vname, WarpX::do_device_synchronize)
+#define WARPX_PROFILE_VAR_STOP(vname) ABLASTR_PROFILE_VAR_STOP(vname, WarpX::do_device_synchronize)
+#define WARPX_PROFILE_REGION(rname) ABLASTR_PROFILE_REGION(rname, WarpX::do_device_synchronize)

 #endif // WARPX_PROFILERWRAPPER_H_
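The removed `synchronizeOnDestruct` code (whose behavior the ABLASTR macros take over) is an RAII guard: an object declared at the top of a scope synchronizes the device when the scope exits, exactly where a profiler timer needs to stop so it is not skewed by still-running GPU kernels. A minimal standalone sketch of the idea, with a `puts` standing in for `amrex::Gpu::synchronize()` (illustrative names):

```
// RAII scope-exit synchronization (illustrative sketch).
#include <cstdio>

struct SynchronizeOnDestruct {
    ~SynchronizeOnDestruct () {
        // stand-in for amrex::Gpu::synchronize()
        std::puts("device synchronized");
    }
};

int main ()
{
    {
        SynchronizeOnDestruct sync_guard;   // like SYNC_SCOPE_0 above
        std::puts("timed region runs");
    }   // destructor fires here, before the timer result is read
    std::puts("after region");
}
```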
diff --git a/Source/WarpX.H b/Source/WarpX.H
index 05d8d33598b..c2fd5a5d81f 100644
--- a/Source/WarpX.H
+++ b/Source/WarpX.H
@@ -276,7 +276,7 @@ public:
     static int do_multi_J_n_depositions;
     static int J_linear_in_time;

-    static int do_device_synchronize;
+    static bool do_device_synchronize;
     static bool safe_guard_cells;

     // buffers
@@ -321,6 +321,9 @@ public:
     amrex::MultiFab * get_pointer_F_cp  (int lev) const { return F_cp[lev].get(); }
     amrex::MultiFab * get_pointer_G_cp  (int lev) const { return G_cp[lev].get(); }

+    amrex::MultiFab * get_pointer_edge_lengths (int lev, int direction) const { return m_edge_lengths[lev][direction].get(); }
+    amrex::MultiFab * get_pointer_face_areas (int lev, int direction) const { return m_face_areas[lev][direction].get(); }
+
     const amrex::MultiFab& getcurrent (int lev, int direction) {return *current_fp[lev][direction];}
     const amrex::MultiFab& getEfield (int lev, int direction) {return *Efield_aux[lev][direction];}
     const amrex::MultiFab& getBfield (int lev, int direction) {return *Bfield_aux[lev][direction];}
diff --git a/Source/WarpX.cpp b/Source/WarpX.cpp
index 8ab57499d38..8ee910f81a2 100644
--- a/Source/WarpX.cpp
+++ b/Source/WarpX.cpp
@@ -194,9 +194,9 @@
 int WarpX::n_current_deposition_buffer = -1;
 int WarpX::do_nodal = false;

 #ifdef AMREX_USE_GPU
-int WarpX::do_device_synchronize = 1;
+bool WarpX::do_device_synchronize = true;
 #else
-int WarpX::do_device_synchronize = 0;
+bool WarpX::do_device_synchronize = false;
 #endif

 WarpX* WarpX::m_instance = nullptr;
@@ -745,11 +745,6 @@ WarpX::ReadParameters ()
     pp_warpx.query("do_pml_j_damping", do_pml_j_damping);
     pp_warpx.query("do_pml_in_domain", do_pml_in_domain);

-    if (do_multi_J && isAnyBoundaryPML())
-    {
-        amrex::Abort("Multi-J algorithm not implemented with PMLs");
-    }
-
     // Default values of WarpX::do_pml_dive_cleaning and WarpX::do_pml_divb_cleaning:
     // false for FDTD solver, true for PSATD solver.
     if (maxwell_solver_id != MaxwellSolverAlgo::PSATD)
@@ -1086,22 +1081,37 @@ WarpX::ReadParameters ()
             "\nVay current deposition does not guarantee charge conservation with local FFTs over guard cells:\n"
             "set psatd.periodic_single_box_fft=1 too, in order to guarantee charge conservation");

+        // Auxiliary: boosted_frame = true if warpx.gamma_boost is set in the inputs
+        amrex::ParmParse pp_warpx("warpx");
+        const bool boosted_frame = pp_warpx.query("gamma_boost", gamma_boost);
+
         // Check whether the default Galilean velocity should be used
         bool use_default_v_galilean = false;
         pp_psatd.query("use_default_v_galilean", use_default_v_galilean);
-        if (use_default_v_galilean) {
+        if (use_default_v_galilean == true && boosted_frame == true)
+        {
             m_v_galilean[2] = -std::sqrt(1._rt - 1._rt / (gamma_boost * gamma_boost));
-        } else {
+        }
+        else if (use_default_v_galilean == true && boosted_frame == false)
+        {
+            amrex::Abort("psatd.use_default_v_galilean = 1 can be used only if warpx.gamma_boost is also set");
+        }
+        else
+        {
             queryArrWithParser(pp_psatd, "v_galilean", m_v_galilean, 0, 3);
         }

         // Check whether the default comoving velocity should be used
         bool use_default_v_comoving = false;
         pp_psatd.query("use_default_v_comoving", use_default_v_comoving);
-        if (use_default_v_comoving)
+        if (use_default_v_comoving == true && boosted_frame == true)
         {
             m_v_comoving[2] = -std::sqrt(1._rt - 1._rt / (gamma_boost * gamma_boost));
         }
+        else if (use_default_v_comoving == true && boosted_frame == false)
+        {
+            amrex::Abort("psatd.use_default_v_comoving = 1 can be used only if warpx.gamma_boost is also set");
+        }
         else
         {
             queryArrWithParser(pp_psatd, "v_comoving", m_v_comoving, 0, 3);
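For context on the default being validated above: with a boost factor `gamma_boost`, the expression `-std::sqrt(1 - 1/gamma_boost^2)` is the normalized velocity of the plasma drifting backwards through the boosted-frame box, so the Galilean grid co-moves with it. A short numeric check (illustrative only):

```
// Default Galilean velocity for gamma_boost = 10 (illustrative check).
#include <cmath>
#include <cstdio>

int main ()
{
    const double gamma_boost = 10.0;
    const double v_over_c = -std::sqrt(1.0 - 1.0/(gamma_boost*gamma_boost));
    std::printf("v_galilean/c = %.6f\n", v_over_c);   // -0.994987
}
```

This is also why the new branches above abort when `warpx.gamma_boost` is not set: without a boost factor the default value is meaningless.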
@@ -1422,14 +1432,6 @@ WarpX::ClearLevel (int lev)
 void
 WarpX::AllocLevelData (int lev, const BoxArray& ba, const DistributionMapping& dm)
 {
-#ifdef AMREX_USE_EB
-    m_field_factory[lev] = amrex::makeEBFabFactory(Geom(lev), ba, dm,
-                                                   {1,1,1}, // Not clear how many ghost cells we need yet
-                                                   amrex::EBSupport::full);
-#else
-    m_field_factory[lev] = std::make_unique<FabFactory<FArrayBox>>();
-#endif
-
     bool aux_is_nodal = (field_gathering_algo == GatheringAlgo::MomentumConserving);

 #if (AMREX_SPACEDIM == 1)
@@ -1461,6 +1463,17 @@ WarpX::AllocLevelData (int lev, const BoxArray& ba, const DistributionMapping& d
                         WarpX::fft_do_time_averaging,
                         this->refRatio());

+
+#ifdef AMREX_USE_EB
+    int max_guard = guard_cells.ng_FieldSolver.max();
+    m_field_factory[lev] = amrex::makeEBFabFactory(Geom(lev), ba, dm,
+                                                   {max_guard, max_guard, max_guard},
+                                                   amrex::EBSupport::full);
+#else
+    m_field_factory[lev] = std::make_unique<FabFactory<FArrayBox>>();
+#endif
+
+
     if (mypc->nSpeciesDepositOnMainGrid() && n_current_deposition_buffer == 0) {
         n_current_deposition_buffer = 1;
         // This forces the allocation of buffers and allows the code associated
@@ -1618,42 +1631,42 @@ WarpX::AllocLevelMFs (int lev, const BoxArray& ba, const DistributionMapping& dm
     if(WarpX::maxwell_solver_id == MaxwellSolverAlgo::Yee ||
        WarpX::maxwell_solver_id == MaxwellSolverAlgo::CKC ||
        WarpX::maxwell_solver_id == MaxwellSolverAlgo::ECT) {
-        m_edge_lengths[lev][0] = std::make_unique<MultiFab>(amrex::convert(ba, Ex_nodal_flag), dm, ncomps, ngE, tag("m_edge_lengths[x]"));
-        m_edge_lengths[lev][1] = std::make_unique<MultiFab>(amrex::convert(ba, Ey_nodal_flag), dm, ncomps, ngE, tag("m_edge_lengths[y]"));
-        m_edge_lengths[lev][2] = std::make_unique<MultiFab>(amrex::convert(ba, Ez_nodal_flag), dm, ncomps, ngE, tag("m_edge_lengths[z]"));
-        m_face_areas[lev][0] = std::make_unique<MultiFab>(amrex::convert(ba, Bx_nodal_flag), dm, ncomps, ngE, tag("m_face_areas[x]"));
-        m_face_areas[lev][1] = std::make_unique<MultiFab>(amrex::convert(ba, By_nodal_flag), dm, ncomps, ngE, tag("m_face_areas[y]"));
-        m_face_areas[lev][2] = std::make_unique<MultiFab>(amrex::convert(ba, Bz_nodal_flag), dm, ncomps, ngE, tag("m_face_areas[z]"));
+        m_edge_lengths[lev][0] = std::make_unique<MultiFab>(amrex::convert(ba, Ex_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_edge_lengths[x]"));
+        m_edge_lengths[lev][1] = std::make_unique<MultiFab>(amrex::convert(ba, Ey_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_edge_lengths[y]"));
+        m_edge_lengths[lev][2] = std::make_unique<MultiFab>(amrex::convert(ba, Ez_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_edge_lengths[z]"));
+        m_face_areas[lev][0] = std::make_unique<MultiFab>(amrex::convert(ba, Bx_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_face_areas[x]"));
+        m_face_areas[lev][1] = std::make_unique<MultiFab>(amrex::convert(ba, By_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_face_areas[y]"));
+        m_face_areas[lev][2] = std::make_unique<MultiFab>(amrex::convert(ba, Bz_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_face_areas[z]"));
     }
     constexpr int nc_ls = 1;
     constexpr int ng_ls = 2;
     m_distance_to_eb[lev] = std::make_unique<MultiFab>(amrex::convert(ba, IntVect::TheNodeVector()), dm, nc_ls, ng_ls, tag("m_distance_to_eb"));

     if(WarpX::maxwell_solver_id == MaxwellSolverAlgo::ECT) {
-        m_edge_lengths[lev][0] = std::make_unique<MultiFab>(amrex::convert(ba, Ex_nodal_flag), dm, ncomps, ngE, tag("m_edge_lengths[x]"));
-        m_edge_lengths[lev][1] = std::make_unique<MultiFab>(amrex::convert(ba, Ey_nodal_flag), dm, ncomps, ngE, tag("m_edge_lengths[y]"));
-        m_edge_lengths[lev][2] = std::make_unique<MultiFab>(amrex::convert(ba, Ez_nodal_flag), dm, ncomps, ngE, tag("m_edge_lengths[z]"));
-        m_face_areas[lev][0] = std::make_unique<MultiFab>(amrex::convert(ba, Bx_nodal_flag), dm, ncomps, ngE, tag("m_face_areas[x]"));
-        m_face_areas[lev][1] = std::make_unique<MultiFab>(amrex::convert(ba, By_nodal_flag), dm, ncomps, ngE, tag("m_face_areas[y]"));
-        m_face_areas[lev][2] = std::make_unique<MultiFab>(amrex::convert(ba, Bz_nodal_flag), dm, ncomps, ngE, tag("m_face_areas[z]"));
-        m_flag_info_face[lev][0] = std::make_unique<iMultiFab>(amrex::convert(ba, Bx_nodal_flag), dm, ncomps, ngE, tag("m_flag_info_face[x]"));
-        m_flag_info_face[lev][1] = std::make_unique<iMultiFab>(amrex::convert(ba, By_nodal_flag), dm, ncomps, ngE, tag("m_flag_info_face[y]"));
-        m_flag_info_face[lev][2] = std::make_unique<iMultiFab>(amrex::convert(ba, Bz_nodal_flag), dm, ncomps, ngE, tag("m_flag_info_face[z]"));
-        m_flag_ext_face[lev][0] = std::make_unique<iMultiFab>(amrex::convert(ba, Bx_nodal_flag), dm, ncomps, ngE, tag("m_flag_ext_face[x]"));
-        m_flag_ext_face[lev][1] = std::make_unique<iMultiFab>(amrex::convert(ba, By_nodal_flag), dm, ncomps, ngE, tag("m_flag_ext_face[y]"));
-        m_flag_ext_face[lev][2] = std::make_unique<iMultiFab>(amrex::convert(ba, Bz_nodal_flag), dm, ncomps, ngE, tag("m_flag_ext_face[z]"));
-        m_area_mod[lev][0] = std::make_unique<MultiFab>(amrex::convert(ba, Bx_nodal_flag), dm, ncomps, ngE, tag("m_area_mod[x]"));
-        m_area_mod[lev][1] = std::make_unique<MultiFab>(amrex::convert(ba, By_nodal_flag), dm, ncomps, ngE, tag("m_area_mod[y]"));
-        m_area_mod[lev][2] = std::make_unique<MultiFab>(amrex::convert(ba, Bz_nodal_flag), dm, ncomps, ngE, tag("m_area_mod[z]"));
+        m_edge_lengths[lev][0] = std::make_unique<MultiFab>(amrex::convert(ba, Ex_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_edge_lengths[x]"));
+        m_edge_lengths[lev][1] = std::make_unique<MultiFab>(amrex::convert(ba, Ey_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_edge_lengths[y]"));
+        m_edge_lengths[lev][2] = std::make_unique<MultiFab>(amrex::convert(ba, Ez_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_edge_lengths[z]"));
+        m_face_areas[lev][0] = std::make_unique<MultiFab>(amrex::convert(ba, Bx_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_face_areas[x]"));
+        m_face_areas[lev][1] = std::make_unique<MultiFab>(amrex::convert(ba, By_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_face_areas[y]"));
+        m_face_areas[lev][2] = std::make_unique<MultiFab>(amrex::convert(ba, Bz_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_face_areas[z]"));
+        m_flag_info_face[lev][0] = std::make_unique<iMultiFab>(amrex::convert(ba, Bx_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_flag_info_face[x]"));
+        m_flag_info_face[lev][1] = std::make_unique<iMultiFab>(amrex::convert(ba, By_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_flag_info_face[y]"));
+        m_flag_info_face[lev][2] = std::make_unique<iMultiFab>(amrex::convert(ba, Bz_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_flag_info_face[z]"));
+        m_flag_ext_face[lev][0] = std::make_unique<iMultiFab>(amrex::convert(ba, Bx_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_flag_ext_face[x]"));
+        m_flag_ext_face[lev][1] = std::make_unique<iMultiFab>(amrex::convert(ba, By_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_flag_ext_face[y]"));
+        m_flag_ext_face[lev][2] = std::make_unique<iMultiFab>(amrex::convert(ba, Bz_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_flag_ext_face[z]"));
+        m_area_mod[lev][0] = std::make_unique<MultiFab>(amrex::convert(ba, Bx_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_area_mod[x]"));
+        m_area_mod[lev][1] = std::make_unique<MultiFab>(amrex::convert(ba, By_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_area_mod[y]"));
+        m_area_mod[lev][2] = std::make_unique<MultiFab>(amrex::convert(ba, Bz_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("m_area_mod[z]"));
         m_borrowing[lev][0] = std::make_unique<amrex::LayoutData<FaceInfoBox>>(amrex::convert(ba, Bx_nodal_flag), dm);
         m_borrowing[lev][1] = std::make_unique<amrex::LayoutData<FaceInfoBox>>(amrex::convert(ba, By_nodal_flag), dm);
         m_borrowing[lev][2] = std::make_unique<amrex::LayoutData<FaceInfoBox>>(amrex::convert(ba, Bz_nodal_flag), dm);
-        Venl[lev][0] = std::make_unique<MultiFab>(amrex::convert(ba, Bx_nodal_flag), dm, ncomps, ngE, tag("Venl[x]"));
-        Venl[lev][1] = std::make_unique<MultiFab>(amrex::convert(ba, By_nodal_flag), dm, ncomps, ngE, tag("Venl[y]"));
-        Venl[lev][2] = std::make_unique<MultiFab>(amrex::convert(ba, Bz_nodal_flag), dm, ncomps, ngE, tag("Venl[z]"));
+        Venl[lev][0] = std::make_unique<MultiFab>(amrex::convert(ba, Bx_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("Venl[x]"));
+        Venl[lev][1] = std::make_unique<MultiFab>(amrex::convert(ba, By_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("Venl[y]"));
+        Venl[lev][2] = std::make_unique<MultiFab>(amrex::convert(ba, Bz_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("Venl[z]"));

-        ECTRhofield[lev][0] = std::make_unique<MultiFab>(amrex::convert(ba, Bx_nodal_flag), dm, ncomps, ngE, tag("ECTRhofield[x]"));
-        ECTRhofield[lev][1] = std::make_unique<MultiFab>(amrex::convert(ba, By_nodal_flag), dm, ncomps, ngE, tag("ECTRhofield[y]"));
-        ECTRhofield[lev][2] = std::make_unique<MultiFab>(amrex::convert(ba, Bz_nodal_flag), dm, ncomps, ngE, tag("ECTRhofield[z]"));
+        ECTRhofield[lev][0] = std::make_unique<MultiFab>(amrex::convert(ba, Bx_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("ECTRhofield[x]"));
+        ECTRhofield[lev][1] = std::make_unique<MultiFab>(amrex::convert(ba, By_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("ECTRhofield[y]"));
+        ECTRhofield[lev][2] = std::make_unique<MultiFab>(amrex::convert(ba, Bz_nodal_flag), dm, ncomps, guard_cells.ng_FieldSolver, tag("ECTRhofield[z]"));
         ECTRhofield[lev][0]->setVal(0.);
         ECTRhofield[lev][1]->setVal(0.);
         ECTRhofield[lev][2]->setVal(0.);
diff --git a/Source/main.cpp b/Source/main.cpp
index 185746a4db7..d6939a6f189 100644
--- a/Source/main.cpp
+++ b/Source/main.cpp
@@ -55,11 +55,11 @@ int main(int argc, char* argv[])
     CheckGriddingForRZSpectral();
 #endif

-    WARPX_PROFILE_VAR("main()", pmain);
+    {
+        WARPX_PROFILE_VAR("main()", pmain);

-    const auto strt_total = static_cast<Real>(amrex::second());
+        const auto strt_total = static_cast<Real>(amrex::second());

-    {
         WarpX warpx;

         warpx.InitData();
@@ -73,9 +73,9 @@ int main(int argc, char* argv[])
             ParallelDescriptor::ReduceRealMax(end_total, ParallelDescriptor::IOProcessorNumber());
             Print() << "Total Time : " << end_total << '\n';
         }
-    }

-    WARPX_PROFILE_VAR_STOP(pmain);
+        WARPX_PROFILE_VAR_STOP(pmain);
+    }

 #if defined(AMREX_USE_HIP) && defined(WARPX_USE_PSATD)
     rocfft_cleanup();
diff --git a/cmake/WarpXFunctions.cmake b/cmake/WarpXFunctions.cmake
index 0d17d38eccc..7db6c3bda9d 100644
--- a/cmake/WarpXFunctions.cmake
+++ b/cmake/WarpXFunctions.cmake
@@ -1,3 +1,31 @@
+# Set C++17 for the whole build if not otherwise requested
+#
+# This is the easiest way to push up a C++17 requirement for AMReX, PICSAR and
+# openPMD-api until they increase their requirement.
+#
+macro(set_cxx17_superbuild)
+    if(NOT DEFINED CMAKE_CXX_STANDARD)
+        set(CMAKE_CXX_STANDARD 17)
+    endif()
+    if(NOT DEFINED CMAKE_CXX_EXTENSIONS)
+        set(CMAKE_CXX_EXTENSIONS OFF)
+    endif()
+    if(NOT DEFINED CMAKE_CXX_STANDARD_REQUIRED)
+        set(CMAKE_CXX_STANDARD_REQUIRED ON)
+    endif()
+
+    if(NOT DEFINED CMAKE_CUDA_STANDARD)
+        set(CMAKE_CUDA_STANDARD 17)
+    endif()
+    if(NOT DEFINED CMAKE_CUDA_EXTENSIONS)
+        set(CMAKE_CUDA_EXTENSIONS OFF)
+    endif()
+    if(NOT DEFINED CMAKE_CUDA_STANDARD_REQUIRED)
+        set(CMAKE_CUDA_STANDARD_REQUIRED ON)
+    endif()
+endmacro()
+
+
 # find the CCache tool and use it if found
 #
 macro(set_ccache)
@@ -130,7 +158,7 @@ endfunction()

 # Take an <imported_target> and expose it as INTERFACE target with
 # WarpX::thirdparty::<propagated_name> naming and SYSTEM includes.
# -function(make_third_party_includes_system imported_target propagated_name) +function(warpx_make_third_party_includes_system imported_target propagated_name) add_library(WarpX::thirdparty::${propagated_name} INTERFACE IMPORTED) target_link_libraries(WarpX::thirdparty::${propagated_name} INTERFACE ${imported_target}) diff --git a/cmake/dependencies/AMReX.cmake b/cmake/dependencies/AMReX.cmake index 7996a02a1fc..492d0774d61 100644 --- a/cmake/dependencies/AMReX.cmake +++ b/cmake/dependencies/AMReX.cmake @@ -239,7 +239,7 @@ set(WarpX_amrex_src "" set(WarpX_amrex_repo "https://github.com/AMReX-Codes/amrex.git" CACHE STRING "Repository URI to pull and build AMReX from if(WarpX_amrex_internal)") -set(WarpX_amrex_branch "60fe729fe2ba65ebffc88c0af18743c254d3992c" +set(WarpX_amrex_branch "9373709e34b23add981551d5446bc4810fd3b688" CACHE STRING "Repository branch for WarpX_amrex_repo if(WarpX_amrex_internal)") diff --git a/cmake/dependencies/FFT.cmake b/cmake/dependencies/FFT.cmake index 404a3f011a4..56dc396b31f 100644 --- a/cmake/dependencies/FFT.cmake +++ b/cmake/dependencies/FFT.cmake @@ -103,14 +103,14 @@ if(WarpX_PSATD) # create an IMPORTED target: WarpX::thirdparty::FFT if(WarpX_COMPUTE STREQUAL CUDA) # CUDA_ADD_CUFFT_TO_TARGET(WarpX::thirdparty::FFT) - make_third_party_includes_system(cufft FFT) + warpx_make_third_party_includes_system(cufft FFT) elseif(WarpX_COMPUTE STREQUAL HIP) - make_third_party_includes_system(roc::rocfft FFT) + warpx_make_third_party_includes_system(roc::rocfft FFT) else() if(WarpX_FFTW_SEARCH STREQUAL CMAKE) - make_third_party_includes_system(FFTW3::fftw3${HFFTWp} FFT) + warpx_make_third_party_includes_system(FFTW3::fftw3${HFFTWp} FFT) else() - make_third_party_includes_system(PkgConfig::fftw3${HFFTWp} FFT) + warpx_make_third_party_includes_system(PkgConfig::fftw3${HFFTWp} FFT) endif() if(WarpX_COMPUTE STREQUAL OMP) if(WarpX_FFTW_IGNORE_OMP) diff --git a/mewarpx/changelog.csv b/mewarpx/changelog.csv index ac876d47f49..9a4d8999353 100644 --- a/mewarpx/changelog.csv +++ b/mewarpx/changelog.csv @@ -1,5 +1,11 @@ Version, Physics version, Date, List of changes -2.0.8, 2, In progress, " +2.0.9, 2, In progress, " + +**Other changes**: + +- Merge ``upstream/development`` (git hash ``9685a3d``) into memaster. 
+" +2.0.8, 2, 12/14/2021, " **Other changes**: diff --git a/mewarpx/mewarpx/_version.py b/mewarpx/mewarpx/_version.py index 8b8c702040c..09116178bf4 100644 --- a/mewarpx/mewarpx/_version.py +++ b/mewarpx/mewarpx/_version.py @@ -1,6 +1,6 @@ # One and only one place to store the version info # https://stackoverflow.com/questions/458550/standard-way-to-embed-version-into-python-package -__version_info__ = (2, 0, 8) +__version_info__ = (2, 0, 9) __version__ = '.'.join([str(x) for x in __version_info__]) # One and only one place to store the Physics version diff --git a/pyproject.toml b/pyproject.toml index 86da50654e6..112927f6cdc 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -2,6 +2,6 @@ requires = [ "setuptools>=42", "wheel", - "cmake>=3.15.0,<4.0.0" + "cmake>=3.18.0,<4.0.0" ] build-backend = "setuptools.build_meta" diff --git a/requirements.txt b/requirements.txt index aad3798f6bc..489be8c3ff8 100644 --- a/requirements.txt +++ b/requirements.txt @@ -3,7 +3,7 @@ numpy~=1.15 periodictable~=1.5 # PICMI -picmistandard==0.0.16 +picmistandard==0.0.18 # for development against an unreleased PICMI version, use: #picmistandard @ git+https://github.com/picmi-standard/picmi.git#subdirectory=PICMI_Python diff --git a/run_test.sh b/run_test.sh index 2df2806be7d..1b6ffbeb174 100755 --- a/run_test.sh +++ b/run_test.sh @@ -57,9 +57,16 @@ ln -s ${tmp_dir} test_dir cd test_dir echo "cd $PWD" +# Prepare a virtual environment +rm -rf py-venv +python3 -m venv py-venv +source py-venv/bin/activate +python3 -m pip install --upgrade pip setuptools wheel +python3 -m pip install --upgrade -r warpx/Regression/requirements.txt + # Clone PICSAR, AMReX and warpx-data git clone https://github.com/AMReX-Codes/amrex.git -cd amrex && git checkout --detach 60fe729fe2ba65ebffc88c0af18743c254d3992c && cd - +cd amrex && git checkout --detach 9373709e34b23add981551d5446bc4810fd3b688 && cd - # Use QED brach for QED tests git clone https://github.com/ECP-WarpX/picsar.git cd picsar && git checkout --detach 7b5449f92a4b30a095cc4a67f0a8b1fc69680e15 && cd - @@ -73,7 +80,7 @@ git clone https://github.com/ECP-WarpX/regression_testing.git mkdir -p rt-WarpX/WarpX-benchmarks cd warpx/Regression echo "cd $PWD" -python prepare_file_ci.py +python3 prepare_file_ci.py cp ci-tests.ini ../../rt-WarpX cp -r Checksum ../../regression_testing/ @@ -82,8 +89,13 @@ cd ../../regression_testing/ echo "cd $PWD" # run only tests specified in variable tests_arg (single test or multiple tests) if [[ ! 
 if [[ ! -z "${tests_arg}" ]]; then
-    python regtest.py ../rt-WarpX/ci-tests.ini --no_update all "${tests_run}"
+    python3 regtest.py ../rt-WarpX/ci-tests.ini --no_update all "${tests_run}"
 # run all tests (variables tests_arg and tests_run are empty)
 else
-    python regtest.py ../rt-WarpX/ci-tests.ini --no_update all
+    python3 regtest.py ../rt-WarpX/ci-tests.ini --no_update all
 fi
+
+# clean up python virtual environment
+cd ../
+echo "cd $PWD"
+deactivate
diff --git a/setup.py b/setup.py
index 25e5d6b225f..fea78791e90 100644
--- a/setup.py
+++ b/setup.py
@@ -52,7 +52,7 @@ def run(self):
             out = subprocess.check_output(['cmake', '--version'])
         except OSError:
             raise RuntimeError(
-                "CMake 3.15.0+ must be installed to build the following " +
+                "CMake 3.18.0+ must be installed to build the following " +
                 "extensions: " +
                 ", ".join(e.name for e in self.extensions))

@@ -60,8 +60,8 @@ def run(self):
             r'version\s*([\d.]+)', out.decode()
         ).group(1))

-        if cmake_version < '3.15.0':
-            raise RuntimeError("CMake >= 3.15.0 is required")
+        if cmake_version < '3.18.0':
+            raise RuntimeError("CMake >= 3.18.0 is required")

         for ext in self.extensions:
             self.build_extension(ext)
@@ -278,7 +278,7 @@ def build_extension(self, ext):
     cmdclass=cmdclass,
     # scripts=['warpx_1d', 'warpx_2d', 'warpx_3d', 'warpx_rz'],
     zip_safe=False,
-    python_requires='>=3.6, <3.10',
+    python_requires='>=3.6',
     # tests_require=['pytest'],
     install_requires=install_requires,
     # see: src/bindings/python/cli
@@ -306,6 +306,7 @@ def build_extension(self, ext):
         'Programming Language :: Python :: 3.7',
         'Programming Language :: Python :: 3.8',
         'Programming Language :: Python :: 3.9',
+        'Programming Language :: Python :: 3.10',
         ('License :: OSI Approved :: '
          'BSD License'),  # TODO: use real SPDX: BSD-3-Clause-LBNL
     ],