# Installations

These instructions describe how the compilation environments used in the course were created.

## LUMI ROCm container with hipstdpar and AdaptiveCpp

This container is built with the recipe here.
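
As a rough sketch of how such a container might be used on LUMI (`lumi-rocm-acpp.sif` is only a placeholder name for the image produced by the recipe):

```bash
# Placeholder image name; substitute the image actually built from the recipe.
singularity exec lumi-rocm-acpp.sif hipcc --version
singularity exec lumi-rocm-acpp.sif acpp --help
```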

## OneAPI on Mahti

Download the Intel oneAPI Base Toolkit:

```bash
wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/96aa5993-5b22-4a9b-91ab-da679f422594/intel-oneapi-base-toolkit-2025.0.0.885_offline.sh
```

Install:

```bash
sh ./intel-oneapi-base-toolkit-2025.0.0.885_offline.sh -a --silent --cli --eula accept --download-cache $SCRATCH/$USER/oneapi_tmp --install-dir $PROJAPPL/intel/oneapi
```

Get Codeplay's oneAPI for NVIDIA GPUs plugin:

```bash
curl -LOJ "https://developer.codeplay.com/api/v1/products/download?product=oneapi&variant=nvidia&version=2025.0.0&filters[]=12.0&filters[]=linux"
```

Install:

```bash
sh ./oneapi-for-nvidia-gpus-2025.0.0-cuda-12.0-linux.sh -y --extract-folder $SCRATCH/$USER/oneapi_tmp --install-dir $PROJAPPL/intel/oneapi
```
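
To use the installation, source the environment and compile for the CUDA backend, for example (a sketch: the `setvars.sh` path follows from the `--install-dir` above, a CUDA toolkit must be available, and `sm_80` assumes Mahti's A100 GPUs):

```bash
source $PROJAPPL/intel/oneapi/setvars.sh --include-intel-llvm
module load cuda   # the NVIDIA backend needs a CUDA toolkit in the environment
icpx -fsycl -fsycl-targets=nvptx64-nvidia-cuda -Xsycl-target-backend --cuda-gpu-arch=sm_80 hello.cpp -o hello
```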

## OneAPI on LUMI

Download the Intel oneAPI Base Toolkit:

```bash
wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/96aa5993-5b22-4a9b-91ab-da679f422594/intel-oneapi-base-toolkit-2025.0.0.885_offline.sh
```

Install:

```bash
sh ./intel-oneapi-base-toolkit-2025.0.0.885_offline.sh -a --silent --cli --eula accept --download-cache $SCRATCH/$USER/oneapi_tmp --install-dir $PROJAPPL/intel/oneapi
```

Get Codeplay's oneAPI for AMD GPUs plugin:

```bash
curl -LOJ "https://developer.codeplay.com/api/v1/products/download?product=oneapi&variant=amd&version=2025.0.0&filters[]=6.0.2&filters[]=linux"
```

Install:

```bash
sh ./oneapi-for-amd-gpus-2025.0.0-rocm-6.0.2-linux.sh -y --extract-folder $SCRATCH/$USER/oneapi_tmp --install-dir $PROJAPPL/intel/oneapi
```
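
The AMD installation can be used in a similar way, for example (a sketch: the `setvars.sh` path follows from the `--install-dir` above, ROCm must be available, and `gfx90a` is the architecture of LUMI's MI250X GPUs):

```bash
source $PROJAPPL/intel/oneapi/setvars.sh --include-intel-llvm
module load rocm   # the AMD backend needs ROCm in the environment
icpx -fsycl -fsycl-targets=amdgcn-amd-amdhsa -Xsycl-target-backend --offload-arch=gfx90a hello.cpp -o hello
```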

## AdaptiveCpp on Mahti

```bash
module purge
git clone -c feature.manyFiles=true https://github.com/spack/spack.git
cd spack
. share/spack/setup-env.sh
spack bootstrap now
spack compiler find
spack config add "modules:default:enable:[tcl]"
spack install lmod
. $(spack location -i lmod)/lmod/lmod/init/bash
. share/spack/setup-env.sh
module load gcc/10.4.0
spack compiler find
```

Edit the recipe `var/spack/repos/builtin/packages/hipsycl/package.py` and add `"-DWITH_ACCELERATED_CPU:Bool=TRUE",` after line 84. Edit the recipe `var/spack/repos/builtin/packages/llvm/package.py` and remove all versions higher than 18.

```bash
spack install hipsycl@24.06 %gcc@10.4.0 +cuda
```
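
Once the build finishes, the compiler can be used roughly as follows (a sketch: `spack load` puts `acpp` on the `PATH`, and `sm_80` assumes Mahti's A100 GPUs):

```bash
spack load hipsycl
acpp -O2 --acpp-targets="cuda:sm_80" hello.cpp -o hello
```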

## AdaptiveCpp on LUMI

Load the modules needed:

```bash
module load LUMI/24.03
module load partition/G
module load rocm/6.0.3
```

Clone the repository and compile with both CPU and ROCm support:

git clone https://github.com/AdaptiveCpp/AdaptiveCpp.git
cd AdaptiveCpp
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/projappl/project_462000752/ACPP/  -DROCM_PATH=$ROCM_PATH -DWITH_CPU_BACKEND=ON -DWITH_CUDA_BACKEND=OFF  -DWITH_ROCM_BACKEND=ON -DACPP_TARGETS="gfx90a"  -DWITH_ACCELERATED_CPU=ON -DWITH_SSCP_COMPILER=OFF  -DWITH_OPENCL_BACKEND=OFF -DWITH_LEVEL_ZERO_BACKEND=OFF -DBOOST_ROOT=/appl/lumi/SW/LUMI-24.03/G/EB/Boost/1.83.0-cpeGNU-24.03/ ..
make -j 64 
make install
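
After installation, compiling for LUMI's MI250X GPUs could look like this (a sketch: the prefix matches the `CMAKE_INSTALL_PREFIX` above, with the same modules loaded):

```bash
export PATH=/projappl/project_462000752/ACPP/bin:$PATH
acpp -O2 --acpp-targets="hip:gfx90a" hello.cpp -o hello
```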