2_First_run
Now that the code is compiled and the executable has been created in `${CORAL_ROOT}`, we can run it. A recommended workflow is to create a separate directory for your runs. We will call this your `${PROJECTS}` repository. (For instance, on my machine, `PROJECTS=/data/plane_layer`.)
To summarize:
- `${CORAL_ROOT}/src` contains the sources and the makefile,
- `${CORAL_ROOT}/build` contains the executables,
- `${CORAL_ROOT}/etc` contains useful routines and files,
- `${PROJECTS}` will host the data produced by your runs.
Let us get equipped by copying various useful scripts to the root of `${PROJECTS}`:

```shell
cp ${CORAL_ROOT}/etc/manage_dataDir_scripts/* ${PROJECTS}/.
```
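As a quick sanity check (assuming the copy succeeded), you can list the directory; the script names below are the ones described next:

```shell
# Verify the helper scripts landed in ${PROJECTS}.
# Expect to see, among others:
#   prepare_directory.sh  dealiase_volumes.py  dealiase_and_tar.sh
ls ${PROJECTS}
```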
To give brief context, `dealiase_volumes.py` is a Python script that dealiases volume outputs (making volumes smaller by a factor of 27/8!); `dealiase_and_tar.sh` is a bash script that dealiases a list of runs and archives them in a tarball, which is particularly useful on clusters. We will come back to these scripts later. For the time being, our focus is on `prepare_directory.sh`, a script that you will use whenever you need to create a new directory for a new run.
Let us modify this script to suit our needs. The `WHERE_TO` variable should contain the full path to the directory you wish to create. For a first test, one possibility is `WHERE_TO=${PROJECTS}/test_1`. After editing, save and source the script:

```shell
source ./prepare_directory.sh
```
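For concreteness, the relevant part of the edit might look like the sketch below. This is a hypothetical excerpt: only the `WHERE_TO` line needs editing, and the real script does considerably more (creating output subdirectories, copying the executable and input files).

```shell
# Hypothetical excerpt of prepare_directory.sh -- the real script
# also creates output subdirectories and copies the executable.
WHERE_TO=${PROJECTS}/test_1   # full path of the run directory to create
mkdir -p ${WHERE_TO}
```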
The script creates the directory for the run, together with suitable subdirectories for the output of the code. The executable and useful post-processing Python routines are also copied locally. It also copies input files, which we discuss now.
- `coral.equations` defines the set of PDEs that are going to be time-stepped. Incidentally, it defines the names of the variables, which we need for the output routines below.
- `coral.parameters.in` defines some general parameters: resolution, time-stepper, CFL, timer before shutting down, initial conditions, etc.
- `coral.timeseries` contains a list of variables that we want output as time series.
- `coral.usrOutput` contains a list of profiles, slices, and volumes of variables that we want exported to disk as the simulation marches forward.
Once all these files are properly set up, we are ready to run. On a laptop/desktop machine, simply type the following command to run the code on (e.g.) 6 cores:

```shell
mpiexec -n 6 ./coral_SL.exe
```
For reference, the memory requirements for the Rayleigh-Bénard equations using (64,64,48) modes are: 1.5 GB when using 1 core; 1.7 GB when using 6 cores; 2.2 GB when using 20 cores. The slight drift in memory usage with more cores comes from the auxiliary buffers allocated by the `2decomp&fft` library for implementing transposes.
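If you want to check these figures on your own machine, one generic way (plain Linux, not Coral-specific) is to read the peak resident set size of a running process from `/proc`:

```shell
# Peak resident memory (VmHWM) of a running Coral process.
# "coral_SL.exe" is the executable name used above; adjust if yours differs.
pid=$(pgrep -f coral_SL.exe | head -n 1)
grep VmHWM "/proc/${pid}/status"
```

For an MPI run, repeat over each rank's pid (or sum them) to compare against the totals quoted above.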
Don't forget to source `set_env_myMachine.sh` prior to execution, so that environment variables are defined and libraries can be found. Additionally, if you want the output written to a file `output.txt` in the background, as opposed to displayed on the screen, type:

```shell
source ${CORAL_ROOT}/set_env_myMachine.sh
mpiexec -n 6 ./coral_SL.exe > output.txt &
```
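If the run should survive logging out (e.g. over SSH), a common variant uses `nohup`; this is standard shell usage rather than anything Coral-specific:

```shell
# Keep the run alive after logout; capture stdout and stderr.
nohup mpiexec -n 6 ./coral_SL.exe > output.txt 2>&1 &
tail -f output.txt   # Ctrl-C stops tail, not the run
```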
On a supercomputer, first load the necessary modules with `module load`. The modules necessary for execution are identical to the requirements for compilation: an MPI implementation, the Intel MKL library, and the FFTW library (again, vanilla FFTW, not Intel's).
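A typical sequence might look like the sketch below; the exact module names are site-specific assumptions here, so check `module avail` on your cluster:

```shell
# Site-specific: module names are assumptions, not Coral requirements.
module load openmpi
module load intel-mkl
module load fftw
```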
The code should now run and shut down after the timer expires.
During the run, you can monitor time series of the kinetic energy with the corresponding Python script:

```shell
python read_timeseries.py
```
For more details on data visualisation, see here.