From c421f08f9ca16390ddf1662937389d2269bafd62 Mon Sep 17 00:00:00 2001
From: nghi01
Date: Tue, 24 Jan 2023 15:33:22 -0500
Subject: [PATCH 1/6] Fix error in apptainer documentation

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 2e77ae1..689463e 100644
--- a/README.md
+++ b/README.md
@@ -32,7 +32,7 @@ Or with docker:
 docker run --rm -ti -p 3000:3000 -p 8080:8080 opendronemap/clusterodm [parameters]
 ```
 
-Or with apptainer:
+Or with apptainer, after changing into the ClusterODM directory:
 
 ```bash
 apptainer run docker://opendronemap/clusterodm [parameters]

From 90a04059331a163f03b86b862ca04db09edc2af2 Mon Sep 17 00:00:00 2001
From: nghi01
Date: Tue, 24 Jan 2023 15:46:14 -0500
Subject: [PATCH 2/6] Adding SLURM script

---
 README.md       |  7 +++++++
 mytesting.slurm | 19 +++++++++++++++++++
 2 files changed, 26 insertions(+)
 create mode 100644 mytesting.slurm

diff --git a/README.md b/README.md
index 689463e..201e153 100644
--- a/README.md
+++ b/README.md
@@ -94,6 +94,13 @@ A docker-compose file is available to automatically setup both ClusterODM and No
 docker-compose up
 ```
 
+## HPC setup with SLURM
+
+If you are on an HPC cluster, you can write a SLURM script that schedules and sets up the available nodes running NodeODM for ClusterODM to be wired to. Using SLURM reduces the time and the number of steps needed to bring up nodes for ClusterODM each run, making it easier to use ODM on an HPC.
+
+To set up HPC with SLURM, you must have make sure SLURM is installed. 
+
+
 ## Windows Bundle
 
 ClusterODM can run as a self-contained executable on Windows without the need for additional dependencies. You can download the latest `clusterodm-windows-x64.zip` bundle from the [releases](https://github.com/OpenDroneMap/ClusterODM/releases) page. Extract the contents in a folder and run:
diff --git a/mytesting.slurm b/mytesting.slurm
new file mode 100644
index 0000000..6a2c100
--- /dev/null
+++ b/mytesting.slurm
@@ -0,0 +1,19 @@
+#!/usr/bin/bash
+#source .bashrc
+
+#SBATCH --partition=8core
+#SBATCH --nodelist=node[48,50,51]
+#SBATCH --time=20:00:00
+
+cd $HOME
+cd ODM/NodeODM/
+
+# Launch NodeODM on node 48
+srun --nodes=1 apptainer run --writable node/ &
+
+# Launch NodeODM on node 50
+srun --nodes=1 apptainer run --writable node/ &
+
+# Launch NodeODM on node 51
+srun --nodes=1 apptainer run --writable node/ &
+wait

From 8c6a2659dfc2fe5244e891f72867e8ae65f488c6 Mon Sep 17 00:00:00 2001
From: nghi01
Date: Tue, 24 Jan 2023 15:48:39 -0500
Subject: [PATCH 3/6] Change name

---
 mytesting.slurm => sample.slurm | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename mytesting.slurm => sample.slurm (100%)

diff --git a/mytesting.slurm b/sample.slurm
similarity index 100%
rename from mytesting.slurm
rename to sample.slurm

From b3fe67bde90a933181323bd767b8dfa345250341 Mon Sep 17 00:00:00 2001
From: nghi01
Date: Tue, 24 Jan 2023 16:04:36 -0500
Subject: [PATCH 4/6] Update instructions for SLURM

---
 README.md | 34 +++++++++++++++++++++++++++++++++-
 1 file changed, 33 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 201e153..527ddb4 100644
--- a/README.md
+++ b/README.md
@@ -98,8 +98,40 @@ docker-compose up
 
 If you are on an HPC cluster, you can write a SLURM script that schedules and sets up the available nodes running NodeODM for ClusterODM to be wired to. Using SLURM reduces the time and the number of steps needed to bring up nodes for ClusterODM each run, making it easier to use ODM on an HPC.
 
-To set up HPC with SLURM, you must have make sure SLURM is installed. 
+To set up HPC with SLURM, you must make sure SLURM is installed.
+
+The SLURM script will differ from HPC to HPC, depending on which nodes your cluster has. The main idea, however, is the same: run NodeODM once on each node. By default, each NodeODM listens on port 3000, and Apptainer takes available ports starting from 3000, so if a node's port 3000 is open, NodeODM will run there. After that, run ClusterODM on the head node and wire the running NodeODM instances to it. With that, you have a functional ClusterODM running on the HPC.
+
+Here is an example of a SLURM script that assigns nodes 48, 50, and 51 to run NodeODM. Feel free to adapt it to your system:
+
+![image](https://user-images.githubusercontent.com/70782465/214411148-cdf43e44-9756-4115-9195-d1f36b3a31b9.png)
+
+Run the following commands to schedule using the SLURM script:
+
+```
+sbatch sample.slurm
+```
+
+Unfortunately, SLURM does not schedule jobs on the head node, so if you want to run ClusterODM on the head node, you have to launch it there yourself.
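+
+For example, one way is to reuse the Apptainer invocation shown earlier in this README (add parameters as needed):
+
+```bash
+apptainer run docker://opendronemap/clusterodm
+```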
+
+After that, you can connect to the ClusterODM CLI and wire the running NodeODM instances to it. Here is an example that matches the sample SLURM script:
+
+```
+telnet localhost 8080
+> NODE ADD node48 3000
+> NODE ADD node50 3000
+> NODE ADD node51 3000
+> NODE LIST
+```
+
+If ClusterODM is not wired correctly, always double-check which ports the NodeODM instances are actually using.
 
 
 ## Windows Bundle
 

From 5ad58a600cd79f01fc7c42267faf3be552152954 Mon Sep 17 00:00:00 2001
From: nghi01
Date: Tue, 24 Jan 2023 16:07:11 -0500
Subject: [PATCH 5/6] Finalizing

---
 README.md | 38 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 37 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 527ddb4..d328239 100644
--- a/README.md
+++ b/README.md
@@ -100,18 +100,54 @@ docker-compose up
 To set up HPC with SLURM, you must make sure SLURM is installed.
 
-The SLURM script will differ from HPC to HPC, depending on which nodes your cluster has. The main idea, however, is the same: run NodeODM once on each node. By default, each NodeODM listens on port 3000, and Apptainer takes available ports starting from 3000, so if a node's port 3000 is open, NodeODM will run there. After that, run ClusterODM on the head node and wire the running NodeODM instances to it. With that, you have a functional ClusterODM running on the HPC.
+The SLURM script will differ from cluster to cluster, depending on which nodes your cluster has. The main idea, however, is the same: run NodeODM once on each node. By default, each NodeODM listens on port 3000, and Apptainer takes available ports starting from 3000, so if a node's port 3000 is open, NodeODM will run there. After that, run ClusterODM on the head node and wire the running NodeODM instances to it. With that, you have a functional ClusterODM running on the HPC.
 
 Here is an example of a SLURM script that assigns nodes 48, 50, and 51 to run NodeODM. Feel free to adapt it to your system:
 
 ![image](https://user-images.githubusercontent.com/70782465/214411148-cdf43e44-9756-4115-9195-d1f36b3a31b9.png)
 
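+For reference, the script in the image is the sample.slurm file included in this repository:
+
+```
+#!/usr/bin/bash
+#source .bashrc
+
+#SBATCH --partition=8core
+#SBATCH --nodelist=node[48,50,51]
+#SBATCH --time=20:00:00
+
+cd $HOME
+cd ODM/NodeODM/
+
+# Launch NodeODM on node 48
+srun --nodes=1 apptainer run --writable node/ &
+
+# Launch NodeODM on node 50
+srun --nodes=1 apptainer run --writable node/ &
+
+# Launch NodeODM on node 51
+srun --nodes=1 apptainer run --writable node/ &
+wait
+```
+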
+You can check for available nodes using sinfo:
+
+```
+sinfo
+```
+
 Run the following commands to schedule using the SLURM script:
 
 ```
 sbatch sample.slurm
 ```
 
+You can also check your currently running jobs using squeue:
+
+```
+squeue -u $USER
+```
+
 Unfortunately, SLURM does not schedule jobs on the head node, so if you want to run ClusterODM on the head node, you have to launch it there yourself.
 
 For example, one way is to reuse the Apptainer invocation shown earlier in this README (add parameters as needed):
 

From e9a96d078c9e17b986ae19bdb62d0810fc89fc53 Mon Sep 17 00:00:00 2001
From: nghi01
Date: Tue, 24 Jan 2023 16:07:33 -0500
Subject: [PATCH 6/6] Finalizing

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index d328239..38a94dd 100644
--- a/README.md
+++ b/README.md
@@ -136,7 +136,7 @@ You can check for available nodes using sinfo:
 sinfo
 ```
 
-Run the following commands to schedule using the SLURM script:
+Run the following command to submit the SLURM script:
 
 ```
 sbatch sample.slurm