The BioMassters

An algorithm that predicts yearly Aboveground Biomass for Finnish forests using satellite imagery. [NeurIPS 2023 Datasets & Benchmarks Track]

This code is one of the three benchmarks for the BioMassters dataset.

Competition Page | Leaderboard | Paper

Team: Just4Fun

Contact: quqixun@gmail.com

Source Code: https://github.com/quqixun/BioMassters

1. Method

  • S1 and S2 features and AGBM labels were carefully preprocessed according to statistics of the training data. See process.py and ./libs/process for details.
  • The training data was split into 5 folds for cross validation in split.py.
  • Processed S1 and S2 features were concatenated into a 3D tensor of shape [B, 15, 12, 256, 256] as input; targets were AGBM labels of shape [B, 1, 256, 256].
  • Horizontal flipping, vertical flipping and random rotation by multiples of 90 degrees were used as data augmentation on the 3D features [12, 256, 256] and 2D labels [256, 256].
  • We applied Swin UNETR with the attention mechanism from Swin Transformer V2 as the regression model. In ./libs/models, Swin UNETR was adapted from the implementation by the MONAI project.
  • During training, Swin UNETR was optimized by a weighted sum of MAE and SSIM losses (a minimal sketch of this objective follows the list). The RMSE on validation data was used to select the best model.
  • We trained Swin UNETR on each of the 5 folds, yielding 5 models.
  • For each testing sample, the final result was the average of the 5 models' predictions.
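The objective and the ensembling step can be summarized in a few lines. The sketch below is an illustration, not the exact training code: it assumes the differentiable SSIM implementation from pytorch_msssim (see References), inputs normalized so that data_range=1.0 holds, and illustrative weights w_rec and w_ssim (the actual weights are defined in the experiment configs):

import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # differentiable SSIM (see References)

def biomass_loss(pred, target, w_rec=1.0, w_ssim=1.0):
    # pred, target: [B, 1, 256, 256] AGBM maps, assumed normalized to [0, 1].
    # w_rec and w_ssim are illustrative; the real weights live in the configs.
    l_rec = F.l1_loss(pred, target)                    # MAE term
    l_ssim = 1.0 - ssim(pred, target, data_range=1.0)  # SSIM term
    return w_rec * l_rec + w_ssim * l_ssim

def ensemble_predict(models, x):
    # Average the predictions of the 5 fold models for the final result.
    preds = [model(x) for model in models]   # each [B, 1, 256, 256]
    return torch.stack(preds).mean(dim=0)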

2. Environment

  • Ubuntu 20.04 LTS
  • CUDA 11.3 or later
  • Any GPU with at least 40 GB of VRAM for training
  • Any GPU with at least 8 GB of VRAM for predicting
  • At least 16 GB of RAM for training and predicting
  • Miniconda or Anaconda for Python environment management
  • AWS CLI for downloading the dataset

Create the Python environment and install dependencies:
# create environment
conda create --name biomassters python=3.9
conda activate biomassters

# install dependencies
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt

Clone code:

git clone git@github.com:quqixun/BioMassters.git
# working dir
cd BioMassters

3. Dataset Preparation

./data/information
├── biomassters-download-instructions.txt  # Instructions to download satellite images and AGBM data
├── features_metadata_FzP19JI.csv          # Metadata for satellite images
└── train_agbm_metadata.csv                # Metadata for training set AGBM tifs
  • Download image data by running ./scripts/download.sh:
s3_node=as  # options: as, us, eu
split=all   # download specific dataset, options: train, test, all
            # set split to test for predicting only
            # set split to train for training only
            # set split to all otherwise
download_root=./data/source
features_metadata=./data/information/features_metadata_FzP19JI.csv
training_labels_metadata=./data/information/train_agbm_metadata.csv

python download.py \
    --download_root            $download_root            \
    --features_metadata        $features_metadata        \
    --training_labels_metadata $training_labels_metadata \
    --s3_node                  $s3_node                  \
    --split                    $split

Data will be saved in ./data/source with the following layout. Alternatively, you can reorganize an existing copy of the dataset into the same structure (a sanity-check sketch follows the tree below).

./data/source
├── test
│   ├── aa5e092e
│   │   ├── S1
│   │   │   ├── aa5e092e_S1_00.tif
│   │   │   ├── ...
│   │   │   └── aa5e092e_S1_11.tif
│   │   └── S2
│   │       ├── aa5e092e_S2_00.tif
│   │       ├── ...
│   │       └── aa5e092e_S2_11.tif
│   ├── ...
│   └── fff812c0
└── train
    ├── aa018d7b
    │   ├── S1
    │   │   └── ...
    │   ├── S2
    │   │   └── ...
    │   └── aa018d7b_agbm.tif
    ├── ...
    └── fff05995
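If you arrange the data yourself, a quick check like the following sketch can verify the layout. It assumes every chip has all 12 monthly S1 and S2 images, matching the tree above:

from pathlib import Path

root = Path("./data/source")

# Each chip directory should contain 12 monthly S1 and 12 monthly S2 images;
# training chips additionally have one <chip>_agbm.tif label.
for split in ["train", "test"]:
    for chip in sorted((root / split).iterdir()):
        if not chip.is_dir():
            continue
        n_s1 = len(list((chip / "S1").glob("*.tif")))
        n_s2 = len(list((chip / "S2").glob("*.tif")))
        assert n_s1 == 12 and n_s2 == 12, f"{chip.name}: {n_s1} S1 / {n_s2} S2"
        if split == "train":
            assert (chip / f"{chip.name}_agbm.tif").exists(), chip.name
print("Dataset layout looks consistent.")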
  • Calculate statistics for normalization and split the dataset into 5 folds by running ./scripts/process.sh:
source_root=./data/source
split_seed=42
split_folds=5

python process.py \
    --source_root    $source_root \
    --process_method plain

python split.py \
    --data_root   $source_root \
    --split_seed  $split_seed  \
    --split_folds $split_folds

Outputs in ./data/source should match the following structure:

./data/source
├── plot              # plot of data distribution
├── splits.pkl        # 5 folds for cross validation
├── stats_log2.pkl    # statistics of log2 transformed dataset
├── stats_plain.pkl   # statistics of original dataset
├── test
└── train

This step takes about 80 GB of RAM. You don't have to run the script yourself, since all outputs are already provided in ./data/source (a sketch for inspecting them follows).
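The pickled outputs can be inspected directly. The sketch below only loads and prints them; the exact structure of the objects is defined by process.py and split.py, so printing them is the quickest way to see the fold assignments and per-band statistics:

import pickle

with open("./data/source/splits.pkl", "rb") as f:
    splits = pickle.load(f)  # fold assignments for cross validation

with open("./data/source/stats_plain.pkl", "rb") as f:
    stats = pickle.load(f)   # statistics of the original (plain) dataset

print(type(splits))
print(type(stats))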

4. Training

Train model with arguments (see ./scripts/train.sh):

  • data_root: root directory of the training dataset
  • exp_root: root directory to save checkpoints, logs and models
  • config_file: path of the configuration file
  • process_method: processing method used to calculate statistics, log2 or plain; default is plain
  • folds: comma-separated list of fold indices to train
device=0
process=plain
folds=0,1,2,3,4
data_root=./data/source
config_file=./configs/swin_unetr/exp1.yaml

CUDA_VISIBLE_DEVICES=$device \
python train.py              \
    --data_root      $data_root             \
    --exp_root       ./experiments/$process \
    --config_file    $config_file           \
    --process_method $process               \
    --folds          $folds

Run ./scripts/train.sh for training; models and logs will be saved in ./experiments/plain/swin_unetr/exp1.

Training on all 5 folds takes about 1 week if only one GPU is available. With 5 GPUs you can train each fold on its own GPU, which takes less than 2 days (see the launcher sketch after the tree below). You can download the trained models from BaiduDisc (code:jarp), MEGA or Google Drive, and unzip them into the following arrangement:

./experiments/plain/swin_unetr/exp1
├── fold0
│   ├── logs.csv
│   └── model.pth
├── fold1
│   ├── logs.csv
│   └── model.pth
├── fold2
│   ├── logs.csv
│   └── model.pth
├── fold3
│   ├── logs.csv
│   └── model.pth
└── fold4
    ├── logs.csv
    └── model.pth
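One way to run the per-fold trainings in parallel is to launch one train.py process per GPU. This is a sketch, not part of the repository's scripts; the arguments mirror ./scripts/train.sh, and it assumes GPU indices 0-4 are available:

import os
import subprocess

procs = []
for fold in range(5):
    # Pin each fold's training process to its own GPU.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(fold))
    procs.append(subprocess.Popen(
        ["python", "train.py",
         "--data_root", "./data/source",
         "--exp_root", "./experiments/plain",
         "--config_file", "./configs/swin_unetr/exp1.yaml",
         "--process_method", "plain",
         "--folds", str(fold)],
        env=env,
    ))

# Wait for all five fold trainings to finish.
for p in procs:
    p.wait()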

5. Predicting

Make predictions with almost the same arguments as training (see ./scripts/predict.sh):

  • data_root: root directory of the dataset
  • exp_root: root directory of checkpoints, logs and models
  • output_root: root directory to save predictions
  • config_file: path of the configuration file
  • process_method: processing method used to calculate statistics, log2 or plain; default is plain
  • folds: comma-separated list of fold indices
  • apply_tta: whether to apply test-time augmentation (see the flip-averaging sketch below); default is False
device=0
process=plain
folds=0,1,2,3,4
apply_tta=false
data_root=./data/source
config_file=./configs/swin_unetr/exp1.yaml

CUDA_VISIBLE_DEVICES=$device \
python predict.py            \
    --data_root      $data_root             \
    --exp_root       ./experiments/$process \
    --output_root    ./predictions/$process \
    --config_file    $config_file           \
    --process_method $process               \
    --folds          $folds                 \
    --apply_tta      $apply_tta

Run ./scripts/predict.sh for predicting; predictions will be saved in ./predictions/plain/swin_unetr/exp1/folds_0-1-2-3-4.

Predicting the public testing samples with all 5 folds and averaging the results takes about 30 minutes. You can download the submission for the public testing dataset from BaiduDisc (code:w61j) or MEGA.
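Test-time augmentation here means averaging predictions over flipped inputs and undoing each flip on the corresponding output. A minimal sketch of the idea; the transforms actually used by predict.py may differ:

import torch

def predict_tta(model, x):
    # x: [B, 15, 12, 256, 256]; flips act on the last two (spatial) dims.
    preds = []
    for dims in [None, (-1,), (-2,), (-2, -1)]:
        x_aug = x if dims is None else torch.flip(x, dims=dims)
        y = model(x_aug)  # [B, 1, 256, 256]
        preds.append(y if dims is None else torch.flip(y, dims=dims))
    return torch.stack(preds).mean(dim=0)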

6. Metrics

Metrics of the submitted models and their predictions on the validation and testing datasets:

| Metrics | Val Fold 0 | Val Fold 1 | Val Fold 2 | Val Fold 3 | Val Fold 4 | Val Average | Test Public | Test Private |
|---------|------------|------------|------------|------------|------------|-------------|-------------|--------------|
| Lrec    | 0.03562    | 0.03516    | 0.03527    | 0.03522    | 0.03626    | -           | -           | -            |
| Lssim   | 0.04758    | 0.04684    | 0.04713    | 0.04691    | 0.04834    | -           | -           | -            |
| RMSE    | 27.9676    | 27.4368    | 27.5011    | 27.8954    | 28.0946    | 27.7781     | 27.3891     | 27.6779      |
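RMSE is computed per pixel between the predicted and ground-truth AGBM maps. A minimal sketch, assuming both are single-band tifs of the same shape (the file names here are hypothetical placeholders):

import numpy as np
import tifffile

# Hypothetical file names; substitute real prediction and label paths.
pred = tifffile.imread("prediction_agbm.tif").astype(np.float64)
target = tifffile.imread("target_agbm.tif").astype(np.float64)

rmse = float(np.sqrt(np.mean((pred - target) ** 2)))
print(f"RMSE: {rmse:.4f}")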

7. Reference

  • Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images. [Paper]
  • Swin Transformer V2: Scaling Up Capacity and Resolution. [Paper, Code]
  • Implementation of Swin UNETR by the MONAI project. [Code]
  • Differentiable structural similarity (SSIM) metric. [Code]
  • Library for 3D augmentations. [Paper, Code]

8. License

  • MIT License
