
Introduction

The Burst Image Super-Resolution Challenge will be held as part of the 7th edition of the NTIRE: New Trends in Image Restoration and Enhancement workshop, held in conjunction with CVPR 2022. The task of this challenge is to generate a denoised, demosaicked, higher-resolution image, given a RAW burst as input. The challenge has two tracks, namely Track 1: Synthetic and Track 2: Real-world. The top-ranked participants in each track will be awarded, and all participants are invited to submit a paper describing their solution to the associated NTIRE workshop at CVPR 2022.

Dates

  • 2022.01.31 Release of train and validation data
  • 2022.02.01 Validation server online
  • 2022.03.23 Final test data release (inputs only)
  • 2022.03.30 Test output results submission deadline
  • 2022.03.30 Fact sheets and code/executable submission deadline
  • 2022.04.01 Preliminary test results released to the participants
  • 2022.04.11 Paper submission deadline for entries from the challenge
  • 2022.06.19 NTIRE workshop and challenges, results and award ceremony

Description

Given multiple noisy RAW images of a scene, the task in burst super-resolution is to predict a denoised higher-resolution RGB image by combining information from the multiple input frames. Concretely, the method will be provided a burst sequence containing 14 images, where each image contains the RAW sensor data from a Bayer filter (RGGB) mosaic. The images in the burst have unknown offsets with respect to each other and are corrupted by noise. The goal is to exploit the information from the multiple input images to predict a denoised, demosaicked RGB image with 4 times higher resolution than the input. The challenge has two tracks, namely 1) Synthetic and 2) Real-world, based on the source of the input data. The goal in both tracks is to reconstruct the original image as faithfully as possible, not to artificially generate a plausible, visually pleasing image.

Track 1 - Synthetic

In the synthetic track, the input bursts are generated from RGB images using a synthetic data generation pipeline.

Data generation: The input sRGB image is first converted to linear sensor space using an inverse camera pipeline. A LR burst is then generated by applying random translations and rotations, followed by bilinear downsampling. The generated burst is then mosaicked and corrupted by random noise.
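
A minimal sketch of this pipeline is shown below (PyTorch). The function name, shift range, and noise level are illustrative assumptions; the toolkit's synthetic burst generation code is the reference implementation.

```python
import torch
import torch.nn.functional as F

def generate_synthetic_burst(rgb, burst_size=14, scale=4):
    """rgb: linear-sensor-space image, tensor of shape (3, H, W) with values in [0, 1]."""
    burst = []
    for _ in range(burst_size):
        # 1) Random sub-pixel translation (rotation omitted here for brevity).
        dx, dy = ((torch.rand(2) * 8.0) - 4.0).tolist()
        theta = torch.tensor([[1.0, 0.0, 2.0 * dx / rgb.shape[-1]],
                              [0.0, 1.0, 2.0 * dy / rgb.shape[-2]]]).unsqueeze(0)
        grid = F.affine_grid(theta, (1, *rgb.shape), align_corners=False)
        shifted = F.grid_sample(rgb.unsqueeze(0), grid, align_corners=False)

        # 2) Bilinear downsampling by the super-resolution factor.
        lr = F.interpolate(shifted, scale_factor=1.0 / scale, mode='bilinear',
                           align_corners=False)[0]

        # 3) Mosaic to an RGGB Bayer pattern (one color value per pixel).
        mosaic = torch.zeros_like(lr[0])
        mosaic[0::2, 0::2] = lr[0, 0::2, 0::2]   # R
        mosaic[0::2, 1::2] = lr[1, 0::2, 1::2]   # G
        mosaic[1::2, 0::2] = lr[1, 1::2, 0::2]   # G
        mosaic[1::2, 1::2] = lr[2, 1::2, 1::2]   # B

        # 4) Corrupt with random noise (simple Gaussian approximation).
        burst.append((mosaic + 0.01 * torch.randn_like(mosaic)).clamp(0.0, 1.0))

    return torch.stack(burst)  # shape: (burst_size, H / scale, W / scale)
```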

Training set: We provide code to generate the synthetic bursts using any image dataset for training. Note that any image dataset except the validation split of the BurstSR dataset can be used to generate synthetic bursts for training.

Validation set: The bursts in the validation set have been pre-generated with the data generation code, using the RGB images from the validation split of the BurstSR dataset. NOTE: Compared to the validation set from the 2021 version of the challenge, the validation set this year contains bursts of higher spatial resolution as input (256x256 instead of 96x96).

Registration

If you wish to participate in the Synthetic track, please register for the challenge at the codalab page to get access to the evaluation server and receive email notifications for the challenge.

Evaluation

The methods will be ranked by fidelity (in terms of PSNR) with respect to the high-resolution ground truth, i.e. the linear sensor space image used to generate the burst. The focus of the challenge is on learning to reconstruct the original high-resolution image, not the subsequent post-processing. Hence, the PSNR will be computed in the linear sensor space, before post-processing steps such as color correction, white balancing, and gamma correction.
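
For reference, a minimal sketch of PSNR computed directly in the linear sensor space (the function name is illustrative; the scoring code on the evaluation server is authoritative):

```python
import torch

def linear_psnr(pred, gt, max_value=1.0):
    """pred, gt: linear sensor space images of identical shape, values in [0, max_value]."""
    mse = torch.mean((pred - gt) ** 2)
    return 10.0 * torch.log10(max_value ** 2 / mse)
```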

Validation

The results on the validation set can be uploaded to the Codalab server (live now) to obtain the performance measures, as well as a live leaderboard ranking. The results should be uploaded as a ZIP file containing the network predictions for each burst. The predictions must be normalized to the range [0, 2^14] and saved as 16-bit (uint16) PNG files. Please refer to save_results_synburst_val.py for an example of how to save the results. An example submission file is available here.
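
A hedged sketch of the saving step (the helper in save_results_synburst_val.py may differ in details; the function name below is illustrative):

```python
import cv2
import numpy as np

def save_prediction_as_png(pred, path):
    """pred: predicted RGB image tensor of shape (3, H, W) with values in [0, 1]."""
    # Normalize to [0, 2^14] and convert to a HxWx3 uint16 array.
    pred_np = (pred.clamp(0.0, 1.0) * (2 ** 14)).permute(1, 2, 0).cpu().numpy()
    pred_np = pred_np.astype(np.uint16)
    # OpenCV writes PNGs in BGR channel order, so swap channels before saving.
    cv2.imwrite(path, cv2.cvtColor(pred_np, cv2.COLOR_RGB2BGR))
```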

Final Submission

The test set is now public. You can download the test set containing 92 synthetic bursts from this link. You can use the dataset class provided in synthetic_burst_test_set.py in the latest commit to load the burst sequences.
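
A minimal usage sketch is shown below; the class name, constructor argument, and return signature are assumptions, so check synthetic_burst_test_set.py in the latest commit for the actual interface:

```python
from synthetic_burst_test_set import SyntheticBurstTest  # assumed class name

# Path and return signature are assumptions for illustration only.
dataset = SyntheticBurstTest('/path/to/synburst_test')
for idx in range(len(dataset)):
    burst, burst_name = dataset[idx]   # burst: packed RGGB RAW burst tensor
    sr = model(burst.unsqueeze(0))     # model: your super-resolution network
    # Save `sr` for submission in the format described in the Validation section above.
```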

For the final submission, you need to submit:

  • The predicted outputs for each burst sequence as a zip folder, in the same format as used for uploading results to the codalab validation server (see this for details).
  • The code and model files necessary to reproduce your results.
  • A factsheet (both PDF and tex files) describing your method. The template for the factsheet is available here.

The results, code, and factsheet should be submitted via the Google form.

NOTE: Training on the validation split is NOT allowed for test set submissions.

Track 2 - Real-world

This track deals with the problem of real-world burst super-resolution. Methods will be evaluated on a test set containing bursts captured from a handheld Samsung Galaxy S8 smartphone camera.

Training set: Participants are allowed to use any dataset for training, except the validation and test splits of the BurstSR dataset. In particular, the participants can use the training split of the BurstSR dataset. The BurstSR dataset contains RAW bursts captured from a handheld Samsung Galaxy S8 smartphone camera. For each burst, a corresponding high-resolution RGB image captured using a DSLR is also provided. The participants may use either the pre-processed version of the dataset, containing 160x160 crops, or the unprocessed dataset containing full-sized bursts. Additionally, the participants are encouraged to develop unsupervised training methods, or training pipelines using synthetic data.

Registration

If you wish to participate in the Real-world track, please register for the challenge at the codalab page to receive email notifications for the challenge.

Evaluation

The methods will be evaluated via a user study on a test set containing bursts captured from a handheld Samsung Galaxy S8 smartphone camera, i.e. the same camera that was used to collect the BurstSR dataset. The test set contains bursts of 14 RAW images, each of spatial size 256x256. The emphasis of the user study will be on which method best reconstructs the original high-frequency details. The goal is thus not to generate more pleasing images by modifying the output color space or generating artificial high-frequency content not present in the high-resolution ground truth.

NOTE: Unlike in the 2021 version of the challenge, we will not use the AlignedPSNR metric to rank the methods this year, because the AlignedPSNR metric is biased towards methods trained using the AlignedL2 loss. Thus, in order to encourage the development of alternative training strategies for real data, we will rank the methods using only a user study.

Validation

There will be no evaluation server for Track 2. Participants can use the bursts from the validation split of the BurstSR dataset to validate their methods.

Final Submission

The test set is now public. You can download the test set containing 20 real-world bursts from this link. You can use the dataset class provided in realworld_burst_test_set.py in the latest commit to load the burst sequences.

For the final submission, you need to submit:

  • The predicted outputs for each burst sequence as a zip folder, in the same format as used for uploading results to the codalab validation server (see this for details).
  • The code and model files necessary to reproduce your results.
  • A factsheet (both PDF and tex files) describing your method. The template for the factsheet is available here.

The results, code, and factsheet should be submitted via the Google form.

Toolkit

We also provide a Python toolkit which includes the necessary data loading and evaluation scripts. The toolkit contains the following modules.

Installation: The toolkit requires PyTorch and OpenCV for track 1, and additionally exifread for track 2. The necessary packages can be installed with anaconda, using the install.sh script.

Data

We provide the following data as part of the challenge.

Synthetic validation set: The official validation set for track 1. The dataset contains 100 synthetic bursts, each containing 14 RAW images of 256x256 resolution. The synthetic bursts are generated from the RGB Canon images from the validation split of the BurstSR dataset. The dataset can be downloaded from here.

Synthetic test set: The official test set for track 1. The dataset contains 92 synthetic bursts, each containing 14 RAW images of 256x256 resolution. The synthetic bursts are generated from the RGB Canon images from the test split of the BurstSR dataset. The dataset can be downloaded from here.

Real world test set: The official test set for track 2. The dataset contains 20 real world bursts, each containing 14 RAW images of 256x256 resolution. The bursts are captured using a Samsung Galaxy S8 smartphone camera. The dataset can be downloaded from here.

BurstSR train and validation set (pre-processed): The dataset has been split into 10 parts and can be downloaded and unpacked using the download_burstsr_dataset.py script. In case of issues with the script, the download links are available here.

BurstSR train and validation set (raw): The dataset can be downloaded and unpacked using the scripts/download_raw_burstsr_data.py script.

Zurich RAW to RGB mapping set: The RGB images from the training split of the Zurich RAW to RGB mapping dataset can be downloaded from here. These RGB images can be used to generate synthetic bursts for training using the SyntheticBurst class.
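
A hedged sketch of wrapping the Zurich RGB images for training with the SyntheticBurst class; the module names, constructor arguments, and return signature are assumptions based on the toolkit layout, so check the toolkit source for the actual interface:

```python
from zurich_raw2rgb_dataset import ZurichRAW2RGB      # assumed module / class names
from synthetic_burst_generation import SyntheticBurst

# Wrap the Zurich RGB training images so each item yields a synthetic RAW burst.
zurich_rgb = ZurichRAW2RGB('/path/to/zurich-raw-to-rgb')
train_set = SyntheticBurst(zurich_rgb, burst_size=14, crop_sz=384)  # assumed arguments

burst, gt, flow_vectors, meta_info = train_set[0]     # assumed return signature
# burst: noisy RAW LR burst; gt: linear-space HR ground truth used as the training target.
```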

Issues and questions

In case of any questions about the challenge or the toolkit, feel free to open an issue on GitHub.

Organizers

Terms and conditions

The terms and conditions for participating in the challenge are provided here.

Acknowledgements

The toolkit uses the forward and inverse camera pipeline code from unprocessing.
