This project aims to apply super-resolution (SR) models to satellite data. So far, SR models have mostly been developed for ordinary photographs, so it is interesting to see how they perform on this kind of data.
Two satellite images around Taiwan were used to produce two datasets for training. These datasets were uploaded to Kaggle and are summarized below.
| | Sentinel-2 | SPOT-6/7 |
|---|---|---|
| Resolution | 10 m | 5 m |
| Number of Bands | 13 (RGB used only) | 4 (RGB used only) |
| Number of Files | 1022 | 1000 |
| File Size | 487.99 MB | 489.3 MB |
To train a Super-Resolution (SR) model, pairs of High-Resolution (HR) and Low-Resolution (LR) patches are needed throughout the processing pipeline. The original satellite data are cropped into 512×512 patches, as sketched below.
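A minimal sketch of the patch preparation, assuming the HR patches are 512×512 crops and the LR counterparts are obtained by a factor-of-2 downsampling (the crop stride and the bicubic interpolation are illustrative assumptions, not taken from the original code):

```python
import numpy as np
from PIL import Image

PATCH = 512          # HR patch size
SCALE = 2            # upscaling factor used in this project

def make_patch_pairs(image_path, stride=512):
    """Crop an image into HR patches and build LR counterparts by downsampling."""
    img = np.array(Image.open(image_path).convert("RGB"))
    h, w, _ = img.shape
    hr_patches, lr_patches = [], []
    for top in range(0, h - PATCH + 1, stride):
        for left in range(0, w - PATCH + 1, stride):
            hr = img[top:top + PATCH, left:left + PATCH]
            # LR patch: downsample by the scale factor (bicubic is an assumption)
            lr = np.array(
                Image.fromarray(hr).resize((PATCH // SCALE, PATCH // SCALE), Image.BICUBIC)
            )
            hr_patches.append(hr)
            lr_patches.append(lr)
    return np.array(lr_patches), np.array(hr_patches)
```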
Most of the images (~80%) were used for training the EDSR model, and the rest (~20%) were used for validation during the training phase. After training, the weights were saved to `./edsr_wts_030_mae.h5`. Ten satellite images in `./Samples` were additionally set aside for testing and evaluating the model's performance.
Enhanced Deep Super-Resolution (EDSR) is applied in this project; it is based on this paper, with this source code as reference.
The upscaling factor is set to 2 here; the model has 926,723 parameters in total and was trained for 30 epochs.
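A minimal sketch of an EDSR-style model and training setup in Keras, assuming MAE loss (suggested by the weight file name) and the 80/20 train/validation split described above; the number of residual blocks and filters here are illustrative and will not necessarily reproduce the 926,723-parameter count reported:

```python
import tensorflow as tf
from tensorflow.keras import layers

def res_block(x, filters):
    """Residual block without batch normalization, as in EDSR."""
    skip = x
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    return layers.Add()([skip, x])

def build_edsr(scale=2, filters=64, n_res_blocks=8):
    inp = layers.Input(shape=(None, None, 3))
    x = skip = layers.Conv2D(filters, 3, padding="same")(inp)
    for _ in range(n_res_blocks):
        x = res_block(x, filters)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.Add()([x, skip])
    # Upsample via pixel shuffle: conv to filters * scale^2 channels, then depth_to_space.
    x = layers.Conv2D(filters * scale ** 2, 3, padding="same")(x)
    x = layers.Lambda(lambda t: tf.nn.depth_to_space(t, scale))(x)
    out = layers.Conv2D(3, 3, padding="same")(x)
    return tf.keras.Model(inp, out)

# Hypothetical training call: lr_train/hr_train and lr_val/hr_val would come from
# the patch-preparation step, split roughly 80/20.
model = build_edsr(scale=2)
model.compile(optimizer="adam", loss="mae")
# model.fit(lr_train, hr_train, validation_data=(lr_val, hr_val), epochs=30)
# model.save_weights("./edsr_wts_030_mae.h5")
```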
To evaluate the performance of EDSR, two metrics are used: Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM). They are calculated by the following equations:
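In their standard forms, with $\mathrm{MAX}$ the maximum possible pixel value, $\mathrm{MSE}$ the mean squared error between the reference and the reconstructed image, and $\mu$, $\sigma^2$, $\sigma_{xy}$ the means, variances, and covariance of the two compared image windows $x$ and $y$:

$$\mathrm{PSNR} = 10\,\log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right)$$

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$

where $c_1$ and $c_2$ are small constants that stabilize the division.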
Here, EDSR's results are compared with directly applying bicubic and bilinear interpolation.
The LR images are obtained by reducing the resolution of the images in `./Samples` by a factor of 2, and the center regions are enlarged for better visibility; a sketch of the comparison is given below.
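A minimal sketch of this comparison, assuming OpenCV and scikit-image are available; the loaded model and preprocessing (scaling to [0, 1]) are illustrative assumptions:

```python
import cv2
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare_methods(hr_path, model=None, scale=2):
    """Downsample an HR sample by `scale`, upscale it back with each method,
    and report PSNR/SSIM against the original."""
    hr = cv2.cvtColor(cv2.imread(hr_path), cv2.COLOR_BGR2RGB)
    h, w = hr.shape[:2]
    lr = cv2.resize(hr, (w // scale, h // scale), interpolation=cv2.INTER_AREA)

    results = {
        "bilinear": cv2.resize(lr, (w, h), interpolation=cv2.INTER_LINEAR),
        "bicubic": cv2.resize(lr, (w, h), interpolation=cv2.INTER_CUBIC),
    }
    if model is not None:
        # EDSR prediction; normalization to [0, 1] is an assumption about preprocessing.
        sr = model.predict(lr[np.newaxis].astype("float32") / 255.0)[0]
        results["EDSR"] = np.clip(sr * 255.0, 0, 255).astype("uint8")

    for name, img in results.items():
        psnr = peak_signal_noise_ratio(hr, img, data_range=255)
        ssim = structural_similarity(hr, img, channel_axis=-1, data_range=255)
        print(f"{name:>8}: PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```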