- PSNR: 49.162236700
- ISSM: 0
- FSIM: 0.42840393111
- SSIM: 0.97565720236
Run the Colab notebook `get_metrics.ipynb` to reproduce these metrics.
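For reference, the PSNR and SSIM values above can be computed with scikit-image. The following is a minimal sketch, not necessarily the exact procedure used in `get_metrics.ipynb`; the file names are placeholders.

```python
# Minimal sketch: compute PSNR/SSIM between a model output and its ground truth.
# "output.png" and "ground_truth.png" are placeholder names, not repository files.
import numpy as np
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

pred = imread("output.png", as_gray=True).astype(np.float64)
gt = imread("ground_truth.png", as_gray=True).astype(np.float64)

data_range = gt.max() - gt.min()
print("PSNR:", peak_signal_noise_ratio(gt, pred, data_range=data_range))
print("SSIM:", structural_similarity(gt, pred, data_range=data_range))
```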
The model is set up inside the `train_and_preprocess.ipynb` Colab notebook. We use two models first proposed at CVPR 2022 (https://arxiv.org/abs/2111.08918): SwinIR-LTE and EDSR-LTE, which take SwinIR and EDSR as base models and further employ a Local Texture Estimator (LTE) to improve them.
The model is trained on high-resolution/low-resolution (HR-LR) pairs generated by downsampling the OHRC images by 16x (the approximate resolution ratio between OHRC and TMC). Downsampling is performed implicitly at runtime. For inference and validation purposes, a separate dataset has been created in the folder `lr_train_16x`.
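To illustrate what one such pair looks like, the sketch below downsamples a 256x256 HR crop by 16x. Bicubic resampling and the file names are assumptions; the actual downsampling happens inside the training code at runtime and may use a different kernel.

```python
# Minimal sketch: derive a 16x-downsampled LR image from an HR crop.
# Bicubic resampling is an assumption; the training pipeline does this
# implicitly at runtime and may use a different kernel.
from PIL import Image

SCALE = 16
hr = Image.open("hr_crop.png")  # e.g. a 256x256 OHRC crop (placeholder name)
lr = hr.resize((hr.width // SCALE, hr.height // SCALE), Image.BICUBIC)
lr.save("lr_crop.png")          # 16x16 LR input paired with the 256x256 HR target
```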
- Set the variables:
```
path_for_dataet=/path/to/dir/containing/ohrc_folders_in_pds4_format/
cropped_images=/path/to/destination/where/cropped/pngs/will/be/saved/
```
The directory pointed to by `path_for_dataet` should be structured as:
```
root
├── ohrc1
├── ohrc2
└── ohrc3
```
- We divide the .IMG files present in the OHRC dataset into small chunks and generate PNGs of shape 256x256.
- We then remove dark patches (see the sketch below).
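The sketch below outlines this chunk-and-filter step. It assumes the .IMG raster has already been loaded into a 2D NumPy array (GDAL, for example, can read PDS4 products), and the darkness threshold of 20 is an assumption; the notebook's actual criterion may differ.

```python
# Minimal sketch: tile a loaded OHRC raster into 256x256 crops and
# discard dark patches. The mean-intensity threshold of 20 (0-255 scale)
# is an assumption; the notebook's criterion may differ.
import numpy as np
from PIL import Image

CROP = 256
DARK_THRESHOLD = 20.0

def save_bright_crops(img: np.ndarray, out_dir: str, stem: str) -> None:
    """Save every sufficiently bright 256x256 crop of `img` as a PNG."""
    h, w = img.shape
    for y in range(0, h - CROP + 1, CROP):
        for x in range(0, w - CROP + 1, CROP):
            crop = img[y:y + CROP, x:x + CROP]
            if crop.mean() < DARK_THRESHOLD:
                continue  # skip dark patches
            Image.fromarray(crop.astype(np.uint8)).save(f"{out_dir}/{stem}_{y}_{x}.png")
```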
- Execute all the cells under the heading Training Model to set up the conda environment.
- To train the SwinIR-LTE model, run:
```
!python train.py --config configs/train-div2k/train_swinir-lte.yaml --gpu 0
```
- To train the EDSR-LTE model, run:
```
!python train.py --config configs/train-div2k/train_edsr-baseline-lte.yaml --gpu 0
```
- Run the last two cells in `train_and_preprocess.ipynb` to get the output for any image in PNG format.
atlas.png is a 16x upscaled image generated by upscaling a section of ch2_tmc_ndn_20211225T1926394313_d_dtm_d18.tif. The section is approximately 1 degree in width; its corner coordinates are UL=(273.4, -55.91), UR=(273.5, -55.91), LL=(273.59, -56.86), LR=(273.49, -56.87). The image was produced by stitching together 256x256 outputs, each obtained by upscaling a 32x32-pixel chunk of the original TMC image. To stitch the images, run stitch.ipynb.
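The stitching step itself amounts to placing each 256x256 output tile back onto a grid. Below is a minimal sketch; the row-major tile_{row}_{col}.png naming scheme is an assumption, and stitch.ipynb may index tiles differently.

```python
# Minimal sketch: stitch a grid of 256x256 tiles into a single image.
# Assumes tiles are named tile_{row}_{col}.png; stitch.ipynb may use a
# different naming/indexing scheme.
import numpy as np
from PIL import Image

TILE = 256

def stitch(tile_dir: str, rows: int, cols: int, out_path: str) -> None:
    canvas = np.zeros((rows * TILE, cols * TILE), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            tile = np.asarray(Image.open(f"{tile_dir}/tile_{r}_{c}.png").convert("L"))
            canvas[r * TILE:(r + 1) * TILE, c * TILE:(c + 1) * TILE] = tile
    Image.fromarray(canvas).save(out_path)

# Example: stitch("tiles/", rows=32, cols=32, out_path="atlas.png")
```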
All training was performed on an Nvidia RTX P8 16GB GPU.