Real-ESRGAN

This is a forked version of Real-ESRGAN. This repo includes detailed tutorials on how to run Real-ESRGAN locally on Windows, through either the .exe or PyTorch, for both images and videos.

This version of Real-ESRGAN is out of date. The main branch now officially supports Windows; go to the main branch here. You can still use this repo as a reference for setting up the environment and similar steps.


  1. Colab Demo for Real-ESRGAN.
  2. Portable Windows executable file. You can find more information here.

Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration.
We extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data.

🚩 Updates

  • The inference code supports: 1) tile options; 2) images with alpha channel; 3) gray images; 4) 16-bit images.
  • ✅ The training codes have been released. A detailed guide can be found in Training.md.

📖 Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data

[Paper]   [Project Page]   [Demo]
Xintao Wang, Liangbin Xie, Chao Dong, Ying Shan
Applied Research Center (ARC), Tencent PCG
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences


Any updates on the main repository will not be mirrored here. Please use this repo only as a tutorial reference, and refer to the original repository for any new updates.

There are 2 options to run Real-ESRGAN:

  1. Windows Executable Files (.exe)
  2. CUDA & PyTorch

Windows Executable Files (.exe) VULKAN ver.

(Roughly a 1:4 speed ratio against CUDA: in the time the Vulkan executable processes 1 image, the CUDA version can process 4.)

Does not require an NVIDIA GPU.

You can download Windows executable files from https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRGAN-ncnn-vulkan-20210725-windows.zip

This executable file is portable and includes all the binaries and models required. No CUDA or PyTorch environment is needed.

  1. Place your images at the same folder level as the .exe file.

  2. In your command prompt, cd to where the .exe file is located, then run the following command (replace the <> with the corresponding name):

    realesrgan-ncnn-vulkan.exe -i <input_image> -o output.png
  3. (Optional) Run Real-ESRGAN over a video

    I've written a simple Python script that generates a .bat file to run the executable over every frame of a video. Download func.py from this repo (a rough sketch of what such a script can do is shown after this list).

    Open the Anaconda prompt and run these commands to install the required libraries:

    pip install opencv-python
    conda install -c conda-forge ffmpeg
    

    Create a folder called "📂input_videos" and drop the video inside this folder.

    📂Real-ESRGAN-Master/
     ├── 📂input_videos/
     │   └── 📜your_video.mp4 <--
    

    Run the following command in the Anaconda prompt (replace the <>):

    python func.py <your_video_file>
    

    After everything finishes, you can find your result under the name <your_video_name>_result.mp4
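
For reference, here is a rough, hypothetical sketch of how such a helper script can work. It is not the actual func.py, and the frames/ and frames_out/ folder names are made up for illustration: it extracts every frame with OpenCV, writes a .bat file that runs the executable on each frame, and appends an ffmpeg line that reassembles the upscaled frames at the original frame rate.

    # Hypothetical sketch, not the actual func.py from this repo.
    # Extract frames with OpenCV and write a .bat that upscales each one.
    import os
    import sys
    import cv2

    video_path = sys.argv[1]            # e.g. input_videos/your_video.mp4
    name = os.path.splitext(os.path.basename(video_path))[0]
    os.makedirs("frames", exist_ok=True)
    os.makedirs("frames_out", exist_ok=True)

    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)     # original frame rate, reused later
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        count += 1
        cv2.imwrite(f"frames/{name}{count}.png", frame)
    cap.release()

    with open("upscale.bat", "w") as bat:
        for k in range(1, count + 1):
            bat.write(f"realesrgan-ncnn-vulkan.exe -i frames/{name}{k}.png "
                      f"-o frames_out/{name}{k}.png\n")
        # stitch the upscaled frames back together at the original frame rate
        # (%% escapes the percent sign inside a .bat file)
        bat.write(f"ffmpeg -framerate {fps} -i frames_out/{name}%%d.png "
                  f"-c:v libx264 -pix_fmt yuv420p {name}_result.mp4\n")
    print(f"Wrote upscale.bat for {count} frames at {fps:.2f} fps")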

Note that it may introduce block inconsistency (and also generate slightly different results from the PyTorch implementation), because this executable file first crops the input image into several tiles, processes them separately, and finally stitches them back together.
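
To make the tiling idea concrete, here is a minimal, hypothetical sketch of tile-based processing, with a stand-in nearest-neighbour upscaler in place of the real model. Because each tile is upscaled without seeing its neighbours, the tile borders do not always line up perfectly, which is the block inconsistency mentioned above.

    # Minimal sketch of tile-based upscaling (illustration only).
    import numpy as np

    def upscale_tile(tile: np.ndarray, scale: int = 4) -> np.ndarray:
        # Placeholder for the real model; nearest-neighbour repeat stands in here.
        return tile.repeat(scale, axis=0).repeat(scale, axis=1)

    def upscale_by_tiles(img: np.ndarray, tile_size: int = 128, scale: int = 4) -> np.ndarray:
        h, w, c = img.shape
        out = np.zeros((h * scale, w * scale, c), dtype=img.dtype)
        for y in range(0, h, tile_size):
            for x in range(0, w, tile_size):
                tile = img[y:y + tile_size, x:x + tile_size]
                # Each tile is processed in isolation, so adjacent tiles may not
                # blend perfectly, producing visible seams ("block inconsistency").
                out[y * scale:(y + tile.shape[0]) * scale,
                    x * scale:(x + tile.shape[1]) * scale] = upscale_tile(tile, scale)
        return out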

This executable file is based on the wonderful Tencent/ncnn and realsr-ncnn-vulkan by nihui.


CUDA & PyTorch

(Roughly 4x faster than the Vulkan executable: in the time Vulkan processes 1 image, CUDA can process 4.)

Requires an NVIDIA GPU.

Installation

  1. Clone the repo. Either download this repo manually through the download button at the top right, or clone it with git:

    git clone https://github.com/xinntao/Real-ESRGAN.git

    and enter the folder with the command

    cd <your_file_path>/Real-ESRGAN
  2. Install the dependent packages (a quick GPU sanity check is sketched at the end of this section)

    conda create -n RESRGAN python=3.7
    conda activate RESRGAN #activate the virtual environment
    conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
    pip install basicsr
    pip install -r requirements.txt
  3. Download pre-trained models

    Download the pre-trained model here: RealESRGAN_x4plus.pth

    and put it in experiments/pretrained_models

    📂Real-ESRGAN-Master/
     ├── 📂experiments/
     │   └── 📂pretrained_models/
     │       └── 📜RealESRGAN_x4plus.pth
  4. Inference to obtain image results! Drag and drop any images into the "📂inputs" folder, and run the following command:

    python inference_realesrgan.py --model_path experiments/pretrained_models/RealESRGAN_x4plus.pth --input inputs

    You can find your results in the "📂results" folder!

  5. (Optional) Inference to obtain video results!

    If you want to upscale a video, you will have to manually separate the video into images with FFmpeg (a script that automates this whole step is sketched at the end of this section).

    First, install FFmpeg:

    conda install -c conda-forge ffmpeg
    

    Then drag and drop your video into the base folder, i.e. directly inside "📂Real-ESRGAN-Master" at the same level as "📂experiments".

    Convert your video into PNG frames with the following command (replace the <> with your video's name):

    ffmpeg -i <your_video.format, eg: video.mp4> inputs/<video_name>%d.png
    

    Run the AI

    python inference_realesrgan.py --model_path experiments/pretrained_models/RealESRGAN_x4plus.pth --input inputs

    Replace the details in <> and run this command to stitch the upscaled frames back into a video at your original frame rate:

    ffmpeg -framerate <your original video's FPS> -i results/<video_name>%d.png -c:v libx264 -pix_fmt yuv420p <video_name>_result.mp4

Your video is now upscaled 4x and can be found under the name <video_name>_result.mp4

Remember to delete all the images inside "📂inputs" if you want to run on another video or image.
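
Before running inference, it is worth confirming that the environment created in step 2 can actually see your GPU. A minimal check, run inside the activated RESRGAN environment:

    # quick sanity check that PyTorch was installed with CUDA support
    import torch

    print("PyTorch version:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))

If this prints False for CUDA, double-check the cudatoolkit install from step 2 before running inference.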
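
If you run step 5 often, the extract, upscale, and reassemble loop can be scripted. Below is a hypothetical helper, not part of this repo: it assumes ffmpeg and ffprobe are on your PATH, that inference_realesrgan.py accepts the --model_path and --input flags used above, and that the upscaled frames land in "📂results" with the same <video_name>%d numbering as in the commands above.

    # Hypothetical helper script: automates step 5 of the CUDA workflow
    # (split video into frames -> upscale the frames -> reassemble the video).
    import os
    import subprocess
    import sys

    video = sys.argv[1]                                  # e.g. my_clip.mp4
    name = os.path.splitext(os.path.basename(video))[0]

    # read the source frame rate with ffprobe (assumed to be installed)
    fps = subprocess.check_output([
        "ffprobe", "-v", "error", "-select_streams", "v:0",
        "-show_entries", "stream=r_frame_rate",
        "-of", "default=noprint_wrappers=1:nokey=1", video,
    ]).decode().strip()                                  # e.g. "30000/1001"

    os.makedirs("inputs", exist_ok=True)
    os.makedirs("results", exist_ok=True)

    # 1) split the video into PNG frames inside the inputs folder
    subprocess.run(["ffmpeg", "-i", video, f"inputs/{name}%d.png"], check=True)

    # 2) upscale every frame (same command as step 4 above)
    subprocess.run([
        "python", "inference_realesrgan.py",
        "--model_path", "experiments/pretrained_models/RealESRGAN_x4plus.pth",
        "--input", "inputs",
    ], check=True)

    # 3) stitch the upscaled frames back together at the original frame rate
    subprocess.run([
        "ffmpeg", "-framerate", fps, "-i", f"results/{name}%d.png",
        "-c:v", "libx264", "-pix_fmt", "yuv420p", f"{name}_result.mp4",
    ], check=True)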

💻 Training

A detailed guide can be found in Training.md.

BibTeX

@Article{wang2021realesrgan,
    title={Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
    author={Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
    journal={arXiv:2107.10833},
    year={2021}
}

📧 Contact

If you have any questions, please join the Discord channel: https://dsc.gg/bycloud.