Thanks for your paper and code, I really liked the way you built your LR/HR pairs.
In the paper, you use four different losses to estimate the kernel, following what was done in KernelGAN. However, you slightly modify the original KernelGAN objective by removing the sparsity loss and adding the following loss:
Can you explain the advantages of these changes compared to the classical KernelGAN method?
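For reference, the sparsity term I am referring to is the concave penalty on kernel entries used as a regularizer in KernelGAN. A minimal sketch, assuming the common form `sum(|k|^p)` with `p < 1` (the exponent here is an assumption, not taken from your paper):

```python
import numpy as np

def sparsity_loss(kernel, p=0.5):
    """Concave sparsity penalty sum(|k|^p), p < 1 (assumed form).

    For a fixed L1 mass, a kernel whose energy is concentrated in few
    entries yields a LOWER value than one whose energy is spread out,
    so minimizing this term pushes the estimated kernel toward sparsity.
    """
    return float(np.sum(np.abs(kernel) ** p))

# Illustration: same total mass (1.0), different concentration.
sparse = np.zeros((3, 3))
sparse[1, 1] = 1.0                # all mass in one entry
spread = np.full((3, 3), 1.0 / 9) # mass spread uniformly

print(sparsity_loss(sparse))  # 1.0
print(sparsity_loss(spread))  # 3.0 -> penalized more than the sparse kernel
```

Dropping this term means the estimated kernel is no longer explicitly pushed toward few nonzero entries, which is why I am curious what the replacement loss buys you.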
Thanks,
Charles