input/output size for inference / transfer #24
Hi @materialvision, thank you for your interest in our work. The input/output sizes are determined by the data transformations in the training configuration:

a. During training: the transformations are specified in the training config; see uvcgan2/scripts/celeba/train_celeba_male2female_translation.py, lines 71 to 75 in f741603.

b. During inference: the transformations will resize images so that the smallest side has a size of 256 pixels, followed by taking a center crop of size 256x256 pixels; see uvcgan2/scripts/celeba/train_celeba_male2female_translation.py, lines 76 to 79 in f741603.

These configuration options allow one to control the sizes of the images the networks see. Please let me know if I should elaborate more on these points.
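The inference-time geometry described in point (b) can be sketched as plain arithmetic; `resize_dims` and `center_crop_box` below are illustrative helpers, not functions from the uvcgan2 codebase:

```python
def resize_dims(width, height, target=256):
    """Scale (width, height) so that the smallest side equals `target`."""
    scale = target / min(width, height)
    return round(width * scale), round(height * scale)

def center_crop_box(width, height, size=256):
    """Return (left, top, right, bottom) of a centered size x size crop."""
    left = (width - size) // 2
    top = (height - size) // 2
    return left, top, left + size, top + size

# A 1024x768 image: the smallest side (768) is scaled down to 256.
w, h = resize_dims(1024, 768)   # -> (341, 256)
print(center_crop_box(w, h))    # -> (42, 0, 298, 256)
```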
Thanks for your answer and great work. I just wanted to make things clearer for myself. I have tried to train a model with the following config, but when testing with inference on images of size 2048x2048 I get the following error:

RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 1024 but got size 16384 for tensor number 1 in the list.

Did I miss something here? Maybe it was wrong to adjust the "shape" argument? Thanks again for your help.
I think that is an expected outcome.
No, I think this is correct. The problem happens because the network was trained on random crops of one fixed size, but inference is being run on images of a different size.
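For intuition on where the mismatched numbers in the error may come from: a ViT-style bottleneck flattens the image into a number of tokens that grows with image area. Assuming, purely for illustration, an effective patch stride of 16 pixels, a 512px training crop and a 2048px test image yield exactly the token counts seen in the error message:

```python
def token_count(image_size, patch_stride=16):
    """Number of flattened tokens for a square image, assuming a
    hypothetical effective patch stride (for illustration only)."""
    side = image_size // patch_stride
    return side * side

print(token_count(512))    # -> 1024  (size the network expects)
print(token_count(2048))   # -> 16384 (size produced at inference)
```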
Thank you. Yes, changing the center crop of the test config to 512 does fix the error. But to explain the use case: the goal was to train on 512px images (or crops) to keep the load on the GPU down and train faster, but to infer on larger 2048px images (not downsized or cropped, keeping the full quality). My test project is a de-blur/de-convolution of images, so the model needs to work with larger resolutions. Are there some "adjustments to the transformations", as you mention, that I can make to achieve this? Thanks again for your guidance and advice.
Oh, I see now. Unfortunately, this is not possible with UVCGAN. CycleGAN uses an FCN-type generator, which can transparently work with images of any size. The UVCGAN generator is not an FCN, so one cannot easily train it on crops but infer on full images.
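The reason an FCN accepts any size can be illustrated with the standard convolution output-size formula: each layer's spatial output is computed from its input, so nothing in the network is tied to a fixed resolution. A minimal sketch (this is general conv arithmetic, not uvcgan2-specific):

```python
def conv_out(size, kernel=3, stride=1, padding=1):
    """Spatial output size of a conv layer: floor((n + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# The same layer simply produces larger feature maps for larger inputs:
print(conv_out(256))   # -> 256
print(conv_out(2048))  # -> 2048
```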
Hi, in the "original" PyTorch CycleGAN it is possible to train on larger images (e.g. 2048px) that are cut into square crops of, say, 256 or 512 pixels, by using the arguments --load_size 2048 --crop_size 256, as described here: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/tips.md#trainingtesting-with-high-res-images

When using the resulting model, I can infer on large images even if the model was trained on 256px crops. Would something like that in theory be possible with uvcgan2? Any pointers on how to modify it for this? It is very useful to be able to use the model on larger images in the end.
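For reference, the --load_size/--crop_size behavior described above can be sketched in plain Python; `train_crop` is a hypothetical helper that only computes the random crop window, not part of either codebase:

```python
import random

def train_crop(load_size, crop_size, seed=None):
    """Mimic --load_size/--crop_size: after resizing to load_size,
    pick a random crop_size x crop_size window within the image."""
    rng = random.Random(seed)
    left = rng.randint(0, load_size - crop_size)
    top = rng.randint(0, load_size - crop_size)
    return left, top, left + crop_size, top + crop_size

# Train on 256px crops taken from 2048px images:
left, top, right, bottom = train_crop(load_size=2048, crop_size=256, seed=0)
assert right - left == 256 and bottom - top == 256
```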