
Why does multiscale generation output very different results for two almost identical images? #62

Open
ouhenio opened this issue Jan 1, 2020 · 5 comments

Comments


ouhenio commented Jan 1, 2020

I'm trying to apply style transfer to a video frame by frame, but when I use multiscale generation the results vary heavily, even for frames that are almost identical. Without multiscale generation I don't have this issue, but the resulting image quality is worse. Is there a way to use multiscale generation and avoid this?

ProGamerGov (Owner) commented

Are you running multiscale generation for each frame before adding them together, or adding the frames together for each step?


ouhenio commented Jan 6, 2020

I'm running multiscale generation for every frame and then joining the resulting frames together. These are the commands for each frame:

python ./neural_style.py -seed 100 \
  -style_scale 1 -init image \
  -image_size 256 -num_iterations 1000 -save_iter 50 \
  -content_weight 2 -style_weight 1000 \
  -style_image style.png \
  -content_image content.png \
  -output_image out.png \
  -model_file ./models/vgg19-d01eb7cb.pth \
  -content_layers relu1_1,relu2_1,relu3_1,relu4_1,relu4_2,relu5_1 \
  -style_layers relu3_1,relu4_1,relu4_2,relu5_1 \
  -tv_weight 0.000085 -original_colors 0 && rm *_*0.png

python ./neural_style.py -seed 100 \
  -style_scale 1 -init image \
  -image_size 512 -num_iterations 500 -save_iter 50 \
  -content_weight 1 -style_weight 1000 \
  -style_image style.png \
  -content_image content.png \
  -output_image out2.png \
  -init_image out.png \
  -model_file ./models/vgg19-d01eb7cb.pth \
  -content_layers relu1_1,relu2_1,relu3_1,relu4_1,relu4_2,relu5_1 \
  -style_layers relu3_1,relu4_1,relu4_2,relu5_1 \
  -tv_weight 0.000085 -original_colors 0 && rm *_*0.png

python ./neural_style.py -seed 100 \
  -style_scale 1 -init image \
  -image_size 1024 -num_iterations 500 -save_iter 50 \
  -content_weight 0 -style_weight 1000 \
  -style_image style.png \
  -content_image content.png \
  -output_image out3.png \
  -init_image out2.png \
  -model_file ./models/vgg19-d01eb7cb.pth \
  -content_layers relu1_1,relu2_1,relu3_1,relu4_1,relu4_2,relu5_1 \
  -style_layers relu3_1,relu4_1,relu4_2,relu5_1 \
  -tv_weight 0.000085 -original_colors 0 && rm *_*0.png
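
The stylized frames can then be joined back into a video with something like ffmpeg. The frame rate and the out3_%04d.png naming pattern below are only illustrative, assuming each frame's final pass is renamed into a numbered sequence:

ffmpeg -framerate 24 -i out3_%04d.png -c:v libx264 -pix_fmt yuv420p stylized.mp4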


sawtl commented Aug 31, 2020

Even with the same seed, you will not get the same starting point at each execution. Even worse, the stylization is not applied at the same level of detail because the image sizes differ between passes. Combine both and you get these results. The differences are smaller if you keep the same image size.
You can try adding the previous frame's output to the style image list for the next frame, as sketched below. It may reduce the differences.
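
For example, keeping everything else the same, the style flags for frame N could become something like the line below, assuming -style_image accepts a comma-separated list of images (as in the original neural-style) and prev_out3.png is frame N-1's final result; the blend weights here are only illustrative:

  -style_image style.png,prev_out3.png -style_blend_weights 10,1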


jamahun commented Jan 6, 2022

@ouhenio Out of curiosity, have you been able to create a loop to automate this process, so you don't need to run the shell commands for every frame by hand? I'm trying to do something similar but not having much luck.


ouhenio commented Jan 7, 2022

Hi @jamahun!

It's easy to run the script for every image automatically; take a look at Python's subprocess module.

I don't have the code anymore (this issue is 2 years old 😅), but as you can see in the subprocess docs, it lets you run shell commands and pass runtime values to them, such as the filename of each frame in your case.
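
Roughly, the loop could look something like this. The frames/ directory, the frame_*.png naming and the per-pass settings are just placeholders, and I've left out the layer and weight flags for brevity:

import subprocess
from pathlib import Path

STYLE = "style.png"
MODEL = "./models/vgg19-d01eb7cb.pth"
# (image_size, num_iterations) for the three multiscale passes
PASSES = [(256, 1000), (512, 500), (1024, 500)]

for frame in sorted(Path("frames").glob("frame_*.png")):
    prev_out = None
    for i, (size, iters) in enumerate(PASSES, start=1):
        out = f"{frame.stem}_pass{i}.png"
        cmd = [
            "python", "./neural_style.py",
            "-seed", "100", "-init", "image",
            "-style_image", STYLE,
            "-content_image", str(frame),
            "-output_image", out,
            "-image_size", str(size),
            "-num_iterations", str(iters),
            "-model_file", MODEL,
        ]
        if prev_out is not None:
            # initialize the larger pass from the previous pass's result
            cmd += ["-init_image", prev_out]
        subprocess.run(cmd, check=True)  # raises if a pass fails
        prev_out = out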
