
Speed optimizations, original-colors option, usability improvements #37

Open · wants to merge 13 commits into master

Conversation


NameRX commented Jan 10, 2017

Here are some useful changes to the artistic-videos code.

  1. Some speed optimizations in the .sh and .lua code. Optical flow calculation can now be safely launched in the background, with the forward and backward passes started as separate processes. Since optical flow runs on the CPU, the main video stylization process can run on the GPU at the same time, without waiting for the first optical flow pre-calculations to finish. If the CPU is slow and the files required for stylization are not ready yet, the main process waits for them (controlled by the -timer parameter of artistic_video.lua, in seconds, default 600). A combined sketch of items 1-3 follows the example command below.

  2. Added an -original_colors option, as in neural-style. If you set this to 1, the output image keeps the colors of the content image.

  3. Some changes in stylizeVideo.sh: more options can be controlled from the console, DeepFlow is launched in the background, the Ctrl+C hotkey kills all background and foreground processes, the default values were changed to more usable ones, and the ffmpeg resolution can now be set by width alone (at this moment only ffmpeg is supported, I suppose). The script is now handier: processing can be launched from anywhere by drag-and-dropping it and the required files into any terminal window, like this:

'/media/andrew/backup/neural/sea/artistic-videos-master/stylizeVideo.sh' '/media/andrew/backup/neural/sea/artistic-videos-master/in/den_eqypt.mov' '/media/andrew/backup/neural/sea/artistic-videos-master/Styles/Akhenaten.jpg'
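
Taken together, the three changes above amount to a launch sequence roughly like the sketch below. This is only an illustration of the idea, not the exact stylizeVideo.sh code; the arguments in angle brackets are placeholders.

# Sketch: Ctrl+C kills every process started here, the CPU-bound optical
# flow passes run in the background, and GPU stylization starts at once.
trap 'kill $(jobs -p) 2>/dev/null; exit 1' INT
./makeOptFlow.sh <forward-flow-arguments> &
./makeOptFlow.sh <backward-flow-arguments> &
th artistic_video.lua -timer 600 -original_colors 1 <other-arguments>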

19 Jan 2017 update:

  1. More stylizeVideo.sh changes:
    Added some useful features. The script can now read user arguments stored in a .txt file, resume from the last frame after an interruption, and auto-load previous settings. All intermediate files are now stored in the inProgress directory. Script parameters are automatically exported to a run_parameters.txt file, which can later be loaded by passing it as the last argument to stylizeVideo.sh (a sketch of this load/save logic follows the example parameter file below).

New usage available:

./stylizeVideo.sh <path_to_video> <path_to_style_image>
./stylizeVideo.sh <path_to_video> <path_to_style_image> <path_to_parameters>.txt
./stylizeVideo.sh <path_to_video> <path_to_parameters>.txt
./stylizeVideo.sh <path_to_parameters>.txt

Not all arguments are needed; if an argument is missing, the script uses default values. Here is an example params_file_example.txt:

backend="cudnn"
filepath="/media/andrew/backup/neural/sea/artistic-videos-master/in/Metro_inside3.mov"
gpu="0"
init="random"
num_iterations="200,200"
opt_res="2"
resolution="1280"
style_image="/media/andrew/backup/neural/sea/artistic-videos-master/Styles/CqUoO3jWYAA8bsI.jpg"
style_scale="1.0"
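
Since the parameter file is just a list of shell variable assignments, loading previous settings and exporting the current ones can be done with something like the following. This is a simplified sketch under the assumption that the script sources the file directly; the variable name params_file is illustrative and the real stylizeVideo.sh may differ in detail.

# Sketch: load a previously saved parameters file, then write the
# settings actually used for this run back out for later reuse.
if [ -n "$params_file" ] && [ -f "$params_file" ]; then
  . "$params_file"                # each line is a plain VAR="value" assignment
fi
{
  echo "backend=\"$backend\""
  echo "resolution=\"$resolution\""
  echo "style_image=\"$style_image\""
  echo "style_scale=\"$style_scale\""
} > run_parameters.txt
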
  2. makeOptFlow.sh changes:
    -stepSize argument change: the step size can now be set to create long-term flow. It may be an array, similar to the -flow_relative_indices parameter, for example 1,15,40. The default value 1 produces optical flow for short-term consistency only (a sketch of handling such a list follows below).
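
A comma-separated step list like that can be handled internally with a simple loop; the fragment below is a sketch of the idea rather than the actual makeOptFlow.sh code.

# Sketch: split a comma-separated step list and compute flow per step.
stepSize="1,15,40"
for step in ${stepSize//,/ }; do
  echo "computing flow for frame pairs (i, i+$step)"   # flow call would go here
done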

20 Jan 2017 update:

  1. Added the -original_colors option for artistic_video_multiPass.lua as well. If you set this to 1, the output image keeps the colors of the content image.

  2. Also added an easy-to-launch script, stylizeVideo_multipass.sh, similar to stylizeVideo.sh.
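
Assuming the new launcher mirrors the single-pass interface described above, usage would look like:

./stylizeVideo_multipass.sh <path_to_video> <path_to_style_image>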

@Teepareep

killing it. thanks!

In some cases, when the required file had just been created but was not yet fully written to disk, errors appeared. Also changed the sleep function to the Unix sleep command.
waitForFile function fix
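
The underlying idea of the fix, sketched here in shell for illustration (the actual waitForFile fix lives in this PR's Lua code and may differ in detail): wait not just until the file exists, but until its size has stopped changing, so a partially written flow file is never consumed.

# Sketch: wait until a file exists and its size has been stable
# for one polling interval before it is used.
wait_for_file() {
  local f="$1" prev=-1 size=0
  while true; do
    if [ -f "$f" ]; then
      size=$(stat -c %s "$f")
      if [ "$size" -gt 0 ] && [ "$size" -eq "$prev" ]; then
        return 0
      fi
      prev=$size
    fi
    sleep 1
  done
}
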
NameRX changed the title from "Speed optimisations, original-colors option, usability improvements" to "Speed optimizations, original-colors option, usability improvements" on Jan 17, 2017
Finally fixed the waitForFile function, no more errors when the file is in use... I hope so.

martync commented Feb 15, 2017

Hi,
First, you've done a great job; this lib plus these improvements makes it my favorite library for style transfer on video (excellent results and amazing performance).

I'm having an issue with high-resolution images and I think it comes from this PR's changes.
(I've cloned @NameRX's branch.)

With a 1280x720 video and a same-size style image, the script fails at iteration 2/2000 with no error message.

...
Setting up temporal consistency.	
Setting up style layer  	2	:	relu1_1	
Setting up style layer  	7	:	relu2_1	
Setting up style layer  	12	:	relu3_1	
Setting up style layer  	21	:	relu4_1	
Setting up content layer	23	:	relu4_2	
Setting up style layer  	30	:	relu5_1	
Detected 105 content images.	
Running optimization with L-BFGS	
<optim.lbfgs> 	creating recyclable direction/step/history buffers	
<optim.lbfgs> 	function value changing less than tolX	
Running time: 2s	
Iteration 2 / 2000	
  Content 1 loss: 0.000000	
  Style 1 loss: 515876.342773	
  Style 2 loss: 91595007.812500	
  Style 3 loss: 52583910.156250	
  Style 4 loss: 1642812250.000000	
  Style 5 loss: 125239.051819	
  Total loss: 1787632283.363342	
...

It renders an out-0001.png file which is clean but has no style transfer; it is identical to the source.
I've tried with the adam optimizer and it seems to fix the issue, but the result is not as good. Also, the problem does not happen with the main branch of the manuelruder repo.

I have the latest CUDA and cuDNN correctly installed on Ubuntu 16.04 64-bit, a Titan X, 30 GB RAM, and a [email protected], so I don't think handling this resolution should be a problem.

Let me know if you think of anything obvious, I'll be happy to help. I'll keep digging and will tell you if I find something interesting. My guess is there is something around the waitForFile functions when one of the scripts needs more time to initialize.


martync commented Feb 15, 2017

FYI, removing the init argument on stylizeVideo.sh makes it work again.

https://github.com/manuelruder/artistic-videos/pull/37/files#diff-19f29606d12ea3a4edddf1ee679016baR375


NameRX commented Feb 15, 2017

@martync I had the same bug; try decreasing the style weight to lower values, this may help.
This is not a waitForFile bug. I think a combination of input parameters produces this error.
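
For reference, and assuming the -style_weight option carried over from neural-style is what is meant here, lowering it would look something like this (the value is only an example):

th artistic_video.lua -style_weight 50 <other-arguments>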


NameRX commented Feb 16, 2017

@martync I checked the original code and my branch with the same input arguments, and this bug appeared in both cases. That means this is not a bug in my changes; people have had the same problem with the neural-style code.


martync commented Feb 16, 2017

Thanks for your reply. Good thing that it's not a bug shipped with your PR. I will try changing the style weight.


Teepareep commented Mar 25, 2017

Your code passes the ffmpeg flag -framerate to avconv, but avconv requires -r, not -framerate.

When avconv is used, it fails with the error "Option framerate not found".
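
A defensive way to handle that difference would be to pick the flag based on which tool is installed. This is a sketch of the idea, not code from this PR, and the $framerate variable is just illustrative.

# Sketch: ffmpeg understands -framerate for image sequences, avconv wants -r.
if command -v ffmpeg >/dev/null 2>&1; then
  FFMPEG=ffmpeg; RATE_FLAG=-framerate
else
  FFMPEG=avconv; RATE_FLAG=-r
fi
$FFMPEG $RATE_FLAG "$framerate" -i <input_frame_pattern> <output_video>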


linrio commented Apr 15, 2017

@NameRX Do you mean that decreasing the style weight makes it possible to process larger videos?


linrio commented Apr 15, 2017

@martync Why has the image out-0001.png not been changed?


martync commented Apr 28, 2017

@linrio I have no clue, that was the issue


Vassay commented Sep 10, 2017

Hey, great work on this branch, loving it! One question: is it possible to queue multiple stylizeVideo.sh commands, so that multiple video files are processed automatically in succession?


martync commented Sep 10, 2017

@Vassay I'm currently working on it, but I don't think it should be part of this code. My approach is to use a lockfile and launch a new instance once the lockfile is free. I'm doing it in Python, which is more familiar to me; if you're interested, I can share what I did.
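
The same lockfile idea can be expressed directly in shell with flock; the snippet below is only a sketch of the general approach (martync's actual Python implementation is not shown here).

# Sketch: each invocation waits for an exclusive lock, so queued runs
# execute one after another instead of in parallel.
(
  flock -x 9
  ./stylizeVideo.sh <path_to_video> <path_to_parameters>.txt
) 9>/tmp/stylizeVideo.lock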


Vassay commented Sep 14, 2017

@martync I ended up using the quick and dirty method of putting || between commands, and it seems to work for me. Like this:
./stylizeVideo.sh 27.mov params_27.txt || ./stylizeVideo.sh 22.mov params_22.txt
But I think others might benefit from seeing your approach; it seems far more intelligent than mine =)

@manuelruder (Owner)

Well done! I didn't merge because I never found the time to review and test your changes in depth, but there is now a link in the README to your fork.
