Speed optimizations, original-colors option, usability improvements #37
base: master
Conversation
Here are some useful changes for the artistic-videos code. 1. Some speed-up optimizations in stylizeVideo.sh and the Lua code. Optical flow calculation can now be safely launched in the background, with the forward and backward passes launched separately. Since this process runs on the CPU, the main video-stylization process can now run on the GPU simultaneously, without waiting for the optical flow to be pre-computed first. If the CPU is slow and the files required for stylization are not ready yet, the main process waits for them (controlled by the -timer parameter of artistic_video.lua, in seconds, default 600). 2. Added an -original_colors option like the one in neural-style: if you set it to 1, the output image keeps the colors of the content image. 3. Some changes in stylizeVideo.sh: more options controllable from the console, background DeepFlow launching, the Ctrl+C hotkey kills all background and foreground processes, default values changed to more usable ones, and the ffmpeg resolution can now be set by width only.
In some cases, when a required file had just been created but was not yet fully written to disk, errors appeared. Also changed the sleep function to the Unix sleep command.
waitForFile function fix
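The fixed waitForFile logic itself isn't shown in the thread. Here is a minimal Python sketch of the idea described above, assuming the fix works by requiring the file's size to be non-zero and stable between two polls; the function name and the exact heuristic are assumptions, and the real helper lives in the repo's shell/Lua code:

```python
import os
import time

def wait_for_file(path, timeout=600, poll=1.0):
    """Wait until `path` exists and its size is stable across two polls.

    A file that has just been created may not be fully written yet, so
    checking for existence alone is not enough -- we also require the
    size to stop growing before declaring the file ready.
    """
    prev = -1
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.isfile(path):
            size = os.path.getsize(path)
            if size > 0 and size == prev:
                return True  # size unchanged between polls: assume complete
            prev = size
        time.sleep(poll)
    return False  # timed out waiting for the file
```

A writer that is still appending to the file will keep pushing the "size stable" check one poll further, which is the behavior the commit message describes.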
makeOptFlow.sh changes: -stepSize argument change: the step size can now be set to create long-term flow. It may be an array, similar to the -flow_relative_indices parameter, for example: 1,15,40. The default value 1 computes optical flow for short-term flow only.
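The real parsing happens inside makeOptFlow.sh; as a rough Python illustration of what an array-valued -stepSize implies (both function names below are invented for this sketch):

```python
def parse_step_sizes(arg):
    """Parse a -stepSize argument like "1,15,40" into a list of ints.

    Mirrors how an array-valued parameter such as -flow_relative_indices
    is written; the actual script does this in shell.
    """
    return [int(s) for s in arg.split(",") if s]

def flow_pairs(num_frames, steps):
    """Frame pairs (i, i+step) for which optical flow would be computed.

    With steps=[1] this yields only consecutive-frame (short-term) flow;
    larger steps add long-term correspondences.
    """
    return [(i, i + step)
            for step in steps
            for i in range(1, num_frames - step + 1)]
```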
stylizeVideo.sh changes: added some useful features. The script can now read user arguments stored in a .txt file, can continue from the last frame after an interruption, and auto-loads previous settings. All intermediate files are now stored in the inProgress directory. Script parameters are automatically exported to a run_parameters.txt file, which can be loaded afterwards by passing it as the last argument to stylizeVideo.sh.
Usage:
./stylizeVideo.sh <path_to_video> <path_to_style_image>
./stylizeVideo.sh <path_to_video> <path_to_style_image> <path_to_parameters>.txt
./stylizeVideo.sh <path_to_video> <path_to_parameters>.txt
./stylizeVideo.sh <path_to_parameters>.txt
params_example.txt:
backend="cudnn"
filepath="/media/andrew/backup/neural/sea/artistic-videos-master/in/Metro_inside3.mov"
gpu="0"
init="random"
num_iterations="200,200"
opt_res="2"
resolution="1280"
style_image="/media/andrew/backup/neural/sea/artistic-videos-master/Styles/CqUoO3jWYAA8bsI.jpg"
style_scale="1.0"
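The parameters file shown above is simple key="value" lines. A hypothetical Python sketch of saving and restoring such a file (the actual script does this in shell, and these helper names are invented):

```python
def save_params(path, params):
    """Write parameters as key="value" lines, like run_parameters.txt."""
    with open(path, "w") as f:
        for key in sorted(params):
            f.write('%s="%s"\n' % (key, params[key]))

def load_params(path):
    """Read a key="value" parameters file back into a dict.

    Lines without '=' are ignored; surrounding double quotes on the
    value are stripped.
    """
    params = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if "=" in line:
                key, _, value = line.partition("=")
                params[key] = value.strip('"')
    return params
```

In the shell script itself, a file in exactly this format can simply be sourced, which is presumably why the key="value" layout was chosen.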
Added an -original_colors option for artistic_video_multiPass.lua: if you set it to 1, the output image keeps the colors of the content image. Also added an easy-to-launch script, stylizeVideo_multipass.sh, similar to stylizeVideo.sh.
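neural-style's -original_colors is commonly implemented as a luminance-only transfer: keep the luminance of the stylized result but the chroma of the content frame. A small pure-Python sketch of that idea, assuming YUV as the color space (the actual Lua implementation may differ), with images as flat lists of (r, g, b) tuples in 0..255:

```python
def original_colors(content_rgb, stylized_rgb):
    """Combine stylized luminance with the content image's colors."""
    def rgb_to_yuv(r, g, b):
        y = 0.299 * r + 0.587 * g + 0.114 * b
        u = -0.14713 * r - 0.28886 * g + 0.436 * b
        v = 0.615 * r - 0.51499 * g - 0.10001 * b
        return y, u, v

    def yuv_to_rgb(y, u, v):
        r = y + 1.13983 * v
        g = y - 0.39465 * u - 0.58060 * v
        b = y + 2.03211 * u
        # round and clamp back into valid 8-bit range
        return tuple(max(0, min(255, round(c))) for c in (r, g, b))

    out = []
    for (cr, cg, cb), (sr, sg, sb) in zip(content_rgb, stylized_rgb):
        _, u, v = rgb_to_yuv(cr, cg, cb)   # chroma from the content frame
        y, _, _ = rgb_to_yuv(sr, sg, sb)   # luminance from the stylized frame
        out.append(yuv_to_rgb(y, u, v))
    return out
```

A real implementation would operate on whole tensors rather than per-pixel tuples, but the color math is the same.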
Finally fixed the waitForFile function, no more errors when a file is in use... I hope so...
Hi, I'm having an issue with high-resolution images, and I think it comes from this PR's changes. With a 1280x720 video plus a same-size style image, the script fails at the first iteration (2/2000) with no error message.
It renders an out-0001.png file which is clean: it has no style transfer, it's the same as the source. I have the latest CUDA and cuDNN correctly installed on my Ubuntu 16.04 64-bit machine, a Titan X, 30 GB of RAM, and a [email protected], so I don't think handling this resolution should be a problem. Let me know if you think of something obvious, I'll be happy to help. I'll keep digging and tell you if I find something interesting. I guess there is something around the waitForFile functions when one of the scripts requires more time to initialize.
FYI, removing the |
@martync I had the same bug; try decreasing the style weight to lower values, this may help.
@martync I checked the original code and my branch with the same input arguments, and this bug appeared in both cases. That means this is not my bug; people have had the same problem with the neural-style code.
Thanks for your reply. Good thing that it's not a bug shipped with your PR. I will try changing the style weight. |
Your code passes the ffmpeg flag When avconv is used, it spits out the error |
@NameRX You mean that decreasing the style weight makes it possible to process larger videos?
@martync Why has the image out-0001.png not been changed?
@linrio I have no clue, that was the issue |
Hey, great work on this branch, loving it! One question: is it possible to queue multiple stylizeVideo.sh commands, so that multiple video files would be processed automatically in succession?
@Vassay I'm currently working on it, but I don't think it should be part of this code. My approach was to use a lockfile and launch a new instance once the lockfile is free. I'm doing it in Python; it's more friendly to me. If you're interested, I could share what I did.
@martync I ended up using the quick and dirty method of putting || between commands; it seems to work for me. Like this:
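The queuing idea discussed here can be sketched as a tiny runner that starts each stylization command only after the previous one exits, regardless of its exit status (a hypothetical helper, not part of the PR; note that shell || only runs the next command when the previous one fails, whereas ; runs it unconditionally):

```python
import subprocess

def run_queue(commands):
    """Run commands one after another, unconditionally.

    Each entry is an argument list as accepted by subprocess. The next
    job starts only after the previous one finishes; the exit codes are
    collected so failed jobs can be reported at the end.
    """
    results = []
    for cmd in commands:
        results.append(subprocess.call(cmd))  # blocks until cmd exits
    return results
```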
Well done! I didn't merge because I never found the time to review and test your changes in depth, but there is now a link in the README to your fork. |
Here are some useful changes for artistic-videos code.
Some speed-up optimizations in the .sh and .lua code. Optical flow calculation can now be safely launched in the background, with the forward and backward passes launched separately. Since this process runs on the CPU, the main video-stylization process can now run on the GPU simultaneously, without waiting for the optical flow to be pre-computed first. If the CPU is slow and the files required for stylization are not ready yet, the main process waits for them (controlled by the -timer parameter of artistic_video.lua, in seconds, default 600).
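A schematic of that producer/consumer arrangement in Python (the command and file names are placeholders; the actual scripts do this with background shell jobs):

```python
import os
import subprocess
import time

def run_concurrently(producer_cmd, needed_file, timeout=600):
    """Start a background producer, then wait for the file it writes.

    The CPU-bound producer (here, the optical-flow job) runs in the
    background while the caller continues; the caller blocks only when
    a required file has not appeared yet, up to `timeout` seconds.
    Returns the Popen handle so the caller can wait() on it later.
    """
    proc = subprocess.Popen(producer_cmd)
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.isfile(needed_file) and os.path.getsize(needed_file) > 0:
            return proc  # file is available; GPU work can proceed
        time.sleep(0.1)
    proc.terminate()
    raise TimeoutError("producer did not create %s in time" % needed_file)
```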
Added an -original_colors option like the one in neural-style: if you set it to 1, the output image keeps the colors of the content image.
Some changes in stylizeVideo.sh: more options controllable from the console, background DeepFlow launching, the Ctrl+C hotkey kills all background and foreground processes, default values changed to more usable ones, and the ffmpeg resolution can now be set only by width (at this moment only ffmpeg is supported, I suppose). The script is now more handy: processing can be launched from anywhere by drag'n'dropping it and the required files into any terminal window, like that:
19 jan 2017 update:
Added some useful features. The script can now read user arguments stored in a .txt file, can continue from the last frame after an interruption, and auto-loads previous settings. All intermediate files are now stored in the inProgress directory. Script parameters are automatically exported to a run_parameters.txt file, which can be loaded afterwards by passing it as the last argument to stylizeVideo.sh.
New usage available:
Not all arguments are needed; if an argument is missing, the script uses default values. Here is an example params_file_example.txt:
-stepSize argument change: the step size can now be set to create long-term flow. It may be an array, similar to the -flow_relative_indices parameter: 1,15,40. The default value 1 computes optical flow for short-term flow only.
20 jan 2017 update:
Added an -original_colors option for artistic_video_multiPass.lua: if you set it to 1, the output image keeps the colors of the content image.
Also added an easy-to-launch script, stylizeVideo_multipass.sh, similar to stylizeVideo.sh.