Temporal Coherence Workflow #14
-
I must finish my current contract ASAP so I can test this! It sounds like you found something really impressive, and I can't wait to try that technique.
-
Should I apply the AESD effect to a new adjustment layer, or to the video layer I want to use as a reference?
-
Add me on Discord!
-
Here is my first result with the technique you found and shared! Nothing groundbreaking content-wise, but technically impressive. This is extremely smooth, and done from a single keyframe, which is this video's very last frame. Very similar workflow and result to what you get from EBSynth, but all done directly in AE.
redcarloop_aesd_roto_fill.mp4
Thank you so much, Master @Trentonom0r3!
-
So, I started doing some tests on your footage-- The issue seems to be that the comp start was not set to 0, for some reason. After changing that, the batch works properly!
-
And another test-- this time done with 3 keyframes and some simple matching for extra consistency.
3keyframetest.mp4
-
While running some tests, I came across a workflow that achieves temporal coherence almost entirely from the SD output itself. With a bit of post-processing, temporal coherence reaches a level close to EBSynth.
Discoveries:
- Img2img alt runs faster if you're using images generated by SD.
- Img2img alt accurately turns stylized and/or animated frames into realistic images.
Workflow:
1. Generate frame 1 with your ControlNets. After you have a seed you like, set the seed and proceed to step 2.
2. Using the same ControlNets as in step 1, set up a batch generation over the whole video. Enable reference only on CN 3, and give it the generated frame 1 you created. Enable loopback on the reference only unit (see the API sketch after this list).
3. Run the batch generation.
4. (Optional) Using the same process, run the generated images through img2img alt to get a photorealistic version. After this step, the style flicker is minimal, and easily corrected using deflickering and/or temporal smoothing.
5. Using optical flow data and occlusion masks, interpolate extra frames in the generated video to account for choppiness (a sketch follows this list).
6. Perform a final deflicker and temporal-smoothing pass, with an optional color-correction step (sketch below).
You now have a much more temporally coherent video!
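For anyone who wants to script step 2 instead of clicking through the UI: the batch maps naturally onto the AUTOMATIC1111 web UI's HTTP API (launch with `--api`). This is a minimal sketch, not the exact setup from the post. The `/sdapi/v1/img2img` endpoint is real, but the ControlNet unit fields come from the sd-webui-controlnet extension and vary between versions; the per-unit `loopback` flag in particular is an assumption, and all paths, the seed, and the denoising strength are placeholders.

```python
import base64
import glob
import requests

API = "http://127.0.0.1:7860"  # default local web UI address

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

# Hypothetical paths: frame 1 generated in step 1, plus the source frames.
ref_frame = b64("frames/generated_frame_0001.png")

for frame_path in sorted(glob.glob("frames/source_*.png")):
    payload = {
        "init_images": [b64(frame_path)],
        "seed": 1234567890,            # the seed you fixed in step 1
        "denoising_strength": 0.5,
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    # ... units 1 and 2: the same ControlNets used in step 1 ...
                    {
                        # Unit 3: reference only, seeded with generated frame 1.
                        "enabled": True,
                        "module": "reference_only",
                        "image": ref_frame,
                        "weight": 1.0,
                        "loopback": True,  # ASSUMPTION: flag name/support varies by version
                    },
                ]
            }
        },
    }
    resp = requests.post(f"{API}/sdapi/v1/img2img", json=payload)
    resp.raise_for_status()
    # The API returns generated images as base64 PNG strings.
    with open(frame_path.replace("source_", "styled_"), "wb") as f:
        f.write(base64.b64decode(resp.json()["images"][0]))
```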
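Step 5 can be prototyped in a few lines of OpenCV. The sketch below shows the general technique (dense Farneback flow plus a forward-backward consistency check for occlusions), not the specific tool used here; dedicated interpolators like RIFE or FILM will do better, but the idea is the same.

```python
import cv2
import numpy as np

def warp(img, flow):
    """Backward-warp: out(p) = img(p + flow(p))."""
    h, w = flow.shape[:2]
    grid = np.dstack(np.meshgrid(np.arange(w), np.arange(h))).astype(np.float32)
    return cv2.remap(img, grid + flow, None, cv2.INTER_LINEAR)

def midpoint_frame(a, b, occl_thresh=1.5):
    """Synthesize the frame halfway between consecutive frames a and b."""
    ga = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)
    gb = cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)
    fwd = cv2.calcOpticalFlowFarneback(ga, gb, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(gb, ga, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    # Forward-backward consistency: where fwd(p) + bwd(p + fwd(p)) is large,
    # the pixel is occluded in one of the frames.
    roundtrip = fwd + warp(bwd, fwd)
    occluded = np.linalg.norm(roundtrip, axis=2) > occl_thresh
    # First-order approximation: sample each neighbor halfway along its flow.
    mid_a = warp(a, -0.5 * fwd).astype(np.float32)
    mid_b = warp(b, -0.5 * bwd).astype(np.float32)
    mid = 0.5 * (mid_a + mid_b)
    mid[occluded] = mid_a[occluded]  # trust the earlier frame where occluded
    return mid.astype(np.uint8)
```

Usage is just `midpoint_frame(frames[i], frames[i + 1])` for each consecutive pair, interleaving the results to double the frame rate.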
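And for step 6, one simple way to approximate a deflicker plus temporal-smoothing pass outside AE (an assumed approach, for illustration only) is per-channel statistics matching against a slowly drifting reference, followed by a light exponential moving average. It will ghost on fast motion, so treat it as a starting point rather than a replacement for a proper deflicker plugin.

```python
import numpy as np

def deflicker(frames, ema_alpha=0.8):
    """frames: iterable of HxWx3 uint8 arrays; yields corrected frames."""
    ref_mean = ref_std = ema = None
    for f in frames:
        x = f.astype(np.float32)
        mean, std = x.mean(axis=(0, 1)), x.std(axis=(0, 1)) + 1e-6
        if ref_mean is None:
            ref_mean, ref_std = mean, std
        else:
            # Drift the reference slowly so real lighting changes survive.
            ref_mean = 0.95 * ref_mean + 0.05 * mean
            ref_std = 0.95 * ref_std + 0.05 * std
        x = (x - mean) / std * ref_std + ref_mean   # global tone matching
        # Light temporal EMA to damp residual frame-to-frame shimmer.
        ema = x if ema is None else ema_alpha * x + (1 - ema_alpha) * ema
        yield np.clip(ema, 0, 255).astype(np.uint8)
```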