I figure it would be easier to support stereo generation from an already existing RGB video plus a depth-map video, to free up VRAM.
That is, the first step would take the RGB video plus the depth map as input and only compute the occlusion mask and the splatted right-eye video.
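The proposed step (RGB + depth in, occlusion mask + right view out) can be sketched with a naive forward-splatting pass. This is only an illustrative sketch, not the repo's actual implementation: it assumes the depth map is disparity-like (larger values = nearer pixels) and shifts each pixel left by a depth-proportional amount; destination pixels that never receive a source pixel form the occlusion mask.

```python
import numpy as np

def splat_right_view(rgb, depth, max_disp=16):
    """Naive forward splatting of one frame (illustrative sketch only).

    rgb:   (H, W, 3) source (left-eye) frame.
    depth: (H, W) disparity-like map, larger = nearer (assumption).
    Returns the synthesized right view and a boolean occlusion mask
    marking pixels that received no source pixel (disocclusions).
    """
    h, w, _ = rgb.shape
    disp = np.rint(depth * max_disp).astype(int)  # per-pixel shift in px
    right = np.zeros_like(rgb)
    filled = np.zeros((h, w), dtype=bool)
    # Splat far-to-near so nearer pixels overwrite farther ones.
    for idx in np.argsort(depth, axis=None):
        y, x = divmod(idx, w)
        xr = x - disp[y, x]  # right-eye x-coordinate
        if 0 <= xr < w:
            right[y, xr] = rgb[y, x]
            filled[y, xr] = True
    return right, ~filled  # right view, occlusion mask
```

Run per frame over the RGB clip and its depth clip; the mask is what a later inpainting stage would fill. A real implementation would splat with sub-pixel weights rather than nearest-pixel overwrites.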
Thanks for your suggestions.
Separating depth estimation from splatting should indeed require less VRAM. We will consider it in the next update.
Thanks!
I have updated "depth_splatting_inference.py", and it now requires less VRAM.
I also added a new function, "DepthSplatting", in "depth_splatting_inference.py", which you can use to splat a video with its video depth as input.
Is this the proper input video type to use: an SBS video frame set up like this? Or do I keep the videos separate?
Are there any specific arguments to pass to the inference code for it to work? Are they --video_depth and --depth_vis?
Or is the --video_depth argument the path to the depth video itself?