
Unchanged #10

Open
ReverendThing opened this issue Oct 13, 2024 · 2 comments

Comments

@ReverendThing
Firstly, thank you for all your awesome work on depth creation! I have been poring over all of your repositories, and it's so cool to have new, interesting ideas and work to look at!

Regarding BoostYourOwnDepth: are there any parameters we can tweak to increase or decrease the "merge level"? I have used SSI depth maps as the low-res base and DepthAnythingV2 maps as the high-res input (since these have greater fine detail), and the images are the same resolution. But when I run the tool, the output looks exactly the same as the original low_res image, perhaps with a tiny bit more variation in the background, but none of the fine detail has changed at all.

I don't know if it is a Windows thing, but I also realised that the script was simply overwriting my original low_res images instead of writing to the output directory. For some reason the line `path = os.path.join(result_dir, images.name)` just resulted in it overwriting the original image file, so I changed it to hardcode my output directory with a counter to stop this. I passed the output directory as `--output_dir "C:\Users\Chris\AI Depthify\Depthified\Output"`.
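
For reference, here is a minimal sketch of what I suspect was going on (whether `images.name` actually holds the full source path on Windows is just my guess, I haven't traced it through the repo):

```python
import ntpath  # Windows path rules, so this behaves the same on any OS

result_dir = r"C:\Users\Chris\AI Depthify\Depthified\Output"
# Hypothetical: suppose images.name holds the full source path instead of just the filename
name = r"C:\Users\Chris\AI Depthify\Depthified\low_res\0001.png"

# join() discards result_dir when the second argument is already an absolute path,
# which would make the script write straight back over the original low_res file
print(ntpath.join(result_dir, name))

# Joining only the basename keeps the output inside result_dir
print(ntpath.join(result_dir, ntpath.basename(name)))
```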

@ReverendThing (Author)

One thought - is the model trained on colourised depth maps? Maybe my problem is that I'm doing grayscale?

@sebastian-dille (Collaborator)

Hi @ReverendThing ,
thanks so much for the feedback.

There is no specific parameter for the merge level. The merging network is supposed to take the high frequencies from the high-res input and merge them onto the low-res base, while ignoring estimation artifacts. It is also trained on grayscale images; the colorization is just for visualization purposes.
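As a rough intuition for what the merge is meant to do, here is a conceptual sketch using plain Gaussian filtering. The actual merging network is learned and does not literally compute this, so treat it only as an illustration of "coarse structure from the low-res map, fine detail from the high-res map":

```python
import cv2
import numpy as np

def naive_frequency_merge(low_res_depth, high_res_depth, sigma=5.0):
    """Toy stand-in for the merge: keep the coarse structure of the low-res
    estimate and add only the fine detail of the high-res estimate."""
    low = low_res_depth.astype(np.float32)
    high = high_res_depth.astype(np.float32)
    base = cv2.GaussianBlur(low, (0, 0), sigma)            # coarse structure
    detail = high - cv2.GaussianBlur(high, (0, 0), sigma)  # high frequencies only
    return base + detail
```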

The merging should in principle work for any depth map, but since we released the code some time ago, there is a chance that DepthAnythingV2's output distribution is too different from the merging network's training data.
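In the meantime, a quick sanity check is to compare the value ranges of the two inputs and bring them to a common scale before merging. A minimal sketch, assuming both maps are loaded as single-channel float arrays (the function names are just illustrative):

```python
import numpy as np

def depth_stats(depth, label):
    """Print basic statistics to spot mismatched value ranges between the two inputs."""
    d = np.asarray(depth, dtype=np.float32)
    print(f"{label}: min={d.min():.4f} max={d.max():.4f} mean={d.mean():.4f} std={d.std():.4f}")

def to_unit_range(depth):
    """Rescale a single-channel depth map to [0, 1] so both inputs share the same range."""
    d = np.asarray(depth, dtype=np.float32)
    return (d - d.min()) / (d.max() - d.min() + 1e-8)
```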

Do you mind sharing an example? I can do some tests to see if I can get it to work.

Cheers
