Am I generating colors correctly? #27
Comments
Hello, could you tell me how the colors of this 3D model are generated? Why does my 3D model have no color?
---

Mix colors according to the 'normal' vector. Two images are enough to create it: a full back and a front.
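As a rough illustration of this suggestion (a minimal sketch, not code from this repo; the function and array names are hypothetical and assume unit per-vertex normals plus colors already sampled from one front and one back rendering):

```python
import numpy as np

def blend_front_back(normals, front_rgb, back_rgb):
    """Blend per-vertex colors from a front and a back projection.

    normals:   (V, 3) unit vertex normals (hypothetical inputs)
    front_rgb: (V, 3) colors sampled from the front image
    back_rgb:  (V, 3) colors sampled from the back image
    """
    # Weight each vertex by how much its normal faces the front camera
    # (+z toward the viewer): 1.0 = fully front-facing, 0.0 = fully back.
    w = np.clip(normals[:, 2:3], -1.0, 1.0) * 0.5 + 0.5
    return w * front_rgb + (1.0 - w) * back_rgb
```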
---

Dear MustafaHilmiYAVUZHAN,

Thank you for your email. I am happy to help with color generation. If you need a more detailed explanation, please let me know. I am at your disposal to provide any information you need.

Best regards,
Housseinelmi Mohamed
---

Hi, how did you get the RGB texture?
---

@MustafaHilmiYAVUZHAN thanks for your input. Do you have any reference code for this? When you say normal vector, do you mean the w vector? Should I run multiple w vectors through torgb and then take an average or something?
---

Hi! Very interesting attempt! Could you please share the code for "look for the nearest color on the isosurface mesh"? Thanks a lot!
---

Hi, did anyone find a good way to get vertex color?

---
I would like some advice on extracting a voxel representation for generating colors.
I was able to extract very poor vertex color by editing these lines in the `G.sample_mixed` loop in `gen_videos_proj_withseg.py`:
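The edited lines themselves did not survive in this post. The sketch below shows the kind of change being described, assuming the EG3D-style API where `G.sample_mixed(coordinates, directions, ws, ...)` returns a dict with `'sigma'` and `'rgb'` entries (the `'rgb'` values being features, not final colors), and reusing the variables already defined in `gen_videos_proj_withseg.py`:

```python
import torch

# Hypothetical sketch of the edit: inside the batched sampling loop,
# collect the 'rgb' features next to the densities instead of discarding them.
sigmas, feats = [], []
with torch.no_grad():
    for head in range(0, samples.shape[1], max_batch):
        n = min(max_batch, samples.shape[1] - head)
        out = G.sample_mixed(
            samples[:, head:head + n],                   # (1, n, 3) xyz query points
            transformed_ray_directions_expanded[:, :n],  # (1, n, 3), all (0, 0, -1)
            ws, truncation_psi=0.7, noise_mode='const')
        sigmas.append(out['sigma'])                      # (1, n, 1) densities
        feats.append(out['rgb'])                         # (1, n, C) color features
sigmas = torch.cat(sigmas, dim=1)
feats = torch.cat(feats, dim=1)
```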
If I look for the nearest color on the isosurface mesh, it gives me this:

[screenshot: vertex-colored mesh]

But when I look at the render I see this:

[screenshot: rendered output]
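For reference, the "nearest color" lookup can be as simple as a KD-tree query from mesh vertices to the sampled grid points. A sketch, assuming `verts` comes from marching cubes on the sigma grid and `points`/`colors` are the flattened sample coordinates and their decoded colors (all names hypothetical):

```python
import numpy as np
from scipy.spatial import cKDTree

# points: (N, 3) xyz of the sampled grid, colors: (N, 3) their decoded RGB,
# verts:  (V, 3) isosurface vertices from marching cubes -- all hypothetical.
tree = cKDTree(points)
_, idx = tree.query(verts, k=1)   # index of the nearest sampled point per vertex
vertex_colors = colors[idx]       # (V, 3) per-vertex color for the mesh
```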
I realize that the render has a final superresolution pass that makes it so clear, but I feel like I might be missing something.
My understanding of the process is something like:

1. `G.sample_mixed` takes the samples (xyz coordinates in a 3D grid), the `transformed_ray_directions_expanded` (which is just (0, 0, -1)), and `w` (the latent vectors of shape (14, 512) from the mapping network output, combining latent and camera pose), and then outputs a few results (sigma, rgb, and a copy of xyz).
2. The rgb output is then converted to actual colors by the `G.torgb` network. This is what I find tricky: the network seems designed to process 2D images, but here we only have a bundle of N=10M feature vectors. So I pass it in a 10Mx1 image, and I hope this is OK. Also, `torgb` expects only a single `w` from the 14 options. I just picked the first one, `ws[0,0,0,:1]`, but I'm not sure if this is correct. Would it be better to run `torgb` for each `w` and then average them, or find the median, or something else?
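Concretely, the "10Mx1 image" trick described in step 2 might look like the following sketch, assuming a StyleGAN2-style ToRGB layer that takes a feature image and a single 512-dim latent (the exact signature of `G.torgb` is an assumption here):

```python
import torch

# feats: (1, N, C) feature vectors from sample_mixed; ws: (1, 14, 512).
x = feats.permute(0, 2, 1).unsqueeze(-1)   # reshape to a (1, C, N, 1) "image"
w = ws[:, 0]                               # pick the first of the 14 latents
rgb = G.torgb(x, w)                        # assumed ToRGB-style call -> (1, 3, N, 1)
rgb = rgb.squeeze(-1).permute(0, 2, 1)     # back to (1, N, 3)
rgb = (rgb.clamp(-1, 1) + 1) / 2           # map from [-1, 1] to [0, 1] for display
# For N in the millions, run the torgb call in chunks to stay within memory.
```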
My questions are:

1. Can I give `torgb` a 10Mx1 image, or is this damaging the performance of the feature-to-color conversion?
2. Is it OK to use only the first of the `ws`, or should I be using multiple ones somehow? Does each of the `ws` latents represent a different camera pose, or do they represent something else?

Thanks @SizheAn!