
Problem regarding added noise in the samples #171

Open
ArghyaChatterjee opened this issue Feb 12, 2024 · 7 comments

Comments

@ArghyaChatterjee

Hello,

I was testing NVISII with DOPE. I set the z-depth limit to around 2 m. When the object is close to the camera, the rendering and the image samples look good, but as soon as the object moves farther than roughly 1 m from the camera, noise starts to appear in the rendered scene (on top of the model) as well as in the image samples.

N.B.: I am using an Nvidia RTX 3070 Ti with the 535 driver.

**Bad samples** (all 1280 × 720, 100 samples per pixel)

| Image | Z (m) |
|-------|-------|
| 00003 | 1.74 |
| 00016 | 1.50 |
| 00025 | 1.75 |
| 00028 | 1.12 |

**Good samples** (all 1280 × 720, 100 samples per pixel)

| Image | Z (m) |
|-------|-------|
| 00011 | 0.37 |
| 00029 | 0.98 |
| 00030 | 0.85 |
| 00012 | 0.43 |

For all samples (good and bad), here is the configuration I used during dataset generation:


import argparse
import os
import random

import visii  # NVISII Python bindings

parser = argparse.ArgumentParser()

parser.add_argument(
    '--spp',
    default=100,
    type=int,
    help = "number of samples per pixel; higher is more costly"
)
parser.add_argument(
    '--width',
    default=1280,
    type=int,
    help = 'image output width'
)
parser.add_argument(
    '--height',
    default=720,
    type=int,
    help = 'image output height'
)
# TODO: change for an array
parser.add_argument(
    '--objs_folder_distractors',
    default='distractor_models/',
    help = "folder of distractor objects to load"
)
parser.add_argument(
    '--objs_folder',
    default='base_mug_model/',
    help = "folder of target objects to load"
)
parser.add_argument(
    '--path_single_obj',
    default=None,
    help='If you have a single obj file, path to the obj directly.'
)
parser.add_argument(
    '--scale',
    default=1,
    type=float,
    help='Specify the scale of the target object(s). If the obj mesh is in '
         'meters -> scale=1; if it is in cm -> scale=0.01.'
)

# for zed image testing
# parser.add_argument(
#     '--skyboxes_folder',
#     default='background_hdr_images_zed/',
#     help = "dome light hdr"
# )

# for external image testing
parser.add_argument(
    '--skyboxes_folder',
    default='background_hdr_images_dome/',
    help = "dome light hdr"
)

# for single image testing
# parser.add_argument(
#     '--skyboxes_folder',
#     default='background_hdr_images_single/',
#     help = "dome light hdr"
# )

parser.add_argument(
    '--nb_objects',
    default=1,
    type = int,
    help = "how many objects"
)
parser.add_argument(
    '--nb_distractors',
    default=3,
    type = int,
    help = "how many distractors"
)
parser.add_argument(
    '--nb_frames',
    default=1,
    type = int,
    help = "how many frames to save"
)
parser.add_argument(
    '--skip_frame',
    default=200,
    type=int,
    help = "how many frames to skip"
)
parser.add_argument(
    '--noise',
    action='store_true',
    default=False,
    help = "if set, the ray-traced output is not sent to OptiX's denoiser"
)
parser.add_argument(
    '--outf',
    default='mug_dataset/',
    help = "output filename inside output/"
)
parser.add_argument('--seed',
    default = None,
    help = 'seed for random selection'
)

parser.add_argument(
    '--interactive',
    action='store_true',
    default=False,
    help = "show the renderer in its own window"
)

parser.add_argument(
    '--motionblur',
    action='store_true',
    default=False,
    help = "use motion blur to generate images"
)

parser.add_argument(
    '--box_size',
    default=0.5,
    type=float,
    help = "size of the box in which objects move"
)

parser.add_argument(
    '--focal-length',
    default=521.6779174804688,
    type=float,
    help = "focal length of the camera"
)

# parser.add_argument(
#     '--optical_center_x',
#     default=630.867431640625,
#     type=float,
#     help = "optical center x of the camera"
# )

# parser.add_argument(
#     '--optical_center_y',
#     default=364.546142578125,
#     type=float,
#     help = "optical center y of the camera"
# )

parser.add_argument(
    '--visibility-fraction',
    action='store_true',
    default=False,
    help = "Compute the fraction of visible pixels and store it in the "
           "`visibility` field of the json output. Without this argument, "
           "`visibility` is always set to 1. Slows down rendering by about "
           "50 %%, depending on the number of visible objects."
)

parser.add_argument(
    '--debug',
    action='store_true',
    default=False,
    help="Render the cuboid corners as small spheres. Only for debugging purposes, do not use for training!"
)

opt = parser.parse_args()

if os.path.isdir(f'output/{opt.outf}'):
    print(f'folder output/{opt.outf}/ exists')
else:
    os.makedirs(f'output/{opt.outf}')
    print(f'created folder output/{opt.outf}/')

opt.outf = f'output/{opt.outf}'

if opt.seed is not None:
    random.seed(int(opt.seed))


visii.initialize(headless = not opt.interactive)

if not opt.motionblur:
    visii.sample_time_interval((1,1))

# sample rays only through pixel centers (disables anti-aliasing jitter)
visii.sample_pixel_area(
    x_sample_interval = (.5, .5),
    y_sample_interval = (.5, .5))

# visii.set_max_bounce_depth(1)

if not opt.noise:
    visii.enable_denoiser()
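One detail worth double-checking in the script above: the `--noise` flag is a double negative, so the OptiX denoiser runs only when the flag is absent. A minimal standalone sketch of that logic (plain argparse, flag name taken from the script):

```python
import argparse

# Passing --noise *disables* the denoiser; the default (flag absent)
# leaves denoising on, which is what you want for clean distant objects.
parser = argparse.ArgumentParser()
parser.add_argument(
    '--noise',
    action='store_true',
    default=False,
    help="if set, the ray-traced output is not sent to the denoiser"
)

denoiser_on_by_default = not parser.parse_args([]).noise       # no flag given
denoiser_off_with_flag = parser.parse_args(['--noise']).noise  # --noise given
```

So as long as the script is launched without `--noise`, `visii.enable_denoiser()` is reached.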

Is there any particular way I can improve the data quality for samples more than 1 m from the camera? Thanks in advance.
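For context on the question: in a path tracer, per-pixel noise falls only as 1/sqrt(spp), so distant objects that cover few pixels can stay visibly speckled at 100 spp even when nearby objects look clean. A standalone sketch of that scaling (plain Python, not the NVISII API), which suggests trying a substantially higher `--spp` for the far poses:

```python
import random
import statistics

def pixel_estimate(spp, rng):
    # Monte Carlo estimate of one pixel: mean of `spp` random radiance samples
    return sum(rng.random() for _ in range(spp)) / spp

def noise_level(spp, trials=2000, seed=0):
    # standard deviation of the pixel estimate across many independent renders
    rng = random.Random(seed)
    return statistics.stdev(pixel_estimate(spp, rng) for _ in range(trials))

ratio = noise_level(100) / noise_level(400)
# quadrupling the samples roughly halves the noise (ratio is close to 2)
```

The 1/sqrt(spp) falloff means going from 100 to 400 spp only halves the noise, so large increases are needed for the far objects.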

@TontonTremblay
Collaborator

TontonTremblay commented Feb 13, 2024 via email

@ArghyaChatterjee
Author

Though I am using Nvidia driver 535, I don't think that's the exact issue. For some reason, the renderer seems to have a hard time with objects farther than 1 m. This is purely from observation, and it may be improvable by tweaking something I am unaware of.

Here is a video:
https://www.youtube.com/watch?v=dQHALpEL5ME

Look at 1:57 in the video, where the noise is apparent on the surface.

I also tried changing the image resolution (500 × 500) to see if there was any improvement, but I did not see much.

@ArghyaChatterjee
Author

ArghyaChatterjee commented Feb 13, 2024

Also, here is a demonstration with the downgraded driver version (525). The result looks the same or similar.

https://www.youtube.com/watch?v=y9hRhYSCKhk

@ArghyaChatterjee
Author

I can send you the script by email if you want to try it out, to see if we are on the same page.

@TontonTremblay
Collaborator

Yeah, I have observed this problem on my end as well. I don't have a solution. I have slowly migrated the scripts to use Blender, and I could probably push a Blender 4.0 version to generate the data: https://github.com/TontonTremblay/blender_robot_animation. Right now this code only does robot animation + export, but the bounding-box code is there, so I can probably do something similar to what is in the DOPE repo. Are you on a deadline for this? I think Blender would be a little more bulletproof for the future.

@ArghyaChatterjee
Author

Hey @TontonTremblay, thanks for the reply. I have been trying to use NVISII for object distances under 1 m and exploring BlenderProc2 for distances over 1 m. Though I am not sure whether I will be successful, exploring is always helpful. Thanks for the suggestion; I will keep an eye on your blender_robot_animation repo as well. Also, not having an updated version of NVISII is an issue, so thanks for continuing to work on this (I assume you will continue to update and maintain the new repo). It's very helpful for our research.

@TontonTremblay
Collaborator

For sure I will try my best. Thank you for your kind words.

As for NVISII, @natevm is really the only one who could maintain this repo "easily", but he is trying to graduate and secure a job, so I would not expect him to maintain it. There might or might not be alternatives in the near future that are as easy to use as NVISII, with Python bindings and a bigger community to maintain them. Who knows, maybe ChatGPT-5 will be able to maintain this code base :P
