Hi,
I'm preprocessing training datasets to train your network, but I've run into several problems.
I found ego_pose.json and scene.json in the nuScenes dataset. However, I don't know how to separate the scenes and put them into the corresponding scene folders as you did (in the test dataset), or how to extract ego (.float3), floats & floats-label (.float3), and scene.txt from nuScenes. Is there any code you referenced, or any other details about this?
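For context, here is my current guess at how the nuScenes tables link together, so you can tell me if the grouping logic below matches your preprocessing (the toy records are made-up stand-ins, not real nuScenes data; in my understanding, ego_pose connects to a scene only indirectly via sample_data and sample):

```python
from collections import defaultdict

# Toy stand-ins for the real nuScenes JSON tables. The linkage I assume:
#   ego_pose <- sample_data.ego_pose_token
#   sample_data.sample_token -> sample
#   sample.scene_token -> scene
scene = [{"token": "sc1", "name": "scene-0001"}]
sample = [{"token": "sa1", "scene_token": "sc1"}]
sample_data = [{"token": "sd1", "sample_token": "sa1",
                "ego_pose_token": "ep1", "is_key_frame": True}]
ego_pose = [{"token": "ep1", "timestamp": 0,
             "translation": [1.0, 2.0, 0.0],
             "rotation": [1.0, 0.0, 0.0, 0.0]}]

def group_ego_poses_by_scene(scene, sample, sample_data, ego_pose):
    """Return {scene_name: [ego_pose record, ...]} for key frames."""
    sample_to_scene = {s["token"]: s["scene_token"] for s in sample}
    scene_name = {sc["token"]: sc["name"] for sc in scene}
    pose_by_token = {p["token"]: p for p in ego_pose}
    grouped = defaultdict(list)
    for sd in sample_data:
        if not sd["is_key_frame"]:
            continue  # only group annotated key frames
        name = scene_name[sample_to_scene[sd["sample_token"]]]
        grouped[name].append(pose_by_token[sd["ego_pose_token"]])
    return dict(grouped)

grouped = group_ego_poses_by_scene(scene, sample, sample_data, ego_pose)
print(grouped.keys())  # dict_keys(['scene-0001'])
```

Is this roughly what your scene-folder split does, or did you use the official nuScenes devkit instead of reading the JSON tables directly?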
Following your approach, I trained the inpainting module (https://github.com/JiahuiYu/generative_inpainting) on semantic segmentation maps (using the Mapillary Vistas dataset: 18,000 training images, 2,000 validation images). However, training is not going well. Did you change anything in that code?
You need a mask to erase dynamic objects from the semantic map at test time. Where did you get that mask, and how did you use it? Or did you use random masks? If you used random masks, how can you find and erase the dynamic objects?
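To be concrete about what I mean by "find and erase dynamic objects": right now I build the mask directly from the predicted semantic map by selecting the dynamic class IDs (the IDs below are placeholders for whatever vehicle/person classes the label set uses, not your actual configuration):

```python
import numpy as np

# Hypothetical dynamic class IDs; substitute the vehicle/person ids of
# the actual label set (e.g. Mapillary Vistas classes).
DYNAMIC_CLASS_IDS = {11, 12, 13}

def dynamic_object_mask(sem_map, dynamic_ids=DYNAMIC_CLASS_IDS):
    """Binary mask (1 = dynamic pixel) from an (H, W) semantic id map."""
    return np.isin(sem_map, list(dynamic_ids)).astype(np.uint8)

sem_map = np.array([[0, 11, 11],
                    [0,  5, 13]])
mask = dynamic_object_mask(sem_map)
# mask == [[0, 1, 1],
#          [0, 0, 1]]
```

Is this what you feed to the inpainting network as the erase mask, or do random masks at training time make this step unnecessary?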
In the paper, it is stated that a removed dynamic object of class c is a ground-truth sample of reachability. Does this ground truth correspond to the files in the floats folder of the test dataset you provided? If so, how do you extract the values (e.g. tl_x, tl_y, w, h, score, class_id)? Or do you extract them from the nuScenes dataset?
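For example, my current attempt is to recover (tl_x, tl_y, w, h) from the binary mask of each removed object (this is just my own sketch of one plausible extraction, with made-up score/class placeholders; you may instead project the nuScenes annotations):

```python
import numpy as np

def box_from_mask(instance_mask, score=1.0, class_id=0):
    """(tl_x, tl_y, w, h, score, class_id) from a binary instance mask.
    score and class_id are caller-supplied placeholders here."""
    ys, xs = np.nonzero(instance_mask)
    tl_x, tl_y = int(xs.min()), int(ys.min())
    w = int(xs.max()) - tl_x + 1  # inclusive pixel extent
    h = int(ys.max()) - tl_y + 1
    return (tl_x, tl_y, w, h, score, class_id)

m = np.zeros((5, 6), dtype=np.uint8)
m[1:4, 2:5] = 1          # object occupying rows 1..3, cols 2..4
print(box_from_mask(m))  # -> (2, 1, 3, 3, 1.0, 0)
```

Is this how the floats files are produced, or do the values come straight from the nuScenes annotation tables?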
I'm sorry to bother you and thank you for your great work.