Result #2
Hello. I have a similar problem when using the code you provided to train on ShapeNet and ShapeNetRendering. We found that sdf_loss hardly dropped during training, and we noticed that the preprocessed SDF values are relatively small; are some additional training tricks needed? Also, could you provide a pre-trained model for easier testing? Thanks!
Hello @2577624123, thanks a lot for your interest in our work. Could you please provide more information about your training, testing, and evaluation procedure? Did you use all of the classes and images for training and testing? Did you train the coarse prediction module separately or jointly? How many points did you use to evaluate the reconstructions? Best
Hi @XiaolinHe8, thanks a lot for your interest in our work. Yes, the SDF values are very small; we scaled them by a factor of 10.0 during training. Best
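For other readers hitting the same flat sdf_loss: the scaling described above can be sketched as below. This is a minimal illustration, assuming an L1 loss and a scale factor of 10.0 applied to the ground-truth SDF; the actual loss type and where the scaling is applied in the authors' pipeline are not confirmed here.

```python
import numpy as np

# Assumed scale factor from the reply above; raw preprocessed SDF
# magnitudes are tiny, so scaling them up gives usable gradients.
SDF_SCALE = 10.0

def sdf_l1_loss(pred_sdf: np.ndarray, gt_sdf: np.ndarray) -> float:
    """L1 loss against scaled ground-truth SDF (hypothetical sketch)."""
    return float(np.mean(np.abs(pred_sdf - gt_sdf * SDF_SCALE)))
```

If the network is trained against scaled targets, remember to divide predictions by the same factor before extracting the surface at the zero level set.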
How do I set up the environment for this project? Is there any related documentation?
I am very sorry, but I have put this project on hold because I am busy with other projects.
I currently have a question: could you provide your trained coarse-prediction model for the ShapeNet data? I suspect that something went wrong in my first stage and led to the poor training results. Looking forward to your reply! Best.
Hello. Nice job!
I wonder if there is a parameter problem here; to be clear, I used the parameters in the code as provided.
The results I got on the ShapeNet dataset were not as good as those in the paper; they were quite different. For example, on the chair class:
my CD and IoU differ substantially from the 9.20 and 52.70 reported in the paper.
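Since the maintainer asked above how many points were used for evaluation, it is worth noting that CD is sensitive to sampling density and normalization. Below is a minimal sketch of a symmetric Chamfer Distance between two sampled point sets; whether the paper averages squared or unsquared distances, and how many points it samples, are assumptions that papers vary on.

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point sets a (N,3) and b (M,3).

    Uses unsquared Euclidean distances; some papers use squared
    distances or report the value scaled by a constant, so compare
    conventions before matching numbers in a table.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())
```

Comparing 2,048 sampled points against 30,000, for instance, can shift CD noticeably even for identical meshes, which may explain part of the gap.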
I also wonder why the accuracy is so low.
Here are the parameters I used:
```python
from argparse import ArgumentParser

def get_args():
    parser = ArgumentParser(description='Image_to_3D')
    # Note: type=bool is an argparse pitfall; any non-empty string
    # (including "False") parses as True.
    parser.add_argument('--cuda', type=bool, default=True)
    parser.add_argument('--gpu', type=int, default=0)
    parser.add_argument('--plot_every_batch', type=int, default=10)
    parser.add_argument('--save_every_epoch', type=int, default=25)
    parser.add_argument('--save_after_epoch', type=int, default=1)
    parser.add_argument('--test_every_epoch', type=int, default=25)
    parser.add_argument('--load_pretrain', type=bool, default=True)
    parser.add_argument('--skip_train', action='store_true')
    return parser.parse_args()

if __name__ == '__main__':
    args = get_args()
    print(len(args.testlist))
```
Everything else follows your code.
I want to know whether some parameters, or some other part of the code, need to be changed for the ShapeNet dataset.
I really need your help!
Looking forward to your reply!
Some results: