Some differences from the original code #5
I also wonder about the accuracy achieved by this repo. I tried it but could not get reasonable results.
@JW-J I have not trained the model in this repo because it differs from the original paper. I just wonder how the author of this repo constructed the model.
@zeal-github I have trained this model and found that the loss does not decrease even after several epochs on the ModelNet dataset. Now I doubt the correctness of this repo.
Hi, have you found a correct PointCNN PyTorch project that matches the accuracy reported in the paper?
I'm implementing it right now and will make my implementation public in a few days. Maybe you can follow me on GitHub 😂
Hi jxqhhh, |
I will post my code here in a few hours:
Try this repository:
I slightly modified the code and trained it for the classification task, and it seems to overfit. At epoch 38, the train instance accuracy was 0.802424 while the test accuracy was only 0.637903, and the best instance accuracy of 0.796774 occurred at epoch 36.
It might be a problem with the dataset. I replaced the data with the HDF5 files processed by PointNet, and training seems to be going well:
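For reference, the PointNet-style ModelNet40 HDF5 files store each shard as a `data` array of point clouds and a `label` array of class ids. A minimal loading sketch (the file path and exact array shapes are assumptions about your particular files) might look like:

```python
# Hedged sketch: load one PointNet-style ModelNet40 HDF5 shard.
# Dataset names "data"/"label" follow the common PointNet data release
# convention; adjust if your files use different keys or shapes.
import h5py
import numpy as np

def load_h5(path):
    with h5py.File(path, "r") as f:
        data = f["data"][:]    # (num_clouds, num_points, 3) xyz coordinates
        label = f["label"][:]  # (num_clouds, 1) integer class ids
    return data.astype(np.float32), label.astype(np.int64).squeeze()
```

A quick sanity check on the shapes after loading (e.g. that labels fall in `[0, 40)`) can save a lot of debugging when a training run silently diverges.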
Thanks so much for your work! I have been trying to reproduce the results of PointCNN in PyTorch but still cannot get reasonable results, and I just found your project. However, I have some questions about the code you provided.
The network in this repository differs somewhat from the original one.
1. The author uses three depthwise conv layers to produce the transformation matrix X, while you use one depthwise layer plus two dense layers. This requires more parameters than the original and may cause overfitting.
2. Why did you remove batch norm from your dense layers? I think this is unreasonable.
3. The configuration is quite different from the original one: in the original code, the output channels have a coefficient of 3, and it seems that you use 5 X-Conv layers while the author uses 4.
4. What is the best accuracy you got on ModelNet40?
Nevertheless, thanks for sharing. I hope my questions are not too many.
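To make point 1 concrete, here is a rough PyTorch sketch (not the actual code from either repo; the layer sequence, neighborhood size `K`, and input dims `C` are my assumptions purely for illustration) comparing the parameter counts of the two ways of producing the K×K X-transformation matrix:

```python
# Hypothetical comparison of the two X-transform constructions discussed
# in point 1. Shapes and layer choices are illustrative assumptions.
import torch.nn as nn

K, C = 8, 3  # assumed neighborhood size and input coordinate dims

# Paper-style: a conv over the (C, K) neighborhood lifts it to K*K
# channels, then grouped (depthwise) 1x1 convs refine it per channel.
paper_style = nn.Sequential(
    nn.Conv2d(C, K * K, kernel_size=(1, K)),
    nn.Conv2d(K * K, K * K, kernel_size=1, groups=K * K),
    nn.Conv2d(K * K, K * K, kernel_size=1, groups=K * K),
)

# Repo-style as described in point 1: one conv followed by two dense
# (fully connected) layers over all K*K entries at once.
repo_style = nn.Sequential(
    nn.Conv2d(C, K * K, kernel_size=(1, K)),
    nn.Flatten(),
    nn.Linear(K * K, K * K),
    nn.Linear(K * K, K * K),
)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(paper_style), count(repo_style))
```

Under these assumed sizes the dense variant carries several times more parameters than the depthwise one, since each `Linear(K*K, K*K)` is a full (K·K)² weight matrix while a grouped conv only learns one weight per channel, which illustrates why the extra capacity could plausibly aggravate overfitting.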