Hello, I find your work highly interesting and would like to cite it. When can we expect the paper to be published?
Furthermore, I was curious about the distillation loss. The configuration for the model that achieved 27.1% mIoU on the hidden test set is ssc.yaml, correct? If so, is it also correct that no distillation was used when training that model, since MODEL.DISTILLATION is set to false in the yaml file?
Hello, thank you so much for looking at this repo. Unfortunately, I don't think it will be published, as it was just a project for one of my Master's courses. I can share the report with you, though.
Indeed, the best results on the SemanticKITTI benchmark test set were reached without the distillation loss. Unintuitively, training with distillation gave a small improvement on the validation set, but this did not carry over to the test set. Moreover, the distillation loss slows training, since it requires an additional forward pass through the teacher model. There might also be something wrong with my implementation of the distillation loss, as I haven't tested it thoroughly.
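For context, here is a minimal sketch of how a config-gated distillation term of this kind is often wired into a training step. This is not the repo's actual code: the function names, the `DISTILLATION_WEIGHT` key, the temperature, and the `ignore_index` value are assumptions for illustration; only `cfg.MODEL.DISTILLATION` comes from the discussion above.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    class distributions (hypothetical sketch, not the repo's implementation)."""
    # Soften both distributions with the same temperature.
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # Scale by T^2 so the gradient magnitude stays comparable to the hard-label loss.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2


def training_step(student, teacher, batch, labels, cfg):
    # Supervised semantic scene completion loss on the student's predictions.
    student_logits = student(batch)
    loss = F.cross_entropy(student_logits, labels, ignore_index=255)  # ignore_index is an assumption

    if cfg.MODEL.DISTILLATION:
        # This extra teacher forward pass is what slows training down.
        with torch.no_grad():
            teacher_logits = teacher(batch)
        # DISTILLATION_WEIGHT is a hypothetical config key for the loss weighting.
        loss = loss + cfg.MODEL.DISTILLATION_WEIGHT * distillation_loss(
            student_logits, teacher_logits
        )
    return loss
```

With MODEL.DISTILLATION set to false, the branch above is skipped entirely, so only the supervised loss is used and no teacher forward pass is performed.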
@jdgalviss Thank you for sharing your code. It is highly valuable and might be helpful for my research. I was wondering if there is any corresponding report or document available that could help me better understand the implementation logic of the code. If possible, could you kindly share it with me via email at [email protected]? Your work is greatly appreciated, and I thank you in advance!