First of all, thank you for your excellent work on this project. I have encountered a couple of questions while reproducing your work, and I hope you can provide some help.
1. Training and inference optimization: In your documentation, you mention that training memory usage was reduced by a factor of 4 and training time by 15%. Could you clarify whether these optimizations also apply to inference? Specifically, do they impact both training and inference, or are they limited to training only?
2. Comparison with the original GS: Did you perform any downsampling of the images when comparing your method to the original GS? While reproducing your work on the Mip-NeRF 360 captures on an A100, I found that the original GS did not consume as much memory as stated in your documentation. For instance, in the Room scene, the original GS used 3.56 GB, which is lower than your reported 8.50 GB. I applied 4x downsampling to align with gsplat, so I am unsure where the discrepancy in memory usage arises. Could you please help clarify this?
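For reference, here is a minimal sketch of the 4x downsampling step I applied before training, using Pillow. `downsample` is a hypothetical helper for illustration, not part of either codebase; the exact resize filter used by the gsplat data loader is an assumption on my side:

```python
from PIL import Image

def downsample(img: Image.Image, factor: int = 4) -> Image.Image:
    """Return a copy of `img` reduced by `factor` along each axis.

    LANCZOS is assumed here as the resampling filter; the original
    pipelines may use a different filter or precomputed images_4 folders.
    """
    w, h = img.size
    return img.resize((w // factor, h // factor), Image.LANCZOS)
```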
Thank you very much for your time and assistance. Your insights would be incredibly valuable to my research. I look forward to your response.
For your first question, the answer is yes. You can check out the profiling page we provide, which lists the performance of the forward and backward passes separately: https://docs.gsplat.studio/main/tests/profile.html
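To sanity-check those numbers locally, forward and backward time can be measured separately by placing timestamps around the loss computation and the `backward()` call. Below is a minimal CPU sketch; `time_fwd_bwd` and the toy loss are illustrative, not part of gsplat, and on GPU you would additionally call `torch.cuda.synchronize()` before each timestamp, since CUDA kernels launch asynchronously:

```python
import time
import torch

def time_fwd_bwd(loss_fn, params):
    """Time the forward and backward passes of one training step separately.

    `loss_fn` maps `params` to a scalar loss; returns (fwd_seconds, bwd_seconds).
    """
    t0 = time.perf_counter()
    loss = loss_fn(params)       # forward pass
    t1 = time.perf_counter()
    loss.backward()              # backward pass populates params.grad
    t2 = time.perf_counter()
    return t1 - t0, t2 - t1
```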