
About the optimization of memory & the comparison with the original GS #282

Open
uto-lt opened this issue Jul 13, 2024 · 1 comment

uto-lt commented Jul 13, 2024

Hello,

First of all, thank you for your excellent work on this project. A couple of questions came up while I was reproducing your results, and I hope you can help.

1. Training and Inference Optimization: In your documentation, you mention that training memory usage is reduced by about 4x and training time by 15%. Do these optimizations also carry over to inference, or are they limited to training only?

2. Comparison with the Original GS: Did you downsample the images when comparing your method to the original GS? While reproducing your results with the Mip-NeRF 360 captures on an A100, with 4x downsampling applied to match gsplat, I found that the original GS did not consume as much memory as stated in your documentation: in the Room scene it used 3.56 GB, versus the 8.50 GB you report. I am unsure where this discrepancy arises; could you please help clarify it?
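
For reference, this is the kind of measurement I have in mind (a minimal sketch using PyTorch's allocator statistics; if the documented numbers come from nvidia-smi instead, that alone could explain part of the gap, since nvidia-smi also counts the CUDA context and cached but unallocated memory):

```python
# Minimal sketch: peak GPU memory for the measured phase (single process, one GPU).
import torch

torch.cuda.reset_peak_memory_stats()

# ... run training (or a fixed number of iterations) here ...

peak_gib = torch.cuda.max_memory_allocated() / 1024 ** 3
print(f"peak allocated memory: {peak_gib:.2f} GiB")
# Note: nvidia-smi shows reserved memory plus the CUDA context, which is
# usually noticeably larger than max_memory_allocated().
```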

Thank you very much for your time and assistance. Your insights would be incredibly valuable to my research. I look forward to your response.

liruilong940607 (Collaborator) commented

Hi,

For your first question, the answer is yes. You can check out the profiling page we provide, which lists the performance of the forward and backward passes separately: https://docs.gsplat.studio/main/tests/profile.html
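
If you want to sanity-check the inference side yourself, something along these lines should work. This is only a minimal sketch with random dummy Gaussians, assuming the `rasterization` entry point of recent gsplat versions, so swap in your trained scene and real cameras:

```python
# Minimal sketch: time and measure peak memory of the forward (inference) pass only.
# Dummy Gaussians and a single camera are used for illustration; replace them with
# a trained scene. Assumes gsplat >= 1.0 (the `rasterization` API).
import torch
from gsplat import rasterization

device = "cuda"
N, width, height = 100_000, 1280, 800  # illustrative sizes
# Random Gaussians placed roughly in front of the camera.
means = torch.randn(N, 3, device=device) + torch.tensor([0.0, 0.0, 5.0], device=device)
quats = torch.nn.functional.normalize(torch.randn(N, 4, device=device), dim=-1)
scales = torch.rand(N, 3, device=device) * 0.02
opacities = torch.rand(N, device=device)
colors = torch.rand(N, 3, device=device)
viewmats = torch.eye(4, device=device)[None]            # [1, 4, 4] world-to-camera
Ks = torch.tensor([[[1000.0, 0.0, width / 2],
                    [0.0, 1000.0, height / 2],
                    [0.0, 0.0, 1.0]]], device=device)   # [1, 3, 3] intrinsics

torch.cuda.reset_peak_memory_stats()
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

with torch.no_grad():  # forward only, i.e. the inference cost
    start.record()
    render, alpha, meta = rasterization(
        means, quats, scales, opacities, colors, viewmats, Ks, width, height
    )
    end.record()

torch.cuda.synchronize()
print(f"forward time: {start.elapsed_time(end):.2f} ms")
print(f"forward peak memory: {torch.cuda.max_memory_allocated() / 1024 ** 3:.2f} GiB")
```

Wrapping the render call of the original implementation in the same `torch.no_grad()` measurement gives a like-for-like inference comparison.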

For your second question, the code we use to benchmark Inria's training is here: https://github.com/liruilong940607/gaussian-splatting/tree/benchmark; it simply adds some logging on top of the official code. Are you using this repo for your benchmark?
