How to fine-tune OG VM size? #3765
This issue was moved to a discussion. You can continue the conversation there.
Is this an incremental or from-scratch reindex? I assume this is not a per-project reindex. Overall, https://github.com/oracle/opengrok/wiki/Tuning-for-large-code-bases gives some tips on how to deal with JVM size settings. For production I'd recommend running the web app with JVM monitoring and alerting based on JVM heap size; see e.g. https://github.com/OpenGrok/opengrok-monitoring-docker, where the JVM Grafana dashboard has a preset alert for exactly that.
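One generic way to get such JVM monitoring for the web app is to expose it over JMX; the sketch below uses Tomcat's standard bin/setenv.sh hook. The port number is arbitrary, and the linked opengrok-monitoring-docker setup may well wire metrics up differently, so treat this as an illustrative assumption rather than that repository's method.

```sh
# $CATALINA_BASE/bin/setenv.sh -- sourced by catalina.sh on startup.
# Expose JVM metrics (heap usage, GC, threads) over JMX so a collector
# (Prometheus JMX exporter, VisualVM, jconsole, ...) can scrape them.
# Port 9010 is an arbitrary choice; secure this properly in production.
CATALINA_OPTS="$CATALINA_OPTS \
  -Xms26g -Xmx26g \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.rmi.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
export CATALINA_OPTS
```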
Quoting part of Tomcat's
So no matter how well thought out the formula is, there will always be something that makes the heap jump higher. And I am not even considering #3541 or #1806. My impression is that this is a never-ending battle that requires robust monitoring (of both heap size and request latencies) and occasional heap size readjustments.
As for the indexer JVM heap tuning, I'd be interested in an analysis of a heap dump (say with the MAT tool or YourKit). Another thing from Tomcat's
This will dump the heap on JVM OOM. Since the web app runs with a 48 GB heap size, there needs to be enough space in the filesystem where the dump is written.
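For reference, the standard JVM options that produce a heap dump on OutOfMemoryError look roughly like the sketch below; the dump directory is a placeholder, and putting the options in Tomcat's setenv.sh is just one common way to apply them to the web app (the same flags can be passed to the indexer JVM).

```sh
# $CATALINA_BASE/bin/setenv.sh -- add to the web app's JVM options.
# On OutOfMemoryError the JVM writes an .hprof file to HeapDumpPath;
# with a 48 GB heap the target filesystem needs comparable free space.
CATALINA_OPTS="$CATALINA_OPTS \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/var/tmp/opengrok-dumps"
export CATALINA_OPTS
```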
Hi @vladak We run incremental reindexing nightly: we mirror all the Git repos to be indexed (pull if already there, clone if new, delete if no longer needed), then indexing is run on all repos and projects. Up to now we have not been dumping the indexer heap automatically on OOM errors; we will eventually do that from now on, but it is disk-consuming and not so easy to analyze once available... Since I opened this ticket we have upgraded the VM, adding 64 GB of RAM: the VM now has 128 GB available. Tomcat is running with -Xmx26g as before and the indexer with -Xmx64g to have some margin. No more trouble now. So, in short: there is no way to plan the JVM heap resources needed for Tomcat and the indexer; just keep monitoring both processes and act when alerts are triggered.
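A minimal sketch of such a nightly mirror-then-reindex cycle, assuming illustrative paths and a hypothetical repo-list.txt rather than the poster's actual setup (OpenGrok also ships an opengrok-mirror helper for the sync step; the plain git loop here is a simplified stand-in):

```sh
#!/bin/sh
# Nightly mirror + incremental reindex sketch. SRC_ROOT, DATA_ROOT, the
# repository list and the jar location are placeholders.
SRC_ROOT=/opengrok/src
DATA_ROOT=/opengrok/data

# 1. Mirror: pull existing clones, clone new ones (removal of retired
#    repositories is left out of this sketch).
while read -r url name; do
    if [ -d "$SRC_ROOT/$name/.git" ]; then
        git -C "$SRC_ROOT/$name" pull --ff-only
    else
        git clone "$url" "$SRC_ROOT/$name"
    fi
done < /opengrok/etc/repo-list.txt

# 2. Reindex with a 64 GB indexer heap. -J passes options to the indexer
#    JVM; -H builds the history cache, -P creates per-directory projects,
#    -U pushes the updated configuration to the running web app.
opengrok-indexer -J=-Xmx64g -a /opengrok/dist/lib/opengrok.jar -- \
    -s "$SRC_ROOT" -d "$DATA_ROOT" -H -P -S -G \
    -W /opengrok/etc/configuration.xml -U http://localhost:8080/source
```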
Given this is an incremental reindex, I'd be interested in a heap dump analysis. Possibly there is something that can be optimized to reduce the memory footprint.
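A heap dump can also be taken on demand from the running indexer or web app with the stock JDK tools and then loaded into MAT or YourKit; the PID and output path below are placeholders.

```sh
# Find the JVM's PID, then ask it to write a heap dump. Dumping a large
# heap can take minutes and needs roughly as much disk space as the heap.
jcmd -l                                    # lists running JVMs with PIDs
jcmd <pid> GC.heap_dump /var/tmp/opengrok-indexer.hprof
# Open the resulting .hprof in Eclipse MAT (or YourKit) and start from
# the dominator tree / "biggest objects" view.
```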
Hi team,
For some days now, my OpenGrok production instance (which used to run fine) has been unable to finish the nightly indexing.
I get an OOM error during indexing that happens more or less randomly: I have dug into the OpenGrok and Tomcat logs but found nothing special. So, in short, it seems the VM has reached its memory limit.
OpenGrok is 1.7.19, running on Tomcat 10.0.4 / Java 11.0.9. The VM runs RHEL 8.3 with 20 CPUs and 64 GB RAM. ~2000 projects with a total of ~33k Git repositories are indexed.
I had set 26 GB for Tomcat and 26 GB for the indexing job before OpenGrok indexing started crashing.
I have tried 20 GB for Tomcat / 40 GB for indexing, but that failed too: indexing crashed while trying to update the configuration on Tomcat.
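For context, a back-of-envelope heap budget for the two splits tried on the 64 GB VM (the assumption being that whatever is left over has to cover JVM overhead as well as the OS and its page cache):

```sh
# Heap budget on the 64 GB VM for the two configurations tried.
# Metaspace, GC structures and thread stacks of both JVMs, plus the OS
# and page cache, all have to fit in what remains.
#   26 GB (Tomcat) + 26 GB (indexer) = 52 GB of heap -> ~12 GB left over
#   20 GB (Tomcat) + 40 GB (indexer) = 60 GB of heap ->  ~4 GB left over
```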
My questions:
Is there a way to evaluate the smallest min and max JVM heap sizes (-Xms/-Xmx) needed for Tomcat, based on the contents of the configuration file?
Is there a formula that could be used for a rough estimate using the number of projects, the number of repositories, the number of tags, the number of files indexed, the number of history cache files, ...?
The purpose is for me to fine-tune the min and max JVM heap sizes while OpenGrok is up, running, and indexing fine, to monitor the VM, and to plan for a VM size upgrade.