-
The buildpacks set some memory-related JVM flags so that your application takes advantage of all the memory in its container without exceeding the container limits and crashing. We don't directly set any GC options, though, so if you've noticed a difference in behavior after upgrading to Java 21, it is most likely due to changes in the major Java version, not the buildpacks. You can pass additional JVM options through the JAVA_TOOL_OPTIONS environment variable.
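For example, a minimal sketch of appending GC options via JAVA_TOOL_OPTIONS in a Kubernetes Deployment; the container name and the specific flags are placeholders, not recommendations:

spec:
  template:
    spec:
      containers:
        - name: my-spring-boot-app            # hypothetical container name
          env:
            - name: JAVA_TOOL_OPTIONS
              # Example flags only: pick a collector and enable GC logging so you can
              # compare JDK 17 vs JDK 21 behaviour. The buildpack's calculated memory
              # flags should be combined with these at launch; verify for your version.
              value: "-XX:+UseG1GC -Xlog:gc*"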
-
Yes, I know that. It just came as a surprise that JDK 21 seems to be more CPU-intensive than JDK 17 during GC with default settings.
-
Hello Java Team,
I'm using buildpacks to build Spring Boot apps running on Kubernetes.
After upgrading an app from JDK 17 to JDK 21 in production, we see increased CPU usage during GC, causing pod scaling.
The initial Kubernetes CPU request/limit was set to 200m, and only after raising it to 300m was there enough CPU headroom for GC without triggering scaling.
I'm wondering if that is expected behaviour with the default garbage collector, and how to address this issue.
The pod is started using:
Picked up JAVA_TOOL_OPTIONS: -Djava.security.properties=/layers/paketo-buildpacks_bellsoft-liberica/java-security-properties/java-security.properties -XX:+ExitOnOutOfMemoryError -XX:ActiveProcessorCount=8 -XX:MaxDirectMemorySize=10M -Xmx201770K -XX:MaxMetaspaceSize=105429K -XX:ReservedCodeCacheSize=240M -Xss1M -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary -XX:+PrintNMTStatistics -Dorg.springframework.cloud.bindings.boot.enable=true
The initial deployment with a CPU request/limit of 200m caused unforeseen scaling.
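For reference, the change that gave GC enough headroom was just the CPU values in the container's resources block; a minimal sketch, assuming both request and limit are set as described above:

resources:
  requests:
    cpu: "300m"    # raised from the original 200m
  limits:
    cpu: "300m"    # at 200m there was too little CPU headroom for GC, which triggered scaling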