Hi Belligerent,
Can you share your JVM arguments?
Memory is abundant; any idea why this happened?
Report URL - https://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMjMvMDcvMjcvMS5sb2ctLTMtNTEtMjk=&channel=WEB
-Xlog:gc=trace:file=/tmp/gc.log:utctime:filecount=5,filesize=20m -XX:MaxRAMPercentage=70 -XX:SurvivorRatio=3 -XX:NewRatio=1 -XX:ParallelGCThreads=2 -XX:+UseParallelGC -Dcom.mysql.cj.disableAbandonedConnectionCleanup=true -Ddtp.name=prod-default -Dcom.zaxxer.hikari.useWeakReferences=true -Dcom.alibaba.nacos.naming.log.level=error -Dcom.alibaba.nacos.config.log.level=error -Drocketmq.client.logUseSlf4j=true -Dsun.net.inetaddr.ttl=3 -Dsun.net.inetaddr.negative.ttl=1 -Dnetworkaddress.cache.ttl=3 --add-opens java.base/java.util.concurrent=ALL-UNNAMED --add-reads java.base=ALL-UNNAMED -Dspring.cloud.nacos.config.shared-configs[0].data-id=platform-shared-config.yaml -Dspring.cloud.nacos.config.shared-configs[0].refresh=true -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/.tss/uploads/oom -XX:+ExitOnOutOfMemoryError -XX:OnOutOfMemoryError="curl tm-tss.service.consul/scripts/operator/oom-handle.sh | sh" -XX:ErrorFile=/tmp/hs_err_pid%p.log -XX:OnError="curl tm-tss.service.consul/scripts/operator/crash-handle.sh | sh"
Hi Belligerent,
You are passing the "-XX:MaxRAMPercentage=70" JVM argument. This argument caps the maximum heap size at a percentage of the available memory. For example, if your system has 10 GB of RAM, the JVM will try to set the maximum heap size to 7 GB (70% of 10 GB). However, this setting might lead to memory fragmentation or leave insufficient memory for other processes running on the system, which could result in the heap size not returning to its original size after garbage collection.
Additionally, setting a static percentage for the heap size might not be suitable for all environments, as system resources and requirements can vary. Consider using specific values for -Xmx (maximum heap size) and -Xms (initial heap size) to better control the heap size allocation.
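For example (illustrative values only; the right numbers depend on how much memory your host or container actually has and what your application needs), explicit settings would look like this:
-Xms4g -Xmx4g
Setting -Xms equal to -Xmx keeps the committed heap at a fixed size, so ergonomics should not shrink it back down after Full GC events.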
Hello Belligerent!
Greetings. I reviewed your GC report. Yes, you are right: your application's heap size shrank and never returned to its original size. Below is your heap usage graph:
You can notice towards the right side of the graph that the heap size didn't grow back to its maximum size. This is primarily because of the Full GCs that are happening. These Full GCs are triggered by 'Ergonomics'.
What is 'Full GC - ergonomics'?
GC ergonomics tries to grow or shrink the heap dynamically to meet a specified goal such as minimum pause time and/or throughput.
Your application is using the Parallel GC algorithm. In Parallel GC, the '-XX:GCTimeRatio' JVM argument is enabled by default. More details about this argument can be found in this post. As per this argument's default value, ergonomics triggers these Full GC events and resizes the heap to meet a 99% GC throughput goal. You can consider lowering this value or disabling it if you would like the heap size to grow back.
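As a point of reference (this only illustrates the arithmetic; the value below is a hypothetical example, not a tuned recommendation for your workload), the throughput goal works out to GCTimeRatio / (1 + GCTimeRatio), so the default of 99 targets spending 99% of the time outside of GC, while a setting such as the following relaxes the goal to 95%:
-XX:GCTimeRatio=19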
However, you don't have to worry about doing so, because your application's current GC throughput and pause times are already quite good.