QA Guides

Assistance with high memory usage on appD for a microservice

We have a microservice on OpenShift with two active pods. We see high memory usage on both pods even after the test has completed. We took heap dumps of both pods; the reports are below. Could you assist with a recommendation?

[appD memory screenshot: pod 1]

[appD memory screenshot: pod 2]

From the reports, heap utilisation looks low on both pods, but the overall memory reported by appD seems to be high.

https://ycrash.io/yc-load-report-hd?isWCReport=true&ou=YU80S0xIeGJZVWROQndJN3ZIU1dwZz09&de=172.31.24.209&app=yc&ts=2024-03-26T10-59-38

19,319 instances of "java.lang.Class", loaded by "<system class loader>" occupy 9,729,456 (13.14%) bytes. Biggest instances: class com.singularity.ee.agent.util.reflect.AgentR...

Used Heap Size: 70.6 MB

https://ycrash.io/yc-load-report-hd?isWCReport=true&ou=YU80S0xIeGJZVWROQndJN3ZIU1dwZz09&de=172.31.24.209&app=yc&ts=2024-03-26T11-45-09

Severity: danger

2,598 instances of "java.lang.ref.Finalizer", loaded by "<system class loader>" occupy 35,956,440 (44.11%) bytes.

Used Heap Size: 77.7 MB
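Note: java.lang.ref.Finalizer entries hold on to objects that override finalize() until the finalizer thread processes them, so a large share of heap retained here usually means finalizable objects (often stream, socket or other native-resource wrappers) are being created faster than they are finalized. A rough way to check whether that backlog keeps growing between test runs (the PID is a placeholder):

jmap -histo:live <pid> | grep java.lang.ref.Finalizer

If the instance count climbs from run to run, the referents of those Finalizer objects in the heap report show which classes are responsible.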
The current memory assignment is:

-Xms512m
-Xmx512m
-XX:+UseParallelGC
-XX:MinHeapFreeRatio=10
-XX:MaxHeapFreeRatio=20
-XX:GCTimeRatio=4
-XX:AdaptiveSizePolicyWeight=90
-XX:MaxMetaspaceSize=250m
-XX:+ExitOnOutOfMemoryError
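For context, a rough sum of the caps configured above (the breakdown is an estimate, not a measured value):

heap (-Xmx)                       512m
metaspace (MaxMetaspaceSize)      up to 250m
thread stacks, code cache, GC data, direct buffers, agent overhead: additional, not limited by the flags above

So the process can legitimately sit well above 512m even while heap usage itself stays low.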

  • heap-dump


Unni Vemanchery Mana

Hello QA Guides,

I am not sure what exactly is causing this issue, but this page is worth looking into:

https://docs.openshift.com/container-platform/3.11/dev_guide/application_memory_sizing.html
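Broadly, that page is about giving the container an explicit memory request/limit and sizing the JVM from it rather than picking -Xmx independently. A sketch of that, where the deployment name, the 1Gi limit and the 75% figure are only illustrative values:

oc set resources deployment/<your-deployment> --requests=memory=1Gi --limits=memory=1Gi
-XX:MaxRAMPercentage=75.0   (JDK 8u191 or later; sizes the heap from the container limit instead of a fixed -Xmx)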

Hope this will help you.


Ram Lakshmanan

Hello Anshita!

 

Greetings. A few things are not adding up here.
 
You have configured your application's heap size (-Xmx) as 512m and metaspace as 250m, and there could be some memory from the 'others'. Still, I find it hard to see appD reporting memory consumption of 865mb. Without any traffic, I don't think your memory consumption should be at 865mb as reported by appD. Is appD reporting the overall pod's memory consumption, or just the JVM's memory consumption?
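One way to separate the two (the pod name and PID below are placeholders):

oc adm top pods <pod-name>             - what OpenShift/cgroups reports for the whole container
jcmd <pid> VM.native_memory summary    - what the JVM itself has committed (heap, metaspace, threads, code cache, GC, internal)

The second command requires the JVM to be started with -XX:NativeMemoryTracking=summary. If the container number is near 865mb but the NMT total is much lower, the difference is coming from native allocations made outside the JVM's own allocators (for example by JNI libraries).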
 
Also, how are you taking the heap dump? If you are using the yc script, it triggers a full GC in the JVM and then collects the heap dump. The collected heap dump therefore only contains active/live objects, so the memory consumption reported by HeapHero will be significantly lower than what you observe in appD.
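If you want to see how much of that gap is simply garbage that has not been collected yet, you could capture both dump variants and compare their sizes (file paths and PID are placeholders):

jmap -dump:live,format=b,file=/tmp/live.hprof <pid>   - forces a full GC first; keeps only reachable objects
jmap -dump:format=b,file=/tmp/full.hprof <pid>        - dumps the heap as-is, including not-yet-collected objects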
 
Next step: we want to look into the GC log. That would give us a clearer picture of what is going on in memory (instead of relying on appD and heap dump analysis).
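Something like this would be enough to start with; the file path is a placeholder, and you should pick the line that matches your JDK version:

JDK 8:          -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/gc.log
JDK 9 or later: -Xlog:gc*:file=/tmp/gc.log:time,uptime,level,tags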
