Silviu

Do we need to increase the heap size?

I attached the GC Easy report.

 

Report URL - https://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMjMvMDgvMTYvZ2MtMjAyMy0wOC0wOV8xOS0zOS5sb2ctLTE1LTgtMzI=&channel=WEB

  • increaseheapsize

  • reducedcloudhostingcost



Ram Lakshmanan

Hello Silviu!

 

Greetings. I reviewed your GC log analysis report. You have an excellent GC throughput of 99.992%; one can't ask for better than that. Also, your average pause time is only 261 ms. So I don't see a need to increase the heap size. I'm just curious: what makes you think you need to increase it?
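To put that number in perspective with some illustrative figures: over, say, a 10-hour window, 99.992% throughput means only about 0.008% of 36,000 seconds, i.e. roughly 3 seconds of total GC pause time. That is negligible.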

 

On the contrary, you can even consider reducing the heap size. A smaller JVM heap means the application can run on an EC2 instance with less memory capacity, which means reduced cloud hosting cost.
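Just to make it concrete, that change is nothing more than the heap flags passed at startup. A minimal sketch, where the jar name and the sizes are only placeholders and not taken from your actual setup:

----------------------------------------
# Illustrative only: lowering the heap ceiling when launching a JVM.
# Replace the placeholder jar name and sizes with your real values.
java -Xms4g -Xmx4g -jar your-app.jar     # e.g. down from -Xms8g -Xmx8g
----------------------------------------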


Silviu

Hello Ram,

Thank you very much for your reply. 

I've noticed from the excellent GCeasy report that our GC is running well. The issue is that sometimes, when we recompile and redeploy some Struts applications on Tomcat, we get the following (this is just an example from a small app):

----------------------------------------

compile:
    [javac] Compiling 50 source files to /chroots/tomcat70/apps/sd/WEB-INF/classes
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000772700000, 66060288, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 66060288 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /nfs/home/s/silviu/workspace/sd/hs_err_pid6855.log

----------------------------------------

 

Due to limited budget of the university, we have relatively modest hardware. We run this tomcat on a virtual server with 2 CPUs and 16GB of memory allocated from the physical server.

 

When we got the error above, the "top" showed:

---------------------------------------

top - 18:55:54 up 14 days, 11:37,  2 users,  load average: 0.03, 0.04, 0.05
Tasks: 123 total,   1 running, 122 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.2 us,  0.5 sy,  0.0 ni, 99.2 id,  0.0 wa,  0.0 hi,  0.0 si,  0.2 st
KiB Mem : 16266332 total,   286648 free, 12823072 used,  3156612 buff/cache
KiB Swap:  2097148 total,  2074876 free,    22272 used.  3104424 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1187 tomcat70  20   0   17.2g  11.6g   8888 S   0.7 75.0 357:51.28 jsvc
---------------------------------------

 

with 75% of the memory (~13 GB, since ~3 GB of the 16 GB went to buff/cache) used by the Tomcat process. There was only 286648 KB of free memory left.

 

The only solution was to restart Tomcat, which we want to avoid since we have a couple of apps running in that Tomcat container.

The apps on this Tomcat server do not have many concurrent users; it is for academic/university use.

 

As a solution, I asked for more memory on the virtual server where Tomcat runs, and also for a configuration with two load-balanced Tomcat instances (we have such a setup running well on another server).

That is why I asked if we should increase the heap size.

 

Thank you very much for any help with this issue.

 

Silviu

 


Ram Lakshmanan

Hello Silviu!

 

Ah! Now I understand what's going on. Thanks for giving the context. I recommend you watch this 8-minute video clip; it will give you a good understanding of JVM memory regions, which is necessary to solve this problem.

 

Here are potential solutions to your problem:

 

a. You are having contention in the 'Others' region (as explained in the video). To give more room to the 'Others' region, you can reduce your JVM's heap size (-Xmx). Currently you are allocating 10 GB; you could perhaps drop it to 8 GB, because reducing -Xmx leaves more native memory for the 'Others' region. Based on your GC performance, you shouldn't have any issues dropping -Xmx to 8 GB.
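For example, if your Tomcat reads its JVM options from a setenv.sh (with a jsvc-based install the flags may instead live in the daemon's init or defaults script; the path and values below are assumptions, not your actual file), the change could look roughly like this:

----------------------------------------
# $CATALINA_BASE/bin/setenv.sh -- illustrative sketch only.
# Drop the heap ceiling from 10 GB to 8 GB so more native memory is left
# for the 'Others' region (thread stacks, code cache, other native allocations).
CATALINA_OPTS="$CATALINA_OPTS -Xms8g -Xmx8g"
----------------------------------------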

 

b. You have 25% free space now. Does that free space go down over time? Does it drop to 10% or 5%? I have another suspect as well: your application may have a 'thread leak' rather than a memory leak. If so, here is the right approach to diagnose a thread leak.
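As a quick, non-invasive way to check that suspicion (the PID 1187 below is simply the jsvc process id from your 'top' output, so adjust accordingly), you can track the thread count of the Tomcat process over a few days and capture a thread dump whenever it climbs:

----------------------------------------
# Thread count of the Tomcat (jsvc) process; a count that keeps growing
# over days is a strong hint of a thread leak.
ps -o nlwp= -p 1187              # or: ls /proc/1187/task | wc -l

# Thread dump for later comparison (run as the user that owns the JVM).
jstack -l 1187 > /tmp/threads-$(date +%Y%m%d-%H%M).txt
----------------------------------------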

 

 

 


Silviu

Hello Ram,

I applied your suggestion (a) and so far it works great. The Tomcat JVM went from using 83% of system memory to only 23%, leaving over 11 GB for the OS/server.

I also changed from Xms = Xmx (i.e. a fixed heap size) to Xms = 1/2 Xmx (i.e. letting the heap grow dynamically). You suggested using 8 GB for Xmx (I had been running with Xms = Xmx = 8 GB), but after consulting a few GC reports from your GCeasy I noticed that the young generation peaks at ~4-5 GB and the old generation at ~2.5 GB, for a total of less than 8 GB. Those young/old generation values are for Tomcat uptimes of about 7 days, since we had to restart Tomcat whenever there was not enough system memory left to compile/redeploy our apps. For uptimes of more than a month, the young generation would grow to 6 GB+ and the old generation to 8-9 GB (again with Xms = Xmx = 8 GB).

Since the probability of both peaks (young and old) occurring at the same time is very small, I allocated only 6 GB for the heap.

Now we use -Xms3g -Xmx6g. 
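In case it helps someone else reading this, one way to confirm which heap values the running JVM actually picked up (the PID here is just the jsvc process id from the earlier top output):

----------------------------------------
# Show the effective heap flags of the live JVM process.
jcmd 1187 VM.flags | tr ' ' '\n' | grep -E 'InitialHeapSize|MaxHeapSize'
----------------------------------------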

Again, your suggestion and your excellent GCeasy reports helped solve the issue.

Thank you again, and may all good things come your way.

 


Ram Lakshmanan

I am quite happy to hear this good news, my friend. Congratulations on resolving a complex JVM problem.

 

If you would like to enrich your JVM troubleshooting knowledge further, you might find this online course helpful. Several engineers have benefited from this course.
