Hello Babu!
Greetings. I reviewed your thread dump. Below is an excerpt from your thread dump report:
There are 248 TIBCO threads waiting to receive messages from JMS. Below is their stack trace:
java.lang.Thread.State: TIMED_WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at com.tibco.tibjms.TibjmsxSessionImp._getSyncMessage(TibjmsxSessionImp.java:2296)
    at com.tibco.tibjms.TibjmsxSessionImp._receive(TibjmsxSessionImp.java:2130)
    - locked <0x00000006cec7eba8> (a java.lang.Object)
    at com.tibco.tibjms.TibjmsMessageConsumer._receive(TibjmsMessageConsumer.java:276)
    at com.tibco.tibjms.TibjmsMessageConsumer.receive(TibjmsMessageConsumer.java:481)
    at com.tibco.plugin.share.jms.impl.JMSReceiver$SessionController.run(Unknown Source)
    - locked <0x00000006cf771668> (a com.tibco.plugin.share.jms.impl.JMSReceiver$SessionController)

   Locked ownable synchronizers:
    - None
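For context, this stack trace is the typical shape of a consumer doing a blocking (synchronous) JMS receive: the thread parks in Object.wait() until a message arrives or the timeout expires. A minimal sketch of that pattern is below; it uses the plain javax.jms API with a hypothetical ConnectionFactory and queue name, not the actual TIBCO BW plugin code, which is not visible in the dump.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class SyncJmsReceiver {

    // factory and queueName are illustrative placeholders only.
    public static void pollQueue(ConnectionFactory factory, String queueName) throws Exception {
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue(queueName);
        MessageConsumer consumer = session.createConsumer(queue);

        while (true) {
            // Blocking receive: the calling thread sits in TIMED_WAITING here,
            // which is exactly the state your 248 TIBCO threads are in.
            Message message = consumer.receive(30_000);
            if (message != null) {
                System.out.println("Received: " + message);
            }
        }
    }
}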
You need to check whether this many threads are required. Even if a thread is idle (as in this case), it still consumes memory. Although these idle threads are a concern, they should not cause an OutOfMemoryError (unless the TIBCO thread count keeps growing).
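To check whether the TIBCO thread count keeps growing, you can sample the thread counts over time from inside the JVM. Here is a minimal sketch using the standard ThreadMXBean; the "tibco" name filter is only an assumption about how these threads are named, so adjust it to match your actual thread names.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCounter {
    public static void main(String[] args) {
        ThreadMXBean threadBean = ManagementFactory.getThreadMXBean();
        long tibcoThreads = 0;
        long timedWaiting = 0;

        for (ThreadInfo info : threadBean.dumpAllThreads(false, false)) {
            if (info == null) {
                continue;
            }
            // Assumed naming convention: threads created by the TIBCO plugin contain "tibco".
            if (info.getThreadName().toLowerCase().contains("tibco")) {
                tibcoThreads++;
            }
            if (info.getThreadState() == Thread.State.TIMED_WAITING) {
                timedWaiting++;
            }
        }

        System.out.println("TIBCO threads: " + tibcoThreads);
        System.out.println("TIMED_WAITING threads: " + timedWaiting);
    }
}

If the first number rises steadily across samples, the thread growth itself needs investigation; if it stays flat around 248, the idle threads are unlikely to be the cause of the OutOfMemoryError.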
Did you capture this thread dump right when the OutOfMemoryError was happening, or when the application was healthy? You need to capture the thread dump around the time the problem surfaces. Also, thread dumps alone aren't enough to diagnose an OutOfMemoryError; you also need a heap dump (and the GC log). Here is an open-source script that captures 360-degree data from your application stack in a pristine format. You may use this script to capture the heap dump. Once you have captured the heap dump, you may use this video tutorial, which walks through how to analyze it.
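If you prefer not to use the script, a heap dump can also be triggered from within the JVM itself. Below is a minimal sketch using the HotSpot diagnostic MXBean (the output path is just an example); alternatively, starting the JVM with -XX:+HeapDumpOnOutOfMemoryError writes a heap dump automatically at the moment the error is thrown, which is usually the most useful snapshot.

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {

    public static void dumpHeap(String filePath) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean diagnosticBean = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);
        // 'true' dumps only live (reachable) objects, which keeps the .hprof file smaller.
        diagnosticBean.dumpHeap(filePath, true);
    }

    public static void main(String[] args) throws Exception {
        // Example output path; adjust for your environment.
        dumpHeap("/tmp/app-heap.hprof");
    }
}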