Kaofela

Could long-running garbage collection pauses, poor DB queries, limited device capacity, or inefficient code be the reason for application unresponsiveness?

Our Java application instance keeps crashing / becoming unresponsive, and we are trying to identify the root cause. Please assist with interpreting the thread dump analysis.



Report URL - https://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMjEvMDMvMTcvLS1zZXJ2ZXI1X3RoZWFkX2R1bXBfMjAyMDIxMDMxNy56aXAtLTgtMjktMjI=

  • javaapplication

  • unresponsive

  • threaddumpanalysis



Ram Lakshmanan

Hello Kaofela!

 Greetings.

 

 It looks like you captured the thread dump using the -F option (i.e. the forced option). Did the JVM become so sick that you had to use the forced option to capture the thread dump?

 

 The forced option doesn't give all the details in the thread dump (such as lock information, thread state information, ...). Having said that, here are some of my observations:
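
 As a side note: if jstack without -F keeps failing on that host, a thread dump can also be captured from inside the JVM using the standard java.lang.management API, which preserves lock details. Below is a minimal sketch (the class name is hypothetical):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class InProcessThreadDump {
    public static void main(String[] args) {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        // Request locked monitors and synchronizers so lock information is included,
        // which a forced (-F) dump does not provide.
        ThreadInfo[] threads = threadMXBean.dumpAllThreads(true, true);
        for (ThreadInfo info : threads) {
            System.out.print(info); // prints thread name, state and (truncated) stack trace
        }
    }
}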

 

a. I can see 330 threads created by your application's internal thread pool, and they are doing nothing. Below is the stack trace of these threads (they are identical):

 

- sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may be imprecise)
- java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, line=175 (Compiled frame)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() @bci=42, line=2039 (Compiled frame)
- java.util.concurrent.LinkedBlockingQueue.take() @bci=29, line=442 (Compiled frame)
- java.util.concurrent.ThreadPoolExecutor.getTask() @bci=149, line=1074 (Compiled frame)
- java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=26, line=1134 (Compiled frame)
- java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=624 (Compiled frame)
- java.lang.Thread.run() @bci=11, line=748 (Compiled frame)

 

Have you set a maximum thread count limit on your thread pool? Don't the threads exit after a period of inactivity? Why are so many threads idle?
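
For reference, here is a minimal sketch of a bounded ThreadPoolExecutor that lets idle threads time out; the pool sizes, queue capacity and timeout below are illustrative assumptions, not values from your application:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolSketch {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10,                              // core pool size (assumed)
                50,                              // maximum pool size (assumed)
                60L, TimeUnit.SECONDS,           // keep-alive for threads above core size
                new LinkedBlockingQueue<>(1000)  // bounded work queue (assumed capacity)
        );
        // Let even core threads time out when idle, so the pool shrinks back
        // instead of holding hundreds of parked threads.
        pool.allowCoreThreadTimeOut(true);

        pool.submit(() -> System.out.println("work item executed"));
        pool.shutdown();
    }
}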

 

 b. 39 threads are waiting for a response from ActiveMQ. Is this expected?

 

- java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be imprecise)
- org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.receive(long, boolean) @bci=424, line=261 (Compiled frame)
- org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.receive(long) @bci=107, line=388 (Compiled frame)
- org.apache.activemq.artemis.jms.client.ActiveMQMessageConsumer.getMessage(long, boolean) @bci=23, line=211 (Compiled frame)
- org.apache.activemq.artemis.jms.client.ActiveMQMessageConsumer.receive(long) @bci=3, line=132 (Compiled frame)
- org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveMessage(javax.jms.MessageConsumer) @bci=23, line=430 (Compiled frame)
- org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(java.lang.Object, javax.jms.Session, javax.jms.MessageConsumer, org.springframework.transaction.TransactionStatus) @bci=119, line=310 (Compiled frame)
- org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(java.lang.Object, javax.jms.Session, javax.jms.MessageConsumer) @bci=94, line=263 (Compiled frame)
- org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener() @bci=17, line=1102 (Compiled frame)
- org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop() @bci=154, line=1094 (Interpreted frame)
- org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run() @bci=51, line=991 (Interpreted frame)
- java.lang.Thread.run() @bci=11, line=748 (Compiled frame)
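
Threads parked in ClientConsumerImpl.receive() are typically Spring listener container poller threads waiting out their receive timeout, which can be normal; still, the consumer count should match what you intended. Here is a hypothetical sketch of how those knobs are set on Spring's DefaultMessageListenerContainer (the destination name and numbers are assumptions for illustration):

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Configuration
public class JmsListenerSketch {

    @Bean
    public DefaultMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("orders.queue");   // hypothetical destination
        container.setConcurrentConsumers(5);            // baseline poller threads (assumed)
        container.setMaxConcurrentConsumers(10);        // upper bound on pollers (assumed)
        container.setReceiveTimeout(1000L);             // each receive() waits at most 1 second
        container.setMessageListener((javax.jms.MessageListener)
                message -> System.out.println("received: " + message));
        return container;
    }
}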

 

 c. Application unresponsiveness can happen for several reasons: long-running garbage collection pauses, poor DB queries, limited device capacity, network latency, inefficient code, ... A thread dump gives only a small subset of the information needed to diagnose the problem. In order to diagnose poor response time, you need to capture 360-degree data such as thread dumps, heap dumps, garbage collection logs, vmstat, iostat, top, top -H, df, netstat, ... You can consider using the 14-day trial of yCrash. It captures all of the above data and generates a root cause analysis report of the slowdown.

