Nikhil Ninawe

Application running out of Database connections in c3p0 connection pool!

Please find the issue in the thread dump



Report URL - https://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMjEvMDgvMjkvLS0yMDIxMjcwOC0xMzEyLnByb2R1Y3Rpb24uYXBpLmlwLTEwLTAtMTAtMjQyLm5lZWRsZS56aXAtLTktNDUtNDQ=

  • threaddumpissue

  • rootcause



Umayal

Hi Nikhil!

 

Here are my observations. I can see 4 problems in your thread dump:

1. BLOCKED STATE:

The following threads are in a BLOCKED state, and they all have the same stack trace. If threads are BLOCKED for a prolonged period, your application may become unresponsive. Examine their stack traces.
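
To illustrate what a BLOCKED state means, here is a minimal, self-contained sketch (not taken from your dump; the class and thread names are made up for illustration). One thread holds an object's monitor for a long time, and a second thread trying to enter a synchronized block on the same monitor shows up in a thread dump as BLOCKED (on object monitor), in the same way as the threads listed below:

public class BlockedStateDemo {
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread holder = new Thread(() -> {
            synchronized (LOCK) {
                try {
                    Thread.sleep(60_000); // holds the monitor for a long time
                } catch (InterruptedException ignored) { }
            }
        }, "lock-holder");

        Thread blocked = new Thread(() -> {
            synchronized (LOCK) {          // waits here: thread state = BLOCKED
                System.out.println("acquired the monitor");
            }
        }, "blocked-thread");

        holder.start();
        Thread.sleep(100);                 // let the holder acquire the monitor first
        blocked.start();
    }
}

A thread dump taken while this runs shows 'blocked-thread' as BLOCKED, waiting to lock the monitor that 'lock-holder' owns - the same owner/waiter pattern described for your threads below.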

 

Threads that are in a blocked state: [NOTE: You can check the following Thread's stack trace in the section "Threads with identical stack trace"]

  • 44 threads - These threads are BLOCKED on the wait() method in java.lang.Object.
  • 13 threads - These threads are BLOCKED on the wait() method in java.lang.Object.
  • 13 threads - These threads are BLOCKED at line #1152 of org.apache.catalina.loader.WebappClassLoaderBase, in the loadClass() method.

Before getting stuck, the following threads obtained locks: [NOTE: You can check the Thread's stack trace by clicking on the highlighted area as mentioned in the Screenshot]

+ java.util.jar.JarFile is blocking 24 threads:

'http-nio-8080-exec-105' thread is stuck on the getEntry() method in java.util.zip.ZipFile. Before getting stuck, this thread obtained 5 locks (java.util.jar.JarFile lock, java.lang.Object lock...) and never released them. Due to that, 24 threads are BLOCKED, as shown in the below graph. Examine the 'http-nio-8080-exec-105' stack trace to see why it is BLOCKED.

 

+ org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager is blocking 13 threads:

'http-nio-8080-exec-137' thread is stuck on the writeBytes() method in java.io.RandomAccessFile. Before getting stuck, this thread obtained 2 locks (org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager lock, org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper lock) and never released them. Due to that, 13 threads are BLOCKED, as shown in the below graph. Examine the 'http-nio-8080-exec-137' stack trace to see why it is BLOCKED.

 

+ com.mchange.v2.resourcepool.BasicResourcePool is blocking 5 threads:

'pool-36-thread-8798' thread is stuck on the wait() method in java.lang.Object. Before getting stuck, this thread obtained 1 lock (com.mchange.v2.resourcepool.BasicResourcePool lock) and never released it. Due to that, 5 threads are BLOCKED, as shown in the below graph. Examine the 'pool-36-thread-8798' stack trace to see why it is BLOCKED.

 

+ java.lang.Object is blocking 1 thread:

'http-nio-8080-exec-149' thread is stuck on the interrupt() method in sun.nio.ch.EPollArrayWrapper. Before getting stuck, this thread obtained 2 locks (java.lang.Object lock, org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper lock) and never released them. Due to that, 1 thread is BLOCKED, as shown in the below graph. Examine the 'http-nio-8080-exec-149' stack trace to see why it is BLOCKED.

 

2. HIGH CPU:

The following threads are consuming high CPU:

  • GC task thread#0 (ParallelGC) is consuming high CPU (62.875%).
  • GC task thread#1 (ParallelGC) is consuming high CPU (60.975%).
  • VM Thread is consuming high CPU (17.875%).

 

3. THREADS THAT ARE STUCK WAITING FOR RESPONSE FROM EXTERNAL SYSTEM:

362 threads are stuck waiting for a response from an external system. This can slow down transactions. Examine their stack traces. Here are the tips to resolve this problem.
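
One common tip is to put explicit, bounded timeouts on every outbound call, so a thread waiting on a slow or unresponsive remote system gives up after a fixed time instead of staying stuck indefinitely. The report does not show which external systems these 362 threads are calling, so the sketch below is only a generic illustration using java.net.HttpURLConnection; the endpoint and timeout values are placeholders:

import java.net.HttpURLConnection;
import java.net.URL;

public class OutboundCallWithTimeouts {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/api/health");   // placeholder endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(3_000); // give up if the TCP connection cannot be established
        conn.setReadTimeout(5_000);    // give up if the remote system stops responding mid-request
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}

The same idea applies to whichever client these threads actually use - JDBC drivers, HTTP client libraries, and messaging clients all expose similar connect/read timeout settings.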

 

4. Finalizer Thread:

If the finalizer thread is executing inefficient code, it can result in OutOfMemoryError. The finalize() method is implemented in the java.util.concurrent.ThreadPoolExecutor class. A poorly implemented finalize() method will degrade the entire application's performance.
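
To make the risk concrete, here is a small, hypothetical sketch (not from your application) of a poorly implemented finalize() method. The JVM has a single Finalizer thread; if finalize() is slow, objects awaiting finalization pile up on the finalization queue faster than they can be processed, and with a constrained heap this eventually ends in OutOfMemoryError:

public class SlowFinalizeDemo {
    static class Resource {
        private final byte[] payload = new byte[1024 * 1024]; // ~1 MB retained until finalized

        @Override
        protected void finalize() throws Throwable {
            try {
                Thread.sleep(100); // simulates inefficient cleanup work in finalize()
            } finally {
                super.finalize();
            }
        }
    }

    public static void main(String[] args) {
        // Instances are created far faster than the single Finalizer thread can
        // process them, so "dead" objects accumulate until the heap fills up.
        while (true) {
            new Resource();
        }
    }
}

In your dump the flagged finalize() implementation is the one in java.util.concurrent.ThreadPoolExecutor itself, so this may simply be an informational finding rather than a confirmed problem.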

 


Ram Lakshmanan

Hello Nikhil!

 

 Greetings.

 

I see primarily a couple of problems:

 

1. Running out of connections in the C3P0 connection pool:

 

Your application is running out of database connections in your c3p0 connection pool. Due to that, threads requesting a connection from the c3p0 connection pool are getting blocked. This would make your application unresponsive.

 

 

 

This observation seems to correlate with your 'db_conn.txt' file as well. We can see 210 MySQL connections, and we also see 360 ArangoDB connections. Are that many connections expected?

 

Below is the stack trace of one of the threads that is getting stuck because of the lack of connections in the c3p0 connection pool:

 

java.lang.Thread.State: BLOCKED (on object monitor)
at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:611)
- waiting to lock <0x00000000c2b66ab8> (a com.mchange.v2.resourcepool.BasicResourcePool)
at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:554)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutAndMarkConnectionInUse(C3P0PooledConnectionPool.java:758)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:685)
at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:140)
at org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider.getConnection(LocalDataSourceConnectionProvider.java:87)
at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:446)
at org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:167)
at org.hibernate.jdbc.BorrowedConnectionProxy.invoke(BorrowedConnectionProxy.java:74)
at com.sun.proxy.$Proxy118.setReadOnly(Unknown Source)
at org.springframework.jdbc.datasource.DataSourceUtils.prepareConnectionForTransaction(DataSourceUtils.java:155)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doBegin(HibernateTransactionManager.java:514)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:373)
at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:427)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:276)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
at com.arl.saahas.api.access.service.AccessService$$EnhancerBySpringCGLIB$$6523b0b8.getUserPermissions(<generated>)
at com.arl.saahas.api.user.service.UserService$9.call(UserService.java:921)
at com.arl.saahas.api.user.service.UserService$9.call(UserService.java:917)
at com.turvo.logging.LogContextPropagatingExecutorService$LogContextCallable.call(LogContextPropagatingExecutorService.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748) 
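
If the pool really is undersized (rather than connections being leaked or held for too long), the usual remedies are to raise the pool size and to put a bound on how long a checkout may block. Below is a minimal sketch, assuming the pool were configured programmatically; in your application it may instead be configured through Spring XML or c3p0.properties, but the property names are standard c3p0 settings. The driver, URL, credentials, and numbers are placeholders, not recommendations for your environment:

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class C3p0PoolConfig {
    public static ComboPooledDataSource newDataSource() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");         // placeholder driver
        ds.setJdbcUrl("jdbc:mysql://db-host:3306/app");     // placeholder URL
        ds.setUser("app");                                   // placeholder credentials
        ds.setPassword("secret");

        ds.setMinPoolSize(10);
        ds.setMaxPoolSize(50);           // raise only if the database can handle more connections
        ds.setCheckoutTimeout(10_000);   // ms - fail with an SQLException instead of blocking forever
        ds.setUnreturnedConnectionTimeout(300);              // seconds - helps surface connection leaks
        ds.setDebugUnreturnedConnectionStackTraces(true);    // logs where a leaked connection was checked out
        return ds;
    }
}

checkoutTimeout in particular changes the failure mode from threads piling up BLOCKED inside BasicResourcePool (as in the stack trace above) to a fast, visible exception that you can handle and alert on.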

 

2. GC threads consuming a lot of CPU

We could also see GC threads consuming a lot of CPU. This indicates that GC activity could be heavy in the application. You might consider capturing GC logs from the application and analyzing them.
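
As a hedged note, the exact flags depend on your JVM version; the stack trace above suggests Java 8, where GC logging can typically be enabled with startup flags such as:

-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/path/to/gc.log

On Java 9 and later, the unified-logging equivalent is -Xlog:gc*:file=/path/to/gc.log. The resulting log shows pause times and reclamation rates, which should help confirm whether the high CPU in the ParallelGC threads comes from heavy allocation or from a heap that is too small.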

 

NOTE: It's very good that the dump file you uploaded contains a lot of artifacts like vmstat, netstat, sar, top, etc. You might consider evaluating the yCrash product. It analyzes all these artifacts and generates a unified root cause analysis report. FastThread only analyzes the thread dump and top output.
