Hello SimplyPrasu!
We can see two patterns in your application that are causing threads to go into the WAITING and TIMED_WAITING states.
a. 420 threads WAITING in Logback module
There are 420 threads WAITING in the Logback module. Below is the top part of the stack trace of those threads:
stackTrace:
java.lang.Thread.State: WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for <0x00000000822d7d78> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:353)
    at ch.qos.logback.core.AsyncAppenderBase.put(AsyncAppenderBase.java:156)
    at ch.qos.logback.core.AsyncAppenderBase.append(AsyncAppenderBase.java:147)
    at ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
    at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
    at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:273)
    at ch.qos.logback.classic.Logger.callAppenders(Logger.java:260)
    at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:442)
    at ch.qos.logback.classic.Logger.filterAndLog_2(Logger.java:433)
    at ch.qos.logback.classic.Logger.debug(Logger.java:511)
    at uk.co.igindex.singlesignon.filters.PathBasedRequestExclusionStrategy.isExcluded(PathBasedRequestExclusionStrategy.java:54)
    at uk.co.igindex.singlesignon.filters.AbstractServiceAccessFilter.isExcluded(AbstractServiceAccessFilter.java:77)
    at uk.co.igindex.singlesignon.filters.AbstractServiceAccessFilter.doFilter(AbstractServiceAccessFilter.java:88)
    :
    :
I suspect this might be happening for one of the following reasons:
1. An incorrect AsyncAppender configuration in Logback. The threads are parked inside ArrayBlockingQueue.put(), which indicates the async appender's bounded queue is full, so calling threads block until space frees up (see the configuration sketch after this list).
2. You might be running on an old version of Logback. The current latest version is 1.4.14. You may want to upgrade and see whether the issue gets resolved.
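If reason 1 applies, the usual mitigation is to tune the AsyncAppender so that application threads never block on a full queue. Below is a minimal logback.xml sketch, assuming a simple file appender; the appender names, file path, and values are illustrative and not taken from your application:

<configuration>
  <!-- The appender that actually writes events; name and path are illustrative -->
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>app.log</file>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- AsyncAppender wraps FILE. The blocking seen in your thread dump happens
       when this bounded queue is full and neverBlock is false (the default). -->
  <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="FILE"/>
    <queueSize>2048</queueSize>                   <!-- default is 256; a larger queue absorbs bursts -->
    <discardingThreshold>0</discardingThreshold>  <!-- 0 keeps every event; by default TRACE/DEBUG/INFO are dropped once only 20% capacity remains -->
    <neverBlock>true</neverBlock>                 <!-- drop events instead of blocking callers when the queue is full -->
  </appender>

  <root level="DEBUG">
    <appender-ref ref="ASYNC"/>
  </root>
</configuration>

Note the trade-off: with neverBlock=true your application threads will never park in ArrayBlockingQueue.put(), but log events may be silently dropped during bursts.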
b. 510 Tomcat container threads in TIMED_WAITING state
There are 510 Tomcat container threads in the TIMED_WAITING state; they are idle, waiting for new requests to arrive at the application. Below is the stack trace of those threads:
stackTrace:
java.lang.Thread.State: TIMED_WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for <0x0000000083f148a0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.tomcat.util.threads.TaskQueue.poll(TaskQueue.java:90)
    at org.apache.tomcat.util.threads.TaskQueue.poll(TaskQueue.java:33)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:748)
I suspect this is caused by an excessive number of Tomcat container threads. You might consider lowering the minimum size of the Tomcat thread pool (see the sketch below).
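If you configure Tomcat directly through server.xml, the pool size is controlled on the connector (or on a shared Executor). The snippet below is a sketch: maxThreads and minSpareThreads are standard Tomcat connector attributes, but the port, protocol, and values shown are assumptions you would tune for your own load.

<!-- server.xml: illustrative values, not a recommendation for your workload.
     maxThreads is the upper bound on request-processing threads;
     minSpareThreads is the minimum number of idle threads Tomcat keeps alive. -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="200"
           minSpareThreads="25" />

If you run embedded Tomcat (for example via Spring Boot), the same limits are exposed as application properties instead (server.tomcat.threads.max and server.tomcat.threads.min-spare in recent Spring Boot versions).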