Additionally, my instance spec is a GCP e2-medium (2 vCPU, 4 GB RAM).
I’m making a simple API call whose handler publishes a message to a RabbitMQ server using rabbitTemplate.convertAndSend(). However, during load testing I observed an unusually high number of threads stuck in the TIMED_WAITING state.
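For context, here is a minimal sketch of the kind of endpoint being load-tested. The class name, queue name, and payload handling are assumptions for illustration; only the URL path and the rabbitTemplate.convertAndSend() call reflect my actual setup.

import java.util.Map;

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/group-participants")
public class TardinessController {

    private final RabbitTemplate rabbitTemplate;

    public TardinessController(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    @PostMapping("/groups/{groupId}/tardiness")
    public ResponseEntity<Void> reportTardiness(@PathVariable Long groupId,
                                                @RequestBody Map<String, Integer> body) {
        // convertAndSend() serializes the payload with the configured MessageConverter
        // and publishes it to the template's configured exchange under this routing key.
        rabbitTemplate.convertAndSend("group.tardiness",
                Map.of("groupId", groupId, "minute", body.get("minute")));
        return ResponseEntity.ok().build();
    }
}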
Here’s my K6 script:
import http from "k6/http";
import { check, sleep } from "k6";

export const options = {
  stages: [
    { duration: "0.5m", target: 100 },
    { duration: "1m", target: 300 },
    { duration: "1m", target: 500 },
  ],
};

export default function () {
  // k6 request params; request headers go inside the "headers" key
  const params = {
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiIxIiwiaWF0IjoxNzM2MTc1MTk1LCJleHAiOjE3MzYxODIzOTV9.B61E3jmweOWHJLfXLr0k_4kRii_RitUqOxn3XKvfGL2N8VGhvWOmTpuGco7-mRxVKFipKL-l16FkLnVqWYNESw`,
    },
  };
  const payload = JSON.stringify({ minute: 15 });
  const res = http.post(
    "https://amorgakco.store/api/group-participants/groups/1/tardiness",
    payload,
    params
  );
  check(res, {
    "status is 200": (r) => r.status === 200,
  });
  sleep(Math.random() * 2); // Random delay between requests
}
Is it normal to see so many threads in TIMED_WAITING, or could this indicate an issue in my configuration or RabbitMQ settings? Any insights would be greatly appreciated!
Report URL - https://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMjUvMDEvNy9qc3RhY2tfZHVtcHNfMTE5LnppcC0tMS0xNS01NQ==
Hello bukak2019,
It is natural to see TIMED_WAITING threads in a thread dump, because the application may internally be using Thread.sleep(..) or other timed waits.
However, your application does have another issue: a few threads are stuck, and this is happening in the JDBC layer.
Check the "Problem Detected" section in the report.
Hello bukak2019,
Greetings!
We observed that the TIMED_WAITING state peaks at around 200 threads in this report, which is not a major concern. Out of these, 194 threads belong to the http-nio-8080-exec pool, Apache Tomcat's request worker pool, and are simply idle, polling the task queue for new requests. You may consider reducing the thread pool size to save resources on an instance of this size. Attaching a screenshot for reference.
I’ve included the stack trace of one such thread below. Kindly review it to verify if it matches the issue.
stackTrace: java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@17.0.5/Native Method)
    - parking to wait for <0x00000000c603ded0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@17.0.5/LockSupport.java:252)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@17.0.5/AbstractQueuedSynchronizer.java:1672)
    at java.util.concurrent.LinkedBlockingQueue.poll(java.base@17.0.5/LinkedBlockingQueue.java:460)
    at org.apache.tomcat.util.threads.TaskQueue.poll(TaskQueue.java:99)
    at org.apache.tomcat.util.threads.TaskQueue.poll(TaskQueue.java:33)
    at org.apache.tomcat.util.threads.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1113)
    at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1175)
    at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
    at java.lang.Thread.run(java.base@17.0.5/Thread.java:833)
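If you decide to reduce the Tomcat worker pool, the sketch below shows one way to do it. It assumes a Spring Boot application, and the numbers are only placeholders; the simpler alternative is setting server.tomcat.threads.max in application.properties.

import org.apache.coyote.AbstractProtocol;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatThreadPoolConfig {

    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> tomcatThreadCap() {
        return factory -> factory.addConnectorCustomizers(connector -> {
            // The default maxThreads is 200, which matches the ~194 idle workers in the dump.
            // On a 2 vCPU instance a much smaller pool is usually sufficient.
            if (connector.getProtocolHandler() instanceof AbstractProtocol<?> protocol) {
                protocol.setMaxThreads(50);
                protocol.setMinSpareThreads(10);
            }
        });
    }
}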
I have also noticed a minor concern: 6 threads in jdk.proxy2.$Proxy210.findByReceiver(jdk.proxy2/Unknown Source) are stuck waiting for a response. Each of them appears to hold two locks on a com.mysql.cj.jdbc.ConnectionImpl object, which suggests they are blocked on the database connection while waiting for the query to complete.
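On the JDBC side, it is also worth reviewing your connection pool settings so that request threads fail fast instead of queuing for a connection. The sketch below assumes Spring Boot's default HikariCP pool; the URL, credentials, and numbers are placeholders, and the same values can be set through spring.datasource.hikari.maximum-pool-size and spring.datasource.hikari.connection-timeout.

import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/app"); // placeholder URL
        config.setUsername("app");                            // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(10);      // keep the pool small on a 2 vCPU instance
        config.setConnectionTimeout(3_000); // give up quickly instead of parking threads
        return new HikariDataSource(config);
    }
}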
Thanks.