
Kousika M

Hello Akhil Adhitya,

Greetings!

I had a look at the thread dump report you attached and noticed a couple of things you might want to check out.

 

First, there are 109 threads stuck at line #4790 inside java.util.regex.Pattern$GroupHead in the match() method. If your app's CPU usage is high or it's running a bit slow, these threads could be part of the reason, so it's worth going through their stack traces to see what's happening there. The stack trace of one such thread is given below, followed by a small diagnostic sketch:

stackTrace:
java.lang.Thread.State: RUNNABLE
at java.util.regex.Pattern$GroupHead.match(java.base@17.0.13/Pattern.java:4790)
at java.util.regex.Pattern$CharPropertyGreedy.match(java.base@17.0.13/Pattern.java:4291)
at java.util.regex.Pattern$Begin.match(java.base@17.0.13/Pattern.java:3672)
at java.util.regex.Matcher.match(java.base@17.0.13/Matcher.java:1755)
at java.util.regex.Matcher.matches(java.base@17.0.13/Matcher.java:712)
at io.prometheus.jmx.shaded.io.prometheus.jmx.JmxCollector$Receiver.recordBean(JmxCollector.java:363)
at io.prometheus.jmx.shaded.io.prometheus.jmx.JmxScraper.processBeanValue(JmxScraper.java:191)
at io.prometheus.jmx.shaded.io.prometheus.jmx.JmxScraper.scrapeBean(JmxScraper.java:159)
at io.prometheus.jmx.shaded.io.prometheus.jmx.JmxScraper.doScrape(JmxScraper.java:117)
at io.prometheus.jmx.shaded.io.prometheus.jmx.JmxCollector.collect(JmxCollector.java:460)
at io.prometheus.jmx.shaded.io.prometheus.client.CollectorRegistry$MetricFamilySamplesEnumeration.findNextElement(CollectorRegistry.java:183)
at io.prometheus.jmx.shaded.io.prometheus.client.CollectorRegistry$MetricFamilySamplesEnumeration.nextElement(CollectorRegistry.java:216)
at io.prometheus.jmx.shaded.io.prometheus.client.CollectorRegistry$MetricFamilySamplesEnumeration.nextElement(CollectorRegistry.java:137)
at io.prometheus.jmx.shaded.io.prometheus.client.exporter.common.TextFormat.write004(TextFormat.java:22)
at io.prometheus.jmx.shaded.io.prometheus.client.exporter.HTTPServer$HTTPMetricHandler.handle(HTTPServer.java:59)
at com.sun.net.httpserver.Filter$Chain.doFilter(jdk.httpserver@17.0.13/Filter.java:95)
at sun.net.httpserver.AuthFilter.doFilter(jdk.httpserver@17.0.13/AuthFilter.java:82)
at com.sun.net.httpserver.Filter$Chain.doFilter(jdk.httpserver@17.0.13/Filter.java:98)
at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(jdk.httpserver@17.0.13/ServerImpl.java:853)
at com.sun.net.httpserver.Filter$Chain.doFilter(jdk.httpserver@17.0.13/Filter.java:95)
at sun.net.httpserver.ServerImpl$Exchange.run(jdk.httpserver@17.0.13/ServerImpl.java:820)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.13/ThreadPoolExecutor.java:1136)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.13/ThreadPoolExecutor.java:635)
at java.lang.Thread.run(java.base@17.0.13/Thread.java:842)
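
For context, the trace shows the regex work being driven by the Prometheus JMX exporter (JmxCollector.recordBean matches its rules against every MBean), so tightening the exporter's include/exclude patterns is often where the win is. To confirm the hotspot at runtime, here is a minimal sketch, run inside the same JVM, that counts live threads currently executing inside java.util.regex (the class name RegexThreadScan is just illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class RegexThreadScan {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        int inRegex = 0;
        // Dump every live thread and count the ones whose stack
        // currently passes through java.util.regex.*
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            for (StackTraceElement frame : info.getStackTrace()) {
                if (frame.getClassName().startsWith("java.util.regex.")) {
                    System.out.println(info.getThreadName() + " -> " + frame);
                    inRegex++;
                    break; // count each thread once
                }
            }
        }
        System.out.println(inRegex + " thread(s) currently inside java.util.regex");
    }
}

If the count stays near 109 across repeated runs, the matching is continuous rather than a momentary spike.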

 

Second, 45 threads are stuck waiting for a response from the RMI system. If those responses are delayed or something is blocking them, these waits can slow down your transactions.
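
If these turn out to be outbound JRMP calls, one common mitigation is to bound how long the JDK's RMI transport waits, so stalled threads fail fast instead of piling up. A minimal sketch under that assumption (in practice the -D JVM flags at startup are safer, since the properties must be set before any RMI class initializes):

public class RmiTimeouts {
    public static void main(String[] args) {
        // Equivalent to -Dsun.rmi.transport.tcp.responseTimeout=15000 etc.;
        // the values here are placeholders, so tune them to your SLA.
        System.setProperty("sun.rmi.transport.tcp.responseTimeout", "15000"); // wait for a remote reply (ms)
        System.setProperty("sun.rmi.transport.proxy.connectTimeout", "5000"); // connection setup (ms)
        // ... start the RMI client / application after this point
    }
}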

Third, on checking the other report, I can see that around 96% of the threads in the pool-8-thread pool are idle and not performing any tasks. This likely means the pool is over-allocated, which unnecessarily consumes system resources and can affect overall application performance. You might want to resize this thread pool based on actual workload requirements; a sketch is given below.
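
As a starting point for the resize, here is a minimal sketch (the sizes 4 / 16 / 1000 are placeholders; derive yours from the measured workload). The key call is allowCoreThreadTimeOut(true), which lets even core threads exit after the keep-alive, so an idle pool shrinks instead of holding dozens of parked threads:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RightSizedPool {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4,                      // corePoolSize: sized for steady load
                16,                     // maximumPoolSize: used only once the queue fills
                60L, TimeUnit.SECONDS,  // keep-alive for idle threads
                new LinkedBlockingQueue<>(1000));
        pool.allowCoreThreadTimeOut(true); // idle core threads terminate too
        pool.execute(() -> System.out.println("ran on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}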

 

If CPU is a concern right now, I'd suggest starting with the CPU Consuming Threads section in the report; that should give a clearer picture of what's hogging resources.
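
You can also cross-check that section programmatically. This sketch (TopCpuThreads is an illustrative name) prints per-thread CPU time via ThreadMXBean; the largest numbers should line up with what the report flags:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class TopCpuThreads {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (!mx.isThreadCpuTimeSupported()) {
            System.out.println("Per-thread CPU time is not supported on this JVM");
            return;
        }
        for (long id : mx.getAllThreadIds()) {
            long cpuNanos = mx.getThreadCpuTime(id); // -1 if the thread has already died
            ThreadInfo info = mx.getThreadInfo(id);  // null for terminated threads
            if (cpuNanos > 0 && info != null) {
                System.out.printf("%-50s %8d ms%n", info.getThreadName(), cpuNanos / 1_000_000);
            }
        }
    }
}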



Thanks.
