Hello Derek!
Strategic/Right approach:
In order to accurately pinpoint the lines of code causing the CPU spike, you need to analyze not only thread dumps but also the output of the 'top -H -p {PID}' command, where {PID} is the process id of the Java application experiencing the 100% CPU spike. This command lists all the threads running in the application and the amount of CPU each thread consumes. Once you have both sets of data, you can identify the high-CPU-consuming threads and the lines of code they are executing. You can either capture these artifacts and analyze them manually, or use the yCrash tool, which automatically captures application-level data (thread dump, heap dump, Garbage Collection log) and system-level data (netstat, vmstat, iostat, top, top -H, dmesg,...). It marries these two datasets and generates an instant root cause analysis report. Here is more information on how to diagnose a high CPU spike.
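To make the correlation concrete, here is a minimal sketch of the manual workflow. The PID and TID values are purely illustrative (not taken from your application); the key detail is that top prints thread ids in decimal, while the 'nid=' field in a Java thread dump is hexadecimal, so you must convert before searching:

```shell
# Hypothetical ids for illustration only -- substitute your own.
PID=14321   # your Java process id (e.g. from 'jps' or 'top')

# Step 1: per-thread CPU usage; the PID column in -H mode is the
# native thread id. (Shown for reference; needs a live process.)
#   top -b -H -n 1 -p "$PID" > top_threads.txt

# Step 2: a thread dump from the same process, captured at (nearly)
# the same moment, so the two datasets line up.
#   jstack "$PID" > thread_dump.txt

# Step 3: convert the hot thread's decimal id to hex, then look it
# up in the dump to see the exact stack trace it was executing.
TID=14387                       # a high-CPU thread id from top output
NID=$(printf '0x%x' "$TID")
echo "$NID"
#   grep -A 20 "nid=$NID" thread_dump.txt
```

Capturing several thread dumps a few seconds apart helps confirm whether the same stack frames stay on CPU.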
Tactical approach:
In your case, the fastThread tool applies its heuristic algorithms and lists the threads that are potentially consuming high CPU in the 'CPU consuming threads' section.
I could see a few Quartz scheduler threads making backend DB calls through Hibernate and doing a lot of regular-expression work. In general, both Hibernate activity and regular-expression processing consume a lot of CPU. However, we can't tell exactly how much CPU those threads are consuming, because the 'top -H -p {PID}' output is missing. Below is the excerpt from your report showing the CPU consuming threads:
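As an aside, one common source of regex-driven CPU burn in scheduled jobs is recompiling the pattern on every call. The sketch below (the pattern, input, and class name are hypothetical, not taken from your dump) shows the usual fix: compile the Pattern once and reuse it.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexHotspot {
    // Compile once: Pattern.compile() is the expensive step, and a
    // compiled Pattern is immutable and safe to share across threads.
    private static final Pattern ORDER_ID = Pattern.compile("ORD-(\\d{6})");

    static String extractOrderId(String line) {
        // Creating a Matcher per call is cheap; recompiling is not.
        Matcher m = ORDER_ID.matcher(line);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Avoid String.matches(...) inside hot loops -- it recompiles
        // the regex on every invocation.
        System.out.println(extractOrderId("shipped ORD-123456 today"));
    }
}
```

If the thread-dump stacks show frames inside java.util.regex.Pattern.compile on the hot threads, this pattern of fix is worth checking.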