FusionReactor Blocking Threads?


jl...@guidance.com

Mar 29, 2016, 12:25:08 PM
to FusionReactor
Howdy! I am supporting an application running ColdFusion + FusionReactor that at times encounters a sudden influx of transaction volume. Despite efforts to optimize the underlying application codebase, the system goes through a dead period at the start of this wave, which appears to be due to thread blocking. Looking at the output from FusionReactor, I find it curious that the blocking seems to be occurring within FusionReactor itself. See the stack trace below - my thread dump has countless instances of this. Can someone help me decipher this stack trace and confirm whether FusionReactor is the source of the blocking in my application?

"jrpp-30" Id=364 BLOCKED on com.intergral.fusionreactor.guard.internal.gate.FRRequestQuantityGate@13eb26a9 owned by "jrpp-101" Id=1639
java.lang.Thread.State: BLOCKED
at com.intergral.fusionreactor.guard.internal.gate.FRRequestQuantityGate.doNotification(FRRequestQuantityGate.java:689)
- waiting to lock com.intergral.fusionreactor.guard.internal.gate.FRRequestQuantityGate@13eb26a9 owned by "jrpp-101"
at com.intergral.fusionreactor.guard.internal.gate.FRRequestQuantityGate.test(FRRequestQuantityGate.java:474)
at com.intergral.apm.transit.impl.gate.TransitGateManager.runGateChain(TransitGateManager.java:89)
at com.intergral.apm.transit.impl.TransitImpl.runGateChain(TransitImpl.java:1562)
at com.intergral.apm.transit.txn.BaseTransaction._startTransaction(BaseTransaction.java:1344)
at com.intergral.apm.transit.txn.BaseTransaction._startTransaction(BaseTransaction.java:1303)
at com.intergral.apm.transit.txn.BaseTransaction.start(BaseTransaction.java:525)
at com.intergral.fusionreactor.j2ee.core.FusionRequest.<init>(FusionRequest.java:257)
at com.intergral.fusionreactor.j2ee.filter.FusionReactorRequestHandler.doHttpServletRequest(FusionReactorRequestHandler.java:300)
at com.intergral.fusionreactor.j2ee.filter.FusionReactorRequestHandler.doFusionRequest(FusionReactorRequestHandler.java:192)
at com.intergral.fusionreactor.j2ee.filter.FusionReactorRequestHandler.handle(FusionReactorRequestHandler.java:507)
at com.intergral.fusionreactor.j2ee.filter.FusionReactorCoreFilter.doFilter(FusionReactorCoreFilter.java:36)
at sun.reflect.GeneratedMethodAccessor60.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intergral.fusionreactor.j2ee.filterchain.WrappedFilterChain.doFilter(WrappedFilterChain.java:79)
at sun.reflect.GeneratedMethodAccessor59.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intergral.fusionreactor.agent.filter.FusionReactorStaticFilter.doFilter(FusionReactorStaticFilter.java:53)
at com.intergral.fusionreactor.agent.pointcuts.NewFilterChainPointCut$1.invoke(NewFilterChainPointCut.java:41)
at jrun.servlet.FilterChain.doFilter(FilterChain.java)
at jrun.servlet.FilterChain.service(FilterChain.java:101)
at jrun.servlet.ServletInvoker.invoke(ServletInvoker.java:106)
at jrun.servlet.JRunInvokerChain.invokeNext(JRunInvokerChain.java:42)
at jrun.servlet.JRunRequestDispatcher.invoke(JRunRequestDispatcher.java:286)
at jrun.servlet.ServletEngineService.dispatch(ServletEngineService.java:543)
at jrun.servlet.jrpp.JRunProxyService.invokeRunnable(JRunProxyService.java:203)
at jrunx.scheduler.ThreadPool$DownstreamMetrics.invokeRunnable(ThreadPool.java:320)
at jrunx.scheduler.ThreadPool$ThreadThrottle.invokeRunnable(ThreadPool.java:428)
at jrunx.scheduler.ThreadPool$UpstreamMetrics.invokeRunnable(ThreadPool.java:266)
at jrunx.scheduler.WorkerThread.run(WorkerThread.java:66)
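
For reference on reading the trace: a thread reported as BLOCKED on a monitor "owned by" another thread is waiting to enter a synchronized block that the owning thread has not yet left. Below is a minimal, generic sketch using the standard java.lang.management API (nothing FusionReactor-specific, and the class name is made up) showing how such blocked/owner pairs can be listed programmatically; a jstack dump gives the same information.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Lists threads that are BLOCKED on a monitor and names the lock owner --
// essentially what the "BLOCKED on ... owned by ..." lines in the dump above show.
public class BlockedThreadReport {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // dumpAllThreads(lockedMonitors, lockedSynchronizers)
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            if (info.getThreadState() == Thread.State.BLOCKED) {
                System.out.printf("%s is BLOCKED on %s owned by %s (id %d)%n",
                        info.getThreadName(),
                        info.getLockName(),       // e.g. the FRRequestQuantityGate monitor
                        info.getLockOwnerName(),  // e.g. "jrpp-101"
                        info.getLockOwnerId());
            }
        }
    }
}

If every blocked thread reports the same FRRequestQuantityGate monitor and the same owner, that points at contention on a single gate rather than on many unrelated locks.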

Charlie Arehart

Mar 29, 2016, 12:40:47 PM
to fusion...@googlegroups.com
Have you, or has anyone else, set up the FR Crash Protection feature (the "Protection" option on the left) so that any of the first three offered "protection" conditions is set to "queue" requests once that condition is reached?

If that's not it, are you saying that one, or all, of the hanging requests show this as the stack trace?

Also, are you getting this info from the "resources>threads>stack trace all" feature, the FR CP email alert (and its thread dump), or from the stack trace option on "requests>activity" for a specific long-running request? The latter is the best way to focus on why a particular request is hanging. The "stack trace all" feature is useful when you have many requests and want to see whether there's a pattern, while the CP alert thread dump is valuable for looking at the problem offline (since it's in email), or after a restart, or after the problem condition has passed.

Indeed, before we go further, can you confirm that you DO see many slow requests in requests>activity? There are scenarios where it can SEEM that "CF is unresponsive" when in fact FR shows no problem: no, or few, running requests. In that case, if this is CF10 or above, I would next suspect that the problem (of CF seeming unresponsive) is down to the CF/external web server connector (IIS, Apache, etc.), where THAT could hang up incoming requests. They never arrive at CF, so FR never sees them.

Let us know more about your situation and your thoughts on the above.

/charlie

Darren Pywell

Mar 29, 2016, 12:41:30 PM
to FusionReactor
Hi,

Can you let me know which version of FusionReactor you are using? FusionReactor had a bug some time ago where request threads could block while a slow mail server held up the Crash Protection email; that has since been fixed. I just want to make sure it is not the cause here.
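
To illustrate the failure mode in general terms: the sketch below is hypothetical code, not FusionReactor's, showing why sending an alert email while holding the same lock that every incoming request must acquire stalls request threads for as long as the mail server takes, and how handing the send to a background thread avoids that. All names are made up for the illustration.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical request gate that sends a notification email when a limit is hit.
public class GateWithAlerting {

    private final Object gateLock = new Object();
    private final ExecutorService mailer = Executors.newSingleThreadExecutor();
    private final int limit = 100;
    private int runningRequests = 0;

    // Anti-pattern: blocking I/O (SMTP) inside the lock that every request needs.
    // This is where threads pile up as BLOCKED when the mail server is slow.
    public boolean admitRequest() {
        synchronized (gateLock) {
            if (runningRequests >= limit) {
                sendAlertEmail();
                return false;
            }
            runningRequests++;
            return true;
        }
    }

    // One common remedy: hand the notification to a background executor so the
    // lock is released immediately. (Whether the 5.2.8 fix works exactly this
    // way is not something this sketch claims.)
    public boolean admitRequestDecoupled() {
        synchronized (gateLock) {
            if (runningRequests >= limit) {
                mailer.submit(new Runnable() {
                    public void run() { sendAlertEmail(); }
                });
                return false;
            }
            runningRequests++;
            return true;
        }
    }

    private void sendAlertEmail() {
        // Stand-in for SMTP delivery; a slow or unreachable mail server can sit here for seconds.
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}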

Thanks,
Darren

jl...@guidance.com

Mar 29, 2016, 1:00:09 PM
to FusionReactor
Thank you very much for the quick replies!

RE: Can you let me know which version of FusionReactor you are using? FusionReactor had a bug some time ago where request threads could block while a slow mail server held up the Crash Protection email; that has since been fixed. I just want to make sure it is not the cause here.

System Information

Server Details
Server Product: ColdFusion
Version: 9,0,1,274733
Edition: Standard
Operating System: UNIX
OS Version: 2.6.18-409.el5
Adobe Driver Version: 4.0 (Build 0005)

JVM Details
Java Version: 1.6.0_22
Java Vendor: Sun Microsystems Inc.

FusionReactor: ENT Edition v.5 (1 Year SUBSCRIPTION)
Revision: 5.2.7, Build: fusionreactor.204.48715.branches/FR-5.x-MAINT


RE: Have you, or has anyone else, set up the FR Crash Protection feature (the "Protection" option on the left) so that any of the first three offered "protection" conditions is set to "queue" requests once that condition is reached?

WebRequest Quantity Protection: Email Notification Only
WebRequest Runtime Protection: Continue Tracking Request
WebRequest Memory Protection: Email Notification Only

RE: If that's not it, are you saying that one or all the hanging requests show this as the stack trace?

At peak, each server is handling about 300 requests per second. At the moment this stack trace was captured, the server was handling 8 requests, each taking 24,000+ ms. The output is dense and I can't say whether this message ties back to all 8 requests; it does appear 33 times in the output. I can provide the entire output if desired. The reason I focused on this message is that I also see references to Jetty, which the application does not explicitly use - I assume FusionReactor does - see below. Is it possible that the FR agent is struggling to exchange information with the service?

"qtp2137456540-1679" Id=1679 TIMED_WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@24cbbdf3
java.lang.Thread.State: TIMED_WAITING
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@24cbbdf3
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
at org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:342)
at org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:526)
at org.eclipse.jetty.util.thread.QueuedThreadPool.access$600(QueuedThreadPool.java:44)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:662)
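
As an aside on the "33 times" figure: one way to tally occurrences in a saved dump is a small helper like this hypothetical one (the file name "threaddump.txt" is only an example), which counts threads blocked on the FRRequestQuantityGate monitor.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Counts thread-header lines in a saved dump that report BLOCKED on the
// FRRequestQuantityGate monitor (one such line per blocked thread).
public class GateBlockCounter {
    public static void main(String[] args) throws IOException {
        String marker = "BLOCKED on com.intergral.fusionreactor.guard.internal.gate.FRRequestQuantityGate";
        int count = 0;
        BufferedReader reader = new BufferedReader(new FileReader("threaddump.txt")); // assumed file name
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.contains(marker)) {
                    count++;
                }
            }
        } finally {
            reader.close();
        }
        System.out.println("Threads blocked on FRRequestQuantityGate: " + count);
    }
}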

Thank you again!!

Darren Pywell

Mar 29, 2016, 1:10:56 PM
to FusionReactor
Hi,

It looks like the bug in Crash Protection Emails was only fixed in 5.2.8.

FR5227 Bug Protection – Email sending causes slow performance

Can you try updating to 5.2.8 and let us know if the problem recurs?

Thanks,
Darren

jl...@guidance.com

Mar 29, 2016, 1:41:15 PM
to FusionReactor
Will do.  Thank you for the suggestion!