Deadlock with Lift 2.5-RC4 + Jetty 6.1.24


Geir Harald Hansen

May 23, 2013, 7:26:15 PM5/23/13
to Lift
Hi,

running Lift 2.5-RC4 with Jetty 6.1.24-6ubuntu0.12.04.1 under OpenJDK
1.6.0_27, I am getting some deadlocks. The web server becomes
unresponsive and must be restarted.

Anyone else seeing this?

2.5-RC5 gave me errors about the openid module missing, if I remember
correctly. I'll give 2.5-RC6 a go.

Interesting data from jstack below.

Regards,
Geir H. Hansen


jstack output:

Found one Java-level deadlock:
=============================
"1627879514@qtp-930501550-595":
waiting to lock monitor 0x00000000012b4fe0 (object 0x00000007833775b8,
a org.mortbay.io.nio.SelectorManager$SelectSet),
which is held by "997133288@qtp-930501550-8 - Acceptor1
SelectChann...@127.0.0.1:8080"
"997133288@qtp-930501550-8 - Acceptor1
SelectChann...@127.0.0.1:8080":
waiting to lock monitor 0x00000000013cdab0 (object 0x00000007921cd078,
a org.mortbay.jetty.nio.SelectChannelConnector$RetryContinuation),
which is held by "pool-4-thread-14"
"pool-4-thread-14":
waiting to lock monitor 0x00000000012b4fe0 (object 0x00000007833775b8,
a org.mortbay.io.nio.SelectorManager$SelectSet),
which is held by "997133288@qtp-930501550-8 - Acceptor1
SelectChann...@127.0.0.1:8080"

Java stack information for the threads listed above:
===================================================
"1627879514@qtp-930501550-595":
at
org.mortbay.jetty.nio.SelectChannelConnector$RetryContinuation.reset(SelectChannelConnector.java:441)
- waiting to lock <0x00000007833775b8> (a
org.mortbay.io.nio.SelectorManager$SelectSet)
at
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:598)
at
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:926)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"997133288@qtp-930501550-8 - Acceptor1
SelectChann...@127.0.0.1:8080":
at
org.mortbay.jetty.nio.SelectChannelConnector$RetryContinuation.expire(SelectChannelConnector.java:511)
- waiting to lock <0x00000007921cd078> (a
org.mortbay.jetty.nio.SelectChannelConnector$RetryContinuation)
at org.mortbay.thread.Timeout.tick(Timeout.java:137)
- locked <0x00000007833775b8> (a
org.mortbay.io.nio.SelectorManager$SelectSet)
at org.mortbay.thread.Timeout.tick(Timeout.java:153)
at
org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:754)
at
org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:191)
at
org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
at
org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
at
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"pool-4-thread-14":
at
org.mortbay.jetty.nio.SelectChannelConnector$RetryContinuation.resume(SelectChannelConnector.java:497)
- waiting to lock <0x00000007833775b8> (a
org.mortbay.io.nio.SelectorManager$SelectSet)
at sun.reflect.GeneratedMethodAccessor236.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at
net.liftweb.http.provider.servlet.containers.Jetty6AsyncProvider.resume(Jetty6AsyncProvider.scala:121)
- locked <0x00000007921cd078> (a
org.mortbay.jetty.nio.SelectChannelConnector$RetryContinuation)
at
net.liftweb.http.provider.servlet.HTTPRequestServlet.resume(HTTPRequestServlet.scala:163)
at
net.liftweb.http.LiftServlet$$anonfun$17.apply(LiftServlet.scala:701)
at
net.liftweb.http.LiftServlet$$anonfun$17.apply(LiftServlet.scala:701)
at
net.liftweb.http.LiftServlet$ContinuationActor$$anonfun$messageHandler$1.applyOrElse(LiftServlet.scala:687)
at
scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at
scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at
scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at
net.liftweb.actor.LiftActor$class.execTranslate(LiftActor.scala:440)
at
net.liftweb.http.LiftServlet$ContinuationActor.execTranslate(LiftServlet.scala:662)
at
net.liftweb.actor.SpecializedLiftActor$class.liftedTree2$1(LiftActor.scala:288)
at
net.liftweb.actor.SpecializedLiftActor$class.net$liftweb$actor$SpecializedLiftActor$$proc2(LiftActor.scala:287)
at
net.liftweb.actor.SpecializedLiftActor$$anonfun$net$liftweb$actor$SpecializedLiftActor$$processMailbox$1.apply$mcV$sp(LiftActor.scala:210)
at
net.liftweb.actor.SpecializedLiftActor$$anonfun$net$liftweb$actor$SpecializedLiftActor$$processMailbox$1.apply(LiftActor.scala:210)
at
net.liftweb.actor.SpecializedLiftActor$$anonfun$net$liftweb$actor$SpecializedLiftActor$$processMailbox$1.apply(LiftActor.scala:210)
at
net.liftweb.actor.SpecializedLiftActor$class.around(LiftActor.scala:224)
at
net.liftweb.http.LiftServlet$ContinuationActor.around(LiftServlet.scala:662)
at
net.liftweb.actor.SpecializedLiftActor$class.net$liftweb$actor$SpecializedLiftActor$$processMailbox(LiftActor.scala:209)
at
net.liftweb.actor.SpecializedLiftActor$$anonfun$2$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiftActor.scala:173)
at
net.liftweb.actor.LAScheduler$$anonfun$9$$anon$2$$anon$3.run(LiftActor.scala:64)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)

Found 1 deadlock.

David Pollak

May 23, 2013, 8:17:11 PM5/23/13
to liftweb
Weird.

If you can package up a repro case, I'd love to sink my teeth into it.



--
Telegram, Simply Beautiful CMS https://telegr.am
Lift, the simply functional web framework http://liftweb.net

Sergey Trofimov

May 24, 2013, 11:40:15 AM5/24/13
to lif...@googlegroups.com
I occasionally get a lockup (several threads blocked, CPU usage climbing, Jetty restart required; it has happened 2 or 3 times since upgrading to 2.5-M4, and it's not clear how to reproduce) like this:

---------------------------

"qtp1988368988-4348" - Thread t@4348
   java.lang.Thread.State: BLOCKED
    at net.liftweb.http.LiftSession.fixSessionTime(LiftSession.scala:796)
    - waiting to lock <62789d78> (a net.liftweb.http.LiftSession) owned by "qtp1988368988-3601" t@3601
    at net.liftweb.http.LiftRules.net$liftweb$http$LiftRules$$_getLiftSession(LiftRules.scala:281)
    at net.liftweb.http.LiftRules$$anonfun$11.apply(LiftRules.scala:250)
    at net.liftweb.http.LiftRules$$anonfun$11.apply(LiftRules.scala:250)
    at net.liftweb.http.LiftServlet.getLiftSession(LiftServlet.scala:92)
    at net.liftweb.http.LiftServlet.doService(LiftServlet.scala:293)
    at net.liftweb.http.LiftServlet$$anonfun$doIt$1$1.apply(LiftServlet.scala:145)
    at net.liftweb.http.LiftServlet$$anonfun$doIt$1$1.apply(LiftServlet.scala:144)
    at net.liftweb.util.TimeHelpers$class.calcTime(TimeHelpers.scala:345)
    at net.liftweb.util.Helpers$.calcTime(Helpers.scala:34)
    at net.liftweb.util.TimeHelpers$class.logTime(TimeHelpers.scala:364)
    at net.liftweb.util.Helpers$.logTime(Helpers.scala:34)
    at net.liftweb.http.LiftServlet.doIt$1(LiftServlet.scala:144)
    at net.liftweb.http.LiftServlet.service(LiftServlet.scala:155)
    at net.liftweb.http.provider.HTTPProvider$$anonfun$service$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply$mcV$sp(HTTPProvider.scala:72)
    at net.liftweb.http.provider.HTTPProvider$$anonfun$service$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(HTTPProvider.scala:71)
    at net.liftweb.http.provider.HTTPProvider$$anonfun$service$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(HTTPProvider.scala:71)
    at net.liftweb.util.ThreadGlobal.doWith(ThreadGlobal.scala:71)
    at net.liftweb.http.URLRewriter$.doWith(Req.scala:1278)
    at net.liftweb.http.provider.HTTPProvider$$anonfun$service$2$$anonfun$apply$mcV$sp$1.apply$mcV$sp(HTTPProvider.scala:70)
    at net.liftweb.http.provider.HTTPProvider$$anonfun$service$2$$anonfun$apply$mcV$sp$1.apply(HTTPProvider.scala:70)
    at net.liftweb.http.provider.HTTPProvider$$anonfun$service$2$$anonfun$apply$mcV$sp$1.apply(HTTPProvider.scala:70)
    at net.liftweb.util.ThreadGlobal.doWith(ThreadGlobal.scala:71)
    at net.liftweb.http.provider.HTTPProvider$$anonfun$service$2.apply$mcV$sp(HTTPProvider.scala:66)
    at net.liftweb.http.provider.HTTPProvider$$anonfun$service$2.apply(HTTPProvider.scala:60)
    at net.liftweb.http.provider.HTTPProvider$$anonfun$service$2.apply(HTTPProvider.scala:60)
    at net.liftweb.util.ThreadGlobal.doWith(ThreadGlobal.scala:71)
    at net.liftweb.http.provider.HTTPProvider$class.service(HTTPProvider.scala:60)
    at net.liftweb.http.LiftFilter.service(LiftServlet.scala:922)
    at net.liftweb.http.provider.servlet.ServletFilterProvider$$anonfun$doFilter$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply$mcV$sp(ServletFilterProvider.scala:82)
    at net.liftweb.http.provider.servlet.ServletFilterProvider$$anonfun$doFilter$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(ServletFilterProvider.scala:82)
    at net.liftweb.http.provider.servlet.ServletFilterProvider$$anonfun$doFilter$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(ServletFilterProvider.scala:82)
    at net.liftweb.http.provider.servlet.ServletFilterProvider$class.handleLoan$1(ServletFilterProvider.scala:59)
    at net.liftweb.http.provider.servlet.ServletFilterProvider$class.net$liftweb$http$provider$servlet$ServletFilterProvider$$handleLoanWrappers(ServletFilterProvider.scala:63)
    at net.liftweb.http.provider.servlet.ServletFilterProvider$$anonfun$doFilter$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ServletFilterProvider.scala:82)
    at net.liftweb.http.provider.servlet.ServletFilterProvider$$anonfun$doFilter$1$$anonfun$apply$mcV$sp$1.apply(ServletFilterProvider.scala:77)
    at net.liftweb.http.provider.servlet.ServletFilterProvider$$anonfun$doFilter$1$$anonfun$apply$mcV$sp$1.apply(ServletFilterProvider.scala:77)
    at net.liftweb.http.CoreRequestVarHandler$$anonfun$apply$14$$anonfun$apply$15$$anonfun$apply$16$$anonfun$apply$17.apply(Vars.scala:619)
    at net.liftweb.util.ThreadGlobal.doWith(ThreadGlobal.scala:71)
    at net.liftweb.http.CoreRequestVarHandler$$anonfun$apply$14$$anonfun$apply$15$$anonfun$apply$16.apply(Vars.scala:618)
    at net.liftweb.util.ThreadGlobal.doWith(ThreadGlobal.scala:71)
    at net.liftweb.http.CoreRequestVarHandler$$anonfun$apply$14$$anonfun$apply$15.apply(Vars.scala:617)
    at net.liftweb.util.ThreadGlobal.doWith(ThreadGlobal.scala:71)
    at net.liftweb.http.CoreRequestVarHandler$$anonfun$apply$14.apply(Vars.scala:616)
    at net.liftweb.util.ThreadGlobal.doWith(ThreadGlobal.scala:71)
    at net.liftweb.http.CoreRequestVarHandler$class.apply(Vars.scala:615)
    at net.liftweb.http.RequestVarHandler$.apply(Vars.scala:515)
    at net.liftweb.http.provider.servlet.ServletFilterProvider$$anonfun$doFilter$1.apply$mcV$sp(ServletFilterProvider.scala:75)
    at net.liftweb.http.provider.servlet.ServletFilterProvider$$anonfun$doFilter$1.apply(ServletFilterProvider.scala:75)
    at net.liftweb.http.provider.servlet.ServletFilterProvider$$anonfun$doFilter$1.apply(ServletFilterProvider.scala:75)
    at net.liftweb.http.CoreRequestVarHandler$$anonfun$apply$14$$anonfun$apply$15$$anonfun$apply$16$$anonfun$apply$17.apply(Vars.scala:619)
    at net.liftweb.util.ThreadGlobal.doWith(ThreadGlobal.scala:71)
    at net.liftweb.http.CoreRequestVarHandler$$anonfun$apply$14$$anonfun$apply$15$$anonfun$apply$16.apply(Vars.scala:618)
    at net.liftweb.util.ThreadGlobal.doWith(ThreadGlobal.scala:71)
    at net.liftweb.http.CoreRequestVarHandler$$anonfun$apply$14$$anonfun$apply$15.apply(Vars.scala:617)
    at net.liftweb.util.ThreadGlobal.doWith(ThreadGlobal.scala:71)
    at net.liftweb.http.CoreRequestVarHandler$$anonfun$apply$14.apply(Vars.scala:616)
    at net.liftweb.util.ThreadGlobal.doWith(ThreadGlobal.scala:71)
    at net.liftweb.http.CoreRequestVarHandler$class.apply(Vars.scala:615)
    at net.liftweb.http.TransientRequestVarHandler$.apply(Vars.scala:528)
    at net.liftweb.http.provider.servlet.ServletFilterProvider$class.doFilter(ServletFilterProvider.scala:74)
    at net.liftweb.http.LiftFilter.doFilter(LiftServlet.scala:922)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1291)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:443)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:532)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:227)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1044)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:372)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:978)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:234)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
    at org.eclipse.jetty.server.Server.handle(Server.java:367)
    at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:486)
    at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:926)
    at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:988)
    at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
    at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
    at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
    at java.lang.Thread.run(Thread.java:722)

   Locked ownable synchronizers:
    - None

-------------------------------

"pool-10-thread-15" - Thread t@121
   java.lang.Thread.State: BLOCKED
    at net.liftweb.http.LiftSession.exitComet(LiftSession.scala:683)
    - waiting to lock <7ff773cc> (a net.liftweb.http.LiftSession) owned by "pool-9-thread-16" t@110
    at net.liftweb.http.LiftServlet$ContinuationActor$$anonfun$messageHandler$1.applyOrElse(LiftServlet.scala:683)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at net.liftweb.actor.LiftActor$class.execTranslate(LiftActor.scala:440)
    at net.liftweb.http.LiftServlet$ContinuationActor.execTranslate(LiftServlet.scala:662)
    at net.liftweb.actor.SpecializedLiftActor$class.liftedTree2$1(LiftActor.scala:288)
    at net.liftweb.actor.SpecializedLiftActor$class.net$liftweb$actor$SpecializedLiftActor$$proc2(LiftActor.scala:287)
    at net.liftweb.actor.SpecializedLiftActor$$anonfun$net$liftweb$actor$SpecializedLiftActor$$processMailbox$1.apply$mcV$sp(LiftActor.scala:210)
    at net.liftweb.actor.SpecializedLiftActor$$anonfun$net$liftweb$actor$SpecializedLiftActor$$processMailbox$1.apply(LiftActor.scala:210)
    at net.liftweb.actor.SpecializedLiftActor$$anonfun$net$liftweb$actor$SpecializedLiftActor$$processMailbox$1.apply(LiftActor.scala:210)
    at net.liftweb.actor.SpecializedLiftActor$class.around(LiftActor.scala:224)
    at net.liftweb.http.LiftServlet$ContinuationActor.around(LiftServlet.scala:662)
    at net.liftweb.actor.SpecializedLiftActor$class.net$liftweb$actor$SpecializedLiftActor$$processMailbox(LiftActor.scala:209)
    at net.liftweb.actor.SpecializedLiftActor$$anonfun$2$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiftActor.scala:173)
    at net.liftweb.actor.LAScheduler$$anonfun$9$$anon$2$$anon$3.run(LiftActor.scala:64)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)

   Locked ownable synchronizers:
    - locked <5ccd8418> (a java.util.concurrent.ThreadPoolExecutor$Worker)

--
Sergey Trofimov

David Pollak

May 24, 2013, 12:53:20 PM5/24/13
to liftweb
Please open a ticket on this and assign it to me. Lemme see what I can work out.

Peter Petersson

May 24, 2013, 4:30:41 PM5/24/13
to lif...@googlegroups.com
On 05/24/2013 01:26 AM, Geir Harald Hansen wrote:
> Hi,
>
> running Lift 2.5-RC4 with Jetty 6.1.24-6ubuntu0.12.04.1 under OpenJDK
> 1.6.0_27, I am getting some deadlocks. The web server becomes
> unresponsive and must be restarted.
>
> Anyone else seeing this?
>
> 2.5-RC5 gave me errors about the openid module missing, if I remember
> correctly. I'll give 2.5-RC6 a go.

If you get problems referring to the openid module, chances are you need
to change the way you refer to it.

See this thread [1] (Modules: changes to how you include them (***
BREAKING CHANGE ***)), and if you are using Maven, take a look at (for
example) the dependency settings section [2] for FoBo for a general
description of how to refer to Lift modules.

[1] https://groups.google.com/forum/#!topic/liftweb/aIcGqqK9Oqo
[2] https://github.com/karma4u101/FoBo#dependency-settings
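
For illustration, module dependencies now come from the net.liftmodules organization, with the Lift edition baked into the artifact name. A minimal sbt sketch (the version numbers here are illustrative assumptions; check the module's docs for the current ones):

libraryDependencies ++= Seq(
  "net.liftweb"     %% "lift-webkit" % "2.5-RC6",
  // modules are no longer "net.liftweb" %% "lift-openid"; the artifact
  // name now carries the Lift edition: <module>_<lift-edition>
  "net.liftmodules" %% "openid_2.5"  % "1.2"  // version illustrative
)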

best regards
Peter Petersson

Geir Harald Hansen

May 26, 2013, 10:39:39 AM5/26/13
to lif...@googlegroups.com
Hi,

not sure if I will be able to reproduce this reliably. It only happens
after moving to a new server (different hardware, different Linux distro
version). And it only happens 1-3 times per day on a server with 70,000
to 100,000 pageviews per day.

But perhaps the stack traces can be useful. I have had 6 of these
lockups so far, and it's always the same:

A Jetty thread (X@qtp-Y-Z) locks a
org.mortbay.io.nio.SelectorManager$SelectSet and then tries to lock a
org.mortbay.jetty.nio.SelectChannelConnector$RetryContinuation. The
stack trace contains only org.mortbay.* code.

A LiftActor thread (pool-X-thread-Y) locks a
org.mortbay.jetty.nio.SelectChannelConnector$RetryContinuation and then
tries to lock a org.mortbay.io.nio.SelectorManager$SelectSet.

It seems to take the first lock in
net.liftweb.http.provider.servlet.containers.Jetty6AsyncProvider.resume
and tries to take the second in Jetty code, in
org.mortbay.jetty.nio.SelectChannelConnector$RetryContinuation.resume.

Since the locks are taken in opposite order, a deadlock occurs.

Looking at Jetty6AsyncProvider.scala this method appears to be the problem:

def resume(what: (Req, LiftResponse)): Boolean = {
  val cont = getContinuation.invoke(contSupport, servletReq, LiftRules)
  cont.synchronized {
    logger.trace("In resume on Jetty 6")
    val pending = isPending.invoke(cont).asInstanceOf[Boolean]
    if (pending) {
      setObject.invoke(cont, what)
      resumeMeth.invoke(cont)
    }
    pending
  }
}

cont is a org.mortbay.jetty.nio.SelectChannelConnector$RetryContinuation
and is locked first.

resumeMeth.invoke() will lock a
org.mortbay.io.nio.SelectorManager$SelectSet.

The problem is that Jetty internally takes these two locks in the
opposite order.
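
To make the inversion concrete, here is a minimal standalone sketch (plain Scala, not Lift or Jetty code; the two objects just stand in for the two monitors in the jstack output) that takes the same pair of locks in opposite orders and reliably deadlocks:

object LockOrderInversion extends App {
  val selectSet    = new Object // stands in for SelectorManager$SelectSet
  val continuation = new Object // stands in for RetryContinuation

  // Jetty's timeout path: SelectSet first, then the continuation.
  new Thread(new Runnable {
    def run(): Unit = selectSet.synchronized {
      Thread.sleep(100) // widen the race window
      continuation.synchronized { println("acceptor got both locks") }
    }
  }).start()

  // Lift's resume path: continuation first, then the SelectSet.
  new Thread(new Runnable {
    def run(): Unit = continuation.synchronized {
      Thread.sleep(100)
      selectSet.synchronized { println("resumer got both locks") }
    }
  }).start()
}

Running jstack against this process reports the same two-thread cycle as above. The usual cure is a single agreed acquisition order, but since one side of the ordering lives inside Jetty 6 itself, Lift alone can presumably only work around it.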

Regards,
Geir H. Hansen

Geir Harald Hansen

May 26, 2013, 11:04:02 AM5/26/13
to lif...@googlegroups.com
On 24.05.2013 22:30, Peter Petersson wrote:
> If you get problems referring to the openid module, chances are you need
> to change the way you refer to it.
>
> [1] https://groups.google.com/forum/#!topic/liftweb/aIcGqqK9Oqo
> [2] https://github.com/karma4u101/FoBo#dependency-settings

Thanks. Works great for 2.5-RC6 :)

Regards,
Geir H. Hansen

Geir Harald Hansen

May 26, 2013, 3:59:36 PM5/26/13
to lif...@googlegroups.com
Is there an easy way to disable the use of Jetty 6 continuations in
Lift, even when running under Jetty 6? Since that is what is locking up
for me, it could be a temporary workaround.
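
One speculative direction, assuming Lift's container async support is pluggable from Boot: install a provider meta that reports no suspend/resume support, so Lift falls back to plain synchronous handling. The hook (asyncProviderMeta) and the trait members below are assumptions inferred from the Jetty6AsyncProvider naming, not a verified Lift 2.5 API, so treat this strictly as a sketch:

import net.liftweb.common.{Box, Empty, Full}
import net.liftweb.http.LiftRules
import net.liftweb.http.provider.HTTPRequest
import net.liftweb.http.provider.servlet.ServletAsyncProvider
import net.liftweb.http.provider.servlet.containers.AsyncProviderMeta

// Hypothetical: a meta that advertises no continuation support at all,
// which should push comet/ajax requests onto the synchronous path.
object NoContinuations extends AsyncProviderMeta {
  def suspendResumeSupport_? : Boolean = false
  def providerFunction: Box[HTTPRequest => ServletAsyncProvider] = Empty
}

// In Boot.boot (hook name and assignment form assumed):
// LiftRules.asyncProviderMeta = Full(NoContinuations)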

Regards,
Geir H. Hansen


Diego Medina

May 26, 2013, 4:02:24 PM5/26/13
to Lift
I don't know about disabling continuations, but Jetty 6 is pretty old; the current stable version is 9. Would upgrading to 7 or 8 be an option for you?
--
Diego Medina
Lift/Scala Developer
di...@fmpwizard.com
http://fmpwizard.telegr.am

Geir Harald Hansen

May 26, 2013, 4:37:06 PM5/26/13
to lif...@googlegroups.com
Maybe, if upgrading Jetty fixes this issue. For some reason Jetty 6 is
what comes with Ubuntu, and it's nice to have quick and easy (security)
upgrades.


Sergey Trofimov

May 29, 2013, 12:34:56 PM5/29/13
to lif...@googlegroups.com
Hello David.

The ticket has been created: https://github.com/lift/framework/issues/1455.


Geir Harald Hansen

May 29, 2013, 7:44:07 PM5/29/13
to lif...@googlegroups.com
Hi,

is this enough information to go on? Should I open a ticket?

Regards,
Geir H. Hansen

David Pollak

May 29, 2013, 8:07:41 PM5/29/13
to liftweb
The ticket is open. Lemme see what I can do.




Geir Harald Hansen

May 30, 2013, 11:15:38 AM5/30/13
to lif...@googlegroups.com
Hi,

the deadlock I was having was not the same as the one Sergey Trofimov had.

I switched to Jetty 9 to get rid of the deadlock issue with Jetty 6
continuations. Now I am getting the same problem that he has been having.

# grep BLOCKED /tmp/threads.txt |wc -l
78

The web server is still responsive, but CPU usage keeps going higher. At
first everything looked fine. Then overnight CPU usage indicated one
thread spinning, always over 100%. Then, as I watched, it suddenly went
to 300-700% CPU usage, and later 500-800% CPU, according to top.

Normal CPU usage for this website is 8-12%, so I take sustained CPU
usage over 100% to mean one or more spinning threads.

The web server is noticeably slower, but still responsive.
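
To pin down which threads are spinning, one option is to rank live threads by accumulated CPU time with the standard JMX thread bean. A sketch (run inside the affected JVM, e.g. from a debug endpoint; getThreadCpuTime must be supported and enabled on the JVM):

import java.lang.management.ManagementFactory

object HotThreads extends App {
  val tmx = ManagementFactory.getThreadMXBean
  tmx.getAllThreadIds.toSeq
    .map(id => (tmx.getThreadCpuTime(id), tmx.getThreadInfo(id)))
    .collect { case (cpu, info) if cpu > 0 && info != null => (cpu, info) }
    .sortBy { case (cpu, _) => -cpu } // busiest first
    .take(5)
    .foreach { case (cpu, info) =>
      println(f"${cpu / 1e9}%8.1fs CPU  ${info.getThreadName} (${info.getThreadState})")
    }
}

Alternatively, matching the native thread ids from top -H (converted to hex) against the nid= fields in the jstack output identifies the spinning threads without any code.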

With the deadlock I was having with Jetty 6 the web server became
completely unresponsive, and the stack trace looks quite different -
different locks are involved in the deadlock.

Shall I open a ticket on the Jetty 6 issue?

I realize Jetty 6 is ancient, but I think it's a good idea to keep
supporting it. As long as Ubuntu is pushing Jetty 6, people will be
using it.

Regards,
Geir H. Hansen

David Pollak

May 30, 2013, 12:57:26 PM5/30/13
to liftweb
Please try with https://github.com/lift/framework/commit/afc9487413dc614b7683e8564c67c1d63c737b51

This would be 2.6-SNAPSHOT.

If you do see a deadlock, please include all the deadlocked threads. I'd like to see the object synchronizations.

Sergey Trofimov

May 31, 2013, 3:03:47 AM5/31/13
to lif...@googlegroups.com
Hello David.

I'll try 2.6-SNAPSHOT in production.

Today it happened again; I've attached a full thread dump.
It looks like all actor functionality in the application was affected.

--
Sergey

threaddump-1369982976657.tdump

David Pollak

May 31, 2013, 12:14:20 PM5/31/13
to lif...@googlegroups.com
There's no deadlock in this stack trace.

What do you mean "all actor functionality was affected?"

David Pollak

May 31, 2013, 12:22:55 PM5/31/13
to liftweb
Also, this code was not run against 2.6-SNAPSHOT. For example: at net.liftweb.http.LiftSession.findFunc(LiftSession.scala:705)

findFunc is not at line 705 in LiftSession in 2.6-SNAPSHOT.

Sergey Trofimov

May 31, 2013, 5:43:59 PM5/31/13
to lif...@googlegroups.com
> There's no deadlock in this stack trace.

Hm, maybe I incorrectly interpreted info like “"qtp186413147-4162" - Thread t@4162 java.lang.Thread.State: BLOCKED” and “"pool-10-thread-16" - Thread t@118 java.lang.Thread.State: BLOCKED” as a deadlock...


> What do you mean "all actor functionality was affected?"

There is an actor in our application which repeatedly executes some operations every few seconds (this can be seen in the logs). When CPU usage rose, this actor stopped working (and logging). So it looks like actor message processing broke.

> Also, this code was not run against 2.6-SNAPSHOT.

Yes, the application had not been updated yet at that time.
I just added more detail in case Geir Harald Hansen and I are talking about totally different issues. Also, today was a “happy” day when the issue reproduced.



--
Sergey Trofimov

Sergey Trofimov

Jun 1, 2013, 11:05:13 AM6/1/13
to lif...@googlegroups.com
I've upgraded our application to 2.6-SNAPSHOT.
Evaluating.

Have a nice weekend.


--
Sergey Trofimov

Sergey Trofimov

Jun 2, 2013, 3:52:06 PM6/2/13
to lif...@googlegroups.com
Now, after upgrading to 2.6-SNAPSHOT, I've got the following issue:

- no "qtp..." threads are affected
- pairs of "pool-..." threads are inter-blocked like this:

"pool-10-thread-13" - Thread t@115
   java.lang.Thread.State: BLOCKED
    at net.liftweb.http.LiftSession$$anonfun$removeCometActor$1.apply$mcV$sp(LiftSession.scala:2227)
    - waiting to lock <13bb1222> (a net.liftweb.http.LiftSession) owned by "pool-9-thread-6" t@95

"pool-9-thread-6" - Thread t@95
   java.lang.Thread.State: BLOCKED
    at net.liftweb.http.LiftSession$$anonfun$shutDown$1.apply$mcV$sp(LiftSession.scala:1087)
    - waiting to lock <5ea3804c> (a java.lang.Object) owned by "pool-10-thread-13" t@115

- CPU usage is increased
- HAProxy's session rate (there is an HAProxy in front of the web application) is dramatically increased.

Corresponding thread dump is attached.



--
Sergey Trofimov
threaddump-1370201412791.tdump

David Pollak

Jun 2, 2013, 4:08:04 PM6/2/13
to liftweb
Yes. This is a real deadlock (the last thread dump you sent was not).

I've fixed this deadlock and pushed the code to master. It'll take an hour+ to build and be in maven central.

Diego Medina

Jun 2, 2013, 4:17:47 PM6/2/13
to Lift
> I've fixed this deadlock and pushed the code to master. It'll take an hour+ to build and be in maven central.



@Sergey: You can see when the build is finished by going here

And about getting this fix into a final version of Lift: once we are sure the deadlock is fixed, we'll release Lift 2.5.1 to include it (as we just released the final version of Lift 2.5), so you won't have to be on a snapshot version for too long.

Thanks

  Diego



--
Diego Medina
Lift/Scala Developer
di...@fmpwizard.com

Sergey Trofimov

Jun 3, 2013, 1:52:26 AM6/3/13
to lif...@googlegroups.com
Application is updated.

Questions about “the last thread dump you sent was not”:
How can that situation be correctly interpreted? Is it a problem, or is it a normal situation? If it's a problem, what is a possible cause, and how should it be managed?


--
Sergey Trofimov

David Pollak

Jun 3, 2013, 10:10:27 AM6/3/13
to liftweb
A deadlock is where two threads mutually attempt to access interlocking locks:

T1 lock A
T2 lock B
T1 lock B
T2 lock A

If lots of threads attempt to access lock A, that's not a deadlock.

In the first thread-dump, lots of threads were blocking on one lock. This is not a deadlock... this is what locks are supposed to do. They serialize the computation.

In the event of the first logs... all the threads were waiting for an O(n) traversal through the function table. The traversal through the function table should not take that long unless you have tens of thousands of functions associated with the session... in which case there's likely something that can be done to reduce the number of functions associated with the session.
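
The two situations can also be told apart from inside the JVM: the thread MXBean reports only true lock cycles. A minimal sketch using the standard java.lang.management API; threads that are merely queued on one busy lock will not show up here:

import java.lang.management.ManagementFactory

object DeadlockCheck extends App {
  val tmx = ManagementFactory.getThreadMXBean
  // Returns null when no cycle of mutually-blocked threads exists
  // (on very old JVMs use findMonitorDeadlockedThreads instead).
  Option(tmx.findDeadlockedThreads) match {
    case None =>
      println("No deadlock: BLOCKED threads are just serialized on busy locks")
    case Some(ids) =>
      tmx.getThreadInfo(ids, true, true).foreach { info =>
        println(s"${info.getThreadName} waits for ${info.getLockName}" +
          s" held by ${info.getLockOwnerName}")
      }
  }
}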

Sergey Trofimov

Jun 3, 2013, 2:22:31 PM6/3/13
to lif...@googlegroups.com
> all the threads were waiting for an O(n) traversal through the function table.

Ah, I now see.

Well, as I remember, all my previous cases were like this (the last one, with the deadlock, was unusual).
The problem is that this lock is held for so long that only an application restart can resolve the issue.
Are there any hints on how to find the reason for such a long computation?



--
Sergey Trofimov

David Pollak

Jun 3, 2013, 2:33:52 PM6/3/13
to liftweb
On Mon, Jun 3, 2013 at 11:22 AM, Sergey Trofimov <sergey....@gmail.com> wrote:
> The problem is that this lock is held for so long that only an application restart can resolve the issue.
> Are there any hints on how to find the reason for such a long computation?

There are hundreds of thousands of function objects in the function table. You may be swapping, which means that a walk through the function table is very slow because of paging stuff in and out of memory.

Sergey Trofimov

Jun 10, 2013, 3:04:05 AM6/10/13
to lif...@googlegroups.com
No more deadlocks have occurred since the update.

--
Sergey


Geir Harald Hansen

Jun 12, 2013, 3:57:34 AM6/12/13
to lif...@googlegroups.com
Same here. 2.6-SNAPSHOT looking nice and stable. :)

Regards,
Geir H. Hansen

On 10.06.2013 09:04, Sergey Trofimov wrote:
> No more deadlocks occurred since update.

David Pollak

Jun 12, 2013, 8:10:18 AM6/12/13
to liftweb
Yay.

So... note to self... remove the Scala Maps from Lift over time... :-(


On Wed, Jun 12, 2013 at 3:57 AM, Geir Harald Hansen <dr.h...@gmail.com> wrote:
> Same here. 2.6-SNAPSHOT looking nice and stable. :)