Issue with Lift state post jetty-stop or jetty hot redeploy (Lift 2.4-M3, sbt 0.10.1)


James Ousby

Aug 19, 2011, 8:52:25 AM
to Lift
Hi,

My primary issue is that the hot-redeploy-on-change feature of jetty-run
causes Lift to throw an exception. Doing a jetty-stop and then jetty-run
again results in the same exception.

Details:

Scala: 2.9.0-1
Lift: 2.4-M3
sbt: 0.10.1

I've tested this issue on both Ubuntu 64-bit and Windows 7 64-bit.
Example project demonstrating the issue: https://github.com/jousby/squerylrecord_redeploy_issue

In my Lift app I was playing around with SquerylRecord and thought
it might be the main culprit here. However, even when I comment out
everything to do with SquerylRecord, things still behave strangely
after a jetty-stop.

My exact error when I have the SquerylRecord code in:

$ sbt
[info] Set current project to default-987a38 (in build file:/usr/dev/scala/projects/personal/portfolio-manager/project/plugins/)

> jetty-run
2011-08-19 21:32:53.961:INFO::Logging to STDERR via org.mortbay.log.StdErrLog
-- table declarations :
create table Trade (
    id bigint not null primary key auto_increment,
    tradeType int not null,
    tradeDate timestamp not null
);

> jetty-stop
19/08 21:34:13.599 DEBUG n.l.http.LiftServlet - Destroyed Lift handler.

> jetty-run
2011-08-19 21:38:46.264:INFO::Logging to STDERR via org.mortbay.log.StdErrLog
19/08 21:38:53.650 ERROR n.l.h.p.HTTPProvider - Failed to Boot! Your application may not run properly
java.sql.SQLException: No suitable driver found for jdbc:h2:mem:potfolio-manager;DB_CLOSE_DELAY=-1
  at java.sql.DriverManager.getConnection(DriverManager.java:640) ~[na:1.6.0_22]
  at java.sql.DriverManager.getConnection(DriverManager.java:200) ~[na:1.6.0_22]
  at bootstrap.liftweb.Boot$$anonfun$boot$3.apply(Boot.scala:28) ~[classes/:na]
  at bootstrap.liftweb.Boot$$anonfun$boot$3.apply(Boot.scala:27) ~[classes/:na]
  at org.squeryl.SessionFactory$.newSession(Session.scala:95) ~[squeryl_2.9.0-1-0.9.4.jar:na]
  at org.squeryl.dsl.QueryDsl$class.transaction(QueryDsl.scala:64) ~[squeryl_2.9.0-1-0.9.4.jar:na]
  at net.liftweb.squerylrecord.RecordTypeMode$.transaction(RecordTypeMode.scala:34) ~[lift-squeryl-record_2.9.0-1-2.4-M3.jar:2.4-M3]
  at bootstrap.liftweb.Boot.boot(Boot.scala:35) ~[classes/:na]



Things I've tried:

- Running VisualVM over the sbt session shows that after a jetty-stop,
rather than threads being cleaned up, lots of new threads actually
start up. In particular, despite the log message saying 'Destroyed
Lift handler.', a new thread called LiftDispatcher starts up, along
with an ever-increasing number of threads called 'pool-x-thread-x'
(x is a number).

- I've tried the following line to shut down any Squeryl state that
might be lying around after jetty-stop:

LiftRules.unloadHooks.append(() => Session.currentSession.close)

- I've tried commenting out all the SquerylRecord code, which stops
this exception, but the weird behaviour with threads starting up after
jetty-stop remains.

- Turning off hot redeploy (jetty scan dirs := Nil) and using JRebel
instead. Same issue.


What I suspect:

- I've done something dumb in my Lift application setup, so it is not
cleaning things up after shutdown/restart.
- The sbt web plugin is not being aggressive enough about killing
threads when jetty-stop is called?

Any thoughts on how to resolve this would be much appreciated.

Regards,

James.


David Pollak

Aug 19, 2011, 12:42:20 PM
to lif...@googlegroups.com
I suspect the issue is with DB_CLOSE_DELAY.  See http://www.h2database.com/html/features.html

What I think is happening is that the first time you run Jetty, the DB is created.  With DB_CLOSE_DELAY=-1, it means that the DB is not closed when your Lift app exits because the JVM is still running.  When you run the app the second time, the DB cannot be opened because it's already open and you get the exception.
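
If that's the cause, one option is to close the in-memory database explicitly when Lift unloads. A minimal sketch, assuming the JDBC URL from the stack trace above and H2's SHUTDOWN statement (an illustration, not code from the project):

import java.sql.DriverManager
import net.liftweb.http.LiftRules

// Close the in-memory H2 database when the Lift app unloads, so the
// next jetty-run boots against a fresh database. SHUTDOWN is H2's
// statement for closing a database kept alive by DB_CLOSE_DELAY=-1.
LiftRules.unloadHooks.append { () =>
  val conn = DriverManager.getConnection(
    "jdbc:h2:mem:potfolio-manager;DB_CLOSE_DELAY=-1") // URL as in the trace above
  try conn.createStatement().execute("SHUTDOWN")
  finally conn.close()
}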







--
Lift, the simply functional web framework http://liftweb.net

James Ousby

Aug 20, 2011, 5:15:39 AM
to Lift
Thanks David.

So I tried moving the H2 database out into server mode, running in a
separate JVM, and connecting over TCP:

jdbc:h2:tcp://localhost//usr/dev/data/portfolio-manager

However, still no joy. Same exception.

The next thing I tried was making use of a connection pool:

import com.mchange.v2.c3p0.ComboPooledDataSource
import net.liftweb.http.LiftRules
import net.liftweb.squerylrecord.SquerylRecord
import net.liftweb.util.Props
import org.squeryl.Session
import org.squeryl.adapters.H2Adapter

// initialise database
val cpds = new ComboPooledDataSource
cpds.setDriverClass(Props.get("db.driver") openOr "org.h2.Driver")
cpds.setJdbcUrl(Props.get("db.url") openOr
  "jdbc:h2:tcp://localhost//usr/dev/data/portfolio-manager")
cpds.setUser(Props.get("db.user") openOr "sa")
cpds.setPassword(Props.get("db.password") openOr "")

SquerylRecord.initWithSquerylSession(
  Session.create(cpds.getConnection, new H2Adapter))

// Close the pool (and its connections) when Lift unloads.
LiftRules.unloadHooks.append(() => cpds.close())

And this worked :) So it looks like the culprit was the connection
created by DriverManager.getConnection in the previous code not being
shut down correctly.
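
For reference, a minimal sketch of what that fix would look like while keeping the original single-connection setup: hold onto the connection so an unload hook can close it (an illustration under the same assumptions and imports as above, not the code from the repo):

val conn = java.sql.DriverManager.getConnection(
  Props.get("db.url") openOr "jdbc:h2:tcp://localhost//usr/dev/data/portfolio-manager")

SquerylRecord.initWithSquerylSession(Session.create(conn, new H2Adapter))

// Close the connection on unload instead of leaking it across redeploys.
LiftRules.unloadHooks.append(() => conn.close())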

However....

So I'm now able to get hot redeploy on change working (via 'jetty-run'
then '~prepare-webapp'), but I'm still seeing the unusual behaviour
around thread growth. By the time I've redeployed 10 times there are
around 370 threads running. Happy to raise a bug for this if needed.

I've updated the GitHub example project with the connection pool change
and moved back to an in-memory database to remove any environment
dependencies.

Regards,

James.




David Pollak

Aug 20, 2011, 5:21:44 AM
to lif...@googlegroups.com

This is not a Lift-related issue, and a ticket is not going to get it to change.

I suggest using JRebel so you don't have to restart Jetty over and over.

Naftoli Gugenheim

Aug 20, 2011, 11:56:14 PM
to lif...@googlegroups.com
Why are you using DB_CLOSE_DELAY=-1?


James Ousby

Aug 21, 2011, 9:09:11 AM
to Lift
Hi David,

I have to respectfully disagree. To me it looks like the destroy
method in LiftServlet.scala is not doing what is intended.

def destroy = {
  try {
    LiftRules.ending = true

    tryo {
      SessionMaster.shutDownAllSessions()
    }

    val cur = millis

    // wait 10 seconds or until the request count is zero
    while (LiftRules.reqCnt.get > 0 && (millis - cur) < 10000L) {
      Thread.sleep(20)
    }

    tryo {
      Schedule.shutdown
    }
    tryo {
      LAScheduler.shutdown()
    }

    LiftRules.runUnloadHooks()
    logger.debug("Destroyed Lift handler.")
    // super.destroy
  } catch {
    case e => logger.error("Destruction failure", e)
  }
}


The two lines here that shut down the thread pools:

tryo {
  Schedule.shutdown
}
tryo {
  LAScheduler.shutdown()
}

When I call jetty-stop, what I observe is that the call to
LAScheduler.shutdown does nothing, because the LAScheduler object
hasn't been lazily created at that point. But some time after the call
to LiftServlet.destroy has finished, something goes off and triggers
the creation of the LAScheduler object, which then spawns a thread
pool. Rinse and repeat for 10 redeploys and you have 370 threads.

I actually tried moving the call to LAScheduler.shutdown() to the very
end of the destroy method, but this didn't seem to help. So I'm not
sure how it's happening, but in my opinion it looks like an LAScheduler
thread pool is created after the LiftServlet.destroy method is called
and is never terminated.
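
To illustrate the ordering problem I mean, here is a minimal sketch of a lazily created pool (hypothetical names; a stand-in for the pattern, not Lift's actual LAScheduler code):

import java.util.concurrent.{ExecutorService, Executors}

object LazyPool {
  // Created on first use, not at startup (kept minimal, not thread-safe).
  @volatile private var pool: Option[ExecutorService] = None

  def execute(f: () => Unit): Unit = {
    // First use *after* shutdown() quietly creates a fresh pool.
    val p = pool.getOrElse {
      val created = Executors.newCachedThreadPool()
      pool = Some(created)
      created
    }
    p.execute(new Runnable { def run() = f() })
  }

  // A no-op if nothing has touched the pool yet, which is exactly the
  // destroy-time situation described above.
  def shutdown(): Unit = {
    pool.foreach(_.shutdownNow())
    pool = None
  }
}

If anything calls execute after destroy has run, a fresh pool appears with nothing left to shut it down; do that once per redeploy and the thread count climbs the way VisualVM shows.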

This isn't a showstopper for me, obviously, and I'm not asking for it
to be fixed. I'm just trying to clarify whether this is a real issue
that you might want to add to your defect list.




David Pollak

Aug 26, 2011, 6:50:04 PM
to lif...@googlegroups.com
It may be that LAPinger was not shut down.  I've updated the code to explicitly shut down LAPinger.
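
Presumably the change amounts to one more guarded shutdown in destroy, something like the following (my guess at the shape, based on the existing tryo blocks; see the actual commit for the real code):

tryo {
  LAPinger.shutdown
}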