Haven't read it in detail, so I don't know how he warmed up the JVM etc., but this:
"No sessions/cookies nor HTTP request managers were set in JMeter, so
each HTTP request was performed independently of the other requests."
basically means he starts a new session on each request, probably not
what you want to test
/Jeppe
On Mon, May 30, 2011 at 12:59 PM, Wei-Cheng Sun <sunwe...@gmail.com> wrote:
> Hi all,
>
> I found a new benchmark was posted on May 28th.
> The author tested lift 2.3 with scala 2.8.1 as well as other language/
> frameworks.
> Because I didn't see his full source code, I want to ask the
> following questions:
> Do you have any comments about his results? Is this testing credible?
> If not, what is a better method for benchmarking?
>
> http://www.jtict.com/blog/rails-wicket-grails-play-lift-jsp/
  "tr" #> products.map{ p =>
    (".productname *" #> <div class="product"><img src={p.name + ".jpg"} /><span class="productname">{p.name}</span>, <span class="price">${p.price}</span></div> &
     ".categories *" #> p.categories.map(_.name).mkString(", "))
  }
  class ProductsSnippet {
    def showProducts: NodeSeq = {
      val products = Service.products
      products.map(p => <tr><td>
        <div class="product"><img src={p.name + ".jpg"} /><span class="productname">{p.name}</span>, <span class="price">${p.price}</span></div></td><td>
        {p.categories.map(_.name).mkString(", ")}</td></tr>)
    }
  }
--
You received this message because you are subscribed to the Google Groups "Lift" group.
To post to this group, send email to lif...@googlegroups.com.
To unsubscribe from this group, send email to liftweb+u...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/liftweb?hl=en.
Thanks for the reply.
For a newcomer, it would be nice to have a set of performance rules,
best practices, and a development guide. Is there such a resource around?
Play framework committers are already marketing their results on
Twitter; there should be some response to that benchmark.
On Mon, May 30, 2011 at 7:17 PM, Wei-Cheng Sun <sunwe...@gmail.com> wrote:
> thanks for reply,
> for a newcomer, it will be nice to have a set of performance rules,
> best practices and development guide. is there such resource around?
The best practices are simple:
- Use a 64 bit JVM
- Allocate enough heap (I like 2-3 GB)
- Make sure the JVM has warmed up
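To make "warmed up" concrete, here's a minimal sketch of my own (not from the benchmark; the object name, workload, and iteration counts are all arbitrary) of driving a code path through enough iterations that HotSpot has JIT-compiled it before you start timing:

```scala
object WarmupSketch {
  // Stand-in for a request handler: the work we want to measure.
  def work(n: Int): Int = (1 to n).map(i => i * i).sum

  def timeNanos(body: => Unit): Long = {
    val t0 = System.nanoTime()
    body
    System.nanoTime() - t0
  }

  def main(args: Array[String]): Unit = {
    // Warm-up phase: run the hot path many times so HotSpot gathers
    // profiling data and JIT-compiles it; these iterations are discarded.
    var i = 0
    while (i < 20000) { work(100); i += 1 }
    // Measure only after warm-up; earlier calls ran interpreted or
    // only partially compiled and would skew the numbers.
    val t = timeNanos { (1 to 1000).foreach(_ => work(100)) }
    println(s"avg ns per warmed call: ${t / 1000}")
  }
}
```

A benchmark that fires load at a freshly started server is mostly measuring the interpreted/compiling phase, which is the point being made in this thread.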
On 05/31/2011 07:35 PM, David Pollak wrote:
> On Mon, May 30, 2011 at 7:17 PM, Wei-Cheng Sun <sunwe...@gmail.com> wrote:
>> thanks for reply,
>> for a newcomer, it will be nice to have a set of performance rules,
>> best practices and development guide. is there such resource around?
> The best practices are simple:
> - Use a 64 bit JVM
> - Allocate enough heap (I like 2-3 GB)
> - Make sure the JVM has warmed up
Just curious: JVM "warm-up" has several times been mentioned as a key issue when discussing Lift performance and Lift vs. framework-X. Is there any reason to believe Lift is more sensitive to this than other JVM-based frameworks?
--
Andreas Joseph Krogh <and...@officenet.no>
Senior Software Developer / CTO
Public key: http://home.officenet.no/~andreak/public_key.asc
OfficeNet AS | Rosenholmveien 25 | 1414 Trollåsen | NORWAY | Org.nr: NO 981 479 076
Tlf: +47 24 15 38 90 | Fax: +47 24 15 38 91 | Mobile: +47 909 56 963
"The most difficult thing in the world is to know how to do a thing and to watch somebody else doing it wrong, without comment."
On Tue, May 31, 2011 at 10:58 AM, Andreas Joseph Krogh <andrea...@gmail.com> wrote:
> On 05/31/2011 07:35 PM, David Pollak wrote:
>> On Mon, May 30, 2011 at 7:17 PM, Wei-Cheng Sun <sunwe...@gmail.com> wrote:
>>> thanks for reply,
>>> for a newcomer, it will be nice to have a set of performance rules,
>>> best practices and development guide. is there such resource around?
>> The best practices are simple:
>> - Use a 64 bit JVM
>> - Allocate enough heap (I like 2-3 GB)
>> - Make sure the JVM has warmed up
> Just curious: JVM "warm-up" has several times been mentioned as a key issue when discussing Lift performance and Lift vs. framework-X. Is there any reason to believe Lift is more sensitive to this than other JVM-based frameworks?
HotSpot (especially 64 bit HotSpot running in server mode) does an amazing job of inlining code, especially monomorphic code. It turns out that Scala's small functions (that get turned into anonymous inner classes) slot perfectly into HotSpot's inlining methodology. But it takes a lot of calls to a given method to let HotSpot gather the statistics it needs to inline the code correctly.
So it's my opinion, based on my experience (no hard facts here), that it takes a little more time for HotSpot to optimize Scala code because of the code that scalac generates; but once optimized, Scala code performs very, very well.
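To illustrate the point about small functions and monomorphic call sites (a sketch of my own, not from David's post; the names are made up), a Scala function literal compiles to roughly an anonymous class with an apply method, and a call site that only ever dispatches to one such class is monomorphic, which is exactly what HotSpot inlines well once it has call statistics:

```scala
object InlineSketch {
  // scalac compiles this function literal to (roughly) an anonymous
  // class with an apply(Int): Int method.
  val double: Int => Int = x => x * 2

  // This call site only ever sees `double`'s class, so it stays
  // monomorphic; after HotSpot has gathered enough call statistics,
  // it can inline apply() right here, making the lambda nearly free.
  def sumMapped(f: Int => Int, n: Int): Int = {
    var acc = 0
    var i = 0
    while (i < n) { acc += f(i); i += 1 }
    acc
  }

  def main(args: Array[String]): Unit = {
    // Many thousands of calls are what gives the JIT its statistics.
    println(sumMapped(double, 10000))
  }
}
```

Which is why the first few thousand requests of a benchmark run look nothing like steady-state performance.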
> So, rather than focusing on a micro-benchmark, I focus on the actual Lift
> successes. Stuff like http://twitter.com/#!/rasmus/status/5929904263 (and
> for those of you who don't know who Rasmus is... Google him.) Stuff like
I remember the day I got my Novell Vibe account: the first thing I did
after signing up was try to break it using all kinds of XSS. It was very
puzzling how it used "weird" form field names that kept changing
on page reload (this was before I met Lift).
So I second what David said: not having to even think about security
is very refreshing.
my 2 cents
/Diego
--
Diego Medina
Web Developer
http://www.fmpwizard.com
Hi,
sorry for resurrecting this, but I just noticed it thanks to a
discussion on Stack Overflow. I just want to add one piece of
information and ask a question. As a disclaimer, I'm a Play developer,
but I have also used Lift (although some time ago), and my intention is
not to start a Play vs. Lift flame war but to get more information and
knowledge in the area.
The piece of information is about the security issues. Per the
documentation http://www.playframework.org/documentation/1.2.2/security
I believe that since 1.0.2 Play provides the major security features you
may need. It may not prevent some security issues caused by the
developer, but I believe that is true for all frameworks. I would
like to know exactly what extra security features Lift has
implemented. It would be good to add these features, if any, to all
frameworks, as this would make the web safer.
On the question: it's related to the evaluation done here of the
performance test. I'm leaving aside the fact that this is a micro-test
and that the impact on a real application may not be as clear as shown
here. As far as I understood (and please correct me if I'm wrong), the
test is not considered valid because it doesn't respect the warming of
the JVM. But I would say that all the frameworks subjected to the test
are in the same condition; that is, the warming issue is present in all
of them. Why, then, do you invalidate the test because of this?
I acknowledge that the final results may be somewhat different after
warming, but given that the conditions are identical for all the tests
(or at least we are assuming so), I don't understand the issue.
Please note that I'm aware of the impact of warming up a JVM on the
initial run vs. subsequent runs; I'm not arguing against that. I'm
asking why, given that the warming happens in all of them, this is
considered a major issue. All servers will suffer the same issue.
Thanks
On Jun 12, 12:23 pm, Ben Phelan <bem...@gmail.com> wrote:
> Yeah, that is a pretty nice thing. Not having to religiously double-check
> all of your team's code for a hundred subtle security vulnerabilities is
> nice too.