TechEmpower Round 10 [was Re: Techempower Round 3]

Rich Dougherty

Apr 26, 2015, 4:22:06 PM
to play-framework, Donovan Muller, play-fram...@googlegroups.com
(Changing the subject to Round 10.)

I've been maintaining the Play 2 TechEmpower tests on the side along with Donovan Muller and some other contributors. I'm not quite sure what happened in Round 10. We can't tell because stderr isn't captured.

The Play 2 tests have been working up to now. They passed the TechEmpower CI tests that run every day and they passed the trial rounds of testing, but then they failed for some reason in the final test.

Since the Play 2 test code didn't change I suspect there's some variation in the testing environment that accounts for this, but I'm not sure what it is. Once I've finished some 2.4 work (docs to support the recent RC) I should have some time to look at the TechEmpower results and engage with TechEmpower to fix the issue.

Or if anyone else wants to investigate, I wouldn't mind. :)

Aside from the TechEmpower tests, we do run our own automated performance regression tests, and performance is stable or improving with each release. (Ignore any recent regressions you can see in the graphs—we'll fix them!)

– Rich

On Mon, Apr 27, 2015 at 7:48 AM, alex s <iwt...@gmail.com> wrote:
On Sunday, April 26, 2015 at 17:09:27 UTC+3, Slim Slam wrote:
Why is Play at the bottom of the results, and why does it say "Did not complete"?

https://github.com/TechEmpower/TFB-Round-10/search?utf8=%E2%9C%93&q=port+not+available
https://github.com/TechEmpower/TFB-Round-10/search?utf8=%E2%9C%93&q=bind

I think Play falls into the same category, but it looks like the benchmark runner doesn't capture stderr output (!), so there is only an 'Oops, cannot start the server' log entry.

Note:
1. A bunch of failed attempts (dropwizard, unfiltered, etc.) are for some reason absent from the TechEmpower site; that's really unfair to the frameworks at the bottom of the results table.
2. The jetty-servlet sample was somehow 'fixed' between preview4 and preview5, even though it did not receive any updates in that period.

So, there is a general stability problem with the TechEmpower benchmark, and the benchmark maintainers seem somewhat in denial about it. They obviously know about the issue (https://github.com/TechEmpower/FrameworkBenchmarks/commit/04a6c14005d6bc501415de42cd2838f5c670c357), but that didn't stop them from releasing Round 10.

--
Rich Dougherty
Engineer, Typesafe, Inc

Rich Dougherty

Apr 27, 2015, 8:36:17 PM
to play-fr...@googlegroups.com, play-fram...@googlegroups.com, Donovan Muller
On Mon, Apr 27, 2015 at 10:08 PM, virtualeyes <sit...@gmail.com> wrote:
Is this the issue, or is there something else going on on TE's end?
 
I don't think that will help. Explicit address/port configuration won't change Play's behaviour unless we bind to a non-default address/port. However, if Play is failing on the default port, that most likely means another, unrelated test is still running on that port and needs to be cleaned up. This is what @bhauer suggests.

If another test is still running then we don't actually want to start Play on another address/port, because then we'd have two tests running at the same time and we might get incorrect results. If another test is still running then I think it's better for Play to fail!
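
If anyone wants to check that theory on a benchmark machine, a quick test is to try binding Play's default port before a run. A minimal sketch (the PortCheck object and the plain ServerSocket check are just for illustration, assuming Play's default port 9000; this isn't part of the benchmark scripts):

  import java.net.{InetSocketAddress, ServerSocket}
  import scala.util.Try

  object PortCheck {
    // True if we can bind the port ourselves, i.e. nothing else is holding it.
    def isFree(port: Int): Boolean =
      Try {
        val socket = new ServerSocket()
        try socket.bind(new InetSocketAddress(port)) finally socket.close()
      }.isSuccess

    def main(args: Array[String]): Unit = {
      val port = 9000 // Play's default HTTP port
      if (isFree(port))
        println(s"Port $port is free, Play should be able to bind it.")
      else
        println(s"Port $port is already taken, most likely a previous test is still running.")
    }
  }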

– Rich

Rich Dougherty

Jul 22, 2015, 5:51:57 PM
to play-framework, Donovan Muller, play-fram...@googlegroups.com
FYI, the issues with the TechEmpower tests should be fixed now. The problem was that another test installed HHVM, and HHVM held onto port 9000, which stopped the Play tests from starting.

– Rich

benmccann

Aug 6, 2015, 9:01:47 PM
to Play framework dev, play-fr...@googlegroups.com, donovan...@gmail.com
Looks like we took a 20% performance hit when we switched to Akka Streams for request body processing, and it never got addressed. Are there any plans to see if we can fix that performance regression?

Rich Dougherty

Aug 6, 2015, 9:55:12 PM
to benmccann, Play framework dev, play-framework, Donovan Muller
James is still working on this code. At the moment there's still a lot of conversion between iteratees and Akka Streams. Once that conversion is eliminated, performance should improve. We'll keep an eye on it. I don't think we should release Play 2.5 unless performance is on par with Play 2.4.

Cheers
Rich


benmccann

Sep 24, 2015, 8:09:24 PM
to Play framework dev, benjamin...@gmail.com, play-fr...@googlegroups.com, donovan...@gmail.com
Would you be able to extend the graphs on Prune so that they cover a longer time period? Right now they only go back to July 1.

Also, there are lots of graphs that are completely blank. Maybe we could update those as well?

-Ben

James Roper

Sep 24, 2015, 9:19:23 PM
to benmccann, Play framework dev, play-framework, donovan...@gmail.com
I missed this conversation.

I've done some profiling of Play (though not much). One of the biggest reasons why Play is currently slower than it used to be is that we have additional "context switches", that is, parts of the critical path of a request now switch threads because tasks are dispatched to other thread pools. This is due to an overzealous use of reactive streams: when we switched to Netty 4 with reactive streams, every request gained a switch from the Netty thread pool to the Akka thread pool that didn't exist before. I think there's also an additional switch in response processing that didn't use to exist, but I'm not sure whether we can avoid that.
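
To give a rough feel for what that hand-off costs, here's a tiny standalone sketch, not Play code, that runs the same trivial step either on the calling thread or via a separate thread pool (standing in for the dispatch from the Netty event loop to the Akka dispatcher). The object name and the step count are made up for illustration:

  import java.util.concurrent.Executors
  import scala.concurrent.duration._
  import scala.concurrent.{Await, ExecutionContext, Future}

  object ThreadHopSketch extends App {
    // Runs tasks on the calling thread: no hand-off between pools.
    val sameThread: ExecutionContext = new ExecutionContext {
      def execute(runnable: Runnable): Unit = runnable.run()
      def reportFailure(cause: Throwable): Unit = cause.printStackTrace()
    }

    // A separate pool, standing in for the dispatcher requests are handed off to.
    val pool = ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

    // Time how long `n` trivial steps take when each one goes through `ec`.
    def drain(label: String, ec: ExecutionContext, n: Int = 100000): Unit = {
      val start = System.nanoTime()
      var i = 0
      while (i < n) {
        Await.result(Future(i)(ec), 1.second)
        i += 1
      }
      println(f"$label: ${(System.nanoTime() - start) / 1e6}%.1f ms for $n steps")
    }

    drain("same thread", sameThread)
    drain("hop to another pool", pool)

    pool.shutdown()
  }

The absolute numbers don't matter; the point is that each extra hop adds scheduling latency to the critical path of every request.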

I also think there's an additional Akka message being sent when we consume the request body. Akka Streams has not yet been optimised in any way; version 1.1 may have some performance improvements that help this case. We'll need to profile to see exactly what's happening here. There are also improvements we can make on our side if this turns out to be a problem: we currently have an accumulator abstraction that is based on Akka Streams, and for cases where the body is being ignored we could short-circuit it so that Akka Streams is not used at all.
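
The short-circuit idea could look roughly like the sketch below. This is not Play's real accumulator API, just an illustration of the design under made-up names (BodyAccumulator, SinkAccumulator, DoneAccumulator): an accumulator that needs the body is backed by an Akka Streams Sink, while one that ignores the body completes with an already-known result and never materialises a stream.

  import akka.actor.ActorSystem
  import akka.stream.{ActorMaterializer, Materializer}
  import akka.stream.scaladsl.{Sink, Source}
  import akka.util.ByteString
  import scala.concurrent.duration._
  import scala.concurrent.{Await, Future}

  // Illustrative only -- not Play's play.api.libs.streams.Accumulator.
  sealed trait BodyAccumulator[A] {
    def run(body: Source[ByteString, _])(implicit mat: Materializer): Future[A]
  }

  // Backed by a real Akka Streams sink: the body is actually consumed.
  final case class SinkAccumulator[A](sink: Sink[ByteString, Future[A]]) extends BodyAccumulator[A] {
    def run(body: Source[ByteString, _])(implicit mat: Materializer): Future[A] =
      body.runWith(sink)
  }

  // Short-circuit: the result is already known, so no stream is materialised at all.
  final case class DoneAccumulator[A](result: A) extends BodyAccumulator[A] {
    def run(body: Source[ByteString, _])(implicit mat: Materializer): Future[A] =
      Future.successful(result) // the body source is never run
  }

  object AccumulatorSketch extends App {
    implicit val system: ActorSystem = ActorSystem("sketch")
    implicit val mat: Materializer = ActorMaterializer()

    val body = Source(List(ByteString("hello, "), ByteString("world")))

    // An action that needs the body pays for a full stream materialisation...
    val concatenate = SinkAccumulator(Sink.fold[ByteString, ByteString](ByteString.empty)(_ ++ _))
    println("consumed: " + Await.result(concatenate.run(body), 2.seconds).utf8String)

    // ...while an action that ignores it completes without touching the stream.
    val ignoreBody = DoneAccumulator("body ignored")
    println(Await.result(ignoreBody.run(body), 2.seconds))

    system.terminate()
  }

The point is that for actions that never look at the body, the per-request Akka Streams machinery could be skipped entirely.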

We will not release Play 2.5 without returning performance to at least what it was in Play 2.4.

Regards,

James

--
James Roper
Software Engineer

Typesafe – Build reactive apps!
Twitter: @jroper