Round 5 ETA: May 17


Brian Hauer

May 14, 2013, 6:44:44 PM
to
Hi everyone,

We are presently targeting Thursday, May 16 for posting Round 5 results.  That means that the test runs will likely start on or around Tuesday, May 14.

It is possible (although not certain, given time constraints) that Round 5 will include the first implementations of Test 5, which is a database update test.  Read the preliminary specification for Test 5 at the following GitHub issue: https://github.com/TechEmpower/FrameworkBenchmarks/issues/263

Brian Hauer

May 9, 2013, 2:33:30 PM
to framework-...@googlegroups.com
Round 5 is not yet available, but the results web site has been updated in preparation for Round 5.

Changes:
  1. Rounds 1 and 2 are no longer included on the results page.  You can still see them on our blog.  The results file format for Rounds 1 and 2 would have required manual work to migrate to our latest format, so we opted to simply remove them.
  2. Fixed the incorrect labeling of Scalatra as Laravel in the Round 3 data.
  3. Changes to filters are now reflected in the URL so that you can bookmark and exchange custom filters with others.  This is still a work in progress and may have a couple glitches.
  4. The "Source code" navigation tab (rightmost) now includes requirements for each test.  We expect that several implementations are not strictly compliant with these requirements presently, but we would like to move toward compliance as soon as possible.
  5. The requirements for Test #5, a database update test, are included on that same page.  The first Test #5 results will be in Round 5.  We've already received some implementations of Test #5 and invite others to do the same.  It's a fairly quick evolution of the multiple database query test that includes updating.
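The Test #5 flow described in item 5 can be sketched roughly as follows. This is a hypothetical Python sketch, not the benchmark's actual implementation: the function names are invented, and an in-memory dict stands in for the MySQL "World" table that the real tests query.

```python
import random

WORLD_ROWS = 10000  # the benchmark's World table holds 10,000 rows

def make_world():
    """Build a stand-in World table: id -> randomNumber."""
    return {i: random.randint(1, WORLD_ROWS) for i in range(1, WORLD_ROWS + 1)}

def update_test(world, query_count):
    """Like the multiple-query test, but each row that is read is also
    assigned a new randomNumber and written back (the update step)."""
    results = []
    for _ in range(query_count):
        row_id = random.randint(1, WORLD_ROWS)
        row = {"id": row_id, "randomNumber": world[row_id]}  # read (SELECT)
        row["randomNumber"] = random.randint(1, WORLD_ROWS)  # new value
        world[row_id] = row["randomNumber"]                  # write back (UPDATE)
        results.append(row)
    return results
```

Against a real database, each loop iteration would issue one SELECT and one UPDATE, which is why this test is write-bound and far more sensitive to the database host's disk speed than the read-only tests.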

Mathias

May 9, 2013, 5:18:59 PM
to Brian Hauer, Johannes Rudolph <johannes@spray.io>, framework-...@googlegroups.com
Brian,

thanks for the ETA of Round 5 and the updated requirements!

Since we'd like to get spray into Test 1 of Round 5:
What would be the "safe" deadline for a pull-request?

Cheers,
Mathias

---
mat...@spray.io
http://spray.io


Brian Hauer

May 9, 2013, 9:19:53 PM
to framework-...@googlegroups.com, Brian Hauer, Johannes Rudolph <johannes@spray.io>
Pat might have a stronger voice here because he actually processes pull requests and runs the tests.  But I would say it should be safe to submit the PR by Monday, May 13.  Pat will want to kick off the tests on Tuesday morning.

Mathias

May 10, 2013, 3:43:02 AM
to Brian Hauer, framework-...@googlegroups.com, Johannes Rudolph <johannes@spray.io>
Thanks, Brian, for this info.
We are on UTC+2, so 9 hours ahead of you, and hereby commit to having the pull-request in by noon PDT on Monday.
Hopefully this will allow for sufficient time for integration on your side.

Thanks again for your effort with this great benchmark initiative!

Cheers,
Mathias

---
mat...@spray.io
http://spray.io

Brian Hauer

May 10, 2013, 10:17:40 AM
to framework-...@googlegroups.com, Brian Hauer, Johannes Rudolph <johannes@spray.io>
Sounds great!  Thank you for your help and contributions.

Incidentally, can I get your opinion on whether you find any value in an "official" flag as I described here: https://groups.google.com/d/msg/framework-benchmarks/dtmYuBgQA5U/8Ra4jFS7v6wJ

As someone who works on a framework, do you think that sort of indicator would be valuable to your users or not?

Mathias

May 10, 2013, 5:32:06 PM
to Brian Hauer, framework-...@googlegroups.com, Johannes Rudolph <johannes@spray.io>
Brian,

thanks for pointing us to the discussion around the "official" flag.
I understand the motivation of wanting to increase the credibility of the results by adding a "stamp of approval" from the framework authors.

The main point for me here would be to make it very clear what this "official" flag actually means.
My current understanding for it would be this (very rough wording proposal):

"The benchmark setup for this framework is free from obvious and significant mistakes with regard to architecture, wiring, and configuration.
Therefore the benchmark results represent a reasonably honest assessment of how the framework would perform in the respective benchmark scenario if deployed by an experienced user. Additionally, the results have passed a "sanity check" by the framework authors, i.e. the framework's ranking position in the competitive landscape doesn't really raise any eyebrows."

I would not go so far as to say that the framework's benchmark setup is what the authors would recommend as a good application architecture/configuration for the general case. This benchmark is a purely performance-based comparison. As such, it is to be expected that the benchmark setup for a framework is geared toward the single goal of maximizing performance, which can sometimes be a tradeoff against other dimensions, like code size, configuration simplicity, or newbie-friendliness.

Does this make sense?

Cheers,
Mathias

---
mat...@spray.io
http://spray.io

Brian Hauer

May 10, 2013, 6:25:59 PM
to framework-...@googlegroups.com, Brian Hauer, Johannes Rudolph <johannes@spray.io>
Hi Mathias,

That is precisely what I had in mind with an "official" indicator.  Namely, that the creator, maintainer, or a credible significant user of the framework in question has given the implementation a once-over and, in so doing, found no obvious problems.  No one would feel comfortable saying that the implementation is without a doubt perfect, but it might be reassuring to users of the framework in question that it has been reviewed by someone whose work with the framework gives their judgment weight.  It also might indicate to users that the implementation is an example of best practices.

The genesis of this is selfishness: we would rather receive useful feedback than feedback asking us if, for example, we're using all of the CPU cores in node.js.  If node.js were marked as "official," the reader would more likely assume that it's running with the cluster module enabled and then dig in further to yield more insightful criticism.

So far, the feedback on adding an official indicator has been inconclusive, so I'm going to hold off.  But I may revisit it later.

Thanks for your input and for putting the notion in your own words, words that I think summarize the concept very well.

Brian Hauer

May 14, 2013, 9:59:05 AM
to framework-...@googlegroups.com
We've purchased an SSD for hosting the database server in our i7-2600K physical hardware test for Round 5.  In all previous rounds, the i7 database server used a traditional hard drive.

With a traditional hard drive, the new database update test was, as you would expect, running into a write-speed wall.  Even with an SSD, we expect that there will be a strict performance ceiling created by the database server's need to log, write, and verify.  The ceiling should be considerably higher than it was with a traditional hard drive, but still lower than in the previous tests.

Using an SSD to host the database may also marginally affect the other database tests, but we expect the effect to be only superficial.  The previous tests exercised a data set intended to fit easily within MySQL's memory cache.

A

May 14, 2013, 11:59:27 AM
to framework-...@googlegroups.com
Looking forward to round 5

Thank you for doing this, very interesting stuff.

Please be sure to use the latest Golang, v1.1!

Brian Hauer

May 14, 2013, 6:43:03 PM
to framework-...@googlegroups.com
Hi A,

I believe updating to 1.1 is in our plan.

Brian Hauer

May 14, 2013, 6:45:24 PM
to
Brief update: We just received and set up the SSD today, so we are looking to kick off the test runs tomorrow.  That means data will likely not be available until Thursday afternoon, meaning in turn that we will probably post the data on Friday.

With that in mind, I'm revising the ETA to May 17.

Julien Schmidt

May 14, 2013, 8:25:20 PM
to framework-...@googlegroups.com
Maybe you should just stick to Go tip. There are no really noticeable changes in Go 1.1 since rc1 because of the implementation freeze.
But the brakes are off now. There are already a few performance-related changes in tip, and I expect more in the coming hours:
https://code.google.com/p/go/source/list

At the least, it would be nice if we could include Go tip in Round 6. Otherwise we can't see the results of our changes until the next minor release (presumably in a few months), and big changes have to wait until the next major release (presumably at least a year).

Brian Hauer

May 14, 2013, 9:26:14 PM
to framework-...@googlegroups.com
Hi Julien,

We'll use 1.1 in Round 5, since the 1.1 release just dropped and I suspect we'll get a lot of questions about which version of Go we're using ("You guys are using 1.1, right?").  It will be easiest for us to explain that we're using 1.1, the fresh stable release.  :)

But based on the conversation over on golang-nuts, starting with Round 6, I'd like to get a second instance of the Go tests run with go-tip for ongoing tracking of enhancements alongside the stable version.

Brian Hauer

May 16, 2013, 1:34:18 PM
to framework-...@googlegroups.com
A quick update:

We are running the Round 5 tests presently.  We ran into a server configuration hiccup with the run that started yesterday and needed to restart.  It will be a bit of a rush, but we still think we can post the results tomorrow, May 17.

Brian Hauer

May 17, 2013, 10:08:43 AM
to framework-...@googlegroups.com
It appears the EC2 run failed with a full EBS volume at test #86 of 117.  Several more gigabytes of disk space have been consumed by recent framework additions, and we had not anticipated needing to grow the EBS volume.  We've done so now and resumed the run.

We still believe we can post the results today, but that will be later in the day.

Skamander

May 17, 2013, 1:32:23 PM
to framework-...@googlegroups.com
Whoa, the MySQL version of Play is faster than the Mongo version thanks to the SSD in Round 5. Interesting.

Claudiu Clau

May 17, 2013, 1:45:25 PM
to framework-...@googlegroups.com
When will Round 5 be available?
Message has been deleted

Skamander

May 17, 2013, 1:55:32 PM
to framework-...@googlegroups.com
You can look at the raw results like I did until the TechEmpower guys announce the rest: https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/results/i7/20130516070712

Brian Hauer

May 17, 2013, 2:02:20 PM
to framework-...@googlegroups.com
Hi Skamander,

Note that the raw data we captured this morning (even on the i7!) had several problems.  Entirely coincidentally, the hard drive hosting the application server on the i7 also reached capacity during the run.  We are presently re-running many tests after having cleared out some large log files to make room.  One of our design goals is to run with logging disabled, even though that is a known concession versus a true production environment.  Clearly at least one log file slipped through and filled up the drive.

Until we have the correct data, we're not yet posting or announcing the results.

On Friday, May 17, 2013 10:54:49 AM UTC-7, Skamander wrote:
You can look at the raw results like I did until the TechEmpower guys announce the rest: https://groups.google.com/forum/?fromgroups=#!topic/play-framework/gyOqzJ_IEXQ
Message has been deleted

Skamander

May 17, 2013, 2:52:28 PM
to framework-...@googlegroups.com
Ah, thanks for pointing that out. I thought only ec2 was affected.