Round 10 preview data available for sanity checks


Brian Hauer

Feb 24, 2015, 7:02:48 PM
to framework-...@googlegroups.com
In a project that has seen many delays, the delay for Round 10 sets a new high, and all I can do is apologize for the long wait and thank you for your enduring patience.  Many new frameworks have been contributed since Round 9; Round 10 will include 114 frameworks!  Equally important, the benchmark toolset has evolved considerably thanks to contributions from many people, most significantly Hamilton Turner.  The improvements greatly reduce the pain of setting up test environments and should aid future contributors.

We have sanity checked the preliminary results captured from the test environment at Peak Hosting on January 21 and believe they are suitable for your review.  Meanwhile, we've started another run today that should be ready for review later this week; it will incorporate any fixes that were submitted between January 21 and today.

As with previous rounds, we ask that you sanity check this preview data to identify errors that can be resolved prior to the Round 10 final run.  We aim to start a Round 10 final run on approximately March 16.  Note that if you are not a contributor to the project, we strongly recommend that you avoid basing any conclusions on this preview data since it is not for general consumption and may contain easily-fixed errors.

Round 10 preview data from the Peak Hosting environment:

http://www.techempower.com/benchmarks/previews/round10/

This will be revised as additional preview rounds conclude until the final is started, at which point the URL will redirect to the official results page.

Thanks again for your patience and we look forward to your feedback and corrections!

Brian Hauer

Feb 24, 2015, 7:03:49 PM
to framework-...@googlegroups.com
If you have reviewed the preview data, your eye may have been drawn to the database results achieved by a test implementation named "ulib-sqlite."  As the name implies, this implementation uses SQLite as its database rather than a traditional network-connected database such as MySQL, Postgres, or MongoDB.

I had originally been asked for my opinion on including SQLite and elected to be permissive.  However, it is clear that, by avoiding the expense of network communication, the SQLite tests significantly skew the rendered results.

It can be argued that avoiding network communication violates the spirit of the exercise and disqualifies the particular test implementation.  I had based my original permissive stance on deference to the infinite variation of architectures that can be employed when building a web application—who am I to disallow an architecture that I feel is not valid or sufficiently robust?  There is no unambiguous consensus on what is right and what is wrong when designing a web application.

That said, the database tests have historically been predicated on the value of measuring the framework's ability to efficiently communicate with an external database server.  Avoiding that work is approximately analogous to using in-memory caching, which we have already identified as expressly disallowed for database tests.

I would like to hear the community's opinion on this.  Here are the options I am considering:
  1. Change the default rendering to hide SQLite by default.
  2. Remove SQLite tests outright.  Consider re-introducing them when in-memory caching tests are specified and accepted.
  3. Leave SQLite tests and rendering as is.
For option #2, I would re-write the specification of the database tests to explicitly disallow implementations that use a local database—tests would be required to communicate to a database hosted by the server/VM with the database role.

Hamilton Turner

Feb 24, 2015, 9:19:53 PM
to framework-...@googlegroups.com
Glad to see the preview!

For sqlite, I lean towards option 1 -- allowing sqlite but segregating it.  Perhaps even enforce a more restrictive segregation, such as a toggle button in the filter panel, or a new categorization tag (perhaps "localdb:[None|Sqlite]") in the benchmark_config metadata, with the intent being that you're either visualizing "local databases" or "non-local databases".  Fundamentally, I see value in knowing how well sqlite-based tests perform: if my interest is running a small application on a tiny VPS for the lowest price possible, I would be interested to know what load range I can reasonably expect my setup to handle.  Also fundamentally, it's absolutely unfair to group it with the remote databases.  If framework communities are interested in creating sqlite-based benchmarks and comparing amongst themselves, TFB already has the business logic to support that quite easily.
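For concreteness, the proposed tag might look like the following in a test's benchmark_config entry.  The "localdb" key name follows the suggestion above; the surrounding keys are abbreviated from a typical entry, and the exact schema placement is an assumption, not existing toolset code:

```json
{
  "framework": "ulib",
  "tests": [{
    "sqlite": {
      "setup_file": "setup_sqlite",
      "database": "SQLite",
      "localdb": "Sqlite",
      "display_name": "ulib-sqlite"
    }
  }]
}
```

The results renderer could then filter on localdb by default, so local and network-attached databases are never charted together unless explicitly requested.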

My main qualifier is that TFB should be able to police the sqlite schema+data population before we allow further sqlite tests into the codebase. There has already been an "oops I built the schema/values slightly wrong" bug for a sqlite test, and I expect more if TFB isn't controlling sqlite in the same manner that we control the current database setups. I'm not sure what the "right" way is to get this control - perhaps we establish a file config/sqlite_setup.sql that frameworks are then required to use? I'd hope most sqlite ORMs have a method to directly load sql... Alternatively, perhaps we construct a small sqlite database and directly store it into the repository? We could then copy it inside installs once per test using sqlite, so that each framework test is guaranteed a fresh copy. Thoughts or suggestions? 
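The second idea (a canonical database file kept in the repository and copied per test) could be sketched like this.  The schema mirrors the TFB World table; the function names, seed, row count, and file locations are illustrative assumptions, not existing toolset code:

```python
import os
import random
import shutil
import sqlite3

def build_canonical_db(path, rows=10000, seed=1234):
    """Build the one shared SQLite file (schema mirrors the TFB World table)."""
    random.seed(seed)  # deterministic contents, so every run sees the same data
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE World (id INTEGER PRIMARY KEY, randomNumber INTEGER NOT NULL)"
    )
    conn.executemany(
        "INSERT INTO World (id, randomNumber) VALUES (?, ?)",
        ((i, random.randint(1, 10000)) for i in range(1, rows + 1)),
    )
    conn.commit()
    conn.close()

def fresh_copy(canonical_path, test_dir):
    """Hand each framework test a pristine copy, so one test's writes
    (e.g. the update test) can't leak into the next test's data."""
    dest = os.path.join(test_dir, "hello_world.db")
    shutil.copyfile(canonical_path, dest)
    return dest
```

The copy step would run once per sqlite-using test, which also sidesteps the "oops, I built the schema slightly wrong" class of bug, since no framework builds its own schema.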

Best, 
Hamilton

Erwan Ameil

Feb 24, 2015, 9:37:03 PM
to framework-...@googlegroups.com
We should consider a fourth option: having an additional 'Hardware' tab
(at the same level as 'i7', 'EC2' and 'Peak') that would test every
framework+DB on a single machine (for example, it could be named
'Peak-single' or something that conveys the same idea).

If we did have that tab, the SQLite tests would be disqualified from
'i7', 'EC2' and 'Peak' (for not being able to run the database
externally), but would qualify for 'Peak-single'.  We would be able to
compare fairly how [framework]+[DB] fares against [framework]+[SQLite],
because the other databases would no longer be at a disadvantage (in
terms of network round-trips).

On 25/02/15 00:03, Brian Hauer wrote:
> Here are the options I am considering:
>
> 1. Change the default rendering to hide SQLite by default.
> 2. Remove SQLite tests outright. Consider re-introducing them when
> in-memory caching tests are specified and accepted.
> 3. Leave SQLite tests and rendering as is.

Hamilton Turner

Feb 24, 2015, 9:49:02 PM
to framework-...@googlegroups.com
I'd avoid the naming scheme "single" -- I doubt you'd ever want to run the load-generation client on the same machine as the application server (it doesn't emulate any real-world scenario), so you will always have at least two servers, and that name could cause confusion.

I love the idea of utilizing the existing database tests, but just running the database on the same server as the 
application. FWIW that would be very simple to implement from a code standpoint, and would really populate the "local database" (aka "peak-single") category
of tests, instead of just having 2-3 tests implementing sqlite. 

To be clear, I think that for R10 we should just go with option 1 and hide sqlite by default, and then between R10-11 we can implement these other
solutions. 

Best, 
Hamilton

Ludovic Gasc

Feb 25, 2015, 2:21:09 AM
to Brian Hauer, framework-...@googlegroups.com

Hi,

I see aiohttp.web and API-Hour as filter options, but no results in the datasets.
Have I missed something?

Regards.

Ludovic Gasc (GMLudo)

--
You received this message because you are subscribed to the Google Groups "framework-benchmarks" group.
To unsubscribe from this group and stop receiving emails from it, send an email to framework-benchm...@googlegroups.com.
Visit this group at http://groups.google.com/group/framework-benchmarks.
For more options, visit https://groups.google.com/d/optout.

Stephen Samuel

Feb 25, 2015, 3:19:29 AM
to framework-...@googlegroups.com
Thanks for the preview. Are developers of a framework able to make further PRs where there might have been an error in the initial submission leading to poor performance?

Martin Grigorov

Feb 25, 2015, 3:52:19 AM
to Brian Hauer, framework-...@googlegroups.com
Hi,

Thanks for the run!

I'd vote for #2.
#1 would make ulib-sqlite compete only with itself.


Michael Hixson

Feb 25, 2015, 4:39:44 AM
to framework-...@googlegroups.com
I would also go for #2: remove SQLite tests. I don't see those
results providing value in this context.

-Michael

Andy

Feb 25, 2015, 8:14:01 AM
to framework-...@googlegroups.com
I vote for option #3.

I'm interested to see how much performance is given up by going from an embedded DB to a server DB. That would help greatly in architecture decisions.

I'd actually want to see more varieties of embedded DB in the benchmarks: LevelDB, LMDB, embedded MySQL, RocksDB, etc., to see how they compare.

Ludovic Gasc

Feb 25, 2015, 8:34:14 AM
to Brian Hauer, framework-...@googlegroups.com

On production systems with some load, you always have several servers to handle the load.
The database is used as central persistent storage.
With SQLite, how can you have the same setup?

From my point of view, SQLite in memory is a caching mechanism with SQL access.
Maybe it would be interesting to add a separate section for caching, where you could add several tools that increase performance, like Redis.

Ludovic Gasc (GMLudo)


Adam Chlipala

Feb 25, 2015, 8:48:56 AM
to Stephen Samuel, framework-...@googlegroups.com
On 02/25/2015 03:19 AM, Stephen Samuel wrote:
> Thanks for the preview. Are developers of a framework able to make further PRs where there might have been an error in the initial submission leading to poor performance?

I think that's exactly the idea of the preview round: participants get a
chance to submit error-fix PRs to go into the final run.

Adam Chlipala

Feb 25, 2015, 8:50:06 AM
to Ludovic Gasc, framework-...@googlegroups.com
On 02/25/2015 08:33 AM, Ludovic Gasc wrote:
>
> On production systems with some load, you've always several servers to
> handle charge.
> Database is used as central persistent storage.
>

Somewhat off-topic: I think the TFB results show pretty clearly that
most web apps in the real world don't need more than one server for
reasons of load scaling, considering how well a single server does in
these results.

Brian Hauer

Feb 25, 2015, 12:42:21 PM
to framework-...@googlegroups.com, teona...@gmail.com
Hi Ludovic,

API-Hour should be included in the second preview run (which is running presently).  It was not included in the first run, which was actually captured back on 2015-01-21.  However, the metadata for the results web site was grabbed very recently, which is why you see the names available as filter options.

Brian Hauer

Feb 25, 2015, 3:00:48 PM
to framework-...@googlegroups.com
Concerning SQLite tests, it appears the consensus opinion so far is to either hide them by default (option 1) or remove them outright (option 2).  Thank you for all of your thoughts!

My personal opinion is a slight preference for removing them outright based on my current opinion that they are approximately analogous to in-memory caching, which we have already established is not permissible in the database tests.

I recognize this is a reversal of my previous permissive opinion concerning alternative database implementations and it is admittedly largely based on how much the results are skewed.  In other words, I admit that had SQLite results been similar to MySQL and Postgres results, I would likely have remained permissive of SQLite—at least until such time that I was reminded that the spirit of the test was to exercise network communication to an external service and that SQLite tests avoid that work.

Valuing the implementation effort, I would like to retain the implementations until such time that we have a cache-enabled test type that fully expects requests to be fulfilled without requiring repeated network communication to the database server.

I have asked the contributor of the Ulib tests to weigh in as well in case his opinion differs.  Right now, I do not believe I will be persuaded toward leaving the tests as-is (option 3), but I nevertheless wanted to make an effort to hear other opinions before concluding this.

Brian Hauer

Feb 25, 2015, 3:09:23 PM
to framework-...@googlegroups.com
Hi Hamilton,

This is a very good point for us to bear in mind.  With each additional database platform, we have the additional burden of reaching a consensus among frameworks concerning how that database will be structured and tested.  More accurately, since the addition of database platforms typically arrives to us bundled inside a framework implementation, we should be prepared to iterate those implementations as necessary to organically reach a consensus as other implementations for that database platform are created.

Take Cassandra for example.  The first implementation that included Cassandra established the schema and general configuration.  As more Cassandra implementations appear, those may steer the consensus schema and configuration slightly.  We should expect this will happen and the contributor(s) of the initial implementation should understand some schema and configuration iteration may occur in the future as others add their opinions to the soup.

Similarly, assuming we retain the SQLite tests in some fashion (e.g., as an implementation of a future cache-permitting test), it would be nice to eventually reach a consensus on how the data should be structured.

And as Hamilton pointed out, a platform like SQLite may be quite difficult for our toolset to audit.  For the sake of time and effort, we initially rely on trust and the open source community to audit the code as they see fit.

Brian Hauer

Feb 25, 2015, 3:16:20 PM
to framework-...@googlegroups.com, s...@sksamuel.com
Hi Stephen,

Adam is right.  The intent of the preview is to solicit your review and sanity-checking before we finalize Round 10.  If you identify a problem and can produce a fix in the next few weeks, we'd like to receive a pull request!

Thanks!

Brian Hauer

Feb 25, 2015, 3:30:11 PM
to framework-...@googlegroups.com, gml...@gmail.com
Hi Adam,

The truth you have captured here was a significant impetus for the project.  A single server is indeed sufficient for many needs.

Prior to embarking on this project, we had anecdotally perceived that the selection of platform and framework could measurably impact the bottom line of small start-ups that are testing their business models with minimum viable products.  In many cases, such a start-up is not yet concerned with five-nines of uptime and as long as suitable performance could be achieved from a single server and the data could be soundly backed up, a single-server configuration enjoys simplified system architecture and minimized hosting costs.  We have seen many businesses scale to tens or hundreds of thousands of users with a single server (with backups, of course), allowing them to defer the architectural costs/complexity of a distributed application until they can get more funding or in some cases indefinitely.

That said, installing the database server on a separate VM or separate hardware is (I believe) the most common configuration for production web application deployments, and that is why the project assumes a network between the application and database.

Marko Asplund

Feb 25, 2015, 4:15:59 PM
to framework-...@googlegroups.com
Hi Brian,

Thanks for posting preview results. Great to see concrete progress on round 10!

I noticed the database tests were failing with "servlet3-cass" test implementation. I looked at the test run output, but all it reveals is that there was an internal server error. My guess is that this is because the test implementation is unable to talk to the Cassandra database for some reason, but with this information it's impossible to tell for certain.

Are there any other logs, such as the database server logs, available?
Am I allowed to turn on application logging for the preview period?

thanks,

marko

Marko Asplund

Feb 25, 2015, 4:36:24 PM
to framework-...@googlegroups.com
It seems Cassandra is not (re)started in benchmarker.py like the other databases are.

Cody Goodman

Feb 25, 2015, 5:27:54 PM
to framework-...@googlegroups.com
How can I find the logs showing wai/Haskell's latency for JSON serialization? I'm having trouble mapping these results to the right logs.

http://www.techempower.com/benchmarks/previews/round10/#section=data-r10&hw=peak&test=json&f=0-9zlds-0-0

I found this:

https://github.com/TechEmpower/TFB-Round-10/blob/master/peak/linux/results-2015-01-21-peak-preview1/latest/logs/wai/out.txt

But it doesn't get me very far in reproducing the 3120.0 ms max latency that was produced and then debugging it. Any pointers?

Hamilton Turner

Feb 26, 2015, 1:43:51 AM
to framework-...@googlegroups.com
Cody, 

The logs stored in (results/latest/logs/<framework_name>) are the output and error logs from our toolset while it's running the wai test.

What you're probably looking for are the logs for the load generation tool, as that is where TFB gets its numbers for latency and throughput.  Those are under results/<date>/<test_type>/<framework_name>.  Specifically, that 3120 number comes from right here.

Note that part of what TFB does is collect these numbers into one big results file, so you can also see wai's performance numbers in results.json.  You'll notice it's a JSON array, with each entry corresponding to the raw output I linked you to above; e.g., right here is that 3.12s number.

Also, note that there are some opinions that the method wrk uses to measure latency is not intuitive; I invite you to read this issue and any linked threads and to offer comments in the GitHub tracker.

HTH,
Hamilton

Hamilton Turner

Feb 26, 2015, 1:50:23 AM
to framework-...@googlegroups.com
Cody, 

Just to be fully clear -- that number 222106 comes from these lines, where we've applied the simple formula: requests/sec = total requests / (end - start).  In the case of wai's performance at concurrency level 128, that is:

3331603 / (1421972725 - 1421972710) ~= 222106
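Restated as code (the function name is mine; the numbers are the ones quoted above):

```python
def requests_per_sec(total_requests, start_epoch, end_epoch):
    """Throughput exactly as described above: total completed requests
    divided by elapsed wall-clock seconds (epoch timestamps)."""
    return total_requests / (end_epoch - start_epoch)

# wai at concurrency level 128, using the raw wrk numbers quoted above:
rps = requests_per_sec(3331603, 1421972710, 1421972725)
print(int(rps))  # 222106
```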

Best, 
Hamilton

Hamilton Turner

Feb 26, 2015, 2:06:41 AM
to Marko Asplund, framework-...@googlegroups.com
Ouch, Cassandra should definitely be restarted as well.
Do you know how to restart it? If so, please submit a PR against this section of code. If not, let's open an issue so someone can figure out how to restart it and add that ASAP.

Oddly, that shouldn't cause connection issues, because AFAIK servlet3-cass is the only test to use Cassandra currently, so as long as the database initially turns on it should never turn back off. 

Unfortunately, I doubt the logs for the database installation are available, as it was likely only run once at the beginning.  The preview's configuration options show that TFB was run with --install server.  I've opened an issue for that.  Your best bet for a quick debugging session is to say hello over in IRC (#techempower-fwbm on freenode) and see if msmith can help you verify that Cassandra is alive and responding well.  Finally, there's an open issue to verify DB connectivity before running a test.

Best, 
Hamilton




Marko Asplund

Feb 27, 2015, 1:28:42 AM
to framework-...@googlegroups.com, marko....@gmail.com
Yes, I'll start working on restarting Cassandra and maybe create proper init scripts, as well.
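A restart helper for the toolset might be sketched as follows.  These names, the service command, and the "tfb@database" ssh string are hypothetical, and the injectable `run` parameter exists only to keep the sketch testable; the real benchmarker.py code will differ:

```python
import subprocess

def restart_cassandra(database_ssh_string, run=subprocess.check_call):
    """Stop and start Cassandra on the database host over ssh, mirroring
    how the toolset restarts the other databases between test runs."""
    for action in ("stop", "start"):
        # e.g. executes: ssh tfb@database "sudo service cassandra stop"
        run(database_ssh_string.split() + ["sudo service cassandra %s" % action])
```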

The installation logs would be quite useful to have in these sorts of situations.
Any chance of making them available?

marko

Hamilton Turner

Feb 27, 2015, 10:55:53 AM
to framework-...@googlegroups.com

Forgot to cc the list.

---------- Forwarded message ----------
From: "Hamilton Turner" <hami...@gmail.com>
Date: Feb 27, 2015 10:01 AM
Subject: Re: [frameworks] Re: Round 10 preview data available for sanity checks
To: "Marko Asplund" <marko....@gmail.com>

Marko, 

A number of logs are already available:
 - Application server installation log: do a find for servlet3 and you'll see it being installed.
   (These logs come from running prerequisites.sh and all of the install.sh files, such as this.)
 - stdout and stderr for servlet3-cass and servlet3-cass-raw

The only one missing I can think of is the log from running databases.sh.  At this point it's
unrecoverable for preview 1, but I've opened an issue so that it's hopefully recorded
correctly for R10 preview 2.

Best, 
Hamilton


Marko Asplund

Feb 28, 2015, 4:55:38 AM
to framework-...@googlegroups.com, marko....@gmail.com
Hamilton,

It would be great if the database installation log could be recorded for the next round 10 preview run. Also, I think adding a DB connectivity test would be really helpful in diagnosing test failures in many cases.

I refactored the Cassandra deployment code a bit and changed Cassandra to restart at the beginning of each benchmark run. Here's the PR for review:

Based on the available logs, my guess is that the servlet3-cass test implementation was unable to connect to the database.

thanks,

marko

Marko Asplund

Feb 28, 2015, 9:32:11 AM
to framework-...@googlegroups.com, marko....@gmail.com
Hi,

I implemented database connection verification in https://github.com/TechEmpower/FrameworkBenchmarks/pull/1357
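For readers following the thread, a verification of this kind can be as small as a TCP connect pre-flight.  This is a generic sketch of the idea, not the code in the PR:

```python
import socket

def db_is_reachable(host, port, timeout=3.0):
    """Cheap pre-flight check: can we open a TCP connection to the database
    port at all?  A fuller verifier would also issue a protocol-level ping
    (e.g. SELECT 1) before letting the benchmark proceed."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running this before each test would turn a vague "internal server error" in the framework logs into an immediate "database unreachable" diagnosis.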

marko

Brian Hauer

Mar 5, 2015, 6:28:13 PM
to framework-...@googlegroups.com
The Round 10 preview site has been updated with a second capture of preview data.  It is at the same URL:

http://www.techempower.com/benchmarks/previews/round10/

Kiswono Prayogo

Mar 5, 2015, 6:37:44 PM
to Brian Hauer, framework-...@googlegroups.com
I notice that Go is now gone from all tests?

On 6 March 2015 at 06:28, Brian Hauer <teona...@gmail.com> wrote:
The Round 10 preview site has been updated with a second capture of preview data.  It is at the same URL:

http://www.techempower.com/benchmarks/previews/round10/

--
Regards,
Kiswono P
GB

Kiswono Prayogo

Mar 5, 2015, 8:27:35 PM
to Brian Hauer, framework-...@googlegroups.com
Maybe because Go was upgraded (1.4.1 to 1.4.2, for example) but the $GOPATH/pkg contents were not yet cleared (manually)?

Kiswono Prayogo

Mar 5, 2015, 8:36:45 PM
to Brian Hauer, framework-...@googlegroups.com

Naoki INADA

Mar 5, 2015, 11:07:38 PM
to framework-...@googlegroups.com
All tests using uWSGI failed.
Sadly, my PR https://github.com/TechEmpower/FrameworkBenchmarks/pull/1346 has not fixed the broken err.txt.

I found "Listen queue size is greater than the system max net.core.somaxconn (128)." in the broken err.txt.
Was Linux restarted after `sysctl -w net.core.somaxconn=65535`?
I think we should write it to sysctl.conf.
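A line like the following in /etc/sysctl.conf would make the setting persistent across reboots, unlike a one-off `sysctl -w`:

```
# /etc/sysctl.conf
net.core.somaxconn = 65535
```

After editing the file, `sudo sysctl -p` applies it immediately, without a reboot.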


Martin Grigorov

Mar 6, 2015, 4:21:00 AM
to Brian Hauer, framework-...@googlegroups.com
Hi,

I see that Wicket failed to execute the tests for a second time.
In the out.txt and err.txt [1] logs I cannot see any error that would cause this.
It looks to me like some problem with Resin, but I am not sure.
How can we identify what exactly the reason is, so I can fix it?
E.g. start Resin, make an HTTP request manually, and if it hangs, "kill -3 resinPID" and look at the stack traces in Resin's output (out.txt).

Martin


On Fri, Mar 6, 2015 at 1:28 AM, Brian Hauer <teona...@gmail.com> wrote:
The Round 10 preview site has been updated with a second capture of preview data.  It is at the same URL:

http://www.techempower.com/benchmarks/previews/round10/


Hamilton Turner

Mar 8, 2015, 8:59:42 PM
to framework-...@googlegroups.com
For anyone interested, here's a log of the changes between R10 preview 1 and R10 preview 2. 

$ git shortlog bed8db1c82dca92421846d8984fb68f323a8b2ae..02f2557068b71d2013c0706a0e55c4d6ddc541ce

Alex Schneider (5):
      Reduce verbosity of python and mono
      Add output every 100 lines to install scripts
      GCC logs errors in stderr, let's pipe to stdout.
      Quiet down apt-get a little
      Log installation files

Brittany Mazza (2):
      Remove php-silica micro-framework from tests
      Turn off 'beautiful' column names

Hamilton Turner (16):
      Merge pull request #1332 from Eyepea/api_hour_release_fix
      Update apt on client computer
      Retry limit for starting Mongo and Cassandra
      Make Cassandra default to one-node cluster and simplify database.sh
      Add retry limit for mongo and cassandra inside travis-CI
      Typo in Cassandra config
      Merge branch 'update-lwan' of https://github.com/lpereira/FrameworkBenchmarks into lpereira-update-lwan2
      Merge pull request #1230 from zloster/postgresql-jdbc-bump
      Merge remote-tracking branch 'upstream/reduce-python-csharp-install-verbosity' into reduce-python-csharp-install-verbosity
      Merge branch 'master' of github.com:TechEmpower/FrameworkBenchmarks
      Create testrunner home folder inside vagrant
      Revert "Add retry limit for mongo and cassandra inside travis-CI"
      Add retry limit for mongo and cassandra inside travis-CI
      Merge pull request #1196 from jberger/mojo_ev
      Merge pull request #1328 from raphaelbauer/master
      Merge pull request #1340 from lhotari/grails-2.4.4-upgrade

INADA Naoki (1):
      Fix uwsgi errorlog is written to `--ini` file.

Joel Berger (2):
      update cpanfile.snapshot
      less logging during perl build phase

Juan José Aguililla (8):
      Add Sabina benchmark
      Complete test cases
      Add new tests skeleton
      Documentation change
      Add sabina framework to travis configuration
      Comment unmodified frameworks in Travis configuration
      Complete tests to fulfill specs and refactor
      Uncomment frameworks in Travis configuration

Keith Newman (1):
      reimplemented Gemini-Postgres due to Travis error

Lari Hotari (7):
      add rewriteBatchedStatements=true to MySQL jdbc driver config
      update to Grails 2.4.4
      add extra "refresh-dependencies" step to setup.sh script
      remove unneeded patch that was introduced earlier to fix a problem in some older Grails release
      remove unused imports from HelloController
      use GrailsCompileStatic for all Grails artefact classes
      remove bash_profile.sh from Grails benchmark

Leandro Pereira (2):
      Build a newer Lwan.
      Define a default test for Lwan.

Ludovic Gasc (GMLudo) (1):
      Change API-Hour version because 0.6.0 release is broken on PyPI

Mike Smith (10):
      Merge pull request #1325 from LadyMozzarella/remove-silica
      Merge pull request #1327 from LadyMozzarella/php-slim-fix-db-responses
      Merge pull request #1330 from stefanocasazza/master
      Merge pull request #1331 from jamming/master
      Merge pull request #1335 from denkab/master
      Fixed a bug with lapis
      Fixed a bug with openresty
      Fixed some issues with benchmark_configs
      Merge pull request #1346 from methane/fix/uwsgi-daemonize-err
      Merge pull request #1344 from TechEmpower/gemini-postgres2

Radoslav Petrov (3):
      postgresql jdbc driver updated to latest released
      postgresql jdbc driver fix maven groupId
      reverting changes in resin-web.xml

Raphael A. Bauer (1):
      Bump to Ninja 4.0.5.

Stefano Casazza (1):
      avoid dependency on bash_profile.sh and doc fix

denkab (3):
      parallelize read queries
      Parallelize write queries; use tabs for formatting (doh!)
      ...use tabs for formatting to comply with original

Brian Hauer

Mar 9, 2015, 6:18:39 PM
to framework-...@googlegroups.com
The Round 10 preview site has been updated with a third capture of preview data, based on the Git repository as of 2015-03-06.  It is at the same URL:

http://www.techempower.com/benchmarks/previews/round10/

We anticipate one more preview run and then the final run for Round 10.

Martin Grigorov

Mar 9, 2015, 6:24:19 PM
to Brian Hauer, framework-...@googlegroups.com
Hi,

Again the Wicket tests did not complete [1] [2].
Could you please run only the Wicket tests and give more details about what the problem is, so I can fix it before the final run?

Thanks!



kainsavage44

Mar 9, 2015, 6:36:45 PM
to framework-...@googlegroups.com, teona...@gmail.com
I don't have a lot to go on presently, but from https://github.com/TechEmpower/TFB-Round-10/blob/master/peak/linux/results-2015-03-06-peak-preview3/2015-03-06.txt it seems like wicket isn't properly connecting to or detecting the cassandra connection (search for "wicket" until you find the big "==========" blocks around it for additional logging).

Specifically, it says "cassandra: is _NO_ GO!: ERROR: [Errno 111] Connection refused"

I'm not exactly sure why that would be, but that's what happened.

-msmith



Ludovic Gasc

Mar 9, 2015, 6:49:53 PM
to Brian Hauer, framework-...@googlegroups.com
Thank you, Brian, for these results.
I've just checked the values for API-Hour, and I'm a little surprised, because they aren't really the values I saw during my own tests:
I'm preparing a new blog post with Nginx and uWSGI; it will confirm my previous article.
Maybe I've missed something in my benchmarks; I'd be interested to know.

I've also tested the Python part with your benchmark suite on my own computers, and I didn't really get the same results as you.
Is it possible to know the command line you use for wrk?

Maybe the API-Hour performance issue is related to this, since some workers are down: https://github.com/TechEmpower/FrameworkBenchmarks/issues/1394

Another difference is in the JSON serialization results for Django and Flask: the gap is very large. In my experience, Flask is faster, but not by the margin shown in your benchmark.

--
Ludovic Gasc


Hamilton Turner

Mar 9, 2015, 6:50:06 PM
to framework-...@googlegroups.com
Shortlog between round 10 preview 2 and round 10 preview 3: 

Kiswono Prayogo

Mar 9, 2015, 7:28:17 PM
to framework-...@googlegroups.com
Again, all "Go" benchmarks are still missing since Round 10 preview 2.

Marko Asplund

Mar 10, 2015, 2:31:10 AM
to framework-...@googlegroups.com, teona...@gmail.com
On Tuesday, March 10, 2015 at 12:36:45 AM UTC+2, kainsavage44 wrote:
I don't have a lot to go on presently, but from https://github.com/TechEmpower/TFB-Round-10/blob/master/peak/linux/results-2015-03-06-peak-preview3/2015-03-06.txt it seems like Wicket isn't properly connecting to (or detecting) the Cassandra instance (search for "wicket" until you find the big "==========" blocks around it for additional logging).

Specifically, it says "cassandra: is _NO_ GO!: ERROR: [Errno 111] Connection refused"
...
 
The message means the benchmark server is unable to connect to the Cassandra database.
Previously, there was an error in the Cassandra installation script which caused it to be unavailable to the tests.
The error was fixed manually in the Peak environment during the preview 3 run.

Here are a couple of possible reasons why this would happen for the Wicket test:
a) the Wicket test was run before the Cassandra installation issue was manually fixed
b) the database installation was re-run after the manual installation fix,
  causing the manual fix to be overwritten (the fix [PR #1383] is not merged into master yet)
c) Cassandra startup is slower than expected, causing the connection test to fail

Output from running the following commands on the servers should help further diagnose the issue:

# on benchmark server
nc -vz 10.0.3.5 9160

# on database server
ls -la /opt/cassandra/conf/cassandra.yaml
grep listen_address: /opt/cassandra/conf/cassandra.yaml
netstat -atn | grep :9160 | grep LISTEN

PR #1383 should fix this issue if the root cause is (a) or (b).
Could someone from the project team please review the fix?

marko

Martin Grigorov

Mar 10, 2015, 3:56:59 AM
to Marko Asplund, framework-...@googlegroups.com, Brian Hauer
Thanks for explaining the problem, Marko!
I've created https://github.com/TechEmpower/FrameworkBenchmarks/issues/1403 before reading your answer.
I still think #1403 would be useful because, in my opinion, it is unnecessary to require access to databases that are not used by the test.
In the case of Wicket, only MySQL is needed.


Brian Hauer

Mar 13, 2015, 6:15:08 PM
to framework-...@googlegroups.com
The Round 10 preview site has been updated with a fifth capture of preview data, based on the Git repository as of 2015-03-11.  It is at the same URL:

http://www.techempower.com/benchmarks/previews/round10/

Yes, fifth.  There was a fourth preview run, but I did not get a chance to post it before the fifth completed.  So I just skipped to this run.  You can find logs from all the preview runs here:

https://github.com/TechEmpower/TFB-Round-10/tree/master/peak/linux/

This will be the final preview run.  Any last minute PRs can come in through the weekend, but expect the final run for Round 10 to start on Monday March 16.

Ludovic Gasc

Mar 15, 2015, 7:42:11 PM
to Brian Hauer, framework-...@googlegroups.com
Hi Brian,

Like some others here, I'm waiting for my pull requests to be merged ;-)
No stress; I understand you receive a lot of requests, and I'm not alone.
Nevertheless, it would help to merge this PR before launching the final run: https://github.com/TechEmpower/FrameworkBenchmarks/pull/1407
It could change the values for all tests that use PostgreSQL.
If possible, it would be interesting to launch another preview to validate that it doesn't break anything.
From my point of view, this PR should be merged not only for my platform: since MySQL is more widely used in FrameworkBenchmarks, I have the impression that the MySQL setup has been optimized more than the PostgreSQL one, so the change should be interesting for others as well.

By the way, could you give us the benchmark.cfg file you use for testing? https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/benchmark.cfg.example

Regards.

--
Ludovic Gasc


Marcio Andrey Oliveira

Mar 18, 2015, 10:31:08 AM
to framework-...@googlegroups.com
Hi.

I miss web2py. Could you please add it?

Brian Hauer

Mar 18, 2015, 1:52:23 PM
to framework-...@googlegroups.com, teona...@gmail.com
Hi all,

Due to a heavy work-load this week, we've not been able to finalize Round 10 just yet, so as Ludovic Gasc recommended, we're doing another preview or two.  I will likely update the preview later today or tomorrow morning.

In the meantime, Mike has applied most of the Postgres configuration changes in PR #1407.  However, significantly increasing the connection limit didn't work (Postgres would not start).  For the time being, we've left that at 2,000 connections.
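For anyone hitting the same wall: PostgreSQL reserves kernel resources per connection slot (semaphores, and on older releases System V shared memory), so a very large max_connections can exceed the kernel's default limits and abort startup until the sysctl limits are raised. A sketch of the setting in question (values illustrative, not the actual Peak configuration):

```ini
# postgresql.conf (illustrative; not the Peak environment's actual file)
max_connections = 2000   # startup failed in this environment at higher values
# If startup fails with a semaphore/shared-memory error, inspect the kernel
# limits (e.g. `sysctl kernel.sem`) before raising this further.
```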

With the additional time, if anyone still has fix PRs, we'll try to accept those.

Thanks!

Ludovic Gasc

Mar 18, 2015, 4:50:10 PM
to Brian Hauer, framework-...@googlegroups.com
On Wed, Mar 18, 2015 at 6:52 PM, Brian Hauer <teona...@gmail.com> wrote:
Hi all,

Due to a heavy work-load this week, we've not been able to finalize Round 10 just yet, so as Ludovic Gasc recommended, we're doing another preview or two.  I will likely update the preview later today or tomorrow morning.

Thank you.
 

In the meantime, Mike has applied most of the Postgres configuration changes in PR #1407.  However, significantly increasing the connection limit didn't work (Postgres would not start).  For the time being, we've left that at 2,000 connections.

Ok.
Would it be possible to get more details (error message, sysctl -a output, number of CPUs, memory...), maybe in an issue?
I understand you don't have time to debug that right now, but if it's possible to fix it for Round 11, I can help if you're interested.

 
With the additional time, if anyone still has fix PRs, we'll try to accept those.

This is my PR to reduce the pgsql socket pool: https://github.com/TechEmpower/FrameworkBenchmarks/pull/1427
I also have another fix to reduce error messages: https://github.com/TechEmpower/FrameworkBenchmarks/pull/1428

I know I've added some tests, but the Python community should certainly be interested in seeing the overhead between AsyncIO and aiohttp.
I didn't implement that before because I only had the idea this week.
 

Thanks!

Matthieu Garrigues

Mar 18, 2015, 6:48:57 PM
to framework-...@googlegroups.com


On Wednesday, March 18, 2015 at 18:52:23 UTC+1, Brian Hauer wrote:

With the additional time, if anyone still has fix PRs, we'll try to accept those.


I know it is late and I will understand if you do not include it in Round 10, but the pull
request [1] for a new C++ framework is ready. It implements and passes all 6 TFB tests.
It could be interesting to compare this framework with the other C/C++ frameworks, which
have much longer TFB implementations.


Brian Hauer

Mar 19, 2015, 7:04:27 PM
to framework-...@googlegroups.com
As I mentioned in my previous message, we are swamped this week so we are continuing to run previews of Round 10.  We'll finalize the round once we get a breather.

In the meantime, the Round 10 preview site has been updated with a sixth capture of preview data, based on the Git repository as of 2015-03-16.  It is at the same URL:

http://www.techempower.com/benchmarks/previews/round10/

Thanks for your continued patience!

Ludovic Gasc

Mar 20, 2015, 9:29:57 AM
to Brian Hauer, framework-...@googlegroups.com
Thanks for your continued patience!

Thank you very much for your work; it's a really time-consuming job, especially merging PRs.

Brian Hauer

Mar 20, 2015, 6:24:03 PM
to framework-...@googlegroups.com
The Round 10 preview site has been updated again with a seventh set of preview data, based on the Git repository as of 2015-03-18.  It is at the same URL:

http://www.techempower.com/benchmarks/previews/round10/

Ludovic Gasc

Mar 21, 2015, 6:17:18 AM
to Brian Hauer, framework-...@googlegroups.com
Hi,

Thanks for the update, but I see no new pull requests merged since your previous tests.
Are you having trouble merging pull requests?
Would you be interested in a pull request to roll back the PostgreSQL configuration, so that your live modifications stay consistent with the GitHub repository?

Regards.

--
Ludovic Gasc

On Fri, Mar 20, 2015 at 11:24 PM, Brian Hauer <teona...@gmail.com> wrote:
The Round 10 preview site has been updated again with a seventh set of preview data, based on the Git repository as of 2015-03-18.  It is at the same URL:

http://www.techempower.com/benchmarks/previews/round10/


Hamilton Turner

Apr 2, 2015, 12:03:30 AM
to framework-...@googlegroups.com
Marcio, 

It's rare for the TFB team to have time to add new frameworks ourselves; we normally rely on the community for that. We're very available if you're interested in contributing and need guidance, and there is extensive documentation at https://frameworkbenchmarks.readthedocs.org/en/latest/

I've made an issue so the framework is not forgotten - https://github.com/TechEmpower/FrameworkBenchmarks/issues/1471

Thanks, 
Hamilton