Round 9 timing: to be determined based on validation effort

Brian Hauer

Jan 6, 2014, 2:53:01 PM
to
Thanks to everyone who helped with Round 8!  It's an honor to see such continued interest in the project.  Despite being ~2 weeks late, I think we've made decent progress toward a more regular cycle.

To put some dates on the calendar for Round 9, here is what we have as targets:
  • Thursday, January 9: Last day for change and addition PRs prior to starting the Round 9 preview run.
  • Tuesday, January 14: Round 9 preview posted.
  • Thursday, January 16: Last day for fix PRs.  Final run started.
  • Tuesday, January 21: Round 9 posted.
Obviously all of this is subject to change, but having a target helps keep things moving.

Adam Chlipala

Dec 17, 2013, 5:25:25 PM
to framework-...@googlegroups.com
On 12/17/2013 02:56 PM, Brian Hauer wrote:
> Thanks to everyone who helped with Round 8! It's an honor to see such
> continued interest in the project. Despite being ~2 weeks late, I
> think we've made decent progress toward a more regular cycle.

And thank you very much for running this program! The web frameworks
world has seriously needed a comparison like this for many years, and
I'm grateful to have a chance now to gather objective data on the
framework I work on.

I wanted to ask about a few potential additions for Round 9 or later rounds.

On ease of development for participants:
- I've seen somewhere a goal of more frequent "preview round"
executions. Something like that would be great; say, running a complete
benchmark run overnight every night or every other night (maybe only
for those frameworks that have had requests submitted?), saving the
results somewhere easy to reach. Otherwise, I suspect every framework
team will need to construct its own replica of your i7 environment to
make speedy progress on fixing silly configuration mistakes and so on.

On other useful measurements of the existing benchmarks:
- I think I previously saw discussion of including memory usage
statistics in future results. Any concrete plans for that now? I would
imagine measuring all writable process memory in a Linux instance (at
key points during benchmarking?) would be good enough, since
framework-specific code runs on its own machine, where you could
subtract the memory usage of processes that are running for all frameworks.
- How about some simple measure(s) of code size or complexity, like
size of a .tgz of all identified source files?
- [Warning: this idea might be crazy.] Automatically testing
submissions for security vulnerabilities? :)

On interesting directions for new benchmarks:
- What about some more substantial but manageable application,
where we would actually give frameworks a chance to shine in the
programmer productivity dimension? Say, something like a basic online
store, starting from images and HTML templates provided for everyone to
work with. [I'd especially like to see an example where traditional
database transactions are useful to maintain consistency, such as
preventing the online store from "selling" more copies of an item than
it actually has in stock.]

Brian Hauer

Dec 17, 2013, 11:23:06 PM
to framework-...@googlegroups.com
Hi Adam,

Thanks for the very thoughtful comments.  As you might imagine, we have a long queue of enhancements in mind for future rounds, enough to keep us busy for a very long time.  That said, it's especially fun and useful to hear the opinions of others.  It helps us decide which enhancements to prioritize given our limited time.

You're right, we have a goal of eventually having our i7 environment continuously running the test suite.  That is, as soon as a run completes, automatically start again.  You can imagine all of the fun that could lead to, such as automatic commits of results and automatic pulls of updates prior to each run.  Plus, with a continuously-running environment, we could relax the constraints we have in place on run-time.  For example, where we presently limit each sample to 15 seconds so that the full suite can run overnight, the sample times could be pushed back up to 30 or 60 seconds.  I also just posted a message on the Hacker News thread suggesting that with a continuous model, I would be more tolerant of a wider spectrum of concurrency levels even for tests that are already CPU-bound.  Right now, I resist increasing the number of concurrency levels where I don't think doing so will add value, because each additional sample level adds a chunk of total execution time.

There are many hurdles to getting there.  For instance, we first want to get to a better state with respect to dependency isolation and self-management of dependencies by each framework.  In other words, if framework X wants to use nginx N and framework Y wants to use nginx N+1, this should not cause a problem if the nginx installation is framework-specific (presently some dependencies are shared, causing conflicts).  Not only that, if we require that frameworks manage their dependencies, a continuously-running environment would simply run the frameworks' "validate dependencies" functions prior to running, so any changes to dependencies that had been committed to a framework would be applied.  Another hurdle: coordinating the various reboots necessary to test Linux + Linux, Windows + Linux, and Windows + Windows (each server is in a dual-boot configuration).

Yes, memory, CPU, and IO utilization metrics are desired.  See GitHub issue #108.  My current thinking is that while each test is running, I'd like to just capture a few snapshots of these metrics, which could then be averaged/maxed for rendering.  If someone is willing to start putting together a proof of concept for this, I'd love to see it.  But please include memory, CPU, and IO utilization--we've had requests for each and all.
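
If anyone wants to take a crack at that proof of concept, here is a minimal sketch of the kind of snapshot sampling described above.  It assumes the third-party psutil package; the function name, sampling interval, and reported fields are illustrative only, not part of the toolset.

    # Proof-of-concept sketch only (not part of the benchmark toolset):
    # sample system-wide CPU, memory, and disk IO a few times while a test
    # runs, then report averages, maxima, and total IO.
    import time
    import psutil  # third-party: pip install psutil

    def sample_utilization(duration_s=15, interval_s=3):
        """Take periodic snapshots of CPU %, used memory, and disk IO."""
        cpu, mem = [], []
        io_start = psutil.disk_io_counters()
        end = time.time() + duration_s
        while time.time() < end:
            cpu.append(psutil.cpu_percent(interval=interval_s))  # blocks for interval_s
            mem.append(psutil.virtual_memory().used)
        io_end = psutil.disk_io_counters()
        return {
            "cpu_avg_pct": sum(cpu) / len(cpu),
            "cpu_max_pct": max(cpu),
            "mem_avg_bytes": sum(mem) // len(mem),
            "mem_max_bytes": max(mem),
            "io_read_bytes": io_end.read_bytes - io_start.read_bytes,
            "io_write_bytes": io_end.write_bytes - io_start.write_bytes,
        }

    if __name__ == "__main__":
        print(sample_utilization())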

As for code size and complexity, we do not have a GitHub issue for that but we have been making some headway on that internally.  We are considering measuring things like source lines of code (sloc), and the number and size of libraries.  I posted somewhere else today (I think in the HN thread again) concerning our other plan here: capturing the number of commits to each directory in the GitHub repo so that readers can get a feeling for which tests have seen lots of community input, and when the last change was made.
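
As a rough illustration of that commits-per-directory idea (not the project's actual tooling; the directory names in the example comment are hypothetical), something along these lines would work against a local clone:

    # Illustrative sketch only: count the commits touching each test directory
    # and record the date of the most recent change, using plain git commands.
    import subprocess

    def git(repo, *args):
        return subprocess.check_output(["git", "-C", repo, *args], text=True).strip()

    def directory_activity(repo, directories):
        """Return {directory: (commit_count, last_change_date)} for a git repo."""
        activity = {}
        for d in directories:
            count = int(git(repo, "rev-list", "--count", "HEAD", "--", d))
            last = git(repo, "log", "-1", "--format=%ci", "--", d)  # ISO date of last commit
            activity[d] = (count, last)
        return activity

    # Example (hypothetical directory names):
    # print(directory_activity("FrameworkBenchmarks", ["php", "gemini", "nodejs"]))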

Your "crazy idea" about testing for security vulnerabilities is interesting.  It speaks to the possibility of related projects that use the same suite of test implementations but exercise them in a different way.  That could be a fork of the project, so to speak.  :)

As you can imagine, a full application test type is considerably more difficult to specify and implement.  I have been hesitant with the idea in the past because of the amount of labor involved.  However, I could change my opinion there, especially considering how generous the community has been with providing test implementations.  That leaves just coming up with a solid set of requirements and some reference implementations.  So it could happen!  But even if we defer such an enhancement for several rounds, we do plan to add test types with more substantial workloads in upcoming rounds.  We'd like to get some tests where even the fastest platforms and frameworks struggle to provide 100 responses per second.

I really appreciate your ideas and I would like to invite you (and anyone else reading along) to put some ideas into the GitHub issues.  Thanks again!

Michael Hixson

Dec 18, 2013, 12:29:18 AM
to Brian Hauer, framework-...@googlegroups.com
For what it's worth, I think we should prioritize additional test types over pretty much everything else.  Especially the external resource, websockets, and caching test types.  New test types will give us the best ratio of external contributions to our own time.

It's not that I value new tests more highly than, say, measuring CPU utilization.  It's that if we declare new test type A three months from now instead of right now, we miss out on three months' worth of contributions.  Meanwhile, measuring the CPU or counting source code lines doesn't really help anyone make pull requests.  If we defer those for three months, we end up in exactly the same place as if we hadn't.

-Michael

Brian Hauer

Dec 18, 2013, 1:07:50 AM
to framework-...@googlegroups.com, Brian Hauer
That's also a great point: the earlier we can get new test types specified and reference implementations provided, the sooner contributors can begin implementing them.

Of the three you listed, Michael, the caching test is the low-hanging fruit (see #374).  We should discuss this a bit internally to determine how much pain we cause ourselves if we add this now or after we have better dependency isolation in place.  If, for example, we receive implementations that use Memcached, I'd like to have those start with Memcached being managed on a per-framework basis rather than a shared basis that later needs to be revisited and isolated.

But aside from that, the specification of that first caching test is simple and should be easy to implement in many cases.

On the WebSocket front, a testing tool was linked to me in a tweet recently.  I should have posted that in a GitHub issue, but I'm sure I can find it again.

Adam Chlipala

Dec 18, 2013, 10:59:18 AM
to Brian Hauer, framework-...@googlegroups.com
On 12/17/2013 11:23 PM, Brian Hauer wrote:
> As for code size and complexity, we do not have a GitHub issue for
> that but we have been making some headway on that internally. We are
> considering measuring things like source lines of code (sloc)

Would this depend on, say, a regular expression for each language
identifying where comments appear, to avoid counting them?

> As you can imagine, a full application test type is considerably more
> difficult to specify and implement. I have been hesitant with the
> idea in the past because of the amount of labor involved. However, I
> could change my opinion there, especially considering how generous the
> community has been with providing test implementations.

It might be useful to change the whole perspective for this project, to
the point where you expect the community of each framework to provide a
good implementation of each benchmark, rather than thinking of a small
set of benchmark developers doing everything. That seems to be how the
venerable Great Programming Languages Shootout worked, for instance.
The people contributing code have natural incentives to work hard to
maximize quality! (And with the high profile of the TechEmpower
benchmarks now, I think framework developers will see the value of
working on benchmarks, where they may not have when you began.)


One more pair of suggestions I wanted to add, to reduce friction for
some frameworks in processing requests:
1. The current specification requires inclusion of "Server" and "Date"
headers in responses. Is this necessary? It requires a bit of
marginally ugly code in implementations for frameworks that run
bare-bones HTTP servers that don't habitually include such headers. We
would typically throw a proxy in front of such a server in a realistic
deployment, but I think it's useful to benchmark the underlying server
directly in this sort of context.
2. The current specification requires, for some benchmarks, processing
of "an integer query string parameter named queries." However, the
benchmark_config parameters like "query_url" seem to indicate that the
infrastructure will be flexible about formatting; perhaps no code
changes would be necessary to support arbitrary URL formats that just
end in the numeric "queries" parameter? The specification also requires
applying a default interpretation in case of a missing or malformed (not
an integer) parameter, but it's not clear if benchmarking actually uses
that flexibility. Frameworks that adopt a higher-level view of web apps
(like Ur/Web) may require more work than others to accommodate such a
fixed, ad-hoc URL format; Ur/Web would be happier with URIs like
"/queries/20", and no explicit request processing code would be
required, in contrast to the current benchmark implementation.
So, would it actually be easy to change both these sorts of handling?
Maybe it doesn't even break backwards compatibility, if it would solely
be a relaxation of the problem specification?


A last question: would it be substantially easier to implement new
features if participants who are interested in them combined to pay a
modest amount of money for the work to be done? This could prompt
allegations of biasing results in favor of the frameworks whose
developers contribute financially, but I figured I'd put it on the table.

kgustafson

Dec 18, 2013, 12:34:44 PM
to framework-...@googlegroups.com, Brian Hauer
Hi Adam,


On Wednesday, December 18, 2013 7:59:18 AM UTC-8, Adam Chlipala wrote:
On 12/17/2013 11:23 PM, Brian Hauer wrote:
> As for code size and complexity, we do not have a GitHub issue for
> that but we have been making some headway on that internally.  We are
> considering measuring things like source lines of code (sloc)

Would this depend on, say, a regular expression for each language
identifying where comments appear, to avoid counting them?

We'd definitely want to use a tool for this. In our preliminary research, we like this tool: http://cloc.sourceforge.net/. I believe it covers all the languages currently in the benchmarks.

We realize, of course, that line count isn't the most useful measurement on its own. But taken in the right context we're hoping this will provide another interesting attribute for each test.
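
As a naive illustration of the per-language comment handling Adam asks about (cloc and similar tools do this far more robustly, handling block comments, strings, and many more languages; the extension-to-pattern mapping below is hypothetical and partial):

    # Naive single-line-comment stripping to approximate SLOC; a sketch only.
    import re

    LINE_COMMENT = {            # hypothetical, partial mapping
        ".py": re.compile(r"^\s*#"),
        ".java": re.compile(r"^\s*//"),
        ".php": re.compile(r"^\s*(//|#)"),
    }

    def naive_sloc(path, suffix):
        """Count lines that are neither blank nor obvious single-line comments."""
        pattern = LINE_COMMENT.get(suffix)
        count = 0
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                if not line.strip():
                    continue
                if pattern and pattern.match(line):
                    continue
                count += 1
        return count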

-Keith

Greivin López

Dec 18, 2013, 12:45:27 PM
to framework-...@googlegroups.com
May I suggest including this Go framework in the tests:


Regards.


On Tuesday, December 17, 2013 1:56:18 PM UTC-6, Brian Hauer wrote:
Thanks to everyone who helped with Round 8!  It's an honor to see such continued interest in the project.  Despite being ~2 weeks late, I think we've made decent progress toward a more regular cycle.

Florin Patan

Dec 19, 2013, 11:15:13 AM
to framework-...@googlegroups.com
Hello,


Thanks for the work on this.

May I suggest running the next round of tests on c3.large instances on AWS instead of m1.large? They are supposed to be optimized for CPU, and the price is even a bit lower than that of the m1.large instances.



Thanks,
Florin

Brian Hauer

Dec 19, 2013, 3:28:09 PM
to framework-...@googlegroups.com
Hi Florin,

Good thought, and thanks for pointing out that c3.large is slightly cheaper.  I am a cheapskate, so saving a few dollars is appealing.  Every time we kick off an EC2 run over a weekend, I start worrying.  :)

Austin Riendeau

Dec 22, 2013, 8:44:04 PM
to framework-...@googlegroups.com
Hey Brian,

I love this program and use it quite often for benchmarks. I just wanted to see if we could add another Node.js framework for benchmarking. At Node Summit this year, Joyent was driving home how much they prefer "node-restify" for their APIs because it doesn't have as much bundled into it as Express, so it would be interesting to see benchmarks for that as well if possible.

--Austin

Brian Hauer

Dec 22, 2013, 9:22:26 PM
to framework-...@googlegroups.com
Hi Austin,

We'd be glad to include it.  Would you be willing to put together an implementation and submit it as a pull request?  That's the easiest way to get it included.

Paul Jones

Dec 26, 2013, 4:38:54 PM
to framework-...@googlegroups.com
I am *so* happy to see someone else taking up this charge.  I've done similar work, although restricted to the PHP world, for several iterations over the past few years. Here's the blog category and one blog post from years ago ...


... and here's the GitHub repo for that work:


Everything I've read about the TechEmpower project so far mimics both my approach and my motivations, and includes far more than I could have done on my own.

I look forward to retiring my work and making pull requests against TechEmpower instead.

Thanks very much, guys, this looks great.


-- pmj

Brian Hauer

Dec 26, 2013, 9:33:52 PM
to framework-...@googlegroups.com
Hi Paul,

We would be delighted to have you join us in this project.  From the looks of your prior work, I get the feeling we've gone through much of the same learning and discovery that you had (for example, researching the right tools).  Prior to starting this project, we did see several other similar projects, but had not seen yours.  It's very impressive, especially as solo work!

We've been especially lucky (I'm not sure that's the right word, but it's how we feel) to have received a great deal of community input to-date, and over time--with each round--we've received more input from subject-matter experts, which gives the data more credibility and value.

To that end, since you are clearly a PHP subject-matter expert, we'd very much like to receive any pull requests you put together to tune up the PHP tests.  In particular, and perhaps this is just an echo of the recent Round 8 results, I am fascinated to see the performance of PHP frameworks (and not simply plain PHP) running on hhvm.  Have you had a chance to build up experience with hhvm?  I'm not particularly well versed in PHP, least of all hhvm.

But anything at all that you know will improve matters, please feel free to send the pull requests. :)  And ask any questions here about the project as a whole.  We've been evolving the process piecemeal in a fairly extemporaneous fashion since the beginning, so if there is something procedural that you picked up from your experience that you'd like to share that might short-circuit future problems, let us know.

Thanks for the kind words and welcome!

Brian Hauer

Dec 27, 2013, 12:17:44 AM
to framework-...@googlegroups.com
Administrative bit: Although this group does not require moderation, I am periodically prompted to moderate an incoming message.  I just did so this afternoon, quickly accepting a message from a Dart community member before I read the message in its entirety.  Unfortunately, now that I have accepted the message, it does not appear anywhere within the Google Group user interface, so I am unable to continue reading and reply.

To whoever sent that message, I apologize; I don't even remember your name, and the only thing I recall noticing was that it pertained to Dart.  Please re-send it if you wouldn't mind.

I assume I'm doing something wrong with the Google Group, but there aren't that many options to confuse matters, so I am at a loss to explain what happened.

Paul M. Jones

Dec 27, 2013, 11:12:16 AM
to Brian Hauer, framework-...@googlegroups.com

On Dec 26, 2013, at 8:33 PM, Brian Hauer <teona...@gmail.com> wrote:

> Prior to starting this project, we did see several other similar projects, but had not seen yours. It's very impressive, especially as solo work!

It's a big internet, and you're kind to say so. You guys have far surpassed what I've completed. Nicely done.


> We've been especially lucky (I'm not sure that's the right word, but it's how we feel) to have received a great deal of community input to-date, and over time--with each round--we've received more input from subject-matter experts, which gives the data more credibility and value.

Heh -- I'm pretty sure I know about at least one kind of feedback you have received, prompting you to purchase extra-thick flame-retardant undergarments. ;-)


> To that end, since you are clearly a PHP subject-matter expert, we'd very much like to receive any pull requests you put together to tune up the PHP tests.

I will send them along as I have the time; I'm eager to switch my testing over to yours, if for no other reason than to lighten my own load, but other commitments may interfere with that for a while.

FWIW your PHP framework results, in the sense of "relative to maximum PHP responsiveness", look very similar to the ones I was getting, so you can consider that external confirmation.


> In particular, and perhaps this is just an echo of the recent Round 8 results, I am fascinated to see the performance of PHP frameworks (and not simply plain PHP) running on hhvm. Have you had a chance to build up experience with hhvm? I'm not particularly well versed in PHP, least of all hhvm.

I'm interested in hhvm too, but not well-versed in it either. We've been running HHVM on travis-ci against the Aura libraries, and it *mostly* works. I'll see who I can drum up in the wider PHP community to help with that.


> But anything at all that you know will improve matters, please feel free to send the pull requests. :) And ask any questions here about the project as a whole. We've been evolving the process piecemeal in a fairly extemporaneous fashion since the beginning, so if there is something procedural that you picked up from your experience that you'd like to share that might short-circuit future problems, let us know.

Again, it looks like you guys have gotten to where I was, and then passed it. Your choice of `wrk` seems good although I have not used it myself.

The biggest problems I had were:

1. The unbelievable amount of time it took to update, prepare, test, run, fix, re-run, etc, and

2. The unwillingness/inability of framework authors/experts to participate in a numeric measurement project where they might not end up in first place. (As a side note, I found it ... funny? ... that one project claimed "benchmarks don't matter" when they were in last place, then started their version 2 and advertised "now we're the fastest!" while in alpha, then stopped talking about it when their release fell back to previous performance levels.)

Basically, there were no shortcuts -- nothing but work, work, work all the time, and then tons of complaining about the results afterwards.


* * *

Last item, a question: Had you considered using Apache/mod_php in addition to Nginx/PHP-FPM? Not advocating for it, just curious what reasons you may have had in mind there.

Again, thanks for the great work.


--
Paul M. Jones
pmjo...@gmail.com
http://paul-m-jones.com


Brian Hauer

Dec 27, 2013, 12:22:03 PM
to framework-...@googlegroups.com, Brian Hauer
Hi Paul,

You're right, the feedback certainly has been varied, but we've been very pleasantly surprised by the positive reactions and, as I said earlier, the volume of contributions.  Even critical feedback is mostly constructive and sometimes positive in the long-term once we have a chance to incorporate contributed fixes and configuration tweaks.  Perhaps we see everything through rose-colored glasses, as they say, because before we posted the first blog entry ("Round 1" in retrospect), we psyched ourselves up for a bunch of criticism.  When only fair and reasonable criticism arrived, mixed in with positive feedback and other thoughtful responses as well, we considered ourselves really lucky.

Definitely take your time with the PHP pull requests.  We plan to keep running rounds of this for the foreseeable future.  So whenever you have contributions, we'll merge them in and get them evaluated in the next round.

I'm glad to hear you say that it took a huge amount of time because we often marvel at how much work there is to do for each round, even with the community contributing more and more of that effort with each round.  I think we're slowly chipping away at the total effort for each round, but there is still much room for improvement.  Since we're doing this in free time and between the projects that "pay the bills," at times it can seem like a race against the clock.

As for Apache + mod_php, we did use that in the first round and, in fact, one of the criticisms was that we should be using nginx + FPM.  So we switched over at Round 2 and beyond.

Prior to Round 3, the results were rendered directly within blog entries, so there was limited space.  With limited space, we kept the permutations in check.  Since Round 3, though, we've had the ability to filter the results, so having more permutations is no longer a problem.  If you'd like to re-introduce some Apache + mod_php permutations, we'd welcome it.  An approach I'd suggest is to start with just plain PHP and maybe one of the frameworks on Apache + mod_php.  This would allow a reader to make a reasonable estimate for any framework on Apache + mod_php (by comparing plain PHP on nginx+FPM versus Apache+mod_php, taking their preferred framework's overhead on nginx+FPM, and applying the same overhead ratio to Apache+mod_php; that yields a decent estimate).  Again, considering the results are not the RPS of actual applications but rather a proxy to indicate relative performance, a specific sample for every permutation is not strictly necessary.
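
To make that estimate concrete, here is a worked example with entirely made-up numbers:

    # Hypothetical figures, for illustration of the overhead-ratio estimate only.
    plain_php_nginx_rps  = 40_000   # plain PHP on nginx+FPM
    framework_nginx_rps  = 8_000    # framework X on nginx+FPM
    plain_php_apache_rps = 25_000   # plain PHP on Apache+mod_php

    overhead_ratio = framework_nginx_rps / plain_php_nginx_rps        # 0.2
    estimated_framework_apache_rps = plain_php_apache_rps * overhead_ratio
    print(estimated_framework_apache_rps)  # ~5,000 RPS for framework X on Apache+mod_php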

Eduardo Silva

Dec 30, 2013, 12:48:26 PM
to framework-...@googlegroups.com
Hi all,

My name is Eduardo Silva, from the Duda I/O project, a C web services framework: http://duda.io.

We are *very* interested in being part of Round 9. I would like to know if you have a tool available to reproduce your tests locally; that way we can make sure we are answering each request as expected.

any help is appreciated,

best

Adam Chlipala

Dec 30, 2013, 1:44:30 PM
to Eduardo Silva, framework-...@googlegroups.com
On 12/30/2013 12:48 PM, Eduardo Silva wrote:
> My name is Eduardo Silva, from the Duda I/O project, a C web services
> framework: http://duda.io.
>
> We are *very* interested in being part of Round 9. I would like to
> know if you have a tool available to reproduce your tests locally;
> that way we can make sure we are answering each request as expected.

I managed to get the benchmarking framework running myself recently.
Since I'm new to the effort, too, my perspective may be especially useful!

My answer might be disappointingly close to "RTFM," though. I was able
to start from the README.md here:
https://github.com/TechEmpower/FrameworkBenchmarks
(the GitHub repository that you'd clone to get started)

There's a link to a "Benchmark Tools README file," which has the rest of
what you need to know. That file has a further link to a "Benchmark
Suite Deployment README file," but I found I didn't need to consult it.
Instead, you can just set up standard tools like database servers in the
usual way.

My experience has only been with reproducing tests for one framework in
particular, and I'm sure more complications would arise if you want to
automatically install all of the current frameworks.

Eduardo Silva

Jan 6, 2014, 1:36:34 PM
to framework-...@googlegroups.com, Eduardo Silva

Thanks for the help. Even though I searched initially, I did not catch it. Thanks again!

Brian Hauer

Jan 6, 2014, 2:52:07 PM
to framework-...@googlegroups.com, Eduardo Silva
Hi everyone,

We are going to delay Round 9 by a bit because we have been making progress on long-needed validation tests in the benchmark toolset.  Mike Smith has built out functionality to confirm that each test implementation matches the requirements (at least to the extent that this can be reasonably automated).  During that effort, he has made many tweaks to implementations to bring them into compliance.  There remain several others that will require community contribution to bring them into compliance.  Since the validation tests are strict, many tweaks are small things such as the precise case and punctuation of "Hello, World!" and the keys and values in JSON maps.  Those are easy for us to fix up ourselves.  But in some cases, third-party libraries are being used to prepare responses, and we are not precisely certain how to force compliance in all cases.
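
To give a flavor of that strictness, a check for the JSON test amounts to something like the following.  This is only an illustration of the idea, not the actual validation code, and the URL in the comment is hypothetical.

    # Illustration of a strict JSON-test check: the exact key name, casing,
    # and punctuation of "Hello, World!" all matter.
    import json
    import urllib.request

    def validate_json_test(url):
        with urllib.request.urlopen(url) as resp:
            payload = json.loads(resp.read().decode("utf-8"))
        return payload == {"message": "Hello, World!"}

    # print(validate_json_test("http://localhost:8080/json"))  # hypothetical URL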

The validation tests will make the previews more useful by highlighting failed tests, and ultimately the process should make even the final results per round more sensible for tests that did not complete.  Rather than simply vanishing from view as they do now, failed tests will be rendered at the bottom of charts with a "did not complete" indicator.

We will provide details soon, including a list of tests that still require attention.  Then, to allow some time for implementations to be made compliant, we'll allocate more time than usual to process pull requests.

I'm updating the title of this thread to reflect that Round 9 timing is now TBD.  The validation process may take a week or more.

Thanks for understanding, and thanks for the pull requests to-date!  We'll aim to get working on those after we have the validation work stabilized.

Igor Polevoy

Jan 20, 2014, 1:06:53 PM
to framework-...@googlegroups.com, Eduardo Silva
Hi, Brian. 

While preparing ActiveWeb for Round 9, I got side-tracked by other work, and wonder if it is too late for a pull request to make it into Round 9?

Looks like Google Groups is misbehaving. I sent this message twice, but it has not been posted. If it gets posted twice, apologies.

Thanks
Igor 

Brian Hauer

Jan 20, 2014, 1:20:02 PM
to framework-...@googlegroups.com, Eduardo Silva
Hi Igor,

No worries!  It is not too late for Round 9.  If you have changes, please go ahead and submit a PR.  Thanks!

Igor Polevoy

Jan 20, 2014, 5:56:02 PM
to framework-...@googlegroups.com, Eduardo Silva
Awesome, will push tonight!

Igor Polevoy

Jan 21, 2014, 2:23:00 AM
to framework-...@googlegroups.com, Eduardo Silva
Hi, guys. I added ActiveWeb here:

Please let me know if everything is OK, and thanks for your patience!

Igor 

Sébastien Lorion

Jan 21, 2014, 4:54:48 AM
to framework-...@googlegroups.com, Eduardo Silva
Hello,

I just discovered this benchmark a couple of days ago and, out of curiosity, I created a new test suite for ASP.NET that uses the asynchronous features of .NET 4.5 / MVC 4 and also a little-known JSON serializer named Jil (https://github.com/kevin-montrose/Jil).

If it is not too late to include it in round 9, that would be awesome! I did my best to follow the setup by looking at /aspnet, /aspnet-stripped, and /httplistener. If there are any problems and/or changes required, let me know!


Sébastien

ro...@emweb.be

Feb 6, 2014, 10:53:36 AM
to framework-...@googlegroups.com
How's the schedule looking now? I just issued a pull request to have Wt included (https://github.com/TechEmpower/FrameworkBenchmarks/pull/787). Is it still possible to include it in round 9? I've tested it and it should install and validate without issue.

On Tuesday, December 17, 2013 8:56:18 PM UTC+1, Brian Hauer wrote:

Brian Hauer

Feb 6, 2014, 12:36:08 PM
to framework-...@googlegroups.com
Hi Roel,

Thanks for the contribution!

I'll check with Mike over here to see if Wt can be added easily enough.  Right now our focus is setup and validation of the new server environment.  But once we have that stabilized we may be able to go back and process some more pull requests.

To set the right expectations, I would say no guarantees for Round 9, but if we get the time, we'll do so.  In either case, it will be processed before Round 10.

Joel Berger

Feb 15, 2014, 8:42:58 PM
to framework-...@googlegroups.com
I see that as of Feb 6, Round 9 hadn't happened yet. I know I would be pushing it, but a few of us on the Mojolicious IRC channel (including me, a core dev) have taken up the challenge to improve the rather wimpy entry. We have a rather major overhaul in the works (current state here: https://gist.github.com/jberger/9026436) and if it could be included in Round 9 (or else 10) we would be very happy.

Thanks,

Joel Berger

Joel Berger

Feb 17, 2014, 10:19:34 AM
to framework-...@googlegroups.com
I have issued a pull request to overhaul the Mojolicious entry. Of course we would love to see it included in round 9, but if that is not possible then we look forward to round 10.

Greg Saunders

Apr 5, 2014, 12:48:10 PM
to framework-...@googlegroups.com
I may have missed it somewhere, but has Round 9 happened?  If not, is there any update?

Thanks for your hard work!

Brian Hauer

Apr 6, 2014, 8:13:56 PM
to framework-...@googlegroups.com
Hi Greg,

Thanks for the interest and sorry for the lack of updates.  Round 9 has not yet been published.  We believe we have good data from all three environments, so I should be able to post it soon!

Abel Avram

Apr 11, 2014, 4:31:00 AM
to framework-...@googlegroups.com
Dart 1.3 is out with important server-side improvements. They claim they have doubled req/sec for Hello World, JSON, and file responses. Is it possible to include Dart 1.3 in this iteration of the benchmark? I am interested to see whether you confirm or refute Dart's claims.

For details on latest Dart 1.3 see this InfoQ post: http://www.infoq.com/news/2014/04/dart-nodejs

Regards,

Abel

Brian Hauer

Apr 11, 2014, 8:32:33 PM
to framework-...@googlegroups.com
Hi Abel,

We're excited to hear that Dart 1.3 is available.  We have already frozen what will be in Round 9, and I apologize for its delay.  We'll be posting it soon.  The latest EC2 data we collected was determined to once again be missing some results, so we are re-running that presently.

Once we've got Round 9 done, we'll process pull requests, including any that upgrade to Dart 1.3.

Stephen Samuel

Apr 14, 2014, 6:19:18 PM
to framework-...@googlegroups.com
I have a PR that I want to submit, but I have been waiting for Round 10 before submitting it.

How long after Round 9 is published do you think you'll be accepting PRs before closing for Round 10? Just a rough estimate will do (a few weeks, a few months, etc.).

Brian Hauer

Apr 15, 2014, 9:31:26 AM
to framework-...@googlegroups.com
Hi Stephen,

It had been my intent to get into a routine with the rounds, but we have not been able to stick to one.  So it's a bit uncertain, but I expect that we will have about a month of accepting PRs after Round 9 is posted.

Emir Kurtovic

Apr 20, 2014, 2:07:20 PM
to framework-...@googlegroups.com
Is there any way to see Yii included shortly?

Brian Hauer

Apr 24, 2014, 5:57:34 PM
to framework-...@googlegroups.com
Hi Emir,

We've long had a GitHub issue noting that we'd like to have a Yii test added:

https://github.com/TechEmpower/FrameworkBenchmarks/issues/54

However, we have not yet received a pull request implementing a Yii set of tests.  If you are able and willing to contribute, we'd love to get a Yii PR merged in for Round 10.