Frontend servers


Harshad RJ

Oct 6, 2013, 1:22:10 PM
to framework-...@googlegroups.com
Hi,

Quick question: Why do some tests have a front-end server while others don't? For example, the Go test doesn't have any front-end server, going by the Round 6 reports.

The main reason a front-end server is used is to serve static content efficiently and securely. It also helps with routing, URL rewriting, and so on. But I don't see how that is useful in the context of a benchmark, or rather why it would matter to some tests only.

thanks,
Harshad

Naoki INADA

Oct 7, 2013, 12:29:27 PM
to framework-...@googlegroups.com
It's for realism.

I only use web servers that are slowloris-safe (i.e., not easily tied up by clients that send requests very slowly) and that support keep-alive to face the client.
For example, Go's web server and Tornado can safely be used as "front" servers.
 

On Monday, October 7, 2013 at 2:22:10 AM UTC+9, Harshad RJ wrote:

Brian Hauer

Oct 7, 2013, 4:27:08 PM
to framework-...@googlegroups.com
Hi Harshad,

Thanks for the question!

Bearing in mind that many of the frameworks' tests are community-contributed, the intent of the project is to simulate the behavior of web applications in reasonable production deployment environments.

Since, in production, static assets are routinely delivered via a CDN, we consider it acceptable for test implementations to avoid the use of a front-end web server if the application platform and its user community are comfortable doing so.  In many cases, the built-in server is also a superb web server in its own right (e.g., Resin or Undertow), and I suspect these see plenty of production deployment without additional front-end servers.

My understanding from the Go community is that they are indeed comfortable (that is, they do not advise against) deploying Go to production without a separate front-end server.
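
For illustration, a bare-bones stand-alone Go server looks something like the sketch below. This is a minimal sketch rather than the benchmark's actual test implementation, and the route and port are made up; the point is that net/http faces clients directly and, once GOMAXPROCS is raised, uses every core without a separate front-end.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"runtime"
)

func main() {
	// Go of this era defaults GOMAXPROCS to 1, so a production
	// deployment explicitly enables all CPU cores.
	runtime.GOMAXPROCS(runtime.NumCPU())

	// net/http speaks HTTP directly to clients; no separate
	// front-end server is required to accept traffic.
	http.HandleFunc("/json", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{"message": "Hello, World!"})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```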

However, we intend to add an SSL test in the future, and doing so will require many tests to use a front-end server since few platforms provide built-in SSL.

Harshad RJ

Oct 8, 2013, 2:32:45 AM
to Brian Hauer, framework-...@googlegroups.com
Thanks Brian for your response.

Good to hear that the SSL test might level the playing field a bit, but still, I think it is not 100% fair to platforms that have SSL support and yet don't strictly require a front-end (FE) server.

Would it be possible to run a test with and without an FE and see roughly how much overhead the FE server adds? If it is insignificant, then this thread can be closed without much debate.

However, if the overhead of an FE server is significant (that is, if it can affect ranking), then I think the requirements should be defined more precisely. Going by how "comfortable" a community is will lead to very subjective tests and, at least in my opinion, would diminish the credibility of the benchmark.

best,
--
Harshad RJ

Brian Hauer

Oct 8, 2013, 10:28:16 AM
to framework-...@googlegroups.com, Brian Hauer
Hi Harshad,

If I may extrapolate a bit from what you have written, I suspect you would be interested in a test permutation that combines Go (for example, among others) with a front-end web server, so that you can compare it to other platforms using the same front-end web server.

Since Round 4, when we added filtering to the results web site, we've become more welcoming of what I call "test permutations," meaning tests of a single framework with various configurations of software components.  To a degree, we've also relaxed the "production readiness" requirement to allow for including pre-release software, although I do intend to eventually add a filter attribute distinguishing "Release" versus "Pre-release" grade software.  For example, Undertow (presently at the top of the JSON charts) is pre-release software, and I'd like it and similar software to be indicated as such.

But although we have been accepting of pre-release software (eventually indicating it as such), each test should be representative of a production deployment of the software in question.  In other words, although Undertow is in Beta, the test we have should represent how you would deploy the Undertow beta to a production environment if you were so daring.  If it does not, we generally turn to the community for help in addressing that problem.

Some communities have embraced the notion of test permutations and have submitted a wide variety, even going beyond what we have attribute flags for, such as various JSON encoders and database drivers.  I think this is a more likely outcome for communities that do not have a definitive best-of-breed selection for all parts of the stack.  I believe they see value in knowing how these various permutations compare to one another, performance-wise.

If you have time to contribute to the project, we would very much welcome a Go+nginx test or any other similar permutation as long as the configuration is representative of a production-grade deployment and the particular combination is not discouraged by the community.  For example, some communities actively discourage using their application servers without a front-end server.

I think your point about SSL illustrates the value in permutations.  Some platforms have built-in SSL support and do not necessarily require a front-end web server.  If we can get enough participation from the community when we implement SSL tests, I would be interested to see those various permutations in order to weigh system complexity versus performance.  For example, is Resin with its built-in JSSE support substantially slower than Resin+Apache+mod_ssl because JSSE is a pure-Java SSL implementation?  I don't presently know the answer.

To summarize and answer your question of whether it is possible to run tests with and without a FE server, the answer is: yes, assuming we have the test implementations.  To be blunt, that comes down to getting GitHub pull requests for the permutations.  :)

On the final matter of subjectivity, I've become comfortable with the fact that a project covering such a wide spectrum of software is inevitably somewhat subjective.  I suspect no one person (certainly not myself) can know the ins and outs of every platform to make an objective judgment on whether a given test is "production class."  We must defer to the subject-matter experts within each community.

I don't know Nimrod, for example, so I can't tell whether the Jester test is production-class.  But I think we are clear that using production-class tests is our goal.  I think, with the help of each framework's community, we have a set of tests that mostly meet that goal.  For Jester, to continue that example, I am fairly confident in its quality because that test was submitted by the framework's author, as several others have been.  Still, there will always be room for improvement.

I hope this addresses your concerns, but I'd be happy to discuss it further.  Although I aim to set the overall disposition of the project, community input and opinion has driven most decisions since the first round.

Tangentially, I personally have fairly strong opinions about the value of comparisons that many would consider unfair.  If you're interested, I've posted more of a rant on that subject on our blog.

Harshad RJ

Oct 9, 2013, 1:02:42 AM
to Brian Hauer, framework-...@googlegroups.com
Brian,

Thanks again for your considered response. It was good to read it and the blog post too.

It also helped me crystallize my thoughts.

The main question in my mind is: what value does a front-end server add to a production setup, and is that value really important for the purpose of this benchmark?

There are two ways to answer these questions:
  • Add a permutation variable for front-end servers. Each test would be written with and without a front-end server. (It may be possible to take this to an extreme: use several different front-end servers, and possibly different configurations for each.)
  • Determine whether a front-end server really matters to this benchmark, and if so, specify its value as an objective criterion. Then ensure that all the tests meet this criterion.

Both are unbiased ways of performing the benchmark. The first approach says, let's try everything. The second says, let's try the bare minimum.

The first approach will lead to an explosion in the number of tests, the time to code the tests, the time to run the tests, and the time to interpret the data. However, as you say in your blog post, if we have infinite time available to us, then the ideal is to test all permutations.

My suggestion is that, in order to save time, the second option could be attempted first. If we can objectively specify the criteria for a production setup, then we can mandate those criteria for all tests. If we can't make the criteria objective, then we can fall back to the first option (unless there is a third way out).

best,

--
Harshad RJ

Harshad RJ

Oct 9, 2013, 1:49:26 AM
to Brian Hauer, framework-...@googlegroups.com
Btw, I am not arguing against test permutations in general. I think it is great to be able to test permutations of different databases, for example.

I am just suggesting a way to minimise the number of permutation variables. If we can determine a priori that a permutation variable isn't significant for the benchmark, we will be able to contain the combinatorial explosion.


Michael Hixson

Oct 9, 2013, 2:32:17 AM
to Harshad RJ, Brian Hauer, framework-...@googlegroups.com
There is a simpler reason for front-end servers in a few of the tests:
multithreading. For example, Dart (at the time that I wrote the
original test, and probably still now) was basically single-threaded.
In order to make use of all the available CPU cores, I had to run one
Dart server for each core and load balance them behind a front-end
server. Dart has a concept of "isolates" which could in theory have
let me use multiple threads within one application, but in practice
(because of the particulars of how isolates work, especially in the
context of HTTP servers) that would have been a poor choice. A Dart
test without a front-end server would be pointless for these
benchmarks. I believe some tests written in other languages are in
the same boat.
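
To make the shape of that deployment concrete, the sketch below shows the front-end role in miniature: a round-robin reverse proxy fanning requests out to per-core backend processes. It is written in Go purely for illustration; the backend addresses are invented, and a real deployment would use nginx or HAProxy rather than a hand-rolled proxy.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Hypothetical addresses: one single-threaded app-server
	// process pinned to each of four CPU cores.
	backends := []string{
		"http://127.0.0.1:8081",
		"http://127.0.0.1:8082",
		"http://127.0.0.1:8083",
		"http://127.0.0.1:8084",
	}
	proxies := make([]*httputil.ReverseProxy, len(backends))
	for i, b := range backends {
		u, err := url.Parse(b)
		if err != nil {
			log.Fatal(err)
		}
		proxies[i] = httputil.NewSingleHostReverseProxy(u)
	}

	var next uint64
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Round-robin: hand each request to the next backend process.
		i := atomic.AddUint64(&next, 1) % uint64(len(proxies))
		proxies[i].ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```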

-Michael

Brian Hauer

Oct 9, 2013, 10:14:29 AM
to framework-...@googlegroups.com, Brian Hauer
Hi Harshad,

I like the way you think!  It's been humbling to have so many thoughtful people such as yourself participate in this project.

I agree that if a (concurrency-capable) application server can provide N requests per second in stand-alone mode, a front-end web server by definition cannot increase that.  After all, we expressly do not permit any reverse-proxy caching in our tests.

With a front-end server, the result will inevitably be N-O, where O is some overhead lost to proxying requests from the web server to the application server.  When considered from this point of view, it seems that a front-end web server could only ever be a hindrance to performance.

Nevertheless, given what we now know of the implementations we have received to date, I fear it's quite difficult, if not impossible, to objectively specify the criteria for a production setup across the full spectrum.  We would receive strong resistance to removing the front-end server on many platforms.

Many application servers are simply not designed to face the Internet, whatever the rationale.  Perhaps they do not provide enough worker threads and expect a front-end to marshal requests to a pool of workers.  Or perhaps they have a very limited ability to parse request or response headers.  I frankly don't have a good grasp on why some application servers are considered unsuitable to face the Internet, but it comes up, and it is often phrased as, "No one would ever deploy [app-server] without [nginx/apache-httpd]."

I believe readers want to see results that represent a realistic high-water mark.  If every application deployed on the platform is going to suffer that O overhead, they want to know it has been factored into the results.

Michael Hixson points out the most important reason some platforms require a front-end web server: thread-level concurrency is not provided by all application servers.  Application servers that rely on process-level concurrency require a front-end web server to use more than one CPU core.  In this case, the application server's N is going to be low since it can only use one CPU core, but the front-end server then provides a concurrency multiplier (let's say C) to make the total requests per second something like C(N-O).
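
To put made-up numbers on that (purely for illustration): if a single-threaded backend serves N = 20,000 requests per second on its one core, proxying costs O = 2,000 of those, and the machine has C = 8 cores, then the stand-alone process tops out near 20,000 requests per second while the proxied eight-process configuration approaches 8 × (20,000 - 2,000) = 144,000.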

I too would like to constrain the permutation variables where possible.  However, a challenge is that the permutation variables that can be constrained on platform A do not necessarily align with platform B.  For example, on A it may make sense that the permutations only cover database and ORM; whereas on B, community opinion may vary sufficiently that the ideal spectrum of permutations would cover front-end server, platform, ORM, and even things like JSON serializer and database driver.  Further confounding matters, B may be canonical on, say, Postgres, meaning that from B's perspective the database permutations offered by A are meaningless.

Constraining permutation variables would certainly make the results web site easier to understand.  :)  But I think the front-end server is out of the bag, so to speak.  At this point, I'd rather aim to provide sufficient tests to allow people to reach useful conclusions about their front-end server selection.

That last point is important enough to clarify.  Although there is an ideal to have full coverage, I think the project can provide good value to a majority of readers without covering all permutations.  In other words, there are diminishing returns after a certain set of particular permutations are covered.  Take for example the desire to re-introduce Tomcat tests for the Servlet-based frameworks.  To constrain the level of effort, I'd prefer to just start with a plain Servlet test on Tomcat.  Combining that result with the results of Servlet and a Servlet-based framework on Resin, I should be able to extrapolate a good guess of that framework's performance on Tomcat.

I'm looking forward to your continued thoughts.

Harshad RJ

Oct 9, 2013, 11:45:42 AM
to Brian Hauer, framework-...@googlegroups.com
Thanks for your responses, Michael and Brian.

I think we are heading towards an impasse. We seem to agree with each other's arguments but differ on how to drive the execution.

A couple of thoughts from my side:
  • I had once hosted a site without a front-end server. It was just a bare servlet container. One of my friends quickly pointed out a way to steal my /etc/passwd file. All they had to do was append ../../../ to a static folder's URL, and with that they could access any file on the system. J2EE containers don't protect against that kind of attack (see the sketch after this list).
  • The other use for a front-end server I found was to support SPDY. Simply add a plugin to Nginx/Apache and you get a SPDY-capable website. There are probably other such plugins for enhancing the HTTP layer. But I think this functionality is orthogonal to the app-server functionality.
  • I know about two large Go deployments. One is by Google itself, for their dl.google.com site. They have blogged about how they wrote a custom server in Go to serve static content in the most efficient way (for their use-case).
    I also heard from an engineer at a database company that they are writing an HTTP front-end for their database server in Go.
    I don't know much more about Go servers, but if the above are typical use-cases, then I can understand why a front-end server is not recommended by the Go community.
  • About Dart-like platforms that are not multi-threaded, I would say the front-end server is a necessary part of their stack in a benchmark. Without it, they are losing rank.
  • For platforms that gain rank without a front-end server, and don't fail any tests, the front-end server is bloatware for the benchmark, in my opinion, although it might be recommended in production use.
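
To make the first point concrete, here is a minimal Go sketch of the traversal and the usual guard against it. The helper names and the /srv/static root are hypothetical, not taken from any framework:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// unsafeResolve joins the request path straight under root.
// filepath.Join cleans the result, but leading ".." elements
// can still climb out of root entirely.
func unsafeResolve(root, reqPath string) string {
	return filepath.Join(root, reqPath)
}

// safeResolve cleans the request path as if it were absolute, so
// ".." can never rise above root, then double-checks the result.
func safeResolve(root, reqPath string) (string, error) {
	p := filepath.Join(root, filepath.Clean("/"+reqPath))
	if p != root && !strings.HasPrefix(p, root+string(filepath.Separator)) {
		return "", fmt.Errorf("path escapes root: %q", reqPath)
	}
	return p, nil
}

func main() {
	attack := "../../../etc/passwd"
	fmt.Println(unsafeResolve("/srv/static", attack)) // /etc/passwd
	fmt.Println(safeResolve("/srv/static", attack))   // /srv/static/etc/passwd <nil>
}
```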
cheers,
--
Harshad RJ

Brian Hauer

Oct 9, 2013, 1:15:28 PM
to framework-...@googlegroups.com, Brian Hauer
Hi Harshad,

I think you're right about the impasse.  I'd be interested to hear more opinions if anyone else is reading this thread.

It sounds as if you agree that front-end servers are required in several scenarios.  The one area of disagreement with the current methodology seems to be about allowing the most-subjective rationale for using a front-end server, that being where front-end servers are used out of tradition or conventional best-practice rather than hard evidence.  I partially agree with you, which is why I would welcome additional permutations that would show the performance overhead that can be avoided if the front-end server is ditched.

On the other hand, given our goal of demonstrating production deployments, I'd prefer the first permutation for any new framework be a good approximation of the most commonplace configuration, whether or not eventually-available hard data could prove the configuration less than ideal.  (Though to be clear, we accept what we receive as pull requests, and we're not very picky there.  It's certainly possible for the only PR we receive for framework X to represent some fringe configuration.)  If the most commonplace configuration includes a front-end server, I don't want to second-guess that community's subject-matter experts.

More thoughts:
  • Directory sanitization of the sort you describe is exactly the kind of thing that may lead some communities to specify the use of a front-end server, even though doing so results in marginally less performance.  However, incidentally, the Java Servlet containers we've included (Resin and, eventually, WildFly) have robust web servers built in.
  • My understanding from the Go community matches the examples you provided: it is at least considered reasonable, perhaps even commonplace, to deploy Go web applications with no front-end server.
  • If anything, your last point is a potential disagreement.  If a platform gains performance without a front-end server, while not failing any tests, I would not necessarily consider the front-end server bloat.  For reasons such as your example of directory sanitization (which is not exercised by our tests), the community may have a best practice of never deploying without a front-end server.  Removing it would not be suitable for a production deployment and would therefore make the resulting data less valuable to readers, because a necessary part of the recommended stack was omitted.

Harshad RJ

Oct 9, 2013, 1:51:14 PM
to Brian Hauer, framework-...@googlegroups.com
I hit upon an idea that could help break the impasse:
  • Add a performance test for a static file.
    Almost all production setups need to serve static files (CSS, JS, images, etc.).
  • Include a penetration test suite.
    There are several open-source penetration test suites available.
The penetration tests will be an objective way to specify the robustness and security of the platform.

They will not only level the playing field; as a side effect, they will also help make platforms more secure.

--
Harshad RJ

Brian Hauer

Oct 9, 2013, 2:38:34 PM
to framework-...@googlegroups.com, Brian Hauer
Hi Harshad,

Great ideas.  I've added those to the Additional Test Types issue at GitHub.