Hi all,
Does act rely on the capitalization of the table being "Fortune" and
not "fortune"? If so, it may be running into the problem I described
here: https://github.com/TechEmpower/FrameworkBenchmarks/pull/3396#issuecomment-373103181
To summarize, there is a problem with our docker toolset right now
such that the "Fortune" table is not always there. We don't know why
this is happening.
Also, in case you're wondering why none of the recent runs are
completing: there is a separate problem in our toolset related to
logging that's causing it to crash. We have a work-in-progress PR to
improve how logging is done in general, so we're hoping that stops
these crashes.
https://github.com/TechEmpower/FrameworkBenchmarks/pull/3416
-Michael
On Fri, Mar 16, 2018 at 2:21 PM, Gelin Luo <green...@gmail.com> wrote:
> Hi
>
> It looks like most of the Fortune tests on the latest few test runs
> have failed. However, I didn't see anything wrong with the out file, e.g.
> https://tfb-status.techempower.com/unzip/results.2018-03-16-12-34-28-192.zip/actframework-eclipselink-mysql-rythm/out.txt
>
> Does anyone know what's going on with those Fortune tests?
>
> Thanks,
> Green
>
> --
> You received this message because you are subscribed to the Google Groups
> "framework-benchmarks" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to framework-benchmarks+unsub...@googlegroups.com.
Thanks for your comments, everyone.
Pipelining does avoid some of the per-request network overhead that we
originally intended to be part of all the database tests. But what it
avoids is just that: overhead. It uses the network more efficiently
while still doing the essential work. It issues a query over the
network to a database server and receives a result back. We don't
care what the individual packets look like.
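To make the overhead argument concrete, here is a toy latency model (our own illustration with made-up numbers, not a measurement of any framework or of the benchmark toolset):

```python
# Toy latency model (an illustration of the argument above, not the
# actual benchmark toolset). Units are arbitrary, e.g. milliseconds.

def sequential_time(n_queries, rtt, query_time):
    # One round trip per query: send, wait for the reply, repeat.
    return n_queries * (rtt + query_time)

def pipelined_time(n_queries, rtt, query_time):
    # All queries are in flight at once: the round-trip cost is paid
    # once, but the server still does the same essential work per query.
    return rtt + n_queries * query_time

print(sequential_time(20, rtt=5, query_time=1))  # 20 * (5 + 1) = 120
print(pipelined_time(20, rtt=5, query_time=1))   # 5 + 20 * 1 = 25
```

The per-query server work (the essential part) is identical in both cases; only the repeated network wait disappears.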
We like to think that this project encourages web frameworks and
related tools to be faster. Disallowing this form of pipelining would
seem to do the opposite. It's a novel and useful feature that we'd
like to see appear in more database drivers. Hopefully, by allowing
pipelining in our test implementations, we're pushing towards that
eventual outcome. We're willing to accept that, at least for the time
being, our database test results may show a large split between
implementations that use pipelining and those that don't, heavily
favoring pipelining.
// Npgsql: two statements batched into a single command and executed in
// one round trip; use reader.NextResult() to move between result sets.
var cmd = new NpgsqlCommand("SELECT 1; SELECT 2", connection);
var reader = cmd.ExecuteReader();
Thanks for giving us some of those low-level details from the database
driver perspective.
I agree that it would be odd for us to permit a batch statement like
"select * from world where id = ?; select * from world where id = ?;"
only on the condition that it kept executing through failures. That's
not what I meant to imply in my previous message. I would want to
disallow that form of batching regardless of how failures are handled.
That feature seems clearly different to me from the pipelining
implemented in reactive-pg-client (which vertx-postgres uses). I take
it we're having a disconnect there: you don't think those two
features are different, at least not in any way that we should care
about?
As far as I can tell, the reactive Postgres client as used by Vert.x does both multiplexing and pipelining (using your terminology); that is, HTTP requests and database connections are completely decoupled, and it doesn't matter whether an HTTP request results in one query or more. That is why it has the best performance in all database tests except fortunes, in the single query test in particular. However, in the updates test those techniques seem to provide only a slight edge. That's why requiring dependent queries for the multiple queries test would not be sufficient in general.
As for the fortunes test, I suspect that the reason for the difference in behaviour is that the data volume associated with each query is an order of magnitude larger.
For the record, the h2o implementation (which I am the author of) also decouples HTTP requests from database connections completely. Given that one web application worker thread may handle several database connections at the same time (this is the current configuration, in fact), it is quite possible to end up in a situation where the queries from one HTTP request are spread over many connections to the database (in the multiple queries test).
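The decoupling described here can be sketched as handing each query to whichever pooled connection comes up next, e.g. round-robin. This is just a toy illustration of the idea, not the actual h2o code:

```python
from itertools import cycle

# Toy sketch: queries are assigned to connections round-robin from a
# small pool, regardless of which HTTP request they came from.
# (Illustration only; not how any particular implementation is coded.)
def assign(queries, n_connections=3):
    pool = cycle(range(n_connections))
    return {q: next(pool) for q in queries}

# Five queries from a single HTTP request land on three different
# database connections:
print(assign(["q1", "q2", "q3", "q4", "q5"]))
# {'q1': 0, 'q2': 1, 'q3': 2, 'q4': 0, 'q5': 1}
```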
The reactive Postgres client also uses another optimization. As you mentioned, the protocol messages sent for each query are Parse/Bind/Describe/Execute/Sync. Now, I am not that familiar with the PostgreSQL protocol, but the Parse message is used only when creating a new prepared statement, right? By default, prepared statements have the same lifetime as the connection, which means that message is necessary only once per prepared statement (when establishing the connection). In other words, each query can be reduced to Bind/Describe/Execute/Sync. However, I couldn't find any specification that requires the Describe message, so we can reduce that further to just Bind/Execute/Sync, which is what the reactive Postgres client does.
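To make the arithmetic concrete, here is a toy tally of the message counts under the assumptions above (Parse needed once per connection-lifetime prepared statement, Describe skippable). This is just arithmetic on the message lists, not a protocol implementation:

```python
# Extended-protocol messages per query under the assumptions discussed
# above. Illustration only, not a claim about any specific driver.

NAIVE = ["Parse", "Bind", "Describe", "Execute", "Sync"]
MINIMAL = ["Bind", "Execute", "Sync"]  # Parse amortized, Describe skipped

def total_messages(n_queries, per_query, one_time=0):
    # one_time covers messages paid once per connection, e.g. Parse.
    return one_time + n_queries * len(per_query)

print(total_messages(100, NAIVE))                # 5 per query -> 500
print(total_messages(100, MINIMAL, one_time=1))  # 1 Parse + 3 per query -> 301
```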
> Now, I do admit that pipelining and batching aren't exactly identical. Specifically, in a scenario where a program listens
> to requests (of some sort) as they come in and executes database operations for each one, pipelining does have an
> obvious advantage - database requests can be sent (enqueued) whenever needed, where with batching you can pack
> several statements up-front, but then you're blocked and have to wait for the batch to complete before sending any more.

Yes, but with batching you can enqueue several incoming requests and, when the current batch completes, send all requests in the queue as a single batch, and so on. In fact, this is what I alluded to with my example of a clever driver above, and I expect that in the limit (i.e. many incoming requests) batching will behave quite similarly to pipelining.
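The "clever driver" behavior described above - queue statements while a batch is in flight, then flush the whole queue as the next batch - can be sketched as a small discrete-time simulation. This is a hypothetical model, not any real driver:

```python
# Hypothetical batching driver: statements arriving while a batch is
# executing are queued; when the batch completes, the whole queue is
# sent as the next batch. Toy simulation, not a real implementation.

def batch_sizes(arrivals_per_tick, batch_latency=2):
    queued = 0
    busy_until = 0
    batches = []
    for tick, arriving in enumerate(arrivals_per_tick):
        queued += arriving
        if tick >= busy_until and queued:
            batches.append(queued)  # flush everything queued so far
            queued = 0
            busy_until = tick + batch_latency
    return batches

# With one statement arriving per tick and a 2-tick batch latency, the
# driver settles into batches of 2: the queue absorbs the wait, so with
# many incoming requests throughput approaches that of pipelining.
print(batch_sizes([1] * 8))  # [1, 2, 2, 2]
```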
P.S. Concerning the fortunes test, previously I was asked to provide some additional verification for the h2o implementation. I posted a comment in the pull request where the request originated, but there were no replies, so I don't know if anyone noticed it.
> In addition, pipelining allows application code to be simplified by simply sending statements as they come in, rather than having to batch them yourself.

I agree with this 100%, especially if you consider the failure cases in which the database driver will have to replay the queries starting from the middle of the batch, which can get tricky implementation-wise.

This conversation is fascinating, and those of us who are not in the thick of it are learning a lot from you guys! Or at least I am. So thank you!
For what it's worth, the point above played a role in our decision to permit this functionality. The feature seems to be the sort of enhancement to the protocol and drivers that simultaneously improves application-developer ergonomics and performance. We had to debate what precisely was the "spirit" of our test, and we ultimately decided that the spirit was that, from the application developer's point of view, N queries were being executed and there was no need to manually batch them or deal with a batch failure. Obviously, this was a gray area, but the balance was tipped by our feeling that this was precisely the sort of clever improvement we want to see in frameworks and infrastructure software.