--
You received this message because you are subscribed to the Google Groups "framework-benchmarks" group.
To unsubscribe from this group and stop receiving emails from it, send an email to framework-benchmarks+unsub...@googlegroups.com.
Visit this group at https://groups.google.com/group/framework-benchmarks.
For more options, visit https://groups.google.com/d/optout.
Round 13 preview data from Azure is available for sanity checks.
https://www.techempower.com/benchmarks/previews/round13/
Thank you for your patience! We hope to address some known issues with the results and will accept fix pull requests for approximately two weeks prior to finalizing the round. If you identify and can correct any issues, we'd appreciate a pull request.
SELECT ... WHERE id IN (...) clause."

Hi Rikard,
I removed both the Multi-query result and Updates result for Revenj because both appeared to be selecting data using a single round-trip to the database server. We have previously done the same in similar circumstances for other test implementations until they were corrected.
The Multi-query and Updates tests are designed to exercise the database connection pool, the database driver, the ORM, and all other aspects of the database pipeline repeatedly (as well as the HTTP request pipeline once per request). The N iterations are intended to be the equivalent of doing the database work of the Single-query test in its entirety N times, but without the overhead of an additional HTTP request.
The use-case this is approximating is an application behavior where you need to read item A and then based on A's value and other logic, you need to read item B and then based on B's value and other logic, you now need to read item C, and so on. Imagine branching code.
The intent has always been to do N round-trips to the database server to fully exercise the connection pool, database driver, ORM (where applicable), and other elements of database connectivity. I have attempted to clarify the requirements further as a result of your feedback. I have rewritten requirement #6 of the Multi-query test as such:
"This test is designed to exercise multiple queries, each requiring a round-trip to the database server, and with each resulting row selected individually. It is not acceptable to use batches. It is not acceptable to execute multiple SELECTs within a single statement. It is not acceptable to retrieve all required rows using a SELECT ... WHERE id IN (...) clause."
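To make the distinction concrete, here is a minimal sketch of the allowed and disallowed query patterns. This is illustrative only: SQLite is used just to have something runnable (the benchmark itself requires an external database server, where each query below would be one network round-trip), and the World table with its id and randomNumber columns mirrors the benchmark schema.

```python
import random
import sqlite3

# Illustration only -- an in-memory SQLite DB standing in for the
# external database server the benchmark actually requires.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE World (id INTEGER PRIMARY KEY, randomNumber INTEGER)")
conn.executemany("INSERT INTO World VALUES (?, ?)",
                 [(i, random.randint(1, 10000)) for i in range(1, 10001)])

ids = [random.randint(1, 10000) for _ in range(20)]

# Acceptable: N separate queries, each selecting a single row
# (one round-trip per query against a real server).
rows = [conn.execute("SELECT id, randomNumber FROM World WHERE id = ?",
                     (i,)).fetchone()
        for i in ids]

# NOT acceptable: all required rows retrieved with one IN (...) query,
# collapsing the N round-trips into one.
placeholders = ",".join("?" * len(ids))
batched = conn.execute(
    f"SELECT id, randomNumber FROM World WHERE id IN ({placeholders})",
    ids).fetchall()
```

Both approaches return the same data, which is exactly why the requirement has to constrain the access pattern rather than the output.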
It sounds as if you are generating multiple ResultSets from a single statement. That is obviously a perfectly sensible thing to do in some use-cases and may be a suitable use-case to use as the basis for a future test type in our project. But it is not in-line with the intent of the existing Multi-query test. If you would like to propose a new test type that executes multiple queries within a single statement, please do so here:
https://github.com/TechEmpower/FrameworkBenchmarks/issues/133
I hope you understand that we do this not to frustrate your efforts but to keep the results accurate and fair. This is analogous to our (current) stance that we do not yet have a test type suitable for SQLite since an embedded database also avoids the principal work of making round-trips to an external service.
Thank you for your understanding!
Can you point me to an instance where you instructed an implementation that did not break the rules, as stated at that time, to change its implementation?
What you are saying is that you want implementations to work hard and will not allow implementations to work smart.
The use-case this is approximating is an application behavior where you need to read item A and then based on A's value and other logic, you need to read item B and then based on B's value and other logic, you now need to read item C, and so on. Imagine branching code.
Frankly, I think you just made up that use case to prove that the Revenj implementation does not abide by it.
But let's say this was your intended use case all along. In that case your current requirements are again lacking; e.g., you need to state that queries must be executed serially. Otherwise, knowledge obtained from a previous result can't be used for the next query.
By quickly browsing a few implementations, it's obvious that they don't abide by this rule. Instead, they start N parallel requests to the database to minimize the total duration of DB interaction. For example:
https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/frameworks/Scala/akka-http/src/main/scala/com/typesafe/akka/http/benchmark/handlers/QueriesHandler.scala#L50
https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/frameworks/Java/undertow/src/main/java/hello/DbSqlHandler.java#L53
https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/frameworks/JavaScript/nodejs/handlers/mongodb-raw.js#L57
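The serial-versus-parallel distinction being raised here can be sketched as follows. Everything is hypothetical: query_db is a stand-in for a single database round-trip, and the deterministic formula inside it exists only so the example runs without a server.

```python
import concurrent.futures

def query_db(world_id):
    # Hypothetical stand-in for one round-trip to the database.
    return {"id": world_id, "randomNumber": world_id * 7 % 10000}

ids = list(range(1, 21))

# Serial: each query may depend on the previous result (the branching
# use-case described in the requirement), so round-trips cannot overlap.
serial_results = []
for i in ids:
    serial_results.append(query_db(i))

# Parallel: all N queries are issued concurrently. This is only possible
# because the queries are independent -- the pattern the linked
# implementations use to shorten total DB interaction time.
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    parallel_results = list(pool.map(query_db, ids))
```

With independent queries the two approaches produce identical results; only the wall-clock shape of the database interaction differs, which is the crux of the objection.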
And on top of that, Revenj can actually execute multiple queries in a single round-trip and use the result of a previous query as input for the next query. Are you saying that if I change the implementation to support such a use case you will allow it (or will you invoke the multiple-round-trips-to-the-database rule again)?
The intent has always been to do N round-trips to the database server to fully exercise the connection pool, database driver, ORM (where applicable), and other elements of database connectivity. I have attempted to clarify the requirements further as a result of your feedback. I have rewritten requirement #6 of the Multi-query test as such:
"This test is designed to exercise multiple queries, each requiring a round-trip to the database server, and with each resulting row selected individually. It is not acceptable to use batches. It is not acceptable to execute multiple SELECTs within a single statement. It is not acceptable to retrieve all required rows using a SELECT ... WHERE id IN (...) clause."
Don't take this the wrong way, but you should try and learn from constructive criticism.
Rules that specify how an implementation should behave are broken rules.
Rules should specify only the resulting output and behavior in specific scenarios (e.g., flushing to disk before returning a response).
But your benchmark, due to its use of randomness and various simplifications, has trouble verifying whether frameworks are behaving correctly.
Since it's unlikely that anything will change at this point, due to the sheer size of the implementations, maybe you should reconsider your stance on letting the community help you with the benchmark.
What frustrates me is that you have a few "broken" rules in the benchmark. But that is irrelevant; this is your benchmark, not a community one, and you are free to make up rules as you see fit. But you should not be surprised when people complain about issues with your benchmark.
Allowing bulk updates on a single table, but not allowing a bulk-reading implementation that supports multiple tables, makes no sense.
In the end, which result would benefit more people looking at them?
Sorry for not having much understanding for your explanations.
We have removed and will continue to remove results from SQLite implementations for a similar reason—namely, such tests are avoiding a principal portion of the expected work. We've also removed Redis implementations since we would prefer to have Redis implementations show up in a future caching-enabled test type. NB: This has been and continues to be a manual process; we're working on automating more of this as time permits.
What you are saying is that you want implementations to work hard and will not allow implementations to work smart.
Yes. This is a benchmark exercise. Working hard is precisely what we want.
Working "smart" is encouraged but—as I am sure you can understand—working too smart is potentially dangerous for a benchmarking project. Being too clever may mean intentionally avoiding the expected workload (e.g., an implementation that doesn't even talk to a database and just returns results that fool our validation tests) or unintentionally avoiding expected work (using an optimization that seems reasonable but we deem invalid for the test type). Previously we have had to enforce very subtle things, such as requiring that the JSON implementations instantiate objects rather than return a serialization of a single static object. Enforcing some of these things may not even measurably affect the results, but we try to do so as we can to keep the results fair.
That said, I will admit that we're only so good at noticing every clever thing that may or may not be violating the spirit of the tests. In large part we count on the generosity of the community to help us keep an eye on test implementations that may be playing a bit fast and loose with the rules.
Here is the bottom-line: the multi-query test type has always been intended to require N round-trips to the database server as an external system. Implementations that use SQLite are avoiding those round-trips to an external system. An implementation that runs the N queries as a batch avoids making several round-trips. An implementation that runs the N queries in a single Statement with multiple ResultSets avoids making several round-trips. None of these are acceptable.
The legwork of communicating with an external database is in large part what we are concerned with in this test type. If you can remove that legwork in a real-world application, that's great. But we're measuring a scenario where you are required to make N round-trips to an external system. That's just what the test is.
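One way to see what the test is measuring is to count round-trips explicitly. The sketch below uses entirely hypothetical names (CountingConnection is not part of any real driver) to show why a single multi-SELECT statement sidesteps most of the work the test intends to exercise:

```python
class CountingConnection:
    """Hypothetical connection that counts round-trips to the server."""

    def __init__(self):
        self.round_trips = 0

    def execute(self, sql):
        # Each call to the server is one round-trip, regardless of how
        # many statements or result sets it carries.
        self.round_trips += 1
        return sql

n = 20

# Required by the Multi-query test: N separate statements, N round-trips,
# so the connection pool, driver, and ORM are exercised N times.
conn = CountingConnection()
for i in range(n):
    conn.execute(f"SELECT * FROM World WHERE id = {i + 1}")
per_query_trips = conn.round_trips

# Disallowed: one statement carrying N SELECTs (multiple result sets).
# A single round-trip, so the measured work happens only once.
conn2 = CountingConnection()
conn2.execute(";".join(f"SELECT * FROM World WHERE id = {i + 1}"
                       for i in range(n)))
single_statement_trips = conn2.round_trips
```

The data retrieved is identical either way; the difference is that the second version performs roughly 1/N of the connectivity work the test type is designed to measure.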
Don't take this the wrong way, but you should try and learn from constructive criticism.
Rules that specify how an implementation should behave are broken rules.
You may feel that way, but I feel otherwise. These tests are required to use a database server—they are not permitted to generate responses that appear convincing to our validation tests without hitting a database. The Fortunes test is always returning the same payload but it is required to repeatedly query the database for those fortune cookie messages that never change. The JSON serialization test must incur the cost of instantiating an object or allocating memory. And so on. These are implementation details that we consider fundamental to the benchmarking exercise at hand.
Yes, a framework may have clever features that in some use-cases would avoid similar workload in real-world applications. We simply do not have the necessary test type diversity (yet?) to demonstrate all of those cases.
But you are right, there is a large volume of code in the existing implementations that the community has contributed. We prefer to not change test type requirements unless there is a necessary clarification. Changing the Multi-query test to allow batching would change the intention of the test and render many/most implementations out of date. We are averse to changes that will render many implementations obsolete. Doing so would not just be a burden on us, but more importantly, a burden on the community contributors.
What frustrates me is that you have a few "broken" rules in the benchmark. But that is irrelevant; this is your benchmark, not a community one, and you are free to make up rules as you see fit. But you should not be surprised when people complain about issues with your benchmark.
Complaining about benchmarks is commonplace and this one is no exception. And you are right, we continue to have final authority on this particular benchmark project. But I feel we have been open and engaging with the community. The community contributions to this project are extensive. My feeling is that saying this is not a community project is not fair to the community.
It is impossible to have community consensus on everything all the time. Software development is an opinionated universe.
Sometimes consensus is more or less clear. We recently changed the implementation approach classification of the Rapidoid implementation to Stripped based on feedback from the community. Other times, it's not as clear and we have to make an executive decision. We try to communicate the rationale for those decisions and understand that not everyone will agree.
Allowing bulk updates on a single table, but not allowing a bulk-reading implementation that supports multiple tables, makes no sense.
Your desire to bulk read prompted a debate about whether to allow bulk reading in the Updates test. I was on the fence about this, but we ultimately landed where we did in deference to leaving implementations as-is. It is also perhaps worthwhile to know that the Updates test was originally derived from the Multi-query test. While we decided to allow batch updates, we anticipated that implementations would leverage the existing implementation of the Multi-query test for the reads portion and then add writes.
In the end, which result would benefit more people looking at them?
I feel the most benefit would be achieved by a greater diversity of test types. It remains a goal of ours to diversify test types.
Sorry for not having much understanding for your explanations.
No worries. I don't know if we will end up agreeing here, but I do hope that you at least recognize our perspective. I also invite anyone else in the community to join the conversation.
Hi Brian,

Could you give us any update on whether this was the final preview or not? This would be very valuable to know.

Kind regards,
Fredrik
Hi Nick,
The database schema is initialized with https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/config/create.sql.
Which framework are you working on?
-Shawn
On Saturday, November 12, 2016 at 8:41:34 AM UTC-8, Nick Kasvosve wrote:
Nick

Thanks. I see this in my logs:

com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'world0_.random_number' in 'field list'

Which is odd, because the framework builds and works just fine locally and on Travis. Would somebody kindly post the database schema for the most recent Preview run, please?
On Fri, Nov 11, 2016 at 5:21 PM, Brian Hauer <teona...@gmail.com> wrote:
We have posted another preview of Round 13. This one is from our new Azure environment. Note that we are aware that a large number of the MongoDB tests failed in this run and are investigating.
https://www.techempower.com/benchmarks/previews/round13/azure.html
Logs for this preview:
http://tfb-logs.techempower.com/round-13/preview-3/
We may have one more preview, but this also may be the final preview for this round.