Hey Brett,
Thank you for taking the time to read and craft a well-thought-out response. I've opened this thread to continue the conversation, as it's a better platform for discussion than somebody's blog. I'll link the post to this thread so others may find it too.
what version of HikariCP was used in the test
HikariCP v2.6.0
Validation

The Tomcat pool was using Connection.isValid() (via the validator class), while HikariCP was configured to use a SQL query (config.setConnectionTestQuery()). If the query is left unset in HikariCP, it will also use Connection.isValid(). If a validator is set on Tomcat, as it is here, any similar test query that is/was also configured on Tomcat would be ignored and preference given to the validator.
Yes, I missed this, I did not realize they were mutually exclusive. From Tomcat's PoolProperties:
setValidatorClassName: Set the name for an optional validator class which will be used in place of test queries
So forcing HikariCP to use test queries while allowing Tomcat to use a validator, as you've alluded to, makes these benchmarks invalid.
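For anyone re-creating the setup, here is a minimal sketch of what an apples-to-apples validation configuration might look like. The JDBC URL and validation query are illustrative, not the values from the benchmark:

```java
import com.zaxxer.hikari.HikariConfig;
import org.apache.tomcat.jdbc.pool.PoolProperties;

// Sketch only: URLs and queries are placeholders.
public class ValidationParity {

    static HikariConfig hikari() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost/test");
        // Deliberately leave connectionTestQuery unset so HikariCP
        // falls back to Connection.isValid(), matching Tomcat below.
        return config;
    }

    static PoolProperties tomcat() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:postgresql://localhost/test");
        p.setTestOnBorrow(true);
        // Set EITHER a validator class OR a validation query, not both:
        // per Tomcat's docs, a validator silently takes precedence and
        // the query is ignored.
        p.setValidationQuery("SELECT 1");
        return p;
    }
}
```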
Ah, I realize how lucky I've been not to need to write applications that change transaction isolation levels, modify autocommit, etc.! Even though the benchmark does not change these properties, the reset behavior really should be enabled by default. The fact that Tomcat "relies on the application to remember how and when these settings have been applied" (source) is unfortunate. When I redo the benchmark I'll be sure to add that interceptor.
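For the record, the interceptor in question can be enabled like so (a sketch; the class name comes from Tomcat's jdbc-pool documentation):

```java
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class ResetState {

    static PoolProperties withConnectionState() {
        PoolProperties p = new PoolProperties();
        // ConnectionState tracks autoCommit, readOnly, transaction
        // isolation, and catalog, resetting them to the pool's
        // configured defaults instead of trusting the application
        // to restore them before returning the connection.
        p.setJdbcInterceptors(
            "org.apache.tomcat.jdbc.pool.interceptor.ConnectionState");
        return p;
    }
}
```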
The JDBC specification also requires that when Connections are closed, any open Statements will also be closed. And the specification states that when a Statement is closed, any associated ResultSets will be closed. So, there is a kind of cascade cleanup that occurs. Failing to close Statements and ResultSets can leave cursors open and locks held, causing potential unexplained deadlocks in the DB.

"Unsafe Case #2" is that Tomcat also does not do this by default. This is solved by configuring the org.apache.tomcat...StatementFinalizer interceptor.
Huh, I did not know this, and the Tomcat docs agree with you. In the HikariCP wiki I see "Track/Close Open Statements" in the feature chart, and Tomcat is listed as "Not Supported". Shouldn't the feature be listed as supported, but not enabled by default, since StatementFinalizer will close open statements "created using createStatement, prepareStatement or prepareCall"?
Anyway, calling out the StatementFinalizer interceptor somewhere in the wiki may be beneficial, as people may erroneously believe that any kind of validator set on Tomcat would close open statements.
By default, Tomcat does not use “disposable facades”, unless setUseDisposableConnectionFacade() is configured
I would find this to be a code smell in any code base, but you're right that this should be enabled by default (and in the benchmark).
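Combining these last two points, the Tomcat side of the re-run might include something like this (a sketch based on the Tomcat jdbc-pool docs; setting the facade flag explicitly just to be safe):

```java
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class SafeTomcatDefaults {

    static PoolProperties safeDefaults() {
        PoolProperties p = new PoolProperties();
        // StatementFinalizer closes Statements the application left
        // open when the connection is returned to the pool.
        p.setJdbcInterceptors(
            "org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer");
        // Disposable facades prevent a connection from being used
        // after it has been close()d and returned to the pool.
        p.setUseDisposableConnectionFacade(true);
        return p;
    }
}
```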
There still appears to be some sort of contention issue in HikariCP, as the benchmarks with the highest 99th-percentile response times were the ones where Jetty had the highest number of threads serving requests (so these requests are blocked waiting for a DB connection to become available). If you'd like, I can try to provide more information, such as jvisualvm output.
it is extremely difficult for the average user to configure Tomcat for compliant/safe behavior
I'm in full agreement. I was on a mission to compare apples to apples and I've overlooked a significant amount!
I'll have the benchmarks re-run by Friday, and shortly after I'll post an update and a follow-up in this issue and on the blog.
From the change log in the v2.6.1 release announcement:
Changes in 2.6.1
* issue 835 fix increased CPU consumption under heavy load caused by excessive spinning in the ConcurrentBag.requite() method.
Notice that both HikariCP and Tomcat showed improvements, but this may be because Jetty (the web server) and JDBI (the SQL abstraction over the connection) were upgraded as well. On average, though, HikariCP does appear to have improved more than Tomcat.
Below are the absolute mean and 99th-percentile response times.
HikariCP tends to have a much larger 99th-percentile response time when contention is present (i.e. many request threads accessing a smaller number of DB pool connections).
It is a basic Law of Computing that given a single CPU resource, executing A and B sequentially will always be faster than executing A and B "simultaneously" through time-slicing. Once the number of threads exceeds the number of CPU cores, you're going slower by adding more threads, not faster.
...
It is not quite as simple as stated above...because threads become blocked on I/O, we can actually get more work done by having a number of connections/threads that is greater than the number of physical computing cores.
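That observation is behind the sizing heuristic on the HikariCP "About Pool Sizing" wiki page; as a rough sketch (the formula is a starting point for tuning, not a law):

```java
public class PoolSizing {

    // Heuristic from the HikariCP "About Pool Sizing" wiki:
    // connections = (core_count * 2) + effective_spindle_count.
    // Blocking I/O lets more threads than cores do useful work,
    // but only up to a point.
    static int suggestedPoolSize(int coreCount, int effectiveSpindleCount) {
        return coreCount * 2 + effectiveSpindleCount;
    }

    public static void main(String[] args) {
        // e.g. a 4-core box with a single disk
        System.out.println(suggestedPoolSize(4, 1)); // prints 9
    }
}
```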
If you think of performance, and of connection pools, you might be tempted into thinking that the pool is the most important part of the performance equation. Not so clearly so. The number of getConnection() operations in comparison to other JDBC operations is small. A large amount of performance gains come in the optimization of the "delegates" that wrap Connection, Statement, etc.
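To illustrate what those delegates do, here is a toy reflection-based version that intercepts close() to return the physical connection to a pool. The class and method names are my own; real pools avoid reflection proxies precisely because they are slow (HikariCP generates bytecode delegates, Tomcat uses facades), which is where the optimization mentioned above happens:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.Queue;

// Toy delegate: close() returns the connection to the pool;
// every other call passes through to the real connection.
final class PoolingHandler implements InvocationHandler {
    private final Connection real;
    private final Queue<Connection> pool;

    PoolingHandler(Connection real, Queue<Connection> pool) {
        this.real = real;
        this.pool = pool;
    }

    @Override
    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        if ("close".equals(m.getName())) {
            pool.offer(real);          // return to pool instead of closing
            return null;
        }
        return m.invoke(real, args);   // delegate everything else
    }

    static Connection wrap(Connection real, Queue<Connection> pool) {
        return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[] { Connection.class },
            new PoolingHandler(real, pool));
    }
}
```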
We're testing HikariCP at the client and have had great initial success - an application loading 1 million records over multiple HTTP threads and putting them in the DB had its run time cut by 70% after moving from Tomcat CP to HikariCP!
Now we are having an issue with a new application. The application is a batch process that launches N threads. Each thread starts a transaction and ETLs some data from a few tables to some other tables. This application slowed down markedly after changing the connection pool to Hikari from Tomcat.
...
This was a bug on our side, involving some unrelated non-thread-safe code. No issue with HikariCP. After fixing the bug, the code runs about 2x faster with HikariCP than with Tomcat CP.
Thanks for the update. It took me a while to digest it and ponder the dynamics. I apologise for the possibly rambling response. I'll say it again: benchmarking is hard -- more of an art than a science.
I don't have any issue with the accuracy of the results, certainly not the new ones, and not particularly even the original ones taken as a whole. They are what they are. I do think the workload is a bit too simplistic to serve as guidance for users trying to select a pool for production applications, i.e. the one-request/one-query workload. Given that the same query is run on each request, the page cache contains the entirety of the result, removing database I/O from the equation and making the optimal PostgreSQL pool size essentially equal to the number of cores.
The conclusion that "Neither HikariCP or Tomcat were the clear winner. While HikariCP had the best performance, it also had the worst performance depending on configuration" also includes the aforementioned pool size of 1, and Jetty thread-pool sizes of 32 and 64, which, on 4-core hardware under a load with no blocking I/O, are likely four to eight times optimal.
The question that I, and future readers of your results, would likely find most interesting if answered, is:
What are the Top 5 Pool/Jetty size combinations for each pool (HikariCP and Tomcat) that provide:
The highest throughput (req/min)
Lowest mean response time
Lowest 99% response time
And for each of the above, what is the delta between the two (HikariCP vs. Tomcat)?
p.s. It would be truly awesome if you could create a github repo with your test harness and associated scripts. I would love to run permutations on our 64-core server, with the additional axis of varying available cores over the test.