> On Tuesday, November 10, 2020 at 12:03:30 AM UTC-5 val wrote:
> I'm guessing this is how the h2 client/driver (not the db engine itself) behaves to handle a ResultSet. It will
> fully serialize the contents of the ResultSet to disk first and then serve that. Perhaps this is necessary to
No, it's just the easiest way to handle multiple result sets with good performance, because then the engine doesn't
have to remember any state about the result set.
On 2020/11/10 7:31 am, val wrote:
> Adding *LAZY_QUERY_EXECUTION=1* to the connection URL is closer to what I was looking for.
Yeah, that is currently our best answer for extremely large result sets. Note that it only works for relatively simple
queries, and it will likely perform quite badly if you have more than one connection hitting the same table at once.
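For reference, that kind of setting is appended to the JDBC URL after a semicolon; a minimal sketch, with the database
path just a placeholder:

    jdbc:h2:~/test;LAZY_QUERY_EXECUTION=1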
The alternative is to grab the result set in chunks, using OFFSET/LIMIT.
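On the JDBC side that would look roughly like the sketch below; the table and column names (BIG_TABLE, ID, DATA) and
the URL are made up for illustration, and you need a stable ORDER BY so the chunks don't shift between queries:

    import java.sql.*;

    public class ChunkedRead {
        public static void main(String[] args) throws SQLException {
            final int chunkSize = 10_000;
            try (Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "")) {
                long offset = 0;
                while (true) {
                    int rows = 0;
                    try (PreparedStatement ps = conn.prepareStatement(
                            "SELECT ID, DATA FROM BIG_TABLE ORDER BY ID LIMIT ? OFFSET ?")) {
                        ps.setInt(1, chunkSize);
                        ps.setLong(2, offset);
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                rows++;
                                // process rs.getLong("ID") / rs.getString("DATA") here
                            }
                        }
                    }
                    if (rows < chunkSize) {
                        break; // last (possibly partial) chunk
                    }
                    offset += rows;
                }
            }
        }
    }

Note that OFFSET itself gets slower the further in you go, so for really big tables keyed pagination
(WHERE ID > lastSeenId ... ORDER BY ID LIMIT n) usually scales better; same idea, just a different predicate.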
>
> I wonder where the bottleneck is. What would it take to bring this 20-minute total iteration time down to the 6 minutes
> it takes if you prebuild the ResultSet first?
>
You'd have to run a profiler, either our simple built-in one (see our performance page) or an external one, to see
where the bottleneck actually is.
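If you go with the built-in one, the usage is roughly as below, if I remember the API right (org.h2.util.Profiler ships
in the h2 jar; the sleep is just a stand-in for your real ResultSet iteration):

    import org.h2.util.Profiler;

    public class ProfileIt {
        public static void main(String[] args) throws Exception {
            Profiler prof = new Profiler();
            prof.startCollecting();
            // run the slow ResultSet iteration here instead of sleeping
            Thread.sleep(5_000);
            prof.stopCollecting();
            // prints the hottest stack traces seen while collecting
            System.out.println(prof.getTop(3));
        }
    }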
TBH, H2 is mostly optimised for small, quick queries, not for very large ones.