On 2013-07-12 15:53, Lukas Eder wrote:
>
> OK, I'm going to be a bit sarcastic in this mail. I hope you're not
> offended.
>
Not at all.
> Now, if I specify OFFSET 600000 and H2 will evaluate the projection
> for all the 599999 records that are not of interest, then OFFSET might
> be a bit slow, right? Consider this code from Select.queryFlat().
> 600000x the following piece of code
>
You are correct, that it is how we currently perform that query.
>
> That's 600000 arrays put into the result arraylist, each with the
> length of the number of columns in the queried table. I don't know if
> this accounts for 10% or 90% of the OP's reported 20 seconds. But
> 600000 arrays with 17 columns (from the OP's table) is a lot of wasted
> memory when skipping to OFFSET 600000.
You say waste, I say nice simple code.
Now, I'm not saying you're wrong, and maybe we can improve that.
But I'd like to see a real-world case first, and I'd also like to see
some profiling that points to that code as a problem.
Because at the moment our performance is pretty darn good, and a large
chunk of that is precisely because we don't go chasing benchmarks and
fixing problems that are only theoretical.
>
> And this happens for non-silly queries, too. In fact, this happens for
> every OFFSET clause unless I made a mistake? OFFSET 1000 is a more
> real-world use-case, agreed. And as a side-effect of this issue, there
> was also the wrong (or unexpected) FOR UPDATE behaviour.
Yeah, the side effect is that we lock a few more rows than we should.
But that's not a correctness issue, it's just a performance issue.
And this is my last word on this topic for today, further queries will
be > /dev/null