I noticed the following strange performance-related behaviour last week: it seems that both inserting data in a transaction and running the first `COUNT(*)` on that data get disproportionately slower as the data gets larger.
Take a look at the timings in this transcript:
As you can see, the second `INSERT` is more than 10x slower than the first, even though it only writes 10x as much data. The second `COUNT` is even worse: it is *343 times slower* than the first `COUNT`, despite scanning a dataset only 10x greater in size! And the example above was a modest one - I've seen the same test take over a minute for that second `COUNT`. What's going on?
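For anyone who wants to reproduce this, here's a minimal sketch of the test as a standalone Go program against a local single-node cluster. The table name `t`, the schema, and the connection string are illustrative stand-ins, not the exact ones from my transcript:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/lib/pq"
)

// run inserts n rows in a transaction and then issues the first COUNT(*)
// over those still-uncommitted rows, timing both statements.
func run(db *sql.DB, n int) {
	tx, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}

	start := time.Now()
	if _, err := tx.Exec(
		"INSERT INTO t (k) SELECT generate_series(1, $1::INT)", n); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("INSERT %d rows took %s\n", n, time.Since(start))

	start = time.Now()
	var count int
	// The first COUNT(*) inside the same transaction is the slow read.
	if err := tx.QueryRow("SELECT count(*) FROM t").Scan(&count); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("COUNT(*) over %d uncommitted rows took %s\n", count, time.Since(start))

	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}
}

func main() {
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec("CREATE TABLE IF NOT EXISTS t (k INT PRIMARY KEY)"); err != nil {
		log.Fatal(err)
	}
	run(db, 100000) // the well-behaved case
	if _, err := db.Exec("TRUNCATE t"); err != nil {
		log.Fatal(err)
	}
	run(db, 1000000) // the disproportionately slow case
}
```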
I did a little bit of investigation into this with profiling. In the second profile, a large share of both the time and the allocations was attributed to `MVCCMetadata.Unmarshal`, reached via several different paths through the intent resolution code - the scan path was calling it directly, but three commands were also responsible: ResolveIntentRange, ResolveIntent, and QueryIntent.
QueryIntent, for example, seems to call `MVCCGet` on a single key, which has to allocate an `MVCCMetadata` (and its `TxnMeta` field) just to throw them away afterwards. In this profile, those allocations appear to account for a huge amount of the time.
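To make concrete the pattern I'm asking about, here's an illustrative Go sketch - emphatically not CockroachDB's actual code - contrasting a per-key path that allocates fresh metadata for every key with a batched path that reuses one destination struct (`unmarshal` here stands in for `MVCCMetadata.Unmarshal`):

```go
package main

import "fmt"

type txnMeta struct{ id uint64 }

type mvccMetadata struct {
	txn *txnMeta
	raw []byte
}

// unmarshal stands in for proto unmarshalling of the stored metadata; the
// point is where the destination struct comes from on each iteration.
func unmarshal(raw []byte, dst *mvccMetadata) {
	if dst.txn == nil {
		dst.txn = &txnMeta{} // another allocation on the hot path
	}
	dst.txn.id = uint64(len(raw))
	dst.raw = raw
}

// oneAtATime mimics per-key QueryIntent/MVCCGet: a fresh mvccMetadata (and
// txnMeta) is allocated for every key and becomes garbage immediately.
func oneAtATime(vals [][]byte) int {
	intents := 0
	for _, v := range vals {
		meta := new(mvccMetadata) // allocated, used once, thrown away
		unmarshal(v, meta)
		if meta.txn != nil {
			intents++
		}
	}
	return intents
}

// batched mimics a ranged path that reuses a single destination struct
// across the whole batch, so the allocator is hit once, not once per key.
func batched(vals [][]byte) int {
	var meta mvccMetadata
	intents := 0
	for _, v := range vals {
		unmarshal(v, &meta)
		if meta.txn != nil {
			intents++
		}
	}
	return intents
}

func main() {
	vals := make([][]byte, 1_000_000)
	for i := range vals {
		vals[i] = []byte("intent")
	}
	fmt.Println(oneAtATime(vals), batched(vals))
}
```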
My understanding of intent resolution is rather weak. What's going on here? Are we essentially resolving 1000000 intents during the scan, and is that why we're slow? If so, why wasn't that happening in the 100000 case? And is there some kind of pathological one-at-a-time behaviour that causes things to flow through `QueryIntent` instead of the equivalent batch method?
Or, finally, is this some well-understood anomaly that occurs only in large transactions and that we don't care about? (If so, I think it will need to be documented - as a user, this behaviour is surprising and concerning.)