Hit testing metrics


Tim Dresser

Nov 24, 2015, 8:21:40 AM
to David Tapuska, input-dev
We have a bunch of metrics on how frequently we're hitting the hit test cache, but we don't appear to have any metrics for how long hit tests take, or how often we're doing them.

Adding hit testing information to traces seems like a good place to start - does that sound reasonable?

Do we have any idea how frequently we perform a hit test, or how long the median hit test takes?

Dave Tapuska

Nov 24, 2015, 9:41:35 AM
to Tim Dresser, input-dev
I'm not aware we capture any latency information inside blink itself. All of the latency info I have seen is captured inside chromium. 

Hit testing happens inside blink's event processing. I'd expect roughly one hit test per event. Touch events generate more than one hit test (because of the touch region), and some duplicated events, such as mousedown, click, and mouseup, end up using the hit test cache. There are some layout tests that assert how many hit tests occur for various scenarios.

The painting team definitely wants to replace hit testing entirely with an algorithm based on display lists, so I'm not sure how much we should invest here, but it might be trivial to report.

dave.

Tim Dresser

Nov 24, 2015, 10:13:13 AM
to Dave Tapuska, input-dev
We can trace things from within blink (example).

It sounds like the volume of hit tests is probably low enough that we could dump those slices into the existing input category.
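
For illustration, something along these lines would do it (just a sketch: the signature is abbreviated here and the exact placement is an assumption):

    HitTestResult EventHandler::hitTestResultAtPoint(const LayoutPoint& point,
                                                     HitTestRequest::HitTestRequestType hitType) {
      // Emit one trace slice per hit test under the "input" category so it
      // shows up next to the existing input slices in about:tracing.
      TRACE_EVENT0("input", "EventHandler::hitTestResultAtPoint");
      // ... existing hit testing logic ...
    }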

Do you think the hit testing refactor will be isolated well enough that we could put some tracing in now and have it still apply to the new model? If so, then it might be useful to be able to compare the performance before and after via tracing.

Chris Harrelson

Nov 24, 2015, 11:02:47 AM
to Tim Dresser, Dave Tapuska, input-dev
Paint team here. I think it's definitely valuable to track how many hit tests we perform and how long they take. We may see unexpected results or regressions along the way that are actionable. Also, when the paint team gets around to re-implementing hit testing on top of display lists, this data will be a very useful guide for performance goals to hit.

Chris

Tim Dresser

Nov 24, 2015, 11:55:57 AM
to Chris Harrelson, Dave Tapuska, input-dev
I didn't realize that we already record these in traces.

It sounds like it might be worth adding a UMA stat for this though.
Dave is going to take a look at this. (bug)

Based on a quick investigation, hit testing frequently triggers layout, which could heavily skew our results.
We may want to record just LayoutView::hitTestNoLifecycleUpdate, not all of EventHandler::hitTestResultAtPoint.
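
Something like this would be the minimal version (sketch only: the histogram name is a placeholder and the signature is abbreviated; blink may also need to go through the Platform histogram plumbing rather than the base macro):

    bool LayoutView::hitTestNoLifecycleUpdate(HitTestResult& result) {
      // Placeholder histogram name. This times only the raw hit test walk;
      // any layout forced by EventHandler::hitTestResultAtPoint happens
      // before we get here, so it isn't counted.
      SCOPED_UMA_HISTOGRAM_TIMER("Event.HitTest");
      // ... existing hit test traversal ...
    }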

Chris Harrelson

Nov 24, 2015, 1:28:23 PM
to Tim Dresser, Dave Tapuska, input-dev
On Tue, Nov 24, 2015 at 8:55 AM Tim Dresser <tdre...@google.com> wrote:
> I didn't realize that we already record these in traces.
>
> It sounds like it might be worth adding a UMA stat for this though.
> Dave is going to take a look at this. (bug)
>
> Based on a quick investigation, hit testing frequently triggers layout, which could heavily skew our results.
> We may want to record just LayoutView::hitTestNoLifecycleUpdate, not all of EventHandler::hitTestResultAtPoint.

That seems reasonable to me. You could also record a UMA that covers both the hit test and layout time, so we can distinguish the two.
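
Roughly like this (names and placement are placeholders, not a real patch):

    HitTestResult EventHandler::hitTestResultAtPoint(const LayoutPoint& point,
                                                     HitTestRequest::HitTestRequestType hitType) {
      // Placeholder name: covers the whole call, including any layout or
      // lifecycle update the hit test forces.
      SCOPED_UMA_HISTOGRAM_TIMER("Event.HitTestIncludingLayout");
      // ... existing code that may update layout, then call
      // LayoutView::hitTestNoLifecycleUpdate (which would record the
      // hit-test-only histogram) ...
    }

Comparing the two histograms would show how much of the total is actually layout.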