Data for Diagnosing "Interaction to Next Paint" Issues (Elements, Actions, Page State, etc.)


Kyle P

Jul 17, 2023, 12:03:04 PM
to web-vitals-feedback
From my review of the INP documentation and announcements, Google Lighthouse/CWV/CrUX will collect field data from real visitors and tell us a site's overall INP timing.

When researching how to diagnose slow INP timings, articles say the following items should be identified to home in on what needs to be addressed and improved (a sketch of collecting them follows the list):
- The elements visitors were interacting with (element selector string to pinpoint the exact element)
- The event type (click, keypress, etc.)
- The loading state of the page (was it during page load, after, etc.)
- The interaction start time
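
For anyone wanting to capture these fields themselves, a minimal sketch using the attribution build of Google's web-vitals JavaScript library could look like the following (field names are from v3 of the library; verify them against the version you install):

    import { onINP } from 'web-vitals/attribution';

    onINP((metric) => {
      const attr = metric.attribution;
      // CSS selector for the element the visitor interacted with.
      console.log('Element:', attr.eventTarget);
      // The event type, e.g. 'click', 'keydown', 'pointerdown'.
      console.log('Event type:', attr.eventType);
      // The page's loading state at the time of the interaction,
      // e.g. 'loading', 'dom-content-loaded', 'complete'.
      console.log('Load state:', attr.loadState);
      // The interaction's start time, relative to the page navigation.
      console.log('Start time:', attr.eventTime);
      // The INP duration itself, in milliseconds.
      console.log('INP:', metric.value);
    });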

As Google rolls out the new INP metric, it would be extremely helpful if Google could provide the above details alongside its INP field data. Webmasters around the world are going to be asked about their websites' INP timings, and they won't know where to start. Instead of guessing at common user flows or having to set up a RUM (real user monitoring) system themselves, could Google share the details it has already gathered about a site? For example, the most common interactions with the slowest INP times?

I see this as similar to the Largest Contentful Paint metric being rolled out, where PageSpeed Insights tells you which specific element it considered the largest contentful paint so you can focus on it and optimize it.

I feel this new metric will be very abstract and vague to most webmasters as to what their next steps are.

Barry Pollard

Jul 17, 2023, 12:36:20 PM
to Kyle P, web-vitals-feedback
Hi Kyle,

Thanks for your feedback. It's important to hear, and even better when it comes with suggestions as well!

You are correct that INP is a tricky metric to optimize for - for a number of reasons.

To answer your main suggestion: as a public dataset, CrUX is limited in what it can report without impacting the privacy and expectations of both our users and the sites they are using. In particular, we limit it to statistics (i.e. numbers, rather than more complex data like selectors). So some of the data you are asking for (e.g. element selectors) will almost certainly never be added to CrUX. We have discussed whether there is more data we could surface to make identification easier at a broader level, but nothing definitive on that yet.
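
To make the "statistics only" point concrete, here is a sketch of querying the public CrUX API for an origin's INP field data; the response contains only aggregate histograms and percentiles, never selectors or per-interaction details ('YOUR_API_KEY' and the origin are placeholders):

    // Fetch an origin's INP distribution from the CrUX API.
    async function queryCruxInp(origin) {
      const resp = await fetch(
        'https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=YOUR_API_KEY',
        {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ origin, metrics: ['interaction_to_next_paint'] }),
        }
      );
      const data = await resp.json();
      // Aggregate-only: good/needs-improvement/poor histogram buckets
      // plus a 75th-percentile value. No element or interaction detail.
      return data.record.metrics.interaction_to_next_paint.percentiles.p75;
    }

    queryCruxInp('https://example.com').then((p75) => console.log('p75 INP:', p75));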

Lighthouse will list audits that impact TBT (Total Blocking Time), which could lead to INP issues if a user tries to interact during this time (we've had some discussions about how to make the connection between INP and TBT more obvious to users, by the way). However, this is only an approximation of potential INP issues, as INP depends on the exact interactions that take place, which are impossible for a generic lab-based tool to guess at. LCP is more obvious for sites, but even then it can be incorrect if it changes for different users or for deep links into the page. So the limitations of lab-based tools were an issue with LCP too (and more so with CLS), but I agree it's even harder to gather specific data from lab-based tools for INP.

In my opinion, INP issues are - at least at this initial stage, when few sites have optimized for it at all - often generic issues on a page rather than specific interactions that are more difficult to pin down. Therefore looking at your TBT, or doing a performance profile of loading the page or of common interactions, is often sufficient to surface issues with the site in general. Hopefully, with a bit of cleanup and better practices regarding JavaScript, INP can be greatly improved for the general case. This applies to frameworks, libraries, and third parties too, by the way, as we hope INP will lead to improvements in those as well - not to mention the improvements browser engineers can make, whether by optimizing browsers or by providing new APIs for developers to use.

Poor INP is often the cumulative impact of lots of JavaScript, or of really complex pages, so highlighting specific interactions before this initial review and clean-up has happened can lead to false positives (those interactions may be the victims of other heavy JavaScript rather than the cause), or to a feeling of chasing your tail. So I suggest a generic review of the site initially, rather than trying to identify the single worst interaction. One common clean-up pattern is sketched below.
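
As one illustration of the kind of generic clean-up I mean, long tasks can be broken up by yielding to the main thread between chunks of work. A sketch (the yieldToMain helper and handleItem are illustrative, not an official API):

    // Yield control back to the browser between chunks of work so it can
    // handle pending input and paint, instead of blocking in one long task.
    function yieldToMain() {
      return new Promise((resolve) => setTimeout(resolve, 0));
    }

    async function processItems(items) {
      for (const item of items) {
        handleItem(item); // hypothetical per-item work defined elsewhere
        // Give the browser a chance to respond to interactions between items.
        await yieldToMain();
      }
    }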

Often it is only once the site is performing well for most interactions that it pays to investigate specific interactions further, and yes, that is best actioned by collecting RUM data (including all the details you listed above) for sites that really want to optimize this metric.
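
For sites at that stage, shipping those per-interaction details to a RUM endpoint might look like the sketch below (the '/rum-endpoint' path is a placeholder, and the attribution field names are from v3 of the web-vitals library):

    import { onINP } from 'web-vitals/attribution';

    onINP((metric) => {
      const body = JSON.stringify({
        value: metric.value,
        element: metric.attribution.eventTarget,
        eventType: metric.attribution.eventType,
        loadState: metric.attribution.loadState,
        startTime: metric.attribution.eventTime,
      });
      // sendBeacon delivers the report even if the page is being unloaded;
      // fall back to a keepalive fetch where sendBeacon is unavailable.
      if (navigator.sendBeacon) {
        navigator.sendBeacon('/rum-endpoint', body);
      } else {
        fetch('/rum-endpoint', { body, method: 'POST', keepalive: true });
      }
    });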

So while I do agree with many of the points you raised, I still feel that INP is actionable for most sites at this time. But we'll continue to provide tooling and guidance as best we can to further help with this.

Once again thanks for your valuable feedback.

Thanks,
Barry 


Kyle P

Jul 18, 2023, 5:26:08 PM
to web-vitals-feedback
Thank you for the great reply, Barry, and for the things to consider.