Hi Kyle,
Thanks for your feedback. It's important to hear, and even better when it comes with suggestions!
You are correct that INP is a tricky metric to optimize for - and that's true for a number of reasons.
To answer your main suggestion: as a public dataset, CrUX is limited in what metrics we can report without impacting the privacy or expectations of both our users and the sites they are using. In particular, we limit this to statistics (i.e. numbers, rather than more complex data like selectors). So some of the data you are asking for (e.g. the element selector) will almost certainly never be added to CrUX. We have discussed whether there is any more data we could surface to make identification easier at a broader level, but nothing definitive on that yet.
Lighthouse will list audits that impact TBT (Total Blocking Time), which could lead to INP issues if a user tries to interact during this time (we've had some discussions about how to make the connection between INP and TBT more obvious to users, btw). However, this is only an approximation of potential INP issues, as INP depends on the exact interactions that take place, which really are impossible for a generic lab-based tool to guess at. LCP is more obvious for sites, but even then it can be incorrect if it changes based on different users or deep links into the page. So the limitations of lab-based tools were an issue with LCP too (and more so with CLS), but I agree it's even harder to gather specific data from lab-based tools for INP.
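To illustrate that connection, a Long Tasks observer gives a rough, TBT-style view of the main-thread blocking that makes interactions slow. This is just a minimal sketch (the 50ms threshold mirrors how TBT is defined, and the logging is purely illustrative):

```js
// Sketch: approximate main-thread blocking time from Long Tasks.
// Only the portion of a task beyond 50ms counts as "blocking",
// mirroring how TBT is calculated.
let totalBlockingTime = 0;

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const blocking = Math.max(0, entry.duration - 50);
    totalBlockingTime += blocking;
    if (blocking > 0) {
      console.log(`Long task: ${entry.duration}ms (${blocking}ms blocking)`);
    }
  }
}).observe({ type: 'longtask', buffered: true });
```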
In my opinion, INP issues are - at least in this initial stage, as few sites have optimized for it at all - often generic issues on a page, rather than just specific interactions that are more difficult to pin down. Therefore, looking at your TBT, or doing a performance profile of loading the page or of common interactions, is often sufficient to surface issues with the site in general. Hopefully, with a bit of cleanup and better practices regarding JavaScript (such as breaking up long tasks, sketched below), INP can be greatly improved for the general case. And this applies to frameworks/libraries/third-parties too, btw, as we hope INP will lead to improvements in those as well - not to mention what browser engineers can do for INP, whether by optimizing browsers or by providing new APIs for developers to use.
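For example, one common cleanup is splitting a long task into smaller chunks so the browser can respond to input in between. A minimal sketch, where `processItems` and `processItem` are hypothetical placeholders for your own work:

```js
// Sketch: yield to the main thread between chunks of work so any pending
// user input can be handled. processItem() and items are placeholders.
async function processItems(items) {
  for (const item of items) {
    processItem(item);
    // Yielding via setTimeout lets the browser run event handlers and
    // paint before the next chunk of work starts.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```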
Poor INP is often the cumulative impact of lots of JavaScript, or of really complex pages, so highlighting specific interactions before this initial review and clean-up has happened can often lead to false positives (those interactions may be the victims of other heavy JavaScript, rather than the cause), or to a feeling of chasing your tail. So I suggest a generic review of the site initially, rather than trying to identify the main poor interaction.
Often it's only once the site is performing well for most interactions that it pays to investigate specific interactions further. And yes, for sites that really want to optimize this metric, that is best actioned by collecting RUM data (including all the details you listed above) - see the sketch below.
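For instance, the web-vitals library's attribution build can report which element and event type drove a page's INP. A minimal sketch, assuming a hypothetical `/analytics` endpoint (the attribution field names shown are from recent web-vitals versions and may differ in older ones):

```js
import { onINP } from 'web-vitals/attribution';

// Sketch: report the slowest interaction's details to your own RUM endpoint.
// '/analytics' is a hypothetical endpoint for illustration only.
onINP(({ value, attribution }) => {
  navigator.sendBeacon('/analytics', JSON.stringify({
    inp: value,                            // the INP value in milliseconds
    target: attribution.interactionTarget, // selector of the interacted element
    type: attribution.interactionType,     // e.g. 'pointer' or 'keyboard'
  }));
});
```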
So while I do agree with many of the points you raised, I still feel that INP is actionable for most sites at this time. But we'll continue to provide tooling and guidance as best we can to further help with this.
Once again, thanks for your valuable feedback.
Thanks,
Barry