Intent to Ship: Largest Contentful Paint


Nicolás Peña

Jul 8, 2019, 4:32:33 PM
to blink-dev

Contact emails

n...@chromium.org, ma...@chromium.org, tdre...@chromium.org, yoav...@chromium.org  


Explainer

https://github.com/WICG/largest-contentful-paint/blob/master/README.md


Spec

https://wicg.github.io/largest-contentful-paint/


TAG review: https://github.com/w3ctag/design-reviews/issues/378


Summary (taken from explainer)

Developers today don't have a reliable metric that correlates with their users' visual rendering experience. Existing metrics such as First Paint and First Contentful Paint focus on initial rendering, but don't take into account the importance of the painted content, and may therefore report times at which the user still does not consider the page useful.


Largest Contentful Paint (LCP) aims to be a new page-load metric that:

  • better correlates with user experience than the existing page-load metrics

  • is easy to understand and reason about


At the same time, LCP does not try to be representative of the user's entire rendering journey. That's something that the lower-level Element Timing can help developers accomplish.


Link to “Intent to Implement” blink-dev discussion

https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/WVqgYtyaGJk/q-TGbeExBgAJ


Is this feature supported on all six Blink platforms (Windows, Mac, Linux, Chrome OS, Android, and Android WebView)?

Yes.


Demo link

https://wicg.github.io/largest-contentful-paint/#sec-example


Debuggability

LargestContentfulPaint entries can be obtained via a PerformanceObserver in JavaScript.
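
For illustration, a minimal sketch of such an observer (the entry type name comes from the spec; variable names are illustrative):

    // Minimal sketch: log each LCP candidate as it is reported.
    // The last entry observed is the page's LCP so far.
    const po = new PerformanceObserver((entryList) => {
      for (const entry of entryList.getEntries()) {
        console.log('LCP candidate:', entry.startTime, entry.element);
      }
    });
    // `buffered: true` also delivers candidates that occurred before registration.
    po.observe({type: 'largest-contentful-paint', buffered: true});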


Risks

Interoperability and Compatibility

Compatibility risk is low, as this is a new feature with no existing usage.

Interoperability risk is relatively high: the metric was presented at a WebPerfWG call (video) and was met with skepticism from other vendors as well as some developers. The main points of contention were:

  • The metric uses the content's display dimensions as a proxy for its importance to the user. Some folks disagreed that this is a good proxy.

  • The metric's collection stops once the user scrolls or clicks the page, as those interactions can change what is loaded and when it is painted. Some folks disagreed that this measure reduces more bias than it introduces. (A sketch of the reporting pattern this implies follows this list.)
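
For concreteness, here is a sketch of how a consumer can report a final value given this behavior; the '/analytics' endpoint below is hypothetical:

    // Sketch: keep the latest candidate and report it once the page is
    // hidden, since no new entries arrive after the first scroll or click.
    let lcpEntry;
    new PerformanceObserver((list) => {
      const entries = list.getEntries();
      lcpEntry = entries[entries.length - 1];
    }).observe({type: 'largest-contentful-paint', buffered: true});

    document.addEventListener('visibilitychange', () => {
      if (document.visibilityState === 'hidden' && lcpEntry) {
        // Hypothetical beacon endpoint.
        navigator.sendBeacon('/analytics', JSON.stringify({lcp: lcpEntry.startTime}));
      }
    });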

At the same time, analytics vendors in the room agreed on the necessity of this metric as a complement to Element Timing, which requires developers to annotate their content. We saw a similar sentiment on the mailing list.

Improving visual metrics is something we've been interested in tackling for many years, but we made little progress while discussing it in theory. Therefore, we believe the only way to get it right is to iterate on a concrete solution, gather data, and understand whether the heuristics and tradeoffs we're adopting here are the right ones.

If we see that they can be improved, or if other vendors show interest in shipping under a different set of trade-offs, we'd be willing to modify the metric to accommodate that.


Furthermore, based on telemetry data from a screenshot viewer developed by maxlg@ (unfortunately not shareable externally), we believe LCP is a big improvement over FCP.


Edge: Mixed signals, leaning negative.

Firefox: Mixed signals, leaning negative.

Safari: Opposed

Web / Framework developers: Some negative signals (example, another), but also a lot of positive signals (example, another, another).


Ergonomics

It will be used via PerformanceObserver, perhaps along with the (also new) companion metric Element Timing. This launch will not have a significant impact on Chrome's performance: the computations are already being made for UKM data; we are just exposing some of that data to developers.
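
A sketch of combined usage (the 'element' entry type belongs to Element Timing):

    // Sketch: one observer covering both LCP and Element Timing entries.
    // Note: the `buffered` flag cannot be combined with `entryTypes`.
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        console.log(entry.entryType, entry.startTime);
      }
    }).observe({entryTypes: ['largest-contentful-paint', 'element']});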


Activation

Easy to use. For cross-origin images, developers will need to serve Timing-Allow-Origin headers in order to obtain data on paint times, beyond resource loading times; this is necessary for security reasons. As with other performance APIs, aggregation of data across multiple frames will need to be done manually, if needed.
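
For illustration, a sketch of the renderTime/loadTime distinction (the fallback logic is a suggested pattern, not part of the API):

    // Sketch: for cross-origin images served without a Timing-Allow-Origin
    // header (e.g. `Timing-Allow-Origin: *` on the image response),
    // renderTime is withheld and loadTime is the best available approximation.
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        const paintTime = entry.renderTime || entry.loadTime;
        console.log('LCP time:', paintTime, 'url:', entry.url);
      }
    }).observe({type: 'largest-contentful-paint', buffered: true});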


Is this feature fully tested by web-platform-tests? Link to test suite results from wpt.fyi.

https://cs.chromium.org/chromium/src/third_party/blink/web_tests/external/wpt/largest-contentful-paint/


Entry on the feature dashboard

https://www.chromestatus.com/feature/5666250908762112


nicj...@gmail.com

Jul 15, 2019, 5:03:54 PM
to blink-dev
We (Akamai) are interested in experimenting with LCP for use with Boomerang / mPulse RUM.

We're hoping LCP provides some value above FP/FCP for websites that may not have the time or desire to annotate specific page components with Element Timing.  Having LCP provide an "automatic" metric around the largest paints seems valuable, with a low barrier to entry.

st...@speedcurve.com

Jul 16, 2019, 12:29:03 PM
to blink-dev
SpeedCurve is super excited about seeing LCP results. We will definitely add this to our RUM product. We added Hero Rendering Times to WebPageTest and rolled that out to our users a year or two ago. It’s very similar to LCP in terms of goals. We find that customers love seeing a metric that focuses on the specific content in their pages. (See example below.) As Nicj says, the fact that this can be an “automatic” metric lets all sites participate.

[Image: hero-rendering-times.jpg]

Alex Russell

Jul 18, 2019, 3:30:12 PM
to blink-dev
Per today's OWNERS meeting, I'm happy to see this metric going forward, but I have reservations regarding the interop story. You have my LGTM for an Origin Trial to get more evidence from developers that this metric is trustworthy and meets their needs, but I can't currently support the I2S, particularly given that the TAG review hasn't completed in any meaningful way.

Is the team willing to run an Origin Trial for a release or two?

Regards

Daniel Bratell

Jul 18, 2019, 3:49:40 PM
to blink-dev, Alex Russell
I want to mention some thoughts I have about this, hoping that someone can corroborate or refute them.

As I understand it, there is a problem with web page performance (perceived or absolute) at times, and web developers don't have enough tools to investigate the situation and improve things, because every user's experience is different. Different networks, different hardware, different browsers, different usage patterns.

This would be a tool to add to the toolbox, and considering that the current toolbox feels inadequate, everyone welcomes additions. Still, would this tool in the end be used? Would it be useful? How can we know?

Origin Trials (as Alex just suggested) are one way to go, though I've understood that they might not work so well here, since CDNs, rather than content sites, want to use this. What other way is there to gather feedback? We really want to improve the situation, but how do we know this is a useful step in the right direction?

/Daniel



--
/* Opera Software, Linköping, Sweden: CEST (UTC+2) */

Nicolás Peña

Jul 18, 2019, 5:53:45 PM
to blink-dev, sligh...@google.com

On Thursday, July 18, 2019 at 3:49:40 PM UTC-4, Daniel Bratell wrote:
I want to mention some thoughts I have about this, hoping that someone can corroborate or refute them.

As I understand it, there is a problem with web page performance (perceived or absolute) at times, and web developers don't have enough tools to investigate the situation and improve things, because every user's experience is different. Different networks, different hardware, different browsers, different usage patterns.

This would be a tool to add to the toolbox, and considering that the current toolbox feels inadequate, everyone welcomes additions. Still, would this tool in the end be used? Would it be useful? How can we know?

Origin Trials (as Alex just suggested) are one way to go, though I've understood that they might not work so well here, since CDNs, rather than content sites, want to use this. What other way is there to gather feedback? We really want to improve the situation, but how do we know this is a useful step in the right direction?

/Daniel

Hi Daniel, your comment captures the problem well. We know we need better metrics, but sadly we don't have a good process for evaluating new ones:
  • Doing an Origin Trial could provide some feedback. But generally it's a very small amount of feedback, as seen for example in our Element Timing trial. It's not enough to make an informed decision afterwards. It also does not lessen the interop concern from shipping after the trial: without substantive feedback, other browser vendors won't change their skepticism. Therefore, and especially in this case where the most important users are not appropriate targets of an Origin Trial (analytics providers or other aggregators that do not have full control over the websites they measure), I highly doubt the trial would be of significant value. Maybe having more developer outreach would help with this (and with Origin Trials in general), but developers would need to be incentivized not only to sign up for the trial but also to provide meaningful feedback on it. And that seems like a hard problem.
  • Shipping the API could cause interop concerns in the future. If it ends up not working, or if other browsers request significant API reshapes, the removal / change would cause developer pain. In this case, I think the concerns are lower, for various reasons. This is a performance API, so removing it will almost surely not cause any user-visible changes (unless the developer is really misusing the API somehow). In addition, even though the TAG has not had a chance to take a look at the API shape (2 months after the issue was opened...), it attempts to follow that of Element Timing, which has been reviewed by the TAG and has support from the Web Perf WG. Finally, we do have investigations made by Googlers (sorry - cannot be shared publicly) which show that this metric is an improvement over FCP, both in the lab (we have an LCP viewer developed by maxlg@ - I can go over these and grab some screenshots to share if you'd like) and in the wild, since we have had LCP on UKM for some time.
These considerations make me confident that shipping is the superior option over an Origin Trial. It is not a perfect option, but there is no better one at the moment. Because the alternative is FCP, which is not good at holistically capturing the load experience, we're unlikely to see developers complaining about LCP not being useful. Because its users are generally analytics providers and others who would not be able to participate in Origin Trials, we're unlikely to get any more useful feedback until we ship.
 

Bryan McQuade

Jul 20, 2019, 9:57:59 AM
to blink-dev, sligh...@google.com
I agree with Nicolás's assessment.

Following up on Daniel's questions: "would this tool in the end be used? Would it be useful? How can we know?"

RE: "would it be used", the feedback on thread from Akamai and Speedcurve shows that this metric would be used by analytics providers / aggregators, which reach developers across many sites.

RE: "would it be useful", one of the primary goals for LCP was to address the known shortcomings of First Contentful Paint (FCP). FCP was a big step forward in understanding user-perceived load speed, relative to previous metrics like onload. However, FCP also has limitations, firing before the content of a page is available for many sites. As Nicolás mentions, we have done investigations over the course of developing LCP to ensure it addresses these shortcomings in FCP, based on both real-world and lab data. We haven't published this data externally, but if it would help to address the "would it be useful" question and get support from API owners for shipping LCP, we can look into making that data public. Daniel, other owners, if we publish this data, would this answer the "would it be useful?" question for you?

Daniel Bratell

Jul 22, 2019, 6:37:40 AM
to blink-dev, Bryan McQuade, sligh...@google.com
Thanks for the very informative explanations. It seems to me that the intent-to-ship process doesn't quite fit here. These features (LCP and the Layout Instability API) are in an experimental phase but need to be exposed to a large group of users. There is an obvious risk they will need to change in incompatible ways, or be replaced by something different, and at that point we might have a hard time not breaking things.

The data showing this can be used in a useful way would probably fit very well in the end-user documentation for this feature.

My question about usefulness is more related to end-user uptake. Will they find the answers they need from this or the Layout Instability API, or will they end up unsatisfied? End-user satisfaction is only indirectly a shipping criterion, through the requirement that multiple parties/vendors agree that this is progress. Right now other vendors are neutral to skeptical, probably waiting to see how well it works out in Chromium.

Every code change adds risk, and the shipping process ensures that the risk level doesn't become too high and that it is well understood. Certain factors seem to reduce the risk here:

* Primary users seem to be well-staffed third parties that will be able to adapt to changes.
* Breaking these APIs might still leave a site fully functional for the end user.
* The lack of support from other vendors forces (hopefully!) code writers to have alternative code paths.

If it were possible to ship under a "no warranty, caveat emptor" clause, that might fit better, but we know that users (rightfully) become unhappy when the platform changes.

/Daniel

Philip Jägenstedt

Jul 23, 2019, 10:25:32 AM
to Daniel Bratell, Yoav Weiss, blink-dev, Bryan McQuade, Alex Russell
The upside of this is pretty clear, and the explainer is great, especially the comparison of LCP to other metrics.

Some thoughts on future compat risk. Because of the structure of the API and expected usage, the risk of sites breaking for end users if the API is removed or changed is quite low. If removed, it'd be similar to an event that never fires; if changed, at worst the PerformanceObserver callback might throw and drop some other entry types.
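
To illustrate, a sketch of how consuming code can degrade gracefully (feature detection via supportedEntryTypes):

    // Sketch: feature-detect the entry type; in engines without support
    // the branch is skipped and nothing throws.
    const types = PerformanceObserver.supportedEntryTypes || [];
    if (types.includes('largest-contentful-paint')) {
      new PerformanceObserver((list) => {
        // handle entries
      }).observe({type: 'largest-contentful-paint', buffered: true});
    }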

On interop, we have the usual risk of the behavior being slightly different in other engines, and the answer is the usual one: spec and tests. I've sent https://github.com/tidoust/reffy/pull/154 to allow an idlharness.js test to be added, to catch any very basic mistakes.

I went ahead and filed some issues and small fixes on the spec. https://github.com/WICG/largest-contentful-paint/issues/17 is the most significant one; without it, some guesswork is needed for others to implement this. Nicolás, +Yoav Weiss, do you agree, and if so, do you think it can be addressed before moving to ship this?

I also wonder about "reduces the chance of gaming" mentioned in the explainer. Clipping the size to the viewport is part of this, but isn't it quite easy to add a large transparent image or large invisible text to improve the metric? Can a cat-and-mouse game with this sort of gaming be won, or does it ultimately require reading back the pixels in a way that would have an unacceptable performance penalty?
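
As a hypothetical illustration of what I mean (whether such content would actually count toward LCP is exactly the open question):

    // Hypothetical: inject a huge, effectively invisible text node early,
    // hoping it becomes the largest candidate and "improves" the metric.
    const decoy = document.createElement('p');
    decoy.textContent = 'x';
    decoy.style.cssText = 'font-size: 95vh; color: transparent; margin: 0;';
    document.body.prepend(decoy);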

Nicolás Peña

Jul 23, 2019, 11:01:26 AM
to blink-dev, bra...@opera.com, yoav...@google.com, bmcq...@chromium.org, sligh...@google.com
Hi Philip, thanks for all the feedback on the GitHub repo! Yoav and I have replied on issue 17.

'Reduces the chance of gaming' does not really belong in the spec; it's a copy-paste issue. You're right that the metric can easily be gamed, but 'hard to game' should not be a goal of this metric. I'll remove that text, as it has caused some confusion.

Ilya Grigorik

Jul 23, 2019, 3:05:02 PM
to Nicolás Peña, blink-dev, bra...@opera.com, Yoav Weiss, Bryan McQuade, Alex Russell
Hey folks. 

Philip already captured many of the same points I wanted to highlight, but I'd like to re-emphasize them and add a few layers.

The goal of LCP is to augment existing metrics like FP and FCP, and to provide a proxy for the rendering of the (approximate) main page content. None of these metrics is perfect (see the good writeup by Steve Souders), but they are necessary and very important to provide nonetheless, because most site owners do not have the ability or know-how to manually instrument their pages, yet they still need directional signals to help them gauge potential performance issues. Both Ian Withrow (AppDynamics) and Shuangyang (Alibaba) captured this well on the webperf-wg list:

On Fri, Apr 26, 2019 at 4:58 AM Ian Withrow <ian.w...@appdynamics.com> wrote:
Long-time listener, first-time commenter. I agree with Philip and others, based on my experience with AppDynamics' customer base. Any capability that requires per-app customization is largely targeting the 1% of accounts that have committed to web performance very deeply. This group is also heavily skewed towards some verticals like e-commerce. Most enterprises haven't prioritized custom instrumentation to date.

On Wed, Apr 24, 2019 at 10:51 PM 杨森(双扬) <shuang...@alibaba-inc.com> wrote:
I have built a few RUM products in Alibaba Group & Ant Financial Group; when it comes to enterprise-level performance measuring and data analysis, I can really vouch for Philip that our users seldom understand what is being measured and how to interpret the figures, they just want to know whether it is fast or slow.
So I believe it's important to set up a mindset for those non-professionals to understand how fast the webpage loads, even if it's not entirely accurate, and a dedicated metric for that couldn't be more essential. `load` is well-known but far from accurate for SPAs, FP/FCP are a good start but have their own limits, and FMP kind of alleviates this problem but is not officially supported.

For the motivated and technically capable site owners we also provide Element Timing, which allows them to annotate specific elements on their pages.

Separately from the above, as Philip already pointed out, the interop risk here is low: it's an event emitted by PerformanceObserver that does not affect the page itself. If the developer registers an observer for this event but the UA does not support it, they won't get any events; it won't throw or break their code if we (in the unlikely case) decide to revoke it either. A more likely outcome is that we might identify new opportunities to improve the accuracy of LCP, in which case we can update the spec and tests and roll out an update to the metric, just as we have for FP and FCP previously, without any breakage. The distributions might shift in various dashboards, but that's normal behaviour and wouldn't pose an interop risk.

LCP is an important step towards helping site owners better understand the real-world performance of their content, and there is a strong need and demand for it from the industry. It might not be perfect, but perfection is neither the right expectation (it never will be perfect) nor necessary from the start (we can iterate on it safely).


Philip Jägenstedt

Jul 24, 2019, 8:20:09 AM
to Nicolás Peña, blink-dev, Daniel Bratell, Yoav Weiss, Bryan McQuade, Alex Russell
Thanks Nicolás!

My issue #17 turned out to be invalid; thanks for explaining the relationship between the Element Timing and LCP specs.

I poked around a bit more and stumbled on an issue related to exposing GC, which is shared with Element Timing but may be harder to address for LCP since the candidate element may have no distinguishing features (attributes, children, etc.) at all. Hopefully this can be addressed without reducing the usefulness of the API.


Nicolás Peña

Jul 24, 2019, 5:31:48 PM
to blink-dev, n...@chromium.org, bra...@opera.com, yoav...@google.com, bmcq...@chromium.org, sligh...@google.com
Hi Philip, thanks for poking around and providing great feedback :) I think the GC issue will be resolved soon, as mentioned in the Element Timing thread.

Beng Eu

Jul 25, 2019, 12:22:22 PM
to Nicolás Peña, blink-dev, Bryan McQuade, bra...@opera.com, sko...@chromium.org, sligh...@google.com, yoav...@google.com
Similar to what analytics vendors have expressed, we (the Google AdSpeed team) are super interested in LCP (and the Layout Instability API too) because they are “automatic” metrics. As third parties, we have a few considerations that differ from those of first parties measuring their own sites.

We do RUM for Google Display and Video Ads across millions of unique domains, to track performance improvements and regressions, for thousands of A/B experiments at any given time, and for overall tracking.

We've been looking forward to better indicators of user-perceived speed. FP and FCP don’t capture the UX impact of ads all that well, since the majority of impact tends to be after those points. LCP seems more promising to evaluate ads’ impact to content speed. Layout Instability API is also likely to be valuable, since ads can contribute significantly towards layout instability depending on how they're loaded.

We understand some of the concerns related to these metrics being based on heuristics. For our use cases, as long as these metrics give a directionally useful signal in aggregate across the majority of sites, they would be invaluable when making UX improvements to ads-related scripts, or detecting regressions.

We've participated in Origin Trials but we can only do those for a few origins, which can let us give early feedback on an API but doesn't help us much with evaluating its effectiveness for our purposes.

If shipped, we would be able to evaluate these metrics by performing A/B experiments across all sites with our ad scripts, with changes expected to shift the metrics in known ways, e.g. by injecting/removing latency/jank. We have no problems with adapting quickly to API changes if need be.


Bryan McQuade

Jul 25, 2019, 1:16:43 PM
to blink-dev, n...@chromium.org, bmcq...@chromium.org, bra...@opera.com, sko...@chromium.org, sligh...@google.com, yoav...@google.com
We wanted to share some of the analysis we have done which compares LCP and FCP, based on real-world aggregated data from the Chrome User Experience Report.


The data shows how LCP improves on FCP for measuring time to main content.

We hope this is useful in helping both API owners and RUM providers on this thread see how LCP addresses the well-known shortcomings in the existing FCP metric.

Alex Russell

Jul 25, 2019, 3:33:51 PM
to blink-dev, n...@chromium.org, bmcq...@chromium.org, bra...@opera.com, sko...@chromium.org, sligh...@google.com, yoav...@google.com
TL;DR: Per today's API OWNERS meeting, you have 3 LGTMs (mine, Rick's, and Ojan's).
Drilling in:

First, thanks so much for sharing this, Bryan! The deck provides a lot of the evidence I would have been looking to an OT to provide, unblocking my vote. The goal of API OWNERS review isn't to demand that a specific set of checkboxes be ticked, but rather to balance risk, and the level of detail in this research helps allay my concerns, even though the interop story is somewhat difficult at the moment.

On the slides, I'd like to better understand what slides 8-11 are demonstrating; is it possible to add speaker notes to them to discuss?

Lastly, the discussion of what a "paint" means in terms of compositor-driven animations seems unresolved relative to the spec text. Tim mentioned that carousels are hard; it would be great to get some non-normative text into the spec discussing these issues.

Regards

Rick Byers

Jul 25, 2019, 4:49:46 PM
to Alex Russell, blink-dev, n...@chromium.org, Bryan McQuade, Daniel Bratell, Steve Kobes, Yoav Weiss
Thanks Alex.

For the permanent record, API owners present at the meeting were Chris, Daniel, Alex, Ojan, Yoav and myself, and there was consensus to ship (with Yoav recusing himself as an editor of the spec).

The tradeoffs involved in getting more data from an OT, other vendors, or the TAG were discussed, and my takeaway, at least, was roughly:
  • The extent and importance of the problem is clear from RUM provider feedback
  • The API shape is pretty low risk, as this is a simple extension of the Performance Timeline
  • There are things we could debate around details of the algorithm, but Bryan has shared some solid public evidence for how this improves on FCP
  • It's almost certainly not perfect, but as usual we will primarily rely on new metrics to try to make up for this. This will only be successful as a data-driven process, so we'll depend on real-world data from customers / other implementors to drive it. The team has analyzed a huge volume of UKM data to arrive at this design, so additional data is only likely to come via others.
  • Most customers for this API (RUM analytics providers) cannot reasonably use an origin trial. Specific sites would almost always prefer the Element Timing API.
  • Worst case, if we decide it really provides little value, we're quite confident we could remove the API.
Rick


Steve Souders

Jul 25, 2019, 6:44:31 PM
to bmcq...@google.com, blink-dev, n...@chromium.org, bmcq...@chromium.org, bra...@opera.com, sko...@chromium.org, sligh...@google.com, Yoav Weiss
That deck is really helpful. Thanks Bryan!

First Meaningful Paint seems closer in spirit to LCP. Did you do any analysis comparing FMP to LCP?

Thanks, Steve


Timothy Dresser

Jul 29, 2019, 9:57:30 AM
to Steve Souders, bmcq...@google.com, blink-dev, n...@chromium.org, bmcq...@chromium.org, bra...@opera.com, sko...@chromium.org, sligh...@google.com, Yoav Weiss
We haven't done any rigorous comparison of LCP and FMP, but we have done significant digging into both, and I can provide some color on why we went with LCP over FMP.

First Meaningful Paint is based on heuristics that are coupled to Chrome's implementation, and thus can't easily be specified. These heuristics are hard to explain and hard to understand. We've also found that in ~20% of cases, FMP produces results that don't correlate well with user experience, potentially in hard-to-understand ways.

Conversely, while LCP is definitely not perfect, it's possible to specify, and fairly easy to understand. When it behaves poorly, which is fairly rare, figuring out what's happening is normally not too hard.
