LCP docs only mention the mobile "good scores"


Ariel Juodziukynas

Jan 10, 2024, 8:03:30 AM
to web-vitals-feedback
In this document, https://web.dev/articles/lcp?hl=en#what_is_a_good_lcp_score,
the image shows that a value of 2.5s or lower is good, between 2.5s and 4s needs
improvement, and over 4s is poor.

When running Lighthouse for a site in desktop mode, it flags times like 1.3s in
yellow/orange, putting them in the `needs improvement` range, even though they
are in the `good` range according to the docs.

Lighthouse actually seems to use different thresholds when testing in desktop mode
https://github.com/GoogleChrome/lighthouse/blob/main/core/audits/metrics/largest-contentful-paint.js#L37-L64
1.2s or lower is good, 1.2-2.4s needs improvement, and over 2.4s is poor. So a
2.4s value for desktop would be displayed in red while at the same time the docs
say it's good.

According to this other document, Lighthouse made this change to prevent
inflated desktop scores, because the mobile thresholds were not appropriate there:
https://developer.chrome.com/docs/lighthouse/performance/performance-scoring?hl=en#desktop

I think the LCP documentation should be updated to reflect that the thresholds
to consider are actually different depending on the device type.

Barry Pollard

Jan 10, 2024, 8:30:26 AM
to web-vitals-feedback
Yes Lighthouse uses different thresholds for desktop and mobile, which I admit can be confusing.

Lighthouse is a simulated test, where the simulation parameters are known in advance. By default mobile is run on a slower network with CPU throttling. In the real world it's much more complicated. Some of the latest high-end phones are faster than many desktops, and in many cases both use Wi-Fi on the same internet connection. So it's not as clear cut.

Lighthouse is a tool to surface optimisation opportunities, and its metrics and score are intended to show that. To do so, they are calibrated against the metric values from runs across millions of the most popular sites on the internet. That way you can see how much room for improvement there is compared to other sites. Lab-based tests like Lighthouse may not reflect field data and should really be used as a measure of potential opportunities rather than of specific speed.

For field Core Web Vitals, we have different goals and want to set a threshold that is a measure of a realistic, good user experience regardless of device type, network type, tech stack, or the many other factors that can impact performance. So we use the same thresholds for both mobile and desktop.

For this reason, the LCP documentation for the Core Web Vitals initiative (at https://web.dev/articles/lcp?hl=en#what_is_a_good_lcp_score) should not be updated, in my opinion. Though perhaps we should make this clearer in the Lighthouse documentation. Could you raise an issue there: https://github.com/GoogleChrome/lighthouse/issues