Distinction between mobile and desktop benchmarks

Jon L

Sep 10, 2024, 1:25:28 PM
to web-vitals-feedback
I'm curious why there is no distinction between Core Web Vitals Assessment benchmarks for desktop and mobile.

I understand that mobile scores are often lower due to variations in network conditions, hardware capabilities, and user behaviour.

Thanks,
Jon

Barry Pollard

Sep 10, 2024, 1:44:05 PM
to Jon L, web-vitals-feedback
I presume by this you mean: why do we use the same thresholds (e.g. 2.5 seconds for a good LCP) for both mobile and desktop, instead of having more lax ones on mobile (or tighter ones on desktop)?

The Core Web Vitals initiative is primarily focused on measuring user experience rather than technical measures. This is why we worked on user-centric performance metrics rather than the more traditional technical measures used in the past (TTFB, DOMContentLoaded, onLoad).

A good user experience is the same regardless of the barriers to achieving it. We don't think it's right to say that a good user experience on mobile is 2.5 seconds but on desktop it should be 1.5 seconds. Yes, there is an element of expectation, but we don't believe they are that different.
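
For reference, here is a minimal sketch of what checking a page against that single set of thresholds looks like, assuming the open-source web-vitals JavaScript library (its onLCP callback reports the value in milliseconds along with a rating derived from the same 2.5 s / 4 s breakpoints, whatever the device type):

// Minimal sketch using the web-vitals library (v3+).
// The rating is computed from the same thresholds on mobile and desktop:
// <= 2500 ms is "good", <= 4000 ms "needs improvement", otherwise "poor".
import { onLCP } from 'web-vitals';

onLCP((metric) => {
  const seconds = (metric.value / 1000).toFixed(2);
  console.log(`LCP: ${seconds}s (${metric.rating})`);
});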

That said, we do take "achievability" into account (you can read more about that in this article on how we set the thresholds), and yes, for some of the metrics mobile devices do struggle to meet them (though perhaps less than you might think, as there are lots of other factors in play here too). Many mobile devices are more powerful than the desktops/laptops people have. A lot of mobile internet browsing happens over the same high-speed WiFi networks as desktop. Mobile versions of sites are often smaller (though often not by much, and sometimes, seemingly paradoxically, mobiles download larger images due to their higher screen densities). Still, on aggregate, and particularly with a more global view, mobile devices ARE more constrained, and in those cases the thresholds are based more on mobile achievability than on desktop. But for the reasons above we keep the desktop thresholds the same.

Interestingly enough, Lighthouse lab tests (as used on PageSpeed Insights, for example) do use tougher thresholds for desktop (the mobile thresholds are the same as the Core Web Vitals ones). But that is more because the lab conditions are preset, with the emulated mobile conditions always slower than the desktop ones, so there is little value in using the same thresholds for both.
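
(If you want to see that difference for yourself, a rough sketch against the PageSpeed Insights v5 API is below. The response field names are assumptions from memory rather than something I've just verified, so double-check them before relying on this.)

// Rough sketch: query PageSpeed Insights for both strategies and compare
// the lab LCP value with the Lighthouse score it receives. The desktop
// preset applies stricter scoring than the mobile one.
// Assumed response field: lighthouseResult.audits['largest-contentful-paint'].
async function compareStrategies(url: string) {
  for (const strategy of ['mobile', 'desktop'] as const) {
    const endpoint =
      'https://www.googleapis.com/pagespeedonline/v5/runPagespeed' +
      `?url=${encodeURIComponent(url)}&strategy=${strategy}`;
    const result = await (await fetch(endpoint)).json();
    const lcp = result.lighthouseResult?.audits?.['largest-contentful-paint'];
    console.log(`${strategy}: LCP ${lcp?.numericValue} ms, score ${lcp?.score}`);
  }
}

compareStrategies('https://web.dev/');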

There have also been questions about whether we should use different thresholds per country or geographical region, but again, the measure of a "good" user experience is the same.

And finally, there is the case for simplicity. Having one clear set of thresholds helps clarify focus, rather than a more complex, more segmented set of thresholds.

Hope that answers your question (and that it was the question you were actually asking, and that I didn't just drone on for ages about the wrong one!)

Barry
