Intent to ship: Cache sharing for extremely-pervasive resources


Patrick Meenan

Oct 17, 2025, 3:12:03 PM
to blink-dev
Contact emails
pme...@chromium.org

Specification
N/A

Design docs
https://docs.google.com/document/d/1xaoF9iSOojrlPrHZaKIJMK4iRZKA3AD6pQvbSy4ueUQ/edit?usp=sharing

Summary
For a small number (hundreds) of hand-curated static third-party script, stylesheet and compression-dictionary resources that are used on a large portion of the web, Chrome will store those resources in a single-keyed HTTP cache.

This gives users and site owners faster performance for those very widely used resources while maintaining the privacy protections of the partitioned disk cache. The feature targets resources that most users are likely to see multiple times across multiple sites in a given browsing session. They are usually not in the critical path of page loading and may not move the common performance metrics, but they are still important for the healthy operation of the web.

The list of candidate resources is manually curated from the HTTP Archive dataset and updated on an ongoing basis. This includes site-independent things like common analytics scripts, social media embeds, video player embeds, captcha providers and ads libraries.

Versioned URLs are allowed as long as the versioning is not a manual process performed by embedders and the same version, with the same contents, is served to everyone at a given point in time. This excludes things like common JavaScript libraries that are frequently self-hosted, or whose URLs reference a specific version of the library that site owners select manually.

For example, a manually versioned library URL would not qualify:

No: https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js
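
To make the keying concrete, here is a rough Python sketch of the decision; it is illustrative only, not Chromium's implementation, and the pattern list, helper name and key shapes are assumptions (the second pattern is invented).

```python
import fnmatch

# Hypothetical examples of the kinds of URL patterns the hand-curated list
# might contain; the real list will live in the Chromium repository.
PERVASIVE_PATTERNS = [
    "https://www.google-analytics.com/analytics.js",
    "https://static.example-cdn.com/*-common.js",  # invented pattern, for illustration
]

def cache_key(url, top_frame_site, frame_site):
    """Return the key used to store and look up `url` in the HTTP cache."""
    if any(fnmatch.fnmatch(url, pattern) for pattern in PERVASIVE_PATTERNS):
        # Single-keyed: one copy of the resource shared across all sites
        # for this user.
        return (url,)
    # Default partitioned cache: keyed by top-frame site, frame site and URL,
    # so the same resource is stored separately per embedding context.
    return (top_frame_site, frame_site, url)
```

In this sketch, analytics.js fetched from two unrelated sites maps to the same entry, while every other resource keeps its per-site copies.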

Blink component
Blink>Network

Web Feature ID
No information provided

Risks


Interoperability and Compatibility
This change is internal to Chrome and should be completely transparent to the web platform with no interoperability risks.

Gecko: N/A

WebKit: N/A

Web developers: No signals

Other signals:

Ergonomics
N/A

Activation
N/A

Security
Cache partitioning added a level of privacy protection that is being disabled for a small number of resources where it is deemed safe to do so. The linked document and issue provide the details on the protections that are in place to minimize the privacy exposure.

WebView application risks

Does this intent deprecate or change behavior of existing APIs, such that it has potentially high risk for Android WebView-based applications?

No


Debuggability
N/A

Will this feature be supported on all six Blink platforms (Windows, Mac, Linux, ChromeOS, Android, and Android WebView)?
Yes

Is this feature fully tested by web-platform-tests?
N/A

Tracking bug
https://issues.chromium.org/u/1/issues/404196743

Measurement
The success of this feature will be measured directly with the owners of a small number of targeted scripts via a web-exposed experiment.

Estimated milestones
Shipping on desktop: 144
DevTrial on desktop: 138
Shipping on Android: 144
DevTrial on Android: 138
Shipping on WebView: 144


Link to entry on the Chrome Platform Status
https://chromestatus.com/feature/5202380930678784

This intent message was generated by Chrome Platform Status.

Ben Kelly

Oct 17, 2025, 4:19:17 PM
to Patrick Meenan, blink-dev
Will the list of manually curated scripts be published somewhere?  I did not immediately see something like this skimming the doc or chrome status entry.

Thanks.

Ben


Patrick Meenan

Oct 17, 2025, 4:30:30 PM
to Ben Kelly, blink-dev
Yes. The list will be committed directly to the Chromium repository. The list of candidate resources (before the manual vetting) is in a sheet here.

Ben Kelly

Oct 17, 2025, 4:43:05 PM
to Patrick Meenan, blink-dev
Thank you. Do you expect the list of resources to change over time? Will HTTP Archive data be analyzed for every milestone release?

Patrick Meenan

Oct 17, 2025, 4:44:19 PM
to Ben Kelly, blink-dev
I expect it will change over time but the changes should be pretty minor (as new resources increase or decrease in popularity). Generally we'd want to include resources that have been common for a while and are likely to continue to be common. I'd expect small changes every milestone (or two). There's also the pervasive-cache@chromium.org mailing list that you can ping to give us a heads-up (or a CL to Chromium) if there's something specific you want to draw attention to.

Alex Russell

Oct 17, 2025, 4:58:20 PM
to blink-dev, Patrick Meenan, blink-dev, wande...@meta.com
This is interesting, given all of the problems involved in governance and the way it cuts against platform progress. Will the full set be downloaded before any pre-population is used? What controls will be in place to make sure that this does not exacerbate cross-site tracking via timing? Will these caches be pushed in a versioned way? Who will make the call about how much can be in the set? And are these delivered via component-updater?

Best,

Alex


Patrick Meenan

Oct 17, 2025, 5:15:39 PM
to Alex Russell, blink-dev, wande...@meta.com
Nothing is being pre-populated (other than the list of URL patterns). When a "pervasive" resource is encountered naturally by a given user, it will be stored in a single-keyed cache instead of a site/frame/url-keyed cache (reducing storage duplication for that specific resource for that specific user).

The linked doc has all of the mitigations that are in place, but the big one for explicit tracking is that the shared cache can only be used when full cookie access is also allowed; otherwise it falls through to the fully-partitioned cache.
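
As a rough sketch of that gating, where `is_pervasive` and `cookie_access_allowed` are assumed callables standing in for the pattern-list match and Chromium's cookie-settings check (not real APIs):

```python
SHARED_CACHE, PARTITIONED_CACHE = "shared", "partitioned"

def choose_cache(url, top_frame_site, is_pervasive, cookie_access_allowed):
    # Hypothetical gate: the shared cache is only usable when the embedding
    # context already has full (third-party) cookie access.
    if is_pervasive(url) and cookie_access_allowed(url, top_frame_site):
        return SHARED_CACHE       # single-keyed entry, keyed by URL alone
    return PARTITIONED_CACHE      # falls through to the site/frame/URL-keyed cache
```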


Patrick Meenan

Oct 17, 2025, 7:02:42 PM
to Alex Russell, blink-dev, wande...@meta.com
Assuming we have the privacy and security concerns under control (which we're pretty confident is the case for the extremely small subset of resources we are talking about), the main discussion point is the balancing of incentives and "playing favorites".

As you noted, the performance impact is relatively small, but it's not zero at the scale of some of the resources we are talking about (like https://www.google-analytics.com/analytics.js, which is on 10M of the pages in the 50M-page HTTP Archive dataset). This isn't really a framework discussion, because those kinds of resources aren't a good fit for the restrictions imposed by the privacy and security protections, but there is a discussion to be had about providing network benefits to specific third parties that cross the pervasive threshold and meet the rest of the conditions (for at least part of the library).

The question is whether the marginal performance benefit that can only be realized at that level of scale (where a given user is likely to have the current version of that specific library in cache) would be a meaningful competitive factor for a newcomer (vs features, ease of use, pricing, mindshare, etc.).

That also gets balanced against the user benefit of not having to keep re-downloading the exact same thing and not having to keep dozens of copies of it in cache under different cache keys (taking cache space away from other resources).

There's also a more nuanced benefit that allows for federation of a feature across domains vs centralizing the experience. For example, Shopify and Wix's platform libraries show up in the candidate list as they are used by hundreds of thousands of sites in the CrUX dataset. Having a shared cache for those libraries would put them into a more competitive position against centralized platforms (Amazon, Etsy, Blogger, etc) while still allowing sites to own the overall experience.

I guess that's the long way of saying "it's complicated but we feel pretty strongly that the theoretical risk in this case is more than offset by the platform benefits".

Patrick Meenan

Oct 18, 2025, 8:35:24 AM
to Alex Russell, blink-dev, wande...@meta.com
Sorry, I missed a step in making the candidate resource list public. I have moved it to my chromium account and made it public here.

Not everything in that list meets all of the criteria - it's just the first step in the manual curation (same URL served the same content across > 20k sites in the HTTP Archive dataset).

The manual steps from there for meeting the criteria are basically as follows (a sketch of the mechanical part of the filtering follows the list):

- Cull the list for scripts, stylesheets and compression dictionaries.
- Remove any URLs that use query parameters.
- Exclude any responses that set cookies.
- Identify URLs that are not manually versioned by site embedders (i.e. the embedded resource cannot go stale). These are either in-place-updated resources or automatically versioned resources.
- Only include URLs that can reliably target a single resource by pattern (i.e. ..../<hash>-common.js but not ..../<hash>.js)
- Get confirmation from the resource owner that the given URL Pattern is and will continue to be appropriate for the single-keyed cache
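
As a rough illustration of the mechanical part of that filtering (the field names and the shape of the candidate rows are assumptions about the tooling, not the actual scripts):

```python
from urllib.parse import urlparse

ALLOWED_TYPES = {"script", "stylesheet", "compression-dictionary"}

def passes_mechanical_filters(candidate):
    """`candidate` is a hypothetical dict row from the candidate sheet with
    'url', 'type', 'sets_cookies' and 'manually_versioned' fields."""
    if candidate["type"] not in ALLOWED_TYPES:
        return False              # keep only scripts, stylesheets, dictionaries
    if urlparse(candidate["url"]).query:
        return False              # no query parameters allowed
    if candidate["sets_cookies"]:
        return False              # responses that set cookies are excluded
    if candidate["manually_versioned"]:
        return False              # embedders must not pin versions by hand
    return True                   # pattern precision and owner opt-in remain manual steps
```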


Patrick Meenan

Oct 22, 2025, 9:43:53 AM
to Alex Russell, blink-dev
FYI, I updated Chromestatus with Mozilla's position on the feature (strongly negative). It's not a platform-exposed feature but they do have concerns that they wanted to note in a position issue.

Daniel Bratell

Oct 22, 2025, 11:50:11 AM
to Patrick Meenan, Alex Russell, blink-dev

Thanks for pointing out Mozilla's response. I think they have some fair points.

If we look at the list, it is clear from the domain names that some sites, or entities, will benefit more than others from faster first loads. I mean, something like 30% of the list is from Google entities, from very harmless (IMHO) hosted fonts to analytics, ads and video hosting (YouTube). Some other big sites like Shopify and Wix/parastorage combine for another 15%.

Having your key resources on that list may not matter very much, or it may. The existence of the list is scary and I lack the data to understand what the pros are, both in scale and in area. Is the disk space issue a real problem, for instance, or just a potential concern lacking investigation?

I understand that the published list is not a final, set-in-stone item, so I will not comment on individual items, but what thought has been given to maintaining such a list long term? It seems to me that it might become an area of contention that would be nice to avoid.

In the initial intent, the claim was that this is important for the healthy operation of the web. But is it really?

/Daniel

Patrick Meenan

Oct 22, 2025, 4:53:53 PM
to Daniel Bratell, Alex Russell, blink-dev
The level of importance for the healthy operation of the web is a matter of opinion and debatable, but for everything that is determined to be pervasive and safe to include, it will marginally increase the reliability and/or performance of that item loading. Likely fractions of a percent improvement in things like conversion measurement, successful checkouts or reCAPTCHA challenges succeeding. Unlikely to be noticed in individual samples but meaningful for the web at scale. My personal opinion is that these marginal improvements are important for the overall health of the web but others may disagree (or determine that the benefits are not worth the complexity).

As far as disk space goes, the overall cache size is a concern on lower-end devices but it's not clear how much this will improve any individual user's cache. There is probably a lot that could be done in that space to age-out stale versioned resources and prioritize render-blocking resources but it's another marginal cost that we can recover.

As far as maintaining the list long-term, there has been a fair bit of thought put into the process. The list itself has an expiration to make sure we don't get stuck with stale copies if, for some reason, it stops being maintained (yeah, I'm still sore about reader). There is tooling to automate the building of the candidate list and there are plans to improve the automation around filtering the list for patterns to make it much easier to update. It will also be in the git repo so it's not restricted to the core maintainers to keep it updated (i.e. if a provider has a pattern that is in the candidate list but hasn't been added yet, they can submit a CL). The list of patterns is expected to be pretty stable once the initial build-out has been completed though since the patterns are expected to be long-lived and popularity at that scale doesn't change quickly (the privacy protections require the stability to prevent history sniffing).

Mike Taylor

Oct 22, 2025, 5:31:22 PM
to Patrick Meenan, blink-dev


A few questions on list curation:

Can you clarify how big the list will be? The privacy review at https://chromestatus.com/feature/5202380930678784?gate=5174931459145728 mentions ~500, while the design doc mentions 1000. I see the candidate resource list starts at ~5000, then presumably manual curation begins to get to one of those numbers.

What is the expected list curation/update cadence? Is it actually manual?

Is there any recourse process for owners of resources that don't want to be included? Do we have documentation on what it means to be appropriate for the single-keyed cache?

thanks,
Mike

Patrick Meenan

Oct 22, 2025, 5:48:38 PM
to Mike Taylor, blink-dev
The candidate list goes down to 20k occurrences in order to catch resources that were updated mid-crawl and may have multiple entries with different hashes that add up to 100k+ occurrences. In the candidate list, without any filtering, the 100k cutoff is around 600 entries; I'd estimate that well under 25% of the candidates make it through the filtering for stable pattern, correct resource type and reliable pattern. The first release will likely be 100-200 and I don't expect it will ever grow above 500.

As far as cadence goes, I expect there will be a lot of activity for the next few releases as individual patterns are coordinated with the origin owners but then it will settle down to a much more bursty pattern of updates every few Chrome releases (likely linked with an origin changing their application and adding more/different resources). And yes, it is manual.

As far as the process goes, resource owners need to actively assert that their resource is appropriate for the single-keyed cache and that they would like it included (usually in response to active outreach from us but we have the external-facing list for owner-initiated contact as well).  The design doc has the documentation for what it means to be appropriate (and the doc will be moved to a readme page in the repository next to the actual list so it's not a hard-to-find Google doc):

5. Require resource owner opt-in
For each URL to be included, reach out to the team/company responsible for the resource to validate the URL pattern and get assurances that the pattern will always serve the same content to all sites and not be abused for tracking (by using unique URLs within the pattern mask as a bit-mask for fingerprinting). They will also need to validate that the URLs covered by the pattern will not rely on being able to set cookies over HTTP using a Set-Cookie HTTP response header because they will not be re-applied across cache boundaries (the set-cookie is not cached with the resource).
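
A minimal sketch of that last caveat, assuming a toy dict-based cache and response shape (not the actual cache code): the shared-cache write path never persists Set-Cookie, so the cookie is only applied on the original network fetch.

```python
def store_in_shared_cache(shared_cache, url, response):
    # Drop Set-Cookie before persisting; a later cache hit, possibly from a
    # different site, will therefore never see or re-apply the cookie.
    cached_headers = {
        name: value
        for name, value in response["headers"].items()
        if name.lower() != "set-cookie"
    }
    shared_cache[(url,)] = {"headers": cached_headers, "body": response["body"]}
```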


Mike Taylor

Oct 27, 2025, 9:40:09 AM
to Patrick Meenan, blink-dev

On 10/22/25 5:48 p.m., Patrick Meenan wrote:

The candidate list goes down to 20k occurrences in order to catch resources that were updated mid-crawl and may have multiple entries with different hashes that add up to 100k+ occurrences. In the candidate list, without any filtering, the 100k cutoff is around 600, I'd estimate that well less than 25% of the candidates make it through the filtering for stable pattern, correct resource type and reliable pattern. First release will likely be 100-200 and I don't expect it will ever grow above 500.
Thanks - I see the living document has been updated to mention 500 as a ceiling.

As far as cadence goes, I expect there will be a lot of activity for the next few releases as individual patterns are coordinated with the origin owners but then it will settle down to a much more bursty pattern of updates every few Chrome releases (likely linked with an origin changing their application and adding more/different resources). And yes, it is manual.
As far as the process goes, resource owners need to actively assert that their resource is appropriate for the single-keyed cache and that they would like it included (usually in response to active outreach from us but we have the external-facing list for owner-initiated contact as well).  The design doc has the documentation for what it means to be appropriate (and the doc will be moved to a readme page in the repository next to the actual list so it's not a hard-to-find Google doc):
Will there be any kind of public record of this assertion? What happens if a site starts using query params or sending cookies? Does the person in charge of manual list curation discover that in the next release? Does that require a new release (I don't know if this lives in component updater, or in the binary itself)?

Patrick Meenan

Oct 27, 2025, 10:28:03 AM
to Mike Taylor, blink-dev
I don't believe the security/privacy protections actually rely on the assertions (and it's unlikely those would be public). It's more for awareness and to make sure they don't accidentally break something with their app if they were relying on the responses being partitioned by site.

As far as query params go, the browser code already only matches requests with no query params, so any URLs that do rely on query params won't get included anyway.

The same goes for cookies. Since the feature is only enabled when third-party cookies are enabled, adding cookies to these responses or putting unique content in them won't actually pierce any new boundaries but it goes against the intent of only using it for public/static resources and they'd lose the benefit of the shared cache when it gets updated. Same goes for the fingerprinting risks if the pattern was abused.

Rick Byers

Oct 29, 2025, 2:56:21 PM
to Patrick Meenan, Mike Taylor, blink-dev
If this is enabled only when 3PCs are enabled, then what are the tradeoffs of going through all this complexity and governance vs. just broadly coupling HTTP cache keying behavior to 3PC status in some way? What can a tracker credibly do with a single-keyed HTTP cache that they cannot do with 3PCs? Are there also concerns about accidental cross-site resource sharing which could be mitigated more simply by other means, e.g. by scoping it just to ETag-based caching?

I remember the controversy and some real evidence of harm to users and businesses in 2020 when we partitioned the HTTP cache, but I was convinced that we had to accept that harm in order to credibly achieve 3PCD. At the time I was personally a fan of a proposal like this (even for users without 3PCs) in order to mitigate the harm. But now it seems to me that if we're going to start talking about poking holes in that decision, perhaps we should be doing a larger review of the options in that space with the knowledge that most Chrome users are likely to continue to have 3PCs enabled. WDYT?

Thanks,
   Rick


Erik Anderson

Oct 29, 2025, 5:08:42 PM
to Rick Byers, Patrick Meenan, Mike Taylor, blink-dev

My understanding was that there was believed to be a meaningful security benefit to partitioning the cache: it prevents a party from inferring that you’ve visited some other site by measuring a side effect tied to how quickly a resource loads. That observation could potentially be made even if that specific adversary doesn’t have any of their own content loaded on the other site.

 

Of course, if there is an entity with a resource loaded across both sites with a 3p cookie and they’re willing to share that info/collude, there’s not much benefit. And even when partitioned, if 3p cookies are enabled, there are potentially measurable side effects that differ based on whether the resource request had some specific state in a 3p cookie.

 

Does that incremental security benefit of partitioning the cache justify the performance costs when 3p cookies are still enabled? I’m not sure.

 

Even if partitioning was eliminated, a site could protect itself a bit by specifying Vary: Origin, but that probably doesn’t sufficiently cover iframe scenarios (nor would I expect most sites to hold it right).

Patrick Meenan

Oct 29, 2025, 6:13:21 PM
to Erik Anderson, Rick Byers, Mike Taylor, blink-dev
There are two general threat vectors for privacy/security with "shared state" (cache, connections, etc).

1 - Explicit user tracking across site boundaries, either through unique responses or fingerprinting. This is effectively moot since the tracking can be done directly by cookies when 3PCs are allowed. This proposal isn't aimed at this use case where a cooperating resource can already explicitly track you (and is why the feature is linked to the state of 3PC access). Additional bits of entropy aren't a concern since it is assumed you have direct access to sharing IDs.

2 - XS leaks (history sniffing). This is where I can tell that you bank at BofA because you have the JS loaded that is only used when logged in; I can tell that you have a Gmail account because of similar resource exposure; and I can tell that you likely live in or have visited Groom Lake, Nevada because the map tiles from Google Maps for that location are in your cache.

I think keeping the partitioned cache is still important to solve #2 and the proposal is all about how to find the sweet spot where we can share a cache for the resources that a given user is actually likely to benefit from being shared while limiting the exposure on the information that can be gathered about the browser. It is essentially reduced to "this browser likely visited a page that had ads and may have visited a storefront that was hosted on the shopify platform".

I say "may" because even though the resources are picked to limit the usefulness of the information that can be extrapolated, we still add a bunch of protections to make probing for that state destructive and difficult:
- Any attempt to check a cache entry will write the cache entry (so you can't tell if it was there naturally or from someone else probing).
- Any attempt to probe for multiple versions of a resource that match the same pattern will fall back to using the partitioned cache.
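
Roughly, and only as an illustration (the per-pattern bookkeeping structure and the fall-back signal here are invented, not the actual cache code):

```python
def lookup_pervasive(shared_cache, urls_seen_per_pattern, pattern, url, fetch):
    """Look up `url` (matching `pattern`) with the two anti-probing rules."""
    seen = urls_seen_per_pattern.setdefault(pattern, set())
    seen.add(url)
    if len(seen) > 1:
        # Probing multiple versions under the same pattern: stop using the
        # shared cache for this pattern; the caller falls back to the
        # partitioned cache.
        return None
    entry = shared_cache.get((url,))
    if entry is None:
        # Checking also writes: a later observer cannot tell whether the
        # entry was cached by normal use or created by a probe.
        entry = fetch(url)
        shared_cache[(url,)] = entry
    return entry
```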

I'm happy to explore if you think there are other options to solve for #2 while unlocking more of the shared cache but this is my best attempt at clawing back the real-world benefits of the single-keyed cache while keeping the privacy and security benefits that come with the fully-partitioned cache.

Matt Menke

Oct 30, 2025, 9:09:43 AM
to blink-dev, Erik Anderson, Mike Taylor, blink-dev, Rick Byers, Patrick Meenan
Note that even with Vary: Origin, we still have to load the HTTP request headers from the disk cache to apply the vary header, which leaks timing information, so "Vary: Origin" is not a sufficient security mechanism to prevent that sort of cross-site attack.

Rick Byers

Oct 30, 2025, 12:28:29 PM
to Matt Menke, blink-dev, Erik Anderson, Mike Taylor, Patrick Meenan
Thanks Erik and Patrick, of course that makes sense. Sorry for the naive question. My naive reading of the design doc suggested to me that a lot of the privacy mitigations were about preventing the cross-site tracking risk. Could the design be simplified by removing some of those mitigations? For example, the section about reaching out to the resource owners, to what extent is that really necessary when all we're trying to mitigate is XS leaks? Don't the popularity properties alone mitigate that sufficiently?

What can you share about the magnitude of the performance benefit in practice in your experiments? In particular for LCP, since we know that correlates well with user engagement (and against abandonment) and so presumably user value. 

The concern about not wanting to further advantage more popular sites over less popular ones resonates with me. Part of that argument seems to apply broadly to the idea of any LRU cache (especially one with a reuse bias which I believe ours has?). But perhaps an important distinction here is that the benefits are determined globally vs. on a user-by-user basis? But I think any solution that worked on a user-by-user basis would have the XS leak problem, right? Perhaps it's worth reflecting on our stance on using crowd-sourced data to try to improve the experience for all users while still being fair to sites broadly. In general I think this is something Chromium is much more open to (where it brings significant user benefit) than other engines. For example, our Media Engagement Index system has some similar properties in terms of using aggregate user behaviour to help decide which sites have the power to play audio on page load and which don't. I was personally uncertain at the time if the complexity would prove to be worth the benefit, but now I'm quite convinced it is. Playing audio on load is just something users and developers want in a few cases, but not most cases. I wonder if perhaps cross-site caching is similar?

Rick

Patrick Meenan

Oct 30, 2025, 3:50:55 PM
to Rick Byers, blink-dev, Erik Anderson
Reaching out to site owners was mostly for a sanity check that the resource is not expecting to be partitioned for some reason (even though the payloads are known to be identical). If it helps, we can replace the reach-out step with a requirement that the responses be marked "Cache-Control: public" (and hard-enforce it in the browser by not writing the resource to cache if it isn't). That is an explicit indicator that the resources are cacheable in shared upstream caches.
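
If that hard enforcement were added, the write-path check might look roughly like this (a hypothetical helper, not shipped behavior; header names are assumed to be lowercased already):

```python
def eligible_for_shared_cache(response_headers):
    # Only responses explicitly marked "Cache-Control: public" would be
    # written to the single-keyed cache; everything else stays partitioned.
    cache_control = response_headers.get("cache-control", "")
    directives = {d.strip().lower() for d in cache_control.split(",")}
    return "public" in directives
```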

I removed the 2 items from the design doc that were specifically targeted at direct fingerprinting since that's moot with the 3PC link (as well as the fingerprinting bits from the validation with resource owners).

On the site-preferencing concern, it doesn't actually favor large sites but it does favor currently-popular third-party resources (most of which are provided by large corporations). The benefit is spread across all of the sites they are embedded in (funnily enough, most large sites won't benefit because they don't tend to use third parties).

Determining the common resources at a local level exposes the same XS Leak issues as allowing all resources (i.e. your local map tiles will show up in multiple cache partitions because they all reference your current location but they can be used to identify your location since they are not globally common). Instead of using the HTTP Archive to collect the candidates, we could presumably build a centralized list based on aggregated common resources that are seen across cache partitions by each user but that feels like an awful lot of complexity for a very small number of resulting resources.

On the test results, sorry, I thought I had included the experiment results in the I2S but it looks like I may not have.

The test was specifically just with the patterns for the Google ads scripts because we aren't expecting this feature to impact the vitals for the main page/content since most of the pervasive resources are third-party content that is usually async already and not critical-path. It's possible some video or map embeds might trigger LCP in some cases but that's the exception more than the norm. This is more geared to making those supporting things work better while maintaining the user experience. Ads has the kind of instrumentation that we'd need to be able to get visibility into the success (or failure) of that assumption and to be able to measure small changes.

The results were stat-sig positive but relatively small. The ad iframes displayed their content slightly faster and transmitted fewer bytes for each frame (very low single digit percentages).

The guardrail metrics (including vitals) were all neutral, which is what we were hoping for (improvement without the cost of increased contention).

If you'd feel more comfortable with gathering more data, I wouldn't be opposed to running the full list at 1% to check the guardrail metrics again before fully launching. We won't necessarily expect to see positive movement to justify a launch since the resources are still async but we can validate that assumption with the full list at least (if that is the only remaining concern).
