Does this intent deprecate or change behavior of existing APIs, such that it has potentially high risk for Android WebView-based applications?
No

| Shipping on desktop | 144 |
| DevTrial on desktop | 138 |
| Shipping on Android | 144 |
| DevTrial on Android | 138 |
| Shipping on WebView | 144 |
I expect it will change over time but the changes should be pretty minor (as new resources increase or decrease popularity). Generally we'd want to include resources that have been common for a while and are likely to continue to be common. I'd expect small changes every milestone (or two). There's also the pervasive-cache@chromium.org mailing list that you can ping to give us a heads-up (or a CL to Chromium) if there's something specific you want to draw attention to.
This is interesting, given all of the problems involved in governance and the way it cuts against platform progress. Will the full set be downloaded before any pre-population is used? What controls will be in place to make sure that this does not exacerbate cross-site tracking via timing? Will these caches be pushed in a versioned way? Who will make the call about how much can be in the set? And are these delivered via component-updater?

Best,
Alex
Thanks for pointing out Mozilla's response. I think they have some fair points.
If we look at the list, it is clear from the domain names that some sites, or entities, will benefit more than others from faster first loads. Something like 30% of the list is from Google entities, ranging from the very harmless (IMHO) hosted fonts to analytics, ads and video hosting (YouTube). Other big sites like Shopify and Wix/parastorage combine for another 15%.
Having your key resources on that list may not matter very much, or it may. The existence of the list is scary, and I lack the data to understand the benefits, both in scale and in scope. Is the disk-space issue a real problem, for instance, or just a potential concern that hasn't been investigated?
I understand that the published list is not a final, set-in-stone item, so I will not comment on individual entries, but what thought has been given to maintaining such a list long term? It seems to me that it might become an area of contention that would be nice to avoid.
In the initial intent, the claim was that this is important for the healthy operation of the web. But is it really?
/Daniel
On 10/18/25 8:34 a.m., Patrick Meenan wrote:
Sorry, I missed a step in making the candidate resource list public. I have moved it to my chromium account and made it public here.
Not everything in that list meets all of the criteria - it's just the first step in the manual curation (same URL served the same content across > 20k sites in the HTTP Archive dataset).
The manual steps from there for meeting the criteria are basically:

- Cull the list for scripts, stylesheets and compression dictionaries.
- Remove any URLs that use query parameters.
- Exclude any responses that set cookies.
- Identify URLs that are not manually versioned by site embedders (i.e. the embedded resource cannot go stale), meaning either in-place-updated resources or automatically versioned resources.
- Only include URLs that can reliably target a single resource by pattern (e.g. .../<hash>-common.js but not .../<hash>.js).
- Get confirmation from the resource owner that the given URL pattern is and will continue to be appropriate for the single-keyed cache.
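For illustration only, here is a minimal TypeScript sketch of the mechanical parts of that filtering. The field names and MIME-type list are assumptions, not the actual HTTP Archive or Chromium schema, and the versioning and owner-confirmation steps stay manual:

```typescript
// Hypothetical shape of a candidate-list entry; the field names are
// illustrative, not the real dataset schema.
interface Candidate {
  url: string;
  contentType: string;   // e.g. "text/javascript"
  setsCookies: boolean;  // response carried a Set-Cookie header
  siteCount: number;     // distinct sites serving identical bytes
}

// Assumed set of resource types matching "scripts, stylesheets and
// compression dictionaries" from the criteria above.
const ALLOWED_TYPES = new Set([
  "text/javascript",
  "application/javascript",
  "text/css",
]);

// Apply the mechanical criteria; the stable-versioning check and the
// owner confirmation cannot be automated and are done by hand.
function mechanicalFilter(candidates: Candidate[]): Candidate[] {
  return candidates.filter(
    (c) =>
      ALLOWED_TYPES.has(c.contentType) && // correct resource type
      new URL(c.url).search === "" &&     // no query parameters
      !c.setsCookies                      // no cookie-setting responses
  );
}
```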
A few questions on list curation:
Can you clarify how big the list will be? The privacy review at https://chromestatus.com/feature/5202380930678784?gate=5174931459145728 mentions ~500, while the design doc mentions 1000. I see the candidate resource list starts at ~5000, then presumably manual curation begins to get to one of those numbers.
What is the expected list curation/update cadence? Is it actually manual?
Is there any recourse process for owners of resources that don't want to be included? Do we have documentation on what it means to be appropriate for the single-keyed cache?
thanks,
Mike
On 10/22/25 5:48 p.m., Patrick Meenan wrote:
The candidate list goes down to 20k occurrences in order to catch resources that were updated mid-crawl and may have multiple entries with different hashes that add up to 100k+ occurrences. Without any filtering, a 100k cutoff leaves around 600 entries in the candidate list, and I'd estimate that well under 25% of the candidates make it through the filtering for stable pattern, correct resource type and reliable pattern. The first release will likely be 100-200 entries, and I don't expect it will ever grow above 500.
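To make that arithmetic concrete (with made-up numbers, not data from the crawl): a resource updated mid-crawl can appear as two hashed entries that each fall below 100k occurrences but sum past it once grouped under one pattern.

```typescript
// Made-up entries for one logical resource captured before and after
// a mid-crawl update; neither alone clears a 100k-occurrence bar.
const entries = [
  { url: "https://cdn.example.com/abc123-common.js", occurrences: 60_000 },
  { url: "https://cdn.example.com/def456-common.js", occurrences: 55_000 },
];

// Group by the versioned pattern ".../<hash>-common.js" and sum:
// 60k + 55k = 115k, which is why the candidate cutoff sits at 20k.
const pattern = /\/[0-9a-f]+-common\.js$/;
const total = entries
  .filter((e) => pattern.test(new URL(e.url).pathname))
  .reduce((sum, e) => sum + e.occurrences, 0);

console.log(total); // 115000
```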
As far as cadence goes, I expect there will be a lot of activity for the next few releases as individual patterns are coordinated with the origin owners but then it will settle down to a much more bursty pattern of updates every few Chrome releases (likely linked with an origin changing their application and adding more/different resources). And yes, it is manual.
As far as the process goes, resource owners need to actively assert that their resource is appropriate for the single-keyed cache and that they would like it included (usually in response to active outreach from us but we have the external-facing list for owner-initiated contact as well). The design doc has the documentation for what it means to be appropriate (and the doc will be moved to a readme page in the repository next to the actual list so it's not a hard-to-find Google doc):
My understanding was that there was believed to be a meaningful security benefit to partitioning the cache: it limits a party's ability to infer that you've visited some other site by measuring a side effect tied to how quickly a resource loads. That observation could potentially be made even if that specific adversary doesn't have any of their own content loaded on the other site.
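A minimal sketch of the probe being described, for concreteness (browser-side TypeScript; the threshold is invented, and a real attack would calibrate against known cached and uncached baselines):

```typescript
// With a single-keyed cache, a fast load of a resource unique to some
// target site can hint that the user visited it; partitioning the
// cache by top-level site is what closes this channel.
async function probeWasLikelyCached(resourceUrl: string): Promise<boolean> {
  const start = performance.now();
  await fetch(resourceUrl, { mode: "no-cors", credentials: "omit" });
  const elapsed = performance.now() - start;
  // 20 ms is an arbitrary illustrative threshold, not a real one.
  return elapsed < 20;
}
```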
Of course, if there is an entity with a resource loaded across both sites with a 3p cookie and they're willing to share that info/collude, there's not much benefit. And even when partitioned, if 3p cookies are enabled, there are potentially measurable side effects that differ based on whether the resource request had some specific state in a 3p cookie.
Does that incremental security benefit of partitioning the cache justify the performance costs when 3p cookies are still enabled? I’m not sure.
Even if partitioning were eliminated, a site could protect itself a bit by specifying Vary: Origin, but that probably doesn't sufficiently cover iframe scenarios (nor would I expect most sites to get it right).
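For reference, this is roughly what that mitigation looks like server-side (a hedged Node sketch, not something proposed in the thread):

```typescript
import { createServer } from "node:http";

// "Vary: Origin" asks caches to key the response on the requesting
// Origin, so a hit produced by one site isn't observable from another.
// As noted above, the Origin header isn't sent on all subresource
// requests, so this doesn't fully cover iframe or no-cors cases.
createServer((_req, res) => {
  res.setHeader("Vary", "Origin");
  res.setHeader("Cache-Control", "public, max-age=86400");
  res.setHeader("Content-Type", "text/javascript");
  res.end("/* widely embedded library bytes */");
}).listen(8080);
```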