Intent to Prototype: Web environment integrity API


Ben Wiser

May 8, 2023, 11:30:30 AM
to blin...@chromium.org, Sergey Kataev, Eric Trouton, Philipp Pfeiffenberger, Mihai Cîrlănaru, Nick Gaw, Peter Birk Pakkenberg, Ryan Kalla

Contact emails

serg...@chromium.org, pb...@chromium.org, ryan...@google.com, b...@chromium.org, erict...@chromium.org

Explainer

https://github.com/RupertBenWiser/Web-Environment-Integrity/blob/main/explainer.md

Specification

We do not have a specification yet. In the near future we expect to publish an initial spec covering the implementation options we are considering for the web layer, which we suspect are not very controversial, as well as an explanation of our approach for issuing tokens, which is not directly a web platform component but which we expect will spark more public discussion. We are gathering community feedback through the explainer before we actively develop the specification.

TAG Review

Not filed yet.

Blink component

Blink>Identity

Summary

This is a new JavaScript API that lets web developers retrieve a token that attests to the integrity of the web environment. This can be sent to websites’ web servers to verify that the environment the web page is running on is trusted by the attester. The web server can use asymmetric cryptography to verify that the token has not been tampered with. This feature relies on platform-level attesters (in most cases provided by the operating system).
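
For illustration, here is a rough sketch of how a page might call the API as drafted in the current explainer (a promise-returning getEnvironmentIntegrity method on navigator that takes a "content binding" and yields an encodable token); the request path, endpoint, and token handling below are placeholders and the final shape may change:

  // Rough sketch only; the API shape follows the explainer draft and may change.
  async function attachIntegrityToken() {
    // The content binding ties the resulting token to a specific request.
    const contentBinding = "/checkout?session=1234";
    const attestation = await navigator.getEnvironmentIntegrity(contentBinding);
    // The encoded token (signed by the attester) is forwarded to the server,
    // which verifies the signature using the attester's public key.
    await fetch("/verify-integrity", {
      method: "POST",
      body: attestation.encode(),
    });
  }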


This project was discussed in the W3C Anti-Fraud Community Group on April 28th, and we look forward to more conversations in W3C forums in the future. In the meantime, we welcome feedback on the explainer.

Motivation

This is beneficial for anti-fraud measures. Websites commonly use fingerprinting techniques to try to verify that a real human is using a real device. We intend to introduce this feature to offer an adversarially robust and long-term sustainable anti-abuse solution while still protecting users’ privacy.

Initial public proposal

https://github.com/antifraudcg/proposals/issues/8

Risks

Interoperability and Compatibility

We are currently iterating on the explainer and specification and are working with the Anti-Fraud Community Group to build consensus across the web community. The “attester” is platform-specific, so this feature needs to be integrated on a per-platform basis. We are initially targeting mobile Chrome and WebView.

Ergonomics

See “How can I use web environment integrity?” in the explainer. Note that we are actively looking for input from the anti-fraud community and may update the API shape based on that feedback. We also expect developers to use this API through aggregated analysis of the attestation signals.

Security

See the “Challenges and threats to address” section of the explainer for our current considerations.

Will this feature be supported on all six Blink platforms (Windows, Mac, Linux, ChromeOS, Android, and Android WebView)?

We will initially support this only on Android platforms (Android and Android WebView). The feature requires an attester backed by the target platform, so it will need active integration work per platform.

Is this feature fully tested by web-platform-tests?

Web platform tests will be added as part of the prototyping work. We will then feed those tests back into the specification.

Requires code in //chrome?

True

Feature flag (until launch)

--enable-features=WebEnvironmentIntegrity

Link to entry on the Chrome Platform Status

https://chromestatus.com/feature/5796524191121408

Ben Wiser

May 9, 2023, 1:42:29 PM
to blink-dev, Ben Wiser, Sergey Kataev, Eric Trouton, Philipp Pfeiffenberger, Mihai Cîrlănaru, Nick Gaw, Peter Birk Pakkenberg, Ryan Kalla

The contact email b...@chromium.org was added in error. That should be bew...@chromium.org.

Morgaine (de la faye)

May 12, 2023, 2:50:51 AM
to blink-dev

> This can be sent to websites’ web servers to verify that the environment the web page is running on is trusted by the attester.

I'm not sure how RFC 8890 compliant this proposal is. This seems to create a large power for site owners to dictate & control user behavior. But RFC 8890 says that The Internet Is For End Users. This seems to work against that.

From the explainer:
> For example, this API will show that a user is operating a web client on a secure Android device.

As a user with a rooted phone, it would be highly upsetting & disturb my ability to use the web if you were to create this feature & allow me, a legitimate user, to be locked out of pieces of the web. Trying to narrow down the computing world to only run on attested hardware is the definition of the War Against General Purpose Computing, and it is much discussed in intelligent circles as the worst possible dystopian hell that can be brought against users. I hope this feature is abandoned, and if not, I hope it is quickly & readily subverted. This is an indignity to introduce against users.

Ben Wiser

May 12, 2023, 11:58:27 AM
to blink-dev, Morgaine (de la faye)

> This seems to create a large power for site owners to dictate & control user behavior.


I want to be forthright in saying that I have the same concerns. For this reason, it is an explicit goal in the explainer to "Continue to allow web browsers to browse the Web without attestation." (ref).


This is of course easier said than done. One idea is to introduce a holdback mechanism (ref) that only allows a portion of traffic to be attestable, so that web developers are not able to gate any particular request on the attestation verdict. In the rooting example, this would ensure that a web author could not prevent you from browsing without also locking out a sizable number of potentially attestable (but held-back) users.
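
To make that concrete, here is a purely hypothetical sketch of what holdback-aware handling could look like on a server; the header name, the verifyIntegrityToken helper, and the verdict categories are all invented for illustration and are not part of the proposal:

  // Hypothetical server-side sketch: with a holdback, a missing token can be a
  // held-back user, an unsupported browser, or a bot, so it cannot be treated
  // as proof of abuse. All names below are invented for illustration.
  async function scoreRequest(req) {
    const token = req.headers["x-integrity-token"];
    if (!token) {
      // Fall back to other, aggregated signals instead of blocking outright.
      return { verdict: "inconclusive" };
    }
    const ok = await verifyIntegrityToken(token);
    return { verdict: ok ? "attested" : "suspect" };
  }

  // Stub for illustration; a real implementation would check the attester's
  // signature over the token using the attester's public key.
  async function verifyIntegrityToken(token) {
    return token.length > 0;
  }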


Regarding how this addresses user needs, I think there are many legitimate reasons why users do not want fraud on the services they use. For example, fake engagement can be used to promote spam and disinformation to unsuspecting users. In other cases, real users may compete to buy concert tickets, but lose out to innumerable instances of fake users. There are a few more examples in the explainer’s introduction (ref). 


Another user-facing consideration is that these same inferences are made today using highly identifiable information from the browser, which inadvertently enables widespread tracking of users. Given the deprecation of third-party cookies and other privacy efforts, we recognize an urgency to create a well-lit path for anti-fraud use cases that does not rely on widespread collection of re-identifiable signals. My north star is for these existing approaches to be dropped in favor of a more reliable and more private alternative.

Rick Byers

May 16, 2023, 12:32:17 PM
to Ben Wiser, blink-dev, Morgaine (de la faye)
I've also been worried about this space, as there seems to be a fundamental tradeoff with no win-win solutions. As with other debates around tradeoffs with privacy, I think it would be naive to think that we can know the ideal balance ahead of time, or that it won't need to change over time. Anything we might decide would ultimately be influenced by the larger societal debate around privacy (regulations etc.), since perfect privacy means perfect immunity for criminals. Perhaps a productive line of discussion for this forum, then, is: what apparatus should we try to design into chromium and our processes to enable the balance to be tuned effectively over time? Such an apparatus would include both the output metrics (e.g. theoretically perfect measures of fraud rates and of rates of legit users being locked out) and the knobs we can tune to adjust the balance between those output metrics.

The chromium project has a history of tackling seemingly intractable tradeoffs with apparatus like this. For example, we designed origin trials to let us balance the efficiency and velocity of evolving the web platform against the risk of "excluding vendors" (sites working only in Chrome). We used knobs like how long a feature is allowed to stay in an OT state and what the page usage limits are to try to prevent a repeat of the "vendor prefix" hell web developers faced, and I think it's fair to say that it's largely been effective. We started conservatively but gradually relaxed some of our OT rules as we learned about the ecosystem dynamics in practice, and I'd hope we could do the same here. If we found instances of sites unreasonably locking out non-Chrome browsers, we'd adapt our OT rules to try to compensate. IMHO this has been extremely empowering in removing all the fear and uncertainty we used to face around experimental web APIs. Autoplaying audio and Chrome's media engagement system is another example where we finally resolved polarizing fights with a probabilistic system we worked hard to tune to maintain a good balance.

I like the discussion of holdback groups in the explainer as a key knob we could design to permit tuning the balance. I don't think we'll be able to get consensus on a binary question of holdback groups as phrased, but perhaps we could agree on the apparatus around them? Perhaps chromium should have a knob for holdback group size that starts relatively large but plans to fall as long as we see evidence of the value along with absence of evidence of the harms in practice?

Ben, the explainer says holdback groups are problematic because of the desire for "deterministic" solutions, right? But aren't the existing solutions (like fingerprinting) all probabilistic in some way too? Do deterministic solutions really exist at all? At the extreme, even for a fully attested device, I can just plug in a bot-controlled monitor, keyboard and mouse that uses AI to simulate a human, right? It seems to me that this problem space gets a lot more tractable if we give up any pretense of "deterministic solutions" and just focus on providing probabilistic signals to risk engines, because then we can hopefully keep the debate largely to setting the position of those knobs based on the outcomes we see, rather than in the intractable space of just guessing at very complex game theory.

Rick


Michaela Merz

May 16, 2023, 5:20:50 PM
to blink-dev, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla
I am a *big* fan of everything that helps to protect the integrity of a web/JavaScript environment. Not necessarily to make a site or web app unusable, but to inform the user that an environment has changed; it is up to the user to decide whether to continue to use it or not. To that end I am proposing the ability to sign some hash (e.g. integrity hashes) with a key pair or token that can be downloaded by the user during their first visit. Should the hashes not match, the user will be informed and may choose to terminate the session.

This would protect the complete environment against any form of change or manipulation even if it is done directly on the server.

My two cents.
Michaela

Ben Wiser

May 17, 2023, 10:56:27 AM
to blink-dev, misc...@googlemail.com, Ben Wiser, Sergey Kataev, Eric Trouton, Philipp Pfeiffenberger, Mihai Cîrlănaru, Nick Gaw, Peter Birk Pakkenberg, Ryan Kalla

Hey Michaela, if I understand correctly, you’re proposing an integrity check for user agents to run against websites? Let me know if I got that right. It sounds like you’re proposing an integrity check in the opposite direction from what the explainer is proposing: communicating user agent integrity to web servers. It sounds like an interesting space, but I’d recommend possibly creating an explainer with your thoughts as that doesn’t sound like it is in the same scope as Web Environment Integrity.


@Rick, regarding the probabilistic vs. deterministic problem in the explainer, I think there would be a threat of bad actors hiding in the holdback. Bad actors whose devices would definitely report an “untrustworthy” signal are able to simply not report attestations at all. Web developers would be forced to treat them the same as a user agent that decided to hold back an attestation verdict. I think we’d effectively be lowering the probability of catching bad actors. We’ve already gotten signals from anti-fraud providers that the value add will be smaller if holdbacks are provided.


Having said all that, I do think there would definitely still be utility in web environment integrity in a world where we have holdbacks. I’m also a huge fan of your approach of having configurability and measuring impact on the web over time, and I think that could be a very effective way to progress responsibly. We are currently fleshing out some ideas for holdbacks and should be able to share more soon.

Michaela Merz

May 17, 2023, 11:55:26 AM
to Ben Wiser, blink-dev
Ben: I must have read the explainer incorrectly. Yes, I am proposing an integrity check for user agents to run against websites.

Here are my thoughts:

Web environments are used to transport and process critical data. To protect the integrity of the web environment, we have mechanisms like subresource integrity. However, none of the available mechanisms protect the integrity of the web environment against malicious interference when it happens directly on the host of the pages and scripts. Additionally, developers may be coerced into changing code to circumvent certain mechanisms. This is because the subresource hash can easily be recalculated by anybody with access to the source code, or even removed from the calling/loading page or element. The user agent would thus be unaware of the code changes and might provide data to a now possibly dangerous or unsafe environment.

My suggestion is to implement methods that allow user agents to verify whether the elements have been tampered with, even if it happened at the most basic level: the hosting website itself. This could be done by creating a "hash of hashes" from all subresource integrity hashes within the user agent. On subsequent visits, the user agent would try to match the stored "hash of hashes" against a newly calculated one and warn the user if the hashes don't match. Ideally, the warning would be clickable and lead to a well-known page which allows the website to explain why the code has changed. The user may then choose to continue (which updates the stored hash of hashes to the new value) or to stop using the site.
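
To make the idea concrete, here is a minimal sketch of how such a "hash of hashes" could be computed from a page's SRI attributes with the Web Crypto API. In my proposal the user agent itself would compute and store this value, so the page-level code below is only an illustration:

  // Minimal illustration: derive a single digest from all subresource
  // integrity attributes on the page. The user agent would store this value
  // on the first visit and compare it on later visits, warning on mismatch.
  async function hashOfHashes() {
    const hashes = [...document.querySelectorAll("[integrity]")]
      .map((el) => el.getAttribute("integrity"))
      .sort(); // make the digest independent of element order
    const bytes = new TextEncoder().encode(hashes.join("\n"));
    const digest = await crypto.subtle.digest("SHA-256", bytes);
    return btoa(String.fromCharCode(...new Uint8Array(digest)));
  }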

Thank you for your thoughts and consideration.

Michaela


Rick Byers

May 17, 2023, 5:47:57 PM
to Michaela Merz, Ben Wiser, blink-dev
Thanks Ben,
I see your point about holdbacks. But all the arguments against holdbacks also apply to any user of any browser which doesn't support this API (too old, brand new, philosophical objection, etc.), right? I guess what I'd most like to understand is what exactly the implication would be for a new browser developer or someone who wants to build their own browser from source. The whole history of browsers is about new browsers cloning established ones. Does this API seek to end that tradition, or solve the problem in some other yet-to-be-described way?

Thanks,
   Rick




Ben Wiser

May 18, 2023, 9:16:58 AM
to Rick Byers, Michaela Merz, blink-dev
@Rick, yes this might be what ultimately leads to the holdbacks being the enforcement mechanism to protect non-attested traffic. The hesitancy in my answers is just in case there is an awesome technical solution or approach that we haven't thought of yet that could also help counter some of the cons of holdbacks. We will ultimately have to commit to something that protects against these core concerns and I definitely won't be surprised if we land on holdbacks.

Ps: Thanks for that timeline link; I enjoyed looking at the graphical timeline with a cup of coffee.

Morgaine (de la faye)

Jul 18, 2023, 6:11:04 PM
to blink-dev, Ben Wiser, Sergey Kataev, Eric Trouton, Philipp Pfeiffenberger, Mihai Cîrlănaru, Nick Gaw, Peter Birk Pakkenberg, Ryan Kalla
How does this feature interact with users trying to use DevTools to understand how a site works? There's notably not really any discussion of what an attestable environment is. This specification seems entirely open-ended as to how locked down a system might be or what might be permitted.

It seems all too likely that anyone using DevTools to look at or twiddle with a site has already broken the "Environment Integrity" seal. Is that the case? How do we maintain RFC 8890 in the face of this open-ended specification, which seems to have no limits on what it can do to restrict users?

Michaela Merz

Jul 18, 2023, 9:29:39 PM
to Morgaine (de la faye), blink-dev, Ben Wiser, Sergey Kataev, Eric Trouton, Philipp Pfeiffenberger, Mihai Cîrlănaru, Nick Gaw, Peter Birk Pakkenberg, Ryan Kalla
It would be my suggestion that a "broken" integrity should result in a browser warning (like an invalid TLS certificate) allowing the user to continue if he/she so chooses. That would allow "twiddling" while also giving a normal user some assurance that nobody else has "twiddled" with the code.

m.
 


A. M.

Jul 19, 2023, 10:39:21 AM
to blink-dev, Michaela Merz, blink-dev, Ben Wiser, Sergey Kataev, Eric Trouton, Philipp Pfeiffenberger, Mihai Cîrlănaru, Nick Gaw, Peter Birk Pakkenberg, Ryan Kalla, Morgaine (de la faye)
This is literally 1984. Stop the war against general purpose computing. I am not willing to give up privacy for added security.

> I want to be forthright in saying that I have the same concerns. For this reason, it is an explicit goal in the explainer to "Continue to allow web browsers to browse the Web without attestation."
Here's the problem: once it's implemented, what will stop websites from blocking users who disable it? You can't use most banking apps (or even the McDonald's app, Snapchat, ...) on rooted phones because they require passing SafetyNet. It's fundamentally the same thing.

It's my device, and I have the right to do what I want to do with it. Not somebody else.

Michaela Merz

Jul 19, 2023, 10:58:41 AM
to A. M., blink-dev, Ben Wiser, Sergey Kataev, Eric Trouton, Philipp Pfeiffenberger, Mihai Cîrlănaru, Nick Gaw, Peter Birk Pakkenberg, Ryan Kalla, Morgaine (de la faye)
There are a number of things to consider:

> It's my device, and I have the right to do what I want to do with it. Not somebody else.

While I 100% support this statement, that ship sailed a long time ago. Our friends @Google already control many things that especially influence the life cycle and functionality of PWAs. But that is a completely different discussion I am having on different channels.

As to the topic here: we can implement integrity in different ways. IMHO the best way would be to not load failed resources at all, kind of like it is today with subresource integrity. Once the resource has been loaded, you may do whatever you want with it, as there is no more integrity checking.

M.




Chris Palmer

Jul 20, 2023, 9:05:24 AM
to blink-dev, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla
Speaking as a recent former Chromie who wants you to succeed: retract this proposal.

* The web is the open, mainstream application platform. The world really, really needs it to stay that way.

* Whatever goals publishers might think this serves (although it doesn't), extensions and Dev Tools (and other debuggers) neutralize it. Extensions and Dev Tools are incalculably valuable and not really negotiable. So if something has to give, it's DRM.

* The document claims WEI won't directly break content blockers, accessibility aids, etc. But: (a) this will be used as part of an argument not to bring extensions to Chrome for Android; and (b) assume/realize that publishers will start rejecting clients that support extensions. Chrome for mobile platforms already doesn't support extensions, and mobile is the largest platform class. So publishers might even have a decent chance of getting away with such a restriction.

* DRM will always be cracked and worked around, but that doesn't mean that implementing this will be harmless. DRM still shuts out legitimate use cases (accessibility comes foremost to mind, but not solely), even when it is broken. Everybody loses.

* DRM misaligns incentives: the customer is now the adversary. This is a losing move, both from a business perspective and from a technical security engineering perspective. (Do you want an adversarial relationship with security researchers? No, you do not.) WEI enables publishers to play a losing game, not a winning one.

* In ideal circumstances, WEI would be at best a marginal, probabilistic, lossy 'security' mechanism. (Defenders must always assume that any given client is perfectly 'legitimate' but 'malicious'. For example, Amazon Mechanical Turk is cheap.) Holdbacks nullify even that marginal benefit, while still not effectively stopping the lockout of particular UAs and yet not effectively upholding any IP-maximal goals.

* Chromium has a lot of credibility in safety engineering circles. Don't spend it on this.

Michaela Merz

Jul 20, 2023, 11:19:04 AM
to Chris Palmer, blink-dev, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla
Thanks @Chris Palmer for your input. Nobody is more opposed to DRM than I am. Even today I refuse to load DRM extensions into the browser. I think that DRM is wrong and the open web is the way to go.

But providing provenance and integrity for a resource is not DRM. TLS is not DRM. If you hit a page with an invalid TLS certificate, you are free to continue. If the powers that be decided to NOT allow us to continue to sites without a valid TLS certificate, you'd find me on the barricades right along with you.

Browsers already include a protection mechanism called Subresource Integrity (SRI). If the provided resource doesn't match the hash, the browser refuses to load the resource. Together with Content Security Policy we can already create hardened web resources. But we're missing one crucial element: detecting when the web site has been modified on the server itself. If a malicious attempt to modify a web environment succeeds right at the source, we have no way to protect ourselves and our users.

That's why I think it is important to extend SRI with a "master key" or certificate that cannot be recreated without the knowledge of the author of the web site.

We can and must discuss the details of such a mechanism of course. I am with you and don't want DRM through the back door. But I think it's crucial for the web environment's credibility to have tools that can be used to protect the integrity of the environment.

m.



Reilly Grant

Jul 20, 2023, 1:41:45 PM
to Michaela Merz, Chris Palmer, blink-dev, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla
Michaela, I think you are misunderstanding this proposal. This is not a proposal for a site to prove its integrity to the user. It is a proposal for the user agent to prove its integrity to the site, and that it is acting on behalf of a real user. These are two largely independent problems. I recommend looking at Isolated Web Apps, which attempt to solve exactly the problem you're discussing.


Alex Russell

Jul 22, 2023, 1:26:05 PM
to blink-dev, Reilly Grant, Chris Palmer, blink-dev, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla, misc...@googlemail.com
Hey folks,

I think it's worth lowering the temperature here a bit. To Chris' point, we probably need to re-evaluate some of the choices in this design, and signpost any redesign or iteration quickly. But backing up a bit, it would be extremely helpful for y'all to reach out to the TAG ASAP; this is a complex space with lots of tradeoffs, and you'll want their guidance. The "spelling" of this API is good (CBOR, Promises, an API on `navigator`, etc.), but the intent and ecosystem choices will need a lot of attention.

As for the Explainer, there are things that I'd expect to see improved before anything could move forward, including (but not limited to):
  • More focus on specific use-cases. There are several listed, but they're high-level, and aren't met with example code that explains how this design will address those specific needs. For example, a sample use-case like "detect social media manipulation and fake engagement" needs a conversation later in the document about:
    • The extent to which such an API can actually perform that role. And is that something this design will take on? Or leave to passthrough underpinnings? What are the risks there? And how can developers know about them without making unwise assumptions?
    • Alternatives and complementary technologies sites might use today (fingerprinting is mentioned in passing, but not in detail, and forcing users into native apps isn't brought up).
    • Sample code that shows how the problem is addressed using features this API provides.
  • A conversation in the design around if (or how much) this delegates to OS or platform-specific attestation. My quick read of the spec suggests that this is, roughly, a passthrough to lower-level APIs that have their own constraints, and that making them web-aware is a non-goal (presumably in the interest of broadest reach). Those tradeoffs deserve a discussion in the doc, as do alternatives to this approach. If the plan *isn't* to passthrough directly to system attesters, that probably needs to be called out more visibly.
  • Interop risks. Can this be plausibly implemented by other vendors and engines in a fully interoperable way?
  • The ways that a passthrough risks further entrenching a duopoly in mobile computing. E.g., if this API is wildly successful and heavily used, and requires that the `contentBinding` use pre-arranged, OS-level side-agreements, what does that do to the ecosystem? Does it make things harder for new OS vendors? For AOSP users?
  • User consent. I know we don't put UI in our specs, but if the async nature of the API is designed to enable user control of this sort of capability, it isn't obvious from the design sketch. Likewise, I'd expect to see integration with the Permissions API (and Feature Policy delegation) as part of the design if user control or consent are goals. This also shades into a conversation about <iframe>s and delegation, which also needs a look.
The intro suggests massive-scale use as a success state, with a majority of transactional, social, and gaming use-cases (at least by volume) as users. That implies an *extremely* high bar.

I'll fight to the hilt to maintain the space for y'all to iterate on addressing these problems, and a good first step might be to restate them in a user-focused way that takes on individual use-cases end-to-end, showing your work as you go, including the way this will (likely) interact with other systems that are roughly hewn in this draft.

Best,

Alex


Dana Jansens

Jul 23, 2023, 7:02:58 PM
to blink-dev, Reilly Grant, Chris Palmer, blink-dev, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla, Michaela Merz
There's been a lot of strongly worded negative feedback on this proposal on GitHub, and I don't agree with how that feedback was delivered, but I do agree that this proposal, if followed, would not be good for the web.

The proposal talks about trust, but the server does not need to trust the client. As Palmer said, they can never trust the client; this doesn't allow them to trust the client in a way that could be considered a security boundary. That is a fundamental design choice of client-server with open user agents, in place of closed apps/walled gardens. This is an intentional property of the web.

But this proposal provides a mechanism for web sites to force their ideals and preferences onto user agents, which takes away user autonomy and choice, and damages the trust held by Chromium as the dominant user agent today. Let's push the world to be more open and give users more control, not make it more controlled and closed.

Dana

Ludwig GUERIN

Jul 23, 2023, 7:03:39 PM
to blink-dev, Alex Russell, Reilly Grant, Chris Palmer, blink-dev, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla, misc...@googlemail.com
I'd like to add something I haven't seen discussed yet. I've looked at the RFC here: https://rupertbenwiser.github.io/Web-Environment-Integrity/. I looked at the example, and my first impression was: wow, this is worthless. This seems to introduce some sort of device/browser/session identifier (might say hello to GDPR), which basically turns this into browser fingerprinting (how convenient coming from Google, and ironic given section 1.1), but more importantly: it's just an added useless identifier.

This also poses a certain number of problems:
- Either you won't be able to use any extension, or the proposal is rendered useless by the fact that extensions can have access to this API
- Since the browser and server are two very distinct environments, there's no way for the server to validate anything that wouldn't be redundant after the TLS handshake
- Lastly, this is an IDENTIFICATION method, not an AUTHENTICATION method
- Full chain integrity checks require non-legal methods to inspect other people's machines' current state (memory, disk, etc.)

This feels like a traditional Google move to drive away extension users and developers (we know which ones they especially don't like), or a push for a DRM-ization of the web. Both are obvious no-nos.

As is, and after reading it, the RFC doesn't make any attempt at explaining how to use said tokens, what problems they solve and how. It also doesn't

And also, section 6.2, titled "Privacy concerns", currently being "TODO" is both extremely funny and worrying, but could also indicate the true motives. If they care about security, why is this section one of the only TODOs? It also reminds me of that one infamous talk featuring the now very infamous line: "We'll leave the morality aspects to the side".

Sincerely,
A deeply concerned Engineer



Yoav Weiss

Jul 24, 2023, 12:10:23 PM
to Dana Jansens, blink-dev, Reilly Grant, Chris Palmer, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla, Michaela Merz
/* with my API OWNER hat on */

Examining this proposed change, it seems to me that the most risky part in the signed attestation information is the part about "application identity". Providing that information to the server seems to go against our past efforts to GREASE UA-CH and will prevent Chromium browsers from identifying themselves as Chrome, something they are (unfortunately) often required to do for compatibility reasons.

Dominic Farolino

Jul 25, 2023, 10:33:01 AM
to Yoav Weiss, Dana Jansens, blink-dev, Reilly Grant, Chris Palmer, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla, Michaela Merz
At the very least, an explicit commitment to a holdback would seem to quell some of the concerns about this feature. But one thing I'm concerned about is whether there'd be a difference in holdback between Chrome and WebView. Since WebView isn't always considered a "real browser", I could see this as an opening to try not to implement holdbacks on WebView. I'm not sure how API OWNERs would evaluate that, but the risks there seem pretty interesting: I imagine it'd force some sites to aggressively UA-sniff to determine whether they're in a WebView (where the absence of attestation can be treated as a perfect signal) vs. in a browser where a lack of attestation could be a holdback user and is therefore "OK". Having the adoption of an API hinge on that kind of ugly practice seems unfortunate.

Tom Jones

Jul 25, 2023, 7:31:25 PM
to Dana Jansens, blink-dev, Reilly Grant, Chris Palmer, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla, Michaela Merz
Perhaps it is a good thing for user choice to have a browser that is fully open to any use and allows anonymous user actions.

The result of such openness is that an entire series of services that need to trust the client (used in the OAuth sense of the word) are not available to web apps.

Consider the effort in the IETF Remote Attestation Procedures working group (RATS, https://datatracker.ietf.org/wg/rats/about/) to get web trusted client apps. Without knowing the operating environment (i.e. the sandbox), such a statement loses most of its value.

I have recently worked on a fork of Chromium that is designed to have this functionality, and on native wallet apps to get it. The lack of this functionality in Chrome will drive developers away from Chrome and fragment the user experience. We already have the problem of directing users away from Chrome to a secure wallet and being unable to bring the original user session back to Chrome. Of course Google and Apple get to solve this problem with their own wallets, but that will not fly in Europe, and now the US DHS is asking for more open solutions as well.

Something needs to change. If this solution is not available, then the change will be outside the browser. ..tom


Lauren Liberda

Jul 27, 2023, 10:27:41 PM
to blink-dev, Tom Jones, blink-dev, Reilly Grant, Chris Palmer, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla, Michaela Merz, Dana Jansens
This message got me to write down my thoughts and a little case study of last year's Big Mac exploit (not an exaggeration): https://liberda.nl/weblog/trust-no-client/

Justin Schuh

Jul 28, 2023, 11:48:35 AM
to Dominic Farolino, Yoav Weiss, Dana Jansens, blink-dev, Reilly Grant, Chris Palmer, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla, Michaela Merz
Hopefully I'm not adding to the noise, but I wanted to call out a few things as an independent observer with some background in the problem space. (My comments are beyond the process and structure things, which Alex already addressed.)

First, I suggest that anyone commenting on this explainer as currently written should also read the initial public proposal linked in Ben's email, which gives more context on the problem space. To use the terminology from that discussion, this proposal is about detecting/blocking IVT (invalid traffic), which encompasses things like fraud, spam, coordinated disinformation, etc. that originate from inauthentic users (e.g. bots, farms). Site operators have historically relied on fingerprinting and other tracking signals to identify IVT, but as browser makers eliminate fingerprinting/tracking surfaces, site operators need privacy preserving ways to detect/block IVT.

That context sort of comes across from the explainer and linked resources, but IMHO it really needs to lead with plainly stating this, because while the CG discussions show broad consensus on the nature of the problem and the importance of addressing it, the explainer is written in a way that largely assumes understanding of all that context (which is clearly not the case).

The next big thing that jumps out at me is that the only solution even considered for IVT seems to involve wrapping device attestation APIs (e.g. Android SafetyNet and iOS App Attest). This is a common enough approach for native apps dealing with IVT (it basically repurposes a DRM mechanism, with all the baggage that entails). However, it also seems to ignore the fundamentally different privacy and security considerations of the Web platform. Most concerningly, it tightly couples user authenticity to device integrity. I have my doubts that this is necessary, and I think most of the concerns arise from conflating these two concepts.

My recollection is that there was a lot of work done with PrivacyPass to explicitly decouple user authenticity from other ambient state. I also see from the CG discussions that PrivacyPass was not considered adequate for addressing IVT. If I were in a position of assessing this proposal, I know that I'd need more detail in the explainer on specifically how PrivacyPass was lacking, and why a narrower extension of the protocol is insufficient.

I also see questions about holdback, but I feel like that's a bit backwards. I appreciate the need to detect ever evolving adversaries, but IVT is a problem that happens at scale. So, if more signals are needed to stay ahead of the threats, then a conservative sampling rate should be more than adequate to detect new patterns and identify coincident signals. Something like that could mitigate many of the concerns around sites misusing this sort of thing.
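
To illustrate the sampling idea (rough sketch only; the 1% rate, the API call, and the error handling are assumptions, not anything from the current proposal):

  // Hypothetical sketch of sampling: only a small random fraction of sessions
  // is ever asked for an attestation, and the result feeds an aggregate risk
  // signal rather than a per-user gate. Names and rate are invented.
  const SAMPLE_RATE = 0.01;
  async function maybeCollectIntegritySignal(contentBinding) {
    if (Math.random() >= SAMPLE_RATE) {
      return null; // most sessions are never sampled
    }
    try {
      return await navigator.getEnvironmentIntegrity(contentBinding);
    } catch (err) {
      // Absence of a verdict must not be treated as proof of abuse.
      return null;
    }
  }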

Perhaps these sorts of discussions took place in the CG and I just didn't find them. But it certainly isn't captured in the explainer, and the CG discussion read to me like everyone started with the assumption that the solution was to just wrap the Android/iOS native approach.



P.S. This may be total bikeshedding, but I really don't like the term IVT, since invalid traffic is too broad a concept. The problem space here is concerned with inauthentic traffic at scale, so I'd suggest zeroing in on a term that better conveys that reality.


Rick Byers

Jul 28, 2023, 3:09:04 PM
to Justin Schuh, Dominic Farolino, Yoav Weiss, Dana Jansens, blink-dev, Reilly Grant, Chris Palmer, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla, Michaela Merz
As one of the API owners and chromium community leaders, I'd just like to chime in on this personally with a meta-point: 

Thank you all for the thoughtful and constructive debate in this forum. As I'm sure you know, this topic has gotten a lot of disrespectful, abusive and overly-simplified criticism in other public forums which IMHO has made it hard to get any useful signal from the noise there. I have encouraged the team working on this to ignore feedback in any forum in which something like Chromium's code of conduct is not being maintained as anything else would be creating an unsafe working environment. It's somewhat ironic to me that some folks arguing passionately for the openness of the web (something I and many of the proposal contributors are also passionate about) are relying on physical threats and other forms of abuse, which of course means we must limit the engagement on this topic such that their voices are ignored completely (the antithesis of the openness they are advocating for). Attacks and doxing make me personally MORE likely to support stronger safety features in chromium, as such acts increase my suspicion that there is significant intimidation from criminals who are afraid this feature will disrupt their illegal and/or unethical businesses, and I don't give in to criminals or bullies. 

But then I'm grateful that the blink-dev community remains a place where we can disagree respectfully and iterate openly and publicly on difficult and emotionally charged topics, backing us away from thinking and acting in an "us-vs-them" fashion. I also want to point out that while open to anyone, this forum is moderated for new posters. Moderators like myself approve any post which is consistent with chromium's code of conduct, regardless of the specific point of view being taken. The thoughtful comments here over the past few days have been educational and overall calming for me, thank you!

This community and its moderation practices represent the sort of balance between openness and safety which I believe the WEI proposal authors are striving for. At the same time, I believe it's clear to many of us that the tradeoffs struck by the current proposal do not yet meet the minimum bar necessary to uphold chromium's values. That's OK - the whole point of designing in the open and having public debate is to find reasonable compromises between stakeholders with very different perspectives, and to create a safe place to experiment (as we expect most experiments to fail!). In order to start even an origin trial in Chrome, this proposal would need approval from API owners like myself, and the current state of the proposal is not something I'd personally approve due to many of the concerns being raised. At the same time I do think there's an urgent opportunity for chromium to do more to help with the problem of inauthentic traffic, and (like everything we do) some amount of experimentation seems essential to that. I believe the team working on this proposal is taking some time to regroup (and recover from all the stress) and rethink at least the framing, if not some of the core design properties, of this feature. I'm sure we'll get an update from them when they feel ready and sufficiently recovered to engage in public again. In the interim, please keep the constructive and respectful criticism coming. Bonus points if you also have suggestions or data on how to actually make meaningful progress on the problem of inauthentic traffic in a way that's fully consistent with the openness of the web :-).

Cheers, and I hope you all have a stress-free weekend,
   Rick

Rick Byers

Jul 28, 2023, 10:11:09 PM
to Lauren N. Liberda, Justin Schuh, Dominic Farolino, Yoav Weiss, Dana Jansens, blink-dev, Reilly Grant, Chris Palmer, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla, Michaela Merz
On Fri, Jul 28, 2023 at 9:19 PM Lauren N. Liberda <liber...@gmail.com> wrote:
>> I have encouraged the team working on this to ignore feedback in any forum in which something like Chromium's code of conduct is not being maintained as anything else would be creating an unsafe working environment. It's somewhat ironic to me that some folks arguing passionately for the openness of the web (something I and many of the proposal contributors are also passionate about) are relying on physical threats and other forms of abuse, which of course means we must limit the engagement on this topic such that their voices are ignored completely (the antithesis of the openness they are advocating for).

> I'm gonna say this as politely as I can. Google has got into hugely dominant positions with Chromium and Android. I don't think I have to explain here how much these 2 projects dominate the web and mobile spaces. They are both under Google's governance, and treated by Google as its backyard. Chromium comes up with whatever Chromium wants to implement, and can ignore everyone else. Android keeps getting moved from AOSP to Google Play Services. Nobody can stop this. Nobody can stand up against this.
>
> This, in a direct consequence, means whatever Google does with these projects *will* be watched closely and with little trust, like a government is. If this is a problem for you, maybe suggest to the more important people to stop that. I don't know, turn them into independent non-profit projects, separate from Chrome and Google Play? Request Firefox is shipped by OEMs on some Android phones instead of Chrome? Send some bigger one-time donations for Servo and Ladybird development, with no requests made? I'm not getting paid to give you advice.
>
> WEI is also an especial, highly flammable combo, because it touches (risks of) all of: anti-end user practices, setting Google as an authority to trust with decisions about user's faith, reinforcing user reliance on Chrome specifically and Google Play Services, fight on ad blockers, scraping, and unofficial clients. If I continue with the government analogy, Google is here an unelected official whose death would start street parties. I know, I know, this is just "an experiment", "not a goal", and actually "for privacy".
>
> Currently, as I'm on Android, my banking apps will refuse to enable some features, Snapchat will refuse to log in, and the McDonald's app will refuse to launch if I don't get Google to sign some magic string these apps get from a server. I'm, according to the Play Integrity API documentation, supposed to not get it signed, because I have access to my own device's root user. I want to update apps from my F-Droid as easily as those from Google Play, stop traffic to some domains by editing /etc/hosts, and have a possibility to backup some of my installed apps with their data, but I guess that's too much freedom to have a McDonald's equivalent of a loyalty card. Maybe I'll self-report here now that I also have access as an Administrator to my Windows machine.
>
> The current proposal is to extend this to the web. Apparently if I have root access on my phone, this means I must be a robot that shouldn't see the website that has ads (referring to the first example from the explainer's introduction). That's correct, I don't want to be human. Especially if my humanity is reduced to "advertisement watcher". The left-over crumbs of humanity that used to be in this body are only here to check the "I'm not a robot" boxes.

FWIW I agree with you personally. The web is special because it's open and can be reached by anyone, not just some whitelisted set of UAs. That's a core property of the web that I'll personally always fight for. I appreciate that Google doesn't have a lot of trust with the community here, and people are going to assume bad intent. I've come to terms with that and hope to combat it primarily not through promises, but through helping drive positive action in Chrome. E.g. I initiated our interop efforts years ago to reduce the risk of the web getting locked into chromium.

>> Attacks and doxing make me personally MORE likely to support stronger safety features in chromium, as such acts increase my suspicion that there is significant intimidation from criminals who are afraid this feature will disrupt their illegal and/or unethical businesses, and I don't give in to criminals or bullies.

> If wanting to see the news, watch some user-uploaded videos, or scroll some social media feed on the internet is gonna side me with criminals, then at this point maybe I should simply become one. This makes "unethical businesses" sound like they might actually be a moral choice. Torrents actually give me movies, not "The WidevineCdm plugin has crashed". The site you probably know well wants, at worst, 2,80 Euro for a thousand checked boxes, while I'd need half a minute to fill each myself. With mass use of attestation (availability on web *will* increase it on all platforms), APIs returning tokens signed by Google will only become a matter of the price per 1000, not of a fact. If the "strong safety features" mean making it annoying enough to cost 15 Euro instead of 1, while a genuine Huawei user will not be able to get any, and actual criminals already send phones to package lockers as a way of selling bank account access, then maybe this is not safety. (Just kidding. I would never do cybercrime.)

It's been pointed out to me that my wording could be taken to suggest that I think folks who oppose WEI are criminals. That was absolutely not my intent and I apologize for not being more careful in my wording. I'm also the kind of person who likes to run rooted devices (I used to compile my own NetBSD kernel from scratch weekly), custom browser builds, etc., and so I sympathize heavily with that use case myself and don't see how I could support a proposal which seriously risked the outcome you describe: users of such devices / niche browsers being locked out of important parts of the web. AFAIK there is no serious debate as to whether such an outcome would be acceptable for the web (it's not); the debate is whether this proposal could possibly achieve its aims without causing such an outcome. There's been lots of strong words saying it's impossible to reduce fraud risk without threatening the openness of the web, and perhaps that's right, but I, for one, am always willing to be shown that something I thought was impossible was in fact doable with sufficient ingenuity and care. If I've learned anything from my tiny forays into the W3C anti-fraud community group, it's that there's a lot of complexity and expertise in this space of which I know almost nothing, so I'm open to new ideas. I'm thrilled to see anti-fraud experts actually collaborating openly and publicly for, perhaps, the first time in Internet history.

My primary intent with the word "criminal" was to take a strong stand against physical threats and doxxing - IMHO that is criminal activity and is inexcusable. To be consistent with our code of conduct we need to be absolutely clear that any change in direction here will come from the respectful and thoughtful comments on this thread and elsewhere (including your blog, which I quite enjoyed), not the intimidation tactics occurring on the GitHub repo and elsewhere. Sorry for not being more careful to clearly separate these things in my response. 

Rick

Billy Bob

Jul 29, 2023, 1:24:05 AM
to blink-dev, Rick Byers, Justin Schuh, Dominic Farolino, Yoav Weiss, Dana Jansens, blink-dev, Reilly Grant, Chris Palmer, bew...@google.com, Sergey Kataev, erict...@google.com, Philipp Pfeiffenberger, mcirl...@google.com, nic...@google.com, pb...@google.com, Ryan Kalla, Michaela Merz, Lauren N. Liberda

I have thoughtfully and respectfully written this comment. Please review it!

> But then I'm grateful that the blink-dev community remains a place where we can disagree respectfully and iterate openly and publicly on difficult and emotionally charged topics, backing us away from thinking and acting in an "us-vs-them" fashion. I also want to point out that while open to anyone, this forum is moderated for new posters. Moderators like myself approve any post which is consistent with chromium's code of conduct, regardless of the specific point of view being taken. The thoughtful comments here over the past few days have been educational and overall calming for me, thank you!

I have watched the Web Environment Integrity API unfold on Hacker News, in the GitHub repo, and now in the news. I want to be part of the discussion, not be informed of decisions after the fact.

> It's somewhat ironic to me that some folks arguing passionately for the openness of the web (something I and many of the proposal contributors are also passionate about) are relying on physical threats and other forms of abuse, which of course means we must limit the engagement on this topic such that their voices are ignored completely (the antithesis of the openness they are advocating for)

And, unfortunately, as a developer and well-meaning user, I've found that my avenues for giving feedback are closed. I want to share my voice and not be ignored completely. Yes, there is vitriol surrounding this topic, but it's too important to shut out all dissenting voices. Doxxing and threats are wrong. Period. But so is silencing your community, your users, your developers, and all discussion and debate surrounding the Web Environment Integrity API. Even the discussion here has been a bit heated, with misunderstandings.

While you say you are "looking for a better forum and will update when we have found one", you have begun adding these changes to Chromium. Again, despite wanting to treat this as an early proposal for new web standards, you are already prototyping it in Chromium! It's a bad look, bad PR, and against your own W3C code of conduct. It's not just that you're ignoring or leaving feedback unaddressed; it's that all feedback is rejected in the first place. By the time it's implemented, it may be too late. To quote this article about past Google actions:

But this move for greater democracy would have been more powerful and effective before Google’s unilateral push to impose Manifest V3. This story is disappointin