Re: [blink-dev] Intent to Prototype: Multicast Receive API


Reilly Grant

Mar 11, 2021, 8:04:41 PM
to Holland, Jake, Qiu, Binqiang, net-dev
Moving blink-dev@ to BCC and forwarding this to the net-dev@ mailing list.

This work proposes adding support for new network protocols to Chromium, which has security and privacy consequences. net-dev@ is the mailing list where this conversation should start. From private communications with the proposers of this new API, their primary goal at this stage of development is to land an experimental implementation within Chromium in order to facilitate iteration and evaluation of the technology.
Reilly Grant | Software Engineer | rei...@chromium.org | Google Chrome


On Thu, Feb 4, 2021 at 1:31 PM 'Holland, Jake' via blink-dev <blin...@chromium.org> wrote:

Contact emails

jhol...@akamai.com, bq...@akamai.com


Explainer


https://github.com/GrumpyOldTroll/wicg-multicast-receiver-api/blob/master/explainer.md


Specification

 

TBD: API spec and design document (currently just entering Intent to Prototype).

IETF standards-track docs (all presently works in progress):

- draft-ietf-mboned-ambi
- draft-ietf-mboned-dorms
- draft-ietf-mboned-mnat
- draft-ietf-mboned-cbacc

(There will be integrated components prototyped in the browser feature, based on an appropriate subset of the above specs, before origin trials begin.)

 

 

Summary

Subscribe to source-specific multicast IP channels and receive UDP payloads in web applications.
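
As a rough sketch of the developer-facing shape this might take (every name below, including MulticastReceiver and its options, is illustrative rather than the explainer's exact IDL; the addresses and port are placeholders):

    // Hypothetical API shape only; see the explainer for the actual proposal.
    const receiver = new MulticastReceiver({
      source: '198.51.100.10',  // sender address (source-specific multicast)
      group: '232.1.1.1',       // SSM group address
      port: 5001,
    });
    receiver.onmessage = (event) => {
      handlePayload(event.data); // one UDP payload per event
    };
    receiver.join();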




Blink component

Blink>Network


Motivation

Currently, Web application developers have no API for receiving multicast traffic from the network. All traffic for web applications thus requires a one-to-one connection with a remote device. Multicast IP provides a one-to-many data stream, with packet replication performed by the network, enabling efficient use of broadcast-capable physical media and reducing load on congested shared paths. Enabling Web applications to receive multicast would solve the receiver distribution problem that contributes to the current under-utilization of multicast on the internet.
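
For comparison, native applications can already subscribe with ordinary sockets. A minimal sketch in Node.js (assuming Node 13.1+, where dgram exposes addSourceSpecificMembership(); the addresses and port are placeholders):

    // Native SSM receive, which has no web-platform equivalent today.
    import * as dgram from 'dgram';

    const sock = dgram.createSocket({ type: 'udp4', reuseAddr: true });
    sock.on('message', (payload, rinfo) => {
      console.log(`${payload.length} bytes from ${rinfo.address}`);
    });
    sock.bind(5001, () => {
      // Emits an IGMPv3 source-specific join, telling the network to
      // replicate (source, group) traffic toward this host.
      sock.addSourceSpecificMembership('198.51.100.10', '232.1.1.1');
    });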

 

This effort is coupled with a standardization effort in the MBONED working group at IETF and ongoing trials with multiple network operators to deploy a standardized approach for ISPs to ingest externally sourced multicast UDP traffic.




Initial public proposal

https://discourse.wicg.io/t/proposal-multicastreceiver-api/3939


TAG review

None


TAG review status

Pending


Risks



Interoperability and Compatibility

 

Feature adoption by browsers has an influence on whether ISPs will deploy the capability to ingest and manage externally sourced multicast traffic.

 


Gecko: No signal

Edge: No signal

WebKit: No signal

Web developers: Some positive support on the IETF MBONED mailing list. (Also importantly: some ISP support.)




Is this feature fully tested by web-platform-tests?

No


Link to entry on the Chrome Platform Status

https://www.chromestatus.com/feature/5683808135282688

This intent message was generated by Chrome Platform Status.

 


Ryan Sleevi

Mar 11, 2021, 8:55:22 PM
to Reilly Grant, Holland, Jake, Qiu, Binqiang, net-dev
While I can understand the desire to experiment, it does seem like there is little to no path to resolve these (fundamental) privacy and security issues. I can understand why network middle-operators may prefer this, as a cost savings approach, but it seems to run counter to the efforts to get the Web "secure by default", right? Is that a good direction, even for experimentation?


Yutaka Hirano

Mar 11, 2021, 10:07:33 PM
to Ryan Sleevi, Mike West, Reilly Grant, Holland, Jake, Qiu, Binqiang, net-dev

Holland, Jake

Mar 12, 2021, 3:21:43 PM
to Yutaka Hirano, Ryan Sleevi, Mike West, Reilly Grant, Qiu, Binqiang, net-dev

Thanks Reilly for helping me find the right venue for discussion.

 

@Ryan: Could you say more about the fundamental privacy and security issues that don’t have a good path for resolution?

 

(TL;DR: I’d like to discuss the specific concerns in more detail to see whether they can actually be addressed.)

 

The performance benefits here are also fundamental, and IMO should be considered together with the privacy concerns as a tradeoff before being dismissed, including the relative scale of the potential problems and benefits.

 

Though I agree there is a privacy exposure footprint to some on-path devices (most notably just upstream of the receiving device), I’d argue in general the exposure is pretty limited, and it’s possible to mitigate in ways similar to the privacy exposure risks from DNS or from the remote server’s IPs.  (For example, by being exposed to a VPN provider of choice instead of exposed to your local ISP, and by turning the API off when operating in enhanced privacy modes).

 

I’m also prepared to argue that it would be worthwhile to explore even more extreme measures to protect privacy, if that’s what it takes to realize the performance gains this API offers.

 

What I’m thinking here (if necessary) could be done via a moderated allow-list that providers would have to get on and maintain good behavior to stay on.  For example, if this API had to ask permission before use on a page, or if it ended up exposed only to browser extensions from the store that had to ask for permission during installation, it would still be a big step forward with regard to the capability for content owners to deploy multicast receivers, so I’d consider something like that to be on the table if the privacy concerns prove severe enough.

(NB: I’m starting out skeptical it’s necessary to go that far, but if someone was able to point out a good threat model for how the exposed info could be exploited in a way not available to them with unicast secured web traffic, that would convince me.  Either way, I’d still prefer to open up experimentation via command-line options while that discussion is ongoing.)

 

I’m not sure whether there are other security issues you believe can’t be addressed...  We have a work-in-progress spec for how we propose to authenticate the traffic asymmetrically, which we think is feasible and intend to implement.  The other security concern I’ve heard so far is the danger of adding RPCs to blink; although I understand the reluctance to accept a large and complicated patch even experimentally, I assume this too can be addressed if we can get the right test coverage and satisfy a reasonable security review team on code quality, and we’d be very happy to start the engagement on that front.

 

Do you have other concerns in mind besides these?

 

Best regards,

Jake

Ryan Sleevi

Mar 12, 2021, 4:32:39 PM
to Holland, Jake, Yutaka Hirano, Ryan Sleevi, Mike West, Reilly Grant, Qiu, Binqiang, net-dev

TL;DR: I think we should accept that the properties of TLS are table stakes for any new Web-exposed transport layer; minimally, confidentiality, integrity, and authenticity should be expected, and the network should not be seen as implicitly reliable or trustworthy. This proposal doesn't rise to that level, by design, and it's non-trivial to get it there, but it is almost certainly necessary to do so before considering implementing.

Jake,

Thanks for the perspective. I think it's one we've certainly heard before, and while I think we've made a lot of progress addressing it, I can understand that as new interests come to the table, we can easily find ourselves discussing things we'd perhaps long considered settled. At a fundamental level for privacy and security, the move and desire to make the Web "Secure by Default" is well understood, and a non-trivial portion of that has been the move to "Secure Origins". As the blog post mentioned just over three years ago, "A secure web is here to stay" - a capstone not only for the work Google started with its own services in 2008, but also for the work that helped bring visibility to the broader Web ecosystem via the transparency dashboard.

Complementing that effort has been work, in Google products, Android, and Chrome, to help secure DNS by default, showing a strategic effort to pursue solutions that help secure users by default, rather than, say, requiring the use of a VPN.

At a fundamental level, this proposal is incompatible with those goals of ensuring confidentiality, integrity, and authenticity. I can understand that, as proposed, this draft does try to tackle the integrity angle, and arguably makes an effort to tackle authenticity by way of some clever, if problematic, additional dependencies on DNS. But there's a fundamental problem here with confidentiality that is not easily addressed, and while it's good to see it acknowledged in the proposal as unaddressed, it is still significant.

The proposal, if implemented, would move the Web away from the very clear trajectory it's been on, particularly around efforts like making it clear there are No More Mixed Messages about HTTPS, by attempting to transparently and automatically upgrade mixed content, or otherwise block it. This proposal would, in effect, intentionally reintroduce mixed content, via an unsecured, web-developer-exposed transport layer, which is, to put it mildly, fundamentally challenging and problematic.

Taking a step back, however, from that high-level concern about an intrinsically problematic direction, there are also practical security concerns that come with the implementation. As the Rule of 2 captures, there are deep concerns with processing untrustworthy inputs in unsafe languages with no sandbox. It's no wonder that WebRTC is implemented not within the Network Service, which does not have a sandbox on most platforms Chrome runs on, but instead in the Renderer. Yet to be able to do that, we still found it necessary to implement a number of security safeguards within the browser process, and we're also acutely aware of the performance challenges that this (security-necessary) implementation has caused. For your abstract problem statement and goal, it's inevitable that, at minimum, a similar approach would be needed, yet this proposal would require even more layers of complexity in parsing and protocols (e.g. YANG/RESTCONF) than even WebRTC does to achieve that.

It's also likely that, like WebRTC, there will be a push to place this processing closer to the network, bringing more complexity in the protocol implementation closer to the network, in order to achieve the desired performance goals. Unlike the high-level concern about lack of confidentiality, this is a surmountable problem, but it would be entirely fair to say that this proposal's necessary implementation steps raise huge red flags for known problematic patterns, and thus would require significant investment throughout the design and implementation phases to appropriately secure. This is, unfortunately, not something where we can treat security as an "add-on": it has to be baked in throughout, and thus requires a strong investment from the implementation phase onwards. In short, we know from past experience that even experimentation efforts can lead to (accidentally) introducing new attack surfaces, and this one seems very prone to do so, by nature of what it's trying to do.

I realize the goal of implementation is to help flesh these issues out, and especially for new implementers, the experience of past protocols, the IPC overheads and design considerations, and the well-intentioned features that have gone wrong are all things not readily available, nor is there an easily curated list of what not to do. I don't mention these things to block implementation, but rather to highlight why we know this is an area that requires much more up-front investment in thinking and design before implementation, in order to achieve the desired security goals. Trying to send unreliable messages (UDP frames) over reliable channels (IPC pipes) creates all sorts of performance issues, and efforts to tackle that (e.g. by using shared memory) can easily introduce all manner of TOCTOU bugs.

I can understand wanting to explore new trust models for the Web, such as using extensions to serve effectively the function that NPAPI plugins once did ("preinstalled extensions of the Web Platform"), or to use curated lists of "blessed multicast providers", like we do with, say, Certificate Authorities. However, I think the Web's experiences with both NPAPI plugins and CAs have shown us that these models have a number of flaws, and while they may tackle some problems, they introduce entire new classes of problems, especially for privacy and security. When we think about building such capabilities, security and privacy are naturally at the forefront, and so unfortunately, we can't just handwave this away with a hope for someone introducing a new trust model that appropriately balances the needs of both users and device operators.

I think to make good progress here, at a minimum, we want to make sure we're providing nothing less than the standard of security the Web now expects - namely TLS - and that includes ensuring confidentiality, integrity, and authenticity. In addition to that, I think it may also be worth exploring whether there's a more appropriate balance to be found by a more narrowly scoped API for the problem at hand. For example, if the particular use case is for Audio / Video streaming, exploring how that can either be managed entirely by the user agent (thus helping ensure the necessary privacy and security goals are met, by keeping them transparent to the developer), or can be a purpose built API for the problem (e.g. exposing streams for MSE). I realize this does mean we don't get to achieve the full idealistic goal of the Extensible Web Manifesto, and the potential it affords developers, but as the Priority of Constituencies reminds us, the user (and their security and privacy) come first.

Chris Palmer

Mar 15, 2021, 2:24:51 PM
to net-dev, Ryan Sleevi, Yutaka Hirano, Mike West, Reilly Grant, Qiu, Binqiang, net-dev, Holland, Jake
The web security community has worked for more than a decade to get us to this good place with transport security. I don't think we should experiment with a new form of mixed content. The security properties of TLS are the minimum baseline going forward.

The work of IPC and unsafe parsing code reviews, and the likely launch review, would fall on my teams. I don't think it would be the best use of our time to support a complex experiment which I doubt could graduate to on-by-default in Stable.

I know that's disappointing, Jake, and I am sorry for that. But the security and privacy needs of the people who use the web must be our first priority. The good news is there are likely more ways to improve transport performance and cost without compromising security and privacy.

Holland, Jake

Mar 15, 2021, 4:43:42 PM
to Chris Palmer, net-dev, Ryan Sleevi, Yutaka Hirano, Mike West, Reilly Grant, Qiu, Binqiang

Hi Chris and Ryan, and thanks to you both for your responses.

 

I can understand the resource scheduling argument in your team, and I have no response to that point.  I’ll reluctantly accept it as a showstopper for now.

 

But I want to say a few things about the points raised about privacy before abandoning the thread:

 

I did some reading about the public privacy discussion after Ryan’s response, and it refined my thinking: I’m now on board with the need for browser APIs to treat end user privacy protection as a strong requirement, rather than a factor that can be considered as part of a tradeoff.

 

With that said, I was also trying to figure out how to say that transport should not be considered a special case in the general space of privacy considerations, and that “anything different from TLS is a non-starter” seems like the wrong stopping point for this discussion.  Although I think I still have some reading to do, the resources that got me to this point explained that the proper anchor for user privacy is the informed consent of the user with regard to information disclosure about them.

 

So as a brief and rough sketch: the model I was going to propose to fix the multicast receive API was based on the API for camera access (which also carries fundamental and deep privacy risks, deferring the choice in the risk tradeoff to the user).  For the UI, I was thinking that in addition to a yes/no prompt for “this site wants to get multicast content; this would let your local network know you’re consuming their content”, with a checkbox for “always allow this site”, it would have some kind of selection between “any network” vs. “allow only the current local network”, with a “see here for more” link explaining that multicast makes it possible for your local network provider to figure out what information you’re consuming.
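
As a purely hypothetical sketch of how that gating might surface to script, by analogy with the camera flow (requestPermission() and its option are made-up names, as is the MulticastReceiver object itself):

    // Hypothetical permission gate; nothing like this exists in any browser.
    const receiver = new MulticastReceiver({
      source: '198.51.100.10', group: '232.1.1.1', port: 5001,
    });
    try {
      // UA shows the prompt described above, including the
      // "any network" vs. "current local network only" choice.
      await receiver.requestPermission({ scope: 'current-network-only' });
      receiver.join();
    } catch {
      // User declined: fall back to unicast delivery.
    }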

 

That seemed to me a reasonable extension with some complexity that would have to be defined and discussed, and I can see why you’d want that done up front, so I was gearing up to accept and try to integrate that feedback.

 

But in terms of the “non-TLS is a non-starter” answer I heard from you both, it’s worth noting here that TLS does not protect against deliberate exposure to others by the remote site, and that this is commonly practiced today in all the existing methods of using CDNs to deliver traffic on the site’s behalf.  In many cases, this exposes information about the content being consumed at a specific network location to the exact same entities that would learn about it under the Multicast Receive API proposal (e.g. in the open caching systems currently being deployed).

 

In this sense, adding an explicit user permission confirmation would be an improved privacy posture relative to the current practice of transparently offloading content (which is typically done without any user consent other than their failure to run a narrow allow-list on domain names).  It’s also notable that delivery at scale generally can’t be done without this kind of offloading, and it’s necessary for exactly the same kinds of delivery events that would see the most benefits if multicast were usable instead.

 

I’m not sure how discussion landed on a consensus that TLS is the magic bullet for getting to this “good place” for privacy protection (if you’ve got a good reading list, I’d be interested to have it), but I’d urge reopening that discussion if it’s considered closed.  The technical guarantees that TLS provides end-to-end are only a part of the web privacy protection story, and the principles behind the way you’ve landed on TLS as the only acceptable transport should not be forgotten, nor should they automatically block progress on proposals like this that can offer orders-of-magnitude cost reductions in the delivery ecosystem, since this also has significant consequences for end users.

 

But regardless, thanks for an answer.  It makes the rest of the response I was working on moot for now, I guess.  (For instance, explaining the reasoning behind not embedding a narrower use case in a user agent for now--note that this would not change the privacy situation, but the idea has other relevant pros and cons that I’d be happy to discuss further at an appropriate time--as well as a few other minor responses to points raised.)

 

So I hope I can take this decision as a rain check and re-open this discussion in, say, 1-2 years’ time if we get some deployment with our other use cases, or as your team finishes with your more urgent priorities?

 

(Game and OS delivery is another important case for the broader ecosystem that can benefit from multicast, and we could spend time on those first--these can probably get a long way without browser support, but browser support would still have at least one important benefit to offer.)

 

Anyway, I appreciate the feedback and will look to incorporate it next time, or if I manage to prototype this in another browser as a first release, if there’s one that can spare the time to work with us on upstreaming the submission.

 

Best regards,

Jake

 

 


Ryan Sleevi

Mar 15, 2021, 5:48:54 PM
to Holland, Jake, Chris Palmer, net-dev, Ryan Sleevi, Yutaka Hirano, Mike West, Reilly Grant, Qiu, Binqiang
I think there's a tension here that you highlight that even we, as a broader organization and team, are continuing to struggle to work through and find the right balance to.

The idea with usable security is not that we can just abdicate every decision to the user, but that we need to empower them with agency and control and in actionable situations. The chooser flow you describe here is, in some ways, comparable to how browsers used to treat SSL/TLS. I'm not sure if you recall the days when IE and Navigator would prompt you when you entered a secure site, and prompted you when you exited a secure site. The challenge with those prompts, of course, is that they're not clear and actionable, and they end up being information detritus that wears away the foundations of trust and safety.

As a way of thinking about it, consider the past discussions about "Is TLS really that necessary if I'm not entering a password". Yet we know from those past discussions that users who are, say, reading business news or looking up information about medical conditions equally deserve and need confidentiality and integrity, as such side-channels can end up being used to identify or abuse. This is why efforts to protect the DNS lookup and the TLS exchange (via ECH) equally continue, to remove such side-channels from the equation.

To your point about CDNs, you're absolutely correct that servers can (and do!) engage in all sorts of information sharing once the traffic reaches the domain. There's ultimately not much we can do there, nor is it necessarily a bad thing: we've at least made sure we're talking to a party duly empowered by the server, and we've done our best to ensure it aligns with what the user expects. The downside, however, with your proposal, and which admittedly is quite challenging, is that at present, you also end up revealing all the traffic to the intermediary as well. I think the proposal is aware of this, as it alludes to methods of key distribution (a la DRM) that at least ensure confidentiality among participants, even if every participant ends up aware of what any other participant is viewing/receiving.

Most importantly, I don't want you to feel that this is closing the door on discussion, and I'm glad you're open to revisiting this in the future. As someone who has personally been a "multicast fan" for years (I like scrappy underdogs that are always two years away from being practical), I think you're right for highlighting that there are architectural advantages to rethinking some of the point-to-point/end-to-end designs, even if there are some weighty societal implications that might come from building a consumption-heavy system. However, the challenge is how to do that in a way that is secure by default, and which doesn't simply put the user in a position of being blamed if (when) things go wrong. The hope in exploring a more targeted API is that it gives something concrete to better help evaluate the security/privacy/performance tradeoffs, and that's a huge part of security engineering: finding the balance. The more "low-level" the API, the harder it is to find an acceptable balance, because invariably, every security/privacy improvement will come with some tradeoff for some use cases. By focusing more narrowly, it makes it easier to reason about the overall system, the implications of various design choices, the flow of data through the system (both over the network and within the page), and the individual trade-offs that may be necessary.

Holland, Jake

Mar 15, 2021, 10:45:30 PM
to rsl...@chromium.org, Chris Palmer, net-dev, Yutaka Hirano, Mike West, Reilly Grant, Qiu, Binqiang

Hi Ryan,

 

I’m glad you’re also willing to continue discussing it, and thanks.  I remain hopeful the problems can be solved if people are willing to engage and examine them and think through possible solutions, so I’m very grateful for your thoughtful comments.

 

I agree with your point about usable security, and I’m not really a big fan of a “yet another box you have to click yes on” design.  I think it’s fair to say there are too many of those and they’re not the ideal solution (though I do think they’re much better than nothing for a concerned user).  But at the same time, there aren’t many other viable methods today to give the user any agency on topics like this.  If they know consuming some content will expose data they consider sensitive, it’s better that they know and have the option to decline than otherwise.  (Again, just like the camera, which is highly privacy-sensitive, the main “actionable” part of the user confirmation is that you don’t click yes if it’s a surprise to you, and you live without the feature.  But I do agree it’s not ideal to just blame the user when they screw up.)

 

It seems like a better solution here would be that when users connect to a server, that server provides standardized assertions that can be auto-checked, detailing the privacy guarantees those servers provide, which users could tune for different privacy profiles (probably with the help of some standard allow-lists).

 

Here I’m imagining assertions like “the operators of this domain name guarantee that information collected is handled in a manner compliant with GDPR 2016/679”, and perhaps with other standards (or specific clarifications of optional actions) as they get defined.  Maybe also along with “and here is our recent auditor signature confirming compliance with these claims”, along with maybe “here is an exception count of breach incidents along with classification of their resolutions (e.g. negligence, 0-day exploited, insider policy violation, etc.)”. 

 

This could in theory be pretty analogous to the cert check that happens automatically on every TLS connection, and in general it could be auto-compared against a user-maintainable posture on the client side.
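
To make that concrete, here’s an entirely invented example of what such a machine-readable assertion might look like (no such format exists today; every field name is hypothetical):

    // Hypothetical assertion document, fetched from a well-known URL and
    // auto-compared against the user's configured privacy posture.
    const assertion = {
      operator: 'isp.example.net',
      claims: ['GDPR-2016/679'],  // regimes the operator asserts compliance with
      audit: {
        auditor: 'auditor.example.org',
        validThrough: '2022-03-01',
        signature: 'MEUCIQ...',   // auditor's signature over the claims
      },
      incidents: [
        { class: 'insider-policy-violation', resolved: true },
      ],
    };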

 

If such an infrastructure were available and users (and their browser defaults, which of course would dominate what happens in practice) could tune the levels they accept vs. decline vs. seek confirmation, it would go a long way toward having a path to ratcheting up the expectation for providers to maintain (and prove) good behavior, in much the same way that alerting on non-secure sites (and downgrading search for them) pushed the ecosystem toward TLS.

 

If such an infrastructure were present, it could apply to CDNs as well as to local network providers, so that people with a mismatch between their privacy expectations and the local CDN practices would at least know whether they needed an alternative VPN or a DOT/DOH provider to meet their needs, or whether they could place trust in their local network.

 

Such a solution could cover both the “delegating to 3rd parties” issue and also the “local networks can fingerprint my network behavior” issue (including problems like the way 95% of sites are identifiable solely from their IP address fingerprint, as well as things like multicast-joining) by giving the users the power to demand that various network entities, including their local ISP, disclose their policies and that the policies meet ordinary standards, and punishing them if they don’t.

 

(I’m late to this party, so I’m not sure why such a thing doesn’t exist already, but I don’t imagine that in 5 minutes I came up with a viable solution nobody’s thought of before, so I guess there must be some well-known problems with this idea?)

 

This idea seems to me at first glance much better than pretending that TLS solves problems that it does not solve (namely hiding semantic content information from the local ISP), and that therefore TLS must be preferred to alternatives that have the same problems but can’t pretend to solve them, such as multicast receiving.  I mean, ECH is a nice idea and I’m all for it, but unless there’s also a solution to the remote IP fingerprinting problem, treating the group-joining exposure of the content being consumed from within a household as a blocker smells a lot like theater to me, since TLS without a VPN can’t do much better even if there’s ECH everywhere and DoT to a trusted provider.  (Maybe that’s slightly too strong--I do also get the reluctance to add another new vector, I guess, but I would be happier with something that admitted regulation and policy have an important role to play here, and that it’s not as if a magic protocol can solve this problem.)

 

 

Anyway, since you raised it again, I’ll also give my thoughts on a more focused API.  Maybe discussion here can improve the next round, so thanks for engaging:

 

I would love to nail down a more focused API and to use that instead, and I think if the low-level API got adopted and saw significant use, there would also be a later series of more focused APIs that used the same underlying receive path but did more work in the user agent, mainly to get the performance benefits.  However, there are a number of complicating factors to consider:

 

  1. If it uses multicast, it still has the same fundamental privacy problems and I don’t see a way around that.  So with privacy issues as the key stopper, I don’t see a way it would help at all to get past the issue of exposing information to the local network.  (But maybe I’m missing something, and if you have some suggestions for how you imagine it working, I’d love to hear them.)
  2. This is not best viewed as a green-field situation.  There are a number of existing multicast systems, mostly doing video delivery, and all of them would have an easy transition with “compile the receiver in wasm and write a shim for your socket-joining receive path” (because they wouldn’t have to touch their sending side until they’ve got some value already established), and would have a much harder time with “adopt the winning transport protocol that I, web API designer, have chosen for you”.  I think there will eventually be a winner that will come with its own targeted API producing MSE segments or streams and a nicely open sender side implementation, but I do not think it exists today or that it’s appropriate to pick one now.  (Note that AMBI is intended to interoperate with any of these, because it’s out of band and doesn’t require touching the transport protocol.)
    To justify that claim a bit, here are the existing deployed transport protocols that I know off the top of my head:
    1. FLUTEv1 (used as one of the options in DVB-MABR)
    2. FLUTEv2 (used by ROUTE (meaning “Real-time Object Delivery over Unidirectional Transport”, without which it’s a truly horrifying acronym to search for in networking docs), which is used both in ATSC3 and as the 2nd supported transport option in DVB-MABR)
    3. NORM (used by CableLabs’ Multicast ABR)
    4. HTTP over multicast QUIC (used by BBC’s old demo, but maybe they’re moving to DVB?)
    5. At least 4 different proprietary protocols (Broadpeak, Akamai, Ramp, and at least one bespoke IPTV system owned by an ISP I’m not at liberty to disclose)
    6. ffmpeg and vlc can also produce raw TS UDP packets and RTP with multicast group destinations.
  3. Background file download is another use case that’s very important to drive ISP adoption.  We have a non-browser prototype running, and there might be an open one that could end up working if somebody put in the time, but the stopper here is client distribution, which the browser would help to solve.  This could perhaps be done as a targeted extension without undue suffering, since there’s less existing deployment; it’s a heavy lift to make this 2 separate APIs instead of 1, and it’s not clear what would be gained, but it might be reasonable.
  4. Less critical for my immediate needs but still worth considering is future use cases that would benefit if the lower-layer API was available.  Here I include MQ-style pub/sub solutions and high-volume transport of undeveloped codecs, particularly including point-cloud for VR or other 3-d realtime systems.  These are lower on my list of priorities, but I would rate it a useful advantage to allow these use cases to experiment with a worthwhile low-level API.

 

I hadn’t seen the extensible web manifesto recently, but I read it from your message’s link, and it seems to speak very well to the desire to solve these kinds of issues by writing a low-level API first, letting people use it wherever it’s applicable, and then later fixing the performance problems with a more targeted API when there’s some amount of coalescing around well-established use cases and good solutions.

 

I do expect there would be significant performance advantages for targeted APIs, so I’d be all for making them for specific use cases in due course, but I don’t think it’s the right first step.  I’m not sure I have anything to add to what the extensible web manifesto says on this point; it might as well have been written exactly to generalize the situation I think I’m looking at here.

 

(It’s also not clear it would help much with the sorts of implementation and TOCTOU challenges you raised, though it might in some cases, especially where we got to re-use existing APIs like efficient segment transport across the blink RPC boundary, so maybe there would be some benefit there.)

 

I would certainly re-think my position if a different design would get past the privacy concerns that are the current biggest roadblock.  However, I don’t see how a targeted API would help with that at all, so if that’s the blocker then there doesn’t seem to be a point in a redesign, and if the privacy concerns could be solved for multicast in general then I don’t see how it would be beneficial to cut down the applicable use cases.

 

(To be clear: I do see that there’s some complexity in the implementation issues you raised, and you’re right I’m going to want a shared memory transport for batches of packets, and this comes with security challenges that will require caution.  But this seems solvable, and not even very different from segment transport from a user agent to handle MSE.  I don’t think I’d object for instance to overlaying the packet transport on an existing stream object for getting the packets to the renderer from the user agent, with a wrap/unwrap layer around it, though I suspect it’s possible to do better.)

 

But with all that said, I’d be very interested in your thoughts on the matter, especially if you have examples that would help clarify how a targeted API could be helpful, especially to solving the privacy issues.

 

The main thing is to get the efficiency gains from multicast transport in the network, and if it actually would help solve the real problems to the point that an API could be accepted as an experiment, then if I had to write 2 different targeted APIs for the 2 use cases I care most about, that could be worth it.

 

Best regards,

Jake

tjamro...@opera.com

Mar 16, 2021, 6:57:16 AM
to Holland, Jake, rsl...@chromium.org, Chris Palmer, net-dev, Reilly Grant, Mike West
That's a very interesting topic from an economy, ecology, and congestion point of view.  I think I understand where the stance that "no TLS is a no-go" comes from.  Bearing that in mind, just a random idea:  what if there were a possibility to bootstrap DRM over TLS for such a multicast?

Let's consider many client keys which fit a single multicast DRM key.  The client receives the key over TLS, so there's no leak of information to third parties.  Without the key there's no possibility to decipher the multicast resource, so no information leak there either.  Thanks to an asymmetric cipher, a third party cannot inject malicious data.  Extra information can be inferred by a third party only if it is in possession of its own client key which fits this specific multicast resource and is an actor in the middle between client and server (it's not enough to be in the same client group receiving the same multicast resource, because the third party has to compare its own multicast stream with that of the client it wants to eavesdrop on).

Such a scenario doesn't differ much from IP address fingerprinting.
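
A minimal client-side sketch of the bootstrap with WebCrypto (the key endpoint URL and the nonce-prefixed packet framing are my assumptions; note AES-GCM here is symmetric, so by itself it wouldn't stop injection by other key holders, unlike the asymmetric variant above):

    // Sketch: bootstrap the group key over TLS, then decrypt multicast payloads.
    async function getGroupKey(): Promise<CryptoKey> {
      const raw = await (await fetch('https://example.com/group-key')).arrayBuffer();
      return crypto.subtle.importKey('raw', raw, 'AES-GCM', false, ['decrypt']);
    }

    async function decryptPayload(key: CryptoKey, packet: ArrayBuffer): Promise<ArrayBuffer> {
      const iv = new Uint8Array(packet.slice(0, 12)); // assumed per-packet nonce prefix
      return crypto.subtle.decrypt({ name: 'AES-GCM', iv }, key, packet.slice(12));
    }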

Best Regards,
Tomasz Jamroszczak

-- 
Sent using the wonderful new mail client of the Vivaldi browser. Download it at vivaldi.com.

Holland, Jake

Mar 16, 2021, 11:01:26 AM
to tjamro...@opera.com, rsl...@chromium.org, Chris Palmer, net-dev, Reilly Grant, Mike West

Hi Tomasz, and thanks for the remarks.

 

To me the network layer exposure of user-requested content to the local ISP seems independent of any DRM key information.

 

The core issue is that at layer 2, the receiver notifies the network of its group membership with IGMP or MLD so that the network knows to replicate and forward the relevant set of packets to this end user device.  The next-hop from the receiver in the network doing the routing for that traffic is inherently capable of knowing this has occurred from a specific IP address, if its operators choose to set up monitoring and logging.

 

I also assume the network is capable of discovering the semantic contents of a multicast stream, as there will generally be many avenues for doing so for any kind of broadcast situation (as you point out, one such method is by signing up as a client itself with the provider, in which case the contents can’t be hidden from that client).  For something like video conferencing it might be harder, but still not likely to be hard to at least discover that it’s video conferencing, and what the source of traffic is.

 

I do think this situation is not very different from IP fingerprinting, but I don’t think it depends on DRM keys, so I don’t think TLS bootstrapping would make a difference.

 

(Note that the authentication proposed in this spec would use AMBI to prevent any malicious injection.  It might also be possible to do the kind of asymmetric encryption you’re describing, and that might have benefits in some scenarios, but I don’t think it’s necessary or sufficient to change the main points in the privacy discussion.)

 

Best regards,

Jake

Ryan Sleevi

Mar 16, 2021, 12:13:46 PM
to Holland, Jake, rsl...@chromium.org, Chris Palmer, net-dev, Yutaka Hirano, Mike West, Reilly Grant, Qiu, Binqiang
Hi Jake,

Your message touches on a number of different directions, so I'm not sure how to productively and concisely engage. I think it does speak to some confusion, but I worry that trying to clarify that confusion will only lead to a much longer message (longer than this already is!)

For example, you touch on a privacy disclosure system akin to, say, P3P or DNT, and there's so much history there that no reply I can offer can do justice to the topic, other than that I think this is a misdirect from the core problems with this specific proposal. At best, it feels like an attempt to use "what-aboutism" to say "Well, we don't know what goes on in CDN data centers, so we shouldn't worry about the ISP". While I sincerely hope that's not your intent, and that I'm just misreading, if that was the point, the best I can say is that argument doesn't hold water. The reason that perception comes across is because this was a routine argument against requiring TLS for the web ("You can't be sure what the site does with the content once they decrypt it, so you can't call TLS secure"), but that's not an argument against TLS in transit. While we needn't let the perfect be the enemy of the good, we also shouldn't try to boil the ocean (to liberally mix metaphors); which is to say,  we shouldn't justify that it's OK to weaken privacy in new features simply because we haven't yet solved (largely unrelated) problems such as the CDN backhaul.

I'm glad we agree there are fundamental privacy problems with multicast. This is certainly the point to emphasize, and why the Web Platform does not lend itself easily to this feature. I tried to call out, and Chris re-emphasized, the importance of TLS as table stakes for this discussion. Tomasz's later reply echoed what I referenced in my message; namely, that it may be possible to build transport-level protection on multicast that is still limited to multicast participants. As you rightfully point out, that still ends up revealing metadata to the ISP; namely, the multicast groups a client is participating in, and that likely will still remain a challenge. However, we can hopefully agree that it shows that there are grounds for improvement on the proposal, because at least that gets to content confidentiality unless and until the ISP becomes a participant in the multicast group _with_ the keys being revealed to that ISP.

I'm aware that the world of multicast has ample implementation experience, and I can see why my reference to the Extensible Web Manifesto might be seen as somehow justifying bringing these legacy/privacy-insecure implementations to the Web Platform. The balance of the EWP is that we want to make sure _current_ browser features are explained and explainable, and we want to ensure that we offer robust low-level primitives that meet the _minimum_ privacy and security bar. The EWP tells us to "shoot for low level", but that doesn't mean we compromise on security/privacy; it just means we go "as low as possible within those bounds". As a team, and in the context of our collaboration with other browsers, we continue to struggle to find the right balance for that, but we all seem to agree on that basic premise.

My point in highlighting it, in the context of this, was to try to capture that your current proposal goes too low: it proposes so low-level an API that neither we, as the User's Agent, nor users themselves can be reasonably sure of its privacy or security, and that's what makes it an (effective) non-starter as proposed. I tried to capture where the floor was, to explore possible balances. Just like functionality like EME, however controversial, replaced the need for the <plugin>/<object> tag for critical use cases, I'd like to believe there's something more narrowly targeted here that can bring value to the Web Platform without sacrificing the very core that makes the Web so great: the built-in "secure by default" approach that few platforms can claim to even remotely approach. For a more concrete example, consider that we do not expose to web pages the ability to manage certificate verification or TLS parameters themselves; these are essential to the browser's understanding of the security of the page, and we cannot delegate control to the page itself, no matter how many new and interesting use cases it might potentially enable.

I don't think it's necessarily our goal to use the browser to drive ISP feature adoption, which I get the impression may be your goal, particularly with respect to file download. Unfortunately, as currently proposed, this feature would not be suitable for that, for the same reason that we block mixed content downloads, and this is, effectively, proposing mixed content. Similarly, the MQ pub/sub examples are certainly interesting and useful to think about, but as mentioned in my previous messages, mixed content being exposed actively to the page for scripting (e.g. JSON messages that can be used) is an active non-goal and would be a serious security regression.

I can understand wanting to solve this for the general case, and it's tempting to want to over-generalize a solution so that something can have maximum adoption. However, that can easily lend itself to unfocused discussion and, as I mentioned previously, to attempts to shut down any discussion of trade-offs, by suggesting it's inappropriate to rule out specific use cases, which may be necessary for security/privacy. By narrowing the focus, for example, we can better reason about questions like "Is the content of the message exposed to the page?"; for example, if only exposed to an opaque media element that prevents media extraction into script (i.e. to reduce the risk of steganographic side-channels), perhaps there's a path, in conjunction with expansions on my previous suggestion provided by Tomasz.

I realize that thinking about the Web Platform and its security/privacy requirements is often a challenging leap; few platforms are as deeply opinionated about security/privacy, and fewer still are subjected to the same collaborative care. The WICG discourse definitely provides a great way to introduce proposals for consideration by browsers and discussion with them, but it can still be quite challenging to build consensus and positive signals. As it stands, however, I don't think we can support experimentation of the proposal in Chrome, and hopefully this thread has helped provide a better understanding about why that is, and what directions to explore with the proposal to get a more positive consideration. It may be that it's not possible, and however unfortunate that may be, the security and privacy of users must come first. However, I hope this doesn't discourage you from continuing to iterate and explore, better understanding at least now where some of the non-negotiables are.

Holland, Jake

Mar 16, 2021, 2:44:03 PM
to rsl...@chromium.org, Chris Palmer, net-dev, Yutaka Hirano, Mike West, Reilly Grant, Qiu, Binqiang

Hi Ryan,

 

Thanks for this, it’s helpful and maybe does point to at least a partial path forward.

 

I’ve been assuming that symmetric shared keys are essentially pointless for confidentiality of content, because they break basic transport security principles when shared among many participants.  But it’s interesting that you and Tomasz seem to see it as an improvement because it does at least require on-path attackers to obtain a key.

 

If that would make the difference in whether it’s considered viable, I would certainly accept it as a requirement for use in a web API.  I wouldn’t normally try to argue that it would provide confidentiality that reaches the “equivalent to TLS” bar, but I agree it does add something that could be viewed as an improvement to confidentiality in transit against observers who don’t have the key.  It’s maybe at least “one step closer to TLS”.

 

(My first instinct here is that per-packet payload encryption with symmetric keys could be easily folded into AMBI as an option, and required for use in the web API at least when in a secure context--would it help to write that in?)

 

Thank you also for the references on P3P and DNT.  As I said, I’m late to the party and still catching up, and I hadn’t heard of P3P and was only vaguely aware of DNT, and don’t know much about their histories nor why attempts to provide user privacy protections wouldn’t rely heavily on tools like these in cases (such as IP fingerprinting) where the security protocols don’t provide protection against significant information disclosure.  But I’ll add them to my reading list to try to understand a bit more about them and why they haven’t been useful enough in this context.

 

And I can see why this looks like what-aboutism, and I apologize for any confusion on that front, especially if it was caused by poor phrasing on my part.

 

I was aiming to look at the issue as more like a generalization of the problem of information exposure (particularly to the first-hop network provider) for which no solution seems on the horizon.  I don’t think it’s just about solving a CDN backhaul addressing issue after ECH is more deployed; it’s a more fundamental issue about fingerprinting based on the full set of observable information about the end user’s traffic.  If IP address fingerprinting were solved, there would still be a pretty good next-best fingerprint based on any number of signals, including volume and rate of traffic, timing of inter-packet or inter-burst gaps from the sender, concurrence of traffic with other users, the amount of traffic in the reverse direction, history of user behavior, etc.  It’s hard for me to see this as essentially solvable at the protocol level, with just one last issue to solve.

 

My point is that a real solution to user privacy regarding high-level confidentiality of content against a hostile ISP requires steps that go beyond TLS, and steps that go beyond TLS can perhaps also be applied to the information disclosure that’s inherent to a technology like multicast.  (And the less-explicit higher-level point is that the goal should be a real solution to user privacy, and that it’s a mistake to throw the baby out with the bathwater when considering proposals that don’t make a material change to the achievable user privacy.)

 

To speak to the other point raised, part of the reason I’m having some trouble understanding the non-negotiables is demonstrated by the comment about file download:

 

If we consider a secure context that wanted to do a file download with a javascript/wasm implementation that constructs a file from authenticated multicast packets using the proposed API, if I understood correctly the key objection raised was that it “would not be suitable for that, for the same reason that we block mixed content downloads”.

 

But this confuses me, because the link gave 2 example reasons why mixed content has to be blocked: “insecurely-downloaded programs can be swapped out for malware by attackers, and eavesdroppers can read users' insecurely-downloaded bank statements”, neither of which is true of a broadcast file transfer built by a trusted and securely delivered web app using authenticated and integrity-verified packet payloads.  There’s no opportunity for a malware swap because of the authenticated and integrity-verified packets, and an individual user’s bank statements would not be transmitted over multicast for the same reasons that a provider would not publish them to everybody on their web home page (which they could technically do, but would not, for other reasons about the sender’s responsibility to publicly distribute only suitable non-private content).
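
To illustrate what I mean (the receiver object and verifyAmbi() are hypothetical stand-ins for the proposed API and an AMBI-style integrity check; neither exists anywhere today):

    declare const receiver: { onmessage: (pkt: { seq: number; payload: Uint8Array }) => void };
    declare function verifyAmbi(pkt: { payload: Uint8Array }): Promise<boolean>;

    // Sketch: assemble a download only from packets that pass integrity checks.
    const chunks: Uint8Array[] = [];
    receiver.onmessage = async (pkt) => {
      // Unverified packets are simply dropped, so there is no
      // opportunity for an attacker to swap in malware.
      if (!(await verifyAmbi(pkt))) return;
      chunks[pkt.seq] = pkt.payload;
    };
    // Once all chunks arrive, the verified payloads become the file.
    const file = new Blob(chunks);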

 

I’m not sure if I’m meant to understand that the proposal needs to be updated to explicitly say “when used in a secure context the payloads MUST be authenticated with integrity verification before delivery to the web app, otherwise dropped”, and that would address your objection?  If so, I apologize for the misunderstanding and I’d be happy to accept that feedback and incorporate it into an update to the proposal.

 

But I thought from the discussion the point was about something different, and I think the different thing is something like “this proposal exposes information to the local ISP about the file being downloaded by a user, and therefore it’s unsuitable for a secure context”.  But this rings a lot more hollow given that TLS (or maybe more accurately after ECH: the IP substrate on which TLS generally operates) exposes similar information today and for the foreseeable future.  I mean, I guess you can call that what-aboutism, but I don’t understand the user privacy concern that’s being protected here, especially in the typical case that this file is something like a very common software package update.  (Would it also help to add a section to the spec about the suitability of the API being for popular content?)

 

(And again, I have no objection to adding a symmetric key encryption option to AMBI, and even requiring its use for the web API, though I think it will still need a section describing the limited utility for security purposes, and that it cannot be safely relied upon for authenticity and integrity, as these need a higher bar to provide safety.  But it’s a good point that for someone without a key, there is a significant difference in the level of detail in the information available to them about the content.)

 

Anyway, sorry again for any confusion or if I seem to be raising irrelevant points, that’s not my intent and I regret that it’s coming across that way.  To me it seems like the word “secure” is overloaded here, and some of the excellent reasons for authenticity and integrity are getting conflated with a requirement for a level of confidentiality that is not actually achieved nor likely to be achievable by the currently accepted best practices or by anything presently on a roadmap.

 

Thanks again for the feedback, and I’m sorry if I’m being dense here or coming across as trying to argue in favor of weakened user protections.  That’s very much not my intent and thanks for giving me the benefit of the doubt if my awkward explanations or failures in my understanding are making it appear that way.

Holland, Jake

Jun 21, 2021, 5:49:09 PM
to Chris Palmer, net-dev, Ryan Sleevi, Yutaka Hirano, Mike West, Reilly Grant

Hi net-dev,

 

I wanted to send one last update to this thread, along with an invitation.

 

We’ve recently launched a W3C community group intending to incubate multicast capabilities in web APIs to the point they can be safely added to the web platform:

https://www.w3.org/community/multicast/

 

Please consider joining if you might be able to help.  The first meeting is this Wednesday at 8am Pacific time; the .ics is attached.

 

Thanks again for all the constructive feedback in this thread, it was instrumental to our decision to take this work in this direction.

 

Best regards,

Jake

 


Multicast Community Group Kickoff[1].ics

Holland, Jake

Sep 17, 2021, 6:49:40 PM
to rsl...@chromium.org, Chris Palmer, net-dev, Yutaka Hirano, Mike West, rei...@chromium.org

Hi net-dev, and especially Ryan and Chris:

 

I’m writing to check my understanding on the feedback we got on this thread back in March:

https://groups.google.com/a/chromium.org/g/net-dev/c/TjbMyPKuRHs/m/79PVEJl-GwAJ

 

The key top-level takeaway I saw was that in order for a multicast-related PR to be considered for Chromium, it will need consensus from web stakeholders, and will need to include a more fully fleshed-out security model that has confidentiality as part of the design, as well as authentication and integrity.

 

Driven largely by that feedback, we’ve done 2 main things so far to start addressing it (and one more doc to define the encryption scheme is presumed likely after we get further with these):

  • posted an internet draft about the security considerations, which we’re aiming to turn into a standards-track RFC:
    https://datatracker.ietf.org/doc/html/draft-krose-multicast-security
    • This was inspired by RFC 8826, which discussed the security considerations for WebRTC since, like multicast transport, it comes with some fundamental differences in the security model as compared with client/server TLS communication.
  • established a Multicast Community Group at W3C to incubate the work and find a good answer for how best to extend or interface with which APIs:
    https://www.w3.org/community/multicast/

 

I wanted to check that this response is on the right track to addressing the concerns raised.

 

If and when we get good IETF and W3C consensus through these channels, I’m hoping it’ll be appropriate to bring a robust implementation that incorporates that consensus back to Chromium as a PR, and call the early feedback from this thread addressed? Or am I still missing some key considerations?

 

Thanks and regards,

Jake

 
