Contact emails
jhol...@akamai.com, bq...@akamai.com
Explainer
https://github.com/GrumpyOldTroll/wicg-multicast-receiver-api/blob/master/explainer.md
Specification
TBD: API spec and design document (currently just entering Intent to Prototype).
IETF standards-track docs (all presently works in progress):
- draft-ietf-mboned-ambi
- draft-ietf-mboned-dorms
- draft-ietf-mboned-mnat
- draft-ietf-mboned-cbacc
(There will be integrated components prototyped in the browser feature based on an appropriate subset of the above specs before origin trials begin.)
Summary
Subscribe to source-specific multicast IP channels and receive UDP payloads in web applications.
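As a rough illustration of the shape such an API could take, a page might subscribe to an (S,G) channel and consume payloads like this. This is a hypothetical sketch only: the names `MulticastReceiver`, `source`, `group`, `port`, and `ondata` are illustrative placeholders, not the actual proposed surface; see the explainer for the real design.

```javascript
// Hypothetical sketch -- illustrative names, not the proposed API surface.
// An (S,G) source-specific multicast subscription delivering UDP payloads.
const receiver = new MulticastReceiver({
  source: '198.51.100.10', // sender's unicast IP (the "S" in (S,G))
  group: '232.1.1.1',      // SSM group address (the "G")
  port: 5001,
});
receiver.ondata = (packet) => {
  // packet.payload: one UDP payload, authenticated and
  // integrity-verified by the browser before delivery.
  handleSegment(packet.payload);
};
await receiver.start();
```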
Blink component
Motivation
Currently, Web application developers have no API for receiving multicast traffic from the network. All traffic for web applications thus requires a one-to-one connection with a remote device. Multicast IP provides a one-to-many data stream, and enables packet replication by the network, enabling efficient use of broadcast-capable physical media and reducing load on congested shared paths. Enabling Web applications to receive multicast would solve the receiver distribution problem that contributes to the current under-utilization of multicast on the internet.
This effort is coupled with a standardization effort in the MBONED working group at IETF and ongoing trials with multiple network operators to deploy a standardized approach for ISPs to ingest externally sourced multicast UDP traffic.
Initial public proposal
https://discourse.wicg.io/t/proposal-multicastreceiver-api/3939
TAG review
None
TAG review status
Pending
Risks
Interoperability and Compatibility
Feature adoption by browsers has an influence on whether ISPs will deploy the capability to ingest and manage externally sourced multicast traffic.
Gecko: No signal
Edge: No signal
WebKit: No signal
Web developers: Some positive support on the IETF MBONED mailing list. (Also importantly: some ISP support.)
Is this feature fully tested by web-platform-tests?
No
Link to entry on the Chrome Platform Status
https://www.chromestatus.com/feature/5683808135282688
This intent message was generated by Chrome Platform Status.
You received this message because you are subscribed to the Google Groups "blink-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to blink-dev+...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/blink-dev/17EBCB31-7583-4BFB-AEEC-1FD17DFAC7B2%40akamai.com.
Thanks Reilly for helping me find the right venue for discussion.
@Ryan: Could you say more about the fundamental privacy and security issues that don’t have a good path for resolution?
(TL;DR: I’d like to discuss the specific concerns in more detail to see whether they can actually be addressed.)
The performance benefits here are also fundamental, and IMO should be considered together with the privacy concerns as a tradeoff before being dismissed, including the relative scale of the potential problems and benefits.
Though I agree there is a privacy exposure footprint to some on-path devices (most notably just upstream of the receiving device), I’d argue that in general the exposure is pretty limited, and it’s possible to mitigate in ways similar to the privacy exposure risks from DNS or from the remote server’s IPs. (For example, by being exposed to a VPN provider of choice instead of to your local ISP, and by turning the API off when operating in enhanced privacy modes.)
I’m also prepared to argue that it would be worthwhile to explore even more extreme measures to protect privacy, if that’s what it takes to realize the performance gains this API offers.
What I’m thinking here (if necessary) could be done via a moderated allow-list that providers would have to get on and maintain good behavior to stay on. For example, if this API had to ask permission before use on a page, or if it ended up only exposed to browser extensions from the store that had to ask for permission during installation, it would still be a big step forward with regard to the capability for content owners to deploy multicast receivers, so I’d consider something like that to be on the table if the privacy concerns prove severe enough.
(NB: I’m starting out skeptical it’s necessary to go that far, but if someone was able to point out a good threat model for how the exposed info could be exploited in a way not available to them with unicast secured web traffic, that would convince me. Either way, I’d still prefer to open up experimentation via command-line options while that discussion is ongoing.)
I’m not sure whether there are other security issues you believe can’t be addressed... We have a work-in-progress spec for how we propose to authenticate the traffic asymmetrically, which we think is feasible and intend to implement. The other security concern I’ve heard so far is the danger of adding RPCs to Blink; although I understand the reluctance to accept a large and complicated patch even experimentally, I assume this too can be addressed if we can get the right test coverage and satisfy a reasonable security review team on code quality, and we’d be very happy to start the engagement on that front.
Do you have other concerns in mind besides these?
Best regards,
Jake
Hi Chris and Ryan, and thanks to you both for your responses.
I can understand the resource scheduling argument in your team, and I have no response to that point. I’ll reluctantly accept it as a showstopper for now.
But I want to say a few things about the points raised about privacy before abandoning the thread:
I did some reading about the public privacy discussion after Ryan’s response, and I wanted to say it refined my thinking, and I’m now on board with the need for browser APIs to treat end user privacy protection as a strong requirement, rather than a factor that can be considered as part of a tradeoff.
With that said, I was also trying to figure out how to say that transport should not be considered a special case in the general space of privacy considerations, and that “anything different from TLS is a non-starter” seems like a wrong stopping point for this discussion. Although I think I still have some reading to do, the resources that got me to this point explained that the proper anchor for user privacy is the informed consent of the user with regard to information disclosure about them.
So as a brief and rough sketch: the model I was going to propose to fix the multicast receive API was based on the API for camera access (which also carries fundamental and deep privacy risks, deferring the choice in the risk tradeoff to the user). For the UI, I was thinking that in addition to the yes/no prompt for “this site wants to get multicast content; this would let your local network know you’re consuming their content” (with a checkbox for “always allow this site”), it would have some kind of selection between “any network” vs. “allow only the current local network”, with a “see here for more” link explaining that multicast makes it possible for your local network provider to figure out what information you’re consuming.
That seemed to me a reasonable extension with some complexity that would have to be defined and discussed, and I can see why you’d want that done up front, so I was gearing up to accept and try to integrate that feedback.
But in terms of the “non-TLS is a non-starter” answer I heard from you both, it’s worth noting here that TLS does not protect against deliberate exposure to others by the remote site, and that this is commonly practiced today in all the existing methods of using CDNs to deliver traffic on the site’s behalf. In many cases, this exposes information about the content being consumed at a specific network location to the exact same entities that would learn about it under the Multicast Receive API proposal (e.g. in the open caching systems currently being deployed).
In this sense, adding an explicit user permission confirmation would be an improved privacy posture relative to the current practice of transparently offloading content (which is typically done without any user consent other than their failure to run a narrow allow-list on domain names). It’s also notable that delivery at scale generally can’t be done without this kind of offloading, and it’s necessary for exactly the same kinds of delivery events that would see the most benefits if multicast were usable instead.
I’m not sure how discussion landed on a consensus that TLS is the magic bullet for getting to this “good place” for privacy protection (if you’ve got a good reading list, I’d be interested to have it), but I’d urge reopening that discussion if it’s considered closed. The technical guarantees that TLS provides end-to-end are only a part of the web privacy protection story, and the principles behind the way you’ve landed on TLS as the only acceptable transport should not be forgotten, nor should they automatically block progress on proposals like this that can cheaply offer orders-of-magnitude cost benefits in the delivery ecosystem, since this also has significant consequences for end users.
But regardless, thanks for an answer. It makes the rest of the response I was working on moot for now, I guess. (For instance explaining the reasoning behind not embedding a narrower use case in a user agent for now--note that this would not change the privacy situation, but also the idea has other relevant pros and cons that I’d be happy to discuss further, at an appropriate time, as well as a few other minor responses to points raised.)
So I hope I can take this decision as a rain check and re-open this discussion in, say, 1-2 years’ time if we get some deployment with our other use cases, or as your team finishes with your more urgent priorities?
(Game and OS delivery is another important case for the broader ecosystem that can benefit from multicast, and we could spend time on those first--these can probably get a long way without browser support, but browser support would still have at least one important benefit to offer.)
Anyway, I appreciate the feedback and will look to incorporate it next time, or if I manage to prototype this in another browser as a first release, if there’s one that can spare the time to work with us on upstreaming the submission.
Best regards,
Jake
From: Chris Palmer <pal...@chromium.org>
Date: Monday, March 15, 2021 at 11:24 AM
To: net-dev <net...@chromium.org>
Cc: Ryan Sleevi <rsl...@chromium.org>, Yutaka Hirano <yhi...@chromium.org>, Mike West <mk...@chromium.org>, Reilly Grant <rei...@chromium.org>, "Qiu, Binqiang" <bq...@akamai.com>, net-dev <net...@chromium.org>, "Holland, Jake" <jhol...@akamai.com>
Subject: Re: [blink-dev] Intent to Prototype: Multicast Receive API
The web security community has worked for more than a decade to get us to this good place with transport security. I don't think we should experiment with a new form of mixed content. The security properties of TLS are the minimum baseline going forward.
Hi Ryan,
I’m glad you’re also willing to continue discussing it, and thanks. I remain hopeful the problems can be solved if people are willing to engage and examine them and think through possible solutions, so I’m very grateful for your thoughtful comments.
I agree with your point about usable security, and I’m not really a big fan of a “yet another box you have to click yes on” design. I think it’s fair to say there are too many of those and they’re not the ideal solution (though I do think they’re much better than nothing for a concerned user). But at the same time, there aren’t many other viable methods today to give the user any agency on topics like this. If they know consuming some content will expose data they consider sensitive, it’s better that they know and have the option to decline than otherwise. (Again, just like the camera, which is highly privacy-sensitive, the main “actionable” part of the user confirmation is that you don’t click yes if it’s a surprise to you, and you live without the feature. But I do agree it’s not ideal to just blame the user when they screw up.)
It seems like a better solution here would be that when users are connected to a server, that server would provide standardized assertions that can be auto-checked, that detail the privacy guarantees provided by those servers, and that users could tune for different privacy profiles (probably with the help of some standard allow-lists).
Here I’m imagining assertions like “the operators of this domain name guarantee that information collected is handled in a manner compliant with GDPR 2016/679”, and perhaps with other standards (or specific clarifications of optional actions) as they get defined. Maybe also along with “and here is our recent auditor signature confirming compliance with these claims”, along with maybe “here is an exception count of breach incidents along with classification of their resolutions (e.g. negligence, 0-day exploited, insider policy violation, etc.)”.
This could in theory be pretty analogous to the cert check that happens automatically on every TLS connection, and in general it could be auto-compared against a user-maintainable posture on the client side.
If such an infrastructure were available and users (and their browser defaults, which of course would dominate what happens in practice) could tune the levels they accept vs. decline vs. seek confirmation, it would go a long way toward having a path to ratcheting up the expectation for providers to maintain (and prove) good behavior, in much the same way that alerting on non-secure sites (and downgrading search for them) pushed the ecosystem toward TLS.
If such an infrastructure were present, it could apply to CDNs as well as to local network providers, so that people with a mismatch between their privacy expectations and the local CDN practices would at least know whether they needed an alternative VPN or a DoT/DoH provider to meet their needs, or whether they could place trust in their local network.
Such a solution could cover both the “delegating to 3rd parties” issue and also the “local networks can fingerprint my network behavior” issue (including problems like the way 95% of sites are identifiable solely from their IP address fingerprint, as well as things like multicast-joining) by giving the users the power to demand that various network entities, including their local ISP, disclose their policies and that the policies meet ordinary standards, and punishing them if they don’t.
(I’m late to this party, so I’m not sure why such a thing doesn’t exist already, but I don’t imagine that in 5 minutes I came up with a viable solution nobody’s thought of before, so I guess there must be some well-known problems with this idea?)
This idea seems to me at first glance much better than pretending that TLS solves problems it does not solve (namely hiding semantic content information from the local ISP), and that therefore TLS must be preferred to alternatives that have the same problems but can’t pretend to solve them, such as multicast receiving. I mean, ECH is a nice idea and I’m all for it, but unless there’s also a solution to the remote IP fingerprinting problem, treating the group-joining exposure of the content being consumed from within a household as a blocker smells a lot like theater to me, since TLS without a VPN can’t do much better even with ECH everywhere and DoT to a trusted provider. (Maybe that’s slightly too strong--I do also get the reluctance to add another new vector, but I would be happier with an answer that admitted regulation and policy have an important role to play here, rather than pretending a magic protocol can solve this problem.)
Anyway, since you raised it again, I’ll also give my thoughts on a more focused API. Maybe discussion here can improve the next round, so thanks for engaging:
I would love to nail down a more focused API and to use that instead, and I think if the low-level API got adopted and saw significant use, there would also be a later series of more focused APIs that used the same underlying receive path but did more work in the user agent, mainly to get the performance benefits. However, there are a number of complicating factors to consider:
I hadn’t seen the extensible web manifesto recently, but I read it from your message’s link, and it seems to speak very well to the desire to solve these kinds of issues by writing a low-level API first, letting people use it wherever it’s applicable, and then later fixing the performance problems with a more targeted API when there’s some amount of coalescing around well-established use cases and good solutions.
I do expect there would be significant performance advantages for targeted APIs, so I’d be all for making them for specific use cases in due course, but I don’t think it’s the right first step. I’m not sure I have anything really to add to what the extensible web manifesto says, on this point, it might as well have been written exactly to generalize the situation I think I’m looking at here.
(It’s also not clear it would help much with the sorts of implementation and TOCTOU challenges you raised, though it might in some cases, especially where we got to re-use existing APIs like efficient segment transport across the Blink RPC boundary, so maybe there would be some benefit there.)
I would certainly re-think my position if a different design would get past the privacy concerns that are the current biggest roadblock. However, I don’t see how a targeted API would help with that at all, so if that’s the blocker then there doesn’t seem to be a point in a redesign, and if the privacy concerns could be solved for multicast in general then I don’t see how it would be beneficial to cut down the applicable use cases.
(To be clear: I do see that there’s some complexity in the implementation issues you raised, and you’re right I’m going to want a shared memory transport for batches of packets, and this comes with security challenges that will require caution. But this seems solvable, and not even very different from segment transport from a user agent to handle MSE. I don’t think I’d object for instance to overlaying the packet transport on an existing stream object for getting the packets to the renderer from the user agent, with a wrap/unwrap layer around it, though I suspect it’s possible to do better.)
But with all that said, I’d be very interested in your thoughts on the matter, especially if you have examples that would help clarify how a targeted API could be helpful, especially to solving the privacy issues.
The main thing is to get the efficiency gains from multicast transport in the network, and if it actually would help solve the real problems to the point that an API could be accepted as an experiment, then if I had to write 2 different targeted APIs for the 2 use cases I care most about, that could be worth it.
Best regards,
Jake
Hi Tomasz, and thanks for the remarks.
To me the network layer exposure of user-requested content to the local ISP seems independent of any DRM key information.
The core issue is that the receiver notifies its local network of its group membership with IGMP or MLD so that the network knows to replicate and forward the relevant set of packets to this end user device. The next hop from the receiver in the network doing the routing for that traffic is inherently capable of knowing this has occurred from a specific IP address, if its operators choose to set up monitoring and logging.
I also assume the network is capable of discovering the semantic contents of a multicast stream, as there will generally be many avenues for doing so for any kind of broadcast situation (as you point out, one such method is by signing up as a client itself with the provider, in which case the contents can’t be hidden from that client). For something like video conferencing it might be harder, but still not likely to be hard to at least discover that it’s video conferencing, and what the source of traffic is.
I do think this situation is not very different from IP fingerprinting, but I don’t think it depends on DRM keys, so I don’t think TLS bootstrapping would make a difference.
(Note that the authentication proposed in this spec would use AMBI to prevent any malicious injection. It might also be possible to do the kind of asymmetric encryption you’re describing, and that might have benefits in some scenarios, but I don’t think it’s necessary or sufficient to change the main points in the privacy discussion.)
Best regards,
Jake
Hi Ryan,
Thanks for this, it’s helpful and maybe does point to at least a partial path forward.
I’ve been assuming that symmetric shared keys are essentially pointless for confidentiality of content, because they break basic transport security principles when shared among many participants. But it’s interesting that you and Tomasz seem to see it as an improvement because it does at least require on-path attackers to obtain a key.
If that would make the difference in whether it’s considered viable, I would certainly accept it as a requirement for use in a web API. I wouldn’t normally try to argue that it would provide confidentiality that reaches the “equivalent to TLS” bar, but I agree it does add something that could be viewed as an improvement to confidentiality in transit against observers who don’t have the key. It’s maybe at least “one step closer to TLS”.
(My first instinct here is that symmetric per-packet payload encryption with symmetric keys could be easily folded into AMBI as an option, and required for use in the web API at least when in a secure context--would it help to write that in?)
Thank you also for the references on P3P and DNT. As I said, I’m late to the party and still catching up; I hadn’t heard of P3P and was only vaguely aware of DNT, and don’t know much about their histories, nor why attempts to provide user privacy protections wouldn’t rely heavily on tools like these in cases (such as IP fingerprinting) where the security protocols don’t provide protection against significant information disclosure. But I’ll add them to my reading list to try to understand a bit more about them and why they haven’t been useful enough in this context.
And I can see why this looks like what-aboutism, and I apologize for any confusion on that front, especially if it was caused by poor phrasing on my part.
I was aiming to look at the issue as more like a generalization of the problem of information exposure (particularly to the first-hop network provider) for which no solution seems on the horizon. I don’t think it’s just about solving a CDN backhaul addressing issue after ECH is more deployed, it’s a more fundamental issue about fingerprinting based on the full set of observable information about the end user’s traffic. If IP address fingerprinting were solved there’s still going to be a pretty good next-best fingerprint based on any number of signals, including volume and rate of traffic, timing of inter-packet or inter-burst gaps from the sender, concurrence of traffic with other users, the amount of traffic in the reverse direction, history of user behavior, etc. It’s hard for me to see this as being essentially solvable at the protocol level, with just one last issue to solve.
My point is that a real solution to user privacy regarding high-level confidentiality of content against a hostile ISP requires steps that go beyond TLS, and steps that go beyond TLS can perhaps also be applied to the information disclosure that’s inherent to a technology like multicast. (And the less-explicit higher-level point is that the goal should be a real solution to user privacy, and that it’s a mistake to throw the baby out with the bathwater when considering proposals that don’t make a material change to the achievable user privacy.)
To speak to the other point raised, part of the reason I’m having some trouble understanding the non-negotiables is demonstrated by the comment about file download:
If we consider a secure context that wanted to do a file download with a javascript/wasm implementation that constructs a file from authenticated multicast packets using the proposed API, if I understood correctly the key objection raised was that it “would not be suitable for that, for the same reason that we block mixed content downloads”.
But this confuses me, because the link gave 2 example reasons why mixed content has to be blocked: “insecurely-downloaded programs can be swapped out for malware by attackers, and eavesdroppers can read users' insecurely-downloaded bank statements”, neither of which is true of a broadcast file transfer built by a trusted and securely delivered web app using authenticated and integrity-verified packet payloads. There’s no opportunity for a malware swap because of the authenticated and integrity-verified packets, and an individual user’s bank statements would not be transmitted over multicast for the same reasons that a provider would not publish them to everybody on their web home page (which they could technically do, but would not, for reasons related to the sender’s responsibility to publicly distribute only suitable non-private content).
I’m not sure if I’m meant to understand that the proposal needs to be updated to explicitly say “when used in a secure context the payloads MUST be authenticated with integrity verification before delivery to the web app, otherwise dropped”, and that would address your objection? If so, I apologize for the misunderstanding and I’d be happy to accept that feedback and incorporate it into an update to the proposal.
But I thought from the discussion the point was about something different, and I think the different thing is something like “this proposal exposes information to the local ISP about the file being downloaded by a user, and therefore it’s unsuitable for a secure context”. But this rings a lot more hollow given that TLS (or maybe more accurately after ECH: the IP substrate on which TLS generally operates) exposes similar information today and for the foreseeable future. I mean, I guess you can call that what-aboutism, but I don’t understand the user privacy concern that’s being protected here, especially in the typical case that this file is something like a very common software package update. (Would it also help to add a section to the spec about the suitability of the API being for popular content?)
(And again, I have no objection to adding a symmetric key encryption option to AMBI, and even requiring its use for the web API, though I think it will still need a section describing the limited utility for security purposes, and that it cannot be safely relied upon for authenticity and integrity, as these need a higher bar to provide safety. But it’s a good point that for someone without a key, there is a significant difference in the level of detail in the information available to them about the content.)
Anyway, sorry again for any confusion or if I seem to be raising irrelevant points, that’s not my intent and I regret that it’s coming across that way. To me it seems like the word “secure” is overloaded here, and some of the excellent reasons for authenticity and integrity are getting conflated with a requirement for a level of confidentiality that is not actually achieved nor likely to be achievable by the currently accepted best practices or by anything presently on a roadmap.
Thanks again for the feedback, and I’m sorry if I’m being dense here or coming across as trying to argue in favor of weakened user protections. That’s very much not my intent and thanks for giving me the benefit of the doubt if my awkward explanations or failures in my understanding are making it appear that way.
Hi net-dev,
I wanted to send one last update to this thread, along with an invitation.
We’ve recently launched a W3C community group intending to incubate multicast capabilities in web APIs to the point they can be safely added to the web platform:
https://www.w3.org/community/multicast/
Please consider joining if you might be able to help. The first meeting is this Wednesday at 8am Pacific time, the .ics is attached.
Thanks again for all the constructive feedback in this thread, it was instrumental to our decision to take this work in this direction.
Best regards,
Jake
Hi net-dev, and especially Ryan and Chris:
I’m writing to check my understanding on the feedback we got on this thread back in March:
https://groups.google.com/a/chromium.org/g/net-dev/c/TjbMyPKuRHs/m/79PVEJl-GwAJ
The key top-level takeaway I saw was that in order for a multicast-related PR to be considered for chromium, it will need consensus from web stakeholders, and will need to include a more fully fleshed out security model that has confidentiality as part of the design, as well as authentication and integrity.
Driven largely by that feedback, we’ve done 2 main things so far to start addressing it (and one more doc to define the encryption scheme is presumed likely after we get further with these):
I wanted to check that this response is on the right track to addressing the concerns raised.
If and when we get good IETF and W3C consensus through these channels, I’m hoping it’ll be appropriate to bring a robust implementation that incorporates that consensus back to chromium as a PR, and call the early feedback from this thread addressed? Or am I still missing some key considerations?
Thanks and regards,
Jake