Cross-origin server push


Chris Bentzel

Oct 7, 2016, 6:31:17 AM
to net...@chromium.org, security-dev
A number of folks got together earlier this week to talk about cross-origin server push in H2 and QUIC. Here are some notes.

Background

Both H2 and QUIC allow connection pooling when domains are covered by the same certificate. For example, if a certificate for the connection is for *.example.com, requests to both a.example.com and b.example.com can be handled by the same connection. Certificates can also support more disjoint domains like a.example.com and www.bentzel.net.
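
As a rough sketch of that coalescing rule (illustrative only, not Chrome's actual logic): a client holding a connection can check whether the connection's certificate also covers the other hostname. The canCoalesce helper and the hostnames below are placeholders, and real clients layer additional checks (DNS, reachability) on top of the certificate check.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

// canCoalesce reports whether the certificate presented on an existing TLS
// connection also covers another hostname. This sketch only shows the
// certificate-coverage part of the decision.
func canCoalesce(conn *tls.Conn, otherHost string) bool {
	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		return false
	}
	// A *.example.com or multi-SAN certificate makes this succeed for every
	// hostname it covers.
	return certs[0].VerifyHostname(otherHost) == nil
}

func main() {
	// Hypothetical hosts, matching the example above.
	conn, err := tls.Dial("tcp", "a.example.com:443", &tls.Config{ServerName: "a.example.com"})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Println("reuse for b.example.com:", canCoalesce(conn, "b.example.com"))
}
```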

With the latter example, it's possible that a PUSH_PROMISE for www.bentzel.net/foo.webp could be created that is associated with a request for a.example.com/index.html. A common case where this could happen is when the H2 connection is terminated by a reverse proxy that supports multiple backends. The initial client-issued request for a.example.com/index.html would have an :authority of a.example.com and the reverse proxy would route to the appropriate backend server. In the response, that backend could provide a Link: rel=preload hint for www.bentzel.net/foo.webp. Since the connection to the reverse proxy can also cover www.bentzel.net, the reverse proxy converts the preload hint to a PUSH_PROMISE (and PUSH).
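
A rough sketch of that front-end behavior, using Go's http.Pusher purely for illustration: the backend address and the Link parsing are simplified placeholders, and whether a push for an absolute cross-origin URL (like www.bentzel.net/foo.webp) is accepted at all depends on the particular server or proxy implementation.

```go
package main

import (
	"io"
	"log"
	"net/http"
	"strings"
)

// preloadTarget extracts the URL from a `Link: <...>; rel=preload` header
// value. Real Link parsing is more involved; this is deliberately minimal.
func preloadTarget(link string) string {
	if !strings.Contains(link, "rel=preload") {
		return ""
	}
	start := strings.Index(link, "<")
	end := strings.Index(link, ">")
	if start < 0 || end <= start+1 {
		return ""
	}
	return link[start+1 : end]
}

func handler(w http.ResponseWriter, r *http.Request) {
	// Hypothetical backend chosen by the proxy based on r.Host
	// (e.g. the a.example.com origin server).
	backendURL := "http://backend.internal" + r.URL.Path

	resp, err := http.Get(backendURL)
	if err != nil {
		http.Error(w, "bad gateway", http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	// If the backend hinted a preload and the client connection supports
	// push (HTTP/2), promise the resource before writing the response.
	if target := preloadTarget(resp.Header.Get("Link")); target != "" {
		if pusher, ok := w.(http.Pusher); ok {
			// A relative target stays same-origin; an absolute URL for
			// another host (the cross-origin case under discussion) may be
			// rejected by the push implementation.
			if err := pusher.Push(target, nil); err != nil {
				log.Printf("push %s failed: %v", target, err)
			}
		}
	}

	for k, vs := range resp.Header {
		for _, v := range vs {
			w.Header().Add(k, v)
		}
	}
	w.WriteHeader(resp.StatusCode)
	io.Copy(w, resp.Body)
}

func main() {
	// TLS is required for browsers to negotiate HTTP/2; cert.pem/key.pem
	// are placeholders.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", http.HandlerFunc(handler)))
}
```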

Concerns

There were two classes of concerns around this:
  • Are there fundamental protocol issues, like bypassing CORS checks?
  • Are there practical issues, like www.example.com being able to push actual responses for www.bentzel.net, even though it has no authority over that origin?
There was general belief that there were no fundamental issues. For example, CORS preflight checks would not be done at the time the response is pushed, but would be issued prior to being able to use the pushed response.

The practical issues were more of a concern. The HTTP/2 RFC includes a section covering multi-tenant servers and PUSH. However, we don't know whether this is an actual problem in deployed reverse proxies today, especially since this is fairly new and subtle behavior.

Steps Forward

There were two proposals to gather more information:
  • Do a survey of common reverse proxies and see if this is actually a problem.
  • Understand if people currently want to do cross-origin push to see if it's a benefit.
The main question is whether we start conservatively and disable cross-origin push until we are more confident that there is not a problem, or whether we allow cross-origin push and then remove support if there's a demonstrated attack. Removing support will not break sites; it will simply mean that cross-origin pushes are rejected, potentially with some performance impact.
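
To make the "rejected" behavior concrete, here is a small sketch of the conservative policy (illustrative only, not Chromium's implementation); the PushPromise type and allowPush helper are invented names for the example.

```go
package main

import "fmt"

// PushPromise is a simplified view of the pseudo-headers carried by a
// PUSH_PROMISE frame.
type PushPromise struct {
	Scheme    string
	Authority string
	Path      string
}

// allowPush implements the conservative option: only accept pushes whose
// authority matches the authority of the associated request. A more
// permissive policy might instead accept any authority covered by the
// connection's certificate.
func allowPush(requestAuthority string, p PushPromise) bool {
	return p.Scheme == "https" && p.Authority == requestAuthority
}

func main() {
	// Associated client request: https://a.example.com/index.html
	fmt.Println(allowPush("a.example.com",
		PushPromise{"https", "a.example.com", "/style.css"})) // true: same origin
	fmt.Println(allowPush("a.example.com",
		PushPromise{"https", "www.bentzel.net", "/foo.webp"})) // false: cross-origin push is rejected
}
```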

Ben Maurer

Oct 7, 2016, 2:34:28 PM
to net-dev, securi...@chromium.org, cben...@google.com


On Friday, October 7, 2016 at 3:31:17 AM UTC-7, Chris Bentzel wrote:
[...]

There were two proposals to gather more information:
  • Do a survey of common reverse proxies and see if this is actually a problem.
  • Understand if people currently want to do cross-origin push to see if it's a benefit.
FB would be interested in cross-origin push. There are many cases where we cannot move our CDN content to the same domain as the primary page but where we'd still benefit from pushing CDN content over the user's www.facebook.com connection. In addition, being able to push HTTP alternative services for another origin could be powerful (e.g., to tell the user's browser that cdn.facebook.com can reuse the connection to www.facebook.com).
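
As a sketch of that alternative-services idea (RFC 7838), purely illustrative: a response from cdn.facebook.com could advertise that the same origin is also reachable at the endpoint already serving www.facebook.com, so a browser holding that connection can reuse it. The frame-based variant of Alt-Svc could carry the same advertisement for another origin on an existing connection; the handler below only shows the header form, and the certificate/key paths are placeholders.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Hypothetical handler for cdn.facebook.com: advertise via Alt-Svc that
	// this origin is also reachable at the host:port serving
	// www.facebook.com (which, under a shared certificate, is also valid
	// for cdn.facebook.com).
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Alt-Svc", `h2="www.facebook.com:443"; ma=86400`)
		fmt.Fprintln(w, "cdn response body")
	})
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", handler))
}
```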

The main question is whether we start conservatively and disable cross-origin push until we are more confident that there is not a problem, or whether we allow cross-origin push and then remove support if there's a demonstrated attack. Removing support will not break sites; it will simply mean that cross-origin pushes are rejected, potentially with some performance impact.

I'd hate to see x-origin push get bogged down forever in this security question. IMHO, any party that is acquiring certificates that cover unrelated origins and doing a multi-tenant reverse proxy has to have a pretty high level of sophistication. I'd expect that the number of instances of this is small, and that the parties are sophisticated enough that they can quickly address the issue.

One middle ground option: maybe the reverse proxy could be required to affirmatively state that it wishes to have x-origin push (e.g., via a negotiated TLS extension). This could be a temporary solution to allow wider-scale experimentation with x-origin push while having confidence that a reverse proxy cannot be tricked into allowing pushes that it shouldn't.

Yoav Weiss

Oct 10, 2016, 4:16:56 AM
to Ben Maurer, net-dev, securi...@chromium.org, Chris Bentzel
I strongly support Ben's position here. Cross-origin push has many real-life use cases (e.g., pushing content over sharded origins).
It's up to reverse proxies that use shared certificates to make sure they don't blindly push content that they shouldn't really be pushing.



Ryan Sleevi

Oct 10, 2016, 4:47:23 PM
to Yoav Weiss, Ben Maurer, net-dev, security-dev, Chris Bentzel
On Mon, Oct 10, 2016 at 1:16 AM, Yoav Weiss <yo...@yoav.ws> wrote:
It's up to reverse proxies that use shared certificates to make sure they don't blindly push content that they shouldn't really be pushing.

Could you expand further on why you believe this would only be reverse proxies?

For example, is there something I'm missing that would prevent someone from, say, using Apache or nginx on a single IP with domain-based virtual hosting, and using a Let's Encrypt client to obtain a free certificate for multiple (unrelated) domains hosted on that same machine?

That is, specifically, the ever-prevalent shared hosting solution. Given that LE rate limits by IP, and thus specifically encourages sites (and hosting providers) to aggregate multiple names into a single certificate request, I have trouble seeing any reason that the situation would be as you describe, that is, the exception rather than the norm.
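
For concreteness, a minimal sketch of that kind of shared setup (the domains and file paths are placeholders, not any provider's actual configuration): a single listener terminates TLS with one multi-SAN certificate and routes purely on the Host header, with no reverse proxy in the usual sense.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// One certificate whose SANs cover multiple unrelated customer domains,
	// e.g. alice-blog.example and bob-shop.example. With HTTP/2 connection
	// coalescing, a browser may reuse one connection for all of them, which
	// is exactly the situation under discussion.
	mux := http.NewServeMux()

	// Name-based virtual hosting: route purely on the Host header.
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		switch r.Host {
		case "alice-blog.example":
			fmt.Fprintln(w, "Alice's site")
		case "bob-shop.example":
			fmt.Fprintln(w, "Bob's site")
		default:
			http.NotFound(w, r)
		}
	})

	// fullchain.pem / privkey.pem are what an ACME client would typically
	// write out for the combined certificate (placeholder paths).
	log.Fatal(http.ListenAndServeTLS(":443", "fullchain.pem", "privkey.pem", mux))
}
```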

Ben Maurer

Oct 10, 2016, 5:02:08 PM
to Ryan Sleevi, Yoav Weiss, net-dev, security-dev, Chris Bentzel
The level of sophistication required to do this is fairly high. To do the "multiple domains per cert" approach you'd need:

1) An automated system to request certs from Let's Encrypt (to manage changes in your set of customers)
2) An automated system to load balance domains to IPs (since there are practical limits to the size of certificates)
3) A system to load balance IPs to servers (since anybody at this level of sophistication is presumably serving more traffic than a single server)

Somebody with this level of sophistication would probably end up implementing a reverse proxy anyway. Setting up the SSL keys for multiple customers on a single box that they all have access to also sounds like a security mess.

That said, I do admit there is a risk here. Do you have an alternate proposal that would address this risk while still allowing the introduction of features that allow a single HTTP/2 connection to use push-like mechanisms cross-origin? This risk seems like something that will only grow with time, and we need to confront it now, either by taking advantage of the features already in the spec or by requiring active opt-in to the scheme.

Ryan Sleevi

Oct 10, 2016, 5:14:58 PM
to Ben Maurer, Ryan Sleevi, Yoav Weiss, net-dev, security-dev, Chris Bentzel
On Mon, Oct 10, 2016 at 2:02 PM, Ben Maurer <ben.m...@gmail.com> wrote:
The level of sophistication required to do this is fairly high. To do the "multiple domains per cert" approach you'd need:

1) An automated system to request certs from Let's Encrypt

Like Certbot?
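
To illustrate the point: obtaining and renewing certificates for an arbitrary set of hosted names is an off-the-shelf exercise. Here's a hedged sketch using golang.org/x/crypto/acme/autocert, which speaks the same ACME protocol Certbot uses; the domains are placeholders.

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

func main() {
	// Automatically obtain and renew Let's Encrypt certificates for a set
	// of hosted customer domains (placeholders). One manager, many names.
	m := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		Cache:      autocert.DirCache("certs"),
		HostPolicy: autocert.HostWhitelist("alice-blog.example", "bob-shop.example"),
	}

	srv := &http.Server{
		Addr:      ":443",
		TLSConfig: m.TLSConfig(),
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintf(w, "hello from %s\n", r.Host)
		}),
	}
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```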
 
2) An automated system to load balance domains to IPs (since there are practical limits to the size of certificates)

I don't believe this is relevant at all to our threat model. I don't believe two distinct domains represent any greater or lesser security risk than one hundred distinct domains (the current LE limit, IIRC).
 
3) A system to load balance IPs to servers (since anybody at this level of sophistication is presumably serving more traffic than a single server)

I'm sorry, but I have to directly challenge this assertion. There are a vast number of shared hosting services doing this today, without the use of a reverse proxy. Many of the major hosting providers (that is, providers that do the hosting themselves, rather than intermediaries such as CloudFlare) operate with such COTS systems and without IP-based load balancing for their customers' domains.

These are the sites I'm particularly concerned about. It flies in the face of the advice of the industry for the past several years - to not use dedicated IPs but to use SNI-based systems in which a single IP hosts multiple sites. The advent of LE, and the model in which it operates with respect to issuance, precisely amplify, rather than reduce, these concerns, and many large hosting providers are adopting LE to provide TLS for free to their consumers under this model.

Somebody with this level of sophistication would probably end up implementing a reverse proxy anyway. Setting up the SSL keys for multiple customers on a single box that they all have access to also sounds like a security mess.

It's a single key. And there are many large hosts that do just this. I encourage you to work through the top 10 shared hosting providers, and you will see a similar story.
 
That said, I do admit there is a risk here. Do you have an alternate proposal that would address this risk while still allowing the introduction of features that allow a single HTTP/2 connection to use push-like mechanisms cross-origin? This risk seems like something that will only grow with time, and we need to confront it now, either by taking advantage of the features already in the spec or by requiring active opt-in to the scheme.

My foremost goal is making sure the problem, and the underlying assumed threat model, is well understood and works not just for large sites like Facebook, Akamai, or CloudFlare, but also takes into consideration the practical realities of the many shared-hosting solutions out there.