Anticipating a world of encrypted SNI: risks, opportunities, how to win big

David Fifield

Aug 18, 2018, 1:25:08 AM
to traff...@googlegroups.com
Efforts are underway to add SNI encryption as an extension in TLS 1.3.
* https://tools.ietf.org/html/draft-rescorla-tls-esni-00
* https://www.ietf.org/mail-archive/web/tls/current/msg26842.html
I find this a really hopeful development. I appreciate the work of
everyone who is helping to make it a reality (some of them are on this
list). Encrypted SNI will of course be a boon for online privacy
generally, but in our world of censorship circumvention it could be the
biggest thing since the ascendance of TLS. Along with its benefits, I
foresee that encrypted SNI will change the basic game in ways that we
need to be ready for. I expect that we'll need to reevaluate our
customary models and begin to consider new challenges.

At first glance, encrypted SNI—in whatever form it may eventually
take—is a silver bullet. It's domain fronting without the downsides. It
solves all our problems, up to traffic analysis: payload encryption
prevents blocking by content, and SNI encryption protects the
destination address. The censor cannot distinguish various connections
to a TLS server and faces the old dilemma: block all, or block none. And
experience shows that we can find servers with a sufficient degree of
co-hosting that a censor will hesitate to "block all."
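To make concrete what the censor loses here: today the destination
hostname sits unencrypted in the ClientHello's server_name extension
(RFC 6066), so an on-path observer can read it before any encryption is
established. A minimal Python sketch of building and parsing that
extension (the hostname is hypothetical, and real ClientHello parsing
has more layers than shown):

```python
import struct

def build_sni_extension(hostname: str) -> bytes:
    """Build a server_name extension (RFC 6066) as it appears,
    unencrypted, inside a TLS ClientHello."""
    name = hostname.encode("ascii")
    # entry: name_type=0 (host_name), 2-byte length, then the name itself
    entry = b"\x00" + struct.pack("!H", len(name)) + name
    server_name_list = struct.pack("!H", len(entry)) + entry
    # extension_type=0 (server_name), 2-byte extension_data length
    return struct.pack("!HH", 0, len(server_name_list)) + server_name_list

def extract_sni(extension: bytes) -> str:
    """What an on-path censor can do today: read the hostname in the clear."""
    ext_type, _ = struct.unpack("!HH", extension[:4])
    assert ext_type == 0  # server_name
    (name_len,) = struct.unpack("!H", extension[7:9])
    return extension[9:9 + name_len].decode("ascii")

ext = build_sni_extension("forbidden.example")
print(extract_sni(ext))  # prints "forbidden.example"
```

Encrypted SNI closes exactly this channel: the same bytes still exist,
but only the server can decrypt them.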

So what's the catch? I don't really think there is one, at least not a
major one. SNI encryption is poised to put censorship circumvention on
substantially securer footing. As
https://tools.ietf.org/html/draft-ietf-tls-sni-encryption-03 (which I
encourage you to read) says, "Historically, adversaries have been able
to monitor the use of web services through three channels... These
channels are getting progressively closed." But what I do think SNI
encryption will do is force us to reconsider our threat models. Solving
our current batch of problems is going to uncover new problems—lesser
problems, to be sure—but ones that we have until now mostly ignored
because they were not the most pressing. Censors, too, will be forced to
evolve when they are finally deprived of their last easy traffic
distinguisher. I predict a displacement of the battleground, from the
land of firewalls to new arenas.

It's a credit to everyone's work on domain fronting, and the basic
soundness of the idea, that when it began to falter, it was not because
of the Great Firewalls of the world, but because of the Googles and
Amazons. This phenomenon is an example of what I mean when I say that
old challenges will give way to new ones. We beat the censors at their
game, and resisted direct blocking long enough for another weakness to
reveal itself; i.e., that network intermediaries don't reliably perform
the functions that we depended on. I mean that as an observation of
fact, not as implied judgement—personally I don't really blame Google
and Amazon for their policy change regarding domain fronting. While the
wisdom of the decision is debatable, and I suspect there is more to
their rationale than they have stated publicly, certainly they are under
no obligation to continue supporting an unintended feature, no matter
how useful we find it. But whatever the cause, the fact is that domain
fronting, while demonstrably robust against border-firewall censors, is
susceptible to the changing dispositions of intermediary services. We
reached this frontier of experiential knowledge only because we had
beaten the censor's usual tricks—we transcended the cat-and-mouse game.
I like to draw an analogy with human health: our caveman forebears
didn't worry about dying from heart disease, because it was almost
certain that they would be killed by something else first, a woolly
mammoth, say. It was only after the immediate threat of death by mammoth
had subsided, that humans had the comparative luxury of being concerned
about heart disease. So it is with us now: we built a cat-proof mouse,
and now we see what other worries a mouse has.
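For readers who haven't worked with domain fronting directly: the trick
is that the name visible to the censor (in DNS and plaintext SNI) and
the name the CDN actually routes on (the Host header inside the
encrypted stream) can differ. A sketch with hypothetical domains, not
any particular provider's setup:

```python
# Hypothetical domains, for illustration only.
FRONT_DOMAIN = "allowed-cdn.example"   # visible in DNS and plaintext SNI
HIDDEN_DOMAIN = "forbidden.example"    # travels only inside the TLS tunnel

def fronted_request(path: str) -> bytes:
    """The HTTP request that travels *inside* the TLS tunnel. The censor
    sees only FRONT_DOMAIN on the wire; the CDN reads the encrypted
    Host header and routes the request to HIDDEN_DOMAIN."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {HIDDEN_DOMAIN}\r\n"
            f"Connection: close\r\n"
            f"\r\n").encode()

# The TLS handshake, by contrast, would be opened with something like
#   context.wrap_socket(sock, server_hostname=FRONT_DOMAIN)
# so the only name in cleartext on the wire is the front's.
```

Encrypted SNI gets the same hiding effect without the mismatch between
layers, which is why it can be thought of as domain fronting without the
downsides.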

I can see something similar playing out with encrypted SNI, only on a
larger and more pervasive scale. Network intermediaries—CDNs, app
stores, hosting providers—are going to face more and more pressure: as
other links in the chain of communication are strengthened, those
intermediaries will become attractive targets of action by censors. We
already see examples of censors having to step out of their accustomed
comfort zones because of generally increasing Internet security, for
example when the government of China pressured Apple to yank VPN apps
from its app store. I contend that if the government had the ability to
block apps all by itself, without petitioning Apple, then that's what it
would have done. That the censor resorted to pressuring a third party
shows a certain weakness, but the fact that it succeeded shows it is
still strong enough for its purposes. It also highlights a shift in
moral responsibility. If the government were able to block apps without
asking, then Apple could just throw up its hands and say: "not my
fault." But because the censor has no choice but to ask, Apple must make
the deliberate choice of whether to become an agent—however unwilling—of
censorship.

We circumvention designers have customarily assumed that network
intermediaries are benevolent, or at least non-malicious—that they do
not collaborate with a censor. We assumed so, because the risk of direct
blocking by a censor overshadowed any other risk. In a world of
encrypted SNI, where the direct risk from the censor is greatly
diminished, we will need to reexamine this assumption. Intermediaries
will become de facto gatekeepers of information, to an even greater
degree than they are now, and they'll be in the unenviable position of
being the logical place at which to implement censorship. As things
stand now, when a court in India orders a site blocked, it's Airtel's
problem to block it. But when encrypted SNI renders Airtel unable, it'll
be Cloudflare's problem. Now, if I had to choose between the good will
of Cloudflare et al. and that of the GFW, there's no comparison:
obviously you choose Cloudflare every time. And yet, we can't overlook
that Cloudflare once booted a site on a CEO's whim; nor that Google
secretly started building a censored search engine for China. The
operators of network intermediaries, and their commitment to human
rights, will be tested more than ever, and the population of Internet
users will increasingly rely on them to do the right thing.

As circumvention designers, one thing we can do to help those services
help themselves is not to proxy directly from the services themselves,
but to use at least one additional encrypted hop to a separate proxy
server. That way, it becomes technically harder for the services to do
surgical blocking.
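A rough sketch of what that extra hop looks like: the fronted service
runs a blind relay that forwards every connection to one fixed bridge
(the address below is hypothetical), so it never learns the user's final
destination and has nothing to block surgically. Minimal Python, with
the TLS layers omitted for brevity:

```python
import socket
import threading

# Hypothetical next hop: one fixed bridge that all traffic is forwarded
# to, regardless of where the user is ultimately going.
BRIDGE_ADDR = ("bridge.example", 443)

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until EOF; two of these make a full relay."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def relay(client: socket.socket) -> None:
    """Blind relay: the intermediary sees only ciphertext destined for
    the bridge, never the user's true destination."""
    upstream = socket.create_connection(BRIDGE_ADDR)
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
    pump(client, upstream)
```

Because the client's end-to-end encryption terminates at the bridge
rather than at the fronted service, the service cannot inspect or
selectively drop individual users' streams even if pressured to.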

I have to admit that I don't fully understand the apparent enthusiasm
for encrypted SNI from groups that formerly were not excited about
domain fronting. It's possible I misunderstand some subtlety, but to my
mind, they both should amount to about the same thing from their
perspective. The primary difference is one of scale. The stated concerns
with domain fronting also apply to encrypted SNI; in particular that if
one customer gets blocked, it will have a collateral effect on other
customers. Maybe the difference in scale is really it: the cloud
companies are happier to bet against blocking when *all* their
customers' domains are potentially affected, rather than just one. It's
a rational enough viewpoint (greater potential collateral damage → lower
probability of blocking), but to my mind encrypted SNI doesn't
fundamentally alter the nature of the game, it just raises the stakes.
Don't get me wrong: I welcome the adoption of encrypted SNI for whatever
reason. It's better than domain fronting, it'll be nice to have it in a
standard, and once we have it we won't want to go back. But I hope that
operators understand what they're getting into. Will they get cold feet
when push comes to shove—when a future version of Telegram uses
encrypted SNI and Russia again blocks millions of IPs? Or when malware
adopts it for C&C and infosec blue teams get annoyed?

I said earlier that I didn't see any major catch with encrypted SNI. The
minor catch I see is the potential for increased centralization. TLS
with encrypted SNI is likely to be the most effective form of
circumvention, which means that unless you're a major player, or hosted
on one, you'll be at increased risk of being blocked. I've read some
criticism online of circumvention systems, like domain fronting, that
route traffic through the major cloud companies. On the one hand, I
find that kind of criticism annoying, because it's not that the use of
centralized services is a desired, designed-in feature; it's that we
don't yet know how to do it better. Circumvention is already hard
enough, and by demanding that it be simultaneously decentralized, these
critics are asking us not only to juggle, but to do so backwards on
roller skates. But on the other hand, I can sympathize with their point
of view. Despite the difficulty, we *should* aspire to better designs. I
dislike giving connection metadata to Amazon and Microsoft as much as
anyone. Unfortunately, encrypted SNI is likely to move us even farther
from the decentralized end of the scale. It will be so effective, and so
easy to use, that I predict there will be a convergence of systems using
it. We see something like that effect today, where there is a perception
that if you want to resist DoS, you have no choice but to be on one of
the big CDNs. The outcome I fear for the web is something like we have
today with SMTP, where the costs of setting up an independent server are
so great as to make the choice effectively impossible. But I don't want
to overblow my concern here. We should be thinking about ways to
decentralize, but encrypted SNI is worth pursuing even if we can't think
of any.

What are the risks to reaching a future of easy and effective
circumvention using encrypted SNI? The worst case is if the proposal
fails or is permanently stalled: we'll be stuck in a world that is
pretty much like the world of today, except more hostile to domain
fronting, waiting for something else to come along. As I understand it,
draft-rescorla-tls-esni-00 is subject to change before standardization,
and I suppose there's a chance it could morph into something so unwieldy
or undeployable that it fails despite standardization. Most of the
discussion that I've seen so far has been positive, but not all
stakeholders at the IETF love the idea; in particular I get the
impression that some people rely on plaintext SNI for monitoring or
regulatory compliance, and encrypted SNI will make their lives more
difficult. So we have to watch out for it being neutered in a way that
enables censorship. Crucially, the value of encrypted SNI for
circumvention depends on its adoption. If at first it's only
circumventors and malware authors using encrypted SNI, then censors and
security firewalls will start to block it, and then it's permanently
skunked, no use to anybody. What we need is for at least one of the
major browsers to implement encrypted SNI and (importantly) enable it by
default. It's browsers that have to lead the way here, just as they
effectively snuck TLS deployment past the censors' notice, until it was
too late to do anything about it.

And what about traffic analysis; that is, classification based on the
timing and size of packets in an SNI-encrypted TLS stream? My gut
feeling is that it still won't be the first tool that censors reach for.
I see pressure on third parties as a more likely threat. But it becomes
more likely with each passing day, and anyway, my instinct could be
wrong. So I think that research on traffic analysis obfuscation will
become more and more relevant.
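One of the standard countermeasures such research studies is length
obfuscation. A toy sketch, padding every message to fixed-size cells
(512 bytes here is an arbitrary illustrative choice, though it echoes
Tor's cell size); real systems also have to address timing, direction,
and volume, which this ignores:

```python
import os

CELL = 512  # fixed wire unit; illustrative, not a recommendation

def pad_to_cells(payload: bytes) -> bytes:
    """Frame the payload with a 2-byte length prefix, then pad with
    random bytes so the wire length is a whole number of cells.
    (The 2-byte prefix limits payloads to 65535 bytes; longer data
    would need to be split across frames.)"""
    framed = len(payload).to_bytes(2, "big") + payload
    pad_len = (-len(framed)) % CELL
    return framed + os.urandom(pad_len)

def unpad(cells: bytes) -> bytes:
    """Recover the original payload from a padded frame."""
    n = int.from_bytes(cells[:2], "big")
    return cells[2:2 + n]
```

The point of the sketch: any two payloads that fall in the same bucket
produce identical on-wire lengths, which removes one feature a traffic
classifier would otherwise key on.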

Martin Johnson

Aug 20, 2018, 1:52:18 AM
to traff...@googlegroups.com
Thank you David for sharing news about this very promising development, and your very precise and nuanced reflections on what this will mean.

I agree with your prediction that censorship powers will continue to shift from firewalls to CDNs, app stores and hosting providers. Regarding Google's and Amazon's lack of support for domain fronting, you said "they are under no obligation to continue supporting an unintended feature" - that's true from a strictly legal point of view. I think they should be held to a higher standard than the legal minimum, though. For one, because they themselves claim a higher moral ground ("Don't be evil" etc.). Also, if we defend free speech, and these companies hurt free speech, wittingly or not, then we must speak up and - yes - blame them. Apple's policy to censor its App Store, for example, means that there is almost nothing at all we can do to help iOS users circumvent censorship. Imagine a future where Google, Amazon, Cloudflare etc. all actively help China's censorship authorities execute their mission - in such a world, what do we in the circumvention community do? If we don't come up with an effective approach, we lose.

On a more practical level, this is great advice and deserves repeating:

"As circumvention designers, one thing we can do to help those services
help themselves is not to proxy directly from the services themselves,
but to use at least one additional encrypted hop to a separate proxy
server. That way, it becomes technically harder for the services to do
surgical blocking."

--
You received this message because you are subscribed to the Google Groups "Network Traffic Obfuscation" group.
To unsubscribe from this group and stop receiving emails from it, send an email to traffic-obf+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

John Sarapata

Aug 20, 2018, 7:58:15 AM
to Martin Johnson, traff...@googlegroups.com
Thanks for a great note, David. I've had some experience talking to Google on domain fronting, so at the risk of walking into hostile territory, I thought I'd chime in.

I've heard three main objections.
1. Domain fronting is a hacky side effect of less than tight standards and increases our attack surface.
2. While we in this list tend to focus on

John Sarapata

Aug 20, 2018, 7:58:47 AM
to Martin Johnson, traff...@googlegroups.com
Oops. Mistaken send in my phone. Rest to follow.

John Sarapata

Aug 20, 2018, 8:14:00 AM
to Martin Johnson, traff...@googlegroups.com
Resuming at objections:
1. Side effect
2. We look at DPI as censoring, but traffic ops people pay more attention to network efficiency. They like to use routers that set up the most efficient path between the host and user. Hiding the real server causes higher latency and more internal network traffic, which is what a lot of engineers care about most. One Google VP used to famously say "fast is my favorite feature."
3. The main one in my view is the issue of consent. People who domain front are using the platform as collateral damage without even including the platform in the discussion. When Signal used App Engine to front, Google was blocked in Egypt and didn't have a chance to weigh the benefits. This sort of public spotlight doesn't do platforms any good.

To add to the collateral issue, Google had a growing cloud business. While we might use our own properties as collateral, and have done so in the past, it becomes more complicated when you look at customers. It is less clear that we have the right to put up Snapchat as collateral without their consent, for example.

There was also an article recently that Russian hackers were using domain fronting on AppEngine to exfiltrate data, masking it as legitimate traffic.

All of this gave us headwinds internally, and I suspect other big platforms had the same discussion.

Ben Schwartz, also on this list, is deeper in the standards work, but I am intrigued by ideas like secondary certs. If we allowed sites to put in tags that say they will front for other domains securely, it puts us in a stronger position. Also, having a well-thought-through standard makes for much easier discussion with product teams. The main downside with secondary certs IMO is that they require people to be on the same CDN. I think more flexible proxying would make for larger collateral.

John