
Proposed Mozilla Hosting - HTTPS by default


Patrick McManus

Apr 10, 2014, 11:26:22 AM
to mc...@mozilla.com, sar...@mozilla.com, gl...@mozilla.com, Doug Turner, Sid Stamm, Jake Maul, mozilla.dev.planning group, Richard Barnes, Julien Vehent, Daniel Veditz, oz...@mozilla.com
Hi Everyone - This post continues a side conversation on dev-planning which
started last week on yammer. My apologies for the to: line, it carries over
interested folks from the original discussion.

The discussion was whether or not all Mozilla-hosted web content should be
available over https by default. I am going to argue that some version of
"HTTPS everywhere is best[*]" should be Mozilla's default position for both
clients and servers. It flows from this that Mozilla-hosted resources
should be made available over https unless they have some particularly
compelling reason not to be. I'm hoping we can get consensus on the
principle of https hosting by default.

First, I want to thank our ops team for the work they've been doing here -
they have been doing great work getting things migrated to support https
recently. It's been amazing - and it takes time - this is not a criticism of
that process at all. As an example of good news, I recently filed a bug
about the public suffix list not being served with https and they brought
that up to speed much faster than I anticipated. Good stuff. Thank you!

I'm talking about current and future projects that don't have https in
their plans, but imo should. There are several that I could cite as
examples. but I'm going to avoid doing so because I think it misses the
larger point: user's confidentiality is better served by https than by
http. Mozilla can lead on this and Individuals should be able to depend on
us serving their interest. We know that cleartext is being abused out there.

The best supporting argument here is principle 4 of the Manifesto:
Individuals' security and privacy on the Internet are fundamental and must
not be treated as optional. When framed like that the content does not
matter - it is our place to do what we reasonably can to ensure their
privacy during the act of consumption. https protects not only the security
of the data, but it contributes to the confidentiality of the consumer. It
is not perfect (today especially heartbleedingly so), but it is beneficial
for transfers of even public data. By analogy, I am pleased that my public
library does not log my borrowing history on a chalk board in the lobby -
the books hold no secrets, but my choices in consumption are something I
choose to keep confidential. (unless I browbeat you into reading one of my
favorites :))

No doubt we can nitpick the particulars - and there clearly are operational
challenges here, chiefly in managing the certs. I'm pretty familiar with all
of the costs and I think we've already shown they are generally
surmountable. But where there is pain I think it's really important to
embrace that pain and push for ways to make the open web operate better,
rather than just sidestepping the issues. This is important. Let's be
awesome and lead. That's our raison d'être, right?

-Patrick

[*] I often get a follow-up question of why Firefox doesn't integrate HTTPS
Everywhere, the EFF add-on. The answer is that it's not possible to really
do this rewriting correctly and independently on the client side - but I'm
really fond of the notion.

Benjamin Kerensa

Apr 10, 2014, 11:48:45 AM
to Patrick McManus, gl...@mozilla.com, oz...@mozilla.com, mozilla.dev.planning group, Julien Vehent, Daniel Veditz, Jake Maul, Doug Turner, mc...@mozilla.com, Richard Barnes, Sid Stamm, sar...@mozilla.com
> The best supporting argument here is principle 4 of the Manifesto:
> Individuals' security and privacy on the Internet are fundamental and must
> not be treated as optional. When framed like that the content does not
> matter - it is our place to do what we reasonably can to ensure their
> privacy during the act of consumption. https protects not only the
security
> of the data, but it contributes to the confidentiality of the consumer.

+1 to implementing changes that will enhance users' privacy and security
while using sites and services Mozilla offers.

Anne van Kesteren

Apr 10, 2014, 11:50:58 AM
to Patrick McManus, gl...@mozilla.com, oz...@mozilla.com, mozilla.dev.planning group, Julien Vehent, Daniel Veditz, Jake Maul, Doug Turner, mc...@mozilla.com, Richard Barnes, Sid Stamm, sar...@mozilla.com
On Thu, Apr 10, 2014 at 5:26 PM, Patrick McManus <pmcm...@mozilla.com> wrote:
> But where there is pain I think its really important to
> embrace that pain and push for ways to make the open web operate better,
> rather than just side stepping the issues. This is important. Let's be
> awesome and lead. That's our raison d'etre, right?

Can we lead with HSTS flags out? :-)


--
http://annevankesteren.nl/

Doug Turner

Apr 10, 2014, 11:54:05 AM
to Anne van Kesteren, gl...@mozilla.com, oz...@mozilla.com, mozilla.dev.planning group, Julien Vehent, Daniel Veditz, Jake Maul, mc...@mozilla.com, Patrick McManus, Richard Barnes, Sid Stamm, sar...@mozilla.com
There are other techs like CPS that I want to cheerlead for as well. We need to be leading here. I encourage you, when rolling out a site, to make it a requirement. As for MoCo stuff, if you get pushback, please reach out to me or to gal.

Doug
--
Doug Turner



Doug Turner

Apr 10, 2014, 11:55:23 AM
to Anne van Kesteren, gl...@mozilla.com, oz...@mozilla.com, mozilla.dev.planning group, Julien Vehent, Daniel Veditz, Jake Maul, mc...@mozilla.com, Patrick McManus, Richard Barnes, Sid Stamm, sar...@mozilla.com
CSP! :)

--
Doug Turner

Doug Turner

Apr 10, 2014, 12:50:04 PM
to Richard Barnes, gl...@mozilla.com, oz...@mozilla.com, mozilla.dev.planning group, Julien Vehent, Daniel Veditz, Jake Maul, mc...@mozilla.com, Patrick McManus, Sid Stamm, sar...@mozilla.com
I suspect the next step is to get IT buy-in (I'll work on that). They will need to audit the sites that don't do https, and decommission them or add TLS.


--
Doug Turner


On Thursday, April 10, 2014 at 9:04 AM, Richard Barnes wrote:

> At the level of motivations, this is all motherhood and apple pie.
>
> What are the next steps?


Zack Weinberg

Apr 10, 2014, 12:58:15 PM

On 04/10/2014 11:26 AM, Patrick McManus wrote:
> Hi Everyone - This post continues a side conversation on
> dev-planning which started last week on yammer. My apologies for
> the to: line, it carries over interested folks from the original
> discussion.
>
> The discussion was whether or not all Mozilla hosted web content
> should be available over https by default. I am going to argue that
> some version of "HTTPS everywhere is best[*]" should be Mozilla's
> default position for both clients and servers. It flows from this
> that mozilla hosted resources should be made available over https
> unless they have a some particular compelling reason not to be. I'm
> hoping we can get consensus on the principle of https hosting by
> default.

As a security nerd I 100% support this principle.

There may need to be a small handful of exceptions - the only thing
that comes to mind is that the path from 'mozilla.org' to initial
Firefox download *might* need to be accessible over plain HTTP for the
sake of people stuck without any usable HTTPS client.

zw

Martin Thomson

Apr 10, 2014, 1:10:32 PM
to Doug Turner, gl...@mozilla.com, oz...@mozilla.com, mozilla.dev.planning group, Julien Vehent, Daniel Veditz, Richard Barnes, mc...@mozilla.com, Patrick McManus, Jake Maul, Sid Stamm, sar...@mozilla.com
On 2014-04-10, at 09:50, Doug Turner <do...@mozilla.com> wrote:

> I suspect it is to get IT buy in (I’ll work on that). They will need to audit the sites that don’t do https, and decommission them or add tls.

Decommissioning HTTP access entirely might not be possible for some services, as Zack points out. But we can get people on HTTPS by using HSTS. By default, HSTS recommends a hard redirect, which is basically equivalent to disabling HTTP. I think that it was the SSL Everywhere project that used a tiny HTTPS-sourced image on their HTTP pages. If that is retrieved, the response includes the HSTS header and next time (i.e., when the user navigates), everything is secure.
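[A minimal sketch of the hard-redirect-plus-HSTS pattern described above. The function, host names, and max-age value are illustrative only, not Mozilla's actual configuration.]

```python
# Sketch of "port 80 only redirects, https always sends HSTS". All names
# and values here are illustrative examples.

HSTS = "max-age=31536000; includeSubDomains"  # one year, all subdomains

def respond(scheme, host, path):
    """Return (status, headers) for a request to scheme://host/path."""
    if scheme == "http":
        # Port 80 does nothing but bounce clients to https.
        return 301, {"Location": "https://%s%s" % (host, path)}
    # Every https response carries HSTS, so once a browser has seen it
    # (even via a tiny https-sourced image on an http page), it rewrites
    # future http:// navigations to https:// on its own.
    return 200, {"Strict-Transport-Security": HSTS}
```

Once the header has been cached by the browser, the plain-HTTP redirect is only ever hit by first-time visitors.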

David E. Ross

Apr 10, 2014, 2:58:38 PM
Would your site certificate chain to a generally-available, commercial
root? Or do you propose that Mozilla create its own root for this? In
the latter case, how would someone without a Mozilla-based browser -- a
browser whose developer chooses not to include the Mozilla root --
access your site?

--

David E. Ross
<http://www.rossde.com/>

On occasion, I filter and ignore all newsgroup messages
posted through GoogleGroups via Google's G2/1.0 user agent
because of spam, flames, and trolling from that source.

Patrick McManus

Apr 10, 2014, 3:18:33 PM
to David E. Ross, mozilla.dev.planning group
I'm not making a special proposal about the cert - I would expect they
would come from normal, broadly accepted CAs and be minted in the usual
fee-for-signature kind of way.
> _______________________________________________
> dev-planning mailing list
> dev-pl...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-planning
>

Mike Hommey

Apr 10, 2014, 6:37:16 PM
to Patrick McManus, gl...@mozilla.com, oz...@mozilla.com, mozilla.dev.planning group, Julien Vehent, Daniel Veditz, Jake Maul, Doug Turner, mc...@mozilla.com, Richard Barnes, Sid Stamm, sar...@mozilla.com
On Thu, Apr 10, 2014 at 11:26:22AM -0400, Patrick McManus wrote:
> Hi Everyone - This post continues a side conversation on dev-planning which
> started last week on yammer. My apologies for the to: line, it carries over
> interested folks from the original discussion.
>
> The discussion was whether or not all Mozilla hosted web content should be
> available over https by default. I am going to argue that some version of
> "HTTPS everywhere is best[*]" should be Mozilla's default position for both
> clients and servers. It flows from this that mozilla hosted resources
> should be made available over https unless they have a some particular
> compelling reason not to be. I'm hoping we can get consensus on the
> principle of https hosting by default.

Is performance a compelling reason? Because here's the problem: HTTPS adds
significant overhead. Just connecting to www.mozilla.org takes more than 500ms
from where I am, while connecting via HTTP takes about 120ms, because of the
additional round trips that SSL/TLS requires. I guess it's worse on 3G.

Mike
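[For intuition, the gap Mike reports is roughly what a round-trip model predicts. A back-of-envelope sketch; the RTT and round-trip counts are illustrative assumptions, not measurements of mozilla.org.]

```python
# Back-of-envelope model of connection setup cost: TCP costs one round
# trip, a legacy TLS handshake adds two more, and an out-of-band OCSP
# fetch adds a flat cost. All numbers are illustrative.

def setup_ms(rtt_ms, tls_round_trips=0, extra_ms=0):
    """Estimated time to a usable connection, in milliseconds."""
    return (1 + tls_round_trips) * rtt_ms + extra_ms

rtt = 120  # a high-latency client, in milliseconds
http_cost = setup_ms(rtt)                                    # 120 ms
https_cost = setup_ms(rtt, tls_round_trips=2, extra_ms=200)  # 560 ms
```

With numbers in that range, a ~120ms vs ~500ms gap is unsurprising - and it shows the cost is round trips, not CPU, which is exactly what stapling, false start, and fewer handshakes attack.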

Patrick McManus

Apr 10, 2014, 9:42:27 PM
to Mike Hommey, byron jones, oz...@mozilla.com, mozilla.dev.planning group, Julien Vehent, Daniel Veditz, Jake Maul, Doug Turner, mc...@mozilla.com, Richard Barnes, Sid Stamm, sar...@mozilla.com
On Thu, Apr 10, 2014 at 6:37 PM, Mike Hommey <m...@glandium.org> wrote:

>
> > clients and servers. It flows from this that mozilla hosted resources
> > should be made available over https unless they have a some particular
> > compelling reason not to be. I'm hoping we can get consensus on the
> > principle of https hosting by default.
>
> Is performance a compelling reason?


imo, in the general case client performance is not a reason to avoid https.
As an existence proof: if Google can run search primarily over https,
then mozilla.org ought to be able to handle the performance implications
too.

pro tips for servers:
* embrace OCSP stapling. This saves a whole separate http transaction
during the first handshake to a domain, generally at least 200ms and often a
lot more. A quick look shows it enabled on mozorg.cdn.mozilla.net but not
yet on mozilla.org
* Get ready for http/2 in early fall. It's going to completely eliminate
lots of handshakes, which are by far the slowest part of https. Set up
domains using wildcard certs whenever possible to encourage coalescing of
connections across origins.
* If you have big certs (a quick look shows that we do), make sure the
servers are using TCP IW10. It looks like ours are, from a spot check.
* use NPN or ALPN and an ephemeral cipher suite to encourage false start -
even if you're just serving HTTP/1

If a server does this, there is just a 1-rtt penalty on time to first byte
for https compared to http - and thanks to http/2 we will ramp up the
subresources MUCH faster. We could do this with spdy now, but given the
relatively imminent standardization of http/2, waiting for that makes sense.
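[On the client side, the ALPN advertisement Patrick mentions is a one-liner with Python's ssl module. A sketch; the protocol list is an assumption about what a given server offers.]

```python
import ssl

# Client TLS context that advertises application protocols via ALPN, so
# a capable server can pick h2 (HTTP/2) inside the TLS handshake itself,
# with no extra negotiation round trip afterwards.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # preference order

# After wrapping a socket and handshaking, selected_alpn_protocol() on
# the wrapped socket reports the server's choice (or None if ignored).
```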

Reed Loden

Apr 10, 2014, 10:08:21 PM
to Patrick McManus, byron jones, oz...@mozilla.com, mozilla.dev.planning group, Julien Vehent, Daniel Veditz, Jake Maul, Doug Turner, mc...@mozilla.com, Mike Hommey, Richard Barnes, Sid Stamm, sar...@mozilla.com

Julien Vehent

Apr 10, 2014, 11:31:23 PM
to Patrick McManus, byron jones, oz...@mozilla.com, mozilla.dev.planning group, Daniel Veditz, Jake Maul, Doug Turner, mc...@mozilla.com, Mike Hommey, Richard Barnes, Sid Stamm, sar...@mozilla.com
Hi Patrick,

Jake and I spent a lot of time over the last year working on improving
the server side SSL at Mozilla. Most of it is captured in bug 901393 and
subs, and this wiki page: https://wiki.mozilla.org/Security/Server_Side_TLS

I added a few comments on the specifics below.

- Julien

On Thu 10.Apr'14 at 21:42:27 -0400, Patrick McManus wrote:
> On Thu, Apr 10, 2014 at 6:37 PM, Mike Hommey <m...@glandium.org> wrote:
>
> >
> > > clients and servers. It flows from this that mozilla hosted resources
> > > should be made available over https unless they have a some particular
> > > compelling reason not to be. I'm hoping we can get consensus on the
> > > principle of https hosting by default.
> >
> > Is performance a compelling reason?
>
>
> imo in the general case client performance is not a reason to avoid https.
> As an existence proof - If google can run search over primarily https,
> then mozilla.org ought to be able to handle the performance implications
> too.

Google gets fast ECDHE performance using OpenSSL and optimized NIST
curves [1]. Our SSL termination, Riverbed Stingray (ZLB), doesn't yet support
ECDHE. But we have an ongoing relationship with the dev team at Riverbed
to get it deployed later this year.

[1] http://vincent.bernat.im/en/blog/2011-ssl-perfect-forward-secrecy.html

> pro tips for servers:
> * embrace OCSP stapling. This saves a whole separate http transaction
> during the first handshake to a domain, generally at least 200ms. Often a
> lot more. A quick look shows it enabled on mozorg.cdn.mozilla.net but not
> yet on mozilla.org

Brian Smith had a plan for deploying OCSP Stapling in Firefox. Do you
know what happened to that? Can you share a timeline?

On the infra side, we have a version of ZLB that supports OCSP Stapling
that is deployed in Mozilla Labs. This version is not yet deployed on
our main production, but it's in the pipe. Since it's a major version
upgrade, Jake and his team at webops need to allocate extra time for
testing and deployment.

It would be truly awesome to synchronize the deploy of OCSP Stapling
on the infra side, with the release in Firefox. Let's work together on
that.

> * Get ready for http/2 in early fall. Its going to completely eliminate
> lots of handshakes, which are by far the slowest part of https. Setup
> domains using wildcard certs when ever possible to encourage coalescing of
> connections across origins.

The folks at Riverbed have expressed interest in working with us to test
and deploy HTTP/2 asap. I'm happy to introduce you to them if you'd like.

> * If you have big certs (a quick look shows that we do) make sure the
> servers are using TCP IW10. It looks like ours are from a spot check.
> * use NPN or ALPN and a ephemeral cipher suite to encourage false start -
> even if you're just serving HTTP/1
>
> If a server does this, there is just a 1 rtt penalty on time to first byte
> for https compared to http - and thanks to http/2 we will ramp up the
> subresources MUCH faster. We could do this with spdy now, but given the
> relative imminent standardization of http/2 waiting for that makes sense.

As far as SPDY goes, I don't think Riverbed will implement it this year.
I *think* they want to focus on HTTP/2. But that's a guess.

Daniel Veditz

Apr 11, 2014, 2:54:11 AM
to Julien Vehent, Patrick McManus, byron jones, oz...@mozilla.com, mozilla.dev.planning group, David Keeler, Jake Maul, Doug Turner, mc...@mozilla.com, Camilo Viecco, Mike Hommey, Richard Barnes, Sid Stamm, sar...@mozilla.com
On 4/10/2014 8:31 PM, Julien Vehent wrote:
> Brian Smith had a plan for deploying OCSP Stapling in Firefox. Do you
> know what happened to that? Can you share a timeline?

AFAIK we've supported stapling for a while. Firefox 25, it looks like
(surprising - I thought it was earlier):
https://bugzilla.mozilla.org/show_bug.cgi?id=700698

There are still some revocation improvements in the works but stapling
shouldn't be a worry.

-Dan Veditz

Julien Vehent

Apr 11, 2014, 5:40:07 AM
to Daniel Veditz, byron jones, oz...@mozilla.com, mozilla.dev.planning group, David Keeler, Jake Maul, Doug Turner, mc...@mozilla.com, Camilo Viecco, Patrick McManus, Mike Hommey, Richard Barnes, Sid Stamm, sar...@mozilla.com
Right, I should have been clearer. The proposed change was to make
OCSP Stapling a requirement to the green bar of EV certificates, in
order to drive adoption, and disable classic OCSP entirely since it's
not really used. On the infra side, we discussed supporting that change
by having OCSP Stapling enabled on as many Mozilla properties as
possible when the change goes live.

- Julien

Henri Sivonen

Apr 11, 2014, 8:28:32 AM
to dev. planning
> On 04/10/2014 11:26 AM, Patrick McManus wrote:
>> The discussion was whether or not all Mozilla hosted web content
>> should be available over https by default. I am going to argue that
>> some version of "HTTPS everywhere is best[*]" should be Mozilla's
>> default position for both clients and servers. It flows from this
>> that mozilla hosted resources should be made available over https
>> unless they have a some particular compelling reason not to be. I'm
>> hoping we can get consensus on the principle of https hosting by
>> default.

I support this. I even think "by default" is putting it too mildly. I
think the only thing port 80 should do is to redirect to https and the
https side should use HSTS with includeSubDomains. (It follows that if
we want to host non-https test cases, we should probably have a
different domain for hosting those.)

On Thu, Apr 10, 2014 at 7:58 PM, Zack Weinberg <za...@panix.com> wrote:
> There may need to be a small handful of exceptions - the only thing
> that comes to mind is that the path from 'mozilla.org' to initial
> Firefox download *might* need to be accessible over plain HTTP for the
> sake of people stuck without any usable HTTPS client.

In what circumstance would that be the case, realistically?

As long as we support XP, we should make sure that the download path
works in IE on XP. For IE8 on XP, it means making sure to enable TLS
1.0 and TLS_RSA_WITH_3DES_EDE_CBC_SHA on the download path. For IE6 on
XP, it means also enabling SSL3. If the https download path has those
enabled, I don't see a reason to have a non-https download path from
www.mozilla.org.

(Yes, I'm aware that instead of TLS_RSA_WITH_3DES_EDE_CBC_SHA one
might enable TLS_RSA_WITH_RC4_128_SHA to avoid 3DES being used
as a DoS vector, but enabling RC4 at all these days seems wrong.)
--
Henri Sivonen
hsiv...@hsivonen.fi
https://hsivonen.fi/

Zack Weinberg

Apr 11, 2014, 9:29:31 AM
On 2014-04-11 8:28 AM, Henri Sivonen wrote:
> On Thu, Apr 10, 2014 at 7:58 PM, Zack Weinberg <za...@panix.com> wrote:
>> There may need to be a small handful of exceptions - the only thing
>> that comes to mind is that the path from 'mozilla.org' to initial
>> Firefox download *might* need to be accessible over plain HTTP for the
>> sake of people stuck without any usable HTTPS client.
>
> In what circumstance would that be the case, realistically?

I didn't have one in mind, which is why I said "might". As far as I
know all of the still-supported platforms ship with at least one browser
that can do vaguely modern HTTPS, but there might be a situation I don't
know about.

Right now, if I type "www.mozilla.org" into OSX 10.6 Safari (which is
the oldest thing I have convenient), I get an unencrypted page, but the
download-Firefox link on that page points to an HTTPS URL:

https://download.mozilla.org/?product=firefox-28.0-SSL&os=osx&lang=en-US

... and presumably that's working fine, so that's at least evidence in
favor of "we could go ahead and mandate HTTPS." Yay.

zw

Patrick McManus

Apr 11, 2014, 10:59:50 AM
to Julien Vehent, byron jones, oz...@mozilla.com, mozilla.dev.planning group, Daniel Veditz, Jake Maul, Doug Turner, mc...@mozilla.com, Mike Hommey, Richard Barnes, Sid Stamm, sar...@mozilla.com
On Thu, Apr 10, 2014 at 11:31 PM, Julien Vehent <jve...@mozilla.com> wrote:

> Google gets fast ECDHE performance using OpenSSL and optimized NIST
> curves [1]. Our SSL termination, Riverbed Stingray (ZLB), doesn't yet
> support ECDHE. But we have an ongoing relationship with the dev team at
> Riverbed to get it deployed later this year.
that's awesome to hear.



> [1] http://vincent.bernat.im/en/blog/2011-ssl-perfect-forward-secrecy.html
>
> > pro tips for servers:
> > * embrace OCSP stapling. This saves a whole separate http transaction
> > during the first handshake to a domain, generally at least 200ms. Often a
> > lot more. A quick look shows it enabled on mozorg.cdn.mozilla.net but
> > not yet on mozilla.org
>
> Brian Smith had a plan for deploying OCSP Stapling in Firefox. Do you
> know what happened to that? Can you share a timeline?
>


[the thread later goes on to clarify that Firefox supports stapling right
now.]

the question might best be directed to {sid, brian, david keeler} to give
an update on the plan for ending out-of-band revocation checking. But from
a server perspective nothing will be gained by waiting - the performance
carrot is in place right now, so I wouldn't wait to synchronize.


>
> On the infra side, we have a version of ZLB that supports OCSP Stapling
> that is deployed in Mozilla Labs. This version is not yet deployed on
> our main production, but it's in the pipe.


splendid!

> The folks at Riverbed have expressed interest in working with us to test
> and deploy HTTP/2 asap. I'm happy to introduce you to them if you'd like.
yes please - cc :hurley too. our latest -draft support is available in
nightly behind an about:config pref, but I'm more than happy to do interop
tests.

Martin Thomson

Apr 11, 2014, 11:57:31 AM
to Zack Weinberg, dev-pl...@lists.mozilla.org
On 2014-04-11, at 06:29, Zack Weinberg <za...@panix.com> wrote:

> evidence in favor of "we could go ahead and mandate HTTPS." Yay.

What other evidence do people consider necessary? Because implementing HSTS, hard redirects and all, would be ideal.

Daniel Veditz

Apr 11, 2014, 1:55:32 PM
to Julien Vehent, byron jones, oz...@mozilla.com, mozilla.dev.planning group, David Keeler, Jake Maul, Doug Turner, mc...@mozilla.com, Camilo Viecco, Patrick McManus, Mike Hommey, Richard Barnes, Sid Stamm, sar...@mozilla.com
Why would client plans delay a server roll out? Obviously we'll want our
servers upgraded "no later than" the Firefox changes, but I see no
benefit to waiting just because Firefox isn't ready. Wouldn't it be
easier to upgrade servers as you can get to it rather than have to "big
bang" them all around a particular client release?

-Dan

Julien Vehent

Apr 11, 2014, 5:00:14 PM
to Patrick McManus, Daniel Veditz, byron jones, oz...@mozilla.com, jstev...@mozilla.com, mozilla.dev.planning group, David Keeler, Jake Maul, Doug Turner, mc...@mozilla.com, Camilo Viecco, Mike Hommey, Richard Barnes, Sid Stamm, sar...@mozilla.com
On Fri 11.Apr'14 at 10:55:32 -0700, Daniel Veditz wrote:
> Why would client plans delay a server roll out?

On Fri 11.Apr'14 at 10:59:50 -0400, Patrick McManus wrote:
> from a server perspective nothing will be gained by waiting - the
> performance carrot is in place right now so I wouldn't wait to
> synchronize.

A bit more context & historical information may answer some of your
questions.

OCSP Stapling has been on our radar since last summer (bug 896078).
Early on, we involved Riverbed in the discussion, and came up with a
roadmap. They delivered support in version 9.5 of Riverbed Stingray (the
official product name for ZLB), released 3 months ago. We did some
initial testing in Labs, and shared results with them. They came back to
us 3 weeks ago with a new release that improves OCSP response checking.
Now we need to iterate on testing, and schedule deployment in
production.

That is to say: we have been busy. OCSP Stapling is part of a larger
ongoing plan to improve SSL/TLS support on the server side [1], across
several hundred services and many different infrastructures. As you
can imagine, the load balancers at the head of our two major
data centers are highly critical pieces of equipment, and upgrading them
requires preparation, testing, QA, downtime for release, and so on. The
risk of impacting the uptime of these services is a strong factor in any
deployment decision. This is the reason behind the conservative, but
steady, progress.

When I mentioned synchronizing OCSP Stapling changes, what I really
meant is creating momentum to focus more people on it, and accelerating
adoption. Riverbed appreciates being part of our plans for Firefox. They
have been hard at work on their SSL stack to support our effort and
better serve their other customers. On the infrastructure side, we need
roadmap visibility as well, so we can prioritize appropriately (there
are many other projects in the queue), and have OCSP Stapling enabled
when Firefox makes it a requirement.

- Julien

[1] https://blog.mozilla.org/security/2013/11/12/navigating-tls/

Stefan Arentz

Apr 11, 2014, 9:46:39 PM
to Doug Turner, gl...@mozilla.com, oz...@mozilla.com, mozilla.dev.planning group, Julien Vehent, Daniel Veditz, Jake Maul, mc...@mozilla.com, Patrick McManus, yboily, Richard Barnes, Sid Stamm

On Apr 10, 2014, at 11:54 AM, Doug Turner <do...@mozilla.com> wrote:

> There are other techs like CPS that I want to also cheerlead for. We need to be leading here. I encourage you, when rolling out a site, make it a requirement. As for MoCo stuff, if you get pushback, please reach out to myself or to gal.

Or maybe better, reach out to the Cloud Services team, into which Yvan’s team, Web App (& Services) Security, was merged. We officially deal with web app security on a day-to-day basis and are always happy to explain, assist, review, encourage :-)

So I think SSL-MOST-OF-THE-THINGS is a noble goal and we should absolutely go for it.

But, I think the bigger story has to start sooner and broader, because as Doug says, there are a whole bunch of other technologies, and coding practices, that we need to embrace in a bigger way than we currently do.

Couple of thoughts on that:

* Related to SSL and STS, I think one of the things we are getting wrong now is having development and staging servers that run with a different set of parameters than the production servers. Often we think: oh, those headers and SSL, that will be IT/Ops' responsibility. Nope. That needs to change. Those headers and proper testing with SSL need to be part of development, staging, and day-to-day work.

* Adding headers now to existing sites is great, but security cannot be an afterthought. For example, you can’t properly add CSP at the end of the development cycle. You will get burned if you don’t think about CSP *before* you write your first line of code or pick your JS frameworks. I think we need more education for teams and individual developers.

* I think we need to move to a situation where we actually *require* new sites and services to conform to a specific set of security and deployment rules. And if you can’t justify why you don’t meet those security expectations, your site or app or API cannot go live. I don’t think we have been strict enough in the past.

I’m not joking about the last part. We really need to be better.

Needless to say, the AppSec team will also have to do some heavy thinking about improving this situation.

S.
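[Stefan's point that CSP has to be designed alongside the code can be made concrete with a small sketch. The helper and the directive values are hypothetical examples, not a recommended policy.]

```python
# Hypothetical helper for assembling a Content-Security-Policy header
# value from a mapping of directives to source lists.

def build_csp(directives):
    """Serialize {directive: [sources]} into a CSP header value."""
    return "; ".join(
        "%s %s" % (name, " ".join(sources))
        for name, sources in directives.items()
    )

policy = build_csp({
    "default-src": ["'self'"],
    # Entries like this are why CSP can't be bolted on at the end: every
    # inline script or third-party CDN adopted during development has to
    # be reflected here, or the site breaks when the policy goes live.
    "script-src": ["'self'", "https://cdn.example.org"],
})
# policy == "default-src 'self'; script-src 'self' https://cdn.example.org"
```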

Yvan Boily

Apr 14, 2014, 5:40:12 PM
to Stefan Arentz, gl...@mozilla.com, oz...@mozilla.com, mozilla.dev.planning group, Julien Vehent, Daniel Veditz, Jake Maul, Doug Turner, mc...@mozilla.com, Patrick McManus, Richard Barnes, Sid Stamm
Hi All,

Sorry for the lag time on this. I have been a proponent of my team taking a stronger engineering focus for quite some time (see our security automation focus and other activities over the last several years), and while we have a broad scope of responsibilities to address, I would very much like to support the roll-out of not just HSTS but also CSP - initially on our high-value properties, but ultimately on all properties.

At the web security meeting tomorrow morning (the Cloud Services security team still works on web security for everyone!), we will identify a person to drive a strategy for both STS and CSP, and then pull together the people who have commented on this thread to make sure we get the right people involved in pushing both efforts forward. Rather than saying "include me", hold off until tomorrow, and we will post some additional info about getting engaged with these efforts.

Cheers,
Yvan