
Indicators for high-security features


Richard Barnes

Sep 17, 2014, 11:20:41 AM
to mozilla-dev-s...@lists.mozilla.org, Anne van Kesteren
Hey all,

Anne suggested an idea to me that I thought would be interesting for this group. Consider this email a rough sketch of an idea, not any sort of plan.

There are a bunch of security features right now that I think we all agree improve security over and above just using HTTPS:
-- HTTP Strict Transport Security
-- HTTP Public Key Pinning
-- TLS 1.2+
-- Certificate Transparency
-- Use of ciphersuites with forward secrecy
-- No mixed content
-- Content Security Policy (?)
-- Sub-resource integrity (?)
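
To make the list concrete, several of these can be turned on with a handful
of server config lines. A rough sketch in nginx terms (illustrative names,
paths, and values only, not a vetted recommendation):

    server {
        listen 443 ssl;
        server_name example.com;                      # placeholder hostname
        ssl_certificate     /etc/nginx/ssl/cert.pem;  # placeholder paths
        ssl_certificate_key /etc/nginx/ssl/cert.key;

        # TLS 1.2 only, with forward-secret AEAD ciphersuites
        ssl_protocols TLSv1.2;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;
        ssl_prefer_server_ciphers on;

        # HTTP Strict Transport Security (one year)
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
    }

(Key pinning, CT, CSP, and sub-resource integrity obviously need more than a
config snippet.)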

It would be good if we could create incentives for sites to turn on these features. EFF has already seen some sites trying to turn things green on their "Encrypt the Web Report" [1]. Should we consider creating a suite of features that comprise a "high-security" web site, and create some UI to express that to the user?

We could invent new UI for this (e.g., a green lock icon), or we could overlay these requirements on the EV criteria. Chrome already does this to some extent, downgrading the EV indicator to DV if the site attempts to POST to an "http://" URI or (soon) if the site doesn't do CT.

What would people think about creating a list of security features that must be enabled in order to get special UI (EV or otherwise)? We would obviously want to coordinate this with the other browser vendors, and to some degree with site operators (though the whole point here is to lean on them to do better!)

Thoughts? Suggestions?

Thanks,
--Richard

[1] https://www.eff.org/encrypt-the-web-report

s...@gmx.ch

Sep 17, 2014, 3:41:13 PM
to dev-secur...@lists.mozilla.org
Hi

I would support your idea, but it's quite hard to implement. Even if a
server uses TLS 1.2 and HSTS, you still don't know whether the connection
is really secure.
But it would be easier if Firefox showed more details about the
protocol, ciphers, etc.

Jeremy.Rowley

Sep 18, 2014, 10:43:00 AM
to dev-secur...@lists.mozilla.org
Hi Richard,

I like the concept of promoting better security practices through an
indicator or other means. I especially like the idea of a coordinated
effort among multiple members of the security community to ensure
everyone is informed, working towards a common goal, and promoting the
idea. However, I dislike mixing the indicator with EV.
EV means something already - that the identity of the subject was
verified in accordance with the EV Guidelines. Tacking on other
requirements confuses this meaning just as we are seeing an increase in
recognition and understanding. Using the EV indicator will cause the
public to assume there is something wrong with their digital
certificate, attributing the error to the CA instead of the server
operator.

I've expressed similar concerns over deploying CT first as an EV
requirement. Fortunately, CT is an area where CAs can readily assume
responsibility, ensuring that certificate users are compliant without
them necessarily understanding how they are compliant - something that
is vital for operations with small IT departments. Still, the potential
confusion between EV and CT (in addition to my support for CT as a
concept) makes me especially hopeful that CT will rapidly expand to
cover all certificates.

Instead of taking away the EV indicator, perhaps a super-indicator for
technical security? That way there is still an indicator for the
identity of the website operator which can be combined with an indicator
about the technical controls deployed by the server operator.

Jeremy

On 9/17/2014 9:20 AM, Richard Barnes wrote:
> Hey all,
>
> Anne suggested an idea to me that I thought would be interesting for this group. Consider this email a rough sketch of an idea, not any sort of plan.
>
> There are a bunch of security features right now that I think we all agree improve security over and above just using HTTPS:
> -- HTTP Strict Transport Security
> -- HTTP Public Key Pinning
> -- TLS 1.2+
> -- Certificate Transparency
> -- Use of ciphersuites with forward secrecy
> -- No mixed content
> -- Content Security Policy (?)
> -- Sub-resource integrity (?)
>
> It would be good if we could create incentives for sites to turn on these features. EFF has already seen some sites trying to turn things green on their "Encrypt the Web Report" [1]. Should we consider creating a suite of features that comprise a "high-security" web site, and create some UI to express that to the user?
>
> We could invent new UI for this (e.g., a green lock icon), or we could overlay these requirements on the EV criteria. Chrome already does this to some extent, downgrading the EV indicator to DV if the site attempts to POST to an "http://" URI or (soon) if the site doesn't do CT.
>
> What would people think about creating a list of security features that must be enabled in order to get special UI (EV or otherwise)? We would obviously want to coordinate this with the other browser vendors, and to some degree with site operators (though the whole point here is to lean on them to do better!)
>
> Thoughts? Suggestions?
>
> Thanks,
> --Richard
>
> [1] https://www.eff.org/encrypt-the-web-report

Patrick McManus

Sep 18, 2014, 2:04:16 PM
to Richard Barnes, Anne van Kesteren, mozilla-dev-s...@lists.mozilla.org
On Wed, Sep 17, 2014 at 11:20 AM, Richard Barnes <rba...@mozilla.com>
wrote:

> Anne suggested an idea to me that I thought would be interesting for this
> group. Consider this email a rough sketch of an idea, not any sort of plan.


Broadly speaking, I really favor this kind of thing.

I would caution a bit about lumping in the transport bits (TLS versions,
forward secrecy, etc.) that don't have some kind of pinning opt-in. A host
might use N servers across a mesh of different CDN providers - each
provisioned with the same cert and key, but using different ciphersuites.
If we awarded a security badge from an interaction with one node and took
it away when you were subsequently load balanced to another, that sends an
implicit signal of distrust that we wouldn't be sending for a site where
the badge never appeared at all.

Some kind of transport-feature-pinning mechanism would solve it, or perhaps
even a "pin to >= h2" feature, which carries a lot of the best practices you
want as guarantees, might be sufficient.

Just thinking out loud.

Chris Palmer

Sep 18, 2014, 2:23:48 PM
to Patrick McManus, Anne van Kesteren, mozilla-dev-s...@lists.mozilla.org, Richard Barnes
Please keep in mind that the origin is the security boundary on the
web, and is defined as being (scheme, host, port).

Assuming we don't expand the definition of the origin, unless we
implement mixed-everything blocking — mixed EV & non-EV, mixed TLS 1.2
& 1.1, mixed AES-128 & AES-256, mixed pinned keys & non-pinned, et c.
— then I don't think we should make any increased promise to the user.
After all, the promise wouldn't be true.

Let's keep our eye on the ball *currently in play*: Getting all
origins up to the minimum standard of nominally-secure transport. Once
we achieve that, then we can consider splitting finer hairs.

The hair I'd much rather split, by the way, is making each
cryptographic identity a separate origin. Ponder for a moment how
enjoyably impossible that will be...

dia...@gmail.com

Sep 18, 2014, 8:15:13 PM
to mozilla-dev-s...@lists.mozilla.org
Instead of trying to pile on more clutter to the lock/warning/globe states, how about letting the user determine the threshold of those states?

The default would be what they are now, but perhaps in about:config you could set the lock state to require perfect forward secrecy, otherwise drop to a warning state.

Daniel

On Thursday, September 18, 2014 11:23:48 AM UTC-7, Chris Palmer wrote:
> Please keep in mind that the origin is the security boundary on the
> web, and is defined as being (scheme, host, port).
>
> Assuming we don't expand the definition of the origin, unless we
> implement mixed-everything blocking -- mixed EV & non-EV, mixed TLS 1.2
> & 1.1, mixed AES-128 & AES-256, mixed pinned keys & non-pinned, et c.
> -- then I don't think we should make any increased promise to the user.

Matt Palmer

Sep 18, 2014, 8:29:34 PM
to dev-secur...@lists.mozilla.org
On Thu, Sep 18, 2014 at 05:15:13PM -0700, dia...@gmail.com wrote:
> Instead of trying to pile on more clutter to the lock/warning/globe
> states, how about letting the user determine the threshold of those
> states?
>
> The default would be what they are now, but perhaps in about:config you
> could set the lock state to require perfect forward secrecy, otherwise
> drop to a warning state.

I like this idea, when combined with a gradual ratcheting up of the default
value over time (or a gradual reduction in "security value" for various
things). It lets power users who really want to be sure they're
"extra-safe" (FSVO) get an indication of same (on the assumption that,
being power users, they know what all the blinkenlights *mean*), without
overloading *everyone* with information they don't understand.

- Matt

Chris Palmer

Sep 18, 2014, 8:30:51 PM
to Daniel Roesler, mozilla-dev-s...@lists.mozilla.org
On Thu, Sep 18, 2014 at 5:15 PM, <dia...@gmail.com> wrote:

> Instead of trying to pile on more clutter to the lock/warning/globe states, how about letting the user determine the threshold of those states?
>
> The default would be what they are now, but perhaps in about:config you could set the lock state to require perfect forward secrecy, otherwise drop to a warning state.

In Chrome, we are (very) gradually ratcheting up the cipher
suite/other crypto parameter requirements. It has proven quite
fruitful. I can imagine a future in which non-PFS gets treated as
non-secure. But not just yet.

Even experts, in my experience, get hung up on the complexity of about:flags.

Anne van Kesteren

Sep 19, 2014, 7:52:18 AM
to Chris Palmer, Patrick McManus, mozilla-dev-s...@lists.mozilla.org, Richard Barnes
On Thu, Sep 18, 2014 at 8:23 PM, Chris Palmer <pal...@google.com> wrote:
> Please keep in mind that the origin is the security boundary on the
> web, and is defined as being (scheme, host, port).

And optional additional data:
https://html.spec.whatwg.org/multipage/browsers.html#origin


> Assuming we don't expand the definition of the origin, unless we
> implement mixed-everything blocking — mixed EV & non-EV, mixed TLS 1.2
> & 1.1, mixed AES-128 & AES-256, mixed pinned keys & non-pinned, et c.
> — then I don't think we should make any increased promise to the user.
> After all, the promise wouldn't be true.

I'm not sure I follow. If there's mixed content you no longer get a
lock at all in Firefox. Obviously we should not revert that.

There are a few options here:

* We impose additional requirements on EV UI. E.g. if you have an EV
certificate but are not deploying HSTS, we might not deem that good
enough and therefore give you the same UI as TLS domains without EV.

* In Firefox we could offer a green lock of sorts for sites deploying
TLS and doing so in a good way, e.g. with TLS 1.2, good ciphers, etc.


> Let's keep our eye on the ball *currently in play*: Getting all
> origins up to the minimum standard of nominally-secure transport. Once
> we achieve that, then we can consider splitting finer hairs.

Given the time it takes to get changes through, it seems fine to start
discussing them.


> The hair I'd much rather split, by the way, is making each
> cryptographic identity a separate origin. Ponder for a moment how
> enjoyably impossible that will be...

What are the issues?


(There's also an idea floating around about checking certificates
first when doing a same-origin check, potentially allowing distinct
origins that share a certificate through alternate names, to be
same-origin. However, with CORS it might not really be needed
anymore.)


--
https://annevankesteren.nl/

Hubert Kario

Sep 19, 2014, 8:04:00 AM
to Anne van Kesteren, Patrick McManus, mozilla-dev-s...@lists.mozilla.org, Chris Palmer, Richard Barnes
----- Original Message -----
> From: "Anne van Kesteren" <ann...@annevk.nl>
> To: "Chris Palmer" <pal...@google.com>
> Cc: "Patrick McManus" <mcm...@ducksong.com>, mozilla-dev-s...@lists.mozilla.org, "Richard Barnes"
> <rba...@mozilla.com>
> Sent: Friday, 19 September, 2014 1:52:18 PM
> Subject: Re: Indicators for high-security features
>
> On Thu, Sep 18, 2014 at 8:23 PM, Chris Palmer <pal...@google.com> wrote:
> > Assuming we don't expand the definition of the origin, unless we
> > implement mixed-everything blocking — mixed EV & non-EV, mixed TLS 1.2
> > & 1.1, mixed AES-128 & AES-256, mixed pinned keys & non-pinned, et c.
> > — then I don't think we should make any increased promise to the user.
> > After all, the promise wouldn't be true.
>
> I'm not sure I follow. If there's mixed content you no longer get a
> lock at all in Firefox. Obviously we should not revert that.

AFAIK, images do not trigger "mixed content"

> > The hair I'd much rather split, by the way, is making each
> > cryptographic identity a separate origin. Ponder for a moment how
> > enjoyably impossible that will be...
>
> What are the issues?

the vast majority of sites use external resources, CDNs, external APIs,
google script hosting for popular libraries, etc.


--
Regards,
Hubert Kario

Chris Palmer

Sep 19, 2014, 1:54:40 PM
to Anne van Kesteren, Patrick McManus, mozilla-dev-s...@lists.mozilla.org, Richard Barnes
On Fri, Sep 19, 2014 at 4:52 AM, Anne van Kesteren <ann...@annevk.nl> wrote:

>> Please keep in mind that the origin is the security boundary on the
>> web, and is defined as being (scheme, host, port).
>
> And optional additional data:
> https://html.spec.whatwg.org/multipage/browsers.html#origin

I haven't seen any origin checks lately that use any optional additional data.

>> Assuming we don't expand the definition of the origin, unless we
>> implement mixed-everything blocking — mixed EV & non-EV, mixed TLS 1.2
>> & 1.1, mixed AES-128 & AES-256, mixed pinned keys & non-pinned, et c.
>> — then I don't think we should make any increased promise to the user.
>> After all, the promise wouldn't be true.
>
> I'm not sure I follow. If there's mixed content you no longer get a
> lock at all in Firefox. Obviously we should not revert that.

My point is that UI indicators should reflect the reality of actual
technical security boundaries. Unless we actually create a boundary,
we shouldn't show that we have.

And yet, a hypothetical boundary between TLS 1.1 and TLS 1.2 would
almost certainly not fly, for compatibility reasons (as much as we all
might like to have such a boundary).

>> The hair I'd much rather split, by the way, is making each
>> cryptographic identity a separate origin. Ponder for a moment how
>> enjoyably impossible that will be...
>
> What are the issues?

* What's a stable cryptographic identity in the web PKI? Is it the
public key in the end-entity certificate, or the public key in any of
the issuing certificates?
* Or maybe the union of all keys?
* Or maybe the presence of any 1 key in the set?
* What about the sometimes weird and hard-to-predict certificate
path-building behavior across platforms?
* What about key rotation that happens legitimately?
* Do we convince CAs to issue name-constrained issuing certificates to
each site operator (with the constrained name being the origin's exact
hostname), that cert's key becomes the origin's key, and site
operators issue end entities from that?
** There'd still be a need to re-issue that key, from time to time.
* Do we use the web PKI to establish a distinct origin key?
** Could the TACK key be the origin key?

> (There's also an idea floating around about checking certificates
> first when doing a same-origin check, potentially allowing distinct
> origins that share a certificate through alternate names, to be
> same-origin. However, with CORS it might not really be needed
> anymore.)

That's terrifying. :)

Anne van Kesteren

Sep 20, 2014, 3:43:28 AM
to Hubert Kario, Patrick McManus, mozilla-dev-s...@lists.mozilla.org, Chris Palmer, Richard Barnes
On Fri, Sep 19, 2014 at 2:04 PM, Hubert Kario <hka...@redhat.com> wrote:
> AFAIK, images do not trigger "mixed content"

In Firefox Nightly they do at least.


>> What are the issues?
>
> the vast majority of sites use external resources, CDNs, external APIs,
> google script hosting for popular libraries, etc.

Those would be cross-origin either way, no?


--
https://annevankesteren.nl/

Anne van Kesteren

Sep 20, 2014, 4:10:10 AM
to Chris Palmer, Patrick McManus, mozilla-dev-s...@lists.mozilla.org, Richard Barnes
On Fri, Sep 19, 2014 at 7:54 PM, Chris Palmer <pal...@google.com> wrote:
> My point is that UI indicators should reflect the reality of actual
> technical security boundaries. Unless we actually create a boundary,
> we shouldn't show that we have.

So why do you show special UI for EV?


>>> The hair I'd much rather split, by the way, is making each
>>> cryptographic identity a separate origin. Ponder for a moment how
>>> enjoyably impossible that will be...
>>
>> What are the issues?
>
> * What's a stable cryptographic identity in the web PKI? Is it the
> public key in the end-entity certificate, or the public key in any of
> the issuing certificates?
> * Or maybe the union of all keys?
> * Or maybe the presence of any 1 key in the set?
> * What about the sometimes weird and hard-to-predict certificate
> path-building behavior across platforms?
> * What about key rotation that happens legitimately?
> * Do we convince CAs to issue name-constrained issuing certificates to
> each site operator (with the constrained name being the origin's exact
> hostname), that cert's key becomes the origin's key, and site
> operators issue end entities from that?
> ** There'd still be a need to re-issue that key, from time to time.

It seems for same-origin checks where the origin is derived from a
resource and not a URL, we could in fact do one or more of those,
today. E.g. if https://example.com/ fetches https://example.org/image
we'd check if they're same-origin and if their certificate matches.
Now as connections grow more persistent this will likely be the case
anyway, no?


> ** Could the TACK key be the origin key?

Is TACK still going anywhere? The mailing list suggests it's dead.


--
https://annevankesteren.nl/

fhw...@gmail.com

Sep 22, 2014, 7:47:18 AM
to Anne van Kesteren, mozilla-dev-s...@lists.mozilla.org
Hi Anne,

Just to clarify, are you saying that, effective in FF release ??, a document obtained via https will allow only https for all subsequent retrievals, images and js, etc. alike?

To the larger discussion, I have 2 questions: 1) What is the specific message you'd like to convey to the user beyond what the simple lock icon provides? 2) What action do you intend the user to take based on seeing the new indicator?

Thanks.

  Original Message  
From: Anne van Kesteren
Sent: Saturday, September 20, 2014 2:43 AM
To: Hubert Kario
Cc: Patrick McManus; mozilla-dev-s...@lists.mozilla.org; Chris Palmer; Richard Barnes
Subject: Re: Indicators for high-security features

Patrick McManus

Sep 22, 2014, 8:29:15 AM
to fhw...@gmail.com, Anne van Kesteren, mozilla-dev-s...@lists.mozilla.org
On Mon, Sep 22, 2014 at 7:47 AM, <fhw...@gmail.com> wrote:

> ‎Hi Anne,
>
> Just to clarify, are you saying that effective in FF release ?? that ‎a
> document obtained via https will allow only https for all subsequent
> retrievals, images and js, etc. alike?
>
>
wrt http:// images from a https:// origin - the images do load but you get
the !-in-a-triangle mixed content icon instead of a lock.

Henri Sivonen

Sep 22, 2014, 8:56:43 AM
to Richard Barnes, Anne van Kesteren, mozilla-dev-s...@lists.mozilla.org
On Wed, Sep 17, 2014 at 6:20 PM, Richard Barnes <rba...@mozilla.com> wrote:
> There are a bunch of security features right now that I think we all agree improve security over and above just using HTTPS:
> -- HTTP Strict Transport Security

Yes, but I think this requirement shouldn't apply to subresources for
the page to qualify, since top-level HSTS together with the "No mixed
content" requirement mean that there's no sslstrip risk for embedded
resources even if they are served from a non-HSTS CDN.

> -- HTTP Public Key Pinning

I'm a bit worried about this one. I'd like the bar for this indicator
to be such that it can motivate anyone with nginx to configure it
right. This way, the new indicator could have a broad impact beyond
just the largest sites. It's not clear to me if HPKP is practical for
sites without Google/Twitter-level ops teams.

It seems to me that it's at least currently impractical for small
sites to get CAs to commit to issue future certs from a particular
root or intermediate, so it seems to me that especially pinning an
intermediate is hazardous unless you are big enough a customer of a CA
to get commitments regarding future issuance practices.

It's unclear to me if HPKP makes it safe and practical to use without
actually forming a business relationship with two CAs in advance
(which would be impractical for many small sites). It seems to me that
HPKP makes it possible to generate a backup end-entity key pair in
advance of having it certified. However, the spec itself discourages
end-entity pinning altogether and it's pretty scary to pin a key
before you know for sure you can get it certified by a CA later.

Unless HPKP is practical for pretty much any https site to deploy, I
worry that making it part of the criteria will stop sites from even
trying to meet the other criteria that they *could* meet by just
having a reasonable nginx config.

(In some way, it seems like HPKP is the simplest thing that makes
sense for Google, which has its own intermediate, but for the rest of
us, being able to maintain a TACK-style signing key *to the side of*
the CA-rooted chain would be better. What's the outlook of us
supporting TACK or some other mechanism that allows pinning a
site-specific signing key that's not part of the CA-rooted chain?)

> -- TLS 1.2+

Yes.

> -- Certificate Transparency

Are we planning to support CT now? (I'm not stating an opinion for or
against. I'm merely surprised to see CT mentioned as if it was
something we'd support, since I don't recall seeing previous
indications that we'd support it.)

> -- Use of ciphersuites with forward secrecy

Yes, but I think it makes sense to go further with ciphersuites. At
minimum, RC4 should not qualify, but given how easy it is to enable
AES-GCM if you can enable TLS 1.2 per the earlier point, why not
require an AEAD suite (i.e. AES-GCM or an upcoming ChaCha20 suite) and
set aside all perceived or actual CBC problems while at it?

Also, to qualify, RSA key length and DHE group size should probably be
at least 2048 bits.
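
(Concretely, something along these lines in an nginx config would meet that
stricter bar - a sketch only, and the exact suite names depend on the
OpenSSL version:

    ssl_protocols TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/ssl/dh-2048.pem;   # 2048-bit DHE group

plus an RSA key of at least 2048 bits in the certificate itself.)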

> -- No mixed content

Yes.

> -- Content Security Policy (?)

This is a bit problematic, because CSP can be configured in so many
ways resulting in different levels of meaningful security. Do you mean
we should just reward any effort to use CSP, or that we should
require specific CSP features to be in use?

> -- Sub-resource integrity (?)

What does this mean in practice? Again, I'm worried about throwing so
many criteria into the mix that meeting the bar becomes hard enough
that we miss the opportunity to get people who could, in large
numbers, meet a lower bar by just fixing their server config to
actually go fix the server config.

> It would be good if we could create incentives for sites to turn on these features. EFF has already seen some sites trying to turn things green on their "Encrypt the Web Report" [1]. Should we consider creating a suite of features that comprise a "high-security" web site, and create some UI to express that to the user?

Considering that I filed
https://bugzilla.mozilla.org/show_bug.cgi?id=942136 , I think yes.

> We could invent new UI for this (e.g., a green lock icon), or we could overlay these requirements on the EV criteria.

I strongly think this should be orthogonal to EV. For reasons given in
https://bugzilla.mozilla.org/show_bug.cgi?id=942136#c2 , I view EV as
primarily a price discrimination mechanism. Specifically, I think EV
doesn't have enough security value for the user that it would make
sense to treat EV as a precondition of encouraging the good stuff you
mention. We should take the opportunity to encourage DV sites to use
HSTS, the better ciphersuites, etc., too.

--
Henri Sivonen
hsiv...@hsivonen.fi
https://hsivonen.fi/

Anne van Kesteren

Sep 22, 2014, 9:56:31 AM
to fhw...@gmail.com, mozilla-dev-s...@lists.mozilla.org
On Mon, Sep 22, 2014 at 1:47 PM, <fhw...@gmail.com> wrote:
> To the larger discussion, I have 2 questions: 1) what is the specific message you'd like to convey to the user ‎beyond what the simple lock icon provides. 2) What action do you intend the user to take based on seeing the new indicator?

For downgrading EV UI to a lock when not enabling certain other
features I would expect the effect to be that developers upgrade their
security features.

For showing a green lock instead of a normal lock I would expect
developers to do likewise.

The idea would be to make the address bar UI more attractive for those
domains that put effort into getting security right. (There are some
problems here at the moment; currently the address bar is arguably
less cluttered when not deploying TLS. There are a couple of bugs filed
on making them more equal.)


--
https://annevankesteren.nl/

Chris Palmer

Sep 22, 2014, 2:23:48 PM
to Anne van Kesteren, Patrick McManus, mozilla-dev-s...@lists.mozilla.org, Richard Barnes
On Sat, Sep 20, 2014 at 1:10 AM, Anne van Kesteren <ann...@annevk.nl> wrote:

>> My point is that UI indicators should reflect the reality of actual
>> technical security boundaries. Unless we actually create a boundary,
>> we shouldn't show that we have.
>
> So why do you show special UI for EV?

For historical reasons, i.e. It Was Like That When I Got Here.
(Similar to how getUserMedia does not (yet) require secure origins.)

>> * What's a stable cryptographic identity in the web PKI? Is it the
>> public key in the end-entity certificate, or the public key in any of
>> the issuing certificates?
>> * Or maybe the union of all keys?
>> * Or maybe the presence of any 1 key in the set?
>> * What about the sometimes weird and hard-to-predict certificate
>> path-building behavior across platforms?
>> * What about key rotation that happens legitimately?
>> * Do we convince CAs to issue name-constrained issuing certificates to
>> each site operator (with the constrained name being the origin's exact
>> hostname), that cert's key becomes the origin's key, and site
>> operators issue end entities from that?
>> ** There'd still be a need to re-issue that key, from time to time.
>
> It seems for same-origin checks where the origin is derived from a
> resource and not a URL, we could in fact do one or more of those,
> today. E.g. if https://example.com/ fetches https://example.org/image
> we'd check if they're same-origin and if their certificate matches.
> Now as connections grow more persistent this will likely be the case
> anyway, no?

Perhaps an origin's cryptographic identity might be stable over the
course of a page-load, or even over the course of a browsing session.
But we'd need a stronger guarantee of lifetime than that.

>> ** Could the TACK key be the origin key?
>
> Is TACK still going anywhere? The mailing list suggests it's dead.

But one could imagine it being resuscitated, if it were a way to get a
long-lived cryptographic identity for an origin.

Chris Palmer

Sep 22, 2014, 2:36:27 PM
to Henri Sivonen, Anne van Kesteren, mozilla-dev-s...@lists.mozilla.org, Richard Barnes
On Mon, Sep 22, 2014 at 5:56 AM, Henri Sivonen <hsiv...@hsivonen.fi> wrote:

>> -- HTTP Strict Transport Security
>
> Yes, but I think this requirement shouldn't apply to subresources for
> the page to qualify, since top-level HSTS together with the "No mixed
> content" requirement mean that there's no sslstrip risk for embedded
> resources even if they are served from a non-HSTS CDN.

These days we're blocking loads of active mixed content, but passive
mixed content is still a concern to me. E.g. an attacker can mangle a
web app's UI pretty badly, including to perform attacks, if the app
gets its icons and buttons via SSLstrip-able sources.

>> -- HTTP Public Key Pinning
>
> I'm a bit worried about this one. I'd like the bar for this indicator
> to be such that it can motivate anyone with nginx to configure it
> right. This way, the new indicator could have a broad impact beyond
> just the largest sites. It's not clear to me if HPKP is practical for
> sites without Google/Twitter-level ops teams.

HPKP is indeed dangerous.

I don't anticipate any additional UI for it, let alone additional UI
that would motivate a not-ready-yet ops team to turn it on.

> It seems to me that it's at least currently impractical for small
> sites to get CAs to commit to issue future certs from a particular
> root or intermediate, so it seems to me that especially pinning an
> intermediate is hazardous unless you are big enough a customer of a CA
> to get commitments regarding future issuance practices.

Intermediates move slowly, and roots even more slowly. It's fairly
safe to assume that, for the lifetime of your end-entity cert, the CA
will still be operating, and that they can and will cross-sign in
cases where they re-key heavily-used issuing certs.

But, yeah, have a backup pin, and pin at various places in the
certificate chain. I'd advise people to look at
net/http/transport_security_state_static.json and consider what
Dropbox, Google, Twitter, and Tor have done, and why.
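
(For reference, a pinning header along the lines of the current HPKP draft
looks roughly like this in nginx terms - the hashes are placeholders, not
real pins:

    add_header Public-Key-Pins 'pin-sha256="PLACEHOLDER_PRIMARY_SPKI_HASH="; pin-sha256="PLACEHOLDER_BACKUP_SPKI_HASH="; max-age=5184000';

i.e. at least one pin that matches the chain you actually serve, plus at
least one backup pin for a key you control but haven't deployed yet.)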

> It's unclear to me if HPKP makes it safe and practical to use without
> actually forming a business relationship with two CAs in advance
> (which would be impractical for many small sites). It seems to me that
> HPKP makes it possible to generate a backup end-entity key pair in
> advance of having it certified. However, the spec itself discourages
> end-entity pinning altogether and it's pretty scary to pin a key
> before you know for sure you can get it certified by a CA later.

I wouldn't say we discourage EE pinning; but I would discourage
pinning EEs *exclusively*.

> (In some way, it seems like HPKP is the simplest thing that makes
> sense for Google, which has its own intermediate, but for the rest of
> us, being able to maintain a TACK-style signing key *to the side of*
> the CA-rooted chain would be better. What's the outlook of us
> supporting TACK or some other mechanism that allows pinning a
> site-specific signing key that's not part of the CA-rooted chain?)

I consider a backup pin to be enough like an "on the side" pin. You,
however, may not.

>> -- Certificate Transparency
>
> Are we planning to support CT now? (I'm not stating an opinion for or
> against. I'm merely surprised to see CT mentioned as if it was
> something we'd support, since I don't recall seeing previous
> indications that we'd support it.)

I devoutly hope Mozilla does support CT.

s...@gmx.ch

Sep 22, 2014, 3:28:39 PM
to dev-secur...@lists.mozilla.org

Am 22.09.2014 um 14:56 schrieb Henri Sivonen:
> On Wed, Sep 17, 2014 at 6:20 PM, Richard Barnes <rba...@mozilla.com> wrote:
>> -- Use of ciphersuites with forward secrecy
> Yes, but I think it makes sense to go further with ciphersuites. At
> minimum, RC4 should not qualify, but given how easy it is to enable
> AES-GCM if you can enable TLS 1.2 per the earlier point, why not
> require an AEAD suite (i.e. AES-GCM or an upcoming ChaCha20 suite) and
> set aside all perceived or actual CBC problems while at it?
>
I think 3DES should not qualify either. It's just the less bad
alternative to RC4 for supporting IE 8.

>> -- Content Security Policy (?)
> This is a bit problematic, because CSP can be configured in so many
> ways resulting in different levels of meaningful security. Do you mean
> we should require just reward any effort to use CSP or that we should
> require specific CSP features to be in use?
The best would be to prevent XSS by disallowing unsafe-inline. But
nearly every tool on the web uses inline scripts, so this would break
compatibility.
We could warn if an http: source is allowed in the CSP. Admins should
implement a server-side mixed content blocker using CSP if they need to
allow external resources (e.g. img-src https://*:443).
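
(A rough sketch of such a policy, e.g. as an nginx header - the values are
illustrative only:

    add_header Content-Security-Policy "default-src https:; img-src https://cdn.example.org";

With that, the browser itself refuses any http: subresource for the page,
regardless of what the markup says.)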

>
>> We could invent new UI for this (e.g., a green lock icon), or we could overlay these requirements on the EV criteria.
AFAIK a green lock is the only indicator for EV on Firefox OS / Android.
This could be confusing. Blue is not used, is it?
So there would be gray/black for normal, blue for high-security and
green for high-security + EV (what about low-security EV?).

Kind regards
Jonas


Ryan Sleevi

Sep 22, 2014, 4:41:53 PM
to Chris Palmer, Patrick McManus, Anne van Kesteren, mozilla-dev-s...@lists.mozilla.org, Richard Barnes
I have great respect for those responsible for TACK, and they have been
invaluable in discovering and discussing the limitations of HPKP.

However, as much of a potential foot-gun as HPKP is, TACK is exponentially
more so, in that (and yes, this is anecdata, but anecdata you can find backed
up at most organizations) key lifecycle management remains the single biggest
challenge for organizations.

TACK's design - especially with an offline key - is one that we know is
dangerous. The same environmental factors that make SHA-1 deprecation
hard (organizations wanting to have long-lived certs, then
forgetting the organizational knowledge necessary to
manage/rotate/re-issue those certs, as one example) contribute to TACK
being a great way to brick things.

That is, the TSK is almost invariably going to get lost, or someone will
forget the password, or the person who creates the TSK will forget to back
it up and format their machine, or any number of things we _routinely_ see
with SSL certs (and precisely why implicit pinning to EE certs is hard).
The only people who will be able to safely deploy TACK are a subset of
those who can safely deploy HPKP.

Chris Palmer

Sep 22, 2014, 4:52:21 PM
to ryan-mozde...@sleevi.com, Patrick McManus, Anne van Kesteren, mozilla-dev-s...@lists.mozilla.org, Richard Barnes
On Mon, Sep 22, 2014 at 1:41 PM, Ryan Sleevi
<ryan-mozde...@sleevi.com> wrote:

> The only people who will be able to safely deploy TACK are a subset of
> those who can safely deploy HPKP.

Quite so. My point in this thread was: If we are going to change the
definition of what an origin is, the most security-meaningful change
would be to tie cryptographic identities to origins, rather than
anything else; and, OMG that is incredibly hard to do. So, maybe we
should just leave origins alone.

Anne van Kesteren

Sep 23, 2014, 3:26:33 AM
to Chris Palmer, Patrick McManus, mozilla-dev-s...@lists.mozilla.org, ryan-mozde...@sleevi.com, Richard Barnes
On Mon, Sep 22, 2014 at 10:52 PM, Chris Palmer <pal...@google.com> wrote:
> Quite so. My point in this thread was: If we are going to change the
> definition of what an origin is, the most security-meaningful change
> would be to tie cryptographic identities to origins, rather than
> anything else; and, OMG that is incredibly hard to do. So, maybe we
> should just leave origins alone.

What if we offered some new type of certificate? And if you downgraded
from that certificate to a normal certificate, you would have some
guarantees about cookie and localStorage data. And perhaps it would
automatically give you HSTS. Or is that too problematic to roll out?


--
https://annevankesteren.nl/

Henri Sivonen

Sep 23, 2014, 3:39:23 AM
to Chris Palmer, yan-mozde...@sleevi.com, Anne van Kesteren, mozilla-dev-s...@lists.mozilla.org, Richard Barnes
On Mon, Sep 22, 2014 at 9:36 PM, Chris Palmer <pal...@google.com> wrote:
> On Mon, Sep 22, 2014 at 5:56 AM, Henri Sivonen <hsiv...@hsivonen.fi> wrote:
>
>>> -- HTTP Strict Transport Security
>>
>> Yes, but I think this requirement shouldn't apply to subresources for
>> the page to qualify, since top-level HSTS together with the "No mixed
>> content" requirement mean that there's no sslstrip risk for embedded
>> resources even if they are served from a non-HSTS CDN.
>
> These days we're blocking loads of active mixed content, but passive
> mixed content is still a concern to me. E.g. an attacker can mangle a
> web app's UI pretty badly, including to perform attacks, if the app
> gets its icons and buttons via SSLstrip-able sources.

My point was that if the HTML is served from HSTS-enabled
https://hsts.example.com/ and it has <img
src='https://cdn.example.org/foo.png'>, the image is not SSLstrip-able
even if cdn.example.org doesn't support HSTS. I think it would be a
mistake to make the bar for the new indicator needlessly
high--especially in a way that would require pleading with an external
CDN to resolve.

> HPKP is indeed dangerous.
>
> I don't anticipate any additional UI for it, let alone additional UI
> that would motivate a not-ready-yet ops team to turn it on.

OK.

>> It seems to me that it's at least currently impractical for small
>> sites to get CAs to commit to issue future certs from a particular
>> root or intermediate, so it seems to me that especially pinning an
>> intermediate is hazardous unless you are big enough a customer of a CA
>> to get commitments regarding future issuance practices.
>
> Intermediates move slowly, and roots even more slowly. It's fairly
> safe to assume that, for the lifetime if your end-entity cert, the CA
> will still be operating, and if that they can and will cross-sign in
> cases where they re-key heavily-used issuing certs.

The thing is that "fairly safe to assume" isn't really good enough if
your site becomes unreachable for a year if the assumption is wrong.
Current best practice with intermediates is to always re-download
intermediates when renewing the end-entity cert, because the
intermediate *might* have changed. As for roots, major CAs have
numerous roots and at least if you are paying at the cheapest tiers,
there doesn't seem to be any ahead-of-time commitment from CAs to use
a particular root. Looking at Twitter's pre-pins in Chrome, it seems
that even Twitter isn't counting on an ahead-of-time commitment for
future issuance to happen from a particular root. (Moreover, I don't
see Twitter actually serving hashes for all the CA roots that are
pre-pinned for them as HTTP headers. In fact, I don't see them serving
any pinning headers at all. Especially in the CDN case, it might be a
tough sell that you 1) have to research a large number of roots, just
in case, and 2) stick the hashes of a large number of roots (amounting
to a large number of bytes) in every response.)

Maybe HPKP will lead to CAs making dependable statements about future
issuance relative to intermediates and roots, but it's too early to
rely on that sort of thing before it actually happens.

> But, yeah, have a backup pin, and pin at various places in the
> certificate chain.

My point is that 1) I want the new indicator to improve the long tail,
too, not just big sites, so I want the new indicator to be practically
available to sites that are low-end customers from the CA perspective
and 2) if you are a low-end customer for a CA, AFAICT, there's no
*sure* way to have a backup pin.

> I'd advise people to look at
> net/http/transport_security_state_static.json and consider what
> Dropbox, Google, Twitter, and Tor have done, and why.

I've taken a look and, whoa, that's *complicated*!

Again (I saw we already agreed; mainly talking to others who might be
reading), requiring sites to deal with that sort of complication
before they get the new badge would mean few sites would go for the
new badge and the badge wouldn't have much impact. OTOH, if the story
was that you copy and paste lines like

    ssl_ciphers ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_dhparam /etc/nginx/ssl/dh-2048.pem;
    ssl_stapling on;
    resolver a.b.c.d;
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";

in your server config and you get a new badge, chances are that the
new badge would have a lot more impact. Getting admins to even copy
and paste the above would be quite an improvement over the status quo.

>> It's unclear to me if HPKP makes it safe and practical to use without
>> actually forming a business relationship with two CAs in advance
>> (which would be impractical for many small sites). It seems to me that
>> HPKP makes it possible to generate a backup end-entity key pair in
>> advance of having it certified. However, the spec itself discourages
>> end-entity pinning altogether and it's pretty scary to pin a key
>> before you know for sure you can get it certified by a CA later.
>
> I wouldn't say we discourage EE pinning; but I would discourage
> pinning EEs *exclusively*.

I read "Security Considerations" as discouraging end-entity pinning, but OK.

>> (In some way, it seems like HPKP is the simplest thing that makes
>> sense for Google, which has its own intermediate, but for the rest of
>> us, being able to maintain a TACK-style signing key *to the side of*
>> the CA-rooted chain would be better. What's the outlook of us
>> supporting TACK or some other mechanism that allows pinning a
>> site-specific signing key that's not part of the CA-rooted chain?)
>
> I consider a backup pin to be enough like an "on the side" pin. But,
> however, you may not.

If I'm a small-time customer from the CA perspective and CAs won't
make commitments about future issuance practices to me, I can't rely
on just pinning two intermediates. Pinning all the roots of a bunch of
CAs would bloat the responses by quite a bit. Buying two certs from
different CAs right away would double my cert costs up front, and it's
hard enough to make people pay for certs at all. If I mint two keys and
pin both of them but have only one certified up front, there's the
risk that, by the time there's a need to get the other one certified,
things won't work out for some reason. (The key
generation and pinning was simply botched and never tested [most
likely], a catastrophic RNG bug was discovered in the mean time
[unlikely but has happened; CAs check for Debian weak keys] or some
requirement like minimum key size changed [very unlikely to happen
with less than a year of notice]. The first one, i.e. just botching
something at the site's end and never testing is by far the most
likely problem.)

On Mon, Sep 22, 2014 at 11:41 PM, Ryan Sleevi
<ryan-mozde...@sleevi.com> wrote:
> That is, the TSK is almost invariably going to get lost, or someone will
> forget the password, or the person who creates the TSK will forget to back
> it up and format their machine, or any number of things we _routinely_ see
> with SSL certs (and precisely why implicit pinning to EE certs is hard).
> The only people who will be able to safely deploy TACK are a subset of
> those who can safely deploy HPKP.

All these problems (and more) apply to pinning end-entity keys with
HPKP and, as indicated above, at the low end, you can't really rely on
the CAs keeping the intermediates stable for pinning for you.
(Especially not the other CA you aren't even a customer of yet.) :-(

(Note that I think HPKP is a valuable addition to the platform above
the low end. I'm just arguing that we shouldn't make HPKP as-is a
requirement for the new badge in order to make the long tail chase the
new badge, too.)

Hubert Kario

Sep 23, 2014, 5:25:41 AM
to s...@gmx.ch, dev-secur...@lists.mozilla.org
----- Original Message -----
> From: s...@gmx.ch
> To: dev-secur...@lists.mozilla.org
> Sent: Monday, 22 September, 2014 9:28:39 PM
> Subject: Re: Indicators for high-security features
>
>
> Am 22.09.2014 um 14:56 schrieb Henri Sivonen:
> > On Wed, Sep 17, 2014 at 6:20 PM, Richard Barnes <rba...@mozilla.com>
> > wrote:
> >> -- Use of ciphersuites with forward secrecy
> > Yes, but I think it makes sense to go further with ciphersuites. At
> > minimum, RC4 should not qualify, but given how easy it is to enable
> > AES-GCM if you can enable TLS 1.2 per the earlier point, why not
> > require an AEAD suite (i.e. AES-GCM or an upcoming ChaCha20 suite) and
> > set aside all perceived or actual CBC problems while at it?
> >
> I think 3DES should not qualify, too. It's just the less worse
> alternative of RC4 to support IE 8.

If we accept SHA-1 signed certs, then 3DES is less of a concern.

If we clean up everything and require 128-bit security through and
through for the high-security indication, then yes, 3DES needs to get cut.

--
Regards,
Hubert Kario

Gervase Markham

Sep 23, 2014, 2:02:45 PM
to Richard Barnes, Anne van Kesteren
On 17/09/14 16:20, Richard Barnes wrote:
> There are a bunch of security features right now that I think we all
> agree improve security over and above just using HTTPS:
> -- HTTP Strict Transport Security

Check.

> -- HTTP Public Key Pinning

Others have made the point, which I agree with, that HPKP requires an
on-the-ball ops team to deploy right. If we make this part of the bar,
only a few sites will have the marker. Maybe that's what we want, maybe
not. But when the first site goes out of business because they literally
made their website inaccessible to every single existing customer,
because they were pursuing this icon and mis-deployed HPKP, then it will
not do much for the reputation of this program.

The incentive to deploy HPKP in particular should come from site owners
themselves. If other people push them into it, bad things could happen.

> -- TLS 1.2+

Are there any client-compat issues currently blocking sites from rolling
out TLS 1.2+?

> -- Certificate Transparency

I should make clear here that Mozilla currently has not committed to
support CT, although we are watching with interest. But Richard is only
sketching ideas, so that's fine ;-)

> -- Use of ciphersuites with forward secrecy

Check.

> -- No mixed content

Well yes, but you get a degraded UI experience at the moment if you have
mixed content.

> -- Content Security Policy (?)

As others have said, not sure how you could check for this actually
being used in a security-enhancing way.

> -- Sub-resource integrity (?)

What do you mean by that, exactly?

> It would be good if we could create incentives for sites to turn on
> these features. EFF has already seen some sites trying to turn
> things green on their "Encrypt the Web Report" [1]. Should we
> consider creating a suite of features that comprise a "high-security"
> web site, and create some UI to express that to the user?

I am tentatively optimistic about exploring this idea...

> We could invent new UI for this (e.g., a green lock icon), or we
> could overlay these requirements on the EV criteria.

...but I think we should not mess with EV, which has a defined meaning
("the identity of the owner of this website is known with a high degree
of reliability") and therefore, we should also stay away from the colour
green. A little highlight or similar annotation on the lock might be a
good place to start. After all, we can change the UI presentation later
to be more or less visible.

But, like all security UI indicators, the question is: what do you
expect people to do when they see this (or the lack of it)? Do you
expect lack of this indicator to drive site choice decisions?

Gerv

fhw...@gmail.com

Sep 23, 2014, 2:08:13 PM
to Henri Sivonen, mozilla-dev-s...@lists.mozilla.org
So what is the reason to use HSTS over a server-initiated redirect? Seems to me the latter would provide greater security, whereas the former is easy to bypass.


  Original Message  
From: Henri Sivonen
Sent: Monday, September 22, 2014 7:56 AM‎

On Wed, Sep 17, 2014 at 6:20 PM, Richard Barnes <rba...@mozilla.com> wrote:
> There are a bunch of security features right now that I think we all agree improve security over and above just using HTTPS:
> -- HTTP Strict Transport Security

Yes, but I think this requirement shouldn't apply to subresources for
the page to qualify, since top-level HSTS together with the "No mixed
content" requirement mean that there's no sslstrip risk for embedded
resources even if they are served from a non-HSTS CDN.‎

fhw...@gmail.com

Sep 23, 2014, 2:08:17 PM
to Patrick McManus, mozilla-dev-s...@lists.mozilla.org
I was hoping to learn that images too would get blocked. I'm not sure I can think of all the ways to exploit this hole in security, but certainly a browser defect in image handling is one of them.

I'm sure blocking such http requests would break some sites, but has anyone performed research or analysis into how big the problem is? Is there a user option to force them to be blocked?

I'm also curious how exhaustively the blocking rules get tested. With all the levels of nesting that occur, and the caching and redirects and live javascript stuff that take place on nearly every page load, it seems like there certainly could be holes, but I'd rather have hard facts. Anyone have data on that?

Thank you!
From: Patrick McManus
Sent: Monday, September 22, 2014 7:29 AM‎

Anne van Kesteren

Sep 23, 2014, 3:10:32 PM
to fhw...@gmail.com, Patrick McManus, mozilla-dev-s...@lists.mozilla.org
On Tue, Sep 23, 2014 at 8:08 PM, <fhw...@gmail.com> wrote:
> I'm sure blocking such http requests would break some sites but has anyone performed research or analysis into how big the problem is? ‎Is there a user option to force them to be blocked?

Download Firefox Nightly, browse the web, and look for a broken lock.
As far as I can tell a bunch of sites would break, though the breakage
is probably not severe. E.g. mail.google.com, newsblur.com,
indiewebcamp.com.

I doubt there are holes by the way, but if you find any let us know.


--
https://annevankesteren.nl/

Hubert Kario

Sep 23, 2014, 3:21:32 PM
to Anne van Kesteren, Patrick McManus, fhw...@gmail.com, mozilla-dev-s...@lists.mozilla.org
----- Original Message -----
> From: "Anne van Kesteren" <ann...@annevk.nl>
> To: fhw...@gmail.com
> Cc: "Patrick McManus" <mcm...@ducksong.com>, mozilla-dev-s...@lists.mozilla.org
> Sent: Tuesday, 23 September, 2014 9:10:32 PM
> Subject: Re: Mixed content (was: Indicators for high-security features)
>
> On Tue, Sep 23, 2014 at 8:08 PM, <fhw...@gmail.com> wrote:
> > I'm sure blocking such http requests would break some sites but has anyone
> > performed research or analysis into how big the problem is? ‎Is there a
> > user option to force them to be blocked?
>
> I doubt there are holes by the way, but if you find any let us know.

Firefox has already had CVEs for JPEG handling; they are not unheard of.

--
Regards,
Hubert Kario

Chris Palmer

Sep 23, 2014, 4:41:49 PM
to fhw...@gmail.com, Henri Sivonen, mozilla-dev-s...@lists.mozilla.org
On Tue, Sep 23, 2014 at 11:08 AM, <fhw...@gmail.com> wrote:

> ‎So what is the reason to use HSTS over a server initiated redirect? Seems to me the latter would provide greater security whereas the former is easy to bypass.

You have it backwards.

http://www.thoughtcrime.org/software/sslstrip/

Matt Palmer

Sep 23, 2014, 6:00:14 PM
to dev-secur...@lists.mozilla.org
On Tue, Sep 23, 2014 at 01:08:13PM -0500, fhw...@gmail.com wrote:
> So what is the reason to use HSTS over a server initiated redirect? Seems
> to me the latter would provide greater security whereas the former is easy
> to bypass.

On the contrary, HSTS is much harder to bypass, because the browser
remembers the HSTS setting for an extended period of time. While first use
is still vulnerable to a downgrade attack under HSTS, it's only *one* use,
whereas the browser is vulnerable to redirect filtering on *every* use. If
an attacker has enough access to the network to be able to strip the HSTS
header, they also have enough access to be able to block the
server-initiated redirect to HTTPS.
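
(In practice you deploy both: a redirect to get first-time visitors onto
https, and HSTS to keep them there. A sketch in nginx terms, with placeholder
names:

    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;   # catches first visits and old links
    }

    server {
        listen 443 ssl;
        server_name example.com;
        # certificate and cipher configuration omitted
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
    }

After the first successful https visit, the browser rewrites http:// to
https:// locally for the max-age period, so the strippable redirect is no
longer on the critical path.)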

- Matt

fhw...@gmail.com

Sep 23, 2014, 9:10:26 PM
to Matt Palmer, dev-secur...@lists.mozilla.org
OK, thanks Matt.  So the security improvement is because it's a server config plus persistent memory on the client side.

What is the thinking in Firefox (assume Thunderbird will be similar?) for handling of all the different cases that arise with it? I'm thinking of how persistent is the HSTS knowledge, can it be cleared, what errors/warnings might appear, will users be allowed to bypass them, and so forth.


  Original Message  
From: Matt Palmer
Sent: Tuesday, September 23, 2014 5:01 PM‎

- Matt‎

fhw...@gmail.com

Sep 24, 2014, 1:03:48 AM
to Matt Palmer, dev-secur...@lists.mozilla.org
So I read through RFC 6797 and see that some of my concerns are addressed there. Still, I would like to have a better understanding of Mozilla's implementation since there is user agent flexibility that's open to interpretation. One other thing that isn't clear to me is how complete the Mozilla implementation is. Is there more work to do or is it all in there and now we're just waiting for websites to deploy it?

The shortcoming of HSTS is on the deployment side, where on the one hand it purports to help web app developers and deployment teams who falter at security and on the other hand gives those same people all-new ways to falter at security. It's your classic bait-and-switch except this time your site could become unusable.

For example, how do I pick a suitable max-age? Suppose I mistake the units for days instead of seconds (seriously? seconds?!?) and set the value to 10. What are the practical effects of that? What happens when I use a value of 0xFFFFFFFF? If my settings mean I've DoS-ed myself, what can I do to the browser to restore service?

The most ambitious of web sites and services will be up for the challenge of doing a proper HSTS implementation but the rest...I don't know. Any thoughts on how widely this will be adopted?

David Keeler

Sep 24, 2014, 1:31:03 PM
to dev-secur...@lists.mozilla.org
On 09/23/2014 10:03 PM, fhw...@gmail.com wrote:
> So I read through RFC 6797 and see that ‎some of my concerns are
> addressed there. Still, I would like to have a better understanding of
> Mozilla's implementation since there is user agent flexibility that's
> open to interpretation. One other thing that isn't clear to me is how
> complete the Mozilla implementation is. Is there more work to do or is
> it all in there and now we're just waiting for websites to deploy it?

The implementation in Firefox has been complete for a few years now:
http://blog.sidstamm.com/2010/08/http-strict-transport-security-has.html
Many sites do use HSTS, although it would be great if more did.

> The shortcoming of HSTS is on the deployment side, where on the one hand
> it purports to help web app developers and deployment teams who falter
> at security and on the other hand gives those same people all-new ways
> to falter at security. It's your classic bait-and-switch except this
> time your site could become unusable.

It doesn't just help development and deployment - it prevents things
like ssl-stripping attacks (where an attacker actively re-writes https
links as http).

> For example, how do I pick a suitable max-age? Suppose I mistake the
> units for days instead of seconds (seriously? seconds?!?) and set the
> value to 10. What are the practical effects of that? What happens when I
> use a value of 0xFFFFFFFF? If my settings mean I've DoS-ed myself, what
> can I do to the browser to restore service?

HSTS is particularly useful when combined with browser preload lists
(that is, some browsers ship knowing that some sites use HSTS - this
fixes the first-connection issue). To get on the Firefox and Chrome
lists, a site must set a max-age of at least 18 weeks (10886400
seconds). So, if your site fully supports https, 10886400 is a suitable
max-age to use. If max-age is set to 10, your site will not get the
benefit of HSTS. For Firefox, as long as the max-age value fits in a
signed 64-bit integer, it will be honored (this is just an
implementation detail).
A site can only DoS itself if it sets a long-lived header and then stops
supporting https (or if it sets includeSubdomains and a subdomain
doesn't support https). The easy answer is: if your site is committed
to always supporting https, then HSTS is appropriate; if not, it isn't.
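
For what it's worth, it's easy to check what a given site sends (a rough
sketch in Python 3; example.com is just a placeholder host, and a
preload-eligible value like 10886400 is what you'd hope to see):

    import http.client

    # Fetch only the response headers over HTTPS and look for HSTS.
    conn = http.client.HTTPSConnection("example.com")  # placeholder
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.getheader("Strict-Transport-Security"))
    # e.g. "max-age=10886400; includeSubdomains" -- 18 weeks, enough for the preload lists
    conn.close()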

> The most ambitious of web sites and services will be up for the
> challenge of doing a proper HSTS implementation but the rest...I don't
> know. Any thoughts on how widely this will be adopted?

Again, using HSTS is essentially as difficult as using https properly.
If that's doable (and it's definitely a whole lot easier than it used to
be), then setting an HSTS header is a small incremental step that does
increase a site's security.

fhw...@gmail.com

unread,
Sep 25, 2014, 12:20:48 AM9/25/14
to Anne van Kesteren, mozilla-dev-s...@lists.mozilla.org
I've thought a lot about this and for my money I don't think a UI indicator is the right solution for a difficult problem like encouraging and rewarding good security practices. I realize my sphere of influence is limited so I'll just offer the following perspective for whatever it may be worth:

The crux of the problem is how to identify security done right. Two aspects to consider: 1) When coming up with a grading scale for how good a security implementation is, I think the best we can do is have 2 categories: "pretty good" and "good enough". Maybe add a "poor" or "not acceptable" category too. Either way, I'm concerned that a "best" category might prove to be out of reach for most sites, assuming we could agree in the first place as to what constitutes "best".

2) Doing a good job on security for a "main page" (i.e. the origin document) is one thing but getting all the ancillary stuff right is a much larger challenge. Think of all the iframes and style sheets and dynamic content and so forth that are loaded on just about any real page you come across: js libraries, branding and marketing artwork, analytics reporting. A lot of the time that stuff is not under the control of the site owner, yet it can affect my own security as a visitor to the site. I shouldn't say "everything looks good" if some of the ancillary content is not well protected. So, how can shortcomings on the ancillary side of things be factored in to an overall score in a way that is fair?

Taken together, I'm concerned a UI indicator of some sort might turn into something that's neither fair nor accurate and therefore not very meaningful. Were that to happen, I don't think we'd end up encouraging anyone to do a better job on website security.

I like the idea of encouraging and rewarding those who make the effort to have good security practices but I don't think this is the right path.

Henri Sivonen

unread,
Sep 25, 2014, 11:18:46 AM9/25/14
to fhw...@gmail.com, Anne van Kesteren, mozilla-dev-s...@lists.mozilla.org
On Mon, Sep 22, 2014 at 2:47 PM, <fhw...@gmail.com> wrote:
> To the larger discussion, I have 2 questions: 1) what is the specific message you'd like to convey to the user ‎beyond what the simple lock icon provides.

That the site not only uses authenticated https but uses authenticated
https *better*. (I think forward secrecy and HSTS can be considered
the main ingredients of "better".)

The bar for the old lock is pretty low: You get the old lock with
SSL3, RSA key transport and RC4 without HSTS. However, just changing
the criteria for the old lock would probably have the effect of
"crying wolf", since so many currently lock-bearing sites don't meet
the better criteria.
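
For the curious, what a server actually negotiates is easy enough to
check from a script (a rough sketch with a recent Python 3; the host is
just a placeholder):

    import socket
    import ssl

    host = "example.com"  # placeholder
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version())  # protocol, e.g. "TLSv1.2"
            print(tls.cipher())   # (suite, protocol, bits); an ECDHE suite means forward secrecy

An HSTS check on top of that is just a matter of looking for the
Strict-Transport-Security response header.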

> 2) What action do you intend the user to take based on seeing the new indicator?

I expect most users not to take any action. I'd expect site admins who
see the new indicator on someone else's site to think "Why does the
other site have a cooler lock than mine? I want the cooler lock, too."
and then learn how to get the cooler lock. I'd also expect a small
group of technically informed users, who currently don't bother
inspecting the ciphersuite and HSTS state of sites, to nag the sites
they use that don't have the new indicator to clean up their act and
get the new indicator.

Cork

unread,
Sep 25, 2014, 11:40:36 AM9/25/14
to fhw...@gmail.com, mozilla-dev-s...@lists.mozilla.org
Regarding the option to block mixed images: set the preference security.mixed_content.block_display_content to true in about:config.
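
If you'd rather set it from a profile's user.js than flip it by hand in about:config, the equivalent line should be:

    user_pref("security.mixed_content.block_display_content", true);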

And yes, quite a few https sites would break, as far as I've experienced.

// Cork

----- Original Message -----

> From: fhw...@gmail.com
> To: "Patrick McManus" <mcm...@ducksong.com>
> Cc: mozilla-dev-s...@lists.mozilla.org
> Sent: Tuesday, 23 September, 2014 8:08:17 PM
> Subject: Mixed content (was: Indicators for high-security features)

> ‎I was hoping to learn that images too would get blocked. I'm not sure I can
> think of all the ways to exploit this hole in security but certainly a
> browser defect in image handling is one of them.

> I'm sure blocking such http requests would break some sites but has anyone
> performed research or analysis into how big the problem is? ‎Is there a user
> option to force them to be blocked?

> I'm also curious ‎how exhaustively the blocking rules get tested. With all
> the levels of nesting that occur and caching and redirects and live
> javascript stuff that take place on most every page load, it seems like
> there certainly could be holes but I'd rather have hard facts. Anyone have
> data on that?

> Thank you!
> From: Patrick McManus
> Sent: Monday, September 22, 2014 7:29 AM‎

> wrt http:// images from a https:// origin - the images do load but you get
> the !-in-a-triangle mixed content icon instead of a lock.

> _______________________________________________
> dev-security-policy mailing list
> dev-secur...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy

fhw...@gmail.com

unread,
Sep 25, 2014, 1:39:52 PM9/25/14
to dev-secur...@lists.mozilla.org
I'll address the DoS thing momentarily but first I'm curious if there's any data out there on how widely deployed HSTS currently is and/or to what extent site/domain owners are committing to support it going forward?

Also, are the cases where self-DoS might occur well known? The cases I can think of generally fall into 3 different categories, but since the actual ways in which you might shoot yourself in the foot are numerous (and subtle), I'd argue that choosing to implement HSTS is a much larger commitment than HTTPS alone. For one thing, you need knowledge of your whole domain and the content being delivered (and how it's being deployed), or you run the risk of screwing something up.

You hit upon one such case below, where a subdomain that doesn't have SSL becomes inaccessible due to the "includeSubdomains" flag. Actually the other case is a problem too, but for illustration purposes I'll talk about the former.


So, consider a brand like Nike that has a large internet presence and a lot of products serving different people and markets. I don't personally know anything about them or how they get their internet needs met, but let's just assume for this discussion they have a bunch of different teams and outsourcing agreements that try to make it all work (something I think could be said of all major corporations).

Next, let's suppose they want to run a marketing campaign during a major sports event and give away free shoes to the first 500 people who sign up at a new micro-site set up just for this campaign. The browser requests go something like this (substituting "-" for "." in the hostnames):

1. Go to the landing page at freeshoes-nike-com using http
2. Grab some logo graphics from nike-com using https; HSTS is enabled with includeSubdomains
3. Grab a js file at freeshoes-nike-com that will collect people's information using http, which is rewritten to https, but a cert was never installed for "freeshoes"

Clearly you are screwed: the page will not display correctly. And if you try to go back to the landing page (with just http), you're even worse off, because then nothing shows up, only the error screen. People will be very upset, especially the marketing team, who can do nothing but watch their campaign blow up before their very eyes.
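
To make the failure mode concrete, the only thing nike-com has to send in step 2 is a header along these lines (the max-age value here is purely illustrative):

    Strict-Transport-Security: max-age=31536000; includeSubdomains

Once the browser has cached that, it rewrites every http:// request for nike-com and its subdomains to https:// before anything goes out on the wire, so freeshoes-nike-com stays unreachable until either a cert covering it is installed or the cached entry expires.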

Put simply, a debacle such as this would be a very big deal, and no matter how much people might like the idea of security, there is not a person out there who wants to risk losing their job just to be more secure.

So that's why I have a hard time seeing HSTS becoming widely adopted. Maybe it will make my site more secure but if it's going to screw everything up, I'm not interested. Bait-and-switch.


Some of the other DoS cases might be even more problematic, but I don't know if anyone wants to get into them here.

Thanks.

From: David Keeler
Sent: Wednesday, September 24, 2014 12:32 PM‎

On 09/23/2014 10:03 PM, fhw...@gmail.com wrote:
...snip...


> The shortcoming of HSTS is on the deployment side, where on the one hand
> it purports to help web app developers and deployment teams who falter
> at security and on the other hand gives those same people all-new ways
> to falter at security. It's your classic bait-and-switch except this
> time your site could become unusable.

...snip...

Hubert Kario

unread,
Sep 26, 2014, 7:08:15 AM9/26/14
to fhw...@gmail.com, dev-secur...@lists.mozilla.org
----- Original Message -----
> From: fhw...@gmail.com
> To: dev-secur...@lists.mozilla.org
> Sent: Thursday, 25 September, 2014 7:39:33 PM
> Subject: Re: HSTS

> I'll address the DoS thing momentarily but first I'm curious if there's any
> data out there on how widely deployed HSTS currently is

About 2% of sites advertise HSTS; see https://www.trustworthyinternet.org/ssl-pulse/

--
Regards,
Hubert Kario

fhw...@gmail.com

unread,
Sep 29, 2014, 12:54:13 AM9/29/14
to Hubert Kario, dev-secur...@lists.mozilla.org
Thanks, Hubert. I was guessing about 1,000 sites, so seeing 3,000 is better but still small. What I didn't expect is that fewer than 50,000 sites present themselves as being secure in the first place. That's smaller than it ought to be.

The real shocker, however, is how many sites exhibit known vulnerabilities. The Heartbleed stat especially stands out. I suppose those sites are given an F rating, but really the certs need to be revoked in all 738 cases.

Is there any way the CAs can help us confirm that any site which is vulnerable to Heartbleed has had its cert revoked?


  Original Message  
From: Hubert Kario
Sent: Friday, September 26, 2014 6:07 AM
To: fhw...@gmail.com
Cc: dev-secur...@lists.mozilla.org
Subject: Re: HSTS