Proposal
We, the Chrome Security Team, propose that user agents (UAs) gradually change their UX to display non-secure origins as affirmatively non-secure. We intend to devise and begin deploying a transition plan for Chrome in 2015.
The goal of this proposal is to more clearly display to users that HTTP provides no data security.
Request
We’d like to hear everyone’s thoughts on this proposal, and to discuss with the web community how different transition plans might serve users.
Background
We all need data communication on the web to be secure (private, authenticated, untampered). When there is no data security, the UA should explicitly display that, so users can make informed decisions about how to interact with an origin.
Roughly speaking, there are three basic transport layer security states for web origins:
Secure (valid HTTPS, other origins like (*, localhost, *));
Dubious (valid HTTPS but with mixed passive resources, valid HTTPS with minor TLS errors); and
Non-secure (broken HTTPS, HTTP).
For more precise definitions of secure and non-secure, see Requirements for Powerful Features and Mixed Content.
We know that active tampering and surveillance attacks, as well as passive surveillance attacks, are not theoretical but are in fact commonplace on the web.
RFC 7258: Pervasive Monitoring Is an Attack
NSA uses Google cookies to pinpoint targets for hacking
Verizon’s ‘Perma-Cookie’ Is a Privacy-Killing Machine
How bad is it to replace adSense code id to ISP's adSense ID on free Internet?
Comcast Wi-Fi serving self-promotional ads via JavaScript injection
Erosion of the moral authority of transparent middleboxes
Transitioning The Web To HTTPS
We know that people do not generally perceive the absence of a warning sign. (See e.g. The Emperor's New Security Indicators.) Yet the only situation in which web browsers are guaranteed not to warn users is precisely when there is no chance of security: when the origin is transported via HTTP. Here are screenshots of the status quo for non-secure domains in Chrome, Safari, Firefox, and Internet Explorer:
Particulars
UA vendors who agree with this proposal should decide how best to phase in the UX changes given the needs of their users and their product design constraints. Generally, we suggest a phased approach to marking non-secure origins as non-secure. For example, a UA vendor might decide that in the medium term, they will represent non-secure origins in the same way that they represent Dubious origins. Then, in the long term, the vendor might decide to represent non-secure origins affirmatively as Non-secure.
Ultimately, we can even imagine a long term in which secure origins are so widely deployed that we can leave them unmarked (as HTTP is today), and mark only the rare non-secure origins.
There are several ways vendors might decide to transition from one phase to the next. For example, the transition plan could be time-based:
T0 (now): Non-secure origins unmarked
T1: Non-secure origins marked as Dubious
T2: Non-secure origins marked as Non-secure
T3: Secure origins unmarked
Or, vendors might set thresholds based on telemetry that measures the ratios of user interaction with secure origins vs. non-secure. Consider this strawman proposal:
Secure > 65%: Non-secure origins marked as Dubious
Secure > 75%: Non-secure origins marked as Non-secure
Secure > 85%: Secure origins unmarked
The particular thresholds or transition dates are very much up for discussion. Additionally, how to define “ratios of user interaction” is also up for discussion; ideas include the ratio of secure to non-secure page loads, the ratio of secure to non-secure resource loads, or the ratio of total time spent interacting with secure vs. non-secure origins.
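As a rough sketch, the strawman's telemetry-driven phasing could be expressed as a simple phase function. The thresholds are the strawman's; the phase names and the function itself are purely illustrative, not part of the proposal:

```javascript
// Hypothetical sketch: pick a UI phase from the measured share of
// user interaction with secure origins. Phase names are made up here.
function markingPhase(secureShare) {
  if (secureShare > 0.85) return "secure-unmarked";  // only non-secure origins marked
  if (secureShare > 0.75) return "http-non-secure";  // HTTP marked as Non-secure
  if (secureShare > 0.65) return "http-dubious";     // HTTP marked as Dubious
  return "status-quo";                               // no UX change yet
}
```

Whichever "ratio of user interaction" metric is chosen would simply feed `secureShare` here.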
We’d love to hear what UA vendors, web developers, and users think. Thanks for reading!
* The biggest problem I see is that, traditionally, you needed to pay to get an accepted certificate. This was a show-stopper for having TLS certs on small websites. Mozilla, EFF, Cisco, and Akamai are trying to fix that [1], and StartSSL already gives free certificates. Just stating the obvious: either webmasters get easy and free "secure" certificates, or this proposal is going to make some of them angry.
To unsubscribe from this group and stop receiving emails from it, send an email to security-dev...@chromium.org.
Indeed, and expect a separate discussion of that. You can already see some of the discussion on the security-dev@ list regarding requiring OCSP stapling or modern ciphersuites for EV, and one can naturally assume that will migrate to DV.
That is, just as EV moves to DV when deployed dangerously, so too should DV move to dubious.
But, as you note, that's something to be discussed alongside.
_______________________________________________
dev-security mailing list
dev-se...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security
Free SSL certificates help, but another problem is that activating SSL not only generates warnings, it can outright break a site because of links to insecure resources. Consider old pages with a few YouTube videos served via http iframes. Accessing those pages over https stops the videos from working, because browsers block access to active insecure content. In YouTube's case one can fix that, but for other resources it may not be possible.
So what is required is the ability to refer to insecure content from HTTPS pages without harming the user experience.
For example, there should be a way to insert an http iframe into an https site. Similarly, it would be nice if a web developer could refer to scripts, images, etc. over http, as long as the script/image tag is accompanied by a secure hash of the known content.
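There is in fact a draft mechanism close to this idea: Subresource Integrity, which lets a tag carry the expected cryptographic digest of the resource. Note that the current drafts do not relax mixed-content blocking for http:// subresources; the sketch below only illustrates the hash-pinning idea (the host is made up and the integrity value is a placeholder digest, not the hash of any real file):

```html
<!-- Illustrative sketch only: script.example.com is a made-up host and
     the integrity value is a placeholder digest. -->
<script src="https://script.example.com/library.js"
        integrity="sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="
        crossorigin="anonymous"></script>
```

If the fetched bytes do not match the declared digest, the browser refuses to execute the script.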
I see a big danger in the current trend. If everyone is expected to have a free "secure" certificate and is effectively required to enable HTTPS, nothing is won. DV certificates (much like DANE) ultimately say absolutely nothing about the website operator.
They ensure encryption, so I can then be phished or scammed... encrypted. Big advantage!^^ Pushing real validation (e.g. EV with a green address bar and details validated by an independent third party, not a breakable, spoofable automatism) versus no validation is much more important and should be the focus.
Marking HTTP as Non-Secure could be part of this change, but simply presenting HTTPS as "secure" is entirely the wrong signal and will result in more confusion, and in losing whatever trust remains in browser padlocks.
Yes, unfortunately we have a collective action problem (http://en.wikipedia.org/wiki/Collective_action#Collective_action_problem). But just because it's hard doesn't mean we shouldn't try. I'd suggest that embedders ask embeddees to at least make HTTPS available, even if not the default. Also, keep in mind that this proposal is only to mark HTTP as non-secure — HTTP will still work, and you can still host your site over HTTP.
Is there a plan for HTTP to eventually have an interstitial, the way HTTPS with a bogus cert does?
If serving content over HTTPS produces broken pages, the incentive to enable encryption is very low.
As was already mentioned, one solution is to allow serving encrypted pages over http://, so that pages referring to unencrypted elements would not break but would just produce warnings. Such encrypted http:// would also allow fewer warnings for a page whose content is all available over a self-signed, key-pinned certificate, since that is strictly more secure than plain HTTP.
But, again, consider the definition of the origin. If it is possible for securely-transported code to run in the same context as non-securely transported code, the securely-transported code is effectively non-secure.
That is, consider that a hosting provider currently has no way to unconditionally encrypt the pages it hosts without risking breaking its users' pages in modern browsers. With encrypted http:// it would gain that option, delegating the job of fixing warnings about insecure content to the content producers, where it belongs.
The main point of having a visible and stable indicator for encrypted sites is to communicate to the user that the site offers a good degree of resilience against the examination or modification of the exchanged data by network attackers.
Just a proposal:
Mark HTTP as Non-Secure (similar to self-signed), e.g. with a red padlock or something similar.
Mark HTTPS as Secure (secure only in the sense of encrypted), e.g. with a yellow padlock or something similar.
Mark HTTPS with Extended Validation (encrypted and validated) as it is today, with a green padlock or something similar.
I think there is a strong impression that a closed lock is better than neutral, but a yellow warning sign over the lock is worse than neutral.
From an SOP point of view, this is true.
However, it is increasingly less true if you're willing to ignore the (near cataclysmic) SOP failure, as EV gains technical controls such as certificate transparency and potentially mandatory stronger security settings (e.g. secure ciphersuites in modern TLS, OCSP stapling, etc.). Additionally, there are other technical controls (validity periods, key processing) that do offer distinction.
That is, it is not all procedural changes, and UAs can detect and differentiate. While the hope is that these will be able to apply to all sites in the future, any change of this scale takes time.
> If DNS is authentic, then DANE provides stronger assurances than DV or
> EV since the domain operator published the information and the
> veracity does not rely on others like CAs (modulo DBOUND).
>
> Not relying on a CA is a good thing, since it's usually advantageous to
> minimize trust (for some definition of "trust"). Plus, CAs don't
> really warrant anything, so it's not clear what exactly they are
> providing to relying parties (they are providing a signature for
> money to the applicant).
>
> Open question: do you think the browsers will support a model other
> than the CA Zoo for rooting trust?
Chromium has no plans for alternative trust models, particularly those based on DNS/DANE, which are empirically less secure and operationally more fraught with peril. I would neither take it as foregone that the CA system cannot improve, nor am I confident that any of the known alternatives are practical or comparable in security to CAs, let alone superior.
That seems somewhat tangential to Chris' original proposal, and there is probably a healthy debate to be had about this; it may also be worthwhile to look at SPDY and QUIC. In general, if you're comfortable with not providing users with a visible / verifiable degree of transport security, I'm not sure how the proposal changes this?
If there is genuinely no distinction between plain old HTTP and opportunistically encrypted HTTP, the scheme can be immediately rendered useless by any active attacker.
Hi everyone,
Apologies to those of you who are about to get this more than once, due to the cross-posting. I'd like to get feedback from a wide variety of people: UA developers, web developers, and users. The canonical location for this proposal is: https://www.chromium.org/Home/chromium-security/marking-http-as-non-secure.
On Dec 15, 2014, at 7:10 PM, ferdy.c...@gmail.com wrote:
"If someone thinks their users are OK with their website not having integrity/authentication/privacy"
That is an assumption that doesn't apply to every website. Many websites don't even have authentication.
"Presumably these users would still be OK with it after Chrome starts making the situation more obvious."
Or perhaps it doesn't, and it scares them away. Just like with the cookie bars, where now every user believes all cookies are evil. You assume users are able to make an informed decision based on such warnings, and I doubt that.
Serve the HTML page over http: but load all sub-resources over https: as
expected after the transition. Add the following header:
Content-Security-Policy-Report-Only: default-src https:; report-uri <me>
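For illustration, when such a page loads an http:// subresource, the endpoint named by report-uri receives a JSON body of roughly this shape (per the CSP 1.0 reporting format; the URLs here are placeholders):

```json
{
  "csp-report": {
    "document-uri": "http://example.com/page.html",
    "referrer": "",
    "violated-directive": "default-src https:",
    "original-policy": "default-src https:; report-uri /csp-reports",
    "blocked-uri": "http://example.com/insecure.js"
  }
}
```

Collecting these reports tells you exactly which subresources would break before you actually flip the site to https.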
"Authentication" here does not refer to "Does the user authenticate themselves to the site" (e.g. do they log in), but "Is the site you're talking to the site you expected" (or, put differently, "Does the server authenticate itself to the user").
This is a pretty interesting use case. When you connect at the airport, the typical first thing that happens is you get a warning saying that the site you want to connect to has the wrong certificate (you went to pogoda.yandex.ru but the certificate is for airport.logins.aero, or 1.1.1.1).
--
Charles McCathie Nevile - web standards - CTO Office, Yandex
On Wednesday, December 17, 2014 7:44:59 PM UTC+1, cha...@yandex-team.ru wrote:
This is a pretty interesting use case. When you connect at the airport, the typical first thing that happens is you get a warning saying that the site you want to connect to has the wrong certificate (you went to pogoda.yandex.ru but the certificate is for airport.logins.aero, or 1.1.1.1).
--
Charles McCathie Nevile - web standards - CTO Office, Yandex
511 Network Authentication Required?
There is http://tools.ietf.org/html/rfc6585#section-6 for that. The Chromium bug is https://code.google.com/p/chromium/issues/detail?id=114929 ; Firefox has its own as well. As far as I know this only works for HTTP connections. There really is no reasonable way for the airport to step into an HTTPS connection and demand authentication without causing a certificate error.
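For reference, the example response in RFC 6585 looks roughly like this (login.example.net is the RFC's placeholder host; the meta refresh bounces the user to the portal's login page):

```
HTTP/1.1 511 Network Authentication Required
Content-Type: text/html

<html>
  <head>
    <title>Network Authentication Required</title>
    <meta http-equiv="refresh" content="0; url=https://login.example.net/">
  </head>
  <body>
    <p>You need to <a href="https://login.example.net/">authenticate
    with the local network</a> in order to gain access.</p>
  </body>
</html>
```

The point of the dedicated status code is that a browser (or any non-browser client) can recognize the captive portal as such, instead of mistaking the login page for the content it asked for.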
There is the experimental https://tools.ietf.org/rfc/rfc2521.txt, which suggests an ICMP packet "Need Authorization", but as I said, it is experimental. Am I missing something?
This gradual rollout of the UI hints being proposed now would help shift attention to such problems. The problems won't be solved until we reach a state where we (actually, you ;) truly _need_ to solve them.
Inline
On Dec 18, 2014 1:07 AM, <cha...@yandex-team.ru> wrote:
>
>
>
> 18.12.2014, 01:27, "software...@gmail.com" <software...@gmail.com>:
>>
>> On Wednesday, December 17, 2014 7:44:59 PM UTC+1, cha...@yandex-team.ru wrote:
>>>
>>> This is a pretty interesting use case. When you connect at the airport, the typical first thing that happens is you get a warning saying that the site you want to connect to has the wrong certificate (you went to pogoda.yandex.ru but the certificate is for airport.logins.aero, or 1.1.1.1).
>>> --
>>> Charles McCathie Nevile - web standards - CTO Office, Yandex
>>
>>
>> 511 Network Authentication Required?
>>
>> There is http://tools.ietf.org/html/rfc6585#section-6 for that. Chromium bug is https://code.google.com/p/chromium/issues/detail?id=114929 , Firefox has their own as well. As far as I know this only works for HTTP connections. There really is no reasonable way how the airport can step into an HTTPS connection and demand authentication without causing a certificate error.
>
>
> There is a certificate error. The point is that since it is expected behaviour, I get trained to say "yeah, whatever" so I can pay for the connection I need. Despite the fact that it is very difficult to be *sure* that the error is not actually a real problem.
>
> I'd love to see a better situation relying on a proper standard.
>
> But in general I don't.
>
I'm not sure if it's terribly germane to the HTTP-being-indicated-as-not-secure to rathole too much on the ways that HTTPS can be messed with, but I will simply note that the Chrome Security Enamel team are working on ways to better detect and manage this.
While we can wish for better standards, this is an area where standards-compliant devices take years to become even remotely ubiquitous. As such, we take a heuristic-based approach grounded in the way the world is, at least with respect to captive portals that actively try to disrupt and compromise users' connections.
>>
>> There is experimantal https://tools.ietf.org/rfc/rfc2521.txt which suggests an ICMP packet "Need Authorization", but as I said, it is experimantal. Am I missing something?
>>
>> This gradual roll out of the UI hints that is being proposed now would help shift attention to such problems. The problems won't be solved until we get to a state we (actually, you ;) truly _need_ to be solving them.
>
>
> Sure. But this turns out to be a case where right now there is a problem, and instead of *solving* it it seems that "the world" (or at least the parts I see, which is quite a lot by geography) is instead finding a quick workaround that gets them where they were going - at the cost of learning to ignore a potentially serious problem.
>
> On the whole I think this discussion is valuable, and the proposal makes sense. But I have concerns about whether we really understand the things that are going to change and the implications, so use cases like this are important to find and make sure we understand.
>
> cheers
>
> --
> Charles McCathie Nevile - web standards - CTO Office, Yandex
> cha...@yandex-team.ru - - - Find more at http://yandex.com
>
As noted elsewhere, we aren't trying to boil the ocean, and though I certainly accept the concerns are valid (and, as mentioned above, are already being independently worked on), I think we should be careful how much we fixate on these issues versus considering the broader philosophical issues this proposal is bringing forward.
There are certainly awful things in the world of HTTPS, on a variety of fronts. And yet, despite those warts, we would be misleading ourselves and others to think that insecure transports such as HTTP - ones actively disrupted for commercial gain, "value" adding, or malicious havoc and ones that are passively monitored on widespread, pervasive scale - represent the desirable state of where we want to be or go.
I think the end goal is more robust - we want a world where users are not only safe by default, but they expect that, and can understand what makes them unsafe. Though some of these may be outside our ken as UAs - for example, we have limited ability to know you're running a three year old version of phpBB that is owned harder than Sony Pictures and has more remote exploits than msasn1.dll - there are things we do know and should communicate. One of them is that the assumption many users have - that their messages are shared only between them and the server - is not true unless the server operator is conscientious.
Security warnings are often overused and therefore ignored [1]; it's even worse to provide a warning for something that's not actionable. I think we'd have to see very low plaintext rates (< 1%) in order not to habituate users into ignoring a plaintext warning indicator.
On Thu, Dec 18, 2014 at 12:12 PM, Monica Chew <m...@mozilla.com> wrote:
> I support the goal of this project, but I'm not sure how we can get to a
> point where showing warning indicators makes sense. It seems that about 67%
> of pageviews on the Firefox beta channel are http, not https. How are
> Chrome's numbers?
Currently, roughly 58% of top-level navigations in Chrome are HTTPS.
> Security warnings are often overused and therefore ignored [1]; it's even
> worse to provide a warning for something that's not actionable. I think we'd
> have to see very low plaintext rates (< 1%) in order not to habituate users
> into ignoring a plaintext warning indicator.
(a) Users are currently habituated to treat non-secure transport as
OK. The status quo is terrible.
(b) What Peter Kasting said: we propose a passive indicator, not a
pop-up or interstitial.
I'm curious about the difference between the two browsers. My guess is that we're treating same-origin navigations differently, particularly fragment changes. Monica, is Firefox collapsing all same-origin navigations into a single histogram entry? Given that people spend a lot of time on a small number of popular (and HTTPS) sites, it would account for the different stats.
I understand the desire here, but a passive indicator is not going to change the status quo if it's shown 42% of the time (or 67% of the time, in Firefox's case).
On Thu, Dec 18, 2014 at 1:34 PM, Peter Kasting <pkas...@google.com> wrote:
> On Thu, Dec 18, 2014 at 1:18 PM, Monica Chew <m...@mozilla.com> wrote:
>> I understand the desire here, but a passive indicator is not going to change the status quo if it's shown 42% of the time (or 67% of the time, in Firefox's case).
> Which is presumably why the key question this thread asked is what metrics to use to decide it makes sense to start showing these warnings, and what the thresholds should be.
OK. I think the thresholds should be < 5%, preferably < 1%. What do you think they should be?
On Dec 18, 2014 2:55 PM, "Michael Martinez" <michael....@xenite.org> wrote:
>
> On 12/18/2014 5:29 PM, Chris Palmer wrote:
>>
>> On Thu, Dec 18, 2014 at 2:22 PM, Jeffrey Walton <nolo...@gmail.com> wrote:
>>
>>>> A) i don't think we should remove "This website does not supply
>>>> identity information" -- but maybe replace it with "The identity of this
>>>> site is unconfirmed" or "The true identity of this site is unknown"
>>>
>>> None of them are correct when an interception proxy is involved. All
>>> of them lead to a false sense of security.
>>>
>>> Given the degree to which standard bodies accommodate (promote?)
>>> interception, UA's should probably steer clear of making any
>>> statements like that if accuracy is a goal.
>>
>> Are you talking about if an intercepting proxy is intercepting HTTP
>> traffic, or HTTPS traffic?
>>
>> A MITM needs a certificate issued for the proxied hostname, that is
>> signed by an issuer the client trusts. Some attackers can achieve
>> that, but it's not trivial.
>
>
> No it doesn't need a certificate. A MITM can be executed through a compromised or rogue router. It's simple enough to set up a public network in well-known wifi hotspots and attract unwitting users. Then the HTTPS doesn't protect anyone's transmission from anything as the router forms the other end of the secure connection and initiates its own secure connection with the user's intended destination (either the site they are trying to get to or whatever site the bad guys want them to visit).
>
> Google, Apple, and other large tech companies learned the hard way this year that their use of HTTPS failed to protect users from MITM attacks.
>
I'm sorry, this isn't how HTTPS works and isn't accurate, unless you have explicitly installed the routers cert as a CA cert. If you have, then you're not really an unwitting user (and it is quite hard to do this).
Of course, everything you said is true for insecure HTTP. Which is why this proposal is about reflecting that truth to the user.
No, what I am saying is that you can bypass the certificate for a MITM attack via a new technique that was published earlier this year.
Google, the great champion of HTTPS/SSL, cannot prevent yet more man-in-the-middle attacks against its users: http://www.theregister.co.uk/2014/11/21/hackers_snaffling_smartphone_secrets_with_redirection_attack/
If your company is serious about using HTTPS, it has to do it right (not that it will matter, but don't throw your money away on a bad implementation). http://www.darkreading.com/endpoint/the-week-when-attackers-started-winning-the-war-on-trust-/a/d-id/1317657