
When are public applications embedding certificates pointing to 127.0.0.1 OK?


annie nguyen

Jun 20, 2017, 3:15:57 PM
to dev-secur...@lists.mozilla.org
Hi!

I'm not sure if this is the correct place to ask (I don't know where
else I would ask). I'm so sorry if this message is unwanted.

Earlier this week, a certificate for a domain resolving to 127.0.0.1,
embedded in a Cisco application, was revoked because its key was deemed
to have been compromised.

Dropbox, GitHub, Spotify and Discord (among others) have done the same
thing for years: they embed SSL certificates and private keys into their
applications so that, for example, open.spotify.com can talk to a local
instance of Spotify (which must be served over https because
open.spotify.com is also delivered over https).

This has happened for years, and these applications have certificates
issued by DigiCert and Comodo all pointing to 127.0.0.1 whose private
keys are trivially retrievable, since they're embedded in publicly
distributed binaries.

- GitHub: ghconduit.com
- Discord: discordapp.io
- Dropbox: www.dropboxlocalhost.com
- Spotify: *.spotilocal.com

Here is Spotify's, for example:
https://gist.github.com/venoms/d2d558b1da2794b9be6f57c5e81334f0
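
Checking a pairing like this is straightforward. Here is a minimal
sketch in Go - the file names cert.pem and key.pem are placeholders,
and it assumes a PKCS#1-encoded RSA key - that tests whether a private
key matches a certificate's public key:

package main

import (
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

// mustDecodePEM reads a file and returns its first PEM block.
func mustDecodePEM(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("%s: no PEM data found", path)
	}
	return block
}

func main() {
	cert, err := x509.ParseCertificate(mustDecodePEM("cert.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Assumes an "RSA PRIVATE KEY" (PKCS#1) block; a "PRIVATE KEY"
	// (PKCS#8) block would need x509.ParsePKCS8PrivateKey instead.
	key, err := x509.ParsePKCS1PrivateKey(mustDecodePEM("key.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	pub, ok := cert.PublicKey.(*rsa.PublicKey)
	if !ok {
		log.Fatal("certificate public key is not RSA")
	}
	// The pair matches if and only if the public halves are identical.
	fmt.Println("private key matches certificate:", key.PublicKey.Equal(pub))
}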

----

What I want to know is: how does this differ from Cisco's situation? Why
was Cisco's key revoked and considered compromised, while these have been
known about and deemed acceptable for years - what makes the situation
different?

It's been an ongoing question for me, since the use case (as a software
developer) is quite real: if you serve a site over HTTPS and it needs to
communicate with a local client application, then you need this (or you
need to manage your own CA and ask every person to install a
certificate on all their devices).

Thank you so much,
Annie

Ryan Sleevi

Jun 20, 2017, 4:14:34 PM
to annie nguyen, dev-secur...@lists.mozilla.org
Previous certificates for GitHub and Dropbox have been revoked for this
reason.

If this problem has been reintroduced, they similarly need to be revoked.

Rob Stradling

Jun 20, 2017, 4:24:36 PM
to annie nguyen, dev-secur...@lists.mozilla.org, rev...@digicert.com
[CC'ing rev...@digicert.com, as per
https://ccadb-public.secure.force.com/mozillacommunications/CACommResponsesOnlyReport?CommunicationId=a05o000003WrzBC&QuestionId=Q00028]

Annie,

"but these have been known about and deemed acceptable for years"

Known about by whom? Deemed acceptable by whom? Until the CA becomes
aware of a key compromise, the CA will not know that the corresponding
certificate(s) needs to be revoked.

Thanks for providing the Spotify example. I've just found the
corresponding certificate (issued by DigiCert) and submitted it to some
CT logs. It's not yet revoked:
https://crt.sh/?id=158082729

https://gist.github.com/venoms/d2d558b1da2794b9be6f57c5e81334f0 does
appear to be the corresponding private key.
--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online
Office Tel: +44.(0)1274.730505
Office Fax: +44.(0)1274.730909
www.comodo.com

COMODO CA Limited, Registered in England No. 04058690
Registered Office:
3rd Floor, 26 Office Village, Exchange Quay,
Trafford Road, Salford, Manchester M5 3EQ


Koen Rouwhorst

Jun 20, 2017, 4:58:55 PM
to dev-secur...@lists.mozilla.org
For your information: I reported this issue to Spotify on Monday
(yesterday) through their official vulnerability disclosure channel
(HackerOne). The (not-yet-public) issue was assigned ID 241222.

In the report I included all the necessary technical details,
including citations of the relevant sections of the Baseline
Requirements and of DigiCert's policies. The report was acknowledged
and has been escalated to Spotify's security team for review.


Matthew Hardeman

Jun 20, 2017, 9:12:35 PM
to mozilla-dev-s...@lists.mozilla.org
On Tuesday, June 20, 2017 at 2:15:57 PM UTC-5, annie nguyen wrote:

> Dropbox, GitHub, Spotify and Discord (among others) have done the same
> thing for years: they embed SSL certificates and private keys into their
> applications so that, for example, open.spotify.com can talk to a local
> instance of Spotify (which must be served over https because
> open.spotify.com is also delivered over https).
>
> This has happened for years, and these applications have certificates
> issued by DigiCert and Comodo all pointing to 127.0.0.1 whose private
> keys are trivially retrievable, since they're embedded in publicly
> distributed binaries.
>

Really?!? This is ridiculous.


> What I want to know is: how does this differ from Cisco's situation? Why
> was Cisco's key revoked and considered compromised, while these have been
> known about and deemed acceptable for years - what makes the situation
> different?

That situation is not different from the Cisco situation and should yield the same result.

> It's been an ongoing question for me, since the use case (as a software
> developer) is quite real: if you serve a site over HTTPS and it needs to
> communicate with a local client application, then you need this (or you
> need to manage your own CA and ask every person to install a
> certificate on all their devices).

There are numerous security reasons for this, which several other people here are better placed than I am to illuminate. I'm a software developer myself (in the real-time communications space, particularly). I am not naive to the many great uses of WebSockets and the like. I have to admit, however, that never once have I considered shipping software that runs in the background with an open server socket, waiting for any old caller from localhost to interact with it.

The major browsers already consider localhost to be a secure context automatically, even without https. In this case, however, they don't seem to extend that treatment. I have a theory as to why....

Maybe they think it is ridiculous that an arbitrary website "needs" to interact with locally installed software via WebSocket (or any other manner, short of those which require a great deal of explicit end-user interaction). It is not beyond imagination that they even regard the mere fact that people believe they "need" such interaction as ridiculous.

Perhaps they've stopped to think, "Well, that would only work if our software, or some part of it, is running on the visitor's system all the time." That kind of thing, in turn, encourages developers to write auto-start software that runs in the background from system startup and just sits there waiting for the user to load your website. That wastes system resources (and probably an unconscionable amount of energy worldwide).

Perhaps they are concerned that if the local software "needs" interaction from a browser UI served up from an actual web server elsewhere on the internet, then the software may well be written by people who are not well versed in the various mechanisms of security exploits in networked environments. Those are just the kinds of developers you do not want writing code that opens and listens for connections on server sockets. As a minor example, I do not believe that cisco.com is on the PSL (Public Suffix List). This means that if other cisco.com web sites use domain-wide cookies, those cookies are available to that software running on the computer. Moreover, having that key and an ability to manipulate a computer's DNS queries might allow a third party to perpetrate a targeted attack and capture any cisco.com site-wide cookies.

I personally am of the position that visiting a website in a browser should never privilege that website with direct interaction to other software on my computer without some sort of explicit extension / plugin / bridge technology that I had numerous obnoxious warnings to overcome to get installed.

Why on earth would a visit to spotify.com ever need to interact with the Spotify application on my computer? Can you explain what they do with that?

More broadly, the Google people provide Chrome Native Messaging for scenarios where trusted sources (like a Chrome extension) can communicate securely with a local application which has opted into the arrangement. Limiting access to Chrome extensions means that the user must install your extension before a site can engage via that conduit.

A brief glance at the various chromium bugs involved in locking down access to WebSocket when referenced from a secure origin shows that the Chrome people definitely understood the use case and did not care that it would break things like this.

I have no affiliation with any browser team, but I speculate, based on their actions and their commentary, that they _meant_ to break this use case: they seem to regard the use case itself as inappropriate and to be actively breaking attempts to circumvent that.

I strongly support that. I see no reason that a web site needs to use my browser as a conduit to talk to a non-browser software element on my computer. A visit to spotify.com certainly does not imply that I want Spotify to play with my volume settings. There is no planet where the defaults should allow a mere visit to spotify.com to determine whether I have the Spotify application installed or running, or to identify which instance of the Spotify application I correspond with. Not without specially granted privilege.

I have an idea for the browser authors to contemplate in their statistics gathering:

I propose a new metric be produced by the browser and "shipped home" for analysis. Specifically, any time an https connection is properly validated against a certificate which descends from the publicly trusted roots (no need to care about anything in the enterprise/administrative roots category) AND the connection is ultimately made to localhost (whether by hostname, IPv4 loopback, IPv6 loopback, etc.), capture the set of SANs in the certificate against which the connection was validated. Collect that data along with anonymized source-of-submission data and certificate issuer data.

When any single SAN on a certificate descendant of a publicly trusted hierarchy is utilized by too many sources to plausibly be represented by a single user, report that data to the issuer as a strong statistical showing that the key has been leaked. Demand revocation.
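
To make the aggregation side concrete, here is a minimal sketch in Go. Everything in it is hypothetical - the type, the anonymized-source strings, and the threshold - and it reflects no real browser telemetry pipeline:

package main

import "fmt"

// sanSightings maps each SAN to the set of anonymized submitters that
// reported a publicly trusted, validated connection to localhost using it.
type sanSightings map[string]map[string]struct{}

func (s sanSightings) record(san, anonSource string) {
	if s[san] == nil {
		s[san] = make(map[string]struct{})
	}
	s[san][anonSource] = struct{}{}
}

// flagged returns the SANs seen from more distinct sources than a single
// user could plausibly account for - a strong statistical showing that
// the private key has leaked.
func (s sanSightings) flagged(threshold int) []string {
	var out []string
	for san, sources := range s {
		if len(sources) > threshold {
			out = append(out, san)
		}
	}
	return out
}

func main() {
	s := make(sanSightings)
	s.record("*.spotilocal.com", "source-a")
	s.record("*.spotilocal.com", "source-b")
	s.record("*.spotilocal.com", "source-c")
	s.record("dev.example.test", "source-a")
	fmt.Println("report to issuer:", s.flagged(2)) // [*.spotilocal.com]
}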

Just my thoughts on the matter,

Matt Hardeman


jacob.hoff...@gmail.com

Jun 21, 2017, 2:43:18 AM
to mozilla-dev-s...@lists.mozilla.org
> It's been an ongoing question for me, since the use case (as a software
> developer) is quite real: if you serve a site over HTTPS and it needs to
> communicate with a local client application, then you need this (or you
> need to manage your own CA and ask every person to install a
> certificate on all their devices).

I think it's both safe and reasonable to talk to localhost over HTTP rather than HTTPS, because any party that can intercept communications to localhost presumably has nearly full control of your machine anyhow.

There's the question of mixed content blocking: If you have an HTTPS host URL, can you embed or otherwise communicate with a local HTTP URL? AFAICT both Chrome and Firefox will allow that: https://chromium.googlesource.com/chromium/src.git/+/130ee686fa00b617bfc001ceb3bb49782da2cb4e and https://bugzilla.mozilla.org/show_bug.cgi?id=903966. I haven't checked other browsers. Note that you have to use "127.0.0.1" rather than "localhost." See https://tools.ietf.org/html/draft-west-let-localhost-be-localhost-03 for why.

So I think the answer to your underlying question is: Use HTTP on localhost instead of a certificate with a publicly resolvable name and a compromised private key. The latter is actually very risky because a MitM attacker can change the resolution of the public name to something other than 127.0.0.1, and because the private key is compromised, the attacker can also successfully complete a TLS handshake with a valid certificate. So the technique under discussion here actually makes web<->local communications less secure, not more.

Also, as a reminder: make sure that the code operating on localhost carefully restricts which web origins are allowed to talk to it, for instance by using CORS with the Access-Control-Allow-Origin header: https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS.
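
For illustration, here is a minimal sketch of such an allowlist on the localhost side, in Go (the allowed origin and the port are placeholders):

package main

import (
	"fmt"
	"log"
	"net/http"
)

// Only this web origin may read responses from the local daemon.
// Placeholder - substitute the site that legitimately pairs with
// your application.
const allowedOrigin = "https://app.example.com"

func status(w http.ResponseWriter, r *http.Request) {
	if r.Header.Get("Origin") != allowedOrigin {
		// Unknown or missing Origin: refuse, and send no CORS headers,
		// so conforming browsers block cross-origin reads anyway.
		http.Error(w, "forbidden", http.StatusForbidden)
		return
	}
	w.Header().Set("Access-Control-Allow-Origin", allowedOrigin)
	w.Header().Set("Vary", "Origin")
	fmt.Fprintln(w, `{"status":"ok"}`)
}

func main() {
	http.HandleFunc("/status", status)
	// Bind to the loopback address only - never 0.0.0.0.
	log.Fatal(http.ListenAndServe("127.0.0.1:8345", nil))
}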

Matthew Hardeman

Jun 21, 2017, 5:32:25 AM
to mozilla-dev-s...@lists.mozilla.org
I believe the underlying issue for many of these cases pertains to initiating a connection to a WebSocket running on some port on 127.0.0.1 as a sub-resource of an external web page served up from an external public web server via https.

I believe that presently both Firefox and Chrome prevent that from working, rejecting a non-secure ws:// URL as mixed content.

Ryan Sleevi

Jun 21, 2017, 5:59:01 AM
to Matthew Hardeman, mozilla-dev-security-policy
On Wed, Jun 21, 2017 at 5:32 AM, Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:
>
> I believe the underlying issue for many of these cases pertains to
> initiating a connection to a WebSocket running on some port on 127.0.0.1 as
> a sub-resource of an external web page served up from an external public
> web server via https.
>
> I believe that presently both Firefox and Chrome prevent that from
> working, rejecting a non-secure ws:// URL as mixed content.


There are several distinct issues:
127.0.0.0/8 (and the associated IPv6 reservation, ::1/128)
"localhost" (as a single host)
"localhost" (as a TLD)

The issues with localhost are (briefly) caught in
https://tools.ietf.org/html/draft-west-let-localhost-be-localhost - there
is a degree of uncertainty with ensuring that such resolution does not go
over the network. This problem also applies to these services using custom
domains that resolve to 127.0.0.1 - the use of a publicly resolvable domain
(which MAY, surprisingly, include "localhost") means that a network
attacker can use such a certificate to intercept and compromise users, even
if it's not 'intended' to be. See
https://w3c.github.io/webappsec-secure-contexts/#localhost

127.0.0.0/8 is a bit more special - that's captured in
https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy

Andrew Meyer

Jun 21, 2017, 11:43:49 AM
to mozilla.dev.s...@googlegroups.com, Matthew Hardeman, mozilla-dev-security-policy
Does anyone have an idea of how good browser support is for the W3C Secure
Contexts standard? Could it be that vendors are abusing certificates in
this way in order to get around communications with loopback addresses
being blocked as insecure mixed content by non-conforming browsers?

Matthew Hardeman

Jun 21, 2017, 12:51:27 PM
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, June 21, 2017 at 4:59:01 AM UTC-5, Ryan Sleevi wrote:

>
> There are several distinct issues:
> 127.0.0.0/8 (and the associated IPv6 reservation, ::1/128)
> "localhost" (as a single host)
> "localhost" (as a TLD)
>
> The issues with localhost are (briefly) caught in
> https://tools.ietf.org/html/draft-west-let-localhost-be-localhost - there
> is a degree of uncertainty with ensuring that such resolution does not go
> over the network. This problem also applies to these services using custom
> domains that resolve to 127.0.0.1 - the use of a publicly resolvable domain
> (which MAY, surprisingly, include "localhost") means that a network
> attacker can use such a certificate to intercept and compromise users, even
> if it's not 'intended' to be. See
> https://w3c.github.io/webappsec-secure-contexts/#localhost
>
> 127.0.0.0/8 is a bit more special - that's captured in
> https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy

I agree in full that there are several issues with localhost, the IP forms, the loopback subnets, etc.

Moreover, within my own mind, I simplify these to a single succinct desire:

Break that entirely. There is never a time when I want a page served from an external server to successfully reference anything on my own device or on any directly connected subnet of a non-public nature. (For IPv6, maybe we say that no external resource should successfully reference or access ANY of my connected /64s.)

My comments above were merely to point out why people are acquiring these certificates on public domains that resolve to 127.0.0.1.

With the exception of the way Plex is/has been doing it (a unique certificate acquired server-side via API for each system, assigning each system both its own certificate and its own unique loopback FQDN), these all involve abuses where a private key for a publicly valid certificate is shared with the end-user systems.

If for no other reason than that it normalizes and mainstreams an improper PKI practice, those certificates should be revoked upon discovery. (It's called a private key for a reason, etc., etc.) Obviously there are good causes above and beyond that as well.

I really am interested in seeing mechanisms to achieve this "necessary use case" murdered. That the web browser with no prompt and no special permission may interact in a non-obvious way with any other software or equipment on my computer or network is far from ideal.

The use case is also a nasty kludge. Why listen with a WebSocket on loopback? Why not just have the application build a client-side WebSocket connection back to the server side, where the server can then push any administrative commands securely down to that application? I understand that places more burden server-side, but it also provides a much better understood (by the end users and their IT admins) flow of network communications.
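
To illustrate the shape of that alternative, here is a minimal sketch in Go using the gorilla/websocket package (the control-server URL is a placeholder, and the command handling is elided):

package main

import (
	"log"

	"github.com/gorilla/websocket"
)

func main() {
	// The local application dials OUT to the vendor's server; the server
	// pushes commands back down this same connection. There is no
	// listening socket on the user's machine, and the outbound flow is
	// visible to network administrators.
	conn, _, err := websocket.DefaultDialer.Dial("wss://control.example.com/agent", nil)
	if err != nil {
		log.Fatal("dial:", err)
	}
	defer conn.Close()

	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			log.Fatal("read:", err)
		}
		log.Printf("command from server: %s", msg)
		// Dispatch to the application's command handler here.
	}
}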

Frankly, if one is incorporating network stack code in their locally running applications and yet want to use the browser as a UI for controlling that application, there's a simple way of still having a browser UI and controlling that application, even pulling in external resources to inform the UI (if you must) -- embed the browser. Numerous quite embeddable engines exist and are easily used this way. In those environments, the application developer can write their own rules to avoid the security measures which get in their way while still only creating a security train-wreck within the context and runtime window of their own application.

Perhaps I come across as somewhat vehement in my disdain for the purportedly necessary use case that people are breaking this rule to achieve, but that's been informed by my experience over the years of watching people who have no business doing so make security nightmares out of "clever" network hacks. Among the favorites I've seen implemented was a functioning TCP/IP-over-DNS system, built to provide actual (messy, fragmented, slow, but very functional) full internet access at a site with a captive portal that would still permit certain DNS queries to be answered.

Short of intentional circumvention of network controls, the kinds of things these local WebSocket hacks are trying to achieve are indistinct from - and, I believe, cannot be technologically differentiated from - the very techniques one would use to have a browser act as a willing slave to a network penetration effort.

Here I favor public humiliation. Murder every mechanism for achieving the direct goal of "serving the use case for a need for externally sourced material loaded in the browser to communicate with a running local application". Issue press releases naming and shaming every attempt along the way.

Goodness. I'll stop now before I become really vulgar.

andre...@gmail.com

Jun 21, 2017, 1:41:53 PM
to mozilla-dev-s...@lists.mozilla.org
I feel like this is getting sort of off-topic. Web pages can communicate directly with applications on the local machine regardless of whether they abuse certificates in this way or not. (Such as, for example, by using plain old HTTP.) The question of whether or not they should be able to do that is a separate topic IMO.

Certificate abuse aside, I disagree with your assertion that this is inherently a bad thing. Even if browsers were to block web apps from communicating with localhost, they could still achieve pretty much the same thing by using an external server as a proxy between the web app and the locally installed application. The only major difference is that with that method they'd be using unnecessary internet bandwidth, introducing a few dozen extra milliseconds of latency, and would be unable to communicate offline - all downsides IMO. If you _really_ want to try blocking this anyway, you could always use a browser extension.

Matthew Hardeman

Jun 21, 2017, 2:35:13 PM
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, June 21, 2017 at 12:41:53 PM UTC-5, andre...@gmail.com wrote:

> I feel like this is getting sort of off-topic. Web pages can communicate directly with applications on the local machine regardless of whether they abuse certificates in this way or not. (Such as, for example, by using plain old HTTP.) The question of whether or not they should be able to do that is a separate topic IMO.
>
> Certificate abuse aside, I disagree with your assertion that this is inherently a bad thing. Even if browsers were to block web apps from communicating with localhost, they could still achieve pretty much the same thing by using an external server as a proxy between the web app and the locally installed application. The only major difference is that with that method they'd be using unnecessary internet bandwidth, introducing a few dozen extra milliseconds of latency, and would be unable to communicate offline - all downsides IMO. If you _really_ want to try blocking this anyway, you could always use a browser extension.


Certificate abuse aside, I suppose I have diverged from the key topic. I did so with the best intentions and in an attempt to serve to respond directly to the questions raised by the initiator of this thread as to the use case and how best to achieve the use case indicated.

Regarding localhost access, you are presently incorrect. The browsers do not allow access to localhost via insecure WebSocket if the page loads from a secure context. (Chrome and Firefox, at least, do not permit this at present, I believe.) I do understand that there is some question as to whether they may change that.

As for whether or not access to localhost from an externally sourced web site is "inherently a bad thing". I understand that there are downsides to proxying via the server in the middle in order to communicate back and forth with the locally installed application. Having said that, there is a serious advantage:

From a security perspective, having the application make and maintain a connection or connections out to the server that will act as the intermediary between the website and the application allows the network administrator to identify that there is an application installed that is being manipulated and controlled by an outside infrastructure. This allows for visibility into the fact that it exists and allows for appropriate mitigation measures if any are needed.

For a website to silently contact a server application running on the loopback and influence that software while doing so in a manner invisible to the network infrastructure layer is begging to be abused as an extremely covert command and control architecture when the right poorly written software application comes along.

uri...@gmail.com

Jun 21, 2017, 2:41:39 PM
to mozilla-dev-s...@lists.mozilla.org
Apparently, in at least one case, the certificate was issued directly(!) to localhost by Symantec.

https://news.ycombinator.com/item?id=14598262

subject=/C=US/ST=Florida/L=Melbourne/O=AuthenTec/OU=Terms of use at www.verisign.com/rpa (c)05/CN=localhost
issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3

Is this a known incident?

Jonathan Rudenberg

Jun 21, 2017, 3:02:51 PM
to uri...@gmail.com, mozilla-dev-s...@lists.mozilla.org

> On Jun 21, 2017, at 14:41, urijah--- via dev-security-policy <dev-secur...@lists.mozilla.org> wrote:
>
> Apparently, in at least one case, the certificate was issued directly(!) to localhost by Symantec.
>
> https://news.ycombinator.com/item?id=14598262
>
> subject=/C=US/ST=Florida/L=Melbourne/O=AuthenTec/OU=Terms of use at www.verisign.com/rpa (c)05/CN=localhost
> issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
>
> Is this a known incident?

Here is the (since expired) certificate: https://crt.sh/?q=07C4AD287B850CAA3DD89656937DB1217067407AA8504A10382A8AD3838D153F

Santhan Raj

Jun 21, 2017, 3:22:22 PM
to mozilla-dev-s...@lists.mozilla.org
As bad as it may sound, issuing certs for internal server names from a publicly trusted chain was allowed until Oct 2015 (per the BRs).

Jeremy Rowley

Jun 21, 2017, 3:28:51 PM
to Santhan Raj, mozilla-dev-s...@lists.mozilla.org
And a common practice. Old Microsoft documentation used to recommend it.

> On Jun 21, 2017, at 12:22 PM, Santhan Raj via dev-security-policy <dev-secur...@lists.mozilla.org> wrote:
>
> On Wednesday, June 21, 2017 at 12:02:51 PM UTC-7, Jonathan Rudenberg wrote:
> As bad as it may sound, issuing certs for internal server names from a publicly trusted chain was allowed until Oct 2015 (per the BRs).

andre...@gmail.com

Jun 21, 2017, 4:01:30 PM
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, June 21, 2017 at 1:35:13 PM UTC-5, Matthew Hardeman wrote:
> Regarding localhost access, you are presently incorrect. The browsers do not allow access to localhost via insecure websocket if the page loads from a secure context. (Chrome and Firefox at least, I believe do not permit this presently.) I do understand that there is some question as to whether they may change that.

Right, I wasn't talking about WebSockets in particular, but about any possible form of direct communication between the web app and desktop application. That's why I pointed to plain old HTTP requests as an example.

> As for whether or not access to localhost from an externally sourced web site is "inherently a bad thing". I understand that there are downsides to proxying via the server in the middle in order to communicate back and forth with the locally installed application. Having said that, there is a serious advantage:
>
> From a security perspective, having the application make and maintain a connection or connections out to the server that will act as the intermediary between the website and the application allows the network administrator to identify that there is an application installed that is being manipulated and controlled by an outside infrastructure. This allows for visibility into the fact that it exists and allows for appropriate mitigation measures if any are needed.
>
> For a website to silently contact a server application running on the loopback and influence that software while doing so in a manner invisible to the network infrastructure layer is begging to be abused as an extremely covert command and control architecture when the right poorly written software application comes along.

I guess I don't completely understand what your threat model here is. Are you saying you're worried about users installing insecure applications that allow remote code execution for any process that can send HTTP requests to localhost?

Or are you saying you're concerned about malware already installed on the user's computer using this mechanism for command and control?

Both of those are valid concerns. I'm not really sure whether they're significant enough though to break functionality over, since they both require the user to already be compromised in some way before they're of any use to attackers. Though perhaps requiring a permissions prompt of some kind before allowing requests to localhost may be worth considering...

As I said though, this is kinda straying off topic. If the ability of web apps to communicate with localhost is something that concerns you, consider starting a new topic on this mailing list so we can discuss that in detail without interfering with the discussion regarding TLS certificates here.

Andrew Ayer

Jun 21, 2017, 4:39:57 PM
to dev-security-policy, rev...@digicert.com
On Tue, 20 Jun 2017 21:23:51 +0100, Rob Stradling via
dev-security-policy wrote:
> "but these have been known about and deemed acceptable for years"
>
> Known about by whom? Deemed acceptable by whom? Until the CA
> becomes aware of a key compromise, the CA will not know that the
> corresponding certificate(s) needs to be revoked.
>
> Thanks for providing the Spotify example. I've just found the
> corresponding certificate (issued by DigiCert) and submitted it to
> some CT logs. It's not yet revoked:
> https://crt.sh/?id=158082729
>
> https://gist.github.com/venoms/d2d558b1da2794b9be6f57c5e81334f0 does
> appear to be the corresponding private key.

24 hours later, this certificate is still not revoked, so DigiCert is
now in violation of section 4.9.1.1 of the BRs.

Regards,
Andrew

Amus

Jun 21, 2017, 5:33:19 PM
to mozilla-dev-s...@lists.mozilla.org
Looking into this, we revoked the cert on our end at 2:20 MST (within 24 hours after the certificate problem report was processed), but we distribute all of our OCSP responses through CDNs. Distribution through the CDN took a little over an hour. I couldn't find a definition of "revoked" in the BRs, so I assume it's when we start distributing revoked responses, not when the CDN updates? Sorry for the confusion there.

Jeremy

Jakob Bohm

Jun 22, 2017, 7:29:17 AM
to mozilla-dev-s...@lists.mozilla.org
The most obvious concern to me is random web servers, possibly through
hidden web elements (such as script tags) gaining access to anything
outside the Browser's sandbox without clear and separate user
action. For example, if I visit a site that carries an advertisement
for Spotify, I don't want that site to have any access to my locally
running Spotify software, its state, or even its existence.

The most obvious way to have a local application be managed from a local
standard web browser while also using resources obtained from a central
application web site is for the local application to proxy those
resources from the web site. Thus the Browser will exclusively be
talking to a localhost URL, probably over plain HTTP or over HTTPS with
some locally generated localhost certificate, which may or may not be
based on existing machine certificate facilities in some system
configurations.

In other words, the user might open http://localhost:45678 to see the
App user interface, consisting of local elements and some elements which
the app backend might dynamically download from the vendor before
serving them within the http://localhost:45678/ URL namespace.

This greatly reduces the need for any mixing of origins in the Browser,
and also removes the need to have publicly trusted certificates revealed
to such local applications.
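
A minimal sketch of that arrangement in Go (the vendor origin is a
placeholder; the port matches the example above):

package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder for the central application web site. The local app
	// fetches remote assets itself, so the browser only ever talks to
	// the localhost origin.
	vendor, err := url.Parse("https://app-assets.example.com")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(vendor)

	mux := http.NewServeMux()
	// Locally generated UI elements.
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "<html><body>App UI served from localhost</body></html>")
	})
	// Vendor resources, re-served under the localhost URL namespace.
	mux.Handle("/vendor/", http.StripPrefix("/vendor", proxy))

	log.Fatal(http.ListenAndServe("127.0.0.1:45678", mux))
}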

For some truly complex scenarios, more complex techniques are needed to
avoid distributing private keys, but that's not needed for the cases
discussed here.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

andre...@gmail.com

Jun 22, 2017, 10:22:44 AM
to mozilla-dev-s...@lists.mozilla.org
On Thursday, June 22, 2017 at 6:29:17 AM UTC-5, Jakob Bohm wrote:
> The most obvious concern to me is random web servers, possibly through
> hidden web elements (such as script tags) gaining access to anything
> outside the Browser's sandbox without clear and separate user
> action. For example, if I visit a site that carries an advertisement
> for Spotify, I don't want that site to have any access to my locally
> running Spotify software, its state, or even its existence.


That's a good point. Even if you might be able to trust the software running on your computer not to reveal sensitive information or accept commands from random, unauthenticated sites, it's still a potential privacy concern if those sites can detect what software you're running in the first place (by, for example, checking to see if an image known to be hosted by that program successfully loads).

A properly designed application could take steps to mitigate this problem (such as checking the Referer header before serving resources like images to an external site), but not all such applications may be written with enough sensitivity to privacy issues to actually implement such features.
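
For instance, here is a rough sketch of such a check in Go (the port is arbitrary, and the check is best-effort only, since browsers may omit or suppress the Referer header):

package main

import (
	"log"
	"net/http"
	"strings"
)

// Serve local assets only to requests carrying no Referer (direct use)
// or a Referer from the application's own localhost origin.
func asset(w http.ResponseWriter, r *http.Request) {
	ref := r.Header.Get("Referer")
	if ref != "" && !strings.HasPrefix(ref, "http://127.0.0.1:45678/") {
		// Respond exactly as if the resource did not exist, so an
		// external page cannot infer that the program is running.
		http.NotFound(w, r)
		return
	}
	w.Header().Set("Content-Type", "image/png")
	w.Write([]byte("...image bytes...")) // placeholder payload
}

func main() {
	http.HandleFunc("/logo.png", asset)
	log.Fatal(http.ListenAndServe("127.0.0.1:45678", nil))
}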