Part of the trouble in relying upon the name alone is that many OSes
(maybe all -- at least all the ones that matter for client-side work) allow
localhost to be overridden to mean other things.
Taking it back to the IP layer has the advantage of certainty for the
constrained question of whether or not the target I am attempting to attach
to is "me", from the perspective of the local host.
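To make the IP-layer check concrete, a rough sketch in Python (the helper name
and the every-address-must-be-loopback policy are my own illustration, not
anything a browser actually implements):

```python
import ipaddress
import socket

def is_local_target(host: str) -> bool:
    """True only if every address the host resolves to is loopback.

    This is the "take it back to the IP layer" idea: rather than
    trusting the literal string "localhost", resolve the name and ask
    whether the result actually is this machine's loopback.
    """
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    return all(
        # split("%") drops any IPv6 zone index before parsing
        ipaddress.ip_address(info[4][0].split("%")[0]).is_loopback
        for info in infos
    )
```

A hosts-file entry remapping localhost to some other address would make this
check fail, which is exactly the point of deciding at the IP layer.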
I am aware that work is ongoing to address that. Furthermore, I'm sure
each individual browser could just consider the string "localhost", when
positioned in the URL as a host name, to be treated specially -- achieving
the same goal. I'm not certain whether or not they are doing so as yet.
The trouble the various software developers will have with the Chrome flag,
of course, is that external software is not allowed to directly change
Chrome flags. (I believe I recall that they made that sort of a pain on
purpose. I applaud that.)
This isn't about developer workstations; it's about real-world end users.
If the end user has to go into Chrome settings and flip a flag, the
developer isn't going to tolerate that. (It's not trivially easy, and lots
of real-world end users would get lost, even with good instructions and
screenshots.) Instead, the developer will just install a certificate and
make it trusted themselves. Hopefully, they would do so in a reasonably
sensible way: locally creating a name-constrained CA with proper extended
key usages, marking it trusted with the proper trust bits, and then issuing
a locally created certificate for the needed name, signed by that local CA.
But that's a lot of work (relative to doing it the wrong way).
If I recall correctly, this is actually more of a thing for WebSockets.
If my recollection is correct, Chrome already considers
http://localhost (or is it http://127.0.0.1?) to be a secure origin. Thus,
code which arises from a request to localhost is treated as running within
a secure context, even if it is not wrapped in a TLS connection. (You can
run JavaScript and access APIs in such circumstances on localhost via HTTP
when ordinarily Chrome would require HTTPS to enable that access.)
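My understanding of that classification (roughly the "potentially trustworthy
origin" test from the W3C Secure Contexts draft) can be sketched as follows --
the function name is mine, and this is a simplification of what browsers
actually do:

```python
import ipaddress
from urllib.parse import urlsplit

def is_potentially_trustworthy(url: str) -> bool:
    """Rough sketch of the "potentially trustworthy origin" test.

    Simplified: real browsers handle more schemes and edge cases.
    """
    parts = urlsplit(url)
    if parts.scheme in ("https", "wss", "file"):
        return True
    host = parts.hostname or ""
    # "localhost" names are treated as loopback by definition.
    if host == "localhost" or host.endswith(".localhost"):
        return True
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        return False
```

This is why code loaded from http://localhost can call APIs that would
otherwise demand HTTPS.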
However... that doesn't help the web developer who is trying to access a
WebSocket service on localhost. While Chrome regards localhost as a secure
origin, WebSockets in Chrome (and every other mainstream browser?) require
the connection to be a TLS-wrapped one, regardless of whether or not it
originates from a secure origin.
(I wonder if they even wrote a protocol handler for non-secure web
sockets? At the moment, it's Christmas and I'm being too lazy to look.)
All of this leads to the reasons why I think that if the sub-resource is
local while the main page URI is not, the browser should just decline to
even attempt the connection.
The real trouble I have is that shoddy certificate and key handling are
likely the tip of the iceberg when these various daemons listening on the
local machine come under further scrutiny. The browsers have an
opportunity to foreclose any possibility of those daemons being exposed to
external web resources.
The mechanism I propose would still permit a remote website to redirect the
browser to a full page and resource load on localhost and would allow
localhost sub-resources to load in that context. So it does not entirely
shut down the use case the various developers want, it just requires them
to move more of the UI and logic to the client side.
More importantly, it prevents a remote website from "port scanning"
(whether broadly or for more specific target ports) the local host for
anything stupid enough to answer with an Access-Control-Allow-Origin: *
header. (Far too often, you see, the actual marketing-name website origin
from which the legitimate requests will arise is not yet decided or known
at development time, so the developer cannot incorporate it into the CORS
mechanism properly.)
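To illustrate the exposure, here is a small self-contained Python sketch of
such a careless daemon (the class name and response body are mine). Because of
the wildcard header, any website's JavaScript could read this response
cross-origin; the daemon has opted itself out of the same-origin protection:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class PromiscuousHandler(BaseHTTPRequestHandler):
    """Stand-in for a careless local daemon: it answers any origin."""

    def do_GET(self):
        self.send_response(200)
        # The wildcard below lets *any* website read this response.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(b"hello from a local daemon")

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port on loopback and serve in the background.
server = HTTPServer(("127.0.0.1", 0), PromiscuousHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

with urlopen(f"http://127.0.0.1:{port}/") as resp:
    acao = resp.headers.get("Access-Control-Allow-Origin")
    body = resp.read()
server.shutdown()

print(acao)  # prints: *
```

A remote page need only iterate fetch() attempts over likely ports to find
every daemon on the machine that answers this way.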
Thanks,
Matt Hardeman
On Mon, Dec 25, 2017 at 1:04 PM, Adrian R. via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:
> Matthew Hardeman wrote:
> > 6. Thus, the direction this goes is that the developer creates a
> self-signed cert and imports it into the trust store. The developer may do
> this on the software host, but historically is more likely to just create
> one and package it just like they did with the trusted ones before. Only
> now, the developer has to annoyingly ask for admin permission to install
> the certificate to the trust store. All because they want to be able to
> run web-sockets or HTTP(s) to the local host at the command of the browser,
> as directed by a remote web site. Mere revocation of the trusted
> certificates is not sufficient to stop the bad practices which will arise
> (and have already arisen) in response to revoked certificates.
> >
> > 7. My proposal would almost certainly halt the interest in trusted
> certificates which refer back to the local endpoint -- particularly for
> shared certs/keys.
> >
> > Thanks,
> >
> > Matt Hardeman
>
> In the case of "localhost" there's even no need to import the certificate
> to the certificate store: browsers can be told to automatically skip
> certificate validation for 'localhost'.
>
> Chrome is one of the browsers that implemented this validation bypass for
> localhost, you need only to set a flag in settings to enable it and you
> don't need to mess with the certificate store after that:
> chrome://flags/#allow-insecure-localhost
> *Allow invalid certificates for resources loaded from localhost.*
> Allows requests to localhost over HTTPS even when an invalid certificate
> is presented. Mac, Windows, Linux, Chrome OS, Android
>
> Also, these restrictions should fit in nicely with the upcoming standard
>
https://tools.ietf.org/html/draft-ietf-dnsop-let-localhost-be-localhost-02
>
> ~~~~
> Adrian R.
>
https://lists.mozilla.org/listinfo/dev-security-policy
>