>What is the correct way for them to achieve what they are trying to do?
I'm not sure if there is a correct way, just a least awful way. The problem
is that the browser vendors have decreed that you can only talk SSL if you use
a certificate from a commercial CA, which obviously isn't possible in this
case, or in numerous other cases (well, there are many commercial CAs who will
happily sell you a cert for "localhost", but that's another story).
So you can hack something with "the cloud", but now your label printing
software relies on external internet access to work, and you're sending
potentially sensitive data that never actually needs to go off-site, off-site
for no good reason.
Perhaps the least awful way is to install a custom root CA cert that only ever
signs one cert, "localhost" (and the CA's private key is held by Dymo, not
hardcoded into the binary). You've got a shared private key for localhost,
but it's less serious than having a universal root CA there.
The problem is really with the browsers, not with Dymo. There's no easy
solution from Dymo's end, so what they've done, assuming they haven't
hardcoded the CA's private key, is probably the least awful workaround to the
problem.
>First, there are non-commercial CAs that are trusted.
By "commercial CAs" I meant external business entities, not an in-house CA
that the key or cert owner controls. Doesn't matter if they charge money or
not, you still need to go to an external organisation to ask permission to
use SSL.
>Second, you've stated "there are many commercial CAs who will happily sell you
>a cert for 'localhost'". To that, I say POC||GTFO,
“An Observatory for the SSLiverse”, Peter Eckersley and Jesse Burns,
presentation at Defcon 18, July 2010,
http://www.eff.org/files/DefconSSLiverse.pdf. That lists *six thousand* certs
issued for localhost from Comodo, Cybertrust, Digicert, Entrust, Equifax,
GlobalSign, GoDaddy, Microsoft, Starfield, Verisign, and many others. Then
there are tens of thousands of certs that other studies have found for
unqualified names, RFC 1918 names, and so on.
(The naming-and-shaming via CT is certainly cutting down on this, but given
the widespread mis-issuance of these certs in the past, are you really
confident that it's not still happening when CT can't see it?).
>Third, I'm happy to inform you there is a correct way. The Secure Contexts
>spec ( https://www.w3.org/TR/secure-contexts/#is-origin-trustworthy )
... available on display in the bottom of a locked filing cabinet stuck in a
disused lavatory with a sign on the door saying "Beware of the Leopard".
Given that a number of vendors have resorted to hardcoding their own root CAs,
Secure Contexts is either not working or there's insufficient awareness of it
for it to be effective (or both). Having just skimmed parts of that lengthy
and complex spec, which I'd never heard of until now, it's pretty hard to see
what this actually gives me, or how it can (according to you) make
connecting to localhost secure. In particular the text "The following
features are at-risk, and may be dropped during the CR period: [...] The
localhost carveout" and "This carveout is 'at risk', as there’s currently only
one implementation" doesn't inspire confidence that it's widely supported or
that it will continue to be supported.
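For reference, here's a rough sketch (in Python, names mine) of what the
spec's "potentially trustworthy origin" check, including the localhost
carveout, boils down to:

```python
import ipaddress
from urllib.parse import urlsplit

def is_potentially_trustworthy(origin: str) -> bool:
    """Rough sketch of the Secure Contexts "potentially trustworthy
    origin" algorithm; not a complete implementation of the spec."""
    parts = urlsplit(origin)
    scheme = parts.scheme.lower()
    host = (parts.hostname or "").lower()

    # Secure (or inherently local) schemes are trustworthy regardless of host.
    if scheme in ("https", "wss", "file"):
        return True

    # The localhost carveout: "localhost" and "*.localhost" names.
    if host == "localhost" or host.endswith(".localhost"):
        return True

    # Loopback address literals (127.0.0.0/8, ::1) are also carved out.
    try:
        if ipaddress.ip_address(host).is_loopback:
            return True
    except ValueError:
        pass  # not an IP literal

    return False
```

So plain http to localhost gets treated as a secure context, but plain http
to anything else on the network doesn't, which is exactly the gap being
discussed here.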
>There are other remarks in your response that are also wrong, but in the
>spirit of only focusing on the most important (to this specific reporter's
>question), I've omitted them.
Please, go ahead. I'm happy to defend them, with references to studies and
whatnot if available.
>Or is your viewpoint that because this happened in the past, one should
>assume that it will forever happen, no matter how much the ecosystem changes -
>including explicitly prohibiting it for years?
Pretty much. See the followup message, which shows it was still happening as
of a few months ago.
>one should assume that it will forever happen, no matter how much the
>ecosystem changes - including explicitly prohibiting it for years?
Buffer overflows, XSS, SQL injection, the list is endless. None of these
security issues have gone away, why would another widespread problem, issuance
of certs that shouldn't have been issued, magically disappear just because
someone says it should? Do you honestly believe we won't see more mis-issued
certs just because the BR says you're not allowed to do it? Just check the
list over any period of time for examples of the ones that someone's actually
noticed; who knows how many have gone unnoticed until someone like Tavis
Ormandy goes looking.
>Quick check: is anything you cite newer than 2010?
See my other reply. I just used the best-known one, which was the first that
came to mind, and shows how widespread the issue was before the naming-and-
shaming cut some of it down.
>Of course, if that doesn’t tickle your fancy, there are other ways that are
>supported that you may not have heard about - for example:
So I've had a quick look at these and unless I've missed something they're
just a means of talking to an app on your local machine. As soon as you go
outside that boundary, e.g. to configure a router or printer on your local
network via a browser, you're back to having to add a new root CA to the cert
store for it to work. Or have I missed something?
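For anyone following along, the local-machine channel in question
(Chrome-style native messaging) is just length-prefixed JSON over
stdin/stdout: per the browser docs, each message is UTF-8 JSON preceded by a
32-bit length in native byte order. A minimal sketch:

```python
import io
import json
import struct

# Native messaging framing: 32-bit unsigned length in native byte
# order ('=I'), followed by that many bytes of UTF-8 JSON.
def write_message(stream, obj) -> None:
    payload = json.dumps(obj).encode("utf-8")
    stream.write(struct.pack("=I", len(payload)))
    stream.write(payload)

def read_message(stream):
    raw_len = stream.read(4)
    if len(raw_len) < 4:
        return None  # channel closed
    (length,) = struct.unpack("=I", raw_len)
    return json.loads(stream.read(length).decode("utf-8"))

# Round trip through an in-memory stream (a real host would use
# sys.stdin.buffer / sys.stdout.buffer).
buf = io.BytesIO()
write_message(buf, {"text": "hello"})
buf.seek(0)
reply = read_message(buf)
```

Note that none of this involves TLS at all, which is the point: it sidesteps
the certificate problem entirely, but only for the local machine.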
>For communicating with other machines, the correct thing to do is to issue a
>unique certificate for each device from a publicly trusted CA. The way Plex
>does this is a good example:
But the Plex solution required DynDNS, partnering with a CA for custom hash-
based wildcard certificates (and for which the CA had to create a new custom
CA cert), and other tricks; I don't think that generalises. In effect this
has given Plex their own in-house CA (by proxy), which is a point solution for
one vendor but not something that any vendor can build into a product.
Anyone from Plex want to comment on how much effort was involved in this? It'd
be interesting to know what was required to negotiate this deal, and how long
it took, just as a reference point for anyone else considering it.
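As I understand the public descriptions of it (all names below are
illustrative, not Plex's actual implementation), the trick looks something
like this: the vendor registers a domain, runs dynamic DNS for it, and gets
wildcard certs of the form *.<hash>.example.direct, so each device ends up
with a unique, publicly resolvable name covered by a browser-trusted cert:

```python
import hashlib

# Hypothetical sketch of hash-based per-device naming under a
# vendor-controlled domain ("example.direct" is a placeholder).
def device_hostname(device_id: str, lan_ip: str) -> str:
    # One wildcard cert covers "*.<hash>.example.direct" for this device.
    digest = hashlib.sha256(device_id.encode("utf-8")).hexdigest()
    # The vendor's dynamic DNS maps the dashed LAN IP back to the
    # device's actual address on the local network.
    return lan_ip.replace(".", "-") + "." + digest + ".example.direct"
```

Which illustrates my point: every piece of this (the domain, the DNS
service, the custom wildcard cert arrangement with the CA) is vendor-run
infrastructure that had to be negotiated specially.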
>I hope you can see how I responded to precisely the problem provided.
You responded to that one specific limited instance. That doesn't work for
anything else where you've got a service that you want to make available over
HTTPS. Native messaging is a hack to get around a problem with browsers, as
soon as you move off the local machine it reappears, which is what I was
pointing out.
Since this is something that keeps cropping up, and from all signs will keep
on cropping up, perhaps the browser vendors could publish some sort of
guide/BCP on how to do it right that everyone could follow. For example:
HTTPS to localhost: Use Native Messaging
HTTPS to device on local network (e.g. RFC 1918): ???
HTTPS to device with non-FQDN: ???
HTTPS to device with static IP address: ???
This would solve... well, at least take a step towards solving the same issue
that keeps coming up again and again. If there's a definitive answer,
developers could refer to that and get it right.
Oh, and saying "you need to negotiate a custom deal with a
commercial/public/whatever-you-want-to-call-it CA" doesn't count as a
solution, it has to be something that's actually practical.
>I similarly suspect you’re unaware of https://wicg.github.io/cors-rfc1918/ in
>which browsers seek to limit or restrict communication to such devices?
A... blog post? Not sure what that is, it's labelled "A Collection of
Interesting Ideas", stashed on Github under the WICG's repository? No, for
some inexplicable reason I seem to have missed that one. Is there a "Beware
of the Leopard" sign somewhere?
It talks a lot about details of CORS, but I'm not sure what it says about
allowing secure HTTPS to devices at RFC 1918 addresses. The doc says "we
propose a mitigation against these kinds of attacks that would require
internal devices to explicitly opt-in to requests from the public internet",
which indicates it's targeted at something altogether different.
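For what it's worth, the address-space partition that doc proposes
("local" / "private" / "public") amounts to something like the following
(my reading of the draft, not normative):

```python
import ipaddress

# Sketch of the cors-rfc1918 address classification: "local" is
# loopback, "private" is RFC 1918 plus link-local, "public" is
# everything else. The proposal is about making requests from a
# less-private space into a more-private one require an opt-in.
PRIVATE_NETS = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",  # RFC 1918
    "169.254.0.0/16",                                  # link-local
)]

def address_space(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    if addr.is_loopback:
        return "local"
    if any(addr in net for net in PRIVATE_NETS):
        return "private"
    return "public"
```

So it's a mechanism for blocking the public internet from poking at your
router, not a mechanism for letting your browser talk HTTPS to it.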
>while also acknowledging that you have not kept up with the state of the
>industry for the past near-decade of improvements or enhancements
If the industry actually publicised some of this stuff rather than posting
articles with names like "A Collection of Interesting Ideas" to GitHub (which
in any case doesn't look like it actually addresses the problem) then I might
have kept up with it a bit more. And as I've already pointed out, given the
number of vendors who are resorting to slipping in their own root CAs and
other tricks, I'm not the only one who's missing all these well-hidden
solutions.
>> HTTPS to device with non-FQDN: ???
>> HTTPS to device with static IP address: ???
>I suspect any answer such as “Don’t do this” or “This is intentionally not
>supported” will be met by you as “impractical”.
Try me. The reason why I ruled out "negotiate a custom deal with a commercial
CA" is that it genuinely doesn't scale: you can't expect 10,000, 50,000, or
100,000 (whatever the number is) device vendors to all cut a special deal with
a commercial/public/whatever CA just to allow a browser to talk to their $30
devices.
It's a simple enough question, so I'll repeat it: a vendor selling some
sort of Internet-connected device that needs to be administered via HTTP (i.e.
using a browser), a printer, router, HVAC unit, whatever you like, wants to
add encryption to the connection. How should they do this for the fairly
common scenarios of:
HTTPS to device on local network (e.g. RFC 1918).
HTTPS to device with non-FQDN.
HTTPS to device with static IP address.
What's the recommended BCP for a vendor to allow browser-based HTTPS access
for these scenarios? I'm genuinely curious. And please publish the
recommendation so others can follow it (not on GitHub labelled "A Collection
of Interesting Ideas").