
DYMO Root CA installed by Label Printing Software


Nicholas Humfrey

Jan 9, 2018, 4:05:16 PM
to dev-secur...@lists.mozilla.org
Hello,

Apologies if this is off-topic but I am not sure where else to query this.

While going through the list of Root Certificate Authorities on my computer, I
was alarmed to discover one I wasn't expecting there, called "DYMO Root CA
(for localhost)". This certificate was installed by the label printing
software I installed for my DYMO Label Printer.

Its intended purpose is to allow web-based tools to send content to the label
printer, to be printed by the local machine. It does this by allowing your web
browser to access a web server running on your local computer.

It appears that they are installing the same Root CA and localhost certificate
on each machine the printer software is installed on. On my Mac it was
installed into the System keychain, as well as the Firefox list of Authorities.

There are screenshots and more details here:
https://github.com/njh/dymo-root-ca-security-risk



What is the correct way for them to achieve what they are trying to do?

Would it be better to use a self-signed localhost certificate (same subject
and issuer), generated individually on each machine it is installed on?

Should 'localhost' / Mixed Content work without a certificate?

Or should they have a printer daemon on the local machine talking back to a
cloud service, that the browser talks to?



Thanks,

nick.

Peter Gutmann

Jan 9, 2018, 4:41:33 PM
to Nicholas Humfrey, dev-security-policy
Nicholas Humfrey via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>What is the correct way for them to achieve what they are trying to do?

I'm not sure if there is a correct way, just a least awful way. The problem
is that the browser vendors have decreed that you can only talk SSL if you use
a certificate from a commercial CA, which obviously isn't possible in this
case, or in numerous other cases (well, there are many commercial CAs who will
happily sell you a cert for "localhost", but that's another story).

So you can hack something with "the cloud", but now your label printing
software relies on external internet access to work, and you're sending
potentially sensitive data that never actually needs to go off-site, off-site
for no good reason.

Perhaps the least awful way is to install a custom root CA cert that only ever
signs one cert, "localhost" (and the CA's private key is held by Dymo, not
hardcoded into the binary). You've got a shared private key for localhost,
but it's less serious than having a universal root CA there.

The problem is really with the browsers, not with Dymo. There's no easy
solution from Dymo's end, so what they've done, assuming they haven't
hardcoded the CA's private key, is probably the least awful workaround to the
problem.

Peter.

Hanno Böck

Jan 9, 2018, 4:46:29 PM
to dev-secur...@lists.mozilla.org, Nicholas Humfrey
Hi,

On Tue, 09 Jan 2018 21:04:34 +0000
Nicholas Humfrey via dev-security-policy
<dev-secur...@lists.mozilla.org> wrote:

> What is the correct way for them to achieve what they are trying to
> do?
>
> Would it be better to use a self-signed localhost certificate (same
> subject and
> issuer), generated individually on each machine it is installed on?

I covered this in detail in the last Bulletproof TLS Newsletter:
https://www.feistyduck.com/bulletproof-tls-newsletter/

Creating a local root on each host individually *with an individual
private key* is kinda okay. The cleaner solution is to connect via http
and the localhost IP (127.0.0.1), which should not throw mixed
content warnings - however not all browsers support that yet.
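[Editor's note: the plain-HTTP-on-loopback approach suggested here can be sketched as below. This is a minimal illustration, not DYMO's actual service; the /print endpoint, the port, and the allowed origin are hypothetical.]

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PrintHandler(BaseHTTPRequestHandler):
    """Accepts a JSON print job via POST and pretends to queue it."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        job = json.loads(self.rfile.read(length))
        # ... hand "job" to the label printer driver here ...
        body = json.dumps({"status": "queued",
                           "label": job.get("label")}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        # Restrict which web origins may call the daemon cross-origin.
        self.send_header("Access-Control-Allow-Origin",
                         "https://labels.example.com")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

def run():
    # Bind to the loopback address only, never 0.0.0.0, so the daemon is
    # unreachable from the rest of the network.
    HTTPServer(("127.0.0.1", 8631), PrintHandler).serve_forever()
```

Because the daemon only ever listens on 127.0.0.1, no certificate (shared or otherwise) needs to be installed; browsers that implement the loopback carve-out will not raise mixed content warnings.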

--
Hanno Böck
https://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: FE73757FA60E4E21B937579FA5880072BBB51E42

Ryan Sleevi

Jan 9, 2018, 4:51:42 PM
to Peter Gutmann, Nicholas Humfrey, dev-security-policy
On Tue, Jan 9, 2018 at 4:40 PM, Peter Gutmann via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Nicholas Humfrey via dev-security-policy
> <dev-security-policy@lists.mozilla.org> writes:
>
> >What is the correct way for them to achieve what they are trying to do?
>
> I'm not sure if there is a correct way, just a least awful way. The problem
> is that the browser vendors have decreed that you can only talk SSL if you
> use a certificate from a commercial CA, which obviously isn't possible in
> this case, or in numerous other cases (well, there are many commercial CAs
> who will happily sell you a cert for "localhost", but that's another story).
>

Hi Peter,

This is factually false on several dimensions, so I think it bears calling
out.

First, there are non-commercial CAs that are trusted. This isn't about
commercial/non-commercial, but a question about whether trusted by default
or not. Obviously, those which are not trusted by default are necessarily
required to do something not default.

Second, you've stated "there are many commercial CAs who will happily sell
you a cert for 'localhost'". To that, I say POC||GTFO, or, less profanely,
please provide examples so CA incident reports can be filed. This is
certainly not true enough to justify 'many', and the CA incident reports
show that the certs we're dealing with today are primarily the result of
CAs that failed to revoke back when they should have, not that they've
continued issuance.

Third, I'm happy to inform you there is a correct way. The Secure Contexts
spec ( https://www.w3.org/TR/secure-contexts/#is-origin-trustworthy )
describes how localhost can be considered apriori trustworthy - that is,
loading http://localhost 'just works' and doesn't trigger mixed content.

There are other remarks in your response that are also wrong, but in the
spirit of only focusing on the most important (to this specific reporter's
question), I've omitted them. Certainly, it's not the minimal security risk
as you state - and messages over the past few weeks in this very Forum
capture that past discussion.

Peter Gutmann

Jan 9, 2018, 5:13:23 PM
to ry...@sleevi.com, Nicholas Humfrey, dev-security-policy
Ryan Sleevi <ry...@sleevi.com> writes:

>First, there are non-commercial CAs that are trusted.

By "commercial CAs" I meant external business entities, not an in-house CA
that the key or cert owner controls. Doesn't matter if they charge money or
not, you still need to go to an external organisation to ask permission to use
encryption.

>Second, you've stated "there are many commercial CAs who will happily sell you
>a cert for 'localhost'". To that, I say POC||GTFO,

“An Observatory for the SSLiverse”, Peter Eckersley and Jesse Burns,
presentation at Defcon 18, July 2010,
http://www.eff.org/files/DefconSSLiverse.pdf. That lists *six thousand* certs
issued for localhost from Comodo, Cybertrust, Digicert, Entrust, Equifax,
GlobalSign, GoDaddy, Microsoft, Starfield, Verisign, and many others. Then
there's tens of thousands of certs that other studies have found for
unqualified names, RFC 1918 names, and so on.

(The naming-and-shaming via CT is certainly cutting down on this, but given
the widespread mis-issuance of these certs in the past, are you really
confident that it's not still happening when CT can't see it?).

>Third, I'm happy to inform you there is a correct way. The Secure Contexts
>spec ( https://www.w3.org/TR/secure-contexts/#is-origin-trustworthy )

... available on display in the bottom of a locked filing cabinet stuck in a
disused lavatory with a sign on the door saying "Beware of the Leopard".

Given that a number of vendors have resorted to hardcoding their own root CAs,
Secure Contexts is either not working or there's insufficient awareness of it
for it to be effective (or both). Having just skimmed parts of that lengthy
and complex spec, which I'd never heard of until now, it's pretty hard to see
what this actually gives me, and that it can (according to you) make
connecting to localhost secure. In particular the text "The following
features are at-risk, and may be dropped during the CR period: [...] The
localhost carveout" and "This carveout is 'at risk', as there’s currently only
one implementation" doesn't inspire confidence either in it being widely
supported or it continuing to be supported.

>There are other remarks in your response that are also wrong, but in the
>spirit of only focusing on the most important (to this specific reporter's
>question), I've omitted them.

Please, go ahead. I'm happy to defend them, with references to studies and
whatnot if available.

Peter.

Ryan Sleevi

Jan 9, 2018, 5:58:45 PM
to Peter Gutmann, Nicholas Humfrey, ry...@sleevi.com, dev-security-policy
On Tue, Jan 9, 2018 at 11:12 PM Peter Gutmann <pgu...@cs.auckland.ac.nz>
wrote:

> Ryan Sleevi <ry...@sleevi.com> writes:
>
> >First, there are non-commercial CAs that are trusted.
>
> By "commercial CAs" I meant external business entities, not an in-house CA
> that the key or cert owner controls. Doesn't matter if they charge money or
> not, you still need to go to an external organisation to ask permission to
> use encryption.
>
> >Second, you've stated "there are many commercial CAs who will happily sell
> >you a cert for 'localhost'". To that, I say POC||GTFO,
>
> “An Observatory for the SSLiverse”, Peter Eckersley and Jesse Burns,
> presentation at Defcon 18, July 2010,
> http://www.eff.org/files/DefconSSLiverse.pdf. That lists *six thousand*
> certs issued for localhost from Comodo, Cybertrust, Digicert, Entrust,
> Equifax, GlobalSign, GoDaddy, Microsoft, Starfield, Verisign, and many
> others. Then there's tens of thousands of certs that other studies have
> found for unqualified names, RFC 1918 names, and so on.
>
> (The naming-and-shaming via CT is certainly cutting down on this, but given
> the widespread mis-issuance of these certs in the past, are you really
> confident that it's not still happening when CT can't see it?).


It’s 2018. That practice has been banned for five years. It wasn’t banned
at the time. Heck, the Baseline Requirements didn’t even exist at the time
of that paper - and yet compliance to them has been a Mozilla Program
requirement for years and literally thousands of messages in this Forum
have been discussing them since their adoption. In 2012.

I appreciate your many contributions over the years, but it is extremely
misleading, borderline disingenuous to make such broad claims about the
state of the PKI today based on the ecosystem in 2010.

Would you feel your claim is any more appropriate or valid than a claim
that there are many CAs that will issue you SHA-1 certs or 1024-bit RSA key
certs, in 2018, simply because they did in 2010?

Or is your viewpoint that because this happened in the past, one should
assume that it will forever happen, no matter how much the ecosystem
changes - including explicitly prohibiting it for years?


>
> >Third, I'm happy to inform you there is a correct way. The Secure Contexts
> >spec ( https://www.w3.org/TR/secure-contexts/#is-origin-trustworthy )
>
> ... available on display in the bottom of a locked filing cabinet stuck in
> a disused lavatory with a sign on the door saying "Beware of the Leopard".


Well, no, it only seems that way because well-intentioned, but woefully
misinformed people will often be the first to reply, and then that bad
information continues to propagate, with plenty of excuses why they were
“technically” right or niggling on some minor semantic detail, all the
while happily believing their wrong way is the right way and promoting it
as such.

> Given that a number of vendors have resorted to hardcoding their own root
> CAs, Secure Contexts is either not working or there's insufficient
> awareness of it for it to be effective (or both). Having just skimmed
> parts of that lengthy and complex spec, which I'd never heard of until
> now, it's pretty hard to see what this actually gives me, and that it can
> (according to you) make connecting to localhost secure. In particular the
> text "The following features are at-risk, and may be dropped during the CR
> period: [...] The localhost carveout" and "This carveout is 'at risk', as
> there’s currently only one implementation" doesn't inspire confidence
> either in it being widely supported or it continuing to be supported.


127.0.0.1 works.

“Localhost” has mixed support, due to the fact that several OSes will send
“localhost” traffic over the internet connection and DNS server, allowing
anyone to claim to be your localhost.

Of course, if that doesn’t tickle your fancy, there are other ways that are
supported that you may not have heard about - for example:
https://docs.microsoft.com/en-us/microsoft-edge/extensions/guides/native-messaging

https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Native_messaging

https://developer.chrome.com/apps/nativeMessaging

These also offer secure communication channels, such as the use case the OP
raised.
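[Editor's note: the three native messaging APIs linked above share the same stdio wire format: each JSON message is preceded by a 32-bit message length in native byte order, per the vendor docs. A native "print host" would loop on these two helpers; the message fields in the test are hypothetical.]

```python
import json
import struct

def write_message(stream, message: dict) -> None:
    """Frame one JSON message for native messaging: a 32-bit length prefix
    (native byte order), then the UTF-8 encoded payload."""
    payload = json.dumps(message).encode("utf-8")
    stream.write(struct.pack("=I", len(payload)))
    stream.write(payload)
    stream.flush()

def read_message(stream) -> dict:
    """Read one framed message; raise EOFError when the browser closes
    the pipe."""
    raw_len = stream.read(4)
    if len(raw_len) < 4:
        raise EOFError("browser closed the pipe")
    (length,) = struct.unpack("=I", raw_len)
    return json.loads(stream.read(length).decode("utf-8"))
```

In a real host these would run against `sys.stdin.buffer` / `sys.stdout.buffer`, with the browser launching the host process as declared in its native messaging manifest.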

> >There are other remarks in your response that are also wrong, but in the
> >spirit of only focusing on the most important (to this specific reporter's
> >question), I've omitted them.
>
> Please, go ahead. I'm happy to defend them, with references to studies and
> whatnot if available.


Quick check: will anything you cite be newer than 2010?

Peter Gutmann

Jan 9, 2018, 6:09:41 PM
to Ryan Sleevi, Nicholas Humfrey, dev-security-policy
Ryan Sleevi <ry...@sleevi.com> writes:

>Or is your viewpoint that because this happened in the past, one should
>assume that it will forever happen, no matter how much the ecosystem changes -
>including explicitly prohibiting it for years?

Pretty much. See the followup message, which shows it was still happening as
of a few months ago.

>one should assume that it will forever happen, no matter how much the
>ecosystem changes - including explicitly prohibiting it for years?

Buffer overflows, XSS, SQL injection, the list is endless. None of these
security issues have gone away, why would another widespread problem, issuance
of certs that shouldn't have been issued, magically disappear just because
someone says it should? Do you honestly believe we won't see more mis-issued
certs just because the BR says you're not allowed to do it? Just check the
list over any period of time for examples of the ones that someone's actually
noticed, who knows how many have gone unnoticed until someone like Tavis
comes along.

>Quick check: will anything you cite newer than 2010?

See my other reply. I just used the best-known one, which was the first that
came to mind, and shows how widespread the issue was before the naming-and-
shaming cut some of it down.

Peter.

Ryan Sleevi

Jan 9, 2018, 6:30:50 PM
to Peter Gutmann, Nicholas Humfrey, Ryan Sleevi, dev-security-policy
On Wed, Jan 10, 2018 at 12:08 AM Peter Gutmann <pgu...@cs.auckland.ac.nz>
wrote:

> Ryan Sleevi <ry...@sleevi.com> writes:
>
> >Or is your viewpoint that because this happened in the past, one should
> >assume that it will forever happen, no matter how much the ecosystem
> >changes - including explicitly prohibiting it for years?
>
> Pretty much. See the followup message, which shows it was still happening
> as of a few months ago.


I fear you critically misunderstood this then.

> >one should assume that it will forever happen, no matter how much the
> >ecosystem changes - including explicitly prohibiting it for years?
>
> Buffer overflows, XSS, SQL injection, the list is endless. None of these
> security issues have gone away, why would another widespread problem,
> issuance of certs that shouldn't have been issued, magically disappear
> just because someone says it should?


Because your comparison is tragically flawed? Again, this is something that
was perfectly permissible in 2010 because quite literally no rules existed
on this point.

If you want an analogy, you’re trying to argue that Windows 10 is insecure,
on the basis that you forgot to apply patches to your Windows 95 machine.
Yes, if you didn’t apply patches, your Windows 95 machine was bad. But that
has zero bearing on a discussion about 2018 unless you are intentionally or
unintentionally neglecting the hundreds of systemic changes since then.

> Do you honestly believe we won't see more mis-issued
> certs just because the BR says you're not allowed to do it?


Here you switch to arguing they’re misissued (except they weren’t then),
and subsequently confusing the difference between a misissued certificate
(e.g. for localhost) versus a certificate that the subscriber compromises
their own key for (e.g. Blizzard).

These are two entirely separate things.

> Just check the list over any period of time for examples of the ones that
> someone's actually noticed, who knows how many have gone unnoticed until
> someone like Tavis comes along.
>
> >Quick check: will anything you cite be newer than 2010?
>
> See my other reply. I just used the best-known one, which was the first
> that came to mind, and shows how widespread the issue was before the
> naming-and-shaming cut some of it down.


Here, you ignore the fact that it wasn’t “naming and shaming” that cut it
down, but quite literally a tectonic shift in the industry and the
introduction of actual requirements. Citing the “best known” is great, but
that’s like citing the best known article that says smoking is good for
you, published in 1950.

Your arguments are like saying that you can go to Walgreens to buy cocaine,
because 100 years ago you could go to the drug store and do so. Or arguing
that anyone can go to the drug store and get enough cold medicine to make
meth, while ignoring the past two decades of increased drug controls.

To your original response, the OP literally gave multiple options that the
industry has acknowledged as significantly better than what you propose as
“best” practice, which is terribly insecure.

To avoid continuing to rathole on just how misguided the initial response
was - and how woefully out of date - I’ll circle back with priorities:

- You can use http://127.0.0.1
- You can generate a local CA cert on installation (as numerous products do)
- You can generate a local *non* CA cert and install it as trusted for your
users for just that site
- You can use the browser’s native messaging APIs
- You can generate a FQDN and deliver certs to your users from publicly
trusted CAs (as long as the client generated the local key) - If you’re not
familiar with this,
https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/

You should not:
- Have the vendor generate the leaf key (as the OP pointed out as troubling)
- Have the vendor generate the CA key themselves (as Peter suggested)
- Ship a private key within the app

That is best practice here in 2018.

Peter Gutmann

Jan 9, 2018, 6:43:03 PM
to Ryan Sleevi, Nicholas Humfrey, dev-security-policy
Ryan Sleevi <ry...@sleevi.com> writes:

>Of course, if that doesn’t tickle your fancy, there are other ways that are
>supported that you may not have heard about - for example:
>https://docs.microsoft.com/en-us/microsoft-edge/extensions/guides/native-messaging
>
>https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Native_messaging
>
>https://developer.chrome.com/apps/nativeMessaging

So I've had a quick look at these and unless I've missed something they're
just a means of talking to an app on your local machine. As soon as you go
outside that boundary, e.g. to configure a router or printer on your local
network via a browser, you're back to having to add a new root CA to the cert
store for it to work. Or have I missed something?

Peter.

Jonathan Rudenberg

Jan 9, 2018, 7:01:43 PM
to Peter Gutmann, Ryan Sleevi, Nicholas Humfrey, dev-security-policy
Yes, the native messaging API is for communication with local apps.

For communicating with other machines, the correct thing to do is to issue a unique certificate for each device from a publicly trusted CA. The way Plex does this is a good example: https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/
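[Editor's note: the Plex scheme described in the linked post can be sketched roughly as below: each device gets a publicly trusted wildcard cert for `*.<hash>.<vendor zone>`, and clients connect to a hostname encoding the device's local IP, which the vendor's DNS resolves back to that IP. The zone `devices.example.com` and the hash derivation here are placeholders, not Plex's actual values.]

```python
import hashlib

def device_hostname(local_ip: str, device_secret: bytes,
                    zone: str = "devices.example.com") -> str:
    """Map a device's local IP onto a resolvable per-device hostname."""
    # A per-device hash keeps one device's wildcard cert from matching
    # another's; the real derivation would be vendor-specific.
    digest = hashlib.sha256(device_secret).hexdigest()[:32]
    # e.g. 192.168.1.10 -> 192-168-1-10.<hash>.devices.example.com,
    # which the vendor's DNS resolves back to 192.168.1.10.
    return "{}.{}.{}".format(local_ip.replace(".", "-"), digest, zone)
```

The hostname matches the device's wildcard certificate, so the browser gets an ordinary publicly trusted HTTPS connection to a LAN address, with the private key generated and held on the device itself.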

Peter Gutmann

Jan 9, 2018, 7:32:19 PM
to Jonathan Rudenberg, Ryan Sleevi, Nicholas Humfrey, dev-security-policy
Jonathan Rudenberg <jona...@titanous.com> writes:

>For communicating with other machines, the correct thing to do is to issue a
>unique certificate for each device from a publicly trusted CA. The way Plex
>does this is a good example:
>https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/

But the Plex solution required DynDNS, partnering with a CA for custom hash-
based wildcard certificates (and for which the CA had to create a new custom
CA cert), and other tricks, I don't think that generalises. In effect this
has given Plex their own in-house CA (by proxy), which is a point solution for
one vendor but not something that any vendor can build into a product.

Anyone from Plex want to comment on how much effort was involved in this? It'd
be interesting to know what was required to negotiate this deal, and how long
it took, just as a reference point for anyone else considering it.

Peter.

Jonathan Rudenberg

Jan 9, 2018, 7:37:01 PM
to Peter Gutmann, Ryan Sleevi, Nicholas Humfrey, dev-security-policy

> On Jan 9, 2018, at 19:31, Peter Gutmann via dev-security-policy <dev-secur...@lists.mozilla.org> wrote:
>
> Jonathan Rudenberg <jona...@titanous.com> writes:
>
>> For communicating with other machines, the correct thing to do is to issue a
>> unique certificate for each device from a publicly trusted CA. The way Plex
>> does this is a good example:
>> https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/
>
> But the Plex solution required DynDNS, partnering with a CA for custom hash-
> based wildcard certificates (and for which the CA had to create a new custom
> CA cert), and other tricks, I don't think that generalises. In effect this
> has given Plex their own in-house CA (by proxy), which is a point solution for
> one vendor but not something that any vendor can build into a product.

There is nothing special about this, hardware vendors regularly do a similar amount of work around discovery/provisioning for their devices. Additionally, there is nothing special about the CA, it can be done with Let’s Encrypt! For example: https://crt.sh/?q=%25.myfritz.net

These types of use cases (“IOT”) are regularly brought up by CAs on mailing lists, so I assume there are several that are quite happy to help you set something similar up.

Ryan Sleevi

Jan 9, 2018, 9:19:36 PM
to Peter Gutmann, Nicholas Humfrey, Ryan Sleevi, dev-security-policy
On Wed, Jan 10, 2018 at 12:42 AM Peter Gutmann <pgu...@cs.auckland.ac.nz>
wrote:

> Ryan Sleevi <ry...@sleevi.com> writes:
>
> >Of course, if that doesn’t tickle your fancy, there are other ways that
> >are supported that you may not have heard about - for example:
> >https://docs.microsoft.com/en-us/microsoft-edge/extensions/guides/native-messaging
> >
> >https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Native_messaging
> >
> >https://developer.chrome.com/apps/nativeMessaging
>
> So I've had a quick look at these and unless I've missed something they're
> just a means of talking to an app on your local machine. As soon as you
> go outside that boundary, e.g. to configure a router or printer on your
> local network via a browser, you're back to having to add a new root CA
> to the cert store for it to work. Or have I missed something?


I believe you may have missed the original message in your haste to decry
browsers and offer outdated anecdotes and dangerously insecure solutions
(the latter of which is the most troubling and frustrating part of this
exchange, and bears greater professional responsibility)

The use case and feature was:

“called "DYMO Root CA (for localhost)". This certificate was installed by
the label printing software I installed for my DYMO Label Printer.

Its intended purpose is to allow web-based tools to send content to the
label printer to be printed by the local machine.”

I hope you can see how I responded to precisely the problem provided.

Peter Gutmann

Jan 9, 2018, 9:34:49 PM
to Ryan Sleevi, Nicholas Humfrey, dev-security-policy
Ryan Sleevi <ry...@sleevi.com> writes:

>I hope you can see how I responded to precisely the problem provided.

You responded to that one specific limited instance. That doesn't work for
anything else where you've got a service that you want to make available over
HTTPS. Native messaging is a hack to get around a problem with browsers, as
soon as you move off the local machine it reappears again, which is what I was
pointing out.

Since this is something that keeps cropping up, and from all signs will keep
on cropping up, perhaps the browser vendors could publish some sort of
guide/BCP on how to do it right that everyone could follow. For example:

HTTPS to localhost: Use Native Messaging
HTTPS to device on local network (e.g. RFC 1918): ???
HTTPS to device with non-FQDN: ???
HTTPS to device with static IP address: ???

This would solve... well, at least take a step towards solving the same issue
that keeps coming up again and again. If there's a definitive answer,
developers could refer to that and get it right.

Oh, and saying "you need to negotiate a custom deal with a
commercial/public/whatever-you-want-to-call-it CA" doesn't count as a
solution, it has to be something that's actually practical.

Peter.

Ryan Sleevi

Jan 9, 2018, 10:09:12 PM
to Peter Gutmann, Nicholas Humfrey, Ryan Sleevi, dev-security-policy
On Wed, Jan 10, 2018 at 3:33 AM Peter Gutmann <pgu...@cs.auckland.ac.nz>
wrote:

> Ryan Sleevi <ry...@sleevi.com> writes:
>
> >I hope you can see how I responded to precisely the problem provided.
>
> You responded to that one specific limited instance.


I responded to the topic of this thread, with actionable advice for the
problem described and questioned posed, and with specific advice to
disregard the dangerous, insecure, and outdated advice you provided.


> That doesn't work for
> anything else where you've got a service that you want to make available
> over HTTPS. Native messaging is a hack to get around a problem with
> browsers, as soon as you move off the local machine it reappears again,
> which is what I was pointing out.


You continue to use pejoratives such as “hack” or “problem”, while also
acknowledging that you have not kept up with the state of the industry for
the past near-decade of improvements or enhancements. Perhaps it may be
more productive to listen and research before casting aspersions and
judgement - it would certainly be less alienating and dismissive.

> Since this is something that keeps cropping up, and from all signs will
> keep on cropping up, perhaps the browser vendors could publish some sort
> of guide/BCP on how to do it right that everyone could follow. For
> example:
>
> HTTPS to localhost: Use Native Messaging
> HTTPS to device on local network (e.g. RFC 1918): ???


I similarly suspect you’re unaware of https://wicg.github.io/cors-rfc1918/ in
which browsers seek to limit or restrict communication to such devices?

> HTTPS to device with non-FQDN: ???
> HTTPS to device with static IP address: ???


I suspect any answer such as “Don’t do this” or “This is intentionally not
supported” will be met by you as “impractical”. That said, I don’t find it
particularly useful to engage in the shifting goalposts here, especially
not without specific use cases in mind. That’s because discussions of
“practical” will inevitably be used to arbitrarily reject solutions for
contrived examples, rather than offer meaningful discussion tailored to
real use cases and assessments of tradeoffs or approaches.

> This would solve... well, at least take a step towards solving the same
> issue that keeps coming up again and again. If there's a definitive
> answer, developers could refer to that and get it right.


Note these didn’t come up in this thread, nor have they recently in the
Forum.

> Oh, and saying "you need to negotiate a custom deal with a
> commercial/public/whatever-you-want-to-call-it CA" doesn't count as a
> solution, it has to be something that's actually practical.


By this definition, any solution short of hand holding / doing it for them
is “impractical”. Engineering has costs and tradeoffs. I get that you’re
not willing to entertain those in your pursuits, but please recognize the
danger when someone as respected within the community as you are offers
solutions that are substantially insecure because of this. It may be
better, in that case, to either disclose that there are tradeoffs you
personally would not make, or to simply not offer them forward as
solutions, certainly not best practices.

Peter Gutmann

Jan 9, 2018, 10:48:09 PM
to Ryan Sleevi, Nicholas Humfrey, dev-security-policy
Ryan Sleevi <ry...@sleevi.com> writes:

>I similarly suspect you’re unaware of https://wicg.github.io/cors-rfc1918/ in
>which browsers seek to limit or restrict communication to such devices?

A... blog post? Not sure what that is, it's labelled "A Collection of
Interesting Ideas", stashed on Github under the WICG's repository? No, for
some inexplicable reason I seem to have missed that one. Is there a "Beware
of the Leopard" sign somewhere?

It talks a lot about details of CORS, but I'm not sure what it says about
allowing secure HTTPS to devices at RFC 1918 addresses. The doc says "we
propose a mitigation against these kinds of attacks that would require
internal devices to explicitly opt-in to requests from the public internet",
which indicates it's targeted at something altogether different.

>while also acknowledging that you have not kept up with the state of the
>industry for the past near-decade of improvements or enhancements

If the industry actually publicised some of this stuff rather than posting
articles with names like "A Collection of Interesting Ideas" to GitHub (which
in any case doesn't look like it actually addresses the problem) then I might
have kept up with it a bit more. And as I've already pointed out, given the
number of vendors who are resorting to slipping in their own root CAs and
other tricks, I'm not the only one who's missing all these well-hidden
industry solutions.

>> HTTPS to device with non-FQDN: ???
>> HTTPS to device with static IP address: ???
>
>I suspect any answer such as “Don’t do this” or “This is intentionally not
>supported” will be met by you as “impractical”.

Try me. The reason why I ruled out "negotiate a custom deal with a commercial
CA" is that it genuinely doesn't scale, you can't expect 10,000, 50,000,
100,000 (whatever the number is) device vendors to all cut a special deal with
a commercial/public/whatever CA just to allow a browser to talk to their $30
Internet-connected whatsit.

It's a simple enough question, so I'll repeat it again, a vendor selling some
sort of Internet-connected device that needs to be administered via HTTP (i.e.
using a browser), a printer, router, HVAC unit, whatever you like, wants to
add encryption to the connection. How should they do this for the fairly
common scenarios of:

HTTPS to device on local network (e.g. RFC 1918).
HTTPS to device with non-FQDN.
HTTPS to device with static IP address.

What's the recommended BCP for a vendor to allow browser-based HTTPS access
for these scenarios? I'm genuinely curious. And please publish the
recommendation so others can follow it (not on GitHub labelled "A Collection
of Interesting Ideas").

Peter.

Nicholas Humfrey

Jan 11, 2018, 7:32:21 PM
to dev-security-policy, Ryan Sleevi, Peter Gutmann
Thank you very much to everyone who replied to my original post. I think
the fact that so many people are making the same mistakes indicates that
the correct solutions are not obvious to many developers.


I have added a "How could this be done better?" section to my README:
https://github.com/njh/dymo-root-ca-security-risk/blob/master/README.md

Please let me know if I have misunderstood any of it.


DYMO have replied to my enquiry saying:
"After investigating the problem that you described, I escalated the
issue to the developers."

So fingers crossed they are taking this seriously.


nick.

mka...@gmail.com

Jul 26, 2018, 3:57:07 PM
to mozilla-dev-s...@lists.mozilla.org
I came across this from the OP's article posted on GitHub, apologies for posting so much later than the original discussion. I just wanted to throw in my 2 cents, real use case. A webapp I develop(ed) for my company has been using DYMO's developer setup and the web service that's installed with their label software since sometime in 2012. It is difficult to state how much such a small thing has increased the efficiency in our business process. The webapp is critical to our company; it and the label ability is constantly used by employees throughout the day. Those labels with barcode and several little bits of info are golden.

I/we would really really really prefer to not lose this ability. I know you're not talking about removing the ability, but I feel I have to say this because browsers have lost a lot of functionality related to the local machine in recent years. Security... fine, I know.

I'll also say that I don't think DYMO should have to code something for each individual web browser out there (re: native messaging links above), which also would likely change more frequently than more standardized methods. I guess that might make somewhat more sense in a walled garden world when only one browser is available or supported.

So generating a cert upon installation might be the better way that's needed. I don't care to try reading DYMO's minified JS code, but I do see both localhost and 127.0.0.1 in there. Installing one of our own non-CA certs in these clients would be doable but a hassle. However, that seems like more than should be required. If the user installs DYMO software and wants to use the connection between web browser and printer, then that should be enough.

I'd be curious to know if DYMO has read this or the OP's article. They have been making changes lately, but I know browsers have been forcing their hand. They really have a fantastic tool.