
WoSign Issue L and port 8080


Jakob Bohm

Sep 9, 2016, 6:53:55 AM
to mozilla-dev-s...@lists.mozilla.org
As I read the Wiki description of WoSign issue L: Arbitrary High port
validation, the description notes a case of port 8080 validation as an
instance of this.

However, I seem to have seen (though I cannot find it now) that at
least WoSign, and possibly others, consider port 8080 one of the three
valid non-arbitrary ports for web server control validations, along
with ports 80 and 443.

If the BRs and/or CP/CPS indeed classify port 8080 as a valid web port
for domain control checking, that particular case probably shouldn't
count.

If instead WoSign (as I seem to recall) considers port 8080 valid, but
the relevant formal documents do not, then that would be a separate but
related issue, which should get its own letter on the Wiki page.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Gervase Markham

Sep 10, 2016, 8:45:46 AM
to Jakob Bohm
On 09/09/16 11:53, Jakob Bohm wrote:
> As I read the Wiki description of WoSign issue L: Arbitrary High port
> validation, the description notes a case of port 8080 validation as an
> instance of this.
>
> If the BR and or CP/CPS indeed classify port 8080 as a valid web port
> for domain control checking, that particular case probably shouldn't
> count.

We aren't counting particular incidents, just the facts of the case,
which was that any high port was accepted, and that at least one cert
was issued on a non-8080 port.

> If instead WoSign (as I seem to recall) considers port 8080 as valid,
> but the relevant formal documents do not, then that would be a separate
> but related issue, which should get it's own letter on the Wiki page.

As noted in the original write-up, at the time of the incident, the
relevant formal documents did not specify exact port numbers, but
Mozilla feels that the fact that ports over 1024 are unprivileged is
basic security knowledge that any CA should have.

Gerv


Lee

Sep 10, 2016, 12:14:47 PM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On 9/10/16, Gervase Markham <ge...@mozilla.org> wrote:
> On 09/09/16 11:53, Jakob Bohm wrote:
>> As I read the Wiki description of WoSign issue L: Arbitrary High port
>> validation, the description notes a case of port 8080 validation as an
>> instance of this.
>>
>> If the BR and or CP/CPS indeed classify port 8080 as a valid web port
>> for domain control checking, that particular case probably shouldn't
>> count.
>
> We aren't counting particular incidents, just the facts of the case,
> which was that any high port was accepted, and that at least one cert
> was issued on a non-8080 port.
>
>> If instead WoSign (as I seem to recall) considers port 8080 as valid,
>> but the relevant formal documents do not, then that would be a separate
>> but related issue, which should get it's own letter on the Wiki page.
>
> As noted in the original write-up, at the time of the incident, the
> relevant formal documents did not specify exact port numbers, but
> Mozilla feels that the fact that ports over 1024 are unprivileged is
> basic security knowledge that any CA should have.

Does Mozilla feel that using 'clear text' protocols to validate
domains is adequate security?
https://cabforum.org/2016/08/05/ballot-169-revised-validation-requirements/
> Authorized Port: One of the following ports: 80 (http), 443 (http), 115 (sftp), 25 (smtp), 22 (ssh).

I got a copy of
https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-1.4.0.pdf
and searched for the string "dnssec". No matches.
Will Mozilla be offering an amendment to the BR requiring the use of
DNSSEC where available?

How bad does an auditor have to be before Mozilla will no longer
accept them as "a trusted auditor for the Mozilla root program"?
https://wiki.mozilla.org/CA:WoSign_Issues#Issue_J:_Various_BR_Violations_.28Apr_2015.29
> Google noted that many of these issues should have been caught by a competent auditor.
> WoSign's auditors at the time were Ernst and Young (Hong Kong).

Will Mozilla accept CA audits done by Ernst and Young in the near future?

Does Mozilla plan on giving any extra attention to CAs whose last
audit was done by Ernst and Young?

Thanks,
Lee

Peter Bowen

Sep 10, 2016, 1:15:53 PM
to Lee, mozilla-dev-s...@lists.mozilla.org, Gervase Markham
On Sat, Sep 10, 2016 at 9:14 AM, Lee <ler...@gmail.com> wrote:
> On 9/10/16, Gervase Markham <ge...@mozilla.org> wrote:
>> On 09/09/16 11:53, Jakob Bohm wrote:
>
> Does Mozilla feel that using 'clear text' protocols to validate
> domains is adequate security?
> https://cabforum.org/2016/08/05/ballot-169-revised-validation-requirements/
>> Authorized Port: One of the following ports: 80 (http), 443 (http), 115 (sftp), 25 (smtp), 22 (ssh).

This is basically a catch-22 for initial issuance. If you allow
validation via a connection to a host operating at the requested FQDN,
then it will almost surely not be using a trusted public certificate
for the first connection. Using ssh or accepting a self-signed
certificate does not appear to address any critical part of the threat
model.

> How bad does an auditor have to be before Mozilla will no longer
> accept them as "a trusted auditor for the Mozilla root program"?
> https://wiki.mozilla.org/CA:WoSign_Issues#Issue_J:_Various_BR_Violations_.28Apr_2015.29
>> Google noted that many of these issues should have been caught by a competent auditor.
>> WoSign's auditors at the time were Ernst and Young (Hong Kong).
>
> Will Mozilla accept CA audits done by Ernst and Young in the near future?
>
> Does Mozilla plan on giving any extra attention to CAs whose last
> audit was done by Ernst and Young?

EY, like BDO, Deloitte, KPMG, and PwC, are not single firms. They are
"networks" of firms which usually carry out their audit/attest
services independently and are independently owned and operated. So
an opinion from Ernst & Young Bedrijfsrevisoren BCVBA (Belgium) is
likely written by a team independent from the team that wrote an
opinion from Ernst & Young P/S (Denmark), which is independent from the
team that wrote an opinion from EY 安永 (Hong Kong). That being said, I
suspect that EY Global wants to protect its brand, so I would hope
they review any reports from any member firm that appear to be
lacking.

Thanks,
Peter

Lee

Sep 10, 2016, 5:00:21 PM
to Peter Bowen, mozilla-dev-s...@lists.mozilla.org, Gervase Markham
On 9/10/16, Peter Bowen <pzb...@gmail.com> wrote:
> On Sat, Sep 10, 2016 at 9:14 AM, Lee <ler...@gmail.com> wrote:
>> On 9/10/16, Gervase Markham <ge...@mozilla.org> wrote:
>>> On 09/09/16 11:53, Jakob Bohm wrote:
>>
>> Does Mozilla feel that using 'clear text' protocols to validate
>> domains is adequate security?
>> https://cabforum.org/2016/08/05/ballot-169-revised-validation-requirements/
>>> Authorized Port: One of the following ports: 80 (http), 443 (http), 115
>>> (sftp), 25 (smtp), 22 (ssh).
>
> This is basically a catch-22 for initial issuance. If you allow
> validation via connection to a host operating that the requested FQDN,
> then it will almost surely not be using a trusted public certificate
> for the first connection.

Right - I figured that out about 30 seconds after reading an email
about allowing verification on ports 80 and 443. But you only need to
get the initial certificate one time - after that you should be able
to renew using port 443 and I didn't see anything in the requirements
about checking via an encrypted connection first. Did I miss
something or is getting a renewal cert over port 80 allowed?

> Using ssh or accepting a self-signed
> certificate does not appear to address any critical part of the threat
> model.

Is the threat model documented somewhere?

Admittedly, I'm doing cargo-cult security - "clear text protocols Are
Bad." But is there really no better way to verify a domain? Is there
really a need to allow clear-text protocols after an end-user gets
their first certificate? Why no mention of DNSSEC in the BR?

I just started reading about certificate transparency so I might be
misunderstanding it, but if a CA is going to be handing out certs
automatically using clear-text protocols, why doesn't Mozilla make CT
a requirement? Relying solely on audits is clearly a failure, so how
about trying continuous monitoring? & make failure to log to CT
servers all by itself enough justification to be removed from the moz
trust store.
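
(To make "continuous monitoring" concrete: something along the lines of
the sketch below, polling crt.sh's public JSON interface, is what I have
in mind. The URL and JSON field names are just whatever crt.sh happens
to expose, so treat it as illustrative only.)

import requests

def certs_for_domain(domain):
    # Ask crt.sh for certificates whose names match the domain
    # (illustrative; field names depend on crt.sh's JSON output).
    params = {"q": "%." + domain, "output": "json"}
    resp = requests.get("https://crt.sh/", params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

def report_new_certs(domain, seen_ids):
    # "Continuous monitoring" is just re-running this on a schedule and
    # alerting the domain owner about entries not seen before.
    for entry in certs_for_domain(domain):
        if entry.get("id") not in seen_ids:
            print(entry.get("issuer_name"), entry.get("not_before"),
                  entry.get("name_value"))
            seen_ids.add(entry.get("id"))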

>> How bad does an auditor have to be before Mozilla will no longer
>> accept them as "a trusted auditor for the Mozilla root program"?
>> https://wiki.mozilla.org/CA:WoSign_Issues#Issue_J:_Various_BR_Violations_.28Apr_2015.29
>>> Google noted that many of these issues should have been caught by a
>>> competent auditor.
>>> WoSign's auditors at the time were Ernst and Young (Hong Kong).
>>
>> Will Mozilla accept CA audits done by Ernst and Young in the near future?
>>
>> Does Mozilla plan on giving any extra attention to CAs whose last
>> audit was done by Ernst and Young?
>
> EY, like BDO, Deloitte, KPMG, and PwC, are not single firms. They are
> "networks" of firms which usually carry out their audit/attest
> services independently and are independently owned and operated. So
> an opinion from Ernst & Young Bedrijfsrevisoren BCVBA (Belgium) is
> likely written by a team independent from the team that wrote an
> opinion from Ernst & Young P/S (Demark) which is independent from the
> team that wrote an opinion from EY 安永 (Hong Kong).

So Honest Achmed has his request for his CA to be added to the Mozilla
root store denied & he comes up with a new business plan - pay the
franchise fee, get a brand name and go into business auditing CAs. As
long as the CAs he audits don't screw up he gets to rake in the money?
And when he does screw up the franchiser gets a free pass even though
they didn't make sure Honest Achmed was qualified to audit a CA?

> That being said, I
> suspect that EY Global wants to protect its brand, so I would hope
> they review any reports from any member firm that appear to be
> lacking.

I would hope the brand owner would protect their brand by insisting
that the franchisees were actually competent. In this case Google
says they weren't, so why isn't Mozilla asking EY Global for an action
plan on how they're going to fix their deficiencies?

I'm not seeing why
> EY, like BDO, Deloitte, KPMG, and PwC, are not single firms.
makes any difference. The local offices are using a global brand name
& if one local office screws up it tarnishes the brand name, not just
that one local office.

I'm not seeing why Mozilla should think any EY office is competent to
audit a CA now.

Regards,
Lee

Patrick Figel

Sep 11, 2016, 4:32:42 AM
to Lee, Peter Bowen, Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On 10/09/16 22:37, Lee wrote:
> Right - I figured that out about 30 seconds after reading an email
> about allowing verification on ports 80 and 443. But you only need
> to get the initial certificate one time - after that you should be
> able to renew using port 443 and I didn't see anything in the
> requirements about checking via an encrypted connection first. Did I
> miss something or is getting a renewal cert over port 80 allowed?

In order to spoof a CA's domain validation request, an attacker would
need to be in a position to MitM the connection between the CA and the
targeted domain. This is where (the authentication part of) TLS would
come in handy. That leaves us with the problem of determining whether
the domain name in question should be considered to support TLS:

1. The CA could look at prior records for that domain - if a
certificate has been issued before, treat it as a renewal.
2. The CA could similarly search Certificate Transparency logs and
treat the issuance as a renewal if a certificate is found.

Option 1 has one big problem: The attacker only has to choose a CA that's
different from the CA the domain has used before.

Option 2 is problematic because not all CAs log to CT at the moment.

Both options do nothing to solve the problem of a domain owner losing
the private key of their certificate (for example due to a hack, data
loss, or just a domain transfer).

You might be thinking of an option 3 - just connect to port 443, see if
the domain has a valid certificate, and use HTTPS if available. This
sounds great in theory, but since the attacker would need to be able to
MitM the connection in the first place in order to spoof the validation
request, they could simply intercept this request and force validation
on port 80.
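
To make that concrete, a naive opportunistic fetch on the CA side would
look roughly like this (a Python sketch with a made-up function name,
not any CA's actual code):

import requests

def fetch_validation_token(domain, path):
    # Opportunistic "option 3": try HTTPS with certificate verification,
    # fall back to plain HTTP if that fails.
    try:
        resp = requests.get("https://" + domain + path, timeout=10)
        resp.raise_for_status()
        return resp.text
    except requests.RequestException:
        # An attacker who can already MitM the CA's traffic simply
        # breaks the HTTPS attempt (reset, timeout, bad certificate)
        # and answers this plain-HTTP request instead, so the fallback
        # adds nothing for first-time issuance.
        resp = requests.get("http://" + domain + path, timeout=10)
        resp.raise_for_status()
        return resp.text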

All in all I think this would do more harm than good. Adding complexity
to the DV process means slower HTTPS adoption in general. I'd rather see
a "good enough" DV process and HTTPS everywhere when the alternative is
a perfect-in-theory DV process where HTTPS is available only for sites
that can deploy all these things competently. Even if we push for
encryption for this validation method, we still have DNS validation
without any encryption, and given the rate at which DNSSEC is deployed,
that's not going to change any time soon. (Not to mention that there's a
lot of opposition to DNSSEC in general.)

Patrick

Lee

Sep 11, 2016, 4:05:12 PM
to Patrick Figel, Gervase Markham, mozilla-dev-s...@lists.mozilla.org, Peter Bowen
On 9/11/16, Patrick Figel <patf...@gmail.com> wrote:
> On 10/09/16 22:37, Lee wrote:
>> Right - I figured that out about 30 seconds after reading an email
>> about allowing verification on ports 80 and 443. But you only need
>> to get the initial certificate one time - after that you should be
>> able to renew using port 443 and I didn't see anything in the
>> requirements about checking via an encrypted connection first. Did I
>> miss something or is getting a renewal cert over port 80 allowed?
>
> In order to spoof a CA's domain validation request, an attacker would
> need to be in a position to MitM the connection between the CA and the
> targeted domain.

does dns hijacking or dns cache poisoning count as mitm?

> This is where (the authentication part of) TLS would
> come in handy. That leaves us with the problem of determining whether
> the domain name in question should be considered to support TLS:
>
> 1. The CA could look at prior records for that domain - if a
> certificate has been issued before, treat it as a renewal.
> 2. The CA could similarly search Certificate Transparency logs and
> treat the issuance as a renewal if a certificate is found.
>
> Option 1 has one big problem: The attacker only has to chose a CA that's
> different from the CA the domain has used before.
>
> Option 2 is problematic because not all CAs log to CT at the moment.

Why is that allowed?

Full-blown CT is going to take a while.
https://tools.ietf.org/html/rfc6962 talks about including the SCT in
the TLS handshake, so getting to CT means changing how browsers do TLS
- correct? But requiring CAs log every cert to multiple CT servers
doesn't require any changes to browser code and allows for continuous
monitoring of CA behavior (in other words, "trust, but verify"). Or
am I missing something?
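
(For what it's worth, the CA-side part of logging looks tiny on paper -
RFC 6962's add-chain call is just an HTTPS POST of the base64'd chain.
A rough Python sketch; the log URL in the comment is made up:)

import base64
import requests

def submit_to_ct_log(log_url, der_chain):
    # RFC 6962 section 4.1: POST the end-entity certificate plus
    # intermediates, each DER-encoded and base64'd, to /ct/v1/add-chain.
    body = {"chain": [base64.b64encode(der).decode("ascii")
                      for der in der_chain]}
    resp = requests.post(log_url.rstrip("/") + "/ct/v1/add-chain",
                         json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()  # the signed certificate timestamp (SCT)

# e.g. submit_to_ct_log("https://ct.example-log.org", [ee_der, ica_der])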

Let's check my understanding of CT logging with the issues listed at
https://wiki.mozilla.org/CA:WoSign_Issues

Issue D: ... does not represent a violation of the BRs.
so we'll skip that

Issue F: WoSign issued two certificates in March 2015. These
certificates are identical in all ways (including their serial
numbers) except for their notBefore dates, which are 37 seconds apart.
any interested observer could have discovered that back in March of
2015 if WoSign had logged all certificates to a CT server - correct?

Issue H: Duplicate Serial Numbers (Apr 2015)
again, any interested observer could have discovered that back in
2015 if WoSign had logged all certificates to a CT server - correct?

Issue J: Various BR Violations (Apr 2015)
I don't know enough to say, but couldn't at least
Incorrect or missing policy OIDs in all or most subscriber certificates;
have been discovered by an interested observer?

Issue L: Any Port (Jan - Apr 2015)
CT logging wouldn't have caught any of that
But CT logging _would_ allow CAs to offer a new service to their
customers: for <some small price> we'll notify you about any new
certificates issued for <the domain of the cert you just got> for the
life of the cert

Issue N: Additional Domain Errors (June 2015)
again, same thing as issue L - CT logging wouldn't have caught it?
Interested customers could have registered for a service to be
notified of new certs for their domain?

Issue P: Use of SM2 Algorithm
any interested observer could have discovered that?

Issue R: Purchase of StartCom
CT logging wouldn't have caught that

Issue S: Backdated SHA-1 Certs
any interested observer could have discovered that?

Issue T: alicdn.com Misissuance
same deal as for issue L

Issue V: StartEncrypt
CT logging wouldn't have caught that?


So CT logging isn't enough for a "good enough" solution, but I'd say
that annual CA audits clearly aren't good enough either; a combination
of the two does seem to get us a lot closer. Add a requirement for
DNSSEC whenever possible and we're there?


> Both options do nothing to solve the problem of a domain owner losing
> the private key of their certificate (for example due to a hack, data
> loss, or just a domain transfer).
>
> You might be thinking of an option 3 - just connect to port 443, see if
> the domain has a valid certificate, and use HTTPS if available. This
> sounds great in theory, but since the attacker would need to be able to
> MitM the connection in the first place in order to spoof the validation
> request, they could simply intercept this request and force validation
> on port 80.
>
> All in all I think this would do more harm than good. Adding complexity
> to the DV process means slower HTTPS adoption in general. I'd rather see
> a "good enough" DV process ...

if it isn't obvious by now, I'd say that any process that doesn't
include continuous monitoring isn't "good enough"

> ... and HTTPS everywhere when the alternative is
> a perfect-in-theory DV process where HTTPS is available only for sites
> that can deploy all these things competently.

If the site admins aren't competent they're going to get pwned, so why
do I care if they're doing https instead of http? Or look at it from
a different angle - if it's that hard for sites to do it correctly
then [Mozilla? CAs? somebody] can come up with a checklist of what to
look for in a hosting provider that does do it right. It seems like
most everybody is moving to "the cloud" anyway, so requiring site
admins to be competent doesn't seem all that onerous a requirement.

> Even if we push for
> encryption for this validation method, we still have DNS validation
> without any encryption, and given the rate at which DNSSEC is deployed,

http://www.dnssec-deployment.org/
dateline August 29, 2016
3. Implementing DNSSEC validation at Internet Service Providers (ISPs)

Internet Service Providers (ISPs) play a critical role by enabling
DNSSEC validation for the caching DNS resolvers used by their
customers. We have now seen massive rollouts of DNSSEC validation
within large North American ISPs and at ISPs around the world.


OK - I'll agree that we're not where we should be, but what if EV
certificates required DNSSEC-enabled domain name servers to qualify?
Seems like that would help some :)

> that's not going to change any time soon. (Not to mention that there's a
> lot of opposition to DNSSEC in general.)

Where is the opposition to DNSSEC? I was going to say that I'm also
lurking on the dns ops mailing list, but I don't think I can call what
I'm doing on m.d.s.p now lurking :)

Yes, DNSSEC is complicated & difficult to do right, but opposition to
DNSSEC in general? I'm not seeing it & any CA that can't or won't do
DNSSEC shouldn't be in the Mozilla root store.

Regards,
Lee

Nick Lamb

Sep 11, 2016, 5:01:12 PM
to mozilla-dev-s...@lists.mozilla.org
On Sunday, 11 September 2016 21:05:12 UTC+1, Lee wrote:
> does dns hijacking or dns cache poisoning count as mitm?

A careful CA validator does DNS only by making authoritative queries, so they're not subject to cache poisoning since they don't look at cached answers.

I think a successful DNS hijack against a CA validator would constitute a MITM except in the case where the attacker is straight up subverting the legitimate name owner's real systems. In /that/ case even DNSSEC doesn't necessarily help you, if they've subverted your systems they can give out DNS answers that check out as signed OK but say whatever they wish.

In the former case DNSSEC would protect you, and I agree that where it has been deployed CA validators should check it, but in a world where there are still login HTML forms with no HTTPS behind them, how surprised are we supposed to be that people don't all have DNSSEC for their domains ?
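
(For the curious, "authoritative only" boils down to asking the zone's
own nameservers directly rather than a recursive resolver. A rough
dnspython (>= 2.0) sketch - purely illustrative, and glossing over
IPv6, TCP fallback and DNSSEC checking:)

import dns.message
import dns.query
import dns.resolver

def authoritative_txt(name):
    # Locate the zone's NS set, then put the actual question to each
    # authoritative server directly, ignoring any caching resolver.
    # (A really careful implementation would walk down from the root
    # rather than use the local resolver even for this lookup.)
    answers = {}
    for ns in dns.resolver.resolve(name, "NS"):
        ns_host = ns.target.to_text()
        ns_ip = dns.resolver.resolve(ns_host, "A")[0].to_text()
        query = dns.message.make_query(name, "TXT")
        response = dns.query.udp(query, ns_ip, timeout=5)
        answers[ns_host] = [rrset.to_text() for rrset in response.answer]
    return answers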

Patrick Figel

Sep 11, 2016, 5:02:11 PM
to Lee, mozilla-dev-s...@lists.mozilla.org
On 11/09/16 22:05, Lee wrote:
>> In order to spoof a CA's domain validation request, an attacker
>> would need to be in a position to MitM the connection between the
>> CA and the targeted domain.
>
> does dns hijacking or dns cache poisoning count as mitm?

I was mentioning this in order to demonstrate that opportunistic
encryption (try HTTPS, if that fails, fall back to HTTP) does not help
with this threat model. The specifics of how a MitM attack against the
CA is pulled off are not all that important.

>> Option 2 is problematic because not all CAs log to CT at the
>> moment.
>
> Why is that allowed?

Because CT is relatively new. In fact, I don't think Mozilla is shipping
a working CT implementation yet. Some might even question whether it's
fair to ask CAs to implement CT logging when the majority of browser
vendors haven't bothered yet (or have just recently begun to bother.)

>> Both options do nothing to solve the problem of a domain owner
>> losing the private key of their certificate (for example due to a
>> hack, data loss, or just a domain transfer).
>>
>> You might be thinking of an option 3 - just connect to port 443,
>> see if the domain has a valid certificate, and use HTTPS if
>> available. This sounds great in theory, but since the attacker
>> would need to be able to MitM the connection in the first place in
>> order to spoof the validation request, they could simply intercept
>> this request and force validation on port 80.
>>
>> All in all I think this would do more harm than good. Adding
>> complexity to the DV process means slower HTTPS adoption in
>> general. I'd rather see a "good enough" DV process ...
>
> if it isn't obvious by now, I'd say that any process that doesn't
> include continuous monitoring isn't "good enough"

If you're arguing in favor of mandatory CT logging for CAs, I'm with you
- I just don't think it's going to happen immediately. I think that's a
conversation that should be separate from the question of whether
encryption should be part of the domain validation process.

>> ... and HTTPS everywhere when the alternative is a
>> perfect-in-theory DV process where HTTPS is available only for
>> sites that can deploy all these things competently.
>
> If the site admins aren't competent they're going to get pwned, so
> why do I care if they're doing https instead of http? Or look at it
> from a different angle - if it's that hard for sites to do it
> correctly then [Mozilla? CAs? somebody] can come up with a checklist
> of what to look for in a hosting provider that does do it right. It
> seems like most everybody is moving to "the cloud" anyway, so
> requiring site admins to be competent doesn't seem all that onerous a
> requirement.

I'm not worried about incompetent admins that get owned, I'm worried
about admins taking a look at the domain validation process you're
suggesting, realizing that they now need to deploy DNSSEC or that they
might brick their domain if they lose their private key because they
suddenly can't get another certificate without having a valid
certificate, and then just figuring that sticking with HTTP actually
doesn't sound that bad.

(Not to be snarky, but this argument sounds a bit like "So what? Mozilla
can just solve web security for everyone, and then we can have safe CAs!")

> Where is the opposition to DNSSEC? I was going to say that I'm also
> lurking on the dns ops mailing list, but I don't think I can call
> what I'm doing on m.d.s.p now lurking :)
>
> Yes, DNSSEC is complicated & difficult to do right, but opposition
> to DNSSEC in general? I'm not seeing it & any CA that can't or won't
> do DNSSEC shouldn't be in the Mozilla root store.

I've found [1] to be a good summary of arguments against DNSSEC.

[1]: http://sockpuppet.org/blog/2015/01/15/against-dnssec/

Lee

Sep 11, 2016, 6:42:18 PM
to Nick Lamb, mozilla-dev-s...@lists.mozilla.org
On 9/11/16, Nick Lamb <tiala...@gmail.com> wrote:
> On Sunday, 11 September 2016 21:05:12 UTC+1, Lee wrote:
>> does dns hijacking or dns cache poisoning count as mitm?
>
> A careful CA validator does DNS only by making authoritative queries, so
> they're not subject to cache poisoning since they don't look at cached
> answers.

Would a not careful CA be flagged on their yearly audit?

> I think a successful DNS hijack against a CA validator would constitute a
> MITM except in the case where the attacker is straight up subverting the
> legitimate name owner's real systems. In /that/ case even DNSSEC doesn't
> necessarily help you, if they've subverted your systems they can give out
> DNS answers that check out as signed OK but say whatever they wish.
>
> In the former case DNSSEC would protect you, and I agree that where it has
> been deployed CA validators should check it, but in a world where there are
> still login HTML forms with no HTTPS behind them, how surprised are we
> supposed to be that people don't all have DNSSEC for their domains ?

Me personally? Not at all. I'm just asking if they _do_ have DNSSEC
for their domains is there a way to leverage that to get a cert via an
encrypted channel or at least do the domain validation via an
encrypted channel instead of using email or tcp port 80?

Regards,
Lee

Nick Lamb

Sep 11, 2016, 7:13:59 PM
to mozilla-dev-s...@lists.mozilla.org
On Sunday, 11 September 2016 23:42:18 UTC+1, Lee wrote:
> Me personally? Not at all. I'm just asking if they _do_ have DNSSEC
> for their domains is there a way to leverage that to get a cert via an
> encrypted channel or at least do the domain validation via an
> encrypted channel instead of using email or tcp port 80?

I don't remember what the situation was in the past. Certainly ballot 169 ("modern") DV explicitly permits DNS to be used directly to validate control.

ACME provides a challenge dns-01 in which the applicant provisions a DNS TXT record to prove they control the domain and are requesting the certificate. Let's Encrypt implements (an earlier draft of) this challenge today, and if you have DNSSEC it will perform correct DNSSEC verification. ACME itself is performed over HTTPS to a named server operated by the CA. So in this scenario all the steps are protected from a hypothetical Man-in-the-middle with ability to subvert most parts of the network but NOT systems directly controlled by the name owner, the registries, or the Certificate Authorities...
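
(For anyone unfamiliar with dns-01: the provisioned TXT value is just a
hash binding the CA's challenge token to the applicant's account key.
Roughly, per the ACME drafts - a sketch, so the exact encoding used by
Let's Encrypt's earlier draft may differ:)

import base64
import hashlib

def dns01_txt_value(token, account_key_thumbprint):
    # Key authorization: the CA-supplied token joined to the base64url
    # JWK thumbprint of the account key.
    key_authorization = token + "." + account_key_thumbprint
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    # Published as a TXT record at _acme-challenge.<domain>:
    # the base64url-encoded SHA-256 digest, padding stripped.
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")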

Lee

Sep 11, 2016, 8:16:28 PM
to Patrick Figel, mozilla-dev-s...@lists.mozilla.org
On 9/11/16, Patrick Figel <patf...@gmail.com> wrote:
> On 11/09/16 22:05, Lee wrote:
>>> In order to spoof a CA's domain validation request, an attacker
>>> would need to be in a position to MitM the connection between the
>>> CA and the targeted domain.
>>
>> does dns hijacking or dns cache poisoning count as mitm?
>
> I was mentioning this in order to demonstrate that opportunistic
> encryption (try HTTPS, if that fails, fall back to HTTP) does not help
> with this threat model. The specifics of how a MitM attack against the
> CA is being pulled off is not all that important.

ok - fair enough

>>> Option 2 is problematic because not all CAs log to CT at the
>>> moment.
>>
>> Why is that allowed?
>
> Because CT is relatively new. In fact, I don't think Mozilla is shipping
> a working CT implementation yet. Some might even question whether it's
> fair to ask CAs to implement CT logging when the majority of browser
> vendors haven't bothered yet (or have just recently begun to bother.)

Which sounds a bit like the argument against IPv6, except in this case
CT logging all by itself enables interested parties to audit CA
behavior in near realtime.

>>> Both options do nothing to solve the problem of a domain owner
>>> losing the private key of their certificate (for example due to a
>>> hack, data loss, or just a domain transfer).
>>>
>>> You might be thinking of an option 3 - just connect to port 443,
>>> see if the domain has a valid certificate, and use HTTPS if
>>> available. This sounds great in theory, but since the attacker
>>> would need to be able to MitM the connection in the first place in
>>> order to spoof the validation request, they could simply intercept
>>> this request and force validation on port 80.
>>>
>>> All in all I think this would do more harm than good. Adding
>>> complexity to the DV process means slower HTTPS adoption in
>>> general. I'd rather see a "good enough" DV process ...
>>
>> if it isn't obvious by now, I'd say that any process that doesn't
>> include continuous monitoring isn't "good enough"
>
> If you're arguing in favor of mandatory CT logging for CAs, I'm with you
> - I just don't think it's going to happen immediately.

I'm sure it won't, but what's your guess for the lead-in time? A year?
Just how much time should CAs be allowed to implement something like
this? ... assuming that it can be made a mandatory item.

> I think that's a
> conversation that should be separate from the question of whether
> encryption should be part of the domain validation process.

Fine w/ me.

>>> ... and HTTPS everywhere when the alternative is a
>>> perfect-in-theory DV process where HTTPS is available only for
>>> sites that can deploy all these things competently.
>>
>> If the site admins aren't competent they're going to get pwned, so
>> why do I care if they're doing https instead of http? Or look at it
>> from a different angle - if it's that hard for sites to do it
>> correctly then [Mozilla? CAs? somebody] can come up with a checklist
>> of what to look for in a hosting provider that does do it right. It
>> seems like most everybody is moving to "the cloud" anyway, so
>> requiring site admins to be competent doesn't seem all that onerous a
>> requirement.
>
> I'm not worried about incompetent admins that get owned, I'm worried
> about admins taking a look at the domain validation process you're
> suggesting,

To be clear - I'm not suggesting a domain validation process. I'm
_asking_ if there's a way to do it without using clear-text protocols.
If there isn't, or it's too complicated/error-prone/whatever then ok

> realizing that they now need to deploy DNSSEC

is DNSSEC that hard to do, or is it just that you don't agree it does
anything useful?

> or that they
> might brick their domain if they lose their private key because they
> suddenly can't get another certificate without having a valid
> certificate, and then just figuring that sticking with HTTP actually
> doesn't sound that bad.

I used to think http wasn't all that bad for most things, then I read about
http://www.informationweek.com/mobile/mobile-business/verizon-wireless-embroiled-in-tracking-controversy/d/d-id/1317044

so yeah, http isn't all that good.

> (Not to be snarky, but this argument sounds a bit like "So what? Mozilla
> can just solve web security for everyone, and then we can have safe CAs!")

I certainly think Mozilla can do a better job than they have, but
solving web security for everyone? Not gonna happen. But maybe
there's a better way to do domain validation.. What? dunno. What I am
sure of is that this list has a lot of smart people on it that know
crypto much better than I ever will, so I'm bringing up the question -
is there a better/safer way to do it? If no, well at least I asked.

My other issue is CT logging. I'm not holding my breath waiting for
CT, but I am hoping CT logging can be made mandatory for all CAs in a
year or 18 months. Is that too much to hope for?

>> Where is the opposition to DNSSEC? I was going to say that I'm also
>> lurking on the dns ops mailing list, but I don't think I can call
>> what I'm doing on m.d.s.p now lurking :)
>>
>> Yes, DNSSEC is complicated & difficult to do right, but opposition
>> to DNSSEC in general? I'm not seeing it & any CA that can't or won't
>> do DNSSEC shouldn't be in the Mozilla root store.
>
> I've found [1] to be a good summary of arguments against DNSSEC.
>
> [1]: http://sockpuppet.org/blog/2015/01/15/against-dnssec/

Interesting. It's going to take me more than a quick read-thru to
process & I don't want to take that much time before hitting "send"

Thanks,
Lee

Gervase Markham

Sep 12, 2016, 3:43:07 AM
to Lee, Nick Lamb
On 11/09/16 23:42, Lee wrote:
>> A careful CA validator does DNS only by making authoritative queries, so
>> they're not subject to cache poisoning since they don't look at cached
>> answers.
>
> Would a not careful CA be flagged on their yearly audit?

It might, but only if doing non-authoritative queries violated some
standard. As far as I can recall, even the updated validation section
does not require this. That might make a good amendment.

Gerv

Jakob Bohm

Sep 12, 2016, 1:32:19 PM
to mozilla-dev-s...@lists.mozilla.org
On 10/09/2016 14:45, Gervase Markham wrote:
> On 09/09/16 11:53, Jakob Bohm wrote:
>> As I read the Wiki description of WoSign issue L: Arbitrary High port
>> validation, the description notes a case of port 8080 validation as an
>> instance of this.
>>
>> If the BR and or CP/CPS indeed classify port 8080 as a valid web port
>> for domain control checking, that particular case probably shouldn't
>> count.
>
> We aren't counting particular incidents, just the facts of the case,
> which was that any high port was accepted, and that at least one cert
> was issued on a non-8080 port.
>

I obviously meant "count" as in "carry any weight in assessing the
trustworthiness of WoSign".

Our current evidence seems to be an unfortunate mix of actual issues
(such as the github.io certificates), and semi-irrelevant smear, which
means we will need to separate the chaff from the wheat before Mozilla
has a good basis for any decisions.

>> If instead WoSign (as I seem to recall) considers port 8080 as valid,
>> but the relevant formal documents do not, then that would be a separate
>> but related issue, which should get it's own letter on the Wiki page.
>
> As noted in the original write-up, at the time of the incident, the
> relevant formal documents did not specify exact port numbers, but
> Mozilla feels that the fact that ports over 1024 are unprivileged is
> basic security knowledge that any CA should have.
>

Note that the port above/below 1024 rule is mostly limited to Unix-like
systems; there are server platforms where listening on an arbitrary port
below 1024 is more or less unprotected (usually as a means of allowing
servers to run with fewer privileges as a security measure).

The standard/non-standard port distinction for web servers is much more
relevant, as is the distinction between URL paths that are more or less
likely to be controlled by persons other than the domain owner.

However, allowing an "arbitrary URL on an arbitrary high port" (chosen
by the applicant) is clearly not a good ownership test; on that I agree.

Jakob Bohm

Sep 12, 2016, 2:03:38 PM
to mozilla-dev-s...@lists.mozilla.org
Wouldn't this fall under the general auditable requirement of being
careful in their practices and procedures? For example, I don't think
there would be specific BRs covering whether they remember to lock the
door to the server room.

This would be very similar to how financial auditors do some checking
of whether the day-to-day accounting practices are sound in terms of
avoiding fraud.

Gervase Markham

Sep 13, 2016, 5:50:55 AM
to Jakob Bohm
Hi Jakob,

On 12/09/16 18:30, Jakob Bohm wrote:
> Our current evidence seems to be an unfortunate mix of actual issues
> (such as the github.io certificates), and semi-irrelevant smear, which
> means we will need to separate the chaff from the wheat before Mozilla
> has a good basis for any decisions.

If you mean the "evidence" in this newsgroup, your accusation might
carry some weight, although you will note that I have tried to reduce
the amount of "semi-irrelevant smear" by asking those writing such
things to stop.

If you mean the evidence listed at:
https://wiki.mozilla.org/CA:WoSign_Issues
then I reject that; I think everything on that list is a concern worth
investigating. It may turn out to be that some of the concerns have
reasonable explanations, in which case we should document those
explanations and move on. But that doesn't mean it was wrong to be
concerned in the first place.

Gerv

Gervase Markham

Sep 13, 2016, 5:51:35 AM
to Jakob Bohm
On 12/09/16 19:02, Jakob Bohm wrote:
> Wouldn't this fall under the general auditable requirement of being
> careful in their practices and procedures.

Ask an auditor, and they will tell you that "be careful" is not an
auditable requirement.

Gerv

Jakob Bohm

Sep 13, 2016, 6:25:19 AM
to mozilla-dev-s...@lists.mozilla.org
On 13/09/2016 11:50, Gervase Markham wrote:
> Hi Jakob,
>
> On 12/09/16 18:30, Jakob Bohm wrote:
>> Our current evidence seems to be an unfortunate mix of actual issues
>> (such as the github.io certificates), and semi-irrelevant smear, which
>> means we will need to separate the chaff from the wheat before Mozilla
>> has a good basis for any decisions.
>
> If you mean the "evidence" in this newsgroup, your accusation might
> carry some weight, although you will note that I have tried to reduce
> the amount of "semi-irrelevant smear" by asking those writing such
> things to stop.
>

Yes. Specifically, that someone posted a port-8080-validated
certificate as an alleged unreported instance of issue L.

Jakob Bohm

Sep 13, 2016, 6:28:03 AM
to mozilla-dev-s...@lists.mozilla.org
On 13/09/2016 11:50, Gervase Markham wrote:
I know from actual audited annual (fiscal) reports that those usually
contain a statement by the auditor regarding the sufficiency of the
accounting practices in the audited company.

Florian Weimer

Sep 17, 2016, 10:31:20 AM
to Nick Lamb, mozilla-dev-s...@lists.mozilla.org
* Nick Lamb:

> On Sunday, 11 September 2016 21:05:12 UTC+1, Lee wrote:
>> does dns hijacking or dns cache poisoning count as mitm?
>
> A careful CA validator does DNS only by making authoritative queries,
> so they're not subject to cache poisoning since they don't look at
> cached answers.

I'm not sure you can resolve all domains without some sort of DNS
cache, in the sense of never using data from one answer to satisfy more
than one query (including internally generated queries).

More reasonable would be to require that the resolver starts with a
cold cache (possibly preloaded with a copy of the root zone) and
performs DNSSEC validation starting with the IANA keys.

Jakob Bohm

Sep 19, 2016, 2:27:41 PM
to mozilla-dev-s...@lists.mozilla.org
On 17/09/2016 16:30, Florian Weimer wrote:
> * Nick Lamb:
>
>> On Sunday, 11 September 2016 21:05:12 UTC+1, Lee wrote:
>>> does dns hijacking or dns cache poisoning count as mitm?
>>
>> A careful CA validator does DNS only by making authoritative queries,
>> so they're not subject to cache poisoning since they don't look at
>> cached answers.
>
> I'm not sure if you can resolve all domains without some sort of DNS
> cache, in the sense that you never use data from one answer to satisfy
> more than one query (which can be internally generated).
>

Of course you can; it's just not what normal programs do (because for
normal programs, DNS caching is good).

> More reasonable would be to require that the resolver starts with a
> cold cache (possibly preloaded with a copy of the root zone) and
> performs DNSSEC validation starting with the IANA keys.
>

While DNSSEC validation should be done where present, not all
certificate requests will come from DNSSEC signed domains. After all,
if they did, DANE would soon be a substitute for DV certs.