
Incident Report – Certificates issued without proper domain validation


Wayne Thayer

Jan 10, 2017, 10:03:08 PM
to dev-secur...@lists.mozilla.org
Summary:
On Friday, January 6th, 2017, GoDaddy became aware of a bug affecting our domain validation processing system. The bug that caused the issue was fixed late Friday. At 10 PM PST on Monday, Jan 9th we completed our review to determine the scope of the problem, and identified 8850 certificates that were issued without proper domain validation as a result of the bug. The impacted certificates will be revoked by 10 PM PST on Tuesday, Jan 10th, and will also be logged to the Google Pilot CT log.
Detailed Description:
On Tuesday, Jan 3rd, 2017, one of our resellers (Microsoft) sent an email to n...@godaddy.com and two GoDaddy employees. Due to holiday vacations and the fact that the issue was not reported properly per our CPS, we did not become aware of the issue until one of the employees opened the email on Friday Jan 6th and promptly alerted management. The issue was originally reported to Microsoft by one of their own customers and was described as only affecting certificate requests when the DNS A record of the domain was set to 127.0.0.1. An investigation was initiated immediately and within a few hours we determined that the problem was broader in scope. The root cause of the problem was fixed via a code change at approximately 10 PM MST on Friday, Jan 6th.
On Saturday, January 7th, we determined that the bug was first introduced on July 29th, 2016 as part of a routine code change intended to improve our certificate issuance process. The bug is related to our use of practical demonstration of control to validate authority to receive a certificate for a given fully-qualified domain name. In the problematic case, we provide a random code to a customer and ask them to place it in a specific location on their website. Our system automatically checks for the presence of that code via an HTTP and/or HTTPS request to the website. If the code is found, the domain control check is completed successfully. Prior to the bug, the library used to query the website and check for the code was configured to return a failure if the HTTP status code was not 200 (success). A configuration change to the library caused it to return results even when the HTTP status code was not 200. Since many web servers are configured to include the URL of the request in the body of a 404 (not found) response, and the URL also contained the random code, any web server configured this way caused domain control verification to complete successfully.
We are currently unaware of any malicious exploitation of this bug to procure a certificate for a domain that was not authorized. The customer who discovered the bug revoked the certificate they obtained, and subsequent certificates issued as the result of requests used for testing by Microsoft and GoDaddy have been revoked. Further, any certificate requests made for domains we flag as high-risk were also subjected to manual review (rather than being issued purely based on an invalid domain authorization).
We have re-verified domain control on every certificate issued using this method of validation in the period from when the bug was introduced until it was fixed. A list of 8850 potentially unverified certificates (representing less than 2% of the total issued during the period) was compiled at 10 PM PST on Monday Jan 9th. As mentioned above, potentially impacted certificates will be revoked by 10 PM PST on Tuesday Jan 10th and logged to a Google CT log. Additional code changes were deployed on Monday Jan 9th and Tuesday 10th to prevent the re-issuance of certificates using cached and potentially unverified domain validation information. However, prior to identifying and shutting down this path, an additional 101 certificates were reissued using such cached and potentially unverified domain validation information, resulting in an overall total of 8951 certificates that were issued without proper domain validation as a result of the bug.
Next Steps:
While we are confident that we have completely resolved the problem, we are watching our system closely to ensure that no more certificates are issued without proper domain validation, and we will take immediate action and report any further issues if found. A full post-mortem review of this incident will occur and steps will be taken to prevent a recurrence, including the addition of automated tests designed to detect this type of scenario. If more information about the cause or impact of this incident becomes available, we will publish updates to this report.
Wayne Thayer
GoDaddy

Ryan Sleevi

Jan 10, 2017, 10:09:35 PM
to Wayne Thayer, dev-secur...@lists.mozilla.org
Wayne,

Thanks for sharing these details.

What's unclear is what steps GoDaddy has taken to remedy this.

For example:
1) Disabling domain control demonstrations through the use of a file on a server
2) Switching to /.well-known/pki-validation
3) Ensuring that the random value is not part of the HTTP[S] request

etc

Could you speak further to how GoDaddy has resolved this problem? My
hope is that it doesn't involve "Only look for 200 responses" =)

Nick Lamb

Jan 11, 2017, 4:46:12 AM
to mozilla-dev-s...@lists.mozilla.org
As Ryan said, thanks for informing m.d.s.policy about this issue. I am interested in the same general area as Ryan, but I will ask my questions separately; feel free to answer them together.

Has GoDaddy been following ACME https://datatracker.ietf.org/wg/acme/charter/ development, either with a view to eventually implementing ACME, or just to learn the same lessons about automating domain validation?

Perhaps the most surprising thing the ACME WG discovered was that, due to a common misconfiguration, customers sharing a bulk host can often answer HTTPS requests for other people's sites that haven't for whatever reason enabled SSL yet. GoDaddy's validation method as described would be vulnerable to this problem. Can you say what, if anything, GoDaddy does to avoid being tricked into issuing a certificate on this basis?

Gervase Markham

Jan 11, 2017, 4:57:17 AM
to mozilla-dev-s...@lists.mozilla.org
Hi Wayne,

As others have said, thanks for bringing this to our attention.

On 11/01/17 03:02, Wayne Thayer wrote:
> results even when the HTTP status code was not 200. Since many web
> servers are configured to include the URL of the request in the body
> of a 404 (not found) response, and the URL also contained the random
> code,

As you will know, the method being used by GoDaddy here corresponds
broadly to method 3.2.2.4.6 from ballot 169 - "Agreed-Upon Change to
Website". (Although this method is not currently in the Baseline
Requirements due to it being part of ballot 182 and having a related IPR
disclosure, at least one root store operator has suggested they are
going to require strict adherence to the methods listed in that ballot
by 1st March.)
https://cabforum.org/2016/08/05/ballot-169-revised-validation-requirements/

One of the sentences in 3.2.2.4.6 is the following:

"The entire Required Website Content MUST NOT appear in the request used
to retrieve the file or web page"

This sentence is there precisely because the problem which hit GoDaddy
was anticipated when the Validation WG was discussing the possible
problems with this validation method.

Has GoDaddy already, or is GoDaddy planning to, update its
implementation to conform to that requirement?

> We are currently unaware of
> any malicious exploitation of this bug to procure a certificate for a
> domain that was not authorized.

Does that mean "we have revalidated all the domains", or does it mean
"no-one has actively reported to us that someone else is using a
certificate for a domain name the reporter owns"?

> The customer who discovered the bug
> revoked the certificate they obtained, and subsequent certificates
> issued as the result of requests used for testing by Microsoft and
> GoDaddy have been revoked.

I would hope and assume that such testing was done using domains owned
by Microsoft and/or GoDaddy, or someone else whose permission you had
gained?

> authorization). We have re-verified domain control on every
> certificate issued using this method of validation in the period from
> when the bug was introduced until it was fixed.

How was that possible for all domains, as surely some domain owners will
have taken the necessary file down?

> A list of 8850
> potentially unverified certificates (representing less than 2% of the
> total issued during the period) was compiled at 10 PM PST on Monday
> Jan 9th.

How were you able to create that list? Do you store the HTTP status code
and content returned from the website, and did you just search for
non-200 codes? Or some other way?

Gerv

Patrick Figel

Jan 11, 2017, 10:36:46 AM
to dev-secur...@lists.mozilla.org
On 11/01/2017 04:08, Ryan Sleevi wrote:
> Could you speak further to how GoDaddy has resolved this problem? My
> hope is that it doesn't involve "Only look for 200 responses" =)

In case anyone is wondering why this is problematic: during the Ballot
169 review process, Peter Bowen ran a check against the top 10,000 Alexa
domains and noted that more than 400 sites returned an HTTP 200 response
for a request to
http://www.$DOMAIN/.well-known/pki-validation/4c079484040e32529577b6a5aade31c5af6fe0c7
[1]. A number of those included the URL in the response body, which
would presumably be good enough for GoDaddy's domain validation process
if they indeed only checked for an HTTP 200 response.

[1]: https://cabforum.org/pipermail/public/2016-April/007506.html

Paul Wouters

Jan 11, 2017, 10:56:08 AM
to dev-secur...@lists.mozilla.org
Are you saying that for an unknown amount of time (years?) someone could
have faked the domain validation check, and once it was publicly pointed
out so everyone could do this, it took one registrar 10 months to fix,
during which 8800 domains could have been falsely obtained and been used
in targeted attacks? Have other registrars made any statement on
whether they were or were not vulnerable to this attack?

Is there a way to find out if this has actually happened for any domain?
I would expect this would show up as "validated" certificates that were
logged in CT but that were never deployed on the real public TLS servers.
Is anyone monitoring that? I assume that the "big players" who do
self-monitoring were not affected? *crosses fingers*

Paul

Patrick Figel

Jan 11, 2017, 11:42:31 AM
to Paul Wouters, dev-secur...@lists.mozilla.org
On 11/01/2017 16:55, Paul Wouters wrote:
> Are you saying that for an unknown amount of time (years?) someone
> could have faked the domain validation check, and once it was
> publicly pointed out so everyone could do this, it took one
> registrar 10 months to fix, during which 8800 domains could have been
> falsely obtained and been used in targeted attacks? Have other
> registrars made any statement on whether they were or were not
> vulnerable to this attack?

The "Agreed-Upon Website Change" domain validation method was not
something that the Baseline Requirements specified in any way prior to
Ballot 169. The BRs basically had a section saying "if you want to use
any other method that you think is as good as the ones specified here,
go for it". (Actually, that section still exists IIRC, thanks to the
patent-related weirdness currently going on in the CA/B Forum.)

There wasn't really a standard way to do this, so some CAs (like
GoDaddy) might have implemented something resembling the ACME http-01
challenge type, where part of the request URL is a random string (and
which suffers from this vulnerability if you only look for that random
string in the response body), while others did something like WoSign,
where the random string has to be served at a static URL (something like
example.com/example.com.txt) or where you have to add a meta tag to your
index page. These other methods would not have suffered from this
particular vulnerability.

It's hard to say how many CAs are affected by this. It's not something
the CA needs to document in their CP(S), so the only way to answer that
question would be to test the domain validation of every
publicly-trusted CA, or ask them and hope the answer is accurate. CAs do
need to keep audit logs for certificate requests and the corresponding
domain validations, so perhaps this is something that Mozilla could add
as a question in their next CA communication?

I'd agree that any CA keeping track of the CA/B Forum mailing lists
should've caught this a long time ago. It was brought up at least twice
last year.

> Is there a way to find out if this has actually happened for any
> domain? I would expect this would show up as "validated"
> certificates that were logged in CT but that were never deployed on
> the real public TLS servers. Is anyone monitoring that? I assume that
> for the "big players" who do self-monitoring, were not affected?
> *crosses fingers*

You could probably make an educated guess for some of the domains (once
they're published) by using censys to see which of those certificates
were observed in the wild during one of their internet scans. It would
not give you the full picture since any number of those certificates
could've been deployed on non-public servers, or on TLS servers that
censys does not scan for (e.g. SMTP/IMAP/... - not sure if they scan
those). That's why a global monitor for something like this would
probably not work.

I'd imagine the big players would've been caught by the manual review
process flagging high-risk domains.

Nick Lamb

Jan 11, 2017, 1:06:55 PM
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, 11 January 2017 16:42:31 UTC, Patrick Figel wrote:
> There wasn't really a standard way to do this, so some CAs (like
> GoDaddy) might have implemented something resembling the ACME http-01
> challenge type, where part of the request URL is a random string (and
> which suffers from this vulnerability if you only look for that random
> string in the response body)

I had to read this twice to understand what you were getting at here.

For those who haven't (unlike Patrick) sat down and read the ACME specification, ACME http-01 won't get tripped here because the checked content of the URL is very much not the random string (it's a JWS signature over a data structure containing that random string, thereby proving it was made by whoever the ACME server is talking to). But yes, doing something that _looks_ superficially like the ACME style of validation without such subtlety will trip you up.
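[Editor's note: the distinction Nick draws can be sketched as follows. This uses the key-authorization form later standardized in RFC 8555 rather than a literal JWS, as the 2017 drafts differed in detail; the token and JWK below are made-up placeholders.]

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    """Base64url without padding, as ACME uses."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ"              # random challenge token
account_key_jwk = b'{"e":"AQAB","kty":"RSA","n":"placeholder"}'  # canonical JWK (illustrative)
thumbprint = b64url(hashlib.sha256(account_key_jwk).digest())

# The file at /.well-known/acme-challenge/<token> must contain this value,
# which binds the response to the requester's account key:
key_authorization = f"{token}.{thumbprint}"

# An error page that merely echoes the request URL can contain the token,
# but never the account-key thumbprint, so URL echoing cannot pass the check.
echoed_404 = f"<h1>404</h1><p>/.well-known/acme-challenge/{token} not found</p>"
assert token in echoed_404
assert key_authorization not in echoed_404
```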

> It would not give you the full picture since any number of those certificates
> could've been deployed on non-public servers, or on TLS servers that
> censys does not scan for

In this very particular case, where the affected validation was specific to web servers, it seems extremely likely that almost all of the legitimate certificates (which may be, and we hope is, all of them) were subsequently put into use on a web server.

Why go to the bother of setting up a web server on say, smtp.example.com, only to get yourself a certificate, and then turn off the web server and use the certificate for SMTP? It's not impossible, but it would be very much the exception.

Patrick Figel

Jan 11, 2017, 1:17:51 PM
to Nick Lamb, mozilla-dev-s...@lists.mozilla.org
On 11/01/2017 19:06, Nick Lamb wrote:
> For those who haven't (unlike Patrick) sat down and read the ACME
> specification, ACME http-01 won't get tripped here because the
> checked content of the URL is very much not the random string (it's a
> JWS signature over a data structure containing that random string,
> thereby proving it was made by whoever the ACME server is talking
> to). But yes, doing something that _looks_ superficially like the
> ACME style of validation without such subtlety will trip you up.

Thanks, that's a better way of phrasing what I was trying to say. ;-)

> In this very particular case, where the affected validation was
> specific to web servers, it seems extremely likely that almost all of
> the legitimate certificates (which may be, and we hope is, all of
> them) were subsequently put into use on a web server.
>
> Why go to the bother of setting up a web server on say,
> smtp.example.com, only to get yourself a certificate, and then turn
> off the web server and use the certificate for SMTP? It's not
> impossible, but it would be very much the exception.

I don't know this specifically for GoDaddy, but many commercial CAs I've
dealt with in the past typically only validate the "registered domain"
portion (e.g. example.com) of the FQDN and then give you certificates
for any subdomain (e.g. smtp.example.com) under that domain. I think the
approach used by Let's Encrypt, where each FQDN has to be validated
individually, is not all that common.



Ryan Sleevi

Jan 11, 2017, 1:31:32 PM
to Nick Lamb, mozilla-dev-s...@lists.mozilla.org
On Wed, Jan 11, 2017 at 10:06 AM, Nick Lamb <tiala...@gmail.com> wrote:
> Why go to the bother of setting up a web server on say, smtp.example.com, only to get yourself a certificate, and then turn off the web server and use the certificate for SMTP? It's not impossible, but it would be very much the exception.

Because you're not required to set up the webserver for
smtp.example.com. It's sufficient to set up the webserver for
example.com to authorize the name, by creatively interpreting
Method 7 (prior to Ballot 169) and applying the logic from Method 4 to
suggest it's OK to prune the domain (despite Method 6 not allowing
this).

I'm not saying they'd be right in arguing so, but they wouldn't be the
only CA who applied such an interpretation.

Jakob Bohm

Jan 11, 2017, 1:46:53 PM
to mozilla-dev-s...@lists.mozilla.org
For your information, here are the ones I have encountered:

AlphaSSL (Globalsign sub) and one other (a Symantec sub, as far as I
recall) verify control of the corresponding hostmaster e-mail address
for the second level domain (hostm...@example.com).

Google (for non-certificate purposes) used to verify a URL that just
had to return a string saying "google-site-verification: URL", where
URL was the file-name part of the URL; this may or may not have been
foolable. This was done per exact domain.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Nick Lamb

Jan 11, 2017, 3:18:39 PM
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, 11 January 2017 18:31:32 UTC, Ryan Sleevi wrote:

> Because you're not required to setup the webserver for
> smtp.example.com.

Ah yes, silly me

> I'm not saying they'd be right in arguing so

I suppose that from a husbanding of resources point of view it makes sense to wait and see what happens to the remnants of ballot 169 before Mozilla jumps up and down about this.

Yuhong Bao

Jan 11, 2017, 5:12:33 PM
to Wayne Thayer, dev-secur...@lists.mozilla.org, Ryan Sleevi
I wonder if nest.com is considered high-risk now. They recently switched from GoDaddy to Google Internet Authority.

Wayne Thayer

Jan 11, 2017, 7:27:53 PM
to mozilla-dev-s...@lists.mozilla.org
Responding to Ryan, Nick and Gerv:

> What's unclear is what steps GoDaddy has taken to remedy this.
>
> For example:
> 1) Disabling domain control demonstrations through the use of a file on a
> server
> 2) Switching to /.well-known/pki-validation
> 3) Ensuring that the random value is not part of the HTTP[S] request
>
> etc
>
> Could you speak further to how GoDaddy has resolved this problem? My
> hope is that it doesn't involve "Only look for 200 responses" =)

Our process for verifying domain control via a change to the website worked by generating a random alphanumeric code. The code was then placed in a file in the root directory of the website. The structure of the filename was <code>.html. Our system queried that URL and looked for the code in the response.
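[Editor's note: the flow Wayne describes might be sketched like this, with hypothetical function and variable names, not GoDaddy's code.]

```python
import secrets

def new_validation_challenge(domain: str) -> tuple[str, str]:
    """Generate a random code and the URL where the customer must place it."""
    code = secrets.token_hex(16)            # random alphanumeric code
    url = f"http://{domain}/{code}.html"    # file named <code>.html in the web root
    return code, url

code, url = new_validation_challenge("example.com")
# The weakness Ryan points out: the code is embedded in the request URL
# itself, so any server that echoes the URL into an error page will
# appear to "serve" the code.
assert code in url
assert url.endswith(".html")
```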

Our initial response as reported yesterday was to fix the bug introduced in July. Based on internal discussions and comments here, as of 12 midnight PST last night (1/11) we stopped using this method of file-based domain control validation.

> Has GoDaddy been following ACME
> https://datatracker.ietf.org/wg/acme/charter/ development, either with a
> view to eventually implementing ACME, or just to learn the same lessons
> about automating domain validation ?

We are aware of the work being done on ACME. We’ll continue to watch its development and evolution. Our current focus is on implementing the new domain validation processes published by the CAB Forum.

> Perhaps the most surprising thing the ACME WG discovered was that due to
> a common misconfiguration customers sharing a bulk host can often answer
> HTTPS requests for other people's sites that haven't for whatever reason
> enabled SSL yet. GoDaddy's validation method as described would be
> vulnerable to this problem. Can you say what, if anything, GoDaddy does to
> avoid being tricked into issuing a certificate on this basis ?

Here is what we were doing to prevent this: When performing the domain control check over HTTPS, we validate the certificate – including verifying that the domain name of the site matches one in the cert and that the cert chains to a root in the Java root store – and the check fails if the certificate does not validate.
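[Editor's note: this safeguard corresponds to the default behaviour of most modern TLS libraries. GoDaddy's stated check ran against the Java root store; as an illustration only, Python's ssl module enforces the same two properties by default.]

```python
import ssl

# A default-configured SSLContext already enforces both safeguards
# described above: the peer's chain must verify against trusted roots,
# and the peer certificate must match the hostname passed as
# server_hostname when the socket is wrapped.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED   # chain must verify
assert ctx.check_hostname is True             # name must match the cert

# Usage sketch: ctx.wrap_socket(sock, server_hostname="example.com")
# raises ssl.SSLCertVerificationError if either check fails.
```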

> As you will know, the method being used by GoDaddy here corresponds
> broadly to method 3.2.2.4.6 from ballot 169 - "Agreed-Upon Change to
> Website". (Although this method is not currently in the Baseline
> Requirements due to it being part of ballot 182 and having a related IPR
> disclosure, at least one root store operator has suggested they are going to
> require strict adherence to the methods listed in that ballot by 1st March.)
> https://cabforum.org/2016/08/05/ballot-169-revised-validation-
> requirements/
>
> One of the sentences in 3.2.2.4.6 is the following:
>
> "The entire Required Website Content MUST NOT appear in the request
> used to retrieve the file or web page"
>
> This sentence is there precisely because the problem which hit GoDaddy was
> anticipated when the Validation WG was discussing the possible problems
> with this validation method.
>
> Has GoDaddy already, or is GoDaddy planning to, update its implementation
> to conform to that requirement?

GoDaddy was on track to implement the new requirements prior to March 1, and we hope to be able to implement them even sooner than that date.

> > We are currently unaware of
> > any malicious exploitation of this bug to procure a certificate for a
> > domain that was not authorized.
>
> Does that mean "we have revalidated all the domains", or does it mean "no-
> one has actively reported to us that someone else is using a certificate for a
> domain name the reporter owns"?

As soon as we learned of this issue, we went through every certificate that was validated with the HTML method utilized during this period and attempted to re-verify it. If it could not be immediately verified, we revoked the certificate.

> > The customer who discovered the bug
> > revoked the certificate they obtained, and subsequent certificates
> > issued as the result of requests used for testing by Microsoft and
> > GoDaddy have been revoked.
>
> I would hope and assume that such testing was done using domains owned
> by Microsoft and/or GoDaddy, or someone else whose permission you had
> gained?

Yes, testing was done using domains registered by Microsoft and GoDaddy employees.

> > authorization). We have re-verified domain control on every
> > certificate issued using this method of validation in the period from
> > when the bug was introduced until it was fixed.
>
> How was that possible for all domains, as surely some domain owners will
> have taken the necessary file down?

When we learned of this issue, we re-validated every affected certificate. If we were unable to properly validate, we revoked the certificate. That is how we got the total of 8,951 revoked certificates.

> > A list of 8850
> > potentially unverified certificates (representing less than 2% of the
> > total issued during the period) was compiled at 10 PM PST on Monday
> > Jan 9th.
>
> How were you able to create that list? Do you store the HTTP status code and
> content returned from the website, and just searched for non-200 codes? Or
> some other way?

As soon as we discovered the bug, we ran a report to identify every certificate that didn’t fail the domain validation check during the period the bug was active. We then started scanning websites to see which ones were able to re-pass the proper validation check. If they passed, we removed the certificate from the list. If we were unable to revalidate the certificate, we revoked it. If there was any question as to whether the certificate was properly verified, we revoked it.
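[Editor's note: the triage described above, re-scanning each affected certificate's domain and revoking anything that fails or is doubtful, can be sketched as follows; the function and names are hypothetical.]

```python
def triage(certs, revalidates):
    """Keep certificates that re-pass proper validation; revoke the rest,
    erring on the side of revocation for anything doubtful."""
    keep, revoke = [], []
    for cert in certs:
        (keep if revalidates(cert) else revoke).append(cert)
    return keep, revoke

# Toy run: 3 of 5 certificates re-pass validation, 2 are revoked.
certs = ["a.example", "b.example", "c.example", "d.example", "e.example"]
ok = {"a.example", "c.example", "e.example"}
keep, revoke = triage(certs, lambda c: c in ok)
assert keep == ["a.example", "c.example", "e.example"]
assert revoke == ["b.example", "d.example"]
```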

Richard Wang

Jan 11, 2017, 8:03:26 PM
to Yuhong Bao, Wayne Thayer, dev-secur...@lists.mozilla.org, Ryan Sleevi
The nest.com certificate subject is:
CN = www.nest.com
O = Google Inc
L = Mountain View
S = California
C = US

This means this website is owned by Google Inc., right?


Best Regards,

Richard


Ryan Sleevi

unread,
Jan 11, 2017, 8:22:26 PM1/11/17
to Wayne Thayer, mozilla-dev-s...@lists.mozilla.org
On Wed, Jan 11, 2017 at 4:27 PM, Wayne Thayer <wth...@godaddy.com> wrote:
> Our process for verifying domain control via a change to the website worked by generating a random alphanumeric code. The code was then placed in a file in the root directory of the website. The structure of the filename is <code>.html. Our system queries for that URL and looks for the code in the response.
>
> Our initial response as reported yesterday was to fix the bug introduced in July. Based on internal discussions and comments here, as of 12 midnight PST last night (1/11) we stopped using this method of file based domain control validation.
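The file-based check Wayne describes, and the status-code bug reported in the incident summary, can be sketched as follows (purely an illustration based on this thread, not GoDaddy's actual code; the token format, URL layout, and `ignore_status` flag are assumptions):

```python
import secrets
import urllib.error
import urllib.request


def make_token() -> str:
    # Random alphanumeric code handed to the customer, to be placed at
    # http://<domain>/<token>.html
    return secrets.token_hex(16)


def token_found(status: int, body: str, token: str,
                ignore_status: bool = False) -> bool:
    """Decide whether a fetched page proves domain control.

    With ignore_status=False (the original, correct behaviour), any
    non-200 response fails the check.  With ignore_status=True (the
    bug), even a 404 body is searched -- and because many servers echo
    the requested URL, which contains the token, into their 404 page,
    the check passes without the customer doing anything.
    """
    if status != 200 and not ignore_status:
        return False
    return token in body


def check_domain_control(domain: str, token: str,
                         ignore_status: bool = False) -> bool:
    """Fetch http://<domain>/<token>.html and apply token_found()."""
    url = f"http://{domain}/{token}.html"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return token_found(resp.status,
                               resp.read().decode("utf-8", "replace"),
                               token, ignore_status)
    except urllib.error.HTTPError as err:
        # 4xx/5xx responses still carry a body, which the buggy
        # configuration went on to search.
        return token_found(err.code,
                           err.read().decode("utf-8", "replace"),
                           token, ignore_status)
    except OSError:
        return False
```

The decisive detail is that the secret appears in the request URL itself, so any server that reflects the URL into an error page defeats the check once status codes are ignored.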

Thanks Wayne. I realize that was probably a hard decision to make, but
it does sound like the right one, at least for the time being. I
definitely appreciate the engagement and reporting here, as it shows
the benefit of the public conversation - namely, being able to gather
additional information and feedback and make sure that the issue is
resolved.

Yuhong Bao

unread,
Jan 11, 2017, 9:08:46 PM1/11/17
to Richard Wang, Wayne Thayer, dev-secur...@lists.mozilla.org, Ryan Sleevi
That is what the current certificate by Google Internet Authority says.
What I am referring to is that before Google bought Nest they used GoDaddy as the CA.
________________________________________
From: Richard Wang <ric...@wosign.com>
Sent: Wednesday, January 11, 2017 5:01:08 PM
To: Yuhong Bao; Wayne Thayer; dev-secur...@lists.mozilla.org; Ryan Sleevi
Subject: RE: Incident Report – Certificates issued without proper domain validation

Ryan Sleevi

unread,
Jan 11, 2017, 9:42:26 PM1/11/17
to Yuhong Bao, Ryan Sleevi, Richard Wang, dev-secur...@lists.mozilla.org, Wayne Thayer
Hi Yuhong,

Perhaps it would be best if you created a separate thread for your question -
it's not really clear how it relates to the topic at hand.

Yuhong Bao

unread,
Jan 11, 2017, 9:53:17 PM1/11/17
to ry...@sleevi.com, Richard Wang, dev-secur...@lists.mozilla.org, Wayne Thayer
In this case, Nest's 404 page happens not to include the original URL in the HTML so they are not affected, but you see what I mean now.
________________________________________
From: Ryan Sleevi <ry...@sleevi.com>
Sent: Wednesday, January 11, 2017 6:41:46 PM
To: Yuhong Bao
Cc: Richard Wang; Wayne Thayer; dev-secur...@lists.mozilla.org; Ryan Sleevi
Subject: Re: Incident Report – Certificates issued without proper domain validation

Gervase Markham

unread,
Jan 12, 2017, 5:07:49 AM1/12/17
to Wayne Thayer
Hi Wayne,

Thanks for these prompt and detailed responses.

On 12/01/17 00:27, Wayne Thayer wrote:
> Our initial response as reported yesterday was to fix the bug
> introduced in July. Based on internal discussions and comments here,
> as of 12 midnight PST last night (1/11) we stopped using this method
> of file based domain control validation.

That seems like an excellent idea, at least until you can alter the
system to make it so that the random value is not part of the URL requested.

> As soon as we learned of this issue, we went through every
> certificate that was validated with the HTML method utilized during
> this period and attempted to verify. If it could not be immediately
> verified, we revoked the certificate.

That seems like an excellent process.

> When we learned of this issue, we re-validated every affected
> certificate. If we were unable to properly validate, we revoked the
> certificate. That is how we got the total of 8,951 revoked
> certificates.

Are you able to say how many certificates were successfully revalidated?

> As soon as we discovered the bug, we ran a report to identify every
> certificate that didn’t fail the domain validation check during the
> period the bug was active. We then started scanning websites to see
> which ones were able to re-pass the proper validation check. If they
> passed, we removed the certificate from the list. If we were unable
> to revalidate the certificate, we revoked it. If there was any
> question if the certificate was properly verified, we revoked it.

So you re-validated pretty much everything? Wow. That must be a lot of
sites.

Not a requirement or a command, but it may be wise to improve your
logging, because if you had stored the website's response and status
code verbatim, you would not have needed to revalidate as many
certificates (because you could have skipped those that responded "200"
first time), and may have been able to revoke far fewer.

Gerv

Wayne Thayer

unread,
Jan 12, 2017, 6:41:41 PM1/12/17
to mozilla-dev-s...@lists.mozilla.org
> From: Gervase Markham [mailto:ge...@mozilla.org]
> Sent: Thursday, January 12, 2017 3:07 AM
> To: Wayne Thayer <wth...@godaddy.com>; mozilla-dev-security-
> pol...@lists.mozilla.org
> Subject: Re: Incident Report – Certificates issued without proper domain
> validation
>
> Hi Wayne,
>
> Thanks for these prompt and detailed responses.
>
> On 12/01/17 00:27, Wayne Thayer wrote:
> > Our initial response as reported yesterday was to fix the bug
> > introduced in July. Based on internal discussions and comments here,
> > as of 12 midnight PST last night (1/11) we stopped using this method
> > of file based domain control validation.
>
> That seems like an excellent idea, at least until you can alter the system to
> make it so that the random value is not part of the URL requested.
>
> > As soon as we learned of this issue, we went through every certificate
> > that was validated with the HTML method utilized during this period
> > and attempted to verify. If it could not be immediately verified, we
> > revoked the certificate.
>
> That seems like an excellent process.
>
> > When we learned of this issue, we re-validated every affected
> > certificate. If we were unable to properly validate, we revoked the
> > certificate. That is how we got the total of 8,951 revoked
> > certificates.
>
> Are you able to say how many certificates were successfully revalidated?
>
Approximately 7500. In addition, as of earlier today, new certificates that cover over 50% of the CNs in the set of revoked certs have successfully gone through our domain validation process.
>
> > As soon as we discovered the bug, we ran a report to identify every
> > certificate that didn’t fail the domain validation check during the
> > period the bug was active. We then started scanning websites to see
> > which ones were able to re-pass the proper validation check. If they
> > passed, we removed the certificate from the list. If we were unable to
> > revalidate the certificate, we revoked it. If there was any question
> > if the certificate was properly verified, we revoked it.
>
> So you re-validated pretty much everything? Wow. That must be a lot of
> sites.
>
> Not a requirement or a command, but it may be wise to improve your
> logging, because if you had stored the website's response and status code
> verbatim, you would not have needed to revalidate as many certificates
> (because you could have skipped those that responded "200"
> first time), and may have been able to revoke far fewer.
>
Clearly this is good advice, thank you.
>
> Gerv

Itzhak Daniel

unread,
Jan 12, 2017, 7:38:47 PM1/12/17
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, January 11, 2017 at 5:03:08 AM UTC+2, Wayne Thayer wrote:
> ... and will also be logged to the Google Pilot CT log.

Why not post _ALL_ certificates issued via that method to the CT log?

montel....@gmail.com

unread,
Jan 19, 2017, 4:04:16 AM1/19/17
to mozilla-dev-s...@lists.mozilla.org
On Thursday, January 12, 2017 at 7:38:47 PM UTC-5, Itzhak Daniel wrote:
> Why not posting _ALL_ certificates issues via that method to CT log?

We had to nag and whine for a year to get IXSystems and FreeNAS folks to finally, begrudgingly use TLS (for download of ISOs and SHA256 checksums, no less!). The 'Volunteers' and staff deleted my posts, accused me of trolling, and stated that the CAs' system was bunk or a laughing stock. Though not a committer or security guru, I submit that:

If a CA refuses to take advantage of Google's <i>Certificate Transparency Project</i> or otherwise public log per RFC 6962, then Mozilla MUST shun them!

I mean, who dares disagree? Surely this is a non-partisan issue for Mozilla devs AND the majority of Firefox users? Let's keep to the topic of GoDaddy's second insufficiency, though it's not alone on the consensus naughty-list. I assume some relevant browser devs were shown proof of what happened in detail? Can they really complain that their spaghetti code is that proprietary? It surely is not valuable now as a work product. Just sign NDAs if they won't otherwise bother. The 'lapses' WILL keep getting more convoluted and ridiculous if Mozilla, Google et al. don't finally draw the line.

PS: FreeNAS is still using GoDaddy, even though they have other valid certificates per:
https://www.google.com/transparencyreport/https/ct/
...somebody has to lead by example and soon!

Jakob Bohm

unread,
Jan 19, 2017, 3:20:24 PM1/19/17
to mozilla-dev-s...@lists.mozilla.org
On 19/01/2017 01:33, montel....@gmail.com wrote:
> On Thursday, January 12, 2017 at 7:38:47 PM UTC-5, Itzhak Daniel wrote:
>> Why not posting _ALL_ certificates issues via that method to CT log?
>
> We had to nag and whine for a year to get IXSystems and FreeNAS folks to finally, begrudgingly use TLS (for Download of ISOs and SHA256 no less!). The 'Volunteers' and staff deleted my posts, accused me of trolling and stated that the CAs' system was something like bunk or a laughing stock. Though not a commiter or security guru, I submit that:
>
> If a CA refuses to take advantage of Google's <i>Certificate Transparency Project</i> or otherwise public log per RFC 6962, then Mozilla MUST shun them!
>

Google's CT initiative in its current form has serious privacy problems
for genuine certificate holders. I applaud any well-run CA that stands
up to this attack on the Internet at large.

> I mean who dares disagree? Surely this is a non-partisan issue with Mozilla Devs AND majority of Firefox Users? Let's keep on topic of GoDaddy's second insufficiency, though it's not alone on the consensus naughty-list. I assume some relevant browser Devs were shown proof of what happened in detail? Can they complain their spaghetti code is that proprietary, really. It surely is not valuable now as a work product. Just sign NDAs if they won't the bother. The 'lapses' WILL keep getting more convoluted and ridiculous if Mozilla, Google et al. don't finally draw the line.
>

I have no reason to believe Mozilla employees have any relevant GoDaddy
information not posted right here on this newsgroup and the associated
public web pages, bug trackers etc.

This newsgroup is *the* place where Mozilla finds out these things.
You and I are essentially standing inside the room where all this is
happening, seeing and hearing almost everything that goes on, and even
getting to contribute our opinions.


> PS: FreeNAS is still using GoDadddy, even though they have other valid certificates per:
> https://www.google.com/transparencyreport/https/ct/

Not at all relevant to this newsgroup.

> ...somebody has to lead by example and soon!
>

Hopefully not you.

Nick Lamb

unread,
Jan 19, 2017, 6:35:41 PM1/19/17
to mozilla-dev-s...@lists.mozilla.org
On Thursday, 19 January 2017 20:20:24 UTC, Jakob Bohm wrote:
> Google's CT initiative in its current form has serious privacy problems
> for genuine certificate holders. I applaud any well-run CA that stands
> up to this attack on the Internet at large.

I notice that you have not specifically identified which Certificate Authorities you believe are "well-run"; perhaps your argument would have more force if you could name some market leaders in that category.

As a Relying Party for the Web PKI I think Google's initiative makes a sensible trade off, you can't have privacy while also delivering oversight. The public CAs are clearly in need of oversight. This did not happen in a vacuum but as a consequence of trusted Certificate Authorities exhibiting incompetence and greed over many years.

Jakob Bohm

unread,
Jan 19, 2017, 9:05:22 PM1/19/17
to mozilla-dev-s...@lists.mozilla.org
On 20/01/2017 00:35, Nick Lamb wrote:
> On Thursday, 19 January 2017 20:20:24 UTC, Jakob Bohm wrote:
>> Google's CT initiative in its current form has serious privacy problems
>> for genuine certificate holders. I applaud any well-run CA that stands
>> up to this attack on the Internet at large.
>
> I notice that you have not specifically identified which Certificate Authorities you believe are "well-run", perhaps your argument would have more force if you could name some market leaders in that category.
>

Presumably most that haven't been distrusted by Mozilla or otherwise
publicly shamed for massive misissuance.

> As a Relying Party for the Web PKI I think Google's initiative makes a sensible trade off, you can't have privacy while also delivering oversight. The public CAs are clearly in need of oversight. This did not happen in a vacuum but as a consequence of trusted Certificate Authorities exhibiting incompetence and greed over many years.
>

As both a relying party and a certificate holder, I see no reason for
public listing of most of the details in the CT logs, and I do take
specific measures to not get public certificates for a number of things
(such as my e-mail addresses) that I don't want listed in Google
searches or attacked by spammers.

So far, I have not seen any good uses for CT logging stuff such as:

- Full domain name (below public suffix + 1) for things like
  employee/contractor portals.
- Full domain name (below public suffix + 1) for alpha / beta tests,
staging servers etc.
- Full domain name (below public suffix + 1) for CDN / cluster names,
such as r2---sn-op5oxu-j2il.googlevideo.com
- E-mail addresses other than the RFC2142 special ones.
- City and street address.
- Telephone number.
- Citizens ID/Social security number/Company registration ID.

What is really needed for most non-malicious CT uses are the relevant
2nd/3rd level domain (1 level below public suffix), country, state and
organization names (or CN if not an internet name and no O name part),
slow one way hash of full name entries (e.g.
SHA-512**65536("some...@gmail.com"),
full issuer details, cryptographic algorithm and strength, plus serial
number and technical details such as EKUs and other non-special cased
items.
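The "slow one way hash" suggested above, following the SHA-512**65536 notation, might be sketched as iterated SHA-512 (an illustration of the idea only; in practice a memory-hard KDF such as scrypt would be the more usual choice for slowing bulk enumeration):

```python
import hashlib


def slow_hash(entry: str, rounds: int = 65536) -> str:
    """Iterate SHA-512 `rounds` times over an entry such as an e-mail
    address.

    A monitor that already knows the plaintext can recompute the digest
    and match it against redacted CT entries, while a spammer trying to
    enumerate all addresses pays the cost of `rounds` hashes per guess.
    """
    digest = entry.encode("utf-8")
    for _ in range(rounds):
        digest = hashlib.sha512(digest).digest()
    return digest.hex()
```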

For example, to check if someone issued a fraudulent certificate for
any domain or address under google.com, Google Inc could check the list
of redacted CT entries for *@*.google.com, then compare it against an
in-house non-public database of such certificates authorized by their
internal management procedures.

To check for certificates issued to non-existent / suspect domains such
as example.com and/or test[1-9].com (recent Symantec related post in
this group), this would still be fully visible too. So would SHA-1
certificates issued in 2016, duplicate serial numbers, etc. Someone
getting a misissued wildcard cert for a semi-public suffix such as
"sf.net" or "blogblog.com" would also show up.

For a service such as gmail.com, the information suggested above would
allow someone knowing a specific e-mail address such as
"some...@gmail.com" to check if a certificate was misissued for that
address, but would not provide an easy way for a third party (such as a
spammer) to extract a list of all @gmail.com user names that happen to
have S/MIME certificates (Of course Google has the original list of
their users already, but no one else should).


Enjoy

kane...@gmail.com

unread,
Jan 24, 2017, 4:45:01 AM1/24/17
to mozilla-dev-s...@lists.mozilla.org
On Thursday, January 19, 2017 at 6:05:22 PM UTC-8, Jakob Bohm wrote:
> On 20/01/2017 00:35, Nick Lamb wrote:
> > On Thursday, 19 January 2017 20:20:24 UTC, Jakob Bohm wrote:
> >> Google's CT initiative in its current form has serious privacy problems
> >> for genuine certificate holders. I applaud any well-run CA that stands
> >> up to this attack on the Internet at large.
> >
> > I notice that you have not specifically identified which Certificate Authorities you believe are "well-run", perhaps your argument would have more force if you could name some market leaders in that category.
> >
>
> Presumably most that haven't been distrusted by Mozilla or otherwise
> publicly shamed for massive misissuance.
You presume a lot, I fear. "No disasters yet" is not proof of disaster prevention.

Apologies for the late reply and this diversion from the thread, but I felt I should respond to some of this.

> > As a Relying Party for the Web PKI I think Google's initiative makes a sensible trade off, you can't have privacy while also delivering oversight. The public CAs are clearly in need of oversight. This did not happen in a vacuum but as a consequence of trusted Certificate Authorities exhibiting incompetence and greed over many years.
> >
>
> As both a relying party and a certificate holder, I see no reason for
> public listing of most of the details in the CT logs, and I do take
> specific measures to not get public certificates for a number of things
> (such as my e-mail addresses) that I don't want listed in Google
> searches or attacked by spammers.
>
> So far, I have not seen any good uses for CT logging stuff such as:
>
> - (list cut)

I don't really see the point of putting a Citizen ID number in a website certificate, either. If you don't put it in the certificate, it doesn't need to be logged.

Remember that CT as specified is only for WebPKI website validation certificates, not S/MIME certificates (though I suspect you could submit them).

> What is really needed for most non-malicious CT uses are the relevant
> 2nd/3rd level domain (1 level below public suffix), country, state and
> organization names (or CN if not an internet name and no O name part),
> slow one way hash of full name entries (e.g.
> SHA-512**65536("some...@gmail.com"),
> full issuer details, cryptographic algorithm and strength, plus serial
> number and technical details such as EKUs and other non-special cased
> items.

This has enormous complexity requirements above "log the full certificate". History shows us this would result in even less adoption than we have right now.

> For example, to check if someone issued a fraudulent certificate for
> any domain or address under google.com, Google Inc could check the list
> of redacted CT entries for *@*.google.com, then compare it against an
> in-house non-public database of such certificates authorized by their
> internal management procedures.
>
> To check for certificates issued to non-existent / suspect domains such
> as example.com and/or test[1-9].com (recent Symantec related post in
> this group), this would still be fully visible too. So would SHA-1
> certificates issued in 2016, duplicate serial numbers, etc. Someone
> getting a misissued wildcard cert for a semi-public suffix such as
> "sf.net" or "blogblog.com" would also show up.

Disregarding the issue of S/MIME certificates, what do you propose for Honest Achmed's Car Dealership, which neither needs nor has rigorous management procedures for SSL certificates? (Replying to the first 3 items in your list here.)

Hypothetical: A monitoring service says "We saw a certificate logged for ????.honestachmedcars.com. It came from [CA name], SHA256, serial number xxxxxxxxxx."

Achmed asks around the office and nobody's quite sure why this certificate was created. It would help a lot if the monitor delivered the full list of subject names for the certificate, because if it did they would've seen it was issued for mail.honestachmedcars.com, which would point them to questioning their managed e-mail provider (who just did a scheduled renewal of all customer certificates).

>
> For a service such as gmail.com, the information suggested above would
> allow someone knowing a specific e-mail address such as
> "some...@gmail.com" to check if a certificate was misissued for that
> address, but would not provide an easy way for a third party (such as a
> spammer) to extract a list of all @gmail.com user names that happen to
> have S/MIME certificates (Of cause Google has the original list of
> their users already, but no one else should).
Again, disregarding this because CT was never meant for e-mail certificates.

~Kane York
Speaking as an individual.