As RFC 6962-bis approaches maturity, it seems an appropriate time to begin a discussion regarding policies surrounding the redaction of domain labels within certificates. This is particularly salient for two reasons: ensuring that a Log Policy for RFC 6962-bis is appropriately ready as 6962-bis finalizes, and the upcoming requirement, on 1 June 2016, that Symantec Corporation must disclose all certificates issued, as a result of a past continuous series of misissuance events.

RFC 6962-bis, Draft 13, Section 4 spells out, briefly, the argument for domain redaction, as well as possible means of technically accomplishing it. These means are, in essence, the use of a wildcard certificate, a special construction of the pre-certificate to allow substitution of domains, or the use of what is termed a Name-Constrained Intermediate CA, which is technically compatible with the definition in the Baseline Requirements of a Technically Constrained Subordinate CA Certificate.

As part of this discussion, there are three main actors to consider - what is expected of the Logs when accepting a certificate, what is expected of CAs when submitting a precertificate, and what is expected of clients when encountering such a certificate. Our core concern is with the latter two elements - ultimately, the Log will ideally accept any submission (that appropriately chains), and instead it's a question of what CAs are expected/permitted to do, and what Chromium clients will do in response.

There are also two facets to this discussion - what is acceptable for EV certificates, and what is acceptable for those certificates that are not EV (i.e. notions like DV/IV/OV). To simplify this, we are only talking about policies for the not-EV case. 
As currently implemented and deployed, name redaction is not permitted for EV certificates, reflective of the current UI treatment granted to them and communicated to users. We have no plans to revisit this decision as a matter of policy and trust, even if it becomes technically viable. So this discussion is primarily about DV certificates.

To recap: this is focused on what CAs are expected to do (and what clients will do when encountering such certificates) for non-EV certificates.

Option 1: No name redaction allowed at all.

This is perhaps the simplest option, and provides the most public trust and transparency. It treats all domain names, and certificates issued by public trust anchors, as needing a degree of public accountability. It improves the present state of the anti-phishing world by requiring that all domain names (which have certificates associated with them) be publicly disclosed and programmatically accessible, which can then be used to do analysis and mitigation for threats.

On the other hand, it is in some conflict with the hierarchical notion of the DNS - in which domain holders are permitted, by design, to set the policies below their domain. It does feel at odds with this design to dictate to domain holders that all labels must be, in effect, public.
Option 2: Allow redaction without any associated policies.

This is non-viable, precisely because it would allow a CA to redact domains to a degree that prevents detection of misissuance. For example, it is technically valid to log a precertificate for "*.?.com" - but such a log event provides no information to Monitors and Auditors about the well-functioning behaviour of the CA, or that a misissuance event has not occurred.

Option 3: Only permit redaction up to the level of effective TLD - plus two labels.

This relies upon using information from secondary sources, such as the Public Suffix List, to determine what an effective TLD is. For example, "com" is a TLD, as is "uk", as both appear within the IANA Root Zone Database, while "co.uk" is effectively a TLD, as registrations happen for independent and distinct security zones and entities below it, even though it is not part of the Root Zone Database.

The "plus two labels" means that, for a given domain of top.secret.example.com, we determine the effective TLD as "com"; thus one additional label is "example.com", and two additional labels are "secret.example.com". As such, it would be permissible to redact as "?.secret.example.com", but it would be a violation to redact as "?.?.example.com".

The reasoning for this is subtle, and is based on finding the appropriate balance between the need for accountability and transparency with respect to Publicly Trusted Certificates, while accepting the sovereignty and needs of domain holders to keep some domain labels 'hidden' from the world at large. Accepting a policy of eTLD+1, such that "?.example.com" would be permissible, has been a concern of multiple domain operators we've spoken to, because of the uncertainty as to whether that certificate was for "www.example.com", "mail.example.com", or "testing-and-staging.example.com", each of which may be operated by different teams under different policies. 
The goal is to ensure that those running the server for, say, mail.example.com cannot inadvertently permit a CA to misissue a certificate for www.example.com.

This also seeks to balance the incentives of domain holders who desire redaction with the need for transparency, in that it admittedly does mean that, in order to leverage the advantages afforded by name redaction, an organization must take appropriate steps to segment their DNS into a redaction zone. The risk is that, without such a policy, redaction will be seen as the default, whether by users or, worse, encouraged by CAs, and thus mask or dilute misissuance events.

With Option 3, there are two variants:

Option 3a: The determination of the "effective TLD" shall consider both the ICANN and PRIVATE divisions of the Public Suffix List (as described at https://publicsuffix.org/list/ ).

Option 3b: The determination of the "effective TLD" shall only consider the ICANN section of the Public Suffix List.

On the basis of the security arguments, it would seem that Option 3a is the appropriate balance; we've not yet found an argument for Option 3b, but are including it for completeness.

We would like to solicit discussion and feedback on these options from CAs, domain holders, monitors, and clients. There are various concerns from stakeholders that have been privately raised, but we think it's much more appropriate to have a public discussion.

We would also like to reach a decision over the course of the next several weeks. We believe that if there is consensus to be found as to how redaction shall happen, then it can play a significant role in determining what appropriate steps Chromium takes in response to Symantec's misissuance event, both for the set of pre-existing certificates that were issued prior to 1 June 2016, and for those certificates issued after 1 June 2016.
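To make the eTLD+2 rule concrete, here is a minimal sketch of the check in Python. The Public Suffix List lookup is reduced to a tiny hardcoded set for illustration, and the function names are invented; a real implementation would consult the full list from publicsuffix.org.

```python
EFFECTIVE_TLDS = {"com", "uk", "co.uk"}  # illustrative subset of the PSL

def effective_tld(domain):
    """Return the longest suffix of `domain` found in the (reduced) PSL."""
    labels = domain.lower().split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in EFFECTIVE_TLDS:
            return candidate
    return labels[-1]  # fall back to the rightmost label

def redaction_allowed(redacted_name):
    """True if '?' labels appear only to the left of eTLD+2."""
    labels = redacted_name.lower().split(".")
    etld_label_count = effective_tld(redacted_name).count(".") + 1
    required_clear = etld_label_count + 2  # eTLD plus two labels stay visible
    if len(labels) < required_clear:
        return "?" not in labels  # too short: nothing may be redacted
    return all(label != "?" for label in labels[-required_clear:])

print(redaction_allowed("?.secret.example.com"))      # True: redacts eTLD+3
print(redaction_allowed("?.?.example.com"))           # False: redacts eTLD+2
print(redaction_allowed("*.?.com"))                   # False: redacts eTLD+1
print(redaction_allowed("?.internal.company.co.uk"))  # True: co.uk is the eTLD
```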
I see another variant of option 3 which seems worth mentioning: require that certificates with SCTs containing name redaction are only allowed to be served from the private IP address space. This is a policy that would have to be implemented in the client, so it's not a CA policy as such. That is, if the client receives an SCT with name redaction from a server on a public IP, the cert validation would fail. This could help mitigate phishing certs (only for those clients supporting CT, though [1]), and it would probably restrain server owners from unnecessarily using name redaction.

[1] ...or it could be combined with some "local only" critical extension, meaning that name-redacted certs would only work in clients supporting the extension. This might be acceptable since it will only affect private networks where there is relatively good control of which clients are being used.
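A minimal sketch of what this client-side check might look like, assuming the client already knows whether the certificate carries redaction; the function name is hypothetical, and Python's stdlib `ipaddress` module stands in for the client's own address classification of RFC 1918 and similar reserved space.

```python
import ipaddress

def accept_redacted_cert(peer_ip, cert_has_redaction):
    """Hypothetical client check: a certificate carrying name redaction
    is only accepted when the peer address is in private space."""
    if not cert_has_redaction:
        return True  # no redaction: normal validation applies
    return ipaddress.ip_address(peer_ip).is_private

print(accept_redacted_cert("10.1.2.3", True))   # True: RFC 1918 space
print(accept_redacted_cert("8.8.8.8", True))    # False: public address
print(accept_redacted_cert("8.8.8.8", False))   # True: no redaction involved
```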
On 1 April 2016 at 03:19, Ryan Sleevi <rsl...@chromium.org> wrote:
<snip>
On the other hand, it is at some conflict with the hierarchical notion of the DNS - in which domain holders are permitted, by design, to set the policies below their domain. It does feel at odds with this design to dictate policies to domain holders that all labels must be, in effect, public.
This point seems important to me, and allows for a fourth possibility:

Option 4: Allow domain owners to set redaction policy

This could be informal, e.g. domain owners can spot certs with names in their domains that are overly redacted in their view, and then:

a) Update their internal policies to prevent recurrence,
b) Ask for the corresponding cert to be revoked.

This could also deal with redactions like "?.com", as anyone owning a domain in .com would be able to ask for revocation.
On Fri, Apr 1, 2016 at 5:37 AM, Ben Laurie <be...@google.com> wrote:

<snip>

Unfortunately, I think relying on revocation - which seems key to this proposal - would be a non-starter, given that revocation doesn't work; not just for browsers, but also for the vast majority of TLS clients (e.g. those based on OpenSSL, or those using language bindings from PHP, Perl, Python, Go, etc.).

So I want to avoid a situation where revocation (either via OCSP/CRL or via blacklisting) is seen as the mitigation for a problem.
The question of how to implement [1] is figuring out what represents an acceptable set of "local only" in a world of IPv6. My understanding and belief is that the notion of a separation between "public" and "private" becomes significantly more muddy in an IPv6 world - while you could say it's only applicable to link-local addresses, I believe that's different (and less flexible) than what you'd get with IPv4's reserved IP ranges - and so if we accept IPv6 as a reality, I'm not sure how we would implement that.
On 1 April 2016 at 17:14, Ryan Sleevi <rsl...@chromium.org> wrote:
<snip>
Unfortunately, I think relying on revocation - which seems key to this proposal - would be a non-starter, given that revocation doesn't work; not just for browsers, but also for the vast majority of TLS clients (e.g. those based on OpenSSL, or those using language bindings from PHP, Perl, Python, Go, etc.).

So I want to avoid a situation where revocation (either via OCSP/CRL or via blacklisting) is seen as the mitigation for a problem.
So, is the idea with 3 that logs refuse to log certs that don't comply?
I'm generally supportive of name redaction, and I understand the
concerns with allowing eTLD+1 redaction. That said, the eTLD+2 policy
strikes me as rather arbitrary and I don't think it fully solves the
problems which you raised. For instance, a sub-division of example.com
which uses sub.example.com might outsource their mail server,
mail.sub.example.com, to a third party. If a name-redacted pre-cert
for ?.sub.example.com is logged, the monitor run by sub.example.com
would not know what to do. Also, CAs could still redact as
much as possible by default. If they think redaction is good, why
wouldn't they? The difference in harm to the CT ecosystem between
redacting to eTLD+1 by default and redacting to eTLD+2 by default is
just a matter of degree.
I would instead require mandatory opt-in to name redaction with
a newly-defined property in a CAA record. Any label to the left
of the name with the CAA record could be redacted. For example,
a domain holder could allow redactions like ?.internal.example.com
or ?.?.internal.example.com by setting the appropriate CAA record on
internal.example.com. A domain holder who is sufficiently confident in
their internal accounting of certificates could even permit redactions
like ?.example.com by setting a domain-wide CAA record on example.com.
And a domain holder who wants no redaction could have that too.
Browsers would not be able to enforce this policy. However, a domain
holder's monitor would know their redaction policy and raise an alarm
if an overly-redacted pre-certificate for their domain is logged. Of
course, browsers should still enforce that no labels are redacted past
eTLD+1, to ensure that a domain holder's monitor can actually notice
the log entry.
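A rough sketch of how a monitor or CA might evaluate this proposed opt-in, assuming the CAA records for the relevant domains have already been fetched. The "redact" property name is invented for illustration - no such CAA property exists today - and the record-fetching step is elided.

```python
# Hypothetical check for the proposed CAA-based redaction opt-in.
# caa_records maps a domain name to its list of (tag, value) CAA properties.
def may_redact(redaction_point, caa_records):
    """Redaction of labels to the left of `redaction_point` is permitted
    only if a record on that exact domain opts in."""
    for tag, value in caa_records.get(redaction_point, []):
        if tag == "redact" and value == "allow":
            return True
    return False

records = {
    "internal.example.com": [("redact", "allow")],
    "example.com": [("issue", "ca.example.net")],  # unrelated issue property
}
print(may_redact("internal.example.com", records))  # True: opted in
print(may_redact("example.com", records))           # False: no opt-in record
```

Note that, as proposed, the check is on the exact domain at the redaction point, so an opt-in on internal.example.com permits ?.internal.example.com without permitting ?.example.com.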
First of all, I support the eTLD+2 proposal, with language strongly suggesting that it be named things like 'private', 'internal', or 'restricted'. For example, "service.internal.company.co.uk" would be allowed to be logged as "?.internal.company.co.uk".

What if we had a CAA record that was enforced by the browser?
This procedure is independently verifiable - if there is no such CAA record on internal.company.co.uk, then the issued certificate is bogus.

It does add latency - one uncommon DNS query - to the browser's TLS connection, which is unfortunate. I would recommend caching a positive result for a static period (24hr?) to avoid that overhead, and a negative result for a shorter period (10m?) to discourage attacks.

Also, it makes DNS a source of certificate validity information, which it was not in the past. Questionable dependency there.
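The suggested asymmetric caching could be sketched as follows; the 24-hour and 10-minute figures are the ones floated above, and the `lookup` callable stands in for the actual CAA query, which is not implemented here.

```python
import time

POSITIVE_TTL = 24 * 3600  # 24h for positive results, per the suggestion above
NEGATIVE_TTL = 10 * 60    # 10m for negative results, to discourage attacks

_cache = {}  # domain -> (result, expiry timestamp)

def cached_redaction_allowed(domain, lookup):
    """Return the cached answer for `domain` if fresh; otherwise call
    `lookup` (standing in for the CAA query) and cache with a TTL that
    depends on whether the result was positive or negative."""
    entry = _cache.get(domain)
    now = time.monotonic()
    if entry is not None and entry[1] > now:
        return entry[0]
    result = lookup(domain)  # assumed to return a bool
    ttl = POSITIVE_TTL if result else NEGATIVE_TTL
    _cache[domain] = (result, now + ttl)
    return result

print(cached_redaction_allowed("internal.example.com", lambda d: True))  # True
```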
I would make this a brand new CAA property, and completely decouple
it from the question of issue/issuewild enforcement. In theory, CAs
should have no problem with this - they aren't currently allowed to do
name redaction anyways, and the presence or lack of the name redaction
property would apply equally to all CAs. Adopting CAA for the limited
purpose of name redaction seems like all upside for CAs compared to
the status quo. Hopefully being able to offer name redaction to their
customers would be sufficient motivation for CAs to do the small amount
of engineering work needed and to overcome any psychological aversion
to the general concept of CAA. For this reason, I think it's important
that CAA be used from the beginning with name redaction if it is to ever
be used: rolling it out later will be much harder if it results in a
restriction on CAs compared to the status quo.
Good point, although it doesn't seem like this would be a dealbreaker
in practice. Redactions can't "escape" up the hierarchy, so if the
administrators of example.com see a redaction
of ?.internal.example.com, they know they need to talk with the
internal.example.com administrators.
Right, though to be clear CAs would only need to examine the name
redaction property, and not the issue/issuewild properties. I would
also propose that a violation by a CA of this policy would be grounds
to reject future name redactions by that CA.
On Fri, Apr 1, 2016 at 3:45 PM, Ryan Sleevi <rsl...@chromium.org> wrote:
> At fundamental question is whether or not an over-redacted pre-certificate
> should represent "misissuance". If so, it's clearer what can happen to that
> CA, up to and including removal. However, if it's not misissuance, but
> "something else," that becomes trickier.
An "over-redacted" pre-certificate should not be worse than no
pre-certificate. After all, there is no requirement that a CA log
every certificate they issue, as far as I know.
The requirements
suggested and discussed so far are all at the per-certificate level
and are "if you want Chrome to accept the certificate" not "if you
want Chrome to accept the CA", unless I'm missing something.
The majority of the domains where our customers chose to preserve privacy by opting out of CT logging fall under the “secret.example.com” pattern – effective TLD + 2 labels. Large enterprise customers have expressed their strong desire to keep their internal domain names private. Permitting redaction only up to the level of effective TLD plus two labels will force many enterprises to change their internal network setup to add one more label to their domain names so that they can preserve the needed privacy.
There can be two options to preserve the privacy as well as give needed control to domain operators:
- Permit redaction up to the level of effective TLD - plus one label.
- Permit redaction up to the level of effective TLD - plus one label or allow domain operators to specify level of redaction for their domain.
A good prevention mechanism like CAA can be used effectively to allow a specific CA to issue certificates to a specific subdomain operated by a different operator.
Let's pretend there is a third CA operator - InterGalacticTrust (IGT)
- in the same category as the above two and assume IGT has a CA called
"IGT Earth CA - G3". As I understand it, this CA could issue
certificates with the following characteristics:
1) EKU: serverAuth, contains SCTs
2) EKU: clientAuth, contains SCTs
3) EKU: serverAuth, no SCTs
4) EKU: clientAuth, no SCTs
Assuming that there are no SCTs in a stapled OCSP response or a TLS
extension, then Chrome would only trust (1), right? But is there any
reason that IGT couldn't log a precert for #3 or #4? And if they did,
why would it be worse to partially redact it versus not logging it?
So isn't the real issue determining if #1 really meets the requirement
to be trusted, rather than whether IGT is logging "overly" redacted
certificates?
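The four cases above can be sketched as a small acceptance matrix under a log-everything requirement for serverAuth certificates. This helper is hypothetical - it is not Chrome's actual implementation - and it encodes only the assumption stated above, that SCTs delivered via OCSP stapling or the TLS extension are absent.

```python
def chrome_accepts(eku, has_scts):
    # Under the assumed policy, only serverAuth certificates carrying
    # embedded SCTs are accepted; clientAuth certs and unlogged
    # serverAuth certs are not.
    return eku == "serverAuth" and has_scts

cases = [("serverAuth", True), ("clientAuth", True),
         ("serverAuth", False), ("clientAuth", False)]
results = [chrome_accepts(eku, scts) for eku, scts in cases]
print(results)  # [True, False, False, False] - only case 1 is trusted
```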
Thanks for starting the discussion here. There are both material implementation benefits from eTLD+1 and reasonable approaches to address the concerns you’ve raised.

Supporting eTLD+1 gives customers who want to redact the option to do so with today’s implementations — our data shows that the majority of server operators who want to redact at all are doing so with only eTLD+1 to begin with.
Supporting eTLD+1 will speed up the feasibility of making CT mandatory since it requires no changes to server operators’ infrastructure. The act of starting with eTLD+2 will necessarily slow down CT adoption to account for the time for server operators who require privacy for their internal domains to rename all their private sites — look at how long transitioning to 2048 or SHA2 is taking… starting with eTLD+1 completely eliminates the dependency on server operators and enables us to create a practical, immediately implementable (and enforceable) policy.
The rationale for eTLD+2 seems to be centered on minimizing false alarms.
There are two cases of false alarms that are being discussed:

1. Where server operators individually manage www, mail, and cdn, and where these server operators monitor individually (if the domain is monitored centrally, eTLD+1 is preferable for server operators and the info sec team because it requires less support). It is still possible to minimize false alarms for the individuals in this case, as previously suggested, by requiring CAs to give customers the ability to set the level at which they redact. With this requirement, if alarms are triggered for legitimate certificates, it will only be because internal enterprise policy on the acceptable level of redaction wasn’t followed… and those alarms are OK — they will force either no redaction or following of enterprise policy (which can include differences like info sec monitoring results for all of ?.example.com and their CDN monitoring only ?.cdn.example.com).
2. The case of truly bad false alarms would occur only where a CA independently over-redacts. Our expectation is that full logging (no redaction) MUST be the default implementation for all CAs, something that (along with allowing customers redaction-level flexibility) could be formalized as part of the Baseline Requirements.
With these two requirements accompanying eTLD+1, we can solve for false alarms, require nothing new from server operators, and create an approach that can move the industry far more quickly to universal support for CT.
On Fri, Apr 1, 2016 at 4:51 PM, Sanjay Modi <sanja...@symantec.com> wrote:

<snip>

That's correct, and as explained, that would be intentional/desirable.
There can be two options to preserve the privacy as well as give needed control to domain operators:
- Permit redaction up to the level of effective TLD - plus one label.
- Permit redaction up to the level of effective TLD - plus one label or allow domain operators to specify level of redaction for their domain.
Given the previous explanation of concerns with eTLD+1 earlier on the thread, could you explain if anything was missed in the considerations thus far? Overall, this does not feel appropriate as a blanket policy, as I explained already, and even less so compared to eTLD+2.
On Friday, April 1, 2016 at 4:57:37 PM UTC-7, Ryan Sleevi wrote:

That's correct, and as explained, that would be intentional/desirable.

Many enterprises register specific domains entirely for internal usage. Other than the eTLD+1 itself, no subdomain is visible on the internet.

While this usage is perfectly legitimate today, the eTLD+2 policy will require enterprises not only to change their internal domain name policies but also the internal applications and tools that use these domain names. The amount of change required here to preserve privacy will mean large enterprises need more time to fully adopt CT.
The concern expressed with eTLD+1 is where different domain operators operate subdomains such as mail.example.com and www.example.com. This can be addressed by #2, where example.com can set a redaction policy of eTLD+2 so that monitoring certificate issuance for mail.example.com and www.example.com is not impacted.

This concern can also be addressed by a preventive approach like CAA (independent of CT), by specifying the authorized CA for each subdomain.
I'm confused why certificates issued prior to June 1 are an issue - since
Symantec is not required to log such certificates, they can simply not
log the certs which they would otherwise redact.
As for the certificates
issued after June 1 but prior to RFC 6962-bis-compliant logs, how would
label redaction work technically? I suppose there's nothing stopping a CA
from logging a pre-cert with "?" labels, but would clients and auditors
be expected to understand redaction despite talking with pre-bis logs
and using pre-bis data structures?
That bit of confusion aside, I do understand how the CAA record approach
would take longer to put in place than the other proposed options.
However, if the CAA record approach is considered desirable, I think it
would be a shame to pass over it because of the impending Symantec
deadline.
Instead, I think it would be better to wait and do one of the
following in the interim:
a. Option 1 (no name redaction allowed).
b. Option 3 (eTLD + 2), but only for Symantec, and with an explicit
sunset date, after which this policy would be replaced with a general
policy on name redaction. I don't think limiting this policy to
Symantec would be inappropriate: it's only temporary, Symantec is
currently the only CA required to log non-EV certs, and there's going
to be special-casing in the client (really only Chromium at this point)
for Symantec anyways.
Basically, I don't think the decision on a long-term, general name
redaction policy should be rushed or influenced because of
Symantec's June 1 deadline.
On Fri, Apr 1, 2016 at 8:02 PM, Michael Klieman <Michael...@symantec.com> wrote:

<snip>

Could you explain how you're measuring that? This only represents Symantec customers, correct?
More importantly, can you explain why redaction is important? As indicated, it seems entirely reasonable policy to simply not support it at all. So it's useful to hear - from CAs and their customers - what justifications there are, and how to balance those concerns with the broader ecosystem's.
<snip>

While I promise I'm not intentionally trying to pick on you, since you brought it up yourself it's worth noting that every CA was able to successfully transition their customers to RSA-2048 and to SHA-2 - except, it would seem, Symantec. It's useful to have this perspective and understand the context, just as it's similarly useful to hear from other CAs - many of whom are logging all of their certificates today without any redaction at all.
The rationale for eTLD+2 seems to be centered on minimizing false alarms.

Not entirely; I think it's important to not lose sight of the concerns with eTLD+2 in the broader perspective, which is a fundamental question of whether such redaction of publicly trusted certificates is a good idea, and the necessary trade-offs at the intersection of relying parties, monitors, site operators, and CAs.

- A policy of eTLD+2 unquestionably favors less transparency, and thus favors CAs, and admittedly requires some change on the site's part (for those who redact), but benefits sites that don't wish to redact, monitors, and relying parties.
- A policy of eTLD+1 favors even less transparency, provides greater benefit to the sites that do wish to redact, and causes greater harm to those that don't.
- A policy of no redaction unless CAA present favors greater transparency, provides the functionality to sites that do want to redact, causes no harm to those that don't want to redact, and causes greater effort for CAs and sites that do redact.
- A policy of redaction unless CAA present saying no redaction favors less transparency, provides the functionality to sites that do wish to redact, but causes harm to those that don't want to redact.
- A policy of no redaction at all provides no benefits to those that do wish to redact, nor to their CAs, but causes no harm to sites that don't want to redact, and provides greater benefits to monitors and relying parties. For that matter, it COULD provide greater benefit to users, by providing Monitors and Auditors the tools to examine CT logs for suspicious/phishy domain names and react appropriately (notifying the domain registrar, notifying SmartScreen/SafeBrowsing, etc). This would be one of the few times a certificate actually reduced the risk of phishing, even!
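For concreteness, the eTLD+1 versus eTLD+2 redaction floors being compared can be sketched in code. This is a rough illustration only: the tiny suffix set stands in for the real Public Suffix List, and the helper names are mine, not anything any proposal specifies.

```python
# Sketch: classifying a redacted DNS name against an eTLD+N redaction floor.
# SUFFIXES is an illustrative stand-in for the Public Suffix List.
SUFFIXES = {"com", "uk", "co.uk"}

def effective_tld(domain: str) -> str:
    """Return the longest known public suffix of `domain`."""
    labels = domain.lower().split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in SUFFIXES:
            return candidate
    return labels[-1]

def redaction_ok(domain: str, policy_depth: int) -> bool:
    """True if every redacted ('?') label sits above eTLD+policy_depth.

    policy_depth=1 models eTLD+1 ("?.example.com" allowed);
    policy_depth=2 models eTLD+2 ("?.secret.example.com" allowed,
    "?.?.example.com" not).
    """
    labels = domain.lower().split(".")
    etld_len = len(effective_tld(domain).split("."))
    # Counting from the right, the eTLD plus `policy_depth` labels
    # must remain unredacted.
    protected = labels[-(etld_len + policy_depth):]
    return "?" not in protected

print(redaction_ok("?.secret.example.com", 2))  # True: eTLD+2 intact
print(redaction_ok("?.?.example.com", 2))       # False: over-redacted
print(redaction_ok("?.example.com", 1))         # True under eTLD+1
```

Under this model, a name acceptable under eTLD+1 ("?.example.com") fails the eTLD+2 check, which is exactly the gap the two camps are arguing over.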
I want to make sure I concretely understand your proposal, and what tradeoffs you're asking the various stakeholders to make.

There are two cases of false alarms that are being discussed:

1. Where server operators individually manage www, mail, cdn, and where these server operators monitor individually (if the domain is monitored centrally, eTLD+1 is preferable for server operators and the info sec team because it requires less support). It is still possible to minimize false alarms for the individuals in this case, as previously suggested, by requiring CAs to give customers the ability to set the level at which they redact. With this requirement, if alarms are triggered for legitimate certificates, it will only be because internal enterprise policy on the acceptable level of redaction wasn't followed… and those alarms are OK — they will force either no redaction or following of enterprise policy (which can include differences like info sec monitoring results for all of ?.example.com and their CDN monitoring only ?.cdn.example.com).

I'm unclear what you're advocating here. I can't tell if you're suggesting CAA policy to restrict redaction, or CAA policy to permit redaction, and I'm not sure what you suggest as the means or consequences for a failure of a CA to abide by either.
I see several possible interpretations, so I want to make sure:

1) No redaction unless CAA records present; if present, acceptable to eTLD+1
2) Redaction permissible at any time, unless CAA record present; when redacting, acceptable to eTLD+1

2. The case of truly bad false alarms would occur only where a CA independently over-redacts. Our expectation is that full logging (no redaction) MUST BE the default implementation for all CAs, something that (along with allowing customers redaction-level flexibility) could be formalized as part of the Baseline Requirements.

Part of my confusion above is trying to understand how to quantify what customers' redaction-level flexibility is. I'm uncertain if you're suggesting that the CA simply provides a checkbox to "redact", or if you're embracing the suggestions provided on the thread so far, which include objective and quantifiable means for a site to opt in to redaction.
With these two requirements accompanying eTLD+1, we can solve for false alarms, require nothing new from server operators, and create an approach that can move the industry far more quickly to universal support for CT.

I don't think "requires nothing new from server operators" is an objective positive, since we've got multiple parties involved - relying parties, sites that wish for no redaction, sites that wish for full redaction, monitors, and CAs. I think we want to make sure to quantify the benefits - and risks - of redaction to all parties involved.
Redaction addresses the real world cases that get in the way of faster widespread adoption of CT. eTLD+1 is no worse for a site that doesn't want to redact while eTLD+2 is clearly worse for a site that does.
Nor does redaction at any level impact monitors... Only a site owner can monitor use of its own domain and the act of redacting necessarily requires a smarter monitor at any level of redaction.
- A policy of redaction unless CAA present saying no redaction favors less transparency, provides the functionality to sites that do wish to redact, but causes harm to those that don't want to redact.

Agreed. Again, we would advocate for MUST log by default for all CAs. We're not trying to promote redaction but to make it available for customers who want privacy for their domains... and not create a technical burden to get that privacy.
The issue here is handling those benefits for public sites vs. the needs for private sites (where most of these benefits don't apply). While I certainly like the idea of better phishing protection, getting CT adopted everywhere to enable orgs to monitor their domains to detect misissuance should be the primary objective... redaction support that's easy to implement will help achieve that.
Not suggesting a CAA requirement at all as part of CT redaction. Just eTLD+1 along with a BR requirement to log by default and provide customers the ability to choose the level of any desired redaction.
Yes... and looking at the various stakeholders, the party that is impacted by the redaction policy is limited to an org that chooses to redact its own information.
I think you have identified a separate but related issue with the
current Baseline Requirements. There is no requirement that a CA
cooperate with a Domain Name Registrant to get information on existing
certificates or to take their concerns into account. I think that it
would be reasonable to require all CAs to 1) provide a copy of the
full issued certificate to those who can show they control the domain,
2) revoke a certificate if requested by the domain registrant (even if
the registrant is not the Subscriber), and 3) allow registrants to
restrict who may successfully apply for a certificate. These are all
missing today and should be resolved.
I think it would be good to set a tenet up front: A domain owner
should not be forced to expose more of their domain configuration
than they otherwise would just because they want security. It would be
really bad if someone is having to seriously consider leaving things
unencrypted as a result of certificate policy. Yet that is a very real
possible outcome of this conversation.
Therefore, I think redaction is critical if we are to avoid making
customers decide between privacy and security. There is a single DNS
root (and this is a good thing), so it is reasonable that customers can
expect a single trust anchor list that aligns with this root. To make
this happen, similar policies need to work for names in certificates
as work for DNS itself. This means the person controlling the
domain namespace needs to be able to choose how much to include in
data accessible on the public Internet, which might be just the
registered domain name.
If a domain holder is concerned, monitoring for redacted entries in their domain isn't enough?
On Apr 2, 2016 1:01 PM, "Richard Salz" <rich...@gmail.com> wrote:
>
> If a domain holder is concerned, monitoring for redacted entries in their domain isn't enough?
It makes a fairly substantial set of tradeoffs in practice, for the reasons outlined in the initial and subsequent messages. The question is whether accepting those tradeoffs (on the whole) is acceptable for the limited benefit of those who would redact.
On Apr 2, 2016 2:03 PM, "Peter Bowen" <pzb...@gmail.com> wrote:
>
> In order to make permission based redaction
> work, based on those docs, the policy will need to be encoded in a TXT
> record.
Let's be clear: When you say "In order for it to work," what you mean is, "In order for it to work without requiring any changes," which is a very different thing. We can certainly make it work with CAA, which would seem the clear and logical place.
If domain services don't support CAA, and you wish for name redaction, you can switch providers/software.
Does that mean more work? Yes, absolutely. But would it work as CAA? Of course.
The argument for avoiding CAA is that services don't support it, and services don't support it because customers don't ask for it, and customers don't ask for it because CAs aren't obligated to respect it, and because it is hard to put a value on security to justify a change. If privacy is of value to these sites, it leads to pressure to support CAA, which improves the entire ecosystem (not just for redaction, but for restricting misissuance)
I appreciate the perspective as to where things stand today, but I don't think that should dissuade us from doing the engineering in a sound way. The same argument you bring up here is no different than the original discussions of CAA vs TXT during CAA's standardization, and I'm not sure the argument has improved? It certainly seems CAA is the sound engineering choice, and the TXT record is only if we want to be lazy?
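If CAA were the vehicle, an opt-in check might look something like the sketch below. Note the "redact" property tag is purely hypothetical — RFC 6844 defines only issue/issuewild/iodef — and the record-tuple shape is an assumption for illustration; a real CA would be resolving live DNS, not inspecting a list.

```python
# Sketch: evaluating a hypothetical CAA "redact" property under a
# default-deny (opt-in) redaction policy. The "redact" tag is NOT a
# standardized CAA property; it exists here only to illustrate the flow.

def may_redact(caa_records: list[tuple[int, str, str]], ca_domain: str) -> bool:
    """caa_records: (flags, tag, value) tuples found for the domain.

    Default deny: redaction is permitted only if an explicit
    "redact" property names this CA.
    """
    for _flags, tag, value in caa_records:
        if tag == "redact" and value.strip().lower() == ca_domain.lower():
            return True
    return False

print(may_redact([], "ca.example"))                               # False: no record, default deny
print(may_redact([(0, "redact", "ca.example")], "ca.example"))    # True: explicit opt-in
print(may_redact([(0, "redact", "other.example")], "ca.example")) # False: opted in to a different CA
```

The default-deny shape matters: flipping the logic so that an *absent* record permits redaction is exactly the opt-out model criticized later in the thread.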
> The other thing that needs to be considered is what happens to
> existing certificates when the policy record changes. The already
> existing pre-certificates that relied on the old policy clearly still
> exist and were not mis-issued, but an observer viewing the
> pre-certificate months later could think they were if comparing for
> the policy. So there is still a reliance on the CA to accurately
> follow the policy which is hard to verify months later.
There's a difference, however, between not being able to verify at all (e.g. eTLD+1 redaction at the CA's discretion, as Michael effectively proposed) and not being able to verify substantially after the fact. This permits continuous monitors making observations (which will happen within one MMD), which is better than nothing.
I suppose, if we want to talk about other challenges with this, there is the ability of CAs to use previously validated information (up to 39 months old) prior to issuing a certificate. If we accept the CAA method, a CA could observe no redaction restrictions (in the absence = permission model Michael advocated), and use that to continue issuing redacted certificates for up to 39 months after - including after the domain has changed ownership, and even after the domain holder has prohibited redaction.
This seems to suggest a policy of opt-in to redaction better favors sites - opting in can take place immediately, while opting out can take up to 39 months - rather than the opt-out model, where even opting out initially can take 39 months. Of course, an alternative would simply be to require the record be checked before every certificate is issued - a far safer bet, period, but one which I anticipate some CAs will oppose, because it means they have to make changes in order to keep users safe.
On Apr 2, 2016 3:17 PM, "Peter Bowen" <pzb...@gmail.com> wrote:
>
> I should have been clearer. In order for it to work as the "interim
> solution", as you phrased it, CAA records are not viable. CAA records
> are more viable if the TXT or CAA records can be used until date X,
> after which point only CAA records can be used. Maybe X is the date
> that a new RFC is published defining the property that goes in a CAA
> record.
Thanks for clarifying.
I'm not sure I'd agree though, in the case of the interim solution (in which there will still be wide and ample opportunity from a variety of CAs to obtain non-redacted, non-disclosed certificates), that it necessarily requires the TXT solution.
I totally appreciate that the argument in favor of such an approach is that it allows Symantec and CNNIC customers a path to redaction, but it seems like they would already have a rather sizable alternative path available.
My concern, of course, is that introducing yet another timeline or sunset would add more complexity and confusion, as opposed to there being only one sanctioned way, hopefully technically sound, to do things.
After some more thought, I think that any rule for redaction needs to
allow for the leftmost label to be redacted, to avoid creating a
perverse incentive that pushes customers to wildcard certificates.
Wildcards already allow "redaction" of the leftmost label by covering
all possible labels.
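The wildcard-equivalence rule above — redaction may hide only what a wildcard could already hide — reduces to a one-line check. The function name and the '?' redaction convention are illustrative, borrowed from the thread's examples.

```python
# Sketch: permit redaction of exactly the label a wildcard could cover
# (the leftmost one). Any '?' deeper in the name is over-redaction
# under this rule.

def wildcard_equivalent_redaction(name: str) -> bool:
    """True if at most the leftmost label is redacted ('?')."""
    labels = name.split(".")
    return "?" not in labels[1:]

print(wildcard_equivalent_redaction("?.example.com"))         # True
print(wildcard_equivalent_redaction("?.secret.example.com"))  # True
print(wildcard_equivalent_redaction("?.?.example.com"))       # False
```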
On Fri, Apr 1, 2016 at 5:37 AM, Ben Laurie <be...@google.com> wrote:
On 1 April 2016 at 03:19, Ryan Sleevi <rsl...@chromium.org> wrote:
<snip>
On the other hand, it is at some conflict with the hierarchical notion of the DNS - in which domain holders are permitted, by design, to set the policies below their domain. It does feel at odds with this design to dictate policies to domain holders that all labels must be, in effect, public.
This point seems important to me, and allows for a fourth possibility:

Option 4: Allow domain owners to set redaction policy

This could be informal, e.g. domain owners can spot certs with names in their domains that are overly redacted in their view, and then:
a) Update their internal policies to prevent recurrence,
b) Ask for the corresponding cert to be revoked.

This could also deal with redactions like "?.com", as anyone owning a domain in .com would be able to ask for revocation.

Unfortunately, I think relying on revocation - which seems key to this proposal - would be a non-starter, given that revocation doesn't work; not just for browsers, but also the vast majority of TLS clients (e.g. those based on OpenSSL, or those using language bindings from PHP, Perl, Python, Go, etc).

So I want to avoid a situation where revocation (either via OCSP/CRL or via blacklisting) is seen as the mitigation for a problem.
On Apr 4, 2016 3:37 AM, "Ben Laurie" <be...@google.com> wrote:
>
> Surely all proposals rely on revocation? How else will you deal with an overly-redacted cert, regardless of definition of "overly-redacted"?
You have options such as removing the CA's ability to redact (on a go-forward basis or retroactively), or removing that CA.
What I mean is that relying on the revocation of end-entities, as seemed suggested, is as unacceptable as allowing CAs to issue SHA-1 and expecting browsers to blacklist leaves or intermediates. We want our policies to be such that revocation is both rare and significant, not a core assumption.
On Fri, Apr 1, 2016 at 10:53 AM, Nick Lamb <tiala...@gmail.com> wrote:
On Friday, 1 April 2016 17:05:08 UTC+1, Ryan Sleevi wrote:

The question of how to implement [1] is figuring out what represents an acceptable set of "local only" in a world of IPv6. My understanding and belief is that the notion of a separation between "public" and "private" becomes significantly more muddy in an IPv6 world - while you could say it's only applicable to link-local addresses, I believe that's different (and less flexible) than what you'd get with IPv4's reserved IP ranges - and so if we accept IPv6 as a reality, I'm not sure how we would implement that.
Modern IPv6 does have the notion of "local" address ranges, but they're globally unique. RFC 4193 explains the sensible engineering solution, which carves out a range within which you pick 48 random bits for your organisation, plus 16 bits of subnet IDs.
The reason they're globally unique is that we know from IPv4 that organisations merge all the time, and then invariably end up with address collisions and all the mess that entails. Obeying RFC 4193 statistically renders such collisions so unlikely as to be irrelevant unless you're hooking hundreds of thousands of networks together, which is hardly a "private" system any more. In practice it is reasonable to expect some people will set the "random" bits to 0000 0000 0000 because people are idiots, but hopefully we can train administrators on larger networks to take the word "random" at least semi-seriously.
So, anyway, it would be possible for Chrome to detect that an address was in the RFC 4193 range, thus not on the public Internet, and choose to behave differently for that address, much as it might for RFC 1918 addresses today. It doesn't seem sensible to me to apply semantics to the addressing, whether for RFC 1918 or RFC 4193, but it's certainly no crazier for RFC 4193 than it was for RFC 1918.

Ah! I had missed that.

There is one more downside to this approach - and it's one we've explored with similar web policies gated on notions of public vs private - which is that it runs the risk of creating non-deterministic behaviours for standard configs. That is, consider the "split-DNS" route where a service resolution for "example.com" returns [FC00:..., 2001:0400:..., 8.x.x.x]. Depending on a variety of factors (Happy Eyeballs, DNS round robin, etc), the address you connect to could vary, which means that you can encounter a service that has the appearance of arbitrarily failing, even when they're all talking to the same endpoint.

This has certainly been a concern in other contexts (for example, treatment of certain local traffic as a priori secure for privileged features), and perhaps it's even more of an edge case to expect to see this, but that'd at least be part of the concern with this.

It also seems like it doesn't fit with the model where these internal services may be listening on globally routable addresses (or at least, mapped to such via DNS), but which only answer for certain networks (for example, firewall ingress policies or VPNs). In this model, you still want to prevent the certificates from being logged, but you'd want to be able to serve them on globally routable IPv6 IPs.

Finally, the last place where I can think it gets 'weird' is with things like HTTP/2's connection coalescing properties. What happens if you have a certificate that contains both "public" and "redacted" names as part of a single certificate?
With how HTTP/2 coalescing is defined, it would be viable to reuse the "public" IP connection for the "redacted" domain (note: the client has the full, unredacted certificate; it's only the SCT that is redacted), which would bypass these policies. It would seem to involve some changes to the HTTP/2 spec, or at least browsers' Fetch spec, to examine not just the certificate (which has the full name) but also the SCTs, establish a new unique connection to that domain, ensure that domain matches the private IPv4/IPv6 space, and then and only then allow the connection to continue.

I may have missed something, though, in thinking about why this proposal may not be workable, much like I did with the IPv6 segmentation of FC00, so I definitely want to keep the discussion going.
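As a minimal sketch of the RFC 4193 / RFC 1918 detection discussed in this exchange, using only the standard library. The `is_local` helper is hypothetical — it illustrates the check a client could make, not anything Chrome actually implements.

```python
# Sketch: detecting RFC 4193 unique local IPv6 addresses, analogous to
# special-casing RFC 1918 space for IPv4.
import ipaddress

ULA = ipaddress.ip_network("fc00::/7")  # RFC 4193 unique local block
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_local(addr: str) -> bool:
    """True if `addr` falls in RFC 4193 (v6) or RFC 1918 (v4) space."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 6:
        return ip in ULA
    return any(ip in net for net in RFC1918)

print(is_local("fd12:3456:789a::1"))  # True:  locally assigned ULA
print(is_local("2001:db8::1"))        # False: not in fc00::/7
print(is_local("192.168.1.10"))       # True:  RFC 1918
```

The split-DNS objection above is visible here: if a name resolves to both a ULA and a global address, this per-address check gives different answers for what is logically the same service.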
--
You received this message because you are subscribed to the Google Groups "Certificate Transparency Policy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ct-policy+...@chromium.org.
To post to this group, send email to ct-p...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/ct-policy/CACvaWvYB8PD6_dv2%3DvZymPHosTn%3DHHwUsm4ThSU3Z11MKNcHsw%40mail.gmail.com.
I prefer option 1: No name redaction allowed at all. I think it's the
best balance of interests, given that wildcards and name constrained
subordinates are available.
CAs can operate name constrained subordinates for those enterprises,
again with no infrastructure changes for the enterprise. That may bring
a cost to the CA, if they are currently offering name constrained
subordinates as a high-cost premium product, since it reduces the
ability to price segment. However, it also brings a benefit to the CA in
offering a more appealing "private" product to enterprises.
I forgot to mention that adding some salt would help. Perhaps something like...

On 05/04/16 16:33, Rob Stradling wrote:
I'd like to resurrect a previous idea from Peter Bowen for discussion:
http://www.ietf.org/mail-archive/web/trans/current/msg00148.html
The basic idea is to change the construction of the redaction label:
instead of using a "?", the CA would apply a one-way function to the
actual label(s). This would enable different teams of monitors to
determine whether or not a particular redaction label belongs to one of
the actual labels under their control. Result: a reduction in false
alarms.
It would be trivial to mount a dictionary attack on obvious labels (e.g.
"www", "mail"), but ISTM that the obvious labels are usually the ones
that won't need to be redacted.
redacted_label = hash(actual_label + cert_serial_number).
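A rough sketch of that salted construction, showing how a monitor that knows its own labels could test them against a redacted value. SHA-256 and the hex truncation are illustrative choices on my part; neither is specified by the proposal or by 6962-bis.

```python
# Sketch of: redacted_label = hash(actual_label + cert_serial_number).
# The serial number acts as the salt, so identical labels under
# different certs produce different redacted values.
import hashlib

def redacted_label(actual_label: str, serial_number: int) -> str:
    data = actual_label.encode() + str(serial_number).encode()
    return hashlib.sha256(data).hexdigest()[:16]  # truncation is illustrative

def monitor_matches(redacted: str, my_labels: list[str], serial_number: int) -> bool:
    """A monitor hashes the labels it controls and compares; a match means
    the redacted entry is one of its own, cutting false alarms."""
    return any(redacted_label(l, serial_number) == redacted for l in my_labels)

serial = 0x1234ABCD
r = redacted_label("internal-build", serial)
print(monitor_matches(r, ["www", "mail", "internal-build"], serial))  # True
print(monitor_matches(r, ["www", "mail"], serial))                    # False
```

The dictionary-attack caveat above still applies: anyone can hash guesses like "www" against the serial, so the scheme only protects labels that aren't guessable.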
This will also incentivize customers to collapse their DNS hierarchies and use wildcards; and because wildcards are not allowed in EV, customers will be incentivized to move from EV to DV/OV.
As there is still good discussion going on for a balanced redaction policy, and until that is finalized, we propose:

1. Permit redaction up to the level of effective TLD plus one label, with a sunset timeline based on when a public redaction policy (to eTLD+2 or whatever) is finalized.
On Apr 6, 2016 8:07 AM, "Peter Bowen" <pzb...@gmail.com> wrote:
> I think that this is a decision that sites should not have to make.
> Privacy and security are both important and there is a clear path to
> supporting both -- allowing redaction of the label that is permitted
> to be a wildcard. The argument against such seems to be that there
> will be domains where the domain owner will notice the redaction, be
> OK with the redaction, but not bother to find out what is behind the
> redaction and instead assume all is good. This seems like a tenuous
> case at best. The more likely case is that the domain owner notes
> that issuer is an unexpected CA or that they don't request their CA do
> any redaction yet the precert is redacted.
No, there are meaningful differences between redaction and a wildcard. For example, a site can use CAA to restrict the issuance of wildcards today. Even without the broader support of the CA ecosystem, it is certainly sufficient, today, for the transitional needs until the finalization of 6962-bis AND any proposed improvements to the CAA pipeline.
And I don't agree that it's an accurate summary of the issue with redaction - that is, that the site sees it and doesn't bother to find out what's behind it. Rather, if we allow redaction as proposed, *without any other policy changes*, then there's no means to either prevent it or discover what's behind it, but there are such means for wildcards.
That's why I think most of the conversation has shifted to being less about the purely technical means, and more about what policies are necessary to accompany those technical means, as the tech itself cannot purely solve the issues and tradeoffs.
> From my perspective, the threat that CT is attempting to mitigate is
> misbehaving CAs. This is an important threat to mitigate but should
> not come at the expense of domain owners' ability to have security
> for their own certificates and privacy for their zones. There is zero
> advantage to having "*.example.com" logged over "?.example.com".
See above for why, today, these are meaningfully distinct.
> They
> both give the same amount of data to anyone independent of the domain
> owner and CA. But, for the domain owner, wildcards are a massive
> security tradeoff for many of the same reasons that eTLD+2 was
> suggested as the initial policy, so I think any action that drives
> customers that direction is a poor decision.
It would seem your argument is that there should be no tradeoffs for site operators between privacy and security, and you believe there is no added ecosystem value for transparency as a principle?
I suspect this is where our disagreement lies, but I do want to make sure I understand.
I think Jacob raises an important new point. Will removing redaction drive enterprises to a private CA and therefore expose more people to MITM? It might be worth it for the global sake, but it should be considered.
On Apr 6, 2016 9:05 PM, "Tom Ritter" <t...@ritter.vg> wrote:
>
> The complexity makes me unhappy, but I tend to lean towards 'let
> domain owners control redaction' - preference to do so seems to be via
> CAA. One thing I'm not sure I saw was how to solve the problem of
> "When I issued this certificate, the CAA record said I could redact;
> but now it says I can't." In *theory* couldn't one embed a DNSSEC
> chain of the CAA record in something (SCT extension most likely?) to
> prove correct permission to redact? Requires everyone to have DNSSEC
> of course.[0]
To be honest, I'm not even sure how much I attach to the post-issuance audit that the CA was following redaction policies; I would ALMOST trust the auditors there. However, that is only true if the default is to deny redaction unless the CAA record is present. If the default allows redaction, then I don't trust CAs to do the due diligence to check for the existence of the 'don't redact' record, nor do I think it helps the ecosystem, as Jacob and Eric have mentioned.
A simpler form of control wouldn't even require the DNSSEC bits - the hypothetical CAA property could include virtually anything as a challenge, and the CA could include that in the precert. If anyone complained about over redaction, the burden would be on the CA to show they followed policies and that the challenge was fresh/correct. Alternatively, the domain holder could just demand the certificate be revealed.
That's why I favor a default deny, with an opt-in. It gives the domain holder control to do private issuance, favors an open ecosystem, and provides a means to correct overredaction without requiring revocation.
Auditors could only check the policy if it hasn't changed in DNS
though. Consider domain D wants a redacted cert, but doesn't want it
to be a normal policy. They put it in CAA, get a cert, everyone's
happy, then remove the CAA record. Later some auditor comes along and
says "Hey, CA C issued a cert for Domain D that shouldn't be
redacted." The onus is on the CA _and_ the domain to come forward and
say "Oh, no, that's okay." Variations could create a lot of false
positives, to the point that auditors can't effectively police
unpermissioned redactions, so they don't try.
The onus moves to the domains themselves to work with a monitor and do
the checking themselves. I think that's probably okay - the onus is
always on domain owners to look for misissued certs of course. It just
seems unfortunate that we have 85% of the system that could allow
auditors to check correct CA operation for redaction, and we can't get
it to 100%.
Regarding the question of whether to accept redaction at all, the main argument seems to be on the basis that DNS information isn't private. However (and I'm kinda sad no one checked me on this when I put it forward as a strawman), there are active efforts to ensure DNS privacy, such as via DNS-over-TLS. Similarly, using methods like HTTP Alt-Svc and Encrypted SNI (in TLS 1.3), it's conceivable to imagine a world where a domain may not necessarily be leaked to a network observer other than the intended endpoint/trusted parties. If we do not permit any redaction at all, it would seem to suggest that such efforts are not worthwhile; alternatively, it may suggest that they're not worthwhile yet - but that then sets up the hard question of how to judge if and when redaction should be permitted.
It is definitely possible to imagine a world with universal CT, no name redaction, and fully confidential hostname interactions per-connection.
Do we know what the 'rules' are for wildcard and specific DNS names? If a browser has seen www.example.com, will *.example.com going to the same host upset it?
> > It is also possible to imagine a world where every name and phone number is
> > published.
>
> You mean like in a phone book, and voter registration rolls, and in
> innumerable data breaches?
Which is a negligible part of the global population and we hate it, so you agree with me. :)
Why can't DNS names be private? Because of a few bad or incompetent parties? Is that the right trade-off to make?
On Apr 19, 2016, at 18:10, Jacob Hoffman-Andrews <js...@eff.org> wrote:
>> Wildcards are a very heavy hammer. Maybe that's enough, but let's
> make sure the choice is a conscious one.
>
> I think wildcards are the right tradeoff. The usefulness of
> participating in the public Web PKI outweighs the detriments. And
> enterprises always have the option of using non-wildcard certificates,
> at the cost of accepting that some internal hostnames become public.
>
>
> Another argument against redaction: Until now it's been the case that
> anyone can submit any certificate to a CT log, so long as it chains up
> to a recognized root. This has been very useful. A researcher is free to
> download the Censys.io scans and upload them to a CT log. An individual
> is free to record every certificate their browser sees, for later uploading.
>
> If a researcher uploads an unredacted copy of a certificate for
> secret.example.com, and that certificate was previously logged in
> redacted form, is example.com going to consider that a breach of their
> privacy? Right now the implicit assumption in CT is that there is no
> expectation of privacy in publicly trusted certificates. Adding an
> explicit notion of privacy for some DNS names introduces uncertainty
> regarding what kind of certificates are fair game for uploading.
>
Two comments on this scenario:
1) Such a process would have to be done with intent, rather than be the byproduct of using a "normal" browser, or such a browser risks being accused of violating users' privacy.
2) A researcher that has authorized access to such supposedly private servers would likely be violating some kind of agreement by disclosing them. And obviously would not really need CT to perform such disclosure.
This brings me to another question: assuming an entity somehow set up its own private web PKI infrastructure, which is not leaking information through DNS or otherwise, would Chromium support such a private web PKI in any way?
It seems like today, if you exclude the CT bits, Chrome (and other browsers) do work with such a private web PKI as long as such private PKI is anchored by a public CA.
But with CT added to the mix it becomes difficult, it seems.
-- Fabrice
I'm not aware of any beyond the normal potential botching of pinning (not sure if you meant to send to the list, but feel free to re-CC).

I mean, yes, that issue can arise if you pinned the leaf, but that's one of the MANY reasons we STRONGLY discourage anyone from ever pinning the leaf. If you're going to change the cert (e.g. to send users in class A to a wildcard, and users in class B to a specific), you have to set your pins appropriately.

Although, arguably, if you have a wildcard, I don't know why you wouldn't always use that for users in class A and B if you're terminating at the same endpoint, but ... I'm not aware of any; you should be OK, unless you intentionally do something stupid (like pin the leaf, or use separate intermediates and only pin one of the intermediates, etc).

On Tue, Apr 19, 2016 at 11:13 AM, Richard Salz <rich...@gmail.com> wrote:

Are there any strange interactions with a browser going to www.example.com and cert pinning and seeing a cert for www.example.com at some times, and perhaps at other times seeing *.example.com? Or perhaps it's www and ww2, for example. The answer could well be "not aware of any" and/or "probably will be okay", but I think it prudent to ask.
It seems that today, if you exclude the CT bits, Chrome (and other browsers) do work with such a private web PKI, as long as that private PKI is anchored by a public CA.
But with CT added to the mix, it becomes difficult.
I think you missed one additional item: name-constrained CAs. Maybe
it is assumed that, because 6962-bis covers this, it doesn't need to be
said, but I think certificates issued by a name-constrained CA should
not be required to be logged directly. Alternatively, a name-constrained
CA should be prima facie evidence that redaction is explicitly
permitted for subtrees covered by the constraints.
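The subtree check this suggestion relies on can be sketched from the RFC 5280 dNSName name-constraint matching rule: a name is inside a permitted subtree if it equals the constraint or is a subdomain of it. The function and domain names below are illustrative, not taken from any implementation.

```python
def dns_name_in_subtree(name: str, constraint: str) -> bool:
    """RFC 5280-style dNSName constraint match: the name is inside the
    subtree if it equals the constraint or is a subdomain of it."""
    name = name.lower().rstrip(".")
    constraint = constraint.lower().strip(".")
    return name == constraint or name.endswith("." + constraint)

def redaction_permitted(cert_names, permitted_subtrees) -> bool:
    """Under the reading above, a name-constrained CA would be prima facie
    evidence that redaction is allowed for names inside its constraints:
    every name in the cert must fall inside some permitted subtree."""
    return all(
        any(dns_name_in_subtree(n, c) for c in permitted_subtrees)
        for n in cert_names
    )

assert redaction_permitted(["mail.corp.example.com"], ["corp.example.com"])
assert not redaction_permitted(["www.other.org"], ["corp.example.com"])
# Suffix match alone is not enough: note "evilcorp" is NOT a subdomain.
assert not redaction_permitted(["evilcorp.example.com"], ["corp.example.com"])
```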
1) Such a process would have to be done with intent, rather than be the by-product of using a "normal" browser, or such a browser risks being accused of violating users' privacy.
My point: If Chrome only trusts certs from public CAs that have been logged, an intranet whose certs are not logged would not work in Chrome.
Ryan,
I am a little confused and concerned when it comes to understanding
how Chrome views 6962-bis. Today Chrome implements RFC 6962 and has
qualified logs that follow 6962. My expectation is that, once
6962-bis is published, most 6962 logs will shut down and be replaced
with 6962-bis-compliant logs (or the existing logs will do some
transition to -bis). I also assume that Chrome will adopt -bis.
6962-bis right now is fairly clear on how redaction works and has a
whole section (https://tools.ietf.org/html/draft-ietf-trans-rfc6962-bis-14#section-4.3)
on name-constrained subordinate CAs. In the client section of -bis it
says "the TLS client MUST attempt to validate it against the server
certificate and against each of the zero or more suitable
name-constrained intermediates (Section 4.3)".
If you feel that Chrome is not going to be able to follow the 6962-bis
as written, then I would strongly suggest that you raise this in the
IETF TRANS group sooner rather than later. I would hate to see an early and
high profile client implementation fail to follow the standard.
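The quoted client requirement ("the TLS client MUST attempt to validate it against the server certificate and against each of the zero or more suitable name-constrained intermediates") amounts to trying a list of candidates in order. A minimal sketch, with the actual SCT signature verification abstracted behind a caller-supplied `verify` callback (the real check needs the log's public key and the reconstructed (pre)certificate, which is far beyond this snippet):

```python
def sct_accepted(sct, server_cert, name_constrained_intermediates, verify) -> bool:
    """Per the quoted 6962-bis text: try the server certificate first,
    then each suitable name-constrained intermediate; accept the SCT if
    any validation succeeds.  `verify` is a stand-in for the real SCT
    signature check."""
    candidates = [server_cert] + list(name_constrained_intermediates)
    return any(verify(sct, cert) for cert in candidates)

# Toy stand-in: an SCT "validates" only against whichever cert it names.
verify = lambda sct, cert: sct == ("sct-for", cert)

# An SCT logged for the name-constrained intermediate is accepted even
# though it does not validate against the leaf itself.
assert sct_accepted(("sct-for", "intermediate-ca"), "leaf", ["intermediate-ca"], verify)
assert sct_accepted(("sct-for", "leaf"), "leaf", [], verify)
assert not sct_accepted(("sct-for", "other"), "leaf", ["intermediate-ca"], verify)
```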
OK, clarifying my points then:
1) Today an organization can build a seemingly private intranet, using
a public CA (non-EV), by not logging the certs.
In the future, such certs would be rejected by Chrome as the certs
are not logged.
2) One alternative then would be that an organization would use a
private CA, and have their users install that CA on their systems so
Chrome accepts those intranet certs.
Excluding other possible alternatives that redaction could provide, is
that a fair interpretation?
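The two points above reduce to a small decision table. This is a caricature of the policy under discussion, not Chrome's actual code: once CT is required for all publicly trusted certificates, an unlogged cert chaining to a public root fails, while a locally installed private root is exempt from CT entirely.

```python
def chrome_accepts(chains_to_public_root: bool,
                   has_valid_scts: bool,
                   ct_required: bool) -> bool:
    """Sketch of the CT enforcement decision described in points 1 and 2."""
    if not chains_to_public_root:
        return True   # private CA the user installed: CT not required
    if not ct_required:
        return True   # today's behaviour for non-EV certificates
    return has_valid_scts

# Point 1: seemingly-private intranet on a public CA, certs unlogged.
assert chrome_accepts(True, False, ct_required=False)      # works today (non-EV)
assert not chrome_accepts(True, False, ct_required=True)   # rejected in future

# Point 2: switch to a private CA installed on users' systems.
assert chrome_accepts(False, False, ct_required=True)
```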
Is there a timetable for when Chrome will require CT for everything, not just EV?
On the other hand there are many cases where certificates are issued
with globally scoped DNS names that are not public. I am aware of very
large networks where the hostnames end in a publicly registered domain
but they are physically disconnected from the Internet. These
networks may use IP addresses listed as reserved or they may use IPs
assigned to the operating organizations. They are cross-organization
networks so using CAs that ship by default in browsers and operating
systems allows interoperability that would otherwise not be practical.
I don't see a compelling value in compelling disclosures for these
certificates.
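As an aside on the "IP addresses listed as reserved" point: Python's stdlib `ipaddress` module can classify an address against the IANA special-purpose registries, which is one quick way such a network operator could audit which of their addresses are in reserved versus globally routable space. The specific addresses below are only examples.

```python
import ipaddress

# RFC 1918 private space, as a disconnected network might use:
for addr in ["10.20.30.40", "172.16.0.1", "192.168.1.1"]:
    assert ipaddress.ip_address(addr).is_private

# Documentation space (TEST-NET-3) is also reserved:
assert ipaddress.ip_address("203.0.113.7").is_private

# An ordinary globally routable address is not:
assert not ipaddress.ip_address("8.8.8.8").is_private
```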