On 1/10/2018 1:06 PM, Ryan Sleevi via dev-security-policy wrote:
> On Mon, Oct 1, 2018 at 2:55 AM Dimitris Zacharopoulos <ji...@it.auth.gr> wrote:
>
>> Perhaps I am confusing different past discussions. If I recall correctly,
>> in previous discussions we described the case where an attacker tries to
>> get a certificate for a company "Example Inc." with domain "example.com".
>> This domain has a Registrant Address of "123 Example Street".
>>
>> The attacker registers a company with the same name "Example Inc." in a
>> different jurisdiction, with address "123 Example Street" and a different
>> (attacker's) phone number. How is the attacker able to get a certificate
>> for example.com? That would be a real "attack" scenario.
>>
> Yes, you are confusing things, as I would have thought this would be a
> 'simple' discussion. Perhaps this confusion comes from only thinking the
> domain name matters in making an 'attack'. If that's the case, we can do
> away with EV and OV entirely, because they do not provide value to that
> domain validation. Alternatively, if we say that information is relevant,
> then the ability to spoof any of that information also constitutes an
> 'attack' - to have the information for one organization presented in a
> different (logical, legal) organization's associated information.
I'm just trying to understand Ian's "attack" scenario. Domain
Validation is the baseline, and OV/EV builds on top of it by including
verified information for Relying Parties to assist in trust decisions.
There were suggestions in the past that OV/EV identity validation could
substitute for parts of Domain Validation, but that is clearly not the
case we are discussing here.
>
>> Unless this topic comes as a follow-up to the previous discussion of
>> displaying the "Stripe Inc." information to Relying Parties, with the
>> additional similarity in Street Address and not just the name of the
>> Organization. If I recall correctly, that second "Stripe Inc." was not a
>> "fake" entity but a "real" entity that was properly registered in some
>> Jurisdiction. This doesn't seem to be the same attack scenario as getting a
>> certificate for a Domain which you neither own nor control, but rather a
>> way to confuse Relying Parties. Certainly, in case of fraud, this leaves a
>> lot more evidence for the authorities to trace back to a source than a
>> case without Organization information would.
>>
> This also seems to be fixating on the domain name, but I have no idea why
> you've chosen that as the fixation, as the discussion to date doesn't
> involve that. I don't think it's your intent, but it sounds like you're
> saying "It's better for CAs to put inaccurate and misleading information in
> certificates, because at least then it's there" - which surely makes no
> sense.
>
No, this was not about the domain name but about the information
displayed to the Relying Party via the attributes included in the OV/EV
Certificate (primarily the Organization). So, I'm still uncertain whether
Ian's "misleading street address" was about getting a certificate for
domain "stripe.com" owned by "Stripe Inc." in California, or about
getting a certificate for "ian'sdomain.com" owned by "Stripe Inc." in
Kentucky, as in the previous discussions. The discussion so far indicates
that it's the latter, with the additional element that now the Street
Address is also misleading.
I am certainly not suggesting that CAs should put inaccurate and
misleading information in certificates :-) I merely said that if the
Subscriber introduces misleading or inaccurate information in
certificates via a reliable information source, then there will probably
be a trail leading back to the Subscriber. This fact, combined with the
lack of clear damage this can cause to Relying Parties, makes me wonder
why a Subscriber who wants to mislead Relying Parties doesn't just use a
DV Certificate, which probably leaves far less evidence tracing back to
them.
>> But they do have some Reliable and Qualified Information according to our
>> standards (for example registry number, legal representative, company
>> name). If a CA uses only this information from that source, why shouldn't
>> it be considered reliable? We all need to consider the fact that CAs use
>> tools to do their validation job effectively and efficiently. These tools
>> are evaluated continuously. Complete dismissal of tools must be justified
>> in a very concrete way.
>>
> No, they are not Reliable Data Sources. Using unreliable data sources,
> under the motto that "even a stopped clock is right twice a day", requires
> clear and concrete justification. The burden is on the CA to demonstrate
> the data source's reliability. If there is any reason to suspect that a
> Reliable Data Source contains inaccurate data, you should not be using it -
> for any data.
But this inaccurate data is neither used in the validation process nor
included in the certificates. Perhaps I didn't describe my thoughts
accurately. Let me try again with my previous example. Consider
an Information Source that documents, in its practices, that they provide:
1. the Jurisdiction of Incorporation (they check official government
records),
2. registry number (they check official government records),
3. the name of legal representative (they check official government
records),
4. the official name of the legal entity (they check official
government records),
5. street address (they check the address of a utility bill issued
under the name of the legal entity),
6. telephone numbers (self-reported),
7. color of the building (self-reported).
The CA evaluates this practice document and accepts information 1-5 as
reliable, dismisses information 6 as non-reliable, and dismisses
information 7 as irrelevant.
Your argument suggests that the CA should dismiss this information
source altogether, even though it clearly has acceptable and verified
information for 1-5. Is that an accurate representation of your statement?
>
>> I would accept your conclusion for an Information Source that claims, in
>> its practices, to verify some information against a secondary government
>> database, while the CA has evidence that it does not actually do so. This
>> means that the rest of the "claimed as verified" information is now
>> questionable. This is very similar to Browsers checking for misbehavior by
>> CAs that claim certain practices in their CP/CPS but don't actually
>> implement them. That would be a case where the CA might
>> decide to completely distrust that Information Source.
>>
>> I hope you can see the difference.
>>
> I hope you can understand that this is not an apt or accurate comparison.
> An organization that lacks a process, which is the case for unreliable
> data, is no different than an organization that declares a process but does
> not follow it.
But in my example, they don't lack a process; they clearly tell you
beforehand that the color of the building is self-reported. They do have
a process, and it's the call of the CA (or whoever uses this information)
whether to accept and use that information. I see a big difference
between an organization that declares a process and then doesn't follow
it, and an organization that has a process, lets you know beforehand that
some piece of information is self-reported, and leaves it to you to judge
whether to use that particular piece of information. The latter is
consistent and the former is not.
>
>> I remember this argument being supported in the past and, although I used
>> to agree with it, with the recent developments and CA disqualifications I
>> now support the opposite. That is, Subscribers have started to choose their
>> CA more carefully and to pay attention to trust, reputation and practices, because
>> of the risk of getting their Certificates challenged, revoked or the CA
>> distrusted.
>>
> So you believe it's in best interests of Subscribers to have CAs
> distrusted, certificates challenged and revoked, and for relying parties to
> constantly call into question the certificates they encounter? And that
> this is somehow better than consistently applied and executed validation
> processes? I wish I could share your "Mad Max" level of optimism, but it
> also fails to understand that we're not talking about Subscriber selection,
> we're talking about adversarial models. The weakest link matters, not
> "market reputation", as much as some CAs would like to believe.
Again, I might have described my thoughts unclearly. I was only trying
to say that Subscribers now pay more attention to the CA they choose than
they did before. They may not choose a "loose" or "weak" CA as easily,
because of the risks associated with that decision.
>
>> CAs are required to comply with very complicated standards and these
>> standards describe best practices on how to evaluate and re-evaluate
>> information sources.
>>
>> If CA A trusted the "street address" from a DS, used this information to
>> be included in Certificates and later during re-evaluation discovered that
>> this particular piece of information is unreliable, I would expect this to
>> be treated as an incident according to the current standards. The CA would
>> have to create a plan to mitigate this problem. Again, this depends on the
>> CA's decision on how to mitigate it. Some would revoke all the affected
>> certificates immediately, others would re-evaluate the "street address"
>> information using a different reliable source and revoke the Certificates
>> they were unable to re-verify, and others would do nothing. It is
>> impossible to cover all possible cases and require equal treatment for
>> incidents.
>>
> Funny enough, that subjectivity you just described is not permitted of CAs,
> and for good reason. Every one of those certificates needs to be revoked,
> per 4.9.1.1 of the BRs. The CA has also materially misstated its warranty for
> these certificates, per 9.6.1.
Yet we've seen this discretion exercised before, and definitely in
violation of the 24-hour window. The main issue we have seen some CAs
struggle with, and explain in Incident Reports, is that this information
might actually prove to be accurate and could be re-validated without
causing interruptions for Subscribers and Relying Parties.
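To illustrate the kind of triage some of those Incident Reports describe,
here is a rough Python sketch (purely hypothetical, and not a compliance
statement, given the 4.9.1.1 point above): re-verify the questionable
field against an independent reliable source and revoke only what cannot
be re-verified.

    # Hypothetical sketch of the triage, not a compliance statement.
    def triage(affected_certs, reverify_address):
        """Split affected certificates into re-verified and to-be-revoked.

        affected_certs is a list of (serial, street_address) pairs;
        reverify_address checks an address against an independent
        reliable source and returns True or False.
        """
        keep, revoke = [], []
        for serial, address in affected_certs:
            (keep if reverify_address(address) else revoke).append(serial)
        return keep, revoke

    # Example with a stand-in re-verification function:
    certs = [("01AB", "123 Example Street"), ("02CD", "456 Fake Street")]
    kept, revoked = triage(certs, lambda addr: addr == "123 Example Street")
    print(kept, revoked)  # ['01AB'] ['02CD']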
>
>
>> I fully support a white-list of RDS/QIS with global acceptance and
>> collective scrutiny but even these lists need to be re-evaluated
>> periodically as required by our standards. Perhaps a "global black-list"
>> might be set up for certain cases of misbehavior like the one described
>> above. However, this white-list should not be the only one allowed to be
>> used and national jurisdiction databases (company registries) should also
>> be allowed provided they are adequately evaluated by the CA according to
>> our standards.
>>
> As anyone with any security background can tell you, whitelists address the
> objectives, whereas blacklists address the appearances.
Sure, but perhaps both are needed. Look at Mozilla banning E&Y Hong
Kong. We might encounter some reliable sources that misbehave too.
>
> If you believe that there are national jurisdictional databases, they can
> be added to the whitelist. Indeed, the entire point would be to ensure
> that, for the appropriate jurisdictional boundary, there's a clear
> indication as to appropriate data sources. Then, there is no need for CA
> discretion - or indiscretion.
You are basically suggesting that the evaluation of a data source
performed by the CA (at least for the smaller jurisdictions) be made
public and added to the white-list. I'm fine with that. However, we will
face the same problem if, during re-evaluation, we discover that some
piece of information is not as reliable as we thought.
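To illustrate, here is a rough Python sketch of what such a shared
white-list entry might look like (the source name and field list are
hypothetical), including a re-evaluation date so that stale entries can
be flagged and revisited:

    # Hypothetical sketch of a shared, per-jurisdiction white-list entry.
    WHITELIST = {
        "GR": [
            {
                "source": "Example National Business Registry",  # hypothetical
                "reliable_fields": {"legal_entity_name", "registry_number",
                                    "jurisdiction_of_incorporation"},
                "last_evaluated": "2018-09-01",
            },
        ],
    }

    def allowed(jurisdiction, source, field):
        """May `field` from `source` be relied on in this jurisdiction?"""
        return any(entry["source"] == source and field in entry["reliable_fields"]
                   for entry in WHITELIST.get(jurisdiction, []))

    print(allowed("GR", "Example National Business Registry", "street_address"))
    # False - the street address would have been dropped from this entry
    # after a re-evaluation found it unreliable.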
>
>> I also support the idea of describing which information is evaluated as
>> "reliable" from a particular source and not completely dismiss a source if
>> they describe in their practices that they accept "some other"
>> irrelevant-to-the-CA information as self-reported.
>>
> So you would also see the whitelist broken down by jurisdiction and by data
> source. Unless and until there is a whitelist, there is no safe way to
> permit that usage you're describing.
>
>
>> It is a very challenging goal, especially because there are so many
>> different jurisdictions from which to collect reliable information. As a
>> more realistic goal, perhaps we could describe the ideal data source
>> evaluation and periodic re-evaluation process more explicitly in the
>> Guidelines, with clear and auditable criteria.
>>
> We're not short of describing expectations. We're short of CAs meeting
> expectations. And as I said, for an adversarial model, it does not matter
> what the best or 'most' do, it only matters what a single one permits. As
> such, a solution to "double down" on language that allows a CA to
> interpretive dance their way out of the objectives is not valuable, nor is
> any solution that relies on auditor review. Indeed, the main argument for
> 'auditable' criteria would be so that CAs could disguise their shady
> practices through a lack of transparent reporting, rather than the
> objective of this thread, which is to improve it through transparency.
My suggestion was that between now and "white-lists with full
transparency that all CAs MUST use", there might be some intermediate
steps to improve the current processes.
Dimitris.