
CA disclosure of revocations that exceed 5 days [Was: Re: Incident report D-TRUST: syntax error in one tls certificate]


Dimitris Zacharopoulos

Nov 29, 2018, 4:03:50 PM
to mozilla-dev-s...@lists.mozilla.org
I didn't want to hijack the thread so here's a new one.

On 29/11/2018 6:39 PM, Ryan Sleevi wrote:
>
>
> On Thu, Nov 29, 2018 at 2:16 AM Dimitris Zacharopoulos
> <ji...@it.auth.gr <mailto:ji...@it.auth.gr>> wrote:
>
> Mandating that CAs disclose revocation situations that exceed the
> 5-day
> requirement with some risk analysis information, might be a good
> place
> to start.
>
>
> This was proposed several times by Google in the Forum, and
> consistently rejected, unfortunately.

Times and circumstances change. When I brought this up at the Server
Certificate Working Group of the CA/B Forum
(https://cabforum.org/pipermail/servercert-wg/2018-September/000165.html),
there was no open disagreement from CAs. However, think about CAs that
decide to extend the 5-days (at their own risk) because of extenuating
circumstances. Doesn't this community want to know what these
circumstances are and evaluate the gravity (or not) of the situation?
The only way this could happen in a consistent way among CAs would be to
require it in some kind of policy.

This list has seen disclosures of revocation cases from CAs, mainly as
part of incident reports. By disclosure I mean that CAs shared that
certain Subscribers (we know who these Subscribers are because their
Certificates were listed as part of the incident report) would be
damaged if the mis-issued certificates were revoked within 24 hours.
Now, depending on the circumstances, this might be extended to 5 days.

> I don't consider 5 days (they are not even working days) to be
> adequate
> warning period to a large organization with slow reflexes and long
> procedures.
>
>
> Phrased differently: You don't think large organizations are currently
> capable, and believe the rest of the industry should accommodate that.

"Tolerate" would probably be the word I'd use instead of "accommodate".

>
> Do you believe these organizations could respond within 5 days if
> their internet connectivity was lost?

I think there is different impact. Losing network connectivity would
have "real" and large (i.e. all RPs) impact compared to installing a
certificate with -say- 65 characters in the OU field which may cause
very few problems to some RPs that want to use a certain web site.


> For example, if many CAs violate the 5-day rule for revocations
> related
> to improper subject information encoding, out of range, wrong
> syntax and
> that sort, Mozilla or the BRs might decide to have a separate
> category
> with a different time frame and/or different actions.
>
>
> Given the security risks in this, I think this is extremely harmful to
> the ecosystem and to users.
>
> It is not the first time we talk about this and it might be worth
> exploring further.
>
>
> I don't think any of the facts have changed. We've discussed for
> several years that CAs have the opportunity to provide this
> information, and haven't, so I don't think it's at all proper to
> suggest starting a conversation without structured data. CAs that are
> passionate about this could have supported such efforts in the Forum
> to provide this information, or could have demonstrated doing so on
> their own. I don't think it would at all be productive to discuss
> these situations in abstract hypotheticals, as some of the discussions
> here try to do - without data, that would be an extremely unproductive
> use of time.

There were voices during the SC6 ballot discussion that wanted to extend
the 5 days to something more. We continuously see CAs that either detect
or learn that they have mis-issued Certificates yet fail to revoke
within 24 hours or even 5 days, because their Subscribers have problems
and the RPs would be left with no service until the certificates were
replaced. I don't think we are having a hypothetical discussion; we have
seen real cases being disclosed in m.d.s.p., but it would be important to
have a policy in place requiring disclosure of more information.
Perhaps that would work as a deterrent against CAs revoking past the 5
days when they don't have strong arguments to support their decisions in
public.

> As a general comment, IMHO when we talk about RP risk when a CA
> issues a
> Certificate with -say- longer than 64 characters in an OU field, that
> would only pose risk to Relying Parties *that want to interact
> with that
> particular Subscriber*, not the entire Internet.
>
>
> No. This is demonstrably and factually wrong.
>
> First, we already know that technical errors are a strong sign that
> the policies and practices themselves are not being followed - both
> the validation activities and the issuance activities result from the
> CA following its practices and procedures. If a CA is not following
> its practices and procedures, that's a security risk to the Internet,
> full stop.

You describe it as a black/white issue. I understand your argument that
other control areas will likely have issues but it always comes down to
what impact and what damage these failed controls can produce. Layered
controls and compensating controls in critical areas usually lower the
risk of severe impact. The Internet is probably safe and will not break
if for example a certificate with 65-character OU is used on a public
web site. It's not the same as a CA issuing SHA1 Certificates with
collision risk.

>
> Second, it presumes (incorrectly) that interoperability is not
> something valuable. That is, if say the three existing, most popular
> implementations all do not check whether or not it's longer than 64
> characters (for example), and a fourth implementation would like to
> come along, they cannot read the relevant standards and implement
> something interoperable. This is because 'interoperability' is being
> redefined as 'ignoring' the standard - which defeats the purposes of
> standards to begin with. These choices - to permit deviations -
> creates risks for the entire ecosystem, because there's no longer
> interoperability. This is equally captured in
> https://tools.ietf.org/html/draft-iab-protocol-maintenance-01
>
> The premise to all of this is that "CAs shouldn't have to follow
> rules, browsers should just enforce them," which is shocking and
> unfortunate. It's like saying "It's OK to lie about whatever you want,
> as long as you don't get caught" - no, that line of thinking is just
> as problematic for morality as it is for technical interoperability.
> CAs that routinely violate the standards create risk, because they
> have full trust on the Internet. If the argument is that the CA's
> actions (of accidentally or deliberately introducing risk) is the
> problem, but that we shouldn't worry about correcting the individual
> certificate, that entirely misses the point that without correcting
> the certificate, there's zero incentive to actually follow the
> standards, and as a result, that creates risk for everyone.
> Revocation, if you will, is the "less worse" alternative to complete
> distrust - it only affects that single certificate, rather than every
> one of the certificates the CA has issued. The alternative - not
> revoking - simply says that it's better to look at distrust options,
> and that's more risk for everyone.
>

I absolutely agree that interoperability is something valuable that
should be pursued by the ecosystem. Browsers and the majority of CAs
work in that direction. It's just that if a browser strictly
enforces a requirement from a standard (e.g. rejects a certificate that
has an OU field with more than 64 characters), it makes a far bigger
difference towards the goal of interoperability than a CA that
simply issues certificates with at most 64 characters in the OU. If
browsers enforced these rules, the difference would be so big that the
problematic certificate would be immediately discovered by the
Subscriber, who would complain to the CA, and the Certificate would most
likely be revoked immediately since it wouldn't be usable.
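
As a minimal sketch of the strict check described above (assuming the
Python "cryptography" package; illustrative only, not any browser's real
code), a verifier could flag OU values that exceed the X.520 upper bound:

    from cryptography import x509
    from cryptography.x509.oid import NameOID

    UB_ORGANIZATIONAL_UNIT_NAME = 64  # upper bound from X.520 / RFC 5280 Appendix A

    def oversized_ou_values(pem_bytes):
        # Return the subject OU values that exceed the 64-character upper bound.
        cert = x509.load_pem_x509_certificate(pem_bytes)
        ous = cert.subject.get_attributes_for_oid(NameOID.ORGANIZATIONAL_UNIT_NAME)
        return [a.value for a in ous if len(a.value) > UB_ORGANIZATIONAL_UNIT_NAME]

A strict verifier would reject the certificate whenever this list is
non-empty, which is exactly the kind of enforcement discussed above.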

What I meant to say in my original argument is that the "damage" created
by a certificate that fails to strictly comply with RFC5280 and the rest
of the X.* standards, as long as popular browsers "allow it", is
primarily an issue between a Subscriber (that maintains a web site), and
the particular Relying Parties that want to establish a secure
connection to that web site. That's not the entire Internet. This is why
I compared it with "a situation where a site operator forgets to send
the intermediate CA Certificate in the chain. These particular RPs will
fail to get TLS working when they visit the Subscriber's web site".

Perhaps I have misunderstood your argument, but when we are discussing
revocation timelines it looks a little extreme to say that a CA
claiming "some important reasons" (I'm not saying whether they are valid
reasons or not) for delaying a certificate revocation has zero incentive
to follow the standards.


> Finally, CAs are terrible at assessing the risk to RPs. For example,
> negative serial numbers were prolific prior to the linters, and those
> have issues in as much as they are, for some systems, irrevocable.
> This is because those systems implemented the standards correctly -
> serials are positive INTEGERs - yet had to account for the fact that
> CAs are improperly encoding them, such as by "making" them positive
> (adding the leading zero). This leading zero then doesn't get stripped
> off when looking up by Issuer & Serial Number, because they're using
> the "spec-correct" serial rather than the "issuer-broken" serial.
> That's an example where the certificate "works", no report is filed,
> but the security and ecosystem properties are fatally compromised. The
> alternatives for such implementation are:
> 1) Reject such certificates (but see above about market forces and
> interoperability)
> 2) Correct both the certificate and the CRL/OCSP serial number (which
> then creates risk because you're not actually checking _any_
> certificates true serial)
> 3) Allow negative serial numbers (which then makes it harder for
> others to do #1)
>
> As I said, CAs have been terrible at assessing risk to the ecosystem
> for their decisions. The page at
> https://wiki.mozilla.org/SecurityEngineering/mozpkix-testing#Things_for_CAs_to_Fix
> shows how bad such interoperability harms improvements - for example,
> all of these hacks that Mozilla had to add in order to ship a more
> secure, more efficient certificate verifier.
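
As a minimal illustration of the leading-zero problem described above
(plain Python, illustrative only - not any verifier's real code): DER
INTEGERs are two's complement, so the issuer's "negative" serial and the
zero-padded "spec-correct" serial are two different values, and a
revocation lookup keyed on one will not match data published under the
other.

    def der_integer_value(content_octets):
        # Interpret the content octets of a DER INTEGER (two's complement).
        return int.from_bytes(content_octets, byteorder="big", signed=True)

    broken = bytes.fromhex("DEADBEEF")  # high bit set: encodes a negative INTEGER
    fixed = b"\x00" + broken            # spec-correct positive encoding

    print(der_integer_value(broken))    # -559038737 (what the broken issuer encoded)
    print(der_integer_value(fixed))     # 3735928559 (what a spec-correct parser stores)

    # A lookup by Issuer & Serial Number keyed on the spec-correct value misses
    # revocation data published under the issuer-broken value, so for such an
    # implementation the certificate becomes effectively irrevocable.
    revoked = {der_integer_value(broken)}
    print(der_integer_value(fixed) in revoked)  # False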

As I said earlier, times change. The bar is raised, this industry
matures day after day, and things are hopefully improving (security-wise).
There is certainly more security awareness in this ecosystem today than
there was 5 or 10 years ago. Specifically for these "past sins", we have
seen browsers use telemetry to measure how many certificates fail to
follow specific requirements, and these numbers should normally
decrease over time. Once they reach an acceptably low level, we
usually see code changes that enforce these requirements and remove the
"hacks". Of course, this is a different topic for discussion.

In conclusion, after repeatedly seeing CAs requesting or effectively
taking more time to revoke certificates than the existing requirements
allow, I believe that a policy rule requiring CAs to disclose
revocation cases that need more than 5 days to complete (i.e. revoke the
certificate), provided that the CA submits risk analysis information
after working with the affected Subscriber(s), is a reasonable way forward.


Dimitris.


Ryan Sleevi

Nov 29, 2018, 6:49:36 PM
to Dimitris Zacharopoulos, mozilla-dev-s...@lists.mozilla.org
On Thu, Nov 29, 2018 at 4:03 PM Dimitris Zacharopoulos via
dev-security-policy <dev-secur...@lists.mozilla.org> wrote:

> I didn't want to hijack the thread so here's a new one.
>
>
> Times and circumstances change.


You have to demonstrate that.

> When I brought this up at the Server
> Certificate Working Group of the CA/B Forum
> (https://cabforum.org/pipermail/servercert-wg/2018-September/000165.html),
>
> there was no open disagreement from CAs.


Look at the discussion during Wayne’s ballot. Look at the discussion back
when it was Jeremy’s ballot. The proposal was as simplified as could be -
modeled after 9.16.3 of the BRs. It would have allowed for a longer period
- NOT an unbounded period, which is grossly negligent for publicly trusted
CAs.

> However, think about CAs that
> decide to extend the 5-days (at their own risk) because of extenuating
> circumstances. Doesn't this community want to know what these
> circumstances are and evaluate the gravity (or not) of the situation?
> The only way this could happen in a consistent way among CAs would be to
> require it in some kind of policy.


This already happens. This is a matter of the CA violating any contracts or
policies of the root store it is in, and is already being handled by those
root stores - e.g. misissuance reports. What you’re describing as a problem
is already solved, as are the expectations for CAs - that violating
requirements is a path to distrust.

The only “problem” you’re solving is giving CAs more time, and there is
zero demonstrable evidence, to date, about that being necessary or good -
and rich and ample evidence of it being bad.

> > Phrased differently: You don't think large organizations are currently
> > capable, and believe the rest of the industry should accommodate that.
>
> "Tolerate" would probably be the word I'd use instead of "accommodate".


I chose accommodate, because you’d like the entire world to take on
systemic risk - and it is indeed systemic risk, to users especially - to
benefit some large companies.

Why stop with revocation, though? Why not just let CAs define their own
validation methods if they think they're equivalent? After all, if we can
trust CAs to make good judgements on revocation, why can’t we also trust
them with validation? Some large companies struggle with our existing
validation methods, why can’t we accommodate them?

That’s exactly what one of the arguments against restricting validation
methods was.

As I said, I think this discussion will not accomplish anything productive
without a structured analysis of the data. Not anecdata from one or two
incidents, but holistic - because for every 1 real need, there may have
been 9,999 unnecessary delays in revocation with real risk.

How do CAs provide this? For *all* revocations, provide meaningful data. I
do not see there being any value to discussing further extensions until we
have systemic transparency in place, and I do not see any good coming from
trying to change at the same time as placing that systemic transparency in
place, because there’s no way to measure the (negative) impact such change
would have.

>
> > Do you believe these organizations could respond within 5 days if
> > their internet connectivity was lost?
>
> I think there is different impact. Losing network connectivity would
> have "real" and large (i.e. all RPs) impact compared to installing a

certificate with -say- 65 characters in the OU field which may cause
> very few problems to some RPs that want to use a certain web site.


So you do believe organizations are capable of making timely changes when
necessary, and thus we aren’t discussing capabilities, but perceived
necessity. And because some organizations have been misled as to the role
of CAs, and thus don't feel it's necessary, they don't feel they should have to
use that capability.

I’m not terribly sympathetic to that at all. As you mention, they can
respond when all RPs are affected, so they can respond when their
certificate is misissued and thus revoked.

> You describe it as a black/white issue. I understand your argument that
> other control areas will likely have issues but it always comes down to
> what impact and what damage these failed controls can produce. Layered
> controls and compensating controls in critical areas usually lower the
> risk of severe impact. The Internet is probably safe and will not break
> if for example a certificate with 65-character OU is used on a public
> web site. It's not the same as a CA issuing SHA1 Certificates with
> collision risk.


It absolutely is, and we have seen this time and time again. The CAs most
likely to argue the position you’re taking are the CAs that have had the
most issues.

Do we agree, at least, that any CA violating the BRs or Root Policies puts
the Internet ecosystem at risk?

It seems the core of your argument is how much risk should be acceptable,
and the answer is none. Zero. The point of postmortems is to get us to a
point where, as an industry, we’ve taken every available step to reduce and
eliminate that risk, by learning from our collective mistakes. Lives and
businesses are on the line - a single mistake can cost billions - and
there’s no excuse for just shrugging and saying “well, yanno, there’s risk
and there’s risk”

Go read
https://zakird.com/papers/zlint.pdf to see a systemic, thorough, analysis
that supports what I described to you, and disagrees with your framing. We
know what the warning signs are - and it’s continued framing of “low” risk
that collectively presents “severe” risk.

I literally provided you an explanation for why what you're describing is
problematic and unreasonable. Please do re-read it. In a new system, sure,
that'd be great - but the existing system absolutely penalizes first movers.

Look at SC12 as an example. CAs would really like browsers to make that
change, because then they can have their customers blame browsers for their
misissuance. The customer is not going to say “Guess I should replace my
cert”, but rather, blame the browser. The links I provided showed how CAs
widespread disregard for the standards created real compatibility and
security issues - and a browser just rejecting them doesn’t actually fix
it, because the site says “well, works in other browsers, so the bug must
be the browsers, not mine.”

> What I meant to say in my original argument is that the "damage" created
> by a certificate that fails to strictly comply with RFC5280 and the rest
> of the X.* standards, as long as popular browsers "allow it", is
> primarily an issue between a Subscriber (that maintains a web site), and
> the particular Relying Parties that want to establish a secure
> connection to that web site. That's not the entire Internet. This is why
> I compared it with "a situation where a site operator forgets to send
> the intermediate CA Certificate in the chain. These particular RPs will
> fail to get TLS working when they visit the Subscriber's web site".


It’s a perfect example of why your argument DOESN’T work. As Mozilla has
shared in the CA/B Forum, people don’t fix their site - they blame the
browser, and keep on with the brokenness. Firefox is the one having to
change to “accommodate” that.

> Perhaps I have misunderstood your argument, but when we are discussing
> revocation timelines it looks a little extreme to say that a CA
> claiming "some important reasons" (I'm not saying whether they are valid
> reasons or not) for delaying a certificate revocation has zero incentive
> to follow the standards.


It isn’t extreme, because even the incident reports from 2014/2015 show
exactly this argument being made. Your arguments themselves continue to
show that, by suggesting that “only” the site is impacted. And yet, if
every site is doing it because “only” that site is impacted, you have the
whole ecosystem doing it.

This myopic view of trying to assess per-Certificate is inherently
non-scalable. You haven’t actually proposed any way to address that. What
happens when a CA is doing 100 “exceptional” non-revocations? What about
10,000? We’ve seen examples of both discussed - so nothing is new here. Do
we make CAs also pay penalty fees, so that the community can ensure there
is adequate staffing to investigate and review this? If we do that, what’s
to prevent CAs from just seeing that as buying indulgences?

Your whole proposal breaks down at scale. It’s like asking “What’s the harm
if I start stealing candy bars - after all, it’s only a candy bar?” -
without actually acknowledging the consequences of normalizing that
behavior. It tries to frame the conversation as being about a $1 candy,
which, while appealing, isn’t actually what is being discussed.

Maybe you’re blinded by optimism and faith in CAs. I think if you take a
more realistic, grounded, and holistic view of the ecosystem - one that
considers we were where you propose to go 8 years ago (and it was
disastrous for the ecosystem), one that considers this is a shared commons,
and one that acknowledges the misaligned incentives - you would realize we
already know how and why this sort of suggestion doesn’t actually work in
practice, because we have been there, done that.

You said that, without any systemic data, without any support. Having the
same conversation tomorrow that we had today because, hey, “times change”,
may even be true, but it isn’t productive in the least.

I disagree that we’ve seen systemic improvements as a whole. There are a
few CAs trying to do better, but the incident reporting of today clearly
shows exactly what I’m saying - that the industry has not actually matured
as you suggest. What has changed has largely been driven by those outside
CAs - whether those who were wanting to become CAs (Amazon with certlint)
or those analyzing CA’s failures (ZLint).

> In conclusion, after repeatedly seeing CAs requesting or effectively
> taking more time to revoke certificates than the existing requirements
> allow, I believe that a policy rule requiring CAs to disclose
> revocation cases that need more than 5 days to complete (i.e. revoke the
> certificate), provided that the CA submits risk analysis information
> after working with the affected Subscriber(s), is a reasonable way forward.


I think it is grossly negligent and irresponsible, and is only reasonable
if one ignores the past two decades (such as by glibly saying “times
change”). A proposal based on submitting risk analysis merely outsources
the costs from the Subscriber onto this community and RPs in general - who
could easily become consumed with reading thousands upon thousands of
these. Such an act is incredibly hostile to meaningful trust in CAs and the
ecosystem.

Far more compelling is to reduce the timeframe in which CAs can "go rogue" by
not revoking, by reducing the overall certificate lifetime. By improving the
rate at which certificates are replaced, the "hardship" you spoke of
(though you seemingly agree it's not actually there) can be reduced. This can
be done without introducing the need for costly, and subjective, risk
assessments or “exceptions”.

In any event, I think it’s unproductive to try to bring this conversation
up without concrete data. If multiple CAs committed to publishing all of
their revocation data in a systemic way - reasons, hardships, etc (NOT just
the exceptional cases) - and committed to making funds available to be used
to rigorously analyze this (e.g. funding Mozilla to hire someone for this,
funding peer reviewed papers) - it might be worth revisiting. Then we could
have concrete data that could, for example, show that these “hardships” are
one-in-a-million (certs), and more reflective of poor organization controls
by CAs and Subscribers, rather than a systemic problem to address.

Dimitris Zacharopoulos

Nov 30, 2018, 4:24:52 AM
to ry...@sleevi.com, mozilla-dev-s...@lists.mozilla.org


On 30/11/2018 1:49 AM, Ryan Sleevi wrote:
>
>
> On Thu, Nov 29, 2018 at 4:03 PM Dimitris Zacharopoulos via
> dev-security-policy <dev-secur...@lists.mozilla.org
> <mailto:dev-secur...@lists.mozilla.org>> wrote:
>
> I didn't want to hijack the thread so here's a new one.
>
>
> Times and circumstances change.
>
>
> You have to demonstrate that.

It's self-proved :-)

>
> When I brought this up at the Server
> Certificate Working Group of the CA/B Forum
> (https://cabforum.org/pipermail/servercert-wg/2018-September/000165.html),
>
> there was no open disagreement from CAs.
>
>
> Look at the discussion during Wayne’s ballot. Look at the discussion
> back when it was Jeremy’s ballot. The proposal was as simplified as
> could be - modeled after 9.16.3 of the BRs. It would have allowed for
> a longer period - NOT an unbounded period, which is grossly negligent
> for publicly trusted CAs.

Agreed.

>
> However, think about CAs that
> decide to extend the 5-days (at their own risk) because of
> extenuating
> circumstances. Doesn't this community want to know what these
> circumstances are and evaluate the gravity (or not) of the situation?
> The only way this could happen in a consistent way among CAs would
> be to
> require it in some kind of policy.
>
>
> This already happens. This is a matter of the CA violating any
> contracts or policies of the root store it is in, and is already being
> handled by those root stores - e.g. misissuance reports. What you’re
> describing as a problem is already solved, as are the expectations for
> CAs - that violating requirements is a path to distrust.
>
> The only “problem” you’re solving is giving CAs more time, and there
> is zero demonstrable evidence, to date, about that being necessary or
> good - and rich and ample evidence of it being bad.

I already mentioned that this is separate from the incident report (of
the actual mis-issuance). We have repeatedly seen post-mortems that say
that for some specific cases (not all of them), the revocation of
certificates will require more time. Even the underscore revocation
deadline creates problems for some large organizations as Jeremy pointed
out. I understand the compatibility argument and CAs are doing their
best to comply with the rules but you are advocating there should be no
exceptions and you say that without having looked at specific evidence
that would be provided by CAs asking for exceptions. You would rather
have Relying Parties lose their internet services from one of the
Fortune 500 companies. As a Relying Party myself, I would hate it if I
couldn't connect to my favorite online e-shop or bank or webmail. So I'm
still confused about which Relying Party we are trying to help/protect
by requiring the immediate revocation of a Certificate that has 65
characters in the OU field.

I also see your point that "if we start making exceptions..." it's too
risky. I'm just suggesting that there should be some tolerance for
extended revocations (to help with collecting more information) which
doesn't necessarily mean that we are dealing with a "bad" CA. I trust
the Mozilla module owner's judgement to balance that. If the community
believes that this problem is already solved, I'm happy with that :)
I don't see how data and evidence for "all revocations" somehow makes
things better, unless I misunderstood your proposal. It's not a balanced
request. It would be a huge effort for CAs to write risk assessment
reports for each revocation. Why not focus on the rare cases which
justifies the extra effort from CAs to write a disclosure letter
requesting more days for revocation? Why not add some rules on what's
the minimum information that's expected for these cases? If you want
this to be part of the incident report, that's fine.

The systemic transparency you are asking for, as I understand it, would be
m.d.s.p. We already see incident reports being published here. CAs who
seek more than 5 days for revoking affected certificates would disclose
more details about the specifics of these revocations.
CAs are evaluated using schemes based on Risk Management. There is no
zero risk. It's like saying there is 100% security. You can add controls
to minimize risk to acceptable levels. Even when mitigations are added,
you have residual risk. However, layered controls and compensating
controls help to avoid disasters. I just don't believe it's black or
white and I think the module owners probably agree with that statement
(https://groups.google.com/d/msg/mozilla.dev.security.policy/tbSkcGHg1kA/CkrM6taBAwAJ).
If that was the case, every single BR violation or Root Policy violation
would be treated as a trigger for a complete distrust.

> Go read
> https://zakird.com/papers/zlint.pdf to see a systemic, thorough,
> analysis that supports what I described to you, and disagrees with
> your framing. We know what the warning signs are - and it’s continued
> framing of “low” risk that collectively presents “severe” risk.

I wasn't aware of that paper, it contains valuable information, thank
you for sharing. Notice the abstract that says "We find that the number
of errors has drastically reduced since 2012. In 2017, only 0.02% of
certificates have errors". To me, this is a positive indicator that the
ecosystem is continuously improving.

I have listened to this argument before but unfortunately it leads
nowhere. How badly are we interested in interop to justify being "the
bad guys" and how "disruptive" will our actions be for Relying Parties?
It is a very difficult problem to solve but the ecosystem has made progress:
- disclosure of intermediate CA Certificates
- identifying and fixing problematic OCSP responders
- increased supervision to the issued certificates with CT and linters
providing public information about mis-issuances
- browsers enforcing BR requirements with code (e.g. certificate
validity duration)

With these controls in place, CAs are very much obligated to follow the
rules or face the consequences. Browsers use telemetry to detect
violations of the standards and create plans for addressing those issues.
These plans usually include discussions in m.d.s.p. or the CA/B Forum in
order for the CAs to participate and create the necessary rules -along
with the browsers- to address these incompatibilities.

>
> What I meant to say in my original argument is that the "damage"
> created
> by a certificate that fails to strictly comply with RFC5280 and
> the rest
> of the X.* standards, as long as popular browsers "allow it", is
> primarily an issue between a Subscriber (that maintains a web
> site), and
> the particular Relying Parties that want to establish a secure
> connection to that web site. That's not the entire Internet. This
> is why
> I compared it with "a situation where a site operator forgets to send
> the intermediate CA Certificate in the chain. These particular RPs
> will
> fail to get TLS working when they visit the Subscriber's web site".
>
>
> It’s a perfect example of why your argument DOESN’T work. As Mozilla
> has shared in the CA/B Forum, people don’t fix their site - they blame
> the browser, and keep on with the brokenness. Firefox is the one
> having to change to “accommodate” that.

Or, they might blame the CA for providing them a "thing" that doesn't
work with all major browsers :)

>
> Perhaps I have misunderstood your argument, but when we are discussing
> revocation timelines it looks a little extreme to say that a CA
> claiming "some important reasons" (I'm not saying whether they are valid
> reasons or not) for delaying a certificate revocation has zero incentive
> to follow the standards.
>
>
> It isn’t extreme, because even the incident reports from 2014/2015
> show exactly this argument being made. Your arguments themselves
> continue to show that, by suggesting that “only” the site is impacted.
> And yet, if every site is doing it because “only” that site is
> impacted, you have the whole ecosystem doing it.
>
> This myopic view of trying to assess per-Certificate is inherently
> non-scalable. You haven’t actually proposed any way to address that.
> What happens when a CA is doing 100 “exceptional” non-revocations?
> What about 10,000? We’ve seen examples of both discussed - so nothing
> is new here. Do we make CAs also pay penalty fees, so that the
> community can ensure there is adequate staffing to investigate and
> review this? If we do that, what’s to prevent CAs from just seeing
> that as buying indulgences?

This statement underestimates the reflexes of the Root programs. The
reason for requiring disclosure is meant as a first step for
understanding what's happening in reality and collecting some meaningful
data by policy. Once Mozilla collects enough information to make a safe
estimation, the policy can be updated to allow or forbid certain
situations. If, for example, m.d.s.p. receives 10 or 20 revocation
exception cases within a 12-month period and none of them is convincing
to the community and module owners to justify the exception, the policy
can be updated with clear rules about the risk of distrust if the
revocation doesn't happen within 5 days. That would be a simple, clear
rule. Does Mozilla have the information to make such an aggressive rule
change today? Maybe.

I already provided some facts that I believe assisted in the security
improvement of the ecosystem. The paper you cited also agrees with that
statement. It's an ongoing effort for continuous improvement.

>
> I disagree that we’ve seen systemic improvements as a whole. There are
> a few CAs trying to do better, but the incident reporting of today
> clearly shows exactly what I’m saying - that the industry has not
> actually matured as you suggest. What has changed has largely been
> driven by those outside CAs - whether those who were wanting to become
> CAs (Amazon with certlint) or those analyzing CA’s failures  (ZLint).

If we truly care about the ecosystem, it doesn't really matter where the
systemic improvements come from. CAs and Browsers have contributed in
the Network Security Guidelines, the BRs (to improve and limit
validation methods, add CAA and so much more). I agree we should expect
every CA to develop tools or use existing ones to ensure they are
complying with all rules. We occasionally see some exceptions and this
is evaluated on a case-by-case basis. "Accidents" and mistakes do happen
and as it has been discussed in the past, it's collective failures that
pose the greatest risk and we have seen hard decisions being made to
minimize or eliminate these risks.

I already stated my reasoning for keeping the disclosure just for
exceptions. Currently, the only systemic technical way of providing
something about the revocation is the revocation reason, and that's
limited by RFC5280.

I also protest against the "grossly negligent and irresponsible" part
and I'm afraid statements like that alienate people from participating
and proposing anything. Simply disagreeing would ultimately have the
same effect in this conversation. You have already provided good
arguments against my proposal for people to evaluate.


Dimitris.

Ryan Sleevi

Nov 30, 2018, 11:13:35 AM
to Dimitris Zacharopoulos, Ryan Sleevi, mozilla-dev-security-policy
On Fri, Nov 30, 2018 at 4:24 AM Dimitris Zacharopoulos <ji...@it.auth.gr>
wrote:

>
>
> On 30/11/2018 1:49 π.μ., Ryan Sleevi wrote:
>
>
>
> On Thu, Nov 29, 2018 at 4:03 PM Dimitris Zacharopoulos via
> dev-security-policy <dev-secur...@lists.mozilla.org> wrote:
>
>> I didn't want to hijack the thread so here's a new one.
>>
>>
>> Times and circumstances change.
>
>
> You have to demonstrate that.
>
>
> It's self-proved :-)
>

This sort of glib reply shows a lack of good-faith effort to meaningfully
engage. It's like forcing the discussion every minute, since, yanno, "times
and circumstances have changed".

I gave you concrete reasons why saying something like this is a
demonstration of a weak and bad-faith argument. If you would like to
meaningfully assert this, you would need to demonstrate what circumstances
have changed in such a way as to warrant a rediscussion of something that
gets 'relitigated' regularly - and, in fact, was something discussed in the
CA/Browser Forum for the past two years. Just because you're unsatisfied
with the result and now we're in a month that ends in "R" doesn't mean time
and circumstances have changed meaningfully to support the discussion.

Concrete suggestions involved a holistic look at _all_ revocations, since
the discussion of exceptions is relevant to know whether we are discussing
something that is 10%, 1%, .1%, or .00001%. Similarly, having the framework
in place to consistently and objectively measure that helps us assess
whether any proposals for exceptions would change that "1%" from being
exceptional to seeing "10%" or "100%" being claimed as exceptional under
some new regime.

In the absence of that, it's an abusive and harmful act.


> I already mentioned that this is separate from the incident report (of the
> actual mis-issuance). We have repeatedly seen post-mortems that say that
> for some specific cases (not all of them), the revocation of certificates
> will require more time.
>

No. We've seen the claim it will require more time, frequently without
evidence. However, I do think you're not understanding - there is nothing
preventing CAs from sharing details, for all revocations they do, about the
factors they considered, and the 'exceptional' cases to the customers,
without requiring any BR violations (of the 24 hour / 5 day rule). That CAs
don't do this only undermines any validity of the argument you are making.

There is zero legitimate reason to normalize aberrant behaviour.


> Even the underscore revocation deadline creates problems for some large
> organizations as Jeremy pointed out. I understand the compatibility
> argument and CAs are doing their best to comply with the rules but you are
> advocating there should be no exceptions and you say that without having
> looked at specific evidence that would be provided by CAs asking for
> exceptions. You would rather have Relying Parties lose their internet
> services from one of the Fortune 500 companies. As a Relying Party myself,
> I would hate it if I couldn't connect to my favorite online e-shop or bank
> or webmail. So I'm still confused about which Relying Party we are trying
> to help/protect by requiring the immediate revocation of a Certificate that
> has 65 characters in the OU field.
>
> I also see your point that "if we start making exceptions..." it's too
> risky. I'm just suggesting that there should be some tolerance for extended
> revocations (to help with collecting more information) which doesn't
> necessarily mean that we are dealing with a "bad" CA. I trust the Mozilla
> module owner's judgement to balance that. If the community believes that
> this problem is already solved, I'm happy with that :)
>

The argument being made here is as odious as saying "We should have one day
where all crime is legal, including murder" or "Those who knowingly buy
stolen goods should be able to keep them, because they're using them".

I disagree that CAs are "doing their best" to comply with the rules. The
post-mortems continually show a lack of applied best practice. DigiCert's
example is, I think, a good one - because I do not believe it's reasonable
for DigiCert to have argued that there was ambiguity, given that prior to
the ballot, it was agreed they were forbidden, a ballot to explicitly
permit them failed, and the discussion of that ballot explicitly cited why
they weren't valid. From that, several non-DigiCert CAs took steps to
migrate their customers and cease issuance. As such, you cannot reasonably
argue DigiCert was doing "their best", unless you're willing to accept that
DigiCert's best is, in fact, far lower than the industry norm.

The framing about "Think about harm to the Subscriber" is, again, one that
is actively harmful, and, as coming from a CA, somewhat offensive, because
it shows a difference in perspective that further emphasizes why CA's
judgement cannot be trusted. In this regard, we're in agreement that the
certificates we're discussing are clearly misissued - the CA was never
authorized to have issued that certificate, and thus the Subscriber has
obtained it illegitimately. Regardless of whether the fault was their own
or not, the CA has "stolen", if you will, from the public bank of trust and
compatibility, that certificate, and then sold it to the Subscriber.

The arguments for why that should be OK have basically boiled into some
segment of trying to figure out whether the victims "deserved" it (that is,
was the car stolen from a church-going grandma or from a violent criminal)
and whether the buyer of the illicit goods really needs it ("They have very
important meetings"). To continue the argument-through-analogy (or, to more
aptly, highlight why I find the underlying argument offensive), it's like
saying it's OK to speed if you have a really important meeting to get to.
This is not about "medical emergencies" as a justification for speeding -
the situation here is entirely predictable to the Subscriber and entirely
under their control. In the Subscriber Agreement, every single Subscriber
agrees to and acknowledges that their CA will revoke if the CA screws up.
Thus, every single Subscriber needs to be prepared. The argument that "They
didn't know" or "They couldn't predict" is demonstrably and factually false.



>
> How do CAs provide this? For *all* revocations, provide meaningful data. I
> do not see there being any value to discussing further extensions until we
> have systemic transparency in place, and I do not see any good coming from
> trying to change at the same time as placing that systemic transparency in
> place, because there’s no way to measure the (negative) impact such change
> would have.
>
>
> I don't see how data and evidence for "all revocations" somehow makes
> things better, unless I misunderstood your proposal. It's not a balanced
> request. It would be a huge effort for CAs to write risk assessment reports
> for each revocation. Why not focus on the rare cases which justifies the
> extra effort from CAs to write a disclosure letter requesting more days for
> revocation? Why not add some rules on what's the minimum information that's
> expected for these cases? If you want this to be part of the incident
> report, that's fine.
>

As explained above, the core to the assertion being made here is that a
system of extended revocation is only usable for "exceptional" situations.
But we clearly know that everyone has an incentive to claim their situation
is exceptional. Without a structured analysis, before any changes, about
the nature of revocations, no one can assess whether this is .0001% of
revocations or 100% of revocations. Thus, this is an absolute and
non-negotiable pre-condition for such discussions about exceptional
situations.


> The systemic transparency you are asking for, as I understand it, would be
> m.d.s.p. We already see incident reports being published here. CAs who seek
> more than 5 days for revoking affected certificates would disclose more
> details about the specifics of these revocations.
>

Your failure to plan does not make an emergency. The idea that the
community should have only 5 days to discuss each case is itself a
problem. As we've seen during
"exceptional" discussions, Subscribers and CAs tend to assume that the
exception will be granted, and thus fail to take steps to prepare for it
being rejected. So, in effect, not only is the argument "There should be a
discussion" but "All representations from CAs and Subscribers should be
deemed as valid", despite there being ample evidence that such approaches
fundamentally and critically weaken security. That's extremely naive to
think "times and circumstances change".


> CAs are evaluated using schemes based on Risk Management.
>

That is a problem with the schemes, and why significant effort is being
placed to improve those schemes. "The status quo is bad, so what's the harm
in making things worse" is not a compelling narrative.


> There is no zero risk. It's like saying there is 100% security. You can
> add controls to minimize risk to acceptable levels. Even when mitigations
> are added, you have residual risk. However, layered controls and
> compensating controls help to avoid disasters. I just don't believe it's
> black or white and I think the module owners probably agree with that
> statement (
> https://groups.google.com/d/msg/mozilla.dev.security.policy/tbSkcGHg1kA/CkrM6taBAwAJ).
> If that was the case, every single BR violation or Root Policy violation
> would be treated as a trigger for a complete distrust.
>

Every single BR violation and Root Policy Violation is absolutely a
consideration for complete distrust. Whether or not it triggers complete
distrust is based on weighing the impact of that distrust. We absolutely
want to move to a world where BR violations are exceptions, not the rules.
Your proposal for how to do that is to make sure things aren't BR
violations. That would certainly solve the problem, by making the ecosystem
less reliable and trustworthy. My proposal is that CAs need to do better,
and their failure to adequately inform Subscribers of their Subscriber
Agreements does not a community problem make.


> Go read
> https://zakird.com/papers/zlint.pdf to see a systemic, thorough, analysis
> that supports what I described to you, and disagrees with your framing. We
> know what the warning signs are - and it’s continued framing of “low” risk
> that collectively presents “severe” risk.
>
>
> I wasn't aware of that paper, it contains valuable information, thank you
> for sharing. Notice the abstract that says "We find that the number of
> errors has drastically reduced since 2012. In 2017, only 0.02% of
> certificates have errors". To me, this is a positive indicator that the
> ecosystem is continuously improving.
>

Yes. Because CAs are being distrusted.


> I have listened to this argument before but unfortunately it leads
> nowhere. How badly are we interested in interop to justify being "the bad
> guys" and how "disruptive" will our actions be for Relying Parties? It is a
> very difficult problem to solve but the ecosystem has made progress:
> - disclosure of intermediate CA Certificates
> - identifying and fixing problematic OCSP responders
> - increased supervision to the issued certificates with CT and linters
> providing public information about mis-issuances
> - browsers enforcing BR requirements with code (e.g. certificate validity
> duration)
>
> With these controls in place, CAs are very much obligated to follow the
> rules or face the consequences. Browsers use telemetry to detect violations
> of the standards and create plans on addressing those issues. These plans
> usually include discussions in m.d.s.p. or the CA/B Forum in order for the
> CAs to participate and create the necessary rules -along with the browsers-
> to address these incompatibilities.
>

This is a fundamentally disgusting framing. And I do mean it with that
extreme and emotive word, because it's disgusting to suggest that enforcing
contracts and norms is the "bad guys", and is an appeal to the listeners of
this argument to try to place yourself as somehow arguing for the "good"
guys - to use the earlier analogy, to try and suggest you're Robin Hood
rather than Enron.

CAs have a critical and fundamental role in issuing certificates. They
choose whether or not to violate the BRs. Every Subscriber agrees to a
contract that acknowledges that if the CA screws up, their certificate will
be revoked. Now, on the basis that some people (typically, large
corporations) haven't really read their contract, or are convinced that
somehow it's unfair to actually follow the thing they agreed to, they would
like to renegotiate the contract when it no longer benefits them.

We have seen, over the past two decades, incredible harm from not ensuring
these are consistently followed and applied. We've seen, over the past two
decades, incredibly poor judgement. We have not seen any data to suggest
things have improved - rather, we've seen some of the worst offenders
removed as CAs. Even if the mean has gone up, the median and mode have
remained unchanged. That's not improvement.


> It’s a perfect example of why your argument DOESN’T work. As Mozilla has
> shared in the CA/B Forum, people don’t fix their site - they blame the
> browser, and keep on with the brokenness. Firefox is the one having to
> change to “accommodate” that.
>
>
> Or, they might blame the CA for providing them a "thing" that doesn't work
> with all major browsers :)
>

Or the world might end tomorrow. I told you something that is actually
happening, and you respond with a hypothetical that isn't. That's not as
insightful as the :) may be trying to capture.


> This statement underestimates the reflexes of the Root programs. The
> reason for requiring disclosure is meant as a first step for understanding
> what's happening in reality and collecting some meaningful data by policy.
> Once Mozilla collects enough information to make a safe estimation, the
> policy can be updated to allow or forbid certain situations. If, for
> example, m.d.s.p. receives 10 or 20 revocation exception cases within a
> 12-month period and none of them is convincing to the community and module
> owners to justify the exception, the policy can be updated with clear rules
> about the risk of distrust if the revocation doesn't happen within 5 days.
> That would be a simple, clear rule. Does Mozilla have the information to
> make such an aggressive rule change today? Maybe.
>

This position presumes the argument is valid, which I've tried to show why
it isn't. It further tries to say that the best thing to do is accept the
harm your proposal would cause - which I've shown based on repeated
real-world application of the principles you propose to use - and then
re-evaluate it. Here's a better solution: Don't accept the harm. There's no
reason to hold a Purge to see "if it works out". There's no reason to allow
rampant theft, to see if people are happier once they get what their heart
most desires. That's negligent and irresponsible, and that's what's being
proposed here.

That's an extreme and emotive take - but that's because these arguments are
by no means new, they're ones that have been discussed for years. Even when
wrapped up as a "thinking about how to help" package, they're still
fundamentally flawed and based on a lack of consideration or analysis of
how things have worked or are working. Regardless of good intent, much like
Swift's "Modest Proposal" was rather heinous, the proposal here is
fundamentally flawed in a way that will cause real and lasting harm. A more
negative read would suggest it's an attempt to move the Overton Window of
discourse, by suggesting that somehow browsers are "the bad guys" for
requiring that CAs do what they say they'll do as a condition of trust.
We're not "the bad guys" for pointing out deceptive practices and holding
the bearers of keys to the Internet accountable for what they said and did.


> I disagree that we’ve seen systemic improvements as a whole. There are a
> few CAs trying to do better, but the incident reporting of today clearly
> shows exactly what I’m saying - that the industry has not actually matured
> as you suggest. What has changed has largely been driven by those outside
> CAs - whether those who were wanting to become CAs (Amazon with certlint)
> or those analyzing CA’s failures (ZLint).
>
>
> If we truly care about the ecosystem, it doesn't really matter where the
> systemic improvements come from. CAs and Browsers have contributed to the
> Network Security Guidelines, the BRs (to improve and limit validation
> methods, add CAA and so much more). I agree we should expect every CA to
> develop tools or use existing ones to ensure they are complying with all
> rules. We occasionally see some exceptions and this is evaluated on a
> case-by-case basis. "Accidents" and mistakes do happen and as it has been
> discussed in the past, it's collective failures that pose the greatest risk
> and we have seen hard decisions being made to minimize or eliminate these
> risks.
>

I would believe we'd seen systemic improvements once we saw legacy CAs no
longer believing they had "exceptional" situations where they did not do
what they say they would do (correct issuance), do not want to do what they
said they would do in those situations (revoke), and somehow want to
present it as Subscribers not knowing what would happen (when it's baked
right into their contract).


> I also protest against the "grossly negligent and irresponsible" part and
> I'm afraid statements like that alienate people from participating and
> proposing anything. Simply disagreeing would ultimately have the same
> effect in this conversation. You have already provided good arguments
> against my proposal for people to evaluate.
>

That's the thing, though. Much like proposals to hold the purge, eat
babies, or legalize theft, there are some arguments that are so deeply
odious, that whether presented in jest or good faith, are actively harmful
and damaging. I don't use this lightly to shut down conversation, but as a
long-standing participant in the Forum and the list, you know that the
ideas you're proposing here are nothing new and have been debated for
years. The community knows and can see the consequences of the principles
you propose - with real harm and real potential for loss of life or serious
disruption - and so this isn't "Hey, I had some new idea to share," but an
idea that has been thoroughly explored and completely rejected.

I've been trying to capture more productive ways forward to countenance
such a re-discussion - and I think the core of it goes to the claim that
"exceptional" situations have incredible cost, to all participants, and so
concrete data is needed. If CAs find it expensive to analyze their
revocations, the reasons, and the challenges, then it suggests that supplying
"exceptional" access to their customers / industry is not, in fact, a
priority they're willing to meaningfully engage on.

Fotis Loukos

Dec 4, 2018, 5:02:54 AM
to ry...@sleevi.com, Dimitris Zacharopoulos, mozilla-dev-security-policy
Hello everybody,
First of all, I would like to note that I am writing as an individual
and my opinion does not necessarily represent the opinion of my employer.

An initial comment is that statements such as "I disagree that CAs are
'doing their best' to comply with the rules", made because some CAs are
indeed not doing their best, are simply a fallacy in Ryan's argumentation:
the fallacy of composition. Dimitris does not represent all CAs, and I'm
pretty sure that you are aware of this, Ryan. Generalizations and the
division into two teams, our team (the browsers) and their team (the
CAs), where by default our team are the good guys and their team is
malicious, are plain demagoguery. Since you like extreme examples, please
note that generalizations (we don't like a member of a demographic, thus
all people from that demographic are bad) have led humanity to
commit atrocities; let's not go down that road, especially since I
know you, Ryan, and you're definitely not that type of person.

I believe that the arguments presented by Dimitris are simply a red
herring. Whether there is a blackout period, whether the CA lost internet
connectivity, or whether a 65-character OU poses a risk to relying
parties is a form of ignoratio elenchi, a fallacy identified even by
Aristotle thousands of years ago. Using the same deductive reasoning,
someone could argue that if a person was scammed into participating in a
ponzi scheme and lost all his fortune, he can steal someone else's money.

The true point of the argument is whether CAs should be allowed to break
the BRs based on their own risk analysis. So, what is a certificate?
It's more or less an assertion. And making an assertion is just as
important as revoking it. As Ryan correctly mentioned, if this becomes a
norm, why shouldn't CAs be allowed to make a risk analysis and decide
that they will break the BRs in making the assertion too, effectively
issuing certificates with their own validation methods? Where would this
lead us? Who would be able to trust the WebPKI afterwards? Are we
looking into making it the wild west of the internet?

In addition, do you think that CAs should be audited regarding their
criteria for their risk analysis?

Furthermore, this poses a great risk for the CAs too. If this becomes a
practice, how can CAs be assured that the browsers won't make a risk
analysis and decide that an issuance made in accordance with all the
requirements in the BRs is a misissuance? Until now, we have seen that
browsers have distrusted CAs based on concrete evidence of misissuances.
Do you think, Dimitris, that they should be allowed to distrust CAs based
on some risk analysis?

Regards,
Fotis

Jakob Bohm

Dec 4, 2018, 9:30:21 AM
to mozilla-dev-s...@lists.mozilla.org
Hello to you too.

It seems that you are both misunderstanding what the proposal was.

The proposal was apparently to further restrict the ability of CAs to
make exceptions on their own, by requiring all such exceptions to go
through the public forums where the root programs can challenge or even
deny a proposed exception, after hearing the case by case arguments for
why an exception should be granted.

A better example would be that if someone broke their leg for some
reason, and therefore wants to delay payment of a debt by a short while,
they should be able to ask for it, and the request would be considered
on its merits, not based on a hard-nosed principle of never granting any
extensions.

Now, because CAs making exceptions can technically be considered against
the letter of the BRs, specifying how exceptions should be reviewed
would constitute an admission by the community that exceptions might be
OK in some cases. Thus, from a purely legalistic perspective, it would
constitute a weakening of the rules. But only if one ignores the
reality that such exceptions currently happen with little or no
oversight.

As for doing risk assessments and reporting, no deep thinking and no
special logging of considerations is needed when revoking as quickly
as possible, well within the current 24 hour and 5 day deadlines (as
applicable), which hopefully constitutes the vast majority of revocations.
Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Fotis Loukos

Dec 4, 2018, 1:01:02 PM
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
Hello,

On 4/12/18 4:30 μ.μ., Jakob Bohm via dev-security-policy wrote:
> Hello to you too.
>
> It seems that you are both misunderstanding what the proposal was.
>
> The proposal was apparently to further restrict the ability of CAs to
> make exceptions on their own, by requiring all such exceptions to go
> through the public forums where the root programs can challenge or even
> deny a proposed exception, after hearing the case by case arguments for
> why an exception should be granted.
>

Can you please point me to the exact place where this is mentioned?

The initial proposal is the following:

Mandating that CAs disclose revocation situations that exceed the 5-day
requirement with some risk analysis information, might be a good place
to start.

I see nothing related to public discussion and root programs challenging
or denying the proposed exception.

In a follow-up email, Dimitris mentions the following:

The reason for requiring disclosure is meant as a first step for
understanding what's happening in reality and collect some meaningful
data by policy. [...] If, for example, m.d.s.p. receives 10 or 20
revocation exception cases within a 12-month period and none of them is
convincing to the community and module owners to justify the exception,
the policy can be updated with clear rules about the risk of distrust if
the revocation doesn't happen within 5 days.

In this proposal it is clear that the CA will *disclose*, not ask for
permission to extend the 24h/5-day period, and furthermore he
accepts the fact that these exceptions may not later be accepted by the
community, which may lead to changing the policy.


> A better example would be that if someone broke their leg for some
> reason, and therefore wants to delay payment of a debt by a short while,
> they should be able to ask for it, and the request would be considered
> on its merits, not based on a hard-nosed principle of never granting any
> extensions.

I think that the proper analogy is that if someone broke their leg and
therefore wants to delay payment of a bank debt, they would delay it
without notifying the bank in time, and only after deciding that they are
fine and can walk would they go to the bank and explain why the payment
was delayed. I do not consider this a good practice.

>
> Now because CAs making exceptions can be technically considered against
> the letter of the BRs, specifying how exceptions should be reviewed
> would constitute an admission by the community that exceptions might be
> ok in some cases. Thus from a purely legalistic perspective it would
> constitute a weakening of the rules. But only if one ignores the
> reality that such exceptions currently happen with little or no
> oversight.

Please see above, there is no review in the original proposal.

>
> As for doing risk assessments and reporting, no deep thinking and no
> special logging of considerations is needed when revoking as quickly
> as possible, well within the current 24 hour and 5 day deadlines (as
> applicable), which hopefully constitutes the vast majority of revocations.

So, is deep thinking needed in the rest of the cases? If so, how do you
think a CA will be able to do this risk assessment, and how can root
store operators decide on it within 24h in order to extend this
period? If not, would you trust such a risk assessment?

Regards,
Fotis

>
>
> On 04/12/2018 11:02, Fotis Loukos wrote:
> Enjoy
>
> Jakob
>

Dimitris Zacharopoulos

Dec 4, 2018, 1:29:43 PM
to Fotis Loukos, mozilla-dev-s...@lists.mozilla.org, Jakob Bohm
Fotis,

You have quoted only one part of my message which doesn't capture the
entire concept.

CAs that mis-issue and must revoke these mis-issued certificates,
already violated the BRs. Delaying revocation for more than what the BRs
require, is also a violation. There was never doubt about that. I never
proposed that "extended revocation" would somehow "not be considered a
BR violation" or "make it legal".

I tried to highlight in this discussion that there were real cases in
m.d.s.p. where the revocation was delayed in practice. However, the
circumstances of these extended revocations remain unclear. Yet, the
community didn't ask for more details. Seeing this repeated, was the
reason I suggested that more disclosure is necessary for CAs that
require more time to revoke than the BRs require. At the very minimum,
it would help the community understand in more detail the circumstances
why a CA asks for more time to revoke.

I think Jakob made an accurate summary.


Dimitris.



On 4/12/2018 8:00 μ.μ., Fotis Loukos via dev-security-policy wrote:
> Hello,
>
> On 4/12/18 4:30 μ.μ., Jakob Bohm via dev-security-policy wrote:
>> Hello to you too.
>>
>> It seems that you are both misunderstanding what the proposal was.
>>
>> The proposal was apparently to further restrict the ability of CAs to
>> make exceptions on their own, by requiring all such exceptions to go
>> through the public forums where the root programs can challenge or even
>> deny a proposed exception, after hearing the case by case arguments for
>> why an exception should be granted.
>>
> Can you please point me to the exact place where this is mentioned?
>
> The initial proposal is the following:
>
> Mandating that CAs disclose revocation situations that exceed the 5-day
> requirement with some risk analysis information, might be a good place
> to start.
>
> I see nothing related to public discussion and root programs challenging
> or denying the proposed exception.
>
> In a follow-up email, Dimitris mentions the following:
>
> The reason for requiring disclosure is meant as a first step for
> understanding what's happening in reality and collect some meaningful
> data by policy. [...] If, for example, m.d.s.p. receives 10 or 20
> revocation exception cases within a 12-month period and none of them is
> convincing to the community and module owners to justify the exception,
> the policy can be updated with clear rules about the risk of distrust if
> the revocation doesn't happen within 5 days.
>
>> Enjoy
>>
>> Jakob

Ryan Sleevi

Dec 4, 2018, 1:31:09 PM
to Fotis Loukos, Ryan Sleevi, Dimitris Zacharopoulos, mozilla-dev-security-policy
On Tue, Dec 4, 2018 at 5:02 AM Fotis Loukos <me+mozdev...@fotisl.com>
wrote:

> An initial comment is that a statement such as "I disagree that CAs are
> "doing their best" to comply with the rules", made because some CAs are
> indeed not doing their best, is simply a fallacy in Ryan's argumentation:
> the fallacy of composition. Dimitris does not represent all CAs, and I'm
> pretty sure that you are aware of this, Ryan. Generalizations and the
> distinction of two teams, our team (the browsers) and their team (the
> CAs), where by default our team are the good guys and their team are
> malicious, are plain demagoguery. Since you like extreme examples, please
> note that generalizations (we don't like a member of a demographic, thus
> all people from that demographic are bad) have led humanity to
> committing atrocities; let's not go down that road, especially since I
> know you, Ryan, and you're definitely not that type of person.


I appreciate you breaking this down. I think it's important to respond to
the remark, because there is a substantive bit of this criticism that I
think meaningfully affects this conversation, and it's worth diving into.

Broadly speaking, the first remark, 'CAs are "doing their best"', can be
interpreted as "(Some) CAs are doing their best" or "(All) CAs are doing
their best". You rightfully point out that Dimitris does not represent all
CAs, but that lack of representation can't be assumed to mean the statement
could not possibly be meant as all CAs - that could have been the intent,
and is a valid interpretation. Similarly, the criticism 'I disagree that
CAs are "doing their best"' can be interpreted as "I disagree that (some)
CAs are doing their best", "I disagree that (all) CAs are doing their
best", or "I disagree that (any) CAs are doing their best".

While I doubt that any of these interpretations are likely to be seen as
supporting genocide, they do underscore an issue: Ambiguity about whether
we're talking about some CAs or all CAs. When we speak about policy
requirements, whether in the CA/Browser Forum or here, it's necessary in
the framing to consider all CAs in aggregate. Dimitris proposed a
distinction between "good" CAs and "bad" CAs, on the basis that flexibility
is needed for "good" CAs, while my counter-argument is that such
flexibility is easily abused by "bad" CAs, and when "bad" CAs are the
majority, there's no longer the distinction between "good" and "bad".
Policies that propose ambiguity, flexibility and trust, whether through
validation methods or revocation decisions, fundamentally rest on the
assumption that all entities with that flexibility will use the flexibility
"correctly." Codifying what that means removes the flexibility, and thus is
incompatible with flexibility - so if there exists the possibility of
abuse, it has to be dealt with by avoiding ambiguity and flexibility, and
removing trust where it's "misused".

This isn't a fallacy of composition - it's the fundamental risk assessment
that others on this thread have proposed. The risk of a single bad CA
spoiling the bunch, as it were, which is absolutely the case in a public
trust ecosystem, is such that it cannot afford considerations of
flexibility for the 'good' CAs. It's equally telling that the distinction
between 'bad' CAs and 'good' CAs is "Those that are not following the
rules" vs "Those that are", rather than the far more desirable "Those that
are doing the bare minimum required of the rules" and "Those that are going
above and beyond". If it truly was that latter case, one could imagine more
flexibility being possible, but when we're at a state where there are
literally CAs routinely failing to abide by the core minimum, then in any
conversation about granting more trust it's necessary and critical to
consider "all CAs" when we talk about what "CAs are doing", just like we
already assume that negative discussions and removal of trust necessarily
begin with "some CAs" when we talk about what "CAs are doing".

Ryan Sleevi

Dec 4, 2018, 1:48:53 PM
to mozilla-dev-security-policy
On Tue, Dec 4, 2018 at 1:29 PM Dimitris Zacharopoulos via
dev-security-policy <dev-secur...@lists.mozilla.org> wrote:

> I tried to highlight in this discussion that there were real cases in
> m.d.s.p. where the revocation was delayed in practice. However, the
> circumstances of these extended revocations remain unclear. Yet, the
> community didn't ask for more details.


The expectation is that there will already be a discussion about this. In
the worst case, this discussion will be delayed until the audit
qualifications come in - the absence of audit qualifications in such
situations would be incredibly damning. It sounds like you believe this is
not, in fact, a requirement today, and it may be possible to clarify that
already.

Do you think the language in
https://wiki.mozilla.org/CA/Responding_To_An_Incident is sufficient, or do
you feel it's ambiguous as to whether or not a failure to abide by the BRs
constitutes "an incident"?

As to the second half - the community not asking for details - as a member of
this community, you can and should feel empowered to ask for the details you
feel are relevant. Do you believe that something about the handling of this
makes it inappropriate for you to ask questions you believe are relevant?


> Seeing this repeated, was the
> reason I suggested that more disclosure is necessary for CAs that
> require more time to revoke than the BRs require.


It's not at all clear how this result is linked to the remarks you make
above. Above, your remark seems to focus on CAs not disclosing in a timely
fashion, nor disclosing the circumstances. The former is a violation of the
existing requirements; the latter is something you can and should inquire
about if you feel it is relevant. It's unclear what is "more" about the existing
disclosure, and certainly, the framing used in this statement implies that
the issue is time, but seemingly acknowledges we don't have data to support
that.


> At the very minimum,
> it would help the community understand in more detail the circumstances
> why a CA asks for more time to revoke.
>

I think there's an equally flawed assumption here - which is that CAs
should be asking for exceptions to policies. I don't think this is at all a
reasonable model - and the one time it did happen (with respect to SHA-1)
was one that caused a lot of pain and harm overall. I think it should be
uncontroversial to suggest that "exceptions" don't remove the need for
qualifications - certainly, neither the professional standards behind the
ETSI audit criteria nor the standards behind WebTrust would allow a CA
to argue an event is not a qualification solely because Mozilla "granted an
exception".

Instead, the concept of "exceptions" is one of asking the community whether
or not they will agree to ignore, a priori, a matter of non-compliance. In a
world without "exceptions", the CA will take the qualification, and will
need to disclose (as part of an Incident Report and, later, the audit
report) the nature behind the incident, the facts, and those details. In
determining ongoing trust, the community will take a holistic look at the
incidents and qualifications, whether sufficient detail was presented, and
what the patterns and issues are.

This is a healthy system, whereas introducing "exceptions" and agreement, a
priori, to exclude certain facts from consideration is not. For one, it
prevents the determination and establishment of patterns - granting
exceptions as "one-offs" can (and demonstrably does) lead to patterns of
misissuance, and asking the community to overlook those patterns because it
agreed to overlook the specific events is very much an unreasonable, and
harmful, request. This is similar to the harm of creating "tiers" of
misissuance, as both acts seek to legitimize some forms of non-compliance,
without concrete data, which then collectively erodes the very notion of
compliance to begin with.

Thus, if we dispel the notion that some CAs have, or worse, have promoted
to their subscribers - that browsers can, do, and will promise to
overlook certain areas of non-compliance - then the proposal itself goes
away. That's because the existing mechanisms - for disclosure and detail
gathering - function, and the community can and will consider those facts
when holistically considering the CA. It may be that some forms of
misissuance are so egregious that no CA should ever attempt them (e.g.
granting an unconstrained CA), and it may be that other forms are considered
holistically as part of patterns, but the CA is ultimately going to be
gambling, and that's all the more reason that a CA shouldn't violate in the
first place.

If (some) CAs do feel the requirements are overly burdensome, then
proposing changes is not unreasonable - but it MUST be accompanied by
concrete and meaningful data. Absent that, it leads to the harmful problems
I discuss above, and thus is not worth the time spent or electrons wasted
on the discussion. However, if (most) CAs systemically provide data, then
we can have meaningful discussions about what the bounds for flexibility
look like for (all) CAs.

Fotis Loukos

Dec 5, 2018, 2:36:19 AM
to ry...@sleevi.com, mozilla-dev-security-policy, Dimitris Zacharopoulos
As far as I can tell, if no quantifiers are used in a proposition
written in the English language, then it is assumed to be a universal
proposition. If it were particular, then sentences such as "numbers are
bigger than 10" and "cars are blue" would be true, since there are some
numbers bigger than 10 and there are some cars that are blue. My
knowledge of the inner workings of English grammar is not that good,
but at least this is what applies in Greek and in CS/logic (check
http://www.cs.colostate.edu/~cs122/.Fall14/tutorials/tut_2.php for
example). If I am mistaken, then it was an error on my side.
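
In standard first-order notation - merely formalizing the example above,
nothing beyond it - the two readings are:

    \forall n \in \mathbb{N}\colon n > 10    (universal reading: false, e.g. n = 3)
    \exists n \in \mathbb{N}\colon n > 10    (particular reading: true, e.g. n = 11)

and the convention of reading an unquantified proposition universally is
exactly what keeps sentences like "numbers are bigger than 10" from being
trivially true.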

>
> While I doubt that any of these interpretations are likely to be seen as
> supporting genocide, they do underscore an issue: Ambiguity about whether
> we're talking about some CAs or all CAs. When we speak about policy
> requirements, whether in the CA/Browser Forum or here, it's necessary in
> the framing to consider all CAs in aggregate. Dimitris proposed a

Totally agree with you, requirements must apply equally to everybody.

> distinction between "good" CAs and "bad" CAs, on the basis that flexibility
> is needed for "good" CAs, while my counter-argument is that such
> flexibility is easily abused by "bad" CAs, and when "bad" CAs are the
> majority, there's no longer the distinction between "good" and "bad".

Once again, I agree with you. It is a fact and has been displayed in the
past.

> Policies that propose ambiguity, flexibility and trust, whether through
> validation methods or revocation decisions, fundamentally rest on the
> assumption that all entities with that flexibility will use the flexibility
> "correctly." Codifying what that means removes the flexibility, and thus is
> incompatible with flexibility - so if there exists the possibility of
> abuse, it has to be dealt with by avoiding ambiguity and flexibility, and
> removing trust where it's "misused".

Agreed and I think it's the point of my previous email.

>
> This isn't a fallacy of composition - it's the fundamental risk assessment

The fallacy of composition arose from the fact that in the sentence I
pasted, you assumed that all CAs (according to my first paragraph in
this email) are not doing their best because one or some of the CAs are
not doing their best.

> that others on this thread have proposed. The risk of a single bad CA
> spoiling the bunch, as it were, which is absolutely the case in a public
> trust ecosystem, is such that it cannot afford considerations of
> flexibility for the 'good' CAs. It's equally telling that the distinction
> between 'bad' CAs and 'good' CAs is "Those that are not following the
> rules" vs "Those that are", rather than the far more desirable "Those that
> are doing the bare minimum required of the rules" and "Those that are going
> above and beyond". If it truly was that latter case, one could imagine more
> flexibility being possible, but when we're at a state where there are
> literally CAs routinely failing to abide by the core minimum, then in any
> conversation about granting more trust it's necessary and critical to
> consider "all CAs" when we talk about what "CAs are doing", just like we
> already assume that negative discussions and removal of trust necessarily
> begin with "some CAs" when we talk about what "CAs are doing".

Well, I am pretty sure that you agree that I am on your side on this, and
I believe exactly the same thing.

My only remark was the fact that propositions about "CAs" with no
quantifiers are universal, and thus may only mean "all CAs" and never
"some CAs". Thus, in the example you mentioned above, if you mean "some
CAs" you must explicitly state this.

Regards,
Fotis

Fotis Loukos

Dec 5, 2018, 3:03:01 AM
to Dimitris Zacharopoulos, mozilla-dev-s...@lists.mozilla.org, Jakob Bohm
On 4/12/18 8:29 μ.μ., Dimitris Zacharopoulos via dev-security-policy wrote:
> Fotis,
>
> You have quoted only one part of my message which doesn't capture the
> entire concept.

I would appreciate it if you mentioned how exactly I distorted your
proposal, and which parts that change its meaning I missed.

>
> CAs that mis-issue and must revoke these mis-issued certificates,
> already violated the BRs. Delaying revocation for more than what the BRs
> require, is also a violation. There was never doubt about that. I never
> proposed that "extended revocation" would somehow "not be considered a
> BR violation" or "make it legal".

You explicitly mentioned that there were voices during the SC6 ballot
discussion that wanted to extend the 5 days to something more (*extend*
the 5 days), as you also explicitly mentioned that this is not a
theoretical discussion.

>
> I tried to highlight in this discussion that there were real cases in
> m.d.s.p. where the revocation was delayed in practice. However, the
> circumstances of these extended revocations remain unclear. Yet, the
> community didn't ask for more details. Seeing this repeated, was the
> reason I suggested that more disclosure is necessary for CAs that
> require more time to revoke than the BRs require. At the very minimum,
> it would help the community understand in more detail the circumstances
> why a CA asks for more time to revoke.

I refer you to Ryan's email. Do you really believe that this is
something not expected from CAs?

>
> I think Jakob made an accurate summary.

You contradict what you said two paragraphs earlier. Jakob explicitly
mentioned:

The proposal was apparently to further restrict the ability of CAs to
make exceptions on their own, by requiring all such exceptions to go
through the public forums where the root programs can challenge or even
deny a proposed exception, after hearing the case by case arguments for
why an exception should be granted.

effectively 'legalizing' BR violations after the browsers' consent (granting
an exception). Two paragraphs earlier you stated that you never proposed
making an extended revocation legal.

Regards,
Fotis

>
>
> Dimitris.
>
>
>
> On 4/12/2018 8:00 μ.μ., Fotis Loukos via dev-security-policy wrote:
>> Hello,
>>
>> On 4/12/18 4:30 μ.μ., Jakob Bohm via dev-security-policy wrote:
>>> Hello to you too.
>>>
>>> It seems that you are both misunderstanding what the proposal was.
>>>
>>> The proposal was apparently to further restrict the ability of CAs to
>>> make exceptions on their own, by requiring all such exceptions to go
>>> through the public forums where the root programs can challenge or even
>>> deny a proposed exception, after hearing the case by case arguments for
>>> why an exception should be granted.
>>>
>> Can you please point me to the exact place where this is mentioned?
>>
>> The initial proposal is the following:
>>
>> Mandating that CAs disclose revocation situations that exceed the 5-day
>> requirement with some risk analysis information, might be a good place
>> to start.
>>
>> I see nothing related to public discussion and root programs challenging
>> or denying the proposed exception.
>>
>> In a follow-up email, Dimitris mentions the following:
>>
>> The reason for requiring disclosure is meant as a first step for
>> understanding what's happening in reality and collect some meaningful
>> data by policy. [...] If, for example, m.d.s.p. receives 10 or 20
>> revocation exception cases within a 12-month period and none of them is
>> convincing to the community and module owners to justify the exception,
>> the policy can be updated with clear rules about the risk of distrust if
>> the revocation doesn't happen within 5 days.
>>
>>> Enjoy
>>>
>>> Jakob

Dimitris Zacharopoulos

Dec 5, 2018, 3:48:41 AM
to Fotis Loukos, mozilla-dev-s...@lists.mozilla.org, Jakob Bohm
On 5/12/2018 10:02 π.μ., Fotis Loukos wrote:
> On 4/12/18 8:29 μ.μ., Dimitris Zacharopoulos via dev-security-policy wrote:
>> Fotis,
>>
>> You have quoted only one part of my message which doesn't capture the
>> entire concept.
> I would appreciate it if you mentioned how exactly I distorted your
> proposal, and which parts that change its meaning I missed.

I never claimed that you "distorted" my proposal. I said that it didn't
capture the entire concept.


>> CAs that mis-issue and must revoke these mis-issued certificates,
>> already violated the BRs. Delaying revocation for more than what the BRs
>> require, is also a violation. There was never doubt about that. I never
>> proposed that "extended revocation" would somehow "not be considered a
>> BR violation" or "make it legal".
> You explicitly mentioned that there were voices during the SC6 ballot
> discussion that wanted to extend the 5 days to something more (*extend*
> the 5 days), as you also explicitly mentioned that this is not a
> theoretical discussion.

This was mentioned in the context of a very long thread and you have
taken a piece of it which changes the meaning of the entire concept. I
explained what the entire concept was. Jakob summarized the proposal
correctly.

>> I tried to highlight in this discussion that there were real cases in
>> m.d.s.p. where the revocation was delayed in practice. However, the
>> circumstances of these extended revocations remain unclear. Yet, the
>> community didn't ask for more details. Seeing this repeated, was the
>> reason I suggested that more disclosure is necessary for CAs that
>> require more time to revoke than the BRs require. At the very minimum,
>> it would help the community understand in more detail the circumstances
>> why a CA asks for more time to revoke.
> I refer you to Ryan's email. Do you really believe that this is
> something not expected from CAs?
>
>> I think Jakob made an accurate summary.
> You contradict what you said two paragraphs earlier. Jakob explicitly
> mentioned:
>
> The proposal was apparently to further restrict the ability of CAs to
> make exceptions on their own, by requiring all such exceptions to go
> through the public forums where the root programs can challenge or even
> deny a proposed exception, after hearing the case by case arguments for
> why an exception should be granted.
>
> effectively 'legalizing' BR violations after the browsers' consent (granting
> an exception). Two paragraphs earlier you stated that you never proposed
> making an extended revocation legal.
>
> Regards,
> Fotis

You missed one of Jakob's important points. This usually happens when you
clip-paste specific sentences that change the meaning of a whole
conversation.

"

But only if one ignores the
reality that such exceptions currently happen with little or no
oversight."


My previous response to you tries to re-summarize the concept in a more
accurate way. Please use that if you want to refer to the concept of my
proposal and not particular pieces from a huge thread.


Dimitris.


>>
>> Dimitris.
>>
>>
>>
>> On 4/12/2018 8:00 μ.μ., Fotis Loukos via dev-security-policy wrote:
>>> Hello,
>>>
>>> On 4/12/18 4:30 μ.μ., Jakob Bohm via dev-security-policy wrote:
>>>> Hello to you too.
>>>>
>>>> It seems that you are both misunderstanding what the proposal was.
>>>>
>>>> The proposal was apparently to further restrict the ability of CAs to
>>>> make exceptions on their own, by requiring all such exceptions to go
>>>> through the public forums where the root programs can challenge or even
>>>> deny a proposed exception, after hearing the case by case arguments for
>>>> why an exception should be granted.
>>>>
>>> Can you please point me to the exact place where this is mentioned?
>>>
>>> The initial proposal is the following:
>>>
>>> Mandating that CAs disclose revocation situations that exceed the 5-day
>>> requirement with some risk analysis information, might be a good place
>>> to start.
>>>
>>> I see nothing related to public discussion and root programs challenging
>>> or denying the proposed exception.
>>>
>>> In a follow-up email, Dimitris mentions the following:
>>>
>>> The reason for requiring disclosure is meant as a first step for
>>> understanding what's happening in reality and collect some meaningful
>>> data by policy. [...] If, for example, m.d.s.p. receives 10 or 20
>>> revocation exception cases within a 12-month period and none of them is
>>> convincing to the community and module owners to justify the exception,
>>> the policy can be updated with clear rules about the risk of distrust if
>>> the revocation doesn't happen within 5 days.
>>>
>>>> Enjoy
>>>>
>>>> Jakob

Wayne Thayer

Dec 5, 2018, 4:21:56 PM
to Dimitris Zacharopoulos, me+mozdev...@fotisl.com, mozilla-dev-security-policy, Jakob Bohm
On Wed, Dec 5, 2018 at 3:48 AM Dimitris Zacharopoulos via
dev-security-policy <dev-secur...@lists.mozilla.org> wrote:

> On 5/12/2018 10:02 π.μ., Fotis Loukos wrote:
>
> > The proposal was apparently to further restrict the ability of CAs to
> > make exceptions on their own, by requiring all such exceptions to go
> > through the public forums where the root programs can challenge or even
> > deny a proposed exception, after hearing the case by case arguments for
> > why an exception should be granted.
> >
> > effectively 'legalizing' BR violations after the browsers' consent (granting
> > an exception). Two paragraphs earlier you stated that you never proposed
> > making an extended revocation legal.
> >
> > Regards,
> > Fotis
>
> You missed one of Jakob's important points. This usually happens when you
> clip-paste specific sentences that change the meaning of a whole
> conversation.
>
> "
>
> But only if one ignores the
> reality that such exceptions currently happen with little or no
> oversight."
>

I am particularly troubled by the proposal that exceptions be granted by
Mozilla as part of some recognized process. There is a huge difference
between this and the current process in which CAs may choose to take
exceptions as explicit violations. Even if the result is the same, granting
exceptions transfers the risk from the CA to Mozilla. We then are
responsible for assessing the potential impact, and if we get it wrong,
it's our fault. Please, let's not go there. As has been stated, if there is
really no risk to violating a requirement, then it's reasonable to make a
case for removing that requirement.

- Wayne

Jakob Bohm

Dec 5, 2018, 4:52:47 PM
to mozilla-dev-s...@lists.mozilla.org
The problematic cases are these:

- Longer-than-standard revocation delays as part of another incident
(visible in incident reports post-event, such as the recent report
by Microsec).

- Longer-than-standard revocation delays outside other incidents
(currently not reported to the community).

Discussions of permitting longer revocations before-the-fact have
happened in a few larger scope situations:

- During the Symantec incidents, there was public discussions of
timetables for revoking certificates issued via certain problematic
RAs.

- In the discussion of underscores in DNS names (not on this list),
there was a public decision to set revocation dates more than 5
days after the discussion.

Eric Mill

Dec 5, 2018, 7:41:21 PM
to me+mozdev...@fotisl.com, Ryan Sleevi, Dimitris Zacharopoulos, mozilla-dev-s...@lists.mozilla.org
On Wed, Dec 5, 2018 at 2:36 AM Fotis Loukos via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On 4/12/18 8:30 μ.μ., Ryan Sleevi via dev-security-policy wrote:
> > On Tue, Dec 4, 2018 at 5:02 AM Fotis Loukos <
> me+mozdev...@fotisl.com>
>
> As far as I can tell, if no quantifiers are used in a proposition
> written in the English language, then it is assumed to be a universal
> proposition. If it were particular, then sentences such as "numbers are
> bigger than 10" and "cars are blue" would be true, since there are some
> numbers bigger than 10 and there are some cars that are blue. My
> knowledge of the inner workings of English grammar is not that good,
> but at least this is what applies in Greek and in CS/logic (check
> http://www.cs.colostate.edu/~cs122/.Fall14/tutorials/tut_2.php for
> example). If I am mistaken, then it was an error on my side.
>

Formally, yes, but in practice, there is ambiguity. For example, you can
say "elderly people vote for X political party", and it doesn't have to
mean that 100.0% of elderly people vote for that party for that to be a
reasonably accurate statement, if by and large that population has a clear
trend.

That's not to agree or disagree with Ryan's statement, just noting that
people do necessarily have to characterize groups sometimes, and that any
characterization of a large enough group will usually not apply to all of
its members.

I know I personally belong to a number of demographic groups whose behavior
as a group doesn't match mine as an individual, and when people criticize
those demographic groups, I try not to take it as a personal attack.

-- Eric