But sometimes, CAs, even CAs in Mozilla's trusted root list, make some of
the same mistakes the amateurs make, mistakes like:
- issuing certs with duplicate serial numbers from the same issuer name
- subordinate CA certs expire before they are replaced
- OCSP responder certs expire before they are replaced
- OCSP responder certs don't conform to RFC 2560.
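The first of those mistakes, duplicate serial numbers under one issuer name, is the kind of thing a trivial audit script could catch. A minimal sketch in Python (the issuance-log format here is hypothetical):

```python
from collections import Counter

def find_duplicate_serials(issued):
    """Given (issuer_name, serial_number) pairs for every cert a CA has
    issued, return the pairs that occur more than once.  RFC 5280 requires
    serial numbers to be unique per issuer, so any hit is mis-issuance."""
    counts = Counter(issued)
    return sorted(pair for pair, n in counts.items() if n > 1)

# Hypothetical issuance log: two certs share serial 0x1001 under one issuer.
log = [
    ("CN=Example CA", 0x1000),
    ("CN=Example CA", 0x1001),
    ("CN=Example CA", 0x1001),
    ("CN=Other CA",   0x1001),  # same serial, different issuer: allowed
]
print(find_duplicate_serials(log))  # [('CN=Example CA', 4097)]
```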
Every time this happens, users get unhappy with Firefox. They perceive that
Firefox, not the CA, is stopping them from doing what they want to do,
and the sound of users singing the chorus of "eliminate all CAs and instead
rely entirely on key continuity management services like 'perspectives'"
grows louder.
Firefox gets a black eye, and PKI gets a black eye, but the CA at fault
seldom faces ANY consequences at all. That's not right. There must be SOME
way to bring pressure to bear on lousy CAs to maintain SOME minimum levels
of core competence, but we seem not to have found it so far.
Now, in recent days, a CA who has recently received approval for the
addition of some root CA certs to Mozilla's root CA list has made at least
two of those mistakes. What can we do about this?
The only stick we've ever discussed is the big one: removing the CA from the
trusted list. And I think history has shown that there's no sin so great,
so unpardonable, that committing it would cause a CA to be removed from
Mozilla's list (but perhaps my memory is failing me).
Can we devise a smaller stick? Is there something less than the axe and the
chopping block that will motivate CAs to take care?
How about a "hall of shame"? I'm thinking of a web page that publicly
records a history of incidents of CA malfeasance. When we learn about
such incidents, they go on that page and STAY THERE FOREVER. After a
while, CAs learn about this, and begin to do their best to stay off that
page. They pay attention to the mistakes that have gotten other CAs names
put onto that page. I think as long as it was purely factual, it would be
completely legal and defensible.
What do you think? Would it work?
--
/Nelson Bolyard
I think that some version of your idea could be useful. I would add to
the list:
- insufficient oversight of or validation by RAs
- issuance of SubCA certs in violation of Mozilla policy
- insecure web interfaces for cert application/validation
- misrepresentation of company/operational details, such as governing
jurisdiction
Perhaps the "wall of shame" should be accompanied by some formal
"probationary period" during which no new roots will be accepted from
that source. The end of this period could require a more extensive
explanation of what has been done to prevent the issue in the future.
However, I don't see all of this as a comprehensive solution to the
problem. It seems like we should, in parallel, pursue 1) patches to the
current model and 2) alternative/supplemental models.
For #1, I have in mind things like domain constraints and increased
disclosure.
For #2, I have in mind key-via-DNSSEC-secured-DNS, which is likely to be
a reality in the near future (will write more about this later).
Steve
Please note that the particular CA in question will not be included with
the upcoming NSS update unless and until the issues are reviewed and
evaluated. Unfortunately Kathleen is away until the beginning of August,
but this was the action she requested in the interim.
> The only stick we've ever discussed is the big one: removing the CA from the
> trusted list. And I think history has shown that there's no sin so great,
> so unpardonable, that committing it would cause a CA to be removed from
> Mozilla's list (but perhaps my memory is failing me).
I don't think that's the correct assessment, but a reasonable chance for
correction under reasonable circumstances should remain.
> Can we devise a smaller stick? Is there something less than the axe and the
> chopping block that will motivate CAs to take care?
I don't think so.
--
Regards
Signer: Eddy Nigg, StartCom Ltd.
XMPP: star...@startcom.org
Blog: http://blog.startcom.org/
Twitter: http://twitter.com/eddy_nigg
This is a great topic for discussion!
> In the 14 years that I've been working on PKI, I've seen certain mistakes in
> the issuance of certificates made over and over.
This is not intended as an excuse for the mistakes, but I am reminded of a quote I heard someone repeat after losing a handful of photos when her hard drive died with no backups:
"Nothing happens until it happens to you."
I took the meaning to be similar to the 2OI (second Order of Ignorance) laid out by Phillip G. Armour in an essay called "The Five Orders of Ignorance" --
One hopes the same mistakes aren't repeated within one organization, but that they learn from their embarrassing lessons and then "go and sin no more." I know that in the very beginning we made a few mistakes... For example, one or more of our OCSP responder certs had a null signature once, and it wasn't noticed until a new version of Firefox was released which checked OCSP responses for intermediate certificates and not just end-entity certs.
It hadn't been noticed before because (a) none of the browsers had bothered to run OCSP checks on intermediate certs until then, and (b) all of our other OCSP responder certs had worked up to then, so we assumed correctness in the others -- we were hit by 2OI, which Firefox converted into 1OI, and then very quickly we got to 0OI (see the quoted excerpt from the 5OI article at bottom).
> But sometimes, CAs, even CAs in Mozilla's trusted root list, make some of
> the same mistakes the amateurs make, mistakes like:
> - issuing certs with duplicate serial numbers from the same issuer name
> - subordinate CA certs expire before they are replaced
> - OCSP responder certs expire before they are replaced
> - OCSP responder certs don't conform to RFC 2560.
These are all unfortunate mistakes -- but they're mistakes I wouldn't be surprised to see. I wouldn't pin all the blame on this, but I think a general lack of good PKI troubleshooting tools has to be a contributing factor. That's not Mozilla's fault--it's just a general observation.
Here's an example: The web has these online tools:
http://validator.w3c.org
and
http://jigsaw.w3.org/css-validator
And it also has NetCraft watching high-profile websites and reporting on outages, their causes and effects, etc...
As for SSL, so far we have a few tools like:
https://www.ssllabs.com/
(example)
https://www.ssllabs.com/ssldb/analyze.html?d=www.digicert.com
and
https://www.digicert.com/help
...but there isn't a "validator" for any of the PKI RFCs (afaik) -- no tools to definitively say whether an OCSP responder cert is compliant with RFC 2560, for example.
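To give a flavor of what such a validator might check: RFC 2560 (section 4.2.2.2) requires a delegated responder cert to carry the id-kp-OCSPSigning extended key usage. A toy sketch of that one rule, operating on a stand-in dict rather than a real parsed certificate:

```python
# OID for id-kp-OCSPSigning (RFC 2560, section 4.2.2.2)
OCSP_SIGNING = "1.3.6.1.5.5.7.3.9"

def check_delegated_responder(cert):
    """One RFC 2560 rule: a responder certificate that is not the CA
    itself must carry the id-kp-OCSPSigning extended key usage.
    `cert` is a stand-in dict, not a parsed X.509 object."""
    problems = []
    if OCSP_SIGNING not in cert.get("eku", []):
        problems.append("missing id-kp-OCSPSigning EKU")
    if cert["not_after"] <= cert["not_before"]:
        problems.append("validity period is empty or inverted")
    return problems

good = {"eku": [OCSP_SIGNING], "not_before": 1, "not_after": 2}
bad  = {"eku": [], "not_before": 2, "not_after": 2}
print(check_delegated_responder(good))  # []
print(check_delegated_responder(bad))   # both problems reported
```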
> Every time this happens, users get unhappy with Firefox...
> Firefox gets a black eye, and PKI gets a black eye, but the CA at fault
> seldom faces ANY consequences at all. That's not right.
I agree 100%, even though in our beginnings we made some similar (but short-lived) mistakes.
> There must be SOME
> way to bring pressure to bear on lousy CAs to maintain SOME minimum levels
> of core competence, but we seem not to have found it so far.
I have an idea that's similar to yours.
> Now, in recent days, a CA who has recently received approval for the
> addition of some root CA certs to Mozilla's root CA list has made at least
> two of those mistakes. What can we do about this?
>
> The only stick we've ever discussed is the big one: removing the CA from the
> trusted list. And I think history has shown that there's no sin so great,
> so unpardonable, that committing it would cause a CA to be removed from
> Mozilla's list (but perhaps my memory is failing me).
I can imagine one or two sins that would probably be great enough (key compromise maybe?)
> Can we devise a smaller stick? Is there something less than the axe and the
> chopping block that will motivate CAs to take care?
>
> How about a "hall of shame"? I'm thinking of a web page that publicly
> records a history of incidents of CA malfeasance. When we learn about
> such incidents, they go on that page and STAY THERE FOREVER. After a
> while, CAs learn about this, and begin to do their best to stay off that
> page. They pay attention to the mistakes that have gotten other CAs names
> put onto that page. I think as long as it was purely factual, it would be
> completely legal and defensible.
>
> What do you think? Would it work?
Many PKI sins are committed in the dark, either through ignorance (darkness of mind) or through opportunity (darkness of environment) and what I really like about your idea is that it would shine a light into the darkness. I don't think it has to be called a "hall of shame" though, for everyone to get the message.
I really like the www.ssllabs.com certificate analyzer. It shines a pretty good light on a web site's end-entity certificate, and the capabilities of the SSL server itself. They use some kind of rubric to come up with a score (and a letter grade) for the site.
I recently signed up for a pingdom.com account so I could watch our uptime critical services from multiple locations:
http://share.pingdom.com/banners/2965e02b
In the book "Security Metrics", Andrew Jaquith (the author) makes a good case that what cannot be measured cannot be improved.
I recently wrote an in-depth OCSP monitoring tool that I use internally to make sure our responses are valid (correctly signed, not too close to expiration, etc.). I'd be happy to give it to Mozilla or anyone who could use it. It would be trivial to configure it to "watch" any set of certificates (end-entity, intermediate, and root).
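For the curious, the "not too close to expiration" check in a monitor like that can be as simple as comparing thisUpdate/nextUpdate against a warning window. A hedged sketch (not the actual tool described above; the 12-hour threshold is an arbitrary choice):

```python
from datetime import datetime, timedelta

def ocsp_freshness_alerts(this_update, next_update, now,
                          warn=timedelta(hours=12)):
    """Flag an OCSP response that is not yet valid, expired, or within
    `warn` of its nextUpdate -- the 'too close to expiration' case."""
    alerts = []
    if now < this_update:
        alerts.append("response not yet valid")
    if next_update is not None:
        if now >= next_update:
            alerts.append("response expired")
        elif next_update - now < warn:
            alerts.append("response expires soon")
    return alerts

now = datetime(2010, 8, 1, 12, 0)
# A response issued yesterday that lapses in three hours should warn:
print(ocsp_freshness_alerts(now - timedelta(days=1),
                            now + timedelta(hours=3), now))
# ['response expires soon']
```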
What if a CA-oriented version of the ssllabs.com analyzer, pingdom.com, my OCSP monitoring tool, and other things could be combined to "watch" a wide range of aspects automatically and come up with a score? Netcraft tracks a bunch of things -- and then there are a few projects tracking all certs. Perhaps those projects are almost fully ready to offer such a thing as a service (paid for by donations from CAs, browsers, individuals, whoever?)
> They pay attention to the mistakes that have gotten other CAs names
> put onto that page.
This would work well for me (I would LOVE to convert my 2OI into 1OI then 0OI without having to learn it by experience.)
At the same time, I think a monitoring service would really help too -- in the dark it's possible for something to be broken for quite a long time because sometimes nobody knows or notices (except perhaps a bad guy who discovers the darkness and exploits it?)
I'd be happy to help contribute... I would also like to find someone who can add a feature to NSS, the lack of which I consider a security hole: if an OCSP check fails to yield a definitive answer, fall back to the CRL(s) listed in the CRL DP of the cert.
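For illustration, the fallback being requested could look something like the following decision logic (a sketch only, with stubbed fetchers -- not how NSS is actually structured):

```python
def check_revocation(serial, fetch_ocsp, fetch_crl):
    """Try OCSP first; on any error or indefinite answer, fall back to
    the CRL from the cert's CRL distribution point.  fetch_ocsp(serial)
    returns 'good'/'revoked'/'unknown' or raises; fetch_crl() returns
    the set of revoked serials or raises."""
    try:
        status = fetch_ocsp(serial)
        if status in ("good", "revoked"):
            return status
        # 'unknown' is not definitive -- fall through to the CRL
    except Exception:
        pass  # responder down, bad signature, etc. -- fall through
    try:
        return "revoked" if serial in fetch_crl() else "good"
    except Exception:
        return "unknown"  # both sources failed; policy decides the rest

def dead_responder(serial):
    raise IOError("OCSP responder unreachable")

print(check_revocation(7, dead_responder, lambda: {5, 7}))  # 'revoked'
print(check_revocation(8, dead_responder, lambda: {5, 7}))  # 'good'
```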
Sincerely,
Paul Tiemann
CTO, DigiCert
(Quoted from The Five Orders of Ignorance, by Phillip G. Armour in October 2000/Vol. 43, No. 10 COMMUNICATIONS OF THE ACM)
0th Order Ignorance (0OI)— Lack of Ignorance. I have 0OI when I know something and can demonstrate my lack of ignorance in some tangible form, such as by building a system that satisfies the user. 0OI is knowledge. As an example, since it has been a hobby of mine for many years, I have 0OI about the activity of sailing, which, given a lake and a boat, is easily verified.
1st Order Ignorance (1OI)— Lack of Knowledge. I have 1OI when I don’t know something and can readily identify that fact. 1OI is basic ignorance. Example: I do not know how to speak the Russian language—a deficiency I could readily remedy by taking lessons, reading books, listening to the appropriate audiotapes, or moving to Russia for an extended period of time.
2nd Order Ignorance (2OI)— Lack of Awareness. I have 2OI when I don’t know that I don’t know something. That is to say, not only am I ignorant of some- thing (for instance I have 1OI), I am unaware of this fact. I don’t know enough to know that I don’t know enough. Example: I cannot give a good example of 2OI (of course).
Does anyone have a good timestamping server testing tool?
> Here's an example: The web has these online tools:
>
> http://validator.w3c.org
>
> and
>
> http://jigsaw.w3.org/css-validator
>
> And it also has NetCraft watching high profile websites and reporting on outages, their causes and effects, etc...
>
> As for SSL, so far we have a few tools like:
>
> https://www.ssllabs.com/
>
> (example)
> https://www.ssllabs.com/ssldb/analyze.html?d=www.digicert.com
>
> and
>
> https://www.digicert.com/help
>
> ...but there isn't a "validator" for any of the PKI RFCs (afaik) -- no tools to definitively say whether an OCSP responder cert is compliant with RFC 2560 for example.
>
>> Every time this happens, users get unhappy with Firefox...
>> Firefox gets a black eye, and PKI gets a black eye, but the CA at fault
>> seldom faces ANY consequences at all. That's not right.
> I agree 100%, even though in our beginnings we made some similar (but short-lived) mistakes.
If the PKI hierarchy that was misconfigured were actually named to the
user, it would very quickly change where the blame gets moved. (That's
really the entire reason I want CA branding in the browser -- so that
people can see the good CAs, the ones that never cause them any trouble,
versus the ones that give them all kinds of trouble. That would quickly
filter up into the sysadmin ranks, and the sysadmins are typically the
ones who decide whom to buy the certificate from.)
This would also help fix a risk-management problem for businesses --
they can't afford to let their businesses be seen as having problems
because of their CA's incompetence. There's also a rather major issue in
that it's not possible to send multiple end-entity certificates in the
same TLS Certificate response, and OpenSSL (at least) violates the
strict requirement in TLS 1.0 that only the certificates necessary to
build the chain be sent.
It might be worth looking into client authentication with an
OCSP-stapling extension -- let the client get its own OCSP response, and
eliminate the inverse of the information-leakage vulnerability that
already exists for servers whose clients do OCSP lookups: the CA learns
that a user looked up the cert for *this site* at *that time*. This is
an unnecessary and unconscionable violation of the user's privacy, and
most CAs don't have privacy policies about the aggregate and individual
behaviors that they can discern from our OCSP lookups.
The inverse is for a user to provide a certificate, and the site itself
to look up the OCSP on the user's certificate before it authenticates.
If Mozilla is so committed to making the user safer through the web, why
are these leakages occurring? Why are the private details of who
shopped at what site being collected by default? Why must they occur?
OCSP Stapling has already been implemented for servers; I believe that
it must be implemented for clients as well.
>> Now, in recent days, a CA who has recently received approval for the
>> addition of some root CA certs to Mozilla's root CA list has made at least
>> two of those mistakes. What can we do about this?
>>
>> The only stick we've ever discussed is the big one: removing the CA from the
>> trusted list. And I think history has shown that there's no sin so great,
>> so unpardonable, that committing it would cause a CA to be removed from
>> Mozilla's list (but perhaps my memory is failing me).
> I can imagine one or two sins that would probably be great enough (key compromise maybe?)
>
I can already imagine the argument: "What about the sites who are
affected by this, and their users? How much time should we give them to
transition before we permanently blacklist this key?"
>> Can we devise a smaller stick? Is there something less than the axe and the
>> chopping block that will motivate CAs to take care?
>>
>> How about a "hall of shame"? I'm thinking of a web page that publicly
>> records a history of incidents of CA malfeasance. When we learn about
>> such incidents, they go on that page and STAY THERE FOREVER. After a
>> while, CAs learn about this, and begin to do their best to stay off that
>> page. They pay attention to the mistakes that have gotten other CAs names
>> put onto that page. I think as long as it was purely factual, it would be
>> completely legal and defensible.
>>
>> What do you think? Would it work?
> Many PKI sins are committed in the dark, either through ignorance (darkness of mind) or through opportunity (darkness of environment) and what I really like about your idea is that it would shine a light into the darkness. I don't think it has to be called a "hall of shame" though, for everyone to get the message.
>
> I really like the www.ssllabs.com certificate analyzer. It shines a pretty good light on a web site's end-entity certificate, and the capabilities of the SSL server itself. They use some kind of rubric to come up with score (and a letter grade) for the site.
They also don't understand the difference between Class 1, Class 2,
Class 3... actually, what *is* the difference between these, in the
server sense? That they established that the corporation existed, or
formally -- by act of the Board of Directors -- established that they
wished to authenticate themselves more stringently?
Historically, I thought that Verisign had defined for end-user
certificates (please pardon the gender-specific pronouns):
- Class 1: "He's shown us that he can receive mail at an email address"
- Class 2: "He's shown us that he can receive mail at a physical address"
- Class 3: "I went out and met the guy, and verified that it was
actually him."
One of the bigger questions that I have for the CAs is "What is your
general idea of various classes of certificates?" There's no policy OID
for "this is an email-verified certificate only", and even if there were
there's no policy-matching code (except in the special-purpose EV code)
to handle the differentiation.
The same problem exists with "domain validated" certificates, which
(since they're issued by most roots directly) is the reason why
ssllabs.com doesn't try to comprehend the strength of a server's claim
to its own identity. There's no policy OID agreed upon by everyone
which states that the certificate is 'phone number and address' verified
or 'email-level' verified/'domain validated' verified (which are, for
all intents and purposes, precisely the same level of authentication --
that is, 'entity which controls CN=whatever'.)
If the CA/B Forum would get off its collective rump and start creating
definitions for more things than just EV, and worked with users to
figure out what the various classes should be (how much risk is taken on
by each kind of certificate use), it would be a much more useful
organization.
(Particularly since so many CA errors occur in the dark and in a vacuum,
it would be MUCH better if the topics that the community has a reason to
want to discuss -- privacy breaches, discovery of mis-issued
certificates, and whatever matters are before the CABF -- were actually
made discussable. Mr. Nightingale might consider expressing his thoughts
on what's going on in the CABF every once in a while, please?)
I thought Mozilla made policy decisions partially in response to
community input. Why isn't the liaison open about policy things that
might affect the community, that the CAs are coordinating as a group?
> In the book "Security Metrics", Andrew Jaquith (the author) makes a good case that what cannot be measured cannot be improved.
Since we don't exactly have any strict policies, only guidelines that
seem to get bent every which way from Sunday every time someone does
something that is truly out of line...
> I recently wrote an in-depth OCSP monitoring tool that I use internally to make sure our responses are valid (correctly signed, not too close to expiration, etc) I'd be happy to give it to Mozilla or anyone who could use it. It would be trivial to configure it to "watch" any set of certificates (ee, intermediate, and root)
I'm fairly certain that this would be of interest to Microsoft, RedHat
(for their DogTag system), and EJBCA. It would also be of interest to
me. Would you be willing to release it as warranty-free open-source?
It might be very easy to grow that into a module in a full-fledged PKI
analysis system. Offering that as a service from many locations would
be very good.
Of course, someone's offering a product designed to watch the backup
datacenter's certificates and notify when they're close to expiration,
and automate their renewal.
> What if a CA-oriented version of the ssllabs.com analyzer, pingdom.com, my OCSP monitoring tool, and other things could be combined together to "watch" a wide range of aspects automatically, and come up with a score. Netcraft tracks a bunch of things -- and then there are a few projects tracking all certs. Perhaps those projects are almost fully ready to be able to do such a thing as a service (paid for by donations by CAs, browsers, individuals, whoever?)
I like the idea, as above.
>> They pay attention to the mistakes that have gotten other CAs names
>> put onto that page.
> This would work well for me (I would LOVE to convert my 2OI into 1OI then 0OI without having to learn it by experience.)
>
> At the same time, I think a monitoring service would really help too -- in the dark it's possible for something to be broken for quite a long time because sometimes nobody knows or notices (except perhaps a bad guy who discovers the darkness and exploits it?)
Where do you think most black-hats (and gray-hats) work? Did you think
it was because it was cool to be hidden in the shadows?
> I'd be happy to help contribute... I would also like to find someone who can add a feature to NSS, the lack of which I consider a security hole: if an OCSP check fails to yield a definitive answer, fall back to the CRL(s) listed in the CRL DP of the cert.
...and if that distribution point also fails (i.e., the entire CA's
infrastructure has been hit by a power failure or disaster)? What would
the acceptable failure-mode be?
Nelson, thank you for asking these questions out in the open, where
those of us who wonder can see (and comment on) the answers.
-Kyle H
> Paul Tiemann
> CTO, DigiCert
>
> (Quoted from The Five Orders of Ignorance, by Phillip G. Armour in October 2000/Vol. 43, No. 10 COMMUNICATIONS OF THE ACM)
>
> 0th Order Ignorance (0OI)— Lack of Ignorance. I have 0OI when I know something and can demonstrate my lack of ignorance in some tangible form, such as by building a system that satisfies the user. 0OI is knowledge. As an example, since it has been a hobby of mine for many years, I have 0OI about the activity of sailing, which, given a lake and a boat, is easily verified.
>
> 1st Order Ignorance (1OI)— Lack of Knowledge. I have 1OI when I don’t know something and can readily identify that fact. 1OI is basic ignorance. Example: I do not know how to speak the Russian language—a deficiency I could readily remedy by taking lessons, reading books, listening to the appropriate audiotapes, or moving to Russia for an extended period of time.
>
> 2nd Order Ignorance (2OI)— Lack of Awareness. I have 2OI when I don’t know that I don’t know something. That is to say, not only am I ignorant of some- thing (for instance I have 1OI), I am unaware of this fact. I don’t know enough to know that I don’t know enough. Example: I cannot give a good example of 2OI (of course).
From http://www.paperandpencil.info/home/2005/02/five_orders_of_.html :
*3rd Order Ignorance (3OI)—Lack of Process.*
You have no means, or process, for resolving your lack of knowledge.
*4th Order Ignorance (4OI)—Meta Ignorance.*
Not aware of these 5 levels.
> I recently wrote an in-depth OCSP monitoring tool that I use internally to make sure our responses are valid (correctly signed, not too close to expiration, etc) I'd be happy to give it to Mozilla or anyone who could use it. It would be trivial to configure it to "watch" any set of certificates (ee, intermediate, and root)
I also think that such a tool would be very valuable, and I would use
it. Following up on a question (Kyle's, I believe): would you be
willing to share it with some of us?
Ralph
I think this is important. The big stick not only punishes the CA but
also its customers. It would be fantastic to see something a bit more
focused and pointy for things that are just a little bad (or careless).
> How about a "hall of shame"? I'm thinking of a web page that
> publicly records a history of incidents of CA malfeasance. When we
> learn about such incidents, they go on that page and STAY THERE
> FOREVER. After a while, CAs learn about this, and begin to do their
> best to stay off that page. They pay attention to the mistakes that
> have gotten other CAs names put onto that page. I think as long as it
> was purely factual, it would be completely legal and defensible.
This is an interesting idea, and perhaps if it were tied into the
problematic practices document, it might serve two purposes at once: (1)
exhibit real-world accounts of such practices and their side-effects,
and (2) incentivize CAs to stay off the list by avoiding the problematic
practices.
This could be implemented by adding to the description of each
problematic practice a list of CAs, each linked to a bug or article
describing what happened in each case.
-Sid
On 8/3/10 5:45 AM, Sid Stamm wrote:
> This could be implemented by adding to the description of each
> problematic practice a list of CAs, each linked to a bug or article
> describing what happened in each case.
>
> -Sid
I think that might work, but only if that 'wall of shame' were somehow
tied back to the sites using that CA's certs. If it's just another web
page on the Internet, there's no shame if no one sees it.
--
Gen Kanai
I don't think certificate authorities would give a "hall of shame" any
serious attention unless the customers of their subscribers do so,
"customers of their subscribers" being browser users. But how many
users would you expect to even know there is a "hall of shame", let
alone understand its significance?
The only real way to deal with authorities that fail to adhere to their
own CP/CPS or otherwise endanger secure use of the Internet is to turn
off the trust bits on their root certificates. Today, users would treat
that situation as "the browser is broken" without understanding that the
real problem is a failure of trust.
Perhaps a trust flag is needed in the NSS database for each root
certificate. If that flag is turned off when a user attempts secure use
of the Internet, a popup would appear saying "You are attempting secure
use of the Internet via an untrusted connection. Do you wish to
continue?" There probably should be more text about contacting the Web
site owner or E-mail sender to advise them that their certificate was
issued by a rogue authority.
--
David E. Ross
<http://www.rossde.com/>.
Anyone who thinks government owns a monopoly on inefficient, obstructive
bureaucracy has obviously never worked for a large corporation.
© 1997 by David E. Ross
And here is another one: https://bugzilla.mozilla.org/show_bug.cgi?id=578499#c6
I'm really concerned about the level and quality of those CAs. Is this a
trial-and-error exercise here? I don't know if it's pure coincidence,
but did you notice that all of the above are Spanish CAs? All of them,
and three different ones.
On Jul 30, 2010, at 10:27 PM, Kyle Hamilton wrote:
>> I recently wrote an in-depth OCSP monitoring tool that I use internally to make sure our responses are valid (correctly signed, not too close to expiration, etc) I'd be happy to give it to Mozilla or anyone who could use it. It would be trivial to configure it to "watch" any set of certificates (ee, intermediate, and root)
>
> I'm fairly certain that this would be of interest to Microsoft, RedHat
> (for their DogTag system), and EJBCA. It would also be of interest to
> me. Would you be willing to release it as warranty-free open-source?
>
> It might be very easy to grow that into a module in a full-fledged PKI
> analysis system. Offering that as a service from many locations would
> be very good.
On Aug 2, 2010, at 10:32 AM, Ralph@TUM wrote:
> I also think that such a tool would be very valuable, and I would use
> it. Following up on a question (Kyle's, I believe): would you be
> willing to share it with some of us?
I apologize for the wait... I'd be happy to share it -- I originally built it using the IAIK library, so I figured I should make a version on the (free) Bouncy Castle API. I'm almost finished with that -- I'll send you each a copy, and maybe you can give me some initial feedback. After that, I need to figure out where to put the code online, etc.
All the best,
Paul Tiemann
CTO, DigiCert
I just tested this one, and completed the bug with my analysis.
Just now, I tried to connect to one of the given test websites with Opera, and
found that it only checks the website's certificate by OCSP, and gets the CRL to
validate the intermediate CA, bypassing the OCSP problem Firefox faces.
Next step: create an identical hierarchy, with the same critical CRL
extension, but this one specifying only a subset (for example by setting the
onlyContainsUserCerts boolean), and see if Opera complains (to be compliant,
it should either reject it because of the critical extension, or declare the
certificate status as unknown because it can't compose a full CRL).
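For anyone wanting to reproduce that test hierarchy, OpenSSL can emit a CRL carrying a critical issuingDistributionPoint limited to end-entity certs via a config fragment along these lines (the section names and URI are placeholders; `onlyuser` is OpenSSL's spelling of onlyContainsUserCerts):

```
[ crl_ext ]
issuingDistributionPoint = critical, @idp_section

[ idp_section ]
fullname = URI:http://crl.example.com/subset.crl
onlyuser = TRUE
```

A CRL with this extension can then be produced with something like `openssl ca -gencrl -crlexts crl_ext`.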
I tried the same experiment with Chrome, Safari, and IE, but neglected to snoop
the network, and I don't know how to purge the OCSP replies and CRL cache. Will
test again in a new VM.
--
Erwann.
On 08/04/2010 09:02 PM, From Erwann Abalea:
>
> I just tested this one, and completed the bug with my analysis.
I saw your comments in the bug, which were very helpful.
> Just now, I tried to connect to one of the given test websites with
> Opera, and found that it only checks the website's certificate by
> OCSP, and gets the CRL to validate the intermediate CA, bypassing the
> OCSP problem Firefox faces.
Probably because it fails to validate the OCSP response. A fail-over to
CRL in case OCSP fails for some reason has already been requested for
NSS/PSM. That, however, is no excuse not to have OCSP responders that
generally work.
On 04/08/2010 23:57, Eddy Nigg wrote:
> On 08/04/2010 09:02 PM, From Erwann Abalea:
>> Just now, I tried to connect to one of the given test websites with
>> Opera, and found that it only checks the website's certificate by
>> OCSP, and gets the CRL to validate the intermediate CA, bypassing the
>> OCSP problem Firefox faces.
>
> Probably because it fails to validate the OCSP response. It has been
> already requested previously to implement for NSS/PSM a fail-over to CRL
> in case OCSP fails for some reason.
No, it doesn't even send an OCSP request to validate the intermediate CA. It
directly downloads the CRL. Strange.
Reading Wikipedia, I just discovered that IE supports OCSP only from version 7
on Vista (not XP). My VM guest OS is an XP, and I have EV-enabled websites (with
the green bar displayed). Installation of another guest is required :(
--
Erwann.
OK
> Reading Wikipedia, I just discovered that IE supports OCSP only from
> version 7 on Vista (not XP). My VM guest OS is an XP, and I have
> EV-enabled websites (with the green bar displayed). Installation of
> another guest is required :(
Ahh, I was under the (wrong) impression that you referred to Opera, not
the Windows crypto store.
The described behaviour is for Opera on Windows XP. But I wanted to test other
browsers as well (IE, Opera, Safari, Chrome), and some of them (at least Chrome
and Safari) rely on Windows CAPI (even if not everything is done by the CAPI).
It's not like Firefox, which doesn't care about the OS.
--
Erwann.
Opera doesn't rely on the Windows crypto store, it has its own.
On a new install (Win7 Enterprise), I repeated the test, and I confirm, Opera
10.60 checks the website's certificate by OCSP, and then downloads the CRL for
the intermediate CA.
> I tried the same experience with Chrome, Safari, and IE, but neglected
> to snoop the network, and I don't know how to purge the OCSP replies and
> CRL cache. Will test again in a new VM.
IE8 performs OCSP checks for all levels (and, strangely, does 2 requests for
each level for www.it-txartela.net), *and* downloads the CRLs.
Safari 5.0.1 does nothing, neither OCSP, nor CRL. This result needs validation.
I'll repeat the tests at home with (again) a fresh install, and test Chrome.
That way, I'll eliminate the corporate firewall :( I'll also be able to check
again with Safari, on my iMac.
BTW, earlier on this group, someone said that OCSP was mandatory for EV. I can't
find such a statement in the SSL EV Guidelines 1.2. Am I missing something?
--
Erwann.
That's what I've observed too -- Opera grabs the CRL for checking intermediate certificate validity. This is quite nice, considering it probably caches the CRL (how often do intermediate certs get revoked without simultaneously revoking all of the end-entity certs too? It seems like a fine way to manage checking intermediate revocation...)
>> I tried the same experience with Chrome, Safari, and IE, but neglected
>> to snoop the network, and I don't know how to purge the OCSP replies and
>> CRL cache. Will test again in a new VM.
>
> IE8 performs OCSP checks for all levels (and, strangely, does 2 requests for each level for www.it-txartela.net), *and* downloads the CRLs.
>
> Safari 5.0.1 does nothing, neither OCSP, nor CRL. This result needs validation.
If I understand correctly, Safari on Windows is using the CryptoAPI, and it probably didn't check OCSP or CRL because you tried with IE first, and the IE OCSP and CRL cached responses were used by Safari.
You can clear the OCSP and CRL cache with this command:
certutil -urlcache * delete
Then restart Safari and try it again--it'll probably do OCSP checks after that.
> I'll repeat the tests at home with (again) a fresh install, and test Chrome. That way, I'll eliminate the corporate firewall :( I'll also be able to check again with Safari, on my iMac.
I've observed Safari on Mac using OCSP. I don't know if it fails back to CRL if OCSP doesn't yield an authoritative result.
To remove the OCSP cache on a Mac, I use this:
su
rm /private/var/db/crls/*db
killall ocspd
ocsp -d
> BTW, earlier on this group, someone told that OCSP was mandatory for EV. I can't find such statement in the SSL EV Guidelines 1.2. Am I missing something?
I don't think it was required. I remember a requirement that if no OCSP responder is available, then you can't let your CRLs grow too large to download in a few seconds over a 56kbit modem. But I do remember something about Firefox not showing its EV UI unless it can get OCSP responses (if I recall correctly, this behavior may have changed recently, so that it does show a green bar for EV certs with only a CRL distribution point defined).
Paul Tiemann
CTO, DigiCert
This CRL has a 1-year duration, but I don't know how often it is
renewed (the one I got has been created on July 30). We can imagine
that a crypto toolkit won't try to download a new version of it until
its expiration, even if this behaviour is not imposed by the standard.
That's what Firefox does, and probably CAPI too; I don't know about Opera.
If an intermediate certificate needs to be revoked, the revocation won't
be taken into account until the CRL is reloaded. Hence, all the
end-entity certificates will be considered valid until then. I know it
really doesn't happen often, but the impact is more serious than a
revoked end-entity certificate. But you already know that.
> >> I tried the same experience with Chrome, Safari, and IE, but
> >> neglected to snoop the network, and I don't know how to purge the
> >> OCSP replies and CRL cache. Will test again in a new VM.
> >
> > IE8 performs OCSP checks for all levels (and, strangely, does 2
> > requests for each level for www.it-txartela.net), *and* downloads
> > the CRLs.
> >
> > Safari 5.0.1 does nothing, neither OCSP, nor CRL. This result
> > needs validation.
>
> If I understand correctly, Safari on Windows is using the CryptoAPI,
> and it probably didn't check OCSP or CRL because you tried with IE
> first, and the IE OCSP and CRL cached responses were used by Safari.
You're right about the order, and I indeed forgot that the cache is
shared among all of them...
> You can clear the OCSP and CRL cache with this command:
>
> certutil -urlcache * delete
Thanks! That will ease the tests.
> Then restart Safari and try it again--it'll probably do OCSP checks
> after that.
>
> > I'll repeat the tests at home with (again) a fresh install, and
> > test Chrome. That way, I'll eliminate the corporate firewall :(
> > I'll also be able to check again with Safari, on my iMac.
>
> I've observed Safari on Mac using OCSP. I don't know if it fails
> back to CRL if OCSP doesn't yield an authoritative result.
I still have on my TODO list the creation of a hierarchy with
"invalid" OCSP responders (an OCSP responder ultimately attached to
another trust anchor than the certificate we're checking) *and* a CRL
with an incomplete scope and the necessary critical extension to
declare it as such.
> To remove the OCSP cache on a Mac, I use this:
>
> su
> rm /private/var/db/crls/*db
> killall ocspd
> ocsp -d
Hmmm... deleting files, killing a service, and restarting it? Well,
"Mme Michu" (French for "Mrs Jones") doesn't clear her CRL/OCSP cache,
but it could perhaps be made easier :)
> > BTW, earlier on this group, someone told that OCSP was mandatory
> > for EV. I can't find such statement in the SSL EV Guidelines 1.2.
> > Am I missing something?
>
> I don't think it was required. I remember a requirement that if no
> OCSP responder is available, then you can't let your CRLs grow too
> large to download in a few seconds over a 56kbit modem.
Thanks.
> But I do remember something about Firefox not showing its EV UI
> unless it can get OCSP responses (this behavior may have changed
> recently if I recall--so that it does show a green bar for EV certs
> with only a CRL distribution point defined)
So... Mozilla might accept CAs without OCSP at all for EV
certificates?
--
Erwann ABALEA <erwann...@keynectis.com>
Département R&D
KEYNECTIS
11-13 rue René Jacques - 92131 Issy les Moulineaux Cedex - France
Tél.: +33 1 55 64 22 07
http://www.keynectis.com
/It is strongly RECOMMENDED that all CAs support OCSP when a majority of
deployed Web servers support the TLS 1.0 extension in accordance to RFC
3546, to return “stapled” OCSP responses to EV-enabled applications. CAs
MUST support an OCSP capability for Subscriber Certificates that are
issued after Dec 31, 2010./
But obviously if a CA has an OCSP pointer in the intermediate CA
certificates, the responders must be working. If not, there are CRLs
for those who bother to fetch them.
WTF, Mac runs its own ocspd daemon? What exactly is that?
> I don't think it was required.
Answered that just a minute ago.
> But I do remember something about Firefox not showing its EV UI unless it can get OCSP responses
That's because the only revocation checking method FF could perform was
OCSP.
There's quite a lot of software that will not do OCSP checking for
intermediate CAs, and only verifies them with a CRL.
There's some logic behind this: an intermediate is much less likely than
an end-entity cert to be suddenly revoked, its CRL is usually almost
empty and about the size of a typical OCSP response, and the CRL has a
very strong reuse rate.
1) The CAB Forum published a document aimed at browser writers
(and platforms) which - although not compulsory - sets out the
recommended way to process EV certificates.
http://www.cabforum.org/Guidelines_for_the_processing_of_EV_certificates%20v1_0.pdf
"12.Revocation Checking
Applications must confirm that the EV certificate has not been
revoked before accepting it. Revocation checking must be performed
in accordance with [RFC5280]. Certificates for which confirmation
cannot be obtained must not be granted the EV treatment (see
Section 13, below).
The application should support both CRL and OCSP services. For
HTTP schemes, the application may use either the GET or POST
method. If the application cannot obtain a response using one
service, then it should try all available alternative services.
The application should follow HTTP redirects and cache-refresh
directives.
Response time-out should not be less than three seconds.
"
2) In the main guidelines (to CAs)
http://www.cabforum.org/Guidelines_v1_2.pdf
"15.2.1 CA Liability
....
(2) Indemnification of Application Software Vendors:
.... Thus, the CA (and its Root CA) SHALL defend, indemnify, and
hold harmless each Application Software Vendor for any and all
claims, damages, and losses suffered by such Application Software
Vendor related to an EV Certificate issued by the CA, regardless
of the cause of action or legal theory involved.
This shall not apply, however, to any claim, damages, or loss
suffered by such Application Software Vendor related to an EV
Certificate issued by the CA where such claim, damage, or loss was
directly caused by such Application Software Vendor's software
displaying as not trustworthy an EV Certificate that is still
valid, or displaying as trustworthy
(i) an EV Certificate that has expired, or
(ii) an EV Certificate that has been revoked (but only in cases
where the revocation status is currently available from the CA
online, and the relying-party application software either failed
to check such status or ignored an indication of revoked status).
"
Regards
Robin
I suspect that's a different discussion, since it doesn't cancel the
OCSP requirement as far as the CAs are concerned.
Requiring applications to traverse all revocation methods before
giving up is something I wholeheartedly support. And not only for EV,
if possible :-)
https://bugzilla.mozilla.org/show_bug.cgi?id=585122
I also added the following in Comment #5 of the bug: I believe this
should also apply to intermediate CAs... EV treatment should only be
allowed if both the end-entity cert and the intermediate certs have an
AIA OCSP URI.
Is this in the EV Guidelines? Or did I just add a new requirement that
CAs won't be expecting?
>> Regarding requiring applications to traverse all revocation
>> methods before giving up I wholeheartedly support.
Has a bug been filed for this?
>> issuing certs with duplicate serial numbers from the same issuer name
Is this issue something that should be added to the Recommended
Practices or the Potentially Problematic Practices wiki pages? I hope
it's not a common mistake. But maybe there should be something about
using random serial numbers and making sure duplicate values aren't used?
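By way of illustration only (a hypothetical sketch, not a description of how any real CA issuance system works), drawing serials from a CSPRNG and keeping a per-issuer record makes duplicates essentially impossible:

```python
import secrets

def new_serial(issued, bits=64):
    """Return a fresh positive random serial for one issuer.

    'issued' is a set of serials already used under that issuer name;
    the new serial is recorded in it before being returned.
    RFC 5280 requires a positive integer of at most 20 octets.
    """
    while True:
        serial = secrets.randbits(bits)
        # Reject zero and the (astronomically unlikely) duplicate.
        if serial != 0 and serial not in issued:
            issued.add(serial)
            return serial
```

A real CA would persist the issued-serial record per issuer name, since the uniqueness requirement is per issuer, not global.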
>> subordinate CA certs expire before they are replaced
>> OCSP responder certs expire before they are replaced
This is very annoying, but I'm not sure what we can do about it.
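One thing a CA (or anyone monitoring it) could do is watch the notAfter dates of subordinate CA and OCSP responder certs and alert well before they lapse. A minimal sketch of that check (hypothetical names, stdlib only):

```python
from datetime import datetime, timedelta

def expiring_soon(certs, now, lead_time=timedelta(days=90)):
    """Given {name: not_after} pairs, return the sorted names of
    certificates that expire within 'lead_time' (or already have)."""
    return sorted(name for name, not_after in certs.items()
                  if not_after - now <= lead_time)
```

Run against the actual inventory on a schedule, anything this returns is a replacement that should already be underway.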
>> OCSP responder certs don't conform to RFC 2560.
I think we catch most of these during the root inclusion/change process.
Though we have found some cases where CAs with roots already in NSS had
broken OCSP services. I believe most of those have been caught and fixed.
I like the idea of regularly testing OCSP services for roots included in
NSS.
>> insufficient oversight of or validation by RAs
Do we need to add more to the Potentially Problematic Practices wiki page?
https://wiki.mozilla.org/CA:Problematic_Practices#Delegation_of_Domain_.2F_Email_validation_to_third_parties
>> issuance of SubCA certs in violation of Mozilla policy
Do we need to add more to the subCA checklist?
https://wiki.mozilla.org/CA:SubordinateCA_checklist
>> insecure web interfaces for cert application/validation
Should something about this be added to the Recommended Practices wiki page?
>> misrepresentation of company/operational details,
>> such as governing jurisdiction
This one is somewhat tricky, and misrepresentation may not be
intentional, but due to translation/culture/communication issues.
Kathleen
We could blog about it when we update the page, explaining what went
wrong. It wouldn't reach an end-user audience necessarily, but posts
on the Mozilla Security Blog often hit the tech press and would
reach many administrators.