
Brief History of attacks on CAs


ianG

Mar 4, 2012, 5:14:10 AM
to Mozilla policy
For a risk analysis I am writing, I have compiled a list of attacks on
CAs that have happened. "History" in the sense of risk analysis.

It isn't meant to be deep, just to give a flavour. Unfortunately, for a
real history we would need many more attacks to give some sense of
statistical predictability; sadly, our attackers are failing us ;)

Comments welcome, iang.



2001. False certs. An unknown party used weaknesses in validation to get
two certificates issued in the name of Microsoft.com (Guerin). The
attacker was thought to be of the reputational variety: interested in
embarrassing the CA, not in exploitation.

2003. Phishing. This attack bypasses the security afforded by
certificates due to weaknesses in the secure browsing model (Grigg1).
The existence of an unsecured mode of communication (HTTP) alongside a
secure mode (HTTPS) provides an easy borders-of-the-map or downgrade
attack, which user interfaces offer little resistance against.
The best guesstimate of the consequences runs at around $100m (FC 1343).

2008. Interface breach. One CA created a false certificate for a vendor
by probing the RA of a competitor for weaknesses (Leyden). Consequences
were limited to lowered reputations for all of those involved.

2008. Weak certs. An academic group succeeded in attacking a CA with weak
cryptographic protections in its certificates (Sotirov et al). This
resulted in the attackers acquiring a signed certificate over two keys,
one normal and one that acted as a sub-root. This gave them the ability
to sign new certificates that would be accepted by major vendors.
Consequences were limited to reputational effects as the root that was
attacked was slated to be removed within the month.

2011. False certs. A lone Iranian attacker, ichsunx2, breached
approximately 4 CAs. His best success was to use weaknesses in an RA to
acquire 9 certificates for several high-profile communications sites
(Zetter). It was claimed that the attacker operated under the umbrella
of the Iranian state, but no evidence for that was forthcoming.

2011. Breached / collapsed CA. The same attacker, ichsunx2, breached a
Dutch CA and issued several certificates. The CA’s false certs were
first discovered in an attack on Google’s Gmail service, suggested to be
directed against political activists opposed to the Iranian government.
Controls within the CA were shown to be grossly weak in a report by an
independent security auditor (FOX-IT1), and the CA filed for bankruptcy
protection (perhaps for that reason). Vendors discovered that revocation
was not an option, and issued new browsers that blocked the CA in code.
Known user damages: rework by Google, and vendor-coordinated re-issuance
of software to all browser users. Potential for loss of confidentiality
of activists opposed to the Iranian government. Many Netherlands
government agencies had to replace their certificates.

2011. Spear Phishing. A group of 9 certificates was identified in
targeted malware injection attacks (FOX-IT2). As the certificates were
all alleged to be only 512 bits, the conjecture is that new private keys
were crunched for them. One public-facing sub-CA in Malaysia was
dropped; 3 other CAs re-issued some certs and reviewed controls. No
known customer breaches, but probably replacement certs for the holders
(minor).

2011. Website hack. A captive CA for a telecom had its website hacked,
and subscriber information and private IP compromised (Goodin). The
attacker was listed as a hacker who tipped off the media, claiming not
to be the first. The parent telecom shut down the website.

2012. CA breached protocol. A CA announced that it had issued a subroot
to a company for the purposes of intercepting the secure communications
of its employees. This is contrary to contract with vendors and industry
compact. At some moment of clarity, the CA decided to withdraw the
subroot. Consequences: loss or damage to that customer due to contract
withdrawal. Such contracts have been estimated to cost $50k. Destruction
of the equipment concerned, maybe $10k. Loss of reputation to that CA,
which specialises in providing services to US government agencies. Loss
of time at vendors, which debated the appropriate response.




References are here: http://wiki.cacert.org/Risk/History

Jean-Marc Desperrier

unread,
Mar 5, 2012, 8:18:13 AM3/5/12
to mozilla-dev-s...@lists.mozilla.org
ianG wrote:
> For a risk analysis I am writing, I have compiled a list of attacks on
> CAs that have happened. "History" in the sense of risk analysis.

I think the risk analysis can be done properly only if these are set
alongside the attacks on certificate security that did not directly
target CAs.

Firstly, the separation between the two is not always completely obvious.

For example, in the "2011. Spear Phishing" case, all the indications we
have suggest it was the issued certificates themselves that were attacked,
not the CA, and the CA's responsibility is engaged only because it
accepted to issue certificates that were obviously weak.

But you don't list the recent weak RSA key attack, where in fact the
CAs similarly accepted to issue certificates that were also weak.
Was it less obvious? But how much less, really? I was recently shown that
the attack was already known and described on page 13 of this 1999 paper:
http://www.comms.engg.susx.ac.uk/fft/crypto/ECCFut.pdf.
The new thing is the optimized method for making the calculation over a
very large number of keys, but that arguably doesn't require much effort
to discover for a competent cryptographer who wants to tackle the problem.

As a result, it seems to me that the level of competence and computing
power an attacker needs to break a key in the second attack is no
higher (probably lower) than in the first.

The annoying part is that the effort for the CAs to identify the weak
key *is* significantly higher: instead of a simple size test, it now
requires as much effort as it does for the attacker.
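
To make the shared-prime problem concrete, here is a toy sketch of my own
(tiny made-up primes rather than real key sizes, so purely illustrative):
each modulus reveals nothing about its factors on its own, yet one GCD
breaks both.

    from math import gcd

    # Toy primes standing in for 512-bit ones; the arithmetic is identical.
    p = 10007                  # prime shared by both keys (bad randomness)
    q1, q2 = 10009, 10037
    n1, n2 = p * q1, p * q2    # two "different" public moduli

    # Neither modulus gives anything away alone; together they give up p.
    shared = gcd(n1, n2)
    assert shared == p

    # Knowing p factors both moduli, so both private keys fall.
    print(n1 // shared, n2 // shared)   # prints q1 and q2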

> It isn't meant to be deep, just to give a flavour. Unfortunately, for a
> real history we would need many more attacks to give some sense of
> statistical predictability; sadly, our attackers are failing us ;)
>
> Comments welcome, iang.

I think if we are interested in statistical predictability we need to
merge the list with the attacks directed at the certificates themselves.
The stories of stolen code-signing certificate private keys are actually
very similar: just as much a case of PKI failing to deliver the level of
security expected by those who rely on it.

If the way PKI is used in practice, is sold, and is recommended to be
used leads to private keys stolen from the user's own computer being
usable in attacks on other people, then it's a failure just as much as
when the CA fails.
"The user was not careful enough" may be an excuse *only* if the careless
user is the only victim. When one careless user leads to a large number
of other people being victimized, it's the system that really needs to
be fixed.

ianG

Mar 6, 2012, 8:46:05 AM
to dev-secur...@lists.mozilla.org
Hi Jean-Marc,
thanks for this, very helpful.

On 6/03/12 00:18 AM, Jean-Marc Desperrier wrote:
> ianG wrote:
>> For a risk analysis I am writing, I have compiled a list of attacks on
>> CAs that have happened. "History" in the sense of risk analysis.
>
> I think the risk analysis can be done properly only if these are set
> alongside the attacks on certificate security that did not directly
> target CAs.

Nod. Hence the first one about phishing and the later spear phishing one.

> Firstly, the separation between the two is not always completely obvious.
>
> For example, in the "2011. Spear Phishing" case, all the indications we
> have suggest it was the issued certificates themselves that were attacked,
> not the CA, and the CA's responsibility is engaged only because it
> accepted to issue certificates that were obviously weak.

Right.

> But you don't list the recent weak RSA key attack, where in fact the
> CAs similarly accepted to issue certificates that were also weak.


Hmmm... details?


> Was it less obvious? But how much less, really? I was recently shown that
> the attack was already known and described on page 13 of this 1999 paper:
> http://www.comms.engg.susx.ac.uk/fft/crypto/ECCFut.pdf.
> The new thing is the optimized method for making the calculation over a
> very large number of keys, but that arguably doesn't require much effort
> to discover for a competent cryptographer who wants to tackle the problem.
>
> As a result, it seems to me that the level of competence and computing
> power an attacker needs to break a key in the second attack is no
> higher (probably lower) than in the first.
>
> The annoying part is that the effort for the CAs to identify the weak
> key *is* significantly higher: instead of a simple size test, it now
> requires as much effort as it does for the attacker.


This is all new to me and while the combination of claims sounds
dangerous ... I'd like to see the attack in motion?


>> It isn't meant to be deep, just to give a flavour. Unfortunately, for a
>> real history we would need many more attacks to give some sense of
>> statistical predictability; sadly, our attackers are failing us ;)
>>
>> Comments welcome, iang.
>
> I think if we are interested in statistical predictability we need to
> merge the list with the attacks directed at the certificates themselves.
> The stories of stolen code-signing certificate private keys are actually
> very similar: just as much a case of PKI failing to deliver the level of
> security expected by those who rely on it.


Ah. Good point. I shall look for a summary of code-signing certificate
thefts. That's exactly what I was hoping for in posting this, thanks :)

> If the way PKI is used in practice, is sold, and is recommended to be
> used leads to private keys stolen from the user's own computer being
> usable in attacks on other people, then it's a failure just as much as
> when the CA fails.


Yeah.

> "The user was not careful enough" may be an excuse *only* if the careless
> user is the only victim. When one careless user leads to a large number
> of other people being victimized, it's the system that really needs to
> be fixed.

+1 .. but beyond scope of this exercise.

Thanks again.

iang

Jean-Marc Desperrier

Mar 6, 2012, 1:10:27 PM
to mozilla-dev-s...@lists.mozilla.org
ianG wrote:
>> But you don't list the recent weak RSA key attack, where in fact the
>> CAs similarly accepted to issue certificates that were also weak.
>
> Hmmm... details?

As it was all over the web, I couldn't even imagine you wouldn't see
what I was talking about:
- "New research: There's no need to panic over factorable keys--just
mind your Ps and Qs"
https://freedom-to-tinker.com/blog/nadiah/new-research-theres-no-need-panic-over-factorable-keys-just-mind-your-ps-and-qs
- "What You Need to Know About the RSA Key Research"
http://threatpost.com/en_us/blogs/what-you-need-know-about-rsa-key-research-021612
- "Ron was wrong, Whit is right"
http://eprint.iacr.org/2012/064.pdf

- http://blafh.blogspot.com/2012/02/ron-was-wrong-whit-is-right.html
"Q: Heniger, Halderman et al. have twice as many broken keys, and have
identified the source of the flaw. Most of them do not belong to real
websites and pose no threat to the general public.
A: We seem to have just as many broken keys. Some numbers in the paper
are based off an old dataset, and were left as-is as they are still
representative of the situation. And we did know many (but not all) of
them belonged to VPNs and other network devices, but didn't want to
disclose it too early, for obvious reasons. It is true that popular
https websites may not be at immediate risk for the general public; it
is still, however, a serious matter of concern."

Ondrej Mikle

Mar 6, 2012, 6:03:56 PM
to dev-secur...@lists.mozilla.org
On 03/06/2012 02:46 PM, ianG wrote:
> Hi Jean-Marc,
>
>> Was it less obvious? But how much less, really? I was recently shown that
>> the attack was already known and described on page 13 of this 1999 paper:
>> http://www.comms.engg.susx.ac.uk/fft/crypto/ECCFut.pdf.
>> The new thing is the optimized method for making the calculation over a
>> very large number of keys, but that arguably doesn't require much effort
>> to discover for a competent cryptographer who wants to tackle the problem.
>>
>> As a result, it seems to me that the level of competence and computing
>> power an attacker needs to break a key in the second attack is no
>> higher (probably lower) than in the first.
>>
>> The annoying part is that the effort for the CAs to identify the weak
>> key *is* significantly higher: instead of a simple size test, it now
>> requires as much effort as it does for the attacker.
>
>
> This is all new to me and while the combination of claims sounds dangerous ...
> I'd like to see the attack in motion?

Testing keys in CSRs by CAs was talked about some time ago on the EFF
observatory mailing list. I'm not sure how many CAs actually implement it,
or what exactly they implement. It should be pretty simple to make an
automated tool for that.

When testing for weak keys in various projects, we currently do (for RSA):

1. check modulus size and low public exponent
2. check against the list of Debian weak keys
3. test against collected moduli for common factors

Obviously the first two tests are very fast, but since a CA needs to test
one key at a time, the last GCD test wouldn't be prohibitively slow even
with a naive algorithm. One needs to test only against moduli of roughly
similar size (say, +/- 16 bits). With the collected ~4.7M moduli the
largest group is the 1024-ish-bit one (~2.9M).
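
Something like this minimal sketch is what I have in mind for the per-key
checks (Python rather than GMP/C, and with the Debian weak-key fingerprints,
the collected-moduli corpus and the policy thresholds left as placeholders,
so it's only the shape of the tool, not the tool itself):

    from math import gcd

    # Placeholders -- a real tool would load these from disk:
    DEBIAN_WEAK_FINGERPRINTS = set()   # fingerprints of known Debian weak keys
    COLLECTED_MODULI = []              # previously seen moduli of similar size

    # Example policy thresholds, adjust to taste:
    MIN_MODULUS_BITS = 1024
    MIN_PUBLIC_EXPONENT = 65537

    def check_rsa_key(n, e, fingerprint):
        """Return the list of reasons to reject the submitted RSA key (n, e)."""
        problems = []

        # 1. modulus size and low public exponent
        if n.bit_length() < MIN_MODULUS_BITS:
            problems.append("modulus too small: %d bits" % n.bit_length())
        if e < MIN_PUBLIC_EXPONENT:
            problems.append("public exponent too low: %d" % e)

        # 2. known Debian weak key
        if fingerprint in DEBIAN_WEAK_FINGERPRINTS:
            problems.append("key is on the Debian weak-key blacklist")

        # 3. common factor with any previously seen modulus (naive pairwise GCD)
        for other in COLLECTED_MODULI:
            if other != n and gcd(n, other) != 1:
                problems.append("modulus shares a prime factor with another key")
                break

        return problems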

On fairly standard server HW (some Xeon) with the off-the-shelf GMP
library, one test takes about 50 seconds against the largest 2.9M group,
using just a single core (including ~10 sec loading time). Only if you
need to test every key against every other key do things get tricky
(DJB's batch GCD algorithm mentioned in Heninger's article).
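
For the all-against-all case, a compact version of that product/remainder
tree trick looks roughly like this (again only a sketch; a production run
would want GMP and careful memory handling):

    from math import gcd

    def batch_gcd(moduli):
        # For each modulus n_i, compute gcd(n_i, product of all other moduli).
        # A result > 1 means n_i shares a prime factor with some other key.

        # Product tree: leaves are the moduli, the root is their full product.
        tree = [list(moduli)]
        while len(tree[-1]) > 1:
            level = tree[-1]
            tree.append([level[i] * level[i + 1] if i + 1 < len(level) else level[i]
                         for i in range(0, len(level), 2)])

        # Remainder tree: push the product back down, reducing mod n^2 at each node.
        rems = tree[-1]
        for level in reversed(tree[:-1]):
            rems = [rems[i // 2] % (n * n) for i, n in enumerate(level)]

        # (P mod n_i^2) // n_i equals (P / n_i) mod n_i, so this gcd exposes shared primes.
        return [gcd(n, r // n) for n, r in zip(moduli, rems)]

    # Tiny smoke test with made-up primes: keys 0 and 2 share the prime 101.
    print(batch_gcd([101 * 103, 107 * 109, 101 * 113]))   # -> [101, 1, 101]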

If we put together some key requirements (also for other key types like
ECC), I could write such a tool.

Ondrej

Phillip Hallam-Baker

Mar 6, 2012, 8:53:09 PM
to Ondrej Mikle, dev-secur...@lists.mozilla.org
The fact that another person has the same factor as you is not the
security problem here.

The security problem is that the primes were generated with a bad RNG.
Telling people to reapply with a new key generated from the same RNG
will not improve security at all.


This is an application security hole.
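
A deliberately silly simulation (entirely made up, just to show the shape
of the problem): a device with only a few bits of real entropy at key
generation can only ever reach a handful of primes, so re-keying just
rolls the same loaded dice again.

    import random
    from collections import Counter

    # Stand-ins for "the only primes a nearly entropy-less device can reach".
    # Real devices would derive large primes, but from just as few seeds.
    REACHABLE_PRIMES = [101, 103, 107, 109, 113, 127, 131, 137]

    def keygen_with_bad_rng(entropy_bits=3):
        # Pretend key generation where only `entropy_bits` of the seed are real.
        seed = random.getrandbits(entropy_bits)
        rng = random.Random(seed)   # everything downstream is fixed by the tiny seed
        p = rng.choice(REACHABLE_PRIMES)
        q = rng.choice([x for x in REACHABLE_PRIMES if x != p])
        return p * q

    # 1000 "independent" devices -- or one device re-applying 1000 times:
    moduli = [keygen_with_bad_rng() for _ in range(1000)]
    print(Counter(moduli).most_common(3))   # massive repetition, shared factors everywhere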



--
Website: http://hallambaker.com/

ianG

Apr 12, 2012, 8:05:26 AM
to dev-secur...@lists.mozilla.org, jmd...@gmail.com
I've now got back to this and reviewed all the comments, thanks.

http://wiki.cacert.org/Risk/History

Especially, this one:


On 7/03/12 05:10 AM, Jean-Marc Desperrier wrote:
> ianG wrote:
>>> But you don't list the recent weak RSA key attack, where in fact the
>>> CAs similarly accepted to issue certificates that were also weak.
>>
>> Hmmm... details?
>
> As it was all over the web, I couldn't even imagine you wouldn't see
> what I was talking about:
> - "New research: There's no need to panic over factorable keys--just
> mind your Ps and Qs"
> https://freedom-to-tinker.com/blog/nadiah/new-research-theres-no-need-panic-over-factorable-keys-just-mind-your-ps-and-qs
>
> - "What You Need to Know About the RSA Key Research"
> http://threatpost.com/en_us/blogs/what-you-need-know-about-rsa-key-research-021612
>
> - "Ron was wrong, Whit is right"
> http://eprint.iacr.org/2012/064.pdf
>
> - http://blafh.blogspot.com/2012/02/ron-was-wrong-whit-is-right.html
> "Q: Heniger, Halderman et al. have twice as many broken keys, and have
> identified the source of the flaw. Most of them do not belong to real
> websites and pose no threat to the general public.
> A: We seem to have just as many broken keys. Some numbers in the paper
> are based off an old dataset, and were left as-is as they are still
> representative of the situation. And we did know many (but not all) of
> them belonged to VPNs and other network devices, but didn't want to
> disclose it too early, for obvious reasons. It is true that popular
> https websites may not be at immediate risk for the general public; it
> is still, however, a serious matter of concern."



Ha! You are right, I had seen it, but discounted it as boring :) Added
now. This is exactly the sort of blind spot I was hoping to fix by
posting, thanks!



iang