Standard PKC Test Keys


Peter Gutmann

Oct 19, 2024, 6:15:14 AM
to dev-secur...@mozilla.org
The publication of RFC 9500 passed without too much notice, so I thought I'd
mention it here for both CAs and crypto library developers. The description
is:

The widespread use of public key cryptosystems on the Internet has led to a
proliferation of publicly known but not necessarily acknowledged keys that
are used for testing purposes or that ship preconfigured in applications.
These keys provide no security, but since there's no record of them,
relying parties are often unaware that they provide no security. In order
to address this issue, this document provides a set of standard public test
keys that may be used wherever a preconfigured or sample key is required
and, by extension, also in situations where such keys may be used, such as
when testing digitally signed data. Their purpose corresponds roughly to
that of the EICAR test file, a non-virus used as a test file for antivirus
products, and the GTUBE file, a similar file used with spam-detection
products.

Crypto library developers may want to use these keys as their standard test
keys, and CAs should check for them when issuing certificates to make sure
that they're not certifying test keys for production use.
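
As a rough illustration, a pre-issuance check could look something like the
following (a minimal sketch in Python using the "cryptography" package; the
blocklist file of test-key SPKI hashes is a hypothetical input, since the
RFC publishes the keys themselves rather than hashes of them):

# Minimal sketch: refuse to certify a known test key by comparing the
# SHA-256 of the request's SubjectPublicKeyInfo against a local blocklist.
# "test-key-spki-hashes.txt" is a hypothetical file of hex digests derived
# from the RFC 9500 keys; it isn't shipped anywhere.
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

with open("test-key-spki-hashes.txt") as f:
    TEST_KEY_HASHES = {line.strip().lower() for line in f if line.strip()}

def is_test_key(csr: x509.CertificateSigningRequest) -> bool:
    spki = csr.public_key().public_bytes(
        Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    return hashlib.sha256(spki).hexdigest() in TEST_KEY_HASHES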

Peter.

Mike Shaver

Oct 19, 2024, 9:35:17 AM
to Peter Gutmann, dev-secur...@mozilla.org
Thank you, Peter! I agree that it would be good to lint for these, and hope to see that capability integrated into pre-issuance validation by all CAs.

Mike


Matt Palmer

Oct 19, 2024, 10:56:01 PM
to dev-secur...@mozilla.org
On Sat, Oct 19, 2024 at 10:15:05AM +0000, Peter Gutmann wrote:
> The widespread use of public key cryptosystems on the Internet has led to a
> proliferation of publicly known but not necessarily acknowledged keys that
> are used for testing purposes or that ship preconfigured in applications.
> These keys provide no security, but since there's no record of them,
> relying parties are often unaware that they provide no security.

Relying parties should be checking keys against the dataset maintained
by pwnedkeys.com, which has a great many keys, both test and otherwise,
including the keys contained in RFC9500 (included since ~December 2023).

- Matt

Peter Gutmann

Oct 20, 2024, 5:05:41 AM
to Matt Palmer, dev-secur...@mozilla.org
Matt Palmer <mpa...@hezmatt.org> writes:

>Relying parties should be checking keys against the dataset maintained by
>pwnedkeys.com, which has a great many keys, both test and otherwise,
>including the keys contained in RFC9500 (included since ~December 2023).

Nice! Any chance of publishing either the SPKIs or the SPKI hashes? There
are lots of things around that can't make arbitrary Internet requests every
time they see a new key.

Peter.

Matt Palmer

Oct 20, 2024, 6:37:09 PM
to dev-secur...@mozilla.org
On Sun, Oct 20, 2024 at 09:05:31AM +0000, Peter Gutmann wrote:
> Matt Palmer <mpa...@hezmatt.org> writes:
> >Relying parties should be checking keys against the dataset maintained by
> >pwnedkeys.com, which has a great many keys, both test and otherwise,
> >including the keys contained in RFC9500 (included since ~December 2023).
>
> Nice! Any chance of publishing either the SPKIs or the SPKI hashes?

Possibly. I have concerns around doing so, as the data set is very
large, and constantly updating. I'd prefer to build a system which is
capable of handling those challenges, but nobody has ever wanted to work
with me to address them, so I haven't gotten around to it myself. I've
also considered bloom-filtered querying for high-volume applications,
and k-anonymous lookups for the privacy-conscious, but again, nobody's
actually seriously asked me for that, so they're also in the "round tuit"
bucket.

> There are lots of things around that can't make arbitrary Internet
> requests every time they see a new key.

While I'm sure there are *some* things that can't make arbitrary
requests, I'm less confident about the "lots" part. If you're regularly
seeing new keys, you're probably communicating on the Internet, in which
case...

- Matt

Peter Gutmann

Oct 21, 2024, 4:25:33 AM
to Matt Palmer, dev-secur...@mozilla.org
Matt Palmer <mpa...@hezmatt.org> writes:

>I have concerns around doing so, as the data set is very large, and
>constantly updating.

Ah, I didn't realise it was that big, I thought it'd be a small collection
that could be turned into a bloom filter. If there's that many of them the
data would be interesting, any chance of publishing stats, how many
compromised keys, how many are X.509, how many are SSH, etc?

>While I'm sure there are *some* things that can't make arbitrary requests,
>I'm less confident about the "lots" part.

I'm referring to embedded systems, which have no internet access but end up
seeing keys from who-knows-where. When you see a connection with a cert
issued to Some State in Some Country [0] you've got a pretty good idea that
the private key is unlikely to be very private, but apart from that there's no
other indication that there's a problem.

That would be another reason to see what's present, although that could also
be handled in the stats without having to publish actual keys/certs, what are
the top identifiers used with non-private keys? That could be applied like a
top-ten bad passwords filter, if you can get people to stay away from the most
commonly-used insecure keys it's at least some progress.

Peter.

[0] For those who don't recognise this, it's the default OpenSSL cert data.

Hanno Böck

Oct 21, 2024, 5:23:51 AM
to dev-secur...@mozilla.org
FWIW, I may as well throw in my tool badkeys:
https://badkeys.info/

It contains checks for various known vulnerabilities in public keys,
and also a blocklist of known "public private keys", as I like to call
them. (And yes, the RFC 9500 keys are in there as well, as are all
other private keys used in RFCs and IETF draft documents.)

The key sources are all public and documented here:
https://github.com/badkeys/blocklistmaker

It uses a hash list; however, the format is currently only
sourcecode-documented. (It's on my task list to document that properly.
Essentially, it's a truncated SHA-256 of N in the case of RSA, and of x
in the case of EC keys. That's a deliberate choice over SPKI hashes, so
that it better covers co-broken RSA keys (different e, but same N) and
different encodings of EC keys.)
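
A sketch of that fingerprinting scheme (the exact byte encoding and the
truncation length below are illustrative assumptions, not the documented
format):

# Parameter-hash idea: hash the RSA modulus N or the EC x coordinate
# rather than the whole SPKI, so co-broken RSA keys (same N, different e)
# and differently-encoded EC keys map to the same blocklist entry.
# Byte encoding and truncation length are assumptions.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ec, rsa

def param_hash(pub, trunc: int = 16) -> bytes:
    if isinstance(pub, rsa.RSAPublicKey):
        n = pub.public_numbers().n
        data = n.to_bytes((n.bit_length() + 7) // 8, "big")
    elif isinstance(pub, ec.EllipticCurvePublicKey):
        x = pub.public_numbers().x
        data = x.to_bytes((pub.curve.key_size + 7) // 8, "big")
    else:
        raise TypeError("unsupported key type")
    return hashlib.sha256(data).digest()[:trunc]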

--
Hanno Böck
https://hboeck.de/

Matt Palmer

Oct 21, 2024, 5:28:35 AM
to dev-secur...@mozilla.org
On Mon, Oct 21, 2024 at 08:25:23AM +0000, Peter Gutmann wrote:
> Matt Palmer <mpa...@hezmatt.org> writes:
> >I have concerns around doing so, as the data set is very large, and
> >constantly updating.
>
> Ah, I didn't realise it was that big, I thought it'd be a small collection
> that could be turned into a bloom filter.

Rather, it's a *large* collection that could be turned into a bloom
filter. :grin: Last time I ran the numbers, from memory I think I
calculated I'd need a 32MB filter file to get the false-positive rate
down to 0.1%.
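
For reference, the textbook Bloom filter sizing maths (illustrative only;
the real numbers depend on the dataset size and target false-positive rate
at the time):

# Standard Bloom filter sizing: m bits and k hash functions for n items
# at false-positive rate p. Not pwnedkeys' actual parameters.
import math

def bloom_params(n: int, p: float) -> tuple[int, int]:
    m = math.ceil(-n * math.log(p) / math.log(2) ** 2)  # total bits
    k = max(1, round(m / n * math.log(2)))              # hash functions
    return m, k

m, k = bloom_params(2_000_000, 0.001)
print(f"{m / 8 / 2**20:.1f} MiB, {k} hash functions")  # -> 3.4 MiB, 10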

> If there's that many of them the
> data would be interesting, any chance of publishing stats, how many
> compromised keys, how many are X.509, how many are SSH, etc?

There's roughly 2M keys in the pwnedkeys dataset at present. Splitting
by type can *kinda* be done, insofar as I keep track of whether the
format of the key I found was PKCS1, PKCS8, OpenSSH, PuTTY, etc, but
that's not definitive, since OpenSSH reads other formats of key, and
they're all just big numbers anyway, at the end of the day.

Publishing live stats is doable, just yet another of those "round tuit" things.

> >While I'm sure there are *some* things that can't make arbitrary requests,
> >I'm less confident about the "lots" part.
>
> I'm referring to embedded systems, which have no internet access but end up
> seeing keys from who-knows-where.

The trick there is two-fold: having the storage to hold the dataset, and
managing to somehow maintain a reasonably up-to-date dataset to query --
because new keys get added to the dataset all the time.

> That would be another reason to see what's present, although that could also
> be handled in the stats without having to publish actual keys/certs, what are
> the top identifiers used with non-private keys? That could be applied like a
> top-ten bad passwords filter, if you can get people to stay away from the most
> commonly-used insecure keys it's at least some progress.

It'd be possible to identify keys that are published in a number of
different places (I keep track of where keys were found, so I could
group key metadata by the SPKI and count how many distinct URLs I find).
I'll add it to the round tuit bucket, too.

- Matt

Rob Stradling

Oct 22, 2024, 12:22:28 PM
to Matt Palmer, dev-secur...@mozilla.org
Inspired by this thread, I've been integrating the pwnedkeys API into pkimetal.  PR at https://github.com/pkimetal/pkimetal/pull/183.

Whereas all of the other linting tools (including Hanno's badkeys) that are already integrated with pkimetal are enabled by default and deployed locally (in the same Docker container that pkimetal runs from), pwnedkeys is only available as an externally-operated API; so I'm wondering what the most appropriate default configuration should be for pkimetal's pwnedkeys integration:
  • Should it be enabled or disabled by default?
  • What's a sensible default HTTP request timeout when calling the pwnedkeys API?
  • What's a sensible default severity level (error, warning, notice, info) if a pwnedkeys API request times out?
  • What's a sensible default severity level if the pwnedkeys API call misbehaves in any other way?
  • What's a sensible default maximum number of concurrent requests from a pkimetal instance to the pwnedkeys API?
  • Should a pkimetal instance apply a req/sec rate limit on its own outgoing requests to the pwnedkeys API?
I'm keen to hear from Matt, but also from any pkimetal users or potential users.



Corey Bonnell

Oct 22, 2024, 1:25:26 PM
to Rob Stradling, Matt Palmer, dev-secur...@mozilla.org

Hi Rob,

I don’t have any opinion on the other questions, but for:

> Should it be enabled or disabled by default?

For better or worse, it is not uncommon to install linting software on the same host as the CA system itself. In fact, that is how one popular CA software suite invokes external linters: it expects a CLI tool to be installed locally to perform linting. Having a linter running on the CA host dial out to the wider Internet is not a good idea given the security-sensitive nature of the host and the software it is running. For this reason, I think this lint should be disabled by default.

A secondary concern is that external API calls are harder to reason about in terms of performance impact due to variability in API response times.
Thanks,

Corey

Matt Palmer

Oct 22, 2024, 6:54:07 PM
to dev-secur...@mozilla.org
On Tue, Oct 22, 2024 at 05:25:17PM +0000, Corey Bonnell wrote:
> For better or worse, it is not uncommon to install linting software on
> the same host as the CA system itself.

I'll vote for "worse", for whatever it's worth.

> In fact, that is how one popular CA software suite invokes external
> linters: it expects a CLI tool to be installed locally to perform
> linting.

Given that pkimetal runs as an HTTP service, the "CLI tool" that the CA
software runs would need to be a `curl | jq` (or similar) shell script.
That would remove the need for pkimetal itself to be running on the
same machine even for that CA software suite.
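
As a sketch of what such a shim could look like (Python rather than shell;
the /lintcert endpoint and "b64input" field are assumptions from my reading
of pkimetal, so check the pkimetal documentation before relying on them):

# Hypothetical CLI shim: read a PEM certificate on stdin, POST it to a
# pkimetal instance running elsewhere, print the findings. The endpoint
# and request fields are assumptions, not pkimetal's confirmed API.
import sys
import requests

PKIMETAL = "http://pkimetal.internal:8080"  # hypothetical internal host

pem = sys.stdin.read()
b64 = "".join(line for line in pem.splitlines()
              if line and not line.startswith("-----"))
resp = requests.post(f"{PKIMETAL}/lintcert",
                     data={"b64input": b64, "format": "json"},
                     timeout=5)
resp.raise_for_status()
for finding in resp.json():
    print(finding)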

> Having a linter running on the CA host dial out to the wider Internet
> is not a good idea given the security-sensitive nature of the host and
> the software it is running.

Having *anything* running on the CA host itself dial out to the wider
Internet seems like a recipe for giving your SOC a regular panic attack.

> A secondary concern is that external API calls are harder to reason
> about in terms of performance impact due to variability in API
> response times.

I'm not averse to providing the pwnedkeys dataset in other forms, if the
live-query-over-HTTP model is the only barrier to adoption by someone
who will make use of the data. Hell, I can provide a replication slot
on the PostgreSQL database (that you can feed into a machine in your
infrastructure) if that'll work. But nobody has ever actually reached
out to discuss how to come up with a design that meets both parties'
needs. For example, every time someone says "why not just provide an
SPKI dump?", I explain why that won't work without additional
engineering to ensure currency of the dataset, and then... crickets.

- Matt

Matt Palmer

Oct 22, 2024, 7:30:07 PM
to dev-secur...@mozilla.org
On Tue, Oct 22, 2024 at 04:22:14PM +0000, Rob Stradling wrote:
> * What's a sensible default HTTP request timeout when calling the
> pwnedkeys API?

Unless you're on an absolutely backwater Internet connection, if you
haven't got a response within a second or so, you're *probably* not
going to get one.

To give you an indication of actual service times, over the past two
weeks, HTTP requests that return an attestation (ie a GET that returns a
200) are consistently between 37 and 38 milliseconds. If you're willing
to trust me (highly not recommended) you can do a HEAD, which comes in
at between 20 and 21ms. A GET or HEAD that returns a 404 (which should
be the overwhelming majority of responses, unless you're extremely
unlucky) sits happily at about 12ms.

In order to automatically account for terribad network latency, I'd
suggest just collecting request time stats and setting the timeout at
something like MAX(1s, 2*mean, 3sigma).
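
In code, that heuristic is just the following (a sketch; the sample values
are made up to roughly match the latencies mentioned above):

# Suggested timeout rule: never below one second, and otherwise generous
# relative to observed request-duration statistics.
import statistics

def adaptive_timeout(samples: list[float]) -> float:
    """samples: recent request durations in seconds."""
    mean = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    return max(1.0, 2 * mean, 3 * sigma)

print(adaptive_timeout([0.037, 0.012, 0.021, 0.038]))  # -> 1.0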

> * What's a sensible default severity level (error, warning, notice,
> info) if a pwnedkeys API request times out?

Given that failing to check pwnedkeys is equivalent to the current
status quo, info seems reasonable.

> * What's a sensible default severity level if the pwnedkeys API call
> misbehaves in any other way?

In terms of the lint result, info (for the same reasons as timeout).

Log entries for 4xx (other than 404/429) should be a fairly high
severity, as those *should* indicate a bug in pkimetal's request logic
that needs to be fixed. For example, if/when the current API were to be
phased out, I intend to serve all 410s for an extended period, which
this logic would detect and log, notifying the operator that they need
to upgrade.

> * What's a sensible default maximum number of concurrent requests
> from a pkimetal instance to the pwnedkeys API?

Go for your life. I'd be interested in a good load test. The per-IP
rate limit *should* shut you down before you can break anything; anyone
who wants an API key with a higher rate limit is welcome to get in touch.

> * Should a pkimetal instance apply a req/sec rate limit on its own
> outgoing requests to the pwnedkeys API?

No need -- as long as you respect the 429 (including the Retry-After)
you shouldn't cause any damage (and if you do, that's my fault, not
yours).
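
Putting that advice together, client logic along these lines should be safe
(a sketch; the lookup-by-SPKI-hash URL scheme reflects my understanding of
the current v1 API rather than anything authoritative):

# Well-behaved pwnedkeys lookup: 200 = known-compromised, 404 = not in
# the dataset, 429 = back off per Retry-After, anything else (e.g. a
# future 410) = fail loudly so the operator notices.
import time
import requests

def check_pwnedkeys(spki_sha256_hex, timeout=1.0):
    resp = requests.get(f"https://v1.pwnedkeys.com/{spki_sha256_hex}",
                        timeout=timeout)
    if resp.status_code == 200:
        return True           # attestation returned: key is compromised
    if resp.status_code == 404:
        return False          # no record of compromise
    if resp.status_code == 429:
        time.sleep(int(resp.headers.get("Retry-After", "1")))
        return None           # caller should retry
    resp.raise_for_status()   # other 4xx/5xx: log at high severity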

- Matt

Peter Gutmann

Oct 22, 2024, 9:55:08 PM
to Matt Palmer, dev-secur...@mozilla.org
Matt Palmer <mpa...@hezmatt.org> writes:

>For example, every time someone says "why not just provide an SPKI dump?", I
>explain why that won't work without additional engineering to ensure currency
>of the dataset, and then... crickets.

It doesn't have to be perfect, it just has to be good enough, and in
particular better than what we have now, which is nothing at all. Thus my
earlier comment that even a top-ten would be a good start, particularly if
that covers 90% of use cases from widely-used software, i.e. prompts users to
use something other than the hardcoded out-of-the-box key in the app.

Peter.

Peter Gutmann

Oct 22, 2024, 10:03:54 PM
to Matt Palmer, dev-secur...@mozilla.org
Matt Palmer <mpa...@hezmatt.org> writes:

>There's roughly 2M keys in the pwnedkeys dataset at present. Splitting by
>type can *kinda* be done, insofar as I keep track of whether the format of
>the key I found was PKCS1, PKCS8, OpenSSH, PuTTY, etc, but that's not
>definitive, since OpenSSH reads other formats of key, and they're all just
>big numbers anyway, at the end of the day.

Switching hats to the one that looks a lot like Sherlock Holmes' deerstalker,
it'd be really interesting to see stats for this since I had no idea there
were that many compromised keys out there. I think this would be quite
interesting to security researchers depending on how much data you've got on
the keys: breakdown by key types, arrival rate (is it a steady trickle from
leaks or does it come in bursts due to large-scale compromises), etc. Heck,
just anything to help us understand key leaks/compromises a bit more; until
now I didn't even know how bad it was.

Peter.

Matt Palmer

Oct 24, 2024, 8:09:24 PM
to dev-secur...@mozilla.org
On Wed, Oct 23, 2024 at 02:03:45AM +0000, Peter Gutmann wrote:
> Matt Palmer <mpa...@hezmatt.org> writes:
> >There's roughly 2M keys in the pwnedkeys dataset at present. Splitting by
> >type can *kinda* be done, insofar as I keep track of whether the format of
> >the key I found was PKCS1, PKCS8, OpenSSH, PuTTY, etc, but that's not
> >definitive, since OpenSSH reads other formats of key, and they're all just
> >big numbers anyway, at the end of the day.
>
> Switching hats to the one that looks a lot like Sherlock Holmes' deerstalker,
> it'd be really interesting to see stats for this since I had no idea there
> were that many compromised keys out there.

It's really pretty wild, isn't it? And I'm not even plumbing many
sources of keys -- there are a bunch of places I've wanted to go looking
for keys for years, but lack of time and other resources have meant
those dreams have gone as-yet unfulfilled.

There's also tens of thousands of encrypted keys, many of which are
trivially crackable, which I was chipping away at a few years ago when I
had the support of my then-primary client to throw some hardware at the
problem. (Anyone with a spare OpenCL rig they'd be happy to donate,
please get in touch!)

> I think this would be quite interesting to security researchers
> depending on how much data you've got on the keys: breakdown by key
> types, arrival rate (is it a steady trickle from leaks or does it come
> in bursts due to large-scale compromises), etc.

Well, I don't know if it's actually all that interesting to security
researchers, since I've never had anyone ask in the six years I've been
running Pwnedkeys. But yes, I've got records of every time I find a
key, including algorithm, bits/curve (as appropriate), when it was
found, where it was found, how it was found, what format it was in, key
passphrase (for cracked keys), and anything else that seemed potentially
useful when I built it.

> Heck, just anything to help us understand key leaks/compromises a bit
> more, until now I didn't even know how bad it was.

Maybe as a first step I just need to put a big "1,994,495[*] COMPROMISED
KEYS FOUND" on the Pwnedkeys frontpage...

I've got a whole load of research ideas floating around, but not the
time to pursue them. For example, I had an experiment design for a
measurement of the real-world effectiveness of revocation, but couldn't
justify the time commitment to do the work relative to other
(money-making) work. I don't suppose you've got a spare part-time
research fellowship in your back pocket?

- Matt

[*] Taken from `SELECT COUNT(*) FROM pwnedkeys` as of the time of
writing the above paragraph.

Rob Stradling

Oct 25, 2024, 6:59:38 AM
to Corey Bonnell, Matt Palmer, dev-secur...@mozilla.org
Thanks Corey.  I'll update the PR to disable pwnedkeys checks by default.



Rob Stradling

Oct 25, 2024, 7:00:36 AM
to Matt Palmer, dev-secur...@mozilla.org
Thanks Matt.

> I'm not averse to providing the pwnedkeys dataset in other forms, if the
> live-query-over-HTTP model is the only barrier to adoption by someone
> who will make use of the data.

You have my attention.  🙂

I'd like to ship the pwnedkeys dataset with pkimetal.  I think it would be uncontroversial for pkimetal to enable local pwnedkeys checks by default.

I think bundling the actual keys and signed evidence of compromise with pkimetal would be overkill and that that full dataset would be prohibitively large, so I would be looking to create a system that downloads all of that information, verifies all the signatures, and produces a compact list of pwned SPKI hashes that would be bundled with pkimetal.  For transparency, I would want to make the download-and-verify process open-source so that anyone can reproduce it if they want to.
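
The final step of that pipeline could be as simple as the following sketch
(it assumes the download and signature verification have already happened
upstream, leaving a directory of verified public keys; the file and
directory names are made up):

# Emit a compact, sorted, deduplicated list of SHA-256 SPKI hashes from a
# directory of already-verified compromised public keys in PEM format.
# "verified-keys/" and the output filename are hypothetical.
import hashlib
import pathlib
from cryptography.hazmat.primitives.serialization import (
    Encoding, PublicFormat, load_pem_public_key)

hashes = set()
for path in pathlib.Path("verified-keys").glob("*.pem"):
    pub = load_pem_public_key(path.read_bytes())
    spki = pub.public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    hashes.add(hashlib.sha256(spki).hexdigest())

pathlib.Path("pwned-spki-hashes.txt").write_text(
    "\n".join(sorted(hashes)) + "\n")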



Rob Stradling

Oct 25, 2024, 7:04:32 AM
to Matt Palmer, dev-secur...@mozilla.org
Thanks for those details, Matt.  I'll do some further work to make sure I'm handling HTTP response codes as you suggest.

Although pkimetal will disable online pwnedkeys checks by default, I am planning to enable online pwnedkeys checks in the public pkimetal instances (https://pkimet.al and https://dev.pkimet.al) and in the pkimetal instances that Sectigo uses for preissuance linting.



Matt Palmer

Oct 25, 2024, 5:47:30 PM
to dev-secur...@mozilla.org
On Fri, Oct 25, 2024 at 11:00:29AM +0000, Rob Stradling wrote:
> Thanks Matt.
>
> > I'm not averse to providing the pwnedkeys dataset in other forms, if the
> > live-query-over-HTTP model is the only barrier to adoption by someone
> > who will make use of the data.
>
> You have my attention. 🙂
>
> I'd like to ship the pwnedkeys dataset with pkimetal. I think it would be uncontroversial for pkimetal to enable local pwnedkeys checks by default.
>
> I think bundling the actual keys and signed evidence of compromise with pkimetal would be overkill and that that full dataset would be prohibitively large, so I would be looking to create a system that downloads all of that information, verifies all the signatures, and produces a compact list of pwned SPKI hashes that would be bundled with pkimetal. For transparency, I would want to make the download-and-verify process open-source so that anyone can reproduce it if they want to.

The important requirement I have for use of the Pwnedkeys dataset is
that it *must* be kept fresh. New keys are constantly being discovered,
and I don't believe it's unreasonable for anyone that says "I am
checking for compromised keys using Pwnedkeys" to always be checking
against something that closely approximates the then-current set of
known-compromised keys. That is why the current HTTP query approach is
what I've initially rolled out: I can be confident that everyone doing a
query is checking the current dataset as of the time of that query (or
my monitoring will tell me there's a problem and I can fix it ASAP).

Your proposed approach of pre-bundling, as I understand it, doesn't
appear to meet that requirement, as it would seem to capture a
point-in-time snapshot of the Pwnedkeys dataset. This would become
progressively out-of-date, and only come back towards currency when the
bundling process was re-run *and* the operator's local pkimetal
installation was updated. I have my doubts that operators would be
updating their pkimetal installations, say, hourly, and even that is a
much larger discovery -> disclosure delay than is currently provided by
the HTTP query mechanism.

- Matt

Rob Stradling

Oct 30, 2024, 7:08:33 AM
to Matt Palmer, dev-secur...@mozilla.org
Hi Matt.

I completely understand your strong preference for the freshest data to be used, but I'm not convinced that it's wise to make the perfect the enemy of the good.

We're talking about situations where Pwnedkeys is currently not being used at all, due to the requirement to call an API over the internet.  Yes, I am suggesting pre-bundling a point-in-time snapshot of the Pwnedkeys dataset with pkimetal, which would become progressively out-of-date.  Are you suggesting that checking against an out-of-date list is no better than doing no checks at all?



Peter Gutmann

Oct 30, 2024, 7:23:51 AM
to Matt Palmer, dev-secur...@mozilla.org
(I had this queued in my drafts folder; for some reason it didn't get sent.)

Matt Palmer <mpa...@hezmatt.org> writes:

>Your proposed approach of pre-bundling, as I understand it, doesn't appear to
>meet that requirement, as it would seem to capture a point-in-time snapshot
>of the Pwnedkeys dataset.

Le mieux est l'ennemi du bien (the best is the enemy of the good). Given that
attackers seem to have no problems getting their hands on high-value Windows
code signing keys, I would imagine they have even less trouble getting as many
random noddy keys used to access a single server somewhere as they want. So
even with an always-online constantly-updated dataset what you're getting is a
best-effort subset of all compromised keys, which in turn means that while it
would be nice to have access to an up-to-the-minute dataset, in practice
access to even an older subset is still pretty good value.

In particular I see it not as a magic bullet to deal with the vast number of
likely-compromised but not necessarily known keys, but more as a hygiene
measure to catch test keys inadvertently used in production, that sort of
thing. Like bad passwords, you're never going to be able to enumerate every
possible weak password, but even rejecting the top ten will deal with the
low-hanging fruit, force attackers to work a bit harder, and incentivise
users to not use the weakest possible passwords out there.

Peter.

Peter Gutmann

Oct 30, 2024, 7:33:39 AM
to Matt Palmer, dev-secur...@mozilla.org
Matt Palmer <mpa...@hezmatt.org> writes:

>Well, I don't know if it's actually all that interesting to security
>researchers, since I've never had anyone ask in the six years I've been
>running Pwnedkeys.

That could be because it's been pretty well under the radar until now; I knew
it existed but that was about it, and I've never seen it mentioned in research
publications.

>But yes, I've got records of every time I find a key, including algorithm,
>bits/curve (as appropriate), when it was found, where it was found, how it
>was found, what format it was in, key passphrase (for cracked keys), and
>anything else that seemed potentially useful when I built it.

Very nice!

>I've got a whole load of research ideas floating around, but not the time to
>pursue them. For example, I had an experiment design for a measurement of
>the real-world effectiveness of revocation, but couldn't justify the time
>commitment to do the work relative to other (money-making) work. I don't
>suppose you've got a spare part-time research fellowship in your back pocket?

Nothing, sorry, I work in industry despite the .edu address. However, if you
look at the arxiv.org collection there are quite a few papers there which are
really just "here's a data dump, see if you find it useful" (with a lot of
padding commentary text to make them longer), so what you've got above
shouldn't preclude publication in some form or other.

(Not saying that you must do this, just pointing out that it sounds like
there's enough there to be posted somewhere.)

Peter.
