HTTP request blocking by CAs for CRL, CPS, AIA caIssuers

Dexter Castor Döpping

Jan 19, 2026, 4:55:55 PM
to dev-secur...@mozilla.org
Hi all,

Recently I wrote a script to get resources hosted by CAs. For some CAs
the requests were blocked by a WAF.

In one case (http://www.microsoft.com/pkiops/*) I think the blocking
happens because of the user-agent header, probably because it's a
generic one (python-requests/2.32.5). Changing the UA header immediately
stops my requests from getting blocked. [I reported this and they're
looking into changing their configuration to not block this UA.]
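
For reference, the failing request was essentially just this (a minimal
sketch using python-requests; the exact path is illustrative, not the
one I tested):

    import requests

    URL = "http://www.microsoft.com/pkiops/crl/example.crl"  # illustrative

    # Default User-Agent (python-requests/2.32.5): blocked by the WAF.
    blocked = requests.get(URL, timeout=10)

    # Same request with a non-generic User-Agent: not blocked.
    ok = requests.get(URL, timeout=10,
                      headers={"User-Agent": "crl-checker/0.1"})

    print(blocked.status_code, ok.status_code)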

In another case (http://ca-repository.desc.gov.ae/*) I don't know why
I'm being blocked. Perhaps because of the user-agent header, or maybe
because I exceeded a rate limit. Waiting some time or changing request
details hasn't fixed this. I suspect I've been IP-blocked, but I have
no idea whether the block is temporary or indefinite.

Section 4.9.7 of the BRs says the following: CRLs MUST be available via
a publicly-accessible HTTP URL (i.e., "published").

I'm wondering if blocking requests breaks this "publicly-accessible"
requirement. Perhaps it depends on why requests are blocked? Rate limits
make sense to me, but geo blocking does not. Blocking user agents may be
a grey area.

The Baseline Requirements also say that certain certificates must
contain a CRL URL. But is there a requirement for the CRLs to be
"publicly-accessible" through the URLs embedded in the certificate?

RFC 5280 doesn't really clarify things. It says that the CRL/AIA
caIssuers URIs embedded in certs must "point to" certain file types, but
that language seems vague.

The strictest interpretation is that the server should always return 200
OK along with the proper file.

Another interpretation is that status code 200 tells the client it is
receiving the resource "pointed to" by the URI, so a 200 response must
contain the proper file. Other data can be returned with different
status codes, e.g. a deny message with status code 403. (Perhaps status
code 404 cannot be used, because it tells the client that the URI
doesn't "point to" anything.)

In the loosest interpretation, any status code can effectively be used
for anything, e.g. HTML error messages with status code 200. (That's
what Microsoft's server does when blocking requests.)
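
To make that middle interpretation concrete, here's roughly how I'd
check an endpoint against it (a sketch using python-requests and
pyca/cryptography; the classification encodes my reading above, not
anything the standards actually require):

    import requests
    from cryptography import x509

    def classify_crl_response(url: str) -> str:
        resp = requests.get(url, timeout=10)
        if resp.status_code == 200:
            try:
                # A 200 must carry the file the URI "points to".
                x509.load_der_x509_crl(resp.content)
                return "ok: 200 with a parseable CRL"
            except ValueError:
                return "violation: 200 with a non-CRL body (e.g. HTML)"
        if resp.status_code == 404:
            return "questionable: 404 says the URI points to nothing"
        # Other codes (403 etc.) may carry a deny message.
        return f"blocked or unavailable: status {resp.status_code}"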

I think it would be good to have guidelines on the grounds on which CAs
may or may not block requests, whether they may (temporarily) IP-block
clients, and what the server should respond with when a request is
blocked (e.g., which status codes, and whether to include a block
reason and a contact method).

What do other people think? What do the standards currently say about
CRL/AIA caIssuers availability? And should they be changed/clarified? I
saw that DigiCert plans to start that discussion regarding caIssuers AIA
availability on the CABF, which is a good development:
https://bugzilla.mozilla.org/show_bug.cgi?id=2009491#c2

Kind regards,
Dexter

Peter Gutmann

Jan 20, 2026, 4:57:42 AM
to Dexter Castor Döpping, dev-secur...@mozilla.org
Dexter Castor Döpping <dexter.c...@gmail.com> writes:

>In another case (http://ca-repository.desc.gov.ae/*) I don't know why I'm
>being blocked. Perhaps because of the user-agent header, or maybe because I
>exceeded a rate limit.

That's an interesting point: at what stage do you decide you're being
DoS'd and start rate-limiting? I was pleasantly surprised recently,
while debugging a resource-exhaustion issue on an embedded device, that
I could run a CA-cert fetch in a loop without being rate-limited (thank
you to whoever set up cacerts.digicert.com for not rate-limiting :-).
However, since it could turn into a DoS (I only did around 200 fetches
with a wait between each one, so barely a blip), presumably there would
need to be some text in any guidelines about when it is permissible to
rate-limit.
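
(For reference, the loop was essentially just the following, sketched
in Python here for brevity; the certificate path is illustrative, not
the exact one I used:)

    import time
    import requests

    URL = "http://cacerts.digicert.com/SomeIntermediate.crt"  # illustrative

    for _ in range(200):
        resp = requests.get(URL, timeout=10)
        resp.raise_for_status()  # a 403/429 here would mean rate-limiting
        time.sleep(1)            # deliberate wait between fetches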

Peter.

Hanno Böck

Jan 20, 2026, 5:17:17 AM
to dev-secur...@mozilla.org
Hi,

On Mon, 19 Jan 2026 22:55:47 +0100
Dexter Castor Döpping <dexter.c...@gmail.com> wrote:

> Recently I wrote a script to get resources hosted by CAs. For some
> CAs the requests were blocked by a WAF.

I've been hit by this before, and I'd very much appreciate it if we
could have some basic sanity rules.

It's understandable that some form of abuse prevention takes place
(e.g., rate limits), but I don't see how blocking certain user agents
is acceptable. It should be possible to validate certificates based on
the information in the certificate, and it should not be up to the CA
to decide which software is allowed to do this.

While we're at it, I also wonder whether there's an expectation to send
CRLs and issuer certs with correct MIME types. (The correct MIME types
are application/pkix-crl for CRLs and application/pkix-cert for issuer
certificates. There are a couple of obsolete or unofficial CRL MIME
types in use, e.g. application/x-x509-crl or application/x-pkcs7-crl.)
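
(A quick way to see what a CA actually sends, sketched with
python-requests; deriving the expected type from the URL suffix is of
course only a heuristic:)

    import requests

    # Expected types per RFC 5280, keyed on URL suffix as a heuristic.
    EXPECTED = {".crl": "application/pkix-crl",
                ".crt": "application/pkix-cert"}

    def check_content_type(url: str) -> None:
        resp = requests.get(url, timeout=10)
        got = resp.headers.get("Content-Type", "<missing>").split(";")[0]
        want = next((t for s, t in EXPECTED.items()
                     if url.lower().endswith(s)), None)
        print(f"{url}: got {got.strip()}" +
              (f", expected {want}" if want else ""))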

--
Hanno Böck - Independent security researcher
https://itsec.hboeck.de/
https://badkeys.info/

Adriano Santoni

Jan 20, 2026, 5:36:40 AM
to dev-secur...@mozilla.org

There clearly is such an expectation, since RFC 5280 states that those content types "SHOULD" be specified when serving CRLs and CA certificates, respectively.

-- Adriano

Roman Fischer

Jan 30, 2026, 2:02:51 AM
to dev-secur...@mozilla.org, Hanno Böck
One thing to consider here is that some CAs may use commercial CDN providers to serve some of the information mentioned. These CDNs often also provide DDoS protection. However, the decision about when access is considered an attack, and which requests are then blocked or let through, is typically made by the CDN/DDoS service provider. Imposing requirements, e.g. not blocking based on User-Agent, might be difficult or impossible to implement in this kind of setup.

Regards
Roman

Hanno Böck

Jan 30, 2026, 4:05:35 AM
to 'Roman Fischer' via dev-security-policy@mozilla.org, Roman Fischer
On Thu, 29 Jan 2026 23:02:51 -0800 (PST)
"'Roman Fischer' via dev-secur...@mozilla.org"
<dev-secur...@mozilla.org> wrote:

> One thing to consider here is that some CAs may use commercial CDN
> providers to serve some of the information mentioned. These CDNs
> often also provide DDoS protection. However, the decision about when
> access is considered an attack, and which requests are then blocked
> or let through, is typically made by the CDN/DDoS service provider.
> Imposing requirements, e.g. not blocking based on User-Agent, might
> be difficult or impossible to implement in this kind of setup.

I think it is entirely reasonable to ask that CAs choose service
providers that don't interfere with providing basic functionality that
is part of the requirements of being a CA.
There's some legitimacy to DDoS protection, but you should be able to
reasonably justify that you do it in a way that does not obviously
generate false positives.

Ultimately, if you use a CDN service that focuses on a "browser only /
we consider common non-browser clients an attack by default" scenario,
I'd argue that service is simply not suitable for the job of serving
CRLs.

Matt Palmer

Feb 1, 2026, 6:35:51 PM
to dev-secur...@mozilla.org
On Thu, Jan 29, 2026 at 11:02:51PM -0800, 'Roman Fischer' via dev-secur...@mozilla.org wrote:
> One thing to consider here is that some CAs may use commercial CDN
> providers to serve some of the information mentioned. These CDNs often
> also provide DDoS protection. However, the decision about when access
> is considered an attack, and which requests are then blocked or let
> through, is typically made by the CDN/DDoS service provider. Imposing
> requirements, e.g. not blocking based on User-Agent, might be difficult
> or impossible to implement in this kind of setup.
> impossible to implement in this kind of setup.

CAs choose which service providers to use. If they choose a service
provider which is not capable of behaving in a manner appropriate for
the service the CA requires, then the CA should choose a different
service provider. If the CA does not choose a different service
provider, for whatever reason, then it is reasonable that the
consequences of that choice be borne by the CA, not by the community.

- Matt

Roman Fischer

Feb 2, 2026, 3:45:59 AM
to dev-secur...@mozilla.org
I completely agree that CAs remain responsible for providing secure and
available certificate status information to the WebPKI ecosystem.
DDoS protection is something that most CAs can't do without external
service providers (mitigating Tbit/s attacks is hard), and these DDoS
protections are usually based on multiple signals whose internal
workings change constantly. I think it's simply a residual risk that
some clients may be wrongly blocked by DDoS mitigation in order to keep
the service available for the majority of the ecosystem. I also agree
that blocking solely on the user agent is not a good strategy.

Kind regards
Roman