What if we could deliver OCSP responses via DNS?
* Add a dnsAddress identifier in the OCSP section of AIA, so you can list an HTTP URI, and also a DNS location where the response could be retrieved.
* DNS requests are often allowed through restrictive firewalls which block outbound port 80 packets.
* DNS has built in distributed caching with TTL support.
* DNS over UDP can be served via AnyCast DNS providers without any fear of BGP-related interruptions to TCP communications.
* If signed by the issuing CA's key instead of a delegated responder, then the response sizes fit under 700 bytes, still small enough for UDP DNS (right?)
* AnyCast UDP DNS responses would be many times faster than non-AnyCast HTTP TCP connections to far-away IPs (it's hard to AnyCast OCSP because some clients POST their requests, which makes hosting OCSP on a CDN difficult).
* The distributed caching could remove enough load to let OCSP responders shorten their OCSP validity TTLs (an overall improvement in OCSP response lifespans?)
* It might even be possible to build 3rd party non-CA DNS infrastructures that act as mirror services (send out 2 requests simultaneously, one to the CA and one to the mirror service, proceed if either response comes back "good" but fail if either response comes back "revoked" -- or just send requests to the mirror service only)
* Bolstered by DNSSEC, OCSP responses could potentially be "verified" to have come from the CA, which could help eliminate the current risk of replay attacks or MITM sending back 500 errors (faking the death of the origin OCSP server)
New browsers could prefer DNS and fall back to HTTP. Old browsers could continue to fetch OCSP via HTTP only.
Paul
Would it be preferable to do this through DNSSEC-secured zones?
I like the idea of pulling this information closer to the place of use
and effectively caching it, which would help us reduce the amount of clutter
(reverse proxies and special validation clients) that needs to be kept around
for performance reasons.
I like the idea of making it the producer's responsibility to provide
this info - so a private CA secured by DNSSEC could also provide the DNS
entries - no need for an additional, high-availability service, just
change the agreement with the local DNS provider for this feature.
I wonder what security issues might be raised.
I am interested in talking about this more, here or elsewhere. Thanks, ==mwh
Michael Helm
ESnet/LBNL
What would be the differentiating features of this proposal vs. DANE?
(I suspect there are some, but I think discussion of the idea would
proceed better once they are identified.)
Gerv
> On 29/04/11 18:32, Paul Tiemann wrote:
>> (Disclaimer: I haven't thought this all the way through.)
>>
>> What if we could deliver OCSP responses via DNS?
>
> What would be the differentiating features of this proposal vs. DANE?
>
> (I suspect there are some, but I think discussion of the idea would proceed better once they are identified.)
From a technical standpoint I think they're two non-competing proposals.
DANE enables per-connection trust-anchor discovery or per-connection cert pinning, but doesn't deal with revocation data.
OCSP over DNS would just be another way to deliver OCSP responses: a way to leverage the already-built distributed caching infrastructure of the DNS to improve the scalability and availability of OCSP. DDoS risk was probably not factored in when the OCSP spec was put together in 1998 or 1999. As support for that theory, consider the fact that nonces were built into the spec (an anti-caching mechanism!). What if DNS had been designed with an anti-caching bias?
OCSP over DNS would have some characteristics similar to OCSP stapling, and even some advantages over stapling, and it would not require changing code in as many places.
* OCSP responses would be cached by the user's DNS resolver -- masking the requester at least somewhat, which is similar to OCSP stapling.
* OCSP over DNS wouldn't hurt the TLS connection's TCP performance by putting too much data into the first few packets during the SSL handshake. This is a concern with OCSP stapling: the stapled response enlarges the server's first flight of data, which can overflow the initial TCP congestion window and cost an extra round trip.
* To get OCSP stapling off the ground, the servers and the browsers both have to support it, while the CAs don't need to do anything. OCSP over DNS would require work from CAs and browsers, but none from the servers.
For CAs the incentive to support OCSP over DNS would be a decrease in their overall traffic load, and an increase in their ability to deal with DDoS attacks, which I believe are going to become a big problem as soon as a browser turns on the security.OCSP.require=true switch. OCSP over DNS could be served via AnyCast DNS providers with much more infrastructure than CAs should be investing in themselves (does any CA have 20+ POPs on an AnyCast network? Would going through that much work be a distraction from the main job of CAs?)
For browsers the incentives might be:
* Much faster OCSP validation checks. Under 5 ms is possible if the response is cached in the user's local DNS resolver.
* Improved privacy for users compared to OCSP over HTTP.
* Having more than 1 mechanism for retrieving OCSP responses. Easy fall-back to OCSP over HTTP if DNS fails. (Maybe require that one fallback mechanism be tried before the security.OCSP.require failure -- so OCSP over HTTP then CRL, or OCSP over DNS then OCSP over HTTP, which could avoid downloading a potentially 100 KB to 1 MB CRL file? A rough sketch of that ordering follows this list.)
* DNS is more often allowed through restrictive firewalls than outbound port 80. It is also easier for firewall admins to allow access to their local resolver than to hunt down all the IP addresses of all their CA certificate OCSP responders.
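A rough Python sketch of that retrieval order (the fetch helpers here are hypothetical stand-ins -- no OCSP-over-DNS record type exists today):

    class OCSPFetchError(Exception):
        """A retrieval mechanism could not produce a definitive response."""

    def fetch_ocsp_via_dns(cert_id):
        # Hypothetical: would query the (to-be-defined) OCSP record in DNS.
        raise OCSPFetchError("no OCSP RRTYPE is defined today")

    def fetch_ocsp_via_http(cert_id):
        # Stub: would GET/POST to the OCSP URI from the certificate's AIA.
        raise OCSPFetchError("stub")

    def check_status(cert_id):
        # Prefer DNS, fall back to HTTP; fail only if every mechanism fails.
        for fetch in (fetch_ocsp_via_dns, fetch_ocsp_via_http):
            try:
                return fetch(cert_id)  # 'good' or 'revoked' is definitive
            except OCSPFetchError:
                continue               # mechanism unavailable; try the next
        raise OCSPFetchError("all retrieval mechanisms failed")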
(It might still have fatal flaws -- but I'm encouraged that they weren't immediately announced.)
Paul
From the DANE charter -
"discovering and authenticating public keys which are associated
with a service located at a domain name."
The most liberal thing one could say is that validation is an
extension of this idea - maybe the charter would need revision,
or maybe not. The intention of DANE is: I put stuff in the
DNS service, and you authenticate it because (a) the zone has
DNSSEC keys and you have validated it, and (b) I wouldn't have
put stuff in my zone if it wasn't mine (or at least it's my
responsibility to police this data).
DANE doesn't say anything (yet) about the trustworthiness or value of this
"stuff", and this is one of the arguments that still pops up on the mailing
list, though I think the current majority opinion is to keep it out of scope.
This means that trust services like CAs or maybe other systems still
have a value. They can issue certificates and manage level of assurance
or other trust assertions beyond what the DNS records are able to express.
I was thinking about it as a way of providing a trusted OCSP responder
without getting locked into managing keys for a trusted responder, but
this new idea is intriguing.
It may also provide a method for revoking a self-signed certificate -
I don't think this problem has been resurrected in the DANE list
but perhaps it should be.
Thanks, ==mwh
Paul, thanks for posting this idea and for starting to think through the
issues. (I briefly touched on the same idea in a private IM with my boss a
few weeks ago, but we just hadn't got around to thinking it through in any
detail yet).
<snip>
> * DNS over UDP can be served via AnyCast DNS providers without any fear of
> BGP-related interruptions to TCP communications.
> * If signed by the issuing CA's key instead of a delegated responder, then
> the response sizes fit under 700 bytes, still small enough for UDP DNS
> (right?)
I don't claim to know very much about DNS, but I'll have a go at answering
that question...
https://secure.wikimedia.org/wikipedia/en/wiki/Domain_Name_System says:
"DNS queries consist of a single UDP request from the client followed by a
single UDP reply from the server. The Transmission Control Protocol (TCP) is
used when the response data size exceeds 512 bytes..."
At Comodo, we use non-delegated OCSP Response signing. For RSA-2048 signing
keys, these Responses are typically ~470 bytes (when the certificate is
"good") or ~490 bytes (when the certificate is "revoked").
Looking at the "DNS resource records" section of the wikimedia page mentioned
above, the TYPE+CLASS+TTL+RDLENGTH fields take up 10 bytes, which leaves ~30
bytes for the NAME field.
"NAME is the fully qualified domain name of the node in the tree. On the wire,
the name may be shortened using label compression where ends of domain names
mentioned earlier in the packet can be substituted for the end of the current
domain name."
Another factor is the choice of hash algorithm used for the issuerNameHash,
issuerKeyHash and responderID->byKey. Today, only SHA-1 tends to be
supported/used. A switch to SHA-256 would add a further 3x((256-160)/8)=36
bytes.
So yes, as long as the NAME field is no longer than ~30 bytes, and for as long
as RSA-2048 is considered secure, and for as long as SHA-1 may be used as
described above...Non-delegated OCSP Responses should be small enough (<512
bytes) for UDP DNS. But only just small enough!
(Switching to ECC would help reduce the size. Perhaps we'll all do that
before we're otherwise forced to switch to RSA-3072 or RSA-4096).
<snip>
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online
Surely this is a feature of the DNS server? As in, CAs could write DNS
servers which replied over UDP even when that limit was exceeded?
Perhaps some firewalls would truncate it, but then the browser would
just fall back to normal OCSP.
Gerv
Do we need that, or can we use this?
http://www.dns-sd.org/ServiceTypes.html
> * If signed by the issuing CA's key instead of a delegated
> responder, then the response sizes fit under 700 bytes, still
> small enough for UDP DNS (right?)
(See other message.)
> New browsers could prefer DNS, and fallback to http.
If a browser is asked for https://www.foo.com, it can send out DNS
requests for www.foo.com and _ocsp._tcp.www.foo.com at the same time.
Of course, this might require that https://www.foo.com uses the same
certificate no matter who is accessing it and from where...
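A rough sketch of that parallel lookup (lookup_ocsp_record is an assumed
helper -- no _ocsp._tcp record convention actually exists today):

    import socket
    from concurrent.futures import ThreadPoolExecutor

    def lookup_ocsp_record(host):
        # Hypothetical: would query _ocsp._tcp.<host> for an OCSP response.
        raise NotImplementedError("_ocsp._tcp." + host)

    def prefetch(host):
        # Fire the address lookup and the OCSP lookup at the same time.
        with ThreadPoolExecutor(max_workers=2) as pool:
            addr_future = pool.submit(socket.getaddrinfo, host, 443)
            ocsp_future = pool.submit(lookup_ocsp_record, host)
            addrs = addr_future.result()     # needed before connecting
            try:
                ocsp = ocsp_future.result()  # may arrive during the handshake
            except NotImplementedError:
                ocsp = None                  # fall back to ordinary OCSP later
        return addrs, ocsp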
Gerv
> On 03/05/11 09:01, Rob Stradling wrote:
>> https://secure.wikimedia.org/wikipedia/en/wiki/Domain_Name_System says:
>> "DNS queries consist of a single UDP request from the client followed by a
>> single UDP reply from the server. The Transmission Control Protocol (TCP) is
>> used when the response data size exceeds 512 bytes..."
>
> Surely this is a feature of the DNS server? As in, CAs could write DNS servers which replied over UDP even when that limit was exceeded? Perhaps some firewalls would truncate it, but then the browser would just fall back to normal OCSP.
Hmm... I wonder, even if the DNS server had to deliver its answer via TCP, would it still be more desirable to fetch OCSP responses via a DNS query since the local resolver can cache the status of commonly requested certs, and then deliver cached responses over TCP much faster than even a single UDP packet sent thousands of miles and back?
> On 29/04/11 18:32, Paul Tiemann wrote:
>> (Disclaimer: I haven't thought this all the way through.)
>>
>> What if we could deliver OCSP responses via DNS?
>>
>> * Add a dnsAddress identifier in the OCSP section of AIA, so you can
>> list an HTTP URI, and also a DNS location where the response could be
>> retrieved.
>
> Do we need that, or can we use this?
> http://www.dns-sd.org/ServiceTypes.html
Would the customer's DNS need to be regularly updated if that were the case (if the record lives on the customer's domain)? If it were a CNAME, do we run the risk of a lot of misconfigurations?
>> New browsers could prefer DNS, and fallback to http.
>
> If a browser is asked for https://www.foo.com, it can send out DNS requests for www.foo.com and _ocsp._tcp.www.foo.com at the same time.
>
> Of course, this might require that https://www.foo.com uses the same certificate no matter who is accessing it and from where...
True, that might complicate things unless you could put multiple values at the _ocsp._tcp.www.foo.com address. I can see the appeal of this -- sending an OCSP request at the same time as connecting could improve network performance even more than sending a DNS request in the middle of the SSL handshake... If it's a CNAME, can the resolver deliver the value of the CNAME target in the same response?
A benefit of this being part of the CA's DNS server would be that the
TTL value could be guaranteed to match the notAfter or nextUpdate
dates.
Also, replying with OCSP responses in UDP with lengths up to the minimum
IPv6 MTU would be useful and appropriate.
-Kyle H
> On Wed, May 4, 2011 at 9:59 PM, Paul Tiemann <pa...@digicert.com> wrote:
>> On May 4, 2011, at 8:38 AM, Gervase Markham wrote:
>>
>>> On 03/05/11 09:01, Rob Stradling wrote:
>>>> https://secure.wikimedia.org/wikipedia/en/wiki/Domain_Name_System says:
>>>> "DNS queries consist of a single UDP request from the client followed by a
>>>> single UDP reply from the server. The Transmission Control Protocol (TCP) is
>>>> used when the response data size exceeds 512 bytes..."
>>>
>>> Surely this is a feature of the DNS server? As in, CAs could write DNS servers which replied over UDP even when that limit was exceeded? Perhaps some firewalls would truncate it, but then the browser would just fall back to normal OCSP.
>>
>> Hmm... I wonder, even if the DNS server had to deliver its answer via TCP, would it still be more desirable to fetch OCSP responses via a DNS query since the local resolver can cache the status of commonly requested certs, and then deliver cached responses over TCP much faster than even a single UDP packet sent thousands of miles and back?
>
> A benefit of this being part of the CA's DNS server would be that the
> TTL value could be guaranteed to match the notAfter or nextUpdate
> dates.
True. I don't think we could rely on site owners to set up _ocsp._tcp.www.foo.com for their sites, even if it were only a CNAME. But thinking along the lines of what Gerv would like to be able to do (parallelize the handshake with the OCSP check): what if NSS could cache the last-seen certificate for each FQDN, and when the user reconnects, send the OCSP validation request for that last-seen certificate at the same time as it first opens a socket to the FQDN? It could help streamline SSL handshake speed even in scenarios where only HTTP OCSP can be done (and it's perhaps more important in those cases, because the check is going to be slower over HTTP).
Of course it would have to be intelligent enough to make sure the current SSL cert is the same as the last-seen cert before trusting the pre-fetched OCSP response...
> Also, replying with OCSP responses in UDP with lengths up to the minimum
> IPv6 MTU would be useful and appropriate.
That would be _very_ good -- if it's something like 1280 bytes, then even OCSP responses signed by a delegated OCSP responder certificate might fit (those are ~1100 bytes).
Does anyone know if common firewalls or routers drop UDP DNS packets larger than 512 bytes? (Perhaps the deep-inspection style firewalls do?)
One reason cited for why today's OCSP infrastructure doesn't work is that the
CAs' (HTTP-based) OCSP Responders are not always available. If an OCSP
Responder goes down temporarily, or if there's some sort of temporary routing
problem somewhere on the Internet, then a single point of failure affects
multiple users of multiple websites until the problem is fixed.
"OCSP via DNS" would see the CAs' DNS servers as the single points of failure.
Yes, they'd probably serve Responses more quickly, and they're probably less
likely to be temporarily unavailable, but they'd still be single points of
failure.
Also, OCSP via the CAs' DNS Servers doesn't solve the privacy problem. The
CAs would still get to see which IP Addresses are verifying which
certificates.
I do support the "OCSP via DNS" idea though, because I think that improving
the availability/performance/reliability of CA-operated OCSP would be a useful
improvement. CA-operated OCSP will always be required: for legacy clients, to
support servers capable of stapling, etc.
But, IMHO, OCSP hard-fail will only become viable if/when there is no single
point of failure. I can only see this happening with...
- some sort of widely available form of "multi-stapling", so that each site
can serve all of the relevant OCSP Responses for its cert chain
and/or
- some sort of solution involving short-duration certs for which none of the
certs in the chain need to be revocation-checked.
> On Thursday 05 May 2011 15:18:56 Paul Tiemann wrote:
>> On May 4, 2011, at 11:18 PM, Kyle Hamilton wrote:
> <snip>
>>> A benefit of this being part of the CA's DNS server...
>
> "OCSP via DNS" would see the CAs' DNS servers as the single points of failure.
> Yes, they'd probably serve Responses more quickly, and they're probably less
> likely to be temporarily unavailable, but they'd still be single points of
> failure.
True, but we all depend on the CAs' DNS servers as single points of failure already (how else would you resolve ocsp.verisign.net or crl.comodoca.com?), so if CA DNS were to fail, the whole thing fails anyway, including attempts at CRL fallback.
In addition to being faster and less likely to be temporarily unavailable, I think DNS handles some kinds of failure a lot more gracefully:
* If one server (or load balancer or data center) suddenly goes offline, OCSP via HTTP fails. I don't know whether any browsers attempt to try all IPs in a round-robin DNS entry, but we can safely guess that not all of them will. However, OCSP via DNS would survive those kinds of failures, since DNS will attempt the next authoritative DNS server in the list. (As long as you don't put all your DNS servers in the same data center or city, you can gracefully recover from those kinds of outages.) Because of this behavior in DNS, you can also count on graceful recovery from most temporary routing issues, assuming your DNS resolver tries the 2nd, 3rd, and 4th DNS servers in the list.
* If your entire DNS infrastructure suddenly goes offline for some reason, then at least the scope of the failure is limited to the clients whose caching resolvers do not have the OCSP response already cached. The distributed cache gets to serve up a large percentage of your OCSP responses while you're down. And any client that supports OCSP via DNS could try falling back to your HTTP servers (assuming that DNS entry is cached in your resolvers)
> Also, OCSP via the CAs' DNS Servers doesn't solve the privacy problem. The
> CAs would still get to see which IP Addresses are verifying which
> certificates.
I think it solves it better than HTTP because the user's own IP address is masked by his/her DNS resolver. The client sends the DNS query to the resolver, which then asks the CA's DNS servers. For example, if I use OpenDNS or my ISP's DNS resolvers, then the CA will have no way to know that my IP was requesting OCSP for a given site.
Caching resolvers would further limit the amount of traffic that the CA can "see" since OpenDNS may only have to ask for the www.gmail.com OCSP response once every X hours, but continue to serve that cached response to 10M people.
> I do support the "OCSP via DNS" idea though, because I think that improving
> the availability/performance/reliability of CA-operated OCSP would be a useful
> improvement. CA-operated OCSP will always be required: for legacy clients, to
> support servers capable of stapling, etc.
Yes, OCSP via HTTP is here to stay, even if all modern browsers started to prefer DNS-based OCSP.
> But, IMHO, OCSP hard-fail will only become viable if/when there is no single
> point of failure. I can only see this happening with...
> - some sort of widely available form of "multi-stapling", so that each site
> can serve all of the relevant OCSP Responses for its cert chain
I'd like to see OCSP soft-fail before we go from no-fail to hard-fail. Drawing a red line through the URL or having some other visual indicator of a failure to fetch revocation data would be a great first step.
I sometimes wonder whether, if the browsers had soft-fail indicators, that would be 'good enough', or whether it would at least help us all figure out what the exceptional cases look like (where OCSP can't be fetched due to wireless APs with user-agreement click-throughs, etc.).
Stapling would help things a lot. The trouble with stapling (from my point of view) is that it depends on so many more organizations supporting it first. I don't know if there is even one SSL accelerator out there that has built stapling support into its product (F5, Cisco, Foundry, A10, others?). The highest-profile sites are usually using those beefy devices. But at what point could it be determined that "enough" servers support it to turn on hard-fail? After just IIS and Apache support it? Then we create the ultimate DDoS incentive: by attacking the CA's OCSP servers, the ability to take down all the big guys (on SSL accelerators that don't have stapling) while leaving most of the little guys still floating (on IIS and Apache with stapling).
Add to that the problem of multi-stapling. I'd love it if browsers used CRLs to check intermediate certs, and I'd love it even more if they'd _cache_ the intermediate certificates' statuses (because they're much less likely to change from quarter to quarter).
> - some sort of solution involving short-duration certs for which none of the
> certs in the chain need to be revocation-checked.
Unless that were supported by all platforms, I can only guess that using short-lived certs would be a delicacy item for the top 0.1% of websites (the highest-trafficked sites have an incentive to go through the trouble, but not many others).
I think the browsers have a lot of tools at their disposal for improving things even if OCSP via HTTP is all we have -- for example, knowledge of previously-seen certs and previously-valid OCSP responses: cache those responses, attempt to refetch them after one day, but be willing to trust an otherwise unsuspicious SSL handshake for a previously-seen certificate at the same IP as last time on the merits of a cached OCSP response, even if the current OCSP attempt fails... Or check whether the last-seen cert was an EV cert, and if the cert changes to a DV cert, require the OCSP to work (it would be more suspicious if a cert that has always been EV suddenly changes to DV and the OCSP also stops working at the same time?).
> So yes, as long as the NAME field is no longer than ~30 bytes, and for as long
> as RSA-2048 is considered secure, and for as long as SHA-1 may be used as
> described above...Non-delegated OCSP Responses should be small enough (<512
> bytes) for UDP DNS. But only just small enough!
Yes, it's too bad 512 is the limit - thanks for pointing that out. In thinking about it some more, I can still see big enough advantages even if the responses had to be delivered via TCP DNS.
I was just thinking of this comparison this morning: we get more OCSP requests per day than all of our DNS queries per month combined, and that's with a 5-minute TTL in DNS. If we had OCSP via DNS and used just a 2-hour TTL, it could be assumed that OCSP-related volumes would be reduced to at most 1/30th of today's, probably more like 1/100th. At the same time, any brand-new revocations could be pushed out in real time, and within the 2-hour window all but the broken DNS resolvers out there would have refreshed it. Viva distributed caching!
Yes and no.
If the CA DNS fails for a long period (days), I agree that "the whole
(revocation checking) thing fails anyway".
But if the CA DNS fails for only a short period (minutes or hours),
clients/servers that support stapling should survive that failure unscathed.
> In addition to being faster and less likely to be temporarily unavailable,
> I think DNS handles some kinds of failure a lot more gracefully:
Agreed.
<Lots of good points snipped>
> > Also, OCSP via the CAs' DNS Servers doesn't solve the privacy problem.
> > The CAs would still get to see which IP Addresses are verifying which
> > certificates.
>
> I think it solves it better than HTTP because the user's own IP address is
> masked by his/her DNS resolver. The client sends the DNS query to the
> resolver who then asks the CA's DNS servers. For example, if I use
> OpenDNS or my ISP's DNS resolvers, then the CA will have no way to know
> that my IP was requesting OCSP for a given site.
Maybe we should also introduce a concept of "stapling via DNS". i.e. the OCSP
Response is available from the CA's DNS server, but the site owner may
optionally make the OCSP Response available from their own DNS as well.
Clients could send out DNS queries for both, in parallel.
<snip>
> I'd like to see OCSP soft-fail before we go from no-fail to hard-fail.
> Drawing a red line through the URL or having some other visual indicator
> of a failure to fetch revocation data would be a great first step.
>
> I sometimes wonder if the browsers had soft-fail indicators if that would
> be 'good enough' or if it would at least help us all figure out what the
> exceptional cases look like (where OCSP can't be fetched due to wireless
> APs with user agreement click throughs, etc)
Agreed.
> Stapling would help things a lot. The trouble with stapling (from my point
> of view) is that it depends on so many more organizations to support it
> first.
This is why I've been trying to invent alternative forms of (single- and
multi-) stapling that can work with existing webserver software. :-)
<snip>
> > - some sort of solution involving short-duration certs for which none of
> > the certs in the chain need to be revocation-checked.
>
> Unless that were supported by all platforms I can only guess that using
> short-lived certs would be a delicacy item for the top 0.1% of websites
> (the highest trafficked sites have an incentive to go through the trouble,
> but not many others)
Agreed.
> I think the browsers have a lot of tools at their disposal for improving
> things even if OCSP via HTTP is all we have -- for example...
<snip>
Agreed.
Hi Paul. In order to keep within the 512 byte limit, we could define a
"compression" algorithm for OCSP Responses (both delegated and non-delegated)
served via DNS. I'd approach it like this...
1. The components of the CertID would be part of the Domain Name in the DNS
request, so they wouldn't need to be present in the DNS response data.
2. For delegated signing, the OCSP Signer Certificate would be served by a
separate DNS record (using the existing CERT RR type). This record could be
cached on the client, saving bandwidth when multiple OCSP Responses signed by
the same OCSP Signer Certificate are required.
3. Strip out as much of the ASN.1 header overhead as possible.
When a client receives a "compressed" OCSP Response, they would have enough
information to be able to unambiguously re-encode the DER OCSP Response and
validate the signature.
It would be possible to get the size of the RDATA field for both non-delegated
and delegated "compressed" OCSP Responses that are signed with RSA-2048 down
to ~300 bytes.
Here's roughly how I'd do it...
Client performs a DNS query, using a to-be-defined "OCSP" RRTYPE, for the
following domain name:
<HEX(certificateserialnumber)>.<hashalgorithmname>-<HEX(issuernamehash)>-
<HEX(issuerkeyhash)>.<ocspdomainname>
Server sends a reply with the RDATA field constructed as follows:
[1 byte]: OCSPDNSType
0x30 = the RDATA contains a full DER-encoded OCSP Response, and this 0x30 is
its first byte (i.e. a SEQUENCE tag). The rest of the fields described below
are not present.
or 0x00 = it's a "compressed", non-delegated v1 Basic OCSP Response with 1
SingleResponse.
or 0x01 = it's a "compressed", delegated v1 Basic OCSP Response, with 1
SingleResponse, where the ResponderID is identified by KeyHash.
if (OCSPDNSType == 0x01) {
[n bytes]: ResponderIDKeyHash. The length of this hash is implied by the
<hashalgorithmname> in the DNS query. The Client may obtain the Responder's
OCSP Signer Certificate by performing a DNS CERT lookup for
<HEX(ResponderIDKeyHash)>.<ocspdomainname>
}
[8 bytes]: producedAt (number of seconds after 1st Jan 1970 UTC)
[1 byte]: CertStatus
0, 1 or 2 = see RFC2560
if (CertStatus == 1) {
[8 bytes]: revocationTime (number of seconds after 1st Jan 1970 UTC)
[1 byte]: CRL Reason size
[0 or 1 byte]: CRL Reason
}
[8 bytes]: thisUpdate (number of seconds after 1st Jan 1970 UTC)
[1 byte]: nextUpdate length (0=absent; 8=present)
[0 or 8 bytes]: nextUpdate (number of seconds after 1st Jan 1970 UTC)
[1 byte]: singleExtensions length (0=absent; >0=length of data)
[0 or >0 bytes]: singleExtensions DER-encoded data
[1 byte]: responseExtensions length (0=absent; >0=length of data)
[0 or >0 bytes]: responseExtensions DER-encoded data
[1 byte]: signatureAlgorithm
1 = RSA/SHA-1
2 = RSA/SHA-256, etc, etc
[n bytes]: signature raw data (exclude the BITSTRING header)
The size of the RDATA for an unrevoked certificate, with non-delegated Response
signing, with the optional nextUpdate present, and with "sha1" as the
<hashalgorithmname> would be...
1 (OCSPDNSType = 0x00)
+ 8 (producedAt)
+ 1 (CertStatus = 0 (good) )
+ 8 (thisUpdate)
+ 1 (nextUpdate length = 8)
+ 8 (nextUpdate)
+ 1 (singleExtensions length = 0)
+ 1 (responseExtensions length = 0)
+ 1 (signatureAlgorithm = 1)
+ 256 (signature = the RSA-2048 signature from the "uncompressed" Response)
= 286 bytes.
For delegated Response signing...
+ 20 (ResponderIDKeyHash = SHA-1 hash of the Responder's Public Key).
= 306 bytes.
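To sanity-check those numbers, here's a minimal Python encoder for the
proposed RDATA layout (the record type, code points and layout are all just
this proposal, not any standard):

    import struct

    def query_name(serial, name_hash, key_hash, ocsp_domain, alg="sha1"):
        # <HEX(serial)>.<alg>-<HEX(issuernamehash)>-<HEX(issuerkeyhash)>.<domain>
        return "%s.%s-%s-%s.%s" % (serial.hex().upper(), alg,
                                   name_hash.hex(), key_hash.hex(), ocsp_domain)

    def encode_rdata(produced_at, this_update, signature, sig_alg=1,
                     cert_status=0, next_update=None, revocation_time=None,
                     crl_reason=None, responder_key_hash=None,
                     single_exts=b"", response_exts=b""):
        out = bytearray()
        out.append(0x01 if responder_key_hash else 0x00)   # OCSPDNSType
        if responder_key_hash:
            out += responder_key_hash    # length implied by <hashalgorithmname>
        out += struct.pack(">Q", produced_at)   # producedAt (epoch seconds)
        out.append(cert_status)                 # 0=good, 1=revoked, 2=unknown
        if cert_status == 1:
            out += struct.pack(">Q", revocation_time)
            out += b"\x00" if crl_reason is None else bytes([1, crl_reason])
        out += struct.pack(">Q", this_update)   # thisUpdate
        if next_update is None:
            out += b"\x00"                      # nextUpdate absent
        else:
            out += b"\x08" + struct.pack(">Q", next_update)
        out += bytes([len(single_exts)]) + single_exts
        out += bytes([len(response_exts)]) + response_exts
        out.append(sig_alg)                     # 1=RSA/SHA-1, 2=RSA/SHA-256
        out += signature                        # raw bits, no BITSTRING header
        return bytes(out)

    # Reproduces the sizes above: 286 bytes non-delegated, 306 delegated.
    sig = b"\x00" * 256                         # RSA-2048 signature placeholder
    assert len(encode_rdata(0, 0, sig, next_update=0)) == 286
    assert len(encode_rdata(0, 0, sig, next_update=0,
                            responder_key_hash=b"\x00" * 20)) == 306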
> On Thursday 05 May 2011 20:17:08 Paul Tiemann wrote:
>> On May 3, 2011, at 2:01 AM, Rob Stradling wrote:
>>> So yes, as long as the NAME field is no longer than ~30 bytes, and for as
>>> long
>>>
>>> as RSA-2048 is considered secure, and for as long as SHA-1 may be used as
>>> described above...Non-delegated OCSP Responses should be small enough
>>> (<512 bytes) for UDP DNS. But only just small enough!
>>
>> Yes, it's too bad 512 is the limit - thanks for pointing that out. In
>> thinking about it some more, I can still see big enough advantages even if
>> the responses had to be delivered via TCP DNS.
>
> Hi Paul. In order to keep within the 512 byte limit, we could define a
> "compression" algorithm for OCSP Responses (both delegated and non-delegated)
> served via DNS. I'd approach it like this...
> 1. The components of the CertID would be part of the Domain Name in the DNS
> request, so they wouldn't need to be present in the DNS response data.
> 2. For delegated signing, the OCSP Signer Certificate would be served by a
> separate DNS record (using the existing CERT RR type). This record could be
> cached on the client, saving bandwidth when multiple OCSP Responses signed by
> the same OCSP Signer Certificate are required.
> 3. Strip out as much of the ASN.1 header overhead as possible.
This would be great! Thanks for looking into it, Rob!
Support for delegated signing would mean that even CAs with 4096-bit keys could use a delegated signer certificate with a smaller key and still be able to fit their OCSP responses into the 512-byte limit.
> When a client receives a "compressed" OCSP Response, they would have enough
> information to be able to unambiguously re-encode the DER OCSP Response and
> validate the signature.
>
> It would be possible to get the size of the RDATA field for both non-delegated
> and delegated "compressed" OCSP Responses that are signed with RSA-2048 down
> to ~300 bytes.
>
> Here's roughly how I'd do it...
>
>
> Client performs a DNS query, using a to-be-defined "OCSP" RRTYPE, for the
> following domain name:
> <HEX(certificateserialnumber)>.<hashalgorithmname>-<HEX(issuernamehash)>-
> <HEX(issuerkeyhash)>.<ocspdomainname>
>
Hmm, would we need to support more than one hash algorithm for the issuer name hash and issuer key hash? Would that require us to sign multiple variations of each OCSP response and post them at multiple locations in the DNS zone? Since it's not being used for crypto-sensitive parts, could SHA-1 be agreed on as the only algorithm to use for the CertID in OCSP over DNS?
Or, could the issuer specify the FQDN of the DNS RR to fetch within the AIA, next to the OCSP HTTP URI (as a separate data type)?
One side thought: Browsers could support the use of a centralized OCSP service by using a configurable OCSP response broker domain that would be appended to each OCSP-in-DNS request -- for example, instead of fetching
031713EE43F5E5F0FA82CD5ECBE16DF6.sha1-80d229c411e346e12116f3dd54b403c3c7aea05d-a4bc59c5ad11f5c699b813a49a6a76a4c8c0ed8e.ocsp.digicert.com
they could fetch the same FQDN but with .ocsp.opendns.com appended to the end of the name (assuming OpenDNS fired up an OCSP service like that).
With that much information in the FQDN itself, a custom DNS service like that could potentially even act as a gateway/bridge to the CA's HTTP OCSP service. It could construct the OCSP request, fetch it from the HTTP service, reformat it into compressed format and send it back... (Perhaps a service like that could be used to provide OCSP over DNS for all existing certificates that have at least an OCSP HTTP URI defined in the AIA?)
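Here's a quick Python sketch of the name parsing such a gateway would need to do (label layout per Rob's proposal above; the broker domain is hypothetical):

    def parse_gateway_query(fqdn, broker_suffix="ocsp.opendns.com"):
        # e.g. <serial>.<alg>-<namehash>-<keyhash>.ocsp.digicert.com.<broker_suffix>
        assert fqdn.endswith("." + broker_suffix)
        inner = fqdn[:-len(broker_suffix) - 1]         # strip the broker labels
        serial_hex, certid, ca_ocsp_domain = inner.split(".", 2)
        hash_alg, name_hash, key_hash = certid.split("-")
        # This is enough to rebuild a DER OCSPRequest, fetch the answer from the
        # CA's HTTP responder, and re-encode it in the compressed DNS format.
        return (ca_ocsp_domain, hash_alg, bytes.fromhex(name_hash),
                bytes.fromhex(key_hash), int(serial_hex, 16))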
I think it's a great way to cope with the 512 byte limit!
OCSP over UDP with the distributed caching of DNS would have "something for everyone":
1) Users and Browsers get MUCH faster OCSP responses during SSL handshakes (5 ms could be achievable if the response is cached on the user's on-LAN resolver -- compare that with 200 ms to 500 ms or worse over HTTP). They also get a little bit better privacy (their own IP doesn't get exposed to the responder).
2) CAs get a drastic reduction in OCSP volume, and could potentially afford to reduce OCSP validity windows (which is also good for users and browsers).
Paul
:-)
> Support for delegated signing would mean that even 4096 bit CAs could use a
> delegated signer certificate with a smaller key and they'd still be able
> to fit their OCSP responses into the 512 limit.
Yep.
<snip>
> Hmm, would we need to support more than one hash algorithm for the issuer
> name hash and issuer key hash?
In theory, yes. In practice, hopefully no.
> Would that require us to sign multiple variations of each OCSP response and
> post them at multiple locations in the DNS zone?
If it is required, then yes.
> Since it's not being used for crypto-sensitive parts, could sha1 be agreed
> on as the only one to use for CertID for OCSP over DNS?
Hopefully. SHA-1 is the only hash algorithm that RFC2560 requires to be
supported for CertID->hashAlgorithm. As of today, our OCSP infrastructure
only supports SHA-1 for this purpose.
Incidentally, I asked the "Can we make SHA-1 the only option for
CertID->hashAlgorithm?" question over on the IETF PKIX mailing list several
years ago. I understand that there is an RFC2560-bis effort in progress, and
I've been led to believe that it will seek to address this question.
> Or, could the issuer specify the FQDN of the DNS RR to fetch within the AIA
> next to the OCSP http uri (as a separate data type)
It could be useful for the cert's AIA extension to indicate the availability
of an OCSP-via-DNS service.
If alternatives to SHA-1 are required, then it might also be useful (for both
traditional OCSP and OCSP-via-DNS) for the cert to indicate which hash
algorithm(s) may be used for CertID->hashAlgorithm.
> One side thought: Browsers could support the use of a centralized OCSP
> service by using a configurable OCSP response broker domain that would be
> added to each OCSP in DNS request -- for example, instead of fetching
>
> 031713EE43F5E5F0FA82CD5ECBE16DF6.sha1-80d229c411e346e12116f3dd54b403c3c7aea
> 05d-a4bc59c5ad11f5c699b813a49a6a76a4c8c0ed8e.ocsp.digicert.com
>
> They could fetch the same FQDN but with .ocsp.opendns.com appended to the
> end of the name (assuming OpenDNS fired up an OCSP service like that)
Yep.
> With that much information in the FQDN itself, a custom DNS service like
> that could potentially even act as a gateway/bridge to the CA's HTTP OCSP
> service. It could construct the OCSP request, fetch it from the HTTP
> service, reformat it into compressed format and send it back... (Perhaps a
> service like that could be used to provide OCSP over DNS for all existing
> certificates that have at least an OCSP HTTP URI defined in the AIA?)
Yep. Good idea. One such service could potentially implement OCSP-via-DNS
for all certificates issued by all CAs that have an HTTP-based OCSP Responder,
without the CAs and website operators having to do anything.
> > Server sends a reply with the RDATA field constructed as follows:
<snip>
> I think it's a great way to cope with the 512 byte limit!
>
> OCSP over UDP with the distributed caching of DNS would have "something for
> everyone":
>
> 1) Users and Browsers get MUCH faster OCSP responses during SSL handshakes
> (5ms could be achievable if cached on the user's on-LAN resolver --
> compare that with 200ms to 500ms or worse over http) They also get a
> little bit better privacy (their own IP doesn't get exposed to the
> responder)
>
> 2) CAs get a drastic reduction in OCSP volume, and could potentially afford
> to reduce OCSP validity windows (which is also good for users and
> browsers)
Yep.
> Paul
Hi Kyle. I'm not sure I understand how this could be achieved. Are you
suggesting that all "OCSP via DNS" traffic would be required to go over IPv6?
> > "OCSP via DNS" would see the CAs' DNS servers as the single points of
> > failure. Yes, they'd probably serve Responses more quickly, and they're
> > probably less likely to be temporarily unavailable, but they'd still be
> > single points of failure.
>
> No, they wouldn't. DNS takes care of all of that.
My first post to this thread said "I don't claim to know very much about DNS",
but I'm learning. ;-)
http://wiki.answers.com/Q/How_do_you_achieve_redundancy_for_your_DNS_server
> They wouldn't necessarily be singleton points, as there's no rule preventing
> a CA from operating multiple OCSP responders in parallel.
Agreed. Contrast this with "OCSP via HTTP": e.g. even though "dig A
ocsp.comodoca.com" returns multiple records, OCSP clients typically (AFAIK)
only try to do "OCSP via HTTP" with one of them before deciding that
certificate status information cannot be obtained.
> Really, it would be -precisely- the same risk that existing DNS
> infrastructures have. If an OCSP responder isn't available, if its
> response has been cached by the verifier's DNS, it makes the responder
> service appear to be higher-uptime than it already is.
Agreed.
> > Also, OCSP via the CAs' DNS Servers doesn't solve the privacy problem.
> > The CAs would still get to see which IP Addresses are verifying which
> > certificates.
>
> The only way for that to be solved is for the presenter to get his own OCSP
> responses, and the user to take business elsewhere if that's not done.
So how about we define "OCSP Stapling via DNS" as well? Something like this...
1. User asks browser to navigate to https://www.domain.com.
2. Browser does several DNS lookups for www.domain.com in parallel: A, AAAA
and "Stapled OCSP" (using a new to-be-defined RRTYPE).
3. Browser performs a TLS handshake with an IP Address returned by the A or
AAAA record.
4. If a "Stapled OCSP" DNS record was returned, Browser injects the OCSP
Response(s) into its local OCSP Response cache.
5. If a CertificateStatus TLS extension was received, Browser injects the OCSP
Response into its local OCSP Response cache.
6. If there are any other certs for which locally cached revocation
information is not yet available, Browser tries to contact the CA's DNS
Server(s) to obtain an OCSP Response using "OCSP via DNS" (using a new to-be-
defined RRTYPE). Browser locally caches any OCSP Response(s) found this way.
7. If there are still any other certs for which locally cached revocation
information is not yet available, Browser performs "OCSP via HTTP" with the
OCSP URI from the certificate's AIA extension, and locally caches any OCSP
Response(s) found this way.
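In Python pseudocode, steps 2-7 boil down to something like this (every
lookup below is a stand-in for a to-be-defined RRTYPE or TLS extension hook,
not a real API):

    def dns_stapled_ocsp(host):  return []    # steps 2/4: to-be-defined RRTYPE
    def tls_cert_status(host):   return []    # step 5: CertificateStatus ext.
    def ocsp_via_dns(cert):      return None  # step 6: to-be-defined RRTYPE
    def ocsp_via_http(cert):     return None  # step 7: AIA OCSP URI

    def gather_revocation_info(host, chain):
        cache = {}                                  # cert -> OCSP Response
        for cert, resp in dns_stapled_ocsp(host):   # looked up in parallel
            cache[cert] = resp                      # with A/AAAA records
        for cert, resp in tls_cert_status(host):    # stapled in the handshake
            cache[cert] = resp
        for cert in chain:                          # anything still missing
            if cert not in cache:
                cache[cert] = ocsp_via_dns(cert) or ocsp_via_http(cert)
        return cache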
> > But, IMHO, OCSP hard-fail will only become viable if/when there is no
> > single point of failure. I can only see this happening with...
Since I wrote that, I think I've changed my mind. OCSP hard-fail could be
viable if we just implement "OCSP via DNS". And this would really only need
buy-in from the Browsers! (Even the CAs wouldn't need to do anything if we
rig up what Paul T described: "a custom DNS service like that could
potentially even act as a gateway/bridge to the CA's HTTP OCSP service").
> You seriously run only one single OCSP responder?
We currently run 10s of OCSP Responders.
> You seriously don't have multiple valid OCSP certs, one for each responder
> in your farm?
We use non-delegated OCSP Response signing.
> DNS has a built-in caching facility which allows for you to specify
> precisely how long you want the intermediate nameservers to cache the
> information for -- this permits you to set it to timeout after as long as
> your responses are good for.
>
> > - some sort of widely available form of "multi-stapling", so that each
> > site can serve all of the relevant OCSP Responses for its cert chain
> > and/or
>
> Include all of the relevant responses and certificates in the chain in a
> self-signed structure, which also contains a signature by the certified
> key over the key used to self-sign.
How would your proposed "self-signed structure" be provided to OCSP clients?
If you're suggesting that the TLS Server should send 1 self-signed certificate
in its "Certificate" handshake messages, then this would trigger warnings in
browsers that don't understand the special contents of that certificate.
> > - some sort of solution involving short-duration certs for which none of
> > the certs in the chain need to be revocation-checked.
>
> Isn't this what Authority Information Access was originally intended to be
> for, to discover the intermediates? Those URLs could be constantly
> updated with new short-duration certs.
If a certificate obtained from an AIA->caIssuers URL expires, will a Browser
that cached the now-expired cert...
1. Contact the AIA->caIssuers URL again to look for an updated Intermediate
cert?
or...
2. Mark the certificate chain as untrusted (because the Intermediate has
expired) and exit the certificate path validation process?
I'm guessing 2, but I'd like to be wrong.
> -Kyle H