certificate transparency logs - limited range per API call

glovescoffee

Oct 20, 2020, 11:03:24 AM
to certificate-transparency
Just wondering if http://ct.googleapis.com/logs/argon2021/ct/v1/get-entries?start=12345&end=12345 has a limit on the size of the range.

Right now I can only get up to 20 entries per call. For example, if I want to get recent records and the current tree size is 1000, I have to do something like this:

get-entries?start=0&end=19
get-entries?start=20&end=39
...
get-entries?start=980&end=999

until I have all 1000 records, instead of just one call like:

get-entries?start=0&end=999

Is there a limit per operator? What I observe now is 20 for Google, 1000 for Cloudflare, and 200 for Let's Encrypt. If so, what is the limit per call? Thank you.

Mohammadamin Karbasforushan

Oct 20, 2020, 11:45:13 AM
to certificate-transparency
Hi,

There is a batch size limit, and it's generally consistent across a given operator's logs (though you could, and probably should, check each log individually).

The batch sizes are:
- Google: 32
- LE: 256
- CloudFlare: 1024
- TrustAsia: 256
- DigiCert's ct1: 65 (might be wrong here)
- DigiCert's sharded logs: 256
- Comodo: 1001

Beware that some operators (Google, TrustAsia, and Let's Encrypt mainly) enforce range alignment for better caching; see Let's Encrypt's blog post on this.
Needless to say, this is set by the operator running the log, so Google's mirrors of other logs have Google's batch-size setting.
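One way to verify a particular log's batch size, since it isn't advertised anywhere, is to request a deliberately large range and count how many entries actually come back. A minimal sketch (function names and the probe size are mine, not from this thread):

```python
import json
import urllib.request


def get_entries_url(base_url, start, end):
    """Build a get-entries URL for the inclusive range [start, end]."""
    return f"{base_url.rstrip('/')}/ct/v1/get-entries?start={start}&end={end}"


def probe_batch_size(base_url, request_size=10_000):
    """Ask for a large range starting at 0 and count how many entries
    the log is actually willing to return in one response."""
    url = get_entries_url(base_url, 0, request_size - 1)
    with urllib.request.urlopen(url) as resp:
        body = json.load(resp)
    return len(body["entries"])


# Example (performs a network request):
# probe_batch_size("http://ct.googleapis.com/logs/argon2021")  # e.g. 32 for Google
```

Note that if the log enforces alignment, a probe starting at 0 lands on an aligned boundary, so the count you get back should be the full batch size.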

I use this code to check log statuses, etc. See the flags to set what you want listed.

Cheers,
Amin

Al Cutter

Oct 20, 2020, 11:54:19 AM
to certificate-...@googlegroups.com
Hi,

Logs impose batch size limits for various reasons, and of course log operators may choose to change those sizes from time to time.
The RFC says that logs may deliver a smaller batch than requested, though. So you can determine the end of the range to fetch from the latest STH you have (tree_size - 1), and then repeatedly request get-entries?start=<next_missing_entry>&end=<tree_size - 1>, advancing <next_missing_entry> each time by the number of entries you received in the batch (i.e. you don't need to recalculate the end parameter on each call).

If you're downloading many entries, you can partition the large range into several smaller chunks, e.g. [0, 1M), [1M, 2M), ..., [nM, STH.tree_size), and fetch those chunks in parallel, applying the above strategy within each chunk.
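The fetch loop and chunk partitioning described above can be sketched as follows (function names are mine; error handling and retries omitted):

```python
import json
import urllib.request


def fetch_range(base_url, start, end):
    """One get-entries call; the log may return fewer entries than requested."""
    url = f"{base_url}/ct/v1/get-entries?start={start}&end={end}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["entries"]


def fetch_all(base_url, tree_size):
    """Fetch entries [0, tree_size), advancing by however many entries the
    log actually returned each time. The end parameter stays fixed."""
    entries, next_missing = [], 0
    end = tree_size - 1  # largest valid index; no need to recompute per call
    while next_missing <= end:
        batch = fetch_range(base_url, next_missing, end)
        entries.extend(batch)
        next_missing += len(batch)
    return entries


def chunks(tree_size, chunk_size):
    """Partition [0, tree_size) into inclusive (start, end) ranges that
    parallel workers can each feed through the fetch loop above."""
    return [(s, min(s + chunk_size, tree_size) - 1)
            for s in range(0, tree_size, chunk_size)]
```

Each worker would run the same loop as `fetch_all` over its own (start, end) chunk, so a short final batch from the log is handled transparently.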

Hope that helps.

Cheers,
Al.



--
You received this message because you are subscribed to the Google Groups "certificate-transparency" group.
To unsubscribe from this group and stop receiving emails from it, send an email to certificate-transp...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/certificate-transparency/07f10572-d1e0-4752-8871-e373e1309715n%40googlegroups.com.

Donna Gail Hernandez

Oct 20, 2020, 9:57:41 PM
to certificate-...@googlegroups.com
Ohh okay! Thank you for your detailed explanation, really appreciate it.

Pierre Phaneuf

Oct 21, 2020, 6:24:37 AM
to certificate-transparency
Hi,

Al's approach is good: essentially, you have to adapt to the operator sending shorter batches. But note that this "negotiation" goes both ways: you should only ask for as many entries as you're prepared to receive!

For example, I've made a prototype log implementation here which "streams" entries, building the JSON response for "get-entries" dynamically as it receives entries from the database. With that implementation, if you asked for a million entries, you might actually get a million entries! That could mean holding a million entries (at least; there might be more than one copy) in memory on your side, which could cause you difficulties... :-)

So you request as many as you're prepared to receive, and the log will send as many as it's prepared to send (which might be 32, or a billion!), and everyone is happy.

I'd also like to attract attention to the "-1" in Al's "end=STH.tree_size-1", as this is the most common incorrect request that our log servers receive (off-by-one error past the end of the tree)! ;-)
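The off-by-one Pierre mentions can be guarded against with a tiny helper: entries are indexed 0 through tree_size - 1, so tree_size itself is never a valid index. (This helper is illustrative, not from the thread.)

```python
def last_valid_index(tree_size):
    """Entries in a CT log are indexed 0 .. tree_size - 1, so the largest
    valid 'end' parameter for get-entries is tree_size - 1, never tree_size."""
    if tree_size < 1:
        raise ValueError("tree is empty; nothing to fetch")
    return tree_size - 1
```

Deriving the end parameter through a function like this, rather than writing `tree_size` inline, makes the off-by-one hard to reintroduce.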

Kind regards,
Pierre
