I can't help but wonder if the weird start/end dates contributed to this mistake. When I was configuring my monitor, I was so focused on the month and day that I didn't notice the incorrect year and ended up misconfiguring it as 2025. (I only became aware of the mistake this morning when I received alerts about the log containing certificates outside of the range.) If logs generally configured their shards using strings like "2025", "2025h2" or "2025q3" instead of a pair of date-times, it seems like mistakes would be less likely.
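To illustrate the suggestion, here is a minimal sketch in Go; the shardInterval helper and the label grammar are hypothetical, not any existing log's configuration format. It expands such a label into the equivalent [start, end) timestamps:

```go
package main

import (
	"fmt"
	"time"
)

// shardInterval expands a shard label such as "2025", "2025h2" or "2025q3"
// into a [start, end) interval in UTC. Hypothetical helper for illustration
// only, not part of any existing log's configuration format.
func shardInterval(label string) (start, end time.Time, err error) {
	var year, n int
	jan1 := func(y int) time.Time { return time.Date(y, time.January, 1, 0, 0, 0, 0, time.UTC) }

	switch {
	case len(label) == 4: // "2025": the whole year
		_, err = fmt.Sscanf(label, "%4d", &year)
		return jan1(year), jan1(year + 1), err
	case len(label) == 6 && label[4] == 'h': // "2025h2": second half of 2025
		_, err = fmt.Sscanf(label, "%4dh%1d", &year, &n)
		start = jan1(year).AddDate(0, 6*(n-1), 0)
		return start, start.AddDate(0, 6, 0), err
	case len(label) == 6 && label[4] == 'q': // "2025q3": third quarter of 2025
		_, err = fmt.Sscanf(label, "%4dq%1d", &year, &n)
		start = jan1(year).AddDate(0, 3*(n-1), 0)
		return start, start.AddDate(0, 3, 0), err
	default:
		return start, end, fmt.Errorf("unrecognized shard label %q", label)
	}
}

func main() {
	for _, label := range []string{"2025", "2025h2", "2025q3"} {
		start, end, err := shardInterval(label)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s => [%s, %s)\n", label, start.Format(time.RFC3339), end.Format(time.RFC3339))
	}
}
```

A configuration built from a single label like this only has one field to get wrong, rather than two full RFC 3339 timestamps.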
I operate the Itko CT log (https://github.com/aditsachde/itko#public-instance), which runs my implementation of the static CT API spec. With Itko, each temporal shard is its own instance, so implementing an endpoint which covers all temporal shards would be difficult. I've added this endpoint to my log, following the log list v3 schema: https://ct2025.itko.dev/logs.v3.json
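In case it's useful to other monitor authors, here is a rough consumer-side sketch in Go of reading the temporal intervals out of that file. The struct covers only a small subset of the log list v3 schema, and the field names are my assumption of the usual v3 names rather than anything guaranteed by this endpoint:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// logList mirrors a small, assumed subset of the log list v3 schema.
type logList struct {
	Operators []struct {
		Name string `json:"name"`
		Logs []struct {
			Description      string `json:"description"`
			URL              string `json:"url"`
			TemporalInterval struct {
				StartInclusive time.Time `json:"start_inclusive"`
				EndExclusive   time.Time `json:"end_exclusive"`
			} `json:"temporal_interval"`
		} `json:"logs"`
	} `json:"operators"`
}

func main() {
	resp, err := http.Get("https://ct2025.itko.dev/logs.v3.json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var list logList
	if err := json.NewDecoder(resp.Body).Decode(&list); err != nil {
		panic(err)
	}
	for _, op := range list.Operators {
		for _, l := range op.Logs {
			fmt.Printf("%s (%s): %s .. %s\n", l.Description, l.URL,
				l.TemporalInterval.StartInclusive.Format(time.RFC3339),
				l.TemporalInterval.EndExclusive.Format(time.RFC3339))
		}
	}
}
```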
I also wanted to document an edge case that hit my log last week and may be relevant to other operators of static CT logs.
Andrew notified me that the data tile `x023/404` did not match the hashes in the corresponding level 0 tile. However, he also checked the uncached version of the tile, which returned the correct data. After investigating, I believe the following sequence of events occurred.
1. The log tried to incorporate a bunch of new entries into the tree
2. The full data tile was written to disk
3. The log got OOM-killed before the checkpoint was written
4. A consumer tried to opportunistically fetch the full data tile, putting the incorrect version in cache
5. The log rolled back to the last signed checkpoint on restart (which is fine because SCTs are not issued until the checkpoint is written to disk)
6. The log wrote a new data tile, hashes, and checkpoint
7. The incorrect data tile remained in cache, resulting in a mismatch
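For anyone who wants to run the same kind of uncached-vs-cached check Andrew did, here is a rough Go sketch that compares the (possibly cached) tile bytes against a cache-busted fetch of the same path. The tile URL and the query-parameter trick are assumptions, and whether the latter actually bypasses a particular CDN depends on its configuration:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"time"
)

// fetch returns the body of a successful GET, or an error.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	// Hypothetical monitoring URL for the data tile in question; substitute
	// the real monitoring prefix and tile path for the log being checked.
	tile := "https://ct2025.itko.dev/tile/data/x023/404"

	cached, err := fetch(tile)
	if err != nil {
		panic(err)
	}
	// Appending a unique query string is one crude way to try to force a
	// cache miss; whether it works depends on how the CDN keys its cache.
	fresh, err := fetch(fmt.Sprintf("%s?nocache=%d", tile, time.Now().UnixNano()))
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(cached, fresh) {
		fmt.Println("cached tile differs from origin: likely a stale CDN object")
	} else {
		fmt.Println("cached and origin tiles match")
	}
}
```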
This case seems possible to hit with Sunlight as well, especially if the log is running on very constrained hardware as mine is, so it might be worth documenting somewhere. From what I can tell, Let's Encrypt's Sunlight logs don't cache with CloudFront, so they shouldn't hit this issue.
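For operators writing their own implementation, here is a minimal sketch of the write ordering implied by the sequence above: tiles are written and fsynced first, the signed checkpoint last, and SCTs are only issued once the checkpoint is durable. The paths and contents are placeholders, not Itko's or Sunlight's actual layout:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeFileSync writes data to path and fsyncs it, so a crash (such as the
// OOM kill in step 3 above) cannot leave a later write durable while an
// earlier one is lost.
func writeFileSync(path string, data []byte) error {
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return err
	}
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	if _, err := f.Write(data); err != nil {
		f.Close()
		return err
	}
	if err := f.Sync(); err != nil {
		f.Close()
		return err
	}
	return f.Close()
}

// commitSequence sketches the ordering described in this thread: data tiles,
// then hash tiles, then the signed checkpoint. Only after the checkpoint is
// durable is it safe to issue SCTs for the newly incorporated entries.
func commitSequence(root string, dataTile, hashTile, checkpoint []byte) error {
	if err := writeFileSync(filepath.Join(root, "tile/data/x023/404"), dataTile); err != nil {
		return err
	}
	if err := writeFileSync(filepath.Join(root, "tile/0/x023/404"), hashTile); err != nil {
		return err
	}
	// A crash before this point leaves orphaned tiles behind; on restart the
	// log rolls back to the last signed checkpoint and rewrites them (steps 5-6).
	return writeFileSync(filepath.Join(root, "checkpoint"), checkpoint)
}

func main() {
	if err := commitSequence("log-data", []byte("entries"), []byte("hashes"), []byte("checkpoint")); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Note that this ordering only guarantees the log itself can roll back safely; it doesn't prevent a CDN from capturing a tile in the window between steps 2 and 3, which is exactly the edge case above.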