I think there's huge value in logs serving their own metadata, to ensure that it accurately reflects the log's configuration. Per-operator lists would probably be manually managed and prone to mistakes.
At this point, with static-ct being documented as a C2SP spec, I think it would be fine to add the "log metadata JSON" as another C2SP spec that both static-ct and RFC6962 logs implement.
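To make that concrete, here is a minimal sketch of what such a per-log metadata document might contain, loosely modeled on fields that already exist in log list schemas (log ID, public key, MMD, temporal interval, URLs). The field names, the Go types, and the document layout are my own assumptions for illustration; no such C2SP spec exists yet.

```go
// Hypothetical shape of a self-served log metadata document, for
// illustration only; every field name here is an assumption, not part
// of any published C2SP spec.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type LogMetadata struct {
	Description      string    `json:"description"`
	LogID            string    `json:"log_id"`         // base64 SHA-256 of the log's public key
	Key              string    `json:"key"`            // base64 DER SubjectPublicKeyInfo
	MMDSeconds       int       `json:"mmd"`            // maximum merge delay, in seconds
	SubmissionURL    string    `json:"submission_url"` // write path (RFC 6962 or static-ct submission prefix)
	MonitoringURL    string    `json:"monitoring_url"` // read path (may differ for static-ct logs)
	TemporalInterval *Interval `json:"temporal_interval,omitempty"`
}

type Interval struct {
	StartInclusive time.Time `json:"start_inclusive"`
	EndExclusive   time.Time `json:"end_exclusive"`
}

func main() {
	meta := LogMetadata{
		Description:   "Example Log 2026h1",
		LogID:         "<base64 log id>",
		Key:           "<base64 SPKI>",
		MMDSeconds:    86400,
		SubmissionURL: "https://example-log.example/2026h1/",
		MonitoringURL: "https://example-log.example/2026h1/",
	}
	out, _ := json.MarshalIndent(meta, "", "  ")
	fmt.Println(string(out))
}
```

Because the log itself would serve this document, it can be generated from the same configuration the log actually runs with, which is the accuracy argument made above.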
As mentioned, RFC 6962 cannot be modified.
Hi Pierre,
I misunderstood what you meant by updating RFC 6962. I thought you wanted to change the text of that RFC, but you mean publishing a new RFC for CT v1.1 that would add a metadata endpoint. I feel there is definitely a place for CT v1.1, because it would be nice for the extra validation algorithms from v2 to live in the same document as the endpoints that are actually implemented. Such an update, however, would presumably want to fix multiple issues with RFC 6962 in one go and might take quite a while to develop, so I still believe that having a separate standard for the metadata endpoint (which might get rolled up into a future RFC update) is preferable.
Hi Filippo,

Looking at your sunlight client code, I think it actually reinforces my concerns about complexity distribution. While the LOC count appears modest, the client achieves simplicity primarily by delegating the hard parts: checkpoint verification is pushed to callers ("should have been verified by the caller"), and most of the cryptographic work happens in the torchwood dependency. This creates a thin veneer of simplicity while still requiring consumers to understand checkpoints, signatures, tile models, and verification flows.
I think we're looking at complexity from different angles. While your example demonstrates that a working Static CT client can be implemented, the cognitive complexity for log consumers remains significantly higher compared to RFC 6962.

With RFC 6962, a consumer can (illustrated in the sketch below):
- Make straightforward HTTP requests to well-defined endpoints
- Receive directly usable responses without additional processing
- Implement basic verification with minimal cryptographic knowledge

The Static API, even with your client library, requires consumers to:
- Understand tile-based data structures and their implications
- Implement tile fetching and assembly logic
- Handle the inherent complexity of reconstructing log state from static components

Your LOC comparison actually reinforces my point: while the server implementation becomes simpler (which may benefit operators), the aggregate complexity for the ecosystem increases because every consumer now needs more sophisticated client logic. RFC 6962 centralizes this complexity in the log operator, where it can be implemented once and shared by all consumers.
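To illustrate the first list: a minimal sketch of tailing an RFC 6962 log with nothing but the standard library. The log prefix is a placeholder, and STH signature verification and Merkle proof checking are deliberately omitted, so this is a sketch rather than a complete monitor.

```go
// Minimal sketch: fetch the signed tree head of an RFC 6962 log, then
// fetch a small range of recent entries. The log prefix is a placeholder.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type STH struct {
	TreeSize          uint64 `json:"tree_size"`
	Timestamp         uint64 `json:"timestamp"`
	SHA256RootHash    string `json:"sha256_root_hash"`
	TreeHeadSignature string `json:"tree_head_signature"`
}

type Entries struct {
	Entries []struct {
		LeafInput string `json:"leaf_input"`
		ExtraData string `json:"extra_data"`
	} `json:"entries"`
}

func getJSON(url string, v any) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return json.NewDecoder(resp.Body).Decode(v)
}

func main() {
	base := "https://ct.example.com/log/" // placeholder log prefix

	var sth STH
	if err := getJSON(base+"ct/v1/get-sth", &sth); err != nil {
		panic(err)
	}
	fmt.Println("tree size:", sth.TreeSize)
	if sth.TreeSize == 0 {
		return // nothing to fetch yet
	}

	// Tail the most recent entries. STH signature verification and
	// inclusion/consistency proofs are omitted from this sketch.
	var start uint64
	if sth.TreeSize > 10 {
		start = sth.TreeSize - 10
	}
	var ents Entries
	url := fmt.Sprintf("%sct/v1/get-entries?start=%d&end=%d", base, start, sth.TreeSize-1)
	if err := getJSON(url, &ents); err != nil {
		panic(err)
	}
	fmt.Println("fetched", len(ents.Entries), "entries")
}
```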
My concern isn't that Static CT is technically impossible to implement, but rather that this shift in complexity creates barriers for smaller log consumers and reduces the diversity of implementations in the ecosystem. The RFC 6962 ecosystem is already arguably slim in terms of independent implementations. With the additional complexity barriers of static APIs, I fear we'll see even less diversity, with most consumers simply defaulting to reference implementations rather than building their own clients.

Your sunlight client example actually illustrates this pattern - it's a thin wrapper that most developers will likely use as-is rather than understanding the underlying static model well enough to build alternatives. This creates a concerning dependency pattern where the ecosystem's health becomes tied to the maintenance and governance of a few reference libraries, ultimately undermining CT's decentralization goals by creating de facto centralization at the client library level.
Hi Ben,

I am specifically talking about CT v2, not static CT: https://www.rfc-editor.org/rfc/rfc9162.html
I realize this may be a controversial opinion given the current ecosystem's tendency to favor static CT, but I hope static logs won't ever replace classic CT. The API is extremely difficult to use, conceptually complex, and challenging to verify (requiring each client to essentially engineer an entire CT log implementation), and it shifts the implementation burden entirely onto clients. More fundamentally, static CT has always felt unnecessary to me because an efficient classic CT implementation is entirely possible (one of the reasons I wanted to work on CompactLog).
You're not solving the caching problem.
This discussion has shifted from LOC complexity to implementation time to server complexity to bandwidth costs, with each new concern emerging when the previous one is addressed.
This suggests we're working backwards from a conclusion rather than genuinely evaluating the tradeoffs.
If bandwidth efficiency is the primary concern, RFC 6962 could have evolved to address this - for example, serving chains with hash references to certificates rather than complete chains, achieving the same deduplication benefits.
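Purely to illustrate that alternative (none of this exists in RFC 6962 or in any draft I am aware of), an evolved get-entries response could reference chain certificates by hash, and clients would fetch and cache each certificate once:

```go
// Hypothetical, for illustration only: a get-entries-style entry that
// deduplicates chains by referring to issuer certificates by SHA-256
// hash instead of embedding the full extra_data chain.
package dedup

// EntryWithChainRefs is a hypothetical entry format; field names are
// assumptions, not part of RFC 6962 or any draft.
type EntryWithChainRefs struct {
	LeafInput string   `json:"leaf_input"`   // as in RFC 6962 get-entries
	ChainRefs []string `json:"chain_sha256"` // hex SHA-256 of each chain certificate
}

// Resolve fetches a certificate by hash via the supplied fetch function
// (e.g. a hypothetical get-cert-by-hash endpoint), consulting a local
// cache first, so each unique issuer is downloaded only once across
// many entries.
func Resolve(cache map[string][]byte, fetch func(hash string) ([]byte, error), hash string) ([]byte, error) {
	if der, ok := cache[hash]; ok {
		return der, nil
	}
	der, err := fetch(hash)
	if err != nil {
		return nil, err
	}
	cache[hash] = der
	return der, nil
}
```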
RFC 6962 responses can also be HTTP cached effectively since most monitors tail recent entries or request similar ranges, making cache hit rates comparable to static tiles. For point GETs or specific ranges, RFC 6962 can actually be more efficient since you fetch exactly what you need rather than entire tiles.
If static CT "doesn't go far enough yet" and won't solve the long-term scaling challenges, that suggests we shouldn't be splitting the ecosystem with an intermediate solution that will require further changes anyway. Also, I get a 8Gbps shared uplink to my home - treating 1Gbps as a meaningful constraint for enterprise CT infrastructure is not realistic.
I really appreciate your kind words about Merklemap and this community - it's been great to contribute. That's exactly why I care so much about these decisions. As someone building monitoring infrastructure, what I need most is stability and predictable APIs that don't require constant reimplementation as the ecosystem evolves.
On caching: RFC 6962 responses are just standard HTTP GET requests - they can be cached with normal HTTP caching headers without any special reverse proxy configuration.
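As a sketch of what that could look like on the log side (illustrative only, not taken from any existing log implementation), a front end can mark immutable read responses as cacheable with ordinary Cache-Control headers, so any standard HTTP cache in front of the log or inside a monitor can reuse them:

```go
// Illustrative only: wrap RFC 6962 read handlers so responses carry
// standard Cache-Control headers. Entries already merged into the tree
// never change, so get-entries can use a long max-age; get-sth changes
// frequently and gets a short one.
package main

import (
	"log"
	"net/http"
)

func cacheable(maxAge string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Cache-Control", "public, max-age="+maxAge)
		next.ServeHTTP(w, r)
	})
}

func main() {
	// backend is a placeholder for the log's real read handler.
	backend := http.NotFoundHandler()

	mux := http.NewServeMux()
	mux.Handle("/ct/v1/get-entries", cacheable("86400", backend))
	mux.Handle("/ct/v1/get-sth", cacheable("10", backend))
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```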
I appreciate the organizational challenges, but "keeping sponsors interested" shouldn't be the primary driver of protocol design decisions.
If static CT is an intermediate solution that "doesn't go far enough yet," fragmenting the ecosystem seems counterproductive. As a monitor, I want stability, not complexity that keeps changing.

I think we probably need to agree to disagree at this point, but I appreciate the thoughtful discussion.