Hi all,
Regarding question #2:
A CA must conform to the latest version of the BRs, but I
don't see anywhere that they aren't allowed to use the DV rules of
older BRs and record this as the BR version. In this case the CA
must make sure that what they're doing is allowed under both the old
and new versions of the BRs.
Imagine version 1 allows CAs to do (A,B,C) and version 2
allows (A,B,D).
If the records show that version 1 was used after version 2 was
released, then they must not have done (C) or (D), but (A,B) would
be fine.
If the CA wants to start doing (D), then the CA system must be
updated to record that version 2 of the BRs was used for DV.
That's my interpretation at least.
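The common-subset reasoning above can be sketched as simple set intersection. This is purely illustrative: the labels (A), (B), (C), (D) are the hypothetical methods from the example, not real BR provisions.

```python
# Illustrative sketch of the "common subset" reasoning, using the
# hypothetical labels from the example above.
v1_allowed = {"A", "B", "C"}   # hypothetical: permitted under BR version 1
v2_allowed = {"A", "B", "D"}   # hypothetical: permitted under BR version 2

# A CA still recording version 1 after version 2 is released must
# stay within the intersection of both rule sets.
safe_methods = v1_allowed & v2_allowed

assert safe_methods == {"A", "B"}
assert "C" not in safe_methods  # allowed only by v1
assert "D" not in safe_methods  # allowed only by v2
```

As more versions accumulate, the intersection can only shrink, which is exactly why staying on an ancient version becomes harder over time.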
CAs should be aware of upcoming changes, and breaking changes
often go into effect at a later date. E.g., (C) isn't outright
removed; rather the BRs would say "Effective [future date] a CA
MUST NOT do (C)". So I don't think that following a common subset
of DV rules after a new BR release would be challenging.
Technically a CA could stay on an ancient version of the BRs for
DV requirements, it just wouldn't be a good idea because it
becomes harder and harder to keep track of the common subset of
requirements, and at some point it could even become impossible.
E.g., version 1 says a CA MUST do (X) and version 101 says a CA
MUST NOT do (X). So recording an old version of the BRs for
longer than necessary would be bad practice and isn't in the CA's
interest.
Regards,
Dexter
Hi all,
I think the operative word in Section 3.2.2.4 is "used." The requirement is to record which BR version was used to validate every domain, not which version happened to be current when the clock ran.
"Used" is an operational term. It refers to the validation procedure actually applied. If a CA performed validation under rules that haven't changed between v2.2.2 and v2.2.3, it didn't use v2.2.3. It used the version whose logic it implemented.
The MRSP conformance obligation and the Section 3.2.2.4 recording obligation are distinct. Conformance to the latest version means deploying its logic, and the log should reflect what logic ran.
The corollary matters, though. A CA whose validation code hasn't been updated to implement a new version cannot truthfully log that version. Logging the current version when the code hasn't caught up is a false record. It records what the CA was supposed to conform to, not what logic actually ran.
This interpretation is verifiable by an auditor, assuming a functioning change management regime is in place. A CA whose stamps are inconsistent with its deployment history has a substantive problem, not a record-keeping one.
Where I'd push back on Dexter's framing is on different grounds. Recording an older version and arguing the validation was permissible under both isn't the same thing as recording the version you actually implemented. The log should be a statement of fact about what governed the issuance, not a retroactive argument about compatibility with both versions.
The three incidents Aaron cited resolve differently under this reading. If those CAs' validation logic hadn't been updated, that's both a substantive compliance failure under MRSP and a recording failure, two distinct findings. If the logic was current but the version stamp wasn't, that's a record-keeping error of a different character, and one the audit record can help distinguish.
The operational gap Aaron describes largely disappears once "used" is understood as referring to deployed logic rather than clock time. The remaining ambiguity around effective dates and timezones is a CA/BF drafting problem worth fixing, and standardizing on UTC would largely close it.
That said, a recording failure that stems solely from timezone ambiguity in the effective date, where the underlying validation logic was current, seems like the kind of thing that should be a note in an audit, not something that would cause someone to file an incident report or lose their WebTrust seal.
Thanks, Ryan
I would like to add a brief perspective to the discussion regarding Section 3.2.2.4 and the obligation to record the relevant BR version used for domain validation.
My recollection of the original intent behind this provision was that the validation method number and the BR version number were meant to function together to identify the validation method used. A validation method might evolve slightly while retaining the same method identifier, and the BR version number would provide additional precision about the exact rule set used for validation. In that sense, the BR version number was intended to complement the method number by identifying the governing text associated with that implementation. And it was not anticipated that CAs would revise domain validation code and logging processes every time a new version of the BRs was adopted.
In practice, however, the Forum has often chosen to obsolete prior validation methods and introduce new method numbers when changes are made. As a result, the method identifier itself now typically captures the substantive change. That evolution in drafting practice arguably reduces the independent utility of the BR version number as a proxy for method-level differences.
This is distinct from the MRSP obligation to conform to the latest version of the TLS BRs. If a new version introduces substantive changes to validation requirements, both implementation and recorded version must reflect those changes. That is not in dispute.
The narrower question is whether Section 3.2.2.4 requires version recording to track the publication state of the consolidated BR document at a given moment in time, even where no changes were made to the applicable validation method and no implementation changes occurred. If ambiguity exists on that point, it may be useful to clarify the language prospectively to ensure auditability and consistent interpretation.
I would be open to filing an MRSP GitHub issue to tighten our interpretation of Section 3.2.2.4 and, if appropriate, revise the MRSP language to ensure operational expectations are precise. The goal should be accurate record-keeping tied to governing validation logic, without creating unnecessary implementation churn where no substantive requirements have changed.
Best regards,
Ben
To view this discussion visit https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/cc3a4e0d-5483-444e-8b83-a25ed2290f0en%40mozilla.org.
Thanks, Aaron, for initiating this discussion. You are correct that the relevant requirement we planned to cite in our recent incident is from section 3.2.2.4 (and also section 3.2.2.5 for IP validation). While we are working on a detailed report, we’ll offer some clarifications and thoughts relevant to this thread:
Our interpretation of BR Section 3.2.2.4 was mostly in line with Aaron’s - that the log should reflect the BR version in effect at the time of validation. Our incident report was filed because we discovered that the BR versions we were recording were outdated.
With the interpretation that Dexter gave, our situation might not have been considered a reportable incident at all.
We agree with the concerns raised, particularly under Aaron's interpretation, that achieving instantaneous compliance the moment a new BR version becomes effective is operationally very challenging.
On Ryan’s interpretation - the issue is that the analysis and any necessary code changes to comply with the substance of new BR requirements must logically be completed before the effective date. However, the separate act of changing the version number that is logged can only occur at or after the new version becomes effective.
This seems to imply that the mechanism for updating the logged BR version string must be decoupled from other code rollouts related to substantive BR changes. To meet a strict interpretation, a CA would likely need an independent process to monitor for newly effective BR versions and update a configuration value for the logging system, separate from regular code deployment cycles. This is further complicated by typical rollout schedules (e.g., weekly deployments) and potential production freezes, which can easily introduce delays.
Building and maintaining a system solely to ensure the logged BR version string flips at the precise moment of effectiveness, detached from the rollout of any actual logic changes, feels like it could become a significant overhead. In our day-to-day operations, we have not felt the need to query or filter validation data based on the BR version in force.
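To make the overhead Gurleen describes concrete, a strict reading would require something like the following: a version string driven by a configuration schedule that flips at the effective moment, independent of code deployments. This is a hypothetical sketch; the version numbers, method identifier, and all function names are illustrative, not a real CA system.

```python
# Hypothetical sketch of the decoupled mechanism described above: the
# logged BR version comes from a configured effective-date schedule
# that can change independently of validation-code deployments.
# All versions, dates, and names here are illustrative assumptions.
import datetime

# Hypothetical schedule maintained by a separate process that watches
# for newly effective BR versions: (effective UTC time, version string).
BR_VERSION_SCHEDULE = [
    (datetime.datetime(2024, 3, 15, tzinfo=datetime.timezone.utc), "2.0.2"),
    (datetime.datetime(2024, 8, 5, tzinfo=datetime.timezone.utc), "2.0.5"),
]

def br_version_in_effect(at: datetime.datetime) -> str:
    """Return the BR version string effective at a given UTC time."""
    current = BR_VERSION_SCHEDULE[0][1]
    for effective, version in BR_VERSION_SCHEDULE:
        if at >= effective:
            current = version
    return current

def record_validation(domain: str, method: str,
                      when: datetime.datetime) -> dict:
    # The log entry stamps whichever version is in effect at validation
    # time, regardless of when validation code was last deployed.
    return {"domain": domain, "method": method,
            "br_version": br_version_in_effect(when)}

entry = record_validation(
    "example.com", "3.2.2.4.7",
    datetime.datetime(2024, 9, 1, tzinfo=datetime.timezone.utc))
```

Note that the schedule itself still has to be populated by a human or a monitoring job before each effective moment, which is precisely the independent process, and the independent failure mode, being discussed.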
We are very interested in the community's thoughts and suggestions on:
1. How the Baseline Requirements could be clarified regarding the expectations for logging the BR version, especially concerning the timing of updates. Is there an acceptable propagation delay?
2. As Ben said: the forum has often chosen to obsolete prior validation methods and introduce new method numbers when changes are made. Based on this, should we consider dropping the requirement of keeping a record of the version, and simply rely on the creation timestamp of the log entry since CAs must always apply the current BR version anyway?
3. A CA is expected to conform to its Certificate Policy and Certification Practice Statements, which are versioned independently of the BRs; would it make more sense to version validations with the CP/CPS versions in force instead of the BR versions?
We believe that the goal is to ensure audits can trace the rules applied, and we hope to find a solution that is both effective and implementable without excessive burden.
Hi Ryan, Gurleen, Aaron, et al,
Concerns about audit consistency are well taken. The compliance model works best when the trigger for action is objective and predictable, not dependent on individualized assessments by either CAs or auditors.
I agree with Ryan that we should not create a regime where auditors must independently decide, revision by revision, whether a particular BR update was “substantively relevant” to a given DV method. That kind of subtle technical determination is unlikely to be applied uniformly.
At the same time, I want to clarify what I was — and was not — suggesting.
My main point was that, in my opinion, Section 3.2.2.4 was originally conceived to track adherence to the specific language governing each validation method, rather than to require mechanical synchronization with every new version of the TLS BRs where no changes were made to the applicable method. The requirement states that CAs “SHALL maintain a record of which domain validation method, including relevant BR version number, they used to validate every domain.” I read that as outcome-oriented: for each validation event, the CA must be able to determine which validation method and which BR version governed that validation.
The text requires record-keeping but does not expressly prescribe the storage architecture, nor does it explicitly require that a literal BR version string appear in every transaction-level log entry, provided that the CA’s record-keeping system preserves a reliable and reproducible mapping to the governing BR version. In that sense, the core objective is traceability of the governing rule set — not publication-timestamp synchronization for its own sake.
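Ben's point that the text requires a reproducible mapping rather than a literal version string in every entry can be illustrated with a sketch: a CA's change-management records alone can answer "which BR version governed this validation?" on demand. The deployment data and version numbers below are hypothetical.

```python
# Illustrative sketch: deriving the governing BR version for any
# validation event from change-management records, rather than
# storing a literal version string per log entry. The deployment
# epochs and version strings are hypothetical.
from bisect import bisect_right

# Hypothetical change-management record, sorted by deployment time:
# (Unix epoch of deployment, BR version the deployed logic implements).
DEPLOYMENTS = [
    (1700000000, "2.0.1"),
    (1710000000, "2.0.2"),
]
_EPOCHS = [epoch for epoch, _ in DEPLOYMENTS]

def governing_version(validation_epoch: int) -> str:
    """Map a validation timestamp to the BR version whose logic ran."""
    idx = bisect_right(_EPOCHS, validation_epoch) - 1
    if idx < 0:
        raise ValueError("validation predates recorded deployments")
    return DEPLOYMENTS[idx][1]
```

Under this reading, the auditability requirement is satisfied as long as the deployment history is itself reliable, which ties the record directly to the change-management regime Ryan mentioned.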
I think Gurleen’s and others’ operational concerns have force in a scenario where a new BR version introduces no changes to the applicable DV method, yet a strict interpretation would still require the CA to update its logging of the BR version number immediately upon the new version becoming effective. In such a case, logging changes would not correspond to any change in the deployed validation logic.
Certainly, if a new BR version changed the validation method, that would be a different story. But where the deployed validation logic is current and traceable, and the only delta is the publication of a new version of the TLS BRs, I don’t think the intent of §3.2.2.4 was to require changes to validation systems and logging solely to track that publication event.
Ultimately, I agree with Ryan on the principle that we want requirements that can be consistently implemented and verified, supported by language that reduces ambiguity and enables reliable CA oversight. However, if our intent is strict synchronization with each effective BR version regardless of method changes, we should state that clearly as a bright-line rule. And, if the intent is to ensure traceability of the governing validation logic, then that should likewise be stated more clearly.
Thanks,
Ben
Ben, that aligns with my understanding of that requirement as well. I don’t think having the wrong version in the code is really the incident of interest in any of these issues. That’s one option for how to implement the requirement; I would be surprised if it were the only way a CA knows they reviewed relevant BR changes and that their code correctly aligns with the requirements.
Aaron, I generally agree, but one way the version is relevant is that the validation methods aren’t self-contained. Most of them refer to other sections of the document that can be updated without changing the method number.
I do think it’s interesting that we are having two conversations about CP/CPS contents and how CAs describe they comply with the Baseline Requirements. Some of what we’ve discussed here I think overlaps the other discussion: https://groups.google.com/a/ccadb.org/g/public/c/iZg_253IZfo.
Maybe one way we could make the expectations clearer is to update the existing language
From
“CAs SHALL maintain a record of which domain validation method, including relevant BR version number, they used to validate every domain.”
To something like
“CAs MUST maintain a record of which domain validation method they used to validate every domain. CAs MUST maintain a record that they have reviewed their implementation of domain validation each time the BRs are updated to ensure it continues to meet the latest requirements.”
Remove it, and an auditor reconstructing what rules governed a validation event has to work from a timestamp and inference rather than a recorded fact. That's a meaningful loss of audit fidelity.
The timestamp consistency argument makes me uncomfortable. Applied broadly, it would justify removing every discrete data point the BRs require CAs to log, since auditors can always work backwards from a timestamp and a changelog.
On the DNSSEC example, a version stamp gives an auditor a bounded and deterministic starting point. They know exactly which two document versions to compare and can verify the CA's behavior against that specific set of changes. Without it, they first have to determine which version was in effect at the moment of validation, which is exactly the problem this thread started with: effective dates without times, timezone ambiguity, publication lag. That reconstruction has to happen for every validation event they want to scrutinize, and it has to happen with the same ambiguity CAs were dealing with operationally. And the assumption that auditors would do that uniformly across the auditor set is a significant one, which does not align with the depth we have seen auditors apply historically.
The reality is that the history of this ecosystem is not short on examples where compliance gaps were visible in the record, long before anyone acted on them. We should be cautious about removing data points that can prompt harder questions.
If the logged version should reflect deployed validation logic rather than publication date, say that in the BRs. That addresses the operational concern Gurleen raised, keeps the audit value intact, and gives auditors something concrete to push on. That's the fix worth pursuing.
Ryan
> The timestamp consistency argument makes me uncomfortable. Applied broadly, it would justify removing every discrete data point the BRs require CAs to log, since auditors can always work backwards from a timestamp and a changelog.
> On the DNSSEC example, a version stamp gives an auditor a bounded and deterministic starting point. They know exactly which two document versions to compare and can verify the CA's behavior against that specific set of changes.
> Without it, they first have to determine which version was in effect at the moment of validation, which is exactly the problem this thread started with: effective dates without times, timezone ambiguity, publication lag.
I agree, Aaron. I also want to make sure we are focusing on operational and security outcomes, not audit-optimization-driven ones. This is why I proposed we update that requirement to be clearer. I’m also not opposed to removing it, given that we are now being more aggressive at deprecating methods. If I recall correctly, another reason we wanted to add it was so that we would be able to understand how much each validation method is used, to help gauge the impact of deprecation. It’s not actually clear to me how you could reasonably issue a cert without knowing its validation method, so requiring this may be moot.
The pressure in this thread is the same pressure that shows up repeatedly in this ecosystem: reduce specificity in the name of operational efficiency. Sometimes that pressure is legitimate. Sometimes the requirement being relaxed is genuinely redundant. But the cumulative effect of that pattern will be an environment that is progressively harder to audit and a CA ecosystem that is progressively less accountable.
That would not matter much if our audit regime were more robust than it is today. WebTrust and ETSI audits are point-in-time, documentation-focused engagements that rarely involve deep technical inspection of deployed systems. Root programs have had to step in repeatedly because clean audit opinions were not reflecting operational reality. That is not a criticism of auditors individually, it is a structural problem.
Ultimately, this is a decision root programs will have to make. Optimize for CA operational flexibility and trust that CAs will make the right call, or optimize for accountability by preserving the signals that give auditors something concrete to work with. Removing requirements like this one makes the first choice easier. It makes the second choice harder. We should at least be clear that is the tradeoff we are making.
Ryan
--
You received this message because you are subscribed to the Google Groups "dev-secur...@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dev-security-po...@mozilla.org.
To view this discussion visit https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/5888dc3f-7ce6-4037-adc7-4c3d67cf684an%40mozilla.org.
I realize this thread has gone quiet, but I wanted to add some context that I now realize I had been taking for granted and that others may not remember, especially since not everyone here was active in the community when this requirement was introduced.
Ballot 190 (https://cabforum.org/2017/09/19/ballot-190-revised-validation-requirements/) passed unanimously in September 2017. The version number requirement was a direct response to a specific problem that kept coming up. CAs believed their validation logic was correct and compliant, and when the community or an auditor concluded otherwise, nobody could determine which certificates had been issued using validation that didn't meet the requirements, how far back the problem went, or how many certificates needed to be revoked and reissued.
It was introduced with two fundamental goals: scope assessment and process verification.
My earlier interpretation of "used" as referring to deployed logic rather than publication date stems directly from this history. Since the goal was scope assessment and process verification, "used" has to mean the logic that actually ran, not the version that happened to be current on the calendar.
Which brings us back to what raised this thread in the first place. Discovering that your version logging is out of sync with what you are issuing would not normally be something I would expect to trigger a halt to issuance. Your change management process would serve as a mitigating control, capturing that you did in fact deploy the right validation logic. I would expect a CA to stop issuance only if they discovered they were actually running the wrong code, and to roll out the logging fix separately when it was safe to do so.
Ryan