Evaluation in the SCAP architecture


Charles Schmidt

Jun 25, 2020, 1:43:35 PM
to scap-dev-endpoint
Hi Everyone,

We wrapped up our most recent call discussing the role of evaluation within the SCAP data collection architecture. I feel we made some good progress, but there remain some open questions. I'm kicking off this thread to try to make some progress in our off week. (Because I know everyone will want to spend the holiday week thinking about data collection architectures.)

During the call, it seemed like there is at least one point of consensus: during ongoing monitoring, there is a need for Collectors to perform "filtering" actions. When filtering, the Collector distinguishes measurement changes that the Application cares about from those that it does not, and suppresses reporting of the latter. This is done to limit the volume of message traffic and data processing to just the measurement updates that might be relevant to the Application's interests.
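To make the filtering idea concrete, here is a minimal sketch. All of the names and the criteria structure are illustrative only - in practice the criteria would come from evaluation instructions in the content (e.g., OVAL State elements), not from anything defined here:

```python
# Illustrative sketch of Collector-side filtering during monitoring.
# The predicate stands in for evaluation instructions supplied by the
# Application; none of these names come from the SCAP specifications.

def filter_changes(changes, matters_to_application):
    """Report only the measurement changes the Application cares about."""
    return [c for c in changes if matters_to_application(c)]

# Hypothetical example: the Application only cares about password-policy
# changes, so the wallpaper change is suppressed rather than reported.
changes = [
    {"item": "min_password_length", "old": 12, "new": 8},
    {"item": "wallpaper", "old": "a.png", "new": "b.png"},
]
relevant = filter_changes(changes, lambda c: c["item"] == "min_password_length")
```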

Beyond that, however, there seem to be some differences of opinion:
- How would "filters" best be defined to govern when changes were reported? The original proposal was that the presence of evaluation instructions within content (e.g., OVAL State elements) would indicate a desire to distinguish between "relevant" and "not-relevant" changes.

- In some cases, the value of measurements only comes from certain combinations of measurements. For example, when trying to determine whether a particular version of an application is installed, it is often necessary to check multiple system state elements. Individually, these elements have little or no value - it is only together, and in certain combinations, that they produce information that has significance to the enterprise. There seemed to be some agreement that, in such a circumstance, the "collected data" was the interpretation, rather than the raw data that was interpreted. If this is the case, is there any reason to limit the circumstances where a Collector returns interpreted rather than raw values? Similarly, is there any reason to limit the return of interpreted data to just monitoring activities, or should it be permissible during point-in-time assessments?
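The "installed version" example above can be sketched as follows. The element names, version string, and patch check are hypothetical stand-ins, not real checks - the point is only that several individually uninteresting raw values combine into one interpreted answer:

```python
# Illustrative sketch: several raw state elements have little value on
# their own, but a particular combination of them yields the interpreted
# result "is the vulnerable version of the application installed?".

def is_vulnerable_version_installed(raw):
    # Hypothetical combination logic; real content would express this
    # via check instructions (e.g., OVAL criteria), not Python.
    return (
        raw.get("registry_key_present") is True
        and raw.get("exe_version", "").startswith("2.3.")
        and raw.get("patch_kb12345_applied") is False
    )

raw_measurements = {
    "registry_key_present": True,
    "exe_version": "2.3.7",
    "patch_kb12345_applied": False,
}
# The Collector could return this single interpreted boolean instead of
# the three raw measurements.
interpreted = is_vulnerable_version_installed(raw_measurements)
```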

- How much evaluation should Collectors be required to support? (Note that support for evaluation just means the Collector is able to do the evaluation when requested, not that the Collector is required to perform that evaluation at all times.) It seems probable (although this was not explicitly agreed) that OVAL evaluation (i.e., the evaluation of OVAL State values against collections guided by a corresponding OVAL Object) makes sense as a way to support filtering and data interpretation. Should XCCDF (or other) evaluation capabilities be required? David S. felt that Collectors would need to be able to interpret XCCDF instructions to guide data collection and, given that an XCCDF interpreter would likely include the functionality anyway, it would make little sense to prohibit Collectors from performing XCCDF evaluation as well. Adam disagreed and felt that evaluation should wait until the collected data was returned to the Application.

There are probably other considerations on this topic as well. Please respond with your thoughts on this.

Charles

Charles Schmidt

Jun 25, 2020, 3:21:49 PM
to scap-dev-endpoint
Hi again,

Having set up the discussion in what I hope was a relatively neutral way, I want to share a few of my opinions on this topic as well as the assumptions that drive those opinions:

I think there is value in allowing Applications to offload evaluation, where the Application receives evaluated results rather than raw data. I don't think this is incompatible with Adam's vision of having collection and evaluation happen separately - I just feel that it should be possible for the data to be evaluated prior to return to the Application. I don't think we do anyone any favors by creating a situation that requires multiple actions to be explicitly taken by an Application (once to perform collection and once to perform evaluation) when the invocation of the first action could address the second automatically.

I think that any evaluation should be explicitly controlled by the instructions sent from the Application. (This might be embedded in the content created by the content author, based on references to some remote file or service, or explicitly configured by the Application in the invocation parameters, but however it is done, when the Application kicks off an assessment/monitoring, that invocation should unambiguously indicate where evaluations should occur and what those evaluations do.) I do not think we ever want to have situations where the SCAP architecture independently determines that certain data should be filtered or interpreted in the absence of explicit indication of this in the invocation.
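As a sketch of what "explicitly controlled by the invocation" could look like: the field names and values below are entirely hypothetical (nothing here is drawn from a SCAP specification), but they show an invocation that leaves no ambiguity about where evaluation occurs:

```python
# Hypothetical invocation parameters. The Application states, up front,
# where evaluation should occur and what should be returned; the
# architecture follows these instructions and never decides on its own.

invocation = {
    "content_ref": "oval:example:def:1",  # illustrative content identifier
    "mode": "monitoring",                 # or "point_in_time"
    "evaluate_at": "collector",           # or "application" for raw data only
    "return": "evaluated_results",        # or "raw_data"
}

def collector_should_evaluate(inv):
    """A Collector evaluates only when the invocation says so."""
    return inv.get("evaluate_at") == "collector"
```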

On a similar note, for whatever set of evaluation functionality we agree that Collectors need to have in order to perform appropriate filtering and interpretation, I feel that Applications should be able to invoke that functionality as they see fit. I.e., I don't think we should have any situation where an Application might ask to employ existing filtering capabilities where the SCAP architecture says "no - while the functionality is present to do this, the architecture does not consider this to be a situation where the functionality is appropriate." In short, I feel that the only question that we should be considering is "what evaluation functionality is appropriate" and the question of "when should evaluation occur" should be entirely at the discretion of the Application.

I'll note that "wrapped" instructions (as discussed in the Content Wrapping thread: https://groups.google.com/a/list.nist.gov/forum/#!topic/scap-dev-endpoint/stKDaB_zOm0) might end up including evaluation instructions that get sent to PCEs. Since Collectors are not expected to understand either the instructions or the results, the activities a PCE takes, including whether any evaluation is done, will be outside of any constraints imposed by the SCAP architecture design. (Note that, when monitoring, unless the Collector had at least some ability to interpret results, this would mean that wrapped results would never be filtered out unless they were identical to the preceding wrapped result.) I don't think this influences the discussion one way or the other - I just note it to observe that there are some situations where the Collector won't know whether it is returning raw or interpreted results.

I have waffled back and forth a few times on this point, but in the end I think I agree with David S. that Collectors need to be able to ingest and parse XCCDF content. The Manager only needs to understand XCCDF enough to find and extract the platform applicability information. I don't think it makes sense for it to implement a full XCCDF parser. I think we do want an Application to be able to submit XCCDF benchmarks to guide assessments, which leaves the Collector to perform the full interpretation and parsing of the XCCDF instructions. Given that, I (again) agree with David that it would be reasonable for XCCDF Results to be an option for return since one cannot have a standard-compliant XCCDF interpreter that lacks the ability to generate XCCDF results. This then implies that Collectors would support XCCDF evaluation. I think that we want the Application to be able to control when XCCDF evaluation occurs (vs. just OVAL (or other supported check) evaluation, vs. just data collection), but I agree with David that prohibiting it would effectively mean taking functionality that was already present and disallowing it for reasons that have nothing to do with functionality.

I want to emphasize that nothing above leads to loss of desired data or forces Applications to do evaluation at the edge if they don't want to. It simply gives Applications access to functionality that already is required on Collectors for other reasons and the ability to use that functionality as a way to manage the volume of information they receive during an assessment.

Charles

David Solin

Jun 25, 2020, 6:28:14 PM
to Charles Schmidt, scap-dev-endpoint
Seeing as how you agree with me, I also agree with you!

However, I am not completely certain that Adam disagrees with us on this particular point of XCCDF evaluation.  He can explain himself if he does.  But, there are really two kinds of “evaluation”, and I think perhaps Adam might be speaking philosophically and only wants to restrict where one kind of evaluation takes place.

Let me define two terms:
1) Logical Evaluation - this is the determination of a logical result (as in, first order rule logic), given system characteristics and rules as input.  For example: does some piece of telemetry cross some threshold?

2) Functional Evaluation - this is the determination of compliance with respect to policy.  For example, if the telemetry threshold has been crossed, does that violate some regulatory guideline?
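A minimal sketch of the two definitions, using David's telemetry-threshold example (the function names, threshold, and policy flag are illustrative only):

```python
# Logical Evaluation: a logical result from system characteristics plus
# a rule. Functional Evaluation: mapping that result onto policy. The
# names and values here are hypothetical, not from any specification.

def logical_eval(telemetry_value, threshold):
    """Logical: does this piece of telemetry cross the threshold?"""
    return telemetry_value > threshold

def functional_eval(threshold_crossed, policy_forbids_crossing):
    """Functional: does the logical result violate the guideline?"""
    if threshold_crossed and policy_forbids_crossing:
        return "non-compliant"
    return "compliant"

# Logical Evaluation can run at the edge (Collector)...
crossed = logical_eval(telemetry_value=87, threshold=80)
# ...while Functional Evaluation runs back at the Application.
verdict = functional_eval(crossed, policy_forbids_crossing=True)
```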

If Adam contends merely that Functional Evaluation belongs in the Application (formerly “Evaluation Element”), I would say that I agree.  Functional Evaluation can conceivably be guided by inputs that are either themselves the results of Logical Evaluations, or system characteristics (to which Logical Evaluations can be applied), or a combination of the two.

So, while Logical Evaluation can certainly also happen at the Application level, Charles and I believe it should also be allowed to happen at the Collector level. This gives us the flexibility to define some set of Logical Evaluation criteria, resolve them in a distributed fashion at the edge, and then make Functional assessments of those results back at the Application layer. This capability will enable a tremendous degree of scalability that would otherwise be impossible if all Logical Evaluation could only happen at the Application layer.

Hopefully this is a distinction whose illustration will lead to a consensus, rather than new confusion.


Adam Montville

Jun 26, 2020, 10:48:06 AM
to David Solin, Charles Schmidt, scap-dev-endpoint
Good morning :-) 

I can get behind the distinction between logical and functional evaluation, as David has defined them. I also want to go back to the reason I made my initial comments - it seemed to be the case that we were going to require functional evaluation at the edge and disallow the flexibility of more decoupled solutions. If nothing we are doing now precludes such decoupling, I think we’re in good shape. I’m considering the need for state data beyond posture assessment (see the other thread on the OCA architecture diagram).

I think we need to recognize also that evaluation results rarely provide sufficient information operationally. Operators, security/compliance folks, and auditors all seem to want as much evidence as they can have to support the evaluation result - so they’re going to want the data used to make an evaluation decision. 

Kind regards,

Adam

Charles Schmidt

Jun 26, 2020, 1:32:32 PM
to scap-dev-endpoint
Hi all,

To Adam's comment - I think we are all in agreement that we don't want to *require* evaluation in any particular situation. I think that there is agreement that evaluation should be an option that is controlled by the Application and the SCAP architecture follows those instructions rather than making its own determination as to whether to evaluate. Applications will always have the option of saying "only send me the data". We'll have to figure out what those controls look like, but it sounds like we might have agreement on this.

At the risk of undermining that consensus, however: I'm not sure the distinction between logical and functional evaluation has significant practical implications for the design. I think the same evaluation might be "logical" in some circumstances and "functional" in others. I also feel that the evaluation mechanics that Collectors need to support for logical evaluation will be the same mechanics used in functional evaluations. I do think looking at logical evaluations is a good way to determine the baseline functionality we need to require in Collectors (and maybe that is what you intended), but the distinction between logical and functional evaluation is irrelevant with regard to the use of that capability.

The bottom line is that I think our main questions are:
1) What evaluation capabilities are needed to support any reasonable "logical evaluation"?
2) What evaluation capabilities are tied to those needed logical evaluation capabilities? (I think that there is a valid argument that XCCDF check evaluation needs to be supported for efficient complex-check evaluation, and I believe any interpreter that can do that will also be able to compute a total XCCDF score result, even though computing that score is probably never a logical evaluation.)
3) How do Applications instruct the SCAP architecture to perform *any* of these evaluations that it is capable of (regardless of whether what the Application is requesting would reasonably be considered logical or functional evaluation)?

Charles

David Solin

Jun 26, 2020, 1:41:38 PM
to Charles Schmidt, scap-dev-endpoint
There is no technical distinction between the mechanics of functional vs. logical evaluation.  The distinction only manifests in how the result translates into something experienced by an end-user.  The simplest form of functional evaluation would simply be to pass through the result of a logical evaluation that occurred on the edge.

As for what comprises the set of logical evaluations SCAP should support: at a minimum, we have the logical evaluations already extant in the specifications — production of OVAL Results and XCCDF results (and ARF).  Maybe also OCIL results?
