Hi Everyone,
We wrapped up our most recent call discussing the role of evaluation within the SCAP data collection architecture. I feel we made some good progress, but there remain some open questions. I'm kicking off this thread to try to make some progress in our off week. (Because I know everyone will want to spend the holiday week thinking about data collection architectures.)
During the call, it seemed like there was at least one point of consensus: during ongoing monitoring, there is a need for Collectors to perform "filtering" actions. When filtering, the Collector distinguishes measurement changes that the Application cares about from those that it does not, and suppresses reporting of the latter. This is done to limit the volume of message traffic and data processing to just the measurement updates that might be relevant to the Application's interests.
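To make the filtering idea concrete, here is a minimal sketch of what a Collector might do, assuming the evaluation instructions arrive as a simple set of expected field values (standing in for something like an OVAL State). All names and data shapes here are illustrative, not taken from any SCAP specification:

```python
# Hypothetical Collector-side filtering sketch. A "state" is a dict of
# expected field values standing in for evaluation instructions (e.g.,
# an OVAL State); measurements are dicts keyed by item identifier.

def matches_state(measurement, state):
    """True when a measurement satisfies the supplied evaluation
    instructions (every expected field has the expected value)."""
    return all(measurement.get(field) == expected
               for field, expected in state.items())

def filter_changes(previous, current, state):
    """Report only changes that move a measurement into or out of the
    'relevant' set defined by the state; suppress everything else."""
    reports = []
    for key, new_value in current.items():
        old_value = previous.get(key)
        if old_value == new_value:
            continue  # no change at all; nothing to report
        was_relevant = old_value is not None and matches_state(old_value, state)
        is_relevant = matches_state(new_value, state)
        if was_relevant or is_relevant:
            reports.append((key, old_value, new_value))
    return reports
```

For example, with a state of `{"version": "2.4.1"}`, an item changing from version 2.4.0 to 2.4.1 would be reported (it became relevant), while an unrelated item drifting from 1.0 to 1.1 would be suppressed.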
Beyond that, however, there seem to be some differences of opinion:
- How would "filters" best be defined to govern when changes were reported? The original proposal was that the presence of evaluation instructions within content (e.g., OVAL State elements) would indicate a desire to distinguish between "relevant" and "not-relevant" changes.
- In some cases, the value of measurements only comes from certain combinations of measurements. For example, when trying to determine whether a particular version of an application is installed, it is often necessary to check multiple system state elements. Individually, these elements have little or no value - it is only together, and in certain combinations, that they produce information that has significance to the enterprise. There seemed to be some agreement that, in such a circumstance, the "collected data" was the interpretation, rather than the raw data that was interpreted. If this is the case, is there any reason to limit the circumstances where a Collector returns interpreted rather than raw values? Similarly, is there any reason to limit the return of interpreted data to just monitoring activities, or should it be permissible during point-in-time assessments?
- How much evaluation should Collectors be required to support? (Note that support for evaluation just means the Collector is able to do the evaluation when requested, not that the Collector is required to perform that evaluation at all times.) It seems probable (although this was not explicitly agreed) that OVAL evaluation (i.e., the evaluation of OVAL State values against collections guided by a corresponding OVAL Object) makes sense as a way to support filtering and data interpretation. Should XCCDF (or other) evaluation capabilities be required? David S. felt that Collectors would need to be able to interpret XCCDF instructions to guide data collection and, given that an XCCDF interpreter would likely include the functionality anyway, it would make little sense to prohibit Collectors from performing XCCDF evaluation as well. Adam disagreed and felt that evaluation should wait until the collected data was returned to the Application.
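The "interpreted rather than raw data" idea in the second bullet can be sketched as follows. The check names and target version are invented for illustration; the point is only that several individually uninformative measurements combine into one result the Application cares about:

```python
# Hypothetical sketch of a Collector returning an interpretation rather
# than raw data: several system state elements are combined to answer
# "is application version 2.4.1 installed?". Field names are illustrative.

def interpret_install_check(raw):
    """Combine individually uninformative measurements into a single
    interpreted result."""
    installed = (
        raw.get("registry_key_present")
        and raw.get("exe_exists")
        and raw.get("exe_version") == "2.4.1"
    )
    # The Collector returns the interpretation (one boolean), not the
    # three raw values it was derived from.
    return {"app_2_4_1_installed": bool(installed)}
```

Under this model, the question in the bullet above amounts to whether the Collector may (or must) run `interpret_install_check` itself, or must instead ship the raw dictionary back to the Application for evaluation there.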
There are probably other considerations on this topic as well. Please respond with your thoughts on this.
Charles