SCAP-today-/OVAL-centric


Adam Montville

Apr 24, 2020, 8:41:47 AM
to scap-dev-endpoint
Good morning Everyone,

Reference Charles’ summary here: https://groups.google.com/a/list.nist.gov/forum/#!topic/scap-dev-endpoint/CozEcgf1pYU

One of the concerns we have based on existing discussions stems from what seems to be an expectation that the SCAP specifications aren’t going to change very much and that OVAL is front and center as the checking language. This is related to the third and fourth bullets in my other note, “A seemingly monolithic approach”.

I think we’ve all agreed that there could be several different types of PCSs that may rely on different languages under the hood. OVAL is just one of those, so if we are relying on any OVAL-specific element to be used across the ecosystem globally, we’re probably asking for trouble.

Yes, at some point there needs to be a normalization, which is something that I think has been discussed, but possibly not recognized as the first-class problem it seems to be.

Kind regards,

Adam

Schmidt, Charles M.

Apr 24, 2020, 9:55:31 AM
to Adam Montville, scap-dev-endpoint
Hi Adam,

Thank you for sending these out. I'll take a stab at starting a response on this one.

The SCAP architecture that we designed has two core requirements with regard to checking systems:
1) Every check needs a unique identifier. These identifiers are what are used in the Repositories as keys to stored results. When someone wishes to look up previous results, they query on the identifier of the check that would produce those results.
2) The type of each check needs to be clearly identified. This is needed so the Collector can route instructions to the PCE/PCX needed to actually run the check and gather the data. The Collector also needs to be able to recognize when it receives check instructions for which it has no associated PCE/PCX to execute them.
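The two requirements above can be sketched in a few lines. This is a toy illustration only; the dictionary-backed Repository and the function names are invented here, not drawn from any SCAP v2 design document.

```python
# Toy sketch of requirement 1: the check's unique identifier is the key
# under which the Repository stores and retrieves results.
# (All names here are illustrative, not from any specification.)
repository = {}

def store_result(check_id, result):
    """Store a result keyed by the check's unique identifier."""
    repository[check_id] = result

def lookup_previous(check_id):
    """Query previous results by the identifier of the check that produced them."""
    return repository.get(check_id)

store_result("oval:example:tst:1", "pass")
print(lookup_previous("oval:example:tst:1"))
```

Requirement 2 (explicit check typing for routing) is illustrated in the wrapper examples later in the thread.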

In the design proposal, I had suggested that the OVAL structure could serve as a "wrapper" for any type of check. The idea was that because OVAL Tests have both unique identifiers and an explicit indication of the type of check they are (e.g., registry test), they easily meet these needs. If someone wished to use some other checking mechanism, they could just create an OVAL test that served as a wrapper - the test would provide the ID and check type information, and the rest of the body of the test could just be one big field that contained the corresponding instructions. (No need to make it any more complicated than that.) I'm literally talking about something like:

<ansible_test id="unique_identifier_12345" comment="this test contains ansible" xmlns="http://ansible.com">
<body>Lots of ansible instructions go here</body>
</ansible_test>

I like the idea of using OVAL wrappers because it means that the SCAP infrastructure only needs to understand one format - otherwise it needs to know where to find unique identifiers in OVAL, Ansible, Chef, and every other format it supports. Using something like the above example, we could wrap any checking language (past or future), know that a Collector can always route it to the right PCE/PCX, and know that the results will always be easy to find in the Repository. Moreover, the Collector becomes instantly extensible - maybe there is a Collector that starts off only supporting OVAL, but the moment someone attaches a PCE or PCX that supports ansible, the Collector is able to recognize that these sorts of tests can be routed to those new components.
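As a sketch of what that routing could look like in practice - the handler registry, the element names, and the "unable to run" string below are all hypothetical, invented for illustration rather than taken from any SCAP specification:

```python
# Hypothetical sketch of wrapper-based routing: the Collector only inspects
# the wrapper's element name (check type) and id; the body stays opaque.
import xml.etree.ElementTree as ET

# Maps a check type (the wrapper element's local name) to a PCE/PCX handler.
HANDLERS = {
    "registry_test": lambda body: f"registry PCE ran: {body}",
    "ansible_test": lambda body: f"ansible PCX ran: {body}",
}

def route_check(wrapper_xml):
    """Return (check_id, result) for one wrapped check."""
    elem = ET.fromstring(wrapper_xml)
    check_type = elem.tag.split("}")[-1]   # strip the XML namespace, keep local name
    check_id = elem.get("id", "<missing id>")
    handler = HANDLERS.get(check_type)
    if handler is None:
        # No PCE/PCX recognizes this check type: report rather than fail silently.
        return check_id, "unable to run this check"
    body = next((c.text for c in elem if c.tag.split("}")[-1] == "body"), "")
    return check_id, handler(body)

wrapped = ('<ansible_test id="unique_identifier_12345" xmlns="http://ansible.com">'
           '<body>Lots of ansible instructions go here</body></ansible_test>')
print(route_check(wrapped))
```

Attaching a new PCX would then be a one-line addition to the registry, which is the extensibility property described above.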

So yes - my proposal is that the OVAL *format* serve as the lingua franca of the SCAP architecture to facilitate easy check routing and result lookup. However, the intent was to use this format as a wrapper to allow any checking system to be easily supported.

Does this allay or confirm your concerns?

Thanks,
Charles
--
To unsubscribe from this group, send email to scap-dev-endpo...@list.nist.gov
Visit this group at https://list.nist.gov/scap-dev-endpoint
---
To unsubscribe from this group and stop receiving emails from it, send an email to scap-dev-endpo...@list.nist.gov.

Schmidt, Charles M.

Apr 24, 2020, 10:07:14 AM
to Adam Montville, scap-dev-endpoint
One important detail I forgot to mention: my expectation was that the example ansible_test OVAL would never need to be formally adopted by OVAL or be part of the official OVAL language. The important thing is that the format is recognizable by tools. People can make up whatever they want and at least the ID and check type would be instantly discoverable.

Charles

David Solin

Apr 24, 2020, 6:39:01 PM
to Adam Montville, scap-dev-endpoint
Hi Adam,

Charles’s response notwithstanding, are you suggesting there should be no check languages specified for SCAPv2 at all?  Or that we might decide that OVAL shouldn’t be one of them?

I have been assuming that there might be multiple check languages, but that OVAL would almost certainly be one of them, because it would probably take us at least a decade to get back to the level of standardization we’ve managed to achieve if we decided to abandon it.

I don’t necessarily believe that every check language should manifest itself in the form of an OVAL schema.  However, I do believe it might be desirable to have the ability to support custom, or at least, not-yet-official OVAL schemas.

Best regards,
—David


Jessica Fitzgerald-McKay

Apr 27, 2020, 1:04:23 PM
to David Solin, Adam Montville, scap-dev-endpoint
In support of Adam and David's desire to be able to support multiple check languages, should we spend some time considering, as Charles began to describe in his email, what we would need that language to cover?

Adam Montville

Apr 28, 2020, 2:17:07 PM
to Jessica Fitzgerald-McKay, David Solin, scap-dev-endpoint
I have to admit, I’m struggling with the conversational grouping of my client on account of the “[EXT]” that gets added to MITRE emails :-)

OVAL should be one of the supported checking languages. I would like to additionally consider supporting NETCONF/RESTCONF, maybe NetFlow in some way (SACM has been experimenting with some of this). Even scripts (I know that’s a can of worms). 

To answer Jess’ question, it seems that we could benefit from an information model normalization that sits atop these different systems, and discovering what that might look like would require examining some set of checking languages beyond OVAL.

I also feel that using OVAL as the “lingua franca”, as Charles put it, is not the best way to accomplish this. It could work from a technical perspective, but it doesn’t feel right from a modeling perspective. 

Charles Schmidt

May 5, 2020, 3:13:12 PM
to scap-dev-endpoint
Hi all,

Adam - sorry about the [EXT] addition, but apparently it is the only way to keep me safe from the Internet. :)

I think, overall, we are all in agreement here. To paraphrase what Adam said in the beginning, we want the SCAP v2 architecture to support many types of checking languages, of which OVAL will be one. The open question seems to be one of the mechanisms that support this.

I'm not opposed to coming up with something else that "sits on top of" a check language to facilitate standardized identification and routing. OVAL wrappers seemed to me like an easy shortcut to achieve that, but it sounds like David and Adam are both not fans of that idea. Ultimately, I think we just need three fields in a wrapper: a unique identifier, a description of the type of the check, and the check content itself. Seems pretty simple.
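A minimal sketch of that three-field wrapper, kept independent of the OVAL format. Every name below (the class, its fields, the routing function) is invented for illustration; nothing here comes from a specification:

```python
# Sketch of the three-field wrapper described above: a unique identifier,
# a check type, and opaque check content. Names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class CheckWrapper:
    check_id: str     # unique identifier, used as the Repository key for results
    check_type: str   # tells the Collector which PCE/PCX can run the check
    content: str      # opaque check instructions, in any checking language

def route(wrapper, supported_types):
    """Route on check_type alone; the Collector never parses content."""
    if wrapper.check_type not in supported_types:
        return "unable to run"
    return f"dispatched {wrapper.check_id} via {wrapper.check_type}"

w = CheckWrapper("check-001", "oval", "<oval content here>")
print(route(w, {"oval", "ansible"}))
```

The point of the sketch is that the infrastructure touches only the first two fields; the third is passed through untouched to whatever PCE/PCX claims the type.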

One possible complication, however: As I was envisioning things, a Collector might have multiple PCEs/PCXs that each support different OVAL schemas. For example, one PCE might handle a core set of schemas for the target class, a PCX might support a set of schemas outside this core, and another PCX might support some locally developed or experimental schemas. Since a single Definition might employ tests from across these sets, it would need to be the Test structure that was used to determine routing of instructions to the right tool. This means that our proposed wrapper would have to sit between the Definition and Test structures in OVAL, which seems to make this a much harder problem. I don't know if other checking languages will have similar issues (i.e., where support for elements of the language might be distributed across multiple PCEs/PCXs).

Thoughts?

Charles


David Solin

May 5, 2020, 4:13:31 PM
to Charles Schmidt, scap-dev-endpoint
Why can’t it sit at the current XCCDF check layer, for compliance checking purposes?

I would think an inventory collection might consist of a ’naked’ piece of check-system content.  For OVAL, I think we have described an extensibility mechanism already.  Any other check systems would have to be either monolithic, or describe their own extension mechanism.

—David


Adam Montville

May 7, 2020, 11:59:49 AM
to Charles Schmidt, scap-dev-endpoint
Hahah re: [EXT]. Yes, you are now protected from the Internet…

More inline to address Charles and David.

On May 5, 2020, at 2:13 PM, Charles Schmidt <schmidt....@gmail.com> wrote:

Hi all,

Adam - sorry about the [EXT] addition, but apparently it is the only way to keep me safe from the Internet. :)

I think, overall, we are all in agreement here. To paraphrase what Adam said in the beginning, we want the SCAP v2 architecture to support many types of checking languages, of which OVAL will be one. The open question seems to be one of the mechanisms that support this.

I'm not opposed to coming up with something else that "sits on top of" a check language to facilitate standardized identification and routing. OVAL wrappers seemed to me like an easy shortcut to achieve that, but it sounds like David and Adam are both not fans of that idea. Ultimately, I think we just need three fields in a wrapper: a unique identifier, a description of the type of the check, and the check content itself. Seems pretty simple.

If we go this route, then XCCDF is likely sufficient (as David points out in another reply).


One possible complication, however: As I was envisioning things, a Collector might have multiple PCEs/PCXs that each support different OVAL schemas. For example, one PCE might handle a core set of schemas for the target class, a PCX might support a set of schemas outside this core, and another PCX might support some locally developed or experimental schemas. Since a single Definition might employ tests from across these sets, it would need to be the Test structure that was used to determine routing of instructions to the right tool. This means that our proposed wrapper would have to sit between the Definition and Test structures in OVAL, which seems to make this a much harder problem. I don't know if other checking languages will have similar issues (i.e., where support for elements of the language might be distributed across multiple PCEs/PCXs).

This speaks to capabilities. Given a specific PCE, yes it speaks OVAL, but how much? We need some way to express capabilities, and we could choose to do this in different ways. I would like to avoid any solution that requires a standard/specification revision to support later capabilities - this is too slow.

If the capabilities of a PCE are known, then the implementation should be able to avoid ever sending it content it is unable to understand. Or, if it does, then it simply doesn’t understand and returns a value that can be normalized as “unknown” or “unable to process” or something similar. 


Charles Schmidt

May 8, 2020, 4:23:43 PM
to scap-dev-endpoint
Hi all,

Adam/David - when you talk about using the XCCDF layer as the wrapper to provide routing and identification, I'm not sure if you are thinking of replacing XCCDF, if you are thinking of adding new fields to XCCDF at the Benchmark level, or if you are thinking of adding new fields to XCCDF at the Rule level. In all cases, however, I feel there is a challenge:

Consider the following pseudo-OVAL:
Definition id="example" {
   {registry-test id="abc" …}
       AND
   {custom-test id="xyz" …}
}

Probably a Windows Collector will natively support a PCE that can collect registry key values, but the custom test will need to be routed to a PCX that was built to support it. If the wrapper data that describes the needed capabilities is at the XCCDF layer, then this wrapper will need to point into the guts of the OVAL definition to route each part of the Definition separately. Likewise, if we want to store anything more than the overall Rule result, the wrapper will need to map separate IDs to each part of the test so the test results are individually addressable. This is all certainly possible, but it strikes me as more complicated and difficult for authors than using Tests as wrappers.
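Under that reading, per-Test routing within a single Definition would look roughly like the following sketch. The capability map and component names are hypothetical; the Definition mirrors the pseudo-OVAL above:

```python
# Illustrative per-Test routing for the pseudo-OVAL Definition above.
# The capability map and component names are invented for this sketch.
definition = {
    "id": "example",
    "operator": "AND",
    "tests": [
        {"id": "abc", "type": "registry-test"},
        {"id": "xyz", "type": "custom-test"},
    ],
}

capabilities = {
    "windows-pce": {"registry-test"},  # native registry collection
    "custom-pcx": {"custom-test"},     # attached extension component
}

def assign_tests(defn, capabilities):
    """Map each Test id to the PCE/PCX that can run it, or None if nothing can."""
    assignments = {}
    for test in defn["tests"]:
        assignments[test["id"]] = next(
            (name for name, types in capabilities.items() if test["type"] in types),
            None,
        )
    return assignments

print(assign_tests(definition, capabilities))
```

Each Test id ends up individually addressable, which is exactly the property an XCCDF-layer wrapper would have to reconstruct by pointing into the Definition's guts.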

Adam wrote:
"We need some way to express capabilities, and we could choose to do this in different ways. I would like to avoid any solution that requires a standard/specification revision to support later capabilities - this is too slow.

If the capabilities of a PCE are known, then the implementation should be able to avoid ever sending it content it is unable to understand. Or, if it does, then it simply doesn’t understand and returns a value that can be normalized as “unknown” or “unable to process” or something similar."

I agree 100% on every point here. Collectors MUST know what content each of their PCEs/PCXs can handle and they MUST be able to route portions of content based on this understanding. As you noted, if the content requires a capability that is outside the set of capabilities supported by a Collector's PCEs/PCXs, then it needs to be able to recognize that immediately too. The OVAL test type can do this, but other processes could work too.

Charles

David Solin

May 8, 2020, 4:42:46 PM
to Charles Schmidt, scap-dev-endpoint
I would expect an extension to normally operate at the check system level.  The Collector would delegate whole checks to extensions.

However you bring up a good point about OVAL schemas — I guess OVAL extensions could operate at the test level.  However, it might be really difficult for variables to work properly if they traverse objects of different types.  We’ll have to think about this.


Bill Munyan

May 14, 2020, 9:38:46 AM
to David Solin, Charles Schmidt, scap-dev-endpoint
I think it's a good discussion to have, specifically in reference to OVAL.  An OVAL collector's capabilities could definitely vary by the platform schemas it supports, but in the context of an SCAP-validated "collector", aren't there specific constructs the collector must support in order to be considered validated?  In this case, if there are separate collectors supporting OVAL on Windows and OVAL on Linux, then both collectors would have to support the platform-independent schema, and the Linux-supporting collector would need to support the Unix platform schema as well.

From the Collector's perspective, should it be able to advertise (a) the minimum required constructs to support a "validated" collector, (b) schema/test-level support (much more granular), or (c) just that it supports collection using OVAL, with any results returned governed by the OVAL specification (such as "unknown" for unsupported schemas or object types, etc.)?

Cheers, 
-Bill M.

Charles Schmidt

May 14, 2020, 10:05:41 AM
to scap-dev-endpoint
Hi Bill,

Thanks for the inputs.

To your second question - my understanding of the current design was such that Collectors would not need to "advertise" their support for OVAL or any other checking languages. Instead, Collectors are bound to a certain set of targets, and each target is bound to a single Collector. If some or all of the Collector's bound targets were the subject of an assessment, the assessment instructions would be sent to the Collector. The Collector would then be expected to route instructions to certain PCEs and PCXs based on each PCE/PCX's ability to run the given type of check against each specific target. If the Collector finds itself in possession of a type of check that it doesn't recognize (i.e., a type of check for which it doesn't know of any PCE/PCX that can handle it), then it is the Collector's job to provide the "result" for those checks - namely that the check could not be run.

I believe that we probably will want to define a minimal set of schemas for a PCE to support in order to claim that it "supports OVAL", but the architecture doesn't need to depend upon that. All the architecture needs to depend upon is that, when given a set of content, a Collector will pass it to PCE/PCXs that are able to assess using that content whenever possible, and that the Collector will recognize cases where no PCE/PCX will run the given content and return an appropriate result to indicate the check couldn't be run.

One side note on this: in addition to always needing a way for the Collector to determine the type of a given piece of check content, and needing every piece of check content to have a unique identifier that the Collector and Repository know how to find, it also appears that, regardless of the type of check content, the Collector needs to be able to return a universally recognized result of "UNABLE TO RUN THIS CHECK".
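That last requirement can be sketched as a normalization step in the Collector. The reserved constant and the function names below are invented for illustration, not taken from any SCAP v2 document:

```python
# Sketch of Collector result normalization with a reserved "unable to run"
# value for checks no PCE/PCX recognizes. Names are illustrative only.
UNABLE_TO_RUN = "unable_to_run"

def collect_results(assignments, runner):
    """assignments maps check_id -> PCE/PCX name (or None when no component
    recognizes the check type); runner executes a check on a named component."""
    results = {}
    for check_id, component in assignments.items():
        if component is None:
            # The universally recognized outcome for unroutable checks.
            results[check_id] = UNABLE_TO_RUN
        else:
            results[check_id] = runner(component, check_id)
    return results

print(collect_results(
    {"abc": "windows-pce", "xyz": None},
    lambda component, check_id: f"pass ({component})",
))
```

Because the reserved value is language-agnostic, the Repository can store it under the check's identifier like any other result.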

Thanks,
Charles


David Solin

May 14, 2020, 10:59:59 AM
to Bill Munyan, Charles Schmidt, scap-dev-endpoint
I feel like the current SCAP validation test program embraces this idea, without necessarily declaring it as such.