SCAP v2: RFEs for XCCDF


David Ries

Jul 18, 2019, 12:02:53 PM
to scap-dev-authoring
Hello All,

One of the tasks being considered by the SCAP v2 XCCDF & Authoring group is making improvements to XCCDF.

By “improvements”, I mean incremental changes to the current schema/model that—ideally—would be backwards compatible. This task does not include wholesale changes to the model or switching from XML to another format; those types of changes are being considered as a part of a different task.

As a first step, we are compiling a list of RFEs (requests for enhancement). 

Please respond to this email to add RFEs of your own! These can be problems to solve, opportunities to realize or any other specific wish list item for XCCDF.

Here are a few that have been mentioned on various mailing lists to get the ball rolling:

  • Better support for automation-friendly remediations, for example:
    • A well-specified way to reference external remediations in well-known formats (Chef, Ansible, PowerShell, etc.), including common remediation-relevant metadata for each remediation (e.g. applicability, restart behavior, user input requirements, uninstall steps, etc.)
  • Expand Profiles/Tailoring to allow:
    • Customizing the XCCDF metadata (titles, descriptions, etc.)
    • Adding external rules and/or checks
    • Overriding arbitrary OVAL elements and/or values
  • A way TBD to describe how multiple benchmarks should be applied to a system composed of devices with different roles (production DB server, web server, etc.). This might be a meta-benchmark or catalog of some sort or a way for benchmarks to reference other benchmarks, mapping them by role, tier, OS, etc.
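
To make the remediation idea a bit more concrete, here is a purely illustrative sketch. The system and reboot attributes exist on the XCCDF 1.2 <fix> element today; the <remediation> child, its href, and the metadata elements are hypothetical and not part of any current schema:

```xml
<!-- Illustrative sketch only. @system and @reboot exist on the XCCDF 1.2
     <fix> element; the <remediation> child, its href, and the metadata
     elements below are hypothetical. -->
<Rule id="xccdf_org.example_rule_password-max-age">
  <fix system="urn:xccdf:fix:script:ansible" reboot="false">
    <remediation href="https://content.example.com/fixes/password-max-age.yml">
      <!-- hypothetical remediation-relevant metadata -->
      <requires-input>false</requires-input>
      <uninstall href="https://content.example.com/fixes/password-max-age-undo.yml"/>
    </remediation>
  </fix>
</Rule>
```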

Best,
David

David E. Ries
Co-Founder, Business Development
ri...@jovalcm.com

Joval Continuous Monitoring



Gabriel Alford

Jul 18, 2019, 6:52:18 PM
to David Ries, scap-dev-authoring
On Thu, Jul 18, 2019 at 10:03 AM David Ries <ri...@jovalcm.com> wrote:
Hello All,

One of the tasks being considered by the SCAP v2 XCCDF & Authoring group is making improvements to XCCDF.

By “improvements”, I mean incremental changes to the current schema/model that—ideally—would be backwards compatible. This task does not include wholesale changes to the model or switching from XML to another format; those types of changes are being considered as a part of a different task.

As a first step, we are compiling a list of RFEs (requests for enhancement). 

Please respond to this email to add RFEs of your own! These can be problems to solve, opportunities to realize or any other specific wish list item for XCCDF.

Here are a few that have been mentioned on various mailing lists to get the ball rolling:

  • Better support for automation-friendly remediations, for example:
    • A well-specified way to reference external remediations in well-known formats (Chef, Ansible, PowerShell, etc.), including common remediation-relevant metadata for each remediation (e.g. applicability, restart behavior, user input requirements, uninstall steps, etc.)
Couple of questions:
  • Should there be a way to specify external remediations from the content? I assume here that you are referring to, say, a Chef server or Ansible Tower, which can get dicey when accounting for everyone's different environment setup.
  • As far as metadata goes, shouldn't applicability be determined by OVAL? If the check fails, the remediation is applicable; if it passes, the remediation is not. Also, if we start to add metadata such as user input requirements and uninstall steps, that starts to move outside the "automation" part of SCAP.
  • Expand Profiles/Tailoring to allow:
    • Customizing the XCCDF metadata (titles, descriptions, etc.)
    • Adding external rules and/or checks
    • Overriding arbitrary OVAL elements and/or values
 I assume that you are talking about more than making sure that XCCDF variables are used and changed during a customization.
  • A way TBD to describe how multiple benchmarks should be applied to a system composed of devices with different roles (production DB server, web server, etc.). This might be a meta-benchmark or catalog of some sort or a way for benchmarks to reference other benchmarks, mapping them by role, tier, OS, etc.
  • Something I would like to see here as well is the ability to extend multiple profiles into many single different profiles. For example, I may have multiple products that use a db server profile and a web server profile in a benchmark and want to use those db and web server profiles multiple times for every different product that uses a db and web server. At the same time, I also don't want my db server profile extending the web profile or vice versa.
As another RFE, it would fix a lot of problems we run into if rule ordering in a profile were obeyed by the scan itself. In other words, the order in which I list the rules in a profile is the order in which a scan runs when that profile is selected. Otherwise, follow the behavior we have today.
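
To sketch what I mean by extending multiple profiles (hypothetical only: XCCDF 1.2 allows just a single extends attribute per Profile, so a form like this would require a schema change):

```xml
<!-- Hypothetical multi-extension; this does not validate against the
     current XCCDF schema, which allows only a single extends="..."
     attribute on Profile. -->
<Profile id="xccdf_org.example_profile_product-a">
  <title>Product A (DB server + web server)</title>
  <extends idref="xccdf_org.example_profile_db-server"/>
  <extends idref="xccdf_org.example_profile_web-server"/>
</Profile>
```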
 



--
To unsubscribe from this group, send email to scap-dev-author...@list.nist.gov
Visit this group at https://list.nist.gov/scap-dev-authoring
---
To unsubscribe from this group and stop receiving emails from it, send an email to scap-dev-author...@list.nist.gov.

David Ries

Jul 19, 2019, 11:03:00 AM
to Gabriel Alford, scap-dev-authoring
Hi Gabriel,

Thanks for the two additional RFEs! 

If you don’t mind, I’m going to table responding to your questions about the remediation and tailoring RFEs for now. Let’s get a good list first. Then we can dig into each RFE. I don’t want to get into the weeds yet. :)

-David

Grobauer, Bernd

Jul 22, 2019, 10:05:50 AM
to David Ries, scap-dev-authoring

Hi,

 

Here are some “small” annoyances with XCCDF which should be ironed out in a future release of XCCDF:

 

  1. Identifiers
    1. Identifiers should be “lifetime identifiers”
      The specification should require that identifiers stay valid for the complete lifetime of a rule and
      must not depend on chapter-section-subsection structures which are bound to change from revision to revision of a document,
      as is, e.g., currently the case with CIS benchmarks.
    2. Form
      Form-specifications like “xccdf_namespace_benchmark_name” shoehorn an identifier that has two components
      into a single component, making authoring more error-prone than it has to be.
      I guess the reason for doing so is the standard XML treatment of id and idref: everything that is needed
      to identify the structure must be within the ‘id’ field. Hence, in XML we probably will not get around it,
      but in the data model, the identifier should be separated out into namespace and name.
  2. Extension point for applicability information
    Currently, Benchmark, Group, Rule, and FIX element allow addition of applicability information as CPE information. To allow for organization-specific
    applicability information as well as future extensions that can be carried out without issuing a new XCCDF version, XCCDF should allow an extension
    point <applicability> <applicability_info system=’…’> …</applicability_info> </applicability>
    or similar that can be specified for benchmarks, groups, rules, fixes, checks.
  3. Human-readable check info
    CIS benchmarks viewed in PDF, Word or XLSX contain a human-readable description of how to audit a setting, which is not contained in the XCCDF version
    of the guide, simply because there is no field for this type of content. XCCDF should therefore carry a ‘checktext’ element akin to the ‘fixtext’ element.
  4. Rule-specific History information
    At least rules (maybe also groups and profiles) should offer the ability to keep track of change information, detailing changes carried out over the various
    versions of a rule; the history information should have a human readable description and a machine-readable flag, with the latter flag informing whether
    changes with respect to the previous revision are merely cosmetic (e.g., changed wording, formatting, etc.) and thus can be disregarded when comparing subsequent
    revisions of a benchmark, or should be reviewed for a possible impact on check/implementation.
  5. Support Markdown instead/alongside HTML for human-readable content
    CIS uses Markdown for authoring their human-readable content in XCCDF; Red Hat does; we at Siemens do. I would love to have XCCDF communicate to me the original Markdown
    rather than the HTML conversion – I can reuse Markdown much more easily than somehow reconverting HTML to Markdown (and then dealing manually with the errors that occur in that process). (I realize that, when using XML, this will be cumbersome, since we would need to use CDATA environments in all fields
    containing Markdown – but since nobody is writing XCCDF by hand anyhow, this might not be that big an issue…)
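
To make items 2 and 3 concrete, something along these lines (the element names are suggestions only; none of this exists in the current schema):

```xml
<Rule id="xccdf_org.example_rule_password-max-age">
  <!-- Item 2: hypothetical open-ended applicability extension point,
       alongside the existing CPE-based <platform> mechanism -->
  <applicability>
    <applicability_info system="https://applicability.example.org/org-specific">
      business-unit:factory-automation
    </applicability_info>
  </applicability>
  <!-- Item 3: hypothetical 'checktext' element, analogous to 'fixtext' -->
  <checktext>
    Run: grep PASS_MAX_DAYS /etc/login.defs
    and verify the configured value is 60 or less.
  </checktext>
</Rule>
```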

 

Mostly rather small stuff… but stuff that would save our organization a lot of time in dealing with externally available XCCDF benchmarks.

 

Kind regards,

 

Bernd

--

David Ries

Jul 22, 2019, 10:38:20 AM
to Grobauer, Bernd, scap-dev-authoring
Thanks, Bernd! These are terrific. I will add them to the list. -David

David Ries

Aug 15, 2019, 12:23:21 PM
to scap-dev-authoring
Hello All,

Below, please find the 11 XCCDF RFEs suggested on this list.

Please note: these are in no particular order. They have not been reviewed, discussed or edited. And this certainly isn’t a complete list. It’s simply intended to be a conversation-starter, providing examples of the type of incremental changes to the current XCCDF schema/model we might consider.

    • Better support for automation-friendly remediations, for example:
      • A well-specified way to reference external remediations in well-known formats (Chef, Ansible, PowerShell, etc.), including common remediation-relevant metadata for each remediation (e.g. applicability, restart behavior, user input requirements, uninstall steps, etc.)
    • Lifetime identifiers
      • The specification should require that identifiers stay valid for the complete lifetime of a rule and must not depend on chapter-section-subsection structures which are bound to change from revision to revision of a document, as is, e.g., currently the case with CIS benchmarks.
    • Identifier format improvements
      • Form-specifications like “xccdf_namespace_benchmark_name” shoehorn an identifier that has two components into a single component, making authoring more error-prone than it has to be. I [Bernd] guess the reason for doing so is the standard XML treatment of id and idref: everything that is needed for identifying the structure must be within the ‘id’ field. Hence, in XML we probably will not get around it, but in the data model, the identifier should be separated out into namespace and name.
    • Extension point for applicability information
      • Currently, Benchmark, Group, Rule, and FIX element allow addition of applicability information as CPE information. To allow for organization-specific applicability information as well as future extensions that can be carried out without issuing a new XCCDF version, XCCDF should allow an extension point <applicability> <applicability_info system=’…’> …</applicability_info> </applicability> or similar that can be specified for benchmarks, groups, rules, fixes, checks.
    • Human-readable check info
      • CIS benchmarks viewed in PDF, Word or XLSX contain a human-readable description of how to audit a setting, which is not contained in the XCCDF version of the guide, simply because there is no field for this type of content. XCCDF should therefore carry a ‘checktext’ element akin to the ‘fixtext’ element.
    • Rule-specific History information
      • At least rules (maybe also groups and profiles) should offer the ability to keep track of change information, detailing changes carried out over the various versions of a rule; the history information should have a human readable description and a machine-readable flag, with the latter flag informing whether changes with respect to the previous revision are merely cosmetic (e.g., changed wording, formatting, etc.) and thus can be disregarded when comparing subsequent revisions of a benchmark, or should be reviewed for a possible impact on check/implementation.
    • Support Markdown instead/alongside HTML for human-readable content
      • CIS uses Markdown for authoring their human-readable content in XCCDF; Red Hat does; we at Siemens do. I would love to have XCCDF communicate to me the original Markdown rather than the HTML conversion – I can reuse Markdown much more easily than somehow reconverting HTML to Markdown (and then dealing manually with the errors that occur in that process). (I realize that, when using XML, this will be cumbersome, since we would need to use CDATA environments in all fields containing Markdown – but since nobody is writing XCCDF by hand anyhow, this might not be that big an issue…)
    • Profile-based rule ordering
      • As another RFE, it would fix a lot of problems we [Red Hat] run into if rule ordering in a profile were obeyed by the scan itself. In other words, the order in which I list the rules in a profile is the order in which a scan runs when that profile is selected. Otherwise, follow the behavior we have today.
    • Enhanced profile extension
      • Something I [Gabriel] would like to see here as well is the ability to extend multiple profiles into many single different profiles. For example, I may have multiple products that use a db server profile and a web server profile in a benchmark and want to use those db and web server profiles multiple times for every different product that uses a db and web server. At the same time, I also don't want my db server profile extending the web profile or vice versa.
    • Expand Profiles/Tailoring to allow:
      • Customizing the XCCDF metadata (titles, descriptions, etc.)
      • Adding external rules and/or checks
      • Overriding arbitrary OVAL elements and/or values
    • Express a system composed of multiple devices with different roles/benchmarks
      • A way TBD to describe how multiple benchmarks should be applied to a system composed of devices with different roles (production DB server, web server, etc.). This might be a meta-benchmark or catalog of some sort or a way for benchmarks to reference other benchmarks, mapping them by role, tier, OS, etc.
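
For the last item, one conceivable shape (entirely hypothetical; no such construct exists in XCCDF today, and all element names and URLs are invented for illustration) might be:

```xml
<!-- Hypothetical meta-benchmark mapping device roles to benchmark
     profiles; invented for illustration only. -->
<meta-benchmark id="xccdf_org.example_system_three-tier-app">
  <component role="db-server">
    <benchmark href="https://content.example.com/rhel8-xccdf.xml"
               profile="xccdf_org.example_profile_db-server"/>
  </component>
  <component role="web-server">
    <benchmark href="https://content.example.com/rhel8-xccdf.xml"
               profile="xccdf_org.example_profile_web-server"/>
  </component>
</meta-benchmark>
```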

Best,
David Ries

Charles Schmidt

Aug 29, 2019, 6:49:48 PM
to scap-dev-authoring
Hi all,

Per my action item from the last call, I've taken a stab at organizing the incremental improvements identified in David's compilation. I felt that they fell into four distinct buckets. Comments are welcome.

Charles

----------

This information categorizes the list of proposed enhancements to XCCDF compiled by David Ries in his Aug 15 email and supplemented by additional suggestions from the August 16 XCCDF and Content Authoring teleconference. It is noted that all of these represent incremental changes to XCCDF (or to other standards) rather than major revisions.

Standardizing Documentation Capture

This covers efforts to better support human-readable information within SCAP content. The utilization of such mechanisms would help users of SCAP to better render instructions/descriptions, making it easier to understand what the automation checks and how to interpret the results.

These enhancements would primarily be used by Content Developers who intend to build content for a wide audience, and thus one for which clear context needs to be supplied. Content Authors would be less likely to use them, as their documentation focus would generally be on functional automation; any description would be more along the lines of code documentation, intended for those who share a given context and do not generally need significant power or flexibility in its presentation.

• Support Markdown instead/alongside HTML for human-readable content

• Human-readable check info

Content Management for Authors

This covers efforts to make SCAP content and content elements easier to track within a content management environment. They would make it easier to find specific content, track how that content has changed, and thus better reuse content and manage the implications of that reuse.

These enhancements would primarily be used by Content Developers who focus on development of larger SCAP content corpuses that are intended to have a long lifetime. They would allow Content Developers to maintain large bodies of existing content from which they could pull existing elements for reuse, as well as to better manage changes to content elements that impact their content collections.

Content Authors might also benefit from these changes if they allowed them to quickly locate and apply content elements needed to address a specific task.

• Lifetime identifiers

• Identifier format improvements

• Rule-specific History information

Content Execution Control

This covers extensions to mechanisms that allow parties to modify the behaviors of existing content through changes to elements of it or by combining it with other existing content. It makes SCAP content more flexible by allowing parties to modify behavior without altering low-level structures of the content.

These enhancements will benefit both Content Developers and Content Authors. Content Developers will benefit because they will be able to create content bodies that are applicable to more scenarios, making the content more immediately usable by their customers. Content Authors will benefit because these enhancements will make it easier to tweak existing content to account for local variations and needs.

• Extension point for applicability information

• Profile-based rule ordering

• Enhanced profile extension (apply profiles to multiple benchmarks)

• Expand Profiles/Tailoring to allow customizing of XCCDF Metadata, adding external checks, overriding arbitrary OVAL elements and values

• Express a system composed of multiple devices with different roles/benchmarks

Remediation

This covers enhancements to move SCAP beyond assessment and enable it to initiate responses to that assessment in an automated manner. Both Content Developers and Content Authors are likely to make use of this feature. Content Authors might be slightly more likely to employ it because they will have an awareness of local needs and nuances that makes it safer to invoke automated responses to detected conditions. By contrast, Content Developers might be concerned about automatically changing system configurations due to the potential for unintended consequences. (Even automated application of missing patches, which is a fairly straightforward process, might be considered too risky to write into content when one doesn't know the enterprise's policy for vetting patches prior to application.)

• Better support for automation-friendly remediations

David Ries

Aug 30, 2019, 10:10:52 AM
to Charles Schmidt, scap-dev-authoring
Hi Charles, this looks great! A couple of minor comments below.

On Aug 29, 2019, at 5:49 PM, Charles Schmidt <schmidt....@gmail.com> wrote:

Hi all,

Per my action item from the last call, I've taken a stab at organizing the incremental improvements identified in David's compilation. I felt that they fell into four distinct buckets. Comments are welcome.

Charles

----------

This information categorizes the list of proposed enhancements to XCCDF compiled by David Ries in his Aug 15 email and supplemented by additional suggestions from the August 16 XCCDF and Content Authoring teleconference. It is noted that all of these represent incremental changes to XCCDF (or to other standards) rather than major revisions.

Standardizing Documentation Capture

This covers efforts to better support human-readable information within SCAP content. The utilization of such mechanisms would help users of SCAP to better render instructions/descriptions, making it easier to understand what the automation checks and how to interpret the results.

These enhancements would primarily be used by Content Developers who intend to build content for a wide audience, and thus one for which clear context needs to be supplied. Content Authors would be less likely to use them, as their documentation focus would generally be on functional automation; any description would be more along the lines of code documentation, intended for those who share a given context and do not generally need significant power or flexibility in its presentation.

• Support Markdown instead/alongside HTML for human-readable content

• Human-readable check info

Content Management for Authors

This covers efforts to make SCAP content and content elements easier to track within a content management environment. They would make it easier to find specific content, track how that content has changed, and thus better reuse content and manage the implications of that reuse.

These enhancements would primarily be used by Content Developers who focus on development of larger SCAP content corpuses that are intended to have a long lifetime. They would allow Content Developers to maintain large bodies of existing content from which they could pull existing elements for reuse, as well as to better manage changes to content elements that impact their content collections.

These enhancements would be used by Content Developers, but the primary benefit would be for consumers of SCAP benchmarks that have a long lifetime. These enhancements would allow consumers to better track changes between versions of a benchmark.

Content Authors might also benefit from these changes if they allowed them to quickly locate and apply content elements needed to address a specific task.

• Lifetime identifiers

• Identifier format improvements

• Rule-specific History information

Content Execution Control

This covers extensions to mechanisms that allow parties to modify the behaviors of existing content through changes to elements of it or by combining it with other existing content. It makes SCAP content more flexible by allowing parties to modify behavior without altering low-level structures of the content.

These enhancements will benefit both Content Developers and Content Authors. Content Developers will benefit because they will be able to create content bodies that are applicable to more scenarios, making the content more immediately usable by their customers. Content Authors will benefit because these enhancements will make it easier to tweak existing content to account for local variations and needs.

You might consider replacing “Content Authors” with “Content Consumers” in the above paragraph.


Charles Schmidt

Aug 30, 2019, 7:28:01 PM
to scap-dev-authoring
Hi David,

Thanks for the review. Are you defining a "Content Consumer" as a type of vendor that identifies and provides SCAP content to customers (probably as part of a tool that they support)? I think at the last workshop we spent some time talking about the challenges of parties who need to scrape existing content repositories, gather the latest versions of content, and then bundle it as a delivery for customers, so I'm guessing this is the role you are thinking of. (This would be distinct from the enterprise users of such tools, who are executing tools with SCAP content and interpreting the output.)

If I am correct about your meaning of Content Consumer, then I agree that they would be beneficiaries of the Content Management for Authors enhancements. For the Content Execution Control features, I'm less clear - are you envisioning that these assessment tool providers would add enhancements to their tools that would make it easier for their customers to manipulate SCAP data? I can see that, but virtually any feature gives those tool vendors the ability to provide more capabilities to customers. By contrast, I do feel that Content Authors would be beneficiaries of Content Execution Control, because it would make it much easier to tweak content to account for local enterprise needs and variations, whether or not those variations were anticipated by the original Content Developers. I believe (but don't know for sure) that such tweaking to customize content to fit local enterprise needs would be a common use case for Content Authors, and the enhancements discussed would be improvements over the current procedure of going into a massive content file and doing XML hacking.

Am I misunderstanding your use case here?

Thanks,
Charles

B Grobauer

Sep 3, 2019, 6:28:05 AM
to scap-dev-authoring
Hi,

I would have written exactly the same comment as David: enhancements regarding Content Management would also be helpful to consumers, to better track changes between versions of a benchmark.

Charles' answer shows that we probably should try to define precisely what we mean by Content Developer/Author/Consumer before returning to the discussion about who benefits from what.

In the minutes of the July telco, we have:

GA – I think that all of us here are content developers, and then we have content authors. I have a lot of government people in DoD IC, for example, and they need to create one or two rules in addition to a STIG, perhaps something that is site-specific. They are currently struggling with the data model and the serialization format. Those voices have become louder as tools like Google Kubernetes become more popular at these sites.

CMS – Could you elaborate on the difference between a content author and a content developer?
GA – Sure. As a content developer, I author content, but I also develop it, so I know more of the nuances of the language. A content author is a base author who is going to create single rules, or someone who tweaks content, more of a user.
CMS – If I can paraphrase what you said, a developer is someone who is focused on generating SCAP content that others could use, and an author is more operationally focused. They have some SCAP content that they need to make fit with an operational need. Their primary focus is on an operational need, not on publishing the SCAP content.
GA – Agreed.


So how about the following:

Content Developer: "Power users" who issue baselines, either writing from scratch (e.g., CIS, DoD, OpenSCAP) or building upon existing baselines (e.g., Siemens)
Content Authors: Users who adapt existing SCAP content for their specific use-case, adding/removing/modifying a few rules "in a small-scale way"
Content Consumers: Users who use SCAP content without modifying it, either to check their systems using SCAP-enabled tools and their features (including profiling/tailoring), or to build/generate/author artefacts based on SCAP content (writing scripts for implementation, or writing scan policies for a proprietary tool such as Nessus/Qualys that correspond to a given SCAP baseline/profile)

This definition of Content Consumer (based on the distinction that no SCAP content of one's own is produced) would throw together security providers such as Qualys with a small-sized enterprise using CIS baselines and CIS-CAT for checking their own systems: both make use of SCAP without writing SCAP, but in very different ways... so maybe a further distinction between a mere "Consumer" and somebody who builds something else based on SCAP content in a systematic way (I lack a good term for this role: "Content Transformer"?) would be helpful?

Kind regards,

Bernd

David Ries

Sep 3, 2019, 10:57:51 AM
to B Grobauer, scap-dev-authoring
Hi Bernd,

I was thinking of distinguishing between Content Developers and Authors based on their SCAP expertise and commitment. I think these are consistent and compatible with your definitions, so I thought I’d share them:

• Content Developers: authors of SCAP content with expertise in the component standards (OVAL, XCCDF, etc.) and, typically, an ongoing commitment to publishing content in the SCAP format. This would include individuals and organizations that author SCAP from scratch (i.e. writing XML) as well as those that create SCAP content generators and authoring tools. This could also include those that aggregate, build upon and republish SCAP content, assuming they have SCAP expertise and/or commitment.
• Content Authors: authors of SCAP content with little-to-no expertise in the component standards and, typically, insufficient commitment to the standards themselves to justify developing SCAP expertise. This would include individuals and organizations that use authoring tools to generate SCAP.

I think of Content Consumers as those who use SCAP content to run assessments (including, optionally, creating Tailorings, selecting Profiles, etc.). One may be a Content Consumer and a Content Author or Developer.

-David




        David Ries

        unread,
        Sep 4, 2019, 12:12:22 PM
        to scap-dev-authoring
        Here is a quick, rough list of activities and concerns that have been mentioned:

        • Use Case Coverage
          • SCAP’s core capability to express vulnerability, compliance, inventory, patch and other miscellaneous security-related assertions about systems in an implementation-neutral fashion.
        • SCAP Content Development
          • Creation of SCAP content from scratch and/or based on existing baselines by “power users” that have expertise in the component standards (OVAL, XCCDF, etc.) and typically, an ongoing commitment to publishing content in the SCAP format. This would also include development of SCAP content generators and authoring tools. This could also include those that aggregate, build upon and republish SCAP content, assuming they have SCAP expertise and/or commitment.
        • SCAP Content Authoring
          • Creation of SCAP content for specific use-cases, typically for operational reasons (e.g. adding/removing/modifying a few rules for internal organizational reasons) by authors with little-to-no expertise in the component standards, and typically, insufficient commitment to the standards themselves to justify developing SCAP expertise. This would include individuals and organizations that use authoring tools to generate SCAP and those that use SCAP features like Profile selection and Tailoring.
        • Downstream Content Authoring
          • SCAP Content Development that leverages 3rd-party (upstream) content. This subset of SCAP Content Development includes the extension and customization of upstream content beyond Tailoring, such as aggregating, building upon and republishing SCAP content.
        • Content Packaging and Republishing
          • Aggregating content from various third parties, packaging it and publishing it.

        -David

        Charles Schmidt

        unread,
        Sep 15, 2019, 2:54:07 PM
        to scap-dev-authoring
        Hi David,

        Thanks for posting this. I attempted to capture the key ideas in the attached presentation for the deep-dive on Tuesday. I unified both types of Content Authoring; let me know if you feel that this obscures the distinctions you make below.

        Everyone: comments welcome.

        Charles
        XCCDF - SCAP Participant Roles.pptx

        Charles Schmidt

        unread,
        Sep 15, 2019, 3:18:41 PM
        to scap-dev-authoring
        Hi all (again),

        I have compiled a presentation on our list of incremental improvements. In it, I tried to incorporate the comments by David and Bernd to recognize how consumers would benefit from the given classes of work. I'm very interested in any feedback you might have on this.

        Thanks a bunch,
        Charles
        XCCDF - Incremental Improvements for XCCDF.pptx

        David Ries

        unread,
        Sep 16, 2019, 9:39:53 AM
        to Charles Schmidt, scap-dev-authoring
        Hi Charles,

        This looks good. 

        I have one (very) small piece of feedback regarding slide 9, "Enable SCAP Content to initiate responses to assessments in an automated manner”. I’m not sure that SCAP tools/content will be driving or “initiating” remediations. At this point, I think the goal is simply to capture the relationship between rules and responses in an automation-friendly manner. This would allow, for example, integration with workflow/automation solutions that would drive the assessment response process.

        It’s minor, but “initiating” the response might sound like we’re expanding SCAP’s scope to include automated remediation, which is a much bigger change than I think we’re currently looking at.

        -David
