RHEL STIG via Puppet


Michael Worsham

Jul 12, 2011, 8:26:10 PM
to mil...@googlegroups.com
Has anyone tried using the puppet method technique to actually STIG a
RHEL 5.x platform yet?

https://fedorahosted.org/aqueduct/wiki/RhelStigProcess

-- Michael

Isaac Christoffersen

Jul 12, 2011, 9:28:34 PM
to mil...@googlegroups.com
I haven't but it's essentially the Tresys CLIP modules, so I would
expect similar results.


Vincent Passaro

Apr 4, 2012, 5:16:23 PM
to mil...@googlegroups.com
Michael,

I would encourage you to give it a try.  The Puppet content, however, is designed around the Unix/Linux checklist.  If you want to be compliant with the RHEL 5 Beta STIG, you will want to use the Bash content.

If you have any questions, let me know!

Andrew Dunn

Apr 5, 2012, 9:30:22 AM
to Military Open Source Software
I've been thinking about this lately. There is a product called SecurityBlanket (http://www.trustedcs.com/SecurityBlanket/SecurityBlanket.html). I don't have a whole lot of personal experience with it, but I believe it's similar to Bastille (http://bastille-linux.sourceforge.net/) in that it runs a series of modifications against your platform.

Puppet and Puppet Enterprise would really be a wonderful thing to deploy (I believe it now even supports Windows). It would be really nice if there were a collection of Puppet manifests that would bring a base Linux/Unix system into STIG compliance in an iterative fashion.

Vincent Passaro

Apr 5, 2012, 11:55:44 AM
to mil...@googlegroups.com
I have used Security Blanket before and wasn't overly impressed.  It actually creates more issues than I wanted, since it installs a web server and/or an agent on the box.  On top of that, all the source is compiled Python, so you really don't know how it's doing the configuration... which means reverse-engineering time when you have to figure it out.

Aqueduct is actually working on Puppet manifests for a number of requirements, since that's where the technology is going, but will probably maintain the Bash scripts as well for smaller environments that don't deploy Puppet, or for those who choose to do it 'old skool'.

I have heard Bastille was starting up development again.  I emailed the main developer, but never heard anything back.

-Vince

Jennings, Jared L CTR USAF AFMC 46 SK/CCI

Apr 6, 2012, 6:34:27 PM
to mil...@googlegroups.com
Quoth Andrew Dunn:

> Puppet, and Puppet enterprise would really be a wonderful thing to
> deploy (I believe it now even supports windows). It would be really
> nice if there were a collection of puppet manifests that would bring a
> base linux/unix system into STIG compliance in an iterative fashion.

I've got such a collection - for our site and network. I've found three
large categories of settings set by my manifest:
(1) Settings that are unique to our network.
(2) Settings that anyone would probably need to set, fully and exactly
specified by the STIG.
(3) Settings that work for us, and comply with the STIG, but someone
else may want to set differently.

Examples:
(1) The host name of the HTTP proxy server.
(2) Disallowing root login via SSH: put "PermitRootLogin no" in the sshd
config file.
(3) Removing all FTP server software packages.

It was easy enough to make separate Puppet classes for the first
category. The second category is where the good sharing really is. But
for the settings in the third category, solutions will differ, because
requirements will differ. (We don't need FTP, but you may at your site.)
After making a whole manifest, I think that the third category is larger
than the second.
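
As a minimal sketch of the second category (illustrative class and resource names, not the actual site manifest), the sshd setting above could be expressed as:

```puppet
# Illustrative sketch only -- not the actual site manifest.
# Category (2): a setting fully and exactly specified by the STIG.
class stig::sshd {
  # Disallow root login via SSH.
  augeas { 'sshd-permit-root-login':
    context => '/files/etc/ssh/sshd_config',
    changes => 'set PermitRootLogin no',
    notify  => Service['sshd'],
  }

  service { 'sshd':
    ensure => running,
    enable => true,
  }
}
```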

I also think it's important that the "collection of puppet manifests" be
informative, not normative: made by the community, not by the people who
put out the regulations. Some people are going to have to deviate from
it, and that preserves the property that there are many means to
compliance. If there's an official Puppet manifest, it becomes a new
regulation.

kitplummer

Aug 13, 2012, 10:26:36 AM
to mil...@googlegroups.com
Hey Vince.

Can you explain what the difference would be between Puppet executing "tasks" and Bash?  Or what the difference is in "content"?  I'm trying to read into what you're trying to say, and I think I'm missing it - not sure.

Thanks.
Kit

Vincent Passaro

Aug 13, 2012, 12:24:36 PM
to mil...@googlegroups.com
Kit,

Good question; unfortunately it's a very long-winded answer.  I'll try to give you the Reader's Digest version and hopefully it will make sense.

DISA recently changed its requirements for RHEL 5 systems.  This is largely due to the recent shift from DISA 'home-grown' tools such as Gold Disk and the Unix SRR to SCAP.  The old requirements were based on the now-outdated Unix/Linux Checklist, which is what Aqueduct has Puppet content for.  The latest version used for RHEL systems is the RHEL 5 STIG, which can be found here:


For the new requirements I wrote the remediation content in Bash, not in Puppet, for a few reasons:

1.  Puppet is vendor-supported by Puppet Labs, but it's a paid license.  Some commands/systems I work for or with don't have the funding to go buy a license.  Because of the USMC/Navy requirement that all software be supported by a vendor, using Puppet wasn't the easiest answer.
2.  Between myself and others in the industry, the community came to the conclusion that Bash was the preferred 'way' of delivering remediation content (for now, at least).
3.  Not that many people are up to speed on Puppet scripting, so using Bash encourages more contributions from the community.
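
To give a flavor of what Bash remediation content looks like, here is a minimal, hypothetical sketch (not actual Aqueduct code) for the PermitRootLogin requirement:

```shell
# Minimal, hypothetical example of Bash-style remediation content
# (not actual Aqueduct code): force "PermitRootLogin no" in an
# sshd_config file, idempotently. The file path is a parameter so
# the sketch can be exercised against a test copy.
remediate_permit_root_login() {
    cfg="$1"
    if grep -q '^PermitRootLogin' "$cfg"; then
        # Directive present: rewrite it in place.
        sed -i 's/^PermitRootLogin.*/PermitRootLogin no/' "$cfg"
    else
        # Directive absent: append it.
        echo 'PermitRootLogin no' >> "$cfg"
    fi
}
```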

I hope that makes some sense.  Let me know if you have any other questions.

Thanks,

Vince



John Janek

Aug 13, 2012, 12:28:56 PM
to mil...@googlegroups.com
How does the SCAP scan map to the NIST publications?  When we built our Secure Enterprise Browser Environment, we ended up applying the NIST RHEL 5 guidelines to a RHEL 6 build.  I was surprised how well they carried over (I don't know why, precisely; *nix being what it is, you'd figure that would be exactly the expected result).

I know the DISA STIGs are different.  I guess my idle curiosity wants to know: how different?

Vincent Passaro

Aug 13, 2012, 12:42:07 PM
to mil...@googlegroups.com
I 'think' there is a mapping of DISA requirements to NIST requirements somewhere out there, but I don't recall where I saw it. 

Some things from the RHEL 5 STIG can be mapped over to RHEL 6, but not everything.  This is especially true when it comes to writing remediation content. I won't even start on my tangent of how bad the DISA SCAP content is, but the community is trying to solve this problem.  

https://fedorahosted.org/scap-security-guide/ 

SCAP Security Guide is working with DISA to submit community content (backed by NSA and vendors).  These requirements would become the RHEL 6 STIG / SCAP content. 

Kit Plummer

Aug 13, 2012, 12:49:38 PM
to mil...@googlegroups.com
Thanks.

Comments inline below:

On Aug 13, 2012, at 9:24 AM, Vincent Passaro <vincent...@gmail.com> wrote:

Kit,

Good question, unfortunately its a very long winded answer.  I will try and give you the Readers Digest version and hopefully it will make sense. 

DISA recently changed their requirements for RHEL 5 systems.  This is largely in part due to the recent changes from DISA 'home grown' tools such as Gold Disk / Unix SRR to SCAP.  The old requirements for systems were based on the now outdated Unix/Linux Checklist, which is what Aqueduct has Puppet content for. The latest version used for RHEL systems is the RHEL 5 STIG, which can be found here:


For the new requirements I wrote the remediation content in Bash, not in puppet for a few reasons.  

1.  Puppet is vendor supported by Puppet Labs, but its a pay for license.  Some commands / systems I work for/with don't have the funding to go buy a license.  Because of the USMC / NAVY requirement that all software be supported from a vendor, using Puppet wasn't the easiest answer.

Puppet Labs provides an "Enterprise" version… which is pay-for-license, but a license is not required to use Puppet.  Puppet itself is OSS.


2.  Between myself and others in the industry, the community came to the conclusion that Bash was the preferred 'way' of using remediation content (for now at least)

The problem quickly becomes how you abstract away from the OS.  What if we want to provide what I think you're calling "content" for more than one OS (including multiple versions and variants, e.g. 32-bit versus 64-bit)?

It feels like this conversation is happening in a lot of places without the required level of understanding of Configuration Management.  There's more to it than just running checklists, or scripts that run through checklists.

3.  Not that many people are up to speed on Puppet scripting, so using bash encourages more contributions from the community. 

I agree that there's probably more familiarity with Bash, but I strongly disagree that that leads to greater potential contribution from the community.  

The Puppet community, outside of the DoD/Fed space is quite vibrant.  Just search for puppet and your favorite server.

I think the thing that is most concerning to me about this statement is that it presupposes that the way we are doing it today (and are comfortable with) is the right way, without looking at what is happening in other industries/enterprises.

Bash is just a shell language.  Its purpose isn't to manage tens of thousands of nodes and the configuration thereof.

Howard Cohen

Aug 13, 2012, 12:50:04 PM
to mil...@googlegroups.com

Vincent Passaro

Aug 13, 2012, 1:21:36 PM
to mil...@googlegroups.com
Comments below (start with VP)

On Mon, Aug 13, 2012 at 9:49 AM, Kit Plummer <kitpl...@gmail.com> wrote:
Thanks.

Comments inline below:

On Aug 13, 2012, at 9:24 AM, Vincent Passaro <vincent...@gmail.com> wrote:

Kit,

Good question, unfortunately its a very long winded answer.  I will try and give you the Readers Digest version and hopefully it will make sense. 

DISA recently changed their requirements for RHEL 5 systems.  This is largely in part due to the recent changes from DISA 'home grown' tools such as Gold Disk / Unix SRR to SCAP.  The old requirements for systems were based on the now outdated Unix/Linux Checklist, which is what Aqueduct has Puppet content for. The latest version used for RHEL systems is the RHEL 5 STIG, which can be found here:


For the new requirements I wrote the remediation content in Bash, not in puppet for a few reasons.  

1.  Puppet is vendor supported by Puppet Labs, but its a pay for license.  Some commands / systems I work for/with don't have the funding to go buy a license.  Because of the USMC / NAVY requirement that all software be supported from a vendor, using Puppet wasn't the easiest answer.

Puppet Labs provides an "Enterprise" version…which is pay-for licensed but not required to use Puppet.  Puppet itself is OSS software.


VP - Fully aware; Puppet has to be licensed to meet the requirements of SECNAVINST 5230.15. 

2.  Between myself and others in the industry, the community came to the conclusion that Bash was the preferred 'way' of using remediation content (for now at least)

The problem quickly becomes how can you abstract away from the OS.  What if we want to provide, what I think you're calling "content" for more than one OS (including multiple versions and variants, e.g. 32-bit versus 64-bit).

VP - Aqueduct is focused on RHEL; we haven't gone down the path of trying to address other operating systems.  

It feels like this conversation is happening in a lot of places without the required level of understanding of Configuration Management.  There's more to it than just running checklists, or scripts that run through checklists.

VP - The conversation is happening all over.  Everyone is trying to solve the same problem, and not everyone will ever have the same solution. 
 
3.  Not that many people are up to speed on Puppet scripting, so using bash encourages more contributions from the community. 

I agree that there's probably more familiarity with Bash but highly disagree that that leads to a greater potential from a community.  

VP - I don't disagree that Puppet has a lot of potential.  But Open Source is run by the community, so what they want is what they create.  

The Puppet community, outside of the DoD/Fed space is quite vibrant.  Just search for puppet and your favorite server.

VP - Everyone else in the world can be vibrant, but that doesn't help the DoD community if they aren't up to speed on it yet.   

I think the thing that is most concerning to me about this statement, is that it presupposes that the way we are doing it today (and are comfortable with) is the right way without looking at what is happening in other industries/enterprises.

VP - I understand that's how it's viewed, but right now there really aren't many projects out there like Aqueduct that are specifically tailored to remediation content, so it kind of is the 'new' way forward (open community development). 
 
Bash is just a shell language.  It's purpose isn't to manage 10s of 1000s of nodes and the configuration thereof.

VP - Sure, but Bash does reside on every system, be it 10 or 1,000.  There are ways to make Bash work.  Is it the best option?  Maybe for some people; maybe not for others.  In the end, all of this is driven by the community.  If the community isn't ready for a technology, it won't be adopted.  Does this mean Aqueduct won't have any Puppet content?  Nope.  We're always looking to see what is going to work best. 

Lastly, this problem has to be looked at not just from a configuration management perspective but also from an auditor's perspective.  What I have seen in the community is that most people take the Aqueduct content and wrap it into their own internal process.  Everyone tackles the problem differently.  Some people do all their remediation at provisioning time via Kickstart and call it a day.  Others load the content into cron and let it continually maintain compliance based on the system's 'profile' of findings. 
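
As an illustration of that cron pattern, a hypothetical /etc/cron.d entry (all paths are made up) might look like:

```shell
# Hypothetical /etc/cron.d/aqueduct entry (paths are illustrative):
# re-run the remediation content nightly so the system is pulled back
# into compliance if it has drifted.
0 2 * * * root /opt/aqueduct/remediate.sh >> /var/log/aqueduct.log 2>&1
```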

Kit Plummer

Aug 13, 2012, 2:06:06 PM
to mil...@googlegroups.com
Great dialog…thanks.  More below:

On Aug 13, 2012, at 10:21 AM, Vincent Passaro <vincent...@gmail.com> wrote:

Comments below (start with VP)

On Mon, Aug 13, 2012 at 9:49 AM, Kit Plummer <kitpl...@gmail.com> wrote:
Thanks.

Comments inline below:

On Aug 13, 2012, at 9:24 AM, Vincent Passaro <vincent...@gmail.com> wrote:

Kit,

Good question, unfortunately its a very long winded answer.  I will try and give you the Readers Digest version and hopefully it will make sense. 

DISA recently changed their requirements for RHEL 5 systems.  This is largely in part due to the recent changes from DISA 'home grown' tools such as Gold Disk / Unix SRR to SCAP.  The old requirements for systems were based on the now outdated Unix/Linux Checklist, which is what Aqueduct has Puppet content for. The latest version used for RHEL systems is the RHEL 5 STIG, which can be found here:


For the new requirements I wrote the remediation content in Bash, not in puppet for a few reasons.  

1.  Puppet is vendor supported by Puppet Labs, but its a pay for license.  Some commands / systems I work for/with don't have the funding to go buy a license.  Because of the USMC / NAVY requirement that all software be supported from a vendor, using Puppet wasn't the easiest answer.

Puppet Labs provides an "Enterprise" version…which is pay-for licensed but not required to use Puppet.  Puppet itself is OSS software.


VP - Fully aware that puppet is licensed to meet requirements for SECNAVINST-5230.15. 

2.  Between myself and others in the industry, the community came to the conclusion that Bash was the preferred 'way' of using remediation content (for now at least)

The problem quickly becomes how can you abstract away from the OS.  What if we want to provide, what I think you're calling "content" for more than one OS (including multiple versions and variants, e.g. 32-bit versus 64-bit).

VP - Aqueduct is focused on RHEL, we haven't gone down the path of trying to address other operating systems.

Right, and I'm sure there will be efforts to support other systems, which may or may not be Puppet.  But I'm kinda pointing out that this may not be the right direction.  Working out from the OS gives us OS-specific solutions.  I'm just saying that there are tools out there that invert it, providing the potential to abstract away from the OS while maintaining the ability to house OS-specific requirements at the same time.

 

It feels like this conversation is happening in a lot of places without the required level of understanding of Configuration Management.  There's more to it than just running checklists, or scripts that run through checklists.

VP - The conversation is happening allover.  Everyone is trying to solve the same problem and not everyone will ever have the same solution. 

This is true in and out of DoD.  There isn't a single best solution… but I'm positive it isn't a self- or community-maintained collection of shell scripts.  If it were, we wouldn't need all of these new tools, or even the Configuration Management discussion (or the ideals behind continuous delivery).

 
3.  Not that many people are up to speed on Puppet scripting, so using bash encourages more contributions from the community. 

I agree that there's probably more familiarity with Bash but highly disagree that that leads to a greater potential from a community.  

VP - I don't disagree that Puppet has a lot of potential.  But Open Source is ran by the community, so what they want is what they create.  

[Hmmn.  That's a bit of the "inmates running the asylum," isn't it?  Almost all of the successful OSS projects have parents, or some kind of governance to control them from the top down.  It doesn't necessarily matter to this discussion, other than we should agree that requirements just don't get accepted and implemented on whims.]  

The point I was trying to make was that while you could recreate all of the capability behind Puppet/Chef/Salt with Bash… nobody will, because all three of those are already OSS.


The Puppet community, outside of the DoD/Fed space is quite vibrant.  Just search for puppet and your favorite server.

VP - Everyone else in the world can be vibrant, but that doesn't help the DoD community if they aren't up to speed on it yet.   

Again, it doesn't have to be that direction.  Why can't DoD try to emulate what successful enterprises do?  Don't think for a second that AMEX or Kaiser Permanente don't have the same kinds of requirements that we do WRT CM, governance, and auditing.  If the DoD wants to be a part of the OSS communities, then we have to go to them and be a part - and stop expecting them to care, or come to us.  


I think the thing that is most concerning to me about this statement, is that it presupposes that the way we are doing it today (and are comfortable with) is the right way without looking at what is happening in other industries/enterprises.

VP - I understand that's how its viewed, but right now there really aren't many projects out there like Aqueduct that are specifically tailored to remediation content, so it kind of is the 'new' way forward (open community development). 
 
Bash is just a shell language.  It's purpose isn't to manage 10s of 1000s of nodes and the configuration thereof.

VP - Sure, but bash does reside on every system bet it 10 or 1000.  There are ways to making bash work.  Is it the best option?  Maybe it is for some people?  Maybe not for others?   In the end all of this is driven by the community.  If the community isn't ready for a technology it won't be adopted.  Does this mean Aqueduct won't have any puppet content?  Nope.  We're always looking to see what is going to work best. 

This is a dangerous hole, here be dragons.  I'd say generally, and admitting that I'm definitely part of it, that our community is rather blind to what technologies are out there, or what's happening in the enterprise.  Never mind that our pace of adoption is rather slow because of the way we operate (contractually, risk, etc.).  

I get the 'its on every system' argument.  But, I don't buy it.  Sure, there are ways to make it work…but, why?  Why not use an OSS tool that is trying to solve the higher-level problem (logging and auditing being one example)?

Any tool is going to take an investment.  Are you saying the DoD isn't ready for Configuration Management?  Or, that we're not capable?  Is it a tool problem, or a people problem?  Or, perhaps there are just more important problems.

Fishing: what if there were a supported Puppet-based STIG suite that could be run against Ubuntu as well as RHEL?  Would it be worth it then?


Lastly, this problem has to be looked at not just from a configuration management perspective but also from an Auditor perspective.  From what I have seen in the community is that most people are taking the Aqueduct content and wrapping it into their own internal process.  Everyone tackles the problem differently.  Some people do all their remediation at time of provisioning via Kickstart and call it a day.  Others load the content into cron and allow it to continually maintain compliance based on the systems 'profile' of findings. 

And how does all of that get logged, reported, or made available to an auditor?  What is the delta between the baseline and the final system configuration (full load out)?  Where do the manifests/states/changes get stored?  How does that scale in terms of Bash scripts?

FWIW, I'm not really trying to argue against the script mentality.  I'm just having a hard time seeing how we get away from OS/distro-specific thinking and solutions.

shannon.mitchell

Aug 13, 2012, 2:20:28 PM
to mil...@googlegroups.com
It is an open source project.  I don't think anyone would complain if you started working on the Puppet content and the drivers for various OSes.
Shannon Mitchell
Fusion Technology, LLC


Kit Plummer

Aug 13, 2012, 2:23:49 PM
to mil...@googlegroups.com
Are you talking about Aqueduct?

shannon.mitchell

Aug 13, 2012, 2:59:53 PM
to mil...@googlegroups.com
Yes.  I would like to eventually see some Puppet content in Aqueduct, for those who can use it, in addition to the current Bash base.  The STIG OVAL content is meant to provide the same level of abstraction that Puppet would, but the XML-based OVAL language is pretty limited in what it can do.  Puppet content would provide an alternative abstraction layer that is more popular, flexible, and enjoyable to work with.  I think the security guys here would also pass out with excitement if they had a nice Puppet dashboard showing which of their servers are out of compliance.

Lee Kinser

Aug 13, 2012, 3:22:59 PM
to mil...@googlegroups.com
Kit,
I don't believe Puppet provides the level of OS-independent configuration management that you are envisioning.  Content that could STIG Red Hat, Ubuntu, or any other distro would be great, but it will still be OS-specific.  Puppet manages files, the content of those files, and their permissions... it does not allow you to instruct it to "audit all chmod, chown, and chgrp commands" and expect it to know what has to be done; you still have to tell it what configuration is required and in what file, which will vary between distributions.  So what you would end up with is Puppet content specific to RHEL, specific to Ubuntu, specific to SUSE, etc... just as you would have with Bash.
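
As a hypothetical sketch of that point: even "cross-distro" Puppet content ends up branching on OS facts (the service names below are real, the class name is made up):

```puppet
# Hypothetical sketch (made-up class name): even "cross-distro"
# content has to encode per-distribution knowledge, e.g. the SSH
# service is named "sshd" on RHEL but "ssh" on Debian/Ubuntu.
class stig::ssh_service {
  case $operatingsystem {
    'RedHat', 'CentOS': { $ssh_svc = 'sshd' }
    'Debian', 'Ubuntu': { $ssh_svc = 'ssh' }
    default:            { fail("No STIG content for ${operatingsystem}") }
  }

  service { $ssh_svc:
    ensure => running,
    enable => true,
  }
}
```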

Now, don't get me wrong, I'm not saying there aren't benefits to Puppet over Bash in a wide variety of circumstances.  Its ability to centrally manage massive numbers of hosts via a puppetmaster server, its logic-driven (if/then/else) configuration content, and its configuration sourcing, which allows a single config to be pulled from multiple chunks of possible configs, are all amazingly useful and powerful features.  However, this is not a deployment scenario where those are necessary, and there are well-defined issues with choosing Puppet over Bash.  The Aqueduct devs are creating automation for DoD entities, which have specific requirements.  Of those requirements, vendor support for OSS is a big issue for many of those entities.  Bash is vendor-supported as long as the OS itself is; Puppet is not.  It's as simple as that.  So if we develop content for Puppet, we instantly make the project more difficult for a large group of people to consume.  So why go that route?

-Lee



Kit Plummer

Aug 13, 2012, 3:57:57 PM
to mil...@googlegroups.com

On Aug 13, 2012, at 12:22 PM, Lee Kinser <lee.k...@gmail.com> wrote:

> Kit,
> I don't believe Puppet provides the level of OS independent
> configuration management that you are envisioning. Content that could
> STIG Red Hat, Ubuntu, or any other distro would be great, but it will
> still be OS specific. Puppet manages files, the content of those
> files, and their permissions... it does not allow you to instruct
> puppet to "audit all chmod, chown, and chgrp commands" and expect it
> to know what has to be done, you still have to instruct it on what
> configuration is required and in what file, which will vary between
> distributions. So what you would end up with, is puppet content
> specific to RHEL and specific to Ubuntu and specific to SUSE, etc...
> just as you would have with BASH.
>

Point understood. I do get that there is a difference between auditing any platform and configuring one for application runtime.

"audit all chmod, chown, and chgrp commands" could easily be a Puppet module, a single module that can run the different steps for the different platforms. I'm not sure what is wrong with this being run by something other than Bash? In fact, there's nothing preventing Puppet from running the Bash script, which the result of could be managed appropriately.

Yes, obviously there are differences among the different platforms/distributions. But that's the point. Even if there is platform-specific configuration, which there surely is, why would you still want that to be in separate modules, or even separate projects? Puppet is already doing this on many levels (e.g. package management).
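
For instance, a hypothetical sketch (made-up script path, not a real Aqueduct module) of Puppet wrapping an existing Bash remediation script:

```puppet
# Hypothetical: wrap an existing Bash remediation script in a Puppet
# exec resource. "unless" keeps it idempotent -- the script only runs
# when the check detects non-compliance, and Puppet logs the change.
exec { 'stig-permit-root-login':
  command => '/opt/aqueduct/fix-permit-root-login.sh',
  unless  => 'grep -q "^PermitRootLogin no" /etc/ssh/sshd_config',
  path    => ['/bin', '/usr/bin'],
}
```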

> Now, don't get me wrong, I'm not saying there aren't benefits to
> puppet over bash in a wide variety of circumstances. It's ability to
> centrally manage massive numbers of hosts via a puppetmaster server,
> logic (if/then/else) driven configuration content, and configuration
> sourcing to allow for a single config to be pulled from multiple
> chunks of possible configs are all amazingly useful and powerful
> features. However, this is not a deployment scenario where those are
> necessary and there are well defined issues with choosing puppet over
> BASH. The Aqueduct devs are creating automation for DoD entities,
> which have specific requirements. Of those requirements, vendor
> support for OSS, is a big issue for many of those entities. Bash is
> vendor supported as long as the OS itself is, puppet is not. It's as
> simple as that. So if we develop content for puppet, we instantly
> make the project more difficult for a large group of people to
> consume. So why go that route?

Bash is vendor supported? Do the distributions support your scripts, or help train you to write them in the most effective way (at scale)?

I'm not sure I understand why using Puppet would be harder to consume. Is it because it is one of a handful of alternatives that doesn't come preinstalled on a distribution?

I don't get why the DoD thinks its requirements are always so unique. Sure, the STIG itself is unique, but auditing isn't.

I have nothing against Aqueduct, or Bash for that matter - or even doing CM with Bash. I just think the view of CM needs to be expanded (beyond just running checklists) to include maintenance, reporting and monitoring - which is being done by tools with a bigger scope than the shell.

As an aside, but picking up on your last question, there are a few other benefits to a tool like Puppet. There is a TDD suite for Puppet, so you could conceivably CI your "infrastructure as code." There is a precompiler that can verify syntactic correctness. It is possible to run Puppet in "test" (noop) mode, which will run everything and report the required changes. Puppet is modular by default… each module can be shared and used in a set or subset, and there is already a Puppet Labs-provided Forge. There are many modules available on GitHub, Gitorious, Bitbucket, etc., which can be used/forked as needed. Puppet's node database can be externalized, allowing node configurations to live in a DB, which is extremely important in large-scale (number of nodes) environments. Puppet can report and dashboard state. Puppet is push or pull (and can even run locally with no server). I could go on… but I am really just highlighting Configuration Management as something more than a checklist.

Lee Kinser

Aug 13, 2012, 5:11:51 PM
to mil...@googlegroups.com
LK -> What the DoD means by "supported" is that there is an entity standing behind the software that can be held accountable for keeping it updated and secure from a vulnerability standpoint. So yes, Bash is supported, just as Apache is, but don't expect to call the vendor looking for help writing your web page.


>
> I'm not sure I understand why using Puppet would be harder to consume. Is it because it is one of a handful of alternatives that doesn't come preinstalled on a distribution?

LK -> This has already been mentioned several times in this thread. For many DoD projects, Puppet CANNOT be deployed on their systems unless the project gets a special waiver or purchases vendor support from Puppet Labs, which would mean more complexity or cost to the end user if Aqueduct were to standardize on Puppet. This is due to requirements set forth in SECNAVINST 5230.15 (http://goo.gl/vVyJa) section 5, subsection c:

    If the particular OSS application is not acquired under commercial
    vendor support, then the program and/or command who requires
    continued use of the OSS application must request and receive a
    waiver to this policy, in accordance with paragraph 9 below.

That waiver must come from the DON Deputy CIO. And this isn't a
problem for ALL DoD projects, but the ones under higher scrutiny from
their certification authority would absolutely get hit by this.

>
> I don't get why the DoD thinks it's requirements are always so unique. Sure, the STIG itself is unique but auditing isn't.

LK -> AMEX and Kaiser Permanente are not in the business of guiding nuclear missiles while simultaneously failing over guidance subsystems with less-than-200-microsecond response times because the front half of the ship just got blown off... sometimes the DoD's requirements can be quite unique.

Now, that being said, the DoD is not some unique butterfly in this
case, but it does have its own set of requirements, just like everyone
else. In this case, and in my humble opinion, those requirements make
puppet a bad option.


>
> I have nothing against Aqueduct, or Bash for that matter - or even doing CM with Bash. I just think the view of CM needs to be expanded (beyond just running checklists) to include maintenance, reporting and monitoring - which is being done by tools with a bigger scope than the shell.
>
> As an aside but picking up on your last question, there are a few other benefits to a tool like Puppet. There is a TDD suite for Puppet, yes you could possibly CI your 'infrastructure-as-code'. There is a precompiler that can verify syntactic correctness. It is possible to run Puppet in "test" mode, which will run everything and report the required changes. Puppet by default is modular…each module can be shared and used in a set or subset - there is already a Puppet Labs provided Forge. There are many modules available in Github, Gitorious, Bitbucket, etc. which can be used/forked as needed. Puppet's node database can be externalized, allowing for node configurations to exist in a DB, which is extremely important in large scale (number of nodes) environments. Puppet can report and dashboard state. Puppet is push or pull (and even local with no server). I could go on…but am really just highlighting Configuration Management as something more than a checklist.

LK -> Again, I'm right there with ya, the project I used to work on
had puppet in use all over the place because of me... until DISA came
in and told them they had to buy support on it or take it out.
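For reference, the "precompiler" and "test mode" mentioned in the quoted aside above correspond to ordinary Puppet CLI invocations. A minimal sketch, assuming Puppet is installed and with a hypothetical manifest path:

```shell
# Hypothetical workflow exercising the Puppet features described above.
# Assumes Puppet is installed; manifests/site.pp is an illustrative path.

# Syntax-check the manifest ("precompiler" verification).
puppet parser validate manifests/site.pp

# Dry run ("test" mode): report what would change without changing anything.
puppet apply --noop manifests/site.pp
```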

Kit Plummer

unread,
Aug 13, 2012, 5:24:49 PM
to mil...@googlegroups.com
Thanks for the clarification on SECNAVINST-5230.15 (http://goo.gl/vVyJa)

I do have to comment on one of the points…since you laid it out there, ripe for the picking.
Apples and oranges. I used to write software for that system. Actually, the defensive version: SM-3. I'd like to think the IT-centric software world we are talking about is the same as the embedded RT-gooey-coolness world you've described, but it isn't (and BTW both AMEX and KP have systems written in the same RTOSes and against similar critical-software specs). Let's be real - we're not talking DO-178B here. I'm not downplaying the critical nature of some of these systems, quite the opposite - since we're discussing auditing and STIGs.

But, my point still stands. "Mission critical software" is used in all kinds of industries - not just the DoD.

John Scott III

unread,
Aug 13, 2012, 5:25:36 PM
to mil...@googlegroups.com
point of clarification:
> LK -> What the DoD means by "supported" is that there is an entity
> standing behind the software that can be held accountable to keep it
> updated and secured from a vulnerability standpoint. So yes, bash is
> supported, just as is apache, but don't expect to call the vendor
> looking for help writing your web page.

This is not true; the DoD CIO memo means there must be a support plan in place, which does not necessarily mean a vendor. For some OSS projects that don't have a company providing support, this has sometimes been a stumbling block to adoption of open source software.
-----------------------------------------------------------
John Scott
240.401.6574
< jms...@gmail.com >
http://powdermonkey.blogs.com
@johnmscott

Have you joined MIL-OSS?:
http://groups.google.com/group/mil-oss
http://mil-oss.org/

Vincent Passaro

unread,
Aug 13, 2012, 5:59:56 PM
to mil...@googlegroups.com
On Mon, Aug 13, 2012 at 11:06 AM, Kit Plummer <kitpl...@gmail.com> wrote:
Great dialog…thanks.  More below:

On Aug 13, 2012, at 10:21 AM, Vincent Passaro <vincent...@gmail.com> wrote:

Comments below (start with VP)

On Mon, Aug 13, 2012 at 9:49 AM, Kit Plummer <kitpl...@gmail.com> wrote:
Thanks.

Comments inline below:

On Aug 13, 2012, at 9:24 AM, Vincent Passaro <vincent...@gmail.com> wrote:

Kit,

Good question; unfortunately it's a very long-winded answer.  I will try to give you the Reader's Digest version and hopefully it will make sense. 

DISA recently changed their requirements for RHEL 5 systems, largely due to the recent shift from DISA 'home grown' tools such as Gold Disk / Unix SRR to SCAP.  The old requirements for systems were based on the now-outdated Unix/Linux Checklist, which is what Aqueduct has Puppet content for. The latest version used for RHEL systems is the RHEL 5 STIG, which can be found here:


For the new requirements I wrote the remediation content in Bash, not in Puppet, for a few reasons.  

1.  Puppet is vendor supported by Puppet Labs, but it's a paid license.  Some commands / systems I work for/with don't have the funding to go buy a license.  Because of the USMC / Navy requirement that all software be supported by a vendor, using Puppet wasn't the easiest answer.

Puppet Labs provides an "Enterprise" version…which is a paid license, but that isn't required to use Puppet.  Puppet itself is OSS.


VP - Fully aware that puppet is licensed to meet requirements for SECNAVINST-5230.15. 

2.  Between myself and others in the industry, the community came to the conclusion that Bash was the preferred 'way' of using remediation content (for now at least)
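To make the Bash approach concrete, a typical remediation script is a small check-then-fix. The sketch below is illustrative only: the target file and required mode are stand-ins for a real STIG rule, not actual Aqueduct content.

```shell
#!/bin/bash
# Hypothetical check-then-fix in the Aqueduct style: test the current
# state, change it only if non-compliant, and report what happened.

FILE="/tmp/demo-shadow"   # stand-in for a real target such as /etc/shadow
REQUIRED_MODE="600"       # mode the (hypothetical) rule requires

touch "$FILE"
chmod 644 "$FILE"         # start deliberately non-compliant for the demo

CURRENT_MODE=$(stat -c '%a' "$FILE")
if [ "$CURRENT_MODE" != "$REQUIRED_MODE" ]; then
    chmod "$REQUIRED_MODE" "$FILE"
    echo "FIXED: $FILE mode $CURRENT_MODE -> $REQUIRED_MODE"
else
    echo "OK: $FILE already compliant"
fi
```

Run a second time, the script reports OK and changes nothing, which is the idempotence these scripts need when applied repeatedly.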

The problem quickly becomes how you can abstract away from the OS.  What if we want to provide what I think you're calling "content" for more than one OS (including multiple versions and variants, e.g. 32-bit versus 64-bit)?

VP - Aqueduct is focused on RHEL, we haven't gone down the path of trying to address other operating systems.

Right, and I'm sure there will be efforts to support other systems, which may or may not be Puppet.  But, I'm kinda pointing out that this may not be the right direction.  Working out from the OS gives us OS-specific solutions.  I'm just saying that there is/are a tool(s) out there that inverts it, providing the potential to abstract away from the OS while maintaining the ability to house OS-specific requirements at the same time.
 

It feels like this conversation is happening in a lot of places without the required level of understanding of Configuration Management.  There's more to it than just running checklists, or scripts that run through checklists.

VP - The conversation is happening all over.  Everyone is trying to solve the same problem, and not everyone will ever have the same solution. 

This is true in and out of DoD.  There isn't a single best solution…but I'm positive it isn't a self- or community-maintained collection of shell scripts.  If it were, we wouldn't have the need for all of these new tools, or even the Configuration Management discussion (or the ideals behind continuous delivery).
 
VP - Noted.  Would you be interested in contributing puppet code for compliance remediation then, since that is the direction you think it should go? 

3.  Not that many people are up to speed on Puppet scripting, so using bash encourages more contributions from the community. 

I agree that there's probably more familiarity with Bash but highly disagree that that leads to a greater potential from a community.  

VP - I don't disagree that Puppet has a lot of potential.  But Open Source is run by the community, so what they want is what they create.  

[Hmmn.  That's a bit of the "inmates running the asylum" isn't it?  Almost all of the successful OSS projects have parents, or some kind of governance to control them from the top down.  It doesn't necessarily matter to this discussion, other than we should agree that requirements just don't get accepted and implemented on whims.]  

The point I was trying to make was that while you could recreate all of the capability behind Puppet/Chef/SALT with Bash…nobody will, because all three of those are already OSS.


The Puppet community, outside of the DoD/Fed space is quite vibrant.  Just search for puppet and your favorite server.

VP - Everyone else in the world can be vibrant, but that doesn't help the DoD community if they aren't up to speed on it yet.   

Again, it doesn't have to be that direction.  Why can't DoD try to emulate what successful enterprises do?  Don't think for a second that AMEX or Kaiser Permanente don't have the same kinds of requirements that we do WRT to CM, governance and auditing.  If the DoD wants to be a part of the OSS communities, then we have to go to them and be a part - and stop expecting them to care, or come to us.  

VP - Reverse that logic.  Why don't Amex and Kaiser adopt what the military uses?  The military requires ruggedized systems that can withstand high temperatures, dust, nuclear explosions, etc.   It's because they don't have the same requirements...or more importantly the same mandates.  Technology is adapted to the organization's requirements, not the other way around. MRG Messaging, Oracle, and SELinux would not exist if it hadn't been for the government, because it had unique requirements that no one could fulfill. 


I think the thing that is most concerning to me about this statement, is that it presupposes that the way we are doing it today (and are comfortable with) is the right way without looking at what is happening in other industries/enterprises.

VP - I understand that's how its viewed, but right now there really aren't many projects out there like Aqueduct that are specifically tailored to remediation content, so it kind of is the 'new' way forward (open community development). 
 
Bash is just a shell language.  Its purpose isn't to manage 10s of 1000s of nodes and the configuration thereof.

VP - Sure, but bash does reside on every system, be it 10 or 1,000.  There are ways to make bash work.  Is it the best option?  Maybe it is for some people?  Maybe not for others?   In the end all of this is driven by the community.  If the community isn't ready for a technology, it won't be adopted.  Does this mean Aqueduct won't have any puppet content?  Nope.  We're always looking to see what is going to work best. 

This is a dangerous hole, here be dragons.  I'd say generally, and admitting that I'm definitely part of it, that our community is rather blind to what technologies are out there, or what's happening in the enterprise.  Never mind that our pace of adoption is rather slow because of the way we operate (contractually, risk, etc.).  

I get the 'it's on every system' argument.  But, I don't buy it.  Sure, there are ways to make it work…but, why?  Why not use an OSS tool that is trying to solve the higher-level problem (logging and auditing being one example)?

VP - You don't buy bash being on every system?  Sure there are ways to make puppet work, but why? Are you saying that it CAN'T be done in bash...or just that it shouldn't? I can wrap puppet modules around bash scripts...does that count?  

Any tool is going to take an investment.  Are you saying the DoD isn't ready for Configuration Management?  Or, that we're not capable?  Is it a tool problem, or a people problem?  Or, perhaps there are just more important problems.

VP - ACK, tools do take an investment. In my eyes, the process needs to be sorted out before we try to build solutions around a broken process. (See comment about scap-security-guide)

Fishing: What if there was a supported Puppet-based STIG suite that could be run against Ubuntu as well as RHEL?  Would it be worth it then?

VP - Ubuntu?  No, would be pretty worthless for 90 percent of the DoD since most of the systems have support requirements / EAL certifications that are needed.   


Lastly, this problem has to be looked at not just from a configuration management perspective but also from an Auditor perspective.  From what I have seen in the community, most people are taking the Aqueduct content and wrapping it into their own internal process.  Everyone tackles the problem differently.  Some people do all their remediation at time of provisioning via Kickstart and call it a day.  Others load the content into cron and allow it to continually maintain compliance based on the system's 'profile' of findings. 
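To sketch the cron variant: the pattern is usually a wrapper that runs each remediation script in a content directory and logs pass/fail for later audit. Every path and filename below is hypothetical, not actual Aqueduct layout.

```shell
#!/bin/bash
# Hypothetical cron wrapper for remediation content.
# Directory and log locations are illustrative only.

CONTENT_DIR="/tmp/aqueduct-demo/content"
LOG_FILE="/tmp/aqueduct-demo/remediation.log"

mkdir -p "$CONTENT_DIR"

# Seed one dummy remediation script so the sketch has something to run.
printf '#!/bin/bash\nexit 0\n' > "$CONTENT_DIR/demo-rule.sh"
chmod +x "$CONTENT_DIR/demo-rule.sh"

# Run every script, recording the outcome for later audit.
for script in "$CONTENT_DIR"/*.sh; do
    if "$script"; then
        echo "$(date '+%F %T') PASS $script" >> "$LOG_FILE"
    else
        echo "$(date '+%F %T') FAIL $script" >> "$LOG_FILE"
    fi
done
```

A crontab line such as `0 * * * * /usr/local/sbin/stig-remediate.sh` (path hypothetical) would then re-apply the profile hourly, while a Kickstart `%post` section could invoke the same wrapper once at provisioning time.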

And how does all of that get logged, reported, or made available to an auditor?  What is the delta between the baseline and the final system configuration (full load out)?  Where do the manifests/states/changes get stored?  How does that scale in terms of Bash scripts?

VP - Scans of systems for compliance are done with the specific tools directed by the governing organization, not some new software that you found along the way.  I'm guessing you haven't done much systems engineering for compliance / auditing?  It can be a little confusing if you haven't.  I can talk to it more if you like. 

FWIW, I'm not really trying to argue against the script mentality.  I'm just having a hard time seeing how we get away from OS/distro-specific thinking and solutions.
 
VP-  Well, we can't get away from it.  The guidance that is written on how a system needs to be configured is tailored specifically to the OS.  Since every OS is different in its own way, everything has to be tailored to that OS.  

Lee Kinser

unread,
Aug 13, 2012, 6:16:44 PM
to mil...@googlegroups.com
On Mon, Aug 13, 2012 at 5:25 PM, John Scott III <jms...@gmail.com> wrote:
> point of clarification:
>> LK -> What the DoD means by "supported" is that there is an entity
>> standing behind the software that can be held accountable to keep it
>> updated and secured from a vulnerability standpoint. So yes, bash is
>> supported, just as is apache, but don't expect to call the vendor
>> looking for help writing your web page.
>
> This is not true; the DoD CIO memo means there must be a support plan in place, which does not necessarily mean a vendor. For some OSS projects that don't have a company providing support, this has sometimes been a stumbling block to adoption of open source software.

LK -> That's why I tried to be vague and said "an entity", because
there are caveats in place for the project utilizing the OSS to have
processes in place for updating it and maintaining it themselves or
having some other 3rd party provide that. And I agree, this is a huge
road block for OSS and one that I have personally fought against in
the past, with unreliable results... hence my desire to avoid putting
the users of Aqueduct in that situation if we don't have to.

Vincent Passaro

unread,
Aug 13, 2012, 6:26:46 PM
to mil...@googlegroups.com
On Mon, Aug 13, 2012 at 3:16 PM, Lee Kinser <lee.k...@gmail.com> wrote:

LK -> That's why I tried to be vague and said "an entity", because
there are caveats in place for the project utilizing the OSS to have
processes in place for updating it and maintaining it themselves or
having some other 3rd party provide that.  And I agree, this is a huge
road block for OSS and one that I have personally fought against in
the past, with unreliable results... hence my desire to avoid putting
the users of Aqueduct in that situation if we don't have to.

VP - Lee hit the nail on the head with that.  Aqueduct is OSS; if someone wants to take it and modify it how they see fit, go for it - just adhere to the licensing requirements. If the community builds puppet content, Aqueduct can have both bash and puppet...or just bash...or just puppet...or whatever new technology comes out next week.

Kit Plummer

unread,
Aug 13, 2012, 6:47:13 PM
to mil...@googlegroups.com
Of course.  I don't know if I'll contribute Puppet modules to Aqueduct though.  Nothing to do with Aqueduct, but the need for a module to be reusable with or without Aqueduct.  Whether Aqueduct can take advantage of the module is up to them.


3.  Not that many people are up to speed on Puppet scripting, so using bash encourages more contributions from the community. 

I agree that there's probably more familiarity with Bash but highly disagree that that leads to a greater potential from a community.  

VP - I don't disagree that Puppet has a lot of potential.  But Open Source is ran by the community, so what they want is what they create.  

[Hmmn.  That's a bit of the "inmates running the asylum" isn't it?  Almost all of the successful OSS projects have parents, or some kind of governance to control them from the top down.  I doesn't necessarily matter to this discussion other than we should agree that requirements just don't get accepted and implemented on whims.]  

The point I was trying to make was that while you could recreate all of the capability behind Puppet/Chef/SALT with Bash…nobody will, because all three of those are already OSS.


The Puppet community, outside of the DoD/Fed space is quite vibrant.  Just search for puppet and your favorite server.

VP - Everyone else in the world can be vibrant, but that doesn't help the DoD community if they aren't up to speed on it yet.   

Again, it doesn't have to be that direction.  Why can't DoD try to emulate what successful enterprises do?  Don't think for a second that AMEX or Kaiser Permanente don't have the same kinds of requirements that we do WRT to CM, governance and auditing.  If the DoD wants to be a part of the OSS communities, then we have to go to them and be a part - and stop expecting them to care, or come to us.  

VP - Reverse that logic.  Why don't Amex and Kaiser adopt what the military uses?  The military requires ruggedized systems that can withstand high temperatures, dust, nuclear explosions, etc.   It's because they don't have the same requirements...or more importantly the same mandates.  Technology is adapted to the organization's requirements, not the other way around. MRG Messaging, Oracle, and SELinux would not exist if it hadn't been for the government, because it had unique requirements that no one could fulfill. 

Obviously, they do…where it makes sense, and without questioning the source or thinking that their requirements are solely theirs.

I think we're into the rat hole now, and over generalizing - or maybe just not understanding that things are being generalized.    

But, we'll see how it goes with Accumulo.  There are just many more cases where it'd have been a lot better if the government collaborated in the first place - than not. 




I get the 'its on every system' argument.  But, I don't buy it.  Sure, there are ways to make it work…but, why?  Why not use an OSS tool that is trying to solve the higher-level problem (logging and auditing being one example)?

VP - You don't buy bash being on every system?  Sure there are ways to make puppet work, but why? Are you saying that it CAN'T be done in bash...or just that it shouldn't? I can wrap puppet modules around bash scripts...does that count?  

This isn't about Bash being on every system.  It's about using the right tool for the job, instead of making every problem a nail (which requires said Bash-hammer).  Yes, I thought I got that point out somewhere.  It is completely possible to have Puppet exec shell commands or scripts.  This is great because at least you have the right tool handling success and failure, and it fits within a larger workflow naturally.

I'm all for shell.  There are amazing things now being done through the shell (RVM, virtualenv).  Caveat, I don't use Bash as a shell - preferring ZSH, and I don't use shell for scripting.
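A minimal sketch of the "Puppet exec wrapping a Bash script" pattern mentioned above. The script path and its `--check` flag are hypothetical, not actual Aqueduct content; the point is that Puppet owns idempotence, logging, and reporting while the existing shell script does the work.

```puppet
# Hypothetical: delegate the fix to an existing Bash remediation script.
# The "unless" check keeps the exec from re-running on systems that are
# already compliant (the --check flag is an assumed convention).
exec { 'stig-demo-rule':
  command   => '/opt/aqueduct/demo-rule.sh',
  unless    => '/opt/aqueduct/demo-rule.sh --check',
  path      => ['/bin', '/usr/bin'],
  logoutput => true,
}
```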



Fishing: What if there was a supported Puppet-based STIG suite that could be run against Ubuntu as well as RHEL?  Would it be worth it then?

VP - Ubuntu?  No, would be pretty worthless for 90 percent of the DoD since most of the systems have support requirements / EAL certifications that are needed.   

Ah, but that level of thinking is yesterday's.  Suppose Ubuntu gets EAL/CC'd to the same level as RHEL.  Then what?  Do you think the OS platform distribution stays the same?




VP - Scans of systems for compliance are done with the specific tools directed by the governing organization, not some new software that you found along the way.  I'm guessing you haven't done much systems engineering for compliance / auditing?  It can be a little confusing if you haven't.  I can talk to it more if you like. 

Obviously.  ;)  And, I will definitely take you up on that.  I do get that the complexity of compliance makes this much harder than it should be.  I also understand how the scans are done.   But I'm a firm believer in "the level of thinking that created the problem won't solve it".  So, I'm partly playing dumb and rocking the boat for the sake of it.  Hopefully nobody is offended.


FWIW, I'm not really trying to argue against the script mentality.  I'm just having a hard time seeing how we get away from OS/distro-specific thinking and solutions.
 
VP-  Well, we can't get away from it.  The guidance that is written on how a system needs to be configured is tailored specifically to the OS.  Since every OS is different in its own way, everything has to be tailored to that OS.

Understood.  And, perhaps my thinking is straying too far from the STIG/compliance process - and too far towards CM and continuous delivery. 

JesseRedHat

unread,
Aug 13, 2012, 6:50:37 PM
to mil...@googlegroups.com
There is a difference between the DoD CIO policy guidance on OSS being COTS and the Secretary of the Navy (SECNAVINST-5230.15) policy. My interpretation of this memo (as the MCSC/DoN Enterprise License folks explained it to me) is that it limits OSS use to vendor-supported COTS and treats it just like every other commercial software offering on the market, since Open Source is a development model and methodology, not a business model (it's been my experience that these get confused and lumped together). The Navy seems to recognize there are inherent risks to deploying software that is unsupported, and inherent costs to having it supported solely by a systems integrator or government engineers (vendor lock-in!).

NOTE: Despite my login name (legacy) I no longer work for Red Hat. Still remain very interested in the Aqueduct community.

Jesse

Vincent Passaro

unread,
Aug 13, 2012, 7:15:37 PM
to mil...@googlegroups.com
VP - AGAIN!  Aqueduct isn't a tool per se.  It's the remediation content.  If there are puppet modules, the community can take those and use them any way they want.   And they will have to, because not every system can be fully STIG'd, so it will have to be used via their internal process to sort that out and deploy it. 
VP - So you want people to solve problems that don't exist yet, rather than trying to solve what's an issue now?  If Ubuntu, Knoppix, DSL, and every other version of Linux out there get their EAL and start moving into the DoD, then they will get their own STIG, which will drive what needs to be done to the system for DoD compliance.  Hopefully projects like scap-security-guide will be there to write the content in the OS community, not in the great bunker of DISA. 




Obviously.  ;)  And, I will definitely take you up on that.  I do get that the complexity of compliance makes this much harder than it should be.  I also understand how the scans are done.   But I'm a firm believer in "the level of thinking that created the problem won't solve it".  So, I'm partly playing dumb and rocking the boat for the sake of it.  Hopefully nobody is offended.

VP - By all means rock the boat, but let's rock the boat in an attempt to solve the problem (assuming there is one).  While your mentality on problem solving is good, this is what the scap-security-guide is doing...  But I'm truly missing the point of where this thread went.  What was the end goal of this conversation?  


FWIW, I'm not really trying to argue against the script mentality.  I'm just having a hard time seeing how we get away from OS/distro-specific thinking and solutions.
 
VP-  Well, we can't get away from it.  The guidance that is written on how a system needs to be configured is tailored specifically to the OS.  Since every OS is different in its own way, everything has to be tailored to that OS.

Understood.  And, perhaps my thinking is straying too far from the STIG/compliance process - and too far towards CM and continuous delivery. 

VP - They are closely related, but two very different beasts. 

ben

unread,
Aug 16, 2012, 11:50:04 PM
to mil...@googlegroups.com, mawo...@gmail.com
Pardon my ignorance.  Please correct me if I'm wrong.  

I thought there was a remediation automation function in the SCAP content, where you could literally put the script to fix the finding within the SCAP content files, and programs such as (maybe) secstate that provide remediation would run the script with validation to attempt to fix the finding, not just report it?

Lee Kinser

unread,
Aug 17, 2012, 12:08:54 AM
to mil...@googlegroups.com, mawo...@gmail.com
Ben,
You're correct, there are fix tags in SCAP, but they are limited in
how complicated the scripts inside them can be. That being said, as
you mentioned, the fix tag could just call out to an external script/app
that could actually resolve the issue. That configuration is
something we're discussing implementing now in the scap-security-guide.
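For reference, the fix tag in XCCDF is a `fix` element whose `system` attribute names the fix language; the body can be a short inline command or a call to an external script. A hand-written illustrative example follows: the rule id, title, and command are made up for this sketch, not taken from the actual STIG content.

```xml
<!-- Illustrative only: an XCCDF Rule carrying an inline shell fix. -->
<Rule id="demo-shadow-perms" severity="medium">
  <title>Demo: restrict /etc/shadow permissions</title>
  <fix system="urn:xccdf:fix:script:sh">chmod 0000 /etc/shadow</fix>
</Rule>
```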


-Lee

Vincent Passaro

unread,
Aug 17, 2012, 12:18:32 AM
to mil...@googlegroups.com, mawo...@gmail.com
Ben,

You're absolutely correct.  There is the capability to call fix tag statements with SCAP.  I'm not an SCAP expert; that's our sister project's role (scap-security-guide): https://fedorahosted.org/scap-security-guide/.  Jeff Blank and I are currently looking at how we're going to do the implementation of Aqueduct to remediate systems via their content.  This assumes that the scap-security-guide RHEL 6 guidance is adopted by DISA. It also becomes a little more complicated when you start trying to map multiple remediation technologies into those fix statements, but again, we are working that out. 

Tresys is working with both Aqueduct and Scap-Security-Guide on their CLIP project, which utilizes SecState.  I can't speak to exactly how well the process works, as they haven't shown any of us in the community exactly what all is happening, but this is a very rough breakdown:  SecState scans via Scap-Security-Guide content and then calls on Aqueduct content to remediate.  Essentially it's a buffer between the two projects right now.  It's also VERY VERY VERY beta, since both Aqueduct and Scap-Security-Guide are still in active development of both scanning content and remediation content.  Something else to consider with CLIP is that it loads some extra RPMs / SELinux policies, etc. that aren't standard to a RHEL build.  Sometimes this works well for customers, sometimes...not so much (it's bitten me in the past). 

Lastly, the fix tag statement idea works well when both the prose and the remediation content are being built outside the four walls of DISA.  Given what I have seen of the RHEL 5 STIG SCAP content, there is no good way to implement fix tag statements for remediation, simply because the content isn't complete.  The RHEL 5 STIG has 594 checks in total.  350 of those are automated via XCCDF / OVAL (I won't even start to call out how BADLY they are structured), so there is a lot that remains 'manual', yet has a technical implementation behind it.  

Let me know if you have any questions.

-Vince
