Is there a method for puppet to find all suid files?


Sean

Sep 4, 2015, 3:01:28 PM9/4/15
to Puppet Users
Hi,
 
I'm using a module from the Forge to manage auditd rules; the module works quite well and managing rules is very easy.  The hard part is that there's a requirement to audit the use of SUID files on each system.  Without knowing exactly which files are SUID on every server in the field, since there are several Linux flavors and versions, I'm finding myself thinking the only way to accomplish this is to write a custom fact that holds all the SUID files as an array, then pass the array to the resource creator.  I just don't relish the idea of running a find command from / every 30 minutes.
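For reference, the scan I'd be running looks something like this (options shown are the usual ones; adjust to taste):

```shell
# List regular files on the root filesystem with the setuid bit set.
# -xdev stops find from crossing filesystem boundaries; -perm -4000
# matches the setuid bit (add -perm -2000 to catch setgid files too).
find / -xdev -type f -perm -4000 2>/dev/null
```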

Might anyone have any better ideas?

Thank you kindly!

Trevor Vaughan

Sep 6, 2015, 10:22:28 AM9/6/15
to puppet...@googlegroups.com
This rule will let you know when an SUID binary is *executed*: https://github.com/simp/pupmod-simp-auditd/blob/master/templates/base.erb#L50:L55.

I would not run any filesystem searches from Puppet; I would relegate those to cron+syslog so that you can better control the amount of I/O churn on your system over time.
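A minimal sketch of that cron+syslog approach, assuming a daily cron entry calls something like the function below (the function name, paths, and syslog tag are all illustrative):

```shell
# suid_scan ROOT STATEFILE: record the SUID files under ROOT in STATEFILE
# and forward the list to syslog. Meant to run from cron, not from a
# Puppet run, so the find I/O happens on our schedule.
suid_scan() {
  root="$1"
  state="$2"
  mkdir -p "$(dirname "$state")"
  find "$root" -xdev -type f -perm -4000 2>/dev/null > "$state"
  # Tag the messages so they are easy to pick out of the syslog stream;
  # ignore failures on hosts without a local logger.
  logger -t suid-scan -f "$state" 2>/dev/null || true
}
```

A crontab entry along the lines of `0 3 * * * root /usr/local/sbin/suid-scan` (path illustrative) keeps the churn down to one scan a day.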

Thanks,

Trevor

--
You received this message because you are subscribed to the Google Groups "Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to puppet-users...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/puppet-users/e848e8ab-0a96-4934-9382-42f3b828d529%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.



--
Trevor Vaughan
Vice President, Onyx Point, Inc
(410) 541-6699

-- This account not approved for unencrypted proprietary information --

Corey Osman

Sep 8, 2015, 12:15:45 AM9/8/15
to Puppet Users
As Trevor mentioned above, this is something you want to control externally via cron and not Puppet. I took a slightly different approach and used an external fact, which allowed me to write the fact in bash.  There is no reason why you couldn't do this in a Ruby-based fact, but since all the original code was written in bash I used external facts simply to save time.


The key item is that this fact alone takes 37 seconds to run, so I decided to cache the result for 12 hours, which obviously speeds up fact retrieval.

I wasn't crazy about having a bunch of random cron jobs to cache the values of 10+ facts, so I built the control mechanism into the fact code itself; it doesn't rely on cron or any other service.
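The self-caching idea looks roughly like this (a bash sketch; the function name, cache path, and fact name are illustrative, and my actual code differs):

```shell
# suid_files_fact ROOT CACHE [MAX_AGE_MIN]: emit a key=value line for an
# external fact, re-running the expensive find only when the cache file
# is older than MAX_AGE_MIN minutes (default 720, i.e. 12 hours).
suid_files_fact() {
  root="$1"
  cache="$2"
  max_age_min="${3:-720}"
  # Refresh if the cache is missing or stale.
  if [ ! -f "$cache" ] || [ -n "$(find "$cache" -mmin +"$max_age_min")" ]; then
    find "$root" -xdev -type f -perm -4000 2>/dev/null | paste -sd, - > "$cache"
  fi
  # Executable external facts report key=value pairs on stdout.
  echo "suid_files=$(cat "$cache")"
}
```

An executable dropped into the external facts directory would just call `suid_files_fact / /var/cache/suid_files.cache`; no cron entry needed.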

Hit me up privately as I might have more code to share that could be useful to you. 

Corey

Trevor Vaughan

Sep 8, 2015, 8:50:58 AM9/8/15
to puppet...@googlegroups.com
Just out of curiosity, what's the benefit of making this a fact?

I'm thinking that this would be better relegated to a monitoring system, not a configuration management system.

(Yes, you can use Puppet as a monitoring system but that's not really what it is designed for and you'll end up slowing everything down over time.)

Thanks,

Trevor



Dan White

Sep 8, 2015, 9:08:38 AM9/8/15
to puppet...@googlegroups.com
FWIW, here's what I did in a previous environment: 

I have a script that is run by cron once a day (in the wee small hours) that scans for all SUID/SGID files and compares them to a list of allowed SUID/SGID files kept with the script.  The script and the list are maintained by Puppet.

The output of the script is a custom fact that reports unauthorized SUID/SGID files through Puppet.

From that Puppet report, admins either remove the offending file(s), change their permissions, or add them to the allowed list in Puppet for that server.
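The comparison step can be sketched like this (names are illustrative; my actual script was longer):

```shell
# suid_violations SCAN ALLOWED: print the entries from the nightly scan
# that are not on the Puppet-managed allow list, one per line.
suid_violations() {
  tmpdir=$(mktemp -d)
  # comm requires sorted input; -23 keeps lines unique to the first file.
  sort "$1" > "$tmpdir/scan"
  sort "$2" > "$tmpdir/allowed"
  comm -23 "$tmpdir/scan" "$tmpdir/allowed"
  rm -rf "$tmpdir"
}
```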

One additional note: in this environment, the Puppet agent was not left running as a daemon; it was run once a day by cron.  The other scripts were scheduled to run an hour or so before the Puppet cron job.

YMMV
HTH
“Sometimes I think the surest sign that intelligent life exists elsewhere in the universe is that none of it has tried to contact us.”  (Bill Watterson: Calvin & Hobbes)

jcbollinger

Sep 8, 2015, 9:29:48 AM9/8/15
to Puppet Users
Ultimately, Puppet relies on the underlying operating system for all services.  It cannot provide anything that the OS does not support.  Puppet notwithstanding, I am unaware of any mechanism for affirmatively detecting the presence of SUID files (on systems that support them) other than scanning the file system.

There are really two parts to the problem, though, as the other responses have highlighted:
  1. gathering the data, and
  2. communicating the data to Puppet.
I am inclined to agree that it would be unwise to install a custom fact whose evaluation involves performing a file system scan, so I agree with the several recommendations to decouple such scans from custom facts.  If you use a scheduler to run the scan periodically, however, you can and should use a custom fact to report the results to Puppet.  An array-valued fact seems a reasonable vehicle for this.  If you wanted to present more data about each SUID file then you could instead use a hash with the file names as keys.
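For instance, the scheduled job could publish its results as a static JSON external fact, which Facter then presents as an array-valued fact (the function and fact names are illustrative):

```shell
# scan_to_fact SCAN_LIST FACT_FILE: turn a newline-separated scan result
# into a static JSON external fact exposing an array named suid_files.
# (Paths containing double quotes or backslashes would need extra JSON
# escaping; that is ignored here for brevity.)
scan_to_fact() {
  {
    printf '{"suid_files": ['
    # Wrap each path in double quotes and join with commas.
    sed 's/.*/"&"/' "$1" | paste -sd, -
    printf ']}\n'
  } > "$2"
}
```

Writing the output to a `.json` file in an external-facts directory such as /etc/facter/facts.d makes the array available to manifests without writing any Ruby.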

You can use Puppet to install and manage a scheduler (e.g. cron) job that performs the scan, and you can audit whether Puppet has to make any changes to that job.  You can also audit the (apparent) mtime of the scan results, which can tell you either when last the list of SUID files changed or when last the scanner ran, depending on how you configure the scanning job.


John

Corey Osman

Sep 8, 2015, 11:55:31 AM9/8/15
to Puppet Users
I am using a custom type called assert that flags the report if any of the facts did not pass. I would agree that these items should be captured in a monitoring platform, but I don't have access to the monitoring platform to make these changes, and if it were that easy I am sure it would have been implemented in the monitoring system already.  However, collecting 30 POIs per node on a proprietary, super expensive monitoring system would probably somehow cost an additional 30K (I have no factual data to back this claim up).


Some of the benefits of using a fact are:

- A single interface to run all the scripts (there are 30+ facts that I have), so I can tell anybody to just run facter to get the results.
- The fact value can be used to make decisions in Puppet code, although really we just use the assert type.
- The results of all 30+ tests can be retrieved via MCollective facts instead of running each test individually across many nodes.
- From an auditing standpoint it's pretty handy to know when the facts changed values in the reports, which are stored for 30+ days, and what helped make the change.
 

assert { 'suid_test':
  condition => $suid_test == 'pass',
  message   => 'SUID test did not pass',
}

Sean,

To wrap all these discussions up, you can do one of the following:

1. Cache the result of find as I did in the script, and not care how or when the script is run, nor maintain a cron job for it.
2. Run a cron job and configure the script to run when you need it, then write a fact around the resulting value.
3. Use a monitoring system to poll for these values, and configure which values are to be stored.