Puppet logging agent/master


Mike Reed

Aug 26, 2014, 1:34:51 PM
to puppet...@googlegroups.com
Hello all,

I've recently been looking into various methods for getting meaningful logging out of my Puppet 3.6 master and agent nodes.  I've typically gone the route of grepping through syslog on both the master and the agents, but I'd like something a little more robust and user-friendly, both for others who may not be keen on wading through hundreds of lines of syslog output and for the sake of a simpler design.

I've recently been playing with an agent's puppet.conf, simply trying to set the logdir with the following, with no success at all (permissions have been changed to allow Puppet to write to that directory):
[agent]
logdir = /var/log/puppet

I've also tested syslog facility configurations, but having to modify multiple configuration files just to get consistent Puppet logging seems a bit bulky to me.
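
For reference, the sort of pairing I mean looks roughly like this (the facility name and paths are just examples, not a recommendation):

# puppet.conf on the agent
[agent]
syslogfacility = local6

# /etc/rsyslog.d/30-puppet.conf on the same host
local6.*    /var/log/puppet/agent.log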

I suppose I have two questions:

1.  Is there a simple way to push messages to a file other than /var/log/syslog on an Ubuntu machine?
2.  Is there a preferred way in the community by which people aggregate logs to make troubleshooting node issues easier to manage?

Thank you all for your time in advance.

Cheers,

Mike

Ramin K

Aug 26, 2014, 1:48:22 PM
to puppet...@googlegroups.com
This is the way I do it:
http://ask.puppetlabs.com/question/432/puppet-and-rsyslog/?answer=439#post-id-439
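
In short, that approach matches on the syslog programname and routes the Puppet agent and master messages to their own files. A minimal sketch along those lines (program names, paths, and the exact rsyslog syntax may differ on your systems):

# /etc/rsyslog.d/30-puppet.conf
# "& ~" discards the matched message so it doesn't also land in /var/log/syslog;
# on rsyslog 7+ the preferred spelling is "& stop".
:programname, isequal, "puppet-agent"     /var/log/puppet/puppet-agent.log
& ~
:programname, isequal, "puppet-master"    /var/log/puppet/puppet-master.log
& ~
# /var/log/puppet must exist and be writable by the syslog user.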

I thought that the Puppet packages used to install a syslog config, but
maybe I imagined that.

Ramin

Wil Cooley

Aug 26, 2014, 8:11:11 PM
to puppet-users group
On Tue, Aug 26, 2014 at 10:34 AM, Mike Reed <mjohn...@gmail.com> wrote:
I suppose I have two questions:

1.  Is there a simple way to push messages to a file other than /var/log/syslog on an Ubuntu machine?

I think the rsyslog approach Ramin mentioned is a good way to filter.
 
2.  Is there a preferred way in the community by which people aggregate logs to make troubleshooting nodes issues easier to manage?

I use syslog forwarding to a central log collector and then use rsyslog on the collector to separate the Puppet events into their own file. I feed the files into Splunk.
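
The client-side half of that is tiny; on each node something like the following forwards everything to the collector (hostname and port are placeholders):

# /etc/rsyslog.d/90-forward.conf on every node
# "@@" forwards over TCP; a single "@" would use UDP instead
*.* @@logcollector.example.com:514

On the collector, a programname match like the one in Ramin's link then splits the Puppet events out into their own files before they're fed into Splunk.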

I also have a Puppet report processor that logs via syslog with the data in a key=value format, which Splunk extracts automatically but which might be useful for other log event management systems as well:


This only handles data from the agent (though it is logged by the master); the master can still produce errors and data outside of the agents' reports that are useful. For example, the catalog compile time is logged by the master, and some failures only show up on the master; analysing the logs of Apache (or whatever HTTP/Rack server you use) is also useful for seeing what is being requested most frequently.

I have a Splunk app I've written (but never quite finished it enough to push to Splunk-base): 


Much of this can be done with PuppetDB, and Erik Dalen's Puppet Explorer demo looks like it handles much of the visualization too.

Wil

Mike Reed

Aug 26, 2014, 10:40:49 PM
to puppet...@googlegroups.com
Hey Wil and Ramin,

Thank you both for taking the time to explain your configurations. 

I suspect I'll roll with the central logging option via rsyslog/syslog and go from there.  Splunk sounds like a great tool for parsing, as does PuppetDB for more advanced visualization features.

Thanks again for the information; it's most appreciated.

Cheers,

Mike

Martijn

Aug 27, 2014, 12:54:29 PM
to puppet...@googlegroups.com
We still use Puppet Dashboard (with PuppetDB) to get a quick overview of the state of nodes and the logs of their Puppet runs. It's not very fancy and a little hard to search, but it works well as a read-only dashboard.

Furthermore, we use the ELK stack (Elasticsearch, Logstash, Kibana; see http://www.elasticsearch.org/overview/), which is essentially an open-source alternative to Splunk, to ship all logs from each host via a queue to a central server, where they're normalized, processed and stored in Elasticsearch. I've created several dashboards in Kibana that query that data to graph metrics and show anomalies, not just for Puppet runs. I'd like to add some active alerting to this pipeline, but I have yet to figure that out.
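
As a very rough sketch of the shape of that pipeline (heavily simplified: the real config goes through the queue and has many more filters, and the ports and hosts here are placeholders):

# logstash.conf (illustrative only)
input {
  syslog { port => 5514 }
}
filter {
  # tag Puppet agent/master messages so the Kibana dashboards can select them
  if [program] =~ /^puppet/ {
    mutate { add_tag => ["puppet"] }
  }
}
output {
  # the option is called "hosts" on newer Logstash releases
  elasticsearch { host => "localhost" }
}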

There are many ways to do this, but this works pretty well for us.

Regards, Martijn


On Tuesday, August 26, 2014 at 7:34:51 PM UTC+2, Mike Reed wrote:

Walid

Aug 31, 2014, 1:21:02 PM
to puppet...@googlegroups.com
Hi Martijn

Are you using the logstash reporter (https://github.com/logstash/puppet-logstash-reporter)? Would it be possible to share your Puppet Kibana dashboards and your logstash.conf file?

Regards,

Walid



Jason Antman

Sep 4, 2014, 6:43:01 PM
to puppet...@googlegroups.com
FWIW,

1) For logging, we use ELK. We tried Splunk; the quote they gave us was somewhere around our entire annual software budget. Also, it's closed source.

2) We don't do anything with logs, aside from the few times we need to manually confirm something. We rely on reports (PuppetBoard and some custom code around PuppetDB) for run analysis.

-Jason

