On Wed, 14 Jan 2015, BKeep wrote:
> Hi David,
>
> I took the time to read your article and wish I would have read it sooner.
> It helps put the overall goals into perspective and articulates what I am
> thinking on the idea of "Best Effort Delivery." It takes a little bit of
> effort to figure it all out given the amount of options out there. With no
> one clear path forward, sorting through all the "How To" blogs can get
> distracting.
I'm glad you found it useful. I wrote five more articles on logging topics after
that one, and from your other questions I think you'll find them useful as well.
> I think my main goal for implementing OSSEC into the mix is to check logs
> for interesting stuff with the possibility to perform file integrity
> checking later on. Of course, another goal is to fiddle with the endpoint
> as little as possible so I don't have to overhaul anything 6 months to a
> year from now. So I'm clear, I think you are saying you prefer sending the
> endpoint stream to the OSSEC management server and sending a log stream to
> a separate centralized log store. If I misunderstood please correct me.
I like to have the ossec non-log-related processing (file integrity checking,
rootkit checks, etc.) report to the ossec server, which then sends its logs to
syslog.
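As a rough sketch of that hookup (hostname and port are placeholders, not a tested config), the ossec server can forward its alerts to the central syslog collector via the syslog_output block in ossec.conf:

```xml
<!-- ossec.conf on the ossec server: forward alerts to the central
     log collector over syslog (server/port are placeholders) -->
<ossec_config>
  <syslog_output>
    <server>loghost.example.com</server>
    <port>514</port>
  </syslog_output>
</ossec_config>
```

If I remember right, you also need to enable the forwarder with `/var/ossec/bin/ossec-control enable client-syslog` and restart ossec.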
Then at the central log collector you can have multiple things processing the
logs (or subsets of them). You could use an ELK stack, ossec, and the Simple
Event Correlator on the same logs, each doing the analysis it is best at.
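For example (a sketch in rsyslog's RainerScript syntax; the hostnames, ports, and paths are placeholders, not a recommendation), the central collector could keep a local copy and fan the same stream out to the different analyzers:

```
# /etc/rsyslog.conf on the central collector (sketch; hosts are placeholders)
*.*  action(type="omfile" file="/var/log/central/all.log")                      # local copy
*.*  action(type="omfwd" target="elk.example.com" port="5514" protocol="tcp")   # to the ELK stack
*.*  action(type="omfwd" target="sec.example.com" port="514" protocol="udp")    # to SEC
```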
As I see it right now, an ELK stack is wonderful for exploring the logs and
looking for things that you didn't know to look for (this attack came from IP
x.x.x.x, what other accounts have been logged into from that IP, then what other
IP addresses have logged into those accounts, then what other accounts have
those IP addresses logged into..), but it's far less efficient in producing
reports than other approaches.
OSSEC has lots of canned rules to look for interesting log events, but only for
events relating to a single server. It doesn't have rules to look for
combinations of events across multiple servers (a failed login on one server is
no big deal; a failed login on each of a bunch of servers is something to take
note of).
SEC (simple event correlator) is a very powerful engine to let you setup rules
for log messages or combinations of messages, including across multiple
machines, but it doesn't provide you with any canned rules.
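As a rough illustration of what writing such a rule looks like (the pattern and thresholds here are assumptions for illustration, not a tested rule), a SEC rule that fires when one source IP racks up failed logins across the servers feeding the collector might be:

```
# sec rule sketch: alert when a single source IP accumulates failed
# logins across the server farm (pattern/thresholds are illustrative)
type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed password for \S+ from ([\d.]+)
desc=failed logins from $1
action=write - multiple failed logins from $1 across the farm
window=300
thresh=5
```

Since all the servers' logs pass through the same SEC instance, the count naturally spans machines; that's the piece OSSEC's single-server rules don't give you.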
For archiving logs, having ossec forward all log messages is less than ideal,
because every message is then tagged with the ossec program name and the
timestamp of when ossec gathered the log, in addition to the original
timestamp/programname/etc. Plain rsyslog is far better for creating archives
that you can go back and refer to (it's also much easier to secure flat files
and prove that they haven't been tampered with than to do the same for logs in
a database).
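A minimal rsyslog archive config along those lines (the path layout is a placeholder) writes each host's messages to its own dated flat file:

```
# rsyslog on the archive box: one flat file per host per day (path is a placeholder)
template(name="ArchiveFile" type="string"
         string="/var/log/archive/%HOSTNAME%/%$YEAR%-%$MONTH%-%$DAY%.log")
*.* action(type="omfile" dynaFile="ArchiveFile")
```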
> Do you happen to know if OSSEC can determine the originating node for a log
> message coming from an origin other than the original point? Within
> graylog, it is able to identify the origin source from the passed through
> message and if OSSEC is capable of this, using rsyslog as a traffic cop of
> sorts could work well.
I have not done this yet with ossec, so I don't know for sure, but it should (if
it can't, I would consider that a significant bug, one that makes the capability
close to worthless :-)
> I understand the scaling issues and would have to
> figure out the total number of expected logs per second then find a way to
> deal with it.
There was a footnote in my enterprise logging paper pointing at a paper from a
year or two earlier describing a system that AMD put together for doing event
correlation across multiple servers and combining the results. The approach
they took is very useful.
David Lang