OSSEC and Logging Infrastructure Design Questions


BKeep

Jan 13, 2015, 2:07:17 AM
to ossec...@googlegroups.com
Hi,

I am just getting started with designing a logging stack and have some questions about how OSSEC will fit into the overall scheme. Over the last several weeks I have been setting up different log stacks and think I have a viable solution, but I have some questions about how everyone else is deploying OSSEC in similar situations. I understand this question is not entirely specific to OSSEC, but I am hoping others have had similar goals and may have related information.

My overall design at this point will be shipping logs from endpoints to an rsyslog server then on that server, I will store everything and forward to graylog2, which will be using an elasticsearch backend. I plan to use encryption and TCP where possible. When adding OSSEC into the mix, how are others setting up their environments? On the endpoint nodes, are you deploying OSSEC on every node or are you shipping logs to a central store and then performing the checks there? As far as OSSEC on the server side, are you segregating OSSEC or running it on the same server as your rsyslog/logstash/whatever instances?

Does it make sense to ship all endpoint logs to the central log repository then use rsyslog to redirect the logs to local files, graylog2, and OSSEC?
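
(A minimal sketch of that fan-out, assuming rsyslog on the central collector; the hostnames, ports, and file paths are illustrative, and the OSSEC leg assumes the manager's syslog listener is enabled:)

```
# /etc/rsyslog.d/50-fanout.conf (illustrative)

# 1. Keep a local flat-file archive of everything received.
*.* /var/log/central/all.log

# 2. Forward a copy to graylog2 over TCP (@@ = TCP, @ = UDP).
*.* @@graylog.example.com:514

# 3. Forward a copy to the OSSEC server's syslog listener.
*.* @@ossec.example.com:514
```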

Regards

David Lang

Jan 13, 2015, 1:56:50 PM
to ossec...@googlegroups.com
I'll start off with the disclaimer that I'm fairly new to using ossec and still
working on figuring out some of this stuff myself.

OSSEC has two types of work that it does.

1. watching logs for interesting stuff

2. checking other things in the system (running processes, file checksums, etc)

#2 requires a local install of an ossec agent, so you still need that even with
centralized logging.
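
(For reference, the #2 work is configured in the agent's own ossec.conf; a minimal sketch, with the scanned directories purely illustrative:)

```xml
<!-- agent ossec.conf (fragment) -->
<ossec_config>
  <syscheck>
    <!-- file integrity scan interval in seconds (22 hours) -->
    <frequency>79200</frequency>
    <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
  </syscheck>
  <rootcheck>
    <rootkit_files>/var/ossec/etc/shared/rootkit_files.txt</rootkit_files>
    <rootkit_trojans>/var/ossec/etc/shared/rootkit_trojans.txt</rootkit_trojans>
  </rootcheck>
</ossec_config>
```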

As far as #1 goes, I tend to favor centralizing the analysis, but the biggest
reason for doing so is that a centralized analysis can notice things that happen
across multiple machines. I don't think that ossec has any rules that would take
advantage of this, and I don't know if it would get confused if it gets fed logs
from lots of machines.

I don't believe that ossec is multi-threaded, so at some point a centralized
version looking at logs will end up being overloaded.

On the other hand, the default rules for ossec end up forwarding a lot of logs
to the central ossec server.

In an ideal world, I would like to have the local ossec agents looking for
"significant" things and generating alerts on them, without forwarding all the
log messages. I would have the log messages forwarded separately and have
additional log correlation and analysis happening on the combined feed.

This is an area I need to experiment with myself.

I outlined my view of what logging should be in a series of articles for ;login
that started with
https://www.usenix.org/publications/login/august-2013-volume-38-number-4/enterprise-logging

David Lang

Michael Starks

Jan 13, 2015, 2:12:16 PM
to ossec...@googlegroups.com
On 2015-01-13 1:07, BKeep wrote:

> Does it make sense to ship all endpoint logs to the central log
> repository then use rsyslog to redirect the logs to local files,
> graylog2, and OSSEC?

I have deployed OSSEC in several environments over the years. My
preference is to use OSSEC agents for integrity and rootkit checking
only, and ship the logs separately so they can be consumed by ELSA (or
graylog, etc). I then analyze the logs on the destination log host with
analysisd.

I do this because OSSEC does not have very good capabilities when it
comes to archiving logs. Sure, you can turn that on but all of the logs
are stored in one monolithic log file and the log format is not
standardized. And if you don't turn it on, you'll end up discarding
90-99% of the logs that come in because they don't match a rule.
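
(The archiving option referred to here is the manager's `<logall>` setting; a sketch of turning it on:)

```xml
<!-- manager ossec.conf (fragment) -->
<ossec_config>
  <global>
    <!-- write every received event to logs/archives/,
         whether or not it matched a rule -->
    <logall>yes</logall>
  </global>
</ossec_config>
```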

My preference would be to have one agent (only OSSEC), but it just
doesn't work so well in environments where you want to archive all logs
for forensics purposes.

Yaniv Ron

Jan 14, 2015, 5:04:31 AM
to ossec...@googlegroups.com




--
Yaniv Ron
Security Department | Viber S.a.r.l | www.viber.com | yron@viber.com

BKeep

Jan 14, 2015, 8:07:50 PM
to ossec...@googlegroups.com, da...@lang.hm
Hi David,

I took the time to read your article and wish I had read it sooner. It helps put the overall goals into perspective and articulates what I have been thinking about "best effort delivery." It takes some effort to sort all of this out given the number of options out there, and with no one clear path forward, wading through all the "how to" blogs can get distracting.

I think my main goal for bringing OSSEC into the mix is to check logs for interesting stuff, with the possibility of adding file integrity checking later on. Another goal is to fiddle with the endpoints as little as possible so I don't have to overhaul anything six months to a year from now. Just so I'm clear: I think you are saying you prefer sending the endpoint stream to the OSSEC management server and sending a separate log stream to a centralized log store. If I misunderstood, please correct me.

Do you happen to know if OSSEC can determine the originating node for a log message that arrives via a relay rather than directly from the source? Graylog can identify the origin from the passed-through message, and if OSSEC can do the same, using rsyslog as a traffic cop of sorts could work well. I understand the scaling issues and would have to figure out the total number of expected logs per second and then find a way to deal with it.

I think I will have to do a little more testing so I have a better understanding of what OSSEC is capable of and what it is not.

Thanks for your time.
Brandon

BKeep

Jan 14, 2015, 8:13:15 PM
to ossec...@googlegroups.com
Hi Michael,

As far as performance expectations for a setup like that, what kind of processing power and RAM would be needed, and for how many endpoints? I have not gotten to sizing yet, but I should get it on my radar.

Thanks for your reply.
Brandon

BKeep

Jan 14, 2015, 8:18:19 PM
to ossec...@googlegroups.com
Hi Yaniv,

I looked at using the ELK stack and found it extremely easy to get a working prototype set up. However, one thing I cannot get past is the lack of authentication for Kibana. In my environment, the need for LDAP-capable management is the determining factor in my choice of graylog2 as a frontend. With that said, how many endpoints are you pointing at the OSSEC server, and what kind of system specs are you looking at?

Thanks for your input,
Brandon



David Lang

Jan 14, 2015, 10:22:34 PM
to BKeep, ossec...@googlegroups.com
On Wed, 14 Jan 2015, BKeep wrote:

> Hi David,
>
> I took the time to read your article and wish I would have read it sooner.
> It helps put the overall goals into perspective and articulates what I am
> thinking on the idea of "Best Effort Delivery." It takes a little bit of
> effort to figure it all out given the amount of options out there. With no
> one clear path forward, sorting through all the "How To" blogs can get
> distracting.

I'm glad you found it useful. I wrote five more articles on logging topics
after that one, and from your other questions I think you'll find them useful
too.

> I think my main goal for implementing OSSEC into the mix is to check logs
> for interesting stuff with the possibility to perform file integrity
> checking later on. Of course, another goal is to fiddle with the endpoint
> as little as possible so I don't have to overhaul anything 6 months to a
> year from now. So I'm clear, I think you are saying you prefer sending the
> endpoint stream to the OSSEC management server and sending a log stream to
> a separate centralized log store. If I misunderstood please correct me.

I like to have the ossec non-log related processing (file integrity checking,
rootkit checks, etc) report to the ossec server, which then sends its logs to
syslog.
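
(One way to wire that up is the manager's syslog output, which also has to be switched on with `ossec-control enable client-syslog`; the collector address is illustrative:)

```xml
<!-- manager ossec.conf (fragment) -->
<ossec_config>
  <syslog_output>
    <!-- forward alerts to the central syslog collector -->
    <server>logs.example.com</server>
    <port>514</port>
    <format>default</format>
  </syslog_output>
</ossec_config>
```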

Then at the central log collector you can have multiple things processing the
logs (or subsets of them). You could use an ELK stack, ossec, and the Simple
Event Correlator on the same logs, each doing the analysis it is best at.

As I see it right now, an ELK stack is wonderful for exploring the logs and
looking for things that you didn't know to look for (this attack came from IP
x.x.x.x, what other accounts have been logged into from that IP, then what other
IP addresses have logged into those accounts, then what other accounts have
those IP addresses logged into..), but it's far less efficient in producing
reports than other approaches.

OSSEC has lots of canned rules to look for interesting log events, but only
relating to a single server. It doesn't have rules to look for combinations of
events across multiple servers (a failed login on one server is no big deal; a
failed login on each of a bunch of servers is something to take note of).

SEC (simple event correlator) is a very powerful engine to let you setup rules
for log messages or combinations of messages, including across multiple
machines, but it doesn't provide you with any canned rules.
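
(As a sketch of what such a rule looks like, here is a SEC SingleWithThreshold rule that correlates on the attacking IP rather than the reporting host, so failures spread across machines still accumulate; the pattern and thresholds are illustrative:)

```
# sec.rules (illustrative): alert when one source IP racks up
# failed SSH logins across any combination of hosts.
type=SingleWithThreshold
ptype=RegExp
pattern=(\S+) sshd\[\d+\]: Failed password for \S+ from ([\d.]+)
desc=SSH failures from $2 across hosts
action=write - Repeated SSH failures from $2 (last seen on $1)
window=300
thresh=10
```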

As for archiving logs: having ossec forward all log messages is less than
ideal, because every message is then tagged with the ossec programname and the
timestamp of when ossec gathered it, in addition to the original
timestamp/programname/etc. Plain rsyslog is far better for creating archives
that you can go back and refer to (it's also much easier to secure flat files
and prove that they haven't been tampered with than the same logs in a
database).
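
(On the tamper-evidence point, one simple approach is to checksum each flat file at rotation time and keep the digest somewhere write-once; a sketch, with made-up paths and log content:)

```shell
# Illustrative: seal a rotated flat-file log with a checksum so later
# tampering is detectable. Paths and the sample log line are made up.
mkdir -p /tmp/logarchive
printf 'Jan 14 20:00:00 host1 sshd[123]: Accepted publickey for brandon\n' \
    > /tmp/logarchive/2015-01-14.log

# Record the digest at rotation time; store it somewhere write-once
# (a separate host, WORM storage, or a signed mail to yourself).
sha256sum /tmp/logarchive/2015-01-14.log \
    > /tmp/logarchive/2015-01-14.log.sha256

# Later, verify the archive has not been modified.
sha256sum -c /tmp/logarchive/2015-01-14.log.sha256
```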

> Do you happen to know if OSSEC can determine the originating node for a log
> message coming form an origin other than the original point? Within
> graylog, it is able to identify the origin source from the passed through
> message and if OSSEC is capable of this, using rsyslog as a traffic cop of
> sorts could work well.

I have not done this with ossec yet, so I don't know for sure, but it should be
able to (if it can't, I would consider that a significant bug, one that makes
centralized analysis close to worthless :-)

> I understand the scaling issues and would have to
> figure out the total number of expected logs per second then find a way to
> deal with it.

There was a footnote in my enterprise logging paper pointing at a paper a year
or two earlier about a system that AMD put together for doing event correlation
across multiple servers and combining the results. The approach that they took
is very useful.

David Lang

David Lang

Jan 14, 2015, 10:22:55 PM
to ossec...@googlegroups.com
On Wed, 14 Jan 2015, BKeep wrote:

> Hi Yaniv,
>
> I looked at using the ELK stack and found it to be extremely easy to get a
> working prototype setup. However, one thing I cannot get past, is the lack
> of authentication for Kibana. In my environment, the need for LDAP capable
> management is the determining factor for my choice of graylog2 as a
> frontend. With that said, how many endpoints are you pointing at the OSSEC
> server and what kind of system specs are you looking at?

As I understand Kibana, you could have the Apache webserver do the
authentication before it allows you to bring up the app.
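
(A sketch of that with Apache's mod_authnz_ldap in front of a proxied Kibana; the location, backend port, and LDAP URL are all illustrative, and mod_proxy, mod_ldap, and mod_authnz_ldap must be loaded:)

```apache
# httpd.conf fragment (illustrative)
<Location "/kibana">
    ProxyPass "http://127.0.0.1:5601/"
    ProxyPassReverse "http://127.0.0.1:5601/"

    AuthType Basic
    AuthName "Kibana"
    AuthBasicProvider ldap
    AuthLDAPURL "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid"
    Require valid-user
</Location>
```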

David Lang