Wazuh + Elastic Stack + Syslog


dhen...@lixar.com

Mar 14, 2018, 1:21:02 PM
to Wazuh mailing list
Hello,

Recently stood up a new install of Wazuh + Elastic Stack. Got the install running and, for testing, added a Linux server running nginx and a Windows desktop just to get some logging data.

I then started looking at what I can do for some of the networking equipment in the office. So on the Wazuh server, I added the following to ossec.conf to enable the syslog listener:

  <remote>
    <connection>syslog</connection>
    <port>514</port>
    <protocol>udp</protocol>
    <allowed-ips>0.0.0.0/0</allowed-ips>
  </remote>
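To confirm the listener works end to end, you can hand-craft a test message and send it from another box. A minimal sketch, assuming the manager is reachable at 192.168.1.10 (replace with your manager's IP) and `nc` is installed; the hostname and tag in the message are made up:

```shell
# Build a minimal RFC 3164 syslog line: <PRI>TIMESTAMP HOST TAG: message.
# PRI 13 = facility "user" (1*8) + severity "notice" (5).
MSG="<13>$(date '+%b %e %H:%M:%S') testhost wazuh-test: hello from syslog test"
echo "$MSG"

# Send it over UDP to the manager (the IP is an assumption; adjust to yours).
if command -v nc >/dev/null 2>&1; then
  printf '%s' "$MSG" | nc -u -w1 192.168.1.10 514 || true
fi
```

If the message arrives, it should show up in the manager's logs (and, if it matches a rule, as an alert in Kibana).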

Restarted the daemon to enable it. Within Kibana I believe I can see syslog data making it there, as in the Wazuh app portion of Kibana I see this:

Group   Count
syslog  138


However, I'm interested in digging in and seeing what data is being received. Call it a noob issue, but I can't seem to drill down into the specifics. Also, is there a way of adding an agent for each host sending syslog data, or is all syslog data lumped together?

Thanks.



jesus.g...@wazuh.com

Mar 15, 2018, 6:33:23 AM
to Wazuh mailing list
Hi @dhenshaw,

Your screenshot says there are 138 alerts related to syslog, but what does that mean?
Since Wazuh is monitoring syslog, one or more syslog entries can fire one or more decoders,
and those will usually end up as alerts. We have different alert levels, so you could have 138 harmless syslog alerts.

Whenever you want to check a specific agent in the Kibana Discover section, the only thing you need
is the Lucene query syntax.

1. Open Kibana -> Discover
2. In the search bar at the top, type the following and press the Enter key:

agent.id: 001 AND rule.groups: syslog

This means we want to filter alerts by the agent.id field, in this case 001, and also by the syslog rule group.
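Note that devices sending plain syslog through the `<remote>` block are not registered agents, so their events are usually attributed to the manager itself (agent id 000). As a sketch, you could narrow them down by the sending host instead; the field name `predecoder.hostname` and the hostname "router1" below are assumptions based on typical Wazuh alert fields, so check an actual alert document in Discover for the exact field names in your index:

```
rule.groups: syslog AND agent.id: 000
rule.groups: syslog AND predecoder.hostname: "router1"
```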

Useful command:

On the manager machine you can use the following command to find the ID of your desired agent.

# /var/ossec/bin/agent_control -l
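The output looks roughly like the following (the agent names and IPs here are hypothetical; yours will differ):

```
Wazuh agent_control. List of available agents:
   ID: 000, Name: wazuh-manager (server), IP: 127.0.0.1, Active/Local
   ID: 001, Name: nginx-host, IP: 192.168.1.20, Active
   ID: 002, Name: win-desktop, IP: 192.168.1.30, Active
```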

Also remember to set your time range properly in Kibana: next to the search bar you should see "Last 15 minutes"; set it to your desired time range.

Hope it helps, have a nice day.

Best regards,
Jesús

dhen...@lixar.com

Mar 15, 2018, 8:57:03 AM
to Wazuh mailing list
Thanks Jesus.

So I've made some progress, and can now with tcpdump see syslog messages being sent and received. Now to filter and find them in Kibana/Wazuh.
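For anyone following along, a capture sketch like the one below is one way to watch raw syslog arriving on the manager (the interface name eth0 is an assumption; substitute your own, and run it as root):

```shell
# -n skips DNS lookups, -A prints the payload as ASCII so the syslog text
# is readable, -c 5 stops after five packets; timeout caps the wait.
if command -v tcpdump >/dev/null 2>&1; then
  timeout 15 tcpdump -i eth0 -n -A -c 5 udp port 514 || true
else
  echo "tcpdump is not installed"
fi
DONE="capture attempt finished"
echo "$DONE"
```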

jesus.g...@wazuh.com

Mar 15, 2018, 10:05:42 AM
to Wazuh mailing list
We are glad, @dhenshaw. Since all the syslog content is "included" in the tcpdump output, I understand that our last mail
was enough for you. Do you have any more questions about this, or is it solved? Do you need help with the filtering syntax or anything similar?

Feel free to ask us, we are glad to help.

Best regards,
Jesús

dhen...@lixar.com

Mar 16, 2018, 8:03:18 AM
to Wazuh mailing list
Continuing from your email, Jesús.

My setup is two servers: one running Wazuh and a second running the Elastic Stack (all components on one host).

I'm currently not using Filebeat, but I may look into some of the Beats options in the Elastic Stack.

What would be the difference between using Filebeat and using the Wazuh agent?

Thanks.

jesus.g...@wazuh.com

Mar 16, 2018, 8:29:48 AM
to Wazuh mailing list
Hi @dhenshaw,

There are two main ways to make it work:
  • Single host scenario
    • Wazuh Manager + Logstash + Elasticsearch + Kibana on machine A
    • Wazuh Agents on machines B, C, D, E...
    • Your agents send logs to machine A, where the manager writes alerts to the alerts.json file
      • Logstash reads the alerts.json file and sends the data to Elasticsearch
      • Kibana reads, aggregates, filters, and searches the data from Elasticsearch
  • Distributed scenario
    • Wazuh Manager + Filebeat on machine A
    • Logstash + Elasticsearch + Kibana on machine B
    • Wazuh Agents on machines C, D, E...
    • Your agents send logs to machine A, where the manager writes alerts to the alerts.json file
      • Filebeat reads the alerts.json file and sends the data to machine B, where Logstash is waiting for it
      • Logstash reads the data received from Filebeat and sends it to Elasticsearch
      • Kibana reads, aggregates, filters, and searches the data from Elasticsearch
The description above is simplified; environments can get much more complex, but this is enough to get the idea.
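In the distributed scenario, the Filebeat piece on machine A is just a small config. A rough sketch from the Wazuh 3.x era (the path and the `ELASTIC_SERVER_IP` placeholder are assumptions; check the Wazuh documentation for the exact file for your version):

```yaml
filebeat:
  prospectors:
    - input_type: log
      paths:
        - "/var/ossec/logs/alerts/alerts.json"
output:
  logstash:
    # Replace with the address of machine B, where Logstash listens.
    hosts: ["ELASTIC_SERVER_IP:5000"]
```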

Take a look at this simple (if ugly) diagram of a distributed architecture:

[diagram: distributed architecture]

Now take a look at this single-host architecture diagram:

[diagram: single-host architecture]
Does the description above answer your question?

Best regards,
Jesús