Failure to send zeek logs via wazuh agent


Paco Gómez Zaya

Apr 22, 2024, 7:10:56 AM
to Wazuh | Mailing List
Good morning,
I am using the wazuh agent to send the logs generated by zeek.

When I run zeek in real time and watch the logs being generated, the information arrives at wazuh correctly.

The problem comes when I run zeek against a pcap, so that the information is sent in bulk. This information is not sent correctly, with the following errors:

  • Some of the logs are not sent.
  • Sometimes the logs are not sent at all, and I have to delete them and generate them again.

Attached is the configuration.

  • Configuration of the wazuh agent:
<ossec_config>
  <localfile>
    <log_format>syslog</log_format>
    <location>/opt/opt/zeek/logs/current/*.log</location>
    <only-future-events>no</only-future-events>
  </localfile>
</ossec_config>
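For context on the "delete and regenerate" symptom: the agent keeps a per-file read position (that is what the wazuh-logcollector.state file mentioned below records), so a collector that resumes from a stored offset can silently skip lines when a monitored file is deleted and rewritten with the same content, as happens when re-running zeek over the same pcap. The following is a toy Python model of that failure mode, not Wazuh's actual logic; file names and contents are invented for illustration:

```python
import os
import tempfile

class OffsetTailer:
    """Toy collector that remembers how far into each file it has read."""
    def __init__(self):
        self.offsets = {}

    def collect(self, path):
        """Return lines added since the last read, like a log tailer would."""
        offset = self.offsets.get(path, 0)
        with open(path) as f:
            f.seek(offset)
            lines = f.readlines()
            self.offsets[path] = f.tell()
        return [line.rstrip("\n") for line in lines]

tailer = OffsetTailer()
path = os.path.join(tempfile.mkdtemp(), "conn.log")

# First pcap run: three lines, all collected.
with open(path, "w") as f:
    f.write("computer A\ncomputer B\ncomputer C\n")
first = tailer.collect(path)

# Log regenerated from the same pcap: same content, same size, but the
# tailer resumes from its stored offset and reports nothing new.
with open(path, "w") as f:
    f.write("computer A\ncomputer B\ncomputer C\n")
second = tailer.collect(path)

print(first)   # ['computer A', 'computer B', 'computer C']
print(second)  # [] -- the regenerated lines are silently skipped
```

Whether Wazuh detects the rewrite (by inode or truncation) in each case would explain why the loss appears random rather than deterministic.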

  • The decoders for all sent logs have been created and checked successfully.

  • All rules have been added and also checked correctly:
<group name="zeek,ids,">
  <rule id="66001" level="15">
    <field name="bro_engine">SSH</field>
    <description>Zeek: SSH Connection</description>
  </rule>
  <rule id="66002" level="5">
    <field name="bro_engine">SSL</field>
    <description>Zeek: SSL Connection</description>
  </rule>
  <rule id="66003" level="15">
    <field name="bro_engine">DNS</field>
    <description>Zeek: DNS Query</description>
  </rule>
  <rule id="66004" level="5">
    <field name="bro_engine">CONN</field>
    <description>Zeek: Connection detail</description>
  </rule>
  <rule id="66005" level="5">
    <field name="bro_engine">HTTP</field>
    <description>Zeek: HTTP detail</description>
  </rule>
  <rule id="66006" level="5">
    <field name="bro_engine">WEIRD</field>
    <description>Zeek: WEIRD detail</description>
  </rule>
  <rule id="66007" level="5">
    <field name="bro_engine">INVENTORY</field>
    <description>Zeek: INVENTORY detail</description>
  </rule>
</group>



  • Modified the filebeat pipeline (wazuh module) to route the logs to the different indices:
    {
      "date_index_name": {
        "if": "ctx?.data?.bro_engine == 'CONN'",
        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}conn",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": true
      }
    },
    {
      "date_index_name": {
        "if": "ctx?.data?.bro_engine == 'DNS'",
        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}dns",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": true
      }
    },
    {
      "date_index_name": {
        "if": "ctx?.data?.bro_engine == 'INVENTORY'",
        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}inventory",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": true
      }
    },
    {
      "date_index_name": {
        "if": "ctx?.data?.bro_engine != 'CONN' && ctx?.data?.bro_engine != 'DNS' && ctx?.data?.bro_engine != 'INVENTORY'",
        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": true
      }
    },
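The per-engine routing above can be sanity-checked independently of Filebeat. This sketch mirrors only the `if`/`index_name_prefix` selection and daily rounding, not the full `date_index_name` processor; the `wazuh-alerts-` prefix and event shape are assumptions based on the snippet above:

```python
from datetime import datetime, timezone

# Engines with a dedicated index, mirroring the three conditional
# processors; anything else falls through to the default prefix.
DEDICATED = {"CONN": "conn", "DNS": "dns", "INVENTORY": "inventory"}

def route(event, index_prefix="wazuh-alerts-"):
    """Return the daily index name an event would be routed to."""
    engine = (event.get("data") or {}).get("bro_engine")
    suffix = DEDICATED.get(engine, "")
    ts = datetime.fromisoformat(event["timestamp"])
    day = ts.astimezone(timezone.utc).strftime("%Y.%m.%d")  # yyyy.MM.dd
    return f"{index_prefix}{suffix}{day}"

conn = {"timestamp": "2024-04-22T07:10:56+00:00", "data": {"bro_engine": "CONN"}}
http = {"timestamp": "2024-04-22T07:10:56+00:00", "data": {"bro_engine": "HTTP"}}

print(route(conn))  # wazuh-alerts-conn2024.04.22
print(route(http))  # wazuh-alerts-2024.04.22
```

Feeding a few sample events through logic like this confirms the conditions are mutually exclusive and that no engine falls through both a dedicated branch and the default one.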




That's a summary of the wazuh configuration.

After all the troubleshooting performed, the only remaining cause I can think of is some missing configuration on the wazuh agent side: since the information sometimes arrives at the indexer correctly, the error cannot be in the processing of the information, which rules out the following causes:
  • Decoder (Wazuh-server)
  • Filebeat pipeline (Wazuh-server)
  • Indexer (Wazuh-server)

In addition, I have to emphasize that while sending data I observed that sometimes the data arrived and sometimes it did not; the failures coincided with the wazuh agent not reading the newly generated logs.

I verified this by reading the file /var/ossec/var/run/wazuh-logcollector.state
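Assuming the state file is JSON with a recorded read offset per monitored file (the exact schema and field names below are assumptions; check them against your own file), a quick script can flag files whose offset lags behind their current size, i.e. files the agent has stopped reading. The sample data is hard-coded so the sketch runs anywhere; on a live agent the sizes would come from os.path.getsize():

```python
import json

# Sample in the assumed shape of /var/ossec/var/run/wazuh-logcollector.state.
state = json.loads("""
{"files": [
  {"location": "/opt/zeek/logs/current/conn.log", "read_offset": 1024},
  {"location": "/opt/zeek/logs/current/dns.log",  "read_offset": 0}
]}
""")

# Hypothetical on-disk sizes for the same files.
sizes = {
    "/opt/zeek/logs/current/conn.log": 1024,  # fully read
    "/opt/zeek/logs/current/dns.log": 4096,   # never read: a red flag
}

# Any file larger than its recorded offset has unread data.
stalled = [f["location"] for f in state["files"]
           if sizes[f["location"]] > f["read_offset"]]
print(stalled)  # ['/opt/zeek/logs/current/dns.log']
```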

Thanks in advance.

Gerardo David Caceres Fleitas

Apr 22, 2024, 12:34:17 PM
to Wazuh | Mailing List
Hello Paco, 

It seems your issue is mainly that some alerts are not shown in the dashboard, right? I'm not familiar with Zeek, but I suggest you verify the following on the Wazuh side:
1. Enable archives and check that the manager receives the desired events; you can use a "grep" command and filter by a specific field. https://documentation.wazuh.com/current/user-manual/manager/wazuh-archives.html
You can also use the logtest tool and test the behavior of the analysis engine with those events (you may need to customize your decoders/rules): https://documentation.wazuh.com/current/user-manual/ruleset/testing.html
2. Check that there are decoders/rules in place for those specific events and that the alerts are being generated and saved in alerts.json 
3. Note that the alerts shown in the dashboard are level 3 and above, so if you want to visualize the lower-level alerts, this has to be changed in the ossec.conf of the manager. 
<log_alert_level>3</log_alert_level>
https://documentation.wazuh.com/current/user-manual/manager/alert-threshold.html

One more thing: note that when you are using custom rules, they must use a rule ID above 100000 to avoid conflicts with Wazuh's built-in ones
https://documentation.wazuh.com/current/user-manual/ruleset/index.html
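For point 1, the grep-style check on the archives can also be scripted. This sketch filters archive-style JSON lines by the custom `bro_engine` field; the sample events and field layout are assumptions based on the decoders described earlier in this thread, and on a live manager the lines would be read from /var/ossec/logs/archives/archives.json instead:

```python
import json

# Three events in the one-JSON-object-per-line shape of archives.json.
lines = [
    '{"timestamp":"2024-04-22T10:00:00","data":{"bro_engine":"DNS"}}',
    '{"timestamp":"2024-04-22T10:00:01","data":{"bro_engine":"CONN"}}',
    '{"timestamp":"2024-04-22T10:00:02","full_log":"unrelated syslog event"}',
]

# Keep only the lines carrying the decoder's bro_engine field.
zeek_events = [json.loads(line) for line in lines if '"bro_engine"' in line]
engines = [e["data"]["bro_engine"] for e in zeek_events]
print(engines)  # ['DNS', 'CONN']
```

Comparing the count of matching lines against the number of lines zeek actually wrote is exactly the gap being chased in this thread.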

I hope this is helpful to you.

Best regards.

Paco Gómez Zaya

Apr 23, 2024, 12:27:22 PM
to Wazuh | Mailing List
Good morning,
Thank you very much for your answer, I will now give you the information obtained on the different points:

  1. I have already checked this point, and indeed I was receiving the same logs in the file index as in the alert index, so the rules were working correctly.
  2. I have also checked the rules and the decoders with wazuh-logtest; everything is correct when passing a single event, and it also works without problems when passing several lines.
  3. I have also tried setting the rule IDs above 100000 and I still have the same problem.

Couldn't it be, as I said, some missing configuration in the ossec.conf of the wazuh agent?

Thanks!

Gerardo David Caceres Fleitas

Apr 29, 2024, 6:18:26 AM
to Wazuh | Mailing List
Hello Paco, 

If you can see the logs within the archives.json file, the log collection is working fine; there is no need to configure anything else on the agent side. Then, using logtest, you can verify that your decoders/rules are working as expected; you just need to verify that the "Alert to be generated" message appears at the end. Not being able to visualize those alerts on the dashboard could also mean that, because you are modifying the filebeat ingest pipeline, creating a new index pattern on the dashboard could be necessary.

I hope this has been helpful during your troubleshooting process.

Best regards.

Paco Gómez Zaya

May 7, 2024, 11:03:55 AM
to Wazuh | Mailing List
Good afternoon, maybe I am not explaining myself well.

The problem I have is exactly the following:

I send the file generated by zeek, which always contains the same information, since I use a pcap to create the log:

  • line 1 contains computer A
  • line 2 contains computer B
  • line 3 contains computer C

Well, when I send the logs, the first line, which contains computer A, is sometimes sent and sometimes not.

That is to say, the error is not caused by a static configuration: sometimes the first line gets read and sometimes it does not, randomly.

So I rule out a configuration problem on the wazuh server side, and I therefore focus on the configuration of the agent.

What could be happening?

Thanks in advance.