Erase the part the Wazuh agent has read as it reads Suricata's eve.json

Lucas

Mar 2, 2023, 4:14:28 AM
to Wazuh mailing list
Hi team,

We run Suricata on AWS against VPC Traffic Mirroring, and its alerts are collected by a Wazuh agent.

Too many logs came into Suricata and a message appeared saying the queue was insufficient, so I increased the queue capacity in ossec.conf to its maximum.

However, the following error message is still occurring.
2023/03/02 08:58:13 wazuh-logcollector: ERROR: Large message size from file '/var/log/suricata/eve.json' (length = 645): '{"timestamp":"2023-03-02T08:58:11.981439+0000","flow_id":47617

So when the Wazuh agent reads eve.json, I want to know if it can delete the part it has already read.
(e.g. delete => true in Logstash's S3 input)


Regards

Leandro David Sayanes

Mar 10, 2023, 11:59:53 AM
to Wazuh mailing list
Hi Lucas,

I will try to help you!

The problem seems to be an extremely large message in eve.json, but it is not clear from your message whether the error stops log collection or whether the module keeps working fine.

Could you enable debug level 2 for logcollector so we have more details of what is going on?

To do so, search for:
# log collector (server, local or Unix agent)
logcollector.debug=0

in /var/ossec/etc/local_internal_options.conf, and set it to 2:
logcollector.debug=2

This is the debug option:

# Debug options.
# Debug 0 -> no debug
# Debug 1 -> first level of debug
# Debug 2 -> full debugging

Once done, restart Wazuh, then check the log (ossec.log) again to see the new information obtained.
(Remember: once you get the necessary data, remove the debug line and restart the manager again to avoid disk space problems.)


The ERROR appears because the size of the message is larger than the size of the variable that holds it; here is the relevant code:

        /* Incorrect message size */
        if (__ms) {
            // strlen(str) >= (OS_MAXSTR - OS_LOG_HEADER - 2)
            // truncate str before logging to ossec.log

            if (!__ms_reported) {
                merror("Large message size from file '%s' (length = " FTELL_TT "): '%.*s'...",
                       lf->file, FTELL_INT64 rbytes, sample_log_length, str);
                __ms_reported = 1;
            }
            /* ... */
        }


If you are having other problems or errors related to line limit overruns, queue size, or buffer size, please note that these options are available:
  1. A setting controls how many lines the logcollector reading threads attempt to read per cycle. See logcollector.max_lines.
  2. The reading threads push lines into logcollector's internal circular queue, whose size is determined by logcollector.queue_size.
  3. Logs are extracted from that circular queue and placed in the agent buffer, which finally sends them to the manager. The related settings live under <client_buffer>.
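Putting those three knobs together, here is a hedged sketch of what the tuning could look like. The values are purely illustrative, not recommendations; check the limits for your Wazuh version before applying them.

In /var/ossec/etc/local_internal_options.conf:

```
# Illustrative values only
logcollector.max_lines=10000
logcollector.queue_size=100000
```

And on the agent side of /var/ossec/etc/ossec.conf:

```xml
<!-- Illustrative values only -->
<client_buffer>
  <disabled>no</disabled>
  <queue_size>5000</queue_size>
  <events_per_second>500</events_per_second>
</client_buffer>
```

Remember to restart the agent after changing either file so the new values take effect.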


Regards!