Filebeat error


Michael Hodge

Nov 5, 2021, 4:16:15 PM
to Wazuh mailing list
Hello,

We recently noticed filebeat was having an error preventing it from sending alerts to Elasticsearch.

The error is: ERROR        [publisher_pipeline_output]        pipeline/output.go:180        failed to publish events: temporary bulk send failure.

Nothing has been changed in the filebeat.yml file:

# Wazuh - Filebeat configuration file
filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: false

setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.overwrite: true
setup.ilm.enabled: false

Versions:
Wazuh version: 4.0.4
Filebeat version: 7.9.2
Elasticsearch version: 7.9.2
Kibana: 
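As a first check, Filebeat ships with built-in test subcommands; assuming a standard package install with the config at the path shown above, the configuration and the Elasticsearch output can be verified with:

```shell
# Validate the Filebeat configuration file syntax
filebeat test config -c /etc/filebeat/filebeat.yml

# Check connectivity and authentication against the configured Elasticsearch output
filebeat test output -c /etc/filebeat/filebeat.yml
```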

Luis Contreras

Nov 6, 2021, 1:15:09 PM
to Wazuh mailing list
Hi Michael,

A couple of questions:

Is your environment an all-in-one, single-node, or multi-node deployment?
Is that the whole content of your filebeat.yml? If not, could you send the complete file?
Could you send the Filebeat log file too?
Have there been any changes to the /etc/filebeat/wazuh-template.json file?

Kind regards,

Michael Hodge

Nov 6, 2021, 3:49:34 PM
to Wazuh mailing list
Hi,

The environment is single node: Wazuh manager/Filebeat on an on-prem VM, Elasticsearch in a cloud instance, and Kibana on an on-prem VM. That is the whole content of the filebeat.yml that I posted. To my knowledge there have been no changes to the wazuh-template.json file.

Here is the log; let me know if more is needed. It repeats, so I just took a snippet. I replaced the IDs and URLs with *****.

Nov 05 19:55:50  filebeat[10174]: 2021-11-05T19:55:50.791Z        INFO        [publisher]        pipeline/retry.go:219        retryer: send unwait signal to consumer
Nov 05 19:55:50 L filebeat[10174]: 2021-11-05T19:55:50.791Z        INFO        [publisher]        pipeline/retry.go:223          done
Nov 05 19:55:50  filebeat[10174]: 2021-11-05T19:55:50.997Z        INFO        [esclientleg]        eslegclient/connection.go:314        Attempting to connect to Elasticsearch version 7.9.2
Nov 05 19:55:51 L filebeat[10174]: 2021-11-05T19:55:51.023Z        INFO        [license]        licenser/es_callback.go:51        Elasticsearch license: Platinum
Nov 05 19:55:51  filebeat[10174]: 2021-11-05T19:55:51.047Z        INFO        [esclientleg]        eslegclient/connection.go:314        Attempting to connect to Elasticsearch version 7.9.2
Nov 05 19:55:51  filebeat[10174]: 2021-11-05T19:55:51.261Z        INFO        template/load.go:169        Existing template will be overwritten, as overwrite is enabled.
Nov 05 19:55:51  filebeat[10174]: 2021-11-05T19:55:51.263Z        INFO        template/load.go:109        Try loading template wazuh to Elasticsearch
Nov 05 19:55:51  filebeat[10174]: 2021-11-05T19:55:51.508Z        INFO        template/load.go:101        template with name 'wazuh' loaded.
Nov 05 19:55:51  filebeat[10174]: 2021-11-05T19:55:51.508Z        INFO        [index-management]        idxmgmt/std.go:298        Loaded index template.
Nov 05 19:55:51  filebeat[10174]: 2021-11-05T19:55:51.560Z        INFO        [publisher_pipeline_output]        pipeline/output.go:151        Connection to backoff(elasticsearch(https://*****)) established
Nov 05 19:56:20  filebeat[10174]: 2021-11-05T19:56:20.475Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":180,"time":{"ms":189}},"total":{"ticks":790,"time":{"ms":804},"value":790},"user":{"ticks":610,"time":{"ms":615}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":12},"info":{"ephemeral_id":"*****","uptime":{"ms":30059}},"memstats":{"gc_next":60997504,"memory_alloc":55598264,"memory_total":216896704,"rss":124542976},"runtime":{"goroutines":27}},"filebeat":{"events":{"active":4117,"added":4119,"done":2},"harvester":{"files":{"f2400e91-28aa-41a8-ac03-20a4b860c240":{"last_event_published_time":"2021-11-05T19:55:50.845Z","last_event_timestamp":"2021-11-05T19:55:50.845Z","name":"/var/ossec/logs/alerts/alerts.json","read_offset":13918563,"size":13164165335,"start_time":"2021-11-05T19:55:50.733Z"}},"open_files":1,"running":1,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":2048,"batches":1,"total":2048},"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":4117,"filtered":2,"published":4116,"retry":2048,"total":4119}}},"registrar":{"states":{"current":1,"update":2},"writes":{"success":2,"total":2}},"system":{"cpu":{"cores":4},"load":{"1":0.04,"15":0.09,"5":0.1,"norm":{"1":0.01,"15":0.0225,"5":0.025}}}}}}
Nov 05 19:56:50  filebeat[10174]: 2021-11-05T19:56:50.475Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":180,"time":{"ms":1}},"total":{"ticks":790,"time":{"ms":5},"value":790},"user":{"ticks":610,"time":{"ms":4}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":12},"info":{"ephemeral_id":"*****","uptime":{"ms":60058}},"memstats":{"gc_next":60997504,"memory_alloc":55846456,"memory_total":217144896},"runtime":{"goroutines":27}},"filebeat":{"harvester":{"files":{"f2400e91-28aa-41a8-ac03-20a4b860c240":{"size":3508656}},"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":4117}}},"registrar":{"states":{"current":1}},"system":{"load":{"1":0.02,"15":0.09,"5":0.09,"norm":{"1":0.005,"15":0.0225,"5":0.0225}}}}}}
Nov 05 19:56:52  filebeat[10174]: 2021-11-05T19:56:52.650Z        INFO        [publisher]        pipeline/retry.go:219        retryer: send unwait signal to consumer
Nov 05 19:56:52  filebeat[10174]: 2021-11-05T19:56:52.650Z        INFO        [publisher]        pipeline/retry.go:223          done
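A "temporary bulk send failure" usually means Elasticsearch is rejecting the bulk requests themselves (for example with 429s or an index write block) even though the connection and template load succeed, which matches the INFO-only log above. Assuming curl access to the cluster (endpoint and credentials below are placeholders), its health and disk state can be checked with:

```shell
ES_URL="https://your-elasticsearch:9200"   # placeholder, replace with the real endpoint

# Overall cluster status (green/yellow/red)
curl -s -u elastic "$ES_URL/_cluster/health?pretty"

# Disk usage per node; a full disk trips the flood-stage watermark
curl -s -u elastic "$ES_URL/_cat/allocation?v"

# Indices that have been marked read-only by the flood-stage watermark
curl -s -u elastic "$ES_URL/_all/_settings/index.blocks.read_only_allow_delete?pretty"
```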

Luis Contreras

Nov 11, 2021, 1:27:18 PM
to Wazuh mailing list
I was trying to replicate the issue as well. Was there any other change in the environment, or has this problem existed since the initial deployment?

Michael Hodge

Nov 11, 2021, 4:00:01 PM
to Wazuh mailing list
No change in the environment. Once this was set up, nothing on the server itself was touched. The problem started after months of running with nothing applied beyond OS security updates. We can provide more logs if needed.

Luis Contreras

Nov 15, 2021, 8:38:12 AM
to Wazuh mailing list
Thanks Michael, I will check a couple of things on my side.

Michael Hodge

Nov 15, 2021, 5:31:17 PM
to Wazuh mailing list
Hi,

We noticed today that the wazuh-alerts-3.x*, wazuh-archives-3.x-*, wazuh-monitoring-*, and wazuh-monitoring-3.x* templates have all been turned into legacy index templates. Could this have something to do with it?
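For what it's worth, legacy and composable templates live in separate stores, and Filebeat 7.x loads the Wazuh template through the legacy API, so a legacy entry for wazuh is expected. Both kinds can be listed for comparison (assuming curl access; endpoint and credentials are placeholders):

```shell
ES_URL="https://your-elasticsearch:9200"   # placeholder, replace with the real endpoint

# Legacy index templates (the kind Filebeat 7.x loads)
curl -s -u elastic "$ES_URL/_template/wazuh*?pretty"

# Composable index templates (introduced in 7.8)
curl -s -u elastic "$ES_URL/_index_template?pretty"
```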

Michael Hodge

Nov 16, 2021, 4:05:37 PM
to Wazuh mailing list
Solved the issue: the ILM policies were somehow deleted, and all of the data was going into hot storage only. That filled up, and Elasticsearch had been reporting that it ran out of space since 11/02. We could not see this because only our Elasticsearch engineers had visibility, and they did not notice it until recently. We reapplied the ILM policies and freed the space. Working now. Thank you so much for working on this with us, Luis.
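For anyone hitting the same symptom: when the disk fills past the flood-stage watermark, Elasticsearch places an index.blocks.read_only_allow_delete block on the affected indices, and bulk writes keep failing until the block is gone (7.4+ clears it automatically once usage drops below the high watermark, but it can also be cleared by hand). Assuming curl access, the block can be removed and the ILM state verified with:

```shell
ES_URL="https://your-elasticsearch:9200"   # placeholder, replace with the real endpoint

# Clear any leftover read-only block after freeing disk space
curl -s -u elastic -X PUT "$ES_URL/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'

# Confirm the ILM policies exist and are attached to the Wazuh indices
curl -s -u elastic "$ES_URL/_ilm/policy?pretty"
curl -s -u elastic "$ES_URL/wazuh-alerts-*/_ilm/explain?pretty"
```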