Wazuh alerts log using logstash to elasticsearch


Derek Wuelfrath

Sep 18, 2019, 9:17:39 AM
to Wazuh mailing list
Hello there!

I am new here, but I've been using Wazuh for a bit more than a year, and the Elastic Stack for maybe two.

Since I updated to the ELK stack 7.x, there seems to be a problem with field mapping on the Logstash side, specifically the [host] field, which cannot be mapped to the [keyword] type:

[WARN ] 2019-09-17 20:58:36.973 [[wazuh]>worker11] elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"wazuh-alerts-3.x-2019.09.17", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x3f2141d9>], :response=>{"index"=>{"_index"=>"wazuh-alerts-3.x-2019.09.17", "_type"=>"_doc", "_id"=>"n04FQW0BvOg7U9x3DkMM", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [keyword] in document with id 'n04FQW0BvOg7U9x3DkMM'. Preview of field's value: '{name=CLUSTER_NAME}'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:1190"}}}}}
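(For context: in Beats/ELK 7.x the host field became an object with subfields like host.name, which clashes with a pre-7.x template that maps host as a plain keyword. One commonly used Logstash-side workaround, sketched here with a hypothetical target field name, is to rename the useful part and drop the object before indexing:)

```conf
filter {
  # Beats 7.x sends "host" as an object ({ "name" => "..." }), which a
  # template mapping "host" as keyword cannot index.
  # "hostname" is a hypothetical field name for illustration.
  mutate {
    rename => { "[host][name]" => "hostname" }
  }
  mutate {
    remove_field => [ "host" ]
  }
}
```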

I made sure to update the Logstash configuration for Wazuh and the Elasticsearch template.

Only one Elasticsearch template matches 'wazuh-alerts-3.x-*', and it is the Wazuh one, 'wazuh-custom'.

Any idea?

Let me know if I should provide any more info.

Cheers!

Derek Wuelfrath

Sep 19, 2019, 11:26:36 AM
to Wazuh mailing list
Hello again,
I'm still fighting with this: for now, either none of my Wazuh data is being indexed, or I can't access it from Kibana.

I would like to know which pieces I should look at, and maybe get some references to make sure everything is correctly configured.

From my point of view, here are the components (I have a "remote" setup):

Server running the wazuh-manager and wazuh-api:
- wazuh-manager
- wazuh-api
- filebeat (see prospector configuration below) configured to send to logstash

- type: log
  enabled: true
  paths:
    - "/var/ossec/logs/alerts/alerts.json"
  fields_under_root: true
  document_type: json
  json.message_key: log
  json.keys_under_root: true
  json.overwrite_keys: true
  fields:
    beat.type: wazuh_alerts

Server running Logstash (see input and output configurations below), configured to send to Elasticsearch:

input {
  redis {
    data_type => "channel"
    key => "wazuh"
  }
}
output {
  elasticsearch {
    hosts => [ "IP1:9200", "IP2:9200", "IP3:9200", "IP4:9200" ]
    index => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
  }
}

Servers running Elasticsearch with the template configured (https://raw.githubusercontent.com/wazuh/wazuh/v3.10.0/extensions/elasticsearch/7.x/wazuh-template.json):

wazuh-custom                [wazuh-alerts-3.x-*, wazuh-archives-3.x-*] 0          1
.monitoring-es              [.monitoring-es-7-*]                       0          7000199
.ml-anomalies-              [.ml-anomalies-*]                          0          7030299
.monitoring-beats           [.monitoring-beats-7-*]                    0          7000199
.watches                    [.watches*]                                2147483647
.monitoring-alerts          [.monitoring-alerts-6]                     0          6050399
.watch-history-10           [.watcher-history-10*]                     2147483647
.watch-history-7            [.watcher-history-7*]                      2147483647
.ml-state                   [.ml-state*]                               0          7030299
security-index-template     [.security-*]                              1000
.triggered_watches          [.triggered_watches*]                      2147483647
instartlogic                [instartlogic-*]                           0
.watch-history-9            [.watcher-history-9*]                      2147483647
.management-beats           [.management-beats]                        0          70000
wazuh-agent                 [wazuh-monitoring-3.x-*]                   0
.ml-config                  [.ml-config]                               0          7030299
o365uls                     [o365uls-*]                                0
.ml-meta                    [.ml-meta]                                 0          7030299
.data-frame-internal-1      [.data-frame-internal-1]                   0          7030299
fortigate                   [fortigate-*]                              0
.monitoring-kibana          [.monitoring-kibana-7-*]                   0          7000199
cbdefense                   [cbdefense-*]                              0
.ml-notifications           [.ml-notifications]                        0          7030299
.monitoring-alerts-7        [.monitoring-alerts-7]                     0          7000199
.logstash-management        [.logstash]                                0
metricbeat-6.5.0            [metricbeat-6.5.0-*]                       1
meraki                      [meraki-*]                                 0
.monitoring-logstash        [.monitoring-logstash-7-*]                 0          7000199
logstash                    [logstash-*]                               0          60001
.kibana_task_manager        [.kibana_task_manager]                     0          7030299
defaults-beats              [beats-*]                                  0
defaults-syslog             [syslog-*]                                 0
o365api                     [o365api-*]                                0
.data-frame-notifications-1 [.data-frame-notifications-*]              0          7030299

Server running Kibana with the latest Kibana Wazuh app installed (running in Docker).

Any hints?

Thanks!

Javier Escobar

Sep 20, 2019, 9:37:15 AM
to Wazuh mailing list

Hi Derek, sorry for the late response.

Since ELK 7.x, Filebeat can ingest data directly into Elasticsearch. Logstash is no longer required in this version; that makes the architecture simpler and consumes fewer resources, so I recommend configuring Filebeat to send to Elasticsearch.


You can find the configuration files in our documentation:

https://documentation.wazuh.com/3.10/installation-guide/installing-wazuh-manager/index.html


But if you prefer, we can help you configure your system to work with Logstash. Could you please share your OS and software versions?


Elastic version:

curl ELASTIC_IP:9200?pretty

Kibana version:

/usr/share/kibana/bin/kibana --version

Filebeat version:

/usr/share/filebeat/bin/filebeat version


Logstash version:

/usr/share/logstash/bin/logstash --version

Wazuh version:

cat /var/ossec/etc/ossec-init.conf

Wazuh API version:

cat /var/ossec/api/package.json


Wazuh APP version:

cat /usr/share/kibana/plugins/wazuh/package.json

Regards,

Javier


Derek Wuelfrath

Sep 20, 2019, 9:55:48 AM
to Wazuh mailing list
Hello Javier,

No worries about the late response! There is no such thing as "late" when it comes to open source software and mailing-list support :)

Actually, if you're saying that the now "official" way in a distributed environment is to have Filebeat talk directly to Elasticsearch, I could go that way. My preference would be a centralized place where logs enter (Logstash), but my main concern is being "upgrade-proof": if going the Logstash route would require customization along the upgrade path, I'd rather have Filebeat talk directly to Elasticsearch.

Here are the outputs that you asked for!

Elastic version:

curl ELASTIC_IP:9200?pretty

7.3.2 in docker containers

Kibana version:

/usr/share/kibana/bin/kibana --version

7.3.2 in docker container

Filebeat version:

/usr/share/filebeat/bin/filebeat version


7.3.2 with RPM package

Logstash version:

/usr/share/logstash/bin/logstash --version

7.3.2 in docker container

Wazuh version:

cat /var/ossec/etc/ossec-init.conf

3.10.0

Wazuh API version:

cat /var/ossec/api/package.json

3.10.0

Wazuh APP version:

cat /usr/share/kibana/plugins/wazuh/package.json

3.10.0_7.3.2

Thanks!

Javier Escobar

Sep 23, 2019, 10:39:39 AM
to Wazuh mailing list

Hi again Derek,
Following these steps, you can configure Filebeat for the latest version:

- Download the Filebeat configuration file:

curl -so /etc/filebeat/filebeat.yml https://raw.githubusercontent.com/wazuh/wazuh/v3.10.0/extensions/filebeat/7.x/filebeat.yml
chmod go+r /etc/filebeat/filebeat.yml

- Download the Wazuh module for Filebeat:

curl -s https://packages.wazuh.com/3.x/filebeat/wazuh-filebeat-0.1.tar.gz | sudo tar -xvz -C /usr/share/filebeat/module
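For the editing step below, the output section of the downloaded filebeat.yml would end up looking roughly like this (a sketch; the IPs are placeholders for your own Elasticsearch nodes):

```yaml
# /etc/filebeat/filebeat.yml (sketch; replace the placeholder IPs)
output.elasticsearch:
  hosts: ["IP1:9200", "IP2:9200", "IP3:9200", "IP4:9200"]
```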

- Edit the file /etc/filebeat/filebeat.yml and replace the output with the IP addresses of your Elasticsearch nodes.

For more information:
https://documentation.wazuh.com/3.10/installation-guide/installing-wazuh-manager/linux/centos/wazuh_server_packages_centos.html#installing-filebeat

Let me know if you have any issues with the configuration.

Regards,
Javier Escobar

Derek Wuelfrath

Sep 23, 2019, 10:54:56 PM
to Wazuh mailing list
Hello Javier,

Thanks again for the reply.
I will definitely give it a try and get back to you.

Since you gave me instructions for setting up Filebeat to talk directly to Elasticsearch, I assume the Logstash way of doing things is now obsolete and should be avoided, for easier upgrades / less customization?

Thanks!

Derek Wuelfrath

Sep 23, 2019, 10:56:36 PM
to Wazuh mailing list
Also, do I still need the Elasticsearch template file? From the documentation you sent, it looks like the template is now handled by Filebeat.
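(For reference: in the Wazuh-provided filebeat.yml for Elastic 7.x, the template is loaded by Filebeat itself through its setup options; a sketch of the relevant lines, with the exact path and template name as assumptions:)

```yaml
setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.json.name: 'wazuh'
setup.template.overwrite: true
```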

Derek Wuelfrath

Sep 24, 2019, 10:46:28 AM
to Wazuh mailing list
Hello again Javier,
So I put everything in place, and everything seems to be working just fine. I see the index doc counts increasing, which means data is flowing.
I removed the Logstash part of the equation and noticed that the Elasticsearch template is now managed by Filebeat.

One thing though: when using the Discover tab of Kibana and looking at the wazuh-alerts-... index pattern, there is no data. Nor is there when using the Wazuh app inside Kibana...
Manually deleting and recreating the index pattern makes the data show up. Unfortunately, that is not an option, since all the pieces of the infrastructure reside in different Docker containers and are built and destroyed on demand, which recreates the index pattern during the Wazuh app installation step of the Kibana build.

Thanks

Javier Escobar

Sep 25, 2019, 10:12:47 AM
to Wazuh mailing list

Hi Derek,

It seems like an issue related to the upgrade of Elasticsearch. In the upgrade from 6.8 to 7.x there was a field migration from @timestamp to timestamp. Due to this change, previous alerts won't be visible in Wazuh indices; an update must be performed on all previous indices in order to complete the upgrade.


You can see the index pattern fields in Kibana -> Management -> Index Patterns, clicking on wazuh-alerts-3.x-*. It should look like this:


(screenshot: fields.png, the index pattern's field list)



Run the request below for each Wazuh index created before the Elastic 7.x upgrade. It will add the timestamp field to all documents in the index:


curl -X POST "localhost:9200/wazuh-alerts-3.x-2019.05.16/wazuh/_update_by_query?wait_for_completion=true" -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must_not": {
        "exists": {
          "field": "timestamp"
        }
      }
    }
  },
  "script": "ctx._source.timestamp = ctx._source[\"@timestamp\"]"
}
'
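If many daily indices predate the upgrade, generating the list of index names programmatically saves typing. A minimal sketch (assuming the wazuh-alerts-3.x-YYYY.MM.dd naming used in this thread; it only prints the request paths, with the actual POST left to curl as in the request above):

```python
from datetime import date, timedelta

def daily_indices(start, end, prefix="wazuh-alerts-3.x-"):
    """Yield one index name per day in [start, end] inclusive,
    following the daily naming scheme used in this thread."""
    day = start
    while day <= end:
        yield prefix + day.strftime("%Y.%m.%d")
        day += timedelta(days=1)

if __name__ == "__main__":
    # Print the URL path of each _update_by_query request to run.
    for index in daily_indices(date(2019, 5, 16), date(2019, 5, 18)):
        print(f"{index}/wazuh/_update_by_query?wait_for_completion=true")
```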


For more information, we have a documentation page related to the issue:

https://documentation.wazuh.com/3.10/upgrade-guide/upgrading-elastic-stack/elastic_server_rolling_upgrade.html#field-migration-from-timestamp-to-timestamp


To be safe, please share the Wazuh template that you are using.


I hope it helps.


Regards,

Javier Escobar

Derek Wuelfrath

Oct 8, 2019, 1:41:48 PM
to Wazuh mailing list
Hello Javier,

First of all, sorry for the (way too) late reply...

I was waiting to see whether everything was under control, so I could reply appropriately.

So it looks like the Filebeat way of doing things is working!

Thanks a lot for your help.

Cheers!