Wazuh doesn't show events on Web app


Nickolas Orestis Gavriilidis

Dec 29, 2020, 9:17:14 AM
to Wazuh mailing list
Hello team,

I have deployed the following architecture: one Wazuh manager (all-in-one deployment, v4) and one Wazuh agent (v4).

I am monitoring a specific file on the agent, so I have modified the agent's ossec.conf file as follows:

ossec.conf:

<ossec_config>
  <localfile>
    <log_format>syslog</log_format>
    <location>/opt/dionaea/var/lib/dionaea/bistreams/dionaea_logs/dionaea.log</location>
  </localfile>
</ossec_config>
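
Note: the agent only picks up ossec.conf changes after a restart; on a systemd-based install:

systemctl restart wazuh-agent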

There were no decoders or rules matching the generated logs, so I created a decoder on the Wazuh manager at /var/ossec/etc/decoders/0379-dionaea_decoder.xml:

<decoder name="dionaea">   
  <type>syslog</type>   
  <program_name>dionaea</program_name>
</decoder>

<decoder name="dionaea-logs">   
  <parent>dionaea</parent>
  <prematch offset="after_parent">^log </prematch>
   <!-- offset="after_parent" makes OSSEC ignore anything matched by the parent decoder and before -->
  <regex offset="after_prematch">^(\S+) (\S+) (\S+) (\S+) (\S+)</regex> <!-- offset="after_prematch" makes OSSEC ignore anything matched by the prematch and earlier-->
  <order>protocol, dstip, dstport, srcip, srcport</order>
</decoder>
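
For illustration, here is a hypothetical log line (the field values are invented) that this decoder pair would fully decode:

Dec 10 01:02:02 host dionaea: log tcp 192.168.1.10 445 1.1.1.1 51000

The parent decoder matches on the program name, the child's prematch anchors on "log " after it, and the regex extracts protocol, dstip, dstport, srcip, and srcport.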

I also modified the rules in /var/ossec/rules/local_rules.xml as follows:

<group name="local,syslog,sshd,json,">
 <!--
  Dec 10 01:02:02 host sshd[1234]: Failed none for root from 1.1.1.1 port 1066 ssh2
  --> 
  <rule id="100001" level="5">
    <if_sid>5716</if_sid>
    <srcip>1.1.1.1</srcip>
    <description>sshd: authentication failed from IP 1.1.1.1.</description>
    <group>authentication_failed,pci_dss_10.2.4,pci_dss_10.2.5,</group>
  </rule>
  <rule id="100010" level="5">
    <decoded_as>dionaea</decoded_as>
    <description>Dionaea messages grouped.</description>
  </rule>

  <rule id="100011" level="10" frequency="20" timeframe="120">
    <if_matched_sid>100010</if_matched_sid>
    <same_source_ip />   
    <description>DOS attack detected!</description>
  </rule>
</group>
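
Note: the manager only loads new decoders and rules after a restart; on a systemd-based install:

systemctl restart wazuh-manager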


I have tested my logs with /var/ossec/bin/ossec-logtest and everything seems to work fine. The last messages I get are:

**Phase 3: Completed filtering (rules).
       Rule id: '100010'
       Level: '5'
       Description: 'Dionaea messages grouped.'
**Alert to be generated.

Also, when I start gathering data from the agent, all the alerts appear in the alerts.log file, but when I try to see them in the web app nothing appears! Any ideas on how to solve this problem?

Thanks a lot!

victor....@wazuh.com

Dec 30, 2020, 2:33:35 AM
to Wazuh mailing list
Hello,

I have tested your configuration in my environment and it looks fine. Probably some of your ELK components are misconfigured or not running. Did you use the step-by-step installation (https://documentation.wazuh.com/4.0/installation-guide/open-distro/all-in-one-deployment/all_in_one.html) or the unattended installation (https://documentation.wazuh.com/4.0/installation-guide/open-distro/all-in-one-deployment/unattended-installation.html)?

Do you receive any alerts in Kibana at all, or are only your custom rule alerts missing?

Let's find the problem by looking at the configuration and status of your ELK components:

  • Filebeat

Check that Filebeat is running:

systemctl status filebeat

And ensure it is correctly configured:

filebeat test output

  • Elasticsearch

Check that your Elasticsearch server is running:

systemctl status elasticsearch

And ensure you have alerts in your Elasticsearch.

curl -XGET https://localhost:9200/_cat/indices/wazuh-alerts-4.x-* -u admin:admin -k

If this output is not empty, you have alerts in Elasticsearch.
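
You can also check the overall cluster health, using the same default admin credentials as above:

curl -XGET https://localhost:9200/_cluster/health?pretty -u admin:admin -k

A red status, or a large number of unassigned shards, would point to an indexing problem.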

If you don't find anything useful, check your Elasticsearch and Filebeat log files for errors or warnings and send them back to me:

cat /var/log/elasticsearch/elasticsearch.log | grep -i -E "error|warn"
cat /var/log/filebeat/filebeat | grep -i -E "error|warn"

Nickolas Orestis Gavriilidis

Dec 30, 2020, 10:08:41 AM
to Wazuh mailing list
Hello and thank you for your response,

I have used the unattended installation.

Filebeat seems to be OK.

After running "systemctl status elasticsearch" I get:

systemd[1]: Starting Elasticsearch...
systemd-entrypoint[27406]: WARNING: An illegal reflective access operation has occurred
systemd-entrypoint[27406]: WARNING: Illegal reflective access by com.amazon.opendistro.elasticsearch.performanceanalyzer.collector
systemd-entrypoint[27406]: WARNING: Please consider reporting this to the maintainers of com.amazon.opendistro.elasticsearch.perfo
systemd-entrypoint[27406]: WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
systemd-entrypoint[27406]: WARNING: All illegal access operations will be denied in a future release
gav systemd[1]: Started Elasticsearch.


After executing "curl -XGET https://localhost:9200/_cat/indices/wazuh-alerts-4.x-* -u admin:admin -k" I get:

green open wazuh-alerts-4.x-2020.12.11 3xjLFmY9S0-zXvEKPmJ3oQ 3 0 481 0  964.7kb  964.7kb
green open wazuh-alerts-4.x-2020.12.22 Gsf-I-ZiT-6suBsMzNaZaw 3 0 174 0  459.4kb  459.4kb
green open wazuh-alerts-4.x-2020.12.23 CFZYh5uLSjW0Bsy_rS7SdQ 3 0 126 0  448.3kb  448.3kb
green open wazuh-alerts-4.x-2020.12.12 zOzP-vsGTxuPYlcIGQVtkQ 3 0 241 0    551kb    551kb
green open wazuh-alerts-4.x-2020.12.20 ryfexpZuQx2VFngZcJdyWA 3 0 213 0  605.7kb  605.7kb
green open wazuh-alerts-4.x-2020.12.10 ZS-RBmtpSnqT5CYnWwTvGg 3 0 601 0 1020.2kb 1020.2kb
green open wazuh-alerts-4.x-2020.12.21 ia_7RfueRK2fj2_AwPBeFw 3 0 178 0  579.3kb  579.3kb
green open wazuh-alerts-4.x-2020.12.15 5Csz-GQzRC60FScViND46w 3 0 196 0  516.2kb  516.2kb
green open wazuh-alerts-4.x-2020.12.26 Auf3cZA3S9el1GDiD5etmA 3 0  35 0  301.3kb  301.3kb
green open wazuh-alerts-4.x-2020.12.27 3hLTbdgmSXKCD1NhEdLQ2g 3 0  45 0    322kb    322kb
green open wazuh-alerts-4.x-2020.12.16 bzG7Q5TrTam324U2LMQ3rg 3 0 284 0  650.1kb  650.1kb
green open wazuh-alerts-4.x-2020.12.13 FYcJrx9DRGSihlFm6K2Rlw 3 0 275 0  480.8kb  480.8kb
green open wazuh-alerts-4.x-2020.12.24 O-Fte9IZQ4GMHmCPpy6IuA 3 0  31 0  257.3kb  257.3kb
green open wazuh-alerts-4.x-2020.12.14 w6dtL47tToqJWEFkwu684A 3 0 285 0  611.9kb  611.9kb
green open wazuh-alerts-4.x-2020.12.25 SbYphyh4S8WC9SqupEDggQ 3 0  39 0  315.9kb  315.9kb
green open wazuh-alerts-4.x-2020.12.19 KkwqEwCHTR2RSQSfLlVTlw 3 0 210 0  546.5kb  546.5kb
green open wazuh-alerts-4.x-2020.12.08 9B_3AEGBS22YWGAR-fuXow 3 0 611 0    1.2mb    1.2mb
green open wazuh-alerts-4.x-2020.12.09 D67EGENzTG-iEO7JoMDGVg 3 0 727 0    1.1mb    1.1mb
green open wazuh-alerts-4.x-2020.12.17 bGsILEIuSNCuevIgP1hEFA 3 0 349 0  759.7kb  759.7kb
green open wazuh-alerts-4.x-2020.12.18 x2hF0XcxROCpTui98Sg2Qw 3 0 246 0  618.1kb  618.1kb

By executing cat /var/log/filebeat/filebeat | grep -i -E "error|warn" I get nothing back.


After executing cat /var/log/elasticsearch/elasticsearch.log | grep -i -E "error|warn" I get:

[...]
Counters=TotalError=0
Counters=TotalError=0
[2020-12-30T17:14:54,840][WARN ][c.a.o.s.a.BackendRegistry] [node-1] Authentication finally failed for admin from 127.0.0.1:57496
[2020-12-30T17:14:54,843][ERROR][c.a.o.s.a.s.InternalESSink] [node-1] Unable to index audit log {"audit_cluster_name":"elasticsearch","audit_rest_request_params":{"index":"wazuh-alerts-4.x-*"},"audit_node_name":"node-1","audit_rest_request_method":"GET","audit_category":"FAILED_LOGIN","audit_request_origin":"REST","audit_node_id":"IuY92BTnStGZwi8QJzi0iw","audit_request_layer":"REST","audit_rest_request_path":"/_cat/indices/wazuh-alerts-4.x-*","@timestamp":"2020-12-30T15:14:54.841+00:00","audit_request_effective_user_is_admin":false,"audit_format_version":4,"audit_request_remote_address":"127.0.0.1","audit_node_host_address":"127.0.0.1","audit_rest_request_headers":{"User-Agent":["curl/7.58.0"],"content-length":["0"],"Host":["localhost:9200"],"Accept":["*/*"]},"audit_request_effective_user":"admin","audit_node_host_name":"127.0.0.1"} due to org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;
[2020-12-30T17:15:02,175][WARN ][c.a.o.s.a.BackendRegistry] [node-1] Authentication finally failed for admin from 127.0.0.1:57502
[2020-12-30T17:15:02,178][ERROR][c.a.o.s.a.s.InternalESSink] [node-1] Unable to index audit log {"audit_cluster_name":"elasticsearch","audit_rest_request_params":{"index":"wazuh-alerts-4.x-*"},"audit_node_name":"node-1","audit_rest_request_method":"GET","audit_category":"FAILED_LOGIN","audit_request_origin":"REST","audit_node_id":"IuY92BTnStGZwi8QJzi0iw","audit_request_layer":"REST","audit_rest_request_path":"/_cat/indices/wazuh-alerts-4.x-*","@timestamp":"2020-12-30T15:15:02.175+00:00","audit_request_effective_user_is_admin":false,"audit_format_version":4,"audit_request_remote_address":"127.0.0.1","audit_node_host_address":"127.0.0.1","audit_rest_request_headers":{"User-Agent":["curl/7.58.0"],"content-length":["0"],"Host":["localhost:9200"],"Accept":["*/*"]},"audit_request_effective_user":"admin","audit_node_host_name":"127.0.0.1"} due to org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;
Counters=TotalError=0
[2020-12-30T17:15:13,907][WARN ][c.a.o.s.a.BackendRegistry] [node-1] Authentication finally failed for admin from 127.0.0.1:57514
[2020-12-30T17:15:13,909][ERROR][c.a.o.s.a.s.InternalESSink] [node-1] Unable to index audit log {"audit_cluster_name":"elasticsearch","audit_rest_request_params":{"index":"wazuh-alerts-4.x-*"},"audit_node_name":"node-1","audit_rest_request_method":"GET","audit_category":"FAILED_LOGIN","audit_request_origin":"REST","audit_node_id":"IuY92BTnStGZwi8QJzi0iw","audit_request_layer":"REST","audit_rest_request_path":"/_cat/indices/wazuh-alerts-4.x-*","@timestamp":"2020-12-30T15:15:13.908+00:00","audit_request_effective_user_is_admin":false,"audit_format_version":4,"audit_request_remote_address":"127.0.0.1","audit_node_host_address":"127.0.0.1","audit_rest_request_headers":{"User-Agent":["curl/7.58.0"],"content-length":["0"],"Host":["localhost:9200"],"Accept":["*/*"]},"audit_request_effective_user":"admin","audit_node_host_name":"127.0.0.1"} due to org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;
[2020-12-30T17:15:29,101][WARN ][c.a.o.s.a.BackendRegistry] [node-1] Authentication finally failed for admin from 127.0.0.1:57516
[2020-12-30T17:15:29,103][ERROR][c.a.o.s.a.s.InternalESSink] [node-1] Unable to index audit log {"audit_cluster_name":"elasticsearch","audit_rest_request_params":{"index":"wazuh-alerts-4.x-*"},"audit_node_name":"node-1","audit_rest_request_method":"GET","audit_category":"FAILED_LOGIN","audit_request_origin":"REST","audit_node_id":"IuY92BTnStGZwi8QJzi0iw","audit_request_layer":"REST","audit_rest_request_path":"/_cat/indices/wazuh-alerts-4.x-*","@timestamp":"2020-12-30T15:15:29.102+00:00","audit_request_effective_user_is_admin":false,"audit_format_version":4,"audit_request_remote_address":"127.0.0.1","audit_node_host_address":"127.0.0.1","audit_rest_request_headers":{"User-Agent":["curl/7.58.0"],"content-length":["0"],"Host":["localhost:9200"],"Accept":["*/*"]},"audit_request_effective_user":"admin","audit_node_host_name":"127.0.0.1"} due to org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;
[2020-12-30T17:15:34,832][ERROR][c.a.o.s.a.s.InternalESSink] [node-1] Unable to index audit log {"audit_cluster_name":"elasticsearch","audit_node_name":"node-1","audit_trace_task_id":"IuY92BTnStGZwi8QJzi0iw:8793188","audit_transport_request_type":"CreateIndexRequest","audit_category":"INDEX_EVENT","audit_request_origin":"REST","audit_request_body":"{}","audit_node_id":"IuY92BTnStGZwi8QJzi0iw","audit_request_layer":"TRANSPORT","@timestamp":"2020-12-30T15:15:34.829+00:00","audit_format_version":4,"audit_request_remote_address":"127.0.0.1","audit_request_privilege":"indices:admin/auto_create","audit_node_host_address":"127.0.0.1","audit_request_effective_user":"admin","audit_trace_indices":["<wazuh-alerts-4.x-{2020.12.30||/d{yyyy.MM.dd|UTC}}>"],"audit_node_host_name":"127.0.0.1"} due to org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;
[2020-12-30T17:15:48,050][WARN ][c.a.o.s.a.BackendRegistry] [node-1] Authentication finally failed for admin from 127.0.0.1:57524
[2020-12-30T17:15:48,053][ERROR][c.a.o.s.a.s.InternalESSink] [node-1] Unable to index audit log {"audit_cluster_name":"elasticsearch","audit_rest_request_params":{"index":"wazuh-alerts-4.x-*"},"audit_node_name":"node-1","audit_rest_request_method":"GET","audit_category":"FAILED_LOGIN","audit_request_origin":"REST","audit_node_id":"IuY92BTnStGZwi8QJzi0iw","audit_request_layer":"REST","audit_rest_request_path":"/_cat/indices/wazuh-alerts-4.x-*","@timestamp":"2020-12-30T15:15:48.051+00:00","audit_request_effective_user_is_admin":false,"audit_format_version":4,"audit_request_remote_address":"127.0.0.1","audit_node_host_address":"127.0.0.1","audit_rest_request_headers":{"User-Agent":["curl/7.58.0"],"content-length":["0"],"Host":["localhost:9200"],"Accept":["*/*"]},"audit_request_effective_user":"admin","audit_node_host_name":"127.0.0.1"} due to org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;
[2020-12-30T17:15:49,807][ERROR][c.a.o.s.a.s.InternalESSink] [node-1] Unable to index audit log {"audit_cluster_name":"elasticsearch","audit_node_name":"node-1","audit_trace_task_id":"IuY92BTnStGZwi8QJzi0iw:8794134","audit_transport_request_type":"CreateIndexRequest","audit_category":"INDEX_EVENT","audit_request_origin":"REST","audit_request_body":"{}","audit_node_id":"IuY92BTnStGZwi8QJzi0iw","audit_request_layer":"TRANSPORT","@timestamp":"2020-12-30T15:15:49.798+00:00","audit_format_version":4,"audit_request_remote_address":"127.0.0.1","audit_request_privilege":"indices:admin/auto_create","audit_node_host_address":"127.0.0.1","audit_request_effective_user":"admin","audit_trace_indices":["<wazuh-alerts-4.x-{2020.12.30||/d{yyyy.MM.dd|UTC}}>"],"audit_node_host_name":"127.0.0.1"} due to org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;
Counters=TotalError=0
Counters=TotalError=0
Counters=TotalError=0


Thank you!!

victor....@wazuh.com

Dec 31, 2020, 9:50:16 AM
to Wazuh mailing list
Hello,

Checking your logs, it looks like your Elasticsearch has some authentication errors, but I could not replicate them in my environment. Is your Kibana showing any error? Is it getting the rest of your agent's alerts?

If you are only missing these alerts, it must be an error in your ruleset. Otherwise, it is a Kibana/Elasticsearch issue.

Please follow these steps to check the workflow of the alerts:

  • Edit the /opt/dionaea/var/lib/dionaea/bistreams/dionaea_logs/dionaea.log file on your agent.

Add this line to that file:

Dec 10 01:02:02 host dionaea[1234]: Failed none for root from 1.1.1.1 port 1066 ssh2

  • Check your alerts.log and search for something similar to this:

Rule: 100010 (level 5) -> 'Dionaea messages grouped.'

If you find it, we can completely rule out any configuration error in your manager, decoders/rules, or agent configuration.

If not, search your agent's ossec.log for this line:

2020/12/31 09:58:50 ossec-logcollector: INFO: (1950): Analyzing file: '/opt/dionaea/var/lib/dionaea/bistreams/dionaea_logs/dionaea.log'.
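
A quick way to search for it, assuming the default agent log path:

grep "dionaea" /var/ossec/logs/ossec.log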

If you can't find it, check your agent configuration.

  • Filebeat

Does the command filebeat test output give you an output similar to this?

 elasticsearch: https://127.0.0.1:9200...
   parse url... OK
   connection...
     parse host... OK
     dns lookup... OK
     addresses: 127.0.0.1
     dial up... OK
   TLS...
     security: server's certificate chain verification is enabled
     handshake... OK
     TLS version: TLSv1.3
     dial up... OK
   talk to server... OK
   version: 7.9.1

Nickolas Orestis Gavriilidis

Jan 2, 2021, 11:15:54 AM
to Wazuh mailing list
Hello and Happy New Year!

After adding:

Dec 10 01:02:02 host dionaea[1234]: Failed none for root from 1.1.1.1 port 1066 ssh2

to the /opt/dionaea/var/lib/dionaea/bistreams/dionaea_logs/dionaea.log file

I got:

** Alert 1609603954.67276: - local,syslog,sshd,json,
2021 Jan 02 18:12:34 (Dionaea_18_v2) any->/opt/dionaea/var/lib/dionaea/bistreams/dionaea_logs/dionaea.log

Rule: 100010 (level 5) -> 'Dionaea messages grouped.'
Dec 10 01:02:02 host dionaea[1234]: Failed none for root from 1.1.1.1 port 1066 ssh2

in the alerts.log file on my manager.


After running the command filebeat test output:

elasticsearch: https://127.0.0.1:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 127.0.0.1
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.3
    dial up... OK
  talk to server... OK
  version: 7.9.1

Everything seems to be OK, but when I search the UI I see no data, as in the attached picture.

Thank you!
Screenshot 2021-01-02 at 6.12.08 PM.png

victor....@wazuh.com

Jan 4, 2021, 3:51:41 AM
to Wazuh mailing list
Happy New Year!

It looks like your environment is running correctly. If you check the screenshot you sent me, you can see that an SCA scan has run on your Debian/Linux agent (https://documentation.wazuh.com/4.0/user-manual/capabilities/sec-config-assessment/what_is_it.html).

Maybe you can't see events in the events count evolution because you selected a time range in which no events occurred. Check the screenshots below:
kibana_timegap_1.png

If you change that time range, you should see all of the agent's events.

kibana_timegap_2.png


Also, to ensure that your Kibana is well configured:

  • Check that your agent is shown in the Wazuh app
agents_in_kibana.png

  • Look at all alerts: Security events -> Events, and check that you have received alerts.
events_kibana1.png
events_kibana2.png
  • Search for dionaea:
dionaea_log.png

Nickolas Orestis Gavriilidis

Jan 4, 2021, 4:13:16 AM
to Wazuh mailing list
Hello,

I have changed the time range to one year, and I only see one specific rule, as in the attached picture. My custom rules are not appearing in the app, although they do appear in the log file when I feed the agent new events. Isn't that weird?

Thank you!
Screenshot 2021-01-04 at 11.09.54 AM.png

victor....@wazuh.com

Jan 4, 2021, 5:59:17 AM
to Wazuh mailing list
Checking your Elasticsearch log again, I saw this error line:


Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;


I think this is the root of all problems. You probably have too many shards per node.

Check this documentation page for more information:

https://www.elastic.co/blog/how-many-shards-should-i-have-in-my-elasticsearch-cluster
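
To see how many shards your node is currently holding, you can count the lines of the _cat/shards output (same admin credentials as before):

curl -s -XGET https://localhost:9200/_cat/shards -u admin:admin -k | wc -l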

To solve this issue, I recommend horizontally scaling your Elasticsearch cluster by adding more data nodes. Some resources that may help:

  • Wazuh documentation:

  1. https://documentation.wazuh.com/4.0/installation-guide/open-distro/distributed-deployment/step-by-step-installation/elasticsearch-cluster/elasticsearch-multi-node-cluster.html#elasticsearch-multi-node-cluster

  • Elasticsearch documentation

  1. https://www.elastic.co/guide/en/elasticsearch/reference/current/scalability.html
  2. https://www.elastic.co/guide/en/elasticsearch/reference/current/add-elasticsearch-nodes.html
  3. https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#data-node


In the meantime, you can use the command below to increase cluster.max_shards_per_node:

curl -XPUT https://localhost:9200/_cluster/settings -H 'Content-type: application/json' --data-binary $'{"transient":{"cluster.max_shards_per_node":2000}}' -k -u admin:admin

But be careful when increasing max_shards_per_node above 1000; it can degrade your Elasticsearch performance.
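
As an alternative to raising the limit, you can free shards by deleting old daily indices you no longer need, for example (taking one of the index names from your listing above):

curl -XDELETE https://localhost:9200/wazuh-alerts-4.x-2020.12.08 -u admin:admin -k

Each daily index in your listing uses 3 primary shards, so removing old ones frees shards quickly.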