no logs


Nicholai Tailor

Oct 8, 2018, 10:41:39 AM
to wa...@googlegroups.com
Hello,

After I upgraded to the latest version, I am no longer able to see any logs in Kibana for Wazuh.

I get the following error.

Kibana
2018-10-04T23:00:01.388Z ERROR Could not check if the index wazuh-monitoring-3.x-2018.10.04 exists due to [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)]
  


 

Jesus Linares

Oct 8, 2018, 11:11:20 AM
to Wazuh mailing list
Hi Nicholai,

Usually, this problem is related to low disk space in Elasticsearch. Check the disk space; if it is low, you will need to clear the read-only mode on the affected indices. Elasticsearch switches indices to read-only mode when it runs out of free disk space (by default, when disk usage crosses the 95% flood-stage watermark).

For example, to change that property on the .kibana index:

PUT .kibana/_settings
{
    "index": {
        "blocks": {
            "read_only_allow_delete": "false"
        }
    }
}
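To confirm whether disk space is really the problem, check the filesystem that holds the Elasticsearch data path (/var/lib/elasticsearch by default on package installs), which may be a different partition from the one Wazuh itself uses. For example:

df -h /var/lib/elasticsearch

or, from the Kibana Dev Tools, something like:

GET _cat/allocation?v

which shows the used and available disk per data node.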

I hope it helps.

Regards,
Jesus Linares. 

alberto....@wazuh.com

Oct 8, 2018, 11:11:48 AM
to Wazuh mailing list
Hello Nicholai

  The error indicates that the indices have been switched to read-only mode in order to prevent data damage. Your disk does not have enough free space; please check it. After freeing or adding more space, if you need to re-configure the indices with write permissions, open the Kibana Dev Tools and execute:

PUT _settings
{
    "index": {
        "blocks.read_only": "false"
    }
}
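One note on the exact setting: the block named in the error (FORBIDDEN/12/index read-only / allow delete) corresponds to the index.blocks.read_only_allow_delete flag, so if clearing read_only alone does not help, you may also need to reset that flag on all indices, for example:

PUT _all/_settings
{
    "index.blocks.read_only_allow_delete": null
}

Keep in mind that Elasticsearch will set the block again as soon as the disk passes the flood-stage watermark, so free up disk space first.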

Hope it helps.
Best regards, 

Alberto R. 


Nicholai Tailor

Oct 8, 2018, 4:10:54 PM
to je...@wazuh.com, wa...@googlegroups.com
Hello,

Are you sure?

/dev/mapper/wazuh-ossec       100G   17G   84G  17% /var/ossec 

I have 84 GB free.

Is that not enough?

Cheers


Nicholai Tailor

Oct 8, 2018, 4:23:39 PM
to je...@wazuh.com, wa...@googlegroups.com
Hello,

I did as you suggested. Since I have 84 GB available, that should be more than enough space.

I ran the PUT from the console and got an acknowledged = true status.

This is what I see in the Kibana log at
/usr/share/kibana/optimize/wazuh-logs/wazuhapp.log:

{"date":"2018-10-04T18:00:01.489Z","level":"error","location":"[monitoring][saveStatus]","message":"Could not check if the index wazuh-monitoring-3.x-2018.10.04 exists due to [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}
{"date":"2018-10-04T19:00:00.551Z","level":"error","location":"[monitoring][saveStatus]","message":"Could not check if the index wazuh-monitoring-3.x-2018.10.04 exists due to [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}
{"date":"2018-10-04T20:00:00.736Z","level":"error","location":"[monitoring][saveStatus]","message":"Could not check if the index wazuh-monitoring-3.x-2018.10.04 exists due to [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}
{"date":"2018-10-04T21:00:00.888Z","level":"error","location":"[monitoring][saveStatus]","message":"Could not check if the index wazuh-monitoring-3.x-2018.10.04 exists due to [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}
{"date":"2018-10-04T22:00:01.042Z","level":"error","location":"[monitoring][saveStatus]","message":"Could not check if the index wazuh-monitoring-3.x-2018.10.04 exists due to [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}
{"date":"2018-10-04T23:00:01.388Z","level":"error","location":"[monitoring][saveStatus]","message":"Could not check if the index wazuh-monitoring-3.x-2018.10.04 exists due to [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}
{"date":"2018-10-05T00:00:00.960Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-10-06T00:00:01.372Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-10-07T00:00:00.900Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-10-08T00:00:00.908Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}

Cheers

Nicholai Tailor

Oct 9, 2018, 11:04:54 AM
to Jesus Linares, wa...@googlegroups.com
Hello,

After doing as you suggested, I still see no logs.

All I see is:

{"date":"2018-10-09T00:00:01.348Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}

which just tells me today's index was created...?

Any ideas?

Miguel Ruiz

Oct 9, 2018, 11:37:51 AM
to Wazuh mailing list
Hi Nicholai,

Can you please verify the versions of Elasticsearch, Kibana, Logstash, Filebeat and Wazuh?

In your ELK stack instances:
Elasticsearch:
curl -XGET 'localhost:9200'

Kibana:
/usr/share/kibana/bin/kibana -V

Logstash:
/usr/share/logstash/bin/logstash -V

In the manager instance:

Wazuh:
cat /etc/ossec-init.conf

Filebeat:
/usr/share/filebeat

Check that all the components have the same version.

You can check the data flow from the manager to elasticsearch to see if everything is working properly.
In order to do that, check the status of the Wazuh manager and Filebeat

systemctl status wazuh-manager
systemctl status filebeat

If both are up and running, check if filebeat is reading the alerts and sending them to logstash
Check the configuration at:
/etc/filebeat/filebeat.yml
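The exact contents depend on your installation, but the two things to verify are that Filebeat reads /var/ossec/logs/alerts/alerts.json and that it points to your Logstash host on the Beats port (5000 in the Wazuh guide); roughly something like:

filebeat.prospectors:
  - type: log
    paths:
      - "/var/ossec/logs/alerts/alerts.json"

output.logstash:
  hosts: ["YOUR_LOGSTASH_IP:5000"]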

See if filebeat is reading the alerts.json file:
lsof /var/ossec/logs/alerts/alerts.json

The output should look like this:
[root@localhost wazuh]# lsof /var/ossec/logs/alerts/alerts.json
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF   NODE NAME
ossec-ana 524 ossec    9w   REG    8,1    27818 789487 /var/ossec/logs/alerts/alerts.json
filebeat  818  root    5r   REG    8,1    27818 789487 /var/ossec/logs/alerts/alerts.json






Miguel Ruiz

Oct 9, 2018, 11:46:29 AM
to Wazuh mailing list
Sorry, I made a mistake while writing the previous post and sent it before finishing.

The command to check the Filebeat version was incomplete; here is the correct one:
/usr/share/filebeat/bin/filebeat version

One last thing to add: if everything looks correct on the manager side, it would be useful to have a look at the Logstash and Elasticsearch logs to troubleshoot them. You can find them at these locations by default:
Logstash:
/var/log/logstash/logstash-plain.log

Elasticsearch:
/var/log/elasticsearch/elasticsearch.log

Let me know if this helped you.

Best regards,
Miguel R.


Nicholai Tailor

Oct 9, 2018, 12:14:54 PM
to migue...@wazuh.com, wa...@googlegroups.com
Hi Miguel
[root@waz01 ~]# curl -XGET 'localhost:9200'
{
  "name" : "Ap37uZl",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "jLVYVcL4Qaq76fh4EyHong",
  "version" : {
    "number" : "6.4.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "595516e",
    "build_date" : "2018-08-17T23:18:47.308994Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
[root@waz01 ~]# 
[root@waz01 ~]# Kibana:
-bash: Kibana:: command not found
[root@waz01 ~]# /usr/share/kibana/bin/kibana -V
6.4.0
[root@waz01 ~]# 
[root@waz01 ~]# Logstash:
-bash: Logstash:: command not found
[root@waz01 ~]# /usr/share/logstash/bin/logstash -V
logstash 6.4.0
[root@waz01 ~]# cat /etc/ossec-init.conf
DIRECTORY="/var/ossec"
NAME="Wazuh"
VERSION="v3.6.1"
REVISION="3608"
DATE="Fri Sep  7 16:00:40 UTC 2018"
TYPE="server"
[root@waz01 ~]# systemctl status wazuh-manager
● wazuh-manager.service - Wazuh manager
   Loaded: loaded (/etc/systemd/system/wazuh-manager.service; enabled; vendor preset: disabled)
   Active: active (exited) since Wed 2018-09-26 09:11:31 BST; 1 weeks 6 days ago

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
[root@waz01 ~]# systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-09-26 09:55:24 BST; 1 weeks 6 days ago
 Main PID: 13782 (filebeat)
   CGroup: /system.slice/filebeat.service
           └─13782 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/fileb...

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

[root@waz01 ~]# lsof /var/ossec/logs/alerts/alerts.json
COMMAND     PID  USER   FD   TYPE DEVICE   SIZE/OFF      NODE NAME
ossec-ana 19039 ossec   10w   REG  253,7 3463232106 201439408 /var/ossec/logs/alerts/alerts.json

[root@waz01 ~]# /usr/share/filebeat/bin/filebeat version
filebeat version 6.4.0 (amd64), libbeat 6.4.0 [34b4e2cc75fbbee5e7149f3916de72fb8892d070 built 2018-08-17 22:20:20 +0000 UTC]

everything looks good to me?






Nicholai Tailor

Oct 9, 2018, 12:40:45 PM
to migue...@wazuh.com, wa...@googlegroups.com
Here are the other logs you asked for.

I have no logs for any agents since Oct 4, which is around when I upgraded, I believe. I need to get this working. :)

[root@waz01 ~]# tail /var/log/elasticsearch/elasticsearch.log
[2018-10-09T08:00:01,017][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.09]
[2018-10-09T09:00:01,213][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.09]
[2018-10-09T10:00:00,522][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.09]
[2018-10-09T11:00:00,777][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.09]
[2018-10-09T12:00:01,176][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.09]
[2018-10-09T13:00:01,318][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.09]
[2018-10-09T14:00:00,638][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.09]
[2018-10-09T15:00:00,848][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.09]
[2018-10-09T16:00:01,116][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.09]
[2018-10-09T17:00:01,205][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.09]



[root@waz01 ~]# tail /var/log/logstash/logstash-plain.log
[2018-10-09T17:37:59,475][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>1}
[2018-10-09T17:37:59,475][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-10-09T17:37:59,475][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-10-09T17:37:59,475][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>2}
[2018-10-09T17:37:59,475][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-10-09T17:37:59,475][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>1}
[2018-10-09T17:37:59,475][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>2}
[2018-10-09T17:37:59,475][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>3}
[2018-10-09T17:37:59,476][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-10-09T17:37:59,476][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>1}


Miguel Ruiz

Oct 9, 2018, 12:46:08 PM
to Wazuh mailing list
Hello Nicholai,

The problem is that Filebeat is not reading logs from alerts.json.

Restart filebeat:
systemctl restart filebeat

And check again that the output looks like this:
[root@localhost wazuh]# lsof /var/ossec/logs/alerts/alerts.json
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF   NODE NAME
ossec-ana 524 ossec    9w   REG    8,1    27818 789487 /var/ossec/logs/alerts/alerts.json
filebeat  818  root    5r   REG    8,1    27818 789487 /var/ossec/logs/alerts/alerts.json

The lsof command lists which processes have a file open.

So you should see ossec-analysisd (shown truncated as ossec-ana) and filebeat.

If you don't see that the filebeat process has the alerts.json file open, check the Filebeat configuration:
cat /etc/filebeat/filebeat.yml


If you see that filebeat does have alerts.json open, restart Logstash and Elasticsearch:
systemctl restart logstash
systemctl restart elasticsearch

Wait until Elasticsearch comes up; you can check it using this command:
curl localhost:9200/_cluster/health?pretty

When you see the cluster health is yellow, open Kibana and see if alerts are now being indexed.

Filebeat will start indexing alerts from the last alert you had in elasticsearch, so make sure your time range is big enough.
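Since your Logstash log was still showing cluster_block_exception errors, it is also worth double-checking that no index kept the read-only block; one quick way, for example:

curl -s 'localhost:9200/_all/_settings?flat_settings=true&pretty' | grep read_only

If that prints any "index.blocks.read_only_allow_delete" : "true" lines, some indices are still blocked and Logstash will keep getting 403 responses until the setting is cleared.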

Let me know if this worked.

Regards,
Miguel R.


Nicholai Tailor

Oct 9, 2018, 8:54:58 PM
to migue...@wazuh.com, wa...@googlegroups.com
Hi Miguel,

Thank you for your help. It sort of worked...?

Now I'm seeing tons of:

2018-10-10 01:50:08 ossec-remoted warning (1404): Authentication error. Wrong key from '10.10.249.92'.
2018-10-10 01:50:08 ossec-remoted error (2202): Error uncompressing string.
2018-10-10 01:50:08 ossec-remoted warning (1404): Authentication error. Wrong key from '10.10.249.64'.
2018-10-10 01:50:08 ossec-remoted warning (1404): Authentication error. Wrong key from '10.10.249.46'.
2018-10-10 01:50:08 ossec-remoted warning (1404): Authentication error. Wrong key from '10.10.249.39'.
2018-10-10 01:50:07 ossec-remoted warning (1404): Authentication error. Wrong key from '10.10.249.96'.





Miguel Ruiz

Oct 10, 2018, 12:58:13 PM
to Wazuh mailing list
Hi Nicholai,

Do you mean you are now receiving alerts in Elasticsearch?

It looks like the "Authentication error. Wrong key from '10.10.249.92'" message might be caused by a different problem.

Can you confirm those agents are registered in the manager?

To do that, check the content of the file /var/ossec/etc/client.keys and look for the IPs from those messages.
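For example, to look for one of the IPs from the messages above:

grep '10.10.249.92' /var/ossec/etc/client.keys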

Also check the permissions of that file with:
ls -l /var/ossec/etc/client.keys
And try to see if there are any other errors inside the file /var/ossec/logs/ossec.log using:
cat /var/ossec/logs/ossec.log | grep -i error

Regards,
Miguel R.


Nicholai Tailor

Oct 10, 2018, 1:43:30 PM
to migue...@wazuh.com, wa...@googlegroups.com
Hi miguel,

No, the agents are still showing no logs... :(

This is not good. Any other ideas?

The authentication failures are due to a VPN. I really need to get the logs working. :(

Cheers


Miguel Ruiz

Oct 10, 2018, 2:13:48 PM
to Wazuh mailing list
Hi Nicholai,

I don't think the authentication error is due to VPN.

The agents are sending messages to the manager, but for some reason the manager doesn't recognize their key as valid.

So the problem is probably with the information inside /var/ossec/etc/client.keys on the manager.

To see if everything is correct, follow these steps on the manager side:

1- Check if the file /var/ossec/etc/client.keys exists.

2- Check if the manager has a key for the agents sending those messages. Inside /var/ossec/etc/client.keys you should see one line per agent with its id, agent name and IP. For example, you can check whether the agent with IP '10.10.249.92' is in that file.

3- Make sure the permissions for that file are correct. If they are not correct, the Wazuh manager won't be able to open it.
ls -l /var/ossec/etc/client.keys

4- Look for error logs inside /var/ossec/logs/ossec.log to see if there are any previous errors:
cat /var/ossec/logs/ossec.log | grep -i error

Send me the output of these last two commands so I can have a look and check that everything is correct.

Regards,


Nicholai Tailor

Oct 10, 2018, 4:19:45 PM
to migue...@wazuh.com, wa...@googlegroups.com
Hi Miguel,

I have rebooted the server, and if I look at the security audit timeline over 24 hours, it appears to be working. But I don't see anything for the last hour, 15 min, etc.

Cheers

On Wed, Oct 10, 2018 at 9:07 PM Nicholai Tailor <nichola...@gmail.com> wrote:
Hi Miguel,

They are all Windows servers that have the authentication issue, and someone is using a VPN without split tunnel or site-to-site.

I am rebooting my Wazuh server to see if I can get the logs working. Right now I need to get that working, as that is the whole point of Wazuh...

Currently all of my clients have shown no security logs since I upgraded.

Cheers


Nicholai Tailor

Oct 10, 2018, 4:58:31 PM
to migue...@wazuh.com, wa...@googlegroups.com
Hi Miguel,

There are so many broken pieces since the upgrade. Monitoring is not working either: I can't view the cluster, and there are no logs.

As far as I can tell everything seems to be in order... not sure what else to check?

Cheers


Nicholai Tailor

Oct 10, 2018, 5:23:43 PM
to migue...@wazuh.com, wa...@googlegroups.com
Is there any way I can get one of you to do a remote session?

My client is considering scrapping it at this point, as it appears unreliable. I still have faith, but I could use a bit of help.

Cheers

Nicholai Tailor

Oct 11, 2018, 7:08:38 AM
to migue...@wazuh.com, wa...@googlegroups.com
Hi miguel,

I still have no logs showing up.

Any other ideas on how we can fix this?

Please and thank you

Cheers

Miguel Ruiz

Oct 11, 2018, 11:45:12 AM
to Wazuh mailing list
Hi Nicholai,

To make sure that everything works fine, I need you to follow these steps.

First part:
We will check whether the problem is on the Wazuh manager side.

1- Please execute this command to see if your agents are active and reporting to the manager:
/var/ossec/bin/agent_control -l

If you have active agents, they should be sending alerts.

2- Check if you are receiving new alerts in real time.
Execute this command and wait until you see some alerts (Ctrl-C to exit the command):
tail -n0 -f /var/ossec/logs/alerts/alerts.json

Also, in the directory /var/ossec/logs/alerts/2018/Oct/ you should have the alerts for this month.

3 - Check again if filebeat is sending those alerts to logstash

lsof /var/ossec/logs/alerts/alerts.json

The output should look like this:

[root@localhost wazuh]# lsof /var/ossec/logs/alerts/alerts.json
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF   NODE NAME
ossec-ana 524 ossec    9w   REG    8,1    27818 789487 /var/ossec/logs/alerts/alerts.json
filebeat  818  root    5r   REG    8,1    27818 789487 /var/ossec/logs/alerts/alerts.json


Please, tell me if the Wazuh manager is generating alerts and filebeat is reading them.

Second part:

To get more information about this error you sent:

2018-10-10 01:50:08    ossec-remoted    warning    (1404): Authentication error. Wrong key from '10.10.249.92'.

2018-10-10 01:50:08    ossec-remoted    error    (2202): Error uncompressing string.
2018-10-10 01:50:08    ossec-remoted    warning    (1404): Authentication error. Wrong key from '10.10.249.64'.
2018-10-10 01:50:08    ossec-remoted    warning    (1404): Authentication error. Wrong key from '10.10.249.46'.
2018-10-10 01:50:08    ossec-remoted    warning    (1404): Authentication error. Wrong key from '10.10.249.39'.
2018-10-10 01:50:07    ossec-remoted    warning    (1404): Authentication error. Wrong key from '10.10.249.96'.


1- Restart the Wazuh manager and watch the logs from the moment the process starts, using:
systemctl restart wazuh-manager && tail -f /var/ossec/logs/ossec.log

We need that information to determine the reason for the warning and error logs.

2- Make sure the permissions for /var/ossec/etc/client.keys are correct. If they are not correct, the Wazuh manager won't be able to open it.
ls -l /var/ossec/etc/client.keys

The correct permissions are:

[root@Manager wazuh]# ls -l /var/ossec/etc/client.keys
-rw-r-----. 1 ossec ossec 0 Oct 10 13:15 /var/ossec/etc/client.keys

Third part:

We need the logs from Logstash and Elasticsearch again, to make sure they are working properly:

Logstash:
/var/log/logstash/logstash-plain.log

Elasticsearch:
/var/log/elasticsearch/elasticsearch.log

Please follow these steps and send us the results, so we can give you further assistance.

Best regards,
Miguel



Nicholai Tailor

Oct 12, 2018, 5:13:58 PM
to migue...@wazuh.com, wa...@googlegroups.com
Hi Miguel,

I have over 200 active agents when I run /var/ossec/bin/agent_control -l.
For security reasons I prefer not to list them out in an open thread.

The authentication error affects a select group of Windows servers, about 10 out of over 250. Let's not worry about them. :) Right now I have no logs showing up for over 200 agents.

I have checked all of those things before and everything appears to be fine, but there are still no logs in Kibana for the active agents.

Lots of logs are being sent:
==========================
[root@waz01 ~]# tail -n0 -f /var/ossec/logs/alerts/alerts.json
{"timestamp":"2018-10-12T22:07:21.836+0100","rule":{"level":6,"description":"sshd: insecure connection attempt (scan).","id":"5706","firedtimes":547,"mail":false,"groups":["syslog","sshd","recon"],"pci_dss":["11.4"],"gpg13":["4.12"],"gdpr":["IV_35.7.d"]},"agent":{"id":"148","name":"dgsdprdubs01","ip":"10.79.240.247"},"manager":{"name":"dgsdprdwaz01"},"id":"1539378441.3332628481","full_log":"Oct 12 21:07:21 dgsdprdubs01 sshd[69201]: Did not receive identification string from 10.79.240.111 port 55354","predecoder":{"program_name":"sshd","timestamp":"Oct 12 21:07:21","hostname":"dgsdprdubs01"},"decoder":{"parent":"sshd","name":"sshd"},"data":{"srcip":"10.79.240.111","srcport":"55354"},"location":"/var/log/messages"}
{"timestamp":"2018-10-12T22:07:21.854+0100","rule":{"level":3,"description":"sshd: authentication success.","id":"5715","firedtimes":297,"mail":false,"groups":["syslog","sshd","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.1","7.2"],"gdpr":["IV_32.2"]},"agent":{"id":"148","name":"dgsdprdubs01","ip":"10.79.240.247"},"manager":{"name":"dgsdprdwaz01"},"id":"1539378441.3332628859","full_log":"Oct 12 21:07:21 dgsdprdubs01 sshd[69202]: Accepted password for svc_pervade from 10.79.240.111 port 55356 ssh2","predecoder":{"program_name":"sshd","timestamp":"Oct 12 21:07:21","hostname":"dgsdprdubs01"},"decoder":{"parent":"sshd","name":"sshd"},"data":{"srcip":"10.79.240.111","dstuser":"svc_pervade"},"location":"/var/log/messages"}
{"timestamp":"2018-10-12T22:07:21.856+0100","rule":{"level":5,"description":"PAM: User login failed.","id":"5503","firedtimes":50,"mail":false,"groups":["pam","syslog","authentication_failed"],"pci_dss":["10.2.4","10.2.5"],"gpg13":["7.8"],"gdpr":["IV_35.7.d","IV_32.2"]},"agent":{"id":"148","name":"dgsdprdubs01","ip":"10.79.240.247"},"manager":{"name":"dgsdprdwaz01"},"id":"1539378441.3332629254","full_log":"Oct 12 21:07:21 dgsdprdubs01 sshd[69202]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=dgsdpvd02.mcs.local  user=svc_pervade","predecoder":{"program_name":"sshd","timestamp":"Oct 12 21:07:21","hostname":"dgsdprdubs01"},"decoder":{"name":"pam"},"data":{"srcip":"dgsdpvd02.mcs.local","dstuser":"svc_pervade","uid":"0","euid":"0","tty":"ssh"},"location":"/var/log/secure"}
{"timestamp":"2018-10-12T22:07:21.910+0100","rule":{"level":3,"description":"Windows Logon Success.","id":"18107","firedtimes":3087,"mail":false,"groups":["windows","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.1","7.2"],"gdpr":["IV_32.2"]},"agent":{"id":"239","name":"dgsdprddom01.mcs.local","ip":"10.79.240.10"},"manager":{"name":"dgsdprdwaz01"},"id":"1539378441.3332629746","full_log":"2018 Oct 12 21:07:20 WinEvtLog: Security: AUDIT_SUCCESS(4624): Microsoft-Windows-Security-Auditing: (no user): no domain: dgsdprddom01.mcs.local: An account was successfully logged on.    Subject:   Security ID:  S-1-0-0   Account Name:  -   Account Domain:  -   Logon ID:  0x0    Logon Information:   Logon Type:  3   Restricted Admin Mode: -   Virtual Account:  No   Elevated Token:  Yes    Impersonation Level:  Delegation    New Logon:   Security ID:  S-1-5-21-2728124869-2810918716-3645054289-1959   Account Name:  DGSDWKS01$   Account Domain:  MCS.LOCAL   Logon ID:  0x9E6C951E   Linked Logon ID:  0x0   Network Account Name: -   Network Account Domain: -   Logon GUID:  {06675D85-89E3-33C1-733D-BB470C2DC761}    Process Information:   Process ID:  0x0   Process Name:  -    Network Information:   Workstation Name: -   Source Network Address: 10.79.248.238   Source Port:  51825    Detailed Authentication Information:   Logon Process:  Kerberos   Authentication Package: Kerberos   Transited Services: -   Package Name (NTLM only): -   Key Length:  0    This event is generated when a logon session is created. It is generated on the computer that was accessed.    The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.    The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).    The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.    The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.    The impersonation level field indicates the extent to which a process in the logon session can impersonate.    The authentication information fields provide detailed information about this specific logon request.   - Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.   - Transited services indicate which intermediate services have participated in this logon request.   - Package name indicates which sub-protocol was used among the NTLM protocols.   - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.","predecoder":{"program_name":"WinEvtLog","timestamp":"2018 Oct 12 21:07:20"},"decoder":{"parent":"windows","name":"windows"},"data":{"srcip":"10.79.248.238","dstuser":"(no user)","id":"4624","status":"AUDIT_SUCCESS","data":"Microsoft-Windows-Security-Auditing","system_name":"dgsdprddom01.mcs.local","type":"Security","security_id":"S-1-0-0","account_name":"DGSDWKS01$","account_domain":"MCS.LOCAL","logon_type":"3"},"location":"WinEvtLog"}
{"timestamp":"2018-10-12T22:07:21.956+0100","rule":{"level":3,"description":"Successful sudo to ROOT executed","id":"5402","firedtimes":340,"mail":false,"groups":["syslog","sudo"],"pci_dss":["10.2.5","10.2.2"],"gpg13":["7.6","7.8","7.13"],"gdpr":["IV_32.2"]},"agent":{"id":"026","name":"dgsdtsthdp04","ip":"10.79.247.62"},"manager":{"name":"dgsdprdwaz01"},"id":"1539378441.3332632500","full_log":"Oct 12 21:07:20 dgsdtsthdp04 sudo: svc_pervade : TTY=unknown ; PWD=/home/svc_pervade ; USER=root ; COMMAND=/usr/bin/top -b -i -n 1","predecoder":{"program_name":"sudo","timestamp":"Oct 12 21:07:20","hostname":"dgsdtsthdp04"},"decoder":{"parent":"sudo","name":"sudo"},"data":{"srcuser":"svc_pervade","dstuser":"root","tty":"unknown","pwd":"/home/svc_pervade","command":"/usr/bin/top -b -i -n 1"},"location":"/var/log/auth.log"}
{"timestamp":"2018-10-12T22:07:21.958+0100","rule":{"level":3,"description":"Windows Logon 

Filebeat was dead again. I restarted it.

[root@waz01 ~]# lsof /var/ossec/logs/alerts/alerts.json
COMMAND     PID  USER   FD   TYPE DEVICE   SIZE/OFF      NODE NAME
ossec-ana 19035 ossec   10w   REG  253,3 4325276081 201439428 /var/ossec/logs/alerts/alerts.json
filebeat  21728  root    6r   REG  253,3 4325277364 201439428 /var/ossec/logs/alerts/alerts.json


[root@waz01 ~]# ls -l /var/ossec/etc/client.keys 
-rw-r----- 1 root ossec 23489 Oct 12 22:01 /var/ossec/etc/client.keys


[root@waz01 ~]# tail /var/log/logstash/logstash-plain.log
[2018-10-12T22:11:12,903][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-10-12T22:11:12,903][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-10-12T22:11:12,903][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-10-12T22:11:12,903][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-10-12T22:11:12,903][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-10-12T22:11:12,903][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-10-12T22:11:12,903][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-10-12T22:11:12,903][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>16}
[2018-10-12T22:11:12,903][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-10-12T22:11:12,903][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>12}
[root@waz01 ~]# 



  [root@waz01 ~]# tail /var/log/elasticsearch/elasticsearch.log
[2018-10-12T13:00:00,743][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.12]
[2018-10-12T14:00:00,550][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.12]
[2018-10-12T15:00:00,549][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.12]
[2018-10-12T16:00:00,667][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.12]
[2018-10-12T17:00:00,578][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.12]
[2018-10-12T18:00:00,543][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.12]
[2018-10-12T19:00:00,491][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.12]
[2018-10-12T20:00:00,704][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.12]
[2018-10-12T21:00:00,642][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.12]
[2018-10-12T22:00:00,653][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [Ap37uZl] updating number_of_replicas to [1] for indices [wazuh-monitoring-3.x-2018.10.12]








Nicholai Tailor

Oct 12, 2018, 5:28:20 PM
to migue...@wazuh.com, wa...@googlegroups.com
Hello,

Here is additional log info.

[root@waz01 ~]# filebeat test output
logstash: 10.79.240.160:5000...
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 10.79.240.160
    dial up... ERROR dial tcp 10.79.240.160:5000: connect: connection refused
[root@waz01 ~]# systemctl restart logstash
[root@waz01 ~]# cat /var/log/elasticsearch/elasticsearch.log | grep -E "(ERROR|WARN)"
[2018-10-12T01:00:00,044][ERROR][o.e.x.m.e.l.LocalExporter] failed to delete indices
[root@waz01 ~]# cat /var/log/logstash/logstash-plain.log | grep -E "(ERROR|WARN)"
[2018-10-12T22:23:02,246][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2018-10-12T22:23:08,223][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Mutate", {"remove_field"=>["timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"], "id"=>"ef305c252f653d6943a28185513fcec34c053c53c0770c2408bdc8b29a4520ae"}]=>[{"thread_id"=>23, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>28, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>29, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>30, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-10-12T22:23:08,230][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2018-10-12T22:23:13,313][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Mutate", {"remove_field"=>["timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"], "id"=>"ef305c252f653d6943a28185513fcec34c053c53c0770c2408bdc8b29a4520ae"}]=>[{"thread_id"=>23, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>28, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>29, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>30, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-10-12T22:23:18,411][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Mutate", {"remove_field"=>["timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"], "id"=>"ef305c252f653d6943a28185513fcec34c053c53c0770c2408bdc8b29a4520ae"}]=>[{"thread_id"=>23, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>28, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>29, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>30, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-10-12T22:23:23,508][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Mutate", {"remove_field"=>["timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"], "id"=>"ef305c252f653d6943a28185513fcec34c053c53c0770c2408bdc8b29a4520ae"}]=>[{"thread_id"=>23, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>28, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>29, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>30, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-10-12T22:23:28,595][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Mutate", {"remove_field"=>["timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"], "id"=>"ef305c252f653d6943a28185513fcec34c053c53c0770c2408bdc8b29a4520ae"}]=>[{"thread_id"=>23, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>28, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>29, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>30, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-10-12T22:23:33,683][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Mutate", {"remove_field"=>["timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"], "id"=>"ef305c252f653d6943a28185513fcec34c053c53c0770c2408bdc8b29a4520ae"}]=>[{"thread_id"=>23, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>28, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>29, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>30, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-10-12T22:23:38,755][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Mutate", {"remove_field"=>["timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"], "id"=>"ef305c252f653d6943a28185513fcec34c053c53c0770c2408bdc8b29a4520ae"}]=>[{"thread_id"=>23, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>28, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>29, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>30, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-10-12T22:23:43,817][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Mutate", {"remove_field"=>["timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"], "id"=>"ef305c252f653d6943a28185513fcec34c053c53c0770c2408bdc8b29a4520ae"}]=>[{"thread_id"=>23, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>28, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>29, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>30, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-10-12T22:23:48,886][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Mutate", {"remove_field"=>["timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"], "id"=>"ef305c252f653d6943a28185513fcec34c053c53c0770c2408bdc8b29a4520ae"}]=>[{"thread_id"=>23, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>28, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>29, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>30, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-10-12T22:23:53,971][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Mutate", {"remove_field"=>["timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"], "id"=>"ef305c252f653d6943a28185513fcec34c053c53c0770c2408bdc8b29a4520ae"}]=>[{"thread_id"=>23, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>28, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>29, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>30, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-10-12T22:23:59,031][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Mutate", {"remove_field"=>["timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"], "id"=>"ef305c252f653d6943a28185513fcec34c053c53c0770c2408bdc8b29a4520ae"}]=>[{"thread_id"=>23, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>28, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>29, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>30, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-10-12T22:24:04,109][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Mutate", {"remove_field"=>["timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"], "id"=>"ef305c252f653d6943a28185513fcec34c053c53c0770c2408bdc8b29a4520ae"}]=>[{"thread_id"=>23, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>28, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>29, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>30, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[... the same org.logstash.execution.ShutdownWatcherExt warning, with identical stalling-thread info, repeats every ~5 seconds until 2018-10-12T22:24:29 ...]
[2018-10-12T22:24:54,584][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch index=>"wazuh-alerts-3.x-%{+YYYY.MM.dd}", id=>"47c5d5f74396f7bfe2bace5201e4f43d67ff3fa9072ea8ba13d2e62d2ff6bbea", hosts=>[//localhost:9200], document_type=>"wazuh", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_04e0abd0-5095-41ff-8181-3436c376192e", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-10-12T22:24:55,199][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-10-12T22:24:55,254][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}

Any ideas?

Nicholai Tailor

unread,
Oct 15, 2018, 3:11:05 AM10/15/18
to migue...@wazuh.com, wa...@googlegroups.com
Hi Miguel,

Still no logs

(screenshot attached: image.png)

Are there any other things we should check? I need to get this working :)

Please and thanks

Nicholai Tailor

unread,
Oct 15, 2018, 3:16:18 AM10/15/18
to migue...@wazuh.com, wa...@googlegroups.com
Hello,

Also, when I run

tail -n0 -f /var/ossec/logs/alerts/alerts.json | grep -i Ubuntu

nothing comes up for any of my Linux machines.

If I run

tail -n0 -f /var/ossec/logs/alerts/alerts.json | grep -i windows

it shows logs.

In case that helps.

Cheers

Nicholai Tailor

unread,
Oct 15, 2018, 6:10:48 AM10/15/18
to migue...@wazuh.com, wa...@googlegroups.com
Hello,

The other thing I found is that I have over 257 clients, yet when I run curl to check the indices I don't see 257 indices since the upgrade.

[root@waz01 conf.d]# curl localhost:9200/_cat/indices
yellow open wazuh-alerts-3.x-2018.09.07     HLNDuMjHS1Ox3iLoSwFE7g 5 1     294     0 1000.8kb 1000.8kb
yellow open wazuh-monitoring-3.x-2018.10.05 QZUYRFkAQJairkP3FRJorQ 5 1    5848     0    3.3mb    3.3mb
yellow open .wazuh-version                  daOExJOfQKCN8_hRxFFH-w 1 1       1     0    5.1kb    5.1kb
yellow open wazuh-monitoring-3.x-2018.09.05 GzmAl7J_To6saJ562i_FOw 5 1      26     0  344.8kb  344.8kb
yellow open wazuh-alerts-3.x-2018.09.08     MqIJtCNQR3aU3inuv-pxpw 5 1     183     0    748kb    748kb
yellow open wazuh-monitoring-3.x-2018.10.10 3BKq8MOmRUafq9QTF90Xpg 5 1    2500     0    1.3mb    1.3mb
green  open .monitoring-es-6-2018.10.14     z5sVUyj4SC6vkDIFXqPvQA 1 0  990566  6199  484.7mb  484.7mb
yellow open wazuh-monitoring-3.x-2018.09.16 TCK4R93WTuWiDpn9iheVdg 5 1     720     0  904.3kb  904.3kb
green  open .monitoring-es-6-2018.10.11     QtyuwZGVThKoN1nLdKy2Wg 1 0  946121 15275  457.4mb  457.4mb
yellow open wazuh-monitoring-3.x-2018.08.31 Md28rr1VT7WsvSexXrRygA 5 1      24     0  322.4kb  322.4kb
yellow open wazuh-monitoring-3.x-2018.09.25 cVVSJ24SRv-hvHcvXtJQcA 5 1    2976     0    1.8mb    1.8mb
yellow open wazuh-monitoring-3.x-2018.09.19 NO5GK52jRiaPtrapmpKZyA 5 1    1030     0      1mb      1mb
yellow open wazuh-alerts-3.x-2018.09.21     FY0mIXGQQHmCpYgRgOIJhg 5 1  203134     0   63.5mb   63.5mb
green  open .monitoring-es-6-2018.10.09     YcGOZwKWQbidUbGUWsoyjg 1 0  918716  6074  442.4mb  442.4mb
yellow open wazuh-monitoring-3.x-2018.10.07 IcxUyFhZQrCtl0arhJt3Hw 5 1    6000     0    3.3mb    3.3mb
yellow open wazuh-monitoring-3.x-2018.10.09 VJ7ocUhnSYCVgeL9VZrkyw 5 1    5750     0    3.1mb    3.1mb
yellow open wazuh-monitoring-3.x-2018.09.15 6gBQircjTZyFO1GLZYAc7w 5 1     720     0  905.9kb  905.9kb
yellow open wazuh-monitoring-3.x-2018.09.04 dpegpn03T464EvrNMU7NZA 5 1      24     0  322.4kb  322.4kb
yellow open wazuh-monitoring-3.x-2018.09.20 WeI6b0vqQuaOMWhnisrXxw 5 1    1056     0      1mb      1mb
yellow open wazuh-alerts-3.x-2018.09.18     B1wJIN1SQKuSQbkoFsTmnA 5 1  187805     0   52.4mb   52.4mb
yellow open wazuh-alerts-3.x-2018.09.04     CvatsnVxTDKgtPzuSkebFQ 5 1      28     0  271.1kb  271.1kb
yellow open wazuh-monitoring-3.x-2018.09.07 DgASBh6HTzS_xA9YRlfTqA 5 1      54     0  333.1kb  333.1kb
yellow open wazuh-alerts-3.x-2018.10.13     wM5hHYMCQsG5XCkIquE-QA 5 1  303773     0  221.7mb  221.7mb
yellow open wazuh-monitoring-3.x-2018.10.13 UbssD1NHSxmLicB0Jthyuw 5 1    1000     0      1mb      1mb
yellow open .wazuh                          4CzS7CnEQR-edBXJ1obdMQ 1 1       1     0   10.6kb   10.6kb
yellow open wazuh-monitoring-3.x-2018.08.29 sDy8FHJ1R4Wi1Etq38j0eg 5 1      24     0  322.4kb  322.4kb
yellow open wazuh-monitoring-3.x-2018.09.18 Z9jNS5VcRUOcJiN8nOfLzg 5 1     830     0 1002.7kb 1002.7kb
yellow open wazuh-monitoring-3.x-2018.09.11 qrVS_fH6RJm4ak_sLb8wNQ 5 1      72     0  357.3kb  357.3kb
yellow open wazuh-alerts-3.x-2018.09.22     60AsCkS-RGG0Z2kFGcrbxg 5 1  218077     0   74.2mb   74.2mb
yellow open wazuh-alerts-3.x-2018.10.12     WdiFnzu7QlaBetwzcsIFYQ 5 1  363029     0  237.7mb  237.7mb
yellow open wazuh-monitoring-3.x-2018.10.14 i9k1AlD2QOOPKwFBkTjQ8w 5 1    6000     0    3.3mb    3.3mb
yellow open wazuh-alerts-3.x-2018.09.17     zK3MCinOSF2_3rNAJnuPCQ 5 1  174254     0   48.3mb   48.3mb
yellow open wazuh-monitoring-3.x-2018.09.06 EO02VU2ITA-sf6o3HOldUA 5 1      48     0  564.4kb  564.4kb
yellow open wazuh-monitoring-3.x-2018.10.11 -aepDZwXSUa3YXPL-Dh8zw 5 1    6000     0    3.3mb    3.3mb
yellow open wazuh-monitoring-3.x-2018.08.28 NZsVaguvSlu0YURoA7eEAQ 5 1       6     0   81.5kb   81.5kb
yellow open wazuh-alerts-3.x-2018.09.28     iZ2J4UMhR6y1eHH1JiiqLQ 5 1  232290     0   78.6mb   78.6mb
yellow open wazuh-monitoring-3.x-2018.09.26 1ykmzQWiRDWLpDDpZ7Mxig 5 1    3120     0      2mb      2mb
yellow open wazuh-monitoring-3.x-2018.10.08 f9vV2Pe-TfOTLzZqtkEkPw 5 1    6000     0    3.3mb    3.3mb
yellow open wazuh-alerts-3.x-2018.09.09     FRELA8dFSWy6aMd12ZFnqw 5 1     428     0  895.1kb  895.1kb
yellow open wazuh-monitoring-3.x-2018.10.04 KPSxZbbJSIu_FsCBu0ZPWQ 5 1    1452     0    1.5mb    1.5mb
green  open .monitoring-es-6-2018.10.08     mCDMhJ_IT7aZ7kXwIqYWLg 1 0  911498 16600  439.2mb  439.2mb
yellow open wazuh-alerts-3.x-2018.09.11     2Zc4Fg8lR6G64XuJLZbkBA 5 1     203     0  772.1kb  772.1kb
yellow open wazuh-monitoring-3.x-2018.09.09 vqgslu5nT_qGc5fL7Wivyw 5 1      72     0  338.3kb  338.3kb
green  open .monitoring-es-6-2018.10.13     6TTrY0wtSouTe3SN8wVEaA 1 0  185509  9100   69.7mb   69.7mb
yellow open wazuh-alerts-3.x-2018.08.29     kAPHZSRpQqaMhoWgkiXupg 5 1      28     0  236.6kb  236.6kb
yellow open wazuh-alerts-3.x-2018.08.28     XmD43PlgTUWaH4DMvZMiqw 5 1     175     0  500.9kb  500.9kb
yellow open wazuh-alerts-3.x-2018.09.26     toJKVX5lQcOBC_rkUaFcrg 5 1  224709     0   75.9mb   75.9mb
yellow open wazuh-monitoring-3.x-2018.09.27 mHwFuEXnRh-Es0QM4zZMcg 5 1    3160     0      2mb      2mb
yellow open wazuh-alerts-3.x-2018.09.14     0uaTbLxpSXWQr9m0h8xZiA 5 1  114549     0   31.3mb   31.3mb
yellow open wazuh-monitoring-3.x-2018.09.17 4DUsdUiKQ62xNUcjh5AoDw 5 1     720     0  907.2kb  907.2kb
yellow open wazuh-alerts-3.x-2018.09.06     KqfwJqhcSNK3-hP7lbgLrw 5 1      72     0  562.9kb  562.9kb
yellow open wazuh-monitoring-3.x-2018.09.24 2xm0hR6hQpCMZc_hgL_YHg 5 1    2488     0    1.7mb    1.7mb
yellow open wazuh-alerts-3.x-2018.09.20     k5rfOcyuQASiMwiXslYl2g 5 1  202388     0   57.6mb   57.6mb
yellow open wazuh-alerts-3.x-2018.08.30     wFvW8IqHS8yDMUINDkt1fQ 5 1      28     0  248.9kb  248.9kb
yellow open wazuh-alerts-3.x-2018.09.01     U5jntd1hSWGxdofpgMD4Xg 5 1      28     0  203.5kb  203.5kb
yellow open wazuh-alerts-3.x-2018.09.03     trTjIjrTSIKUw5kqu9Lkpw 5 1      41     0  396.5kb  396.5kb
green  open .monitoring-es-6-2018.10.10     2yJvfnh3Qgq0cX-FTYcpGw 1 0  457055 12683  191.3mb  191.3mb
yellow open wazuh-monitoring-3.x-2018.09.03 PqBztk9_RfOehGxvP8AygQ 5 1      24     0  322.4kb  322.4kb
yellow open wazuh-monitoring-3.x-2018.10.12 RE0CDF70Tz6pI0l2MAvumQ 5 1    6000     0    3.3mb    3.3mb
green  open .monitoring-kibana-6-2018.10.15 XITB8lQ4QeCPcWos51V49Q 1 0    3652     0  923.1kb  923.1kb
yellow open wazuh-monitoring-3.x-2018.09.29 K79cyOmHSBK9tc0lcduCMg 5 1    3048     0    1.9mb    1.9mb
yellow open wazuh-monitoring-3.x-2018.08.30 V9j_9rstQfu8_GkcqzD1yA 5 1      24     0  322.4kb  322.4kb
yellow open wazuh-alerts-3.x-2018.09.25     Eg1rvDXbSNSq5EqJAtSm_A 5 1  247998     0   87.7mb   87.7mb
green  open .monitoring-es-6-2018.10.12     D7RWNzbmTJ-0krut64aqLw 1 0  962296 10021  468.1mb  468.1mb
yellow open wazuh-alerts-3.x-2018.09.05     HHRnxqjtTKimmW6FEUUfdw 5 1     143     0  679.6kb  679.6kb
yellow open wazuh-alerts-3.x-2018.09.15     GIx8fMXnQ3ukrSkKmjbViQ 5 1  171191     0   45.9mb   45.9mb
yellow open wazuh-alerts-3.x-2018.10.10     W3pw1hDwSp2QAtRm0hwoaQ 5 1  896799     0  662.6mb  662.6mb
yellow open wazuh-monitoring-3.x-2018.10.03 Wr21Gr7mSc2wtgpL32Iuxg 5 1    5576     0    3.2mb    3.2mb
yellow open wazuh-monitoring-3.x-2018.09.12 Mb1NV0EbS_Ola_N-7C1qqQ 5 1      72     0  338.7kb  338.7kb
yellow open wazuh-monitoring-3.x-2018.09.08 9PTWd-sqRLS85iEEkLRxBw 5 1      72     0  303.4kb  303.4kb
yellow open wazuh-alerts-3.x-2018.10.02     nKEdjkFOQ9abitVi_dKF3g 5 1  727934     0  232.7mb  232.7mb
yellow open wazuh-monitoring-3.x-2018.09.01 XyUbhVaJQ_uqn0bKUU9ACA 5 1      24     0  322.4kb  322.4kb
green  open .monitoring-kibana-6-2018.10.09 slk00IDxQ9O1fT-N8iioOg 1 0    8639     0      2mb      2mb
yellow open wazuh-monitoring-3.x-2018.09.30 5keMEnBbShWsOuqGQw8oig 5 1    3048     0    1.9mb    1.9mb
yellow open wazuh-monitoring-3.x-2018.09.23 I0tunBSSQB6TyX8q83YDBw 5 1    2040     0    1.5mb    1.5mb
yellow open wazuh-alerts-3.x-2018.10.01     mvYSVDZJSfa-F_5dKIBwAg 5 1  402155     0  129.9mb  129.9mb
green  open .monitoring-es-6-2018.10.15     zeJnDO98T_WhkR9bCmdY0A 1 0  423934 11160  236.8mb  236.8mb
yellow open wazuh-alerts-3.x-2018.09.19     ebb9Jrt1TT6Qm6df7VjZxg 5 1  201897     0   58.3mb   58.3mb
yellow open wazuh-alerts-3.x-2018.09.13     KPy8HfiyRyyPeeHpTGKJNg 5 1   52530     0   13.7mb   13.7mb
yellow open wazuh-alerts-3.x-2018.10.03     bMW_brMeRkSDsJWL6agaWg 5 1 1321895     0    715mb    715mb
yellow open wazuh-monitoring-3.x-2018.09.02 MW7EyTNoQty81n_-vH7RAg 5 1      24     0  322.4kb  322.4kb
yellow open wazuh-monitoring-3.x-2018.09.21 IV-W-zfWR5mEeQAqBUI25Q 5 1    1523     0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2018.09.27     8wRF0XhXQnuVexAxLF6Y5w 5 1  233117     0   79.2mb   79.2mb
green  open .monitoring-kibana-6-2018.10.14 W0omIcUFQoiLUf4M2FiLRw 1 0    8639     0      2mb      2mb
yellow open wazuh-monitoring-3.x-2018.09.22 cjPuywBwStOEHxtLAh6RDw 5 1    2040     0    1.5mb    1.5mb
yellow open wazuh-monitoring-3.x-2018.09.13 iC8y9cCuTOSAQt6LS_lwuw 5 1     319     0  408.9kb  408.9kb
yellow open wazuh-alerts-3.x-2018.09.12     1aB7pIcnTWqZPZkFagHnKA 5 1      73     0    516kb    516kb
yellow open wazuh-monitoring-3.x-2018.10.15 g_si2x_eSraRg8gkeZIirQ 5 1    2748     0    1.5mb    1.5mb
yellow open wazuh-alerts-3.x-2018.09.29     BXyZe2eySkSlwutudcTzNA 5 1  222734     0   73.7mb   73.7mb
yellow open wazuh-alerts-3.x-2018.10.04     x8198rpWTxOVBgJ6eTjJJg 5 1  492044     0  364.9mb  364.9mb
green  open .monitoring-kibana-6-2018.10.10 TC41hwkIR6CdpQz7tCWXrA 1 0    6565     0    1.2mb    1.2mb
yellow open wazuh-alerts-3.x-2018.09.23     ZQZE9KD1R1y6WypYVV5kfg 5 1  216141     0   73.7mb   73.7mb
yellow open .kibana                         icwZoA6MS4mfqkMizdUCew 5 1       4     0   48.4kb   48.4kb
yellow open wazuh-monitoring-3.x-2018.10.01 AC39iGquQB2BkkniX-_6ww 5 1    3199     0      2mb      2mb
green  open .monitoring-kibana-6-2018.10.13 7KqrreY9Rnm5zKWLGPEbZA 1 0    1081     0  247.4kb  247.4kb
yellow open wazuh-alerts-3.x-2018.09.24     Loa8kM7cSJOujjRzvYsVKw 5 1  286140     0  106.3mb  106.3mb
yellow open wazuh-alerts-3.x-2018.09.02     lt8xvq2ZRdOQGW7pSX5-wg 5 1     148     0    507kb    507kb
yellow open wazuh-alerts-3.x-2018.08.31     RP0_5r1aQdiMmQYeD0-3CQ 5 1      28     0  247.8kb  247.8kb
yellow open wazuh-monitoring-3.x-2018.09.14 ZHoOv9vGSJOwmHyCoNJzKQ 5 1     634     0  855.7kb  855.7kb
yellow open wazuh-monitoring-3.x-2018.10.02 1jPOXU5zSpaqwlvxxKXXkA 5 1    5331     0      3mb      3mb
green  open .monitoring-kibana-6-2018.10.12 YQrNM7_zRtaerTgYQXzO5A 1 0    8639     0      2mb      2mb
yellow open wazuh-monitoring-3.x-2018.10.06 BYysen5ARLyZW0P15yaYMw 5 1    6000     0    3.3mb    3.3mb
green  open .monitoring-kibana-6-2018.10.08 SoOf-gW9T0i0lLAbk3mPYw 1 0    8640     0      2mb      2mb
yellow open wazuh-alerts-3.x-2018.09.16     uwLNlaQ1Qnyp2V9jXJJHvA 5 1  171478     0   46.5mb   46.5mb
yellow open wazuh-monitoring-3.x-2018.09.10 4mFQ0vCQQxqDDHrOPiGEcw 5 1      72     0  464.6kb  464.6kb
yellow open wazuh-monitoring-3.x-2018.09.28 tJIdoMOaQ_ebD4GyPXKMMw 5 1    3048     0    1.9mb    1.9mb
yellow open wazuh-alerts-3.x-2018.09.30     6mMUxi3MSeqrXGBSW3qRlA 5 1  224733     0   74.8mb   74.8mb
yellow open wazuh-alerts-3.x-2018.09.10     7geBbQ-FS22Ctasg0aRkug 5 1     202     0  917.5kb  917.5kb
green  open .monitoring-kibana-6-2018.10.11 qCkVHe-kTUWWKROtB5wKqQ 1 0    8640     0      2mb      2mb

Cheers

Miguel Ruiz

unread,
Oct 15, 2018, 9:46:59 AM10/15/18
to Wazuh mailing list
Hello again Nicholai,

If alerts are growing in /var/ossec/logs/alerts/alerts.json, then I'm sure the problem is with the ELK stack.

I will try to answer all of your questions:

When I run

tail -n0 -f /var/ossec/logs/alerts/alerts.json | grep -i Ubuntu

nothing comes up for any of my Linux machines.

If I run

tail -n0 -f /var/ossec/logs/alerts/alerts.json | grep -i windows

it shows logs.

The command tail -f shows additions to the alerts file in real time.
I like to use the -n0 option so it doesn't show any content that already exists in the file.

Using grep -i windows probably matches the content of some logs, the name of the "windows" decoder, or the group called "windows", as in this sample you sent:

{"timestamp":"2018-10-12T22:07:21.910+0100","rule":{"level":3,"description":"Windows Logon Success.","id":"18107","firedtimes":3087,"mail":false,"groups":["windows","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.1","7.2"],"gdpr":["IV_32.2"]},"agent":{"id":"239","name":"dgsdprddom01.mcs.local","ip":"10.79.240.10"},"manager":{"name":"dgsdprdwaz01"},"id":"1539378441.3332629746","full_log":"2018 Oct 12 21:07:20 WinEvtLog: Security: AUDIT_SUCCESS(4624): Microsoft-Windows-Security-Auditing: (no user): no domain: dgsdprddom01.mcs.local: An account was successfully logged on.    Subject:   Security ID:  S-1-0-0   Account Name:  -   Account Domain:  -   Logon ID:  0x0    Logon Information:   Logon Type:  3   Restricted Admin Mode: -   Virtual Account:  No   Elevated Token:  Yes    Impersonation Level:  Delegation    New Logon:   Security ID:  S-1-5-21-2728124869-2810918716-3645054289-1959   Account Name:  DGSDWKS01$   Account Domain:  MCS.LOCAL   Logon ID:  0x9E6C951E   Linked Logon ID:  0x0   Network Account Name: -   Network Account Domain: -   Logon GUID:  {06675D85-89E3-33C1-733D-BB470C2DC761}    Process Information:   Process ID:  0x0   Process Name:  -    Network Information:   Workstation Name: -   Source Network Address: 10.79.248.238   Source Port:  51825    Detailed Authentication Information:   Logon Process:  Kerberos   Authentication Package: Kerberos   Transited Services: -   Package Name (NTLM only): -   Key Length:  0    This event is generated when a logon session is created. It is generated on the computer that was accessed.    The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.    The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).    The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.    The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.    The impersonation level field indicates the extent to which a process in the logon session can impersonate.    The authentication information fields provide detailed information about this specific logon request.   - Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.   - Transited services indicate which intermediate services have participated in this logon request.   - Package name indicates which sub-protocol was used among the NTLM protocols.   - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.","predecoder":{"program_name":"WinEvtLog","timestamp":"2018 Oct 12 21:07:20"},"decoder":{"parent":"windows","name":"windows"},"data":{"srcip":"10.79.248.238","dstuser":"(no user)","id":"4624","status":"AUDIT_SUCCESS","data":"Microsoft-Windows-Security-Auditing","system_name":"dgsdprddom01.mcs.local","type":"Security","security_id":"S-1-0-0","account_name":"DGSDWKS01$","account_domain":"MCS.LOCAL","logon_type":"3"},"location":"WinEvtLog"}

That doesn't happen when using grep -i Ubuntu, because the log doesn't contain the string Ubuntu, but that doesn't mean your Ubuntu agents are not reporting alerts.
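If you want to watch alerts for a specific agent regardless of the text inside the log, filtering on the agent.name field is more reliable than grep. A minimal sketch, assuming jq is installed on the manager and using a placeholder agent name:

tail -n0 -f /var/ossec/logs/alerts/alerts.json | jq -c 'select(.agent.name == "my-ubuntu-agent")'

Replace "my-ubuntu-agent" with the name of one of your Ubuntu agents; any alert it generates from now on will be printed.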

The other thing I found is that I have over 257 clients, yet when I run curl to check the indices I don't see 257 indices since the upgrade.

Wazuh generates one index for every day, with the name wazuh-alerts-3.x-{year}.{month}.{day}.
That index contains all the alerts generated from every agent that day.

You have an index for the day Oct 13th:
yellow open wazuh-alerts-3.x-2018.10.13     wM5hHYMCQsG5XCkIquE-QA 5 1  303773     0  221.7mb  221.7mb

That means alerts from that day are being indexed.
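You can also confirm from the command line that alerts from your Linux agents made it into that index. A quick sketch, assuming Elasticsearch is listening on localhost:9200 and using a placeholder agent name:

curl 'localhost:9200/wazuh-alerts-3.x-2018.10.13/_count?pretty'
curl -H 'Content-Type: application/json' 'localhost:9200/wazuh-alerts-3.x-2018.10.13/_search?size=1&pretty' -d '{"query":{"match":{"agent.name":"my-ubuntu-agent"}}}'

The first command returns the total number of documents in that day's index; the second returns one sample alert for the given agent, if any exist.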

Can you go to Kibana and check if you can see the alerts from that day and previous?
In your Kibana UI, go to the Discover tab and change the time range in the upper right corner.

I don't see you have any indices for the days 14 and 15, so probably Filebeat/Logstash stopped working again.

You still have some blocked indices with read_only
[2018-10-12T22:11:12,903][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})

Check that you have enough disk space on your ELK instance, and then execute:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

This will unlock the locked indices.
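To verify, this quick check (assuming Elasticsearch is on localhost:9200) shows which index settings still carry the block:

curl -s 'http://localhost:9200/_all/_settings?pretty' | grep read_only_allow_delete

If it prints nothing, no index has the read-only block set anymore; add some -B context lines to grep if you also want to see which indices are affected.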

After that, restart Logstash and Filebeat.

If the problem persists after unlocking the indices, you can start Filebeat in debug mode and store its logs by modifying /etc/filebeat/filebeat.yml and adding this:

logging:
  level: debug
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat.log
    keepfiles: 7

With this configuration, Filebeat will store debug logs under /var/log/filebeat/.
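After saving that change, restart Filebeat and follow the new log file (the file name comes from the configuration above and may get a numeric suffix on rotation):

systemctl restart filebeat
tail -f /var/log/filebeat/filebeat.log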

Let me know if this works. If Filebeat keeps stopping after this, it would be helpful to have the new Logstash and Elasticsearch logs so we can keep helping you with this issue.

Best regards,
Miguel R.



Nicholai Tailor

unread,
Oct 15, 2018, 10:23:49 AM10/15/18
to migue...@wazuh.com, wa...@googlegroups.com
I have run what you said; nothing new.

Also, my ELK stack is also my Wazuh manager. I have an all-in-one setup.

Should I not remove Filebeat if it's set up this way?


Nicholai Tailor

unread,
Oct 15, 2018, 10:31:43 AM10/15/18
to migue...@wazuh.com, wa...@googlegroups.com
[root@waz01 filebeat]# systemctl restart filebeat
[root@waz01 filebeat]# tail filebeat
2018-10-15T15:26:59.079+0100 INFO crawler/crawler.go:149 Stopping 1 inputs
2018-10-15T15:26:59.079+0100 INFO input/input.go:149 input ticker stopped
2018-10-15T15:26:59.079+0100 INFO input/input.go:167 Stopping Input: 16612411745408489738
2018-10-15T15:26:59.079+0100 INFO crawler/crawler.go:165 Crawler stopped
2018-10-15T15:26:59.079+0100 INFO registrar/registrar.go:356 Stopping Registrar
2018-10-15T15:26:59.079+0100 INFO registrar/registrar.go:282 Ending Registrar
2018-10-15T15:26:59.091+0100 INFO [monitoring] log/log.go:149 Total non-zero metrics {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":210,"time":{"ms":214}},"total":{"ticks":1090,"time":{"ms":1098},"value":1090},"user":{"ticks":880,"time":{"ms":884}}},"info":{"ephemeral_id":"c028719e-5952-4232-9613-8d10cdb465db","uptime":{"ms":827860}},"memstats":{"gc_next":53559024,"memory_alloc":27376192,"memory_total":108540800,"rss":51535872}},"filebeat":{"events":{"active":4116,"added":4119,"done":3},"harvester":{"closed":1,"open_files":0,"running":0,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":0,"events":{"active":4116,"failed":1,"filtered":2,"published":4116,"retry":34816,"total":4119}}},"registrar":{"states":{"current":1,"update":2},"writes":{"success":3,"total":3}},"system":{"cpu":{"cores":8},"load":{"1":4.02,"15":3.59,"5":4.24,"norm":{"1":0.5025,"15":0.4488,"5":0.53}}}}}}
2018-10-15T15:26:59.091+0100 INFO [monitoring] log/log.go:150 Uptime: 13m47.86018875s
2018-10-15T15:26:59.091+0100 INFO [monitoring] log/log.go:127 Stopping metrics logging.
2018-10-15T15:26:59.091+0100 INFO instance/beat.go:373 filebeat stopped.

Here is the log as well.

jesus.g...@wazuh.com

unread,
Oct 15, 2018, 10:50:15 AM10/15/18
to Wazuh mailing list
Hi Nicholai,

As you said in your other thread, you may be running into an Elasticsearch block due to disk usage.

Removing Filebeat, setting up Logstash

If you are using a single-host architecture, let's remove Filebeat for performance reasons:

1. Stop affected services:

# systemctl stop logstash
# systemctl stop filebeat

2. Remove Filebeat

# yum remove filebeat


3. Setting up Logstash

# curl -so /etc/logstash/conf.d/01-wazuh.conf https://raw.githubusercontent.com/wazuh/wazuh/3.6/extensions/logstash/01-wazuh-local.conf
# usermod -a -G ossec logstash

4. Restart Logstash

# systemctl restart logstash

5. Please, copy and paste this command (it differs from your curl in the other thread):

curl -XPUT 'http://localhost:9200/_settings' -H 'Content-Type: application/json' -d' { "index": { "blocks": { "read_only_allow_delete": "false" } } } '

6. Now check again your Logstash log file:

# date    # for debug purposes, it would be nice to know your instance date so we can match the log timestamps properly
# cat /var/log/logstash/logstash-plain.log | grep -i -E "(error|warning|critical)"
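As an extra sanity check (a minimal sketch, assuming the default paths), confirm that the logstash user picked up the ossec group from step 3 and can read the alerts file:

# id logstash
# ls -l /var/ossec/logs/alerts/alerts.json

The id output should include the ossec group, and the group permissions on alerts.json should include read.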


Disk usage and Elasticsearch

Elasticsearch has a disk watermark to prevent the disk from becoming unusable.
You said /var/ossec is on a different partition; that's okay, but Elasticsearch stores its indices somewhere else. For example,
on a CentOS 7 instance I've just created, they are stored in /usr/share/elasticsearch/data.

# ls /usr/share/elasticsearch/data/nodes/0
_state  indices  node.lock

Please ensure the Elasticsearch partition (if it is a separate one) has enough space.
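You can also ask Elasticsearch itself how much disk it sees on each node (assuming it is listening on localhost:9200):

# curl 'http://localhost:9200/_cat/allocation?v'

The disk.avail and disk.percent columns show what Elasticsearch is working with; the read-only block is applied automatically once the flood-stage watermark (95% used by default in 6.x) is reached.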

I hope it helps.

Best regards,
Jesús

Nicholai Tailor

unread,
Oct 15, 2018, 11:52:24 AM10/15/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

Okay, I completed all those steps successfully

Here is the log 

cat /var/log/logstash/logstash-plain.log | grep -i -E "(error|warning|critical)"  

[2018-10-15T16:45:57,513][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2018-10-15T16:46:13,205][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/var/lib/logstash/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:447:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:229:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:140:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:139:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
[2018-10-15T16:46:13,220][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2018-10-15T16:46:29,205][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/var/lib/logstash/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:447:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:229:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:140:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:139:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
[2018-10-15T16:46:29,220][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2018-10-15T16:46:45,235][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/var/lib/logstash/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:447:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:229:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:140:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:139:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
[2018-10-15T16:46:45,249][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2018-10-15T16:47:01,095][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/var/lib/logstash/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:447:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:229:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:140:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:139:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
[2018-10-15T16:47:01,110][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2018-10-15T16:48:09,041][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/var/lib/logstash/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:447:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:229:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:140:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:139:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
[2018-10-15T16:48:09,055][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2018-10-15T16:48:24,603][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/var/lib/logstash/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:447:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:229:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:140:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:139:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", 

[root@dgsdprdwaz01 filebeat]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/vg_local-root     4.9G  109M  4.8G   3% /
devtmpfs                      3.9G     0  3.9G   0% /dev
tmpfs                         3.9G     0  3.9G   0% /dev/shm
tmpfs                         3.9G  384M  3.5G  10% /run
tmpfs                         3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/vg_local-usr      9.9G  2.6G  7.4G  27% /usr
/dev/sda1                     997M  186M  812M  19% /boot
/dev/mapper/vg_local-tmp      2.0G   34M  2.0G   2% /tmp
/dev/mapper/vg_local-home     9.8G   33M  9.8G   1% /home
/dev/mapper/vg_local-var       30G  9.0G   21G  31% /var
/dev/mapper/wazuh-ossec       100G  8.7G   92G   9% /var/ossec
/dev/mapper/vg_local-var_log  9.8G  1.6G  8.2G  17% /var/log
tmpfs                         783M     0  783M   0% /run/user/297800546

Cheers


jesus.g...@wazuh.com

unread,
Oct 15, 2018, 12:14:27 PM10/15/18
to Wazuh mailing list
Hello again Nicholai,

The next step is to correct the folder owner for certain Logstash directories:

# chown -R logstash:logstash /usr/share/logstash
# chown -R logstash:logstash /var/lib/logstash
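If you want to double-check before restarting (a quick sketch, assuming the default /var/lib/logstash path):

# ls -ld /var/lib/logstash /var/lib/logstash/queue
# sudo -u logstash touch /var/lib/logstash/queue/.write_test && rm /var/lib/logstash/queue/.write_test && echo writable

Both directories should now be owned by logstash:logstash, and the touch test should print "writable".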

Now restart Logstash:

# systemctl restart logstash

Look at the Logstash log one more time:

# cat /var/log/logstash/logstash-plain.log | grep -i -E "(error|warning|critical)"

Thanks for your patience, Nicholai; we are almost done.

Best regards,
Jesús

Nicholai Tailor

unread,
Oct 15, 2018, 3:08:32 PM10/15/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

Here is the new log

Thank you so much for your help

cat /var/log/logstash/logstash-plain.log | grep -i -E "(error|warning|critical)" 

 

tstrap/environment.rb:73:in `<main>'"]}
[2018-10-15T20:04:02,566][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2018-10-15T20:04:18,540][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/var/lib/logstash/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:447:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:229:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:140:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:139:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
[2018-10-15T20:04:18,554][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2018-10-15T20:04:34,568][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/var/lib/logstash/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:447:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:229:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:140:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:139:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
[2018-10-15T20:04:34,583][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[... the same FATAL ("Path "/var/lib/logstash/queue" must be a writable directory. It is not writable.") and ERROR ("Logstash stopped processing because of an error: (SystemExit) exit") pair repeats roughly every 16 seconds, up to 2018-10-15T20:06:58,359 ...]



jesus.g...@wazuh.com

unread,
Oct 15, 2018, 3:15:19 PM10/15/18
to Wazuh mailing list
Hi Nicholai,

It seems we are still having the permission problem. Let's increase the permissions:

# chmod -R 766 /usr/share/logstash

Restart Logstash:

# systemctl restart logstash


Check one more time if the error persists:

# cat /var/log/logstash/logstash-plain.log | grep -i -E "(error|warning|critical)"

I hope it helps.

Kind regards,
Jesús

Jesús Ángel González

unread,
Oct 15, 2018, 3:24:00 PM10/15/18
to Wazuh mailing list
Nicholai,

I forgot to say that you must increase permissions for this folder too:

# chmod -R 766 /var/lib/logstash

I hope it helps!

Best regards

--
Best regards,
Jesús.

Nicholai Tailor

unread,
Oct 15, 2018, 6:47:40 PM10/15/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

I ran the chmod commands and restarted logstash.

Here is the log again. No change....

[2018-10-15T20:06:10,967][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2018-10-15T20:06:26,863][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/var/lib/logstash/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:447:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:229:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:140:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:139:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
[2018-10-15T20:06:26,878][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2018-10-15T20:06:42,543][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/var/lib/logstash/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:447:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:229:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:140:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:139:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
[2018-10-15T20:06:42,557][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2018-10-15T20:06:58,344][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/var/lib/logstash/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:447:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:229:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:140:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:139:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
[2018-10-15T20:06:58,359][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

Jesús Ángel González

unread,
Oct 16, 2018, 2:30:38 AM10/16/18
to Nicholai Tailor, wa...@googlegroups.com
Hi Nicholai,

The dates (~2018-10-15T20:06:58,359) in those logs are similar to your previous logs; I think they are the same entries.

Please share the last 10 log lines without using grep:

# tail -10 /var/log/logstash/logstash-plain.log

Also, check the Logstash service:

# systemctl status logstash -l

I think the problem is already solved and we are just looking at old logs.
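If the queue-path error does come back after the restart, a minimal permission fix (just a sketch, assuming the default Logstash data directory and the logstash service user) would be:

# chown -R logstash:logstash /var/lib/logstash
# systemctl restart logstash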

Regards,
Jesús 
--
Best regards,
Jesús.

jesus.g...@wazuh.com

unread,
Oct 16, 2018, 10:46:21 AM10/16/18
to Wazuh mailing list
Hi Nicholai,

I think you replied to me in a private message by mistake. In any case, as far as I can see in your logs, Logstash is fine now.
The only warning is a common, known message about type deprecation (types will be removed in Elastic 7.x), so you can safely ignore it.
From my point of view, your Logstash is now working properly.

Let us know if you need any more help setting up your instances.

Best regards,
Jesús

Nicholai Tailor

unread,
Oct 16, 2018, 1:04:36 PM10/16/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

I see. Well, there is still something very wrong then...

image.png

This is what I see for all of my agents that are active.

Thank you kindly for all your help.

Any other ideas?




jesus.g...@wazuh.com

unread,
Oct 16, 2018, 2:00:23 PM10/16/18
to Wazuh mailing list
Hello again Nicholai,

So your Logstash is now working properly, but apparently there are no alerts in Elasticsearch. Let me briefly describe how the data flows
from the agent up to Elasticsearch:

- The agent sends events to the manager.
- The manager analyzes those events; when a rule matches an event, it generates one or more alerts in /var/ossec/logs/alerts/alerts.json.
- Logstash should be reading /var/ossec/logs/alerts/alerts.json and forwarding the alerts to Elasticsearch.
- Kibana has a time filter in the top right corner (e.g. last 15 minutes) and looks for alerts in that time range in the Elasticsearch indices.

Now that we have all clear, let's check component by component:

1. Check the last 10 alerts generated by your Wazuh manager. Also pay attention to the timestamp field; we need to keep an eye on it.

tail -10 /var/ossec/logs/alerts/alerts.json

2. If the Wazuh manager is generating alerts (step 1), let's check whether Logstash is reading them. You should see two processes: java for Logstash and ossec-ana from Wazuh.

# lsof /var/ossec/logs/alerts/alerts.json

3. If Logstash is reading the alerts, let's check whether there is an Elasticsearch index for today (wazuh-alerts-3.x-2018.10.16):

curl localhost:9200/_cat/indices/wazuh-alerts-3.x-*

4. If Elasticsearch has an index for today (wazuh-alerts-3.x-2018.10.16), the problem is probably the selected time range in Kibana. To rule that out, please go to Kibana > Discover and look for
alerts in that section of Kibana itself. If there are alerts from today in the Discover section, the problem is narrowed down and we can continue debugging in a more specific way.

If any of the above steps fails, stop there and let us know, Nicholai.
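If it helps, the checks above can also be run in one go from the manager host (a sketch; it assumes the default paths used in this thread and builds today's index name from the system date):

# tail -10 /var/ossec/logs/alerts/alerts.json
# lsof /var/ossec/logs/alerts/alerts.json
# curl "localhost:9200/_cat/indices/wazuh-alerts-3.x-*"
# curl "localhost:9200/wazuh-alerts-3.x-$(date +%Y.%m.%d)/_count?pretty"

The last command simply counts today's documents in the daily index; a non-zero count means alerts are reaching Elasticsearch.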

Kind regards,
Jesús

Nicholai Tailor

unread,
Oct 17, 2018, 2:08:49 AM10/17/18
to jesus.g...@wazuh.com, wa...@googlegroups.com

All the registered machines should be showing alerts every 15 minutes, or at least every hour, since they all have a lot of activity.

This is not the case. We see nothing in Kibana.


[root@waz01 ~]# tail -10 /var/ossec/logs/alerts/alerts.json
{"timestamp":"2018-10-17T07:04:50.608+0100","rule":{"level":3,"description":"Windows Logon Success.","id":"18107","firedtimes":3037,"mail":false,"groups":["windows","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.1","7.2"],"gdpr":["IV_32.2"]},"agent":{"id":"240","name":"dgsdprddom02","ip":"10.79.240.11"},"manager":{"name":"dgsdprdwaz01"},"id":"1539756290.1048666203","full_log":"2018 Oct 17 06:04:49 WinEvtLog: Security: AUDIT_SUCCESS(4624): Microsoft-Windows-Security-Auditing: (no user): no domain: dgsdprddom02.mcs.local: An account was successfully logged on.    Subject:   Security ID:  S-1-0-0   Account Name:  -   Account Domain:  -   Logon ID:  0x0    Logon Information:   Logon Type:  3   Restricted Admin Mode: -   Virtual Account:  No   Elevated Token:  Yes    Impersonation Level:  Delegation    New Logon:   Security ID:  S-1-5-21-2728124869-2810918716-3645054289-4557   Account Name:  DGSDTST03$   Account Domain:  MCS.LOCAL   Logon ID:  0xBE53B215   Linked Logon ID:  0x0   Network Account Name: -   Network Account Domain: -   Logon GUID:  {68F21EA8-31B0-C3FD-E323-8CE8790D6974}    Process Information:   Process ID:  0x0   Process Name:  -    Network Information:   Workstation Name: -   Source Network Address: 10.79.244.43   Source Port:  57692    Detailed Authentication Information:   Logon Process:  Kerberos   Authentication Package: Kerberos   Transited Services: -   Package Name (NTLM only): -   Key Length:  0    This event is generated when a logon session is created. It is generated on the computer that was accessed.    The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.    The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).    The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.    The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.    The impersonation level field indicates the extent to which a process in the logon session can impersonate.    The authentication information fields provide detailed information about this specific logon request.   - Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.   - Transited services indicate which intermediate services have participated in this logon request.   - Package name indicates which sub-protocol was used among the NTLM protocols.   - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.","predecoder":{"program_name":"WinEvtLog","timestamp":"2018 Oct 17 06:04:49"},"decoder":{"parent":"windows","name":"windows"},"data":{"srcip":"10.79.244.43","dstuser":"(no user)","id":"4624","status":"AUDIT_SUCCESS","data":"Microsoft-Windows-Security-Auditing","system_name":"dgsdprddom02.mcs.local","type":"Security","security_id":"S-1-0-0","account_name":"DGSDTST03$","account_domain":"MCS.LOCAL","logon_type":"3"},"location":"WinEvtLog"}
{"timestamp":"2018-10-17T07:04:50.624+0100","rule":{"level":3,"description":"Windows Logon Success.","id":"18107","firedtimes":3038,"mail":false,"groups":["windows","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.1","7.2"],"gdpr":["IV_32.2"]},"agent":{"id":"240","name":"dgsdprddom02","ip":"10.79.240.11"},"manager":{"name":"dgsdprdwaz01"},"id":"1539756290.1048668945","full_log":"2018 Oct 17 06:04:49 WinEvtLog: Security: AUDIT_SUCCESS(4624): Microsoft-Windows-Security-Auditing: (no user): no domain: dgsdprddom02.mcs.local: An account was successfully logged on.    Subject:   Security ID:  S-1-0-0   Account Name:  -   Account Domain:  -   Logon ID:  0x0    Logon Information:   Logon Type:  3   Restricted Admin Mode: -   Virtual Account:  No   Elevated Token:  Yes    Impersonation Level:  Delegation    New Logon:   Security ID:  S-1-5-21-2728124869-2810918716-3645054289-4162   Account Name:  DCGCJMP02$   Account Domain:  MCS.LOCAL   Logon ID:  0xBE53B26D   Linked Logon ID:  0x0   Network Account Name: -   Network Account Domain: -   Logon GUID:  {CBAA97BE-4230-E803-EDC5-36D75AE84113}    Process Information:   Process ID:  0x0   Process Name:  -    Network Information:   Workstation Name: -   Source Network Address: 10.79.249.24   Source Port:  59190    Detailed Authentication Information:   Logon Process:  Kerberos   Authentication Package: Kerberos   Transited Services: -   Package Name (NTLM only): -   Key Length:  0    This event is generated when a logon session is created. It is generated on the computer that was accessed.    The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.    The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).    The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.    The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.    The impersonation level field indicates the extent to which a process in the logon session can impersonate.    The authentication information fields provide detailed information about this specific logon request.   - Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.   - Transited services indicate which intermediate services have participated in this logon request.   - Package name indicates which sub-protocol was used among the NTLM protocols.   - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.","predecoder":{"program_name":"WinEvtLog","timestamp":"2018 Oct 17 06:04:49"},"decoder":{"parent":"windows","name":"windows"},"data":{"srcip":"10.79.249.24","dstuser":"(no user)","id":"4624","status":"AUDIT_SUCCESS","data":"Microsoft-Windows-Security-Auditing","system_name":"dgsdprddom02.mcs.local","type":"Security","security_id":"S-1-0-0","account_name":"DCGCJMP02$","account_domain":"MCS.LOCAL","logon_type":"3"},"location":"WinEvtLog"}
{"timestamp":"2018-10-17T07:04:50.692+0100","rule":{"level":3,"description":"Windows Logon Success.","id":"18107","firedtimes":3039,"mail":false,"groups":["windows","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.1","7.2"],"gdpr":["IV_32.2"]},"agent":{"id":"239","name":"dgsdprddom01.mcs.local","ip":"10.79.240.10"},"manager":{"name":"dgsdprdwaz01"},"id":"1539756290.1048671687","full_log":"2018 Oct 17 06:04:49 WinEvtLog: Security: AUDIT_SUCCESS(4624): Microsoft-Windows-Security-Auditing: (no user): no domain: dgsdprddom01.mcs.local: An account was successfully logged on.    Subject:   Security ID:  S-1-5-18   Account Name:  DGSDPRDDOM01$   Account Domain:  MCS   Logon ID:  0x3E7    Logon Information:   Logon Type:  3   Restricted Admin Mode: -   Virtual Account:  No   Elevated Token:  Yes    Impersonation Level:  Identification    New Logon:   Security ID:  S-1-5-21-2728124869-2810918716-3645054289-2015   Account Name:  etisalat   Account Domain:  MCS   Logon ID:  0xE596C8E2   Linked Logon ID:  0x0   Network Account Name: -   Network Account Domain: -   Logon GUID:  {2E828245-0E7B-F765-EE8C-08E5F6820252}    Process Information:   Process ID:  0x294   Process Name:  C:\\Windows\\System32\\lsass.exe    Network Information:   Workstation Name: DGSDPRDDOM01   Source Network Address: -   Source Port:  -    Detailed Authentication Information:   Logon Process:  Authz      Authentication Package: Kerberos   Transited Services: -   Package Name (NTLM only): -   Key Length:  0    This event is generated when a logon session is created. It is generated on the computer that was accessed.    The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.    The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).    The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.    The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.    The impersonation level field indicates the extent to which a process in the logon session can impersonate.    The authentication information fields provide detailed information about this specific logon request.   - Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.   - Transited services indicate which intermediate services have participated in this logon request.   - Package name indicates which sub-protocol was used among the NTLM protocols.   - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.","predecoder":{"program_name":"WinEvtLog","timestamp":"2018 Oct 17 06:04:49"},"decoder":{"parent":"windows","name":"windows"},"data":{"dstuser":"(no user)","id":"4624","status":"AUDIT_SUCCESS","data":"Microsoft-Windows-Security-Auditing","system_name":"dgsdprddom01.mcs.local","type":"Security","subject":{"security_id":"S-1-5-18","account_name":"DGSDPRDDOM01$","account_domain":"MCS","logon_id":"0x3E7"},"security_id":"S-1-5-21-2728124869-2810918716-3645054289-2015","account_name":"etisalat","account_domain":"MCS","logon_type":"3"},"location":"WinEvtLog"}
{"timestamp":"2018-10-17T07:04:50.708+0100","rule":{"level":3,"description":"Windows User Logoff.","id":"18149","firedtimes":2984,"mail":false,"groups":["windows"],"pci_dss":["10.2.5"],"gdpr":["IV_32.2"]},"agent":{"id":"239","name":"dgsdprddom01.mcs.local","ip":"10.79.240.10"},"manager":{"name":"dgsdprdwaz01"},"id":"1539756290.1048674606","full_log":"2018 Oct 17 06:04:49 WinEvtLog: Security: AUDIT_SUCCESS(4634): Microsoft-Windows-Security-Auditing: (no user): no domain: dgsdprddom01.mcs.local: An account was logged off.    Subject:   Security ID:  S-1-5-21-2728124869-2810918716-3645054289-2015   Account Name:  etisalat   Account Domain:  MCS   Logon ID:  0xE596C8E2    Logon Type:   3    This event is generated when a logon session is destroyed. It may be positively correlated with a logon event using the Logon ID value. Logon IDs are only unique between reboots on the same computer.","predecoder":{"program_name":"WinEvtLog","timestamp":"2018 Oct 17 06:04:49"},"decoder":{"parent":"windows","name":"windows"},"data":{"dstuser":"(no user)","id":"4634","status":"AUDIT_SUCCESS","data":"Microsoft-Windows-Security-Auditing","system_name":"dgsdprddom01.mcs.local","type":"Security","subject":{"security_id":"S-1-5-21-2728124869-2810918716-3645054289-2015","account_name":"etisalat","account_domain":"MCS","logon_id":"0xE596C8E2"},"logon_type":"3"},"location":"WinEvtLog"}
{"timestamp":"2018-10-17T07:04:50.755+0100","rule":{"level":3,"description":"Windows Logon Success.","id":"18107","firedtimes":3040,"mail":false,"groups":["windows","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.1","7.2"],"gdpr":["IV_32.2"]},"agent":{"id":"239","name":"dgsdprddom01.mcs.local","ip":"10.79.240.10"},"manager":{"name":"dgsdprdwaz01"},"id":"1539756290.1048675540","full_log":"2018 Oct 17 06:04:49 WinEvtLog: Security: AUDIT_SUCCESS(4624): Microsoft-Windows-Security-Auditing: (no user): no domain: dgsdprddom01.mcs.local: An account was successfully logged on.    Subject:   Security ID:  S-1-5-18   Account Name:  DGSDPRDDOM01$   Account Domain:  MCS   Logon ID:  0x3E7    Logon Information:   Logon Type:  3   Restricted Admin Mode: -   Virtual Account:  No   Elevated Token:  Yes    Impersonation Level:  Identification    New Logon:   Security ID:  S-1-5-21-2728124869-2810918716-3645054289-2015   Account Name:  etisalat   Account Domain:  MCS   Logon ID:  0xE596C925   Linked Logon ID:  0x0   Network Account Name: -   Network Account Domain: -   Logon GUID:  {2E828245-0E7B-F765-EE8C-08E5F6820252}    Process Information:   Process ID:  0x294   Process Name:  C:\\Windows\\System32\\lsass.exe    Network Information:   Workstation Name: DGSDPRDDOM01   Source Network Address: -   Source Port:  -    Detailed Authentication Information:   Logon Process:  Authz      Authentication Package: Kerberos   Transited Services: -   Package Name (NTLM only): -   Key Length:  0    This event is generated when a logon session is created. It is generated on the computer that was accessed.    The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.    The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).    The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.    The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.    The impersonation level field indicates the extent to which a process in the logon session can impersonate.    The authentication information fields provide detailed information about this specific logon request.   - Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.   - Transited services indicate which intermediate services have participated in this logon request.   - Package name indicates which sub-protocol was used among the NTLM protocols.   - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.","predecoder":{"program_name":"WinEvtLog","timestamp":"2018 Oct 17 06:04:49"},"decoder":{"parent":"windows","name":"windows"},"data":{"dstuser":"(no user)","id":"4624","status":"AUDIT_SUCCESS","data":"Microsoft-Windows-Security-Auditing","system_name":"dgsdprddom01.mcs.local","type":"Security","subject":{"security_id":"S-1-5-18","account_name":"DGSDPRDDOM01$","account_domain":"MCS","logon_id":"0x3E7"},"security_id":"S-1-5-21-2728124869-2810918716-3645054289-2015","account_name":"etisalat","account_domain":"MCS","logon_type":"3"},"location":"WinEvtLog"}
{"timestamp":"2018-10-17T07:04:50.770+0100","rule":{"level":3,"description":"Windows User Logoff.","id":"18149","firedtimes":2985,"mail":false,"groups":["windows"],"pci_dss":["10.2.5"],"gdpr":["IV_32.2"]},"agent":{"id":"239","name":"dgsdprddom01.mcs.local","ip":"10.79.240.10"},"manager":{"name":"dgsdprdwaz01"},"id":"1539756290.1048678459","full_log":"2018 Oct 17 06:04:49 WinEvtLog: Security: AUDIT_SUCCESS(4634): Microsoft-Windows-Security-Auditing: (no user): no domain: dgsdprddom01.mcs.local: An account was logged off.    Subject:   Security ID:  S-1-5-21-2728124869-2810918716-3645054289-2015   Account Name:  etisalat   Account Domain:  MCS   Logon ID:  0xE596C925    Logon Type:   3    This event is generated when a logon session is destroyed. It may be positively correlated with a logon event using the Logon ID value. Logon IDs are only unique between reboots on the same computer.","predecoder":{"program_name":"WinEvtLog","timestamp":"2018 Oct 17 06:04:49"},"decoder":{"parent":"windows","name":"windows"},"data":{"dstuser":"(no user)","id":"4634","status":"AUDIT_SUCCESS","data":"Microsoft-Windows-Security-Auditing","system_name":"dgsdprddom01.mcs.local","type":"Security","subject":{"security_id":"S-1-5-21-2728124869-2810918716-3645054289-2015","account_name":"etisalat","account_domain":"MCS","logon_id":"0xE596C925"},"logon_type":"3"},"location":"WinEvtLog"}
{"timestamp":"2018-10-17T07:04:50.817+0100","rule":{"level":3,"description":"Windows Logon Success.","id":"18107","firedtimes":3041,"mail":false,"groups":["windows","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.1","7.2"],"gdpr":["IV_32.2"]},"agent":{"id":"239","name":"dgsdprddom01.mcs.local","ip":"10.79.240.10"},"manager":{"name":"dgsdprdwaz01"},"id":"1539756290.1048679393","full_log":"2018 Oct 17 06:04:49 WinEvtLog: Security: AUDIT_SUCCESS(4624): Microsoft-Windows-Security-Auditing: (no user): no domain: dgsdprddom01.mcs.local: An account was successfully logged on.    Subject:   Security ID:  S-1-5-18   Account Name:  DGSDPRDDOM01$   Account Domain:  MCS   Logon ID:  0x3E7    Logon Information:   Logon Type:  3   Restricted Admin Mode: -   Virtual Account:  No   Elevated Token:  Yes    Impersonation Level:  Identification    New Logon:   Security ID:  S-1-5-21-2728124869-2810918716-3645054289-1182   Account Name:  iag   Account Domain:  MCS   Logon ID:  0xE596CA1B   Linked Logon ID:  0x0   Network Account Name: -   Network Account Domain: -   Logon GUID:  {B82540C7-4693-7F77-06C6-D342E6C6DFD2}    Process Information:   Process ID:  0x294   Process Name:  C:\\Windows\\System32\\lsass.exe    Network Information:   Workstation Name: DGSDPRDDOM01   Source Network Address: -   Source Port:  -    Detailed Authentication Information:   Logon Process:  Authz      Authentication Package: Kerberos   Transited Services: -   Package Name (NTLM only): -   Key Length:  0    This event is generated when a logon session is created. It is generated on the computer that was accessed.    The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.    The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).    The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.    The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.    The impersonation level field indicates the extent to which a process in the logon session can impersonate.    The authentication information fields provide detailed information about this specific logon request.   - Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.   - Transited services indicate which intermediate services have participated in this logon request.   - Package name indicates which sub-protocol was used among the NTLM protocols.   - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.","predecoder":{"program_name":"WinEvtLog","timestamp":"2018 Oct 17 06:04:49"},"decoder":{"parent":"windows","name":"windows"},"data":{"dstuser":"(no user)","id":"4624","status":"AUDIT_SUCCESS","data":"Microsoft-Windows-Security-Auditing","system_name":"dgsdprddom01.mcs.local","type":"Security","subject":{"security_id":"S-1-5-18","account_name":"DGSDPRDDOM01$","account_domain":"MCS","logon_id":"0x3E7"},"security_id":"S-1-5-21-2728124869-2810918716-3645054289-1182","account_name":"iag","account_domain":"MCS","logon_type":"3"},"location":"WinEvtLog"}
{"timestamp":"2018-10-17T07:04:50.833+0100","rule":{"level":3,"description":"Windows User Logoff.","id":"18149","firedtimes":2986,"mail":false,"groups":["windows"],"pci_dss":["10.2.5"],"gdpr":["IV_32.2"]},"agent":{"id":"239","name":"dgsdprddom01.mcs.local","ip":"10.79.240.10"},"manager":{"name":"dgsdprdwaz01"},"id":"1539756290.1048682302","full_log":"2018 Oct 17 06:04:49 WinEvtLog: Security: AUDIT_SUCCESS(4634): Microsoft-Windows-Security-Auditing: (no user): no domain: dgsdprddom01.mcs.local: An account was logged off.    Subject:   Security ID:  S-1-5-21-2728124869-2810918716-3645054289-1182   Account Name:  iag   Account Domain:  MCS   Logon ID:  0xE596CA1B    Logon Type:   3    This event is generated when a logon session is destroyed. It may be positively correlated with a logon event using the Logon ID value. Logon IDs are only unique between reboots on the same computer.","predecoder":{"program_name":"WinEvtLog","timestamp":"2018 Oct 17 06:04:49"},"decoder":{"parent":"windows","name":"windows"},"data":{"dstuser":"(no user)","id":"4634","status":"AUDIT_SUCCESS","data":"Microsoft-Windows-Security-Auditing","system_name":"dgsdprddom01.mcs.local","type":"Security","subject":{"security_id":"S-1-5-21-2728124869-2810918716-3645054289-1182","account_name":"iag","account_domain":"MCS","logon_id":"0xE596CA1B"},"logon_type":"3"},"location":"WinEvtLog"}
{"timestamp":"2018-10-17T07:04:50.880+0100","rule":{"level":3,"description":"Windows Logon Success.","id":"18107","firedtimes":3042,"mail":false,"groups":["windows","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.1","7.2"],"gdpr":["IV_32.2"]},"agent":{"id":"239","name":"dgsdprddom01.mcs.local","ip":"10.79.240.10"},"manager":{"name":"dgsdprdwaz01"},"id":"1539756290.1048683226","full_log":"2018 Oct 17 06:04:49 WinEvtLog: Security: AUDIT_SUCCESS(4624): Microsoft-Windows-Security-Auditing: (no user): no domain: dgsdprddom01.mcs.local: An account was successfully logged on.    Subject:   Security ID:  S-1-5-18   Account Name:  DGSDPRDDOM01$   Account Domain:  MCS   Logon ID:  0x3E7    Logon Information:   Logon Type:  3   Restricted Admin Mode: -   Virtual Account:  No   Elevated Token:  Yes    Impersonation Level:  Identification    New Logon:   Security ID:  S-1-5-21-2728124869-2810918716-3645054289-1182   Account Name:  iag   Account Domain:  MCS   Logon ID:  0xE596CD59   Linked Logon ID:  0x0   Network Account Name: -   Network Account Domain: -   Logon GUID:  {B82540C7-4693-7F77-06C6-D342E6C6DFD2}    Process Information:   Process ID:  0x294   Process Name:  C:\\Windows\\System32\\lsass.exe    Network Information:   Workstation Name: DGSDPRDDOM01   Source Network Address: -   Source Port:  -    Detailed Authentication Information:   Logon Process:  Authz      Authentication Package: Kerberos   Transited Services: -   Package Name (NTLM only): -   Key Length:  0    This event is generated when a logon session is created. It is generated on the computer that was accessed.    The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.    The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).    The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.    The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.    The impersonation level field indicates the extent to which a process in the logon session can impersonate.    The authentication information fields provide detailed information about this specific logon request.   - Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.   - Transited services indicate which intermediate services have participated in this logon request.   - Package name indicates which sub-protocol was used among the NTLM protocols.   - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.","predecoder":{"program_name":"WinEvtLog","timestamp":"2018 Oct 17 06:04:49"},"decoder":{"parent":"windows","name":"windows"},"data":{"dstuser":"(no user)","id":"4624","status":"AUDIT_SUCCESS","data":"Microsoft-Windows-Security-Auditing","system_name":"dgsdprddom01.mcs.local","type":"Security","subject":{"security_id":"S-1-5-18","account_name":"DGSDPRDDOM01$","account_domain":"MCS","logon_id":"0x3E7"},"security_id":"S-1-5-21-2728124869-2810918716-3645054289-1182","account_name":"iag","account_domain":"MCS","logon_type":"3"},"location":"WinEvtLog"}
{"timestamp":"2018-10-17T07:04:50.895+0100","rule":{"level":3,"description":"Windows User Logoff.","id":"18149","firedtimes":2987,"mail":false,"groups":["windows"],"pci_dss":["10.2.5"],"gdpr":["IV_32.2"]},"agent":{"id":"239","name":"dgsdprddom01.mcs.local","ip":"10.79.240.10"},"manager":{"name":"dgsdprdwaz01"},"id":"1539756290.1048686135","full_log":"2018 Oct 17 06:04:49 WinEvtLog: Security: AUDIT_SUCCESS(4634): Microsoft-Windows-Security-Auditing: (no user): no domain: dgsdprddom01.mcs.local: An account was logged off.    Subject:   Security ID:  S-1-5-21-2728124869-2810918716-3645054289-1182   Account Name:  iag   Account Domain:  MCS   Logon ID:  0xE596CD59    Logon Type:   3    This event is generated when a logon session is destroyed. It may be positively correlated with a logon event using the Logon ID value. Logon IDs are only unique between reboots on the same computer.","predecoder":{"program_name":"WinEvtLog","timestamp":"2018 Oct 17 06:04:49"},"decoder":{"parent":"windows","name":"windows"},"data":{"dstuser":"(no user)","id":"4634","status":"AUDIT_SUCCESS","data":"Microsoft-Windows-Security-Auditing","system_name":"dgsdprddom01.mcs.local","type":"Security","subject":{"security_id":"S-1-5-21-2728124869-2810918716-3645054289-1182","account_name":"iag","account_domain":"MCS","logon_id":"0xE596CD59"},"logon_type":"3"},"location":"WinEvtLog"}


[root@waz01 ~]# lsof /var/ossec/logs/alerts/alerts.json
COMMAND     PID     USER   FD   TYPE DEVICE   SIZE/OFF      NODE NAME
java      11924 logstash   93r   REG  253,3 1360095987 201439440 /var/ossec/logs/alerts/alerts.json
ossec-ana 19035    ossec   10w   REG  253,3 1360095987 201439440 /var/ossec/logs/alerts/alerts.json


[root@waz01 ~]# curl localhost:9200/_cat/indices/wazuh-alerts-3.x-*
yellow open wazuh-alerts-3.x-2018.09.07 HLNDuMjHS1Ox3iLoSwFE7g 5 1     294 0 1000.8kb 1000.8kb
yellow open wazuh-alerts-3.x-2018.09.22 60AsCkS-RGG0Z2kFGcrbxg 5 1  218077 0   74.2mb   74.2mb
yellow open wazuh-alerts-3.x-2018.10.12 WdiFnzu7QlaBetwzcsIFYQ 5 1  363029 0  237.7mb  237.7mb
yellow open wazuh-alerts-3.x-2018.09.25 Eg1rvDXbSNSq5EqJAtSm_A 5 1  247998 0   87.7mb   87.7mb
yellow open wazuh-alerts-3.x-2018.09.05 HHRnxqjtTKimmW6FEUUfdw 5 1     143 0  679.6kb  679.6kb
yellow open wazuh-alerts-3.x-2018.09.08 MqIJtCNQR3aU3inuv-pxpw 5 1     183 0    748kb    748kb
yellow open wazuh-alerts-3.x-2018.09.15 GIx8fMXnQ3ukrSkKmjbViQ 5 1  171191 0   45.9mb   45.9mb
yellow open wazuh-alerts-3.x-2018.10.10 W3pw1hDwSp2QAtRm0hwoaQ 5 1  896799 0  662.6mb  662.6mb
yellow open wazuh-alerts-3.x-2018.09.24 Loa8kM7cSJOujjRzvYsVKw 5 1  286140 0  106.3mb  106.3mb
yellow open wazuh-alerts-3.x-2018.10.15 rnC7kyXRQSCSXm6wVCiWOw 5 1 2628257 0    1.8gb    1.8gb
yellow open wazuh-alerts-3.x-2018.09.17 zK3MCinOSF2_3rNAJnuPCQ 5 1  174254 0   48.3mb   48.3mb
yellow open wazuh-alerts-3.x-2018.09.02 lt8xvq2ZRdOQGW7pSX5-wg 5 1     148 0    507kb    507kb
yellow open wazuh-alerts-3.x-2018.10.17 A4yCMv4YTuOQWelbb3XQtQ 5 1  627474 0  459.6mb  459.6mb
yellow open wazuh-alerts-3.x-2018.08.31 RP0_5r1aQdiMmQYeD0-3CQ 5 1      28 0  247.8kb  247.8kb
yellow open wazuh-alerts-3.x-2018.10.02 nKEdjkFOQ9abitVi_dKF3g 5 1  727934 0  232.7mb  232.7mb
yellow open wazuh-alerts-3.x-2018.09.28 iZ2J4UMhR6y1eHH1JiiqLQ 5 1  232290 0   78.6mb   78.6mb
yellow open wazuh-alerts-3.x-2018.09.21 FY0mIXGQQHmCpYgRgOIJhg 5 1  203134 0   63.5mb   63.5mb
yellow open wazuh-alerts-3.x-2018.09.09 FRELA8dFSWy6aMd12ZFnqw 5 1     428 0  895.1kb  895.1kb
yellow open wazuh-alerts-3.x-2018.10.01 mvYSVDZJSfa-F_5dKIBwAg 5 1  402155 0  129.9mb  129.9mb
yellow open wazuh-alerts-3.x-2018.09.19 ebb9Jrt1TT6Qm6df7VjZxg 5 1  201897 0   58.3mb   58.3mb
yellow open wazuh-alerts-3.x-2018.09.13 KPy8HfiyRyyPeeHpTGKJNg 5 1   52530 0   13.7mb   13.7mb
yellow open wazuh-alerts-3.x-2018.09.16 uwLNlaQ1Qnyp2V9jXJJHvA 5 1  171478 0   46.5mb   46.5mb
yellow open wazuh-alerts-3.x-2018.10.03 bMW_brMeRkSDsJWL6agaWg 5 1 1321895 0    715mb    715mb
yellow open wazuh-alerts-3.x-2018.10.14 WQV3dpLeSdapmaKOewUh-Q 5 1  226964 0  154.9mb  154.9mb
yellow open wazuh-alerts-3.x-2018.09.04 CvatsnVxTDKgtPzuSkebFQ 5 1      28 0  271.1kb  271.1kb
yellow open wazuh-alerts-3.x-2018.09.18 B1wJIN1SQKuSQbkoFsTmnA 5 1  187805 0   52.4mb   52.4mb
yellow open wazuh-alerts-3.x-2018.09.11 2Zc4Fg8lR6G64XuJLZbkBA 5 1     203 0  772.1kb  772.1kb
yellow open wazuh-alerts-3.x-2018.09.27 8wRF0XhXQnuVexAxLF6Y5w 5 1  233117 0   79.2mb   79.2mb
yellow open wazuh-alerts-3.x-2018.10.13 wM5hHYMCQsG5XCkIquE-QA 5 1  304830 0  222.4mb  222.4mb
yellow open wazuh-alerts-3.x-2018.10.16 p2F-trx1R7mBXQUb4eY-Fg 5 1 2655690 0    1.8gb    1.8gb
yellow open wazuh-alerts-3.x-2018.08.29 kAPHZSRpQqaMhoWgkiXupg 5 1      28 0  236.6kb  236.6kb
yellow open wazuh-alerts-3.x-2018.08.28 XmD43PlgTUWaH4DMvZMiqw 5 1     175 0  500.9kb  500.9kb
yellow open wazuh-alerts-3.x-2018.09.26 toJKVX5lQcOBC_rkUaFcrg 5 1  224709 0   75.9mb   75.9mb
yellow open wazuh-alerts-3.x-2018.09.12 1aB7pIcnTWqZPZkFagHnKA 5 1      73 0    516kb    516kb
yellow open wazuh-alerts-3.x-2018.09.29 BXyZe2eySkSlwutudcTzNA 5 1  222734 0   73.7mb   73.7mb
yellow open wazuh-alerts-3.x-2018.09.14 0uaTbLxpSXWQr9m0h8xZiA 5 1  114549 0   31.3mb   31.3mb
yellow open wazuh-alerts-3.x-2018.09.30 6mMUxi3MSeqrXGBSW3qRlA 5 1  224733 0   74.8mb   74.8mb
yellow open wazuh-alerts-3.x-2018.09.06 KqfwJqhcSNK3-hP7lbgLrw 5 1      72 0  562.9kb  562.9kb
yellow open wazuh-alerts-3.x-2018.09.10 7geBbQ-FS22Ctasg0aRkug 5 1     202 0  917.5kb  917.5kb
yellow open wazuh-alerts-3.x-2018.10.04 x8198rpWTxOVBgJ6eTjJJg 5 1  492044 0  364.9mb  364.9mb
yellow open wazuh-alerts-3.x-2018.08.30 wFvW8IqHS8yDMUINDkt1fQ 5 1      28 0  248.9kb  248.9kb
yellow open wazuh-alerts-3.x-2018.09.20 k5rfOcyuQASiMwiXslYl2g 5 1  202388 0   57.6mb   57.6mb
yellow open wazuh-alerts-3.x-2018.09.01 U5jntd1hSWGxdofpgMD4Xg 5 1      28 0  203.5kb  203.5kb
yellow open wazuh-alerts-3.x-2018.09.03 trTjIjrTSIKUw5kqu9Lkpw 5 1      41 0  396.5kb  396.5kb
yellow open wazuh-alerts-3.x-2018.09.23 ZQZE9KD1R1y6WypYVV5kfg 5 1  216141 0   73.7mb   73.7mb

Cheers



Nicholai Tailor

unread,
Oct 17, 2018, 2:12:55 AM10/17/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

When I check Discover, I see lots of data for the agents in the same 15-minute time range.

image.png

Cheers


jesus.g...@wazuh.com

unread,
Oct 17, 2018, 3:16:49 AM10/17/18
to Wazuh mailing list
Hi Nicholai,

So your Elasticsearch stack is finally working (at least at the index level). That's good, because you are storing data properly. Let's dig into the Kibana issue.
In the Discover tab you are seeing alerts for your whole environment because no filters are applied in that section. The first screenshot you provided
is for agent 013, but maybe there are no alerts for that agent.

Let's do the next two checks:

- Open the Wazuh app, go to Overview > Security events, select a suitable time range, and check whether the visualizations show anything.
- Then open the Discover tab and add a filter for agent 013: click on "Add a filter +" and set agent.id is 013.

Let us know once that's done, Nicholai. In any case, I can confirm that you are now indexing data correctly.
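Equivalently, instead of the filter pill you can type the query straight into the Discover search bar (Lucene syntax):

agent.id: "013"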

Regards,
Jesús

Nicholai Tailor

unread,
Oct 17, 2018, 3:30:22 AM10/17/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

Overview has information.

In Discover for that agent, even if I go back an hour, there is nothing.

image.png


jesus.g...@wazuh.com

unread,
Oct 17, 2018, 3:53:21 AM10/17/18
to Wazuh mailing list
Hello again Nicholai,

If you have data in Overview, it means one or more agents are reporting events. By the way, the filter you are using is not valid: the ID should be "013", not "13".

Can you confirm that you've added the filter as in the following screenshots?

captura_1.png

captura_3.png

captura_2.png


Another useful check is to look up that agent's status in your Wazuh API, in case it is down:

curl -u api_user:api_password localhost:55000/agents/013?select=status
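
If the agent shows as Active but still produces nothing, it can also help to check when it last reported back (a sketch; it assumes this API version exposes the lastKeepAlive field):

curl -u api_user:api_password "localhost:55000/agents/013?select=status,lastKeepAlive"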

I hope it helps.


Regards,

Jesús

Nicholai Tailor

unread,
Oct 17, 2018, 4:18:16 AM10/17/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Here is the output of the curl command:

[root@waz01 ~]# curl -u user:passwprd localhost:55000/agents/013?select=status
{"error":0,"data":{"status":"Active"}}

[root@dgwaz01 ~]

On Wed, Oct 17, 2018 at 9:07 AM Nicholai Tailor <nichola...@gmail.com> wrote:
Here is another one, same thing.

image.png

On Wed, Oct 17, 2018 at 9:03 AM Nicholai Tailor <nichola...@gmail.com> wrote:
Hi Jesus,

Changing it to 013 didn't make a difference.

image.png




jesus.g...@wazuh.com

unread,
Oct 17, 2018, 4:23:18 AM10/17/18
to Wazuh mailing list
Hi Nicholai,

That agent seems to be not reporting or something else. 

captura_4.png


Try fetching data directly from Elasticsearch for today's index and for agent 013. Copy and paste the following query into the Kibana Dev Tools:

GET wazuh-alerts-3.x-2018.10.17/_search
{
  "query": {
    "match": {
      "agent.id": "013"
    }
  }
}

Press the play button, take a screenshot of the result, and share it with us here. Thanks.
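
As a variant of the same query (a sketch using the same index and fields as above), you can restrict the search to the last hour to see whether any recent alerts exist for that agent:

GET wazuh-alerts-3.x-2018.10.17/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "agent.id": "013" } },
        { "range": { "@timestamp": { "gte": "now-1h" } } }
      ]
    }
  }
}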

Regards,
Jesús

Nicholai Tailor

unread,
Oct 17, 2018, 4:31:41 AM10/17/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Here is the output of the query.

{
  "took": 49,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 5930,
    "max_score": 5.039797,
    "hits": [
      {
        "_index": "wazuh-alerts-3.x-2018.10.17",
        "_type": "wazuh",
        "_id": "S5dTf2YBgodqkxjNtzgR",
        "_score": 5.039797,
        "_source": {
          "syscheck": {
            "uname_after": "root",
            "sha256_after": "43f1bade6454349c258017cc99113f8b6a5712e3807e82ad9371348d52d60190",
            "gid_after": "0",
            "md5_after": "231b5b8bf05c5e93a9b2ebc4186eb6f7",
            "mtime_after": "2018-10-17T00:32:41",
            "event": "modified",
            "gname_after": "root",
            "size_after": "64",
            "perm_after": "120777",
            "sha1_after": "76c3afd5eaf8d5cbf250f08b923fbf9085c6793e",
            "mtime_before": "2018-10-16T12:32:55",
            "inode_after": 17064879,
            "uid_after": "0",
            "path": "/etc/ssl/certs/Certum_Trusted_Network_CA.pem"
          },
          "@timestamp": "2018-10-17T00:01:23.693Z",
          "manager": {
            "name": "dgsdprdwaz01"
          },
          "location": "syscheck",
          "decoder": {
            "name": "syscheck_integrity_changed"
          },
          "rule": {
            "level": 7,
            "description": "Integrity checksum changed.",
            "firedtimes": 27,
            "pci_dss": [
              "11.5"
            ],
            "gpg13": [
              "4.11"
            ],
            "gdpr": [
              "II_5.1.f"
            ],
            "id": "550",
            "groups": [
              "ossec",
              "syscheck"
            ],
            "mail": false
          },
          "full_log": "Integrity checksum changed for: '/etc/ssl/certs/Certum_Trusted_Network_CA.pem'\n",
          "id": "1539734483.157460685",
          "agent": {
            "ip": "10.79.244.143",
            "name": "dgsdqahw03",
            "id": "013"
          },
          "path": "/var/ossec/logs/alerts/alerts.json"
        }
      },
      {
        "_index": "wazuh-alerts-3.x-2018.10.17",
        "_type": "wazuh",
        "_id": "VJdTf2YBgodqkxjNtzgR",
        "_score": 5.039797,
        "_source": {
          "syscheck": {
            "uname_after": "root",
            "sha256_after": "56646cbc348ddedc3db0f197ced3f6dd922bdcb3b6482e5fd23149dc6e3a4877",
            "gid_after": "0",
            "md5_after": "034c17251a18993f996f3020b0f97384",
            "mtime_after": "2018-10-17T00:32:41",
            "event": "modified",
            "gname_after": "root",
            "size_after": "60",
            "perm_after": "120777",
            "sha1_after": "70d9615d97499de54242ba2ccacc691b09c284e1",
            "mtime_before": "2018-10-16T12:32:55",
            "inode_after": 17119317,
            "uid_after": "0",
            "path": "/etc/ssl/certs/GeoTrust_Universal_CA.pem"
          },
          "@timestamp": "2018-10-17T00:01:24.621Z",
          "manager": {
            "name": "dgsdprdwaz01"
          },
          "location": "syscheck",
          "decoder": {
            "name": "syscheck_integrity_changed"
          },
          "rule": {
            "level": 7,
            "description": "Integrity checksum changed.",
            "firedtimes": 66,
            "pci_dss": [
              "11.5"
            ],
            "gpg13": [
              "4.11"
            ],
            "gdpr": [
              "II_5.1.f"
            ],
            "id": "550",
            "groups": [
              "ossec",
              "syscheck"
            ],
            "mail": false
          },
          "full_log": "Integrity checksum changed for: '/etc/ssl/certs/GeoTrust_Universal_CA.pem'\n",
          "id": "1539734484.157517510",
          "agent": {
            "ip": "10.79.244.143",
            "name": "dgsdqahw03",
            "id": "013"
          },
          "path": "/var/ossec/logs/alerts/alerts.json"
        }
      },
      {
        "_index": "wazuh-alerts-3.x-2018.10.17",
        "_type": "wazuh",
        "_id": "V5dTf2YBgodqkxjNtzgR",
        "_score": 5.039797,
        "_source": {
          "syscheck": {
            "uname_after": "root",
            "sha256_after": "9c2a7510e01aec2c9b8cc2c9d03b16576858a93863a6d0a31a2ac05f47f5dbe1",
            "gid_after": "0",
            "md5_after": "6f3ff6e3e4c21ac5faa4aee951d13303",
            "mtime_after": "2018-10-17T00:32:41",
            "event": "modified",
            "gname_after": "root",
            "size_after": "62",
            "perm_after": "120777",
            "sha1_after": "0a3e41e74e9a4137036525f0c2fd6d8edfdc7c39",
            "mtime_before": "2018-10-16T12:32:55",
            "inode_after": 17080104,
            "uid_after": "0",
            "path": "/etc/ssl/certs/Buypass_Class_2_Root_CA.pem"
          },
          "@timestamp": "2018-10-17T00:01:23.669Z",
          "manager": {
            "name": "dgsdprdwaz01"
          },
          "location": "syscheck",
          "decoder": {
            "name": "syscheck_integrity_changed"
          },
          "rule": {
            "level": 7,
            "description": "Integrity checksum changed.",
            "firedtimes": 15,
            "pci_dss": [
              "11.5"
            ],
            "gpg13": [
              "4.11"
            ],
            "gdpr": [
              "II_5.1.f"
            ],
            "id": "550",
            "groups": [
              "ossec",
              "syscheck"
            ],
            "mail": false
          },
          "full_log": "Integrity checksum changed for: '/etc/ssl/certs/Buypass_Class_2_Root_CA.pem'\n",
          "id": "1539734483.157453727",
          "agent": {
            "ip": "10.79.244.143",
            "name": "dgsdqahw03",
            "id": "013"
          },
          "path": "/var/ossec/logs/alerts/alerts.json"
        }
      },
      {
        "_index": "wazuh-alerts-3.x-2018.10.17",
        "_type": "wazuh",
        "_id": "XpdTf2YBgodqkxjNtzgR",
        "_score": 5.039797,
        "_source": {
          "syscheck": {
            "uname_after": "root",
            "sha256_after": "ee4f76bb73607da41aac662038e25b2f514984836efa3317e2c50390408dcdce",
            "gid_after": "0",
            "md5_after": "410737016cffcb136715e1aadd30e69c",
            "mtime_after": "2018-10-17T00:32:41",
            "event": "modified",
            "gname_after": "root",
            "size_after": "65",
            "perm_after": "120777",
            "sha1_after": "175c17492e693e80f148a53d0d97aa0a9bfbdaf9",
            "mtime_before": "2018-10-16T12:32:55",
            "inode_after": 17101210,
            "uid_after": "0",
            "path": "/etc/ssl/certs/Deutsche_Telekom_Root_CA_2.pem"
          },
          "@timestamp": "2018-10-17T00:01:23.718Z",
          "manager": {
            "name": "dgsdprdwaz01"
          },
          "location": "syscheck",
          "decoder": {
            "name": "syscheck_integrity_changed"
          },
          "rule": {
            "level": 7,
            "description": "Integrity checksum changed.",
            "firedtimes": 39,
            "pci_dss": [
              "11.5"
            ],
            "gpg13": [
              "4.11"
            ],
            "gdpr": [
              "II_5.1.f"
            ],
            "id": "550",
            "groups": [
              "ossec",
              "syscheck"
            ],
            "mail": false
          },
          "full_log": "Integrity checksum changed for: '/etc/ssl/certs/Deutsche_Telekom_Root_CA_2.pem'\n",
          "id": "1539734483.157467722",
          "agent": {
            "ip": "10.79.244.143",
            "name": "dgsdqahw03",
            "id": "013"
          },
          "path": "/var/ossec/logs/alerts/alerts.json"
        }
      },
      {
        "_index": "wazuh-alerts-3.x-2018.10.17",
        "_type": "wazuh",
        "_id": "XZdTf2YBgodqkxjNtzgR",
        "_score": 5.039797,
        "_source": {
          "syscheck": {
            "uname_after": "root",
            "sha256_after": "cba061eef768b79cf995171941b3d61bbcfe8761924da52d76814897205f7d7d",
            "gid_after": "0",
            "md5_after": "35610177afc9c64e70f1ce62c1885496",
            "mtime_after": "2018-10-17T00:32:41",
            "event": "modified",
            "gname_after": "root",
            "size_after": "53",
            "perm_after": "120777",
            "sha1_after": "9f910d4a23e1fd39c8d25a65480180c583cf48ff",
            "mtime_before": "2018-10-16T12:32:55",
            "inode_after": 17064873,
            "uid_after": "0",
            "path": "/etc/ssl/certs/Certum_Root_CA.pem"
          },
          "@timestamp": "2018-10-17T00:01:23.691Z",
          "manager": {
            "name": "dgsdprdwaz01"
          },
          "location": "syscheck",
          "decoder": {
            "name": "syscheck_integrity_changed"
          },
          "rule": {
            "level": 7,
            "description": "Integrity checksum changed.",
            "firedtimes": 26,
            "pci_dss": [
              "11.5"
            ],
            "gpg13": [
              "4.11"
            ],
            "gdpr": [
              "II_5.1.f"
            ],
            "id": "550",
            "groups": [
              "ossec",
              "syscheck"
            ],
            "mail": false
          },
          "full_log": "Integrity checksum changed for: '/etc/ssl/certs/Certum_Root_CA.pem'\n",
          "id": "1539734483.157460113",
          "agent": {
            "ip": "10.79.244.143",
            "name": "dgsdqahw03",
            "id": "013"
          },
          "path": "/var/ossec/logs/alerts/alerts.json"
        }
      },
      {
        "_index": "wazuh-alerts-3.x-2018.10.17",
        "_type": "wazuh",
        "_id": "bJdTf2YBgodqkxjNtzgR",
        "_score": 5.039797,
        "_source": {
          "syscheck": {
            "uname_after": "root",
            "sha256_after": "e80cb8eb9fc411c947de2785c4907ab80353d52cc6912b45a5ad1812c1110087",
            "gid_after": "0",
            "md5_after": "20af0db1f0a1bd929c472dfcfe4b13c7",
            "mtime_after": "2018-10-17T00:32:41",
            "event": "modified",
            "gname_after": "root",
            "size_after": "62",
            "perm_after": "120777",
            "sha1_after": "33948162d3468a5d7b0b0147c58d6cb047ac4296",
            "mtime_before": "2018-10-16T12:32:55",
            "inode_after": 17119327,
            "uid_after": "0",
            "path": "/etc/ssl/certs/GlobalSign_Root_CA_-_R2.pem"
          },
          "@timestamp": "2018-10-17T00:01:24.632Z",
          "manager": {
            "name": "dgsdprdwaz01"
          },
          "location": "syscheck",
          "decoder": {
            "name": "syscheck_integrity_changed"
          },
          "rule": {
            "level": 7,
            "description": "Integrity checksum changed.",
            "firedtimes": 71,
            "pci_dss": [
              "11.5"
            ],
            "gpg13": [
              "4.11"
            ],
            "gdpr": [
              "II_5.1.f"
            ],
            "id": "550",
            "groups": [
              "ossec",
              "syscheck"
            ],
            "mail": false
          },
          "full_log": "Integrity checksum changed for: '/etc/ssl/certs/GlobalSign_Root_CA_-_R2.pem'\n",
          "id": "1539734484.157523167",
          "agent": {
            "ip": "10.79.244.143",
            "name": "dgsdqahw03",
            "id": "013"
          },
          "path": "/var/ossec/logs/alerts/alerts.json"
        }
      },
      {
        "_index": "wazuh-alerts-3.x-2018.10.17",
        "_type": "wazuh",
        "_id": "NZdTf2YBgodqkxjNtzgQ",
        "_score": 5.039797,
        "_source": {
          "syscheck": {
            "uname_after": "root",
            "sha256_after": "b52fae9cd8dcf49285f0337cd815deca13fedd31f653bf07f61579451517e18c",
            "gid_after": "0",
            "md5_after": "9e328b8d7eb2098bca933ed3353de369",
            "mtime_after": "2018-10-17T00:32:41",
            "event": "modified",
            "gname_after": "root",
            "size_after": "66",
            "perm_after": "120777",
            "sha1_after": "d636a2396e29b4e91e00106a183938a6d746f716",
            "mtime_before": "2018-10-16T12:32:55",
            "inode_after": 17060912,
            "uid_after": "0",
            "path": "/etc/ssl/certs/DigiCert_Assured_ID_Root_CA.pem"
          },
          "@timestamp": "2018-10-17T00:01:23.720Z",
          "manager": {
            "name": "dgsdprdwaz01"
          },
          "location": "syscheck",
          "decoder": {
            "name": "syscheck_integrity_changed"
          },
          "rule": {
            "level": 7,
            "description": "Integrity checksum changed.",
            "firedtimes": 40,
            "pci_dss": [
              "11.5"
            ],
            "gpg13": [
              "4.11"
            ],
            "gdpr": [
              "II_5.1.f"
            ],
            "id": "550",
            "groups": [
              "ossec",
              "syscheck"
            ],
            "mail": false
          },
          "full_log": "Integrity checksum changed for: '/etc/ssl/certs/DigiCert_Assured_ID_Root_CA.pem'\n",
          "id": "1539734483.157468306",
          "agent": {
            "ip": "10.79.244.143",
            "name": "dgsdqahw03",
            "id": "013"
          },
          "path": "/var/ossec/logs/alerts/alerts.json"
        }
      },
      {
        "_index": "wazuh-alerts-3.x-2018.10.17",
        "_type": "wazuh",
        "_id": "bZdTf2YBgodqkxjNtzgR",
        "_score": 5.039797,
        "_source": {
          "syscheck": {
            "uname_after": "root",
            "sha256_after": "4a4928ce5db4c7347a6d0b7b10677c6308390b624d82eeb0fe3f68d125121a4e",
            "gid_after": "0",
            "md5_after": "51e14b4c734e450402ea2cf73f2aee0f",
            "mtime_after": "2018-10-17T00:32:41",
            "event": "modified",
            "gname_after": "root",
            "size_after": "61",
            "perm_after": "120777",
            "sha1_after": "4513711209c4c1e1780c91df93024fecd8083160",
            "mtime_before": "2018-10-16T12:32:55",
            "inode_after": 17080078,
            "uid_after": "0",
            "path": "/etc/ssl/certs/AddTrust_External_Root.pem"
          },
          "@timestamp": "2018-10-17T00:01:23.648Z",
          "manager": {
            "name": "dgsdprdwaz01"
          },
          "location": "syscheck",
          "decoder": {
            "name": "syscheck_integrity_changed"
          },
          "rule": {
            "level": 7,
            "description": "Integrity checksum changed.",
            "firedtimes": 4,
            "pci_dss": [
              "11.5"
            ],
            "gpg13": [
              "4.11"
            ],
            "gdpr": [
              "II_5.1.f"
            ],
            "id": "550",
            "groups": [
              "ossec",
              "syscheck"
            ],
            "mail": false
          },
          "full_log": "Integrity checksum changed for: '/etc/ssl/certs/AddTrust_External_Root.pem'\n",
          "id": "1539734483.157447281",
          "agent": {
            "ip": "10.79.244.143",
            "name": "dgsdqahw03",
            "id": "013"
          },
          "path": "/var/ossec/logs/alerts/alerts.json"
        }
      },
      {
        "_index": "wazuh-alerts-3.x-2018.10.17",
        "_type": "wazuh",
        "_id": "cJdTf2YBgodqkxjNtzgR",
        "_score": 5.039797,
        "_source": {
          "syscheck": {
            "uname_after": "root",
            "sha256_after": "d1e1969cdbc656bb4c568116fe2d9b4f8b02b170dc20193b86a26c046f4b35a7",
            "gid_after": "0",
            "md5_after": "7800edd54b8cde1605c6469c7f9fa5eb",
            "mtime_after": "2018-10-17T00:32:41",
            "event": "modified",
            "gname_after": "root",
            "size_after": "47",
            "perm_after": "120777",
            "sha1_after": "fe5df407c4cba70f49928410bf55df03d1e2732f",
            "mtime_before": "2018-10-16T12:32:55",
            "inode_after": 17068465,
            "uid_after": "0",
            "path": "/etc/ssl/certs/Certigna.pem"
          },
          "@timestamp": "2018-10-17T00:01:23.681Z",
          "manager": {
            "name": "dgsdprdwaz01"
          },
          "location": "syscheck",
          "decoder": {
            "name": "syscheck_integrity_changed"
          },
          "rule": {
            "level": 7,
            "description": "Integrity checksum changed.",
            "firedtimes": 21,
            "pci_dss": [
              "11.5"
            ],
            "gpg13": [
              "4.11"
            ],
            "gdpr": [
              "II_5.1.f"
            ],
            "id": "550",
            "groups": [
              "ossec",
              "syscheck"
            ],
            "mail": false
          },
          "full_log": "Integrity checksum changed for: '/etc/ssl/certs/Certigna.pem'\n",
          "id": "1539734483.157457223",
          "agent": {
            "ip": "10.79.244.143",
            "name": "dgsdqahw03",
            "id": "013"
          },
          "path": "/var/ossec/logs/alerts/alerts.json"
        }
      },
      {
        "_index": "wazuh-alerts-3.x-2018.10.17",
        "_type": "wazuh",
        "_id": "d5dTf2YBgodqkxjNtzgR",
        "_score": 5.039797,
        "_source": {
          "syscheck": {
            "uname_after": "root",
            "sha256_after": "745bd29be45667514b4000e9cdb70cdecad0f02c78232ed722f64f7f80436e35",
            "gid_after": "0",
            "md5_after": "d5c740071952f2189d90dc600985be3f",
            "mtime_after": "2018-10-17T00:32:41",
            "event": "modified",
            "gname_after": "root",
            "size_after": "75",
            "perm_after": "120777",
            "sha1_after": "84c2702946b6895ac09e25fc4eccd685129a2e27",
            "mtime_before": "2018-10-16T12:32:55",
            "inode_after": 17116462,
            "uid_after": "0",
            "path": "/etc/ssl/certs/Entrust_Root_Certification_Authority.pem"
          },
          "@timestamp": "2018-10-17T00:01:24.596Z",
          "manager": {
            "name": "dgsdprdwaz01"
          },
          "location": "syscheck",
          "decoder": {
            "name": "syscheck_integrity_changed"
          },
          "rule": {
            "level": 7,
            "description": "Integrity checksum changed.",
            "firedtimes": 55,
            "pci_dss": [
              "11.5"
            ],
            "gpg13": [
              "4.11"
            ],
            "gdpr": [
              "II_5.1.f"
            ],
            "id": "550",
            "groups": [
              "ossec",
              "syscheck"
            ],
            "mail": false
          },
          "full_log": "Integrity checksum changed for: '/etc/ssl/certs/Entrust_Root_Certification_Authority.pem'\n",
          "id": "1539734484.157507345",
          "agent": {
            "ip": "10.79.244.143",
            "name": "dgsdqahw03",
            "id": "013"
          },
          "path": "/var/ossec/logs/alerts/alerts.json"
        }
      }
    ]
  }
}


jesus.g...@wazuh.com

unread,
Oct 17, 2018, 4:39:17 AM10/17/18
to Wazuh mailing list
Hi Nicholai,

The agent has data in Elasticsearch, but the timestamps may not match your time filter in Kibana. As I can see, those alerts have dates like "@timestamp": "2018-10-17T00:01:24.621Z",
so try increasing the time filter to "Last 24 hours" or "Last 48 hours". Open Kibana > Discover, set a filter for the agent ("agent.id": "013") and set the time filter
as in the next screenshot:

captura_5.png


Also, review the date on your instances; they may be in different time zones or otherwise out of sync. That would explain why you are not seeing alerts for "Last 15 minutes" or "Last 1 hour";
it's a known behavior with Kibana and Elasticsearch.
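
If you want to double-check outside the Kibana time picker, something like this from Dev Tools should work (just a sketch; adjust the index date and the time window to your case):

GET wazuh-alerts-3.x-2018.10.17/_search
{
    "query": {
        "bool": {
            "filter": [
                { "match": { "agent.id": "013" } },
                { "range": { "@timestamp": { "gte": "now-24h" } } }
            ]
        }
    }
}

If that returns hits but Kibana shows nothing, the problem is only the time filter or a time zone offset, not the data flow.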

Regards,
Jesús

Nicholai Tailor

unread,
Oct 17, 2018, 4:59:45 AM10/17/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

We were able to do this before.

However, this means we can't see what's happening within the hour.

That is kind of crucial for us, and I have noticed that it currently works on Windows agents.

Is there any way to fix this?

Also, I don't see any logs appearing in the Wazuh app. The logs are located at /usr/share/kibana/optimize/wazuh-logs/wazuhapp.log (should there not be logs being generated here?)

image.png

Here is a 7-day view of an agent, which appears to show missing logs.

image.png

Are you sure it's working right?


jesus.g...@wazuh.com

unread,
Oct 17, 2018, 5:36:06 AM10/17/18
to Wazuh mailing list
Hi Nicholai,

As I can see, there are alerts for the agent "013" on 17th October. So from my view, it's working as expected.

Let's check the date on all affected instances in order to compare them. Login using SSH into the Elasticsearch instance,
the Wazuh manager instance and the Wazuh agent 013 instance, then copy/paste the date here.

$ date

This way we can verify that everything is synchronized. On the other hand, it would be nice to check that specific agent: SSH into the agent
and paste the output of the following commands:

cat /var/ossec/logs/ossec.log | grep -i -E "(error|warning|critical)"

and

# ps aux | grep ossec

That's all for now. 

Regarding the Wazuh app logs: they are internal logs from the Wazuh app itself and are not related to the Wazuh data flow or anything else. In any case, the issue may be caused by wrong permissions;
ensure they are fine by executing the following commands on the Kibana instance:

#  chown -R kibana:kibana /usr/share/kibana/optimize
# chown -R kibana:kibana /usr/share/kibana/plugins
# systemctl restart kibana // Close and open your browser and wait about 30s before entering the Kibana UI again, otherwise it could still be loading

Kind regards,
Jesús

Nicholai Tailor

unread,
Oct 17, 2018, 6:05:13 AM10/17/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

[root@waz01 ~]# date
Wed 17 Oct 11:01:22 BST 2018

I know the reason for these authentication errors: the servers were either shut down or replaced with new versions of Windows, so the agent keys are missing.
I am not worried about these errors at this time.
cat /var/ossec/logs/ossec.log | grep -i -E "(error|warning|critical)"
2018/10/17 11:00:51 ossec-remoted: WARNING: (1404): Authentication error. Wrong key from '10.79.249.43'.
2018/10/17 11:00:52 ossec-remoted: WARNING: (1404): Authentication error. Wrong key from '10.79.249.67'.
2018/10/17 11:00:52 ossec-remoted: WARNING: (1213): Message from '10.79.240.110' not allowed.
2018/10/17 11:00:52 ossec-remoted: WARNING: (1404): Authentication error. Wrong key from '10.79.249.80'.
2018/10/17 11:00:52 ossec-remoted: WARNING: (1404): Authentication error. Wrong key from '10.79.249.43'.
2018/10/17 11:00:52 ossec-remoted: WARNING: (1404): Authentication error. Wrong key from '10.79.249.70'.
2018/10/17 11:00:52 ossec-remoted: WARNING: (1213): Message from '10.79.249.69' not allowed.
2018/10/17 11:00:52 ossec-remoted: WARNING: (1404): Authentication error. Wrong key from '10.79.249.41'.
2018/10/17 11:00:52 ossec-remoted: WARNING: (1404): Authentication error. Wrong key from '10.79.249.68'.
2018/10/17 11:00:52 ossec-remoted: WARNING: (1404): Authentication error. Wrong key from '10.79.249.16'.
2018/10/17 11:00:53 ossec-remoted: WARNING: (1404): Authentication error. Wrong key from '10.79.249.49'.
2018/10/17 11:00:53 ossec-remoted: WARNING: (1404): Authentication error. Wrong key from '10.79.250.10'.
2018/10/17 11:00:53 ossec-remoted: WARNING: (1213): Message from '10.79.249.69' not allowed.
2018/10/17 11:00:53 ossec-remoted: WARNING: (1213): Message from '10.79.240.111' not allowed.
2018/10/17 11:00:53 ossec-remoted: WARNING: (1213): Message from '10.79.249.63' not allowed.
2018/10/17 11:00:53 ossec-remoted: WARNING: (1404): Authentication error. Wrong key from '10.79.249.35'.
2018/10/17 11:00:53 ossec-remoted: WARNING: (1213): Message from '10.79.244.141' not allowed

[root@waz01 ~]# ps aux | grep ossec
ossec     1332  0.0  0.3 926164 24560 ?        Ssl  Oct11   0:20 /bin/node /var/ossec/api/app.js
root     19007  0.0  0.0 243044  1124 ?        Sl   Oct13   0:19 /var/ossec/bin/ossec-authd
ossec    19012  0.1  0.3 629460 24160 ?        Sl   Oct13   7:43 /var/ossec/bin/wazuh-db
root     19029  0.0  0.0  29916   400 ?        Sl   Oct13   0:11 /var/ossec/bin/ossec-execd
ossec    19035  4.6  0.8 166796 71128 ?        Sl   Oct13 293:14 /var/ossec/bin/ossec-analysisd
root     19039  0.0  0.0  98204  3772 ?        Sl   Oct13   4:30 /var/ossec/bin/ossec-syscheckd
ossecr   19046  4.2  0.1 687780  9480 ?        Sl   Oct13 269:09 /var/ossec/bin/ossec-remoted
root     19048  0.0  0.0 388240  1148 ?        Sl   Oct13   3:13 /var/ossec/bin/ossec-logcollector
ossec    19064  0.3  0.0  21748  1092 ?        S    Oct13  21:23 /var/ossec/bin/ossec-monitord
root     19072  1.8  0.1 418644 15540 ?        Sl   Oct13 115:14 /var/ossec/bin/wazuh-modulesd
root     29617  0.0  0.0 112708   956 pts/0    S+   11:02   0:00 grep --color=auto ossec
[root@waz01 ~]# 


Again, thank you for all the help you are providing. :)



Nicholai Tailor

unread,
Oct 17, 2018, 6:07:46 AM10/17/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

Also, the permissions fix sorted out the app logs. Thanks!

Cheers

jesus.g...@wazuh.com

unread,
Oct 17, 2018, 6:39:25 AM10/17/18
to Wazuh mailing list
Always glad to help Nicholai!

Unfortunately, what I meant was the date from all the affected instances. This means I need the following:

- Let me know the date from the instance where Elasticsearch is installed, I remember that is the same as the Wazuh manager instance.
- Let me know the date from the instance where the agent "013" is installed.
- Copy here the error/warning in the ossec.log from the agent "013"
- Copy here the ps aux output from the agent "013" 

And regarding the Wazuh app internal logs: it was the permissions, OK! Happy that it's fixed too.

Best regards,
Jesús

Nicholai Tailor

unread,
Oct 17, 2018, 8:15:11 AM10/17/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

Where do I find this information?

Cheers


jesus.g...@wazuh.com

unread,
Oct 17, 2018, 9:01:44 AM10/17/18
to Wazuh mailing list
Hi Nicholai,

Maybe I didn't explain it as well as I thought.


- Let me know the date from the instance where Elasticsearch is installed, I remember that is the same as the Wazuh manager instance.

// Login using SSH into the Elasticsearch/Wazuh manager instance
$ date

- Let me know the date from the instance where the agent "013" is installed.

// Login using SSH into the agent "013" instance
$ date

- Copy here the error/warning in the ossec.log from the agent "013"

// Login using SSH into the agent "013" instance
$ cat /var/ossec/logs/ossec.log | grep -i -E "(error|warning|critical)"

- Copy here the ps aux output from the agent "013" 

// Login using SSH into the agent "013" instance
$ ps aux | grep ossec

Regards,
Jesús


jesus.g...@wazuh.com

unread,
Oct 17, 2018, 9:58:08 AM10/17/18
to Wazuh mailing list
Hello again Nicholai,

Once again I think you replied to me in private by mistake, but it's not a problem; copying it here:

root@dgsdqahw03:~# date
Wed Oct 17 13:29:07 UTC 2018
[root@waz01 ~]# date
Wed 17 Oct 14:29:17 BST 2018

There is one hour between them; it's not a problem since we are not talking about days or weeks. Just keep that difference in mind.
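
If you want to see how each host is configured (assuming they are systemd-based systems), timedatectl prints the configured time zone and whether the clock is being synchronized:

$ timedatectl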
 
root@dgsdqahw03:~# cat /var/ossec/logs/ossec.log | grep -i -E "(error|warning|critical)"
2018/10/17 00:09:08 ossec-agentd: WARNING: Agent buffer at 90 %.
2018/10/17 00:09:08 ossec-agentd: WARNING: Agent buffer is full: Events may be lost.
2018/10/17 12:10:20 ossec-agentd: WARNING: Agent buffer at 90 %.
2018/10/17 12:10:20 ossec-agentd: WARNING: Agent buffer is full: Events may be lost.

This is the main problem here. As you can see in Kibana, the agent has a peak every 12 hours, and indeed your agent is getting flooded every 12 hours. This is probably caused
by audit.log and syscheck. Can we check the number of lines in audit.log? Let's do a little experiment to verify this theory.

Login using SSH into the agent "013" and execute the next command:

wc -l /var/log/audit/audit.log | cut -d'/' -f1

It should show you a positive number, which is the number of lines in the audit.log file. Note it down.

Now restart the Wazuh agent:

# systemctl restart wazuh-agent

We need to wait until the syscheck scan is finished; this trick is useful to know exactly when it's done:

# tail -f /var/ossec/logs/ossec.log | grep syscheck | grep Ending

The above command shouldn't show anything until the scan is finished (it could take some time, be patient please). At the end, you should see a line like this:

2018/10/17 13:36:03 ossec-syscheckd: INFO: Ending syscheck scan (forwarding database).

Now, it's time to check the audit.log file again:

wc -l /var/log/audit/audit.log | cut -d'/' -f1

Let me know the line count before and after restarting the agent.

Also, it would be nice if you could provide your audit rules; let's check them using the following command:

# auditctl -l

Please, paste the output from the above command here.

I hope it helps.


Kind regards,
Jesús

Nicholai Tailor

unread,
Oct 17, 2018, 10:37:18 AM10/17/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

Sorry, I tried to reply-all; sometimes it fails.

I don't have audit.log on Ubuntu.

root@dgsdqahw03:/var/log# wc -l /var/log/syslog | cut -d'/' -f1
36451 

I let the tail command sit for a long while; it didn't turn up anything.

root@dgsdqahw03:/var/log# cat /var/ossec/logs/ossec.log | grep -i -E "(error|warning|critical)"
2018/10/17 00:09:08 ossec-agentd: WARNING: Agent buffer at 90 %.
2018/10/17 00:09:08 ossec-agentd: WARNING: Agent buffer is full: Events may be lost.
2018/10/17 12:10:20 ossec-agentd: WARNING: Agent buffer at 90 %.
2018/10/17 12:10:20 ossec-agentd: WARNING: Agent buffer is full: Events may be lost.
2018/10/17 14:25:20 ossec-logcollector: ERROR: (1103): Could not open file '/var/log/messages' due to [(2)-(No such file or directory)].
2018/10/17 14:25:20 ossec-logcollector: ERROR: (1103): Could not open file '/var/log/secure' due to [(2)-(No such file or directory)].
2018/10/17 14:26:08 ossec-agentd: WARNING: Agent buffer at 90 %.
2018/10/17 14:26:08 ossec-agentd: WARNING: Agent buffer is full: Events may be lost.
2018/10/17 14:28:18 ossec-logcollector: ERROR: (1103): Could not open file '/var/log/messages' due to [(2)-(No such file or directory)].
2018/10/17 14:28:18 ossec-logcollector: ERROR: (1103): Could not open file '/var/log/secure' due to [(2)-(No such file or directory)].
2018/10/17 14:29:06 ossec-agentd: WARNING: Agent buffer at 90 %.
2018/10/17 14:29:06 ossec-agentd: WARNING: Agent buffer is full: Events may be lost.

This does not look good, does it?




jesus.g...@wazuh.com

unread,
Oct 17, 2018, 11:17:57 AM10/17/18
to Wazuh mailing list
Hi Nicholai, 

Ok, let's debug your agent events using logall_json in the Wazuh manager instance.

1. Login using SSH into the Wazuh manager instance and edit the file /var/ossec/etc/ossec.conf. Look for the <global> section, then enable <logall_json>:

<logall_json>yes</logall_json>

2. Restart the Wazuh manager

# systemctl restart wazuh-manager

3. Login using SSH into the Wazuh agent instance, restart it and tail -f until it shows you the warning message:

# systemctl restart wazuh-agent
# tail -f /var/ossec/logs/ossec.log | grep WARNING

4. Once you see ossec-agentd: WARNING: Agent buffer at 90 %. in the Wazuh agent logs,
    switch your CLI to the Wazuh manager instance again and send me in a private message (yes, private this time, for your privacy)
    the following file from your Wazuh manager:
 
   /var/ossec/logs/archives/archives.json

5. Now we can take a look at your events to clarify what is flooding your agent "013".

Once the mail is sent, you can disable logall_json and restart the Wazuh manager.
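
By the way, if that file is large, a plain grep (nothing Wazuh-specific, just a quick way to preview what you are about to send) shows only the events from agent 013:

# grep '"id":"013"' /var/ossec/logs/archives/archives.json | head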

Regards!

jesus.g...@wazuh.com

unread,
Oct 18, 2018, 6:28:01 AM10/18/18
to Wazuh mailing list
Hello again Nicholai,

After reviewing the archives.json file that you sent me yesterday, I've come to the following conclusion:

You have a lot of alerts with the same structure, related to /var/lib/kubelet/pods/* permissions, which rootcheck considers anomalous.

{"timestamp":"2018-10-17T18:06:18.125+0100","rule":{"level":7,"description":"Host-based anomaly detection event (rootcheck).","id":"510","firedtimes":3840,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_35.7.d"]},"agent":{"id":"013","name":"dgsdqahw03","ip":"10.79.244.143"},"manager":{"name":"dgsdprdwaz01"},"id":"1539795978.2752323246","full_log":"File '/var/lib/kubelet/pods/2ff462ce-7233-11e8-8282-005056b518e6/containers/install-cni/c9369c41' is owned by root and has written permissions to anyone.","decoder":{"name":"rootcheck"},"data":{"title":"File is owned by root and has written permissions to anyone.","file":"/var/lib/kubelet/pods/2ff462ce-7233-11e8-8282-005056b518e6/containers/install-cni/c9369c41"},"location":"rootcheck"}

Here you can see the number of events from rootcheck in your archives.json:

cat archives.json | grep rootcheck | wc -l
489

Here you can see the number of events from rootcheck and rule 510 in your archives.json:

cat archives.json | grep rootcheck | grep 510 | wc -l
489

Here you can see the number of events from rootcheck and rule 510 and including "/var/lib/kubelet/pods/"  in your archives.json:

cat archives.json | grep rootcheck | grep 510 | grep /var/lib/kubelet/pods/ | wc -l
489

At this point, from my view, the solution is as simple as adding an <ignore> directive to the rootcheck section of the agent configuration.

You have two options:

Option 1. Edit the ossec.conf from your Wazuh agent "013". 

- Login using SSH into the Wazuh agent "013" instance.
- Edit the file /var/ossec/etc/ossec.conf, look for the rootcheck block, then add an <ignore> entry for that directory:

<rootcheck>
...
<ignore>/var/lib/kubelet</ignore>
...
</rootcheck>


Restart the Wazuh agent "013"

# systemctl restart wazuh-agent

Option 2. Check which group your agent belongs to and edit its centralized configuration.

- Login using SSH into the Wazuh manager instance.
- Check the group where is agent "013"

# /var/ossec/bin/agent_groups -s -i 013

- Note down the group, example: default
- Edit the file /var/ossec/etc/shared/default/agent.conf (replace default with the real group name; it could be different from my example),
then add the rootcheck ignore inside the <agent_config> block, for example:

<agent_config>

  <!-- Shared agent configuration here -->

  <rootcheck>
    <ignore>/var/lib/kuberlet</ignore>
  </rootcheck>

</agent_config>

- Restart the Wazuh manager

# systemctl restart wazuh-manager
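
Before restarting, if you want to be sure the shared agent.conf has no syntax errors, you can validate it with the bundled checker (assuming the default installation path):

# /var/ossec/bin/verify-agent-conf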

Solution #1 takes effect immediately.

Solution #2 will push the new configuration from the Wazuh manager to the Wazuh agent; once the agent receives it,
it restarts itself automatically and applies the new configuration. It could take a bit more time than solution #1.

On a side note, you can take a look at this useful link about agent flooding:
The link explains how to prevent an agent from being flooded. Your case is a bit different because those permissions may be intended even though our ruleset flags them. That's
why I suggested both options #1 and #2, but in similar situations in the future it could be useful for you.

Kind regards,
Jesús

jesus.g...@wazuh.com

unread,
Oct 18, 2018, 6:32:39 AM10/18/18
to Wazuh mailing list
Nicholai, one more thing, 

I had a typo in option #2 in my last message. Please replace <ignore>/var/lib/kuberlet</ignore> with <ignore>/var/lib/kubelet</ignore>
if you are using option #2; I wrote kuberlet instead of kubelet by mistake.

Regards!

jesus.g...@wazuh.com

unread,
Oct 18, 2018, 7:20:41 AM10/18/18
to Wazuh mailing list
Hello again Nicholai,

We need you to reply on the public list; please be careful when you send the message.

Hi Jesus,
Okay,
I'm sure this might be happening to a lot of machines.
Can I add this to the wazuh-manager configuration?

If you think other agents are probably suffering from the undesired rootcheck alerts for the /var/lib/kubelet directory,
the solution is to use centralized configuration in all the groups that contain affected agents, i.e. Option #2
from my last message. It is the fastest and easiest way to propagate your <ignore> block to all your agents.

That approach is mainly based on an article from our documentation. It's recommended that you understand well what you are doing with a centralized configuration.

I hope it helps Nicholai.

Best regards,
Jesús

Nicholai Tailor

unread,
Oct 18, 2018, 7:27:14 AM10/18/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

Sincere apologies. I was replying just to you because you asked previously.

Thank you again. Okay, I have done option 2. May I ask why the "agent buffer is full" error occurs?

And is there a way to make that buffer much larger?

Cheers


jesus.g...@wazuh.com

unread,
Oct 18, 2018, 7:52:29 AM10/18/18
to Wazuh mailing list
Hello again Nicholai,

May I ask why the "agent buffer is full" error occurs?

The agent buffer gets full whenever the agent tries to send more events than the buffer can handle. Your agent was
trying to send a lot of events because of the /var/lib/kubelet rootcheck findings.

And is there a way to make that buffer much larger?

Yes, but you must be extremely careful when modifying this kind of setting. There is a block in the ossec.conf of your agents
that can be modified in the same way as the rootcheck configuration (this means you can edit the agent's ossec.conf directly or
use the centralized configuration solution).

You should see a block like this:

<client_buffer>
    <!-- Agent buffer options -->
    <disabled>no</disabled>
    <queue_size>5000</queue_size>
    <events_per_second>500</events_per_second>
</client_buffer>

The affected directives are <queue_size> and <events_per_second>.

You can try to increase one of them or both, but keep this in mind:

- If you increase the queue size, the agent will use more memory (not dramatically more, but take care with this value).
- If you increase the events per second (EPS), the agent will use more bandwidth on the network it is connected to.
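
For example (illustrative values only, not a recommendation; tune them to your hosts and network), a larger buffer pushed through the same shared agent.conf used for the rootcheck ignore could look like the block below. This assumes your version accepts <client_buffer> in the centralized configuration; otherwise set it directly in each agent's ossec.conf:

<agent_config>

  <client_buffer>
    <!-- Larger buffer: more memory on the agent, more events per second on the network -->
    <disabled>no</disabled>
    <queue_size>10000</queue_size>
    <events_per_second>1000</events_per_second>
  </client_buffer>

</agent_config>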

A useful link to read about these values is the next one:
I hope it helps.

Your contribution to the community is very much appreciated; please keep replying to all when you send a message. Thanks in advance.

Best regards,
Jesús



Nicholai Tailor

unread,
Oct 18, 2018, 8:08:16 AM10/18/18
to jesus.g...@wazuh.com, Wazuh mailing list
Hi Jesus,

I understand.

Thank you for all your troubleshooting.

It has been invaluable. 

Is there a wiki online with troubleshooting commands and explanations?

Cheers


jesus.g...@wazuh.com

unread,
Oct 18, 2018, 8:39:49 AM10/18/18
to Wazuh mailing list
Hi Nicholai,

You are always welcome and we are always glad to help our community.

Regarding your question, we have a lot of documents in our documentation (https://documentation.wazuh.com/current/index.html), but there is no "troubleshooting" or "FAQ" section yet.
It's on our roadmap, and we are working to provide the best solution day by day.

Let us know if you have any more troubles.

Regards!

Nicholai Tailor

unread,
Oct 18, 2018, 8:45:39 AM10/18/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hey Jesus,

After we added that ignore and restarted the agent on the client, the charts now show up in the 15-minute time range.

Thank you so much!


jesus.g...@wazuh.com

unread,
Oct 18, 2018, 8:51:03 AM10/18/18
to Wazuh mailing list
You are welcome Nicholai!

Since this thread is now solved, my suggestion is to open a new thread for future problems. Thanks in advance.

Best regards,
Jesús

Nicholai Tailor

unread,
Oct 18, 2018, 8:55:54 AM10/18/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Yes I will.

Thank you for your quick responses and excellent troubleshooting ability. 

One quick question about the indices: are they created per server, or does each index bundle many servers together?

I'm trying to understand why I don't see 200+ indices.

Cheers


jesus.g...@wazuh.com

unread,
Oct 18, 2018, 9:07:58 AM10/18/18
to Wazuh mailing list
Hi Nicholai,

Our default configuration creates one index per day; Logstash is the component that decides where events are sent, among a few other things. There is also a thread
where we and some other users were discussing this.
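
For reference, the daily index name comes from the Elasticsearch output of the 01-wazuh.conf Logstash pipeline; it looks roughly like this (a sketch, your hosts and the rest of the pipeline may differ):

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
    }
}

That is why you see one wazuh-alerts-3.x-YYYY.MM.dd index per day rather than one index per agent.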

Regards!

Nicholai Tailor

unread,
Oct 23, 2018, 4:16:07 AM10/23/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

Have a question.

I was looking at my Elasticsearch setup and making notes of all the troubleshooting for reference.

You said /var/ossec is on a different partition; that's okay, but Elasticsearch stores its indices somewhere else. For example,
on a CentOS 7 instance I've just created, it stores them in /usr/share/elasticsearch/data.

# ls /usr/share/elasticsearch/data/nodes/0
_state  indices  node.lock

[root@waz01 ~]# ls -al  /usr/share/elasticsearch/data
ls: cannot access /usr/share/elasticsearch/data: No such file or directory

This data directory does not exist. So is something still wrong?

Cheers


On Mon, Oct 15, 2018 at 3:50 PM <jesus.g...@wazuh.com> wrote:
Hi Nicholai,

As you said in your other thread, you may fall into an Elasticsearch block due to disk usage. 

Removing Filebeat, setting up Logstash

If you are using a single-host architecture, let's remove Filebeat for performance reasons:

1. Stop affected services:

# systemctl stop logstash
# systemctl stop filebeat

2. Remove Filebeat

# yum remove filebeat


3. Setting up Logstash

# curl -so /etc/logstash/conf.d/01-wazuh.conf https://raw.githubusercontent.com/wazuh/wazuh/3.6/extensions/logstash/01-wazuh-local.conf
# usermod -a -G ossec logstash

4. Restart Logstash

# systemctl restart logstash

5. Please, copy and paste this command (it differs from your curl in the other thread):

curl -XPUT 'http://localhost:9200/_settings' -H 'Content-Type: application/json' -d' { "index": { "blocks": { "read_only_allow_delete": "false" } } } '

6. Now check again your Logstash log file:

# date // For debug purposes, it would be nice if we know your instance date, then we can check the logs properly
# cat /var/log/logstash/logstash-plain.log | grep -i -E "(error|warning|critical)"


Disk usage and Elasticsearch

Elasticsearch has a watermark to prevent the disk from becoming unusable.
You said /var/ossec is on a different partition; that's okay, but Elasticsearch stores its indices somewhere else. For example,
on a CentOS 7 instance I've just created, it stores them in /usr/share/elasticsearch/data.

# ls /usr/share/elasticsearch/data/nodes/0
_state  indices  node.lock

Please ensure that the Elasticsearch partition (if it is on a separate partition) has enough space.

I hope it helps.

Best regards,
Jesús


jesus.g...@wazuh.com

unread,
Oct 23, 2018, 4:25:14 AM10/23/18
to Wazuh mailing list
Hi Nicholai,

It depends on your Elasticsearch configuration. Take a look at the following file:

$ cat /etc/elasticsearch/elasticsearch.yml

Then look for path.data and path.logs; those two settings define where Elasticsearch stores its indices (data) and its logs.

Example:

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
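
Once you know the data path, you can check how much space it is using and how much is free on its partition (standard commands; adjust the path to whatever your path.data points to):

# du -sh /var/lib/elasticsearch
# df -h /var/lib/elasticsearch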

Note about commented settings (#) for elasticsearch.yml:

#This is a commented line
This is an uncommented line

Here you can find useful information about Elasticsearch configuration files:
And here you can read about path.data and path.logs settings:
Whenever you change a setting, remember to restart Elasticsearch:

# systemctl restart elasticsearch

Also, note that Elasticsearch needs about 15 seconds to be ready after a restart.

Regards,
Jesús

Nicholai Tailor

unread,
Oct 23, 2018, 4:29:17 AM10/23/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

Ahh thank you.

# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch

Okay thank you.

Cheers


Nicholai Tailor

unread,
Oct 23, 2018, 4:37:43 AM10/23/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

One last question.

I think that while troubleshooting I updated Logstash to a slightly newer version.

Will this cause issues with logs?

Kibana:
/usr/share/kibana/bin/kibana -V

[root@waz01 ~]# /usr/share/kibana/bin/kibana -V

6.4.0

Logstash:
/usr/share/logstash/bin/logstash -V

[root@waz01 ~]# /usr/share/logstash/bin/logstash -V

logstash 6.4.2

jesus.g...@wazuh.com

unread,
Oct 23, 2018, 4:43:32 AM10/23/18
to Wazuh mailing list
Hi Nicholai,

They differ only by a patch version, so it should not be a problem, but at Wazuh we always recommend running the same version on all components.

In any case, since there are no known breaking changes between those versions, you can wait for the next Wazuh minor version, which is coming soon, and then upgrade your whole stack.
For now, if you are not seeing any errors, it's fine.

Best regards,
Jesús

Nicholai Tailor

unread,
Oct 23, 2018, 4:47:45 AM10/23/18
to jesus.g...@wazuh.com, wa...@googlegroups.com
Hi Jesus,

Okay, cool.

Thank you again for your quick replies.

I will have to send you a beer once I have this all done :)

Cheers
