Elasticsearch index not found: wazuh-alerts-3.x-

Felipe Andres Concha Sepúlveda

Jun 29, 2018, 7:53:28 AM
to Wazuh mailing list, jesus.g...@wazuh.com
Dear all,
I have set up a cluster configuration, and when I go to view the alerts I get the error shown in the subject line. Do you have any idea what could be wrong?

Regards

[screenshots of the error attached]

jesus.g...@wazuh.com

Jun 29, 2018, 8:50:21 AM
to Wazuh mailing list
Hi Felipe,

Could you please paste the output of the following commands?

# curl elastic_ip:9200/_cat/indices -s | grep wazuh
# curl elastic_ip:9200/_cat/templates | grep wazuh

I can see agent 001 in your screenshots; could you run the following command?

# cat /var/ossec/logs/alerts/alerts.json | grep '"001"'

Run the above command on every Wazuh manager in your cluster.
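
If the alerts file is large, counting the matching lines is usually enough to confirm the agent is producing alerts (just an optional shortcut):

# grep -c '"001"' /var/ossec/logs/alerts/alerts.json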

Regards,
Jesús

Felipe Andres Concha Sepúlveda

Jun 29, 2018, 9:07:22 AM
to jesus.g...@wazuh.com, Wazuh mailing list
Jesús, here are the screenshots and the command output:

[screenshots attached]

[root@localhost filebeat]# cat /var/ossec/logs/alerts/alerts.json | grep '"001"'
{"timestamp":"2018-06-29T11:43:39.641+0200","rule":{"level":3,"description":"Ossec agent started.","id":"503","firedtimes":1,"mail":true,"groups":["ossec"],"pci_dss":["10.6.1","10.2.6"],"gpg13":["10.1"],"gdpr":["IV_35.7.d"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530265419.22597","cluster":{"name":"wazuh","node":"node01"},"full_log":"ossec: Agent started: 'Centos7_1->any'.","decoder":{"parent":"ossec","name":"ossec"},"data":{"data":"Centos7_1->any"},"location":"ossec"}
{"timestamp":"2018-06-29T11:43:39.641+0200","rule":{"level":7,"description":"Integrity checksum changed.","id":"550","firedtimes":1,"mail":false,"groups":["ossec","syscheck"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530265419.22826","cluster":{"name":"wazuh","node":"node01"},"full_log":"Integrity checksum changed for: '/var/ossec/etc/ossec.conf'\nOld md5sum was: 'f2ba1b9b566d1003294980ae5ca347cc'\nNew md5sum is : 'ae4be0f3a65635d54e436ff90e3a821a'\nOld sha1sum was: 'f79b2b70f9356e3ac203ca85fb18ee132598050f'\nNew sha1sum is : '7b900b85cc291932cb6b29e23590cdeba5d2c744'\n","syscheck":{"path":"/var/ossec/etc/ossec.conf","size_after":"4889","perm_after":"100640","uid_after":"0","gid_after":"994","md5_before":"f2ba1b9b566d1003294980ae5ca347cc","md5_after":"ae4be0f3a65635d54e436ff90e3a821a","sha1_before":"f79b2b70f9356e3ac203ca85fb18ee132598050f","sha1_after":"7b900b85cc291932cb6b29e23590cdeba5d2c744","event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"}
{"timestamp":"2018-06-29T11:43:46.695+0200","rule":{"level":7,"description":"Listened ports status (netstat) changed (new port opened or closed).","id":"533","firedtimes":1,"mail":false,"groups":["ossec"],"pci_dss":["10.2.7","10.6.1"],"gpg13":["10.1"],"gdpr":["IV_35.7.d"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530265426.23577","cluster":{"name":"wazuh","node":"node01"},"previous_output":"ossec: output: 'netstat listening ports':\ntcp 0.0.0.0:22 0.0.0.0:* 1166/sshd\ntcp6 :::22 :::* 1166/sshd\ntcp 127.0.0.1:25 0.0.0.0:* 1329/master\ntcp6 ::1:25 :::* 1329/master\nudp 0.0.0.0:68 0.0.0.0:* 968/dhclient\ntcp 0.0.0.0:111 0.0.0.0:* 708/rpcbind\ntcp6 :::111 :::* 708/rpcbind\nudp 0.0.0.0:111 0.0.0.0:* 708/rpcbind\nudp6 :::111 :::* 708/rpcbind\nudp 127.0.0.1:323 0.0.0.0:* 727/chronyd\nudp6 ::1:323 :::* 727/chronyd\nudp 0.0.0.0:874 0.0.0.0:* 708/rpcbind\nudp6 :::874 :::* 708/rpcbind","full_log":"ossec: output: 'netstat listening ports':\ntcp 0.0.0.0:22 0.0.0.0:* 1161/sshd\ntcp6 :::22 :::* 1161/sshd\ntcp 127.0.0.1:25 0.0.0.0:* 1335/master\ntcp6 ::1:25 :::* 1335/master\nudp 0.0.0.0:68 0.0.0.0:* 965/dhclient\ntcp 0.0.0.0:111 0.0.0.0:* 710/rpcbind\ntcp6 :::111 :::* 710/rpcbind\nudp 0.0.0.0:111 0.0.0.0:* 710/rpcbind\nudp6 :::111 :::* 710/rpcbind\nudp 127.0.0.1:323 0.0.0.0:* 722/chronyd\nudp6 ::1:323 :::* 722/chronyd\nudp 0.0.0.0:872 0.0.0.0:* 710/rpcbind\nudp6 :::872 :::* 710/rpcbind","decoder":{"name":"ossec"},"previous_log":"ossec: output: 'netstat listening ports':\ntcp 0.0.0.0:22 0.0.0.0:* 1166/sshd\ntcp6 :::22 :::* 1166/sshd\ntcp 127.0.0.1:25 0.0.0.0:* 1329/master\ntcp6 ::1:25 :::* 1329/master\nudp 0.0.0.0:68 0.0.0.0:* 968/dhclient\ntcp 0.0.0.0:111 0.0.0.0:* 708/rpcbind\ntcp6 :::111 :::* 708/rpcbind\nudp 0.0.0.0:111 0.0.0.0:* 708/rpcbind\nudp6 :::111 :::* 708/rpcbind\nudp 127.0.0.1:323 0.0.0.0:* 727/chronyd\nudp6 ::1:323 :::* 727/chronyd\nudp 0.0.0.0:874 0.0.0.0:* 708/rpcbind\nudp6 :::874 :::* 708/rpcbind","location":"netstat listening ports"}
{"timestamp":"2018-06-29T11:44:44.649+0200","rule":{"level":7,"description":"Integrity checksum changed.","id":"550","firedtimes":2,"mail":false,"groups":["ossec","syscheck"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530265484.24803","cluster":{"name":"wazuh","node":"node01"},"full_log":"Integrity checksum changed for: '/etc/resolv.conf'\n","syscheck":{"path":"/etc/resolv.conf","size_after":"72","perm_after":"100644","uid_after":"0","gid_after":"0","md5_after":"56590de6241b8392a289a2ab6eb8d53c","sha1_after":"1e48093a39dc2cfa34ff49171036620258a9e76c","uname_after":"root","gname_after":"root","mtime_before":"2018-06-28T10:04:47","mtime_after":"2018-06-29T11:36:51","inode_before":17431959,"inode_after":17431966,"event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"}
{"timestamp":"2018-06-29T11:48:48.410+0200","rule":{"level":7,"description":"Integrity checksum changed.","id":"550","firedtimes":3,"mail":false,"groups":["ossec","syscheck"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530265728.25341","cluster":{"name":"wazuh","node":"node01"},"full_log":"Integrity checksum changed for: '/etc/tuned/active_profile'\n","syscheck":{"path":"/etc/tuned/active_profile","size_after":"14","perm_after":"100644","uid_after":"0","gid_after":"0","md5_after":"9a561d913bcdb5a659ec2dd035975a8e","sha1_after":"633f07e1b5698d04352d5dca735869bf2fe77897","uname_after":"root","gname_after":"root","mtime_before":"2018-06-28T10:04:48","mtime_after":"2018-06-29T11:36:52","inode_after":50715186,"event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"}
{"timestamp":"2018-06-29T11:48:48.416+0200","rule":{"level":7,"description":"Integrity checksum changed.","id":"550","firedtimes":4,"mail":false,"groups":["ossec","syscheck"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530265728.25877","cluster":{"name":"wazuh","node":"node01"},"full_log":"Integrity checksum changed for: '/etc/tuned/profile_mode'\n","syscheck":{"path":"/etc/tuned/profile_mode","size_after":"5","perm_after":"100644","uid_after":"0","gid_after":"0","md5_after":"451e20aff0f489cd2f7d4d73533aa961","sha1_after":"43683f4e92c48be4b00ddd86e011a4f27fcdbeb5","uname_after":"root","gname_after":"root","mtime_before":"2018-06-28T10:04:48","mtime_after":"2018-06-29T11:36:52","inode_after":50715073,"event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"}
{"timestamp":"2018-06-29T11:56:55.756+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":1,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.26408","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: SSH Hardening - 3: Root can log in. File: /etc/ssh/sshd_config. Reference: 3 .","decoder":{"name":"rootcheck"},"data":{"title":"SSH Hardening - 3: Root can log in.","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.759+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":2,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.26726","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: SSH Hardening - 4: No Public Key authentication {PCI_DSS: 2.2.4}. File: /etc/ssh/sshd_config. Reference: 4 .","decoder":{"name":"rootcheck"},"data":{"title":"SSH Hardening - 4: No Public Key authentication","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.761+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":3,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.27086","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: SSH Hardening - 5: Password Authentication {PCI_DSS: 2.2.4}. File: /etc/ssh/sshd_config. Reference: 5 .","decoder":{"name":"rootcheck"},"data":{"title":"SSH Hardening - 5: Password Authentication","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.764+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":4,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.27436","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: SSH Hardening - 6: Empty passwords allowed {PCI_DSS: 2.2.4}. File: /etc/ssh/sshd_config. Reference: 6 .","decoder":{"name":"rootcheck"},"data":{"title":"SSH Hardening - 6: Empty passwords allowed","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.766+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":5,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.27786","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: SSH Hardening - 7: Rhost or shost used for authentication {PCI_DSS: 2.2.4}. File: /etc/ssh/sshd_config. Reference: 7 .","decoder":{"name":"rootcheck"},"data":{"title":"SSH Hardening - 7: Rhost or shost used for authentication","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.768+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":6,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.28166","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: SSH Hardening - 8: Wrong Grace Time {PCI_DSS: 2.2.4}. File: /etc/ssh/sshd_config. Reference: 8 .","decoder":{"name":"rootcheck"},"data":{"title":"SSH Hardening - 8: Wrong Grace Time","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.771+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":7,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.28502","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: SSH Hardening - 9: Wrong Maximum number of authentication attempts {PCI_DSS: 2.2.4}. File: /etc/ssh/sshd_config. Reference: 9 .","decoder":{"name":"rootcheck"},"data":{"title":"SSH Hardening - 9: Wrong Maximum number of authentication attempts","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.773+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":8,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.28900","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - Testing against the CIS Red Hat Enterprise Linux 7 Benchmark v1.1.0. File: /etc/redhat-release. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - Testing against the CIS Red Hat Enterprise Linux 7 Benchmark v1.1.0.","file":"/etc/redhat-release"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.775+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":9,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.29391","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - Build considerations - Robust partition scheme - /tmp is not on its own partition. File: /etc/fstab. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - Build considerations - Robust partition scheme - /tmp is not on its own partition.","file":"/etc/fstab"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.778+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":10,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"cis":["1.1.5 RHEL7"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.29908","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - Build considerations - Robust partition scheme - /var is not on its own partition {CIS: 1.1.5 RHEL7}. File: /etc/fstab. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - Build considerations - Robust partition scheme - /var is not on its own partition","file":"/etc/fstab"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.780+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":11,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"cis":["4.1.2 RHEL7"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.30443","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - 4.1.2 - Network parameters - IP send redirects enabled {CIS: 4.1.2 RHEL7} {PCI_DSS: 2.2.4}. File: /proc/sys/net/ipv4/conf/all/send_redirects. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - 4.1.2 - Network parameters - IP send redirects enabled","file":"/proc/sys/net/ipv4/conf/all/send_redirects"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.783+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":12,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"cis":["1.1.1 RHEL7"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.31005","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - 4.2.2 - Network parameters - ICMP redirects accepted {CIS: 1.1.1 RHEL7} {PCI_DSS: 2.2.4}. File: /proc/sys/net/ipv4/conf/all/accept_redirects. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - 4.2.2 - Network parameters - ICMP redirects accepted","file":"/proc/sys/net/ipv4/conf/all/accept_redirects"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.785+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":13,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"cis":["4.2.3 RHEL7"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.31567","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - 4.2.3 - Network parameters - ICMP secure redirects accepted {CIS: 4.2.3 RHEL7} {PCI_DSS: 2.2.4}. File: /proc/sys/net/ipv4/conf/all/secure_redirects. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - 4.2.3 - Network parameters - ICMP secure redirects accepted","file":"/proc/sys/net/ipv4/conf/all/secure_redirects"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.787+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":14,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"cis":["4.2.4 RHEL7"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.32143","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - 4.2.4 - Network parameters - martians not logged {CIS: 4.2.4 RHEL7} {PCI_DSS: 2.2.4}. File: /proc/sys/net/ipv4/conf/all/log_martians. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - 4.2.4 - Network parameters - martians not logged","file":"/proc/sys/net/ipv4/conf/all/log_martians"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.789+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":15,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.32689","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - 6.2.5 - SSH Configuration - Set SSH MaxAuthTries to 4 or Less  {CIS - RHEL7 - 6.2.5} {PCI_DSS: 2.2.4}. File: /etc/ssh/sshd_config. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - 6.2.5 - SSH Configuration - Set SSH MaxAuthTries to 4 or Less ","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.792+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":16,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"cis":["6.2.8 RHEL7"],"pci_dss":["4.1"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.33226","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - 6.2.8 - SSH Configuration - Root login allowed {CIS: 6.2.8 RHEL7} {PCI_DSS: 4.1}. File: /etc/ssh/sshd_config. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - 6.2.8 - SSH Configuration - Root login allowed","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T11:56:55.794+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":17,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"cis":["6.2.9 RHEL7"],"pci_dss":["4.1"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530266215.33726","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - 6.2.9 - SSH Configuration - Empty passwords permitted {CIS: 6.2.9 RHEL7} {PCI_DSS: 4.1}. File: /etc/ssh/sshd_config. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - 6.2.9 - SSH Configuration - Empty passwords permitted","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:06:33.536+0200","rule":{"level":3,"description":"PAM: Login session opened.","id":"5501","firedtimes":1,"mail":false,"groups":["pam","syslog","authentication_success"],"pci_dss":["10.2.5"],"gpg13":["7.8","7.9"],"gdpr":["IV_32.2"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530270393.34240","cluster":{"name":"wazuh","node":"node01"},"full_log":"Jun 29 13:06:31 localhost su: pam_unix(su:session): session opened for user root by root(uid=0)","predecoder":{"program_name":"su","timestamp":"Jun 29 13:06:31","hostname":"localhost"},"decoder":{"parent":"pam","name":"pam"},"data":{"srcuser":"root","dstuser":"root","uid":"0"},"location":"/var/log/secure"}
{"timestamp":"2018-06-29T13:28:12.575+0200","rule":{"level":3,"description":"Ossec agent started.","id":"503","firedtimes":1,"mail":true,"groups":["ossec"],"pci_dss":["10.6.1","10.2.6"],"gpg13":["10.1"],"gdpr":["IV_35.7.d"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530271692.34574","cluster":{"name":"wazuh","node":"node01"},"full_log":"ossec: Agent started: 'Centos7_1->any'.","decoder":{"parent":"ossec","name":"ossec"},"data":{"data":"Centos7_1->any"},"location":"ossec"}
{"timestamp":"2018-06-29T13:41:29.910+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":1,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.34803","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: SSH Hardening - 3: Root can log in. File: /etc/ssh/sshd_config. Reference: 3 .","decoder":{"name":"rootcheck"},"data":{"title":"SSH Hardening - 3: Root can log in.","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.912+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":2,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.35121","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: SSH Hardening - 4: No Public Key authentication {PCI_DSS: 2.2.4}. File: /etc/ssh/sshd_config. Reference: 4 .","decoder":{"name":"rootcheck"},"data":{"title":"SSH Hardening - 4: No Public Key authentication","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.914+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":3,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.35481","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: SSH Hardening - 5: Password Authentication {PCI_DSS: 2.2.4}. File: /etc/ssh/sshd_config. Reference: 5 .","decoder":{"name":"rootcheck"},"data":{"title":"SSH Hardening - 5: Password Authentication","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.917+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":4,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.35831","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: SSH Hardening - 6: Empty passwords allowed {PCI_DSS: 2.2.4}. File: /etc/ssh/sshd_config. Reference: 6 .","decoder":{"name":"rootcheck"},"data":{"title":"SSH Hardening - 6: Empty passwords allowed","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.919+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":5,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.36181","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: SSH Hardening - 7: Rhost or shost used for authentication {PCI_DSS: 2.2.4}. File: /etc/ssh/sshd_config. Reference: 7 .","decoder":{"name":"rootcheck"},"data":{"title":"SSH Hardening - 7: Rhost or shost used for authentication","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.922+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":6,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.36561","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: SSH Hardening - 8: Wrong Grace Time {PCI_DSS: 2.2.4}. File: /etc/ssh/sshd_config. Reference: 8 .","decoder":{"name":"rootcheck"},"data":{"title":"SSH Hardening - 8: Wrong Grace Time","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.924+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":7,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.36897","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: SSH Hardening - 9: Wrong Maximum number of authentication attempts {PCI_DSS: 2.2.4}. File: /etc/ssh/sshd_config. Reference: 9 .","decoder":{"name":"rootcheck"},"data":{"title":"SSH Hardening - 9: Wrong Maximum number of authentication attempts","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.927+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":8,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.37295","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - Testing against the CIS Red Hat Enterprise Linux 7 Benchmark v1.1.0. File: /etc/redhat-release. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - Testing against the CIS Red Hat Enterprise Linux 7 Benchmark v1.1.0.","file":"/etc/redhat-release"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.929+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":9,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.37786","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - Build considerations - Robust partition scheme - /tmp is not on its own partition. File: /etc/fstab. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - Build considerations - Robust partition scheme - /tmp is not on its own partition.","file":"/etc/fstab"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.932+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":10,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"cis":["1.1.5 RHEL7"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.38303","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - Build considerations - Robust partition scheme - /var is not on its own partition {CIS: 1.1.5 RHEL7}. File: /etc/fstab. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - Build considerations - Robust partition scheme - /var is not on its own partition","file":"/etc/fstab"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.934+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":11,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"cis":["4.1.2 RHEL7"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.38838","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - 4.1.2 - Network parameters - IP send redirects enabled {CIS: 4.1.2 RHEL7} {PCI_DSS: 2.2.4}. File: /proc/sys/net/ipv4/conf/all/send_redirects. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - 4.1.2 - Network parameters - IP send redirects enabled","file":"/proc/sys/net/ipv4/conf/all/send_redirects"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.936+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":12,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"cis":["1.1.1 RHEL7"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.39400","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - 4.2.2 - Network parameters - ICMP redirects accepted {CIS: 1.1.1 RHEL7} {PCI_DSS: 2.2.4}. File: /proc/sys/net/ipv4/conf/all/accept_redirects. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - 4.2.2 - Network parameters - ICMP redirects accepted","file":"/proc/sys/net/ipv4/conf/all/accept_redirects"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.939+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":13,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"cis":["4.2.3 RHEL7"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.39962","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - 4.2.3 - Network parameters - ICMP secure redirects accepted {CIS: 4.2.3 RHEL7} {PCI_DSS: 2.2.4}. File: /proc/sys/net/ipv4/conf/all/secure_redirects. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - 4.2.3 - Network parameters - ICMP secure redirects accepted","file":"/proc/sys/net/ipv4/conf/all/secure_redirects"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.941+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":14,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"cis":["4.2.4 RHEL7"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.40538","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - 4.2.4 - Network parameters - martians not logged {CIS: 4.2.4 RHEL7} {PCI_DSS: 2.2.4}. File: /proc/sys/net/ipv4/conf/all/log_martians. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - 4.2.4 - Network parameters - martians not logged","file":"/proc/sys/net/ipv4/conf/all/log_martians"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.944+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":15,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"pci_dss":["2.2.4"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.41084","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - 6.2.5 - SSH Configuration - Set SSH MaxAuthTries to 4 or Less  {CIS - RHEL7 - 6.2.5} {PCI_DSS: 2.2.4}. File: /etc/ssh/sshd_config. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - 6.2.5 - SSH Configuration - Set SSH MaxAuthTries to 4 or Less ","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.946+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":16,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"cis":["6.2.8 RHEL7"],"pci_dss":["4.1"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.41621","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - 6.2.8 - SSH Configuration - Root login allowed {CIS: 6.2.8 RHEL7} {PCI_DSS: 4.1}. File: /etc/ssh/sshd_config. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - 6.2.8 - SSH Configuration - Root login allowed","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}
{"timestamp":"2018-06-29T13:41:29.948+0200","rule":{"level":3,"description":"System Audit event.","id":"516","firedtimes":17,"mail":false,"groups":["ossec","rootcheck"],"gdpr":["IV_30.1.g"],"cis":["6.2.9 RHEL7"],"pci_dss":["4.1"]},"agent":{"id":"001","name":"Centos7_1"},"manager":{"name":"localhost.localdomain"},"id":"1530272489.42121","cluster":{"name":"wazuh","node":"node01"},"full_log":"System Audit: CIS - RHEL7 - 6.2.9 - SSH Configuration - Empty passwords permitted {CIS: 6.2.9 RHEL7} {PCI_DSS: 4.1}. File: /etc/ssh/sshd_config. Reference: https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf .","decoder":{"name":"rootcheck"},"data":{"title":"CIS - RHEL7 - 6.2.9 - SSH Configuration - Empty passwords permitted","file":"/etc/ssh/sshd_config"},"location":"rootcheck"}

jesus.g...@wazuh.com

Jun 29, 2018, 9:15:52 AM
to Wazuh mailing list
Hi Felipe,

It looks like no alerts indices have been created yet. A Wazuh alerts index is named using this pattern:

wazuh-alerts-3.x-YYYY.MM.DD

Example:

wazuh-alerts-3.x-2018.06.29
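
If you want to check whether today's index exists, a direct query against that pattern should work (the ?v flag just adds column headers):

# curl 'elastic_ip:9200/_cat/indices/wazuh-alerts-3.x-*?v'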

At this point, my suggestion is to check whether Filebeat is properly sending data to Logstash.
Run the following command on each Filebeat machine:

# filebeat test output
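
If that test fails, it is worth double-checking which Logstash host Filebeat is shipping to; assuming the default install path, something like this shows the output section:

# grep -A 3 'logstash' /etc/filebeat/filebeat.yml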

If they all report OK, check each Logstash log:

# cat /var/log/logstash/logstash-plain.log | grep -i -E "error|warn"
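
It can also help to confirm that the Logstash pipeline points at the right Elasticsearch host; in a typical Wazuh setup the pipeline file lives under /etc/logstash/conf.d/ (the exact filename may differ in your installation):

# grep -A 3 'elasticsearch' /etc/logstash/conf.d/*.conf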

If those also look clean, check each Elasticsearch log:

# cat /var/log/elasticsearch/elasticsearch.log | grep -i -E "error|warn"
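
Finally, make sure the Elasticsearch service is actually running and listening on port 9200 on every node (standard checks, nothing Wazuh-specific):

# systemctl status elasticsearch
# curl localhost:9200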

Hope it helps,
Jesús

Felipe Andres Concha Sepúlveda

Jun 29, 2018, 9:38:55 AM
to jesus.g...@wazuh.com, Wazuh mailing list

Here are the results (screenshots attached).
I'm still analyzing them, but do you have any ideas?

3 Filebeat nodes: OK

[screenshot attached]

ELK master node

[screenshot attached]

Node 2 logs (Elasticsearch and Logstash):
[2018-06-29T08:42:03,123][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2018-06-29T08:42:09,488][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>0, "stalling_thread_info"=>{"other"=>[{"thread_id"=>29, "name"=>"[main]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/logstash-input-beats-5.0.14-java/lib/logstash/inputs/beats.rb:198:in `run'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}]}}
[2018-06-29T08:42:09,541][ERROR][logstash.shutdownwatcher ] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2018-06-29T09:01:24,775][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//localhost:9200], index=>"wazuh-alerts-3.x-%{+YYYY.MM.dd}", document_type=>"wazuh", id=>"9b75326af0b186735d7929582cc619c38faa0af3613877c498271f50b08b9b35", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_648fc27e-961c-4636-89ed-ce2eb356db0e", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-06-29T09:01:25,722][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:01:25,766][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Template file '' could not be found!", :class=>"ArgumentError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/template_manager.rb:31:in `read_template_file'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/template_manager.rb:17:in `get_template'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:96:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:26:in `register'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:97:in `register'", "org/logstash/config/ir/compiler/OutputDelegatorExt.java:93:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:340:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:351:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:351:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:728:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:361:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:288:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:248:in `block in start'"]}
[2018-06-29T09:01:30,762][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:01:35,771][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:01:40,782][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:01:45,803][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:01:50,824][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:01:55,838][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:02:00,852][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:02:05,863][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:02:10,874][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:02:15,885][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:02:20,898][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:02:25,907][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:02:30,917][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:02:35,928][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:02:40,947][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:02:45,963][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:02:50,987][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:02:56,007][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:03:01,019][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:03:06,031][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:03:11,054][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:03:16,084][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:03:21,113][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-06-29T09:03:26,146][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http:/

[root@localhost elasticsearch]# cat /var/log/elasticsearch/elasticsearch.log | grep -i -E "error|warn"
[2018-06-28T12:37:28,086][INFO ][o.e.n.Node               ] [NT8FKNY] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.JyUJSB88, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=rpm]
[2018-06-28T12:37:41,614][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [NT8FKNY] Failed to clear cache for realms [[]]

Node 3 logs (Elasticsearch and Logstash):
[root@localhost elasticsearch]# cat /var/log/logstash/logstash-plain.log | grep -i -E "error|warn"
[2018-06-29T08:38:32,660][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2018-06-29T08:38:40,427][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>0, "stalling_thread_info"=>{"other"=>[{"thread_id"=>29, "name"=>"[main]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/logstash-input-beats-5.0.14-java/lib/logstash/inputs/beats.rb:198:in `run'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}]}}
[2018-06-29T08:38:40,473][ERROR][logstash.shutdownwatcher ] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2018-06-29T09:09:10,915][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//localhost:9200], index=>"wazuh-alerts-3.x-%{+YYYY.MM.dd}", document_type=>"wazuh", id=>"9b75326af0b186735d7929582cc619c38faa0af3613877c498271f50b08b9b35", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_7aa501be-3fbf-4692-8746-81ebf7044ce1", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-06-29T09:09:14,612][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-06-29T09:09:14,894][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}


[root@localhost elasticsearch]# cat /var/log/elasticsearch/elasticsearch.log | grep -i -E "error|warn"
[2018-06-29T09:08:36,217][INFO ][o.e.n.Node               ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.xF0eUgNY, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch]
[2018-06-29T09:09:14,108][WARN ][o.e.m.j.JvmGcMonitorService] [BVHTxUb] [gc][young][28][3] duration [1.1s], collections [1]/[1.6s], total [1.1s]/[1.5s], memory [289mb]->[53.9mb]/[990.7mb], all_pools {[young] [266.2mb]->[1.7mb]/[266.2mb]}{[survivor] [18.6mb]->[33.2mb]/[33.2mb]}{[old] [4.1mb]->[20.7mb]/[691.2mb]}
[2018-06-29T09:09:14,117][WARN ][o.e.m.j.JvmGcMonitorService] [BVHTxUb] [gc][28] overhead, spent [1.1s] collecting in the last [1.6s]
[2018-06-29T10:14:16,283][INFO ][o.e.n.Node               ] [nodo-2] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.QphHUCGQ, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch]
[2018-06-29T10:14:24,722][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [1] failed send ping to {#zen_unicast_192.168.2.156_0#}{U-n0ZyShRzGFHPsxvsso8Q}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:24,724][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [1] failed send ping to {#zen_unicast_192.168.2.155_0#}{1ntGUylVRvenuraASqbWJg}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:25,116][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [1] failed send ping to {#zen_unicast_192.168.2.155_0#}{1ntGUylVRvenuraASqbWJg}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:25,116][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [1] failed send ping to {#zen_unicast_192.168.2.156_0#}{U-n0ZyShRzGFHPsxvsso8Q}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:26,142][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [1] failed send ping to {#zen_unicast_192.168.2.156_0#}{U-n0ZyShRzGFHPsxvsso8Q}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:26,142][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [1] failed send ping to {#zen_unicast_192.168.2.155_0#}{1ntGUylVRvenuraASqbWJg}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:27,054][WARN ][o.e.d.z.ZenDiscovery     ] [nodo-2] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[2018-06-29T10:14:27,143][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [2] failed send ping to {#zen_unicast_192.168.2.156_0#}{j9uNDeCvRPWw5oSc4s2cfA}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:27,146][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [2] failed send ping to {#zen_unicast_192.168.2.155_0#}{jXqKUw73Q_66zQzaDoaB8w}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:28,069][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [2] failed send ping to {#zen_unicast_192.168.2.155_0#}{jXqKUw73Q_66zQzaDoaB8w}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:28,071][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [2] failed send ping to {#zen_unicast_192.168.2.156_0#}{j9uNDeCvRPWw5oSc4s2cfA}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:29,068][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [2] failed send ping to {#zen_unicast_192.168.2.155_0#}{jXqKUw73Q_66zQzaDoaB8w}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:29,068][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [2] failed send ping to {#zen_unicast_192.168.2.156_0#}{j9uNDeCvRPWw5oSc4s2cfA}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:30,057][WARN ][o.e.d.z.ZenDiscovery     ] [nodo-2] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[2018-06-29T10:14:30,070][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [3] failed send ping to {#zen_unicast_192.168.2.156_0#}{wM_33cqmQfSOzLzgvDWL6g}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:30,071][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [3] failed send ping to {#zen_unicast_192.168.2.155_0#}{utW5XYXARkOkUIncPp42aw}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:31,081][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [3] failed send ping to {#zen_unicast_192.168.2.156_0#}{wM_33cqmQfSOzLzgvDWL6g}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:31,079][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [3] failed send ping to {#zen_unicast_192.168.2.155_0#}{utW5XYXARkOkUIncPp42aw}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:32,073][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [3] failed send ping to {#zen_unicast_192.168.2.156_0#}{wM_33cqmQfSOzLzgvDWL6g}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:32,073][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [3] failed send ping to {#zen_unicast_192.168.2.155_0#}{utW5XYXARkOkUIncPp42aw}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:33,060][WARN ][o.e.d.z.ZenDiscovery     ] [nodo-2] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[2018-06-29T10:14:33,069][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [4] failed send ping to {#zen_unicast_192.168.2.156_0#}{hbBE0-wNStyFt6fVQIzWAA}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:33,071][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [4] failed send ping to {#zen_unicast_192.168.2.155_0#}{6nor2UVoRjCvwG3nZBWYLQ}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:34,071][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [4] failed send ping to {#zen_unicast_192.168.2.156_0#}{hbBE0-wNStyFt6fVQIzWAA}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:34,078][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [4] failed send ping to {#zen_unicast_192.168.2.155_0#}{6nor2UVoRjCvwG3nZBWYLQ}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:35,070][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [4] failed send ping to {#zen_unicast_192.168.2.155_0#}{6nor2UVoRjCvwG3nZBWYLQ}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:35,070][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [4] failed send ping to {#zen_unicast_192.168.2.156_0#}{hbBE0-wNStyFt6fVQIzWAA}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:36,063][WARN ][o.e.d.z.ZenDiscovery     ] [nodo-2] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[2018-06-29T10:14:36,072][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [5] failed send ping to {#zen_unicast_192.168.2.156_0#}{j6v_-TP2RN2dIzPj_DpFiw}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:36,074][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [5] failed send ping to {#zen_unicast_192.168.2.155_0#}{2PdlmSpcRsCteUOALMpjyQ}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:37,075][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [5] failed send ping to {#zen_unicast_192.168.2.156_0#}{j6v_-TP2RN2dIzPj_DpFiw}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:37,076][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [5] failed send ping to {#zen_unicast_192.168.2.155_0#}{2PdlmSpcRsCteUOALMpjyQ}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:38,075][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [5] failed send ping to {#zen_unicast_192.168.2.155_0#}{2PdlmSpcRsCteUOALMpjyQ}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:38,075][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [5] failed send ping to {#zen_unicast_192.168.2.156_0#}{j6v_-TP2RN2dIzPj_DpFiw}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:39,067][WARN ][o.e.d.z.ZenDiscovery     ] [nodo-2] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[2018-06-29T10:14:39,077][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [6] failed send ping to {#zen_unicast_192.168.2.155_0#}{xa5_MJ9HTIurC4jVkz9oSw}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:39,079][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [6] failed send ping to {#zen_unicast_192.168.2.156_0#}{qyFdMLsnR1iSk8c75ZgkfQ}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:40,078][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [6] failed send ping to {#zen_unicast_192.168.2.155_0#}{xa5_MJ9HTIurC4jVkz9oSw}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:40,079][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [6] failed send ping to {#zen_unicast_192.168.2.156_0#}{qyFdMLsnR1iSk8c75ZgkfQ}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:41,078][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [6] failed send ping to {#zen_unicast_192.168.2.155_0#}{xa5_MJ9HTIurC4jVkz9oSw}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:41,078][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [6] failed send ping to {#zen_unicast_192.168.2.156_0#}{qyFdMLsnR1iSk8c75ZgkfQ}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:42,070][WARN ][o.e.d.z.ZenDiscovery     ] [nodo-2] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[2018-06-29T10:14:42,078][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [7] failed send ping to {#zen_unicast_192.168.2.156_0#}{l2La2RWLQ1yz3vFwBudZWA}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:42,078][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [7] failed send ping to {#zen_unicast_192.168.2.155_0#}{3ABGXygZSPWAfCo0-6z1-A}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:43,081][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [7] failed send ping to {#zen_unicast_192.168.2.155_0#}{3ABGXygZSPWAfCo0-6z1-A}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:43,089][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [7] failed send ping to {#zen_unicast_192.168.2.156_0#}{l2La2RWLQ1yz3vFwBudZWA}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:44,079][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [7] failed send ping to {#zen_unicast_192.168.2.155_0#}{3ABGXygZSPWAfCo0-6z1-A}{192.168.2.155}{192.168.2.155:9300}
[2018-06-29T10:14:44,079][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [7] failed send ping to {#zen_unicast_192.168.2.156_0#}{l2La2RWLQ1yz3vFwBudZWA}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:45,072][WARN ][o.e.d.z.ZenDiscovery     ] [nodo-2] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[2018-06-29T10:14:45,080][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [8] failed send ping to {#zen_unicast_192.168.2.156_0#}{-pHZC4XJQlyZnV7M2Oe9Gw}{192.168.2.156}{192.168.2.156:9300}
[2018-06-29T10:14:45,081][WARN ][o.e.d.z.UnicastZenPing   ] [nodo-2] [8] failed send ping to {#zen_unicast_192.168.2.155_0#}{rL2sX4S2SRCf

Felipe Andres Concha Sepúlveda

unread,
Jun 29, 2018, 10:14:17 AM6/29/18
to jesus.g...@wazuh.com, Wazuh mailing list

Sorry, I sent you this morning's log, which contains other errors.

This is the updated one:

I also see that I have some different versions of Elasticsearch.




[2018-06-29T15:48:30,558][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2018-06-29T15:49:05,963][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//localhost:9200], index=>"wazuh-alerts-3.x-%{+YYYY.MM.dd}", document_type=>"wazuh", id=>"9b75326af0b186735d7929582cc619c38faa0af3613877c498271f50b08b9b35", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_dd413631-78d7-46ed-bdfc-43c2ed3cd14c", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-06-29T15:49:07,900][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-06-29T15:49:09,239][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-06-29T15:49:41,716][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2018-06-29T15:49:47,160][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>0, "stalling_thread_info"=>{"other"=>[{"thread_id"=>29, "name"=>"[main]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/logstash-input-beats-5.0.14-java/lib/logstash/inputs/beats.rb:198:in `run'"}, {"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}, {"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}]}}
[2018-06-29T15:49:47,233][ERROR][logstash.shutdownwatcher ] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2018-06-29T15:50:19,668][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//localhost:9200], index=>"wazuh-alerts-3.x-%{+YYYY.MM.dd}", document_type=>"wazuh", id=>"9b75326af0b186735d7929582cc619c38faa0af3613877c498271f50b08b9b35", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_f5a92841-372b-4f6a-abc3-16c78e58e635", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-06-29T15:50:20,937][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-06-29T15:50:21,236][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[root@localhost elasticsearch]# curl 192.168.2.156:9200/?pretty


On 29-06-2018, at 15:38, Felipe Andres Concha Sepúlveda <felipeandresc...@gmail.com> wrote:


These are the results from the screens.
I'm still analyzing them, but do you have any ideas?
3 Filebeat nodes OK

<PastedGraphic-12.png>



ELK MASTER NODE

<PastedGraphic-11.png>

jesus.g...@wazuh.com

unread,
Jun 29, 2018, 10:53:12 AM6/29/18
to Wazuh mailing list
Hello again Felipe,

All the Elastic Stack components (Elasticsearch, Filebeat, Logstash, Kibana) need to be on the same version for them to work together.

1. Update/downgrade all your Elastic components so they all run the same version (see the version-check sketch after step 4 below).
2. Go to the Elasticsearch master node and paste the output of the following commands:

Let us know about your configuration files:

#  cat /etc/elasticsearch/elasticsearch.yml
# cat /etc/logstash/conf.d/01-wazuh.conf

Let us know the status of your services:

# systemctl status logstash -l
# systemctl status elasticsearch -l

3. Repeat the above commands on each Elasticsearch node.
4. Once we have that information, we can look for missing configuration, typos, errors, etc.
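
As a quick way to compare versions, something like the commands below should be enough on each node. This is only a sketch and assumes RPM-based packages (as your systemctl output suggests); adjust it to your package manager if needed:

# rpm -qa | grep -E 'elasticsearch|logstash|kibana|filebeat'
# curl -s localhost:9200/?pretty | grep number

The first command lists the installed package versions, and the second one prints the version reported by the running Elasticsearch instance.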

That's all for now. Keep in touch, Felipe; hopefully we can get your cluster working as expected soon.

Regards,
Jesús


Felipe Andres Concha Sepúlveda

unread,
Jul 1, 2018, 7:24:26 PM7/1/18
to jesus.g...@wazuh.com, Wazuh mailing list
Thank you, Jesús, for your help. I solved the error message by restarting the server; there was probably some process I had not noticed.
Anyway, there is still a problem that I cannot pin down.


Sometimes I cannot see this screen with the log details; intermittently it tells me that it cannot find the wazuh-alerts-3.x index, and when I switch to wazuh-monitoring it also sometimes tells me that it cannot find that one.

Attached below are the configuration files you asked me for.

Regards






MASTER
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: felipecluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: nodo-01
#
# Add custom attributes to the node:
node.master: true
node.data: false
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["192.168.2.203", "192.168.2.204", "192.168.2.205"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
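
A side note on the split-brain comment above: with nodo-01 being the only master-eligible node in the three configurations shown here, the formula (master-eligible nodes / 2 + 1) gives 1, so a minimal sketch of the uncommented setting would be:

discovery.zen.minimum_master_nodes: 1

This is only an illustration based on the configuration files shown; adjust the value if more master-eligible nodes are added.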


DATA NODE

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: felipecluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: nodo-02
#
# Add custom attributes to the node:
node.master: false
node.data: true
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["192.168.2.203", "192.168.2.204", "192.168.2.205"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
[root@localhost ~]# 



DATA NODE
[root@localhost ~]# cat /etc/elasticsearch/elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: felipecluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: nodo-3
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
node.master: false
node.data: true
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["192.168.2.203", "192.168.2.204", "192.168.2.205"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true


MASTER
[root@localhost ~]# cat /etc/logstash/conf.d/01-wazuh.conf
# Wazuh - Logstash configuration file
## Remote Wazuh Manager - Filebeat input
input {
    beats {
        port => 5000
        codec => "json_lines"
#       ssl => true
#       ssl_certificate => "/etc/logstash/logstash.crt"
#       ssl_key => "/etc/logstash/logstash.key"
    }
}
filter {
    if [data][srcip] {
        mutate {
            add_field => [ "@src_ip", "%{[data][srcip]}" ]
        }
    }
    if [data][aws][sourceIPAddress] {
        mutate {
            add_field => [ "@src_ip", "%{[data][aws][sourceIPAddress]}" ]
        }
    }
}
filter {
    geoip {
        source => "@src_ip"
        target => "GeoLocation"
        fields => ["city_name", "country_name", "region_name", "location"]
    }
    date {
        match => ["timestamp", "ISO8601"]
        target => "@timestamp"
    }
    mutate {
        remove_field => [ "timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type","@src_ip"]
    }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
        document_type => "wazuh"
    }
}
[root@localhost ~]# 


DATA NODE
[root@localhost ~]# cat /etc/logstash/conf.d/01-wazuh.conf
# Wazuh - Logstash configuration file
## Remote Wazuh Manager - Filebeat input
input {
    beats {
        port => 5000
        codec => "json_lines"
#       ssl => true
#       ssl_certificate => "/etc/logstash/logstash.crt"
#       ssl_key => "/etc/logstash/logstash.key"
    }
}
filter {
    if [data][srcip] {
        mutate {
            add_field => [ "@src_ip", "%{[data][srcip]}" ]
        }
    }
    if [data][aws][sourceIPAddress] {
        mutate {
            add_field => [ "@src_ip", "%{[data][aws][sourceIPAddress]}" ]
        }
    }
}
filter {
    geoip {
        source => "@src_ip"
        target => "GeoLocation"
        fields => ["city_name", "country_name", "region_name", "location"]
    }
    date {
        match => ["timestamp", "ISO8601"]
        target => "@timestamp"
    }
    mutate {
        remove_field => [ "timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"]
    }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
        document_type => "wazuh"
    }
}
[root@localhost ~]# 



DATA NODE
[root@localhost ~]# cat /etc/logstash/conf.d/01-wazuh.conf
# Wazuh - Logstash configuration file
## Remote Wazuh Manager - Filebeat input
input {
    beats {
        port => 5000
        codec => "json_lines"
#       ssl => true
#       ssl_certificate => "/etc/logstash/logstash.crt"
#       ssl_key => "/etc/logstash/logstash.key"
    }
}
filter {
    if [data][srcip] {
        mutate {
            add_field => [ "@src_ip", "%{[data][srcip]}" ]
        }
    }
    if [data][aws][sourceIPAddress] {
        mutate {
            add_field => [ "@src_ip", "%{[data][aws][sourceIPAddress]}" ]
        }
    }
}
filter {
    geoip {
        source => "@src_ip"
        target => "GeoLocation"
        fields => ["city_name", "country_name", "region_name", "location"]
    }
    date {
        match => ["timestamp", "ISO8601"]
        target => "@timestamp"
    }
    mutate {
        remove_field => [ "timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"]
    }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
        document_type => "wazuh"
    }
}

MASTER
[root@localhost ~]# systemctl status logstash -l
logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2018-07-01 23:56:17 CEST; 1h 18min ago
 Main PID: 709 (java)
   CGroup: /system.slice/logstash.service
           └─709 /bin/java -Xms256m -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -cp /usr/share/logstash/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/guava-19.0.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-annotations-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-core-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-databind-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/janino-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash --path.settings /etc/logstash

Jul 01 23:56:17 localhost.localdomain systemd[1]: Started logstash.
Jul 01 23:56:17 localhost.localdomain systemd[1]: Starting logstash...
Jul 01 23:56:45 localhost.localdomain logstash[709]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
[root@localhost ~]# 


DATA
[root@localhost ~]# systemctl status logstash -l
logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-07-02 00:54:32 CEST; 20min ago
 Main PID: 728 (java)
   CGroup: /system.slice/logstash.service
           └─728 /bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/share/logstash/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/guava-19.0.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/janino-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash --path.settings /etc/logstash

Jul 02 00:57:48 localhost.localdomain logstash[728]: [2018-07-02T00:57:48,548][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
Jul 02 00:57:48 localhost.localdomain logstash[728]: [2018-07-02T00:57:48,557][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '503' contacting Elasticsearch at URL 'http://localhost:9200/'"}
Jul 02 00:57:53 localhost.localdomain logstash[728]: [2018-07-02T00:57:53,565][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
Jul 02 00:57:53 localhost.localdomain logstash[728]: [2018-07-02T00:57:53,575][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '503' contacting Elasticsearch at URL 'http://localhost:9200/'"}
Jul 02 00:57:58 localhost.localdomain logstash[728]: [2018-07-02T00:57:58,580][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
Jul 02 00:57:58 localhost.localdomain logstash[728]: [2018-07-02T00:57:58,601][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '503' contacting Elasticsearch at URL 'http://localhost:9200/'"}
Jul 02 00:58:03 localhost.localdomain logstash[728]: [2018-07-02T00:58:03,607][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
Jul 02 00:58:03 localhost.localdomain logstash[728]: [2018-07-02T00:58:03,901][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
Jul 02 00:58:05 localhost.localdomain logstash[728]: [2018-07-02T00:58:05,568][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
Jul 02 00:58:05 localhost.localdomain logstash[728]: [2018-07-02T00:58:05,612][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[root@localhost ~]# 

DATA
[root@localhost ~]# systemctl status logstash -l
logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-07-02 00:54:46 CEST; 20min ago
 Main PID: 710 (java)
   CGroup: /system.slice/logstash.service
           └─710 /bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/share/logstash/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/guava-19.0.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/janino-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash --path.settings /etc/logstash

Jul 02 00:57:24 localhost.localdomain logstash[710]: [2018-07-02T00:57:24,135][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
Jul 02 00:57:24 localhost.localdomain logstash[710]: [2018-07-02T00:57:24,536][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
Jul 02 00:57:28 localhost.localdomain logstash[710]: [2018-07-02T00:57:28,295][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
Jul 02 00:57:28 localhost.localdomain logstash[710]: [2018-07-02T00:57:28,336][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '503' contacting Elasticsearch at URL 'http://localhost:9200/'"}
Jul 02 00:57:34 localhost.localdomain logstash[710]: [2018-07-02T00:57:34,156][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
Jul 02 00:57:34 localhost.localdomain logstash[710]: [2018-07-02T00:57:34,323][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '503' contacting Elasticsearch at URL 'http://localhost:9200/'"}
Jul 02 00:57:39 localhost.localdomain logstash[710]: [2018-07-02T00:57:39,331][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
Jul 02 00:57:39 localhost.localdomain logstash[710]: [2018-07-02T00:57:39,477][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
Jul 02 00:57:48 localhost.localdomain logstash[710]: [2018-07-02T00:57:48,909][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
Jul 02 00:57:48 localhost.localdomain logstash[710]: [2018-07-02T00:57:48,980][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[root@localhost ~]# 

MASTER
[root@localhost ~]# systemctl status elasticsearch -l
elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2018-07-01 23:56:19 CEST; 1h 21min ago
     Docs: http://www.elastic.co
 Main PID: 1125 (java)
   CGroup: /system.slice/elasticsearch.service
           ├─1125 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.8bJsDAY7 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
           └─1670 /usr/share/elasticsearch/modules/x-pack/x-pack-ml/platform/linux-x86_64/bin/controller

Jul 01 23:56:19 localhost.localdomain systemd[1]: Started Elasticsearch.
Jul 01 23:56:19 localhost.localdomain systemd[1]: Starting Elasticsearch...
[root@localhost ~]# 

DATA
[root@localhost ~]# systemctl status elasticsearch -l
elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-07-02 00:54:35 CEST; 23min ago
     Docs: http://www.elastic.co
 Main PID: 1141 (java)
   CGroup: /system.slice/elasticsearch.service
           ├─1141 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.C4VKc6vs -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
           └─1498 /usr/share/elasticsearch/modules/x-pack/x-pack-ml/platform/linux-x86_64/bin/controller

Jul 02 00:54:35 localhost.localdomain systemd[1]: Started Elasticsearch.
Jul 02 00:54:35 localhost.localdomain systemd[1]: Starting Elasticsearch...
[root@localhost ~]# 


DATA
[root@localhost ~]# systemctl status elasticsearch -l
elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-07-02 00:54:49 CEST; 24min ago
     Docs: http://www.elastic.co
 Main PID: 1126 (java)
   CGroup: /system.slice/elasticsearch.service
           ├─1126 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.lERcOtIE -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
           └─1466 /usr/share/elasticsearch/modules/x-pack/x-pack-ml/platform/linux-x86_64/bin/controller

Jul 02 00:54:49 localhost.localdomain systemd[1]: Started Elasticsearch.
Jul 02 00:54:49 localhost.localdomain systemd[1]: Starting Elasticsearch...
[root@localhost ~]# 



jesus.g...@wazuh.com

unread,
Jul 2, 2018, 3:07:50 AM7/2/18
to Wazuh mailing list
Hi Felipe,

You need to set the time range properly. If you have no alerts in the last 15 minutes (for example), you will see that screen; that is expected.
Take a look at the top right corner of the Discover tab, where you'll see "Last 15 minutes", and try widening it in order to see older alerts;
that is probably the key here.
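
If you want to double-check that today's alerts are actually reaching Elasticsearch, a count query like the one below should return a non-zero value (just a sketch; replace elastic_ip and the date with your own):

# curl elastic_ip:9200/wazuh-alerts-3.x-2018.07.02/_count?pretty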

Regards,
Jesús
...

Felipe Andres Concha Sepúlveda

unread,
Jul 2, 2018, 3:27:20 AM7/2/18
to jesus.g...@wazuh.com, Wazuh mailing list
Thank you, Jesús. I thought it could be related to the indices that are created per day;
some of them show a value of 0.
As the problem is random, I cannot be sure what it is. I have changed the date range, but the problems continued as before.
I attach the screenshot.

Regards




jesus.g...@wazuh.com

unread,
Jul 2, 2018, 11:33:34 AM7/2/18
to Wazuh mailing list
Hello Felipe,

The wazuh-monitoring indices are created by the Wazuh app. They store the agents' statuses and are used by the agent status visualization in the Overview tab.
These indices contain no alerts.

The wazuh-alerts indices are created by Logstash; they store the alerts from your Wazuh manager(s).

Regardless of which Elasticsearch node you request the indices from, all of them will give you the same list because they are running in cluster mode. This means you are seeing the same indices
in every /_cat/indices call, since you are not filtering by node.
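
To see at a glance which Wazuh indices exist and how many documents each one holds, you can query any node, for example (a sketch; replace elastic_ip as usual):

# curl "elastic_ip:9200/_cat/indices/wazuh-*?v"

The v parameter adds a header row that includes the docs.count column.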

My suggestion is to delete the useless indices that were created by mistake or some other issue. That means running the following commands:

// Delete the wazuh-alerts-3.x, myindex and wazuh-monitoring-3.x-2018.06.26 indices

curl -XDELETE elastic_ip:9200/wazuh-alerts-3.x
curl -XDELETE elastic_ip:9200/myindex
curl -XDELETE elastic_ip:9200/wazuh-monitoring-3.x-2018.06.26

Once done, you should be fine. One more thing you could do is restart Kibana once the wrong indices are deleted:

systemctl restart kibana

Let us know Felipe.

Regards,
Jesús




Felipe Andres Concha Sepúlveda

unread,
Jul 3, 2018, 5:36:33 AM7/3/18
to jesus.g...@wazuh.com, Wazuh mailing list
Great, Jesús!!!
This helps me a lot.

Regards


jesus.g...@wazuh.com

unread,
Jul 3, 2018, 5:40:49 AM7/3/18
to Wazuh mailing list
You are welcome, Felipe!

Let us know if you run into any new problems.

Regards,
Jesús

