wazuh-alerts-3.x-* No results match your search criteria


Jacky Qin

Jan 13, 2020, 3:24:13 AM
to Wazuh mailing list
Hi,

My Wazuh version is 3.9.3, and my EFK stack was running normally until I changed the Elasticsearch network.host setting to 0.0.0.0. The Elasticsearch service is still running after the change, so I don't know where the problem is.

root@hids-001a:/etc/elasticsearch# curl localhost:9200/
{
  "name" : "hids-001a",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "o8gUdRTcTxii10fDh88lOw",
  "version" : {
    "number" : "7.2.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "508c38a",
    "build_date" : "2019-06-20T15:54:18.811730Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}


root@hids-001a:/etc/elasticsearch# cat elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["0.0.0.0"]  # backup; no logs
discovery.seed_hosts: ["0.0.0.0"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
cluster.initial_master_nodes: ["node-1", "node-2"]
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true


Best regards,
Jacky Qin

Mayte Ariza

Jan 13, 2020, 5:47:14 AM
to Wazuh mailing list
Hi Jacky,
 
Setting network.host to 0.0.0.0 binds Elasticsearch to all network interfaces, so the problem must lie elsewhere. Could you please share the Elasticsearch 7.2.0 logs? The log files can be found in /var/log/elasticsearch/.
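In case it is related: with Elasticsearch 7.x, binding to a non-loopback address such as 0.0.0.0 puts the node into production mode, which enforces the bootstrap checks (memory lock, discovery configuration, and so on), and a failed check aborts startup. For a single-node setup, a minimal sketch of the relevant elasticsearch.yml settings would be:

```yaml
# Bind to all interfaces (this triggers the production-mode bootstrap checks).
network.host: 0.0.0.0
http.port: 9200

# Single-node discovery: skips the master-election bootstrap check, so no
# discovery.seed_hosts or cluster.initial_master_nodes settings are needed.
discovery.type: single-node
```

Note that discovery.type: single-node cannot be combined with cluster.initial_master_nodes; if you keep a multi-node discovery configuration instead, cluster.initial_master_nodes must list the actual node names.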
 
Best regards,
Mayte Ariza.

Jacky Qin

Jan 13, 2020, 8:06:37 AM
to Wazuh mailing list
Hi Mayte,

The log file is in the attachment.

Best regards,
Jacky Qin

On Monday, January 13, 2020 at 6:47:14 PM UTC+8, Mayte Ariza wrote:
elasticsearch.log

Mayte Ariza

Jan 13, 2020, 11:30:02 AM
to Wazuh mailing list

Hi Jacky,
 
The warning you got for the wazuh-alerts-3.x-* index pattern (No results match your search criteria) means that your query does not match anything in the current time range; it does not by itself mean something is wrong.
If the wazuh-alerts-3.x-* index pattern and templates have been configured properly in Elasticsearch, and the managers are sending alerts to Elasticsearch, then nothing showing up on the Discover tab with the proper time range set does indicate a problem.
 
Run the following request on your Elasticsearch server to list your Elasticsearch indices: curl http://localhost:9200/_cat/indices?pretty
 
This will show us if the alerts are being indexed on Elasticsearch.
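If the listing is long, you can sum the document counts of just the alert indices with a quick pipeline. This is a sketch, assuming the default _cat/indices column layout where the seventh column is docs.count (the sample lines below are illustrative only):

```shell
# Sample _cat/indices output saved to a file (illustrative values only);
# against a live cluster, pipe `curl -s localhost:9200/_cat/indices` instead.
cat <<'EOF' > /tmp/indices.txt
green open wazuh-alerts-3.x-2019.09.14     6tHOZXviTJuIzvm3Zm6d4Q 3 0  1780 0  2.4mb  2.4mb
green open wazuh-monitoring-3.x-2019.10.10 bE0NfUxTRsq7SoUh-wdYRQ 2 0     0 0   566b   566b
green open wazuh-alerts-3.x-2019.10.24     GHaX7okwTASn8xZpEdy4Mg 3 0 49118 0 20.1mb 20.1mb
EOF

# Keep only the alert indices and sum their docs.count column ($7).
grep 'wazuh-alerts-3.x-' /tmp/indices.txt | awk '{sum += $7} END {print sum}'
# prints 50898
```

A non-zero total confirms that alerts have been indexed; the interesting question is then whether indices for the *current* dates exist and are still growing.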
 
The attached log shows that all shards failed for .kibana and .kibana_task_manager indices. However, the last line indicates that the cluster health status is green because the shards started. 

Run the following request on your Elasticsearch server to get details about the current cluster health: curl http://localhost:9200/_cluster/health?pretty
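For reference, a healthy single-node response looks roughly like the following (field values are illustrative; a yellow or red status, or unassigned_shards above 0, would point at shard allocation problems):

```json
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "active_primary_shards" : 10,
  "active_shards" : 10,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "active_shards_percent_as_number" : 100.0
}
```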
 
There may be a problem in the data flow between the Wazuh manager and Elasticsearch. Are you using Logstash or Filebeat to forward the alerts to Elasticsearch?
 
Best regards,
Mayte Ariza.

Jacky Qin

Jan 13, 2020, 8:34:46 PM
to Wazuh mailing list
Hi Mayte,

I use Filebeat to forward the alerts to Elasticsearch.

green open wazuh-monitoring-3.x-2019.08.10  K0S8OsvgSVq8zap2hhIXJQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.11.06  tzGDNu9mS92RHOx1QCPXLg 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.09.14      6tHOZXviTJuIzvm3Zm6d4Q 3 0    1780 0   2.4mb   2.4mb
green open wazuh-monitoring-3.x-2019.10.10  bE0NfUxTRsq7SoUh-wdYRQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.24      GHaX7okwTASn8xZpEdy4Mg 3 0   49118 0  20.1mb  20.1mb
green open wazuh-alerts-3.x-2019.12.12      j7cMMusSRGKRqVx7U1Howw 3 0    6295 0   6.7mb   6.7mb
green open wazuh-monitoring-3.x-2019.10.18  1-DahR3CQoq067F_AHGVNg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.16  clmJOfTFRAih7QjtyO98tQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.12.20      H9HsRsvbT52CVLjZp8WKPA 3 0    3716 0   4.8mb   4.8mb
green open wazuh-alerts-3.x-2019.11.16      o4XcmynZRXWojkZJz250MQ 3 0    4685 0   4.7mb   4.7mb
green open wazuh-alerts-3.x-2019.05.23      ZaV4QGltSke6GOiBP--iTA 5 0   43407 0  16.2mb  16.2mb
green open wazuh-alerts-3.x-2019.08.04      R9u56NtCTj2FWb7GN1hm4g 3 0    2308 0     3mb     3mb
green open wazuh-monitoring-3.x-2019.09.26  8lNOkGLsTrGtQRupZcJowg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.17  W8GTxjaMRm-NwrtLrNSTHg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.09.25  oR3DpnnjQ0qAqV7yS7G8xQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.07.02  x-5JRxP1RHq27_TGbplJVQ 5 0       0 0   1.2kb   1.2kb
green open wazuh-alerts-3.x-2019.08.01      6Nv8RgY7T2CawMCd5KBhYg 3 0    2490 0   3.4mb   3.4mb
green open wazuh-monitoring-3.x-2019.10.17  DCqq9U2qTzCzbDJmaBw-zA 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.02      pJsJmYvATmu9cwYhq5hUbQ 3 0    2546 0   3.3mb   3.3mb
green open wazuh-monitoring-3.x-2019.07.28  6WZHkwnBQJyfv9DoLC1ypQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.02  4_-0oeo5RHugrIydwlGUOw 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.09.09  CC6a5edbS1mC6Wt_luKlcQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.12.06  8wFl7GQiQH2p6WXuYDKNiA 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.09.27  uAcmKCMPSLG2Faq_2zp49g 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.11.10  NEowhhS9SYimP1zIxXh2ew 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.05.07      jD29I94FT_WPknuFVUwvPA 5 0   22546 0   9.5mb   9.5mb
green open wazuh-alerts-3.x-2019.12.15      U7RisMjfSqaDlASlFvpdaA 3 0    4801 0   4.9mb   4.9mb
green open wazuh-alerts-3.x-2019.12.14      Ko2QSmQSTAqfwzEH5v-yyA 3 0    5344 0   5.8mb   5.8mb
green open wazuh-alerts-3.x-2019.11.17      wF70qTgsRj2nsjzjTlblRA 3 0    3801 0   4.1mb   4.1mb
green open wazuh-alerts-3.x-2019.12.03      BhZace7nQLyHAyZ-gUzw9w 3 0    4930 0   5.1mb   5.1mb
green open wazuh-monitoring-3.x-2019.12.13  0Effp8idSH2s8XCLGg-Q0Q 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.13      RqefY-m7QQetGFsw3E6nHg 3 0   48893 0  19.1mb  19.1mb
green open wazuh-monitoring-3.x-2019.05.08  gxc0m48TTY-WXoo5_URHfQ 5 0    1400 0 521.2kb 521.2kb
green open wazuh-alerts-3.x-2019.09.02      G8cr_977Sy-gR5SkXHHrZQ 3 0    1426 0   2.1mb   2.1mb
green open wazuh-alerts-3.x-2019.11.23      Pz3KvmqvT0iW-QUEEkTDTA 3 0    3722 0   3.8mb   3.8mb
green open wazuh-alerts-3.x-2019.11.02      -JeMNqMvQFKZmNhkvfzvsw 3 0    2700 0   3.5mb   3.5mb
green open wazuh-alerts-3.x-2019.12.05      xdWUM-EQQHmFT3mT6yN_yg 3 0    4784 0   5.1mb   5.1mb
green open wazuh-monitoring-3.x-2019.10.26  rORbVHDSRo-IahvU9ZHM9Q 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.05.01      BkfaD5xPR2eyFqCAy6GIWQ 5 0   81951 0  33.1mb  33.1mb
green open wazuh-alerts-3.x-2019.11.29      RzB5um-LQ-enuBLevhSq_Q 3 0    4223 0   4.3mb   4.3mb
green open wazuh-monitoring-3.x-2019.07.12  zxACXeR7Su2sha0WPtBpRw 5 0      50 0   168kb   168kb
green open wazuh-alerts-3.x-2019.11.27      bCF6mylmS_-iL92CgW-mzw 3 0    3755 0   3.8mb   3.8mb
green open wazuh-monitoring-3.x-2019.10.27  7ck4774rSAGyLuWPMTJh8A 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.08.29      DDD-lGWBRFWYWZaKOnj6HQ 3 0    1269 0   1.9mb   1.9mb
green open wazuh-alerts-3.x-2019.10.05      _t8SQAoHQXCNGBHhYpDD0w 3 0   70228 0  25.5mb  25.5mb
green open wazuh-monitoring-3.x-2019.05.21  -h9gRXIiRCuj1xnk4lJo_A 5 0    3550 0     1mb     1mb
green open wazuh-monitoring-3.x-2019.11.21  koCvKQfjRKyZ-bdNOEAkaQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.11.14  DTslr6C2ReKpnUsJUNDi-Q 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.08.16      S0Z_du4FTYWxv03MSXwgUw 3 0    1619 0   2.1mb   2.1mb
green open wazuh-alerts-3.x-2019.12.09      tGhu5VPEQlSrvFYBGFCUjA 3 0    7564 0  11.3mb  11.3mb
green open wazuh-monitoring-3.x-2019.11.23  GccJ8RXJTaSCQ1DIEJL7Eg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.12.14  Y9u_ZBxESOCat6QOgVUVmw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.09.09      Rq2RVZ4hR9uHnpavsxLusw 3 0   35430 0  12.9mb  12.9mb
green open .wazuh-version                   ecrQJm8lTK2G7ayAvaSiUQ 1 0       1 0   5.2kb   5.2kb
green open wazuh-alerts-3.x-2019.12.10      SIsntcp5T5KH1ItoKFqiYQ 3 0    4702 0   4.9mb   4.9mb
green open wazuh-monitoring-3.x-2019.12.02  8nSKs-wNTUGnXGb4nG29-w 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.08.14      yIULcWhfR9WNIaoU6pDVsg 3 0    1470 0   2.2mb   2.2mb
green open wazuh-monitoring-3.x-2019.10.31  bgn6B2T3QO2BlSqiHwCRjA 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.10.24  oRN8gP4wSoGvbopLpMsb3g 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.10      Bm9GlULGRW6c9SnvToSeCQ 3 0   62694 0  23.7mb  23.7mb
green open wazuh-monitoring-3.x-2019.08.09  nJUNkEueRi-n4_-wlMk4cg 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.09.04      8XdgQxL2TuKK7qsUrE8f8A 3 0    1525 0   2.2mb   2.2mb
green open wazuh-monitoring-3.x-2019.05.04  uQ2mCz2LQXiO5w5WiT3gqg 5 0    3600 0   1.2mb   1.2mb
green open wazuh-monitoring-3.x-2019.11.17  d6i98WsGRvW6UBF_hrYjig 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.10.16  uD7CQyWPS5SIWOu9v93vRw 2 0       0 0    566b    566b
green open .wazuh                           F1mjyQNLR2WOXUSe6PATkQ 1 0       1 0  11.3kb  11.3kb
green open wazuh-alerts-3.x-2019.10.04      AZs1D5pmTmKcSUQ1dfz3XQ 3 0  211876 0  65.9mb  65.9mb
green open wazuh-alerts-3.x-2019.09.13      qV3i-0V5SGGka1lx4dlmIw 3 0    1743 0   2.4mb   2.4mb
green open wazuh-alerts-3.x-2019.12.24      VEJZajoWTM60-li7AeLbNw 3 0    3674 0   4.6mb   4.6mb
green open wazuh-monitoring-3.x-2019.11.01  NBGVATObSgmWu0Fjd4wZAw 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.09.29  ATydpxpMQ4uQmUtQMMYMiw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.11.28      JhrnRg4qSKKynT4w0mANSQ 3 0    3999 0     4mb     4mb
green open wazuh-alerts-3.x-2019.11.01      xL3QjvTJRAuGZStcFBzqgA 3 0    2604 0   3.3mb   3.3mb
green open wazuh-alerts-3.x-2019.12.11      HB2lqNo6T-SNxiMwausaUA 3 0    4773 0     5mb     5mb
green open wazuh-monitoring-3.x-2019.07.29  65X9_w6WQfypj6TsBgwdxQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.12.05  0LhBbo-BR-a9OpKle1hEFg 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.08.03      QweRcS81T52F2g3xRtmd9w 3 0    2669 0   3.5mb   3.5mb
green open wazuh-alerts-3.x-2019.11.08      lhsyL_XVQ26QyR_SBbdzMw 3 0    4043 0   4.2mb   4.2mb
green open wazuh-monitoring-3.x-2019.07.30  VI31fkNQQMao-TpKmgUlrw 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.11.09  badOig2bTZubRq1ERtUMAg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.05.23  XrmGF08NRESl9KxsnQEJXA 5 0    2275 0 732.8kb 732.8kb
green open wazuh-monitoring-3.x-2019.11.27  cC5mpJwTRUGgdlPsPFEwSQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.05.19  OYFQw4ppRTiY84cQxmDwUA 5 0    3575 0   1.2mb   1.2mb
green open wazuh-alerts-3.x-2019.08.31      yW3MKYXHSB2FDPAFeC8EUg 3 0    1378 0   2.1mb   2.1mb
green open wazuh-monitoring-3.x-2019.09.03  j2ymZAlhStyA-NcB2Q0Nyw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.09.17      IwxoO-XCQuWr1CCtrEKJxw 3 0    4514 0   4.1mb   4.1mb
green open wazuh-monitoring-3.x-2019.12.21  VKKbD7POTjeONU1xL2oe1g 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.28  SwLA2SjGSAmNZ6HUN6iQMg 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.05.13      JsJaoK1nT6CYCYq_y8qerQ 5 0      11 0  34.3kb  34.3kb
green open wazuh-alerts-3.x-2019.10.27      7ezjeuS3R2iatMuXho3MoQ 3 0 1337485 0   1.6gb   1.6gb
green open wazuh-alerts-3.x-2019.10.16      n1GARwpgTL6sK0frc0-NEA 3 0   49325 0    20mb    20mb
green open wazuh-alerts-3.x-2019.11.07      kpMLMLA4QMuXvlOkqHmxeQ 3 0    3894 0   4.2mb   4.2mb
green open wazuh-monitoring-3.x-2019.09.12  O5akVufwRFyf4zgumeDjXg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.14  ZbdvXnptQgS8Y2jf6knCaA 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.08.20      OCI1APFlQjS0nnOUz46kTw 3 0    1494 0     2mb     2mb
green open wazuh-monitoring-3.x-2019.11.02  gmn-XQxPTbqX10tLdWZk5w 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.28      wo7FKCLKRWK2hSr22CZ7Kw 3 0  719825 0 911.5mb 911.5mb
green open wazuh-alerts-3.x-2019.12.06      9mSWLiNLTuyZRSQmfxbmhQ 3 0    6787 0  10.1mb  10.1mb
green open wazuh-monitoring-3.x-2019.10.11  e5z3b7ToTNSNsCjX_Nze7A 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.10.13  sIrflY9-SyaUgmpDqtwv2Q 2 0       0 0    566b    566b
green open .kibana_2                        rOB5vFxyTsu1eR6Jucfugg 1 0      10 2   143kb   143kb
green open wazuh-monitoring-3.x-2019.12.20  Hhp9GhcPRvqI12Gb5NzxmQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.07.12      iRRxH-edSW6leITOT7TlBQ 5 0     141 0 260.4kb 260.4kb
green open wazuh-monitoring-3.x-2019.05.14  I9bTiTU2SvWLDb7x_2NnjA 5 0       0 0   1.2kb   1.2kb
green open wazuh-monitoring-3.x-2019.08.23  HFQ7CpGAS52pp566qlG3bQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.18  3hSkyztyQASlHYpVOZyVTQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.11  _6uwVH8QROuTQtr3V6Ly1w 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.09.25      wyUl600CREW8t2LpU0G2Ew 3 0    1760 0   2.5mb   2.5mb
green open wazuh-monitoring-3.x-2019.08.12  zgKvI_3UTLWP6gnzVYf5uw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.09.16      TANowKfVQiOQhQaUzaOGVw 3 0  102159 0    47mb    47mb
green open wazuh-alerts-3.x-2019.11.14      HAc6xwXlQbCvPbH3IegcHg 3 0    3938 0   4.2mb   4.2mb
green open wazuh-monitoring-3.x-2019.12.12  XMCdsBmKSgyDFc1T4NTiQw 2 0       0 0    566b    566b
green open .kibana_task_manager             RPgxeQxDSgO_dbAdNGR-oA 1 0       2 0  29.9kb  29.9kb
green open wazuh-alerts-3.x-2019.09.12      gTLom1HxRMWLghDcjjsnOw 3 0    1872 0   2.6mb   2.6mb
green open wazuh-alerts-3.x-2019.10.03      beBJ0uThTNaAO05hJr5dew 3 0    5831 0   4.5mb   4.5mb
green open wazuh-monitoring-3.x-2019.12.23  CbSGawv5RC-rl17raPq1MQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.31  xdwprQQKQfqkbgXPDj9_3g 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.05.03  660tH8rpTV-Z--aTWbnVfA 5 0    3575 0   1.2mb   1.2mb
green open wazuh-alerts-3.x-2019.07.26      OaS-ZYLHQ9aUhGarSZz9NQ 3 0    2990 0   3.6mb   3.6mb
green open wazuh-monitoring-3.x-2019.09.17  w3SwV2BsTP2O0dGEmHFifg 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.08.11      GeODQc5SSROSJz2llwoX5A 3 0    2981 0   3.9mb   3.9mb
green open wazuh-alerts-3.x-2019.11.24      DojDZ5wbSlmsxe7MrRywqw 3 0    3572 0   3.6mb   3.6mb
green open wazuh-monitoring-3.x-2019.10.25  seFiueR_TsykawQnEWlC0Q 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.08  SOrNB1EySIS_UqCJ0bva1Q 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.12.18      NX14by65SKaoVD5EC7X88Q 3 0    4851 0   5.3mb   5.3mb
green open wazuh-alerts-3.x-2019.11.03      xZ4swY0VRqCwglfJ9PPpQA 3 0    2539 0   3.2mb   3.2mb
green open wazuh-alerts-3.x-2019.10.23      Rr4tGdTlQgy3AUPZQSoA3Q 3 0   49246 0  20.1mb  20.1mb
green open wazuh-alerts-3.x-2019.09.03      V4re3PR2RfuU2E_asr8m2Q 3 0    1608 0   2.3mb   2.3mb
green open wazuh-alerts-3.x-2019.11.04      Xey-_r9-TIKZrxTJBZowwg 3 0    3381 0   4.2mb   4.2mb
green open wazuh-monitoring-3.x-2019.11.22  SiYhPQSVTTqil14werAyww 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.11.04  gtSfVGXDTo-HgshS6vgmxQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.07.31  kfz--ibcTP29aStDhOQXYw 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.10.07  gXDAI0ViQzyIG-Wvampbfw 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.12.19  7F7DngqFRFiK3WgOkY8TsA 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.09.19      9_PU8llbSSKK7UNAmJuztA 3 0    1892 0   2.6mb   2.6mb
green open wazuh-monitoring-3.x-2019.08.24  0qWR4RqpR7eZa8vVwfIaXQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.12.16      7QLy8PILSC-hhXEYD1psvw 3 0    4902 0   5.5mb   5.5mb
green open wazuh-alerts-3.x-2019.09.22      kPi4RfwkRrmAIuszBu4ygA 3 0    1697 0   2.3mb   2.3mb
green open wazuh-monitoring-3.x-2019.09.16  7lghNdySQgGSXQSlfLhe7Q 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.05.16  fuNl2Hd8SGisa4K7scdl0w 5 0    3600 0   1.2mb   1.2mb
green open wazuh-alerts-3.x-2019.11.13      pyIJi8O3RLuf64nY_R2pVw 3 0    4684 0   4.8mb   4.8mb
green open wazuh-alerts-3.x-2019.10.21      LNuJxyOwRuKrK6ciJ97EKw 3 0   49866 0  20.6mb  20.6mb
green open wazuh-monitoring-3.x-2019.08.29  Wm5m5LX2RrKv37JBERas3g 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.30  pjCBUNbQRyu-Fy87iKZkqg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.10.19  FkrcMEK7QZ-IpXFxCzPUEw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.25      nNKnPh4vQ9GGIyeuy9u5EQ 3 0   49288 0  20.1mb  20.1mb
green open wazuh-alerts-3.x-2019.09.27      Z8e2a7riSOSc4dJVlLyzIQ 3 0    2899 0   3.6mb   3.6mb
green open wazuh-alerts-3.x-2019.08.15      8ppP_X5LRaSbN8JRmShggA 3 0    1462 0     2mb     2mb
green open wazuh-alerts-3.x-2019.08.08      oPAJYojkT0y_OhZnu-mIqQ 3 0    2993 0   3.9mb   3.9mb
green open wazuh-monitoring-3.x-2019.08.15  pLXXZDHdR9yqQ-jgR4KpgQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.07.15      uPufrM2DT8mgxtWDU_NV2A 3 0    2486 0     3mb     3mb
green open wazuh-alerts-3.x-2019.12.22      _zoXkiMASvi3EE3KuyVfDg 3 0    3463 0   4.2mb   4.2mb
green open wazuh-alerts-3.x-2019.11.12      s-ehH79oRVaANO69f8eEGA 3 0    3892 0   4.1mb   4.1mb
green open wazuh-monitoring-3.x-2019.12.09  qvOZY5sdQO-n4g3F76LM6A 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.08.21      77FsR1wxR7imJSsTWni84Q 3 0    1587 0   2.1mb   2.1mb
green open wazuh-alerts-3.x-2019.08.18      ApEJCK7aSY26T_4W7mDzXA 3 0    1601 0   2.1mb   2.1mb
green open wazuh-monitoring-3.x-2019.08.05  4PoxpGOgQXmxbU2HsRx7Ag 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.13  vU6WNc58Q_KBHMV9-mV_xA 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.10.05  7Sw0meHZTJaTwC7-8jWHTQ 2 0       0 0    566b    566b
green open .kibana_1                        JKiRAsjLTnG7Cg8Rd_Gtig 1 0       6 1  99.2kb  99.2kb
green open wazuh-alerts-3.x-2019.07.25      HEur6bvNQpG5OWidNyQpsA 3 0    2228 0   2.7mb   2.7mb
green open wazuh-monitoring-3.x-2019.12.10  CwtuN4SSR3qzl0Eb37tiRA 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.12.19      3P4uPTS0QvuxMOvGzoDV5w 3 0    4284 0   5.1mb   5.1mb
green open wazuh-alerts-3.x-2019.09.28      F4YC-c95RGu1yR4U43Wk8A 3 0    2709 0   3.3mb   3.3mb
green open wazuh-alerts-3.x-2019.05.22      Lt-pAHNqSmO1PZeRJepz7Q 5 0   72284 0  26.6mb  26.6mb
green open wazuh-alerts-3.x-2019.08.22      snGe5xDwSc2K1U9KF0TAtw 3 0    1608 0   2.4mb   2.4mb
green open wazuh-alerts-3.x-2019.11.22      beEs-bH2TTyMIDXGGVk-wQ 3 0    3908 0   4.2mb   4.2mb
green open wazuh-monitoring-3.x-2019.12.18  An24_nQkSuSJbmSYMey7PQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.08.02      am4H8HqNRg6VMeWmVfTqLQ 3 0    2727 0   3.5mb   3.5mb
green open .tasks                           SfyTeq57SEaJErLhmkRz4w 1 0       1 0   6.2kb   6.2kb
green open wazuh-alerts-3.x-2019.10.07      u5_dIyytSs6YodqPatiT9Q 3 0   70547 0    26mb    26mb
green open wazuh-monitoring-3.x-2019.10.09  i-lDSPnnTJ6ztHR31YSExA 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.20  zC8MptG2TBOE5TrXV4ZL_g 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.12.17  Ez15IfmRR56ZAo85sKyVsg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.21  E64OXACTQbi9sM5LaXYIvg 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.09.29      ZRj9Tn5RTsiu6Vyvs2FeUQ 3 0    2734 0   3.3mb   3.3mb
green open wazuh-monitoring-3.x-2019.08.19  i_fPWUQRRvSQXLtFEqbiNA 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.11.30      mmVhN7YUTuuZ3zQpdEf7fg 3 0    4033 0     4mb     4mb
green open wazuh-alerts-3.x-2019.10.08      qtLB8DX_TrigkKbBTsDChQ 3 0  622730 1   3.7gb   3.7gb
green open wazuh-monitoring-3.x-2019.10.22  FB4YqRKQQ86DL-FcXa-5Vw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.08.26      EC1_jF9SQ-6AUm1UfAdzEA 3 0  526009 0 150.6mb 150.6mb
green open wazuh-monitoring-3.x-2019.11.28  CJZEpojFTq-pJwzpBGoBsA 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.07.30      LElrRDBDTXqfIFhlUZvvxw 3 0    2729 0   3.6mb   3.6mb
green open wazuh-monitoring-3.x-2019.11.26  qhdZVCTGTMOpq1tLEjDcMg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.09.15  y3x3zq16Ty29L9NmyUdcPA 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.05.20  h8oujM78TQC0cgh5VOIWmw 5 0    3575 0   1.2mb   1.2mb
green open wazuh-alerts-3.x-2019.11.10      PZm33-zZSla3yB1pRrp4Pw 3 0    3535 0   3.6mb   3.6mb
green open wazuh-monitoring-3.x-2019.09.02  q3dEs3GiReaMXszY041BuQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.05.15      6p3OVq-sRKuzEakqDj8S5Q 5 0   70625 0  28.9mb  28.9mb
green open wazuh-alerts-3.x-2019.05.02      TyDndGRISiy_iv6xxto3yA 5 0   81673 0    33mb    33mb
green open wazuh-alerts-3.x-2019.08.23      blOXQLw0Qzqv6KN7tZPBEw 3 0    1761 0   2.4mb   2.4mb
green open wazuh-alerts-3.x-2019.08.05      CEIU4DCkQoCCy4KLwynWsA 3 0    3471 0   4.9mb   4.9mb
green open wazuh-alerts-3.x-2019.10.14      8k_F0ngFQb6FpPOIY4fx8g 3 0   48896 0  19.2mb  19.2mb
green open wazuh-alerts-3.x-2019.10.22      eAXfOOjfRrG27Qt-VGxOyw 3 0   49345 0  19.9mb  19.9mb
green open wazuh-monitoring-3.x-2019.10.30  pUHmPUymTriZOQW4JpNdtg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.04  fSmccmjXT2SiSF0p_FMUGw 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.06  Ply63oP1SfeNXHuadhnV0Q 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.22  yd2bqi4gQ7Gpm4fmHeKG7Q 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.27  gB85j2I9RiS3vwai_hGrJA 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.09.28  abYIQ446RjmL45fn8HpcFA 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.29      O8IYC7YiTAq2pS7Al7GBmw 3 0   44151 0  18.1mb  18.1mb
green open wazuh-alerts-3.x-2019.08.24      zBe1c-XbRL2qe9NtSBc67g 3 0    1354 0   2.1mb   2.1mb
green open wazuh-monitoring-3.x-2019.06.18  dsoxgPBSSJSnL6tDKdJpjw 5 0       0 0   1.2kb   1.2kb
green open wazuh-alerts-3.x-2019.09.30      HDZ75OxwSwGBzUWvH3mifA 3 0    2935 0   3.7mb   3.7mb
green open wazuh-alerts-3.x-2019.11.05      qbf9y8SXRVCKiP8FGXjKDA 3 0    3933 0   4.6mb   4.6mb
green open wazuh-monitoring-3.x-2019.12.08  QCMWeuk4QSy6k6dPHpONMg 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.11.25      OdfPRxCPTdWAz1y9SdG2qg 3 0    3688 0   3.9mb   3.9mb
green open wazuh-alerts-3.x-2019.08.10      X4XAYn0sRtiFYNQ4FGjMjQ 3 0    3289 0   4.3mb   4.3mb
green open wazuh-alerts-3.x-2019.05.04      Ki_rVwHIR3COuEGDWgoA-g 5 0   81555 0  33.1mb  33.1mb
green open wazuh-monitoring-3.x-2019.12.24  erV3PLpeSeaH7V0K67-BRg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.07.27  LTtjD7PPQqaBez_FUKIX_w 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.05.01  xd1indSYTs-RJwJsCbN6sw 5 0    3600 0   1.2mb   1.2mb
green open wazuh-alerts-3.x-2019.11.09      YLnfRulOQIqhl9NyNS3aUg 3 0    3499 0   3.6mb   3.6mb
green open wazuh-alerts-3.x-2019.08.09      tybQNYxESn64k6P2EmbaxA 3 0    3040 0   3.9mb   3.9mb
green open wazuh-monitoring-3.x-2019.10.14  LOcgDbk_RamvAWL_0nPJVw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.09.10      FhNEuQPyTDeUeqbZr-pEYA 3 0    4440 0   3.6mb   3.6mb
green open wazuh-monitoring-3.x-2019.12.16  RyTMlBjuSgeHFGryU9P7Jg 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.11.21      a8je9z9VQdGb9iC4CqDdEQ 3 0    3671 0   3.8mb   3.8mb
green open wazuh-monitoring-3.x-2019.10.01  qt480newR6OsCU4s0t70Rw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.15      96QDUYNFQOSJ-NkTDVZCvw 3 0   49638 0  20.6mb  20.6mb
green open wazuh-monitoring-3.x-2019.10.12  adA3t8uuT12JR1syKLfRGg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.09.30  pqNS_loYQsmUI5ZNlAiT1A 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.07.27      djMbjTAgSaKDJUa3-Kx3vg 3 0    2350 0   2.9mb   2.9mb
green open wazuh-alerts-3.x-2019.05.18      fFwLJhjyTeailMbAtVAfPw 5 0   60644 0  22.8mb  22.8mb
green open wazuh-alerts-3.x-2019.09.05      imto2mZ2Q5ais8XqCFzFMw 3 0    1812 0   2.7mb   2.7mb
green open wazuh-alerts-3.x-2019.12.17      i4hAJNG8TcamQh8o8SPleQ 3 0    4983 0   5.4mb   5.4mb
green open wazuh-alerts-3.x-2019.10.11      vLtyp_2MSFOq-OqJmIA_kg 3 0   62818 0  23.6mb  23.6mb
green open wazuh-monitoring-3.x-2019.09.07  NGi8SUiTQxGILd7KFpW5bQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.11.06      0UgB8QaqS86wRvfX_uAHpw 3 0    3889 0     4mb     4mb
green open wazuh-alerts-3.x-2019.09.23      PeM1vIOnQkCCQiWXkQEfLw 3 0    1717 0   2.3mb   2.3mb
green open wazuh-monitoring-3.x-2019.07.26  izJrtqj-Qpy3aHtSTWldUA 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.06      f_gBdGClRUi85XOWyEnTBw 3 0   69940 0  25.6mb  25.6mb
green open wazuh-monitoring-3.x-2019.05.17  qVVu9KFsRwiAYAyRwZJrZg 5 0    3575 0   1.2mb   1.2mb
green open wazuh-alerts-3.x-2019.07.14      6IJbrC6sRhqROFmaop7OpQ 3 0     889 0     1mb     1mb
green open wazuh-alerts-3.x-2019.08.13      S2Osz1QhQFKAheW8nUH4cg 3 0    4383 0   4.6mb   4.6mb
green open wazuh-alerts-3.x-2019.07.24      KD45BvcrT32lyR2ldtl5uw 3 0    1767 0   2.5mb   2.5mb
green open wazuh-monitoring-3.x-2019.09.08  tOcIxnO4Q9idTbr23DnNJQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.01  DKHSOly2Rc-Pps1b1Tab_Q 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.11.26      4DTKzRpBSGSFhI9XX1IUcw 3 0    3887 0   4.3mb   4.3mb
green open wazuh-monitoring-3.x-2019.05.06  jCZkNlTESZu2c2Lr0Fx3KQ 5 0    3575 0   1.3mb   1.3mb
green open wazuh-monitoring-3.x-2019.11.15  EvwlQU27RN-ohAHI-wV1HQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.05.20      TDOAc2HbQUaqz14pZZ2P2Q 5 0   70527 0  26.1mb  26.1mb
green open wazuh-monitoring-3.x-2019.12.11  DLiXtvzyT2KNlWhXn7Jflg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.10.08  Uj1ANPVjRgqi3_oJIg6kHQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.11.24  Zybqg7pzQlaiWTScostXOg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.11.11  rEpXIQOhQkOVXttOPxoJhg 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.01      psS1DV6CR_2vXkUcfoiPyA 3 0    2942 0   3.5mb   3.5mb
green open wazuh-alerts-3.x-2019.08.25      Jb3jyT-RTk-E-qtXMwfLKA 3 0   23550 0   8.9mb   8.9mb
green open wazuh-alerts-3.x-2019.10.31      55FLlO8JT66qJFf6jmjY5A 3 0    2616 0   3.3mb   3.3mb
green open wazuh-monitoring-3.x-2019.11.07  FlKdp5dpR3qiXbgr_cclXQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.09.24      HB1UazhpSJCFbVpkrpRA9g 3 0   12516 0   6.1mb   6.1mb
green open wazuh-alerts-3.x-2019.05.05      ka_p0w-pQa-EDZjSI7r0lQ 5 0   80724 0  32.9mb  32.9mb
green open wazuh-alerts-3.x-2019.05.19      FtcRdBoGTYqsc6UofzEWMg 5 0   75832 0  27.7mb  27.7mb
green open wazuh-alerts-3.x-2019.05.21      kkFl_VxNR2O_0at5tu3MVg 5 0   66482 0  24.7mb  24.7mb
green open wazuh-monitoring-3.x-2019.11.08  HNNYAkZlQySknu-196wMAQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.09.08      AndgkKaTQ0yNSDW8eTGArg 3 0    2586 0   3.3mb   3.3mb
green open wazuh-alerts-3.x-2019.08.07      kjohG-2RRKSo-mWxxOX-OA 3 0    2852 0   3.9mb   3.9mb
green open wazuh-alerts-3.x-2019.09.06      xBS1z2feQ828TdUseubljg 3 0    1761 0   2.6mb   2.6mb
green open wazuh-monitoring-3.x-2019.09.13  bT4QIyP9SmWoLWIveGvl0g 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.05.15  AU6klmZ4RIq2c1THeYdyRA 5 0    3200 0 837.6kb 837.6kb
green open wazuh-monitoring-3.x-2019.10.15  eOf8XUM1QTyegFCUWrCXIw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.12.13      S5OFhGdVTQ2hxX8mSEG1Vg 3 0    4911 0   5.2mb   5.2mb
green open wazuh-monitoring-3.x-2019.10.03  NLozSbaaRq6zNUq9XVtwQw 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.09.18  OONz6qG7TfKZ7RN0LYX-6Q 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.10.23  gC1RUDDvRXWGDaG63OLFIA 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.11.19  jC7-UXxJTWeMdm2me7mPoA 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.09.14  MVacgu7xSnyni_Qj65HAlQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.11.29  vbEF-JmPTRSooGXNzeFpEw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.12.04      o9juF20UTiOHruXXQYPVdw 3 0    4806 0   5.1mb   5.1mb
green open wazuh-monitoring-3.x-2019.05.07  b6zPWGmUT0WnEfMxJskgiA 5 0    1000 0 587.4kb 587.4kb
green open wazuh-monitoring-3.x-2019.12.01  IwnJ5UQDQsu9ToJHdu0sYw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.12.08      jJDQJmEOSGq9ffNiz58Vxw 3 0   10535 0  20.6mb  20.6mb
green open wazuh-monitoring-3.x-2019.05.22  juAPDBR4RS2v8aaCWRzkLQ 5 0    3575 0   1.1mb   1.1mb
green open wazuh-alerts-3.x-2019.11.19      yGnkqQBhREm8nooA_vrWtA 3 0    4457 0   4.9mb   4.9mb
green open wazuh-alerts-3.x-2019.05.17      88wNYIiaR9GpwYBTRN9I7w 5 0   64635 0  23.9mb  23.9mb
green open wazuh-alerts-3.x-2019.12.07      YFj46iqcQhKdZaZD8jdLKQ 3 0   10951 0  21.5mb  21.5mb
green open wazuh-monitoring-3.x-2019.10.02  YC7GrHdyS_aMtsKjC_oQOQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.05.05  p2HPPTJQQ1a17qggQjmrdQ 5 0    3600 0   1.2mb   1.2mb
green open wazuh-alerts-3.x-2019.08.27      ZqkeKJUtR0qwlZM232TiUQ 3 0  352218 0 102.9mb 102.9mb
green open wazuh-alerts-3.x-2019.08.30      bC6SeSLHQgyCpFJ5H9QTlw 3 0    1362 0   1.9mb   1.9mb
green open wazuh-monitoring-3.x-2019.08.07  GJDB37gwRgemePNuonCGZA 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.11.25  IPqzE_BvROuKFhJcxe5CGg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.12.22  RMakJ4MPSzOslcCD-alRGw 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.10.06  5lLZYPAtQwimD_41Bi1F9w 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.10.20  HobMS5ypTeu7eT3yFkluJA 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.12.04  1fLLYzCIS2-hWqMlYADsOw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.11.20      rk4aTGwBQCe5fE8KAwoDxQ 3 0    3966 0   4.1mb   4.1mb
green open wazuh-alerts-3.x-2019.05.03      tlljnJrOT9yIhVlDdUsY2A 5 0   80990 0    33mb    33mb
green open wazuh-monitoring-3.x-2019.11.12  eKQTMTMOQcGnOzLrP4p15A 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.19      w43dqmZqQ76MmTPGU0Pcjw 3 0   49490 0  19.9mb  19.9mb
green open wazuh-monitoring-3.x-2019.11.18  YhDqvYPQTIGazdx330TVaw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.09.26      dtFEcEI2Rxevf6BHFW3Kog 3 0    3042 0   3.4mb   3.4mb
green open wazuh-monitoring-3.x-2019.11.16  BbgZehzLQRecaBgu8WPUWw 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.11.30  HdoR_4zjSzehRwiRh1Kafw 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.09.11  xjXQMsEVQfuO7pl8yxvzWQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.08.17      OZ6s77LySKWP4Qxj90wazw 3 0    1469 0   1.8mb   1.8mb
green open wazuh-alerts-3.x-2019.08.06      9fdReqU-QRquER3JRfztrQ 3 0    2719 0   3.5mb   3.5mb
green open wazuh-alerts-3.x-2019.11.11      yOgtuZpPQdqTr3SxPOrM5g 3 0    4166 0   4.4mb   4.4mb
green open wazuh-monitoring-3.x-2019.09.22  aVJ4f3iuQfmf-qgU375APA 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.12.21      -EAcsbioRGCiEbDZZZI0Bg 3 0    3638 0   4.6mb   4.6mb
green open wazuh-alerts-3.x-2019.07.29      CB92ZjhaQCG5swY0B19MbQ 3 0    2548 0   3.3mb   3.3mb
green open wazuh-monitoring-3.x-2019.10.29  ViEHlIE9TlyaSpxCLodJ2w 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.12      C0b2gnprRSa4lE9Qu4u2gQ 3 0   51383 0  20.1mb  20.1mb
green open wazuh-alerts-3.x-2019.07.31      w13IQVznSZCIw6_-lD93-w 3 0    3279 0   4.4mb   4.4mb
green open wazuh-alerts-3.x-2019.12.02      snlg9F6oSNmergGY_dbKIQ 3 0    4321 0   4.5mb   4.5mb
green open wazuh-monitoring-3.x-2019.09.06  TotTLl8ARzCXl24Uze3ZYA 2 0       0 0    566b    566b
green open filebeat-7.2.0-2019.07.24-000001 v2TInrAbTluxvseDqCYiWg 1 0       0 0    283b    283b
green open wazuh-alerts-3.x-2019.09.01      xua2ACSUSxKsBlP7q5e2Jg 3 0    2129 0   2.6mb   2.6mb
green open wazuh-alerts-3.x-2019.07.28      T48zppX2SFWxm4Yww6bhHw 3 0    8064 0   6.8mb   6.8mb
green open wazuh-alerts-3.x-2019.05.28      InUAlaE7SJuTtIxMy99nRQ 5 0       8 0  77.2kb  77.2kb
green open wazuh-alerts-3.x-2019.05.29      qSD0aadNSiS6QuOB23oMpw 5 0    2970 0   1.2mb   1.2mb
green open wazuh-monitoring-3.x-2019.10.04  uNrBkMJFQmKJpTDA8k2XDQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.05.02  9dJbV0i5S7uY-abVGn43yg 5 0    3600 0   1.2mb   1.2mb
green open wazuh-alerts-3.x-2019.10.18      6DShPusITrWu6J-XX_GZlw 3 0   49517 0  20.2mb  20.2mb
green open wazuh-monitoring-3.x-2019.09.19  CpdHkxOBS1STDk1x5PWxXQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.08.25  Sww7z76vTV-T1WdBqQ_nDg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.09.10  hHsXT58aQFK-HviQpr-LwQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.11.15      -qHLdaGOT1-eU05vNMPw9g 3 0    3770 0     4mb     4mb
green open wazuh-monitoring-3.x-2019.11.05  ISwXf6giQYKn1vU47Q6Uxw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.06.17      uIrdsbIhQr68-xdLNKbFpA 5 0       0 0   1.2kb   1.2kb
green open wazuh-monitoring-3.x-2019.05.18  SGVm9KytSbK5Hz7qKLX6MQ 5 0    3575 0   1.2mb   1.2mb
green open wazuh-monitoring-3.x-2019.08.26  7QswTOePTRSEM3Ivsnxl_Q 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.10.21  BbEgULXpQZOycN8-H7Uf2Q 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.26      6ZooR6lpTUa--A_imvh5kg 3 0  875708 0     1gb     1gb
green open wazuh-alerts-3.x-2019.08.28      sTwWihrNRl-E1oYS8KAYGg 3 0    1521 0   2.1mb   2.1mb
green open wazuh-alerts-3.x-2019.09.07      k3HgLF-lRGWruNubyVbqRw 3 0    2488 0   3.2mb   3.2mb
green open wazuh-monitoring-3.x-2019.09.05  DsX0SEHUQUOmMpXVsr0rfg 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.09.20      NeNmcwsBTF2GVTNHEwPP0w 3 0    1873 0   2.4mb   2.4mb
green open wazuh-alerts-3.x-2019.12.01      qP3Kf_WTSB6jyd-qa6atCA 3 0    3647 0   3.8mb   3.8mb
green open wazuh-monitoring-3.x-2019.09.21  sE6CQoMzR8eLhPO9vZDdRg 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.09      MathC0rdT3-nJC9XVvcP8g 3 0   65710 0  24.6mb  24.6mb
green open wazuh-monitoring-3.x-2019.09.04  pbmv1AZ1R0KB2b1pl-v-Pg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.09.24  Bgrt9QIsSNWej8LHYvTtIw 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.05.16      2c0XqoNORI-tQvW6HKhD0A 5 0   77987 0  30.9mb  30.9mb
green open wazuh-alerts-3.x-2019.09.18      yrSeSoBbQr6ggriaPMlDEQ 3 0    2465 0     3mb     3mb
green open wazuh-monitoring-3.x-2019.11.13  aU5OG7ZKS7m4H6jR_RDooA 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.12.07  qugYoqJaQ2u9suWs6OcrHw 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.09.23  Qmp18nWcS4y9rBoF6R8mdg 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.12.03  pY8GFGtiTfeK0QO50ENp5A 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.05.06      yVwYz_vzTJiShfIeOzz18Q 5 0   80220 0  32.5mb  32.5mb
green open wazuh-alerts-3.x-2019.11.18      kTWmwcZxSQK1tA4AllPbvQ 3 0    3755 0   3.9mb   3.9mb
green open wazuh-monitoring-3.x-2019.10.28  uINgWSytTYixwo-MnekgKQ 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.11.20  VFe0QWtORX6G2ZVMvLI4Og 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.07.11      MEhudlslRhuBWzjWXs-I-Q 5 0       0 0   1.2kb   1.2kb
green open wazuh-alerts-3.x-2019.12.23      mKM0UL2jSfC_XEasZbY_ww 3 0    3567 0   4.3mb   4.3mb
green open wazuh-monitoring-3.x-2019.08.03  730Dg8d-Td6npPhnLl8_aQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.08.19      BSE6e4mIR9S_xxUehUYm6Q 3 0    1743 0   2.2mb   2.2mb
green open wazuh-alerts-3.x-2019.09.15      OgSY8AyDS9m17gm7yGSD0Q 3 0    1766 0   2.5mb   2.5mb
green open wazuh-alerts-3.x-2019.10.17      HLuM9AiKTnKwYkm_jlfS7g 3 0   49711 0  20.4mb  20.4mb
green open wazuh-monitoring-3.x-2019.07.11  0MVzidbCSgaPqntlruPyZQ 5 0       0 0   1.2kb   1.2kb
green open wazuh-alerts-3.x-2019.09.21      trI8SCmcQT2mo6ydf2v3hg 3 0    1994 0   2.5mb   2.5mb
green open wazuh-monitoring-3.x-2019.11.03  CJ7tKkNASlKnAPsRLkJuVQ 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.09.11      cJg5CmHtRzOYIEdhxWlanw 3 0    1913 0   2.5mb   2.5mb
green open wazuh-monitoring-3.x-2019.12.15  FOBns5qbTC-VlcOYLD_YYA 2 0       0 0    566b    566b
green open wazuh-monitoring-3.x-2019.09.20  5AfSvqBISueAJYP2sToAEA 2 0       0 0    566b    566b
green open wazuh-alerts-3.x-2019.10.30      sbSc5FEHSFiQ4O8OoqE_bA 3 0    2535 0   3.4mb   3.4mb
green open wazuh-alerts-3.x-2019.10.20      Wcy4OZ4BRaSt1riSLtnzCg 3 0   49171 0  19.8mb  19.8mb
green open wazuh-alerts-3.x-2019.08.12      MwCvsbsJQFyMHR1oRCizGQ 3 0   26642 0  12.2mb  12.2mb
green open wazuh-monitoring-3.x-2019.09.01  _UnTlwVvSSCqlDHgQ3dRgg 2 0       0 0    566b    566b
root@hids-001a:~#


curl http://localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 999,
  "active_shards" : 999,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
root@hids-001a:~#

filebeat.yml
root@hids-001a:/etc/filebeat# cat filebeat.yml
# Wazuh - Filebeat configuration file

filebeat.inputs:
  - type: log
    paths:
      - '/var/ossec/logs/alerts/alerts.json'

setup.template.json.enabled: true
setup.template.json.path: "/etc/filebeat/wazuh-template.json"
setup.template.overwrite: true

processors:
  - decode_json_fields:
      fields: ['message']
      process_array: true
      max_depth: 200
      target: ''
      overwrite_keys: true
  - drop_fields:
      fields: ['message', 'ecs', 'beat', 'input_type', 'tags', 'count', '@version', 'log', 'offset', 'type', 'host']
  - rename:
      fields:
        - from: "data.aws.sourceIPAddress"
          to: "@src_ip"
      ignore_missing: true
      fail_on_error: false
      when:
        regexp:
          data.aws.sourceIPAddress: \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b
  - rename:
      fields:
        - from: "data.srcip"
          to: "@src_ip"
      ignore_missing: true
      fail_on_error: false
      when:
        regexp:
          data.srcip: \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b
  - rename:
      fields:
        - from: "data.win.eventdata.ipAddress"
          to: "@src_ip"
      ignore_missing: true
      fail_on_error: false
      when:
        regexp:
          data.win.eventdata.ipAddress: \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b

# Send events directly to Elasticsearch
output.elasticsearch:
  hosts: ['http://localhost:9200']
  #pipeline: geoip
  indices:
    - index: 'wazuh-alerts-3.x-%{+yyyy.MM.dd}'

# Optional. Send events to Logstash instead of Elasticsearch
#output.logstash.hosts: ["YOUR_LOGSTASH_SERVER_IP:5000"]
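
The three rename processors in the config above all gate on the same IPv4 regexp before copying a field to @src_ip. The following Python sketch mirrors that matching logic for illustration only (it is not Filebeat's implementation, and it uses flattened key names where Filebeat works on nested event fields):

```python
import re

# Same pattern the processors use to decide whether a field looks like an IPv4 address.
IPV4 = re.compile(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b")

# Fields the config tries, in order.
SOURCE_FIELDS = ["data.aws.sourceIPAddress", "data.srcip", "data.win.eventdata.ipAddress"]

def rename_src_ip(event: dict) -> dict:
    """Move the first IPv4-looking source field into '@src_ip', like the rename processors."""
    for field in SOURCE_FIELDS:
        value = event.get(field)
        if isinstance(value, str) and IPV4.search(value):
            event["@src_ip"] = event.pop(field)
            break
    return event

assert rename_src_ip({"data.srcip": "10.19.17.41"}) == {"@src_ip": "10.19.17.41"}
assert rename_src_ip({"data.srcip": "not-an-ip"}) == {"data.srcip": "not-an-ip"}
```

Because fail_on_error is false and ignore_missing is true in the config, events without a matching field pass through untouched, as the second assertion shows.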


Best regards,
Jacky Qin

On Tuesday, January 14, 2020 at 12:30:02 AM UTC+8, Mayte Ariza wrote:

Mayte Ariza

Jan 15, 2020, 12:12:44 PM1/15/20
to Wazuh mailing list
Hi Jacky,
 
Sorry for the late response.
 
Your Elasticsearch server seems to be properly configured, since all the indices and the Elasticsearch cluster health are green.

When you get the message No results match your search criteria in Kibana, does the selected time range cover any alerts? Your last wazuh-alerts index is from 2019.12.24, so the range should be set between 2019.05.01 and 2019.12.24.
Also, the selected index pattern must be wazuh-alerts-3.x-*. If the message still shows in Kibana after setting that range, the problem may be on the Elastic side (e.g. the Kibana-Elasticsearch connection, or the Elasticsearch templates).
 
If the message does not show, then we need to check the data flow between the Wazuh manager and Elasticsearch:
  •  Are your agents sending events to the manager?
  • Is the manager generating any alerts? Check if the file /var/ossec/logs/alerts/alerts.json contains any alerts and also /var/ossec/logs/ossec.log to verify if there are any errors.
  • Check your filebeat connection to Elasticsearch running the command: filebeat test output
If all the checks are "OK" then filebeat is able to communicate properly.
 
Keep us updated with any new information to solve the issue.

Best regards,
Mayte Ariza.


Jacky Qin

Jan 15, 2020, 9:51:18 PM1/15/20
to Wazuh mailing list
Hi Mayte,

Sorry for forgetting to describe my deployment architecture earlier. The Wazuh server, Elasticsearch, Filebeat and Kibana are all installed on the same server, so their network communication should be fine.
  1. The agents are sending logs to the manager normally.
  2. Between May 1, 2019 and December 24, 2019, the wazuh-alerts-3.x-* data is displayed normally.
  3. The alerts are in the alerts.json file (a sample is shown below).
  4. Filebeat is able to communicate properly.
root@hids-001a:~# cat /var/ossec/logs/alerts/alerts.json
{"timestamp":"2020-01-16T00:00:14.155+0800","rule":{"level":5,"description":"Windows error event.","id":"18103","firedtimes":2,"mail":false,"groups":["windows","system_error"],"gpg13":["4.3"],"gdpr":["IV_35.7.d"]},"agent":{"id":"022","name".............


root@hids-001a:~# filebeat test output
elasticsearch: http://localhost:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 127.0.0.1, 127.0.0.1, ::1
    dial up... OK
  TLS... WARN secure connection disabled
  talk to server... OK
  version: 7.2.0


Best regards,
Jacky Qin

On Thursday, January 16, 2020 at 1:12:44 AM UTC+8, Mayte Ariza wrote:
ossec_error.txt
wazuh.png

Mayte Ariza

Jan 16, 2020, 4:56:30 AM1/16/20
to Wazuh mailing list
Hi Jacky,
 
It seems there are no errors in your environment connections, since Filebeat is able to communicate properly with Elasticsearch and the old indices are being displayed in Kibana.
 
Since there are no indices after 2019.12.24 but the environment has alerts, maybe Filebeat is failing to index the events into Elasticsearch due to parsing exceptions. Could you check the Filebeat logs? The file should be located under the /var/log/filebeat path.
 
Run the following commands just to make sure that everything is running as it should:

/var/ossec/bin/ossec-control status

ps aux | grep -i ossec

systemctl status filebeat (systemd) or service filebeat status (sysV Init)

cat /var/ossec/logs/ossec.log | grep -vi "No space left on device\|info"
 
Could you share the last elasticsearch logs? Use the command: tail -n 1000 /var/log/elasticsearch/elasticsearch.log (modify it to match your Elasticsearch log file)
 

Also, it seems your server is running out of space, run the command df -h to check the hard drive.
 

The checks can be quite tedious, but please keep us updated to debug the issue.

Best regards,
Mayte Ariza.

Jacky Qin

Jan 16, 2020, 9:32:15 AM1/16/20
to Wazuh mailing list
Hi Mayte,

There seems to be something wrong with Elasticsearch.

root@hids-001a:/var/log/filebeat# cat filebeat
2020-01-16T10:20:40.411+0800    INFO    instance/beat.go:606    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2020-01-16T10:20:40.413+0800    INFO    instance/beat.go:614    Beat ID: 2b95e144-b5e8-48cc-93e3-de96c6a36dff
2020-01-16T10:20:40.414+0800    INFO    [index-management]      idxmgmt/std.go:178      Set output.elasticsearch.index to 'filebeat-7.2.0' as ILM is enabled.
2020-01-16T10:20:40.415+0800    INFO    elasticsearch/client.go:166     Elasticsearch url: http://localhost:9200
2020-01-16T10:20:40.418+0800    INFO    elasticsearch/client.go:735     Attempting to connect to Elasticsearch version 7.2.0

root@hids-001a:~# /var/ossec/bin/ossec-control status
wazuh-clusterd not running...
wazuh-modulesd is running...
ossec-monitord is running...
ossec-logcollector is running...
ossec-remoted is running...
ossec-syscheckd is running...
ossec-analysisd is running...
ossec-maild is running...
ossec-execd is running...
wazuh-db is running...
ossec-authd is running...
ossec-agentlessd not running...
ossec-integratord not running...
ossec-dbd not running...
ossec-csyslogd not running...

root@hids-001a:~# ps aux | grep -i ossec
ossec      1412  0.0  0.2 935356 37000 ?        Ssl   2019   0:16 /usr/bin/nodejs /var/ossec/api/app.js
root      13561  0.0  0.0 121000  4584 ?        Sl   Jan08   0:39 /var/ossec/bin/ossec-authd
ossec     13567  0.1  0.3 649216 51420 ?        Sl   Jan08  14:19 /var/ossec/bin/wazuh-db
root      13584  0.0  0.0  38924  2364 ?        Sl   Jan08   0:21 /var/ossec/bin/ossec-execd
ossecm    13592  0.1  0.0  38888  2940 ?        Sl   Jan08  17:13 /var/ossec/bin/ossec-maild
ossec     13602  4.6  0.6 1183072 109856 ?      Sl   Jan08 557:21 /var/ossec/bin/ossec-analysisd
root      13608  0.9  0.0 115056  6508 ?        Sl   Jan08 110:06 /var/ossec/bin/ossec-syscheckd
ossecr    13617  1.4  0.0 712136 12564 ?        Sl   Jan08 173:38 /var/ossec/bin/ossec-remoted
root      13622  0.1  0.0 407588  3416 ?        Sl   Jan08  14:41 /var/ossec/bin/ossec-logcollector
ossec     13628  0.6  0.0  38916  3468 ?        Sl   Jan08  82:56 /var/ossec/bin/ossec-monitord
root      13652  0.1  0.0 276596 10720 ?        Sl   Jan08  20:04 /var/ossec/bin/wazuh-modulesd
root      82063  0.0  0.0  14224   988 pts/0    S+   22:13   0:00 grep --color=auto -i ossec

root@hids-001a:~# service filebeat status
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/lib/systemd/system/filebeat.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-07-24 09:47:54 CST; 5 months 24 days ago
 Main PID: 3211 (filebeat)
    Tasks: 23
   Memory: 38.1M
      CPU: 5h 14min 7.240s
   CGroup: /system.slice/filebeat.service
           └─3211 /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat

Jan 16 22:12:55 hids-001a filebeat[3211]: 2020-01-16T22:12:55.015+0800        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{
Jan 16 22:12:56 hids-001a filebeat[3211]: [2.0K blob data]
Jan 16 22:12:56 hids-001a filebeat[3211]: [1.6K blob data]
Jan 16 22:13:11 hids-001a filebeat[3211]: [2.0K blob data]
Jan 16 22:13:11 hids-001a filebeat[3211]: [1.6K blob data]
Jan 16 22:13:18 hids-001a filebeat[3211]: [2.0K blob data]
Jan 16 22:13:18 hids-001a filebeat[3211]: [1.6K blob data]
Jan 16 22:13:19 hids-001a filebeat[3211]: [2.0K blob data]
Jan 16 22:13:19 hids-001a filebeat[3211]: [1.5K blob data]
Jan 16 22:13:25 hids-001a filebeat[3211]: 2020-01-16T22:13:25.015+0800        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{

root@hids-001a:~# cat /var/ossec/logs/ossec.log | grep -vi "No space left on device\|info"
2020/01/16 00:00:11 ossec-remoted: WARNING: (1213): Message from '10.19.17.41' not allowed.
2020/01/16 00:00:11 ossec-remoted: WARNING: (1213): Message from '10.19.7.81' not allowed.
2020/01/16 00:00:11 ossec-remoted: WARNING: (1213): Message from '10.19.7.21' not allowed.
......

root@hids-001a:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.9G     0  7.9G   0% /dev
tmpfs           1.6G  181M  1.4G  12% /run
/dev/xvda1      197G   53G  136G  28% /
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
tmpfs           1.6G     0  1.6G   0% /run/user/0

Best regards,
Jacky Qin
elasticsearch.log

Mayte Ariza

Jan 17, 2020, 4:30:54 AM1/17/20
to Wazuh mailing list
Hi Jacky,
 
By default, Elasticsearch enforces a read-only index block when nodes have less than 5% of free disk space. The attached Elasticsearch log is pointing this out:

[WARN ][o.e.c.r.a.DiskThresholdMonitor] [hids-001a] flood stage disk watermark [95%] exceeded on [34Yzhd0fQv2nIneiSxPQ9A][hids-001a][/var/lib/elasticsearch/nodes/0] free: 0b[0%], all indices on this node will be marked read-only
 
You can find more information about the watermark flood stage here: https://www.elastic.co/guide/en/elasticsearch/reference/7.2/disk-allocator.html
 
Also, the warnings in the ossec.log file suggest the same problem:

2020/01/16 00:05:00 ossec-monitord: ERROR: Compression error: /logs/alerts/2020/Jan/ossec-alerts-15.log.gz: No space left on device
 
Exceeding the flood stage means that Elasticsearch stops accepting writes once disk usage crosses 95%, and the read-only block stays in place until it is removed manually.
 
So it seems there is some issue with your server disk space, although the “df -h” output does not show anything unusual. Run the following commands to get more details about your disk usage:
  • findmnt 
  • lsblk
Also, keep in mind that it will be necessary to release the index block manually once there is enough disk space available: curl -X PUT "localhost:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d' { "index.blocks.read_only_allow_delete": null }'
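
The flood-stage behaviour described above can be summarised in a few lines. This is a hypothetical sketch (the function names are made up for illustration; the 95% threshold matches the default cluster.routing.allocation.disk.watermark.flood_stage):

```python
def disk_usage_pct(used_bytes: int, total_bytes: int) -> float:
    """Percentage of the data path's disk currently in use."""
    return 100.0 * used_bytes / total_bytes

def flood_stage_exceeded(used_bytes: int, total_bytes: int,
                         flood_stage_pct: float = 95.0) -> bool:
    """True when Elasticsearch would mark all indices read-only
    (default flood stage watermark: 95% disk usage)."""
    return disk_usage_pct(used_bytes, total_bytes) >= flood_stage_pct

# A 50 GB disk with ~49 GB used is over the flood stage:
assert flood_stage_exceeded(49 * 2**30, 50 * 2**30)
# A 197 GB disk with 53 GB used is comfortably below it, but in this
# Elasticsearch version the read-only block is not lifted automatically;
# it must be released via index.blocks.read_only_allow_delete.
assert not flood_stage_exceeded(53 * 2**30, 197 * 2**30)
```

This is why a disk that once filled up can keep blocking writes even after the "df -h" output looks healthy again.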
 
Best regards,
Mayte Ariza.

Jacky Qin

Jan 17, 2020, 8:09:24 AM1/17/20
to Wazuh mailing list
Hi Mayte,

Following the Elastic guidance document, I added the settings below to the elasticsearch.yml file. After restarting the Elasticsearch service, wazuh-alerts-3.x-* still has no data, but the error information in the log file seems to be different.

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 93%
cluster.routing.allocation.disk.watermark.high: 95%

# ------------------------------------ Node ------------------------------------

root@hids-001a:~# findmnt
TARGET                                SOURCE      FSTYPE      OPTIONS
/                                     /dev/xvda1  ext4        rw,relatime,errors=remount-ro,data=ordered
├─/sys                                sysfs       sysfs       rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security              securityfs  securityfs  rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup                    tmpfs       tmpfs       ro,nosuid,nodev,noexec,size=8203192k,nr_inodes=2050798,mode=755
│ │ ├─/sys/fs/cgroup/systemd          cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd
│ │ ├─/sys/fs/cgroup/hugetlb          cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,hugetlb
│ │ ├─/sys/fs/cgroup/freezer          cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,freezer
│ │ ├─/sys/fs/cgroup/cpu,cpuacct      cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
│ │ ├─/sys/fs/cgroup/perf_event       cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,perf_event
│ │ ├─/sys/fs/cgroup/cpuset           cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,cpuset
│ │ ├─/sys/fs/cgroup/net_cls,net_prio cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
│ │ ├─/sys/fs/cgroup/pids             cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,pids
│ │ ├─/sys/fs/cgroup/blkio            cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,blkio
│ │ ├─/sys/fs/cgroup/memory           cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,memory
│ │ └─/sys/fs/cgroup/devices          cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,devices
│ ├─/sys/fs/pstore                    pstore      pstore      rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/debug                 debugfs     debugfs     rw,relatime
│ │ └─/sys/kernel/debug/tracing       tracefs     tracefs     rw,relatime
│ └─/sys/fs/fuse/connections          fusectl     fusectl     rw,relatime
├─/proc                               proc        proc        rw,nosuid,nodev,noexec,relatime
│ └─/proc/sys/fs/binfmt_misc          systemd-1   autofs      rw,relatime,fd=33,pgrp=1,timeout=0,minproto=5,maxproto=5,direct
│   └─/proc/sys/fs/binfmt_misc        binfmt_misc binfmt_misc rw,relatime
├─/dev                                udev        devtmpfs    rw,nosuid,relatime,size=8182896k,nr_inodes=2045724,mode=755
│ ├─/dev/pts                          devpts      devpts      rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
│ ├─/dev/shm                          tmpfs       tmpfs       rw,nosuid,nodev,size=8203192k,nr_inodes=2050798
│ ├─/dev/hugepages                    hugetlbfs   hugetlbfs   rw,relatime
│ └─/dev/mqueue                       mqueue      mqueue      rw,relatime
├─/run                                tmpfs       tmpfs       rw,nosuid,noexec,relatime,size=1640640k,nr_inodes=2050798,mode=755
│ ├─/run/lock                         tmpfs       tmpfs       rw,nosuid,nodev,noexec,relatime,size=5120k,nr_inodes=2050798
│ └─/run/user/0                       tmpfs       tmpfs       rw,nosuid,nodev,relatime,size=1641500k,mode=700
└─/var/lib/lxcfs                      lxcfs       fuse.lxcfs  rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other

root@hids-001a:~# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  200G  0 disk
└─xvda1 202:1    0  200G  0 part /

Best regards,
Jacky Qin
elasticsearch.log

Mayte Ariza

Jan 21, 2020, 5:41:45 AM1/21/20
to Wazuh mailing list
Hi Jacky,
 
Sorry for the late response.
 
I'm a little puzzled by this issue because, although the logs show errors due to lack of space, the “df -h” output does not show anything unusual. Have you ever run out of space on your server? If so, modifying the cluster.routing.allocation.disk settings will not be enough. You should release the index block manually once there is enough disk space available with the command:
 
curl -X PUT "localhost:9200/_all/_settings?pretty" -H 'Content-Type: application/json' -d' { "index.blocks.read_only_allow_delete": null }'

 
We could try to perform some curls in order to get some details:
 
  • Creating a new index:
 
curl -X PUT "localhost:9200/wazuh-alerts-3.x-test-2020.01.21?pretty" -H 'Content-Type: application/json' -d'
{
    "settings" : {
        "number_of_shards" : 1,
        "number_of_replicas" : 0
    }
}
'

  • Indexing data:

(It is a sample dataset)
 
curl -H "Content-Type: application/json" -XPOST "https://elasticsearch:9200/wazuh-alerts-3.x-test-2020.01.21/_bulk?pretty" --data-binary "@accounts.json"

 
 
Can you run these commands without getting any errors?
 
 
Best regards,
Mayte Ariza.

Mayte Ariza

Jan 22, 2020, 3:24:19 AM1/22/20
to Wazuh mailing list
Hi Jacky,
 
I have been reviewing your Elasticsearch configuration and I realized the issue may be triggered by the cluster shard limit, although I have not seen any related log errors. By default, the shard limit is 1000 per node.
 
Your cluster currently has 999 shards, and since Elasticsearch is configured as a single-node cluster, the limit is 1000 shards in total. As a temporary measure, this setting can be adjusted dynamically with the following command:

curl -XPUT localhost:9200/_cluster/settings -H 'Content-type: application/json' --data-binary $'{"transient":{"cluster.max_shards_per_node":<new_shard_limit>}}'

Change <new_shard_limit> to the desired value.
 
Although this limit is not a sizing recommendation, you could try to reduce the number of shards in the cluster. There is an article with more information about it: https://www.elastic.co/blog/how-many-shards-should-i-have-in-my-elasticsearch-cluster
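
The shard count here adds up quickly: the index listing shows daily wazuh-alerts indices created with 5 primaries (later 3) and daily wazuh-monitoring indices with 5 (later 2), all with 0 replicas. A small sketch of the cluster-wide capacity check (function names are illustrative; the 1000-per-node default matches cluster.max_shards_per_node):

```python
def total_shards(primaries: int, replicas: int) -> int:
    """Shards an index contributes to the cluster total."""
    return primaries * (1 + replicas)

def can_create(current_shards: int, new_index_shards: int,
               nodes: int = 1, max_shards_per_node: int = 1000) -> bool:
    """Mirror Elasticsearch's cluster-wide shard-limit check."""
    return current_shards + new_index_shards <= nodes * max_shards_per_node

# With 999 shards open, one more 1-shard/0-replica index still fits:
assert can_create(999, total_shards(1, 0))
# At 1000 shards, even a single additional shard is rejected with a
# validation_exception ("this cluster currently has [1000]/[1000]
# maximum shards open").
assert not can_create(1000, total_shards(1, 0))
```

Note that a bulk request into a non-existent index auto-creates it with the default of 1 primary and 1 replica, i.e. 2 shards, so it can fail even when a 1-shard index could still be created.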

 
I hope it helps.
 
Best regards,
Mayte Ariza.

Jacky Qin

Feb 10, 2020, 5:10:59 AM2/10/20
to Wazuh mailing list
Hi Mayte,

Sorry for the late reply; I was away for a long holiday.

The disk size of the server used to be 50GB. Later I expanded it to 200GB.

With the commands you provided, indexing the data failed with an error.

root@hids-001a:~# curl -X PUT "localhost:9200/_all/_settings?pretty" -H 'Content-Type: application/json' -d' { "index.blocks.read_only_allow_delete": null }'
{
  "acknowledged" : true
}


root@hids-001a:~# curl -X PUT "localhost:9200/wazuh-alerts-3.x-test-2020.02.10?pretty" -H 'Content-Type: application/json' -d'
> {
>     "settings" : {
>         "number_of_shards" : 1,
>         "number_of_replicas" : 0
>     }
> }
> '
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "index" : "wazuh-alerts-3.x-test-2020.02.10"
}


root@hids-001a:~# curl -H "Content-Type: application/json" -XPOST "https://elasticsearch:9200/wazuh-alerts-3.x-test-2020.02.10/_bulk?pretty" --data-binary "@accounts.json"
curl: (6) Could not resolve host: elasticsearch


Adjusting the shard limit with the curl command had no effect (the 301 response below came from nginx, because the port was omitted from the URL), so I added cluster.max_shards_per_node: 2000 to the elasticsearch.yml file instead.

root@hids-001a:~# curl -XPUT localhost/_cluster/settings -H 'Content-type: application/json' --data-binary $'{"transient":{"cluster.max_shards_per_node":2000}}'
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.10.3 (Ubuntu)</center>
</body>
</html>


root@hids-001a:~# cat /etc/elasticsearch/elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 93%
cluster.routing.allocation.disk.watermark.high: 95%
cluster.max_shards_per_node: 2000


Finally, I restarted the server, and after making sure all services started normally, the problem remained the same.

I still haven't found a solution. Can you continue to help me?

Best regards,
Jacky Qin

Mayte Ariza

Feb 11, 2020, 3:38:51 AM2/11/20
to Wazuh mailing list
Hi Jacky,
 
I hope you had a nice holiday. Of course we can keep debugging the issue.
 
I realized that last time I forgot to replace the elasticsearch hostname with one that resolves in your environment. Please run the following command to index the sample dataset:
curl -H "Content-Type: application/json" -XPOST "http://localhost:9200/wazuh-alerts-3.x-test-2020.02.11/_bulk?pretty" --data-binary "@accounts.json"
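
For reference, the body passed via --data-binary (accounts.json here) must be newline-delimited JSON: an action/metadata line followed by a source line per document, ending with a trailing newline. A minimal sketch of building such a payload (document contents are made up; using auto-generated document IDs):

```python
import json

def build_bulk_body(docs: list) -> str:
    """Serialize documents into the NDJSON format expected by the _bulk API."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {}}))  # action/metadata line
        lines.append(json.dumps(doc))            # source line
    return "\n".join(lines) + "\n"               # _bulk requires a trailing newline

body = build_bulk_body([{"account_number": 1, "balance": 39225}])
assert body.endswith("\n")
assert len(body.splitlines()) == 2  # one action line + one source line
```

If the payload is malformed (e.g. the trailing newline is missing), Elasticsearch rejects the whole request, which is worth ruling out when a bulk indexing test fails.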

Is there any error when indexing the data?
 
Also, run the following commands to perform a quick check of the current status:

curl http://localhost:9200/_cluster/settings

curl http://localhost:9200/_cluster/health?pretty

curl http://localhost:9200/_cat/indices?pretty

df -h

logger -t sshd "pam_unix(sshd:session): session opened for user root by TEST WAZUH 1"

This will generate an alert. It will be really useful to check the Wazuh and Elasticsearch logs after running it:

tail -n 100 /var/ossec/logs/ossec.log

tail -n 500 /var/log/elasticsearch/elasticsearch.log
(modify it to match your Elasticsearch log file)
 
Please share the resulting output with us.
 
Best regards,
Mayte Ariza

Jacky Qin

Feb 12, 2020, 8:40:03 AM2/12/20
to Wazuh mailing list
Hi Mayte,

There was an error indexing the data. For the full error output, please see the attached indexing_error.txt.

root@hids-001a:~# curl -H "Content-Type: application/json" -XPOST "http://localhost:9200/wazuh-alerts-3.x-test-2020.02.12/_bulk?pretty" --data-binary "@accounts.json"
{
  "took" : 5,
  "errors" : true,
  "items" : [
    {
      "index" : {
        "_index" : "wazuh-alerts-3.x-test-2020.02.12",
        "_type" : "_doc",
        "_id" : "1",
        "status" : 400,
        "error" : {
          "type" : "validation_exception",
          "reason" : "Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"
        }
      }
    },

{"persistent":{},"transient":{}}root@hids-001a:~#


{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 1000,
  "active_shards" : 1000,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}


For the result of curl http://localhost:9200/_cat/indices?pretty, please see the attachment indices.txt.

root@hids-001a:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.9G     0  7.9G   0% /dev
tmpfs           1.6G   57M  1.6G   4% /run
/dev/xvda1      197G   38G  151G  21% /
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
tmpfs           1.6G     0  1.6G   0% /run/user/0


root@hids-001a:~# logger -t sshd "pam_unix(sshd:session): session opened for user root by TEST WAZUH 1"
root@hids-001a:~# tail -n 100 /var/ossec/logs/ossec.log
2020/02/12 21:20:25 ossec-remoted: WARNING: (1213): Message from '110.129.117.42' not allowed.
2020/02/12 21:20:25 ossec-remoted: WARNING: (1213): Message from '110.129.71.59' not allowed.
2020/02/12 21:20:26 ossec-remoted: WARNING: (1213): Message from '110.127.72.142' not allowed.
2020/02/12 21:20:26 ossec-remoted: WARNING: (1213): Message from '110.129.63.15' not allowed.
2020/02/12 21:20:26 ossec-remoted: WARNING: (1213): Message from '110.129.74.223' not allowed.
2020/02/12 21:20:26 ossec-remoted: WARNING: (1213): Message from '110.127.74.140' not allowed.
2020/02/12 21:20:26 ossec-remoted: WARNING: (1213): Message from '110.129.71.29' not allowed.
2020/02/12 21:20:26 ossec-remoted: WARNING: (1213): Message from '110.127.74.135' not allowed.
2020/02/12 21:20:26 ossec-remoted: WARNING: (1213): Message from '110.129.65.161' not allowed.
2020/02/12 21:20:26 ossec-remoted: WARNING: (1213): Message from '110.129.76.40' not allowed.
2020/02/12 21:20:26 ossec-remoted: WARNING: (1213): Message from '110.129.77.61' not allowed.
..............................

root@hids-001a:~# tail -n 500 /var/log/elasticsearch/elasticsearch.log
[2020-02-12T15:44:05,853][INFO ][o.e.c.m.MetaDataIndexTemplateService] [hids-001a] adding template [.management-beats] for index patterns [.management-beats]
[2020-02-12T15:44:07,217][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [hids-001a] updating number_of_replicas to [0] for indices [.wazuh]
[2020-02-12T15:44:07,287][INFO ][o.e.c.m.MetaDataIndexTemplateService] [hids-001a] adding template [wazuh-agent] for index patterns [wazuh-monitoring*, wazuh-monitoring-3.x-*]
[2020-02-12T15:44:07,311][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [hids-001a] updating number_of_replicas to [0] for indices [.wazuh-version]


Please continue to help me analyze, thank you.

Best regards,
Jacky Qin

indexing_error.txt
indices.txt

Mayte Ariza

unread,
Feb 13, 2020, 3:17:22 AM2/13/20
to Wazuh mailing list
Hi Jacky,
 
It seems that the setting cluster.max_shards_per_node: 2000 is not taking effect. I have been digging into it, and this setting may not be applied when it is specified in the elasticsearch.yml file; that may be what is happening in your environment. (Related issue: https://github.com/elastic/elasticsearch/issues/40803)
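A quick way to confirm this: a dynamically applied value shows up under persistent or transient in _cluster/settings, whereas your earlier output showed both sections empty. A minimal sketch of that check, using the response already posted in this thread:

```shell
# Response previously returned by: curl http://localhost:9200/_cluster/settings
response='{"persistent":{},"transient":{}}'

# If max_shards_per_node does not appear, no dynamic override is in place and
# the node falls back to the default of 1000 shards per data node.
if ! printf '%s' "$response" | grep -q 'max_shards_per_node'; then
  echo "max_shards_per_node not set dynamically; default of 1000 applies"
fi
```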
 
Since your Elasticsearch is configured as a single node cluster, the default shard limit is 1000. As a temporary measure, this setting can be dynamically adjusted using the API:
 
curl -X PUT localhost:9200/_cluster/settings -H "Content-Type: application/json" -d '{ "persistent": { "cluster.max_shards_per_node": "2000" } }'

(Use transient instead of persistent if you do not want the settings to survive a full cluster restart.)
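For reference, the check behind the validation error can be sketched as simple arithmetic (an assumed simplification of Elasticsearch's logic, not its actual source): the cluster-wide limit is max_shards_per_node times the number of data nodes, and a new index with default settings needs 2 shards (1 primary + 1 replica).

```shell
# Assumed simplification of the shard-limit check behind the
# "this action would add [2] total shards" validation error.
max_shards_per_node=1000   # default on a 7.x cluster
data_nodes=1               # single-node cluster, per _cluster/health
open_shards=1000           # active shards reported by _cluster/health
new_shards=2               # 1 primary + 1 replica for a new default index

limit=$((max_shards_per_node * data_nodes))
if [ $((open_shards + new_shards)) -gt "$limit" ]; then
  echo "would add [$new_shards] total shards, but this cluster currently has [$open_shards]/[$limit] maximum shards open"
fi
```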
 
However, since your indices are not heavy, the best solution could be to reduce the number of primary shards per index using the Shrink API. You can find more information about it here: 
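One detail to keep in mind when shrinking: the target index's number_of_shards must be a factor of the source's primary shard count. A minimal sketch of that rule, assuming a hypothetical source index with 5 primary shards:

```shell
# The Shrink API only allows target shard counts that evenly divide the
# source's primary shard count. Hypothetical source with 5 primaries:
source_shards=5
for target in 1 2 3 4 5; do
  if [ $((source_shards % target)) -eq 0 ]; then
    echo "ok: $source_shards -> $target"
  else
    echo "rejected: $source_shards -> $target"
  fi
done
```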

Jacky Qin

unread,
Feb 13, 2020, 6:06:48 AM2/13/20
to Wazuh mailing list
Hi Mayte,

Following the command you provided, the shard limit was increased successfully.

root@hids-001a:~# curl -X PUT localhost:9200/_cluster/settings -H "Content-Type: application/json" -d '{ "persistent": { "cluster.max_shards_per_node": "2000" } }'
{"acknowledged":true,"persistent":{"cluster":{"max_shards_per_node":"2000"}},"transient":{}}root@hids-001a:~#
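For anyone following along, the acknowledged field in that response is what confirms the update was accepted, and a persistent setting survives full cluster restarts. A minimal sketch of checking it from a script:

```shell
# Response returned by the PUT _cluster/settings call above.
response='{"acknowledged":true,"persistent":{"cluster":{"max_shards_per_node":"2000"}},"transient":{}}'

# A persistent setting survives full cluster restarts, so this only needs to
# succeed once.
case "$response" in
  *'"acknowledged":true'*) echo "settings update acknowledged" ;;
  *) echo "settings update not acknowledged" ;;
esac
```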


Now data is displayed in wazuh-alerts-3.x-*. Thank you very much for your continued, patient help.

Best regards,
Jacky Qin
alerts.png

Mayte Ariza

unread,
Feb 13, 2020, 6:32:59 AM2/13/20
to Wazuh mailing list
Hi Jacky,
 
I'm glad it worked out!

Do not hesitate to contact us again if you have any questions.
 
Best regards,
Mayte Ariza