no security events on wazuh

German DiCasas

Mar 6, 2024, 5:38:39 AM
to Wazuh | Mailing List
Hi team,

I am new to this... For some reason, the security events after a specific date are not shown, and I do not know why. Searching the official docs and groups.google.com, I found that the problem could be the wazuh-indexer needing to be reinstalled, but I am not sure. The Discover tab and the Security events view stopped on a specific date; after that date there are no more events. The files archives.log and alerts.log work and the logs are being updated.

Let me know if you have an idea what the problem could be. I have spent a few days troubleshooting this issue with no luck so far. I hope you can help me fix it.

These are the results of some commands:

cat /var/ossec/logs/ossec.log | grep -i -E "error|warn"
2024/03/05 19:15:24 wazuh-modulesd:vulnerability-detector: WARNING: (5500): The 'Ubuntu Trusty' database could not be fetched.
2024/03/05 19:15:24 wazuh-modulesd:vulnerability-detector: ERROR: (5513): CVE database could not be updated.
2024/03/05 19:16:04 wazuh-modulesd:vulnerability-detector: WARNING: (5500): The 'Ubuntu Xenial' database could not be fetched.
2024/03/05 19:16:04 wazuh-modulesd:vulnerability-detector: ERROR: (5513): CVE database could not be updated.
2024/03/05 19:16:41 wazuh-modulesd:vulnerability-detector: WARNING: (5500): The 'Ubuntu Bionic' database could not be fetched.
2024/03/05 19:16:41 wazuh-modulesd:vulnerability-detector: ERROR: (5513): CVE database could not be updated.
2024/03/05 19:17:01 wazuh-modulesd:vulnerability-detector: WARNING: (5500): The 'Ubuntu Focal' database could not be fetched.
2024/03/05 19:17:16 wazuh-modulesd:vulnerability-detector: WARNING: (5500): The 'Ubuntu Jammy' database could not be fetched.
2024/03/05 19:17:26 wazuh-modulesd:vulnerability-detector: WARNING: (5500): The 'Debian Buster' database could not be fetched.
2024/03/05 19:17:36 wazuh-modulesd:vulnerability-detector: WARNING: (5500): The 'Debian Bullseye' database could not be fetched.
2024/03/05 19:17:36 wazuh-modulesd:vulnerability-detector: ERROR: (5513): CVE database could not be updated.
2024/03/05 19:17:37 wazuh-modulesd:vulnerability-detector: WARNING: (5575): Unavailable vulnerability data for the agent '000' OS. Skipping it.


cat /usr/share/wazuh-dashboard/data/wazuh/logs/wazuhapp.log | grep -i -E "error|warn"
{"data":{"message":"validation_exception: [validation_exception] Reason: Validation Failed: 1: this action would add [1] total shards, but this cluster currently has [1000]/[1000] maximum shards open;","stack":"ResponseError: validation_exception: [validation_exception] Reason: Validation Failed: 1: this action would add [1] total shards, but this cluster currently has [1000]/[1000] maximum shards open;\n    at onBody (/usr/share/wazuh-dashboard/node_modules/@opensearch-project/opensearch/lib/Transport.js:374:23)\n    at IncomingMessage.onEnd (/usr/share/wazuh-dashboard/node_modules/@opensearch-project/opensearch/lib/Transport.js:293:11)\n    at IncomingMessage.emit (node:events:525:35)\n    at IncomingMessage.emit (node:domain:489:12)\n    at endReadableNT (node:internal/streams/readable:1358:12)\n    at processTicksAndRejections (node:internal/process/task_queues:83:21)"},"date":"2024-02-26T12:15:00.094Z","level":"info","location":"Cron-scheduler"}

{"date":"2024-02-26T12:15:00.142Z","level":"error","location":"monitoring:createIndex","message":"Could not create wazuh-monitoring-2024.9w index on elasticsearch due to validation_exception: [validation_exception] Reason: Validation Failed: 1: this action would add [1] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}

{"date":"2024-02-26T12:15:00.145Z","level":"error","location":"monitoring:insertMonitoringDataElasticsearch","message":"index_not_found_exception: [index_not_found_exception] Reason: no such index [wazuh-monitoring-2024.9w]"}

{"data":{"message":"connect ECONNREFUSED 127.0.0.1:9200","stack":"ConnectionError: connect ECONNREFUSED 127.0.0.1:9200\n    at ClientRequest.onError (/usr/share/wazuh-dashboard/node_modules/@opensearch-project/opensearch/lib/Connection.js:126:16)\n    at ClientRequest.emit (node:events:513:28)\n    at ClientRequest.emit (node:domain:489:12)\n    at TLSSocket.socketErrorListener (node:_http_client:494:9)\n    at TLSSocket.emit (node:events:513:28)\n    at TLSSocket.emit (node:domain:489:12)\n    at emitErrorNT (node:internal/streams/destroy:157:8)\n    at emitErrorCloseNT (node:internal/streams/destroy:122:3)\n    at processTicksAndRejections (node:internal/process/task_queues:83:21)"},"date":"2024-03-04T15:40:00.102Z","level":"info","location":"Cron-scheduler"}
{"date":"2024-03-04T15:45:00.320Z","level":"error","location":"monitoring:cronTask","message":"connect ECONNREFUSED 127.0.0.1:9200"}


{"data":{"message":"connect ECONNREFUSED 127.0.0.1:9200","stack":"ConnectionError: connect ECONNREFUSED 127.0.0.1:9200\n    at ClientRequest.onError (/usr/share/wazuh-dashboard/node_modules/@opensearch-project/opensearch/lib/Connection.js:126:16)\n    at ClientRequest.emit (node:events:513:28)\n    at ClientRequest.emit (node:domain:489:12)\n    at TLSSocket.socketErrorListener (node:_http_client:494:9)\n    at TLSSocket.emit (node:events:513:28)\n    at TLSSocket.emit (node:domain:489:12)\n    at emitErrorNT (node:internal/streams/destroy:157:8)\n    at emitErrorCloseNT (node:internal/streams/destroy:122:3)\n    at processTicksAndRejections (node:internal/process/task_queues:83:21)"},"date":"2024-03-04T15:45:01.405Z","level":"info","location":"Cron-scheduler"}

journalctl -u wazuh-dashboard:
Mar 05 19:14:54 server-siem opensearch-dashboards[795]: {"type":"response","@timestamp":"2024-03-05T22:14:51Z","tags":[],"pid":795,"method":"post","statusCode":200,"req":{"url":"/internal/search/opensearch","method":"post","headers":{"host":"192.168.1.200","connection":"keep-alive","content-length":"1587","sec-ch-ua":"\"Chromium\";v=\"118\", \"Google Chrome\";v=\"118\", \"Not=A?Brand\";v=\"99\"","content-type":"application/json","osd-xsrf":"osd-fetch","sec-ch-ua-mobile":"?0","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36","osd-version":"2.8.0","sec-ch-ua-platform":"\"Windows\"","accept":"*/*","origin":"https://192.168.1.200","sec-fetch-site":"same-origin","sec-fetch-mode":"cors","sec-fetch-dest":"empty","referer":"https://192.168.1.200/app/wazuh","accept-encoding":"gzip, deflate, br","accept-language":"es-ES,es;q=0.9"},"remoteAddress":"192.168.1.10","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36","referer":"https://192.168.1.200/app/wazuh"},"res":{"statusCode":200,"responseTime":176,"contentLength":9},"message":"POST /internal/search/opensearch 200 176ms - 9.0B"}
Mar 05 19:14:54 server-siem opensearch-dashboards[795]: {"type":"response","@timestamp":"2024-03-05T22:14:59Z","tags":[],"pid":795,"method":"get","statusCode":200,"req":{"url":"/ui/default_branding/home.svg","method":"get","headers":{"host":"192.168.1.200","connection":"keep-alive","sec-ch-ua":"\"Chromium\";v=\"118\", \"Google Chrome\";v=\"118\", \"Not=A?Brand\";v=\"99\"","sec-ch-ua-mobile":"?0","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36","sec-ch-ua-platform":"\"Windows\"","accept":"image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8","sec-fetch-site":"same-origin","sec-fetch-mode":"no-cors","sec-fetch-dest":"image","referer":"https://192.168.1.200/app/wazuh","accept-encoding":"gzip, deflate, br","accept-language":"es-ES,es;q=0.9"},"remoteAddress":"192.168.1.10","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36","referer":"https://192.168.1.200/app/wazuh"},"res":{"statusCode":200,"responseTime":4,"contentLength":9},"message":"GET /ui/default_branding/home.svg 200 4ms - 9.0B"}

filebeat test output
elasticsearch: https://127.0.0.1:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 127.0.0.1
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.3
    dial up... OK
  talk to server... OK
  version: 7.10.2

tail -n5 /var/ossec/logs/alerts/alerts.json: this command shows normal, current alerts; the same goes for archives.log. Both work.


systemctl status wazuh-indexer
● wazuh-indexer.service - Wazuh-indexer
     Loaded: loaded (/lib/systemd/system/wazuh-indexer.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2024-03-04 16:57:36 -03; 1 day 2h ago
       Docs: https://documentation.wazuh.com
   Main PID: 14604 (java)
      Tasks: 153 (limit: 14213)
     Memory: 6.3G
        CPU: 54min 58.503s
     CGroup: /system.slice/wazuh-indexer.service
             └─14604 /usr/share/wazuh-indexer/jdk/bin/java -Xshare:auto -Dopensearch.networkaddress.cache.ttl=60 -Dopensearch.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -XX:+ShowCodeDetailsInExceptionMessages -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.locale.providers=SPI,COMPAT -Xms6g -Xmx6g -XX:+UseG1GC -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -Djava.io.tmpdir=/tmp/opensearch-4706699220673334452 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/wazuh-indexer -XX:ErrorFile=/var/log/wazuh-indexer/hs_err_pid%p.log "-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/wazuh-indexer/gc.log:utctime,pid,tags:filecount=32,filesize=64m" -Dclk.tck=100 -Djdk.attach.allowAttachSelf=true -Djava.security.policy=file:///etc/wazuh-indexer/opensearch-performance-analyzer/opensearch_security.policy --add-opens=jdk.attach/sun.tools.attach=ALL-UNNAMED -XX:MaxDirectMemorySize=3221225472 -Dopensearch.path.home=/usr/share/wazuh-indexer -Dopensearch.path.conf=/etc/wazuh-indexer -Dopensearch.distribution.type=rpm -Dopensearch.bundled_jdk=true -cp "/usr/share/wazuh-indexer/lib/*" org.opensearch.bootstrap.OpenSearch -p /run/wazuh-indexer/wazuh-indexer.pid --quiet

Mar 05 00:00:01 server-siem systemd-entrypoint[14604]:         at org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:295)
Mar 05 00:00:01 server-siem systemd-entrypoint[14604]:         at org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:206)
Mar 05 00:00:01 server-siem systemd-entrypoint[14604]:         at org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:204)
Mar 05 00:00:01 server-siem systemd-entrypoint[14604]:         at org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:242)
Mar 05 00:00:01 server-siem systemd-entrypoint[14604]:         at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:747)
Mar 05 00:00:01 server-siem systemd-entrypoint[14604]:         at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedOpenSearchThreadPoolExecutor.java:282)
Mar 05 00:00:01 server-siem systemd-entrypoint[14604]:         at org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedOpenSearchThreadPoolExecutor.java:245)
Mar 05 00:00:01 server-siem systemd-entrypoint[14604]:         at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
Mar 05 00:00:01 server-siem systemd-entrypoint[14604]:         at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
Mar 05 00:00:01 server-siem systemd-entrypoint[14604]:         at java.base/java.lang.Thread.run(Thread.java:833)

Antonio David Gutiérrez

Mar 6, 2024, 6:01:29 AM
to Wazuh | Mailing List
Hi German,

According to the provided information, it seems you have reached the shard limit on the Wazuh indexer node, so new events cannot be indexed and therefore cannot be seen in the Wazuh dashboard.

Related log (from the Wazuh plugin for the Wazuh dashboard):

{"data":{"message":"validation_exception: [validation_exception] Reason: Validation Failed: 1: this action would add [1] total shards, but this cluster currently has [1000]/[1000] maximum shards open;","stack":"ResponseError: validation_exception: [validation_exception] Reason: Validation Failed: 1: this action would add [1] total shards, but this cluster currently has [1000]/[1000] maximum shards open;\n    at onBody (/usr/share/wazuh-dashboard/node_modules/@opensearch-project/opensearch/lib/Transport.js:374:23)\n    at IncomingMessage.onEnd (/usr/share/wazuh-dashboard/node_modules/@opensearch-project/opensearch/lib/Transport.js:293:11)\n    at IncomingMessage.emit (node:events:525:35)\n    at IncomingMessage.emit (node:domain:489:12)\n    at endReadableNT (node:internal/streams/readable:1358:12)\n    at processTicksAndRejections (node:internal/process/task_queues:83:21)"},"date":"2024-02-26T12:15:00.094Z","level":"info","location":"Cron-scheduler"}

This means the shard limit was reached (1000 per node by default). There are multiple options to fix this issue:

- Delete indices. This frees shards. You could do it with old indices you don't want or need, or you could even automate it with ISM (Index State Management) policies that delete old indices after some time, as explained in this post: https://wazuh.com/blog/wazuh-index-management (the blog post is old, so some things could look different; see the section related to ISM).

Automating index deletion through ISM (Index State Management) policies is recommended because it reduces manual maintenance; a minimal sketch of both approaches follows.
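
For example, listing indices and deleting old ones manually could look like this (the localhost address and the 2023.01 date pattern are only illustrative; double-check any pattern before deleting, as this is irreversible, and if wildcard deletes are disabled in your cluster, delete the indices by full name instead):

  # List the alerts indices sorted by name to identify the old ones
  curl -k -u <USERNAME>:<PASSWORD> "https://localhost:9200/_cat/indices/wazuh-alerts-*?v&s=index"

  # Delete one month of old alerts indices
  curl -k -u <USERNAME>:<PASSWORD> -XDELETE "https://localhost:9200/wazuh-alerts-4.x-2023.01.*"

And a minimal ISM policy sketch (the policy name delete_old_wazuh_alerts and the 90d retention are just examples; the Wazuh indexer exposes ISM under the OpenSearch _plugins/_ism path):

  curl -k -u <USERNAME>:<PASSWORD> -XPUT "https://localhost:9200/_plugins/_ism/policies/delete_old_wazuh_alerts" \
  -H "Content-Type: application/json" \
  -d '{
    "policy": {
      "description": "Delete wazuh-alerts indices older than 90 days",
      "default_state": "hot",
      "states": [
        {
          "name": "hot",
          "actions": [],
          "transitions": [{ "state_name": "delete", "conditions": { "min_index_age": "90d" } }]
        },
        {
          "name": "delete",
          "actions": [{ "delete": {} }],
          "transitions": []
        }
      ],
      "ism_template": [{ "index_patterns": ["wazuh-alerts-4.x-*"], "priority": 100 }]
    }
  }'

Note that the ism_template part only attaches the policy to indices created after the policy exists; already existing indices need to be attached manually (or simply deleted by hand as above).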

- Add more nodes to your Wazuh indexer cluster.

- Increase the maximum shards per node (not recommended). If you do choose this option, make sure you do not increase it too much, as it could cause instability and performance issues in your Elasticsearch/Wazuh indexer cluster. To do this:

  curl -k -u <USERNAME>:<PASSWORD> -XPUT <WAZUH_INDEXER_HOST_ADDRESS>/_cluster/settings -H "Content-Type: application/json" \
  -d '{ "persistent": { "cluster.max_shards_per_node": "<MAX_SHARDS_PER_NODE>" } }'

  Replace the placeholders, where:
  - <USERNAME>: username to do the request
  - <PASSWORD>: password for the user
  - <WAZUH_INDEXER_HOST_ADDRESS>: Wazuh indexer host address. Include the https protocol if needed.
  - <MAX_SHARDS_PER_NODE>: maximum shards per node. You could try 1200 or something like that, depending on your case.
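
  For instance, with a hypothetical admin user on a local single-node indexer (credentials and the 1200 value are placeholders for your own environment), the request would look like:

  curl -k -u admin:MySecretPassword -XPUT https://localhost:9200/_cluster/settings \
  -H "Content-Type: application/json" \
  -d '{ "persistent": { "cluster.max_shards_per_node": "1200" } }'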

- Reduce the shards consumed by the indices: reduce the shards of existing indices and, where possible, configure lower counts for new ones (a minimal sketch follows this list).
  - wazuh-alerts-4.x-* indices: https://documentation.wazuh.com/current/user-manual/elasticsearch/elastic-tuning.html#shards-and-replicas.
  - wazuh-monitoring-* and wazuh-statistics-* indices: these can be configured in the Wazuh plugin settings from the UI (Settings/Configuration) or through the wazuh.yml configuration file.
  - General guidance: https://opster.com/guides/elasticsearch/capacity-planning/elasticsearch-reduce-shards/.
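
As a sketch for the existing-indices part: on a single-node cluster, replica shards can never be assigned anyway but, as far as I know, they still count against the shard limit, so setting the replica count of the alerts indices to 0 should free one shard per primary. This assumes a single-node deployment; on a multi-node cluster, keep the replicas for redundancy:

  curl -k -u <USERNAME>:<PASSWORD> -XPUT "https://localhost:9200/wazuh-alerts-4.x-*/_settings" \
  -H "Content-Type: application/json" \
  -d '{ "index": { "number_of_replicas": 0 } }'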

More info: https://www.elastic.co/blog/how-many-shards-should-i-have-in-my-elasticsearch-cluster.

German DiCasas

Mar 7, 2024, 3:36:41 PM
to Wazuh | Mailing List
thanks a lot

I did the first item: I deleted some old alerts and monitoring indices, and it works OK. It is a temporary fix that will give me time to create the policy.

Thanks Antonio

Regards

German
