Auditlog not working properly after restart


Andrehens Chicfici

Oct 17, 2024, 4:48:24 AM
to Wazuh | Mailing List
Hey,
I updated Ubuntu via apt update && apt upgrade. The packages were:

The following NEW packages will be installed:

  linux-headers-6.8.0-47 linux-headers-6.8.0-47-generic linux-image-6.8.0-47-generic linux-modules-6.8.0-47-generic linux-modules-extra-6.8.0-47-generic linux-tools-6.8.0-47
  linux-tools-6.8.0-47-generic

The following upgrades have been deferred due to phasing:
  initramfs-tools initramfs-tools-bin initramfs-tools-core

The following packages will be upgraded:
  binutils binutils-common binutils-x86-64-linux-gnu gcc-14-base libarchive13t64 libasan8 libatomic1 libbinutils libcc1-0 libctf-nobfd0 libctf0 libgcc-s1 libgomp1 libgprofng0 libhwasan0
  libitm1 liblsan0 libquadmath0 libsframe1 libstdc++6 libtsan2 libubsan1 linux-generic linux-headers-generic linux-image-generic linux-libc-dev linux-tools-common nano snapd

So no new Wazuh packages were installed. After the recommended reboot, I see the following messages when I check my wazuh-cluster.log. The warnings about the audit log status in particular concern me.

 tail -n 99999 /var/log/wazuh-indexer/wazuh-cluster.log | grep -i -E "error|warn|critical|fatal"

[2024-10-17T07:09:54,467][INFO ][o.o.n.Node               ] [node-1] JVM arguments [-Xshare:auto, -Dopensearch.networkaddress.cache.ttl=60, -Dopensearch.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.security.manager=allow, -Djava.locale.providers=SPI,COMPAT, -Xms24g, -Xmx24g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/var/log/wazuh-indexer/tmp, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/wazuh-indexer, -XX:ErrorFile=/var/log/wazuh-indexer/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/wazuh-indexer/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.security.manager=allow, -Djava.util.concurrent.ForkJoinPool.common.threadFactory=org.opensearch.secure_sm.SecuredForkJoinWorkerThreadFactory, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Djava.security.policy=file:///etc/wazuh-indexer/opensearch-performance-analyzer/opensearch_security.policy, --add-opens=jdk.attach/sun.tools.attach=ALL-UNNAMED, -XX:MaxDirectMemorySize=12884901888, -Dopensearch.path.home=/usr/share/wazuh-indexer, -Dopensearch.path.conf=/etc/wazuh-indexer, -Dopensearch.distribution.type=deb, -Dopensearch.bundled_jdk=true]

[2024-10-17T07:09:59,977][WARN ][o.o.s.c.Salt             ] [node-1] If you plan to use field masking pls configure compliance salt e1ukloTsQlOgPquJ to be a random string of 16 chars length identical on all nodes

[2024-10-17T07:10:00,002][ERROR][o.o.s.a.s.SinkProvider   ] [node-1] Default endpoint could not be created, auditlog will not work properly.

[2024-10-17T07:10:00,003][WARN ][o.o.s.a.r.AuditMessageRouter] [node-1] No default storage available, audit log may not work properly. Please check configuration.

[2024-10-17T07:10:01,205][WARN ][o.o.s.p.SQLPlugin        ] [node-1] Master key is a required config for using create and update datasource APIs. Please set plugins.query.datasources.encryption.masterkey config in opensearch.yml in all the cluster nodes. More details can be found here: https://github.com/opensearch-project/sql/blob/main/docs/user/ppl/admin/datasources.rst#master-key-config-for-encrypting-credential-information

[2024-10-17T07:10:02,258][WARN ][o.o.g.DanglingIndicesState] [node-1] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually

[2024-10-17T07:10:03,397][WARN ][o.o.p.c.s.h.ConfigOverridesClusterSettingHandler] [node-1] Config override setting update called with empty string. Ignoring.

[2024-10-17T07:10:03,657][WARN ][o.o.o.i.ObservabilityIndex] [node-1] message: index [.opensearch-observability/1pRjyzrgRjSJ3VEWIq7KQQ] already exists

[2024-10-17T07:10:03,660][WARN ][o.o.s.SecurityAnalyticsPlugin] [node-1] Failed to initialize LogType config index and builtin log types

[2024-10-17T07:10:04,260][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)

[2024-10-17T07:10:06,022][WARN ][r.suppressed             ] [node-1] path: /.kibana/_count, params: {index=.kibana}

[2024-10-17T07:10:08,538][WARN ][r.suppressed             ] [node-1] path: /.kibana/_count, params: {index=.kibana}

[2024-10-17T07:10:11,090][WARN ][r.suppressed             ] [node-1] path: /.kibana/_count, params: {index=.kibana}

[2024-10-17T07:10:13,599][WARN ][r.suppressed             ] [node-1] path: /.kibana/_count, params: {index=.kibana}

[2024-10-17T07:10:16,112][WARN ][r.suppressed             ] [node-1] path: /.kibana/_count, params: {index=.kibana}

[2024-10-17T07:10:18,620][WARN ][r.suppressed             ] [node-1] path: /.kibana/_count, params: {index=.kibana}



Is this something to worry about? How can I investigate further whether my instance is working correctly?


cheers

chic

Mohamed El Amine Gaoudi

Oct 22, 2024, 12:52:25 AM
to Wazuh | Mailing List
Hi there, 

It seems that a few things are going wrong here; perhaps a limit was reached, or indexing is failing due to a lack of disk space on the device.

To investigate further, would you be able to share the output of the following commands with me?

tail -n 100 /var/log/filebeat/filebeat
tail -n 100 /var/log/wazuh-indexer/wazuh-cluster.log
df -h

Andrehens Chicfici

Oct 23, 2024, 9:32:16 AM
to Mohamed El Amine Gaoudi, Wazuh | Mailing List
tail -n 100 /var/log/filebeat/filebeat
2024-10-23T12:18:58.816+0200    INFO    instance/beat.go:645    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2024-10-23T12:18:58.819+0200    INFO    instance/beat.go:653    Beat ID: 370b80af-991d-4d0d-8cd6-cc6dc0df819a
2024-10-23T12:18:58.820+0200    INFO    [seccomp]       seccomp/seccomp.go:124  Syscall filter successfully installed
2024-10-23T12:18:58.821+0200    INFO    [beat]  instance/beat.go:981    Beat info       {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "370b80af-991d-4d0d-8cd6-cc6dc0df819a"}}}
2024-10-23T12:18:58.821+0200    INFO    [beat]  instance/beat.go:990    Build info      {"system_info": {"build": {"commit": "aacf9ecd9c494aa0908f61fbca82c906b16562a8", "libbeat": "7.10.2", "time": "2021-01-12T22:10:33.000Z", "version": "7.10.2"}}}
2024-10-23T12:18:58.821+0200    INFO    [beat]  instance/beat.go:993    Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":8,"version":"go1.14.12"}}}
2024-10-23T12:18:58.822+0200    INFO    [beat]  instance/beat.go:997    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2024-10-23T12:18:51+02:00","containerized":false,"name":"wazuh","ip":["127.0.0.1/8","::1/128","XXX.XXX.XXX.XXX/24","fe80::123:45ff:feb6:1232/64"],"kernel_version":"6.8.0-47-generic","mac":["00:50:56:b5:31:32"],"os":{"family":"debian","platform":"ubuntu","name":"Ubuntu","version":"24.04.1 LTS (Noble Numbat)","major":24,"minor":4,"patch":1,"codename":"noble"},"timezone":"CEST","timezone_offset_sec":7200,"id":"9a1551095a39457d8b1c07dd1c764b10"}}}
2024-10-23T12:18:58.822+0200    INFO    [beat]  instance/beat.go:1026   Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"ambient":null}, "cwd": "/", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 1081, "ppid": 1, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2024-10-23T12:18:57.380+0200"}}}
2024-10-23T12:18:58.822+0200    INFO    instance/beat.go:299    Setup Beat: filebeat; Version: 7.10.2
2024-10-23T12:18:58.826+0200    INFO    eslegclient/connection.go:99    elasticsearch url: https:// XXX.XXX.XXX.XXX:9200
2024-10-23T12:18:58.827+0200    INFO    [publisher]     pipeline/module.go:113  Beat name: wazuh
2024-10-23T12:18:58.839+0200    INFO    beater/filebeat.go:117  Enabled modules/filesets: wazuh (alerts),  ()
2024-10-23T12:18:58.840+0200    INFO    instance/beat.go:455    filebeat start running.
2024-10-23T12:18:58.844+0200    INFO    memlog/store.go:119     Loading data file of '/var/lib/filebeat/registry/filebeat' succeeded. Active transaction id=2977265
2024-10-23T12:18:59.067+0200    INFO    memlog/store.go:124     Finished loading transaction log file for '/var/lib/filebeat/registry/filebeat'. Active transaction id=2985209
2024-10-23T12:18:59.067+0200    INFO    [registrar]     registrar/registrar.go:109      States Loaded from registrar: 1
2024-10-23T12:18:59.067+0200    INFO    [crawler]       beater/crawler.go:71    Loading Inputs: 1
2024-10-23T12:18:59.068+0200    INFO    log/input.go:157        Configured paths: [/var/ossec/logs/alerts/alerts.json]
2024-10-23T12:18:59.068+0200    INFO    [crawler]       beater/crawler.go:141   Starting input (ID: 9132358592892857476)
2024-10-23T12:18:59.069+0200    INFO    [crawler]       beater/crawler.go:108   Loading and starting Inputs completed. Enabled inputs: 1
2024-10-23T12:18:59.071+0200    INFO    log/harvester.go:302    Harvester started for file: /var/ossec/logs/alerts/alerts.json
2024-10-23T12:19:00.073+0200    INFO    [publisher_pipeline_output]     pipeline/output.go:143  Connecting to backoff(elasticsearch(https:// XXX.XXX.XXX.XXX:9200))
2024-10-23T12:19:00.073+0200    INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2024-10-23T12:19:00.073+0200    INFO    [publisher]     pipeline/retry.go:223     done
2024-10-23T12:19:01.591+0200    ERROR   [publisher_pipeline_output]     pipeline/output.go:154  Failed to connect to backoff(elasticsearch(https://XXX.XXX.XXX.XXX:9200)): Get "https://XXX.XXX.XXX.XXX :9200": dial tcp XXX.XXX.XXX.XXX :9200: connect: connection refused
2024-10-23T12:19:01.591+0200    INFO    [publisher_pipeline_output]     pipeline/output.go:145  Attempting to reconnect to backoff(elasticsearch(https://XXX.XXX.XXX.XXX :9200)) with 1 reconnect attempt(s)
2024-10-23T12:19:01.591+0200    INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2024-10-23T12:19:01.591+0200    INFO    [publisher]     pipeline/retry.go:223     done
2024-10-23T12:19:04.052+0200    ERROR   [publisher_pipeline_output]     pipeline/output.go:154  Failed to connect to backoff(elasticsearch( https://XXX.XXX.XXX.XXX:9200)): Get " https://XXX.XXX.XXX.XXX:9200": dial tcp wolbert:9200: connect: connection refused
2024-10-23T12:19:04.052+0200    INFO    [publisher_pipeline_output]     pipeline/output.go:145  Attempting to reconnect to backoff(elasticsearch(https://wolbert:9200)) with 2 reconnect attempt(s)
2024-10-23T12:19:04.052+0200    INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2024-10-23T12:19:04.052+0200    INFO    [publisher]     pipeline/retry.go:223     done
2024-10-23T12:19:09.584+0200    ERROR   [publisher_pipeline_output]     pipeline/output.go:154  Failed to connect to backoff(elasticsearch(https://wolbert:9200)): Get "https://wolbert:9200": dial tcp wolbert:9200: connect: connection refused
2024-10-23T12:19:09.584+0200    INFO    [publisher_pipeline_output]     pipeline/output.go:145  Attempting to reconnect to backoff(elasticsearch(https://wolbert:9200)) with 3 reconnect attempt(s)
2024-10-23T12:19:09.585+0200    INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2024-10-23T12:19:09.585+0200    INFO    [publisher]     pipeline/retry.go:223     done
2024-10-23T12:19:18.865+0200    ERROR   [publisher_pipeline_output]     pipeline/output.go:154  Failed to connect to backoff(elasticsearch(https://wolbert:9200)): Get "https://wolbert:9200": dial tcp wolbert:9200: connect: connection refused
2024-10-23T12:19:18.865+0200    INFO    [publisher_pipeline_output]     pipeline/output.go:145  Attempting to reconnect to backoff(elasticsearch(https://wolbert:9200)) with 4 reconnect attempt(s)
2024-10-23T12:19:18.865+0200    INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2024-10-23T12:19:18.865+0200    INFO    [publisher]     pipeline/retry.go:223     done
2024-10-23T12:19:47.553+0200    ERROR   [publisher_pipeline_output]     pipeline/output.go:154  Failed to connect to backoff(elasticsearch(https://wolbert:9200)): 503 Service Unavailable: OpenSearch Security not initialized.
2024-10-23T12:19:47.553+0200    INFO    [publisher_pipeline_output]     pipeline/output.go:145  Attempting to reconnect to backoff(elasticsearch(https://wolbert:9200)) with 5 reconnect attempt(s)
2024-10-23T12:19:47.553+0200    INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2024-10-23T12:19:47.553+0200    INFO    [publisher]     pipeline/retry.go:223     done
2024-10-23T12:19:47.790+0200    INFO    [esclientleg]   eslegclient/connection.go:314   Attempting to connect to Elasticsearch version 7.10.2
2024-10-23T12:19:47.797+0200    INFO    [esclientleg]   eslegclient/connection.go:314   Attempting to connect to Elasticsearch version 7.10.2
2024-10-23T12:19:47.804+0200    INFO    template/load.go:97     Template wazuh already exists and will not be overwritten.
2024-10-23T12:19:47.804+0200    INFO    [index-management]      idxmgmt/std.go:298      Loaded index template.
2024-10-23T12:19:47.805+0200    INFO    [publisher_pipeline_output]     pipeline/output.go:151  Connection to backoff(elasticsearch(https://wolbert:9200)) established

tail -n 100 /var/log/wazuh-indexer/wazuh-cluster.log
[2024-10-23T12:46:19,544][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:46:49,552][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:47:19,557][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:47:49,563][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:48:19,569][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:48:49,574][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:49:17,227][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2024-10-23T12:49:17,481][INFO ][o.o.s.s.c.FlintStreamingJobHouseKeeperTask] [node-1] Starting housekeeping task for auto refresh streaming jobs.
[2024-10-23T12:49:17,481][INFO ][o.o.s.s.c.FlintStreamingJobHouseKeeperTask] [node-1] Finished housekeeping task for auto refresh streaming jobs.
[2024-10-23T12:49:19,579][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:49:49,585][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:50:19,591][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:50:49,596][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:51:19,602][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:51:49,608][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:52:19,613][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:52:49,619][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:53:19,625][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:53:49,630][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:54:17,227][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2024-10-23T12:54:19,636][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:54:49,642][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:55:19,647][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:55:49,653][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:56:19,658][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:56:49,664][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:57:19,674][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:57:49,680][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:58:19,685][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:58:49,691][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:59:17,228][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2024-10-23T12:59:19,698][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T12:59:49,705][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:00:00,703][INFO ][o.o.c.m.MetadataUpdateSettingsService] [node-1] updating number_of_replicas to [0] for indices [wazuh-monitoring-2024.43w]
[2024-10-23T13:00:19,711][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:00:49,717][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:01:19,723][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:01:49,730][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:02:19,737][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:02:49,743][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:03:19,752][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:03:49,758][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:04:17,229][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2024-10-23T13:04:17,482][INFO ][o.o.s.s.c.FlintStreamingJobHouseKeeperTask] [node-1] Starting housekeeping task for auto refresh streaming jobs.
[2024-10-23T13:04:17,482][INFO ][o.o.s.s.c.FlintStreamingJobHouseKeeperTask] [node-1] Finished housekeeping task for auto refresh streaming jobs.
[2024-10-23T13:04:19,764][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:04:49,771][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:05:19,778][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:05:49,783][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:06:19,789][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:06:49,795][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:07:19,801][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:07:49,807][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:08:19,812][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:08:49,818][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:09:17,229][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2024-10-23T13:09:19,824][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:09:49,834][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:10:19,841][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:10:49,848][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:11:19,853][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:11:49,859][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:12:19,864][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:12:49,870][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:13:19,876][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:13:49,882][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:14:17,229][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2024-10-23T13:14:19,887][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:14:49,893][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:15:01,608][INFO ][o.o.c.m.MetadataUpdateSettingsService] [node-1] updating number_of_replicas to [0] for indices [wazuh-monitoring-2024.43w]
[2024-10-23T13:15:19,899][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:15:49,904][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:16:19,908][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:16:49,914][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:17:19,920][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:17:49,925][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:18:19,931][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:18:49,936][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:19:17,229][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2024-10-23T13:19:17,417][INFO ][o.o.a.t.CronTransportAction] [node-1] Start running AD hourly cron.
[2024-10-23T13:19:17,417][INFO ][o.o.a.t.ADTaskManager    ] [node-1] Start to maintain running historical tasks
[2024-10-23T13:19:17,417][INFO ][o.o.a.c.HourlyCron       ] [node-1] Hourly maintenance succeeds
[2024-10-23T13:19:17,483][INFO ][o.o.s.s.c.FlintStreamingJobHouseKeeperTask] [node-1] Starting housekeeping task for auto refresh streaming jobs.
[2024-10-23T13:19:17,483][INFO ][o.o.s.s.c.FlintStreamingJobHouseKeeperTask] [node-1] Finished housekeeping task for auto refresh streaming jobs.
[2024-10-23T13:19:19,940][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:19:49,946][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:20:19,953][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:20:49,959][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:21:19,965][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:21:49,970][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:22:19,975][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:22:49,980][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:23:19,985][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:23:49,990][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:24:17,230][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2024-10-23T13:24:19,995][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:24:50,000][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:25:20,006][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:25:50,012][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.
[2024-10-23T13:26:20,018][WARN ][o.o.c.r.a.DiskThresholdMonitor] [node-1] Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark: 1.

df -h
Filesystem                         Size  Used Avail Use% Mounted on
tmpfs                              4,8G  1,2M  4,8G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  588G  437G  123G  79% /
tmpfs                               24G   80K   24G   1% /dev/shm
tmpfs                              5,0M     0  5,0M   0% /run/lock
/dev/sda2                          2,0G  182M  1,7G  10% /boot
tmpfs                              4,8G   12K  4,8G   1% /run/user/1000

Md. Nazmur Sakib

Dec 12, 2024, 6:20:17 AM
to Wazuh | Mailing List

Based on this error:

Putting index create block on cluster as all nodes are breaching high disk watermark. Number of nodes above high watermark

It seems the issue is disk space.

You can delete some old indices and check whether this resolves the issue.
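Before deleting anything, you can list your indices sorted by size to see which ones take the most space (standard _cat API; adjust credentials and host to your setup):

```shell
curl -k -u admin:<password> "https://localhost:9200/_cat/indices?v&s=store.size:desc"
```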

You can try to remove old indices:


curl -XDELETE -k -u admin:<password> https://localhost:9200/wazuh-alerts-<date>


Or add more resources to your indexer node:


https://documentation.wazuh.com/current/user-manual/wazuh-indexer-cluster.html#adding-wazuh-indexer-nodes



Further, you can check the Wazuh documentation on index lifecycle management to manage your indices automatically in the future: https://documentation.wazuh.com/current/user-manual/wazuh-indexer/index-life-management.html
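As a rough sketch of such automation (the index pattern and 90-day retention here are placeholders, not a recommendation), an OpenSearch ISM policy that deletes old wazuh-alerts indices could look like:

```json
{
  "policy": {
    "description": "Delete wazuh-alerts indices after 90 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "90d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": [
      { "index_patterns": ["wazuh-alerts-*"], "priority": 100 }
    ]
  }
}
```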

Also, you can follow this document to fix watermark errors:

https://www.elastic.co/guide/en/elasticsearch/reference/8.x/fix-watermark-errors.html
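To check your current watermark settings, and to clear the read-only block that the flood-stage watermark can leave on indices once you have freed disk space, something along these lines should work (the index-create block itself is removed automatically once usage drops back below the high watermark):

```shell
# Show the current disk-based shard allocation settings
curl -k -u admin:<password> "https://localhost:9200/_cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk*"

# After freeing space, clear any flood-stage read-only blocks on indices
curl -k -u admin:<password> -X PUT "https://localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```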


I hope you find this information useful.