Custom decoder and rules


Matthias A

May 16, 2024, 5:06:06 AM
to Wazuh | Mailing List
Hi,


I made a decoder and a rule. They work in the Logtest module, but the events are not getting indexed. What could cause this? (Other decoders and rules I made work fine; only this one does not.)


**Phase 1: Completed pre-decoding.
        full event: 'Apr 17 10:54:31 FW-11111-DC-01 1/1111/FW-111-DC-01/srv_FW-211-DC-1101_FW-111-DC-0111VPNDC:  Notice   FW-111-DC-01 Session PGRP-AUTH-16360abfzefezzfe32-b913-fbfba810c644-MNMTGK: Accounting LOGIN - user=jdd...@ddddd.be client=Vanilla IP=111.11.111.111 start="2024/04/17 10:54:31" VirtualIP="111.11.111.1111"'
        timestamp: 'Apr 17 10:54:31'
        hostname: 'FW-11111-DC-01'
        program_name: '1/1111/FW-111-DC-01/srv_FW-211-DC-1101_FW-111-DC-0111VPNDC'

**Phase 2: Completed decoding.
        name: 'BarracudaVPN'
        client: 'Vanilla'
        description: 'PGRP-AUTH-16360abfzefezzfe32-b913-fbfba810c644-MNMTGK: Accounting LOGIN'
        device: 'FW-111-DC-01'
        dstuser: 'jdd...@ddddd.be'
        ip-address: '111.11.111.111'
        severity: 'Notice'
        timestamp: '2024/04/17 10:54:31'
        virtual-ip: '111.11.111.1111'

**Phase 3: Completed filtering (rules).
        id: '201111'
        level: '7'
        description: 'Barracuda FW: VPN login'
        groups: '['Barracudavpnlogin']'
        firedtimes: '1'
        mail: 'False'
**Alert to be generated.



These are my decoders:


<decoder name="BarracudaVPN">
<program_name type="pcre2">\S*VPN\S*</program_name>
</decoder>

<decoder name="Barracuda-child">
  <parent>BarracudaVPN</parent>
  <regex offset="after_parent"> (\w+)\s+(\S+)\s\w+ </regex>
  <order>severity,device</order>
</decoder>



<decoder name="Barracuda-child">
  <parent>BarracudaVPN</parent>
  <regex offset="after_parent">(\S+:\s\w+\s\w+)\s-\s\w+=(\S+)\s\w+=(\S+)\s</regex>
  <order>description, user, client</order>
</decoder>

<decoder name="Barracuda-child">
  <parent>BarracudaVPN</parent>
  <regex offset="after_parent">(\d+.\d+.\d+.\d+)\s\w+="(\S+\s\S+)"\s\w+="(\d+.\d+.\d+.\d+)"</regex>
  <order>ip-address, timestamp, virtual-ip</order>
</decoder>


This is my rule:

<group name="Barracudavpnlogin">
  <rule id="201111" level="7">
   <decoded_as>BarracudaVPN</decoded_as>
   <match>Accounting LOGIN</match>
   <description>Barracuda FW: VPN login</description>
  </rule>
</group>
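
For reference, the same check can also be run from the command line (a quick sketch, assuming the default install path; replace the placeholder with the full event from Phase 1 above):

echo '<full event from Phase 1>' | /var/ossec/bin/wazuh-logtest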

Matthias A

May 16, 2024, 5:09:59 AM
to Wazuh | Mailing List
By the way, it is also coming through in archives.log, so I am actually receiving the events.

On Thursday, May 16, 2024 at 11:06:06 UTC+2, Matthias A wrote:

Matthias A

May 16, 2024, 5:11:45 AM
to Wazuh | Mailing List
And it also shows up in alerts.log.
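
A quick way to confirm this (a sketch using my rule id and the default log paths):

grep '"id":"201111"' /var/ossec/logs/alerts/alerts.json
grep 'Barracuda FW: VPN login' /var/ossec/logs/alerts/alerts.log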

On Thursday, May 16, 2024 at 11:09:59 UTC+2, Matthias A wrote:

Stuti Gupta

May 17, 2024, 3:29:00 AM
to Wazuh | Mailing List
Hi Matthias A,

Could you please check /var/log/filebeat/filebeat for any mapping issues? If there are no errors or warnings in Filebeat, please ensure that the Wazuh indexer is active.
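
For example, you could filter that log for problems like this (just a sketch; the path above is the default one):

grep -i -E "error|warn|mapping" /var/log/filebeat/filebeat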

Additionally, execute the following command to verify that indices are being created:
curl https://<WAZUH_INDEXER_IP>:9200/_cat/indices/wazuh-alerts-* -u <wazuh_indexer_user>:<wazuh_indexer_password> -k

Check the cluster health with:

curl -XGET -k -u user:pass "https://localhost:9200/_cluster/health?pretty"
If the cluster health is not green and the indices are not being created, please share the Wazuh indexer log: cat /var/log/wazuh-indexer/wazuh-cluster.log

Hope to hear from you soon.

Matskoow

May 17, 2024, 3:51:49 AM
to Wazuh | Mailing List
There is actually an issue in the Filebeat log file:


2024-05-17T09:48:54.212+0200    INFO    [publisher]     pipeline/retry.go:223     done
2024-05-17T09:48:54.212+0200    INFO    [publisher_pipeline_output]     pipeline/output.go:145  Attempting to reconnect to backoff(elasticsearch(https://127.0.0.1:9200)) with 1 reconnect attempt(s)
2024-05-17T09:48:57.007+0200    ERROR   [publisher_pipeline_output]     pipeline/output.go:154  Failed to connect to backoff(elasticsearch(https://127.0.0.1:9200)): Get "https://127.0.0.1:9200": dial tcp 127.0.0.1:9200: connect: connection refused
2024-05-17T09:48:57.007+0200    INFO    [publisher_pipeline_output]     pipeline/output.go:145  Attempting to reconnect to backoff(elasticsearch(https://127.0.0.1:9200)) with 2 reconnect attempt(s)
2024-05-17T09:48:57.008+0200    INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2024-05-17T09:48:57.008+0200    INFO    [publisher]     pipeline/retry.go:223     done
2024-05-17T09:49:04.716+0200    ERROR   [publisher_pipeline_output]     pipeline/output.go:154  Failed to connect to backoff(elasticsearch(https://127.0.0.1:9200)): Get "https://127.0.0.1:9200": dial tcp 127.0.0.1:9200: connect: connection refused
2024-05-17T09:49:04.716+0200    INFO    [publisher_pipeline_output]     pipeline/output.go:145  Attempting to reconnect to backoff(elasticsearch(https://127.0.0.1:9200)) with 3 reconnect attempt(s)
2024-05-17T09:49:04.716+0200    INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2024-05-17T09:49:04.716+0200    INFO    [publisher]     pipeline/retry.go:223     done
2024-05-17T09:49:17.978+0200    ERROR   [publisher_pipeline_output]     pipeline/output.go:154  Failed to connect to backoff(elasticsearch(https://127.0.0.1:9200)): Get "https://127.0.0.1:9200": dial tcp 127.0.0.1:9200: connect: connection refused
2024-05-17T09:49:17.979+0200    INFO    [publisher_pipeline_output]     pipeline/output.go:145  Attempting to reconnect to backoff(elasticsearch(https://127.0.0.1:9200)) with 4 reconnect attempt(s)
2024-05-17T09:49:17.979+0200    INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2024-05-17T09:49:17.980+0200    INFO    [publisher]     pipeline/retry.go:223     done
2024-05-17T09:49:37.227+0200    ERROR   [publisher_pipeline_output]     pipeline/output.go:154  Failed to connect to backoff(elasticsearch(https://127.0.0.1:9200)): Get "https://127.0.0.1:9200": dial tcp 127.0.0.1:9200: connect: connection refused
2024-05-17T09:49:37.227+0200    INFO    [publisher_pipeline_output]     pipeline/output.go:145  Attempting to reconnect to backoff(elasticsearch(https://127.0.0.1:9200)) with 5 reconnect attempt(s)
2024-05-17T09:49:37.227+0200    INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2024-05-17T09:49:37.227+0200    INFO    [publisher]     pipeline/retry.go:223     done
root@WAZUH-VM:/home/matthias#
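
These are the checks I plan to run next to see whether anything is listening on 9200 at all (just my guess at the right commands):

systemctl status wazuh-indexer --no-pager
ss -tlnp | grep 9200
curl -k https://127.0.0.1:9200    # even a 401 here would mean the indexer is answering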

On Friday, May 17, 2024 at 09:29:00 UTC+2, Stuti Gupta wrote:

Stuti Gupta

May 20, 2024, 5:08:29 AM
to Wazuh | Mailing List
Hi,

Can you please share the `archives.json` log related to this issue? This log contains the full details of the data being parsed, especially the information in the `full_log` field.

Regarding the error:


ERROR   [publisher_pipeline_output] pipeline/output.go:154  Failed to connect to backoff(elasticsearch(https://127.0.0.1:9200)): Get "https://127.0.0.1:9200": dial tcp 127.0.0.1:9200: connect: connection refused


The error message in Filebeat indicates that it cannot connect to Elasticsearch. This means Filebeat is trying to send data to Elasticsearch at `https://127.0.0.1:9200`, but it is unable to establish a connection. To troubleshoot this error, please follow these steps:

1. Verify Alerts in Wazuh Dashboard
    Check if any alerts are appearing in the Wazuh dashboard.

2. Check Indexer Status:
    Ensure that the indexer is active by checking its status with the command: systemctl status wazuh-indexer
    

3. Test Filebeat Configuration
    Ensure that Filebeat is correctly configured by running the following command: filebeat test output
     The expected output should confirm the connection to the Wazuh indexer:
     elasticsearch: https://127.0.0.1:9200...
       parse url... OK
       connection...
         parse host... OK
         dns lookup... OK
         addresses: 127.0.0.1
         dial up... OK
       TLS...
         security: server's certificate chain verification is enabled
         handshake... OK
         TLS version: TLSv1.3
         dial up... OK
       talk to server... OK
       version: 7.10.2


4. Verify Elasticsearch Configuration
   Confirm that the Wazuh indexer is configured to listen on `127.0.0.1:9200`. Check the configuration file `/etc/wazuh-indexer/opensearch.yml` and ensure it contains: network.host: 127.0.0.1
    
5. Check Indexer Logs:
    Check the Wazuh indexer logs for any errors or issues that might explain why it is not accepting connections. The log is typically located at /var/log/wazuh-indexer/wazuh-cluster.log and can be filtered with: cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i -E "error|warn"
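
Putting steps 2 to 5 together, you can run everything at once (the paths are the defaults and may differ on your system):

systemctl status wazuh-indexer --no-pager
filebeat test output
grep network.host /etc/wazuh-indexer/opensearch.yml
grep -i -E "error|warn" /var/log/wazuh-indexer/wazuh-cluster.log | tail -n 50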


Hope to hear from you soon.

Matskoow

May 21, 2024, 5:13:05 AM
to Wazuh | Mailing List
Hi Stuti,

Thanks for getting back.

I also want to share this thread; maybe it has something to do with my issue: https://groups.google.com/g/wazuh/c/QGXzmkmHryg



This is the full log:

May 17 10:47:39 FW-111-DD-11 1/111IT/FW-111-DC-01/srv_FW-111-DC-01_FW-111-DC-01VPNDC:  Notice   FW-111-DD-11 Session PGRP-AUTH-DDDDDD-5aff-4569-87ba-jfzfzoo222-MNMTGK: Accounting LOGIN - user=te...@test.com client=Vanilla IP=111.111.111.111 start="2024/05/17 10:47:39" VirtualIP="111.111.111.111"




1. Verify Alerts in Wazuh Dashboard
The dashboard looks normal; the only log missing is the one mentioned above.


2. Check Indexer Status:
    Ensure that the indexer is active by checking its status with the command: systemctl status wazuh-indexer
active (running)
    

3. Test Filebeat Configuration

    Ensure that Filebeat is correctly configured by running the following command:     filebeat test output
     The expected output should confirm the connection to  wazuh-indexer:


This is my output: 

elasticsearch: https://127.0.0.1:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 127.0.0.1
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.2

    dial up... OK
  talk to server... OK
  version: 7.10.2

4. Verify Elasticsearch Configuration
   Confirm that the Wazuh indexer is configured to listen on `127.0.0.1:9200`. Check the configuration file `/etc/wazuh-indexer/opensearch.yml` and ensure it contains: network.host: 127.0.0.1

It contains this:
network.host: "127.0.0.1"
node.name: "node-1"
cluster.initial_master_nodes:
- "node-1"

    
5. Check Indexer Logs:

 cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i -E "error|warn"

root@WAZUH-VM:/var/ossec/bin# cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i -E "error|warn"
[2024-05-21T08:26:54,055][INFO ][o.o.n.Node               ] [node-1] JVM arguments [-Xshare:auto, -Dopensearch.networkaddress.cache.ttl=60, -Dopensearch.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms1953m, -Xmx1953m, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/opensearch-13365031803950584132, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/wazuh-indexer, -XX:ErrorFile=/var/log/wazuh-indexer/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/wazuh-indexer/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Djava.security.policy=file:///etc/wazuh-indexer/opensearch-performance-analyzer/opensearch_security.policy, --add-opens=jdk.attach/sun.tools.attach=ALL-UNNAMED, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Djava.security.policy=file:///usr/share/wazuh-indexer/plugins/opendistro-performance-analyzer/pa_config/es_security.policy, -XX:MaxDirectMemorySize=1024458752, -Dopensearch.path.home=/usr/share/wazuh-indexer, -Dopensearch.path.conf=/etc/wazuh-indexer, -Dopensearch.distribution.type=rpm, -Dopensearch.bundled_jdk=true]
[2024-05-21T08:27:23,125][WARN ][o.o.s.OpenSearchSecurityPlugin] [node-1] File /etc/wazuh-indexer/opensearch-security/internal_users.yml has insecure file permissions (should be 0600)
[2024-05-21T08:27:23,139][WARN ][o.o.s.OpenSearchSecurityPlugin] [node-1] File /etc/wazuh-indexer/opensearch-security/roles.yml has insecure file permissions (should be 0600)
[2024-05-21T08:27:23,139][WARN ][o.o.s.OpenSearchSecurityPlugin] [node-1] File /etc/wazuh-indexer/opensearch-security/roles_mapping.yml has insecure file permissions (should be 0600)
[2024-05-21T08:27:43,994][WARN ][o.o.s.c.Salt             ] [node-1] If you plan to use field masking pls configure compliance salt e1ukloTsQlOgPquJ to be a random string of 16 chars length identical on all nodes
[2024-05-21T08:27:44,230][ERROR][o.o.s.a.s.SinkProvider   ] [node-1] Default endpoint could not be created, auditlog will not work properly.
[2024-05-21T08:27:44,234][WARN ][o.o.s.a.r.AuditMessageRouter] [node-1] No default storage available, audit log may not work properly. Please check configuration.
[2024-05-21T08:27:50,287][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,309][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,312][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,313][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,316][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,317][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,319][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,320][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,323][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,325][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,327][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,329][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,332][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,333][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,333][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,338][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,351][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,363][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,363][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,372][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,375][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,383][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,387][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,390][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,390][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,396][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,399][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,401][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,411][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,413][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,416][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,422][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,422][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,423][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:50,423][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection
[2024-05-21T08:27:52,356][WARN ][o.o.g.DanglingIndicesState] [node-1] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2024-05-21T08:27:57,281][WARN ][o.o.p.c.s.h.ConfigOverridesClusterSettingHandler] [node-1] Config override setting update called with empty string. Ignoring.
[2024-05-21T08:27:58,841][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:27:58,888][WARN ][o.o.o.i.ObservabilityIndex] [node-1] message: index [.opensearch-observability/13gzuzqASduFZymuonP9RA] already exists
[2024-05-21T08:27:58,933][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:27:58,945][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:27:58,958][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:27:58,966][ERROR][o.o.i.i.ManagedIndexCoordinator] [node-1] get managed-index failed: NoShardAvailableActionException[No shard available for [org.opensearch.action.get.MultiGetShardRequest@1f613249]]
[2024-05-21T08:27:59,104][ERROR][o.o.i.i.ManagedIndexCoordinator] [node-1] Failed to get ISM policies with templates: Failed to execute phase [query], all shards failed
[2024-05-21T08:27:59,411][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [node-1] Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@1c564477] retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-05-21T08:27:59,412][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [node-1] Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@1c564477] retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-05-21T08:27:59,412][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [node-1] Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@1c564477] retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-05-21T08:27:59,413][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [node-1] Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@1c564477] retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-05-21T08:27:59,413][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [node-1] Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@1c564477] retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-05-21T08:27:59,413][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [node-1] Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@1c564477] retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-05-21T08:27:59,413][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [node-1] Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@1c564477] retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-05-21T08:27:59,414][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [node-1] Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@1c564477] retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-05-21T08:27:59,414][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [node-1] Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@1c564477] retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-05-21T08:27:59,414][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [node-1] Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@1c564477] retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-05-21T08:28:00,758][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:00,776][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:00,791][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:00,797][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:03,263][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:03,268][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:03,290][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:03,301][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:05,764][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:05,768][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:05,785][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:05,790][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:08,265][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:08,280][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:08,286][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:08,290][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:10,888][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:10,904][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:10,927][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:10,935][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:13,272][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:13,284][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:13,294][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)
[2024-05-21T08:28:13,307][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)

On Monday, May 20, 2024 at 11:08:29 UTC+2, Stuti Gupta wrote:

Stuti Gupta

May 22, 2024, 5:07:56 AM
to Wazuh | Mailing List
Hi Matskoow,

There seems to be a JVM memory issue: the JVM arguments in your log show a heap of only about 2 GB (`-Xms1953m -Xmx1953m`) together with `-XX:+HeapDumpOnOutOfMemoryError`. To resolve this, edit the `/etc/wazuh-indexer/jvm.options` file and increase the JVM heap size. The recommended value is typically half of the system RAM. For instance, if your system has 8 GB of RAM, you can set the size as follows:
-Xms4g
-Xmx4g

Where:
- `-Xms4g`: Sets the initial heap size to 4 GB of RAM.
- `-Xmx4g`: Sets the maximum heap size to 4 GB of RAM.

After making these changes, restart the Wazuh indexer service using the following commands:

systemctl daemon-reload
systemctl restart wazuh-indexer
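
To double-check that the new heap size was picked up after the restart, something like this should show it (the credentials are placeholders):

curl -k -u <user>:<password> "https://127.0.0.1:9200/_cat/nodes?v&h=name,heap.max"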

This should resolve the issue. If you have any further questions or concerns, feel free to ask.

Best regards,

Stuti Gupta

May 23, 2024, 5:53:41 AM
to Wazuh | Mailing List
Hi Matskoow,

Please let me know if this issue is resolved.

Matskoow

May 24, 2024, 2:35:20 AM
to Wazuh | Mailing List
Hi, 

The problem was resolved; thank you so much for your assistance!

Kind regards

On Thursday, May 23, 2024 at 11:53:41 UTC+2, Stuti Gupta wrote:

Andrehens Chicfici

Jul 24, 2024, 10:15:50 AM
to Wazuh | Mailing List
Hello,

I am seeing the same errors. I changed -Xms and -Xmx in /etc/wazuh-indexer/jvm.options to half of my memory size (-Xms24g/-Xmx24g), but the error messages stay the same. The wazuh-cluster.log tells me to:

      Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
     These can be adjusted by modifying /etc/security/limits.conf

Do I need to mess with this?
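
From what I have read so far, raising that limit would look roughly like this (only a sketch of what I found, assuming the indexer runs as the wazuh-indexer user under systemd; I have not applied it yet):

# /etc/security/limits.conf
wazuh-indexer soft memlock unlimited
wazuh-indexer hard memlock unlimited

# because the service runs under systemd, an override may also be needed:
# systemctl edit wazuh-indexer   ->   add [Service] with LimitMEMLOCK=infinity
systemctl daemon-reload && systemctl restart wazuh-indexer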

Stuti Gupta

Jul 25, 2024, 12:41:07 AM
to Wazuh | Mailing List
Hi Andrehens Chicfici,

For your issue, please open a separate thread so we can track it better; it will also help other team members.