Hi team,
After the last failed upgrade to 4.8, I restored a backup until the issue is fixed. The server is working fine, but I am seeing errors in the logs. Running grep -iE 'WARN|ERR' /var/log/wazuh-indexer/wazuh-cluster.log gives this:
[2024-06-25T17:54:51,923][INFO ][o.o.n.Node ] [node-1] JVM arguments [-Xshare:auto, -Dopensearch.networkaddress.cache.ttl=60, -Dopensearch.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms6g, -Xmx6g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/opensearch-6054602579876123289, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/wazuh-indexer, -XX:ErrorFile=/var/log/wazuh-indexer/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/wazuh-indexer/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Djava.security.policy=file:///etc/wazuh-indexer/opensearch-performance-analyzer/opensearch_security.policy, --add-opens=jdk.attach/sun.tools.attach=ALL-UNNAMED, -XX:MaxDirectMemorySize=3221225472, -Dopensearch.path.home=/usr/share/wazuh-indexer, -Dopensearch.path.conf=/etc/wazuh-indexer, -Dopensearch.distribution.type=rpm, -Dopensearch.bundled_jdk=true]
[2024-06-25T17:54:58,866][WARN ][o.o.s.c.Salt ] [node-1] If you plan to use field masking pls configure compliance salt e1ukloTsQlOgPquJ to be a random string of 16 chars length identical on all nodes
[2024-06-25T17:54:58,898][ERROR][o.o.s.a.s.SinkProvider ] [node-1] Default endpoint could not be created, auditlog will not work properly.
[2024-06-25T17:54:58,899][WARN ][o.o.s.a.r.AuditMessageRouter] [node-1] No default storage available, audit log may not work properly. Please check configuration.
[2024-06-25T17:55:01,074][WARN ][o.o.g.DanglingIndicesState] [node-1] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2024-06-25T17:55:02,973][ERROR][o.o.s.t.SecurityRequestHandler] [node-1] OpenSearchException[Transport client authentication no longer supported.]
[2024-06-25T17:55:02,979][ERROR][o.o.s.t.SecurityRequestHandler] [node-1] OpenSearchException[Transport client authentication no longer supported.]
[2024-06-25T17:55:02,985][WARN ][o.o.d.HandshakingTransportAddressConnector] [node-1] handshake failed for [connectToRemoteMasterNode[[::1]:9300]]
at org.opensearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:106) ~[opensearch-2.8.0.jar:2.8.0]
[2024-06-25T17:55:02,985][WARN ][o.o.d.HandshakingTransportAddressConnector] [node-1] handshake failed for [connectToRemoteMasterNode[127.0.0.1:9300]]
at org.opensearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:106) ~[opensearch-2.8.0.jar:2.8.0]
[2024-06-25T17:55:03,176][WARN ][o.o.p.c.s.h.ConfigOverridesClusterSettingHandler] [node-1] Config override setting update called with empty string. Ignoring.
[2024-06-25T17:55:03,698][WARN ][o.o.o.i.ObservabilityIndex] [node-1] message: index [.opensearch-observability/S5Iu5XO-SvuK8ugCjA6m8Q] already exists
[2024-06-25T17:55:03,710][ERROR][o.o.i.i.ManagedIndexCoordinator] [node-1] get managed-index failed: NoShardAvailableActionException[No shard available for [org.opensearch.action.get.MultiGetShardRequest@62b70b00]]
[2024-06-25T17:55:03,749][ERROR][o.o.i.i.ManagedIndexCoordinator] [node-1] Failed to get ISM policies with templates: Failed to execute phase [query], all shards failed
[2024-06-25T17:55:04,235][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [node-1] Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@5bcebbb9] retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-06-25T17:55:16,315][ERROR][o.o.s.a.BackendRegistry ] [node-1] Not yet initialized (you may need to run securityadmin)
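For the last error ("Not yet initialized (you may need to run securityadmin)"), my understanding is that the security index may need to be reinitialized after the restore. If I remember the Wazuh indexer packaging correctly it ships a wrapper script for securityadmin, something like the sketch below -- please correct me if the path is wrong on 4.x, I have not run it yet:

```shell
# Assumption: path is from memory of the Wazuh indexer RPM layout --
# verify it exists on your install before running.
/usr/share/wazuh-indexer/bin/indexer-security-init.sh
```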
And with the command grep -iE 'error|warn' /var/log/filebeat/filebeat I get:
2024-06-25T18:00:52.845-0300 WARN [elasticsearch] elasticsearch/client.go:408 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc196e9e0f16c5392, ext:292488974115, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-alerts-pipeline"}, Fields:{"agent":{"ephemeral_id":"1c6a94c3-cbcf-49f9-96c3-a4f441ef8a5a","hostname":"siem","id":"e25effab-11b6-4ede-80c6-a6ddf5bfd38e","name":"siem","type":"filebeat","version":"7.10.2"},"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.alerts","module":"wazuh"},"fields":{"index_prefix":"wazuh-alerts-4.x-"},"fileset":{"name":"alerts"},"host":{"name":"siem"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/alerts/alerts.json"},"offset":352396278},"message":"{\"timestamp\":\"2024-06-25T18:00:51.022-0300\",\"rule\":{\"level\":3,\"description\":\"load average metrics\",\"id\":\"100018\",\"firedtimes\":2,\"mail\":false,\"groups\":[\"performance_metric\"]},\"agent\":{\"id\":\"000\",\"name\":\"siem\"},\"manager\":{\"name\":\"siem\"},\"id\":\"1719349251.514899479\",\"full_log\":\"Jun 25 18:00:51 siem load_average_check: ossec: output: 'load_average_metrics':\\n1,16, 1,47, 1,49\",\"predecoder\":{\"program_name\":\"load_average_check\",\"timestamp\":\"Jun 25 18:00:51\",\"hostname\":\"siem\"},\"decoder\":{\"parent\":\"load_average_check\",\"name\":\"load_average_check\"},\"data\":{\"1min_loadAverage\":\"1,16\",\"5mins_loadAverage\":\"1,47\",\"15mins_loadAverage\":\"1,49\"},\"location\":\"load_average_metrics\"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::5250834-64768", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc0001c91e0), Source:"/var/ossec/logs/alerts/alerts.json", Offset:352396966, Timestamp:time.Time{wall:0xc196e997dad37188, ext:109856018, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x501f12, Device:0xfd00}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, 
Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [data.5mins_loadAverage] of type [double] in document with id 'ragzUZAB3Gw1JP-ZksPj'. Preview of field's value: '1,47'","caused_by":{"type":"number_format_exception","reason":"For input string: \"1,47\""}}
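I suspect this last mapper_parsing_exception happens because my load_average_check script emits numbers with a comma decimal separator (my system locale), e.g. "1,47", which the indexer's double mapping cannot parse. A minimal sketch of how I could normalize the value before it is logged (the variable names are just for illustration, not from the real script):

```shell
# Sample value as the script currently emits it (comma decimal from locale):
val="1,47"

# Replace the comma with a dot so OpenSearch can parse it as a double:
norm=$(printf '%s' "$val" | tr ',' '.')
echo "$norm"    # prints 1.47
```

Does that look like the right direction, or should the field mapping be changed instead?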
Let me know what you think of these errors.
Regards
German