Migrate and Upgrade at the same time...


Sylvain Belanger

May 14, 2024, 1:04:11 AM
to Wazuh | Mailing List
Hello!

We currently have a Wazuh 4.5 running with Elasticsearch stack 7.17. The Wazuh Manager runs on 1 server (name: wazuh-manager) and the Elasticsearch runs on a second server (name: wazuh-search).

We want to upgrade to the full Wazuh 4.7 stack and migrate from Elasticsearch to OpenSearch at the same time, on a SINGLE HOST, because our volume is not large enough to warrant two hosts at this time. There seem to be different ways to do the migration, but we encountered caveats with each.

Here are a few plans I had:
  1. In-place upgrade of Elasticsearch to OpenSearch: this did not work and we reverted. There seemed to be issues with Elasticsearch not yet being configured for SSL/TLS.
  2. Install Wazuh 4.7 on a new server (name: wazuh-server) and use the remote reindex API to import the data. It works, but I cannot see events in the dashboard because the manager.name filter is fixed to wazuh-manager (the old manager name).
  3. Install Wazuh 4.7 on a new server (name: wazuh-server) and reingest the alerts.json files (I have 3 years' worth of archives) on the new manager using the recovery.py script from the documentation. It works, but again I cannot see events in the dashboard because the manager.name filter is fixed to wazuh-manager.
From attempts 2 and 3, I know the old data is in OpenSearch because I can see it, but the documents always carry the old manager name, so the imported data is not shown in the Wazuh Dashboard.

My next attempt would be to change the Wazuh configuration on the new server so that the manager has the same name as the old server, to ensure that imported data is shown on the dashboard. Would that make sense?

Do you have any ideas?
Regards,
Sylvain

Jose Luis Carreras Marin

May 14, 2024, 5:01:05 AM
to Wazuh | Mailing List
Hello Sylvain

To complete the migration and the restructuring you want to make, our documentation includes step-by-step guides for migrating from Elasticsearch to the Wazuh indexer and dashboard (OpenSearch); following them, you should not encounter many problems:

Migrating to the Wazuh indexer.
Migrating to the Wazuh dashboard.

Regarding the SSL/TLS problem, if you can describe it in more depth or show me the errors you are seeing, I will be happy to help.
To solve the issue of not being able to see the alerts in the new Wazuh dashboard, try re-indexing using these steps from our documentation:
https://documentation.wazuh.com/current/user-manual/wazuh-indexer/re-indexing.html
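For illustration, here is a minimal sketch of what a Reindex API request body can look like; the index names below are placeholders, not the exact steps from the linked guide, so adjust them to your environment:

```python
import json

# Placeholder index names; follow the linked guide for the real procedure.
body = {
    "source": {"index": "wazuh-alerts-4.x-2024.05.*"},
    "dest": {"index": "wazuh-alerts-4.x-reindexed"},
}
payload = json.dumps(body, indent=2)
print(payload)
# Then POST it, e.g.:
#   curl -k -u admin:<password> -H 'Content-Type: application/json' \
#        -X POST 'https://<indexer>:9200/_reindex?wait_for_completion=false' \
#        -d "$payload"
```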

If during any of these processes you encounter problems or bugs, describe them in depth and I will be happy to help as much as possible.

Best regards
Jose

Sylvain Belanger

May 14, 2024, 2:19:56 PM
to Wazuh | Mailing List
Thanks, Jose, for your feedback!
I'll try the Wazuh indexer upgrade one more time. I initially thought it didn't work because I was running a post-fork release of Elasticsearch, which is noted as possibly not working. I'll report back if it fails again.

As for reindexing, I don't think that would help, as the indices themselves are fine. The problem is that the indexed value of manager.name is the old name, while the new instance has a different name; the dashboard filters on manager.name=NEWNAME, which is why I cannot see the older entries. I cannot seem to change the manager.name filter (see screenshot attached). The only thing I can think of is to give my NEW server/manager the old name. Does that make sense?

Regards,
Sylvain

Wazuh-ManagerName.PNG

Sylvain Belanger

May 14, 2024, 2:19:56 PM
to Wazuh | Mailing List
Good afternoon,
I went ahead and attempted the in-place upgrade from Elasticsearch 7.17 to the Wazuh indexer again. All went well, except that the Wazuh indexer fails to start. Here's the relevant log extract.

For information, the existing Elasticsearch was not configured for HTTPS, but we were using X-Pack to authorize users as a temporary short-term solution. I copied the certs from another, freshly installed Wazuh indexer instance.

The content of opensearch.yml is available below.

Anyone available to assist please?

[2024-05-14T13:24:21,335][INFO ][o.o.n.Node               ] [tel0103] version[2.8.0], pid[26436], build[rpm/db90a415ff2fd428b4f7b3f800a51dc229287cb4/2023-06-03T06:24:25.112415503Z], OS[Linux/3.10.0-1160.76.1.el7.x86_64/amd64], JVM[Eclipse Adoptium/OpenJDK 64-Bit Server VM/17.0.7/17.0.7+7]
[2024-05-14T13:24:21,337][INFO ][o.o.n.Node               ] [tel0103] JVM home [/usr/share/wazuh-indexer/jdk], using bundled JDK [true]
[2024-05-14T13:24:21,337][INFO ][o.o.n.Node               ] [tel0103] JVM arguments [-Xshare:auto, -Dopensearch.networkaddress.cache.ttl=60, -Dopensearch.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms14g, -Xmx14g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/opensearch-10683113506657049668, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/wazuh-indexer, -XX:ErrorFile=/var/log/wazuh-indexer/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/wazuh-indexer/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Djava.security.policy=file:///etc/wazuh-indexer/opensearch-performance-analyzer/opensearch_security.policy, --add-opens=jdk.attach/sun.tools.attach=ALL-UNNAMED, -XX:MaxDirectMemorySize=7516192768, -Dopensearch.path.home=/usr/share/wazuh-indexer, -Dopensearch.path.conf=/etc/wazuh-indexer, -Dopensearch.distribution.type=rpm, -Dopensearch.bundled_jdk=true]
[2024-05-14T13:24:22,090][INFO ][o.o.s.s.t.SSLConfig      ] [tel0103] SSL dual mode is disabled
[2024-05-14T13:24:22,090][INFO ][o.o.s.OpenSearchSecurityPlugin] [tel0103] OpenSearch Config path is /etc/wazuh-indexer
[2024-05-14T13:24:22,287][INFO ][o.o.s.s.DefaultSecurityKeyStore] [tel0103] JVM supports TLSv1.3
[2024-05-14T13:24:22,289][INFO ][o.o.s.s.DefaultSecurityKeyStore] [tel0103] Config directory is /etc/wazuh-indexer/, from there the key- and truststore files are resolved relatively
[2024-05-14T13:24:22,697][INFO ][o.o.s.s.DefaultSecurityKeyStore] [tel0103] TLS Transport Client Provider : JDK
[2024-05-14T13:24:22,697][INFO ][o.o.s.s.DefaultSecurityKeyStore] [tel0103] TLS Transport Server Provider : JDK
[2024-05-14T13:24:22,698][INFO ][o.o.s.s.DefaultSecurityKeyStore] [tel0103] TLS HTTP Provider             : JDK
[2024-05-14T13:24:22,698][INFO ][o.o.s.s.DefaultSecurityKeyStore] [tel0103] Enabled TLS protocols for transport layer : [TLSv1.3, TLSv1.2]
[2024-05-14T13:24:22,698][INFO ][o.o.s.s.DefaultSecurityKeyStore] [tel0103] Enabled TLS protocols for HTTP layer      : [TLSv1.3, TLSv1.2]
[2024-05-14T13:24:22,706][INFO ][o.o.s.OpenSearchSecurityPlugin] [tel0103] Clustername: wazuh
[2024-05-14T13:24:23,085][INFO ][o.o.p.c.c.PluginSettings ] [tel0103] Config: metricsLocation: /dev/shm/performanceanalyzer/, metricsDeletionInterval: 1, httpsEnabled: false, cleanup-metrics-db-files: true, batch-metrics-retention-period-minutes: 7, rpc-port: 9650, webservice-port 9600
[2024-05-14T13:24:23,419][INFO ][o.o.i.r.ReindexPlugin    ] [tel0103] ReindexPlugin reloadSPI called
[2024-05-14T13:24:23,420][INFO ][o.o.i.r.ReindexPlugin    ] [tel0103] Unable to find any implementation for RemoteReindexExtension
[2024-05-14T13:24:23,445][INFO ][o.o.j.JobSchedulerPlugin ] [tel0103] Loaded scheduler extension: opendistro_anomaly_detector, index: .opendistro-anomaly-detector-jobs
[2024-05-14T13:24:23,464][INFO ][o.o.j.JobSchedulerPlugin ] [tel0103] Loaded scheduler extension: reports-scheduler, index: .opendistro-reports-definitions
[2024-05-14T13:24:23,465][INFO ][o.o.j.JobSchedulerPlugin ] [tel0103] Loaded scheduler extension: opendistro-index-management, index: .opendistro-ism-config
[2024-05-14T13:24:23,483][INFO ][o.o.j.JobSchedulerPlugin ] [tel0103] Loaded scheduler extension: observability, index: .opensearch-observability-job
[2024-05-14T13:24:23,487][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [aggs-matrix-stats]
[2024-05-14T13:24:23,488][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [analysis-common]
[2024-05-14T13:24:23,488][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [geo]
[2024-05-14T13:24:23,488][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [ingest-common]
[2024-05-14T13:24:23,488][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [ingest-geoip]
[2024-05-14T13:24:23,488][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [ingest-user-agent]
[2024-05-14T13:24:23,488][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [lang-expression]
[2024-05-14T13:24:23,488][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [lang-mustache]
[2024-05-14T13:24:23,489][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [lang-painless]
[2024-05-14T13:24:23,489][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [mapper-extras]
[2024-05-14T13:24:23,489][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [opensearch-dashboards]
[2024-05-14T13:24:23,489][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [parent-join]
[2024-05-14T13:24:23,489][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [percolator]
[2024-05-14T13:24:23,489][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [rank-eval]
[2024-05-14T13:24:23,489][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [reindex]
[2024-05-14T13:24:23,489][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [repository-url]
[2024-05-14T13:24:23,489][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [search-pipeline-common]
[2024-05-14T13:24:23,490][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [systemd]
[2024-05-14T13:24:23,490][INFO ][o.o.p.PluginsService     ] [tel0103] loaded module [transport-netty4]
[2024-05-14T13:24:23,490][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-alerting]
[2024-05-14T13:24:23,490][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-anomaly-detection]
[2024-05-14T13:24:23,490][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-asynchronous-search]
[2024-05-14T13:24:23,491][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-cross-cluster-replication]
[2024-05-14T13:24:23,491][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-geospatial]
[2024-05-14T13:24:23,491][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-index-management]
[2024-05-14T13:24:23,491][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-job-scheduler]
[2024-05-14T13:24:23,491][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-knn]
[2024-05-14T13:24:23,491][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-ml]
[2024-05-14T13:24:23,491][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-neural-search]
[2024-05-14T13:24:23,491][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-notifications]
[2024-05-14T13:24:23,492][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-notifications-core]
[2024-05-14T13:24:23,492][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-observability]
[2024-05-14T13:24:23,492][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-performance-analyzer]
[2024-05-14T13:24:23,492][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-reports-scheduler]
[2024-05-14T13:24:23,492][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-security]
[2024-05-14T13:24:23,492][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-security-analytics]
[2024-05-14T13:24:23,492][INFO ][o.o.p.PluginsService     ] [tel0103] loaded plugin [opensearch-sql]
[2024-05-14T13:24:23,527][INFO ][o.o.s.OpenSearchSecurityPlugin] [tel0103] Disabled https compression by default to mitigate BREACH attacks. You can enable it by setting 'http.compression: true' in opensearch.yml
[2024-05-14T13:24:23,530][INFO ][o.o.e.ExtensionsManager  ] [tel0103] ExtensionsManager initialized
[2024-05-14T13:24:23,549][INFO ][o.o.e.NodeEnvironment    ] [tel0103] using [1] data paths, mounts [[/vol0 (/dev/mapper/vg_vol0-lv_vol0)]], net usable_space [165.3gb], net total_space [299.8gb], types [xfs]
[2024-05-14T13:24:23,549][INFO ][o.o.e.NodeEnvironment    ] [tel0103] heap size [14gb], compressed ordinary object pointers [true]
[2024-05-14T13:24:24,079][INFO ][o.o.n.Node               ] [tel0103] node name [tel0103], node ID [1YEM_D84TRSsLaJ6ad1TWw], cluster name [wazuh], roles [ingest, remote_cluster_client, data, cluster_manager]
[2024-05-14T13:24:26,297][WARN ][o.o.s.c.Salt             ] [tel0103] If you plan to use field masking pls configure compliance salt e1ukloTsQlOgPquJ to be a random string of 16 chars length identical on all nodes
[2024-05-14T13:24:26,321][ERROR][o.o.s.a.s.SinkProvider   ] [tel0103] Default endpoint could not be created, auditlog will not work properly.
[2024-05-14T13:24:26,322][WARN ][o.o.s.a.r.AuditMessageRouter] [tel0103] No default storage available, audit log may not work properly. Please check configuration.
[2024-05-14T13:24:26,322][INFO ][o.o.s.a.i.AuditLogImpl   ] [tel0103] Message routing enabled: false
[2024-05-14T13:24:26,352][INFO ][o.o.s.f.SecurityFilter   ] [tel0103] <NONE> indices are made immutable.
[2024-05-14T13:24:26,583][INFO ][o.o.a.b.ADCircuitBreakerService] [tel0103] Registered memory breaker.
[2024-05-14T13:24:26,850][INFO ][o.o.m.b.MLCircuitBreakerService] [tel0103] Registered ML memory breaker.
[2024-05-14T13:24:26,851][INFO ][o.o.m.b.MLCircuitBreakerService] [tel0103] Registered ML disk breaker.
[2024-05-14T13:24:26,852][INFO ][o.o.m.b.MLCircuitBreakerService] [tel0103] Registered ML native memory breaker.
[2024-05-14T13:24:26,951][INFO ][o.r.Reflections          ] [tel0103] Reflections took 38 ms to scan 1 urls, producing 15 keys and 37 values
[2024-05-14T13:24:27,452][INFO ][o.o.t.NettyAllocator     ] [tel0103] creating NettyAllocator with the following configs: [name=opensearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={opensearch.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=8mb}]
[2024-05-14T13:24:27,517][INFO ][o.o.d.DiscoveryModule    ] [tel0103] using discovery type [zen] and seed hosts providers [settings]
[2024-05-14T13:24:27,841][WARN ][o.o.g.DanglingIndicesState] [tel0103] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2024-05-14T13:24:28,199][INFO ][o.o.p.h.c.PerformanceAnalyzerConfigAction] [tel0103] PerformanceAnalyzer Enabled: false
[2024-05-14T13:24:28,255][INFO ][o.o.n.Node               ] [tel0103] initialized
[2024-05-14T13:24:28,256][INFO ][o.o.n.Node               ] [tel0103] starting ...
[2024-05-14T13:24:28,331][INFO ][o.o.t.TransportService   ] [tel0103] publish_address {10.35.250.23:9300}, bound_addresses {10.35.250.23:9300}
[2024-05-14T13:24:28,424][ERROR][o.o.b.Bootstrap          ] [tel0103] Exception
org.opensearch.core.xcontent.XContentParseException: [-1:179436] [data_stream] failed to parse field [data_stream]
        at org.opensearch.core.xcontent.ObjectParser.parseValue(ObjectParser.java:592) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ObjectParser.parseSub(ObjectParser.java:604) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ObjectParser.parse(ObjectParser.java:354) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ConstructingObjectParser.parse(ConstructingObjectParser.java:188) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.cluster.metadata.DataStreamMetadata.fromXContent(DataStreamMetadata.java:126) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.NamedXContentRegistry$Entry.lambda$new$0(NamedXContentRegistry.java:81) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.NamedXContentRegistry.parseNamedObject(NamedXContentRegistry.java:171) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.AbstractXContentParser.namedObject(AbstractXContentParser.java:429) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.cluster.metadata.Metadata$Builder.fromXContent(Metadata.java:1787) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.PersistedClusterStateService.lambda$loadOnDiskState$1(PersistedClusterStateService.java:450) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.PersistedClusterStateService.consumeFromType(PersistedClusterStateService.java:514) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.PersistedClusterStateService.loadOnDiskState(PersistedClusterStateService.java:449) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.PersistedClusterStateService.loadBestOnDiskState(PersistedClusterStateService.java:374) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.GatewayMetaState.start(GatewayMetaState.java:132) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.node.Node.start(Node.java:1256) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.Bootstrap.start(Bootstrap.java:339) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.Bootstrap.init(Bootstrap.java:413) [opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.OpenSearch.init(OpenSearch.java:180) [opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.OpenSearch.execute(OpenSearch.java:171) [opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:104) [opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.cli.Command.mainWithoutErrorHandling(Command.java:138) [opensearch-cli-2.8.0.jar:2.8.0]
        at org.opensearch.cli.Command.main(Command.java:101) [opensearch-cli-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.OpenSearch.main(OpenSearch.java:137) [opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.OpenSearch.main(OpenSearch.java:103) [opensearch-2.8.0.jar:2.8.0]
Caused by: org.opensearch.core.xcontent.XContentParseException: [-1:179430] [data_stream] unknown field [_meta]
        at org.opensearch.core.xcontent.ObjectParser.lambda$errorOnUnknown$2(ObjectParser.java:127) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ObjectParser.parse(ObjectParser.java:327) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ConstructingObjectParser.parse(ConstructingObjectParser.java:188) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.cluster.metadata.DataStream.fromXContent(DataStream.java:205) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.cluster.metadata.DataStreamMetadata.lambda$static$1(DataStreamMetadata.java:76) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.AbstractObjectParser.lambda$declareObject$1(AbstractObjectParser.java:202) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ObjectParser.lambda$declareField$9(ObjectParser.java:421) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ObjectParser.parseValue(ObjectParser.java:589) ~[opensearch-core-2.8.0.jar:2.8.0]
        ... 23 more
[2024-05-14T13:24:28,430][ERROR][o.o.b.OpenSearchUncaughtExceptionHandler] [tel0103] uncaught exception in thread [main]
org.opensearch.bootstrap.StartupException: org.opensearch.core.xcontent.XContentParseException: [-1:179436] [data_stream] failed to parse field [data_stream]
        at org.opensearch.bootstrap.OpenSearch.init(OpenSearch.java:184) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.OpenSearch.execute(OpenSearch.java:171) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:104) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.cli.Command.mainWithoutErrorHandling(Command.java:138) ~[opensearch-cli-2.8.0.jar:2.8.0]
        at org.opensearch.cli.Command.main(Command.java:101) ~[opensearch-cli-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.OpenSearch.main(OpenSearch.java:137) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.OpenSearch.main(OpenSearch.java:103) ~[opensearch-2.8.0.jar:2.8.0]
Caused by: org.opensearch.core.xcontent.XContentParseException: [-1:179436] [data_stream] failed to parse field [data_stream]
        at org.opensearch.core.xcontent.ObjectParser.parseValue(ObjectParser.java:592) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ObjectParser.parseSub(ObjectParser.java:604) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ObjectParser.parse(ObjectParser.java:354) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ConstructingObjectParser.parse(ConstructingObjectParser.java:188) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.cluster.metadata.DataStreamMetadata.fromXContent(DataStreamMetadata.java:126) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.NamedXContentRegistry$Entry.lambda$new$0(NamedXContentRegistry.java:81) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.NamedXContentRegistry.parseNamedObject(NamedXContentRegistry.java:171) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.AbstractXContentParser.namedObject(AbstractXContentParser.java:429) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.cluster.metadata.Metadata$Builder.fromXContent(Metadata.java:1787) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.PersistedClusterStateService.lambda$loadOnDiskState$1(PersistedClusterStateService.java:450) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.PersistedClusterStateService.consumeFromType(PersistedClusterStateService.java:514) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.PersistedClusterStateService.loadOnDiskState(PersistedClusterStateService.java:449) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.PersistedClusterStateService.loadBestOnDiskState(PersistedClusterStateService.java:374) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.GatewayMetaState.start(GatewayMetaState.java:132) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.node.Node.start(Node.java:1256) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.Bootstrap.start(Bootstrap.java:339) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.Bootstrap.init(Bootstrap.java:413) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.OpenSearch.init(OpenSearch.java:180) ~[opensearch-2.8.0.jar:2.8.0]
        ... 6 more
Caused by: org.opensearch.core.xcontent.XContentParseException: [-1:179430] [data_stream] unknown field [_meta]
        at org.opensearch.core.xcontent.ObjectParser.lambda$errorOnUnknown$2(ObjectParser.java:127) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ObjectParser.parse(ObjectParser.java:327) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ConstructingObjectParser.parse(ConstructingObjectParser.java:188) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.cluster.metadata.DataStream.fromXContent(DataStream.java:205) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.cluster.metadata.DataStreamMetadata.lambda$static$1(DataStreamMetadata.java:76) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.AbstractObjectParser.lambda$declareObject$1(AbstractObjectParser.java:202) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ObjectParser.lambda$declareField$9(ObjectParser.java:421) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ObjectParser.parseValue(ObjectParser.java:589) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ObjectParser.parseSub(ObjectParser.java:604) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ObjectParser.parse(ObjectParser.java:354) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.ConstructingObjectParser.parse(ConstructingObjectParser.java:188) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.cluster.metadata.DataStreamMetadata.fromXContent(DataStreamMetadata.java:126) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.NamedXContentRegistry$Entry.lambda$new$0(NamedXContentRegistry.java:81) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.NamedXContentRegistry.parseNamedObject(NamedXContentRegistry.java:171) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.core.xcontent.AbstractXContentParser.namedObject(AbstractXContentParser.java:429) ~[opensearch-core-2.8.0.jar:2.8.0]
        at org.opensearch.cluster.metadata.Metadata$Builder.fromXContent(Metadata.java:1787) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.PersistedClusterStateService.lambda$loadOnDiskState$1(PersistedClusterStateService.java:450) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.PersistedClusterStateService.consumeFromType(PersistedClusterStateService.java:514) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.PersistedClusterStateService.loadOnDiskState(PersistedClusterStateService.java:449) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.PersistedClusterStateService.loadBestOnDiskState(PersistedClusterStateService.java:374) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.gateway.GatewayMetaState.start(GatewayMetaState.java:132) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.node.Node.start(Node.java:1256) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.Bootstrap.start(Bootstrap.java:339) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.Bootstrap.init(Bootstrap.java:413) ~[opensearch-2.8.0.jar:2.8.0]
        at org.opensearch.bootstrap.OpenSearch.init(OpenSearch.java:180) ~[opensearch-2.8.0.jar:2.8.0]
        ... 6 more
[2024-05-14T13:27:19,161][INFO ][o.o.n.Node               ] [tel0103] stopping ...
[2024-05-14T13:27:19,161][INFO ][o.o.s.a.r.AuditMessageRouter] [tel0103] Closing AuditMessageRouter
[2024-05-14T13:27:19,162][INFO ][o.o.s.a.s.SinkProvider   ] [tel0103] Closing DebugSink
[2024-05-14T13:27:19,171][INFO ][o.o.n.Node               ] [tel0103] stopped
[2024-05-14T13:27:19,172][INFO ][o.o.n.Node               ] [tel0103] closing ...
[2024-05-14T13:27:19,177][INFO ][o.o.s.a.i.AuditLogImpl   ] [tel0103] Closing AuditLogImpl
[2024-05-14T13:27:19,180][INFO ][o.o.n.Node               ] [tel0103] closed


OPENSEARCH.YML
=====================
network.host: "10.35.250.23"
node.name: "tel0103"
cluster.initial_master_nodes:
- "tel0103"
#- "node-2"
#- "node-3"
cluster.name: "wazuh"
#discovery.seed_hosts:
#  - "node-1-ip"
#  - "node-2-ip"
#  - "node-3-ip"
node.max_local_storage_nodes: "3"
path.data: /vol0/wazuh-indexer/data
path.logs: /var/log/wazuh-indexer
#path.repo: /vol0/wazuh-indexer/backup

plugins.security.ssl.http.pemcert_filepath: /etc/wazuh-indexer/certs/indexer.pem
plugins.security.ssl.http.pemkey_filepath: /etc/wazuh-indexer/certs/indexer-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: /etc/wazuh-indexer/certs/root-ca.pem
plugins.security.ssl.transport.pemcert_filepath: /etc/wazuh-indexer/certs/indexer.pem
plugins.security.ssl.transport.pemkey_filepath: /etc/wazuh-indexer/certs/indexer-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: /etc/wazuh-indexer/certs/root-ca.pem
plugins.security.ssl.http.enabled: false
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.transport.resolve_hostname: false

plugins.security.authcz.admin_dn:
- "CN=admin,OU=Docu,O=Wazuh,L=California,C=US"
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.nodes_dn:
- "CN=tel0103,OU=Docu,O=Wazuh,L=California,C=US"
#- "CN=node-2,OU=Wazuh,O=Wazuh,L=California,C=US"
#- "CN=node-3,OU=Wazuh,O=Wazuh,L=California,C=US"
plugins.security.restapi.roles_enabled:
- "all_access"
- "security_rest_api_access"

plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-model", ".plugins-ml-task", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]

### Option to allow Filebeat-oss 7.10.2 to work ###
compatibility.override_main_response_version: true


Jose Luis Carreras Marin

May 15, 2024, 5:49:10 AM
to Wazuh | Mailing List
Hello Sylvain,

About the manager.name issue: unfortunately, the built-in Wazuh dashboards enforce a mandatory filter that matches only records where manager.name equals the name of the current standalone Wazuh manager.

Your options are to change the new manager's name to match the old one, or to view those alerts in the Discover or Visualize tab.
Another option could be to modify all the alerts already indexed so that they carry the new manager's name; take a look at 'Update By Query' here:
https://www.elastic.co/guide/en/elasticsearch/reference/7.10/docs-update-by-query.html
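A minimal sketch of what such an Update By Query request could look like, rewriting manager.name on already-indexed alerts. The index pattern and both manager names are assumptions; adjust them to your cluster:

```python
import json

# Assumed names: old manager "wazuh-manager", new manager "wazuh-server".
body = {
    "query": {"term": {"manager.name": "wazuh-manager"}},  # match old name only
    "script": {
        "lang": "painless",
        "source": "ctx._source.manager.name = params.new_name",
        "params": {"new_name": "wazuh-server"},
    },
}
print(json.dumps(body, indent=2))
# POST to: https://<indexer>:9200/wazuh-alerts-*/_update_by_query?conflicts=proceed
```

Run it against a small test index first; updating years of alerts in place is slow and rewrites every matching document.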

I have been investigating the error you showed when upgrading to OpenSearch, but I haven't found anything yet. I will ask my colleagues and let you know as soon as I have news.

Greetings

Jose Luis Carreras Marin

May 15, 2024, 7:59:55 AM
to Wazuh | Mailing List
Hello again Sylvain,

After some further research and chatting with my colleagues, there are a few more options for the manager.name issue:
You could create a processor (in OpenSearch or Filebeat) to update the manager.name field of the alerts.
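On the OpenSearch side, that idea can be sketched as an ingest pipeline with a `set` processor; the pipeline name and the manager name below are hypothetical:

```python
import json

# Hypothetical pipeline: overwrite manager.name on re-ingested alerts.
pipeline = {
    "description": "Rewrite manager.name on imported alerts (sketch)",
    "processors": [
        {"set": {"field": "manager.name", "value": "wazuh-server"}}
    ],
}
print(json.dumps(pipeline, indent=2))
# PUT https://<indexer>:9200/_ingest/pipeline/rewrite-manager-name
# then index with ?pipeline=rewrite-manager-name, or set it as the
# index's default_pipeline so re-ingested alerts pick it up automatically.
```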

Regarding the OpenSearch error, it seems that some of the indices you had in Elastic 7.17 cannot be interpreted by OpenSearch. There is some info here:
https://forum.opensearch.org/t/error-when-starting-opensearch-failed-to-parse-field-index-template/13880
https://community.graylog.org/t/upgrade-migrate-elasticsearch-to-opensearch-failed-to-parse-field-data-stream/27866

But it's hard to tell which indices those might be. Do you use Elastic exclusively for Wazuh? A somewhat risky possibility is to remove all indices that are not related to Wazuh and see whether the problem goes away.
I hope this helps; if you find more errors, I will be happy to analyse them.
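To triage that, one approach is to list the index names (e.g. `GET _cat/indices?h=index`) and filter out the ones matching known Wazuh/stack patterns, leaving deletion candidates. The pattern list below is an assumption; review it carefully before deleting anything:

```python
import fnmatch

# Assumed Wazuh/stack index patterns; extend to match your deployment.
WAZUH_PATTERNS = [
    "wazuh-alerts-*", "wazuh-archives-*", "wazuh-monitoring-*",
    "wazuh-statistics-*", ".kibana*", ".opendistro*", ".opensearch*",
    ".plugins-*", "security-auditlog-*",
]

def deletion_candidates(indices):
    """Return index names that match none of the known Wazuh patterns."""
    return [name for name in indices
            if not any(fnmatch.fnmatch(name, p) for p in WAZUH_PATTERNS)]

print(deletion_candidates(
    ["wazuh-alerts-4.x-2024.05.14", ".kibana_1", "myapp-logs-2024.01"]
))  # → ['myapp-logs-2024.01']
```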

Greetings

Sylvain Belanger

May 16, 2024, 4:06:53 AM
to Wazuh | Mailing List
Thanks Jose Luis for your help.

As for the OpenSearch error, the Elasticsearch instance was dedicated to Wazuh. It was originally set up with Elasticsearch 6.x four years ago, and we upgraded it to 7.17 a few months ago. That said, everything I read seems to point to Elasticsearch releases AFTER the ES/OpenSearch fork breaking backup/restore from snapshots. This leaves me with 2 options:
  1. Use Logstash to export the data from Elasticsearch and import it into the new Wazuh indexer instance.
  2. Use the Wazuh recovery.py Python script to reimport the alerts.json files that we still have on the Wazuh manager (we have 3 years' worth of data). We just need to know whether the alerts.json files are sufficient or whether other files are needed.
What do you think?

Regards,
Sylvain

Jose Luis Carreras Marin

May 16, 2024, 10:49:37 AM
to Wazuh | Mailing List
Hello Sylvain

I have consulted with some more experienced colleagues, and after some research I found this blog post, which explains the use of recovery.py in detail:
https://wazuh.com/blog/recover-your-data-using-wazuh-alert-backups/
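As a side note: alerts.json archives are newline-delimited JSON (one alert object per line), so before feeding years of archives to recovery.py it may be worth a quick sanity check that the files are parseable. A minimal sketch:

```python
import io
import json

def count_valid_alerts(fileobj):
    """Count parseable alert lines; raises ValueError on a corrupt line."""
    ok = 0
    for raw in fileobj:
        line = raw.strip()
        if line:
            json.loads(line)  # will raise if the line is truncated/corrupt
            ok += 1
    return ok

# Hypothetical one-line sample of an alerts.json archive:
sample = io.StringIO(
    '{"timestamp":"2021-05-14T13:00:00.000+0000","rule":{"id":"5710"}}\n'
)
print(count_valid_alerts(sample))  # → 1
```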

If I find any extra info I will come back here to help you with anything!

Greetings!

Jose Luis Carreras Marin

May 17, 2024, 10:02:09 AM
to Wazuh | Mailing List

Hello Sylvain

My colleagues have confirmed that either of these two options should work for you. Whatever problems you encounter with your chosen solution, let me know and I will continue to help in any way I can.
Just one thing to keep in mind: several years of alerts can take a long time to recover. The blog explains how to do it without too much impact on the system itself!

Greetings!