Upgrade 3.11 to 3.13 troubles


bjp...@gmail.com

Sep 1, 2020, 9:31:18 AM
to Wazuh mailing list
I performed the upgrade from 3.11 to 3.13 following the documentation (https://documentation.wazuh.com/3.13/upgrade-guide/index.html), both the Wazuh and the Elastic Stack steps.

The Wazuh web UI is no longer working and the Elasticsearch service will not start. I'm a bit at a loss as to where to even start troubleshooting. The elasticsearch.yml, filebeat.yml, and wazuh.yml files look fine. I do have a VM snapshot that I can revert to 3.11. Here is the output of the Elasticsearch service status. Any help would be appreciated!

● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/elasticsearch.service.d
           └─elasticsearch.conf
   Active: failed (Result: signal) since Mon 2020-08-31 20:37:44 UTC; 30min ago
  Process: 2018 ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet (code=killed, signal=KILL)
 Main PID: 2018 (code=killed, signal=KILL)

Aug 31 20:37:36 manager systemd[1]: Starting Elasticsearch...
Aug 31 20:37:42 manager systemd-entrypoint[2018]: OpenJDK 64-Bit Server VM warning: Ignoring option UseConcMarkSweepGC; support was removed in 14.0
Aug 31 20:37:42 manager systemd-entrypoint[2018]: OpenJDK 64-Bit Server VM warning: Ignoring option CMSInitiatingOccupancyFraction; support was removed in 14.0
Aug 31 20:37:42 manager systemd-entrypoint[2018]: OpenJDK 64-Bit Server VM warning: Ignoring option UseCMSInitiatingOccupancyOnly; support was removed in 14.0
Aug 31 20:37:44 manager systemd[1]: elasticsearch.service: main process exited, code=killed, status=9/KILL
Aug 31 20:37:44 manager systemd[1]: Failed to start Elasticsearch.
Aug 31 20:37:44 manager systemd[1]: Unit elasticsearch.service entered failed state.
Aug 31 20:37:45 manager systemd[1]: elasticsearch.service failed.

Alberto Rodriguez

Sep 1, 2020, 11:59:38 AM
to Wazuh mailing list
Hello,

There are a few things we can check:
  • Elasticsearch logs: check the file `/var/log/elasticsearch/elasticsearch.log`. It contains the node's logs and will indicate which part of the configuration could be wrong. Note that the log file is named after the cluster, so if you configured a different `cluster.name` in elasticsearch.yml, the file name will differ (e.g. wazuh-cluster.log).
  • Elasticsearch configuration files: please share `/etc/elasticsearch/elasticsearch.yml` and `/etc/elasticsearch/jvm.options`. It would also be useful to know the specs of the VM.
The log file is a good starting point. If the log looks normal, I can reproduce your environment using the information from the second point.
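If it is easier, everything above can be collected in one pass. The paths below assume the default RPM install layout; adjust them if you customized anything:

```shell
# Last lines of every Elasticsearch log (the file name follows the cluster name)
sudo tail -n 200 /var/log/elasticsearch/*.log

# Configuration files
sudo cat /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/jvm.options

# VM specs: CPU count, memory, and free disk on the data path
nproc && free -h && df -h /var/lib/elasticsearch
```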

Best regards, 
Alberto R 

bjp...@gmail.com

Sep 1, 2020, 2:46:13 PM
to Wazuh mailing list
My VM has 4 vCPUs and 4 GB of memory (it only monitors a few clients).

I do not see an elasticsearch.log in /var/log/elasticsearch/ (which seems odd), but I do see wazuh-cluster.log and gc.log files.

cat /var/log/elasticsearch/elasticsearch.log
cat: /var/log/elasticsearch/elasticsearch.log: No such file or directory


elasticsearch.yml output:
cluster.name: wazuh-cluster
node.name: ${HOSTNAME}
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: ["127.0.0.1"]
discovery.zen.minimum_master_nodes: 1
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/elasticsearch.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/elasticsearch.crt
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca/ca.crt" ]

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.verification_mode: certificate
xpack.security.http.ssl.key: /etc/elasticsearch/certs/elasticsearch.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/elasticsearch.crt
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca/ca.crt" ]
xpack.security.enabled: true


jvm.options output:  
-Xms2g
-Xmx2g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+AlwaysPreTouch
-Xss1m
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djna.nosys=true
-XX:-OmitStackTraceInFastThrow
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Djava.io.tmpdir=${ES_TMPDIR}
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/lib/elasticsearch
-XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:/var/log/elasticsearch/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m
9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m


Thanks




Alberto Rodriguez

Sep 2, 2020, 9:52:09 AM
to Wazuh mailing list
Hello 

  Sorry, I didn't explain myself well. We need to check the wazuh-cluster.log file. Please share its last 150 or 200 lines and we should be able to determine the error.
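For example, assuming the default log path:

```shell
# Print the last 200 lines of the cluster log
tail -n 200 /var/log/elasticsearch/wazuh-cluster.log
```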

Regards, 

Brandon

Sep 2, 2020, 3:20:20 PM
to Wazuh mailing list
[2020-08-31T18:52:42,855][WARN ][r.suppressed             ] [manager] path: /.kibana_task_manager/_count, params: {index=.kibana_task_manager}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
        at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:534) [elasticsearch-7.5.1.jar:7.5.1]
        at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:305) [elasticsearch-7.5.1.jar:7.5.1]
        at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:563) [elasticsearch-7.5.1.jar:7.5.1]
        at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:384) [elasticsearch-7.5.1.jar:7.5.1]
        at org.elasticsearch.action.search.AbstractSearchAsyncAction.lambda$performPhaseOnShard$0(AbstractSearchAsyncAction.java:219) [elasticsearch-7.5.1.jar:7.5.1]
        at org.elasticsearch.action.search.AbstractSearchAsyncAction$2.doRun(AbstractSearchAsyncAction.java:284) [elasticsearch-7.5.1.jar:7.5.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.5.1.jar:7.5.1]
        at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44) [elasticsearch-7.5.1.jar:7.5.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:773) [elasticsearch-7.5.1.jar:7.5.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.5.1.jar:7.5.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
        at java.lang.Thread.run(Thread.java:830) [?:?]
[2020-08-31T18:52:44,444][INFO ][o.e.c.r.a.AllocationService] [manager] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.kibana_1][0]]]).
[2020-08-31T18:52:52,280][INFO ][o.e.c.m.MetaDataIndexTemplateService] [manager] adding template [.management-beats] for index patterns [.management-beats]
[2020-08-31T18:52:52,364][INFO ][o.e.c.m.MetaDataIndexTemplateService] [manager] adding template [wazuh-agent] for index patterns [wazuh-monitoring-3.x-*]
[2020-08-31T18:52:52,411][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [manager] updating number_of_replicas to [0] for indices [wazuh-monitoring-3.x-2020.08.31]
[2020-08-31T19:06:54,664][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [manager] Get datafeed '_all'
[2020-08-31T19:08:37,455][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [manager] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:45526}
[2020-08-31T19:08:46,296][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [manager] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:45528}
[2020-08-31T20:11:55,048][INFO ][o.e.n.Node               ] [manager] version[7.9.0], pid[3307], build[default/rpm/a479a2a7fce0389512d6a9361301708b92dff667/2020-08-11T21:36:48.204330Z], OS[Linux/3.10.0-1127.19.1.el7.x86_64/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/14.0.1/14.0.1+7]
[2020-08-31T20:11:55,053][INFO ][o.e.n.Node               ] [manager] JVM home [/usr/share/elasticsearch/jdk]
[2020-08-31T20:11:55,054][INFO ][o.e.n.Node               ] [manager] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-4818252396426730833, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:MaxDirectMemorySize=1073741824, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=rpm, -Des.bundled_jdk=true]
[2020-08-31T20:11:57,308][INFO ][o.e.p.PluginsService     ] [manager] loaded module [aggs-matrix-stats]
[2020-08-31T20:11:57,308][INFO ][o.e.p.PluginsService     ] [manager] loaded module [analysis-common]
[2020-08-31T20:11:57,308][INFO ][o.e.p.PluginsService     ] [manager] loaded module [constant-keyword]
[2020-08-31T20:11:57,308][INFO ][o.e.p.PluginsService     ] [manager] loaded module [flattened]
[2020-08-31T20:11:57,308][INFO ][o.e.p.PluginsService     ] [manager] loaded module [frozen-indices]
[2020-08-31T20:11:57,309][INFO ][o.e.p.PluginsService     ] [manager] loaded module [ingest-common]
[2020-08-31T20:11:57,309][INFO ][o.e.p.PluginsService     ] [manager] loaded module [ingest-geoip]
[2020-08-31T20:11:57,309][INFO ][o.e.p.PluginsService     ] [manager] loaded module [ingest-user-agent]
[2020-08-31T20:11:57,309][INFO ][o.e.p.PluginsService     ] [manager] loaded module [kibana]
[2020-08-31T20:11:57,309][INFO ][o.e.p.PluginsService     ] [manager] loaded module [lang-expression]
[2020-08-31T20:11:57,309][INFO ][o.e.p.PluginsService     ] [manager] loaded module [lang-mustache]
[2020-08-31T20:11:57,310][INFO ][o.e.p.PluginsService     ] [manager] loaded module [lang-painless]
[2020-08-31T20:11:57,310][INFO ][o.e.p.PluginsService     ] [manager] loaded module [mapper-extras]
[2020-08-31T20:11:57,310][INFO ][o.e.p.PluginsService     ] [manager] loaded module [parent-join]
[2020-08-31T20:11:57,310][INFO ][o.e.p.PluginsService     ] [manager] loaded module [percolator]
[2020-08-31T20:11:57,310][INFO ][o.e.p.PluginsService     ] [manager] loaded module [rank-eval]
[2020-08-31T20:11:57,310][INFO ][o.e.p.PluginsService     ] [manager] loaded module [reindex]
[2020-08-31T20:11:57,311][INFO ][o.e.p.PluginsService     ] [manager] loaded module [repository-url]
[2020-08-31T20:11:57,311][INFO ][o.e.p.PluginsService     ] [manager] loaded module [search-business-rules]
[2020-08-31T20:11:57,311][INFO ][o.e.p.PluginsService     ] [manager] loaded module [searchable-snapshots]
[2020-08-31T20:11:57,311][INFO ][o.e.p.PluginsService     ] [manager] loaded module [spatial]
[2020-08-31T20:11:57,311][INFO ][o.e.p.PluginsService     ] [manager] loaded module [systemd]
[2020-08-31T20:11:57,311][INFO ][o.e.p.PluginsService     ] [manager] loaded module [tasks]
[2020-08-31T20:11:57,312][INFO ][o.e.p.PluginsService     ] [manager] loaded module [transform]
[2020-08-31T20:11:57,312][INFO ][o.e.p.PluginsService     ] [manager] loaded module [transport-netty4]
[2020-08-31T20:11:57,312][INFO ][o.e.p.PluginsService     ] [manager] loaded module [vectors]
[2020-08-31T20:11:57,312][INFO ][o.e.p.PluginsService     ] [manager] loaded module [wildcard]
[2020-08-31T20:11:57,312][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-analytics]
[2020-08-31T20:11:57,312][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-async]
[2020-08-31T20:11:57,313][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-async-search]
[2020-08-31T20:11:57,313][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-autoscaling]
[2020-08-31T20:11:57,313][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-ccr]
[2020-08-31T20:11:57,313][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-core]
[2020-08-31T20:11:57,313][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-data-streams]
[2020-08-31T20:11:57,313][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-deprecation]
[2020-08-31T20:11:57,313][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-enrich]
[2020-08-31T20:11:57,314][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-eql]
[2020-08-31T20:11:57,314][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-graph]
[2020-08-31T20:11:57,314][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-identity-provider]
[2020-08-31T20:11:57,314][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-ilm]
[2020-08-31T20:11:57,314][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-logstash]
[2020-08-31T20:11:57,314][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-ml]
[2020-08-31T20:11:57,314][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-monitoring]
[2020-08-31T20:11:57,315][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-ql]
[2020-08-31T20:11:57,315][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-rollup]
[2020-08-31T20:11:57,315][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-security]
[2020-08-31T20:11:57,315][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-sql]
[2020-08-31T20:11:57,315][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-stack]
[2020-08-31T20:11:57,315][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-voting-only-node]
[2020-08-31T20:11:57,315][INFO ][o.e.p.PluginsService     ] [manager] loaded module [x-pack-watcher]
[2020-08-31T20:11:57,316][INFO ][o.e.p.PluginsService     ] [manager] no plugins loaded
[2020-08-31T20:11:57,361][INFO ][o.e.e.NodeEnvironment    ] [manager] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [27.2gb], net total_space [39.9gb], types [rootfs]
[2020-08-31T20:11:57,362][INFO ][o.e.e.NodeEnvironment    ] [manager] heap size [2gb], compressed ordinary object pointers [true]
[2020-08-31T20:11:58,629][INFO ][o.e.n.Node               ] [manager] node name [manager], node ID [nGxnzN7-RuO-G8O4lEzMDA], cluster name [wazuh-cluster]
[2020-08-31T20:12:02,956][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [manager] [controller/3499] [Main.cc@114] controller (64 bit): Version 7.9.0 (Build 2639177a4c3ad6) Copyright (c) 2020 Elasticsearch BV
[2020-08-31T20:12:03,927][INFO ][o.e.x.s.a.s.FileRolesStore] [manager] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2020-08-31T20:12:05,160][INFO ][o.e.d.DiscoveryModule    ] [manager] using discovery type [zen] and seed hosts providers [settings]
[2020-08-31T20:12:05,659][WARN ][o.e.g.DanglingIndicesState] [manager] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2020-08-31T20:12:06,093][INFO ][o.e.n.Node               ] [manager] initialized
[2020-08-31T20:12:06,093][INFO ][o.e.n.Node               ] [manager] starting ...
[2020-08-31T20:12:06,224][INFO ][o.e.t.TransportService   ] [manager] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2020-08-31T20:12:07,883][WARN ][o.e.b.BootstrapChecks    ] [manager] the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
[2020-08-31T20:12:07,884][INFO ][o.e.c.c.Coordinator      ] [manager] cluster UUID [790-m8QWSiGcHVu--s1fbA]
[2020-08-31T20:12:07,894][INFO ][o.e.c.c.ClusterBootstrapService] [manager] no discovery configuration found, will perform best-effort cluster bootstrapping after [3s] unless existing master is discovered
[2020-08-31T20:12:08,027][INFO ][o.e.c.s.MasterService    ] [manager] elected-as-master ([1] nodes joined)[{manager}{nGxnzN7-RuO-G8O4lEzMDA}{_wjNhEiORe2WWC61slADFw}{127.0.0.1}{127.0.0.1:9300}{dilmrt}{ml.machine_memory=3973414912, xpack.installed=true, transform.node=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 39, version: 12520, delta: master node changed {previous [], current [{manager}{nGxnzN7-RuO-G8O4lEzMDA}{_wjNhEiORe2WWC61slADFw}{127.0.0.1}{127.0.0.1:9300}{dilmrt}{ml.machine_memory=3973414912, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]}
[2020-08-31T20:12:08,278][INFO ][o.e.c.s.ClusterApplierService] [manager] master node changed {previous [], current [{manager}{nGxnzN7-RuO-G8O4lEzMDA}{_wjNhEiORe2WWC61slADFw}{127.0.0.1}{127.0.0.1:9300}{dilmrt}{ml.machine_memory=3973414912, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]}, term: 39, version: 12520, reason: Publication{term=39, version=12520}
[2020-08-31T20:12:08,320][INFO ][o.e.h.AbstractHttpServerTransport] [manager] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2020-08-31T20:12:08,321][INFO ][o.e.n.Node               ] [manager] started
[2020-08-31T20:12:08,799][INFO ][o.e.x.c.t.IndexTemplateRegistry] [manager] upgrading legacy template [.ml-anomalies-] for [ml] from version [7050199] to version [7090099]
[2020-08-31T20:12:08,799][INFO ][o.e.x.c.t.IndexTemplateRegistry] [manager] upgrading legacy template [.ml-state] for [ml] from version [7050199] to version [7090099]
[2020-08-31T20:12:08,799][INFO ][o.e.x.c.t.IndexTemplateRegistry] [manager] upgrading legacy template [.ml-config] for [ml] from version [7050199] to version [7090099]
[2020-08-31T20:12:08,800][INFO ][o.e.x.c.t.IndexTemplateRegistry] [manager] adding legacy template [.ml-inference-000002] for [ml], because it doesn't exist
[2020-08-31T20:12:08,800][INFO ][o.e.x.c.t.IndexTemplateRegistry] [manager] upgrading legacy template [.ml-meta] for [ml] from version [7050199] to version [7090099]
[2020-08-31T20:12:08,800][INFO ][o.e.x.c.t.IndexTemplateRegistry] [manager] upgrading legacy template [.ml-notifications-000001] for [ml] from version [7050199] to version [7090099]
[2020-08-31T20:12:08,800][INFO ][o.e.x.c.t.IndexTemplateRegistry] [manager] adding legacy template [.ml-stats] for [ml], because it doesn't exist
[2020-08-31T20:12:08,825][INFO ][o.e.x.c.t.IndexTemplateRegistry] [manager] adding legacy template [.watch-history-11] for [watcher], because it doesn't exist
[2020-08-31T20:12:08,827][INFO ][o.e.x.c.t.IndexTemplateRegistry] [manager] upgrading legacy template [.triggered_watches] for [watcher] from version [null] to version [11]
[2020-08-31T20:12:08,828][INFO ][o.e.x.c.t.IndexTemplateRegistry] [manager] upgrading legacy template [.watches] for [watcher] from version [null] to version [11]
[2020-08-31T20:12:08,839][INFO ][o.e.x.c.t.IndexTemplateRegistry] [manager] adding legacy template [ilm-history] for [index_lifecycle], because it doesn't exist
[2020-08-31T20:12:08,840][INFO ][o.e.x.c.t.IndexTemplateRegistry] [manager] upgrading legacy template [.slm-history] for [index_lifecycle] from version [null] to version [2]
[2020-08-31T20:12:08,867][INFO ][o.e.l.LicenseService     ] [manager] license [95508c51-04f9-47f0-a209-5869067dbd70] mode [basic] - valid
[2020-08-31T20:12:08,869][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [manager] Active license is now [BASIC]; Security is enabled
[2020-08-31T20:12:08,876][INFO ][o.e.g.GatewayService     ] [manager] recovered [185] indices into cluster_state
[2020-08-31T20:12:09,041][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding component template [metrics-settings]
[2020-08-31T20:12:09,321][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding template [.ml-config] for index patterns [.ml-config]
[2020-08-31T20:12:09,479][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding template [.ml-meta] for index patterns [.ml-meta]
[2020-08-31T20:12:09,697][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding component template [metrics-mappings]
[2020-08-31T20:12:09,979][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding template [.ml-state] for index patterns [.ml-state*]
[2020-08-31T20:12:10,150][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding template [.ml-anomalies-] for index patterns [.ml-anomalies-*]
[2020-08-31T20:12:10,333][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding template [.ml-stats] for index patterns [.ml-stats-*]
[2020-08-31T20:12:10,548][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding component template [logs-settings]
[2020-08-31T20:12:10,759][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding template [.ml-notifications-000001] for index patterns [.ml-notifications-000001]
[2020-08-31T20:12:10,944][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding template [.ml-inference-000002] for index patterns [.ml-inference-000002]
[2020-08-31T20:12:11,193][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding component template [logs-mappings]
[2020-08-31T20:12:11,376][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding template [.triggered_watches] for index patterns [.triggered_watches*]
[2020-08-31T20:12:11,744][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding template [.watches] for index patterns [.watches*]
[2020-08-31T20:12:11,938][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding template [ilm-history] for index patterns [ilm-history-2*]
[2020-08-31T20:12:12,099][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding template [.watch-history-11] for index patterns [.watcher-history-11*]
[2020-08-31T20:12:12,282][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding template [.slm-history] for index patterns [.slm-history-2*]
[2020-08-31T20:12:12,492][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding index template [metrics] for index patterns [metrics-*-*]
[2020-08-31T20:12:12,653][INFO ][o.e.c.m.MetadataIndexTemplateService] [manager] adding index template [logs] for index patterns [logs-*-*]
[2020-08-31T20:12:16,074][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [manager] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:45596}
[2020-08-31T20:12:18,290][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [manager] adding index lifecycle policy [ml-size-based-ilm-policy]
[2020-08-31T20:12:18,427][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [manager] adding index lifecycle policy [logs]
[2020-08-31T20:12:18,518][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [manager] adding index lifecycle policy [metrics]
[2020-08-31T20:12:18,575][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [manager] adding index lifecycle policy [ilm-history-ilm-policy]
[2020-08-31T20:12:51,365][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [manager] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:45600}
[2020-08-31T20:13:20,077][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [manager] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:45604}
[2020-08-31T20:13:28,977][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [manager] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:45608}
[2020-08-31T20:13:39,948][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [manager] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:45612}
[2020-08-31T20:14:11,648][INFO ][o.e.c.r.a.AllocationService] [manager] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.apm-agent-configuration][0]]]).
[2020-08-31T20:20:16,727][INFO ][o.e.x.m.p.NativeController] [manager] Native controller process has stopped - no new native processes can be started
[2020-08-31T21:08:32,293][INFO ][o.e.n.Node               ] [manager] version[7.9.0], pid[4146], build[default/rpm/a479a2a7fce0389512d6a9361301708b92dff667/2020-08-11T21:36:48.204330Z], OS[Linux/3.10.0-1127.19.1.el7.x86_64/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/14.0.1/14.0.1+7]
[2020-08-31T21:08:32,303][INFO ][o.e.n.Node               ] [manager] JVM home [/usr/share/elasticsearch/jdk]
[2020-08-31T21:08:32,303][INFO ][o.e.n.Node               ] [manager] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-17315242380790199656, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:MaxDirectMemorySize=1073741824, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=rpm, -Des.bundled_jdk=true]

Alberto Rodriguez

Sep 3, 2020, 3:46:11 AM
to Wazuh mailing list
Sorry Brandon, I thought the last 200 lines would be enough, but it looks like the error occurred before them. Could you please attach the entire log file? As soon as you send it I will analyze it; I will try to help you as quickly as possible so your stack is working again.

Alberto Rodriguez

Sep 3, 2020, 3:49:12 AM
to Wazuh mailing list
If those lines are the latest ones, are you sure that your Elasticsearch is not working? In the logs I see the node reach the started state, but never a stopped state.

Can you verify this? curl https://localhost:9200 -k -u elastic:password

Replace password with your Elasticsearch password (if you configured one).

Brandon

Sep 3, 2020, 9:53:43 AM
to Wazuh mailing list
Thanks

[root@manager elasticsearch]# curl https://localhost:9200 -k -u elastic:mypassword
curl: (7) Failed connect to localhost:9200; Connection refused
Log_for_wazuh_cluster.log

Alberto Rodriguez

Sep 3, 2020, 1:20:15 PM
to Wazuh mailing list
It's really weird. Your log shows an active Elasticsearch, but you say it is not working. OK, to verify whether the Elasticsearch process is running and listening on port 9200, please run `netstat -tunap | grep 9200`. The expected output looks similar to this:

[root@wazuhmanager ~]# netstat -tunap | grep 9200
tcp        0      0 127.0.0.1:48898         127.0.0.1:9200          ESTABLISHED 671/filebeat
tcp        0      0 127.0.0.1:48904         127.0.0.1:9200          ESTABLISHED 8927/node
tcp        0      0 127.0.0.1:48910         127.0.0.1:9200          ESTABLISHED 8927/node
tcp        0      0 127.0.0.1:48900         127.0.0.1:9200          ESTABLISHED 8927/node
tcp        0      0 127.0.0.1:48902         127.0.0.1:9200          ESTABLISHED 8927/node
tcp        0      0 127.0.0.1:48906         127.0.0.1:9200          ESTABLISHED 8927/node
tcp        0      0 127.0.0.1:48908         127.0.0.1:9200          ESTABLISHED 8927/node
tcp        0      0 127.0.0.1:48912         127.0.0.1:9200          ESTABLISHED 8927/node
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      675/java
tcp6       0      0 127.0.0.1:9200          127.0.0.1:48904         ESTABLISHED 675/java
tcp6       0      0 127.0.0.1:9200          127.0.0.1:48900         ESTABLISHED 675/java
tcp6       0      0 127.0.0.1:9200          127.0.0.1:48910         ESTABLISHED 675/java
tcp6       0      0 127.0.0.1:9200          127.0.0.1:48908         ESTABLISHED 675/java
tcp6       0      0 127.0.0.1:9200          127.0.0.1:48912         ESTABLISHED 675/java
tcp6       0      0 127.0.0.1:9200          127.0.0.1:48902         ESTABLISHED 675/java
tcp6       0      0 127.0.0.1:9200          127.0.0.1:48898         ESTABLISHED 675/java
tcp6       0      0 127.0.0.1:9200          127.0.0.1:48906         ESTABLISHED 675/java


Then, I saw some errors in the log referring to the security index and the `elastic` user password, so I would recommend resetting the `elastic` password. To do so:

Stop Elasticsearch on all your nodes:
systemctl stop elasticsearch

Generate a temporary file-based superuser. Do this on all Elasticsearch nodes (I assume you have only one):
/usr/share/elasticsearch/bin/elasticsearch-users useradd admin_user -p changeme_password -r superuser
This creates the user admin_user with the password changeme_password.

Start all your nodes:
systemctl start elasticsearch

Change the password for the `elastic` user (replace the password in the request body with the one you want):
curl -u admin_user:changeme_password -k -XPUT 'https://10.250.3.236:9200/_xpack/security/user/elastic/_password?pretty' -H 'Content-Type: application/json' -d' { "password": "H8PCE10Ui0hiUdrk6tky" }'

Then I recommend removing the temporary superuser:
/usr/share/elasticsearch/bin/elasticsearch-users userdel admin_user

You should now be able to connect with curl using the new password:
curl https://localhost:9200 -k -u elastic:H8PCE10Ui0hiUdrk6tky


If it works, please change the password in Filebeat and Kibana configuration files. 
Let me know if it works. 

Regards, 
Alberto R

Brandon

Sep 3, 2020, 2:48:12 PM
to Wazuh mailing list
Not much luck here. I couldn't get the Elasticsearch service to start, so I couldn't move on from there. I may have to pull the plug on this upgrade soon and revert to the snapshot. Do you recommend going from 3.11 to 3.12 instead of 3.11 to 3.13?

[root@manager elasticsearch]# netstat -tunap | grep 9200
[root@manager elasticsearch]#

[root@manager elasticsearch]# systemctl stop elasticsearch
[root@manager elasticsearch]# /usr/share/elasticsearch/bin/elasticsearch-users useradd admin_user -p changeme_password -r superuser
[root@manager elasticsearch]# systemctl start elasticsearch
Job for elasticsearch.service failed because the control process exited with error code. See "systemctl status elasticsearch.service" and "journalctl -xe" for details.

[root@manager elasticsearch]# systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/elasticsearch.service.d
           └─elasticsearch.conf
   Active: failed (Result: exit-code) since Thu 2020-09-03 18:44:48 UTC; 1min 33s ago
  Process: 29710 ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
 Main PID: 29710 (code=exited, status=1/FAILURE)

Sep 03 18:44:48 manager systemd-entrypoint[29710]: OpenJDK 64-Bit Server VM warning: Ignoring option UseCMSInitiatingOccupancyOnly; support was removed in 14.0
Sep 03 18:44:48 manager systemd-entrypoint[29710]: at org.elasticsearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:126)
Sep 03 18:44:48 manager systemd-entrypoint[29710]: at org.elasticsearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:88)
Sep 03 18:44:48 manager systemd-entrypoint[29710]: at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:59)
Sep 03 18:44:48 manager systemd-entrypoint[29710]: at org.elasticsearch.tools.launchers.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:137)
Sep 03 18:44:48 manager systemd-entrypoint[29710]: at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:95)
Sep 03 18:44:48 manager systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Sep 03 18:44:48 manager systemd[1]: Failed to start Elasticsearch.
Sep 03 18:44:48 manager systemd[1]: Unit elasticsearch.service entered failed state.
Sep 03 18:44:48 manager systemd[1]: elasticsearch.service failed.

Thanks

Alberto Rodriguez

Sep 4, 2020, 9:31:42 AM
to Wazuh mailing list
The Wazuh upgrade is not the problem; the problem is likely the Elasticsearch upgrade. Which version of Elasticsearch did you have? If you had 6.x, you need to follow this guide: https://documentation.wazuh.com/3.13/upgrade-guide/upgrading-elastic-stack/elastic_server_rolling_upgrade.html, but if you had 7.x the correct one is https://documentation.wazuh.com/3.13/upgrade-guide/upgrading-elastic-stack/elastic_server_minor_upgrade.html. Which guide did you use?
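If you are not sure which major version is installed, a quick check on an RPM-based system (as in this thread) is a sketch like the following; it checks both the -oss and non-oss package names:

```shell
#!/bin/sh
# Pick the right upgrade guide from the installed Elasticsearch version.
# Assumes an RPM-based system; queries both -oss and non-oss packages.
v=$(rpm -q --qf '%{VERSION}\n' elasticsearch-oss elasticsearch 2>/dev/null \
    | grep -E '^[0-9]' | head -n 1)
major=${v%%.*}
case "$major" in
  6) echo "Version $v: follow the rolling upgrade guide (6.x to 7.x)" ;;
  7) echo "Version $v: follow the minor upgrade guide (7.x to 7.x)" ;;
  *) echo "Could not determine the Elasticsearch version" ;;
esac
```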

Brandon

Sep 4, 2020, 9:45:26 AM
to Wazuh mailing list
I believe it was Elastic Stack 7.6, the version preloaded with the 3.11 OVA, and I used this guide: https://documentation.wazuh.com/3.13/upgrade-guide/upgrading-elastic-stack/elastic_server_minor_upgrade.html

Alberto Rodriguez

Sep 7, 2020, 10:44:25 AM
to Wazuh mailing list
Hello Brandon

  I'm starting to figure out what could have happened. The upgrade guide is not completely valid for the OVA, because the OVA uses the `oss` Elastic packages. To confirm, do you get this output when you run `rpm -qa | egrep "kibana|elastic|filebeat"`?

[root@manager ~]# rpm -qa | egrep "kibana|elastic|filebeat"
elasticsearch-oss-7.6.0-1.x86_64
filebeat-7.6.0-1.x86_64
kibana-oss-7.6.0-1.x86_64

If your packages are not all at version 7.9.0 and they are -oss variants, this is the problem. Instead of installing them from the repository, upgrade them directly with rpm -Uvh https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.9.0-x86_64.rpm, rpm -Uvh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.9.0-x86_64.rpm, and rpm -Uvh https://artifacts.elastic.co/downloads/kibana/kibana-oss-7.9.0-x86_64.rpm.

Please let me know if this is your case and whether you need more details on this special upgrade. The technical writing team is now aware of this situation with OVA upgrades.

Regards, 
Alberto R 