Can't see logs on Kibana


Miki Alkalay

unread,
Mar 31, 2020, 9:12:57 AM3/31/20
to Wazuh mailing list
Hi Team,
Suddenly, the logs are no longer appearing in the Kibana GUI.
The logs are still coming in, and I can see them in archives.log.

Please advise.

Miki

Juan Pablo Saez

unread,
Mar 31, 2020, 10:12:15 AM3/31/20
to Wazuh mailing list
Hi Miki,

> Suddenly, the logs are no longer appearing in the Kibana GUI.

If I have understood correctly, you used to receive alerts in Kibana for certain events and this has stopped happening. The events keep occurring, but the related alerts aren't shipped to Kibana, right?

> The logs are still coming in, and I can see them in archives.log.

I would like you to check for alerts related to these events in the alerts.json file. Is this happening for all alerts or just for some? I think your issue could be due to a modification or deletion in the ruleset affecting the events that no longer appear. Let me know some more details.
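
As a quick check, assuming a default Wazuh installation under /var/ossec and that jq is installed, you can count recent alerts per rule ID in alerts.json to see whether only certain rules went quiet:

# alerts.json is newline-delimited JSON; count alerts per rule ID, most frequent first.
jq -r '.rule.id' /var/ossec/logs/alerts/alerts.json | sort | uniq -c | sort -rn | head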

Greetings,

JP Sáez

Miki Alkalay

unread,
Mar 31, 2020, 10:19:18 AM3/31/20
to Juan Pablo Saez, Wazuh mailing list
Hi,
It's all the alerts; I can't see anything.
Could it be related to Elastic or Filebeat?

Miki

Miki Alkalay

unread,
Apr 1, 2020, 2:35:34 AM4/1/20
to Juan Pablo Saez, Wazuh mailing list
Hi,
Any news?
My system is down and I can't see anything.

Miki

Juan Pablo Saez

unread,
Apr 1, 2020, 5:23:38 AM4/1/20
to Wazuh mailing list

Hello Miki,

First of all, sorry for the late reply.

As a first step, please check whether new alerts are being generated in the alerts.json file.

  • If you see no new alerts being generated in the alerts.json file:
           it may be a problem with the Wazuh manager. In that case, please look in the ossec.log file for error traces and paste them here.

  • If you see new alerts being generated in the alerts.json file:
           it may be a problem with Filebeat, Elasticsearch, or Kibana. Check their logs and paste any error traces here:

    • systemctl status filebeat -l | grep -i -E "err|warn"
    • systemctl status kibana -l | grep -i -E "err|warn"
    • systemctl status elasticsearch -l | grep -i -E "err|warn" (or check the log files under /var/log/elasticsearch/)
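
For the first check, a quick way to watch new alerts arrive on the manager (a minimal sketch, assuming the default /var/ossec installation path):

# The Wazuh manager appends one JSON alert per line to this file as events match rules.
tail -f /var/ossec/logs/alerts/alerts.json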

Let me know if you find any errors. Greetings and stay safe.

JP Sáez


Miki Alkalay

unread,
Apr 1, 2020, 8:09:54 AM4/1/20
to Juan Pablo Saez, Wazuh mailing list
Hi,
Yes, I can see new alerts, and I agree it's likely coming from Elastic or Filebeat.
The statuses are attached.

Miki

filebeat.status
elastic.status
kibana.status

Juan Pablo Saez

unread,
Apr 1, 2020, 11:15:04 AM4/1/20
to Wazuh mailing list

Hello again Miki,

I can see shard-related errors in both the Filebeat and Elastic logs:

{"type":"validation_exception","reason":"Validation Failed: 1: this action would add [3] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}

Elasticsearch 7.x has a limit on the maximum number of shards that can be allocated on a single node. Once it is exceeded, Elasticsearch refuses to allocate new shards (so new daily indices fail to create, as in the error above), but you can raise the limit by adding the following setting to /etc/elasticsearch/elasticsearch.yml: cluster.max_shards_per_node: 2000. Keep in mind that 2k shards per node is above Elasticsearch's recommended limit, so consider managing your indices' shards to reduce their number.
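
For reference, a minimal sketch of applying this, assuming Elasticsearch listens on localhost:9200. The static route requires a service restart; the cluster settings API applies it live:

# /etc/elasticsearch/elasticsearch.yml (restart Elasticsearch afterwards)
cluster.max_shards_per_node: 2000

# Or set it dynamically, without a restart:
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{ "persistent": { "cluster.max_shards_per_node": 2000 } }'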

Greetings,

JP Sáez


Miki Alkalay

unread,
Apr 1, 2020, 11:32:58 AM4/1/20
to Juan Pablo Saez, Wazuh mailing list
Hi,
Thanks for your answer.
I just added the shard line, but the logs still aren't showing in the GUI.
I restarted the ELK services.

Miki


Juan Pablo Saez

unread,
Apr 1, 2020, 11:41:07 AM4/1/20
to Wazuh mailing list
Hey Miki,
Have you checked the Filebeat and Elastic logs after changing this value? Please check them and paste the results here so we can analyze them.

Greetings, 

JP

Miki Alkalay

unread,
Apr 1, 2020, 12:01:12 PM4/1/20
to Juan Pablo Saez, Wazuh mailing list
[2020-04-01T18:57:33,405][DEBUG][o.e.a.s.TransportSearchAction] [wazuh] All shards failed for phase: [query]
[2020-04-01T18:57:33,405][WARN ][r.suppressed             ] [wazuh] path: /.kibana_task_manager/_search, params: {ignore_unavailable=true, index=.kibana_task_manager}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:305) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:139) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:264) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:105) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$1(InitialSearchPhase.java:251) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:172) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:758) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.3.0.jar:7.3.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:835) [?:?]
[... the same "all shards failed" SearchPhaseExecutionException repeats several more times ...]


2019-09-15T12:57:08.671Z INFO instance/beat.go:606 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2019-09-15T12:57:08.671Z INFO instance/beat.go:614 Beat ID: e806e58a-1837-4507-a7d8-c7a778b8d4ec
2019-09-15T12:57:08.672Z INFO [beat] instance/beat.go:902 Beat info {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "e806e58a-1837-4507-a7d8-c7a778b8d4ec"}}}
2019-09-15T12:57:08.672Z INFO [beat] instance/beat.go:911 Build info {"system_info": {"build": {"commit": "6f0ec01a0e57fe7d4fd703b017fb5a2f6448d097", "libbeat": "7.3.0", "time": "2019-07-24T17:39:34.000Z", "version": "7.3.0"}}}
2019-09-15T12:57:08.672Z INFO [beat] instance/beat.go:914 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.12.4"}}}
2019-09-15T12:57:08.672Z INFO [beat] instance/beat.go:918 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2019-09-15T12:31:59Z","containerized":false,"name":"wazuh","ip":["127.0.0.1/8","::1/128","10.128.0.2/32","fe80::4001:aff:fe80:2/64"],"kernel_version":"3.10.0-957.27.2.el7.x86_64","mac":["42:01:0a:80:00:02"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":6,"patch":1810,"codename":"Core"},"timezone":"UTC","timezone_offset_sec":0,"id":"99026e84e6e37d84202a58c6bbb7b563"}}}
2019-09-15T12:57:08.673Z INFO [beat] instance/beat.go:947 Process info {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"ambient":null}, "cwd": "/root", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 19970, "ppid": 2216, "seccomp": {"mode":"disabled"}, "start_time": "2019-09-15T12:57:08.220Z"}}}
2019-09-15T12:57:08.673Z INFO instance/beat.go:292 Setup Beat: filebeat; Version: 7.3.0
2019-09-15T12:57:08.673Z INFO [index-management] idxmgmt/std.go:178 Set output.elasticsearch.index to 'filebeat-7.3.0' as ILM is enabled.
2019-09-15T12:57:08.673Z INFO elasticsearch/client.go:170 Elasticsearch url: http://localhost:9200
2019-09-15T12:57:08.673Z INFO [publisher] pipeline/module.go:97 Beat name: wazuh
2019-09-15T12:57:08.673Z INFO elasticsearch/client.go:170 Elasticsearch url: http://localhost:9200
2019-09-15T12:57:08.813Z INFO elasticsearch/client.go:743 Attempting to connect to Elasticsearch version 7.3.0
2019-09-15T12:57:08.884Z INFO [index-management] idxmgmt/std.go:252 Auto ILM enable success.
2019-09-15T12:57:08.949Z INFO [index-management] idxmgmt/std.go:265 ILM policy successfully loaded.
2019-09-15T12:57:08.949Z INFO [index-management] idxmgmt/std.go:394 Set setup.template.name to '{filebeat-7.3.0 {now/d}-000001}' as ILM is enabled.
2019-09-15T12:57:08.949Z INFO [index-management] idxmgmt/std.go:399 Set setup.template.pattern to 'filebeat-7.3.0-*' as ILM is enabled.
2019-09-15T12:57:08.949Z INFO [index-management] idxmgmt/std.go:433 Set settings.index.lifecycle.rollover_alias in template to {filebeat-7.3.0 {now/d}-000001} as ILM is enabled.
2019-09-15T12:57:08.949Z INFO [index-management] idxmgmt/std.go:437 Set settings.index.lifecycle.name in template to {filebeat-7.3.0 {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2019-09-15T12:57:08.954Z INFO template/load.go:169 Existing template will be overwritten, as overwrite is enabled.
2019-09-15T12:57:09.068Z INFO template/load.go:108 Try loading template filebeat-7.3.0 to Elasticsearch
2019-09-15T12:57:09.291Z INFO template/load.go:100 template with name 'filebeat-7.3.0' loaded.
2019-09-15T12:57:09.291Z INFO [index-management] idxmgmt/std.go:289 Loaded index template.
2019-09-15T12:57:09.879Z INFO [index-management] idxmgmt/std.go:300 Write alias successfully generated.


Juan Pablo Saez

unread,
Apr 2, 2020, 3:41:13 AM4/2/20
to Wazuh mailing list

Hello again Miki,

It seems like 2k max shards is too much for your ES node. Please set the value back to the 1k default and check which of the following fixes best suits your use case:

  • Option 1: Since every node can hold a maximum of 1,000 open shards, an Elasticsearch cluster of 3 nodes raises the cluster limit to 3,000 open shards (3 nodes * 1,000 shards per node), so configuring an Elasticsearch cluster would solve this problem.
  • Option 2: If creating an Elasticsearch cluster is not currently possible, you can decrease the number of indices by reindexing your daily indices into monthly indices (feasibility depends on the indices' size); more info here. The following example merges all daily indices of 2020.01 into a single monthly index (see the usage note after the snippet).

POST _reindex
{
  "source": {
    "index": "wazuh-alerts-3.x-2020.01.*"
  },
  "dest": {
    "index": "wazuh-alerts-3.x-2020.01"
  }
}
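
This request can be run from Kibana's Dev Tools console, or equivalently with curl (assuming Elasticsearch on localhost:9200; adjust the host as needed):

curl -X POST "localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{
  "source": { "index": "wazuh-alerts-3.x-2020.01.*" },
  "dest": { "index": "wazuh-alerts-3.x-2020.01" }
}'

Once the reindex finishes and you have verified the document counts match, delete the old daily indices to actually free their shards.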

  • Option 3: You can also reduce the number of open shards below the limit by closing some indices; closed indices do not count toward the cluster shard limit. (Closing an index does not delete it, and closed indices can be reopened later.) Note that closed indices will not return any results when searching in Elasticsearch/Kibana. More info here.

You can close/open an index as in the following example:

curl -X POST "ELASTIC_IP:9200/index_name/_close"
curl -X POST "ELASTIC_IP:9200/index_name/_open" 

  • Option 4: Another solution would be to increase the soft limit cluster.max_shards_per_node above 1,000 shards per node again. As you have seen, this can easily lead to issues; you could try setting it to 1100 or 1200 instead of the 2k we previously tried.
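
To decide between these options, it helps to see where the shards are going. A minimal sketch, again assuming localhost:9200, that lists each index with its primary and replica shard counts:

# One line per index: name, number of primary shards, number of replicas.
curl -s "localhost:9200/_cat/indices?h=index,pri,rep&s=index"

Daily wazuh-alerts indices, each carrying its own primaries and replicas, are usually what exhausts the 1,000-shard budget, so they are the first candidates for merging or closing.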

Let me know how it goes. Greetings

JP


Miki Alkalay

unread,
Apr 2, 2020, 5:15:06 AM4/2/20
to Juan Pablo Saez, Wazuh mailing list
Hi,
Thanks for your reply.
Where can I run this?
POST _reindex { "source": { "index": "wazuh-alerts-3.x-2020.01.*" }, "dest": { "index": "wazuh-alerts-3.x-2020.01" } }
Can you please give me more details?

Miki 


Juan Pablo Saez

unread,
Apr 2, 2020, 12:30:27 PM4/2/20
to Wazuh mailing list

Hello again Miki,

I think in this situation it is a good idea to go back to the initial state (set cluster.max_shards_per_node back to 1000) and recheck the ES logs and cluster health:

  • First, set cluster.max_shards_per_node: 1000 back in /etc/elasticsearch/elasticsearch.yml. Remember to restart Elasticsearch.
  • Then check and paste a good portion of the Elasticsearch log: journalctl -u elasticsearch.service
  • Also, check the cluster health through the Kibana Dev Tools GUI: GET _cluster/health
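
If the Dev Tools console is not reachable, the same health check can be done with curl (assuming localhost:9200):

curl -s "localhost:9200/_cluster/health?pretty"

The response includes status, active_shards, and unassigned_shards, which should tell us whether the shard limit is still the blocker.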

2020-04-02_18-28.png



After checking this data we can choose the best option and see why increasing the soft open-shards limit failed. Let me know how it goes.

Greetings, JP
