Wazuh no longer sending data to elasticsearch


Matt Schenkman

Sep 15, 2021, 7:49:39 PM
to Wazuh mailing list
My Wazuh server (4.0.4) is no longer sending any logs to Elasticsearch, so it isn't creating daily indices. My Filebeat config is set up to send to our ELK server, and nothing has changed per se.

I am getting this error throughout my ES log: Search rejected due to missing shards

I found an article about recovering my indices, but I'm not sure if that's what the issue is.

Thanks in advance,
~Matt

Maximiliano Ibarra

Sep 16, 2021, 1:40:47 PM
to Wazuh mailing list
Hi Matt, thanks for contacting us.
We need more information about your issue. If the maximum shard count is exceeded, you can get the error "Search rejected due to missing shards".
But first, let's look at your Elasticsearch and Kibana logs. Please run these commands and paste the results here (note the -E flag: without it, grep treats the "|" literally and matches nothing):
  • grep -iE "error|warn" /var/log/kibana.log
  • grep -iE "error|warn" /var/log/elasticsearch/<CLUSTER_NAME>.log
Also, do you have access to the Wazuh Dev Tools? We need to run this request in Dev Tools:
  • GET /_cluster/health
This will tell us your cluster status.
That's all for now. Thanks.
I'm looking forward to your reply.
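If Dev Tools is unresponsive, the same requests can be issued with curl; the host, port, and credentials below are placeholders assuming a default single-node install, not your actual deployment:

```shell
# Placeholders: adjust host and credentials for your deployment.
curl -sk -u USERNAME:PASSWORD "https://localhost:9200/_cluster/health?pretty"

# If the cluster is yellow/red, ask why a shard cannot be allocated:
curl -sk -u USERNAME:PASSWORD "https://localhost:9200/_cluster/allocation/explain?pretty"
```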
Best regards

Matt Schenkman

Sep 16, 2021, 3:49:43 PM
to Maximiliano Ibarra, Wazuh mailing list
OK here you go:

cat kibana.log: Kibana's log is configured for stdout, so there's no file to read. There is nothing when I grep the Elasticsearch log or the syslog. Is there anywhere else you'd recommend I check?
cat cluster log: nothing returned.
I have access to the Dev Tools, but I'm running a single node and the request didn't return anything.



Matt Schenkman

Sep 17, 2021, 4:31:25 AM
to Maximiliano Ibarra, Wazuh mailing list
Here's a copy of the cluster allocation explanation:

{
  "index" : "filebeat-7.9.2-2020.12.14-000001",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2021-09-15T23:29:28.868Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "amTHlKPVTUG1Cg6T8nQsmQ",
      "node_name" : "node-1",
      "transport_address" : "127.0.0.1:9300",
      "node_attributes" : {
        "ml.machine_memory" : "16795901952",
        "rack" : "HillStreet",
        "xpack.installed" : "true",
        "transform.node" : "true",
        "ml.max_open_jobs" : "20"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "enable",
          "decision" : "NO",
          "explanation" : "replica allocations are forbidden due to cluster setting [cluster.routing.allocation.enable=primaries]"
        },
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "a copy of this shard is already allocated to this node [[filebeat-7.9.2-2020.12.14-000001][0], node[amTHlKPVTUG1Cg6T8nQsmQ], [P], s[STARTED], a[id=DDr5z06yQTm6twAfkbXN-Q]]"
        }
      ]
    }
  ]
}
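Two things stand out in that output: the unassigned shard is a replica, replica allocation is currently forbidden by the cluster setting, and on a single-node cluster a replica can never be placed next to its own primary anyway. A small sketch that pulls the relevant fields out of an abridged copy of the response above:

```python
import json

# Abridged copy of the allocation-explain output above, keeping only
# the fields that matter for the diagnosis.
explain = json.loads("""
{
  "index": "filebeat-7.9.2-2020.12.14-000001",
  "primary": false,
  "can_allocate": "no",
  "node_allocation_decisions": [
    {
      "node_name": "node-1",
      "node_decision": "no",
      "deciders": [
        {"decider": "enable",
         "explanation": "replica allocations are forbidden due to cluster setting [cluster.routing.allocation.enable=primaries]"},
        {"decider": "same_shard",
         "explanation": "a copy of this shard is already allocated to this node"}
      ]
    }
  ]
}
""")

# Two independent blockers: allocation is restricted to primaries, and
# the node already holds this shard's primary, so the replica can never
# be assigned on a one-node cluster.
for node in explain["node_allocation_decisions"]:
    for d in node["deciders"]:
        print(node["node_name"], d["decider"], "->", d["explanation"])
```

On a single node, setting number_of_replicas to 0 on these indices makes such unassigned replicas disappear (and stops them counting toward the shard limit).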

Matt Schenkman

Sep 17, 2021, 4:36:08 AM
to Maximiliano Ibarra, Wazuh mailing list
Oh, I also found this in the Filebeat logs: "this cluster currently has [999]/[1000] maximum shards open".
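That message is the root cause: Elasticsearch 7.x caps open shards at cluster.max_shards_per_node (default 1,000) per data node, and every primary and replica of every open index counts against the budget. A back-of-the-envelope sketch, where the per-day index counts are illustrative assumptions rather than the actual figures from this cluster:

```python
# Illustrative arithmetic: how daily indices exhaust the default
# 1000-shard budget of a single-node Elasticsearch 7.x cluster.
MAX_SHARDS_PER_NODE = 1000   # Elasticsearch 7.x default
primaries_per_index = 1      # assumed: one primary shard per index
replicas_per_index = 1       # assumed: default replica count

shards_per_index = primaries_per_index * (1 + replicas_per_index)

indices_per_day = 2          # assumed: one wazuh-alerts + one filebeat index
shards_per_day = indices_per_day * shards_per_index

days_until_full = MAX_SHARDS_PER_NODE // shards_per_day
print(days_until_full)  # 250: well under a year of retention hits the cap
```

Deleting old indices (via ILM) or dropping replicas to 0 on a single node both shrink the count.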

Matt Schenkman

Sep 17, 2021, 5:25:59 AM
to Maximiliano Ibarra, Wazuh mailing list
Wazuh is flowing into ES again. I applied a broader ILM template covering all indices matching wazuh-* and reduced the delete window. Let me know if you have any other recommendations.
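For anyone hitting the same wall, the fix can be sketched as an ILM policy with only a delete phase; the policy name and the 30-day window below are illustrative assumptions, not the actual settings used here (run it in Kibana Dev Tools):

```json
PUT _ilm/policy/wazuh-cleanup
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

The policy only takes effect on indices whose template references it via index.lifecycle.name, so it also has to be wired into a template matching wazuh-*.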

Maximiliano Ibarra

Sep 17, 2021, 10:56:26 AM
to Wazuh mailing list
Hi Matt, I'm glad to hear that.
Yes, one solution is to create an ILM policy that deletes old indices; you can find more details at this link: https://wazuh.com/blog/wazuh-index-management.
You can also raise the maximum number of shards per node, though this is not recommended: it is usually better to keep the shard count small and prune old indices. (If you do raise the limit, please don't exceed 1200.)
  • curl -k -u USERNAME:PASSWORD -XPUT ELASTICSEARCH_HOST_ADDRESS/_cluster/settings -H "Content-Type: application/json" -d '{ "persistent": { "cluster.max_shards_per_node": "MAX_SHARDS_PER_NODE" } }'
But you have chosen the better solution: an ILM policy. Congrats!
If you have any doubts about this or any other topic, don't hesitate to contact us again.
Have a nice weekend.
Best regards,
Maximiliano