Can't set .opendistro-alerting-config to 0 replicas

JonR

Nov 21, 2023, 7:17:04 PM
to Wazuh | Mailing List
Wazuh hasn't been showing any events in the web UI for a few weeks now (despite my getting email notifications for rules I've created). I'm running a single node on Ubuntu 22.04.

Upon investigation, nothing seemed awry. After a restart of wazuh-indexer, multiple indices have 1 replica and the cluster is yellow. I was able to set most indices to 0 replicas manually; however, I cannot do so for .opendistro-alerting-config, as I receive the security error below (the same happens via the API):

{
  "error": {
    "root_cause": [
      {
        "type": "security_exception",
        "reason": "no permissions for [] and User [name=admin, backend_roles=[admin], requestedTenant=null]"
      }
    ],
    "type": "security_exception",
    "reason": "no permissions for [] and User [name=admin, backend_roles=[admin], requestedTenant=null]"
  },
  "status": 403
}
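
(For context, the replica change I'm attempting is just the standard index settings update, along these lines; it works for the other indices but not this one:)

PUT .opendistro-alerting-config/_settings
{
  "index": { "number_of_replicas": 0 }
}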

Below is the result of the query GET _cluster/allocation/explain:

{
  "index": ".opendistro-alerting-config",
  "shard": 0,
  "primary": false,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "CLUSTER_RECOVERED",
    "at": "2023-11-21T23:22:21.297Z",
    "last_allocation_status": "no_attempt"
  },
  "can_allocate": "no",
  "allocate_explanation": "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions": [
    {
      "node_id": "hPbN3vJIQUGDE-vYU_-5_g",
      "node_name": "node-hawk",
      "transport_address": "192.168.63.254:9300",
      "node_attributes": {
        "shard_indexing_pressure_enabled": "true"
      },
      "node_decision": "no",
      "deciders": [
        {
          "decider": "same_shard",
          "decision": "NO",
          "explanation": "a copy of this shard is already allocated to this node [[.opendistro-alerting-config][0], node[hPbN3vJIQUGDE-vYU_-5_g], [P], s[STARTED], a[id=g2ZnLHdbRW-1LI7Yg8BilA]]"
        }
      ]
    }
  ]
}

Sebastian Dario Bustos

Nov 21, 2023, 10:33:10 PM
to Wazuh | Mailing List
Hi @JonR,
Thank you for using Wazuh!!!

If you search for that index under "Dashboard menu -> Index management -> Indices", do you see the replica count set to "1"? If so, you can click on the index name and, in the settings section, change it from 1 to 0 and save the changes. Does that return an error as well?

About the lack of events: please check that disk usage on the Indexer node is not at 90% or above (the default watermark level, after which indices go into read-only mode), and also check the health of the cluster from the Dev Tools console with this query:
GET _cluster/health

By default there is a maximum of 1,000 shards per node; if this limit is reached, it will prevent you from indexing new events or saving any configuration on your Dashboard/Indexer.
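
For example, this query in Dev Tools shows disk usage and the shard count on each node at a glance:
GET _cat/allocation?v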

Also, it may be worth checking the status of Filebeat on your Wazuh manager node; if it is down for some reason, the events on your manager will not reach your Indexer, and therefore no events will show on your Dashboard UI (you can check with: systemctl status filebeat).
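
You can also test the connection from the manager to the Indexer output directly with:
filebeat test output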

Please let me know if this is helpful.
Regards.

JonR

Nov 22, 2023, 12:14:51 AM
to Wazuh | Mailing List
Thanks for the advice, Sebastian. The index .opendistro-alerting-config is actually not visible in my dashboard, but I do see other indices that start with .opendistro.

Disk usage is below 90% and I only have 92 active shards on the node, but you're right: Filebeat is failing to start. It looks like it's failing to connect to wazuh-indexer on port 9200 for some reason.

Error: Failed to start Filebeat sends log files to Logstash or directly to Elasticsearch..
Log: pipeline/output.go:154  Failed to connect to backoff(elasticsearch(https://192.168.63.254:9200)): Get "https://192.168.63.254:9200": dial tcp 192.168.63.254:9200: connect: connection refused

I double checked filebeat.yml and everything looks correct. I am able to use the API at port 9200 just fine.
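
(For what it's worth, a quick way to confirm that something is actually listening on 9200 is a check along these lines:
ss -ltnp | grep 9200
which also looks fine here.)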

Wazuh-indexer is running; however, I noticed I'm getting the following warnings when starting the service:

WARNING: A terminally deprecated method in java.lang.System has been called
WARNING: System::setSecurityManager has been called by org.opensearch.bootstrap.OpenSearch (file:/usr/share/wazuh-indexer/lib/opensearch-2.8.0.jar)
WARNING: Please consider reporting this to the maintainers of org.opensearch.bootstrap.OpenSearch
WARNING: System::setSecurityManager will be removed in a future release
WARNING: A terminally deprecated method in java.lang.System has been called
WARNING: System::setSecurityManager has been called by org.opensearch.bootstrap.Security (file:/usr/share/wazuh-indexer/lib/opensearch-2.8.0.jar)
WARNING: Please consider reporting this to the maintainers of org.opensearch.bootstrap.Security
WARNING: System::setSecurityManager will be removed in a future release
Started Wazuh-indexer.

JonR

Nov 22, 2023, 4:55:14 PM
to Wazuh | Mailing List
Just want to add: after another reboot of the server, Filebeat is now connecting. I'm a bit concerned as to why OpenSearch was refusing its connection, though. filebeat test output was a success, so I don't believe it was a configuration issue with Filebeat. There weren't any errors in the wazuh-indexer log either.

I was able to set all system indices to 0 replicas by changing "plugins.security.system_indices.enabled" to "false" in opensearch.yml. After changing the replica count, I re-enabled the setting.
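
In case it helps anyone else, the sequence was roughly as follows (the host and credentials below are placeholders from my single-node setup):

1. In /etc/wazuh-indexer/opensearch.yml, set plugins.security.system_indices.enabled: false, then systemctl restart wazuh-indexer.
2. Drop the replicas on the affected system indices (adjust the index pattern to match yours):
curl -k -u admin:<password> -XPUT "https://192.168.63.254:9200/.opendistro-*/_settings" -H 'Content-Type: application/json' -d'{ "index": { "number_of_replicas": 0 } }'
3. Set plugins.security.system_indices.enabled back to true and restart wazuh-indexer again.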

My cluster status is now green.

Thanks for the help!

Ivan Janeš

Dec 18, 2023, 8:32:57 AM
to Wazuh | Mailing List
I had a similar issue with yellow cluster status. I am using a single-node wazuh-indexer setup.

The wazuh-indexer configuration file has the following settings for OpenSearch system indices (see https://opensearch.org/docs/latest/security/configuration/system-indices/):

plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opendistro-notifications-*", ".opendistro-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]

I was able to find the unassigned shards using curl with certificate authentication; I found this approach at https://repost.aws/knowledge-center/opensearch-unassigned-shards

# cd /etc/wazuh-indexer/certs
# curl --key admin-key.pem --cert admin.pem --insecure -XGET "https://127.0.0.1:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason" | grep UNASSIGNED
.opendistro-ism-managed-index-history-2023.12.13-000015 0 r UNASSIGNED INDEX_CREATED
.opendistro-ism-managed-index-history-2023.12.15-000017 0 r UNASSIGNED INDEX_CREATED
.opendistro-ism-managed-index-history-2023.12.01-000003 0 r UNASSIGNED INDEX_CREATED
.opendistro-ism-managed-index-history-2023.12.12-000014 0 r UNASSIGNED INDEX_CREATED
.opendistro-ism-managed-index-history-2023.12.08-000010 0 r UNASSIGNED INDEX_CREATED
.opendistro-ism-config                                  0 r UNASSIGNED CLUSTER_RECOVERED
.opendistro-alerting-config                             0 r UNASSIGNED CLUSTER_RECOVERED
.opendistro-ism-managed-index-history-2023.12.17-000019 0 r UNASSIGNED INDEX_CREATED
.opendistro-ism-managed-index-history-2023.12.11-000013 0 r UNASSIGNED INDEX_CREATED
.opendistro-ism-managed-index-history-2023.12.10-000012 0 r UNASSIGNED INDEX_CREATED
.opendistro-ism-managed-index-history-2023.12.03-000005 0 r UNASSIGNED INDEX_CREATED
.opendistro-alerting-alerts                             0 r UNASSIGNED CLUSTER_RECOVERED
.opendistro-ism-managed-index-history-2023.12.16-000018 0 r UNASSIGNED INDEX_CREATED
.opendistro-ism-managed-index-history-2023.11.29-000002 0 r UNASSIGNED INDEX_CREATED
.opendistro-ism-managed-index-history-2023.12.05-000007 0 r UNASSIGNED INDEX_CREATED
.opendistro-ism-managed-index-history-2023.12.09-000011 0 r UNASSIGNED INDEX_CREATED
.opendistro-ism-managed-index-history-2023.12.04-000006 0 r UNASSIGNED INDEX_CREATED
.opendistro-ism-managed-index-history-2023.12.07-000009 0 r UNASSIGNED INDEX_CREATED
.opendistro-ism-managed-index-history-2023.12.02-000004 0 r UNASSIGNED INDEX_CREATED
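
A slightly cleaner variant of the same check (the -s flag just silences curl's progress meter so that only the shard listing is printed):

# curl -s --key admin-key.pem --cert admin.pem --insecure "https://127.0.0.1:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason" | grep UNASSIGNED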

I created index templates for all OpenSearch system indices and set the number of replicas to 0 (https://opensearch.org/docs/1.2/im-plugin/ism/settings/#audit-history-indices):

# cd /etc/wazuh-indexer/certs
# curl --key admin-key.pem --cert admin.pem --insecure -XPUT https://127.0.0.1:9200/_index_template/ism_history_indices -H 'Content-Type: application/json' -d'
{
  "index_patterns": [".opendistro-ism-managed-index-history-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}'

# curl --key admin-key.pem --cert admin.pem --insecure -XPUT https://127.0.0.1:9200/_index_template/opendistro_alerting_config -H 'Content-Type: application/json' -d'
{
  "index_patterns": [".opendistro-alerting-config-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}'

# curl --key admin-key.pem --cert admin.pem --insecure -XPUT https://127.0.0.1:9200/_index_template/opendistro_alerting_alerts -H 'Content-Type: application/json' -d'
{
  "index_patterns": [".opendistro-alerting-alerts*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}'

# curl --key admin-key.pem --cert admin.pem --insecure -XPUT https://127.0.0.1:9200/_index_template/opendistro_anomaly_results -H 'Content-Type: application/json' -d'
{
  "index_patterns": [".opendistro-anomaly-results*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}'

# curl --key admin-key.pem --cert admin.pem --insecure -XPUT https://127.0.0.1:9200/_index_template/opendistro_anomaly_detector -H 'Content-Type: application/json' -d'
{
  "index_patterns": [".opendistro-anomaly-detector*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}'

# curl --key admin-key.pem --cert admin.pem --insecure -XPUT https://127.0.0.1:9200/_index_template/opendistro_anomaly_checkpoints -H 'Content-Type: application/json' -d'
{
  "index_patterns": [".opendistro-anomaly-checkpoints*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}'

# curl --key admin-key.pem --cert admin.pem --insecure -XPUT https://127.0.0.1:9200/_index_template/opendistro_anomaly_detection_state -H 'Content-Type: application/json' -d'
{
  "index_patterns": [".opendistro-anomaly-detection-state*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}'

# curl --key admin-key.pem --cert admin.pem --insecure -XPUT https://127.0.0.1:9200/_index_template/opendistro_reports -H 'Content-Type: application/json' -d'
{
  "index_patterns": [".opendistro-reports-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}'

# curl --key admin-key.pem --cert admin.pem --insecure -XPUT https://127.0.0.1:9200/_index_template/opendistro_notifications -H 'Content-Type: application/json' -d'
{
  "index_patterns": [".opendistro-notifications-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}'

# curl --key admin-key.pem --cert admin.pem --insecure -XPUT https://127.0.0.1:9200/_index_template/opendistro_notebooks -H 'Content-Type: application/json' -d'
{
  "index_patterns": [".opendistro-notebooks"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}'

# curl --key admin-key.pem --cert admin.pem --insecure -XPUT https://127.0.0.1:9200/_index_template/opensearch_observability -H 'Content-Type: application/json' -d'
{
  "index_patterns": [".opensearch-observability"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}'

# curl --key admin-key.pem --cert admin.pem --insecure -XPUT https://127.0.0.1:9200/_index_template/opendistro_asynchronous_search_response -H 'Content-Type: application/json' -d'
{
  "index_patterns": [".opendistro-asynchronous-search-response*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}'

# curl --key admin-key.pem --cert admin.pem --insecure -XPUT https://127.0.0.1:9200/_index_template/replication_metadata_store -H 'Content-Type: application/json' -d'
{
  "index_patterns": [".replication-metadata-store"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}'

After creating the templates, I changed the number of replicas for all problematic indices using curl and certificate authentication:
# cd /etc/wazuh-indexer/certs
# curl --key admin-key.pem --cert admin.pem --insecure -XPUT https://127.0.0.1:9200/.opendistro-*/_settings -H 'Content-Type: application/json' -d'
{
  "index": {
    "number_of_replicas": 0
  }
}'

Is there a better solution for this problem? Am I missing a setting for the OpenSearch system indices?

Ivan Janeš

Dec 20, 2023, 7:16:36 AM
to Wazuh | Mailing List
The solution with templates does not work for the .opendistro-* indices (index templates only take effect when an index is created, and these indices already exist).

I created an OpenSearch ISM policy that does the job, deployed via Dev Tools:

PUT _plugins/_ism/policies/set_opendistro_replica_to_0
{
    "policy": {
        "policy_id": "Opendistro replica to 0",
        "description": "Set replica count for .opendistro-* indices to 0",
        "default_state": "index_created",
        "states": [
            {
                "name": "index_created",
                "actions": [],
                "transitions": [
                    {
                        "state_name": "replica_0",
                        "conditions": {
                            "min_index_age": "0ms"
                        }
                    }
                ]
            },
            {
                "name": "replica_0",
                "actions": [
                    {
                        "retry": {
                            "count": 3,
                            "backoff": "exponential",
                            "delay": "1m"
                        },
                        "replica_count": {
                            "number_of_replicas": 0
                        }
                    }
                ],
                "transitions": []
            }
        ],
        "ism_template": [
            {
                "index_patterns": [
                    ".opendistro-*"
                ],
                "priority": 1
            }
        ]
    }
}

After applying the newly created ISM policy to the .opendistro-* indices, the replica count was successfully changed from 1 to 0. It takes a few minutes for the policy to execute after it is applied to an index.
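
For reference, attaching the policy to the already existing indices can be done from Dev Tools with the ISM add API, something along these lines:

POST _plugins/_ism/add/.opendistro-*
{
  "policy_id": "set_opendistro_replica_to_0"
}

New indices matching the ism_template pattern should pick up the policy automatically.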