My retention policy deleted my indexes. How do I get them back?


Dmitry Mikheev

Aug 27, 2024, 5:58:13 AM
to Wazuh | Mailing List
I configured the policy according to the instructions:

But after some time all indexes were deleted!
I have since deleted the policies, and new alerts are coming in...

1. I have the archives.log files. How can I reload them into the system?

2. Here is the policy. What is my mistake? Why was everything deleted?

{
    "id": "wazuh-alert-retention-policy",
    "seqNo": 0,
    "primaryTerm": 1,
    "policy": {
        "policy_id": "wazuh-alert-retention-policy",
        "description": "A sample description of the policy",
        "last_updated_time": 1724504425391,
        "schema_version": 19,
        "error_notification": null,
        "default_state": "delete_alerts",
        "states": [
            {
                "name": "initial",
                "actions": [],
                "transitions": [
                    {
                        "state_name": "delete_alerts",
                        "conditions": {
                            "min_index_age": "365d"
                        }
                    }
                ]
            },
            {
                "name": "delete_alerts",
                "actions": [
                    {
                        "retry": {
                            "count": 3,
                            "backoff": "exponential",
                            "delay": "1m"
                        },
                        "delete": {}
                    }
                ],
                "transitions": []
            }
        ],
        "ism_template": [
            {
                "index_patterns": [
                    "wazuh-alerts-*"
                ],
                "priority": 1,
                "last_updated_time": 1724504425391
            },
            {
                "index_patterns": [
                    "wazuh-archives-*"
                ],
                "priority": 1,
                "last_updated_time": 1724504425391
            }
        ]
    }
}



Lamya Imam

Aug 27, 2024, 6:33:46 AM
to Wazuh | Mailing List
Hello Dmitry Mikheev,

I believe the issue lies here:


 "default_state": "delete_alerts",
        "states": [
            {
                "name": "initial",
                "actions": [],
                "transitions": [
                    {
                        "state_name": "delete_alerts",
                        "conditions": {
                            "min_index_age": "365d"
                        }
                    }
                ]
            },

The default state should be the retention state (retention_state), because that is the state an index stays in for as long as you want to keep the alerts; only after that period should it transition to the delete state. In your case, the retention period is 365 days. Since your default_state is set to delete_alerts, every index the policy attaches to goes straight into the delete state and is removed immediately, which is why everything was deleted.

Try editing the policy like this:
{

    "policy": {
        "policy_id": "wazuh-alert-retention-policy",
        "description": "A sample description of the policy",
        "last_updated_time": 1724504425391,
        "schema_version": 19,
        "error_notification": null,
        "default_state": "retention_state",
        "states": [
            {
                "name": "retention_state",

Select the index or indices and apply the policy.

If you face this error:
Failed to apply policy to [wazuh-alerts-4.x-2024.06.20, This index already has a policy, use the update policy API to update index policies]

It means your indices already have a policy assigned, and it has to be removed before the new policy can take effect. The transition process may take a while.
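
If you prefer to do that step through the ISM API instead of the dashboard, a sketch of the detach/re-attach sequence would look like this (host, credentials, and the index pattern are placeholders; adjust them to the indices you want to manage):

# Detach whatever policy currently manages the indices
curl -k -u admin:admin -X POST "https://127.0.0.1:9200/_plugins/_ism/remove/wazuh-alerts-*"

# Attach the corrected policy to the same indices
curl -k -u admin:admin -X POST "https://127.0.0.1:9200/_plugins/_ism/add/wazuh-alerts-*" \
  -H 'Content-Type: application/json' \
  -d '{ "policy_id": "wazuh-alert-retention-policy" }'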

Please find the attached screenshots for reference; this policy was used to retain logs for 30 days.

Hope this helps!
Let me know if this worked out for you!
image (1).png
image.png
image (2).png

Dmitry Mikheev

Aug 27, 2024, 8:14:38 AM
to Wazuh | Mailing List
Hi Lamya Imam,

Thanks for the advice. I'll try it.

Is it possible to restore old data from archives?

Lamya Imam

Aug 28, 2024, 7:02:01 AM
to Wazuh | Mailing List
Hello Dmitry Mikheev,

You can follow the guidelines from this blog to recover your data using Wazuh alerts backups:
https://wazuh.com/blog/recover-your-data-using-wazuh-alert-backups/

- Create recovery.py script on your system
- Copy the script content from the blog post into a file named recovery.py on your server
- Make the script executable (chmod +x recovery.py)
- Update /etc/filebeat/filebeat.yml as described in the blog post to include alerts from the file you specify for output when running the script (/tmp/recovery.json in the examples)
- Run the script using values appropriate for your requirements
- The time stamp used for "-max" should avoid overlapping the data still in the index
- Use nohup to execute the script in the background so it keeps running even after your session closes (see the command sketch below)
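
Put together, the run itself might look roughly like this sketch (the EPS rate, time range, and paths are example values in the blog's format; adapt them to your own data):

# Make the recovery script executable
chmod +x recovery.py

# Replay archived alerts between -min and -max into /tmp/recovery.json at a
# limited events-per-second rate; Filebeat then re-indexes that file.
# Keep -max earlier than the oldest alerts still left in the index so the
# replayed data does not overlap it.
nohup ./recovery.py -eps 500 -min 2024-01-01T00:00:00 -max 2024-08-01T00:00:00 \
  -o /tmp/recovery.json -log ./recovery.log -sz 2.5 &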

Let me know if you need further assistance on this!

Dmitry Mikheev

Aug 29, 2024, 6:11:59 AM
to Wazuh | Mailing List
Hi Lamya Imam.

I would be glad to get a hint.

I don't have Elasticsearch installed; I have OpenSearch.
Here are my configuration files:

/usr/share/filebeat/module/wazuh/alerts/manifest.yml
module_version: 0.1
var:
  - name: paths
    default:
      - /var/ossec/logs/alerts/alerts.json
  - name: index_prefix
    default: wazuh-alerts-4.x-

input: config/alerts.yml

ingest_pipeline: ingest/pipeline.json


/usr/share/filebeat/module/wazuh/alerts/config/alerts.yml
fields:
  index_prefix: {{ .index_prefix }}
type: log
paths:
{{ range $i, $path := .paths }}
 - {{$path}}
{{ end }}


/etc/filebeat/filebeat.yml
# Wazuh - Filebeat configuration file
output.elasticsearch.hosts:
        - 127.0.0.1:9200
#        - <elasticsearch_ip_node_2>:9200
#        - <elasticsearch_ip_node_3>:9200

output.elasticsearch:
  protocol: https
  username: ${username}
  password: ${password}
  ssl.certificate_authorities:
    - /etc/filebeat/certs/root-ca.pem
  ssl.certificate: "/etc/filebeat/certs/wazuh-server.pem"
  ssl.key: "/etc/filebeat/certs/wazuh-server-key.pem"
setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.json.name: 'wazuh'
setup.ilm.overwrite: true
setup.ilm.enabled: false
# mda
setup.ilm.check_exists: false

filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: true

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644

logging.metrics.enabled: false

seccomp:
  default_action: allow
  syscalls:
  - action: allow
    names:
    - rseq

Lamya Imam

Sep 3, 2024, 4:57:29 AM
to Wazuh | Mailing List
Hello Dmitry Mikheev,

The configurations are quite similar, as OpenSearch is a fork of Elasticsearch.
You would need to add /tmp/recovery.json to the paths in /usr/share/filebeat/module/wazuh/alerts/manifest.yml, as shown in the screenshot, and then restart Filebeat.
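
For example, after that change the paths section of manifest.yml would look something like the sketch below (an assumption based on your current file; the screenshot shows the exact placement), followed by a Filebeat restart:

# /usr/share/filebeat/module/wazuh/alerts/manifest.yml: add the recovery file
# as an extra default path, e.g.
#
#   - name: paths
#     default:
#       - /var/ossec/logs/alerts/alerts.json
#       - /tmp/recovery.json
#
# Then restart Filebeat so the wazuh module picks up the new path:
systemctl restart filebeat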

Other than that, follow the guidelines and use #!/usr/bin/env python3 instead of #!/usr/bin/env python (depending on the Python version installed on your OS).

If the nohup command does not work, run the command like this:
 ./recovery.py -eps 500 -min 2019-07-21T13:59:30 -max 2019-07-24T22:00:00 -o /tmp/recovery.json -log ./recovery.log -sz 2.5 &

Let me know if you face any error during the process or need further assistance on this.
Screenshot 2024-09-03 144110.png

Dmitry Mikheev

Sep 8, 2024, 3:15:54 AM
to Wazuh | Mailing List
Hi Lamya Imam,

Updated to 4.9 and ran the script...
The data did load. Thanks for the advice.

That said, I'm now getting 22 pages of log entries like these:

grep -iE "error|crit|fatal|warn" /var/log/wazuh-indexer/wazuh-cluster.log 
 [2024-09-08T00:00:00,481][WARN ][o.o.p.c.u.JsonConverter ] [node-1] Json Mapping Error: Cannot invoke "java.lang.Long.longValue()" because "this.cacheMaxSize" is null (through reference chain: org.opensearch.performanceanalyzer.collectors.CacheConfigMetricsCollector$CacheMaxSizeStatus["Cache_MaxSize"]) 
[2024-09-08T00:00:05,482][WARN ][o.o.p.c.u.JsonConverter ] [node-1] Json Mapping Error: Cannot invoke "java.lang.Long.longValue()" because "this.cacheMaxSize" is null (through reference chain: org.opensearch.performanceanalyzer.collectors.CacheConfigMetricsCollector$CacheMaxSizeStatus["Cache_MaxSize"]) 
[2024-09-08T00:00:10,483][WARN ][o.o.p.c.u.JsonConverter ] [node-1] Json Mapping Error: Cannot invoke "java.lang.Long.longValue()" because "this.cacheMaxSize" is null (through reference chain: org.opensearch.performanceanalyzer.collectors.CacheConfigMetricsCollector$CacheMaxSizeStatus["Cache_MaxSize"])


Do you happen to know what this parameter is and how to adjust it?
I have seen OpenSearch tips suggesting to disable the performance analyzer, but I don't understand what the consequences would be.