Cold storage


Walter Tomas

May 17, 2023, 4:51:24 AM
to Wazuh mailing list
Hello, everyone

I have a system of 3 nodes

1- node1
wazuh-manager
wazuh-indexer
wazuh-dashboard

2- node2
wazuh-manager
wazuh-indexer

3- node3
wazuh-manager
wazuh-indexer

I want to know if I can create a cold storage policy so that all indexes older than 30 days are moved to /mnt/storage-node1, /mnt/storage-node2, /mnt/storage-node3.

Can you guide me on how to create this policy?

Or is there another solution? A node specially created for cold storage?

Thank you

Alejandro Ruiz Becerra

May 17, 2023, 7:04:22 AM
to Wazuh mailing list
Hello Walter, thank you for reaching out to us.

Yes, you can create a cold storage policy to move the indexes older than 30 days, but there are a few things in your message that I don't fully understand.

  1. You have a cluster of 3 nodes, but are the indexers configured to work as a cluster or are they working independently?
  2. As far as I know, it's not possible to have different stores for hot-cold indexes, so moving cold indexes to "/mnt/storage-nodeN" would not be directly possible.

There are 2 possible options:

  1. Configure an ISM policy to move indexes older than 30d to cold storage. The storage is automatically managed by the cluster in this case.
  2. Set up a hot-warm architecture, where some nodes would be used as cold storage. The ISM policy from option 1 can be used here to automatically move the old indexes from hot nodes to cold nodes. The nodes used as cold-storage can be configured to store the data in the path you want.
I think option 2 is the best for you, but let me know what you think and whether it suits you.
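For reference, here is a minimal sketch of what the option 1 ISM policy could look like (the 30d threshold and state names are just example values, and the action lists are deliberately minimal):

```json
{
    "policy": {
        "description": "Sketch: move indices to a cold state after 30 days",
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [],
                "transitions": [
                    { "state_name": "cold", "conditions": { "min_index_age": "30d" } }
                ]
            },
            {
                "name": "cold",
                "actions": [ { "read_only": {} } ],
                "transitions": []
            }
        ]
    }
}
```

A real policy would typically also set replica counts and, for option 2, an allocation action to target the cold nodes.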

Here's some additional information to better understand these options:

Let me know what suits you better and I'll guide you through the process.

Regards
Alex

Walter Tomas

May 17, 2023, 8:37:59 AM
to Wazuh mailing list
Hello,

Thank you for the answer.
Yes, it is a 3-node cluster.

I don't understand where I have to configure things so that after 30 days the index is moved to /mnt/storage-node.

I don't know if I have to have /mnt/storage* on each node, but considering that I have wazuh-indexer installed on each node, logic tells me that I need /mnt/storage-nodeN as well, right?

# /var/ossec/bin/cluster_control -l
NAME     TYPE    VERSION  ADDRESS        
wazuh-1  master  4.4.0    12.10.244.1  
wazuh-2  worker  4.4.0    12.10.244.2  
wazuh-3  worker  4.4.0    12.10.244.3  

"states": [
            {
                "name": "hot",
                "actions": [
                    {
                        "retry": {
                            "count": 3,
                            "backoff": "exponential",
                            "delay": "1m"
                        },
                        "replica_count": {
                            "number_of_replicas": 2
                        }
                    }
                ],
                "transitions": [
                    {
                        "state_name": "cold",
                        "conditions": {
                            "min_index_age": "30d"
                        }
                    }
                ]
            },
            {
                "name": "cold",
                "actions": [
                    {
                        "retry": {
                            "count": 3,
                            "backoff": "exponential",
                            "delay": "1m"
                        },
                        "read_only": {}
                    }
                ],

Alejandro Ruiz Becerra

May 17, 2023, 1:10:20 PM
to Wazuh mailing list
Hello again Walter

First, please don't share any sensitive data, such as public IPs, as this channel is public to the internet.
On another note, take this as theoretical advice. Let's first define what to do and how, and then we'll find the way to do it.

This is your architecture:

- A cluster of Wazuh servers: 

# /var/ossec/bin/cluster_control -l
NAME     TYPE    VERSION  ADDRESS        
wazuh-1  master  4.4.0    ip-address 
wazuh-2  worker  4.4.0    ip-address 
wazuh-3  worker  4.4.0    ip-address 

- A cluster of Wazuh indexers
Check the opensearch.yml to see their names. For now let's use indexer-1, indexer-2 and indexer-3

- A Wazuh dashboard

The ISM policies are applied to the indexers. You can create a policy using the Index Management app of the UI (Wazuh dashboard) by clicking the Create policy button; I recommend using the JSON editor.


(Screenshot attached: the Create policy button in the Index Management app)


In order to set up a hot-warm architecture, you first need to decide which indexer nodes will act as hot nodes and which as cold ones. The cold nodes will only store the cold indexes. Then, if you really want the indexers to store the data in a different path, you need to change the configuration of those nodes (path.data).
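For illustration, a cold node's opensearch.yml could combine both ideas (the path is just an example taken from your first message, and temp is an arbitrary attribute name):

```yaml
# Example cold-node settings in opensearch.yml (illustrative values)
path.data: /mnt/storage-node1  # custom storage path for this node
node.attr.temp: cold           # custom attribute an ISM policy can target later
```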

Walter Tomas

May 18, 2023, 9:34:57 AM
to Wazuh mailing list
OK, suppose I make a 4th node (node4) specifically for logs older than 30 days.

I don't want to have agents on it at all, only logs older than 30 days

What should I install on it?

and what should the policy look like, can you give me an example?

Alejandro Ruiz Becerra

May 18, 2023, 12:26:50 PM
to Wazuh mailing list
Hello Walter

Alright, I understand.

You don't have to worry about the agents because they connect to the servers, not to the indexers.

In that case, you'll need to scale the Indexer's cluster with a 4th node, then configure it as a cold node, create the ISM policy and apply it to the other 3 nodes, so they migrate the data older than 30 days to the new 4th node.

I've never scaled an existing cluster before, so I will need to test it out.

Do you keep the config.yml file used to install Wazuh?

Walter Tomas

May 19, 2023, 12:07:58 PM
to Wazuh mailing list
Hello

yes, I have config.yml used for installation

Walter Tomas

May 19, 2023, 12:11:31 PM
to Wazuh mailing list
Hello

yes, I have config.yml used for installation.

Question: will the logs older than 30 days that are moved to node 4 still be visible in the dashboard when I search over a longer period?

Alejandro Ruiz Becerra

May 22, 2023, 10:21:25 AM
to Wazuh mailing list
Hello Walter

Good, we can use that to scale the cluster

In reply to your question: yes, you can configure the dashboard to query this node too, using the opensearch.hosts setting in the opensearch_dashboards.yml file. Whether to add it is up to you.
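For illustration, assuming the cold node listens on the standard port 9200 like the others (the addresses below are placeholders), opensearch_dashboards.yml would list every indexer node you want the dashboard to query:

```yaml
# opensearch_dashboards.yml (placeholder addresses)
opensearch.hosts:
  - https://<node-1-ip>:9200
  - https://<node-2-ip>:9200
  - https://<node-cold-storage-ip>:9200
```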

Now, to scale your Indexer cluster, follow this guide.

You'll need to edit your config.yml and add the new node in the indexer section. Example:

nodes:
  indexer:
    - name: node-1
      ip: <node.1.ip>
    - name: node-cold-storage
      ip: <node.cold.storage.ip>

Once you finish that guide, you should have the new node up and running.
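You can confirm the new node has joined the indexer cluster, for example from the Dev tools console in the dashboard:

```
GET _cat/nodes?v
```

The new node should appear in the list alongside the existing indexers.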

Let me know when that's done and we'll continue from there.

The next steps are to re-configure the indexer nodes and create the ISM policy, but let's do one step at a time.
Regards
Alex

Walter Tomas

May 23, 2023, 3:18:49 AM
to Wazuh mailing list
Alex, thanks for the reply

I will get to work and let you know according to the progress.

Thank you

Walter Tomas

May 24, 2023, 11:05:31 AM
to Wazuh mailing list
Hello, this time I have added two hot nodes and one cold node.
If node6-cold needs to be reinstalled, I have a VM ready.

Can you help me with a policy to move logs older than 30 days to node6-cold?
Note that logs older than 30 days must no longer exist on the other 5 nodes, but must still be queryable from the dashboard.

now my cluster looks like this:

:~# /var/ossec/bin/cluster_control -l
NAME     TYPE    VERSION  ADDRESS        
wazuh-1  master  4.4.0    XXX.XXX.XXX.25  
wazuh-2  worker  4.4.0    XXX.XXX.XXX.26  
wazuh-3  worker  4.4.0    XXX.XXX.XXX.27  
wazuh-4  worker  4.4.0    XXX.XXX.XXX.28  
wazuh-5  worker  4.4.0    XXX.XXX.XXX.29  
----------------------------------------------------
I installed the following on the nodes:


1- node1
wazuh-manager
wazuh-indexer
wazuh-dashboard

2- node2
wazuh-manager
wazuh-indexer

3- node3
wazuh-manager
wazuh-indexer

4- node4
wazuh-manager
wazuh-indexer

5- node5
wazuh-manager
wazuh-indexer

6- node6-cold
wazuh-indexer

----------------------------------------------------


Thank you very much for your support

Walter Tomas

May 29, 2023, 3:12:15 AM
to Wazuh mailing list
Hello!

Can you help me, please?

Alejandro Ruiz Becerra

May 29, 2023, 11:45:24 AM
to Wazuh mailing list
Hello Walter

There was no need to install a wazuh-manager on each node; only the wazuh-indexer was needed.

Now, there are 2 things we need to do:

1. Specify which nodes will be hot and which one will be cold.

Edit the /etc/wazuh-indexer/opensearch.yml file of each node as follows:

If the node is hot, add:

node.attr.temp: hot

If the node is cold, add:

node.attr.temp: cold

Restart each wazuh-indexer node to apply the changes.
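After restarting, you can verify that each node exposes the attribute, for example from Dev tools:

```
GET _cat/nodeattrs?v
```

Each node should be listed with its temp attribute set to hot or cold.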



2. Define the ISM policy to move logs older than 30 days to the cold nodes.

In the UI, navigate to the Index Management app and click on Create Policy > JSON editor

Copy and paste the code between the == markers:

== BEGIN ==
{
    "policy": {
        "description": "Wazuh index state management to move indices into a cold state.",
        "default_state": "hot",

        "states": [
            {
                "name": "hot",
                "actions": [
                    {
                        "replica_count": {
                            "number_of_replicas": 1

                        }
                    }
                ],
                "transitions": [
                    {
                        "state_name": "cold",
                        "conditions": {
                            "min_index_age": "30d"
                        }
                    }
                ]
            },
            {
                "name": "cold",
                "actions": [
                    {
                        "read_only": {}
                    }
                ]
            }
        ],
       "ism_template": {
           "index_patterns": ["wazuh-alerts*"],
           "priority": 100
       }
    }
}
== END ==

Click on create policy.

This policy will be applied to new indices, but not to already existing ones. Check the blog in the link below for the steps to apply it to existing indices. Also, take into account that it will only be applied to indexes matching the specified index patterns, in this case wazuh-alerts*. If you want to apply this policy to every Wazuh index, relax the index pattern to wazuh*.
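As a sketch of that manual step (replace the policy ID below with the name you gave your policy), the ISM plugin exposes an API to attach a policy to existing indices, which you can run from Dev tools:

```
POST _plugins/_ism/add/wazuh-alerts-*
{
    "policy_id": "<your-policy-id>"
}
```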

NOTE:
This policy does not automatically remove indices. Adding a delete state is highly recommended to keep your cluster from running out of space over time. Check the link below for more information about this policy, the creation procedure, and how to automatically delete indices older than a certain age.
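As an illustration (the 550d value is just an example retention), deletion can be added as an extra transition on the cold state plus a delete state:

```json
{
    "name": "cold",
    "actions": [
        { "read_only": {} }
    ],
    "transitions": [
        { "state_name": "delete", "conditions": { "min_index_age": "550d" } }
    ]
},
{
    "name": "delete",
    "actions": [
        { "delete": {} }
    ],
    "transitions": []
}
```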


That's all. I hope you find this useful.

Regards,
Alex

Walter Tomas

May 29, 2023, 3:04:54 PM
to Alejandro Ruiz Becerra, Wazuh mailing list
Hello,
Thanks for the reply!

The reason why I put a wazuh-manager on each node is that I will have 5,000 agents and I want to distribute 1,000 agents to each node in the cluster, with node 6 remaining for cold storage, without agents on it.

If the way I proceeded is wrong, please tell me.

Also, if I apply the new policy to the old indexes, won't they move to node6 and free up space on the 5 nodes? (There is a lack of space on the 5 nodes.)


Alejandro Ruiz Becerra

May 30, 2023, 6:05:18 AM
to Wazuh mailing list
Hello Walter

No problem at all; it seems you are putting those wazuh-managers to good use.

Yes, if you apply the policy to the existing indexes, the indexes older than 30 days should be moved to the cold node.
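To follow the migration, the ISM explain API shows the state each index is currently in; from Dev tools:

```
GET _plugins/_ism/explain/wazuh-alerts-*
```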

Regards
Alex

Walter Tomas

May 31, 2023, 6:02:50 PM
to Wazuh mailing list
Hello,
I somehow solved it (I installed the same version of the indexer on node6), but it seems that the applied policy is not being initiated.

It remains as in the attached picture: wazuh-alerts-4.x-2023.04.01 green Yes open 487.8mb
(Screenshot attached: polici.png)
In addition, cold node 6 received 276G of indices without any policy being applied :( and I have no idea why.

In the logs I see:
[2023-06-01T00:27:21,963][INFO ][o.o.j.s.JobScheduler     ] [node-1] Scheduling job id XObVrH7_TwmIVqod8hui7A for index .opendistro-ism-config .
[2023-06-01T00:27:21,964][INFO ][o.o.j.s.JobScheduler     ] [node-1] Will delay 58088 miliseconds for next execution of job wazuh-alerts-4.x-2023.04.01
[2023-06-01T00:27:22,085][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [node-1] Detected cluster change event for destination migration
[2023-06-01T00:27:28,289][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-06-01T00:32:28,289][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-06-01T00:33:20,037][INFO ][o.o.j.s.JobScheduler     ] [node-1] Will delay 177797 miliseconds for next execution of job wazuh-alerts-4.x-2023.04.01
[2023-06-01T00:37:28,290][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-06-01T00:40:19,746][INFO ][o.o.j.s.JobScheduler     ] [node-1] Will delay 56461 miliseconds for next execution of job wazuh-alerts-4.x-2023.04.01
[2023-06-01T00:42:28,290][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-06-01T00:43:18,410][INFO ][o.o.j.s.JobScheduler     ] [node-1] Will delay 167509 miliseconds for next execution of job wazuh-alerts-4.x-2023.04.01
[2023-06-01T00:46:54,142][INFO ][o.o.a.t.CronTransportAction] [node-1] Start running AD hourly cron.
[2023-06-01T00:47:28,291][INFO ][o.o.j.s.JobSweeper       ] [node-1] Running full sweep
[2023-06-01T00:50:09,459][INFO ][o.o.j.s.JobScheduler     ] [node-1] Will delay 147432 miliseconds for next execution of job wazuh-alerts-4.x-2023.04.01


-----------------------------------------------
---start-----
{
    "policy": {
        "policy_id": "cold_after_30days",
        "description": "Wazuh index state management to move indices into a cold state after 30 days and delete them after 550 days.",
        "last_updated_time": 1685557214907,
        "schema_version": 17,
        "error_notification": null,

        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [
                    {
                        "retry": {
                            "count": 3,
                            "backoff": "exponential",
                            "delay": "1m"
                        },
                        "replica_count": {
                            "number_of_replicas": 1
                        }
                    }
                ],
                "transitions": [
                    {
                        "state_name": "cold",
                        "conditions": {
                            "min_index_age": "30d"
                        }
                    }
                ]
            },
            {
                "name": "cold",
                "actions": [
                    {
                        "retry": {
                            "count": 3,
                            "backoff": "exponential",
                            "delay": "1m"
                        },
                        "read_only": {}
                    }
                ],
                "transitions": [
                    {
                        "state_name": "delete",
                        "conditions": {
                            "min_index_age": "550d"
                        }
                    }
                ]
            },
            {
                "name": "delete",

                "actions": [
                    {
                        "retry": {
                            "count": 3,
                            "backoff": "exponential",
                            "delay": "1m"
                        },
                        "delete": {}
                    }
                ],
                "transitions": []
            }
        ],
        "ism_template": [
            {
                "index_patterns": [
                    "wazuh-*"
                ],
                "priority": 99,
                "last_updated_time": 1685557214907
            }
        ]
    }
}
----end-----

Walter Tomas

Jun 2, 2023, 6:59:46 PM
to Wazuh mailing list
Please help me.
At this moment a newly created index should stay only on nodes 1-5, which are HOT, but it is allocated to all 6 nodes, even though node 6 has node.attr.temp: cold.
I'm doing something wrong somewhere; can you help me with some advice, please?

ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role node.roles                                               cluster_manager name
xxx.xxx.xxx.27           26          85  11    0.30    0.52     0.92 dimmr     cluster_manager,data,ingest,master,remote_cluster_client -               node-3
xxx.xxx.xxx.29           26          84   2    0.29    0.13     0.24 dimmr     cluster_manager,data,ingest,master,remote_cluster_client *               node-5
xxx.xxx.xxx.25           30          96  28    1.81    2.09     2.70 dimmr     cluster_manager,data,ingest,master,remote_cluster_client -               node-1
xxx.xxx.xxx.26           45          98   4    0.09    0.27     0.46 dimmr     cluster_manager,data,ingest,master,remote_cluster_client -               node-2
xxx.xxx.xxx.28           67          81   4    0.01    0.21     0.35 dimmr     cluster_manager,data,ingest,master,remote_cluster_client -               node-4
xxx.xxx.xxx.30           33          98   0    0.03    0.07     0.24 dr        data,remote_cluster_client                               -               node-6

wazuh-alerts-4.x-2023.04.12 2     p      STARTED 7539404 4.9gb xxx.xxx.xxx.29 node-5
wazuh-alerts-4.x-2023.04.12 2     r      STARTED 7539404 4.9gb xxx.xxx.xxx.26 node-2
wazuh-alerts-4.x-2023.04.12 1     p      STARTED 7536276 4.9gb xxx.xxx.xxx.28 node-4
wazuh-alerts-4.x-2023.04.12 1     r      STARTED 7536276 4.9gb xxx.xxx.xxx.30 node-6
wazuh-alerts-4.x-2023.04.12 0     r      STARTED 7540501 4.9gb xxx.xxx.xxx.25 node-1
wazuh-alerts-4.x-2023.04.12 0     p      STARTED 7540501 4.9gb xxx.xxx.xxx.27 node-3

node-6 opensearch.yml

node.master: false
node.data: true
node.ingest: false
node.attr.temp: cold
bootstrap.memory_lock: true

cluster.name: wazuh-indexer-cluster
cluster.routing.allocation.disk.threshold_enabled: false

node.max_local_storage_nodes: "6"
--------------------cut-----------------------------
### Option to allow Filebeat-oss 7.10.2 to work ###
compatibility.override_main_response_version: true
node.name: node-6
cluster.initial_master_nodes:
        - node-1
        - node-2
        - node-3
        - node-4
        - node-5
discovery.seed_hosts:
        - xxx.xxx.xxx.25
        - xxx.xxx.xxx.26
        - xxx.xxx.xxx.27
        - xxx.xxx.xxx.28
        - xxx.xxx.xxx.29
        - xxx.xxx.xxx.30
network.host: xxx.xxx.xxx.30
---------------------------------------------------------

node-1-to-5 opensearch.yml

node.master: true
node.data: true
node.ingest: true
node.attr.temp: hot
bootstrap.memory_lock: true

cluster.name: wazuh-indexer-cluster
cluster.routing.allocation.disk.threshold_enabled: false

node.max_local_storage_nodes: "6"
--------------------cut-----------------------------

### Option to allow Filebeat-oss 7.10.2 to work ###
compatibility.override_main_response_version: true
node.name: node-1
cluster.initial_master_nodes:
        - node-1
        - node-2
        - node-3
        - node-4
        - node-5
discovery.seed_hosts:
        - xxx.xxx.xxx.25
        - xxx.xxx.xxx.26
        - xxx.xxx.xxx.27
        - xxx.xxx.xxx.28
        - xxx.xxx.xxx.29
        - xxx.xxx.xxx.30
network.host: xxx.xxx.xxx.xx
plugins.security.nodes_dn:
        - CN=node-1,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-2,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-3,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-4,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-5,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-6,OU=Wazuh,O=Wazuh,L=California,C=US

Walter Tomas

Jun 5, 2023, 6:41:13 AM
to Wazuh mailing list
Please help if possible :(
Or suggest another way I could move logs older than 30 days to cold node-6. Note that nodes 1-5 use SSDs, while node-6 uses a regular HDD.
Thank you very much for your support and time in this process.

Alejandro Ruiz Becerra

Jun 5, 2023, 12:56:07 PM
to Wazuh mailing list
Hello Walter

From what I can see in the screenshot you sent, the policy is in RUNNING status, so that's OK.

I've been reviewing the documentation and I think we've missed a couple of things:

1. The ISM policy needs an additional action for the cold state:


"actions"
: [
{
"read_only": {},
"allocation": {
"require": {
"temp": "cold"
}
}
}
]

Complete policy:

{
    "policy": {
        "description": "Wazuh index state management to move indices into a cold state.",
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [
                    {
                        "replica_count": {
                            "number_of_replicas": 1
                        }
                    }
                ],
                "transitions": [
                    {
                        "state_name": "cold",
                        "conditions": {
                            "min_index_age": "30d"
                        }
                    }
                ]
            },
            {
                "name": "cold",
                "actions": [
                    {
                        "read_only": {},
                        "allocation": {
                            "require": {
                                "temp": "cold"
                            }
                        }
                    }
                ]
            }
        ],
        "ism_template": {
            "index_patterns": ["wazuh-alerts*"],
            "priority": 100
        }
    }
}
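For context, the allocation action above effectively sets an index-level routing setting, so shards of cold indices are only allowed on nodes with node.attr.temp: cold. Doing the same by hand for a single index (the index name below is just an example) would look like:

```
PUT wazuh-alerts-4.x-2023.04.12/_settings
{
    "index.routing.allocation.require.temp": "cold"
}
```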

2. The wazuh index template needs to be updated so that new indices are only allocated to hot nodes:

In the UI, go to Dev tools and copy this request:

PUT _template/wazuh
{
"order": 0,
"index_patterns": [
"wazuh-alerts-4.x-*",
"wazuh-archives-4.x-*"
],
"settings": {
"index.routing.allocation.require.temp": "hot",
"index.refresh_interval": "5s",
"index.number_of_shards": "3",
"index.number_of_replicas": "0",
"index.auto_expand_replicas": "0-1",
"index.mapping.total_fields.limit": 10000,
"index.query.default_field": [
"GeoLocation.city_name",
"GeoLocation.continent_code",
"GeoLocation.country_code2",
"GeoLocation.country_code3",
"GeoLocation.country_name",
"GeoLocation.ip",
"GeoLocation.postal_code",
"GeoLocation.real_region_name",
"GeoLocation.region_name",
"GeoLocation.timezone",
"agent.ip",
"cluster.node",
"command",
"data",
"data.action",
"data.audit",
"data.audit.acct",
"data.audit.arch",
"data.audit.auid",
"data.audit.command",
"data.audit.cwd",
"data.audit.directory.inode",
"data.audit.directory.mode",
"data.audit.egid",
"data.audit.enforcing",
"data.audit.euid",
"data.audit.exe",
"data.audit.execve.a0",
"data.audit.execve.a1",
"data.audit.execve.a2",
"data.audit.execve.a3",
"data.audit.exit",
"data.audit.file.inode",
"data.audit.file.mode",
"data.audit.fsgid",
"data.audit.fsuid",
"data.audit.gid",
"data.audit.key",
"data.audit.list",
"data.audit.old-auid",
"data.audit.old-ses",
"data.audit.old_enforcing",
"data.audit.old_prom",
"data.audit.op",
"data.audit.pid",
"data.audit.ppid",
"data.audit.prom",
"data.audit.res",
"data.audit.session",
"data.audit.sgid",
"data.audit.srcip",
"data.audit.subj",
"data.audit.success",
"data.audit.suid",
"data.audit.syscall",
"data.audit.tty",
"data.audit.uid",
"data.aws.accountId",
"data.aws.account_id",
"data.aws.action",
"data.aws.actor",
"data.aws.aws_account_id",
"data.aws.description",
"data.aws.dstport",
"data.aws.errorCode",
"data.aws.errorMessage",
"data.aws.eventID",
"data.aws.eventName",
"data.aws.eventSource",
"data.aws.eventType",
"data.aws.requestParameters.accessKeyId",
"data.aws.requestParameters.bucketName",
"data.aws.requestParameters.gatewayId",
"data.aws.requestParameters.groupDescription",
"data.aws.requestParameters.groupId",
"data.aws.requestParameters.groupName",
"data.aws.requestParameters.host",
"data.aws.requestParameters.hostedZoneId",
"data.aws.requestParameters.instanceId",
"data.aws.requestParameters.instanceProfileName",
"data.aws.requestParameters.loadBalancerName",
"data.aws.requestParameters.loadBalancerPorts",
"data.aws.requestParameters.masterUserPassword",
"data.aws.requestParameters.masterUsername",
"data.aws.requestParameters.natGatewayId",
"data.aws.requestParameters.networkAclId",
"data.aws.requestParameters.path",
"data.aws.requestParameters.policyName",
"data.aws.requestParameters.port",
"data.aws.requestParameters.stackId",
"data.aws.requestParameters.stackName",
"data.aws.requestParameters.subnetId",
"data.aws.requestParameters.subnetIds",
"data.aws.requestParameters.volumeId",
"data.aws.requestParameters.vpcId",
"data.aws.resource.accessKeyDetails.accessKeyId",
"data.aws.resource.accessKeyDetails.principalId",
"data.aws.resource.accessKeyDetails.userName",
"data.aws.resource.instanceDetails.instanceId",
"data.aws.resource.instanceDetails.instanceState",
"data.aws.resource.instanceDetails.networkInterfaces.privateDnsName",
"data.aws.resource.instanceDetails.networkInterfaces.publicDnsName",
"data.aws.resource.instanceDetails.networkInterfaces.subnetId",
"data.aws.resource.instanceDetails.networkInterfaces.vpcId",
"data.aws.resource.instanceDetails.tags.value",
"data.aws.responseElements.AssociateVpcCidrBlockResponse.vpcId",
"data.aws.responseElements.description",
"data.aws.responseElements.instanceId",
"data.aws.responseElements.instances.instanceId",
"data.aws.responseElements.instancesSet.items.instanceId",
"data.aws.responseElements.listeners.port",
"data.aws.responseElements.loadBalancerName",
"data.aws.responseElements.loadBalancers.vpcId",
"data.aws.responseElements.loginProfile.userName",
"data.aws.responseElements.networkAcl.vpcId",
"data.aws.responseElements.ownerId",
"data.aws.responseElements.publicIp",
"data.aws.responseElements.user.userId",
"data.aws.responseElements.user.userName",
"data.aws.responseElements.volumeId",
"data.aws.service.serviceName",
"data.aws.severity",
"data.aws.source",
"data.aws.sourceIPAddress",
"data.aws.srcport",
"data.aws.userIdentity.accessKeyId",
"data.aws.userIdentity.accountId",
"data.aws.userIdentity.userName",
"data.aws.vpcEndpointId",
"data.command",
"data.cis.group",
"data.cis.rule_title",
"data.data",
"data.docker.Actor.Attributes.container",
"data.docker.Actor.Attributes.image",
"data.docker.message",
"data.docker.status",
"data.dstip",
"data.dstport",
"data.dstuser",
"data.extra_data",
"data.gcp.jsonPayload.queryName",
"data.gcp.jsonPayload.vmInstanceName",
"data.gcp.resource.labels.location",
"data.gcp.resource.labels.project_id",
"data.gcp.resource.labels.source_type",
"data.gcp.resource.type",
"data.github.actor",
"data.github.action",
"data.github.repo",
"data.hardware.serial",
"data.integration",
"data.netinfo.iface.adapter",
"data.netinfo.iface.ipv4.address",
"data.netinfo.iface.ipv6.address",
"data.netinfo.iface.mac",
"data.office365.UserId",
"data.office365.Operation",
"data.office365.ClientIP",
"data.os.architecture",
"data.os.build",
"data.os.codename",
"data.os.hostname",
"data.os.major",
"data.os.minor",
"data.os.patch",
"data.os.platform",
"data.os.release",
"data.os.release_version",
"data.os.display_version",
"data.os.sysname",
"data.os.version",
"data.oscap.check.description",
"data.oscap.check.identifiers",
"data.oscap.check.rationale",
"data.oscap.check.references",
"data.oscap.check.result",
"data.oscap.check.severity",
"data.oscap.check.title",
"data.oscap.scan.content",
"data.oscap.scan.profile.title",
"data.osquery.columns.address",
"data.osquery.columns.command",
"data.osquery.columns.description",
"data.osquery.columns.dst_ip",
"data.osquery.columns.gid",
"data.osquery.columns.hostname",
"data.osquery.columns.md5",
"data.osquery.columns.path",
"data.osquery.columns.sha1",
"data.osquery.columns.sha256",
"data.osquery.columns.src_ip",
"data.osquery.columns.user",
"data.osquery.columns.username",
"data.osquery.pack",
"data.port.process",
"data.port.protocol",
"data.port.state",
"data.process.args",
"data.process.cmd",
"data.process.egroup",
"data.process.euser",
"data.process.fgroup",
"data.process.rgroup",
"data.process.ruser",
"data.process.sgroup",
"data.process.state",
"data.process.suser",
"data.program.architecture",
"data.program.description",
"data.program.format",
"data.program.location",
"data.program.multiarch",
"data.program.priority",
"data.program.section",
"data.program.source",
"data.program.vendor",
"data.program.version",
"data.protocol",
"data.pwd",
"data.sca",
"data.sca.check.compliance.cis",
"data.sca.check.compliance.cis_csc",
"data.sca.check.compliance.pci_dss",
"data.sca.check.compliance.hipaa",
"data.sca.check.compliance.nist_800_53",
"data.sca.check.description",
"data.sca.check.directory",
"data.sca.check.file",
"data.sca.check.previous_result",
"data.sca.check.process",
"data.sca.check.rationale",
"data.sca.check.reason",
"data.sca.check.references",
"data.sca.check.registry",
"data.sca.check.remediation",
"data.sca.check.result",
"data.sca.check.title",
"data.sca.description",
"data.sca.file",
"data.sca.invalid",
"data.sca.policy",
"data.sca.policy_id",
"data.sca.scan_id",
"data.sca.total_checks",
"data.script",
"data.src_ip",
"data.src_port",
"data.srcip",
"data.srcport",
"data.srcuser",
"data.status",
"data.system_name",
"data.title",
"data.tty",
"data.uid",
"data.url",
"data.virustotal.description",
"data.virustotal.error",
"data.virustotal.found",
"data.virustotal.permalink",
"data.virustotal.scan_date",
"data.virustotal.sha1",
"data.virustotal.source.alert_id",
"data.virustotal.source.file",
"data.virustotal.source.md5",
"data.virustotal.source.sha1",
"data.vulnerability.cve",
"data.vulnerability.cvss.cvss2.base_score",
"data.vulnerability.cvss.cvss2.exploitability_score",
"data.vulnerability.cvss.cvss2.impact_score",
"data.vulnerability.cvss.cvss2.vector.access_complexity",
"data.vulnerability.cvss.cvss2.vector.attack_vector",
"data.vulnerability.cvss.cvss2.vector.authentication",
"data.vulnerability.cvss.cvss2.vector.availability",
"data.vulnerability.cvss.cvss2.vector.confidentiality_impact",
"data.vulnerability.cvss.cvss2.vector.integrity_impact",
"data.vulnerability.cvss.cvss2.vector.privileges_required",
"data.vulnerability.cvss.cvss2.vector.scope",
"data.vulnerability.cvss.cvss2.vector.user_interaction",
"data.vulnerability.cvss.cvss3.base_score",
"data.vulnerability.cvss.cvss3.exploitability_score",
"data.vulnerability.cvss.cvss3.impact_score",
"data.vulnerability.cvss.cvss3.vector.access_complexity",
"data.vulnerability.cvss.cvss3.vector.attack_vector",
"data.vulnerability.cvss.cvss3.vector.authentication",
"data.vulnerability.cvss.cvss3.vector.availability",
"data.vulnerability.cvss.cvss3.vector.confidentiality_impact",
"data.vulnerability.cvss.cvss3.vector.integrity_impact",
"data.vulnerability.cvss.cvss3.vector.privileges_required",
"data.vulnerability.cvss.cvss3.vector.scope",
"data.vulnerability.cvss.cvss3.vector.user_interaction",
"data.vulnerability.cwe_reference",
"data.vulnerability.package.source",
"data.vulnerability.package.architecture",
"data.vulnerability.package.condition",
"data.vulnerability.package.generated_cpe",
"data.vulnerability.package.version",
"data.vulnerability.rationale",
"data.vulnerability.severity",
"data.vulnerability.title",
"data.vulnerability.assigner",
"data.vulnerability.cve_version",
"data.win.eventdata.auditPolicyChanges",
"data.win.eventdata.auditPolicyChangesId",
"data.win.eventdata.binary",
"data.win.eventdata.category",
"data.win.eventdata.categoryId",
"data.win.eventdata.data",
"data.win.eventdata.image",
"data.win.eventdata.ipAddress",
"data.win.eventdata.ipPort",
"data.win.eventdata.keyName",
"data.win.eventdata.logonGuid",
"data.win.eventdata.logonProcessName",
"data.win.eventdata.operation",
"data.win.eventdata.parentImage",
"data.win.eventdata.processId",
"data.win.eventdata.processName",
"data.win.eventdata.providerName",
"data.win.eventdata.returnCode",
"data.win.eventdata.service",
"data.win.eventdata.status",
"data.win.eventdata.subcategory",
"data.win.eventdata.subcategoryGuid",
"data.win.eventdata.subcategoryId",
"data.win.eventdata.subjectDomainName",
"data.win.eventdata.subjectLogonId",
"data.win.eventdata.subjectUserName",
"data.win.eventdata.subjectUserSid",
"data.win.eventdata.targetDomainName",
"data.win.eventdata.targetLinkedLogonId",
"data.win.eventdata.targetLogonId",
"data.win.eventdata.targetUserName",
"data.win.eventdata.targetUserSid",
"data.win.eventdata.workstationName",
"data.win.system.channel",
"data.win.system.computer",
"data.win.system.eventID",
"data.win.system.eventRecordID",
"data.win.system.eventSourceName",
"data.win.system.keywords",
"data.win.system.level",
"data.win.system.message",
"data.win.system.opcode",
"data.win.system.processID",
"data.win.system.providerGuid",
"data.win.system.providerName",
"data.win.system.securityUserID",
"data.win.system.severityValue",
"data.win.system.userID",
"decoder.ftscomment",
"decoder.parent",
"full_log",
"host",
"id",
"input",
"location",
"message",
"offset",
"predecoder.hostname",
"predecoder.program_name",
"previous_log",
"previous_output",
"program_name",
"rule.cis",
"rule.cve",
"rule.description",
"rule.gdpr",
"rule.gpg13",
"rule.groups",
"rule.mitre.tactic",
"rule.mitre.technique",
"rule.pci_dss",
"rule.hipaa",
"rule.nist_800_53",
"syscheck.audit.process.ppid",
"syscheck.diff",
"syscheck.event",
"syscheck.gid_after",
"syscheck.gid_before",
"syscheck.gname_after",
"syscheck.gname_before",
"syscheck.inode_after",
"syscheck.inode_before",
"syscheck.md5_after",
"syscheck.md5_before",
"syscheck.path",
"syscheck.mode",
"syscheck.perm_after",
"syscheck.perm_before",
"syscheck.sha1_after",
"syscheck.sha1_before",
"syscheck.sha256_after",
"syscheck.sha256_before",
"syscheck.tags",
"syscheck.uid_after",
"syscheck.uid_before",
"syscheck.uname_after",
"syscheck.uname_before",
"syscheck.arch",
"syscheck.value_name",
"syscheck.value_type",
"syscheck.changed_attributes",
"title"
]
},
"mappings": {
"dynamic_templates": [
{
"string_as_keyword": {
"mapping": {
"type": "keyword"
},
"match_mapping_type": "string"
}
}
],
"date_detection": false,
"properties": {
"@timestamp": {
"type": "date"
},
"timestamp": {
"type": "date",
"format": "date_optional_time||epoch_millis"
},
"@version": {
"type": "text"
},
"agent": {
"properties": {
"ip": {
"type": "keyword"
},
"id": {
"type": "keyword"
},
"name": {
"type": "keyword"
}
}
},
"manager": {
"properties": {
"name": {
"type": "keyword"
}
}
},
"cluster": {
"properties": {
"name": {
"type": "keyword"
},
"node": {
"type": "keyword"
}
}
},
"full_log": {
"type": "text"
},
"previous_log": {
"type": "text"
},
"GeoLocation": {
"properties": {
"area_code": {
"type": "long"
},
"city_name": {
"type": "keyword"
},
"continent_code": {
"type": "text"
},
"coordinates": {
"type": "double"
},
"country_code2": {
"type": "text"
},
"country_code3": {
"type": "text"
},
"country_name": {
"type": "keyword"
},
"dma_code": {
"type": "long"
},
"ip": {
"type": "keyword"
},
"latitude": {
"type": "double"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "double"
},
"postal_code": {
"type": "keyword"
},
"real_region_name": {
"type": "keyword"
},
"region_name": {
"type": "keyword"
},
"timezone": {
"type": "text"
}
}
},
"host": {
"type": "keyword"
},
"syscheck": {
"properties": {
"path": {
"type": "keyword"
},
"hard_links": {
"type": "keyword"
},
"mode": {
"type": "keyword"
},
"sha1_before": {
"type": "keyword"
},
"sha1_after": {
"type": "keyword"
},
"uid_before": {
"type": "keyword"
},
"uid_after": {
"type": "keyword"
},
"gid_before": {
"type": "keyword"
},
"gid_after": {
"type": "keyword"
},
"perm_before": {
"type": "keyword"
},
"perm_after": {
"type": "keyword"
},
"md5_after": {
"type": "keyword"
},
"md5_before": {
"type": "keyword"
},
"gname_after": {
"type": "keyword"
},
"gname_before": {
"type": "keyword"
},
"inode_after": {
"type": "keyword"
},
"inode_before": {
"type": "keyword"
},
"mtime_after": {
"type": "date",
"format": "date_optional_time"
},
"mtime_before": {
"type": "date",
"format": "date_optional_time"
},
"uname_after": {
"type": "keyword"
},
"uname_before": {
"type": "keyword"
},
"size_before": {
"type": "long"
},
"size_after": {
"type": "long"
},
"diff": {
"type": "keyword"
},
"event": {
"type": "keyword"
},
"audit": {
"properties": {
"effective_user": {
"properties": {
"id": {
"type": "keyword"
},
"name": {
"type": "keyword"
}
}
},
"group": {
"properties": {
"id": {
"type": "keyword"
},
"name": {
"type": "keyword"
}
}
},
"login_user": {
"properties": {
"id": {
"type": "keyword"
},
"name": {
"type": "keyword"
}
}
},
"process": {
"properties": {
"id": {
"type": "keyword"
},
"name": {
"type": "keyword"
},
"ppid": {
"type": "keyword"
}
}
},
"user": {
"properties": {
"id": {
"type": "keyword"
},
"name": {
"type": "keyword"
}
}
}
}
},
"sha256_after": {
"type": "keyword"
},
"sha256_before": {
"type": "keyword"
},
"tags": {
"type": "keyword"
}
}
},
"location": {
"type": "keyword"
},
"message": {
"type": "text"
},
"offset": {
"type": "keyword"
},
"rule": {
"properties": {
"description": {
"type": "keyword"
},
"groups": {
"type": "keyword"
},
"level": {
"type": "long"
},
"tsc": {
"type": "keyword"
},
"id": {
"type": "keyword"
},
"cve": {
"type": "keyword"
},
"info": {
"type": "keyword"
},
"frequency": {
"type": "long"
},
"firedtimes": {
"type": "long"
},
"cis": {
"type": "keyword"
},
"pci_dss": {
"type": "keyword"
},
"gdpr": {
"type": "keyword"
},
"gpg13": {
"type": "keyword"
},
"hipaa": {
"type": "keyword"
},
"nist_800_53": {
"type": "keyword"
},
"mail": {
"type": "boolean"
},
"mitre": {
"properties": {
"id": {
"type": "keyword"
},
"tactic": {
"type": "keyword"
},
"technique": {
"type": "keyword"
}
}
}
}
},
"predecoder": {
"properties": {
"program_name": {
"type": "keyword"
},
"timestamp": {
"type": "keyword"
},
"hostname": {
"type": "keyword"
}
}
},
"decoder": {
"properties": {
"parent": {
"type": "keyword"
},
"name": {
"type": "keyword"
},
"ftscomment": {
"type": "keyword"
},
"fts": {
"type": "long"
},
"accumulate": {
"type": "long"
}
}
},
"data": {
"properties": {
"audit": {
"properties": {
"acct": {
"type": "keyword"
},
"arch": {
"type": "keyword"
},
"auid": {
"type": "keyword"
},
"command": {
"type": "keyword"
},
"cwd": {
"type": "keyword"
},
"dev": {
"type": "keyword"
},
"directory": {
"properties": {
"inode": {
"type": "keyword"
},
"mode": {
"type": "keyword"
},
"name": {
"type": "keyword"
}
}
},
"egid": {
"type": "keyword"
},
"enforcing": {
"type": "keyword"
},
"euid": {
"type": "keyword"
},
"exe": {
"type": "keyword"
},
"execve": {
"properties": {
"a0": {
"type": "keyword"
},
"a1": {
"type": "keyword"
},
"a2": {
"type": "keyword"
},
"a3": {
"type": "keyword"
}
}
},
"exit": {
"type": "keyword"
},
"file": {
"properties": {
"inode": {
"type": "keyword"
},
"mode": {
"type": "keyword"
},
"name": {
"type": "keyword"
}
}
},
"fsgid": {
"type": "keyword"
},
"fsuid": {
"type": "keyword"
},
"gid": {
"type": "keyword"
},
"id": {
"type": "keyword"
},
"key": {
"type": "keyword"
},
"list": {
"type": "keyword"
},
"old-auid": {
"type": "keyword"
},
"old-ses": {
"type": "keyword"
},
"old_enforcing": {
"type": "keyword"
},
"old_prom": {
"type": "keyword"
},
"op": {
"type": "keyword"
},
"pid": {
"type": "keyword"
},
"ppid": {
"type": "keyword"
},
"prom": {
"type": "keyword"
},
"res": {
"type": "keyword"
},
"session": {
"type": "keyword"
},
"sgid": {
"type": "keyword"
},
"srcip": {
"type": "keyword"
},
"subj": {
"type": "keyword"
},
"success": {
"type": "keyword"
},
"suid": {
"type": "keyword"
},
"syscall": {
"type": "keyword"
},
"tty": {
"type": "keyword"
},
"type": {
"type": "keyword"
},
"uid": {
"type": "keyword"
}
}
},
"protocol": {
"type": "keyword"
},
"action": {
"type": "keyword"
},
"srcip": {
"type": "keyword"
},
"dstip": {
"type": "keyword"
},
"srcport": {
"type": "keyword"
},
"dstport": {
"type": "keyword"
},
"srcuser": {
"type": "keyword"
},
"dstuser": {
"type": "keyword"
},
"id": {
"type": "keyword"
},
"status": {
"type": "keyword"
},
"data": {
"type": "keyword"
},
"extra_data": {
"type": "keyword"
},
"system_name": {
"type": "keyword"
},
"url": {
"type": "keyword"
},
"oscap": {
"properties": {
"check": {
"properties": {
"description": {
"type": "text"
},
"id": {
"type": "keyword"
},
"identifiers": {
"type": "text"
},
"oval": {
"properties": {
"id": {
"type": "keyword"
}
}
},
"rationale": {
"type": "text"
},
"references": {
"type": "text"
},
"result": {
"type": "keyword"
},
"severity": {
"type": "keyword"
},
"title": {
"type": "keyword"
}
}
},
"scan": {
"properties": {
"benchmark": {
"properties": {
"id": {
"type": "keyword"
}
}
},
"content": {
"type": "keyword"
},
"id": {
"type": "keyword"
},
"profile": {
"properties": {
"id": {
"type": "keyword"
},
"title": {
"type": "keyword"
}
}
},
"return_code": {
"type": "long"
},
"score": {
"type": "double"
}
}
}
}
},
"office365": {
"properties": {
"Actor": {
"properties": {
"ID": {
"type": "keyword"
}
}
},
"UserId": {
"type": "keyword"
},
"Operation": {
"type": "keyword"
},
"ClientIP": {
"type": "keyword"
},
"ResultStatus": {
"type": "keyword"
},
"Subscription": {
"type": "keyword"
}
}
},
"github": {
"properties": {
"org": {
"type": "keyword"
},
"actor": {
"type": "keyword"
},
"action": {
"type": "keyword"
},
"actor_location": {
"properties": {
"country_code": {
"type": "keyword"
}
}
},
"repo": {
"type": "keyword"
}
}
},
"type": {
"type": "keyword"
},
"netinfo": {
"properties": {
"iface": {
"properties": {
"name": {
"type": "keyword"
},
"mac": {
"type": "keyword"
},
"adapter": {
"type": "keyword"
},
"type": {
"type": "keyword"
},
"state": {
"type": "keyword"
},
"mtu": {
"type": "long"
},
"tx_bytes": {
"type": "long"
},
"rx_bytes": {
"type": "long"
},
"tx_errors": {
"type": "long"
},
"rx_errors": {
"type": "long"
},
"tx_dropped": {
"type": "long"
},
"rx_dropped": {
"type": "long"
},
"tx_packets": {
"type": "long"
},
"rx_packets": {
"type": "long"
},
"ipv4": {
"properties": {
"gateway": {
"type": "keyword"
},
"dhcp": {
"type": "keyword"
},
"address": {
"type": "keyword"
},
"netmask": {
"type": "keyword"
},
"broadcast": {
"type": "keyword"
},
"metric": {
"type": "long"
}
}
},
"ipv6": {
"properties": {
"gateway": {
"type": "keyword"
},
"dhcp": {
"type": "keyword"
},
"address": {
"type": "keyword"
},
"netmask": {
"type": "keyword"
},
"broadcast": {
"type": "keyword"
},
"metric": {
"type": "long"
}
}
}
}
}
}
},
"os": {
"properties": {
"hostname": {
"type": "keyword"
},
"architecture": {
"type": "keyword"
},
"name": {
"type": "keyword"
},
"version": {
"type": "keyword"
},
"codename": {
"type": "keyword"
},
"major": {
"type": "keyword"
},
"minor": {
"type": "keyword"
},
"patch": {
"type": "keyword"
},
"build": {
"type": "keyword"
},
"platform": {
"type": "keyword"
},
"sysname": {
"type": "keyword"
},
"release": {
"type": "keyword"
},
"release_version": {
"type": "keyword"
},
"display_version": {
"type": "keyword"
}
}
},
"port": {
"properties": {
"protocol": {
"type": "keyword"
},
"local_ip": {
"type": "ip"
},
"local_port": {
"type": "long"
},
"remote_ip": {
"type": "ip"
},
"remote_port": {
"type": "long"
},
"tx_queue": {
"type": "long"
},
"rx_queue": {
"type": "long"
},
"inode": {
"type": "long"
},
"state": {
"type": "keyword"
},
"pid": {
"type": "long"
},
"process": {
"type": "keyword"
}
}
},
"hardware": {
"properties": {
"serial": {
"type": "keyword"
},
"cpu_name": {
"type": "keyword"
},
"cpu_cores": {
"type": "long"
},
"cpu_mhz": {
"type": "double"
},
"ram_total": {
"type": "long"
},
"ram_free": {
"type": "long"
},
"ram_usage": {
"type": "long"
}
}
},
"program": {
"properties": {
"format": {
"type": "keyword"
},
"name": {
"type": "keyword"
},
"priority": {
"type": "keyword"
},
"section": {
"type": "keyword"
},
"size": {
"type": "long"
},
"vendor": {
"type": "keyword"
},
"install_time": {
"type": "keyword"
},
"version": {
"type": "keyword"
},
"architecture": {
"type": "keyword"
},
"multiarch": {
"type": "keyword"
},
"source": {
"type": "keyword"
},
"description": {
"type": "keyword"
},
"location": {
"type": "keyword"
}
}
},
"process": {
"properties": {
"pid": {
"type": "long"
},
"name": {
"type": "keyword"
},
"state": {
"type": "keyword"
},
"ppid": {
"type": "long"
},
"utime": {
"type": "long"
},
"stime": {
"type": "long"
},
"cmd": {
"type": "keyword"
},
"args": {
"type": "keyword"
},
"euser": {
"type": "keyword"
},
"ruser": {
"type": "keyword"
},
"suser": {
"type": "keyword"
},
"egroup": {
"type": "keyword"
},
"sgroup": {
"type": "keyword"
},
"fgroup": {
"type": "keyword"
},
"rgroup": {
"type": "keyword"
},
"priority": {
"type": "long"
},
"nice": {
"type": "long"
},
"size": {
"type": "long"
},
"vm_size": {
"type": "long"
},
"resident": {
"type": "long"
},
"share": {
"type": "long"
},
"start_time": {
"type": "long"
},
"pgrp": {
"type": "long"
},
"session": {
"type": "long"
},
"nlwp": {
"type": "long"
},
"tgid": {
"type": "long"
},
"tty": {
"type": "long"
},
"processor": {
"type": "long"
}
}
},
"sca": {
"properties": {
"type": {
"type": "keyword"
},
"scan_id": {
"type": "keyword"
},
"policy": {
"type": "keyword"
},
"name": {
"type": "keyword"
},
"file": {
"type": "keyword"
},
"description": {
"type": "keyword"
},
"passed": {
"type": "integer"
},
"failed": {
"type": "integer"
},
"score": {
"type": "long"
},
"check": {
"properties": {
"id": {
"type": "keyword"
},
"title": {
"type": "keyword"
},
"description": {
"type": "keyword"
},
"rationale": {
"type": "keyword"
},
"remediation": {
"type": "keyword"
},
"compliance": {
"properties": {
"cis": {
"type": "keyword"
},
"cis_csc": {
"type": "keyword"
},
"pci_dss": {
"type": "keyword"
},
"hipaa": {
"type": "keyword"
},
"nist_800_53": {
"type": "keyword"
}
}
},
"references": {
"type": "keyword"
},
"file": {
"type": "keyword"
},
"directory": {
"type": "keyword"
},
"registry": {
"type": "keyword"
},
"process": {
"type": "keyword"
},
"result": {
"type": "keyword"
},
"previous_result": {
"type": "keyword"
},
"reason": {
"type": "keyword"
}
}
},
"invalid": {
"type": "keyword"
},
"policy_id": {
"type": "keyword"
},
"total_checks": {
"type": "keyword"
}
}
},
"command": {
"type": "keyword"
},
"integration": {
"type": "keyword"
},
"timestamp": {
"type": "date"
},
"title": {
"type": "keyword"
},
"uid": {
"type": "keyword"
},
"virustotal": {
"properties": {
"description": {
"type": "keyword"
},
"error": {
"type": "keyword"
},
"found": {
"type": "keyword"
},
"malicious": {
"type": "keyword"
},
"permalink": {
"type": "keyword"
},
"positives": {
"type": "keyword"
},
"scan_date": {
"type": "keyword"
},
"sha1": {
"type": "keyword"
},
"source": {
"properties": {
"alert_id": {
"type": "keyword"
},
"file": {
"type": "keyword"
},
"md5": {
"type": "keyword"
},
"sha1": {
"type": "keyword"
}
}
},
"total": {
"type": "keyword"
}
}
},
"vulnerability": {
"properties": {
"cve": {
"type": "keyword"
},
"cvss": {
"properties": {
"cvss2": {
"properties": {
"base_score": {
"type": "keyword"
},
"exploitability_score": {
"type": "keyword"
},
"impact_score": {
"type": "keyword"
},
"vector": {
"properties": {
"access_complexity": {
"type": "keyword"
},
"attack_vector": {
"type": "keyword"
},
"authentication": {
"type": "keyword"
},
"availability": {
"type": "keyword"
},
"confidentiality_impact": {
"type": "keyword"
},
"integrity_impact": {
"type": "keyword"
},
"privileges_required": {
"type": "keyword"
},
"scope": {
"type": "keyword"
},
"user_interaction": {
"type": "keyword"
}
}
}
}
},
"cvss3": {
"properties": {
"base_score": {
"type": "keyword"
},
"exploitability_score": {
"type": "keyword"
},
"impact_score": {
"type": "keyword"
},
"vector": {
"properties": {
"access_complexity": {
"type": "keyword"
},
"attack_vector": {
"type": "keyword"
},
"authentication": {
"type": "keyword"
},
"availability": {
"type": "keyword"
},
"confidentiality_impact": {
"type": "keyword"
},
"integrity_impact": {
"type": "keyword"
},
"privileges_required": {
"type": "keyword"
},
"scope": {
"type": "keyword"
},
"user_interaction": {
"type": "keyword"
}
}
}
}
}
}
},
"cwe_reference": {
"type": "keyword"
},
"package": {
"properties": {
"source": {
"type": "keyword"
},
"architecture": {
"type": "keyword"
},
"condition": {
"type": "keyword"
},
"generated_cpe": {
"type": "keyword"
},
"name": {
"type": "keyword"
},
"version": {
"type": "keyword"
}
}
},
"published": {
"type": "date"
},
"updated": {
"type": "date"
},
"rationale": {
"type": "keyword"
},
"severity": {
"type": "keyword"
},
"title": {
"type": "keyword"
},
"assigner": {
"type": "keyword"
},
"cve_version": {
"type": "keyword"
}
}
},
"aws": {
"properties": {
"source": {
"type": "keyword"
},
"accountId": {
"type": "keyword"
},
"log_info": {
"properties": {
"s3bucket": {
"type": "keyword"
}
}
},
"region": {
"type": "keyword"
},
"bytes": {
"type": "long"
},
"dstaddr": {
"type": "ip"
},
"srcaddr": {
"type": "ip"
},
"end": {
"type": "date"
},
"start": {
"type": "date"
},
"source_ip_address": {
"type": "ip"
},
"service": {
"properties": {
"count": {
"type": "long"
},
"action.networkConnectionAction.remoteIpDetails": {
"properties": {
"ipAddressV4": {
"type": "ip"
},
"geoLocation": {
"type": "geo_point"
}
}
},
"eventFirstSeen": {
"type": "date"
},
"eventLastSeen": {
"type": "date"
}
}
},
"createdAt": {
"type": "date"
},
"updatedAt": {
"type": "date"
},
"resource.instanceDetails": {
"properties": {
"launchTime": {
"type": "date"
},
"networkInterfaces": {
"properties": {
"privateIpAddress": {
"type": "ip"
},
"publicIp": {
"type": "ip"
}
}
}
}
}
}
},
"cis": {
"properties": {
"benchmark": {
"type": "keyword"
},
"error": {
"type": "long"
},
"fail": {
"type": "long"
},
"group": {
"type": "keyword"
},
"notchecked": {
"type": "long"
},
"pass": {
"type": "long"
},
"result": {
"type": "keyword"
},
"rule_title": {
"type": "keyword"
},
"score": {
"type": "long"
},
"timestamp": {
"type": "keyword"
},
"unknown": {
"type": "long"
}
}
},
"docker": {
"properties": {
"Action": {
"type": "keyword"
},
"Actor": {
"properties": {
"Attributes": {
"properties": {
"image": {
"type": "keyword"
},
"name": {
"type": "keyword"
}
}
}
}
},
"Type": {
"type": "keyword"
}
}
},
"gcp": {
"properties": {
"jsonPayload": {
"properties": {
"authAnswer": {
"type": "keyword"
},
"queryName": {
"type": "keyword"
},
"responseCode": {
"type": "keyword"
},
"vmInstanceId": {
"type": "keyword"
},
"vmInstanceName": {
"type": "keyword"
}
}
},
"resource": {
"properties": {
"labels": {
"properties": {
"location": {
"type": "keyword"
},
"project_id": {
"type": "keyword"
},
"source_type": {
"type": "keyword"
}
}
},
"type": {
"type": "keyword"
}
}
},
"severity": {
"type": "keyword"
}
}
},
"osquery": {
"properties": {
"name": {
"type": "keyword"
},
"pack": {
"type": "keyword"
},
"action": {
"type": "keyword"
},
"calendarTime": {
"type": "keyword"
}
}
}
}
},
"program_name": {
"type": "keyword"
},
"command": {
"type": "keyword"
},
"type": {
"type": "text"
},
"title": {
"type": "keyword"
},
"id": {
"type": "keyword"
},
"input": {
"properties": {
"type": {
"type": "keyword"
}
}
},
"previous_output": {
"type": "keyword"
}
}
},
"version": 1
}

Regards
Alex

Walter Tomas

unread,
Jun 6, 2023, 5:42:44 AM6/6/23
to Wazuh mailing list
Hello,
Alex, thanks for the reply.

I made the proposed changes, but unfortunately I noticed that shards are being allocated to the cold nodes.

wazuh-alerts-4.x-2023.04.25 1     r      STARTED    7961177 5.5gb xxx.xxx.xxx.28 node-4
wazuh-alerts-4.x-2023.04.25 1     p      STARTED    7961177 5.5gb xxx.xxx.xxx.27 node-3
wazuh-alerts-4.x-2023.04.25 2     r      STARTED    7957392 5.5gb xxx.xxx.xxx.29 node-5
wazuh-alerts-4.x-2023.04.25 2     p      RELOCATING 7957392 5.5gb xxx.xxx.xxx.25 node-1 -> xxx.xxx.xxx.30 SkF-Wph2S16G742Xx-Np2g node-6
wazuh-alerts-4.x-2023.04.25 0     p      STARTED    7959155 5.5gb xxx.xxx.xxx.26 node-2
wazuh-alerts-4.x-2023.04.25 0     r      STARTED    7959155 5.5gb xxx.xxx.xxx.25 node-1


I don't even know what to do with them, or what information to provide you to be able to help me :(

I modified the policy to move indexes older than one day... but unfortunately that doesn't work either.


"transitions": [
                    {
                        "state_name": "cold",
                        "conditions": {
                            "min_index_age": "1d"

                        }
                    }
                ]
            },
            {
                "name": "cold",
                "actions": [
                    {
                        "retry": {
                            "count": 3,
                            "backoff": "exponential",
                            "delay": "1m"
                        },
                        "allocation": {
                            "requires": {
                                "temp": "cold"
                            },
                            "include": {},
                            "exclude": {},
                            "wait_for": false
                        }
                    }
                ],

Walter Tomas

unread,
Jun 6, 2023, 9:21:17 AM6/6/23
to Wazuh mailing list
Update:

Alex, I added: 
cluster.routing.allocation.include.temp: hot on nodes 1-5, and cluster.routing.allocation.include.temp: cold on node 6.

Now node-6 has been freed of its shards, but the policy is still not being applied to move my old logs to node-6 :(

pls help

Alejandro Ruiz Becerra

unread,
Jun 6, 2023, 10:13:51 AM6/6/23
to Wazuh mailing list
Hello Walter

cluster.routing.allocation.include.temp and cluster.routing.allocation.require.temp are similar options. Note that your policy has a syntax error: the option is require, not requires. You might want to check that out.

On the other hand, your shards seem to be relocating to node-6 as desired:


wazuh-alerts-4.x-2023.04.25 2     p      RELOCATING 7957392 5.5gb xxx.xxx.xxx.25 node-1 -> xxx.xxx.xxx.30 SkF-Wph2S16G742Xx-Np2g node-6

Could you please elaborate a bit more about this problem you are having?

Regards,
Alex

Walter Tomas

unread,
Jun 6, 2023, 10:43:30 AM6/6/23
to Wazuh mailing list

If I remove cluster.routing.allocation.include.temp: cold from node-6, only a single shard from the other nodes is moved to node-6; the entire index is not moved.
It is split up like the index below, and that happens without applying any policy:


wazuh-alerts-4.x-2023.04.12 2     p      STARTED 7539404 4.9gb xxx.xxx.xxx.29 node-5
wazuh-alerts-4.x-2023.04.12 2     r      STARTED 7539404 4.9gb xxx.xxx.xxx.26 node-2
wazuh-alerts-4.x-2023.04.12 1     p      STARTED 7536276 4.9gb xxx.xxx.xxx.28 node-4
wazuh-alerts-4.x-2023.04.12 1     r      STARTED 7536276 4.9gb xxx.xxx.xxx.30 node-6
wazuh-alerts-4.x-2023.04.12 0     r      STARTED 7540501 4.9gb xxx.xxx.xxx.25 node-1
wazuh-alerts-4.x-2023.04.12 0     p      STARTED 7540501 4.9gb xxx.xxx.xxx.27 node-3

Alejandro Ruiz Becerra

unread,
Jun 6, 2023, 10:57:15 AM6/6/23
to Wazuh mailing list
Could you share your full ISM policy?

Have you re-applied the policy to the existing indexes? Remember that new or updated ISM policies are only applied to new indexes, not to existing ones, unless you do it manually.
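As a hedged sketch, re-attaching a policy to existing indexes can be done with the ISM "add policy" API; the host, credentials, and index pattern below are assumptions to adjust for your cluster:

```shell
# Re-apply an updated ISM policy to indexes that already exist; new or changed
# policies only attach automatically to indexes created afterwards.
ES_URL="${ES_URL:-https://localhost:9200}"
PATTERN="wazuh-alerts-4.x-2023.04.*"
BODY='{"policy_id": "cold_after_30days_default"}'

# Print the request first; set RUN=1 to actually send it with curl.
echo "POST $ES_URL/_opendistro/_ism/add/$PATTERN"
if [ "${RUN:-0}" = "1" ]; then
  curl -k -u admin:admin -X POST "$ES_URL/_opendistro/_ism/add/$PATTERN" \
    -H 'Content-Type: application/json' -d "$BODY"
fi
```

Afterwards, GET /_opendistro/_ism/explain should list the re-attached indexes as managed.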

Walter Tomas

unread,
Jun 6, 2023, 11:05:20 AM6/6/23
to Wazuh mailing list
Sure,
Output of GET /_opendistro/_ism/explain:

{
  "wazuh-alerts-4.x-2023.04.08" : {
    "index.plugins.index_state_management.policy_id" : "cold_after_30days_default",
    "index.opendistro.index_state_management.policy_id" : "cold_after_30days_default",
    "index" : "wazuh-alerts-4.x-2023.04.08",
    "index_uuid" : "Qt5eQkhJTJOsdWuU88Et1w",
    "policy_id" : "cold_after_30days_default",
    "enabled" : true
  },
  "wazuh-alerts-4.x-2023.04.09" : {
    "index.plugins.index_state_management.policy_id" : "cold_after_30days_default",
    "index.opendistro.index_state_management.policy_id" : "cold_after_30days_default",
    "index" : "wazuh-alerts-4.x-2023.04.09",
    "index_uuid" : "pnuEOTD-Q-Cv7IuLSeQF0Q",
    "policy_id" : "cold_after_30days_default",
    "enabled" : true
  },
  "total_managed_indices" : 2
}
---------------------------------------------------------------------------------------------------------------------------------

Output of GET _cat/shards/wazuh-alerts-4.x-2023.04.08?v:

wazuh-alerts-4.x-2023.04.08 2     p      STARTED 3864033 2.6gb xxx.xxx.xxx.29 node-5
wazuh-alerts-4.x-2023.04.08 2     r      STARTED 3864033 2.6gb xxx.xxx.xxx.26 node-2
wazuh-alerts-4.x-2023.04.08 1     p      STARTED 3863314 2.6gb xxx.xxx.xxx.28 node-4
wazuh-alerts-4.x-2023.04.08 1     r      STARTED 3863314 2.6gb xxx.xxx.xxx.25 node-1
wazuh-alerts-4.x-2023.04.08 0     r      STARTED 3863716 2.6gb xxx.xxx.xxx.26 node-2
wazuh-alerts-4.x-2023.04.08 0     p      STARTED 3863716 2.6gb xxx.xxx.xxx.27 node-3
---------------------------------------------------------------------------------------------------------------------------------
----START---
{
    "policy": {
        "policy_id": "cold_after_30days_default",
        "description": "Wazuh index state management for OpenDistro to move indices into a cold state after 30 days and delete them after a year.",
        "last_updated_time": 1686041860767,
        "schema_version": 17,
        "error_notification": null,

        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [
                    {
                        "retry": {
                            "count": 3,
                            "backoff": "exponential",
                            "delay": "1m"
                        },
                        "replica_count": {
                            "number_of_replicas": 1
                        }
                    }
                ],
                "transitions": [
                    {
                        "state_name": "cold",
                        "conditions": {
                            "min_index_age": "1d"
                        }
                    }
                ]
            },
            {
                "name": "cold",
                "actions": [
                    {
                        "retry": {
                            "count": 3,
                            "backoff": "exponential",
                            "delay": "1m"
                        },
                        "allocation": {
                            "require": {
                                "temp": "cold"
                            },
                            "include": {},

                            "exclude": {},
                            "wait_for": false
                        }
                    }
                ],
                "transitions": [
                    {
                        "state_name": "delete",
                        "conditions": {
                            "min_index_age": "365d"
                        }
                    }
                ]
            },
            {
                "name": "delete",

                "actions": [
                    {
                        "retry": {
                            "count": 3,
                            "backoff": "exponential",
                            "delay": "1m"
                        },
                        "delete": {}
                    }
                ],
                "transitions": []
            }
        ],
        "ism_template": [
            {
                "index_patterns": [
                    "wazuh-alerts-4.x-2*"
                ],
                "priority": 100,
                "last_updated_time": 1685791773490
            }
        ]
    }
}

---END---

Alejandro Ruiz Becerra

unread,
Jun 6, 2023, 12:08:40 PM6/6/23
to Wazuh mailing list
Let me test that out in a lab

Walter Tomas

unread,
Jun 12, 2023, 3:02:55 AM6/12/23
to Wazuh mailing list

Hello,
do you have an update here?

Walter Tomas

unread,
Jun 19, 2023, 3:19:07 AM6/19/23
to Wazuh mailing list
Hello,
can someone help me with an idea?

I can't send logs older than 30 days to a cluster cold node

Alejandro Ruiz Becerra

unread,
Jun 21, 2023, 6:35:01 AM6/21/23
to Wazuh mailing list
Hello again Tomas

I was not able to reproduce the problem. I'll keep trying.

Do you have any news?

Walter Tomas

unread,
Jun 26, 2023, 5:13:34 AM6/26/23
to Wazuh mailing list
Hello,

Alex, yes, I got it working.

The config on the hot and cold nodes must not contain "cluster.routing.allocation.require.temp:".

It must only contain node.attr.temp: hot (or cold).

And I enabled "balance": "true" in _cluster/settings:
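A minimal sketch of what that looks like in each node's opensearch.yml (node numbering follows this thread; a second cold node, node-7 here, is an assumption based on the two-cold-node minimum mentioned later in this message):

```yaml
# opensearch.yml on the hot nodes (node-1 .. node-5)
node.attr.temp: hot

# opensearch.yml on the cold nodes (node-6, node-7)
node.attr.temp: cold
```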

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": ""
  }
}

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "",
    "cluster.routing.allocation.awareness.force.zone.values":["", ""]
  }
}

PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "routing.allocation.awareness.balance": "true"
    }
  }
}

Now all the policies are working properly, and I managed to move all the logs older than 30 days to the cold nodes.
And very important: we must have a minimum of 2 cold nodes.
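To double-check the result, the node attributes and shard placement can be inspected; this is a sketch where the host and credentials are assumptions:

```shell
# Inspect the tier attributes and where shards ended up after the ISM transition.
ES_URL="${ES_URL:-https://localhost:9200}"
NODEATTRS="$ES_URL/_cat/nodeattrs?v&h=node,attr,value"
SHARDS="$ES_URL/_cat/shards/wazuh-alerts-4.x-*?v&h=index,shard,prirep,state,node"

# Print the requests first; set RUN=1 to actually query a live indexer.
echo "GET $NODEATTRS"
echo "GET $SHARDS"
if [ "${RUN:-0}" = "1" ]; then
  curl -k -u admin:admin "$NODEATTRS"   # every node should report its temp value
  curl -k -u admin:admin "$SHARDS"      # cold indexes should sit on the cold nodes
fi
```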
Thank you very much for your support and patience

Alejandro Ruiz Becerra

unread,
Jun 26, 2023, 5:31:00 AM6/26/23
to Walter Tomas, Wazuh mailing list
Hello Walter

That's great news. It's good to hear you could make it work. Thank you very much for sharing these latest details; I'm sure this will be helpful in the future for other cases similar to yours.
I'll take some time to write it down and create some kind of guide for this, which we don't have at the moment.

Are there any resources (guides, tutorials, documentation, ...) that you have used for this, so I can check it out?

Thanks again for your patience.

Regards,
Alex


Walter Tomas

unread,
Jun 26, 2023, 7:15:28 AM6/26/23
to Wazuh mailing list
Sure,

https://opensearch.org/docs/2.4/tuning-your-cluster/cluster/

This is where I got my inspiration from; since I didn't want to divide the cluster into zones, I left the zone values blank.

I'll leave my config below.

A small script was created that takes the place of a load balancer, with a limit of 800 agents per node.


When the limit of 800 agents per node is reached, an alert will be raised, and then we will probably install a 6th hot node that will run wazuh-manager.
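The per-node check behind that alert might look like the sketch below; the 800 threshold comes from this thread, the alerting hook is left out, and agent_control is the standard Wazuh manager CLI (run it on each manager):

```shell
#!/bin/sh
# Count active agents on this manager and warn when the 800-agent limit is hit.
LIMIT=800
ACTIVE=$(/var/ossec/bin/agent_control -l 2>/dev/null | grep -c "Active")

if [ "$ACTIVE" -ge "$LIMIT" ]; then
  echo "WARN: $ACTIVE active agents on this manager (limit $LIMIT); plan a new hot node"
fi
```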
Deploying agents on wazuh1 node
curl -so wazuh-agent.deb https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/wazuh-agent_4.4.3-1_amd64.deb && sudo WAZUH_MANAGER='wazuh1.domainname.tld' WAZUH_AGENT_GROUP='wazuh1' dpkg -i ./wazuh-agent.deb
Deploying agents on wazuh2 node
curl -so wazuh-agent.deb https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/wazuh-agent_4.4.3-1_amd64.deb && sudo WAZUH_MANAGER='wazuh2.domainname.tld' WAZUH_AGENT_GROUP='wazuh2' dpkg -i ./wazuh-agent.deb

And so on up to node 5. Please set WAZUH_MANAGER to the IP or domain of the node that should receive the agents; I created a group with a matching name for each node, to monitor them more easily.

The agents will be automatically distributed during installation on nodes 1-5.

After installing the agents, I pushed the following config to /var/ossec/etc/ossec.conf for the first 800 agents:
--------------------------------------------------------------------------------------------------------------------
  <client>
    <server>
      <address>wazuh1.domainname.tld</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
    <server>
      <address>wazuh2.domainname.tld</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
    <server>
      <address>wazuh3.domainname.tld</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
    <server>
      <address>wazuh4.domainname.tld</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
    <server>
      <address>wazuh5.domainname.tld</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
       
    <config-profile>ubuntu, ubuntudigi, ubuntudigiro</config-profile>
    <notify_time>10</notify_time>
    <time-reconnect>60</time-reconnect>
    <auto_restart>yes</auto_restart>
    <crypto_method>aes</crypto_method>
    <enrollment>
      <enabled>yes</enabled>
      <groups>wazuh1</groups>
    </enrollment>
  </client>
--------------------------------------------------------------------------------------------------------------------
Agents 801 to 1600:

  <client>
    <server>
      <address>wazuh2.domainname.tld</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
    <server>
      <address>wazuh3.domainname.tld</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
    <server>
      <address>wazuh4.domainname.tld</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
    <server>
      <address>wazuh5.domainname.tld</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
    <server>
      <address>wazuh1.domainname.tld</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
       
    <config-profile>ubuntu, ubuntudigi, ubuntudigiro</config-profile>
    <notify_time>10</notify_time>
    <time-reconnect>60</time-reconnect>
    <auto_restart>yes</auto_restart>
    <crypto_method>aes</crypto_method>
    <enrollment>
      <enabled>yes</enabled>
      <groups>wazuh2</groups>
    </enrollment>
  </client>
 

Agents 1601 to 2400, and so on.
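The <client> blocks above differ only in the server order and the enrollment group: each group's server list is the full set of managers rotated so that the group's primary node comes first, giving every group a different failover order. A small sketch (hypothetical helper, same hostnames as above) of that rotation:

```python
# Generate the failover <server> order for each agent group: rotate the
# manager list so that group N's primary manager (wazuhN) comes first,
# matching the ossec.conf blocks above.

MANAGERS = [f"wazuh{i}.domainname.tld" for i in range(1, 6)]

def server_order(group_index):
    """Rotated manager list for group `group_index` (1-based)."""
    i = group_index - 1
    return MANAGERS[i:] + MANAGERS[:i]

for g in (1, 2):
    print(f"wazuh{g}:", server_order(g))
```

For group 2 this yields wazuh2 first and wazuh1 last, exactly as in the second <client> block.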

--------------------------------------------------------------------------------------------------------------------
Cluster consisting of 7 nodes for a large number of agents.
5 HOT nodes (SSD storage) & 2 COLD nodes (SATA storage)


HOT nodes:

wazuh1
dependencies installed:
wazuh-indexer
wazuh-dashboard
wazuh-manager

wazuh2
dependencies installed:
wazuh-indexer
wazuh-manager

wazuh3
dependencies installed:
wazuh-indexer
wazuh-manager

wazuh4
dependencies installed:
wazuh-indexer
wazuh-manager

wazuh5
dependencies installed:
wazuh-indexer
wazuh-manager

COLD nodes:

wazuh6
dependencies installed:
wazuh-indexer
wazuh-manager

wazuh7
dependencies installed:
wazuh-indexer
wazuh-manager
------------------------------------------------------------------------------------------------------------------------------------
From the Wazuh dashboard -> Management -> Dev Tools (Console):


PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": ""
  }
}

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "",
    "cluster.routing.allocation.awareness.force.zone.values":["", ""]
  }
}

PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "routing.allocation.awareness.balance": "true"
    }
  }
}
------------------------------------------------------------------------------------------------------------------------------------


/etc/wazuh-indexer/opensearch.yml  for HOT nodes


node.master: true
node.data: true
node.ingest: true
node.attr.temp: hot
cluster.name: wazuh-indexer-cluster
cluster.routing.allocation.disk.threshold_enabled: false

node.max_local_storage_nodes: "7"
path.data: /var/lib/wazuh-indexer
path.logs: /var/log/wazuh-indexer


plugins.security.ssl.http.pemcert_filepath: /etc/wazuh-indexer/certs/node-1.pem
plugins.security.ssl.http.pemkey_filepath: /etc/wazuh-indexer/certs/node-1-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: /etc/wazuh-indexer/certs/root-ca.pem
plugins.security.ssl.transport.pemcert_filepath: /etc/wazuh-indexer/certs/node-1.pem
plugins.security.ssl.transport.pemkey_filepath: /etc/wazuh-indexer/certs/node-1-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: /etc/wazuh-indexer/certs/root-ca.pem
plugins.security.ssl.http.enabled: true
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.transport.resolve_hostname: false
plugins.security.ssl.http.enabled_ciphers:
  - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
  - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
  - "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
  - "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
plugins.security.ssl.http.enabled_protocols:
  - "TLSv1.2"
plugins.security.authcz.admin_dn:
- "CN=admin,OU=Wazuh,O=Wazuh,L=California,C=US"
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.restapi.roles_enabled:
- "all_access"
- "security_rest_api_access"

plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opendistro-notifications-*", ".opendistro-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]


### Option to allow Filebeat-oss 7.10.2 to work ###
compatibility.override_main_response_version: true
node.name: node-1
cluster.initial_master_nodes:
        - node-1
        - node-2
        - node-3
        - node-4
        - node-5
discovery.seed_hosts:
        - xxx.xxx.xxx.25
        - xxx.xxx.xxx.26
        - xxx.xxx.xxx.27
        - xxx.xxx.xxx.28
        - xxx.xxx.xxx.29
        - xxx.xxx.xxx.30
        - xxx.xxx.xxx.31
network.host: xxx.xxx.xxx.25

plugins.security.nodes_dn:
        - CN=node-1,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-2,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-3,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-4,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-5,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-6,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-7,OU=Wazuh,O=Wazuh,L=California,C=US
       
------------------------------------------------------------------------------------------------------------------------------------        
/etc/wazuh-indexer/opensearch.yml  for COLD nodes  

node.master: false
node.data: true
node.ingest: true
node.attr.temp: cold
cluster.name: wazuh-indexer-cluster
cluster.routing.allocation.disk.threshold_enabled: false

node.max_local_storage_nodes: "7"
path.data: /var/wazuh_cold7
path.logs: /var/log/wazuh-indexer


plugins.security.ssl.http.pemcert_filepath: /etc/wazuh-indexer/certs/node-7.pem
plugins.security.ssl.http.pemkey_filepath: /etc/wazuh-indexer/certs/node-7-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: /etc/wazuh-indexer/certs/root-ca.pem
plugins.security.ssl.transport.pemcert_filepath: /etc/wazuh-indexer/certs/node-7.pem
plugins.security.ssl.transport.pemkey_filepath: /etc/wazuh-indexer/certs/node-7-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: /etc/wazuh-indexer/certs/root-ca.pem
plugins.security.ssl.http.enabled: true
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.transport.resolve_hostname: false
plugins.security.ssl.http.enabled_ciphers:
  - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
  - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
  - "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
  - "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
plugins.security.ssl.http.enabled_protocols:
  - "TLSv1.2"
plugins.security.authcz.admin_dn:
- "CN=admin,OU=Wazuh,O=Wazuh,L=California,C=US"
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.restapi.roles_enabled:
- "all_access"
- "security_rest_api_access"

plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opendistro-notifications-*", ".opendistro-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]


### Option to allow Filebeat-oss 7.10.2 to work ###
compatibility.override_main_response_version: true
node.name: node-7

cluster.initial_master_nodes:
        - node-1
        - node-2
        - node-3
        - node-4
        - node-5
discovery.seed_hosts:
        - xxx.xxx.xxx.25
        - xxx.xxx.xxx.26
        - xxx.xxx.xxx.27
        - xxx.xxx.xxx.28
        - xxx.xxx.xxx.29
        - xxx.xxx.xxx.30
        - xxx.xxx.xxx.31
network.host: xxx.xxx.xxx.31

plugins.security.nodes_dn:
        - CN=node-1,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-2,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-3,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-4,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-5,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-6,OU=Wazuh,O=Wazuh,L=California,C=US
        - CN=node-7,OU=Wazuh,O=Wazuh,L=California,C=US
------------------------------------------------------------------------------------------------------------------------------------
Policy :
#start#

{
    "policy": {
        "policy_id": "cold_after_30days_default",
        "description": "Wazuh index state management to move indices into a cold state on a cold node after 30 days and delete them after 550 days.",
        "last_updated_time": 1687211701790,

        "schema_version": 17,
        "error_notification": null,
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [
                    {
                        "retry": {
                            "count": 5,

                            "backoff": "exponential",
                            "delay": "1m"
                        },
                        "replica_count": {
                            "number_of_replicas": 2
                        }
                    }
                ],
                "transitions": [
                    {
                        "state_name": "cold",
                        "conditions": {
                            "min_index_age": "30d"
                        }
                    }
                ]
            },
            {
                "name": "cold",
                "actions": [
                    {
                        "retry": {
                            "count": 3,
                            "backoff": "exponential",
                            "delay": "1m"
                        },
                        "allocation": {
                            "require": {
                                "temp": "cold"
                            },
                            "include": {},
                            "exclude": {},
                            "wait_for": false
                        }
                    }
                ],
                "transitions": [
                    {
                        "state_name": "delete",
                        "conditions": {
                            "min_index_age": "550d"

                        }
                    }
                ]
            },
            {
                "name": "delete",
                "actions": [
                    {
                        "retry": {
                            "count": 5,

                            "backoff": "exponential",
                            "delay": "1m"
                        },
                        "delete": {}
                    }
                ],
                "transitions": []
            }
        ],
        "ism_template": [
            {
                "index_patterns": [
                    "wazuh-alerts-4.x-2*"
                ],
                "priority": 101,
                "last_updated_time": 1685557214907
            }
        ]
    }
}
 
#end#
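As a quick sanity check of the policy's state machine (hot -> cold after 30 days on the `temp: cold` nodes, cold -> delete after 550 days), the transition chain in the JSON above can be verified programmatically before pushing the policy. This sketch hard-codes a trimmed copy of the relevant fragment rather than calling the cluster:

```python
import json

# Trimmed copy of the ISM policy above: verify the hot -> cold -> delete
# chain and the age thresholds before applying it.
policy = json.loads("""
{
  "states": [
    {"name": "hot",
     "transitions": [{"state_name": "cold",
                      "conditions": {"min_index_age": "30d"}}]},
    {"name": "cold",
     "transitions": [{"state_name": "delete",
                      "conditions": {"min_index_age": "550d"}}]},
    {"name": "delete", "transitions": []}
  ]
}
""")

chain = {s["name"]: s["transitions"] for s in policy["states"]}
assert chain["hot"][0]["state_name"] == "cold"
assert chain["hot"][0]["conditions"]["min_index_age"] == "30d"
assert chain["cold"][0]["state_name"] == "delete"
assert chain["cold"][0]["conditions"]["min_index_age"] == "550d"
assert chain["delete"] == []
print("policy state machine OK")
```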
------------------------------------------------------------------------------------------------------------------------------------
If you see something wrong, please let me know.

I hope the above information is useful to you and the whole community.

Thanks

Walter Tomas (sclipici)            
