Hi Sagar,
When configuring an OpenSearch node for Wazuh, you should consider several factors, such as the expected amount of data and search volume, as well as the level of performance you need. One best practice is to allocate no more than about 50% of the total system memory to the OpenSearch JVM heap, leaving the rest to the operating system, which uses it for the filesystem cache that OpenSearch relies on heavily. For example, on a node with 16GB of RAM, you could allocate 8GB to the heap and leave 8GB for the operating system.
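For reference, heap size is set with the `-Xms` and `-Xmx` JVM flags. A minimal sketch for the 16GB example above, assuming the default Wazuh indexer package layout (adjust the path to your installation):

```
# /etc/wazuh-indexer/jvm.options (path assumes a default package install)
# Set initial and maximum heap to the same value to avoid resize pauses
-Xms8g
-Xmx8g
```

Restart the indexer service after changing these values for them to take effect.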
Aside from memory allocation, you should also factor in the number of CPU cores and the storage capacity of the node. The number of CPU cores required depends on the expected search volume and the complexity of the queries you will run. Storage capacity should be sized based on the amount of data you expect to index and how quickly it will grow.
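To see how your current resources are holding up, you can query the OpenSearch `_cat` APIs. A sketch, assuming the cluster listens on `localhost:9200` with the default `admin` credentials and a self-signed certificate (replace host, port, and credentials to match your deployment):

```
# Per-node disk usage and shard allocation
curl -sk -u admin:admin "https://localhost:9200/_cat/allocation?v"

# Per-node heap, RAM, and disk utilization
curl -sk -u admin:admin "https://localhost:9200/_cat/nodes?v&h=name,heap.percent,ram.percent,disk.used_percent"
```

Watching `heap.percent` and `disk.used_percent` over time gives you a concrete basis for deciding when to resize or add nodes.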
All in all, it is difficult to specify a maximum amount of data that OpenSearch can handle with a given configuration, as it depends on multiple factors. In this case, I recommend starting with a baseline configuration and adjusting based on observed performance. OpenSearch is designed to scale horizontally, which means you can add more nodes to the cluster as your data grows to improve performance and capacity if needed.
I hope this helps.