Indexer issue: Java error


Valerio Vinci

Sep 20, 2023, 9:34:59 PM
to Wazuh | Mailing List
Hello,

After upgrading from 4.3 to 4.4 (and now 4.5), events are no longer being shown.
I've checked the logs of the agent, the server, and Filebeat, and found no warnings or errors.

Checking the indexer, I can see these Java (and possibly parsing) errors:

[ERROR][o.o.s.s.h.n.SecuritySSLNettyHttpServerTransport] [Indexer1] Exception during establishing a SSL connection: java.net.SocketException: Connection reset

java.net.SocketException: Connection reset
        at sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:394) ~[?:?]
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:426) ~[?:?]
        at org.opensearch.transport.CopyBytesSocketChannel.readFromSocketChannel(CopyBytesSocketChannel.java:155) ~[transport-netty4-client-2.6.0.jar:2.6.0]
        at org.opensearch.transport.CopyBytesSocketChannel.doReadBytes(CopyBytesSocketChannel.java:140) ~[transport-netty4-client-2.6.0.jar:2.6.0]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151) [netty-transport-4.1.87.Final.jar:4.1.87.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) [netty-transport-4.1.87.Final.jar:4.1.87.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) [netty-transport-4.1.87.Final.jar:4.1.87.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) [netty-transport-4.1.87.Final.jar:4.1.87.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [netty-transport-4.1.87.Final.jar:4.1.87.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [netty-common-4.1.87.Final.jar:4.1.87.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.87.Final.jar:4.1.87.Final]
        at java.lang.Thread.run(Thread.java:833) [?:?]

[2023-09-21T03:26:44,380][WARN ][r.suppressed             ] [Indexer1] path: /selexi-vsoc-alerts-*/_search, params: {ignore_unavailable=true, preference=1695259576027, index=selexi-vsoc-alerts-*, timeout=30000ms, track_total_hits=true}
org.opensearch.action.search.SearchPhaseExecutionException:
        at org.opensearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:663) [opensearch-2.6.0.jar:2.6.0]
        at org.opensearch.action.search.FetchSearchPhase$1.onFailure(FetchSearchPhase.java:128) [opensearch-2.6.0.jar:2.6.0]
        at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:54) [opensearch-2.6.0.jar:2.6.0]
        at org.opensearch.threadpool.TaskAwareRunnable.doRun(TaskAwareRunnable.java:78) [opensearch-2.6.0.jar:2.6.0]
        at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) [opensearch-2.6.0.jar:2.6.0]
        at org.opensearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:59) [opensearch-2.6.0.jar:2.6.0]
        at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:806) [opensearch-2.6.0.jar:2.6.0]
        at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) [opensearch-2.6.0.jar:2.6.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
        at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: java.lang.NullPointerException: Cannot invoke "org.opensearch.search.aggregations.InternalAggregations.getSerializedSize()" because "reducePhase.aggregations" is null
        at org.opensearch.action.search.QueryPhaseResultConsumer.reduce(QueryPhaseResultConsumer.java:165) ~[opensearch-2.6.0.jar:2.6.0]
        at org.opensearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:137) ~[opensearch-2.6.0.jar:2.6.0]
        at org.opensearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:123) ~[opensearch-2.6.0.jar:2.6.0]
        at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) ~[opensearch-2.6.0.jar:2.6.0]
        ... 8 more

[The same WARN, SearchPhaseExecutionException, and NullPointerException stack trace repeat at 03:26:44,399, 03:26:44,402, 03:26:44,407, and 03:26:44,410 for the same search path.]

[2023-09-21T03:27:23,636][INFO ][o.o.j.s.JobSweeper       ] [Indexer1] Running full sweep


During the update I also updated the Filebeat template and the Wazuh plugin; the connection to the indexer is OK,
but no events are shown in the dashboard or collected by the indexer.
Right now the problem is very serious: we are not collecting logs and we need to solve it ASAP!
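For triaging where events stop, a minimal end-to-end check of the Filebeat-to-indexer path might look like the sketch below. The port and the `<user>:<password>` placeholder credentials are assumptions; adjust them to the actual deployment.

```shell
# Sketch: verify each hop of the alerts pipeline.
# Assumes the indexer listens on localhost:9200; <user>:<password> are placeholders.
filebeat test output                                                # TLS/connection check from Filebeat to the indexer
curl -k -u <user>:<password> \
  "https://localhost:9200/_cat/indices/wazuh-alerts-*?v&s=index"    # are alert indices being created?
curl -k -u <user>:<password> \
  "https://localhost:9200/wazuh-alerts-*/_count?pretty"             # are documents actually arriving?
```

If `filebeat test output` is OK but the document count does not grow, the break is between ingestion and indexing rather than in the transport.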


Manuel Alejandro Roldan Mella

Sep 20, 2023, 11:36:14 PM
to Wazuh | Mailing List
Hi Valerio,

The error message "java.lang.NullPointerException: Cannot invoke "org.opensearch.search.aggregations.InternalAggregations.getSerializedSize()" because "reducePhase.aggregations" is null" indicates that the Elasticsearch Indexer is trying to serialize a null value for the "reducePhase.aggregations" field. This can happen for a number of reasons, such as a bug in the Elasticsearch code or a problem with the data that is being indexed.

In the case of Wazuh, this error may be caused by a problem with the Wazuh plugin for Elasticsearch. The Wazuh plugin is responsible for indexing and managing Wazuh events in Elasticsearch. If the plugin is not configured correctly or if it is not up to date, it can cause errors when indexing Wazuh events.

To troubleshoot this problem, you can try the following:

  • Make sure that the Wazuh plugin for Elasticsearch is installed and configured correctly.
  • Make sure that the Wazuh plugin for Elasticsearch is up to date.
  • Try restarting the Wazuh Indexer service.
  • If you are still having problems, you can try disabling aggregations on the Wazuh Indexer. To do this, set the wazuh.indexer.aggregations.enabled option to false in the wazuh.yml file on the Wazuh Indexer host.
  • Make sure that the Elasticsearch Indexer has enough resources to run properly.
  • Increase the Elasticsearch cluster timeout (network.connect_timeout and network.read_timeout) settings.
  • Check the Elasticsearch logs for any other errors that may be related to the problem.
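For the resource and log checks in the list above, a hedged sketch of the basic cluster-level queries (placeholder credentials, default port assumed):

```shell
# Sketch: cluster-level health checks (placeholder <user>:<password>, default port 9200).
curl -k -u <user>:<password> "https://localhost:9200/_cluster/health?pretty"    # status and shard counts
curl -k -u <user>:<password> \
  "https://localhost:9200/_cat/nodes?v&h=name,heap.percent,disk.used_percent"   # heap and disk pressure per node
curl -k -u <user>:<password> \
  "https://localhost:9200/_cluster/allocation/explain?pretty"                   # why a shard is unassigned, if any are
```

A red or yellow cluster status, or heap/disk near their limits, would point to the resource issues mentioned above.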

Valerio Vinci

Sep 21, 2023, 3:56:19 AM
to Wazuh | Mailing List
Hello,

Many thanks.
The base service is OpenSearch, not Elastic. Does the suggested configuration work for both?

Could it be the same issue?

Valerio Vinci

Sep 21, 2023, 6:04:46 PM
to Wazuh | Mailing List
Hello,

Do you have any suggestions for my problem?
There isn't any wazuh.yml on my indexer. If I remember correctly, the OpenSearch package comes with the Wazuh plugin included.

Valerio Vinci

Sep 25, 2023, 9:48:40 AM
to Wazuh | Mailing List
Hello,

Sorry for pressing, but it's very urgent.
Could you suggest any further troubleshooting steps?

Thanks

Manuel Alejandro Roldan Mella

Sep 25, 2023, 1:27:19 PM
to Wazuh | Mailing List
Hi Valerio,

Did you check if the Wazuh server is generating alerts?

Please check /var/ossec/logs/alerts/alerts.log and alerts.json
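One way to do that check from the shell (paths as given above; root privileges assumed):

```shell
# Sketch: confirm the manager is still writing alerts (paths from this thread).
sudo tail -n 3 /var/ossec/logs/alerts/alerts.json    # show the most recent alerts, if any
sudo wc -c /var/ossec/logs/alerts/alerts.json        # run twice: a growing byte count means alerts are flowing
```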

Valerio Vinci

Sep 26, 2023, 11:22:23 AM
to Wazuh | Mailing List
Hi,

Yes, the server is generating alerts, and right now they exist only on the server.
The Filebeat service is also up and running, and Filebeat's check against the indexer returns all OK.


Valerio Vinci

Sep 27, 2023, 2:29:06 PM
to Wazuh | Mailing List
Hi,

I found a warning in the indexer with the command:

grep -i -E "error|warn" /var/log/wazuh-indexer/wazuh-indexer-cluster.log

[.opendistro-anomaly-results-history-2023.02.17-1][0] no index mapper found for field: [_type] returning default postings format
[.opendistro-anomaly-results-history-2023.02.17-1][1] no index mapper found for field: [_type] returning default postings format

It seems to refer to a very old index, but could it be causing the problem?

Checking the .kibana* indices and their mappings, I can't see any problems:

{
  ".kibana_92668751_admin_1" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_2126339_xxx_1" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_92668751_admin_3" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_92668751_admin_2" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_2" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_1" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_3" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_-1822153828_xxx_1" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_-1009942076_xxx_3" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_-1009942076_xxx_1" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_-1009942076_xxx_2" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_-1822153828_xxx_3" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_-1822153828_xxx_2" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_2126339_xxx_3" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_2126339_xxx_2" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_131683293_xxx_1" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_131683293_xxx_2" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  },
  ".kibana_131683293_xxx_3" : {
    "mappings" : {
      "type" : {
        "full_name" : "type",
        "mapping" : {
          "type" : {
            "type" : "keyword"
          }
        }
      }
    }
  }
}

Valerio Vinci

Sep 29, 2023, 3:34:17 AM
to Wazuh | Mailing List
Hello,

Any suggestions on how we can solve this issue?
What can we check?

Valerio Vinci

Oct 2, 2023, 10:55:23 AM
to Wazuh | Mailing List
Hello,

The problem was the number of shards.
In my environment I have 2 indexers and I need to preserve logs for years.
A single shard is about 20-30 MB, while the OpenSearch best practice is a shard size in the 20-40 GB range.

So, is it possible to create indices by week or by month? With daily indices I would need a huge number of indices.
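Weekly or monthly rotation is typically controlled by the index rotation logic in the Filebeat ingest pipeline rather than by shard settings. As a hedged pointer (the pipeline name pattern below is an assumption; check what your deployment actually registers), one can list the installed pipelines and inspect the alerts pipeline for its date rounding:

```shell
# Sketch: inspect the installed ingest pipelines for the date_index_name
# processor; its date_rounding value controls daily ("d"), weekly ("w"),
# or monthly ("M") index rotation. Pipeline names vary by version, so the
# wildcard pattern below is an assumption.
curl -k -u <user>:<password> "https://localhost:9200/_ingest/pipeline?pretty"
curl -k -u <user>:<password> \
  "https://localhost:9200/_ingest/pipeline/filebeat-*-wazuh-alerts-pipeline?pretty"
```

Before changing any rounding value, verify the processor and its supported values against the documentation for your OpenSearch version.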

For now I've applied the following tuning:
- increased the maximum number of shards to 1500
- reduced "index.number_of_shards" from 3 to 2 (if I understood correctly, it should equal the number of indexers holding the replicas in the cluster; in my case, 2 indexers)
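For context on the two settings: index.number_of_shards is the number of primary shards per index, while index.number_of_replicas is the number of extra copies of each primary. A back-of-the-envelope shard count, with illustrative numbers that are not taken from this cluster:

```shell
# Illustrative shard arithmetic (example numbers, not this cluster's):
# total shards = indices * primary_shards_per_index * (1 + replicas)
indices=365        # one daily index kept for a year
primaries=2        # index.number_of_shards
replicas=1         # index.number_of_replicas (one copy on the second node)
echo $(( indices * primaries * (1 + replicas) ))    # prints 1460
```

This is why daily indices with small shards inflate the shard count so quickly: fewer, larger indices (weekly or monthly) reduce the total for the same data.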

Is that correct?
I can't delete the old indices...

Is there any way to optimize?

Manuel Alejandro Roldan Mella

Oct 4, 2023, 12:50:34 AM
to Wazuh | Mailing List
Hi,

Please read this official article about the optimization of shards and replicas.

I hope this helps.

Manuel Alejandro Roldan Mella

Oct 4, 2023, 12:51:39 AM
to Wazuh | Mailing List