Optimisation helps


Romain Hennebois

Sep 5, 2025, 8:50:42 AM
to Wazuh | Mailing List
Hi guys, it's been a long time since I wrote here.

I need help from someone experienced, or just someone with a big brain.

I installed Wazuh in production a few months ago. Everything is working fine, but I need help with optimisation.

I am a work-study student, and I have run out of solutions.

First problem: the Filebeat log separation with the pipeline.json file. We set up this configuration to apply different retention times to different logs. It actually works (my logs are separated), but I have a duplicate problem: for example, my proxy's logs end up in the proxy index and, at the same time, in the default index. This will be a problem in the future, and I would like to find a way to solve it.

Second problem: I configured Wazuh's official Ansible repository to push the agents to all our servers. My problem is that it doesn't discover log files; I had to list them manually in the main.yml file (/roles/wazuh/ansible-wazuh-agent/defaults/main.yml). I don't think that's the proper way to do it, and of course I have to find these files by hand by connecting to every server and looking for log files.

What should I do? In your experience, what would be the best way?
Maybe I will have more questions in the future, so I will come back with another post later.
If you need more information, I'll be able to answer from Monday.

Thanks for any help and answers. You would be my saviour.

esteban...@wazuh.com

Sep 5, 2025, 12:45:01 PM
to Wazuh | Mailing List
Hello Romain,

This is not a standard way to do it, but what I would really suggest is to define the configuration in Filebeat so that the indexer gets that information; in pipeline.json you can define where each log goes.

  1.  Modify the pipeline /usr/share/filebeat/module/wazuh/alerts/ingest/pipeline.json and add the following:

    {
      "date_index_name": {
        "if": "ctx.agent?.groups == 'groupa'",
        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}groupa-",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": true
      }
    },
    {
      "date_index_name": {
        "if": "ctx.agent?.groups != 'groupa'",
        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": true
      }
    },

  2. Then reload the pipelines: filebeat setup --pipelines


In the above example, the index wazuh-alerts-*-groupa-* will be created when the agent's group is groupa; otherwise everything goes under wazuh-alerts-*.

Let me know if this helps you out.

Esteban Fonseca - Wazuh Engineer.

Romain Hennebois

Sep 8, 2025, 10:05:27 AM
to Wazuh | Mailing List
Hello, 

I did this:

{
  "date_index_name": {
    "if": "ctx?.rule?.description == 'Proxy: Howlite event'",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}proxy-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},
{
  "date_index_name": {
    "if": "ctx?.location == '/var/log/maillog' || ctx?.full_log?.contains('postfix')",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}mail-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},
{
  "date_index_name": {
    "if": "ctx?.location == '/var/log/auth.log'",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}auth-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},
{
  "date_index_name": {
    "if": "ctx?.location?.startsWith('/var/log/nginx/') || ctx?.location?.startsWith('/var/log/apache2/')",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}webserver-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},
{
  "date_index_name": {
    "if": "['vm-zosma.dmzappli.lan','vm-denebola.dmzappli.lan'].contains(ctx?.agent?.name) && (ctx?.location == '/var/log/slapd-ltb/slapd.log' || ctx?.full_log?.contains('slapd') || ctx?.full_log?.contains('ldap'))",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}ldap-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},
{
  "date_index_name": {
    "if": "ctx?.location == '/var/log/syslog' && !(ctx?.rule?.description == 'Proxy: Howlite event' || ctx?.full_log?.contains('postfix') || ctx?.full_log?.contains('slapd') || ctx?.full_log?.contains('ldap'))",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}syslog-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},
{
  "date_index_name": {
    "if": "!(ctx?.rule?.description == 'Proxy: Howlite event' || ctx?.location == '/var/log/maillog' || ctx?.full_log?.contains('postfix') || ctx?.location == '/var/log/auth.log' || (ctx?.location?.startsWith('/var/log/nginx/') || ctx?.location?.startsWith('/var/log/apache2/')) || (['vm-zosma.dmzappli.lan','vm-denebola.dmzappli.lan'].contains(ctx?.agent?.name) && (ctx?.location == '/var/log/slapd-ltb/slapd.log' || ctx?.full_log?.contains('slapd') || ctx?.full_log?.contains('ldap'))) || (ctx?.location == '/var/log/syslog' && !(ctx?.rule?.description == 'Proxy: Howlite event' || ctx?.full_log?.contains('postfix') || ctx?.full_log?.contains('slapd') || ctx?.full_log?.contains('ldap'))))",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}default-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},

But it still shows the logs in both indexes. Have I misconfigured something?

esteban...@wazuh.com

Sep 9, 2025, 2:35:40 AM
to Wazuh | Mailing List

The problem is that you have multiple date_index_name processors, and each one that runs overwrites the result of the previous one.

Instead of defining 7 separate date_index_name processors, combine them into a single script processor that sets a custom field like log_index_type, and use only one final date_index_name processor that uses this field to set the index name.

That should make it easier to gather everything in one place. An if/else chain in a script should fix the problem; having 7 different processors will not be optimal in the end.
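For example, a minimal sketch with just two of your conditions (log_index_type is the custom field name I am assuming here; extend the if/else chain with the rest of your rules):

```json
{
  "script": {
    "lang": "painless",
    "ignore_failure": true,
    "source": "if (ctx?.location == '/var/log/auth.log') { ctx.log_index_type = 'auth-'; } else if (ctx?.location != null && ctx.location.startsWith('/var/log/nginx/')) { ctx.log_index_type = 'webserver-'; } else { ctx.log_index_type = ''; }"
  }
},
{
  "date_index_name": {
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}{{log_index_type}}",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
}
```

Because only one date_index_name processor runs, nothing can overwrite it; the prefix just varies with the field the script set.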

Esteban Fonseca

Romain Hennebois

Sep 9, 2025, 2:37:47 AM
to Wazuh | Mailing List
Hi,

Thanks for taking the time to help me.
I'll try this and see if it works!

Romain Hennebois

Sep 9, 2025, 5:11:45 AM
to Wazuh | Mailing List
Sorry to bother you again, but I tried this:


{
  "date_index_name": {
    "if": "ctx?.location == '/var/log/auth.log'",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}auth-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},
{
  "date_index_name": {
    "if": "ctx?.location != '/var/log/auth.log'",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},

But I don't understand why it still goes into the default index and not the auth index. Sorry, I'm not familiar with JSON or Filebeat.

esteban...@wazuh.com

Sep 9, 2025, 10:53:34 AM
to Wazuh | Mailing List
Hey, no problem, we are here to help :)

Most likely the events from /var/log/auth.log are not matching the condition and fall through to the default index, because the location field is missing or its value is different. You can check that the location on your auth.log events really matches the literal in the condition.

Try with a single date_index_name first, then add the other one. From what I am seeing, either the location field does not exist on those events, or one processor is overwriting the other.
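One way to check is the indexer's simulate API: POST a body like the one below to _ingest/pipeline/_simulate (the sample document here is illustrative; paste the _source of a real event from your index instead):

```json
{
  "pipeline": {
    "processors": [
      {
        "date_index_name": {
          "if": "ctx?.location == '/var/log/auth.log'",
          "field": "timestamp",
          "date_rounding": "d",
          "index_name_prefix": "wazuh-alerts-4.x-auth-",
          "index_name_format": "yyyy.MM.dd"
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "timestamp": "2025-09-09T10:00:00.000+0000",
        "location": "/var/log/auth.log"
      }
    }
  ]
}
```

If the condition matches, the returned document's _index contains the auth- prefix; if it does not, compare the location value in your real events against the string in the condition.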

Esteban Fonseca

Romain Hennebois

Oct 8, 2025, 5:24:17 AM
to Wazuh | Mailing List
Hi,

Sorry to bother you again, but I don't know anything about JSON. Could someone help me combine my pipeline.json file into a single script processor that sets a custom field, such as log_index_type, and uses a final date_index_name processor to set the index name?

Romain Hennebois

Oct 8, 2025, 5:49:03 AM
to Wazuh | Mailing List
However, when I look at the log in the wazuh-alerts-* index, I can see that the same log is also in the correct index (i.e. wazuh-alerts-4.x-auth-2025.10.08). Is this a real problem, or have I misunderstood how it works? Or is it indeed a duplicate that will cause trouble in the future?