Optimisation help

Romain Hennebois

Sep 5, 2025, 8:50:42 AM
to Wazuh | Mailing List
Hi guys, it's been a long time since I last wrote here.

I need help from someone experienced.

I installed Wazuh in production a few months ago and everything is working fine, but I need help with optimisation.

I am a work-study student, and I have run out of solutions.

First problem: Filebeat log separation with the pipeline.json file. We set up this configuration to apply different retention periods to different logs. It actually works, my logs are separated, but I have a duplication problem. For example, my proxy's logs end up in the proxy index and, at the same time, in the default index. This will become a problem in the future, and I would like to find a way to solve it.

Second problem: I configured Wazuh's official Ansible repository to push the agents to all our servers. My problem is that it doesn't discover log files; I had to list them manually in the main.yml file (/roles/wazuh/ansible-wazuh-agent/defaults/main.yml). I don't think this is the proper way to do it, and of course I have to find these files by hand by connecting to every server and looking for log files.

What, in your experience, would be the best way to do this?
I will probably have more questions in the future, so I will come back with another post later.
If you need more information, I'll be able to answer from Monday.

Thanks for any help and answers. You would be my saviour.

esteban...@wazuh.com

Sep 5, 2025, 12:45:01 PM
to Wazuh | Mailing List
Hello Romain,

This is not a standard way to do it, but what I would suggest is to define the routing in the Filebeat pipeline so the indexer receives that information; you can then control where the logs processed by pipeline.json end up.

  1. Modify the pipeline /usr/share/filebeat/module/wazuh/alerts/ingest/pipeline.json and add the following:

    {
      "date_index_name": {
        "if": "ctx.agent?.groups == 'groupa'",
        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}groupa-",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": true
      }
    },
    {
      "date_index_name": {
        "if": "ctx.agent?.groups != 'groupa'",
        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": true
      }
    },

  2. Then reload the pipelines: filebeat setup --pipelines


In the above example, the index wazuh-alerts-*-groupa-* will be created when the agent's group is groupa; otherwise, everything will go under wazuh-alerts-*.
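For what it's worth, the way these date_index_name processors interact can be modelled with a short sketch (plain Python, not Filebeat or Painless; the wazuh-alerts-4.x prefix and the sample event are assumptions for illustration): processors run in order, and each matching date_index_name processor overwrites the event's target index, so a single document always ends up in exactly one index, the one set by the last matching processor.

```python
from datetime import datetime

# Simplified model of an ordered chain of date_index_name processors:
# each matching processor overwrites the target index, so the last match wins.
def route(event, processors):
    index = "wazuh-alerts-4.x"  # assumed default index name
    for cond, prefix in processors:
        if cond(event):
            day = datetime.fromisoformat(event["timestamp"]).strftime("%Y.%m.%d")
            index = f"{prefix}{day}"
    return index

# Conditions mirroring the two processors above (field layout assumed).
processors = [
    (lambda e: e.get("agent", {}).get("groups") == "groupa",
     "wazuh-alerts-4.x-groupa-"),
    (lambda e: e.get("agent", {}).get("groups") != "groupa",
     "wazuh-alerts-4.x-"),
]

event = {"timestamp": "2025-09-05T08:50:42", "agent": {"groups": "groupa"}}
print(route(event, processors))  # wazuh-alerts-4.x-groupa-2025.09.05
```

Note that in this model overlapping conditions do not by themselves duplicate a document; the later matching processor simply wins.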

Let me know if this helps you out.

Esteban Fonseca - Wazuh Engineer.

Romain Hennebois

10:05 AM
to Wazuh | Mailing List
Hello, 

I did this:

{
  "date_index_name": {
    "if": "ctx?.rule?.description == 'Proxy: Howlite event'",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}proxy-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},
{
  "date_index_name": {
    "if": "ctx?.location == '/var/log/maillog' || ctx?.full_log?.contains('postfix')",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}mail-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},
{
  "date_index_name": {
    "if": "ctx?.location == '/var/log/auth.log'",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}auth-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},
{
  "date_index_name": {
    "if": "ctx?.location?.startsWith('/var/log/nginx/') || ctx?.location?.startsWith('/var/log/apache2/')",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}webserver-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},
{
  "date_index_name": {
    "if": "['vm-zosma.dmzappli.lan','vm-denebola.dmzappli.lan'].contains(ctx?.agent?.name) && (ctx?.location == '/var/log/slapd-ltb/slapd.log' || ctx?.full_log?.contains('slapd') || ctx?.full_log?.contains('ldap'))",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}ldap-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},
{
  "date_index_name": {
    "if": "ctx?.location == '/var/log/syslog' && !(ctx?.rule?.description == 'Proxy: Howlite event' || ctx?.full_log?.contains('postfix') || ctx?.full_log?.contains('slapd') || ctx?.full_log?.contains('ldap'))",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}syslog-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},
{
  "date_index_name": {
    "if": "!(ctx?.rule?.description == 'Proxy: Howlite event' || ctx?.location == '/var/log/maillog' || ctx?.full_log?.contains('postfix') || ctx?.location == '/var/log/auth.log' || (ctx?.location?.startsWith('/var/log/nginx/') || ctx?.location?.startsWith('/var/log/apache2/')) || (['vm-zosma.dmzappli.lan','vm-denebola.dmzappli.lan'].contains(ctx?.agent?.name) && (ctx?.location == '/var/log/slapd-ltb/slapd.log' || ctx?.full_log?.contains('slapd') || ctx?.full_log?.contains('ldap'))) || (ctx?.location == '/var/log/syslog' && !(ctx?.rule?.description == 'Proxy: Howlite event' || ctx?.full_log?.contains('postfix') || ctx?.full_log?.contains('slapd') || ctx?.full_log?.contains('ldap'))))",
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}default-",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": true
  }
},

But it still shows the logs in both indexes. Have I misconfigured something?
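Since each matching date_index_name processor overwrites the target index, a single pass through the pipeline can only send a document to one index; seeing the same alert in two indices suggests it may be getting indexed twice, for example if the stock pipeline's original date_index_name processor or a second output is still active. One way to sanity-check the routing conditions themselves is to reproduce them locally and count how many match a given event (a plain Python sketch, not Wazuh tooling; the sample event and this simplified subset of the conditions are assumptions):

```python
# Simplified local re-implementation of two of the Painless conditions above,
# to check whether any routing conditions overlap for one sample event.
def is_proxy(e):
    return e.get("rule", {}).get("description") == "Proxy: Howlite event"

def is_default(e):
    # Catch-all: none of the specific routes matched (subset shown here).
    return not (is_proxy(e)
                or "postfix" in e.get("full_log", "")
                or e.get("location") == "/var/log/auth.log")

# Hypothetical proxy event, modelled on the fields used in the conditions.
event = {"rule": {"description": "Proxy: Howlite event"},
         "location": "/var/log/syslog",
         "full_log": "howlite ..."}

matches = [name for name, cond in [("proxy", is_proxy), ("default", is_default)]
           if cond(event)]
print(matches)  # ['proxy'] -> exactly one condition matches, so duplication
                # would not be coming from these conditions overlapping
```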