Pushing Filebeat output to separate indices


Srijan Nandi

Jun 21, 2022, 11:58:09 PM
to Wazuh mailing list
Hi,

I am trying to push logs from different agent groups to separate indices using Filebeat. I have created two groups, e.g. group1 and group2.

I want agents enrolled under group1 to send logs to wazuh-alerts-4.x-group1-* index and agents from group2 to send to wazuh-alerts-4.x-group2-*.

Below is my filebeat.yml file:

Screenshot 2022-06-22 at 9.18.03 AM.png

My wazuh-template.json mentions the new groups and has been pushed to OpenSearch.

Screenshot 2022-06-22 at 9.24.12 AM.png

However, all alerts are going to the default index (wazuh-alerts-4.x-*). Can anyone point out where I am going wrong?

All help will be highly appreciated.

Thanks and Regards,
-=Srijan Nandi

elw...@wazuh.com

Jun 22, 2022, 3:19:21 AM
to Wazuh mailing list
Hello Srijan,

Index creation is handled by the ingest pipeline in the Wazuh module of Filebeat. To create separate indices, the process is as follows:

  1.  Modify the pipeline /usr/share/filebeat/module/wazuh/alerts/ingest/pipeline.json and add the following processors:

    {
      "date_index_name": {
        "if": "ctx.agent?.groups == 'groupa'",
        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}groupa-",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": true
      }
    },
    {
      "date_index_name": {
        "if": "ctx.agent?.groups != 'groupa'",
        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": true
      }
    },


  2. Then reload the pipelines: filebeat setup --pipelines


In the above example, an index with the wazuh-alerts-*-groupa- prefix will be created when the agent's group is groupa; otherwise everything will go under wazuh-alerts-*.
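To illustrate, the two processors behave like a simple if/else on the agent group. Here is a quick Python sketch of the resulting routing (illustrative only; Filebeat evaluates the actual Painless conditions):

```python
from datetime import date

def route_index(agent_group, event_date, prefix="wazuh-alerts-4.x-"):
    """Mimic the two date_index_name processors above: events from
    'groupa' get their own daily index, everything else falls through
    to the default daily index."""
    day = event_date.strftime("%Y.%m.%d")  # matches index_name_format yyyy.MM.dd
    if agent_group == "groupa":
        return f"{prefix}groupa-{day}"
    return f"{prefix}{day}"

print(route_index("groupa", date(2022, 6, 22)))   # wazuh-alerts-4.x-groupa-2022.06.22
print(route_index("default", date(2022, 6, 22)))  # wazuh-alerts-4.x-2022.06.22
```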

I hope this helps.

Regards,
Wali

Srijan Nandi

Jun 22, 2022, 5:50:16 AM
to Wazuh mailing list
Hello Wali,

Thank you for pointing me in the right direction.

I have changed the pipeline.json file to the following:

     {
        "date_index_name" : {

          "field" : "timestamp",
          "date_rounding" : "d",
          "index_name_prefix" : "{{fields.index_prefix}}",
          "index_name_format" : "yyyy.MM.dd",
          "ignore_failure" : false
        }
      },

      {
        "date_index_name" : {
          "if" : "ctx.agent?.groups == 'groupa",
          "field" : "timestamp",
          "date_rounding" : "d",
          "index_name_prefix" : "{{fields.index_prefix}}groupa-",
          "index_name_format" : "yyyy.MM.dd",
          "ignore_failure" : true
        }
      },
      {
        "date_index_name" : {
          "if" : "ctx.agent?.groups != 'groupa'",
          "field" : "timestamp",
          "date_rounding" : "d",
          "index_name_prefix" : "{{fields.index_prefix}}",
          "index_name_format" : "yyyy.MM.dd",
          "ignore_failure" : true
        }
      },

I also removed the following line from the pipeline.json file:
     {
        "date_index_name" : {

          "field" : "timestamp",
          "date_rounding" : "d",
          "index_name_prefix" : "{{fields.index_prefix}}",
          "index_name_format" : "yyyy.MM.dd",
          "ignore_failure" : false
        }
      },

However, the alerts are still going to the wazuh-alerts-4.x-* index.

My agent conf looks like this:

  <agent_config>
    <!-- Shared agent configuration here -->
    <labels>
      <label key="group">groupa</label>
    </labels>
  </agent_config>

Thanks and Regards,
-=Srijan Nandi

Srijan Nandi

Jun 22, 2022, 8:12:27 AM
to Wazuh mailing list
I also tried deleting the Wazuh alerts pipeline before restarting Filebeat.

DELETE _ingest/pipeline/filebeat-7.10.2-wazuh-alerts-pipeline

It still does not work. The index is getting created, but the alerts still go to the default index wazuh-alerts-4.x-*.

Thanks and Regards,
-=Srijan Nandi

elw...@wazuh.com

Jun 23, 2022, 2:32:01 AM
to Wazuh mailing list
Hello Srijan,

The full pipeline.json would be as below:

{
  "description": "Wazuh alerts pipeline",
  "processors": [
    { "json" : { "field" : "message", "add_to_root": true } },
    {
      "geoip": {
        "field": "data.srcip",
        "target_field": "GeoLocation",
        "properties": ["city_name", "country_name", "region_name", "location"],
        "ignore_missing": true,
        "ignore_failure": true
      }
    },
    {
      "geoip": {
        "field": "data.win.eventdata.ipAddress",
        "target_field": "GeoLocation",
        "properties": ["city_name", "country_name", "region_name", "location"],
        "ignore_missing": true,
        "ignore_failure": true
      }
    },
    {
      "geoip": {
        "field": "data.aws.sourceIPAddress",
        "target_field": "GeoLocation",
        "properties": ["city_name", "country_name", "region_name", "location"],
        "ignore_missing": true,
        "ignore_failure": true
      }
    },
    {
      "geoip": {
        "field": "data.gcp.jsonPayload.sourceIP",
        "target_field": "GeoLocation",
        "properties": ["city_name", "country_name", "region_name", "location"],
        "ignore_missing": true,
        "ignore_failure": true
      }
    },
    {
      "geoip": {
        "field": "data.office365.ClientIP",
        "target_field": "GeoLocation",
        "properties": ["city_name", "country_name", "region_name", "location"],
        "ignore_missing": true,
        "ignore_failure": true
      }
    },
    {
      "date": {
        "field": "timestamp",
        "target_field": "@timestamp",
        "formats": ["ISO8601"],
        "ignore_failure": false
      }
    },
    {
        "date_index_name" : {
          "if" : "ctx.agent.labels?.group == 'groupa' ",

          "field" : "timestamp",
          "date_rounding" : "d",
          "index_name_prefix" : "{{fields.index_prefix}}groupa-",
          "index_name_format" : "yyyy.MM.dd",
          "ignore_failure" : true
        }
      },
      {
        "date_index_name" : {
          "if" : "ctx.agent.labels?.group != 'groupa' '",

          "field" : "timestamp",
          "date_rounding" : "d",
          "index_name_prefix" : "{{fields.index_prefix}}",
          "index_name_format" : "yyyy.MM.dd",
          "ignore_failure" : true
        }
    },
    { "remove": { "field": "message", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "ecs", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "beat", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "input_type", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "tags", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "count", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "@version", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "log", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "offset", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "type", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "host", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "fields", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "event", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "fileset", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "service", "ignore_missing": true, "ignore_failure": true } }
  ],
  "on_failure" : [{
    "drop" : { }
  }]
}
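Before reloading, you can dry-run the conditions with the _simulate API (Dev Tools syntax; the sample document, timestamp and label below are illustrative, and the pipeline name should match whatever GET _ingest/pipeline reports on your cluster):

```
POST _ingest/pipeline/filebeat-7.10.2-wazuh-alerts-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "{\"timestamp\": \"2022-06-23T10:00:00.000+0000\", \"agent\": {\"id\": \"001\", \"labels\": {\"group\": \"groupa\"}}}",
        "fields": { "index_prefix": "wazuh-alerts-4.x-" }
      }
    }
  ]
}
```

The response shows the _index each sample document would be routed to, so you can check the conditions without sending real alerts.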


I hope this helps.

Regards,
Wali

Srijan Nandi

Jun 23, 2022, 3:21:41 AM
to Wazuh mailing list
Hello Wali,

Here is what I have:

        "if": "ctx.agent?.groups == 'groupa'",
        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}groupa-",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": true
      }
    },
    {
      "date_index_name": {
        "if": "ctx.agent?.groups != 'groupa'",
        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": true
      }
    },
    {
      "date_index_name": {
        "if": "ctx.agent?.groups == 'groupb'",

        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}groupb-",

        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": true
      }
    },
    {
      "date_index_name": {
        "if": "ctx.agent?.groups != 'groupb'",

        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": true
      }
    },
    { "remove": { "field": "message", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "ecs", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "beat", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "input_type", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "tags", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "count", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "@version", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "log", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "offset", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "type", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "host", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "fields", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "event", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "fileset", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "service", "ignore_missing": true, "ignore_failure": true } }
  ],
  "on_failure" : [{
    "drop" : { }
  }]
}

Here is my filebeat.yml file.

# Wazuh - Filebeat configuration file
output.elasticsearch:
  hosts: ["54.158.30.64:9200"]
  protocol: https
  username: admin
  password: WcGDblCsfEWY1GpSNNztaEtwND8BRbp0
  ssl.certificate_authorities:
  - /etc/filebeat/certs/root-ca.pem
  ssl.certificate: "/etc/filebeat/certs/filebeat.pem"
  ssl.key: "/etc/filebeat/certs/filebeat-key.pem"
    #  index: "wazuh-alerts-4.x-*"
    #indices:
    #- index: "wazuh-alerts-4.x-"
    #  when.contains:
    #    agent.name: "wazuh-1"
    #- index: "wazuh-alerts-4.x-groupa-"
    #  when.contains:
    #    agent.labels.group: "groupa"
    #- index: "wazuh-alerts-4.x-groupb-"
    #  when.contains:
    #    agent.labels.group: "groupb"

setup.template.name: 'wazuh'
setup.template.pattern: 'wazuh-alerts-4.x-*'
setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.json.name: 'wazuh'
setup.ilm.overwrite: true
setup.ilm.enabled: false

filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: false
        #input:
        #fields:
        #  index_prefix: "wazuh-alerts-4.x-"
        #input:
        #fields:
        #  index_prefix: "wazuh-alerts-4.x-groupa-"
        #input:
        #fields:
        #  index_prefix: "wazuh-alerts-4.x-groupb-"


Thanks and Regards,
-=Srijan Nandi

elw...@wazuh.com

Jun 24, 2022, 2:28:11 AM
to Wazuh mailing list
Hello Srijan,

The labels field in the pipeline should be accessed as ctx.agent?.labels?.group, and the relevant part of the file would be as follows:


          "if" : "ctx.agent?.labels?.group == 'groupa' ",


          "field" : "timestamp",
          "date_rounding" : "d",
          "index_name_prefix" : "{{fields.index_prefix}}groupa-",
          "index_name_format" : "yyyy.MM.dd",
          "ignore_failure" : true
        }
      },
    {
        "date_index_name" : {
          "if" : "ctx.agent?.labels?.group == 'groupb' ",


          "field" : "timestamp",
          "date_rounding" : "d",
          "index_name_prefix" : "{{fields.index_prefix}}groupb-",
          "index_name_format" : "yyyy.MM.dd",
          "ignore_failure" : true
        }
      },
      {
        "date_index_name" : {
          "if": "ctx.agent?.labels?.group != 'groupa' && ctx.agent?.labels?.group == 'groupb'",

          "field" : "timestamp",
          "date_rounding" : "d",
          "index_name_prefix" : "{{fields.index_prefix}}",
          "index_name_format" : "yyyy.MM.dd",
          "ignore_failure" : true
        }
    },
    { "remove": { "field": "message", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "ecs", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "beat", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "input_type", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "tags", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "count", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "@version", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "log", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "offset", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "type", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "host", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "fields", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "event", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "fileset", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "service", "ignore_missing": true, "ignore_failure": true } }
  ],
  "on_failure" : [{
    "drop" : { }
  }]
}
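The intended routing is three-way: groupa, groupb, and a default for everything else, so the last condition must exclude both groups. Sketched in Python for clarity (illustrative only; the pipeline evaluates the Painless conditions):

```python
from datetime import date

def route_by_label(group, event_date, prefix="wazuh-alerts-4.x-"):
    """Mirror the three date_index_name processors keyed on agent.labels.group."""
    day = event_date.strftime("%Y.%m.%d")
    if group == "groupa":
        return f"{prefix}groupa-{day}"
    if group == "groupb":
        return f"{prefix}groupb-{day}"
    return f"{prefix}{day}"  # default index for unlabeled agents or other groups

print(route_by_label("groupb", date(2022, 6, 24)))  # wazuh-alerts-4.x-groupb-2022.06.24
print(route_by_label(None, date(2022, 6, 24)))      # wazuh-alerts-4.x-2022.06.24
```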



Make sure to reload the pipeline after adding it.


Regards,
Wali

Srijan Nandi

Jun 24, 2022, 4:12:34 AM
to Wazuh mailing list
Hello Wali,

I did exactly as you suggested. However, it does not create both new indices. At any given time it creates just one index apart from the default one, either groupa or groupb, seemingly at random.

I also deleted the indexes for groupa and then pushed a new wazuh-template.json using the following command:
curl -k -u admin:XXXXXXXX -XPUT 'https://XX:XX:XX:XX:9200/_template/wazuh' -H 'Content-Type: application/json' -d @wazuh-template.json

Then I restarted the wazuh-indexer, wazuh-manager, wazuh-dashboard and filebeat, in that order.

Still no luck. It never creates both indices, groupa and groupb.

The other thing to note: when I manually created an index wazuh-alerts-4.x-groupq-*, I could see logs going to both the default index wazuh-alerts-4.x-* and wazuh-alerts-4.x-groupq-*, while the _index field clearly shows wazuh-alerts-4.x-groupa-2022.06.24.


I am attaching both the wazuh.yml file as well as the wazuh-template.json file.



Thanks and Regards,
-=Srijan Nandi
wazuh-template.json.txt
wazuh.yml.txt

elw...@wazuh.com

Jun 27, 2022, 3:31:58 AM
to Wazuh mailing list
Hello Srijan,

There is no need to touch the template or the Wazuh YAML, only the pipeline, as mentioned previously. I have just tested this quickly and it is working as expected. I am recapping the full process below:

1 - I am assuming that you have defined a label named group and you have alerts with that field:

image (138).png


2 - Modify the pipeline /usr/share/filebeat/module/wazuh/alerts/ingest/pipeline.json:
        "ignore_failure": false
      }
    },
    { "remove": { "field": "message", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "ecs", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "beat", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "input_type", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "tags", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "count", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "@version", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "log", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "offset", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "type", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "host", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "fields", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "event", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "fileset", "ignore_missing": true, "ignore_failure": true } },
    { "remove": { "field": "service", "ignore_missing": true, "ignore_failure": true } }
  ],
  "on_failure" : [{
    "drop" : { }
  }]
}



3 - Reload the pipeline and restart Filebeat:

filebeat setup --pipelines
systemctl restart filebeat


4 - Create the index pattern:

image (139).png

image (140).png


5 - The result as expected:

image (141).png

image (142).png


Hope this helps.

Regards,
Wali

Srijan Nandi

Jun 27, 2022, 9:12:02 AM
to Wazuh mailing list
Hello Wali,

Thank you so much for your help.

It was creating the new index, but it wasn't showing up. Finally, I made a small modification to your code; the following change did the trick.

    "date_index_name" : {
       "if" : "ctx?.agent?.labels?.group == 'groupa' ",

       "field" : "timestamp",
       "date_rounding" : "d",
       "index_name_prefix" : "{{fields.index_prefix}}groupa-",
       "index_name_format" : "yyyy.MM.dd",
       "ignore_failure" : true
     }
    },
    {
      "date_index_name": {
        "if" : "ctx?.agent?.labels?.group != 'groupa' ",

        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": false 
      }
    },


Now I am able to segregate logs into different indices.
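In case it helps anyone else: the extra ctx?. makes the whole chain null-safe, so documents without an agent object or without labels evaluate the condition to null instead of throwing an error. The equivalent lookup expressed in Python (illustrative only; field names as in my alerts):

```python
def get_group(ctx):
    """Null-safe navigation, like ctx?.agent?.labels?.group in Painless:
    return None as soon as any link in the chain is missing."""
    agent = ctx.get("agent") if isinstance(ctx, dict) else None
    labels = agent.get("labels") if isinstance(agent, dict) else None
    return labels.get("group") if isinstance(labels, dict) else None

print(get_group({"agent": {"labels": {"group": "groupa"}}}))  # groupa
print(get_group({"agent": {}}))  # None, instead of an exception
```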

Thanks a ton, Wali.

Regards,
-=Srijan Nandi

Saad khattak

Aug 29, 2025, 3:00:58 AM
to Wazuh | Mailing List

I am doing the exact same thing, but with a different twist: I have multiple groups, so I want to create the indices dynamically instead of statically defining every group in filebeat.yml and the ingest pipeline.
Here is my filebeat.yml:

```yaml
output.elasticsearch:
  hosts: ["192.168.1.1:9200"]
  protocol: https
  username: 'admin'
  password: 'admin'

  ssl.certificate_authorities:
    - /etc/filebeat/certs/root-ca.pem
  ssl.certificate: "/etc/filebeat/certs/filebeat.pem"
  ssl.key: "/etc/filebeat/certs/filebeat-key.pem"

  index: "wazuh-alerts-4.x-%{[agent.labels.group]:}-%{+yyyy.MM.dd}"
  # Index configuration
  indices:
    - index: "wazuh-alerts-4.x-%{+yyyy.MM.dd}"
      when.not.has_fields: ["agent.labels.group"]


setup.template.name: 'wazuh'
setup.template.pattern: 'wazuh-alerts-4.x-*'
setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.json.name: 'wazuh'
setup.ilm.overwrite: true
setup.ilm.enabled: false

filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: false

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
logging.metrics.enabled: false
seccomp:
  default_action: allow
  syscalls:
  - action: allow
    names:
    - rseq
```

As you can see, I am creating the indices dynamically in this file. The one issue I have is: how do I express the same thing in the pipeline at /usr/share/filebeat/module/wazuh/alerts/ingest/pipeline.json?

