Hi,
I have some logs with fields showing the message:
“No cached mapping for this field. Refresh field list from the Dashboards management > index patterns page”
I already refreshed the field list, but the issue was not resolved.
I need to use these fields as filters in Discover, but with this warning I’m not able to.
Is there a way to fix this issue and make these fields usable as filters?
Thanks.
Hi Ricardo,
I can see you mentioned that you already refreshed the index pattern. Just to be sure, can you confirm you did it this way?
Go to Dashboards Management > Index patterns.
Select the wazuh-alerts-* index pattern.
Click the refresh icon, as in the screenshot.

Restart the indexer service.
systemctl daemon-reload
systemctl restart wazuh-indexer
In the Wazuh dashboard, click on the hamburger icon at the top left > Index Management > Dev Tools. Please run and share the output of the following command:
GET wazuh-alerts-*/_mapping/field/<affectedfield>
Where <affectedfield> is the field that is not populated with data.
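If it helps, the same check can be scripted outside Dev Tools. A minimal Python sketch, assuming the JSON shape returned by GET <index>/_mapping/field/<field>; the index name and the eventname field below are only illustrative examples:

```python
def field_type(mapping_response, field):
    """Extract the declared type of `field` from a
    GET <index>/_mapping/field/<field> response."""
    for index, body in mapping_response.items():
        entry = body.get("mappings", {}).get(field)
        if not entry:
            continue  # field has no cached mapping in this index
        # The leaf name is the last path segment, e.g. "data.eventname" -> "eventname"
        leaf = field.split(".")[-1]
        return entry["mapping"][leaf].get("type")
    return None  # unmapped in every index

# Hypothetical response, matching the shape seen in Dev Tools:
response = {
    "wazuh-alerts-4.x-2026.01.26": {
        "mappings": {
            "eventname": {
                "full_name": "eventname",
                "mapping": {"eventname": {"type": "keyword"}},
            }
        }
    }
}
print(field_type(response, "eventname"))  # -> keyword
```

A None result here corresponds to the field having no mapping at all, which matches the "No cached mapping" warning in the dashboard.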
Is this happening on your current indices as well, or only on old indices?
Let me know your findings.
This is unusual. I can see that the mapping shows keyword for this field:
"mapping": {
  "eventname": {
    "type": "keyword"
  }
}
In the Wazuh dashboard, click on the hamburger icon at the top left > Index Management > Dev Tools. Please run and check the output of the following command:
GET wazuh-alerts-*/_settings
It will show the total fields limit of the index:
"mapping": {
  "total_fields": {
    "limit": "10000"
  }
}
Go to Dashboards Management > Index patterns and select the wazuh-alerts-* index pattern.
Check how many fields you have currently on that index pattern.
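The leaf-field count can also be computed directly from the index mapping, which avoids counting by hand in the UI. A small Python sketch, assuming `mapping` holds the body returned for one index by GET wazuh-alerts-*/_mapping; the sample mapping below is hypothetical and trimmed:

```python
def count_leaf_fields(properties):
    """Recursively count leaf fields (those with a type) in a
    mappings `properties` block, including multi-fields under `fields`."""
    total = 0
    for name, spec in properties.items():
        if "properties" in spec:          # object field: recurse into children
            total += count_leaf_fields(spec["properties"])
        else:
            total += 1                    # leaf field with its own type
            # multi-fields (e.g. a keyword sub-field of text) also count
            # toward index.mapping.total_fields.limit
            total += len(spec.get("fields", {}))
    return total

# Hypothetical, trimmed mapping:
mapping = {
    "properties": {
        "timestamp": {"type": "date"},
        "data": {
            "properties": {
                "eventname": {"type": "keyword"},
                "message": {"type": "text",
                            "fields": {"raw": {"type": "keyword"}}},
            }
        },
    }
}
print(count_leaf_fields(mapping["properties"]))  # -> 4
```

Comparing this number against the limit shown in _settings tells you how close the index is to the cap.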

If you have hit the field limit, you can increase it as follows.
Index Management > Dev Tools.
PUT wazuh-alerts-*/_settings
{
"index.mapping.total_fields.limit": 20000
}
To apply the same limit to future indices:
Edit /etc/filebeat/wazuh-template.json and change total_fields.limit:
"index.mapping.total_fields.limit": 20000,
Then reload the Filebeat configuration:
filebeat setup --pipelines
filebeat setup --index-management -E output.logstash.enabled=false
Restart Filebeat:
systemctl restart filebeat
Now reindex today's index. First, create a backup of the data:
POST _reindex
{
"source": {
"index": "wazuh-alerts-4.x-2026.01.26"
},
"dest": {
"index": "wazuh-alerts-4.x-backup"
}
}
Delete your data index
DELETE /wazuh-alerts-4.x-2026.01.26
Recreate the data index from the backup
POST _reindex
{
"source": {
"index": "wazuh-alerts-4.x-backup"
},
"dest": {
"index": "wazuh-alerts-4.x-2026.01.26"
}
}
Delete the backup index
DELETE /wazuh-alerts-4.x-backup
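The four requests above always run in the same order, so they can be generated programmatically. A Python sketch that only builds the request sequence (send them with curl or any HTTP client); the backup-name convention here is my own, and you should confirm the first reindex completed with matching document counts before deleting the original index:

```python
def reindex_steps(index, backup_suffix="-backup"):
    """Build the backup -> delete -> restore -> cleanup sequence used
    above, as (method, path, body) tuples for the indexer REST API."""
    backup = index + backup_suffix  # naming convention assumed
    copy = lambda src, dst: {"source": {"index": src}, "dest": {"index": dst}}
    return [
        ("POST", "/_reindex", copy(index, backup)),   # back up the data
        ("DELETE", "/" + index, None),                # drop the original
        ("POST", "/_reindex", copy(backup, index)),   # restore with new mapping
        ("DELETE", "/" + backup, None),               # remove the backup
    ]

for method, path, body in reindex_steps("wazuh-alerts-4.x-2026.01.26"):
    print(method, path, body or "")
```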
Hi everyone,
I’ve already followed the recommended procedure, but my application is currently generating around 100,000 fields.
If I increase the value of index.mapping.total_fields.limit too much, the application becomes very slow and performance degrades.
Is there a way to control or choose which fields should be mapped, instead of mapping everything automatically?
Any best practices or suggestions to handle this scenario would be really appreciated.
Thanks