Hi Team,

I have set up a Wazuh agent on my Mac Studio. All the log files listed in ossec.conf are being forwarded to the Wazuh server and are visible in the dashboard, except for one file. Logs from this file have not appeared in the dashboard for the past 28 hours; a single entry somehow showed up about 5 hours ago. Although the file keeps receiving new entries, they are not being ingested. All other log files in the configuration have no issues. I have tried changing the log format, but still no success.

The affected file is /Users/galaxy/galaxy_builds/galaxy_access_log/galaxy_access_log_syncer/log/app.log.

Any help is appreciated. The ossec.log file on the wazuh-agent also has no error messages related to this.

bash-3.2# tail -f /Library/Ossec/logs/ossec.log
2025/10/23 11:50:10 rootcheck: INFO: Ending rootcheck scan.
2025/10/23 12:48:04 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2025/10/23 12:49:37 wazuh-modulesd:syscollector: INFO: Evaluation finished.
2025/10/23 13:22:03 wazuh-logcollector: ERROR: Large message size from file '/Users/galaxy/galaxy_builds/galaxy_mysql/log/app.log' (length = 65279): '2025/10/23 13:22:02 Monitoring Servers: [{Service:mysql MasterID'...
2025/10/23 13:49:38 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2025/10/23 13:51:16 wazuh-modulesd:syscollector: INFO: Evaluation finished.
2025/10/23 14:51:17 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2025/10/23 14:54:57 wazuh-modulesd:syscollector: INFO: Evaluation finished.
2025/10/23 15:54:58 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2025/10/23 15:56:51 wazuh-modulesd:syscollector: INFO: Evaluation finished.
{"level":"info","ip":"10.10.3.102","cluster":"xxx","port":"22","service":"nginx","jump_1_ip":"xx.x.xx.xx","jump_1_port":"22","jump_1_username":"ec2-user","isttime":"2025-10-24 10:10:00.466964 +0530 IST","servertime":"2025-10-24T10:10:05+05:30","message":"status successfully updated to failed"}'
Here I can see two main problems. The first is this logcollector error:
2025/10/23 13:22:03 wazuh-logcollector: ERROR: Large message size from file '/Users/galaxy/galaxy_builds/galaxy_mysql/log/app.log'
The only possible workarounds at the moment are to change the log at the source or to break the entries into smaller messages.
This has been discussed extensively: the message size is capped at a static value because of agent-manager coordination, and the limit was already increased from 6 kB to 64 kB to reduce these kinds of issues.
The second is the mapping error you mentioned you have seen in Filebeat.
Check whether you can see the alerts in the alerts.json file, and share the output of the following commands.
cat /var/ossec/logs/alerts/alerts.json | grep -i -E "galaxy_mysql"
If you can see the alerts in the alerts.json file but not in the dashboard, share a sample alert and the output of this Filebeat log check:
cat /var/log/filebeat/filebeat/* | grep -i -E "error|warn"
For a mapping conflict we can take different approaches, such as creating custom decoders or changing the Filebeat pipeline file; the right approach depends on the error and your log format.
Please replace the sensitive values with dummy values.
Looking forward to your update.
One quick and easy fix is to make this port field unindexable, so that it does not create any conflict, by configuring the mapping template:
/etc/filebeat/wazuh-template.json
https://www.elastic.co/guide/en/elasticsearch/reference/6.2/enabled.html
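As an illustration, a minimal sketch of what that could look like inside the data properties of /etc/filebeat/wazuh-template.json (this assumes data.port is kept as a keyword; "index": false and "doc_values": false make it unsearchable and unaggregatable while it stays in _source, whereas the "enabled" parameter from the linked page only applies to object fields):

"data": {
  "properties": {
    "port": {
      "type": "keyword",
      "index": false,
      "doc_values": false
    }
  }
}

After editing the template you would need to reload it (for example with filebeat setup --index-management), and the change only takes effect on newly created indices.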
But I believe this is an important field, and you would like to have that in your alerts.
So we can change the name of the field before indexing by configuring the pipeline (/usr/share/filebeat/module/wazuh/alerts/ingest/pipeline.json)
https://www.elastic.co/docs/reference/beats/filebeat/rename-fields
Can you share a sample alert log from the alerts.json file?
cat /var/ossec/logs/alerts/alerts.json | grep -i -E "galaxy_mysql"
Please replace the sensitive values with dummy values.
I will replicate this on my end and provide you with the best possible solution.
{"level":"info","ip":"10.10.3.102","cluster":"xxx","port":"22","service":"nginx","jump_1_ip":"xx.x.xx.xx","jump_1_port":"22","jump_1_username":"ec2-user","isttime":"2025-10-24 10:10:00.466964 +0530 IST","servertime":"2025-10-24T10:10:05+05:30","message":"status successfully updated to failed"}'
I was able to reproduce the Filebeat mapping error using your log.
You can resolve it by renaming the data.port and data.service fields in the Filebeat ingest pipeline.
Here is a step-by-step guide.
Open
/usr/share/filebeat/module/wazuh/archives/ingest/pipeline.json
and find the section where the "date_index_name" processor is followed by the "remove": "message" processor. It currently looks like this:
{
  "date_index_name": {
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": false
  }
},
{ "remove": { "field": "message", "ignore_missing": true, "ignore_failure": true } },
Add the new processors between them, so that the section becomes the following (the rename, convert, and remove processors for data.port and data.service are the additions):
{
  "date_index_name": {
    "field": "timestamp",
    "date_rounding": "d",
    "index_name_prefix": "{{fields.index_prefix}}",
    "index_name_format": "yyyy.MM.dd",
    "ignore_failure": false
  }
},
{
  "rename": {
    "field": "data.port",
    "target_field": "data.port_number",
    "ignore_missing": true,
    "if": "def s = ctx?.data?.service; def match = s != null && ((s instanceof List && (s.contains('nginx') || s.contains('postgresql'))) || (s instanceof String && (s == 'nginx' || s == 'postgresql'))); def v = ctx?.data?.port; return match && v != null && !(v instanceof Map);"
  }
},
{
  "convert": {
    "field": "data.port_number",
    "type": "long",
    "ignore_missing": true
  }
},
{
  "remove": {
    "field": "data.port",
    "ignore_missing": true,
    "if": "def v = ctx?.data?.port; return v != null && !(v instanceof Map);"
  }
},
{
  "rename": {
    "field": "data.service",
    "target_field": "data.service_name",
    "ignore_missing": true,
    "if": "def s = ctx?.data?.service; def match = s != null && ((s instanceof List && (s.contains('nginx') || s.contains('postgresql'))) || (s instanceof String && (s == 'nginx' || s == 'postgresql'))); return match && !(s instanceof Map);"
  }
},
{
  "remove": {
    "field": "data.service",
    "ignore_missing": true,
    "if": "def s = ctx?.data?.service; def match = s != null && ((s instanceof List && (s.contains('nginx') || s.contains('postgresql'))) || (s instanceof String && (s == 'nginx' || s == 'postgresql'))); return match && !(s instanceof Map);"
  }
},
{ "remove": { "field": "message", "ignore_missing": true, "ignore_failure": true } },
This configuration renames data.port and data.service whenever the log has a decoded field named service with the value nginx or postgresql.
Now load the configuration and restart the filebeat service.
filebeat setup --pipelines
systemctl restart filebeat
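After reloading, one way to confirm that the new processors are in place is to query the pipeline from the Dev Tools console (the name below assumes Filebeat 7.10.2 and the archives pipeline; adjust it to your version):

GET _ingest/pipeline/filebeat-7.10.2-wazuh-archives-pipeline

The response should include the rename processors for data.port and data.service.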
If you also face a similar issue with alerts, you can make the same changes for the alert indices in /usr/share/filebeat/module/wazuh/alerts/ingest/pipeline.json.
I have tested this and it worked for me. Check the screenshot for reference.
Let me know if this works for you.

2025-10-30T12:49:11.067+0530 WARN [elasticsearch] elasticsearch/client.go:408 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc238e23b8f847f0b, ext:110371853512, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"}, Fields:{"agent":{"ephemeral_id":"7cd3c931-afee-4299-b1ab-d4ec2653029a","hostname":"wazuh","id":"54864f23-ed8d-4c5b-97f5-7560d04fb281","name":"wazuh","type":"filebeat","version":"7.10.2"},"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.archives","module":"wazuh"},"fields":{"index_prefix":"wazuh-archives-4.x-"},"fileset":{"name":"archives"},"host":{"name":"wazuh"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/archives/archives.json"},"offset":9595355780},"message":"{\"timestamp\":\"2025-10-30T12:49:09.515+0530\",\"rule\":{\"level\":2,\"description\":\"Unknown problem somewhere in the system.\",\"id\":\"1002\",\"firedtimes\":15752,\"mail\":false,\"groups\":[\"syslog\",\"errors\"],\"gpg13\":[\"4.3\"]},\"agent\":{\"id\":\"001\",\"name\":\"mafirees-Mac-Studio.local\",\"ip\":\"192.168.71.204\"},\"manager\":{\"name\":\"wazuh\"},\"id\":\"1761808749.609184463\",\"full_log\":\"{\\\"level\\\":\\\"info\\\",\\\"ip\\\":\\\"10.10.3.102\\\",\\\"cluster\\\":\\\"Clovia\\\",\\\"port\\\":\\\"22\\\",\\\"service\\\":\\\"nginx\\\",\\\"jump_1_ip\\\":\\\"xx.1.177.xx\\\",\\\"jump_1_port\\\":\\\"22\\\",\\\"jump_1_username\\\":\\\"ec2-user\\\",\\\"isttime\\\":\\\"2025-10-30 12:49:03.348022 +0530 IST\\\",\\\"servertime\\\":\\\"2025-10-30T12:49:09+05:30\\\",\\\"message\\\":\\\"status successfully updated to failed\\\"}\",\"decoder\":{\"name\":\"json\"},\"data\":{\"level\":\"info\",\"ip\":\"10.10.3.102\",\"cluster\":\"Clovia\",\"port\":\"22\",\"service\":\"nginx\",\"jump_1_ip\":\"x.x.x.x\",\"jump_1_port\":\"22\",\"jump_1_username\":\"ec2-user\",\"isttime\":\"2025-10-30 12:49:03.348022 +0530 IST\",\"servertime\":\"2025-10-30T12:49:09+05:30\",\"message\":\"status successfully updated to failed\"},\"location\":\"/Users/galaxy/galaxy_builds/galaxy_access_log/galaxy_access_log_syncer/log/app.log\"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::14450474-66306", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc00077a1a0), Source:"/var/ossec/logs/archives/archives.json", Offset:9595356907, Timestamp:time.Time{wall:0xc238e21ff6b5e586, ext:29402459, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0xdc7f2a, Device:0x10302}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [data.isttime] of type [date] in document with id '2xP8M5oBj46dJ1KsGcPQ'. Preview of field's value: '2025-10-30 12:49:03.348022 +0530 IST'","caused_by":{"type":"illegal_argument_exception","reason":"failed to parse date field [2025-10-30 12:49:03.348022 +0530 IST] with format [strict_date_optional_time||epoch_millis]","caused_by":{"type":"date_time_parse_exception","reason":"Failed to parse with all enclosed parsers"}}}

I have reviewed all the information you have shared. I do not see any reference to this field (data.isttime) being defined as a date in your logs.
Can you run the command below and check whether there are any custom pipelines where it is defined?
It will show all ingest pipeline configurations.
From the web interface, go to Indexer Management > Dev Tools.
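Presumably the command here is the ingest pipeline listing API; from the Dev Tools console it would be:

GET _ingest/pipeline

In the output, look for any custom pipeline that references isttime or applies a date type or format to it, and share what you find.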