Re: Only one log file is not sending logs to the wazuh server


Subash Ponnuswamy

Oct 24, 2025, 5:12:12 AM (13 days ago)
to Wazuh | Mailing List
Attaching the dashboard screen for reference,

Screenshot from 2025-10-23 17-27-12.png



--
Regards,
SUBASH P


On Thu, Oct 23, 2025 at 4:09 PM Subash Ponnuswamy <suba...@mafiree.com> wrote:
Hi Team,

I have set up a Wazuh agent on my Mac Studio. All of the logs listed in ossec.conf are transferred to the Wazuh server and are visible in the dashboard, except for a single log file. Logs from this file have not appeared in the dashboard for the past 28 hours; one entry somehow showed up around 5 hours ago.

Although the log file is receiving new entries, they are not being ingested. All other log files listed in the configuration have no issues.

I have tried changing the log format, but still no success.

/Users/galaxy/galaxy_builds/galaxy_access_log/galaxy_access_log_syncer/log/app.log
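For context, a file like this is monitored through a localfile block in ossec.conf along these lines (a sketch; the json log_format shown is an assumption, since the exact formats tried are not quoted in the thread):

```xml
<!-- Sketch of an ossec.conf entry for the affected file; log_format json is an assumption -->
<localfile>
  <log_format>json</log_format>
  <location>/Users/galaxy/galaxy_builds/galaxy_access_log/galaxy_access_log_syncer/log/app.log</location>
</localfile>
```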

Any small help is appreciated.

The ossec.log file on the Wazuh agent also doesn't have any error messages related to this file:

bash-3.2# tail -f /Library/Ossec/logs/ossec.log
2025/10/23 11:50:10 rootcheck: INFO: Ending rootcheck scan.
2025/10/23 12:48:04 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2025/10/23 12:49:37 wazuh-modulesd:syscollector: INFO: Evaluation finished.
2025/10/23 13:22:03 wazuh-logcollector: ERROR: Large message size from file '/Users/galaxy/galaxy_builds/galaxy_mysql/log/app.log' (length = 65279): '2025/10/23 13:22:02 Monitoring Servers: [{Service:mysql MasterID'...
2025/10/23 13:49:38 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2025/10/23 13:51:16 wazuh-modulesd:syscollector: INFO: Evaluation finished.
2025/10/23 14:51:17 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2025/10/23 14:54:57 wazuh-modulesd:syscollector: INFO: Evaluation finished.
2025/10/23 15:54:58 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2025/10/23 15:56:51 wazuh-modulesd:syscollector: INFO: Evaluation finished.


image.png



--
Regards,
SUBASH P

Subash Ponnuswamy

Oct 24, 2025, 5:13:02 AM (13 days ago)
to Wazuh | Mailing List
Hi Team,
Small progress on the issue,

This is the format of the log

{"level":"info","ip":"10.10.3.102","cluster":"xxx","port":"22","service":"nginx","jump_1_ip":"xx.x.xx.xx","jump_1_port":"22","jump_1_username":"ec2-user","isttime":"2025-10-24 10:10:00.466964 +0530 IST","servertime":"2025-10-24T10:10:05+05:30","message":"status successfully updated to failed"}

In the log file, if I add an entry manually without the 'port' key, the logs are visible in the dashboard. It looks like the port key is causing the issue.

In the index mapping, the 'port' is mapped as an object.  Any workaround for this?

--
Regards,
SUBASH P

Subash Ponnuswamy

Oct 24, 2025, 5:15:07 AM (13 days ago)
to Wazuh | Mailing List

Md. Nazmur Sakib

Oct 24, 2025, 6:08:23 AM (13 days ago)
to Wazuh | Mailing List
Hello Subash,

I am looking into your query. Please allow me some time.

Md. Nazmur Sakib

Oct 24, 2025, 8:52:52 AM (12 days ago)
to Wazuh | Mailing List

Here I can see two main problems,



2025/10/23 13:22:03 wazuh-logcollector: ERROR: Large message size from file '/Users/galaxy/galaxy_builds/galaxy_mysql/log/app.log'


For the first, the only possible workarounds currently would be changing the source log so that entries stay under the limit, or breaking the long entries down.


This subject has been heavily discussed, as the size is limited to this static value because of the agent-manager coordination, and was already increased from 6kB to 64kB to avoid these kinds of issues.

The other issue you have mentioned is the mapping conflict you have seen in Filebeat.

Check if you can see the alerts in the alerts.json file, and share the matching entries from it:

cat /var/ossec/logs/alerts/alerts.json | grep -i -E "galaxy_mysql"

If you can see the alerts in the alerts.json file but not in the dashboard, share a sample log and the output of the following Filebeat command:

cat /var/log/filebeat/filebeat/* | grep -i -E "error|warn"


For the mapping conflict, we can take different approaches, like creating custom decoders or changing the Filebeat pipeline file; it will depend on the error and your log format.

Please replace the sensitive values with dummy values.

Looking forward to your update.

Subash Ponnuswamy

Oct 27, 2025, 3:01:22 AM (10 days ago)
to Wazuh | Mailing List
Hi Nazmur Sakib,

Thanks for looking into this issue.

I've included the filebeat warning message. This must be the reason the logs are not visible in the Wazuh Archives index in the dashboard. Kindly let me know your inputs on how I can fix this.


2025-10-27T08:30:30.336+0530 WARN [elasticsearch] elasticsearch/client.go:408 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc237d5f3525b81b8, ext:837201525933569, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"}, Fields:{"agent":{"ephemeral_id":"9e3722f4-e29b-4207-ac74-c6019c422ba3","hostname":"wazuh","id":"54864f23-ed8d-4c5b-97f5-7560d04fb281","name":"wazuh","type":"filebeat","version":"7.10.2"},"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.archives","module":"wazuh"},"fields":{"index_prefix":"wazuh-archives-4.x-"},"fileset":{"name":"archives"},"host":{"name":"wazuh"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/archives/archives.json"},"offset":6619262831},"message":"{\"timestamp\":\"2025-10-27T08:30:28.717+0530\",\"agent\":{\"id\":\"001\",\"name\":\"mafirees-Mac-Studio.local\",\"ip\":\"192.168.71.204\"},\"manager\":{\"name\":\"wazuh\"},\"id\":\"1761534028.371120134\",\"full_log\":\"{\\\"level\\\":\\\"info\\\",\\\"cluster\\\":\\\"xxxx\\\",\\\"ip\\\":\\\"x.x.x.x\\\",\\\"port\\\":22,\\\"jump_ip\\\":\\\"x.x.x.x\\\",\\\"jump_port\\\":22,\\\"jump_username\\\":\\\"mafiree\\\",\\\"service\\\":\\\"postgresql\\\",\\\"servertime\\\":\\\"2025-10-27 08:30:26.797228000\\\",\\\"isttime\\\":\\\"2025-10-27T08:30:26+05:30\\\",\\\"message\\\":\\\"Connected to x.x.x.x and executed command at 2025-10-27 08:30:26.797175 +0530 IST m=+70387.958252710 via singleJumpConnection in 26.208859875s\\\\n\\\"}\",\"decoder\":{\"name\":\"json\"},\"data\":{\"level\":\"info\",\"cluster\":\"xxxx\",\"ip\":\"10.0.9.83\",\"port\":\"22\",\"jump_ip\":\"xx.x.x.xx\",\"jump_port\":\"22\",\"jump_username\":\"mafiree\",\"service\":\"postgresql\",\"servertime\":\"2025-10-27 08:30:26.797228000\",\"isttime\":\"2025-10-27T08:30:26+05:30\",\"message\":\"Connected to 10.x.x.x and executed command at 2025-10-27 08:30:26.797175 +0530 IST m=+70387.958252710 via singleJumpConnection in 
26.208859875s\\n\"},\"location\":\"/Users/galaxy/galaxy_builds/galaxy/monitoring_agentless/logs/agentless_logs/general_logs/2025-10-27/app.log\"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::14450144-66306", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc0006aa9c0), Source:"/var/ossec/logs/archives/archives.json", Offset:6619264027, Timestamp:time.Time{wall:0xc237b80b5d633073, ext:806577710986403, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0xdc7de0, Device:0x10302}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400):

{"type":"mapper_parsing_exception","reason":"object mapping for [data.port] tried to parse field [port] as object, but found a concrete value"}

Subash Ponnuswamy

Oct 28, 2025, 2:52:58 AM (9 days ago)
to Wazuh | Mailing List
Hi Team,

Any inputs for fixing this exception? Thanks in advance.

Md. Nazmur Sakib

Oct 28, 2025, 5:04:14 AM (9 days ago)
to Wazuh | Mailing List

One quick, easy fix is to make this port field unindexable, so that it doesn't create any conflict, by configuring the mapping template:

/etc/filebeat/wazuh-template.json

https://www.elastic.co/guide/en/elasticsearch/reference/6.2/enabled.html
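As a sketch of that approach (assuming the default wazuh-template.json layout), the data.port property could be declared with enabled set to false, which keeps the value in _source but skips indexing it, so the object-vs-value conflict no longer rejects documents:

```json
{
  "mappings": {
    "properties": {
      "data": {
        "properties": {
          "port": {
            "type": "object",
            "enabled": false
          }
        }
      }
    }
  }
}
```

After editing the template you would need to reload it (e.g. with filebeat setup --index-management), and the change only affects newly created indices.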


But I believe this is an important field, and you would like to have it in your alerts. So instead we can change the name of the field before indexing by configuring the pipeline (/usr/share/filebeat/module/wazuh/alerts/ingest/pipeline.json):

https://www.elastic.co/docs/reference/beats/filebeat/rename-fields


Can you share a sample alert log from the alerts.json file?



cat /var/ossec/logs/alerts/alerts.json | grep -i -E "galaxy_mysql"

Please replace the sensitive values with dummy values.


I will replicate this on my end and provide you with the best possible solution.

Subash Ponnuswamy

Oct 28, 2025, 8:04:10 AM (8 days ago)
to Md. Nazmur Sakib, Wazuh | Mailing List
Thanks Nazmur Sakib,

Let me check this. I don't have any alert logs for this; I just want to view the logs in wazuh-archives.

This is the format of the log

{"level":"info","ip":"10.10.3.102","cluster":"xxx","port":"22","service":"nginx","jump_1_ip":"xx.x.xx.xx","jump_1_port":"22","jump_1_username":"ec2-user","isttime":"2025-10-24 10:10:00.466964 +0530 IST","servertime":"2025-10-24T10:10:05+05:30","message":"status successfully updated to failed"}

--
Regards,
SUBASH P

--
You received this message because you are subscribed to a topic in the Google Groups "Wazuh | Mailing List" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/wazuh/4wAgEHFblz0/unsubscribe.
To unsubscribe from this group and all its topics, send an email to wazuh+un...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/wazuh/141bac3b-19f8-46f5-810b-96767fe95ceen%40googlegroups.com.

Md. Nazmur Sakib

Oct 29, 2025, 8:26:59 AM (7 days ago)
to Wazuh | Mailing List
I will test it in my lab and share an update with you by tomorrow.

Md. Nazmur Sakib

Oct 30, 2025, 3:01:31 AM (7 days ago)
to Wazuh | Mailing List

I was able to reproduce the filebeat mapping error using your log.

You can resolve it by renaming the fields data.port and data.service in the Filebeat archives data ingest pipeline.

I am sharing a step-by-step guide.

Open

/usr/share/filebeat/module/wazuh/archives/ingest/pipeline.json

and locate the spot between the "date_index_name" processor and the { "remove": { "field": "message" ... } } processor:

    {
      "date_index_name": {
        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": false
      }
    },
    { "remove": { "field": "message", "ignore_missing": true, "ignore_failure": true } },



Add the following processors between them:


    {
      "date_index_name": {
        "field": "timestamp",
        "date_rounding": "d",
        "index_name_prefix": "{{fields.index_prefix}}",
        "index_name_format": "yyyy.MM.dd",
        "ignore_failure": false
      }
    },
    {
      "rename": {
        "field": "data.port",
        "target_field": "data.port_number",
        "ignore_missing": true,
        "if": "def g = ctx?.data?.service; def isPg = g != null && ((g instanceof List && g.contains('nginx')) || (g instanceof String && g == 'nginx') || (g instanceof String && g == 'postgresql')); def v = ctx?.data?.port; return isPg && v != null && !(v instanceof Map);"
      }
    },
    {
      "convert": {
        "field": "data.port_number",
        "type": "long",
        "ignore_missing": true
      }
    },
    {
      "remove": {
        "field": "data.port",
        "ignore_missing": true,
        "if": "def v = ctx?.data?.port; return v != null && !(v instanceof Map);"
      }
    },
    {
      "rename": {
        "field": "data.service",
        "target_field": "data.service_name",
        "ignore_missing": true,
        "if": "def g = ctx?.data?.service; def isPg = g != null && ((g instanceof List && g.contains('nginx')) || (g instanceof String && g == 'nginx') || (g instanceof String && g == 'postgresql')); def v = ctx?.data?.service; return isPg && v != null && !(v instanceof Map);"
      }
    },
    {
      "remove": {
        "field": "data.service",
        "ignore_missing": true,
        "if": "def g = ctx?.data?.service; def isPg = g != null && ((g instanceof List && g.contains('nginx')) || (g instanceof String && g == 'nginx') || (g instanceof String && g == 'postgresql')); def v = ctx?.data?.service; return isPg && v != null && !(v instanceof Map);"
      }
    },
    { "remove": { "field": "message", "ignore_missing": true, "ignore_failure": true } },



This configuration renames data.port and data.service when the log has a decoded field service with the value nginx or postgresql.


Now load the configuration and restart the filebeat service.

filebeat setup --pipelines

systemctl restart filebeat
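As an optional sanity check (a sketch, assuming the pipeline name shown in the Filebeat warning above), you can simulate the reloaded pipeline from Dev Tools with POST _ingest/pipeline/filebeat-7.10.2-wazuh-archives-pipeline/_simulate and a minimal document; the simulated result should show a numeric data.port_number and no data.port. A timestamp is included so the date_index_name processor has something to parse; other processors in the full pipeline may expect additional fields.

```json
{
  "docs": [
    {
      "_source": {
        "timestamp": "2025-10-30T12:49:09.515+0530",
        "data": {
          "service": "nginx",
          "port": "22"
        }
      }
    }
  ]
}
```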


If you also face a similar issue with alerts, you can make the same changes for the alert indices in /usr/share/filebeat/module/wazuh/alerts/ingest/pipeline.json.

I have tested this and it worked for me. Check the screenshot for reference.

test26.png


Let me know if this works for you.

Subash Ponnuswamy

Oct 30, 2025, 4:05:25 AM (7 days ago)
to Md. Nazmur Sakib, Wazuh | Mailing List
Thanks Nazmur, 

Followed your instructions, and now the log is visible in the dashboard with the renamed fields.

image.png

In the filebeat log, I'm getting an error regarding the timestamp. Is this related to this same issue?

2025-10-30T12:49:11.067+0530    WARN    [elasticsearch] elasticsearch/client.go:408     Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc238e23b8f847f0b, ext:110371853512, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"}, Fields:{"agent":{"ephemeral_id":"7cd3c931-afee-4299-b1ab-d4ec2653029a","hostname":"wazuh","id":"54864f23-ed8d-4c5b-97f5-7560d04fb281","name":"wazuh","type":"filebeat","version":"7.10.2"},"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.archives","module":"wazuh"},"fields":{"index_prefix":"wazuh-archives-4.x-"},"fileset":{"name":"archives"},"host":{"name":"wazuh"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/archives/archives.json"},"offset":9595355780},"message":"{\"timestamp\":\"2025-10-30T12:49:09.515+0530\",\"rule\":{\"level\":2,\"description\":\"Unknown problem somewhere in the system.\",\"id\":\"1002\",\"firedtimes\":15752,\"mail\":false,\"groups\":[\"syslog\",\"errors\"],\"gpg13\":[\"4.3\"]},\"agent\":{\"id\":\"001\",\"name\":\"mafirees-Mac-Studio.local\",\"ip\":\"192.168.71.204\"},\"manager\":{\"name\":\"wazuh\"},\"id\":\"1761808749.609184463\",\"full_log\":\"{\\\"level\\\":\\\"info\\\",\\\"ip\\\":\\\"10.10.3.102\\\",\\\"cluster\\\":\\\"Clovia\\\",\\\"port\\\":\\\"22\\\",\\\"service\\\":\\\"nginx\\\",\\\"jump_1_ip\\\":\\\"xx.1.177.xx\\\",\\\"jump_1_port\\\":\\\"22\\\",\\\"jump_1_username\\\":\\\"ec2-user\\\",\\\"isttime\\\":\\\"2025-10-30 12:49:03.348022 +0530 IST\\\",\\\"servertime\\\":\\\"2025-10-30T12:49:09+05:30\\\",\\\"message\\\":\\\"status successfully updated to failed\\\"}\",\"decoder\":{\"name\":\"json\"},\"data\":{\"level\":\"info\",\"ip\":\"10.10.3.102\",\"cluster\":\"Clovia\",\"port\":\"22\",\"service\":\"nginx\",\"jump_1_ip\":\"x.x.x.x\",\"jump_1_port\":\"22\",\"jump_1_username\":\"ec2-user\",\"isttime\":\"2025-10-30 12:49:03.348022 +0530 IST\",\"servertime\":\"2025-10-30T12:49:09+05:30\",\"message\":\"status successfully updated 
to failed\"},\"location\":\"/Users/galaxy/galaxy_builds/galaxy_access_log/galaxy_access_log_syncer/log/app.log\"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::14450474-66306", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc00077a1a0), Source:"/var/ossec/logs/archives/archives.json", Offset:9595356907, Timestamp:time.Time{wall:0xc238e21ff6b5e586, ext:29402459, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0xdc7f2a, Device:0x10302}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [data.isttime] of type [date] in document with id '2xP8M5oBj46dJ1KsGcPQ'. Preview of field's value: '2025-10-30 12:49:03.348022 +0530 IST'","caused_by":{"type":"illegal_argument_exception","reason":"failed to parse date field [2025-10-30 12:49:03.348022 +0530 IST] with format [strict_date_optional_time||epoch_millis]","caused_by":{"type":"date_time_parse_exception","reason":"Failed to parse with all enclosed parsers"}}}

The value for 'isttime' appears in two formats in the log file: `2025-10-30 05:29:03.199568 +0530 IST` and `2025-10-30T05:29:08+05:30`.
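A possible explanation, offered as an assumption rather than a confirmed diagnosis: if data.isttime is not mapped explicitly, Elasticsearch's dynamic date detection can map it as a date the first time an ISO-style value like 2025-10-30T05:29:08+05:30 arrives (it matches strict_date_optional_time), after which the "+0530 IST" variant fails to parse. In the same spirit as the port fix above, one workaround sketch is to rename the field in /usr/share/filebeat/module/wazuh/archives/ingest/pipeline.json before indexing (the target field name is illustrative):

```json
{
  "rename": {
    "field": "data.isttime",
    "target_field": "data.isttime_text",
    "ignore_missing": true
  }
}
```

The renamed field would then be mapped fresh when it is first indexed; existing indices keep whatever mapping they already have.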

--
Regards,
SUBASH P

Subash Ponnuswamy

Nov 3, 2025, 4:45:35 AM (3 days ago)
to Wazuh | Mailing List
Hi Team,

Any suggestions for fixing this latest error?

{"type":"mapper_parsing_exception","reason":"failed to parse field [data.isttime] of type [date] in document with id '2xP8M5oBj46dJ1KsGcPQ'. Preview of field's value: '2025-10-30 12:49:03.348022 +0530 IST'","caused_by":{"type":"illegal_argument_exception","reason":"failed to parse date field [2025-10-30 12:49:03.348022 +0530 IST] with format [strict_date_optional_time||epoch_millis]","caused_by":{"type":"date_time_parse_exception","reason":"Failed to parse with all enclosed parsers"}}}



Md. Nazmur Sakib

Nov 4, 2025, 12:17:17 AM (yesterday)
to Wazuh | Mailing List
Sorry for the late response. I was on holiday.

2025-11-04 11 03 18.png
As you can see in the screenshot, the field data.isttime should be indexed as text by default if you have not specified a date format for it in the Filebeat template.

Can you share if you have any configuration in this filebeat template file?

Run this command on the Wazuh manager server’s CLI to check.

cat /etc/filebeat/wazuh-template.json | grep isttime


Or from the web interface, go to Indexer Management > Dev Tools

And run this command and share the output in a text file.

GET _template/



Looking forward to your update.

Subash Ponnuswamy

Nov 4, 2025, 2:00:37 AM (yesterday)
to Md. Nazmur Sakib, Wazuh | Mailing List
Hi Nazmur,

Please find the config files attached.
image.png

--
Regards,
SUBASH P

pipeline.json
template output
wazuh-template.json

Md. Nazmur Sakib

2:55 AM (16 hours ago)
to Wazuh | Mailing List


I have reviewed all the information you have shared. I do not see any reference to this field being defined as a date in the files you shared.

Can you run this command and check if there are any custom pipelines where it is defined?

This command will show all ingest pipeline configurations.

From the web interface, go to Indexer Management > Dev Tools


And run this command and share the output in a text file.

GET _ingest/pipeline/*



Looking forward to your update.

Subash Ponnuswamy

3:59 AM (15 hours ago)
to Md. Nazmur Sakib, Wazuh | Mailing List
Hi Nazmur, 

I've attached the output for GET _ingest/pipeline/*


--
Regards,
SUBASH P

ingest pipeline