Integrate AWS CloudWatch with Wazuh


kanaka raju

Oct 25, 2023, 5:32:49 AM
to Wazuh | Mailing List
Hey guys, I'm trying to integrate AWS RDS CloudWatch with Wazuh.

But we are not able to see these logs in the dashboard. This is configured on one of the agents:

<wodle name="aws-s3">
  <disabled>no</disabled>
  <interval>5m</interval>
  <run_on_start>yes</run_on_start>
  <service type="cloudwatchlogs">
    <aws_profile>default</aws_profile>
    <aws_log_groups>LOG GROUP LOCATION</aws_log_groups>
    <regions>us-east-1</regions>
  </service>
</wodle>

It says that the logs are generated and events are being sent for analysis, but when I check the agent's logs:

2023/10/25 00:25:51 wazuh-modulesd:aws-s3: INFO: Executing Service Analysis: (Service: cloudwatchlogs)
2023/10/25 00:25:51 wazuh-modulesd:aws-s3: WARNING: Service: cloudwatchlogs  -  Returned exit code 10
2023/10/25 00:25:51 wazuh-modulesd:aws-s3: WARNING: Service: cloudwatchlogs  -  pyarrow module is required.
2023/10/25 00:25:51 wazuh-modulesd:aws-s3: INFO: Fetching logs finished.


It shows the above error. Is this expected? Can someone please guide me in configuring CloudWatch Logs?

kanaka raju

Oct 25, 2023, 5:40:50 AM
to Wazuh | Mailing List
Also, in addition to this, I'm getting this error when I run:

wodles/aws/aws-s3 --service cloudwatchlogs --aws_profile default --regions us-east-1 --aws_log_groups <LOG-GROUP-NAME>


error:
ERROR: Message too long to send to Wazuh.  Skipping message...
ERROR: Message too long to send to Wazuh.  Skipping message...
ERROR: Message too long to send to Wazuh.  Skipping message...

Federico Rodriguez

Oct 25, 2023, 6:56:05 AM
to Wazuh | Mailing List
Hi kanaka raju,
can you please describe your Wazuh installation? Which version are you using?

kanaka raju

Oct 25, 2023, 7:05:47 AM
to Wazuh | Mailing List
Hello Federico,

I've installed the Wazuh server on a Kubernetes cluster (AWS EKS) and the version is v4.5.3. The agent is on a standalone EC2 instance where I've configured CloudWatch to collect and send logs to the server.

Federico Rodriguez

Oct 26, 2023, 7:58:55 AM
to Wazuh | Mailing List
Hi Kanaka, 
it seems the pyarrow module is not installed. Here's a guide to install the dependencies:
https://documentation.wazuh.com/current/cloud-security/amazon/services/prerequisites/dependencies.html#amazon-dependencies

For more information here's the troubleshooting guide:
https://documentation.wazuh.com/current/cloud-security/amazon/services/troubleshooting.html

kanaka raju

Oct 26, 2023, 8:50:32 AM
to Wazuh | Mailing List
Hello Federico,

I was able to solve that issue, but could you please help with the other error, related to processing log events?

wodles/aws/aws-s3 --service cloudwatchlogs --aws_profile default --regions us-east-1 --aws_log_groups <LOG-GROUP-NAME>

error:
ERROR: Message too long to send to Wazuh.  Skipping message...
ERROR: Message too long to send to Wazuh.  Skipping message...
ERROR: Message too long to send to Wazuh.  Skipping message...


This is the error I get while dealing with CloudWatch logs.

Federico Rodriguez

Oct 26, 2023, 12:05:47 PM
to Wazuh | Mailing List
The "ERROR: Message too long to send to Wazuh." error is thrown by a buffer protection mechanism in Wazuh when an event exceeds the maximum message size. Here's the issue related to it: https://github.com/wazuh/wazuh/issues/17689
Unfortunately, there isn't much that can be done in the Wazuh configuration, but you could try to control the size of the AWS events. Maybe you can check whether the log group quotas can be reduced.

https://docs.aws.amazon.com/servicequotas/

kanaka raju

Oct 27, 2023, 3:25:22 AM
to Wazuh | Mailing List
Thanks Federico,

Also, in addition to this, I had a query about log management. Currently the logs are stored in two locations:
/var/ossec/logs/archives/ and /var/ossec/logs/alerts/. Since these are huge files, is it safe to delete these log files or move them to, say, S3 buckets?

Will this affect dashboard querying of older data by any chance?


Thanks and Regards

Federico Rodriguez

Oct 27, 2023, 10:40:53 AM
to Wazuh | Mailing List
By default, Wazuh stores its alerts in two files, alerts.json and alerts.log, in the /var/ossec/logs/alerts folder, which contains only the alerts of the current day. It is not advised to delete these files, as you may incur data loss. However, files located in the /var/ossec/logs/archives/ folder are no longer used and can be safely deleted (keep in mind the historic backup files will be permanently lost).

Additionally, with the logall or logall_json option in the manager's ossec.conf, Wazuh will store the archives in /var/ossec/logs/archives with the same structure as the alert logs. These archives contain every log that has reached the manager, regardless of whether an alert was generated, and for that reason they use more space than the alert logs. It is recommended to disable the logall and logall_json options unless you need them, to reduce the manager's storage requirements.

More info:
https://documentation.wazuh.com/current/user-manual/reference/ossec-conf/global.html#logall
https://documentation.wazuh.com/current/user-manual/manager/wazuh-archives.html#enabling-the-wazuh-archives
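For reference, archiving can be switched off in the <global> section of the manager's ossec.conf (restart the manager afterwards for the change to take effect):

```xml
<!-- In the manager's ossec.conf: disable full event archiving -->
<global>
  <logall>no</logall>
  <logall_json>no</logall_json>
</global>
```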

Hope it helps!