CloudWatch integration: Logs are not being decoded


Nandhu Krishnan

May 8, 2021, 9:20:25 AM
to Wazuh mailing list
Hi,
I'm trying to integrate CloudWatch with Wazuh. The logs are fetched from the log streams in the specified log groups, but they are not being decoded and are not matched by any rules.

I have looked through a few discussions and couldn't find a proper solution. These logs are not fetched from any bucket but directly from the log streams. I was successfully able to decode logs from the CloudTrail S3 bucket, but not from the log streams in this case. The only configuration parameters I've set for this are the following:

<wodle name="aws-s3">
  <disabled>no</disabled>
  <interval>5m</interval>
  <run_on_start>yes</run_on_start>
  <service type="cloudwatchlogs">
    <aws_profile>default</aws_profile>
    <aws_log_groups>example_log_group</aws_log_groups>
    <regions>us-east-1</regions>
  </service>
</wodle>

The events are picked up by the default JSON decoder, but the field names are slightly different (awsRegion instead of data.aws.awsRegion, or sourceIPAddress instead of data.aws.sourceIPAddress, for example), so they don't match the existing AWS rules. VPC Flow Logs aren't decoded at all, since they are plain text like the line below:

2 123456789010 eni-1235b8ca123456789 172.31.16.139 172.31.16.21 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK

and are sent to Elasticsearch as:

{
  "agent": { "name": "wazuh-manager", "id": "000" },
  "manager": { "name": "wazuh-manager" },
  "decoder": {},
  "full_log": "2 123456789010 eni-1235b8ca123456789 172.31.16.139 172.31.16.21 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK",
  "input": { "type": "log" },
  "@timestamp": "2021-05-08T10:20:15.927Z",
  "location": "Wazuh-AWS",
  "id": "1223456789.6190035",
  "timestamp": "2021-05-08T10:20:15.927+0000"
}

Where did I go wrong in this case? Is there any way to properly decode the logs from the log streams without writing custom decoders?

Thanks in advance!




Cesar Moreno

May 10, 2021, 8:27:18 PM
to Wazuh mailing list
Hello,
Thanks for posting on the Wazuh mailing list. I hope this finds you well.

Unfortunately, the Wazuh threat intelligence team is still working on issue #7956 to get the decoders and rules working as expected for CloudWatch logs.
As you can see in that issue, there are two files that you can rename and modify to decode CloudWatch events and create some alerts.

In the following Amazon guide, you'll find the meaning of each value in the log, which you can use to build the decoders and write rules for the possible <action> and <log-status> values, for example:

<version> <account-id> <interface-id> <srcaddr> <dstaddr> <srcport> <dstport> <protocol> <packets> <bytes> <start> <end> <action> <log-status>

  • The decoders for this format:

<decoder name="aws-cloudwatch">
  <prematch>^\S+ \S+ \w+ \S+ \S+ \S+ \S+ \S+ \S+ \S+ \d+ \d+ \w+ \w+</prematch>
</decoder>

<decoder name="aws-cloudwatch-child">
  <parent>aws-cloudwatch</parent>
  <regex>^(\S+) (\S+) (\w+) (\S+) (\S+) (\S+) (\S+) (\S+) (\S+) (\S+) (\d+) (\d+) (\w+) (\w+)</regex>
  <order>version,account-id,interface-id,srcaddr,dstaddr,srcport,dstport,protocol,packets,bytes,sec-unix-time,end,action,log-status</order>
</decoder>
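As a quick sanity check before deploying the decoder, the child pattern can be mirrored in plain Python (using `re` as a stand-in for Wazuh's own os_regex engine, so treat this only as an approximation; note that PCRE's `\w` does not cover the hyphen in `eni-...`, so the interface-id group is widened here):

```python
import re

# Sample VPC Flow Log record from this thread
log = ("2 123456789010 eni-1235b8ca123456789 172.31.16.139 172.31.16.21 "
       "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")

# Approximation of the aws-cloudwatch-child <regex> in PCRE syntax
# (group 3 uses [\w-]+ because PCRE's \w excludes '-')
pattern = (r"^(\S+) (\S+) ([\w-]+) (\S+) (\S+) (\S+) (\S+) (\S+) (\S+) (\S+) "
           r"(\d+) (\d+) (\w+) (\w+)$")

# Field names from the decoder's <order> line
order = ["version", "account-id", "interface-id", "srcaddr", "dstaddr",
         "srcport", "dstport", "protocol", "packets", "bytes",
         "sec-unix-time", "end", "action", "log-status"]

match = re.match(pattern, log)
fields = dict(zip(order, match.groups()))
print(fields["action"], fields["log-status"])  # ACCEPT OK
```

If the mapping looks right here, the same field names should appear under data.* once the decoder is installed.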
  • The rules to create the alerts:
<group name="aws-cloudwatch">
        <rule id="107000" level="0">
                <decoded_as>aws-cloudwatch</decoded_as>
                <description>Wazuh-AWS Cloudwatch decoded group.</description>
        </rule>
        <rule id="107001" level="3">
                <if_sid>107000</if_sid>
                <action>REJECT</action>
                <description>Amazon CloudWatch: VPC flow Rejected</description>
                <options>no_full_log</options>
        </rule>
        <rule id="107002" level="3">
                <if_sid>107000</if_sid>
                <action>ACCEPT</action>
                <description>Amazon CloudWatch: VPC flow Accepted</description>
                <options>no_full_log</options>
        </rule>
        <rule id="107003" level="3">
                <if_sid>107000</if_sid>
                <field name="log-status">NODATA</field>
                <description>Amazon CloudWatch: VPC flow Status: NODATA</description>
                <options>no_full_log</options>
        </rule>
        <rule id="107004" level="3">
                <if_sid>107000</if_sid>
                <field name="log-status">SKIPDATA</field>
                <description>Amazon CloudWatch: VPC flow Status: SKIPDATA</description>
                <options>no_full_log</options>
        </rule>
</group>

Additionally, since this log doesn't carry a timestamp in a format known by default to Elasticsearch and Kibana, you can use the <start> field, since it's an epoch_second-formatted value (renamed sec-unix-time in the decoder). You can manually add it as a date in the Wazuh template for Filebeat (/etc/filebeat/wazuh-template.json), both in the date-field list and in the date/properties section, as follows:

      "data.sec-unix-time",

      ...

      },
      "data": {
        "properties": {
          "sec-unix-time": {
            "type": "date",
            "format": "epoch_second"
          },

          "audit": {

As you can see, the result in Kibana is then properly date-formatted:
[Screenshot: Kibana.PNG]
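For reference, the epoch_second interpretation of that field can be verified outside Elasticsearch; the <start> value from the sample record converts as follows (plain Python, just to illustrate the mapping):

```python
from datetime import datetime, timezone

# <start> value (sec-unix-time) from the sample VPC Flow Log record
start = 1418530010

# Interpret it exactly as Elasticsearch's epoch_second date format does
ts = datetime.fromtimestamp(start, tz=timezone.utc)
print(ts.isoformat())  # 2014-12-14T04:06:50+00:00
```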

Hope this helps you. Any questions, please let me know, I'm glad to help.

Kind regards,
Cesar Moreno.

Nandhu Krishnan

May 11, 2021, 2:26:51 AM
to Wazuh mailing list
Thank you for the response! I'll try this out.

Cesar Moreno

May 11, 2021, 8:07:02 PM
to Wazuh mailing list
Hello,
Perfect. If you have any questions, please don't hesitate to ask. We are happy to help.

I look forward to your feedback,
Kind regards,
Cesar Moreno.