Hello,
I am a new Wazuh user. I've managed to deploy the solution as an "all-in-one" installation on a Linux machine.
I've managed to link it to AWS WAF logs through an S3 bucket. This is my config:
<ossec_config>
  <wodle name="aws-s3">
    <disabled>no</disabled>
    <interval>1m</interval>
    <run_on_start>yes</run_on_start>
    <skip_on_error>yes</skip_on_error>
    <bucket type="waf">
      <name>bucket_name</name>
      <path>firehose</path>
      <access_key>xxxxxxxx</access_key>
      <secret_key>xxxxxxxxxxxxxxxxxxxx</secret_key>
    </bucket>
  </wodle>
</ossec_config>
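To get the DEBUG lines in the output below, I also enabled module debugging in /var/ossec/etc/local_internal_options.conf (assuming the default installation path; this is what makes the wodle run with `--debug 2`):

```
wazuh_modules.debug=2
```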
When I check the logs, I can see this:
2023/04/06 19:00:42 wazuh-modulesd:aws-s3[16018] wm_aws.c:78 at wm_aws_main(): DEBUG: Sleeping until: 2023/04/06 19:01:41
2023/04/06 19:01:41 wazuh-modulesd:aws-s3[16018] wm_aws.c:82 at wm_aws_main(): INFO: Starting fetching of logs.
2023/04/06 19:01:41 wazuh-modulesd:aws-s3[16018] wm_aws.c:134 at wm_aws_main(): INFO: Executing Bucket Analysis: (Bucket: bucket_name, Path: firehose, Type: waf)
2023/04/06 19:01:41 wazuh-modulesd:aws-s3[16018] wm_aws.c:329 at wm_aws_run_s3(): DEBUG: Create argument list
2023/04/06 19:01:41 wazuh-modulesd:aws-s3[16018] wm_aws.c:444 at wm_aws_run_s3(): DEBUG: Launching S3 Command: wodles/aws/aws-s3 --bucket bucket_name --access_key xxxxxxxxxxxxx --secret_key xxxxxxxxxxxxxxxxxx --trail_prefix firehose --type waf --debug 2 --skip_on_error
2023/04/06 19:01:43 wazuh-modulesd:aws-s3[16018] wm_aws.c:484 at wm_aws_run_s3(): DEBUG: Bucket: - OUTPUT: DEBUG: +++ Debug mode on - Level: 2
DEBUG: +++ Marker: firehose/2023/04/06/13/aws-waf-logs-cloudfront-test-3-2023-04-06-13-48-58-6c757e50-533c-45d7-a673-a3bce0f6fadf
2023/04/06 19:01:43 wazuh-modulesd:aws-s3[16018] wm_aws.c:174 at wm_aws_main(): INFO: Fetching logs finished.
But no new logs appear in Discover.
When I run this command:
./aws-s3.py -b 'bucket_name' --reparse --debug 5 -a xxxxxxxxxxxxxxxxx -k xxxxxxxxxxxxxxx -t waf -l firehose
I get a lot of messages, ending with: DEBUG: ++ Reformat message
But if I take any event from that list and run it through the wazuh-logtest command, it parses fine, reaching Phase 3 with a matched rule.
I've tried deleting the s3_***.db file to force a reparse of all logs in the bucket; the indexer found them, but nothing changed.
I also activated the Amazon AWS module.
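One more check I can do (a sketch, assuming the default manager paths): enable the event archives in /var/ossec/etc/ossec.conf to verify whether the WAF events reach the analysis engine at all:

```xml
<ossec_config>
  <global>
    <logall_json>yes</logall_json>
  </global>
</ossec_config>
```

After restarting the manager, the events should appear in /var/ossec/logs/archives/archives.json even if no rule fires.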
I am stuck and looking for advice now. Please help me :)
Sorry for my language; my English is not the best.
Many thanks,
Greg