Thank you for the response!
The AWS module configuration is as follows:
<ossec_config>
  <wodle name="aws-s3">
    <disabled>no</disabled>
    <remove_from_bucket>no</remove_from_bucket>
    <interval>30m</interval>
    <run_on_start>yes</run_on_start>
    <skip_on_error>no</skip_on_error>
    <bucket type="cloudtrail">
      <name>REDACTED</name>
      <only_logs_after>2021-JAN-14</only_logs_after>
      <iam_role_arn>REDACTED</iam_role_arn>
    </bucket>
    <bucket type="guardduty">
      <name>REDACTED</name>
      <only_logs_after>2021-JAN-14</only_logs_after>
      <iam_role_arn>REDACTED</iam_role_arn>
      <path>firehose/</path>
    </bucket>
  </wodle>
</ossec_config>
The coordinator sits in one AWS account, while the IAM role and the configured S3 buckets are in a separate account. The role attached to the coordinator allows it to assume that role in the other account. I know this works: running the aws-s3 command manually (copied from ossec.log with wazuh_modules.debug=2) prints data from these buckets to stdout, and IAM Access Advisor confirms the roles have been used to access the buckets.
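For context, the manual run looked roughly like the sketch below. I'm not pasting the exact command here, so the flag names (`--bucket`, `--type`, `--iam_role_arn`, `--debug`) are assumptions about the aws-s3 wodle script's options, not a verbatim copy from ossec.log:

```shell
# Hypothetical reconstruction of the manual module run; the real command was
# copied verbatim from ossec.log (wazuh_modules.debug=2). The flag names
# below are assumptions, not confirmed against the actual log line.
/var/ossec/wodles/aws/aws-s3 \
  --bucket REDACTED \
  --type cloudtrail \
  --iam_role_arn REDACTED \
  --debug 2
```

Run this way, the script emits the same DEBUG lines shown below and streams the bucket contents to stdout, which is how I verified the cross-account role assumption works.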
DEBUG: +++ Working on REDACTED - REDACTED
DEBUG: +++ Marker: AWSLogs/REDACTED/CloudTrail/REDACTED/2021/01/14/REDACTED_CloudTrail_REDACTED_20210114T0025Z_37F25NhZ2zjVmk90.json.gz
DEBUG: ++ Skipping previously processed file: AWSLogs/REDACTED/CloudTrail/REDACTED/2021/01/14/REDACTED_CloudTrail_REDACTED_20210114T0100Z_g4OvrL9efPcOrGDl.json.gz
I can force it to reprocess files, new files are being picked up, and I see no error messages. Entries from CloudTrail also appear in ossec/logs/archives.log, but lowering the alert level to 0 didn't seem to change anything.