s3 Forbidden message on successive flush

Bob Tucker

Apr 23, 2014, 3:00:09 PM
to flu...@googlegroups.com
It looks like the s3 plugin is failing when it tries to use a duplicate file name:

2014-04-23 18:41:37 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2014-04-23 18:42:43 +0000 error_class="AWS::S3::Errors::Forbidden" error="AWS::S3::Errors::Forbidden" instance=70307609144060

Is there a way around this? It will write out a bunch of files and then choke on the next flush, possibly due to input that straddles time_slice boundaries.

<match app.aa.user.*>
  type s3
  aws_key_id foo
  aws_sec_key bar
  s3_bucket bucketname
  s3_object_key_format %{path}%{time_slice}_%{index}.%{file_extension}
  path app.aa.user/fluentd_hub
  buffer_path /var/log/fluentd/s3
  flush_interval 15m
  flush_at_shutdown
  time_slice_format /%Y/%m/%d/%H/%M
  store_as json
  utc
</match>
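
(For reference, with this s3_object_key_format the object keys expand to something like app.aa.user/fluentd_hub/2014/04/23/18/41_0.json, an illustrative expansion. The trailing index is what the plugin increments to keep keys unique.)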

Thanks in advance.

Bob Tucker

Kiyoto Tamura

Apr 23, 2014, 4:18:36 PM
to flu...@googlegroups.com
Hi Bob,


>It looks like the s3 plugin is failing when it tries to use a duplicate file name:
>
>2014-04-23 18:41:37 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2014-04-23 18:42:43 +0000 error_class="AWS::S3::Errors::Forbidden" error="AWS::S3::Errors::Forbidden" instance=70307609144060
>
>Is there a way around this? It will write out a bunch of files and then choke on the next flush, possibly due to input that straddles time_slice boundaries.

Hm, the out_s3 plugin shouldn't be trying to write to the same S3 object key twice. Before it writes to S3, it checks whether the object key exists, and if it does, it increments the "index" value.
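
Roughly, the key-generation logic in out_s3's write method looks like this (a paraphrase of the fluent-plugin-s3 0.3.x source, not the verbatim code):

  i = 0
  begin
    values_for_s3_object_key = {
      "path" => @path,
      "time_slice" => chunk.key,   # the formatted time slice
      "file_extension" => ext,     # e.g. "json" for store_as json
      "index" => i
    }
    s3path = @s3_object_key_format.gsub(%r(%{[^}]+})) { |expr|
      values_for_s3_object_key[expr[2...-1]]
    }
    i += 1
  end while @bucket.objects[s3path].exists?   # loop until an unused key is found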

Do you have any other error message you can share? Also, apologies in advance if I am misunderstanding your description.

Kiyoto



Masahiro Nakagawa

Apr 23, 2014, 4:40:42 PM
to flu...@googlegroups.com
> Hm, the out_s3 plugin shouldn't be trying to write to the same S3 object key twice.

Right, the out_s3 plugin checks whether the object key exists or not.
Maybe the error above is caused by a different problem.


Bob Tucker

Apr 23, 2014, 6:39:36 PM
to flu...@googlegroups.com
The IAM user has * permissions on S3. The workaround is to delete the last file created; the plugin is then able to unload its buffer, which points at a duplicate-filename issue. What other S3 API call could fail at that point?

Thanks for your help

Bob Tucker

Apr 23, 2014, 8:11:16 PM
to flu...@googlegroups.com
Looking at the code: it seems like the end while @bucket.objects[s3path].exists? check fails when the result would be true, but doesn't fail when it would be false.

It would make more sense if it failed at @bucket.objects[s3path].write with a duplicate filename.

Could this be a bug in the SDK?
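
For context, in aws-sdk 1.x, exists? is a thin wrapper over a HEAD Object request that only rescues a missing key (a simplified sketch inferred from the SDK's behavior and the stack trace later in this thread, not the verbatim SDK source):

  # simplified sketch of AWS::S3::S3Object#exists? (aws-sdk 1.x)
  def exists?
    head          # issues a HEAD Object request (head_object)
    true
  rescue AWS::S3::Errors::NoSuchKey
    false         # only "no such key" is rescued and mapped to false
  end
  # anything else, e.g. a 403 Forbidden, is raised to the caller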

Bob


Kiyoto Tamura

Apr 23, 2014, 11:57:07 PM
to flu...@googlegroups.com
Hi Bob,

Thanks for the additional info. Can you share the version of aws-sdk that's running on your machine (fluent-plugin-s3 requires aws-sdk >= 1.8.2)? I will try to reproduce what you are seeing.

Thanks,

Kiyoto

Bob Tucker

Apr 24, 2014, 2:09:51 PM
to flu...@googlegroups.com
Hi Kiyoto,

Thanks for all of your help. We're on aws-sdk 1.38.0:

  2014-04-24 08:07:23 +0000 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluentd-0.10.45/lib/fluent/buffer.rb:296:in `write_chunk'
  2014-04-24 08:07:23 +0000 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluentd-0.10.45/lib/fluent/buffer.rb:276:in `pop'
  2014-04-24 08:07:23 +0000 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluentd-0.10.45/lib/fluent/output.rb:310:in `try_flush'
  2014-04-24 08:07:23 +0000 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluentd-0.10.45/lib/fluent/output.rb:132:in `run'
2014-04-24 16:58:57 +0000 [warn]: failed to flush the buffer. error_class="AWS::S3::Errors::Forbidden" error="AWS::S3::Errors::Forbidden" instance=70252393229020
2014-04-24 16:58:57 +0000 [warn]: retry count exceededs limit.
  2014-04-24 16:58:57 +0000 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/aws-sdk-1.38.0/lib/aws/core/client.rb:374:in `return_or_raise'
  2014-04-24 16:58:57 +0000 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/aws-sdk-1.38.0/lib/aws/core/client.rb:475:in `client_request'
  2014-04-24 16:58:57 +0000 [warn]: (eval):3:in `head_object'
  2014-04-24 16:58:57 +0000 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/aws-sdk-1.38.0/lib/aws/s3/s3_object.rb:293:in `head'
  2014-04-24 16:58:57 +0000 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/aws-sdk-1.38.0/lib/aws/s3/s3_object.rb:270:in `exists?'
  2014-04-24 16:58:57 +0000 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluent-plugin-s3-0.3.7/lib/fluent/plugin/out_s3.rb:156:in `write'
  2014-04-24 16:58:57 +0000 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluentd-0.10.45/lib/fluent/buffer.rb:296:in `write_chunk'
  2014-04-24 16:58:57 +0000 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluentd-0.10.45/lib/fluent/buffer.rb:276:in `pop'
  2014-04-24 16:58:57 +0000 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluentd-0.10.45/lib/fluent/output.rb:310:in `try_flush'
  2014-04-24 16:58:57 +0000 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluentd-0.10.45/lib/fluent/output.rb:132:in `run'
2014-04-24 16:58:57 +0000 [error]: throwing away old logs.

Bob Tucker

Apr 24, 2014, 4:35:50 PM
to flu...@googlegroups.com
Hi Kiyoto,

I found the issue. It turns out to be a quirk in how permissions interact between bucket permissions and IAM.

If an IAM user has permissions on the bucket but doesn't have IAM permissions for S3, that user can write files, but @bucket.objects[s3path].exists? will fail with 'Permission Denied' when the object exists, and not when it doesn't (weird).

If you give that IAM user permissions for S3 explicitly, in addition to the bucket permissions, the issue disappears and fluentd behaves normally.

This is true for the ruby aws-sdk as well as for the s3 plugin. Thanks for taking the time to help us; I hope you find this info useful.

Masahiro Nakagawa

Apr 24, 2014, 5:13:12 PM
to flu...@googlegroups.com
Bob,

Thank you for the investigation.

I wrote a very simple script that calls the 'exists?' and 'write' methods, but the error didn't occur on my S3 bucket.
So I have been googling the IAM / permission problem with aws-sdk-ruby.
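
The script was along these lines (a minimal sketch, assuming aws-sdk 1.x; the credentials, bucket, and key names are placeholders):

  require 'aws-sdk'   # aws-sdk 1.x

  s3 = AWS::S3.new(
    :access_key_id     => 'YOUR_KEY',       # placeholder
    :secret_access_key => 'YOUR_SECRET')    # placeholder

  obj = s3.buckets['test-bucket'].objects['fluentd/test_0.json']

  obj.write("hello\n")   # PUT Object
  puts obj.exists?       # HEAD Object; raises Forbidden when the user
                         # cannot read the (existing) key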

We will check your investigation later.
Once we can reproduce it, we will add an FAQ to the fluent-plugin-s3 documentation.


Masahiro

Dan Langer

Nov 11, 2016, 2:12:35 PM
to Fluentd Google Group
We didn't want to run with the suggested "s3:*" IAM permissions, as this seemed too broad. We were also reluctant to use bucket policies.

In our testing, this was fine:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": ["S3_ARN"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": ["S3_ARN/*"]
    }
  ]
}

To test your configuration, make sure you can run the "aws s3api head-object" and "aws s3api head-bucket" calls against the relevant buckets and objects without hitting permission errors. Under the hood, that's what bucket.exists? and object.exists? do.
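
For example (hypothetical bucket and key names):

  aws s3api head-bucket --bucket my-log-bucket
  aws s3api head-object --bucket my-log-bucket --key app/2016/11/11/00_0.json

If either call returns a 403, the policy is missing a permission.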

Regards,

Dan