Shrine S3 presigned url + Rails Caching (cache_control, expires, X-Amz-Expires)


advme...@gmail.com

Apr 26, 2017, 7:11:14 AM
to Shrine
Hello,

I enjoy Shrine a lot; the modular design makes a lot of sense, and the look and feel is cleaner to my eyes. So thanks for that!

My Post class has thumbnails hosted on S3 with the help of Shrine, and everything works perfectly.
I then implemented caching in Rails, especially for index actions, and each Post is wrapped in a cache tag with a proper key.
I encountered 403 errors while loading (GET) the thumbnails, so I suspect my cache (90 minutes) is interfering with the "cache_control" and "expires" options specified inside Shrine::Storage::S3, like so:
store: Shrine::Storage::S3.new(prefix: "store", upload_options: { acl: 'public-read', cache_control: 'public, max-age=315569260', expires: 604800 }, **s3_options), # permanent
# cache_control 10 years & expires 7 days

Moreover, I saw that the presigned URL generated by Shrine has an X-Amz-Expires of 15 minutes, while the maximum value authorized by AWS is 7 days.

This is not directly a Shrine question, but since AWS presigned URLs are quite new to me, would you mind pointing me in the right direction for using them with Shrine and caching?
Should I set X-Amz-Expires to the maximum value of 7 days and, if so, how do I set this parameter? Or should I adjust cache_control & expires?
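For reference, Shrine's S3 storage forwards `#url` options to the AWS SDK presigner, so the expiry can be raised per call via the SDK's `:expires_in` presign option. A minimal sketch (`photo.image` is a placeholder attachment reader, not taken from this app):

```ruby
# 604_800 seconds = 7 days, the maximum X-Amz-Expires value AWS allows.
photo.image.url(expires_in: 604_800)
```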

Thank you in advance for all your work and help.
Kind Regards,
Thomas

Janko Marohnić

Apr 27, 2017, 1:09:47 AM
to Thomas Menelle, Shrine
If you're caching the image links, it's best to generate public links, ones which don't include "X-Amz-Expires" or other query params. You can do that by passing `public: true` to `UploadedFile#url`:

photo.image[:thumbnail].url(public: true)

--
You received this message because you are subscribed to the Google Groups "Shrine" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ruby-shrine+unsubscribe@googlegroups.com.
To post to this group, send email to ruby-...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ruby-shrine/9dffea91-7d85-4faa-a9b8-d0f33ff6cfc2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Scott Knight

Oct 16, 2018, 6:18:36 PM
to Shrine
Hi Janko,
Sorry to bring up an old issue, but I am running into this problem as well.
My Rails app requires the images on S3 to be private and to use presigned URLs. However, some of my views have many posts, with potentially many images and files attached to each post. This is causing my app's performance and memory to suffer.
I am trying other performance improvements, but it seems I can't use any of Rails' built-in caching because the file URLs are dynamic and regenerated on each request.
Do you know of any workarounds here? In my case, X-Amz-Expires could be set to several days or a week in the future so that caching could be utilized for that period.
Do you think something like this would work:
Thanks!
Scott


Hiren Mistry

Oct 16, 2018, 7:54:51 PM
to Shrine
Hi Scott

Yes, I think that would work. You can define the `url` method in the uploader of interest; it overrides the method to use the Rails cache, or calls `super` to fetch a fresh URL. It doesn't look like it'll break anything in Shrine, if that's your concern... pretty sure, but I haven't tried it. :)

Other than that, I'd explore whether a CDN or signed cookies might be an alternative option.

Let us know if you still have issues, and which option you went with once you get it working. It'll help others.

Regards,
Hiren.

Scott Knight

Oct 16, 2018, 8:18:27 PM
to Shrine
Awesome, thanks Hiren! I will keep working on this and report back.

Janko Marohnić

Oct 17, 2018, 4:03:55 AM
to Scott Knight, ruby-shrine
That StackOverflow answer did not specify where this `#url` method should be defined; it probably belongs in `Shrine::UploadedFile`. Also, the second argument to `#fetch` is not documented, so I have no idea what it does and wouldn't recommend passing `self` to it; the cache key should be `url:<storage>:<id>`.

  Shrine.plugin :module_include

  Shrine.file_module do
    def url(**options)
      return super unless storage.is_a?(Shrine::Storage::S3)

      Rails.cache.fetch("url:#{storage_key}:#{id}", expires_in: 6.days) do
        super(expires_in: 7.days, **options)
      end
    end
  end

If you're going to serve your private S3 objects via a CDN (CloudFront), which is probably a good idea as it should help with performance, then in addition to setting the URL :host you'll need some additional configuration. Note that at the moment the new :signer option is only available on the master branch; it will be released in Shrine 2.13.0.
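A sketch of what such a :signer setup might look like, using the aws-sdk-cloudfront `UrlSigner` (the distribution host, key pair ID, and key path below are placeholders, not values from this thread):

```ruby
require "shrine/storage/s3"
require "aws-sdk-cloudfront"

# CloudFront URL signer; key pair ID and private key path are placeholders.
signer = Aws::CloudFront::UrlSigner.new(
  key_pair_id:      "APKA...",      # your CloudFront key pair ID
  private_key_path: "./cf_key.pem"  # path to the matching private key
)

Shrine::Storage::S3.new(
  host:   "https://abc123.cloudfront.net",  # your distribution (hypothetical)
  signer: signer.method(:signed_url),       # sign generated URLs via CloudFront
  **s3_options
)
```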

Kind regards,
Janko

Hiren Mistry

Oct 17, 2018, 11:40:07 AM
to Shrine
Janko

Thanks for chiming in, catching the extra `fetch` param, and creating a better key. Thinking about it a little more: adding this method to the base Shrine class means that all subclassed uploaders and all S3 storages will be cached, which may not be desired.

If one wants to cache only certain uploaders, and say only for the `store` storage, this is what it'd look like:

class MyUploader < Shrine
  plugin :module_include

  file_module do
    def url(**options)
      return super unless storage.is_a?(Shrine::Storage::S3) && storage_key == :store

      Rails.cache.fetch("url:#{storage_key}:#{id}", expires_in: 6.days) do
        super(expires_in: 7.days, **options)
      end
    end
  end
end



In this example you're only caching files that have been promoted into the `store` S3 storage of `MyUploader`, so you have finer-grained control over what gets cached.

Hiren

Scott Knight

Oct 25, 2018, 7:39:33 PM
to Shrine
Hi all,
Just giving an update: I went ahead and implemented Rails caching with some minor changes to the example code above:

Shrine.plugin :module_include

Shrine.file_module do
  def url(**options)
    return super unless storage.is_a?(Shrine::Storage::S3) && storage_key == 'store'

    Rails.cache.fetch("url:#{storage_key}:#{id}", expires_in: 6 * 24 * 60 * 60) do
      super(expires_in: 7 * 24 * 60 * 60, **options)
    end
  end
end

I added this to the Shrine initializer since I wanted it used for all uploaded files. I needed to change `storage_key == :store` to `storage_key == 'store'`.
The cache also gave an error for `6.days`, saying it was expecting seconds; `6.days.to_i` would likely work too.
It's up and running and seems to be working great; I'll let you know if there are any problems. I also still plan to investigate using a CDN.
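The caching behavior above can be illustrated with a minimal, self-contained sketch, where a plain Hash stands in for Rails.cache and the URL and id are made up:

```ruby
# Stand-in for Rails.cache: #fetch returns the cached value for a key,
# or runs the block once and stores its result (options are ignored).
class FakeCache
  def initialize
    @store = {}
  end

  def fetch(key, **_opts)
    @store.fetch(key) { @store[key] = yield }
  end
end

cache = FakeCache.new
presign_calls = 0

# Pretend presigner: counts how many times a "fresh" URL is generated.
url_for = lambda do |id|
  cache.fetch("url:store:#{id}", expires_in: 6 * 24 * 60 * 60) do
    presign_calls += 1
    "https://s3.example.com/#{id}?X-Amz-Expires=604800"
  end
end

first  = url_for.call("abc123")
second = url_for.call("abc123")  # served from the cache

puts first == second   # prints true
puts presign_calls     # prints 1 -- the presign block ran only once
```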
Thank you guys for all your help!
Scott
