Aloha!
Sorry in advance for what's probably going to be a very noob question. I believe it's related to something you described in this thread, so I'm going to reply here; let me know if it's not and I'm happy to move it to a top-level thread. I'm actually in the midst of replacing some thumbnail generation currently handled by Paperclip with Shrine, before making the bigger push of seeing whether Shrine can manage the process I'm about to describe.
The one big question I have is: in a "direct to S3" world, if I leave the pre/post-process hooks empty, will Shrine still attempt to pull the files it's attaching into server/dyno memory?
This matters because the files can be many gigabytes, and other than the encoding work Zencoder is doing I have no need for post-processing; pulling those files down makes my servers crap out.
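To make the worry concrete: as I understand direct upload, once the browser finishes uploading to S3, the only thing my Rails app receives is a small JSON description of the cached file, something like this (all values invented):

```ruby
require "json"

# What the JS uploader would send back after the S3 upload finishes:
# just a description of the file, not the file itself (values invented)
cached_file_data = {
  "id"       => "9f3b2c/original.mp4",
  "storage"  => "cache",
  "metadata" => {
    "size"      => 5 * 1024**3,   # ~5 GB file, but only this integer crosses the wire
    "filename"  => "keynote.mp4",
    "mime_type" => "video/mp4",
  },
}

payload = cached_file_data.to_json
puts payload.bytesize  # the params stay tiny regardless of file size
```

So my hope is that attaching means persisting roughly this hash, with the multi-gig object itself never leaving S3.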
Full description of what I'm going to try to solve with Shrine:
What I've currently got implemented is:
- Custom code that provides a presigned endpoint.
- A JavaScript uploader that sends directly to my S3 bucket (using EvaporateJS) and then comes back to hit one of my Rails endpoints to create the record.
- That endpoint then kicks off a Zencoder job to create a bunch of encoded versions; Zencoder writes those directly back to S3 and notifies my server of all their locations in the bucket.
- A bunch of code to create proper CDN URLs for all those versions.
- A bunch of code to delete the files if the Rails record is deleted.
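For what it's worth, the signing half of that custom code is tiny. As I understand EvaporateJS's handshake, the server-side piece is essentially base64-encoding an HMAC-SHA1 over the string EvaporateJS sends for each multipart-upload request (a sketch; the function name and fallback secret are mine):

```ruby
require "openssl"
require "base64"

# Roughly what my hand-rolled signing code does for EvaporateJS (v2-style
# signing): HMAC-SHA1 the string-to-sign with the AWS secret, base64-encoded
# without a trailing newline. Function name and fallback secret are made up.
def sign_for_evaporate(to_sign, secret = ENV.fetch("AWS_SECRET_ACCESS_KEY", "example-secret"))
  Base64.strict_encode64(OpenSSL::HMAC.digest("sha1", secret, to_sign))
end

# In Rails this sits behind a tiny endpoint, roughly:
#   render plain: sign_for_evaporate(params[:to_sign])
```

Not much code, but it's exactly the kind of thing I'd rather a library own.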
I originally had Paperclip managing the files on S3 after they landed there, but Paperclip automatically read each file into memory and I saw no way around that. I felt forced to write my own helpers to manage the files (which felt like the dark ages of "there should be a library that can handle this for me!").
My hope is that I can use Shrine to:
- Use the built-in presign endpoint, which looks identical to the one I wrote myself (I always love deleting custom code).
- Continue to use my custom JavaScript uploader, which uses EvaporateJS under the hood to upload directly to S3.
- Save the Rails record with the S3 URL as it does today, and have Shrine manage that file as the "original".
- Kick off the Zencoder job, which creates all the encoded versions and puts them directly back on S3.
- When Zencoder tells my Rails app about those versions, have Shrine manage all of them too.
- Get file deletion on Rails-record deletion "for free". (Again, love deleting code.)
- Get CDN URLs built for me "for free" with some Shrine configuration. (Again, love deleting code.)
All of this without the memory overhead of Shrine automatically pulling a multi-gig file down.
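To make that concrete, the configuration I'm picturing is roughly this; a sketch only, assuming I'm reading the plugin docs right, and the ENV variable names are mine:

```ruby
# config/initializers/shrine.rb -- a sketch of the setup I'm hoping for,
# not something I've run yet (ENV names are made up)
require "shrine"
require "shrine/storage/s3"

s3_options = {
  bucket:            ENV["S3_BUCKET"],
  region:            ENV["AWS_REGION"],
  access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
  secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
}

Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: "cache", **s3_options),  # direct uploads land here
  store: Shrine::Storage::S3.new(**s3_options),                   # promoted originals + versions
}

Shrine.plugin :activerecord
Shrine.plugin :presign_endpoint   # replaces my hand-rolled presign code
Shrine.plugin :versions           # to hold the Zencoder outputs alongside the original
Shrine.plugin :default_url_options, store: { host: ENV["CDN_HOST"] }  # CDN URLs "for free"

# config/routes.rb would then just mount the endpoint, something like:
#   mount Shrine.presign_endpoint(:cache) => "/presign"
```

If that's in the right ballpark, the remaining question is just whether any of those plugins will try to download the original during promotion when there's no processing defined.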
I saw a lot of resources that touched on many of these points/goals, but nothing specifically about the memory management. My hope is also that one day I can say "hey Shrine, for this one particular version Zencoder gave me, pull it into a background job and rip the EXIF data from it", since I can point it at a known-small version... but that's not a use case I need to worry about for a while.
Sorry for the verbosity, and apologies if this was already answered and I'm just too dense to have found or understood the answer that was in front of me.
Cheers, and I'm excited about what feels like a really big step forward in the Rails file-handling world!
Sumit