Clean up S3 bucket

alexgr007

Mar 2, 2011, 9:46:31 AM
to ruby-fog
Hi everyone.

I want to clean up a directory in S3. This directory has a lot of
images, but I don't want to delete them image by image. So when I try
to do something like:

bucket = "sk-dev"

directory = storage.directories.get(bucket)

directory.destroy

I get this message: "The bucket you tried to delete is not empty"

My aim is to delete everything in the bucket and then upload new
images.

Thanks.

geemus

Mar 2, 2011, 1:18:58 PM
to ruby-fog
Unfortunately, due to the way that storage providers work, there
generally isn't a way to do this without deleting things image by
image.

Sorry I can't be of more help than that, but it is really the only
viable option.
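
For the record, the straightforward version of that loop looks
roughly like this (untested sketch; the credentials are placeholders,
and as noted below a single listing only covers the first 1000 keys):

require 'fog'

# Untested sketch: placeholder credentials, standard fog AWS storage.
storage = Fog::Storage.new(
  :provider              => 'AWS',
  :aws_access_key_id     => 'YOUR_ACCESS_KEY',
  :aws_secret_access_key => 'YOUR_SECRET_KEY'
)

directory = storage.directories.get('sk-dev')

# Destroy each object individually, then the now-empty bucket.
directory.files.each { |file| file.destroy }
directory.destroy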

wes

John Vincent

Mar 2, 2011, 1:24:10 PM
to ruby...@googlegroups.com, alexgr007
As Wes said, there's not a uniform way to do that. You'll have to
iterate over the contents.

However, depending on the size of the bucket, you'll run into the 1k
key limit (S3 returns at most 1000 keys per listing request). I'll
save you some searching now ;)

https://gist.github.com/763977
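
The shape of the workaround (rough sketch, not the gist verbatim) is
to re-list and delete until the listing comes back empty, since each
list call returns at most 1000 keys:

directory = storage.directories.get('sk-dev')

# files.all returns at most 1000 keys per call, so repeat until empty.
loop do
  batch = directory.files.all
  break if batch.empty?
  batch.each { |file| file.destroy }
end

directory.destroy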

--
John E. Vincent
http://about.me/lusis

Nic

Apr 13, 2011, 11:17:10 AM
to ruby-fog
Is there a way to access a nested directory so that you can iterate
through it? For example, if I have a directory six levels deep with
10 images in it, how would I get at just that directory to iterate
through?

geemus

Apr 13, 2011, 1:00:01 PM
to ruby-fog
The cloud providers don't actually provide nested directories. In the
end they just have long keys that contain many '/' characters. As
such, you can probably narrow the listing down with the prefix option
and then iterate over the results.
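
So something along these lines should work (sketch only; the
'a/b/c/d/e/f/' prefix stands in for whatever your six-level path
actually is):

directory = storage.directories.get('sk-dev')

# Restrict the listing to keys under the pseudo-directory.
files = directory.files.all(:prefix => 'a/b/c/d/e/f/')
files.each { |file| puts file.key }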

John Vincent

Apr 13, 2011, 1:14:27 PM
to ruby...@googlegroups.com, Nic
Guess I missed this when it came in, sorry. Again Wes beat me to it.
In the case of S3, the "directory" structure you're seeing is just
bucket names or keys with "/" in the name. S3 originally didn't even
have support for that. It was bucket + contents.

One option you might want to try down the road, depending on how
you're putting stuff in the bucket, is to maintain an "index" bucket
somewhere. It's not as pretty, but you could have a bucket called,
say, "my-bucket-indexes" and put a single file per bucket in there.
The file would contain a list of the items in a given bucket. Then
you just iterate over the contents of the file to clean out the
bucket when it gets to a certain size.
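
Roughly like this (hypothetical sketch; the index bucket and file
names are made up for illustration):

# Hypothetical layout: 'my-bucket-indexes' holds one file per bucket,
# each file listing that bucket's keys, one per line.
indexes = storage.directories.get('my-bucket-indexes')
index   = indexes.files.get('sk-dev.txt')

# Delete each listed key without ever listing the bucket itself.
index.body.each_line do |key|
  storage.delete_object('sk-dev', key.strip)
end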

Nic

Apr 13, 2011, 1:16:54 PM
to ruby-fog
Okay thanks Wes, that's what I'll do!

Nic

Apr 18, 2011, 1:45:05 PM
to ruby-fog
Does fog have a built-in iterator that I can use for this?

geemus (Wesley Beary)

Apr 18, 2011, 1:55:20 PM
to ruby...@googlegroups.com
fog has support for the regular S3 iterators (prefix, marker, etc.),
but not really anything built in beyond that (at least at present).
You can see some discussion of using the S3 stuff here:
https://groups.google.com/group/ruby-fog/browse_thread/thread/b34ba6914d191a3a

Let me know if that doesn't help and I can try to further clarify or
provide an example.
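
For reference, those iterator options map onto files.all like so
(untested sketch; the prefix and marker values are just examples):

directory = storage.directories.get('sk-dev')

# prefix/marker/max_keys correspond to S3's listing parameters.
files = directory.files.all(
  :prefix   => 'images/',         # only keys under this pseudo-path
  :marker   => 'images/0500.jpg', # start listing after this key
  :max_keys => 100                # page size, up to S3's cap of 1000
)
files.each { |file| puts file.key }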