S3 Directory Listing Gives "... exists, but isn't a directory" Error in 4.5.1.000


Jamie Jackson

May 15, 2015, 12:09:32 PM5/15/15
to lu...@googlegroups.com
This seems like a Lucee bug, but I wanted to run it by the list first.

I'm having trouble listing certain directories using S3 URIs. I haven't figured out a pattern, and I can't find anything on the S3 side that looks like a problem. In fact, using the same credentials, I have no problem accessing anything whatsoever from another client (s3cmd).

This seems to happen with S3 mappings configured in the admin, as well as with S3 URIs used directly.


My test does the following:
  • List bucket (successful)
  • List /reports directory (successful)
  • Read /a/foo.txt (successful)
  • List /a directory (failure)
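
Paraphrasing the test in code (the bucket name and credential variables below are placeholders, not my real ones):

// placeholder credentials/bucket for illustration
s3root = "s3://#accessKeyId#:#secretKey#@my-test-bucket";

writeDump( directoryList( s3root ) );               // list bucket: OK
writeDump( directoryList( s3root & "/reports" ) );  // OK
writeDump( fileRead( s3root & "/a/foo.txt" ) );     // OK
writeDump( directoryList( s3root & "/a" ) );        // "... exists, but isn't a directory"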
I've seen other S3-related posts lately, and I realize that some have abandoned Lucee S3 functionality due to reliability problems, but it would be nice to simply have robust native support, so it's something worth pursuing.

Thoughts?

Thanks,
Jamie

Jamie Jackson

May 19, 2015, 9:39:30 AM5/19/15
to lu...@googlegroups.com
El Bumpito.

Tom Chiverton

May 20, 2015, 3:56:16 AM5/20/15
to lu...@googlegroups.com
I saw this recently too, using S3 URLs directly. I assumed it was something crazy on Amazon's side. Removing and recreating the top-level folder in the bucket (aws s3 sync is your friend) fixed it.
Curiously, other folders in the same bucket were fine.
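
For anyone wanting the recipe, it was roughly this (bucket and prefix names changed):

aws s3 sync s3://my-bucket/broken-folder/ /tmp/broken-folder/
aws s3 rm s3://my-bucket/broken-folder/ --recursive
aws s3 sync /tmp/broken-folder/ s3://my-bucket/broken-folder/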

Tom

Tom Chiverton

May 20, 2015, 4:52:08 AM5/20/15
to lu...@googlegroups.com
This happened again today, with a different top-level folder.

I restarted Lucee, but the error did not clear.

Our code looks like:

if( !directoryExists( base ) ){
    writeLog( 'client dir missing for #base#', 'debug' );
    directoryCreate( base );
    writeLog( 'FileBrowser created client dir for ' & clientFolder );
}

and directoryCreate() throws 'it exist a file with same name'.

For now we're going to try/catch and swallow this error...
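
Something like this (untested sketch, same names as the snippet above):

try {
    if( !directoryExists( base ) ){
        writeLog( 'client dir missing for #base#', 'debug' );
        directoryCreate( base );
        writeLog( 'FileBrowser created client dir for ' & clientFolder );
    }
}
catch( any e ){
    // swallow the bogus 'it exist a file with same name' error for now
    writeLog( 'ignored S3 directoryCreate() error: #e.message#', 'warning' );
}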

Tom

Tom Chiverton

May 20, 2015, 4:57:17 AM5/20/15
to lu...@googlegroups.com
If we swallow the directoryCreate() error, subsequent directoryList() calls fail with 'isn't a directory'.

I think directoryExists() is broken? And maybe directoryCreate() then does something funny with the S3 bucket, leaving it permanently broken?

Tom Chiverton

May 20, 2015, 5:03:34 AM5/20/15
to lu...@googlegroups.com
Hmm. There is something different about the working vs. non-working directories, as seen from the AWS CLI:

tchiverton@ev34:~$ aws s3 ls s3://xxx-files/extravisionbeta2/
                           PRE _archivedMessageContent/
                           PRE docs/
                           PRE images/
tchiverton@ev34:~$ aws s3 ls s3://xxx-files/extravision/
                           PRE _archivedMessageContent/
                           PRE _editor/
                           PRE docs/
                           PRE images/
                           PRE static/
2015-05-15 11:10:59          1

I'm not sure what that 1-byte file is, but we didn't mean to put it there... maybe it's a symptom of the Lucee bug?
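
If that object's key is the prefix itself (which would explain the blank name in the listing), the aws CLI's s3api commands should be able to show it, e.g.:

aws s3api head-object --bucket xxx-files --key extravision/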

Jamie, could you check what you get?

Tom

Tom Chiverton

May 20, 2015, 5:42:00 AM5/20/15
to lu...@googlegroups.com
I am definitely able to restore my ability to list affected top-level folders by running the following Python code.
It finds any one-byte object under the indicated folder and removes it:

import boto  # classic boto (not boto3)

s3_conn = boto.connect_s3(aws_access_key, aws_secret_key)
bucket = s3_conn.get_bucket(bucket_name)
for key in bucket.list('extravision/'):
    if key.size == 1:
        # show the offending placeholder before deleting it
        print(key.key + ' ' + str(key.size))
        print(key.get_metadata('Content-Type'))
        key.delete()

Tom

Jamie Jackson

May 20, 2015, 10:06:59 AM5/20/15
to lu...@googlegroups.com
Hey Tom,

Nice job finding that! Yes, that was what was causing my problem, too.

After a little investigation, it looks like that 0-byte object is a "directory" object. One way it comes into existence is if you use the AWS S3 web interface to first create the directory (as you might do before uploading files into the directory).
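
As far as I can tell, the console's "create folder" boils down to putting an empty object whose key ends in a slash; the CLI equivalent would be something like:

aws s3api put-object --bucket my.bucket.com --key foo/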

If you create an object directly, by doing something like:

aws s3 cp ./bar.txt s3://my.bucket.com/foo/bar.txt

... then one of those "directory" objects is not generated.

Aside from your solution, an easy way to remove a single one of those placeholders is to delete the "directory" object itself (in this case, "foo"):

aws s3 rm s3://my.bucket.com/foo/

... and the "contained" files remain intact. (S3 has no real directories; "foo/" is just another key, so deleting it doesn't touch keys like "foo/bar.txt".)

With all that said, other clients don't seem to have much trouble with "directories" containing these "directory" objects, so Lucee should be able to handle the situation as well; I'll file a ticket. (Though now that I know what we're dealing with, I finally understand why s3cmd throws a warning about an empty object during some operations.)
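
In the meantime, an untested workaround sketch on the Lucee side might be to check for, and remove, the placeholder before listing (whether fileExists()/fileDelete() actually see the placeholder this way is an open question):

s3dir = "s3://#accessKeyId#:#secretKey#@my.bucket.com/foo";
// if Lucee mistakes the placeholder for a file, remove it first
if( fileExists( s3dir ) && !directoryExists( s3dir ) ){
    fileDelete( s3dir );
}
files = directoryList( s3dir );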

Thanks,
Jamie


Jamie Jackson

May 20, 2015, 10:22:24 AM5/20/15
to lu...@googlegroups.com
I meant 1-byte files, not 0-byte files, BTW.
