Trying to do simple upload: 'str' object has no attribute 'connection'


truebosko

May 25, 2008, 12:21:58 AM
to boto-users
Hi there,

I have a project that was using the Amazon S3 Python wrappers, but I
need to handle uploading large files, and apparently boto is good at
that!

With that in mind, I decided to do a simple method using the Boto
system:

def save_s3_data(key, data, content_type):
    k = Key(AWS_BUCKET_NAME)
    k.key = key
    k.set_contents_from_string(data)

My system passes in a key based on the user/project, and "data" is
essentially the file from a file-upload form.

Looks pretty simple, but I get the following error:
AttributeError at /xxxx
'str' object has no attribute 'connection'

Which junks out on:
D:\dev\cake\boto\s3\key.py in set_contents_from_file

315.     self._compute_md5(fp)
316.     if self.name == None:
317.         self.name = self.md5
318.     if not replace:
319.         k = self.bucket.lookup(self.name)
320.         if k:
321.             return
322.     self.send_file(fp, headers, cb, num_cb)


Any ideas what I am doing wrong?

Thanks

Patrick Altman

May 25, 2008, 1:52:06 AM
to boto-...@googlegroups.com
I think you want to be setting k.name instead of k.key.

Also, you might want to consider using set_contents_from_file(fp),
which takes a file handle (or, if data is a filename,
set_contents_from_file(open(data, 'r'))).
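
For illustration, a minimal sketch of both call styles (the bucket
name, key name, and paths below are placeholders, and this assumes
boto can find your AWS credentials):

import boto
from boto.s3.key import Key

# Placeholder bucket; assumes AWS credentials are in the environment
# or the boto config.
conn = boto.connect_s3()
bucket = conn.get_bucket('my-bucket')

k = Key(bucket)
k.key = 'uploads/example.txt'

# Upload from an in-memory string:
k.set_contents_from_string('some data')

# Upload from an open file handle:
fp = open('/tmp/example.txt', 'rb')
k.set_contents_from_file(fp)
fp.close()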

On May 24, 2008, at 11:21 PM, truebosko wrote:

> def save_s3_data(key, data, content_type):
>     k = Key(AWS_BUCKET_NAME)
>     k.key = key
>     k.set_contents_from_string(data)

---
Patrick Altman
(615) 300-2930

truebosko

May 25, 2008, 12:12:29 PM
to boto-users
Hmm. Now I am not sure what I should be doing.

Using this: k.set_contents_from_file(open(data, 'r'))

(Note: I am using Django.)

Passing in self.cleaned_data['file_name'], my error is:
coercing to Unicode: need string or buffer, UploadedFile found

Passing in self.cleaned_data['file_name'].content (which is the raw
file content), I get:
file() argument 1 must be (encoded string without NULL bytes), not str

Any ideas? Uploading worked fine with the old S3 wrapper, and it's
frustrating that converting over to boto is such a hassle. Hopefully I
can get it working.


Thanks

Patrick Altman

May 25, 2008, 12:36:19 PM
to boto-...@googlegroups.com
You need the full path to a file local to your server. I believe
cleaned_data is just going to give you a filename from the form
upload. That being said, I would highly recommend not tying up your
request with a push to S3; rather, put the push to S3 in a separate
server-side process.
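
As a rough sketch of that idea (the spool directory, function name,
and worker process here are all hypothetical):

import os

SPOOL_DIR = '/var/spool/pending-s3'  # placeholder spool directory

def handle_upload(uploaded_file, key):
    # Write the upload to local disk during the request; stream it in
    # chunks (Django's UploadedFile API) to keep memory use flat.
    path = os.path.join(SPOOL_DIR, key.replace('/', '_'))
    out = open(path, 'wb')
    for chunk in uploaded_file.chunks():
        out.write(chunk)
    out.close()
    # A separate cron job or daemon then walks SPOOL_DIR, uploads each
    # file with Key.set_contents_from_filename(path), and deletes the
    # local copy once the push to S3 succeeds.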

---
Patrick Altman
(615) 300-2930

[Sent from my iPhone]

truebosko

May 25, 2008, 1:25:50 PM
to boto-users
Ok,

So what exactly do you suggest? My assumptions are:
- Upload the file to my local server, then push it to S3 afterwards
and remove the file from my local drive. The problem with this is that
I am still using bandwidth on my server.
- Use some other process, which I do not know of, to upload the file
straight from a file-upload form? :)

Sorry, I am trying to look into it and figure out how I can solve
this, but those are pretty much the only things I can think of.

Thanks ..



Patrick Altman

May 25, 2008, 1:46:38 PM
to boto-...@googlegroups.com, boto-users
You are still using bandwidth if you upload in the request process.
You can avoid this and upload directly to Amazon without touching your
server by using the POST API instead of using boto to make PUT
requests.
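
Roughly, that approach has your server sign an upload policy while the
browser posts the file bytes straight to S3. A minimal sketch of the
signing step (bucket name, expiration, and credentials are
placeholders):

# S3 browser-based POST: the server signs a policy document and the
# browser form posts the file directly to the bucket URL, so the
# bytes never pass through your webserver.
import base64, hmac, hashlib

AWS_SECRET_KEY = 'placeholder-secret-key'

policy_document = (
    '{"expiration": "2008-06-01T12:00:00.000Z",'
    ' "conditions": ['
    '{"bucket": "my-bucket"},'
    '["starts-with", "$key", "uploads/"]'
    ']}'
)

policy = base64.b64encode(policy_document.encode('utf-8'))
signature = base64.b64encode(
    hmac.new(AWS_SECRET_KEY.encode('utf-8'), policy, hashlib.sha1).digest())

# The HTML form then POSTs to http://my-bucket.s3.amazonaws.com/ with
# fields key, AWSAccessKeyId, policy, and signature, plus the file
# input itself.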

---
Patrick Altman
(615) 300-2930

[Sent from my iPhone]

Mitchell Garnaat

May 25, 2008, 2:03:56 PM
to boto-...@googlegroups.com
Hi -

You need to pass a Bucket object into the constructor rather than a bucket name.  Here's a code snippet that should work for you:

>>> c = boto.connect_s3()
>>> b = c.lookup('bucket_name')
>>> k = b.new_key('key_name')
>>> k.set_contents_from_string('this is a test')

If you are uploading large files, the best way is to use set_contents_from_file (which takes a file pointer as an argument) or set_contents_from_filename (which takes the fully qualified path to the file to be uploaded).
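
For large uploads these methods also take a progress callback; a small
sketch in the same style (the callback name and num_cb value are
illustrative):

>>> def progress(bytes_sent, total_bytes):
...     print '%d of %d bytes uploaded' % (bytes_sent, total_bytes)
...
>>> k.set_contents_from_filename('/path/to/big_file', cb=progress, num_cb=20)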

Hope that helps.

Mitch

truebosko

May 26, 2008, 11:11:16 AM
to boto-users
Hmm, are you sure? I was told by others that if I have an upload
process that feeds directly into Amazon, it's fine.

I would use a simple POST, but the problem is that with large files it
simply won't work. I was told boto is a bit better for large files
because it essentially "streams" an upload (I read about it having a
callback, so you could tell the user how many kilobytes have been
uploaded).

Was I wrong in this? My main issue is that I need to upload large
files, and that is why I decided to check out boto.

Thanks


Patrick Altman

May 26, 2008, 11:25:32 AM
to boto-...@googlegroups.com
Yes, you are going to take a bandwidth hit:

--upload--> [ your server doing boto magic ] --upload-to-aws--> [ s3 bucket ]

The fact that you are streaming bytes through your webserver and then
back out to Amazon S3 means you are paying for both inbound and
outbound bytes for the same file.
