I remember that a long time ago I had a somewhat different idea.
There are actually two aspects of what happens to an uploaded file:
- its data should be stored somewhere
- a user might want to do some additional processing (count bytes, unzip
it on the fly, resend it to a remote machine)
And I thought that for the first part -- storing it somewhere -- there
shouldn't actually be any handlers; Django should just store it in temp
files. However, your proposal may be better. Because, given my example of
unzipping files on the fly, a user might not even want to store the
original file as it is. What do you think about it?
> request.FILES
> -----------------
> This is no longer a MultiValueDict of raw content, but a
> MultiValueDict of UploadedFile objects.
> This will probably hurt the most, as there are probably applications
> assuming that they can take the content from this dict.
I believe this can be made backwards compatible. In my patch[1] to
ticket 1484 (which was duplicated by 2070 long ago) I had a FileDict
class that was lazily fetching file content upon accessing
request.FILES['some_file']['content']. Have a look at it.
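Just to illustrate the idea (this is a from-memory sketch, not the actual
code in the patch; the read_content callable is made up here):

    class LazyFileDict(dict):
        # Sketch: 'content' is read only on first access and then cached.
        def __init__(self, filename, content_type, read_content):
            super(LazyFileDict, self).__init__(
                filename=filename, content_type=content_type)
            self._read_content = read_content

        def __getitem__(self, key):
            if key == 'content' and 'content' not in self:
                # Pull the raw bytes only when somebody actually wants them.
                self['content'] = self._read_content()
            return super(LazyFileDict, self).__getitem__(key)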
> 2. receive_data_chunk(raw_data, start, stop) -- Some data has been
> received by the parser.
Am I right in thinking that raw_data is not the raw data from the socket
but is already decoded from whatever content-transfer-encoding it might be
in (e.g. base64)?
> 5. get_chunk_size()
Why not just an attribute chunk_size? It's shorter, and it will almost
always be just a plain class attribute.
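Something like this, just as a sketch (the default value here is
arbitrary):

    class FileUploadHandler(object):
        chunk_size = 64 * 1024       # plain class attribute, 64 KB chosen arbitrarily

    class BigChunkUploadHandler(FileUploadHandler):
        chunk_size = 1024 * 1024     # a subclass simply overrides the attribute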
> By adding a set_upload_handler() method to request, anyone can
> override the default upload handler. However, this must be done before
> the POST was accessed, and it is probably recommended we raise an
> error if someone tries to set a new upload handler after the FILES
> MultiValueDict is populated.
Instead of having users figure out where to stick a call to
set_upload_handler we could steal the design from middleware and just
have an UPLOAD_HANDLERS setting... It might not be such a good idea if
people often want different logic per view. However, I think a single
global setting is needed for the most common use case: store everything
in temp files.
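In settings.py it would then look much like MIDDLEWARE_CLASSES does (the
setting name and the dotted paths below are purely hypothetical):

    UPLOAD_HANDLERS = (
        'path.to.TemporaryFileUploadHandler',
        'myproject.uploads.QuotaUploadHandler',
    )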
> It's interesting to note that with this framework a lot of interesting
> possibilities open up. I will not write any of the code to do anything
> but the basic temporary disk storage, but here are a few interesting
> examples of what can happen:
> - Gzipping data on the fly [GZipFileUploadHandler +
> GZipFileBackend].
> - Saving file to another FTP Server on the fly
> [FTPFileUploadHandler +
> NoOpFileBackend].
> - Having Cool Ajax-y file uploads [AjaxProgressUploadHandler + Any
> Backend].
> - Having user-based quotas [QuotaUploadHandler + Any Backend].
Heh :-). I was inventing my use cases before I got to this point :-).
[1]: http://code.djangoproject.com/attachment/ticket/1484/1484.m-r.6.diff
*Very* well done -- I'm in agreement with nearly every aspect of your
proposal. Major props for taking on such a sticky issue -- walking
into #2070 is a bit like exploring an overgrown jungle :) A few
comments inline below, but in general I quite like your API and would
like to see your code.
On Tue, Mar 18, 2008 at 11:30 PM, Mike Axiak <mca...@gmail.com> wrote:
> request.set_upload_handler(<upload_handler>)
I especially like this -- it neatly encapsulates the fact that
different folks are going to want quite different file upload
behavior. A few things to think about:
* Do you think it's worth firing a signal
(file_upload_started/file_upload_finished) from the request or from the
base upload handler? This would allow decentralized processing of file
uploads, but could get confusing.
* Do you think we should allow multiple upload handlers (which makes
this call into something like request.add_upload_handler(handler))?
The other option would be to just make folks compose upload handlers
with a wrapper class, which is hardly a hardship.
I don't have answers to either of these questions; something to think
about, at least.
> request.FILES
> -----------------
> This is no longer a MultiValueDict of raw content, but a
> MultiValueDict of UploadedFile objects.
> This will probably hurt the most, as there are probably applications
> assuming that they can take the content from this dict.
It seems to me that you could pretty easily provide a
backwards-compatible API by defining __getitem__ on UploadedFile;
raise a DeprecationWarning there but provide the data in the "old"
style.
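Roughly something like this -- just a sketch, not what the patch has to
look like (the old dict keys here are from memory):

    import warnings

    class UploadedFile(object):
        def __init__(self, filename, content_type, content):
            self.filename = filename
            self.content_type = content_type
            self.content = content

        def __getitem__(self, key):
            # Old dict-style access keeps working, but nags the user.
            warnings.warn(
                "Dict-style access to request.FILES is deprecated; "
                "use the UploadedFile attributes instead.",
                DeprecationWarning)
            return {'filename': self.filename,
                    'content-type': self.content_type,
                    'content': self.content}[key]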
> 5. get_chunk_size() -
Just make this FileUploadHandler.chunk_size -- no reason for getters/setters.
Again, thanks -- this is good stuff.
Jacob
But what about default behaviour? There should be some place to say "all
file uploads should go on disk".
P.S. In my other response we seem to agree on __getitem__ for
request.FILES and an attribute .chunk_size almost exactly :-).
I think the default behavior doesn't need to be a setting:
upload-to-disk for all files over N bytes (1M, maybe?), and
upload-to-RAM for anything smaller.
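The decision itself is trivial; what it boils down to is roughly this
(the helper name and the exact threshold are only illustrative):

    import tempfile
    from cStringIO import StringIO      # Python 2, matching the era of this thread

    MEMORY_THRESHOLD = 1024 * 1024      # the ~1M cut-off suggested above

    def make_upload_target(content_length):
        # Big uploads go straight to a temporary file on disk;
        # small ones are simply buffered in memory.
        if content_length > MEMORY_THRESHOLD:
            return tempfile.TemporaryFile()
        return StringIO()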
Jacob
My concern is for this "out-of-the-box" default not to be hard-coded.
There should be some way for a user to set their own global (i.e. non
per-view) handler. I understand that everyone can write a one-line
middleware that sets the handler, but while we're at the design stage I
thought we could make it a setting instead of another FAQ :-)
This is the right way. An uploaded file will almost never be treated as
text, and unicode doesn't make sense for binary data. It's also
especially useless for the default behaviour of storing uploaded data to
disk: you'd have to encode the just-decoded data back into a stream of
bytes to store it in a file.
Sorry, I didn't mean "default" in the sense of absent user settings. I
want a way to set a user-specific handler not only on a per-view basis
but as a "default" for all upload requests.
To think of a use case... Imagine some system where the /tmp directory is
on a very small partition. An admin would want to direct all uploads
(potentially big) to some other device. And he wants to do it only for
this Django site, not the whole system. Something like this...
IS> An admin would want to direct all uploads (potentially big) to some
IS> other device. And he wants to do it only for this Django site, not
IS> the whole system.
Security also comes to mind: it may be undesirable to use the same
directory for different sites, to prevent information leakage (even the
fact of a file upload may be security-sensitive).
It does. But I was thinking of a setting. However, I don't insist on it.
Since you and Jacob don't seem to feel that it's necessary, let it be just
a method on request. I was merely raising a concern.
I don't like #1 because there's no point in keeping deprecated code in
Django when we can fix it. And #3 is indeed broken because there is
much code in the wild that uses request.FILES directly. It's a public
API after all.
#2 looks reasonable to me.
> I realized that I just want people to have a list interface.
> Therefore, I decided to just leave it as a plain old list. Thus, to
> add an upload handler, you'd just write::
>
> request.upload_handlers.append(some_upload_handler)
>
> And to replace them all::
>
> request.upload_handlers = [some_upload_handler]
>
> I've made a few efforts to ensure that it will raise an error if the
> upload has already been handled. I know this isn't as simple as the
> .set_upload_handler() interface, but I think it's the simplest way we
> can support the list modification/replacement in a useful fashion.
> What do people think about this?
It would be good to invent a declarative setting for this, but since it's
a per-view thing it would be hard. So maybe a list is indeed enough.
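For the per-view case, usage would presumably be as simple as this (the
handler and form classes here are made up for illustration):

    from django.shortcuts import render_to_response

    def upload_view(request):
        # This has to happen before request.POST or request.FILES is
        # accessed, otherwise the body has already been parsed.
        request.upload_handlers.insert(0, ProgressUploadHandler())
        form = UploadForm(request.POST, request.FILES)
        if form.is_valid():
            form.save()
        return render_to_response('upload_done.html')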
> Currently when you try uploading Large files
> (~2GB and greater), you will get a weird Content-Length header (less
> than zero, overflowing).
> ...
> Should
> I/we just ignore this and expect people to be sane and upload
> reasonable data?
If the Content-Length header is screwed up then neither we nor users can
do anything about it. When these file sizes become more widespread I
believe browsers will fix themselves to send a correct Content-Length.
I haven't had time to sit down and devote to reading through the whole
patch, but I have a possibly very easy question that I can't answer from
the docs at the moment.
I'm a simple guy. I like simple stuff. So if I'm deploying a Django
system and it will handle file uploads from some forms and all I want to
do is ensure that large file uploads aren't held in memory, how do I do
that? In other words, I want to avoid the problem that originally
prompted this ticket and the related one.
Does this Just Work(tm) out of the box?
Malcolm
--
Remember that you are unique. Just like everyone else.
http://www.pointy-stick.com/blog/
I didn't either. I thought I may have just been missing something
obvious. But now that you've written the extra bits at the top of the
document, it makes more sense to me as a user. Thanks.
Okay, so this all has to go on the review pile now I guess. That should
be fun, for some value of "fun". Nice work, Mike.
Regards,
Malcolm
--
No one is listening until you make a mistake.
http://www.pointy-stick.com/blog/
Woo! Thanks for your hard work. My thoughts on your questions follow inline:
> Supporting dictionaries in form code
> ------------------------------------
> [...]
> TextFileForm(data={'description': u'Assistance'}, files={'file':
> {'filename': 'test1.txt', 'content': 'hello world'}})
What would an equivalent line look like under the new system? That is, what do
folks need to change their tests to?
> I see three options to deal with this:
> [...]
> 2. Modify the form code to access the attributes of the
> UploadedFile, but on AttributeError use the old dict-style interface
> and emit a DeprecationWarning.
Yes, this is the correct approach. Something along these lines:

    if isinstance(uploadedfile, dict):
        warn(...)  # a DeprecationWarning about the old dict-style interface
        uploadedfile = uploadedfile_from_dict(uploadedfile)
Option #3 is unacceptable: if at all possible we want people's tests to
not break. Warnings are fine; breakage if avoidable is a bad idea.
> The other issue is what we should do with the tests. Should we
> leave them? Should we copy them and create a copy of them for the new
> style? Should we replace them with the new style?
The latter -- fix all the tests to use the new syntax as a demo of how it's
supposed to be done.
> Having upload_handlers be a list object
> ---------------------------------------
> [...]
> Therefore, I decided to just leave it as a plain old list. Thus, to
> add an upload handler, you'd just write::
>
> request.upload_handlers.append(some_upload_handler)
>
> And to replace them all::
>
> request.upload_handlers = [some_upload_handler]
If we do indeed need this -- see below -- then this is the right way to do it.
> What do people think about this?
I'm thinking YAGNI here. Why would I need multiple upload handlers? I think you
need to talk me through your thinking here, because at first glance this smacks
of overengineering. Remember that multiple handlers can always be accomplished by
composition anyway, so unless there's a good reason I think set_upload_handler()
is just much cleaner.
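By composition I mean something like this sketch -- receive_data_chunk is
from your proposal, the other callback name is just assumed here:

    class ComposedUploadHandler(object):
        # Wraps several handlers and fans each callback out to all of them.
        def __init__(self, *handlers):
            self.handlers = handlers

        def receive_data_chunk(self, raw_data, start, stop):
            for handler in self.handlers:
                handler.receive_data_chunk(raw_data, start, stop)

        def file_complete(self, file_size):
            for handler in self.handlers:
                handler.file_complete(file_size)

You'd then pass e.g. ComposedUploadHandler(QuotaUploadHandler(),
AjaxProgressUploadHandler()) to set_upload_handler().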
Similarly, I'm a bit suspicious of FILE_UPLOAD_HANDLERS. Couldn't you just write
a simple middleware to do the same thing? As a general principle, you should try
as hard as you can to avoid introducing new settings; if there's another way
just do it that way. In this case, I'd just document that if you want to use a
custom upload handler globally that you should write a middleware class.
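The documentation for that would boil down to a few lines like these (the
middleware and handler names are placeholders):

    class GlobalUploadHandlerMiddleware(object):
        def process_request(self, request):
            # Runs before the view, so request.POST hasn't been touched yet
            # and it's still legal to swap the upload handler.
            request.set_upload_handler(TemporaryFileUploadHandler())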
> (Mis)Handling Content-Length
> ----------------------------
> [...]
> There's probably not much room for argument here, but it's worth
> asking a larger group. Currently when you try uploading Large files
> (~2GB and greater), you will get a weird Content-Length header (less
> than zero, overflowing).
Personally, I can't see any reason to care too much about people trying
to upload 2GB files through the web, anyway. Anyone allowing uploads should
have a sane upload size limit set at the web server level. Anyone who's allowing
uploads of over 2GB is just asking to get DOSed.
I think this is a place where we just add a note to the docs about setting the
Apache/lighttpd/etc. upload limit and move on.
> Revised API
> ===========
> [...]
> Let me know if you have any comments on the API. (Things like how it
> deals with multiple handlers could be up for discussion.)
I quite like the API. I'm not sure why you'd need to return a custom
UploadedFile from an upload handler, but props for documenting the interface
anyway :)
Thanks again for the hard work -- this looks very good!
Jacob
OK, I think I understand; thanks. It's hard sometimes figuring out a
thought process from its final result. I think you're probably right
to make this a list, then.
> This [FILE_UPLOAD_HANDLERS] took a while for me to get sold on. I can
> only think of 5 or 6 useful upload handlers, but that's still 1956
> possible orderings. It'd be tough to make a handler easy to install
> when it has to be aware of where it belongs. This functionality could
> be replicated in a middleware which reads the settings, but I'm not
> sure that's the best place for something like that.
Hrm, good point. I'll chew a bit more, but I can't think of a good way
to avoid the extra setting (as much as I dislike creeping settings).
Jacob
Maybe I'm missing something obvious, but why would there ever be an
S3UploadHandler? Shouldn't that be handled by a file storage backend?
As for GZipUploadHandler, if you're talking about gzipping it as part
of the save process, shouldn't that also be a file storage backend? Of
course, if you're talking about *un*gzipping it during upload, so it
can be processed by Python, I withdraw the question.
I admit I haven't been following this terribly closely, but now that
both #5361 and #2070 are nearing completion, I'm trying to get a good
handle on all of this in case there are any interactions between the
two that I can help with.
-Gul
Yeah, one thing we'll need to figure out PDQ is what's appropriate for
an upload handler, and what's appropriate for a storage backend.
Hopefully the two of you can work out the breakdown.
FYI, I plan to merge 2070 first (sorry, Marty!) since I think it works
a bit better that way from a dependency POV.
Jacob
I'll read over the patch and the docs and see if I can get a better
handle on how it works, so I can be of more use there. Also, Mike and
I put our heads together in IRC sometimes, so we should be able to get
it sorted out soon.
> FYI, I plan to merge 2070 first (sorry, Marty!) since I think it works
> a bit better that way from a dependency POV.
No worries. I still have another of David's suggestions to integrate
before I can have it "finished" anyway. Plus, there's a 3291-ticket
age difference. The youngest always gets the shaft. :) I can live with
that.
-Gul
Then those people deserve to be beaten heavily about the head and
shoulders. S3 is NOT a reliable upload endpoint. They (Amazon) say
there'll be approximately a 1% failure rate for attempted uploads. As
37signals have noted in the past (they use it for their whiteboard file
storage), that doesn't mean that when you try again it will work,
either. It means, in practice, that periodically for 1% of the time, all
your uploads will fail. Then, a little later, they'll start to work. But
there could be minutes on end where you cannot upload. So unless your
web application is designed to intentionally lose data (in which case I
can think of a big optimisation on the saving end to lose it even faster
and more reliably), you must save to disk before trying to upload to S3.
In short, I can't see any reason the file upload handler should care
about storage systems like this.
Regards,
Malcolm
--
Experience is something you don't get until just after you need it.
http://www.pointy-stick.com/blog/
My original reason for a setting was that a single upload handler can't
possibly know the semantics of other upload handlers and can't decide
where to put itself in the list. How to arrange them should be a decision
of the developer who assembles a project from third-party apps with upload
handlers. It's very similar to middleware: the order is decided in
settings. The only exception would be when a handler really depends on
some other handler, which it can require by raising an exception
somewhere.
Nobody stops a developer from doing both things in parallel: storing on
disk and streaming to S3. Then, when S3 fails, a stored file can be
scheduled for a repeat upload.
The reason for not doing it only the over-the-disk way is speed. Since 99%
of uploads do succeed, they gain heavily from not writing and reading the
whole file on local disk.
Anyway, S3 is just an example. It could be some local media server
instead. I think the main difference between an upload handler and a file
backend is that the latter is generally simpler to write and the former
is generally more flexible.
Looks like I'm contradicting myself a bit here :-). But that's not the
point, actually :-)