http multipart post -- file upload


James Lyons

unread,
Apr 26, 2012, 12:29:50 PM4/26/12
to golang-nuts
So I spent some time trying to code up a little upload function to
send files up to a server. The server expects a form upload, like you
would get from using this html in the browser:
<form method='POST' enctype='multipart/form-data'
action='http://localhost:8888'>
File to upload: <input type=file name=upfile><br>
<br>
<input type=submit value=Press> to upload the file!
</form>

I couldn't find a pre-packaged way to do this, so I came up with the
following simple approach:

package main

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"os"
)

func main() {
	target_url := "http://localhost:8888/"
	body_buf := bytes.NewBufferString("")
	body_writer := multipart.NewWriter(body_buf)
	filename := "/path/to/file.rtf"
	file_writer, err := body_writer.CreateFormFile("upfile", filename)
	if err != nil {
		fmt.Println("error writing to buffer")
		return
	}
	fh, err := os.Open(filename)
	if err != nil {
		fmt.Println("error opening file")
		return
	}
	io.Copy(file_writer, fh)
	body_writer.Close()
	http.Post(target_url, "bad/mime", body_buf)
}
-------------

Aside from not setting the mime type, this "works". My fear, however,
is that for large files io.Copy will write the whole contents of the
file into the buffer before I can send it with http.Post. For small
files this isn't a concern, but for large files (4GB+, let's say)
that's a lot of memory. I'm wondering if there is a buffer type with a
notion of "maximum size before I start using disk", so that for large
files it would spill the buffer to disk and not chew memory. I feel
like there should be some way (that I'm missing) to do this using
bufio or something, but I wasn't coming up with anything. It's
unfortunate that I have to copy the file into the buffer for the
request, and then send the whole request to the wire. Perhaps I should
create a reader/writer where you can "append" a reader to a set of
readers, and when one returns EOF it starts reading the next... So you
could append a file to a buffer without reading its contents, until
you're actually "reading" the whole thing to copy to the socket.

Anyone have experience with this?

-James-

ryan.bressler

unread,
Apr 26, 2012, 2:18:05 PM4/26/12
to golang-nuts
I recently had to do something like this to post some JSON; I used
io.Pipe and launched a goroutine to do the writing:

preader, pwriter := io.Pipe()
mpf := multipart.NewWriter(pwriter)

r, err := http.NewRequest("POST", *url, preader)
if err != nil {
	fmt.Println(err)
}

go func() {
	mpfwriter, err := mpf.CreateFormFile("jsonfile", "data.json")
	if err != nil {
		fmt.Println(err)
	}

	jsonencoder := json.NewEncoder(mpfwriter)
	jsonencoder.Encode(tasklist)
	mpf.Close()
	pwriter.Close()
}()

client := &http.Client{}
resp, err := client.Do(r)
if err != nil {
	fmt.Println(err)
}

(edited down from actual code but not actually run)

James Lyons

unread,
Apr 27, 2012, 4:06:43 PM4/27/12
to golang-nuts
So... for the golang experts out there, I'm sure this was probably
obvious, but in case anyone else is struggling with learning what's
available in the standard library: io.MultiReader was *very* useful in
this context.

The following is a very simple POST of file data to a web service that
expects files to arrive in multipart form uploads, and it does so
without reading the file data unnecessarily.

package main

import (
"fmt"
"net/http"
"mime/multipart"
"bytes"
"os"
"io"
)


func postFile(filename string, target_url string) (*http.Response, error) {
	body_buf := bytes.NewBufferString("")
	body_writer := multipart.NewWriter(body_buf)

	// use the body_writer to write the Part headers to the buffer
	_, err := body_writer.CreateFormFile("upfile", filename)
	if err != nil {
		fmt.Println("error writing to buffer")
		return nil, err
	}

	// the file data will be the second part of the body
	fh, err := os.Open(filename)
	if err != nil {
		fmt.Println("error opening file")
		return nil, err
	}
	// need to know the boundary to properly close the part myself.
	boundary := body_writer.Boundary()
	close_buf := bytes.NewBufferString(fmt.Sprintf("\r\n--%s--\r\n", boundary))

	// use multi-reader to defer the reading of the file data until
	// writing to the socket buffer.
	request_reader := io.MultiReader(body_buf, fh, close_buf)
	fi, err := fh.Stat()
	if err != nil {
		fmt.Printf("error stating file: %s", filename)
		return nil, err
	}
	req, err := http.NewRequest("POST", target_url, request_reader)
	if err != nil {
		return nil, err
	}

	// set headers for multipart, and Content-Length
	req.Header.Add("Content-Type", "multipart/form-data; boundary="+boundary)
	req.ContentLength = fi.Size() + int64(body_buf.Len()) + int64(close_buf.Len())

	return http.DefaultClient.Do(req)
}

// sample usage
func main() {
	target_url := "http://localhost:8888/"
	filename := "/path/to/file.rtf"
	postFile(filename, target_url)
}
Paddy Foran

unread,
Apr 27, 2012, 4:14:50 PM4/27/12
to James Lyons, golang-nuts
I've actually been working on this for the last few days, trying to remove the Ruby dependency from http://dev.iron.io/worker/languages/go. Your question couldn't have come at a better time.

This is what I'm using at the moment: https://gist.github.com/2504488 I'm going to continue to refine it. Thanks for all the help in getting it working. :)

Kyle Lemons

unread,
Apr 27, 2012, 4:15:15 PM4/27/12
to James Lyons, golang-nuts
What happened to multipart writer?  When I looked at the code, it doesn't appear to buffer anything, and it does the MIME encoding for you.

James Lyons

unread,
Apr 27, 2012, 4:31:03 PM4/27/12
to Kyle Lemons, golang-nuts
@Kyle -- Well, I wanted to use the multipart writer, but I didn't see
an interface that would let me pass a file to it. The multipart
writer formats the Part, then lets you write data to the Part (using
the returned writer), and finally lets you close the Part by closing
the writer. That is great when you have a smallish amount of data to
post. But when it's a file (potentially large), you want something
like a multi-reader to defer the reading until the http library
writes the request to the bufio that wraps the socket, which the
multipart writer doesn't do for you. So I had to replicate much of
what it would do in order to use multi-readers to accomplish what I
want (logical concatenation without reading a file). It would be nice
if there were a version of the multipart writer that worked more like
this, because often what you want is to stick a whole file in the
Part, and there isn't any reason to read it all up front.

@Paddy -- you'll notice that in the first email I sent I did something
very similar to your current approach. For small files that's fine,
but if you have a big one it's really annoying. Plus there is a
performance hit (of a sort) in that you have to copy the data once
from disk to a buffer, and then from the buffer to the socket buffer
at the end. The extra copy is all in memory, however, so I'm not sure
it would show up except under the heaviest of loads. I care more about
stability, though, and potentially running out of memory on a large
file just feels bad. So I like this better: it feels like it should
be nominally more performant, and quite a bit more stable.

Kyle Lemons

unread,
Apr 27, 2012, 5:30:52 PM4/27/12
to James Lyons, golang-nuts
On Fri, Apr 27, 2012 at 1:31 PM, James Lyons <james...@gmail.com> wrote:
@Kyle -- Well, I wanted to use the multipart writer, but I didn't see
an interface that would let me pass a file to it.  The multipart
writer formats the Part, and then lets you write data to the Part
(using the returned writer) and then finally lets you close the part
by closing the writer.  Which is great when what you have is a
smallish amount of data to post.  But when its a file (potentially
large) you want to use something like multi-reader to defer the
reading of things till the http library wants to write to the request
to the bufio that it wraps the socket with.

io.Copy will write only as fast as it's being read by the client, and a deferred close cleans up nicely.

James Lyons

unread,
Apr 27, 2012, 6:59:47 PM4/27/12
to Kyle Lemons, golang-nuts
I don't understand how that is helpful in this context. It depends,
of course, on how you set things up. io.Copy will write to a
bytes.Buffer using ReadFrom or WriteTo, which happens immediately. So
if you're trying to set up a reader to use as your body input to
http.Post(), one might choose to use a buffer. The problem with that
is it reads all the data from the file into the buffer, because the
implementation calls Write (or ReadFrom/WriteTo, depending; a buffer
implements those as an immediate read/write). I suspect that initial
choice of basic writer implementation may be the problem; any other
suggestions here? If not, I'm basing this on the behavior of
bytes.Buffer.

http://golang.org/src/pkg/io/io.go?s=11080:11140#L326

So I don't see how io.Copy buffers anything at all in this context,
but I'm curious if I'm missing something.

Then there is the use of multipart. I used CreateFormFile, which is
just a convenience wrapper for CreatePart(). This returns a writer
which you can use to write the part, but not in a deferred way. If
you call Write on it, it passes that through to the underlying writer
implementation (in this case a bytes.Buffer) and actually writes the
data (the file contents) to that buffer.

http://golang.org/src/pkg/mime/multipart/writer.go?s=2274:2352#L86

So what you'd like is an "append" method on the writer returned by
CreatePart() that would let you add a reader (a file) without reading
its contents into the underlying buffer. Close would then do the same
thing, closing the Part by "appending" the close string to the reader
that formed the part.

This can all be accomplished with a multi-reader: not until the calls
to io.Copy that happen deeper in the call stack for http.Post() will
the file contents actually be read, which is the desired behavior.

Kyle Lemons

unread,
Apr 27, 2012, 9:04:57 PM4/27/12
to James Lyons, golang-nuts
Perhaps I'm missing something, but it seems like:

file, err := os.Open("filename.txt")
if err != nil {
	// handle error
}
defer file.Close()
parts := multipart.NewWriter(rw) // rw: the ResponseWriter
defer parts.Close()
filePart, _ := parts.CreateFormFile("upfile", "filename.txt")
io.Copy(filePart, file)

would be the behavior you want.

James Lyons

unread,
Apr 27, 2012, 9:25:48 PM4/27/12
to Kyle Lemons, golang-nuts
If I were writing a server, then perhaps. However, I'm attempting a
multipart form POST upload; I'm not trying to handle the reception of
such a request. There is no "requestwriter" that I have found.

Kyle Lemons

unread,
Apr 27, 2012, 9:32:27 PM4/27/12
to James Lyons, golang-nuts
Ah, yes, my mistake.  I would create an io.Pipe and give the read end to the request's Body and use the write end on the io.Copy.

Matt Aimonetti

unread,
Jul 2, 2013, 8:23:28 PM7/2/13
to golan...@googlegroups.com, James Lyons
I went through the same exercise and here is my version (relying on mime/multipart)

el.rey....@gmail.com

unread,
Jun 22, 2014, 7:22:54 AM6/22/14
to golan...@googlegroups.com
I was looking for a way to do this without the bytes.Buffer, to cut down on the RAM usage for larger files.
This can be done with an io.Pipe; you can see it here: https://gist.github.com/cryptix/9dd094008b6236f4fc57

I usually don't like to bump old topics, but I think it's a nice addition and this topic comes up in the search index.