Where does the file save when I upload?


V Phani

Jan 16, 2014, 9:24:14 AM1/16/14
to mongoos...@googlegroups.com

Hi,

I just tried to upload a file to a server that doesn't have enough memory.
When I click upload in the web page (enctype is multipart/form-data), Mongoose tries to store the file somewhere on the server and reports "Out of memory", but on the server side I am not using mg_upload or mg_read to save it.
Where does it store the file by default?

Thanks in advance

Phanendra

Sergey Lyubka

Jan 16, 2014, 9:27:51 AM1/16/14
to mongoose-users
Mongoose buffers POST data before it calls the callback, calling realloc() repeatedly to get more memory. If the file is big, realloc() fails. You can use -DUSE_POST_SIZE_LIMIT=X to limit the POST size (documented at http://cesanta.com/#docs,Embed.md).
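
For example, to cap POSTs at roughly 1 MB (assuming a Unix-like build and that the limit is given in bytes; the file names below are illustrative):

    cc my_app.c mongoose.c -DUSE_POST_SIZE_LIMIT=1048576 -o my_app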

When successful, Mongoose passes the POST contents to the callback via the
struct mg_connection::content pointer.
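
As a minimal sketch of reading that buffered body, assuming the Mongoose 5.x event-handler API of that time (event and member names may differ between versions):

    #include "mongoose.h"

    /* Sketch: report how many POST bytes were buffered for this request. */
    static int ev_handler(struct mg_connection *conn, enum mg_event ev) {
      switch (ev) {
        case MG_AUTH:
          return MG_TRUE;                     /* allow everybody */
        case MG_REQUEST:
          /* conn->content points at the buffered POST body,
             conn->content_len is its length in bytes. */
          mg_printf_data(conn, "Received %ld bytes of POST data\n",
                         (long) conn->content_len);
          return MG_TRUE;                     /* request handled */
        default:
          return MG_FALSE;
      }
    }

Such a handler would be registered via mg_create_server() in the usual way.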



V Phani

Jan 17, 2014, 1:13:30 AM1/17/14
to mongoos...@googlegroups.com
Thanks for your response.

Is there any way to avoid calling realloc() repeatedly to get more memory? The data should be written into the directory specified in mg_upload only when I call mg_upload.

Or:

Is there any way to write the data in chunks?

Thanks,
Phanendra

Sergey Lyubka

Jan 17, 2014, 6:23:33 AM1/17/14
to mongoose-users
On Fri, Jan 17, 2014 at 6:13 AM, V Phani <phanend...@gmail.com> wrote:
Thanks for your response.

Is there any way to avoid calling realloc() repeatedly to get more memory? The data should be written into the directory specified in mg_upload only when I call mg_upload.

Currently, there is no way to stream POST data. One possible
solution is to have Mongoose save the POST data into a
temporary file; once all the POST data is saved, Mongoose would call a handler.
The temporary file would contain the non-interpreted, raw POST data.

Note that multipart POST data might contain multiple files, and non-file fields too. How
would you like to have it presented? Right now there is a function, mg_parse_multipart(),
but it works on a memory buffer. A usage example is at https://github.com/cesanta/mongoose/blob/master/examples/upload.c
Take a look at that function. Would you use a similar function that works on a file stream?
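
For reference, the in-memory loop from that example looks roughly like this (a sketch from memory; the exact parameter types are in mongoose.h and the linked upload.c):

    /* Sketch: walk all parts of a multipart POST body buffered in memory. */
    char var_name[100], file_name[100];
    const char *buf = conn->content, *data;   /* conn: the request's mg_connection */
    int len = (int) conn->content_len, data_len, n;

    while ((n = mg_parse_multipart(buf, len,
                                   var_name, sizeof(var_name),
                                   file_name, sizeof(file_name),
                                   &data, &data_len)) > 0) {
      /* data/data_len describe one part; file_name is empty for non-file fields */
      buf += n;
      len -= n;
    }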

jeff shanab

Jan 17, 2014, 6:45:41 AM1/17/14
to mongoos...@googlegroups.com
I deal with security cameras, and a lot of them stream audio out via POST using a multipart/x-mixed-replace MIME header.
It is the server's responsibility in these cases to take the payloads out of each section, the parts marked with --MyBoundry.

AFAIK this MIME type was originally created for the other direction, for browsers to display MJPEG by letting them just keep replacing the image, but it has been adopted for streaming to a server.
This means the server must work on each chunk, not the whole POST, so I would guess there would be two callbacks: one at POST start and one for each MyBoundry section.
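
For illustration, such a stream looks roughly like this on the wire (header details vary by camera; the boundary name is whatever the sender declares):

    POST /audio HTTP/1.1
    Content-Type: multipart/x-mixed-replace; boundary=MyBoundry

    --MyBoundry
    Content-Type: audio/basic
    Content-Length: 4096

    ...4096 bytes of payload...
    --MyBoundry
    Content-Type: audio/basic
    Content-Length: 4096

    ...next payload...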



Sergey Lyubka

Jan 17, 2014, 6:59:11 AM1/17/14
to mongoose-users
On Fri, Jan 17, 2014 at 11:45 AM, jeff shanab <jsha...@gmail.com> wrote:
I deal with security cameras, and a lot of them stream audio out via POST using a multipart/x-mixed-replace MIME header.
It is the server's responsibility in these cases to take the payloads out of each section, the parts marked with --MyBoundry.

AFAIK this MIME type was originally created for the other direction, for browsers to display MJPEG by letting them just keep replacing the image, but it has been adopted for streaming to a server.
This means the server must work on each chunk, not the whole POST, so I would guess there would be two callbacks: one at POST start and one for each MyBoundry section.

Thanks Jeff.

Assume that the URI handler is called with POST data buffered in a
temporary file, i.e. the handler has a FILE * stream opened, and that
file stream contains multipart POST data.

From this point on, there are two basic approaches to parsing that data.
1. As Jeff suggested, a callback-based approach:
   mg_parse_multipart_file(FILE *fp, void (*callback)(char *file_name, FILE *data, int data_len));

   mg_parse_multipart_file() would call the callback for every chunk in the
   multipart data, passing the file name and a data pointer to the callback.

2. Another approach is a "synchronous" one. There are no callbacks; mg_parse_multipart_file()
    can be called repeatedly until it fails. Each call advances to the next chunk.
    This is the approach implemented in the existing mg_parse_multipart() function
    (a usage sketch follows the prototype below).

    int mg_parse_multipart_file(FILE *fp,
                                char *var_name, int var_name_len,
                                char *file_name, int file_name_len,
                                FILE *data, int *data_len);
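
If the second shape were adopted, calling code might look like this. This is purely a sketch of a not-yet-existing API, and it assumes the function copies each part's payload into the caller-supplied `data` stream:

    char var_name[100], file_name[100];
    int data_len;

    for (;;) {
      FILE *part = tmpfile();               /* receives one part's payload */
      if (mg_parse_multipart_file(fp,
                                  var_name, sizeof(var_name),
                                  file_name, sizeof(file_name),
                                  part, &data_len) <= 0) {
        fclose(part);
        break;                              /* no more parts */
      }
      rewind(part);
      /* consume data_len bytes from `part`: one uploaded file or form field */
      fclose(part);
    }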

Implementation-wise, these two would be very similar. The difference is in the API.
I prefer the second one.

What are your thoughts?

Sergey Lyubka

Jan 17, 2014, 8:54:30 AM1/17/14
to mongoose-users
Thinking about large POST buffers, I've just had another idea.
Mongoose can spool POST requests to temporary files, but instead
of passing a FILE * pointer, it can memory-map the file and
hand out a memory address as if it had been malloc()-ed. This way, the API wouldn't
change at all, and the existing mg_parse_multipart() could be used.
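
A minimal sketch of that mapping step, using plain POSIX calls (illustrative only, not Mongoose code):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Map a spooled POST file read-only and return it as a plain buffer. */
    static char *map_post_file(const char *path, size_t *len) {
      int fd = open(path, O_RDONLY);
      struct stat st;
      char *p = NULL;
      if (fd >= 0 && fstat(fd, &st) == 0 && st.st_size > 0) {
        *len = (size_t) st.st_size;
        p = mmap(NULL, *len, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) p = NULL;
      }
      if (fd >= 0) close(fd);   /* the mapping stays valid after close() */
      return p;                 /* caller munmap()s it when done */
    }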

There would be a limit on POST size on 32-bit systems, because
multi-gigabyte files could not be fully mapped.

Environments without a filesystem cannot do file I/O, so the temporary-file
approach wouldn't work there. In that case, Mongoose could buffer in memory
by default, falling back to temporary files only for large POSTs. The
threshold could be configurable, so this solution seems to suit all cases.