Big files in Mogile (> 2G)


dbond...@gmail.com

May 8, 2009, 7:24:30 PM
to mogile
I am looking to save files larger than 2GB into MogileFS. I have been
able to upload and download them with no problem after changing the
length column in the file table to bigint(20). The problem is
replication: Mogile keeps trying but failing, probably a 32-bit int
limit in the code somewhere.
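
For reference, the schema change was something along these lines (the
UNSIGNED is an assumption on my part; adjust to taste):

ALTER TABLE file MODIFY length BIGINT(20) UNSIGNED;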

Are there any problems with storing files this big in Mogile without
chunking them up? We would like to go up to 10GB (these are long 1080p
movies).

Any ideas where it would be failing in the code on replication?

thanks
- daniel



Ask Bjørn Hansen

May 8, 2009, 8:38:39 PM
to mog...@googlegroups.com

On May 8, 2009, at 16:24, dbond...@gmail.com wrote:

> Any ideas where it would be failing in the code on replication?

Maybe a dumb question, but: Are your webdav/http storage servers
configured to support large files?


- ask

--
http://develooper.com/ - http://askask.com/


Daniel B

May 8, 2009, 8:47:16 PM
to mogile
Yes, they make it to the webdav servers no problem; MD5 checksums
match the originals.
We are running nginx on FreeBSD.
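
In case it helps, the relevant part of our nginx config looks roughly
like this (path, port, and values are placeholders rather than our
exact production config):

server {
    listen 7500;
    location / {
        root /var/mogdata/dev1;
        client_max_body_size 0;                  # 0 disables the request body size limit (default is 1m)
        dav_methods PUT DELETE MKCOL COPY MOVE;  # allow WebDAV-style uploads
        create_full_put_path on;                 # create intermediate directories on PUT
    }
}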


Tomas Doran

May 11, 2009, 5:24:30 AM
to mog...@googlegroups.com
Daniel B wrote:
> yes, they make it to the webdav servers no problem. md5 checksums
> match the originals.
> We are running nginx on FreeBSD
>
> On May 8, 5:38 pm, Ask Bjørn Hansen <a...@develooper.com> wrote:
>> On May 8, 2009, at 16:24, dbondur...@gmail.com wrote:
>>
>>> Any ideas where it would be failing in the code on replication?

Large file replication uses multiple HTTP PUT requests for ranges of the
content, rather than just doing a PUT for the entire file contents at
once (IIRC).

When we tested this with nginx, it got it completely wrong: subsequent
writes overwrote the first chunk.

Maybe this is the issue you're seeing? I'm not sure whether this has
been fixed in later versions of nginx yet.
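
To illustrate what I mean, a ranged replication upload would look
something like this on the wire (hypothetical path, host, and byte
ranges, from memory):

PUT /dev2/0/000/000/0000000123.fid HTTP/1.0
Host: storage2.example.com:7500
Content-Range: bytes 0-1048575/3221225472
Content-Length: 1048576

PUT /dev2/0/000/000/0000000123.fid HTTP/1.0
Host: storage2.example.com:7500
Content-Range: bytes 1048576-2097151/3221225472
Content-Length: 1048576

A server that ignores Content-Range on PUT treats each request as a
fresh upload of the whole file, which is exactly the "subsequent
writes overwrote the first chunk" behaviour we saw.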

lighttpd doesn't suffer from this problem, but it is limited to less
than 2GB (a signed 32-bit value, i.e. 2^31 - 1 = 2,147,483,647 bytes -
don't ask me why it's signed!) on 32-bit machines unless you modify
the source code.

Cheers
t0m

Daniel B

May 11, 2009, 2:08:44 PM
to mogile
It uses PUT and copies the file in 1MB chunks:

http://cpansearch.perl.org/src/DORMANDO/mogilefs-server-2.30/lib/MogileFS/Worker/Replicate.pm

See the http_copy method.

This may be the problem: if $clen is too large for a 32-bit integer,
$bytes_to_read may fail or be 0, so nothing will be written.

my ($data, $written, $remain) = ('', 0, $clen);
my $bytes_to_read = 1024*1024; # read 1MB at a time until there's less than that remaining
$bytes_to_read = $remain if $remain < $bytes_to_read;

Also, it does the PUT in 1MB chunks over HTTP/1.0.
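
For context, here's a rough sketch of how that loop presumably hangs
together (the while/read structure is my reconstruction, not the
actual source; the destination write is elided):

my ($data, $written, $remain) = ('', 0, $clen);
while ($remain > 0) {
    my $bytes_to_read = 1024*1024;   # read 1MB at a time until there's less than that remaining
    $bytes_to_read = $remain if $remain < $bytes_to_read;
    my $n = read($sock, $data, $bytes_to_read);
    last unless $n;                  # EOF or read error: bail out
    # ... write $data to the destination's HTTP PUT socket here ...
    $written += $n;
    $remain  -= $n;
}
# If $clen gets truncated or wrapped by a 32-bit limit somewhere upstream,
# $remain starts at 0 (or negative) and the loop copies nothing, which
# would match the replication failures described above.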


