Very slow copy speed to s3ql mount


Chris

Dec 5, 2015, 1:59:41 PM
to s3ql
I'm copying files both large and small (10MB to 40GB) and my upload speed is very slow. I'm on a 100mbit connection and can saturate it in other uses, but with s3ql I am currently transferring at 2-3MB/sec (1/4th of my upload capacity) and I'm wondering if this is normal or if there's something I can do.

I tried changing the upload threads and nothing seemed to help.

I mount with this command:

mount.s3ql --compress none --threads 25 --metadata-upload-interval 3600 --allow-other (swift location and mount location)

Is there something I'm missing or should try differently?


Nikolaus Rath

Dec 5, 2015, 3:42:22 PM
to s3ql


On December 5, 2015 10:59:40 AM PST, Chris <exh...@gmail.com> wrote:
>I'm copying files both large and small (10MB to 40GB) and my upload
>speed
>is very slow. I'm on a 100mbit connection and can saturate it in other
>uses, but with s3ql I am currently transferring at 2-3MB/sec (1/4th of
>my
>upload capacity) and I'm wondering if this is normal or if there's
>something I can do.

What does contrib/benchmark.py say?

What storage provider are you using?

Are you sure the problem is with S3QL? Can you upload with faster speed to the same server if you use different tools?
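
For example, a quick raw-upload check outside of S3QL might look like this, assuming the OpenStack swift command-line client is configured for the same account (the container and file names here are only placeholders):

time swift upload test-container big-test-file.bin

If that runs much faster than a copy through the S3QL mount, the bottleneck is somewhere in S3QL rather than in the link.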


Best
Nikolaus

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Chris

Dec 9, 2015, 6:39:39 PM
to s3ql

I'm using hubic, and I realize now that you aren't so fond of it! ;)

Using other methods I can max out my upload speed to my hubic account, so the problem does lie with s3ql somewhere.

Looking at my network usage, the upload speed is very erratic. It keeps spiking up very fast (10 MB/sec), then falls to 1 MB/sec, and then spikes back up again.

My filesystem was created with the default object size. Would a larger one have been better?

Riku Bister

Dec 10, 2015, 4:53:52 AM
to s3ql
I just realized you posted this in two places; it's better to continue here under the right topic:


My kimsufi (OVH) dedicated server has a 100mbit connection and I can only get about 3000kb/sec upload speed on a good day. Using swiftexplorer or the like I can easily max out my upload speed, so the bottleneck is s3ql in my case. I tried disabling compression and it made no difference, so I'm wondering if there's something else I could try. Maybe my Atom processor is too weak?

Tell me what settings you are using; could you paste your mount command here, please? I'll try my best to help. Also, please post your mkfs.s3ql --version.

Have you watched the command top while uploading? If the CPU goes to 100% while you upload, then that is what's slowing things down, and it can be tuned a bit. In my case I use OVH as well, but not Kimsufi; it's a VPS Classic, and my file transfers go at 150 Mbit/s when uploading a chunk/file (while watching the network traffic via the slurm program).
Another thing that slows s3ql down a bit is disk speed. (Well, I use an old VPS, so it has an HDD instead of an SSD.) It works OK.

Nikolaus Rath

Dec 10, 2015, 11:17:38 AM
to s3...@googlegroups.com
On Dec 09 2015, Chris <exh...@gmail.com> wrote:
> On Saturday, December 5, 2015 at 3:42:22 PM UTC-5, Nikolaus Rath wrote:
>> On December 5, 2015 10:59:40 AM PST, Chris <exh...@gmail.com>
>> wrote:
>> >I'm copying files both large and small (10MB to 40GB) and my upload
>> >speed
>> >is very slow. I'm on a 100mbit connection and can saturate it in other
>> >uses, but with s3ql I am currently transferring at 2-3MB/sec (1/4th of
>> >my
>> >upload capacity) and I'm wondering if this is normal or if there's
>> >something I can do.
>>
>> What does contrib/benchmark.py say?
>>
>> What storage provider are you using?
>>
>> Are you sure the problem is with S3QL? Can you upload with faster speed to
>> the same server of you use different tools?
>>
>
> I'm using hubic, and I realize now that you aren't so fond of it! ;)
>
> Using other methods I can max out my upload speed to my hubic account, so
> the problem does lie with s3ql somewhere.
>
> Looking at my network usage, the upload speed is very erratic. It will keep
> spiking up to very fast (10MB/sec) then fall to 1MB/sec and then spike back
> up again.
>
> My filesystem was created with the default object size. Would a larger one
> have been better?

Probably not.

What does contrib/benchmark.py say?


Best,
-Nikolaus

--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

»Time flies like an arrow, fruit flies like a Banana.«

Chris

Dec 10, 2015, 2:26:26 PM
to s3ql

I unfortunately can't run benchmark.py right now because I'm in the middle of a large transfer to my s3ql mount; I'll do so when it's done.

As for my settings, I do this:

mount.s3ql --compress none --threads 25 --metadata-upload-interval 3600 --allow-other swift://localhost/default /mnt/hubic

My output of mkfs.s3ql --version: S3QL 2.15

When I watch top, I seem to hover around 50% CPU usage at all times (quad-core Atom), but the load average is 2.59, 2.96, 3.31.

I'm guessing this is probably a CPU issue, because like I said earlier, I can upload to my container with SwiftExplorer and it's extremely fast. Is there anything I can do to lower the CPU usage? I already disabled compression (don't need it) but I'm not sure what else I could do to optimize this.

Riku Bister

Dec 10, 2015, 3:23:02 PM
to s3ql



mount.s3ql --compress none --threads 25 --metadata-upload-interval 3600 --allow-other swift://localhost/default /mnt/hubic


I think the CPU is OK. Did you watch one CPU or the total usage across all CPUs? Press "1" in top to show all cores.

You might also try --threads 5 or less; I've noticed the thread count does matter, and you have to find the setting that suits you best. Maybe try 2 first, then raise it. For me there was not much difference between 5 and 10; 10 just took a lot more power, and since I can't upload that fast at the same time anyway, it doesn't matter (I'm using ownCloud). I'm also sharing the s3ql mount over Samba to my home network via an OpenVPN tunnel.
Try a different cache size too, say 1 GB or so; --cachesize=1512000 is roughly 1.5 GB (the value is in KiB).

I'm using these settings: --allow-other --cachesize=1512000 --threads=5 --metadata-upload-interval=7200 --compress=none --nfs (I created this particular filesystem with a 50 MB --max-obj-size). I have 3 filesystems running at the same time, all with different settings; this one is mainly for media. About the --nfs option, I have no idea what it does; there isn't much documentation about it, so I added it because I use Samba. I haven't tried without it yet.

Nikolaus Rath

Dec 10, 2015, 10:48:48 PM
to s3...@googlegroups.com
On Dec 10 2015, Riku Bister <riku...@gmail.com> wrote:
> About the --nfs option, I have no idea what it does; there isn't much
> documentation about it, so I added it because I use Samba. I haven't
> tried without it yet.

--nfs is really only required if you want to export the S3QL file system
over NFS. Samba, sshfs, WebDAV and everything else doesn't count.
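
A minimal sketch of what an actual NFS export of an S3QL mount would involve (the storage URL, network range, and fsid below are only illustrative; FUSE filesystems need an explicit fsid in /etc/exports because they have no stable device number):

mount.s3ql --nfs swift://localhost/default /mnt/s3ql

and then an /etc/exports entry such as:

/mnt/s3ql 192.168.1.0/24(rw,sync,fsid=1001,no_subtree_check)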

Chris

Dec 12, 2015, 2:17:50 PM
to s3ql

Unfortunately I tried your suggestions and it made no noticeable difference. I can get up to maybe 5MB/sec but it should be about 10MB/sec, because that's what I can get with swiftexplorer.

If I watch the bandwidth used with the program bmon, I can see it spike over and over, as if it's not uploading constantly but only every few seconds, so it averages out to half the speed it should be.

Riku Bister

Dec 12, 2015, 3:12:08 PM
to s3ql
# dd if=/dev/urandom of=/mnt/s3ql/Varasto_2/testi2  bs=1M count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 45.4556 s, 6.9 MB/s
# dd if=/dev/urandom of=/mnt/s3ql/Varasto_2/testi2  bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 14.0052 s, 7.5 MB/s


# s3ql/s3ql-2.15/contrib/benchmark.py swift://sensored/ /home/riku/TNC_Test_Ver-1.1.iso07.cdr --threads 3
Preparing test data...
Measuring throughput to cache...
Cache throughput with   4 KiB blocks: 15739 KiB/sec
Cache throughput with   8 KiB blocks: 20506 KiB/sec
Cache throughput with  16 KiB blocks: 24010 KiB/sec
Cache throughput with  32 KiB blocks: 26103 KiB/sec
Cache throughput with  64 KiB blocks: 27438 KiB/sec
Cache throughput with 128 KiB blocks: 27952 KiB/sec
Measuring raw backend throughput..
Backend throughput: 8058 KiB/sec
Test file size: 31.19 MiB
compressing with lzma-6...
lzma compression speed: 2192 KiB/sec per thread (in)
lzma compression speed: 363 KiB/sec per thread (out)
compressing with bzip2-6...
bzip2 compression speed: 11196 KiB/sec per thread (in)
bzip2 compression speed: 2719 KiB/sec per thread (out)
compressing with zlib-6...
zlib compression speed: 26023 KiB/sec per thread (in)
zlib compression speed: 8012 KiB/sec per thread (out)

With 128 KiB blocks, maximum performance for different compression
algorithms and thread counts is:

Threads:                              1           2           3           4           8
Max FS throughput (lzma):     2192 KiB/s   4384 KiB/s   6576 KiB/s   8769 KiB/s  17538 KiB/s
..limited by:                       CPU         CPU         CPU         CPU         CPU
Max FS throughput (bzip2):   11196 KiB/s  22393 KiB/s  27952 KiB/s  27952 KiB/s  27952 KiB/s
..limited by:                       CPU         CPU   S3QL/FUSE   S3QL/FUSE   S3QL/FUSE
Max FS throughput (zlib):    26023 KiB/s  26170 KiB/s  26170 KiB/s  26170 KiB/s  26170 KiB/s
..limited by:                       CPU      uplink      uplink      uplink      uplink

All numbers assume that the test file is representative and that
there are enough processor cores to run all active threads in parallel.
To compensate for network latency, you should use about twice as
many upload threads as indicated by the above table.

Does KiB mean kibibyte or kilobyte?

Riku Bister

Dec 12, 2015, 3:18:05 PM
to s3ql

Yes, you see spikes over and over because it writes 10 MB at a time (if you didn't change that setting when running mkfs.s3ql); it is supposed to work that way. Format with a bigger chunk size and each sending spike will carry more data.
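
A larger object size can only be chosen when the filesystem is created. A sketch, with the storage URL as a placeholder (--max-obj-size is given in KiB, so 51200 means 50 MiB):

mkfs.s3ql --max-obj-size 51200 swift://localhost/default

Note that this applies only to a freshly created filesystem; it will not change an existing one.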

Nikolaus Rath

Dec 12, 2015, 4:03:49 PM
to s3...@googlegroups.com
On Dec 12 2015, Riku Bister <ri...@titanix.org> wrote:
> Does KiB mean kibibyte or kilobyte?

It means 2^10 bytes, i.e. kibibytes.

https://en.wikipedia.org/wiki/Kibibyte

Best

Riku Bister

Dec 12, 2015, 6:32:44 PM
to s3ql


It means 2^10 bytes, i.e. kibibytes.

https://en.wikipedia.org/wiki/Kibibyte

In that case it's a lot of speed, over 200 Mbit/s, just to clear that up for Chris :) Google has a nice calculator :D
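
As a worked conversion of the 27952 KiB/s ceiling from the benchmark above:

27952 KiB/s x 1024 bytes/KiB x 8 bits/byte = 228,982,784 bits/s, or about 229 Mbit/s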

Chris

Dec 12, 2015, 9:59:29 PM
to s3ql
Preparing test data...
Measuring throughput to cache...
Unable to execute ps, assuming process 8286 has terminated.
Cache throughput with   4 KiB blocks: 2545 KiB/sec
Cache throughput with   8 KiB blocks: 4503 KiB/sec
Cache throughput with  16 KiB blocks: 4252 KiB/sec
Cache throughput with  32 KiB blocks: 5343 KiB/sec
Cache throughput with  64 KiB blocks: 5850 KiB/sec
Cache throughput with 128 KiB blocks: 5865 KiB/sec
Measuring raw backend throughput..
Enter backend login:
Enter backend passphrase:
Backend throughput: 890 KiB/sec
Test file size: 87.11 MiB
compressing with lzma-6...
lzma compression speed: 6219 KiB/sec per thread (in)
lzma compression speed: 0 KiB/sec per thread (out)
compressing with bzip2-6...
bzip2 compression speed: 18169 KiB/sec per thread (in)
bzip2 compression speed: 0 KiB/sec per thread (out)
compressing with zlib-6...
zlib compression speed: 31261 KiB/sec per thread (in)
zlib compression speed: 30 KiB/sec per thread (out)


With 128 KiB blocks, maximum performance for different compression
algorithms and thread counts is:

Threads:                              1           2           4           8
Max FS throughput (lzma):     5865 KiB/s   5865 KiB/s   5865 KiB/s   5865 KiB/s
..limited by:                 S3QL/FUSE   S3QL/FUSE   S3QL/FUSE   S3QL/FUSE
Max FS throughput (bzip2):    5865 KiB/s   5865 KiB/s   5865 KiB/s   5865 KiB/s
..limited by:                 S3QL/FUSE   S3QL/FUSE   S3QL/FUSE   S3QL/FUSE
Max FS throughput (zlib):     5865 KiB/s   5865 KiB/s   5865 KiB/s   5865 KiB/s
..limited by:                 S3QL/FUSE   S3QL/FUSE   S3QL/FUSE   S3QL/FUSE

Riku Bister

Dec 13, 2015, 6:03:47 AM
to s3ql
Hmm...
That does seem slow; try more tests with different settings and other file sizes.
But in your case it comes out limited by S3QL/FUSE even with lzma. That is very odd, and the cache is slow too. What are your disk speeds? Did you install the required dugong library and everything? You can check with:

python3 -c 'import dugong; print(dugong.__version__)'

In my case this was the biggest problem on Debian, but I finally got it working.

What distribution are you using?

Nikolaus Rath

Dec 13, 2015, 3:25:57 PM
to s3...@googlegroups.com
On Dec 12 2015, Chris <exh...@gmail.com> wrote:
> Preparing test data...
> Measuring throughput to cache...
> Unable to execute ps, assuming process 8286 has terminated.
> Cache throughput with 4 KiB blocks: 2545 KiB/sec
> Cache throughput with 8 KiB blocks: 4503 KiB/sec
> Cache throughput with 16 KiB blocks: 4252 KiB/sec
> Cache throughput with 32 KiB blocks: 5343 KiB/sec
> Cache throughput with 64 KiB blocks: 5850 KiB/sec
> Cache throughput with 128 KiB blocks: 5865 KiB/sec
> Measuring raw backend throughput..
> Enter backend login:
> Enter backend passphrase:
> Backend throughput: 890 KiB/sec

So for some reason S3QL is only able to send data at roughly 890 KiB/s
to the backend. That is your bottleneck. Changing compression settings
or number of threads won't change it.

Are you sure that a different application is able to do better? Did you
try the other application right before or after this test?

Best,

Riku Bister

Dec 16, 2015, 12:24:50 PM
to s3ql
I see your point about slow copies; maybe I have the same problem now.
I've started having the same problems, and I really have no idea what I did. The only change on the system is PHP 7, but it's installed locally, so that is not the issue; the gateway is still using PHP 5, so that part is unchanged.
2015-12-16 19:11:01.231 6329:Dummy-16 s3ql.backends.common.wrapped: Encountered ConnectionTimedOut (send/recv timeout exceeded), retrying ComprencBackend.perform_read (attempt 1)...
2015-12-16 19:11:01.251 6329:Dummy-16 s3ql.backends.swift._do_request: started with 'GET', '/s3ql_data_1', None, None, None, None
2015-12-16 19:11:01.252 6329:Dummy-16 s3ql.backends.swift._do_request_inner: started with GET /v1/AUTH_sensored/s3ql_data_1

It just stalls there and nothing happens!
This is a new filesystem with only a few files on it for testing.
Something is going on, definitely a bug in s3ql. Downloading the same file via the swift command works with no problems and no delays.

This problem has started on all my s3ql drives; at the moment they are actually unusable. Only writing works; reading just stalls for a long time on the GET command for no apparent reason. Where can I look into this?
And one more thing: I have over 60k files now. That might also be contributing to the slowness, but it's no excuse, because I can use the swift command to download a file without any problems!


Riku Bister

Dec 16, 2015, 5:14:59 PM
to s3ql
I have to take back what I said.
After a few hours of dumping packets, running tests, etc., the problem is the hubic servers: there are 6 of them, round-robin IPs assigned to the same hostname, and 50% of them are super slow; they are broken. I guess this needs to be reported to their team to fix, but there is no place to tell them, as the forum is full of spam and nobody there bothers to answer. I managed to fix it temporarily by using only one host. Now the question is how long that will last...
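
A sketch of that kind of temporary workaround (the hostname and IP below are placeholders; first list the round-robin addresses, then pin the fastest one in /etc/hosts):

dig +short storage.example-hubic.net
echo "203.0.113.5 storage.example-hubic.net" >> /etc/hosts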

Nikolaus Rath

Dec 17, 2015, 11:02:36 AM
to s3...@googlegroups.com
On Dec 16 2015, Riku Bister <ri...@titanix.org> wrote:
> I have to take back what I said.
> After a few hours of dumping packets, running tests, etc., the problem
> is the hubic servers: there are 6 of them, round-robin IPs assigned to
> the same hostname, and 50% of them are super slow; they are broken. I
> guess this needs to be reported to their team to fix, but there is no
> place to tell them, as the forum is full of spam and nobody there
> bothers to answer.

Well, I guess with hubic you get what you paid for.