Reading a large file on an external HDD (ext4) causes 100% CPU usage

stan...@gmail.com

Jan 17, 2015, 10:27:28 AM
to beagl...@googlegroups.com
Hi.

I am running Ubuntu on a BBB and have an external HDD attached over USB.

I am having trouble reading large files from the external disk over sftp and ftp. I have tried different ftp clients, but the result is the same: after reading part of the file, typically a few GB, CPU usage hits 100% and the transfer speed drops to practically nothing.

I believe I have traced the problem to reading from the disk. If I create a large file:
dd if=/dev/zero of=file.txt count=1024 bs=4000000
(writing it is apparently not a problem)

and then read it using
time sh -c "dd if=file.txt bs=4k"
within a few minutes it jumps to 100% CPU usage, and in many instances it crashes the ssh session with:
"The client has disconnected from the server.  Reason:
Message Authentication Code did not verify (packet #222795). Data integrity has been compromised. "


I found some discussion (https://bbs.archlinux.org/viewtopic.php?id=112846&p=4) of what seems to be a similar issue, where they suggest setting /sys/kernel/mm/transparent_hugepage/defrag to madvise. But I don't know how to try that on the BBB, or whether it is even relevant.
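(In case anyone wants to try it: assuming the kernel exposes the transparent hugepage interface in sysfs, which may not be the case on every BBB kernel, checking and changing it would look roughly like this.)

# show the available modes; the active one is shown in brackets
cat /sys/kernel/mm/transparent_hugepage/defrag

# switch defrag to madvise (needs root; does not persist across reboots)
sudo sh -c "echo madvise > /sys/kernel/mm/transparent_hugepage/defrag"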

Any ideas on how to resolve this nasty problem?

William Hermans

Jan 19, 2015, 11:36:33 AM
to beagl...@googlegroups.com
You may want to look into configuring smaller (much smaller) file buffers for the ftp/sftp servers you're using on the BeagleBone side. It's been a while, so I couldn't tell you specifically how, but it should be something you can look up with an internet search or two.

Jan Stanstrup

Jan 20, 2015, 4:49:54 PM
to beagl...@googlegroups.com
Thank you for your answer.
I have spent some hours now looking into your suggestion, but I have not been able to find anything on how to set the server-side buffer size.
I found something on tcp buffers that apparently can be changed via /proc/sys/net/core/rmem_max and /proc/sys/net/core/wmem_max, but I have no idea if that is relevant. I tried lowering them by a factor of 10, but it made no difference. If anything it seemed to stall faster (which could be random; it doesn't stall at the same point every time).
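For reference, changing them looks roughly like this (the values are only examples, and the change does not survive a reboot):

# read the current limits (bytes)
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max

# lower them via sysctl (needs root)
sudo sysctl -w net.core.rmem_max=16384
sudo sysctl -w net.core.wmem_max=16384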

But to me it looks like a disk read issue. It cannot be right that you cannot read a 4 GB file on a BBB, as my read test in my first post showed. If that is the case, I am very disappointed in the product.

Don deJuan

Jan 20, 2015, 5:36:58 PM
to beagl...@googlegroups.com
I rarely use ftp, but isn't it the rcvbuf/sndbuf/xferbuf settings? sftp also has the -B option, but with sftp you also have ssh to take into account.
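Something like this, assuming OpenSSH's sftp and the BSD/tnftp ftp client (command names vary between clients, so check your own man pages; the host below is just a placeholder):

# sftp: -B sets the per-request buffer size in bytes (OpenSSH's default is 32768)
sftp -B 8192 user@beaglebone.local

# tnftp/BSD ftp: set the socket buffers from the ftp> prompt
ftp> sndbuf 8192
ftp> rcvbuf 8192
ftp> xferbuf 8192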

Don deJuan

Jan 20, 2015, 5:38:04 PM
to beagl...@googlegroups.com
On 01/20/2015 01:49 PM, Jan Stanstrup wrote:
Also, just so you know, all of that was found in the ftp and sftp man pages.

Jan Stanstrup

Feb 24, 2015, 2:00:27 PM
to beagl...@googlegroups.com
Just to conclude on this: it seems to be a kernel issue. With the more official Debian release it doesn't die. It still uses 100% CPU and reading is quite slow:

time sh -c "dd if=file.txt bs=4k"
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 1105.02 s, 3.7 MB/s

Writing, on the other hand, is just fine:

dd if=/dev/zero of=file.txt count=1024 bs=4000000
1024+0 records in
1024+0 records out
4096000000 bytes (4.1 GB) copied, 177.923 s, 23.0 MB/s

Robert Nelson

Feb 24, 2015, 2:02:46 PM
to Beagle Board
On Tue, Feb 24, 2015 at 1:00 PM, Jan Stanstrup <stan...@gmail.com> wrote:
> Just to conclude on this: it seems to be a kernel issue. With the more
> official Debian release it doesn't die. It still uses 100% CPU and
> reading is quite slow:
>
> time sh -c "dd if=file.txt bs=4k"
> 1000000+0 records in
> 1000000+0 records out
> 4096000000 bytes (4.1 GB) copied, 1105.02 s, 3.7 MB/s
>
> Writing, on the other hand, is just fine:
> dd if=/dev/zero of=file.txt count=1024 bs=4000000
> 1024+0 records in
> 1024+0 records out
> 4096000000 bytes (4.1 GB) copied, 177.923 s, 23.0 MB/s

Please add your kernel details (uname -r).

That way we can use your testing and make adjustments. ;)

Regards,

--
Robert Nelson
http://www.rcn-ee.com/

Jan Stanstrup

Feb 24, 2015, 2:21:25 PM
to beagl...@googlegroups.com
I am sorry, but I sent back the BBB while it was still possible. To be honest, I lost patience with this issue after spending evening after evening trying to figure out what was wrong.
I wanted Ubuntu and gave up. But I was using BBB-eMMC-flasher-ubuntu-14.04.1-console-armhf-2015-01-06-2gb.img.xz

Robert Nelson

Feb 25, 2015, 12:16:06 PM
to Beagle Board
On Tue, Feb 24, 2015 at 1:20 PM, Jan Stanstrup <stan...@gmail.com> wrote:
> I am sorry, but I sent back the BBB while it was still possible. To be
> honest, I lost patience with this issue after spending evening after
> evening trying to figure out what was wrong.
> I wanted Ubuntu and gave up. But I was using
> BBB-eMMC-flasher-ubuntu-14.04.1-console-armhf-2015-01-06-2gb.img.xz

Sorry to bring this up, but for future users...

You actually were not testing USB "read"; instead you were testing
how slow "stdout" is... (and it's really slow on ARM...)

beaglebone:

3.14.33-ti-r51:

htop: idle: cpu: 30%

time dd if=/dev/zero of=file.txt count=1024 bs=4000000

htop:
total-cpu: 70-80%
dd-cpu: 40-50%
iotop: 2x.xx M/s : WRITE

4096000000 bytes (4.1 GB) copied, 186.76 s, 21.9 MB/s

real 3m6.805s
user 0m0.020s
sys 1m11.190s

#clear the memory cache
sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"

time dd if=./file.txt of=/dev/null bs=4k

htop:
total-cpu: 60-80%
dd-cpu: 10-20%
iotop: 1x.XX M/s : READ

4096000000 bytes (4.1 GB) copied, 298.522 s, 13.7 MB/s

real 4m58.575s
user 0m1.850s
sys 0m49.980s

#clear the memory cache
sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"

time dd if=./file.txt bs=4k

htop:
total-cpu: 3x.xx%
dd-cpu: 0-0.7%
iotop: 125 K/s : READ (once every 5 seconds...)

^C315+0 records in
314+0 records out
1286144 bytes (1.3 MB) copied, 112.122 s, 11.5 kB/s

real 1m52.222s
user 0m0.020s
sys 0m0.130s

(stopped early, just way too slow..)

While on x86:

3.19.0

voodoo@hades:~$ time dd if=/dev/zero of=file.txt count=1024 bs=4000000
1024+0 records in
1024+0 records out
4096000000 bytes (4.1 GB) copied, 27.8636 s, 147 MB/s

real 0m27.866s
user 0m0.008s
sys 0m5.956s
voodoo@hades:~$ sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"
[sudo] password for voodoo:
voodoo@hades:~$ time dd if=./file.txt of=/dev/null bs=4k
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 68.3066 s, 60.0 MB/s

real 1m8.413s
user 0m0.340s
sys 0m5.708s
voodoo@hades:~$ sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"
voodoo@hades:~$ time dd if=./file.txt bs=4k
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 93.481 s, 43.8 MB/s

real 1m33.655s
user 0m0.368s
sys 0m24.136s