This is probably more of a question, as you seem to have more experience with hardware than I do (I am a programmer). I have been messing around with all the various RAID combinations on the device (RAID 0, 1 and 10), and no matter which combination I try I get the exact same write speed (about 72 MB/s), which I believe is the maximum write speed of the disks I am using when used singly. So I tried a dd if=/dev/zero of=test bs=1M count=1000 in two threads to two of the drives and got the same combined write speed of 72 MB/s (when you add the speeds together). This got me scratching my head a bit. Is there only one 6 Gbps lane that all the ports are multiplexed through, or is something strange going on? I should at least have gotten above the write speed of a single drive. I have tried old kernels too (like the original v3 kernel) and different Debian distros (jessie, stretch and buster), and I think I am getting the same results. Is there a hardware limitation I am missing? On a side note, thanks for all your hard work! It has been inspiring for me playing with your tools, building the kernels, and so on.
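The parallel test described above can be sketched like this. On the GnuBee the two output files would sit on two different SATA drives; plain temp files and a smaller transfer size are used here purely for illustration (assumptions, not the real setup):

```shell
# Parallel write test: two dd writers at once.  If the SATA ports are
# independent, the summed per-process throughput should beat a single
# drive's write speed.
out1=$(mktemp) ; out2=$(mktemp)
dd if=/dev/zero of="$out1" bs=1M count=100 conv=fdatasync 2> dd1.log &
dd if=/dev/zero of="$out2" bs=1M count=100 conv=fdatasync 2> dd2.log &
wait
# dd reports its MB/s on stderr; add the two figures for combined speed.
grep copied dd1.log dd2.log
rm -f "$out1" "$out2"
```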
I just tried with all six disks. Here is the output; notice the ~19 MB/s per drive. Are all the SATA ports multiplexed together on one lane or something?
According to the MediaTek specs the chip has 3 PCIe lanes, and the specs for the ASM1061 (and the commodity 1061 cards you can buy on eBay) claim you can run two full SATA III 6 Gbps ports on one PCIe lane. Looking at the bottom of the board (GnuBee PC2) I can see three ASM1061 chips, each seemingly connected directly to a different lane on the MediaTek chip. So from a theoretical hardware perspective that all adds up to the prospect of six full-speed SATA ports! But as you can see above, the maximum throughput I can push through all the buses at the same time is less than one saturated SATA III port. Far, far less: only around 100 MB/s, when theoretically even one SATA II port should be able to hit 300 MB/s. Even the original PCIe v1.0a has a theoretical throughput of 250 MB/s per lane, so even if these were 3 PCIe v1.0a lanes we should be able to see higher than 100 MB/s across all devices. This suggests there's a kernel or driver issue somewhere that needs to be addressed, unless I'm misinterpreting the specs (or maybe there's a hardware bottleneck elsewhere?). I'd love to help work on this if you think it's an issue. (Sorry for the spam; this is my first time hacking on a kernel for an SBC device and I'm loving it.) I just need some direction on where to focus my efforts! http://www.asmedia.com.tw/eng/e_show_products.php?item=118
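The back-of-envelope math above can be written out explicitly; the figures are nominal raw link rates, not measured values:

```shell
# Aggregate theoretical bandwidth of three PCIe 1.0a x1 lanes
# (250 MB/s raw per lane) vs the ~100 MB/s observed across all drives.
awk 'BEGIN {
  lanes = 3; per_lane = 250   # MB/s, PCIe 1.0a x1 raw rate
  printf "theoretical aggregate: %d MB/s (observed: ~100 MB/s)\n",
         lanes * per_lane
}'
```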
root@gnubee:~# uname -a
Linux gnubee.gnubee 5.4.6+ #3 SMP Mon Jan 6 20:31:09 CET 2020 mips GNU/Linux
5.5 us, 17.5 sy, 0.0 ni, 64.0 id, 12.6 wa, 0.0 hi, 0.3 si, 0.0 st
dd if=/dev/zero of=/root/test-1gb-zero bs=1M count=1000 oflag=direct
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 11.1887 s, 93.7 MB/s
6.8 us, 36.6 sy, 0.0 ni, 55.6 id, 0.0 wa, 0.0 hi, 1.0 si, 0.0 st
dd if=/dev/zero of=/root/test-1gb-zero bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 21.7277 s, 48.3 MB/s
# cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
7: 0 0 0 0 MIPS 7 timer
8: 5488291 5471178 5683959 5474474 MIPS GIC Local 1 timer
9: 13882051 0 0 0 MIPS GIC 63 IPI call
10: 0 7854029 0 0 MIPS GIC 64 IPI call
11: 0 0 14188299 0 MIPS GIC 65 IPI call
12: 0 0 0 7681272 MIPS GIC 66 IPI call
13: 1527765 0 0 0 MIPS GIC 67 IPI resched
14: 0 1580563 0 0 MIPS GIC 68 IPI resched
15: 0 0 1669087 0 MIPS GIC 69 IPI resched
16: 0 0 0 1585764 MIPS GIC 70 IPI resched
17: 37 0 0 0 MIPS GIC 19 1e000600.gpio-bank0, 1e000600.gpio-bank1, 1e000600.gpio-bank2
18: 1415 0 0 0 MIPS GIC 33 ttyS0
19: 0 0 0 0 MIPS GIC 27 1e130000.sdhci
20: 29 0 0 0 MIPS GIC 29 xhci-hcd:usb1
21: 467043 0 0 0 MIPS GIC 10 1e100000.ethernet
23: 397374 0 0 0 MIPS GIC 11 ahci[0000:01:00.0]
24: 0 0 0 0 MIPS GIC 31 ahci[0000:02:00.0]
25: 397 0 0 0 MIPS GIC 32 ahci[0000:03:00.0]
26: 37 0 0 0 1e000600.gpio 18 reset
ERR: 439
root@gnubee:~# cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
7: 0 0 0 0 MIPS 7 timer
8: 5494033 5476867 5689943 5480236 MIPS GIC Local 1 timer
9: 13892559 0 0 0 MIPS GIC 63 IPI call
10: 0 7857592 0 0 MIPS GIC 64 IPI call
11: 0 0 14197088 0 MIPS GIC 65 IPI call
12: 0 0 0 7685115 MIPS GIC 66 IPI call
13: 1529080 0 0 0 MIPS GIC 67 IPI resched
14: 0 1581723 0 0 MIPS GIC 68 IPI resched
15: 0 0 1670498 0 MIPS GIC 69 IPI resched
16: 0 0 0 1587295 MIPS GIC 70 IPI resched
17: 37 0 0 0 MIPS GIC 19 1e000600.gpio-bank0, 1e000600.gpio-bank1, 1e000600.gpio-bank2
18: 1415 0 0 0 MIPS GIC 33 ttyS0
19: 0 0 0 0 MIPS GIC 27 1e130000.sdhci
20: 29 0 0 0 MIPS GIC 29 xhci-hcd:usb1
21: 467257 0 0 0 MIPS GIC 10 1e100000.ethernet
23: 398796 0 0 0 MIPS GIC 11 ahci[0000:01:00.0]
24: 0 0 0 0 MIPS GIC 31 ahci[0000:02:00.0]
25: 397 0 0 0 MIPS GIC 32 ahci[0000:03:00.0]
26: 37 0 0 0 1e000600.gpio 18 reset
ERR: 440
--
You received this message because you are subscribed to the Google Groups "GnuBee" group.
To unsubscribe from this group and stop receiving emails from it, send an email to gnubee+un...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/gnubee/0cc1c74e-3d03-4c20-a6e3-7fbada0190b9%40googlegroups.com.
01:46:44 PM DEV tps rkB/s wkB/s areq-sz aqu-sz await svctm %util
01:46:46 PM sda 120.00 80128.00 0.00 667.73 0.00 5.47 8.21 98.50
01:46:46 PM sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:46:46 PM sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:46:46 PM sdd 36.50 0.00 30720.00 841.64 0.12 9.16 5.62 20.50
01:46:46 PM sde 106.00 27264.00 0.00 257.21 0.00 2.09 9.34 99.00
01:46:46 PM sdf 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:46:46 PM md1 107.00 27320.25 0.00 255.33 0.00 0.00 0.00 0.00
Average: DEV tps rkB/s wkB/s areq-sz aqu-sz await svctm %util
Average: sda 107.88 61493.50 9.50 570.13 0.00 5.03 8.93 96.37
Average: sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: sdd 0.38 0.00 129.50 345.33 0.01 27.00 13.33 0.50
Average: sde 86.25 54272.00 0.00 629.24 0.25 7.94 10.86 93.63
Average: sdf 86.75 54656.00 0.00 630.04 0.27 8.10 10.79 93.63
Average: md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: DEV tps rkB/s wkB/s areq-sz aqu-sz await svctm %util
Average: sda 65.17 40277.33 0.00 618.07 0.00 5.42 11.48 74.83
Average: sdb 56.00 34133.33 0.00 609.52 0.10 6.87 12.98 72.67
Average: sdc 55.83 34816.00 0.00 623.57 0.07 6.56 12.90 72.00
Average: sdd 0.33 0.00 2.00 6.00 0.08 249.50 10.00 0.33
Average: sde 56.83 35072.00 0.00 617.10 0.11 7.14 12.76 72.50
Average: sdf 55.17 33536.00 0.00 607.90 0.10 7.08 12.75 70.33
Average: md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: DEV tps rkB/s wkB/s areq-sz aqu-sz await svctm %util
Average: sda 61.50 35496.00 0.00 577.17 0.00 5.34 11.59 71.25
Average: sdb 58.25 33408.00 0.00 573.53 0.11 6.80 11.93 69.50
Average: sdc 60.50 34688.00 0.00 573.36 0.07 6.62 11.94 72.25
Average: sdd 0.25 0.00 256.00 1024.00 0.00 10.00 20.00 0.50
Average: sde 61.00 35584.00 0.00 583.34 0.12 7.23 11.89 72.50
Average: sdf 62.00 36992.00 0.00 596.65 0.07 6.43 12.18 75.50
Average: md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
You can play around with the irq smp_affinity bitmask. On my x86 systems the irqs seem to balance on their own; I think on this MIPS board, Linux cannot figure out how to assign an irq to multiple cores and just assigns them all to the first core of the bitmask. However, if you set smp_affinity to a specific core, Linux will use the first CPU of your affinity mask. E.g. you could reassign the 3 ahci irqs to core1, core2, and core3 with this:

echo 2 > /proc/irq/23/smp_affinity
echo 4 > /proc/irq/24/smp_affinity
echo 8 > /proc/irq/25/smp_affinity
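To confirm where those interrupts are currently pinned, something like this should work (a sketch; the irq numbers 23-25 come from the /proc/interrupts dump above, and the files only exist on a machine that actually has those irqs):

```shell
# Print the current CPU-affinity bitmask for each AHCI irq.
# Falls back to "n/a" on machines where these irqs don't exist.
for irq in 23 24 25; do
  printf 'irq %s smp_affinity: ' "$irq"
  cat "/proc/irq/$irq/smp_affinity" 2>/dev/null || echo "n/a"
done
```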
root@gnubee:~# dd if=/dev/zero of=/dev/null bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 3.64753 s, 287 MB/s
root@gnubee:~# mount -t tmpfs none /mnt/tmp
root@gnubee:~# dd if=/dev/zero of=/mnt/tmp/test_$$ bs=1M conv=fdatasync
dd: error writing '/mnt/tmp/test_1978': No space left on device
250+0 records in
249+0 records out
261382144 bytes (261 MB, 249 MiB) copied, 2.90455 s, 90.0 MB/s
root@gnubee:~# rm /mnt/tmp/test_1978
root@gnubee:~# umount /mnt/tmp
root@gnubee:~# mount -t tmpfs none /mnt/tmp
root@gnubee:~# dd if=/dev/zero of=/mnt/tmp/test_$$ bs=1M oflag=dsync
dd: error writing '/mnt/tmp/test_1978': No space left on device
250+0 records in
249+0 records out
261382144 bytes (261 MB, 249 MiB) copied, 2.87896 s, 90.8 MB/s
root@gnubee:~# rm /mnt/tmp/test_1978
root@gnubee:~# umount /mnt/tmp
:~$ dd if=/dev/zero of=/dev/null bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1,0 GB, 1000 MiB) copied, 0,189893 s, 5,5 GB/s
:~$ sudo mount -t tmpfs none /mnt/tmp
:~$ dd if=/dev/zero of=/mnt/tmp/test_$$ bs=1M count=1000 conv=fdatasync
1000+0 records in
1000+0 records out
1048576000 bytes (1,0 GB, 1000 MiB) copied, 0,952923 s, 1,1 GB/s
:~$ sudo umount /mnt/tmp
:~$ sudo mount -t tmpfs none /mnt/tmp
:~$ dd if=/dev/zero of=/mnt/tmp/test_$$ bs=1M count=1000 oflag=dsync
1000+0 records in
1000+0 records out
1048576000 bytes (1,0 GB, 1000 MiB) copied, 1,01765 s, 1,0 GB/s
:~$ sudo umount /mnt/tmp
ubnt@ubnt:~$ time dd if=/dev/zero of=/dev/null bs=1M count=1000
1000+0 records in
1000+0 records out
real 0m0.570s
user 0m0.000s
sys 0m0.560s
1 GB / 0.570 s = 1.75 GB/s