Recommended Settings for "-block-size" and "-threshold"


Yvan Tian

Nov 9, 2018, 6:36:24 AM
to stressapptest-discuss

Hello Nick,


I'm a bit confused by the options "--read-block-size/--write-block-size" and "--read-threshold/--write-threshold".


As I see in the sat source code, read_threshold_/write_threshold_ default to 100 ms, described there as a reasonable limit for reading/writing a sector (512 bytes). In other words, a read or write thread only has to finish a block at roughly 5 KB/s, which is far below the I/O speed of any modern HDD. If I want to read/write a block of 1000 KB, should I set the read and write thresholds to something like 100 s? Does a time like that make sense?



  read_threshold_ = 100000;         // 100ms is a reasonable limit for
  write_threshold_ = 100000;        // reading/writing a sector

  read_timeout_ = 5000000;          // 5 seconds should be long enough for a
  write_timeout_ = 5000000;         // timeout for reading/writing
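
For reference, here is a rough sketch of the proportional-scaling arithmetic behind that question (illustrative only, not from the stressapptest source): the default of 100 ms per 512-byte sector implies a throughput floor of about 5 KB/s, and keeping that same floor for a 1,024,000-byte block (the size used in the command below) works out to roughly 200 s.

  // Sketch: scale the per-sector threshold to a larger block size while
  // keeping the minimum-throughput floor the default implies.
  #include <cstdint>
  #include <cstdio>

  int main() {
    const int64_t sector_bytes = 512;
    const int64_t default_threshold_us = 100000;   // 100 ms per 512-byte sector
    const int64_t block_bytes = 1024000;           // e.g. --read-block-size 1024000

    // Implied throughput floor: 512 B / 0.1 s = 5120 B/s (~5 KB/s).
    const double floor_bytes_per_sec =
        sector_bytes * 1.0e6 / default_threshold_us;

    // Threshold that keeps the same floor for the larger block:
    // 1,024,000 B / 5120 B/s = 200 s.
    const double scaled_threshold_us = block_bytes * 1.0e6 / floor_bytes_per_sec;

    printf("throughput floor: %.0f B/s\n", floor_bytes_per_sec);
    printf("scaled threshold: %.0f us (%.0f s)\n",
           scaled_threshold_us, scaled_threshold_us / 1.0e6);
    return 0;
  }

In practice a threshold derived from the drive's realistic worst-case latency is probably more useful than one derived from that 5 KB/s floor, which is far below what any modern HDD delivers.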



The following is the log of a sat run on a system with a 7200 RPM SATA HDD (/dev/sdb1). A threshold of 0.1 s is fine most of the time; for short periods a block read/write takes longer, but it never exceeds 1 s. Setting the threshold either too short or too long seems unsuitable, and the same goes for the block size. Are there any suggested or recommended values for these options?


[root@localhost stressapptest_v18]# stressapptest -M 10636.94843 -m 48 -i 48 -c 48 -W --read-block-size 1024000 --write-block-size 1024000 --read-threshold 100000 --write-threshold 100000 --segment-size 10240000000 --blocks-per-segment 32 -s 900 -d /dev/sdb1 -l runlog

2018/11/09-23:55:46(CST) Log: Commandline - stressapptest -M 10636.94843 -m 48 -i 48 -c 48 -W --read-block-size 1024000 --write-block-size 1024000 --read-threshold 100000 --write-threshold 100000 --segment-size 10240000000 --blocks-per-segment 32 -s 900 -d /dev/sdb1 -l runlog

2018/11/09-23:55:46(CST) Stats: SAT revision 1.0.9_autoconf, 64 bit binary

2018/11/09-23:55:46(CST) Log: root @ localhost.localdomain on Fri Nov  9 18:45:35 CST 2018 from open source release

2018/11/09-23:55:46(CST) Log: 1 nodes, 24 cpus.

2018/11/09-23:55:46(CST) Log: Prefer plain malloc memory allocation.

2018/11/09-23:55:46(CST) Log: Using mmap() allocation at 0x7f70e0602000.

2018/11/09-23:55:46(CST) Stats: Starting SAT, 10636M, 900 seconds

2018/11/09-23:55:48(CST) Log: region number 17 exceeds region count 1

2018/11/09-23:55:48(CST) Log: Region mask: 0x1

2018/11/09-23:55:59(CST) Log: Seconds remaining: 890

2018/11/09-23:56:09(CST) Log: Seconds remaining: 880

2018/11/09-23:56:19(CST) Log: Seconds remaining: 870

2018/11/09-23:56:29(CST) Log: Seconds remaining: 860

2018/11/09-23:56:39(CST) Log: Seconds remaining: 850

2018/11/09-23:56:49(CST) Log: Seconds remaining: 840

2018/11/09-23:56:59(CST) Log: Seconds remaining: 830

2018/11/09-23:57:09(CST) Log: Seconds remaining: 820

2018/11/09-23:57:19(CST) Log: Seconds remaining: 810

2018/11/09-23:57:29(CST) Log: Seconds remaining: 800

2018/11/09-23:57:39(CST) Log: Seconds remaining: 790

2018/11/09-23:57:49(CST) Log: Seconds remaining: 780

2018/11/09-23:57:59(CST) Log: Seconds remaining: 770

2018/11/09-23:58:09(CST) Log: Seconds remaining: 760

2018/11/09-23:58:19(CST) Log: Seconds remaining: 750

2018/11/09-23:58:29(CST) Log: Seconds remaining: 740

2018/11/09-23:58:39(CST) Log: Seconds remaining: 730

2018/11/09-23:58:49(CST) Log: Seconds remaining: 720

2018/11/09-23:58:59(CST) Log: Seconds remaining: 710

2018/11/09-23:59:09(CST) Log: Seconds remaining: 700

2018/11/09-23:59:19(CST) Log: Seconds remaining: 690

2018/11/09-23:59:29(CST) Log: Seconds remaining: 680

2018/11/09-23:59:39(CST) Log: Seconds remaining: 670

2018/11/09-23:59:49(CST) Log: Seconds remaining: 660

2018/11/09-23:59:59(CST) Log: Seconds remaining: 650

2018/11/10-00:00:09(CST) Log: Seconds remaining: 640

2018/11/10-00:00:19(CST) Log: Seconds remaining: 630

2018/11/10-00:00:29(CST) Log: Seconds remaining: 620

2018/11/10-00:00:39(CST) Log: Seconds remaining: 610

2018/11/10-00:00:49(CST) Log: Seconds remaining: 600

2018/11/10-00:00:59(CST) Log: Seconds remaining: 590

2018/11/10-00:01:04(CST) Log: Read took 135524 us which is longer than threshold 100000 us on disk /dev/sdb1 (thread 145).

2018/11/10-00:01:04(CST) Log: Read took 169463 us which is longer than threshold 100000 us on disk /dev/sdb1 (thread 145).

2018/11/10-00:01:05(CST) Log: Read took 473145 us which is longer than threshold 100000 us on disk /dev/sdb1 (thread 145).

2018/11/10-00:01:05(CST) Log: Read took 144052 us which is longer than threshold 100000 us on disk /dev/sdb1 (thread 145).

2018/11/10-00:01:06(CST) Log: Read took 145652 us which is longer than threshold 100000 us on disk /dev/sdb1 (thread 145).

2018/11/10-00:01:07(CST) Log: Read took 118733 us which is longer than threshold 100000 us on disk /dev/sdb1 (thread 145).

2018/11/10-00:01:09(CST) Log: Seconds remaining: 580

2018/11/10-00:01:19(CST) Log: Seconds remaining: 570

2018/11/10-00:01:29(CST) Log: Seconds remaining: 560

2018/11/10-00:01:39(CST) Log: Seconds remaining: 550

2018/11/10-00:01:49(CST) Log: Seconds remaining: 540

2018/11/10-00:01:59(CST) Log: Seconds remaining: 530

2018/11/10-00:02:09(CST) Log: Seconds remaining: 520

2018/11/10-00:02:19(CST) Log: Seconds remaining: 510

2018/11/10-00:02:29(CST) Log: Seconds remaining: 500

2018/11/10-00:02:39(CST) Log: Seconds remaining: 490

2018/11/10-00:02:49(CST) Log: Seconds remaining: 480

2018/11/10-00:02:59(CST) Log: Seconds remaining: 470

2018/11/10-00:03:09(CST) Log: Seconds remaining: 460

2018/11/10-00:03:19(CST) Log: Seconds remaining: 450

2018/11/10-00:03:29(CST) Log: Seconds remaining: 440

2018/11/10-00:03:39(CST) Log: Seconds remaining: 430

2018/11/10-00:03:49(CST) Log: Seconds remaining: 420

2018/11/10-00:03:59(CST) Log: Seconds remaining: 410

2018/11/10-00:04:09(CST) Log: Seconds remaining: 400

2018/11/10-00:04:19(CST) Log: Seconds remaining: 390

2018/11/10-00:04:29(CST) Log: Seconds remaining: 380

2018/11/10-00:04:39(CST) Log: Seconds remaining: 370

2018/11/10-00:04:49(CST) Log: Seconds remaining: 360

2018/11/10-00:04:59(CST) Log: Seconds remaining: 350

2018/11/10-00:05:02(CST) Log: Read took 315416 us which is longer than threshold 100000 us on disk /dev/sdb1 (thread 145).

2018/11/10-00:05:09(CST) Log: Seconds remaining: 340

2018/11/10-00:05:19(CST) Log: Seconds remaining: 330

2018/11/10-00:05:29(CST) Log: Seconds remaining: 320

2018/11/10-00:05:39(CST) Log: Seconds remaining: 310

2018/11/10-00:05:49(CST) Log: Seconds remaining: 300

2018/11/10-00:05:49(CST) Log: Pausing worker threads in preparation for power spike (300 seconds remaining)

2018/11/10-00:05:59(CST) Log: Seconds remaining: 290

2018/11/10-00:06:04(CST) Log: Resuming worker threads to cause a power spike (285 seconds remaining)

2018/11/10-00:06:09(CST) Log: Seconds remaining: 280

2018/11/10-00:06:19(CST) Log: Seconds remaining: 270

2018/11/10-00:06:29(CST) Log: Seconds remaining: 260

2018/11/10-00:06:39(CST) Log: Seconds remaining: 250

2018/11/10-00:06:49(CST) Log: Seconds remaining: 240

2018/11/10-00:06:59(CST) Log: Seconds remaining: 230

2018/11/10-00:07:09(CST) Log: Seconds remaining: 220

2018/11/10-00:07:19(CST) Log: Seconds remaining: 210

2018/11/10-00:07:29(CST) Log: Seconds remaining: 200

2018/11/10-00:07:39(CST) Log: Seconds remaining: 190

2018/11/10-00:07:49(CST) Log: Seconds remaining: 180

2018/11/10-00:07:59(CST) Log: Seconds remaining: 170

2018/11/10-00:08:09(CST) Log: Seconds remaining: 160

2018/11/10-00:08:19(CST) Log: Seconds remaining: 150

2018/11/10-00:08:29(CST) Log: Seconds remaining: 140

2018/11/10-00:08:39(CST) Log: Seconds remaining: 130

2018/11/10-00:08:49(CST) Log: Seconds remaining: 120

2018/11/10-00:08:59(CST) Log: Seconds remaining: 110

2018/11/10-00:09:09(CST) Log: Seconds remaining: 100

2018/11/10-00:09:19(CST) Log: Seconds remaining: 90

2018/11/10-00:09:29(CST) Log: Seconds remaining: 80

2018/11/10-00:09:39(CST) Log: Seconds remaining: 70

2018/11/10-00:09:49(CST) Log: Seconds remaining: 60

2018/11/10-00:09:59(CST) Log: Seconds remaining: 50

2018/11/10-00:10:09(CST) Log: Seconds remaining: 40

2018/11/10-00:10:19(CST) Log: Seconds remaining: 30

2018/11/10-00:10:29(CST) Log: Seconds remaining: 20

2018/11/10-00:10:39(CST) Log: Seconds remaining: 10

2018/11/10-00:10:50(CST) Stats: Found 0 hardware incidents

2018/11/10-00:10:50(CST) Stats: Completed: 13841693.00M in 901.17s 15359.63MB/s, with 0 hardware incidents, 0 errors

2018/11/10-00:10:50(CST) Stats: Memory Copy: 5298400.00M at 5883.14MB/s

2018/11/10-00:10:50(CST) Stats: File Copy: 0.00M at 0.00MB/s

2018/11/10-00:10:50(CST) Stats: Net Copy: 0.00M at 0.00MB/s

2018/11/10-00:10:50(CST) Stats: Data Check: 3260596.00M at 3619.53MB/s

2018/11/10-00:10:50(CST) Stats: Invert Data: 5228520.00M at 5807.38MB/s

2018/11/10-00:10:50(CST) Stats: Disk: 54177.00M at 60.18MB/s

2018/11/10-00:10:50(CST) 

2018/11/10-00:10:50(CST) Status: PASS - please verify no corrected errors

2018/11/10-00:10:50(CST) 

Yvan Tian

Nov 9, 2018, 9:06:28 PM
to stressapptest-discuss

Adding some information about the Linux kernel and OS version:


[root@localhost ~]# uname -a

Linux localhost.localdomain 2.6.32-504.el6.x86_64 #1 SMP Wed Oct 15 04:27:16 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

[root@localhost ~]# cat /etc/redhat-release 

CentOS release 6.6 (Final)

Nick Sanders

Nov 11, 2018, 6:05:53 PM
to stressappt...@googlegroups.com
What is your goal for this test? What are you trying to validate?
My general suggestion would be to not use '-d' at all.


Yvan Tian

Nov 11, 2018, 8:46:38 PM
to stressapptest-discuss
I'm using this test as an aging/burn-in tool to verify the reliability of our server products.
"-d" is used to exercise the hard-disk configuration on a server.
I would also like to know whether the hard disks I bought from our HDD suppliers are reliable.


Nick Sanders

Nov 12, 2018, 12:12:44 PM
to stressappt...@googlegroups.com
-d doesn't do that. You need a different, non-stressapptest tool, since stressapptest is a memory and interface test. 


Yvan Tian

Nov 13, 2018, 5:01:04 AM
to stressapptest-discuss
Thank you. Indeed, this test stresses memory and processors. I see that the "io_submit" function is called in WriteBlocktoDisk(), but I found no write bandwidth on sdb during the test.

Is that normal?

iostat -p sdb :

avg-cpu:  %user   %nice %system %iowait  %steal   %idle

          99.19    0.00    0.81    0.00    0.00    0.00


Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn

sda             146.80         0.00       600.80          0       3004

sda1              0.00         0.00         0.00          0          0

sda2              0.40         0.00         4.00          0         20

sda3            146.20         0.00       596.00          0       2980

sda4              0.00         0.00         0.00          0          0

sdb             144.80     72400.00         0.00     362000          0

sdb               1.18       745.94         0.00  325991996          0
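

For context, io_submit is part of the Linux native AIO interface. A minimal sketch of that submit/wait pattern is below; it uses libaio and is illustrative only, not the actual stressapptest code. The file name, block size, and offset are made-up values, and it links with -laio.

  // Minimal sketch of one asynchronous write via Linux native AIO (libaio).
  // Illustrative only; file name and sizes are made-up. Build: g++ ... -laio
  #include <libaio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <cstdio>
  #include <cstdlib>
  #include <cstring>

  int main() {
    const size_t kBlockSize = 1024 * 1024;                  // assumed 1 MiB block
    int fd = open("./aio_test_file", O_RDWR | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    void *buf = nullptr;
    posix_memalign(&buf, 4096, kBlockSize);                 // O_DIRECT needs an aligned buffer
    memset(buf, 0xA5, kBlockSize);

    io_context_t ctx = 0;
    if (io_setup(8, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

    struct iocb cb;
    struct iocb *cbs[1] = { &cb };
    io_prep_pwrite(&cb, fd, buf, kBlockSize, 0);            // queue a write at offset 0

    if (io_submit(ctx, 1, cbs) != 1) {                      // hand the request to the kernel
      fprintf(stderr, "io_submit failed\n"); return 1;
    }

    struct io_event event;
    io_getevents(ctx, 1, 1, &event, nullptr);               // block until it completes
    printf("completed with result %ld\n", (long)event.res);

    io_destroy(ctx);
    free(buf);
    close(fd);
    return 0;
  }

stressapptest's disk thread wraps this kind of submit/wait pattern with the threshold and timeout checks quoted earlier in the thread.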



