How to calculate/come up with the number of threads to run?


paopao

May 22, 2014, 1:39:58 PM5/22/14
to stressappt...@googlegroups.com
On my ARM platform, if I use -m 10 it runs OK.
If I run with -m 20, it fails immediately.
My question: how do I calculate/come up with the number of threads to run?

Nick Sanders

May 22, 2014, 2:34:25 PM5/22/14
to stressappt...@googlegroups.com
It should not fail with -m 20. Can you post the actual failure? 

Typically you might want 1 thread per core, which is the default for stressapptest if you don't specify "-m". 

No matter how many busy threads you start, you can only execute one per core at a time, so past that, more threads launched just means more threads sleeping.
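(For illustration, a minimal sketch of both options, assuming a Linux shell with coreutils' nproc and stressapptest on the PATH; the "-s 60" duration is only an example:

# Let stressapptest pick one memory copy thread per detected core (the default):
stressapptest -s 60

# Roughly equivalent to passing the detected core/hyperthread count explicitly:
stressapptest -m $(nproc) -s 60
)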

Yvan Tian

Nov 9, 2018, 6:59:45 AM11/9/18
to stressapptest-discuss
I have a similar question to Paopao's.

Are the reading threads, invert threads, and CPU stress threads running at the same time when I configure the number of threads to a suitable value?
Is there any way to calculate the number of threads to run so that memory is at full bandwidth, disk I/O is also at full bandwidth, and CPU usage is 100% at the same time?

If we start one thread per core for memory copying or writing blocks to disk, will some of the threads be suspended so that memory doesn't get full stress? Or will some computing threads be suspended so that the processor doesn't get full stress from SAT?

In my case, I run SAT with the following command; do you have any tuning advice?

 stressapptest -M 10636.94843 -m 48 -i 48 -c 48 -W --read-block-size 1024000 --write-block-size 1024000 --read-threshold 100000 --write-threshold 100000 --segment-size 10240000000 --blocks-per-segment 32 -s 1800 -d /dev/sdb1  -l runlog


And the configuration of our system:

CPU: 1x E5-2650 v4 (24-core, 2 Threads per Core)
Mem: 1x 16G DDR4 2400MHz
Disk:  1x 2.0 TB 7200RPM SATA

Nick Sanders

Nov 11, 2018, 6:15:36 PM11/11/18
to stressappt...@googlegroups.com
On Fri, Nov 9, 2018 at 3:59 AM Yvan Tian <tian...@gmail.com> wrote:
Are the reading threads, invert threads, and CPU stress threads running at the same time when I configure the number of threads to a suitable value?
No, all of these are less stressful than the default and will displace more meaningful workloads.
 
Is there any way to calculate the number of threads to run so that memory is at full bandwidth, disk I/O is also at full bandwidth, and CPU usage is 100% at the same time?
This is somewhat system dependent, but often the best is one memory copy thread per core/hyperthread, and 2 file copy threads per block device. You can increase/decrease the counts and check power consumption or IO bandwidth to maximize whatever parameter you want maximized.
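(As a hedged sketch of that tuning loop on a system exposing 48 hyperthreads; the file paths, durations, and log names are only examples:

stressapptest -m 48 -f /mount/sdb-fs/file1 -f /mount/sdb-fs/file2 -s 300 -l tune-m48.log
stressapptest -m 24 -f /mount/sdb-fs/file1 -f /mount/sdb-fs/file2 -s 300 -l tune-m24.log

Compare the copy bandwidth reported in each log, plus measured power draw or disk IO, and keep the thread count that maximizes whatever you are tuning for.)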
  

If we start one thread per core for memory copying or writing blocks to disk, will some of the threads be suspended so that memory doesn't get full stress? Or will some computing threads be suspended so that the processor doesn't get full stress from SAT?
Yes, in your command below most of the threads will be suspended most of the time. Less stressful threads such as invert, check, or disk will run more often than the more stressful memory copy threads.
 

In my case, I run SAT with the following command; do you have any tuning advice?

 stressapptest -M 10636.94843 -m 48 -i 48 -c 48 -W --read-block-size 1024000 --write-block-size 1024000 --read-threshold 100000 --write-threshold 100000 --segment-size 10240000000 --blocks-per-segment 32 -s 1800 -d /dev/sdb1  -l runlog


Don't use this command as a stress test. 
Try instead:
stressapptest -W -f /mount/sdb-fs/file1 -f /mount/sdb-fs/file2 -s 1800 -l runlog
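(This assumes /dev/sdb1 already holds a filesystem mounted at /mount/sdb-fs; if the disk was previously written raw with -d, a minimal setup sketch would be something like "mkfs.ext4 /dev/sdb1 && mkdir -p /mount/sdb-fs && mount /dev/sdb1 /mount/sdb-fs" first. Note that mkfs destroys existing data, and the paths and filesystem type here are only examples.)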
 
And the configuration of our system:

CPU: 1x E5-2650 v4 (24-core, 2 Threads per Core)
Mem: 1x 16G DDR4 2400MHz
Disk:  1x 2.0 TB 7200RPM SATA


Tian Yuwan

Nov 12, 2018, 12:28:48 AM11/12/18
to stressappt...@googlegroups.com
Many thanks for your careful reply. Then I have a few other questions:

- Is it enough to check whether the disk is reliable if I remove the "-d" option? If so, what are the options "-d" and "--read-block-size/--write-block-size", combined with "--write-threshold/--read-threshold", designed to validate?

- According to your advice, "one memory copy thread per core/hyperthread, and 2 file copy threads per block device" is often the best choice for a system under test, so the command "stressapptest -W -f /mount/sdb-fs/file1 -f /mount/sdb-fs/file2 -s 1800 -l runlog" is suitable for my configuration. Does the default setting of "-m" also work for a server configured with different CPUs and memory, such as SUT2 below, which is upgraded from the default configuration of SUT1?

->  Default Configuration(SUT1):

CPU:  1x Intel Xeon E5-2650 v4 (24 cores per chip, 2 Threads per Core)
MEM: 1x 16G DDR4 2400MHz
Disk:   1x 2.0 TB 7200RPM SATA

-> Upgraded Configuration(SUT2):

CPU:  2x Intel Xeon Platinum 8180 (28 cores per chip, 2 Threads per Core)
MEM: 16x 16G DDR4 2666MHz
Disk:   1x 2.0 TB 7200RPM SATA

Thanks,
Yvan Tian

Nick Sanders

Nov 12, 2018, 11:58:41 AM11/12/18
to stressappt...@googlegroups.com
The default number of memory copy threads will be set to the detected number of runnable cores/hyper threads.
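(For reference, a quick way to see what that detected count will be on a given SUT — a sketch assuming Linux coreutils:

nproc    # prints the number of runnable cores/hyperthreads, roughly what -m defaults to

So a plain "stressapptest -s 1800 -l runlog" already scales the memory copy thread count to the hardware.)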

Stressapptest isn't well suited as a rotating storage media test, so if you want to do that, you'd need a different tool.
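(Purely as an example of such a tool, not something recommended in this thread: a data-verifying fio run along the lines of

fio --name=seqverify --filename=/mount/sdb-fs/fio.tmp --rw=write --bs=1M --size=4G --direct=1 --verify=crc32c

exercises rotating media more directly. The flags shown are standard fio options; the path and size are only examples.)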

Yvan Tian

Nov 13, 2018, 2:54:05 AM11/13/18
to stressapptest-discuss
Why "-d" is not well suited for disk testing? Is there any bug in the function WriteBlocktoDisk()?



Nick Sanders

Nov 13, 2018, 12:41:15 PM11/13/18
to stressappt...@googlegroups.com
On Mon, Nov 12, 2018 at 11:54 PM Yvan Tian <tian...@gmail.com> wrote:
Why "-d" is not well suited for disk testing? Is there any bug in the function WriteBlocktoDisk()?
We ran it on a bunch of disks and it wasn't particularly effective at finding problems.