Comments on RAID configuration wanted


Joe

Jan 9, 2003, 11:03:48 AM
I am not sure if it is the right forum for this post but my hardware
configuration is designed for SCO OpenServer 5.0.6. Here is the
current config:

1. HP LH3000 with dual P3 1Ghz CPUs
2. 640MB RAM
3. HP Integrated RAID with 4 x 18GB, 15k rpm HDD running RAID-10

The 4 drives now give me a total of 36GB disk space because of the
RAID-10 configuration. The application on SCO generates a lot of disk
I/O and that's why 15k rpm drives were chosen. As a matter of fact,
we started with just 2 drives to run RAID-1 a year ago. Due to
expansion, two more drives were added later and RAID-10 was used.

Now the 36GB logical disk has reached its capacity and we are in
immediate need of expanding the storage. But we would love to have the
best possible disk performance with the existing server. We want the
disk space to be sufficient for the next 2 years, and thus we are
thinking of doubling the size to 72GB. Here are the options I have
thought of:

1. Get rid of all four of the 18GB (15k rpm) drives and replace with
four 36GB (15k rpm) drives, keep the RAID-10 config
Pros: meet size requirement, maintain performance level
Cons: very very expensive

2. Get rid of all four of the 18GB (15k rpm) drives and replace with
two 72GB (10k rpm only, 15k rpm not available), RAID-1
Pros: meet size requirement, price is moderate
Cons: performance inferior to that of the 15k rpm drives
(Avg read seek: 4.9ms for the 10k rpm 72GB drive vs 4.1ms for the
existing 18GB
Avg write seek: 5.7ms for the 10k rpm 72GB drive vs 4.7ms for the
existing 18GB
Avg latency: 2.99ms for the 10k rpm 72GB drive vs 2ms for the
existing 18GB)

3. Add two more 18GB 15k rpm drives to the existing RAID-10 to
increase usable space to 54GB
Pros: alleviate the space issue for some time, cheaper
Cons: short-term; no more room for expansion,
as the max number of drives per channel is only 6

4. Keep existing drives and the RAID-10, add two 36 GB 15k rpm
drives and configure the new drives to run RAID-1. All six drives
will be on the same RAID channel. A new file system will be added to
the operating system.
Pros: meet size requirement, acceptable cost, keep existing
drives
Cons: two sets of RAID, complicated RAID configuration

5. Add one more 18GB 15k rpm drive and change from RAID-10 to RAID-5
Pros: meet size requirement, least cost, keep existing drives
Cons: RAID-5 has slower performance than that of RAID-1 and
RAID-10
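Before weighing the options, it may help to sanity-check the usable capacities. A small sketch (a hypothetical helper, not any SCO or HP tool) using the usual capacity rules for each RAID level:

```python
def usable_gb(raid, drives, size_gb):
    """Usable capacity for a few common RAID levels.

    raid: "raid1" (mirror), "raid10" (striped mirrors), "raid5" (parity).
    """
    if raid == "raid1":
        return size_gb                   # mirror: one drive's worth
    if raid == "raid10":
        return (drives // 2) * size_gb   # half the drives hold mirror copies
    if raid == "raid5":
        return (drives - 1) * size_gb    # one drive's worth goes to parity
    raise ValueError(raid)

# The five options from the post:
print(usable_gb("raid10", 4, 36))   # option 1: 72 GB
print(usable_gb("raid1", 2, 72))    # option 2: 72 GB
print(usable_gb("raid10", 6, 18))   # option 3: 54 GB
print(usable_gb("raid10", 4, 18) + usable_gb("raid1", 2, 36))  # option 4: 72 GB
print(usable_gb("raid5", 5, 18))    # option 5: 72 GB
```

All of options 1, 2, 4 and 5 hit the 72GB target; option 3 stops at 54GB, matching the pros/cons above.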

I can list all the above possibilities, but I am not able to judge
which is best. Admittedly, I am not too sure of the actual performance
impact on the UNIX box, which is running a version of business basic.
Is RAID-5 really inferior when compared with the other RAID options?

Would a 10k rpm drive really slow down the performance? I heard from
some people that some 15k rpm drives can handle 32% more i/o requests
than the 10k rpm version.
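That 32% claim is roughly what the quoted seek and latency figures predict: a random I/O costs about one average seek plus one average rotational latency, so a back-of-envelope estimate (ignoring transfer and queueing time, using the numbers from the option list above) is:

```python
def random_iops(seek_ms, latency_ms):
    """Rough random-I/O rate: one op per (avg seek + avg rotational latency)."""
    return 1000.0 / (seek_ms + latency_ms)

iops_15k = random_iops(4.1, 2.0)    # existing 18GB 15k rpm drives
iops_10k = random_iops(4.9, 2.99)   # candidate 72GB 10k rpm drives
gain = (iops_15k / iops_10k - 1) * 100

print(f"15k: {iops_15k:.0f} IOPS, 10k: {iops_10k:.0f} IOPS, gain: {gain:.0f}%")
```

This comes out near 30%, in the same ballpark as the 32% figure, though only for purely random I/O; a high cache hit rate dilutes the difference considerably.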

Anyway, if someone can shed some light to ease my headache, I would
be most obliged.

Cheers,

Joe

Jeff Liebermann

Jan 9, 2003, 11:59:46 AM
On 9 Jan 2003 08:03:48 -0800, pan...@netzero.com (Joe) wrote:

>I am not sure if it is the right forum for this post but my hardware
>configuration is designed for SCO OpenServer 5.0.6. Here is the
>current config:
>
> 1. HP LH3000 with dual P3 1Ghz CPUs
> 2. 640MB RAM
> 3. HP Integrated RAID with 4 x 18GB, 15k rpm HDD running RAID-10
>
>The 4 drives now give me a total of 36GB disk space because of the
>RAID-10 configuration. The application on SCO generates a lot of disk
>I/O and that's why 15k rpm drives were chosen.

(...)

I don't have any great suggestions on optimizing the RAID array. ALL
(and I do mean ALL) of my few OSR5 RAID arrays use RAID 10 (or RAID 0
+ 1) because I consider it the best compromise between performance and
reliability. I can fail 2 out of 4 drives, on opposite sides of the
stripe, in the array and continue to run. The typical failure pattern
seems to be that ALL the identical drives fail at about the same time.
It's not unusual for me to replace the entire RAID array when one drive
fails, as I expect the others to follow shortly.

However, you failed to indicate what problem you are trying to solve.
Expensive 15Krpm drives indicate that you apparently have a disk i/o
bottleneck or similar performance issue. Methinks you are optimizing
the wrong area. SCO ships with the disk buffers (NBUF) set at what I
would consider to be overly conservative. That was fine when RAM was
expensive, but makes no sense in these days of cheap commodity ECC RAM. May I
suggest you read up on past comments regarding tweaking NBUF and NHBUF
in comp.unix.sco.misc.
<http://groups.google.com/groups?as_q=NBUF&as_ugroup=comp.unix.sco.misc>
Methinks you will find a rather spectacular improvement in disk
performance with an increase in these. On small boxes, my
rule-o-thumb is about 30-50% of RAM goes to disk buffering. There's
apparently a 450MByte maximum (NBUF=450000). Also, make sure you have
a really good UPS as leaving live data in the disk buffers is not a
great idea.
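To put that rule of thumb into numbers for this 640MB box, here's a sketch of the arithmetic (it assumes each NBUF buffer is 1KB, which is what the sysdef output later in the thread suggests):

```python
ram_kb = 640 * 1024              # 640MB of physical RAM, in KB

# Rule of thumb: 30-50% of RAM goes to disk buffering.
# NBUF counts 1KB buffers, so the KB figure is the NBUF value.
nbuf_low = ram_kb * 30 // 100    # conservative end
nbuf_high = ram_kb * 50 // 100   # aggressive end
nbuf_cap = 450_000               # the apparent maximum mentioned above

suggested = min(nbuf_high, nbuf_cap)
print(nbuf_low, suggested)       # 196608 327680
```

So for 640MB the 30-50% band is roughly NBUF=196608 to NBUF=327680, comfortably under the apparent 450000 cap.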

--
Jeff Liebermann 150 Felker St #D Santa Cruz CA 95060
(831)421-6491 pgr (831)336-2558 home
http://www.LearnByDestroying.com WB6SSY
je...@comix.santa-cruz.ca.us je...@cruzio.com

Rainer Zocholl

Jan 9, 2003, 3:52:00 PM
(Jeff Liebermann) 09.01.03 in /comp/unix/sco/misc:

>On 9 Jan 2003 08:03:48 -0800, pan...@netzero.com (Joe) wrote:

>>I am not sure if it is the right forum for this post but my hardware
>>configuration is designed for SCO OpenServer 5.0.6. Here is the
>>current config:
>>
>> 1. HP LH3000 with dual P3 1Ghz CPUs
>> 2. 640MB RAM
>> 3. HP Integrated RAID with 4 x 18GB, 15k rpm HDD running RAID-10
>>
>>The 4 drives now give me a total of 36GB disk space because of the
>>RAID-10 configuration. The application on SCO generates a lot of
>>disk I/O and that's why 15k rpm drives were chosen.
>(...)

>would consider to be overly conservative. That was fine when RAM was


>cheap, but makes no sense in these days of commodity ECC RAM. May I
>suggest you read up on past comments regarding tweaking NBUF and NHBUF
>in comp.unix.sco.misc.

Also, sar output should be analyzed to see what is really required.
"The disk lights are always on" does not necessarily indicate a need
for faster disks.

><http://groups.google.com/groups?as_q=NBUF&as_ugroup=comp.unix.sco.misc>
> Methinks you will find a rather spectacular improvement in disk
>performance with an increase in these. On small boxes, my
>rule-o-thumb is about 30-50% of RAM goes to disk buffering. There's
>apparently a 450MByte maximum (NBUF=450000).

>Also, make sure you have a really good UPS as leaving live data
>in the disk buffers is not a great idea.

A UPS only helps on a failure of external power...
But data has more enemies ;-)
For example, power supplies can burn out, so at least 2 redundant power
supplies (and a UPS for each; a UPS can fail by itself, or an (overworked)
idiot can switch it off by accident!) should be fitted if the RAM
carries a lot of "fresh work".
If data is only read, there is no problem when power fails.
And once upon a time there was the supply plug that had almost fallen out,
right at the box. With 2 power supplies and 2 power cords: no problem...
With only one power cord (but 2 supplies, ouch): ooooops... who turned
the server off the hard way?
Especially if the RAID controller cache has no battery backup unit,
but cache RAM, RAID and disks all run with write caching
(some call it: Russian data roulette)...


Joe

Jan 9, 2003, 10:18:41 PM
"Jeff Liebermann" <je...@comix.santa-cruz.ca.us> wrote in message
news:rt9r1vs2mv92o67pe...@4ax.com...

Jeff,

Thanks for the suggestion. With 640MB of physical memory, we have already
allocated 306MB for buffers:
mem: total = 654904k, kernel = 353752k, user = 301152k
swapdev = 1/41, swplo = 0, nswap = 3000002, swapmem = 1500000k
rootdev = 1/42, pipedev = 1/42, dumpdev = 1/41
kernel: Hz = 100, i/o bufs = 307200k (high bufs = 306176k)

I did not see any swapping activity, but disk activity has been quite
heavy. That's why the manager would like to do his best to improve disk
performance. It is believed that a 15k rpm disk has better read/write
performance, but the difference in cost is huge: we have to pay twice the
money to get a 15k rpm drive of the same capacity.

I am not too sure the difference in actual performance is that discernible.
I have run some sar reports and they may be helpful for making a decision.


Regards,

Joe

# sar -d

SCO_SV Prod 3.2v5.0.6 PentIII 01/09/2003

00:00:01 device %busy avque r+w/s blks/s avwait avserv
(-d)
01:00:00 Sdsk-0 29.33 1.00 107.11 2743.28 0.00 2.74
Stp-0 99.39 1.01 41.70 2668.58 0.15 23.84

02:00:00 Sdsk-0 22.08 1.00 96.46 2617.06 0.00 2.29
Stp-0 99.59 1.00 40.50 2591.75 0.10 24.59

03:00:00 Sdsk-0 44.10 1.30 133.54 2998.09 0.98 3.30
Stp-0 84.93 1.18 49.96 3197.69 3.01 17.00

04:00:00 Sdsk-0 23.16 1.00 142.75 3681.41 0.00 1.62
Stp-0 79.66 1.26 58.11 3718.89 3.50 13.71

05:00:00 Sdsk-0 18.97 1.00 116.87 1991.64 0.00 1.62
Stp-0 84.50 1.18 30.70 1964.84 5.05 27.52

06:00:00 Sdsk-0 21.59 1.03 132.84 2532.06 0.04 1.63
Stp-0 83.53 1.20 39.31 2516.16 4.19 21.25

07:00:00 Sdsk-0 6.18 1.00 34.16 640.99 0.00 1.81
Stp-0 28.46 1.13 9.86 630.81 3.77 28.88

08:00:01 Sdsk-0 100.00 1.86 40.03 517.51 52.32 60.90

08:20:01 Sdsk-0 8.89 1.02 5.20 63.24 0.36 17.09

08:40:00 Sdsk-0 100.00 1.67 28.86 345.57 44.84 66.61

09:00:00 Sdsk-0 73.77 1.30 12.32 126.47 17.93 59.90

09:20:00 Sdsk-0 85.86 1.14 13.32 149.86 8.74 64.45

09:40:00 Sdsk-0 100.00 1.34 14.35 151.00 30.85 89.66

10:00:00 Sdsk-0 100.00 1.38 14.98 161.23 40.62 107.34

10:20:00 Sdsk-0 100.00 1.38 29.15 378.69 17.58 46.35

10:40:00 Sdsk-0 100.00 1.32 19.25 235.00 24.31 75.15

11:00:01 Sdsk-0 100.00 1.39 22.61 316.89 32.87 84.03

11:20:00 Sdsk-0 100.00 1.52 37.47 467.20 32.94 63.94

11:40:00 Sdsk-0 100.00 1.48 45.02 572.31 26.50 54.99

12:00:00 Sdsk-0 100.00 1.54 31.89 377.61 40.93 76.25

12:20:00 Sdsk-0 100.00 1.57 30.66 365.20 43.02 75.53

12:40:00 Sdsk-0 100.00 1.39 26.17 310.70 25.82 66.85

13:00:00 Sdsk-0 100.00 1.61 40.07 496.28 40.95 66.81

13:20:00 Sdsk-0 100.00 1.55 26.02 298.63 41.91 75.55

13:40:00 Sdsk-0 100.00 1.38 23.52 263.98 27.38 72.78

14:00:01 Sdsk-0 100.00 1.25 21.29 212.36 14.29 56.93

14:20:00 Sdsk-0 100.00 1.30 24.57 235.45 16.70 56.61

14:40:00 Sdsk-0 100.00 1.52 42.63 570.95 28.99 55.88

15:00:00 Sdsk-0 100.00 1.70 73.22 902.02 40.27 57.47

15:20:00 Sdsk-0 100.00 1.43 38.88 493.35 23.01 52.92

15:40:01 Sdsk-0 100.00 1.51 31.09 351.33 36.39 70.83

16:00:00 Sdsk-0 100.00 1.63 31.52 358.87 48.44 77.15

16:20:00 Sdsk-0 100.00 1.48 25.76 277.47 42.77 88.41

16:40:00 Sdsk-0 100.00 1.50 26.66 296.07 41.00 81.88

17:00:00 Sdsk-0 100.00 1.25 32.83 435.48 11.53 45.73

17:20:01 Sdsk-0 100.00 1.19 162.10 2480.39 3.08 15.96

17:40:00 Sdsk-0 100.00 1.23 98.95 1416.08 3.33 14.65

18:00:00 Sdsk-0 100.00 1.90 132.62 1705.20 73.18 81.07


Average Sdsk-0 100.00 1.54 66.19 1258.91 11.69 21.78
Stp-0 31.11 1.13 15.01 960.47 2.67 20.73

# sar -t

SCO_SV Prod 3.2v5.0.6 PentIII 01/09/2003

00:00:01 prdp/s swdnm/s (-t)
01:00:00 33.81 0.48
02:00:00 26.29 0.48
03:00:00 85.82 0.45
04:00:00 115.30 0.35
05:00:00 90.85 0.37
06:00:00 106.47 0.38
07:00:00 29.13 0.96
08:00:01 19.90 1.10
08:20:01 4.46 1.21
08:40:00 31.80 0.94
09:00:00 7.22 1.08
09:20:00 7.55 1.00
09:40:00 7.67 0.94
10:00:00 8.45 0.88
10:20:00 13.39 0.94
10:40:00 9.51 0.92
11:00:01 10.46 0.82
11:20:00 18.85 0.77
11:40:00 21.69 0.67
12:00:00 224.00 0.73
12:20:00 19.29 0.69
12:40:00 13.80 0.72
13:00:00 22.14 0.67
13:20:00 16.33 0.64
13:40:00 12.56 0.70
14:00:01 12.59 0.69
14:20:00 12.09 0.72
14:40:00 20.60 0.70
15:00:00 36.12 0.65
15:20:00 19.98 0.69
15:40:01 14.73 0.65
16:00:00 18.31 0.68
16:20:00 22.39 0.64
16:40:00 18.60 0.68
17:00:00 14.45 0.63
17:20:01 287.35 0.81
17:40:00 61.31 0.70
18:00:00 390.22 0.64
19:00:01 190.53 0.69

Average 60.92 0.68

# sar -b

SCO_SV Prod 3.2v5.0.6 PentIII 01/09/2003

00:00:01 bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s
(-b)
01:00:00 1368 2372 42 3 13 74 0 0
02:00:00 1306 2370 45 3 11 76 0 0
03:00:00 1491 4069 63 8 101 92 0 0
04:00:00 1840 3745 51 1 2 67 0 0
05:00:00 992 3041 67 4 14 74 0 0
06:00:00 1262 3387 63 4 15 76 0 0
07:00:00 318 679 53 2 8 74 0 0
08:00:01 214 4939 96 45 472 91 0 0
08:20:01 29 346 91 2 8 72 0 0
08:40:00 97 2849 97 75 268 72 0 0
09:00:00 47 2285 98 16 341 95 0 0
09:20:00 58 2460 98 17 58 70 0 0
09:40:00 51 1512 97 25 108 77 0 0
10:00:00 50 1932 97 30 196 85 0 0
10:20:00 164 3443 95 25 122 79 0 0
10:40:00 89 3002 97 29 176 84 0 0
11:00:01 129 2871 96 29 246 88 0 0
11:20:00 188 5318 96 46 446 90 0 0
11:40:00 241 6838 96 45 384 88 0 0
12:00:00 142 5858 98 47 487 90 0 0
12:20:00 138 5933 98 44 495 91 0 0
12:40:00 122 3717 97 34 267 87 0 0
13:00:00 198 6042 97 50 558 91 0 0
13:20:00 110 3924 97 39 439 91 0 0
13:40:00 97 4215 98 35 408 91 0 0
14:00:01 84 4495 98 23 151 85 0 0
14:20:00 89 7193 99 29 272 89 0 0
14:40:00 240 5686 96 45 499 91 0 0
15:00:00 377 8177 95 75 873 91 0 0
15:20:00 208 4455 95 38 348 89 0 0
15:40:01 135 5851 98 40 320 87 0 0
16:00:00 135 4810 97 44 375 88 0 0
16:20:00 96 5364 98 42 409 90 0 0
16:40:00 108 5209 98 40 356 89 0 0
17:00:00 189 5274 96 28 237 88 0 0
17:20:01 1209 11785 90 31 210 85 0 0
17:40:00 688 9929 93 20 133 85 0 0
18:00:00 654 17034 96 199 2392 92 0 0
19:00:01 175 4892 96 27 269 90 0 0

Average 580 4321 87 27 251 89 0 0

# sar -n

SCO_SV Prod 3.2v5.0.6 PentIII 01/09/2003

00:00:01 c_hits cmisses (hit %) (-n)
01:00:00 521727 37136 (93%)
02:00:00 384657 31631 (92%)
03:00:00 849197 45344 (94%)
04:00:00 59393 3534 (94%)
05:00:00 568106 41293 (93%)
06:00:00 557449 37987 (93%)
07:00:00 372665 23577 (94%)
08:00:01 519183 39022 (93%)
08:20:01 84325 5938 (93%)
08:40:00 498056 30505 (94%)
09:00:00 447213 26605 (94%)
09:20:00 351710 25421 (93%)
09:40:00 341477 30264 (91%)
10:00:00 323780 34226 (90%)
10:20:00 405696 32993 (92%)
10:40:00 360452 34229 (91%)
11:00:01 406632 37005 (91%)
11:20:00 594494 52767 (91%)
11:40:00 727975 69053 (91%)
12:00:00 678410 69546 (90%)
12:20:00 772246 81201 (90%)
12:40:00 574659 53056 (91%)
13:00:00 703196 76013 (90%)
13:20:00 438396 48695 (90%)
13:40:00 386150 40135 (90%)
14:00:01 606944 59143 (91%)
14:20:00 896037 76521 (92%)
14:40:00 471338 47491 (90%)
15:00:00 695527 70708 (90%)
15:20:00 527682 54021 (90%)
15:40:01 611216 59919 (91%)
16:00:00 734432 75839 (90%)
16:20:00 691959 69772 (90%)
16:40:00 721586 72624 (90%)
17:00:00 472154 44276 (91%)
17:20:01 559619 39519 (93%)
17:40:00 300749 26694 (91%)
18:00:00 679501 62615 (91%)
19:00:01 849659 80783 (91%)

Average 531939 47361 (91%)


# sar -q

SCO_SV Prod 3.2v5.0.6 PentIII 01/09/2003

00:00:01 runq-sz %runocc swpq-sz %swpocc (-q)
01:00:00 1.0 0
02:00:00 1.1 29
03:00:00 1.1 14
04:00:00 1.0 13
05:00:00 1.0 5
06:00:00 1.1 7
07:00:00 1.0 3
08:00:01 1.0 3
08:20:01 1.0 1
08:40:00 1.0 1
09:00:00 1.0 1
09:20:00 1.0 1
09:40:00 1.0 1
10:00:00 1.0 1
10:20:00 1.0 0
10:40:00 1.5 1
11:00:01 1.0 1
11:20:00 1.0 2
11:40:00 1.0 1
12:00:00 1.1 3
12:20:00 1.0 1
12:40:00
13:00:00 1.0 0
13:20:00 1.0 0
13:40:00 1.0 0
14:00:01 1.0 0
14:20:00 1.0 1
14:40:00 1.0 1
15:00:00 2.0 0
15:20:00 1.0 1
15:40:01 1.0 1
16:00:00 1.2 1
16:20:00 1.1 2
16:40:00 1.2 1
17:00:00 1.0 1
17:20:01 1.2 3
17:40:00 1.2 1
18:00:00 1.1 8
19:00:01 1.1 16

Average 1.1 52

# sar -v

SCO_SV Prod 3.2v5.0.6 PentIII 01/09/2003

00:00:01 proc-sz ov inod-sz ov file-sz ov lock-sz (-v)
01:00:00 108/ 440 0 505/3072 0 814/5802 0 394/4352
02:00:00 103/ 440 0 493/3072 0 805/5802 0 394/4352
03:00:00 102/ 440 0 494/3072 0 816/5802 0 410/4352
04:00:00 101/ 440 0 491/3072 0 814/5802 0 410/4352
05:00:00 102/ 440 0 495/3072 0 816/5802 0 410/4352
06:00:00 102/ 440 0 495/3072 0 816/5802 0 410/4352
07:00:00 100/ 440 0 489/3072 0 808/5802 0 410/4352
08:00:01 140/ 440 0 588/3072 0 1234/5802 0 734/4352
08:20:01 144/ 440 0 603/3072 0 1398/5802 0 874/4352
08:40:00 198/ 440 0 721/3072 0 1838/5802 0 1196/4352
09:00:00 226/ 440 0 803/3072 0 2059/5802 0 1321/4352
09:20:00 234/ 440 0 820/3072 0 2247/5802 0 1484/4352
09:40:00 258/ 440 0 880/3072 0 2738/5802 0 1910/4352
10:00:00 269/ 440 0 898/3072 0 3000/5802 0 2131/4352
10:20:00 279/ 440 0 919/3072 0 3068/5802 0 2186/4352
10:40:00 297/ 440 0 960/3072 0 3391/5802 0 2435/4352
11:00:01 310/ 440 0 975/3072 0 3666/5802 0 2654/4352
11:20:00 332/ 440 0 1022/3072 0 4125/5802 0 3045/4352
11:40:00 352/ 440 0 1085/3072 0 4360/5802 0 3214/4352
12:00:00 360/ 440 0 1084/3072 0 4659/5802 0 3476/4352
12:20:00 370/ 440 0 1096/3072 0 4932/5802 0 3708/4352
12:40:00 376/ 440 0 1112/3072 0 4893/5802 0 3662/4352
13:00:00 378/ 440 0 1114/3072 0 4984/5802 0 3750/4352
13:20:00 377/ 440 0 1112/3072 0 4980/5802 0 3746/4352
13:40:00 374/ 440 0 1113/3072 0 5068/5802 0 3823/4352
14:00:01 372/ 440 0 1116/3072 0 4986/5802 0 3744/4352
14:20:00 386/ 440 0 1143/3072 0 5244/5802 0 3967/4352
14:40:00 368/ 440 0 1108/3072 0 4953/5802 0 3730/4352
15:00:00 381/ 440 0 1122/3072 0 4983/5802 0 3731/4352
15:20:00 373/ 440 0 1118/3072 0 4933/5802 0 3683/4352
15:40:01 393/ 440 0 1164/3072 0 5027/5802 0 3731/4352
16:00:00 382/ 440 0 1144/3072 0 5099/5802 0 3837/4352
16:20:00 382/ 440 0 1125/3072 0 5195/5802 0 3935/4352
16:40:00 380/ 440 0 1133/3072 0 5346/5802 0 4070/4352
17:00:00 373/ 440 0 1121/3072 0 5365/5802 0 4095/4352
17:20:01 312/ 440 0 990/3072 0 3993/5802 0 2953/4352
17:40:00 292/ 440 0 952/3072 0 3811/5802 0 2804/4352
18:00:00 269/ 440 0 904/3072 0 3230/5802 0 2302/4352
19:00:01 222/ 440 0 815/3072 0 2698/5802 0 1911/4352


# sysdef
*
* i386 Configuration
*
*
* Tunable Parameters
*
307200 buffers in buffer cache (NBUF)
712 clist buffers (NCLIST)
100 processes per user id (MAXUP)
0 hash slots for buffer cache (NHBUF)
200 size of system virtual space map (SPTMAP)
10 auto update time limit in seconds (NAUTOUP)
110 maximum number of open files per process (NOFILES)
524287 maximum size of user's virtual address space in pages (MAXUMEM)
524287 for package compatibility equal to MAXUMEM (MAXMEM)
600 page stealing low water mark (GPGSLO)
1500 page stealing high water mark (GPGSHI)
30 bdflush run rate (BDFLUSHR)
25 minimum resident memory for avoiding deadlock (MINARMEM)
25 minimum swapable memory for avoiding deadlock (MINASMEM)
8 maximum number of pages swapped out (MAXSC)
8 maximum number of pages saved (MAXFC)
*
* Utsname Tunables
*
3.2 release (REL)
Prod node name (NODE)
SCO_SV system name (SYS)
5.0.6 version (VER)
*
* Streams Tunables
*
8192 number of streams head structures (NSTREAM)
2512 maximum page count for streams buffers (NSTRPAGES)
448 number of multiplexor links (NMUXLINK)
9 maximum number of pushes allowed (NSTRPUSH)
16384 maximum stream message size (STRMSGSZ)
1024 max size of ctl part of message (STRCTLSZ)
*
* IPC Messages
*
512 entries in msg map (MSGMAP)
8192 max message size (MSGMAX)
8192 max bytes on queue (MSGMNB)
50 message queue identifiers (MSGMNI)
8 message segment size (MSGSSZ)
1024 system message headers (MSGTQL)
1024 message segments (MSGSEG)
*
* IPC Semaphores
*
10 entries in semaphore map (SEMMAP)
10 semaphore identifiers (SEMMNI)
60 semaphores in system (SEMMNS)
30 undo structures in system (SEMMNU)
25 max semaphores per id (SEMMSL)
10 max operations per semop call (SEMOPM)
10 max undo entries per process (SEMUME)
32767 semaphore maximum value (SEMVMX)
16384 adjust on exit max value (SEMAEM)
*
* IPC Shared Memory
*
524288 max shared memory segment size (SHMMAX)
1 min shared memory segment size (SHMMIN)
100 shared memory identifiers (SHMMNI)
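For what it's worth, the sar -d output above can be screened programmatically for intervals that are pegged busy with real queueing. A sketch (the column layout is assumed to match the report shown here):

```python
def saturated_intervals(sar_d_text, busy_at=100.0, wait_ms=10.0):
    """Pick out Sdsk-0 intervals that are pegged busy with real queueing."""
    hits = []
    for line in sar_d_text.splitlines():
        parts = line.split()
        # sar -d data rows: time device %busy avque r+w/s blks/s avwait avserv
        if len(parts) == 8 and parts[1] == "Sdsk-0":
            t, _dev, busy, _avque, _rw, _blks, avwait, avserv = parts
            if float(busy) >= busy_at and float(avwait) >= wait_ms:
                hits.append((t, float(avwait), float(avserv)))
    return hits

sample = """\
07:00:00 Sdsk-0 6.18 1.00 34.16 640.99 0.00 1.81
08:00:01 Sdsk-0 100.00 1.86 40.03 517.51 52.32 60.90
10:00:00 Sdsk-0 100.00 1.38 14.98 161.23 40.62 107.34
"""
print(saturated_intervals(sample))
```

Run against the full report above, this flags essentially every business-hours interval from 08:00 onward, which is the disk-bound picture the later replies describe.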

Mike Brown

Jan 9, 2003, 11:51:25 PM

I have snipped out the 8:00 and 18:00 stats from your SAR report:

00:00:01 device %busy avque r+w/s blks/s avwait avserv

08:00:01 Sdsk-0 100.00 1.86 40.03 517.51 52.32 60.90

18:00:00 Sdsk-0 100.00 1.90 132.62 1705.20 73.18 81.07

00:00:01 bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s


08:00:01 214 4939 96 45 472 91 0 0

18:00:00 654 17034 96 199 2392 92 0 0

You are obviously disk bound, but at a 96% read cache hit rate. Adding
more memory may improve your cache hit rate, but possibly not much.

A lread/s of 4939 is a high number, and at 18:00 you were at 17034 with a
cache hit rate of 96%.

The avserv was 81 msec, which indicates a lot of latency and head motion,
and very likely a slow system.

Without knowing your application I am only making a guess, but you need
more than 2 additional disk drives.

If you are running a database, say for example Progress, with a data size
of 20GB and 60 active users, I would recommend the following to balance
the CPU speed of the 2 x 1GHz Pentium IIIs (Xeons?):

2 x 18 GB 10K raid 1 O/S + LP + application scripts
6 x 18 GB 15K raid 0+1 database
6 x 18 GB 15K raid 0+1 database
2 x 18 GB 10K raid 1 for transaction logs ( ie BI files in Progress )
if the database uses them.

When you build the 0+1 raid arrays, break them into 4 ~12GB partitions
each, and try to use just one partition from each array for the database.
This will keep all the data together and reduce HD stroke time.
If your total database is over 24GB, then break them into 3 partitions,
and so on.
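This partitioning advice is essentially short-stroking: confining the hot data to one partition shortens the head's travel. A toy illustration with the example's numbers (the proportional-seek-span model is a rough approximation):

```python
# 6 x 18GB drives in RAID 0+1 give ~54GB usable, split into 4 partitions.
array_gb = 54
partitions = 4
hot_fraction = 1 / partitions   # database confined to one partition

# Seek distance scales roughly with the span of cylinders visited, so
# keeping all database I/O inside one quarter of the array roughly
# quarters the average seek distance (rotational latency is unaffected).
print(f"seek span reduced to {hot_fraction:.0%} of full stroke")
```

The real-world gain is smaller than the span reduction suggests, since latency and transfer time don't shrink, but the direction of the effect is why the advice works.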

I found a good balance on a data intensive app with 2 x 550 MHz Xeons
was a total of 18 10K drives, and with a dual 1Ghz Xeon system I have
installed 54 10K drives total. The wait I/O is generally below 10%
at the worst point.

The disk numbers may seem high, but this site is 7 x 24 with a 48GB
database size. I had to size everything to allow an online backup
AND reasonable system response at the same time. If you can do
your backups during company off hours the above recommendation should
be a good starting point.

Mike

--
Michael Brown

The Kingsway Group

Steve Fabac

Jan 10, 2003, 12:51:06 AM
6. Add two 36G 15k drives then restripe 18+18+36 for 72G (This may
not be possible, I don't have documentation available while
writing this post and restriping may result in using only 18G of
the 36G disk.) You will need to restore from backup, as
adding a third disk to a RAID0 stripe will require deleting the
current configuration and creating a new RAID0 array. Then
create RAID10 using the other 18+18+36G RAID0 stripe.

7. Add two 36G 15K drives as RAID1 and keep the existing 18G RAID10.
You now have space to add 4*7 divisions.

>
> I can think of all the above possibilities but I do not have the
> ability to make the best decision. Admittedly, I am not too sure of
> the actual performance impact on the UNIX box, which is running a
> version of business basic. Is RAID-5 really inferior when compared
> with the other RAID options?
>
> Would 10k rpm drive really slow down the performance? I heard from
> some people that some 15k rpm drives can handle 32% more i/o requests
> than the 10k rpm version.
>
> Anyway, if someone can shed some lights to ease my headache, I would
> be most obliged.
>
> Cheers,
>
> Joe

--

Steve Fabac
S.M. Fabac & Associates
816/765-1670

Joe

Jan 10, 2003, 10:09:31 AM
Steve,

> 6. Add two 36G 15k drives then restripe 18+18+36 for 72G (This may
> not be possible, I don't have documentation available while
> writing this post and restriping may result in using only 18G of
> the 36G disk.) You will need to restore from backup as
> adding a third disk to a raid0 stripe will require deleting
> current configuration and creating a new RAID0 array. Then
> create RAID10 using the other 18+18+36G RAID0 stripe.

That may be complicated, and I don't think the HP RAID would support an
18+18+36 stripe set. Quite possibly I will have to configure them as
18+18+18, which will cause some wastage. I would have to reload the whole
system from tape too.

> 7. Add two 36G 15K drives as RAID1 and keep the existing 18G RAID10.
> You now have space to add 4*7 divisions.

I like this idea much better. It feels odd to have two RAID sets on one
channel, with four 18G running RAID10 and two 36G running RAID1. But if
it has no compatibility or performance issues, it should be acceptable.

Regards,

Joe

Bill Vermillion

Jan 10, 2003, 10:57:11 AM
In article <34545947.03010...@posting.google.com>,

Joe <pan...@netzero.com> wrote:
>I am not sure if it is the right forum for this post but my hardware
>configuration is designed for SCO OpenServer 5.0.6. Here is the
>current config:

> 1. HP LH3000 with dual P3 1Ghz CPUs
> 2. 640MB RAM
> 3. HP Integrated RAID with 4 x 18GB, 15k rpm HDD running RAID-10

>2. Get rid of all four of the 18GB (15k rpm) drives and replace with


>two 72GB (10k rpm only, 15 rpm not available), RAID-1
> Pros: meet size requirement, price is moderate
> Cons: performance inferior to that of 15 rpm drives

The one thing you did not mention about the drives - though you
included seek times and rotational speed - is the internal and external
data transfer rate. I have seen slower drives deliver higher performance
because a better head design packed more data per track. It's something
often overlooked, but it is important.

I say this because I've seen this disparity in the past where a
lower RPM drive out-performed a higher RPM drive. Something to be
aware of.

>Would 10k rpm drive really slow down the performance? I heard from
>some people that some 15k rpm drives can handle 32% more i/o requests
>than the 10k rpm version.

See above, and look at all the specs on the drive, not just seek time
and rpm.

A quick look shows that a 36GB Seagate at 10k runs 49-63MB/sec internal
transfer and the 15k is 51-69MB/sec, which isn't a great increment: the
50% increase in RPM yields less than 10% more throughput. Not really
cost-effective in that respect, but as always, the last few percent of
improvement costs the most money.
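That small incremental gain can be made concrete with the Seagate figures quoted above:

```python
def pct_gain(new, old):
    """Percentage improvement of new over old."""
    return (new / old - 1) * 100

# Internal transfer, 36GB Seagate: 10k = 49-63 MB/s, 15k = 51-69 MB/s
print(f"min: +{pct_gain(51, 49):.1f}%  max: +{pct_gain(69, 63):.1f}%")
# ...bought with a 50% increase in spindle speed (15000 vs 10000 rpm).
print(f"rpm: +{pct_gain(15000, 10000):.0f}%")
```

About 4-10% more sequential throughput for 50% more spindle speed; the 15k drives earn their keep on rotational latency for random I/O, not on transfer rate.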

If you look at the 73GB drive at 10k, its data transfer rate
is 57-86MB/sec. That's only from Seagate. You might also check Maxtor's
Atlas line and IBM/Fujitsu [whoever it is that has them all
at the moment].

So while those cost more, they start with a higher minimum transfer rate
at 10k than the 15k drives. And much of the emphasis on seek times came
from the MS world, with its synchronous OS and no ordered writes, so
seek time was of prime importance there.

Bill
--
Bill Vermillion - bv @ wjv . com

Bela Lubkin

Jan 10, 2003, 4:45:34 PM
to sco...@xenitec.on.ca
Jeff Liebermann wrote:

> Methinks you are optimizing
> the wrong area. SCO ships with the disk buffers (NBUF) set at what I
> would consider to be overly conservative. That was fine when RAM was
> cheap, but makes no sense in these days of commodity ECC RAM. May I
> suggest you read up on past comments regarding tweaking NBUF and NHBUF
> in comp.unix.sco.misc.
> <http://groups.google.com/groups?as_q=NBUF&as_ugroup=comp.unix.sco.misc>
> Methinks you will find a rather spectacular improvement in disk
> performance with an increase in these. On small boxes, my
> rule-o-thumb is about 30-50% of RAM goes to disk buffering. There's
> apparently a 450MByte maximum (NBUF=450000). Also, make sure you have
> a really good UPS as leaving live data in the disk buffers is not a
> great idea.

The other night I quadrupled the performance of a machine by _reducing_
NBUF from 400000 to 7000. Until this is fixed, be aware that the NFS
client code on OSR5 gives you a huge performance _penalty_ for large
buffer caches. As long as your machine isn't doing a lot of NFS client
I/O (accessing files "over there" via NFS), it shouldn't be a problem.

Technobabble: every time the NFS client code opens a remote inode, it
tells the OSR5 buffer cache to purge old references to that inode. The
particular routine it calls walks linearly through every buffer in the
cache. With 400000 buffers, that takes a bit of time. This particular
application was reading zillions of small files over NFS, compounding
the problem greatly.
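The cost Bela describes compounds multiplicatively: a linear purge of the whole cache on every remote-inode open makes the total work (files opened) x NBUF. A toy model of why shrinking NBUF helped so dramatically (the file count is a made-up stand-in for "zillions"; the two NBUF values are the ones from the post):

```python
def purge_work(files_opened, nbuf):
    """Each NFS open walks every buffer once: O(files * NBUF) comparisons."""
    return files_opened * nbuf

files = 100_000   # hypothetical count of small files read over NFS

big = purge_work(files, 400_000)   # NBUF before
small = purge_work(files, 7_000)   # NBUF after

print(f"{big / small:.0f}x less buffer-walking after shrinking NBUF")
```

With the walk dominating per-open cost, cutting NBUF by ~57x cuts the purge overhead by the same factor, which is consistent with the quadrupled throughput Bela saw.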

NFS client machines which don't actually do a lot of NFS file accessing
should be fine. Accessing large files via NFS should be fine. Systems
whose whole lives are devoted to reading small files over NFS may
benefit from sharply _reduced_ NBUF. Hopefully this will be fixed in a
future NFS update (won't be fixed in 507 out the gate).

>Bela<

Bob Bailin

Jan 11, 2003, 11:28:46 AM

"Joe" <pan...@netzero.com> wrote in message
news:NmBT9.5168$IC6.5...@news20.bellglobal.com...

After checking the hp site, it seems the lh3000 has a 2-channel raid
controller and a 32MB cache, upgradeable to 128MB. There's also an
(optional?) 2nd ultra/wide controller.

Why are all of your drives on one channel, and are you at least using
the 128MB cache? And is the limit of 6 drives/channel due to the
hot-swap bay configuration?

Can you give us a brief idea of what sort of business basic application
you are running that keeps the disk activity at 100% virtually all of
the time? Are you doing a lot of reporting or analysis? Data entry or
lookups, even by a lot of users (how many?) wouldn't result in the
unrelenting activity you show in your sar report.

Bob


Joe

Jan 12, 2003, 11:41:15 AM

Bill,

Unfortunately, HP does not provide any detailed specifications on the
drives they are selling. I only have some basic information such as
capacity, rotational speed, read/write seek times and latency
(http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=lpn12870).
They don't even list the manufacturer or the original model number.

Those drives have only a 4MB buffer, but newer ones on the market have
already increased the buffer to 8MB. My supplier told me that the 72GB
drives from HP will soon be sold out and may not be available any more,
as NetServers are retired products. What a pain to deal with HP!

Thanks for your advice.

Joe




Joe

Jan 12, 2003, 12:20:25 PM
Mike,

> I have snipped out the 8:00 and 18:00 stats from your SAR report:
>
> 00:00:01 device %busy avque r+w/s blks/s avwait avserv
> 08:00:01 Sdsk-0 100.00 1.86 40.03 517.51 52.32 60.90
> 18:00:00 Sdsk-0 100.00 1.90 132.62 1705.20 73.18 81.07
>
> 00:00:01 bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s
> 08:00:01 214 4939 96 45 472 91 0 0
> 18:00:00 654 17034 96 199 2392 92 0 0
>
> You are obviously disk bound, but at a 96% read cache hit rate. Adding
> more memory may improve your cache hit rate, but possibly not much.
>

I am thinking of increasing the RAID cache from 32MB to 64 or 128MB. Not
sure if it would make much of a difference.

> A lread/s of 4939 is a high number, and at 18:00 you were at 17034 with a
> cache hit rate of 96%.
>
> The avserv was 81 msec, which indicates a lot of latency and head motion,
> and very likely a slow system.
>
> Without knowing you application I am only making a guess, but you need
> more then 2 more additional disk drives.
>

The application was built with ProvideX, a business basic interpreter.
Workstations run a GUI client connected to the SCO box. Order entry,
transaction updates, queries and reports run extensively. At peak time
there can be as many as 60 concurrent sessions.

> If you are running a database, say as an example Progress, with a datasize
> of 20Gbytes and 60 active users I would recommend the following to balance
> the CPU speed of 2 X 1G Pentium III ( Xeons ? )

The system is a dual P3 at 1Ghz, but not the Xeon version.

> 2 x 18 GB 10K raid 1 O/S + LP + application scripts
> 6 x 18 GB 15K raid 0+1 database
> 6 x 18 GB 15K raid 0+1 database
> 2 x 18 GB 10K raid 1 for transaction logs ( ie BI files in Progress )
> if the database uses them.
>
> When you build the 0+1 raid arrays break then into 4 ~12GB partitions
> each, and try to use just one partition from each array for the database.
> This will keep all the data together and reduce HD stroke time.
> If you total database is over 24GB, then break them into 3 partitions,
> and so on.

At the moment there is only one logical drive, with a single partition
holding 3 filesystems.

>I found a good balance on a data intensive app with 2 x 550 MHz Xeons
> was a total of 18 10K drives, and with a dual 1Ghz Xeon system I have
> installed 54 10K drives total. The wait I/O is generally below 10%
> at the worst point.
>

Mike, is it generally true that I/O performance is better with more small
drives than with a few bigger drives of the same speed?

Stuart J. Browne

Jan 12, 2003, 6:08:02 PM
> > You are obviously disk bound, but at a 96% read cache hit rate. Adding
> > more memory may improve your cache hit rate, but possibly not much.
> >
>
> I am thinking of increasing the RAID cache from 32MB to 64 or 128MB. Not
> sure if it will make much of a difference.

.. short note .. HELL YEAH, IT MAKES A DIFFERENCE! :)

The controllers we use these days have a minimum of 128MB cache on them. The
performance boost is quite noticeable on a multi-user system.

Just put in a system with 256MB cache (6 x 36GB 15k rpm drives in a RAID-5
array), and wow.. quick.. Was fun :P Still brought it to its knees, but I'm
just a cruel user..

bkx


Stuart J. Browne

Jan 12, 2003, 6:15:29 PM
<snip>

> After checking the hp site, it seems the lh3000 has a 2-channel raid
> controller and a 32MB cache, upgradeable to 128MB. There's also an
>(optional?) 2nd ultra/wide controller.

Correct. However, it's a shared port. You either use it for the OnBoard
RAID, or you use it for the OnBoard SLHA SCSI.

> Why are all of your drives on one channel, and are you at least using the
> 128MB cache?

Wholeheartedly agree..

> And is the limit of 6 drives/channel due to the hot-swap bay configuration?

Six drives per cage for the standard-height drives; nine for the slim-line
drives (but I've never used them). The LH3000 doesn't come with the 2nd cage
by default; it's an added extra.

Mike Brown

Jan 12, 2003, 11:12:47 PM
Joe wrote:
>
> Mike,
>
> > I have snipped out the 8:00 and 18:00 stats from your SAR report:
> >
> > 00:00:01 device %busy avque r+w/s blks/s avwait avserv
> > 08:00:01 Sdsk-0 100.00 1.86 40.03 517.51 52.32 60.90
> > 18:00:00 Sdsk-0 100.00 1.90 132.62 1705.20 73.18 81.07
> >
> > 00:00:01 bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s
> > 08:00:01 214 4939 96 45 472 91 0 0
> > 18:00:00 654 17034 96 199 2392 92 0 0
> >
> > You are obviously disk bound, but at a 96% read cache hit rate. Adding
> > more memory may improve your cache hit rate, but possibly not much.
> >
>
> I am thinking of increasing the RAID cache from 32MB to 64 or 128MB. Not
> sure if it will make much of a difference.

Everything helps, but the basic principle of cache is a statistics game. If
you have a large database and run a report that reads ALL of the records,
then cache does not help. At all. Cache only helps when the same data is
being accessed repeatedly in a small time frame.

Your application shows a 96 %rcache, which is a good number, and would lead
me to believe that the cache memory allocation is tuned well enough.
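To put numbers on that statistics game, here is a sketch of average read service time as a function of hit rate. The cache and disk times are illustrative assumptions, not measurements from this system:

```python
# Average read time vs. buffer-cache hit rate.  The 0.05 ms cached
# read and 8 ms physical read are assumed values for illustration.
CACHE_MS = 0.05
DISK_MS = 8.0

def avg_read_ms(hit_rate: float) -> float:
    return hit_rate * CACHE_MS + (1.0 - hit_rate) * DISK_MS

for h in (0.90, 0.96, 0.99):
    print(f"hit rate {h:.0%}: avg read ~{avg_read_ms(h):.2f} ms")
# Even at 96%, the 4% of misses dominate the average -- which is why
# a full-table report that defeats the cache hurts so much.
```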


>
> > A lread/s of 4939 is a high number, and at 18:00 you were at 17034 with a
> > cache hit rate of 96%.
> >
> > The avserv was 81 msec, which indicates a lot of latency and head motion,
> > and very likely a slow system.
> >
> > Without knowing your application I am only making a guess, but you need
> > more than two additional disk drives.
> >
>
> The application was built with Providex, a business BASIC interpreter.
> Workstations run a GUI client connected to the SCO box. Order entry,
> transaction updates, queries and reports are run extensively. At peak times,
> there can be as many as 60 concurrent sessions.

I work with Providex a lot, an excellent business BASIC product. Providex
responds well to tuning, and some sites are running over 120 active users
on a large DB. The Windex version of PVX needs to be tuned differently
than the character version, and can sustain a high DB load. Order entry
and distribution may require a large number of small record updates, like
... check the # of widgets, check the price, sell 10 widgets, update a sales
file, update history, update inventory. The system and raid controller do
their best to cache the writes, but they still have to get to disk at some
point. If the reads and writes are all over the disk then you end up with
a lot of disk head motion and slow I/O.


>
> > If you are running a database, say as an example Progress, with a datasize
> > of 20Gbytes and 60 active users I would recommend the following to balance
> > the CPU speed of 2 X 1G Pentium III ( Xeons ? )
>
> The system is a dual P3 at 1Ghz, but not the Xeon version.
>
> > 2 x 18 GB 10K raid 1 O/S + LP + application scripts
> > 6 x 18 GB 15K raid 0+1 database
> > 6 x 18 GB 15K raid 0+1 database
> > 2 x 18 GB 10K raid 1 for transaction logs ( ie BI files in Progress )
> > if the database uses them.
> >
> > When you build the 0+1 raid arrays break them into 4 ~12GB partitions
> > each, and try to use just one partition from each array for the database.
> > This will keep all the data together and reduce HD stroke time.
> > If your total database is over 24GB, then break them into 3 partitions,
> > and so on.
>
> At the moment there is only one logical drive, with a single
> partition holding 3 filesystems.

This is a normal setup, but may cause a lot of disk head motion. Providex
does not require a transaction log, so I would recommend upgrading to
a 6 drive raid 0+1 array, or two of them if the budget allows.

>
> >I found a good balance on a data intensive app with 2 x 550 MHz Xeons
> > was a total of 18 10K drives, and with a dual 1Ghz Xeon system I have
> > installed 54 10K drives total. The wait I/O is generally below 10%
> > at the worst point.
> >
>
> Mike, is it generally true that I/O performance is better with more small
> drives than a few bigger drives of the same speed?

I would suggest in your case, yes. Streaming out video, or acting as a
samba fileserver, is very different from an order entry database. In your
application the overhead of looking up a record and seeking to it will
be longer than the time to read or write the few bytes involved. Spreading
that work over more drives is a win.

Applications that read or write large blocks work well with big fast drives,
and do not see much of a penalty in the seek time.

It is a totally different project tuning the disk system for 4 requests a
second of 128K versus 240 requests a second of 1K. The total I/O in the
first case is higher, and will be fine on a 4-drive raid 0+1 system.
The second case will strain a dual 6-drive 0+1 raid.

At peak times your app is in the latter category.
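A back-of-the-envelope version of that contrast, using the 15k-rpm figures quoted earlier in the thread (4.1 ms avg seek, 2 ms avg rotational latency); the per-request transfer times are assumptions:

```python
SEEK_MS = 4.1       # avg read seek, 15k rpm 18GB drive (from the thread)
LATENCY_MS = 2.0    # avg rotational latency (from the thread)

def drive_iops(transfer_ms: float) -> float:
    """Random IOPS one spindle can sustain, ignoring queueing and
    the mirrored-write penalty."""
    return 1000.0 / (SEEK_MS + LATENCY_MS + transfer_ms)

big = drive_iops(transfer_ms=4.0)    # 128K requests: long transfer (assumed)
small = drive_iops(transfer_ms=0.1)  # 1K requests: seek-dominated (assumed)

print(f"4 x 128K req/s uses {4 / big:.0%} of one spindle")
print(f"240 x 1K req/s needs {240 / small:.1f} spindles flat out")
# ~1.5 spindles at 100% busy; with queueing headroom and mirrored
# writes, that load really does want a wide 0+1 array.
```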



>
> > The disk numbers may seem high, but this site is 7 x 24 with a 48GB
> > database size. I had to size everything to allow an online backup
> > AND reasonable system response at the same time. If you can do
> > your backups during company off hours the above recommendation should
> > be a good starting point.
> >
> > Mike
> >

Mike

Joe

Jan 14, 2003, 9:13:14 PM
Mike,

Very thorough. Thanks for the analysis. When I was a Summit customer, they
were so proud of having you as a resource, and I must admit that I gained
a lot from your expertise too. :-)

> Everything helps, but the basic principle of cache is a statistics game. If
> you have a large database and run a report that reads ALL of the records,
> then cache does not help. At all. Cache only helps when the same data is
> being accessed repeatedly in a small time frame.
>

I am going to upgrade the RAID cache to get the most out of all possibilities.

> I work with Providex a lot, an excellent business BASIC product. Providex
> responds well to tuning, and some sites are running over 120 active users
> on a large DB. The Windex version of PVX needs to be tuned differently
> than the character version, and can sustain a high DB load.
>

There may be some tunable kernel parameters that I can consider changing to
tune it for Windex. I will probably look into that after solving the
space issue.

> > In the meantime, there is only 1 logical drive which has only a single
> > partition for 3 filesystems.
>
> This is a normal setup, but may cause a lot of disk head motion. Providex
> does not require a transaction log, so I would recommend upgrading to
> a 6 drive raid 0+1 array, or two of them if the budget allows.
>

As mentioned in the original post, we are using four 18GB 15k rpm drives on
one channel, running RAID-10. Since HP has only the 10k rpm version of the
72GB drives, I am going to add a pair of 36GB 15k rpm drives and configure
the new pair as RAID-1. They will be put on the same channel. In the future,
I will move them to the second channel and add more drives to turn them into
RAID-10.
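For what it's worth, the usable capacity of that mixed layout works out to the 72GB target (a sketch of the arithmetic):

```python
# Usable capacity of the planned layout: the existing 4 x 18GB
# RAID-10 set plus a new 2 x 36GB RAID-1 pair.
def raid10_usable_gb(n_drives: int, drive_gb: int) -> int:
    # RAID-10 mirrors everything: half the raw capacity is usable.
    return n_drives * drive_gb // 2

def raid1_usable_gb(drive_gb: int) -> int:
    # RAID-1: one drive's worth of usable space per mirrored pair.
    return drive_gb

total = raid10_usable_gb(4, 18) + raid1_usable_gb(36)
print(f"usable: {total} GB")  # 36 + 36 = 72 GB, the 2-year target
```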

Thanks again, Mike.

Regards,

Joe

Bill Vermillion

Jan 14, 2003, 10:27:14 PM
In article <LUgU9.317567$F2h1....@news01.bloor.is.net.cable.rogers.com>,

>Bill,

>(http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=lpn12870).
>They don't even have the manufacturer info and the original model number.

>Those drives have only a 4MB buffer, but newer ones on the market
>have already increased the buffer to 8MB. My supplier told me
>that the 72GB drives from HP will soon be sold out and may not
>be available any more, as NetServers are retired products. What a
>pain to deal with HP!

We are down to only a handful of HD manufacturers anymore. Fuji
has taken over the IBM lines, Seagate is still making them, and
Maxtor acquired the Quantum line - with some of the original DEC
engineering going into their Atlas product.

Must you put in HP drives? And I take it there is no hint of who
OEM'ed the drives for HP.

I wish you luck.

Bill

>http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=lpn12870

Stuart J. Browne

Jan 14, 2003, 11:00:18 PM
<Snip>

> We are down to only a handful of HD manufacturers anymore. Fuji
> has taken over the IBM lines, Seagate is still making them, and
> Maxtor acquired the Quantum line - with some of the original DEC
> engineering going into their Atlas product.
>
> Must you put in HP drives? And I take it there is no hint of who
> OEM'ed the drives for HP.

HP's OEM drives are primarily Seagates or Maxtors, from experience. Had a
system a while ago (using 9GB Ultra2 disks) which didn't like mixing the
two brands.

> I wish you luck.
>
> http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=lpn12870


Joe

Jan 15, 2003, 7:30:34 AM

"Stuart J. Browne" <stu...@promed.com.au> wrote in message
news:b02mgb$k6t$1...@perki.connect.com.au...

> <Snip>
>
> > We are down to only a handful of HD manufacturers anymore. Fuji
> > has taken over the IBM lines, Seagate is still making them, and
> > Maxtor acquired the Quantum line - with some of the original DEC
> > engineering going into their Atlas product.
> >
> > Must you put in HP drives? And I take it there is no hint of who
> > OEM'ed the drives for HP.
>
> HP's OEM drives are primarily Seagates or Maxtors, from experience. Had a
> system a while ago (using 9GB Ultra2 disks) which didn't like mixing the
> two brands.
>

From the existing HP drives in the cage, I remember seeing Seagate and
another brand name (I can't remember if it was WD or Quantum). But the
technician from HP told me that it was the firmware that mattered. He said
that HP puts its own firmware on those hot-swappable drives. At one
time, he sent me newer firmware to flash the disk BIOS. As the disks are
quite critical to the system, I dare not use any other products from the
market.

My suppliers told me that HP has stopped selling parts for their retired
servers. There are only a few 36GB hard drives left. Once sold, they will
no longer be available.

The HP NetServer was bought in Nov 2001, what a pity!

Joe


Mike Brown

Jan 15, 2003, 9:54:00 AM

That seems an unreasonably short time, since HP did not have any
special requirements for the HDs. HP uses a standard RAID controller,
something like a Mylex DAC960 or an AMI MegaRAID, that will work with
normal SCA drives. We have replaced drives inside HP disk carriers;
they were standard Seagate Barracudas. The customer is running Point
Force software and PVX. By replacing the drives one at a time in the
mirror, the RAID controller rebuilt the data, but always back up first.
Remember, with PVX, if you back up and reload the executables they
become unlicensed.

You may want to check some of the other newsgroup postings; HP had an
issue running its RAID controller daemon, I think it was amird. The
daemon causes the system to slow down under heavy disk load. A fix
was to prevent the daemon from running; I am not sure if a newer
version is available.
