
Raid


corrlens

Oct 18, 2002, 6:30:47 PM
I have a HP Net Raid 1M controller with 32 megs of RAM in it. If I upgrade
to 64 megs of RAM, Do you think I'll notice my SCO Unix perform faster ?


moncho

Oct 19, 2002, 6:05:32 AM

"corrlens" <a...@sbcglobal.net> wrote in message
news:rY%r9.6006$El.377...@newssvr21.news.prodigy.com...

> I have a HP Net Raid 1M controller with 32 megs of RAM in it. If I upgrade
> to 64 megs of RAM, Do you think I'll notice my SCO Unix perform faster ?

Probably not. It really depends on how you have your RAID set up. If you
have it set to write-through, which is slower but MUCH safer than write-back,
it will probably not make much of a difference.

Also, what applications are you running on your server?

Bumping up the system memory and tuning the buffer cache will do more for
your system's performance than adding more memory to your
RAID controller.

moncho



---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.404 / Virus Database: 228 - Release Date: 10/15/2002


Tony Lawrence

Oct 19, 2002, 6:46:37 AM
corrlens wrote:
> I have a HP Net Raid 1M controller with 32 megs of RAM in it. If I upgrade
> to 64 megs of RAM, Do you think I'll notice my SCO Unix perform faster ?


If your system is disk bound, maybe. But realize that the answer
depends on the controller's ability to cache the data. If your data set
is 600 MB and your usage is constantly reading or writing through all of
it, then 64 MB can't help you very much. Caches depend upon locality of
reference: if your access to that 600 MB is typically to the same areas,
then doubling the cache might help a lot, but if not, not.

And of course if you aren't disk bound at all, then this can't help, period.
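Tony's locality argument can be made concrete with a toy average-access-time model (a sketch with made-up illustrative hit rates and latencies, not measurements of any real controller):

```python
def avg_access_ms(hit_rate, t_cache_ms=0.1, t_disk_ms=10.0):
    """Average access time with a cache in front of a slow disk:
    t_avg = hit_rate * t_cache + (1 - hit_rate) * t_disk."""
    return hit_rate * t_cache_ms + (1.0 - hit_rate) * t_disk_ms

# Good locality: doubling the cache might lift the hit rate a lot.
print(avg_access_ms(0.50))   # about 5 ms
print(avg_access_ms(0.90))   # about 1 ms
# Poor locality (sequentially scanning 600 MB through a 64 MB cache):
print(avg_access_ms(0.10))   # about 9 ms, barely better than no cache
```

Which regime applies depends entirely on the access pattern, which is exactly Tony's point.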

--

Please note new phone number: (781) 784-7547

Tony Lawrence
SCO/Linux Support Tips, How-To's, Tests and more: http://pcunix.com
Free Unix/Linux Consultants list: http://pcunix.com/consultants.html

Tony Lawrence

Oct 19, 2002, 8:25:19 AM
moncho wrote:
> "corrlens" <a...@sbcglobal.net> wrote in message
> news:rY%r9.6006$El.377...@newssvr21.news.prodigy.com...
>
>>I have a HP Net Raid 1M controller with 32 megs of RAM in it. If I upgrade
>>to 64 megs of RAM, Do you think I'll notice my SCO Unix perform faster ?
>
>
> Probably not. It really depends upon how you have your RAID setup. If you
> have it to write-thru, which is slower but MUCH safer than write-back, it
> will probably not make much of a difference.

That would depend on the ratio of reading to writing (among other
things). Setting or not setting write-through doesn't mean much if most
of your access is reading.

>
> Also, what applications are you running on your server?
>
> Bumping up the system memory and playing around with the buffer cache will
> increase the performance of your system rather than the more memory on your
> RAID controller.

That's not necessarily true either. The buffer cache is more
intelligent than a disk cache because it has a concept of files
whereas the disk cache just looks at tracks. But intelligence isn't
always the speediest approach: it's easy to visualize situations where
it is counterproductive and track caching would make more sense. I
can't think of a situation where adding disk caching would actually hurt
(as opposed to just not helping) but that may be just the limitations of
my imagination.

Jeff Liebermann

Oct 19, 2002, 12:09:11 PM
On Fri, 18 Oct 2002 22:30:47 GMT, "corrlens" <a...@sbcglobal.net>
wrote:

>I have a HP Net Raid 1M controller with 32 megs of RAM in it. If I upgrade
>to 64 megs of RAM, Do you think I'll notice my SCO Unix perform faster ?

Probably not. It all depends on how you have your system configured.
In my never humble opinion, the automatic selection of 10% of physical
RAM for NBUF is grossly insufficient for most applications. Unix
tends to be a disk basher and the drive system needs all the buffering
it can get. With systems that have lots of ram and a good UPS, I
frequently tweak NBUF to about 30% to 50% of physical memory. NHBUF
goes to about 1/2 of that rounded up to the nearest binary number.
I've seen spectacular performance improvements with this simple tweak.
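As a sketch of the arithmetic only (this is Jeff's personal rule of thumb, not an official SCO formula, and the function name is mine):

```python
def suggest_nbuf_nhbuf(ram_mb, fraction=0.4):
    """Rule of thumb from the post: NBUF at 30-50% of physical RAM,
    counted in 1 KB buffers; NHBUF at about half of NBUF, rounded up
    to the nearest power of two."""
    nbuf = int(ram_mb * 1024 * fraction)  # NBUF is a count of 1 KB buffers
    nhbuf = 1
    while nhbuf < nbuf // 2:
        nhbuf *= 2                        # keep NHBUF a power of 2
    return nbuf, nhbuf

# A 256 MB server tuned at 40%:
print(suggest_nbuf_nhbuf(256))   # roughly 100k buffers, NHBUF = 65536
```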

In these daze of gigabloat RAM systems, an additional 32MB of
buffering isn't gonna do much. Play with NBUF and NHBUF and you'll get
better results if your disk array is a bottleneck. Use "sar" or just
watch the flashing drive lights to be sure it's a bottleneck.

Reference:
http://pcunix.com/Bofcusm/108.html
http://pcunix.com/Bofcusm/106.html

http://osr5doc.ca.caldera.com:457/cgi-bin/getnav/PERFORM/buffer_cache.html
http://osr5doc.ca.caldera.com:457/PERFORM/mp_nhbuf.html


--
Jeff Liebermann 150 Felker St #D Santa Cruz CA 95060
(831)421-6491 pgr (831)336-2558 home
http://www.LearnByDestroying.com WB6SSY
je...@comix.santa-cruz.ca.us je...@cruzio.com

Rainer Zocholl

Oct 19, 2002, 2:43:00 PM
(Jeff Liebermann) 19.10.02 in /comp/unix/sco/misc:


>http://osr5doc.ca.caldera.com:457/cgi-bin/getnav/PERFORM/buffer_cache.html
>http://osr5doc.ca.caldera.com:457/PERFORM/mp_nhbuf.html

Thanks for the references (but why in heaven does Caldera use port 457,
which may be blocked by default firewall/proxy settings?)


http://osr5doc.ca.caldera.com:457/PERFORM/increase_buffer_cache.html

...

|If applications append to files but do not modify existing buffers, the
|write hit rate will be low and the newly written blocks will tend to
|remove possibly useful buffers from the cache.
|
|If you are running such applications on your system, increasing the
|buffer cache size may adversely affect system performance whenever
|the buffer flushing daemon runs. When this happens, applications may
|appear to stop working temporarily (hang) although most keyboard input
|will continue to be echoed to the screen. Applications such as vi(C)
|and telnet(TC) which process keyboard input in user mode may appear
|to stop accepting key strokes.

|The kernel suspends the activity of all user processes until the
|flushing daemon has written the delayed-write buffers to disk.

Why? Why can't that be done in "background"?


|On a large buffer cache, this could take several seconds.

Informix SE 7.13 on a dual-Pentium SCO 5.0.6a box with 2 GB RAM
and an Adaptec 3200S RAID 5 set to "write thru", 16 MB cache RAM, no BBU
(but a UPS ;-) ): I found that the write performance is as slow as 1 MB/s!
If I copy a 600 MB file from one directory to another it takes approx. 18 s,
but when I do a "sync" the system is blocked for minutes.

Is there no way to tell SCO OpenServer to do that flushing
more "politely" for the user, e.g. at lower priority?
If not, can someone explain why it is required to block
user activity so hard?


|To improve this situation, spread out the disk activity over time in the
|following ways:
|
| - Decrease the value of BDFLUSHR so that the flushing daemon
| runs more often.

In the case of the 600 MB copy example: wouldn't that just make the kernel
block occur sooner, but last the same total time?
Or will the blocking be shorter then, too?

| This will reduce the peak demand on disk I/O at the possible expense of a
| slight increase in context switching activity.

Do I understand "BDFLUSHR" right:
that daemon runs every 30 s by "default",
so if we get a power failure up to 30 s of work may be lost?
Oops.

| - Decrease the value of NAUTOUP so that fewer delayed-write buffers
| accumulate in the cache. Potentially useful data remains in the
| buffers that have been marked clean until they are reused.
| Do not reduce NAUTOUP too much or caching may become ineffective.
|
| - Use caching disk controllers (with battery backup if you
| are concerned about the integrity of your data).

Nice, important tip! Thanks!


Tom Parsons

Oct 19, 2002, 3:43:20 PM
to sco...@xenitec.on.ca
Rainer Zocholl enscribed:

| (Jeff Liebermann) 19.10.02 in /comp/unix/sco/misc:
|
|
| >http://osr5doc.ca.caldera.com:457/cgi-bin/getnav/PERFORM/buffer_cache.html
| >http://osr5doc.ca.caldera.com:457/PERFORM/mp_nhbuf.html
|
| Thanks for the references (but why in heaven caldera uses port 457 which
| may be blocked on firewalls/proxies "default settings"?)

Because the people immediately responsible for that area are the only ones
in the world who don't understand the problem and appear to be
unable to understand the explanation, even when expressed in words of
one syllable.

I've beat on that dead horse many times.
--
==========================================================================
Tom Parsons t...@tegan.com
==========================================================================

Jeff Liebermann

Oct 19, 2002, 10:11:04 PM
On Sat, 19 Oct 2002 19:43:20 GMT, Tom Parsons <c...@tegan.com> wrote:

>Rainer Zocholl enscribed:

>| (Jeff Liebermann) 19.10.02 in /comp/unix/sco/misc:
>| >http://osr5doc.ca.caldera.com:457/cgi-bin/getnav/PERFORM/buffer_cache.html
>| >http://osr5doc.ca.caldera.com:457/PERFORM/mp_nhbuf.html
>|
>| Thanks for the references (but why in heaven caldera uses port 457 which
>| may be blocked on firewalls/proxies "default settings"?)

>Because the people immediately responsible that area are the only ones
>in the world who don't understand the problem and don't appear to be
>unable to understand the explanation, even when expressed in words of
>one syllable.
>I've beat on that dead horse many times.

I may have been the original cause of the problem. I was having a bad
day and elected to take out my frustrations on the SCO web-meister of
the month. Specifically, I wanted to have the installation docs
available online so that when an OSR5 install blows up in my face, and
I can't get to the local online docs, and I can't remember who
borrowed the printed docs, I can read them online with another
computah. Trying to read the docs on a Windoze box, directly from the
OSR5 cdrom, is rather futile as all the "/" and "\" directory
delimiters are backwards and cause all common web browsers to barf.

This was also the time when I was practicing being abusive, abrasive,
and obnoxious, without swearing. SCO was temporarily taking me
seriously, although that changes from month to month. It may also
have been the full moon, but I'm not certain. Whatever the
concordance of coincidental events, I decided that this was the right
time to get the docs put online.

I lobbied all the usual culprits, sent email to anyone that was
interested, bitched incessantly to all the would listen, and generally
made it sound as if civilization would collapse if I didn't get my
way. From experience, this was the only way to make things happen at
SCO.

A few weeks later, I received email from the web-meister in charge of
some corner of the SCO web pile asking what the hell I wanted. I've
learned never to tell a technical person how to implement anything as
they always get insulted. I indicated that it would be nice to see
the man pages, installation instructions, and all the html docs
online, along with an extensive list of fabricated reasons why this
should be done yesterday.

A few daze later, I was told by the web-meister, that this was
impossible to perform for some reason which I've forgotten. I vaguely
recall that it was of the bureaucratic flavour. Here's where I made
my mistake. I indicated that it was incredibly simple. Since the SCO
web pile was running on an OSR5 server, all that would be necessary is
to open port 457 which points to a crude web server (scohelp) used to
dispense the docs. About 3 days later, the docs were magically
available on port 457. A bit later, I received email asking if I was
happy and informing me that the web-meister is gone for 2 weeks on
vacation.

Well, I wasn't happy because the online HTML docs were all compressed
on the server. Most browsers I tried didn't have a clue what to do
with compressed HTML. About 3 weeks later, someone finally
uncompressed the online man pages. That's roughly where things have
stood ever since. To someone's credit, docs for new OS releases have
been posted in a timely manner.

At various times, I've also tried to get some of the older docs
planted online. Trying to find older Xenix install instructions, docs
for some of the SCO Office Portfolio, and Foxplus stuff, is getting
difficult. I often find myself trying to deduce how something should
be done from TA's (techy articles). However, my fear is that any
attempt to "improve" the user interface, content, format, or access,
will result in some type of unacceptable nightmare. The "improved" TA
search makes me ill. Better to leave sleeping dogs alone.

Moral: If it works, it's permanent.

Jeff Liebermann

Oct 19, 2002, 10:41:44 PM
On 19 Oct 2002 20:43:00 +0200, UseNet-Pos...@zocki.toppoint.de
(Rainer Zocholl) wrote:

> (Jeff Liebermann) 19.10.02 in /comp/unix/sco/misc:

>http://osr5doc.ca.caldera.com:457/PERFORM/increase_buffer_cache.html

I can't answer your questions because frankly, I don't understand much
of how the kernel I/O buffering mechanics work. However, that
doesn't stop me from tweaking, tuning, playing, and breaking kernel
parameters. Most of your comments deal with everything EXCEPT the
most important ones, NBUF and NHBUF. You cited an Informix 7.13
example, using an SMP system, and what I guess is a 16MB NBUF buffer.
That's about what you would get with a 256MB ram system. That's
rather small for a high performance database machine.

I have learned from experience and a considerable amount of testing,
using a weird variety of applications, that dramatically increasing
NBUF and NHBUF yields dramatic performance benefits. You can play
with BDFLUSH and some of the other you've mentioned and get
improvements in some specific circumstances, but nothing has a bigger
effect than NBUF and NHBUF.

I once optimized (tinkered) with a 3.2v5.0.4 system running RM Cobol
and a brain dead application that didn't seem to understand the
concept of a pipe. Instead it scribbled temporary files in /tmp and
re-read the files as needed. With about 50 users, there were
thousands of files involved. The system would appear to freeze when
reports were run. The only way to get any kind of performance boost
was to increase the effectiveness of disk buffering.

For testing, I used running a monthly trial balance in the accounting
system. Such reports tend to open lots of files, scribble lots of
crud to /tmp, and take forever. With 256MB of ram, and one user on
the system, a trial balance initially took about 15 minutes. After
tinkering with NBUF and NHBUF, and running a trial balance with each
kernel relink, I settled on 60000 (60MByte) for NBUF and I forgot what
for NHBUF. The trial balance benchmark went from 15 minutes to about
3 minutes. I could have increased it more, but my graph of the buffer
size vs performance curve was beginning to flatten out. A 5x
improvement, with no freezing when busy, was good enough.

Don't forget that you'll need memory for user processes. You can't
grab all the memory for disk buffering or user processes will cause
the system to swap. Keep an eye on swapping (swap -l), extrapolate a
typical user load, and don't let the system swap for any reason.

I will confess to having increased some of the other parameters
(NFILE, NPROC, NREGION, MAXUP, etc) that sar was showing were getting
close to overflowing. However, these were not for performance tuning.

Try adjusting NBUF/NHBUF, see what it does, make sure you have a
really good UPS, don't worry about the other parameters for now, and
have a good day.

Rainer Zocholl

Oct 20, 2002, 7:31:00 AM
(Jeff Liebermann) 19.10.02 in /comp/unix/sco/misc:

>On 19 Oct 2002 20:43:00 +0200, UseNet-Pos...@zocki.toppoint.de
>(Rainer Zocholl) wrote:

>> (Jeff Liebermann) 19.10.02 in /comp/unix/sco/misc:

>>http://osr5doc.ca.caldera.com:457/PERFORM/increase_buffer_cache.html

>I can't answer your questions because frankly, I don't understand how
>much of the kernel I/O buffering mechanics works. However, that
>doesn't stop me from tweaking, tuning, playing, and breaking kernel
>parameters. Most of your comments deal with everything EXCEPT the
>most important ones, NBUF and NHBUF.

I'm starting in that area and wanted to stay with K.I.S.S. ;-)
"If you never wallpapered a room, start with one without doors and windows"..

>You cited an Informix 7.13 example, using an SMP system,
>and what I guess is a 16MB NBUF buffer.
>That's about what you would get with a 256MB ram system.

It has 2GB SD-RAM. We had 256 MB years ago with a V5(6?) but were
strongly recommended to update to at least 1GB when going to V7.

>That's rather small for a high performance database machine.

>I have learned from experience and a considerable amount of testing,
>using a wierd variety of applications, that dramatically increasing
>NBUF and NHBUF yields dramatic performance benifits.

Reading or writing?
The read performance of the box is IIRC approx. "only" 15 MB/s, but sufficient.
The 1 MB/s for writing would be OK too, because we don't have to write much.
What annoys the users are those complete freezes for seconds after someone
has written something bigger.


>You can play with BDFLUSH and some of the other you've mentioned and get
>improvements in some specific circumstances, but nothing has a bigger
>effect than NBUF and NHBUF.

Ok, that's set to "automatic".

I only found a way to set those values.
Is there a way to get a report of what those
"automatic" values really are?


>I once optimized (tinkered) with a 3.2v5.0.4 system running RM Cobol
>and a brain dead application that didn't seem to understand the
>concept of a pipe.
>Instead it scribbled temporary files in /tmp and re-read the files
>as needed.

You are not alone... same here.
Seems to be the usual way: make it run "anyhow" and then
compensate for programming weaknesses with brute hardware...?


>With about 50 users, there were thousands of files involved.
>The system would appear to freeze when reports were run.
>The only way to get any kind of performance boost
>was to increase the effectiveness of disk buffering.

>For testing, I used running a monthly trial balance in the accounting
>system. Such reports tend to open lots of files, scribble lots of
>crud to /tmp, and take forever. With 256MB of ram, and one user on
>the system, a trial balance initially took about 15 minutes. After
>tinkering with NBUF and NHBUF, and running a trial balance with each
>kernel relink, I settled on 60000 (60MByte) for NBUF and I forgot what
>for NHBUF.

Is that the "approx. 30% of total RAM" I read somewhere else?

With 2 GB that would be 600 MB...
When that buffer is flushed at 1 MB/s it will run for 600 s, i.e. 10 minutes!

Currently the UPS would turn power off after 5 minutes...

>The trial balance benchmark went from 15 minutes to about
>3 minutes.

Sounds worth testing.

>I could have increased it more, but my graph of the buffer
>size vs performance curve was beginning to flatten out. A 5x
>improvement, with no freezing when busy, was good enough.


>Don't forget that you'll need memory for user processes. You can't
>grab all the memory for disk buffering or user processes will cause
>the system to swap. Keep an eye on swapping (swap -l), extrapolate a
>typical user load, and don't let the system swap for any reason.

Yes, of course.
The typical load is around 0.01... I assume that box is powerful
enough. (As long as bdflush is not running users are really happy
with the response times, but when it runs no one can work for
a while. That's annoying when you are talking to a customer
on the phone and suddenly you can't give him the values... )


>I will confess to having increased some of the other parameters
>(NFILE, NPROC, NREGION, MAXUP, etc) that sar was showing were getting
>close to overflowing. However, these were not for performance tuning.

>Try adjusting NBUF/NHBUF, see what it does,

>make sure you have a really good UPS,

I hope I have. There are 2 redundant power supplies in the box.
Maybe I'll attach a UPS to each... ;-)

>don't worry about the other parameters for now, and
>have a good day.

Thanks.


Rainer

Tony Lawrence

Oct 20, 2002, 10:27:02 AM
Rainer Zocholl wrote:
> (Jeff Liebermann) 19.10.02 in /comp/unix/sco/misc:
>
>
>>On 19 Oct 2002 20:43:00 +0200, UseNet-Pos...@zocki.toppoint.de
>>(Rainer Zocholl) wrote:
>
>
>>>(Jeff Liebermann) 19.10.02 in /comp/unix/sco/misc:

>

>>You can play with BDFLUSH and some of the other you've mentioned and get
>>improvements in some specific circumstances, but nothing has a bigger
>>effect than NBUF and NHBUF.
>
>
> Ok, that's set to "automatic".
>
> I only find a way to set that values.
> Is there a way to make a report to see what those
> "automatic" values really are?

Yes.
echo v | crash | grep buf

#(v_hbuf is NHBUF)


See http://pcunix.com/Unixart/memory.html for more on that.

Stephen M. Dunn

Oct 20, 2002, 10:58:01 PM
In article <8ZBn3...@zocki.toppoint.de> UseNet-Pos...@zocki.toppoint.de (Rainer Zocholl) writes:
$ (Jeff Liebermann) 19.10.02 in /comp/unix/sco/misc:
$>http://osr5doc.ca.caldera.com:457/cgi-bin/getnav/PERFORM/buffer_cache.html
$>http://osr5doc.ca.caldera.com:457/PERFORM/mp_nhbuf.html
$
$Thanks for the references (but why in heaven caldera uses port 457 which
$may be blocked on firewalls/proxies "default settings"?)

457 is the port used for man pages in OSR5. Not using port
80 for that was done for a very good reason - you might want to
use port 80 for your own Web server, and if you did, you'd break
the man pages.

It would have been nice if they'd set up this one particular
server to have its man pages on port 80, though ... looks to me
like it would be a one-line change in each of two files on that
system, though not having tried it I can't say for sure if those
changes would break anything else.

$|To improve this situation, spread out the disk activity over time in the
$|following ways:
$|
$| - Decrease the value of BDFLUSHR so that the flushing daemon
$| runs more often.
$
$In the case of the 600MB copy example: Wouldn't that just cause the kernel
$block to occure more "instantly", but lasts the same time?
$Or will the blocking be shorter then too?

Nope. bdflush doesn't write _all_ delayed write data each time
it's run; it writes only those blocks which have been waiting
for at least NAUTOUP seconds.

If BDFLUSHR is 30 seconds and NAUTOUP is (say) 10 seconds, then
every 30 seconds, the system writes out everything that's older than
10 seconds (so anything that's 10-40 seconds), but doesn't write
out anything under 10 seconds.

Set BDFLUSHR to 1 second, for example, and leave NAUTOUP at
10 seconds. Now, every second, the system writes out everything
that's older than 10 seconds (so anything that's 10-11 seconds
is fair game; nothing is older than 11 seconds because if it were,
it would have been written last time, or the time before, or ...),
and still doesn't write anything under 10 seconds.

In the first case, you only have to put up with bdflush
blocking you twice a minute, but since it has to write half a
minute's worth of dirty blocks each time, the pauses could be
quite long.

In the second case, bdflush blocks you every second - but since
it only has to write a second's worth of data, the pauses will
generally be quite short and, unless you have very heavy write
activity (or very slow hard drives), the pauses will probably be
less noticeable or objectionable to the user.

The first case is slightly more efficient overall, but
if the second case generates fewer user complaints, it may
be better than the first one even though it's less efficient.
And of course you don't have to pick between 30 seconds and 1
second; you can choose whatever value of BDFLUSHR best suits your
system (and ditto for NAUTOUP).
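The schedule Stephen describes can be checked with a toy flush-timing model (an illustration of the description above, not actual kernel code; the function name is mine):

```python
def age_at_write(dirty_time, bdflushr, nautoup):
    """Seconds a buffer dirtied at dirty_time stays dirty: bdflush
    runs at multiples of BDFLUSHR and writes out any buffer that is
    at least NAUTOUP seconds old."""
    t = 0.0
    while True:
        t += bdflushr                  # next bdflush pass
        if t - dirty_time >= nautoup:  # old enough to be flushed
            return t - dirty_time

# BDFLUSHR=30, NAUTOUP=10: dirty data can wait anywhere from 10 to 40 s
print(age_at_write(0.0, 30, 10))    # 30.0
print(age_at_write(25.0, 30, 10))   # 35.0
# BDFLUSHR=1 narrows the window to 10-11 s: many short pauses instead
print(age_at_write(25.0, 1, 10))    # 10.0
```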

FWIW, my home PC, which is on a small UPS and doesn't have
a caching disk controller, has NAUTOUP at 10 seconds and
BDFLUSHR at 1 second. At times when disk activity is heavy
(like a batch of incoming news being processed), it doesn't
have big pauses like it used to when I used the default settings.

$Do i understand "BDFLUSHR" right:
$That deamon is active every 30s by "default".
$If we get a power failure upto 30sec work may be lost?
$Oops.

Yes. This is one of the classic performance vs. reliability
tradeoffs. Delayed writes (whether done in the OS or in the
disk subsystem) can significantly improve performance, but at
the cost of potentially greater data loss in the case of a
crash.

$| - Use caching disk controllers (with battery backup if you
$| are concerned about the integrity of your data).
$
$Nice, important tip! Thanks!

Also, if you're using a caching disk controller with battery
backup, make sure you're monitoring the status of the battery.
I can't speak for all such controllers, but many (most?) of them
use a lithium battery, which will probably expire 5-10 years
down the road. Some might use rechargeable batteries, but
they don't have infinite lives, either.

Some battery-backed caching controllers will automatically
switch to write-through mode if they detect that the battery is
dying, to ensure reliability at the cost of performance.

Check the manuals for your controller to find out how you can
get status reports on the battery - perhaps there will be
messages displayed on the console and/or written to syslog,
or maybe there's a special monitoring program you have to run
in order to find out if there are any problems.
--
Stephen M. Dunn <ste...@stevedunn.ca>
>>>----------------> http://www.stevedunn.ca/ <----------------<<<
------------------------------------------------------------------
Say hi to my cat -- http://www.stevedunn.ca/photos/toby/

Stephen M. Dunn

Oct 20, 2002, 10:58:00 PM
In article <rY%r9.6006$El.377...@newssvr21.news.prodigy.com> "corrlens" <a...@sbcglobal.net> writes:
$I have a HP Net Raid 1M controller with 32 megs of RAM in it. If I upgrade
$to 64 megs of RAM, Do you think I'll notice my SCO Unix perform faster ?

It's hard to say without knowing anything about
- what version of SCO Unix you have
- how much RAM the server itself has
- how you've configured it to use that RAM
- how much of a bottleneck your disk subsystem is
- how you're using the system
- etc.

If you have at least 64 MB of RAM configured as cache buffers in
the OS (assuming you're doing filesystem access - if not, then
if you have at least 64 MB of RAM configured as cache buffers in whatever
database engine or other program is accessing the disk), chances are
you won't see much of an improvement, because the 32 or 64 MB of RAM
on the RAID array mostly holds the same data as 32 or 64 MB of the
cache buffers in system RAM.
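A two-level hit-rate sketch shows why the controller cache adds little when it mostly duplicates the OS buffer cache (illustrative numbers and names only):

```python
def two_level_avg_ms(os_hit, ctrl_hit, t_os=0.01, t_ctrl=0.5, t_disk=10.0):
    """Two caches in series: the controller cache only ever sees the
    OS buffer cache's misses, so ctrl_hit is its hit rate on those."""
    miss = 1.0 - os_hit
    return (os_hit * t_os
            + miss * ctrl_hit * t_ctrl
            + miss * (1.0 - ctrl_hit) * t_disk)

# If the controller mostly holds the same blocks as the OS cache, its
# hit rate on the OS cache's misses is low, and doubling the controller
# RAM barely moves the average access time:
print(round(two_level_avg_ms(0.9, 0.05), 3))
print(round(two_level_avg_ms(0.9, 0.10), 3))
```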

If your system does a lot of writing to disk, and the RAID card
is configured for write-back operation, you could improve write
performance this way, though the potential cost is more data lost
if the system loses power during disk writes. Even if the system
is on a UPS, this could happen if, for example, the server's power
supply dies. I don't know that RAID card so I don't know if it
has onboard battery backup for its RAM - some cards do, some
don't - but if it does, that could help reduce the risk of data
loss, as long as you are monitoring the battery and can replace
it when it dies.

Do some performance profiling on your system. Find out where
the bottlenecks are. Make sure the OS itself is tuned appropriately
for your hardware and your use; ditto for any database or other
software that does its own buffer management. Then, if you're still
having performance problems and the profiling shows that it's
due to disk I/O, look at things like:
- adding system RAM to be used for buffering
- replacing hard drives with faster hard drives
- adding more hard drives
- adding more RAM to the RAID array

Stephen M. Dunn

Oct 20, 2002, 11:37:56 PM
In article <8ZC1t...@zocki.toppoint.de> UseNet-Pos...@zocki.toppoint.de (Rainer Zocholl) writes:
$(Jeff Liebermann) 19.10.02 in /comp/unix/sco/misc:
$>You can play with BDFLUSH and some of the other you've mentioned and get
$>improvements in some specific circumstances, but nothing has a bigger
$>effect than NBUF and NHBUF.
$
$Ok, that's set to "automatic".
$
$I only find a way to set that values.
$Is there a way to make a report to see what those
$"automatic" values really are?

The value used for NBUF is reported at boot time, and recorded for
posterity in /usr/adm/messages (it's also fed into the syslog system),
just below the table of hardware configuration:

--------------------8<--------(cut here)--------->8-------------------
mem: total = 392764k, kernel = 296312k, user = 96452k
swapdev = 44/8, swplo = 0, nswap = 524288, swapmem = 262144k
rootdev = 1/42, pipedev = 1/42, dumpdev = 44/8
kernel: Hz = 100, i/o bufs = 262140k (high bufs = 256900k)
--------------------8<--------(cut here)--------->8-------------------

The one that says "i/o bufs" is NBUF. Note that the above example
is _not_ a default setting - you won't see 256 MB of buffers on
a 384 MB system with NBUF left at its default setting :-)

$>Instead it scribbled temporary files in /tmp and re-read the files
$>as needed.
$
$You are not alone... same here.
$Seems to be the usual way: Make it run "anyhow" and then
$compensate programming weakness with brute hardware...?

Unfortunately, there seems to be more and more of that going
around these days. The first fileserver I administered used
12 MB of RAM (yes, I know, lots of people used less), and my first
home Xenix box had either 2 or 4 MB of RAM. Both of them ran
just fine by the expectations of their eras; I doubt a current
version of NetWare or NT could even be installed in 12 MB, nor
will the OSR5 install disk work with 2 or 4 MB.

If you're using HTFS filesystems (or others which use the ht
driver in OSR5), you could try mounting them with the tmp
option (man ADM mount for more info). I haven't tried benchmarking
it but I can't say I'm convinced it makes much difference - has
anyone done any testing on the effects of this?

$With 2GB that would be 600MB...
$When that buffer is flushed it will run 600s 10 minutes!

If it's all dirty, yes. I think you said earlier that most
of the activity is read activity (or maybe I'm imagining things),
in which case you would expect to have rather less than 600 MB
of data to write out. Also, playing with BDFLUSHR and NAUTOUP
can affect how much has to be written at once.

Your 1 MB/s sounds rather low, though. The slower of my two
hard drives (an older 2 GB fast narrow 5400 rpm SCSI drive) writes at
about 5 MB/s ... I wonder why your write rates are so low?

Bela Lubkin

Oct 21, 2002, 1:11:09 AM
to sco...@xenitec.on.ca
Stephen M. Dunn wrote:

> $With 2GB that would be 600MB...
> $When that buffer is flushed it will run 600s 10 minutes!
>
> If it's all dirty, yes. I think you said earlier that most
> of the activity is read activity (or maybe I'm imagining things),
> in which case you would expect to have rather less than 600 MB
> of data to write out. Also, playing with BDFLUSHR and NAUTOUP
> can affect how much has to be written at once.
>
> Your 1 MB/s sounds rather low, though. The slower of my two
> hard drives (an older 2 GB fast narrow 5400 rpm SCSI drive) writes at
> about 5 MB/s ... I wonder why your write rates are so low?

Perhaps the subject line is a hint? I've lost the big picture on this
thread, but -- some types of RAID setup lead to really slow writes. Any
RAID mode that uses parity requires either an extra read and write or an
extra N reads + 1 extra write. This can be badly compounded if the
stripe size is large.

We should refocus on why write performance is so slow. Rainer,
post the brand and model of the RAID controller, and a detailed
description of the RAID setup on the virtual disk that's experiencing
the slow writes. That should be something like:

RAID controller is a SpaZcorp R2D2
The slow virtual disk is a RAID 5 with 7 physical disks
The disks are all MondoByte SZ9023 23GB fast/wide 15000RPM SCSI drives
6 disks are in active use, with one hot spare
RAID 5, so rotating parity uses one whole disk's worth of space --
total usable virtual disk size = 5 * 23GB = 115GB
Stripe size is 64K

... and if that's close to your actual setup, you'll probably get a
noticeable benefit from reducing the stripe size or going to a different
RAID level (mirroring). Beware that with most RAID subsystems, changing
the layout destroys the data...
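(Editorial aside, not from the original posts.) The capacity and small-write arithmetic above can be sketched numerically. This is an illustrative model using Bela's hypothetical SpaZcorp numbers; the function names are invented for the example:

```python
# Rough model of the RAID 5 points above: usable capacity and the
# small-write penalty.  Illustrative only, not measured controller behavior.

def raid5_usable_gb(active_disks: int, disk_gb: int) -> int:
    """Rotating parity consumes one disk's worth of space overall."""
    return (active_disks - 1) * disk_gb

def raid5_small_write_ios(blocks_touched: int) -> int:
    """I/O operations for a partial-stripe update via read-modify-write:
    read old data block(s) + old parity, write new data + new parity."""
    return 2 * blocks_touched + 2

# Bela's hypothetical setup: 6 active 23GB disks (plus one hot spare).
print(raid5_usable_gb(6, 23))      # 115 (GB), matching "5 * 23GB"
# A single-block update costs 4 I/Os where a plain disk needs just 1:
print(raid5_small_write_ios(1))    # 4
```

This is why parity RAID with a large stripe size can make small random writes so much slower than the raw disks would suggest.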

>Bela<

Jeff Liebermann

Oct 21, 2002, 2:08:19 AM
to
On 20 Oct 2002 13:31:00 +0200, UseNet-Pos...@zocki.toppoint.de
(Rainer Zocholl) wrote:

>>You cited an Informix 7.13 example, using an SMP system,
>>and what I guess is a 16MB NBUF buffer.
>>That's about what you would get with a 256MB ram system.

>It has 2GB SD-RAM. We had 256 MB years ago with a V5(6?) but were
>strongly recommended to update to at least 1GB when going to V7.

With the present cost of RAM, that's a cheap upgrade, especially if
your system is memory bound. If you're swapping, the performance gain
is spectacular. If you're NOT swapping, then the performance gain of
adding huge amounts of memory is dubious. Extra memory never hurts,
but it is not a guaranteed fix for performance problems. I sure hope
your 2GBytes is ECC memory.

I have a customer running about 30 telnet users (each with about 3
sessions) pounding on a Foxplus database on 3.2v5.0.5. The system was
running on 128MB of RAM and never swapped. The price of ECC RAM came
down sufficiently for me to stock up on some. I crammed 1GB into the
server, made no kernel configuration changes, and tested performance.
It was about the same. Nobody even noticed.

We settled on 256MB. However, I used the extra memory NOT for user
applications, but for NBUF (and NHBUF), which was increased from 12MB to
about 50MB. I got a major performance boost for my trouble.
Everything was faster including being able to run the DDS-3 tape drive
during business hours without anyone noticing (much). Obviously, the
hard disk i/o was the bottleneck, not the computational memory.

>>I have learned from experience and a considerable amount of testing,
>>using a weird variety of applications, that dramatically increasing
>>NBUF and NHBUF yields dramatic performance benefits.

>Reading or writing?
>The read performance of the box is IIRC approx "only" 15MB/s but sufficient.
>The 1MB/s for writing would be OK too, because we don't have to write much.
>What annoys the users are those complete stalls for seconds if someone
>has written something bigger.

The 1MByte/sec is way too slow unless you're running a seriously
bottlenecked RAID 1 system with no write cache. However, both your
tests are for sequential access, reading and writing. This is NOT
what a real system does for a living (except during tape backups and
restores). While sequential access benchmarks are a good indication
of expected performance, neither uses the kernel buffering or RAID
card buffering mechanism efficiently. More specifically, neither
sequential read nor write benchmarks ever use the cache in any way, as
no data in the cache is read or written more than once. You might as
well not have a cache at all.

If you want to benchmark buffering performance, either use a real
application as I suggested, or find a random block read/write
benchmark that simulates a real application. That will exercise the
buffering mechanism and offer a more realistic test.
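(Editorial aside, not from the original posts.) Jeff's point about sequential benchmarks never exercising a cache can be demonstrated with a toy LRU cache; the cache and working-set sizes below are arbitrary assumptions, not SCO or controller internals:

```python
# Toy LRU cache: a one-pass sequential benchmark never re-reads a block,
# so the hit rate is zero; a workload with locality of reference gets
# real hits.  Sizes are arbitrary, for illustration only.
from collections import OrderedDict
import random

def hit_rate(accesses, cache_blocks=64):
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(accesses)

sequential = list(range(10_000))           # read every block exactly once
random.seed(1)
localized = [random.randrange(100) for _ in range(10_000)]  # 100 hot blocks

print(hit_rate(sequential))  # 0.0 -- the cache never helps
print(hit_rate(localized))   # high -- most of the hot set stays cached
```

The same logic applies to the controller's onboard cache: doubling it from 32MB to 64MB only matters if the workload re-touches recently accessed blocks.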

>I only find a way to set that values.
>Is there a way to make a report to see what those
>"automatic" values really are?

That was answered in detail by Tony Lawrence and Stephen Dunn.

Use the various buffer and i/o reports belched by sar to determine
performance. If your i/o buffer queue only has one instruction
waiting for the hard disk system to execute, you're doing fine. You
should also set up sar data collection for 24 hours instead of just
business hours. See:
http://www.cruzio.com/~jeffl/sco/sar24hour.txt
for instructions.

>>After
>>tinkering with NBUF and NHBUF, and running a trial balance with each
>>kernel relink, I settled on 60000 (60MByte) for NBUF and I forgot what
>>for NHBUF.

>Is that the "approx. 30% of total RAM" I read somewhere else?

Yes. I set NBUF to equal between 30% and 50% of physical RAM size.
I've used larger than 50% for application servers that will benefit
from a large read cache. A web server, without a Squid object cache,
is a good candidate. Same with a database server that has a high
ratio of user reads to user updates (writes).

>With 2GB that would be 600MB...
>When that buffer is flushed it will run 600s 10 minutes!

NBUF should not be that large. Other parameters (e.g. the name cache)
would also need to be scaled to accommodate huge disk buffers.
Offhand, I would suggest no larger than about 90MB for NBUF.

Also, the entire write buffer does not flush at once. You will never
have 600MB of stale write data that needs to be flushed at once.
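(Editorial aside, not from the original posts.) A back-of-envelope check of the numbers being traded here, using the thread's 2GB RAM and 1 MB/s write-rate figures; the 5% dirty fraction at the end is an invented illustrative value:

```python
# Sanity-check the thread's arithmetic: 30% of 2GB is ~600MB, and at the
# reported 1 MB/s a fully dirty cache would indeed take ~10 minutes to
# flush -- but only the dirty fraction is ever written out.

ram_mb = 2048
nbuf_mb = 0.30 * ram_mb          # the "approx. 30% of RAM" rule of thumb
write_rate_mb_s = 1.0            # Rainer's measured write throughput

def flush_seconds(dirty_fraction: float) -> float:
    return nbuf_mb * dirty_fraction / write_rate_mb_s

print(nbuf_mb)                   # ~614 MB, the "600MB" in the thread
print(flush_seconds(1.0) / 60)   # ~10 minutes if (implausibly) all dirty
print(flush_seconds(0.05))       # ~31 s with an assumed 5% dirty
```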

>Currently the UPS would turn power off after 5 minutes...

That's ridiculous and far too short. You need to give users a warning
to log off, close files, autosave data, kill off running processes,
initiate a shutdown, and then kill the power. The shutdown stage
initiates a sync which (hopefully) flushes the write buffers. Some
caching controllers are fairly smart about flushing the write buffer
on shutdown. (Others are completely stupid.)

Gotta run. Dr Who is on TV.

corrlens

Oct 21, 2002, 10:19:30 AM
to

"Stephen M. Dunn" <ste...@bokonon.stevedunn.ca> wrote in message
news:H4B79...@bokonon.stevedunn.ca...

> In article <rY%r9.6006$El.377...@newssvr21.news.prodigy.com> "corrlens"
> <a...@sbcglobal.net> writes:
> $I have a HP Net Raid 1M controller with 32 megs of RAM in it. If I upgrade
> $to 64 megs of RAM, Do you think I'll notice my SCO Unix perform faster ?
>
> It's hard to say without knowing anything about
> - what version of SCO Unix you have
> - how much RAM the server itself has
> - how you've configured it to use that RAM
> - how much of a bottleneck your disk subsystem is
> - how you're using the system
> - etc.

Answers:

1) SCO OpenServer 5.0.5
2) 1.5 GB RAM, 4 x 18GB 15K SCSI drives in a RAID 10.
3) I let Tuneup (Olympus?) do the tuning. Should I still modify the RAM
settings to get better performance?
4) I have about 200 users using a filePro Plus database, plus a lot of
VSI-FAX activity and some www activity.

Thanks !

Rainer Zocholl

Oct 21, 2002, 3:08:00 PM
to
(Jeff Liebermann) 20.10.02 in /comp/unix/sco/misc:

Yes. That's done. The UPS starts a regular "shutdown now".
After that there are still 5 minutes on battery, IIRC.
Shutdown took less than 2 minutes, but I should retest it.
I should also mention that the shutdown sequence is issued
5 minutes after the mains power fails, so the box has already run
10 minutes on battery when it is finally turned off.
(The quote above sounds a little as if the power is simply turned off
after 5 minutes. That's not what I meant, sorry.)
So the users would have had a 10-minute grace period.
But I assume that the user PCs and switches (which are mostly not
battery-backed, except those between the UPS and the server ;-) ) are
already OFF, so the users can't do anything anymore...

Maybe I should tell the UPS "rmtcmd" to do a sync the moment the
UPS takes over, when mains drops? Hm...


These short times are meant to avoid hard-dropping the
databases/filesystems on empty batteries when several blackouts
come in a row (when we have had blackouts, they were often
at least double.)
(Of course the UPS should not turn power back on as long as the
batteries are not charged up again. But I don't trust them on that.)

Rainer Zocholl

Oct 21, 2002, 3:14:00 PM
to
(Bela Lubkin) 21.10.02 in /comp/unix/sco/misc:

>Stephen M. Dunn wrote:

>> Your 1 MB/s sounds rather low, though. The slower of my two
>> hard drives (an older 2 GB fast narrow 5400 rpm SCSI drive) writes
>> at about 5 MB/s ... I wonder why your write rates are so low?

>Perhaps the subject line is a hint? I've lost the big picture on this
>thread, but -- some types of RAID setup lead to really slow writes.
>Any RAID mode that uses parity requires either an extra read and write
>or an extra N reads + 1 extra write. This can be badly compounded if
>the stripe size is large.

Yep.

>We should be re-focusing on why write performance is so slow. Rainre,
>post the brand and model of the RAID controller, and a detailed
>description of the RAID setup on the virtual disk that's experiencing
>the slow writes. That should be something like:

> RAID controller is a SpaZcorp R2D2

Adaptec ("DPT") 3200S

> The slow virtual disk is a RAID 5 with 7 physical disks

Yep. 6 disks, but otherwise bingo.

> The disks are all MondoByte SZ9023 23GB fast/wide 15000RPM SCSI
>drives 6 disks are in active use, with one hot spare

5+1

That's it. Newest Seagate 15K rpm, 18GB each. "Wide".

(There were no smaller ones available)

> RAID 5, so rotating parity uses one whole disk's worth of space --
> total usable virtual disk size = 5 * 23GB = 115GB

Hm, there are only 60 or 70GB IIRC (sorry, I don't have the box
in front of me)


> Stripe size is 64K

IIRC: Yes.


>... and if that's close to your actual setup,

Very close

>you'll probably get a noticeable benefit from reducing the stripe size
>or going to a different RAID level (mirroring).


We tried that already.
But: this is the only configuration we found in which OSR 5.0.6 boots
reliably on both CPUs. We had a hard time making it boot at all
(Siemens F200), let alone stably(!), on both CPUs.
We tried a mix: a 2-disk RAID 1 (system) with the remainder as RAID 5
(database), but that seemed to lead to unreliable boot behavior:
sometimes (at first rarely, later always) the box hung when it turned on
the second CPU. That was of course not acceptable.


>Beware that with most RAID subsystems,
>changing the layout destroys the data...

I assume that the currently missing "write back" is the
main reason for the bad write performance.
But as long as there is no BBU, the box has to stay with "write thru".
Also, I did not expect that writing would have such an impact
on the users. (When bigger amounts of data are flushed, the users
can't work anymore. I thought that was done in the background.)

There are also 2 parameters mentioned in the controller's setup software
which I can't see in "raidutil" or mentioned elsewhere:
one is "sync command mandatory", and it is set to "mandatory",
meaning that if the OS issues a SCSI sync command, it is
reported finished only when it is really done.
If "optional", the controller may "lie".

Stephen M. Dunn

Oct 21, 2002, 9:58:01 PM
to
In article <S1Us9.7556$bZ3.44...@newssvr21.news.prodigy.com> "corrlens" <a...@sbcglobal.net> writes:
$> In article <rY%r9.6006$El.377...@newssvr21.news.prodigy.com> "corrlens"
$<a...@sbcglobal.net> writes:
$> $I have a HP Net Raid 1M controller with 32 megs of RAM in it. If I
$upgrade
$> $to 64 megs of RAM, Do you think I'll notice my SCO Unix perform faster ?

[...my questions about configuration deleted...]

$2) 1.5 Gbytes RAM, 4 -18Gbyte 15K SCSI drives on a RAID 10 .
$3) I let Tuneup (olympus?) do the tunning , Should I still modify the RAM to
$get better performance ?
$4) I have about 200 users using fileproplus database plus a lot of vsifaxing
$and some www activity.

I haven't used Olympus Tuneup much (and not at all in several years)
so I don't know how good a job it does. I also don't know technical
details of fileproplus so I'm going to make an assumption: its
databases are accessed through the Unix filesystem and not via
direct access to the hard drive.

Database access varies depending on the application, but it's often
a lot of reading combined with some writing. A Web server usually
does much more reading than writing. Faxing tends to put a lot of
load on the CPU while the fax is being converted into a compressed
TIFF, along with disk writes to spool up the job (I haven't used
vsifax in a few years, either, but I'm guessing it either writes
the source document to spool, converts it, writes that to spool, and
then sends, or converts on the fly, writing only the compressed TIFF
to spool), and rather light disk read activity while sending, since
you can't shove the fax down a phone line very quickly. I'm guessing
that the database is the most significant component of disk access.

Based on your description above, I'd be surprised if Tuneup didn't
set aside a good chunk of your RAM for various sorts of caching,
including cranking NBUF up to rather more than 64 MB.

For heavy read traffic, there will be little if any benefit;
the controller's cache will store the last 64 MB of data read from
disk, as will the kernel's cache, and since it's already in buffers
in RAM, the operating system isn't going to ask the controller for
it again.

For heavy write traffic, well, it depends. If the controller
does write-back caching, system performance could be improved by
the extra RAM, at the potential cost of data loss as discussed
elsewhere in this thread. If the controller does write-through
caching, performance probably won't improve much if at all,
since the data written to the controller will be written to
hard disk immediately regardless of whether the controller has
32 or 64 MB of cache.

So based on my guesses and assumptions, I don't think adding
RAM to the controller card is likely to provide a significant
performance boost.
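(Editorial aside, not from the original posts.) Stephen's write-through vs. write-back argument can be sketched with a toy latency model; the per-write service times and cache sizes below are made-up assumptions, not HP NetRaid measurements:

```python
# Toy model of controller write caching: with write-through, every write
# waits for the disk, so cache size is irrelevant; with write-back, a
# burst that fits in cache completes at cache speed and is destaged later.
# All numbers are assumed, for illustration only.

disk_ms_per_write = 10.0    # assumed service time per write at the disk
cache_ms_per_write = 0.1    # assumed service time into controller RAM

def burst_latency_ms(n_writes, cache_slots, write_back):
    if not write_back:
        return n_writes * disk_ms_per_write          # cache size irrelevant
    cached = min(n_writes, cache_slots)
    spilled = n_writes - cached                      # overflow runs at disk speed
    return cached * cache_ms_per_write + spilled * disk_ms_per_write

# A 1000-write burst: write-through is identical with a small or large cache...
print(burst_latency_ms(1000, 32, write_back=False))    # 10000.0 ms
print(burst_latency_ms(1000, 64, write_back=False))    # 10000.0 ms
# ...while write-back is dramatically faster when the burst fits in cache:
print(burst_latency_ms(1000, 2000, write_back=True))   # 100.0 ms
```

This is the trade-off discussed throughout the thread: write-back buys latency at the cost of data at risk in controller RAM, which is why the battery backup unit matters.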

I believe there's a section in the FAQ (see pcunix.com) on
basic performance monitoring; give that a read and try it out
on your system. The first step in trying to improve performance
is to find out what is limiting performance - for example, it
may be running short on CPU power if there's a lot of computation
taking place, or perhaps the CPU is sitting there a lot of the
time waiting for the hard drives to return data for the CPU to
crunch. There's no point boosting the capacity of one component
if something else is the bottleneck - new running shoes won't make
you run any faster if the problem is that your cardio conditioning
is letting you down.

Bela Lubkin

Oct 21, 2002, 11:58:45 PM
to sco...@xenitec.on.ca
Rainer Zocholl wrote:

That sounds like you have more fundamental hardware problems, and need
to work with your hardware vendor to make sure the system is functioning
properly. Changing the stripe size or layout of a RAID system should
not affect stability at all.

> >Beware that with most RAID subsystems,
> >changing the layout destroys the data...
>
> I assume that the currently missing "write back" will be the
> main reason for the bad write performance.
> But as long as there is no BBU the box has to stay with "write thru".
> And too: I did not expect that "writing" would have such an impact
> to the users. (When flushing bigger amounts of data, the users
> can't work anymore. I thought that this was done in background.)

Run it in "write back" mode for a while just to see if it affects
performance. Make a good backup beforehand and tell the users that
you're doing a test that could potentially lose data. You only need to
run it long enough to see whether it feels faster. The point of this
test is to help isolate the issues. If write-back is much faster,
you know that you need to get an appropriate UPS ASAP...

> Too there are 2 parameters mentioned in the controllers setup software
> which i can't see in "raidutil" or mentioned else where:
> One is "sync command mandantory" and it too set to "mandantory",
> meaning that, if the OS does a SCSI-sync command, that it is
> noticed finished only when it is really done.
> If "optional" the controller may "lie"

You might also try turning this off (as a test). Again, if it's fast
then you know that what you really need is a good power protection
system for the RAID subsystem, so you can trust write-back and lazy
syncing.

And, as suggested before, play with BDFLUSHR and NAUTOUP. Setting
BDFLUSHR to 1 will cause stale data to be flushed much more frequently,
in smaller pieces. That should reduce the number and length of pauses.
NAUTOUP can stay at 10 or whatever it currently is (e.g. with
NAUTOUP=10, BDFLUSHR=1, then every second, data that is between 10 and
11 seconds old will be flushed).
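(Editorial aside, not from the original posts.) Bela's BDFLUSHR/NAUTOUP interaction can be simulated roughly. This is a simplified model of the behavior he describes (bdflush wakes every BDFLUSHR seconds and writes out buffers dirtied at least NAUTOUP seconds ago), not actual OSR5 kernel code; the 2 MB/s dirtying rate is an invented example:

```python
# Simplified bdflush model: track MB dirtied per second, and at each
# wakeup flush everything at least NAUTOUP seconds old.

def flush_bursts(dirty_mb_per_sec, seconds, nautoup, bdflushr):
    """Return MB written at each bdflush wakeup for a steady dirty rate."""
    bursts = []
    pending = []                          # pending[i] = MB dirtied at second i
    for t in range(seconds):
        pending.append(dirty_mb_per_sec)
        if t % bdflushr == 0:
            aged = sum(mb for i, mb in enumerate(pending) if t - i >= nautoup)
            pending = [0 if t - i >= nautoup else mb
                       for i, mb in enumerate(pending)]   # drop what we flushed
            bursts.append(aged)
    return bursts

# 2 MB/s of dirtying, NAUTOUP=10.  A long BDFLUSHR flushes rarely but in
# big lumps (long pauses); BDFLUSHR=1 flushes every second in small pieces.
print(max(flush_bursts(2, 120, nautoup=10, bdflushr=30)))  # 60 (MB burst)
print(max(flush_bursts(2, 120, nautoup=10, bdflushr=1)))   # 2 (MB, steady)
```

The same total data gets written either way; the short BDFLUSHR just spreads it out, which is why it should reduce the number and length of the stalls Rainer's users see.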

>Bela<

Rainer Zocholl

Oct 22, 2002, 2:23:00 PM
to
(Bela Lubkin) 22.10.02 in /comp/unix/sco/misc:

>Rainer Zocholl wrote:

>> Adaptec ("DPT") 3200S
>>


>>>you'll probably get a noticable benefit from reducing the strip size
>>>or going to a different RAID level (mirroring).
>>
>>
>> We tried that already.
>> But: This is the only configuration we found, in which OSR 5.0.6s is
>> booting relyable on both CPUs. We had a hard time to make it boot at
>> all (Siemens F200) stable(!) on both CPUs.
>>
>> We tried a mix 2 disc RAID1(system) remainder RAID5(data base)
>> but that seems to led to unrelyable boot behavior:
>> Sometimes (first not often, later always) the box hangs when it
>> turns on the second CPU. That was of cause not acceptable.

>That sounds like you have more fundamental hardware problems, and need
>to work with your hardware vendor to make sure the system is
>functioning properly.

It was sent to them. The only thing they did (at least according to
what the consultant told us) was change the RAID...

After investigating the server spec in great detail, we found
that Siemens seems not to support SCO OpenServer at all, only UnixWare.
At least it's nowhere mentioned, and they delivered an Intel
Gigabit card. But that card only has UnixWare drivers... we had to
buy our own 3Com and integrate it... (though the boot error occurred
with the useless Intel card present too.)

>Changing the stripe size or layout of a RAID
>system should not affect stability at all.

Yep, I would assume that too.
But I saw with my own eyes that the hardware boot message
appears and the box dies at the moment the second CPU
is supposed to be turned on.
After removing any "complicated" RAID 5, all was well and still is.
Meanwhile we had only one day where the box turned the
second CPU off but kept running on the other one. (Never had so many
user calls ;-)).
As the (probable) cause we found that Samba logging was set to a very
high level and the users had started to use the Samba drives a lot...
That produced heavy disk load from tons of writes and deletes, because
the logfiles were limited to only 200KB (which was normally sufficient)
and grew past that size in fractions of a second.


My personal opinion:
the 3200S DPT/Adaptec driver is broken, maybe not really SMP-safe?
That would explain the "dead on boot" and the bad performance.
(When the second CPU was turned off, sar showed 50% iowait
but the users couldn't work... so I assume that the driver is not
releasing the CPU, or is polling(!) the SCSI bus.)

>> I assume that the currently missing "write back" will be the
>> main reason for the bad write performance.
>> But as long as there is no BBU the box has to stay with "write
>> thru". And too: I did not expect that "writing" would have such an
>> impact to the users. (When flushing bigger amounts of data, the
>> users can't work anymore. I thought that this was done in
>> background.)

>Run it in "write back" mode for a while just to see if it affects
>performance. Make a good backup beforehand and tell the users that
>you're doing a test that could potentially lose data. You only need
>to run it long enough to see whether it feel faster. The point of
>this test would be to help isolate the issues. If write-back is much
>faster, you know that you need to get an appropriate UPS ASAP...

An external UPS is there. I want an "internal" battery backup
unit just for the RAID cache. (I thought it was there, because we
ordered the box with "highest possible data reliability" (dual redundant
power supplies, RAID 5, hot standby, etc.). I was very astonished when
raidutil said "No battery"...)


I'll try as soon as the BBU is there (or someone jumpers the forgotten
battery cables? ;-))

Thanks a lot.

Bill Vermillion

Oct 23, 2002, 12:27:13 PM
to
In article <H4B8t...@bokonon.stevedunn.ca>, Stephen M. Dunn
<ste...@bokonon.stevedunn.ca> wrote:

>In article <8ZBn3...@zocki.toppoint.de>
>UseNet-Pos...@zocki.toppoint.de (Rainer Zocholl)
>writes: $ (Jeff Liebermann) 19.10.02 in /comp/unix/sco/misc:
>$>http://osr5doc.ca.caldera.com:457/cgi-bin/getnav/PERFORM/buffer_cache.html
>http://osr5doc.ca.caldera.com:457/PERFORM/mp_nhbuf.html

>$Thanks for the references (but why in heaven caldera uses
>$port 457 which may be blocked on firewalls/proxies "default
>$settings"?)

> 457 is the port used for man pages in OSR5. Not using port
>80 for that was done for a very good reason - you might want to
>use port 80 for your own Web server, and if you did, you'd break
>the man pages.

And an FYI: port 457 is listed as scohelp in the well-known
services on all Unix systems I've seen, but it's notably missing in
Linux. Many Linux users seem to have a very strong
anti-SCO attitude. I see 457 even in Mac OS X as scohelp.

--
Bill Vermillion - bv @ wjv . com

Jeff Liebermann

Oct 23, 2002, 1:12:56 PM
to
On Wed, 23 Oct 2002 16:27:13 GMT, b...@wjv.comREMOVE (Bill Vermillion)
wrote:

Well, at least it's IANA official.
http://www.iana.org/assignments/port-numbers

scohelp 457/tcp scohelp
scohelp 457/udp scohelp
# Faith Zack <fai...@sco.com>

I wouldn't expect Linux to add it to their /etc/services file any more
than I would expect it to appear in \windoze\services on a Windoze
box. It's not that much of a "well known" port.

Rainer Zocholl

Oct 23, 2002, 2:22:00 PM
to
(Jeff Liebermann) 23.10.02 in /comp/unix/sco/misc:

At least that is "sufficient" to convince the proxy/firewall
admins to open the port!

Thanks.

Though we are at the end of the thread (maybe ;-)), I'm changing the
totally misleading subject ;-)

corrlens

Oct 25, 2002, 2:14:16 PM
to
Hi,

Well, I ended up putting 128 megs of SDRAM in the HP NetRaid 1M, and I
ran a script that runs about 100 different tasks. Here are the results:


1 CPU, PIII 1.4GHz
3 x 18GB 10K RPM drives
512 MB RAM
NetRaid with 128MB of RAM

Did it in 16 minutes

versus

2 CPUs, PIII 1.4GHz
4 x 18GB 15,000 RPM drives!!!
1.5 GB RAM
NetRaid with 32MB of RAM

did it in 21 minutes!

The test was run using exactly the same OS and environment (it was a
full edgebackup restore) on the same kind of machine, at 4:00 in the
morning!

CONCLUSION: buying 128 megs of PC100 SDRAM for $19 is worth it. Plus
you'll end up with a spare 32 megs of SDRAM for an old PC100 computer.

"Stephen M. Dunn" <ste...@bokonon.stevedunn.ca> wrote in message

news:H4D0K...@bokonon.stevedunn.ca...

Tony Lawrence

Oct 25, 2002, 5:57:02 PM
to
corrlens wrote:
> Hi,
>
> Well , I ended up putting a 128megs SDRAM in the HP NEtRaid 1M , and I ran a
> script that runs like 100 different tasks and here are the results:
>
>
> 1 CPU PIII 1.4Ghz
> 3 18Gb 10K RPM
> 512 Megs RAM
> Net Raid with 128Megs of RAM
>
> Did it in 16 minutes
>
> versus
>
> 2 CPU's PIII 1.4Ghz
> 4 18Gb 15,000 RPM!!!
> 1.5 Gb RAM
> Net Raid with 32Megs of RAM
>
> did it in 21 minutes!
>
> The test was run using the exactly same OS with same enviroment (it was a
> full edgebackup restore) on same kind of machine. at 4.00 am in the
> morning!.
>
> CONCLUSION: buying a 128 Megs SDRAM PC100 for $19. it's worth it. plus
> you'll end up having an extra 32megs SDRAM for an old PC100 computer.

Not necessarily. Unfortunately, your test is unlikely to really
simulate what your users actually do and the way they do it.

But... for $19.00, who cares? If it helps, it helps, and if not, not
much is lost.

--

Please note new phone number: (781) 784-7547

Tony Lawrence
Unix/Linux Support Tips, How-To's, Tests and more: http://aplawrence.com
Free Unix/Linux Consultants list: http://aplawrence.com/consultants.html

Jeff Liebermann

Oct 26, 2002, 11:59:12 AM
to
On Fri, 25 Oct 2002 18:14:16 GMT, "corrlens" <a...@sbcglobal.net>
wrote:

>Well , I ended up putting a 128megs SDRAM in the HP NEtRaid 1M , and I ran a
>script that runs like 100 different tasks and here are the results:

What kind of tests? If they were sequential reads or writes, your
RAID adapter cache would do nothing because none of the data is read
more than once. Same with statically linked programs that do not use
(much) shared memory.

>The test was run using the exactly same OS with same enviroment (it was a
>full edgebackup restore) on same kind of machine. at 4.00 am in the
>morning!.

I dunno about you, but benchmarks run at 4AM all look the same to me
and are often blurred. Coffee usually helps.

>CONCLUSION: buying a 128 Megs SDRAM PC100 for $19. it's worth it. plus
>you'll end up having an extra 32megs SDRAM for an old PC100 computer.

Observation: using non-ECC (error-correcting) RAM on large systems is
a potential problem. I have a box of RAM on my workbench labeled
"flakey RAM", which will pass self-test, boot most operating systems,
work for a few hours (or days), and then start causing problems.
The flakey RAM seems to be epidemic and getting worse. I don't buy
PC100 RAM because it tends to be the fallout from PC133 testing. I've
fixed many a computah that exhibits erratic behavior with a RAM (and
sometimes a CPU) transplant. For something important like the
controller cache RAM, I would use nothing but the best.

Did you try playing with NBUF and NHBUF? I'll wager you some of my
"flakey RAM" sticks that increasing both will have a much larger effect
on your benchmark tests than tinkering with the controller RAM.
