
Hmm - maybe it's time for Win2k3 server ????


Warped

Nov 15, 2003, 6:09:25 PM
Hi all,

My ACP2+FP3 box has been working as an Internet gateway and SMB file
server for six months. It generally works well, but two rather critical
problems are signs of a dying OS/2:

1. Using XNAP 2.5b3 on JFS causes a filesystem hang after 2-3 days (on
HPFS it is OK). Such filesystem problems are unacceptable in any OS
other than a dead one.
2. On an SMB network served by ACP the file size limit is 2 GB. I can
hit this limit quite easily: try copying a 3 GB file from WinXP
(client) to ACP (server) over the network. The copy process on the
client returns an error at exactly the 2 GB point :-((((

Well - IMHO the second problem disqualifies OS/2 as a modern SMB server.

I wonder - is there any workaround for this problem?
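[Editorial note: the failure at exactly 2 GB is consistent with a signed 32-bit file offset, as used by older SMB dialects and 32-bit file APIs; the sketch below is that arithmetic, and the cause is an assumption, not something confirmed in this thread.]

```python
# Minimal sketch: a signed 32-bit offset can address at most 2^31 - 1
# bytes, i.e. one byte short of 2 GiB -- matching the observed limit.
# (That this is the cause of the ACP SMB limit is an assumption.)
LIMIT_BYTES = 2**31 - 1  # 2147483647

def fits_in_32bit_offset(size_bytes: int) -> bool:
    """True if every byte of the file is addressable with a signed 32-bit offset."""
    return size_bytes <= LIMIT_BYTES

print(fits_in_32bit_offset(3 * 2**30))  # a 3 GiB file -> False
```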


--
cYa, 3.14iotr/2
A good programmer hangs together with his program....
Hiroshima'45; Czernobyl'86; Windows'95

Send return bytes to "warpme_r...@o2.pl"

eric w

Nov 15, 2003, 6:46:33 PM
On Sat, 15 Nov 2003 23:09:25 UTC, "Warped" <wolo...@kki.net.pl> wrote:

> Hi all,
>
> My ACP2+FP3 is working as Inet gateway + SMB file server since 6
> months.
> Generally works well, but 2 rather critical problems are sign of death
> OS/2:
>
> 1. Using XNAP 2.5b3 on JFS cause filesystem hang after 2-3 days (on
> HPFS
> is OK). Such problem with FS are unacceptable in any OS different than
>
> dead OS.
> 2. In SMB network served by ACP file size limit is 2G. I can catch
> this
> limit quite easel y: try copy 3G file form WinXP (client) to ACP
> (server) via network. Copy process on client is returning error on
> exactly 2G point :-((((
>
> Well - IMHO second problem is disqualifying OS/2 as modern SMB server.
>

if you can't say it in a 2 gig segment, it prolly ain't worth saying!

...eric

dinkmeister

Nov 15, 2003, 8:31:39 PM
The JFS from xr_c004 seems to fix some problems, although I do get strange
keyboard+mouse+screen hangs every minute or so for about a second or two,
but the system keeps going after the weird hang. The same thing happens
with BitTorrent... Needless to say, I reformatted the partition to HPFS(386).

But anyway, the updated JFS is worth a try. I'm not sure about the SMB
problem though :(


regards,
- dink

On Sat, 15 Nov 2003 23:09:25 +0000 (UTC), Warped wrote:

:Hi all,
:
:My ACP2+FP3 is working as Inet gateway + SMB file server since 6
:months.
:Generally works well, but 2 rather critical problems are sign of death
:OS/2:
:
:1. Using XNAP 2.5b3 on JFS cause filesystem hang after 2-3 days (on
:HPFS is OK). Such problem with FS are unacceptable in any OS different
:than dead OS.
:2. In SMB network served by ACP file size limit is 2G. I can catch
:this limit quite easely: try copy 3G file form WinXP (client) to ACP
:(server) via network. Copy process on client is returning error on
:exactly 2G point :-((((
:
:Well - IMHO second problem is disqualifying OS/2 as modern SMB server.
:
:I'm wonder - is there any workaround for this problem ?
:
:--
:cYa, 3.14iotr/2
:Dobry programista wiesza się z programem....

Marty

Nov 20, 2003, 12:12:32 AM
Warped wrote:
> Hi all,
>
> My ACP2+FP3 is working as Inet gateway + SMB file server since 6
> months.
> Generally works well, but 2 rather critical problems are sign of death
> OS/2:
>
> 1. Using XNAP 2.5b3 on JFS cause filesystem hang after 2-3 days (on
> HPFS is OK).

No wonder I never had any luck with the goofy thing. I've got nothing
but JFS here (save my tiny boot partition which doesn't hold any apps).
Never even occurred to me that it could be the filesystem.

Well in spite of this nasty sounding thread, I managed to find some
useful info. In your face! ;-)

Marty

Nov 20, 2003, 12:14:01 AM
dinkmeister wrote:
> the jfs from xr_c004 seems to fix some problems, although I do get strange
> keyboard+mouse+screen hangs every minute or so for about a second or 2
> but the system keeps going after the weird hang. The same thing happens
> with bittorrent.. Needless to say I reformatted the partition to hpfs(386).
>
> but anyways, the updated jfs is worth a try. I'm not sure about the smb
> problem though :(

Have you tried playing with the cache settings and specifically lazy-write?

MMI

Nov 20, 2003, 7:38:06 AM
"dinkmeister" <di...@yadda.com> wrote:
> the jfs from xr_c004 seems to fix some problems, although I do get strange
> keyboard+mouse+screen hangs every minute or so for about a second or 2
> but the system keeps going after the weird hang. The same thing happens
> with bittorrent.. Needless to say I reformatted the partition to hpfs(386).
>
> but anyways, the updated jfs is worth a try. I'm not sure about the smb
> problem though :(
>
>
> regards,
> - dink

Seems to me like the JFS driver does too many things at once while in
cooperative kernel mode, mainly when JFS log entries need to be written
to the filesystem. I get these second-or-two hangs (the mouse stops
moving, MP3 playback stops and everything is frozen) when copying a lot
of smaller files. I don't remember this happening on HPFS(386) even
back when I had a 386 or 486 at 33 MHz; I think the HPFS developers in
386 times had to make their code much more cooperation-friendly,
because such CPU-belongs-to-me lazy writing would have frozen a good
ol' 386 for a long time...

Cheers,
Martin

Marty

Nov 20, 2003, 11:41:42 AM
MMI wrote:
> Seems to me like JFS driver does too many things at once while in
> cooperative kernel mode, mainly when JFS log entries need to be put in
> the FS. I get these second-or-two hangs (mouse moves, MP3 stops
> playing and everything is frozen) when copying a lot of smaller files.
> I don't remember this to happen on HPFS(386) even at the times I had
> 386 or 486 at 33MHz, I think HPFS developers back in 386 times had to
> make their code much more cooperative friendly, because such
> CPU-belongs-to-me lazy writing would freeze good ol' 386 for a longer
> time...

I have the opposite problem. My entire system grinds to a halt while
doing large transfers to HPFS. When I do the transfers to JFS,
everything runs smooth as ice.

MMI

Nov 24, 2003, 8:55:54 AM
Marty <mam...@stny.rr.com> wrote:
>
> I have the opposite problem. My entire system grinds to a halt while
> doing large transfers to HPFS. When I do the transfers to JFS,
> everything runs smooth as ice.

Just halts, or slows down while background MP3s keep playing?
Sometimes I feel that big caches are a bit counterproductive, because
the lazywriters occasionally have to dump a lot of data, slowing down
or halting the system temporarily. I encountered this when I got my
first 256 MB of RAM and played with big caches (128+ MB).
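[Editorial note: the stall described above can be put in rough numbers; a back-of-envelope sketch, with the sustained disk write speed an assumed figure for the era.]

```python
# Back-of-envelope: draining a large dirty cache at an assumed sustained
# disk write speed (~20 MB/s, typical for the period) takes several
# seconds -- long enough to feel like the system has halted.
def flush_seconds(dirty_cache_mb: float, disk_mb_per_s: float) -> float:
    """Seconds the lazywriter needs to drain the dirty cache to disk."""
    return dirty_cache_mb / disk_mb_per_s

print(flush_seconds(128, 20))  # 6.4 -> a 128 MB dirty cache takes ~6.4 s
```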

Cheers,
Martin

Ilya Zakharevich

Nov 24, 2003, 4:56:22 PM
[A complimentary Cc of this posting was sent to
MMI
<m...@nautimail.com>], who wrote in article <a9aca7aa.03112...@posting.google.com>:

> Just halts or slows down but background MP3s keep on playing?
> Sometimes I feel like getting big caches is a bit counterproductive,
> because the lazywriters have to dump a lot of data occasionally, this
> way slowing down or halting the system temporarily. I encountered this
> at the times I got my first 256M RAM and played with big caches (128+
> M).

Do I understand correctly that this is purely an IDE restriction? My
(layman's) understanding is that SCSI should be able to write data
without any data copying by the processor (due to scatter-gather).
[Maybe a proper alignment of chunks is needed?]

Is it right that IDE should be able to do the same, but only with many
tiny requests? However, given that the requests may be ordered, with a
typical current write speed of 20 MB/s and a 2 MB cache, this should be
hardly noticeable...

Thanks,
Ilya

Marty

Nov 25, 2003, 12:07:36 AM
MMI wrote:
> Marty <mam...@stny.rr.com> wrote:
>
>>I have the opposite problem. My entire system grinds to a halt while
>>doing large transfers to HPFS. When I do the transfers to JFS,
>>everything runs smooth as ice.
>
> Just halts or slows down but background MP3s keep on playing?

Nope. In fact, no matter what priority other tasks run at on my
system (I wrote my own and cranked it to the max), they still experience
BAD choppiness during large HPFS transfers. This choppiness does not
occur doing the same transfers on JFS.

> Sometimes I feel like getting big caches is a bit counterproductive,
> because the lazywriters have to dump a lot of data occasionally, this
> way slowing down or halting the system temporarily. I encountered this
> at the times I got my first 256M RAM and played with big caches (128+
> M).

My HPFS cache is 2MB. My JFS cache is 64MB. Doesn't seem to be the
issue here. The JFS lazy writer doesn't seem to cause any undue hiccups
for me.

MMI

Nov 25, 2003, 4:46:31 AM
Marty <mam...@stny.rr.com> wrote:
> MMI wrote:
> > Marty <mam...@stny.rr.com> wrote:
> >
> >>I have the opposite problem. My entire system grinds to a halt while
> >>doing large transfers to HPFS. When I do the transfers to JFS,
> >>everything runs smooth as ice.
> >
> > Just halts or slows down but background MP3s keep on playing?
>
> Nope. In fact, no matter what priority other tasks are running at on my
> system (I wrote my own and cranked it to the max) they still experience

No matter what the priority is, the kernel (and drivers) have the right of way.

> BAD choppiness during large HPFS transfers. This chopiness does not
> occur doing the same transfers on JFS.

Perhaps due to the size of the cache? 2 MB of HPFS cache gets used up
much faster...

> > Sometimes I feel like getting big caches is a bit counterproductive,
> > because the lazywriters have to dump a lot of data occasionally, this
> > way slowing down or halting the system temporarily. I encountered this
> > at the times I got my first 256M RAM and played with big caches (128+
> > M).
>
> My HPFS cache is 2MB. My JFS cache is 64MB. Doesn't seem to be the
> issue here. The JFS lazy writer doesn't seem to cause any undue hiccups
> for me.

I once had HPFS386 with a 128+ MB cache. I let a CD image go through it
and started burning the CD immediately after mkisofs completed. But
alas, the HPFS386 lazywriter began dumping the data to disk after a
while (completing the CD image file save), bogging OS/2 down badly, and
I got a buffer underrun. The machine, back in 2001, was a K6-400MHz.

Cheers,
Martin

MMI

Nov 25, 2003, 4:50:52 AM


Unfortunately I don't have any SCSI disk experience. I was given a SCSI
disk once, but it died a month later. I agree that with smaller caches
one does not notice much (since the cache memory gets used up quickly
and the transfer speed drops, but the system continues to process,
albeit slower); my observations, however, come from larger caches on
HPFS386 and JFS.

Cheers,
Martin

Marty

Nov 25, 2003, 8:20:21 AM
MMI wrote:
> Marty <mam...@stny.rr.com> wrote:
>
>>MMI wrote:
>>
>>>Marty <mam...@stny.rr.com> wrote:
>>>
>>>>I have the opposite problem. My entire system grinds to a halt while
>>>>doing large transfers to HPFS. When I do the transfers to JFS,
>>>>everything runs smooth as ice.
>>>
>>>Just halts or slows down but background MP3s keep on playing?
>>
>>Nope. In fact, no matter what priority other tasks are running at on my
>>system (I wrote my own and cranked it to the max) they still experience
>
> No matter what priority is, kernel (and drivers) has the right of way.

The kernel and drivers shouldn't be taking that much CPU time though.

>>BAD choppiness during large HPFS transfers. This chopiness does not
>>occur doing the same transfers on JFS.
>
> Perhaps due to the size of the cache? 2MBs of HPFS' cache gets used up
> much faster...

The cache becomes irrelevant in both cases, since the transfer is 250+MB.

>>>Sometimes I feel like getting big caches is a bit counterproductive,
>>>because the lazywriters have to dump a lot of data occasionally, this
>>>way slowing down or halting the system temporarily. I encountered this
>>>at the times I got my first 256M RAM and played with big caches (128+
>>>M).
>>
>>My HPFS cache is 2MB. My JFS cache is 64MB. Doesn't seem to be the
>>issue here. The JFS lazy writer doesn't seem to cause any undue hiccups
>>for me.
>
> I once had HPFS386 with 128+ M cache. I let a CD image go through it,
> and started to burn the CD immediately after mkisofs completed. But
> alas, the HPFS386 lazywriter began to dump the data on the disk after
> a while (completing the CD image file save), hogging the OS/2 really
> down and I got a buffer underrun. The machine was back in 2001 a
> K6-400MHz.

Even with lazy writing disabled, I still notice a massive slowdown doing
transfers with HPFS that isn't there using JFS.

Daniela Engert

Nov 25, 2003, 12:45:24 PM
Ilya Zakharevich wrote:

> Do I understand correctly that this is purely an IDE restriction? My
> (layman's) understanding is that SCSI should be able to write data
> without any data-copying by processor (due to scatter-gather). [Maybe:
> a proper alignment of chunks is neede?]
>
> Is it right that IDE should be able to do the same,

IDE *is* able to do the same. Without scatter-gather busmastering, no
controller would be able to go faster than about 15 MiB/s. My SATA disk
does transfers at 110 MiB/s (measured from user space).

> but only with many tiny requests?

define 'tiny'. The maximum request size per command is 128 KiB with CHS
and LBA28 addressing, and 32 MiB with LBA48 addressing.

Ciao,
Dani

Ilya Zakharevich

Nov 25, 2003, 2:58:52 PM
[A complimentary Cc of this posting was sent to
MMI
<m...@nautimail.com>], who wrote in article <a9aca7aa.03112...@posting.google.com>:
> > Nope. In fact, no matter what priority other tasks are running at on my
> > system (I wrote my own and cranked it to the max) they still experience
>
> No matter what priority is, kernel (and drivers) has the right of way.

AFAIU, the kernel (and drivers) do not run code on their own behalf,
only on behalf of interrupt requests and API entries from programs.
This is why lazy writing should be implemented via an external program.

So I do not see how your comment can be applicable. [But see below!]

> I once had HPFS386 with 128+ M cache. I let a CD image go through it,
> and started to burn the CD immediately after mkisofs completed. But
> alas, the HPFS386 lazywriter began to dump the data on the disk after
> a while (completing the CD image file save), hogging the OS/2 really
> down and I got a buffer underrun. The machine was back in 2001 a
> K6-400MHz.

What was the priority of cdrecord? IIRC, older versions would not go
to TC+31 automatically... Hmm, on the other hand, if the LAZY WRITE of
HPFS386 can be triggered by a disk READ (as opposed to cache386's
timer firing), then what you experienced would be priority-independent...

I always run cdrecord with a 32 MB buffer (more on a 256 MB machine),
so I think it may be able to survive even this...

Hope this helps,
Ilya

MMI

Nov 26, 2003, 4:36:34 AM
Ilya Zakharevich <nospam...@ilyaz.org> wrote:
> [A complimentary Cc of this posting was sent to
> MMI
> <m...@nautimail.com>], who wrote in article <a9aca7aa.03112...@posting.google.com>:
> > > Nope. In fact, no matter what priority other tasks are running at on my
> > > system (I wrote my own and cranked it to the max) they still experience
> >
> > No matter what priority is, kernel (and drivers) has the right of way.
>
> AFAIU, kernel (and drivers) do not run code on its own behalf, only on
> behalf of interrupt requests and API entries from programs. This is
> why lazy writing should be imlemented via an external program.
>
> So I do not see how your comment can be applicable. [But see below!]
>
> > I once had HPFS386 with 128+ M cache. I let a CD image go through it,
> > and started to burn the CD immediately after mkisofs completed. But
> > alas, the HPFS386 lazywriter began to dump the data on the disk after
> > a while (completing the CD image file save), hogging the OS/2 really
> > down and I got a buffer underrun. The machine was back in 2001 a
> > K6-400MHz.
>
> What was the priority of cdrecord? IIRC, older versions would not go

I didn't mess with its preset priorities.

> to TC+31 automatically... Hmm, on the other hand, if LAZY WRITE of
> HPFS386 can be triggerred by a disk READ (as opposed to cache386'
> timer being triggered), then what you experienced would be

Of course I had cache386 running (because that is what does lazywrite
for HPFS386)... After mkisofs finished, I immediately started burning
at 8x speed; then cache386's timer fired and a lot of data was written
to the disk (I have, and had at that time, an extra 850 MB disk used
exclusively for CD images) for some ten long seconds, and cdrecord's
buffer dropped to zero.
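[Editorial note: a rough sketch of why ten seconds is fatal at 8x. The 150 KB/s per-speed figure is the standard 1x CD data rate; that the stall lasted ten seconds and the default FIFO was smaller than the drained amount are taken from the account above and assumed, respectively.]

```python
# An 8x CD burn consumes 8 * 150 KB/s = 1200 KB/s. A ten-second
# writeback stall therefore drains roughly 11.7 MB from cdrecord's
# FIFO -- more than a small default buffer holds, hence the underrun.
CD_1X_KB_PER_S = 150  # standard single-speed CD data rate

def fifo_drained_mb(speed_factor: int, stall_seconds: float) -> float:
    """MB of buffered data consumed by the burner during a stall."""
    return speed_factor * CD_1X_KB_PER_S * stall_seconds / 1024

print(round(fifo_drained_mb(8, 10), 1))  # 11.7
```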

> priority-independent...
>
> I always run cdrecord with 32M buffer (more on 256M memory machine),
> so I think it may be able to survive even this...

I've got to learn this, thank you :-))

>
> Hope this helps,
> Ilya

Cheers,
Martin

MMI

Nov 26, 2003, 4:40:09 AM
Marty <mam...@stny.rr.com> wrote :

<snip>

> Even with lazy writing disabled, I still notice a massive slowdown doing
> transfers with HPFS that isn't there using JFS.

Well, the HPFS.IFS driver is known to be quite slow. That was well
documented in Michal Necasek's OS/2 filesystem review a year or so ago.

Still, my HPFS transfers are slow (compared to JFS), but the machine is
not crawling... On my system it feels more like an old, slow disk is
installed in the machine.

Cheers,
Martin

Ilya Zakharevich

Dec 1, 2003, 4:32:35 AM
[A complimentary Cc of this posting was sent to
MMI
<m...@nautimail.com>], who wrote in article <a9aca7aa.03112...@posting.google.com>:
> > What was the priority of cdrecord? IIRC, older versions would not go

> I didn't mess with its preset priorities.

... which does not answer my question. IIRC, different versions have
different priorities.

> > to TC+31 automatically... Hmm, on the other hand, if LAZY WRITE of
> > HPFS386 can be triggerred by a disk READ (as opposed to cache386'
> > timer being triggered), then what you experienced would be

> Of course I had cache386 running (because this is what does lazywrite
> for HPFS386)...

Well, I did not ask you that. ;-)

> After mkisofs stopped, I started to burn immediately,
> at speed 8x, and then cache386's timer was triggered and a lot of data
> was being written to the disk

Well, the timer should not have been triggered if the priorities are
"right". But, as I said, the implementation of the "obsoleting"
algorithm could have triggered something by a *READ* request (not by
the timer). Does anybody understand how these cache*.exe are
implemented?

*** Now, after many years of running with this setup, I discovered
that I do not have cache.exe running! Oh, sh*t!

Do I understand correctly that (with HPFS) even if running cache.exe reports

Lazy writes are enabled.

this is wrong as long as cache.exe is not running detached? Hmm,
running "help hpfs.ifs" does not say I *need* to run cache.exe...
Well, it looks like I'm too confused by the differences between HPFS
and HPFS386 to remember which one does what...

Yours,
Ilya

MMI

Dec 2, 2003, 6:35:17 AM
Ilya Zakharevich <nospam...@ilyaz.org> wrote in message news:<bqf1rj$9g9$1...@agate.berkeley.edu>...

> [A complimentary Cc of this posting was sent to
> MMI
> <m...@nautimail.com>], who wrote in article <a9aca7aa.03112...@posting.google.com>:
> > > What was the priority of cdrecord? IIRC, older versions would not go
>
> > I didn't mess with its preset priorities.
>
> ... which does not answer my question. IIRC, different versions have
> different priorities.

Sorry, I don't do much investigating of what priority a particular
process runs at :-)



> > > to TC+31 automatically... Hmm, on the other hand, if LAZY WRITE of
> > > HPFS386 can be triggerred by a disk READ (as opposed to cache386'
> > > timer being triggered), then what you experienced would be
>
> > Of course I had cache386 running (because this is what does lazywrite
> > for HPFS386)...
>
> Well, I did not ask you that. ;-)
>
> > After mkisofs stopped, I started to burn immediately,
> > at speed 8x, and then cache386's timer was triggered and a lot of data
> > was being written to the disk
>
> Well, the timer should not have been triggered if the priorities are
> "right". But, as I said, the implementation of the "obsoleting"
> algorithm could have triggered something by *READ* request (not by

I can't quite imagine why a cache dump of lazily written data should be
triggered by a read operation. A new write operation would be quite a
different scenario, of course. :-) And there are a few timers defined
for both of the HPFS drivers.

> timer). Is there anybody understanding how these cache*.exe are
> implemented?

It is believed that these are the lazy-writer and read-ahead "daemons"
for the IFS drivers. Without them the IFS drivers are expected to work
as write-through.

BUT.

At the time of OS/2 2.1, running or not running cache.exe didn't help
me much and the disk heads still thrashed like hell on my HDD. After I
installed HPFS386 and cache386, the disk heads started to behave much
more quietly. But this is maybe a particular computer's problem,
because I installed the good ol' 2.1 just for fun a few months ago and
cache.exe works as expected. Or maybe back in those early days (1994) I
messed something up ;-)

> *** Now, after many years of running with this setup, I discovered
> that I do not have cache.exe running! Oh, sh*t!
>
> Do I understand correct that (with HPFS) even if running cache.exe reports
>
> Lazy writes are enabled.
>
> this is wrong as far as cache.exe is not running detached? Hmm,
> running "help hpfs.ifs" does not say I *need* to run cache.exe...

Add a line like RUN=E:\OS2\CACHE.EXE /lazy:6 /readahead:on to your
CONFIG.SYS.

> Well, looks I'm pretty much confused by differences between HPFS
> and HPFS386 to remember which one is doing which...

Apart from HPFS being 16-bit and HPFS386 being 32-bit, there are some
differences:

HPFS386:
- does not have the 2 MB limit on cache size
- includes an SMB server that bypasses the kernel for NETBIOS requests
- can work with ACLs/Local Security when installed on Warp Server
- is reported to have hand-optimised assembly inside

Cheers,
Martin

>
> Yours,
> Ilya

Ilya Zakharevich

Dec 2, 2003, 4:47:54 PM
[A complimentary Cc of this posting was sent to
MMI
<m...@nautimail.com>], who wrote in article <a9aca7aa.03120...@posting.google.com>:

> > ... which does not answer my question. IIRC, different versions have
> > different priorities.
>
> Sorry, I don't do much investigation of what priority particular
> process runs at :-)

Reading the docs should be enough. I think recent versions run at max
or max-1.

> > Well, the timer should not have been triggered if the priorities are
> > "right". But, as I said, the implementation of the "obsoleting"
> > algorithm could have triggered something by *READ* request (not by
>
> I can't imagine quite exactly why the cache dump of lazy written data
> should be triggered by the read operation. A new write operation would
> be quite different scenario of course. :-) And there are few timers
> defined for both of the HPFS drivers.

Free cache for caching more read data, e.g.?

> > Do I understand correct that (with HPFS) even if running cache.exe reports
> >
> > Lazy writes are enabled.
> >
> > this is wrong as far as cache.exe is not running detached? Hmm,
> > running "help hpfs.ifs" does not say I *need* to run cache.exe...
>
> Add (example) RUN=E:\OS2\CACHE.EXE /lazy:6 /readahead:on line to your
> config. sys

There should be no difference between RUN= and DETACH, right? Anyway,
my question still remains: even if the "Lazy writes are enabled"
message appears, is there no lazy writing without a detached CACHE.EXE?

And these versions are not documented... But I can find the
discussion on Google... Nope, mine does not accept these options; w3fp42.

Thanks
Ilya

MMI

Dec 4, 2003, 9:13:36 AM
Ilya Zakharevich <nospam...@ilyaz.org> wrote:

<snip>


> > > Well, the timer should not have been triggered if the priorities are
> > > "right". But, as I said, the implementation of the "obsoleting"
> > > algorithm could have triggered something by *READ* request (not by
> >
> > I can't imagine quite exactly why the cache dump of lazy written data
> > should be triggered by the read operation. A new write operation would
> > be quite different scenario of course. :-) And there are few timers
> > defined for both of the HPFS drivers.
>
> Free cache for caching more read data, e.g.?

Ah yes, that's also possible. I was so focused on write operations that
I completely ignored this possibility :-))

> > > Do I understand correct that (with HPFS) even if running cache.exe reports
> > >
> > > Lazy writes are enabled.
> > >
> > > this is wrong as far as cache.exe is not running detached? Hmm,
> > > running "help hpfs.ifs" does not say I *need* to run cache.exe...
> >
> > Add (example) RUN=E:\OS2\CACHE.EXE /lazy:6 /readahead:on line to your
> > config. sys
>
> There should be no difference between RUN= and DETACH, right? Anyway,

I assume so. IIRC, typing "cache.exe" at the command prompt should be
enough to have HPFS operations cached. But it is always good to check
that with PSTAT or an equivalent.

> my question still remains: even if "Lazy writes are enabled" message
> comes, there is no lazy writing without detached CACHE.EXE?

That message should indicate that CACHE.EXE is running and doing lazy
writes.

> And these versions are not documented... But I can find the
> discussion on google... Nope, mine does not accept these options; w3fp42.

Warp 3's CACHE.EXE did not have the readahead parameter IIRC, and it
certainly couldn't have more than one lazy-write thread. So the
parameters should be set according to Warp 3's CMDREF.INF.

> Thanks
> Ilya

Cheers,
Martin

James J. Weinkam

Dec 4, 2003, 3:18:06 PM
Ilya Zakharevich wrote:
>>> Do I understand correct that (with HPFS) even if running cache.exe reports
>>>
>>> Lazy writes are enabled.
>>>
>>> this is wrong as far as cache.exe is not running detached? Hmm,
>>> running "help hpfs.ifs" does not say I *need* to run cache.exe...
>>
>>Add (example) RUN=E:\OS2\CACHE.EXE /lazy:6 /readahead:on line to your
>>config. sys
>
>
> There should be no difference between RUN= and DETACH, right? Anyway,
> my question still remains: even if "Lazy writes are enabled" message
> comes, there is no lazy writing without detached CACHE.EXE?
>
At least in Warp 4, all cache.exe does is query or set the parameters
that control how the HPFS cache operates. There is always a cache, and its
size can be specified (up to 2 MB) using the /c or /cache parameter on the
IFS statement for hpfs.ifs. If /c is omitted, the default is 10% of
available physical memory, up to a maximum of 2 MB.

Running cache.exe displays the cache control parameters in effect. Running
cache.exe with parameters sets the specified parameters to the given values.

I no longer have access to any Warp 3 systems so I can't check, but I don't
recall any change in how HPFS was configured when Warp 4 came out.
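[Editorial note: the sizing rule James describes can be sketched as a toy restatement; this is not the actual driver code.]

```python
# Default HPFS cache size per the description above: 10% of available
# physical memory, capped at 2 MB (2048 KB), unless /c overrides it.
def default_hpfs_cache_kb(avail_phys_kb: int) -> int:
    """Default cache size in KB for a machine with this much free RAM."""
    return min(avail_phys_kb // 10, 2048)

print(default_hpfs_cache_kb(64 * 1024))  # 64 MB machine -> 2048 (the cap)
print(default_hpfs_cache_kb(8 * 1024))   # 8 MB machine  -> 819
```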

Ilya Zakharevich

Dec 4, 2003, 4:21:21 PM
[A complimentary Cc of this posting was sent to
MMI
<m...@nautimail.com>], who wrote in article <a9aca7aa.03120...@posting.google.com>:
> > my question still remains: even if "Lazy writes are enabled" message
> > comes, there is no lazy writing without detached CACHE.EXE?
>
> That message should indicate that CACHE.EXE is running and doing lazy
> writes.

This has little relationship to what I see:

J:\home\TODO>cache

DiskIdle: 1000 milliseconds
MaxAge: 5000 milliseconds
BufferIdle: 500 milliseconds
Cache size: 2048 kbytes
Lazy writes are enabled.

J:\home\TODO>pstat | grep -i cache

J:\home\TODO>

As I said (and as you can see from the above screen copy):
cache.exe is NOT running;
running it by hand does NOT create a permanently running copy;
BUT it reports lazy writes as enabled.

Thanks,
Ilya

MMI

Dec 5, 2003, 4:43:47 AM
Ilya Zakharevich <nospam...@ilyaz.org> wrote in message

Gotta check this on my unfixed Warp 3 (which is installed on my
"historical" computer along with 1.3 and 2.1), and on my Warp 4
notebook. Maybe there's some difference in CACHE.EXE behavior...?

Cheers,
Martin

MMI

Dec 15, 2003, 6:09:18 AM

Well, it IS possible to make a permanently running copy - and it works
on Warp 3 as well as 2.1 (I didn't test this on my Warp 4 since at the
time it had a lot of work on it which I didn't want to screw up in any
way).

When one starts an OS/2 system without CACHE.EXE in CONFIG.SYS, it is
not possible to make a permanently running copy by running CACHE.EXE
with the /lazy:on parameter or without any parameters. It simply
reports and returns. This would support Scott's remark about the WPS
running a lazywrite thread, however strange it may sound. I did some
(not very thorough) tests and it looks like there is some lazywrite
daemon in the background. BUT - and here it begins:

1. Disable lazywrite with CACHE /lazy:off
2. Open a new CMD window and type CACHE /lazy:on
3. See? It does not return and runs permanently, until you
4. Open a new CMD window and type CACHE /lazy:off
5. The CACHE.EXE in the previous CMD window returns and you can see
the prompt.

This applies to both the 2.1 and 3.0 versions; I assume it is the same
for the 4.x version (will try today). Conclusion?

It looks like we have two HPFS lazywrite daemons in OS/2:
1. One may be Scott's LW thread, which however does not survive the
first "CACHE /lazy:off" command, and whose place is then taken by
2. CACHE.EXE

IMHO we still have a caching daemon in ring 3, therefore LW threads are
possible in CACHE.EXE in Warp 4.

Cheers,
Martin
