Forcing NT to use RAM for PAGEFILE.SYS using RAMDISK ??

hyc
Sep 21, 2005, 3:17:05 PM

The above was the subject of a pretty long thread in this group, back
in 1997.
http://groups.google.com/group/comp.os.ms-windows.nt.misc/browse_frm/thread/5b3260f345a97644/9b784761a66f1d69#9b784761a66f1d69
Sad to see that today, with Windows XP, the situation hasn't improved.

I have 2GB of RAM in my laptop, and TaskInfo shows that 1.4GB is free,
but I still see periodic page-out activity on my disk, with only 130MB
being used by the filesystem cache. I've enabled LargeSystemCache and
applied various other registry tweaks to tell Windows to cache as much
as possible, but they've all been ineffective.
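
For reference, here's a minimal sketch of setting that value
programmatically, assuming the usual Memory Management key (the change
only takes effect after a reboot):

/* Sketch: enable LargeSystemCache via the registry (Win32 user mode).
 * Assumes the standard Memory Management key; a reboot is required
 * before the setting takes effect. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    DWORD one = 1;
    LONG rc = RegOpenKeyEx(HKEY_LOCAL_MACHINE,
        "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
        0, KEY_SET_VALUE, &key);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegOpenKeyEx failed: %ld\n", rc);
        return 1;
    }
    /* 1 = favor the file system cache, 0 = favor process working sets */
    rc = RegSetValueEx(key, "LargeSystemCache", 0, REG_DWORD,
                       (const BYTE *)&one, sizeof(one));
    RegCloseKey(key);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegSetValueEx failed: %ld\n", rc);
        return 1;
    }
    puts("LargeSystemCache set to 1; reboot for it to take effect.");
    return 0;
}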

It's interesting to note that with a natively compiled gcc under MSYS,
it takes about 11 minutes to compile a large source tree on Windows.
But if I load up VMware under Windows with raw disk access
and then boot Linux in the VM, I can compile the same source tree on
the same drive in only 3 minutes, and the disk doesn't spin madly the
whole time. Talk about pathetic.

Moreover, if I then do a "make clean" on the Linux system and rerun the
build, the entire build completes with just about zero disk activity.
If I do the same thing on Windows, the disk spins busily both during
the delete phase and again during the compile. Even if I preload all of
the source tree into the cache (e.g., "find . -type f | xargs cat") it
still wants to page things in during the compilation.
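
For what it's worth, here's a rough Windows-native stand-in for that
preload trick, assuming plain sequential ReadFile calls are enough to
pull the data through the cache manager (it reads a single directory
and doesn't recurse, for brevity):

/* Sketch: read every file in a directory to pull it into the Windows
 * file cache -- a native stand-in for "find . -type f | xargs cat".
 * Non-recursive for brevity; a real tool would walk subdirectories. */
#include <windows.h>
#include <stdio.h>

static void warm_file(const char *path)
{
    char buf[64 * 1024];
    DWORD got;
    HANDLE h = CreateFile(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                          OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return;
    while (ReadFile(h, buf, sizeof(buf), &got, NULL) && got > 0)
        ;   /* discard the data; the point is to populate the cache */
    CloseHandle(h);
}

int main(int argc, char **argv)
{
    WIN32_FIND_DATA fd;
    char pattern[MAX_PATH], path[MAX_PATH];
    const char *dir = (argc > 1) ? argv[1] : ".";
    HANDLE find;

    snprintf(pattern, sizeof(pattern), "%s\\*", dir);
    find = FindFirstFile(pattern, &fd);
    if (find == INVALID_HANDLE_VALUE)
        return 1;
    do {
        if (!(fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)) {
            snprintf(path, sizeof(path), "%s\\%s", dir, fd.cFileName);
            warm_file(path);
        }
    } while (FindNextFile(find, &fd));
    FindClose(find);
    return 0;
}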

It's idiocy like this that leads to people trying "dumb" ideas like the
Subject line above. And even more ridiculous is that dumb ideas like
that actually work to improve system response.

Even though the SysInternals guys dropped access to cache manager
variables like CcFirstDelay from their current CACHESET utility, I
decided to play with them on my current XP machine. I downloaded the
Windows symbol files and fired up WinDbg, and
found that the same variable names still exist, although their values
are negative 64-bit integers now (system time, in units of
100 nanoseconds). The units weren't surprising, but I don't understand
why they're negative integers. Anyway, I bumped the delays
(CcFirstDelay, CcIdleDelay, CcTargetCleanDelay) up to 30 seconds each,
and I've noticed somewhat less paging activity going on. But it still
keeps the drive busy during a big compile, and TaskInfo shows that the
file cache shrinks and grows during the build, when I would expect it
to continually grow. I guess part of the problem is that the cache
manager always flushes pages whenever a file is closed, regardless of
any delay settings. But I don't see why it *releases* those pages,
especially since the files will be used again very shortly. (E.g., you
compile a file and create a .o; that .o gets read again when it's moved
into an object library or linked into an executable. There's no reason
to page the .o file out of memory so quickly, as Windows appears to
do.)
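
In hindsight, one plausible explanation for the negative values,
assuming the cache manager follows the same convention as KeSetTimer
and KeDelayExecutionThread: a negative 64-bit time in the kernel
denotes a relative interval in 100 ns units, while a positive one
denotes an absolute system time. A 30-second delay would then be
encoded like this:

/* Sketch: how a 30-second delay would be encoded as a kernel relative
 * time, assuming CcFirstDelay et al. follow the usual KeSetTimer /
 * KeDelayExecutionThread convention (negative = relative interval). */
#include <stdio.h>

int main(void)
{
    const long long HUNDRED_NS_PER_SEC = 10000000LL; /* 10^7 */
    long long cc_first_delay = -(30 * HUNDRED_NS_PER_SEC);

    printf("CcFirstDelay-style value for 30 s: %lld (0x%llx)\n",
           cc_first_delay, (unsigned long long)cc_first_delay);
    /* Prints -300000000, i.e. 30 seconds expressed as a negative count
     * of 100 ns ticks, meaning "30 s from now" rather than an absolute
     * system time. */
    return 0;
}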

I'm thinking the best approach to fixing this stupid cache behavior of
releasing cache pages too early is to write a filesystem filter driver
that intercepts all Open requests and creates an additional reference
to every file the first time that file is opened. It should then hold
this extra reference until several minutes after the file is closed,
unless the file is explicitly deleted.
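
To make the idea concrete, here's a bare-bones sketch of that filter as
a Filter Manager minifilter (assuming the FltMgr API); it only shows
where the extra reference would be taken on create, and a real driver
would have to track those references and release them on a timer or
when the file is deleted:

/* Sketch of the "extra reference on first open" idea as a minifilter.
 * Only the skeleton: a post-create callback that takes an additional
 * reference on the file object. A real driver would record each
 * referenced object and call ObDereferenceObject some minutes after
 * the last close (or immediately on delete); that bookkeeping is
 * omitted here. */
#include <fltKernel.h>

static PFLT_FILTER gFilterHandle;

static FLT_POSTOP_CALLBACK_STATUS FLTAPI
PostCreate(PFLT_CALLBACK_DATA Data, PCFLT_RELATED_OBJECTS FltObjects,
           PVOID CompletionContext, FLT_POST_OPERATION_FLAGS Flags)
{
    UNREFERENCED_PARAMETER(CompletionContext);
    UNREFERENCED_PARAMETER(Flags);

    if (NT_SUCCESS(Data->IoStatus.Status) && FltObjects->FileObject != NULL) {
        /* Pin the file object (and with it the section/cache state)
         * beyond the application's own open/close lifetime. */
        ObReferenceObject(FltObjects->FileObject);
        /* TODO: remember FltObjects->FileObject so the reference can be
         * released later; as written, this sketch never drops it. */
    }
    return FLT_POSTOP_FINISHED_PROCESSING;
}

static NTSTATUS FLTAPI
Unload(FLT_FILTER_UNLOAD_FLAGS Flags)
{
    UNREFERENCED_PARAMETER(Flags);
    FltUnregisterFilter(gFilterHandle);
    return STATUS_SUCCESS;
}

static const FLT_OPERATION_REGISTRATION Callbacks[] = {
    { IRP_MJ_CREATE, 0, NULL, PostCreate },
    { IRP_MJ_OPERATION_END }
};

static const FLT_REGISTRATION FilterRegistration = {
    sizeof(FLT_REGISTRATION),       /* Size */
    FLT_REGISTRATION_VERSION,       /* Version */
    0,                              /* Flags */
    NULL,                           /* ContextRegistration */
    Callbacks,                      /* OperationRegistration */
    Unload,                         /* FilterUnloadCallback */
    /* remaining callbacks default to NULL */
};

NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    NTSTATUS status;
    UNREFERENCED_PARAMETER(RegistryPath);

    status = FltRegisterFilter(DriverObject, &FilterRegistration,
                               &gFilterHandle);
    if (!NT_SUCCESS(status))
        return status;

    status = FltStartFiltering(gFilterHandle);
    if (!NT_SUCCESS(status))
        FltUnregisterFilter(gFilterHandle);
    return status;
}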

All those explanations in the above-referenced thread about why Windows
leaves so much RAM unused for multitasking purposes are pure bunk, by
the way. Since Windows loads programs by demand-paging, it really only
needs to leave a small amount of memory free - enough to build a couple
of process contexts, nothing more. There is no reason Windows should be
letting 1.4GB of RAM go unused on this machine, paging stuff out
continuously when there is no memory pressure on the system. An
intelligent kernel design only pages stuff out because there is
pressure to bring other pages in, otherwise it runs steady-state with
no I/O. All Windows accomplishes with its current design is to consume
battery power on a laptop, prevent the hard drive from spinning down
and staying down, add wear and tear on the drive motors, etc. etc. etc...
In short, it's designed to make your current computer fail sooner and
force you to buy a new one ASAP.

netjustin
Oct 19, 2005, 1:08:31 PM

...it certainly accomplishes that. Another thing it probably
accomplishes is having your data recorded and on disk after recovering
from a power failure, something that couldn't possibly be achieved with
data kept in volatile RAM.
