
Status of the DOS Build


David Golub

Apr 6, 2009, 11:43:49 AM
I'm pleased to announce that, as of change #34332, the DOS build is working
again! Here are some notes about getting it to work:

1. The latest release of the compiler (OW 1.8) cannot be used to build on
DOS because of a command line length limitation. This problem was resolved
in change #34100. You must update your compiler to include versions of
wcl.exe and wcl386.exe that incorporate this change.

2. The OW intermediate files generated by the build occupy more than 2 GB.
Therefore, the build must be done on a FAT32 partition. This requires a
version of DOS that supports FAT32. For MS-DOS, this means the version
included with Windows 95 OSR2 or later.

3. Be sure to load SMARTDRV.EXE when running the build. Otherwise, it will
take more than twice as long.

4. The version of SORT.EXE included with MS-DOS (as well as the FreeDOS
version) has a size limitation on the file to be sorted that is too low for
some of the sorting operations involved in building OW. Replacing it with a
different utility such as the freeware version available at
http://members.impulse.net/~thebob/Sort.html should do the trick.

The build currently takes somewhere on the order of 48 hours when running on
Windows 95 OSR2 DOS in Microsoft Virtual PC, which is substantially longer
than it takes when running on Windows XP on my computer. It's possible that
I may be able to increase the speed by tweaking certain DOS settings.

I'll eventually document all of the above somewhere when I get a chance.

David Golub


Wilton Helm

Apr 6, 2009, 1:33:30 PM
>The build currently takes somewhere on the order of 48 hours when running
>on Windows 95 OSR2 DOS in Microsoft Virtual PC, which is substantially
>longer than it takes when running on Windows XP on my computer.

That is not surprising. Several of the tools have options of using files
for dynamic memory storage in DOS because of memory size limitations and
lack of virtual memory. Storing things like linked lists and large
temporaries that are accessed frequently on secondary storage does exact a
cost. OTOH, it allows things to be done that would otherwise be impossible
on such a limited platform.

Wilton


Paul S. Person

Apr 7, 2009, 12:44:38 PM

Running it in Microsoft Virtual PC instead of directly on the hardware
might, I suppose, also slow it down a bit.
--
Here lies the Tuscan poet Aretino,
Who evil spoke of everyone but God,
Giving as his excuse, "I never knew him."

Rugxulo

Apr 8, 2009, 5:56:56 PM
Hi,

On Apr 7, 11:44 am, Paul S. Person <psper...@ix.netscom.com.invalid>
wrote:

> On Mon, 6 Apr 2009 11:33:30 -0600, "Wilton Helm"
> <wh...@compuserve.com> wrote:
> >>The build currently takes somewhere on the order of 48 hours when running
> >>on Windows 95 OSR2 DOS in Microsoft Virtual PC, which is substantially
> >>longer than it takes when running on Windows XP on my computer.
>
> ...
>
> Running it in Microsoft Virtual PC instead of directly on the hardware
> might, I suppose, also slow it down a bit.

Last I heard (but never checked), VPC only emulated a PII. I suspect
something like VirtualBox (esp. with VT-X enabled in BIOS) will run
much faster.

Peter C. Chapin

Apr 9, 2009, 6:17:10 AM
"David Golub" <david...@yale.edu> wrote in
news:grd7vk$uar$1...@www.openwatcom.org:

> I'm pleased to announce that, as of change #34332, the DOS build is
> working again! Here are some notes about getting it to work:

That's excellent! Thanks for your work on this. Now we need to set up a
DOS build server. :-)

> 2. The OW intermediate files generated by the build occupy more than 2
> GB. Therefore, the build must be done on a FAT32 partition. This
> requires a version of DOS that supports FAT32. For MS-DOS, this means
> the version included with Windows 95 OSR2 or later.

Does FreeDOS support FAT32?

> 4. The version of SORT.EXE included with MS-DOS (as well as the
> FreeDOS version) has a size limitation on the file to be sorted that
> is too low for some of the sorting operations involved in building OW.
> Replacing it with a different utility such as the freeware version
> available at http://members.impulse.net/~thebob/Sort.html should do
> the trick.

Perhaps OW should grow its own sort utility to go along with the other
POSIX command line tools we have.

> The build currently takes somewhere on the order of 48 hours when
> running on Windows 95 OSR2 DOS in Microsoft Virtual PC, which is
> substantially longer than it takes when running on Windows XP on my
> computer. It's possible that I may be able to increase the speed by
> tweaking certain DOS settings.

My experience has been that VirtualPC is quite a bit slower than real
hardware and slower than other competing virtualization products (such
as VMware).

> I'll eventually document all of the above somewhere when I get a
> chance.

Yes, definitely. Thanks again!

Peter

Rugxulo

Apr 9, 2009, 2:19:31 PM
Hi,

On Apr 9, 5:17 am, "Peter C. Chapin" <pe...@openwatcom.org> wrote:
> "David Golub" <david.go...@yale.edu> wrote innews:grd7vk$uar$1...@www.openwatcom.org:


>
> > 2. The OW intermediate files generated by the build occupy more than 2
> > GB. Therefore, the build must be done on a FAT32 partition.  This
> > requires a version of DOS that supports FAT32.  For MS-DOS, this means
> > the version included with Windows 95 OSR2 or later.
>
> Does FreeDOS support FAT32?

Yes, and so does EDR-DOS.

If you're really worried about space during building, just call UPX on
some of the .EXEs (assuming those take up most of the space). Or
use .ZIP to temporarily compress stuff you don't need. Or heck, even
old DIET resident TSR might help. (Just random thoughts, feel free to
ignore.)

> Perhaps OW should grow its own sort utility to go along with the other
> POSIX command line tools we have.

It might be easier to just recompile FreeDOS's SORT for 32 bits and/or use
the one from DJGPP's TXT20B.ZIP (GNU textutils). Some of the ones I
listed in that other thread have sources, too.

> > The build currently takes somewhere on the order of 48 hours when
> > running on Windows 95 OSR2 DOS in Microsoft Virtual PC, which is
> > substantially longer than it takes when running on Windows XP on my
> > computer.  It's possible that I may be able to increase the speed by
> > tweaking certain DOS settings.

A RAM disk probably helps if you can spare it. And Jack Ellis' UIDE
(Ultra DMA + cache + DVD driver) works better than SMARTDRV if your
hardware supports it. Otherwise, try JEMMEX + JLOAD + XDMA32.

http://www.ibiblio.org/pub/micro/pc-stuff/freedos/files/util/system/udma+drivers/drivers-01nov2007.zip

http://www.japheth.de/dwnload4.html (JEMM572B.ZIP, JEMM572S.ZIP)

> > I'll eventually document all of the above somewhere when I get a
> > chance.
>
> Yes, definitely. Thanks again!

I'm sure some FreeDOS guys will be interested. :-)

David Golub

Apr 13, 2009, 11:52:08 AM
I'll give that a shot when I have a chance.

David Golub

"Rugxulo" <rug...@gmail.com> wrote in message
news:aac41f83-1592-45ab...@g37g2000yqn.googlegroups.com...

David Golub

Apr 13, 2009, 11:55:13 AM
Having a DOS build server is probably somewhat difficult, since I'm not sure
how one runs a server on DOS, but I can definitely run the DOS build on a
somewhat regular basis to make sure that things still build.

I've thought of the idea of writing our own sort utility, and I'm inclined
to go ahead and do it. I would think that it would be a pretty simple program to
write, especially with the qsort function in the standard library.
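
The core of it would be little more than the following untested sketch
(assuming plain strcmp ordering and lines capped at 1 KB; capacity is
limited only by available memory, which a 32-bit extended-DOS build makes
much less of a problem than DOS SORT's limit):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int cmp( const void *a, const void *b )
{
    return( strcmp( *(const char **)a, *(const char **)b ) );
}

int main( void )
{
    char    **lines = NULL;
    char    **tmp;
    char    buf[1024];
    size_t  count = 0, cap = 0, i;

    /* read all lines into memory, growing the pointer array as needed */
    while( fgets( buf, sizeof( buf ), stdin ) != NULL ) {
        if( count == cap ) {
            cap = cap ? cap * 2 : 256;
            tmp = realloc( lines, cap * sizeof( *lines ) );
            if( tmp == NULL ) {
                fputs( "sort: out of memory\n", stderr );
                return( EXIT_FAILURE );
            }
            lines = tmp;
        }
        lines[count] = strdup( buf );   /* keep a private copy of the line */
        if( lines[count] == NULL ) {
            fputs( "sort: out of memory\n", stderr );
            return( EXIT_FAILURE );
        }
        ++count;
    }
    /* sort the pointers, then write the lines back out in order */
    qsort( lines, count, sizeof( *lines ), cmp );
    for( i = 0; i < count; ++i ) {
        fputs( lines[i], stdout );
    }
    return( EXIT_SUCCESS );
}

A real SORT.EXE replacement would also want at least the /R (reverse) and
/+n (column) switches that DOS SORT users expect, but those are small
additions on top of this.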

David Golub
"Peter C. Chapin" <pe...@openwatcom.org> wrote in message
news:Xns9BE83FF191B35p...@216.55.181.205...

Kevin G. Rhoads

Apr 13, 2009, 1:14:43 PM
>I've thought of the idea of writing our own sort utility, and I'm inclined
>to go ahead and do it. I would think that it would be a pretty simple program to
>write, especially with the qsort function in the standard library.

Is there anything that makes FreeDOS sort not suffice?

Steve Fabian

Apr 13, 2009, 1:22:35 PM

... or the age-old QSORT from Ben Baker? I am worried that a "simple" sort
routine may not be able to deal with high record counts...
--
Steve

Mat Nieuwenhoven

Apr 13, 2009, 1:54:24 PM
On Mon, 13 Apr 2009 11:55:13 -0400, David Golub wrote:

>Having a DOS build server is probably somewhat difficult, since I'm not sure
>how one runs a server on DOS, but I can definitely run the DOS build on a
>somewhat regular basis to make sure that things still build.

I'm not sure if you are referring to the 'web server' part of it, or to the
'run regular' part. The latter could be met by setting a wake-up in the
BIOS (not sure all BIOSes support that) and starting the build from
AUTOEXEC.BAT.
For the web server, I don't know. There's WATTCP for the IP part, but you
need more. It is quite possible to write multithreaded software under DOS,
provided you code the multithreading part yourself (or maybe there are
already open source versions).

>I've thought of the idea of writing our own sort utility, and I'm inclined
>to go ahead and do it. I would think that it would be a pretty simple program to
>write, especially with the qsort function in the standard library.

<snip>
If it's a tool used by the OW build process, then IMHO it would be nice to
have an 'internal' version of it: one less dependency on external items. In
addition one can then be sure it behaves the same on all build platforms.

Mat Nieuwenhoven

Roald Ribe

Apr 13, 2009, 2:12:30 PM

OW should build without dependence on external utilities, to the extent
possible.

Roald

Kevin G. Rhoads

Apr 13, 2009, 3:35:22 PM
>OW should build without dependence on external utilities, to the extent
>possible.

That's fine, but FreeDOS sort is available as source, since FreeDOS is an
open source project. So the FD sort source could be customized, if necessary,
for OW and even included in the source tree.

Why reinvent the wheel? Unless there is some overriding reason ...

David Golub

Apr 13, 2009, 5:34:59 PM
Yes, it has the same size limitations as the other DOS sort utilities.

David Golub

"Kevin G. Rhoads" <kgrh...@alum.mit.edu> wrote in message
news:49E37303...@alum.mit.edu...

David Golub

Apr 13, 2009, 5:39:52 PM
I'm pretty sure that Yale ITS policies prohibit me from running a server on
my computer. Even if I could, I have to move annually at this point and
don't have the ability to connect this computer to the Internet when school
is not in session. Maybe when I'm in graduate school and have a (somewhat)
permanent residence I'll consider it.

Also, since I only have one computer, the build will have to be done in a
virtual machine. My thought is to just do it manually on a somewhat
regular basis and upload the reports (and possibly the built files) to
the web.

David Golub

"Mat Nieuwenhoven" <mni...@dontincludethis.zap.a2000.nl> wrote in message
news:zavrhjqbagvapyhqrguvf...@news.openwatcom.org...

Roald Ribe

Apr 13, 2009, 8:47:40 PM

I do not propose re-inventing anything, if it is possible to avoid it.

FreeDOS and most of its utilities are GPL licensed, and I think that
license should be avoided in OW because of its viral nature.

Someone made an effort a while ago, I seem to remember, to include portable
BSD-licensed versions of POSIX utilities. Isn't there a sort utility
in there somewhere?

Anyone may be of a different opinion of course, but I think BSD and Public
Domain licensed source is OK to include in OW; GPL is not.

Roald

Arkady V.Belousov

Apr 14, 2009, 5:09:02 AM
Hi!

David Golub 13.04.09 19:55 wrote:

> I've thought of the idea of writing our own sort utility, and I'm inclined
> to go ahead and do it. I would think that it would be a pretty simple program to
> write, especially with the qsort function in the standard library.

qsort is not the best choice for sorting strings, and it is an "internal
sort" (it requires all the data to be placed in internal, directly
accessible memory).
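
For large files the usual answer is an external merge sort: read
memory-sized runs, sort each run (qsort is fine for that step), spill the
sorted runs to temporary files, then merge. A rough, untested sketch of the
merge step for two runs (illustrative names only; a real tool would merge
many runs at once and handle arbitrary line lengths):

#include <stdio.h>
#include <string.h>

/* merge two temp files of sorted lines into out; lines under 1 KB */
static void merge_runs( FILE *a, FILE *b, FILE *out )
{
    char la[1024], lb[1024];
    char *pa = fgets( la, sizeof( la ), a );
    char *pb = fgets( lb, sizeof( lb ), b );

    while( pa != NULL && pb != NULL ) {
        if( strcmp( la, lb ) <= 0 ) {
            fputs( la, out );
            pa = fgets( la, sizeof( la ), a );
        } else {
            fputs( lb, out );
            pb = fgets( lb, sizeof( lb ), b );
        }
    }
    while( pa != NULL ) {               /* drain whichever run remains */
        fputs( la, out );
        pa = fgets( la, sizeof( la ), a );
    }
    while( pb != NULL ) {
        fputs( lb, out );
        pb = fgets( lb, sizeof( lb ), b );
    }
}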

Kevin G. Rhoads

Apr 14, 2009, 8:59:30 AM
Avoiding GPLed stuff, I understand. Thanks for the
clarification.

I think inclusion of Other People's code is rare enough
that a discussion of what licenses are reasonable and
what are not is probably not worth the time. One specific
set of questions I would like to raise for discussion, in
part because I am posting (trivial) source on Scribd:

I presume the objection to GPL would not extend to LGPL?
How about varieties of Creative Commons?
Do you view the CC SHARE-ALIKE attribute as viral like GPL?

Any specific licenses that are particularly good or bad from
the viewpoint of inclusion in OW?

Roald Ribe

Apr 14, 2009, 2:02:34 PM
Kevin G. Rhoads wrote:
> Avoiding GPLed stuff, I understand. Thanks for the
> clarification.
>
> I think inclusion of Other People's code is rare enough
> that a discussion of what licenses are reasonable and
> what are not is probably not worth the time. One specific
> set of questions I would like to raise for discussion, in
> part because I am posting (trivial) source on Scribd:
>
> I presume the objection to GPL would not extend to LGPL?

In my opinion, no (L)GPL code should be allowed in the OW
source tree. I am not a license expert, but I think the GPL license could
accidentally "leak" onto other parts.

Maybe we could use LGPL libraries, but then I think we should
have two separate source trees, to avoid leaks.

If we want to go GPL, we should seek to relicense and/or
dual license the entire OW source at once.

> How about varieties of Creative Commons?
> Do you view the CC SHARE-ALIKE attribute as viral like GPL?

I do suspect it, but I am not an expert.

> Any specific licenses that are particularly good or bad from
> the viewpoint of inclusion in OW?

Public Domain, BSD, Apache, LLVM, and similar licenses I see as
compatible. But for a real evaluation a legal expert should be sought.

Roald

David Golub

Apr 15, 2009, 11:22:12 AM
I tried running it on VirtualBox, and it doesn't run any faster than in
Virtual PC.

David Golub

"David Golub" <david...@yale.edu> wrote in message
news:grvn39$vcg$1...@www.openwatcom.org...

Mat Nieuwenhoven

Apr 15, 2009, 1:02:09 PM
On Wed, 15 Apr 2009 11:22:12 -0400, David Golub wrote:

>I tried running it on VirtualBox, and it doesn't run any faster than in
>Virtual PC.

<snip>

Have you tried creating a ramdisk and setting the temp / tmp directories on
there?

Mat Nieuwenhoven


Peter C. Chapin

Apr 15, 2009, 8:57:04 PM
Roald Ribe <rr.n...@pogostick.net> wrote in
news:gs2j3u$geg$1@www.openwatcom.org:

> But for a real evaluation a legal expert should be sought.

This is a situation where having an Open Watcom Foundation would be
helpful. It would be a place where money for such things could be stored.

Peter

Peter C. Chapin

Apr 15, 2009, 8:59:32 PM
"David Golub" <david...@yale.edu> wrote in
news:gs0bf9$t8o$1...@www.openwatcom.org:

> Also, since I only have one computer, the build will have to be done in a
> virtual machine. My thought is to just do it manually on a somewhat
> regular basis and upload the reports (and possibly the built files) to
> the web.

Executing the build semi-regularly is probably good enough. Ideally we'd
like to discover build breakage early when the problem is still fresh in
everyone's mind.

It would be good to execute the regression tests as well.

Peter

David Golub

Apr 16, 2009, 11:19:07 AM
Does the OW compiler use temporary files?

David Golub

"Mat Nieuwenhoven" <mni...@dontincludethis.zap.a2000.nl> wrote in message
news:zavrhjqbagvapyhqrguvf...@news.openwatcom.org...

Wilton Helm

Apr 16, 2009, 9:43:51 PM
Yes, especially in a DOS environment where RAM is limited. For instance,
the linker creates linked lists pointing to the object fragments that are
being organized. Under, say, Windows, these are stored in dynamic memory.
However, under DOS, they are kept in a temporary file.

Wilton


David Golub

Apr 17, 2009, 11:14:52 AM
There does seem to be some improvement, but it's still nowhere near as fast as on
Windows XP. I'll add the use of a RAM drive to the DOS build instructions.
By the way, I assume that set TEMP=path and set TMP=path are sufficient to
put the temporary file on the RAM drive? Thanks.

David Golub

"Wilton Helm" <wh...@compuserve.com> wrote in message
news:gs8n38$8pc$1...@www.openwatcom.org...

Bart Oldeman

Apr 20, 2009, 7:51:51 PM
> "Wilton Helm" <wh...@compuserve.com> wrote in message
> news:gs8n38$8pc$1...@www.openwatcom.org...
>> Yes, especially in a DOS environment where RAM is limited. For instance,
>> the linker creates linked lists pointing to the object fragments that are
>> being organized. Under, say, Windows, these are stored in dynamic memory.
>> However, under DOS, they are kept in a temporary file.

Where is the code in the linker that does that? As far as I can see wlink
uses a swap file when it runs out of normal memory (i.e., malloc()
returns NULL), but not if there is enough memory available.

David Golub wrote:

> There does seem to be some improvement, but it's still nowhere near as fast as on
> Windows XP. I'll add the use of a RAM drive to the DOS build instructions.
> By the way, I assume that set TEMP=path and set TMP=path are sufficient to
> put the temporary file on the RAM drive? Thanks.

Much of the slowness in building in DOS comes from reloading
wcc386.exe a zillion times. In modern OSes this is done efficiently,
e.g., in Linux it's just a virtual memory remapping of the page cache.
But in DOS the whole file is loaded and that goes via a bunch of chunked
INT21 reads involving a few copies.
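
As a rough probe of that load cost (a hypothetical test harness, not part
of the OW tree), one can time re-reading a compiler-sized file through the
C runtime, which under DOS ends up as the same kind of chunked INT21 reads:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main( int argc, char **argv )
{
    FILE            *fp;
    char            *buf;
    size_t          n;
    unsigned long   total = 0;
    clock_t         t0, t1;

    if( argc < 2 || (fp = fopen( argv[1], "rb" )) == NULL ) {
        fputs( "usage: loadcost <file>\n", stderr );
        return( EXIT_FAILURE );
    }
    buf = malloc( 16384 );              /* read in 16 KB chunks */
    if( buf == NULL )
        return( EXIT_FAILURE );
    t0 = clock();
    while( (n = fread( buf, 1, 16384, fp )) > 0 )
        total += n;
    t1 = clock();
    printf( "%lu bytes in %.2f s\n", total,
            (double)(t1 - t0) / CLOCKS_PER_SEC );
    free( buf );
    fclose( fp );
    return( EXIT_SUCCESS );
}

Run it on WCC386.EXE a few times under DOS and under XP to compare the
load behaviour of the two.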

The default DOS4GW is particularly slow; if you replace DOS4GW.EXE by
CWSTUB.EXE or DOS32A.EXE it already loads faster; you can also bind
WCC386.EXE with one of those tools and then with DOS32A configure the
DOS transfer buffer size using the SS.EXE utility.

For quick testing try this in CMD.EXE/COMMAND.COM:
for %a in (1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1) do wcc386 -q
and compare how long it takes in DOS and in Windows XP.

Bart

Mat Nieuwenhoven

Apr 21, 2009, 12:13:04 AM

Plus, I think, in OS/2 and Windows wmake uses (in the OW build process) the
DLL method, so the compilers and stuff are loaded only once during the
processing of one make file. On Linux and DOS this isn't the case.

Mat Nieuwenhoven


Arkady V.Belousov

Apr 21, 2009, 4:12:48 AM
Hi!

Bart Oldeman 21.04.09 3:51 wrote:

> Much of the slowness in building in DOS comes from reloading
> wcc386.exe a zillion times.

I proposed long ago adding support for multiple targets (directly and/or
through a "response file") in the compiler. This allows all needed files to
be compiled in one pass/one compiler load; make just collects the modified
files in a list (internal, or an external env-variable/file). I even
mentioned that Borland does this successfully (both in BCC and in MAKE).
Unfortunately, this proposal was ridiculed as inappropriate, because "OW is
not Borland, and anything present in Borland is wrong".

> In modern OSes this is done efficiently,

No OS can do "zillion times reloading" more efficiently than one
call with an internal loop over the file list.

Bart Oldeman

Apr 21, 2009, 8:40:36 AM
21-04-2009, Mat Nieuwenhoven <mni...@dontincludethis.zap.a2000.nl> wrote:
>
> Plus, I think, in OS/2 and Windows wmake uses (in the OW build process) the
> DLL method, so the compilers and stuff are loaded only once during the
> processing of one make file. On Linux and DOS this isn't the case.

Yes, exactly, I wanted to write this too but I forgot. For Linux this doesn't
matter much because it is already very fast with demand paging from the page
cache (about 1 ms per call on my laptop).

For DOS it might be possible to modify wmake to use DLLs with Causeway
I think.

Bart

Roald Ribe

Apr 21, 2009, 8:49:47 AM
Mat Nieuwenhoven wrote:
>
> Plus, I think, in OS/2 and Windows wmake uses (in the OW build process) the
> DLL method, so the compilers and stuff are loaded only once during the
> processing of one make file. On Linux and DOS this isn't the case.

And if the HX DOS extender is used, one may use the WIN32 versions of the
OW command line tools on DOS, so the DLL method in wmake should be available.
I have no idea if it will be any faster than other DOS solutions, but it
might be, thanks to the DLL tool loading support in the WIN32 wmake.

Roald

Wilton Helm

Apr 21, 2009, 2:32:14 PM
>Where is the code in the linker that does that?

It's been three or four years since I seriously worked with the linker code.
As I recall, a macro is used to allocate memory and, depending on the build
flags, it resolves to either malloc() calls or temp-file accesses. There is
some clever bit utilization going on so that the upper-level access,
including pointers, is transparent. Fortunately, I didn't have to work with
anything other than top level, so I didn't thoroughly explore it. I did
look at the pointer handling to make sure I wasn't doing anything that could
generate invalid references.
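
The shape of it, as a hedged reconstruction from memory (the names
USE_SPILL, virt_ptr, spill_alloc and _LnkAlloc are invented here, not the
actual wlink source):

/* an allocation macro that resolves to plain malloc() where memory is
   plentiful, or to a handle into a spill (temp) file on DOS */
#ifdef USE_SPILL
typedef unsigned long   virt_ptr;       /* offset into the spill file */

extern virt_ptr spill_alloc( unsigned size );   /* reserve bytes in the file */
extern void     spill_read( virt_ptr p, void *buf, unsigned len );
extern void     spill_write( virt_ptr p, const void *buf, unsigned len );

#define _LnkAlloc( size )   spill_alloc( size )
#else
#include <stdlib.h>
typedef void            *virt_ptr;

#define _LnkAlloc( size )   malloc( size )
#endif

Upper-level code then deals only in virt_ptr values and _LnkAlloc() calls,
and never needs to know which kind of storage backs them.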

Wilton


Kevin G. Rhoads

Apr 22, 2009, 9:33:07 AM
>Much of the slowness in building in DOS comes from reloading
>wcc386.exe a zillion times.

Easier and faster than converting to DLL usage would be to
load the tools that are reloaded frequently onto the
RAM disk, not merely pointing TEMP and TMP there.

I remember using this approach with a modified IBM DOS 1.1
which supported RAM disk on an original 4.77 MHz 8088 IBM PC
with ASM/MASM v1, Pascal v1 and IBM Personal FORTRAN v1. The
boot floppy copied compiler files and libraries from the
two floppies into RAM disk. Then you could edit, run compilers
and test all from RAM disk, and only save modified sources
back to floppy. (I still have some disk images around here
somewhere ....)

CyberSimian

Apr 23, 2009, 8:01:05 AM
Kevin G. Rhoads wrote:
> Easier and faster than converting to DLL usage would be to
> load the tools that are reloaded frequently onto the
> RAM disk, not merely pointing TEMP and TMP there.

Even easier would be to use disk cacheing software. My IBM PC DOS 2000 (aka
7.01) comes with SMARTDRV.EXE, which has this help information:

C:\DOS>SMARTDRV.EXE /?

Installs and configures the SMARTDrive disk-caching utility.

SMARTDRV [/X] [[drive[+|-]]...] [/U] [/C | /R] [/L] [/V | /Q | /S]
[InitCacheSize [WinCacheSize]] [/E:ElementSize] [/B:BufferSize]

/X Disables write-behind caching for all drives.
drive Sets caching options on specific drive(s). The specified
drive(s) will have write-caching disabled unless you add +.
+ Enables write-behind caching for the specified drive.
- Disables all caching for the specified drive.
/U Do not load CD-ROM caching module.
/C Writes all information currently in write-cache to hard disk.
/R Clears the cache and restarts SMARTDrive.
/L Prevents SMARTDrive from loading itself into upper memory.
/V Displays SMARTDrive status messages when loading.
/Q Does not display status information.
/S Displays additional information about SMARTDrive's status.
InitCacheSize Specifies XMS memory (KB) for the cache.
WinCacheSize Specifies XMS memory (KB) for the cache with Windows.
/E:ElementSize Specifies how many bytes of information to move at one time.
/B:BufferSize Specifies the size of the read-ahead buffer.

I invoke SMARTDRV from AUTOEXEC.BAT when I boot my DOS partition.

-- from CyberSimian in the UK


Steve Fabian

Apr 23, 2009, 9:23:42 AM
CyberSimian wrote:
| Kevin G. Rhoads wrote:
|| Easier and faster than converting to DLL usage would be to
|| load the tools that are reloaded frequently onto the
|| RAM disk, not merely pointing TEMP and TMP there.
|
| Even easier would be to use disk cacheing software. My IBM PC DOS
| 2000 (aka
| 7.01) comes with SMARTDRV.EXE, which has this help information:
|
| C:\DOS>SMARTDRV.EXE /?
|
| Installs and configures the SMARTDrive disk-caching utility.
|
| SMARTDRV [/X] [[drive[+|-]]...] [/U] [/C | /R] [/L] [/V | /Q | /S]
| [InitCacheSize [WinCacheSize]] [/E:ElementSize]
| [/B:BufferSize]
...

| I invoke SMARTDRV from AUTOEXEC.BAT when I boot my DOS partition.

The major difference between using the available XMS for a virtual disk vs.
using the same for cacheing is that processing a large source file may
replace the tools in the cache, so reloading them may need to go back to
disk. If you make the size of the virtual disk match the total size of the
tools (watching out for the difference between actual and allocated sizes,
and directory etc. overhead of the virtual disk; you may want to put the
command interpreter there too!) and dedicate the rest of the available
internal storage to the disk cache, you probably optimized your system
configuration.

--
HTH, Steve

David Golub

Apr 23, 2009, 10:23:25 AM
I already tried putting the compiler on a RAM disk, and it didn't make a
major difference in the build time.

David Golub

"Steve Fabian" <ESFa...@comcast.net> wrote in message
news:gspq4v$v54$1...@www.openwatcom.org...

CyberSimian

Apr 23, 2009, 7:27:12 PM
Steve Fabian wrote:
> The major difference between using the available XMS for a virtual disk vs.
> using the same for cacheing is that processing a large source file may
> replace the tools in the cache, so reloading them may need to go back to
> disk.

Some disk cache software is smart enough not to flush the cache of small files
when a large file is encountered (the large file passes straight through,
without being stored in the cache). I don't know whether SMARTDRV does this.

The advantage of disk cacheing software is that it requires no change to the
build process, so it should be simple to try it out to see what (if any)
difference it makes.

Hans-Bernhard Bröker

Apr 25, 2009, 6:23:07 AM
Steve Fabian wrote:

> The major difference between using the available XMS for a virtual disk vs.
> using the same for cacheing is that processing a large source file may
> replace the tools in the cache, so reloading them may need to go back to
> disk.

On closer examination I think you'll find that's not really a major
difference.

The critical figure is the overall working set size of a complete
compiler run (including make, the compiler, their data, and all
temporary files). Once that working set gets bigger than physical RAM,
there will have to be disk I/O. It doesn't matter all that much whether
it's temporary files being written to disk, executable code being
reloaded, disk cache spilling over to the real disk, or virtual memory
being paged out --- the disk is equally slow for all of them.

RAM disks provide no noticeable improvement of speed over the same
amount of memory being spent on disk cache, which will most likely hold
the compiler executable. In the case at hand they would do so at the
cost of keeping *all* the compilers in memory *all* the time --- even
though you're using only one of them.

The advantage of a bigger cache over a cache plus RAM disk is that it
allows the cache driver to take care of optimizing RAM utilization
dynamically. It'll hold what you actually use at the time, not what you
thought you would need when you set up the RAM disk.
