
WARNING: DOUBLE SPACE is definitely corrupt


Erwin Dondorp

May 24, 1993, 7:31:51 AM
Hi all,

this is a final warning for those of you who still do not get it:

DOS6's DOUBLE SPACE definitely can destroy
the contents of your hard disk!

Am I clear?

I am very sure of this, and so are dozens of other people who reported to the
net.
The problem is always the same; we can therefore safely rule out any
compatibility problems with stupid hardware.
The problem has been described many times here, but here it is again:
- 1, your computer crashes
- 2, you reboot and your C: (compressed) drive seems empty
- 3, running chkdsk from the dblspace menu tells you that the header
information of the CVF is corrupt, and it gets fixed OK.
- 4, there are no reports of other problems here (sometimes 1 or 2 lost
chains, but they were there before the crash)
- 5, running chkdsk on the uncompressed drive (the one the CVF is on)
reports thousands of lost chains. The errors are too severe to fix
by chkdsk (files/directories with garbled names are not fixed,
and any file still left on your disk is garbled)
- final: you have just lost your complete compressed disk.
And you cannot format or delete this compressed disk because it is your
startup disk! In this case you should just make the file
dblspace.000 writable and remove it.
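
For reference, that last step amounts to something like the following
sketch (ATTRIB and DEL are standard DOS commands; the CVF name and the
drive letter it ends up on can differ per setup):

    rem make the CVF writable (strip the system/hidden/read-only
    rem attributes), then delete it -- only do this once the
    rem compressed data is already lost or safely backed up!
    attrib -s -h -r c:\dblspace.000
    del c:\dblspace.000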

I cannot be accused of using any strange software, I use a standard
AMI based 486 with ONLY Microsoft software:
DOS6, SmartDRIVE, DBLSPACE, HIMEM/EMM386, MS-Windows 3.1 and MicroSoft C7.00
so:
NO NDOS, 4DOS, NCACHE, QEMM or Borland compilers, everything is MS!
I did this to get a very secure system, i.e. no compatibility problems.
And before you say it, I have the original floppies of all operating system
type software I use and I did install this software from these floppies.


My advice to all of you is:
1) Create a full backup of your compressed drive
2) Remove the compressed drive
3) Restore the backup on the uncompressed drive
or
1) Create a full backup of your compressed drive
2) Remove the compressed drive
3) Install a good runtime compression program
4) Restore the backup on the new compressed drive


My advice to MicroSoft is:
1) Don't be shy about admitting a simple bug!
It will help all of us.
I don't care if a program contains bugs; all programs do. I'm not
angry, even though I lost a hard disk's contents. I did lose about
1 week of work, but it can be reconstructed in about 1 day; with
1 day for installing all the software again, that makes 2.


Above statements are based on:
- The reports placed in various newsgroups of this net.
- My own experience with DOS6/DOUBLE SPACE.

Please don't flame me by saying that I probably misjudged the facts that
I have seen. I am a professional software developer (MSDOS/WINDOWS/MAC/UNIX)
with more than enough experience to solve any problems that occur on
MSDOS systems (likewise for WINDOWS/MAC; UNIX is too complex to solve all
problems, but I can manage most of 'em).

Erwin
--
Erwin Dondorp Xirion bv, Software and Consultancy
er...@xirion.nl Burg. Verderlaan 15X
3454 PE De Meern
the Netherlands
Phone +31 (0)3406-61990
Fax +31 (0)3406-61981

Currently put to work at: PTT-T-NWB-NWO-PCS-SDC (070-3434955)

Een heer van stand is geen nummer! (OBB)

Stephen L Wyatt

May 24, 1993, 12:22:19 PM
In article <1993May24.1...@pttdis.pttnwb.nl> er...@pcssdc.pttnwb.nl (Erwin Dondorp) writes:
>Hi all,
>
>this is a final warning for you that still do not get it:
>
> DOS6's DOUBLE SPACE definitely can destroy
> the contents of your hard disk!
>
>am I clear?

Let me say this-- I also only use standard software (mostly microsoft) and
an AMI 386 system, MFM drive, etc... And I gave microsoft the benefit of the
doubt and installed doublespace.. It ran well for about 2 weeks. Yesterday,
my uncompressed drive just would not boot (the hard disk, that is..). It said
invalid media and I booted off floppy, and ran Norton Disk Doctor. It said
that there was an invalid entry in the root directory, and seemed to fix it.
But, I am now wondering...


Just to let you all know, I am still using it, (I have a tape drive..), but
am very cautious. I think that there might be a problem in the write-through
cache, instead of doublespace. I am going to turn off the write-back option
though.

--
----------------------------------------------------------------------------
swy...@brahms.udel.edu !!! no disclaimer...I blame everything on someone else
----------------------------------------------------------------------------

johnson scott andrew

May 24, 1993, 2:50:24 PM
>Just to let you all know, I am still using it, (I have a tape drive..), but
>am very cautious. I think that there might be a problem in the write-through
>cache, instead of doublespace. I am going to turn off the write-back option
>though.

Probably a good idea. I am not an expert on disk caching, but it seems to me
that if write caching is enabled, and you switch off your computer before the
cache is written, you will lose data. This problem is especially severe on a
compressed volume, where one bad bit can mess a LOT of things up.

I use doublespace and READ caching only, and have NEVER had a problem. My
advice is if you use DoubleSpace or any other realtime file compression, do
not use write caching on your hard drive. Or if you must, make sure the cache
is written before you switch off. (CTRL+ALT+DELETE is safe; it writes the
cache before it reboots.)
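
For what it's worth, that flush is easy to make a habit of. A minimal
sketch, assuming SMARTDRV is loaded and using its documented /C switch
(the file name SHUTDOWN.BAT is just illustrative):

    @echo off
    rem SHUTDOWN.BAT - force SMARTDRV to write out all deferred
    rem data, then tell the user the machine is safe to power off.
    smartdrv /c
    echo Cache flushed - it is now safe to turn the computer off.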


Any comments?

/sj/


Jack Green

May 24, 1993, 11:13:17 PM
In article <1tr5dg...@flop.ENGR.ORST.EDU> joh...@prism.CS.ORST.EDU (johnson scott andrew) writes:
>>(Stuff deleted) I think that there might be a problem in the write-through

>>cache, instead of doublespace. I am going to turn off the write-back option
>>though.
>
>Probably a good idea. I am not an expert on disk caching, but it seems to me
>that if write caching is enabled, and you switch off your computer before the
>cache is written, you will lose data. This problem is especially severe on a
>compressed volume, where one bad bit can mess a LOT of things up.
>
>I use doublespace and READ caching only, and have NEVER had a problem. My
>advice is if you use DoubleSpace or any other realtime file compression, do
>not use write caching on your hard drive. Or if you must, make sure the cache
>is written before you switch off. (CTRL+ALT+DELETE is safe; it writes the
>cache before it reboots.)
>Any comments?
>
That's pretty much what I've been wondering. I've been using dblspace on two
machines for almost seven weeks and haven't had the first hint of trouble. I'm
inclined to believe Dblspace itself is not buggy but that when something does
happen, it does it big-time (comes from having all your eggs in one basket).
Does anybody have any hard numbers as to the failure rate for Dblspace? I've
seen maybe two dozen posts from people who had catastrophic disk failure while
using Dblspace. Considering the number of people who are probably using
Dblspace does this seem high or low? Particularly when compared to Stacker.

On a related note, is running Norton's image (version 6) with a compressed
volume a waste of time? If it is, does version 7 handle it better?

Jack Green Why doesn't
Dept. of Biochemistry etc. Unix have
Mississippi State University a decent
jgr...@isis.msstate.edu editor?


Rene Peeren

May 25, 1993, 3:05:54 AM
In article p...@Tut.MsState.Edu, jgr...@Isis.MsState.Edu (Jack Green) writes:
>That's pretty much what I've been wondering. I've been using dblspace on two
>machines for almost seven weeks and haven't had the first hint of trouble. I'm
>inclined to believe Dblspace itself is not buggy but that when something does
>happen, it does it big-time (comes from having all your eggs in one basket).
>Does anybody have any hard numbers as to the failure rate for Dblspace? I've
>seen maybe two dozen posts from people who had catastrophic disk failure while
>using Dblspace. Considering the number of people who are probably using
>Dblspace does this seem high or low? Particularly when compared to Stacker.
>

I was wondering that meself.

Concerning Smartdrive: looks like that could be a reason for disk failure.
Suggestion: anyone having any complaints
about Dblspace: let this group know if you use Smartdrive with write caching.
If there seems to be enough reason to suspect Smartdrive, then we can post
a general warning. Could save lives...

I have another, possibly related, problem. My machine is a (don't laugh)
16 MHz 286, with a 43 MByte hard disk. Believe it or not, I'm running
Windows 3.1, spreadsheets, lots of Windows (and other) games, music software and a few
other things. Before DblSpace, I had approx. 6 MByte free. I also use
Falcon 3.0 (which already uses 12 MByte), which requires 602 KB of free memory
to run. On my 286, this means I must run Falcon from an uncompressed
volume. So I backed up my hard disk, removed Falcon,
installed dblspace, resized dblspace to make room for Falcon on my
uncompressed disk, and restored Falcon. For backup and restore I used
the new Backup program (Windows). Falcon used up 6 disks. Restoring the first
two disks was no problem. Then, the program got stuck a few times:
the hard disk was running like a madman, but no data was restored.
Usually it continued after a few minutes. When about 68% of
my restore was done, the program got stuck altogether. I couldn't do anything,
not even switch to another window. I turned off the machine. A day later
(had a party) I continued from where I left off. Hard disk running, no progress
for about two minutes. Then, whoops, progress.. And the story ended happily ever
after (no hiccups since then). Anyone have an idea what caused the problem?
Remember: I was restoring to my uncompressed disk! BTW, even with approx. 15 MB out
of 43 uncompressed, and 6 MB free before compression, I still had 18 MB free
after compression on my compressed drive, and 1.5 MB on my uncompressed drive.
Not bad..

Rene

Dan Sinclair

May 24, 1993, 8:38:40 PM

>
> In article <1993May24.1...@pttdis.pttnwb.nl> er...@pcssdc.pttnwb.nl
> (Erwin Dondorp) writes:
> >Hi all,
> >
> >this is a final warning for you that still do not get it:
> >
> > DOS6's DOUBLE SPACE definitely can destroy
> > the contents of your hard disk!
> >
> >am I clear?
>
> Let me say this-- I also only use standard software (mostly microsoft) and
> an AMI 386 system, MFM drive, etc... And I gave microsoft the benefit of the
> doubt and installed doublespace.. It ran well for about 2 weeks. Yesterday,
> my uncompressed drive just would not boot (the hard disk, that is..). It said
> invalid media and I booted off floppy, and ran Norton Disk Doctor. It said
> that there was an invalid entry in the root directory, and seemed to fix it.
> But, I am now wondering...
>

I may as well toss in my two cents on this thread. I installed
double space a few weeks ago. I loved it until last week. I booted up
one morning only to discover 60 or 70 crosslinked files and all sorts of
bad sectors on my drive. Until that fateful morning my drive was
perfect, no bad sectors. When I ran chkdsk it ran for almost three
hours, I had to reboot to get out. After the three finger kill the
system would only boot from a floppy. You could say doublespace got my
attention big time! Until I installed dblspc, my HD never so much as
hiccupped.

I can't tell you how important a good backup is, if you are going
to use this program. The extra space doublespace gives you is great,
until it starts acting up. I liked the space, hated the hassle.
Doublespace is nothing but a bad memory on my system now, my backup saved
the day.

Dan Sinclair

Rudi Maelbrancke

May 25, 1993, 9:07:32 AM
I have been using doublespace for 6 weeks now, and I have not had any
problems. It works OK and I get a 2.1:1 compression ratio.

Remark: I always type smartdrv /c before shutting down!
This flushes all write buffers to disk.

Rudi

Mark Harding

May 25, 1993, 9:35:27 AM
Ok, my turn :-)

I've been using DBLSPACE for 1 month. In that time I've had one problem where
I was left with 175 errors according to CHKDSK. This resulted from having to
reboot the machine when it had locked. I was using SMARTDRV with write-back so
a lot of data was not written to disk.

I now use SMARTDRV with write-through when running under DOS, and write-back
when running under Windows.

Since then I have had NO problems whatsoever.

Thanks for listening.
Mark

Kyle Jones

May 25, 1993, 10:24:15 AM
sinc...@nlbbs.rn.com writes:
> I may as well toss in my two cents on this thread. I installed
> double space a few weeks ago. I loved it until last week. I booted up
> one morning only to discover 60 or 70 crosslinked files and all sorts of
> bad sectors on my drive.

Were you using Smartdrv? A common thread through a lot of these
trashed filesystem stories is that the errors were noticed after
a reboot. This suggests that disk buffers had been only
partially flushed. So maybe DOS 6's Double Space is taking a bad
rap.

sean kerns

May 25, 1993, 10:01:07 AM


I have used DBLSPACE for a couple of months with a read AND write cache, BUT...
I always make sure to dump the cache before I turn the computer off, and before I
reboot (when I have a choice). I had some disk errors before DOS6, and someone
suggested to me dumping the cache, and I haven't had any such problems since.
MS can only be responsible for so much. If you turn off your computer with a
cache full of stuff, of course you're gonna have problems. At work, the solution
to that is "RTFD".

This is not to say that there are no bugs in DOS6. But I have not had nearly the
catastrophic experiences that other people on this group have had. In fact,
except for some memory problems with a few games, I've had no problems...

I wonder how many of those people who keep "Mysteriously" losing their disks are
the same ones who keep asking what 386spart.par is, and how they can get rid of
it?

A lot of problems can be avoided if you look before you hit Enter.

Sean

*****************************************************************
* ------------------------------------------------------------ *
* | "As God is my witness, I thought turkeys could fly..." | *
* | | *
* | e-mail: sean....@sdrc.com (Sean R. "Snake" Kerns) | *
* | | *
* | These may not even be MY opinions... | *
* | | *
* ------------------------------------------------------------ *
*****************************************************************


Ron Albury

May 25, 1993, 10:31:02 AM
Sean suggests that dumping the cache buffers is the way to avoid DOS-6
doublespace disk corruption, and suggests we RTFD. I would like to point
out that when a system hangs we do not always have a means of dumping the
buffers. In a perfect world with a robust operating system, write cache
can be great. When you have a single-tasking operating system with
non-robust applications that may hang, write cache is somewhat risky. When
you layer a disk compression system that affects every file on your disk on
top of this single-tasking operating system, it is probably a good idea to
turn off the write cache altogether.

Ron

Yousuf Khan

May 25, 1993, 11:22:52 AM

>Hi all,

>this is a final warning for you that still do not get it:

> DOS6's DOUBLE SPACE definitely can destroy
> the contents of your hard disk!

>am I clear?

>I am very sure of this and so are dozens of other people who reported to the
>net.
>The problem is always the same, we can therefore safely rule out any
>compatibility problems with stupid hardware.

Yup, what you describe is exactly the same as what I went through,
so it must be there.

>I cannot be accused of using any strange software, I use a standard
>AMI based 486 with ONLY Microsoft software:
>DOS6, SmartDRIVE, DBLSPACE, HIMEM/EMM386, MS-Windows 3.1 and MicroSoft C7.00
>so:
>NO NDOS, 4DOS, NCACHE, QEMM or borland compilers, everything is MS!
>I did this to get a very secure system, i.e. no compatibility problems.
>And before you say it, I have the original floppies of all operating system
>type software I use and I did install this software from these floppies.

Actually, I do not believe that Dblspace is completely to blame alone,
my feeling is that it is Smartdrv in combination with Dblspace that
causes the problem. I had been running a very stable Dblspace setup
with Hyperdisk & Qemm, and the problems only began to happen when I
switched to Smartdrive from Hyperdisk. After the switch to Smartdrive
I began to have random reboots, before Smartdrv completely updated the
disk from its write buffers. The type of corruption problems that have
been described are consistent with what happens when a disk isn't
updated before a reboot.

I think the real culprit is Smartdrive if you dig a little deeper.

Yousuf Khan

Yousuf Khan

May 25, 1993, 1:48:34 PM

>I have used DBLSPACE for a couple of months with a read AND write cache, BUT...
>I always make sure to dump the cache before I turn the computer off, and before I
>reboot (when I have a choice). I had some disk errors before DOS6, and someone
>suggested to me dumping the cache, and I haven't had any such problems since.

It's not quite so simple as that. What if Smartdrive itself causes the reboot
before it completely updates the disk? I think that's where the problem
is, because some of us are sophisticated enough to know not to hit the reset
or power switches.


Yousuf Khan

Joe Morris

May 25, 1993, 2:43:32 PM
In a recent article jgr...@Isis.MsState.Edu (Jack Green) writes:

>In a previous article joh...@prism.CS.ORST.EDU (johnson scott andrew) writes:

[attribution chain lost; the thread started with a rather shrill
condemnation of DBLSPACE]

>>>(Stuff deleted) I think that there might be a problem in the write-through
>>>cache, instead of doublespace. I am going to turn off the write-back option
>>>though.

>>Probably a good idea. I am not an expert on disk caching, but it seems to me
>>that if write caching is enabled, and you switch off your computer before the
>>cache is written, you will lose data. This problem is especially severe on a
>>compressed volume, where one bad bit can mess a LOT of things up.

>>I use doublespace and READ caching only, and have NEVER had a problem. My
>>advice is if you use DoubleSpace or any other realtime file compression, do
>>not use write caching on your hard drive. Or if you must, make sure the cache
>>is written before you switch off. (CTRL+ALT+DELETE is safe; it writes the
>>cache before it reboots.)

>That's pretty much what I've been wondering. I've been using dblspace on two

>machines for almost seven weeks and haven't had the first hint of trouble. I'm
>inclined to believe Dblspace itself is not buggy but that when something does
>happen, it does it big-time (comes from having all your eggs in one basket).
>Does anybody have any hard numbers as to the failure rate for Dblspace? I've
>seen maybe two dozen posts from people who had catastrophic disk failure while
>using Dblspace. Considering the number of people who are probably using
>Dblspace does this seem high or low? Particularly when compared to Stacker.

This last question is one which is missing from much of the discussion
of the problems some users are reporting with DBLSPACE. Compression
by definition increases the effective information density on a disk, so
contaminating a random byte has a better chance of spreading trash
through the system, and if you hit one of the fundamental control areas
of the compressed disk you can lose everything. The problem with the
reports we're seeing about DBLSPACE is that they compare systems with
no compression to ones using DBLSPACE. The experiment isn't controlled
with respect to the risks inherent in using *any* compression feature,
so the data doesn't necessarily support blaming DBLSPACE for the problems.

So: is there someone in net.land who has experience with both DBLSPACE
and some other compression package (such as Stacker) on the same system
which was changed only by switching compression products?

Joe Morris / MITRE

sean kerns

May 25, 1993, 4:15:54 PM


That was exactly my point before; not to flame users in general, but to suggest
that something else might be causing the problem. I think that a cache would be
far more likely to cause this sort of trouble than a compression routine.

Sean

P.S. I know you can't flush if you lock up; that's when I've lost stuff. At least
then you can be pretty sure that it's SMARTDRV, not DBLSPACE. Maybe MS should
have some kind of autoflush fix for SMARTDRV.

Thomas D. LeFlore

May 25, 1993, 6:13:39 PM
In article <1993May24.1...@pttdis.pttnwb.nl>, er...@pcssdc.pttnwb.nl
(Erwin Dondorp) says:
>
>Hi all,
>
>this is a final warning for you that still do not get it:
>
> DOS6's DOUBLE SPACE definitely can destroy
> the contents of your hard disk!
>
>am I clear?

Yes. After 2 successful months using dblspace on a 286 with no disk cache, a
few days ago everything FELL APART.

1st of all I have been lucky that after 3 1/2 years my SCSI 80 meg has had
no bad sectors (knock on wood). But after downloading several files I got
a message saying error on drive C sector not found retry abort etc. After
I pressed R several times the download continued. The resultant file was
bad. I ran the Norton disk doctor v. 7.0 and wow my first bad sectors.
I figured no big deal; Norton marked those as bad, so all is well, right?

Wrong. Without going into too much detail: I rebooted several times from
the hard disk and a floppy disk. I'd run ndd /complete and it would find
more bad sectors. I'd run speedisk; it would work for 15 minutes, stop, and
tell me to run ndd /complete again. I'd run ndd /complete and more bad sectors
would be found. All of a sudden maybe about 5% of the files I tried to access
could not be read. I ran another Norton program; it told me I had many
crosslinked files. I told it to fix them. Then it said something about a
problem with compression; I told it to fix that too. After rerunning ndd,
still more bad sectors. Keep in mind this is on both the compressed and the
host drives, and I don't have any funny things in my autoexec or config.sys,
and when I started from my floppy I had no autoexec or config.sys, yet bad
sectors continued to develop.

The solution...
I backed up those files that I could that had changed since my last backup,
created a system disk without dblspace.bin on it, and
ran DiskManager and the AMI built-in diagnostic program several times,
reformatting the hard disk and verifying the data area; at no time did
any bad sectors show up.

So now I'm with the people who know first hand that DBLSPACE will eventually
destroy your hard disk.

Tom

robert.k.nichols

May 25, 1993, 8:07:04 PM
In article <1tta6f...@rodan.UU.NET>, ky...@rodan.UU.NET (Kyle Jones) writes:

My own feeling is that this whole problem was caused by Microsoft's
(Picosoft's?) decision to disallow caching of the compressed drive and
allow only the host drive to be cached. Yes, I know that caching
the already-compressed data makes the effective cache size larger, but
(a) you spend time decompressing the data each time it is accessed, and
(b) critical control structures on the compressed volume may languish in
memory, where they are subject to loss in the event of a crash.

I know that SMARTDRV is capable of caching device-driven file systems (it
works with Norton's encrypted Ndisks), so it would seem that caching the
DoubleSpace compressed drive should also be possible. Instead, we are
offered only the more dangerous option. This seems to be a shortsighted
error by a Johnny-come-lately to the disk caching business.

BTW, how does a Stacker/SMARTDRV combination work? Do you have the
option of caching the Stacker-driven file system? For that matter, what
about a DoubleSpace/Ncache combination? Hmmm, that one I _could_ check
out for myself, if I ever get the urge to re-install DoubleSpace.

--
Bob Nichols
AT&T Bell Laboratories
rnic...@ihlpm.ih.att.com

Keith R. Bennett

May 26, 1993, 3:32:56 AM
In article <1993May25.1...@linus.mitre.org> jcmo...@mwunix.mitre.org (Joe Morris) writes:
>In a recent article jgr...@Isis.MsState.Edu (Jack Green) writes:
>
>...no compression to ones using DBLSPACE. The experiment isn't controlled

>with respect to the risks inherent in using *any* compression feature,
>so the data doesn't necessarily support blaming DBLSPACE for the problems.
>
>So: is there someone in net.land who has experience with both DBLSPACE
>and some other compression package (such as Stacker) on the same system
>which was changed only by switching compression products?
>
>Joe Morris / MITRE

Joe -

I haven't had the misfortune of using DBLSPACE, but I've been using DR-DOS
6.0's SuperStor for over a year, and have had *no* problems with it.

I was really frustrated at all the computer journalists parroting
Microsoft's propaganda when MS-DOS 6.0 came out. They consistently
"forgot" to mention that this great innovation in disk compression was
already alive and well in DR-DOS.

I did have one problem while installing SuperStor on someone else's machine,
though. After spending over an hour compressing the existing files, it
displayed an error message something like "Error while compressing files.
Reboot your computer." When I did, all the files were gone.
Nevertheless, despite the frustration and the wasted time, this was an
installation problem only.

- Keith

--
----------------------------------------------------------------------------
Keith Bennett Bennett Business Solutions, Inc.
C++/C Software Development 1605 Ingram Terrace
(301) 929-8500 Silver Spring, MD USA 20906-5932

Matt Nelson

May 26, 1993, 10:25:45 AM
jcmo...@mwunix.mitre.org (Joe Morris) writes:

>In a recent article jgr...@Isis.MsState.Edu (Jack Green) writes:

[stuff hacked out]

>So: is there someone in net.land who has experience with both DBLSPACE
>and some other compression package (such as Stacker) on the same system
>which was changed only by switching compression products?

>Joe Morris / MITRE

When I first installed Stacker 3.0 on my notebook, I was a little worried
about the reliability of the system, so I watched the net. I saw the
occasional "Stacker killed my disk!" message. Fortunately, they were *all*
eventually followed by an "oops, it was really a hardware problem" or an
"ooops, it was really a misbehaved game problem" message. I honestly do
not recall a single case where there was a clear problem with what Stacker
had advertised to do (somebody please correct me if my memory is failing).

This is not the case with doublespace. Microsoft has (according to a
previous post) admitted that there is a genuine, mysterious bug in about
10% of the cases of doublespace failures. 10% is WAY too high for a product
with the distribution of something like a 'new' DOS.

Me, I'll stick to DOS5 and Stacker.

-matt nelson

Deon Strydom

May 25, 1993, 2:58:30 AM
Stephen L Wyatt wrote in a message to All:

> DOS6's DOUBLE SPACE definatelly can destroy
> the contents of your harddisk!

SLW> Just to let you all know, I am still using it, (I have a
SLW> tape drive..), but am very cautious. I think that there
SLW> might be a problem in the write-through cache, instead of
SLW> doublespace. I am going to turn off the write-back option
SLW> though.

Hi Stephen,
I use DoubleDisk (which was changed into DoubleSpace by MS, I believe) and
the ONLY problem I ever had was with SmartDrive 4.0 & write caching. In
fact, my DoubleDisk package came with a note warning NOT to use write
caching (write-behind, whatever).

Could it be that MS bought the rights to DD, and NEVER thought of reading
the docs, or of fixing the problem with write caching....?

Me, I'm happy with MS-DOS 5.0 & DoubleDisk - it has been working for a year
plus without any problems other than the above ;-)

Groetnis (=cheers)
Deon


--
INTERNET: Deon.S...@f7.n7104.z5.fidonet.org
via: THE CATALYST BBS in Port Elizabeth, South Africa.
(catpe.alt.za) +27-41-34-2859, V32bis & HST.

Michael Panayiotakis

May 27, 1993, 3:07:02 AM

Well, here's my few bucks worth:

As I see it, it started out with DOUBLESPACE being bad. Then it turns
out that it's not doublespace that has a problem, but smartdrive. So
I'm wondering...all you people using stalker or what not, AND
smartdrive, and any/all sorts of disk caching: does STALKER or any
other disk compressor have a problem with smartdrive? Apparently they
should, unless they take it upon themselves to clean up when you do a
cold restart (ctrl-alt-del or what-not).

At any rate, if disk caching is the prob, why specifically smartdrive?
And, in reply to whoever said that when stalker came out, people's
stalker problems were eventually attributed to games: when you have
something having a bad reaction to something a few people use, that's
not necessarily reason to correct it (because stalker may have a problem
with a game, that's no reason to fix either stalker or the game). But
when something that (I predict) will be as widely used as dos 6 &
dblspace interferes badly with something as widely used as smartdrive
(now new comps come with DOS 6, and have windows, which requires (or
doesn't it?) smartdrive), well, then either dblspace or smrtdrv gotta
change so the two utils work well together.

BTW, what happened to all the other (non dblspace-related) problems? I
don't know whether to get DOS 6 or not...I've already decided not to run
dblspace yet, though.

Well, let me reiterate something here...does MS Windows need smartdrive
or any other disk-caching? If so, what options would work well with
dblspace? You can't have a fix for doublespace if it won't work with
windows!

peace,
Mickey

Phil Ngai

May 26, 1993, 9:45:47 PM
In article <znr738290320k@nlbbs> sinc...@nlbbs.rn.com writes:
> I can't tell you how important a good backup is, if you are going
>to use this program. The extra space doublespace gives you is great,
>until it starts acting up. I liked the space, hated the hassle.

MS has done a big favor for tape backup vendors by shipping MS-DOS6. I
got my data munched by DS and went out and bought TWO tape drives. (one
for work and one for home)

Andrew Murdoch

May 27, 1993, 4:35:42 AM
In article <20...@heimdall.sdrc.com> drk...@sony7.sdrc.com (sean kerns) writes:
>Subject: Re: WARNING: DOUBLE SPACE is definitely corrupt

>In article <1tta6f...@rodan.UU.NET>, ky...@rodan.UU.NET (Kyle Jones) writes:
>|> sinc...@nlbbs.rn.com writes:
>|> > I may as well toss in my two cents on this thread. I installed
>|> > double space a few weeks ago. I loved it until last week. I booted up
>|> > one morning only to discover 60 or 70 crosslinked files and all sorts of
>|> > bad sectors on my drive.
>|>
>|> Were you using Smartdrv? A common thread through a lot of these
>|> trashed filesystem stories is that the errors were noticed after
>|> a reboot. This suggests that disk buffers had been only
>|> partially flushed. So maybe DOS 6's Double Space is taking a bad
>|> rap.
>
>
>That was exactly my point before; not to flame users in general, but to suggest
>that something else might be causing the problem. I think that a cache would be
>far more likely to cause this sort of trouble than a compression routine.
>
>Sean
>
>P.S. I know you can't flush if you lock up; that's when I've lost stuff. At least
>then you can be pretty sure that it's SMARTDRV, not DBLSPACE. Maybe MS should
>have some kind of autoflush fix for SMARTDRV.
At home I have been using doublespace, and NOT smartdrv, and have had no
problems (touch wood). On my work machine I have NOT been using
doublespace, but have been using smartdrive with double buffering, and have
had a few problems with FAT entries and lost chains, even to the extent of
two identical directory entries. Both machines have been rebooted in all
sorts of ways, mainly power failures.
My conclusion: I have no reason (other than from what I read) to suspect
doublespace, but I do see problems with smartdrv.
Can anybody confirm this?

--
|\/| v | ANDREW MURDOCH - Institute for Water Research
/_\ | | \___@ | Rhodes University, Box 94, Grahamstown, 6140, South Africa
/ \ / \ | Internet: and...@iwr.ru.ac.za Phone: [xx27] [0]461 24014

J.P. Singh

May 27, 1993, 10:28:36 AM

Ok. I am scared. I am new (very) to DOS and have DOS6 and have
Dblspace!! I want to get back to DOS5.0 and use stacker instead. Someone
mentioned that some hidden files need to be deleted. Can someone please
let me know how to re-install dos5.0? Or, rather, can I use dos6 and use
stacker instead of dblspace? Shall I just defragment the disk and use
stacker, or should I delete some files?
ThX
jp

Alan Su

May 27, 1993, 12:56:55 PM
In article <1993May27....@seas.gwu.edu> lou...@seas.gwu.edu (Michael Panayiotakis) writes:
|>
|>Well, here's my few bucks worth:
|>
|>As I see it, it started out with DOUBLESPACE being bad. Then it turns
|>out that it's not doublespace that has a problem, but smartdrive. So
|>I'm wondering...all you people using stalker or what not, AND
^^^^^^^ That's Stacker! Sorry...just
being picky...=)

|>smartdrive, and any/all sorts of disk caching: does STALKER or any
|>other disk compressor have a problem with smartdrive? Apparently they
|>should, unless they take it upon themselves to clean up when you do a
|>cold restart (ctrl-alt-del or what-not).

I use Stacker and Smartdrv with write-back caching (although I'm thinking
about changing that very soon now) and have experienced no problems. Of
course, the power has only failed once on my computer and every time I turn
it off, I always run a batch file which flushes the cache among other
things. This has kept me out of trouble (so far).

|>At any rate, if disk caching is the prob, why specifically smartdrive?

I think that any cache with write-back caching has the potential to have
problems.

|>Well, let me reiterate something here...does MS Windows need smartdrive
|>or any other disk-caching? If so, what options would work well with
|>dblspace? You can't have a fix for doublespace if it won't work with
|>windows!

I don't believe Windows _requires_ Smartdrv, but it will (presumably) run
better with at least a read cache.
--
-alan su
-al...@uclink.berkeley.edu

Andy

May 27, 1993, 9:41:01 AM
I am using Stacker 3.0 and the HyperDisk disk cache with a quite big
write-back buffer. If the disk cache fails to write information to the
disk (if I reboot or have to shut down too early), the only errors I
have noticed are that I get a huge bunch of lost clusters, and
occasionally an invalid directory entry.
But this has mostly touched the files I have recently written to the
disk, not any other files. And I have always been able to restore the
disk to 100% functionality again.
Andy

In article <1993May27....@seas.gwu.edu> lou...@seas.gwu.edu (Michael Panayiotakis) writes:


Joe Morris

May 27, 1993, 3:54:08 PM
In a recent article jps...@advtech.uswest.com ( J.P. Singh) writes:

>Ok. I am scared. I am new (very) to DOS and have DOS6 and have
>Dblspace!! I want to get back to DOS5.0 and use stacker instead. Someone
>mentioned that some hidden files need to be deleted. Can someone please
>let me know how to re-install dos5.0?

The procedure for removing DBLSPACE has been discussed in several
postings on usenet. It's also discussed in section 7.6 of the
README.TXT file you'll find in your DOS directory. Follow the
procedures (including the removal of the C:\DBLSPACE.BIN hidden
file), then use the uninstall disk created when you upgraded to
DOS 6.
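
The hidden-file step looks something like this (a sketch, not the full
procedure; ATTRIB and DEL are standard DOS commands, and the compressed
volume must already have been removed per the README):

    rem strip the system/hidden/read-only attributes from the
    rem DoubleSpace driver file, then delete it
    attrib -s -h -r c:\dblspace.bin
    del c:\dblspace.bin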

Joe Morris / MITRE

The Supreme One

May 27, 1993, 5:42:31 PM
In article <andrew.227...@iwr.ru.ac.za> and...@iwr.ru.ac.za (Andrew Murdoch) writes:



>My conclusion: I have no reason (other than from what I read) to suspect
>doublespace, but I do see problems with smartdrv.
>Can anybody confirm this?
>

I've been using smartdrv and dblspace (with read/write caching)
for a while now, and haven't come across any problems.. of course, in a
power failure, or when you experience a lockup, the cache won't be flushed
(if using write-behind caching), so problems would be encountered there,
but (correct me if I'm wrong) problems will result regardless of whether
or not the drive is compressed... so I wouldn't think that running smartdrv
alongside dblspace would be any more risky than running smartdrv
alone... unless the data scrambling that results from an unflushed cache
would be worse with a compressed drive, since the actual drive is merely a
large compressed file. Of course, if you use windows, not running smartdrv
(or some disk caching) would greatly hinder performance.. even if you just
use read caching and no write-behind, you'll have to wait for the computer
to write most of whatever it needs to write before continuing.
I was wondering something.. since dblspace makes the actual
compressed drive merely a large file on drive "H", would this make virus
attacks especially destructive, at least with certain forms of virii? If it
alters just the one file dblspace makes, then all of drive C would be
scrambled, would it not? I hope we don't see a new virus geared
specifically for DOS 6.0 and dblspace...


jeff

johnson scott andrew

May 27, 1993, 8:12:45 PM
In article <1993May27.2...@oucsace.cs.ohiou.edu> co...@bobcat.ent.ohiou.edu (The Supreme One) writes:
>In article <andrew.227...@iwr.ru.ac.za> and...@iwr.ru.ac.za (Andrew Murdoch) writes:
>
>>My conclusion: I have no reason (other than from what I read) to suspect
>>doublespace, but I do see problems with smartdrv.
>>Can anybody confirm this?
>>
> I've been using smartdrv and dblspace (with read/write caching)
>for a while now, and haven't come across any problems.. of course, in a
>power failure, or when you experience a lockup, the cache won't be flushed
>(if using write-behind caching), so problems would be encountered there,
>but (correct me if I'm wrong) problems will result regardless of whether
>or not the drive is compressed... so I wouldn't think that running smartdrv
>alongside dblspace would be any more risky than running smartdrv
>alone... unless the data scrambling that results from an unflushed cache
>would be worse with a compressed drive, since the actual drive is merely a
>large compressed file. Of course, if you use windows, not running smartdrv
>(or some disk caching) would greatly hinder performance.. even if you just
>use read caching and no write-behind, you'll have to wait for the computer
>to write most of whatever it needs to write before continuing.

This is what the problem is. The compressed volume is in fact one big file, and
this is what SmartDrive caches. In other words, the virtual FAT information
AND the virtual data are thrown into the same pot and cached together. If the
cache is not written, then FAT information can be corrupted just as easily as
data. If data is corrupted, then one file can be damaged. If the FAT is
damaged, then you can lose EVERYTHING.

The other problem is the nature of data compression. Data compression works by
eliminating redundancy in the data (most intelligence has oodles of redundancy
in it). However, redundancy is one of the traits that makes recovery of
corrupt data easier. If the redundancy is eliminated, then recovering a trashed
file is that much harder.

> I was wondering something.. since dblspace makes the actual
>compressed drive merely a large file on drive "H", would this make virus
>attacks especially destructive, at least with certain forms of virii? If it
>alters just the one file dblspace makes, then all of drive C would be
>scrambled, would it not? I hope we don't see a new virus geared
>specifically for DOS 6.0 and dblspace...
>

I wouldn't worry about it. If a virus gets to the point where it intends to
trash your hard drive, it will trash your hard drive whether it is compressed or
not. It isn't difficult for a virus to wipe out a FAT......

If you are concerned about this, then purchase a virus checker and/or back up
your hard drive frequently. And practice "safe computing". This will be more
effective in virus-fighting than worrying about doublespace.
/sj/

Mitch Gorman

May 27, 1993, 10:02:06 AM
In article <znr738290320k@nlbbs> sinc...@nlbbs.rn.com writes:
>
> I may as well toss in my two cents on this thread. I installed
>double space a few weeks ago. I loved it until last week. I booted up
>one morning only to discover 60 or 70 crosslinked files and all sorts of
>bad sectors on my drive. Until that fateful morning my drive was
>perfect, no bad sectors. When I ran chkdsk it ran for almost three
>hours, I had to reboot to get out. After the three finger kill the
>system would only boot from a floppy. You could say doublespace got my
>attention big time! Until I installed dblspc, my HD never so much as
>hiccupped.
>
> I can't tell you how important a good backup is, if you are going
>to use this program. The extra space doublespace gives you is great,
>until it starts acting up. I liked the space, hated the hassle.
>Doublespace is nothing but a bad memory on my system now, my backup saved
>the day.
>

I am not using Dos6/DblSpace. I'm still working with Dos 5 and
Stacker. After reading all these complaints about DblSpace, and its
possible unhappy marriage with SmartDrv, I feel I should relate my
experiences with Stacker and SmartDrv.

Usually, I do a smartdrv /c before shutting down, but not always.
Occasionally I have to give a three-finger salute. Maybe once out of
every five times I shut down without clearing the cache, I will see
Stacker do some "correction" processing on the succeeding boot. No
cross-linked files. No bad sectors. No lost data. Yes, I am using
write-caching.

So, why does DblSpace have such trouble with this situation? It seems
like not clearing a write-cache before rebooting will always, at least
from what I've seen on this newsgroup, result in [near-]catastrophic
errors. Is that really the case, or is it simply that when DblSpace
has a problem because of that sort of thing, it has a *big* problem
with it, and there is nothing like Stacker's startup correction code
to deal with it?
--
Mitch Gorman mgo...@telesciences.com

"You can blow out a candle, but you can't blow out a fire.
Once the flame begins to catch, the wind will blow it higher."

David Hough

May 27, 1993, 2:24:42 PM
You will almost certainly find that it is the caching of *writes* which
causes the problem. I never knowingly run a write-caching program; I always
switch that bit off. It slows things down a bit, but at least when I do
silly things and crash the machine it ought to have all the data on the
disk and the FAT intact.

Dave

*****************************************************************************
* G4WRW @ GB7WRW.#41.GBR.EU AX25 * You think *you* have problems? *
* da...@llondel.demon.co.uk Internet * What do you do if you *are* *
* g4...@g4wrw.ampr.org Amprnet * a manically depressed robot?? *
*****************************************************************************

Erwin Dondorp

May 28, 1993, 3:21:59 AM

I think Stacker can be used with DOS6.

Apart from the problems with doublespace I have not witnessed any
other problems. So I definitely would not tell anybody to
stay with DOS5, because that is just unnecessary.

If you wait for Stacker 3.1, you can use Stacker's feature to convert
a doublespaced drive into a stacker'ed drive.
But be sure to be very careful with your doublespaced drive,
at least turn off smartdrv's write caching.
--
Erwin Dondorp Xirion bv, Software and Consultancy
er...@xirion.nl Burg. Verderlaan 15X
3454 PE De Meern
the Netherlands
Phone +31 (0)3406-61990
Fax +31 (0)3406-61981

Currently put to work at: PTT-T-NWB-NWO-PCS-SDC (070-3434955)

Een heer van stand is geen nummer! (OBB)

Michael Panayiotakis

May 28, 1993, 2:33:24 AM

OK, people, I just swallowed my pride and decided to read
the...the....the...[drums...] MANUAL. (Background: "OOoohh,
NNoooo!") But at any rate, I decided to read about smartdrive.
And right there, smack-dab in the middle of the last page of the
manual (before the index) of DOS 5, which coincides with the last
page of the SMARTDRV.SYS section, is a warning, quoted here:

[begin quote]
CANNOT RUN DISK-COMPACTION PROGRAM
To avoid losing data, do not run a disk-compaction program while
SMARTDRV.SYS is loaded.
[end quote]

That's straight from DOS's "User's Guide and Reference",
in black and white. I don't suppose anyone actually *read* that
before installing dos 6.0, and from the looks of it, it looks
like Micro$oft didn't, either.

Also, smartdrv.exe (which comes with windows) cautions you to check
that smartdrv has completed all write-caching before turning off
the computer, but fails to mention why, or that the consequences may
be more severe if a disk-compactor is run.

And now for the questions:
Is there a significant difference between smartdrv.sys and
smartdrv.exe??
Also, can any kind soul out there briefly describe why write-
caching will crash your HD, but read-caching won't?

(For those using windows, and smartdrv.exe, who want to know how
to disable write-caching: change

    c:\windows\smartdrv.exe 1024 512

(or whatever you have after smartdrv.exe; the numbers depend on RAM
size) to

    c:\windows\smartdrv.exe c 1024 512

where c is your HD. If your HD were d, you would've written
c:\windows\smartdrv.exe d 1024 512. Write-caching is always
disabled for floppies.) And, just for the record, I'm planning on
getting dos 6.0 (SHIT... I only have a few days 'till the $50
promo ends), so I just changed the caching. It seems windows
runs a bit faster with only read caching, which is kinda weird.
I guess that's only true for small files, huh?
At any rate, good luck to y'all.

BTW, I *still* think that Micro$oft should've put the
smartdrv.sys warning in the DOS 6.0 manual somewhere. Though I
don't have the manual, I believe it's safe to assume that either
it's not there, or that it's low-visibility.

peace,
Mickey

PS what d'yall think of that "press release" posted here?

Mickey

Rudi Maelbrancke

May 28, 1993, 5:43:39 AM
Suppose that I want to use double space on a drive C: as follows:
I tell doublespace to use half of my disk for a compressed drive D:.
This means that C is the host for D.

My question:
How can I tell smartdrv to do rw-caching on c: and r-caching on d:?

If this cannot be done, it means that smartdrv does no more
than cache the accesses to dblspace.00?, and not the logical file
blocks of the accessed compressed files.
Which also means that the smartdrv changes for use with double space
were limited to preventing smartdrv from acting in the same way on a
compressed drive as on an uncompressed drive. I even think that
smartdrv is not aware of the fact that, if it is caching dblspace.00?,
it is caching things for a compressed drive.
Is it that difficult to adapt smartdrv or dblspace so that it keeps
a kind of rollback log which avoids problems when a 500K piece of a
200M file is not flushed to disk? I wonder what will happen if
someone has a hardware-cached drive?

Rudi Maelbrancke

Dave Jackson

May 28, 1993, 6:18:59 AM
In article <1993May28.0...@cs.kuleuven.ac.be>, ru...@cs.kuleuven.ac.be
(Rudi Maelbrancke) writes:
|> Suppose that I want to use double space on a drive C: as follows:
|> I tell doublespace to use half of my disk for a compressed drive D:.
|> This means that C is the host for D.
|>
|> My question:
|> How can I tell smartdrv to do rw-caching on c: and r-caching on d:?
|>

smartdrv c+ d will do the trick


smartdrv /c will flush the write cache - do this before turning off your
system.
(and wait for the drive light to stop flashing too!)
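
Put concretely, an AUTOEXEC.BAT line for the setup asked about might
look like this (a sketch; the DOS path and the cache sizes are just
examples, not recommendations):

    rem read/write-cache host drive C:, read-only-cache compressed
    rem drive D:; 1024K cache under DOS, 512K under Windows
    c:\dos\smartdrv.exe c+ d 1024 512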


Dave.
________________________________________________________________________
David Jackson djac...@axion.bt.co.uk
BT Research Laboratories +44 473 645805
Ipswich, England
________________________________________________________________________

johnson scott andrew

May 28, 1993, 6:23:17 AM
The previous poster asked why write-caching was dangerous, but read-caching
safe.


One of the biggest bottlenecks in today's computers is NOT the processor. It
is memory, and even to a greater extent, IO. Much performance is lost every
time a program has to access the outside world (for example, your hard drive.)

Hard drives are among the faster physical storage devices, but they are still
extremely slow compared to a microprocessor. There are several delays involved
in a hard drive access. The most significant is the seek time--the amount of
time it takes for a hard drive in an idle state to locate a specific record on
the disk. This is on the order of 10-15 milliseconds for most hard drives
(although some are faster and some are slower than this.) However, once a
hard drive is active, further data access is much faster (by at least an order
of magnitude.)

For this reason, disk caching was invented. When a record on a hard drive is
accessed, a disk cache will access its "neighbors" on the disk, and load it
into a memory buffer. This extra access is negligible compared to the initial
seek time. The caching software tries to predict which record on the hard
drive will be accessed next, and load these into the buffer.

The next time the computer accesses the hard drive, it checks first if what it
needs is in the cache, which is in memory. If it is, then we have a "cache
hit", and the record is loaded from memory instead of from the disk. This is
good, as memory is several orders of magnitude faster than any disk drive in
existence. If the record is not in memory, then we have a "cache miss". This
is bad, as then the record must be loaded from the hard disk into the cache.
Every time there is a cache miss, there is a substantial performance penalty.
Much research is done on algorithms and methods to increase the cache hit/miss
ratio (this is why a bigger cache is better. If there is more information in
the cache, there is a greater chance that what YOU want is there, and that
means a greater chance there will be a cache hit.)

What I just described is READ-caching.

In many instances, the user will want to write to the same records he/she is
reading from. When a user attempts to write a record to the hard drive, the
old record it replaces may or may not be in the cache. If it is in the cache,
there are two options. The cache can be modified with the new record, and the
information can be simultaneously written to the disk. This is called "write-
through" caching, or "read-only" caching. The writes, in effect, are NOT
being cached (however, the read cache is updated as needed.) The other option
is to update the cache only and not the disk, and wait until a lot of changes
have accumulated before writing to the disk. This is write-back caching, and
it offers substantial performance improvements on disk writes over write-through
caching.

HOWEVER, write-back caching suffers from a problem. What happens when, for
some reason, the cache does not get written back? In this case, the data which
is on the disk is incorrect. The two main causes of the cache not being
written back are system crashes and users turning the power off. Both of these
are problems in DOS/Windows environments. On workstations and the like (with
robust operating systems), this is much less of a problem, and write-caching is
frequently employed (this is one reason why you DO NOT turn workstations off,
by the way.)

Normally, when a file is incorrectly written, only data is damaged. On an
uncompressed drive, Smartdrive only caches the data stream, not the file
allocation tables. Thus the damage caused by this is limited.

However, Microsoft for some unknown reason decided when they introduced
doublespace to cache the "host drive", or the physical uncompressed drive,
instead of the logical, compressed virtual drive. The result of this is that
the virtual drive's FAT is part of the host drive's data stream! In other
words, your file allocation information is cached. If this information is
not written back, then serious damage can result. As the net knows, quite a
few hard drives have been wiped out by this.


The moral of the story is this: if you use DoubleSpace, do NOT use write
caching!!! DOS is too unreliable an operating system for this combination
to be safe. Read-caching is fine. However, write caching is not.

If you DO use write caching, then do the following three things: Back your
hard drive up fairly often, type SMARTDRV /C before powering down (this writes
the cache back to the drive.), and try not to crash the system.

Hope this helps,

/sj/

Leo Chouinard

May 28, 1993, 1:02:54 PM
In <1u4ouj$9...@zaphod.axion.bt.co.uk> djac...@axion.bt.co.uk writes:

>smartdrv /c will flush the write cache - do this before turning off your
>system.
>(and wait for the drive light to stop flashing too!)

As others have noted, this is a good suggestion. But it seems to me that
another good idea, if you want to use smartdrv with write caching, is to set
up everything you regularly do to run from *.BAT files (this is a good idea
for other reasons, like path control, anyway), and then put "smartdrv /c"
as the last line of each of those batch files. That should buy you a
little extra bit of protection. I may have missed it, but I haven't seen
that mentioned on the net.

You should still, of course, follow the initial suggestion, and also run
the command before shutting down, especially if you use any DOS internal
commands, as most users do.
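
A minimal sketch of such a wrapper (the program name and path are
placeholders only):

    @echo off
    rem WP.BAT - run the application, then flush SMARTDRV's
    rem write-back cache as soon as it exits, so nothing is left
    rem unwritten while the machine sits at the DOS prompt.
    c:\apps\wp.exe %1 %2 %3
    smartdrv /c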
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
These are _MY_ opinions. Clear enough? | Leo C., aka l...@hoss.unl.edu
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
Save the whales _and_ the humans.

sean kerns

May 28, 1993, 1:37:55 PM

There was someone who posted on comp.virus recently who had done some testing of
the effects of a virus on compressed drives, particularly the boot sector.
Apparently, he found that these boot sector viruses tended to infect the actual
absolute boot sector, which with DBLSPACE is actually on drive H:, and not the
mirror boot sector on compressed volume C:. This made the compressed drive
somewhat less susceptible to this type of attack, and since, in his test, few
files were stored on H:, and there were duplicates of those on C:, cleaning up
the virus was fairly easy.

These were his findings, not mine, and I'm paraphrasing, so I take no
responsibility for the accuracy or validity of the above.

Sean

Keith Petersen

May 29, 1993, 1:52:31 AM
joh...@prism.CS.ORST.EDU (johnson scott andrew) writes:
>[...]

>If you DO use write caching, then do the following three things: Back your
>hard drive up fairly often, type SMARTDRV /C before powering down (this writes
>the cache back to the drive.), and try not to crash the system.

---Forwarded message:
Date: Thu, 20 May 93 13:39:12 EDT
From: St...@grc.com (Steve Gibson)
Message-Id: <6601....@TACOM-EMH1.Army.Mil>
To: MSDO...@TACOM-EMH1.Army.Mil (MS-DOS upload announce)
Subject: SMARTPMT.ZIP - Flushes SmartDrive deferred writes on DOS pmt.
Summary: Reposted by Keith Petersen

I have uploaded to WSMR-SIMTEL20.Army.Mil and OAK.Oakland.Edu:

pd1:<msdos.dskutl>
SMARTPMT.ZIP Flushes SmartDrive deferred writes on DOS pmt.

SMARTPMT.ZIP (Smart Prompt) contains a README.TXT text file with
additional descriptive information and the SMARTPMT.COM program.

Smart Prompt is Steve Gibson's tiny (336 resident bytes) TSR which
helps to prevent drive (and especially DoubleSpace) partition
corruption in DOS 6.0 systems. When returning to the DOS system
prompt with hard disk cached data not yet completely written to any
drive, SMARTPMT triggers immediate disk cache writing and briefly
suspends the return of the DOS system prompt until Smart Drive has
finished writing all pending data. SMARTPMT.COM requires only a tiny
bit of RAM space yet it makes the DOS 6.0 prompt completely safe again.

Microsoft's default auto-installation of MS-DOS 6.0 installs their
SmartDrive hard disk cache with deferred writing ... but lacking any
option to suspend the return of the DOS prompt until all cached data
has been written. Many people are currently losing their hard disk
data by innocently turning their systems off __DURING__ the SmartDrive
flushing which occurs AFTER the DOS prompt is returned. SMARTPMT.COM
prevents this kind of data loss.
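
To load the TSR, a one-line addition to AUTOEXEC.BAT is enough. A
minimal sketch, assuming you unzip SMARTPMT.COM into C:\DOS (the path
is just an example) and that SMARTDRV is loaded first:

--begin autoexec.bat fragment--
rem load Smart Prompt after SMARTDRV so it can hold the prompt until
rem all pending cache data has been written
LOADHIGH C:\DOS\SMARTPMT.COM
--end--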

Steve Gibson
St...@grc.com

Michael Panayiotakis

unread,
May 30, 1993, 1:18:25 AM5/30/93
to

Hey all. Lotsa talk around on how useful smartdrv really is.
Well, I decided to conduct my own test, using Windows. I
initially turned smartdrv off. BAD IDEA. Windows crawled.
Slithered. Whatever you want to call it. It was SLOOWW, the
keyword being slow. I guess you get the point. Then I decided
to experiment with smartdrv.
First of all I should note that, though I took smartdrv off for
a while, I did *not* increase BUFFERS, so that's a possible
reason for the slowdown. Also, as noted later, each time I
tested a config I had a different wallpaper as the background.
But this shouldn't matter: where the wallpaper sizes differed,
the *smaller* one was in use when the times were higher, so the
wallpaper can't explain the slower results. The test was
conducted in the following manner: I set the smartdrv settings.
I should note here that I'm using "smartdrv.exe", which is what
came with Windows, and the two numbers are, as explained later,
the initial (DOS) cache size first and the Windows cache size
second. E.g. "smartdrv 1024 512" allocates a 1024 KB cache for
DOS and a 512 KB cache for Windows.

The test:
I change the setting in autoexec.bat. Flush. Restart
(ctrl-alt-del). The startup time is measured (approx.) from the
moment the underscore cursor jumps from the 2nd line of the
screen to the 1st line (when you push ctrl-alt-del, the machine
hangs a while; then the cursor goes up a line and startup really
begins - that is when timing started) until Windows was loaded
(from autoexec) and ready for me to work in. The applications
running at this point were Windows' File Manager (my shell) and
ScreenPeace, my screensaver.

I then started a pop-up menu utility from the File Manager via
"Run". From the pop-up menu I invoked an Ami Pro 3.0 file, 405
kilobytes in size - mostly text, about 200 pages. From the time
I selected the document from the pop-up menu until the document
was loaded and ready for me to work on is the "open file" time.

Then I told Ami Pro to "show powerfields". I have a *lot* of
powerfields in the document: the document is a lyric book, each
title is a bookmark (which is a powerfield), and some titles
have an extra powerfield associated with them. From the time I
selected "show powerfields" until the fields were shown and I
was able to work on the document again is the "show fields"
time. I decided to add that measurement somewhere halfway
through, so it's n/a for some cache settings.

Lastly, I saved the file. From the time I selected "save" until
the file was saved and I was able to work on the document again
is the "save file" time. As you can see, there are some
surprising results there. Note that flushing time was not
included: I figure the only reason to cache a disk is that it
lets you return to work faster, with the actual writing to disk
happening when you're not really working. That's the reasoning
behind this experiment - the actual *work time* saved with
different configurations of smartdrv.
Here's the table; I'll continue afterwards.
-----------------------------------------------------------------------
Test results for my unscientific research on smartdrv.exe (for win)
(RO = read-only caching, WR = write-behind caching; the two numbers
are the DOS and Windows cache sizes in KB)

-----------------+---------+-----------+---------------+------------
smartdrv conf.   | startup | open file | show fields   | save file
-----------------+---------+-----------+---------------+------------
RO 1024 512      | 30 sec  | 17 sec    | 6 sec (note)  | 8.5 sec
RO 1024 1024     | n/a     | 15 sec    | n/a, longer   | 57 sec (!!)
WR 1024 512      | 35 sec  | 14 sec    | n/a           | 12 sec
WR 1024 1024     | 30 sec  | 16 sec    | 17 sec        | 34 sec
-----------------------------------------------------------------------

NOTE: Times are measured from the time the action was selected
(i.e. the time I clicked on "save" or whatever) to the time when
I was able to return to the document. Flushing time wasn't
measured (where applicable). Also, after I was returned to the
document (i.e. the MS Windows mouse pointer turned from an
hourglass back to an arrow and I was able to move in the doc),
the HD still worked for a while, both with RO (read-only cache)
and WR (write-back cache). On the "show fields" entry for
RO 1024 512 (read-only caching, 1024 KB for DOS and 512 KB for
Windows), I waited a while before selecting "show powerfields",
and in that time the HD stopped working. This may account for
the decreased time. These times were taken using Ami Pro 3.0 and
Windows 3.1 in enhanced mode. A possible source of error is that
I used a different background (wallpaper) each time. "longer" in
the table means noticeably longer.
------------------------------

OK, I should note here that I did open Windows without smartdrv.
It took noticeably longer to open, so I did not run any more
tests without it. Here's my configuration:

dos 5.0. (NO compression)
486sx/25
4 MB RAM
5 MB windows permanent swapfile.
130 MB HD.
no double buffering used w/ smartdrv.

I should note here that when I started the experiment I expected
a *huge* slowdown with read-only caching, so this is a surprise
to me. At the end of this post I'm including my autoexec.bat and
config.sys as they are now. In the autoexec.bat I added lines
beginning with '*' to indicate the changes in smartdrv
configuration.

If you have any comments about the experiment, or found something
I did wrong, or anything, please feel *free* to (as a matter of
fact, PLEASE) let me know, so I can re-do it. I'm also planning
to re-do this experiment without a wallpaper in Windows, to get
rid of that inconsistency, when I'm bored.

peace,
Mickey
lou...@seas.gwu.edu

---------------
--begin autoexec.bat--
@echo off
C:\WINDOWS\SMARTDRV.EXE c 1024 512 /Q
* smartdrv.exe c 1024 1024 /Q
* smartdrv.exe c+ 1024 512 /Q
* smartdrv.exe c+ 1024 1024 /Q

LOADHIGH C:\PS1TOOLS\VSTOP -Q
LOADHIGH C:\DOS\MOUSE
LOADHIGH C:\DOS\DOSKEY
PROMPT $l$t$g $P$G
PATH C:\WINDOWS;C:\DOS;C:\MSWORKS;c:\archives;C:\amipro;c:\
SET DIRCMD=/O/P
SET PCPLUS=C:\PCPLUS
SET TEMP=C:\0mickey
rem C:\PS1TOOLS\HWCHECk
rem next few lines backup autoexec & config.sys
[deleted]
c:\WINDOWS\WIN.COM
-------------------------
--begin config.sys----------
DEVICE=C:\WINDOWS\HIMEM.SYS
DEVICE=C:\WINDOWS\EMM386.EXE NOEMS
DOS=HIGH, UMB
FILES=30
BUFFERS=10
SHELL=C:\DOS\COMMAND.COM C:\DOS /P /E:512
rem STACKS=9,256
stacks=0,0
DEVICEHIGH=C:\DOS\SETVER.EXE
--------end---------------

Anibal Jodorcovsky

unread,
May 26, 1993, 12:09:20 PM5/26/93
to
In article <C7Lxn...@cbfsb.cb.att.com> rnic...@cbnewsg.cb.att.com (robert.k.nichols) writes:
[...]

>BTW, how does a Stacker/SMARTDRV combination work? Do you have the
>option of caching the Stacker-driven file system? For that matter, what
>about a DoubleSpace/Ncache combination? Hmmm, that one I _could_ check
>out for myself, if I ever get the urge to re-install DoubleSpace.
>
>--
>Bob Nichols
>AT&T Bell Laboratories
>rnic...@ihlpm.ih.att.com

Well, I have been using Stacker 3.0 with Smartdrive for a few months now,
and have had no problems at all!!!
I stacked my drive when using MS-DOS 5.0 and then upgraded to MS-DOS 6.0.
I am using the latest version of Smartdrive, the one that came with
MS-DOS 6.0, and still have had no problems! I have had a lot of crashes
though, within Windows and within DOS (games, applications, etc.), and
still no problems.
I even installed Stacker on a laptop (40 meg HD) and compressed EVERYTHING!
The whole drive, boot drive included (I know it is dangerous, but I needed
the space), and still had no problems.

Just my two cents on this.

-Anibal

Jen Kilmer

unread,
May 29, 1993, 11:27:56 PM5/29/93
to
In article <1139.2...@catpe.alt.za> Deon.S...@f7.n7104.z5.fidonet.org (Deon Strydom) writes:
>
>Hi Stephen,
>I use DoubleDisk (which was changed to DoubleSpace by MS, I believe)
>and the ONLY problem I ever had was with SmartDrive 4.0 & write
>caching. In fact, my DoubleDisk package came with a note, warning
>NOT to use write caching (write-behind, whatever).

Perhaps DoubleDisk isn't DoubleSpace, then, eh?

BTW, the difference between Smartdrv 4.1 and Smartdrv 4.0 is that
4.1 works with DoubleSpace.

-jen

--
je...@microsoft.com In a world so hard & dirty, so fouled & confused
#include <stdisclaimer> Searching for a little bit of God's mercy
msdos testing I found living proof. - Bruce Springsteen

Jen Kilmer

unread,
Jun 1, 1993, 4:20:42 PM6/1/93
to
In article <1993May28.0...@seas.gwu.edu> lou...@seas.gwu.edu (Michael Panayiotakis) writes:
>
>[MS-DOS 5 manual Smartdrv.SYS entry]

>
>[begin quote]
>CANNOT RUN DISK-COMPACTION PROGRAM
>To avoid losing data, do not run a disk-compaction program while
>SMARTDRV.SYS is loaded.
>[end quote]
>
>That's straight from DOS's "User's Guide and reference",

...and has nothing to do with compression. It refers to defragmenting
utilities like the msdos 6 defrag, Norton Utilities' SpeedDisk, and
PC Tools Compress. It was put in the manual at the request of other
companies - they'd encountered some problems with disk caches and
didn't want people running any disk cache but theirs while compacting
the disk. Note that they hadn't run into problems with Smartdrv.sys;
they just wanted to cover all the bases.

>I don't suppose anyone actually *read* that
>before installing dos 6.0, and from the looks of it, it looks
>like Micro$oft didn't, either.

Installing msdos 6 doesn't require running a disk defragmenter.
Installing dblspace does, and for that reason we took the time
to retest dblspace compatibility with smartdrv.EXE and (to a lesser
extent) smartdrv.SYS. (Smartdrv had already been heavily tested
with SpeedDisk, and also with Compress.)

>Also, smartdrv.exe (which comes with windows) cautions to check
>that smartdrv has completed all write-caching before turning off
>the computer, but fails to mention why,

...umm...so that all data may finish being written to the disk?

>and that consequences may
>be more severe if a disk-compactor is run.

That one I missed.

>And now for the questions:
>Is there a significant difference between smartdrv.sys and
>smartdrv.exe??

One does an int 13h (BIOS)-based read cache; the other does an
msdos block-device read/write cache.

>Also, can any kind soul out there briefly describe why write-
>caching will crash your HD, but read-caching won't?

Read caching can, if it's done improperly (it returns wrong info).
Write caching can, if the system is shut down before all recently
written data has reached the actual drive.

>BTW, I *still* think that Micro$oft should've put the
>smartdrv.sys warning on the DOS 6.0 manual somewhere.

Actually, the warning was dropped from the msdos 6 manual as a doc
bug. The warning had been added at the suggestion of another software
vendor - and when we asked, that vendor couldn't point to a case where
smartdrv (.SYS or .EXE) had caused a problem with their disk compactor.

>Though I
>don't have the manual, i believe it's safe to assume that either
>it's not there, or that it's low-visibility.

And the 5.0 one was high-visibility?

bawa...@corpsb.remnet.rockwell.com

unread,
Jun 1, 1993, 8:25:52 PM6/1/93
to

In article <1993Jun01....@microsoft.com> je...@microsoft.com (Jen Kilmer) writes:
>In article <1993May28.0...@seas.gwu.edu> lou...@seas.gwu.edu (Michael Panayiotakis) writes:
>[snip - article quoted in full]
>
>>Also, smartdrv.exe (which comes with windows) cautions to check
>>that smartdrv has completed all write-caching before turning off
>>the computer, but fails to mention why,
>
>...umm...so that all data may finish being written to the disk?


The real problem with SMARTDRV (IMHO) is that it DEFAULTS to a write
cache when installed. This has been true since the SMARTDRV.EXE that
shipped with Windows 3.1. Not until the INFOWORLD testing of DBLSPACE
did this become a widespread issue. Nobody should be running a write-
back cache UNLESS they are fully aware they are doing it AND are fully
aware of the ramifications of doing so. The ONLY way to ensure this is
to force users to enable the write cache themselves and NOT to make it
a default. Microsoft cannot absolve itself of responsibility for this
item unless this default is somehow changed in a future release.
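
Until then, you can override the default yourself. With the DOS 6 /
Windows 3.1 SMARTDRV.EXE, naming a drive as a bare letter enables read
caching only, while a trailing '+' turns on write-behind as well (that
is the 'c' vs 'c+' difference in the autoexec.bat lines posted earlier
in this thread). A minimal read-cache-only sketch - the path and cache
sizes are examples only:

--begin autoexec.bat fragment--
rem read caching only on drive C: - no deferred writes to lose in a crash
C:\WINDOWS\SMARTDRV.EXE C 1024 512
--end--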

... Brent

CSHL

unread,
Jun 1, 1993, 10:15:55 PM6/1/93
to
Was it really necessary to quote the entire article?!? What you said didn't
really even apply to what came (in the 3 screens) above it.

--
Speaketh our master, the Gatesian one himself: All pigs art mammals,
yet all mammals art not pigs!


D461-David_F_Haertig(Dave)83040

unread,
Jun 1, 1993, 11:01:50 AM6/1/93
to
From article <1993May28.0...@seas.gwu.edu>, by lou...@seas.gwu.edu (Michael Panayiotakis):

> [begin quote]
> CANNOT RUN DISK-COMPACTION PROGRAM
> To avoid losing data, do not run a disk-compaction program while
> SMARTDRV.SYS is loaded.
> [end quote]

I think they're talking about disk *defragmenter* programs here,
not disk *compression* programs. Defragmenter program examples
are Norton's SpeedDisk, Stacker's Sdefrag, etc. Compression
programs are Stacker, DoubleSpace, etc.

Dave Haertig
hae...@att.com

Stannon Yen

unread,
Jun 1, 1993, 8:04:05 AM6/1/93
to
lou...@seas.gwu.edu (Michael Panayiotakis) writes:

>First of all I should note that, though I took smartdrv off for
>a while, I did *not* increase BUFFERS, so that's a possible
>reason for the slowdown. Also, as noted later, each time I
>tested a config I had a different wallpaper as the background.
>But this shouldn't matter: where the wallpaper sizes differed,
>the *smaller* one was in use when the times were higher, so the
>wallpaper can't explain the slower results. The test was

Although you honestly tell us you used different wallpaper in
Windows during the test, I have to say that your test is unfair. First
of all, you did not eliminate all side effects - like the wallpaper. I
can't understand why you used different wallpapers in your test.

Moreover, I've noticed that you didn't write a script to run the
experiment, which means that during the Windows session you might not
notice the swap file's effect. And since you only ran each test once,
you don't have any standard deviation or confidence interval to
justify your conclusions.

Regards,
CT Yen.
--
Microsoft(R) MS-DOS(R) Version 6
(C) Copyright Microsoft Corp 1981-1993
C:\>yen...@cs.cuhk.hk <-- a yr.2 CS student of The Chinese University
Bad command or file name
