
A little research about powerful Windows mainframes by Gartner and Unisys... not bad!


Alessandro Federici

Jun 14, 2001, 9:59:31 PM
AJ and Paul asked me to find something even comparable to that system he
described....
So I did a little bit of research and found something interesting (and much
cheaper than other *nix systems).
Gartner helped a hell of a lot. I didn't even know about these systems!!!

AJ also asked to see how a cluster of anything running Windows 2000 can come
close to that *nix hottie. Well, I guess that request has been greatly
fulfilled. The only thing I cannot find anymore is a network
bandwidth-record article I read almost a year or so ago, but that is the last
of my problems.

http://www.gartner.com/webletter/microsoft/article4/article4.html (see the
price list at the end and read it all).

This was my first take so far:

Unisys ES7000
Cellular Multiprocessor Architecture (CMP)
http://www.unisys.com/hw/servers/es7000/technology.asp
up to 32 processors
up to 64 GB RAM
*internal* disks up to 144 GB; max disk capacity supported: 11 TB
"Servers within a server" partitioning
Independent service processors
Two-way clustering

Snippets...

"...because of the awesome engineering talent at Unisys, they can take 32/32
bit processors and give about the same power as the E10000 from Sun with
64/64 bit processors. And of course Unisys is doing that at a fraction of
the price. And that from our perspective is really the opportunity. It's
been the dream that Intel and Microsoft have had for many years in terms of
the high-end scalability, reliability, availability."
Redundant configurations, N+1 power and cooling, multiple power domains,
resilient I/O configurations, clustering, and partitioning all support high
availability. Extensive live insertion capabilities let you replace a failed
component while the rest of the system continues running. Hot swappable
sub-pods support the continued availability of other sub-pods. Memory error
checking and correction (ECC) and full data path checking contribute to high
availability. ES7000 systems can be easily configured to achieve 99.9
percent availability.
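Those availability percentages translate directly into a downtime budget. A quick back-of-the-envelope sketch (editor's illustration, not from the Gartner piece):

```python
# Editor's sketch: what an availability figure means as allowed
# downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(availability):
    """Hours of downtime per year permitted at a given availability."""
    return HOURS_PER_YEAR * (1.0 - availability)

print(downtime_hours(0.999))    # "three nines": about 8.76 hours/year
print(downtime_hours(0.99999))  # "five nines": about 5 minutes/year
```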

Or read how this system outperformed the Sun 64x64 E10000 at
http://www.unisys.com/hw/servers/es7000/benchmarks.asp

ES7000 Server Searches 1.4 Billion Records in 1/10th of a Second "In a speed
and scalability test that pushed the boundaries of business intelligence, a
32-processor ES7000 established a new benchmark in data analysis. This new
speed and scalability measure shows that the size of the very largest databases
is not a barrier to quickly uncovering global trends and patterns for making
business decisions. The test results showed the ability of the Unisys ES7000
and Microsoft Windows 2000 Datacenter Server combination to continue setting
performance records, giving you an unmatched platform for your most
demanding e-business applications."

Part two has a lot to do with SQL Server
(http://www.microsoft.com/sql/evaluation/overview/2000/FastFacts.doc)
from which we find very interesting names and performance figures (NASDAQ.com,
OfficeDepot, Bigfoot.com, Starbucks.com, uBid.com, Monster.com, Expedia.com,
Dell, RadioShack, Barnes & Noble, AskJeeves, Ticketmaster (over 200,000
users at peak time); Intellisys uses SQL Server to support First Union Bank,
Chase Manhattan Bank, Ford, Texas Instruments, Hasbro)

The list could go on...
For now I am quite happy. BTW don't forget to put that baby above in a 4-node
cluster to get better performance and more failover/uptime ;-)

Obviously Linux/Unix should be able to get on the same box as well.


pnichols

Jun 14, 2001, 10:20:05 PM

"Alessandro Federici" <al...@bigfoot.com> wrote in message
news:3b296c7c$2_2@dnews...

> AJ and Paul asked me to find something even comparable to that system he
> described....
> So I did a little bit of research and found something interesting (and much
> cheaper than other *nix systems).
> Gartnet helped a hell lot. I didn't even know about this system(s)!!!
>
> AJ asked also to see how a cluster of anything running Windows 2000 can come
> close to that *nix hottie. Well, I guess that request has been greately
> fulfilled. The only thing I cannot find anymore is a network
> bandwidth-record article I read amost a year or so ago but that is the last
> of my problems.
>
32 processors, impressive... I said nothing about Unix in particular, but
re-read my post!!

IBM G6 Mainframe

Up to 64 gigabytes of RAM (got that covered)
640 Processors (you are 608 Processors in the negative column)
64 Bit Linux Support (possible, you didn't say)
2 CMOS Cryptographic CoProcessors (you didn't say but I doubt it)
16 PCI Crypto CoProcessors (again, I doubt it)
Parallel Sysplex support (I doubt it)
Support for 64,000 devices (VERY SERIOUSLY DOUBT IT)
256 I/O paths (LOL, again big doubt)
24 GB/sec I/O bandwidth (doubt it)
3 GB/sec networking bandwidth (doubtful, but possible)
Can support up to 32 different systems (OS/virtual machines): with only 32
processors, highly doubtful.
Can run Linux, OS/390, and virtually all Unix systems: possible.

If you are going to make the contention that a Windows cluster is more
powerful, reliable, and efficient than an MF solution, the solution you
posted will not do it...

How about a real-world working model? That's all I would ask...



Alessandro Federici

Jun 14, 2001, 10:44:28 PM
"pnichols" <paul@computer-logic> wrote in message news:3b297123$2_2@dnews...

> 32 processors impressive.. I said nothing about Unix in particular, but
> re-read my post!!

Actually it would be 128 if you made the calculations I recommended at the end
of the post, and obviously you didn't read the articles, because you would
have found more interesting things to say...
So, in order to make it easier for you:

> Up to 64 gigabytes of RAM (got that covered)

It's actually 256 GB in the cluster

> 640 Processors (you are 608 Processors in the negative column)

128 Processors

> 64 Bit Linux Support (possible, you didn't say)

Don't really know. I am after Windows, and 64-bit Windows development for
Itanium will be doable thanks to Borland, IBM and Microsoft.

> 2 CMOS Crytographic CoProcessors (you didn't say but I doubt it)

I don't think it has them by default but you can add those later with the
money you save <G>

> 16 PCI Crypto CoProcessors (Again I doubt it).

Same as above.

> Parallel SysPlex support (I doubt it)

This I have no idea...

> Support for 64,000 devices (VERY SERIOSULY DOUBT IT)

Very probably

> 24 GB/sec I/O bandwidth -- (doubt it)
> 3 GB/sec networking bandwidth :Doubful, but possible

Very doable actually, and already achieved/passed a year or so ago if I
remember correctly. I knew you'd come back at me for this ;-)

> Can support up to 32 different systems (OS/Virtual Machines) _ with only 32
> processors, highly doubtful.

Actually there are some extra nice features in the brochure...

> Can run Linux, OS/390, and vitually all Unix Systems.:Possible.

Sure

> If you are going to make the contention that a Windows cluster is more
> powerful, reliable. and efficient than a MF solution, the solution you
> posted will not do it...

Oh boy, oh boy... you are the only one in this thread, Paul, that doesn't get
it.
Windows has nothing to do with what I was talking about in the first place.
As I repeatedly said, the same can be said for clusters/replicated Linux
boxes. The best solution ever is not that box that you displayed. It's a set
of those, clustered and replicated. Accept it! It's not rocket science...
Then, on top of this, you said "Show me your MS Windows Server farm that can
do anywhere near what this baby can do!" which I respectfully matched
(=anywhere near).

Have a good night Paul!


Richard Bayarri Bartual

Jun 15, 2001, 9:20:46 AM

Alessandro Federici wrote:
>
<SNIP>

> Or read how this system outperformed Sun 64x64 E1000 at
> http://www.unisys.com/hw/servers/es7000/benchmarks.asp
>

Actually, it outperformed the Sun running SAP, which proves little,
as we do not know how effectively SAP uses Sun's 64-bit architecture.
Note also that the performance difference was tiny - well under 1%,
and even minute differences in the way the two systems were configured
could easily account for that. What is however very impressive is the
price/performance ratio, which is definitely much better than a Sun box,
although it should be noted that Paul and AJ were talking about mainframes,
and Sun do not make mainframes (they make servers, which are actually
rather different).

Richard Bayarri Bartual

Jun 15, 2001, 9:23:31 AM

Alessandro Federici wrote:
>
> > 64 Bit Linux Support (possible, you didn't say)
> Don't really know. I am after Windows and Windows 64 bit+development for it
> will be doable on Itanium thanks to Borland, IBM and Microsoft.
>

The Unisys system you cited will not run any 64-bit OS because they
clearly state that it uses 32-bit processors. This makes it less than
ideal for many large enterprise tasks due to file sizes being limited
to ~2GB.

Mike Swaim

Jun 15, 2001, 9:49:40 AM

"Alessandro Federici" <al...@bigfoot.com> wrote in message
news:3b297705$2_1@dnews...

> "pnichols" <paul@computer-logic> wrote in message news:3b297123$2_2@dnews...
>
> Actually would be 128 if you make the calculations I recommended at the end
> of the post and obviously you didn't read the articles because you would
> have found more interesting things to say...
> So, in order to make it easier for you:
>
> > Up to 64 gigabytes of RAM (got that covered)
> It's actually 256Gb in the cluster
>
> > 640 Processors (you are 608 Processors in the negative column)
> 128 Processors

The sales brochure only mentions 2 way clustering. (And the default is to
cluster internal partitions.)


> > Support for 64,000 devices (VERY SERIOSULY DOUBT IT)
> Very probably

It supports up to 96 PCI devices.

> > 24 GB/sec I/O bandwidth -- (doubt it)
> > 3 GB/sec networking bandwidth :Doubful, but possible
>
> Very duable actually and already achieved/passed a year ago if I remember
> correcltly. I know you'd be back at me for this ;-)

It's listed as 20GB/sec, sustained.

--
Mike Swaim
Michae...@Enron.com

Alessandro Federici

Jun 15, 2001, 9:52:25 AM
"Mike Swaim" <Michae...@enron.com> wrote in message
news:3b2a1272$1_2@dnews...

> The sales brochure only mentions 2 way clustering. (And the default is to
> cluster internal partitions.)

That is hardware, not software.


Alessandro Federici

Jun 15, 2001, 10:47:44 AM
"Richard Bayarri Bartual" <r...@visual-limes.com> wrote in message
news:3B2A0BAE...@visual-limes.com...

> Sun do not make mainframes (they make servers, which are actually rather
different).

This is fine with me, but for the objective of the discussion (a cluster
offers better performance than one box) I could also take the same mainframe
he reported, remove half of the processors, add 2/3 similar boxes, and
outperform the one box with double the processors...
In any system there are bottlenecks that degrade performance after a certain
load. It's easier to hit those limits in one box than in 2/3 separate
ones... This applies to small PCs and to mainframes as well. Two dual-Pentium
boxes can easily do a better job than one quad-Pentium system.
This is all.


Alessandro Federici

Jun 15, 2001, 10:59:43 AM
"Richard Bayarri Bartual" <r...@visual-limes.com> wrote in message
news:3B2A0C53...@visual-limes.com...
>
>
> Alessandro Federici wrote:

> The Unisys system you cited will not run any 64-bit OS because they
> clearly state that it uses 32-bit processors. This makes it less than
> ideal for many large enterprise tasks due to file sizes being limited
> to ~2GB.

Where did you get the idea that because the system uses 32-bit processors it
cannot have a file bigger than 2GB???
By that logic, DOS should not have had files bigger than 64Kb...
See the API GetFileSizeEx


Richard Bayarri Bartual

Jun 16, 2001, 6:44:31 AM

Alessandro Federici wrote:
>
> In any system there are some bottlenecks that always degrade performances
> after a certain load.

Agreed.

> It's easier to get to those limits in one box rather than 2/3 separate
> ones...

I think this actually depends on the nature of each type of box. Mainframes
tend to be designed for sustained high data throughput, whereas PC-type
boxes aren't (I'm not talking about the Intel CPUs here, but rather the
PC architecture, and PCI in particular) - that's why specialist Intel-based
machines like the rather impressive one you found have custom architectures
that in many ways resemble those used in mainframes. Thus, I agree with
what you are saying with the caveat that the two or three boxes must
be capable of a similar sustained data throughput to the single box.

> This applies to small pcs and to mainframes as well. 2 dual pentiums can
> easily do a better job than a 4 pentium system.

This actually depends on how those Pentiums are used. For example, Windows
NT tends to work them in series, so each subsequent processor only realises
1/2 of the performance of the one prior to it, resulting in four processors
achieving around double the performance of a single one (Nyquist's memory
bandwidth rule). BeOS, Solaris and Linux (and maybe Win2K DataCenter - I'm not
sure about this) on the other hand use a symmetrical multiprocessor approach
which divides the workload equally among all processors present on the system,
thereby resulting in (for example) an eight-way arrangement being able to
realise eight times the work of a single-processor system (on tasks which benefit
from parallelism). A symmetrical multiprocessing system of this sort will
generally out-perform a cluster of single-CPU machines because internal CPU
buses tend to have much higher bandwidth than external ones, and this is particularly
true in PCs, where the PCI bus is _significantly_ slower than the internal CPU
bus (hence the need for AGP to let graphics cards guzzle data at high rates).
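Neither strict series operation nor perfectly linear SMP scaling matches most real workloads; the usual middle-ground model is Amdahl's law. A rough sketch (editor's illustration; the 95% parallel fraction is an arbitrary assumption, not a measured figure):

```python
def amdahl_speedup(n, parallel_fraction):
    """Amdahl's law: best-case speedup on n processors when only
    parallel_fraction of the workload can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n)

# With 95% of the work parallelizable (an assumed figure),
# 8 CPUs buy roughly a 5.9x speedup, not 8x:
print(round(amdahl_speedup(8, 0.95), 2))  # 5.93
```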

Richard Bayarri Bartual

Jun 16, 2001, 7:38:18 AM

Alessandro Federici wrote:
>
> "Richard Bayarri Bartual" <r...@visual-limes.com> wrote in message
> news:3B2A0C53...@visual-limes.com...
> >
> >
> > Alessandro Federici wrote:
>
> > The Unisys system you cited will not run any 64-bit OS because they
> > clearly state that it uses 32-bit processors. This makes it less than
> > ideal for many large enterprise tasks due to file sizes being limited
> > to ~2GB.
>
> Where did you get the idea that since the system uses 32 bit processors
> cannot have a file bigger than 2GB???

Two to the power of 32 (actually 4GB - whether this actually equates
to 4GB depends on the use of signed or unsigned integers - all versions
of 32 bit Windows use unsigned integers).
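The "~2GB" and "4GB" figures both fall out of 32-bit arithmetic, depending on signedness; a quick sketch of the numbers (editor's illustration):

```python
# Editor's sketch: where the "~2GB" and "4GB" figures come from.
SIGNED_MAX = 2**31 - 1    # largest positive signed 32-bit value
UNSIGNED_MAX = 2**32 - 1  # largest unsigned 32-bit value
GIB = 2**30

print(SIGNED_MAX)    # 2147483647, i.e. just under 2 GiB
print(UNSIGNED_MAX)  # 4294967295, i.e. just under 4 GiB
print(round(UNSIGNED_MAX / GIB))  # 4
```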

> According to this logic the DOS should have not had files bigger than
> 64Kb...

DOS used variable sized clusters to allow large files - the net result
of this was systems where a text file with "hello world" in it could
occupy up to 256KB using VFAT, i.e. vast amounts of wasted disk space.

> See the API GetFileSizeEx

NTFS volumes allow file sizes up to 64 bits, but they do so using
clusters again (as with FAT, the size of said clusters depends on
the size of the formatted volume), and so waste disk space - if
you force the system to use a cluster size of 1 byte (which is
possible), then you cannot have a file that is larger than 4GB.
This limit is also apparent in FAT32 and UNIX's HPFS, both of
which always use 1 byte clusters.

Note that this clustering system is not used in Microsoft's experimental
NTFS-64 (to be used in 64-bit Windows implementations), which allows
truly vast files to be addressed without the use of space-wasting
work-arounds like disk clusters.

Barry Kelly

Jun 16, 2001, 8:14:24 AM
In article <3B2B452A...@visual-limes.com>

Richard Bayarri Bartual <r...@visual-limes.com> wrote:

> Alessandro Federici wrote:


> > "Richard Bayarri Bartual" <r...@visual-limes.com> wrote:
> > > The Unisys system you cited will not run any 64-bit OS because they
> > > clearly state that it uses 32-bit processors. This makes it less than
> > > ideal for many large enterprise tasks due to file sizes being limited
> > > to ~2GB.
> >
> > Where did you get the idea that since the system uses 32 bit processors
> > cannot have a file bigger than 2GB???
>
> Two to the power of 32 (actually 4GB - whether this actually equates
> to 4GB depends on the use of signed or unsigned integers - all versions
> of 32 bit Windows use unsigned integers).

You should look up MapViewOfFile and SetFilePointer; both accept an
optional upper 32-bit word for 64-bit positioning.
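The low/high-word trick those APIs use is just splitting a 64-bit offset into two 32-bit words; a sketch of the arithmetic (editor's illustration in Python, not the actual Win32 signatures):

```python
MASK32 = 0xFFFFFFFF

def split_offset(offset):
    """Split a 64-bit file offset into (low, high) 32-bit words,
    the shape Win32 APIs such as SetFilePointer work with."""
    return offset & MASK32, (offset >> 32) & MASK32

def join_offset(low, high):
    """Rebuild the 64-bit offset from its two 32-bit words."""
    return (high << 32) | low

# A position 5 GiB into a file is unreachable with one 32-bit word:
low, high = split_offset(5 * 2**30)
print(low, high)  # 1073741824 1
assert join_offset(low, high) == 5 * 2**30
```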

> > According to this logic the DOS should have not had files bigger than
> > 64Kb...
>
> DOS used variable sized clusters to allow large files - the net result
> of this was systems where a text file with "hello world" in it could
> occupy up to 256KB using VFAT, i.e. vast amounts of wasted disk space.

I suggest you learn a little about FAT12, FAT16 and FAT32 before
posting this stuff. VFAT is a Win95 concept, the virtualized file
allocation table, allowing LFNs etc.

Cluster size in FAT16 varies from 2KB to 32KB. No more than 64K
clusters allowed.

Cluster size in FAT32 varies from 512 bytes to 32KB. No more than 2^32
clusters allowed.
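Those cluster limits bound both the maximum volume size and the slack wasted by tiny files; a quick check of the arithmetic (editor's illustration; note that on disk FAT32 actually reserves the top four bits of each entry, leaving 28-bit cluster numbers):

```python
KIB = 1024

def max_volume_bytes(max_clusters, cluster_bytes):
    """Upper bound on volume size from cluster-count and cluster-size limits."""
    return max_clusters * cluster_bytes

# FAT16: at most 2**16 clusters of at most 32 KiB each -> 2 GiB volumes.
fat16_max = max_volume_bytes(2**16, 32 * KIB)
print(fat16_max // 2**30)  # 2 (GiB)

# Slack: a 12-byte file still occupies one whole cluster on disk.
wasted = 32 * KIB - 12
print(wasted)  # 32756 bytes of slack per tiny file at 32 KiB clusters
```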

> > See the API GetFileSizeEx
>
> NTFS volumes allow file sizes up to 64 bits, but they do so using
> clusters again (as with FAT, the size of said clusters depends on
> the size of the formatted volume), and so waste disk space - if
> you force the system to use a cluster size of 1 byte (which is
> possible), then you cannot have a file that is larger than 4GB.

The smallest cluster you can have in practice is a one-to-one mapping
between clusters and sectors. Sectors are 512 bytes in size.

> This limit is also apparent in FAT32 and UNIX's HPFS, both of
> which always use 1 byte clusters.

I presume you mean OS/2's HPFS. And no, they don't.

> Note that this clustering system is not used in Microsoft's experimental
> NTFS-64 (to be used in 64-bit Windows implementations), which allows
> truly vast files to be addressed without the use of space-wasting
> work-arounds like disk clusters.

Sectors are 512 bytes in size. Sectors are used at the device level.
If you want smaller sectors, then you need to low-level format the
device.

If you want more clusters than sectors, it would seriously hamper
performance due to the need to read and write a whole sector to update a
single cluster, rather than simply writing the necessary sector(s).

-- Barry

--
One must sometimes choose between expressiveness, safety, and
performance. But a scarcity of one isn't always excused by an
abundance of another. - Thant Tessman
Team JEDI: http://www.delphi-jedi.org
NNQ - Quoting Style in Newsgroup Postings
http://web.infoave.net/~dcalhoun/nnq/nquote.html

Alessandro Federici

Jun 16, 2001, 3:11:57 PM
"Richard Bayarri Bartual" <r...@visual-limes.com> wrote in message
news:3B2B452A...@visual-limes.com...

> Alessandro Federici wrote:
> >
> > "Richard Bayarri Bartual" <r...@visual-limes.com> wrote in message
> > news:3B2A0C53...@visual-limes.com...

> Two to the power of 32 (actually 4GB - whether this actually equates


> to 4GB depends on the use of signed or unsigned integers - all versions
> of 32 bit Windows use unsigned integers).

Richard... c'mon! You can have files bigger than that. Sure, it is like
having segments, but that doesn't mean you cannot have them...
I am not aware of the exact file size limit, but it is definitely bigger
than a 32-bit integer.
So, once again, your original statement is just wrong (="file sizes being
limited to ~2GB")


Alessandro Federici

Jun 16, 2001, 3:15:03 PM
"Richard Bayarri Bartual" <r...@visual-limes.com> wrote in message
news:3B2B388F...@visual-limes.com...
> Alessandro Federici wrote:

> Thus, I agree with
> what you are saying with the caveat that the two or three boxes must
> be capable of a similar sustained data throughput to the single box.

Thanks! That was it...

> This actually depends how those Pentiums are used.

Correct, but this doesn't mean that they can never do it, and it does not
guarantee a single machine will always beat the other 2/3. That was the
other part of "That was it" <G>

Regards


Richard Bayarri Bartual

Jun 17, 2001, 4:59:38 AM

Barry Kelly wrote:
>
> > > According to this logic the DOS should have not had files bigger than
> > > 64Kb...
> >
> > DOS used variable sized clusters to allow large files - the net result
> > of this was systems where a text file with "hello world" in it could
> > occupy up to 256KB using VFAT, i.e. vast amounts of wasted disk space.
>
> I suggest you learn a little about FAT12, FAT16 and FAT32 before
> posting this stuff.

I am very familiar with them, thank you very much.

> VFAT is a Win95 concept, the virtualized file
> allocation table, allowing LFNs etc.

And Win95 is what? It is a DOS application that boots DOS before
Windows, and reads the DOS config.sys and autoexec.bat files. Thus,
VFAT is correctly described as a DOS technology that can be accessed
from DOS without having Windows loaded - the same can in fact be
said about FAT32, which DOS tools such as FDISK can create partitions
for, and the DOS command-line can access. This is not true of NTFS,
which is a pure Windows technology that DOS cannot use, as anybody
who has had the misfortune to try and recover an NT installation
that has become non-bootable well knows.

>
> > > See the API GetFileSizeEx
> >
> > NTFS volumes allow file sizes up to 64 bits, but they do so using
> > clusters again (as with FAT, the size of said clusters depends on
> > the size of the formatted volume), and so waste disk space - if
> > you force the system to use a cluster size of 1 byte (which is
> > possible), then you cannot have a file that is larger than 4GB.
>
> The smallest cluster you can have in practice is a one-to-one mapping
> between clusters and sectors. Sectors are 512 bytes in size.
>

Sectors are _by default_ 512 bytes in size _on PCs_. And no, the
smallest cluster you can have is _not_ a single sector!

> > Note that this clustering system is not used in Microsoft's experimental
> > NTFS-64 (to be used in 64-bit Windows implementations), which allows
> > truly vast files to be addressed without the use of space-wasting
> > work-arounds like disk clusters.
>
> Sectors are 512 bytes in size.

1) Irrelevant, because you can have several clusters per sector,
2) Again, sectors are _by default_ 512 bytes, and then only on
PCs.

> Sectors are used at the device level.

But not at the hardware level - they are a formatting convention,
and their size can therefore vary considerably.

> If you want smaller sectors, then you need to low-level format the
> device.
>

Indeed - however, sector size does not dictate cluster size.



> If you want more clusters than sectors, it would seriously hamper
> performance due to the need to read and write a sector to write a
> single cluster, rather than simply write necessary sector(s).
>

This is only the case with files or portions of files that are
smaller than one sector - all other operations will involve
sufficient clusters that they can be resolved in terms of sector
reads and writes (i.e. they can be buffered at the driver level,
as indeed many device operations already are).

Richard Bayarri Bartual

Jun 17, 2001, 6:22:14 AM

Alessandro Federici wrote:
>
> "Richard Bayarri Bartual" <r...@visual-limes.com> wrote in message
> news:3B2B452A...@visual-limes.com...
> > Alessandro Federici wrote:
> > >
> > > "Richard Bayarri Bartual" <r...@visual-limes.com> wrote in message
> > > news:3B2A0C53...@visual-limes.com...
>
> > Two to the power of 32 (actually 4GB - whether this actually equates
> > to 4GB depends on the use of signed or unsigned integers - all versions
> > of 32 bit Windows use unsigned integers).
>
> Richard... c'mon! You can have files bigger than that.

You can on NTFS, but not on FAT32. Of course, once WinXP replaces
the 9X derivatives, FAT32 will become a legacy system, and therefore
of no more real relevance than FAT16.

> Sure, is like having segments but that doesn't mean you cannot have them...


You can indeed on NTFS - I was totally wrong about this.

> I am not aware of the file size limit but is definitely something bigger
> than a 32bit integer.

I've done a bit of research, and MS say that NTFS uses 64-bit file
addressing, so the maximum size should be several exabytes (i.e.
_very_ big!). Obviously, the practical limit will depend on drivers,
BIOSes and the like - some BIOSes for example cannot address drives
bigger than 8GB. These are not however limitations imposed by the
OS.



> So, once again, your original statement is just wrong (="to file sizes being
> limited to ~2GB")

It is indeed wrong with regards to NTFS. I stand corrected.

Richard Bayarri Bartual

Jun 17, 2001, 6:24:30 AM

Alessandro Federici wrote:
>
Stuff that I totally agree with.

Alessandro Federici

Jun 17, 2001, 1:22:23 PM
"Richard Bayarri Bartual" <r...@visual-limes.com> wrote in message
news:3B2C855E...@visual-limes.com...

>
> Alessandro Federici wrote:
> >
> Stuff that I totally agree with.

AH! Allright! Champagne on the table pleaze ;-)
Thx Richard


Alessandro Federici

Jun 17, 2001, 1:21:35 PM
"Richard Bayarri Bartual" <r...@visual-limes.com> wrote in message
news:3B2C84D6...@visual-limes.com...

> It is indeed wrong with regards to NTFS. I stand corrected.

Very good.
Thanks


Barry Kelly

Jun 18, 2001, 1:56:19 AM
In article <3B2C717A...@visual-limes.com>

Richard Bayarri Bartual <r...@visual-limes.com> wrote:

I'm not interested in arguments, Richard. However, you said:

> > > occupy up to 256KB using VFAT, i.e. vast amounts of wasted disk space.

Which is wrong, and you also said (but snipped)

> > > This limit is also apparent in FAT32 and UNIX's HPFS, both of
> > > which always use 1 byte clusters.

Which is also wrong. That's all.

William Meyer

Jun 18, 2001, 2:04:53 AM
"Barry Kelly" <dyn...@eircom.net> wrote in message
news:rsjpitkmncadapvqu...@4ax.com...

> In article <3B2C717A...@visual-limes.com>
> Richard Bayarri Bartual <r...@visual-limes.com> wrote:
>
> I'm not interested in arguments, Richard. However, you said:
>
> > > > occupy up to 256KB using VFAT, i.e. vast amounts of wasted disk space.
>
> Which is wrong, and you also said (but snipped)
>
> > > > This limit is also apparent in FAT32 and UNIX's HPFS, both of
> > > > which always use 1 byte clusters.
>
> Which is also wrong. That's all.

Doesn't sound good. I've not known your research to be lacking, so I suspect
there's an issue here, waiting for resolution. <sigh> You'll fight the good
fight, I'm sure. I tire of battles, for my part, but wish you well,
nonetheless.

Bill


pnichols

Jun 18, 2001, 3:19:32 AM

"Alessandro Federici" <al...@bigfoot.com> wrote in message
news:3b297705$2_1@dnews...

> "pnichols" <paul@computer-logic> wrote in message
news:3b297123$2_2@dnews...
> >
>
> Oh boy, oh boy... you are the only one in that thread Paul that doesn't get
> it.
> Windows has nothing to do with what I was talking about in the first place.
> As I repeatedly said, the same can be said for clusters/replicated Linux
> boxes. The best solution ever is not that box that you displayed. Is a set
> of those clustered and replicated. Accept it! It's not rocket science...
> Then, in top of this you said "Show me your MS Windows Server farm that can
> do anywhere near what this baby can do!" which I respectfully matched
> (=anywhere near).
>
> Have a good night Paul!
>
Linux/Unix is irrelevant, since Unix/Linux can run on either clustered
servers or the Mainframe. So I would include both in the Mainframe category.


Barry Kelly

Jun 18, 2001, 4:32:44 AM
In article <3b2d99c5_1@dnews>
"William Meyer" <wme...@earthlink.net> wrote:

> Doesn't sound good. I've not known your research to be lacking, so
> I suspect there's an issue here, wainting for resolution. <sigh>
> You'll fight the good fight, I'm sure. I tire of battles, for
> my part, but wish you well, none the less.

I'm tired of pointless bickering too. <g> I think I'll cut back on my
non-tech ration for a while...

-- Barry

Richard Bayarri Bartual

Jun 18, 2001, 6:11:17 AM

Hey, we agree on lots of things - no Champagne is necessary!

Alessandro Federici

Jun 18, 2001, 10:10:22 AM
"Richard Bayarri Bartual" <r...@visual-limes.com> wrote in message
news:3B2DD3C5...@visual-limes.com...
>

> Hey, we agree on lots of things - no Champagne is necessary!

LOL then let's drink because we agree on a lot of things <G>


Alessandro Federici

Jun 18, 2001, 10:09:53 AM
"pnichols" <paul@computer-logic> wrote in message news:3b2dabd4$1_1@dnews...

>
> > As I repeatedly said, the same can be said for clusters/replicated Linux
> > boxes. The best solution ever is not that box that you displayed. Is a set
> > of those clustered and replicated. Accept it! It's not rocket science.

> Linux/Unix is irrelevant, since Unix/Linux can run on either clustered
> servers or the Mainframe. So I would include both in the Mainframe category.

Ok, so you were wrong. That I can live with <G>


pnichols

Jun 18, 2001, 3:57:17 PM

"Alessandro Federici" <al...@bigfoot.com> wrote in message
news:3b2e0a66$1_2@dnews...

> "pnichols" <paul@computer-logic> wrote in message news:3b2dabd4$1_1@dnews...
> >
>
> > Linux/Unix is irrelevant, since Unix/Linux can run on either clustered
> > servers or the Mainframe. So I would include both in the Mainframe
> > category.
>
> Ok, so you were wrong. That I can live with <G>
>
How?


Alessandro Federici

Jun 18, 2001, 4:32:52 PM
"pnichols" <paul@computer-logic> wrote in message news:3b2e5d6e_2@dnews...

> > Ok, so you were wrong. That I can live with <G>
> >
> How?

Because a cluster of smaller mainframes can outperform that big one you
reported.


Richard Bayarri Bartual

Jun 20, 2001, 5:45:18 AM

Barry Kelly wrote:
>
> In article <3B2C717A...@visual-limes.com>
> Richard Bayarri Bartual <r...@visual-limes.com> wrote:
>
> I'm not interested in arguments, Richard. However, you said:
>
> > > > occupy up to 256KB using VFAT, i.e. vast amounts of wasted disk space.
>
> Which is wrong,

What is wrong about the statement that VFAT can have clusters of
up to 256K in size, or the statement that this wastes disk space?

> and you also said (but snipped)
>
> > > > This limit is also apparent in FAT32 and UNIX's HPFS, both of
> > > > which always use 1 byte clusters.
>
> Which is also wrong.

I agree that I was wrong when I said this.

Barry Kelly

Jun 20, 2001, 7:49:48 AM
In article <3B3070AE...@visual-limes.com>

Richard Bayarri Bartual <r...@visual-limes.com> wrote:

> > > > > occupy up to 256KB using VFAT, i.e. vast amounts of wasted disk space.
> >
> > Which is wrong,
>
> What is wrong about the statement that VFAT can have clusters of
> up to 256K is size, or the statement that this wastes disk space?

I snipped a little too much:

> > > > > DOS used variable sized clusters to allow large files - the net result
> > > > > of this was systems where a text file with "hello world" in it could

> > > > > occupy up to 256KB using VFAT, i.e. vast amounts of wasted disk space.

DOS only allows up to 32KB clusters, since it uses FAT16.

Windows 9x, Me & 2K can use FAT32.

VFAT is a device driver used in Win9x to allow long file names (LFNs)
on FAT16 & FAT32.

Talking about cluster size in relation to VFAT is meaningless. <g>

Why do we end up in these nasty discussions? <g>

-- Barry

--
If you're not part of the solution, you're part of the precipitate.

Mike Swaim

Jun 20, 2001, 1:07:57 PM

"Richard Bayarri Bartual" <r...@visual-limes.com> wrote in message
news:3B2B388F...@visual-limes.com...

> This actually depends how those Pentiums are used. For example, Windows
> NT tends to work them in series, so each subsequent processor only
> realises 1/2 of the performance of the one prior to it, resulting in
> four processors achieving around double the performance of a single one
> (Nihquist's memory bandwidth rule). BeOS, Solaris and Linux (and maybe
> Win2K DataCenter - I'm not sure about this) on the other hand use a
> symmetrical multiprocessor approach

Windows NT/2000 also uses SMP. I believe that their implementation was
better than Linux's prior to the current kernel (and it may still be). I'm
not aware of any SMP implementation that scales anywhere near linearly.
Linux (and the BSDs) has usually had scaling problems caused by poor
granularity (i.e. locks were overly broad, so processes ended up waiting on
services too much). NT tries to associate processes with particular
processors to avoid cache flushes/loads. My group uses a variety of NT
Servers (4-, 3-, and 2-way) and we've gotten good performance with them,
even though we tend to run a lot of processes (86 processes, 587 threads on
production server 1). We also have gobs of memory (2 GB, of which 900 MB is
typically used).
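Mike's observation that no SMP implementation scales anywhere near linearly is usually framed as Amdahl's law: any serial fraction of the workload (lock contention, bus traffic, scheduler overhead) caps the speedup extra processors can deliver. A minimal illustration follows; the 25% serial fraction is an arbitrary assumption for the demo, not a measurement of NT or Linux:

```python
# Amdahl's law: speedup on n processors when a fraction s of the work is
# inherently serial (locks, bus contention, scheduler overhead).
# The 0.25 serial fraction below is illustrative, not a measurement of
# any real NT or Linux system.

def amdahl_speedup(n, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

for n in (1, 2, 4, 8, 32):
    print(n, round(amdahl_speedup(n, 0.25), 2))
```

With a 25% serial fraction, four processors deliver only about 2.3x a single one -- close to the "four CPUs, roughly double the throughput" figure quoted earlier in the thread -- and even 32 processors top out below 4x.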

--
Mike Swaim
Michae...@Enron.com

Richard Bayarri Bartual

Jun 21, 2001, 6:42:34 AM

Barry Kelly wrote:
>
> In article <3B3070AE...@visual-limes.com>
> Richard Bayarri Bartual <r...@visual-limes.com> wrote:
>
> DOS only allows up to 32KB clusters, since it uses FAT16.
>

The DOS supplied with Win95 OSR2 and above can however also use FAT32.
It is thus perfectly fair to describe FAT32 as a DOS technology, because
it can be used from the DOS command-line without needing to have Win9X
loaded. This is in stark contrast to NTFS, which is a pure Windows
technology that cannot be utilised in any way from any version of DOS.



>
> VFAT is a device driver used in Win9x to allow long file names (LFNs)
> on FAT16 & FAT32.
>

And on Windows/NT - for example, if you tell WinNT to format a drive
as FAT, it will actually format it as VFAT. This is where the very
large clusters can come into play, as the following table from
Microsoft's document at
http://support.microsoft.com/support/kb/articles/Q140/3/65.asp
indicates:

"The FAT file system uses the following cluster sizes. These sizes are the same under Microsoft Windows NT, Microsoft MS-DOS, Microsoft Windows 95 and any other operating system that supports FAT:
Drive Size           FAT Type   Sectors       Cluster
(logical volume)                Per Cluster   Size
-------------------  --------   -----------   -------
0 MB - 15 MB         12-bit     8             4K
16 MB - 127 MB       16-bit     4             2K
128 MB - 255 MB      16-bit     8             4K
256 MB - 511 MB      16-bit     16            8K
512 MB - 1023 MB     16-bit     32            16K
1024 MB - 2048 MB    16-bit     64            32K
2048 MB - 4096 MB    16-bit     128           64K
*4096 MB - 8192 MB   16-bit     256           128K    NT V4.0 only
*8192 MB - 16384 MB  16-bit     512           256K    NT V4.0 only

* To support > 4GB FAT partitions using 128K or 256K clusters, the drives must use > 512 byte sectors."

Thus, while I was wrong in stating that DOS had clusters of up to 256K,
I was not wrong in claiming that these were supported under VFAT, or
in saying that they waste large amounts of disk space!
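The waste being argued over here is easy to quantify: a file occupies a whole number of clusters, so an 11-byte "hello world" file consumes one full cluster however large that cluster is. A quick sketch using cluster sizes from the table above (non-empty files assumed):

```python
import math

# Slack space: every non-empty file occupies a whole number of clusters,
# so a tiny file still consumes one full cluster. Cluster sizes are taken
# from the Microsoft table quoted above.

def allocated_size(file_bytes, cluster_bytes):
    """On-disk bytes consumed by a non-empty file of the given size."""
    return math.ceil(file_bytes / cluster_bytes) * cluster_bytes

hello = len("hello world")                    # 11 bytes of content
for cluster in (4 * 1024, 32 * 1024, 256 * 1024):
    print(cluster, allocated_size(hello, cluster))
```

An 11-byte file thus ties up 256 KB on a large NT-formatted FAT16 volume with 256K clusters -- a slack factor of over 23,000x -- though, as Barry notes, that is a worst case rather than a typical configuration.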


>
> Talking about cluster size in relation to VFAT is meaningless. <g>
>

See above.



> Why do we end up in these nasty discussions? <g>
>

Probably because we differ in what each of us regards as "DOS". I for
example class all the Win9X technologies as elaborate DOS shells
because they are launched from it - Windows/NT, Win2K, and XP are
however operating systems in their own right, and do not depend on
DOS for anything (well, except during installation with NT).

Richard Bayarri Bartual

Jun 21, 2001, 8:59:59 AM

Mike Swaim wrote:
>
> Windows NT/2000 also uses SMP.

Well, NT 4 sort of supported SMP, but only with programs that were
specially written to use it (you also had to specifically turn it
on, as most multi-processor Intel-based systems do not have a
symmetrical architecture). Win2K on the other hand is a far more
sophisticated system that not only supports auto-scheduling of
monolithic (i.e. not multi-threaded) tasks, but can also effectively
utilise weakly-ordered memory in specialist SMP hardware (weakly
ordered memory means that the hardware can reorder loads and stores
between CPU and memory so that single large tasks can effectively
be split up into a series of parallel operations, while at the same
time avoiding the possibility of any processor in the array being
presented with "stale" data).

> I believe that their implementation was better than Linux's prior to
> the current kernel (and it may still be).

From what I can gather (which isn't much, sadly), the Win2K system is
notably superior to that of current Linuxes, while today's Linux kernels
are better at SMP than Windows/NT was (although there were so many patches
and SPs to NT during its life that such things are difficult to say with
any certainty). Prior to the current kernel, I would not have classed Linux
as an SMP-ready OS, even though there were certain kernels that supported
it in a rather limited fashion. Note also that, like Windows, Linux
will not be capable of true SMP on most multiprocessor PC motherboards,
as these do not have a symmetrical architecture.

> I'm not aware of any SMP implementation that scales anywhere near linearly.
>

It depends on whether the hardware is designed for SMP or not. Linear
scaling cannot be realistically achieved with most multiprocessor PC
motherboards because of the way their CPUs are connected to memory -
there are however some non-PC architecture machines based around Intel
CPUs that do scale in a more or less linear fashion. These are
capable of _excellent_ performance, but they tend to be rather expensive
(although less so than most similar machines based around other
processors).

>
> Linux (and the BSDs) has usually had scaling problems caused by poor
> granularity.

This was a problem with most "SMP" UNIXes until quite recently, including
many commercial offerings for quite big machines.

> (i.e. locks were overly broad, so processes ended up waiting on
> services too much.)

Agreed.

> NT tries to associate processes with particular
> processes to avoid cache flushes/loads.

On-board caches present a number of problems for designers of SMP
systems. They are one of those items that realises great benefits
on single processor systems, and also on non-symmetrical
multiprocessing setups, but good SMP machines need to ensure that
every processor is actually "eating" what the specialist scheduling
hardware has prepared for it, and this is quite difficult in a system
where each has a relatively large cache (and therefore effectively
becomes in many ways like non-shared memory parallel processors and/or
clusters).

> My group uses a variety of NT
> Servers (4, 3, and 2 way) and we've gotten good performance with them.

Are these SMP systems though, or multi-way PC motherboards? There
is actually a difference, because an SMP OS requires hardware in
which each processor has equal access to memory and I/O at all
times, and this is not the case with many multi-CPU PC motherboards
because they lack the necessary data bus crossbar switches. This
is one reason why such systems are generally limited to 4 CPUs,
as Nihquist's memory bandwidth limitation (proven many times in
practice) states that any CPUs over and above 4 will realise no
gains, and may even reduce performance in this type of simple
"everything on the same bus with no crossbar switches" architecture.
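Whatever the rule is properly called, the saturation arithmetic behind this claim is straightforward: on a shared bus with no crossbar switches, once the processors' combined memory demand exceeds the bus's bandwidth, extra CPUs add contention instead of throughput. A back-of-the-envelope sketch (all figures are illustrative, not specifications of any real board):

```python
# Back-of-the-envelope shared-bus model: effective throughput is capped
# by the bus, so CPUs beyond the saturation point contribute nothing
# (and in practice add arbitration overhead). Figures are illustrative
# assumptions, not specs for any real motherboard.

BUS_BW = 800           # MB/s the shared front-side bus can carry
CPU_DEMAND = 250       # MB/s of memory traffic one busy CPU generates

def effective_cpus(n):
    """How many CPUs' worth of work the bus can actually feed."""
    return min(n, BUS_BW / CPU_DEMAND)

for n in (1, 2, 4, 8):
    print(n, round(effective_cpus(n), 2))   # flattens at 3.2
```

Under these assumed figures the curve flattens just past three processors, which is consistent with the thread's point that simple shared-bus designs are rarely built beyond 4 CPUs.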

> We tend to run a lot of processes (86 processes, 587 threads on
> production server 1), and have had good performance.

I do not doubt this - however, it does not necessarily prove that
you are getting true SMP, even if Windows/NT is telling you that
you are.

Mike Swaim

Jun 21, 2001, 10:34:00 AM

"Richard Bayarri Bartual" <r...@visual-limes.com> wrote in message
news:3B31EFCF...@visual-limes.com...

>
>
> Mike Swaim wrote:
> >
> > Windows NT/2000 also uses SMP.
>
> Well, NT 4 sort of supported SMP, but only with programs that were
> specially written to use it (you also had to specifically turn it
> on, as most multi-processor Intel-based systems do not have a
> symmetrical architecture).
Our servers are running NT4, and we don't use any applications "specially
written" to take advantage of SMP. Even our single threaded apps benefit
because more than one can run at a time. Most of our server applications
(and some of our desktop apps) do use multiple threads at some point in
their lifetimes.

> > My group uses a variety of NT
> > Servers (4, 3, and 2 way) and we've gotten good performance with them.
>
> Are these SMP systems though, or multi-way PC motherboards? There
> is actually a difference, because a SMP OS requires hardware in
> which each processor has equal access to memory and I/O at all
> times, and this is not the case with many multi-CPU PC motherboards
> because they lack the necessary data bus crossbar switches.

I can't tell what the hardware is, because it's locked up in a room, and
I've forgotten what we ordered. They're Compaq rack-mounted servers, so I
suspect that they're using a server chipset and are true SMP boxes.

--
Mike Swaim
Michae...@Enron.com

Barry Kelly

Jun 21, 2001, 11:15:42 AM
In article <3B31CF9A...@visual-limes.com>

Richard Bayarri Bartual <r...@visual-limes.com> wrote:

> > VFAT is a device driver used in Win9x to allow long file names (LFNs)
> > on FAT16 & FAT32.
>
> And on Windows/NT - for example, if you tell WinNT to format a drive
> as FAT, it will actually format it as VFAT.

There are only a few variants, FAT12 (for floppies), FAT16, FAT16B,
FAT32, FAT32X, and a couple more I'm probably forgetting.

There is *no partition code* for 'VFAT'. VFAT is something made up by
Microsoft to add LFN support to FAT16 & FAT32.

> This is where the very
> large clusters can come into play

Nothing to do with VFAT. Has everything to do with WinNT supporting
extensions to FAT16.

> Drive Size FAT Type Sectors Cluster
> (logical volume) Per Cluster Size
> ----------------- -------- ----------- -------
> 0 MB - 15 MB 12-bit 8 4K

Notice:        ^^^^^^


> 16 MB - 127 MB 16-bit 4 2K

Notice:          ^^^^^^

These are the FAT types, not 'VFAT'.

> Thus, while I was wrong in stating that DOS had clusters of up to 256K,
> I was not wrong in claiming that these were supported under VFAT, or
> in saying that they waste large amounts of disk space!

'VFAT' isn't a partition type. WinNT running large cluster sizes on
FAT16 wastes lots of space, sure. No idea why you'd want to do that,
though - you're taking the worst possible case and endeavoring to
portray it as 'typical for WinNT & FAT', or some such nonsense.

> > Talking about cluster size in relation to VFAT is meaningless. <g>
>
> See above.

It still is.


> > Why do we end up in these nasty discussions? <g>
>
> Probably because we differ in what each of us regards as "DOS".

I suppose, when I say 'DOS', I mean DOS 6.22.

However, even when you're talking about whatever version of DOS that
comes with WinMe these days, 'VFAT' is still meaningless when talking
about cluster sizes, and WinNT's supported cluster sizes on FAT16 have
nothing to do with the issue (of what DOS is).

Richard Bayarri Bartual

Jun 21, 2001, 11:34:11 AM

Mike Swaim wrote:
>
> "Richard Bayarri Bartual" <r...@visual-limes.com> wrote in message
> news:3B31EFCF...@visual-limes.com...
> >
> >
> > Mike Swaim wrote:
> > >
> > > Windows NT/2000 also uses SMP.
> >
> > Well, NT 4 sort of supported SMP, but only with programs that were
> > specially written to use it (you also had to specifically turn it
> > on, as most multi-processor Intel-based systems do not have a
> > symmetrical architecture).

> Our servers are running NT4, and we don't use any applications "specially
> written" to take advantage of SMP.

Then it's probably not helping you much, as MS have a very specific
set of guidelines on how to write multithreaded apps in ways that
allow the OS scheduler to manage them effectively.

> Even our single threaded apps benefit because more than one can
> run at a time.

Only if each of them requires enough CPU load to ensure that each
application gets assigned its own processor - you will notice no
difference otherwise.

> Most of our server applications (and some of our desktop apps) do use
> multiple threads at some point in their lifetimes.
>

Multiple threads are not in themselves sufficient to gain real benefits
from the NT implementation of SMP - you need to ensure that locking
is high-granularity so that the scheduler can reallocate loads properly.
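The granularity point both posters circle around can be sketched directly: with one coarse lock, threads serialise on every operation even when they touch unrelated resources; with per-resource locks, threads only wait when they truly conflict. A minimal illustration (the counters stand in for whatever services the threads would otherwise be waiting on; note that CPython's global interpreter lock means this toy shows the contention structure, not an actual SMP speedup):

```python
import threading

# Coarse-grained: a single lock guards every counter, so two threads
# updating *different* counters still serialise on each other.
coarse_lock = threading.Lock()
counters = {"a": 0, "b": 0}

def bump_coarse(name):
    with coarse_lock:              # held for any counter
        counters[name] += 1

# Fine-grained: one lock per counter, so unrelated updates can proceed
# in parallel and threads only wait when they genuinely conflict.
fine_locks = {name: threading.Lock() for name in counters}

def bump_fine(name):
    with fine_locks[name]:         # held only for this one counter
        counters[name] += 1

threads = [threading.Thread(target=bump_fine, args=(n,))
           for n in ("a", "b") for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counters)                    # {'a': 100, 'b': 100}
```

Early SMP kernels effectively used the `bump_coarse` pattern (one "big lock" around whole subsystems), which is exactly the overly broad locking Mike describes.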



>
> I can't tell what the hardware is, because it's locked up in a room, and
> I've forgotten what we ordered. They're Compaq rack mounted servers, so I
> suspect that they're using a server chipset, so I suspect that they're true
> SMP boxes.
>

The only Compaq systems that are pretty much guaranteed to have SMP-ready
hardware are their Alpha-based range - machines built around Intel processors
vary (and being rack-mount is not an indication one way or the other).

Richard Bayarri Bartual

Jun 21, 2001, 12:25:31 PM

Barry Kelly wrote:
>
> In article <3B31CF9A...@visual-limes.com>
> Richard Bayarri Bartual <r...@visual-limes.com> wrote:
>
> > > VFAT is a device driver used in Win9x to allow long file names (LFNs)
> > > on FAT16 & FAT32.
> >
> > And on Windows/NT - for example, if you tell WinNT to format a drive
> > as FAT, it will actually format it as VFAT.
>

> There is *no partition code* for 'VFAT'. VFAT is something made up by
> Microsoft to add LFN support to FAT16 & FAT32.
>

I never said there was - I was in fact paraphrasing MS, who in
this MSDN article (http://www.microsoft.com/technet/deploy/fat.asp)
say:

"When you tell Windows NT to format a partition as FAT, it actually
formats the partition as VFAT. The only time you’ll have a true FAT
partition under Windows NT Version 4.0 is when you use another operating
system (such as MS-DOS) to format the partition."

>
> Nothing to do with VFAT. Has everything to do with WinNT supporting
> extensions to FAT16.

Take it up with MS, not me!

> 'VFAT' isn't a partition type.

This is not what MS say in the article above when referring to the
specific WinNT implementation thereof. Again, take it up with them.
