
McKinley Cometh...


Nick Maclaren

unread,
Jul 9, 2002, 5:43:16 AM7/9/02
to

Well, McKinley may be announced, but there is little evidence that
HP are backing up their fine words with buttered parsnips.

HP's "buy online" for the USA has only the zx2000 (plus its 6 Merced
configurations) and says "There is no stock currently available" for
all of them.

Neither the zx2000 nor the zx6000 are in HP's UK list yet. And, yes,
we are a serious potential customer!

If anyone manages to buy one using a normal mechanism, I should
appreciate hearing what, for delivery when and in which country.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email: nm...@cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679

Larry Kilgallen

unread,
Jul 9, 2002, 7:32:22 AM7/9/02
to
In article <ageb7k$68h$1...@pegasus.csx.cam.ac.uk>, nm...@cus.cam.ac.uk (Nick Maclaren) writes:
>
> Well, McKinley may be announced, but there is little evidence that
> HP are backing up their fine words with buttered parsnips.
>
> HP's "buy online" for the USA has only the zx2000 (plus its 6 Merced
> configurations) and says "There is no stock currently available" for
> all of them.
>
> Neither the zx2000 nor the zx6000 are in HP's UK list yet. And, yes,
> we are a serious potential customer!

The web pages I read from the announcement said "Available in July".
In vendorese that is different from "Available now".

Ken Green

unread,
Jul 9, 2002, 6:35:46 AM7/9/02
to
Nick Maclaren wrote:

Well, The Register this morning said that the boxes were due to start
shipping in August.

http://theregister.co.uk/content/3/26097.html

For UK pricing, I'm sure there must be a sales team at HP
that focuses on universities; try giving them a call.

Cheers

Ken


Nick Maclaren

unread,
Jul 9, 2002, 6:45:48 AM7/9/02
to

In article <OqzEjX...@eisner.encompasserve.org>,

From HP's own Web pages:

hp part #:      A8081A
product name:   hp zx2000 w/900MHz Intel® Itanium® 2 processor,
                ATI Radeon 7000 graphics, 1GB DDR-SDRAM, 40GB HD,
                HP-UX 11i (Lease at $171.87/mo)
unit price:     $5,834.00
usually ships:  backordered

'Usually ships' provides our best estimate of when products will
ship to you. Options are:
1-2 days
'Usually ships' within 1-2 business days.
3-5 days
'Usually ships' within 3-5 business days.
8-9 days
'Usually ships' within 8-9 business days.
1-2 weeks
'Usually ships' within 1-2 weeks.
backordered
There is no stock currently available.


"Available in July" seems unlikely. "Orderable in July" perhaps.

Alberto

unread,
Jul 9, 2002, 8:40:57 AM7/9/02
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> ha scritto nel messaggio news:ageb7k$68h$1...@pegasus.csx.cam.ac.uk...

>
> Well, McKinley may be announced, but there is little evidence that
> HP are backing up their fine words with buttered parsnips.
>
> HP's "buy online" for the USA has only the zx2000 (plus its 6 Merced
> configurations) and says "There is no stock currently available" for
> all of them.
>
> Neither the zx2000 nor the zx6000 are in HP's UK list yet. And, yes,
> we are a serious potential customer!
>
> If anyone manages to buy one using a normal mechanism, I should
> appreciate hearing what, for delivery when and in which country.

You have too hurry :-).
Itanic performs very well now, but 2 years are
necessary to penetrate a full of difficulties market.
How many years are necessary to Opteron for
same things? (10?).
Bye.
Alberto.


Nick Maclaren

unread,
Jul 9, 2002, 9:09:49 AM7/9/02
to

In article <tRAW8.13228$K_4.3...@twister1.libero.it>,

"Alberto" <uapalb...@libero.it> writes:
|>
|> > Well, McKinley may be announced, but there is little evidence that
|> > HP are backing up their fine words with buttered parsnips.
|> >
|> > HP's "buy online" for the USA has only the zx2000 (plus its 6 Merced
|> > configurations) and says "There is no stock currently available" for
|> > all of them.
|> >
|> > Neither the zx2000 nor the zx6000 are in HP's UK list yet. And, yes,
|> > we are a serious potential customer!
|> >
|> > If anyone manages to buy one using a normal mechanism, I should
|> > appreciate hearing what, for delivery when and in which country.
|>
|> You have too hurry :-).

Perhaps :-)

|> Itanic performs very well now, but 2 years are
|> necessary to penetrate a full of difficulties market.

Considering that we have been getting an earful of how wonderful
the imminent IA-64 systems will be for over 5 years now, I am not
sympathetic. More seriously, this is IA-64's last chance; if it
isn't established within 12-18 months, it will die.

My belief is that HP got Intel to 'launch' so that HP can sell
into one or more large USA military/HPC sites without forcing
every user to sign an Intel NDA. It is quite possible that
even HP will not start selling IA-64 systems seriously on the
open market until the autumn. Or they might start to do so
tomorrow. Any information appreciated.

|> How many years are necessary to Opteron for
|> same things? (10?).

Much less. It is an incremental solution, not a replacement one.
In particular, it is reported to run legacy (IA-32) code nearly
as fast as x86-64 code, so developers and customers have an easy
upgrade path.

Most observers believe that serious Opteron servers will be
available by March 2003, and that the performance will be better
than the McKinley for most work, though worse for number crunching.
If McKinley has failed to establish by then, Madison may also
fail to establish, and the IA-64 project will collapse.

Alberto

unread,
Jul 9, 2002, 9:27:52 AM7/9/02
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> ha scritto nel messaggio

> Most observers believe that serious Opteron servers will be
> available by March 2003, and that the performance will be better
> than the McKinley for most work, though worse for number crunching.
> If McKinley has failed to establish by then, Madison may also
> fail to establish, and the IA-64 project will collapse.

Ummm...... Madison = 1.7-1.8Ghz cpu + 6Mb L3, with 1450+ specint
and 2430+specfpu, an other class of cpu respect Opteron ( no serius
compiler yet ).
This numbers are explicit and other consideration are only pessimistic :-).
Itanium do not collapse, shure. Give him 2 years.
Bye.
Alberto.


Terry C. Shannon

unread,
Jul 9, 2002, 9:36:34 AM7/9/02
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> wrote in message
news:agenat$gsj$1...@pegasus.csx.cam.ac.uk...

>
> In article <tRAW8.13228$K_4.3...@twister1.libero.it>,
> "Alberto" <uapalb...@libero.it> writes:
> |>
> |> > Well, McKinley may be announced, but there is little evidence that
> |> > HP are backing up their fine words with buttered parsnips.
> |> >
> |> > HP's "buy online" for the USA has only the zx2000 (plus its 6 Merced
> |> > configurations) and says "There is no stock currently available" for
> |> > all of them.
> |> >
> |> > Neither the zx2000 nor the zx6000 are in HP's UK list yet. And, yes,
> |> > we are a serious potential customer!
> |> >
> |> > If anyone manages to buy one using a normal mechanism, I should
> |> > appreciate hearing what, for delivery when and in which country.
> |>
> |> You have too hurry :-).
>
> Perhaps :-)
>
> |> Itanic performs very well now, but 2 years are
> |> necessary to penetrate a full of difficulties market.
>
> Considering that we have been getting an earful of how wonderful
> the imminent IA-64 systems will be for over 5 years now, I am not
> sympathetic. More seriously, this is IA-64's last chance; if it
> isn't established within 12-18 months, it will die.
>

Sounds like a reasonable assessment.


Alberto

unread,
Jul 9, 2002, 9:47:01 AM7/9/02
to

"Terry C. Shannon" <terrys...@attbi.com> ha scritto nel messaggio
news:CFBW8.337959$6m5.3...@rwcrnsc51.ops.asp.att.net...

Catastrophic assessment...
The chip in question is an Intel chip ;-).
Opteron is an outsider's chip (or HT's chip?).
Bye.
Alberto.


Nick Maclaren

unread,
Jul 9, 2002, 10:41:44 AM7/9/02
to

In article <sxBW8.15370$7N3.3...@twister2.libero.it>,

I said better than the McKinley. My current guess is a SpecInt
of 800-900 in x86-64 mode and 700-800 in IA-32 mode in 1Q03.
But time will tell.

If the Madison runs at 1.7-1.8 GHz and delivers 1450+ SpecInt
and 2430+ SpecFP within 12 months, I shall be most impressed.
I suspect that 1.5 GHz and 1200 SpecInt on HP systems and 950
on Intel ones at this time next year is more plausible.

Alberto

unread,
Jul 9, 2002, 11:11:45 AM7/9/02
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> ha scritto nel messaggio

> I said better than the McKinley. My current guess is a SpecInt
> of 800-900 in x86-64 mode and 700-800 in IA-32 mode in 1Q03.
> But time will tell.

Yes but.....where is sw for x86-64 mode?

>
> If the Madison runs at 1.7-1.8 GHz and delivers 1450+ SpecInt
> and 2430+ SpecFP within 12 months, I shall be most impressed.
> I suspect that 1.5 GHz and 1200 SpecInt on HP systems and 950
> on Intel ones at this time next year is more plausible.

1.8Ghz, i think, is the target for 0.13u process....Q4 2003?
After this, go to 0.09u process ( 60/35nm=1.71......3Ghz ?)
With this process, Itanium is more powerfull of Opteron.
The Intel choice is process, Amd have little chance in this
arena in future...........
Hpq know that.
But time will tell ;-).
Bye.
Alberto.


Nick Maclaren

unread,
Jul 9, 2002, 11:31:43 AM7/9/02
to

In article <R2DW8.15719$7N3.3...@twister2.libero.it>,

"Alberto" <uapalb...@libero.it> writes:
|> "Nick Maclaren" <nm...@cus.cam.ac.uk> ha scritto nel messaggio
|>
|> > I said better than the McKinley. My current guess is a SpecInt
|> > of 800-900 in x86-64 mode and 700-800 in IA-32 mode in 1Q03.
|> > But time will tell.
|>
|> Yes but.....where is sw for x86-64 mode?

Where the software for IA-64 was at the same stage. In development.
We shall know more in a few months.

|> > If the Madison runs at 1.7-1.8 GHz and delivers 1450+ SpecInt
|> > and 2430+ SpecFP within 12 months, I shall be most impressed.
|> > I suspect that 1.5 GHz and 1200 SpecInt on HP systems and 950
|> > on Intel ones at this time next year is more plausible.
|>
|> 1.8Ghz, i think, is the target for 0.13u process....Q4 2003?

Very likely. Hence my remark. By the time that the Madison
reaches 1.8 GHz, it is likely that the Opteron will be at 2.5 GHz,
perhaps 3.0 GHz. If AMD introduce a large cache version at the
latter speed, Opteron, too, could reach 1200 SpecInt. Maybe.

It is also rumoured that two Opterons will cost about the same
as one Madison, in terms of dollars, square centimetres and
watts. If that is so, the small server performance comparisons
swing solidly into Opteron's favour. We shall see.

|> After this, go to 0.09u process ( 60/35nm=1.71......3Ghz ?)

Before that comes, the battle will have been lost and won, or
at least fought to an inconclusive result. 0.1 micron will not
be relevant before 2004, and we shall know the result before
then.

|> With this process, Itanium is more powerfull of Opteron.
|> The Intel choice is process, Amd have little chance in this
|> arena in future...........

Really? Intel used to have a 12 months lead in the introduction
of new processes, but it is down to about 6 months now. You
must remember that AMD can buy in fab capacity from (say) IBM.

JF Mezei

unread,
Jul 9, 2002, 11:44:21 AM7/9/02
to
Alberto wrote:
> Itanic performs very well now, but 2 years are
> necessary to penetrate a full of difficulties market.
> How many years are necessary to Opteron for
> same things? (10?).

Considering that a full Windows won't be available on IA64 until next year, I
think that software availability on IA64 will be the major issue in the next
couple of years.

On the other hand, Hammer, being compatible with the 8086, should be able to
run existing software and benefit from a much greater wealth of software.

Now, if Windows on 32 bits has a reliability factor of "1", what will be its
relative reliability on IA64? A new architecture, and a change from 32 to 64
bits, does not lead one to have much confidence that Microsoft would be able
to make the new version for IA64 more reliable.

Intel will have to have a lot of staying power to sustain IA64 until there is
sufficient software to allow that platform to start to be taken seriously.

JF Mezei

unread,
Jul 9, 2002, 11:47:57 AM7/9/02
to
Nick Maclaren wrote:
> If McKinley has failed to establish by then, Madison may also
> fail to establish, and the IA-64 project will collapse.

IA64 won't collapse. It would simply be scaled back to the Alpha scale where
it becomes a proprietary low volume chip used by HP to run HP-UX and NSK (and
VMS if still alive).

If IA64 doesn't take off in sufficient volumes, it will be most interesting to
see Microsoft's reaction with regards to continued Windows availability on IA64.

JF Mezei

unread,
Jul 9, 2002, 11:57:02 AM7/9/02
to
Since one needs special compilers to use EPIC's potential for performance,
what will happen to all the applications (especially public domain) that are
designed to be compiled with GNU compilers ?

Will GNU compilers be made to adapt to IA64's EPIC requirements or will they
just generate "vanilla" code that will greatly slow down the chip ?

Alberto

unread,
Jul 9, 2002, 12:11:35 PM7/9/02
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> ha scritto nel messaggio

> |> 1.8Ghz, i think, is the target for 0.13u process....Q4 2003?


>
> Very likely. Hence my remark. By the time that the Madison
> reaches 1.8 GHz, it is likely that the Opteron will be at 2.5 GHz,
> perhaps 3.0 GHz. If AMD introduce a large cache version at the
> latter speed, Opteron, too, could reach 1200 SpecInt. Maybe.

2.5G or 3G with 0.13u? Naaaa...
Yes, when it goes to 0.09u, but when?
Intel's process leadership is clear, and the other companies
are hungry.

>
> It is also rumoured that two Opterons will cost about the same
> as one Madison, in terms of dollars, square centimetres and
> watts. If that is so, the small server performance comparisons
> swing solidly into Opteron's favour. We shall see.

Speculation... certainly in terms of watts I don't agree.

>
> |> After this, go to 0.09u process ( 60/35nm=1.71......3Ghz ?)
>
> Before that comes, the battle will have been lost and won, or
> at least fought to an inconclusive result. 0.1 micron will not
> be relevant before 2004, and we shall know the result before
> then.

The 0.13u node is sufficient for the battle. AMD is now in difficulty,
and UMC doesn't have an aggressive process on its schedule.

> Really? Intel used to have a 12 months lead in the introduction
> of new processes, but it is down to about 6 months now. You
> must remember that AMD can buy in fab capacity from (say) IBM.

Outsourcing? A bad approach if the purpose is frequency scaling... Sun knows that.
Bye.
Alberto.

Bernd Paysan

unread,
Jul 9, 2002, 12:12:41 PM7/9/02
to
Nick Maclaren wrote:
> Very likely. Hence my remark. By the time that the Madison
> reaches 1.8 GHz, it is likely that the Opteron will be at 2.5 GHz,
> perhaps 3.0 GHz. If AMD introduce a large cache version at the
> latter speed, Opteron, too, could reach 1200 SpecInt. Maybe.

I thought the current Athlons (at 1.8GHz, end 0.18u/initial 0.13u) should
already be in the ~700 SpecInt range. The Hammer has several improvements,
which should give about 25% per-clock improvement. Also, it's a SOI chip,
which reduces gate delay by about 30% - and since other things didn't
change much, you can expect a 30% higher clock. 1200 SpecInt should be a
target you can easily get with initial chips, without larger caches and
such like.

The downside is that the available x86-64 compiler now is a real production
compiler (GCC), and not some hot speed special tuning compiler nobody uses
to compile their apps. This gives lower SpecInt numbers. But for the user,
this is barely relevant.

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/

Dan Pop

unread,
Jul 9, 2002, 12:37:08 PM7/9/02
to

>Since one needs special compilers to use EPIC's potential for performance,
>what will happen to all the applications (especially public domain) that are
>designed to be compiled with GNU compilers ?

The Intel compilers (whose Linux versions are available for free)
are front end compatible with the GNU compilers (i.e. they support
the GNU extensions).
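
[Illustration, assuming nothing beyond what Dan describes: a GCC-compatible
front end is expected to accept common GNU C extensions such as typeof,
statement expressions and __attribute__ alongside standard C. A minimal
sketch of that kind of code:

    #include <stdio.h>

    /* typeof and a statement expression -- two widely used GNU extensions */
    #define MAX(a, b) ({ typeof(a) _a = (a); typeof(b) _b = (b); \
                         _a > _b ? _a : _b; })

    /* __attribute__ syntax, another common extension */
    static int counter __attribute__((unused)) = 0;

    int main(void)
    {
        printf("%d\n", MAX(3, 7));   /* prints 7 */
        return 0;
    }

Code that sticks to such extensions can be moved between gcc and a
compatible compiler without source changes.]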

>Will GNU compilers be made to adapt to IA64's EPIC requirements or will they
>just generate "vanilla" code that will greatly slow down the chip ?

Time will tell.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Dan...@ifh.de

Alberto

unread,
Jul 9, 2002, 12:45:55 PM7/9/02
to

"JF Mezei" <jfmezei...@videotron.ca> ha scritto nel messaggio news:3D2B04D1...@videotron.ca...

Yes but where are x86-64 compilers?
Do you hope a Intel 6.0 C++ for x86-64 ?
Sorry don't exist.
Bye.
Alberto.


Alberto

unread,
Jul 9, 2002, 12:49:12 PM7/9/02
to
"Bernd Paysan" <bernd....@gmx.de> ha scritto nel messaggio news:p12fga...@miriam.mikron.de...

> Nick Maclaren wrote:
> > Very likely. Hence my remark. By the time that the Madison
> > reaches 1.8 GHz, it is likely that the Opteron will be at 2.5 GHz,
> > perhaps 3.0 GHz. If AMD introduce a large cache version at the
> > latter speed, Opteron, too, could reach 1200 SpecInt. Maybe.
>
> I thought the current Athlons (at 1.8GHz, end 0.18u/initial 0.13u) should
> already be in the ~700 SpecInt range. The Hammer has several improvements,
> which should give about 25% per-clock improvement.

In all application? :-).

> Also, it's a SOI chip,
> which reduces gate delay by about 30% - and since other things didn't
> change much, you can expect a 30% higher clock.

I don't expect over a 10-15% higher clock with Soi.

> 1200 SpecInt should be a
> target you can easily get with initial chips, without larger caches and
> such like.

You are very very optimistic.

> The downside is that the available x86-64 compiler now is a real production
> compiler (GCC), and not some hot speed special tuning compiler nobody uses
> to compile their apps. This gives lower SpecInt numbers. But for the user,
> this is barely relevant.

Sure?
Bye.
Alberto

Nick Maclaren

unread,
Jul 9, 2002, 12:50:31 PM7/9/02
to

In article <7rEW8.16078$7N3.3...@twister2.libero.it>,

"Alberto" <albe...@libero.it> writes:
|>
|> Yes but where are x86-64 compilers?
|> Do you hope a Intel 6.0 C++ for x86-64 ?
|> Sorry don't exist.

Look, the first IA-64 products were nominally available a year
ago. The first x86-64 products aren't due until 4Q02. It
isn't surprising that software for the former is more readily
available - if it were not so, it would be clear that the IA-64
platform was dead.

Andrew Harrison SUNUK Consultancy

unread,
Jul 9, 2002, 12:58:32 PM7/9/02
to

JF Mezei wrote:

> Alberto wrote:
>
>>Itanic performs very well now, but 2 years are
>>necessary to penetrate a full of difficulties market.
>>How many years are necessary to Opteron for
>>same things? (10?).
>>
>
> Considering that a full Windows won't be available on IA64 until next year, I
> think that software availability on IA64 will be the major issue in the next
> couple of years.
>

The availability of IA-64 compiled software is the major problem
compounded by the fact that it needs to be compiled with the right
compiler (HP's for HP-UX).

HP's own benchmarks show pretty conclusively that IA-64 emulating
(in this case) HP-PA is not a good thing, but that is what most ISVs,
particularly in the x86->IA-64 space, will be relying on: binary
emulation.

Intel's and HP's decision not to break binary compatibility, or perhaps
their lack of confidence that they could carry the ISVs, is both an
advantage (it does allow you to say that apps will run) and IA-64's
Achilles' heel.

Most ISVs want an easy life. Presented with the option of compiling,
and possibly having to modify, their code to run on IA-64, or of just
letting the binary compatibility do its work, many will take the
latter option despite its rather horrible performance consequences.

If I was a hardware vendor being courted by Intel I would be very
wary of IA-64 unless I could be sure that Intels compilers that I
would have access to will deliver a similar level of performance
on my box as it does on HP's.

Its rather extraordinary that HP/Intel have set out to create an
industry standard CPU and a commodity 64 bit platform and straight
away have tilted the playing field so that in fact the platforms
are not commodities.

How long will it be before HP starts to market the benefit of
buying boxes that run apps compiled with their compiler based
on the performance advantage provided by the compiler.

Is it surprising that Dell/IBM etc. are less than enthused?

Regards
Andrew Harrison

Gavin Scott

unread,
Jul 9, 2002, 1:14:57 PM7/9/02
to
Nick wrote:

> HP's "buy online" for the USA has only the zx2000 (plus its 6 Merced
> configurations) and says "There is no stock currently available" for
> all of them.

You might find more info at:

https://www.e-solutions.hp.com/

which lets you configure and price all the new I2 systems. The options
are rather limited it seems, especially in terms of choosing a video
card for the workstation models, but for HP this is way more information
than you used to be able to get online :-)

G.

Douglas Siebert

unread,
Jul 9, 2002, 1:17:22 PM7/9/02
to
nm...@cus.cam.ac.uk (Nick Maclaren) writes:

>I said better than the McKinley. My current guess is a SpecInt
>of 800-900 in x86-64 mode and 700-800 in IA-32 mode in 1Q03.
>But time will tell.


I'd be very, very, very surprised if it doesn't easily exceed 1000
SpecInt in IA-32 at first release. If your predictions are correct, it
will be barely beating .18u Athlons (749 SpecInt for the XP 2100+).
Surely the shift to .13u SOI, two more pipeline stages, and two on-chip
DDR controllers can add performance; if your expectations are correct,
AMD should have cancelled Hammer and had all their good engineers
working on the Athlon .13u shrink instead!

I don't think compilation is the problem people make it out to be. AMD
will be hindered slightly by the fact they'll get their best SPEC
results in IA-32 mode using Intel's compiler (which they currently use
for Athlon) But given the immaturity of IA64 support in GCC, Hammer
will clean McKinley's (and its successors') clock on Linux. Hammer loses
performance using GCC, but not nearly as much as McKinley loses. Even
a .09u McKinley with 6MB cache probably has no chance against a .13u
Hammer where GCC is concerned. So goodbye Linux market on IA64.

IA64's problem is that its performance advantage is most significant for
FP (which far fewer server customers care about than int) and it won't
be very competitive on Linux. People don't use Windows for FP number
crunching, so the platform of choice for that market will be HP-UX. But
a good segment of the big FP market has already decided it is better to
fill a few racks with cheap x86 systems running Linux. For int work,
IA64 doesn't look to keep ahead of P4 & Hammer. At .18u, it is only
about 15% ahead of the fastest .18u P4, and slower than AMD's fastest
.18u Athlon. P4 & Athlon/Hammer will be constantly improved, it looks
like IA64 gets only shrinks and some additional L3 cache for the next
couple years. Oops!

--
Douglas Siebert dsie...@excisethis.khamsin.net

A good friend will help you move, a true friend will help you move a body.

Nick Maclaren

unread,
Jul 9, 2002, 1:22:08 PM7/9/02
to

In article <cuEW8.14150$K_4.3...@twister1.libero.it>,

"Alberto" <albe...@libero.it> writes:
|> "Bernd Paysan" <bernd....@gmx.de> ha scritto nel messaggio news:p12fga...@miriam.mikron.de...
|>
|> > Also, it's a SOI chip,
|> > which reduces gate delay by about 30% - and since other things didn't
|> > change much, you can expect a 30% higher clock.
|>
|> I don't expect over a 10-15% higher clock with Soi.

Well, considering AMD get 1733 MHz with a 0.18 process, don't you
think that 2500 MHz with a 0.13 SOI process is fairly conservative?

|> > 1200 SpecInt should be a
|> > target you can easily get with initial chips, without larger caches and
|> > such like.
|>
|> You are very very optimistic.

The mind boggles. If I remind you of what you posted:

Ummm...... Madison = 1.7-1.8Ghz cpu + 6Mb L3, with 1450+ specint
and 2430+specfpu, an other class of cpu respect Opteron ( no serius
compiler yet ).
This numbers are explicit and other consideration are only
pessimistic :-).

You are assuming that Intel can get an almost 80% increase in clock
speed (and hence performance) by shrinking to 0.13, but you are
claiming that expecting AMD to get a 45% increase in the same way
is very, very optimistic.

Yes, of course, AMD could make a complete pig's ear of the Opteron
or could be lying through their teeth that the Opteron is faster
than the Athlon at the same clock speed, but you are assuming one
or the other.

Greg Cagle

unread,
Jul 9, 2002, 1:41:02 PM7/9/02
to
Douglas Siebert wrote:

<snip>

> I don't think compilation is the problem people make it out to be. AMD
> will be hindered slightly by the fact they'll get their best SPEC
> results in IA-32 mode using Intel's compiler (which they currently use
> for Athlon) But given the immaturity of IA64 support in GCC, Hammer
> will clean McKinley's (and its successors') clock on Linux. Hammer loses
> performance using GCC, but not nearly as much as McKinley loses. Even
> a .09u McKinley with 6MB cache probably has no chance against a .13u
> Hammer where GCC is concerned. So goodbye Linux market on IA64.

But gcc isn't the only Linux compiler for IA-64. Intel has a gcc-compatible
compiler. Granted it's not producing the same results as the HP-UX
compilers, but it's still better than gcc.

<snip>

--
Greg Cagle
gregc at gregcagle dot com

Alberto

unread,
Jul 9, 2002, 1:44:42 PM7/9/02
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> ha scritto nel messaggio news:agf640$t0r$1...@pegasus.csx.cam.ac.uk...

Sorry, but I think that AMD has already eaten into gate length in its
0.18u process, and now...
I know that this is only Internet speculation, but it is credible after
the Palomino ''miracle'' and the Tbird's high overclocking characteristics;
2.2G is a good top frequency for AMD's standard 0.13u process.
But in the future...?
Bye.
Alberto.


Alberto

unread,
Jul 9, 2002, 2:01:35 PM7/9/02
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> ha scritto nel messaggio

CUT


Calculation:

Intel:  0.18u (110nm gate)        -> 0.13u (60nm gate)
AMD:    0.18u (110nm-???nm gate)  -> 0.13u (80nm gate, Motorola)

Intel:  110/60 ==> +83%
AMD:    110/80 ==> +40%, but Palomino already ate 20%, so...

AMD's process isn't competitive now; in the future...?
Bye.
Alberto.


Toon Moene

unread,
Jul 9, 2002, 2:39:18 PM7/9/02
to
Alberto wrote:

> Ummm...... Madison = 1.7-1.8Ghz cpu + 6Mb L3, with 1450+ specint
> and 2430+specfpu, an other class of cpu respect Opteron ( no serius
> compiler yet ).
> This numbers are explicit and other consideration are only pessimistic :-).

You must be using a different definition of "explicit" than I am.

Explicit SPEC numbers are those on the SPEC web pages, i.e.
http://www.spec.org.

All others are speculation :-)

--
Toon Moene - mailto:to...@moene.indiv.nluug.nl - phoneto: +31 346 214290
Saturnushof 14, 3738 XG Maartensdijk, The Netherlands
Maintainer, GNU Fortran 77: http://gcc.gnu.org/onlinedocs/g77_news.html
Join GNU Fortran 95: http://g95.sourceforge.net/ (under construction)

Peter Boyle

unread,
Jul 9, 2002, 2:41:10 PM7/9/02
to

On Tue, 9 Jul 2002, Douglas Siebert wrote:

> IA64's problem is that its performance advantage is most significant for
> FP (which far fewer server customers care about than int) and it won't
> be very competitive on Linux. People don't use Windows for FP number
> crunching, so the platform of choice for that market will be HP-UX. But
> a good segment of the big FP market has already decided it is better to
> fill a few racks with cheap x86 systems running Linux. For int work,
> IA64 doesn't look to keep ahead of P4 & Hammer. At .18u, it is only
> about 15% ahead of the fastest .18u P4, and slower than AMD's fastest
> .18u Athlon. P4 & Athlon/Hammer will be constantly improved, it looks
> like IA64 gets only shrinks and some additional L3 cache for the next
> couple years. Oops!

Logical conclusion:

Intel release their compiler at zero dollar cost (not
free intellectually) for userland Linux compiles.

This is both hinted at by the Linux betas of Intel C/Fortran for
both x86 and IA64 targets, and not unprecedented, since the same
issues made Compaq release ccc/cxx (academic only?).

Non-(dollar-)free or usage-restrictive licences will not cut the mustard
with the Linux community - I downloaded the beta out of casual
interest in the IA64 code generation, but refuse to deal with Flex
on my laptop. Plenty of cluster people were very grateful to get their
hands on the Compaq compiler binaries, and FSF ideology be damned for a
factor of 2x.

Peter


> --
> Douglas Siebert dsie...@excisethis.khamsin.net
>
> A good friend will help you move, a true friend will help you move a body.
>

Peter Boyle pbo...@physics.gla.ac.uk

JF Mezei

unread,
Jul 9, 2002, 3:25:07 PM7/9/02
to
Alberto wrote:
> Yes but where are x86-64 compilers?

Since it would be the same architecture, I would surmise that the Hammer
compilers would simply need to be the existing 8086 compilers modified to use
64-bit addresses.

This isn't as dramatic a change as requiring the compiler not only to generate
new instructions, but also to lay out the code differently and specifically emit
instructions that make use of EPIC.

JF Mezei

unread,
Jul 9, 2002, 3:35:44 PM7/9/02
to
Douglas Siebert wrote:
> be very competitive on Linux. People don't use Windows for FP number
> crunching, so the platform of choice for that market will be HP-UX.

Don't the teenage kids who play all those games on Windows rely heavily on
floating point ?

Don't rendering farms for CG scenes in movies rely heavily on floating point ?
(many of which currently run on Linux BWT).

Andries Thijssen

unread,
Jul 9, 2002, 3:42:13 PM7/9/02
to
"JF Mezei" <jfmezei...@videotron.ca> wrote in message

> Don't rendering farms for CG scenes in movies rely heavily on floating
point ?
> (many of which currently run on Linux BWT).

My (uneducated) guess would be that it is relatively easy to divide a
rendering task between multiple systems and that the price/performance ratio
of P4/Athlon systems may be superior for such tasks.

How important is FP performance for mid-range systems (another target market
for Itanium)? I don't see a database server do much FP work, especially in a
multi-tier architecture.

If anyone can shed light on this, please do.

Andries


Bill Todd

unread,
Jul 9, 2002, 3:57:47 PM7/9/02
to

"Alberto" <uapalb...@libero.it> wrote in message
news:R2DW8.15719$7N3.3...@twister2.libero.it...

>
> "Nick Maclaren" <nm...@cus.cam.ac.uk> ha scritto nel messaggio

...

> > If the Madison runs at 1.7-1.8 GHz and delivers 1450+ SpecInt
> > and 2430+ SpecFP within 12 months, I shall be most impressed.
> > I suspect that 1.5 GHz and 1200 SpecInt on HP systems and 950
> > on Intel ones at this time next year is more plausible.
>
> 1.8Ghz, i think, is the target for 0.13u process

Ah, you mean like 1.4 GHz was the target for McKinley in 0.18 micron (I even
saw 1.5 GHz mentioned at one point)? And then 1.2 GHz? But now 1.0 Ghz
seems to be sufficiently difficult to reach that they're binning out 900 MHz
low-end units to use up the parts that don't make the grade.

1.4 GHz seems a much more likely introduction speed for Madison. Possibly
1.5 GHz if moving to a copper process helps enough. 2.0 GHz sounds
reasonable for Montecito at introduction. Both will likely creep up *after*
introduction (unless Montecito appears soon enough that tweaking Madison
doesn't make sense).

- bill

David Mosberger-Tang

unread,
Jul 9, 2002, 12:31:44 PM7/9/02
to
>>>>> On Tue, 09 Jul 2002 11:57:02 -0400, JF Mezei <jfmezei...@videotron.ca> said:

JF> Since one needs special compilers to use EPIC's potential for
JF> performance, what will happen to all the applications
JF> (especially public domain) that are designed to be compiled with
JF> GNU compilers ?

I'm not sure what you mean by "special compilers". Intel is certainly
working hard to make their compiler a solid and viable option for ia64
(both linux and windows). The Intel compiler supports pretty much all
the sane GCC extensions, so switching compilers is rather easy for
most open source applications. At the moment, I'd still rate GCC
higher for stability, but the Intel compiler has certainly come a long
way from being a pure SPEC compiler. From my perspective, the only
real downside is that the Intel compiler isn't available at no cost
(not if you want to distribute binaries compiled with the Intel
compiler, at least).

JF> Will GNU compilers be made to adapt to IA64's EPIC requirements
JF> or will they just generate "vanilla" code that will greatly slow
JF> down the chip ?

GCC has been available for ia64 for a while and on integer code it
performs OK. For floating-point intensive code it doesn't really
stand much of a chance against the compilers from Intel and HP.

There is an interesting competition going on between GCC and the Open
Research Compiler (ORC), which is based on a GCC front-end and the
open-sourced SGI backend technology. Most people seem to agree that
ORC has a better infrastructure for modern architectures (not just
EPIC) but on the other hand, GCC certainly has more momentum and a
much larger user base. It will be interesting to watch how these
projects evolve over the next year or two.

--david

Nick Maclaren

unread,
Jul 9, 2002, 4:15:10 PM7/9/02
to
In article <uim6ghm...@news.supernews.com>,

Thanks. It does list them, but is so gruesomely painful that I
have so far failed to get to the end of it. What chimpanzee
designed that form? I shall try to remember to try at a time I
am on a fast link and the USA is asleep.

Assuming that you don't get the answer "not orderable" when you
get to the end, it appears that there is a discrepancy between
that page and HP's main Web pages. Odd.

Sander Vesik

unread,
Jul 9, 2002, 4:36:44 PM7/9/02
to
In comp.arch JF Mezei <jfmezei...@videotron.ca> wrote:
> Nick Maclaren wrote:
>> If McKinley has failed to establish by then, Madison may also
>> fail to establish, and the IA-64 project will collapse.
>
> IA64 won't collapse. It would simply be scaled back to the Alpha scale where
> it becomes a proprietary low volume chip used by HP to run HP-UX and NSK (and
> VMS if still alive).

Well, in this scenario, ia64 does collapse - it would be more effective for hp to
continue using hp-pa and mips for hp-ux/nsk and to just migrate everybody still around
to hp-pa / hp-ux from alpha instead.

>
> If IA64 doesn't take off in sufficient volumes, it will be most interesting to
> see Microsoft's reaction with regards to continued Windows availability on IA64.

--
Sander

+++ Out of cheese error +++

Sander Vesik

unread,
Jul 9, 2002, 4:39:29 PM7/9/02
to
In comp.arch Alberto <albe...@libero.it> wrote:
>
> Yes but where are x86-64 compilers?

You don't really need them - x86-64 processors will be very fast
even in 32 bit mode. They will just be faster in 64 bit mode for
apps that are recompiled.

> Do you hope a Intel 6.0 C++ for x86-64 ?
> Sorry don't exist.

This assumes Intel can't make more money from it than from ia64...

> Bye.
> Alberto.

JF Mezei

unread,
Jul 9, 2002, 5:39:08 PM7/9/02
to
Sander Vesik wrote:
> Well, in this scenario, ia64 does collapse - it would be more effective for hp to
> continue using hp-pa and mips for hp-ux/nsk and to just migrate everybody still around
> to hp-pa / hp-ux from alpha instead.

No. By the time Intel admits that IA64 won't become industry standard and is
relegated to HP's proprietary chip, HP will have already migrated all its eggs
onto IA64 and at that point, it makes more sense to continue to develop IA64
than to go to whatever other architecture exists at that time.

Toon Moene

unread,
Jul 9, 2002, 5:30:01 PM7/9/02
to
David Mosberger-Tang wrote:

> JF> Will GNU compilers be made to adapt to IA64's EPIC requirements
> JF> or will they just generate "vanilla" code that will greatly slow
> JF> down the chip ?
>
> GCC has been available for ia64 for a while and on integer code it
> performs OK. For floating-point intensive code it doesn't really
> stand much of a chance against the compilers from Intel and HP.

One of the reasons for this is that "floating-point intensive code" in
the context of SPEC, this often (though not always) means "Fortran
code".

Recently, I observed that we're losing no-aliasing information when
compiling Fortran code. This means that unrolling loop code and
renaming registers (to use all those nice 128 fp registers) won't be
effective using g77.

I'm planning to take a look at this my coming holiday (even though I
myself am more interested in Alpha performance).

Rupert Pigott

unread,
Jul 9, 2002, 5:42:53 PM7/9/02
to
"JF Mezei" <jfmezei...@videotron.ca> wrote in message
news:3D2B3B0E...@videotron.ca...

> Douglas Siebert wrote:
> > be very competitive on Linux. People don't use Windows for FP number
> > crunching, so the platform of choice for that market will be HP-UX.
>
> Don't the teenage kids who play all those games on Windows rely heavily on
> floating point ?

If you take Tom's hardware as a reference it appears that framebuffer
bandwidth and AGP bandwidth are the key limiting factors on that stuff
at the moment. I doubt that will change either, the Graphics hardware
vendors make their money from moving work from the CPU to their silicon.

Those same vendors seem to have the most sway in the APIs too, so the
APIs will favour *more* work moving towards the display subsystem. For
what work is vaguely affected by the CPU the memory subsystem seems to
be the critical factor.

Cheers,
Rupert


Nick Maclaren

unread,
Jul 9, 2002, 6:05:38 PM7/9/02
to
In article <3D2B57F1...@videotron.ca>,

Back in 1994, the reason that HP got into bed with Intel was that
HP did not have the resources to develop and market PA-RISC 3.0
(later IA-64) on its own. Nothing has changed in that respect.
If IA-64 folds within the next year, HP should still be able to
back off to PA-RISC. The nightmare scenario is Intel canning
IA-64 18-24 months down the line.

Nick Maclaren

unread,
Jul 9, 2002, 6:08:19 PM7/9/02
to
In article <3D2B55D9...@moene.indiv.nluug.nl>,

Toon Moene <to...@moene.indiv.nluug.nl> wrote:
>David Mosberger-Tang wrote:
>
>> JF> Will GNU compilers be made to adapt to IA64's EPIC requirements
>> JF> or will they just generate "vanilla" code that will greatly slow
>> JF> down the chip ?
>>
>> GCC has been available for ia64 for a while and on integer code it
>> performs OK. For floating-point intensive code it doesn't really
>> stand much of a chance against the compilers from Intel and HP.
>
>One of the reasons for this is that "floating-point intensive code" in
>the context of SPEC, this often (though not always) means "Fortran
>code".

For reasons related to the C language that we all know. Yes. This
may change as C99 and restrict become more widespread.
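
[Illustration of the point about C99's restrict, assuming only what is
said above: the qualifier gives a C compiler the same no-aliasing
guarantee that Fortran provides for dummy array arguments, which is what
allows unrolling and register renaming across a loop. A minimal sketch:

    /* With the restrict qualifiers, the compiler may assume x and y do
       not overlap, so it can unroll the loop and keep several elements
       live in the FP registers at once; without them it must be
       conservative about stores through y invalidating loads from x. */
    void daxpy(int n, double a, const double * restrict x,
               double * restrict y)
    {
        for (int i = 0; i < n; i++)
            y[i] += a * x[i];
    }
]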

>Recently, I observed that we're losing no-aliasing information when
>compiling Fortran code. This means that unrolling loop code and
>renaming registers (to use all those nice 128 fp registers) won't be
>effective using g77.

Gug. That is BAD news for IA-64.

>I'm planning to take a look at this my coming holiday (even though I
>myself am more interested in Alpha performance).

It is pretty important for that - but critical for IA-64!

Peter Boyle

unread,
Jul 9, 2002, 6:52:50 PM7/9/02
to

On Tue, 9 Jul 2002, Toon Moene wrote:

> Recently, I observed that we're losing no-aliasing information when
> compiling Fortran code. This means that unrolling loop code and
> renaming registers (to use all those nice 128 fp registers) won't be
> effective using g77.
>
> I'm planning to take a look at this my coming holiday (even though I
> myself am more interested in Alpha performance).

Improved loop unrolling and reg renaming (followed by peephole
scheduling) would make an enormous difference in many simple kernels
on both Alpha and Power, let alone IA64.

My guess is the only reason gcc has managed to avoid this so far is because
x86 has so damned few registers in any case...

Go Toon!

Peter


Alberto

unread,
Jul 9, 2002, 6:57:06 PM7/9/02
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> ha scritto nel messaggio

> Back in 1994, the reason that HP got into bed with Intel was that


> HP did not have the resources to develop and market PA-RISC 3.0
> (later IA-64) on its own. Nothing has changed in that respect.
> If IA-64 folds within the next year, HP should still be able to
> back off to PA-RISC. The nightmare scenario is Intel canning
> IA-64 18-24 months down the line.

You are formidable :-).
Intel touches iron ("knocks on wood") when you speak, heheheheh.
(an Italian ''scaramanzia'')
Bye.
Alberto.


Andrew Reilly

unread,
Jul 9, 2002, 8:24:15 PM7/9/02
to
On Wed, 10 Jul 2002 00:41:44 +1000, Nick Maclaren wrote:


> In article <sxBW8.15370$7N3.3...@twister2.libero.it>, "Alberto"


> <uapalb...@libero.it> writes:
> |> "Nick Maclaren" <nm...@cus.cam.ac.uk> ha scritto nel messaggio
> |>

> |> > Most observers believe that serious Opteron servers will be
> |> > available by March 2003, and that the performance will be better
> |> > than the McKinley for most work, though worse for number crunching.


> |> > If McKinley has failed to establish by then, Madison may also fail
> |> > to establish, and the IA-64 project will collapse.
> |>

> |> Ummm...... Madison = 1.7-1.8Ghz cpu + 6Mb L3, with 1450+ specint and
> |> 2430+specfpu, an other class of cpu respect Opteron ( no serius
> |> compiler yet ).
> |> This numbers are explicit and other consideration are only

> |> pessimistic :-). Itanium do not collapse, shure. Give him 2 years.


>
> I said better than the McKinley. My current guess is a SpecInt of
> 800-900 in x86-64 mode and 700-800 in IA-32 mode in 1Q03. But time will
> tell.

Are those figures (700-800 specint in IA-32 mode) achievable with an
in-order IA-32 translator, or are you suspecting that the IA-32 wart is
an all-singing out-of-order implementation, even given an EPIC
substructure? How would those IA-32 figures compare to those for SPEC
binaries compiled for PIII or P4, do you think?

Has anything been published about the microarchitecture of the IA-32
emulator/core?

If there's a fully out-of-order IA-32 engine on the chip, how much more
effort would it take to turn IA-64 into a 128 register out-of-order RISC
chip? That would be formidable, no? All of the other architectures are
register-count constrained.

--
Andrew

Bill Todd

unread,
Jul 9, 2002, 9:20:03 PM7/9/02
to

"Andrew Reilly" <and...@gurney.reilly.home> wrote in message
news:P8LW8.419482$o66.1...@news-server.bigpond.net.au...

> On Wed, 10 Jul 2002 00:41:44 +1000, Nick Maclaren wrote:
>
>
> > In article <sxBW8.15370$7N3.3...@twister2.libero.it>, "Alberto"
> > <uapalb...@libero.it> writes:
> > |> "Nick Maclaren" <nm...@cus.cam.ac.uk> ha scritto nel messaggio
> > |>
> > |> > Most observers believe that serious Opteron servers will be
> > |> > available by March 2003, and that the performance will be better
> > |> > than the McKinley for most work, though worse for number crunching.
> > |> > If McKinley has failed to establish by then, Madison may also fail
> > |> > to establish, and the IA-64 project will collapse.
> > |>
> > |> Ummm...... Madison = 1.7-1.8Ghz cpu + 6Mb L3, with 1450+ specint and
> > |> 2430+specfpu, an other class of cpu respect Opteron ( no serius
> > |> compiler yet ).
> > |> This numbers are explicit and other consideration are only
> > |> pessimistic :-). Itanium do not collapse, shure. Give him 2 years.
> >
> > I said better than the McKinley. My current guess is a SpecInt of
> > 800-900 in x86-64 mode and 700-800 in IA-32 mode in 1Q03. But time will
> > tell.
>
> Are those figures (700-800 specint in IA-32 mode) achievable with an
> in-order IA-32 translator, or are you suspecting that the IA-32 wart is
> an all-singing out-of-order implementation, even given an EPIC
> substructure?

I think you misunderstood: the numbers Nick was suggesting (which I suspect
should be increased by at least 40%) were for x86-64 (i.e., Hammer), not
IA64. I *am* quite curious to see whether Merced's IA32 performance (which
was far worse even than its poor IA64 performance) got boosted by anything
like the order of magnitude it would take to make running IA32 binaries on
McKinley a reasonable proposition.

- bill

Aaron Spink

unread,
Jul 9, 2002, 11:30:05 PM7/9/02
to

"Andries Thijssen" <unava...@somewhere.com> wrote in message
news:mUGW8.116211$38.14...@zwoll1.home.nl...

> My (uneducated) guess would be that it is relatively easy to divide a
> rendering task between multiple systems and that the price/performance
ratio
> of P4/Athlon systems may be superior for such tasks.
>
You do understand that almost all the computer graphics applications are
licensed on a per CPU basis. In addition there is only one company that
doesn't pay for their main software package ( Pixar and Renderman ). In
general the software stack can cost as much as if not significantly more
than the hardware cost. Add in increased support and maintenance costs and it
is not unreasonable to buy a lower number of more expensive but higher
performing cpus.

aaron spink
speaking for myself inc


Norbert Juffa

unread,
Jul 9, 2002, 11:45:00 PM7/9/02
to

"Bill Todd" <bill...@metrocast.net> wrote in message news:%eHW8.107267$vq.53...@bin6.nnrp.aus1.giganews.com...
[...]

> 1.4 GHz seems a much more likely introduction speed for Madison. Possibly
> 1.5 GHz if moving to a copper process helps enough. 2.0 GHz sounds
> reasonable for Montecito at introduction. Both will likely creep up *after*
> introduction (unless Montecito appears soon enough that tweaking Madison
> doesn't make sense).

I fully concur with both estimates (Madison = 1.4GHz 0.13u, Montecito = 2 GHz
0.09u) as these are the same numbers I puzzled out by myself as well. As for the
Montecito, a research paper that was published by Intel not only hints at the 2 GHz
target but also makes it seem likely that it will incorporate SMT.

-- Norbert

Norbert Juffa

unread,
Jul 10, 2002, 12:02:07 AM7/10/02
to

"Bernd Paysan" <bernd....@gmx.de> wrote in message news:p12fga...@miriam.mikron.de...
> Nick Maclaren wrote:
> > Very likely. Hence my remark. By the time that the Madison
> > reaches 1.8 GHz, it is likely that the Opteron will be at 2.5 GHz,
> > perhaps 3.0 GHz. If AMD introduce a large cache version at the
> > latter speed, Opteron, too, could reach 1200 SpecInt. Maybe.
>
> I thought the current Athlons (at 1.8GHz, end 0.18u/initial 0.13u) should
> already be in the ~700 SpecInt range. The Hammer has several improvements,
> which should give about 25% per-clock improvement. Also, it's a SOI chip,

> which reduces gate delay by about 30% - and since other things didn't
> change much, you can expect a 30% higher clock. 1200 SpecInt should be a

> target you can easily get with initial chips, without larger caches and
> such like.

You're right on the money; the published SPECint numbers for the 1733 MHz Athlon
are 720/749 base/peak. Numbers for the 1800 MHz AthlonXP 2200+ have not
been published yet, but based on published results one would expect them to be
something like 741/772 base/peak.

I think you overestimate the frequency advantage due to SOI, but SOI helps reduce
power consumption, which in turn might help to crank up the frequency in typical
PCs, whose cooling technology seems to limit them to a max power of about 75W
for the CPU.

Paul DeMone has published some SPEC estimates for initial Hammers systems on
some web site (Real World Technology?) which seemed very plausible to me. If
I recall correctly, he estimated 1100 SPECint/1000SPECfp.


> The downside is that the available x86-64 compiler now is a real production
> compiler (GCC), and not some hot speed special tuning compiler nobody uses
> to compile their apps. This gives lower SpecInt numbers. But for the user,
> this is barely relevant.

The non-Intel compilers seem to be picking up the pace. At least the GCC folks
seem to be working hard to get Athlon SPEC results up, and have also added
feedback-directed optimizations (the most powerful differentiator in the Intel
compiler's arsenal, from my observation). See http://www.suse.de/~aj/SPEC/index.html

-- Norbert

David Mosberger-Tang

unread,
Jul 9, 2002, 5:23:40 PM7/9/02
to
>>>>> On Tue, 9 Jul 2002 17:17:22 +0000 (UTC), dsie...@excisethis.khamsin.net (Douglas Siebert) said:

Douglas> IA64's problem is that its performance advantage is most
Douglas> significant for FP (which far fewer server customers care
Douglas> about than int) and it won't be very competitive on Linux.

That would be news to me. Note that the highest reported SPECfp
number is precisely for Linux (which is a first in and of itself, I
believe). Also, PNL seems to be quite happy with Linux on Itanium 2:

http://www.emsl.pnl.gov:2080/capabs/mscf/?/capabs/mscf/hardware/results_hpcs2.html

As far as I know, PNL uses the Intel compilers.

--david

Douglas Siebert

unread,
Jul 10, 2002, 2:22:45 AM7/10/02
to
Andrew Harrison SUNUK Consultancy <andrew_nospam.harrison_remove_this@sun#.com> writes:

>Its rather extraordinary that HP/Intel have set out to create an
>industry standard CPU and a commodity 64 bit platform and straight
>away have tilted the playing field so that in fact the platforms
>are not commodities.

>How long will it be before HP starts to market the benefit of
>buying boxes that run apps compiled with their compiler based
>on the performance advantage provided by the compiler.


Well....never? This was HP's stated strategy for their alliance with
Intel on IA64 all along. HP reaps the benefits of commodity CPU
pricing, and doesn't have to do development of said CPUs (at least I think
they are out of that biz now that McKinley is out and the next few are
only shrinks).

I'm just surprised they managed to do it so well -- they've never been
regarded to be particularly strong in compiler technology compared to
say Compaq. Oh yeah, they bought those guys...

Bill Todd

unread,
Jul 10, 2002, 2:26:21 AM7/10/02
to

"Norbert Juffa" <ju...@earthlink.net> wrote in message
news:05OW8.10998$A43.1...@newsread2.prod.itd.earthlink.net...

And yet the most recent public Intel pronouncement about Montecito that I've
seen said that it would include only 'little enhancements' over Madison:
while I can't say exactly what Intel may think constitutes 'little',
everything I've heard about the implementation complexity of SMT - even in
the limited manner supported in the Pentium 4 - suggests to me that most
people would not call it a 'little' enhancement, perhaps especially in an
architecture like EPIC's. Furthermore, IIRC Montecito is said to be a
plug-compatible upgrade to Madison, which may not preclude the appearance of
SMT but would at a minimum have non-transparent software consequences if the
feature were enabled.

While Intel is now awash in engineering talent, it still might be a bit of a
strain - even just in terms of coordination - to have *three* ambitious
development efforts proceeding in parallel on the Itanic line alone: SMT
for 2004, on-chip routing and memory-controller support for 2005 (the
earliest any real Alpha influence can realistically appear, and that likely
not in the core itself), and whatever black magic the Alpha team is cooking
up for 2006 - 7. A dual-core Montecito in 2004 or (perhaps more likely)
2005 might be a more reasonable expectation, with a single major core
revision in 2006 - 7 (the now-cloaked-in-mystery Chivano).

- bill

Dennis O'Connor

unread,
Jul 10, 2002, 3:30:49 AM7/10/02
to

"Bill Todd" <bill...@metrocast.net> wrote ...

> While Intel is now awash in engineering talent, it still might be a bit of a
> strain - even just in terms of coordination - to have *three* ambitious
> development efforts proceeding in parallel on the Itanic line alone:

Not that I've noticed, from my perspective in XScale architecture.
Intel has so many "ambitious development projects" going on,
I can't even keep the codenames straight. :-)
--
Dennis O'Connor dm...@primenet.com
"We don't become a rabid dog to destroy a rabid dog"


Dennis O'Connor

unread,
Jul 10, 2002, 3:36:46 AM7/10/02
to
"Nick Maclaren" <nm...@cus.cam.ac.uk> wrote ...
> "Alberto" <albe...@libero.it> writes:
> |> "Bernd Paysan" <bernd....@gmx.de> ...

> |>
> |> > Also, it's a SOI chip,
> |> > which reduces gate delay by about 30% - and since other things didn't
> |> > change much, you can expect a 30% higher clock.
> |>
> |> I don't expect over a 10-15% higher clock with Soi.
>
> Well, considering AMD get 1733 MHz with a 0.18 process, don't you
> think that 2500 MHz with a 0.13 SOI process is fairly conservative?

No. AMD's "0.18 process" is said to have already been using
130nm transistor gate dimensions, Nick. So no gain there.
And the reduced wire runs don't necessarily buy you much
because your wire-to-wire spacing has decreased proportionally,
as well as your wire width. But the die gets smaller, and that
is always good for cost (but bad for power density).

A semiconductor fab process isn't fully characterized
by a gate dimension, metal type, and K, Nick.
It's complicated beyond your ken.

Ken Green

unread,
Jul 10, 2002, 3:41:12 AM7/10/02
to
Greg Cagle wrote:

> Douglas Siebert wrote:
>
> <snip>
>
> > I don't think compilation is the problem people make it out to be. AMD
> > will be hindered slightly by the fact they'll get their best SPEC
> > results in IA-32 mode using Intel's compiler (which they currently use
> > for Athlon) But given the immaturity of IA64 support in GCC, Hammer
> > will clean McKinley's (and its successors') clock on Linux. Hammer loses
> > performance using GCC, but not nearly as much as McKinley loses. Even
> > a .09u McKinley with 6MB cache probably has no chance against a .13u
> > Hammer where GCC is concerned. So goodbye Linux market on IA64.
>
> But gcc isn't the only Linux compiler for IA-64. Intel has a gcc-compatible
> compiler. Granted it's not producing the same results as the HP-UX
> compilers, but it's still better than gcc.
>

IIRC the SPEC FP number on HP's website is the Intel compiler running on
Debian Linux; the SPEC Int number is on HP-UX.

>
> <snip>
>
> --
> Greg Cagle
> gregc at gregcagle dot com

Ken Green

unread,
Jul 10, 2002, 3:55:51 AM7/10/02
to
Nick Maclaren wrote:

> In article <3D2B57F1...@videotron.ca>,
> JF Mezei <jfmezei...@videotron.ca> wrote:
> >Sander Vesik wrote:
> >> Well, in this scenario, ia64 does collapse - it would be more effective for hp to
> >> continue using hp-pa and mips for hp-ux/nsk and to just migrate everybody still around
> >> to hp-pa / hp-ux from alpha instead.
> >
> >No. By the time Intel admits that IA64 won't become industry standard and is
> >relegated to HP's proprietary chip, HP will have already migrated all its eggs
> >onto IA64 and at that point, it makes more sense to continue to develop IA64
> >than to go to whatever other architecture exists at that time.
>
> Back in 1994, the reason that HP got into bed with Intel was that
> HP did not have the resources to develop and market PA-RISC 3.0
> (later IA-64) on its own.

Hey, maybe it was all just a clever wheeze to get Intel to pay to develop it in the
first place :-)

Then let it die as a commodity and let HP have it back as a proprietary chip; at that
point all the cost constraints are out of the window and they can afford to put
nice big caches back on the thing. The PA-8800 has a 32MB L2 cache, while McKinley
has to make do with a 3MB L3.

> Nothing has changed in that respect.
> If IA-64 folds within the next year, HP should still be able to
> back off to PA-RISC. The nightmare scenario is Intel canning
> IA-64 18-24 months down the line.
>
> Regards,
> Nick Maclaren,
> University of Cambridge Computing Service,
> New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
> Email: nm...@cam.ac.uk
> Tel.: +44 1223 334761 Fax: +44 1223 334679

Cheers

Ken


Nick Maclaren

unread,
Jul 10, 2002, 4:03:59 AM7/10/02
to
In article <7ZLW8.8$iX5....@bin3.nnrp.aus1.giganews.com>,

Bill Todd <bill...@metrocast.net> wrote:
>"Andrew Reilly" <and...@gurney.reilly.home> wrote in message
>news:P8LW8.419482$o66.1...@news-server.bigpond.net.au...
>> >
>> > I said better than the McKinley. My current guess is a SpecInt of
>> > 800-900 in x86-64 mode and 700-800 in IA-32 mode in 1Q03. But time will
>> > tell.
>>
>> Are those figures (700-800 specint in IA-32 mode) achievable with an
>> in-order IA-32 translator, or are you suspecting that the IA-32 wart is
>> an all-singing out-of-order implementation, even given an EPIC
>> substructure?
>
>I think you misunderstood: the numbers Nick was suggesting (which I suspect
>should be increased by at least 40%) were for x86-64 (i.e., Hammer), not
>IA64. I *am* quite curious to see whether Merced's IA32 performance (which
>was far worse even than its poor IA64 performance) got boosted by anything
>like the order of magnitude it would take to make running IA32 binaries on
>McKinley a reasonable proposition.

Yes, precisely. I was quoting what I regard as conservative figures
that I am pretty sure AMD will be able to deliver, reliably and
probably on time (well, almost). I should not be surprised if they
were 20% higher in actuality, but should be if they were 40%.
Not flabbergasted, but definitely surprised.

If they are 40% higher, and the Hammer remains on schedule, and it
works reliably in high-RAS contexts, then the Madison launch will be
a damp squib at best. But that is quite a few ifs.

Nick Maclaren

unread,
Jul 10, 2002, 4:08:40 AM7/10/02
to
In article <3D2BE887...@kgcc.co.uk>,
Ken Green <Ken....@kgcc.co.uk> wrote:

>Nick Maclaren wrote:
>
>> Back in 1994, the reason that HP got into bed with Intel was that
>> HP did not have the resources to develop and market PA-RISC 3.0
>> (later IA-64) on its own.
>
>Hey, maybe it was all just a clever wheeze to get Intel to pay to develope it in the
>first place :-)
>
>Then let it die as a comodity and let HP have it back as propritory chip, at that
>point all the cost constrains are out of the window and they can afford to put
>nice big caches back on the thing, PA8800 has a 32MB L2 cache, McKinley
>has to make do with a 3MB L3.

As I said, nothing has changed in that respect. HP cannot afford
it, because the IA-64 architecture is a very high-cost one to
develop and develop for. Not just the hardware, but the software.

Nick Maclaren

unread,
Jul 10, 2002, 4:15:45 AM7/10/02
to
In article <ugy9ckt...@panda.mostang.com>,

David Mosberger-Tang <David.M...@acm.org> wrote:
>>>>>> On Tue, 9 Jul 2002 17:17:22 +0000 (UTC), dsie...@excisethis.khamsin.net (Douglas Siebert) said:
>
> Douglas> IA64's problem is that its performance advantage is most
> Douglas> significant for FP (which far fewer server customers care
> Douglas> about than int) and it won't be very competitive on Linux.
>
>That would be news to me. Note that the highest reported SPECfp
>number is precisely for Linux (which is a first in and of itself, I

Is it just? You mean the 1365? That is MOST interesting, if I have
understood you.

>believe). Also, PNL seems to be quite happy with Linux on Itanium 2:
>
> http://www.emsl.pnl.gov:2080/capabs/mscf/?/capabs/mscf/hardware/results_hpcs2.html

Hmm. The abstract refers to getting 96% of peak on a key kernel
(a matrix multiply). Sorry, but that is NOT relevant. Firstly,
matrix multiply is trivial to optimise. Secondly, if competently
coded, it will be dominated by calls to hand-coded DGEMM or ZGEMM
anyway. I don't have time to look at the paper now, but will later.

Aaron Spink

unread,
Jul 10, 2002, 4:34:22 AM7/10/02
to

"Douglas Siebert" <dsie...@excisethis.khamsin.net> wrote in message
news:aggjrk$5sl$1...@sword.avalon.net...

> I'm just surprised they managed to do it so well -- they've never been
> regarded to be particularly strong in compiler technology compared to
> say Compaq. Oh yeah, they bought those guys...
>
Just a clarification...

The compiler group was part of the Intel deal, prior to the HP deal. Rich
Grove, who was a member of the DEC/Compaq Alpha compiler group and also a
Compaq fellow, is now an Intel fellow:

http://www.intel.com/pressroom/kits/bios/rgrove.htm

So what used to be the Alpha compiler development group is now at Intel.

aaron spink
speaking for myself inc.


Bill Todd

unread,
Jul 10, 2002, 4:34:34 AM7/10/02
to

"Ken Green" <Ken....@kgcc.co.uk> wrote in message
news:3D2BE887...@kgcc.co.uk...

...

> Then let it die as a comodity and let HP have it back as propritory chip,
at that
> point all the cost constrains are out of the window and they can afford to
put
> nice big caches back on the thing, PA8800 has a 32MB L2 cache, McKinley
> has to make do with a 3MB L3.

I rather doubt that PA8800's 32 MB L2 cache is on-chip, however (the
McKinley die is pretty large already, and about half of it is the 3 MB of L3
cache) - so presumably HP could add just as much L4 cache to a McKinley
system should they care to.

The better question is exactly why HP would believe that EPIC (if it indeed
did revert to being an HP-proprietary product) would be a useful step up
from PA-RISC which, despite relative neglect of late, has just released an
875 MHz part that should get about 700 SPECint2K performance if the increase
is close to linear and next year around Madison time will get a shrink to
130 nm (keeping up with Madison in that regard) *plus* dual cores on the
chip (yielding close to double the server performance per die). There's no
evidence that EPIC has left a great deal of performance on the table just
waiting to be realized in future iterations (though the Alpha guys can
probably find some if anyone can, or throw it in the trash and start over),
so whatever degree of effort it would take to keep improving Itanic could
likely be equally effective in improving good old PA-RISC.

Oh, wait: that *was* the conclusion they reportedly came to a while ago,
but couldn't make fly at the corporate level...

- bill

Bill Todd

unread,
Jul 10, 2002, 4:45:44 AM7/10/02
to

"Dennis O'Connor" <dm...@primenet.com> wrote in message
news:10262864...@nnrp1.phx1.gblx.net...

So enlighten us then. It seems that you still expect *some* gain from the
shrink, and Bernd seems to expect some perhaps significant gain from the
move to SOI, and AMD has added two pipeline stages in decoding to remove
roadblocks (as well as improve efficiency) there. While all of these are of
course subject to variations beyond your ken or control, since you've
already volunteered an opinion of sorts, encouraging you to quantify it a bit
(perhaps within a rough range) does not seem unreasonable given your claimed
level of expertise in this area.

- bill

Thomas Womack

unread,
Jul 10, 2002, 4:46:32 AM7/10/02
to
In article <aggqfh$903$1...@pegasus.csx.cam.ac.uk>,

Nick Maclaren <nm...@cus.cam.ac.uk> wrote:
>In article <ugy9ckt...@panda.mostang.com>,
>David Mosberger-Tang <David.M...@acm.org> wrote:
>>>>>>> On Tue, 9 Jul 2002 17:17:22 +0000 (UTC), dsie...@excisethis.khamsin.net (Douglas Siebert) said:
>>
>> Douglas> IA64's problem is that its performance advantage is most
>> Douglas> significant for FP (which far fewer server customers care
>> Douglas> about than int) and it won't be very competitive on Linux.
>>
>>That would be news to me. Note that the highest reported SPECfp
>>number is precisely for Linux (which is a first in and of itself, I
>
>Is it just? You mean the 1365? That is MOST interesting, if I have
>understood you.
>
>>believe). Also, PNL seems to be quite happy with Linux on Itanium 2:
>>
>> http://www.emsl.pnl.gov:2080/capabs/mscf/?/capabs/mscf/hardware/results_hpcs2.html

> Hmm. The abstract refers to getting 96% of peak on a key kernel
> (a matrix multiply). Sorry, but that is NOT relevant. Firstly,
> matrix multiply is trivial to optimise. Secondly, if competently
> coded, it will be dominated by calls to hand-coded DGEMM or ZGEMM
> anyway. I don't have time to look at the paper now, but will later.

The paper is six Powerpoint slides, full of vapour (apart from a
slightly scathing comparison of McKinley and Power4 memory
bandwidths), and pretty much content-free. Their claim is that DGEMM
-- presumably Intel's hand-coded DGEMM -- manages 96% of peak, whilst
Cray's hand-coded DGEMM on T3Es gets 83% and IBM's 65% or so. 128
registers and 2 FP ops per cycle clearly buys you a lot.
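
To put the 96% figure in perspective, a toy peak-rate calculation (a sketch
only: the 900MHz clock and the two FP ops per cycle are the figures quoted
above; treating each op as a fused multiply-add, i.e. 2 flops, is an added
assumption, not something stated in the thread):

/* Toy peak-FLOPS estimate for the DGEMM discussion above. */
#include <stdio.h>

int main(void)
{
    double clock_hz       = 900e6;  /* prototype clock quoted above */
    double fp_ops_per_clk = 2.0;    /* FP issue width quoted above */
    double flops_per_op   = 2.0;    /* assumes each op is a fused multiply-add */

    double peak = clock_hz * fp_ops_per_clk * flops_per_op;
    printf("peak   : %.2f GFLOP/s\n", peak / 1e9);
    printf("at 96%% : %.2f GFLOP/s\n", 0.96 * peak / 1e9);
    return 0;
}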

It looks as if the results are on a prototype made with 900MHz
processors; there are also extrapolations to 1GHz. I was slightly
surprised that Intel's known-superb process engineers are having so
much trouble with this CPU, until realising that they're managing four
times the speed nVidia arranged for a comparably complex chip in a
smaller process.

I imagine Intel's Russians are better than IBM's Russians :) --
Intel's low-level-optimisation group seems, from reading bits of the
MKL documentation, to be based in Nizhny-Novgorod.

For the whole-chemistry benchmark, the McKinley system is slightly
slower per-CPU than a Power4, but doesn't hit the hard limit at 32
processors; the machine tested appears to be using 48.

I presume a McKinley will join the HP testdrive farm shortly, and then
I'll get to play with it :)

Tom

Christoph Breitkopf

unread,
Jul 10, 2002, 5:19:42 AM7/10/02
to
"Norbert Juffa" <ju...@earthlink.net> writes:

> > The downside is that the available x86-64 compiler now is a real production
> > compiler (GCC), and not some hot speed special tuning compiler nobody uses
> > to compile their apps. This gives lower SpecInt numbers. But for the user,
> > this is barely relevant.
>
> The non-Intel compilers seem to be picking up the pace. At least the GCC folks
> seem to be working hard to get Athlon SPEC results up, and have also added
> feedback directed optimizations (most powerful differentiator in the Intel compiler's
> arsenal from my observation). See http://www.suse.de/~aj/SPEC/index.html

For branch prediction, this has been in for a long time - at least since 2.95.
It didn't work that well, however. Recently introduced stuff like tail
duplication etc. seems to work very well indeed, even at this early
stage. I think gcc 3.2 looks very promising.

The newer gcc versions (even 3.1) handily beat the Intel compiler
on some real-world codes; see for example Mark Stock's Radiance benchmarks:
http://mark.technolope.org/pages/rad_bench.html
So far I have not tried feedback-directed optimization, however.

Anyway, feedback-directed optimization is not the silver
bullet of compiler technology. The speedup with the Intel
x86 compiler on bzip2 (the real one, not the SPEC version) is
exactly nothing. Maybe because the program is well-optimized
already?
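
As a concrete illustration of what feedback-directed optimisation involves
with GCC (a minimal sketch: the flags are the GCC 3.x-era ones,
-fprofile-arcs and -fbranch-probabilities, and the toy program and its
biased branch are invented purely for the example):

/* fdo_demo.c -- toy candidate for feedback-directed optimisation.
 * The file name and the hot/cold split are invented for illustration. */
#include <stdio.h>

static long work(long n)
{
    long i, sum = 0;
    for (i = 0; i < n; i++) {
        /* Heavily biased branch: taken ~99% of the time.  A static
         * heuristic has to guess the bias; a profiling run measures it. */
        if (i % 100 != 0)
            sum += i;
        else
            sum -= i;
    }
    return sum;
}

int main(void)
{
    printf("%ld\n", work(50000L));
    return 0;
}

/* Two-pass build with the GCC 3.x-era flags:
 *   gcc -O2 -fprofile-arcs fdo_demo.c -o fdo_demo           (instrumented build)
 *   ./fdo_demo                                              (writes the arc-count data)
 *   gcc -O2 -fbranch-probabilities fdo_demo.c -o fdo_demo   (recompile using the profile)
 */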

Regards,
Chris

Ken Green

unread,
Jul 10, 2002, 5:43:18 AM7/10/02
to
Bill Todd wrote:

> "Ken Green" <Ken....@kgcc.co.uk> wrote in message
> news:3D2BE887...@kgcc.co.uk...
>
> ...
>
> > Then let it die as a comodity and let HP have it back as propritory chip,
> at that
> > point all the cost constrains are out of the window and they can afford to
> put
> > nice big caches back on the thing, PA8800 has a 32MB L2 cache, McKinley
> > has to make do with a 3MB L3.
>
> I rather doubt that PA8800's 32 MB L2 cache is on-chip, however (the
> McKinley die is pretty large already, and about half of it is the 3 MB of L3
> cache) - so presumably HP could add just as much L4 cache to a McKinley
> system should they care to.
>

You're right Bill, the L2 is in the same package, but not on the same chip,
although the tags are on the CPU chip, along with the 2 cores and the two
L1 caches (the data caches are half the size of the PA8700's).

I would have thought that cost was a major factor in deciding not to follow
a similar approach in the McKinley. I guess HP can still afford more for
CPUs in their mid to upper range systems than Intel expects the IA64
marketplace to want to pay. I might be wrong; maybe we'll see Makos in
the rp24XX boxes, which start with a list price of <$1000.

>
> The better question is exactly why HP would believe that EPIC (if it indeed
> did revert to being an HP-proprietary product) would be a useful step up
> from PA-RISC which, despite relative neglect of late, has just released an
> 875 MHz part that should get about 700 SPECint2K performance if the increase
> is close to linear and next year around Madison time will get a shrink to
> 130 nm (keeping up with Madison in that regard) *plus* dual cores on the
> chip (yielding close to double the server performance per die). There's no
> evidence that EPIC has left a great deal of performance on the table just
> waiting to be realized in future iterations (though the Alpha guys can
> probably find some if anyone can, or throw it in the trash and start over),
> so whatever degree of effort it would take to keep improving Itanic could
> likely be equally effective in improving good old PA-RISC.

EPIC is PA3; it was designed as the replacement for PA-RISC. Sure, the world
has moved on a long way since 1990 when they started, but the PA8700 is not
significantly different, in terms of what it does, from the PA8000. It's not
like Alpha, where over the same time frame things like OOO have been added; PA2
had them already.

>
>
> Oh, wait: that *was* the conclusion they reportedly came to a while ago,
> but couldn't make fly at the corporate level...
>

Well, the management have well and truly nailed their flag to the IA64 mast.
Carly has already said that the whole farm has been bet on IA64; she
even went as far as buying Compaq to help prove it.

Technical people are easier to convince: generally, if you can show enough
evidence you can win arguments. They don't like being proved wrong, but
they can be.

Most corporate levels... won't be proved wrong.

>
> - bill

Cheers

Ken

Rob Young

unread,
Jul 10, 2002, 10:37:43 AM7/10/02
to
In article <3D2BE887...@kgcc.co.uk>, Ken Green <Ken....@kgcc.co.uk> writes:

>
> Then let it die as a comodity and let HP have it back as propritory chip, at that
> point all the cost constrains are out of the window and they can afford to put
> nice big caches back on the thing, PA8800 has a 32MB L2 cache, McKinley
> has to make do with a 3MB L3.
>

You make it sound as if a smaller on-chip cache is somewhat of a
handicap. Hint: name a benchmark *PA8700* beats Itanium II at.
Second challenge, which benchmark (will there be one?) will PA8800
lead Madison (aka Itanium III) at?

HP isn't moving to slower performance... a few may believe it but
the numbers just aren't there (if they are, please show us).

Rob

Nick Maclaren

unread,
Jul 10, 2002, 9:57:01 AM7/10/02
to

One area where I suspect that this may be so is applications
which use a lot of floating-point division or where the IA-64
needs an interrupt and PA-RISC does not.

But a more serious and more interesting reservation is in MPI
and OpenMP support. Time will tell how IA-64 behaves in that.

Ken Green

unread,
Jul 10, 2002, 10:57:39 AM7/10/02
to
Rob Young wrote:

> In article <3D2BE887...@kgcc.co.uk>, Ken Green <Ken....@kgcc.co.uk> writes:
>
> >
> > Then let it die as a comodity and let HP have it back as propritory chip, at that
> > point all the cost constrains are out of the window and they can afford to put
> > nice big caches back on the thing, PA8800 has a 32MB L2 cache, McKinley
> > has to make do with a 3MB L3.
> >
>
> You make it sound as if a smaller on-chip cache is somewhat of a
> handicap. Hint: name a benchmark *PA8700* beats Itanium II at.
> Second challenge, which benchmark (will there be one?) will PA8800
> lead Madison (aka Itanium III) at?
>

A smaller on-chip cache is a handicap. If they could make an Itanium II
with a bigger cache it would almost certainly go quicker.

I'm not saying that a PA8700 is faster than an I2, although I bet I could easily
write something where that would be the case - say, just find something that
fits in the 1.5MB L1 cache, but not in the I2's cache.

PA8800 won't lead I3... it won't be allowed to...
(I guess the paymasters won't let the new Alpha beat Itanium either :-)
But I doubt it could even if given permission.

All I was saying is that I think there are things that a big-system CPU
is likely to be able to afford that even a commodity server chip probably
can't.

>
> HP isn't moving to slower performance... a few may believe it but
> the numbers just aren't there (if they are, please show us).

I don't think HP is moving to a slower chip. I've never thought that;
I have no reason to doubt the people who told me back in ~94 "just
wait for the second version". I2 is quicker than PA;
for the benchmarks they quoted it's quicker than any RISC CPU.
For SPECint_base2000 P4 can beat it; for SPECfp_base2000 it's
in the clear. They haven't published the results on the SPEC web
site yet, so the other results we don't know.

For application benchmarks the results look good: the TPC-C
figures show I2 having a clear advantage over the latest P4s
and Alphas.

>
>
> Rob

Cheers

Ken


Rob Young

unread,
Jul 10, 2002, 12:07:13 PM7/10/02
to
In article <3D2C4B63...@kgcc.co.uk>, Ken Green <Ken....@kgcc.co.uk> writes:
> Rob Young wrote:
>
>> In article <3D2BE887...@kgcc.co.uk>, Ken Green <Ken....@kgcc.co.uk> writes:
>>
>> >
>> > Then let it die as a comodity and let HP have it back as propritory chip, at that
>> > point all the cost constrains are out of the window and they can afford to put
>> > nice big caches back on the thing, PA8800 has a 32MB L2 cache, McKinley
>> > has to make do with a 3MB L3.
>> >
>>
>> You make it sound as if a smaller on-chip cache is somewhat of a
>> handicap. Hint: name a benchmark *PA8700* beats Itanium II at.
>> Second challenge, which benchmark (will there be one?) will PA8800
>> lead Madison (aka Itanium III) at?
>>
>
> A smaller on-chip cache is a handicap.

In most cases it isn't. Run through SPEC and see how on-chip caches
compare. Run through TPC-C and do likewise. Now certainly, if
you have a run size that fits in an off-chip L2 but blows
out an on-chip L2/L3, you *may* have a good case, but not always
(and yes, that can be proven by example).

> If they could make an Itanium II
> with a bigger cache it would almost certainly go quicker.
>

Yeah, Madison it is code-named.

> I'm not saying that a PA8700 is faster than a I2, although I bet I could easily
> write something where that would be the case, say just find some thing that
> fits in the 1.5MB L1 cache, but not in the I2s cache.
>

Exactly. You can cook a benchmark to make an UltraSparc III look
fast too, i.e. art. Hats off to their compiler team.

> PA8800 won't lead I3... it won't be allowed to...

We descend to conspiracy theories, Good Bye.

Rob

Ken Green

unread,
Jul 10, 2002, 11:15:56 AM7/10/02
to
Rob Young wrote:

> In article <3D2C4B63...@kgcc.co.uk>, Ken Green <Ken....@kgcc.co.uk> writes:
> >
>
> Exactly. You can cook a benchmark to make an UltraSparc III look
> fast too, i.e. art. Hats off to their compiler team.

At least they didn't write the benchmark. If you can design the benchmark too
then you can prove any system is faster than any other.... my slide rule is faster
than your Cray, coz it boots faster... etc...

But definitely hats off to the compiler guys. Since the benchmark result still
stands I assume none of the competitors have shown the optimisation to
be art-specific. Also no one else seems to have managed to pull off the same
trick yet.

>
>
> > PA8800 won't lead I3... it won't be allowed to...
>
> We descend to conspiracy theories, Good Bye.
>
> Rob

:-)

Cheers

Ken

Sander Vesik

unread,
Jul 10, 2002, 11:30:39 AM7/10/02
to
In comp.arch Alberto <albe...@libero.it> wrote:
> "Bernd Paysan" <bernd....@gmx.de> ha scritto nel messaggio news:p12fga...@miriam.mikron.de...

>> Nick Maclaren wrote:
>> > Very likely. Hence my remark. By the time that the Madison
>> > reaches 1.8 GHz, it is likely that the Opteron will be at 2.5 GHz,
>> > perhaps 3.0 GHz. If AMD introduce a large cache version at the
>> > latter speed, Opteron, too, could reach 1200 SpecInt. Maybe.
>>
>> I thought the current Athlons (at 1.8GHz, end 0.18u/initial 0.13u) should
>> already be in the ~700 SpecInt range. The Hammer has several improvements,
>> which should give about 25% per-clock improvement.
>
> In all application? :-).

In most processor-intensive applications. It needs to, as it is supposed to
be the next-gen AMD processor for both 32 and 64 bits. Also, most of the
enhancements apply equally to 32-bit and 64-bit code.

--
Sander

+++ Out of cheese error +++

Sander Vesik

unread,
Jul 10, 2002, 11:33:37 AM7/10/02
to
In comp.arch JF Mezei <jfmezei...@videotron.ca> wrote:
> Sander Vesik wrote:
>> Well, in this scenario, ia64 does collapse - it would be more effective for hp to
>> continue using hp-pa and mips for hp-ux/nsk and to just migrate everybody still around
>> to hp-pa / hp-ux from alpha instead.
>
> No. By the time Intel admits that IA64 won't become industry standard and is
> relegated to HP's proprietary chip, HP will have already migrated all its eggs
> onto IA64 and at that point, it makes more sense to continue to develop IA64
> than to go to whatever other architecture exists at that time.

But HP does not need to wait for intel to decide that ia64 won't make an industry
standard - it can reach that conclusion on its own and act accordingly. At least
in theory it should be able to.

Douglas Siebert

unread,
Jul 10, 2002, 11:35:40 AM7/10/02
to
"Dennis O'Connor" <dm...@primenet.com> writes:

>"Nick Maclaren" <nm...@cus.cam.ac.uk> wrote ...
>>

>> Well, considering AMD get 1733 MHz with a 0.18 process, don't you
>> think that 2500 MHz with a 0.13 SOI process is fairly conservative?

>No. AMD's "0.18 process" is said to have already been using
>130nm transistor gate dimensions, Nick. So no gain there.
>And the reduced wire runs don't necessarily buy you much
>because your wire-to-wire spacing has decreased proportionally,
>as well as your wire width. But the die gets smaller, and that
>is always good for cost (but bad for power density).


So the story goes. If you accept it 100% that AthlonXPs use the same
gate lengths as their .13u process (and not some sort of in-between
like .15u), then you can consider AMD's .18u ceiling to be 1400MHz, the speed
of the fastest non-XP Athlon. For .5u->.35u, .35u->.25u, and .25u->.18u,
Intel managed to about double clock speed each time. I'm not going to
argue that this means Intel will do it with the P4 (4 GHz for Northwood
seems unlikely at this time) or that just because Intel can do something,
AMD can.

So if you start at 1400MHz, give AMD only 50% for the transition from
.18u bulk to .13u SOI, that's 2100MHz. Then there's the matter of those
two additional pipestages, which, if done right, ought to come pretty
close to a 20% gain. Then you have your 2500MHz. That'd be for a
mature process, of course. Paul DeMone's calculations indicated he
thought the "3400+" rating AMD is planning to intro Hammer at would require 2GHz.
That's less than a 50% bump from 1400MHz including those pipestages.
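
Spelling that arithmetic out (a back-of-the-envelope sketch; the 1400MHz
base, the 50% process gain and the ~20% pipestage gain are the assumptions
stated above, not measurements):

#include <stdio.h>

int main(void)
{
    double base_mhz     = 1400.0;  /* fastest non-XP 0.18u Athlon, per above */
    double process_gain = 1.50;    /* assumed gain from .18u bulk -> .13u SOI */
    double pipe_gain    = 1.20;    /* assumed gain from two extra pipestages */

    printf("estimate: %.0f MHz\n", base_mhz * process_gain * pipe_gain);
    /* ~2520 MHz, i.e. the ~2500 MHz figure above */
    return 0;
}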

BTW, if AMD was "eating their future" with the Athlon XP, what exactly
is Intel doing with the smaller features used at .13u for the Pentium
4 @ 2.4GHz and above? :)

http://www.intel.com/pressroom/archive/releases/20020402comp.htm

Sander Vesik

unread,
Jul 10, 2002, 11:35:48 AM7/10/02
to
In comp.arch Andrew Reilly <and...@gurney.reilly.home> wrote:

> On Wed, 10 Jul 2002 00:41:44 +1000, Nick Maclaren wrote:
>
> Are those figures (700-800 specint in IA-32 mode) achievable with an
> in-order IA-32 translator, or are you suspecting that the IA-32 wart is
> an all-singing out-of-order implementation, even given an EPIC
> substructure? How would those IA-32 figures compare to those for SPEC
> binaries compiled for PIII or P4, do you think?

I think he is talking about Opteron at that point, not IA64 8-)

>
> --
> Andrew

John Dallman

unread,
Jul 10, 2002, 1:20:00 PM7/10/02
to
In article <3D2B3890...@videotron.ca>, jfmezei...@videotron.ca
(JF Mezei) wrote:

> Since it would be the same architecture, I would surmise that the Hammer
> compilers would simply need to be the existing 8086 compilers modified
> to use 64 bit adresses.

The changes are a bit more drastic than that - more registers, changes of
addressing modes, and so on. It's still a simpler task than IA-64, but not
trivial. Since MS have announced that they're going to support x86-64
64-bit mode with a version of Windows, one presumes that they'll have to
get their compiler generating 64-bit code so as to be able to build it.

---
John Dallman j...@cix.co.uk
"Any sufficiently advanced technology is indistinguishable from a
well-rigged demo"

Bill Todd

unread,
Jul 10, 2002, 1:29:55 PM7/10/02
to

"Rob Young" <you...@encompasserve.org> wrote in message
news:4L$FtJx...@eisner.encompasserve.org...

> In article <3D2BE887...@kgcc.co.uk>, Ken Green
<Ken....@kgcc.co.uk> writes:
>
> >
> > Then let it die as a comodity and let HP have it back as propritory
chip, at that
> > point all the cost constrains are out of the window and they can afford
to put
> > nice big caches back on the thing, PA8800 has a 32MB L2 cache, McKinley
> > has to make do with a 3MB L3.
> >
>
> You make it sound as if a smaller on-chip cache is somewhat of a
> handicap. Hint: name a benchmark *PA8700* beats Itanium II at.
> Second challenge, which benchmark (will there be one?) will PA8800
> lead Madison (aka Itanium III) at?

That's easy: since the 8700 just got a clock-rate boost that should bring
its SPECint performance to about 700 (i.e., close enough to McKinley's to be
considered competitive), and the 8800 will benefit from the same order of
shrink that Madison will, any SPECint-style benchmark that makes use of both
cores on the 8800 should blow the doors off Madison.

- bill

JF Mezei

unread,
Jul 10, 2002, 1:48:36 PM7/10/02
to
Nick Maclaren wrote:
> As I said, nothing has changed in that respect. HP cannot afford
> it, because the IA-64 architecture is a very high-cost one to
> develop and develop for. Not just the hardware, but the software.

But IA64 has been developed. It is almost a real chip by now. *IF* the
architecture was properly done, then incremental improvements shouldn't cost
that much. Also, if half the chip resides in the compilers, then HP still
ends up paying a big part of the chip through its own proprietary compiler(s).

In the end, a HP proprietary IA64 will be no different from Sun's SPARC or
IBM's Power.

Seems to me that, performance being equal, the 64-bit chip that will win will
be the one that can capture the greater volume.

Consider the theoretical possibility that Apple, now having fresh source code
for its OS, revamps it to run 64 bits on the current Power_x chips. That
would surely beat IA64's volumes as long as IA64 is considered a high-margin
product restricted to servers.


Oh, by the way, want to see some competition for Compaq's slow Pentium 3 blade
servers?
http://www.apple.com/xserve/

Just a glimpse:
With dual 1GHz PowerPC G4s, up to 2GB DDR SDRAM, two 64-bit 66MHz PCI slots
(plus a third combination PCI/AGP slot), dual Gigabit Ethernet, FireWire, USB
and four Ultra ATA/100 Apple Drive Module bays, Apple’s best-in-class 1U
configuration compares favorably not just with 1U competitors, but even with
many 2U servers.

Check out the specs at: http://www.apple.com/xserve/specs.html
and you'll see where VMS is really lagging behind when you look at what comes
out of the box with VMS versus the Apple server offering (in terms of
software and support for networking etc.).

Bill Todd

unread,
Jul 10, 2002, 1:46:31 PM7/10/02
to

"Ken Green" <Ken....@kgcc.co.uk> wrote in message
news:3D2C01B6...@kgcc.co.uk...

...

> Your right Bill, the L2 is in the same package, but not on the same, chip,
> altough the tags are on the CPU chip, along with 2 cores, and the two
> L1 caches (the data caches are half the size of the PA8700)

Well, being in the same package does help - and also helps explain why
adding additional PA processors to a system is rather expensive. Apologies
for not being more familiar with the beast.

>
> I would have thought that cost was a major factor in deciding not to
follow
> a similar approach in the McKinley, I guess HP can still afford more for
> CPUs in their mid to upper range systems than Intel expects the IA64
> market place to want to pay. I might be wrong, maybe we'll see Makos in
> the the rp24XX boxes which start with a list price of <$1000.

I wouldn't hold my breath waiting, given that even the runt-of-the-litter
processor chips cost more than that.

However, it may be a matter of decreased marginal benefit. It sounds as if
the PA chips didn't have room for much on-chip cache (perhaps they do now
but there's never seemed to be sufficient gain to modify the original
architecture) so went the MCM route. 3 MB on the chip is at least starting
to get respectable, and the 6 MB Madison will offer next year even more so,
especially given that it's most likely noticeably lower-latency than PA's
cache: with that much fast cache on the chip, the marginal utility of a
larger one off-chip may be fairly small.

Understood. However, since EPIC has come nowhere near to fulfilling the
"three times as fast as Alpha!" promises made 'way back then (i.e., it's
here, and it's competitive, but no more than that), and since PA has
continued to slog ahead to the point where the 875 MHz 8700s shouldn't be
significantly slower at SPECint than McKinley (with the shrunk, dual-core
8800s on the way next year to out-perform Madison in such server-style use),
and since the only paths forward I've seen anyone suggest for EPIC (SMT,
multi-core dice) apply equally well to the PA architecture, and since EPIC
seems to be at the bottom of the barrel when it comes to performance per
Watt, I still maintain that it would seem difficult to justify funneling
further development dollars into EPIC instead of into PA (and/or Alpha) *if*
Itanic should turn out to be only an HP-proprietary product rather than
become the 'industry standard' its supporters keep hoping it will.

>
> >
> >
> > Oh, wait: that *was* the conclusion they reportedly came to a while
ago,
> > but couldn't make fly at the corporate level...
> >
>
> Well the management have well a truely nailed their flag to the IA64 mast.
> Carley has already said that the whole farm has been bet on IA64, she
> even went as far as buying Compaq to help prove it.
>
> Technical people are easier to convince, generally if you can show enough
> evidence you can win arguements, they don't like being proved wrong, but
> they can be.
>
> Most corperate levels... won't be proved wrong.

Amen.

- bill

Nick Maclaren

unread,
Jul 10, 2002, 2:04:45 PM7/10/02
to
In article <3D2C7373...@videotron.ca>,

JF Mezei <jfmezei...@videotron.ca> wrote:
>Nick Maclaren wrote:
>> As I said, nothing has changed in that respect. HP cannot afford
>> it, because the IA-64 architecture is a very high-cost one to
>> develop and develop for. Not just the hardware, but the software.
>
>But IA64 has been developped. It is almost a real chip by now. *IF* the
>architecture was properly done, then incremental improvements shouldn't cost
>that much. Also, if half the chip resides in the compilers, then HP still
>ends up paying a big part of the chip through its own proprietary compiler(s).
>
>In the end, a HP proprietary IA64 will be no different from Sun's SPARC or
>IBM's Power.

I think that you are still underestimating the ongoing cost caused
by the IA-64's complexity. I have located some pretty foul bugs
in pretty foul systems in my time - sometimes where the vendors
had given up - and the thought of the IA-64 makes me gibber. And
I am one of the last few people in Cambridge (not to say the UK)
with recent experience of tracking down the sort of bug that I
am referring to.

The cost of complexity does not go away even after the operating
systems and compilers are 'tested' - it continues to cause trouble
to people trying to locate bugs in 'working' applications. Note
that the bug can well be a simple one - the complexity is what
can hide it. And the IA-64 architecture is the most complicated
I have ever seen, by a factor of three.

Intel can pay ISV's to put the extra effort in. Could HP do the
same, on its own? If not, we would get into the situation that
some important applications were ported late, badly or not at all
to HP's unique IA-64 systems. Customers would then follow the
applications, as they have done for 40 years.

>Seems to me that, performance being equal, the 64 bit chip that will win will
>be the one that can capture a greater volume.

I hope that no single chip wins. Intel were intending the IA-64
to wipe out all other general-purpose designs; this would be bad
for computing and, in the long term, bad for Intel. Witness what
happened to IBM. An AMD monopoly would be no better.

>Consider the theoretical possibility of Apple, now having fresh source code
>for its OS, would revamp it to run 64 bits on the current Power_x chips. That
>would surely beat IA64's volumes as long as IA64 is considered a high-margin
>product restricted to servers.

Not really. Even in terms of number of systems, there are a lot
more medium- and high-end servers than Apple systems.

JF Mezei

unread,
Jul 10, 2002, 3:58:16 PM7/10/02
to
Sander Vesik wrote:

> But HP does not need to wait for intel to decide that ia64 won't make an industry
> standard - it can reach that conclusion on its own and act accordingly. At least
> in theory it should be able to.
> +++ Out of cheese error +++


No. Carly bet her job, her career and her reputation on this. Remember that
prior to the merger, her competence was in question. Remember that during the
merger pregnancy, most HP staff were against her project. And now that she has
given birth to the "new" HP, there is a lot of work ahead to prevent Wall
Street from declaring that buying Compaq was a big costly mistake.

It will be quite a while before Carly has enough tenure/power to admit that
there were some strategic mistakes, such as focusing entirely on an unproven
chip.

Compare Carly's decisions with those of IBM.

IBM developed the PowerPC and evolved it into Power_x, and only well after the
architecture had proven itself did IBM start to deploy it onto its mainstream
products, and only about 10 years later is IBM confident enough to plan to
migrate MVS from the 360 architecture to Power (and even that will take a while).

Meanwhile, HP committed and forced Compaq to commit to a totally unproven
architecture, with a different, also untested, paradigm, well before that chip
became available. That is a heck of a big gamble.

konabear

unread,
Jul 10, 2002, 3:52:46 PM7/10/02
to
IMHO the TEL part of Wintel needs to get to 64 bits. MS is doubling its
memory requirements every major release. It's already in the 128-256MB
range. That's only a couple of notches below the 4 GB limit of 32 bits. I've
already seen motherboards saying they support 4 GB of main memory, but
because I/O device buffers consume address space the system will only report
3.2-3.8 GB of memory. Sounds like a VAX 7000!
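
As a rough illustration of where the "missing" memory goes (a sketch: the
512MB set aside for I/O apertures is an invented example, not a measured
figure):

#include <stdio.h>

int main(void)
{
    double addr_space_gb = 4.0;   /* 2^32 bytes of 32-bit physical address space */
    double io_space_gb   = 0.5;   /* invented example: PCI/AGP apertures, BIOS, etc. */

    printf("RAM visible below 4 GB: %.1f GB\n", addr_space_gb - io_space_gb);
    /* anything mapped for I/O comes straight out of the range the OS can
       report as ordinary memory, hence figures like 3.2-3.8 GB */
    return 0;
}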

If MS is going to let PCs get into video editing in a major way, they're
going to need GBs of memory for the editors and thus more than 32 bits.

The fact that Windows needs to go there means there will be a major market
for 64 bit chips. Will IA-64 be the solution?

Todd
"Terry C. Shannon" <terrys...@attbi.com> wrote in message
news:CFBW8.337959$6m5.3...@rwcrnsc51.ops.asp.att.net...
>
> "Nick Maclaren" <nm...@cus.cam.ac.uk> wrote in message
> news:agenat$gsj$1...@pegasus.csx.cam.ac.uk...
> >
> > In article <tRAW8.13228$K_4.3...@twister1.libero.it>,
> > "Alberto" <uapalb...@libero.it> writes:
> > |>
> > |> > Well, McKinley may be announced, but there is little evidence that
> > |> > HP are backing up their fine words with buttered parsnips.
> > |> >
> > |> > HP's "buy online" for the USA has only the zx2000 (plus its 6
Merced
> > |> > configurations) and says "There is no stock currently available"
for
> > |> > all of them.
> > |> >
> > |> > Neither the zx2000 nor the zx6000 are in HP's UK list yet. And,
yes,
> > |> > we are a serious potential customer!
> > |> >
> > |> > If anyone manages to buy one using a normal mechanism, I should
> > |> > appreciate hearing what, for delivery when and in which country.
> > |>
> > |> You have too hurry :-).
> >
> > Perhaps :-)
> >
> > |> Itanic performs very well now, but 2 years are
> > |> necessary to penetrate a full of difficulties market.
> >
> > Considering that we have been getting an earful of how wonderful
> > the imminent IA-64 systems will be for over 5 years now, I am not
> > sympathetic. More seriously, this is IA-64's last chance; if it
> > isn't established within 12-18 months, it will die.
> >
>
> Sounds like a reasonable assessment.
>
>


Nick Maclaren

unread,
Jul 10, 2002, 3:58:40 PM7/10/02
to
In article <3D2C91CE...@videotron.ca>,

JF Mezei <jfmezei...@videotron.ca> wrote:
>Sander Vesik wrote:
>
>> But HP does not need to wait for intel to decide that ia64 won't make an industry
>> standard - it can reach that conclusion on its own and act accordingly. At least
>> in theory it should be able to.
>
>No. Carly bet her job, her carreer and her reputation on this. Remember that
>prior to the merger, her competence was in question. Remember that during the
>merger pregnancy, most HP staff were against her project. And now that she has
>given birth to the "new" HP, there is a lot of work ahead to prevent Wall
>Street from declaring that buying Compaq was a big costly mistake.

That is certainly true.

>It will be quite a while before Carly has enough tenure/power to admit that
>there were some strategic mistakes such as fofucisng entirely on an unproven
>chip.

From her behaviour, she has a similar character to Margaret Thatcher.
If so, she will never admit that she was wrong, not if her refusal
to do so means the collapse of HP.

>Compare Carly's decisions with those of IBM.
>
>IBM developped the PowerPC and evolved it into Power_x and only well after the
>architecture had proven itself did IBM start to deploy it onto its maintream
>products, and only about 10 years alter is IBM confident enough to plan to
>migrate MVS from the 360 architecture to Power (and even that will take a while).

I am sorry, but I was very close to IBM at the time, and that is
fantasy. IBM's faults were those of a large, bureaucratic, poorly
communicating organisation, and the PowerPC/POWER shambles was even
more impressive than HP's and Compaq's IA-64 behaviour has been to
date - though not as much as Intel's. I could go into horrible
detail, but it is grossly off-group.

>Meanwhile, HP committed and forced Compaq to commit to a totally unproven
>architecture with different, also untested, paradigm well before that chip
>becomes available. That is a heck of a big gamble.

No, that is NOT so. Until fairly recently, HP was hedging its bets,
and was developing PA-RISC as actively as ever, just more quietly.
And HP did NOT force Compaq into the migration - Capellas had
already taken that decision, announced it, disbanded the Alpha
development and sold off some of the IPR and people before the
merger with HP started.

Please give discredit where discredit is due.

Ken Green

unread,
Jul 10, 2002, 4:46:48 PM7/10/02
to
Bill Todd wrote:

> "Ken Green" <Ken....@kgcc.co.uk> wrote in message
> news:3D2C01B6...@kgcc.co.uk...
>
> ...
>
> > Your right Bill, the L2 is in the same package, but not on the same, chip,
> > altough the tags are on the CPU chip, along with 2 cores, and the two
> > L1 caches (the data caches are half the size of the PA8700)
>
> Well, being in the same package does help - and also helps explain why
> adding additional PA processors to a system is rather expensive. Apologies
> for not being more familiar with the beast.
>

OK, a quick guide to PA caches.

Most PA processors have only L1 cache. Since PA's pipeline
stores results in the clock cycle after execute, this cache has a
load-to-use latency of 2 cycles.

Up to the PA8200 this cache was external; most PA8200-based
systems had 2MB for data & 2MB for instructions. Beyond 240MHz
they moved to on-chip cache with the PA8500.

The PA8500 has 1MB for data and 512K for instructions. The PA8600
also has the same 1.5MB of L1 cache.

The PA8700 (& PA8700+, not sure of the difference apart from clock speed)
increases both cache sizes by 50%.

The next-generation PA8800 is the first mainstream PA processor
to bother with L2 cache. Each core has a 768K data and a 768K instruction
cache, and they share a 32MB L2 cache with a 40-cycle latency.

>
> >
> > I would have thought that cost was a major factor in deciding not to
> follow
> > a similar approach in the McKinley, I guess HP can still afford more for
> > CPUs in their mid to upper range systems than Intel expects the IA64
> > market place to want to pay. I might be wrong, maybe we'll see Makos in
> > the the rp24XX boxes which start with a list price of <$1000.
>
> I wouldn't hold my breath waiting, given that even the runt-of-the-litter
> processor chips cost more than that.

The rp2430 uses a PA8700 in a sub-$1000 box; it's not at max speed (650MHz)
but it's a PA8700.

>
>
> However, it may be a matter of decreased marginal benefit. It sounds as if
> the PA chips didn't have room for much on-chip cache

Well, going beyond 2.25MB of cache on a chip kind of makes it quite big :-)

> (perhaps they do now
> but there's never seemed to be sufficient gain to modify the original
> architecture) so went the MCM route. 3 MB on the chip is at least starting
> to get respectable, and the 6 MB Madison will offer next year even more so,
> especially given that it's most likely noticeably lower-latency than PA's
> cache:

The papers talk about:
L1 being 2x 16K with 1-cycle latency
L2 being 256K with 5-cycle latency
L3 being 3MB with 12-cycle latency

Sorry, I don't know the I2's pipeline, so I'm not sure whether these are
real load-to-use latency figures.
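
For what it's worth, those latencies can be folded into a toy
average-load-latency figure (only the 1/5/12-cycle numbers come from the
papers mentioned above; the hit-rate split and the memory latency are
invented purely for illustration):

#include <stdio.h>

int main(void)
{
    /* Fraction of loads satisfied at each level -- invented example numbers. */
    double f_l1 = 0.90, f_l2 = 0.07, f_l3 = 0.02, f_mem = 0.01;
    /* Latencies in cycles: L1/L2/L3 from the figures above, memory assumed. */
    double t_l1 = 1.0, t_l2 = 5.0, t_l3 = 12.0, t_mem = 150.0;

    double avg = f_l1 * t_l1 + f_l2 * t_l2 + f_l3 * t_l3 + f_mem * t_mem;
    printf("average load latency ~ %.2f cycles\n", avg);
    return 0;
}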

> with that much fast cache on the chip, the marginal utility of a
> larger one off-chip may be fairly small.
>

I believe there's quite a difference between the Power4 SPEC scores
when they're configured with 64MB of L3 cache instead of 128MB.

True

> (i.e., it's
> here, and it's competitive, but no more than that), and since PA has
> continued to slog ahead to the point where the 875 MHz 8700s shouldn't be
> significantly slower at SPECint than McKinley

Assuming perfect scaling it should score 701 SPECint2000 & 677 for FP,
so it's still quite some way adrift of I2.

For Mako (PA8800) they're estimating SPECint 900 & 1000 for FP; compiler gains
could well take I2 past 900 SPECint by the time Mako ships, and the FP score is
way down on I2.

> (with the shrunk, dual-core
> 8800s on the way next year to out-perform Madison in such server-style use),
> and since the only paths forward I've seen anyone suggest for EPIC (SMT,
> multi-core dice) apply equally well to the PA architecture, and since EPIC
> seems to be at the bottom of the barrel when it comes to performance per
> Watt,

I've no idea what the wattage of a PA8700 is, but it's a similar size to the
I2, and it all runs at 875MHz, so I wouldn't expect it to be much lower than
the I2's.

> I still maintain that it would seem difficult to justify funneling
> further development dollars into EPIC instead of into PA (and/or Alpha) *if*
> Itanic should turn out to be only an HP-proprietary product rather than
> become the 'industry standard' its supporters keep hoping it will.

I'm sure HP expects it to remain/become an "industry standard"

>
>
> >
> > >
> > >
> > > Oh, wait: that *was* the conclusion they reportedly came to a while
> ago,
> > > but couldn't make fly at the corporate level...
> > >
> >
> > Well the management have well a truely nailed their flag to the IA64 mast.
> > Carley has already said that the whole farm has been bet on IA64, she
> > even went as far as buying Compaq to help prove it.
> >
> > Technical people are easier to convince, generally if you can show enough
> > evidence you can win arguements, they don't like being proved wrong, but
> > they can be.
> >
> > Most corperate levels... won't be proved wrong.
>
> Amen.
>

:-)

>
> - bill

Cheers

Ken


Ken Green

unread,
Jul 10, 2002, 4:53:22 PM7/10/02
to
konabear wrote:

> IMHO the TEL part of Wintel needs to get to 64 bits. MS is doubling its
> memory requirements every major release. It's already in the 128-256MB
> range. That's only a couple of notches below 4 GB limit of 32 bits. I've
> already seen motherboards saying they support 4 GB of main memory but
> because I/O device buffers consume space that the system will only report
> 3.2-3.8 GB of memory. Sounds like a VAX 7000!
>
> If MS is going to let PCs get into video editting in a major way, they're
> going to need GBs of memory for the editors and thus more than 32 bits.

Currently video editing uses less RAM than still picture editing.
The quantities of data involved with video are just so big that trying
to do it in RAM is impractical. For DV, 2GB is 9 minutes of video. So it's
all done from disk. Besides, there's no real benefit in having many frames
in memory at one time. For stills, though, 100MBs are nothing, and
you often want several stages of manipulation in RAM at once,
otherwise the undo button takes too long :-)
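
For reference, the 9-minute figure falls straight out of the DV data rate
(a quick sketch; ~3.6 MB/s is the commonly quoted rate for DV video plus
audio):

#include <stdio.h>

int main(void)
{
    double dv_mb_per_s = 3.6;     /* commonly quoted DV rate, video + audio */
    double ram_mb      = 2048.0;  /* the 2 GB figure above */

    printf("%.1f minutes of DV fit in 2 GB\n", ram_mb / dv_mb_per_s / 60.0);
    /* ~9.5 minutes, i.e. the "9 mins" quoted above */
    return 0;
}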

Ken Green

unread,
Jul 10, 2002, 4:57:01 PM7/10/02
to
Nick Maclaren wrote:

The original roadmap was altered because I1 was so late; the PA8600,
PA8800 & PA8900 were not on the original map.

Bill Todd

unread,
Jul 10, 2002, 5:11:11 PM7/10/02
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> wrote in message
news:agi3lg$f0k$1...@pegasus.csx.cam.ac.uk...

> In article <3D2C91CE...@videotron.ca>,
> JF Mezei <jfmezei...@videotron.ca> wrote:

...

> >Meanwhile, HP committed and forced Compaq to commit to a totally unproven
> >architecture with different, also untested, paradigm well before that
chip
> >becomes available. That is a heck of a big gamble.
>
> No, that is NOT so. Until fairly recently, HP was hedging its bets,
> and was developing PA-RISC as actively as ever, just more quietly.
> And HP did NOT force Compaq into the migration - Capellas had
> already taken that decision, announced it, disbanded the Alpha
> development and sold off some of the IPR and people before the
> merger with HP started.

While we don't really know (though can perhaps make reasonable guesses
about) what effect the planned merger had on Alpha's future, Carly & Curly
have clearly stated that merger talks were *well* under way long before June
25, 2001. My own suspicion is that if Curly had not been able to look
forward to an imminent merger to save his job (at least temporarily) he
might have been considerably less eager to throw away such an important part
of Compaq's profit stream - and also would have had far less incentive to
avoid competing with Itanic (and HP) at least at the high-end.

- bill

Nick Maclaren

unread,
Jul 10, 2002, 5:14:42 PM7/10/02
to
In article <3D2C9F9D...@kgcc.co.uk>,
Ken Green <Ken....@kgcc.co.uk> wrote:

>Nick Maclaren wrote:
>
>> No, that is NOT so. Until fairly recently, HP was hedging its bets,
>> and was developing PA-RISC as actively as ever, just more quietly.
>
>The origenal roadmap was altered because I1 was so late, the PA8600,
>PA8800 & PA8900 were not on the origenal map.

No, they weren't on the MAP, but they assuredly were under
development! Perhaps only in draft (i.e. on paper etc.), but
you don't deliver a complex chip within 2 years of starting from
scratch.

Back in 1995, many people said that Intel had shafted HP, because
Intel could survive a seriously late Merced, but HP had stopped
PA-RISC development. I said "Maybe but, if I were HP's CEO, I
would be developing PA-RISC quietly in case IA-64 is delayed or
flops."

Well - lo and behold! - HP announced the previously denied 8500,
to ship a mere 6 months after announcement. And, shortly thereafter,
HP announced a complete roadmap, stretching on to the 8900. I feel
that I and the previous HP CEO (whatever his name was) would have
got on, if we had ever met :-)

Nick Maclaren

unread,
Jul 10, 2002, 5:17:44 PM7/10/02
to
In article <Pp1X8.69661$Im2.2...@bin2.nnrp.aus1.giganews.com>,

Yes, that is true, but I have good reason to believe that Capellas
had decided to scrap Alpha by the end of 2000, and have some reason
to believe that the negotiations with Intel were well under way
in early 2001.

Terry C. Shannon

unread,
Jul 10, 2002, 5:33:54 PM7/10/02
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> wrote in message
news:agi89o$ian$1...@pegasus.csx.cam.ac.uk...
<snip>

>
> Yes, that is true, but I have good reason to believe that Capellas
> has decided to scrap Alpha by the end of 2000, and have some reason
> to believbe that the negotiations with Intel were well under way
> in early 2001.

Completely consistent with what I've heard. ;-}


Bill Todd

unread,
Jul 10, 2002, 5:58:55 PM7/10/02
to

"Ken Green" <Ken....@kgcc.co.uk> wrote in message
news:3D2C9D37...@kgcc.co.uk...

> Bill Todd wrote:
>
> > "Ken Green" <Ken....@kgcc.co.uk> wrote in message
> > news:3D2C01B6...@kgcc.co.uk...

...

> > > I would have thought that cost was a major factor in deciding not to


> > follow
> > > a similar approach in the McKinley, I guess HP can still afford more
for
> > > CPUs in their mid to upper range systems than Intel expects the IA64
> > > market place to want to pay. I might be wrong, maybe we'll see Makos
in
> > > the the rp24XX boxes which start with a list price of <$1000.
> >
> > I wouldn't hold my breath waiting, given that even the
runt-of-the-litter
> > processor chips cost more than that.
>
> The rp2430 uses a PA8700 in a sub $1000 box, it's not at max speed
(650Mhz)
> but it's a PA8700.

Apologies again for being PA-knowledge-impaired: I thought 'Mako' above was
a nick-name for McKinley. Your comment does however make it clear that PA
already provides 64-bit competition for McKinley at a significantly lower
price point, just as SPARC does - which takes considerable wind out of the
sails of those who claim Itanic will win on pure price because other 64-bit
manufacturers (conveniently excluding AMD) can't compete down there.

...

> > with that much fast cache on the chip, the marginal utility of a
> > larger one off-chip may be fairly small.
> >
>
> I believe theres quite a difference between the Power4 SPEC scores
> when they're configured with 64MB of L3 cache instead of 128MB.

Ah, but that's comparing off-chip cache to off-chip cache - and at a
particularly interesting pair of granularities, since IIRC no SPEC test
currently requires more than about 200 MB to run in (and quite a few would
run completely in 128 MB). POWER4 has about 1.5 MB of on-chip (L2) cache,
which is smaller than most if not all of the SPEC tests, so external cache
is still a major win.

I recognize that as long as cache is well over an order of magnitude faster
than main memory then cache speed tends to be less critical than cache miss
rate. But miss rate (at least for truly random data that greatly exceeds
the cache size) goes down pretty slowly with size increases - at best a
square-root relationship and ISTR it's even worse - so for at least *some*
values of size and speed a much faster on-chip cache can work as well as a
slower off-chip one, and 6 MB is a sufficient fraction of 32 MB that while
it might be *somewhat* slower overall the amount may not be all that
significant.
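
To make that concrete (a sketch only: the square-root rule of thumb is the
one cited above, and the 2% baseline miss rate at 6 MB is invented):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Rule of thumb from above: miss rate scales roughly as 1/sqrt(size). */
    double base_mb = 6.0, base_miss = 0.02;   /* 2% at 6 MB is an invented baseline */
    double big_mb  = 32.0;

    double big_miss = base_miss * sqrt(base_mb / big_mb);
    printf(" 6 MB cache: %.2f%% of accesses miss\n", base_miss * 100.0);
    printf("32 MB cache: %.2f%% of accesses miss\n", big_miss * 100.0);
    /* ~2.3x fewer misses for 32 MB; whether that outweighs the lower
       latency of the smaller on-chip cache depends on memory latency. */
    return 0;
}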

...

> > since PA has
> > continued to slog ahead to the point where the 875 MHz 8700s shouldn't
be
> > significantly slower at SPECint than McKinley
>
> assuming perfect scaling it should score 701 SPECint 2000 & 677 for FP
> so it's still quite some way addrift of I2.

I don't consider 13% to be a 'significant' difference, given all the other
variables associated with system performance and price/performance. And I
did specifically say SPECint, since that's what's important (at least
compared with SPECfp) to most of the server market.

>
> For Mako (PA8800) their estimating SPECint 900 & 1000 for FP,

Given that IIRC Mako is a process generation ahead of 8700, that doesn't
sound like very much of a speed-up (though adjusting to a new process can
take a bit of time). And the difference in percentage gain between SPECint
and SPECfp is also intriguing.

> compiler gains
> could well take I2 past 900 SPECint by the time Mako ships,

Maybe. But more to the point I thought Madison was supposed to arrive by
about then: while I don't expect the rosiest estimates for its performance
to materialize, I do expect something like 1100 SPECint from it and possibly
somewhat more (anything much over 1200 would surprise me, though).

However, in server use Mako's dual cores (I assume they're still planned)
should give it a definite leg up.

...

> > I still maintain that it would seem difficult to justify funneling
> > further development dollars into EPIC instead of into PA (and/or Alpha)
*if*
> > Itanic should turn out to be only an HP-proprietary product rather than
> > become the 'industry standard' its supporters keep hoping it will.
>
> I'm sure HP expects it to remain/become an "industry standard"

I'm sure you're right. But the context of this discussion was your
suggestion that HP would continue Itanic as a proprietary effort if it did
*not* become industry-standard, and that was what I was addressing above.

- bill

Andy Isaacson

unread,
Jul 10, 2002, 6:04:01 PM7/10/02
to
Removing comp.os.vms from newsgroups, any relevance is long gone.

In article <3D2C91CE...@videotron.ca>,
JF Mezei <jfmezei...@videotron.ca> wrote:

>Compare Carly's decisions with those of IBM.
>
>IBM developped the PowerPC and evolved it into Power_x and only well after the
>architecture had proven itself did IBM start to deploy it onto its maintream
>products, and only about 10 years alter is IBM confident enough to plan to
>migrate MVS from the 360 architecture to Power (and even that will take
>a while).

Er, the S/390 is migrating to Power??? Do you have any reference for
that statement? I find it hard to believe that they're going to throw
out the investment they've made in zArchitecture.

-andy

Bill Todd

unread,
Jul 10, 2002, 6:04:28 PM7/10/02
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> wrote in message
news:agi89o$ian$1...@pegasus.csx.cam.ac.uk...

Indeed. But if you examine your statement you'll see that it clearly states
that he had "announced it, disbanded the Alpha development and sold off some
of the IPR and people before the merger with HP started" - and that's what I
responded to.

- bill

Nick Maclaren

unread,
Jul 10, 2002, 6:08:25 PM7/10/02
to
In article <Mb2X8.26543$Bt1.1...@bin5.nnrp.aus1.giganews.com>,

Bill Todd <bill...@metrocast.net> wrote:
>"Nick Maclaren" <nm...@cus.cam.ac.uk> wrote in message
>news:agi89o$ian$1...@pegasus.csx.cam.ac.uk...

>>
>> Yes, that is true, but I have good reason to believe that Capellas
>> has decided to scrap Alpha by the end of 2000, and have some reason
>> to believbe that the negotiations with Intel were well under way
>> in early 2001.
>
>Indeed. But if you examine your statement you'll see that it clearly states
>that he had "announced it, disbanded the Alpha development and sold off some
>of the IPR and people before the merger with HP started" - and that's what I
>responded to.

All right, 'started' is a word with many meanings :-)

I certainly don't believe that HP instructed Compaq to scrap the
Alpha, but there is a considerable grey area involving nudges and
winks ....

Stephen Fuld

unread,
Jul 10, 2002, 6:09:03 PM7/10/02
to

"JF Mezei" <jfmezei...@videotron.ca> wrote in message
news:3D2C91CE...@videotron.ca...

>
> IBM developped the PowerPC and evolved it into Power_x and only well after
the
> architecture had proven itself did IBM start to deploy it onto its
maintream
> products, and only about 10 years alter is IBM confident enough to plan to
> migrate MVS from the 360 architecture to Power (and even that will take a
while).

Has IBM announced a decision to migrate MVS to POWER? I guess I missed
that. Given the huge amount of legacy software, a fair amount of it in
assembler, much dating back decades, this seems like a huge decision.

--
- Stephen Fuld
e-mail address disguised to prevent spam.


Nick Maclaren

unread,
Jul 10, 2002, 6:12:12 PM7/10/02
to
In article <3g2X8.91907$UT.60...@bgtnsc05-news.ops.worldnet.att.net>,

I have heard it, but not seen definite confirmation. I can witness
that it has been seriously considered from about 1990, and a decade
is about what it takes IBM to take a major decision :-)

Terry C. Shannon

unread,
Jul 10, 2002, 6:13:50 PM7/10/02
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> wrote in message
news:agib8p$kdk$1...@pegasus.csx.cam.ac.uk...

> In article <Mb2X8.26543$Bt1.1...@bin5.nnrp.aus1.giganews.com>,
> Bill Todd <bill...@metrocast.net> wrote:
> >"Nick Maclaren" <nm...@cus.cam.ac.uk> wrote in message
> >news:agi89o$ian$1...@pegasus.csx.cam.ac.uk...
> >>
> >> Yes, that is true, but I have good reason to believe that Capellas
> >> has decided to scrap Alpha by the end of 2000, and have some reason
> >> to believbe that the negotiations with Intel were well under way
> >> in early 2001.
> >
> >Indeed. But if you examine your statement you'll see that it clearly
states
> >that he had "announced it, disbanded the Alpha development and sold off
some
> >of the IPR and people before the merger with HP started" - and that's
what I
> >responded to.
>
> All right, 'started' is a word with many meanings :-)
>
> I certainly don't believe that HP instructed Compaq to scrap the
> Alpha, but there is a considerable grey area involving nudges and
> winks ....
>

Yep, a lot of what went on will likely remain opaque to us mere mortals.
Still, I believe Alpha's fate was sealed well before any serious acquisition
negotiations got underway.


JF Mezei

unread,
Jul 10, 2002, 6:31:00 PM7/10/02
to
Merced was terribly slow, but it was more or less a beta version of the architecture.

Can one consider McKinley to be "production" quality and judge IA64 based on
McKinley?

If McKinley is now production quality, isn't it fair to state that from now
on, IA64 will progress at roughly the same rate as competing chips? If so,
its relative position in the pack wouldn't change much over time. It may lead
for a few months, then be overtaken by another, and so on.

What I don't quite understand: EPIC should simplify a chip design, since it
offloads lots of the logic to the compilers, right? If that is the case, is
it fair to state that an EPIC chip's improvements will come mostly through
clock increases, with far less done through chip design?

For instance, with a philosophy of having much of the performance logic in the
chip, the Alpha engineers were able to add fancy stuff such as branch
prediction and pre-fetching/decoding of both instruction streams after an
IF statement. These improvements came on top of the clock increases to yield
an even faster chip.

But just how much fancy stuff can be added to IA64 before it is no longer
EPIC? And if IA64 is improved to the point that it is effectively a RISC chip,
won't the compilers that were built to do the work the chip doesn't do have to
be reworked into more conventional compilers that leave much of the
performance enhancement to the chip?
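
As a minimal sketch (not from the original posts) of what "offloading the
logic to the compilers" means in practice: an out-of-order RISC such as Alpha
runs the branchy form below and relies on run-time branch prediction, while
an EPIC compiler is expected to if-convert the same IF into predicated
operations, so both arms are issued and a predicate keeps only one result.
The function names here are made up for illustration, and plain C can only
approximate predication with a bit mask; on IA-64 the two arms would be
guarded by predicate registers.

    #include <stdio.h>

    /* Branchy form: at run time the hardware must guess which way
       the IF goes (branch prediction, as on Alpha). */
    static int clamp_branchy(int x, int limit)
    {
        if (x > limit)
            return limit;
        return x;
    }

    /* If-converted form: no branch at all.  Both arms are computed
       and a "predicate" (here a bit mask) selects the one that
       survives, which is roughly what an EPIC compiler emits. */
    static int clamp_predicated(int x, int limit)
    {
        int p = -(x > limit);            /* all-ones if x > limit, else 0 */
        return (limit & p) | (x & ~p);   /* keep exactly one of the arms  */
    }

    int main(void)
    {
        printf("%d %d\n", clamp_branchy(7, 5), clamp_predicated(7, 5));
        printf("%d %d\n", clamp_branchy(3, 5), clamp_predicated(3, 5));
        return 0;
    }

The point, as far as the question above goes, is that choosing between the
two arms of the IF moves from silicon into the compiler's scheduling, which
is why the claim is that an EPIC chip needs less control logic per core.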

Bill Todd

unread,
Jul 10, 2002, 6:21:29 PM7/10/02
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> wrote in message
news:agibfs$kfm$1...@pegasus.csx.cam.ac.uk...

> In article <3g2X8.91907$UT.60...@bgtnsc05-news.ops.worldnet.att.net>,
> Stephen Fuld <s.f...@PleaseRemove.att.net> wrote:

...

> >Has IBM announced a decision to migrate MVS to POWER? I guess I missed
> >that. Given the huge amount of legacy software, a fair amount of it in
> >assembler, much dating back decades, this seems like a huge decision.
>
> I have heard it, but not seen definite confirmation. I can witness
> that it has been seriously considered from about 1990, and a decade
> is about what it takes IBM to take a major decision :-)

I recently encountered a reference to this being targeted for POWER6 (I
think - *could* have been POWER5), but whether in any really official IBM
context I don't recall.

- bill

JF Mezei

unread,
Jul 10, 2002, 6:38:24 PM7/10/02
to
Nick Maclaren wrote:
> can hide it. And the IA-64 architecture is the most complicated
> I have ever seen, by a factor of three.

In what way would the architecture be 3x more complex? Shouldn't it be much
simpler, since the chip doesn't have to bother with out-of-order execution
logic and so on?

What "gizmos" does the chip have ? Does it rely a lot on pipelining ? does
it have branch prediction ? out of order execution ?

Or does it rely on the compiler to funnel each instruction into the right
execution stream?

In other words, what are the differences between EPIC and modern RISC chips
in terms of how instructions are processed?

Is IA64 more complex because it is EPIC, or because it was made by Intel with
lots of compromises to please HP?

Also, what is different between Merced and McKinley?
