Unfuckingbelievable. Sun better start getting it, and soon.
http://linuxtoday.com/news_story.php3?ltsn=2003-02-12-017-26-OS-BZ-SV
______________________________________________________________________
perhaps you don't get the fact that sun's market cap is even higher than
intuit's, and that the absolute stock price does not mean that much
here? look up something called a reverse stock split (e.g. palm just did
one).
sun has $5 billion in cash in the bank, as well as low short term and
long term debts. it is also the #1 vendor of high end enterprise
servers, ahead of HP and IBM. it does not look like a company in bad
trouble to me, at least not in the short and medium term (although in
today's economy, most every tech company is in some sort of trouble). in
the long run, unless we suddenly see spending again on the order of the
dotcom boom, they will have to re-evaluate their business plan.
Actually, this is one of the first things Scott has said in a long time that
makes sense. Everyone is jumping on the linux bandwagon right now. It is
not ready for high end yet. With HP and IBM helping, it might be, but then
Scott will let HP and IBM dump money into Linux. He will still benefit from
linux being improved. If he deems it enterprise ready in the future, he can
just come out with a line of servers for it. I think he is taking an
approach similar to IBM's new stance on Itanium. If Itanium becomes
popular, IBM will use it, if not they reap the benefits of money dumped into
power 4 development. Sun does not have to invest in linux to benefit. For
the moment Sun should push Open Office and other apps that can take money
from MS and not cost him a lot.
Pshht, yeah, place yer bets on that happening.
> http://linuxtoday.com/news_story.php3?ltsn=2003-02-12-017-26-OS-BZ-SV
>
From:
http://linuxtoday.com/news_story.php3?ltsn=2002-12-31-010-26-NW-BZ-SV
"Sun in September said it plans to make inexpensive PCs based on Linux.
Before that, it launched a low-end Linux server to better battle similar
offerings from Dell."
Oh yeah, I remember something about that. Say, how are those selling?
He... hello? Remember, they were going to "cut off Dell's air supply"?
...Is anybody there?
"'Linux isn't a market. It's a crankshaft, a widget,' scoffs McNealy,
using an analogy befitting the son of a former American Motors vice
chairman..."
Ohhhhh... ok, so Scotty's a poor little rich boy, I see. Daddy must
have set him up pretty damn good for him to be able to ruin giants like
Sun singlehandedly with complete impunity.
Or this:
http://linuxtoday.com/news_story.php3?ltsn=2002-10-09-020-26-NW-BZ-SW
"'The real issue--our mistake--is that we thought the whole world would
have gone to 64-bit (Solaris). It turns out that 32-bit is good enough,
and a ton of 32-bit apps run on x86 and Linux,' McNealy said. He
positioned Solaris as an upgrade path for customers needing more
computing muscle. 'Linux is very compatible with Solaris,' he said..."
No Scotty. The real issue--Sun's Board's mistake--was, is, and will be
until the company's assets are liquidated at auction for pennies on the
dollar, not hurling you out on your ass when you first revealed your "I
hate Microsoft" business plan.
: http://linuxtoday.com/news_story.php3?ltsn=2002-10-09-020-26-NW-BZ-SW
: "'The real issue--our mistake--is that we thought the whole world would
: have gone to 64-bit (Solaris). It turns out that 32-bit is good enough,
: and a ton of 32-bit apps run on x86 and Linux,' [...]
32-bit is not just good enough - it's a lot more efficient than 64-bit.
The primary effect of 64 bits is to double the size of your
address/data lines, making everything twice as big, twice as
far apart, and twice as slow.
--
__________
|im |yler http://timtyler.org/ t...@tt1.org
actually, the latest article on linux servers (that it jumped 100% last
year, compared to 5% for industry as a whole) said that sun's linux
servers made $1.4 million (?) or so in the 4th quarter (even though they
just introduced it), so i guess they are doing ok....we ourselves got a
sun linux box earlier this year...it's very inexpensive (at $2000+),
comes with 1 GB RAM and a 1.4 GHz CPU, has the SUN logo (which is way
cooler than a dell logo), and is a distinctive royal blue in color.
have no idea about desktops...i don't think they sell desktops...
oh, yep, here it is:
"Sun Microsystems, which started selling Linux servers just last year,
took in $1.3 million in the fourth quarter, up from $912,500 in the
third quarter."
> Actually, this is one of the first things Scott has said in a long time that
> makes sense. Everyone is jumping on the linux bandwagon right now. It is
> not ready for high end yet. With HP and IBM helping, it might be, but then
> Scott will let HP and IBM dump money into Linux. He will still benefit from
> linux being improved. If he deems it enterprise ready in the future, he can
> just come out with a line of servers for it. I think he is taking an
> approach similar to IBM's new stance on Itanium. If Itanium becomes
> popular, IBM will use it, if not they reap the benefits of money dumped into
> power 4 development. Sun does not have to invest in linux to benefit. For
> the moment Sun should push Open Office and other apps that can take money
> from MS and not cost him a lot.
It strikes me that IBM, HP, and Sun are all doing the same thing: selling
their bigger UNIX boxes with proprietary OSes, while they also bring lower
end Linux/x86 products on-line. I think they will all let the market
choose if/when to transition ... though maybe IBM and HP make the choice
higher profile.
Scott is just being Scott ;-), grandstanding as he says how different he
is ... and then in the next paragraph refers to Sun's new Linux/x86 blade
product.
jjens
>actually, the latest article on linux servers (that it jumped 100% last
>year, compared to 5% for industry as a whole) said that sun's linux
>servers made $1.4 million (?)
Wow. That will really drive the stock back up to $65 per share, huh.
>The primary effect of 64 bits is to double the size of your
>address/data lines, making everything twice as big, twice as
>far apart, and twice as slow.
Gee maybe you should tell Intel, AMD and the others that so they stop
wasting their time and money.
>JTK <gsagdj...@ahgkjhsadh.com> wrote:
>
>: http://linuxtoday.com/news_story.php3?ltsn=2002-10-09-020-26-NW-BZ-SW
>
>: "'The real issue--our mistake--is that we thought the whole world would
>: have gone to 64-bit (Solaris). It turns out that 32-bit is good enough,
>: and a ton of 32-bit apps run on x86 and Linux,' [...]
>
>32-bit is not just good enough - it's a lot more efficient than 64-bit.
If 32-bit is better than 64-bit then surely 16-bit must be even
better. Why stop there? 4-bit processors are REALLY cheap.
If you're moving 64-bits in parallel, explain how things are "twice as
slow".
not very likely in the near term...sun's stock price was wildly
inflated (and split 3 times actually) during the dotcom boom, and has
now settled to more reasonable levels.
btw, installed that first JDK yet? you know, posting here for like the
gazillionth time about how java sucks without actually having written a
SINGLE LINE OF JAVA CODE is really pathetic.
Tim Tyler <t...@tt1.org> wrote:
> The primary effect of 64 bits is to double the size of your
> address/data lines, making everything twice as big, twice as
> far apart, and twice as slow.
Not really. Data fetching is usually the main computational
bottleneck, and _it_ goes (almost, there's one log_2 of word length
spoiler involved in _resolving_ addresses, particularly from cache)
twice as fast.
Moreover, 64 bits removes the 32 bit code image size bottleneck, which
lets larger programs work as integrated wholes.
I can confirm from working at eBay that 2 Gigabyte code images, though
obscene, are no longer unusual.
One of the neglected impacts of OOP is that compiled code sizes grow
without bounds, as fairly huge classes are sucked in to use some petty
feature or other, and as everybody and their mother on a project does
separate incompatible subclasses of initial classes, since it is now
so easy to do that.
Unless we find a spiffy new programming paradigm without that nasty
side-effect, that makes the 64 bit revolution pretty much unavoidable.
While it's true that going to 64 bits jacks a bunch of space for leads
into IC layouts, that's a penalty you pay _once_, while Moore's law
continues to function as always.
Before Moore's law finally runs out of steam, very likely 32 bits
won't be enough address space even internal to the IC any more.
So, the net effect is to move you back about 18 months in time _once_
with regard to performance, and then progress continues as before,
while the benefits of 64 bits continue to accrue forever after. Among
other of those benefits is not having to spend grotesque amounts of
effort to keep code images below 2 gigabytes!
64 bits is _surely_ a large enough address space for any software
humans will ever construct.(*)
xanthian.
(*) Then again, the famous blase comment about everything worth
patenting having been invented before the turn of the 20th century was
perhaps equally ill-advised. It's a bit humbling to realize that the
limits of computing might well be in devices with component counts on
the order of magnitude of Avogadro's number if nanotechnology for
computing pans out, which breaks 64 bit addressing.
Maybe we should just go directly to 256 bit addressing today and get
the pain over with all at once.
Use procedural/relational. Rather than have one big EXE, you can
divide projects up into "tasks", and tasks mostly use the
database to communicate state and state changes.
-T-
I have a choice of two pizzas. The slices of pizza #1 are 19 square inches.
The slices of pizza #2 are 17 square inches.
Which pizza is bigger?
--
Evidence Eliminator is worthless: "www.evidence-eliminator-sucks.com"
--Tim Smith
If you want to buy underpowered, overpriced boxes that look terrific, Apple
seems a better choice than Sun.
$1.3 million divided by $2000 per box = 650 boxes. Dell is not frightened by
this.
LOL - actually, we compared it to a dell tower server running linux and
the prices came out about even for those specs. in addition, the cobalt
comes loaded with SunONE ASP, MySQL, Tomcat 4, Apache, etc. i STILL like
the color scheme though.
i wouldn't doubt that....IBM is actually the major seller of linux boxes
(in revenues), not sun....but that's pretty ok for a company that just
started selling it at that time. i believe it actually had the highest
growth quarter per quarter, so there's obviously large room for growth
in the coming months and years.
:>: http://linuxtoday.com/news_story.php3?ltsn=2002-10-09-020-26-NW-BZ-SW
:>
:>: "'The real issue--our mistake--is that we thought the whole world would
:>: have gone to 64-bit (Solaris). It turns out that 32-bit is good enough,
:>: and a ton of 32-bit apps run on x86 and Linux,' [...]
:>
:>32-bit is not just good enough - it's a lot more efficient than 64-bit.
: If 32-bit is better than 64-bit then surely 16-bit must be even
: better. Why stop there? 4-bit processors are REALLY cheap.
Actually I'm an advocate of using 4-bit processors a lot more -
due to the parallelism benefits.
When lots of these are fitted onto a single piece of silicon they
are more commonly known as FPGAs.
Programmable logic is generally slower than conventional logic of the
same cost - but it's stupendously more flexible.
:>The primary effect of 64 bits is to double the size of your
:>address/data lines, making everything twice as big, twice as
:>far apart, and twice as slow.
: Gee maybe you should tell Intel, AMD and the others that so they stop
: wasting their time and money.
I said as much at the time. Folks such as those at ARM seem to have
grasped the idea of bigger not necessarily being better.
:>JTK <gsagdj...@ahgkjhsadh.com> wrote:
:>
:>: http://linuxtoday.com/news_story.php3?ltsn=2002-10-09-020-26-NW-BZ-SW
:>
:>: "'The real issue--our mistake--is that we thought the whole world would
:>: have gone to 64-bit (Solaris). It turns out that 32-bit is good enough,
:>: and a ton of 32-bit apps run on x86 and Linux,' [...]
:>
:>32-bit is not just good enough - it's a lot more efficient than 64-bit.
:>
:>The primary effect of 64 bits is to double the size of your
:>address/data lines, making everything twice as big, twice as
:>far apart, and twice as slow.
: If you're moving 64-bits in parallel, explain how things are "twice as
: slow".
Making your data bus wider doesn't have much in the way of a negative
performance impact.
However, making the address bus wider does slow things down - for the
reasons I stated - components get forced apart from one another by the
width of the bus, and consequently signals take longer to cover the
distance in question.
This is a symptom of the problem of the desire to present the programmer
with a huge, uniform flat address space. Of course these days most
applications programmers never see any sort of address space - so
this nicety is lost on them.
I've lived through the 8-16 bit transition, the 16-bit to 32-bit
transition - and now the 32-bit to 64-bit transition looms.
Processor instruction languages should be more like Java bytecode on
this front.
Their instruction languages should *not* have hard-coded limitations
relating to the maximum size of the memory they can address. That is a
design screw-up - a hangover from the dark ages of computing. It is a
failure on encapsulation. It is exposing an irrelevant implementation
detail.
: I've lived through the 8-16 bit transition, the 16-bit to 32-bit
: transition - and now the 32-bit to 64-bit transition looms.
If I was more sceptical I might suggest that this was a /deliberate/
design screw-up.
Built in obsolescence encourages customers to upgrade regularly.
I don't doubt the marketing department likes very much the idea
of a 64-bit revolution. Suddenly they have an even bigger number
to write on their products.
Perhaps developments beyond that could be dealt with by marketing in
much the same way that oversampling on CDs was.
Oversampling went something like:
2 bits > 4 bits > 8 bits > 16 bits > 1 bit (bitstream).
> However, making the address bus wider does slow things down - for the
> reasons I stated - components get forced apart from one another by the
> width of the bus, and consequently signals take longer to cover the
> distance in question.
> This is a symptom of the problem of the desire to present the programmer
> with a huge, uniform flat address space. Of course these days most
> applications programmers never see any sort of address space - so
> this nicety is lost on them.
FWIW, I find the single-address space OSes interesting. I hope that (a)
this form of 64bit proccessor will be suitable and (b) more people play
around in this area.
[snip]
jjens
>> Unless we find a spiffy new programming paradigm without that nasty
>> side-effect, that makes the 64 bit revolution pretty much unavoidable.
> Use procedural/relational. Rather than have one big EXE, you can
> divide projects up into "tasks", and tasks mostly use the
> database to communicate state and state changes.
You're preaching to the choir, I'm a firm believer in the Unix idea of
chains of small, one-purpose programs to do tasks, but "we" in the
above was meant to be the applications software writers for industry,
not my vest-pocket pet mouse and I.
That industry still believes firmly in one programming monolith for
any one job: database locking and rollback issues are much simpler
when there is only one program addressing the database, and most of
the application programming crowd have only enough database skills to
get by, not a deep understanding of concurrency issues, so they (feel
they) are "safer" writing great wallowing pigs of programs.
Those programs, however, have outgrown or are intensely stressing the
32 bit address space in many cases among my recent employments.
xanthian.
How exactly is Apple's G4 under-powered? A dual processor system that
can do 21 Gigaflops for $2700. Show me an Intel system that can do 1
gigaflop. Let alone have 2 MB of L3 cache per processor. PCs have major
performance bottlenecks.
The Unix pipe model is great for batch processing. It's hopelessly
crude for complex interactive systems.
You almost seem to be saying that big walloping OO applications
are best for those who can't or won't "get" databases, and
thus reinvent the database on their own in a gradual,
organic fashion (the result usually resembling a network database, not
a relational one in the end), and that we need more RAM space
to handle this hand-rolled sprawl.
>
> Those programs, however, have outgrown or are intensely stressing the
> 32 bit address space in many cases among my recent employments.
>
> xanthian.
>
-T-
It is true that main memory bandwidth and latency is critical for many
computational applications, but this needn't be constrained by the
word size of the CPU registers and instructions.
Already memory architectures are much more sophisticated.
> One of the neglected impacts of OOP is that compiled code sizes grow
> without bounds, as fairly huge classes are sucked in to use some petty
> feature or other, and as everybody and their mother on a project does
> separate incompatible subclasses of initial classes, since it is now
> so easy to do that.
>
> Unless we find a spiffy new programming paradigm without that nasty
> side-effect, that makes the 64 bit revolution pretty much unavoidable.
Even with OO there are few programs with object code sizes
beyond that usable with 32 bits CPUs. It's all data.
In any case this phenomenon is probably only a result of bad programming
languages and their common implementations. Something like SmartEiffel
already specializes only those pieces of code which need to be so, and
are known might be called in code.
This means you can write very full featured classes, and use genericity
and inheritance plenty, and you will not pay costs for routines
which will never be called.
> So, the net effect is to move you back about 18 months in time _once_
> with regard to performance, and then progress continues as before,
> while the benefits of 64 bits continue to accrue forever after. Among
> other of those benefits is not having to spend grotesque amounts of
> effort to keep code images below 2 gigabytes!
Is that a significant problem compared to data sizes?
With lots of objects, much of your data consists of addresses to other
objects. This becomes twice as big. The instructions may also increase in
size to allow for bigger offsets. With the increase in data/instruction size
you then need to add more cache.
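A rough back-of-envelope in Java terms (purely illustrative, ignoring
object headers, padding and whatever cleverness a particular VM might
apply):

    // Hypothetical node type -- the point is just to count reference fields.
    class TreeNode {
        TreeNode left;   // a reference: ~4 bytes on a 32-bit VM, ~8 on a 64-bit VM
        TreeNode right;  // ditto
        int value;       // 4 bytes either way
    }
    // Reference payload per node: ~8 bytes on 32 bits, ~16 bytes on 64 bits.
    // The pointer-heavy part of the data really does double, and the caches
    // have to absorb that growth.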
Mark Thornton
Uh, huh. And 640K ought to be enough for anybody. ;-)
--
Mike Smith
If you can get 'em! 4-bitters ain't real common these days.
Hmmmm, an only very weakly supportable statement. Don't confuse the
Gentle Reader like that.
> Programmable logic is generally slower than conventional logic of the
> same cost - but it's stupendously more flexible.
But it's generally slower than conventional logic of the same cost.
Guess who wins.
Indeed. It doubles your performance. Twice as fast.
> However, making the address bus wider does slow things down - for the
> reasons I stated - components get forced apart from one another by the
> width of the bus, and consequently signals take longer to cover the
> distance in question.
>
Whahuh? But doubling the data bus width *doesn't* do this? No, you're
misinformed on this subject. The only effect wider data and address
busses *might* have is *perhaps* increasing the transfer latency, which
gets absorbed by the caches anyway.
> This is a symptom of the problem of the desire to present the programmer
> with a huge, uniform flat address space.
Sweet, isn't it? Hell of a lot better than "segments" ain't it? Or the
horror of bank switching <shudder>.
> Of course these days most
> applications programmers never see any sort of address space - so
> this nicety is lost on them.
>
"applications programmers never see any sort of address space"?? Try:
char bigarray[<<four terabytes>>+1];
on a 32-bit machine and then a 64-bit machine. In whatever language you
want, even the now-defunct Java. You just saw address space.
> I've lived through the 8-16 bit transition, the 16-bit to 32-bit
> transition -
So you know firsthand the benefits that have accrued from each
"transition". I know I'll never go back to a 6805 on my desktop!
> and now the 32-bit to 64-bit transition looms.
>
Kick-ASS! Can you imagine a 3 gig machine that's TWICE as fast as the
32-bitters available now, pumping out the next first-person-shooter?!?
> Processor instruction languages should be more like Java bytecode on
> this front.
>
> Their instruction languages should *not* have hard-coded limitations
> relating to the maximum size of the memory they can address. That is a
> design screw-up - a hangover from the dark ages of computing. It is a
> failure on encapsulation. It is exposing an irrelevant implementation
> detail.
Oh man, are you turned around! When you get to the machine code level,
implementation details are ALL YOU GOT! There's a point where the
abstraction ends and the work begins man! And it's machine code.
> Already memory architectures are much more sophisticated.
Multi-layer memory architectures are a _response_ to slow
fetch rates, not particularly a cure for them except where
the code and data accesses both happen to be conveniently
localized, and as code size and IC size and density
increase, secondary and even tertiary cache sizes grow until
they too stress _some_ addressing range. In fact, the cache
sizes are in particular chosen by how many addressing lines
to that cache the CPU architecture can afford; if they were
instead sized based on available chip real-estate, they
wouldn't be almost universally an even power of two in size.
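For anyone who wants to see why the power-of-two sizing falls out of the
addressing, here is a toy decomposition in Java; the cache geometry is
invented for illustration, not taken from any real chip:

    // Hypothetical direct-mapped cache: 64 KB total, 32-byte lines.
    // A 32-bit address splits into tag | index | offset, so the cache
    // size is pinned by how many index bits the designers give it.
    class ToyCache {
        static final int OFFSET_BITS = 5;   // 2^5  = 32-byte line
        static final int INDEX_BITS  = 11;  // 2^11 = 2048 lines -> 64 KB

        static int offset(int addr) { return addr & ((1 << OFFSET_BITS) - 1); }
        static int index(int addr)  { return (addr >>> OFFSET_BITS) & ((1 << INDEX_BITS) - 1); }
        static int tag(int addr)    { return addr >>> (OFFSET_BITS + INDEX_BITS); }
    }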
> Even with OO there are few programs with object code sizes
> beyond that usable with 32 bits CPUs. It's all data.
Never worked on a program whose SLOC counts were in the
hundreds of millions, huh? And that never counts the
sucked-in library files that accompany them, which when you
are incorporating COTS-ware libraries from every vendor on
the block are also huge. Yes, data is in there, but
typically only enough to have one instance of each toplevel
object under work, not the whole dataset at once, so
essentially it is still at base a code size problem.
One Navy site where I worked had one application that had
6000 _libraries_ of (C code) compiled object modules brought
in at link time.
> In any case this phenomenon is probably only a result of
> bad programming lang[ua]ges and their common
> implementations. Something like SmartEiffel already
> specializes only those pieces of code which need to be so,
> and are ["]known might be called in["] code.
> This means you can write very full featured classes, and
> use genericity and inheritence plenty, and you will not
> pay costs for routines which will never be called.
Well, command and control software tends not to have any
such thing as "routines which will never be called", though
it tends to have lots of contingency routines whose
probability of use is incredibly small, but need to be
accessible faster than you can read them off a disk when
trouble approaches at missile speeds.
>> So, the net effect is to move you back about 18 months in
>> time _once_ with regard to performance, and then progress
>> continues as before, while the benefits of 64 bits continue
>> to accrue forever after. Among other of those benefits is
>> not having to spend grotesque amounts of effort to keep
>> code images below 2 gigabytes!
> Is that a significant problem compared to data sizes?
Absolutely. Programmers now are going for less than minimum
wage; i.e., high unemployment, but in usual times, the cost
of software effort keeps going up while the cost of CPU
cycles, RAM, and disk code image footprint keeps going down.
The economics of doing whatever makes the coder's life
simpler are a no-brainer. Just like garbage collection, for
a price, removes memory management from the coder's
immediate concerns, so does larger addressing range remove
lots of nasty work-arounds from the effort. You're hearing
this from someone who spent years writing heavily overlaid
Fortran to fit 5 megabytes of machine code footprint into
MS-DOS's 640Kbyte memory limitations. That easily consumed
20% of the whole programming effort: think about refactoring
your entire code base from scratch at each and every new
functionality addition to understand the flavor of the
problem that still faces programmers of "software in the
large" with today's 2 Gigabyte limits.
And "data sizes" is a misnomer, as mentioned above. There
are many earth sciences and military intelligence
applications with single problem data sizes in hundreds of
terabytes, but that says next to nothing about code
footprints of the programs that process that data.
xanthian.
One totally awesome side effect of Moore's law: I have code
now that, left to run for a month (which might well be the
solution time for a modest sized problem of the type I make
my hobby), makes calls on particular individual methods in
the _trillions_.
On my _laptop_.
Happy am I to have lived to see it.
Oh MAN do I want to work where you work! ;-)
Stop reading this if you don't want to be offended. That being said .
. .
The mistake Sun is making is failure to recognize the rapid
acceleration of commodity off-the-shelf hardware at prices that make
most people literally rub their eyes in disbelief.
Our admin at work came in and told me about an offer extended to us
from Dell in which we can purchase a 3.0 GHz hyperthreaded P4 with
80 GB of storage, 1 GB of RAM, and a gigabit ethernet card for $1700.
With hardware that cheap, and with the prices of load balancers
falling nearly as precipitously (not quite as steep, but close!), us
folks in the "low to middle end" market are rapidly coming up to speed
with the big boys.
The one thing that hasn't cracked the PC world (yet) is cheap
availability of extremely vast quantities of disk. While the current
disk sizes and cost are too staggering to describe, we have yet to see
a PC that can store 4 terabytes. That's a problem, and is what keeps
companies like Sun in business. They can provide the storage
solution, and "oh by the way for a few tens of thousands of dollars
more you can get this zesty Sun server. Wouldn't it be great to get
the support for your storage from the same place that provides the
support for your application server?" Hardly...
The truth of the matter is this: if you gave me X dollars, and said
'go build a data warehouse that is highly available' I could probably
come up with a more scalable granular architecture using ONE big file
server (from Sun or IBM) and a dozen or so clustered P4 3.0 GHz
hyperthreaded PCs with 2 GB RAM running a J2EE application server. The
alternative solution Sun will sell you? Purchase this fairly
expensive UltraSPARC, run Solaris, pay for support from us, and "trust
us, this will suit your needs."
Ya, 5 years ago Sun provided hardware that was leaps and bounds above
commoditized PC hardware. Not anymore. The rate of expansion and the
competitive forces driving PC hardware advancement far outpace any
product schedules I've seen from Sun.
On a side note, I just don't understand why people would purchase Sun
to begin with. Consider: say you know you need N bogomips of
processor. You could purchase a Sun that provides 2N bogomips
(chances are slim that there's a Sun out there that *perfectly* fits
your needs, and the product lines from Sun seem to make pretty steep
jumps from low end to middle end to high end). So chances are you're
going to plunk down a hefty chunk of change to purchase hardware that
you probably won't use for another year.
Now let's examine purchasing PCs instead. The first benefit is
that they are probably cheaper. The second is that they are more
granular, allowing for degraded performance during an outage (assuming
you have load balancing in place). The third benefit is that you
scale as demand dictates. Sure you keep just ahead of the curve, but
when the time comes to upgrade you go to your Dell rep and ask for 3
new servers for $3000. That lasts you for another 3 or 6 months.
And, if demand is really outpacing supply, you'll have adequate
revenues (from site usage) that'll give you the $3000 BEFORE you pay
Dell. In the previous solution you are SPENDING MONEY before site
usage gives you revenue. And we wonder why Sun made a lot of money
during the dot-com boom - people pie-in-the-sky'ed their estimates for
site hardware.
It just seems ludicrous in this age to use Sun equipment when PC
equipment is just so darn fast, so darn cheap, so darn reliable, and
so darn available.
<snip>
>
> It just seems ludicrous in this age to use Sun equipment when PC
> equipment is just so darn fast, so darn cheap, so darn reliable, and
> so darn available.
Of course there are applications where having a 64 bit address space is an
advantage. I have some cases where keeping the data within 2GB is a real
pain. So far there are no 64 bit commodity PCs.
Mark Thornton
:> Programmable logic is generally slower than conventional logic of the
:> same cost - but it's stupendously more flexible.
: But its generally slower than conventional logic of the same cost.
: Guess who wins.
That depends on the context, but:
ASICs perform well in specialised applications, and the FPGA wins through
not being specialised for any one application in general purpose
applications.
Today's processors tend to be highly specialised for executing serial
streams of instructions. They are "completely mashed" by FPGAs whenever
you need to do anything in parallel.
:> :>: "'The real issue--our mistake--is that we thought the whole world would
:> :>: have gone to 64-bit (Solaris). It turns out that 32-bit is good enough,
:> :>: and a ton of 32-bit apps run on x86 and Linux,' [...]
:> :>
:> :>32-bit is not just good enough - it's a lot more efficient than 64-bit.
:> :>
:> :>The primary effect of 64 bits is to double the size of your
:> :>address/data lines, making everything twice as big, twice as
:> :>far apart, and twice as slow.
:>
:> : If you're moving 64-bits in parallel, explain how things are "twice as
:> : slow".
:>
:> Making your data bus wider doesn't have much in the way of a negative
:> performance impact.
: Indeed. It doubles your performance. Twice as fast.
Bigger data busses certainly do have their attractions ;-)
:> However, making the address bus wider does slow things down - for the
:> reasons I stated - components get forced apart from one another by the
:> width of the bus, and consequently signals take longer to cover the
:> distance in question.
: Whahuh? But doubling the data bus width *doesn't* do this?
It does indeed - among other things.
: No, you're misinformed on this subject. The only effect wider data
: and address busses *might* have is *perhaps* increasing the transfer
: latency, which gets absorbed by the caches anyway.
Nope.
:> This is a symptom of the problem of the desire to present the programmer
:> with a huge, uniform flat address space.
: Sweet, isn't it? [...]
It's terrible. It's like having all URLs constrained to be the same
length. Forget about the fact that some data is accessed thousands of
times a second, and other data is accessed once in a blue moon. Give
everything an equally long and cumbersome address - no matter *how*
important it is.
A flat address space creates the illusion that all memory locations are
equally accessible. However this is an illusion - and a misleading
one - some information will be stored close at hand and others will
be far away.
:> Of course these days most applications programmers never see any sort
:> of address space - so this nicety is lost on them.
: "applications programmers never see any sort of address space"?? Try:
: char bigarray[<<four terabytes>>+1];
: on a 32-bit machine and then a 64-bit machine. In whatever language you
: want, even the now-defunct Java. You just saw address space.
Java programmers never see the *machine's* address space. Sure they can
create arrays - but what has that got to do with anything? Those arrays
can be implemented however the JVM's designer likes.
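If you want to see what the Java programmer actually runs into, it is
something like this (untested; exactly what happens depends on the VM
and its heap settings, not on the machine's bus width):

    public class BigArray {
        public static void main(String[] args) {
            try {
                // The largest any single Java array can be is bounded by
                // the int index type (~2 billion elements), whatever the hardware.
                byte[] big = new byte[Integer.MAX_VALUE];
                System.out.println("got " + big.length + " bytes");
            } catch (OutOfMemoryError e) {
                // On a default heap this is what you will almost certainly see.
                System.out.println("the VM said no: " + e);
            }
        }
    }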
:> I've lived through the 8-16 bit transition, the 16-bit to 32-bit
:> transition - [...]
: So you know firsthand the benefits that have accrued from each
: "transition".
...and all the code that failed to make it across the divides.
:> Processor instruction languages should be more like Java bytecode on
:> this front.
:>
:> Their instruction languages should *not* have hard-coded limitations
:> relating to the maximum size of the memory they can address. That is a
:> design screw-up - a hangover from the dark ages of computing. It is a
:> failure on encapsulation. It is exposing an irrelevant implementation
:> detail.
: Oh man, are you turned around! When you get to the machine code level,
: implementation details are ALL YOU GOT! There's a point where the
: abstraction ends and the work begins man! And it's machine code.
A high-level and extremely exposed location. Look at the
incompatibilities it caused at the 8->16 bit and 16->32 bit
transitions. If everyone wrote code in Java they'd be OK - but many folk
are still writing in lower level languages - and many of them will see
their programs break - or run in a slower emulated mode - and practically
all the .EXEs in existence will need to be run under x86 emulation :-(
This last point is one of the main reasons we have been stuck with
the disastrous x86 instruction set for so long. Everyone knows
what a mess it is - but most of the implementation details are
exposed in the public API - and none of them can be changed
without destroying backwards compatibility - including little
details like the maximum size of addressable memory.
True ... that's why there is the Itanium II which is a 64-bit processor.
There are quite a few 64 bit processors either available now or coming soon,
but none of them are available as "commodity PCs". They are all at quite a
significant premium to 32 bit equipment. Maybe the Athlon 64 will change
this when it arrives.
Mark Thornton
> One Navy site where I worked had one application that had
> 6000 _libraries_ of (C code) compiled object modules brought
> in at link time.
Sounds like an application that should have been written in Ada.
Richard Riehle
Probably not, I've been unemployed for 25 months, sleeping on pavement
for 19.
However, that particular one of my several experiences in "coding in
the extremely large" was 1992-1994 at Fleet Numerical Meteorology and
Oceanography Center in Monterey, California, and why they had an
instance of this Lockheed-written set of code, besides that they were
capital "N" Navy, I wasn't privileged to know, lots of the shop was
outside the coverage of my mere DoD Contractor's Secret clearance, but
I was privileged to help other programmers use it in coding, as an
aside to my day job helping forecast tropical cyclone behavior. They
are a great group of folks to work for, I retain friends there to this
day, and Monterey is a wonderful if very pricy place to live; go for
it!
xanthian.
Of course you have to put up with the _Navy_, but I was a retired
LCDR, so it wasn't _too_ awful.
Ouch. Sorry to hear that. Didn't know they had newsreaders built into
curbs these days ;-). (If you are honest-to-God "sleeping on the
pavement", strike that last one).
What's your gig? Is the job market *that* bad for developers?
> However, that particular one of my several experiences in "coding in
> the extremely large" was 1992-1994 at Fleet Numerical Meteorology and
> Oceanography Center in Monterey, California,
Ooof! From what I understand, it don't *get* any larger than that!
> and why they had an
> instance of this Lockheed-written set of code, besides that they were
> capital "N" Navy, I wasn't privileged to know, lots of the shop was
> outside the coverage of my mere DoD Contractor's Secret clearance, but
> I was privileged to help other programmers use it in coding, as an
> aside to my day job helping forecast tropical cyclone behavior. They
> are a great group of folks to work for, I retain friends there to this
> day, and Monterey is a wonderful if very pricy place to live; go for
> it!
>
Approaching at missile speed, Lieutenant Commander! ;-) Hmm, I don't
know though, all that sneakin' around stuff would probably get on my
nerves. It'd be pretty cool to be able to claim "I have security
clearance <whatever>" though, and be able to kill anybody who "knew too
much", so I guess it compensates.
> xanthian.
>
> Of course you have to put up with the _Navy_, but I was a retired
> LCDR, so it wasn't _too_ awful.
Plus you can deal with trouble approaching at missile speeds. ;-)
Right. In this particular context (broadly defined as general purpose
computing), good ol' purpose-designed regular logic wins on cost, speed,
cost, power consumption, cost, and cost.
> but:
>
> ASIC perform well in specialised applications,
ASICs perform well if designed to perform well in the application.
> and the FPGA wins through
> not being specialised for any one application in general purpose
> applications.
>
Making it ideal for prototyping, short runs where you couldn't make up
the ASIC NRE, and not a whole lot else.
> Today's processors tend to be highly specialised for executing serial
> streams of instructions. They are "completely mashed" by FPGAs whenever
> you need to do anything in parallel.
And the FPGAs are "completely mashed" by the ASICs derived from the FPGA
designs. The bottom line is that the "FP" part burns up a huge amount
of real estate that could otherwise be used for logic doing actual work.
[snip wildy wrong assumptions about how uP's work]
> :> This is a symptom of the problem of the desire to present the programmer
> :> with a huge, uniform flat address space.
>
> : Sweet, isn't it? [...]
>
> It's terrible. It's like having all URLs constrained to be the same
> length. Forget about the fact that some data is accessed thousands of
> times a second, and other data is accessed once in a blue moon. Give
> everything an equally long and cumbersome address - no matter *how*
> important it is.
>
> A flat address space creates the illusion that all memory locations are
> equally accessible. However this is an illusion - and a misleading
> one - some information will be stored close at hand and others will
> be far away.
>
Ummm... Tim, you are aware of a little thing we call "addressing modes",
right? Like branches, which can have an 8-bit offset, a 16-bit offset,
a 32-bit offset, or... perhaps even a 64-bit offset? This ain't a new
invention Tim. You "just" wedge in a new addressing mode and Bob's yer
uncle!
> :> Of course these days most applications programmers never see any sort
> :> of address space - so this nicety is lost on them.
>
> : "applications programmers never see any sort of address space"?? Try:
>
> : char bigarray[<<four terabytes>>+1];
>
> : on a 32-bit machine and then a 64-bit machine. In whatever language you
> : want, even the now-defunct Java. You just saw address space.
>
> Java programmers never see the *machine's* address space.
What do you mean by that? What are you trying to claim here? Of
*course* they "see" the address space they're running in, if in no other
way than....
> Sure they can
> create arrays - but what has that got to do with anything? Those arrays
> can be implemented however the JVM's designer likes.
>
....they can't exceed the size of the machine's... address space. Sure,
they could be cobbled to not really be an array, but rather a, say, disk
file larger than the address space *posing* as an array larger than the
machine's address space. But they're not going to be. That's a job for
a library.
> :> I've lived through the 8-16 bit transition, the 16-bit to 32-bit
> :> transition - [...]
>
> : So you know firsthand the benefits that have accrued from each
> : "transition".
>
> ...and all the code that failed to make it across the divides.
>
Good riddance to bad rubbish sez me. Give me an example of some 8-bit
code that really should be running on my 32-bit XP machine but isn't.
> :> Processor instruction languages should be more like Java bytecode on
> :> this front.
> :>
> :> Their instruction languages should *not* have hard-coded limitations
> :> relating to the maximum size of the memory they can address. That is a
> :> design screw-up - a hangover from the dark ages of computing. It is a
> :> failure on encapsulation. It is exposing an irrelevant implementation
> :> detail.
>
> : Oh man, are you turned around! When you get to the machine code level,
> : implementation details are ALL YOU GOT! There's a point where the
> : abstraction ends and the work begins man! And it's machine code.
>
> A high-level and extremely exposed location. Look at the
> incompatibilities it caused at the 8->16 bit and 16->32 bit
> transitions. If everyone wrote code in Java they'd be OK - but many folk
> are still writing in lower level languages - and many of them will see
> their programs break - or run in a slower emulated mode - and practically
> all the .EXEs in existence will need to be run under x86 emulation :-(
>
Not if they're recompiled. And if they're not, they'll still run, as
you say. Who loses here?
> This last point is one of the main reasons we have been stuck with
> the disasterous x86 instruction set for so long. Everyone knows
> what a mess it is - but most of the implementation details are
> exposed in the public API - and none of them can be changed
> without destroying backwards compatibility - including little
> details like the maximum size of addressable memory.
Well this is another discussion entirely. Whatever happened to that
Intel patent or whatever where each virtual memory segment could have
its own instruction set? Did they ever put that in a chip? That's the
optimal solution to the x86 mess.
Double? Twice? Last I heard it would require just one
extra bit (*not* 32) to double the addressing space ...
Unless you meant actually making the data lines twice as big :))
You can get several complete 64 bit machines from Sun for the price of
a single Itanium chip.
What's your point? That old, slow and underperforming systems quickly lose
their value??
"SGI has attained linear scalability on a 64-processor Itanium 2-based
system and world-record results among microprocessor-based systems on the
STREAM Triad benchmark, which tests memory bandwidth performance.
Demonstrating linear scalability from 2 to 64 processors, the Itanium
2-based prototype system from SGI exceeded 120GB per second on a single
system image. This result, derived from initial internal testing, marks a
significant milestone for the industry: Early Itanium 2-based SGI® systems
built on the innovative SGI® NUMAflexTM shared-memory architecture, have
proven that Linux can scale well beyond the perceived limitation of eight
processors in a single system image.
Additionally, results show that the upcoming Itanium 2-based SGI system has
not only outperformed the IBM® eServer p690 and Sun Microsystems Sun FireTM
15K high-end microprocessor-based systems, it has also surpassed memory
bandwidth performance on the CRAY C90TM, the CRAY SV1TM and the Fujitsu
VPP5000 CMOS vector-based supercomputers."
>> Probably not, I've been unemployed for 25 months, sleeping on
>> pavement for 19.
> Ouch. Sorry to hear that. Didn't know they had newsreaders built into
> curbs these days ;-). (If you are honest-to-God "sleeping on the
> pavement", strike that last one).
I am, honest to a god in whom I do not believe, sleeping on the sidewalk
between a couple of church buildings with a roof overhang between
them, but I spend my days 7am to midnight in a campus student union,
with real electricity and wireless internet access and a spiffy laptop
on which I spent a sixth of my gross worth so I wouldn't go nuts doing
nothing.
My today's partial output from many weeks work just showed up here as
TravellerArt #7:
http://www.anycities.com/user/xanthian/
> What's your gig? Is the job market *that* bad for developers?
It is for foul tempered, grossly depressed, white haired developers
who will be the last hired back, when the job crunch in May 2000 put a
quarter million programmers out of work in America and their ranks
have only been growing since. A wealth of skills,
http://www.well.com/user/xanthian/resume.html
41 years programming experience, 700 resumes sent out before I pretty
much gave up wasting my day fruitlessly, and exactly one face to face
interview, where I was told I was "overqualified". Indeed. They
wanted to hire someone for 37% of my previous pay, and I'd have taken
it gladly if offered.
So, I write Java to gain new skills, both in the language and in the
genetic algorithm paradigm, publish the results as freeware:
http://www.well.com/user/xanthian/java/TravellerDoc.html
http://www.anycities.com/user/xanthian/MovieScheduler/MovieScheduler.html
and wait for better times, while surviving on the $152 a month the
courts couldn't give to my first wife from my pension and my second
wife can't touch.
And frankly, I'm having a blast, though I'd rather have a way to
support my 7 year old kid, but crippled up as I am, brute force
employment is out too, and his mother chose this life for the two of
them, so who am I to interfere?
xanthian.
:> > The primary effect of 64 bits is to double the size of your
:> > address/data lines, making everything twice as big, twice as
:> > far apart, and twice as slow.
: Double? Twice? Last I heard it would require just one
: extra bit (*not* 32) to double the addressing space ...
: Unless you meant actually making the data lines twice as big :))
The topic of discussion is increasing the size of the address space
from 32 bits to 64 bits.
That does not represent a doubling of the address space - it is
doubling the base two log of the size of the address space - which
increases the size of the actual address space by a factor of 2^32.
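Spelled out, and using BigInteger so as not to overflow the very
quantity under discussion (trivial, but it keeps the factors straight):

    import java.math.BigInteger;

    public class Spaces {
        public static void main(String[] args) {
            BigInteger two = BigInteger.valueOf(2);
            BigInteger space32 = two.pow(32);  // 4,294,967,296 addresses
            BigInteger space64 = two.pow(64);  // 18,446,744,073,709,551,616 addresses
            System.out.println(space64.divide(space32));  // prints 4294967296, i.e. 2^32
        }
    }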
[snip]
:> :> This is a symptom of the problem of the desire to present the programmer
:> :> with a huge, uniform flat address space.
:>
:> : Sweet, isn't it? [...]
:>
:> It's terrible. It's like having all URLs constrained to be the same
:> length. Forget about the fact that some data is accessed thousands of
:> times a second, and other data is accessed once in a blue moon. Give
:> everything an equally long and cumbersome address - no matter *how*
:> important it is.
:>
:> A flat address space creates the illusion that all memory locations are
:> equally accessible. However this is an illusion - and a misleading
:> one - some information will be stored close at hand and others will
:> be far away.
: Ummm... Tim, you are aware of a little thing we call "addressing modes",
: right? Like branches, which can have an 8-bit offset, a 16-bit offset,
: a 32-bit offset, or... perhaps even a 64-bit offset? This ain't a new
: invention Tim. You "just" wedge in a new addressing mode and Bob's yer
: uncle!
Addressing modes don't really change the sizes of memory addresses.
They allow you to specify addresses relative to (or via) other ones.
You still have a big, flat address space.
Registers are more like an example of what I'm talking about. Fast memory
that can be accessed quickly because it is nearby.
:> :> Of course these days most applications programmers never see any sort
:> :> of address space - so this nicety is lost on them.
:>
:> : "applications programmers never see any sort of address space"?? Try:
:>
:> : char bigarray[<<four terabytes>>+1];
:>
:> : on a 32-bit machine and then a 64-bit machine. In whatever language you
:> : want, even the now-defunct Java. You just saw address space.
:>
:> Java programmers never see the *machine's* address space.
: What do you mean by that? What are you trying to claim here? Of
: *course* they "see" the address space they're running in, if in
: no other way than....
This seems a rather pointless misunderstanding - let's try and get
past it.
: ....they can't exceed the size of the machine's... address space.
Well, the usual limit is placed by the JVM - not by the system.
By default it is 64Mb - the JVM's default memory size.
There's no reason why a JVM's memory can't exceed the address space
of the processor of the machine it runs on - since JVMs are free
to make use of virtual memory.
The JVM might be able to figure out it's using virtual memory by doing
some timings - but that will tell it the size of the *actual* memory
space, not how much is addressable by the machine's CPU.
The only way a Java program could see the address space of the machine
it was running in would be if that information was provided by the
System properties. However, looking at:
http://java.sun.com/docs/books/tutorial/essential/system/properties.html
...that information does not appear to be directly available.
Maybe - for some OSs - it could read "os.name" - and then make a guess at
the probable address space of the host machine.
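The closest I can get is something like this - untested, and note that
"sun.arch.data.model" is a Sun-VM-specific property which other
vendors' VMs may simply not define:

    public class GuessAddressSpace {
        public static void main(String[] args) {
            // What the VM will let this program use - nothing to do with the CPU's bus.
            System.out.println("max heap: " + Runtime.getRuntime().maxMemory() + " bytes");
            // The only hints the standard properties give you.
            System.out.println("os.name = " + System.getProperty("os.name"));
            System.out.println("os.arch = " + System.getProperty("os.arch"));
            System.out.println("sun.arch.data.model = "
                    + System.getProperty("sun.arch.data.model", "not provided"));
        }
    }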
:> :> Programmable logic is generally slower than conventional logic of the
:> :> same cost - but it's stupendously more flexible.
:>
:> : But its generally slower than conventional logic of the same cost.
:> : Guess who wins.
:>
:> That depends on the context,
: Right. In this particular context (broadly defined as general purpose
: computing), good ol' purpose-designed regular logic wins on cost, speed,
: cost, power consumption, cost, and cost.
There are gazillions of serial programs out there that need running.
Running these on parallel machines is typically a complete waste of
resources, yes.
:> [...] the FPGA wins through not being specialised for any one
:> application in general purpose applications.
: Making it ideal for prototyping, short runs where you couldn't make up
: the ASIC NRE, and not a whole lot else.
I don't think so. Programmable logic is used today mainly for
prototyping - but it will eventually invade the mainstream.
Having a bit of programmable circuitry in a computer that
can perform arbitrary logic is too useful a resource to
remain absent. Even if many conventional serial programs
can't take advantage of it, JITs will eventually be able to.
:> Today's processors tend to be highly specialised for executing serial
:> streams of instructions. They are "completely mashed" by FPGAs whenever
:> you need to do anything in parallel.
: And the FPGAs are "completely mashed" by the ASICs derived from the FPGA
: designs.
Yes - but you can't necessarily afford an ASIC run for every single
problem you want to deal with.
: The bottom line is that the "FP" part burns up a huge amount
: of real estate that could otherwise be used for logic doing actual work.
If you're not using the programmability then it's expensive and wasteful.
A good reason to use programmable logic in the cases where you reprogram it.
>>> The primary effect of 64 bits is to double the size of your
>>> address/data lines, making everything twice as big, twice as
>>> far apart, and twice as slow.
> Double? Twice? Last I heard it would require just one
> extra bit (*not* 32) to double the addressing space ...
> Unless you meant actually making the data lines twice as big :))
Well, yes, but Tim's point is still partly valid; if you are
dealing in 64 bit words, then you are serving out addresses, doing
address calculations, and so forth, on 64 bit lines and in 64 bit
registers, fetching data in on 64 bit busses, routing data across
your CPU chip in 64 lead wide data paths; lots of stuff really is
physically twice as wide, and you are already bang up against the
speed of light in lots of computer piece part specs, so _if_ your
signal is going orthogonally to that 64-tuple of data leads, _then_
your signal has twice as far to go to get across it and you cannot
do anything much meanwhile on the other end but wait. But that is
a Simple Matter of IC Design of course; you want your processing
paths _not_ to run at right angles to the data leads, but to be
pipelines and cascades through which the data flows, _in the
direction of_ the data leads. Beyond which, not everything on a
chip is data leads, and the rest of it doesn't necessarily scale
with data lead total path width.
So, you are going to lose speed in some fraction smaller than a
factor of two but still appreciable, and then gain it back and more
the next time Moore's law completes a cycle; it really is, from one
perspective, just a delay of the _calendar date_ at which a certain
throughput counted in words processed will occur, not some
insurmountable absolute now and forever barrier that should have us
giving up on switching rapidly for other reasons to 64 bit word size
technology as Tim seems to be suggesting.
In My Opinion.
xanthian.
You won't get anywhere near 21 gigaflops on most real code. That's peak
theoretical performance if everything that can possibly be happening in
parallel actually happens. Under the same assumptions that get a dual G4 to
21 gigaflops, a dual 2 GHz P4 gets 16 gigaflops. Last time I checked, 16 is
greater than 1.
If you need double precision, a *single* 2 GHz P4 will outperform a *dual*
1.25 GHz G4.
Here's a post where someone gives excellent details:
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=v.bostrom-F5202B.22064109012003%40netnews.attbi.com&rnum=5
--
Evidence Eliminator is worthless: "www.evidence-eliminator-sucks.com"
--Tim Smith
: So, you are going to lose speed in some fraction smaller than a
: factor of two but still appreciable, and then gain it back and more
: the next time Moore's law completes a cycle; it really is, from one
: perspective, just a delay of the _calendar date_ at which a certain
: throughput counted in words processed will occur, not some
: insurmountable absolute now and forever barrier that should have us
: giving up on switching rapidly for other reasons to 64 bit word size
: technology as Tim seems to be suggesting.
Well you've got to stop somewhere - the 128-bit/256-bit VLIW roadmap
from Intel and HP is the path to madness. A 128-bit integer
or a 128-bit object pointer is just crazy - and the waste is a
terrible price to pay for addressing more data and getting a
wider data bus.
64 bits looks like much too much to me. Too many common data types are
much smaller than your word size, the directly addressable memory
completely outstrips the size of the cache - and there's already much more
space than most methods need.
An address bus of a specified width is not an essential
component anyway. The brain represents a competent general
problem-solving device - but nobody asks after the size of
its address bus.
I look forward to the 1-bit computer - when the size of the
address bus has turned into a complete non-issue.
BCC #127:8
MOV @0xAB:8, R0
etc etc...
Yes, they do.
> They allow you to specify addresses relative to (or via) other ones.
> You still have a big, flat address space.
>
Which in some applications is exactly what you want, and in others can't
hurt.
> Registers are more like an example of what I'm talking about. Fast memory
> that can be accessed quickly because it is nearby.
>
You still got registers Tim. It really looks to me like you have little
idea what you're talking about here.
[snip]
> : ....they can't exceed the size of the machine's... address space.
>
> Well, the usual limit is placed by the JVM - not by the system.
> By default it is 64Mb - the JVM's default memory size.
>
> There's no reason why a JVM's memory can't exceed the address space
> of the processor of the machine it runs on - since JVMs are free
> to make use of virtual memory.
>
And what is the limit on how much virtual memory the JVM has to work with?
Yep, the size of the address space.
> The JVM might be able to figure out it's using virtual memory by doing
> some timings - but that will tell it the size of the *actual* memory
> space, not how much is addressable by the machine's CPU.
>
The JVM is not supposed to *care* whether it's using virtual memory or
not. That's the whole point of virtual memory!
> The only way a Java program could see the address space of the machine
> it was running in would be if that information was provided by the
> System properties.
Or if it somehow exceeded it.
> However, looking at:
> http://java.sun.com/docs/books/tutorial/essential/system/properties.html
> ...that information does not appear to be directly available.
>
> Maybe - for some OSs - it could read "os.name" - and then make a guess at
> the probable address space of the host machine.
Tim, neither one of us knows what you're talking about at this point.
>: So, you are going to lose speed in some fraction smaller than a
>: factor of two but still appreciable, and then gain it back and more
>: the next time Moore's law completes a cycle; it really is, from one
>: perspective, just a delay of the _calendar date_ at which a certain
>: throughput counted in words processed will occur, not some
>: insurmountable absolute now and forever barrier that should have us
>: giving up on switching rapidly for other reasons to 64 bit word
>: size technology as Tim seems to be suggesting.
> Well you've got to stop somewhere - the 128-bit 256-bit roadmap of
> VLIW by Intel and HP is the path to madness.
Hardly. We've been down this "make it bigger, we're out of room" path
many times in the past, and there were nay-sayers kicking and
screaming just like you are, each and every time.
Somehow the industry has survived the great leap forward from 12 bit
addressing.
Change is uncomfortable, but if you're going to work in the computer
field, you'd damn-all better come to love it, it isn't going away any
time soon.
What we want to skip this time is the dead-end paths that 24 bit and
48 bit choices turned out to be for Harris et alia. If you're going
to bite a bullet, make it of sufficient caliber to make the nastiness
worthwhile and preferably one-time-in-one-career-only.
> A 128-bit integer
> or a 128-bit object pointer is just crazy - and the waste is a
> terrible price to pay for addressing more data and getting a
> wider data bus.
Umm, as noted in my earlier posting, even 64 bits is too small once we
achieve molecular level IC components in 3D layouts, so a 128 bit
pointer is a "room for anticipated growth" choice. Looks like wise
planning to me [though for nasty seditious reasons of my own having to
do with long-word-programmed Langton's Vants I'd prefer going straight
to 256 bit integers and be done with it].
> 64 bits looks like much too much to me. Too many common data types are
> much smaller than your word size, the directly addressable memory
> completely outstrips the size of the cache - and there's already much more
> space than most methods need.
I think you've descended into pure superstition and dread at this
point, Tim.
Directly addressable memory has _always_ outstripped the size of the
cache, which is why it's only a cache instead of the only memory you
have, and which is why cache addresses are merely _translations_ of
memory addresses they mirror.
Sure lots of common counter-type needs are _smaller_ than 64 bits,
that isn't the point, and the argument doesn't apply to the issue at
hand. Most of them are very comfortable in 4 or 8 bits, which doesn't
mean we should go back to 4 or 8 bit addressing!
The point is that more and more counter-type needs and specifically
addressing-type needs are _larger_ than 32 bits. Terabyte databases
are common, desktop hardware is edging bit by bit toward overflowing
32 bit addressing almost weekly [and I'd love the opportunity to put
64 gigabytes of RAM in a laptop just to keep Java applications
memory-bound, but right now the Java VM would presumably roll over and
twitch a few last times then die horribly with > 32 bit addressing,
since integer pointers die there by design, a choice that did _not_
take into account easily anticipated growth a few years back when it
was made by the Java designers; and also a non-static RAM that size
with today's technology would promptly turn a laptop into plasma and
the user into ash from the waste heat disposal problem].
Most of the latter, in fact, today, are there or very close, putting
us rapidly up against a wall that has huge software _and_ hardware
development cost consequences.
The list of "places we've run out of room in 32 bits" grows daily both
inside and outside the individual computer; think IP-NG.
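A trivially concrete illustration of the counter problem in Java itself (the
class name is made up for the example): an int total silently wraps negative
once it passes 2^31 - 1, which is why byte counts for terabyte-scale data have
to live in longs.

    // CounterOverflow.java - sketch: a 32-bit running total wraps,
    // a 64-bit one keeps counting.
    public class CounterOverflow {
        public static void main(String[] args) {
            int  intBytes  = Integer.MAX_VALUE;   // 2^31 - 1, roughly 2 GB
            long longBytes = Integer.MAX_VALUE;
            intBytes  += 1;                        // wraps to -2^31
            longBytes += 1;                        // still correct
            System.out.println("int counter:  " + intBytes);
            System.out.println("long counter: " + longBytes);
        }
    }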
VLIWs of 128 bits are two decades old, maybe three [I seem to remember
encountering vector machines with that kind of word size already in
use in the middle 1970s, but it might have been a few years later,
1981 is the other end of the possible time span at that employment],
and so the current VLSI powers-that-be aren't working out of pure
ignorance of the consequences, there's several generations of design
wisdom for the chore already captured in the industry's meme-space.
Packing multiple smaller entities into the longer word sizes was CISC
state of the art, and now that we've given it up pretty much with
RISC, we might need to look back and review that choice in new
hardware designs and the compilers to support them.
If we're fetching 64/128/256 bit instructions anyway, surely we have
room for a few more/longer op-codes in there somewhere.
If we limit ourselves specifically to adding sub-full-word addressing
ones rather than go crazy with power as previous generations of CISC
designers did, we might come out of the transition with relatively few
new scars.
I don't envy the compiler or hardware designers the chore of
synchronizing access to the piece parts of a subdivided fullword,
though.
I _do_ envy the hardware designers the chance to do away with the
horridly complex kludge of multiple level caching by providing a data
fetch path wide enough to make sense and bring the data in straight
from RAM as fast as needed, when and if RAM speeds ever catch up with
CPU speeds. Today, RAM might as well be hard-disk, it is so many-fold
times too slow for feeding the CPU's requirements, and so much too
speed-of-light-far away from the CPU, and that's pretty much how
caching treats it.
xanthian.
Maybe for x86, but Digital/Compaq/... has been shipping a 64-bit JVM for
Alpha for quite a while, and Sun's JDK 1.4.1 for Solaris on SPARC has both
a 32-bit and a 64-bit version. (I haven't looked at the various IA-64 JVMs
to see whether they use 32 bits or 64 bits, nor have I looked at JVMs for
64-bit Power or MIPS systems to see whether they do 64 bits).
Thomas Maslen
mas...@pobox.com
No, that the Itanium isn't the only game in town, and is far from cheap.
:> : ....they can't exceed the size of the machine's... address space.
:>
:> Well, the usual limit is placed by the JVM - not by the system.
:> By default it is 64Mb - the JVM's default memory size.
:>
:> There's no reason why a JVM's memory can't exceed the address space
:> of the processor of the machine it runs on - since JVMs are free
:> to make use of virtual memory.
: And what is the limit on how much virtual memory the JVM has to work with?
: Yep, the size of the address space.
There is no reason why that should be a limit. The theoretical limit
on storage available to a JVM will be set by the total size of the free
space on the available storage devices - after allowing for things like
compression.
:>: So, you are going to lose speed in some fraction smaller than a
:>: factor of two but still appreciable, and then gain it back and more
:>: the next time Moore's law completes a cycle; it really is, from one
:>: perspective, just a delay of the _calendar date_ at which a certain
:>: throughput counted in words processed will occur, not some
:>: insurmountable absolute now and forever barrier that should have us
:>: giving up on switching rapidly for other reasons to 64 bit word
:>: size technology as Tim seems to be suggesting.
:> Well you've got to stop somewhere - the 128-bit 256-bit roadmap of
:> VLIW by Intel and HP is the path to madness.
: Hardly. We've been down this "make it bigger, we're out of room" path
: many times in the past, and there were nay-sayers kicking and
: screaming just like you are, each and every time.
: Somehow the industry has survived the great leap forward from 12 bit
: addressing.
: Change is uncomfortable, but if you're going to work in the computer
: field, you'd damn-all better come to love it, it isn't going away any
: time soon.
You have me pegged as a luddite?!?
:> A 128-bit integer or a 128-bit object pointer is just crazy - and the
:> waste is a terrible price to pay for addressing more data and getting a
:> wider data bus.
: Umm, as noted in my earlier posting, even 64 bits is too small once we
: achieve molecular level IC components in 3D layouts, so a 128 bit
: pointer is a "room for anticipated growth" choice. Looks like wise
: planning to me [though for nasty seditious reasons of my own having to
: do with long-word-programmed Langton's Vants I'd prefer going straight
: to 256 bit integers and be done with it].
Any limit is going to get in the way for some applications. For those
who think 64 bits will address all the memory on the planet for the
foreseeable future, memory-mapped sparse arrays are a great way to
soak up address space.
:> 64 bits looks like much too much to me. Too many common data types are
:> much smaller than your word size, the directly addressable memory
:> completely outstrips the size of the cache - and there's already much more
:> space than most methods need.
: I think you've descended into pure superstition and dread at this
: point, Tim.
Then you have not the faintest idea about my actual views :-|
: Sure lots of common counter-type needs are _smaller_ than 64 bits,
: that isn't the point, and the argument doesn't apply to the issue at
: hand. Most of them are very comfortable in 4 or 8 bits, which doesn't
: mean we should go back to 4 or 8 bit addressing!
: The point is that more and more counter-type needs and specifically
: addressing-type needs are _larger_ than 32 bits.
E.g. Java has had 64-bit integers for a long time. 64-bit addresses
don't represent any more of a problem. Are they needed often enough
to justify doubling the size of the machine's word? Not IMO.
If the size of the address bus was kept as an implementation detail, I
wouldn't care about it - it could be sorted out at a later date.
It's the fact that most compiled programs have a hard-wired dependency on
it that causes the problems.
If machine code was more like JVM bytecode nobody need ever know
about the size of the address bus - unless they were putting in RAM.
That seems to be clearly how things /ought/ to be.
: Terabyte databases are common, desktop hardware is edging bit by bit
: toward overflowing 32 bit addressing almost weekly [and I'd love the
: opportunity to put 64 gigabytes of RAM in a laptop just to keep Java
: applications memory-bound, but right now the Java VM would presumably
: roll over and twitch a few last times then die horribly with > 32 bit
: addressing, since integer pointers die there by design, a choice that
: did _not_ take into account easily anticipated growth a few years back
: when it was made by the Java designers [...]
Well, the JVM is address-bus agnostic. It works just fine under 64 bit
unixes. The JVM's designers weren't stupid. Give them some credit.
: The list of "places we've run out of room in 32 bits" grows daily both
: inside and outside the individual computer; think IP-NG.
This desire to fit as many data types into a word as possible seems
like a cancer to me. It leads to bigger and bigger words - and more
and more inefficient processors. They may be faster - but only
through being twice as big. With twice the area you could have
two conventional processors - and at least that sort of parallelism
can scale properly.
And I haven't looked inside Java byte code at all, Tim Tyler says it
is buss size agnostic, and I'm glad to hear it. My issue is that the
current Java design, by locking primitive data type sizes to
predefined limits, while that has lots of wonderful effects, _also_
made the most common indexing type, the int, breakable when more than
2^32 entities exist to be indexed. I've no real clue whether arrays
can be indexed with longs instead, and arrays that size "just work",
but I'd be amazed to find that to be true, because nothing I've run
into so far in the declaration of arrays seems to include an "indexed
by what type" portion, as Pascal's and Ada's "indexed by this
enumeration type" (though it wouldn't solve the current problem)
declarations did and do.
xanthian.
Doesn't mean it isn't there, I'm still new at Java, just means I
haven't seen it.
> And I haven't looked inside Java byte code at all, Tim Tyler says it
> is buss size agnostic, and I'm glad to hear it. My issue is that the
> current Java design, by locking primitive data type sizes to
> predefined limits, while that has lots of wonderful effects, ...
> ... and Ada's "indexed by this ...
In fact, Ada is quite robust in this regard. Consider the following package
in which I declare Integer types along with their bit sizes and configurations.
package Own_Integers is

   -- Signed integers
   type Int_8 is range -2**7 .. 2**7 - 1;
   for Int_8'Size use 8;              -- eight bits for Int_8

   type Int_12 is range -2**11 .. 2**11 - 1;
   for Int_12'Size use 12;            -- twelve bits for Int_12
   for Int_12'Alignment use 2;        -- align Int_12 on a 2-byte boundary

   -- can define Int_16, Int_24, Int_32, or any other size, etc.
   type Int_64 is range -2**63 .. 2**63 - 1;
   for Int_64'Size use 64;            -- sixty-four bits for Int_64

   -- Unsigned integers
   type UInt_64 is mod 2**64;
   for UInt_64'Size use 64;

end Own_Integers;
Note that, when appropriate, we can also specify big-endian, little-endian,
and range values for each of those integer types. We can also do this same
thing for floating-point types, records, arrays, and enumerated types. The
legality is checked at compile time. Thus, if I were to say,
type Int_128 is range -2**127..2**127 -1;
for Int_128'Size use 128;
and the platform I was targeting could accept that, my statement would
compile with no problem. If the target cannot accept that range, the
compiler will notify me with a fatal error message.
Richard Riehle
There's a very good reason why that should be a limit: that's the entire
reason behind virtual memory. Like it or not, Java has pointers. They
point to something. Something in virtual memory, i.e. the VM's address
space.
> The theoretical limit
> on storage available to a JVM will be set by the total size of the free
> space on the available storage devices - after allowing for things like
> compression.
Yep. Now if there were only some way we could have dedicated hardware
that would... "virtualize"... all that storage....
Is the light coming on yet Tim?
: There's a very good reason why that should be a limit: that's the entire
: reason behind virtual memory. Like it or not, Java has pointers. They
: point to something. Something in virtual memory, i.e. the VM's address
: space.
Java's references are not specified as being of some fixed size, though.
There is no limit on their size or nature - because there's no pointer
arithmetic. They can be as big as you want them to be.
:> The theoretical limit on storage available to a JVM will be set by the
:> total size of the free space on the available storage devices - after
:> allowing for things like compression.
: Yep. Now if there were only some way we could have dedicated hardware
: that would... "virtualize"... all that storage...
Well, you already have suitable hardware. Any 32-bit machine can talk to
tens of gigabytes of storage space - despite it only having a 32-bit
address-bus. The problem of accessing it can be solved in software.
: I've no real clue whether arrays can be indexed with longs instead, and
: arrays that size "just work", but I'd be amazed to find that to be
: true, because nothing I've run into so far in the declaration of arrays
: seems to include an "indexed by what type" portion [...]
I think it /might/ have worked without that.
However in fact it doesn't. Arrays can only have integer indices.
int[] ia = new int[10];
long l = 0x100000001L;
System.out.println("result:" + ia[l]);   // long used as an array index
...gives a compiler error - wanting the index cast to an int.
``A component of an array is accessed using an array access
expression. Arrays may be indexed by int values; short, byte, or char
values may also be used as they are subjected to unary numeric promotion
(§2.6.10) and become int values.
- http://sunsite.ccu.edu.tw/java/vmspec/Concepts.doc.html
No provision for longs.
That means a 2Gb maximum, I believe - since ints are signed.
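The int-only index isn't an absolute wall, though; it can be papered over in
library code. Here is a hypothetical sketch (class name and chunk size invented
for the example) of a long-indexed array of ints faked on top of ordinary
int-indexed Java arrays:

    // BigLongArray.java - hypothetical sketch: a long-indexed array of ints
    // built from fixed-size, int-indexed chunks. Shows the idea only; chunks
    // are allocated eagerly here purely for brevity.
    public class BigLongArray {
        private static final int CHUNK_BITS = 20;             // 2^20 ints per chunk
        private static final int CHUNK_SIZE = 1 << CHUNK_BITS;
        private static final int CHUNK_MASK = CHUNK_SIZE - 1;
        private final int[][] chunks;

        public BigLongArray(long length) {
            int nChunks = (int) ((length + CHUNK_SIZE - 1) >>> CHUNK_BITS);
            chunks = new int[nChunks][CHUNK_SIZE];
        }

        public int get(long index) {
            return chunks[(int) (index >>> CHUNK_BITS)][(int) (index & CHUNK_MASK)];
        }

        public void set(long index, int value) {
            chunks[(int) (index >>> CHUNK_BITS)][(int) (index & CHUNK_MASK)] = value;
        }
    }

A real version would allocate chunks lazily, or back them with files, rather
than grabbing everything up front - and of course it is still bounded in
practice by whatever heap (and underlying address space) the VM can actually
get, which is the usual price of the software kludge.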
[Probably you ought not to do that that way, Tim; in a cross
posted article, all you really know is where _you_ are
reading it, and "from where" _you_ are answering it, you
ought not to claim to know "from where" I was posting it;
something like "Answering in c.l.j.a what KPD wrote ...
la-la-la" would more closely approximate the truth, and
probably be closer to what you wanted, too, a disclaimer of
knowing the context in the other groups, though it would be
flat out wonderful if the news posting _software_ would
annotate where the OP was reading when the article was
posted, so you could trim down to there and be sure s/he
was in reality a reader of that group, rather than being
stuck wallpapering the whole original list to assure a hit.]
> - http://sunsite.ccu.edu.tw/java/vmspec/Concepts.doc.html
> No provision for longs.
Ah, but the stronger argument is a 2 gibibyte maximum
_what_? Two Gb of doubles is ambiguous; is that two odd
billion array _entries_, or two odd billion _bytes_ of data
in a contiguous chunk that happen to be doubles, an eighth
as many entries? The integer address can surely _name_ the
former, but in a 32 bit _architecture_, can the JVM really
peek and poke them?
Since we are stuck with 32 (31 really as you note) bit
addressing busses, most likely the limitation is in bytes,
but if your statement that the JVM is buss neutral holds any
_functional_ reality, not merely representing a
_permission_, one ought to be able to get that many
_entries_, and let the JVM figure out where to put them and
how to find them.
The problem with sticking with a 32 bit architecture is that
thenceforth, _everything_ you do to extend it is a kludge,
and very likely every place the problem is solved with a
_different_ kludge because solved by a different specialist
(community), creating unbearable maintenance nightmares;
while shifting to the addressing width of your data needs
makes everything clean, granted that clean is also bulky and
slow and sucks power like a three-days-out camel trying to
inhale an oasis.
Since we have let ourselves be led down that kludge-garden
path several times already, with all the attendant suffering
and grief, how about we just take a "pass" this time, grip
the chance to _learn_ from our very own history, and do the
job correctly from the start?
xanthian.
[The concept of a fairly-newly-blessed CS PhD throwing
wooden shoes into the loom gears _does_ tug at the heart,
Tim, but I just think you have a fixation that needs
unblocking (and I'd be checking those shoes suspiciously for
fractal dimension anyway with my Mandelbrot micrometer).]
[Or, "Mandelbrometer" might be more felicitious, if he'll
forgive eliding the silent "t" at the end of his name.]
(read through n.s.r, posted via n.s.r, demon.ip.support.turnpike added
to crossposting, follow-ups set to n.s.r,d.i.s.t)
>in a cross posted article, all you really know is where _you_ are
>reading it, and "from where" _you_ are answering it, you ought not to
>claim to know "from where" I was posting it; something like "Answering
>in c.l.j.a what KPD wrote ... la-la-la" would more closely approximate
>the truth, and probably be closer to what you wanted, too, a disclaimer
>of knowing the context in the other groups, though it would be flat out
>wonderful if the news posting _software_ would annotate where the OP
>was reading when the article was posted, so you could trim down to
>there and be sure s/he was in reality a reader of that group, rather
>than being stuck wallpapering the whole original list to assure a hit.
Ace idea. Is there any software/news-client that does this?
--
Jim Crowther "It's MY computer" (tm)
Spam no longer a problem: <http://popfile.sourceforge.net/>
Nor should they be. Just like C's pointers.
> There is no limit on their size or nature - because there's no pointer
> arithmetic.
That plays no part in any of this.
> They can be as big as you want them to be.
>
Yep. Theoretically. This is one of them cases where the difference
between practice and theory is greater in practice than in
theory.
> :> The theoretical limit on storage available to a JVM will be set by the
> :> total size of the free space on the available storage devices - after
> :> allowing for things like compression.
>
> : Yep. Now if there were only some way we could have dedicated hardware
> : that would... "virtualize"... all that storage...
>
> Well, you already have suitable hardware. Any 32-bit machine can talk to
> tens of gigabytes of storage space - despite it only having a 32-bit
> address-bus.
Any 8-bit machine can too! Let's all go back to the 6805!! YAY!!!!
> The problem of accessing it can be solved in software.
Can be. If you want to go back to the bad-old-days of "HUGE pointers".
That's what you're talking about Tim. Oh, wait, no, not quite - "HUGE
pointers that reference locations on DISK". Much worse.
:> In comp.lang.java.advocacy
: [Probably you ought not to do that that way, Tim; in a cross
: posted article, all you really know is where _you_ are
: reading it, and "from where" _you_ are answering it, you
: ought not to claim to know "from where" I was posting it; [...]
I don't think I was.
I wrote:
"In comp.lang.java.advocacy Kent Paul Dolan <xant...@well.com> wrote"
...and it's true - you /did/ post such a message to c.l.j.a.
Maybe if you can think of a superior phrasing, you'll have another
claim to fame ;-)
: [The concept of a fairly-newly-blessed CS PhD throwing
: wooden shoes into the loom gears _does_ tug at the heart,
: Tim, but I just think you have a fixation that needs
: unblocking (and I'd be checking those shoes suspiciously for
: fractal dimension anyway with my Mandelbrot micrometer).]
You can check my shoes yourself: http://timtyler.org/shoes/ ;-)
> (read through n.s.r, posted via n.s.r,
> demon.ip.support.turnpike added to crossposting, follow-ups
> set to n.s.r,d.i.s.t)
>> in a cross posted article, all you really know is where
>> _you_ are reading it, and "from where" _you_ are answering
>> it, you ought not to claim to know "from where" I was
>> posting it; something like "Answering in c.l.j.a what KPD
>> wrote ... la-la-la" would more closely approximate the
>> truth, and probably be closer to what you wanted, too, a
>> disclaimer of knowing the context in the other groups,
>> though it would be flat out wonderful if the news posting
>> _software_ would annotate where the OP was reading when the
>> article was posted, so you could trim down to there and be
>> sure s/he was in reality a reader of that group, rather
>> than being stuck wallpapering the whole original list to
>> assure a hit.
> Ace idea.
Why thank you. It seemed to have some merit, which is why I
went blundering around looking for a place to put it as a
stumbling stone in the path of likely clue gatherers.
> Is there any software/news-client that does this?
If you're asking me, I hope someone else will answer
instead. Like most people, I come up with lots of zillion
dollar ideas I have no idea how to implement [else the world
would be cluttered up with rich people and we poor people
would have no stray pavement on which to make our beds.]
xanthian.
>Jim Crowther <Lock...@blackhole.uk> wrote:
>> Kent Paul Dolan penned the following:
>
[]
>>>something like "Answering in c.l.j.a what KPD
>>> wrote ... la-la-la" would more closely approximate the
>>> truth
[]
>> Ace idea.
>
>Why thank you. It seemed to have some merit, which is why I
>went blundering around looking for a place to put it as a
>stumbling stone in the path of likely clue gatherers.
>
>> Is there any software/news-client that does this?
>
>If you're asking me, I hope someone else will answer
>instead. Like most people, I come up with lots of zillion
>dollar ideas I have no idea how to implement [else the world
>would be cluttered up with rich people and we poor people
>would have no stray pavement on which to make our beds.]
I had a 'Doh!' moment when it was pointed out to me that (in Turnpike at
least) the attribution 'template' can be altered for any personality -
how's the text at the top of this?
Nice, though wordy. Fact is, the article ID is pretty thoroughly
useless to humans since threaded news readers started keeping track of
references for us.
That takes care of my issue with Tim's wording, but doesn't win us the
zillions for identifying the "from where" of the article we are
following up, we still only have the "seen where", a different beast.
The former only exists now if the OP volunteers the information, but
having the posting software capture into the headers and make visible
at news perusing time the currently selected newsgroup at posting time
would be _so_ helpful.
Of course, in your case, the "from where" _is_ captured, because yours
is a second generation article, and identifies where you are by where
you are seeing my article but captured for your article, not available
for my presumptive "head of thread and crossposted" case, as Tim's
wording that started this whole segue seemed to imply.
xanthian.
[]
>That takes care of my issue with Tim's wording, but doesn't win us the
>zillions for identifying the "from where" of the article we are
>following up, we still only have the "seen where", a different beast.
[]
>Of course, in your case, the "from where" _is_ captured, because yours
>is a second generation article, and identifies where you are by where
>you are seeing my article but captured for your article, not available
>for my presumptive "head of thread and crossposted" case, as Tim's
>wording that started this whole segue seemed to imply.
I agree attributions should be short and succinct. Does the above
suffice?
> I agree attributions should be short and succinct. Does the above
> suffice?
Yep, that's pretty much ideal.
xanthian.
Another classic mistake was to hardwire Unicode characters as 16 bit.
Nowadays Unicode has over a million code points.
Dave Harris, Nottingham, UK | "Weave a circle round him thrice,
bran...@cix.co.uk | And close your eyes with holy dread,
| For he on honey dew hath fed
http://www.bhresearch.co.uk/ | And drunk the milk of Paradise."
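(For scale: the Unicode code space runs U+0000 through U+10FFFF, i.e. 17
planes x 65,536 = 1,114,112 code points. Since 2^20 = 1,048,576 falls short of
that and 2^21 = 2,097,152 does not, 21 bits is the minimum fixed width - the
figure that comes up later in the thread.)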
> Another classic mistake was to hardwire Unicode characters as 16 bit.
> Nowadays Unicode has over a million code points.
I had been wondering about that, and not just with respect to Java.
It seems on casual inspection like a _lot_ of premature "it'll never
get bigger than "X"-bits decisions have been made about Unicode, while
the standardizers just keep adding more glyphs. I cannot believe that
the standardizers have set _any_ upper limit; there are on the order of
6000 human languages, many with script forms, and they seem barely yet
to have scratched the surface (though the surface contained some wondrous
things, like that special alphabet the Mormons designed for their own use).
xanthian.
True, but that mistake was merely inherited from the official Unicode
consortium position at the time. Even so I'm not sure that it was a
mistake --- if char values were 4 bytes the howls of complaint about the
space being used by Java to represent ASCII text would be even louder than
they are now. As it is a few people will have to suffer the pain of code
points being represented by a surrogate pair, but most uses will be quite
comfortable with single char codepoints --- it is nowhere near as awkward as
the 8 bit char constraint which pushed many more people into the pain of
multibyte characters.
Mark Thornton
The initial Unicode concept did envisage 16 bits as being sufficient. Hence
a number of implementations took them at their word.
Mark Thornton
I tried to argue that point in comp.lang.c++.moderated a few months ago,
and got soundly thrashed. Multi-byte Unicode - in other words, UTF-8 - is
actually a reasonable design such that it isn't any worse to deal with
than UTF-16. In both cases different characters have different lengths. In
both cases character starts cannot be confused with character middles
(which really helps). In both cases you really need to work on strings
rather than characters anyway, because foreign-language rules tend to
need the context.
If you're used to Microsoft's multi-byte encodings, eg MBCS/DBCS, then
UTF-8 is a big improvement. UTF-16 is only easier if you ignore surrogate
pairs - in other words, if you don't bother with all of Unicode.
I suspect that most developers could easily get away with ignoring surrogate
pairs most of the time. So in practice UTF-16 is easier. No doubt there is a
noisy community who do need to make extensive use of the characters which
require surrogate pairs in UTF-16, and for them Java should probably add an
int based wide string class and supporting code. However the vast majority
of the api can be left as it is --- based on UTF-16.
I wonder what chance there is of Microsoft changing all their 16 bit unicode
API?
Mark Thornton
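For what it's worth, here is roughly how a supplementary character looks from
Java code. The code-point helpers used below (Character.toChars,
String.codePointCount) only arrived with the later supplementary-character
support in J2SE 5.0, so treat this as a sketch of the direction rather than
something a JDK 1.4 VM will run.

    // Surrogates.java - sketch: one supplementary code point (U+1D11E,
    // MUSICAL SYMBOL G CLEF) occupies two UTF-16 code units in a String.
    public class Surrogates {
        public static void main(String[] args) {
            String clef = new String(Character.toChars(0x1D11E));
            // prints 2: length() counts UTF-16 code units
            System.out.println("code units: " + clef.length());
            // prints 1: one code point
            System.out.println("code points: "
                               + clef.codePointCount(0, clef.length()));
            // prints d834 and dd1e: the surrogate pair
            System.out.println("high: " + Integer.toHexString(clef.charAt(0)));
            System.out.println("low:  " + Integer.toHexString(clef.charAt(1)));
        }
    }

String.length() counts code units, so the one-character string reports 2 -
exactly the surrogate-pair bookkeeping being argued about above.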
Most programmers would find plain 7-bit ASCII sufficient. I don't have
personal experience here, but I gather that you cannot provide reasonable
support for Japanese, say, without needing surrogate pairs.
> I wonder what chance there is of Microsoft changing all their 16 bit
> unicode API?
Getting an API wrong is less of a blunder than getting a language wrong.
> I suspect that most developers could easily get away with ignoring surrogate
> pairs most of the time. So in practice UTF-16 is easier. No doubt there is a
> noisy community who do need to make extensive use of the characters which
> require surrogate pairs in UTF-16, and for them Java should probably add an
> int based wide string class and supporting code. However the vast majority
> of the api can be left as it is --- based on UTF-16.
A wide-string class? What an idea. Whoever would think such a
thing useful? Oh, wait. I remember now. Ada has wide-strings based
on sixteen bit wide-characters. In fact, when the time comes for
Unicode to be expanded to 32 bits, Ada will be able to accommodate the extension.
Oh, and Ada still allows eight bit characters along with routines to
convert between wide and standard characters. No need to reinvent
the wheel after all.
Richard Riehle
Err, but Unicode already requires (I think) 21 bits which is the point of
this thread.
Mark Thornton
> Err, but Unicode already requires (I think) 21 bits which is the point of
> this thread.
Thanks, Mark.
I am including this reply in comp.lang.ada
So, will the next Ada standard include support for a 21 bit Unicode,
or am I missing something and it already does?
Richard Riehle
Probably. See AI-285. (http://www.ada-auth.org/ais.html).
Randy Brukardt.
ARG Editor
If the DOD had its way, _everything_ would have been written
in Ada.
But you have to appreciate the DOD folks!
Theoretically, they are all working in Ada using very
strict procedures. In actual practice, there are
a lot of people who are actually getting real things done!
It keeps everybody happy. The cushy-job crowd spends
years to do projects-that-will-go-nowhere using various
revisions of Ada, following very complicated specs.
The get-the-job-done crowd, in the meanwhile, can get
all kinds of exemptions and stuff, and gets
a lot of real work done. The real work can
then even get some Ada framework around it to
keep the Ada people feeling productive!
Progress on both fronts.
> If the DOD had its way, _everything_ would have been written
> in Ada.
In 1996, Assistant Secretary of Defense, Emmett Paige, an Ada advocate,
decided that Ada had sufficient success in the projects where it had
been used that it should stand on its own merits instead of being
a part of a mandate. In the early days of Ada, the mid-1980's, when
compilers were not adequate and people using the language did not
understand how it was different from what they already knew, there
were a fair number of failures.
Since that time, and with the advent of the Ada 95 standard, there have
been significant successes with Ada. The compilers are as good as
one will find with any other language. The language supports a level
of reliability not found in most other languages. It turns out to be
easier to understand and learn than seemed to be the case when it
was an entirely new set of ideas.
> Theoretically, they are all working in Ada using very
> strict procedures. In actual practice, there are
> a lot of people who are actually getting real things done!
And many of those people are getting real things done in Ada.
There is a long list of highly successful military and non-military
software written in Ada. Most of it is targeted to safety-critical
software rather than the desktop. We certainly can point to
failures in projects that used Ada. We can also point to failures
in projects using any other language. In my experience, the failures
in which Ada was used had nothing to do with Ada. In most
cases, those failures had to do with poor management and poor
engineering.
> It keeps everybody happy. The cushy-job crowd spends
> years to do projects-that-will-go-nowhere using various
> revisions of Ada, following very complicated specs.
Those complicated specs are necessary because they are related
to complicated systems. Ada has been highly successful in uncomplicated
systems, and also highly successful in some of the most complex safety-critical
systems ever designed and deployed. In case you did not know it, every
time you board a commercial jetliner, there is a high probability you are
depending on Ada at many levels, for your safety and dependability of
arrival.
> The get-the-job-done crowd, in the meanwhile, can get
> all kinds of exemptions and stuff, and gets
> a lot of real work done. The real work can
> then even get some Ada framework around it to
> keep the Ada people feeling productive!
You either don't know much about the current state of Ada
software practice, or you were on a project that was badly
managed and are one of those who thinks the problem was
Ada. I recall a project at NASA Kennedy Space Center
where I found a lot of people complaining about Ada. It
turned out that Ada was not the problem. Rather, the
management had insisted that all programming use a
vi editor. The engineers assumed this was inherent
to Ada and constantly complained about Ada when the
problem was that they hated the editor.
Another project was on a contract won by a large and
overly-bureaucratic organization that staffed their leadership team
with some of the most incompetent people I have ever seen
in software development. The programmers were relatively
good, but they were saddled with this ineffetive management.
The project was delayed, delayed, delayed, and the complexity
increased due to the meddling of these incompetent managers.
The contract was later won by another contractor that had
better and more experienced leadership. They did the job,
in Ada, just fine after firing (should have set fire to) some
of the deadwood that was keeping things from getting done.
I have seen project after project completed successfully
using Ada, often on-time and withing budget. Sorry your
experience is different. The issue is not the language, but
the competence of the people using it. I find Ada easy to
understand, easy to use, and a great pleasure to engage
for simple and complex programming tasks. My experience
is that people properly trained will find it as pleasant
as I do.
When I compare it to the other languages I know, including
Java and C++, I realize that Ada is still the best option for
any real-time embedded software system where a high level
of safety is required. The concurrency features are far superior
to those in Java, and the surprises far less frequent than one
enjoys in C++.
Richard Riehle
What happened until 1996?
> Since that time, and with the advent of the Ada 95 standard, there have
> been significant successes with Ada. The compilers are as good as
That's great. In the real world, since the early 80's till 1996 (when
Ada became non mandatory in DoD,) C came and went, C++ came
and went, Java became popular, VB made programming accessible
to a whole class of people, the internet happened...
In all this time, DoD was able to decide that Ada now stands on
its own merits.
(Though I suspect it actually stands on all the people
who got involved in it while it was mandatory.)
> one will find with any other language. The language supports a level
> of reliability not found in most other languages. It turns out to be
> easier to understand and learn than seemed to be the case when it
> was an entirely new set of ideas.
I personally thought Hoare's ACM lecture was rather smart and wise...
The response that "real world problems are complex and large" showed
very clearly to me that Ada proponents didn't understand programming.
Yes, real world problems are complex and large. That's exactly why
you don't need a large and complex language on top of it.
Yes, good people can get the job done even in Ada (some
of the jobs. I don't know about the latest versions of Ada,
but in general there are areas outside Ada's reach. Ada
people tend to totally depend upon libraries written
in C or C++.)
But good people can get the job done even in assembly language.
> Richard Riehle <ric...@adaworks.com> wrote in message news:<3E5C7033...@adaworks.com>...
>
> > In 1996, Assistant Secretary of Defense, Emmett Paige, an Ada advocate,
> > decided that Ada had sufficient success in the projects where it had
> > been used that it should stand on its own merits instead of being
> > a part of a mandate. In the early days of Ada, the mid-1980's, when
> > compilers were not adequate and people using the language did not
> > understand how it was different from what they already knew, there
> > were a fair number of failures.
>
> What happened until 1996?
By the late 80's, compiler developers had stopped writing Ada
compilers as if they were Fortran compilers and began to actually
write Ada compilers for Ada. Programmers and engineers began
to understand how to design with Ada, in part because of the advent
of object technology throughout other parts of the industry. The
much improved ISO/ANSI Ada standard was in place by 1995
(actually available earlier) and excellent software was being
developed in the language. Assistant Secretary Paige simply
opened the door to a wider range of technologies, including
Ada, because he felt Ada could stand on its own.
> > Since that time, and with the advent of the Ada 95 standard, there have
> > been significant successes with Ada. The compilers are as good as
>
> That's great. In the real world, since the early 80's till 1996 (when
> Ada became non mandatory in DoD,) C came and went, C++ came
> and went, Java became popular, VB made programming accessible
> to a whole class of people, the internet happened...
Java and VB are certainly useful tools for some kinds of problems. Neither
is appropriate for real-time, embedded, safety-critical systems. We pick
the right tool for the right job. C and C++ continue to be useful tools.
> In all this time, DoD was able to decide that Ada now stands on
> its own merits.
The Ada mandate was poorly managed by the DoD. Mr. Paige
felt it would be better managed in the open marketplace if it were
required to compete with other technologies.
> I personally thought Hoare's ACM lecture was rather smart and wise...
> The response that "real world problems are complex and large" showed
> very clearly to me that Ada proponents didn't understand programming.
> Yes, real world problems are complex and large. That's exactly why
> you don't need a large and complex language on top of it.
It is interesting to hear people speak of Ada as large and complex. In fact,
what makes it large is the inclusion of concurrency within the language.
Concurrency, especially when one considers communicating concurrent
processes, has always been complex.
The language is structured around a few simple principles. Some of those
principles are more rigorously defined in Ada than in other languages. One
principle, separation of scope from visibility, is so different that even Ada
programmers have difficulty with it at first. Once they stop fighting it and
understand it, the rest of the language falls into place.
I get complaints all the time from new Java programmers about the complexity
of the language. It is not as complex as they first think. But if they insist on
fighting the language, they will continue to be disappointed.
> Yes, good people can get the job done even in Ada (some
> of the jobs. I don't know about the latest versions of Ada,
> but in general there are areas outside Ada's reach. Ada
> people tend to totally depend upon libraries written
> in C or C++.)
Ada is one of the easiest languages to read. This helps a lot with
long-term maintenance. It is excellent for creating architectures
for large-scale software systems. It can be, and is, used for coding
at low or high levels of the software process. We can do
anything with Ada that one can do with C.
With respect to your note about people using libraries written
in C and C++: the language is designed with interoperability
as a built-in feature. The designers of the language realized that
old code that works is often better than new code that has not
stood the test of time. There are lots of C libraries, in part
because so many operating systems are written in C, which we
would rather re-use than rewrite from scratch.
One organization I know of has been a heavy user of Fortran
in the past. They have a lot of excellent Fortran code still
available. Rather than recode the Fortran routines that have
been working well, they can incorporate those routines into
Ada, with capabilities built-in to Ada.
Ada is designed to create both extensible and reusable code.
Few languages provide for these two capabilities as well.
We can reuse working code from other languages easily. We
can create reusable code as either generics or via specialization.
We can extend architectures without breaking them.
All of those capabilities imply a need for some language
features not found in other languages. Ada is not intended
for small, one-person projects. It is at its best when used
on large software systems where dependability is a high
priority, and there are lots of programmers working together
to create a final software product.
You are certainly welcome to snipe at it (and at C and C++),
but I would hate to think that you would even consider developing
a high-integrity commercial avionics system without considering
it. You would not want to use Java for that kind of system, even
though Java is appropriate for some other kinds of software.
> But good people can get the job done even in assembly language.
True. A good carpenter can build a house measuring every piece
by eye, but the result will be consistently better if that same
carpenter begins with a well-formed architecture, and is able to
employ a variety of measuring tools. This is what Ada provides
for the development of large-scale software systems.
Richard Riehle
> soft-eng wrote:
>> But good people can get the job done even in assembly language.
> True.
That's kinda' like saying good people can build a house from wood chips
rather than 2x4's. True, but how often do you want to tackle that?
In addition, there will usually be greater complexity than with
languages better suited to the task.
Elliott
--
http://www.radix.net/~universe ~*~ Enjoy! ~*~
Hail OO Modelling! * Hail the Wireless Web!
@Elliott 2003 my comments ~ newsgroups+bitnet OK
Richard Riehle <ric...@adaworks.com> wrote:
> soft-eng wrote:
>> If the DOD had its way, _everything_ would have been
>> written in Ada.
It helps to remember the context. The DoD had, and has to
this day, a situation where battlefield computers programmed
in different languages couldn't talk to one another. A
couple of obvious results of this kind of situation are that
people die from friendly fire problems, and that people die
from corrupted transmissions. It was, and still is,
important that the computer software share a single
language, with that language's exact semantics for
primitives and user declared types, so that communication
succeeds and at least the people fighting on our side get to
claim they got killed by the enemy instead of friendly
forces or computer glitches.
One of the most direct ways to assure that software talent
for this common language was and remained available was to
make it the case that _all_ DoD software was written in this
same language, mandatorily. Top Secret clearances are
expensive to produce, and keeping cleared people employed as
contractors is easier if their language skills translate
between application demesnes.
Beyond that, DoD had many languages that were in use nowhere
else, so finding programmers at all for any program on a
staffing up-curve was problematic; while having a DoD-wide
language also in widespread commercial use would have made the
programming staff-finding chore simpler.
Making the language mandatory DoD-wide made sense for other
reasons as well. Logistics bobbles can kill people in
wartime at least as readily as howitzer aiming bobbles, it's
just usually less immediately obvious.
A side result of the need to have all kinds of applications
programmed in this common language is that the language had
to support the idioms of a wide variety of applications:
to do scientific calculations like FORTRAN, business
calculations like COBOL, communications bit fiddling and
calculations like C, and so forth. This _does_ tend to make
for a fairly large language, but various application areas
can pretty much stick to the subsets included for their
needs and ignore the parts they don't need, so the language
isn't so large from the viewpoint of any one programmer.
> In 1996, Assistant Secretary of Defense, Emmett Paige, an
> Ada advocate, decided that Ada had sufficient success in
> the projects where it had been used that it should stand
> on its own merits instead of being a part of a mandate.
Which was total BS, and a whitewash besides.
BS because you don't "optionally" need battlefield
compatibility, "optionally" need programmer portability;
these are always needed, and the mandate should have been
strengthened and better enforced, not removed.
Whitewash, because what had actually happened was a
rebellion among the services, that should have had a large
number of officers cashiered if not court-martialed. Abuse
of the exemption system had gutted the Ada mandate, and
commands, one where I worked among them, were scamming the
exemption system to get ongoing exemptions for problems they
caused themselves in the first place, in very
straightforward attempts to avoid converting software staff
to Ada software staff, and prolonged so long as they could
keep getting exemptions that let them avoid doing the hard
work needed to become unexemptable.
If the Ada mandate hadn't been dropped, the rebellion would
have to have been treated as what it was. The DoD chose to
cave rather than apply military standards to a military
discipline problem.
> In the early days of Ada, the mid-1980's, when compilers
> were not adequate and people using the language did not
> understand how it was different from what they already
> knew, there were a fair number of failures.
> Since that time, and with the advent of the Ada 95
> standard, there have been significant successes with Ada.
> The compilers are as good as one will find with any other
> language. The language supports a level of reliability
> not found in most other languages. It turns out to be
> easier to understand and learn than seemed to be the case
> when it was an entirely new set of ideas.
This is way true. As a statistic of one, I'm a much better
programmer in many other languages (C, C++, Pascal, Fortran,
Java, Perl, Modula-2, StarLogo, for examples), but I
_prefer_ programming in Ada. It is just a much better
programming language [and this isn't a casual comment; I say
this having committed software development professionally
that delivered code in a gross of programming languages and
dialects over a long career] than the others.
>> Theoretically, they are all working in Ada using very
>> strict procedures. In actual practice, there are a lot of
>> people who are actually getting real things done!
The mythology of "people getting stuff done by ignoring the
rules" is that the bozos doing so only consider the short
term costs: what it takes to meet management's ridiculously
short milestones. Factoring maintenance costs into the
picture, these people should be shot, but their managers
should precede them to the firing squad for allowing what
they do to happen. What you really have here is the
"programmer as prima donna" syndrome: rules are for other
people, "I'm a hacker!"
Indeed.
> And many of those people are getting real things done in
> Ada.
Especially in the transportation industries, worldwide.
> There is a long list of highly successful military and
> non-military software written in Ada. Most of it is
> targeted to safety-critical software rather than the
> desktop. We certainly can point to failures in projects
> that used Ada. We can also point to failures in projects
> using any other language. In my experience, the failures
> in which Ada was used had nothing to do with Ada. In most
> cases, those failures had to do with poor management and
> poor engineering.
That, of course, can be said about software failures
independently of language issues: management is expected to
succeed given _any_ Turing complete language, but "software
management" is still mostly an oxymoron: the technically
incompetent leading the unherdable technocrats.
>> It keeps everybody happy. The cushy-job crowd spends
>> years to do projects-that-will-go-nowhere using various
>> revisions of Ada, following very complicated specs.
It helps to remind those who think of programming as a
"cushy job" that programmer burnout rates are high,
programmer divorce rates extraordinary, programmer work
hours are obscene, programmer job security is next to
non-existent, programmers come home from long work weeks of
programming to spend hours more in unpaid self-training to
keep up with a field that abandons those who stop learning
for even six months.
> Those complicated specs are necessary because they are
> related to complicated systems. Ada has been highly
> successful in uncomplicated systems, and also highly
> successful in some of the most complex safety-critical
> systems ever designed and deployed. In case you did not
> know it, every time you board a commercial jetliner, there
> is a high probability you are depending on Ada at many
> levels, for your safety and dependability of arrival.
>> The get-the-job-done crowd, in the meanwhile, can get all
>> kinds of exemptions and stuff, and gets a lot of real
>> work done. The real work can then even get some Ada
>> framework around it to keep the Ada people feeling
>> productive!
And now you have an unmanageable, unmaintainable mess that
should never have been allowed to happen: heads should roll.
> You either don't know much about the current state of Ada
> software practice, or you were on a project that was badly
> managed and are one of those who thinks the problem was
> Ada. I recall a project at NASA Kennedy Space Center
> where I found a lot of people complaining about Ada. It
> turned out that Ada was not the problem. Rather, the
> management had insisted that all programming use a vi
> editor. The engineers assumed this was inherent to Ada
> and constantly complained about Ada when the problem was
> that they hated the editor.
Which is hilarious; the vi(), nvi(), stevie(), vim(),
elvis(), and so on, family of editors were first written for
the use of _secretaries_; they are the easiest editors to
use imaginable, with wonderful ergonomic, navigation, and
search-and-replace mechanisms, and the more up-to-date
versions are far past wonderful, with text type specific
color syntax highlighting, Turing complete scripting
languages, and a wealth of special purpose features. Like
any powerful tool, they take time and training to achieve
high skill levels and productivity though [for example, I'm
writing this article in vim(), from which I will then cut
and paste the whole article into my browser's utterly
incompetent editor widget, to avoid the pain of using that
widget], and many people, dropped into vi() unwarned, suffer
horribly.
> Another project was on a contract won by a large and
> overly-bureaucratic organization that staffed their
> leadership team with some of the most incompetent people I
> have ever seen in software development.
Given what I've seen at various employments, incompetent
people doing software management is utterly the norm.
You get a combination of Peter Principle Promotions:
programmers with no leadership skills suddenly floundering
foolishly in jobs they should never have accepted; and
Lateral Ludicrousness: horizontal transfers of folks who
might be wonderful warehouse worker managers, arriving with
the idea that because you can manage something with
essentially no technical content, you have the skills to
manage something with intensely challenging technical
content, but where most of the management issues are
technical issues, and most of your decisions have to be made
having no idea what the disputing parties are saying.
> The programmers were relatively good, but they were
> saddled with this ineffe[c]tive management. The project
> was delayed, delayed, delayed, and the complexity
> increased due to the meddling of these incompetent
> managers. The contract was later won by another
> contractor that had better and more experienced
> leadership. They did the job, in Ada, just fine after
> firing (should have set fire to) some of the deadwood that
> was keeping things from getting done.
Amen, except the second crowd is a fairy tale in my wide and
long experience.
> I have seen project after project completed successfully
> using Ada, often on-time and withing budget.
with budget; use budget; burn_budget(); ??? ;-)
> Sorry your experience is different. The issue is not the
> language, but the competence of the people using it. I
> find Ada easy to understand, easy to use, and a great
> pleasure to engage for simple and complex programming
> tasks. My experience is that people properly trained will
> find it as pleasant as I do.
As with vi(), as with ClearCase, if you don't _learn_ the
tool thoroughly _before_ forming your initial in-use
opinions of it, you'll hate it forever.
> When I compare it to the other languages I know, including
> Java and C++, I realize that Ada is still the best option
> for any real-time embedded software system where a high
> level of safety is required. The concurrency features are
> far superior to those in Java, and the surprises far less
> frequent than one enjoys in C++.
True.
On the downside, as a no-longer-mandated and distinctly
fringe, almost "cult", language, Ada lacks the wide variety
of intensely useful and daily burgeoning libraries of
already invented wheels of Java, the programming hordes
willing to debug open source code of C and C++, the
just-for-fun applications and game authoring tools that make
other languages more able to attract young programmers to
become language-X junkies.
Mostly, this is due to conscious decisions, budget
shortsightedness or unconscious arrogance in the Ada
community: "our language is just so damned incredibly
superior in its serious applications, there is no need for
it to cater to the unwashed masses".
That way lies relegation to the dustbin of programming
languages history, the track down which Ada seems inexorably
headed to this day, whatever its admitted successes.
Ada was intended to replace obscure languages like Jovial.
Instead it has become one of them. The Countess of Lovelace
should sue for damages to her good name.
> Richard Riehle
Crossposted to comp.lang.ada, which _really_ should add
comp.lang.ada.*, where * == {misc,gnat,advocacy,???}, to give
discussions like this a home.
> The Ada mandate was poorly managed by the DoD.
Talk about praising with faint damnation. They couldn't
have done worse by avowedly working against adoption of Ada.
Creating an unenforced underfunded mandate only accomplished
sowing confusion and a burning desire to find a way around
the rules, which all services but the USMC then took up as
the latest "fun frustrating feeble feckless DoD management
by paid professional footdragging" game. Which DoD lost to
its component parts.
> The language is structured around a few simple principles.
Which seem to be "simple" only in the minds of computer
language theorists, not in the minds of us mere programmers,
to whom the reasons for which these principles are
meritorious and why they should govern our lives are still
quite convincingly opaque.
> Some of those principles are more rigorously defined in
> Ada than in other languages.
If only the same rigor had been applied to furnishing
_readable explanations_ of these principles, in
self-contained, self-standing "why, not merely what" style,
in words clear enough to have been penned by Hemingway, the
pain and suffering of programmers new to Ada might be much
diminished.
> One principle, separation of scope from visibility, is so
> different that even Ada programmers have difficulty with
> it at first.
Last time I checked by comp.lang.ada [which the mannerless
arrogance of certain posters there toward newbies finally
made unstomachable], this issue still seemed to consume the
bulk of the newsgroup. Are the questions now settled, and
is there somewhere available an online tutorial explaining
the issues clearly enough for a sixth grader to read and
use?
> Once they stop fighting it and understand it, the rest of
> the language falls into place.
Much like certain editors better not brought up again, or
lots of other software cobbling tools, for that matter.
xanthian.
Good thing all those devices programmed in C have no problem
talking with each other. Every night, I hear my stove chatting
with my washing machine.
Soldiers are replacing army-issue gadgets with commercial
ones because all of the different military devices use
incompatible batteries, increasing the weight of spares
that have to be carried.
Programming language choice is so far from a factor in
interoperability and communications that your comments
are laughable.
> Probably. See AI-285. (http://www.ada-auth.org/ais.html).
??? That link doesn't go to anything obviously pertinent, just the top
of some huge and not pellucidly navigable database even whose
component parts are confusingly labeled.
Do you have a direct link or is what you reference really on that page
somewhere?
xanthian.
This, maybe? It's AI-285, anyway.
<http://www.ada-auth.org/cgi-bin/cvsweb.cgi/AIs/AI-00285.TXT?rev=1.6>