Comments?
Jon
-----
The Forrest Curve
Jon Forrest
for...@ce.berkeley.edu
There is a phenomenon sweeping the computer industry that is having
a profound but largely unrecognized effect. I claim that companies
ignoring this phenomenon will suffer a slow and painful death. What's
more, there's absolutely nothing that can be done to escape it.
In this article I first describe this phenomenon and then spend some
time trying to figure out what it means.
Simply and briefly stated, my hypothesis is that fewer and fewer
computer users think their computer is too slow. I've invented what I
call the Forrest Curve to illustrate this.
Here's the Forrest Curve:

          |\
          | \
  "slow"  |  \  /\
  factor  |   \/  \
          |        \  /\
          |         \/  \
          |              \  /\
          |               \/  \
          |                    \_________
          |______________________________
                   -- time --
This is a curve with a general downward slope, having occasional
upward blips. The curve approaches but never reaches 0. (Neither axis
is drawn to any scale, nor can either be used to derive any specific
numeric values.)
The "slow" factor is the number of people who think their computer
is too slow. I admit that this isn't a very objective measurement but
you can get a feel for it by the amount of grumbling about computer
speed that takes place in your office.
I'm also not being very specific about exactly what constitutes a
"computer". I claim it really doesn't matter. Taken as a whole, the
pile of stuff sitting on someone's desk (or lap) is what I consider
to be a computer. A more detailed examination wouldn't change the
Forrest Curve.
Note that I'm including the entire population of computer users in
this graph, many of whom know very little, if anything, about what
their computer is really doing or how it works. But, even if I were
to confine this graph to "software professionals" the graph would
merely have a higher origin point. The general shape wouldn't
change.
I also recognize that there is a class of users who can and always
will be able to consume any amount of computer resources.
These guys are why the Forrest Curve never goes to zero. In spite of
their needs they can't reshape the Forrest Curve because they no
longer have enough money to spend.
Every so often something comes along that causes a temporary
perturbation in the Forrest Curve. Some examples might be
relational databases, the X Window System, Windows NT, multimedia,
handwriting and speech recognition, and so on. This is natural.
There will always be such cases and they admittedly can cause high
blips in the Forrest Curve. Sometimes these blips are partially
flattened by special purpose hardware but the problem is that
special purpose hardware usually has a short lifespan and is
doomed to financial failure due to lack of economy of scale. The
rest of the time general purpose hardware will catch up. The one
exception I can see here is that the hardware necessary to handle
digital video is special purpose now but will soon be a commodity,
once consumer television goes digital.
Another implication of the Forrest Curve that we're already seeing is
the shrinking, if not outright elimination, of the distinction between
a workstation and a PC. A while ago you could think of a workstation
as a kind of special hardware gizmo that was only bought for a select
few. The rest of us got PCs. But now, with 3 GHz Pentium 4s, the
InfiniBand bus, Fibre Channel disks, and all the rest, it's gotten to
the point where the main difference between a workstation and a PC is
the size of the monitor, the lack of IRQs in workstations, and maybe
the amount of memory you can stick in a PC.
The Forrest Curve implies that the folk myth claiming that people's
requirements for computing power expand to consume all available
computer cycles is no longer true. I'm not convinced that it ever
was true, although I have more faith in its corollary about disk
space. Meanwhile, although Moore's Law, which states that the power
of microprocessors doubles every 18 months, seemingly operates
independently, the Forrest Curve does predict that Moore's Law will
start to spread out as the cost of producing ever faster
microprocessors rises.
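As an aside, the 18-month doubling as stated above is easy to turn
into a rough projection. A minimal sketch (the function name and the
example period are illustrative, nothing more):

```python
# Hedged sketch: relative performance predicted by the article's
# reading of Moore's Law (power doubles every 18 months).
def moore_factor(months, doubling_months=18):
    """Return the predicted performance multiple after `months`."""
    return 2 ** (months / doubling_months)

# Over three years the rule predicts a quadrupling:
print(moore_factor(36))  # -> 4.0
```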
Another factor I recognize is that having an infinitely fast computer
on your desk doesn't do you any good unless it runs the applications
you need to run. I'm choosing to ignore this issue.
Let's assume you accept the Forrest Curve. What does it mean?
o It means that computer vendors are going to have a tougher
and tougher time selling computers. This is because people above
the curve only need a new computer when something breaks. This
happens less and less often.
o It means that computer purchasing decisions are no longer made based
on price/performance or just performance, like in the dark ages. Now,
when somebody decides to buy a new computer it will be price
alone, or maybe price and service, that determines which computer to
buy. The service aspect should not be ignored. Some people consider
it very important to know that they can call somebody to come to their
home or office to fix stuff and are willing to pay a fair amount for
this. Other people feel that it's important to buy name brands
no matter what the quality of the name brand is. For those of us who
are more enlightened, if our application requires 25 MIPS to run, and
we're trying to decide whether to buy the 30 MIPS machine or the 50
MIPS machine to run it, the number of people who'd pay much extra for
the 50 MIPS machine is very small. Let's face it, nobody is going
to turn down a faster computer but the people making the purchasing
decisions will have a harder and harder time justifying the extra
cost of a faster system for most people. This is especially true in
environments where large numbers of computers are bought for
non-technical people.
o In earlier versions of this document I had the following sentence
right here: "It means that companies like DEC and SGI that are trying
to produce the fastest computer are slowly committing suicide because
there will be fewer and fewer people who need to buy computers this
fast." Two points for me. As of this writing, DEC is gone and SGI is
fighting for its life. Again, in earlier versions of this document
I said here "On the other hand, companies like Sun, and
virtually all PC companies, are doing the right thing by
concentrating on staying just below the Forrest Curve by selling
computers that are fast enough at the lowest price. Although Sun's
approach might have been an accident it has kept them profitable
during some extremely hard times in the industry. Ironically, it may
turn out that Sun will start to suffer too unless it can sell
Sparcstations at PC prices or increase their performance to rise above
the Forrest Curve for a little while". Two points for me again. These
days Sun is hurting and nobody is making money selling PCs except
for Dell, and their reason for success has nothing to do with speed,
little to do with technology, and lots to do with a deep understanding
of logistics and marketing.
So, except for breakage, to be successful the computer industry
is going to have to concentrate on selling to people who don't
currently have a computer. How many people is that in modern
society? Maybe the laptop industry will thrive because it isn't
affected by market saturation: most people who buy laptops already
have at least one computer, and buy a laptop anyway.
Maybe computer vendors can postpone hitting the Forrest Curve by
concentrating their marketing and sales efforts on the Second
and Third World, but I bet the Forrest Curve still applies there,
just with a lower origin point. Plus, I wonder how much money there is
to be earned there, given hard currency and other non-technical
problems. But, even if these places are exploited, the Forrest
Curve is merely spread out a little. There's simply no way of
escaping it.
Another way to explain the Forrest Curve is as just the commoditization
of computer technology. Assuming your application runs on a certain
computer architecture, there's little any vendor can do to add enough
value to get you to buy their system instead of somebody
else's. For all intents and purposes, the different brands of
computers are all the same, just like different brands of flour
and sugar, and buying a computer will be similar to buying baskets
in Tijuana. The only way for computer vendors to survive is to
remember this, and to remember that price and service will be what
makes or breaks them. Caveat Vendor!
Copyright 2002 Jon Forrest. All rights reserved. This document may be
published in any forum for any reason provided the document is not
modified in any way. Last updated (11/7/02).
Thanks for posting this again....
In reviewing all of my personal system upgrades over the last 10
years, I see that the early upgrades were often about performance,
and that the later upgrades are more often about software
compatibility (e.g., the ability to run Mac OS X) or expandability (my
early machines did not have many slots for memory or disk
expansion, and I really don't like piles of external I/O devices).
A consequence of the Forrest Curve is that I am now purchasing
systems that are physically larger and more expandable than I used
to, because I expect to keep them longer now that they are
"fast enough". This fits in with a trend to
purchase systems from large vendors to increase the likelihood
of getting support (though even these larger boxes are cheap
enough to simply replace if they fail after more than 2-3 years).
I still do some computational science as a hobby, so there are some
applications that I run (or want to run) that have semi-infinite
computational requirements. This is not likely to be very common
-- how many folks out there are really, really interested in
running computational experiments in geophysical fluid turbulence
for fun?
--
John D. McCalpin, Ph.D. mcca...@austin.ibm.com
Senior Technical Staff Member IBM POWER Microprocessor Development
"I am willing to make mistakes as long as
someone else is willing to learn from them."
My pleasure. I hope to gain great fame and fortune
from this.
> A consequence of the Forrest Curve is that I am now purchasing
> systems that are physically larger and more expandible than I used
> to, because I have the expectation that I will keep them longer
> because they are "fast enough".
That's the kind of reason that I expect to see more
and more of as time goes on.
As for me, I no longer care if my computer is the fastest
computer in the world, but I want it to be as quiet as possible.
So, I'm keeping my 750 MHz Athlon and investing in a new
quiet power supply and disk drive.
Our purchasing strategies are clearly generating some
revenue for somebody, but we're clearly both spending
a lot less than we would have back when our computers
weren't fast enough.
> I still do some computational science as a hobby, so there are some
> applications that I run (or want to run) that have semi-infinite
> computational requirements. This is not likely to be very common
> -- how many folks out there are really, really interested in
> running computational experiments in geophysical fluid turbulence
> for fun?
Exactly. Plus how much money are such people willing to spend?
Not enough to support the high-end CPU (and systems) industry.
Jon
You're famous in my book. 8^)
Another one of the fellow guys in Engr. I and North Hall, intellectual children
of Glenn Culler (not doing well at the moment).
I always wonder where Derrick K. went.... And Glenn Davis.
I know others went to ACC
Fortune: can't help you there. You have to talk to Robert.
>As for me, I no longer care if my computer is the fastest
>computer in the world, but I want it to be as quiet as possible.
I would not mind being portable.
>Exactly. Plus how much money are such people willing to spend?
>Not enough to support the high-end CPU (and systems) industry.
Commoditization.
It's not individuals but institutions.
Clearly not as powerful as many people thought.
> o In earlier versions of this document I had the following sentence
> right here: "It means that companies like DEC and SGI that are trying
> to produce the fastest computer are slowly committing suicide because
> there will be fewer and fewer people who need to buy computers this
> fast." Two points for me. As of this writing, DEC is gone and SGI is
> fighting for its life.
I'm not sure I'd be quite that quick.
SGI is trying to build fast computers, but is clearly targeting them
at the people who never will get enough CPU cycles. I suspect high
performance computing never has been lucrative, and SGI is being
outperformed by companies with more legs to stand on, and more
business to distribute development cost over.
DEC is gone, but it got merged into Compaq, which got merged with HP.
So in a sense, it's still around.
I would also point out that *everybody* is trying to build the fastest
computer; IBM, Sun, and HP are all in the race. Intel and AMD are
constantly competing to have the fastest CPU.
-kzm
Yes.
It reminds me of the repeatedly announced end of physics, and the rapid
assent by fellow high-priests of the business makes me even more
confident in my dissent.
When I was receiving my education in physics, the reality we all faced was:
1. The government was less and less willing to spend increasing sums on
experiments.
2. The cost of every single new data point seemed to be rising
exponentially.
In other words, the flood of data that had kept twentieth-century
physics going since the last prediction of its demise was about to
slow to a trickle.
And it did...for a while.
What the nay-sayers could have seen and didn't (I am arrogant enough to
say that I did) was that putting a telescope above the earth's
atmosphere would allow us to see things we had never even dreamed of and
re-open the flood gates once again.
The denizens of this newsgroup remind me of nineteenth century
physicists: they had worked their relatively new field (pun intended)
over so thoroughly that it was hard to imagine that there was much of
anything left to discover.
The cleverness and elegance of nineteenth century physics is not to be
mocked, and neither is the current state of computer science, but it
would be foolish to imagine that it is in any sense complete or nearing
any meaningful boundaries.
What has *really* happened is that makers of hardware have gotten way
out in front of makers of software. Computer science is *not* my area
of expertise (as everyone who has read my posts undoubtedly realizes),
but when I read a document from a large hardware manufacturer on
artificial intelligence, I just know that the authors, well-educated and
brilliant though they may be, are headed down the same kind of
wrong-headed path as nineteenth century physicists who bravely tried to
turn vortices into particles. They were trying to turn something they
absolutely did not understand (quantum mechanics) into something they
did (fluid mechanics).
One has only to look at the human brain, the relatively simple
blueprint underlying its construction, and the brain's incredible
capacity to cope with things it has never seen or heard of to realize
that there are some *very* basic things about computation that have
not yet been discovered.
The brain doesn't need a team of computer programmers and analysts to keep
it going, so there is no chance of the hardware getting ahead of the
software. If somebody can explain how that works, I will be glad to
listen to their predictions about the future of computer science. Until
then, my response is, "Been there, done that."
Jon,
I hear many people complain about their computers being slow. It is not
their CPU or HD or Graphics card, but it IS their network connection. So the
bottleneck has moved outside the box sitting on or under the desk to the
wires and boxes out in the rest of the world that people use for their
computing and communication. Whether this is a corporate network, the
internet or even a local lan party, the ability of the computer to get and
process the information that the user needs is a bottleneck and with the
availability of very cheap computing power (a GHZ+ CPU is less than a month
of DSL access) it is likely to stay that way for the immediate future.
Limiting yourself to the box only is a way to artificially create a curve -
one which upon examination is really just a recasting of Moore's law.
Thus I propose the following:
Dahlgren's Law:
In aggregate people will always complain.
Dahlgren's Second Law:
The complaining prescribed in Dahlgren's Law is a constant regardless
of underlying circumstances.
Now am I famous? Or do I need to post this for several years...
-Jack Dahlgren
Actually, I would disagree there. In those two cases, there were and
are meaningful boundaries, but they are of the form:
1) To get beyond here, you are going to have to think laterally;
this approach will not succeed, as it is fast becoming unproductive.
2) To get beyond here, you are going to have to change direction;
this approach is taking you away from the solution to the problem.
3) To get beyond here, you are going to have to abandon many of
your cherished dogmas; you are already working with falsehoods.
These form a gradation, of course, and you can argue where (say) the
doctrine of the Ether fell. I can't think of clear examples of the
last from physics, but there are plenty from medicine and biology.
One of the latest and clearest examples of (2) was less than 50 years
ago, when people stopped trying to reduce fluid 'friction' by getting
ever closer to laminar flow. The biologists had been telling them
that there was a better solution for years :-)
>There is only the human brain to look at, the relatively simple
>blueprint underlying its construction, and the brain's incredible
>capacity to cope with things it has never seen or heard of to realize
>that there are some *very* basic things about computation that have not
>yet been discovered.
Especially as the largest modern computer systems are approaching
the complexity of a human brain, in 'blueprint', 'training' and
'capacity'. Only a couple of orders of magnitude to go :-)
>The brain doesn't need a team of computer of programmer analysts to keep
>it going, so there is no chance of the hardware getting ahead of the
>software. If somebody can explain how that works, I will be glad to
>listen to their predictions about the future of computer science. Until
>then, my response is, "Been there, done that."
Yeah, well, it's all done by magic, innit?
Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email: nm...@cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679
I think the easy way to see the truth behind the Forrest curve is to
consider what kind of machine would be on your desktop today if the
compute market had the same number of customers it does today, but the
average customer was still willing to spend $5K/machine as they were
in the early 90's. You could build a heck of a machine for $5K at a
15% gross margin today, again assuming that the NRE was split across
the appropriate number of machines.
Companies are competing for the fastest machine at the popular price
points, and that popular price point keeps descending. Even the
super-high end seems to be playing this game.
nate
> > A consequence of the Forrest Curve is that I am now purchasing
> > systems that are physically larger and more expandible than I used
> > to, because I have the expectation that I will keep them longer
> > because they are "fast enough".
>
> That's the kind of reason that I expect to see more
> and more of as time goes on.
>
> As for me, I no longer care if my computer is the fastest
> computer in the world, but I want it to be as quiet as possible.
> So, I'm keeping my 750Mhz Athlon and investing in a new
> quiet power supply and disk drive.
This is interesting. You'd expect the regulars here to be power-hungry
when it comes to computers.
I've got a several year old Athlon 700 running Linux, but the other
computers I have here are Macs in the 250 - 266 MHz range. They run
MacOS 9 very nicely, but the ones that are upgraded 100 - 120 MHz
machines (3rd party G3 daughterboard upgrades) are a little pokey in OS
X. The 266 MHz Powerbook I'm typing this on is my main computer and
it's actually not too bad in OS X, but the heavy graphics do slow it
down and it would be nice to move to something where the GPU can do the
drawing. I'll probably move to something like an 800 MHz iBook in the
next year or so, but I'm hard pressed to see a reason to upgrade past
*that* for a long time to come.
-- Bruce
>I hear many people complain about their computers being slow. It is not
>their CPU or HD or Graphics card, but it IS their network connection.
1 Gb/sec "concatenated" (not muxed) serial links are commodity. 10 Gb/sec
concatenated serial links are easy to implement. 40 Gb/sec concatenated
links are difficult, but still possible now. 160 Gb/sec muxed links are
possible in research labs. Many serial links can be merged into a single
fiber using DWDM- the components to do this are commodity thanks to the
telecom boom. Fiber itself is cheap. Fiber repeaters are cheap. I have a
set of dusty OC-192 laser Rx/Tx modules sitting on my desk which can
transmit through 120 Km fiber without any repeaters.
Yet there is no hope to get much more than 1 Mb/sec to my house (or to a
company for that matter) any time soon. Most people are still limited to 56
Kb/sec. Sigh. :-(
--
/* jha...@world.std.com (192.74.137.5) */ /* Joseph H. Allen */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}
The cash outlay has certainly decreased significantly.
I was able to purchase my last two peecee-compatible
machines (one Pentium4/1600 and one Athlon 1600+)
with 19" monitors for about $1850 --- about 25% less
than my first PowerPC-based Mac with a 15" monitor
(and that was a low-end Mac at the time) in December
1995.
Performance and capacity comparisons across this 7 year period
are staggering. Moore's Law suggests about a 12.8x improvement in
price/performance over this period, while my quick estimates
suggest that what I actually got was more than 20x in performance
and capacity (RAM and DISK) for less than 1/2 the price -- close
to 50x in price/performance at the system level.
(I should note that these results do not depend on using a
PowerMac as the base -- the machine it replaced was a
Packard Bell 486/20, and I am sure the comparisons would look
amazing there as well.)
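Redoing that arithmetic explicitly (the 20x and the one-half price
are the round figures above; the exact ratios vary by benchmark):

```python
# Rough system-level price/performance ratio from the figures above:
# "more than 20x in performance ... for less than 1/2 the price".
perf_ratio = 20.0    # performance/capacity improvement (a lower bound)
price_ratio = 0.5    # new price as a fraction of the old (an upper bound)

# Price/performance improvement is therefore at least perf/price:
print(perf_ratio / price_ratio)  # -> 40.0, consistent with "close to 50x"
```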
Absolutely. I have bought irregularly and infrequently, and what
was originally a major expenditure (excluding the wonderful BBC
Micro) is now almost a minor one.
What I still can't do is to get a serious parallel system for an
affordable price, but that is because the vendors don't see a
market, and not because one couldn't be made. I am talking about
(say) 16 CPUs each running at 250 MHz in a compact box.
Incidentally, I do still find the machines slow, but that is because
my bloatware has got indigestion. I don't believe a faster CPU would
help, as the bottleneck is almost certainly memory latency in the
suppurating spaghetti that is a typical networking and/or GUI
application. If I could dual-boot MacOS and Linux, I might well
buy a MacOS licence, as by all accounts that is streets better than
what I am currently using :-)
>SGI is trying to build fast computers, but is clearly targeting them
>at the people who never will get enough CPU cycles. I suspect high
>performance computing never has been lucrative, and SGI is being
>outperformed by companies with more legs to stand on, and more
>business to distribute development cost over.
It's hard to draw lessons from the steady shrinkage of a company that
uses low-performance cpus in their products. You're correct that their
target market is extremely cpu-hungry, but a much more lucrative
approach to that market is to use more commodity parts.
-- greg
Sure. I was only talking about computing power. You're talking
about something else entirely, and I agree with you. The corresponding
curve for the WAN network industry would look a lot different.
> Limiting yourself to the box only is a way to artificially create a
> curve - one which upon examination is really just a recasting of
> Moore's law.
But the making and selling of the box has been what's kept
the computer industry going. Now, as you say, the box
has become pretty much irrelevent. What's going to happen
to the industry?
> Now am I famous? Or do I need to post this for several years...
As long as it's once a year I say go for it.
Jon
Except the Forrest Curve isn't claiming the end of anything - it's an
asymptotic curve. Doesn't that change things?
> What the nay-sayers could have seen and didn't (I am arrogant enough to
> say that I did) was that putting a telescope above the earth's
> atmosphere would allow us to see things we had never even dreamed of and
> re-open the flood gates once again.
The corresponding event in computer technology would be
if all of a sudden computing power and storage were
increased to a level we had never dreamed of for a cost
lower than we have ever dreamed of. Let's assume this happened.
How would the day to day experiences of the people who
already think their computer is fast enough be improved?
> The denizens of this newsgroup remind me of nineteenth century
> physicists: they had worked their relatively new field (pun intended)
> over so thoroughly that it was hard to imagine that there was much of
> anything left to discover.
There's a crucial difference between discovery and development.
Many scientists spend their lives trying to discover the patterns
of nature. Computer people are different. They generally aren't
trying to discover anything. They're trying to develop new things.
But this is off topic from what the Forrest Curve is trying to say.
> There is only the human brain to look at, the relatively simple
> blueprint underlying its construction, and the brain's incredible
> capacity to cope with things it has never seen or heard of to realize
> that there are some *very* basic things about computation that have not
> yet been discovered.
No doubt, but I'll have no trouble sleeping at night because
I know that it won't be lack of computing power that will keep
these discoveries from taking place.
Jon
<stuff about high speed networks deleted>
> Yet there is no hope to get much more than 1 Mb/sec to my house (or
> to a company for that matter) any time soon. Most people are still
> limited to 56 Kb/sec. Sigh. :-(
See what I mean about complaining! It is an unstoppable force.
-Jack Dahlgren
> This is interesting. You'd expect the regulars here to be power-hungry
> when it comes to computers.
At home I will need an all-new computer to play Doom III really
well. Does that count?
--
Chris Morgan
"Not so bad offer to discuss about"
- Best recent email spam subject line
I am saving money for a box for Aces High. My current Windows PC can
only do about 3fps in it (ok, it's ancient - a K6/200 with Virge DX).
I may be able to afford a Doom III capable PC in five years, perhaps...
Even if we consider work, Forrest is on crack anyway. "Fast enough",
indeed. He should try to do "bk pull" from a Linux kernel clone on
a 500MHz/192MB laptop to discover the new definition of suffering.
-- Pete
That is correct, but SGI scores very heavily in producing extremely
lightweight, low-power boxes. If you are limited by floor loading
or room power/cooling (as many machine rooms in tower blocks are),
SGI produces very effective solutions.
> I am saving money for a box for Aces High. My current Windows PC can
> only do about 3fps in it (ok, it's ancient - a K6/200 with Virge DX).
> I may be able to afford a Doom III capable PC in five years,
> perhaps...
John Carmack's engines aren't -that- far ahead of the curve - the
power to run Doom III will reach the $199 Walmart PC in only about
18-24 months after the game launches, I would guess.
> Even if we consider work, Forrest is on crack anyway. "Fast enough",
> indeed. He should try to do "bk pull" from a Linux kernel clone on
> a 500MHz/192MB laptop to discover the new definition of suffering.
Oh yeah, I could really use about 10X the compile performance of my
Sun machine under my desk at work, but I think he's right that we're
in a diminishingly important part of the overall computer market.
Chris
I see two (or three) things:
1) The industry moves on to things outside the box. Since I don't speak for
Intel, read one of Intel's annual reports for how they think that
communications silicon will be the thing to keep feeding their engineers and
stockholders (though there might be a few more rough years there...). Looks
pretty clear where Intel is putting their money.
2) The industry sells boxes to those who do not have boxes yet. I think most
PC makers will tell you which geographic areas are strong and which are not.
Sales guys will need to learn Mandarin.
3) The industry will work on other form factors. The processing power that
you can keep in your pocket has plenty of room for improvement as long as
power can be kept under control. CPU's are also at the heart of a lot of
those "converged" devices (TV, stereo, video recorder in one) which are
being pushed. Personally, having the stereo, TV, and PC all in one place
limits rather than expands my use of them, but the folly of the converged
entertainment/computing center will be figured out about 15 minutes after
they actually get one of these things working and it turns out that the kids
want to watch cartoons and dad is trying to download pr0n (or vice versa).
The likely solution there is again a networking one distributing devices
throughout the home.
Meanwhile, you can always have more computing power so the performance and
speed wars will continue forever. The box is not irrelevant yet. It is not
so exciting, but it is not dead.
On the other hand, maybe it WILL all stop and everyone can go home
and tend roses.
-Jack Dahlgren
Not speaking for my employer.
> The corresponding event in computer technology would be
> if all of a sudden computing power and storage were
> increased to a level we had never dreamed of for a cost
> lower than we have ever dreamed of. Let's assume this happened.
> How would the day to day experiences of the people who
> already think their computer is fast enough be improved?
> Jon
>
This happens all the time, doesn't it? At least it has in recent memory.
Here is an example from personal experience this summer: HD space dropped in
price to ~$1.00 per GB. I can now save ALL the digital photos I take instead
of being selective. On my first computer I could have stored 20 of them. Now
I have no hesitation to store another hundred or so every time I take the
camera out. Costs me about a dime in storage space. This makes me think I
should go out and buy a 100GB drive just for backing stuff up. If your
timeline for "sudden" is extended a bit longer than a 15 minute attention
span, there are huge numbers of examples you can find.
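The "dime" figure checks out; a rough sketch, assuming about 1 MB per
photo (an assumed figure, typical for cameras of that era):

```python
# Back-of-envelope storage cost at ~$1.00/GB (the price cited above).
# The 1 MB/photo figure is an assumption, not a quoted number.
def storage_cost_dollars(n_photos, mb_per_photo=1.0, dollars_per_gb=1.00):
    """Approximate cost of storing n_photos at the given sizes/prices."""
    return n_photos * mb_per_photo / 1024 * dollars_per_gb

print(round(storage_cost_dollars(100), 2))  # -> 0.1, i.e. about a dime
```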
-Jack Dahlgren
No one ever predicted the actual *end* of physics, just that it would
become ever-less fertile soil to till. I think that's more in the
spirit of the Forrest curve.
>
>> What the nay-sayers could have seen and didn't (I am arrogant
>> enough to say that I did) was that putting a telescope above the
>> earth's atmosphere would allow us to see things we had never even
>> dreamed of and re-open the flood gates once again.
>
>
> The corresponding event in computer technology would be if all of a
> sudden computing power and storage were increased to a level we had
> never dreamed of for a cost lower than we have ever dreamed of. Let's
> assume this happened. How would the day to day experiences of the
> people who already think their computer is fast enough be improved?
>
What to do with ever-increasing amounts of computing power is a more or
less constant topic of discussion in c.a, so I may be repeating myself here.
1. Current user interfaces are wholly, completely, and repulsively
inadequate for the average user. Those of us who like computers and are
willing to invest some time in them often forget just how true this is.
I spend significant time learning about computers, how they work, and
how to use them; and most of the people around me know that. The
questions I wind up answering for them are on topics so basic that if I
had to spend time learning about them, I'd never get computers to do
anything. For example, there is a way in MS Word to make a text box
inserted into an HTML document transparent. Where is it, and how does an
ordinary mortal discover it without becoming an expert on MS Word?
Examples of the way in which computers *are* evolving to address this
need are auto-completion of text and help agents that occasionally offer
a suggestion that is actually useful.
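As a concrete illustration of the auto-completion mentioned above (a generic
sketch of the technique, not how any particular product implements it):

```python
import bisect

class Completer:
    """Minimal prefix auto-completion over a sorted word list."""
    def __init__(self, words):
        self.words = sorted(words)

    def complete(self, prefix, limit=5):
        # Binary-search to the first word >= prefix, then collect
        # consecutive words that still start with the prefix.
        i = bisect.bisect_left(self.words, prefix)
        out = []
        while (i < len(self.words)
               and self.words[i].startswith(prefix)
               and len(out) < limit):
            out.append(self.words[i])
            i += 1
        return out

c = Completer(["transparent", "transistor", "transfer", "telescope"])
print(c.complete("trans"))   # the three "trans..." words, alphabetically
```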
Would a really useful interface involve significantly more computing
power? In my estimation, it would, because such an interface would
include considerable intelligence and natural-speech processing
capabilities.
2. Anyone who thinks computers are already fast enough either doesn't
have much in the way of stored documents or is willing to be
extraordinarily patient in finding them. I have my own document
indexing system that works well enough considering I paid nothing for
it, but it takes twenty-four hours to index the documents on one of my
hard disks. When the index *is* up to date, I have to use even more
tools and computer knowledge the average user doesn't possess to find
what I need because the indexing system has no real intelligence at all.
*Everyone* could benefit by an indexing system that could really find
things, that could scan an entire collection to develop user-relevant
categories and contexts and that could make helpful suggestions when
the user is groping around without much of a clue.
Crude versions of the kind of document manager I imagine could be built
on today's platforms, but they would be extremely crude and frustratingly
slow.
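For what it's worth, the keyword-lookup core of such a system is simple; a
toy inverted index (my illustration, not the poster's actual indexing setup)
might look like this, with the hard part, real intelligence about categories
and context, conspicuously absent:

```python
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, *words):
    """Documents containing every query word (a simple AND query)."""
    sets = [index.get(w.lower(), set()) for w in words]
    return set.intersection(*sets) if sets else set()

docs = {
    1: "the Forrest curve slopes downward",
    2: "disk indexing takes twenty-four hours",
    3: "the curve has occasional upward blips",
}
index = build_index(docs)
print(search(index, "curve"))            # documents 1 and 3
print(search(index, "curve", "upward"))  # document 3 only
```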
3. Interfaces between just about everything besides the
user (peripherals, computers, programs, etc.) could be orders of
magnitude more robust. An unwanted area of expertise that I have is in
networking. The number of things that can go wrong is stupefying, and
because I follow usenet groups that deal with those things, I know that
the same thing goes wrong over and over and over again and that those
things go wrong for quite ordinary users, like a home user with a linux
and a windows box that he needs to use to share files.
4. ...
I could go on and on in this vein for pages and pages.
>
>> The denizens of this newsgroup remind me of nineteenth century
>> physicists: they had worked their relatively new field (pun
>> intended) over so thoroughly that it was hard to imagine that there
>> was much of anything left to discover.
>
>
> There's a crucial difference between discovery and development. Many
> scientists spend their lives trying to discover the patterns of
> nature. Computer people are different. They generally aren't trying
> to discover anything. They're trying to develop new things. But this
> is off topic from what the Forrest Curve is trying to say.
>
Is it? I thought my example of the human brain was pretty compelling.
Let us imagine a user presented with a computer that could pass the
Turing test and pretend to be a very competent secretary/administrative
assistant/man Friday. We would display the features and benefits, get
the potential user really worked up and then announce the price: about a
million times the cost of a copy of Deep Blue. No sale.
Same conversation. Different price: $1000. Still a diminishing number
of interested buyers?
>
>> There is only the human brain to look at, the relatively simple
>> blueprint underlying its construction, and the brain's incredible
>> capacity to cope with things it has never seen or heard of to
>> realize that there are some *very* basic things about computation
>> that have not yet been discovered.
>
>
> No doubt, but I'll have no trouble sleeping at night because I know
> that it won't be lack of computing power that will keep these
> discoveries from taking place.
>
Your response misses the point of what I was trying to say, probably
because I didn't state it in the appropriate context of your original post.
You allowed that developments come along that temporarily goose the
market for computational power, but that they play themselves out
quickly. My expectation is that there are discoveries out there as
profound as the development of the transistor that will likewise play
themselves out over decades.
> I've been posting this once a year for about
> the last 6 years. Does anybody doubt this now?
Eight years according to google :-)
Going back and reading the original 1994 thread was entertaining.
Personally I still have a lot of faith that people will be able to
come up with mass-market uses for all this power. I would certainly
use my PC systems differently if they had even 4x of any of the major
limiting resources (CPU, memory, disk, etc.).
I have no doubt that there are applications that could be sold to
large numbers of users that would require 100 to 10,000 times today's
performance. Whether or not we get to these performance levels is
probably determined to a large degree by whether or not there is
a continuum of similarly marketable applications along the way.
There might be some really viable application like total-immersion
virtual-reality that would require 10,000 times today's power, but
if there's nothing that can be done with 1,000 times today that can't
be done just as well with 100x today then maybe things stall out
there and it takes a much longer time to make the "jump" to the
next level.
I guess I'm suggesting that the "up ticks" in the Forrest Curve may
be infrequent, but that they may have a net positive effect on the
"slowness" factor over a long enough time as we come up with
revolutionary new applications to take advantage of all this new
capacity.
G.
I'm unclear as to what fundamental bearing space telescopes have on
*physics* research in general, though some astrophysicists are doubtless
pleased.
The HEP folks I know at SLAC seem to spend most of their time at
conferences arguing over which national lab gets increasingly scarce
funding, and much of the rest filling out OSHA reports and satisfying
other bureaucratic imperatives, rather than actually being in a position
to do physics.
Jon
__@/
> This happens all the time, doesn't it? At least it has in recent memory.
> Here is an example from personal experience this summer: HD space dropped
> in price to ~$1.00 per GB. I can now save ALL the digital photos I take
> instead of being selective. On my first computer I could have stored 20
> of them. Now I have no hesitation to store another hundred or so every
> time I take the camera out. Costs me about a dime in storage space. This
> makes me think I should go out and buy a 100GB drive just for backing
> stuff up. If your
I have somewhere between 50 GB (laptop) and 200 GB on all the PCs I use
and/or own; my personal files, including all the 4 Mpix photos I take,
are always stored on at least three of these systems/disks.
Backup tape? Who needs that?
:-)
Anyway, MiniDV digital video is what's gonna absorb the next two orders
of magnitude in disk size.
Terje
--
- <Terje.M...@hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"
[ ... physics and limits to desktop requirements ... ]
rmyers1400> What the nay-sayers could have seen and didn't (I am
rmyers1400> arrogant enough to say that I did) was that putting a
rmyers1400> telescope above the earth's atmosphere would allow us to see
rmyers1400> things we had never even dreamed of and re-open the flood
rmyers1400> gates once again.
Because Nature (the universe) is vast and we only see a narrow bit of
it. We don't invent Nature, we just discover it.
rmyers1400> The denizens of this newsgroup remind me of nineteenth
rmyers1400> century physicists: they had worked their relatively new
rmyers1400> field (pun intended) over so thoroughly that it was hard to
rmyers1400> imagine that there was much of anything left to discover. [
rmyers1400> ... ]
But computer science is not a science, where the goal is discovery
within a vast field of Nature.
Computer science is engineering, pursues innovation/invention instead of
discovery, and is driven by utility.
I would more readily compare computer ``science'' to car (or train, or
airplane, ...) ``science'' (that is engineering).
Car design is sort of a mature engineering field too, and the basics of
the design of a car have changed very slowly. It's just a game of
incrementally cheaper, and better. What is an adequately useful car
design has been decided long ago; so too for the desktop computer.
We now have, architecturally, desktop/laptop super-mainframes (x86s
running Windows NT/2000 or Linux are based on 50s/60s technology) and
super-workstations (the Windows or Linux GUIs are based on 70s
technology), and it's hard to imagine this changing anytime soon.
The last ``big'' innovation in ``desktops'' was the Onyx in 1980,
which was the first machine in a self contained enclosure running Unix
with a microprocessor, a Winchester disk and a cartridge tape unit.
Most systems today are built on exactly the same lines (with x86s
instead of Z8000s, 3.5" disks instead of 8", and DDS/CD-RW instead of
1/4" tapes). CD-ROM drives, another important innovation, were
introduced 20 years ago too.
Radical new car designs aren't going to happen soon, neither are radical
new workstation designs. Their utility is adequate, and they are
anchored by a lot of legacy.
If there is any hope for anything that is not ``same boring old'' (but
constantly incrementally improved) it would be something in an entirely
different application area, perhaps like wearable wireless swarms.
Arguably mobile phones are the biggest single innovation in computers:
they are in effect portable PCs of some considerable (early 1990s)
power with a built in wireless net interface. But even they are now
mature.
Other than in new application areas, computer design (more than
architecture) produces radically new designs only in response to new
hardware technologies.
The last (latest?) important technological developments have been the
SRAM/DRAM dichotomy and CMOS (and small Winchester disks), and arguably
the initial part of the past 20-30 years has been just an exploration of
the new designs (microprocessors and RISC) that were made feasible by
these new technologies.
The exploration did not take long, and it's been very samey since,
because no hardware technologies with dramatically different tradeoffs
have happened.
If things like ``ferroelectric'' or organic or holostore memory were
to really happen then a lot of the current designs would be rethought.
But then there was hope in the past and GaAs or bubble/CCD memories
did not really happen.
I agree with that.
>Computer science is engineering, pursues innovation/invention instead of
>discovery, and is driven by utility.
Really? Large areas of it most definitely aren't.
Your post exemplifies perfectly why even very competent scientists so
often get things wrong. It's the equivalent of saying that there's no
use in the national economy getting better if my business is in the
tank. The observations (1) and (2) above may be even more true today
for the HEP folks than when they first became obvious to everyone in
physics. Particle accelerators don't occupy, and may never again occupy,
the grand place in physics they once did.
The issue now facing fundamental physics in general is, given the
tantalizing clues that have been afforded by looking more closely at
what is around us, how to turn those clues into theories. Given the
likely energy scales involved, digging up half of Texas, or all of
Texas, or even the entire United States, wouldn't do much other than
make the folks in HEP feel more secure as to their economic futures.
As to what space telescopes may have to do with physics in general,
theoreticians tend to go where the data is, and, just at the moment, the
interesting data are almost all coming from space.
I don't get by slac as often as i did in the 70s & 80s ... but have
been by a couple times in the past several months. I gave a talk there
in august and got a tour of the old machine room ... which used to be
all big ibm mainframe stuff ... and is now these racks filled with
linux processors (although there is quite a bit of empty space). I
have a little cube (beamtree) on top of my screen from the slac/fermi
booth at sc2001.
http://sc2001.slac.stanford.edu/
http://sc2001.slac.stanford.edu/beamtree/
the big(?) stuff is grid, babar, and the petabytes of data that will
be flying around world wide grid. i interpreted what i heard was that
there is so much data and so much computation required to process the
data ... that it is being partitioned out to various organizations
around the world (slightly analogous to some of the distributed
efforts on the crypto challenges/contests ... except that hundreds of
mbytes/sec & gbytes/sec will be flying around the world). The economic
model seems to be that they can get enuf money for the storage and for
some of the processing ... but it seems that world-wide gbyte/sec
transfer cost is such that various computational facilities around the
world can share in the data analysis (also there is quite a bit of database
work supporting all of this stuff).
BaBar home page
http://www.slac.stanford.edu/BFROOT/
sc2002 is coming up in baltimore.
--
Anne & Lynn Wheeler | ly...@garlic.com - http://www.garlic.com/~lynn/
> rmyers1400> The denizens of this newsgroup remind me of nineteenth
> rmyers1400> century physicists: they had worked their relatively new
> rmyers1400> field (pun intended) over so thoroughly that it was hard to
> rmyers1400> imagine that there was much of anything left to discover. [
> rmyers1400> ... ]
>
> But computer science is not a science, where the goal is discovery
> within a vast field of Nature.
>
This distinction between development and discovery, which Jon Forrest
was also careful to make, fascinates me. I am simply too thick-headed
to see the difference.
When physicists slow down or even "stop" light in a gas, is that
development or discovery? In the sense that they are doing pure physics
without any utilitarian motivation, it is discovery. In the sense that
they are deliberately manipulating the natural world to get it to do
things it wouldn't likely do without clever human intervention, I would
call it development. I would also call it more along the lines of
inventing nature rather than discovering it, if you are to insist on the
distinction at all.
> Computer science is engineering, pursues innovation/invention instead of
> discovery, and is driven by utility.
>
The distinction just doesn't seem very useful. Two vortex filaments
coexist happily only in two-dimensional computer simulations. In three
dimensions, they are highly susceptible to something called a mutual
induction instability. I would call the identification and successful
theoretical treatment of this phenomenon a discovery, but the process
of discovery was driven by the most utilitarian of motivations: jumbo
jets leave behind pairs of powerful vortex filaments that would make
modern aviation completely impractical if they were anywhere near as
immortal as two dimensional simulations would suggest. As it is, the
occasional encounter of small aircraft with an unusually persistent
vortex wake can produce a headline-grabbing accident.
> I would more readily compare computer ``science'' to car (or train, or
> airplane, ...) ``science'' (that is engineering).
>
Is "applied science" an oxymoron? If so, then "applied mathematics" is
even more so. Historically, both mathematics and science have been
driven (and funded) by utilitarian needs. Anyone who thinks that the
powerful scientific infrastructure of the United States is driven by an
interest in high culture should take a look at the budget of the
National Endowment for the Humanities.
> Car design is sort of a mature engineering field too, and the basics of
> the design of a car have changed very slowly. It's just a game of
> incrementally cheaper, and better. What is an adequately useful car
> design has been decided long ago; so too for the desktop computer.
>
The automobile is an interesting and instructive example. The question
is whether computers will turn out to be like cars. You seem to think so. I
don't.
I will be bold. I expect that:
1. Computers will become much more flexible, adaptive, and
self-organizing. No new hardware is necessary for this to happen, but
new hardware will come along that is better suited to computers with
such a design methodology.
2. Computers will rely more on probabilistic computation techniques that
will yield the correct answer at least as often as today's deterministic
algorithms and usually much more quickly. I base this prediction on my
observation that most problems become "embarrassingly parallel" when
they are moved into the realm of probability.
3. Computers will become much more redundant and capable of self-healing.
4. A side effect of (1-3) combined is that computer systems will become
much more robust and much less brittle.
5. Computer programming will be completely unrecognizable in two decades
because humans will interact with computers at a *much* more abstract
level than most people currently envision. Of the prediction itself, I
am certain. The time scale could be quite wrong.
6. Computers will become much more adept at acquiring knowledge on their
own.
There you have it. Crackpot fringe science fiction? If it makes you
feel more secure about the future of whatever highly specialized area of
computation you have become an expert in, feel free to think so.
...
> I would also point out that *everybody* is trying to build the fastest
> computer, IBM, Sun, and HP are all in the race.
Not quite that simple, I'm afraid. For example, if HP were truly interested
in speed, it wouldn't have allowed Compaq to scrap Alpha, and if Intel were
truly interested in speed, it would have taken the Alpha IP it purchased
last year and used it to bring out EV8 in 2004 instead of waiting until 2006
or later for the same engineers to wrestle Itanic into something that might
be comparable.
But HP is more interested in platform consolidation and divesting itself of
development responsibility (as indeed Compaq was before it), and Intel is
more interested in a platform that it can keep exclusive (rather than, e.g.,
having to share architecture rights with Samsung). As for Sun, its only
hope of building the fastest machine seems to be to hop onto the Hammer
train (which some think it may do): SPARC is a respectable workhorse, but
doesn't seem likely to make the leap to the lead any time soon.
While POWER4 may not be the fastest core in the race, IBM seems to be
surrounding it with hardware that might make better use (at least in
multi-threaded server environments) of what performance it has to offer than
anything else save EV7 and Hammer: MCMs are expensive, but they do
facilitate efficient communication, and the POWER4 core seems to provide
both more performance per Watt than just about any other upper-end micro
(though IIRC SPARC isn't too bad in this department either) and more
performance per unit chip area than any upper-end micro save perhaps for
IA32 and Hammer.
Of course, all the above applies only to server environments, where the more
throughput a single package can provide, the less expensive the rest of the
server has to be.
- bill
...
> What to do with ever-increasing amounts of computing power is a more or
> less constant topic of discussion in c.a, so I may be repeating myself
here.
>
> 1. Current user interfaces are wholly, completely, and repulsively
> inadequate for the average user. Those of us who like computers and are
> willing to invest some time in them often forget just how true this is.
>
> I spend significant time learning about computers, how they work, and
> how to use them; and most of the people around me know that. The
> questions I wind up answering for them are on topics so basic that if I
> had to spend time learning about them, I'd never get computers to do
> anything. For example, there is a way in MS Word to make a text box
> inserted into an HTML document transparent. Where is it, and how does an
> ordinary mortal discover it without becoming an expert on MS Word?
>
> Examples of the way in which computers *are* evolving to address this
> need are auto-completion of text
"Are" may not be quite the right tense to use here: DEC PDP-10/20 systems
were doing this a quarter-century ago.
> and help agents that occasionally offer
> a suggestion that is actually useful.
Context-sensitive help has also existed for at least a couple of decades
now. And neither this nor auto-completion require any additional computing
power whatsoever, just decent software.
>
> Would a really useful interface involve significantly more computing
> power? In my estimation, it would, because such an interface would
> include considerable intelligence and natural-speech processing
> capabilities.
I'm not sufficiently well-acquainted with natural-speech processing to
evaluate the assertion that it requires 'significantly more computing power'
than is available today on the desktop, but my inclination is to suspect
that it does not (rather, just better software algorithms).
>
> 2. Anyone who thinks computers are already fast enough either doesn't
> have much in the way of stored documents or is willing to be
> extraordinarily patient in finding them. I have my own document
> indexing system that works well enough considering I paid nothing for
> it, but it takes twenty-four hours to index the documents on one of my
> hard disks. When the index *is* up to date, I have to use even more
> tools and computer knowledge the average user doesn't possess to find
> what I need because the indexing system has no real intelligence at all.
>
> *Everyone* could benefit by an indexing system that could really find
> things, that could scan an entire collection to develop user-relevant
> categories and contexts and that could make helpful suggestions when
> the user is groping around without much of a clue.
>
> Crude versions of the kind of document manager I imagine could be built
> on today's platforms, but they would be extremely crude and frustratingly
> slow.
Now, this is an area which I *am* qualified to address. The kind of
document indexing you describe already exists (though not in common desktop
software), and in no way requires any more computing power than a low-end
desktop supplies. The only advances that will materially speed it up are
advances in storage technology (essentially, the replacement of
electromechanical positioning mechanisms used in disks by high-density,
stable solid-state storage with access times in the microseconds rather than
in the milliseconds) - and even after those occur, existing approaches will
allow it to be indexed and retrieved quickly (certainly in single-user
situations such as the one you describe) without increases in processing
power (only brute-force sequential searches would benefit significantly from
such added processing power).
>
> 3. Interfaces between just about everything besides the
> user (peripherals, computers, programs, etc.) could be orders of
> magnitude more robust. An unwanted area of expertise that I have is in
> networking. The number of things that can go wrong is stupefying, and
> because I follow usenet groups that deal with those things, I know that
> the same thing goes wrong over and over and over again and that those
> things go wrong for quite ordinary users, like a home user with a linux
> and a windows box that he needs to use to share files.
Once again, nothing requiring increased processing power.
>
> 4. ...
>
> I could go on and on in this vein for pages and pages.
Given that you're zero for three already, that's probably not a good idea.
You yourself were the one who mentioned earlier that hardware had already
far outdistanced the software running on it. This is in no way a recent
phenomenon, though it might be even worse today than it was decades ago.
While it is possible that some dramatic new application that everyone will
decide they need will demand significant increases in processing power (and
the Forrest Curve does seem to allow for this), on average the implacable
advance of processing power seems to have out-distanced even Microsoft's
ability to soak up cycles with bloatware. As a result, more and more
people's needs are being satisfied by less and less leading-edge hardware,
and even good software developers seem to be running out of ways to use
high-end hardware effectively save in bulk throughput (i.e., server-style)
or HPTC, rather than desktop, environments.
Now, if software engineering morphs into pure drag-and-drop shuffling of
generic, abysmally inefficient modules, then hardware requirements may take
a significant step upward. But even *that* will be a one-time occurrence:
the main driver for a return to leading-edge hardware on the desktop will
come if/when we develop better understanding of how to model the problems we
want to solve, create new algorithms to do so, and find that these
algorithms are far more processing-intensive than existing code (which in
many cases they may not be). It is certainly reasonable to forecast that
such a situation will at some point occur, but it's not clear that you can
forecast *when* it will occur until it has already begun.
In many ways that does reflect the situation in physics at the end of the
19th century. But unlike the grey-beards of that time, Jon hasn't seemed to
declare the end of advances in computing power but merely a model that
recognizes that (at least as long as Moore's Law remains valid) our ability
to make use of it (especially in personal use) fails to match technology's
ability to increase it save at occasional points in time where needs take a
sudden jump (and unless that jump is sufficiently large to take desktop
requirements from the 'trailing edge' to somewhere beyond the leading edge
of technology at the time it occurs, the bump in the curve will not be a
dramatic one and indeed the overall trend of the curve will be downward).
- bill
Post rearranged to suit my convenience, with no intent to mislead...
> You yourself were the one who mentioned earlier that hardware had
> already far outdistanced the software running on it. This is in no
> way a recent phenomenon, though it might be even worse today than it
> was decades ago. While it is possible that some dramatic new
> application that everyone will decide they need will demand
> significant increases in processing power (and
> the Forrest Curve does seem to allow for this), on average the
> implacable advance of processing power seems to have out-distanced
> even Microsoft's ability to soak up cycles with bloatware. As a
> result, more and more people's needs are being satisfied by less and
> less leading-edge hardware, and even good software developers seem to
> be running out of ways to use high-end hardware effectively save in
> bulk throughput (i.e., server-style) or HPTC, rather than desktop,
> environments.
>
This is the crux of the matter. The shortfall, at the moment, is in the
software, not the hardware. The place I disagree with the Forrest curve
(and I tried to make this clear in my direct response to Jon Forrest),
is the expectation that the upward blips in the curve are doomed to be
modest and temporary. On this point, I demur.
Why it is that software development has become so moribund and
inward-looking could form a whole thread in itself. Who or what to
blame? Microsoft, maybe. If that's really the case, then Open Source
and IBM are the only plausible sources of reinvigorating software
development that come to mind.
Maybe the problem is that while machine computation arose out of
mathematics, all of the significant development has taken place in EE
departments, and if you don't know CMOS from GaAs, you just aren't where
the action is. Here, I really don't know.
>
>> 1. Current user interfaces are wholly, completely, and repulsively
>> inadequate for the average user. Those of us who like computers
>> and are willing to invest some time in them often forget just how
>> true this is.
>>
>
> and help agents that occasionally offer a suggestion that is actually
> useful.
>
> Context-sensitive help has also existed for at least a couple of
> decades now. And neither this nor auto-completion require any
> additional computing power whatsoever, just decent software.
>
Only a computer geek would regard the context-sensitive help that is
currently available as much more than a distraction from the already
confusing task of coping with the incredibly busy graphical interfaces
that face a user using a typical word-processor.
As to whether decent context-sensitive help would require more computing
power, we simply disagree. I believe that really helpful help would
require computing capacity on the order of Deep Blue's.
>
>> Would a really useful interface involve significantly more computing
>> power? In my estimation, it would, because such an interface
>> would include considerable intelligence and natural-speech
>> processing capabilities.
>
>
> I'm not sufficiently well-acquainted with natural-speech processing
> to evaluate the assertion that it requires 'significantly more
> computing power' than is available today on the desktop, but my
> inclination is to suspect that it does not (rather, just better
> software algorithms).
>
Erg. Just try using Naturally Speaking with the fastest of processors
available. At that, it's not all that accurate and it makes no claim to
attempt to infer meaning.
Hmmm. What I am using is Namazu, which runs under Perl. I'm sure there
are better solutions available, and I would welcome a suggestion, if
it's open source. I run a WD Special Edition with 8 MB of built-in
cache and have gobs of memory, so my computer can load the whole damn
file in most cases, and more quickly than Namazu can crunch it,
especially when you realize that it is dealing with files in a multitude
of formats, including compressed files, postscript files, and pdf files,
not just plain text.
>
>> 3. Interfaces between just about everything besides the
>> user (peripherals, computers, programs, etc.) could be orders of
>> magnitude more robust. An unwanted area of expertise that I have
>> is in networking. The number of things that can go wrong is
>> stupefying, and because I follow usenet groups that deal with those
>> things, I know that the same thing goes wrong over and over and
>> over again and that those things go wrong for quite ordinary users,
>> like a home user with a linux and a windows box that he needs to
>> use to share files.
>
>
> Once again, nothing requiring increased processing power.
>
Think not? Even 100 Mbit Ethernet puts a significant burden on the CPU,
and that without any kind of checking to see whether what's happening
makes any sense (Hey, Bob, one of the computers on the network is trying
to connect but can't do it because port xxx can't get through the
firewall). Such a system would have to be aware of and monitor all of
the layers of the TCP stack, including the application interface. And
of course, I wouldn't want it to slow me down while I'm answering a post
like yours.
...
> This is the crux of the matter. The shortfall, at the moment, is in the
> software, not the hardware. The place I disagree with the Forrest curve
> (and I tried to make this clear in my direct response to Jon Forrest),
> is the expectation that the upward blips in the curve are doomed to be
> modest and temporary. On this point, I demur.
I suspect that may be because you don't understand what the curve displays.
In particular, the vertical axis does not display dissatisfaction level with
a *particular* performance level but dissatisfaction with the available
performance at a particular point in time.
As long as the performance of the desktop computers people use continues to
increase faster than their needs do, the curve will slope downward - even
if those needs continue to increase as well. The only times its slope turns
positive are when some sudden increase in required processing power exceeds
the continuing increase made available by the hardware, but unless there's
something about that required increase that is *continuing* then after the
instantaneous blip upward the hardware advances will turn the slope downward
once again.
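As long as that faster-than-needs condition holds, the arithmetic is simple. A toy numerical sketch of it (the growth rates, the blip, and the 0.05 floor are all invented for illustration, not measured from anything):

```python
# Toy model of the Forrest Curve: dissatisfaction tracks the gap
# between required performance and available performance.
# All rates here are illustrative assumptions, not measurements.

def forrest_curve(years, hw_growth=1.5, need_growth=1.2, blips=None):
    """Return a list of 'slow factor' values, one per year.

    hw_growth   -- annual multiplier on available performance
    need_growth -- annual multiplier on required performance
    blips       -- dict mapping year -> one-time extra need multiplier
    """
    blips = blips or {}
    available, needed = 1.0, 1.0
    curve = []
    for year in range(years):
        available *= hw_growth
        needed *= need_growth * blips.get(year, 1.0)
        # Dissatisfaction rises with the ratio of need to supply,
        # but never reaches zero (some users can consume anything).
        curve.append(max(needed / available, 0.05))
    return curve

curve = forrest_curve(10, blips={4: 2.0})
```

With hardware growing faster than needs, the curve blips upward in the year the extra requirement lands, then resumes its downward trend - which is exactly the point about non-continuing increases.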
It is certainly valid to observe that the curve applies only to a particular
*class* of computer. For example, it's easy to imagine a lot more
percentage dissatisfaction with, say, a palm-style computer than with a
desktop (if the palm computer is trying to do anything significant,
anyway) - but that just indicates that the palm computer is farther to the
left on its curve than the desktop is on its (different, though similar)
curve.
It is also reasonable to question some of the conclusions Jon draws from the
curve.
>
> Why it is that software development has become so moribund and
> inward-looking could form a whole thread in itself. Who or what to
> blame? Microsoft, maybe. If that's really the case, then Open Source
> and IBM are the only plausible sources of reinvigorating software
> development that come to mind.
I'm not sure why you think that software development is in any worse shape
than it ever was, in terms of its rate of advance. However, it is likely
that the percentage of people actually advancing the state of the art has
decreased as the population of developers has grown, such that there may not
be many more now in absolute numbers than there were 20 or 30 years ago
(which would contribute to a decline in *average* software quality, but not
in the state of the art).
>
> Maybe the problem is that while machine computation arose out of
> mathematics, all of the significant development has taken place in EE
> departments, and if you don't know CMOS from GaAs, you just aren't where
> the action is. Here, I really don't know.
>
> >
> >> 1. Current user interfaces are wholly, completely, and repulsively
> >> inadequate for the average user. Those of us who like computers
> >> and are willing to invest some time in them often forget just how
> >> true this is.
> >>
> >
> > and help agents that occasionally offer a suggestion that is actually
> > useful.
> >
> > Context-sensitive help has also existed for at least a couple of
> > decades now. And neither this nor auto-completion require any
> > additional computing power whatsoever, just decent software.
> >
>
> Only a computer geek would regard the context-sensitive help that is
> currently available as much more than a distraction from the already
> confusing task of coping with the incredibly busy graphical interfaces
> that face a user using a typical word-processor.
>
> As to whether decent context-sensitive help would require more computing
> power, we simply disagree. I believe that really helpful help would
> require computing capacity something on the order of deep blue.
I guess you've never been exposed to a decent help system (no surprise
there: most people haven't). But generalizing so wildly from your limited
experience is hardly persuasive.
>
> >
> >> Would a really useful interface involve significantly more computing
> >> power? In my estimation, it would, because such an interface
> >> would include considerable intelligence and natural-speech
> >> processing capabilities.
> >
> >
> > I'm not sufficiently well-acquainted with natural-speech processing
> > to evaluate the assertion that it requires 'significantly more
> > computing power' than is available today on the desktop, but my
> > inclination is to suspect that it does not (rather, just better
> > software algorithms).
> >
> Erg. Just try using Naturally Speaking with the fastest of processors
> available. At that, it's not all that accurate and it makes no claim to
> attempt to infer meaning.
Unless you're claiming that running Naturally Speaking on a, say, 20 GHz
Pentium would improve that experience (which from your description sounds
unlikely), then once again you're not making any kind of case for faster
processors but rather for better algorithms (which, once defined, would then
make it clear whether faster processing would be beneficial).
Unless your documents are on average fairly large (well over 100KB for a
median size) *and* contiguous on disk, it sounds as if your indexing
software is something of a pig (which would not be all that surprising,
since such software is often rather brute-force in its algorithms). And in
any event, once you've indexed them, if there's *any* perceptible retrieval
latency beyond that associated with disk latency it's outright incompetent.
Efficient document indexing is a significant sub-realm of data management.
Open source software is not usually the best place to find state-of-the-art
implementations (unless it originates in a relevant portion of academia).
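For what it's worth, the core of non-pig document indexing is just an inverted index. A minimal sketch, plain text only, with a deliberately naive tokenizer (nothing like the multi-format handling Namazu attempts):

```python
import re
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of document ids containing it.

    docs -- dict of doc_id -> text
    """
    index = defaultdict(set)
    for doc_id, text in docs.items():
        # Naive tokenizer: lowercase alphanumeric runs only.
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(doc_id)
    return index

def search(index, word):
    """Return the ids of documents containing the word."""
    return index.get(word.lower(), set())

docs = {1: "the Forrest curve", 2: "curve fitting", 3: "plain text"}
idx = build_index(docs)
hits = search(idx, "curve")  # {1, 2}
```

Once the index is built, lookup is a single hash probe - which is why retrieval latency beyond disk latency is inexcusable.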
>
> >
> >> 3. Interfaces between just about everything besides the
> >> user(peripherals, computers, programs, etc.) could be orders of
> >> magnitude more robust. An unwanted area of expertise that I have
> >> is in networking. The number of things that can go wrong is
> >> stupefying, and because I follow usenet groups that deal with those
> >> things, I know that the same thing goes wrong over and over and
> >> over again and that those things go wrong for quite ordinary users,
> >> like a home user with a linux and a windows box that he needs to
> >> use to share files.
> >
> >
> > Once again, nothing requiring increased processing power.
> >
>
> Think not? Even 100 Mbps Ethernet puts a significant burden on the CPU,
> and that without any kind of checking to see whether what's happening
> makes any sense (Hey, Bob, one of the computers on the network is trying
> to connect but can't do it because port xxx can't get through the
> firewall). Such a system would have to be aware of and monitor all of
> the layers of the TCP stack, including the application interface. And
> of course, I wouldn't want it to slow me down while I'm answering a post
> like yours.
I fear you place far too much reliance on your misinterpretations of your
own experiences. Ethernet itself is not particularly CPU-unfriendly, and
while the TCP/IP stack can add a lot of overhead that issue is rapidly being
addressed by NIC enhancements rather than by increased host processing power
(among other things, the former has the advantage of scaling with multiple
connections).
One of your better insights occurred in your initial post: "Computer
science is *not* my area of expertise." Best perhaps to leave it there.
- bill
I half agree. People have been talking about "wasted cycles" since at least
the 286. But, software designers have been quite able to keep up with the pace
of silicon designers, so now we have Doom II instead of Commander Keen. Old
computers have *always* still been interesting and useful. But, fast computers
will *always* be too slow. (This message typed on a P-60.)
------------------
Woooogy
I have to go back in time to pretend to be myself when I tell myself to tell
myself, because I don't remember having been told by myself to tell myself. I
love temporal mechanics.
Norm Rubin
"Will R" <fork...@aol.com> wrote in message
news:20021109190051...@mb-de.aol.com...
>I've been posting this once a year for about
>the last 6 years. Does anybody doubt this now?
Have you ever played 'Baldur's Gate: Dark Alliance'? If so, did you
notice the water effects? The way when you wade into a pool, it
creates ripples that spread out in circles, bounce off the edges of
the pool and go back and forth, interfering with each other, all
perfectly smooth and fluid? My jaw nearly hit the floor the first time
I saw it. That, in a sense, is what 6 gigaflops of computing power
looks like.
One of the levels in 'Red Faction' is supposedly set in an undersea
base, but in fact water never comes into it. I want to be able to blow
a hole in the wall at a selected point and have the water spray in, a
nearly solid jet first, bouncing off things and knocking them over,
pouring down stairways and through corridors (and hopefully drowning
my opponents in the process if I get the timing right). Water in 3D,
not just 2D.
The reason we can't do this now is very simple: today's hardware
provides nowhere near enough computing power.
--
"Mercy to the guilty is treachery to the innocent."
Remove killer rodent from address to reply.
http://www.esatclear.ie/~rwallace
The Forrest curve will not happen, as there will be no end to the demand
for computing power. People will always be willing to pay more for
more computing power.
It is, of course, a completely different story if there is no physical (or
commercially viable) way to improve CPU performance.
The Forrest curve will not come about through any lack of demand for CPU
power. The masses will buy faster computers, because the faster computer
is always both more entertaining and useful.
...
> The masses will buy faster computers, because the faster computer
> is always both more entertaining and useful.
You appear to have completely missed the point: the curve reflects the fact
that the faster computer is *not* measurably more entertaining and useful
for the average user, because there is no significant amount of popular
software that requires its capabilities. Until the *average rate* of
software's ability to benefit noticeably from faster processors becomes
greater than the normal increases in hardware performance from technology
advances motivated by factors *other than* the personal computing needs of
the masses, the curve will continue to trend downward.
- bill
> ....
>
>
>> This is the crux of the matter. The shortfall, at the moment, is in the
>> software, not the hardware. The place I disagree with the Forrest curve
>> (and I tried to make this clear in my direct response to Jon Forrest),
>> is the expectation that the upward blips in the curve are doomed to be
>> modest and temporary. On this point, I demur.
>
>
> I suspect that may be because you don't understand what the curve displays.
> In particular, the vertical axis does not display dissatisfaction level with
> a *particular* performance level but dissatisfaction with the available
> performance at a particular point in time.
>
> As long as the performance of the desktop computers people use continues to
> increase faster than their needs do, the curve will slope downward - even
> if those needs continue to increase as well. The only times its slope turns
> positive are when some sudden increase in required processing power exceeds
> the continuing increase made available by the hardware, but unless there's
> something about that required increase that is *continuing* then after the
> instantaneous blip upward the hardware advances will turn the slope downward
> once again.
>
Here is the crux of *our* disagreement. I think I do understand what
the Forrest curve represents, and I think it analyzes in relatively
microscopic fashion a relatively temporary state of computing. The
construction of the curve assumes that computing is a mature field and
that future changes will be incremental. The caveat that mutual funds
are required to add to advertisements of their own performance: "Past
performance is no guide to future growth," comes to mind.
I don't think computing is a mature field, I don't think today's
computers or common applications of them indicate much about what
computers are capable of doing or how they should work, and I expect
changes that will make today's computers look about as well suited to
their intended task as Roman numerals were to arithmetic.
<snip>
>
> I guess you've never been exposed to a decent help system (no surprise
> there: most people haven't). But generalizing so wildly from your limited
> experience is hardly persuasive.
>
I'll put it to you this way: if somebody is capable of implementing a
really decent help system on today's desktop hardware and hasn't done
it, I'd like to invest in the company that can.
<snip>
>> Erg. Just try using Naturally Speaking with the fastest of processors
>> available. At that, it's not all that accurate and it makes no claim to
>> attempt to infer meaning.
>
>
> Unless you're claiming that running Naturally Speaking on a, say, 20 GHz
> Pentium would improve that experience (which from your description sounds
> unlikely), then once again you're not making any kind of case for faster
> processors but rather for better algorithms (which, once defined, would then
> make it clear whether faster processing would be beneficial).
>
A faster processor would improve the experience. As it is, it's pretty
clear that the software spends about as much time as it thinks it can
get away with before committing to a particular interpretation of spoken
text. Analyzing larger and larger chunks of text would allow the
software to get more clues from context as to what a particular word or
phrase is intended to be.
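To make the context argument concrete, here is a toy disambiguator. The homophone table and clue words are invented, and real recognizers use statistical language models over far larger windows, but the principle - more context, better guesses - is the same:

```python
# Toy homophone disambiguation: pick the spelling whose known
# context words appear most often in the surrounding window.
# The clue table below is an invented example, not real training data.

CONTEXT_CLUES = {
    "to":  {"go", "want", "try"},
    "two": {"one", "three", "number"},
    "too": {"also", "much", "slow"},
}

def disambiguate(candidates, window):
    """Choose the candidate sharing the most clue words with the window."""
    def score(word):
        return len(CONTEXT_CLUES.get(word, set()) & set(window))
    return max(candidates, key=score)

window = "my computer is much slow".split()
best = disambiguate(["to", "two", "too"], window)  # "too"
```

The larger the window, the more clue words can vote - and the more cycles the scoring burns, which is where the appetite for processing power comes from.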
That is not to say that much greater improvements would not come from
better algorithms. About that, I suspect you are right, but I expect
the algorithms to be computationally expensive. Huge parts of the brain
are committed to language processing.
I had to look at the rc file for the indexing software: it's
currently set to allow the loading of documents as large as 20 MB into
memory and the indexing of text files as large as 10 MB. The filesystem
is about 5% non-contiguous, and multimegabyte compressed and pdf files
are not at all uncommon. That the software is a pig there is no doubt,
but I am certainly a customer for a faster processor for that
application.
>
> And in
> any event, once you've indexed them, if there's *any* perceptible retrieval
> latency beyond that associated with disk latency it's out-right incompetent.
>
Doing a straight-up search on a keyword yields 20 results in the blink
of an eye. If I ask it to dump to a file all the results for a keyword
search that returns thousands of files, that can take a few seconds.
> Efficient document indexing is a significant sub-realm of data management.
> Open source software is not usually the best place to find state-of-the-art
> implementations (unless it originates in a relevant portion of academia).
>
It's pretty clear from the speed with which google returns results and
the size of the database they are maintaining that what can be known and
done is quite a bit beyond what's on my desktop, but that's exactly
the point. That's a capability I want, and I'm not willing to pay
several times the price of a typical desktop to get it (by buying one
of google's intranet servers).
You have much more confidence in the outcome of the TCP-offload
enterprise than I do. Whether that particular approach works or not
(and I'm betting that it won't, at least if cost is taken into
account), I expect more and more silicon and more and more cycles to go
into coping with I/O. Whether it's integral with the CPU or not, it's
processing power, and it's not cheap.
Even a fully realized offload engine as currently envisioned wouldn't
address the task I set forth. What I am imagining is a communications
supervisor that keeps track of communications via hooks, the way that
IPTABLES does, only one with enough intelligence to sort out and decode
protocols and to recognize and diagnose things going wrong, whether it
is malicious intrusion or a benign networking disconnect. If a packet
is refused from my internal network, I'd like to know what it was and
why. Just as with document management, software exists to do most of
these things in a separate box at significant cost. I want the same
capability in my PC and I'm not willing to pay several times the cost
of my PC to get it. My perceived level of dissatisfaction is high, and
I share it with hundreds of thousands of others who'd like to hook
their PCs together without becoming networking wizards.
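To make the idea concrete, a toy sketch of such a supervisor's reporting layer. The event fields and the rules are invented for illustration; a real system would hook the firewall the way iptables does and decode the protocols itself:

```python
# Toy 'communications supervisor' reporting layer: turn low-level
# packet-refusal events into human-readable diagnoses.
# The event schema and rules here are invented for illustration.

def diagnose(event):
    """Return a plain-English explanation of a packet event."""
    if event.get("action") != "refused":
        return "packet passed: nothing to report"
    src = event.get("src", "unknown host")
    port = event.get("port")
    reason = event.get("reason")
    if reason == "firewall":
        return (f"{src} tried to connect, but port {port} "
                f"can't get through the firewall")
    if reason == "no_listener":
        return f"{src} reached port {port}, but nothing is listening there"
    return f"{src} was refused on port {port} for an unrecognized reason"

msg = diagnose({"action": "refused", "src": "192.168.1.5",
                "port": 445, "reason": "firewall"})
```

The hard (and cycle-hungry) part, of course, is not this reporting step but producing accurate events from live traffic across every layer of the stack.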
>
> One of your better insights occurred in your initial post: "Computer
> science is *not* my area of expertise." Best perhaps to leave it there.
>
The people who are driving the Forrest curve aren't experts in
computer science either. Maybe we should leave it *there*?
>I still do some computational science as a hobby, so there are some
>applications that I run (or want to run) that have semi-infinite
>computational requirements. This is not likely to be very common
>-- how many folks out there are really, really interested in
>running computational experiments in geophysical fluid turbulence
>for fun?
Actually, I don't have the expertise, but I have always had the desire to
know what would happen with continental drift 100K, 500K, 1M, 5M, and 10M
years in the future. Billiards at a continental scale always struck me as
interesting for some reason. That is KIND of what you are talking about,
right?
Unfortunately, I don't have the expertise in almost any of the areas
necessary to generate meaningful results, even with so many hundreds of
millions of years of data to start with :-P.
--
Skipper Smith Helpful Knowledge Consulting
Worldwide Microprocessor Architecture Training
PowerPC, ColdFire, 68K, CPU32 Hardware and Software
/* Remove no-spam. from the reply address to send mail directly */
[ ... ]
>> Computer science is engineering, pursues innovation/invention instead
>> of discovery, and is driven by utility.
nmm1> Really? Large areas of it most definitely aren't.
Uhmm, it all depends on how you define "computer science", innit? :-)
For example: if "computer science" is what gets done at "computer
science" departments at Universities, in a lot of them a lot of it is
applied maths, and even a fair bit of pure maths.
This all depends on academic politics: a number of the more alert
maths academics sniffed that "computer science" was a way to get more
lectureships and better funding. More power to them. :-)
But then my opinion is that usually the nicer CS departments are those
that were spawned by less stuffy bits of engineering departments
thinking CS was the next hot wave of engineering rather than by math
departments thinking it was a way to find some lectureships for the less
pure of their members.
But then in a chat with a really pure mathematician of some
considerable skills, he summed up "computer science" as something like
``yeah, heard of it, sounds like kind of some application of bits of
nonclassical logic".
So to me "computer science" really is (or should be :->) the engineering
subset; however then let's narrow down what I was writing above to
"Computer architecture is engineering".
But then as to the more general notions of "computer science", I would
still maintain that even the non engineering bits (those like the
mathematical ones where discovery is pursued) are driven by utility; a
lot of the current developments of some bits of maths, even pretty pure
maths, seem to be driven by the utility of applications to computer
based applications.
The small example I use is the Risch-Norman symbolic integration
technique, which is interesting in itself but AFAIK is quite impractical
unless done by computer, and I doubt would have been worked upon unless
of the utility side.
Compile performance seems to be more a software issue than a hardware issue.
The way C++ works apparently defeats most approaches to make compilation go
fast. I program in Forth (not only for that reason), and some weeks ago
claimed (in comp.lang.forth) that my GUI library (certainly programmed in
an object oriented extension to Forth) compiles in "less than a second" on
a current PC. Asked about more precise data, I timed the compile run, and
was surprised that the actual compile time is just around 0.1s. The
perceived time included typing "make<ret>", which eats up the rest of the
second. The last time I wasn't satisfied with compile speed was on my Atari
ST, where I did invest quite some energy to reduce compile time. This work
still pays off.
--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
And I thought Intel makes their living by teaching customers that the new
NetBurst(tm) technology in the Pentium 4 makes the Internet go much faster
than before.
[ ... on discovery/invention ... ]
Well, I agree with your points too, which seem to be, if I understood
them correctly, of skepticism of there always being a sharp distinction
between the two; but then it's not incompatible with there being a
difference.
The two categories are in the eye of the beholder, and the distinction
between them seems useful to me even if the two categories overlap.
To me they are just measurement operators, I am not quite an essentialist.
pg> Car design is sort of a mature engineering field too, and the basics
pg> of the design of a car have changed very slowly. It's just a game of
pg> incrementally cheaper, and better. What is an adequately useful car
pg> design has been decided long ago; so for a desktop computer.
rmyers1400> The automobile is an interesting and instructive example.
rmyers1400> The question is, will computers turn out to be like cars.
rmyers1400> You seem to think so. I don't. I will be bold. I expect
rmyers1400> that: 1. Computers will become much more flexible, adaptive,
rmyers1400> and self-organizing. [ ... ]
Ahhhhh, I agree with you in this, if you define "computer" appropriately,
which in your case it seems to me to mean more "embedded computer cluster".
But the Forrest Curve and my comments to it are about _desktop_
computers having reached a performance plateau of some sort.
I think that the potential for functional evolution for these is IMNHO
limited, just like for cars, because they are driven by specific
applications, and the applications do not change a lot. That's why I
think that they are like cars.
A Wintel machine is more or less (less :->) adequate architecturally
and/or functionally to be a word processor, spreadsheet system and
server access terminal. A car is more or less adequate for its (more
narrowly defined) function. A bit like a book, or a bicycle: form
(mostly) follows function, and certain functions have spawned pretty
adequate forms, where the goal is shifting tradeoffs or incremental
efficiency improvements.
It's a bit like what also happened to agriculture in developed
countries: once production became high enough to feed the people, fine,
just improve efficiency incrementally. Or to nuclear weapons:
smaller/cheaper/more durable is better, once they become that big.
Put another way: the computer industry in general translates
technological advancement into two product price/performance curves:
constant cost with increasing performance, and constant performance with
decreasing cost.
What Forrest says is that desktop system product development seems to be
now mostly on the second curve. In other words, it is a mature industry
like cars or TVs or bicycles.
Scientific workstations (whatever are left) instead are in many cases
still driven by constant cost (because it is budgeted as essentially a
small and fixed overhead on the cost of employing a scientist) and ever
increasing performance, where there is almost insatiable demand at least
in some sectors (e.g. medical, media).
I hesitate to disagree with a post that is so nicely put, but I do disagree.
The history of the PC is that people have constantly underestimated the
uses that people would turn them to. The spreadsheet didn't invent the
PC, the PC invented the spreadsheet. The PC didn't quite invent the
word processor, but I'll wager we'd have nothing like anything with the
capabilities of MS-Word or StarOffice, without the PC having created
such a large market for high-end WYSIWYG word processors. The PC
clearly invented the browser as we know it.
The limiting factor in demand for desktop capability is the limited
ability of the average user to cope with complexity, not the limited
needs of the average user. A word processor, spreadsheet, and browser
are all the average user can cope with before going into overload, not
everything he or she needs or wants from a desktop computer.
What I have spent so much of this thread arguing about is whether the
pent-up demand is all about software or all about hardware. It's not
all about either. A surfeit of processing power invites lazy software
development. That's bad because it means that Microsoft can just wallow
in its own mudhole forever. It's good, though, because someone who
wants to do something *really* different doesn't have to be really smart
about software design right from the get-go.
Once somebody has developed something interesting, no matter how clumsy,
it can always be cleaned up later to put lesser demands on the hardware.
A surfeit of computer power on the desktop therefore makes it easier
to satisfy the pent-up demand for software.
One particular recent example I know about is the Linux file browser
Nautilus, which in its original incarnation brought all but the fastest
of processors to their knees. Those who loaded RedHat 7.2 onto a
sufficiently powerful machine had no problems. Those who took the hype
of the Linux community at its word and tried to load it onto an old
machine entered a world of misery. With time, as users have accepted
Nautilus (right along with Mozilla), it has become less power-hungry.
My response feels clumsy compared to your elegant post, but I hope I
have made sense.
Some of the principles of spreadsheets have been around for 40 years, to
my personal knowledge. In 1962, I was using Tabular Interpretive
Programming on the English Electric DEUCE. This gave you a conceptual
2-D array, and column operators such as C3 = C1 * C2.
It also had the useful concept (do any modern spreadsheets have it?)
that an illegal operation, such as division by zero, on one cell did not
stop the computation, but resulted in the special value INVALID being
placed in the destination cell. Any other cell using that cell as input
got the same value.
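That propagation rule is easy to sketch (a toy evaluator, not TIP or any modern spreadsheet; the cell formulas below are invented):

```python
# Toy sketch of error-value propagation, as in Tabular Interpretive
# Programming on the DEUCE: an illegal operation yields INVALID
# rather than stopping the computation, and INVALID flows into
# every dependent cell.

INVALID = "INVALID"  # sentinel error value

def divide(a, b):
    """Division that yields INVALID instead of halting on error."""
    if INVALID in (a, b) or b == 0:
        return INVALID
    return a / b

def add(a, b):
    """Addition that propagates INVALID from either input."""
    if INVALID in (a, b):
        return INVALID
    return a + b

# C3 = C1 / C2, C4 = C3 + 1: the error flows to the dependent cell.
c3 = divide(10, 0)  # INVALID
c4 = add(c3, 1)     # INVALID
```

Modern spreadsheets do the same thing with error values like #DIV/0!, which spread through any formula that references the failed cell.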
--
Ken Moore
k...@mooremusic.org.uk
Web site: http://www.mooremusic.org.uk/
I reject emails > 300k automatically: warn me beforehand if you want to send one
> It also had the useful concept (do any modern spreadsheets have it?)
> that an illegal operation, such as division by zero, on one cell did not
> stop the computation, but resulted in the special value INVALID being
> placed in the destination cell. Any other cell using that cell as input
> got the same value.
Uhh ... *all* of them that I've ever seen.
-- Bruce
--
- Factory (there is no X in my email)
As one of the founders of the subject said, it has very little to do
with computing and isn't science :-)
>For example: if "computer science" is what gets done at "computer
>science" departments at Universities, in a lot of them a lot of it is
>applied maths, and even a fair bit of pure maths.
>
> This all depends on academic politics: a number of the more alert
> maths academics sniffed that "computer science" was a way to get more
> lectureships and better funding. More power to them. :-)
Well, yes and no. In the cases where it is honest mathematics, yes.
In the cases where it uses its 'engineering' aspect to break all the
rules (e.g. the requirement for consistency), it is less justifiable
than the worst abuses in the social sciences. See the flame wars on
"relativization" that I trigger fairly regularly :-)
>But then my opinion is that usually the nicer CS departments are those
>that were spawned by less stuffy bits of engineering departments
>thinking CS was the next hot wave of engineering rather than by math
>departments thinking it was a way to find some lectureships for the less
>pure of their members.
>
> But then in a chat with a really pure mathematician of some
> considerable skills, he summed up "computer science" as something like
> ``yeah, heard of it, sounds like kind of some application of bits of
> nonclassical logic".
See above. And, putting my engineering hat on, it is horrifying how
much of the subject has been distorted in concept and terminology to
be a fruitful source of dubious papers rather than anything practical.
Both complexity theory and program proving are particularly bad here,
especially as the subjects predate "computer science" in applied
contexts, and have been hijacked AND EMASCULATED to fit the needs of
the "paper factories".
>So to me "computer science" really is (or should be :->) the engineering
>subset; however then let's narrow down what I was writing above to
>"Computer architecture is engineering".
>
>But then as to the more general notions of "computer science", I would
>still maintain that even the non engineering bits (those like the
>mathematical ones where discovery is pursued) are driven by utility; a
>lot of the current developments of some bits of maths, even pretty pure
>maths, seem to be driven by the utility of applications to computer
>based applications.
See the flames arising from my remarks about REAL non-deterministic
automata and their theory. There are several models of these that
are critical to computer engineering, were worked on fairly extensively
before computer science was invented, and have been exterminated by
the computer science Thought Police. Well, almost - and not while I
live :-)
The tragedy here is that they are still being used, but the theoretical
development has effectively been killed. It is virtually impossible
to get funding if you have to START by persuading the funding bodies
that existing dogma is wrong. So only the most eminent people or
departments could rock the boat, even if they want to.
And there are similar reasons why practical program proving is
alive and well in hardware, but less and less used in
software. Yes, seriously.
> I think the easy way to see the truth behind the Forrest curve is to
> consider what kind of machine would be on your desktop today if the
> compute market had the same number of customers it does today, but the
> average customer was still willing to spend $5K/machine as they were
> in the early 90's.
That is, if total spending had risen with the number of customers. I
think an important part of the reason there are so many customers, is
that the prices have become so low. So I think it's unrealistic to
extrapolate in this manner.
Curious: have total sales of $5K machines actually declined?
> Companies are competing for the fastest machine at the popular price
> points, and that popular price point keeps descending. Even the
> super-high end seems to be playing this game.
I think the 'popular price point' and the 'super-high end' markets are
the ones where performance is the primary criterion. Above the PPP (e.g.
the $5K machine you mention), you mostly get the same class of
computing power, but better reliability and I/O features. Below the
super-high end, you have high-end servers where stuff like service and
support and software availability is more important than a few
percentage points of performance.
-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
There is a collection of histories of the spreadsheet at
http://www.j-walk.com/ss/history/
One version of the history that makes my statement make sense can be
found at
http://www.dssresources.com/history/sshistory.html
While it isn't *exactly* accurate to say that the PC invented the
spreadsheet, the ubiquity of the spreadsheet as we know it certainly
would not have happened without widely available and inexpensive
hardware for it to run on.
I think this is where we are today:
http://www.gamasutra.com/gdce/2001/jensen/jensen_01.htm
You'll need to register, which is free.
Uses a 2D depth field to render waves, with loads of approximations to
make it look "real" (particle systems, foam detail textures, etc.).
Cheers
Martin
Stop by the SGI booth at SC02 in Baltimore next week - we will be demonstrating systems
that cover both spaces, high density/low power (http://biz.yahoo.com/prnews/021111/sfm061a_2.html)
and systems using more commodity parts based around IPF.
Cheers,
Mike
--
Michael S. Woodacre
wood...@sgi.com
Phone: +44 118 925 7846
Well of course it does.
-Jack
This is hypothetical right? I mean, you can't really find slaves (or
employees for that matter) with IQ's of 250 for $1000 can you? If so, can
they do yardwork?
-Jack
I believe they exist, under the name "grad students". Admittedly, they don't
all come with an IQ of 250, they cost $1000/month rather than a one-time fee,
and you have to be a professor to acquire them. Not sure about the yardwork
bit.
--
David Gay
dg...@acm.org
The stuff I am interested in is far more theoretical!
I have a number of half-finished projects in the dynamics of
(heavily idealized) models of the ocean circulation, and in the
asymptotic properties of vortex-jet interactions in numerical
models of rotating, stratified flow.
One interesting topic is:
. There is evidence that the wind-driven double-gyre model
can exhibit behaviours analogous to strange attractors
of chaos theory. One can derive classification schemes
to assign observed dynamical regimes to particular
attractors, and then derive probability distribution
functions for the occupancy of the solution in each
attractor's domain. Is there numerical evidence that
such probability distribution functions are continuously
dependent on the parameters of the model?
I published two papers that are related to this topic in 1995 and
1995 in the Journal of Physical Oceanography. Many papers have
been published in response to this work, but I have not seen other
folks try the sort of "brute force" statistical approach that I
used, and I sometimes lie awake at night (*) wondering how much more
can be learned by staring at results from ensembles of 1000-year
simulations of this class of models. (Actually, this is a good
example of a "supercomputer" being a system that turns a CPU-bound
problem into an IO-bound problem.)
(*) In these cases, I am not usually awake for very long.
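That brute-force statistical approach can be sketched in a few lines. This is only a toy stand-in, not the double-gyre model: the Lorenz system (a far simpler chaotic system) substitutes for the ocean model, and its two lobes substitute for the attractor domains whose occupancy you would tally.

```python
# Toy sketch of the "brute force" statistical approach: integrate a
# chaotic system for a long time and estimate how often the solution
# occupies each regime.  The Lorenz system is a stand-in here -- NOT
# the double-gyre ocean model -- and its two lobes (x < 0 vs x > 0)
# stand in for the attractor domains.
def lobe_occupancy(steps=300_000, dt=0.002):
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    x, y, z = 1.0, 1.0, 1.0
    counts = {"left": 0, "right": 0}
    for _ in range(steps):
        # Forward-Euler step of the Lorenz equations (crude but enough
        # for a sketch; a real study would use a proper integrator).
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        counts["left" if x < 0.0 else "right"] += 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

occ = lobe_occupancy()
```

The question in the post then becomes: do these occupancy fractions vary continuously as you vary the model parameters?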
--
John D. McCalpin, Ph.D. mcca...@austin.ibm.com
Senior Technical Staff Member IBM POWER Microprocessor Development
"I am willing to make mistakes as long as
someone else is willing to learn from them."
> I sure hope you are not referring to technique similar to
> http://www.gamedev.net/reference/programming/features/water/
> Cause that's something that is really low power, and was big around the days of the 486.
I actually don't know to what extent the same algorithms are involved.
But there's no comparison between the quality of the effect you could
get back then versus now.
(Well, there is a comparison - people were doing stuff in the 1990s
overnight on render farms that was about as good as what one machine
can do in real time now :))
But what I want next is full 3D water. That's going to take another
few orders of magnitude in computing power, I think.
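For what it's worth, the 486-era effect being discussed is usually a height-field update rather than real fluid dynamics. A minimal sketch of that classic trick (the grid size and names here are mine):

```python
# Sketch of the classic 2D height-field water effect: each cell's new
# height is driven by the average of its four neighbours minus the
# previous height, with damping so ripples die out.
def step_water(curr, prev, damping=0.99):
    """Advance the height field one frame; returns the new field."""
    n, m = len(curr), len(curr[0])
    new = [[0.0] * m for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            neighbours = (curr[i-1][j] + curr[i+1][j] +
                          curr[i][j-1] + curr[i][j+1]) / 2.0
            new[i][j] = (neighbours - prev[i][j]) * damping
    return new

# Drop a "stone" in the middle of a 9x9 pond and step once: the
# disturbance spreads outward to the neighbouring cells.
curr = [[0.0] * 9 for _ in range(9)]
prev = [[0.0] * 9 for _ in range(9)]
curr[4][4] = 1.0
nxt = step_water(curr, prev)
```

It really is cheap: a few adds and a multiply per cell per frame, which is why it ran on a 486 and why it looks nothing like full 3D water.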
Bitkeeper likes memory. Lots of it. Presumably it's using hash tables
internally, so performance sucks when you start to hit swap. CPU speed
isn't as much of an issue supposedly.
Phil
--
nosig
Oh damn! Somebody's already done it. Another perfectly good business
idea right down the tubes:
"For desktop support pros, each day brings a new onslaught of questions
and demands from frustrated, frantic end-users. Though you may feel the
need for armor before facing this mob, you can arm yourself with
something much more effective and powerful: TechRepublic's Desktop
Support Resource Guide book and CD-ROM.
"To learn more about the Desktop Support Resource Guide, click here!
http://members.techrepublic.com/cgi-bin9/flo?y=hKbm0FIh4C0F1T0BMXH0Af
What I *really* want to know is: what did guys like you do to slip into
the parallel universe you live in? Was a Ph.D. in CS/EE all it took?
Maybe each university could post a couple of physics grad students in
each CS/EE department to watch the process at work. Maybe *then* we
could start to learn something *really* fundamental. Come to think of
it, maybe we could actually solve whatever is *really* behind the
Forrest curve in the process.
Only to benighted souls who haven't been able to grasp the concepts the
first time.
- bill
Just sneak it on the load leveler queue. :-)
del cecchi
> Compile performance seems to be more a software issue than a hardware issue.
> The way C++ works apparently defeats most approaches to make compilation go
> fast. I program in Forth (not only for that reason),
I'm glad your compiles are fast, but the choice of C++ is not
something I can alter. The ones that irk me are medium sized jobs
(like our client software package compiled/optimised and then cleanly
instrumented for Purify - that all takes a while). They're just about
small enough that I watch the progress instead of just coming back
later.
I don't know why C++ is so slow to compile, but I got a 2.5X speedup
going from a 333MHz US-IIi/2MB cache cpu on my old workstation to a
750MHz/8MB cache US-III. I make that slightly better than the
clock-speed ratio, and hence the increased cache size must be
relevant, but I wondered whether the RAM latency was also of relevance
(something like PC-133 SDRAM vs. EDO DRAM). The majority of files are
big enough and gnarly enough that filesystem I/O is not limiting
performance (many seconds of cpu time after all disk I/O stops per
file).
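The arithmetic above can be checked quickly (figures taken from the paragraph, conclusions hedged accordingly):

```python
# Sanity check of the speedup vs. clock-ratio claim above.
old_mhz, new_mhz = 333, 750          # US-IIi vs. US-III clocks
observed_speedup = 2.5               # measured compile-time speedup
clock_ratio = new_mhz / old_mhz      # ~2.25
# How much faster than pure clock scaling would predict: ~11%,
# plausibly attributable to the larger cache (2MB -> 8MB) and/or
# lower memory latency, as suggested in the post.
beyond_clock = observed_speedup / clock_ratio
```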
Chris
--
Chris Morgan <cm at mihalis.net> http://www.mihalis.net
Temp sig. - Enquire within
How very kind of you. So many fail to give me credit even for *having*
a soul, benighted or otherwise.
There are two real differences here.
First off, we may not be that close to the technological limit of what
we can do with silicon, but we are certainly close to the economic
limit. We began rounding that curve years back with the sub-$1000 PC,
and the sub-$200 PC at WalMart is the latest reminder. It's a matter
of what I'd like to have vs. what I want badly enough to give up
something else for.
Second, this presumes that some advance doesn't tilt the whole industry
on its ear. A disruptive advance would restart the curve.
Dale Pontius
Thank you for putting it so compactly. My rambling posts all boil down
to one thought:
Based on what I have seen and understood about computers and about the
progress of science and technology, I expect a disruptive advance that
will restart the curve.
> Simply and briefly stated, my hypothesis is that fewer and fewer
> computer users think their computer is too slow. I've invented what I
> call the Forrest Curve to illustrate this.
Thinking about when I think my computer is slow - when I was
developing on NT, I thought the computer was too slow. Why? Because
when I right-clicked, it could take tens of seconds to bring up the
menu.
Of course, this has nothing to do with CPU speed; it was, at the time,
the fastest PC I had used, while slower PCs with different software
performed more than adequately. And upgrading to a CPU at twice the
speed cut the waiting from, say, 20 seconds to 18 seconds. Big deal.
(This is fairly analogous to people thinking their "computer is slow"
when it's really their internet hookup that's too narrow, and
switching off Flash and images in their browser makes it "fast"
enough.)
Also, compiling was -- and with a faster computer and different
language and tools, still is -- too slow. Here you can argue that the
CPU really matters, but I think a well written system (cf. Bernd
Paysan's forth) should feel fast enough even on a slow CPU.
So I think the F curve may have more to do with response times than
with throughput, and, consequently, more to do with software than with
hardware.
And if so, I'm not sure the curve actually is sloping downwards;
current mass-market systems seem to be driven by features, not
response times. I haven't used XP extensively, but it still seems to
lag in places, which means my computer still "is too slow". In spite
of being an order of magnitude faster than the previously mentioned
box.
(Some things are inherently computationally intensive, like weather
forecasting or regression testing. On the other hand, you can live
with these things taking time -- you don't "feel" that your computer
is "too slow" (the Forrest criterion) if it takes one hour vs half an
hour.
Some things are inherently computationally intensive *and* require
real time response, like 3d video-quality games. Are these the only
applications that have to drive the Forrest curve?)
> Let's assume you accept the Forrest Curve. What does it mean?
> o It means that computer vendors are going to have a tougher
> and tougher time selling computers. This is because people above
> the curve only need a new computer when something breaks. This
> happens less and less often.
Well, *if* software is fixed with respect to response times, people,
except game players, may no longer upgrade to get better performance.
Which, I think, is more or less the case in the PC market now: people
buy a PC with an OS on it, and they tend to use it until it breaks.
> o It means that computer purchasing decisions are no longer made based
> on price/performance or just performance, like in the dark ages.
I'm not sure that follows. If the price difference isn't too great,
why not get the latest and greatest? If only for the bragging rights?
For the unenlightened, surely 2n GHz at $1000 is better than n GHz at
$900?
How many n x 100K lines does it have? It is probably true that code
and compilation complexity is increasing more slowly than hardware speed,
but I suspect for the foreseeable future the majority of compilation
will be ... slow.
>> o It means that computer purchasing decisions are no longer made based
>> on price/performance or just performance, like in the dark ages.
>
> I'm not sure that follows. If the price difference isn't too great,
> why not get the latest and greatest? If only for the bragging rights?
> For the unenlightened, surely 2n GHz at $1000 is better than n GHz at
> $900?
Yes, and there are other ways in which being 'cool' or 'hot' affects
decisions than just the GHz figure.
>
> -kzm
--
Sander
+++ Out of cheese error +++
Actually, in many cases it is increasing faster!
The time taken is non-linear in the size of the code, and it is
becoming increasingly important to use more aggressive and more
global optimisations to get the current hardware up to speed.
By becoming a Great Pundit. This will let me go around
spewing Knowledge while charging fantastic amount to do
so. It will be kind of like George Gilder, except I'll be right.
Jon
So this was probably due to not having enough memory. You
were probably spending most of your time paging.
> (This is fairly analogous to people thinking their "computer is slow"
> when it's really their internet hookup that's too narrow, and
> switching off Flash and images in their browser makes it "fast"
> enough.)
True, although I never said that people think their network
connection is fast enough.
> Also, compiling was -- and with a faster computer and different
> language and tools, still is -- too slow. Here you can argue that the
> CPU really matters, but I think a well written system (cf. Bernd
> Paysan's forth) should feel fast enough even on a slow CPU.
I agree. Another example of this is the speed of the
Turbo C and Turbo Pascal compilers, especially
compared to the Microsoft products they replaced.
The art of writing fast compilers (with little or no optimization)
seems to have been forgotten.
> So I think the F curve may have more to do with response times than
> with throughput, and, consequently, more to do with software than with
> hardware.
No doubt. Slow badly written software running on fast processors
is still slow software.
> And if so, I'm not sure the curve actually is sloping downwards,
> current mass market systems seem to be driven by features, not
> response times. I haven't used XP extensively, but it still seems to
> lag in places, which means my computer still "is too slow". In spite
> of being an order of magnitude faster than the previously mentioned
> box.
Funny you should mention XP. I was going to install it on my
700MHz Athlon last weekend but found that there was some kind
of weird delay when bringing up the program list in the Start
menu. It wasn't because of memory, since I have 1GB, but
I don't know what it was. It doesn't happen on my wife's
400MHz Celeron with 192MB so maybe it's graphics card
related.
The main reason why I think the Curve slopes down is because
writing software is hard. It takes a significant amount of time
to write new software that would tax a new processor. During
this time even newer faster processors come out, causing the
downward slope. I don't think there's much hope of this changing
unless either something new happens in automatic software generation
or the processor people hit some kind of manufacturing wall.
> (Some things are inherently computationally intensive, like weather
> forecasting or regression testing. On the other hand, you can live
> with these things taking time -- you don't "feel" that your computer
> is "too slow" (the Forrest criterion) if it takes one hour vs half an
> hour.
One thing that some non-believers have ignored is the part in
the Forrest Curve paper that says: "I also recognize that there
is a class of users that can and will always be able to consume
any amount of computer resources. These guys are why the
Forrest Curve never goes to zero. In spite of their needs they
can't reshape the Forrest Curve because they don't
have enough money to spend anymore." I should have been
more specific about who these guys are. They're the people
who model, analyze, and synthesize physical reality.
> Some things are inherently computationally intensive *and* require
> real time response, like 3d video-quality games. Are these the only
> applications that have to drive the Forrest curve?)
See the above paragraph. Also, those new fancy graphics processors
are helping the 3d video-quality game people a lot.
> > Let's assume you accept the Forrest Curve. What does it mean?
>
> > o It means that computer purchasing decisions are no longer made based
> > on price/performance or just performance, like in the dark ages.
>
> I'm not sure that follows. If the price difference isn't too great,
> why not get the latest and greatest? If only for the bragging rights?
> For the unenlightened, surely 2n GHz at $1000 is better than n GHz at
> $900?
Maybe you or I would get the latest and greatest. But, I doubt
an insurance company or bank would spend the extra money,
especially if large numbers of computers are being purchased.
Jon
...
> > Fame is certainly feasible, at least in this limited venue. But how
> > do you expect to achieve fortune, other than perhaps shorting a few
> > select stocks?
>
> By becoming a Great Pundit.
Is that who Charlie Brown spends so much time looking for at about this time
each year? Famous, yes, but don't you have to be more visible to achieve
fortune?
- bill
It is graphics-card related.
I have the same problem. Speed is fine until the nvidia drivers are
installed.
-Jack
> "Ketil Malde" <ket...@ii.uib.no> wrote in message
> news:egbs4uq...@sefirot.ii.uib.no...
> > "Jon Forrest" <for...@ce.berkeley.edu> writes:
>> Of course, this has nothing to do with CPU speed; it was, at the time,
>> the fastest PC I had used, while slower PCs with different software
>> performed more than adequately. And upgrading to a CPU at twice the
>> speed cut the waiting from, say, 20 seconds to 18 seconds. Big deal.
> So this was probably due to not having enough memory. You
> were probably spending most of your time paging.
No. Or, well, I had IIRC half a gigabyte. I strongly suspect the
menu is built dynamically, and that there's a somewhat complex scan of
installed software or something each time it's being accessed.
> The art of writing fast compilers (with little or no optimization)
> seems to have been forgotten.
Note that there are (and have been for decades) incremental systems
where compilation is unnoticeable. Lisp users tend to laugh at C++
programmers complaining about compilation times, for instance. This
has nothing (or very little) to do with CPU speed, but a lot to do
with software (including language) design.
> writing software is hard. It takes a significant amount of time
> to write new software that would tax a new processor. During
> this time even newer faster processors come out, causing the
> downward slope.
But, to put it bluntly, sleep(10) takes as long on a fast CPU as on a
slow one. The continued use of poor algorithms will lead to a flat
curve, while if all software was well designed and implemented, the
curve would hit bottom.
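The sleep(10) point can be made concrete: when wall-clock time is dominated by a fixed delay, a faster CPU shaves off almost nothing. A small illustration (the delay and iteration counts are arbitrary):

```python
import time

def task(delay_s, cpu_iters):
    """A task with a fixed latency component plus some CPU-bound work.
    A CPU upgrade only speeds up the loop; the delay is untouchable."""
    start = time.perf_counter()
    time.sleep(delay_s)                       # stands in for disk, network, timers
    checksum = sum(i * i for i in range(cpu_iters))  # the CPU-bound part
    return time.perf_counter() - start, checksum

# However fast the CPU, elapsed time can never drop below the 0.2 s floor.
elapsed, _ = task(0.2, 10_000)
```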
So people may actually be upgrading because of the *misconception*
that their hardware is too slow.
> I don't think there's much hope of this changing
> unless either something new happens in automatic software generation
> or the processor people hit some kind of manufacturing wall.
As more and more people become programmers, the average skill level
drops, and systems become vastly larger and more complex.
>> (Some things are inherently computationally intensive
> is a class of users that can and will always be able to consume
> any amount of computer resources
>> real time response, like 3d video-quality games. Are these the only
>> applications that have to drive the Forrest curve?)
> See the above paragraph. Also, those new fancy graphics processors
> are helping the 3d video-quality game people a lot.
One day, a PC will be able to display vivid, realistic 3d at the
resolution of the human eye, and until then, game players are in the
infinite consumption group. They're a pretty large group, I think,
maybe even a majority.
>> I'm not sure that follows. If the price difference isn't too great,
>> why not get the latest and greatest? If only for the bragging rights?
>> For the unenlightened, surely 2n GHz at $1000 is better than n GHz at
>> $900?
> Maybe you or I would get the latest and greatest. But, I doubt
> an insurance company or bank would spend the extra money,
> especially if large numbers of computers are being purchased.
Quite the converse, I think.
I bought a cheap Celeron for myself, thinking of noise and heat and
considering it adequately powerful for my purposes. My employer had
few qualms about spending four times what I did on my work PC, just
to get it standardized with all the bells and whistles. Average Joe at
Wal-Mart is going to stare blindly at numbers; perhaps he knows his
neighbor has an n GHz machine, or that having lots of RAM is good, so he
coughs up a few extra % on the price.
Possibly - but as we are discussing general trends, then no; on average
it is, IMHO, probably growing more slowly.
>
> The time taken is non-linear in the size of the code, and it is
> becoming increasingly important to use more aggressive and more
> global optimisations to get the current hardware up to speed.
>
Oh I agree - and the complexity is only going to increase as time
goes on. But increasing amounts of software will be scripting, thin
layers on top of middleware (if you have 3 layers of middleware,
are all of them still middleware?) and so on. So intensive
compilation will be moving towards the same slot as HPC.
Umm.. to an extent. ccache does not really help in the sense that
it assumes identical input gives identical output - very wrong if
you changed from O1 to O3, and distcc is just a parallelisation trick,
which has been around (in various incarnations of various distributed
versions of make) for ages.
Some sides to this are:
* you want more than just "a build" - you probably want debug,
optimised + assertions and performance-tuned versions
at the very least, possibly more (think of code
instrumentation)
* the input to the final binary(ies) is not just the source
but also profiling feedback. Even if the "compilation"
doesn't consist of three phases, the middle of which is
running profiling, profiling input is likely to be present.
* by necessity, more and more of the optimisation time
is spent in the "post-compilation" phase as cross-object-file
optimisations and similar are performed.
I'm not saying these aren't great programs - they just help only
certain niche cases and have limited application beyond that.
>
> -Andi
Yes and no. I think that this is another case where HPC hit a
problem yesterday that the rest of the world will hit tomorrow.
The two problems are that an increasing number of compilers need
to optimise the WHOLE program, which increases the complexity of
compilation, and that the ratio between optimised and unoptimised
performance is increasing, which is making a higher proportion of
code dependent on optimisation. The trend to "just in time"
compilation also feeds the latter, even in a "development free"
environment.
The effect is an increasing dichotomy between the sort of code
you are talking about, and the sort of code that needs serious
optimisation. My belief is that we are in for a period where the
proportional cost of the aggressive optimisation will increase,
even though the proportional amount of optimised code will
continue to decrease.
Another way of looking at it is that the performance of CPUs is
increasing at 25-50% per annum, but only for aggressively tuned
and optimised code, and probably at only 5-10% for "just compiled"
code (including low-level scripting languages). As the demands
are increasing at more than 5-10%, there is an ongoing need to
move code from the latter to the former category.
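Compounding those rates shows how quickly the gap opens. A back-of-envelope sketch, where 35%/yr and 7%/yr are my picks from the ranges above:

```python
# Compound the two growth rates from the paragraph above over a decade.
def compound(annual_rate, years):
    return (1.0 + annual_rate) ** years

tuned = compound(0.35, 10)          # aggressively optimised code: ~20x
just_compiled = compound(0.07, 10)  # "just compiled" code:        ~2x
gap = tuned / just_compiled         # the dichotomy: roughly 10x
```

After ten years the tuned-code path is an order of magnitude ahead, which is why there is ongoing pressure to move code into the aggressively optimised category.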
Now, there has been some very interesting research in how the
changes in architecture over the years have meant that it can
now be faster to interpret an intermediate code than to execute
the fully compiled form! This is done by designing the engine
and intermediate ISA for a very fast and highly tuned interpreter
and run-time system. The trouble is that it is very hard to
start from existing languages and enable that.
But it indicates that there is a possibility of changing the
break point between hardware and software, with a good chance
of benefiting both. I am not going to hold my breath waiting
for that, though :-(
I haven't used ccache yet, but I understand that it is very careful to
ensure that it does not produce incorrect results in this case (it
simply doesn't provide any speedup). It stores the exact environment
and compiler options used to generate the object.
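That is roughly the core idea: the cache key covers everything that could change the output. A toy sketch of the shape of such a scheme (this is not ccache's actual hashing algorithm, and the names are mine):

```python
import hashlib

def cache_key(preprocessed_source, compiler_id, flags):
    """Toy ccache-style key: hash the preprocessed source together with
    the compiler identity and the exact flags, so e.g. -O1 and -O3
    builds of the same file can never collide in the cache."""
    h = hashlib.sha256()
    h.update(preprocessed_source)
    h.update(compiler_id.encode())
    for flag in flags:
        h.update(flag.encode())
    return h.hexdigest()

src = b"int main(void) { return 0; }"
k_o1 = cache_key(src, "gcc-3.2", ["-O1"])
k_o3 = cache_key(src, "gcc-3.2", ["-O3"])
```

A new compiler changes the compiler identity, so "install a new gcc and recompile" would simply miss the cache; a silently replaced /usr/bin/as is exactly the kind of input a scheme like this toy one cannot see.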
(I wonder if it handles the case of "install a new gcc and recompile".
Or even worse, "cp as.bugfix /usr/bin/as".)
-andy
>You appear to have completely missed the point: the curve reflects the fact
>
Either I have completely missed the point, or there will be new applications
that benefit from increased speed in all conditions. Please read
the point about IQ 250 vs 300 again. That was intended as a nearly
asymptotic proof about the commercial value of higher computation
speed in the future, i.e., bending the Forrest curve upward.
>that the faster computer is *not* measureably more entertaining and useful
>for the average user, because there is no significant amount of popular
>
You are thinking that a computer system is always a spreadsheet program
or something similar. As computing power increases, the computer system
will take on new responsibilities. The more abstract these
responsibilities are, the more the new computer system benefits
automatically, without reprogramming, from increased computing power.
This bends the Forrest curve upwards.
>software that requires its capabilities. Until the *average rate* of
>software's ability to benefit noticeably from faster processors becomes
>greater than the normal increases in hardware performance from technology
>advances motivated by factors *other than* the personal computing needs of
>the masses, the curve will continue to trend downward.
>
I do not understand the bit about the average rate of software's
ability. I think that even a single useful and widespread application
can bend the Forrest curve upward.
Those businesses that feel their slaves are not sufficiently bright at
IQ 250 will buy faster slaves at IQ 300. A business run with slaves of
IQ 300 will beat all businesses with slaves of IQ 250, all other factors
being equal. The great success of the IQ 300 company will be followed
by imitators and demand for high power CPUs will be infinite.
After seeing M.Sc. and even Ph.D. level physics and engineering
planning tasks automated I am half serious about this digital slave
thing. It is going to happen in many other professional areas, too,
and there CPU power will be an important matter. I do not
believe that the world is going to be too simple for these computers,
so that the benefit of the additional intelligence would not matter
any more.
Perhaps I still did not get the point and fell into some carefully
tuned satire trap? Sorry, English is not my native language and
I most likely fail to notice any subtle hints that would have been
so obvious to the natively English readers.
I believe that the Forrest curve phenomenon exists only for
some limited and disappearing phenomena, like computerized
spreadsheets. However, it will fail badly in predicting the need
for future CPU power.
Now something exciting: instead of an annual posting, a
one-time shot only: The Alakuijala Curve:
|
| *
| **
| **
| ***
|*** ***
| *****
+---------------------> time
It shows the ratio of computer users that would prefer a
faster computer as a function of time (over the next 50 years).
More and more people will understand the need for faster
computing as the duties of the computer become more
abstract and commercially more important. Companies
failing to understand The Alakuijala Curve will suffer badly.
Jyrki Alakuijala, Dr. Tech., M.Sc. IT
It's not a matter of getting incorrect output - if it doesn't
find the contents in the cache it will still have to compile it, and you
added the (OK, negligible) overhead of hashing. If it just recompiled,
it didn't actually add any speedup.
>
> (I wonder if it handles the case of "install a new gcc and recompile".
> Or even worse, "cp as.bugfix /usr/bin/as".)
Maybe, maybe not - you can always just nuke the cache, so it's not
too important a question.
>
> -andy
> "Jon Forrest" <for...@ce.berkeley.edu> wrote in message
> news:aqhe52$1f03$2...@agate.berkeley.edu...
> > But the making and selling of the box has been what's kept
> > the computer industry going. Now, as you say, the box
> > has become pretty much irrelevant. What's going to happen
> > to the industry?
> > Jon
>
> I see two (or three) things:
>
> 1) The industry moves on to things outside the box. Since I don't speak for
> Intel, read one of Intel's annual reports for how they think that
> communications silicon will be the thing to keep feeding their engineers and
> stockholders (though there might be a few more rough years there...). Looks
> pretty clear where Intel is putting their money.
>
> 2) The industry sells boxes to those who do not have boxes yet. I think most
> PC makers will tell you which geographic areas are strong and which are not.
> Sales guys will need to learn Mandarin.
Ah the old "Billion Consumers" delusion.
Yes there are 1.3 billion Chinese (and, what, 900 million Indians). Heck,
assuming their gangster capitalism doesn't implode in the next ten years,
a significant number of them may even buy PCs.
To go from that to assuming American industry will get rich is one heck of
a leap. In the first place, there's no reason anything but the essentials
(ie the x86 part) have to be American at all. The Chinese govt is working
hard to ensure that the OS part is not, and I'm guessing they already have
plans in place to start work on a Chinese made CPU (perhaps or perhaps not
x86). In the second place, there's far less incentive over there to use a
high end Intel P4 part. Not just AMD but also VIA and Centaur and every
two-bit x86 clone vendor you've never heard of will be vying to sell parts
with margins of pennies, let alone dollars or tens of dollars.
I'd be curious to know if ANY western industries have made serious money
in China by selling to the locals, not just through cheaper manufacturing.
(Perhaps construction, and of course certainly planes---for now.) Of those
that have made money, I'd be curious to see how the profits have changed
from year to year as local competition ramps up.
Maynard
Sounds like the stupid menu fade-in effect, also found in Win2K as long as
you haven't found the preference setting for turning it off.
--
Mvh./Regards, Niels Jørgen Kruse, Vanløse, Denmark
Maynard,
I think we were talking about the computer industry. Certainly the Chinese
computer (and networking) industry counts as industry. Thus, even if the
fortunes of specific companies (Sun, Intel, IBM, Microsoft, Cisco for
example) wane, there still will be more computers, OS's routers, etc. etc.
out there. None of this changes the need for sales guys to learn Mandarin,
unless you are making the assumption that they already do - even then I'm
right as no one is born speaking it.
I can not imagine that the management of any of the companies I mentioned is
blind to the points you are making. Whether they will be effective in
combatting it is a different question, but the nature and extent of the
threat ARE areas of considerable interest.
-Jack
Not speaking for Intel
>I'd be curious to know if ANY western industries have made serious money
>in China by selling to the locals, not just through cheaper manufacturing.
Mobile telephone equipment vendors. China has the world's largest
number of mobile phones.
greg
> The Chinese govt is working hard to ensure that the OS part is not,
> and I'm guessing they already have plans in place to start work on a
> Chinese made CPU (perhaps or perhaps not x86). In the second place,
http://english.peopledaily.com.cn/200209/27/eng20020927_104011.shtml
"based on the RISC structure, a totally another standard."
--
Perfection is attained not when there is no longer anything to add,
but when there is no longer anything to take away (A. de Saint-Exupery)
Rudi Chiarito ru...@amiga.com
On this point, I agree with you completely.
> I don't think there's much hope of this changing
> unless either something new happens in automatic software generation
> or the processor people hit some kind of manufactoring wall.
>
This kind of prediction is where the pessimists usually get it wrong.
The entire field awaits a Shannon or Turing (or in a field I know more
about, a Copernicus or an Einstein) to turn the whole business upside down.
Is it the hardware that's clumsy or the software? My speculation is
both. The whole computer science business, to make yet another physics
analogy, reminds me of pre-Copernican astronomy. Maybe if you just add
epicycles, or maybe epicycles to epicycles...
Of course it's clumsy, and probably because at some time in the future
it will all look so bone-headed in retrospect that we will all be trying
to forget how limited our imaginations were.
<snip>
>
> Perhaps I still did not get the point and fell into some carefully
> tuned satire trap? Sorry, English is not my native language and
> I most likely fail to notice any subtle hints that would have been
> so obvious to the natively English readers.
>
There is nothing wrong with your English, which you use beautifully, and
I don't think Mr. Todd has done anything to answer your very compelling
post except to say that you're missing the point. Whatever *his* point
is, he isn't making it as clearly as you have made yours.
It's been some time since they crossed the 10^9 mark. A web site I just
found says 1015 million (undated - probably some years old).
Jan
But it's a one-time setup - thus more or less negligible.
>
> The nice thing with distcc is that it works with minimal configuration -
> you just need the same compiler and start the daemon.
Right - this is still a benefit largely for projects that aren't large
or long-lasting.
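The "minimal configuration" point can be sketched concretely. This is a
hypothetical two-helper setup; the host names and subnet are placeholders,
not from the thread, and it also shows the documented trick of stacking
ccache in front of distcc:

```shell
# On each helper machine: start the distcc daemon for the local net.
distccd --daemon --allow 192.168.1.0/24

# On the build machine: list the compile hosts (localhost builds too),
# and put ccache in front so cache hits never touch the network.
export DISTCC_HOSTS="localhost helper1 helper2"
export CCACHE_PREFIX="distcc"
make -j6 CC="ccache gcc"
```

The only real requirement, as noted, is that all hosts run the same
compiler version.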
>
>>
>> Some sides to this are:
>> * you want more than just "a build" - you probably want debug,
>> optimised + assertion-checked, and performance-tuned versions
>> at the very least, possibly more (think of code
>> instrumentation)
>
> No problem. It will cache all that. You just need a big cache.
It's still just a small speedup trick that offers the illusion
of doing "full" builds in the time an incremental build would
have taken.
>
>> * the input to the final binary(ies) is not just the source
>> but also profiling feedback. Even if the "compilation"
>> doesn't consist of three phases, the middle of which is
>> running profiling, profiling input is likely to be present.
>
> For profiling feedback it won't work, agreed.
> I don't think that is commonly used for debugging "daily builds" though,
> where build time is critical.
>
Think of it in terms of having a regression test for execution times,
so you know you didn't stumble upon something that caused an optimisation
to become a pessimisation in the compiler.
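An execution-time regression test of this kind can be sketched in a few
lines. Everything here is illustrative: the baseline number is made up, and
the workload is a stand-in for whatever the compiled test binary would do.

```python
import time

def benchmark():
    # Stand-in workload; in a real setup this would run the freshly
    # compiled test binary and time it.
    return sum(i * i for i in range(100_000))

BASELINE_SECS = 0.5   # recorded from a known-good compiler build (made up)
TOLERANCE = 1.5       # allow 50% timing jitter before flagging a regression

start = time.perf_counter()
result = benchmark()
elapsed = time.perf_counter() - start

assert result > 0
assert elapsed < BASELINE_SECS * TOLERANCE, "optimisation became a pessimisation?"
print(f"ok: {elapsed:.4f}s (baseline {BASELINE_SECS}s)")
```

The point is that timing, like correctness, only gets checked if something
checks it on every build.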
>
>> * by necessity, more and more of the optimisation time is spent
>> in the "post-compilation" phase as cross-object-file
>> optimisations and similar are performed.
>
> IPO files could be cached too (but aren't currently, right?)
>
> If your compiler does a complete optimizer rerun on each link,
> you should probably get a new compiler. If it only does some simple
> things during the final link then it should still be a win.
The optimised link phase (I don't think there is anything that does this
unasked) can do things like inlining functions from other object files
and similar.
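As a sketch with a modern toolchain (GCC's link-time optimisation, which
postdates the compilers discussed here): intermediate representation is kept
in the object files, and inlining/IPO work reruns at the final link, across
object boundaries.

```shell
gcc -c -O2 -flto a.c
gcc -c -O2 -flto b.c
gcc -O2 -flto a.o b.o -o prog   # cross-object inlining happens here
```

This is exactly the kind of "optimised link phase" that defeats naive
per-object caching.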
Much of it looks bone-headed already, but the reasons are probably
commercial. The idea that bloated operating systems are that way so as
to sell more hardware is at least as old as the IBM 360 series.
Maybe when hardware speeds start hitting the physical limits more time
will be put to architectural and programming elegance. In the past, it
seems to me that elegant approaches were usually upstaged by brute force
(e.g. Burroughs 5500 discussed recently). The user market is dominated
by amateurs, and they might well constitute the majority of programmers
too.
--
Ken Moore
k...@mooremusic.org.uk
Web site: http://www.mooremusic.org.uk/
I reject emails > 300k automatically: warn me beforehand if you want to send one
So:
* there is no speedup from ccache at all if you didn't
start the build from scratch. The risks in not doing so
are approximately the same as for using ccache
* for many problems, L2 caches offer only a small benefit,
and sometimes none that is relevant at all. There are also
cases where it is in fact a pessimisation.
Stop thinking in terms of microbenchmarks.
On this point, I suppose I would class myself a pessimist. We keep
getting elegant new languages (like Ruby), but programs seem less
readable now than they were three decades ago.
One of my more spectacular failures to predict the future was that, in
the mid-80's, I predicted the demise of Unix. The basis of my
prediction was that, as a collection of software, Unix reminded me more
of an automobile junkyard than of anything with a deliberate
architecture. What I failed to see was that an even bigger mess created by
a software monopoly (Microsoft) would make Unix and its offspring Linux
continue to look attractive by comparison.
It simply amazes me that enterprises that rely on computers continue to
function at all. One enterprise with which I am familiar *never* seems
to have any confidence in the accuracy of its own numbers, and after
seeing repeated and spectacular blunders, I can see why. How it is that
they stay in business is beyond my understanding.
The only way that I can imagine brute force being of any help is that
maybe someone could invent something like a superoperating system that
could enumerate, keep track of, and supervise all the heterogeneous
pieces of whatever hardware and software make up the computer resources
that an individual or an enterprise is supposed to rely on.
If, for example, you regarded my own little computer network as just one
big data resource, it is filled with little inconsistencies and
surprises waiting to be fixed or to spring out at me at the most
inopportune time. It would be expecting too much to imagine that a
piece of software could be written that could cope with it all, but
there are obvious places to start. I *think* this is where IBM is
headed with its autonomic computing initiative. "Plug and Play"
operating systems that query the hardware they are operating on and try
to make sense of it all are another step in this direction.
All of this is yet another argument against the inevitability of the
Forrest curve, since what I have proposed is really more bloatware to
cope with all the bloatware. Epicycles upon epicycles...
Come, now -- you don't give the Sun compiler writers enough credit... ;) (/me
recalls Sun's feat, not long ago, of a compiler tweak that gave them something
like a 10x boost in a particular SPEC test. Which one was that?) As for the
hardware folks... They probably won't do anything all that surprising...
hardware folks... They probably won't do anything all that surprising...
------------------
Woooogy
I have to go back in time to pretend to be myself when I tell myself to tell
myself, because I don't remember having been told by myself to tell myself. I
love temporal mechanics.
To say nothing of the dramatically decreasing cost of HD acquisition gear over
the past few years. MiniDV is < 4MB/sec. It is (NTSC) 720x480 pixels.
Slightly different for PAL. Now, imagine a cheap, uncompressed cam-corder that
does full 2k HD. That moves the data requirement from <4 MB/sec to ~ 10
MB/frame.
That's 750 GB/hr, if my calculations are about right. (And that's only
assuming 24 fps.)
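The arithmetic checks out roughly. The frame geometry (2048x1080) and 32
bits/pixel below are my assumptions, chosen to land near the ~10 MB/frame
figure; they are not from the post.

```python
WIDTH, HEIGHT, BYTES_PER_PIXEL = 2048, 1080, 4
FPS = 24

frame_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / 1e6   # ~8.8 MB/frame
rate_mb_s = frame_mb * FPS                          # ~212 MB/s
hour_gb = rate_mb_s * 3600 / 1e3                    # ~764 GB/hr

# MiniDV video is a 25 Mbit/s stream, consistent with "< 4MB/sec".
minidv_mb_s = 25e6 / 8 / 1e6                        # 3.125 MB/s

print(f"{frame_mb:.1f} MB/frame, {hour_gb:.0f} GB/hr, MiniDV {minidv_mb_s} MB/s")
```

So ~764 GB/hr under these assumptions, in the same ballpark as the quoted
750 GB/hr.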
Now, some of you will point out that right now, 750 GB isn't that hard to stick
into a single desktop box. That's fine and good, but it doesn't exactly make
for Aunt Millie storing all her home movies on the computer. Figure Aunt
Millie wants to whip out the camcorder at every birthday/wedding/major
event/vacation with any close friend or relative. A year's worth of archival
footage quickly approaches a bajillion gazillion MB.
Now, since Millie will also want to have the computer apply a few basic effects
when she gets home from the wedding -- simple stuff like a few titles, and
brightness and contrast for the most part, but other stuff as well, if it's
easy -- the CPU will need to be fast enough to chug through a TB of video,
applying effects to it, and such, and doing basic overlays.
Ummm... in summary (as if I actually had a point), the forrest curve can bite
my shiny metal ass. We will keep thinking of stuff to do with new hardware as
it comes out, even for the Joe Managers, and Aunt Millies of the world.
Sure, old computers will stay useful. Slow computers will still be cool. I
still use a SparcStation IPX (original SPARC -- no super/ultra/whatever SPARC),
an old Power Mac (The speed bump of the slowest model of the very first
generation 601 PowerMacs). My web page is sitting on the server at a friend's
apartment -- it's an *underclocked* P-200. (U/C'd to 133).
Heck, I'll even post the URL to the only real page on the web site so far --
128.211.153.106/god/joey.php
Just as an experiment to see if all of you together are enough to slashdot the
poor little U/C'd Pentium. (It's a particularly awful page about the actor
Joe Pantoliano.)
Even knowing you won't slashdot the page, I wouldn't mind the page being on a
faster server. We would just make it do more things. It would become a Quake
Server, or I would dedicate the system to something else trivial but
entertaining, like folding@home, or whatever.
>>Anyway, MiniDV digital video is what's gonna absorb the next two orders
>>of magnitude in disk size.
>>
>>Terje
>>
>To say nothing of the dramatically decreasing cost of HD acquisition gear over
>the past few years. MiniDV is < 4MB/sec. It is (NTSC) 720x480 pixels.
>Slightly different for PAL. Now, imagine a cheap, uncompressed cam-corder that
>does full 2k HD. That moves the data requirement from <4 MB/sec to ~ 10
>MB/frame.
Exactly why are people going to need HD resolution for home movies of
their kids' birthday parties? HDTV isn't selling well in the US, not
only because of various regulatory issues, but because for most people,
NTSC is "good enough". Yeah, they can tell the difference with a real
HDTV signal, but when you tell them what it all costs, they decide it
ain't so great after all.
Anyway, why the hell would you say "uncompressed" HD? By the time a
camcorder that does HD resolutions is truly cheap, a chip fast enough to
do MPEG4 encoding in realtime will also be available to stick in there with
it, so you stay at a nice easy 4MB/sec or similar -- don't want to write
any faster than commodity flash can handle. Professionals might want full
HD without compression, but your average grandma will never care.
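The compression ratio that implies is indeed routine for MPEG-4. Using
assumed 2k numbers (2048x1080, 32 bits/pixel, 24 fps -- my figures, not the
poster's):

```python
raw_mb_s = 2048 * 1080 * 4 * 24 / 1e6   # ~212 MB/s uncompressed
target_mb_s = 4.0                        # the "nice easy 4MB/sec" target
ratio = raw_mb_s / target_mb_s
print(round(ratio))                      # -> 53
```

A ~53:1 ratio is well within what MPEG-4-family codecs deliver, so the
"compress in the camera" argument holds up.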
I wouldn't hold my breath for digital video to have a noticeable impact
in new computer sales, despite the hopes some people have.
--
Douglas Siebert dsie...@excisethis.khamsin.net
"Suppose you were an idiot. And suppose you were a member of Congress.
But I repeat myself." -- Mark Twain