
The Emperor's new clothes


Joe Seigh

Nov 21, 2005, 11:41:47 AM
So these processor manufacturers all have these
nice new multi-core cpu's but apart from market
hyperbole (these cpu's will save the environment, etc...)
I don't see them actually doing anything to exploit their
potential. By "them", I mean them not us. We of course
know what to do. But what's going on to get all the applications
to start exploiting this? The magic parallelization fairy?

--
Joe Seigh

When you get lemons, you make lemonade.
When you get hardware, you make software.

Ken Hagan

Nov 21, 2005, 12:06:53 PM
Joe Seigh wrote:
> So these processor manufacturers all have these
> nice new multi-core cpu's but [...] what's going

> on to get all the applications to start exploiting
> this? The magic parallelization fairy?

Pretty much. For the levels of parallelism on offer
(2- and 4-way) the OS can probably find useful work
to do even when confronted with a single-threaded
program. (If you are running Windows, try using
Performance Monitor to log the number of threads
in the "ready to run" state.)

For typical domestic customers, whose most demanding
applications are Office and their web browser, I'd
expect to be able to keep a 4-way system busy.

Apps that make limited explicit use of parallelism may
or may not follow once the hardware is in place. I don't
think it matters.

"Apps" that use some sort of component framework (MTS, ASP,
COM+, .NET) to let the system run them using thread pools
will achieve slightly higher levels of parallelism (at least
when there is real work to do) with no programming effort.

If you want anything more than 8-way parallelism then, yes,
you'll need a fairy, but we aren't there yet.

Joe Seigh

Nov 21, 2005, 12:36:38 PM
Ken Hagan wrote:
> Joe Seigh wrote:
>
>> So these processor manufacturers all have these
>> nice new multi-core cpu's but [...] what's going
>> on to get all the applications to start exploiting
>> this? The magic parallelization fairy?
>
>
[...]

>
> If you want anything more than 8-way parallelism then, yes,
> you'll need a fairy, but we aren't there yet.

Intel's business model used to be just putting out faster
processors. Customers could get an immediate benefit just
by buying the faster processors which meant continued
revenue for Intel. Now with multicores customers can't
get an immediate benefit by buying the new processors
since the apps haven't been changed to exploit multicore.
So no compelling reason to upgrade and a big dent in
Intel's revenue stream. Ditto for the other vendors
as well.

What I don't see are any of the vendors being proactive
here. They're depending entirely on it occurring to the
software vendors that there might be a competitive advantage
in exploiting multicore.

Jeff Kenton

Nov 21, 2005, 7:23:18 PM
Ken Hagan wrote:
> Apps that make limited explicit use of parallelism may
> or may not follow once the hardware is in place. I don't
> think it matters.

Having helped develop several moderately large scale parallel machines (KSR,
BBN Butterfly, Encore, and some you never heard about), it's clear that the
problem is software. Almost nobody is willing to build parallel applications,
and the tools available to help are still terribly weak. You'll still have
threads, and there will be a use for 4 - 8 processors, but it will be a case
of extra processors running separate tasks at the same time, plus an
occasional killer chess program.

jeff

David Magda

Nov 21, 2005, 8:04:43 PM
Jeff Kenton <jeffrey...@comcast.net> writes:

> help are still terribly weak. You'll still have threads, and there
> will be a use for 4 - 8 processors, but it will be a case of extra
> processors running separate tasks at the same time, plus an
> occasional killer chess program.

Under Windows I'm sure Norton will chew up one of the cores. (I'm only
half-kidding.)

--
David Magda <dmagda at ee.ryerson.ca>
Because the innovator has for enemies all those who have done well under
the old conditions, and lukewarm defenders in those who may do well
under the new. -- Niccolo Machiavelli, _The Prince_, Chapter VI

Bruce Hoult

Nov 22, 2005, 1:01:35 AM
In article <dlsuql$8rj$1$830f...@news.demon.co.uk>,
Ken Hagan <K.H...@thermoteknix.co.uk> wrote:

> Joe Seigh wrote:
> > So these processor manufacturers all have these
> > nice new multi-core cpu's but [...] what's going
> > on to get all the applications to start exploiting
> > this? The magic parallelization fairy?
>
> Pretty much. For the levels of parallelism on offer
> (2- and 4-way) the OS can probably find useful work
> to do even when confronted with a single-threaded
> program. (If you are running Windows, try using
> Performance Monitor to log the number of threads
> in the "ready to run" state.
>
> For typical domestic customers, whose most demanding
> applications are Office and their web browser, I'd
> expect to be able to keep a 4-way system busy.
>
> Apps that make limited explicit use of parallelism may
> or may not follow once the hardware is in place. I don't
> think it matters.

There are plenty on the Mac. iTunes (ripping CDs), iMovie, iDVD,
Photoshop, QuickTime all know how to keep 4 CPUs busy on the new 4-way
Macs. (And there are many, many dual-CPU Macs out there, ranging from
400 MHz to 2.7 GHz.)

--
Bruce | 41.1670S | \ spoken | -+-
Hoult | 174.8263E | /\ here. | ----------O----------

Chris Thomasson

Nov 22, 2005, 2:18:27 AM
"Joe Seigh" <jsei...@xemaps.com> wrote in message
news:iKudnfC1qrPxZRze...@comcast.com...

> So these processor manufacturers all have these
> nice new multi-core cpu's but apart from market
> hyperbole (these cpu's will save the environment, etc...)
> I don't see them actually doing anything to exploit their
> potential. By "them", I mean them not us. We of course
> know what to do. But what's going on to get all the applications
> to start exploiting this?

Well, Intel's current research seems to be moving toward transactional
memory:

http://www.cambridge.intel-research.net/~rennals/faststm.html


In the near future they may be pushing developers to convert their
applications' critical sections into transactions. I believe that they are
going to start to advertise transactional memory as a "general solution"
that can directly address the new multi-core processors that are coming out.
If they do take that path, I am not sure how well it's going to work out for
them. They will probably need to put a transactional memory implementation
in the hardware itself. Something like this:

http://ogun.stanford.edu/~kunle/publications/tcc_isca2004.pdf
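
As a concrete illustration of what "converting a critical section into a
transaction" can look like at the source level, here is a minimal sketch in
C. It uses GCC's -fgnu-tm extension, which is my own choice of vehicle (it
appeared years after this discussion) and not the interface the Intel
research above proposes:

    #include <pthread.h>

    static long balance = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* The lock-based critical section most code uses today. */
    void deposit_locked(long amount)
    {
        pthread_mutex_lock(&lock);
        balance += amount;
        pthread_mutex_unlock(&lock);
    }

    /* The same update expressed as a transaction: the TM runtime
       detects conflicting accesses and re-executes the region
       instead of blocking. Compile with gcc -fgnu-tm. */
    void deposit_txn(long amount)
    {
        __transaction_atomic { balance += amount; }
    }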


I am not too sure how well this would scale... It seems as though it could
possibly suffer from livelock-like situations under certain circumstances.
For instance, simple false sharing may cause some transactions to abort. I
think it would be similar to the livelock that can occur when there is
false sharing on an LL/SC-based lock-free LIFO anchor and/or its nodes;
reservation granularity, you know...
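
To make the false-sharing point concrete: two counters that merely sit in
the same cache line will conflict even though no datum is actually shared.
A minimal sketch in C, assuming 64-byte cache lines:

    /* Both counters land in one cache line, so a coherence protocol,
       an LL/SC reservation, or a line-granular TM conflict detector
       treats updates to a and b as conflicting. */
    struct counters_bad {
        long a;                       /* written only by thread 1 */
        long b;                       /* written only by thread 2 */
    };

    /* Padding a out to a full line removes the false conflicts
       without changing any program logic. */
    struct counters_good {
        long a;
        char pad[64 - sizeof(long)];  /* rest of a's cache line */
        long b;
    };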


> The magic parallelization fairy?

Well, the fact that double-width compare-and-swap did not get "reliably"
ported to 64-bit architectures makes me think that they may be relying on a
magical fairy to come down and show application developers the light...
Luckily there are some algorithms out there (e.g., VZOOM and RCU-SMR) that
can help applications boost performance and efficiently scale up to the new
multi-core designs right now. However, in order for this stuff to really
take off I believe that it's going to take somebody to "bite the bullet" and
incorporate one of these solutions into their application architecture. Then
let the performance and scalability improvement(s) speak for themselves...

:)


As of now I am only using VZOOM for in-house projects. Maybe I should do
something commercial with it... Ahhhh, it would probably be a waste of
time... Perhaps I should use it to implement a speedy STM framework, since
transactions seem to be the way things are going to go anyway...

;)


Ken Hagan

Nov 22, 2005, 5:52:27 AM
David Magda wrote:
>
> Under Windows I'm sure Norton will chew up one of the cores.
> (I'm only half-kidding.)

Lots of corporate customers have no choice but to run such filth, so
they will see an immediate benefit. (I'm not even half-kidding.)

Ken Hagan

Nov 22, 2005, 6:07:14 AM
> Ken Hagan wrote:
>
>> Apps that make limited explicit use of parallelism may
>> or may not follow once the hardware is in place. I don't
>> think it matters.

Jeff Kenton wrote:
>
> Having helped develop several moderately large scale parallel machines
> (KSR, BBN Butterfly, Encore, and some you never heard about), it's clear
> that the problem is software. Almost nobody is willing to build
> parallel applications, and the tools available to help are still
> terribly weak. You'll still have threads, and there will be a use for 4
> - 8 processors, but it will be a case of extra processors running
> separate tasks at the same time, plus an occasional killer chess program.

I agree, and perhaps I wasn't clear. My point is that *existing*
software can exploit 2- and 4- core systems. The hardware is not (yet)
running ahead of the software. We won't hit that problem until 2010-ish.

On an optimistic note, in the next four years, developers will have
an incentive to write software that runs equally well on 1, 2 or 4
core systems. They've never had that before, and it isn't *that* hard
in many cases, so things might just be different this time. Having done
that, such software might not suck on 8 or 16 core systems. We may have
almost a decade before we need a solution to the hard problem.

Joe Seigh

Nov 22, 2005, 8:30:22 AM
Chris Thomasson wrote:
> "Joe Seigh" <jsei...@xemaps.com> wrote in message
> news:iKudnfC1qrPxZRze...@comcast.com...
>
>>So these processor manufacturers all have these
>>nice new multi-core cpu's but apart from market
>>hyperbole (these cpu's will save the environment, etc...)
>>I don't see them actually doing anything to exploit their
>>potential. By "them", I mean them not us. We of course
>>know what to do. But what's going on to get all the applications
>>to start exploiting this?
>
[...]

>
>
>>The magic parallelization fairy?
>
>
> Well, the fact that double-width compare-and-swap did not get "reliably"
> ported to 64-bit architectures makes me think that they may be relying on a
> magical fairy to come down and show application developers the light...
> Luckily there are some algorithms out there (e.g., VZOOM and RCU-SMR) that
> can help applications boost performance and efficiently scale up to the new
> multi-core designs right now. However, in order for this stuff to really
> take off I believe that it's going to take somebody to "bite the bullet" and
> incorporate one of these solutions into their application architecture. Then
> let the performance and scalability improvement(s) speak for themselves...
>
Yes, but when? I can just see that in some Intel/Sun/... quarterly earnings
report. "Eventually someone will 'bite the bullet' and start exploiting
multicores and then people will buy our cpu's again". Seems strange,
especially for Intel, which doesn't like to leave things to chance.

>
>
> As of now I am only using VZOOM for in-house projects. Maybe I should do
> something commercial with it... Ahhhh, it would probably be a waste of
> time... Perhaps I should use it to implement a speedy STM framework, since
> transactions seem to be the way things are going to go anyway...
>

I see you've found out that application bloat has certainly raised the
barrier to entry.

Joe Seigh

Nov 22, 2005, 8:35:49 AM
Ken Hagan wrote:

[...]


> I agree, and perhaps I wasn't clear. My point is that *existing*
> software can exploit 2- and 4- core systems. The hardware is not (yet)
> running ahead of the software. We won't hit that problem until 2010-ish.
>
> On an optimistic note, in the next four years, developers will have
> an incentive to write software that runs equally well on 1, 2 or 4
> core systems. They've never had that before, and it isn't *that* hard
> in many cases, so things might just be different this time. Having done
> that, such software might not suck on 8 or 16 core systems. We may have
> almost a decade before we need a solution to the hard problem.

8 and 16 processors are about when you start running into scalability
problems, and it gets exponentially worse from there. If the nature of the
solution is that we can just plug it in and everything runs faster, just
like plugging in a faster processor used to do, then we'll be okay.
Otherwise...
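
Amdahl's arithmetic makes the point: with a 95% parallel workload the
speedup on N processors is 1/(0.05 + 0.95/N), which is about 5.9x on 8
CPUs and only 9.1x on 16; the serial 5% dominates well before the core
counts get exotic.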

Joe Seigh

Nov 22, 2005, 11:10:22 AM
Chris Thomasson wrote:
> "Joe Seigh" <jsei...@xemaps.com> wrote in message
>
>
>>The magic parallelization fairy?
>
>
> Well, the fact that double-width compare-and-swap did not get "reliably"
> ported to 64-bit architectures makes me think that they may be relying on a
> magical fairy to come down and show application developers the light...

I should add that I have a workaround for that problem for one
situation at least. It's lock-free and doesn't involve KCSS (k-compare,
single swap) which is only obstruction-free. The only real beneficiary
would be sparc which doesn't have double wide compare and swap. It's not
likely that I'll get a Niagara processor to play around with however.

Eric P.

Nov 22, 2005, 11:30:50 AM
Joe Seigh wrote:
>
> Yes, but when? I can just see that in some Intel/Sun/... quarterly earnings
> report. "Eventually someone will 'bite the bullet' and start exploiting
> multicores and then people will buy our cpu's again". Seems strange,
> especially for Intel, which doesn't like to leave things to chance.

It depends on which market you are considering. If it is the mass
home, small business and office desktop market, then they are
simply acknowledging that it will take more than ads with the
Doublemint twins dancing in bunny suits to sell dualies,
and that the traditional "new front grille on this year's model"
or the "my GHz are bigger than yours" approaches of the past
probably won't work.

Without that mass market, it remains a specialized niche, and
only certain app producers will make the effort.

Eric

Russell Crook - Computer Systems - System Engineer

Nov 22, 2005, 1:03:29 PM
Joe Seigh wrote:
> Chris Thomasson wrote:
>
>> "Joe Seigh" <jsei...@xemaps.com> wrote in message
>>
>>> The magic parallelization fairy?

Probably. Said fairy was obviously around for all the
existing software that works well on 8-, 16-, 32-, 64-way etc.
systems (database applications, web servers, that kind of stuff).
There are many, many such systems running today.

IOW, stuff that already runs well on larger arity SMPs
is likely to run well on many core/many thread chips.

Russell

Brian Hurt

Nov 22, 2005, 6:57:05 PM
Ken Hagan <K.H...@thermoteknix.co.uk> writes:

> > Ken Hagan wrote:
> >
> >> Apps that make limited explicit use of parallelism may
> >> or may not follow once the hardware is in place. I don't
> >> think it matters.

>Jeff Kenton wrote:
>>
>> Having helped develop several moderately large scale parallel machines
>> (KSR, BBN Butterfly, Encore, and some you never heard about), it's clear
>> that the problem is software. Almost nobody is willing to build
>> parallel applications, and the tools available to help are still
>> terribly weak. You'll still have threads, and there will be a use for 4
>> - 8 processors, but it will be a case of extra processors running
>> separate tasks at the same time, plus an occasional killer chess program.

>I agree, and perhaps I wasn't clear. My point is that *existing*
>software can exploit 2- and 4- core systems. The hardware is not (yet)
>running ahead of the software. We won't hit that problem until 2010-ish.

Over 4 cores, yeah- 2010 is a good guess, as I'd say the 4-cores will
be hitting the streets late 2007-2008.

What is worrisome is the implicit assumption here that this is it.
The current generation of chips are hitting a wall- we can't make
single threaded apps any faster. The only thing to do with more
transistors is to simply add cores. So we have dual core today, quad
core in 2-3 years, eight core 2-3 years after that, etc. By 2020,
we're looking at 32 to 256 cores on a chip, with the mean being like
64-128 cores. Not to mention the increasing popularity of multi-chip
configurations. In 15 years, mid-level systems could have 512 to 1024
cores. We're reasonably certain Moore's law will continue until at
least then.

>On an optimistic note, in the next four years, developers will have
>an incentive to write software that runs equally well on 1, 2 or 4
>core systems. They've never had that before, and it isn't *that* hard
>in many cases, so things might just be different this time. Having done
>that, such software might not suck on 8 or 16 core systems. We may have
>almost a decade before we need a solution to the hard problem.

The threads & locks model (or the minor variant of threads & monitors
used in Java and C#) doesn't scale. It doesn't scale to hundreds or
thousands of threads. In addition to all of the current bugs, you add
all of the bugs that multithreading brings with it- race conditions,
deadlocks, livelocks. Not to mention scalability issues- applications
needing to run on 5-year-old CPUs (the equivalent of P3s or Athlons today)
with only 4 cores, up to the new top-of-the-line workstations with 512
cores, efficiently.

In addition to this we have the increasing complexity of software- a
race we seem to be losing even without the added complexity of
parallelism.

And remember that what an extremely talented programmer can do is, by
and large, irrelevant to the debate. It's what the below-average
programmer can do that's relevant. Because there are a hell of a lot
more average and below average programmers out there, and we have to
use their software as well. In fact, generally their software is
mixed in with the software of the extraordinary programmers. There's
an old saw that applies here- if you add a tablespoon of wine to a
barrel of sewage, you get sewage. However, if you add a tablespoon of
sewage to a barrel of wine, you still get sewage.

I have some suspicions as to what that solution might look like. I'm
not sure I'm right, but one thing I am sure of: the solution will
*NOT* look like C++ or Java or C# or Ruby or Python or C or PHP or
Perl or etc. Which raises the question: were such a solution to exist
(one that is very different from current popular languages), would the
"mainstream" adopt it? Especially Mr. "I don't learn a new language
until my employer sends me off to school, and maybe not even then"
Below-Average Programmer? Especially if the solution came out of
(shudder- the horror! the horror!) academia!

Which is why I predict that things will get much, much worse before
they get any better.

Brian

David Magda

Nov 22, 2005, 7:43:27 PM
Ken Hagan <K.H...@thermoteknix.co.uk> writes:

> Lots of corporate customers have no choice but to run such filth, so
> they will see an immediate benefit. (I'm not even half-kidding.)

Yes, including where I work. Didn't seem to stop the latest Sober
variants (nor did it prevent Sony's rootkit from being installed). :-/

Oh well.

Chris Thomasson

Nov 23, 2005, 3:36:47 AM
"Joe Seigh" <jsei...@xemaps.com> wrote in message
news:qdednTWXVIY...@comcast.com...

> Chris Thomasson wrote:
>> "Joe Seigh" <jsei...@xemaps.com> wrote in message
>>>The magic parallelization fairy?
>>
>>
>> Well, the fact that double-width compare-and-swap did not get "reliably"
>> ported to 64-bit architectures makes me think that they may be relying on a
>> magical fairy to come down and show application developers the light...
>
> I should add that I have a workaround for that problem for one
> situation at least. It's lock-free and doesn't involve KCSS (k-compare,
> single swap) which is only obstruction-free.


Humm, I wonder if your solution is anything like the one I tinkered
around with a couple of years ago. Here is some rough pseudo-code that
illustrates the basic idea:


/* 128-bits */
struct dwcas_node_t
{
    void *ptr1;
    void *ptr2;
};

struct dwcas_anchor_t
{
    int32 idx;
    int32 aba;
};

static dwcas_node_t p_nodes[WHATEVER_DEPTH];

int DWCAS( dwcas_anchor_t *dest, void *cmp, void *xchg )
{
    dwcas_anchor_t lcmp, lxchg;
    dwcas_node_t *n = node_cache_pop();

    memcpy( n, xchg, sizeof( *n ) );

    lxchg.idx = n - p_nodes;
    lcmp = *dest;

    do
    {
        if ( memcmp( &p_nodes[lcmp.idx], cmp, sizeof( *n ) ) )
        {
            node_cache_push( n );
            return 0;
        }

        /* emulate LL/SC-like behavior */
        lxchg.aba = lcmp.aba + 1;

        /* normal 64-bit cas */
    } while ( ! CAS( dest, &lcmp, &lxchg ) );

    /* cache old node */
    node_cache_push( &p_nodes[lcmp.idx] );

    return 1;
}


As you can see, I am using an "offset-as-pointer" trick and an ABA count to
emulate a DWCAS. The node_cache_* functions would check a per-thread cache
first, then global, and finally allocate another slab of nodes if the caches
were empty. The crude design could also be extended to compare-and-swap more
than 2 contiguous pointers. I am wondering if your solution is far more
efficient than mine?

:)


> The only real beneficiary
> would be sparc which doesn't have double wide compare and swap. It's not
> likely that I'll get a Niagara processor to play around with however.

casxa did not get ported to 64-bit systems? How can that be...

DOH!

;)


Chris Thomasson

Nov 23, 2005, 4:16:53 AM
Yikes!!!


[...]

> lcmp = *dest;
^^^^^^^^^^^^^^^

this line needs to be moved:


>
> do
> {

right here

lcmp = *dest;
^^^^^^^^^^^^


> if ( memcmp( &p_nodes[lcmp.idx], cmp, sizeof( *n ) ) )
> {
> node_cache_push( n );
> return 0;
> }
>
> /* emulate LL/SC-like behavior */
> lxchg.aba = lcmp.aba + 1;
>
> /* normal 64-bit cas */
> } while ( ! CAS( dest, &lcmp, &lxchg ) );

Sorry!


Humm, I wonder if Mr. Terekhov would be pleased because the CAS did not
return the new value on failure in the example...

;)


Chris Thomasson

Nov 23, 2005, 7:11:27 AM
> I should add that I have a workaround for that problem for one
> situation at least. It's lock-free and doesn't involve KCSS (k-compare,
> single swap) which is only obstruction-free.

You could also simply align a proxy collector data structure anchor on a
128- or 256+ bit boundary and use the extra space as a reference count...
A differential counting algorithm would take care of the rest...
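
A minimal sketch of the pointer-packing half of that idea in C; the names,
the 32-byte (256-bit) alignment, and the 5-bit count are illustrative
assumptions, and the differential-counting machinery that drains the count
is omitted:

    #include <stdint.h>

    #define REF_BITS 5u
    #define REF_MASK ((1u << REF_BITS) - 1u)  /* low 5 address bits */

    typedef uintptr_t packed_anchor_t;

    /* The anchor must be 32-byte aligned, so its low 5 bits are
       always zero and can carry a small in-place reference count. */
    packed_anchor_t pack(void *anchor, unsigned refs)
    {
        return (uintptr_t)anchor | (refs & REF_MASK);
    }

    void *anchor_of(packed_anchor_t w)
    {
        return (void *)(w & ~(uintptr_t)REF_MASK);
    }

    unsigned refs_of(packed_anchor_t w)
    {
        return (unsigned)(w & REF_MASK);
    }

A single word-wide CAS can then swing the pointer and adjust the count
together, which is the effect DWCAS would otherwise be needed for.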


Joe Seigh

Nov 23, 2005, 7:35:17 AM
Chris Thomasson wrote:
> "Joe Seigh" <jsei...@xemaps.com> wrote in message
> news:qdednTWXVIY...@comcast.com...
>
>>Chris Thomasson wrote:
>>>Well, the fact that double-width compare-and-swap did not get "reliably"
>>>ported to 64-bit architectures makes me think that they may be relying on a
>>>magical fairy to come down and show application developers the light...
>>
>>I should add that I have a workaround for that problem for one
>>situation at least. It's lock-free and doesn't involve KCSS (k-compare,
>>single swap) which is only obstruction-free.
>
>
>
> Humm, I wonder if your solution is anything like the one I have tinkered
> around with a couple of years ago. Here is some rough pseudo-code that
> illustrates the basic idea:
>
[...]

>
>
>
>
> As you can see, I am using a "offset-as-pointer" trick and an aba count to
> emulate a DWCAS. The node_cache_* functions would check a per-thread cache
> first, then global, and finally allocate another slab of nodes if the caches
> were empty. The crude design could also be extended to compare-and-swap more
> than 2 contiguous pointers. I am wondering if you solution is far more
> efficient than mine?
>
No, it's not a double wide compare and swap solution. It's a reader/writer
solution w/ readers being lock-free. Yet another one. It doesn't
require double wide compare and swap and that's all I'm saying for now.

>
>>The only real beneficiary
>>would be sparc which doesn't have double wide compare and swap. It's not
>>likely that I'll get a Niagara processor to play around with however.
>
>
> casxa did not get ported to 64-bit systems? How can that be...
>
> DOH!
>
> ;)

casx is only 64 bits.

I'm tempted not to publish the solution and leave Sun in the interesting
position of having lock-free solutions that only work on their Opteron-based
systems and not on their sparc-based systems (the offset trick aside). In
theory they could use RCU+SMR since they're probably cross-licensed with IBM,
but NIH would probably prevent that.

Ken Hagan

Nov 24, 2005, 6:10:29 AM
Brian Hurt wrote:
>
> What is worrisome is the implicit assumption here that this is it.
> [...] In 15 years, mid-level systems could have 512 to 1024 cores.

> We're reasonably certain Moore's law will continue until at least then.

Agreed, but I'm only saying that we don't have a problem *right now*.
I took Joe's original post to be implying that the current 2-core
systems were already "ahead of the software". (Or if not, then next
year's 4-core systems certainly would be.) I don't think that's true.

> The threads & locks model (or the minor variant of threads & monitors
> used in Java and C#) doesn't scale.

Agreed, again, but they will last a few years.

> And remember that what an extremely talented programmer can do is, by
> and large, irrelevent to the debate. It's what the below-average
> programmer can do that's relevent.

I disagree. There are a lot of server systems out there running maybe
half a dozen apps and a lot of home systems out there running another
half dozen. Get a dozen apps using threads intelligently and correctly
and the chip manufacturers will have eager customers for 32-way boxes.

> I have some suspicions as to what that solution might look like. I'm
> not sure I'm right, but one thing I am sure of: the solution will
> *NOT* look like C++ or Java or C# or Ruby or Python or C or PHP or
> Perl or etc.

I disagree again. There are various frameworks around which let dumb
programmers write single-threaded components in the above languages
which are then run (provably) safely in parallel. Large amounts of
dumb software also use these frameworks.

In summary, I agree we have a real wall coming up but I think it is
still a few years off.

Stephen Fuld

Nov 24, 2005, 11:44:24 AM

"Ken Hagan" <K.H...@thermoteknix.co.uk> wrote in message
news:dm472c$7sb$1$8300...@news.demon.co.uk...

I'm not even sure that we have a wall that will matter. Talking about the
capabilities of future hardware without talking about what will drive its
adoption seems like putting the cart before the horse.

For servers, they can easily use just about as many threads/cores as anyone
can provide, so they are a target market. Similarly, high-performance
scientific programming already uses lots of parallelization, so they can
take advantage of more as soon as it is available.

For desktops (and notebooks), which are the volume driver for the PC market,
I don't see any substantial requirement for lots of threads, just as there
isn't much requirement for faster single thread CPUs today. What benefit is
there to running Word faster? The biggest area where the general user
probably would want better performance in the future is in graphics (better
web experience, video editing, etc.), and that seems a discrete enough area
that it will be handled by more specialized instructions and graphics
processors. The big driver for faster CPUs is games, and they could take
advantage of multiple cores, but the people who program them are certainly
way above average.

So I don't see a "crisis" as the typical user won't benefit much from having
all those cores as things are fast enough for them already. So not being
able to run any faster won't be a problem and the lack of parallel
applications won't matter.

--
- Stephen Fuld
e-mail address disguised to prevent spam


Felger Carbon

Nov 24, 2005, 12:30:28 PM
"Stephen Fuld" <s.f...@PleaseRemove.att.net> wrote in message
news:I5mhf.88118$qk4....@bgtnsc05-news.ops.worldnet.att.net...

>
> So I don't see a "crisis" as the typical user won't benefit much from
> having all those cores as things are fast enough for them already. So
> not being able to run any faster won't be a problem and the lack of
> parallel applications won't matter.

Stephen, the above summarizes what Greg(?) Forrest was posting for the
past few years on comp.arch ("the Forrest Curve"). I've believed for
a long time that we were headed in that direction, and like you, I
think we've arrived.

But I see a black cloud on the horizon.

Security software, spam blockers, and popup blockers keep getting
"updates" at weekly intervals. Each update comes with tens of
megabytes of new stuff to watch out for. Is the time arriving when
we'll need 99% of our computing power to block popups and 1% to do
what we want? No smiley face.


Niels Jørgen Kruse

Nov 25, 2005, 3:45:43 AM
Felger Carbon <fms...@jfoops.net> wrote:

> "Stephen Fuld" <s.f...@PleaseRemove.att.net> wrote in message
> news:I5mhf.88118$qk4....@bgtnsc05-news.ops.worldnet.att.net...
> >
> > So I don't see a "crisis" as the typical user won't benefit much from
> > having all those cores as things are fast enough for them already. So
> > not being able to run any faster won't be a problem and the lack of
> > parallel applications won't matter.
>
> Stephen, the above summarizes what Greg(?) Forrest was posting for the
> past few years on comp.arch ("the Forrest Curve"). I've believed for
> a long time that we were headed in that direction, and like you, I
> think we've arrived.

If most everyday tasks have (just) dropped below the Forrest Curve, then
the end of scaling could be a blessing in disguise for CPU
manufacturers, in that it prevents single-thread performance from
becoming a complete commodity.

--
Mvh./Regards, Niels Jørgen Kruse, Vanløse, Denmark

Ken Hagan

Nov 25, 2005, 6:44:32 AM
Stephen Fuld wrote:
>
> I'm not even sure that we have a wall that will matter. Talking about the
> capabilities of future hardware without talking about what will drive its
> adoption seems like putting the cart before the horse.

Yes, but that doesn't mean we won't hit these problems somewhere.
Even without invoking the next "killer app / machine hog" (desktop
searching?) I would predict that battery performance will continue
to suck for the forseeable future and so getting today's performance
out of chips that consume 10% or 1% of today's power will be a big
issue for whatever replaces today's desktops and laptops.

And what's going to process the output of those HDTV camcorders
that spew terabytes of crud onto holographic discs that could
swallow the whole of today's internet? Finding the useful data
in all that will take a lot of processing power. (Or, putting a
different spin on Felger's bleak scenario, *we* may be the ones
generating most of the stuff that we then want to filter out.)

Stephen Fuld

Nov 25, 2005, 11:10:06 AM

"Ken Hagan" <K.H...@thermoteknix.co.uk> wrote in message
news:dm6tej$jh0$1$8302...@news.demon.co.uk...

> Stephen Fuld wrote:
>>
>> I'm not even sure that we have a wall that will matter. Talking about
>> the capabilities of future hardware without talking about what will drive
>> its adoption seems like putting the cart before the horse.
>
> Yes, but that doesn't mean we won't hit these problems somewhere.

Certainly true.

> Even without invoking the next "killer app / machine hog" (desktop
> searching?)

I don't think so. Desktop searching is probably either totally disk bound
or "embarassingly parallel" such that no big advance in programming
technology or expertise would be required.

> I would predict that battery performance will continue
> to suck for the forseeable future and so getting today's performance
> out of chips that consume 10% or 1% of today's power will be a big
> issue for whatever replaces today's desktops and laptops.

True. What are the power implications of this whole issue? I don't know
enough to have an intelligent comment here, but ISTM that adding extra
transistors for the extra cores would require more power. I guess you could
posit that we should have many quite slow (therefore low power) cores rather
than one "adequate speed" core, but I don't know about the tradeoffs here.

> And what's going to process the output of those HDTV camcorders
> that spew terabytes of crud onto holographic discs that could
> swallow the whole of today's internet? Finding the useful data
> in all that will take a lot of processing power. (Or, putting a
> different spin on Felger's bleak scenario, *we* may be the ones
> generating most of the stuff that we then want to filter out.)

:-)

But note that I specifically excepted graphics as that seems to be more
amenable to special instructions (e.g. SSE) or more use of the graphics
processor for streaming operations.
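
For readers unfamiliar with what those "special instructions" buy, a minimal
sketch using the SSE intrinsics of that era; the function and the operation
are made up for illustration:

    #include <xmmintrin.h>  /* SSE intrinsics */

    /* Scale four floats (say, four pixels' worth of data) with one
       SIMD multiply; this is the data parallelism meant above. */
    void scale4(float *px, float s)
    {
        __m128 v = _mm_loadu_ps(px);        /* load 4 floats */
        v = _mm_mul_ps(v, _mm_set1_ps(s));  /* multiply all 4 by s */
        _mm_storeu_ps(px, v);               /* store 4 back */
    }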

Russell Crook - Computer Systems - System Engineer

Nov 25, 2005, 1:20:43 PM, to Stephen Fuld
Stephen Fuld wrote:
> "Ken Hagan" <K.H...@thermoteknix.co.uk> wrote in message
> news:dm6tej$jh0$1$8302...@news.demon.co.uk...
>
>>Stephen Fuld wrote:
>>
>>>I'm not even sure that we have a wall that will matter. Talking about
>>>the capabilities of future hardware without talking about what will drive
>>>its adoption seems like putting the cart before the horse.
>>
>>Yes, but that doesn't mean we won't hit these problems somewhere.
>
>
> Certainly true.
>
>
>>Even without invoking the next "killer app / machine hog" (desktop
>>searching?)
>
>
> I don't think so. Desktop searching is probably either totally disk bound

Rotating rust. Pah. There MUST be something better (since we're talking
about future products :->)

(Even today, if you were designing for power first, you
might use flash+SRAM, cf. iPods.)

> or "embarassingly parallel" such that no big advance in programming
> technology or expertise would be required.
>
>
>>I would predict that battery performance will continue
>>to suck for the foreseeable future

Probably so, there's only so much you can do with chemicals
(and nuclear power has form-factor and power density limitations)
(1/2 :->)

>> and so getting today's performance
>>out of chips that consume 10% or 1% of today's power will be a big
>>issue for whatever replaces today's desktops and laptops.

The processor is only a part of the power issue. Displays
(esp. backlit) are significant problems. Massive memory
vs. power looks more tractable.

>
>
> True. What are the power implications of this whole issue? I don't know
> enough to have an intelligent comment here, but ISTM that adding extra
> transistors for the extra cores would require more power. I guess you could
> posit that we should have many quite slow (therefore low power) cores rather
> than one "adequate speed" core, but I don't know about the tradeoffs here.

If you start from the ground up designing for throughput, multiple
slower cores appear to be a significant throughput/watt win, with the
UltraSPARC T1 and the Raza XLR as current examples (each 8 core,
32 thread).

>
>
>>And what's going to process the output of those HDTV camcorders
>>that spew terabytes of crud onto holographic discs that could
>>swallow the whole of today's internet? Finding the useful data
>>in all that will take a lot of processing power.

If you ever bother to look :-< I think that a lot of the
data generated will end up as "write once, read never",
as people will start routinely recording masses of data to
which they never return.

Much like some blogs :->

>>(Or, putting a
>>different spin on Felger's bleak scenario, *we* may be the ones
>>generating most of the stuff that we then want to filter out.)
>
>
> :-)
>
> But note that I specifically excepted graphics as that seems to be more
> amenable to special instructions (e.g. SSE) or more use of the graphics
> processor for streaming operations.

It would make more (CPU cycle) sense to analyze the scene
when recorded, recording a much more structured data stream
than mere rasters.

(I'm not holding my breath on this ...)

Russell


Hank Oredson

Nov 25, 2005, 9:36:20 PM
"Russell Crook - Computer Systems - System Engineer" <russel...@sun.com>
wrote in message news:438755FB...@sun.com...

> If you ever bother to look :-< I think that a lot of the
> data generated will end up as "write once, read never",
> as people will start routinely recording masses of data to
> which they never return.

This seems to be happening already, with still pictures.

Every picture I have ever taken with my digital SLR
is right here on this hard drive. Sometimes it is fun to
go back and look at the raw pix (for example) from that
2001 trip down the California coast. The "good pix" are
copied to another directory, thus I end up with two
copies of them on the hard drive.

Will do the same if I ever get a camcorder ...
Storage is essentially free now.

> Much like some blogs :->
>
>>>(Or, putting a
>>>different spin on Felger's bleak scenario, *we* may be the ones
>>>generating most of the stuff that we then want to filter out.)
>>
>>
>> :-)
>>
>> But note that I specifically excepted graphics as that seems to be more
>> amenable to special instructions (e.g. SSE) or more use of the graphics
>> processor for streaming operations.
>
> It would make more (CPU cycle) sense to analyze the scene
> when recorded, recording a much more structured data stream
> than mere rasters.
>
> (I'm not holding my breath on this ...)
>
> Russell

--

... Hank

http://home.earthlink.net/~horedson
http://home.earthlink.net/~w0rli


Charles Richmond

Nov 26, 2005, 11:58:04 PM

Joe Seigh wrote:

> So these processor manufacturers all have these
> nice new multi-core cpu's but apart from market
> hyperbole (these cpu's will save the environment, etc...)
> I don't see them actually doing anything to exploit their
> potential. By "them", I mean them not us. We of course
> know what to do. But what's going on to get all the applications

> to start exploiting this? The magic parallelization fairy?
>

Perhaps the magic paralization fairy... ;-)


Andrew Reilly

Nov 27, 2005, 3:01:28 AM
On Fri, 25 Nov 2005 17:10:06 +0000, Stephen Fuld wrote:
> I don't think so. Desktop searching is probably either totally disk bound
> or "embarassingly parallel" such that no big advance in programming
> technology or expertise would be required.

Why would you expect that the bulk of applications with non-trivial
completion times (i.e., that make you wait, and consequently desire better
throughput) will all turn out to be such: I/O bound or embarrassingly
parallel (or at least fairly trivially parallel, if not actually
embarrassing)?

--
Andrew

Niels Jørgen Kruse

Nov 27, 2005, 7:59:27 AM
Charles Richmond <rich...@comcast.net> wrote:

> Perhaps the magic paralization fairy... ;-)

FWIW "paralization" doesn't show up in the Dashboard dictionary widget
(Oxford American Dictionaries), but "parallelization" does.

Stephen Fuld

Nov 27, 2005, 10:57:12 AM

"Andrew Reilly" <andrew-...@areilly.bpc-users.org> wrote in message
news:pan.2005.11.27...@areilly.bpc-users.org...

Well, first of all, I was responding to an earlier post that mentioned
desktop searching, not making a general statement. That said, except for
graphics manipulations (which, as I said earlier, seem amenable to tailored
instructions (e.g. SSE, etc.) or to better use of the streaming architecture
of graphics processors), and games, which have far better than average
programmers to take advantage of multiple cores, SMT, etc., I cannot think
of a high-use example of an application that meets your stated criteria.
I'm not saying that there aren't any, but if you can think of some, please
let us all know.

Joe Seigh

Nov 27, 2005, 11:23:19 AM
I think he meant paralyzation. Probably the result of corporate execs
having too many botox treatments.

Joe Seigh

Nov 27, 2005, 11:30:09 AM

Stephen Fuld wrote:

> I'm not saying that there aren't any, but if you can think of some, please
> let us all know.

That ubiquitous piece of bloat, the browser. Having the GUI events handled
by multiple event-handling threads would definitely speed things up. Of
course the GUI architecture would have to be completely rewritten, since
the current one is deadlock-prone enough as it is. But that's not a problem
of having threads. It's a problem of ignoring threads in the initial
architectural design.

Andrew Reilly

Nov 27, 2005, 4:46:24 PM
On Sun, 27 Nov 2005 16:57:12 +0000, Stephen Fuld wrote:
> Well, first of all, I was responding to an earlier post that mentioned
> desktop searching, not making a general statement.

Sure, but it was such a nice statement, I thought it worth using as a
launching board, to get back to the topic of the Subject.

> That said, except for
> graphics manipulations (which, as I said earlier seems amenable to tailored
> instructions (e.g. SSE, etc.) or to better use of the streaming architecture
> of graphics processors), and games, which have far better than average
> programmers to take advantage of multiple cores, SMT, etc., I cannot think
> of a high use example of an application that meets your stated criteria.
> I'm not saying that there aren't any, but if you can think of some, please
> let us all know.

Well, there aren't really many things on desktops that cause any
perceptible delay at all. Hence the Forrest Curve.

Of those that many people encounter, media manipulations of various sorts
and searching stand out, and both of those are observably trivially
parallelizable and/or I/O bound.

Joe Seigh has mentioned browsers, and by extension most GUIs. I suspect
that most perceived browser slowness is really I/O limitation (network
latency), but it's true that most existing "WIMP" systems have fairly
strong single-threading constraints in their historical implementation and
design, if not of necessity. High-end "visualization" and CAD systems
show that parallelization is useful for other heavy graphics problems, so
we should expect similar results for video games.

In the more specialized world of software development, compilation is
mostly parallelizable (and "make" handles that without much end-user
effort) up to the point where it becomes I/O bound.
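
(For example, with GNU make the whole parallel build is one flag:

    make -j4    # run up to four compile jobs concurrently

The dependency graph in the makefile is what makes this safe, which is why
it takes so little end-user effort.)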

What do you do that induces "wait" that you think isn't I/O bound or
trivially parallelizable?

--
Andrew

Stephen Fuld

Nov 27, 2005, 6:04:14 PM

"Andrew Reilly" <andrew-...@areilly.bpc-users.org> wrote in message
news:pan.2005.11.27....@areilly.bpc-users.org...

Well, for the things I do (which I certainly don't claim are exhaustive or
even representative), I said I could think of nothing. But fixing the
scheduling bugs that hurt responsiveness in Windows would be very nice. :-(

Chris Barts

Nov 30, 2005, 10:32:53 AM
David Magda <dmagda+tr...@ee.ryerson.ca> wrote on Tuesday 22 November
2005 17:43 in comp.arch <m2ek58d...@gandalf.local>:

> Ken Hagan <K.H...@thermoteknix.co.uk> writes:
>
>> David Magda wrote:
>>> Under Windows I'm sure Norton will chew up one of the cores. (I'm
>>> only half-kidding.)
>>
>> Lots of corporate customers have no choice but to run such filth, so
>> they will see an immediate benefit. (I'm not even half-kidding.)
>
> Yes, including where I work. Didn't seem to stop the latest Sober
> variants (nor did it prevent Sony's rootkit from being installed). :-/
>
> Oh well.
>

All antivirus software, at least as the term is understood now, is a fraud
and /cannot/ work as advertised, even in principle. Trying to ferret out
bad code using static methods is equivalent to solving the Halting Problem,
something I'm sure Norton and the rest of that mob understands but
willfully ignores as long as the ignorant crowds are willing to pay money
for the software and endless stream of updates.

The only way to prevent bad code from doing damage is through strict
run-time controls on resource usage. Practically speaking, this means
running the software in a good OS. Realistically speaking, this means
dumping or severely curtailing use of Microsoft products.

(Microsoft /could/ make a good OS. They just haven't yet.)

--
My address happens to be com (dot) gmail (at) usenet (plus) chbarts,
wardsback and translated.
It's in my header if you need a spoiler.



Bernd Paysan

Nov 30, 2005, 11:05:09 AM
Chris Barts wrote:
> All antivirus software, at least as the term is understood now, is a fraud
> and /cannot/ work as advertised, even in principle. Trying to ferret out
> bad code using static methods is equivalent to solving the Halting
> Problem, something I'm sure Norton and the rest of that mob understands
> but willfully ignores as long as the ignorant crowds are willing to pay
> money for the software and endless stream of updates.

The antivirus software works basically the same way the immune system of our
body works: It uses rather simple patterns against known evil-doers. The
pattern generation process is even less automated than in our body; but in
principle, malware that wants to spread out could be detected automatically
(by observing traffic - normal communication involves a high variety of
messages, each sent to few recipients, while worms and spam send basically
the same message to loads of others).
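
A toy sketch of that traffic heuristic in C: hash each outgoing message
body and count how often the same body is seen; a very high count for one
hash is worm- or spam-like. All names and the threshold are illustrative:

    #include <stdio.h>

    #define TABLE          1024
    #define FLAG_THRESHOLD 100   /* fan-out that looks worm-like */

    static unsigned fanout[TABLE];  /* sightings per message hash */

    static unsigned hash(const char *msg)
    {
        unsigned h = 5381;
        while (*msg)
            h = h * 33 + (unsigned char)*msg++;
        return h % TABLE;
    }

    /* Call once per (message, recipient) pair observed on the wire. */
    void observe(const char *msg_body)
    {
        if (++fanout[hash(msg_body)] == FLAG_THRESHOLD)
            printf("suspicious fan-out for message class %u\n",
                   hash(msg_body));
    }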

This won't work if the malware authors had techniques to completely remove
any trace of pattern on their code, but so far, they haven't learned that.
And no, I won't give them hints here ;-).

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/

pa...@at-cantab-dot.net

Nov 30, 2005, 11:36:09 AM
Bernd Paysan <bernd....@gmx.de> wrote:
> The antivirus software works basically the same way the immune system of
> our

I concur with the original poster: the whole concept of an antivirus
program is flawed and is a symptom of an insufficiently secure OS.

> This won't work if the malware authors had techniques to completely remove
> any trace of pattern on their code, but so far, they haven't learned that.
> And no, I won't give them hints here ;-).

They did learn; there were some really quite clever viruses around.
However, it became much, much easier to attack scripting vulnerabilities
in browsers and email clients, and to write seemingly useful trojans.
Things like polymorphically encrypted, checksum-preserving viruses are
incredibly hard to write.

-p
--
"What goes up must come down, ask any system administrator"
--------------------------------------------------------------------

Dan Koren

Nov 30, 2005, 8:12:15 PM
"Chris Barts" <puonegf...@tznvy.pbz> wrote in message
news:1133364...@corp.com...

>
> All antivirus software, at least as the term is understood now, is a fraud
> and /cannot/ work as advertised, even in principle. Trying to ferret out
> bad code using static methods is equivalent to solving the Halting
> Problem,
> something I'm sure Norton and the rest of that mob understands but....


You may be giving "Norton" and the
"mob" more credit than they deserve....

;-)

dk

Chris Barts

Nov 30, 2005, 10:52:27 PM
Dan Koren <dank...@yahoo.com> wrote on Wednesday 30 November 2005 18:12 in
comp.arch <438e4df0$1...@news.meer.net>:

> "Chris Barts" <puonegf...@tznvy.pbz> wrote in message
> news:1133364...@corp.com...
>>

>> Trying to ferret out bad code using static methods is equivalent to
>> solving the Halting Problem, something I'm sure Norton and the rest of
>> that mob understands but....
>
> You may be giving "Norton" and the
> "mob" more credit than they deserve....
>
> ;-)
>

Yea, Hanlon's Razor probably applies here as well as anywhere else. ;)

Chris Barts

Nov 30, 2005, 10:58:25 PM
Bernd Paysan <bernd....@gmx.de> wrote on Wednesday 30 November 2005 09:05
in comp.arch <l6v063-...@miriam.mikron.de>:

> The antivirus software works basically the same way the immune system of
> our body works: It uses rather simple patterns against known evil-doers.
> The pattern generation process is even less automated than in our body;
> but in principle, malware that wants to spread out could be detected
> automatically (by observing traffic - normal communication involves a high
> variety of messages, each sent to few recipients, while worms and spam
> send basically the same message to loads of others).

I know how the immune system works, and I think it is an ugly kludge as
well. I don't complain about it because patches don't seem to be applicable
to already existing systems. ;) The body doesn't have the luxury of
sandboxing proteins* the way software engineers can sandbox code, so the
proved methods of system security don't apply to meat machines.

*Unless they get a research grant and can buy a few scratch monkeys.

>
> This won't work if the malware authors had techniques to completely remove
> any trace of pattern on their code, but so far, they haven't learned that.
> And no, I won't give them hints here ;-).
>

They don't need hints: The current crop of crap is more than intelligent
enough to infect large numbers of well-connected machines. It doesn't need
to evolve any more than the flu does: there are plenty of morons with the
sanitation habits of two-year-olds to infect.

Ken Hagan

Dec 1, 2005, 5:16:42 AM
Chris Barts wrote:
>
> All antivirus software, at least as the term is understood now, is a fraud
> and /cannot/ work as advertised, even in principle. Trying to ferret out
> bad code using static methods is equivalent to solving the Halting Problem,
> something I'm sure Norton and the rest of that mob understands but
> willfully ignores as long as the ignorant crowds are willing to pay money
> for the software and endless stream of updates.

The real beauty of the AV business model is that as Microsoft patch up
their software (and fewer viruses get through to dumb users) it's the
AV vendors who get the credit (coz it must be due to that new "filter"
software running on our firewall, right?).

That's money for someone else's old rope!

Stefan Monnier

Dec 1, 2005, 11:39:57 AM
> The real beauty of the AV business model is that as Microsoft patch up
> their software (and fewer viruses get through to dumb users) it's the
> AV vendors who get the credit (coz it must be due to that new "filter"
> software running on our firewall, right?).

I think the AV's success is mostly due to the fact that it tells you "here,
I've detected and deactivated this virus", which gives you a warm&fuzzy
feeling of security.

Of course, it doesn't tell you that detection&deactivation was not needed
because it's an old virus that your system is already immune to (not because
of AV but because of actual bug fixes).

So it seems useful even when it's completely useless.


Stefan

Scott Moore

Dec 1, 2005, 3:27:46 PM
Joe Seigh wrote On 11/21/05 08:41,:

> So these processor manufacturers all have these
> nice new multi-core cpu's but apart from market
> hyperbole (these cpu's will save the environment, etc...)
> I don't see them actually doing anything to exploit their
> potential. By "them", I mean them not us. We of course
> know what to do. But what's going on to get all the applications
> to start exploiting this? The magic parallelization fairy?
>

It will allow you to compile without your Yahoo music service
skipping, or allow you to run virus checks without slowing your
system to a crawl, etc.

Yes, certainly applications are way behind. I notice my web
browser halts all of its open windows when it is waiting to
get to just one site.

I personally see that as a tooling issue, since many programmers
who have tried multithreaded applications realize it mostly
generates massive and difficult to understand bugs.

I am personally for supporting it at the language level (and I
do).

Brian Hurt

Dec 1, 2005, 3:57:13 PM
nos...@ab-katrinedal.dk (Niels Jørgen Kruse) writes:

>Charles Richmond <rich...@comcast.net> wrote:

>> Perhaps the magic paralization fairy... ;-)

>FWIW "paralization" doesn't show up in the Dashboard dictionary widget
>(Oxford American Dictionaries), but "parallelization" does.

It's an amalgamation of parallel and paralyze. The magic paralization
fairy allows you to deadlock eight cpus simultaneously. Boy, I wish I
was joking.

Brian


ram...@bigpond.net.au

Dec 1, 2005, 10:03:09 PM
Brian Hurt <bhurt@AUTO> writes:

> >FWIW "paralization" doesn't show up in the Dashboard dictionary widget
> >(Oxford American Dictionaries), but "parallelization" does.
>
> It's an amalgamation of parallel and paralyze. The magic paralization
> fairy allows you to deadlock eight cpus simultaneously.

Brian, that hurts... :-)

Terje Mathisen

Dec 2, 2005, 9:54:51 AM
Scott Moore wrote:

> Joe Seigh wrote On 11/21/05 08:41,:
>
>>So these processor manufacturers all have these
>>nice new multi-core cpu's but apart from market
>>hyperbole (these cpu's will save the environment, etc...)
>>I don't see them actually doing anything to exploit their
>>potential. By "them", I mean them not us. We of course
>>know what to do. But what's going on to get all the applications
>>to start exploiting this? The magic parallelization fairy?
>>
>
>
> It will allow you to compile without your yahoo music service
> skipping, or allow you to run virus checks without slowing your
> system to a crawl, etc.

Yes! It magically makes my (single/laptop) hard drive multi-threaded as
well! :-)


>
> Yes, certainly applications are way behind. I notice my web
> browser halts all of its open windows when it is waiting to
> get to just one site.

Usually when said site is loading some code (Java or even .js).


>
> I personally see that as a tooling issue, since many programmers
> who have tried multithreaded applications realize it mostly
> generates massive and difficult to understand bugs.
>
> I am personally for supporting it at the language level (and I
> do).

Good!

Terje
--
- <Terje.M...@hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"

Terje Mathisen

Dec 2, 2005, 10:02:32 AM
Brian Hurt wrote:

> It's an amalgamation of parallel and paralyze. The magic paralization
> fairy allows you to deadlock eight cpus simultaneously. Boy, I wish I
> was joking.

Ouch. We spent last night (literally: start at midnight) migrating a
failing Java-based document server application to a backup box. Due to
programming bug(s) it would deadlock a new thread each time a user
aborted out of a file transfer.

"How many threads do we have currently?"

70

"How many are deadlocked, while also locking all corresponding heap
buffers?"

68?

The problem was of course that as soon as a significant amount of
resources had been lost this way, response time went way up, which then
would significantly increase the chance of a given user giving up on yet
another transfer.

In another field this is called a chain reaction; the end result is ugly
and it generates a lot of hot air.

Edward Wolfgram

Dec 19, 2005, 5:34:26 PM

The problems you are describing have nothing to do with parallelization
and everything to do with intelligent OS scheduling and prioritization.
Multiuser operating systems deal with these issues every day, as do
designers of applications on multiuser systems.

There is nothing magic about multi-tasking systems. They are not hard to
code. If you stop buying bad software, companies will start writing good
software. :)

Edward Wolfgram

Jan Vorbrüggen

Dec 20, 2005, 3:31:48 AM
Scott Moore wrote:
> Yes, certainly applications are way behind. I notice my web
> browser halts all of its open windows when it is waiting to
> get to just one site.

My browser only does that when it's in some parts of the TCP/IP
stack (e.g., name resolution). So let's tell those guys in Redmond
to do their job properly, shall we? Good luck to all of us.

Jan
