
Unwelcome advice


Eric Grange

Jul 8, 2008, 10:38:44 AM
"developers should start thinking about tens, hundreds, and thousands of
cores now in their algorithmic development and deployment pipeline"

http://blogs.intel.com/research/2008/06/unwelcome_advice.php

Massive multicore already exists today in the form of stream processors
(GPUs), with budding languages to exploit them like CUDA

http://www.nvidia.com/object/cuda_home.html#

Of course CodeGear & Delphi will be busy enough catching up with Unicode
or 64-bit support for the next couple of releases... but any comments on
what could be in store for Delphi to go beyond multi-threading?

Eric

Marco van de Voort

Jul 8, 2008, 11:06:56 AM
On 2008-07-08, Eric Grange <egra...@SPAMglscene.org> wrote:

> Of course CodeGear & Delphi will be busy enough catching up with Unicode
> or 64bit support for the next couple of releases... but any comments on
> what could be in store for Delphi to go beyond multi-threading?

Let's start with the basics. How do you imagine such support, beyond the
one-liner of "multi-core support"?

What granularity would it use? Is it meant to multithread a GUI/DB system,
do a bit of SETI-style calculation, or handle complex intertwined
consumer-producer relations?

What do you currently do that one core can't hack, that could be divided
onto multiple cores? And can you see a generic system doing that division?

mamcx

Jul 8, 2008, 11:38:17 AM
They could start by providing a multi-core approach that is at least as
easy to use as pointers!

I have been developing for 10+ years now and can handle pointers easily...
but never thread development.

If you want an example of this, look at Erlang (maybe not the ultimate
solution, but it is incredible that in less than 10 lines of code you can
have a multi-core solution distributed across the globe).

Eric Grange

Jul 8, 2008, 11:58:23 AM
> Let's start with the basics. How do you imagine such support beyond the
> "multi core support" not even oneliner ?

There are many possible directions, from integrating CUDA-like ideas,
providing granularity specifiers in the language, or going for Erlang-like
constructs, to introducing and extending parallelization constructs,
libraries and concepts from the HPC world, etc... or something entirely
new/different.

> What granularity will it use, is it meant to multithread a GUI-db system, do
> a bit of SETI calculation or have complex intwined consumer-producer
> relations?

If you aim for hundreds of cores, current "multithreaded" strategies
become somewhat of an irrelevant concept, as they'll take you only up to
a few cores... with a lot of work.
Ideally granularity should be compiler-decided, with the help of
language constructs that would make coarse granularity harder to achieve
than fine-grained granularity (currently, it's the opposite).

Typically this would mean introducing non-predictability: the current
"for each" construct, for instance, is linear (there is a predictable
iteration order); a parallelizable "for each" would have to relax that
guarantee (and move away from the underlying iterator pattern).
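To make that concrete, here is a minimal sketch of what a relaxed "for
each" could look like (ParallelForEach, TForEachWorker and TItemProc are
made-up names, not existing RTL features; this is an illustration of the
idea, not a proposed design):

uses
  Classes;

type
  TItemProc = procedure(Value: Integer);
  TIntegerArray = array of Integer;

  // Each worker visits every WorkerCount-th item starting at its own
  // offset, so there is no defined overall visiting order -- exactly the
  // guarantee a parallel "for each" has to give up.
  TForEachWorker = class(TThread)
  private
    FItems: TIntegerArray;
    FProc: TItemProc;
    FStart, FStep: Integer;
  protected
    procedure Execute; override;
  public
    constructor Create(const Items: TIntegerArray; Proc: TItemProc;
      Start, Step: Integer);
  end;

constructor TForEachWorker.Create(const Items: TIntegerArray;
  Proc: TItemProc; Start, Step: Integer);
begin
  FItems := Items;
  FProc := Proc;
  FStart := Start;
  FStep := Step;
  inherited Create(False); // start the thread once the fields are set
end;

procedure TForEachWorker.Execute;
var
  i: Integer;
begin
  i := FStart;
  while i <= High(FItems) do
  begin
    FProc(FItems[i]); // Proc must not depend on ordering or shared state
    Inc(i, FStep);
  end;
end;

procedure ParallelForEach(const Items: TIntegerArray; Proc: TItemProc;
  WorkerCount: Integer);
var
  Workers: array of TForEachWorker;
  i: Integer;
begin
  SetLength(Workers, WorkerCount);
  for i := 0 to WorkerCount - 1 do
    Workers[i] := TForEachWorker.Create(Items, Proc, i, WorkerCount);
  for i := 0 to WorkerCount - 1 do
  begin
    Workers[i].WaitFor;
    Workers[i].Free;
  end;
end;

The point being that Proc may be called from any worker in any order, so
the body must not rely on iteration order or on unsynchronized shared
state.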

> What do you currently do that one core can't hack that can be divided on to
> multiple cores ? And can you see a generic system doing that division?

If I knew how to do such a generic system, I wouldn't tell you, I would
sell it to the highest bidder and become filthy rich ;)

The whole point is that we'll be increasingly faced with CPUs comprised
of zillions of cores, with a high probability that each individual core
will offer lower performance than today's cores (like Intel's latest
Atom processor does).

Eric

willr

Jul 8, 2008, 12:13:32 PM

I guess that some companies are working hard on multi-core support --
with good reason.

http://synopsys.mediaroom.com/index.php?s=43&item=547
http://synopsys.mediaroom.com/index.php?s=43&item=546
FWIW to you...

Hopefully we will see 64-bit and multi-core capability before too long.


--
Will R
PMC Consulting

James K Smith

Jul 8, 2008, 1:37:17 PM
> What do you currently do that one core can't hack that can be divided onto
> multiple cores?

Any programming other than that for the desktop. Which is why if you're
doing this type of programming currently, you're not using Delphi or FPC.

> And can you see a generic system doing that division?

Of course not, but with the capability there, you will write your code
differently anyway.

James


Bruce McGee

Jul 8, 2008, 1:58:16 PM
James K Smith wrote:

> Any programming other than that for the desktop. Which is why if
> you're doing this type of programming currently, you're not using
> Delphi or FPC.

Delphi (and presumably FPC) works just fine for server applications.

--
Regards,
Bruce McGee
Glooscap Software

Dan Barclay

Jul 8, 2008, 3:53:08 PM
Eric,

Massively parallel operations are the key to decent use of
massive multicore machines.

IMHO, you start with the problem and determine how to solve
it. SOME problems will benefit significantly from massively
parallel operation, others not at all.

So, is there a problem you're trying to solve with Delphi
that requires utilization of massively parallel operations?
Google/Yahoo/etc servers come to mind.

Intel needs to worry about it. Google and Yahoo need to
worry about it. I hope CodeGear keeps its ear to the
ground with respect to what customers are doing but doesn't
get too distracted with it.

Dan


"Eric Grange" <egra...@SPAMglscene.org> wrote in message
news:48737be5$1...@newsgroups.borland.com...

Lee Jenkins

Jul 8, 2008, 8:47:18 PM

"3# And the question of the Borland Leadership Team was this: In what respect
does your input enhance the four phase application lifecycle management delivery
system threshold holistic architecture initiative deployment validation baseline
best practice success driver?"

Priceless :)

--
Warm Regards,

Lee

Lee Jenkins

Jul 8, 2008, 8:53:21 PM

Oops, sorry folks. This should have been for the "Sons of Kahn" post. :)

--
Warm Regards,

Lee

Hannes Danzl[NDD]

Jul 8, 2008, 9:06:20 PM
> Intel needs to worry about it. Google and Yahoo need to worry about it. I
> hope Codegear keeps their ear to the ground with respect to what customers
> are doing but doesn't get too distracted with it.

Frankly: I think you're wrong. With 16-core mass-market CPUs on the horizon,
quad cores in normal consumer hands and practically every new machine already
at least a dual core, there's no time to lose in getting on that wagon. Have
you ever tried to implement an application leveraging these cores? You might
get away with using one core at the moment, but the trend is going toward
MUCH less powerful cores, but many many more of them. What are you going to
do 3 years from now when the new processors run 128 or 256 cores, each as
powerful as a current low-end machine, but you can only use one of the cores
properly because the language you use doesn't support parallelization of even
the most basic tasks?

--

James K Smith

Jul 8, 2008, 9:27:24 PM
But they don't take advantage of multicore, which is even more valuable wrt
server apps, and server apps are the future.

James


James K Smith

Jul 8, 2008, 9:29:56 PM
> What are you going to do 3 years from now when the new processors run 128
> or 256 cores, each as powerful as a current low-end machine, but you can
> only use one of the cores properly because the language you use doesn't
> support parallelization of even the most basic tasks?

Use one of the cores, and lease out the others to apps which can use
multicore, across disparate machines!


Roger Lascelles

Jul 8, 2008, 10:36:42 PM
"Hannes Danzl[NDD]" <han...@nexusdb.dbnexus.com> wrote in message
news:xn0fsh4g9...@newsgroups.borland.com...

>> Intel needs to worry about it. Google and Yahoo need to worry about it.
>> I
>> hope Codegear keeps their ear to the ground with respect to what
>> customers
>> are doing but doesn't get too distracted with it.
>
> Frankly: I think you're wrong. with 16 core mass market cpus on the
> horizon,
> quad cores in normal consumer hands and practivally every new machine
> already
> at least a dual core, there's no time to loose to get on that wagon.

I'm not convinced that parallelization is relevant to most present day
desktop applications. Most business type native desktop apps are close to
instantaneous already. I am happy to have 16 cores, but I suspect that I
have no work for them. When I compare my 3.2GHz P4 and Q6600, I am hard
pressed to tell the difference. So while I do believe we will find uses for
more computing power, I don't yet see those applications on my PC. Perhaps
better speech recognition? Do you have any suggestions? Perhaps your
speciality can use parallel tasks.

I see no point in parallelizing the tiny, trivial pieces of most apps,
which complete in microseconds. As you divide finer and finer, the
overhead of creating and managing the threads exceeds the work done by the
threads. That leaves the less common computation-intensive tasks as
candidates for parallelization. The classics are image processing and MP3
encoding.

Any statement by Intel is more than just a comment on the science of chip
making - it is also a means to influence the market and advance the fortunes
of Intel. Intel's problem is that they can make more cores than most
software can use, and their business depends on obsoleting last year's
product. We don't have to repeat everything we read in the tech press -
especially when you consider how silly most of the popular tech journalists
are. Microsoft has often told us what the future holds - and has often
been wrong.

I suspect that single CPU throughput will continue to increase. Graphene
transistors for example. A single core of the top of the line Intel
processors is now much faster than even a couple of years ago.

The issue of power consumption does concern me. A modern home PC with a
graphics processor is a room heater. Possibly parallelization can be used
to reduce power.

Faster storage would make more difference to me than more cores - hard disk
spin-up time and access time are the delays I notice.

By all means, add parallel processing features to your favourite language.
It may be of use to some of us, but I don't see it revolutionising ordinary
desktop software.

Roger Lascelles

TJC Support

Jul 8, 2008, 11:05:09 PM
"Roger Lascelles" <relATaantDOTcomDOTau> wrote in message
news:48742450$1...@newsgroups.borland.com...

>
> I'm not convinced that parallelization is relevant to most present day
> desktop applications. Most business type native desktop apps are close to
> instantaneous already.

Hi Roger,

I agree with you on this point. There will certainly be _some_ applications
that will benefit tremendously from parallelization, but I suspect most
desktop apps won't. One of the guys I used to work with once told me that
any effort to optimize software for him would be wasted once the compute
time was shorter than the time it took him to go outside and smoke a
cigarette. While that's far too slow for most folks these days, his point is
well taken. My main app takes _maybe_ a second to do a complex calculation,
and that would probably be when you happen to click the go button right when
the OS wanted to do something. There's not much point in trying to make it
faster than that. The users surely wouldn't notice.

Cheers,
Van


TJC Support

Jul 8, 2008, 11:06:35 PM
"Lee Jenkins" <l...@nospam.net> wrote in message
news:48740bfe$1...@newsgroups.borland.com...

> Lee Jenkins wrote:
>>
>> "3# And the question of the Borland Leadership Team was this: In what
>> respect does your input enhance the four phase application lifecycle
>> management delivery system threshold holistic architecture initiative
>> deployment validation baseline best practice success driver?"
>>
>> Priceless :)
>>
>
> Oops, sorry folks. This should have been for the "Sons of Kahn" post. :)

Ah, but it fits like a glove. :^)

Cheers,
Van


Dan Barclay

Jul 9, 2008, 12:21:38 AM

"Hannes Danzl[NDD]" <han...@nexusdb.dbnexus.com> wrote in
message news:xn0fsh4g9...@newsgroups.borland.com...
>> Intel needs to worry about it. Google and Yahoo need to
>> worry about it. I
>> hope Codegear keeps their ear to the ground with respect
>> to what customers
>> are doing but doesn't get too distracted with it.
>
> Frankly: I think you're wrong. with 16 core mass market
> cpus on the horizon,
> quad cores in normal consumer hands and practivally every
> new machine already
> at least a dual core, there's no time to loose to get on
> that wagon.

<shrug> I don't get on wagons without a reason. Just
because it's passing by is no reason to jump on. It helps a
lot if it's going your way.

> Have you
> ever tried to implement an application leveraging these
> cores?

Nope. I've never had reason to. As a result, my
applications work as well on a single core as on multiple
cores. Since they spend most of their time responding to
user input, they would benefit very little from that type of
delivery.

> you might get
> away with using one core at the moment, but the trend is
> going to MUCH less
> powerful cores, but many many more of them.

Cite? If you think there will be a trend toward less
powerful cores (even a smidgen less powerful) you're going
to be wrong. The machines will *not* lose capability, they
will gain capability. At the very least you'll find
microcode that emulates a single fast core using multiple
cores. Hardware vendors will not be able to sell chips for
general-purpose use that have less capability.

> What are you going to do in 3
> years from now when the new processors run 128 or 256
> cores each as powerful
> as a current low level machine but you can only use one of
> the cores properly
> because the language you use doesn't support parallization
> of even the most
> basic tasks?

I'm going to be delivering *solutions* to my customers, not
smoke and mirrors. My customers have a funny habit of
focusing on their own problems and demanding solutions for
them, not on the latest fad.

Dan


Hannes Danzl[NDD]

Jul 9, 2008, 12:22:56 AM
> The issue of power consumption does concern me. A modern home PC with a
> graphics processor is a room heater. Possibly parallelization can be used
> to reduce power.

That's exactly where my point was. Parallelization is currently looking to
take over from making single cores faster and faster and thus more complex.
It looks like cores are going to become simpler, probably more specialized
(e.g. having 16 cores specialized for floating point, 16 cores specialized
for integer, etc). And that concerns me, because to run an application on
such a beast it needs to follow certain architectural guidelines and setups.



> By all means, add parallel processing features to your favourite language.
> It may be of use to some of us, but I don't see it revolutionising ordinary
> desktop software.

Well, let's talk about that 5 years from now when Office 2013 comes with voice
recognition, a 3D gesture UI, etc. ... increase the power of the hardware and
the software will find a way to use it. Don't worry about that.

--

Eric Grange

Jul 9, 2008, 4:36:48 AM
> Well, we talk about that in 5 years from now when office 2013 comes with voice
> recognition, 3D gesture UI, etc ... increase the power of the hardware and the
> software will find a way to use it. Don't worry about that.

Indeed. And add to that the "getting simpler" aspect of cores, and
single-threaded apps will not just be running comparatively slower,
they'll actually be running slower in absolute terms.

Eric

James Smith

Jul 9, 2008, 12:03:43 PM
> The two most popular models for this, today, are transactional memory
> and lightweight processes with message passing. The former, it seems,
> would work just fine with Delphi. The latter would probably work fine
> with Delphi, but less fine with some of the VCL. Not an insurmountable
> problem, however.

The issue with either is memory isolation/protection, and additionally, you
can't safely run a system like this to its full potential without some kind
of garbage collection.

> Having immutable types, and the like, is certainly convenient for
> concurrency, but it's not a requirement. Although Haskell has design
> characteristics which are theoretically advantageous to concurrency, in
> practice its multithreading support is a work in progress.

Not a requirement from the desktop POV, but from a service provider point of
view, you would at least have to have the option for immutable types. I
don't think Haskell itself will ever be mainstream, tho' its functionality
will leak into other languages. In terms of just language implementation and
library that support concurrency, Erlang still has to be the standard model
to measure against.

At any rate, I don't see any commercial value in pursuing any of this on the
Delphi platform, so I don't think you'll see it. It's just too much of a
massive undertaking to pull it off. Of course, you won't hear a peep from
CG developers regarding any of this because it's not conducive to
maintaining the life that's left in the product.

Sorry, I became dyslexic when mentioning k/qdb+. It should be q/kdb+.

James


Craig Stuntz [TeamB]

Jul 9, 2008, 12:07:28 PM
James Smith wrote:

> The issue with either is memory isolation/protection, and
> additionally, you can't safely run a system like this to it's full
> potential without some kind of garbage collection either.

Although I'm generally a GC fan, I just don't agree with this.

> Not a requirement from the desktop POV, but from a service provider
> point of view, you would at least have to have the option for
> immutable types.

An option is good, but you don't have to use it. Erlang, for example,
has mutable types, and works fine as a service provider.

--
Craig Stuntz [TeamB] · Vertex Systems Corp. · Columbus, OH
Delphi/InterBase Weblog : http://blogs.teamb.com/craigstuntz
All the great TeamB service you've come to expect plus (New!)
Irish Tin Whistle tips: http://learningtowhistle.blogspot.com

mamcx

Jul 9, 2008, 12:13:11 PM
I don't think it is that hard, if you go the messaging-with-light-threads
route.

The programming model can be as simple as the GET/POST/Session model of HTTP:

You GET from a light thread/process.
You POST to it.
You keep the data in a session (in-memory storage).

The model exists. It is successful. It works on multi-core and across the
globe. It is scalable. Easy to grasp. And I don't see it as that hard; the
only question is how to express this inline, i.e.:

// Mark this as a light thread?
procedure Scalable(From: Integer; Post: TPostData; var Session: TSession);
begin
end;

procedure NormalProcedure;
begin
  Connect(GetScalable, 'Scalable');
  Scalable(GetId, [Data1, Data2]); // Session is passed internally from the pool?
end;

procedure GetScalable(From: Integer; Post: TPostData);
begin
  // Get results here
end;

Or something like that. The only magic is running Scalable in another
thread/process transparently.

Henrick Hellström

Jul 9, 2008, 12:56:30 PM
James Smith wrote:
> Not a requirement from the desktop POV, but from a service provider point of
> view, you would at least have to have the option for immutable types.

You do.

type
  tImmutable = class
  private
    fData: string;
  public
    constructor Create(aData: string);
    property Data: string read fData;
  end;

The above class is a reference type that cannot be modified after
construction, which is the defining characteristic of an immutable type.
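For completeness, the corresponding implementation section and a usage note
(just a sketch):

constructor tImmutable.Create(aData: string);
begin
  inherited Create;
  fData := aData; // the only write ever performed on fData
end;

// Once constructed, an instance can be shared freely between threads
// without locking, since no code path can change Data afterwards:
//   Msg := tImmutable.Create('hello');
//   WriteLn(Msg.Data);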

Eric Grange

Jul 9, 2008, 4:43:03 AM
> Cite? If you think there will be a trend toward less
> powerful cores (even a smigion less powerful) you're going
> to be wrong.

Follow the link in the original post, and the links given in the blog to
other Intel research entries.

> The machines will *not* lose capability, they
> will gain capability.

As far as the trend goes... ever heard of the new Intel Atom? This one runs
significantly slower than current desktop processors.

> At the very least you'll find microcode that emulates a single fast
> core using multiple cores.

Read up on ILP in current CPUs; we've gotten pretty much as far as we can
down that route. There can still be improvements, sure, but they'll not
be orders of magnitude. The law of diminishing returns has been hit; it's
a whole new game that's starting with massive multicores, one that'll
require different ways of programming.

> Hardware vendors will not be able to sell chips for
> general purpose use that have less capability.

Intel, AMD, VIA & nVidia tend to disagree with you; Atom is selling like
hotcakes, for instance, and all the others already are, or are rushing to
be, on that market.

Eric

Steve Thackery

Jul 9, 2008, 4:47:25 AM
Let's be quite clear about this: Intel and AMD are pushing multi-core as
"the future" because they've completely failed in their original quest to
keep pushing up clock speeds. They have hit a wall at 4GHz.

So, in order to stay in business they're pushing something they CAN make
with ease - multicore CPUs.

As has been known for decades, there are certain computing tasks that
benefit enormously from mass parallelisation. But there are countless more
that don't and never will (there's plenty of stuff on the web from top
computing science academics if you want to read up on it).

So let's get things into perspective: there is nothing inherently 'better',
or 'right' or 'inevitable' about massively parallel processing for most
tasks. This current trend/bandwagon is MARKETING-DRIVEN by Intel, who are
desperate for you to keep buying their chips, despite being unable to
deliver what you actually want, which is ever faster throughput per core.

The tail is wagging the dog.

SteveT

James Smith

Jul 9, 2008, 1:29:43 PM
Basic message passing is for sure easy, but it's the memory isolation
between threads that would be the issue. And it's the only way to enforce
pure message passing.

James


James Smith

Jul 9, 2008, 1:26:58 PM
A good example, for sure, but I'd want a code generator to write all that
code for me. Like finding every "x: string, readonly" and turning it into an
object.

James


Bruce McGee

Jul 9, 2008, 6:53:39 AM
> James K Smith wrote:
>
> > Any programming other than that for the desktop. Which is why if
> > you're doing this type of programming currently, you're not using
> > Delphi or FPC.

James K Smith wrote:

> But they don't take advantage of multicore, which is even more
> valuable wrt server apps, and server apps are the future.

Ah. I thought you meant server applications when you said "this type
of programming".

As for taking advantage of multiple cores (beyond the thread support we
can already take advantage of), we'll have to see what the future
brings.

http://blogs.codegear.com/abauer/category/parallel-programming

Roger Lascelles

Jul 9, 2008, 7:38:16 AM
"Eric Grange" <egra...@SPAMglscene.org> wrote in message
news:48747890$1...@newsgroups.borland.com...

> they'll actually be running slower.

I doubt this prophecy. I can't see us sending all the desktop app
programmers back to school so we can run 100 core CPUs at 10MHz. No company
is going to worry about parallelizing and debugging some in-house app with a
limited audience when cheap hardware will scream through it anyway.

While I suspect that straight-line grunt will always be in demand, I think
there may be areas where we can save power with parallelization in the
background - graphics processors, drivers, OS tasks, web servers - software
with mass application. The large investment required for these can be
repaid, and the energy recouped, over millions of users.
So perhaps we will see multiple processors of different speeds cooperating.

Finally, never forget that the physics is nowhere near fully exploited yet.
Intel will sell you more cores if it suits them, but the moment the speed and
speed-power horizons shift, they will be ramping up the speed as fast as
they can go - or lose business.

Just remember that forecasts (even your own) are not 100% accurate and are
often wrong in the details.

Roger Lascelles

Eric Grange

Jul 9, 2008, 7:47:11 AM
> Let's be quite clear about this: Intel and AMD are pushing multi-core as
> "the future" because they've completely failed in their original quest
> to keep pushing up clock speeds. They have hit a wall at 4GHz.

Indeed, but they were bound to hit a wall sooner or later, if only
because electrons can only move so fast, and things can only get so
small. It was only ever a matter of "when" rather than "if".

For decades, developers had a free performance lunch by just waiting for
hardware manufacturers to work out their magic; now that era is looking
to be at an end, and there won't be any free meals anymore.

Yet nowadays it appears devs are so awfully unprepared for what's coming
that the "current stuff is good enough" excuse is thrown all around, but
IMO that's just a gut reaction. We're more likely at the dawn of a new
information revolution: something new and vastly more powerful is
coming, and few know how to make any use of it... for now!

Eric

Eric Grange

Jul 9, 2008, 7:59:00 AM
> I doubt this prophecy.

Well, it's not a prophecy, it's an observation. New multi-core iterations
run slower per core than their single-core ancestors.
Recent CPU designs like Atom or Cell are even a whole lot slower per
core.

> I can't see us sending all the desktop app programmers back to school
> so we can run 100 core CPUs at 10MHz.

Why? It's just another technological evolution, just like what happened with
steam, combustion engines, electricity, electronics...
For a few decades IT completely transformed a wide variety of jobs in
other industries; accounting, sales, logistics, manufacturing,
entertainment, etc. underwent massive changes.

I don't see why IT would be shielded from change in the way it does
things.

> No company is going to worry about parallelizing and debugging some
> in-house app with a limited audience when cheap hardware will scream
> through it anyway.

That's because the current generation of development tools is
inadequate for the new multicore tasks at hand.

> Finally, never forget that the physics is nowhere fully expoited yet.

They're not, but even if fully exploited, the increase in processing
performance doesn't scale to what massive multicore can offer.
(and massive multicore would benefit from such progress too)

> Just remember that forecasts (even your own) are not 100% accurate and
> are often wrong in the details.

Indeed, but so far arguments against going multicore revolve around "we
don't know how to do that" or "current stuff is good enough".
Believing those arguments are going to stay true forever is just asking
for a good technological whipping in the end IMO ;)

Eric

I.P. Nichols

Jul 9, 2008, 8:56:22 AM
"Eric Grange" wrote:
>
> Yet nowadays it appears devs are so awfully unprepared for what's coming
> that the "current stuff is good enough" excuse is thrown all around, but
> IMO that's just a gut reaction. We're more likely at the dawn of a new
> information revolution: something new and extremely more powerful is
> coming, and few know how to make any use of it... for now!

The hardware is already coming and like an avalanche I might add, both in
capability and lower cost. At this point usable developer tools are more
likely to come from Microsoft than CodeGear. You might find both of these
blogs interesting. :)

http://blogs.msdn.com/pfxteam/default.aspx
http://blogs.msdn.com/nativeconcurrency/


marc hoffman

Jul 9, 2008, 9:13:42 AM
Roger,

> I doubt this prophecy. I can't see us sending all the desktop app
> programmers back to school so we can run 100 core CPUs at 10MHz. No
> company is going to worry about parallelizing and debugging some
> in-house app with a limited audience when cheap hardware will scream
> through it anyway.

hmm. 10 years ago, a couple hundred MHz seemed excessive for just about
anything you'd wanna do, yet today we have several GHz per core, and
that doesn't seem enough.

arguing that a single 4GHz core will always be sufficient doesn't seem
much different than making the same argument for a 100MHz one, 10 years
back.

the only difference is that 10 years ago, you had the luxury of
believing that, coz as CPUs did get faster, you directly reaped (most
of) the benefits, even though you didn't plan for it. iow, you didn't pay
for your ignorance of future demands on processing speed. this is no
longer the case, and making the same assumption about "never needing
more speed" now will leave you stuck, 10 years from now, with nowhere
to scale.

--
marc hoffman

RemObjects Software
The Infrastructure Company
http://www.remobjects.com

Dean Hill

Jul 9, 2008, 10:29:43 AM
> I'm not convinced that parallelization is relevant to most present day
> desktop applications. Most business type native desktop apps are close to
> instantaneous already. I am happy to have 16 cores, but I suspect that

It's not as simple as that. There are already a number of day-to-day
aspects of business software that can benefit from multiple cores, such
as graphics processing and the embedded database that Delphi developers
add to their small desktop applications. Even printing in a background
thread can substantially benefit a business application. Modern business
software will include special functionality such as biometric checks more
and more in the future. Right now, IMO, the market for Delphi is not a
growing one. The majority of customers are using it for DB apps, which
can be done just as easily in VS 2008 with .NET. MS already has a
parallel processing library.
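As a small illustration of the background-thread point (a sketch;
TPrintJobThread is a made-up name):

type
  TPrintJobThread = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TPrintJobThread.Execute;
begin
  // Long-running report generation / printing happens here, off the GUI
  // thread. Any VCL access (progress updates, etc.) must go through
  // Synchronize, since the VCL is not thread-safe.
end;

// Fired from the UI, then forgotten:
//   with TPrintJobThread.Create(True) do
//   begin
//     FreeOnTerminate := True;
//     Resume;
//   end;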

My personal feeling is that Delphi will need to change to pick up more
and more niche markets. Win32, Win64, Parallel Processing, X-Platform
compilation, etc may not be sufficient on their own to make a market but
combine them and you have a sizable chunk.

---
Dean

--- posted by geoForum on http://delphi.newswhat.com

James Smith

Jul 9, 2008, 10:39:25 AM
> By all means, add parallel processing features to your favourite language.
> It may be of use to some of us, but I don't see it revolutionising
> ordinary desktop software.

Roger, agreed, if desktop is where you want to be. But the business case for
desktop software is dying, and new business is in supporting apps which do
nothing but expose services and functoids, and can consume services and
functoids from other apps. This is where multicore will shine.

James


Craig Stuntz [TeamB]

Jul 9, 2008, 10:55:09 AM
James Smith wrote:

> At any rate, the much larger issue beyond threads is a generalized
> concurrency model, and to support safe concurrency and multicore
> would be a massive undertaking that is probably not suitable for
> Delphi or FPC architectures.

The two most popular models for this, today, are transactional memory
and lightweight processes with message passing. The former, it seems,
would work just fine with Delphi. The latter would probably work fine
with Delphi, but less fine with some of the VCL. Not an insurmountable
problem, however.
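
As a rough illustration of the second model (a sketch only; TMessageQueue
is a made-up name, not a proposed API), workers share nothing and
communicate solely by posting messages to each other's mailboxes:

uses
  Classes, SyncObjs;

type
  // Minimal thread-safe mailbox; each lightweight worker owns one.
  TMessageQueue = class
  private
    FLock: TCriticalSection;
    FItems: TStringList;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Post(const Msg: string);
    function TryTake(out Msg: string): Boolean;
  end;

constructor TMessageQueue.Create;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FItems := TStringList.Create;
end;

destructor TMessageQueue.Destroy;
begin
  FItems.Free;
  FLock.Free;
  inherited;
end;

procedure TMessageQueue.Post(const Msg: string);
begin
  FLock.Acquire;
  try
    FItems.Add(Msg);
  finally
    FLock.Release;
  end;
end;

function TMessageQueue.TryTake(out Msg: string): Boolean;
begin
  FLock.Acquire;
  try
    Result := FItems.Count > 0;
    if Result then
    begin
      Msg := FItems[0];
      FItems.Delete(0);
    end;
  finally
    FLock.Release;
  end;
end;

A worker thread would then poll (or block on) its own queue in Execute and
never touch another worker's data directly; that isolation is what makes
the model attractive once the core count grows.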

Having immutable types, and the like, is certainly convenient for
concurrency, but it's not a requirement. Although Haskell has design
characteristics which are theoretically advantageous to concurrency, in
practice its multithreading support is a work in progress.

--

Craig Stuntz [TeamB] · Vertex Systems Corp. · Columbus, OH
Delphi/InterBase Weblog : http://blogs.teamb.com/craigstuntz

Borland newsgroup denizen Sergio González has a new CD of
Irish music out, and it's good: http://tinyurl.com/7hgfr

James Smith

Jul 9, 2008, 10:48:49 AM
> Well, we talk about that in 5 years from now when office 2013 comes with
> voice
> recognition, 3D gesture UI, etc ... increase the power of the hardware and
> the
> software will find a way to use it. Don't worry about that.

I think even more pertinent is that the app won't be on your machine but
will be a service, and will still be doing all that stuff as well.

James


James Smith

Jul 9, 2008, 10:45:33 AM
> As for taking advantage of multiple cores (beyond the thread support we
> can already take advantage of), we'll have to see what the future
> brings.

Already unfolding with languages like Erlang and k/qdb+. At any rate, the
much larger issue beyond threads is a generalized concurrency model, and to
support safe concurrency and multicore would be a massive undertaking that
is probably not suitable for Delphi or FPC architectures.

James


TJC Support

Jul 9, 2008, 11:23:49 AM
"James Smith" <jks...@grid-sky.com> wrote in message
news:4874cd9c$1...@newsgroups.borland.com...

>
> But the business case for desktop software is dying

That's possibly true for enterprise-type computing, but I don't believe that
can be said in general. In my problem domain, the focus is on _not_ being
connected, and everything is a desktop app. Connectivity is only used to
update the apps, and they don't talk to each other. It is potentially
interesting to look at making my particular app more modular by separating
pieces of it (user interface, data, calculations), but that is interesting
from the standpoint of achieving FAA approval rather than a functional need.

I think in the end, it depends on the particular application and problem
domain as to whether parallelization makes sense. One size definitely does
not fit all.

Cheers,
Van


Dan Downs

Jul 9, 2008, 7:37:50 PM

Dan Downs

Jul 9, 2008, 7:57:45 PM
> As for Adam and cell, I don't think they apply here. Cell isn't exactly

errr.... Atom not Adam.

DD

Dan Downs

Jul 9, 2008, 7:56:27 PM
>> I doubt this prophecy.
>
> Well it's not a prophecy it's an observation. New multi-core iterations
> run slower per-core than their single-core ancestors.
> Recent new CPU designs like Atom or Cell are even a whole lot slower per
> core.

Yes and no. Multicore came about because of power leakage and heat, which
is being solved, but not fast enough to warrant not doing multicore. So
the original dual and quad cores were clocked a lot lower for the same
reason, but you can now get a 3.2GHz quad core with the same or lower power
requirements as the higher-clocked P4s. Depending on how much more ILP
they could tweak out of a CPU, the transistor budget might be better
spent on additional cores now anyway.

As for Adam and Cell, I don't think they apply here. Cell isn't exactly
slow by any means: the PS3 was released in Nov '06 in Japan with a main CPU
and 7 functional SPEs at 3.2GHz, an impressive clock rate for the time.
Programming for the thing is a whole different matter though. Adam was
made to start competing with ARM in the low-power market, so its goals
aren't really in line with desktop or server use.

DD

Adem

Jul 10, 2008, 2:47:52 AM
Eric Grange wrote:

> but any comments on what could be in store for Delphi to go
> beyond multi-threading?

Actually, first, I'd rather have some/most/all of those cores be
task-reprogrammable on the fly.

So that I could instantly convert a core or two to handle, say,
RAID; some others to handle ToE (TCP/IP offload engine); others,
(de)compression engines; yet others some other IO (music, audio, etc.).

Eric Grange

Jul 10, 2008, 4:12:57 AM
> main cpu and 7 functional SPEs at 3.2ghz, impressive clock rate for the time.

Clock rate isn't everything: the CPU & SPEs are in-order, and have taken
a variety of other complexity shortcuts cache- and branch-prediction-wise,
which means that their 3.2GHz isn't comparable to the 3.2GHz of a Core
2 or Athlon in individual terms.

Atom is similar: the clock frequency is high, but the thing is much
simpler internally; that's why in practice it delivers performance
similar to a Pentium 3 clocked 2 to 4 times lower.

Atom actually does matter; you should read up on it in Intel's docs.
It's low-power and small and currently sold for mobile, but it's also
pretty much a blueprint of the kind of cores you'll find in the massive
multicore chips that are coming, i.e. lots and lots of simpler/smaller
cores than what is found in current chips.

Eric

Rene Tschaggelar

Jul 10, 2008, 9:43:17 AM
Dan Barclay wrote:
> Eric,
>
> Massively parallel operations are the key to decent use of
> massive multicore machines.
>
> IMHO, you start with the problem and determine how to solve
> it. SOME problems will benefit significantly from massively
> parallel operation, others not at all.
>
> So, is there a problem you're trying to solve with Delphi
> that requires utilization of massively parallel operations?
> Google/Yahoo/etc servers come to mind.

Na. That is peanut stuff. I'd think about computational
electrodynamics, computational fluid dynamics... there is a ton of
computational power to be dumped there.

Rene

Dan Downs

Jul 10, 2008, 11:33:34 AM
Eric Grange wrote:
> > main cpu and 7 functional SPEs at 3.2ghz, impressive clock rate for
> the time.
>
> Clock rate isn't everything, the CPU & SPE are in-order, and have taken
> a variety of other complexity shortcuts cache & branch-prediction-wise,
> which means that their 3.2 GHz isn't comparable to the 3.2GHz of a Core
> 2 or Athlon in individual terms.

Quite right, it's really a different kind of CPU than a C2D or Athlon,
but at up to 12.8 GFLOPS per SPE it's no slouch. Initial testing had it
clocked at 4.5GHz but the power draw was crazy - two years ago. To me it's
still an impressive achievement. IIRC the SPEs are coded in their own
separate ISA in asm (not sure if they have a C compiler for them or not),
so you're coding/tuning a lot closer to the hardware. I don't see it
becoming a desktop CPU though, because it's really geared for heavy FP,
not integer business apps.

http://www-03.ibm.com/technology/cell/pdf/PowerXCell_PB_7May2008_pub.pdf


> Atom is similar: the clock frequency are high, but the thing is much
> simpler internally, that's why in practice it delivers similar
> performance to Pentium3 clocked 2 to 4 times lower.
>
> Atom actually does matter, you should read on about it in Intel docs,
> it's low power and small and currently sold for mobile, but it's also
> pretty much a blueprint of the kind of cores you'll find in the massive
> multicore chips that are coming, ie. lots and lots of simpler/smaller
> cores that what is found in current chips.

I think it'll become more and more important in ~3 years. Currently, if
they really wanted to do high core count and low power, the answer is
ARM. Intel of course won't go that route since they want x86 everywhere.
Unfortunately the main problem is the x86 ISA; for all its quirks and
pros/cons, there's still a bunch of hoops you have to jump through to get
high performance. I can see higher performance coming more from wider
SIMD units tied to several integer cores. Whether this ends up being Atom
or Core based, I don't know.


Currently the best examples of high core count and lower per core
performance is SUN's Niagara and Rock.

DD


PS: I'm still waiting for my coffee to kick in so take my ramblings with
a grain of salt. <G>

Dan Downs

Jul 10, 2008, 11:41:05 AM
> Actually, first, I'd rather have some/most/all of those cores to be
> task-reprogrammable on-the-fly.
>
> So, that, I could instantly convert a core or two to handle, say,
> RAID; some others to handle ToE (TCP/IP off-loading Engine), others,
> (de)compression engines, yet others some other IO (music, audio etc.)

A few years ago I read about a project where they took an FPGA and ran
genetic algorithms on it to optimize the layout for a certain task. I
can see using this approach to develop custom ASICs, with maximum
performance and tweaked for lowest power usage. Unfortunately I don't
think using FPGAs directly is quite there yet. In the study, the chip
created a whole separate, disconnected section on the chip to enhance
electrical flow of the working section. That's what fascinated me the
most about the project: completely unexpected results.

DD

Dan Barclay

Jul 10, 2008, 5:53:36 PM

"Rene Tschaggelar" <no...@none.org> wrote in message
news:487611fd$1...@newsgroups.borland.com...

Yup, also structural dynamics, realtime simulation,
modeling.

There are tons of people doing that kind of work on an
exploratory basis. Most people doing that kind of work use
purchased analysis tools. My daughter, a structural
engineer, does this stuff all the time. They do *not* write
simulation code, they configure tools that do the
simulation.

So, how many actual developers does it take to build these
kinds of tools? A hundred? Double that? Maybe 10x?

Dan


Dave White

Jul 10, 2008, 6:04:27 PM
"Eric Grange" <egra...@SPAMglscene.org> wrote in message
news:4874...@newsgroups.borland.com...

>
> As far as the trend... ever heard of the new Intel Atom? This one runs
> significantly slower than current desktop processor.
>

But this is really a special-purpose CPU. From Intel's own web site: "built
for low power and designed specifically for a new wave of Mobile Internet
Devices and simple, low-cost PC's".

Users are willing to trade speed for battery life with mobile devices, but
end users want their desktops to be as fast as possible.


Barry Kelly (CodeGear)

Jul 10, 2008, 6:58:24 PM
Dan Downs wrote:

> Currently the best examples of high core count and lower per core
> performance is SUN's Niagara and Rock.

Are you sure? How does it compare to Azul (Vega 3 processor w/ 54 cores,
16x SMP capable, for 864 cores/box)?

-- Barry

--
http://barrkel.blogspot.com/

Alexandre Machado

Jul 10, 2008, 8:00:57 PM

> What are you going to do in 3
> years from now when the new processors run 128 or 256 cores each as
> powerful
> as a current low level machine but you can only use one of the cores
> properly
> because the language you use doesn't support parallization of even the
> most
> basic tasks?

Three years only for 256 cores? Hmm...
Will MS Windows support 256 cores in 3 years? If not, MS should worry about
that too... I think that Windows 7 will be crawling by then, won't it?

Regards


Alexandre Machado

Jul 10, 2008, 8:52:27 PM

> Roger, agree if desktop is where you want to be. But the business case for
> desktop software is dying, and new business is supporting apps which do
> nothing but expose services and functoids, and can consume services and
> functoids from other apps. This is where multicore will shine.

I think that your POV is interesting because IMHO it is at odds with the
biggest software company: MS money comes from the desktop, including Windows.
Every day I hear that "desktop apps are dead", but these same folks continue
to pay hundreds of dollars for Windows (for what? a simple browser
platform?), and some more hundreds for Excel, Word and the rest. I just
can't see that day coming - the day when your OS is there only to support a
web browser.

Regards.


Alexandre Machado

Jul 10, 2008, 9:05:52 PM

> I think even more pertinent is that the app won't be on your machine but
> will be a service, and will still be doing all that stuff as well.

The same question I asked in the other post: why should I buy Windows if
everything is web? MS marketing is good indeed, but I won't pay a cent for
Windows if the only application there is Firefox (I don't use IE anyway).
This post is about parallel programming, isn't it? So... we don't have to
worry about it anyway. The only living application will be a web browser, so
the only developers around the world who should worry about that are the IE,
Firefox and Opera teams...
And what about hundreds of cores?... 128, 256 cores for a web browser? Intel
and AMD marketing will have to work hard too.

Regards.


Dan Downs

Jul 11, 2008, 12:54:15 AM
Barry Kelly (CodeGear) wrote:
> Dan Downs wrote:
>
>> Currently the best examples of high core count and lower per core
>> performance is SUN's Niagara and Rock.
>
> Are you sure? How does it compare to Azul (Vega 3 processor w/ 54 cores,
> 16x SMP capable, for 864 cores/box)?
>
> -- Barry
>

Well, a good example is Niagara, the T1 and 2nd-gen T2, because of
their much lower clock speed but high core/thread count: the T1 at 8
cores/32 threads and 1.0 - 1.4 GHz, while the T2 is 16 cores/128
threads. The chips only perform well with lots of threads running; they're
meant for higher throughput, not single-thread performance. As per Eric's
Atom example, something similar would happen, i.e. you'd need lots of
threads to make up for the simple cores.

As for Azul, honestly I'll have to look it up, as I don't know much
about it. I know there are other special-purpose chips with lots of FPU
cores out there as well, but until they get themselves onto the FSB
(HyperTransport 3) their performance will always be capped, or it'll just
be really expensive to get a motherboard with several PCI Express x16
slots to feed them fast enough.

DD

m. Th.

Jul 11, 2008, 2:38:02 AM
Barry Kelly (CodeGear) wrote:
> Dan Downs wrote:
>
>> Currently the best examples of high core count and lower per core
>> performance is SUN's Niagara and Rock.
>
> Are you sure? How does it compare to Azul (Vega 3 processor w/ 54 cores,
> 16x SMP capable, for 864 cores/box)?
>
> -- Barry
>

Tilera has 64 cores per CPU:

http://en.wikipedia.org/wiki/TILE64

--

m. th.

Adem

Jul 11, 2008, 5:40:06 PM
Dan Downs wrote:

> Unfortunately I don't think using FPGAs directly are quite there yet.

Currently, the only way to add an FPGA/ASIC to a PC is through external
interfaces (PCI etc). This makes it far too expensive and constrained.

Had it been possible to add them just like adding hard disks (obviously
on a much faster bus, such as external PCIe), then things would be
different - there would be a much larger market.

Dan Downs

Jul 11, 2008, 5:59:26 PM


HyperTransport 3 and HTX, I thought it was supposed to be out by now,
but I'm not sure what happened.


Looking it up, it's here; I don't know why I didn't notice any press
releases, though.

This might be something close to what you wanted.

http://www.hypertransport.org/docs/tech/rchtx_datasheet_screen.pdf

DD

Adem

Jul 15, 2008, 6:08:09 AM
Dan Downs wrote:

> This might be something close to what you wanted.
>
> http://www.hypertransport.org/docs/tech/rchtx_datasheet_screen.pdf

Thanks.

Yes, it's close.

What I would love to see is a motherboard that supports a few HTX
sockets so that I could add external/internal modules to it.

And, if these modules supported data-streaming, I'd be more than happy.

Marco van de Voort

Jul 20, 2008, 9:06:20 AM
On 2008-07-09, James K Smith <jks...@grid-sky.com> wrote:
> But they don't take advantage of multicore, which is even more valuable wrt
> server apps, and server apps are the future.

It is mostly not a property of the language, but of the framework. And there
are FPC or Delphi ones that do (NexusDB e.g., maybe kbmMW too).

Maybe you should explain what you are comparing with.

Eric Grange

Jul 21, 2008, 3:42:27 AM
> It is not a property of the language mostly, but of the framework.

I disagree: language is key; frameworks can only work on what was
described with the language. If the language only efficiently supports
describing sequential execution, then the framework will only be able to
execute "safely" in a sequential fashion (parallelizing only the rare
bits of code where it's "safe").

This is also why languages that do not describe things in a purely
sequential fashion (Erlang and others) look so alien.
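
A trivial sketch of the difference:

procedure Example(var Data: array of Double; var Squares: array of Double);
var
  i: Integer;
begin
  // Independent iterations: a framework could split these across cores,
  // but only if it can prove no iteration touches another's data.
  for i := 0 to High(Data) do
    Squares[i] := Data[i] * Data[i];

  // Loop-carried dependency: nothing in the language tells the framework
  // this is unsafe to split, so it must conservatively run everything in
  // the order the code describes.
  for i := 1 to High(Data) do
    Data[i] := Data[i] + Data[i - 1];
end;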

Eric
