Multicore vs. Cloud Computing


Greg Pfister

Nov 8, 2009, 9:09:34 PM
to Cloud Computing
...which is the subject of a post on "Perils of Parallel." http://bit.ly/Gys2Y

Both are waves of the future. Do they get along? Net: Sort of. IaaS,
yes. PaaS, no, but that's because the dominant platform paradigm
doesn't do multicore much.

Greg Pfister
http://perilsofparallel.blogspot.com/

Jan Klincewicz

Nov 8, 2009, 9:38:14 PM
to cloud-c...@googlegroups.com
I often wonder what the benefit is of "Virtual SMP" huge boxes when no single piece of software seems to be able to eat up that much of a box.  Oracle DB and 64-bit Citrix XenApp come to mind, but aside from those, farms of single servers running 10-20 VMs each seem to be the norm.  Certainly hypervisors can eat up cores, but who wants to put that many eggs in one basket when you can spread the risk out among multiple boxes?

As long as cores keep multiplying and the proc prices remain the same, the market will be more than happy to have the extra performance (even though CPU scheduling is what hypervisors seem to do best...)

I thought 3Leaf had some facility to aggregate and share MEMORY as a resource among multiple devices.  RAM, although cheap, is probably a dearer resource than CPU.  If it could be shared dynamically as storage can, that would be a pretty decent breakthrough in architecture.
--
Cheers,
Jan

Alejandro Espinoza

Nov 8, 2009, 9:45:06 PM
to cloud-c...@googlegroups.com
Greg,

I agree with you. Multicore has to be accounted for in the clouds. Azure and Google App Engine will have to support multithreaded code on multiprocessors.

Creating threads in the cloud should not only be available, it should scale to the number of cores you require. The technology in PaaS will have to evolve to really simulate a single computer with cores-on-demand when it executes code, using the cloud fabric. If I want to create a thread, I should be able to do it, and when that thread executes it should do so on a single core, thus creating real parallelism.

PaaS has to support real parallelism either as threads or as tasks. There are several ways to accomplish this, for example with a simple Agents-and-Tasks architecture. Obviously, implementing such a fabric is not as simple. Right now when you create a service or a worker in Azure it runs on one machine, and it scales by installing on different VMs using the cloud fabric. So it is using parallelism, but not multicore.
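To make the core-scaling idea concrete, here is a minimal Java sketch - my own illustration, not Azure's or any other PaaS API - of a worker that sizes its thread pool to whatever cores the runtime reports, the kind of decision the cloud fabric would ideally make for you:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class CoreSizedWorkers {
    public static void main(String[] args) throws Exception {
        // Ask the runtime how many hardware threads are visible to this instance.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Submit one task per core; on a real PaaS the fabric, not the app,
        // would ideally decide where each task lands.
        List<Future<Long>> results = new ArrayList<>();
        for (int i = 0; i < cores; i++) {
            final int id = i;
            results.add(pool.submit(() -> {
                long sum = 0;
                for (long n = 0; n < 10_000_000L; n++) sum += n ^ id;
                return sum;
            }));
        }
        for (Future<Long> f : results) f.get();   // wait for all workers
        pool.shutdown();
        System.out.println("Ran " + cores + " workers on " + cores + " reported cores");
    }
}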

I think PaaS will, like the article suggests, move up the ladder technology-wise, and it will provide multicore development in the end.

Regards,
Alex.
--
Alex Espinoza | Axis Technical Group | Software Development Manager
phone: 714-491-2636 office | 714-470-7125 cell
http://neonlabs.structum.net/

Jan Klincewicz

Nov 8, 2009, 10:35:41 PM
to cloud-c...@googlegroups.com
Recognizing multi-thread / multi-core / multi-CPU is not a new issue unique to Cloud.  We have had SMP since the early 90's with few apps (or even Operating Systems) taking full advantage.  This is not a new issue, but an old one: software not keeping up with hardware.  Tweaking compilers is not the answer.  Perhaps re-vamping our Education system is. 
--
Cheers,
Jan

Rao Dronamraju

Nov 8, 2009, 11:29:32 PM
to cloud-c...@googlegroups.com

Multi-cores in particular and massive parallelism in general are definitely
going to be the future. Clouds are going to be platforms for real-time deep
analytics and consequently a high-degree-of-intelligence "juicer" (for lack
of a better word).

In ML, AI and Analytics, a lot of information is approximated or inferred based
on processing a sample set of data. With multi-cores a LOT more
information/data can be analyzed in real-time and your probabilistic models
will be much more accurate.

Imagine, no one could predict the "greatest recession in 60 years" despite
all the economic metric models and the mega supercomputing power available
today. Intelligence on the scale of many clouds is needed to thwart this kind
of fiasco next time around.

So yes, multi-cores and Cloud/massive parallelism are the foundation for the
era of super intelligent machines that will pervade us in the next 20 to 50
years.

Rao Dronamraju

Nov 9, 2009, 12:01:17 AM
to cloud-c...@googlegroups.com

 

“Tweaking compilers is not the answer.  Perhaps re-vamping our Education system is. “

 

How many human beings can handle multi-tasking?...Human beings/brains are sequential entities, hence the problem.

 


nirpaz

Nov 9, 2009, 12:30:46 AM
to Cloud Computing
Hi,

I believe that it is the responsibility of the PaaS to deliver a
programming model which will enable application developers to take
advantage of both multicore and the distributed environment.
Actually, there are some models for that at the moment. Some of them
try to oversimplify (in my opinion) the problem and 'hide' multicore
and inter-process communication, while others provide semantics within
the APIs to enable the application to exploit these technologies.
In any case, it is clear that the platforms cannot just ignore it.
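Purely to illustrate what 'semantics within the APIs' could look like - this is a hypothetical interface I am making up, not any existing platform's API - here is a Java sketch where the application declares how local a task must be and the platform decides the actual placement:

import java.util.concurrent.*;

/** Hypothetical API sketch only - not any existing PaaS - showing the idea of
 *  exposing placement semantics instead of hiding them entirely. */
public class PlacementAwareTasks {
    enum Placement { SAME_PROCESS, SAME_NODE, ANY_NODE }

    interface PlatformScheduler {
        <T> Future<T> submit(Callable<T> task, Placement weakest);
        void shutdown();
    }

    /** Trivial single-node stand-in: a real fabric could ship ANY_NODE tasks
     *  to other machines; this stub just honours the contract on local cores. */
    static class LocalOnlyScheduler implements PlatformScheduler {
        private final ExecutorService cores =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        public <T> Future<T> submit(Callable<T> task, Placement weakest) {
            return cores.submit(task);
        }
        public void shutdown() { cores.shutdown(); }
    }

    public static void main(String[] args) throws Exception {
        PlatformScheduler scheduler = new LocalOnlyScheduler();
        Future<Integer> answer = scheduler.submit(() -> 6 * 7, Placement.ANY_NODE);
        System.out.println("result = " + answer.get());
        scheduler.shutdown();
    }
}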

Guy


Ray DePena

Nov 9, 2009, 12:35:48 AM
to cloud-c...@googlegroups.com
For the experts among you on this topic.... is this a viable alternative?

Roundup: Tilera Debuts 100-Core Processors

The company claims its approach has simplified programming for multi-core processors, which has been cited by analysts as a barrier to adoption of many-core parallel processing. Tilera says its two-dimensional iMesh interconnect “eliminates the need for an on-chip bus and its Dynamic Distributed Cache (DDC) system allows each core’s local cache to be shared coherently across the entire chip.” These two key technologies enable the TILE Architecture performance to scale linearly with the number of cores on the chip.

-RD
--
Ray DePeña
Director, Stealth Startups
Strategic Business Advisor

http://www.linkedin.com/in/raydepena
Sacramento, CA 95630
(916) 941-5558

Peglar, Robert

Nov 9, 2009, 7:07:50 AM
to cloud-c...@googlegroups.com
@Rao, good points all. However, the models are only as good as the
built-in assumptions...e.g. one of the recession triggers was that the
'wunderkind' quants on the Street assumed a 6-8% annual increase in
housing prices, FOREVER, when they built their CDS models and sold the
resultant product to unwitting buyers.

We all know how that worked out.

Super-intelligent machines? Sure, that's great and we all want that.
But I would prefer more intelligent humans, myself, or perhaps humans
with more common sense. As Jan correctly said, that takes education,
not CPUs and RAM.

Rob


Ken

Nov 9, 2009, 7:13:50 AM
to Cloud Computing
Excellent discussion.

The applications run via a service-enabled cloud must be able to adapt
the implementation of that service to whatever computational fabric it
finds. This means that the partitioning of tasks and data for
multicore, bridged multicore or clustered environments cannot be hard-
coded, but must be adaptable. Obviously, this is rare. Most software
is terribly designed for massively parallel environments - especially on
HPC clouds.

The cloud service must be agnostic to the particular implementation
details. I'm not sure that is well understood.
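As a minimal illustration of that adaptability (my own sketch, not any particular framework), here is a Java fragment that discovers the fabric it actually landed on and partitions a data-parallel job accordingly, instead of hard-coding the split:

import java.util.concurrent.*;

public class AdaptivePartition {
    public static void main(String[] args) throws Exception {
        double[] data = new double[8_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i * 0.5;

        // Discover the fabric we actually landed on, rather than assuming one.
        int parts = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(parts);
        int chunk = (data.length + parts - 1) / parts;

        CompletionService<Double> cs = new ExecutorCompletionService<>(pool);
        for (int p = 0; p < parts; p++) {
            final int lo = p * chunk;
            final int hi = Math.min(data.length, lo + chunk);
            cs.submit(() -> {
                double sum = 0;                 // each task sums its own slice
                for (int i = lo; i < hi; i++) sum += data[i];
                return sum;
            });
        }
        double total = 0;
        for (int p = 0; p < parts; p++) total += cs.take().get();
        pool.shutdown();
        System.out.printf("parts=%d total=%.1f%n", parts, total);
    }
}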

Ken Lloyd


Khazret Sapenov

Nov 9, 2009, 8:21:42 AM
to cloud-c...@googlegroups.com
Only half of the human brain is sequential; the other half is parallel.
It is important that they work in tandem to maintain life balance.
Here's some video explaining one reference implementation of parallelism in the human brain :) http://www.youtube.com/watch?v=UyyjU8fzEYU

Rao Dronamraju

Nov 9, 2009, 11:28:25 AM
to cloud-c...@googlegroups.com


"But I would prefer more intelligent humans, myself, or perhaps humans with
more common sense. As Jan correctly said, that takes education, not CPUs
and RAM."

There is a fundamental difference between intelligent humans and
intelligent machines. Intelligent humans cannot process information at the
speeds at which machines can. So if the machines are intelligent, in
addition to the speed at which they apply this intelligence, then
intelligent solutions can be formulated for real-world problems in REAL-TIME.
Humans are not good at problem solving in real-time. This is the fundamental
difference. Generally machines will be behind humans in intelligence.
But with advances in ML, AI and Analytics, they will soon catch up and go
ahead of humans. The time when machines will start to invent and create is
not far away. No, it is not science fiction.

Alejandro Espinoza

Nov 9, 2009, 11:45:41 AM
to cloud-c...@googlegroups.com
Jan,
 
I agree. This is not new. I have been working in obscurity with multiprocessors since 1998, and people are still having problems understanding asynchronous programming. I do think that with AJAX some of it might have filtered through the cracks into developers' minds, but not all.

I do believe that this is a problem of education. It is very important to teach sequential programming and combine it with a good methodology for parallelizing sequential code. I think the book "The Art of Concurrency" by Clay Breshears approaches the subject with a focus on improving sequential code.

If you go to Amazon right now and look for concurrency or parallelism books on multiprocessors, most of them are heavily filled with math, which actually scares off most developers. I think that is why some developers haven't really approached working with multiprocessors.

The fact that the material available is so focused on math might be a turn-off for most developers, especially enterprise developers.
 
Regards,
Alex.

Alejandro Espinoza

Nov 9, 2009, 11:54:05 AM
to cloud-c...@googlegroups.com
Rao,
 
"How many human beings can handle multi-tasking?...Human beings/brains are sequential entities, hence the problem. "
 
You are right.  Not many human beings can handle multi-tasking. Also, not many human beings can handle a multiplication of 5642392 by 42325 without help.

I think we need to start teaching about multi-tasking and parallelism to future generations. It is a problem of education. Just because we can't do it now doesn't mean we are never going to make it.

Now, I do handle multi-tasking pretty well. I am working on three projects at the same time. People might argue that I am doing a round-robin kind of thing. It might be true. But with the help of the computer I can compile one application and test another while I am writing a post to this forum. That amounts to three tasks in parallel.

Now, what I do might not be of help to everyone, and not every task can be parallelized. But it means that as human beings we are capable of thinking about parallel tasks with the help of the computer or 'tools', and that is exactly what parallelism is all about: getting help. We can learn to structure problems as a set of concurrent tasks so that we can get help, either in the form of a computer or tool, or just another human being.
 
Regards,
Alex.

Ray DePena

Nov 9, 2009, 12:31:01 PM
to cloud-c...@googlegroups.com
There are several studies showing that multi-tasking in humans doesn't work very well.  Here's a recent one.  Others can be found via the link below.

Stanford Report, August 24, 2009

Media multitaskers pay mental price, Stanford study shows

http://www.google.com/#hl=en&source=hp&q=multitasking+study&aq=0s&aqi=g-s1g1g-s8&oq=multi-tasking&fp=f856a575d939ef4

Konstantin Ignatyev

Nov 9, 2009, 1:19:54 PM
to cloud-c...@googlegroups.com
That is not the Street alone - unfortunately, the majority of societies
embrace the idea of a growth-oriented economy,
but that model does not work anymore, because space is finite, and so are resources.

I would say that today we have sufficient computing power and
capabilities for any sort of task worth working on.

Yep, it is all about humans and their education, or lack thereof...

It is not that a recession cannot be predicted - it is a matter of when,
not if, in a growth-oriented system - it is just that many humans benefit
from denial.


--
Konstantin Ignatyev

PS: If this is a typical day on planet earth, humans will add fifteen
million tons of carbon to the atmosphere, destroy 115 square miles of
tropical rainforest, create seventy-two miles of desert, eliminate
between forty to one hundred species, erode seventy-one million tons
of topsoil, add 2,700 tons of CFCs to the stratosphere, and increase
their population by 263,000

Bowers, C.A. The Culture of Denial: Why the Environmental Movement
Needs a Strategy for Reforming Universities and Public Schools. New
York: State University of New York Press, 1997: (4) (5) (p.206)

Greg Pfister

Nov 9, 2009, 6:27:41 PM
to Cloud Computing
Khazret, that is one killer video. Magnificent. Thank you very much
for pointing it out.

That said --

(a) I can't help thinking that all the wonder she speaks of was the
result of brain injury. Separate topic, very off-topic for this forum.

(b) I sure can't see writing code in that state! Nor would I trust
code labelled as such.

Greg Pfister
http://perilsofparallel.blogspot.com/


Greg Pfister

Nov 9, 2009, 6:32:34 PM
to Cloud Computing
Ray,

Yes, that's the system I was referring to in my blog.

I didn't say more about it because I've not gotten into any of the
details of how reasonable a programming platform it is. I've heard
others say it's not good, but don't know enough to say myself. For my
purposes in the blog it's just a marker on the map saying we're moving
into regions where there be dragons.

And by the way, statements like those in that announcement, indicating that
no bus = 100-way cache coherence, are the product of a marketing writer who
is struggling to understand what the engineers are really saying.

Greg Pfister
http://perilsofparallel.blogspot.com/


bjacaruso

Nov 9, 2009, 6:41:02 PM
to Cloud Computing
Shouldn't the software development paradigm be one that adapts?

A great example is the implementation by Aha! Software of a
Monte Carlo simulation on AWS. They used a data-services development
environment called Pervasive DataCloud that provided highly parallel,
self-adapting software execution, built on a Java library Pervasive
developed for data-intensive applications called DataRush. It can be
deployed on any cloud - they just happened to use AWS - and it takes
advantage of the underlying cloud resources, detecting the cores
available... the simulation takes hours to run on a dual core and 3
minutes on the extra-large Amazon instance... same code!!!

IMO 100 cores is not that big a piece of news anymore... look up a company
called Azul...
http://www.azulsystems.com/products/compute_appliance.htm
Java only - but over 1000 cores... and they have had them for a while
now.
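For flavor, a toy Java version of the same idea - my own illustration, not the Aha!/DataRush code - a Monte Carlo estimate of pi that fans out over however many cores the instance exposes, so the identical code simply gets faster on a bigger box:

import java.util.concurrent.*;

public class MonteCarloPi {
    public static void main(String[] args) throws Exception {
        final long samplesPerWorker = 20_000_000L;
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        // Each worker throws darts at the unit square and counts hits in the circle.
        CompletionService<Long> cs = new ExecutorCompletionService<>(pool);
        for (int w = 0; w < workers; w++) {
            cs.submit(() -> {
                ThreadLocalRandom rnd = ThreadLocalRandom.current();
                long inside = 0;
                for (long i = 0; i < samplesPerWorker; i++) {
                    double x = rnd.nextDouble(), y = rnd.nextDouble();
                    if (x * x + y * y <= 1.0) inside++;
                }
                return inside;
            });
        }
        long inside = 0;
        for (int w = 0; w < workers; w++) inside += cs.take().get();
        pool.shutdown();
        double pi = 4.0 * inside / (samplesPerWorker * (double) workers);
        System.out.println("workers=" + workers + " pi~=" + pi);
    }
}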


Greg Pfister

Nov 9, 2009, 6:59:58 PM
to Cloud Computing
On Nov 8, 9:29 pm, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:

Thanks for the comments, Rao. Before sending this, I realized what's
below sounds somewhat negative -- I don't mean it to be; you made good
comments, and I appreciate them. They're just tempting targets for me.

> Multi-cores in particular and massive parallelism in general is definitely going to be the future. Clouds are going to be platforms for real-time deep analytics and consequently a high degree of intelligence "juicer" (for the lack of better word).

Well, maybe. I recently got my ears bent by some guys at Pervasive
about their DataRush product, which beats the pants off clusters
(clouds) for data mining, running just on a single multicore. The key
is that the processing involved can get turned into streaming
operations: off disk, into memory, into the processors, crunch on the
way through, back to memory and out.

Sure, they could do even better streaming in parallel on multiple
machines in a cluster -- but that, I think, is going to be the easy
part (they haven't done it yet).
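Just to sketch that streaming shape in Java (a toy of my own, nothing to do with Pervasive's actual code): one thread stands in for reading blocks off disk, another crunches them as they stream past, and nothing ever holds the whole data set in memory.

import java.util.concurrent.*;

public class StreamingCrunch {
    private static final double[] POISON = new double[0];   // end-of-stream marker

    public static void main(String[] args) throws Exception {
        BlockingQueue<double[]> inFlight = new ArrayBlockingQueue<>(8);

        // Producer: stands in for reading blocks off disk.
        Thread reader = new Thread(() -> {
            try {
                for (int b = 0; b < 100; b++) {
                    double[] block = new double[100_000];
                    for (int i = 0; i < block.length; i++) block[i] = b + i * 1e-6;
                    inFlight.put(block);
                }
                inFlight.put(POISON);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Consumer: crunches each block as it streams past.
        Thread cruncher = new Thread(() -> {
            try {
                double total = 0;
                double[] block;
                while ((block = inFlight.take()) != POISON) {
                    for (double v : block) total += v;
                }
                System.out.println("streamed total = " + total);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        reader.start();
        cruncher.start();
        reader.join();
        cruncher.join();
    }
}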

> In ML, AI and Analytics, a lot information is approximated or inferred based
> on processing of sample set of data. With multi-cores a LOT more
> information/data can be analyzed in real-time and your probablistic models
> will be much more accurate.

Yup, that's what I referred to above.

> Imagine, no one could predict the "greatest recession in 60 years" despite
> all the economic metric models and the mega super computing power available
> today. Many Clouds size intelligence is needed to thwart this kind of fiasco
> next time around.

As Bob pointed out below, yes -- if we manage to have the right
models. Got to start by getting rid of silliness like assuming
exponential growth forever, as in Bob's example. I've run into that
same assumption in other settings, like: Back in the 90s, more than
one phone company was assuming exponential growth in cell phone sales,
forever. I remember questioning it in a customer briefing, and they
looked at me like I had two heads.

> So yes, multi-cores and Cloud/massive parallelism is the foundation for the
> era of super intelligent machines that will pervade us in the next 20 to 50
> years.

Ooyeee... you goin' Singularity on me, mon? :-) Please don't do that.
I'll believe in super intelligent machines as soon as anybody figures
out what "intelligent" is in the first place. (I believe some of the
adherents don't think that step is necessary, though.)

Greg Pfister
http://perilsofparallel.blogspot.com/

Jan Klincewicz

Nov 9, 2009, 8:07:40 PM
to cloud-c...@googlegroups.com
@Alejandro :


<< I have been working in obscurity with multiprocessors since 1998 and people are still having problems understanding asynchronous programming. >>

I think there are about 20K people on this group, so you are no longer in obscurity.

<< The fact that the material available is so focused on math might be a turn-off for most developers, especially enterprise developers. >>

I flunked Algebra I about three times (but if it's any excuse, it WAS in the mid-seventies.)  I fell in love with Boolean logic my first year in University (in PHILOSOPHY class of all places ... leave it to the Jesuits ..)  Anyway, I was considered a fair programmer in my day (if you consider dBase / Clipper programming.)

I think the whole outsourcing thing has scared a lot of intelligent people away from pursuing programming as a vocation.  I hear varying opinions, but I think IT as a whole is less attractive to prospective students than in times past.

When I examine my career path, it's hard not to think I should have managed a Hedge Fund .....
--
Cheers,
Jan

Ray DePena

Nov 8, 2009, 10:29:35 PM
to cloud-c...@googlegroups.com
Nice post on the "Perils of Parallel" Greg.

Regards,

Ray


Rao Dronamraju

Nov 9, 2009, 8:24:04 PM
to cloud-c...@googlegroups.com

“Now, I do handle multi-tasking pretty well. I am working on three projects at the same time. People might argue that I am doing a round-robin kind of thing. It might be true. But with the help of the computer I can compile one application and test another while I am writing a post to this forum. That amounts to three tasks in parallel.”

 

I used the wrong word… I should have said parallel/concurrent, not multi-tasking. What you are doing is sequential - “I can compile one application, test another while I am writing a post to this forum,” just as you wrote it - not parallel.

 

Parallel is when you can do the following at the same time:

 

1234567 X 1234567

7899665 X 7655433

5677889 X 9900654

 

Remember, the key word is SAME TIME. If you have completed the first multiplication at, say, 8:30:55 pm, you should complete the other two at 8:30:55 as well. Now try it!
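For what it's worth, a machine does this trivially. A small Java illustration of my own - the three products above, computed at the same wall-clock moment, assuming the hardware has three cores to give:

import java.util.concurrent.*;

public class ThreeAtOnce {
    public static void main(String[] args) throws Exception {
        long[][] pairs = { {1234567L, 1234567L}, {7899665L, 7655433L}, {5677889L, 9900654L} };
        ExecutorService pool = Executors.newFixedThreadPool(3);
        CountDownLatch go = new CountDownLatch(1);           // release all three together
        ConcurrentLinkedQueue<String> log = new ConcurrentLinkedQueue<>();

        for (long[] p : pairs) {
            pool.submit(() -> {
                go.await();                                  // wait for the starting gun
                long product = p[0] * p[1];
                log.add(p[0] + " x " + p[1] + " = " + product
                        + " (done at " + System.nanoTime() + " ns)");
                return null;
            });
        }
        go.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        log.forEach(System.out::println);
    }
}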

 

Actually, human beings do parallel processing in some respects. For instance, most visual information processing - what you see in front of you, the 3-dimensional space - is processed in parallel by your brain. But most tasks involving “analytical” work are processed by the brain sequentially. In fact, I think even the 3-dimensional visual information that the brain is processing is being PERCEIVED in parallel but PROCESSED/COGNIZED sequentially. The only difference is that it happens at such nanosecond speeds that it appears to be parallel.

 

The human mind/brain can perceive only objects. It cannot perceive processes. The world (and maybe the universe) consists of only one PERCEIVABLE entity - objects. Although a process (the other entity of the world) cannot be perceived by human beings, it can be COGNIZED/understood. So objects in the world are perceived in parallel, but the semantics associated with objects, the relationships between objects, and processes are understood sequentially. So the human brain hits the brakes (on parallel processing) as soon as it needs to process the relationships and semantics between the objects of the world, and needs to go sequential. I am not sure whether this limitation is at the biological level or at the behavioral (learning) level.

 

 


Rao Dronamraju

Nov 9, 2009, 9:19:41 PM
to cloud-c...@googlegroups.com


"I'll believe in super intelligent machines as soon as anybody figures out
what 'intelligent' is in the first place."

OK, I will give it a try. I actually have two definitions of intelligence:
1) At the level of the world in which we live
2) At the level of the universe

First, definition #1.

Intelligence is the ability to solve problems. Now, the problems that human
beings face are on a continuum, from trivial ones like 2x2 to superhuman ones
like the ability to understand all the objects & events happening in the
world, the relationships between them, and the problems and solutions to
those problems.

So if I develop machines that can do the latter, I have made machines that
transcend human intelligence limitations and move into super intelligent
machines. I think this is possible with large numbers of machines (how many,
I do not know, as this is theory at this time) working in parallel in
real-time, and it depends on how much of the world's information they can
handle and how much intelligence can be derived from this processing.

I must also admit that in order for machines to go beyond human beings in
intelligence, they also need to learn the art of innovation/creativity. If
not, they will just be replicating the algorithms that human beings have made
the machines learn. By innovating and creating their own algorithms, the
machines will surpass human intelligence.

#2 definition

This is more at the universal and metaphysical level.
Intelligence is the ability of an entity to understand the universe and
beyond. This can be a human being or a machine or a set of machines.
The biggest question here is: can you know the whole of something from
within that something? Can an entity that lives within the universe ever know
the universe and beyond? Another interesting aspect of this is that the very
concept of the universe and beyond is LIMITED by your perception and cognition
abilities. What if there is something that is beyond the limitations of
human perception and cognition (including human imagination), because of
which it can NEVER EXIST in a human mind?






Alejandro Espinoza

Nov 9, 2009, 9:33:35 PM
to cloud-c...@googlegroups.com
Rao,

Then I don't get your point. Because if I am a CPU there is no possible way I can do things in parallel. But parallelism is not about one entity doing two things at the same time. It's about getting help. That is the nature of multiprocessors, and that was my whole point.

So if the computer is helping me do the three things at the same time, then I am really doing things in parallel. If you test against the clock, I was compiling and running tests while I was writing in this forum. If the three of them are running at the same time, against the clock, then it is parallelism. So it is possible.

The computer can help human beings accomplish parallelism in some tasks.

Regards,
Alex.

Alejandro Espinoza

Nov 9, 2009, 9:36:47 PM
to cloud-c...@googlegroups.com
Jan,

I meant obscurity because there was not a lot of information available to me at the time, so I kind of learned by hacking through Unix.

Now, regarding math: I also flunked Advanced Math III in college about 4 times, but I never had an aversion to math. Some of my co-workers do have that aversion, and it keeps them away from things that could actually help improve their applications - in this case concurrency and, now with the mainstream availability of multicores, parallelism.

Regards,
Alex.

Khazret Sapenov

Nov 9, 2009, 11:11:50 PM
to cloud-c...@googlegroups.com
Perhaps there's a sweet spot where multicore in the cloud would make sense for both capitalists and scientists.
For example, offering systems like SiCortex on demand may save research centers a couple of bucks and create critical mass for wider adoption (of such systems) due to a more affordable model. 
As a side note, they used a really interesting implementation of a Kautz digraph as an 'ideal communication network for parallel computers'. Read more at http://sicortex.com/content/download/1134/6805/file/hpcncs-07-Kautz.pdf 

jrhou...@yahoo.com

Nov 9, 2009, 10:42:55 PM
to Rao Dronamraju, cloud-c...@googlegroups.com
Rao,

I'm not really sure what this analogy has to do with Cloud but the human mind is in fact highly parallel. I am quite capable of breathing, maintaining my heartbeat, and adding 2 plus 2 at the same time.

The promise of large multi-core systems is not that different; we should expect specialized cores for security, XML parsing, and other things that should be (dare I say it) autonomic.

But again, this is not anything unique to Cloud.

Rao Dronamraju

Nov 9, 2009, 11:00:28 PM
to jrhou...@yahoo.com, cloud-c...@googlegroups.com

I meant the human brain's workings in the context of analytical work. Sure,
as I said, in visual information processing it is parallel, and similarly in
other non-analytical work it is parallel. Breathing does not need analysis
from the brain; the heart beating does not need analytical work.

In the context of the cloud there are two parallelisms that we may be mixing up.
1) The one Alex mentioned is load balancing. You can schedule unrelated work on
multiple cores, processors and machines and get the job done.
2) The second kind is old-fashioned parallel programming, where you develop
parallel processing algorithms for a single problem. This is the one I am
talking about in the context of the cloud's multi-cores, processors and systems.
To give an example of the difficulty involved in such problems of concurrency:
if you have written OS-level software, you deal with multiple threads, both
at interrupt level and at process level, working at the same time. On a
uniprocessor they are serialized. On a multi-processing system,
multiple threads will be working at the same time. As an OS programmer you
need to be able to understand and reason about the concurrency involved in
multiple threads working simultaneously. Similarly you can have this at the
application level. Supercomputers handled this all the time. Weather, military
and many data-intensive applications are parallelized, and the parallelization
that a programmer does while writing the algorithms is a difficult skill.
What I am talking about is that with processors with 100 cores and above in
the cloud, and with thousands of machines, you have massive parallelism at
your disposal. It could revive good old parallel programming, and when this
massive parallelism is applied to ML, AI and Analytics, extreme intelligence
can be squeezed from the data/information about the world. This
will make the machines more and more intelligent.
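To show the flavor of the reasoning involved, here is a tiny Java illustration of my own: the same shared-counter job done with and without coordination. On a multicore box the uncoordinated version will almost certainly lose updates.

import java.util.concurrent.atomic.AtomicLong;

public class SharedCounterRace {
    static long unsafeCount = 0;                        // no coordination at all
    static final AtomicLong safeCount = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                unsafeCount++;                           // read-modify-write race
                safeCount.incrementAndGet();             // atomic, race-free
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();

        // On a multicore machine the unsafe total is very likely short of 2,000,000.
        System.out.println("unsafe = " + unsafeCount + ", safe = " + safeCount.get());
    }
}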

Rao Dronamraju

Nov 9, 2009, 10:10:26 PM
to cloud-c...@googlegroups.com

I think we might have digressed from the main point of Greg's post.

In order for us to exploit the multi-cores and parallelization of clouds, we need to program parallel processing.

In order to program parallel processing, one (a human being) needs to have the ability to think/process in parallel.

This is a very difficult skill for a human being. It is easy for machines, but machines do what human beings ask them to do.

If human beings cannot think or perform in parallel, machines similarly will be limited.

Rao Dronamraju

Nov 9, 2009, 11:41:43 PM
to cloud-c...@googlegroups.com

Here are some interesting articles about parallel programming and its degree
of difficulty (or not).

“…when we start talking about parallelism and ease of use of truly parallel
computers, we’re talking about a problem that’s as hard as any that computer
science has faced. … I would be panicked if I were in industry.”

http://tinyurl.com/56enf8

http://tinyurl.com/yd37k44

http://tinyurl.com/yflfgwg

Miha Ahronovitz

Nov 9, 2009, 11:46:00 PM
to cloud-c...@googlegroups.com
Khazret,

SiCortex is out of business.
http://www.networkworld.com/cgi-bin/mailto/x.cgi?pagetosend=/export/home/httpd/htdocs/news/2009/110309-sicortex-supercomputing-recession.html

One cannot start a business selling Formula 1 cars. They are designed for a race and cannot even be driven on a street. Usually top performance is a luxury and not a business. Unfortunately, that is what happened with SiCortex. $68 million in VC money went down the drain.

Miha



Khazret Sapenov

Nov 10, 2009, 12:08:59 AM
to cloud-c...@googlegroups.com
Miha,
yep, I've heard different versions, one blaming the recession, the other blaming 'bean counters pulling the [money] plug and scaring other investors'. I was just wondering whether changing the business model could have saved their intellectual property. Who knows, maybe this idea will reincarnate in some time; just see how the half-dead GM throws millions left and right :) and we'll eventually have HPC for the masses.

Miha Ahronovitz

Nov 10, 2009, 12:10:55 AM
to cloud-c...@googlegroups.com
“Human brain's working in the context of analytical work. Sure as I said, in visual information processing, it is parallel... Alex mentioned is load balancing... massive parallelism at your disposal... will make the machines more and more intelligent.”

This whole discussion is not about human brain.

Tell me how you select a wife, why we love music and detest noise, why we dream and what men dream of. Perhaps an algorithm to fall in love?

Can the cloud, or any machine, solve these problems? Here is what it is actually solving now.

What actually happens in grid or cloud HPC when the multicore processors are optimized is better throughput - for example, topological scheduling and performance optimization for multicore processors, specifically on Nehalem.

In modern multicore processing, each socket CPU and each core has execution units, cache, memory channels, and I/O channels. Under NUMA (Non-Uniform Memory Access) a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors. Topological Scheduling allows jobs to be scheduled at core level or CPU level according to their unique needs. The use of Topological Scheduling has resulted in dramatic performance increases when tested at leading EDA customers.

http://my-inner-voice.blogspot.com/2009/10/features-in-both-sun-grid-engine-6.html

This feature is an update, as in the past grid (and now cloud) management software traditionally assumed a single-core CPU at each node.

These types of optimizations are processor-specific (in this case Nehalem). The throughput improves spectacularly. Yet there is a long way to go before there is an algorithm for falling in love and avoiding a divorce.


Interestingly, as people, making errors is part of the uncertainty - and therefore the happiness - of life. If we all knew the future, our lives would be unbearable. Did anyone read "The Unbearable Lightness of Being" by Milan Kundera?


miha




Jim Starkey

Nov 10, 2009, 11:59:04 AM
to cloud-c...@googlegroups.com
Reality time, folks. Multi-threading and multi-core programming are not difficult, can be taught straightforwardly, and the current generation of software engineers, raised in Java, come pre-tooled.

But first, some basics. Multi-threading is the issue, not multi-cores. Any correct multi-threaded code will work just fine on multi-cores. Even more happily, incorrect multi-threaded code will fail more quickly in a multi-core environment, simplifying everyone's lives.

Multi-threaded programming is used for basically three distinct purposes. One is to coordinate shared access to a single resource. This is what database systems do internally. A second is to handle a variety of inherently asynchronous events, like messages arriving on different sockets, keystrokes and mouse events from a user, all the while keeping the screen refreshed. The third, which is new with the advent of multi-core processors, is to speed up an operation by using multiple processors in parallel.

Threading to manage shared resources and multiple event sources has been around for decades, and any experienced software engineer can be expected to manage it without breaking a sweat. There are legacy problems - a lot of gibberish has been written on the topic, and Unix traditionally has been a dreadful platform for threading - but these issues are gradually passing.

Threading to reduce latency is relatively new, and for many programmers, more difficult. The difficulty, however, is not the actual code, but decomposing a problem in such a way that it can be efficiently handled in parallel. In many cases, it may be a great deal smarter to decompose the problem to use map/reduce on multiple servers rather than multi-threading.

Most applications, however, don't require multi-threading, with data sharing and consistency handled in a database system and message handling managed by an application server. Most applications receive user input, perform some database operations, and spit out the result, leaving threading to the database and application servers.

Applications like image rendering are a different ball of wax, but again, relatively unusual.

Multi-threading and/or multi-core processing is not worth losing sleep over. A more productive thing to worry about is how to use large numbers of really cheap processors to "chip" away at computationally large problems. It is almost always cheaper to buy a large number of commodity boxes than a few very expensive computrons. And, happily, when you've figured out how to exploit a large number of cheap commodity boxes, you've probably had to figure out how to handle failover gracefully, leading not only to faster and cheaper systems, but to more reliable and available ones, too.
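To make the map/reduce point slightly more concrete, a toy single-process Java sketch (purely illustrative; a real deployment would spread the map tasks over those cheap commodity boxes): map over slices of the input in parallel, then reduce the partial results.

import java.util.*;
import java.util.concurrent.*;

public class MiniMapReduce {
    public static void main(String[] args) throws Exception {
        List<String> lines = Arrays.asList(
            "the quick brown fox", "jumps over the lazy dog",
            "the dog barks", "the fox runs");

        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        // Map phase: each task counts words in one slice of the input.
        List<Future<Map<String, Integer>>> partials = new ArrayList<>();
        int slice = (lines.size() + workers - 1) / workers;
        for (int lo = 0; lo < lines.size(); lo += slice) {
            final List<String> part = lines.subList(lo, Math.min(lines.size(), lo + slice));
            partials.add(pool.submit(() -> {
                Map<String, Integer> counts = new HashMap<>();
                for (String line : part)
                    for (String w : line.split("\\s+"))
                        counts.merge(w, 1, Integer::sum);
                return counts;
            }));
        }

        // Reduce phase: merge the partial maps into one result.
        Map<String, Integer> total = new HashMap<>();
        for (Future<Map<String, Integer>> f : partials)
            f.get().forEach((w, c) -> total.merge(w, c, Integer::sum));
        pool.shutdown();
        System.out.println(total);
    }
}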





-- 
Jim Starkey
Founder, NimbusDB, Inc.
978 526-1376

Jeff Darcy

Nov 10, 2009, 7:24:21 AM
to cloud-c...@googlegroups.com
Alejandro Espinoza wrote:
> Then I don't get your point. Because if I am a CPU there is no possible
> way I can do things in parallel.

Actually that's not quite true. Even within a single core, pipelining
and multiple functional units and other bits (e.g. a decoupled
load/store or cache unit) involve many things happening at once.

Rao Dronamraju

Nov 10, 2009, 12:57:44 AM
to cloud-c...@googlegroups.com

“Tell me how you select a wife, why we love music and detest noise, why we dream and what men dream of. Perhaps an algorithm to fall in love?”

 

Sure, you have touched on another interesting aspect of how the human brain functions, including FEELINGS.

 

The way this works is: first, a human being perceives things. The perception is processed into cognition and mapped onto a feeling domain.

So you learn patterns in the world that have associated feelings in your mind.

In fact, that is why you see most people select someone as a wife/partner who has SIMILAR thinking/life-style. The reason for this is that you have, through habits in life, formed patterns that give you good feelings or bad feelings. You will select a wife who matches those patterns that map to good feelings.

Note that these patterns are formed in life by living them day in and day out. Another way of saying it is: through your habits. In fact, there is a saying that “Man is a creature of habit.”

In fact, human beings are such prisoners of their habits/patterns that even their THINKING/THOUGHTS are patterns in themselves.

The more you practice a thought, the more it becomes a subconscious activity. And once it becomes a subconscious activity, the activity happens at a subconscious level.

For instance, we have all driven to work without thinking about the route at some time or other. How can we do this?

The reason for this is that you have practiced driving on that route many times, so it has become a habit and hence a pattern is formed in your mind.

You cannot do this with a totally new route.

So yes, you have formed an algorithm to fall in love, through your learning habits in life, as to what patterns attract you and how these patterns map into the feeling domain in your brain, whether the attributes are physical or mental/personality.

 

“Why we dream and what men dream of”

 

I do not know much about dreams. I will write about dreams after I learn more about them.

 

“Can the cloud, or any machine, solve these problems?”

We are not talking about FEELINGS here; we were talking about machines being intelligent. We all know that today machines do the same intelligent work that human beings do, and they have NO FEELINGS. So clouds could have machines that in the future will meet or exceed human intelligence, but no, they are not going to have feelings. But then, who knows, the machines may just be too good at hiding their feelings, like many human beings :)

“Yet there is a long way to go before there is an algorithm for falling in love and avoiding a divorce.”

You can create an algorithm that will recognize/learn all the patterns/behavior that a human being exhibits in love or divorce and reproduce them, except for the feelings. But actions/behavior associated with feelings can also be generated by machines if they are taught to do so.

 

“Interestingly, as people, making errors is part of the uncertainty - and therefore the happiness - of life. If we all knew the future, our lives would be unbearable.”

 

It depends. There is good uncertainty and bad uncertainty. Take, for instance, the present great recession. If there had been enough intelligence in learning and understanding the entire economic system, in breadth and depth, in space and time, maybe we could have thwarted a fiasco in which a couple of trillion dollars were sucked out of the economy and so many people lost their retirement savings or the wealth they had built in the markets. Similarly, wars could be avoided in which thousands of people have died and a trillion dollars were spent, only for us to come back and say we were UNCERTAIN about WMD.

 

So like all things it is a matter of whether you use it for good or bad.

 



Peglar, Robert

Nov 10, 2009, 6:31:22 AM
to cloud-c...@googlegroups.com
Not to hijack the thread - there is a thread in all this, right? - but
it never ceases to amaze me how many sharp IT folks/programmers did not
fare well in their study of mathematics. Myself, never went beyond
differential equations, but then again I didn't have to - not required
for the CS students.

But it also never ceases to amaze how many sharp IT folks/programmers
have music or a musical background in their lives...I nearly majored in
music (instead of/in addition to CS) and I bet a whole lot of folks in
this group either played an instrument for the sheer pleasure of it or
did something organized, like played in an orchestra or band.

We now return you to your regularly scheduled debate on multi-processing
versus parallel processing :-)

Rob


Peglar, Robert

Nov 10, 2009, 6:41:58 AM
to cloud-c...@googlegroups.com
There are several advances in autonomic computing and storage (e.g. self-regulating, self-healing disks) which are studiously being avoided by the cloud providers of the day, unfortunately. This is what I meant when I said before that innovation is not being well applied to cloud compute, at least in terms of infrastructure design. It's a shame, too - lots of very good R&D going on, but ignored in the race to the infrastructure bottom, where cheap and dirty rules the day.

Ironic that private datacenters/non-clouds are taking advantage of it while clouds are not, for the most part.

The reason that the mind is both autonomic and systematic is that we have different parts of the brain regulating these different functions. I think until we get back to self-modifying code, we won't have much autonomic function in processing. Back in the day, certain pieces of the OS were indeed self-modifying and adjusted themselves to external events and conditions. Today, that technique is verboten in most shops because then "you can't understand the code."

That's the whole point of the exercise, actually.

Rob


---
Robert Peglar
Vice President, Technology, Storage Systems Group

Email: mailto:Robert...@xiotech.com
Office: 952 983 2287
Mobile:314 308 6983
Fax: 636 532 0828
Xiotech Corporation
1606 Highland Valley Circle
Wildwood, MO 63005 http://www.xiotech.com/ : Toll-Free 866 472 6764

-----Original Message-----

From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of jrhou...@yahoo.com
Sent: Monday, November 09, 2009 9:43 PM
To: Rao Dronamraju

Jeff Darcy

unread,
Nov 10, 2009, 7:19:31 AM11/10/09
to cloud-c...@googlegroups.com
> SiCortex is out of business.
> http://www.networkworld.com/cgi-bin/mailto/x.cgi?pagetosend=/export/home/httpd/htdocs/news/2009/110309-sicortex-supercomputing-recession.html
>
> One cannot start a business selling Formula 1 cars. They are designed
> for a race, and cannot even be driven on a street. Usually top
> performance is a luxury and not a business. Unfortunately this is what
> happened with SiCortex.


As a former SiCortex employee, I vehemently disagree. SiCortex
explicitly did not position their products as supercomputers or market
to the top end of the market, and offered a 12-node (72-core) $15K
system which was hailed for its affordability and accessibility.
Several articles have been written about SiCortex's demise, including
one by me and others by or quoting both founders. It had more to do
with the vagaries of venture financing in a bad economy than with sales
(which were ramping up quite nicely), let alone intrinsic merit.

Adwait Ullal

unread,
Nov 10, 2009, 6:34:33 PM11/10/09
to cloud-c...@googlegroups.com
Jim,
 
>A more productive thing to worry about is how to use large numbers of really cheap processors to "chip" away at computationally large problems.  It is almost always cheaper to buy a large number of commodity boxes than a few very expensive computrons.  And, happily, when you've figured out how to exploit a large number of cheap commodity boxes, you've probably had to figure out how to handle failover gracefully, leading not only to faster and cheaper systems, but more reliable and available ones, too.
 
The last couple of sentences in your paragraph almost make it seem like grid computing. Is that what you're espousing?
 
>Multi-threading and/or multi-core processing is not worth losing sleep over. 
Luckily, the compiler/language vendors are waking up and making it simpler by introducing parallel extensions to the language, etc.
 

- Adwait
--
Adwait Ullal

w: http://www.adwait.com
p: (408) 898-2581


On Tue, Nov 10, 2009 at 9:59 AM, Jim Starkey <jsta...@nimbusdb.com> wrote:
Reality time, folks.  Multi-threading and multi-core programming are not difficult, can be taught straightforwardly, and the current generation of software engineers, raised in Java, comes pre-tooled.

But first, some basics.  Multi-threading is the issue, not multi-cores.  Any correct multi-threading code will work just fine in multi-cores.  Even more happily, incorrect multi-threading code will fail more quickly in a multi-core environment, simplifying everyone's lives.

Multi-threaded programming is used for basically three distinct purposes.  One is to coordinate shared access to a single resource.  This is what database systems do internally.  A second is to handle a variety of inherently asynchronous events, like messages arriving on different sockets, keystrokes and mouse events from a user, all the while keeping the screen refreshed.  The third, which is new with the advent of multi-core processors, is to speed up an operation by using multiple processors in parallel.

Threading to manage shared resources and multiple event sources has been around for decades, and any experienced software engineer can be expected to manage without breaking a sweat.  There are legacy problems, a lot of gibberish has been written on the topic, and Unix traditionally has been a dreadful platform for threading, but these issues are gradually passing.

Threading to reduce latency is relatively new, and for many programmers, more difficult.  The difficulty, however, is not the actual code, but decomposing a problem in such a way that it can be efficiently handled in parallel.  In many cases, it may be a great deal smarter to decompose the problem and use map/reduce on multiple servers rather than multi-threading.
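
As a concrete illustration of that kind of decomposition, here is a minimal Java sketch (an illustrative example, not Jim's code; class names and sizes are arbitrary). It splits a large summation into one chunk per core, runs the chunks on a fixed-size pool, and combines the partial results at the end.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        final long[] data = new long[10000000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Decompose: one contiguous chunk per core, no shared mutable state.
        int chunk = (data.length + cores - 1) / cores;
        List<Future<Long>> partials = new ArrayList<Future<Long>>();
        for (int c = 0; c < cores; c++) {
            final int lo = c * chunk;
            final int hi = Math.min(data.length, lo + chunk);
            partials.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long s = 0;
                    for (int i = lo; i < hi; i++) s += data[i];
                    return s;
                }
            }));
        }

        // Combine: the only coordination point is collecting the partial sums.
        long total = 0;
        for (Future<Long> f : partials) total += f.get();
        pool.shutdown();
        System.out.println("sum = " + total);
    }
}

The workers share no mutable state and the only coordination is collecting the futures, which is what keeps the threading part of the code easy to reason about.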

Most applications, however, don't require multi-threading, with data sharing and consistency handled in a database system and message handling managed by an application server.  Most applications receive user input, perform some database operations, and spit out the result, leaving threading to the database and application servers.


Applications like image rendering are a different ball of wax, but again, relatively unusual.

Multi-threading and/or multi-core processing is not worth losing sleep over.  A more productive thing to worry about is how to use large numbers of really cheap processors to "chip" away at computationally large problems.  It is almost always cheaper to buy a large number of commodity boxes than a few very expensive computrons.  And, happily, when you've figured out how to exploit a large number of cheap commodity boxes, you've probably had to figure out how to handle failover gracefully, leading not only to faster and cheaper systems, but more reliable and available ones, too.




Rao Dronamraju wrote:
Here are some interesting articles about parallel programming and its degree
of difficulty or not.

"…when we start talking about parallelism and ease of use of truly parallel
computers, we're talking about a problem that's as hard as any that computer
science has faced. … I would be panicked if I were in industry."

"Now I do handle multi-tasking pretty well. I am working on three projects
at the same time. People might argue that I am doing a round robin kind of
thing. It might be true. But with the help of the computer I can compile one
application, test another while I am writing a post to this forum. That
amounts to three tasks in parallel."

I used the wrong word… I should have said parallel/concurrent, not
multi-tasking. What you are doing is sequential - "I can compile one
application, test another while I am writing a post to this forum" - just as
you wrote it, not parallel.

Parallel is when you can do the following at the same time:

1234567 X 1234567
7899665 X 7655433
5677889 X 9900654

Remember the key word is SAME TIME. If you have completed the first
multiplication at say 8:30:55 pm, you should complete the other two also at
8:30:55. Now try it!

Actually human beings do parallel processing in some respects. For instance
most visual information processing - what you see in front of you, the
3-dimensional space - is processed in parallel by your brain. But most tasks
involving "analytical" work are processed by the brain sequentially. In fact
I think even the 3-dimensional visual information that the brain is
processing is being PERCEIVED in parallel but PROCESSED/COGNIZED
sequentially. The only difference is that it is happening at such
nano-second speeds that it appears as if parallel.

The human mind/brain can perceive only objects. It cannot perceive
processes. The world (and maybe the universe) consists of only one
PERCEIVABLE entity - objects. Although a process (the other entity of the
world) cannot be perceived by human beings, it can be COGNIZED/understood.
So objects in the world are perceived in parallel, but the semantics
associated with objects, the relationships between objects and also
processes, are understood sequentially. So the human brain hits the brakes
(in parallel processing) as soon as it needs to process the relationships
and semantics between the objects of the world and needs to get sequential.
I am not sure whether this limitation is at the biological level or at the
behavior (learning) level.

From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com]
On Behalf Of Alejandro Espinoza
Sent: Monday, November 09, 2009 10:54 AM
To: cloud-c...@googlegroups.com
Subject: [ Cloud Computing ] Re: Multicore vs. Cloud Computing

Rao,

"How many human beings can handle multi-tasking?...Human beings/brains are
sequential entities, hence the problem."

You are right. Not many human beings can handle multi-tasking. Also, not
many human beings can handle a multiplication of 5642392 by 42325 without
help.

I think we need to start teaching about multi-tasking and parallelism to
future generations. It is a problem of education. Just because we can't do
it now doesn't mean we are never going to make it.

Now I do handle multi-tasking pretty well. I am working on three projects
at the same time. People might argue that I am doing a round robin kind of
thing. It might be true. But with the help of the computer I can compile one
application, test another while I am writing a post to this forum. That
amounts to three tasks in parallel.

Now what I do might not be of help to everyone, and not every task can be
parallelized. But that means as human beings, we are capable of thinking
about parallel tasks, with the help of the computer or 'tools', and that is
exactly what parallelism is all about: getting help. And we can learn to
think about structuring problems as a set of concurrent tasks so that we can
get help either in the form of a computer or tool, or just another human
being.

Regards,
Alex.


  
-- 
Jim Starkey
Founder, NimbusDB, Inc.
978 526-1376

--

Rao Dronamraju

unread,
Nov 10, 2009, 6:46:10 PM11/10/09
to cloud-c...@googlegroups.com

You brought up a good point. 20+ years back, when I worked on a compiler for
a supercomputer, I had the opportunity to develop the instruction scheduler
in the backend. Instruction scheduling exploits parallelism at the processor
level and re-arranges instructions so they execute in parallel as
efficiently as possible. Debugging is a challenge both with respect to
instruction schedulers and also local and global optimizers.

Jan Klincewicz

unread,
Nov 10, 2009, 7:38:05 PM11/10/09
to cloud-c...@googlegroups.com
Unfortunately, trying to sell an innovative new platform to the general public without the name HP/IBM/Sun/Dell on the box has burned through more VC funds than one could imagine.  Lots of companies with brilliant ideas (like Fabric7, for example) got to MAYBE a third round before the plug got pulled.  Everyone would love to fund the next Compaq.  The market has gotten so conservative that unless a big fish buys you just for the IP, chances are slim for survival.

That being said, the general buying public is not the ONLY buying public.  There are plenty of higher-risk-taking customers out there who, if they see a technology which will give them a differentiated advantage, will go for it, especially if their culture is a similar one.


<<It had more to do with the vagaries of venture financing in a bad economy than with sales>>




--
Cheers,
Jan

Greg Pfister

unread,
Nov 10, 2009, 8:57:04 PM11/10/09
to Cloud Computing
And here's another - http://bit.ly/z5L02 - quoting Tim Sweeney (Epic
Games) on the cost to write parallel game code:

• If it costs X (time, money, pain) to develop an efficient single-
threaded algorithm, then…
o Multithreaded version costs 2X
o PlayStation 3 Cell version costs 5X
o Current "GPGPU" version costs: 10X or more

Note that this is game code, not typical web code. Actually, as
someone commenting on my blog post pointed out correctly, the usual
programming model that's been around forever with transaction
monitors, and implemented with GAE and other PaaS, works very well:

Write a piece of serial, stateless (use the DB) code and have the
runtime instantiate it a couple zillion times for all the cores and
threads and nodes. Backend DB locking (which can be transparent to the
coder) takes care of their interaction, if any. Not saying you don't
get serial bottlenecks if done wrong, just that the coding doesn't
have to be in Erlang or whatever.
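
A minimal sketch of that model (hypothetical types and names, not GAE's or Azure's actual APIs): the handler below is plain serial code with no in-process shared state, so a runtime can stamp out as many copies as there are cores, threads and nodes, and all coordination is pushed down into the backing store.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the "serial, stateless, instantiate it N times" model.
// The handler is plain sequential code; the only shared thing is the
// backing store, which does its own locking. Here an atomic in-memory
// counter stands in for a transactional datastore - a hypothetical
// stand-in, not any real PaaS API.
public class StatelessHandlerDemo {

    static class Datastore {
        private final ConcurrentHashMap<String, AtomicLong> counters =
                new ConcurrentHashMap<String, AtomicLong>();

        long incrementAndGet(String key) {
            counters.putIfAbsent(key, new AtomicLong());
            return counters.get(key).incrementAndGet();
        }
    }

    static class Handler {
        private final Datastore db;
        Handler(Datastore db) { this.db = db; }

        // Serial, stateless request handling: read input, hit the DB, return.
        String handle(String page) {
            long hits = db.incrementAndGet("hits:" + page);
            return "visitor " + hits + " on " + page;
        }
    }

    public static void main(String[] args) throws Exception {
        final Datastore db = new Datastore();
        ExecutorService runtime = Executors.newFixedThreadPool(8);

        // The "runtime" stamps out many copies of the same serial handler.
        for (int i = 0; i < 1000; i++) {
            runtime.submit(new Runnable() {
                public void run() {
                    new Handler(db).handle("/home");
                }
            });
        }
        runtime.shutdown();
        runtime.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(new Handler(db).handle("/home")); // visitor 1001
    }
}

The handler itself never needs a lock; contention is the store's problem, which is exactly the transaction-monitor-style split described above.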

Yeah, I'm contradicting myself. :-)

Greg Pfister
http://perilsofparallel.blogspot.com/

On Nov 9, 9:41 pm, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:
> Here are some interesting articles about parallel programming and its degree
> of difficulty or not.
>
> “…when we start talking about parallelism and ease of use of truly parallel
> computers, we’re talking about a problem that’s as hard as any that computer
> science has faced. … I would be panicked if I were in industry.”
>
> http://tinyurl.com/56enf8
>
> http://tinyurl.com/yd37k44
>
> http://tinyurl.com/yflfgwg
>
>
>
> -----Original Message-----
> From: cloud-c...@googlegroups.com
>
> [mailto:cloud-c...@googlegroups.com] On Behalf Of Rao Dronamraju
> Sent: Monday, November 09, 2009 10:00 PM
> To: jrhough...@yahoo.com

Greg Pfister

unread,
Nov 10, 2009, 9:02:49 PM11/10/09
to Cloud Computing
Yep, I agree (hijacking or not...) musicians are famous for being
good, if not wonderful, programmers.

There's speculation that it has partly to do with being familiar
with abstract notation.

I personally think there are other reasons, having to do with being
comfortable with long serial structures.

*Serial.*

(See, it's in the thread. :-) )

Greg Pfister
http://perilsofparallel.blogspot.com/


Greg Pfister

unread,
Nov 10, 2009, 9:05:26 PM11/10/09
to Cloud Computing
Rao, there are tens of millions of words that have been written on the
nature of intelligence. While what you say may be reasonable, and may
even be right -- I am not expressing an opinion either way -- it's not
going to solve the problem here, in this forum.

Greg Pfister
http://perilsofparallel.blogspot.com/

On Nov 9, 7:19 pm, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:
> Greg Pfister
> http://perilsofparallel.blogspot.com/

Greg Pfister

unread,
Nov 10, 2009, 9:05:55 PM11/10/09
to Cloud Computing
Thanks!

Greg Pfister
http://perilsofparallel.blogspot.com/

On Nov 8, 8:29 pm, Ray DePena <ray.dep...@gmail.com> wrote:
> Nice post on the "Perils of Parallel" Greg.
>
> Regards,
>
> Ray
>

Greg Pfister

unread,
Nov 10, 2009, 9:10:31 PM11/10/09
to Cloud Computing
All,

Thank you for the stimulating discussion. I didn't think I had answers
in that post, really. I just hoped to bring up the issue and start
some thinking on it. That seems to have worked. :-)

Greg Pfister
http://perilsofparallel.blogspot.com/

Alejandro Espinoza

unread,
Nov 9, 2009, 11:53:56 PM11/9/09
to cloud-c...@googlegroups.com
Rao,

Great links, Thank you.

Alex.



Rao Dronamraju

unread,
Nov 10, 2009, 9:46:17 PM11/10/09
to cloud-c...@googlegroups.com

Alex,

 

My apologies if I have come across a bit rude in my reply to you. Actually I type replies quickly and generally post them without re-reading for netiquette.

Nothing intentional…..

Here are a couple of links that approach the issue from a (cognitive) psychology perspective.

 

“These results suggest that a neural network of frontal lobe areas acts as a central bottleneck of information processing that severely limits our ability to multitask.”

Summary…..

http://tinyurl.com/32h8zw

 

Details are here…..

http://tinyurl.com/y9ss34b

 

Regards,

Rao

 


--

Miha Ahronovitz

unread,
Nov 11, 2009, 4:17:10 AM11/11/09
to cloud-c...@googlegroups.com
Jeff,

Can you describe who the buyers of the $15K SiCortex systems were? Maybe the company could be restarted by a fresh group, with the $68M past investment eliminated as a liability. Plus, the lessons learned will minimize errors the second time.

With a $68M VC investment, the valuation should have been at least $200M, which means revenues of $20M to $40M per year in a few years.

I am NOT doubting the exceptional intellectual prowess of the founders and employees. Sure, they are brilliant engineers. Sure, there was bad luck, but bad luck does not last forever. Unfortunately, and I really mean it when I say "unfortunately", the business acumen must exist as well. With sales of $5M to $10M per year, a revived SiCortex-2 could fetch $50M to $100M if acquired by a third party.

The recession is almost over. Would you form a group of investors to restart the company, with some modifications, using the SiCortex IP? Cray already bought some of the IP.

Does it make sense to build a cloud with a SiCortex machine? Then one can put the money where one's mouth is.

Miha

The Formula One metaphor refers to HPC in general. While cars are built with a specific audience in mind (entry level, luxury, sport, women, active men, seniors or social climbers), a Formula One car is designed with one goal: to win races on special circuits, driven by highly trained athlete-pilots belonging to a small elite group. By analogy, when one wants to sell a Top 500 computer to a bank, the bank's needs were never considered when the supercomputer was originally designed. The requirement was, say, to be in spot #3 on the Top 500 supercomputing list. A commercial customer does not give a damn whether the supercomputer they buy is #3 or #100, as long as it does what the bank needs. This discrepancy has put so many HPC companies out of business: the original SGI, the original Cray, Thinking Machines, and now SiCortex.


From: Jeff Darcy <je...@pl.atyp.us>
To: cloud-c...@googlegroups.com
Sent: Tue, November 10, 2009 4:19:31 AM
Subject: Re: [ Cloud Computing ] Re: Multicore vs. Cloud Computing

Miha Ahronovitz

unread,
Nov 11, 2009, 4:35:54 AM11/11/09
to cloud-c...@googlegroups.com
More. I checked http://sicortex.com/customers
All the customers were universities. As far as productizing HPC software via
a business model, my rule of thumb is that if a company caters only to
universities and government, it is very unlikely to succeed.

How come a product described as a dream come true...

"...build the world's most energy-efficient computers. SiCortex
High-Productivity Computers enable breakthrough delivered performance at the
lowest power consumption in the industry. Combined with our scalable
architecture, small footprint, and minimal heat output, SiCortex breaks the
shackles of data center resource constraints while tackling complex
computing challenges. The bottom line: SiCortex customers achieve much more,
while consuming less, inside or outside of the data center."

...did not sell to commercial enterprises? I suspect a Formula 1 syndrome
somewhere. If this can be repaired, why not have a SiCortex-2?

M



-----Original Message-----
From: Jeff Darcy [mailto:je...@pl.atyp.us]
Sent: Tuesday, November 10, 2009 4:20 AM
To: cloud-c...@googlegroups.com

Sassa

unread,
Nov 11, 2009, 5:22:44 PM11/11/09
to Cloud Computing
On Nov 10, 2:24 am, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:
...
> Human mind/brain can perceive only objects. It cannot perceive processes.
> The world (and may be the universe) consists of only one PERCEIVABLE entity
> - objects. Although a process (the other entity of the world) cannot be
> perceived by human beings, it can be COGNIZED/understood. So objects in the

I think you are using "PERCEIVABLE" instead of "IMAGINABLE", as in
"imagine no dog" or "imagine runs".

My 1.2-year-old used to use syllables and gestures meaning "turns" and
"flies", pointing at wheels and airplanes respectively. He
successfully applied "turns" to the rotors of fans and helicopters when he
first met them.

I think there even was a study demonstrating that children from some
cultures understand processes better than objects, and the other way
around for other cultures. A simple test was based on classifying
objects (sorting toys in piles) and solving a task where you needed to
figure out how to do something physically (use a stick with a grab on
its end to pick an object).

It kind of boiled down to what their mum says when pointing at a
running dog - "runs" or "dog".


Sassa


> world are perceived in parallel but the semantics associated with objects,
> the relationships between objects and also processes, is understood
> sequentially. So the human brain hits the brakes (in parallel processing) as
> soon it needs to process the relationships and semantics between the objects
> of the world and needs to get sequential. I am not sure whether this
> limitation is at the biological level or at the behavior (learning) level.
...

Sassa

unread,
Nov 11, 2009, 5:31:02 PM11/11/09
to Cloud Computing
On Nov 9, 6:01 am, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:
> "Tweaking compilers is not the answer.  Perhaps re-vamping our Education
> system is. "
>
> How many human beings can handle multi-tasking?...Human beings/brains are
> sequential entities, hence the problem.

It is not human beings that are sequential, but algorithms, as the method of
expressing what needs doing. This method is sequential because the
medium (language, pen and paper) is sequential.


Sassa

Sassa

unread,
Nov 11, 2009, 5:43:43 PM11/11/09
to Cloud Computing
On Nov 9, 5:28 pm, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:
> Humans are not good at problem solving in real-time.

Depends what problems you are referring to.

Here is a list of problems that humans are better than machines at,
when solving real-time:
- is this a flower?
- shortest path in the crowd
- what's the name of this person?
- flying the airplane (the machine built into the plane never
overrides the action of the human)
- come up with the algebra
...

Humans aren't good at quickly remembering an arbitrary fact, and
humans aren't good at dealing with large numeric computations (as the
result of that).

Sassa

Sassa

unread,
Nov 11, 2009, 6:20:59 PM11/11/09
to Cloud Computing
On Nov 10, 4:00 am, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:
> Human brain's working in the context of analytical work. Sure as I said, in
> visual information processing, it is parallel, similarly in other
> non-analytical work, it is parallel. Breathing does not need analysis from
> the brain, heart beating does not need analytical work.

Have you tried doing analytical work without saying what you do in
your mind? It is not the thinking that is sequential, but the method
of proving correctness - convincing yourself or others by moving the
focus from one decision to another.


Sassa

...

Jan Klincewicz

unread,
Nov 11, 2009, 8:44:57 AM11/11/09
to cloud-c...@googlegroups.com
After reading many posts here, the thought occurred to me that we may be ignoring the fundamental fact that the common virtualization used in CC does, in fact, make ample use of multi-core CPUs.  It seems to me we have been discussing an issue germane to IT in the era where procs went under-utilized merely because we were running monolithic apps on inefficient Operating Systems.

Modern hypervisors do a very good job of scheduling multiple CPUs (or cores) and, if you want, it is even possible to "nail up" a given core, dedicated to a single VM.

I can see where multi-threaded programming was more important when resource utilization ratios of software to hardware were absurd, but IMO we have come to a point where we can consume as many resources as our fail-over schemes safely allow with COTS software, no matter how inefficiently written the apps are.


<Thank you for the stimulating discussion. I didn't think I had answers>
<in that post, really. I just hoped to bring up the issue and start>
<some thinking on it. That seems to have worked. :-)>




--
Cheers,
Jan

Jeff Darcy

unread,
Nov 11, 2009, 9:14:50 AM11/11/09
to cloud-c...@googlegroups.com
On 11/11/2009 04:35 AM, Miha Ahronovitz wrote:
> How come a product described as a dream come true...
> ...
> ...did not sell to commercial enterprises?

It almost did. We were on the verge of making our first commercial sale
(in the energy sector) at the time of the collapse, and IIRC it would
have been our biggest sale to date. I had personally been fairly
involved in the process, since there was a strong storage aspect to it,
and my understanding was that the customer had even gotten as far as
marking the lab space and provisioning power for the new box. Had the
funding not fallen through, it seems highly likely that the sale would
have completed, and we all recognized the significance of selling
outside the government/academic markets. Since before I started,
getting beyond those markets had been part of the strategy. So, why
hadn't we already made such sales? Allow me to repeat something I wrote
on my site about this recently
(http://pl.atyp.us/wordpress/?p=2458#comment-142489).

"No matter how good a startup�s technology is, they face an uphill
battle in the marketplace. Time after time, we�d hear prospective
customers say they loved the product, they loved the people they�d
worked with through the evaluation period, but they were afraid to spend
that much for a system from a small company that might not survive.
Their prophecy of failure turned out to be self-fulfilling, just as a
prophecy of success would have been. If those same customers had
evaluated products purely on the merits, many of them would have bought
our systems and we would have been fine. Building up that trust requires
demonstrating that the original technical success is repeatable, but
building that second generation � which we were doing � takes more time
and money than early-adopter customers put in � hence the need for
later-stage investment."

Is this Formula One stuff? I still don't think so. I think it's more
like hybrid vehicles. Unlike Formula One, which will never be directly
relevant to most people, hybrids could have broad appeal. Nonetheless,
most people still waited until they heard from their early-adopter
friends and neighbors about actual long-term experience. SiCortex faced
the same challenge. Their customers were notable not so much for being
non-commercial as for being early adopters willing to devote resources
to making the best use of new technology and/or to take the risk of
having an orphaned product on their hands (which can happen as the
result of big-company strategy decisions too BTW). The financial
institutions and oil companies and other obvious markets for this level
of computing power - which I'd call medium, not large - were still
waiting to see what their early-adopter friends had to say. It's a
waiting game, and in an economic environment where even the early
adopters are being forced to put off purchases the wait was just too long.

I've written some more on my site about a few of the other helpful
suggestions people have made about how SiCortex could have done better
(http://pl.atyp.us/wordpress/?p=2481) but that's drifting a bit off
topic. To keep things relevant, what I'll say is that many of the risks
and problems SiCortex faced are also relevant for cloud computing.
We're still in an early-adopter phase too. The potential big purchasers
are (with a few exceptions) still sitting on the sidelines, waiting to
see how things turn out for the early adopters before they decide to get
in the game themselves. VC is still a necessity for many, but deals
with VCs are as one-sided as ever; in this supposed partnership, only
one side feels or is held to any obligation not to act in ways that
jeopardize the hoped-for shared payoff. Overpromising a future "dream
come true" is just as dangerous in this market as any other, and might
better be eschewed in favor of providing more limited but more
immediately realizable value to users. These are all lessons the cloud
crowd should take to heart, even if the technology is different.

Alejandro Espinoza

unread,
Nov 11, 2009, 11:51:00 AM11/11/09
to cloud-c...@googlegroups.com
@Jeff,

That is not parallelism, that is concurrency. There is a big difference. One processor can never do parallelism. It can do concurrency with simple round robin, but that is it.

Regards,
Alex.





--

Alejandro Espinoza

unread,
Nov 11, 2009, 11:53:45 AM11/11/09
to cloud-c...@googlegroups.com
@Miha,

I agree. I don't really understand why they didn't approach commercial enterprises. It does sound like a dream come true. It is sad they are gone.

Alex.

Daniel Ghidali

unread,
Nov 11, 2009, 12:14:31 PM11/11/09
to cloud-c...@googlegroups.com
Hmm... performance on SiCortex seemed a bit light, assuming this article is accurate:
http://www.theregister.co.uk/2008/09/19/sicortex_kicker/

Given the 72-core system below, and at only 1.4 GFl/core, that gets you a 100 GFl system... for $15k.
Or, in other words, a pair of Nehalem servers at 85 GFl+ each, using the standard x86 instruction set.  I don't get the fuss, the new "green" computing craze notwithstanding. Both solutions will fit in a standard 15A wall outlet.

-Dan

Rao Dronamraju

unread,
Nov 11, 2009, 4:15:06 PM11/11/09
to cloud-c...@googlegroups.com

Here is another interesting but brief paper on parallel programming and
multi-core architectures. The two colorful tables & the graph provide a nice
summary.

http://tinyurl.com/y9bn5r8


Regards,
Rao

Rao Dronamraju

unread,
Nov 11, 2009, 6:47:57 PM11/11/09
to cloud-c...@googlegroups.com

No, I am using PERCEIVABLE as synonymous with SEEING. Human beings see/perceive
only Objects.

-----Original Message-----
From: Sassa [mailto:sass...@gmail.com]
Sent: Wednesday, November 11, 2009 4:23 PM
To: Cloud Computing
Subject: [ Cloud Computing ] Re: Multicore vs. Cloud Computing

Jeff Darcy

unread,
Nov 11, 2009, 7:38:32 PM11/11/09
to cloud-c...@googlegroups.com
Alejandro Espinoza wrote:
> That is not parallelism, that is concurrency. There is a big difference.
> One Processor can never do parallelism. It can do concurrence with
> simple round robin, but that is it.

No, that is true parallelism. Ask the people who design the chips, or
who write compilers to take advantage of that parallelism, or most
kernel/embedded programmers. Read Hennessy and Patterson. Look up ILP
in a technical dictionary. In short, learn the terms you're using.
What you say about round-robin is true for a single core as exposed to
user-level programmers, but it is most definitely not true of the
hardware itself.

Rao Dronamraju

unread,
Nov 11, 2009, 6:55:18 PM11/11/09
to cloud-c...@googlegroups.com
Sassa,

Please read the cognitive psychology article that I posted just yesterday.

Here it is....

“These results suggest that a neural network of frontal lobe areas acts as a
central bottleneck of information processing that severely limits our
ability to multitask.”

Summary…..

http://tinyurl.com/32h8zw


Details are here…..

http://tinyurl.com/y9ss34b



-----Original Message-----
From: Sassa [mailto:sass...@gmail.com]
Sent: Wednesday, November 11, 2009 5:21 PM
To: Cloud Computing
Subject: [ Cloud Computing ] Re: Multicore vs. Cloud Computing

Rao Dronamraju

unread,
Nov 11, 2009, 6:52:06 PM11/11/09
to cloud-c...@googlegroups.com

I did not mean trivial problems. I meant fairly difficult or complex
problems. In fact, that is the reason most of the world does not want
immediate answers to problems of reasonable significance. They would like
you to "think them through" and solve them, not on the fly.

-----Original Message-----
From: Sassa [mailto:sass...@gmail.com]
Sent: Wednesday, November 11, 2009 4:44 PM
To: Cloud Computing
Subject: [ Cloud Computing ] Re: Multicore vs. Cloud Computing

Jeff Darcy

unread,
Nov 11, 2009, 8:05:53 PM11/11/09
to cloud-c...@googlegroups.com
Daniel Ghidali wrote:
> Hmm..performance on SiCortex seemed a bit light, assuming this article
> is accurate:
> http://www.theregister.co.uk/2008/09/19/sicortex_kicker/
> <http://www.theregister.co.uk/2008/09/19/sicortex_kicker/>
>
> Given the 72-core system below, and at only 1.4GFl/core, that gets you
> a 100GFl system....for $15k.
> Or, in other words, a pair of Nehalem servers at 85Gfl+ each, using
> standard x86 instruction set.


First, you're comparing a chip only introduced this year with one from
2006, which isn't entirely fair. Second, Nehalems do not get anywhere
near 85 DP GFLOPS per core - try 70 *per chip*. IIRC, the clock speed
times issue width would make it physically impossible for them to get
more than 20-some per core, and even Intel's successor (Sandy Bridge)
doesn't claim more than 32 DP GFLOPS. That's with SSE, by the way, and
the more directly comparable numbers without are only a quarter of that.
Third, with those two Nehalem servers you get fewer memory buses and no
interconnect. You'd also be consuming more power and generating more
heat than all twelve nodes of the SC072. There are certainly many cases
where the Nehalems would still be the better choice, but there are also
cases where the next-gen SiCortex system (approximately 6x as fast as
its predecessor) would have won out.

Nothing is proven by comparing cherries to watermelons, using the wrong
numbers from years apart and ignoring important performance/cost
factors. Do you think your colleagues at IBM would appreciate the same
kind of hatchet job directed against the (very similar) processors used
in Blue Gene, or for that matter referring to x86 as the "standard"
instruction set?

Alejandro Espinoza

unread,
Nov 11, 2009, 8:18:35 PM11/11/09
to cloud-c...@googlegroups.com
@Jeff,


I disagree. No matter how you frame it, parallelism is parallelism. It is like saying that I am using a TRUE computer. So if one is TRUE, then the other isn't, right? That is exactly why it is not called Parallelism; it is called Concurrency.

Concurrency is when two actions appear to make progress at the same time.
Parallelism is when two actions DO make progress and execute simultaneously.

In a computer that has only one Processor, there can only be Concurrency, not parallelism. There is no such thing as Fake Parallelism (the opposite of TRUE parallelism). It is either parallel or not.

Regards,
Alex.
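
To make the thread-level distinction concrete, here is a rough Java sketch (illustrative only; names and workload sizes are arbitrary): the same two CPU-bound tasks are run first on one worker thread, then on two. Only the second run can make progress on two cores at the same physical instant; on a single-core machine the two runs would merely interleave (concurrency) and take roughly the same wall-clock time.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrencyVsParallelism {

    static Runnable cpuBound(final String name) {
        return new Runnable() {
            public void run() {
                long s = 0;
                for (long i = 0; i < 500000000L; i++) s += i;  // CPU-bound work
                System.out.println(name + " finished (" + s + ")");
            }
        };
    }

    static long runOn(ExecutorService pool) throws InterruptedException {
        long t0 = System.currentTimeMillis();
        pool.submit(cpuBound("task-1"));
        pool.submit(cpuBound("task-2"));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        return System.currentTimeMillis() - t0;
    }

    public static void main(String[] args) throws Exception {
        // One worker thread: the two tasks can only take turns, so the
        // elapsed time is roughly the sum of the two task times.
        long oneThread = runOn(Executors.newSingleThreadExecutor());

        // Two worker threads on a machine with 2+ cores: both tasks make
        // progress at the same physical instant, so the elapsed time is
        // roughly the longer of the two - that is the parallel case.
        long twoThreads = runOn(Executors.newFixedThreadPool(2));

        System.out.println("one thread: " + oneThread + " ms, two threads: "
                + twoThreads + " ms");
    }
}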



Jeff Darcy

unread,
Nov 11, 2009, 9:03:39 PM11/11/09
to cloud-c...@googlegroups.com
Alejandro Espinoza wrote:
> Concurrency, is when two actions appear to make progress at the same time.
> Parallelism, is when two actions DO make progress and execute
> simultaneously.

I'm well aware of the difference. What makes you assume I haven't dealt
with the difference longer, and at greater depth, than you? What I'm
telling you is that what happens inside a processor meets that
definition of parallelism. Look at the block diagram for any processor.
Do you suppose all those integer and FP ALUs - yes, even within one
core - are taking turns? No, they're making progress at the *exact
same* physical instant, as are the load-store units and other more
obscure pieces. Whenever you see terms like "pipelined" or
"superscalar" they refer to exactly what you call parallelism above -
except that it's instruction-level parallelism rather than thread-level.
*Please* look those terms up. You're clearly thinking of thread-level
parallelism, and that's the most familiar form, but it's not the only
kind that exists.
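
A rough single-threaded Java sketch of what that in-core parallelism means in practice (an illustration, not Jeff's code; any measured gap depends on the JIT and the particular chip): both methods below do the same additions, but the second splits the work into two independent dependency chains, which a pipelined/superscalar core can keep in flight at the same time.

// Rough illustration of instruction-level parallelism in single-threaded
// code: the first loop is one long dependency chain, the second exposes
// two independent chains that the hardware can overlap. This is a sketch,
// not a rigorous benchmark.
public class IlpDemo {

    static double oneChain(double[] a) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) {
            s += a[i];              // every add depends on the previous one
        }
        return s;
    }

    static double twoChains(double[] a) {
        double s0 = 0.0, s1 = 0.0;
        for (int i = 0; i + 1 < a.length; i += 2) {
            s0 += a[i];             // chain 0 and chain 1 are independent,
            s1 += a[i + 1];         // so the core can keep both in flight
        }
        return s0 + s1;             // (array length is even in this demo)
    }

    public static void main(String[] args) {
        double[] a = new double[1 << 24];
        for (int i = 0; i < a.length; i++) a[i] = 1.0 / (i + 1);

        long t0 = System.nanoTime();
        double r1 = oneChain(a);
        long t1 = System.nanoTime();
        double r2 = twoChains(a);
        long t2 = System.nanoTime();

        System.out.println(r1 + " in " + (t1 - t0) / 1000000 + " ms");
        System.out.println(r2 + " in " + (t2 - t1) / 1000000 + " ms");
    }
}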

Daniel Ghidali

unread,
Nov 11, 2009, 8:44:08 PM11/11/09
to cloud-c...@googlegroups.com
No need to get so defensive.  I was comparing based on numbers published for SiCortex, using the theoretical published 1.4 GFl per core for the SiCortex MIPS processors, and comparing it to the same theoretical performance for a 2-socket 2.67 GHz Nehalem server: 85.44 GFl. Yes, HPL is an "artificial" benchmark... but it's also an accepted industry standard.

Bottom line is that a pair of Intel 2-socket nodes gives you 32 cores, running at almost 4x the speed, and double the FLOPS/cycle of the SiCortex MIPS cores, at less than half the price you quoted for the 72-core SiCortex system. Those numbers are tough to work around.

Yes, there may have been a new version of the SiCortex MIPS processors that would be more competitive... but AFAIK, they never made it to market.

Blue Gene is in a different class altogether, in terms of scalability. I was comparing sub-$20k deskside systems here. It's not the first time this "scale down" approach has been tried, of course... remember Orion/Transmeta?







From:        Jeff Darcy <je...@pl.atyp.us>
To:        cloud-c...@googlegroups.com
Date:        11/11/2009 05:15 PM
Subject:        Re: [ Cloud Computing ] Re: Multicore vs. Cloud Computing

Miha Ahronovitz

unread,
Nov 11, 2009, 9:46:45 PM11/11/09
to cloud-c...@googlegroups.com
Jeff,

First, it's great you were trying, and there is no way to make a breakthrough without the right to make mistakes.
We all make mistakes and great people make bigger mistakes than smaller people.

You write:

"...To keep things relevant, what I'll say is that many of the risks

and problems SiCortex faced are also relevant for cloud computing.
We're still in an early-adopter phase too.  The potential big purchasers
are (with a few exceptions) still sitting on the sidelines, waiting to
see how things turn out for the early adopters before they decide to get
in the game themselves. "

This is true, but the cloud computing definition that many people on this group agreed on is a business model, not a technology.
The definition is from the user's point of view:

> 1. A user will always have all the resources s/he needs
> 2. A user will pay only for what s/he uses
> 3. The applications are delivered as an easy-to-use service
> 4. The users do not want to know what is going on inside the cloud.

The idea is that if we make the user happy, they will buy the concept.

http://my-inner-voice.blogspot.com/2009/03/coud-computing-revolution.html

To sell a product that is hot, one must (1) know the business of the customer and (2) be able to calculate how much money the customer loses by not buying the product.

This is the litmus test.

Cheers,

Miha








Sent: Wed, November 11, 2009 6:14:50 AM

Subject: Re: [ Cloud Computing ] Re: Multicore vs. Cloud Computing


Miha Ahronovitz

unread,
Nov 11, 2009, 10:02:42 PM11/11/09
to cloud-c...@googlegroups.com
@Jeff,

This is a conversation on speeds and feeds, which in general is counterproductive. CPU benchmarks are great but hardly sell a computer system.
I am not a specialist in hardware, but the little I know is that designing a CPU chip and designing a computer box or a grid or a cloud are different things.

In compute-intensive applications, a grid of 100 slow, older CPUs beats, any time, the one-CPU box with the fastest, newest CPU on the planet.

The ideal for cloud, for someone like me, would have been, say, a SiCortex box that is elastic by design. It would turn on more (or fewer) CPUs and cores as demand increases (or decreases) inside the box. Then, by adding two SiCortex boxes, the throughput increases nearly linearly and the same elasticity is preserved. The power consumption is minimal.

Then the way to sell these systems is - in addition to direct sales - in partnership with big systems corporations (IBM, HP, Oracle/Sun and more) who can OEM or resell them as part of their cloud solutions. A small startup never has the selling power of these giants. These large corporations will not hesitate to purchase a hypothetical SiCortex-2 as soon as they see success in monetizing the solution...

My 2 cents,

Miha

Sent: Wed, November 11, 2009 5:05:53 PM

Subject: Re: [ Cloud Computing ] Re: Multicore vs. Cloud Computing

Jeff Darcy

unread,
Nov 11, 2009, 11:14:27 PM11/11/09
to cloud-c...@googlegroups.com
Miha Ahronovitz wrote:
> In compute intensive applications, a Grid of 100 slow, older CPUs
> beats anytime the one CPU box with that fastest newest CPU on the planet.

Absolutely. Despite some of what I've posted, I can assure you that I'm
pretty familiar with the SiCortex chip's weaknesses. I'm a kernel guy,
so sequential integer performance often mattered more for the code I
worked on than parallel or FP performance. A 700MHz single-issue
processor with weak memory ordering and poor performance for shared data
(even read-only) can be a little frustrating sometimes. What made it
worthwhile was the *system* architecture, not the chip. A "free"
scalable interconnect is a pretty cool thing to have. Imagine running
map/reduce, or your favorite distributed key/value store, across an
interconnect that's directly connected to your processor and never seems
to bottleneck. Weak CPUs? Just use more! :)
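
As a toy illustration of that shape of job (hypothetical code; worker threads standing in for nodes, and an in-process merge standing in for the interconnect): a word-count map/reduce where each worker builds its own private counts during the map phase and the reduce step merges them at the end.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Toy in-process map/reduce (word count). No shared state during the map
// phase, which is what makes "many weak cores, well connected" workable.
public class ToyMapReduce {
    public static void main(String[] args) throws Exception {
        final String[] docs = {
            "the cloud is made of many small machines",
            "many small machines beat one big machine",
            "the interconnect is the machine"
        };

        int workers = 8;  // stand-in for many weak cores/nodes
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Map<String, Integer>>> partials =
                new ArrayList<Future<Map<String, Integer>>>();

        // Map: one task per document, each builds its own private counts.
        for (final String doc : docs) {
            partials.add(pool.submit(new Callable<Map<String, Integer>>() {
                public Map<String, Integer> call() {
                    Map<String, Integer> counts = new HashMap<String, Integer>();
                    for (String w : doc.split("\\s+")) {
                        Integer c = counts.get(w);
                        counts.put(w, c == null ? 1 : c + 1);
                    }
                    return counts;
                }
            }));
        }

        // Reduce: merge the partial maps on the way back.
        Map<String, Integer> total = new HashMap<String, Integer>();
        for (Future<Map<String, Integer>> f : partials) {
            for (Map.Entry<String, Integer> e : f.get().entrySet()) {
                Integer c = total.get(e.getKey());
                total.put(e.getKey(), c == null ? e.getValue() : c + e.getValue());
            }
        }
        pool.shutdown();
        System.out.println(total);
    }
}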

> Then the way to sell these systems is - in addition to direct sales - in
> partnership with big systems corporations (IBM, HP, Oracle Sun and more)
> who can OEM or resell them as part their cloud solutions. A small start
> up never has the power selling of this giants. These large corporations
> wil lnot hesitate to purchase an hypothetical SiCortex2 as soon and they
> see success in monetizing the solution...

Perhaps. SiCortex was conceived as an HPC company, by HPC guys, and
"cloud" came along too late to change that image/direction. We did
experiment some with using an SC648 as a memcached server, but that was
*very* late in the game and never quite got off the ground. If we had
pursued that nascent market earlier, if we had spent more resources on
integer or memory performance instead of FP, ..., but then of course it
would have been a different box and a different company. I think for
now the "small pieces, well connected" approach will be applied in other
ways.

Jeff Darcy

unread,
Nov 11, 2009, 10:41:22 PM11/11/09
to cloud-c...@googlegroups.com
Daniel Ghidali wrote:
> No need to get so defensive. I was comparing based on numbers
> published for SiCortex, using the theoretical published 1.4Gfl per core
> for the SiCortex MIPS processors, and comparing it to the same
> theoretical performance for a 2 socket 2.67 Ghz Nehalem
> server..85.44Gfl. Yes..HPL is an "artificial" benchmark...but its also
> an accepted industry standard.
>
> Bottom line is that a pair of Intel 2 socket nodes, gives you 32 cores,
> running at almost 4x the speed, and double the FLOPS/cycle of the
> SiCortex MIPS cores, at less than half the price you quoted for the 72
> core SiCortex system. Those numbers are tough to work around.

I still say it's a bad comparison, using the wrong numbers. I just
priced out a system like you describe, from your own company. For *one*
such system, it came out to $9800. Even with your figure of 85 GFLOPS,
that's about $115 per GFLOPS. If your application can't take advantage
of SSE, you'll have to quadruple that. The SiCortex number works out to
about $149 per GFLOPS, and that system would have more memory bandwidth
so *actual* performance for some real apps would look even more
favorable. That's with the old chip, BTW. The numbers with the new
one, which was less than a year away, would have been "tough to work
around" in the other direction. Will Intel-based systems get down to
$25/GFLOPS this year? Let's wait and see.

> Blue Gene is in a different class altogether, in terms of scalability. I
> was comparing sub $20k deskside systems here.

OK, fine. Run the numbers for the SC648 or SC5832. Don't forget to
include that big pile of IB gear that you'd need to connect those two
racks or so of Nehalem servers, vs. a built-in scalable interconnect.

If all somebody wanted was a single box, without the capability to run
the *exact same code* (including interconnect drivers and such, without
weird performance bumps and dips) on a much bigger system, then you're
right that the SC072 wouldn't have been a good choice. That's looking
at it the wrong way, though; divorcing the SC072 from the rest of the
product set makes as little sense as divorcing a single QS22 or Altix
ICE blade from theirs. Scalable systems cost more at the low end. The
fact that they *are* scalable has to be factored into the equation. I
wouldn't have built a data center with racks full of SC072s either, but
as development or teaching platforms they were well worth what they
cost. The company didn't fail because the SC072 wasn't a good PC.

Daniel Ghidali

unread,
Nov 12, 2009, 12:23:23 AM11/12/09
to cloud-c...@googlegroups.com
Not to get picky on numbers, but even at full list price, a single 1U x3550 M2 rack server, with 2 x5550 Nehalems and 24GB RAM, comes in at under $6200. As to scalability? Well, you'll have to wait for the new Top500 list to come out next week, but that $/GFlops number you quoted seems very reasonable, even at full list, with all the necessary IB and infrastructure added. And, as we all know, the x86 $/core is going to come down dramatically next year.

Look, I'm not claiming that commodity clusters are suitable, or scalable, for every workload... although I haven't seen a better solution yet for cloud/grid and general-purpose HPC. However, IMO, alternative solutions need to show either massive scalability (in the 20k+ core range) or great savings (which can also include TCO) in order to compete.

That all being said, SiCortex had some great design points, both inside and out. I got to see one of the larger systems at SC07, and it did look very cool, with the gull-wing doors and blue LEDs. :-)





From:        Jeff Darcy <je...@pl.atyp.us>
To:        cloud-c...@googlegroups.com
Date:        11/11/2009 08:26 PM
Subject:        Re: [ Cloud Computing ] Re: Multicore vs. Cloud Computing





Jeff Darcy

unread,
Nov 12, 2009, 8:59:12 AM11/12/09
to cloud-c...@googlegroups.com
On 11/12/2009 12:23 AM, Daniel Ghidali wrote:
> Not to get picky on numbers, but even at full list price, a single 1U
> x3550m2 rack server, with 2 x5550 Nehalems and 24GB RAM comes in at
> under $6200.

Only 24GB? The SC072 would have had either 48GB or 96GB. Try pricing
those out to get an honest comparison.

> And, as we all know the x86 $/core is going to
> come down dramatically next year.

To $25/GFLOP? Would you like to bet anything on that?

> Look, I'm not claiming that commodity clusters are suitable, or scalable
> for every workload..although I haven't seen a better solution yet for
> cloud/grid and general purpose HPC. However, IMO, alternative solutions
> need to show either massive scalability (in the 20k+ core range) or
> great savings (which can also include TCO) in order to compete.

Why 20K+? Do you have any market analysis to back that up? We did, and
it showed that the greatest demand was at the very bottom of the Top500
or just below. Anybody familiar with exponential distributions and the
"long tail" and such shouldn't be too surprised. 20K cores with your
favored technology means fifty or more racks, even more counting
interconnects and extra power/cooling infrastructure. That's way more
money and way more space and way more administrative complexity than
most users could possibly justify. It's setting the target way too
high, where IBM and Cray are already solidly entrenched; trying to sell
such systems against them is a *certain* route to business failure.

If you're building a cloud instead of an HPC system, a lot of things
change. 20K cores isn't so much for a public-cloud provider, but
they'll have to be x86 cores because that's what public-cloud users
expect. Licensing would be prohibitive, so that means no task-optimized
processors and little leeway for a custom interconnect (not that they're
as valuable in that environment) either. That leaves little room for
meaningful differentiation. All you're left with is trying to sell
essentially the same box as Dell/IBM/HP/everyone and their dog. That
might not sound so bad when you're at IBM, but I'd never join a startup
with that business plan. The hardware side of the cloud seems to
provide limited opportunity for newcomers; all the real action is on the
software side.

There are a lot of ways to fail in this industry. If the SiCortex
technical and market decisions were so bad, why had they smashed sales
records for two quarters in a row? How could they have been on track to
become profitable in 2010? Simply, it's because the technology and the
positioning weren't as wrong as you seem to think. The company
succumbed to a completely different set of issues related to finance and
the economy. Those are the areas where different approaches might have
led to a better outcome. It's easy to kibitz, but please pardon me if I
put little stock in the opinions of people who have never even addressed
(let alone solved) the problems that were actually fatal - particularly
those who keep making claims contrary to fact and proposals that would
certainly have led to an even earlier demise.

Peglar, Robert

unread,
Nov 12, 2009, 9:20:57 AM11/12/09
to cloud-c...@googlegroups.com
Indeed, Jeff is correct. There are also some old geezers on this group
that remember things like scoreboards and the first pipelines, ALUs and
such.

This idea - having CPUs do _multiple things at once_ is nearly a
50-year-old idea. I refer you (Mr. Espinoza) to the work of Cray and
Thornton in the late '50s/early '60s. Integer, shift, floating point,
pop count, mult/divide, boolean, etc. One CPU, multiple functional
units.

Rob


---
Robert Peglar
Vice President, Technology, Storage Systems Group

Email: mailto:Robert...@xiotech.com
Office: 952 983 2287
Mobile: 314 308 6983
Fax: 636 532 0828
Xiotech Corporation
1606 Highland Valley Circle
Wildwood, MO 63005 http://www.xiotech.com/ : Toll-Free 866 472 6764


Jim Starkey

unread,
Nov 12, 2009, 12:21:18 PM11/12/09
to cloud-c...@googlegroups.com
Jeff Darcy wrote:
> Perhaps. SiCortex was conceived as an HPC company, by HPC guys, and
> "cloud" came along too late to change that image/direction. We did
> experiment some with using an SC648 as a memcached server, but that was
> *very* late in the game and never quite got off the ground. If we had
> pursued that nascent market earlier, if we had spent more resources on
> integer or memory performance instead of FP, ..., but then of course it
> would have been a different box and a different company. I think for
> now the "small pieces, well connected" approach will be applied in other
> ways.
>
>
The HPC market has always been a crapshoot. Intuitively, there should
be a market there, but many, many well-financed, well-executed companies
have failed -- Alliant, Kendall Square Research, Thinking Machines, to
name a few (maybe it's something about Boston's water?).

The problem seems to be that there are astonishingly few applications
that require vast numbers of FLOPs. During the heyday at Cray, their
VP of marketing told me that nearly 100% of the Crays sold ran one of 16
to 18 "codes" (as the supercomputer crowd and the spooks like to call software).
The reason is that there are some problems for which putting up with weird
and expensive architectures is worth the cost, whatever it may be, but
there aren't a lot of them.

People don't buy computers because they're neat (academics excepted, of
course), but because they solve problems. A super-computer vendor has
to provide a killer solution (or a much cheaper solution) to an open
problem to succeed. If they don't find a problem -- and provide the
solution -- before the money runs out, well, there's a fire sale.
Academics buy solutions looking for problems, but companies don't. And
even a cheaper solution to an already solved problem is a hard sell to
agencies and departments unconstrained by budgets.

It just isn't enough for a new technology company to be better or
cheaper. If it wants to survive, it has to do something that the big
guys either can't do or do badly. Being green obviously wasn't enough.
If a company wanted to save kilowatts, it would be easier to switch to
compact fluorescents (in compatible sockets) than to rewrite a lot of
working software that already runs on machines that are already paid for.

Yeah, it's sad to see the hard work of talented folks go down the tube,
but there is much to be learned from failure.

--
Jim Starkey
Founder, NimbusDB, Inc.
978 526-1376

Peglar, Robert

unread,
Nov 12, 2009, 7:25:02 PM11/12/09
to cloud-c...@googlegroups.com
Jim is right. I worked in the supercomputer biz in the '80s (CDC, ETA)
and codes were very few and almost exclusively written in FORTRAN.

NASTRAN was big. Certain CFD codes ('weather', 'ocean' codes) as well.
A few hand-rolled codes that the spooks ran, mostly decrypt. Another
group was petroleum codes, i.e. seismic modeling.

After that, it got pretty sparse (no pun intended).

Rob


Alejandro Espinoza

unread,
Nov 12, 2009, 9:01:38 PM11/12/09
to cloud-c...@googlegroups.com
@Jeff, @Robert,

Can you guys share the links? None of my experiments show that without some kind of round-robin scheduling going on. So can you please share references to such techniques? I would really be interested in learning about and experimenting with this, if that is the case. At this moment, I cannot think of a possible way a CPU could do two tasks simultaneously, and I would really be interested in learning how to accomplish that.

Now, just to make things clear, we are talking about two parallel tasks running on the same processor simultaneously. We are talking about what Jeff calls True Parallelism.

Thanks in advance,
Alex.

Alejandro Espinoza

unread,
Nov 12, 2009, 9:19:40 PM11/12/09
to cloud-c...@googlegroups.com
@Jeff,

OK, I missed this post. I understand now what you are talking about, and we are not talking about the same thing. The kind of parallelism you are talking about is not portable, and I am definitely not referring to it; it is of no use to me in the cloud unless I am working on the fabric, and I am not. It is not practical to talk about that kind of parallelism in the cloud unless the fabric allows you to touch the processors in each machine. That is simply impractical with current technology.

So I am really sorry that you feel offended by my definition, but I had no way of knowing, without knowing who you are, that you would know the difference. It is always good to bring it up.

Regards,
Alex.



Peglar, Robert

unread,
Nov 13, 2009, 8:51:13 AM11/13/09
to cloud-c...@googlegroups.com
"The Design of a Computer - the Control Data 6600", Thornton, 1970.

http://portal.acm.org/citation.cfm?id=1102018&dl=GUIDE&coll=GUIDE&CFID=62776389&CFTOKEN=79077003

Rob


Sassa

unread,
Nov 13, 2009, 4:48:07 PM11/13/09
to Cloud Computing
Is a hyperthreaded CPU two CPUs or one? AFAIK the difference in die size
is ~10%, and you get two scheduling units progressing independently (apart
from moments when shared resources are contended).

Sassa


Jeff Darcy

unread,
Nov 13, 2009, 8:52:46 PM11/13/09
to cloud-c...@googlegroups.com
Sassa wrote:
> Is hyperthreaded CPU two CPUs or one CPU? AFAIK the difference in size
> is ~10%, you get two scheduling units progressing independently (apart
> from moments when shared resources are contended)

It's a bit old, but AFAIK it mostly still applies and Jon Stokes
explains it far better than I ever could.

http://arstechnica.com/old/content/2002/10/hyperthreading.ars

Short answer: hyperthreading makes one processor (core) appear as two to
the OS, but in many important respects it's really more like one. In
particular, it's still only one set of functional units. Modern
processors do a pretty good job of keeping functional units busy even
for a single thread, the exception being that most threads tend to be
either integer-intensive or FP-intensive. Thus, if you had one thread
of each type they'd be using mostly separate sets of functional units
and you could see pretty good gains (though with more than one such pair,
the OS usually wasn't HT-aware enough to take advantage).

As of the Pentium-4-era Xeons, I vaguely recall there were other issues
as well. Contrary to Intel's claims at the time, one logical CPU could
pretty easily starve the other for an arbitrarily long time if it was
spinning in cache - which would commonly happen in lock code*. This is
one of the things that the current "second generation" implementation of
multithreading might have improved upon, though, so take that part with
a grain of salt.


* Interestingly, this problem has resurfaced in virtualization. If you
have a virtual machine using more than one virtual CPU and they're not
scheduled onto physical CPUs together ("gang scheduling"), then one can
get scheduled without the other and then waste all of its time spinning
on a lock held by the other (which isn't even running). Oops.
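
Back on the hyperthreading point: if you want to see the "one core, two
logical CPUs" arrangement from the OS side, here's a minimal sketch in C,
assuming a Linux /sys layout (the path is Linux-specific and the program is
purely illustrative):

#include <stdio.h>

int main(void) {
    char path[128], buf[64];
    for (int cpu = 0; ; cpu++) {
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
                 cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break;                     /* assume no more logical CPUs */
        if (fgets(buf, sizeof buf, f)) /* e.g. "0,4\n" on an HT Nehalem */
            printf("cpu%d shares a core with logical CPUs %s", cpu, buf);
        fclose(f);
    }
    return 0;
}

On a hyperthreaded box each pair of sibling logical CPUs reports the same
list; on a non-HT box each CPU lists only itself.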

Eryndlia Mavourneen

unread,
Nov 14, 2009, 11:50:33 AM11/14/09
to Cloud Computing
I agree with you, Ken. I have been thinking that OSs and computer
languages/run-time systems need to be able to accept and implement an
additional parameter when a request to create a task or thread is
made. This parameter could take the form of an enumeration such
as (Inside, Nearby, Outside) or the form of a number, 0-100 for
instance. The purpose of this parameter would be to allow the
programmer to indicate a level of desirability for the task or thread
to execute within the multi-CPU, within the cluster, or within a
network of systems. The lowest value of either the enumeration or the
numeric range would force execution within the multi-core/-CPU box.
The highest value would force execution within the network, that is,
not nearby.

I personally lean towards a large numeric range, as it gives the most
flexibility to both the programmer and also to the system, which is
managing the availability of resources and attempting to accurately
service requests for those resources.

We may find, too, that additional parameters would be desirable -- if
feasible -- such as an indication of the need for a small/large amount
of memory, a CPU of particular speed, etc. It may be that the task/
thread needs a huge amount of memory but can take a week or a month
before the results are needed.

Any comments on the desirability and feasibility of this approach?
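
To make the idea concrete, here is a minimal sketch of what such a creation
call could look like. The names and the enumeration are hypothetical, not an
existing OS or language API, and only the "Inside" case is actually
implemented (as a plain local thread); the other two just mark where a
runtime would hand the task off to a cluster or wide-area scheduler:

#include <pthread.h>
#include <stdio.h>

/* Hypothetical placement hint: where the new task would prefer to run. */
typedef enum { PLACE_INSIDE, PLACE_NEARBY, PLACE_OUTSIDE } placement_t;

typedef void *(*task_fn)(void *);

/* Hypothetical creation call: only PLACE_INSIDE is implemented here. */
int create_task(task_fn fn, void *arg, placement_t where, pthread_t *out) {
    switch (where) {
    case PLACE_INSIDE:   /* run within the local multi-core/-CPU box */
        return pthread_create(out, NULL, fn, arg);
    case PLACE_NEARBY:   /* would submit to the cluster's scheduler */
    case PLACE_OUTSIDE:  /* would submit to a remote/cloud scheduler */
    default:
        fprintf(stderr, "placement %d not implemented in this sketch\n",
                (int)where);
        return -1;
    }
}

static void *hello(void *arg) {
    (void)arg;
    printf("running inside the local multi-core box\n");
    return NULL;
}

int main(void) {
    pthread_t t;
    if (create_task(hello, NULL, PLACE_INSIDE, &t) == 0)
        pthread_join(t, NULL);
    return 0;
}

A numeric 0-100 hint could replace the enumeration in the same signature,
leaving the runtime free to interpret intermediate values.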

-- Eryndlia

On Nov 9, 7:13 am, Ken <kall...@wattsys.com> wrote:
> Excellent discussion.
>
> The applications run via a service enabled cloud must be able to adapt
> the implementation of that service to whatever computational fabric it
> finds.  This means that the partitioning of tasks and data for
> multicore, bridged multicore or clustered environments cannot be hard-
> coded, but must be adaptable.  Obviously, this is rare.  Most software
> is terribly designed for massively parallel environments - especially on
> HPC clouds.
>
> The cloud service must be agnostic to the particular implementation
> details. I'm not sure that is well understood.
>
> Ken Lloyd

Sassa

unread,
Nov 14, 2009, 5:41:40 PM11/14/09
to Cloud Computing
Let's also see what Alejandro makes out of this.

Here is another article I saw before:

ftp://download.intel.com/technology/itj/2002/volume06issue01/vol6iss1_hyper_threading_technology.pdf

"pretty good job of keeping functional units busy" even on a single
processor, or not, please, see Page 13.

Single processor with hyper-threading: 21% increase compared to single
processor without hyper-threading

But the best bit is: Two processors without hyper-threading: ~60%
increase compared to single processor without hyper-threading

So 10% more transistors on a single die still looks like a bargain vs
100% more transistors on an additional die.
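
(Put as crude arithmetic: roughly 21% more throughput for ~10% more
transistors, versus ~60% more throughput for ~100% more transistors - about
2.1 vs 0.6 "percent gained per percent of die spent" - with the usual caveat
that the ratio is heavily workload-dependent.)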


Nehalem has hyperthreading too.


Sassa

Alejandro Espinoza

unread,
Nov 15, 2009, 2:33:52 AM11/15/09
to cloud-c...@googlegroups.com
@Sassa,

My comments were related mostly to parallelism, not performance of multithreading. At the task level, which is mostly what I am working at, there is no way there can be parallelism in a single CPU. Jeff already showed me that single instructions can be done in parallel inside the chip, which is very interesting. I am actually researching that right now.

I learned that I didn't know a lot about chip architecture. That doesn't really mean that a CPU can handle multiple tasks simultaneously; it only means that there is some limited form of parallelism inside the chip.

Now regarding multithreading: as you may know, it is only logical. So it may improve performance, but that doesn't make it parallel. It is concurrent for sure.

But thank you very much for the article. I am learning a lot about hardware, especially CPUs. And thanks to Jeff I am very much booked for the rest of the week regarding research.

If you have any more papers/articles please share them. I am very much interested.

Regards,
Alex.


J. Andrew Rogers

unread,
Nov 15, 2009, 3:17:59 AM11/15/09
to cloud-c...@googlegroups.com
On Sat, Nov 14, 2009 at 2:41 PM, Sassa <sass...@gmail.com> wrote:
>
> Single processor with hyper-threading: 21% increase compared to single
> processor without hyper-threading
>
> But the best bit is: Two processors without hyper-threading: ~60%
> increase compared to single processor without hyper-threading
>
> So 10% more transistors on a single die still looks like a bargain vs
> 100% more transistors on an additional die.


It is not this obvious in practice.

I think part of the problem here is that the definition and
implementation of hardware multithreading varies widely and the
implications for various applications vary widely as a consequence.
Intel's implementation, to use that as an example, has been getting
more useful and robust over time (and I have not worked with Nehalem)
but has been far from a broadly useful implementation traditionally.
Whether or not any particular code will be able to exploit
multithreading usefully on a particular architecture tends to be
something that is difficult to ascertain without actually testing it.
Worse, for processors that try to leverage some form of hardware
multithreading you can frequently get *worse* performance if you do
not tune your codes to exploit the peculiarities of the hardware.
Plenty of people have found themselves in this boat when using Intel's
hyperthreading and every other flavor of multithreading for that
matter.


Generally speaking, hardware multithreading is a latency-hiding trick,
not a processor throughput trick even though the net effect may be
higher throughput. If your code is efficiently dispatching operations,
use a second core. If your code is wasting all of its time on cache
misses and spinlocks (inherent to some codes), a clever hardware
multithreading implementation will allow you to use the processor you
already have more efficiently.

Second core = more cycles available, hardware multithreading = fewer
cycles wasted. A distinction with a difference that solves different
problems for different codes.


--
J. Andrew Rogers
realityminer.blogspot.com

Jeff Darcy

unread,
Nov 15, 2009, 6:53:45 AM11/15/09
to cloud-c...@googlegroups.com
Sassa wrote:
> Let's also see what Alejandro makes out of this.
>
> Here is another article I saw before:
>
> ftp://download.intel.com/technology/itj/2002/volume06issue01/vol6iss1_hyper_threading_technology.pdf

It's a good document, despite being from a source with a vested interest
in exaggerating the benefits of hyperthreading.

> Single processor with hyper-threading: 21% increase compared to single
> processor without hyper-threading
>
> But the best bit is: Two processors without hyper-threading: ~60%
> increase compared to single processor without hyper-threading
>
> So 10% more transistors on a single die still looks like a bargain vs
> 100% more transistors on an additional die.

It's worth noting that additional die != additional core on the same die.

Sassa

unread,
Nov 16, 2009, 6:54:53 AM11/16/09
to Cloud Computing
Good point. I hadn't thought about it this way.

Basically, both solutions increase Useful_cycles = COP x Available_cycles.
Adding cores increases Available_cycles; hardware multithreading increases
COP (for some codes).


On the other hand, if your code spends a lot of time in cache hits,
threads spend less time snooping caches across CPUs. The Intel doc
that I quoted didn't seem to imply that the CPU runs only one thread
until it becomes busy with a slow instruction (e.g. a cache miss). I
don't do that level of profiling myself to know what it actually turns
out to do, so I'll take Jeff's word that it may not be entirely so.


Sassa


On Nov 15, 8:17 am, "J. Andrew Rogers" <reality.mi...@gmail.com>
wrote:

Sassa

unread,
Nov 16, 2009, 7:52:58 AM11/16/09
to Cloud Computing
Agreed. But we also need to agree that all figures are biased,
including negative figures by "independent experts", because they need
to retain their status of "peers of the giants".

Oracle Open World '09 had a session about tuning Oracle software on
Nehalem. You can say it is biased twice. But the benchmark results
allowed them to recommend:

"Enable Hyper-Threading:

30-40% improvement due to Hyper-Threading as measured by
SPECjAppServer2004 on system with Quad-Core Intel Xeon processor
X5570"




Sassa

On Nov 15, 11:53 am, Jeff Darcy <j...@pl.atyp.us> wrote:
> Sassa wrote:
> > Let's also see what Alejandro makes out of this.
>
> > Here is another article I saw before:
>
> >ftp://download.intel.com/technology/itj/2002/volume06issue01/vol6iss1...

Sassa

unread,
Nov 16, 2009, 8:25:51 AM11/16/09
to Cloud Computing
1. Maybe I missed the semantic difference between Parallel and
Concurrent. There may be wiser definitions of Parallel, but if the
threads are making progress independently, they can be seen as
parallel; in fact it doesn't matter: from the subjective point of view
of an individual thread, the other thread is not executing - because its
state cannot be observed. If you can observe the state of one thread
from the other, then you have sequential execution interleaved with
parallel execution.

2. With super-threading and hyper-threading more than one task is
active (loaded) into a single CPU.


Sassa


Greg Pfister

unread,
Nov 16, 2009, 4:50:49 PM11/16/09
to Cloud Computing
Not pointing at anyone in particular with this comment, but...

Let's confuse the issue even more and point out that every processor
above the level of a junior-level class project is pipelined: While
instruction N is being fetched, instruction N-1 is being decoded, N-2
is accessing a register, N-3 is (perhaps) doing a memory request, and
so on.

That's parallel. It's doing multiple instructions at the same time. It
increases performance very significantly. It's just within a single
instruction stream. It's one processor. It didn't require the
programmer to do anything (well, OK, you can optimize for this).

And there's out-of-order execution. High end implementations decode
multiple instructions simultaneously, and execute as many at the same
time as they can find that don't interfere with each other. Those are
*really* doing instructions in parallel, not just pipelining.
(Optimizing code for this is lots of fun. Unrolling loops, for
example, helps.)
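
For anyone who hasn't played with this, a minimal C sketch of the unrolling
trick (illustrative only, and ignoring the floating-point reassociation it
implies): the straightforward sum has one long dependence chain, while the
unrolled version keeps four independent partial sums that an out-of-order
core can execute in parallel.

#include <stddef.h>

/* Straightforward sum: every addition depends on the previous one,
   so the adder pipeline spends most of its time waiting. */
double sum_simple(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Unrolled by four with independent partial sums: an out-of-order
   core can keep several additions in flight at once. */
double sum_unrolled(const double *a, size_t n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)      /* leftover elements */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}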

And yes, I'd also call at least certain types of multithreading
parallelism in a single processor. Some kinds are really not parallel;
they're just very fast switching between threads. Others are parallel;
they genuinely run multiple threads at the same time -- in addition to
the pipelining and out-of-order execution mentioned above. One of the
main reasons for doing both is to hide latency to bigger caches and
DRAM, which arguably isn't "real" parallelism, it's just superfast
thread switching, but real parallelism does creep in there, too.

Greg Pfister
http://perilsofparallel.blogspot.com/


Jim Starkey

unread,
Nov 16, 2009, 6:21:10 PM11/16/09
to cloud-c...@googlegroups.com
A few points. First, super-scalar execution and pipelining are invisible to an
executing instruction stream. Things may be performed out of order, entire
branches may be tentatively executed, but the program sees only a
plodding, single-instruction, junior-level class project. Whether this
is good or not, it means that programs and programmers are both backwards
compatible. Second, a major advantage of hyper-threading (Intel) and
hardware threads (Sun) is avoiding unnecessary operating-system-level
context switches. Stealing minor cycles is all well and good, but keeping
a CPU out of the OS kernel is even better. Third, getting significant
parallelism from multiple threads on different cores requires a very
skilled engineer and an excellent architect.

Jeff Darcy

unread,
Nov 16, 2009, 7:01:09 PM11/16/09
to cloud-c...@googlegroups.com
Jim Starkey wrote:
> A few points. First, super-scalar and pipelining are invisible to an
> executing instruction stream.


Not necessarily. They might be invisible to a higher-level-language
programmer, but at the level of the instruction stream (the phrase you
used) many of these things become quite visible. Knowing about branch
or load delays, or multi-cycle latencies for advanced functional units,
can be necessary just to have the code be correct. Knowing how code
with lots of branches or pointer chasing might stall your pipeline can
be important to make it perform well. All of this is pretty
architecture dependent, but many times I've seen experts double the
performance of functionally-complete code by taking advantage of
"invisible" parallelism.


> Second, a major advantage hyper-threading (Intel) and
> hardware threads (Sun) is to avoid unnecessary operating system level
> context switches.


Some of that gets blown out of the water when you add virtualization, or
for that matter page faults. There are typically many ways to trap into
the OS other than with an explicit syscall, and few programmers know all
of them.


> Third, getting significant
> parallelism from multiple thread on different cores require a very
> skilled engineer and an excellent architect.


I guess it depends on what you mean by "significant" parallelism.
Personally I think the ability to harness parallelism and concurrency
lies somewhere between novice and expert - journeyman, perhaps? - and is
moving down the scale as more systems embody those characteristics. The
true experts IMO are the ones who can not only write such code and debug
it by brute force, but *proactively* develop code to avoid many of the
relevant pitfalls.

Joseph G. Baron

unread,
Nov 16, 2009, 7:34:18 PM11/16/09
to cloud-c...@googlegroups.com
What he (@Greg) said, and what he (@Jim) said. It is also worth
mentioning that for most applications or workloads, the IMPLICIT
parallel (or concurrent) content -- that is, instruction-level or
hyper-threaded CPU stuff -- far, far exceeds the EXPLICIT parallel
content -- that is, loop-level, explicit pthreads, or multi-process
stuff.

It is easy to forget this because you get it "for free". That is, the
compiler and/or the chip does it for you. Besides the explicit level
being hard to do, as Jim points out, there is just less of it to be
gotten in most code.

— Joe
Sent from my iPhone (919) 809-9542



Digvijay Singh Rathore

unread,
Nov 17, 2009, 10:17:40 AM11/17/09
to cloud-c...@googlegroups.com
Greg,
 
Olé. A good amount of basic info here, and it's standardized -
 
 
cheers!!!!
VJ
Digvijay "VJ" Singh Rathore
 
 



Digvijay Singh Rathore

unread,
Nov 17, 2009, 10:30:18 AM11/17/09
to cloud-c...@googlegroups.com
Greg,
 
Another parallel I thought I could draw while we discuss parallel and sequential -
 
From fab-less design using Verilog/VHDL, compared to the C I used earlier, this (below) was important for us to understand right from the start when learning those languages to build custom FPGAs:
 
 
Of course, one key difference between hardware and software is how they "run." A hardware design consists of many elements all running in parallel. Once the device is powered on, every element of the hardware is always executing. Depending on the control logic and the data input, of course, some elements of the device may not change their outputs. However, they're always "running."
In contrast, only one small portion of an entire software design (even one with multiple software tasks defined) is being executed at any one time. If there's just one processor, only one instruction is actually being executed at a time. The rest of the software can be considered dormant, unlike the rest of the hardware. Variables may exist with a valid value, but most of the time they're not involved in any processing.
This difference in behavior translates to differences in the way we program hardware and software code. Software is executed serially, so that each line of code is executed only after the line before it is complete (except for nonlinearities on interrupts or at the behest of an operating system).

Thoughts???
 
VJ Singh
Digvijay "VJ" Singh Rathore
 


Vikas Deolaliker

unread,
Nov 17, 2009, 5:13:57 PM11/17/09
to cloud-c...@googlegroups.com
I believe concurrency is a temporal logic while parallelism is an execution logic. In other words, when a system is running multiple tasks at a point in time, it is concurrent. If a program can be split up and its parts run two days apart and later joined, it is parallelism.
 
Has anybody explored p2p cloud computing architectures?

Greg Pfister

unread,
Nov 17, 2009, 6:29:00 PM11/17/09
to Cloud Computing
On Nov 16, 4:21 pm, Jim Starkey <jstar...@nimbusdb.com> wrote:
> A few points.  First, super-scalar

Thanks, that's the term that slipped my mind.

> and pipelining are invisible to an
> executing instruction stream.  Things may be performed out order, entire
> branches may be tentatively executed, but the program sees only
> plodding, single instruction, junior-level class project.

As I indicated, this is partly true. Recompilation is frequently
recommended to get the best performance from each new implementation.

The key thing for me is that bugs unique to parallelism aren't
introduced. The performance gains can easily be in the range
theoretically possible at current levels of multicore (4X or more).

And the discussion was, I thought, whether a single processor can
execute code in parallel, period. Just wanted to be sure all the
possibilities were included.

> Whether this
> is good or not, it means that program and programmers are both backwards
> compatible.

Yes, programmers do not have to reason about parallelism, except in a
much more limited context.

> Second, a major advantage hyper-threading (Intel) and
> hardware threads (Sun) is to avoid unnecessary operating system level
> context switches.  Stealing minor cycles is all well and good,

I don't think I disagree with your context-switch point, although I
haven't heard it raised as a major benefit.

About "stealing minor cycles": Minor? I beg to differ. An un-covered
cache miss can cost 10s to 100s of CPU cycles, and can happen a lot
more often than context switches.

Greg Pfister
http://perilsofparallel.blogspot.com/