
processors of the future: super-computer-on-a-chip?


Chris Thomasson

Aug 22, 2006, 5:31:35 AM
The following will be wrapped around the basic theme:

"in the year 20XX"


Okay... Prompted by the recent discussion of how multithreading will interact
with mega/giga/tera-core chips, raised by Barry Kelly in "The Problem with
Threads" thread over in c.p.t.:

http://groups.google.com/group/comp.programming.threads/msg/4416cc8e32fdf169?hl=en

http://groups.google.com/group/comp.programming.threads/browse_frm/thread/b192c5ffe9b47926/5301d091247a4b16?hl=en#5301d091247a4b16


as a programmer who makes frequent use of advanced thread synchronization
techniques, I decided to ask this group the following questions:


" How many cores do you think a chip could have, lets say 10 or 20+ years
from now? "


" Are there any known laws/phenomena that are imposed/governed by the laws
of physics, as they are understood today of course, that can put a hard
limit on the total number of cores that can fit on the usable surface area
of a chip? "


IMHO, it seems that rapid innovation in highly advanced nanotechnologies
could make a super-computer-on-a-chip possible. Indeed...


Warning!:

Please note that I am not a hardware guy in any sense of the term... I try
hard to make a decent living creating highly scalable server software
components out of the instruction sets that are ultimately created by some
of the hardware gurus that grace comp.arch with their intellect and, most
importantly, IMHO, their valuable TIME... Here are some of my brief
ramblings and probably naive opinions on a specific aspect of this subject,
so please "try" to bear with me for a moment. It may also help to keep
an open sense of humor...

Thank you!

:)


Okay. We can zoom down to the sub-atomic level and we can zoom out into the
cosmos, but how far can we go in either direction? If one cannot zoom out/in
enough to suit one's specific needs, well, IMHO: a prompt increase in raw
power and a rash of clever innovations can probably allow one to go further...
Therefore, it is my humble thesis that the distance limit for zooming in or
out of anything is infinite. For instance, and IMO, the entire "universe"
probably looks like a normal galaxy at a certain distance...

Instead of a collection of solar systems that orbit a common central source
of gravity (e.g., a black hole), a universe contains a collection of galaxies
that orbit another, exponentially stronger, common central source of gravity
(e.g., perhaps a supermassive black hole)... This is an example of my,
perhaps naive, opinion that the distance one can zoom out from, say, a
solar system, is indeed infinite.

A pattern for this could be as simple as the following fractal-like scheme:


1) Solar System = Collection Of Planets Orbiting Gravity: Star

<zoom out>


2) Galaxy = Collection Of Solar-Systems Orbiting Gravity: Black Hole
Strength 16

<zoom out>


3) Universe = Collection Of Galaxies Orbiting Gravity: Black Hole Strength
16^16

<zoom out>


4) CollectionA = Collection Of Universes Orbiting Gravity: Black Hole
Strength (16^16)^16

<zoom out>


5) CollectionB = Collection Of CollectionA's Orbiting Gravity: Black Hole
Strength ((16^16)^16)^16

<zoom out>


6) And on and on forever...
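For what it's worth, the "strength" column in the list above follows one simple rule: each zoom-out raises the previous level's strength to the 16th power. A toy sketch of that rule (purely illustrative arithmetic, not physics):

```python
# Toy model of the zoom-out pattern above: level 2 (galaxy) has strength 16,
# and every further zoom-out raises the strength to the 16th power.
def strength(level: int) -> int:
    s = 16                       # level 2: galaxy
    for _ in range(level - 3):   # each zoom-out beyond that: s -> s**16
        s = s ** 16
    return s ** 16 if level > 2 else s

# level 3 (universe) is 16**16; level 4 (CollectionA) is (16**16)**16; etc.
```

The numbers explode immediately: level 4 is already 16^256, a number with hundreds of digits.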


I think that this makes some sort of sense, in my mind at least... ?

;)


Now for the infinitely small: again IMHO, when you start to zoom in far
enough to observe atoms, and further down to the sub-atomic/quantum level,
things mysteriously and oddly seem to resemble and sort of behave like a
tiny little solar system; the nucleus is the source of "gravity" for orbiting
particles... So, at this point I think that the same pattern for zooming out
can be applied to zooming in, and, IMO, can be repeated forever... It seems
like an infinite amount of space can indeed occupy/exist in and throughout
the world of the super small, and ditto for the world of the super massive...


Oh shi^, I am getting off into fuc%king "Star Trek Land!!!!"... Beam me up,
Scotty!


Anyway, if you are still skimming over this AND if ANY of the crap I just
typed out makes any sort of sense at all, then, IMHO, you can get a truly
massive amount, perhaps even an undefined or infinite number, of cores on a
chip... Perhaps in a fractal-like chip design that mimics my personal
interpretation of the design of the cosmos, cores could contain collections
of cores, which in turn contain other collections of cores. The central
gravity source that holds a collection of cores "together" could be a memory
system that is local to each collection of cores. The cores would be
constructed in the quantum world of the super small... Need more cores?
Zoom down to another level and construct them there... Does this make any
sense at all; any thoughts?
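A toy rendering of that fractal-like layout, just to make the idea concrete (all names and numbers below are invented for illustration): each level is a collection of cores or sub-collections held together by a local memory, and the core count grows geometrically with depth.

```python
from dataclasses import dataclass, field

@dataclass
class CoreCollection:
    local_memory_kib: int                         # the "central gravity source"
    children: list = field(default_factory=list)  # cores or sub-collections

    def total_cores(self) -> int:
        if not self.children:                     # a leaf is a single core
            return 1
        return sum(c.total_cores() for c in self.children)

def build(depth: int, fanout: int = 4, leaf_mem_kib: int = 64) -> CoreCollection:
    # Each level binds `fanout` sub-collections to a bigger local memory.
    if depth == 0:
        return CoreCollection(leaf_mem_kib)
    return CoreCollection(leaf_mem_kib * fanout ** depth,
                          [build(depth - 1, fanout, leaf_mem_kib)
                           for _ in range(fanout)])
```

Need more cores? "Zoom down" one more level: `build(3)` gives 4^3 = 64 cores, `build(4)` gives 256, and so on.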


P.S.

Please try not to flame me "too" much!

;)


Del Cecchi

Aug 22, 2006, 2:09:20 PM
Can't zoom in too much. Limited by nature of things as described by
Heisenberg. Before that, you run into the atomic nature of matter.

So maybe I could make a gate out of a few atoms, and a "core" out of a
few thousand gates, talking order of magnitude here. So you might get a
lot of stuff, sort of like the human brain is a lot of stuff. But not an
infinite amount, and not indefinitely expandable.
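Del's order-of-magnitude point can be put in rough numbers. Every figure below is an illustrative guess, not a measurement, but the shape of the bound holds: even with atom-scale gates, a chip's atom budget caps the core count at something huge yet finite.

```python
# Back-of-the-envelope bound; all numbers are order-of-magnitude guesses.
atoms_per_gate = 10         # "a gate out of a few atoms"
gates_per_core = 5_000      # "a 'core' out of a few thousand gates"
atoms_per_chip = 10 ** 22   # very roughly, a cubic centimeter of silicon

max_cores = atoms_per_chip // (atoms_per_gate * gates_per_core)
# max_cores comes out around 2e17: enormous, but neither infinite
# nor indefinitely expandable, which is Del's point.
```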

--
Del Cecchi
"This post is my own and doesn’t necessarily represent IBM’s positions,
strategies or opinions.”

Mitch...@aol.com

Aug 22, 2006, 3:48:08 PM

Chris Thomasson wrote:
> " How many cores do you think a chip could have, lets say 10 or 20+ years
> from now? "

Sun has an 8-core, 32-thread Niagara processor today. At least one x86
manufacturer has taped out a 4-core PC processor.

You can expect the core count to increase by 4X by 2010. Farther out than
that is anyone's guess.

Joe Seigh

Aug 22, 2006, 5:57:28 PM

Chris only has a T2000 server, with 4 cores and 8 GB of memory, that Sun
gave him to play with. I guess he's trying to figure out how fast things
are progressing.

--
Joe Seigh

When you get lemons, you make lemonade.
When you get hardware, you make software.

Bill Todd

Aug 22, 2006, 10:51:11 PM
Chris Thomasson wrote:
> The following will be wrapped around the basic theme:
>
> "in the year 20XX"
>
>
> Okay... With the recent topic of how multithreading will interact with
> mega/giga/tera-core chips presented by Barry Kelly in "The Problem with
> Threads" discussion over in c.p.t.
>
> http://groups.google.com/group/comp.programming.threads/msg/4416cc8e32fdf169?hl=en
>
> http://groups.google.com/group/comp.programming.threads/browse_frm/thread/b192c5ffe9b47926/5301d091247a4b16?hl=en#5301d091247a4b16
>
>
> as a programmer that makes frequent use of advanced thread synchronization
> techniques; I decided to ask the following questions to this group:
>
>
>
>
> " How many cores do you think a chip could have, lets say 10 or 20+ years
> from now? "

Well, only partially playing devil's advocate here, I'll counter with
the question:

"Who cares?"

I was glad to see that the Ed (A.) Lee who wrote the article referred to
over at c.p.t. apparently wasn't the same Ed (K.) Lee whose
contributions I've come to respect over the years. To me, the question
of how to handle zillions of cores puts the cart before the horse:
rather, I'd ask why most people would want such chips, rather than simply
assume that because it may be possible to build them, we clearly must,
and are also obliged to come up with ways to use them (at least
beyond the ways we *already* know how to, as in the embarrassingly
parallel applications noted there).

Just because it's become harder to improve single-thread performance
doesn't mean that it's no longer useful to do so, or that taking the path
of least hardware resistance (by ignoring single-thread performance
increases in favor of burgeoning core counts) is The Right Thing To Do.
In fact, one could argue that because single-threaded operation
characterizes such a large percentage of today's applications, and
because software has historically changed so slowly compared with
hardware, then there's relatively little reason to push multiple cores
per chip beyond *at most* a few dozen for the immediate future while
continuing to devote significant concentration to improving
single-thread performance too.

IBM (nobody's fool these past few years) certainly doesn't seem to be
relegating single-threaded performance to the back seat of its POWER
architecture: instead, after pioneering dual-core products 5 years ago
it has been steadily improving their single-threaded performance (with
only one additional nod to multi-threading in its SMT facilities which,
perhaps not coincidentally, likely offer some single-thread performance
advantages as well to threads with sufficient ILP). And only Sun has
introduced a product which has *conspicuously* traded off single-thread
for multi-core performance (a product which Sun itself agrees is
suitable only for certain niches rather than being really general-purpose).

Sure, there will be a few applications that could make really good use
of huge numbers of slower cores, but will they fund the associated
development sufficiently to overcome the resources available to develop
commodity products (always the lament of anything special-purpose these
days)? And for that matter aren't there any better uses we can come up
with for chip area?

Maybe in a decade or more some real application shift in this direction
will at least *start* to occur, but I don't expect it nearly as soon as
it becomes *possible* to build such cores, so I don't find it
particularly interesting save as an intellectual exercise for now.
Until we can at least come up with ways for software to use such
hypothetical hardware to solve some respectable range of problems far
better than they can be solved using obvious extensions to today's
mechanisms, it seems uncomfortably close to debating how many angels can
dance on the head of a pin.

- bill

Joe Seigh

Aug 23, 2006, 6:04:44 AM
Bill Todd wrote:

> Chris Thomasson wrote:
>
>>
>>
>> " How many cores do you think a chip could have, lets say 10 or 20+
>> years from now? "
>
>
> Well, only partially playing devil's advocate here, I'll counter with
> the question:
>
> "Who cares?"
>
...

Intel at least wants to know how fast those angels can dance.
http://www.theregister.com/2006/08/22/intel_suite_help/
And maybe a few new dances as well.

I can think of some applications, but they will have to wait until
those apocryphal low-power desktop chips show up in the
retail channels at retail prices (something reasonable in
quantities of less than 10,000).

The big problem is most of today's apps were designed as single
threaded, and it's extremely difficult to put the multi-threaded
design patterns in after the fact, even if those patterns are
relatively simple to begin with. Think of when they first started
multi-threading the Unix kernel and the giant kernel global lock.

It's probably as short-sighted to say that we will never need more
than X cores as it was to say the world will never need more than
5 computers.
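The giant-lock retrofit mentioned above is easy to sketch (a minimal illustration in Python rather than the actual kernel C; the names are invented): one global lock wrapped around every entry point makes legacy single-threaded code thread-safe after the fact, at the price of serializing every core.

```python
import threading

giant_lock = threading.Lock()
state = {"counter": 0}             # stands in for all the legacy shared state

def legacy_entry_point():
    with giant_lock:               # every thread funnels through here
        state["counter"] += 1      # the original single-threaded code, unchanged

def run(n_threads: int = 4, iters: int = 1000) -> int:
    state["counter"] = 0
    workers = [threading.Thread(
                   target=lambda: [legacy_entry_point() for _ in range(iters)])
               for _ in range(n_threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return state["counter"]        # correct, but no parallelism inside the lock
```

`run(4, 1000)` returns 4000 every time: the lock keeps the result correct, which is exactly why the pattern was the easy first step in multi-threading the kernel, and exactly why it doesn't scale.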

Bill Todd

Aug 23, 2006, 10:00:11 AM

Since I'd interpret most of that article as applying to chips with under
10 or so cores (in fact, the only example given is a 4-core chip), I'm
not sure how you came to that conclusion. True, it states (though
rather nebulously)

[quote]

"From everything we can see, there are certainly interesting things to
do with tens of cores and hundreds of threads," Rattner said. "That's
where we are targeted with our research."

[end quote]

but then immediately continues with

[quote]

Rattner urged chip makers not to get caught in a "core war" where they
make myriad multi-core chips just because they can. Software and tools
vendors have a lot of catching up to do before such chips will be useful
in broad terms, Rattner said.

[end quote]

which somewhat resembles the observation I made myself.

...

> It's probably as short sighted to say that we will never need more
> then X cores as it was to say the world will never need more than
> 5 computers.

That could conceivably be why I suggested nothing of the kind, but
rather that *some* broad, significant use should at least be
envisionable for such products before the industry commits
whole-heartedly to embracing them.

- bill

Peter Grandi

Aug 23, 2006, 4:42:16 PM
>>> On Tue, 22 Aug 2006 02:31:35 -0700, "Chris Thomasson"
>>> <cri...@comcast.net> said:

cristom> as a programmer that makes frequent use of advanced
cristom> thread synchronization techniques; I decided to ask the
cristom> following questions to this group: " How many cores do
cristom> you think a chip could have, lets say 10 or 20+ years
cristom> from now? "

Well, that depends on how useful it is to put multiple CPUs on a
chip; there are already at least two 16-CPU chips around or
announced (look for my postings on the subject). One can easily
imagine a doubling every 2 years, so 64-CPU chips are entirely
possible by 2010.
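The arithmetic behind that projection, sketched out (doubling every 2 years is the stated assumption, not a law):

```python
def projected_cores(start_cores: int, start_year: int, year: int,
                    doubling_period: int = 2) -> int:
    # Cores double once per `doubling_period` years from the starting point.
    return start_cores * 2 ** ((year - start_year) // doubling_period)

# 16 CPUs in 2006, doubling every 2 years, gives 64 by 2010; the same
# trend run 20 years out gives 16384, hence "anyone's guess" out there.
```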

There are also multiple-chip systems with dozens up to thousands
of CPUs (in discrete chips), usually as clusters of some sort or
another.

The 16-CPU chip by Boston Circuits has an interesting twist:
they describe it as a ''grid on a chip'', which I suspect is
more like ''cluster on a chip'', but they say ''grid'' probably
to ride on the huge PR effort by IBM about grids.

Now ''cluster on a chip'' sort of makes sense as the general
trend is to integrate whole systems on a chip, and then one can
start integrating multiple systems on a chip, why not. Cluster
software also is common and relatively easy to use. So good.

The limit really is determined by the number of transistors you
can put on a chip, and how useful it is to have a cluster for the
average user, or even whether there are ''enough users'' for it to
be viable.

That's an interesting question. For small to medium scale
scientific computing, approaches like the ClearSpeed one, where
they put a lot of ALUs/CPUs (currently 96) on a chip, might be
better. Clusters may be good for transactional work, for example
Google servers, or for streaming, like YouTube.

I wonder if either is suitable for general office use; but then
low end PCs of today are amply suitable for that. If chips with
many CPUs come into existence they may become useful for things
like interactive VR and games, but there are non trivial issues
of restructuring the way these are architected.

Greg Lindahl

Aug 23, 2006, 7:03:19 PM
In article <yf34pw3...@base.gp.example.com>,
Peter Grandi <pg...@0603.exp.sabi.co.UK> wrote:

>The limit really is determined by the number of transistors you
>can put on a chip, and how useful is to have a cluster for the
>average user, or even ''enough users'' that it is viable.

And here I thought the limit was the amount of I/O you could put on
the chip, e.g. access to main memory. If you aren't in the very tiny
memory embedded space, you care about bandwidth to main memory.

-- greg

Joe Seigh

Aug 23, 2006, 7:52:06 PM
Bill Todd wrote:

Yes, Intel is clearly worried about the "core war" where they will
be forced to build cpus that do not live up to consumer expectations.
But the vendors who are supposed to do the catching up, do they
know they're supposed to be doing that? Why should they expend
resources on what's essentially Intel's and AMD's problem?

Eric Gouriou

Aug 23, 2006, 12:07:47 PM
Bill Todd wrote:
> Joe Seigh wrote:
[...]

>> Intel at least wants to know how fast those angels can dance.
>> http://www.theregister.com/2006/08/22/intel_suite_help/
>> And maybe a few new dances as well.
>
> Since I'd interpret most of that article as applying to chips with under
> 10 or so cores (in fact, the only example given is a 4-core chip), I'm
> not sure how you came to that conclusion.

Rattner's presentation started by showing a "strawman" design
Intel architects are using to understand what needs to change (HW and
software) to enable a large number of cores to be used effectively.

The strawman architecture was made of 16 cores, a large amount
of on-die cache and some task-specific units (co-processors).

Eric

Eugene Miya

Aug 24, 2006, 1:08:38 PM
In article <fOWdncfTDM0gUXfZ...@comcast.com>,

Chris Thomasson <cri...@comcast.net> wrote:
>technologies' could make a super-computer-on-a-chip possible. Indeed...

Well in some areas: it's been done.

>A pattern for this could be as simple as the following fractal-like scheme:
>1) Solar System = Collection Of Planets Orbiting Gravity: Star

1 less now


>2) Galaxy = Collection Of Solar-Systems Orbiting Gravity: Black Hole
>Strength 16

>3) Universe = Collection Of Galaxy's Orbiting Gravity: Black Hole Strength
>16^16

>4) CollectionA = Collection Of Universes Orbiting Gravity: Black Hole
>Strength (16^16)^16

>5) CollectionB = Collection Of CollectionA's Orbiting Gravity: Black Hole
>Strength ((16^16)^16)^16

>6) And on and on forever...
>I think that this make some sort of sense, in my mind at least... ?

What's fractal about this?

Where's the self-similarity?

--

Chris Thomasson

Aug 24, 2006, 4:19:25 PM
"Eugene Miya" <eug...@cse.ucsc.edu> wrote in message
news:44eddd16$1@darkstar...

I never said it was a fractal, I said it was fractal-like. I guess I should
have said it was just a pattern...


> Where's the self-similarity?

A collection of galaxies (e.g., universe) looks like a normal galaxy at a
distance... A collection of universes looks like a normal galaxy at a
distance... A collection of collectionA looks like a normal galaxy at a
distance... And on and on.

No matter how far you zoom out, a collection will eventually look like a
normal galaxy. That sounds fractal-like to my ears...

;)


Chris Thomasson

Aug 24, 2006, 4:34:40 PM
"Bill Todd" <bill...@metrocast.net> wrote in message
news:HJSdneRrfYk9X3bZ...@metrocastcablevision.com...

> Chris Thomasson wrote:
>> The following will be wrapped around the basic theme:
>>
>> "in the year 20XX"
>>
>>
>> Okay... With the recent topic of how multithreading will interact with
>> mega/giga/tera-core chips presented by Barry Kelly in "The Problem with
>> Threads" discussion over in c.p.t.
>>
>> http://groups.google.com/group/comp.programming.threads/msg/4416cc8e32fdf169?hl=en
>>
>> http://groups.google.com/group/comp.programming.threads/browse_frm/thread/b192c5ffe9b47926/5301d091247a4b16?hl=en#5301d091247a4b16
>>
>>
>> as a programmer that makes frequent use of advanced thread
>> synchronization techniques; I decided to ask the following questions to
>> this group:
>>
>>
>>
>>
>> " How many cores do you think a chip could have, lets say 10 or 20+
>> years from now? "
>
> Well, only partially playing devil's advocate here, I'll counter with the
> question:
>
> "Who cares?"
>
> I was glad to see that the Ed (A.) Lee who wrote the article referred to
> over at c.p.t. apparently wasn't the same Ed (K.) Lee whose contributions
> I've come to respect over the years. To me, the question of how to handle
> zillions of cores puts the cart before the horse: rather, I'd ask why most
> people would want such chips rather than simply assume that because it may
> be possible to build them well, then, clearly we must and also are obliged
> to come with ways to use them (at least beyond the ways we *already* know
> how to, as in the embarrassingly parallel applications noted there).

Yeah... It is putting the cart before the horse... Indeed. However, as a
"pure mental exercise", I wanted to be able to see if I could apply current,
or theoretically design new, thread synchronization techniques to/for such
processors. I think I came up with a technique that could "possibly" work on
and scale to such processor designs:

http://groups.google.com/group/comp.programming.threads/msg/6236a9029d80527a?hl=en

This scheme "should" be able to support the implementation of various
lock-free reader patterns on mega-core chips...


> IBM (nobody's fool these past few years) certainly doesn't seem to be
> relegating single-threaded performance to the back seat of its POWER
> architecture: instead, after pioneering dual-core products 5 years ago it
> has been steadily improving their single-threaded performance (with only
> one additional nod to multi-threading in its SMT facilities which, perhaps
> not coincidentally, likely offer some single-thread performance advantages
> as well to threads with sufficient ILP). And only Sun has introduced a
> product which has *conspicuously* traded off single-thread for multi-core
> performance (a product which Sun itself agrees is suitable only for
> certain niches rather than being really general-purpose).

I think Sun's upcoming Rock processor addresses this issue...


Eugene Miya

Aug 24, 2006, 5:12:22 PM
In article <C8ydnRpbOMUNmnPZ...@comcast.com>,
Chris Thomasson <cri...@comcast.net> wrote:
>>>fractal-like ..
powers of 10...

>> What's fractal about this?
>
>I never said it was a fractal, I said is was fractal-like. I guess I should
>of said it was just a pattern...

Humans are pattern recognizers. What kinds of pattern?


>> Where's the self-similarity?
>
>A collection of galaxies (e.g., universe) looks like a normal galaxy at a
>distance... A collection of universes looks like a normal galaxy at a
>distance... A collection of collectionA looks like a normal galaxy at a
>distance... And on and on.

Oh turtles all the way down!

I had to oversee the donation of some artwork to the American Center of
Physics, by the brother of the SC Chaos guys, that was used to illustrate a
cosmology text book. The artwork had to show parallel universes,
oscillating bangs and crunches, etc. But you need other details as well:
you really want (apparently) 8-9 orders of magnitude.

I'm trying to reconcile rotation in your example.

>No matter how far you zoom out, a collection will eventually look like a
>normal galaxy. That sounds fractal-like to my ears...

(squinting) Not sure what you mean by normal. There is a lot about
galaxies and cosmology we still have not figured out.


>;)

;^)

--

Gary

Aug 25, 2006, 6:52:42 PM

Well, any idiot can throw a pile of cores on a die, and many idiots
have. The difficulty comes from feeding the cores with I/O (as you
observed) and from interconnecting the cores.

But, the real real problem is figuring out what those cores are doing.

The same idiots who throw tens of cores on a die (and apparently
someone is doing a thousand-core die) assure me there exists a class
of applications that 1) scales to large number of processors, 2) has
a tiny memory foot print, 3) accesses memory in an extremely well-
behaved way, 4) has trivial inter-processor communication needs,
5) is interesting, and 6) probably doesn't care about power
dissipation.

If only those people doing the assuring were hardwarily competent
and would go beyond vigorous hand-waving...

So, for the correct application, designing a hundred (or thousand) core
die is easy. For every other application, it's probably impossible, for
the reason you gave.

- Gary

Anne & Lynn Wheeler

Aug 25, 2006, 7:05:25 PM
"Gary" <gar...@gmail.com> writes:
> Well, any idiot can throw a pile of cores on a die, and many idiots
> have. The difficulty comes from feeding the cores with I/O (as you
> observed) and from interconnecting the cores.
>
> But, the real real problem is figuring out what those cores are doing.

a constant theme at hotchips this year was programming paradigm for
(at least increasingly, if not massively) parallel operation. there
were presentations from large tens to thousand cores (teraops on a
chip) ... sort of easy i/o processing examples were streaming video
... since the data is just coming at you ... you don't have to do
anything.

one of the presentations said that they worked on the (parallel)
programming paradigm for a long time before starting to define massive
number of cores (on a chip) that would then fit the programming
paradigm ... rather than the other way around ... building a massive
number of cores and then trying to stumble across a programming
paradigm that fit the cores.

session one ... had chips specifically designed to handle streaming.

session four on reconfigurable computing had a chip from toshiba with
large number of cores that could be dynamically reconfigured

session five on parallel processing had a number of flavors of
large number of cores on a chip.

http://www.hotchips.org/hc18/program/conference_day_one.htm

John Dallman

Aug 26, 2006, 5:42:00 PM
In article <1156546360....@74g2000cwt.googlegroups.com>,
gar...@gmail.com (Gary) wrote:

> The same idiots who throw tens of cores on a die (and apparently
> someone is doing a thousand-core die) assure me there exists a class
> of applications that 1) scales to large number of processors, 2) has
> a tiny memory foot print, 3) accesses memory in an extremely well-
> behaved way, 4) has trivial inter-processor communication needs,
> 5) is interesting, and 6) probably doesn't care about power
> dissipation.

This class may even exist. What's irritating is the people who are sure
that it's actually a large fraction of all applications.

---
John Dallman j...@cix.co.uk
"Any sufficiently advanced technology is indistinguishable from a
well-rigged demo"

Seongbae Park

Aug 26, 2006, 6:31:39 PM
Gary wrote:
...

> Well, any idiot can throw a pile of cores on a die, and many idiots
> have. The difficulty comes from feeding the cores with I/O (as you
> observed) and from interconnecting the cores.

True. However, let me play a devil's advocate here...

> But, the real real problem is figuring out what those cores are doing.
>
> The same idiots who throw tens of cores on a die (and apparently
> someone is doing a thousand-core die) assure me there exists a class
> of applications that 1) scales to large number of processors,

> 2) has a tiny memory foot print,


> 3) accesses memory in an extremely well-behaved way,

Having more threads helps you cope with irregular access patterns
better than having fewer cores.
Of course, that's assuming there are enough threads...

> 4) has trivial inter-processor communication needs,

Having all interprocessor communication on chip helps in this regard.

> 5) is interesting, and
> 6) probably doesn't care about power dissipation.
>
> If only those people doing the assuring were hardwarily competent
> and would go beyong vigorous hand-waving...
>
> So, for the correct application, designing a hundred (or thousand) core
> die is easy. For every other application, it's probably impossible,
> for the reason you gave.

I think a hundred-thread chip will be feasible and
commercially viable [1] before 2010.
A thousand would be too many, unless some radical rethinking happens
in the software side.

Seongbae

[1] Commercially viable, if you consider Niagara commercially viable.

David Hopwood

Aug 26, 2006, 7:39:53 PM
John Dallman wrote:
> gar...@gmail.com (Gary) wrote:
>
>>The same idiots who throw tens of cores on a die (and apparently
>>someone is doing a thousand-core die) assure me there exists a class
>>of applications that 1) scales to large number of processors, 2) has
>>a tiny memory foot print,

It's not the whole application that needs a small memory footprint; it's
the footprint of each part assigned to a core that should be small.

>>3) accesses memory in an extremely well-
>>behaved way, 4) has trivial inter-processor communication needs,
>>5) is interesting, and 6) probably doesn't care about power
>>dissipation.
>
> This class may even exist. What's irritating is the people who are sure
> that it's actually a large fraction of all applications.

It doesn't need to be a large fraction of all applications; it only needs
to be a large fraction of performance-critical applications.

--
David Hopwood <david.nosp...@blueyonder.co.uk>

Peter Grandi

Aug 26, 2006, 8:51:01 PM
>>> On 23 Aug 2006 16:03:19 -0700, lin...@pbm.com (Greg Lindahl) said:

lindahl> In article <yf34pw3...@base.gp.example.com>,
lindahl> Peter Grandi <pg...@0603.exp.sabi.co.UK> wrote:

[ ... ]

pg> The limit really is determined by the number of transistors
pg> you can put on a chip, and how useful is to have a cluster
pg> for the average user, or even ''enough users'' that it is
pg> viable.

lindahl> And here I thought the limit was the amount of I/O you
lindahl> could put on the chip, e.g. access to main memory.

Such a careless thought may be motivated by the
assumption that one would use the whole silicon budget to put
just CPUs on chips, not whole clusters. System-on-a-chip is a
pretty strong trend, stronger than CPU-on-chip ;-). Attack of
the killer micros for systems too.

Anyhow the main memory is on-chip already, it has been on-chip
for years. How many people still mistake external DRAM for ''main
memory'' when CPUs run at high multiples of the speed of the IO
bus (once upon a time called ''memory bus''), and on-chip to
off-chip memory traffic happens mostly in largish blocks because
DRAM increasingly has rather non-uniform access times?

Today even mere 2-CPU commercial chips used in PCs you can buy
walking into a shop have got like 4MiB of on-chip memory. And if
applications don't fit in *that* main memory, very bad news
indeed. Never mind forward looking game consoles with on-chip
memory for GPUs too.

I often find quite ironic that concepts like programming for
locality have disappeared in the mists of the past at the same
time as DRAM has evolved into the same role that ''bulk store''
or drums had when as now ''main memory'' was often between 1MiB
and 4MiB. The wheel of reincarnation of technology turns for
''main memory'' as for everything else.

Also note that the original question was:

«How many cores do you think a chip could have, lets say 10 or
20+ years from now?»

and presumably in «10 or 20+ years» the trend to larger transistor
budgets will allow increasing the amount of on-chip memory a lot.
There are chips already with several MiBs of on-chip memory.

As I wrote, in 10-20 years time we will be able to build chips
with the equivalent of today's small-medium clusters, but the
real question is indeed whether that is going to help a lot.

Not for MS Office, but perhaps for VR or games. After all one of
the most popular applications now is World of Warcraft, and
perhaps the real purpose of PCs is not to process words or
pictures or databases, but to enable new social modes of
interaction.

Perhaps in «10 or 20+ years» we will have solved the much, much
bigger and more important problem of the last mile, and many
people will have gigabit bandwidth into their homes, and instead
of running web servers on ADSL or homepages on MySpace we will
all run large 3D environments on our home PCs, and then their
being large clusters-on-a-chip will be pretty important.

lindahl> If you aren't in the very tiny memory embedded space,
lindahl> you care about bandwidth to main memory.

I might care more about bandwidth to shared (or not shared, in a
cluster-on-a-chip arrangement) on-chip memory. The big problem is
not even off-chip bandwidth in itself, but that DRAM speed has
just not been keeping up with CPU speeds, as even if we had lots
of off-chip bandwidth, it would be rather wasted (and I don't
like very high degrees of interleaving).

Joe Seigh

Aug 26, 2006, 10:08:12 PM
Seongbae Park wrote:

> Gary wrote:
> ...
>>So, for the correct application, designing a hundred (or thousand) core
>>die is easy. For every other application, it's probably impossible,
>>for the reason you gave.
>
>
> I think a hundred threads chip will be feasible and
> commercially viable [1] before 2010.
> A thousand would be too many, unless some radical rethinking happens
> in the software side.
>
> Seongbae
>
> [1] Commercially viable, if you consider Niagara commercially viable.
>

There might be some time lag. I'm still working with single processors.
I'll probably get a dual core processor sometime in the next year.
Actually the problem is scalability, meaning that apps have to be big
enough to have scalability issues. Things like databases, servers, etc...
It takes time to write something like that from scratch.

Gary

Aug 27, 2006, 1:40:44 AM

Seongbae Park wrote:
> Gary wrote:
> > 4) has trivial inter-processor communication needs,
>
> Having all interprocessor communication on chip helps in this regard.

Well, on-chip is cheaper than off. But nevertheless robust
interprocessor communication on-chip is expensive, in terms of
silicon area, complexity, and scalability. In fact, you'll likely find
your communication network will be yet another limit to how many
cores you can put on a die.

> > 5) is interesting, and
> > 6) probably doesn't care about power dissipation.
> >
> > If only those people doing the assuring were hardwarily competent
> > and would go beyond vigorous hand-waving...
> >
> > So, for the correct application, designing a hundred (or thousand) core
> > die is easy. For every other application, it's probably impossible,
> > for the reason you gave.
>
> I think a hundred threads chip will be feasible and
> commercially viable [1] before 2010.
> A thousand would be too many, unless some radical rethinking happens
> in the software side.

Well, a hundred-thread chip is just normal evolution, considering
we already have 32-thread chips today.

A friend has a 20k gate MIPS core. She can easily pack 100 of them,
plus local storage, on a die--today. That would be an interesting
chip, I suppose, but you couldn't feed it fast enough to let it do
anything beyond printing "Hello world" 100 times, in parallel. Of
course, I exaggerate, slightly.

- Gary

Jeff Kenton

Aug 27, 2006, 1:14:10 PM

Chris Thomasson wrote:
> The following will be wrapped around the basic theme:
>
> "in the year 20XX"
> ...

>
> " How many cores do you think a chip could have, lets say 10 or 20+ years
> from now? "
>

Assuming you had 1,000 cores, what could you do with them? BBN built
Butterfly machines with at least 128 (theoretical max 503) and so did
KSR. Others built at least 32 way machines. The problem was always
getting the software to do something useful with them. Most problems
don't divide that well and those that do require hand tuned code to get
reasonable performance. And I/O is always a bottleneck.

So, what use would 1,000 cores be? A few special problems needing custom
code would be a small market. Virtualization is another possibility,
effectively putting a whole datacenter on a single chip. This is a
logical extension of the blade architecture, but, again, the question of
I/O raises its head. This is where I think we're headed in the short
term, even with obvious problems that need to be solved.

jeff

Chris Thomasson

Aug 27, 2006, 10:09:02 PM

"Eugene Miya" <eug...@cse.ucsc.edu> wrote in message
news:44ee1636$1@darkstar...

> In article <C8ydnRpbOMUNmnPZ...@comcast.com>,
> Chris Thomasson <cri...@comcast.net> wrote:
>>>>fractal-like ..
> powers of 10...
>>> What's fractal about this?
>>
>>I never said it was a fractal, I said it was fractal-like. I guess I
>>should have said it was just a pattern...
>
> Humans are pattern recognizers.

IMHO, pattern recognition is relative; a human that turns out to be an idiot
seems to distort your statement somewhat; IMO at least... Humm... Luckily,
we 'normals' can recognize idiotic behavior; therefore we can form our
thesis around our repeated 'coherent' recognition(s)... Relativity can be a
highly volatile algorithm, IMHO of course; well, relative to what, exactly?
So open ended... For instance, how is a single galaxy's behavior relative to
the behavior, as a whole, of a cluster of galaxies orbiting a central gravity?
Well, IMO, they both look the same at a distance; therefore, they must be
treated exactly like a single entity at a distance?... Ahh, the beauty of
fractals; turtles all the way down indeed!


BTW, thank you for talking about this theoretical physics/cosmos stuff. I
find it interesting... Almost as interesting as programming!

;)


> What kinds of pattern?

Okay... Lets start with the initial "fractal-like pattern":

1) Solar System = Collection Of Planets Orbiting Gravity: Star

<zoom out>


2) Galaxy = Collection Of Solar-Systems Orbiting Gravity: Black Hole
Strength 16

<zoom out>


3) Universe = Collection Of Galaxies Orbiting Gravity: Black Hole Strength
16^16

<zoom out>


4) CollectionA = Collection Of Universes Orbiting Gravity: Black Hole
Strength (16^16)^16

<zoom out>


5) CollectionB = Collection Of CollectionA's Orbiting Gravity: Black Hole
Strength ((16^16)^16)^16

<zoom out>


6) And on and on forever...


Where does pattern matching come into play here... Well of course it comes
in the form of your personal answer to step 6...

Here is how I would interpret it...


I notice that the black hole strength is "steadily" increasing in equal and
exponential fashion; powers of 16... I got that pattern... Okay... I also
notice that a subsequent collection holds a collection of the previous
entity. Therefore, step 6 would be as follows, IMO:


6) CollectionC = Collection Of CollectionB's Orbiting Gravity: Black Hole
Strength (((16^16)^16)^16)^16


7) CollectionD = Collection Of CollectionC's Orbiting Gravity: Black Hole
Strength ((((16^16)^16)^16) ^16) ^16


8) CollectionE = Collection Of CollectionD's Orbiting Gravity: Black Hole
Strength (((((16^16)^16)^16) ^16) ^16) ^ 16


Anybody notice the pattern? Its hard to define it exactly because pattern
recognition is relative, IMO...


?
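To make the pattern concrete, here is a tiny sketch (my own, taking the post's playful base-16 "strength" numbers at face value, not as physics): each zoom-out raises the previous strength to the 16th power, so level n has strength 16^(16^(n-1)).

```python
# Illustrative sketch of the "powers of 16" pattern described above:
# each zoom-out raises the previous black-hole "strength" to the 16th
# power, so the strength at zoom level n is 16**(16**(n - 1)), with
# level 1 being the single galaxy of strength 16. The base 16 and the
# numbers themselves are the post's own playful assumptions.

def strength_exponent(level: int) -> int:
    """Return e such that strength(level) == 16**e."""
    return 16 ** (level - 1)

# Level 1: 16, level 2: 16**16, level 3: (16**16)**16 == 16**256, ...
assert 16 ** strength_exponent(1) == 16
assert 16 ** strength_exponent(2) == 16 ** 16
assert (16 ** 16) ** 16 == 16 ** strength_exponent(3)
assert ((16 ** 16) ** 16) ** 16 == 16 ** strength_exponent(4)
```

Python's arbitrary-precision integers handle these towers directly; the point is only that the "pattern" is the single rule strength(n+1) = strength(n)^16.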


>>> Where's the self-similarity?
>>
>>A collection of galaxies (e.g., universe) looks like a normal galaxy at a
>>distance... A collection of universes looks like a normal galaxy at a
>>distance... A collection of collectionA looks like a normal galaxy at a
>>distance... And on and on.
>
> Oh turtles all the way down!
>
> I had to oversee the donation of some art work to the American Center of
> Physics by the brother of the SC Chaos guys used to illustrate a
> cosmology text book. The art work had to show parallel universes,
> oscillating bangs and crunches, etc. But you need other details as well:
> you really want (apparently 8-9 orders of magnitude).
>
> I'm trying to reconcile rotation in your example.

I could picture this fractal-like pattern wrapped in an infinitely long cone
shaped tube that exists in a "void"... The shape of the cone gets smaller to
the point of intersection, at the singularity of the cone; the point of the
birthday if you will... All of the jets that come off the black holes would
be angled down to a singularity of sorts... Imagine an X, the top half of the
X is a V.... That could accommodate the fractal I am talking about... The
jets of the black holes would be focused to the center of the X... The
bottom half of the X is, again, a V... Therefore, you could apply symmetry
to the previous description...


>>No matter how far you zoom out, a collection will eventually look like a
>>normal galaxy. That sounds fractal-like to my ears...
>
> (squinting) Not sure what you mean by normal. Lots of galaxies and
> cosmology we still have not figured out.

Well, a spiral or elliptical galaxy-like design could easily fit into the
setup... The setup requires a central source of gravity with various
orbiting entities...


>>;)
>
> ;^)


Chris Thomasson

Aug 27, 2006, 10:11:56 PM


"Chris Thomasson" <cri...@comcast.net> wrote in message
news:r_SdnTT7yrdl0G_Z...@comcast.com...

> "Eugene Miya" <eug...@cse.ucsc.edu> wrote in message
> news:44ee1636$1@darkstar...
>> In article <C8ydnRpbOMUNmnPZ...@comcast.com>,
>> Chris Thomasson <cri...@comcast.net> wrote:
[...]

> I could picture this fractal-like pattern wrapped in an infinitely long
> cone shaped tube that exists in a "void"... The shape of the cone gets
> smaller to the point of intersection, at the singularity of the cone; the
> point of the birthday if you will...

^^^

I meant birthday hat of course.

;)


Eugene Miya

Aug 28, 2006, 12:24:58 PM

Well as an aside, I have to be skeptical of claims of supers on chips.
Little exceptions might be Deep Crack's processors, the specialized
processors made at and for a Fort (you don't want to get too nosy), and
to a lesser extent the MPP and CMs.

In article <r_SdnTT7yrdl0G_Z...@comcast.com>,


Chris Thomasson <cri...@comcast.net> wrote:
>>>>>fractal-like ..
>> powers of 10...
>>>> What's fractal about this?
>>>I never said it was a fractal, I said it was fractal-like. I guess I
>>>should have said it was just a pattern...
>>
>> Humans are pattern recognizers.
>
>IMHO, pattern recognition is relative; a human that turns out to be an idiot
>seems to distort your statement somewhat; IMO at least... Humm... Luckily,
>us 'normal's, can recognize idiotic behavior; therefore we can form our
>thesis around our repeated 'coherent' recognition(s)... Relativity can be a
>highly volatile algorithm, IMHO of course; well, relative to what, exactly?
>So open ended... For instance, how is a single galaxies behavior relative to
>a cluster of galaxies orbiting central gravity's behavior(s as a whole;
>well, IMO, they both look the same at a distance; therefore, the must be
>treated exactly like a single entity at a distance?... Ahh, the beauty of
>fractals; turtles all the way down indeed!
>
>BTW, thank you for talking about this theoretical physics/cosmos stuff. I
>find it interesting... Almost as interesting as programming!
>;)

Oh, not me. My now dead friend Bill was the Godfather of the Santa Cruz
Chaos Cabal. This didn't mean that he believed in Chaos theory.
But he was able to introduce me to his students and give me a few
pointers of critical thinking to look for in claims.
Bill was a cosmologist, but his textbook is no longer sold, only his
math book is.

Also, when I got the GEB book (took months), I was inspired to track down and
buy a copy of Pattern Recognition by Bongard.

>> What kinds of pattern?
>
>Okay... Lets start with the initial "fractal-like pattern":
>1) Solar System = Collection Of Planets Orbiting Gravity: Star
><zoom out>
>2) Galaxy = Collection Of Solar-Systems Orbiting Gravity: Black Hole
>Strength 16
><zoom out>
>3) Universe = Collection Of Galaxies Orbiting Gravity: Black Hole Strength
>16^16
><zoom out>
>4) CollectionA = Collection Of Universes Orbiting Gravity: Black Hole
>Strength (16^16)^16
><zoom out>
>5) CollectionB = Collection Of CollectionA's Orbiting Gravity: Black Hole
>Strength ((16^16)^16)^16
><zoom out>
>6) And on and on forever...

Only math can go on forever. Physics, they are not so sure.
I am not certain that 2^4 cuts it. And you have to drop the idea of
orbits around a central point.

>Where does pattern matching come into play here... Well of course it comes
>in the form of your personal answer to step 6...

See last comment.

Point 2 might be flawed as well.

>Here is how I would interpret it...
>I notice that the black hole strength is "steadily" increasing in equal and
>exponential fashion; powers of 16... I got that pattern... Okay... I also
>notice that a subsequent collection holds a collection of the previous
>entity. Therefore, step 6 would be as follows, IMO:
>6) CollectionC = Collection Of CollectionB's Orbiting Gravity: Black Hole
>Strength (((16^16)^16)^16)^16
>7) CollectionD = Collection Of CollectionC's Orbiting Gravity: Black Hole
>Strength ((((16^16)^16)^16) ^16) ^16
>8) CollectionE = Collection Of CollectionD's Orbiting Gravity: Black Hole
>Strength (((((16^16)^16)^16) ^16) ^16) ^ 16
>Anybody notice the pattern? Its hard to define it exactly because pattern
>recognition is relative, IMO...
>?

I am not certain where you are getting the 16s, and the basis to justify
the ^ operator rather than, say, * or some transcendental function, etc.


>>>> Where's the self-similarity?
>>>
>>>A collection of galaxies (e.g., universe) looks like a normal galaxy at a
>>>distance... A collection of universes looks like a normal galaxy at a
>>>distance... A collection of collectionA looks like a normal galaxy at a
>>>distance... And on and on.
>>
>> Oh turtles all the way down!
>>

>> I'm trying to reconcile rotation in your example.
>
>I could picture this fractal-like pattern wrapped in an infinitely long cone
>shaped tube that exists in a "void"... The shape of the cone gets smaller to
>the point of intersection, at the singularity of the cone; the point of the
>birthday if you will... All of the jets that come off the black holes would
>be angled down to a singularity of sorts... Imagine an X, the top half of the
>X is a V.... That could accommodate the fractal I am talking about... The
>jets of the black holes would be focused to the center of the X... The
>bottom half of the X is, again, a V... Therefore, you could apply symmetry
>to the previous description...

Steady state universes are largely out of theoretical favor.

>>>No matter how far you zoom out, a collection will eventually look like a
>>>normal galaxy. That sounds fractal-like to my ears...
>>
>> (squinting) Not sure what you mean by normal. Lots of galaxies and
>> cosmology we still have not figured out.
>
>Well, a spiral or elliptical galaxy-like design could easily fit into the
>setup... The setup requires a central source of gravity with various
>orbiting entities...

Galaxies yes.
Universes?

--

Chris Thomasson

Aug 29, 2006, 3:04:25 AM

"Chris Thomasson" <cri...@comcast.net> wrote in message
news:wdqdnYZZqOi-lnPZ...@comcast.com...

I think this design might come with obstruction-free KCSS...


Joe Seigh

Aug 29, 2006, 5:59:05 AM

Chris Thomasson wrote:
> "Chris Thomasson" <cri...@comcast.net> wrote in message
> news:wdqdnYZZqOi-lnPZ...@comcast.com...
>
>>I think Suns upcoming Rock processor addresses this issue...
>
>
> I think this design might come with obstruction-free KCSS...
>
>
Sparc doesn't exactly have a lot of registers to play with.
K might end up only being 2.
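For readers who haven't run into KCSS (k-compare single-swap, from Luchangco, Moir and Shavit's obstruction-free work): it atomically compares k locations but writes only one. The sketch below is a lock-based reference model of the *semantics* only, not the obstruction-free implementation (which is built from LL/SC or tagged CAS, hence the register-pressure concern above):

```python
import threading

class KCSSModel:
    """Semantic reference model of k-compare single-swap (KCSS).

    A real obstruction-free KCSS builds this from LL/SC or CAS plus
    tagged values; the lock here only pins down the semantics:
    atomically compare k locations, but write just one of them.
    """
    def __init__(self, size):
        self.cells = [0] * size
        self._lock = threading.Lock()

    def kcss(self, target, new, checks):
        """checks: list of (index, expected) pairs, including target.
        Writes `new` to cells[target] and returns True iff every
        checked cell holds its expected value; otherwise no write."""
        with self._lock:
            if all(self.cells[i] == exp for i, exp in checks):
                self.cells[target] = new
                return True
            return False

m = KCSSModel(4)
ok = m.kcss(target=0, new=42, checks=[(0, 0), (1, 0), (2, 0)])
# succeeds: all three checked cells held 0, and only cell 0 is written
```

Joe's point about K is that each of the k compared values wants to sit in a register across the operation, which is tight on a SPARC register window.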

san...@haldjas.folklore.ee

Aug 30, 2006, 9:29:21 PM

Bill Todd wrote:
>
> Well, only partially playing devil's advocate here, I'll counter with
> the question:
>
> "Who cares?"
>

Really - anybody who was (or still is) cheering the killer micros
taking over the world should. Because this is just stage two of killer
micros - swarms of cores picking off large, labyrinthine micros.

> I was glad to see that the Ed (A.) Lee who wrote the article referred to
> over at c.p.t. apparently wasn't the same Ed (K.) Lee whose
> contributions I've come to respect over the years. To me, the question
> of how to handle zillions of cores puts the cart before the horse:
> rather, I'd ask why most people would want such chips rather than simply
> assume that because it may be possible to build them well, then, clearly
> we must and also are obliged to come with ways to use them (at least
> beyond the ways we *already* know how to, as in the embarrassingly
> parallel applications noted there).
>

Maybe not zillions of cores and / or threads but hundreds certainly.
All I can say for those is "gimme all you've got!" Hopefully Sun will
make a Dual UltraSparc T2 box sometime next year for 128 threads in 2U
chassis.

> Just because it's become harder to improve single-thread performance
> doesn't mean that it's no longer useful to and that taking the path of
> least hardware resistance (by ignoring single-thread performance
> increases in favor of burgeoning core counts) is The Right Thing To Do.
> In fact, one could argue that because single-threaded operation
> characterizes such a large percentage of today's applications, and
> because software has historically changed so slowly compared with
> hardware, then there's relatively little reason to push multiple cores
> per chip beyond *at most* a few dozen for the immediate future while
> continuing to devote significant concentration to improving
> single-thread performance too.
>

Ultimately what people care for is the system performance - and not
some specific piece of software being slow because single thread
performance hasn't improved much.

> - bill

Nick Maclaren

Aug 31, 2006, 7:24:10 AM


In article <1156987760....@i3g2000cwc.googlegroups.com>,

san...@haldjas.folklore.ee writes:
|> Bill Todd wrote:
|> >
|> > Well, only partially playing devil's advocate here, I'll counter with
|> > the question:
|> >
|> > "Who cares?"
|>
|> Really - anybody who was (or still is) cheering the killer micros
|> taking over the world should. Because this is just stage two of killer
|> micros - swarms of cores picking off large, labyrinthine micros.

Nothing is sadder than the murder of a beautiful theory by a gang of
ugly facts.

|> Maybe not zillions of cores and / or threads but hundreds certainly.
|> All I can say for those is "gimme all you've got!" Hopefully Sun will
|> make a Dual UltraSparc T2 box sometime next year for 128 threads in 2U
|> chassis.

As I have said for a LONG time, Intel could produce such a chip, now,
by simply saying "let it be done". The problem is making use of that
number of cores, given that they would be limited by memory access,
and that is why Intel has not done so. Yet.

|> Ultimately what people care for is the system performance - and not
|> some specific piece of software being slow because single thread
|> performance hasn't improved much.

And that is the problem. There are damn few applications that parallelise
up to 100x, while using a very small amount of cache and essentially no
memory accesses. Such a system would have (say) 100 slow CPUs (< 1 GHz)
all being fed by an existing memory controller. Could you use that to
speed up existing applications? I can't, though I can think of ways in
which it might be useful.
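Nick's point can be put in back-of-envelope form (all numbers below are illustrative assumptions of mine, not measurements):

```python
# Back-of-envelope model of the point above: 100 slow cores behind one
# memory controller are limited by memory bandwidth, not core count.
# The GB/s figures are illustrative assumptions, not real hardware data.

def attainable_speedup(cores, core_demand_gbs, controller_gbs):
    """Speedup over one core when all cores share one controller."""
    total_demand = cores * core_demand_gbs
    if total_demand <= controller_gbs:
        return cores                         # compute-bound: scales linearly
    return controller_gbs / core_demand_gbs  # bandwidth-bound ceiling

# A cache-light code wanting 1 GB/s per core, behind a 10 GB/s controller:
print(attainable_speedup(100, 1.0, 10.0))   # 10.0 -- only 10x, not 100x
# The same 100 cores running mostly from on-chip memory (0.05 GB/s each):
print(attainable_speedup(100, 0.05, 10.0))  # 100 -- now core-limited
```

Which is exactly why the interesting applications are the ones with a very small cache footprint and essentially no memory traffic.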


However, I have told people in both Sun and Intel that, if they want a
paradigm shift, they MUST make workstations using this technology
available openly and cheaply to the mad hacker brigade. 99% of them
will do nothing interesting, 99% of those will do nothing useful, but
0.01% might start a breakthrough. I make no comment about whether you
or I are one of the 99%, 0.99% or 0.01% :-)


Regards,
Nick Maclaren.

Sander Vesik

Aug 31, 2006, 4:56:34 PM

Nick Maclaren <nm...@cus.cam.ac.uk> wrote:
> |> Maybe not zillions of cores and / or threads but hundreds certainly.
> |> All I can say for those is "gimme all you've got!" Hopefully Sun will
> |> make a Dual UltraSparc T2 box sometime next year for 128 threads in 2U
> |> chassis.
>
> As I have said for a LONG time, Intel could produce such a chip, now,
> by simply saying "let it be done". The problem is making use of that
> number of cores, given that they would be limited by memory access,
> and that is why Intel has not done so. Yet.

So Intel can mandate a cpu that has 128 threads but cannot mandate a cpu that
has that plus sufficient memory bandwidth? I find that strange. Adding
nine xdr interfaces (given they've been buddies with rambus a lot) would
make the chip relevant to a lot more things.

>
> |> Ultimately what people care for is the system performance - and not
> |> some specific piece of software being slow because single thread
> |> performance hasn't improved much.
>
> And that is the problem. There are damn few applications that parallelise
> up to 100x, while using a very small amount of cache and essentially no
> memory accesses. Such a system would have (say) 100 slow CPUs (< 1 GHz)
> all being fed by an existing memory controller. Could you use that to
> speed up existing applications? I can't, though I can think of ways in
> which it might be useful.

But why would it be using an existing memory controller? If Niagara was just
32 microsparc cores all using the usparc3 memory controller, sure, the result
would suck, and thats why it is something different.

>
> However, I have told people in both Sun and Intel that, if they want a
> paradigm shift, they MUST make workstations using this technology
> available openly and cheaply to the mad hacker brigade. 99% of them
> will do nothing interesting, 99% of those will do nothing useful, but
> 0.01% might start a breakthrough. I make no comment about whether you
> or I are one of the 99%, 0.99% or 0.01% :-)

Using a T2000 as a desktop machine is not really hard. T1000 is slightly
inconvenient.

I'm certainly not in that 0.01% starting a breakthrough because I'm not
even trying really. Just taking a vacation from forcing the world to go in
directions it doesn't want to, and seeing to it that people want to scale to
a large number of threads, not just 32.

>
>
> Regards,
> Nick Maclaren.

--
Sander

+++ Out of cheese error +++

jacko

Aug 31, 2006, 5:26:32 PM

hi

http://indi.joox.net 16n architecture.

having investigated parallelism and pipelining to some extent for the
indi16 it seems multithreading is needed to achieve good pipeline
efficiency, or a 16 stage pipeline would have to wait 16 cycles to get
a complete memory address, and so a stall happens, and so 16 separate
tasks are the best way of utilizing time without data dependencies.
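The barrel-processor arithmetic above can be sketched as follows (a toy model of my own, not a description of the actual indi16):

```python
# Toy model of the point above: a single thread on a 16-stage pipeline
# waits out the full pipeline between dependent instructions, while 16
# threads issued round-robin (a "barrel" processor) keep every issue
# slot busy. Numbers are illustrative only.

DEPTH = 16  # pipeline stages; a result is ready DEPTH cycles after issue

def cycles_single_thread(n_instructions):
    # each instruction depends on the previous one's result
    return n_instructions * DEPTH

def cycles_barrel(n_threads, n_instructions_each):
    # with n_threads == DEPTH, a thread re-issues exactly when its
    # previous result retires, so the pipeline never stalls
    assert n_threads == DEPTH
    return n_threads * n_instructions_each  # one issue per cycle

print(cycles_single_thread(100))   # 1600 cycles for one thread's work
print(cycles_barrel(16, 100))      # 1600 cycles for 16 threads' work
```

Same cycle count, sixteen times the work done, provided there really are 16 independent tasks to run.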

i have the idea of 128 KB memory space for each processor, 16*64Kbits,
so that there is not much memory overlap. then do all external memory
access via a bit serial ring.

this has advantages of little memory bottleneck, but limits the single
task to 128KB

there is no way around long addressing taking longer. isn't this how
cat 5e server farms roughly operate anyhow?

i propose extending the port number to 32 bit to have 2^64 NAT node
possibility, with long long done by a network bridge.

cheers

Nick Maclaren

Aug 31, 2006, 5:35:12 PM


In article <11570577...@haldjas.folklore.ee>,

Sander Vesik <san...@haldjas.folklore.ee> writes:
|>
|> So intel can mandate a cpu that has 128 threads but cannot mandate a cpu that
|> has that plus sufficent memory bandwidth? I find that strange. Adding
|> nine xdr interfaces (given they've been buddies with rambus a lot) would
|> make the chip relevant to a lot more things.

Er, I should have said "for a realistic cost"! If one makes the fairly
reasonable assumption that the current Woodcrests are reasonably set up
as dual-socket systems, and the CPUs were 4 times slower, it would still
need of the order of 32 interfaces (if I recall correctly). That would
be a problem.

|> But why would it be using an existing memory controller? If Niagara was just
|> 32 microsparc cores all using the usparc3 memory controller, sure, the result
|> would suck, and thats why it is something different.

Feasible number of pins x feasible bandwidth per pin, for a feasible
latency. Minor differences wouldn't help.

|> Using a T2000 as a desktop machine is not really hard. T1000 is slightly
|> inconvenient.

Yes, but they have to ATTRACT the mad hackers, not just make it marginally
possible.

|> I'm certainly not in that 0.01% starting a breakthrough because I'm not
|> even trying really. Just taking a vacation from forcing the world go in
|> directions it doesn't want to and seeing to it people want to scale to a
|> large number of threads not just 32.

Good luck. I have been trying that for some decades, with no great
success.


Regards,
Nick Maclaren.

jacko

Aug 31, 2006, 5:35:42 PM


jacko wrote:
> hi
>
> http:indi.joox.net 16n architecture.
>
<snip>

just thought of a good shared memory architecture!!

take a standard memory block as 128KB
have a 65536 by 65536 grid of blocks.
high address selects column for a row processor, and row for a column
processor.
any processor can communicate indirectly with any other processor.
maximum memory size is 2^(17+32) bytes.
with 2^17 processors (16 bit with high word page register).
?????
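A quick sketch of the arithmetic (the flat-address mapping below is my own illustrative choice, not part of the proposal):

```python
# Sketch of the grid above: a 65536 x 65536 grid of 128 KB blocks gives
# 2**16 * 2**16 * 2**17 = 2**49 bytes in total, shared by 2**17
# processors (65536 row processors plus 65536 column processors).

ROWS = COLS = 1 << 16       # 65536 x 65536 grid of blocks
BLOCK = 1 << 17             # 128 KB per block

def block_address(row, col, offset):
    """Flat byte address of `offset` within block (row, col)."""
    assert 0 <= row < ROWS and 0 <= col < COLS and 0 <= offset < BLOCK
    return (row * COLS + col) * BLOCK + offset

assert ROWS * COLS * BLOCK == 2 ** (17 + 32)   # matches 2^(17+32) bytes
assert ROWS + COLS == 2 ** 17                  # 2^17 processors

# Row processor r and column processor c share exactly one block,
# (r, c), which is what lets any two processors talk indirectly.
```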

cheers

Joe Seigh

Aug 31, 2006, 6:43:37 PM

Nick Maclaren wrote:
>
> And that is the problem. There are damn few applications that parallelise
> up to 100x, while using a very small amount of cache and essentially no
> memory accesses. Such a system would have (say) 100 slow CPUs (< 1 GHz)
> all being fed by an existing memory controller. Could you use that to
> speed up existing applications? I can't, though I can think of ways in
> which it might be useful.
>
>
> However, I have told people in both Sun and Intel that, if they want a
> paradigm shift, they MUST make workstations using this technology
> available openly and cheaply to the mad hacker brigade. 99% of them
> will do nothing interesting, 99% of those will do nothing useful, but
> 0.01% might start a breakthrough. I make no comment about whether you
> or I are one of the 99%, 0.99% or 0.01% :-)
>
>

I did blow my chance at getting a free T2000 so I'll have to make do
with whatever low power dual core is cheapest. Kind of hard to show
scalability with only 2 processors though.

But it may be a moot point since Sun definitely has terminal "not invented
here" disease. It's pretty pathetic if one of your top research groups
in synchronization for scalability totally ignores something they're told
directly about and spend three years reinventing it themselves (20060037026
Lightweight reference counting using single-target synchronization). See
http://atomic-ptr-plus.sourceforge.net/ for some details. Of course I
have the opposite problem with IBM (20060130061 Use of rollback RCU with
read-side modifications to RCU-protected data structures ). See same
site for details.

Of course, maybe they don't need us "mad hackers" since they're only
three years behind.

Chris Thomasson

Aug 31, 2006, 8:56:17 PM

"Joe Seigh" <jsei...@xemaps.com> wrote in message
news:IcadnY7XW-5R-GrZ...@comcast.com...

> Nick Maclaren wrote:
>>
>> And that is the problem. There are damn few applications that
>> parallelise
>> up to 100x, while using a very small amount of cache and essentially no
>> memory accesses. Such a system would have (say) 100 slow CPUs (< 1 GHz)
>> all being fed by an existing memory controller. Could you use that to
>> speed up existing applications? I can't, though I can think of ways in
>> which it might be useful.
>>
>>
>> However, I have told people in both Sun and Intel that, if they want a
>> paradigm shift, they MUST make workstations using this technology
>> available openly and cheaply to the mad hacker brigade. 99% of them
>> will do nothing interesting, 99% of those will do nothing useful, but
>> 0.01% might start a breakthrough. I make no comment about whether you
>> or I are one of the 99%, 0.99% or 0.01% :-)
>>
>>
>
> I did blow my chance at getting a free T2000 so I'll have to make do
> with whatever low power dual core is cheapest. Kind of hard to show
> scalability with only 2 processors though.

http://groups.google.com/group/comp.programming.threads/msg/ddee1703e2ba960a

http://groups.google.com/group/comp.programming.threads/msg/12a3f0990b8266fa

HOLY SHI^T!!!

I got lucky Joe... ?

:O


Well... Maybe not... My distributed memory allocator is actually an
extremely scaleable design:

http://groups.google.com/group/comp.programming.threads/browse_frm/thread/8245f4b48591fc69/e3efa5628aad4a82?lnk=gst&q=lock-free+distributed&rnum=1#e3efa5628aad4a82


I am not sure if my proxy gc or my memory allocator was the winner...
Humm...

;)


BTW... I would be happy to discuss my memory allocator design with anybody
who is interested...

http://groups.google.com/group/comp.programming.threads/msg/0aab29e24...
http://groups.google.com/group/comp.programming.threads/msg/907573c0e...
http://groups.google.com/group/comp.programming.threads/msg/1d3aeaa6e...

part of the implementation I used in vZOOM:

http://groups.google.com/group/comp.lang.c++.moderated/msg/d636387e9f10e306)


Would anybody be interested in reading through a fairly detailed and
technical white paper on my lock-free allocator design? I would love to hear
some comments/suggestions/criticism from the good people that frequent this
group!

:)


> But it may be a moot point since Sun definitely has terminal "not invented
> here" disease. It's pretty pathetic if one of your top research groups
> in synchronization for scalability totally ignores something they're told
> directly about and spend three years reinventing it themselves
> (20060037026
> Lightweight reference counting using single-target synchronization). See
> http://atomic-ptr-plus.sourceforge.net/ for some details.

http://groups.google.com/group/comp.programming.threads/browse_frm/thread/f2c94118046142e8/d481a720e22fcc06?lnk=gst&q=atomic+reference+counting+patent%27&rnum=1#d481a720e22fcc06
(read all. Its "interesting"... Indeed!)


> Of course I
> have the opposite problem with IBM (20060130061 Use of rollback RCU with
> read-side modifications to RCU-protected data structures ). See same
> site for details.

http://groups.google.com/group/comp.programming.threads/msg/249da8e954ec3c97

http://groups.google.com/group/comp.programming.threads/msg/e7b68f55d3c87152

http://groups.google.com/group/comp.programming.threads/msg/ac53e24403515c2f


Humm...


> Of course, maybe they don't need us "mad hackers" since they're only
> three years behind.

OUCH!


Chris Thomasson

Aug 31, 2006, 9:00:05 PM


Jan Vorbrüggen

Sep 1, 2006, 4:23:47 AM

> It's pretty pathetic if one of your top research groups
> in synchronization for scalability totally ignores something they're told
> directly about and spend three years reinventing it themselves (20060037026
> Lightweight reference counting using single-target synchronization). See
> http://atomic-ptr-plus.sourceforge.net/ for some details. Of course I
> have the opposite problem with IBM (20060130061 Use of rollback RCU with
> read-side modifications to RCU-protected data structures ).

So watch when they submit it to the EPO and get it granted, and then submit
an opposition (which is cheap) citing prior art to get it killed. At least
this works in Europe, the US patent process being terminally ill.

Jan

Joe Seigh

Sep 1, 2006, 6:14:32 AM

Yes, but then they'd realize they didn't invent it and wouldn't use it.
It's better to let Sun use it in something, since they have the resources
and I don't, and demonstrate its scalability. The only thing I need to do,
if I ever distribute the algorithms in my FOSS project again, is modify the
licenses to exclude liability for patent infringement, real or imagined.

Andy Glew

Sep 1, 2006, 6:14:58 PM

"Gary" <gar...@gmail.com> writes:

> The same idiots who throw tens of cores on a die (and apparently
> someone is doing a thousand-core die)

By the way: the KiloCore from Rapport
http://www.rapportincorporated.com/kilocore/kilocore.html
is not really what I would call 1000 CPU cores on a die.

Rather, it is 1000 8-bit ALUs on a die, structured in something like a
hybrid of a VLIW and systolic array. (Although the Rapport guys got
mad at me when I said this.)

Given that modern CPUs have 64+64+64+128+128 = the equivalent of 56
8-bit ALUs in a given CPU core right now, this is much more
reasonable. 4 cores on a chip => 200 8 bit ALUs. Rapport has maybe 4x
more such ALUs, but much lower power.
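A quick check of that arithmetic (taking the quoted register widths at face value; the "200" above is evidently a round figure, the exact product being 224):

```python
# Verify the 8-bit-ALU-equivalent counting above. The widths are the
# per-core execution widths quoted in the post (64+64+64+128+128 bits),
# not a claim about any specific product.
widths_bits = [64, 64, 64, 128, 128]
per_core = sum(widths_bits) // 8    # 8-bit ALU equivalents per core
print(per_core)                     # 56
print(4 * per_core)                 # 224 for a 4-core chip
```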

Also, KiloCore is programmed by a VLIW-style compiler exploiting
relatively fine grain parallelism; it is not really a coarse grain
multithreaded machine.

IMHO Rapport's biggest innovation is their "virtual stripe mode". If
you can get the entire computation to fit in their ALU array, you just
pump it; if not, they have pretty good support for delivering a fairly
large VLIW style instruction to it on each cycle.

---

Overall, I agree with Gary: too many computer architects are smoking
the hashish pipe of high multicore and multithreading. Basically,
they are building such things because they can, not because there is a
market.

I just wanted to note that Rapport KiloCore is not necessarily in that
crowd.

It is unfortunate that Rapport chose the name "KiloCore". It is a misnomer.

Chris Thomasson

unread,
Sep 1, 2006, 6:49:09 PM9/1/06
to
"Andy Glew" <first...@employer.domain> wrote in message
news:peypac5j...@PXPL8591.amr.corp.intel.com...

> "Gary" <gar...@gmail.com> writes:
>
>> The same idiots who throw tens of cores on a die (and apparently
>> someone is doing a thousand-core die)
>
> By the way: the KiloCore from Rapport
> http://www.rapportincorporated.com/kilocore/kilocore.html
> is not really what I would call 1000 CPU cores on a die.

Impressive... I wonder how similar their instruction set actually is to
PowerPC...

Hummm... I now think I need to get my hands on an architecture manual and an
assembler for this processor!

;)


Chris Thomasson

unread,
Sep 4, 2006, 7:20:59 PM9/4/06
to
"Eugene Miya" <eug...@cse.ucsc.edu> wrote in message
news:44f318da$1@darkstar...
[...]

>>IMHO, pattern recognition is relative

[...]

>>BTW, thank you for talking about this theoretical physics/cosmos stuff. I
>>find it interesting... Almost as interesting as programming!
>>;)

> Oh, not me. My now dead friend Bill was the Godfather of the Santa Cruz
> Chaos Cabral.


[...]

>>> What kinds of pattern?

>>Okay... Lets start with the initial "fractal-like pattern":

[...]

>>> (squinting) Not sure what you mean by normal. Lots of galaxies and
>>> cosmology we still have not figured out.
>>Well, a spiral or elliptical like galaxies design could easily fit into
>>the
>>setup... The setup requires a central source of gravity with various
>>orbiting entities'..

> Galaxies yes.
> Universes?

I could apply the inherent self-similarity to the design of a galaxy. A
galaxy basically owns a collection of orbiting entities (e.g.,
solar-systems), influenced by the black hole at its center.... Therefore, I
say that a universe basically owns a collection of orbiting entities (e.g.,
galaxies...), influenced by a super massive black hole at its center.


Again... Instead of orbiting solar-systems, universes contain orbiting
galaxies around a very super massive black hole at the center...


What do you think? Could that be possible?


Chris Thomasson

unread,
Sep 4, 2006, 7:22:00 PM9/4/06
to
"Chris Thomasson" <cri...@comcast.net> wrote in message
news:d76dnXZIw77iL2HZ...@comcast.com...

> "Eugene Miya" <eug...@cse.ucsc.edu> wrote in message
> news:44f318da$1@darkstar...
> [...]
>
>>>IMHO, pattern recognition is relative
> [...]
>
>>>BTW, thank you for talking about this theoretical physics/cosmos stuff. I
>>>find it interesting... Almost as interesting as programming!
>>>;)
>
>> Oh, not me. My now dead friend Bill was the Godfather of the Santa Cruz
>> Chaos Cabral.

Sorry to hear about that.


Chris Thomasson

unread,
Sep 4, 2006, 7:25:47 PM9/4/06
to
"Chris Thomasson" <cri...@comcast.net> wrote in message
news:d76dnXZIw77iL2HZ...@comcast.com...
> "Eugene Miya" <eug...@cse.ucsc.edu> wrote in message
> news:44f318da$1@darkstar...
[...]
>> Galaxies yes.
>> Universes?
>
> I could apply the inherent self-similarity to the design of a galaxy.

[...]

Just to quickly clarify:

A galaxy is similar to a solar-system because a solar-system contains
orbiting entities around gravity at center; star...

A universe is similar to a galaxy and a solar-system because it contains
orbiting entities around gravity at center; black hole...

And all the way down we go....


;)
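Stripped of the cosmology, the pattern being described here is plain recursion: each level owns a collection of the level below, bound to a central attractor — the same shape as the fractal chip design proposed at the top of the thread. A throwaway Python sketch (all names are invented here, purely illustrative):

```python
# Purely illustrative: the self-similar "owns a collection orbiting a
# center" pattern, expressed as a recursive structure. Names invented.
from dataclasses import dataclass, field

@dataclass
class System:
    center: str                       # star, black hole, ...
    orbiting: list = field(default_factory=list)

    def depth(self) -> int:
        # 1 for this level, plus the deepest nested level below it
        return 1 + max((s.depth() for s in self.orbiting), default=0)

solar = System("star")
galaxy = System("black hole", [solar])
universe = System("supermassive black hole", [galaxy])
print(universe.depth())  # 3 -- and nothing in the structure stops further nesting
```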


Alex Colvin

unread,
Sep 5, 2006, 2:14:27 PM9/5/06
to
>A universe is similar to a galaxy and a solar-system because it contains
>orbiting entities around gravity at center; black hole...

center?

AirRa...@gmail.com

unread,
Sep 5, 2006, 9:58:41 PM9/5/06
to
google Intel "Platform 2015"

they're talking about:

10s to 100s of cores
100s threads
10s of billions of transistors

here:
http://www.zdnet.de/i/news/200501/platform2015.jpg
http://www.intel.com/technology/magazine/computing/platform-2015-0305.htm


Intel says they're going to have super chips with several major
full-blown cores, lots of small multi-purpose accelerator cores that
can redirect their focus on different tasks quickly, and specialised
single-purpose units for 3D graphics rendering.

Chris Thomasson wrote:
> The following will be wrapped around the basic theme:
>
> "in the year 20XX"
>
>

> Okay... With the recent topic of how multithreading will interact with
> mega/giga/tera-core chips presented by Barry Kelly in "The Problem with
> Threads" discussion over in c.p.t.
>
> http://groups.google.com/group/comp.programming.threads/msg/4416cc8e32fdf169?hl=en
>
> http://groups.google.com/group/comp.programming.threads/browse_frm/thread/b192c5ffe9b47926/5301d091247a4b16?hl=en#5301d091247a4b16
>
>
> as a programmer that makes frequent use of advanced thread synchronization
> techniques; I decided to ask the following questions to this group:


>
>
>
>
> " How many cores do you think a chip could have, lets say 10 or 20+ years
> from now? "
>
>

> " Are there any known laws/phenomena that are imposed/governed by the laws
> of physics, as they are understood today of course, that can put a hard
> limit on the total number of cores that can fit on the usable surface area
> of a chip? "
>
>
>
>
> IMHO, it seems that the rapid innovation of highly advanced nano
> technologies' could make a super-computer-on-a-chip possible. Indeed...
>
>
>
>
> Warning!:
>
> Please note that I am not a hardware guy in any sense of the term... I try
> hard to make a decent living on creating highly scaleable server software
> components out of the instruction sets that are ultimately created by some
> of the hardware gurus that grace comp.arch with their intellect, and most
> importantly, IMHO, their valuable TIME... Here are some of my brief
> ramblings and probably naive opinions' on a specific aspect of this subject,
> so please "try" to bear with me for a moment. It may also help you to keep
> an open sense of humor...
>
> Thank you!
>
> :)
>
>
>
>
> Okay. We can zoom down to the sub-atomic level and we can zoom out into the
> cosmos, but how far can we go in either direction? If one cannot zoom out/in
> enough to suit their specific needs, well IMHO: A prompt increase in raw
> power and a rash of clever innovations can probably allow one to go further....
> Therefore, it is my humble thesis that the distance limit for zooming in or
> out of anything is infinite. For instance, and IMO, the entire "universe"
> probably looks like a normal galaxy at a certain distance...
>
> Instead of a collection of solar-systems that orbit a common central source
> of gravity (e.g., black hole), a universe contains a collection of galaxies
> that orbit another exponentially stronger common central source of gravity
> (e.g., perhaps super massive black hole)... Example of my, perhaps naive,
> opinion that the distance that one can zoom out from, say a solar-system, is
> indeed infinite.
>
> A pattern for this could be as simple as the following fractal-like scheme:


>
>
> 1) Solar System = Collection Of Planets Orbiting Gravity: Star
>
> <zoom out>
>
>
> 2) Galaxy = Collection Of Solar-Systems Orbiting Gravity: Black Hole
> Strength 16
>
> <zoom out>
>
>
> 3) Universe = Collection Of Galaxy's Orbiting Gravity: Black Hole Strength
> 16^16
>
> <zoom out>
>
>
> 4) CollectionA = Collection Of Universes Orbiting Gravity: Black Hole
> Strength (16^16)^16
>
> <zoom out>
>
>
> 5) CollectionB = Collection Of CollectionA's Orbiting Gravity: Black Hole
> Strength ((16^16)^16)^16
>
> <zoom out>
>
>
> 6) And on and on forever...
>
>

> I think that this makes some sort of sense, in my mind at least... ?
>
> ;)
>
>
> Now for the infinitely small, again IMHO, when you start to zoom in far
> enough to observe atoms, and further down to the sub-atomic/quantum level,
> things mysteriously and oddly seem to resemble and sort of behave like a
> tiny little solar-system; Nucleus is the source of "gravity" for orbiting
> particles... So, at this point I think that the same pattern for zooming out
> can be applied to zooming in, and, IMO, can be repeated for ever... It seems
> like an infinite amount of space can indeed occupy/exist in and throughout
> the world of the super small, ditto for the world of the super massive...
>
>
> Oh shi^ I am getting off into fuc%king "Startrek Land!!!!"... Beam me up
> Scotty!
>
>
> Anyway if you are still skimming over this AND if ANY of the crap I just
> typed out makes any sort of sense at all, then, IMHO you can get a truly
> massive amount, perhaps even an undefined or infinite amount of cores on a
> chip... Perhaps, with a fractal-like chip design that mimics my personal
> interpretation of the design of the cosmos, cores could contain collections of
> cores, which in turn contain other collections of cores. The central gravity
> source that holds a collection of cores "together" could be a memory system
> that is local for each collection of cores. The cores would be constructed
> in the quantum world of the super small... Need more cores, zoom down to
> another level and construct it there... Does this make any sense at all; any
> thoughts?
>
>
>
>
> P.S.
>
> Please try not to flame me "too" much!
>
> ;)

Chris Thomasson

unread,
Sep 7, 2006, 12:42:23 AM9/7/06
to
"Alex Colvin" <al...@TheWorld.com> wrote in message
news:edkeq3$8qi$2...@pcls4.std.com...

> >A universe is similar to a galaxy and a solar-system because it contains
>>orbiting entities around gravity at center; black hole...
>
> center?

When you look into the center of our own galaxy we can see a bright spot of
sorts; energy radiating off the quasar / event horizon from the black
hole...


I think if we get a powerful enough telescope, we can peer into the center
of our universe... IMHO, it should look very similar to the center of a
galaxy... Tons of energy radiating off the super massive quasar / event
horizon from "our" universe's black hole.


If we get extremely powerful telescopes we can probably see other
universes; IMHO, they should look exactly like a galaxy... There are
probably spiral and elliptical universes... Each with trillions upon
trillions of galaxies in orbit...


Bill Todd

unread,
Sep 7, 2006, 1:10:11 AM9/7/06
to

Unless my admittedly limited acquaintance with current theories of
cosmology is significantly flawed, the last two paragraphs above sound
suspiciously like pure, unadulterated babble.

- bill

Chris Thomasson

unread,
Sep 7, 2006, 3:19:22 AM9/7/06
to
"Bill Todd" <bill...@metrocast.net> wrote in message
news:kbqdneIa8ccpNGLZ...@metrocastcablevision.com...

> Chris Thomasson wrote:
>> "Alex Colvin" <al...@TheWorld.com> wrote in message

[...]

>> I think if we get a powerful enough telescope, we can peer into the
>> center of our universe... IMHO, it should look very similar to the center
>> of a galaxy... Tons of energy radiating off the super massive quasar /
>> event horizon from "our" universes black hole. If we get exe
>>
>>
>> If we get extremely powerful telescopes we can probably see other
>> universes'; IMHO, they should look exactly like a galaxy... There are
>> probably spiral and elliptical universes... Each with trillions upon
>> trillions of galaxies in orbit...
>
> Unless my admittedly limited acquaintance with current theories of
> cosmology is significantly flawed, the last two paragraphs above sound
> suspiciously like pure, unadulterated babble.

Do you think that there is even a remote possibility that it could be the
way things are? I would not take this stuff too seriously; it's all
theoretical babble in some sense...

:)


Bill Todd

unread,
Sep 7, 2006, 4:29:06 AM9/7/06
to

That sounds disturbingly close to the view cherished by the ignorant
that their opinions are just as good as anyone else's - a stance which
is demonstrably incorrect when it comes to matters subject to even
partial verification.

There is a clear and distinct difference between uninformed, sophomoric
babbling and tentative conclusions based on carefully-analyzed data:
while the former may turn out to be closer to reality in rare (and often
apparently pretty random) cases, one would not be wise to place anything
close to even money on it.

- bill

Jan Vorbrüggen

unread,
Sep 7, 2006, 4:39:42 AM9/7/06
to
>>>I think if we get a powerful enough telescope, we can peer into the
>>>center of our universe... IMHO, it should look very similar to the center
>>>of a galaxy... Tons of energy radiating off the super massive quasar /
>>>event horizon from "our" universes black hole. If we get exe
>>>
>>>If we get extremely powerful telescopes we can probably see other
>>>universes'; IMHO, they should look exactly like a galaxy... There are
>>>probably spiral and elliptical universes... Each with trillions upon
>>>trillions of galaxies in orbit...
>>
>>Unless my admittedly limited acquaintance with current theories of
>>cosmology is significantly flawed, the last two paragraphs above sound
>>suspiciously like pure, unadulterated babble.
>
> Do you think that there is even a remote possibility that it could be the
> way things are? I would not take this stuff too seriously; its all
> theoretical babble in some sense...

No. For the first, read up on Emmy Noether. The universe cannot have a
centre, or a major symmetry of physics would be broken. For the second,
you've just revoked general relativity; not a good move for similar reasons.

Jan

Nick Maclaren

unread,
Sep 7, 2006, 4:52:29 AM9/7/06
to

In article <4ma452F...@individual.net>,

=?ISO-8859-1?Q?Jan_Vorbr=FCggen?= <jvorbr...@not-mediasec.de> writes:
|> >
|> > Do you think that there is even a remote possibility that it could be the
|> > way things are? I would not take this stuff too seriously; its all
|> > theoretical babble in some sense...
|>
|> No. For the first, read up on Emmy Noether. The universe cannot have a
|> centre, or a major symmetry of physics would be broken. For the second,
|> you've just revoked general relativity; not a good move for similar reasons.

Well, given the amount of stuff that gets serious money and assumes a
solution to the Halting Problem, that sounds positively sane ....


Regards,
Nick Maclaren.

Chris Thomasson

unread,
Sep 7, 2006, 6:00:20 AM9/7/06
to
"Bill Todd" <bill...@metrocast.net> wrote in message
news:CZKdnXVgUvLORWLZ...@metrocastcablevision.com...

So be it. I think I will stick to programming... This is all OT anyway...

;)


Peter Dickerson

unread,
Sep 7, 2006, 5:58:20 AM9/7/06
to
"Chris Thomasson" <cri...@comcast.net> wrote in message
news:k8GdneiA8sRTPWLZ...@comcast.com...

> "Alex Colvin" <al...@TheWorld.com> wrote in message
> news:edkeq3$8qi$2...@pcls4.std.com...
> > >A universe is similar to a galaxy and a solar-system because it
contains
> >>orbiting entities around gravity at center; black hole...
> >
> > center?

There is no centre of the universe. The concept makes no sense.

> When you look into the center of our own galaxy we can see a bright spot
of
> sorts; energy radiating off the quasar / event horizon from the black
> hole...

When you look at the centre of our galaxy you probably see very little.
Firstly it would be best to be in the southern hemisphere, but even then
there is a big cloud of gas in the way. Having said that, telescopes can see
though this cloud at various wavelengths, so we have a good idea what is
behind. There is no evidence for a quasar there, but it is generally accepted
that there is a smallish black hole (as galactic central black holes go).
There is relatively little radiation from around it. From the event horizon
itself we'd expect only Hawking radiation. AFAIK Hawking radiation has never
been detected; it's theoretical (but few doubt its existence). Large black
holes have weaker Hawking radiation than small ones because the event
horizon is flatter.

> I think if we get a powerful enough telescope, we can peer into the center
> of our universe... IMHO, it should look very similar to the center of a
> galaxy... Tons of energy radiating off the super massive quasar / event
> horizon from "our" universes black hole. If we get exe

No such thing as the centre of our universe. Bigger telescopes peer further
back in time.

> If we get extremely powerful telescopes we can probably see other
> universes'; IMHO, they should look exactly like a galaxy... There are
> probably spiral and elliptical universes... Each with trillions upon
> trillions of galaxies in orbit...

The word Universe means something like 'one and all'. If we can see
something then it is definitely in our universe. As a corollary it's clear
that we can never see other universes.

Peter


Del Cecchi

unread,
Sep 7, 2006, 12:21:05 PM9/7/06
to
AirRa...@gmail.com wrote:
> google Intel "Platform 2015"
>
> they're talking about:
>
> 10s to 100s of cores
> 100s threads
> 10s of billions of transistors
>
> here:
> http://www.zdnet.de/i/news/200501/platform2015.jpg
> http://www.intel.com/technology/magazine/computing/platform-2015-0305.htm
>
>
> Intel says they're going to have super chips with several major
> full-blown cores, lots of small multi-purpose accelerator cores that
> can redirect their focus on different tasks quickly, and specialised
> single-purpose units for 3D graphics rendering.
>
And along those lines, the shiny new issue of the IBM Journal of
Research and Development is out, on Advanced Silicon Technology

You may peruse it at http://www.research.ibm.com/journal/rd50-45.html

It seems more transistor oriented than architecture.


(snip)

--
Del Cecchi
"This post is my own and doesn’t necessarily represent IBM’s positions,
strategies or opinions.”

Nick Maclaren

unread,
Sep 7, 2006, 12:35:28 PM9/7/06
to

In article <4mav7gF...@individual.net>,

Del Cecchi <cecchi...@us.ibm.com> writes:
|> >
|> And along those lines, the shiny new issue of the IBM Journal of
|> Research and Development is out, on Advanced Silicon Technology
|>
|> You may peruse it at http://www.research.ibm.com/journal/rd50-45.html
|>
|> It seems more transistor oriented than architecture.

Indeed. While I could have predicted several of the topics, the papers
lose me in their introduction!

The exception I noted at a quick glance was the idea that hetereogeneous
cores could be used to reduce power consumption. I can't claim to have
invented that, as I will bet that I wasn't in the first 100 people to
think of it, but I have been proposing it for some years!

When will we see such things in production, I wonder?
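A toy model of why heterogeneous cores can reduce power — the numbers below are invented for illustration, not from any paper. The intuition: dynamic power grows faster than single-core speed, so a big core that is 2x faster at 4x the power spends twice the energy per task, and latency-insensitive work belongs on the small core:

```python
# Invented numbers, illustration only: energy per task on a big vs.
# small core. Power in watts, speed in tasks/second.

def energy_per_task(power_w: float, speed_tasks_per_s: float) -> float:
    # joules per task = watts / (tasks per second)
    return power_w / speed_tasks_per_s

small = energy_per_task(power_w=1.0, speed_tasks_per_s=1.0)  # 1.0 J/task
big   = energy_per_task(power_w=4.0, speed_tasks_per_s=2.0)  # 2.0 J/task
print(small, big)  # the small core halves the energy bill per task
```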


Regards,
Nick Maclaren.

Alex Colvin

unread,
Sep 8, 2006, 11:32:32 AM9/8/06
to
>I think if we get a powerful enough telescope, we can peer into the center
>of our universe... IMHO, it should look very similar to the center of a
>galaxy... Tons of energy radiating off the super massive quasar / event
>horizon from "our" universes black hole. If we get exe

This would be very bad.

The limits of our universe are effectively determined by the age and the
speed of light. That gives us a sphere, with ourselves at the center.

We'd be toast.

--
mac the naïf

ken...@cix.compulink.co.uk

unread,
Sep 8, 2006, 11:37:00 AM9/8/06
to
In article <03SLg.10725$Mh2....@newsfe6-win.ntli.net>,
first{dot}sur...@tesco.net (Peter Dickerson) wrote:

> From the event horizon
> itself we'd expect only Hawking radiation.

No, the Hawking radiation would be too small to detect. It is
inversely proportional to size, which is why the search for
Quantum black holes has stopped. What could be theoretically
detected is radiation from the accretion disk. As matter spirals
down the gravity gradient energy is released.
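Ken's point — Hawking temperature falls as mass rises — follows from the standard formula T = ħc³/(8πGMk_B). A quick numerical check (SI constants; the ~4e6-solar-mass figure for a Sgr A*-class hole is a round number of mine, not from the post):

```python
# Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B): inversely
# proportional to mass, so galactic-scale holes radiate less, not more.
import math

hbar = 1.0545718e-34   # J*s
c    = 2.99792458e8    # m/s
G    = 6.67430e-11     # m^3 kg^-1 s^-2
kB   = 1.380649e-23    # J/K
M_sun = 1.989e30       # kg

def hawking_T(M: float) -> float:
    return hbar * c**3 / (8 * math.pi * G * M * kB)

print(hawking_T(M_sun))        # ~6e-8 K for a solar-mass hole
print(hawking_T(4e6 * M_sun))  # ~1.5e-14 K for a Sgr A*-sized hole
```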

Ken Young

jacko

unread,
Sep 8, 2006, 2:30:38 PM9/8/06
to

hyperbolic space has the property that everywhere is the centre!

jacko

unread,
Sep 8, 2006, 2:40:02 PM9/8/06
to
hi

Peter Dickerson wrote:
<snip>


> When you look at the centre of our galaxy you probably see very little.
> Firstly it would be best to be in the southern hemisphere, but even then
> there is a big cloud of gas in the way. Having said that, telescopes can see
> though this cloud at various wavelengths, so we have a good idea what is
> behind. The is no evidence for a quasar there but it is generally accepted
> that there is a smallish black hole (as galactic central black holes go).
> There is relatively little radiation from around it. From the event horizon
> itself we'd expect only Hawking radiation. AFAIK Hawking radiation has never
> been detected, its theoretical (but few doubt its existance). Large black
> holes have weaker Hawking radiation than small ones because the event
> horizon is flatter.

but as time is dilated near the surface of the event horizon, the
inward particle of the particle pair never really reaches the event
horizon in any reasonable time scale, and the outer particle of the
pair takes such a long time to get away from the hole, combined with
red shift and the unfeasibility of low frequency antenna size. well you
can't use super low C dielectrics everywhere ;-)
<snip>

> The word Universe means something like 'one and all'. If we can see
> something then it is definitely in our universe. As a corollary its clear
> that we can never see other universes.

Some would conjecture that god is a short word for universe.

cheers

Peter Dickerson

unread,
Sep 8, 2006, 3:09:09 PM9/8/06
to
<ken...@cix.compulink.co.uk> wrote in message
news:FfydnZJ2VrK...@pipex.net...

Yes I know. The key point was the word 'only'.
Yes, as matter falls inward it radiates some of the energy away. But that
happens outside the event horizon, not at it. Further, the Milky Way's
central black hole does not seem to be accreting much matter (probably
the equivalent of not being a quasar).

Peter


jacko

unread,
Sep 8, 2006, 3:21:41 PM9/8/06
to

Nick Maclaren wrote:
> In article <4mav7gF...@individual.net>,
> Del Cecchi <cecchi...@us.ibm.com> writes:
> |> >
> |> And along those lines, the shiny new issue of the IBM Journal of
> |> Research and Development is out, on Advanced Silicon Technology
> |>
> |> You may peruse it at http://www.research.ibm.com/journal/rd50-45.html
> |>
> |> It seems more transistor oriented than architecture.
>
> Indeed. While I could have predicted several of the topics, the papers
> lose me in their introduction!

2nm as final transistor channel size. then something else instead of the
transistor needed..

> The exception I noted at a quick glance was the idea that hetereogeneous
> cores could be used to reduce power consumption. I can't claim to have
> invented that, as I will bet that I wasn't in the first 100 people to
> think of it, but I have been proposing it for some years!
>
> When will we see such things in production, I wonder?
>

bytecodes: numerics, mem and control sequencing, fpga programming,
spaciographics.

32 bits?
http://indi.joox.net indi16 cpu bytecode for mem and control sequencing
done.
this is an area-minimised approach with an easy (low area -> high speed)
bytecode decode.

Eugene Miya

unread,
Sep 8, 2006, 8:15:17 PM9/8/06
to
In article <IYOdnf5p5ZMhL2HZ...@comcast.com>,

Chris Thomasson <cri...@comcast.net> wrote:
>"Chris Thomasson" <cri...@comcast.net> wrote in message
>news:d76dnXZIw77iL2HZ...@comcast.com...
>> "Eugene Miya" <eug...@cse.ucsc.edu> wrote in message
>> news:44f318da$1@darkstar...
>>>>IMHO, pattern recognition is relative
>>>>BTW, thank you for talking about this theoretical physics/cosmos stuff.
>>
>>> Oh, not me. My now dead friend Bill was the Godfather of the Santa Cruz
>>> Chaos Cabral.
>
>Sorry to hear about that.

You can read a little bit about him in Gleick's Chaos book or Tom Bass'
The Eudaemonic Pie.

He was a student of Richard Feynman, John Wheeler, and Kip Thorne.
His PhD thesis was on finding 1/O(n^5) components of gravity (which is
where Newtonian calculus is a little off). But make no mistake, Bill was not
a chaos person. He just stood up for them to the Faculty.

--

Eugene Miya

unread,
Sep 8, 2006, 8:20:31 PM9/8/06
to
>>> Galaxies yes.
>>> Universes?
>> I could apply the inherent self-similarity to the design of a galaxy.

In article <S7Sdnac8yfMCLmHZ...@comcast.com>,


Chris Thomasson <cri...@comcast.net> wrote:
>Just to quickly clarify:
>
>A galaxy is similar to a solar-system because a solar-system contains
>orbiting entities around gravity at center; star...
>
>A universe is similar to a galaxy and a solar-system because it contains
>orbiting entities around gravity at center; black hole...
>
>And all the way down we go....
>;)

Well, doesn't quite work that way.
First is the problem of the distances involved. That affects time.
And it's hard to measure anything with light at that speed because it takes
a lot of time. Plus it's not clear that you can say the universe has a
center.

Or what it means to have parallel universes (that's merely one theory,
with nothing to substantiate it). Etc. etc.

No one is sure what's at the galactic center.
That idea has changed in the past few decades.

For the two smaller entities, you can use the idea of a center.
For the larger, you have to be ready to drop it.

--

Eugene Miya

unread,
Sep 8, 2006, 8:26:14 PM9/8/06
to
In article <d76dnXZIw77iL2HZ...@comcast.com>,
Chris Thomasson <cri...@comcast.net> wrote:
>>>theoretical physics/cosmos stuff.
>>>> cosmology
>
>I could apply the inherent self-similarity to the design of a galaxy. A
>galaxy basically owns a collection of orbiting entities (e.g.,
>solar-systems); influenced through black hole at center.... Therefore, I say
>that universe basically owns a collection of orbiting entities (e.g.,
>galaxies'... ); influences' through supper massive black hole at center.
>
>
>Again... Instead of orbiting solar-systems, universes contain orbiting
>galaxies' around very super massive black hole at the center...
>
>
>What do you think? Could that be possible?

I am not a cosmologist or a physicist.
I hang with a few. You really want sci.astro.* to set you straight.
You have to get into the dark matter discussion, understand what gravity
does to light over long distances (Bill was looking at gravitational
lensing when he died, using yacc and lex no less (he took Pike's hoc(1)
calculator, changed it first to do complex arithmetic, then quaternions)).


--

Chris Thomasson

unread,
Sep 9, 2006, 12:56:42 AM9/9/06
to
"Eugene Miya" <eug...@cse.ucsc.edu> wrote in message
news:45020a26$1@darkstar...

if the gravity from the sun can warp light around it, I can only imagine how
many times light from a distant galaxy gets warped before it reaches us...


Chris Thomasson

unread,
Sep 9, 2006, 1:03:44 AM9/9/06
to
"Eugene Miya" <eug...@cse.ucsc.edu> wrote in message
news:450208cf$1@darkstar...

>>>> Galaxies yes.
>>>> Universes?
>>> I could apply the inherent self-similarity to the design of a galaxy.
>
> In article <S7Sdnac8yfMCLmHZ...@comcast.com>,
> Chris Thomasson <cri...@comcast.net> wrote:
>>Just to quickly clarify:
>>
>>A galaxy is similar to a solar-system because a solar-system contains
>>orbiting entities around gravity at center; star...
>>
>>A universe is similar to a galaxy and a solar-system because it contains
>>orbiting entities around gravity at center; black hole...
>>
>>And all the way down we go....
>>;)
>
> Well, doesn't quite work that way.
> First is the problem of the distances involved. That affects time.
> And it's hard to mean anything with light at that speed because it takes
> a lot of time. Plus it's not clear that you can say the universe has a
> center.

Well, let's drop the universe having a center thing... Although, the exact
place in space and time where the big bang "was called into existence" could
arguably be the center? Na...


Okay... Let me start on something we all can agree on... A galaxy is an
"entity that exists in our universe" that "owns" a collection of orbiting
solar-systems around gravity...?


I think there is another "entity that exists in our universe" that "owns" a
collection of orbiting galaxies around a massive source of gravity...?


> For the larger, you have to be ready to drop it.

Am I still sounding crazy?

;)


Eugene Miya

unread,
Sep 9, 2006, 11:39:28 AM9/9/06
to
In article <LKCdnSyIWo9U1Z_Y...@comcast.com>,

Chris Thomasson <cri...@comcast.net> wrote:
>"Eugene Miya" <eug...@cse.ucsc.edu> wrote in message
>news:450208cf$1@darkstar...
>>>>> Galaxies yes.
>>>>> Universes?
>> First is the problem of the distances involved. That affects time.
>> And it's hard to mean anything with light at that speed because it takes
>> a lot of time. Plus it's not clear that you can say the universe has a
>> center.

>if the gravity from the sun can warp light around it, I can only imagine how

>many times light from a distant galaxy gets warped before it reaches us...

I think galaxy gravity is of insufficient density to warp what's needed.
The sun only warps it very close.

>Well, lets drop the universe having a center thing... Although, the exact
>place in space and time where the big bang "was called into existence" could
>arguably be the center? Na...

You must not confuse geometry here. We don't know what exists "outside"
our universe. Maybe nothing. It's not space. Use the rising-bread analogy,
except make it an expanding balloon surface.

>Okay... Let me start on something we all can agree on... A galaxy is an
>"entity that exists in our universe" that "owns" a collection of orbiting
>solar-systems around gravity...?

That's probably some of it.

>I think there is another "entity that exists in our universe" that "owns" a
>collection of orbiting galaxies around a massive source of gravity...?

Then our job is to figure out that entity?

>> For the larger, you have to be ready to drop it.
>Am I still sounding crazy?
>;)

I have no idea.
I'm going to a memorial in a few hours and Thorne might be there.

--

Andy Glew

unread,
Sep 9, 2006, 11:47:44 PM9/9/06
to
Del Cecchi pointed comp.arch to

> the shiny new issue of the IBM Journal of
> Research and Development is out, on Advanced Silicon Technology
> at http://www.research.ibm.com/journal/rd50-45.html

which contain Topol's paper on 3D integration
http://www.research.ibm.com/journal/rd/504/topol.html

which contains the graph
http://www.research.ibm.com/journal/rd/504/topol3.gif

which shows that "performance increases as the square root of the
number of layers stacked"


Ignore for the moment that I do not think this is what the graph is
showing... I haven't chased down its source, so there may be more
convincing evidence.


I just find it interesting to find yet another square root law
in computer architecture.


I am becoming something of a collector, a connoisseur, of square root
laws in computer architecture.


The first such square root law I encountered was Tjaden and Flynn's
1968 paper, whose key result I remember as "performance increases as
the square root of the number of branches that can be speculatively
executed".

Since then, I have seen other examples. I coined my own "law", that
performance increases as the square root of the number of
transistors... - although this has become more well-known as Pollack's
Law. (Fred Pollack was my original boss at Intel; Fred presented
slides to this effect at some widely publicized talks.)


By the way, I have long wondered whether this "law" is due to the
2D-ness of most VLSI: the number of transistors increases as the
radius^2. I have conjectured that, in 3D integration, the appropriate
law may be a cube root law.


Topol's graph suggests a new square root law for 3D.
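A minimal illustration of the two scaling rules in this post — only the exponents (square root vs. conjectured cube root) come from the post; the transistor budgets are arbitrary:

```python
# Illustration only: Pollack's-Law-style scaling, performance ~ sqrt of
# the transistor budget, vs. the conjectured cube-root law for 3D.
budgets = [1, 4, 16, 64]
for n in budgets:
    print(n, n ** 0.5, round(n ** (1 / 3), 3))
# Under a sqrt law, doubling performance costs 4x transistors;
# under a cube-root law it would cost 8x.
```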

russell kym horsell

unread,
Sep 10, 2006, 1:43:04 AM9/10/06
to
Andy Glew <first...@employer.domain> wrote:
[...]

> I am becoming something of a collector, a connoiseur, of square root
> laws in computer architecture.
> The first such square root law I encountered was Tjaden and Flynn's
> 1968 paper, whose key result I remember as "performance increases as
> the square root of the number of branches that can be speculatively
> executed".
[...]


Maybe it's all just queueing theory. :)

"Performance" is often related to "ability to handle variability".
E.g. the propensity of bubbles of unexpectedness developing in networks
of synchronised processes of various kinds.

I'm not restricting anything to comp arch.

The variation in the "average success" (e.g. deliver output for given input
after unit time) of n processes connected together is proprtional to sqrt(n).

If the number of processes/parts between X and Y is proportional to the
separation of X and Y, then the sqrt is idempotently moved to the headline
position in a new law.

Even taking the Manhattan distance in a 3d rig, the law would then
still be sqrt(). I don't know whether this is pleasing or displeasing.
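
The sqrt(n) scaling of that variation can be sanity-checked with a toy
Monte Carlo; everything here (function name, parameters, trial counts) is
an arbitrary illustration of mine, not anything from queueing theory proper:

```python
# Toy check of the sqrt(n) claim above: the standard deviation of the
# total "success" of n independent unit-time processes grows as sqrt(n).
import random
import statistics

def total_success_sd(n, p=0.5, trials=20000, seed=42):
    """Std dev of the number of successes among n independent processes."""
    rng = random.Random(seed)
    totals = [sum(rng.random() < p for _ in range(n)) for _ in range(trials)]
    return statistics.pstdev(totals)

sd_25 = total_success_sd(25)    # theory: sqrt(25 * 0.25)  = 2.5
sd_100 = total_success_sd(100)  # theory: sqrt(100 * 0.25) = 5.0
print(sd_100 / sd_25)           # quadrupling n roughly doubles the spread
```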

Chris Thomasson

Sep 10, 2006, 3:14:16 PM
"Eugene Miya" <eug...@cse.ucsc.edu> wrote in message
news:4502e030$1@darkstar...

> In article <LKCdnSyIWo9U1Z_Y...@comcast.com>,
> Chris Thomasson <cri...@comcast.net> wrote:
>>"Eugene Miya" <eug...@cse.ucsc.edu> wrote in message
>>news:450208cf$1@darkstar...
>>>>>> Galaxies yes.
>>>>>> Universes?
>>> First is the problem of the distances involved. That affects time.

[...]

>>Well, lets drop the universe having a center thing... Although, the exact
>>place in space and time where the big bang "was called into existence"
>>could
>>arguably be the center? Na...

Humm...


> You must not confuse geometry here. We don't know what exists "outside"
> our universe.

Interesting...


> Maybe nothing.

I wonder if there was an "intelligent design" of 'sorts' behind the big
bang?


Well, who knows... One little "analogy"...


<sidetrack>

>>> jacko Wrote:
>>> Some would conject that god is a short word for universe.


Flora greenery "exhales oxygen and inhales carbon", while fauna "inhales
oxygen and exhales carbon".

Humm... Flora annuals seem to react to the duration of light they get per
day (e.g., the flowering process)... That tells me that the "intelligence"
that created the flora, fauna and the sun is "perhaps" the same
intelligence; an "unknown and awe-inspiring entity"...

?
</sidetrack>


Damn, I really want to "talk to the guy who holds the patent on gravity and
electromagnetism"...

I will ask him "how they fit together"...

If I ever get to talk to it, "Its" answer will probably be...

"I AM THAT I AM! STUPID!!!!"

The universe is a mysterious and wonderful place indeed...

;)


> It's not space. Use the rising bread
> except make it an expanding balloon surface.

Humm....


>>Okay... Let me start on something we all can agree on... A galaxy is an
>>"entity that exists in our universe" that "owns" a collection of orbiting
>>solar-systems around gravity...?
>
> That's probably some of it.

Perhaps...


>>I think there is another "entity that exists in our universe" that "owns"
>>a
>>collection of orbiting galaxies around a massive source of gravity...?
>
> Then our job is figure out that entity?

Indeed!!!


>>> For the larger, you have to be ready to drop it.
>>Am I still sounding crazy?
>>;)
>
> I have no idea.
> I'm going to a memorial in a few hours and Thorne might be there.

Okay...


:O


Ketil Malde

Sep 11, 2006, 3:37:17 AM
"Chris Thomasson" <cri...@comcast.net> writes:

> Well, lets drop the universe having a center thing... Although, the exact
> place in space and time where the big bang "was called into existence" could
> arguably be the center? Na...

If you involve time, I guess you could say that the "center" is at
t=0. That means that you are now X billion years from the center
(regardless of your spatial position).

AIUI, our telescopes *can* in fact peer at the center of the universe,
it's called background radiation.

> I think there is another "entity that exists in our universe" that "owns"

consists of?

> a collection of orbiting galaxies around a massive source of gravity...?

Galaxies occur in clusters, which kind of sort of fit your
description. I'm not aware of any "massive source of gravity"
involved, beyond the galaxies themselves, but nobody is stopping you
if you want to conjecture about 'dark matter' and whatnot.

Anyway, it seems clear that these systems (solar system - galaxy -
galaxy cluster - universe) are similar only in a very, very
superficial sense, so I'm not sure what the point is.

-k
--
If I haven't seen further, it is by standing in the footprints of giants

Ketil Malde

Sep 11, 2006, 3:55:27 AM
eug...@cse.ucsc.edu (Eugene Miya) writes:

> First is the problem of the distances involved. That affects time.
> And it's hard to mean anything with light at that speed because it takes
> a lot of time. Plus it's not clear that you can say the universe has a
> center.

Bear with me as I run off on a tangent here:

Picture the universe as a sphere, where time is the radius, and our
three dimensions are reduced to the two dimensions of the sphere's
surface.

Now if I look off in the distance, due to the speed of light, I'm also
looking back in time. In other words, my line of sight does not lie
on the surface, but curves into the interior of the sphere (the past),
towards the center.

The question is: how large is the universe really, and how fast does
it expand? More to the point, is it possible that my line of sight:

a) crosses another line of sight, so that I can see the same point in
space-time (but different from t=0) in two or more directions?

b) performs multiple orbits before reaching the center, so that I can
see my own location in the past one or more times by looking far
enough?

And if the answer is no, well: why not? Is a universe where this
occurs impossible, or were we just unlucky?

I'll apologize for being off-topic, but I really have to know :-)

Anton Ertl

Sep 11, 2006, 4:40:53 AM
Ketil Malde <ketil...@ii.uib.no> writes:
>is it possible that my line of sight:
>
>a) crosses another line of sight, so that I can see the same point in
> space-time (but different from t=0) in two or more directions?

Sure, several gravitational lenses have been discovered, resulting in
the same object appearing in our sight twice, in two slightly
different directions.

- anton
--
M. Anton Ertl Some things have to be seen to be believed
an...@mips.complang.tuwien.ac.at Most things have to be believed to be seen
http://www.complang.tuwien.ac.at/anton/home.html

Alex Colvin

Sep 11, 2006, 9:17:16 PM
>The question is: how large is the universe really, and how fast does
>it expand? More to the point, is it possible that my line of sight:

>a) crosses another line of sight, so that I can see the same point in
> space-time (but different from t=0) in two or more directions?

>b) performs multiple orbits before reaching the center, so that I can
> see my own location in the past one or more times by looking far
> enough?

>And if the answer is no, well: why not? Is a universe where this
>occurs impossible, or were we just unlucky?

It seems possible. A couple of years ago it even looked likely.
http://www.newscientist.com/article.ns?id=dn4250

http://news.nationalgeographic.com/news/2003/10/1008_031008_finiteuniverse.html

However, I haven't heard much of this lately.

--
mac the naïf

Alex Colvin

Sep 11, 2006, 9:25:07 PM
>> Well, lets drop the universe having a center thing... Although, the exact
>> place in space and time where the big bang "was called into existence" could
>> arguably be the center? Na...

Since the big bang filled the entire universe, not just some part of it,
that center would be everywhere.

>> I think there is another "entity that exists in our universe" that "owns"

>> a collection of orbiting galaxies around a massive source of gravity...?

well, there's the "Great Attractor" in Centaurus.

--
mac the naïf

Peter Dickerson

Sep 12, 2006, 2:42:21 AM
"Alex Colvin" <al...@TheWorld.com> wrote in message
news:ee529j$g8h$2...@pcls4.std.com...

> >> Well, lets drop the universe having a center thing... Although, the
exact
> >> place in space and time where the big bang "was called into existence"
could
> >> arguably be the center? Na...
>
> Since the big bang filled the entire universe, not just some part of it,
> that center would be everywhere.

While I can agree with the conclusion (or that there is no well defined
centre) I can't agree with the logic. Just because air fills an entire
balloon doesn't mean that the centre is everywhere.

> >> I think there is another "entity that exists in our universe" that
"owns"
> >> a collection of orbiting galaxies around a massive source of
gravity...?
>
> well, there's the "Great Attractor" in Centaurus.

There are many Galactic clusters and clusters of Galactic clusters. The
Great Attractor just happens to be a nearby one.

Peter


Chris Thomasson

Sep 12, 2006, 4:06:40 AM
"Peter Dickerson" <first{dot}sur...@tesco.net> wrote in message
news:hFsNg.22941$8V4....@newsfe5-win.ntli.net...

That's fractal-like? There are probably clusters of clusters of Galactic
clusters...

;)

Let me rephrase my OP...


1) Solar System = Collection Of Planets Orbiting Gravity
<zoom out>


2) Galaxy = Collection Of Solar-Systems Orbiting Gravity
<zoom out>


3) GalacticClusterA = Collection Of Galaxies Orbiting Gravity
<zoom out>


4) GalacticClusterB = Collection Of GalacticClusterA Orbiting Gravity
<zoom out>


5) GalacticClusterC = Collection Of GalacticClusterB Orbiting Gravity
<zoom out>


6) GalacticClusterD = Collection Of GalacticClusterC Orbiting Gravity
<zoom out>


7) GalacticClusterE = Collection Of GalacticClusterD Orbiting Gravity
<zoom out>


8) Turtles all the way down...

Could this possibly be correct?


Eugene Miya

Sep 12, 2006, 1:51:58 PM
In article <L9GdnUB9TZgA_JnY...@comcast.com>,

Chris Thomasson <cri...@comcast.net> wrote:
>>>>>>> Galaxies yes.
>>>>>>> Universes?
>>>> First is the problem of the distances involved. That affects time.
>>>Well, lets drop the universe having a center thing... Although, the exact
>>>place in space and time where the big bang "was called into existence"
>>>could arguably be the center? Na...
>Humm...
>> You must not confuse geometry here. We don't know what exists "outside"
>> our universe.
>
>Interesting...

If you want to explore that area, you really need sci.astro, some
reading in "higher" math and physics, and cosmology.

>> Maybe nothing.
>I wonder if there was an "intelligent design" of 'sorts' behind the big
>bang?
>Well, who knows... One little "analogy"...

Then you want one of the creationism groups.

><sidetrack>
>>>> jacko Wrote:
>>>> Some would conject that god is a short word for universe.
>
>Flora greenery "exhales oxygen and inhales carbon", while fauna "inhales
>oxygen and exhales carbon".
>
>Humm... Flora annuals seem to react to duration of light they get per-day
>(e.g., flowering process)... That tells me that the "intelligence" that
>created the flora, fauna and the sun is "perhaps" the same intelligence; an
>"unknown and aw inspiring entity"...
>?

Look, you want plants, sci.bio, but you will do better with bio text
books and hanging around biologists (I prefer hanging with geologists these
days, but there are also more women in biology if you want to hang there;
they do not think in binary terms as many EEs do).

></sidetrack>
>
>
>
>
>Damn, I really want to "talk to the guy who holds the patent on gravity and
>electromagnetism"...
>I will ask him "how they fit together"...

Take a physics class.

>If I ever get to talk to it, "It's" answer will probably...
>"I AM THAT I AM! STUPID!!!!"
>The universe is a mysterious and wonderful place indeed...
>;)

I am not certain I would use those adjectives.

Take a physics class.


>> It's not space. Use the rising bread
>> except make it an expanding balloon surface.
>Humm....

These analogies are decades old. Like raisins in rising bread.


>>>Okay... Let me start on something we all can agree on... A galaxy is an
>>>"entity that exists in our universe" that "owns" a collection of orbiting
>>>solar-systems around gravity...?

>> That's probably some of it.


>Perhaps...
>>>I think there is another "entity that exists in our universe" that "owns"
>>>a
>>>collection of orbiting galaxies around a massive source of gravity...?
>> Then our job is figure out that entity?
>Indeed!!!

You need to pick up the terminology and the meaning (quantitatively) in
its use.


>>>> For the larger, you have to be ready to drop it.
>>>Am I still sounding crazy?
>>>;)
>> I have no idea.
>> I'm going to a memorial in a few hours and Thorne might be there.
>Okay...
>:O

Nope, Kip was not there, but I spent the weekend with one of his colleagues.

Take a physics class.
--

Eugene Miya

Sep 12, 2006, 1:59:05 PM
In article <eglkoqg7...@polarvier.ii.uib.no>,

Ketil Malde <ketil...@ii.uib.no> wrote:
>eug...@cse.ucsc.edu (Eugene Miya) writes:
>> First is the problem of the distances involved. That affects time.
>> And it's hard to mean anything with light at that speed because it takes
>> a lot of time. Plus it's not clear that you can say the universe has a
>> center.
>
>Bear with me as I run off on a tangent here:
>
>Picture the universe as a sphere, where time is the radius, and our
>three dimensions are reduced to the two dimensions of the sphere's
>surface.

Grab a book on cosmology and prepare to learn some math.
A sphere might seem convenient because of the term big-bang, but it's
insufficient in terms of structure.

>Now if I look off in the distance, due to the speed of light, I'm also
>looking back in time. In other words, my line of sight does not lie
>on the surface, but curves into the interior of the sphere (the past),
>towards the center.
>
>The question is: how large is the universe really, and how fast does
>it expand? More to the point, is it possible that my line of sight:
>
>a) crosses another line of sight, so that I can see the same point in
> space-time (but different from t=0) in two or more directions?
>
>b) performs multiple orbits before reaching the center, so that I can
> see my own location in the past one or more times by looking far
> enough?

Ask over in sci.astro and/or get (better) some astronomy textbooks.
You need to prepare to drop orbits and forget about center at this
stage.

>And if the answer is no, well: why not? Is a universe where this
>occurs impossible, or were we just unlucky?
>
>I'll apologize for being off-topic, but I really have to know :-)

Naw just learn to use the other news groups in usenet.

--

Eugene Miya

Sep 12, 2006, 2:07:39 PM
In article <CLKdnVn-Cq-99ZvY...@comcast.com>,
Chris Thomasson <cri...@comcast.net> wrote:
>That's fractal-like?
...

>6) Turtles all the way down...
>
>Could this possibly be correct?

No.

There are at least 8-9 orders of magnitude between those scales, and the
structure is not simply self-similar.

--

Del Cecchi

Sep 12, 2006, 2:53:07 PM
Peter Dickerson wrote:
> "Alex Colvin" <al...@TheWorld.com> wrote in message
> news:ee529j$g8h$2...@pcls4.std.com...
>
>>>>Well, lets drop the universe having a center thing... Although, the
>
> exact
>
>>>>place in space and time where the big bang "was called into existence"
>
> could
>
>>>>arguably be the center? Na...
>>
>>Since the big bang filled the entire universe, not just some part of it,
>>that center would be everywhere.
>
>
> While I can agree with the conclusion (or that there is no well defined
> centre) I can't agree with the logic. Just because air fills an entire
> balloon doesn't mean that the centre is everywhere.

How can you have a center if there is no boundary? And there was no
space and time until the big bang, or until the Lord said "let there be
light" depending on your preference.
>

>
>


--
Del Cecchi
"This post is my own and doesn’t necessarily represent IBM’s positions,
strategies or opinions.”

Peter Dickerson

Sep 13, 2006, 3:37:24 AM
"Del Cecchi" <cecchi...@us.ibm.com> wrote in message
news:4moe0lF...@individual.net...

> Peter Dickerson wrote:
> > "Alex Colvin" <al...@TheWorld.com> wrote in message
> > news:ee529j$g8h$2...@pcls4.std.com...
> >
> >>>>Well, lets drop the universe having a center thing... Although, the
> >
> > exact
> >
> >>>>place in space and time where the big bang "was called into existence"
> >
> > could
> >
> >>>>arguably be the center? Na...
> >>
> >>Since the big bang filled the entire universe, not just some part of it,
> >>that center would be everywhere.
> >
> >
> > While I can agree with the conclusion (or that there is no well defined
> > centre) I can't agree with the logic. Just because air fills an entire
> > balloon doesn't mean that the centre is everywhere.
>
> How can you have a center if there is no boundary? And there was no
> space and time until the big bang, or until the Lord said "let there be
> light" depending on your preference.

While I don't have any argument with the cosmology, I feel I must nitpick.
Consider the ball (x,y,z) with x²+y²+z² < 1. This ball doesn't have a
boundary, at least not one that is part of the ball. But it does have a
centre. So you can most definitely have a centre without a boundary.
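
For what it's worth, the two sides of this nitpick correspond to two
standard notions of "boundary"; a sketch in notation (my own
formalization, not from the posts):

```latex
% Open unit ball in R^3:
\[
  B \;=\; \{\, x \in \mathbb{R}^3 : \lVert x \rVert < 1 \,\}
\]
% Topological boundary, taken in the ambient space R^3:
\[
  \partial_{\mathrm{top}} B \;=\; \overline{B} \setminus B \;=\; S^2
\]
% Manifold boundary: B is an open 3-manifold, so
\[
  \partial_{\mathrm{man}} B \;=\; \varnothing
\]
```

On the first reading the ball has a boundary (but that notion needs an
ambient space to take the closure in); on the second it has none, and
only the second is available for a universe with no observable ambient
space.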

I must also nitpick about 'until', a concept that only exists after the big
bang.

Peter


Del Cecchi

Sep 13, 2006, 7:49:24 PM

"Peter Dickerson" <first{dot}sur...@tesco.net> wrote in message
news:UyONg.23343$8V4....@newsfe5-win.ntli.net...
That ball most certainly has a boundary, in that x=y=z=1 (or 2 for that
matter) is outside the ball. So you are playing semantic games with the
fact that the "boundary doesn't exist" because < allows infinitesimals.
Bah.

Now, point at the region outside the universe, in three space.


Peter Dickerson

Sep 14, 2006, 2:57:41 AM
"Del Cecchi" <delcecchi...@gmail.com> wrote in message
news:4mrjo3F...@individual.net...

Well, I assume you didn't quite mean a single point. Presumably you meant
x²+y²+z² = 1. And no, 2 would not be on the boundary - at least not in my
(mathematical) understanding of the word.

> matter) is outside the ball. So if you are playing semantic games by the
> fact that the "boundary doesn't exist because < allows infinitesimals.
> bah.

If the boundary you describe is not part of the ball then it is not a ball
with boundary. Similarly, if you say that there is a boundary to the ball
in a larger space in which the ball is embedded, then that possibility also
applies to the universe. Whether the universe is embedded in a larger (or
higher-dimensional) space is probably moot, because if we could detect it
we'd include it as part of the universe.

> Now, point at the region outside the universe, in three space.

This is meaningless, even without the three-space proviso. Outside? what's
that, a bit like before?

And, yes, many years ago I studied at the uni where Nick now is. However, I
studied mathematics, including cosmology, gen rel, topology etc., so I'm
probably more pedantic than him! But it was a long time ago...

Peter


Del Cecchi

Sep 14, 2006, 1:17:56 PM
That was my point. The ball has boundary in that there are points "in
the ball" and points "not in the ball"

Now if it had been said that the universe is like the surface of a ball
which doesn't have a boundary and therefore doesn't have a center, that
I could go along with.

wclo...@lanl.gov

Sep 14, 2006, 4:32:14 PM

Del Cecchi wrote:
> Peter Dickerson wrote:
> > "Alex Colvin" <al...@TheWorld.com> wrote in message
> > news:ee529j$g8h$2...@pcls4.std.com...
> >
> >>>>Well, lets drop the universe having a center thing... Although, the
> >
> > exact
> >
> >>>>place in space and time where the big bang "was called into existence"
> >
> > could
> >
> >>>>arguably be the center? Na...
> >>
> >>Since the big bang filled the entire universe, not just some part of it,
> >>that center would be everywhere.
> >
> >
> > While I can agree with the conclusion (or that there is no well defined
> > centre) I can't agree with the logic. Just because air fills an entire
> > balloon doesn't mean that the centre is everywhere.
>
> How can you have a center if there is no boundary? And there was no
> space and time until the big bang, or until the Lord said "let there be
> light" depending on your preference.
> > <snip>

FWIW you can have a center (of mass) without having a definite boundary
provided the average number density at large distances decreases faster
than r^-(2+delta) (where delta is any finite positive value), e.g.
average density ~ exp(-((x-x0)^2+(y-y0)^2+(z-z0)^2)/r0^2). However, for
reasons I have never investigated, nonuniform distributions at large
scales have not been popular with astrophysicists. I have sometimes
wondered about the extent to which that rejection is justified by
observation, consistency with current theory, and the relative
complexity of theories incorporating such nonuniformities.
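
That Gaussian example can be checked numerically; here is a minimal 1-D
sketch (the grid and the values of x0 and r0 are arbitrary illustrative
choices of mine):

```python
# Minimal 1-D check of the claim above: a Gaussian number density has
# no sharp boundary, yet its center of mass is perfectly well defined.
import math

x0, r0 = 1.5, 2.0                                     # assumed center, scale
xs = [-30.0 + 60.0 * i / 4000 for i in range(4001)]   # uniform grid
dens = [math.exp(-((x - x0) ** 2) / r0 ** 2) for x in xs]

# Discrete center of mass: sum(x * rho) / sum(rho)
center = sum(x * d for x, d in zip(xs, dens)) / sum(dens)
print(round(center, 6))  # recovers the assumed center x0
```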

Del Cecchi

Sep 14, 2006, 9:01:21 PM

<wclo...@lanl.gov> wrote in message
news:1158265934.2...@i3g2000cwc.googlegroups.com...

Isn't the non-specialness of our location pretty much of a postulate in
cosmology? Sort of the antithesis of earth being the center of the
universe?
>


ken...@cix.compulink.co.uk

Sep 15, 2006, 7:08:00 AM
In article <4mth6bF...@individual.net>,
cecchi...@us.ibm.com (Del Cecchi) wrote:

> Now if it had been said that the universe is like the surface
> of a ball which doesn't have a boundary and therefore doesn't
> have a center, that I could go along with.

Of course the observable universe does have a centre. It is a
sphere centred on Earth, expanding at the speed of light. Whether
or not the universe has a boundary is purely theoretical unless
that boundary is reached by the observable universe. However,
given string theory and inflation, our universe is probably a
bubble in a foam of all possible universes.

Ken Young

Del Cecchi

Sep 15, 2006, 8:56:07 PM

<ken...@cix.compulink.co.uk> wrote in message
news:Uo6dneXs-9Q...@pipex.net...

How do you know it is a sphere? And if it is a sphere, what is outside
it? Or were you just being sarcastic?


ken...@cix.compulink.co.uk

Sep 16, 2006, 6:29:46 AM
In article <4n10d5F...@individual.net>,
delcecchi...@gmail.com (Del Cecchi) wrote:

> How do you know it is a sphere? And if it is a sphere, what
> is outside it? Or were you just being sarcastic?

I am talking about the observable universe. That is, ignoring dust
clouds etc., what we can see. This is a function of the speed of light
and the age of the universe. From any arbitrary position in the
universe, all you can see is what the light has had time to arrive
from. If the universe were a million years old, the furthest you could
see from Earth in any direction would be 1 million light years, hence
a sphere. Of course the observable universe increases in size by a
light second every second, and its correspondence to the whole
universe is not known and probably not knowable.

Ken Young

wclo...@lanl.gov

Sep 18, 2006, 3:42:11 PM

Del Cecchi wrote:
> <snip>

>
> Isn't the non-specialness of our location pretty much of a postulate in
> cosmology? Sort of the antithesis of earth being the center of the
> universe?
> >
I am not a cosmologist (IANAC) but I get the impression that the
fundamental postulate is the nonexistence of ANY special location. I
have trouble internally reconciling the postulate with the simplest
visualizations of the big bang. The imagery often used to explain
the non-specialness of all space, a balloon expanding with no point on
its surface being special, implies an expansion in four spatial
dimensions with only three accessible, which I have trouble
understanding. Unfortunately, my ability to reconcile myself to
non-straightforward concepts started declining shortly after I learned
non-relativistic quantum mechanics, about 30 years ago.

Eric P.

Sep 18, 2006, 4:37:17 PM

Inquiring minds want to know...

Where is the centre of the universe?
http://math.ucr.edu/home/baez/physics/Relativity/GR/centre.html

Where was the center of the Big Bang?
http://www.astro.ucla.edu/~wright/nocenter.html

Lots more at...

Usenet Physics FAQ
http://math.ucr.edu/home/baez/physics/

Frequently Asked Questions in Cosmology
http://www.astro.ucla.edu/~wright/cosmology_faq.html

Eric

Eugene Miya

Sep 18, 2006, 6:40:57 PM
If you guys want to understand cosmology better, it's generally a useful
idea to have a few other prerequisite classes, books, and knowledge under
your belt. You want to spend a little bit of time trying to understand
spherical and hyperbolic geometry. Reading Abbott's Flatland will
help, but it's merely a beginning. There are reasons why these
analogies about spheres, expansion, etc. are presented, and that's
why you can't go into the field purely with Euclidean geometry and
rational numbers. Throwing supercomputer power at Euclidean geometries
and rational numbers merely makes computer people look like fools to
real cosmologists.

--

Del Cecchi

Sep 18, 2006, 11:32:21 PM

"Eugene Miya" <eug...@cse.ucsc.edu> wrote in message
news:450f2079@darkstar...

Read Flatland in 9th grade. Don't be so condescending.


Jan Vorbrüggen

Sep 19, 2006, 3:38:12 AM
> I am not a cosmologist (IANAC) but I get the impression that the
> fundamental postulate is the nonexistence of ANY special location.

The reason I mentioned Emmy Noether in this subthread is that this
postulate translates 8-) into translation-invariance of the equations
that describe your world, and this symmetry implies a conservation rule
- in this particular case, the conservation of momentum.

Jan

Carlie J. Coats

Sep 20, 2006, 8:19:56 AM
Peter Dickerson wrote:
[snip]

> While I don't have any argument with the cosmology, I feel I must nitpick.
> Consider the ball (x,y,z) with x²+y²+z² < 1. This ball doesn't have a
> boundary, at least not one that is part of the ball. But it does have a
> centre. So you can most definitely have a centre without a boundary.
>
> I must also nitpick about 'until', a concept that only exists after the big
> bang.
>
> Peter

You can't even call it { (x,y,z) with x²+y²+z² < 1 } until you
have a metric. And you don't have a metric until there's matter
there (i.e., until after the bang).

And if you'd taken that differential manifold and put a *hyperbolic*
metric on it, it still wouldn't have a boundary, even in Del's sense.

fwiw.

-- Carlie Coats (once differential topologist, now model-hacker)

Eugene Miya

Sep 20, 2006, 12:06:33 PM
In article <4n96luF...@individual.net>,
>Read Flatland in 9th grade. Don't be so condescending.

Who's being condescending?
Modern cosmology has prerequisites. Do you deny that?
I frankly think the sphere analogy is a poor one, but it's standard.

To the typical graphics programmer the world, much less the universe,
is a flat 2-D. Ask any of them the shape of states like WY and CO, and 90%
or more will give you the wrong answer. I ran a SIGGRAPH chapter.
What's your excuse?

--

Richard E Maine

Sep 20, 2006, 12:13:22 PM
Eugene Miya <eug...@cse.ucsc.edu> wrote:

> I frankly think the sphere analogy is a poor one, but it's standard.

Amen. In fact, I think it so poor that it just didn't "click" with me at
all for the longest time. That analogy kept looking to me like the
points on the sphere were moving as the sphere expanded... which turns
out to not "work". I could see that it didn't "work" and generated all
kinds of apparent contradictions, causing me to be completely lost.
Cosmology wasn't making sense to me.

It wasn't until an article just a few years ago, I'm pretty sure it was
in Scientific American, that I finally "got" it. That article pointed
out some common misconceptions, and sure enough, some of those were
exactly the ones I had formerly gotten. Aha! The light finally dawns.
The points don't move. Instead, the amount of space between them
increases because space itself expands. Yes, I know that the sphere
analogy sort of does that, but it still didn't click that way to me. It
made me focus on the geometry of the sphere, which is sort of beside
the point. I find it less distracting to look at just a line, without
extra distracting dimensions and geometry thrown in. I'd recommend that
article. It sure helped me. I don't have the exact citation handy, but
it shouldn't be hard to search SciAm for it.

--
Richard Maine | Good judgment comes from experience;
email: my first.last at org.domain| experience comes from bad judgment.
org: nasa, domain: gov | -- Mark Twain
