
The Future of Computing


tabb...@gmail.com

Dec 6, 2016, 6:16:22 AM
What do people think of the possibility, in the distant future, of transitioning to a solid 3D block silicon IC structure?

Thermal management is the most obvious issue. Extremely low-power circuitry exists, albeit slow. Copper rods could be included to improve heat transfer to the surface.
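A rough Fourier's-law sanity check of the copper-rod idea (every number below is my own illustrative assumption, not anything from the thread):

```python
import math

# Rough estimate: heat conducted by one copper rod from the centre of
# a silicon cube to its surface, via Q = k * A * dT / L.
k_cu = 400.0        # W/(m*K), thermal conductivity of copper (assumed)
length = 0.0127     # m, half of a 1-inch cube (assumed)
diameter = 1e-3     # m, a 1 mm rod (assumed)
delta_t = 60.0      # K, allowed junction-to-surface rise (assumed)

area = math.pi * (diameter / 2) ** 2    # rod cross-section, m^2
q = k_cu * area * delta_t / length      # watts carried per rod
print(f"{q:.2f} W per rod")
```

Under these assumptions each rod only carries on the order of a watt, so a cube dissipating serious power would need a great many of them, eating into the volume available for circuitry.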


NT

Martin Brown

Dec 6, 2016, 6:39:49 AM
On 06/12/2016 11:16, tabb...@gmail.com wrote:
> What do people think of the possibility in the distant future of
> transitioning to a solid 3d block silicon IC structure?

I expect most supercomputers for the foreseeable future to look pretty
much like they do now, with hypercube connectivity.

Domestic and office PCs have reached a limit where they are already more
than powerful enough, so the next stage is to do the same with a bit less
power and less silicon (or a cheaper alternative).

Better interfaces between modern SSD-type memory and human brains look
like they might happen in the reasonably foreseeable future.

http://spectrum.ieee.org/biomedical/bionics/darpa-project-starts-building-human-memory-prosthetics

I think their schedule is a little optimistic. YMMV
>
> Thermal management is the most obvious issue. Extremely low power
> circuitry exists, albeit slow. Copper rods could be included to
> improve heat transfer to the surface.

Too difficult to engineer. Self-assembling chemical semiconductors could
be one way that things might go, or even subverting DNA replication to
solve certain types of hard combinatorial problems.

The other possibility gaining credence is exotic chemical systems in
3D-printed shaped chambers, solving specific problems at first, or even
doing general-purpose computing one day.

At the moment they are still at the level of basic gate components:

https://arxiv.org/ftp/arxiv/papers/1212/1212.2762.pdf

A fully working liquid chemistry computer would be impressive.

--
Regards,
Martin Brown

bill....@ieee.org

Dec 6, 2016, 7:54:05 AM
On Tuesday, December 6, 2016 at 10:16:22 PM UTC+11, tabb...@gmail.com wrote:
> What do people think of the possibility in the distant future of transitioning to a solid 3d block silicon IC structure?
>
> Thermal management is the most obvious issue. Extremely low power circuitry exists, albeit slow. Copper rods could be included to improve heat transfer to the surface.

Diamond is better, and tolerably easy to lay down by chemical vapour deposition.

Don't forget Josephson junctions - which seem to be available with high temperature superconductors.

--
Bill Sloman, Sydney

Phil Hobbs

Dec 6, 2016, 9:07:32 AM
The problem is yield. Bonding known-good dice to known-good chips on
wafers is barely acceptable, and iterating the process to make real 3D
is very hard.

You can't mix front-end-of-line (FEOL) processes (the ones that make the
actual transistors) with back-end-of-line (BEOL) ones (basically wiring)
because you need things like 900C anneals and 600C epitaxy, whereas FEOL
stuff is typically limited to 300C. That means that you have to stack
thinned-down 2D structures.

Then there's the elevator shaft problem. The reason that tall buildings
have "sky lobbies" is that if all elevators go to all floors, eventually
all your floor space is taken up by elevator shafts. The same applies
to the through-silicon vias (TSVs) that you need to go from layer to layer.

So fairly flat 3D structures are reasonable, but anything with the
aspect ratio of a cube isn't.
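Phil's elevator-shaft point can be sketched with a toy model. The TSV size, die size, and via count below are illustrative assumptions, not real process numbers:

```python
# Toy model: if each of N stacked layers needs B through-silicon vias
# down to the base, the bottom layer must carry vias for every layer
# above it, so via area there grows linearly with N.
via_area = 25e-12       # m^2, a 5 um x 5 um TSV (assumed)
layer_area = 1e-4       # m^2, a 10 mm x 10 mm die (assumed)
vias_per_layer = 10_000 # signals each layer sends to the base (assumed)

for n_layers in (2, 8, 32, 128):
    bottom_via_area = n_layers * vias_per_layer * via_area
    frac = bottom_via_area / layer_area
    print(f"{n_layers:4d} layers: {frac:6.1%} of the bottom layer is vias")
```

Even with these modest assumptions the bottom layer is a third vias by 128 layers, which is the "all floor space taken up by elevator shafts" regime.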

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net

tabb...@gmail.com

Dec 6, 2016, 11:17:59 AM
On Tuesday, 6 December 2016 14:07:32 UTC, Phil Hobbs wrote:
> On 12/06/2016 06:16 AM, tabbypurr wrote:

> > What do people think of the possibility in the distant future of
> > transitioning to a solid 3d block silicon IC structure?
> >
> > Thermal management is the most obvious issue. Extremely low power
> > circuitry exists, albeit slow. Copper rods could be included to
> > improve heat transfer to the surface.
> >
>
> The problem is yield. Bonding known-good dice to known-good chips on
> wafers is barely acceptable, and iterating the process to make real 3D
> is very hard.

Yield could, I expect, be addressed with a self-test routine that permanently disables all faulty blocks, or, where practical, limits what they can do to what works.
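The standard Poisson defect-yield argument shows why block-level self-test and redundancy can help. The defect density, die area, and block counts below are my own illustrative assumptions:

```python
import math

# A monolithic die is good only with zero defects: Y = exp(-D*A).
# A die split into self-testable blocks can ship if enough blocks survive.
defect_density = 0.5    # defects per cm^2 (assumed)
die_area = 4.0          # cm^2 (assumed)

y_mono = math.exp(-defect_density * die_area)

# Split into 100 blocks, ship if at least 90 are defect-free.
n_blocks, need = 100, 90
p_block = math.exp(-defect_density * die_area / n_blocks)
y_redundant = sum(
    math.comb(n_blocks, k) * p_block**k * (1 - p_block)**(n_blocks - k)
    for k in range(need, n_blocks + 1)
)

print(f"monolithic: {y_mono:.1%}, with redundancy: {y_redundant:.1%}")
```

Under these assumptions yield goes from roughly 14% to essentially 100%, which is why memories (with spare rows/columns) already do exactly this.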

> You can't mix front-end-of-line (FEOL) processes (the ones that make the
> actual transistors) with back-end-of-line (BEOL) ones (basically wiring)
> because you need things like 900C anneals and 600C epitaxy, whereas FEOL
> stuff is typically limited to 300C. That means that you have to stack
> thinned-down 2D structures.

I'll look at that later, must run.

> Then there's the elevator shaft problem. The reason that tall buildings
> have "sky lobbies" is that if all elevators go to all floors, eventually
> all your floor space is taken up by elevator shafts. The same applies
> to the through-silicon vias (TSVs) that you need to go from layer to layer.
>
> So fairly flat 3D structures are reasonable, but anything with the
> aspect ratio of a cube isn't.
>
> Cheers
>
> Phil Hobbs

Surely a 3D block gives better interconnectivity than today's flat-plane devices.


NT

Tom Gardner

Dec 6, 2016, 11:26:11 AM
On 06/12/16 16:17, tabb...@gmail.com wrote:
> Surely a 3d block gives better interconnectivity than today's flat plane devices.

A key problem with today's advanced small-geometry processes
is getting the heat away from the junctions.

Proposed new structures have to have a story for how that
is achieved.

George Herold

Dec 6, 2016, 11:44:11 AM
On Tuesday, December 6, 2016 at 9:07:32 AM UTC-5, Phil Hobbs wrote:
> On 12/06/2016 06:16 AM, tabb...@gmail.com wrote:
> > What do people think of the possibility in the distant future of
> > transitioning to a solid 3d block silicon IC structure?
> >
> > Thermal management is the most obvious issue. Extremely low power
> > circuitry exists, albeit slow. Copper rods could be included to
> > improve heat transfer to the surface.
> >
>
> The problem is yield. Bonding known-good dice to known-good chips on
> wafers is barely acceptable, and iterating the process to make real 3D
> is very hard.
>
> You can't mix front-end-of-line (FEOL) processes (the ones that make the
> actual transistors) with back-end-of-line (BEOL) ones (basically wiring)
> because you need things like 900C anneals and 600C epitaxy, whereas FEOL
> stuff is typically limited to 300C. That means that you have to stack
> thinned-down 2D structures.

FEOL is done first, right? Then you mean that BEOL is limited to 300C.
(Not that it matters here.)

George H.

Phil Hobbs

Dec 6, 2016, 11:57:41 AM
On 12/06/2016 11:44 AM, George Herold wrote:
> On Tuesday, December 6, 2016 at 9:07:32 AM UTC-5, Phil Hobbs wrote:
>> On 12/06/2016 06:16 AM, tabb...@gmail.com wrote:
>>> What do people think of the possibility in the distant future of
>>> transitioning to a solid 3d block silicon IC structure?
>>>
>>> Thermal management is the most obvious issue. Extremely low power
>>> circuitry exists, albeit slow. Copper rods could be included to
>>> improve heat transfer to the surface.
>>>
>>
>> The problem is yield. Bonding known-good dice to known-good chips on
>> wafers is barely acceptable, and iterating the process to make real 3D
>> is very hard.
>>
>> You can't mix front-end-of-line (FEOL) processes (the ones that make the
>> actual transistors) with back-end-of-line (BEOL) ones (basically wiring)
>> because you need things like 900C anneals and 600C epitaxy, whereas FEOL
>> stuff is typically limited to 300C. That means that you have to stack
>> thinned-down 2D structures.
>
> FEOL is done first, right? Then you mean that BEOL is limited to 300C.
> (not that it matters here.)

Right, fat fingers.

John Larkin

Dec 6, 2016, 12:26:58 PM
There are some stacked-wafer 3D technologies, but they are better
suited to memory than to computing. CPUs get too hot.

Silicon may finally flatline at 5 nm, maybe 10x the performance that
we have now.

Most people are happy with a 1 GHz ARM and some flash in their
smartphone. Disk drives are in the terabytes and the cloud stores a
lot of our stuff. We don't really need a lot more computing.

The "grand problems", weather and fluid flow and some math things,
wouldn't be greatly benefitted by 1000 or even 1e6 times more
supercomputer power than we have now.

I would like 100x more Spice power, but that could be done any time by
adapting LT Spice to use an Nvidia card.


--

John Larkin Highland Technology, Inc

lunatic fringe electronics

Phil Hobbs

Dec 6, 2016, 12:37:20 PM
I'd settle for having stepped simulations run concurrently on separate
cores.

tabb...@gmail.com

Dec 6, 2016, 1:33:30 PM
On Tuesday, 6 December 2016 16:26:11 UTC, Tom Gardner wrote:
> On 06/12/16 16:17, tabbypurr wrote:

> > Surely a 3d block gives better interconnectivity than today's flat plane devices.
>
> A key problem with today's advanced small-geometry processes
> is getting the heat away from the junctions.
>
> Proposed new structures have to have a story for how that
> is achieved.

Heat production has to be orders of magnitude less. I see no other practical-ish option.


NT

tabb...@gmail.com

Dec 6, 2016, 1:38:15 PM
On Tuesday, 6 December 2016 17:26:58 UTC, John Larkin wrote:
> On Tue, 6 Dec 2016 03:16:18 -0800 (PST), tabbypurr wrote:
>
> >What do people think of the possibility in the distant future of transitioning to a solid 3d block silicon IC structure?
> >
> >Thermal management is the most obvious issue. Extremely low power circuitry exists, albeit slow. Copper rods could be included to improve heat transfer to the surface.
> >
> >
> >NT
>
> There are some stacked-wafer 3D technologies, but they are better
> suited to memory than to computing. CPUs get too hot.

There are CPUs that run cold today. Minimal power use is an absolute necessity if your silicon's an inch thick.

> Silicon may finally flatline at 5 nm, maybe 10x the performance that
> we have now.
>
> Most people are happy with a 1 GHz ARM and some flash in their
> smartphone. Disk drives are in the terabytes and the cloud stores a
> lot of our stuff. We don't really need a lot more computing.

We so do, but I'll leave that discussion for another day.

> The "grand problems", weather and fluid flow and some math things,
> wouldn't be greatly benefitted by 1000 or even 1e6 times more
> supercomputer power than we have now.
>
> I would like 100x more Spice power, but that could be done any time by
> adapting LT Spice to use an Nvidia card.

We must have very different ideas of what the grand problems are. But I don't want to diverge into yet another topic for now.


NT

tabb...@gmail.com

Dec 6, 2016, 1:43:50 PM
On Tuesday, 6 December 2016 14:07:32 UTC, Phil Hobbs wrote:
> On 12/06/2016 06:16 AM, tabbypurr wrote:

> > What do people think of the possibility in the distant future of
> > transitioning to a solid 3d block silicon IC structure?
> >
> > Thermal management is the most obvious issue. Extremely low power
> > circuitry exists, albeit slow. Copper rods could be included to
> > improve heat transfer to the surface.

> You can't mix front-end-of-line (FEOL) processes (the ones that make the
> actual transistors) with back-end-of-line (BEOL) ones (basically wiring)
> because you need things like 900C anneals and 600C epitaxy, whereas FEOL
> stuff is typically limited to 300C. That means that you have to stack
> thinned-down 2D structures.

That is where my knowledge runs right out. What prevents one from, in principle, writing aluminium tracks using a scanning atom beam, then exposing it to low temperature gas to passivate it? Could data be piped round by SiC LEDs?


NT

Phil Hobbs

Dec 6, 2016, 1:52:16 PM
Aluminum is useless because of electromigration, and besides it melts at
about 660C. You can't push much more than 1E6 A/cm**2 even through
copper. (That sounds like a lot, but it's a serious limitation.)
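A quick back-of-envelope of what 1E6 A/cm**2 means for a narrow line (the line dimensions are my assumptions, not from the post):

```python
# Electromigration-limited current through one thin copper line.
j_max = 1e6           # A/cm^2, current-density limit from the post
width = 100e-7        # cm, a 100 nm wide line (assumed)
thickness = 100e-7    # cm, a 100 nm thick line (assumed)

area = width * thickness   # cross-section, cm^2
i_max = j_max * area       # maximum current, amps
print(f"{i_max * 1e6:.0f} uA per line")
```

About 100 uA per 100 nm x 100 nm line: plenty for a logic signal, but a real constraint for power distribution, which is why power grids on chips use wide, thick upper-level metal.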

And then there are the dielectrics.

But the main issue is diffusion of metal impurities, which trashes the
carrier lifetime.

> Could data be piped round by SiC LEDs?

Way too slow, and wrong wavelength. For silicon photonics you want
InGaAs or InP.

Phil Hobbs

Dec 6, 2016, 1:53:57 PM
But all the device scaling is taking us the opposite direction, even at
fixed clock rates. At this point, smaller transistors are leakier,
slower, and have less gain. It's been that way since about 65 nm.

rickman

Dec 6, 2016, 1:55:46 PM
Add TEGs to the periphery and power the device by the escaping heat.... lol

It might be feasible to capture some of the waste heat this way to
reduce the power input. While not reducing the need for cooling the
device, it can help with the total energy in and waste heat to the
environment.

--

Rick C

rickman

Dec 6, 2016, 2:02:08 PM
I'd be happy if my PC would just do the normal PC stuff without bogging
down. I don't know that CPU speed affects many uses of computers other
than the small percentage of uses that basically need an SR-71 sort of
CPU. Even gaming computers aren't suffering from CPU performance.
Often the limitations are elsewhere.

But then Bill Gates supposedly wondered why anyone would need more
than 640 kB of memory. Seems like every MIP finds a home.

--

Rick C

John Larkin

Dec 6, 2016, 2:45:34 PM
On Tue, 6 Dec 2016 12:37:13 -0500, Phil Hobbs
<pcdhSpamM...@electrooptical.net> wrote:

>On 12/06/2016 12:26 PM, John Larkin wrote:
>> On Tue, 6 Dec 2016 03:16:18 -0800 (PST), tabb...@gmail.com wrote:
>>
>>> What do people think of the possibility in the distant future of transitioning to a solid 3d block silicon IC structure?
>>>
>>> Thermal management is the most obvious issue. Extremely low power circuitry exists, albeit slow. Copper rods could be included to improve heat transfer to the surface.
>>>
>>>
>>> NT
>>
>> There are some stacked-wafer 3D technologies, but they are better
>> suited to memory than to computing. CPUs get too hot.
>>
>> Silicon may finally flatline at 5 nm, maybe 10x the performance that
>> we have now.
>>
>> Most people are happy with a 1 GHz ARM and some flash in their
>> smartphone. Disk drives are in the terabytes and the cloud stores a
>> lot of our stuff. We don't really need a lot more computing.
>>
>> The "grand problems", weather and fluid flow and some math things,
>> wouldn't be greatly benefitted by 1000 or even 1e6 times more
>> supercomputer power than we have now.
>>
>> I would like 100x more Spice power, but that could be done any time by
>> adapting LT Spice to use an Nvidia card.
>
>I'd settle for having stepped simulations run concurrently on separate
>cores.
>
>Cheers
>
>Phil Hobbs

I want sliders so I can tune values and see the resulting waveforms
essentially instantly. OK, 1000x.




--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com

George Herold

Dec 6, 2016, 2:48:51 PM
On Tuesday, December 6, 2016 at 1:55:46 PM UTC-5, rickman wrote:
> On 12/6/2016 1:33 PM, tabb...@gmail.com wrote:
> > On Tuesday, 6 December 2016 16:26:11 UTC, Tom Gardner wrote:
> >> On 06/12/16 16:17, tabbypurr wrote:
> >
> >>> Surely a 3d block gives better interconnectivity than today's flat plane devices.
> >>
> >> A key problem with today's advanced small-geometry processes
> >> is getting the heat away from the junctions.
> >>
> >> Proposed new structures have to have a story for how that
> >> is achieved.
> >
> > Heat production has to be orders of magnitude less. I see no other practical-ish option.
>
> Add TEGs to the periphery and power the device by the escaping heat.... lol

I'm guessing the TEG would add to the thermal resistance, making the IC
run hotter. (TANSTAAFL)

GH
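George's guess is easy to sanity-check with a series thermal-resistance model. All values below are illustrative assumptions:

```python
# Inserting a TEG between die and heatsink adds thermal resistance in
# series, so the junction runs hotter for the same dissipation.
power = 50.0    # W dissipated by the die (assumed)
r_sink = 0.5    # K/W, heatsink to ambient (assumed)
r_teg = 1.0     # K/W, an assumed TEG module in the heat path

t_rise_without = power * r_sink
t_rise_with = power * (r_sink + r_teg)
print(f"{t_rise_without:.0f} K rise without TEG, {t_rise_with:.0f} K with")
```

With these numbers the TEG triples the temperature rise while recovering only a few percent of the heat as electricity (typical TEG efficiency at small delta-T), so there is indeed no free lunch.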

whit3rd

Dec 6, 2016, 4:04:17 PM
On Tuesday, December 6, 2016 at 8:17:59 AM UTC-8, tabb...@gmail.com wrote:
> On Tuesday, 6 December 2016 14:07:32 UTC, Phil Hobbs wrote:
> > On 12/06/2016 06:16 AM, tabbypurr wrote:
>
> > > What do people think of the possibility in the distant future of
> > > transitioning to a solid 3d block silicon IC structure?

> > Then there's the elevator shaft problem. The reason that tall buildings
> > have "sky lobbies" is that if all elevators go to all floors, eventually
> > all your floor space is taken up by elevator shafts. The same applies
> > to the through-silicon vias (TSVs) that you need to go from layer to layer.
> >
> > So fairly flat 3D structures are reasonable, but anything with the
> > aspect ratio of a cube isn't.


> Surely a 3d block gives better interconnectivity than today's flat plane devices.

Well, you can get the first factor of two by mounting solder-bump devices back to
back on double-sided circuit board. The problem, though, is the cost of making
a new layer of single-crystal silicon, which gets extremely difficult if it has to
go over a road system of wiring. Most ICs are monolithic (start with a chunk
of pure silicon) and only small bits of noncritical (polysilicon) material ever get
deposited on top.

The Samsung stacked-elements flash chips that are in SSDs nowadays are multistory
stacks of elements, but that works for flash because there's very little heat dissipated
(addressing one word, you leave millions of others unpowered). And it
relies on doing LOTS of mechanical assembly work. The robotic wire-bond
machines must spend a lot of time connecting the layers.

tabb...@gmail.com

Dec 6, 2016, 5:57:47 PM
On Tuesday, 6 December 2016 18:53:57 UTC, Phil Hobbs wrote:
> On 12/06/2016 01:33 PM, tabbypurr wrote:
> > On Tuesday, 6 December 2016 16:26:11 UTC, Tom Gardner wrote:
> >> On 06/12/16 16:17, tabbypurr wrote:
> >
> >>> Surely a 3d block gives better interconnectivity than today's flat plane devices.
> >>
> >> A key problem with today's advanced small-geometry processes
> >> is getting the heat away from the junctions.
> >>
> >> Proposed new structures have to have a story for how that
> >> is achieved.
> >
> > Heat production has to be orders of magnitude less. I see no other practical-ish option.

> But all the device scaling is taking us the opposite direction, even at
> fixed clock rates. At this point, smaller transistors are leakier,
> slower, and have less gain. It's been that way since about 65 nm.
>
> Cheers
>
> Phil Hobbs

... which limits us to a 65 nm or larger process. Not much of a problem when you can slip a smartphone-sized piece of silicon in your pocket.


NT

Tim Williams

Dec 6, 2016, 8:55:09 PM
"George Herold" <ghe...@teachspin.com> wrote in message
news:a41821e2-cb93-48c0...@googlegroups.com...
>> Add TEGs to the periphery and power the device by the escaping heat....
>> lol
>
> I'm guessing the TEG would add to the thermal resistance, making the IC
> run hotter. (TANSTAAFL)
>

Make it out of GaN, SiC or diamond, so it can handle high temperatures. Add
TECs (or nano-engines of whatever sort), and boom, you've got computronium!

All that's left, at that point, is to build a swarm* of the things in the
orbits between, say, Earth and Mercury, and take it from there.

*You build a swarm because it's trivially scalable and it doesn't have the
obvious structural instability of a Dyson sphere or ring.

(Don't worry about Venus or Mercury disturbing the orbits; we'll have to
disassemble them to obtain enough material. No, the view won't be so great
to future sky-watchers, but no one will want to spend their time doing
anything so boring. The simulations of the "old days" will be more pure,
anyway -- not being subject to biological limitations, like poor eyesight,
or chills from standing outside in winter.)

Tim

--
Seven Transistor Labs, LLC
Electrical Engineering Consultation and Contract Design
Website: http://seventransistorlabs.com

Tim Williams

Dec 6, 2016, 8:57:57 PM
<tabb...@gmail.com> wrote in message
news:d749dd1b-f0e3-4ff6...@googlegroups.com...
> That is where my knowledge runs right out. What prevents one from, in
> principle, writing aluminium tracks using a scanning atom beam, then
> exposing it to low temperature gas to passivate it? Could data be piped
> round by SiC LEDs?
>

Scanning beams are extraordinarily slow -- ICs are unimaginably detailed
today. (They're tolerable for prototypes, AFAIK, but still very expensive.)

Printing layer upon layer, nanometers at a time, would take so long, just
printing one chip would take as long as *developing and perfecting* a whole
new alternative technique!
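An order-of-magnitude estimate of serial direct-write time backs this up. The pixel size and beam rate are my assumptions, chosen generously:

```python
# Time to write one die serially, pixel by pixel, with a scanning beam.
die_area = 1e-4    # m^2, a 10 mm x 10 mm die (assumed)
pixel = 10e-9      # m, a 10 nm pixel (assumed, coarse for modern nodes)
rate = 1e9         # pixels/s, an optimistic 1 GHz beam blanker (assumed)

pixels = die_area / pixel ** 2
seconds = pixels / rate
print(f"{pixels:.1e} pixels, about {seconds / 60:.0f} minutes per layer")
```

Roughly a quarter of an hour per die per layer, times dozens of layers, versus an optical stepper exposing a whole die in a flash and ~100 wafers an hour: that is the gap that keeps direct-write confined to masks and prototypes.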

Martin Brown

Dec 7, 2016, 3:02:18 AM
On 06/12/2016 16:17, tabb...@gmail.com wrote:
> On Tuesday, 6 December 2016 14:07:32 UTC, Phil Hobbs wrote:
>> On 12/06/2016 06:16 AM, tabbypurr wrote:
>
>>> What do people think of the possibility in the distant future of
>>> transitioning to a solid 3d block silicon IC structure?
>>>
>>> Thermal management is the most obvious issue. Extremely low
>>> power circuitry exists, albeit slow. Copper rods could be included
>>> to improve heat transfer to the surface.
>>>
>>
>> The problem is yield. Bonding known-good dice to known-good chips
>> on wafers is barely acceptable, and iterating the process to make
>> real 3D is very hard.
>
> Yield could, I expect, be addressed with a self-test routine that
> permanently disables all faulty blocks, or, where practical, limits
> what they can do to what works.

I vaguely recall a wafer-scale system intending to do that, which sank
without trace. Some of them tried quite hard!

https://en.wikipedia.org/wiki/Wafer-scale_integration

It seems easier to cut the parts up and run them at whatever speed they
are stable at than have to run at the speed of the worst good one.
>
> Surely a 3d block gives better interconnectivity than today's flat
> plane devices.

The interconnects are typically N-dimensional, where N can be up to 8 or so.

--
Regards,
Martin Brown

Phil Hobbs

Dec 7, 2016, 8:59:08 AM
The last IBM foundry process I knew at all well had 11 layers of metal
even in 2007.

Phil Hobbs

Dec 7, 2016, 9:01:36 AM
On 12/06/2016 08:57 PM, Tim Williams wrote:
> <tabb...@gmail.com> wrote in message
> news:d749dd1b-f0e3-4ff6...@googlegroups.com...
>> That is where my knowledge runs right out. What prevents one from, in
>> principle, writing aluminium tracks using a scanning atom beam, then
>> exposing it to low temperature gas to passivate it? Could data be
>> piped round by SiC LEDs?
>>
>
> Scanning beams are extraordinarily slow -- ICs are unimaginably detailed
> today. (They're tolerable for prototypes, AFAIK, but still very
> expensive.)

The resolution is also too low for modern chips. SEMs ran out of
resolution about 8 or 9 years ago--you have to use TEMs nowadays even
for failure analysis, which is almost unimaginably labour-intensive,
involving focused ion beams to chop out a tiny thin sample that is then
transferred to the grid of the TEM.


>
> Printing layer upon layer, nanometers at a time, would take so long,
> just printing one chip would take as long as *developing and perfecting*
> a whole new alternative technique!

Probably so!

Cheers

Phil "former chip guy" Hobbs

tabb...@gmail.com

Dec 7, 2016, 2:01:22 PM
On Wednesday, 7 December 2016 08:02:18 UTC, Martin Brown wrote:
> On 06/12/2016 16:17, tabbypurr wrote:
> > On Tuesday, 6 December 2016 14:07:32 UTC, Phil Hobbs wrote:
> >> On 12/06/2016 06:16 AM, tabbypurr wrote:
> >
> >>> What do people think of the possibility in the distant future of
> >>> transitioning to a solid 3d block silicon IC structure?
> >>>
> >>> Thermal management is the most obvious issue. Extremely low
> >>> power circuitry exists, albeit slow. Copper rods could be included
> >>> to improve heat transfer to the surface.
> >>>
> >>
> >> The problem is yield. Bonding known-good dice to known-good chips
> >> on wafers is barely acceptable, and iterating the process to make
> >> real 3D is very hard.
> >
> > Yield could, I expect, be addressed with a self-test routine that
> > permanently disables all faulty blocks, or, where practical, limits
> > what they can do to what works.
>
> I vaguely recall a wafer scale system intending to do that which sank
> without trace. Some of them tried quite hard!
>
> https://en.wikipedia.org/wiki/Wafer-scale_integration
>
> It seems easier to cut the parts up and run them at whatever speed they
> are stable at than have to run at the speed of the worst good one.

There's no need to run all blocks at the same speed. With buffers everywhere, each section can run at its own maximum error-free speed, less a margin. But since the block has to be very low power density, sections will run far below their speed limits.

For a large-scale computer it makes sense in principle. No point chopping, packaging and reassembling it all on PCBs if you can let it self-test, lose bad sections and run as is.

I wonder where they ran into problems. Imperfect yield shouldn't be a problem for large computers; maybe they weren't able to successfully disconnect dud sections from the busses, or maybe they leaked badly anyway.


> > Surely a 3d block gives better interconnectivity than today's flat
> > plane devices.
>
> The interconnects are typically N-D where N can be upto 8 or so.

I don't know what you mean there.


NT

krw

Dec 7, 2016, 2:25:16 PM
One of the processors I worked on in 2000-2006 had 10 layers of copper
interconnects.

bill....@ieee.org

Dec 7, 2016, 8:29:09 PM
On Thursday, December 8, 2016 at 1:01:36 AM UTC+11, Phil Hobbs wrote:
> On 12/06/2016 08:57 PM, Tim Williams wrote:
> > <tabb...@gmail.com> wrote in message
> > news:d749dd1b-f0e3-4ff6...@googlegroups.com...
> >> That is where my knowledge runs right out. What prevents one from, in
> >> principle, writing aluminium tracks using a scanning atom beam, then
> >> exposing it to low temperature gas to passivate it? Could data be
> >> piped round by SiC LEDs?
> >>
> >
> > Scanning beams are extraordinarily slow -- ICs are unimaginably detailed
> > today. (They're tolerable for prototypes, AFAIK, but still very
> > expensive.)
>
> The resolution is also too low for modern chips.

Probably not true.

> SEMs ran out of resolution about 8 or 9 years ago--you have to use TEMs
> nowadays even for failure analysis, which is almost unimaginably labour-
> intensive, involving focused ion beams to chop out a tiny thin sample
> that is then transferred to the grid of the TEM.

The problem is more that integrated circuits are now layered structures, and working out what is going on deep in 10 or 11 layers of interconnecting metallisation requires penetration as well as resolution. Even a high-voltage TEM can't drive an electron all that deep into an integrated circuit.

The electron beam tester that I was working on from 1988 to 1991 was going to be sold with an ion beam source to cut holes down to the lower layers of metallisation (and deposit tungsten interconnects to replace stuff which had been deposited in the wrong place - we had to worry about getting rid of unused tungsten carbonyl in the vacuum exhaust). Happily the project got canned before we'd got our hands on the ion-beam column.

<snip>

--
Bill Sloman, Sydney

John Larkin

Dec 7, 2016, 10:06:27 PM
On Wed, 7 Dec 2016 09:01:26 -0500, Phil Hobbs
<pcdhSpamM...@electrooptical.net> wrote:

>On 12/06/2016 08:57 PM, Tim Williams wrote:
>> <tabb...@gmail.com> wrote in message
>> news:d749dd1b-f0e3-4ff6...@googlegroups.com...
>>> That is where my knowledge runs right out. What prevents one from, in
>>> principle, writing aluminium tracks using a scanning atom beam, then
>>> exposing it to low temperature gas to passivate it? Could data be
>>> piped round by SiC LEDs?
>>>
>>
>> Scanning beams are extraordinarily slow -- ICs are unimaginably detailed
>> today. (They're tolerable for prototypes, AFAIK, but still very
>> expensive.)
>
>The resolution is also too low for modern chips. SEMs ran out of
>resolution about 8 or 9 years ago--you have to use TEMs nowadays even
>for failure analysis, which is almost unimaginably labour-intensive,
>involving focused ion beams to chop out a tiny thin sample that is then
>transferred to the grid of the TEM.
>
>
>
>Cheers
>
>Phil "former chip guy" Hobbs


I did some work on this one, a 3D tomographic atom probe.

http://www.cameca.com/instruments-for-research/atom-probe.aspx

It constructs a 3D image of the sample, atom by atom, and identifies
the species and isotope of each atom. But the sample prep is horrible;
they have to ion mill a tiny, nm-radius tip out of the semiconductor.

(I got a bunch of stock which, of course, turned out to be worthless.)

John "former microscopy guy" Larkin







--

John Larkin Highland Technology, Inc

lunatic fringe electronics

Martin Brown

Dec 8, 2016, 3:22:15 AM
One thing that would radically change things would be if there were no
longer a system clock and everything were allowed to free-run
asynchronously. But then you get all sorts of potential nasty side
effects and race conditions - just like in software.
>
> For a large scale computer it makes sense in principle. No point
> chopping, packaging and reassembling it all on PCBs if you can let it
> self-test, lose bad sections and run as is.

ISTR that was the reasoning at the time, but it proved way too difficult
to implement and get anything like acceptable yield. Power dissipation
and obtaining a good enough silicon process were, I think, the killers in
terms of commercial success. Anamartic was one player I knew in
Cambridge - making what was, for the time, a large 40 MB RAM from 2x 6" wafers.

http://www.computinghistory.org.uk/det/3043/Anamartic-Wafer-Scale-160MB-Solid-State-Disk/

The infamous relativity denier Ivor Catt had some patents on it.

> I wonder where they ran into problems. Imperfect yield shouldn't be a
> problem for large computers, maybe they weren't able to successfully
> disconnect dud sections from busses, maybe they leaked badly anyway.

You would have to research the history. It is quite a long time ago now. I
am pretty sure there were others trying to do the same sort of thing
with transputer clusters too. Here is a paper from the early days when
the technology was full of promise - enjoy:

http://people.csail.mit.edu/cel/resume/papers/wafer-scale-integration.pdf
>
>
>>> Surely a 3d block gives better interconnectivity than today's
>>> flat plane devices.
>>
>> The interconnects are typically N-D where N can be upto 8 or so.
>
> I don't know what you mean there.

CPU clusters have fast interconnect data paths, typically configured
like a hypercube of up to dimension 8. A square is 2 links per node, a
cube is 3, etc. In 3D space that is about where fitting the cables in
and getting enough cooling to the internal works starts to bite!
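The hypercube numbers can be illustrated in a few lines (a generic sketch, not any particular machine's interconnect):

```python
# In a dimension-d hypercube, each of the 2**d nodes has d links, and
# the hop count between two nodes is the Hamming distance between
# their binary addresses (at most d).
def hops(a: int, b: int) -> int:
    """Hops between nodes a and b: number of bits where they differ."""
    return bin(a ^ b).count("1")

d = 8
nodes = 2 ** d
print(f"dimension {d}: {nodes} nodes, {d} links per node")
print(f"worst case: {hops(0, nodes - 1)} hops")  # opposite corners
```

The appeal is that the node count doubles with each extra dimension while links per node and worst-case latency grow only linearly.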

--
Regards,
Martin Brown

jurb...@gmail.com

Dec 8, 2016, 4:22:25 AM
General reply.

While there are a couple of hurdles in this, we are looking at diminishing returns. Reason being, we are close to being limited by how fast electrons can be sent through a conductor.

You are getting to the point where like two electrons are going to have to flip a switch, gate or whatever. Remember the building blocks of this shit, a processor is nothing but a bunch of gates.

Building them smaller and smaller of course allows for more complexity, but it also puts the separate devices closer together. The electrons have less distance to travel and there is also less capacitance. (inductance too)

I think the key to this is to make it all smaller, but not necessarily overcomplicate it more. Imagine a PC where you are online before your finger comes off the mouse button. And don't give me this "Windows does that" because it is running the program in the background all the time and all you are doing is switching it to the foreground.

This brings into question the need for this in the first place. When your PC is slow it is because something has clogged it up. My shit runs fast, and I mean really fast. I never defrag or run scandisk. I just don't install things. The only thing I installed was Office 97 Pro. Every other piece of software I have runs portable, I got a shortcut to the EXE file and that is that. And that is why this thing, eight years old, runs faster than what you buy in the store now. Even AVG free doesn't slow it down. Yeah, I installed that, so that is two.

But advancing this technology is just going to result in them piling in more programs, some of them snooping on you, others trying to rip you off and all this shit. New Jeeps run Windows, god damn, what the fuck were they thinking ? I tell you guys who play the stock market, SHORT THEIR STOCK, you will probably make money. My business sense, which is not perfect but is better than some, tells me that people who buy four wheel drive vehicles that are made a little bit like an army vehicle do not want Bill Gates and Windows running their car. In fact I would bet some of them would like to go back to ignition points and maybe even having to get out and lock the hubs to use the four wheel drive.

And even if not I am sure none of them want Windows running their car. At least the ones who know anything, they would probably opt for Linux. But business is not like that now, choice is limited and "You buy what we got" and you got no choice.

And your new PC comes with Win 10, you got no choice.

It is fucking high time the consumer takes the power back. Like WE ARE NOT BUYING THIS. YOU pay the money therefore YOU should be able to choose. YOU are the people who made these motherfuckers rich, and YOU have the power to take it away.

You could buy MACs, you could buy barebones or even components and load Linux and tell Gates to take his vaccinate the Africans deal and go fuck himself. You CAN do that. But you won't. It's too hard.

So, because they are going to load you up with all these "apps" and whatever, they have to go into quantum computing or some shit so they run fast enough, and guess who pays for that ? I come from a time when people paid cash for a new car. Some (a few) for new houses. Now people put the gasoline on credit.

This is progress.

If I could get online with DOS, get an occasional video, whatever, I would be fine. But the web itself and whoever else always bug you to upgrade software and then eventually your hardware has to go because it is not good enough. And nobody sees the scam, THE BLATANT SCAM that is going on right before your eyes.

You know, when you send a Man to Mars, build one of these supercomputers, by all means. But if you do it to have some punk ass piece of shit have better graphics in his video game where he shits his pants because he can't leave the screen, leave me out.

Is anyone picking up what I am laying down or is it really hopeless ?

Phil Hobbs

unread,
Dec 8, 2016, 10:46:03 AM12/8/16
to
Trilogy was the big dog, but died for several reasons. A few:

1. You can't get signals in and out of a chip larger than about 22 mm
square, because even with underfill to relieve the stress, differential
thermal expansion rips the solder balls off the base level metal. That
cripples the I/O bandwidth of anything larger.

2. You can turn off cores and memory lines with some types of defects,
but not other types. Bleeding edge processor yields are low enough that
you never get a working wafer.

3. Logic and DRAM processes are really different.

Cheers

Phil Hobbs

Robert Baer

unread,
Dec 8, 2016, 12:34:56 PM12/8/16
to
tabb...@gmail.com wrote:
> What do people think of the possibility in the distant future of transitioning to a solid 3d block silicon IC structure?
>
> Thermal management is the most obvious issue. Extremely low power circuitry exists, albeit slow. Copper rods could be included to improve heat transfer to the surface.
>
>
> NT
It has been done; maybe 10-15 years ago.

tabb...@gmail.com

unread,
Dec 8, 2016, 6:50:30 PM12/8/16
to
On Thursday, 8 December 2016 15:46:03 UTC, Phil Hobbs wrote:
> On 12/08/2016 03:22 AM, Martin Brown wrote:
So solder on LEDs and do it optically. Or maybe solder it to that flexy plastic Parlux.

> 2. You can turn off cores and memory lines with some types of defects,
> but not other types.

That's the major problem I suspected. One might perhaps improve the odds by adding 1 or 2 switchbanks between bus & core etc, plus switches in the power lines for those cores, making it much more likely that a bad core can be taken out of circuit electrically.

> Bleeding edge processor yields are low enough that
> you never get a working wafer.

Use a high yield CPU. It may not be what's wanted for today's supercomputers, but if that's the one way to make this work, the imagined future can live with it.

Or split the CPU into sections, use a fair bit of redundancy and the yield goes up. With so much silicon one can waste a lot of it, at least in the future when it's relatively cheap.
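To put rough numbers on the redundancy argument (a toy Poisson yield model with invented defect figures, not real process data): small spared blocks mean a wafer only needs *most* blocks to work, and the binomial tail makes that very likely.

```python
import math

# Back-of-envelope for the sparing idea above (numbers purely illustrative):
# classic Poisson model, per-block yield Y = exp(-D * A), where D is defect
# density (defects/cm^2) and A is block area (cm^2); blocks fail independently.
def block_yield(defect_density: float, area_cm2: float) -> float:
    return math.exp(-defect_density * area_cm2)

def usable_probability(n_blocks: int, n_needed: int, y: float) -> float:
    """P(at least n_needed of n_blocks are good) -- a binomial tail sum."""
    return sum(math.comb(n_blocks, k) * y**k * (1 - y)**(n_blocks - k)
               for k in range(n_needed, n_blocks + 1))

y = block_yield(0.5, 0.1)                        # small blocks: ~95% each
print(round(usable_probability(100, 90, y), 3))  # wafer very likely usable
```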

> 3. Logic and DRAM processes are really different.

I'm hoping there is some type of RAM that can be made on the same process, even if it's dino RAM.

The goals are quite different. The single wafer attempts were trying to produce supercomputers, where peak performance per core is a requirement. I'm trying to have as much computing power in a future user's pocket as possible, a future where silicon has become extremely cheap.

> Cheers
>
> Phil Hobbs


NT

tabb...@gmail.com

unread,
Dec 8, 2016, 7:10:56 PM12/8/16
to
On Thursday, 8 December 2016 09:22:25 UTC, jurb...@gmail.com wrote:
> General reply.
>
> While there are a couple of hurdles in this, we are looking at diminishing returns. Reason being we are close to getting limited by how fast electrons can be sent through a conductor.

Diminishing returns on raw speed, maybe. But that isn't the only way to go. Massive parallelism is hugely problematic, but is probably an unavoidable part of future computing. At least it works better when the computer is doing a large number of tasks, which IMHO future computers will.

> You are getting to the point where like two electrons are going to have to flip a switch, gate or whatever. Remember the building blocks of this shit, a processor is nothing but a bunch of gates.
>
> Building them smaller and smaller of course allows for more complexity, but it also puts the separate devices closer together. The electrons have less distance to travel and there is also less capacitance. (inductance too)
>
> I think the key to this is to make it all smaller, but not necessarily overcomplicate it more. Imagine a PC where you are online before your finger comes off the mouse button. And don't give me this "Windows does that" because it is running the program in the background all the time and all you are doing is switching it to the foreground.
>
> This brings into question the need for this in the first place. When your PC is slow it is because something has clogged it up. My shit runs fast, and I mean really fast. I never defrag or run scandisk. I just don't install things. The only thing I installed was Office 97 Pro. Every other piece of software I have runs portable, I got a shortcut to the EXE file and that is that. And that is why this thing, eight years old, runs faster than what you buy in the store now. Even AVG free doesn't slow it down. Yeah, I installed that, so that is two.
>
> But advancing this technology is just going to result in them piling in more programs, some of them snooping on you, others trying to rip you off and all this shit.

We get to choose. You can load a new powerful PC with junk, or you can find bloat-free apps.

> New Jeeps run Windows, god damn, what the fuck were they thinking ? I tell you guys who play the stock market, SHORT THEIR STOCK, you will probably make money. My business sense, which is not perfect but is better than some tells me that people who but four wheel drive vehicles that are mad a little bit like an army vehicle, do not want Bill Gates and Windows running their car. In fact I would bet some of them would like to go back to ignition points and maybe even having to get out and lock the hubs to use the four wheel drive.
>
> And even if not I am sure none of them want Windows running their car. At least the ones who know anything, they would probably opt for Linux. But business is not like that now, choice is limited and "You buy what we got" and you got no choice.
>
> And your new PC comes with Win 10, you got no choice.

I don't buy PCs with win preinstalled, or anything preinstalled. People buy what they want, and most users are pretty ignorant. Hence it's popular. Meanwhile Linux is more able than it used to be.

> It is fucking high time the consumer takes the power back. Like WE ARE NOT BUYING THIS. YOU pay the money therefore YOU should be able to choose. YOU are the people who made these motherfuckers rich, and YOU have the power to take it away.
>
> You could buy MACs, you could buy barebbones or even components and load Linux and tell Gates to take his vaccinate the Africans deal and go fuck himself. You CAN do that. But you won't. It's too hard.
>
> so because they are going to load you up with all these "apps" and whatever they have to go into quantum computing or some shit so they run fast enough and guess who pays for that ? I come from a time when people paid cash for a new car. Some (a few) for new houses. Now people put the gasoline on credit.
>
> This is progress.
>
> If I could get online with DOS, get an occasional video, whatever, I would be fine.

I was online with an Apple II at 300 baud. Trust me, a CLI with no browser, no hard disc and a 1970s CPU is far from fine.

> But the web itself and whoever else always bug you to upgrade software and then eventually your hardware has to go because it is not good enough. And nobody sees the scam, THE BLATANT SCAM that is going on right before your eyes.

You can put a given away junker online if you want. But why? It's the downside of progress. And yes, computers have progressed. Try using my 50MHz laptop and tell me it's ok. It's not. It does what it ever did, but is totally inadequate for all the extra functionality that has come along since.

> You now, when you send a Man to Mars, build one of these supercomputers, by all means. But if you do it to have some punk ass piece of shit have better graphics in his video game where he shits his pants because he can't leave the screen, leave me out.
>
> Is anyone picking up what I am laying down or is it really hopeless ?

Tools can be used for good or not. That some will use them poorly is not news, but it's their life. Today we have much better tools. Thank God we're not stuck in the computing abilities of 1982, or whatever semi-distant date you want to choose.

It's all about perspective.


NT

Clifford Heath

unread,
Dec 10, 2016, 4:56:46 PM12/10/16
to
On 06/12/16 22:16, tabb...@gmail.com wrote:
> What do people think of the possibility in the distant future of transitioning to a solid 3d block silicon IC structure?
> Thermal management is the most obvious issue.

There's a good reason that almost all the computation in a human brain
is done in the outer 2mm (that's why it's called "cortex"), and why
it's so deeply folded to increase the surface area. The rest is
mostly interconnect. Thermal management is *the* issue.

Clifford Heath.

krw

unread,
Dec 10, 2016, 5:43:36 PM12/10/16
to
Liquid cooling doesn't hurt.

bill....@ieee.org

unread,
Dec 10, 2016, 7:31:38 PM12/10/16
to
Birds seem to do better. They are more committed to minimising weight and maximising performance.

--
Bill Sloman, Sydney

tabb...@gmail.com

unread,
Dec 10, 2016, 9:19:35 PM12/10/16
to
On Saturday, 10 December 2016 21:56:46 UTC, Clifford Heath wrote:
Well, it's certainly one of them, not the only one. While we're familiar with computing hardware running near to flat out, producing lots of heat, there are also cold running CPUs of lesser but still useful performance. Run that 3.6W PowerPC 750FX 900MHz CPU at 90MHz and it only eats around 0.36W. Run an XScale 80321 600 MHz 0.5 watt CPU at 60MHz and it eats about 0.05W. Run a Pentium M ULV 773 1.3 GHz 5 W CPU at 130MHz and it's down to 0.5W.

Even today a lot can be done with 200MHz if you give every task, subtask and subsubtask its own CPU or set of parallel CPUs. Now assess each task to see what CPU performance it really needs, and you find that most of the CPUs can be run at much lower speed than 200MHz, and power consumption dwindles dramatically. Also most of those CPUs can be simpler, again cutting power use.
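The scaling being assumed here can be written down (a sketch using the textbook dynamic-power relation P ≈ C·V²·f; static leakage is ignored, and the figures are just this post's examples):

```python
# Sketch of the assumed scaling: dynamic CMOS power is roughly C * V^2 * f,
# so at fixed voltage power drops linearly with clock, and lowering the
# supply voltage as well helps quadratically. Static leakage is ignored.
def scaled_power(p_full: float, f_full_hz: float, f_hz: float,
                 v_ratio: float = 1.0) -> float:
    return p_full * (f_hz / f_full_hz) * v_ratio**2

# The 750FX example from above: 3.6 W at 900 MHz -> ~0.36 W at 90 MHz.
print(scaled_power(3.6, 900e6, 90e6))
```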

Since this is set in the future I can well believe enough progress is likely to occur to permit running CPUs at far below their speed limits. The motivation is enough computing power in one handheld block to run a huge number of tasks in parallel. (It can do a percentage of that by farming the processing out elsewhere if necessary.)


NT

rickman

unread,
Dec 11, 2016, 2:01:43 AM12/11/16
to
I hadn't heard of that. I recall a Ball Semiconductor which developed
equipment for processing spherical silicon integrated circuits. Not sure
what they are doing these days though. They appear to be around, but
not making many waves. Still, that isn't really 3-D silicon as they
only use the surface.

--

Rick C

tabb...@gmail.com

unread,
Dec 11, 2016, 5:07:47 AM12/11/16
to
On Thursday, 8 December 2016 23:50:30 UTC, tabby wrote:
> On Thursday, 8 December 2016 15:46:03 UTC, Phil Hobbs wrote:

> > Bleeding edge processor yields are low enough that
> > you never get a working wafer.
>
> Use a high yield CPU. It may not be what's wanted for today's supercomputers, but if that's the one way to make this work, the imagined future can live with it.
>
> Or split the CPU into sections, use a fair bit of redundancy and the yield goes up. With so much silicon one can waste a lot of it, at least in the future when it's relatively cheap.

The smaller the CPU, the less silicon area is lost from each defect. Maybe a lookup table CPU with RISC.


> > 3. Logic and DRAM processes are really different.
>
> I'm hoping there is some type of RAM that can be made on the same process, even if it's dino RAM.

on-die cache is fast


Yield is the problem. What's wrong with connecting CPUs to busses via a series of logic gates that connect or disconnect, and power rails too for the 50mW CPUs? Maybe to guard against a bad CPU not being disconnectable due to coincident logic gate failures one can use multiple busses.
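A quick sketch of why multiple busses help (figures invented purely for illustration): a bad core is only stranded if its disconnect gate fails stuck-connected on every bus, so independent gate failures multiply.

```python
# Invented figures, just to show the shape of the argument: if a disconnect
# gate fails stuck-connected with probability p, and each core has an
# independent gate per bus, the chance a bad core can't be isolated is p**k.
def stranded_probability(p_gate_fail: float, n_busses: int) -> float:
    return p_gate_fail ** n_busses

for k in (1, 2, 3):
    print(k, stranded_probability(0.01, k))   # ~1e-2, ~1e-4, ~1e-6
```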


NT

upsid...@downunder.com

unread,
Dec 11, 2016, 6:43:00 AM12/11/16
to
On Sun, 11 Dec 2016 02:07:42 -0800 (PST), tabb...@gmail.com wrote:

>
>Yield is the problem. What's wrong with connecting CPUs to busses via a series of logic gates that connect or disconnect, and power rails too for the 50mW CPUs? Maybe to guard against a bad CPU not being disconnectable due to coincident logic gate failures one can use multiple busses.

Assuming power on self tests to disable non-functional blocks.

After the wafer has been produced, just do a test on the disconnection
logic (not the internal functionality of the block) and if some
disconnect logic fails, burn off the bad block data, clock and power
connections using a laser.

This method should not take too long compared to cutting a wafer into
chips, testing each individual chip, skipping bad chips and packaging
good chips into packages.

upsid...@downunder.com

unread,
Dec 11, 2016, 6:47:39 AM12/11/16
to
On Sun, 11 Dec 2016 02:07:42 -0800 (PST), tabb...@gmail.com wrote:

>> > Bleeding edge processor yields are low enough that
>> > you never get a working wafer.
>>
>> Use a high yield CPU. It may not be what's wanted for today's supercomputers, but if that's the one way to make this work, the imagined future can live with it.
>>
>> Or split the CPU into sections, use a fair bit of redundancy and the yield goes up. With so much silicon one can waste a lot of it, at least in the future when it's relatively cheap.
>
>The smaller the CPU, the less silicon area is lost from each defect. Maybe a lookup table CPU with RISC.

This sounds much like disk bad-block replacement algorithms. Just
address the processor with a logical processor number and some
(redundant) hardware will do the logical to physical processor
mapping.
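That remapping might look something like this (a minimal sketch; the names and structure are mine, not from any real part):

```python
# Minimal sketch of logical-to-physical core remapping, in the spirit of
# disk bad-block sparing. `core_ok` would come from a power-on self test.
def build_remap(core_ok: list[bool]) -> list[int]:
    """Logical core number -> physical core index, skipping failed cores."""
    return [phys for phys, ok in enumerate(core_ok) if ok]

remap = build_remap([True, False, True, True, False, True])
print(remap)     # -> [0, 2, 3, 5]
print(remap[1])  # logical core 1 lives on physical core 2
```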

upsid...@downunder.com

unread,
Dec 11, 2016, 6:49:55 AM12/11/16
to
On Tue, 6 Dec 2016 03:16:18 -0800 (PST), tabb...@gmail.com wrote:

>What do people think of the possibility in the distant future of transitioning to a solid 3d block silicon IC structure?

Three dimensional printing ?

upsid...@downunder.com

unread,
Dec 11, 2016, 6:55:26 AM12/11/16
to
On Sun, 11 Dec 2016 08:56:42 +1100, Clifford Heath
<no....@please.net> wrote:

When we get semiconductor materials that allow high density ICs
to run reliably above +100 C, vapor cooling helps carry away a
lot of heat.

Phil Hobbs

unread,
Dec 11, 2016, 7:04:06 AM12/11/16
to
>Since this is set in the future I can well believe enough progress is
> likely to occur to permit running CPUs at far below their speed limits.
>The motivation is enough computing power in one handheld block to
>run a huge number of tasks in parallel.

The power consumption doesn't go down anywhere near zero at zero clock speed, even in a fully static design. Sub-32nm FETs are horribly leaky.

And the utilization percentage of highly multicore processors is nearly always poor, because it makes it much harder to write correct, efficient programs.

And the thermal expansion problem strangles the I/O bandwidth of super-size chips, as I noted upthread.

Cheers

Phil Hobbs

Phil Hobbs

unread,
Dec 11, 2016, 7:18:20 AM12/11/16
to
>When we get semiconductor materials that allows high density ICs
>running reliably above +100 C, vapor cooling helps getting away with a
>lot of heat.

There's nothing special about 100C--heat pipes work fine at normal operating temperatures. I have several boxes that use them, and you probably do too.

Cheers

Phil Hobbs

upsid...@downunder.com

unread,
Dec 11, 2016, 7:30:08 AM12/11/16
to
On Sun, 11 Dec 2016 04:04:03 -0800 (PST), Phil Hobbs
<pcdh...@gmail.com> wrote:


>And the utilization percentage of highly multicore processors is nearly always poor, because it makes it much harder to write correct, efficient programs.

This is true for mobile and simple desktop applications, but for
servers (such as database and web) you really want to throw in as
many processors as are available. This has been the case for at least
40-60 years, until the physical I/O performance limit kicks in.

Tom Gardner

unread,
Dec 11, 2016, 7:38:48 AM12/11/16
to
On 11/12/16 12:04, Phil Hobbs wrote:
> And the utilization percentage of highly multicore processors is nearly always poor, because it makes it much harder to write correct, efficient programs.

In some cases the program/algorithm/problem is dominated
by Amdahl's law.

But there are other important "embarrassingly parallel"
programs and problems - telecom systems are an obvious
example.

In all cases the modern bottlenecks are IO/memory
bandwidth and memory latency. "DRAM is the new
disk" :(
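For concreteness, the standard Amdahl bound (textbook formula; the example numbers are mine):

```python
# Standard Amdahl's-law bound: with serial fraction s, n cores give at most
# 1 / (s + (1 - s)/n) speedup -- which is why utilisation of highly
# multicore parts is so often poor.
def amdahl_speedup(s: float, n: int) -> float:
    return 1.0 / (s + (1.0 - s) / n)

# Even a 5% serial fraction caps 100 cores below 17x:
print(round(amdahl_speedup(0.05, 100), 1))   # -> 16.8
```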

upsid...@downunder.com

unread,
Dec 11, 2016, 7:44:28 AM12/11/16
to
The heat pipe helps remove the heat from the chip/wafer, but it
doesn't help move the heat into the environment. Big
heatsinks are still required.

I have a 7.5 kW heat source in my sauna. Throwing water on the sauna
stones will quite effectively transfer the heat into the sauna room or
into the outside world, if the sauna window is open.

Thus, I really do not think that getting rid of 10 kW computer
dissipation would be a problem, provided that the electronics can
comfortably handle +100 C.

Phil Hobbs

unread,
Dec 11, 2016, 7:52:47 AM12/11/16
to
>This is true for mobile and simple desk top applications, but for
>servers (such as data base and web) , you really want to throw in as
>much processors as are available).

If the workload is dominated by unconnected small tasks, as in a web server, sure. But not things like ERP or other big database tasks, which don't parallelize at all well.

(Plus the connection between "writing correct and efficient programs" and web dev is, *ahem*, oblique. ;)

I've written a big-iron (100-core-ish) 3D electromagnetic simulator, and getting it to scale well was a man-sized problem.

Database locking breaks parallelism very badly.

Cheers

Phil Hobbs

Phil Hobbs

unread,
Dec 11, 2016, 7:57:15 AM12/11/16
to
>The heat pipe helps removing the heat from the chip/wafer, but it
>doesn't help removing the heat into the environment. Still big
>heatsinks are required.

>I have a 7.5 kW heat source in my sauna. Throwing water on the sauna
>stones will quite effectively transfer the heat into the sauna room or
>into the outside world, if the sauna window is open.

Open cycle cooling has its own problems, such as scale deposits and the requirement for a water tank that never runs dry.

Cheers

Phil Hobbs

bill....@ieee.org

unread,
Dec 11, 2016, 8:02:56 AM12/11/16
to
You do have to check the performance of heat pipes working below 100C. It doesn't take much air-leakage to make them work really badly.

My 1996 thermostat went over to heat-pipes after I'd left, and what was needed to make them work reliably was a low temperature test bed. Initially about a quarter of the parts supplied failed the test, but eventually the supplier got the message and tested them before he shipped them.

--
Bill Sloman, Sydney

krw

unread,
Dec 11, 2016, 8:13:53 AM12/11/16
to
On Sun, 11 Dec 2016 13:42:58 +0200, upsid...@downunder.com wrote:

>On Sun, 11 Dec 2016 02:07:42 -0800 (PST), tabb...@gmail.com wrote:
>
>>
>>Yield is the problem. What's wrong with connecting CPUs to busses via a series of logic gates that connect or disconnect, and power rails too for the 50mW CPUs? Maybe to guard against a bad CPU not being disconnectable due to coincident logic gate failures one can use multiple busses.
>
>Assuming power on self tests to disable non-functional blocks.
>
>After the wafer has been produced, just do a test on the disconnection
>logic (not the internal functionality of the block) and if some
>disconnect logic fails, burn off the bad block data, clock and power
>connections using a laser.

You want to turn the power off on any bad, or unused, blocks to save
power. Modern processors do this. It's well known art.
>
>This method should not take too long compared to cutting a wafer into
>chips, testing each individual chip, skipping bad chips and packaging
>good chips into packages.

There is no guarantee that a chip that tests good at time=0 will
stay good down the road. If its neighbor had a defect, there may be a good
chance it does, too (probability dependent on defect). The chip
manufacturers are well aware of the costs.

krw

unread,
Dec 11, 2016, 8:20:36 AM12/11/16
to
IBM was using vapor cooling at 85C, forty years ago. The problem is
that boiling = distillation. Any gunk in your coolant tends to get
deposited on the chips. That's the main reason IBM went away from it,
in favor of the Thermal Conduction Module.

Tom Gardner

unread,
Dec 11, 2016, 8:30:32 AM12/11/16
to
On 11/12/16 12:52, Phil Hobbs wrote:
> (Plus the connection between "writing correct and efficient programs" and web dev is, *ahem*, oblique. ;)

s/and efficient/ / :(

Clifford Heath

unread,
Dec 11, 2016, 6:12:52 PM12/11/16
to
And of course the Cray-2, which had a continuous flow of
Fluorinert over the circuit boards of the entire CPU. There
may have been local boiling, but not of the entire volume.
It takes measures like that to get rid of 100 kW of heat
from a space the size of a large fridge.

tabb...@gmail.com

unread,
Dec 11, 2016, 6:46:35 PM12/11/16
to
On Sunday, 11 December 2016 12:04:06 UTC, Phil Hobbs wrote:
>NT:

> >Since this is set in the future I can well believe enough progress is
> > likely to occur to permit running CPUs at far below their speed limits.
> >The motivation is enough computing power in one handheld block to
> >run a huge number of tasks in parallel.
>
> The power consumption doesn't go down anywhere near zero at zero clock speed, even in a fully static design. Sub-32nm FETs are horribly leaky.

A pile of silicon simply has to be energy efficient, and static leakage must be minimised. Hence I mentioned sticking with a larger process.

> And the utilization percentage of highly multicore processors is nearly always poor, because it makes it much harder to write correct, efficient programs.

It's a core computing problem, pardon the pun. But a future computer, which I propose will be running many times more apps & background tasks than today (a topic for another day perhaps), can at least make good use of many more cores/CPUs. And it can choose whichever CPUs deliver the wanted result in the wanted time with the least energy use.


> And the thermal expansion problem strangles the I/O bandwidth of super-size chips, as I noted upthread.

How does that prevent one, for example, from mounting an IC upside down on a heatsink and soldering 20mm wide strips of Parlux to its connections?


NT

> Cheers
>
> Phil Hobbs

tabb...@gmail.com

unread,
Dec 11, 2016, 6:50:58 PM12/11/16
to
On Sunday, 11 December 2016 13:13:53 UTC, krw wrote:
> On Sun, 11 Dec 2016 13:42:58 +0200, upsid...@downunder.com wrote:
Best for each CPU or whatever unit to have its own test & disconnect system, followed by a local neighbourhood system that tests & disconnects several CPUs etc. Putting all eggs in one basket is not a good plan - unless it's simple enough that the yield is high enough for that bit of silicon.


NT

krw

unread,
Dec 11, 2016, 6:54:31 PM12/11/16
to
On Mon, 12 Dec 2016 10:12:47 +1100, Clifford Heath
TCMs were 1.2kW (~100W/chip) and about 4"x4"x3". The liquid
encapsulated modules, about 1kW (also around 100W/per). There's more
than one way to skin a skunk.

tabb...@gmail.com

unread,
Dec 11, 2016, 6:56:15 PM12/11/16
to
On Sunday, 11 December 2016 11:49:55 UTC, upsid...@downunder.com wrote:
> On Tue, 6 Dec 2016 03:16:18 -0800 (PST), tabbypurr wrote:
>
> >What do people think of the possibility in the distant future of transitioning to a solid 3d block silicon IC structure?
>
> Three dimensional printing ?

I can't see how that would produce monocrystalline silicon. Unless it was laid down atomic layer by atomic layer, with atoms that settle wrongly then stripped off and relaid. As was pointed out, that would simply be far too slow.


NT

tabb...@gmail.com

unread,
Dec 11, 2016, 6:58:54 PM12/11/16
to
FWIW what I had in mind was a handheld computer the size of a smartphone, so it can't run hot. This at least reduces thermal expansion.


NT

krw

unread,
Dec 11, 2016, 7:47:14 PM12/11/16
to
What's best is for the manufacturer to design the system based on
requirements, available technology, and economics. Putting everything
on one chip makes the I/O impossible (the I/O has to be rerouted around
failed chips). Bandwidth is only money. Latency is forever.

krw

unread,
Dec 11, 2016, 7:48:56 PM12/11/16
to
You mean like a...

...wait for it...

... smart phone?

They're sorta thermally limited, too.

Phil Hobbs

unread,
Dec 11, 2016, 8:04:11 PM12/11/16
to
>TCMs were 1.2kW (~100W/chip) and about 4"x4"x3".  The liquid
>encapsulated modules, about 1kW (also around 100W/per).  There's more
>than one way to skin a skunk.

The 3090 TCMs I know about had either 100 or 121 chips per module. A Clark board (a giant PCB about half an inch thick with a zillion layers) held (iirc) 9 of them. Power supplies were something like +1.6 and -3.3V at 8000 amps. The bus bars were made of plated copper angle stock, iirc about 1-1/4 x 1-1/4 x 1/8 inch. So that's 40 kW per board, circa 1991.

Cheers

Phil Hobbs

krw

unread,
Dec 11, 2016, 8:47:01 PM12/11/16
to
CPU boards held six TCMs; channel director boards held nine, IIRC. A system
could have four CPUs and six channels, IIRC.

With all that power, the density isn't all that high. I once
calculated the power density of the dual-core processor I worked on.
It came out to something like 1E9 times the power density of the sun.
;-)

tabb...@gmail.com

unread,
Dec 12, 2016, 8:03:05 AM12/12/16
to
On Monday, 12 December 2016 00:47:14 UTC, krw wrote:
> On Sun, 11 Dec 2016 15:50:55 -0800 (PST), tabbypurr wrote:
> >On Sunday, 11 December 2016 13:13:53 UTC, krw wrote:
> >> On Sun, 11 Dec 2016 13:42:58 +0200, upsid...@downunder.com wrote:
> >> >On Sun, 11 Dec 2016 02:07:42 -0800 (PST), tabbypurr wrote:
> >> >
> >> >>
> >> >>Yield is the problem. What's wrong with connecting CPUs to busses via a series of logic gates that connect or disconnect, and power rails too for the 50mW CPUs? Maybe to guard against a bad CPU not being disconnectable due to coincident logic gate failures one can use multiple busses.
> >> >
> >> >Assuming power on self tests to disable non-functional blocks.
> >> >
> >> >After the wafer has been produced, just do a test on the disconnection
> >> >logic (not the internal functionality of the block) and if some
> >> >disconnect logic fails, burn off the bad block data, clock and power
> >> >connections using a laser.
> >>
> >> You want to turn the power off on any bad, or unused, blocks to save
> >> power. Modern processors do this. It's well known art.
> >> >
> >> >This method should not take too long compared to cutting a wafer into
> >> >chips, testing each individual chip, skipping bad chips and packaging
> >> >good chips into packages.
> >>
> >> There is no guarantee that because a chip tests good at time=0 that it
> >> will down the road. If its neighbor had a defect, there may be a good
> >> chance it does, too (probability dependent on defect). The chip
> >> manufacturers are well aware of the costs.
> >
> >Best for each CPU or whatever unit to have its own test & disconnect system, followed by local neighbourhood system that tests & disconnects several CPUs etc. Putting all eggs in one basket is not a good plan - unless it's simple enough that the yield is high enough for that bit of silicon.
> >
> What's best is for the manufacturer to design the system based on
> requirements, available technology, and economics.

Yes. Which will be running a vast number of tasks in a handheld device. Physics prevents a low-power THz CPU from happening, hence a massive number of cores/CPUs is the viable option.

> Putting everything
> one chip makes the I/O impossible (the I/O has to be rerouted around
> failed chips). Bandwidth is only money. Latency is forever.

I expect I/O will largely be optical and/or wireless. An assortment of compromises must be tolerated to get such a beast working, but the end result is what I'm looking for.

Rerouted, not really. There will be several busses, each connecting to many CPUs; the most suitable working CPU will switch on to take each available batch of data/instructions.


NT

tabb...@gmail.com

unread,
Dec 12, 2016, 8:05:28 AM12/12/16
to
On Monday, 12 December 2016 00:48:56 UTC, krw wrote:
> On Sun, 11 Dec 2016 15:58:50 -0800 (PST), tabbypurr wrote:
> >On Sunday, 11 December 2016 23:46:35 UTC, tabby wrote:
> >> On Sunday, 11 December 2016 12:04:06 UTC, Phil Hobbs wrote:
> >> >NT:
> >>
> >> > >Since this is set in the future I can well believe enough progress is
> >> > > likely to occur to permit running CPUs at far below their speed limits.
> >> > >The motivation is enough computing power in one handheld block to
> >> > >run a huge number of tasks in parallel.
> >> >
> >> > The power consumption doesn't go down anywhere near zero at zero clock speed, even in a fully static design. Sub-32nm FETs are horribly leaky.
> >>
> >> A pile of silicon simply has to be energy efficient, and static leakage must be minimised. Hence I mentioned sticking with a larger process.
> >>
> >> > And the utilization percentage of highly multicore processors is nearly always poor, because it makes it much harder to write correct, efficient programs.
> >>
> >> It's a core computing problem, pardon the pun. But a future computer which I propose will be running many times more apps & background tasks than today (a topic for another day perhaps) can at least make good use of many more cores/CPUs. And it can choose whichever CPUs deliver the wanted result in the wanted time with the least energy use.
> >>
> >>
> >> > And the thermal expansion problem strangles the I/O bandwidth of super-size chips, as I noted upthread.
> >>
> >> How does that prevent one for example mounting an IC upside down on a heatsink and soldering 20mm wide strips of Parlux to its connections?
> >>
> >FWIW what I had in mind was a handheld computer the size of a smartphone, so it can't run hot. This at least reduces thermal expansion.
> >
> You mean like a...
>
> ...wait for it...
>
> ... smart phone?
>
> They're sorta thermally limited, too.

Yes, but with a vast number of cores, vast RAM and humongous data storage all in one block of silicon, as said in the OP.


NT

tabb...@gmail.com

unread,
Dec 12, 2016, 8:14:29 AM12/12/16
to
On Monday, 12 December 2016 13:03:05 UTC, tabby wrote:

> > Putting everything
> > one chip makes the I/O impossible (the I/O has to be rerouted around
> > failed chips). Bandwidth is only money. Latency is forever.
>
> I expect i/o will largely be optical and/or wireless. An assortment of compromises must be tolerated to get such a beast working, but the end result is what I'm looking for.

I/O possibilities so far include:
optical LEDs
optical fibre
soldered data lines to Parlux
partially flexible pins into sockets
connection to the PCB via a sheet of insulating dielectric
wireless comms might also be an option
I suppose even going back to a 1970s-style mass of twisted pairs of flexible wires soldered on could be possible with machine assembly, albeit hateful.


NT

krw

unread,
Dec 12, 2016, 12:43:44 PM12/12/16
to
I'm not buying what you're selling.
>
>> Putting everything
>> one chip makes the I/O impossible (the I/O has to be rerouted around
>> failed chips). Bandwidth is only money. Latency is forever.
>
>I expect i/o will largely be optical and/or wireless. An assortment of compromises must be tolerated to get such a beast working, but the end result is what I'm looking for.

Ah, there's that physics thing again.
>
>Rerouted, not really. There will be several busses each connecting to many CPUs, the most suitable viable CPU will switch on to take each available batch of data/instructions.

FPGA manufacturers have decided that that's *not* the way to go. One
"hot" driver and the whole thing is toast.

>
>
>NT

tabb...@gmail.com

unread,
Dec 12, 2016, 1:32:45 PM12/12/16
to
On Monday, 12 December 2016 17:43:44 UTC, krw wrote:
You don't know what I'm selling

> >> Putting everything
> >> one chip makes the I/O impossible (the I/O has to be rerouted around
> >> failed chips). Bandwidth is only money. Latency is forever.
> >
> >I expect i/o will largely be optical and/or wireless. An assortment of compromises must be tolerated to get such a beast working, but the end result is what I'm looking for.
>
> Ah, there's physics thing again.
> >
> >Rerouted, not really. There will be several busses each connecting to many CPUs, the most suitable viable CPU will switch on to take each available batch of data/instructions.
>
> FPGA manufacturers have decided that that's *not* the way to go. One
> "hot" driver and the whole thing is toast.

I don't know what you mean by 'hot driver', but all sections of the chip are shut down if faulty, including buses.

AFAIK there is no equivalent of this thing today, with lots of CPUs, lots of buses and multiples of everything else. Manufacturers are more into not using redundancy, selling the perfect ones and scrapping the bad. That works when the amount of silicon per device is small enough to avoid defects; it certainly doesn't work for whole-wafer circuits, or in this case whole-block circuits.


NT

krw

unread,
Dec 12, 2016, 7:13:21 PM12/12/16
to
I can read what you're writing. I'm not buying. (sheesh)
>
>> >> Putting everything
>> >> one chip makes the I/O impossible (the I/O has to be rerouted around
>> >> failed chips). Bandwidth is only money. Latency is forever.
>> >
>> >I expect i/o will largely be optical and/or wireless. An assortment of compromises must be tolerated to get such a beast working, but the end result is what I'm looking for.
>>
>> Ah, there's physics thing again.
>> >
>> >Rerouted, not really. There will be several busses each connecting to many CPUs, the most suitable viable CPU will switch on to take each available batch of data/instructions.
>>
>> FPGA manufacturers have decided that that's *not* the way to go. One
>> "hot" driver and the whole thing is toast.
>
>I don't know what you mean by 'hot driver', but all sections of the chip are shut down if faulty, including busses.

You have a bus that goes to every processor. If any one of them
grabs the bus, the entire chip is bad.

>AFAIK there is no equivalent of this thing today, with lots of CPUs, lots of busses and multiples of everything else. Manufacturers are more into not using redundancy, selling the perfect ones and scrapping the bad. That works when your amount of silicon per device is small enough to avoid defects, it certainly doesn't work for whole wafer circuits, or in this case whole block circuits.

There is a reason for that.

tabb...@gmail.com

unread,
Dec 12, 2016, 7:33:12 PM12/12/16
to
On Tuesday, 13 December 2016 00:13:21 UTC, krw wrote:
I don't doubt it.

> I'm not buying. (sheesh)

You still don't know what I'm selling. Go on then, what am I selling?

> >> >> Putting everything
> >> >> one chip makes the I/O impossible (the I/O has to be rerouted around
> >> >> failed chips). Bandwidth is only money. Latency is forever.
> >> >
> >> >I expect i/o will largely be optical and/or wireless. An assortment of compromises must be tolerated to get such a beast working, but the end result is what I'm looking for.
> >>
> >> Ah, there's physics thing again.
> >> >
> >> >Rerouted, not really. There will be several busses each connecting to many CPUs, the most suitable viable CPU will switch on to take each available batch of data/instructions.
> >>
> >> FPGA manufacturers have decided that that's *not* the way to go. One
> >> "hot" driver and the whole thing is toast.
> >
> >I don't know what you mean by 'hot driver', but all sections of the chip are shut down if faulty, including busses.
>
> You have a bus that goes to every processor. If anyone on that bus
> grabs the bus, the entire chip is bad.

Maybe you've not read the thread. That's not how it works.


> >AFAIK there is no equivalent of this thing today, with lots of CPUs, lots of busses and multiples of everything else. Manufacturers are more into not using redundancy, selling the perfect ones and scrapping the bad. That works when your amount of silicon per device is small enough to avoid defects, it certainly doesn't work for whole wafer circuits, or in this case whole block circuits.
>
> There is a reason for that.

Of course. The only present-day app for lots of CPUs and huge amounts of silicon is supercomputers. They need top-performance CPUs, something this proposed approach cannot deliver.


NT

krw

unread,
Dec 12, 2016, 8:40:24 PM12/12/16
to
Some cockamamie 3-D waferscale hypercube processor that's low enough
power to hold in your hand.
>
>> >> >> Putting everything
>> >> >> one chip makes the I/O impossible (the I/O has to be rerouted around
>> >> >> failed chips). Bandwidth is only money. Latency is forever.
>> >> >
>> >> >I expect i/o will largely be optical and/or wireless. An assortment of compromises must be tolerated to get such a beast working, but the end result is what I'm looking for.
>> >>
>> >> Ah, there's physics thing again.
>> >> >
>> >> >Rerouted, not really. There will be several busses each connecting to many CPUs, the most suitable viable CPU will switch on to take each available batch of data/instructions.
>> >>
>> >> FPGA manufacturers have decided that that's *not* the way to go. One
>> >> "hot" driver and the whole thing is toast.
>> >
>> >I don't know what you mean by 'hot driver', but all sections of the chip are shut down if faulty, including busses.
>>
>> You have a bus that goes to every processor. If anyone on that bus
>> grabs the bus, the entire chip is bad.
>
>Maybe you've not read the thread. That's not how it works.

Then you can't just remove a bad processor from the system. Its I/O
would be dead.
>
>
>> >AFAIK there is no equivalent of this thing today, with lots of CPUs, lots of busses and multiples of everything else. Manufacturers are more into not using redundancy, selling the perfect ones and scrapping the bad. That works when your amount of silicon per device is small enough to avoid defects, it certainly doesn't work for whole wafer circuits, or in this case whole block circuits.
>>
>> There is a reason for that.
>
>Of course. The only present day app for lots of CPUs and huge amounts of silicon is supercomputers. They need top performance CPUs, something this proposed approach can not deliver.

No, it's a (bad) pipe dream.

tabb...@gmail.com

unread,
Dec 13, 2016, 6:24:02 AM12/13/16
to
On Tuesday, 13 December 2016 01:40:24 UTC, krw wrote:
Not even close.

> >> >> >> Putting everything
> >> >> >> one chip makes the I/O impossible (the I/O has to be rerouted around
> >> >> >> failed chips). Bandwidth is only money. Latency is forever.
> >> >> >
> >> >> >I expect i/o will largely be optical and/or wireless. An assortment of compromises must be tolerated to get such a beast working, but the end result is what I'm looking for.
> >> >>
> >> >> Ah, there's physics thing again.
> >> >> >
> >> >> >Rerouted, not really. There will be several busses each connecting to many CPUs, the most suitable viable CPU will switch on to take each available batch of data/instructions.
> >> >>
> >> >> FPGA manufacturers have decided that that's *not* the way to go. One
> >> >> "hot" driver and the whole thing is toast.
> >> >
> >> >I don't know what you mean by 'hot driver', but all sections of the chip are shut down if faulty, including busses.
> >>
> >> You have a bus that goes to every processor. If anyone on that bus
> >> grabs the bus, the entire chip is bad.
> >
> >Maybe you've not read the thread. That's not how it works.
>
> Then you can't just remove a bad processor from the system. It's I/O
> would be dead.

I see you've missed a central concept of this.

> >> >AFAIK there is no equivalent of this thing today, with lots of CPUs, lots of busses and multiples of everything else. Manufacturers are more into not using redundancy, selling the perfect ones and scrapping the bad. That works when your amount of silicon per device is small enough to avoid defects, it certainly doesn't work for whole wafer circuits, or in this case whole block circuits.
> >>
> >> There is a reason for that.
> >
> >Of course. The only present day app for lots of CPUs and huge amounts of silicon is supercomputers. They need top performance CPUs, something this proposed approach can not deliver.
>
> No, it's a (bad) pipe dream.

Your imagined version of it is indeed.


NT

krw

unread,
Dec 13, 2016, 8:08:09 PM12/13/16
to
That's *exactly* what you've been describing.

>
>> >> >> >> Putting everything
>> >> >> >> one chip makes the I/O impossible (the I/O has to be rerouted around
>> >> >> >> failed chips). Bandwidth is only money. Latency is forever.
>> >> >> >
>> >> >> >I expect i/o will largely be optical and/or wireless. An assortment of compromises must be tolerated to get such a beast working, but the end result is what I'm looking for.
>> >> >>
>> >> >> Ah, there's physics thing again.
>> >> >> >
>> >> >> >Rerouted, not really. There will be several busses each connecting to many CPUs, the most suitable viable CPU will switch on to take each available batch of data/instructions.
>> >> >>
>> >> >> FPGA manufacturers have decided that that's *not* the way to go. One
>> >> >> "hot" driver and the whole thing is toast.
>> >> >
>> >> >I don't know what you mean by 'hot driver', but all sections of the chip are shut down if faulty, including busses.
>> >>
>> >> You have a bus that goes to every processor. If anyone on that bus
>> >> grabs the bus, the entire chip is bad.
>> >
>> >Maybe you've not read the thread. That's not how it works.
>>
>> Then you can't just remove a bad processor from the system. It's I/O
>> would be dead.
>
>I see you've missed a central concept of this.
>
>> >> >AFAIK there is no equivalent of this thing today, with lots of CPUs, lots of busses and multiples of everything else. Manufacturers are more into not using redundancy, selling the perfect ones and scrapping the bad. That works when your amount of silicon per device is small enough to avoid defects, it certainly doesn't work for whole wafer circuits, or in this case whole block circuits.
>> >>
>> >> There is a reason for that.
>> >
>> >Of course. The only present day app for lots of CPUs and huge amounts of silicon is supercomputers. They need top performance CPUs, something this proposed approach can not deliver.
>>
>> No, it's a (bad) pipe dream.
>
>Your imagined version of it is indeed.

Then perhaps you should learn to write what you mean.

tabb...@gmail.com

unread,
Dec 13, 2016, 8:22:15 PM12/13/16
to
On Wednesday, 14 December 2016 01:08:09 UTC, krw wrote:
> On Tue, 13 Dec 2016 03:23:57 -0800 (PST), tabbypurr wrote:
> >On Tuesday, 13 December 2016 01:40:24 UTC, krw wrote:
> >> On Mon, 12 Dec 2016 16:33:07 -0800 (PST), tabbypurr wrote:
> >> >On Tuesday, 13 December 2016 00:13:21 UTC, krw wrote:
> >> >> On Mon, 12 Dec 2016 10:32:40 -0800 (PST), tabbypurr wrote:
> >> >> >On Monday, 12 December 2016 17:43:44 UTC, krw wrote:
> >> >> >> On Mon, 12 Dec 2016 05:02:58 -0800 (PST), tabbypurr wrote:
> >> >> >> >On Monday, 12 December 2016 00:47:14 UTC, krw wrote:
> >> >> >> >> On Sun, 11 Dec 2016 15:50:55 -0800 (PST), tabbypurr wrote:

> >> >> >> >Yes. Which will be running a vast number of tasks in a handheld device. Physics prevents a low power THz CPU happening hence a massive number of cores/cpus is the viable option.
> >> >> >>
> >> >> >> I'm not buying what you're selling.
> >> >> >
> >> >> >You don't know what I'm selling
> >> >>
> >> >> I can read what you're writing.
> >> >
> >> >I don't doubt it.
> >> >
> >> >> I'm not buying. (sheesh)
> >> >
> >> >You still don't know what I'm selling. Go on then, what am I selling?
> >>
> >> Some cockamamie 3-D waferscale hypercube processor that's low enough
> >> power to hold in your hand.
> >
> >Not even close.
>
> That's *exactly* what you've been describing.

More or less, but I'm not selling it.

> >> >> >> >> Putting everything
> >> >> >> >> one chip makes the I/O impossible (the I/O has to be rerouted around
> >> >> >> >> failed chips). Bandwidth is only money. Latency is forever.
> >> >> >> >
> >> >> >> >I expect i/o will largely be optical and/or wireless. An assortment of compromises must be tolerated to get such a beast working, but the end result is what I'm looking for.
> >> >> >>
> >> >> >> Ah, there's physics thing again.
> >> >> >> >
> >> >> >> >Rerouted, not really. There will be several busses each connecting to many CPUs, the most suitable viable CPU will switch on to take each available batch of data/instructions.
> >> >> >>
> >> >> >> FPGA manufacturers have decided that that's *not* the way to go. One
> >> >> >> "hot" driver and the whole thing is toast.
> >> >> >
> >> >> >I don't know what you mean by 'hot driver', but all sections of the chip are shut down if faulty, including busses.
> >> >>
> >> >> You have a bus that goes to every processor. If anyone on that bus
> >> >> grabs the bus, the entire chip is bad.
> >> >
> >> >Maybe you've not read the thread. That's not how it works.
> >>
> >> Then you can't just remove a bad processor from the system. It's I/O
> >> would be dead.
> >
> >I see you've missed a central concept of this.
> >
> >> >> >AFAIK there is no equivalent of this thing today, with lots of CPUs, lots of busses and multiples of everything else. Manufacturers are more into not using redundancy, selling the perfect ones and scrapping the bad. That works when your amount of silicon per device is small enough to avoid defects, it certainly doesn't work for whole wafer circuits, or in this case whole block circuits.
> >> >>
> >> >> There is a reason for that.
> >> >
> >> >Of course. The only present day app for lots of CPUs and huge amounts of silicon is supercomputers. They need top performance CPUs, something this proposed approach can not deliver.
> >>
> >> No, it's a (bad) pipe dream.
> >
> >Your imagined version of it is indeed.
>
> Then perhaps you should learn to write what you mean.

Everyone else followed it OK. You've missed some of the core concepts somewhere along the way.


NT

krw

unread,
Dec 13, 2016, 8:56:34 PM12/13/16
to
On Tue, 13 Dec 2016 17:22:11 -0800 (PST), tabb...@gmail.com wrote:

>On Wednesday, 14 December 2016 01:08:09 UTC, krw wrote:
>> On Tue, 13 Dec 2016 03:23:57 -0800 (PST), tabbypurr wrote:
>> >On Tuesday, 13 December 2016 01:40:24 UTC, krw wrote:
>> >> On Mon, 12 Dec 2016 16:33:07 -0800 (PST), tabbypurr wrote:
>> >> >On Tuesday, 13 December 2016 00:13:21 UTC, krw wrote:
>> >> >> On Mon, 12 Dec 2016 10:32:40 -0800 (PST), tabbypurr wrote:
>> >> >> >On Monday, 12 December 2016 17:43:44 UTC, krw wrote:
>> >> >> >> On Mon, 12 Dec 2016 05:02:58 -0800 (PST), tabbypurr wrote:
>> >> >> >> >On Monday, 12 December 2016 00:47:14 UTC, krw wrote:
>> >> >> >> >> On Sun, 11 Dec 2016 15:50:55 -0800 (PST), tabbypurr wrote:
>
>> >> >> >> >Yes. Which will be running a vast number of tasks in a handheld device. Physics prevents a low power THz CPU happening hence a massive number of cores/cpus is the viable option.
>> >> >> >>
>> >> >> >> I'm not buying what you're selling.
>> >> >> >
>> >> >> >You don't know what I'm selling
>> >> >>
>> >> >> I can read what you're writing.
>> >> >
>> >> >I don't doubt it.
>> >> >
>> >> >> I'm not buying. (sheesh)
>> >> >
>> >> >You still don't know what I'm selling. Go on then, what am I selling?
>> >>
>> >> Some cockamamie 3-D waferscale hypercube processor that's low enough
>> >> power to hold in your hand.
>> >
>> >Not even close.
>>
>> That's *exactly* what you've been describing.
>
>More or less, but I'm not selling it.

Oh, good grief. You certainly are selling the idea. I'm not buying.

tabb...@gmail.com

unread,
Dec 13, 2016, 9:16:04 PM12/13/16
to
On Thursday, 8 December 2016 15:46:03 UTC, Phil Hobbs wrote:
> On 12/08/2016 03:22 AM, Martin Brown wrote:
> > On 07/12/2016 19:01, tabbypurr wrote:
> >> On Wednesday, 7 December 2016 08:02:18 UTC, Martin Brown wrote:
> >>> On 06/12/2016 16:17, tabbypurr wrote:
> >>>> On Tuesday, 6 December 2016 14:07:32 UTC, Phil Hobbs wrote:
> >>>>> On 12/06/2016 06:16 AM, tabbypurr wrote:
> >>>>
> >>>>>> What do people think of the possibility in the distant future
> >>>>>> of transitioning to a solid 3d block silicon IC structure?
> >>>>>>
> >>>>>> Thermal management is the most obvious issue. Extremely low
> >>>>>> power circuitry exists, albeit slow. Coper rods could be
> >>>>>> included to improve heat transfer to the surface.
> >>>>>>
> >>>>>
> >>>>> The problem is yield. Bonding known-good dice to known-good
> >>>>> chips on wafers is barely acceptable, and iterating the process
> >>>>> to make real 3D is very hard.
> >>>>
> >>>> Yield could I expect be addressed with a self test routine that
> >>>> permanently disables all faulty blocks. Or where practical
> >>>> limits what they can do to what works.
> >>>
> >>> I vaguely recall a wafer scale system intending to do that which
> >>> sank without trace. Some of them tried quite hard!
> >>>
> >>> https://en.wikipedia.org/wiki/Wafer-scale_integration
> >>>
> >>> It seems easier to cut the parts up and run them at whatever speed
> >>> they are stable at than have to run at the speed of the worst good
> >>> one.
> >>
> >> There's no need to run all blocks at the same speed. With buffers
> >> everywhere each bit can run at its own max error free speed less a
> >> margin. But since the block has to be very low power density,
> >> sections will run far below their speed limits.
> >
> > One thing that would radically change things would be if there was no
> > longer a system clock and everything was allowed to free run
> > asynchronously. But then you get all sorts of potential nasty side
> > effects and race conditions - just like in software.
> >>
> >> For a large scale computer it makes sense in principle. No point
> >> chopping, packaging and reassembling it all on PCBs if you can let it
> >> self-test, lose bad sections and run as is.
> >
> > ISTR That was the reasoning at the time but it proved way too difficult
> > to implement and get anything like acceptable yield. Power dissipation
> > and obtaining good enough silicon process was I think the killer in
> > terms of commercial success. Anamartic was one player I knew in
> > Cambridge - making for the time large 40MB ram from 2x 6" wafers.
> >
> > http://www.computinghistory.org.uk/det/3043/Anamartic-Wafer-Scale-160MB-Solid-State-Disk/
>
> Trilogy was the big dog, but died for several reasons. A few:
>
> 1. You can't get signals in and out of a chip larger than about 22 mm
> square, because even with underfill to relieve the stress, differential
> thermal expansion rips the solder balls off the base level metal. That
> cripples the I/O bandwidth of anything larger.

Also, with a whole computer in one piece of silicon, there is no need to solder the chip to a PCB.


NT

tabb...@gmail.com

unread,
Dec 14, 2016, 6:57:18 AM12/14/16
to
On Sunday, 11 December 2016 12:38:48 UTC, Tom Gardner wrote:
> On 11/12/16 12:04, Phil Hobbs wrote:

> > And the utilization percentage of highly multicore processors is nearly always poor, because it makes it much harder to write correct, efficient programs.
>
> In some cases the program/algorithm/problem is dominated
> > by Amdahl's law.
>
> > But there are other important "embarrassingly parallel"
> programs and problems - telecom systems are an obvious
> example.

Gustafson's Law probably fits better here.
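The contrast between the two laws is easy to show numerically. Amdahl assumes a fixed problem size, so speedup saturates at 1/(1-p); Gustafson assumes the problem grows with the core count, so the scaled speedup keeps climbing. The parallel fraction p = 0.95 below is illustrative:

```python
# Amdahl's law:    S(n) = 1 / ((1 - p) + p / n)   -- fixed workload
# Gustafson's law: S(n) = (1 - p) + p * n         -- workload scales with n
# p is the parallel fraction of the work (0.95 chosen for illustration).

def amdahl(p, n):
    return 1.0 / ((1.0 - p) + p / n)

def gustafson(p, n):
    return (1.0 - p) + p * n

p = 0.95
for n in (10, 100, 1000):
    print(f"n={n:5d}  Amdahl={amdahl(p, n):7.2f}  "
          f"Gustafson={gustafson(p, n):8.1f}")
# Amdahl saturates near 1/(1-p) = 20; Gustafson keeps scaling with n.
```

For a handheld block running ever more independent apps and background tasks, the workload does grow with the hardware, which is why Gustafson's scaled-speedup picture fits this scenario better than Amdahl's fixed-workload one.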


NT