On Thursday, 8 December 2016 15:46:03 UTC, Phil Hobbs wrote:
> On 12/08/2016 03:22 AM, Martin Brown wrote:
> > On 07/12/2016 19:01, tabbypurr wrote:
> >> On Wednesday, 7 December 2016 08:02:18 UTC, Martin Brown wrote:
> >>> On 06/12/2016 16:17, tabbypurr wrote:
> >>>> On Tuesday, 6 December 2016 14:07:32 UTC, Phil Hobbs wrote:
> >>>>> On 12/06/2016 06:16 AM, tabbypurr wrote:
> >>>>
> >>>>>> What do people think of the possibility in the distant future
> >>>>>> of transitioning to a solid 3d block silicon IC structure?
> >>>>>>
> >>>>>> Thermal management is the most obvious issue. Extremely low
> >>>>>> power circuitry exists, albeit slow. Copper rods could be
> >>>>>> included to improve heat transfer to the surface.
> >>>>>>
> >>>>>
> >>>>> The problem is yield. Bonding known-good dice to known-good
> >>>>> chips on wafers is barely acceptable, and iterating the process
> >>>>> to make real 3D is very hard.
> >>>>
> >>>> Yield could, I expect, be addressed with a self-test routine that
> >>>> permanently disables all faulty blocks, or, where practical,
> >>>> limits what they can do to what still works.
> >>>
> >>> I vaguely recall a wafer scale system intending to do that which
> >>> sank without trace. Some of them tried quite hard!
> >>>
> >>>
> >>> https://en.wikipedia.org/wiki/Wafer-scale_integration
> >>>
> >>> It seems easier to cut the parts up and run them at whatever speed
> >>> they are stable at than have to run at the speed of the worst good
> >>> one.
> >>
> >> There's no need to run all blocks at the same speed. With buffers
> >> everywhere, each bit can run at its own max error-free speed, less a
> >> margin. But since the block has to be very low power density,
> >> sections will run far below their speed limits.
> >
> > One thing that would radically change things would be if there was no
> > longer a system clock and everything was allowed to free run
> > asynchronously. But then you get all sorts of potential nasty side
> > effects and race conditions - just like in software.
> >>
> >> For a large scale computer it makes sense in principle. No point
> >> chopping, packaging and reassembling it all on PCBs if you can let it
> >> self-test, lose bad sections and run as is.
> >
> > ISTR that was the reasoning at the time, but it proved way too difficult
> > to implement and get anything like acceptable yield. Power dissipation
> > and obtaining a good enough silicon process were, I think, the killers in
> > terms of commercial success. Anamartic was one player I knew in
> > Cambridge - making a, for the time, large 40 MB RAM from 2x 6" wafers.
> >
> >
> > http://www.computinghistory.org.uk/det/3043/Anamartic-Wafer-Scale-160MB-Solid-State-Disk/
>
> Trilogy was the big dog, but died for several reasons. A few:
>
> 1. You can't get signals in and out of a chip larger than about 22 mm
> square, because even with underfill to relieve the stress, differential
> thermal expansion rips the solder balls off the base level metal. That
> cripples the I/O bandwidth of anything larger.
Also, with a whole computer in one piece of silicon, there is no need to solder the chip to a PCB.
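The block-disabling idea upthread could be sketched roughly like this: model the
wafer as a grid of blocks, run a built-in self-test on each at power-up, and only
dispatch work to blocks that pass. Everything here is invented for illustration
(grid size, defect rate, the BIST is just simulated randomness) - a toy model, not
a real wafer-mapping scheme:

```python
import random

random.seed(42)  # reproducible demo

ROWS, COLS = 8, 8          # 64 blocks on the imaginary wafer
DEFECT_RATE = 0.15         # assumed probability a given block is faulty

def self_test(row, col):
    """Stand-in for a block's built-in self-test (BIST).
    Here defects are simply simulated at random."""
    return random.random() > DEFECT_RATE

# Build the good-block map once at power-up; blocks that fail are
# permanently excluded from the pool, as suggested upthread.
good_map = [[self_test(r, c) for c in range(COLS)] for r in range(ROWS)]
usable = [(r, c) for r in range(ROWS) for c in range(COLS) if good_map[r][c]]

print(f"{len(usable)} of {ROWS * COLS} blocks usable")

def dispatch(task_id):
    """Route work only to blocks that passed self-test."""
    r, c = usable[task_id % len(usable)]
    return (r, c)
```

The same bookkeeping would let the yield curve degrade gracefully: a wafer with
a handful of dead blocks still ships, just with a smaller usable pool.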
NT