aeolus
>Seems to me that if all you talkers got together and shared the work
<snip>
One of the worst flavors of human beings is those who insist
that everybody else do the work.
/BAH
Subtract a hundred and four for e-mail.
I would like to learn more about this KA-10 FPGA project.
Is there a webpage that describes it?
Thanks,
Scott Baker
Being in Western Australia is a distinct disadvantage to being in the
US "HUB" of things, as I think you will agree. I am doing my best to
become familiar with CPLD design and may one day venture into FPGA
design. On the software side I would do more if time permitted but
there are only 24 hrs in a day. Information is far more scarce where I
am than where you gurus are, which makes software design very
difficult.
Regards,
Rolie Baldock.
aeolus
My point exactly. The first requirement is to find out how to drive
the various I/O devices, and that is the difficult part. If you have
the knowledge, it is just a hard slog to write the service routines.
Regards.
aeolus
Funny, Australia makes one of the best FPGA prototype boards
on the market: http://www.burched.com.au/index.html Australia
and the UK seem to be the only places where people still
do electronics as a hobby anymore.
> Being in Western Australia is a distinct disadvantage to being in the
> US "HUB" of things, as I think you will agree.
> Information is far more scarce where I
> am than where you gurus are which makes software design very
> difficult.
Well Switzerland is no better than Australia in that respect. Luckily
the WWW is WorldWide, so anyone anywhere can get the info.
> I am doing my best to
> become familiar with CPLD design and may one day venture into FPGA
> design.
Why not go straight to FPGAs?
CPLDs are sort of outdated. OK, they are good enough for simple glue
logic replacement.
> On the software side I would do more if time permitted but
> there are only 24 hrs in a day.
It is called time management. Split up those 7*24 hours per week so that,
after the necessary things (= sleep/rest, eating/clothing/house) are
taken care of, the hobbies get enough of the rest.
Job is the thing that usually gets too much time and can lose some.
For the last 4 years I have been working an 80% (4 days/week) sysadmin
job and will in May be changing to a new 40% (2 days/week) programming job.
Less time and less frustrating (so needing less rest to recover).
I actually require about 50..55% to make as much money as I spend, so I
can afford to live off the 4 years of over-earning for about 6..12 years.
I just had to learn how to exit the consumerism rat race (do I really need
that new thing?) to gain life quality and freedom in return. Definitely
recommended!
And with the extra time I will be able to start my sister project to
make my own open source FPGA toolset. And my new employer, whose
_management_ actively believes in open source software, is willing
to start paying towards extending that, once it becomes usable for
projects at work. So that will bring me back up to 60% before 5 years
are over.
--
Neil Franklin, ne...@franklin.ch.remove http://neil.franklin.ch/
Hacker, Unix Guru, El Eng HTL/BSc, Sysadmin, Archer, Roleplayer
- Intellectual Property is Intellectual Robbery
----------
In article <3c6d8f09...@news.ami.com.au>, ber...@ami.com.au wrote:
We do try very hard to be involved with the best of technology.
Regards,
Rolie Baldock.
aeolus
Regards,
Rolie Baldock.
On 17 Feb 2002 23:41:58 +0100, Neil Franklin <ne...@franklin.ch.remove>
wrote:
aeolus
Careful! The fact that CPLD's are often smaller than FPGA's does not
mean they are easier, or that they are a good learning tool for large
FPGA designs. The design issues are different, since the architectures
are different. Knowledge can be gained (especially if you're using
synthesis), but there are more direct paths to doing FPGA CPU's.
I understand you have uses for CPLD's, so this post isn't really for
you--it's more for others who might want to get into this stuff. It's
easier than one might think!
FPGA's are preferred for (hobby) CPU design for a couple reasons:
because they are generally bigger, faster, and cheaper than anything
comparable, and because simple pipelined RISC architectures tend to map
quite well to them (since FPGA's tend to have lots of registers and
limited routing resources). Links to many examples are at
http://www.fpgacpu.org. Also, FPGA's (and modern CPLD's) allow one to
use an iterative design approach, which is VERY helpful for debugging.
It is not necessary to design a CPU by hand-placing LUT's, as some
people do (?!? :-). Today's synthesis tools work quite well, once you
figure out how to tell them what you want. It's important, however, to
understand the architecture of the FPGA you're using, and how HDL code
maps to the FPGA architecture through your synthesis tool. The best way
to learn this stuff is to do it. Both Altera and Xilinx have free
(downloadable, at least) toolchains which support synthesis reasonably
well.
I think the best way for someone to get into CPU design at home is to
get a small proto board with an Altera or Xilinx part, and start using
the free tools the vendors provide to play around. First, build some
simple little button-sensing light-flashing things to get used to the
tools and the board. Then, design and build a simple little CPU (RISCish
for simplicity). It doesn't have to be pipelined, and it doesn't have to
be fast--it just has to work. Once it works, pipeline it. Then, jump
headfirst into a large, complex architecture with existing software
requirements, like, say, the PDP-10. :-)
I can't exactly say "it works for me!", because I'm still working on the
last two steps, but it's been good so far. :-)
jake
In Perth Western Australia I found it easier to deal with Lattice who
have really been a great help in getting me to the stage I'm at. The
Altera man whilst being mildly helpful could not supply the
documentation I needed at the time. Xilinx to the best of my knowledge
don't have a rep in this city. Hence my choice.
I would like to make in an FPGA a microcontroller such as an
upgraded 68HC11, because Motorola don't seem interested in making small
MCUs readily available in reasonable MOQs. But finding the time to
undertake such a project might be beyond my ability just now, as I have
existing power-electronics projects which are partially completed and I
would prefer to complete them first.
Rolie Baldock.
aeolus
ber...@ami.com.au writes:
>
> On Tue, 19 Feb 2002 07:42:32 GMT, Jacob Nelson
> <ja...@DeleteMe.jfet.net> wrote:
>
> >ber...@ami.com.au wrote:
> >> Neil old buddy it's called "walking before you run". I was lucky to
> >
> >Careful! The fact that CPLD's are often smaller than FPGA's does not
> >mean they are easier, or that they are a good learning tool for large
> >FPGA designs.
I fully agree to this one.
> > The design issues are different, since the architectures
> >are different.
Vastly different.
CPLDs have a fairly small number (usually 32 to 320, largest about 1000)
of 1-bit storage cells, each with a large wide-input (up to 50 inputs)
logic element in front of it, which takes inputs from many of the few
stored bits. They are best for decoders, complex state machines and
general "glue" logic.
FPGAs have a large array (16x8 to over 100x150) of 1-bit storage cells,
each with a small narrow-input (4 or 5 inputs) logic element in front of
it, which requires a programmable "circuit board" between the elements to
select which cells' states get delivered. Often the logic element can
be re-configured as a 16-bit storage element. They are best for
data paths, which any processor needs.
Consequently CPLD programming is mainly about logic formulas, while
FPGA programming is mainly about wiring up elements.
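The narrow-input FPGA logic element described here is essentially a
lookup table: a 4-input LUT stores a 16-entry truth table and evaluates
any 4-input Boolean function by indexing into it. A rough software
model (Python, purely illustrative; the helper names are made up):

```python
# Model of a 4-input FPGA LUT: any 4-input Boolean function is stored
# as a 16-entry truth table and evaluated by indexing into it.

def make_lut4(fn):
    """Precompute the truth table of a 4-input 1-bit function."""
    table = [fn((i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1)
             for i in range(16)]
    return lambda a, b, c, d: table[(a << 3) | (b << 2) | (c << 1) | d]

# Configure the LUT as the sum output of a 1-bit full adder
# (a XOR b XOR carry-in; the fourth input is unused here).
sum_lut = make_lut4(lambda a, b, cin, _unused: a ^ b ^ cin)

assert sum_lut(1, 1, 0, 0) == 0   # 1+1+0 -> sum bit 0, carry 1
assert sum_lut(1, 0, 1, 0) == 0   # 1+0+1 -> sum bit 0, carry 1
assert sum_lut(1, 0, 0, 0) == 1   # 1+0+0 -> sum bit 1
```

The "programmable circuit board" is then just the wiring that decides
which cells' outputs feed which LUT inputs.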
> > Knowledge can be gained (especially if you're using
> >synthesis), but there are more direct paths to doing FPGA CPU's.
If you are using synthesis, which is unquestionably sensible on CPLDs,
but very controversial on FPGAs.
> >comparable, and because simple pipelined RISC architectures tend to map
> >quite well to them (since FPGA's tend to have lots of registers and
Those 16-bit storage elements are particularly good, as you need only
36 of them to implement 16 36-bit registers. On a medium-size FPGA
that is only 1% of the space. For a same-cycle 1-write, n-read register
file take 2*n elements per 16-bit column.
That would blast away 576 CPLD elements, more than most entire chips.
So with CPLDs you need to do KA-10-style external in-memory registers.
No multi-access is possible.
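The sizing arithmetic above can be checked directly. A quick sketch,
assuming Xilinx-style logic elements that can each act as a 16x1-bit
RAM:

```python
# Register file sizing for a PDP-10-style machine, assuming each FPGA
# logic element can serve as a 16x1-bit RAM (Xilinx-style LUT RAM).
REGS, WIDTH = 16, 36              # 16 registers of 36 bits

# Single-ported: one 16x1 element per bit of register width.
single_port_elements = WIDTH
assert single_port_elements == 36

# 1-write, n-read ports: about 2*n elements per bit of width
# (each read port needs its own duplicated dual-ported copy).
def multiport_elements(n_read):
    return 2 * n_read * WIDTH

assert multiport_elements(2) == 144   # still only ~3% of 4704 elements

# In a CPLD every register bit costs a macrocell flip-flop:
assert REGS * WIDTH == 576            # more FFs than most entire CPLDs
```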
> >It is not necessary to design a CPU by hand-placing LUT's, as some
> >people do (?!? :-).
And as I do. So I am going to put in a vote for using this technique.
> > Today's synthesis tools work quite well, once you
> >figure out how to tell them what you want.
"once you figure out". Also known as "I know what I want, how do I get
this damn tool to comprehend it; computers should help, not get in the
way". Also called "rope pushing" in the business.
> > It's important, however, to
> >understand the architecture of the FPGA you're using, and how HDL code
> >maps to the FPGA architecture through your synthesis tool.
Or understand the architecture and then use that knowledge to partition
your logic into elements, configure chip elements and place individual
elements, just like partitioning, selecting and placing 74(LS)xx(x)
TTL parts.
IMHO that is a far better mental fit for hardware engineers, who think
in components and interconnections (as opposed to programmers or CPLD
logic formula designers, who think in code).
It is particularly because of this that I do not regard CPLDs as a
good introduction path to FPGAs.
> > The best way
> >to learn this stuff is to do it. Both Altera and Xilinx have free
> >(downloadable, at least) toolchains which support synthesis reasonably
> >well.
Fully agreed.
And Xilinx also has free low level hand-config&place tools (JBits).
Altera lacks these, which is the main reason why I chose Xilinx. Their
chips being faster at arithmetic (Altera is better at table-driven and
content-lookup stuff), and the tools working under Linux, also helped.
> >the free tools the vendors provide to play around. First, build some
> >simple little button-sensing light-flashing things to get used to the
> >tools and the board. Then, design and build a simple little CPU (RISCish
> >for simplicity). It doesn't have to be pipelined, and it doesn't have to
> >be fast--it just has to work. Once it works, pipeline it. Then, jump
> >headfirst into a large, complex architecture with existing software
> >requirements, like, say, the PDP-10. :-)
Or do it like I did, and jump into the deep end first. :-)
> In Perth Western Australia I found it easier to deal with Lattice who
> have really been a great help in getting me to the stage I'm at. The
> Altera man whilst being mildly helpful could not supply the
> documentation I needed at the time. Xilinx to the best of my knowledge
> don't have a rep in this city. Hence my choice.
Here in Winterthur Switzerland none have reps. I have had to rely 100%
on the net.
> I would like to make in an FPGA a microcontroller such as an
> upgraded 68HC11 because Motorola don't seem to be interested in small
> MCUs readilly available in reasonable MOQs.
Or try something RISCy. 16-register architectures with simple data
paths seem to fit FPGAs far better than single-accumulator designs
with many different instructions.
1) Building (essentially) highly-specialized MSI-scale devices; if a design
needs something like an octal register with a 4:1 mux on the front or a
little slice of a crossbar switch or a 1:8 address buffer for a big cache,
you can just make what you want.
2) Building fairly complex control logic. The big product term arrays
available in modern CPLDs can evaluate very complex functions very quickly;
FPGAs have trouble doing this because a lot of fairly random wiring is
involved, and FPGAs are a lot poorer at random wiring than their
manufacturers will usually admit.
And for what it's worth, even when doing FPGAs, I think in equations. My
concept sketches always look like chunks of physical layout with little
equations written in some of the boxes, and equations for control.
While I am no EE, I have worked with many, mostly in administering
their tools on UNIX systems.
The above statement is a truism... gleaned from EEs using Xilinx,
Altera, Lattice, take your pick.
I like to look at it like "CPLD = Garbage In/Garbage Out (GIGO)". An
FPGA is capable of a lot of manipulation in between. If you choose to make
it garbage, so be it. But, you have a lot of room in an FPGA to make
sure it's not garbage :)
CPLD's (to me) are glorified PAL's...
Of course, here comes the caveat: I have no idea what I'm talking about,
but I think the above may tweak a few people into commenting. Please DO!
aak
Well, not speaking for myself, but some still are....
--
Huw Davies | e-mail: Huw.D...@kerberos.davies.net.au
| "If God had wanted soccer played in the
| air, the sky would be painted green"
I think JMF enjoyed the Australian DECUS the most. That
trip allowed him to literally travel around the world.
PLDs: everything is in simple sum-of-products form with few inputs (8-20)
and outputs (4-12). Generally good for glue logic, address decoders, and
(small) state machines. These tend to have consistent timing, as it is
two-level logic...
CPLDs: usually several PLD cores on a chip with additional interchip
routing. They have access to more inputs and outputs, but now there are
routing constraints... Handling timing is a little more difficult, as
there is the delay in the interconnect as well as the (still two-level)
delay in the PLD-like subblocks... Can handle more of the same stuff as
PLDs, i.e., larger state machines, etc.
CPLDs are not as good for arithmetic elements, in general, and for complex
data paths. For PLDs to handle these things, the design generally has to
be partitioned across several devices, giving basically the same thing as
a CPLD does in one chip...
FPGAs: several small logic elements placed in an array on a chip with
routing resources between the cells... Can handle complex data paths;
better for arithmetic elements, especially in the FPGAs that include
special support for them in the cells (i.e., fast carry chains,
multiplier cells, etc.). Timing is a more significant problem, as delay
introduced by routing can dominate cell delay...
Timing in PLDs is relatively simple -- if you've done a good job of
simplifying the equations, then either you meet timing, use a faster
part, or it just won't work. CPLDs are more complex, in that you can try
to meet timing by avoiding the interblock routing, by placing function
inputs on the same block as the function output. FPGA timing is an
important issue, and to a great extent you are at the mercy of the place
and route tool... You can use tools to hand-route sections or the whole
design, but it's pretty time consuming...
Another factor is density... In PLDs and CPLDs a lot of area is taken up
by the "routing" arrays, where the routing resources on an FPGA tend to
take less corresponding area on the die, allowing more room for the
logic cells, hence more logic can be packed into a device.
Another advantage that FPGAs have is that some of the cells can be
configured as memory, often dual-port memory (good for register arrays)
and ROM (good for microcode storage), where these types of functions
done in CPLDs consume registers, which tend to be limited resources...
Having used PLDs, CPLDs, and FPGAs, I would say that an FPGA is a pretty
obvious choice to me for a processor. Of course, it may very well take
more than one...
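The two-level sum-of-products structure that gives PLDs their
consistent timing can be modelled compactly: each output is an OR of
AND product terms over true or inverted inputs. A small illustrative
Python sketch (the mask/value encoding is just one convenient
representation, not any vendor's format):

```python
# Two-level sum-of-products, as in a PLD: an output is the OR of AND
# product terms over true/inverted inputs. Each product term here is a
# (mask, value) pair: the term matches when inputs & mask == value.

def sop(terms):
    """Build a sum-of-products function from (mask, value) terms."""
    return lambda inputs: any((inputs & m) == v for m, v in terms)

# Example: a chip-select decoder for addresses 0x8000-0xBFFF on a
# 16-bit bus: A15=1, A14=0, all lower address bits don't-care.
chip_select = sop([(0xC000, 0x8000)])

assert chip_select(0x8123) is True
assert chip_select(0xBFFF) is True
assert chip_select(0x4000) is False   # A15=0: outside the window
```

Whatever the input pattern, evaluation is always one AND level and one
OR level deep -- which is exactly why PLD timing is so predictable.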
Bill McDermith
Additional comments below...
"Arthur Krewat" <kre...@bartek.dontspamme.net> wrote in message
news:3C75D1CB...@bartek.dontspamme.net...
> Neil Franklin wrote:
> >
...snip...snip...snip...
> I like to look at it like "CPLD = Garbage In/Garbage Out (GIGO)". An
> FPGA is capable of a lot of manipulation inbetween. It you choose to make
> it garbage, so be it. But, you have a lot of room in an FPGA to make
> sure it's not garbage :)
I'm not sure what you mean here... In a sense, if you feed the same
design, say a VHDL file, to a tool and target either one, if the design
is garbage, it will be equally bad in either, and if it's not garbage,
the design will work equally well in either... It's really a matter of
using the part for what it does best -- for a small state machine an
FPGA isn't worth the effort, especially if the state machine has to be
really fast -- it's just easier to use a PLD or CPLD. On the other hand,
to implement a processor it's not worth taking the time to try to pack
it into a bunch of CPLDs, when it will fit better into one or two
FPGAs...
Another factor is whether it is an all-in-one or a multi-chip design.
With something like the PDP-10 I would rather do a multi-chip design,
as routing can get rather complex over 16 bits of data path.
Right now I am using RAM-based FPGAs, but I rather like anti-fuse
or EEPROM-based logic, because having the chip pre-programmed
makes more sense to me, as there is less logic on your PCB. Also, big
FPGAs don't come in easy-to-use packages.
> Having used PLDs, CPLDs, and FPGAs, I would say that an FPGA is a pretty
> obvious
> choice to me for a processor. Of course, it may very well take more than
> one...
True, but logic design like a data path or random logic takes up more
resources than you think. I like schematic entry because I can tweak a
design better. :)
Bigger, even. The Xilinx Virtex-II XC2V6000 (not even the biggest!) has
96x88 CLB's, each of which contains *8* of these table/flop
combinations. In addition, it has dedicated hardware multipliers, and
about 2.5Mbits of dual-port RAM (in addition to what you get when you
use the lookup tables as RAM). Now, the cost might be a bit prohibitive
for a hobbyist, but for processor design (specialty or otherwise), they
are very powerful.
>> > > Today's synthesis tools work quite well, once you
> > >figure out how to tell them what you want.
>
> "once you figure out". Also known as "I know what I want, how do I get
> this damn tool to comprehend it; computers should help, not get in the
> way". Also called "rope pushing" in the business.
Hah! Yes, lots of rope pushing is needed. But once you figure out where
the rope likes to go, you can guide it pretty well.
> > > It's important, however, to
> > >understand the architecture of the FPGA you're using, and how HDL code
> > >maps to the FPGA architecture through your synthesis tool.
>
> Or understand the architecture and then use that knowlege to partition
> your logic into elements, configure chip elements and place individual
> elements, just like partitioning, selecting and placing 74(LS)xx(x)
> TTL parts.
>
> IMHO that is a far better mental fit for hardware engineers, who think
> in components and interconnections (as opposed to programmers or CPLD
> logic formula designers, who think in code).
I agree entirely, and I think this is a good use for HDL's. When I write
HDL code, I am thinking in terms of components and their connections.
Components might be registers and muxes, or they might be
adder/subtractors and state machines. They would not, in general, be
more complex than that (at the bottom of a hierarchical design). It is
not at the hardware level, which is bad and good: bad, in that I'm not
always sure how many levels of logic something may synthesize to, or how
far apart two components will be; but also good, in that I can work on a
large design and not need to know exactly where each of those 60,000
LUTs goes or what specific function each performs. The tools can do a
lot of what's needed to make the design meet timing.
It is possible to write arbitrary HDL code, with no thought given to the
hardware at all. It is very unlikely that this will actually produce
useful hardware. :-) Synthesis tools have gotten much better in recent
years, but they are definitely not *that* good. Maybe one day. I'm not
waiting, though.
Working very near the hardware level (with Jbits or with schematics) is
also a valid way to do FPGA design. I have seen designs where the only
way to make it work was to hand-place each LUT. One does what one has to
to achieve design goals. I support any design technique that produces
the results the designer intends. :-)
jake
> My perspective is based on building synthesis tools for programmable
> logic...
I have the impression that was quite a while (= over 5 years in this
business) ago.
> CPLDs are not as good for arithmetic elements, in general, and for complex
> data paths. For PLDs to handle these things, the design generally has to be
> partitioned across several devices
Thus making a group of CPLDs into a group of bit-slice chips, like
using 2901s. And then another CPLD for the control section.
> FPGA: Several small logic elements placed in an array on a chip with
> routing resources
> between the cells... Can handle complex data paths, better for
> arithmetic elements,
> especially in the FPGAs that include special support for
> arithmetic elements
> in cells (i.e., fast carry chains, multiplier cells, etc.)
Which both Xilinx and Altera offer these days.
> Another factor is density... In PLDs and CPLDs a lot of area is taken up by
> the "routing"
> arrays, where the routing resources on an FPGA tend to take less
> corresponding area
> on the die, allowing more room for the logic cells, hence more logic can be
> packed in
> a device.
Hmm. Methinks you got that exactly the wrong way round. PLDs/PALs and
CPLDs have no routing at all. FPGAs have routing. Here is a bit of size
calculation:
PLDs/PALs are dominated (about 99% of size) by their product term
evaluation logic (the big fuse-AND-OR array). That has a size of
2*(i+o)*(8..13)*o bits, where i=input-only pins, o=in/out/FF pins, and
(8..13) product terms per FF. So it grows with the square of the number
of FFs. So:
PAL16R8 with 8 FFs: o=8 i=8 terms=8 = 2048 bits
PAL22V10 with 10 FFs: o=10 i=12 terms=13 = 5720 bits
ATV2500B with 48 FFs: o=48 i=36 terms=8.5 = 68544 bits
hypothetical 4704 FFs: o=4704 i=0 terms=8 = 354'041'856 bits (!)
As you see, that large (= XC2S200 FPGA size) is way out of range. It
would actually take about 3*354 = ~1000 million transistors, not
manufacturable today!
CPLDs are "several PLDs on one chip" (good description, that one!).
Each sub-PLD is constant size, o=16..20 and i+o=32..50 and 5..6 terms,
so 2*(32..50)*5 = ~400 bits/FF, so product term space grows only
linearly with FFs. But the interconnect between the sub-PLDs still grows
quadratically, usually subs*2*o*subs*(i+o)/(4..8). The 4..8 is the
"sparsity" factor of the interconnect (not every output/FF can go to
any input). So:
small CPLD 2*16 FFs: o=16 i+o=32 terms=5 subs=2 sparse=4
5120*2 + 1024 = 11264 bits
large CPLD 16*20 FFs: o=20 i+o=50 terms=6 subs=16 sparse=6
12000*16 + 85333 = 277333 bits
hypothetical 4704 FFs: o=20 i+o=50 terms=6 subs=235 sparse=6
12000*235 + 18408333 = 21'228'333 bits
As you see, our XC2S200-size CPLD is only 1/15 of the PAL size, but
still 3*21 = ~63 million transistors. Just about at the edge of today's
manufacturability.
FPGAs are space-dominated by their routing, with the logic elements
as little islands within the big expanse of wiring (the "programmable
circuit board" I referred to). That only grows linearly with the number
of FFs, as the routing is all local segments that get connected
together.
small FPGA XC2S15 8x12x4=384 FFs: 8x12x864 = 82944 bits
medium FPGA XC2S200 28x42x4=4704 FFs: 28x42x864 = 1016064 bits
large FPGA XCV3200 104x156x4=64896 FFs: 104x156x864 = 14017536 bits
So the small XC2S15 has about 4/3 the bits of the ATV2500B, but 8 times
more FFs, and 1/3 of the large CPLD for about the same FFs. Our XC2S200
is already 1/20 of CPLD size, and at 7*1 = ~7 million transistors it
translates to a $50 price. And even the 14-times-bigger XCV3200 monster
is only 2/3 of that CPLD, and wants 7*14 = ~98 million transistors,
which cost $3000.
So FPGAs have more logic, because their routing per element/FF is a
lot smaller than CPLDs' product terms + interconnect per element/FF, as
soon as you go to a large enough count of FFs. Then the quadratic growth
of the interconnect kills CPLDs off.
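Neil's growth arithmetic above can be replayed mechanically. A Python
sketch using the same formulas and example devices (the PAL22V10 figure
works out at about 13 product terms per FF on average):

```python
# Replaying the size formulas above, in bits of configuration/fuse data.

def pal_bits(i, o, terms):
    # PAL/PLD: one big AND-OR array; 2*(i+o) array columns (true and
    # inverted), 'terms' product terms per FF, o FFs.
    return int(2 * (i + o) * terms * o)

assert pal_bits(i=8,  o=8,  terms=8)   == 2048          # PAL16R8
assert pal_bits(i=12, o=10, terms=13)  == 5720          # PAL22V10
assert pal_bits(i=36, o=48, terms=8.5) == 68544         # ATV2500B
assert pal_bits(i=0, o=4704, terms=8)  == 354_041_856   # hypothetical

def cpld_bits(o, io, terms, subs, sparse):
    # CPLD: 'subs' constant-size sub-PLDs, plus a sparse interconnect
    # matrix that still grows with the square of the sub-PLD count.
    per_sub = 2 * io * terms * o
    interconnect = subs * 2 * o * subs * io // sparse
    return per_sub * subs + interconnect

assert cpld_bits(o=16, io=32, terms=5, subs=2,   sparse=4) == 11_264
assert cpld_bits(o=20, io=50, terms=6, subs=16,  sparse=6) == 277_333
assert cpld_bits(o=20, io=50, terms=6, subs=235, sparse=6) == 21_228_333

# FPGA config grows only linearly with the CLB array:
assert 8 * 12 * 864    == 82_944       # XC2S15
assert 28 * 42 * 864   == 1_016_064    # XC2S200
assert 104 * 156 * 864 == 14_017_536   # XCV3200
```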
> choice to me for a processor. Of course, it may very well take more than
> one...
Not with today's large FPGAs! An XC2S200 is a 56x84 array of
element/FFs, good enough for a PDP-10. And the XCV3200E is 208x312 FFs,
i.e. even 14 times more space than needed! We _are_ spoilt these days.
> Another factor is is it a 'All in one 'or a multi chip design.
> With something like the PDP-10 I would rather do it multi-chip design
> as routing can get rather complex over 16 bits of data path.
Why that? I see no complexity increase, as one basically has 36
identical "slices": bit width in the up/down direction and circuit
complexity in the left/right direction of the chip, and control logic
"on top" of the data path.
> Right now I am using ram based FPGA's but I rather like anti-fuse
> or EEPROM based logic because the idea of having the chip pre-programmed
> makes more sense for me as less logic on your PCB.
That is the main problem with FPGAs: CPLDs are immediately working at
powerup. FPGAs need to first be "booted". That requires extra external
circuits to boot from.
Hmm, a bit like a KL needing its PDP-11 for microcode loading.
Actually, 8051 microcontrollers or CPLDs are often used for booting.
> Also big FPGA's
> don't come in easy to use packages.
Big chip wants many pins. :-)
But a lot of timing has to be designed in in the first place.
Most of my tweaking is with a 97% full FPGA when I want to try one more
feature. (A 2-year-old FPGA proto board that I have only has 576 macro
cells and a small block RAM -- none of this dual-port stuff.) My biggest
problem is timing, as I am using old I/O chips and 74LSxx for decoding,
and running too fast could be a problem. A 500 ns memory cycle is fine,
but I am trying for a 410 ns memory cycle (9.83 MHz) and I am not sure
if things will work.
> It is possible to write arbitrary HDL code, with no thought given to the
> hardware at all. It is very unlikely that this will actually produce
> useful hardware. :-) Synthesis tools have gotten much better in recent
> years, but they are definitely not *that* good. Maybe one day. I'm not
> waiting, though.
The hard part is not using too many device-vendor-specific features,
in case you have to change product.
> Working very near the hardware level (with Jbits or with schematics) is
> also a valid way to do FPGA design. I have seen designs where the only
> way to make it work was to hand-place each LUT. One does what one has to
> to achieve design goals. I support any design technique that produces
> the results the designer intends. :-)
When I think of working at the hardware level, it is gate design using
discrete transistors, resistors and diodes. I must be a young chap here,
as you notice I did not say tubes. :)
>
> jake
> Big chip wants many pins. :-)
I want useable pins, not 25% for power, ground and config.
I think the pin count is more a factor of die size; you need a
bigger chip to fit everything, so you have room for lots
of pins, but too big a chip for a few pins like an 84-pin PLCC.
Also the anti-fuse chips tend to be FFs, latches and gates;
fewer products with fast carry or dual-port memory.
> Neil Franklin wrote:
>
> > Big chip wants many pins. :-)
>
> I want useable pins, not 25% for power, ground and config.
Big chip and high clock speed result in potentially lots of power
consumption. Only lots of power pins can beat the supply line
inductance.
Ever counted the power pins on a modern CPU with >10A current?
Intel jumped the Pentium 4 from 420 to 480 pins, making it incompatible
with existing motherboards, just to get more power in. Yes, that is 60
additional power pins!
> I think the pins is more a factor of die size, you need a
> bigger chip to fit every thing,
Die size just allows more logic. That then brings more IO and power
consumption with it.
> of pins, but too big a chip for a few pins like a 84 PLCC.
Those 84, after subtracting power, would result in about 40-50 usable
for IO, and that would do the possible designs no justice. I once looked
at a board with a PLCC84 chip and gave it up after seeing the IO count.
Even the 140 IOs of the TQFP208 I am aiming for are really limiting. I
can give up all ideas about a 2*36+ECC data bus. The 260 IOs of a
BGA352 package look really tempting - if I could just process BGAs.
> Also the aniti-fuse chips tend to be FF's, latches and gates,
> fewer products with fast carry or dual port memory.
Yes. The anti-fuse guys seem to prefer fine-grain FPGAs, as opposed
to the coarse-grain designs of the RAM-based chips. Do a Google search
for "XC6216 data sheet" for the failed attempt of Xilinx to make a
fine-grain RAM-based FPGA.
> Ever counted the power pins on an modern CPU with >10A current?
> Intel jumped the Pentium4 from 420 to 480 pins, making it incompatible
> with existing motherboards, just to get more power in. Yes, that is 60
> additional power pins!
Sure, why not ... opens a PC ... nope, home-brew computer, 40-pin DIP**.
OK, next PC ... hmm, 486. Try my other PC ... Pentium Classic :)
** The CPU part of the FPGA uses 38 pins of I/O. About 6 more pins for
the on-board UART and glue logic.
> Even the 140 IOs of the TQFP208 I am aiming for is really limiting. I
> can give up all ideas about an 2*36+ECC data bus. Them 260 IOs of an
> BGA352 package look really tempting - if I could just process BGAs.
Why a 72 bit data bus?
Well, they have BGA sockets at about $200 each. Considering you are
paying really big $$$ for the chips, that is small change. The thing is
I have not seen TQFP sockets, not that I have looked real hard.
> Yes. The anti-fuse guys seem to prefer fine-grain FPGAs, as opposed
> to the coarse-grain designs of the RAM based chips. Do an google for
> "XC6216 data sheet" for the failled attempt of Xilinx to make an
> fine-grain RAM based FPGA.
I don't care if it is coarse or fine grained as long as it fits.
> Neil Franklin wrote:
>
> > Ever counted the power pins on an modern CPU with >10A current?
> > Intel jumped the Pentium4 from 420 to 480 pins, making it incompatible
> > with existing motherboards, just to get more power in. Yes, that is 60
> > additional power pins!
>
> Sure why not ... Open's a PC ... nope home brew computer 40 pin dip**.
> Ok next PC ... hmm 486 , try my other pc ... Pentium Classic :)
Oh, small ones :-).
I am typing this on an AMD K6-2 at 350MHz. I consider that a nice speed.
At my new job in May I will have to get used to an over-1GHz processor.
> > Even the 140 IOs of the TQFP208 I am aiming for is really limiting. I
> > can give up all ideas about an 2*36+ECC data bus. Them 260 IOs of an
> > BGA352 package look really tempting - if I could just process BGAs.
>
> Why a 72 bit data buss?
Data throughput. Pumping bits. Loading the fast cache from slow main
memory. Spare capacity for hi-res/hi-color (say 1152x864x18bit@60Hz)
from-main-memory video display.
The SGI O2 (200MHz R5000 processor) I work on in the office has 128-bit
memory, the SGI Onyx2 (4 times 300MHz R12000) has 2 512-bit memories,
one for each pair of processors.
And back on topic: the KL-10 also had 4*36-bit memory. Actually I would
really like 144 bits, but the pins for that are totally out of range,
unless I could just use a BGA package...
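As a quick sanity check on that video figure (a sketch that ignores
blanking intervals, so it is a lower bound on the real requirement):

```python
# Bandwidth of a 1152x864, 18-bit, 60 Hz display stream, ignoring
# blanking (so a lower bound on the real memory traffic needed).
pixels_per_sec = 1152 * 864 * 60
bits_per_sec = pixels_per_sec * 18
mbytes_per_sec = bits_per_sec / 8 / 1e6
assert round(mbytes_per_sec) == 134   # ~134 MB/s just for refresh

# On a 72-bit (9-byte) bus, refresh alone needs ~15M transfers/s,
# before the CPU gets any cycles -- hence the appetite for wide buses.
refresh_mhz = mbytes_per_sec / (72 / 8)
assert round(refresh_mhz) == 15
```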
> Well they have BGA sockets at about $200 each.
They have a bad reputation for contact trouble.
> really big $$$ for the chips that is small change. The thing is I have
> not
> seen TQFP sockets, not that I have looked real hard.
I have seen some, but they are also TQFP on the soldering side. OK for
swapping anti-fuse FPGAs, but useless for me.