
Rabbit Semiconductor


Darrell Flenniken

Feb 4, 2000
Any users of the Rabbit Z80 "compatible" microcontroller? Feedback ??
How about Dynamic 'C' ?

Darrell

Steve Holle

Feb 4, 2000
I've used Dynamic C in past projects and was pleased with the
environment. It's no Visual C++ but it's entirely adequate for
embedded applications. The language has some simple multi-tasking
extensions, and in one of their advertisements I saw they were porting
uC/OS. I picked up the Rabbit development system when they were
offering the $99 deal. It's an impressive embedded processor.
Previous systems supporting Dynamic C needed a kernel PROM running
before you could upload a program. It appears that the Rabbit has an
uploader in microcode. The demo board has flash that can be the
permanent program storage.

Steve

Feb 4, 2000, to Darrell Flenniken
Darrell Flenniken wrote:
>
> Any users of the Rabbit Z80 "compatible" microcontroller? Feedback ??
> How about Dynamic 'C' ?
>
> Darrell
-----------------------
Looks like somebody is grabbing the new Zilog Z80 integrated
products, renaming them, and reselling the old Z80 shit with them.
Scummy idea, but if you're too stupid to hunt up that old stuff it might
be worthwhile if they don't gouge you out the ass for the same stuff a
lot of us have lying around in basements. www.rabbitmicro.com
Hahahahahahah!
-Steve
--
-Steve Walz rst...@armory.com ftp://ftp.armory.com:/pub/user/rstevew
-Electronics Site!! 1000 Files/50 Dirs!! http://www.armory.com/~rstevew
Europe Naples Italy: http://ftp.unina.it/pub/electronics/ftp.armory.com

Chanaka Gurusinghe

Feb 4, 2000
Did anyone find a GOOD tutorial on the flood fill algorithm? Thank you, guys.

regards

Chanaka

Robert Posey

Feb 4, 2000

Steve wrote:
>
> Darrell Flenniken wrote:
> >
> > Any users of the Rabbit Z80 "compatible" microcontroller? Feedback ??
> > How about Dynamic 'C' ?
> >
> > Darrell
> -----------------------
> Looks like somebody is grabbing the new Zilog Z80 integrated
> products, renaming them, and reselling the old Z80 shit with them.
> Scummy idea, but if you're too stupid to hunt up that old stuff it might
> be worthwhile if they don't gouge you out the ass for the same stuff a
> lot of us have lying around in basements. www.rabbitmicro.com
> Hahahahahahah!

Wouldn't any rights on a Z80 be expired anyway, and thus it's perfectly
all right to copy one outright? While the base compiler may be a straight lift,
they may have spent a lot of effort on the IDE, which is of great importance
to workability.

Muddy

Monte Dalrymple

Feb 4, 2000
The Rabbit is not a "copy" of anything. It happens to use the Z80
architecture and a slightly modified Z80 instruction set. A casual
reading of the first page of the spec reveals that it is a 2-clock-cycle
machine. The Z80 is a 4-clock-cycle machine, and the newer Z180
is a 3-clock-cycle machine. None of the "z80-integrated" products
have the peripheral set of the Rabbit.

Monte Dalrymple


Dave White

Feb 4, 2000
I'm currently evaluating the Rabbit for a new project. So far I've been
very impressed with the chip - the on-chip peripherals cover most of the
generic I/O that I need. The Dynamic C package is pretty impressive -
particularly the real-time debugging of the Rabbit while in circuit - no
need for an ICE.

Definitely worth a look.


--
Dave White
SpectraChrom Software
www.spectrachrom.com

Darrell Flenniken <deflen...@home.com> wrote in message
news:5yvm4.31090$eC2.3...@news1.alsv1.occa.home.com...

Mike Baptiste

Feb 5, 2000
Same here. Our main Home Automation Controller was a Z180 and now uses an
S180. Problem is, all the glue logic needed on the expansion boards makes
the boards draw a lot more power than they need to and also makes them more
expensive to build. This new chip would simplify the hardware of our system
considerably. Plus the FLASH bootstrap is something I want in the next
version of our board anyway.

I'll be giving this chip a close look. If any of you get Circuit Cellar,
the design methods used for the Rabbit were covered in the Sept & Oct 1999
issues.

Of course, the ez80 looks interesting too. But I'll probably offload TCP/IP
to a slave device like the Dallas TINI or something.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Mike Baptiste bapt...@cc-concepts.com
Creative Control Concepts http://www.cc-concepts.com/
** Home Automation Products for the Serious Enthusiast **
Check out the new HA FAQ http://63.67.172.146/cgi-bin/fom
See Our Home Renovation http://63.67.172.146:8002/remodel
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
"Dave White" <spec...@thebestisp.com> wrote in message
news:RSLm4.1356$8p3.1...@ptah.visi.com...

Bill Sturm

Feb 5, 2000
I just bought one of their development kits. Should be lots of fun
for 100 bucks. I like the Rabbit chip because it has four serial ports
built in. I haven't seen that many ports on any other micros, especially
not ones that are so easy to use.

Bill Sturm

Steven Ames

Feb 7, 2000
I used Dynamic 'C' on a Z-World "Little Star"
http://www.zworld.com/pk2200.html for a project and found it workable. The
language has some multitasking extensions that could be useful for the right
job. It takes a bit of getting used to, and I do NOT consider its debug
environment a replacement/substitute for an emulator. I can say the tech
guys at Z-World were VERY helpful while trying to squeeze out the performance
needed to read a high-speed quadrature encoder with a couple of input bits.

$99 for a development kit sounds like a deal. Any idea what the cost is for
the chip in quantity? Are these Rabbit folks going to still be around 5
years from now, or are they going to bounce in and bounce out?

Steven Ames

Uwe Bonnes

Feb 8, 2000
Steven Ames <ste...@apisindustries.com> wrote:
: I used Dynamic 'C' on a Z-World "Little Star"

: http://www.zworld.com/pk2200.html for a project and found it workable. The
: language has some multitasking extensions that could be useful for the right
: job. It takes a bit of getting used to, and I do NOT consider its debug
: environment a replacement/substitute for an emulator. I can say the tech
: guys at Z-World were VERY helpful while trying to squeeze out the performance
: needed to read a high-speed quadrature encoder with a couple of input bits.

: $99 for a development kit sounds like a deal. Any idea what the cost is for
: the chip in quantity? Are these Rabbit folks going to still be around 5
: years from now, or are they going to bounce in and bounce out?

"Any forecast is difficult, especially those concerning the future"

--
Uwe Bonnes b...@elektron.ikp.physik.tu-darmstadt.de

Institut fuer Kernphysik Schlossgartenstrasse 9 64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

tauy...@ieee.org

Feb 8, 2000
First of all, the Rabbit 2000 is not 100% compatible with the Z180, and
it is not a microcontroller (strictly speaking, since it requires
external RAM). One can argue that breaking compatibility is necessary in
order to make it better optimized than the original Z180. Nonetheless,
the assembly instructions are not identical to the Z180's despite a
somewhat large overlap. Do *not* even think about binary code
compatibility!

Of course, if you are using C and rely on vendor-provided drivers for
peripherals (such as serial ports, etc.), whether it is 100%
compatible is moot. On the other hand, if you are planning to develop
your own board based on the MPU, you most likely need to deal with the
architecture. In that case, it may be wise to study the manuals first
(available on-line). Note that the timers, serial ports, parallel ports
and even the interrupt architecture are different from the Z180 family
MPUs.

The Dynamic C language overlaps with ANSI C. I cannot quantify the
importance of the differences, as that depends on the application and
programming style. The built-in software-based debugger is quite handy
for reproducible bugs that are not timing dependent.

Different can be better or worse. For $99, the evaluation kit is quite
inexpensive. Anyone who is seriously thinking about using the Rabbit
2000 MPU should take it for a test spin and compare this MPU
(and the development platform) against others.

--Tak

In article <5yvm4.31090$eC2.3...@news1.alsv1.occa.home.com>,
"Darrell Flenniken" <deflen...@home.com> wrote:
> Any users of the Rabbit Z80 "compatible" microcontroller? Feedback ??
> How about Dynamic 'C' ?
>
> Darrell



Norman Rogers

Feb 9, 2000
There are some new benchmarks up on the Rabbit Semiconductor site:

http://www.rabbitsemiconductor.com/benchmark.html

The benchmarks cover mainly floating point performance and show that the
Rabbit is way faster than the 188, 8051, Z180, etc.

Norman Rogers

Jon Kirwan

Feb 9, 2000
On Wed, 09 Feb 2000 21:33:12 -0800, Norman Rogers
<norman...@altavista.net> wrote:

>The benchmarks cover mainly floating point performance and show that the
>Rabbit is way faster than the 188, 8051, Z180, etc.

After reading the few posts about this product, I wouldn't mind
hearing about the planned initial target markets for it -- those
expected to be more successful, at first.

It looks like about 100 pins and over an inch on the largest
dimension. Power consumption is relatively high for an 8 bit CPU
(which your benchmark comments focus on), performance is ... still
being discussed... Price?

Is this processor sold on its FP performance? From your two recent
posts, I'd begin to think so. Yet you also include a wide variety of
chips in the same breath, leaving me wondering. In any case, would
you mind commenting on which applications and products this is
initially targeted directly at? Is this the result of a "sugar daddy"
who allowed the later marketing of Rabbit in order to defray some of
their NRE to a design group; now looking for any market?

(I'm assuming that there are specific, short-term plans for the chip
and a growth plan, as well.)

I apologize for my ignorance, in advance.

Jon

Kelly Hall

Feb 9, 2000

"Norman Rogers" <norman...@altavista.net> wrote in message
news:38A24D98...@altavista.net...


> There are some new benchmarks up on the Rabbit Semiconductor site:
>
> http://www.rabbitsemiconductor.com/benchmark.html
>

> The benchmarks cover mainly floating point performance and show
> that the Rabbit is way faster than the 188, 8051, Z180, etc.

Impressive! And yet, misleading and perhaps pointless.

No place in what you've published gives me any clear idea what
exactly you mean by 'floating point' - are you talking 80 bit IEEE
754? Are you talking 32 bit? Are the same data formats being used
across all platforms? Are all compilers claiming full support for
exceptions? Any chance we can see the compile options for each
compiler? I'm not suggesting that embedded apps require full-up IEEE
754 compliant libraries - but if you're going to use floating point
performance to compare the performance of different architectures,
it's meaningless unless all architectures are attempting to perform
the same task.

I note the conspicuous absence of any 'modern' processors for
comparison - where are the Siemens parts? The new PICs? The 50 and
100 MHz Scenix? The low end TI and Analog DSPs? Granted, finite
testing time and resources means that not every processor gets a
chance in the barrel but still...

I must say that the PDF is the first time I've *ever* seen anyone
claim that the AVR is a 16-bit controller while at the same time
suggesting the AMD 188ES is an 8-bit. Also, I'm unclear on why the AVR
results were discarded. Because the AVR had sufficient registers to
keep local data local? According to section 3.2 of the Rabbit User's
Manual, the Rabbit's alternate register set "effectively doubles the
number of registers that are easily available for the programmer's
use" - so did Dynamic C fail to use the new registers or was doubling
a small number still too small to offer a dramatic speedup? Are more
registers bad or good?

Any reason I'm seeing proprietary benchmarks and not something more
mainstream like the EEMBC numbers? Even lowly Dhrystones? With a 1
Megabyte code space I'm sure you could compile a few of the SpecInt
routines?

Credit where credit is due: the Rabbit 2000 looks good when it gets
to pick the fight. Secondly, the supplied numbers are a damn sight
better than the chest thumping of two months ago. I may have to
take the eval board out of the box and compile some stuff.

Kelly



Jon Kirwan

Feb 10, 2000
On Wed, 9 Feb 2000 23:49:18 -0800, "Kelly Hall" <ha...@iname.com>
wrote:

>"Norman Rogers" <norman...@altavista.net> wrote in message
>news:38A24D98...@altavista.net...
>> There are some new benchmarks up on the Rabbit Semiconductor site:
>>
>> http://www.rabbitsemiconductor.com/benchmark.html
>>
>> The benchmarks cover mainly floating point performance and show
>> that the Rabbit is way faster than the 188, 8051, Z180, etc.
>
>Impressive! And yet, misleading and perhaps pointless.
>
>No place in what you've published gives me any clear idea what
>exactly you mean by 'floating point' - are you talking 80 bit IEEE
>754? Are you talking 32 bit? Are the same data formats being used
>across all platforms? Are all compilers claiming full support for
>exceptions? Any chance we can see the compile options for each
>compiler? I'm not suggesting that embedded apps require full-up IEEE
>754 compliant libraries - but if you're going to use floating point
>performance to compare the performance of different architectures,
>it's meaningless unless all architectures are attempting to perform
>the same task.

They provided the source code they supposedly used for the Rabbit,
itself. I don't recall seeing any of the modified code used for the
other processors. But those are all good questions.

The C code I saw would execute an "operation" 1000 times. For
example, you'd see "for (i= 0; i < 1000; ++i) z=x*y;" kind of thing,
where the operation variables are declared "float". What each
compiler actually uses for a float is anyone's guess, unless they own
the manuals for them. Various compilers were used and there is no
assurance that they picked compilers that used comparable formats.
Chances are, in fact, they may have selected the compilers that showed
the poorer performance amongst those generally available for a
particular, competing processor. Of course, they may have done a fair
job, too. I couldn't tell.

If their floating point is kept with the exponent separated and the
mantissa unpacked, hehe, then performance could be pretty good. I
suspect that the compiler probably uses the fastest format they can
get away with, consistent with reasonable space use. But you are
right, too. There may be a compiler option they are using that
directs the compiler to use an unpacked format. That would be a
sight, and unfair.

>I note the conspicuous absense of any 'modern' processors for
>comparison - where are the Siemens parts? The new PICs? The 50 and
>100 MHz Scenix? The low end TI and Analog DSPs? Granted, finite
>testing time and resources means that not every processor gets a
>chance in the barrel but still...

I think it would be worth their effort to make this comprehensive, if
they don't have any focus to their marketing. As I asked but
haven't seen an answer to, they may have very specialized markets they
are targeting and perhaps the chosen processors are the more common
ones used in those applications. Frankly, there is no way to guess
from what I've seen. I'm still drawing a complete blank about exactly
where they expect this processor to take the world by storm.

For me, it seems either way too big or way too power hungry to replace
or get designed into anything I use in the 8-bit world where their
claims to performance might help. When I imagine using it in places
where I need 100 pins, I can get much better performance for very good
money, both in power and speed with products that I have long
experience with.

I'd like to know exactly where they see this processor's BEST CHANCE.
I'm sure they must already have some sense of it or else they
shouldn't be in business. I just wish I could see it, too.

>I must say that the PDF is the first time I've *ever* seen anyone
>claim that the AVR is a 16-bit controller while at the same time
>suggesting the AMD 188ES is an 8-bit.

Hehe. Took my breath away, too.

>Also, I'm unclear why the AVR
>results were discarded? Because the AVR had sufficient registers to
>keep local data local? According to section 3.2 of the Rabbit User's
>Manual, the Rabbit's alternate register set "effectively doubles the
>number of registers that are easily available for the programmer's
>use" - so did Dynamic C fail to use the new registers or was doubling
>a small number still too small to offer a dramatic speedup? Are more
>registers bad or good?

It was interesting that they talked about it, but didn't put the
numbers in the table.

>Any reason I'm seeing proprietary benchmarks and not something more
>mainstream like the EEMBC numbers? Even lowly Dhrystones? With a 1
>Megabyte code space I'm sure you could compile a few of the SpecInt
>routines?

Could be that it would be too difficult for them to set up all the
tools needed to test the processors they wanted to test. But the rest
of the document makes it difficult to tell this, too.

>Credit where credit is due: the Rabbit 2000 looks good when it gets
>to pick the fight. Secondly, the supplied numbers are a damn sight
>better than the chest thumping of a two months ago. I may have to
>take the eval board out of the box and compile some stuff.

I'm still waiting to see where they see their own best shot.

Jon

Norman Rogers

Feb 10, 2000

I'd like to address some of the questions and comments posted concerning
the Rabbit microprocessor.

The Rabbit is not a power hog compared to other processors, especially
considering its high performance. The list below shows typical current
consumption at 5V for some of the processors compared in the benchmarks on
the rabbitsemiconductor.com site.

AMD 188ES @ 40 MHz - 240 mA
Rabbit @ 29 MHz - 109 mA
Z180 @ 33 MHz - 60 mA
Dallas DS80C320 @ 33 MHz - 35 mA


As can be seen from the list above, the Rabbit uses about half as much
power as the 188ES. For applications where power consumption is important,
for example battery powered applications, the Rabbit offers many options
to reduce power consumption by reducing operating voltage and modulating
clock speed as computational needs vary. In the "sleepy" mode the Rabbit
continues to execute instructions at 32 kHz with current consumption in
the area of 100 microamperes. This is far more powerful than the sleep
mode found on other processors because decision-making capability is
preserved.

The Dallas DS80C320 appears to have relatively good power consumption, but
in order to obtain a 33 MHz clock speed additional high powered glue logic
is needed and the memory chips must be operated with the chip select pins
grounded, further increasing power requirements. This is detailed in
Application Note 57 on the Dallas Semiconductor web site. They suggest
using a 74F373 logic part as an address latch. This glue logic part alone
has a typical current requirement of 38 mA - more than the microprocessor.

Pricing is given on the Rabbit web site at
http://www.rabbitmicro.com/pricesched.html and is $9.50 in quantity 1,000.

The target market for the Rabbit is small embedded microprocessor boards.
The Rabbit makes it easy to design such boards with a minimum of glue
logic and other chips. Having 4 serial ports and a built in real time
clock can replace $5 of external components. Memory interfacing is
incredibly easy with the Rabbit. The Rabbit read cycle is 2 clocks long.
The address comes out immediately after the clock edge and the setup time
for data before the second clock edge is minimal. The result is that the
memory access time required is only about 12 ns less than 2 clock periods.

The floating point benchmarks reflect both the good performance of the
Rabbit and the high quality of the floating point library in Dynamic C. In
the case of the 188ES the very poor floating point results reflect the
fallacy of implementing floating point by emulating an FPU that is only
supplied on high end x86 processors. The Rabbit uses IEEE format 32-bit
floating point and carries out computations with full accuracy, meaning
typically 0, 1 or 2 counts of error. We don't try to exactly follow every
fine point concerning the handling of exceptions and numbers near the
limits of the floating-point range. It would actually be quite mindless to
do this, since it would give the user nothing very useful at considerable
cost in performance. The IEEE floating-point specification was written for
hardware FPUs, where the cost-performance equation is different than it is
for a software implementation. There are no tricks or deceptions in the
benchmarks. The Rabbit is faster because the hardware is faster and the
library is better.

Whether the AVR is 8-bit or 16-bit can be debated, since it has a 16-bit
instruction path and an 8-bit data path. I think the instruction path is
far more important and should be the deciding factor. The Rabbit has
16-bit internal data paths but the external data interface and the
instruction path is 8 bits, so we call it an 8-bit processor. We didn't do
a full test on the AVR because it is quite different than the Rabbit and
not suited for the same classes of problems due to the limited amount of
instruction memory. But, since it is popular and since we did a few
cursory tests using a simulator we decided to offer up this information
for what it is worth.

The Rabbit is not the world's fastest 8-bit processor. Other processors
are cheaper. Some processors have a cleaner architecture or have
peripheral devices that the Rabbit is lacking. The Rabbit and Dynamic C
constitute a package with certain advantages that will make the Rabbit the
processor of choice for some projects. Strengths include excellent
development support, easy design, good performance, fast floating point,
lots of serial ports, built in real time clock, precision pulse
generation, sleepy mode, slave port, etc.

Norm Rogers


Mark Borgerson

Feb 11, 2000

Jon Kirwan wrote:
>
<<SNIP>>

> I'm still waiting to see where they see their own best shot.


Yeah, me too. Although over the years, I've accumulated half a
dozen different eval kits--from PIC to 6805J1 to 68F333. If I
want speed and integer performance, it's hard to beat a Scenix
chip. 8051 variants were cool because of BASIC51, Keil C and
the neat NVRAM of the Dallas Chips. When I want floating
point performance, I pick a board with a 683XX variant and
get the processing power of the original Macintosh. It's hard
to beat the MetroWerks libraries and the counter/timer
resources of those chips.

I just can't seem to find a place for the Rabbit. I'm still looking
for a spot for the BASICX controller, too ;-) I did one
project with a Z-World controller and Dynamic C, and it's
not a bad development environment, even though you have to
compile each file from scratch each time. With a modern
desktop at 200 MHz or better, compile time sort of falls out
of the whole development time equation.

Mark Borgerson

Jon Kirwan

Feb 11, 2000
On Thu, 10 Feb 2000 21:41:36 -0800, Norman Rogers
<norman...@altavista.net> wrote:

>I'd like to address some of the questions and comments posted concerning
>the Rabbit microprocessor.

Thanks for the effort! Really. Please excuse my argumentative format
below, but it can cut through issues quicker. I want to be convinced,
you can be assured.

>The Rabbit is not a power hog compared to other processors, especially
>considering its high performance. The list below shows typical current
>consumption at 5V for some of the processors compared in the benchmarks on
>the rabbitsemiconductor.com site.
>
>AMD 188ES @ 40 MHz - 240 mA
>Rabbit @ 29 MHz - 109 mA
>Z180 @ 33 MHz - 60 mA
>Dallas DS80C320 @ 33 MHz - 35 mA

Hmmm. Frankly, I don't agree with your conclusion, just yet. When I'm
looking for low power consumption ([1] sensitive measurement circuits
placed in circumstances where heat removal is difficult, where
differentials would wreak havoc, or where large variations in ambient
are hard to correct, [2] battery applications where battery life is a
competitive feature, etc.) with fast computation times, I don't start
looking at the 188ES!! hehe.

I already pointed out one CPU family, for example, that apparently
achieves an order of magnitude or two better floating-point performance
on a per-Joule basis. There are others, too. Certainly, for
applications where this really is important, an order of magnitude or
more will compensate for any differences in ease of development you
might argue for.

I'd gladly be wrong about this, though.

>As can be seen from the list above the Rabbit uses about half as much
>power as the 188ES.

Yes. And lots less than a 37 watt Pentium Pro, too! I don't look at
the 188ES as my low-energy-per-work-performed benchmark. I'm not even
sure if it would be visible on my chart.

>For applications where power consumption is important, for example
>battery powered applications, the Rabbit offers many options to
>reduce power consumption by reducing operating voltage and modulating
>clock speed as computational needs vary.

As do many other families these days, with more coming out almost
every week now. But given similar efforts to provide lower voltage
supplies and modulating clock rates, the question is whether or not
Rabbit is still better. Compare apples with apples on this one.

>In the "sleepy" mode the Rabbit continues to execute instructions at
>32 kHz with current consumption in the area of 100 micro amperes.

5V or 3V? This certainly is a good figure to work with, although TI's
well-advertised MSP430 MCU with flash draws less than 350 uA/MHz in
active mode, operating at 3V, and around 1 uA or 2 uA in standby.

But I'll take a closer look at the performance at very low currents.
There could be something worthwhile there. I'm already heading in the
direction of a conclusion that at high speed it sucks far too much
power without offering compensating factors.

>This is far more powerful than the sleep mode found on other processors
>because decision-making capability is preserved.

Well, who could possibly disagree? Most any processor still running
will be more powerful than one that isn't. You got me there, I'll
grant.

>The Dallas DS80C320 appears to have relatively good power consumption, but
>in order to obtain a 33 MHz clock speed additional high powered glue logic
>is needed and the memory chips must be operated with the chip select pins
>grounded, further increasing power requirements. This is detailed in
>Application Note 57 on the Dallas Semiconductor web site. They suggest
>using a 74F373 logic part as an address latch. This glue logic part alone
>has a typical current requirement of 38 mA - more than the microprocessor.

I guess I wouldn't be using that one in a battery environment, then.

>Pricing is given on the Rabbit web site at
>http://www.rabbitmicro.com/pricesched.html and is $9.50 in quantity 1,000.

For 100 pins, that's in the ball park.

>The target market for the Rabbit is small embedded microprocessor boards.

I don't mean to be harsh at all, but that's not the least bit of an
answer. I could have written that, as confused as I am and having
absolutely no affiliation with the folks at Rabbit. And it wouldn't
be sufficient for the investors I've known and worked with. You'd
have to have a much more precise understanding of exactly what you
expect to replace, when and who.

Is it possible to provide more of an edge on your focus?

>The Rabbit makes it easy to design such boards with a minimum of glue
>logic and other chips. Having 4 serial ports and a built in real time
>clock can replace $5 of external components.

Okay. This is a good point. When more than one or two serial ports
are needed, and cheaply, the available CPUs dwindles dramatically.

>Memory interfacing is incredibly easy with the Rabbit.
>The Rabbit read cycle is 2 clocks long.
>The address comes out immediately after the clock edge and the setup time
>for data before the second clock edge is minimal. The result is that the
>memory access time required is only about 12 nS less than 2 clock periods.

At 34.5 ns per cycle, this is, say, 34.5*2-12 or about 57 ns at 29 MHz?
Some processors include RAM inside and still appear to perform better
with less power. That's even easier to use. But I'll grant that you
like your memory interface.

>The floating point benchmarks reflect both the good performance of the
>Rabbit and the high quality of the floating point library in Dynamic C.

Granted that it is hard to separate the two issues.

>In the case of the 188ES the very poor floating point results reflect the
>fallacy of implementing floating point by emulating an FPU that is only
>supplied on high end x86 processors. The Rabbit uses IEEE format 32-bit
>floating point and carries out computations with full accuracy, meaning
>typically 0, 1 or 2 counts of error.

In 32 bits? 0 and 1 is okay. 2? Does this imply non-monotonicity
exists?

>We don't try to exactly follow every fine point concerning the handling
>of exceptions and numbers near the limits of the floating point range.

For the usual embedded use I might prefer performance over NaN
support. But when you are comparing performance in your benchmarks,
perhaps it would be good to either find a compiler that provides
roughly the same level of support or else call out the details here?
Certainly, trying to emulate full support for all the special values
in IEEE formats can put some of those comparisons at a serious
disadvantage. A note to this effect might be useful to those who
would NOT be using a compiler tool but are trying to develop a general
impression of performance when coding in assembly.

Thanks for mentioning this point.

>It would actually be quite mindless to do
>this, since it would give the user nothing very useful at considerable cost
>in performance. The IEEE floating specification was written for hardware
>FPU's where the cost-performance equation is different than it is for a
>software implementation. There are no tricks or deceptions in the
>benchmarks. The Rabbit is faster because the hardware is faster and the
>library is better.

Okay. For those considering the use of C with the Rabbit, the
combination is good to illustrate. My applications frequently involve
assembly coding and I usually use a custom FP format, when needed. Is
there some help your web site might provide to evaluate this kind of
use?

>Whether the AVR is 8-bit or 16-bit can be debated, since it has a 16-bit
>instruction path and an 8-bit data path. I think the instruction path is
>far more important and should be the deciding factor. The Rabbit has
>16-bit internal data paths but the external data interface and the
>instruction path is 8 bits, so we call it an 8-bit processor. We didn't do
>a full test on the AVR because it is quite different than the Rabbit and
>not suited for the same classes of problems due to the limited amount of
>instruction memory. But, since it is popular and since we did a few
>cursory tests using a simulator we decided to offer up this information
>for what it is worth.

Hmm. Let me be sure I understand your point here.

The Rabbit has an internal 16-bit instruction path AND a 16-bit
internal data path? But since its interface to memory is limited to
8 pins, you call it an 8-bit CPU?

Yet the AVR, with a 16-bit internal instruction path and an 8-bit
internal data path, is a 16-bit CPU because it happens to have a
limited amount of internal flash that is read 16 bits at a time?

This is a new use of the terms for me in describing the class of a CPU
architecture.

>The Rabbit is not the world's fastest 8-bit processor. Other processors
>are cheaper. Some processors have a cleaner architecture or have
>peripheral devices that the Rabbit is lacking. The Rabbit and Dynamic C
>constitute a package with certain advantages that will make the Rabbit the
>processor of choice for some projects. Strengths include excellent
>development support, easy design, good performance, fast floating point,
>lots of serial ports, built in real time clock, precision pulse
>generation, sleepy mode, slave port, etc.

Any particular applications you are already aware of where these
strengths make this a clear and unambiguous winner?

(I'm thinking that the 4 serial ports may be a key here, perhaps in
particular when combined with the other peripherals. Not sure, but
I'm seeing a maybe.)

Sincere thanks for trying to provide some information and ways to
think about this product. There is a significant vertical barrier for
potential customers of Rabbit to consider in making an investment in
it and there would need to be more than a little margin of benefit
present to cause any motion for many of us. Good luck.

Jon

cbfal...@my-deja.com

unread,
Feb 11, 2000, 3:00:00 AM2/11/00
to
In article <38A39505...@oes.to>,
ma...@oes.to wrote:
>
>
> Jon Kirwan wrote:
> >
> <<SNIP>>

> > I'm still waiting to see where they see their own best shot.
>
...SNIP...

>
> I just can't seem to find a place for the Rabbit. I'm still looking
> for a spot for the BASICX controller, too ;-) I did Mark Borgerson
>

Take a look at the POS terminal, credit card
verification, etc. field. In fact, anywhere
you see a Verifone terminal. And that is
just for starters.

--
Chuck Falconer
(cbfal...@my-deja.com)

Norman Rogers

unread,
Feb 11, 2000, 3:00:00 AM2/11/00
to
TI's MSP430 is good in the power department but has many other limitations, such
as a maximum clock speed of 1.65 MHz at 3V. It supports limited internal memory,
and apparently there is no C compiler, which is just as well since it is short
of code memory. It does one thing well - not consume power. The Rabbit does
many things well.

If you want an area where the Rabbit is clearly ahead of everyone else I'd
mention the slave port. The slave port is really done in a manner that no one
else has ever done to my knowledge. No one notices it because it is different
and thus doesn't fit in people's paradigms. We won't get design wins based on
the slave port until we do an educational campaign so that potential users
realize what it can do.

Another area where the Rabbit goes beyond conventional thinking is in Dynamic
C's support for cooperative multitasking via language extensions to C. This is
quite different and makes multitasking simpler. We have one patent granted and
several applied for that relate to this.

But, the market strength of the Rabbit is really based on it having a nice
combination of features. Some other features that I haven't mentioned include a
familiar (Z80) instruction set. Ability to operate at 5 V or 3 V and a memory
mapping scheme that facilitates remote downloading of new software. We will
have a very strong and no extra cost package for TCP/IP and Internet
connectivity in the near future. We are also planning a DSP software package
that will include a fast FFT.

Norm Rogers

Jon Kirwan

unread,
Feb 11, 2000, 3:00:00 AM2/11/00
to
On Fri, 11 Feb 2000 07:32:54 -0800, Norman Rogers
<norman...@altavista.net> wrote:

>If you want an area where the Rabbit is clearly ahead of everyone else I'd
>mention the slave port. The slave port is really done in a manner that no one
>else has ever done to my knowledge. No one notices it because it is different
>and thus doesn't fit in people's paradigms. We won't get design wins based on
>the slave port until we do an educational campaign so that potential users
>realize the what it can do.

Does it operate like the Hitachi 330 port? Or the ADSP-2111 port?
Along those lines?

>Another area where the Rabbit goes beyond conventional thinking is in Dynamic
>C's support for cooperative multitasking via language extensions to C. This is
>quite different and makes multitasking simpler. We have one patent granted and
>several applied for that relate to this.
>
>But, the market strength of the Rabbit is really based on it having a nice
>combination of features. Some other features that I haven't mentioned include a
>familiar (Z80) instruction set. Ability to operate at 5 V or 3 V and a memory
>mapping scheme that facilitates remote downloading of new software. We will
>have a very strong and no extra cost package for TCP/IP and Internet
>connectivity in the near future. We are also planning a DSP software package
>that will include a fast FFT.

The claim to fame here then is your help and support in reducing "time
to market" for those using it? Is this why the focus on C, fast
libraries, and the like?

Jon

Jon Kirwan

unread,
Feb 11, 2000, 3:00:00 AM2/11/00
to
On Sat, 12 Feb 2000 10:50:10 +1100, russell shaw <rus...@webaxs.net>
wrote:

>For MIPS/Watt performance, the ARM+Thumb cpu is said to be
>near the top: http://www.arm.com.

It sure seems that way. ARM7/Thumb on a CPU where you can shut down
individual peripherals is hard to beat. I'm not terribly happy about
their manuals. Mistakes still uncorrected after 5 years. But
assembly programmers seem to be scarce for them; most use C instead,
so the errors are irrelevant to those users.

Jon

Kelly Hall

unread,
Feb 11, 2000, 3:00:00 AM2/11/00
to

<cbfal...@my-deja.com> wrote in message
news:881cq6$duo$1...@nnrp1.deja.com...


> Take a look at the POS terminal, credit card
> verification, etc. field. In fact, anywhere
> you see a Verifone terminal. And that is
> just for starters.

I'd be *really* leery of using Rabbit for secure applications -
having the address and data busses in plain sight (due to the lack of
on-chip storage) means that *all* of your data, keys, and code are
available for anyone with a scope or logic analyzer.

There was an interesting article on Slashdot that mentioned a Frenchman
who painstakingly took apart and analyzed his country's
smart-card POS device. Let's not even get into the DVD/CSS
application. I can't see how Rabbit could be useful in a secure
environment.

Kelly



Kelly Hall

unread,
Feb 11, 2000, 3:00:00 AM2/11/00
to

"Norman Rogers" <norman...@altavista.net> wrote in message
news:38A42BA6...@altavista.net...


> If you want an area where the Rabbit is clearly ahead of everyone
> else I'd mention the slave port. The slave port is really done in a
> manner that no one else has ever done to my knowledge. No one
> notices it because it is different and thus doesn't fit in people's
> paradigms. We won't get design wins based on the slave port until
> we do an educational campaign so that potential users realize the
> what it can do.

Not quite "Field of Dreams", eh? ("If you build it, they will come")

> Another area where the Rabbit goes beyond conventional thinking is
> in Dynamic C's support for cooperative multitasking via language
> extensions to C. This is quite different and makes multitasking
> simpler. We have one patent granted and several applied for that
> relate to this.

First, although the Rabbit uses Dynamic C as a dev tool, don't the
cooperative extensions in Dynamic C predate Rabbit? It seems almost
disingenuous to hype the Rabbit based on prior software research.
Anyway, back in college, we used to call this 'syntactic sugar'. I'm
not saying domain-specific extensions are a bad thing to add into a
programming language. But since programmers cost more and more, and
hardware costs less and less, I feel that the advantages of standard
languages with specialized libraries outweigh the advantages of
rolling the library into the compiler. That is, I can learn a new
library with a standard language faster than I can learn a new
language construct. And I can port my code, later.

I'm sorry, but I can't take Dynamic C too seriously until it starts
supporting the Rabbit to some nontrivial degree. This example from
the manual (page 141 of the online PDF) says it all (syntax errors
included verbatim):
"The fastest way to read and write I/O registers in Dynamic C is to
use a short segment of assembly language inserted in the C program:
// compute value and write to port A data register
value=x+y
#asm
ld a,(value) ; value to write
ioi ld (PADR),a ; write value to PADR
#endasm"
I mean, if you're going to be cavalier about standard ANSI C, you
might as well put the *common* things people do into the compiler
efficiently instead of semi-esoteric things like cooperative
multitasking. Heck, this ought to be a standard macro if nothing
else.
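To be concrete, here is the sort of thing I mean - a hypothetical sketch in portable C (not actual Dynamic C), where the register write is a macro rather than inline assembly; an ordinary byte stands in for PADR:

```c
#include <stdint.h>

/* Hypothetical sketch: a "write I/O register" macro in portable C,
 * using a volatile access instead of dropping into assembly.
 * On real hardware, addr would be the register's mapped address. */
#define WR_REG(addr, val)  (*(volatile uint8_t *)(addr) = (uint8_t)(val))

static uint8_t fake_padr;   /* stand-in for the port A data register */

uint8_t demo_write(uint8_t x, uint8_t y)
{
    WR_REG(&fake_padr, x + y);   /* value = x + y; write to "PADR" */
    return fake_padr;
}
```

Granted, the Rabbit's internal I/O uses an `ioi` prefix rather than plain memory addressing, so a real version would still need compiler support - which is rather my point.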

> But, the market strength of the Rabbit is really based on it having
> a nice combination of features. Some other features that I haven't
> mentioned include a familiar (Z80) instruction set. Ability to
> operate at 5 V or 3 V and a memory mapping scheme that facilitates
> remote downloading of new software. We will have a very strong and
> no extra cost package for TCP/IP and Internet connectivity in the
> near future. We are also planning a DSP software package that will
> include a fast FFT.

Based on this and other posts here by Norm, I believe I understand
why the Rabbit exists and what the target market is: primarily,
ZWorld needs new processors for their controllers, and the Rabbit
seems to fit their needs. The Rabbit appears to be a general purpose
8-bit. Thus by design the Rabbit is a set of tradeoffs. If you need
to build general purpose single board computers, the Rabbit is likely
worthy of further evaluation. The Jack Rabbit is probably short for
'jack-of-all-trades'.

On the other side of the coin, though, it's worth noting that the
8bit processor market doesn't appear to be composed of various
general purpose CPUs battling it out anymore. Now I see large
families of parts, each part looking for a niche to fill. There are
PICs of every size, shape, and IO mix. There are AVRs from 8pin
SOICs up through 64pin TQFPs, and even on the corner of a 352ball BGA
FPGA. 8051 variants are available all over the place, with a huge
variety of speed grades, IO mixes, and power requirements. The
number of affordable emulators, free reference designs, and CPUs with
onboard storage means that whipping up a custom board to solve a
problem is not nearly the ordeal it was a decade ago. At work, our
new PCB layout guy said that at his last job the answer to almost
every problem was to "throw a PIC at it".

I'm sure that no matter how cheap and fast single chip MCU solutions
become, there will always be a market left over for those who can't
afford (the time, the cost) of spinning a custom board for the task
at hand. For those people, general purpose CPUs on general purpose
industrial controllers will remain an option.

My 2 cents on a Friday night,
Kelly



russell shaw

unread,
Feb 12, 2000, 3:00:00 AM2/12/00
to
For MIPS/Watt performance, the ARM+Thumb cpu is said to be
near the top: http://www.arm.com.

--
*******************************************
* Russell Shaw, B.Eng, M.Eng(Research) *
* Electronics Consultant *
* email: rus...@webaxs.net *
* Australia *
*******************************************

Norman Rogers

unread,
Feb 12, 2000, 3:00:00 AM2/12/00
to
In response to Jon Kirwan's comments on the Rabbit:

> Does it operate like the Hitachi 330 port? Or the ADSP-2111 port?
> Along those lines?

The Hitachi 330 (H8) "handshaking" port is somewhat similar to the slave port of the Rabbit but closer to a Z80 PIO. It can only transfer data in one direction at a time and there is only one 8-bit register.

The ADSP-2111 port is quite similar to the Rabbit slave port - it has many registers and can operate in both directions at the same time. It can also be used to boot the slave, as it can in the Rabbit. Of course the ADSP-2111 is a DSP rather than a general purpose microprocessor, so it still stands that the Rabbit slave interface is unique.

> The claim to fame here then is your help and support in reducing "time
> to market" for those using it? Is this why the focus on C, fast
> libraries, and the like?

The problem with promoting a product with the time-to-market argument is that everyone is making the same claim. Concrete and easy-to-understand points of superiority are more likely to attract the reader's attention. Since our floating point is dramatically better, it is a concrete point of superiority, and it is not hard to explain in a few words, unlike, for example, cooperative multitasking.

Norm Rogers

Norman Rogers

unread,
Feb 12, 2000, 3:00:00 AM2/12/00
to
In response to Kelly Hall's comments:

The traditional approach to multitasking is to use a library rather than
modifying the language. The advantage of extending the language is that
the resulting product is easier to learn and use. Another advantage is
that the implementation is more efficient since the function calling
interface is eliminated.

Although some of our work on C language extensions and cooperative
multitasking predates the Rabbit, the Rabbit release of Dynamic C is the
first release to include cofunctions, a very important new feature.

Carried to an extreme if we always keep a standard language and use
libraries to add new features we would all be using Fortran II with great
libraries.

The use of embedded assembly language in C is not standardized and I think
it is fair to say is not favored by standards making bodies. However,
assembly language is vastly faster than C for a lot of problems. C simply
does not have the language operators that allow narrowly defining
procedures so that they can be efficiently implemented by the compiler.
Since assembly language is hard to write it makes sense to mix C and
assembly language, using just enough assembly to handle the speed critical
problem. Dynamic C strongly supports this style of programming and we are
not ashamed to give examples and promote it.

There is some truth to Kelly Hall's remarks concerning our motives for
developing the Rabbit. We do make small embedded processor boards and the
Rabbit works out very well as the processor for such a product. The Rabbit
does reflect our 15 years of experience in this business working with
Z80's, Z180's and 386's. We are keenly aware of the problems that we have
encountered and we tried to eliminate those problems by designing the
Rabbit to be better.

Our approach to the design of the Rabbit was practical and detail oriented
rather than theoretical. If Z-World (a.k.a. Rabbit Semiconductor) had a
nest of Ph.D. computer architects the Rabbit would probably have turned
out quite differently. It wouldn't be based on a 25-year old architecture
and the bus would probably be far more complicated. The computer
architects, if they were ordered to keep the Z80 model, probably wouldn't
know which Z80 instructions are nearly useless so they would keep them
all, making the new instructions too long. To compensate for this they'd
add a cache. Then there would be a management review where they would add
a bunch of features suggested by customers. By this time the die would be
too big so they would add some more features to justify a higher price.
Someone will probably bring out this product soon.

Norm Rogers

Mark Borgerson

unread,
Feb 12, 2000, 3:00:00 AM2/12/00
to
Of all those 4 serial ports on the Rabbit, I just wish
they'd implemented one full SCC port---with DMA,
HDLC with FM encoding and DPLL clock detection. That,
along with a nice C library to protect me from the
complexities of the 8530 registers, would be a processor
that I would really appreciate!

As it is, the four serial ports seem somewhat old-fashioned--
the lack of FIFO or DMA would seem to hamper the use of the
Rabbit in any sort of data multiplexer/concentrator roles
where you might have several serial ports operating at
56KBaud or better. At 56KBaud, you only get 177 microseconds
between interrupts. With 4 channels running, that ends up
being about 44 microseconds per channel to service the
interrupt and enqueue/dequeue the data. I suspect that it
can be done by careful programming, but some FIFOs or
DMA for the serial ports would sure be nice!
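For reference, the arithmetic above assumes a 10-bit asynchronous frame (start + 8 data + stop); a quick check in C:

```c
/* Microseconds between character interrupts on an async serial port,
 * assuming a 10-bit frame (start + 8 data + stop). */
double us_between_chars(double baud)
{
    return 10.0 / baud * 1e6;
}
```

At 56 kBaud this gives about 178.6 us per character, or about 44.6 us per channel with all four ports busy - in line with the figures above.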

If you want lots of bare-nekkid serial ports, why not go
to a 68332 system??? Then you can use the up to 16 of
the TPU channels as either input or output UARTS (less
what you need for timing chores). Again, you end up
without DMA or FIFOs, but you gain the advantage of
lots of 32-bit registers and GNU toolchain support!

SOUR GRAPES WARNING: Most of the above may be considered
as a rant based on my failure to sit down and work out
some data multiplexer problems by diving into either the
68360 or MPC860. I keep looking for a system that
doesn't require either those chips or a PC-104 stack to
get me a full HDLC implementation! (at low power, in
a 3" x 5" form factor, no less).

Mark Borgerson

Jon Kirwan

unread,
Feb 12, 2000, 3:00:00 AM2/12/00
to
On Sat, 12 Feb 2000 05:05:02 -0800, Norman Rogers
<norman...@altavista.net> wrote:

>Although some of our work on C languge extensions and cooperative
>multitasking predates the Rabbit, the Rabbit release of Dynamic C is the
>first release to include cofunctions, a very important new feature.

By this, do you mean "coroutines?" As was implemented by Metaware in
their High-C product in the 80's? (Very useful semantic.) And in
many other language implementations before them?

>There is some truth to Kelly Hall's remarks concerning our motives for
>developing the Rabbit. We do make small embedded processor boards and the
>Rabbit works out very well as the processor for such a product. The Rabbit
>does reflect our 15 years of experience in this business working with
>Z80's, Z180's and 386's. We are keenly aware of the problems that we have
>encountered and we tried to eliminate those problems by designing the
>Rabbit to be better.

Ah! Thanks, Kelly.

Jon


Kelly Hall

unread,
Feb 12, 2000, 3:00:00 AM2/12/00
to

"Norman Rogers" <norman...@altavista.net> wrote in message

news:38A55A7D...@altavista.net...


> The traditional approach to multitasking is to use a library rather
> than modifying the language. The advantage of extending the
> language is that the resulting product is easier to learn and use.
> Another advantage is that the implementation is more efficient
> since the function calling interface is eliminated.

The issue is more general than just multitasking extensions to the
language. Every time a new feature comes along, you have to decide
whether to implement it in the compiler or in libraries. I'm not
sure I believe that adding new syntax and reserved words to C makes
the system easier to use. It certainly makes the code less portable,
and perhaps that's a design win for ZWorld since they'd prefer you to
stick with their products.

I agree that moving the features into the compiler can lead to
efficiency boosts over libraries. But we can address this issue in
different ways - the CPU can be designed to minimize the overhead
associated with function calls - the AVR and Sparc do this by having
a lot of registers (32 for the AVR, 136 on a microSparc-II), and
compilers clever enough not to create stack frames as long as
arguments can fit in the register file. ARMs have fast ways to push
only the registers they need to save.

I guess what surprises me most is that portable C is sacrificed on
the altar of efficiency for multitasking features, and yet for
writing to an IO port you drop down into assembly.

> Although some of our work on C languge extensions and cooperative
> multitasking predates the Rabbit, the Rabbit release of Dynamic C
> is the first release to include cofunctions, a very important new
> feature.

Is there any architectural support in the Rabbit to support
cooperative multitasking per se?

> Carried to an extreme if we always keep a standard language and use
> libraries to add new features we would all be using Fortran II with
> great libraries.

Of course, Fortran allowed separate compilation of library units.
Dynamic C does not - you compile the entire program from the source
on every build. This is likely a big part of why features creep into
Dynamic C and not into the libraries, IMHO. Leaving new features in
the libraries would mean shipping source code to the patented new
features, which is likely undesirable from ZWorld's point of view.

> The use of embedded assembly language in C is not standardized and
> I think it is fair to say is not favored by standards making
> bodies. However, assembly language is vastly faster than C for a
> lot of problems. C simply does not have the language operators that
> allow narrowly defining
> procedures so that they can be efficiently implemented by the
> compiler.

You'll get no debate on this.

> Since assembly language is hard to write it makes sense to mix C
> and assembly language, using just enough assembly to handle the
> speed critical problem. Dynamic C strongly supports this style of
> programming and we are not ashamed to give examples and promote it.

My point is that you're already changing C to make it better support
multitasking. Why not change it some more to allow for efficient IO
port access? Why not make the fancy new Rabbit features *part of the
language* instead of forcing programmers to pick up Rabbit assembly?
"Since assembly language is hard to write", to use your words.

> Our approach to the design of the Rabbit was practical and detail
> oriented rather than theoretical. If Z-World (a.k.a Rabbit
> Semiconductor) had a nest of Ph.D. computer architects the Rabbit
> would probably have turned out quite differently. It wouldn't be
> based on a 25-year old architecture and the bus would probably be
> far more complicated. The computer
> architects, if they were ordered to keep the Z80 model, probably
> won't know which Z80 instructions are nearly useless so they would
> keep them all, making the new instructions too long. To compensate
> for this they'd add a cache. Then there would be a management
> review where they would add a bunch of features suggested by
> customers. By this time the die would be too big so they would add
> some more features to justify a higher price. Someone will probably
> bring out this product soon.

You seem to have a low opinion of both PhDs in computer architecture
and managers that listen to customer input :) Intel seems to make a
good living employing both :)

It might be interesting to see benchmarks comparing Rabbit to
processors designed in the manner you describe. Which is exactly what
some of us have been asking for from the start.

Kelly



Norman Rogers

unread,
Feb 12, 2000, 3:00:00 AM2/12/00
to
They are coroutines but we call them cofunctions since C has functions rather
than subroutines.

Dynamic C cofunctions go way beyond coroutines implemented via a library.
Cofunctions can be conveniently nested to any depth. Any number of cofunctions
can be executed in parallel with a "waitfordone" statement, which waits for all
cofunctions in a list to complete execution. Indexed cofunctions allow
multiple instantiations of a cofunction to be active at the same time. This is
convenient for controlling multiple identical I/O devices. A "single user"
cofunction can be called by multiple callers at the same time, and each caller
will wait its turn. This is useful for creating shared drivers for I/O devices
with automatic conflict resolution.

The big advantage of cofunctions and cooperative multitasking in general is
that sharing of data between tasks is much easier than it is for preemptive
multitasking kernels.
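For readers without Dynamic C at hand, the flavor of a task that resumes where it left off can be approximated in portable C with a switch-based state machine. This sketch only illustrates the idea; it is not how Dynamic C implements cofunctions:

```c
/* A minimal coroutine in portable C: each call to count_task() resumes
 * after the point where it last yielded.  The switch-on-__LINE__ trick
 * is the one later popularized as "protothreads"; compiler-supported
 * cofunctions hide this bookkeeping entirely. */
#define CO_BEGIN(state)  switch (*(state)) { case 0:
#define CO_YIELD(state)  do { *(state) = __LINE__; return 1; case __LINE__:; } while (0)
#define CO_END(state)    } *(state) = 0; return 0

/* Writes 1, 2, 3 into *out across three calls, yielding in between.
 * Returns 1 while still running, 0 when finished. */
int count_task(int *state, int *out)
{
    CO_BEGIN(state);
    *out = 1; CO_YIELD(state);
    *out = 2; CO_YIELD(state);
    *out = 3;
    CO_END(state);
}
```

The caller simply keeps calling the task with the same state variable (initialized to 0) until it returns 0, interleaving other tasks in between; data sharing needs no locks because control only transfers at the yields.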

Norm Rogers

Jon Kirwan wrote:

> On Sat, 12 Feb 2000 05:05:02 -0800, Norman Rogers
> <norman...@altavista.net> wrote:
>
> >Although some of our work on C languge extensions and cooperative
> >multitasking predates the Rabbit, the Rabbit release of Dynamic C is the
> >first release to include cofunctions, a very important new feature.
>

> By this, do you mean "coroutines?" As was implemented by Metaware in
> their High-C product in the 80's? (Very useful semantic.) And in
> many other language implementations before them?
>

> >There is some truth to Kelly Hall's remarks concerning our motives for
> >developing the Rabbit. We do make small embedded processor boards and the
> >Rabbit works out very well as the processor for such a product. The Rabbit
> >does reflect our 15 years of experience in this business working with
> >Z80's, Z180's and 386's. We are keenly aware of the problems that we have
> >encountered and we tried to eliminate those problems by designing the
> >Rabbit to be better.
>

> Ah! Thanks, Kelly.
>
> Jon


Norman Rogers

unread,
Feb 12, 2000, 3:00:00 AM2/12/00
to
In response to Kelly Hall:

> I guess what surprises me most is that portable C is sacrificed on
> the altar of efficiency for multitasking features, and yet for
> writing to an IO port you drop down into assembly.

There is a complete set of C routines for doing I/O and they are fine if speed is not needed. Of necessity they have a lot of overhead due to all the arguments being passed. In any case, portable C is especially mythical when it comes to I/O programming.

> Is there any architectural support in the Rabbit to support
> cooperative multitasking per se?

There is a lot of architectural support for C - the Rabbit does a much better job of fetching 16-bit variables relative to the stack or another index register. There are many other new instructions aimed at implementing C code - for example the "bool" instruction. We also have added another segment to the memory management unit to allow for a separate stack segment register, which essentially makes the stack space unlimited compared to the Z180. However, this feature is most important for preemptive multitasking, which we also support. So I have to say that there is no special support that is there only because of cooperative multitasking.

> Dynamic C does not - you compile the entire program from the source
> on every build. This is likely a big part of why features creep into
> Dynamic C and not into the libraries, IMHO. Leaving new features in
> the libraries would mean shipping source code to the patented new
> features, which is likely undesirable from Z-World's point of view.

All our libraries are in source form, but they are libraries, meaning that the compiler acts like a linker and compiles functions that are needed to satisfy calls in the user's code. The libraries are quite easy to use. We have no need to hide our source code and nothing is hidden in the current release of Dynamic C for the Rabbit, including the boot loader and the BIOS code. Certainly there is no need to hide patented techniques since these must be disclosed as a condition of getting a patent. Typical real time kernels have dozens of library calls and it is quite difficult to learn and use such a system. We think the language extension makes sense from both the ease of use and efficiency viewpoints.

Dynamic C for the Rabbit has a new feature that allows precompiled functions to remain in target memory, so that they don't have to be recompiled when the user code is recompiled, unless the target crashes.

> It might be interesting to see benchmarks comparing Rabbit to
> processors designed in the manner you describe. Which is exactly what
> some of us have been asking for from the start.

I'd like to see benchmarks for the ARM Thumb and compare them to the Rabbit. I looked at the Atmel version of this chip. Although it can run at 33 MHz, at that clock speed 12 ns memory would be required. If you use 55 ns flash memory the maximum clock speed without wait states is about 14 MHz. Although many instructions operate in a single clock, many don't. I suspect that it requires 4 or 5 clocks to fetch a 16-bit operand from external memory.

Norm Rogers

Jon Kirwan

unread,
Feb 12, 2000, 3:00:00 AM2/12/00
to
On Sat, 12 Feb 2000 15:19:58 -0800, Norman Rogers
<norman...@altavista.net> wrote:

>They are coroutines but we call them cofunctions since C has functions rather
>than subroutines.
>
>Dynamic C cofunctions go way beyond coroutines implemented via a library.
>Cofunctions can be conveniently nested to any depth. Any number of cofunctions
>can be executed in parallel with a "waitfordone" statement which waits for all
>cofunctions in a list to complete execution. Indexed cofunctions allow for
>multiple instantiations of a cofunction to be active at the same time. This is
>convenient for controlliing multiple identical I/O devices. A "single user"
>cofunction can be called my multiple callers at the same time and each caller
>will wait its turn. This is useful for created shared drivers for I/O devices
>with automatic conflict resolution.
>
>The big advantage of cofunctions and cooperative multitasking in general is
>that sharing of data between tasks is much easier than it is for preemptive
>multitasking kernels.

I'm quite familiar with compiler-supported coroutines. I believe they
are an essential part of a good language, far too often left out.
They enhance the maintainability and semantic power, as well as (in
some cases) speed. I like them.

I gather you are NOT familiar with Metaware's implementation, then.
Or else you would NOT have suggested the idea of "implemented via a
library." An example of their syntax is:

for p <- Primes(1, 100) do printf("%d\n", p);

Generally, they allow the form:

for n1, n2, n3, n4, ..., nn <- func(p1, p2, ..., pm) do <body>

and the func() is allowed to "yield" as many return values as needed
by simply:

yield(a1, a2, a3, ..., am);

When func() returns, the loop is broken.

This is not a library, but a compiler-generated form of iterator.

Metaware offered the use of iterators in their Pascal and then, later,
in their High C product some time around 1987-1990, if I recall. I
only wish such ideas had become part of the C and/or C++ language.
They are cheap in resource, entirely at the programmer's control as
they are not preemptive, and easily implemented. And they are very
useful.
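For what it's worth, the nearest portable-C approximation of that iterator form is a callback that plays the role of yield. A sketch (the Primes example is just an illustration, not Metaware's implementation):

```c
/* A callback-driven stand-in for the iterator form
 *     for p <- Primes(1, 100) do body(p);
 * primes_between() plays the iterator and "yields" each prime by
 * invoking the body callback. */
static int is_prime(int n)
{
    if (n < 2) return 0;
    for (int d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

void primes_between(int lo, int hi, void (*body)(int p, void *ctx), void *ctx)
{
    for (int n = lo; n <= hi; n++)
        if (is_prime(n))
            body(n, ctx);   /* the "yield" */
}
```

Workable, but the loop body must be hoisted into a separate function and its state threaded through ctx by hand - exactly the awkwardness compiler-supported iterators remove.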

Jon

Jon Kirwan

unread,
Feb 12, 2000, 3:00:00 AM2/12/00
to
On Sat, 12 Feb 2000 16:07:31 -0800, Norman Rogers
<norman...@altavista.net> wrote:

>There are a complete set of C routines for doing I/O and they are fine if
>speed is not needed. Of necessity they have a lot of overhead due to all
>the arguments being passed. In any case portable C is especially mythical
>when it comes to I/O programming.

The compiler can certainly recognize these calls and replace with
inline code, if it can guarantee semantic equivalence (which I suspect
it can.)

Jon

Jon Kirwan

Feb 12, 2000
On Sat, 12 Feb 2000 16:07:31 -0800, Norman Rogers
<norman...@altavista.net> wrote:

>I'd like to see benchmarks for the ARM thumb and compare them to the
>Rabbit. I looked at the Atmel version of this chip. Although it can run at
>33 MHz, at that clock speed 12 nS memory would be required. If you use 55
>nS flash memory the maximum clock speed without wait states is about 14
>MHz. Although many instructions operate in a single clock, many don't. I
>suspect that it requires 4 or 5 clocks to fetch a 16-bit operand from
>external memory.

The Atmel ARM implementation has internal RAM that can be used both
for data and instruction space. They also have several chips with
flash built in. But I wonder whether you'd consider the use of the
internal RAM for instruction space as a fair test...

I'd test them both ways, since there are times when someone can
properly stay within the RAM limits of the Atmel ARM7 and it would be
useful to see how your chip compares when such things are possible.
Of course, it would also be appropriate to assume that the code
resides in the Atmel flash, as well. Both results are useful.

Jon

Norman Rogers

Feb 12, 2000
In reply to Jon Kirwan:

> The compiler can certainly recognize these calls and replace with
> inline code, if it can guarantee semantic equivalence (which I suspect
> it can.)

I suppose the compiler could inline these functions, but the situation is quite complicated, and I don't think building the functions into the compiler would be better than suggesting assembly language when the library routines are too slow.

The functions currently available are described in chapter 16 of the Rabbit Reference Manual on http://www.rabbitsemiconductor.com. There are 4 basic functions: read byte, write byte, read bit and write bit. There are separate functions for internal I/O devices and external I/O devices, which have different address spaces, for a total of 8 functions. Then there is the option of a shadow register, which is used to hold the current value of write-only registers.

These functions are by no means adequate for all work with the I/O registers. They are too general for applications where it is known who else is fiddling with the registers. If the programmer understands the total environment he can often dispense with shadow registers, atomic moves and other overly general approaches, so assembly language is still much faster than inlined, overly general functions. Speed-critical applications are common, for example a high-speed serial routine. Such a routine will certainly be written in assembly, in which case features added to the compiler for I/O don't help.

Norm Rogers
 


Norman Rogers

Feb 12, 2000
Yes, I did not know about Metaware's implementation of coroutines.

In Dynamic C's cofunctions and costatements we allow an explicit yield, but we
also have implicit yields in various types of "waitfor" statements. Usually the
program is waiting for an external event to take place, for a certain time, or
for a certain time delay. It may also wait for a signal from some other part of
the program.

Norm Rogers


Darrell Flenniken

Feb 13, 2000
I see my original post has stirred up some very interesting and useful
debate ... I've been lurking with interest. Keep it up. Are any list
subscribers using or evaluating the Rabbit?
Darrell

Darrell Flenniken <deflen...@home.com> wrote in message
news:5yvm4.31090$eC2.3...@news1.alsv1.occa.home.com...
> Any users of the Rabbit Z80 "compatible" microcontroller? Feedback ??
> How about Dynamic 'C' ?
>
> Darrell
>
>

Stephen Pelc

Feb 13, 2000
to comp.arch.embedded
Norman Rogers wrote in message <38A5EA9E...@altavista.net>...

>They are coroutines but we call them cofunctions since C has functions
>rather than subroutines.
[snip]

>The big advantage of cofunctions and cooperative multitasking in
>general is that sharing of data between tasks is much easier than it
>is for preemptive multitasking kernels.

It's really nice to see all this mature technology being recycled. I
have long suspected that trends in software should be compared to the
fashion industry. Coroutines and cooperative multitasking have been
around for a very long time in Forth and Modula-2.

I would also add that with cooperative multitasking, all those
priority inversion problems never arise.
--
Stephen Pelc, s...@mpeltd.demon.co.uk
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)2380 631441, fax: +44 (0)2380 339691
web: http://www.mpeltd.demon.co.uk - free ProForth downloads!

John Kocurek

unread,
Feb 13, 2000, 3:00:00 AM2/13/00
to
In article <JQ7p4.4800$Ev4.1...@nnrp1-w.snfc21.pbi.net>,
"Kelly Hall" <ha...@iname.com> wrote:
>
> <cbfal...@my-deja.com> wrote in message
> news:881cq6$duo$1...@nnrp1.deja.com...
> > Take a look at the POS terminal, credit card
> > verification, etc. field. In fact, anywhere
> > you see a Verifone terminal. And that is
> > just for starters.
>
> I'd be *really* leery of using Rabbit for secure applications -
> having the address and data busses in plain sight (due to the lack on
> onchip storage) means that *all* of your data, keys, and code are
> available for anyone with a scope or logical analyzer.
>
> There was an interesting article on Slashdot that mentioned a French
> man who painstakingly took apart and analyzed his country's
> smart-card POS device. Let's not even get into the DVD/CSS
> application. I can't see how Rabbit could be useful in a secure
> environment.
>
> Kelly
>
You're assuming that credit card terminals are secure. Most of the ones
that I am aware of are cheap to design and manufacture. The earlier
ones (and some that still are in use), only have a single serial port
for both the phone line and the printer. To keep the whole transaction
from printing out, the terminal turns the printer off using XON/XOFF
protocol.

John Kocurek

Kelly Hall

Feb 13, 2000

"John Kocurek" <j...@selec.net> wrote in message
news:886dqs$pfg$1...@nnrp1.deja.com...


> You're assuming that credit card terminals are secure. Most of the
> ones that I am aware of are cheap to design and manufacture. The
> earlier ones (and some that still are in use), only have a single
> serial port for both the phone line and the printer. To keep the
> whole transaction from printing out, the terminal turns the printer
> off using XON/XOFF protocol.

I'm assuming that credit card terminals will only become *more*
secure. Do you think that HP/Veriphone will release new products
that are no more secure than the first generation devices?

Kelly



cbfal...@my-deja.com

Feb 14, 2000
In article <38A5EA9E...@altavista.net>,
norman...@altavista.net wrote:
> They are coroutines but we call them cofunctions since C has
> functions rather than subroutines.
>
> Dynamic C cofunctions go way beyond coroutines implemented via
> a library. Cofunctions can be conveniently nested to any depth.
> Any number of cofunctions can be executed in parallel with a
> "waitfordone" statement which waits for all cofunctions in a
> list to complete execution. Indexed cofunctions allow for
> multiple instantiations of a cofunction to be active at the same
> time. This is convenient for controlling multiple identical I/O
> devices. A "single user" cofunction can be called by multiple
> callers at the same time and each caller will wait its turn.
> This is useful for creating shared drivers for I/O devices
> with automatic conflict resolution.
>
> The big advantage of cofunctions and cooperative multitasking in
> general is that sharing of data between tasks is much easier than
> it is for preemptive multitasking kernels.

Shades of Modula philosophy !-). I think the point is that coroutines
can be used to implement processes and their ilk, without the heavy
overhead of a full OS. The result is pretty well suited to embedded
machinery.

--
Chuck Falconer
(cbfal...@my-deja.com)

John Kocurek

Feb 15, 2000
In article <%nEp4.5294$Ev4.1...@nnrp1-w.snfc21.pbi.net>,
"Kelly Hall" <ha...@iname.com> wrote:
>
> "John Kocurek" <j...@selec.net> wrote in message
> news:886dqs$pfg$1...@nnrp1.deja.com...
> > You're assuming that credit card terminals are secure. Most of the
> > ones that I am aware of are cheap to design and manufacture. The
> > earlier ones (and some that still are in use), only have a single
> > serial port for both the phone line and the printer. To keep the
> > whole transaction from printing out, the terminal turns the printer
> > off using XON/XOFF protocol.
>
> I'm assuming that credit card terminals will only become *more*
> secure. Do you think that HP/Veriphone will release new products
> that are no more secure than the first generation devices?
>
> Kelly
>

Yep. Unless things have changed, HP is only interested in the
e-business side of Verifone. The rest of the business, HP has had a
"hands off" policy. Look at it this way, there is no reason, other than
ones relating to sound engineering practices, to change things as they
stand. The customer base is not educated enough to ask for more
security, most of their competition is so focused on making their
products like Verifone's that concern over something that Verifone is
not doing is unlikely in the extreme. And Verifone, well they have
something like 80% of the market (seat of the pants estimate), so why
should they change?

You have to realize that the standard for the printers used is the P200
(a re-branded CBM-560). This is a printer that uses an 8048 as the
controller...

John Kocurek

John Kocurek

Feb 15, 2000

(a re-branded iDP560). This is a printer that uses an 8048 as the
controller and hasn't even been made in 5 years and probably was
designed in the early '80s. This is not an industry that is big on
change.

nob...@rtd.com

Feb 22, 2000
On Wed, 09 Feb 2000 21:33:12 -0800, Norman Rogers
<norman...@altavista.net> was overheard to say:

> There are some new benchmarks up on the Rabbit Semiconductor site:
>
> http://www.rabbitsemiconductor.com/benchmark.html

Jeez, one would think that after 30 years folks would learn that
benchmarks are for The Gullible...

> The benchmarks cover mainly floating point performance and show that the
> Rabbit is way faster than the 188, 8051, Z180, etc.

No, your "benchmarks" don't "show that the Rabbit is way faster
than...". Your benchmarks show that on the SELF SELECTED tests
that you chose to run among the SELF SELECTED "candidates"
that were chosen under the SELF SELECTED criteria that were
(arbitrarily?) imposed, the Rabbit yielded quicker results.

It says nothing about how REPRESENTATIVE those tests are of
*actual* usage (gee, when was the last time -- in my 25 years
in this business -- that I used an atan() in a product? Ah,
I remember! It must have been in that autopilot -- I think
I used it to check the position of the Sun in the sky
with respect to the constellation Bogus Major...).

It says nothing about the *accuracy* of those results -- *assuming*
the answers obtained from each function invocation were *roughly*
correct to within a few bits of the size of each datum represented...
Hmmm... exactly *what* floating point format(s) were used? Gee,
doesn't seem to say... Did all of the compilers use the same
size floats? doubles? Did they even *try* to comply with
some notion of "standardization"? (Did you verify the accuracy
of all the values in float.h?)

I'm willing to bet my floating point library will beat any
of the cited performance figures -- especially since you've not
constrained me to a particular data representation, accuracy, etc.
:>

Were all of the library functions completely reentrant? Was
this an issue?

I assume the stack and all temporary values used in the libraries
were stored in the FLASH? (Oh, I guess we forgot to mention
the access time of the RAM used in the system...)

Curiously, I didn't see any mention of code sizes for the various
implementations. Nor a description of the specific compilation
options invoked for each compiler, etc.

And did anyone note how *fast* the compiler was? I mean, if
we have to wait 5 minutes to compile, link and load, then
we might just miss our time-to-market window...

Also interesting to see why the Scenix, PIC, AVR, etc. devices weren't
included. Ah, obviously the PIC's were ruled out because they
aren't 8 bit devices -- after all, their INSTRUCTION WORDS are
12, 14, etc. bits. Hardly a fair comparison, eh? :-/

A *stopwatch* to time the AMD188ES's performance?? Jeez, you
guys spared no expense putting this test together!! ("using
available equipment"... refreshing to know that *I* have more
equipment on hand than this "big semiconductor firm"... maybe
I should go in the custom CPU business, eh?)

You "pick on" the FPU coprocessor interface in the x86's as
something "bad" (my term). Yet, I don't see any mention of the
migration path available to users of the Rabbit should they
happen to find that they've run out of gas down the road
and need a bit extra performance (*assuming* they like
to compute atan()'s, of course). Gee, just how much are 386EX's
running for nowadays in 1K quantity? (ah, but it's not fair to
compare that -- since it's not an 8 bit machine, right?)

Sorry for the sarcasm. But, a "benchmark" like that would
get any second year EE/CS student a failing mark pretty quick!
Why not invest a bit of effort into a *real* benchmark instead
of some *contrived* set of tests that anyone could throw together
in an afternoon? Or, are your *budgets* that *tight*, there? :-(

The answer, of course, is that coming up with a *good* REPRESENTATIVE
benchmark is pretty hard to do. Even simple benchmarks like
recompiling *large* projects and just checking for the compiler's
*space* efficiency require a significant effort. A quick peek through
my Z180 projects shows most of them having 1 - 3MB of *sources*
in 200 - 1000 different *files*. Unless you have a *strictly
conforming* ANSI compiler, it's doubtful *they* would compile
painlessly. And, if you have to start tinkering with the sources,
you have to assign some arbitrary "cost" to that activity...

But, I guess as long as there are folks naive enough to think you can
reduce a product -- nay, a *system* (CPU, compiler, libraries) -- to
a SINGLE NUMBER, there will continue to be meaningless benchmarks
like this one...

--don
Return address is bogus. Mail received at my "real" address
is actively filtered to reduce spam. So, unless your address
has previously been incorporated in that filter, don't bother
sending mail -- it will be discarded before I see it. Sorry.

--don

nob...@rtd.com

Feb 22, 2000
On Sat, 12 Feb 2000 05:05:02 -0800, Norman Rogers
<norman...@altavista.net> was overheard to say:

> In response to Kelly Hall's comments:


>
> The traditional approach to multitasking is to use a library rather than
> modifying the language. The advantage of extending the language is that
> the resulting product is easier to learn and use. Another advantage is
> that the implementation is more efficient since the function calling
> interface is eliminated.

Huh? I must be missing something here. Context switches are
performed (in my world) with:
void yield(void)
which, on any of the compilers that *I* use, generates a three
byte "CALL" instruction to "_yield". Unless you're inlining the
actual context switch code in each such place, I can't see how
you're going to *reduce* this overhead -- after all, "_yield"
still needs to know where to *resume* the "task" when next
activated.

Likewise, any timing service invocations, dynamic task creation
(hmmm... perhaps you don't support that...), etc. need to pass
arguments to the actual "helper routines" (since you claim this
isn't part of a "library", I can't call them "library functions")
that perform the work (again, assuming you aren't inlining all of
this!). And, those routines need to find their way back to the
invoking code/task... so, what are you *saving*?

> Although some of our work on C language extensions and cooperative
> multitasking predates the Rabbit, the Rabbit release of Dynamic C is the
> first release to include cofunctions, a very important new feature.
>
> Carried to an extreme if we always keep a standard language and use
> libraries to add new features we would all be using Fortran II with great
> libraries.

No, there are mechanisms and programming methodologies that
just don't *fit* with the FORTRAN model -- unless you know of
a (compiler-independent) way of passing a pointer to a function
to a FORTRAN subroutine... :> Or, of invoking an object's
"method"...

This seems a bit naive in terms of a business model. Looking at this
product from the outside, its only *real* feature seems to be some of
the integrated I/O. The changes to the instruction set at *best*
are a "wash" -- losses offset by gains -- and the lack of binary
*or* source compatibility seems to weigh in *against* those changes.

The issue you could most easily have accommodated (but didn't)
was the kludge MMU in the Z180. It would have taken less
silicon than the "add-ons" you've introduced *and* coupled
with a GOOD compiler that knows how to exploit that *repaired*
memory space, could easily have set itself apart from the other Z80
and Z180 offerings from other chip vendors.

(sigh) But, take heart, it appears that even folks like
Zilog are going to shoot themselves in the foot on *that* score!
:-(

nob...@rtd.com

Feb 22, 2000
On Thu, 10 Feb 2000 12:05:22 -0800, Jon Kirwan
<jki...@easystreet.com> was overheard to say:

> On Wed, 9 Feb 2000 23:49:18 -0800, "Kelly Hall" <ha...@iname.com>


>
> >"Norman Rogers" <norman...@altavista.net> wrote in message

[snip]

> >> The benchmarks cover mainly floating point performance and show
> >> that the Rabbit is way faster than the 188, 8051, Z180, etc.
> >

> >Impressive! And yet, misleading and perhaps pointless.
> >
> >No place in what you've published gives me any clear idea what
> >exactly you mean by 'floating point' - are you talking 80 bit IEEE
> >754? Are you talking 32 bit? Are the same data formats being used
> >across all platforms? Are all compilers claiming full support for
> >exceptions? Any chance we can see the compile options for each
> >compiler? I'm not suggesting that embedded apps require full-up IEEE
> >754 compliant libraries - but if you're going to use floating point
> >performance to compare the performance of different architectures,
> >it's meaningless unless all architectures are attempting to perform
> >the same task.

Exactly...

> They provided the source code they supposedly used for the Rabbit,
> itself. I don't recall seeing any of the modified code used for the
> other processors. But those are all good questions.

This, in itself, is meaningless. What is the benchmark intended
to test -- the *compiler* or the *processor*? The *title* of
the report is "Microprocessor Benchmark Results". One would
*infer* that this is intended to benchmark the MICROPROCESSOR.
As such, a *fair* benchmark (neglecting the arbitrary? choice
of what the actual *test* would be -- why is floating point
performance more important than, for example, the time it
takes to perform a full context switch? Or, the time
required to service an interrupt? Or the *amount* of code
needed to perform some particular task??) would present a
qualified *problem* and invite *optimum* solutions to that problem
using the processor's *native* instruction sets -- since we're
talking about *processors*, not *compilers*, right???

> The C code I saw would execute an "operation" 1000 times. For
> example, you'd see "for (i= 0; i < 1000; ++i) z=x*y;" kind of thing,
> where the operation variables are declared "float". What each

Yeah, and some smart compilers could look and see that *none* of the
results were being *used* and just optimize the entire loop
down to a single NoOp... (being pedantic). What does *that*
tell you about the *CPU*?

> compiler actually uses for a float is anyone's guess, unless they own
> the manuals for them. Various compilers were used and there is no
> assurance that they picked compilers that used comparable formats.
> Chances are, in fact, they may have selected the compilers that showed
> the poorer performance amongst those generally available for a
> particular, competing processor. Of course, they may have done a fair
> job, too. I couldn't tell.
>
> If their floating point is kept with the exponent separated and the
> mantissa unpacked, hehe, then performance could be pretty good. I
> suspect that the compiler probably uses the fastest format they can
> get away with, consistent with reasonable space use. But you are
> right, too. There may be a compiler option they are using that
> directs the compiler to use an unpacked format. That would be a
> sight, and unfair.

Yes. There are other games that can be played when the choice
of "test" is left up to someone who has a stake in the results...

> >I note the conspicuous absence of any 'modern' processors for
> >comparison - where are the Siemens parts? The new PICs? The 50 and
> >100 MHz Scenix? The low end TI and Analog DSPs? Granted, finite

Exactly...

> >testing time and resources means that not every processor gets a
> >chance in the barrel but still...
>
I think it would be worth their effort to make this comprehensive, if
they don't have any particular focus to their marketing. As I asked but
> haven't seen an answer to, they may have very specialized markets they
> are targeting and perhaps the chosen processors are the more common
> ones used in those applications. Frankly, there is no way to guess
> from what I've seen. I'm still drawing a complete blank about exactly
> where they expect this processor to take the world by storm.

They expect ZWorld to buy LOTS of them! :>

For me, it seems either way too big or way too power hungry to replace
> or get designed into anything I use in the 8-bit world where their
> claims to performance might help. When I imagine using it in places
> where I need 100 pins, I can get much better performance for very good
> money, both in power and speed with products that I have long
> experience with.

Well, you *could* always run out and buy a Rabbit emulator...
(ah, I forgot... you're not supposed to *need* one! Yes, especially
if you are buying all your hardware from ZWorld... convenient, eh?)

> I'd like to know exactly where they see this processor's BEST CHANCE.
> I'm sure they must already have some sense of it or else they
> shouldn't be in business. I just wish I could see it, too.
>
> >I must say that the PDF is the first time I've *ever* seen anyone
> >claim that the AVR is a 16-bit controller while at the same time
> >suggesting the AMD 188ES is an 8-bit.
>
> Hehe. Took my breath away, too.

Must be because it has all those *8's* in the part number!
(they probably felt guilty trying to call it a *1* bit processor!)

> >Also, I'm unclear why the AVR
> >results were discarded? Because the AVR had sufficient registers to
> >keep local data local? According to section 3.2 of the Rabbit User's
> >Manual, the Rabbit's alternate register set "effectively doubles the
> >number of registers that are easily available for the programmer's
> >use" - so did Dynamic C fail to use the new registers or was doubling
> >a small number still too small to offer a dramatic speedup? Are more
> >registers bad or good?
>
> It was interesting that they talked about it, but didn't put the
> numbers in the table.

(paraphrasing due to an aging memory) "Ignore that man behind the
curtain..." :>

(sigh)

nob...@rtd.com

Feb 22, 2000
On Thu, 10 Feb 2000 21:41:36 -0800, Norman Rogers
<norman...@altavista.net> was overheard to say:

Hmmm... it appears from the number of posts, that Mr Rogers is
employed by Rabbit/ZWorld? Sure would be nice if his .signature
made this clear so we know when he's "speaking for the company"
(since apparently neither rabbitmicro.com nor zworld.com
have NNTP access -- hence the "altavista.net" posting
address for him) Sorry to pick nits. I just like to know
when someone has a vested interest in a product they are
so strongly advocating...

> The target market for the Rabbit is small embedded microprocessor boards.
> The Rabbit makes it easy to design such boards with a minimum of glue
> logic and other chips. Having 4 serial ports and a built in real time
> clock can replace $5 of external components. Memory interfacing is
> incredibly easy with the Rabbit. The Rabbit read cycle is 2 clocks long.
> The address comes out immediately after the clock edge and the setup time
> for data before the second clock edge is minimal. The result is that the
> memory access time required is only about 12 nS less than 2 clock periods.

So, using 70ns SRAM's would require a 25MHz clock, or slower?
Hmmm... what's the next *convenient* clock rate to yield "standard
baud rates" (to be compatible with the previously mentioned
benchmark?)

> The floating point benchmarks reflect both the good performance of the
> Rabbit and the high quality of the floating point library in Dynamic C. In

But the floating point library could be cloned or otherwise improved
upon. *Especially* if you claim to (almost) follow a standard
floating point format -- the compiler doesn't care about the
internals of the library... just *handles* to drag in the
appropriate helper routines... (i.e. easily interfaced to
other compilers -- "been there, done that, 'nuff said")

> the case of the 188ES the very poor floating point results reflect the
> fallacy of implementing floating point by emulating an FPU that is only
> supplied on high end x86 processors. The Rabbit uses IEEE format 32-bit

Why do you insist on picking on the x86's as "evil" in this
regard? What is *RABBIT's* solution to the "Oh, sh*t! It's not
fast enough!" dilemma that draws folks to product lines that *have*
upward migration paths? I started a design with a 386EX several
months ago. In production, I'll probably end up porting it all
to a SC500 -- 10 times the performance for minimal *system*
cost increases (since the CPU is only a small piece of a typical
system). How do I do that if I adopt a Rabbit platform? *Hope*
that you can shrink the geometries and scale the clocks up??

> floating point and carries out computations with full accuracy, meaning
> typically 0, 1 or 2 counts of error. We don't try to exactly follow every
> fine point concerning the handling of exceptions and numbers near the
> limits of the floating point range. It would actually be quite mindless to do
> this since it would give the user nothing very useful at considerable cost
> in performance. The IEEE floating specification was written for hardware
> FPU's where the cost-performance equation is different than it is for a
> software implementation. There are no tricks or deceptions in the
> benchmarks. The Rabbit is faster because the hardware is faster and the
> library is better.
>
> Whether the AVR is 8-bit or 16-bit can be debated, since it has a 16-bit
> instruction path and an 8-bit data path. I think the instruction path is
> far more important and should be the deciding factor. The Rabbit has

Well, *I* think the "instruction path" is meaningless. *Typically*,
the internal data paths are what sets the "size" (technology)
of the processor. I.e. the 386SX is a *32* bit CPU with a 16 bit
bus. The 68008 is (arguably) a 32 bit CPU as is the 68010 and 020,
etc.

> 16-bit internal data paths but the external data interface and the

Not just the widths of the data paths but the actual default data
*sizes*. I.e. efficiency hacks (two ALU's) don't really (IMHO)
make a device twice as "wide" but, rather, the inherent data size
classifies the device.

> instruction path is 8 bits, so we call it an 8-bit processor. We didn't do
> a full test on the AVR because it is quite different than the Rabbit and
> not suited for the same classes of problems due to the limited amount of
> instruction memory. But, since it is popular and since we did a few
> cursory tests using a simulator we decided to offer up this information
> for what it is worth.

It's worth exactly what *most* benchmarks are worth -- i.e. slightly
less than the cost of the paper it was printed on... :-(

nob...@rtd.com

Feb 22, 2000
On Fri, 11 Feb 2000 23:22:47 -0800, "Kelly Hall" <ha...@iname.com> was
overheard to say:

> "Norman Rogers" <norman...@altavista.net> wrote in message
> news:38A42BA6...@altavista.net...

> > Another area where the Rabbit goes beyond conventional thinking is
> > in Dynamic C's support for cooperative multitasking via language
> > extensions to C. This is quite different and makes multitasking

Ah, yes! So *clever* to bastardize a language that a good many
people have invested a sh*tload of hours STANDARDIZING just to
support something that we've all been doing for *years* WITHOUT
language extensions!

Sorry, but from what I've seen of these "extensions", I don't see
how they are necessary for "cooperative multitasking" (which is even
*lamer* than a nonpreemptive environment!)

> > simpler. We have one patent granted and several applied for that
> > relate to this.

gee, I'd better rush right out and patent my C language extensions
for packed BCD data types! And, maybe I can amend that patent filing
at a later date to support language extensions to allow the CARRY
FLAG to be manipulated and observed in C! :-/



> First, although the Rabbit uses Dynamic C as a dev tool, don't the
> cooperative extensions in Dynamic C predate Rabbit? It seems almost
> disingenuous to hype the Rabbit based on prior software research.
> Anyway, back in college, we used to call this 'syntactic sugar'. I'm
> not saying domain-specific extensions are a bad thing to add into a
> programming language. But since programmers cost more and more, and
> hardware costs less and less, I feel that the advantages of standard
> languages with specialized libraries outweigh the advantages of
> rolling the library into the compiler. That is, I can learn a new
> library with a standard language faster than I can learn a new
> language construct. And I can port my code, later.

Agreed. Especially when there is only *one* company (?) supporting
that "extension" -- and, the POTENTIAL that they might prevent others
from adopting it by claiming it as IP.

<snip>



> > But, the market strength of the Rabbit is really based on it having
> > a nice combination of features. Some other features that I haven't
> > mentioned include a familiar (Z80) instruction set. Ability to
> > operate at 5 V or 3 V and a memory mapping scheme that facilitates
> > remote downloading of new software. We will have a very strong and
> > no extra cost package for TCP/IP and Internet connectivity in the
> > near future. We are also planning a DSP software package that will
> > include a fast FFT.
>
> Based on this and other posts here by Norm, I believe I understand
> why the Rabbit exists and what the target market is: primarily,
> ZWorld needs new processors for their controllers, and the Rabbit

Not quite. ZWorld needs *proprietary* processors for their
controllers. There are two (primary) approaches to grabbing market
share: design something that folks can *only* buy from you
(or, from one of your subsidiaries!) *or* design something that is
so *good* that folks *want* to buy it from you and not your
competitors offering *similar* products. It seems that ZWorld
is opting for the former... I wonder what that implies for
long term quality/"goodness"...?

> seems to fit their needs. The Rabbit appears to be a general purpose
> 8-bit. Thus by design the Rabbit is a set of tradeoffs. If you need
> to build general purpose single board computers, the Rabbit is likely
> worthy of further evaluation. The Jack Rabbit is probably short for
> 'jack-of-all-trades'.
>
> On the other side of the coin, though, it's worth noting that the
> 8bit processor market doesn't appear to be composed of various
> general purpose CPUs battling it out anymore. Now I see large
> families of parts, each part looking for a niche to fill. There are
> PICs of every size, shape, and IO mix. There are AVRs from 8pin
> SOICs up through 64pin TQFPs, and even on the corner of a 352ball BGA
> FPGA. 8051 variants are available all over the place, with a huge
> variety of speed grades, IO mixes, and power requirements. The

Exactly. And, for the most part, all the 8051 derivatives are
*still* 8051's. No radical new instruction sets. A compiler
written for one will (often) work for *any* (assuming the I/O
is handled outside the language, of course).

> number of affordable emulators, free reference designs, and CPUs with
> onboard storage means that whipping up a custom board to solve a
> problem is not nearly the ordeal it was a decade ago. At work, our
> new PCB layout guy said that at his last job the answer to almost
> every problem was to "throw a PIC at it".

Yes. I use PIC's whenever I need a UART -- as *peripherals* to
a *host* CPU! I get the UART I need plus a bunch of random I/O's,
maybe a spare timer, true "parallel processing" and some added
measure of design security (since the PICs now become "custom
chips"). I've even used them in place of FPGA's... (though they
are *still* horrendous to use!)

> I'm sure that no matter how cheap and fast single chip MCU solutions
> become, there will always be a market left over for those who can't
> afford (the time, the cost) of spinning a custom board for the task
> at hand. For those people, general purpose CPUs on general purpose
> industrial controllers will remain an option.

Yes, and it appears that folks who are *capable* of rolling their
own designs are getting harder and harder to find... :-(

nob...@rtd.com

Feb 22, 2000
On Sat, 12 Feb 2000 15:19:58 -0800, Norman Rogers
<norman...@altavista.net> was overheard to say:

<snip>


>
> The big advantage of cofunctions and cooperative multitasking in general is
> that sharing of data between tasks is much easier than it is for preemptive
> multitasking kernels.

This is only true for simple objects. Whenever a task must
acquire multiple resources, I've found that there is no
"advantage" to a nonpreemptive approach.

It can be argued that a preemptive implementation *quickly*
teaches the developer the perils of failing to implement
atomic operations on shared objects. Next, he subtly
learns the hazards of more complex locking -- the first
time his system falls into a "deadly embrace".

With nonpreemptive approaches (of which, I consider
"cooperative multitasking" to be a more *explicit*
manifestion of the rescheduling), the user gets a false
sense of security. He isn't aware of the atomicity
"guaranteed" by the system. He never encounters
problems with simple objects being accessed concurrently.
And, if he attempts to (implicitly or explicitly) lock multiple
objects (or, a complex object), the implementation will
either blindly allow this (possibly leading to a deadlock)
*or* will take measures to ensure that any such held locks are
automatically freed if the task "yield()'s" within the
lock -- a fact that the developer might not be aware of.

Personal experience has indicated that forcing explicit
sharing/locking tends to be easier to maintain and
troubleshoot. Developers are wary of the potential
for trouble in these areas and, so, design algorithms that
deliberately minimize the prospects for conflict...

YMMV, of course.

nob...@rtd.com

Feb 22, 2000
On Sun, 13 Feb 2000 05:54:03 -0000, "Stephen Pelc"
<s...@mpeltd.demon.co.uk> was overheard to say:

> I would also add that with cooperative multitasking, all those priority
> inversion problems never arise.

That's not true.

Oh, sure, you can say they aren't "priority inversion"
because a nonpreemptive "OS" doesn't have any conventional
notion of "priority"...

But, you still have problems with two or more tasks competing for
shared resources. These can be "hand waved" away by requiring
the "OS" (for want of a better word) to impose certain constraints
on tasks and resource locking -- i.e. implicitly releasing all
held resources whenever a task implicitly/explicitly "yield()'s"
control of the processor to another task. In other words,
not allowing a task to hold a resource unless it is executing.

Otherwise, A grabs a and waits for b. B holds b and is waiting
for a. Same *effective* problem as priority inversion -> deadlock.

If, on the other hand, the "OS" implicitly relinquishes
any locks A holds when A "waits" (yields!), then the problem
goes away. But, now the user (developer) has to deal with the
fact that A has not maintained an exclusive lock on that
resource for all this time... (see my previous post on this)

nob...@rtd.com

Feb 22, 2000
On Sat, 12 Feb 2000 10:28:24 -0800, "Kelly Hall" <ha...@iname.com> was
overheard to say:

> "Norman Rogers" <norman...@altavista.net> wrote in message


> news:38A55A7D...@altavista.net...
> > The traditional approach to multitasking is to use a library rather
> > than modifying the language. The advantage of extending the
> > language is that the resulting product is easier to learn and use.
> > Another advantage is that the implementation is more efficient
> > since the function calling interface is eliminated.
>
> The issue is more general than just multitasking extensions to the
> language. Every time a new feature comes along, you have to decide
> whether to implement it in the compiler or in libraries. I'm not
> sure I believe that adding new syntax and reserved words to C makes
> the system easier to use. It certainly makes the code less portable,
> and perhaps that's a design win for ZWorld since they'd prefer you to
> stick with their products.

Exactly -- on all counts! :>

Furthermore, I don't see how any of these really *needs* to
be an extension to the language -- though I'll have to take a closer
look at the language manual to see if there aren't some things that
*do* increase functionality (nothing in the sales blurbs does!)

> I agree that moving the features into the compiler can lead to
> efficiency boosts over libraries. But we can address this issue in
> different ways - the CPU can be designed to minimize the overhead
> associated with function calls - the AVR and Sparc do this by having
> a lot of registers (32 for the AVR, 136 on a microSparc-II), and
> compilers clever enough not to create stack frames as long as
> arguments can fit in the register file. ARMs have fast ways to push
> only the registers they need to save.
>
> I guess what surprises me most is that portable C is sacrificed on
> the altar of efficiency for multitasking features, and yet for
> writing to an IO port you drop down into assembly.

It would be interesting to see the "efficiency gains"
*quantified* for these "integrated features". Perhaps someone
with a copy of Dynamic C could post some ASM generated code
for a trivial application just to see what the overhead of
these mechanisms is compared to "traditional" approaches
(assuming, of course, that the compiler can generate ASM
as an intermediate output form -- *and* assuming that
ASM *vaguely* resembles Z80 assembly language! :>)



> > Although some of our work on C language extensions and cooperative
> > multitasking predates the Rabbit, the Rabbit release of Dynamic C
> > is the first release to include cofunctions, a very important new
> > feature.
>
> Is there any architectural support in the Rabbit to support
> cooperative multitasking per se?
>
> > Carried to an extreme if we always keep a standard language and use
> > libraries to add new features we would all be using Fortran II with
> > great libraries.
>
> Of course, Fortran allowed separate compilation of library units.
> Dynamic C does not - you compile the entire program from the source
> on every build. This is likely a big part of why features creep into
> Dynamic C and not into the libraries, IMHO. Leaving new features in
> the libraries would mean shipping source code to the patented new
> features, which is likely undesirable from ZWorld's point of view.

Huh? Am I missing something here? Libraries can be shipped
in relocatable object format (assuming the compiler supports
multiple compilation units -- ?). So, no need to disclose the
internals of the libraries (at least, no *less* secure than
letting me create a dummy main() that simply invokes each library
routine and then manually disassembling it...)

As for the features of the language being disclosed, how does
building everything as one giant source object make things
"more secure" for ZWorld?

I suspect, instead, that it was a case of design expedience.
Far easier to build a single object from a single source
than to have to deal with relocatable objects, a linkage
editor, etc. I would suspect a lack of manpower/time-to-market
is more the reason (which seems confirmed by the "feature rich"
text editor included in the IDE! :>)



> > Since assembly language is hard to write it makes sense to mix C
> > and assembly language, using just enough assembly to handle the
> > speed critical problem. Dynamic C strongly supports this style of
> > programming and we are not ashamed to give examples and promote it.
>
> My point is that you're already changing C to make it better support
> multitasking. Why not change it some more to allow for efficient IO
> port access? Why not make the fancy new Rabbit features *part of the
> language* instead of forcing programmers to pick up Rabbit assembly?
> "Since assembly language is hard to write", to use your words.

(shakes head) Bastardizing the language is a Bad Thing.
You *really* need to have some *significant* gain to justify
this.



> > Our approach to the design of the Rabbit was practical and detail
> > oriented rather than theoretical. If Z-World (a.k.a. Rabbit
> > Semiconductor) had a nest of Ph.D. computer architects the Rabbit
> > would probably have turned out quite differently. It wouldn't be
> > based on a 25-year old architecture and the bus would probably be
> > far more complicated. The computer
> > architects, if they were ordered to keep the Z80 model, probably
> > wouldn't know which Z80 instructions are nearly useless so they would
> > keep them all, making the new instructions too long. To compensate
> > for this they'd add a cache. Then there would be a management
> > review where they would add a bunch of features suggested by
> > customers. By this time the die would be too big so they would add
> > some more features to justify a higher price. Someone will probably
> > bring out this product soon.
>
> You seem to have a low opinion of both PhDs in computer architecture
> and managers that listen to customer input :) Intel seems to make a
> good living employing both :)

Well, that's no reflection on intel *or* the PhD's! :-/



> It might be interesting to see benchmarks comparing Rabbit to
> processors designed in manner you describe. Which is exactly what
> some of us have been asking for from the start.

Exactly. I can recall sitting in meetings in the early 80's
with bigwigs from TI (can you spell 99000?), Motogorilla
(anyone want to buy an RGP?), etc. In each case, the features
they were proposing for their processors were all based in
hard science/research -- though almost always 5 or 6 years
too far ahead of the technology. Every feature, instruction,
mechanism was justified with research. Hundreds of thousands
of lines of code analyzed to see what folks were trying to
do with their CPU's. Of course, they all failed to take into
account changes to the programming model like C++...

Even little things like removing RIM and SIM when introducing
the Z80 had significant repercussions...

Peter Wilson

Feb 28, 2000
Is the Rabbit a *legal* spinoff of the Z80? (Zilog's been rather touchy
about that kind of thing recently.) Was it done with Zilog's blessing?
What is Rabbit's relationship with Zilog?

Norman Rogers

Mar 2, 2000
Yes, it is safe. It does not violate any Zilog property right in any
manner.

Norm Rogers

Norman Rogers

Mar 2, 2000
Your posting is little more than a string of unverified accusations. You say
that benchmarks are for the "gullible." The implication is that we are trying
to trick people with phony benchmarks or that every benchmark is an attempt to
trick someone.

You claim that our benchmarks don't show anything because we selected the tests
and the criteria. What do you expect us to do? What could we possibly do that
would satisfy you? We published the source code for the benchmarks. Here I am
publicly defending them. Should I submit them to the United Nations for
approval? The benchmarks show that the Rabbit does do faster floating point
than some of the competitors, in some cases a lot faster. Try to find a
floating point benchmark for the ARM Thumb. I've been trying for weeks. For
some reason the ARM company does not publish floating-point performance figures, even
though they publish figures for integer benchmarks (Dhrystone). Is this a
conspiracy by the ARM company or do they prefer to publish information that
shows the strong points of their product rather than the weak points? Most
microprocessor companies and compiler companies avoid publishing any
benchmarks. They don't want to be judged on objective price performance
criteria. They would rather rely on vague assurances, image and salesmanship -
in other words hype. Why don't you criticize companies that don't publish
benchmarks and charge very high prices for their products, for example Wind
River Systems or Green Hills? Why don't you run benchmarks on their products
and expose them for the world to see?

Why don't you congratulate us for making available objective and verifiable
information concerning the performance of our microprocessor and compiler? Is
it of no interest that Borland C generates floating point add code on a '188
that is 20 times slower than that generated by Dynamic C for the Rabbit, even
though the microprocessors are roughly comparable in speed? Why aren't you
trashing Borland C and the many companies that resell it for embedded work?

Norm Rogers

tauy...@ieee.org

Mar 2, 2000
Just a mild reminder that the special deal on the Rabbit Eval. Kit is
valid up to but not including April Fool's Day (4/1/00). It is probably
a good idea for anyone who is interested in the Rabbit 2000 processor to
get the eval. kit and thoroughly evaluate the whole package. The price
is US$100, which is not too much for a kit including the hardware and
dev. software. Z-World/RabbitSemi/Mr. Rogers probably cannot make much
money out of the eval. kit. I don't know if it comes with a 30-day money
back guarantee.

It should be interesting and productive to read posts of people who have
evaluated the kit first hand.

--Tak

In article <38BE755F...@altavista.net>,
norman...@altavista.net wrote:
> Your posting is little more than a string of unverified accusations.
> You say that benchmarks are for the "gullible." The implication is
> that we are trying to trick people with phony benchmarks or that
> every benchmark is an attempt to trick someone.
>

> [Mr. Rogers' long list of Whys and complete quote from Nobody
> snipped.]

Steve Holle

Mar 2, 2000
Congratulations! Something must have been deleted before your
response. I've looked over the documentation for the Rabbit and
purchased the demo system. I think the Rabbit could have solved a
number of sticky problems I've dealt with in the past and although I
don't have an immediate application, I will consider it when the
opportunity arises.

I have worked with the Dynamic C environment for a previous employer
and found it suitable for commercial applications. The Dynamic C
environment is by far one of the least expensive options available for
a complete environment.

I'm currently working with Codewright, Borland Builder 4 and cross
compiling to the Coldfire processor using a Diab compiler. Each of
these tools has its use and area of application. I refuse to be a
"Code Snob" and say that my current environment is the only way to go.
I will look at the project requirements and select my tools based on
what I'm familiar with, what's available, and what is most cost
effective.

blac...@rtd.com

Mar 2, 2000
On Thu, 02 Mar 2000 06:06:23 -0800, Norman Rogers
<norman...@altavista.net> was overheard to say:

> Your posting is little more than a string of unverified accusations.

Gee, and *I* thought *your* posts were "a string of unsubstantiated
proclamations"! :>

> You say
> that benchmarks are for the "gullible."

They are! I can't recall any serious engineer looking at *any*
benchmark *without* first thinking "OK, what are they trying
*not* to show me?" Truly "fair" benchmarks that ARE MEANINGFUL
TO A POTENTIAL CUSTOMER in the process of evaluating a product
are very difficult to construct. You *don't* just grab whatever
compilers/processors (e.g.) you happen to have "handy" and
*toss* something together!

Our new 4x4 truck got 23 MPG towing this mobile home
from Phoenix to Tucson.

The competitor's 4x4 truck got only 18 MPG towing this
(other) mobile home from Hartford to Boston.

What does this tell me? It *looks* (if your glasses are
fogged up) like it *could* be a fair comparison. After all,
they *are* both 4x4's (it wouldn't be fair to compare a
4x2 to a 4x4?). They are both towing mobile homes. And
the distance from Phoenix to Tucson is roughly the same
as the distance from Hartford to Boston.

Ah, but are the mobile homes the same *weight*? In
addition, do they have the same *tongue* weight? Do
they have the same aerodynamic *drag*? Do they have
the same rolling friction?

What's the *terrain* like in Arizona vs. New England?
What's the *weather* like? What time of year was it?
How much "stop and go" traffic was involved? Who
*drove* the vehicles??

When you hear these criticisms, is the critique
*unfair*? One could argue that you didn't have
one of the competitors 4x4's available to you in AZ
and had to content yourself with letting an
associate in Hartford make the test for you
under "nearly identical" circumstances.

And, of course, the cost of *shipping* the mobile home
that *you* used (in AZ) up to New England would be
far too expensive for this (low budget!) benchmark
so we'll just have to use whatever mobile home
he can find up there. And, it's not *your*
fault that there are lots of tollways in New
England so why should someone *blame* you for
that aspect of the comparison??

As to the weather, well... it's not *your* fault that it
was snowing in Framingham (a town outside Boston)!
You could argue that it was *hot* in southern AZ
and you had to run the air conditioner on that
drive -- which puts an added load on the engine!

And, the fact that the guy who made the test in
New England has had to replace the clutch on his
own car three times in the past year is beside
the point (i.e. he likes to ride the brake)

Does this *sound* ludicrous?? *That's* how most
benchmarks sound!

To make a truly *meaningful* benchmark, you have
to eliminate or *account* for *all* of the variations
between the two experiments.

37 % of mice smoking 2 packs of mentholated
cigarettes each day died within 2 years.

42 % of armadillos smoking 1 pack of "regular"
cigarettes each day died weighing less than
7 pounds.

Based on this evidence, we should prohibit
the sale of tobacco products to *fish*!

If your "benchmark" was a "research paper" -- or,
even a "senior project/thesis" in a respectable
university, you'd have received it back with
all sorts of red markups on it.

*What* are you benchmarking? The "microprocessor"
(as the title indicates)? The *compiler*? Or,
just whatever you can get away with??

If the microprocessor is being compared to some
*other* MPU's, then compare *it*. Since the compiler
could have been crafted *specifically* to exploit some
architectural feature of the MPU, it needs to be
taken out of the comparison (assuming we are looking
at MPU's only). "Fair" comparisons of MPUs rely
on presenting problems to separate "teams" -- each
skilled in the application of a particular processor.
Each team comes up with a solution for their particular
MPU. The results are disclosed and the process
*repeated* (since one team may have used a trick
that the other team could *also* have benefited
from had they but known it!) In any case, each
team must be motivated to come up with *optimal*
solutions since *customers* will be doing that,
eventually!

Likewise, to compare *compilers*, you apply
each compiler to a variety of problems ON THE
SAME IRON! Saying that compiler X on CPU Y
solves a particular math problem to an accuracy
of Z in time T and space S doesn't give you
*any* information upon which to base an
*intelligent* evaluation -- UNLESS you plan on
solving problem P on CPU Y using compiler X
to an accuracy of Z... (hardly likely!)

With those *two* metrics (compiler merit and
MPU merit), you can then argue your RELATIVE
merit against other offerings. And, it
shows potential customers where the real
gains are (if any). I.e. if your *silicon*
comes out with a "no net gain" figure, then
folks know that what they *should* be pursuing
is your *compiler* technology, *not* your silicon!
Or, it lets them know that they should, perhaps,
put pressure on *their* compiler vendor to
improve the quality of its generated code...

Since CPU/tool decisions tend to be *big* investments
(the cost of the compiler is insignificant in
The Big Picture), this lets folks do some strategic
planning. If, for example, your silicon can't
be expected to see a die shrink or move to a
faster process, then they have to figure that
any/all performance gains will have to come from
the compiler writer's skills. They can *inspect*
the quality of that (generated) code and, given some
experience, make an educated guess as to how much
tighter it could *possibly* get -- ASSUMING you
apply all the required resources to this task.

Obviously, if the code looks pretty "Small C"-ish
(i.e. not well optimized -- apologies to Ron Cain?)
then it's *probably* a safe bet that the compiler will
improve -- or, some third party with better compiler
technology will come along and offer a better tool.

If the code looks really clever and tight, then
one has to assume that compiler gains will be pretty
minimal and now you need to *hope* for gains in
the silicon, instead.

> The implication is that we are trying
> to trick people with phony benchmarks or that every benchmark is an attempt to
> trick someone.

It is the very *essence* of marketing to *trick* people! :>
Whether that "trick" is to get you to buy something you don't
*need* (i.e. some peripheral that is taking up space on the
die... and that space translates to *some* type of cost!!),
something you don't *want* (e.g., Dynamic C?? :>) or
at a price that is higher than it needs to be (speaking in
*general* terms here, I've made *no* comments regarding
your pricing policies!)

"Benchmarks" always try to show the vendor's product in
a "good light". Just like *any* add! When was the last
time you saw a vendor publish a list of *complaints*
they've had from customers? Or, bugs that they haven't
yet fixed? "Joe Isuzu" just doesn't exist! (apologies
to international readers for the missed humor... :<)
Expecting a vendor to show his *bad* side is a joke.
You can claim that *hiding* this is NOT an attempt to
"trick" the customer... but I think you'd have a tough
time *rationalizing* that claim!

[As an aside, the *best* experience I have had in my career
with MPU choices has been NatSemi's ill-fated 32032. I
can't recall any *other* vendor showing up on Day One
with a list of bugs that have been found in the silicon
*and* the toolchain... *before* I was locked in to that
decision! By contrast, other vendors only concede the
existence of bugs when you have incontrovertible *proof*]

> You claim that our benchmarks don't show anything because we selected the tests
> and the criteria. What do you expect us to do? What could we possibly do that
> would satisfy you? We published the source code for the benchmarks. Here I am

I would expect you to do a *quality* job evaluating
your product in an *objective* manner. If you want to
*claim* to be objective and enhance your test's credibility,
then you should at least make every effort to *try* to be.
A *company* has AT LEAST as many resources as a college
student (working on a thesis/senior project), a researcher
(working on a "paper" for publication) or anyone else in
the business with a set of acquired tools and experience.
AND, that *company* has a vested financial interest in
making that presentation *convincing*. All the more
motivation to "do it right".

> publicly defending them. Should I submit them to the United Nations for
> approval? The benchmarks show that the Rabbit does do faster floating point

It shows that it does *its* version of floating point faster
than the competitors do *their* versions (which might be more
"standardized"!) Just like "mentholated cigarettes"
and those poor mice... (i.e. what does that have to do
with *armadillos* OR fish???)

> than some of the competitors, in some cases a lot faster. Try to find a
> floating point benchmark for the ARM Thumb. I've been trying for weeks. For

Spend a few bucks and *write* one!! Or, if you're operating
on a shoestring budget, hold a *contest* and get other folks
to write their own versions and submit them. For the price
of, for example, a free Palm Pilot and a full-page ad in one of
the trade rags, you could avail yourself of *lots* of "free
talent" out there. And, get some *real* data to publish.

Or, better yet, do this *before* you design your CPU
so you can see how people *really* solve problems using
your competitors' products -- rather than dismissing
customer input as "unimportant in the design process".

> some reason the ARM company does not publish floating-point performance figures, even
> though they publish figures for integer benchmarks (Dhrystone). Is this a
> conspiracy by the ARM company or do they prefer to publish information that
> shows the strong points of their product rather than the weak points? Most
> microprocessor companies and compiler companies avoid publishing any
> benchmarks. They don't want to be judged on objective price performance

No, they either don't have the resources to do a *good*
benchmark; or, they know that benchmarks are meaningless;
or, they know that their *users* know benchmarks are
meaningless!

From personal experience, I've found that you can almost
always get what "needs to be done" using the resources
you have available (assuming you put *some* effort into
making intelligent decisions up front).

I've been discussing this thread with various friends
via email -- asking many of the same questions of *them*
that I've (rhetorically?) asked of you (i.e. "when was the
last time you used an atan()", "when was the last time you
used floating point in an *embedded* product", etc.).
So far, I'm pretty comfortable with their replies...

> criteria. They would rather rely on vague assurances, image and salesmanship -
> in other words hype. Why don't you criticize companies that don't publish
> benchmarks and charge very high prices for their products, for example Wind

I don't use their products. I vote with my wallet.
I've cast the same "vote" for *your* products.

> River Systems or Green Hills? Why don't you run benchmarks on their products
> and expose them for the world to see?

*They* haven't stuck their nose into c.a.e touting their
products, have they??



> Why don't you congratulate us for making available objective and verifiable
> information concerning the performance of our microprocessor and compiler? Is

Because that information is meaningless. Take my 4x4 or "smoking"
example. Why would I want to invest my time proving that you
can count clock cycles correctly?? I'd rather look at *which*
clock cycles you have chosen to count and decide if *they*
are representative of the types of code that I've had to write over
the years...

> it of no interest that Borland C generates floating point add code on a '188
> that is 20 times slower than that generated by Dynamic C for the Rabbit, even

Is it of no interest that there is no upgrade path from
the Rabbit to faster processors? Gee, if *I* don't choose to
take up your banner and tout this aspect of the Rabbit vs.
'188, *I* am somehow remiss?? Yet, if *you* don't proclaim
the advantages of having an upgrade path (and different tools,
etc.) from the Rabbit, you are "being fair"?

Ah, but there's no "trickery" involved here, right?? :-/

> though the microprocessors are roughly comparable in speed? Why aren't you

Folks use Borland C because they want to develop on an x86 and
like the quality of the *product*. I use compilers that are
*FAR* from *optimal* in the quality (speed/size) of the code they
generate. But, the *accuracy* of that code is *flawless*
(and I *really* hammer on compilers -- if the language says I
can do something, then the *ANSI C* compiler had damned well
better let me do it!!). If I have to wait a *day* for a
bug fix (assuming the vendor *will* fix the bug!), then the
cost of the compiler has increased by $500 - $1000. How
many times do I have to do this in the course of
generating 30K to 50K LoC? Experience has taught me that I
can get a two or three (or five!) -fold increase in hardware
performance for an additional 5% in system cost (CPU is only
a *tiny* part of the equation). So, why live with a
fancy compiler that generates code that is only 40% faster
if that code is *buggy*?

The design I'm coding currently will use a 6MHz Z180.
Yet, the prototype runs at 20MHz (18.xxx). I'm *sure* I won't
have any problems getting the code to run in "real time"! :>

> trashing Borland C and the many companies that resell it for embedded work?

You seem to be operating under a persecution complex. Again,
*they* haven't poked their heads into this forum making bald
"declarations".

I looked at your product. I design a *lot* of systems.
I looked at some of ZWorld's off the shelf solutions.
I don't relish doing designs just for the sake of
doing another design!

*Without* reading your benchmark (I have read it subsequently),
I had decided that your CPU wasn't worth the cost of the
"switch". Giving up my ICE. Giving up my *known good*
compiler. Giving up my version control, editor, test
scaffolding, etc. Giving up my existing code base,
operating systems, libraries, etc. And, in return, getting
a CPU with a *little* more horsepower (?), a new (undocumented)
set of bugs (?), a new development environment (bugs??).

As a *businessman*, the decision was pretty obvious!
I'm here to design *products*, not play with new chips.
I didn't see how your product was going to help me
get that done. So, I dismissed it.

You'll note that I didn't take it upon myself to get out
my soapbox and post to c.a.e:
"New genetic research: Rabbit is a dog"
Nor did I post:
"Borland bytes the big one"
Rather, I only voiced my opinions when a thread of
interest caught my eye and when *you* started making
claims that I considered "unsubstantiated".

Likewise, you can scour the newsgroup archives and never
find a post in which I proclaimed the folly of buying
ZWorld boards (or any other vendor) *despite* having
done a REAL WORLD BENCHMARK on the merits of
buying vs. designing/building (which is what we
each do whenever we are faced with the need to assemble
a new hardware platform for a product).

If you don't like the criticisms, then don't make the *claims*!
*I'm* here replying to *your* posts!

--don
--------------


Return address is bogus. Mail received at my "real" address is
actively filtered to reduce spam. So, unless your address has been
incorporated into that filter previously, don't bother sending

Steve Holle

Mar 3, 2000, 3:00:00 AM
to
Such a long post for such a busy man.

On Thu, 02 Mar 2000 18:29:14 GMT, blac...@rtd.com wrote:


Dan Henry

Mar 3, 2000, 3:00:00 AM
to
Steve Holle wrote:
>
> Such a long post for such a busy man.
>
> On Thu, 02 Mar 2000 18:29:14 GMT, blac...@rtd.com wrote:

Sorry for being dense, but I don't understand the point of your
observation. How is the post's length, or how the poster chooses to
spend his/her time, relevant to the topic of the thread? Can you
elaborate?

Thanks,

--Dan Henry

Bill Giovino

Mar 5, 2000, 3:00:00 AM
to
Sorry, I don't understand your concern on his observation. Please expand
on this?

Bill Giovino

Mar 5, 2000, 3:00:00 AM
to
One of the things that the embedded community has lacked for the longest
time is a set of benchmarks. The Dhrystone benchmark was invented to test
the performance of a microcontroller in a representative
control-oriented environment. The source for the Dhrystone is available
in C. Problem is, vendors tweaked their C compilers to specifically do
well on the Dhrystone benchmark... so the value of that benchmark is
minimal.

Admittedly, benchmarks can become highly subjective, and microcontroller
companies have been known to blatantly slant the benchmarks to favor
their own processor. This has created some healthy skepticism amongst
engineers on benchmarks developed by semiconductor companies.

Regardless of the source, when I look at benchmarks the first thing I
check is how well documented they are. I may give the final
results a cursory first glance, but I look at the following:

1. Is the procedure documented?
Rabbit well documents the procedure used in their file speedtest.pdf.

2. Are representative compilers used?
All of the compilers used are excellent - however, I would have liked
to have seen Keil used for the Philips
device. Also, the compiler options are not listed.

3. Are clock speeds normalized?
Rabbit normalized all the clock speeds to near 30MHz EXCEPT for the
Z180 which was the slowest at 24.58MHz.

4. Are representative members of each processor family used?
Common members of each family are used, however, the Dallas & Philips
devices are 8-bit microcontrollers and cannot
be expected to perform well in 16-bit benchmark tests (although the
Dallas 80C390 has a math co-processor and
would have performed well in the benchmarking).

5. Inclusion of source files.
Rabbit includes all source files.

You should never let offered benchmarks do all your thinking for you -
but you should use them as a starting place to make up your own mind as
to the value of the processor in your application.

-Bill Giovino
http://Microcontroller.com

Steve Holle

Mar 6, 2000, 3:00:00 AM
to
Sorry, no time.
On Fri, 03 Mar 2000 09:18:22 -0700, Dan Henry <dhe...@sprynet.com>

blac...@rtd.com

Mar 6, 2000, 3:00:00 AM
to
On Sun, 05 Mar 2000 06:15:50 GMT, Bill Giovino
<edi...@nospam.microcontroller.com> was overheard to say:

[long post... if your eyes tire easy, quit now]

> One of the things that the embedded community has lacked for the longest
> time is a set of benchmarks.

I suspect that is largely due to the fact that, unlike desktop
environments, embedded systems have varied and diverse requirements...

> The Dhrystone benchmark was invented to test
> the performance of a microcontroller in a representative
> control-oriented environment. The source for the Dhrystone is available
> in C. Problem is, vendors tweaked their C compilers to specifically do
> well on the Dhrystone benchmark... so the value of that benchmark is
> minimal.
>
> Admittedly, benchmarks can become highly subjective, and microcontroller
> companies have been known to blatantly slant the benchmarks to favor
> their own processor. This has created some healthy skepticism amongst
> engineers on benchmarks developed by semiconductor companies.

Gee, *really*?? :>



> Regardless of the source, when I look at benchmarks the first thing I
> check is how well documented they are. I may give the final
> results a cursory first glance, but I look at the following:
>
> 1. Is the procedure documented?
> Rabbit well documents the procedure used in their file speedtest.pdf.

I disagree. The most fundamental aspects of the test are never
spelled out.

- one can *assume* the test was designed to show EXECUTION SPEED
as the criteria to be measured. This is never explicitly stated
nor is the effect of the (likely) consequence of "increased (?)
code size". I.e. was the test *exclusively* concerned with
speed *regardless* of the size of the resulting executables? Or,
the amount of other resources (e.g., RAM) required to implement
the test in each environment?
- how was the "correctness" of each result determined? E.g., is
"0.1" a suitable approximation for atan(0)? :> At the very least,
the benchmark should express the accuracy of the results in
the context of the maximum theoretical accuracy for a particular
floating point representation. If, for example, sin(pi/4) = .7
then, chances are, I'm *sure* not going to want to use *that*
implementation!
- what was the representation used for each floating point scheme?
Are floats a "traditional" 32 bit implementation? Do they support
the same range of values for the "mantissa" and "exponent"?
Or, has some corner been cut along the way? Do they support
NaN's and exceptions? Is it possible to run a series of
arithmetic operations and test the *result* to see if any
problems occurred along the way? Or, does the prudent programmer
have to explicitly test the results of *each* operation for
signalled errors (even "+", "-", "*", etc.??)? Or, worse
yet, does a floating point exception just call exit()?
- How "nonconforming" is the compiler? Are, for example,
all of the <float.h> *minimum* values set forth in The
Standard satisfied? Does a float guarantee at least 6
significant digits over the range 1E-37 to 1E+37? Or,
does the implementation cut corners (to gain speed)?
- Does the implementation *really* compare apples and apples??
For example, the declaration of most of the floating point
library functions is:
double function_name(double);
More specifically:
double atan(double);
I strongly suspect that the Dynamic C library implements
"doubles" as "floats" (!!). [this should be simple for
anyone with a copy of the compiler to verify -- as well
as making a guesstimate of the actual size/format of
floats, etc. -- float.h would be a great start! :>]

Yet, the "much maligned" Borland compiler implements
doubles as "genuine" doubles -- 64 bit representations!
So, the fragment:
float x,y;
...
y = atan(x);
paraphrased from the source code used in the benchmark, actually
is interpreted as:
float x, y;
double a, b;
...
a = (double) x;
b = atan(a);
y = (float) b;
when the Borland compiler operates on it (effectively). ***If***
the Dynamic C compiler supports a wider "double" data type, then
it would also do these two extra type conversions... along with
COMPUTING ATAN(A) AS A 64 BIT DOUBLE!! If the Dynamic C compiler
*doesn't* support a genuine double, then it doesn't have to waste
time on the extra promotion and cast -- nor does it have to
evaluate the function to the same INCREASED precision.

One can argue that double need not *really* be supported...
and, likewise, argue that *float* need not really be
supported! :> But, likewise, the developer who uses
the Borland compiler could always rebuild his libraries
(effectively) adding:
#define double float
to the include files and recompiling the sources (in reality,
this isn't quite this simple since these routines are
written in ASM -- in the Borland distribution). This allows
the developer to cut the same corner as an implementation that
doesn't support doubles.

> 2. Are representative compilers used?

Assuming, of course, that the compiler is being benchmarked...
If, however, you are benchmarking the *microprocessor*...

> All of the compilers used are excellent - however, I would have liked
> to have seen Keil used for the Philips
> device. Also, the compiler options are not listed.

Yes. It is quite possible that the Dynamic C compiler is
designed to, by default, favor speed over size, etc. The
Borland compiler, IIRC, favors *size* over speed *and*
generates 8086 code (even though it can generate 80186
code!) by default.

> 3. Are clock speeds normalized?
> Rabbit normalized all the clock speeds to near 30MHz EXCEPT for the
> Z180 which was the slowest at 24.58MHz.

Presumably, that is to satisfy the first condition set forth
in the "ground rules" -- "able to generate standard baud rates"
(something that would only be obvious? to a person who had
actually designed with the part...). Though that must imply
that the benchmark is meant for comparison against the
"classic" Z180 / 64180 (i.e. devices without the ASEXT
registers which can "generate standard baudrates" from
XTALs approaching 30MHz (e.g., 29.49MHz)). At that rate,
all timings should speed up by ~15%.

It's also interesting to see how poorly Dynamic C (Z180)
fares against Dynamic C (Rabbit)! Of course, these
are different compilers with different code generators,
etc.. But, have any changes been made to the *libraries*
in one that haven't been retrofitted to the other? It
really makes Dynamic C (Z180) look pretty bad against
the Dallas part or even a "plain" 8051...!

Sure would be nice if the reasons behind each "magic
number" were spelled out so folks didn't have to be
"experts" with the technical aspects of each of these
different devices to understand why the criteria
differ... :-(

> 4. Are representative members of each processor family used?
> Common members of each family are used, however, the Dallas & Philips
> devices are 8-bit microcontrollers and cannot
> be expected to perform well in 16-bit benchmark tests (although the
> Dallas 80C390 has a math co-processor and
> would have performed well in the benchmarking).

Yet other devices which are currently "en vogue" are
noticeably absent...



> 5. Inclusion of source files.
> Rabbit includes all source files.

I'm dismayed that your criteria for evaluating a benchmark
doesn't appear to include:
- how *relevant* is the benchmark to the types of work I am
likely to be *using* that device for...

I took some time to grep(1) my source code archive. There
are dozens of projects represented there spanning a couple
of decades of work in almost a dozen different industries
(process control, medical instruments, consumer products, etc.)

Aside from the definitions in my libraries, I found:
- *0* instances of sqrt(3) (despite the fact that "classic"
definitions of many "physical variables" seem to *imply*
that they need to be derived using this function as
well as a good deal of SPC theory)
- *0* instances of exp(3), sin(3) or log(3) -- despite
the fact that I've had products that needed to draw
circles, control power delivered to a load (by varying
phase angle), etc.
- *1* instance of atan(3) -- though it is only present in
a tool that is used to compute a table of atan(3)
values "at compile time" that are #included into another
routine as manifest constants.
- *1* instance of "C_SIEVE" in a TEST PROGRAM from another
compiler vendor :>
- an innumerable number of add, mul, div operators -- though
none in places where their speed would be of any consequence
to the design (for obvious reasons!)
So, what does this benchmark tell me regarding the things I
am *likely* to call upon the processor/compiler for? :-(

> You should never let offered benchmarks do all your thinking for you -
> but you should use them as a starting place to make up your own mind as
> to the value of the processor in your application.

Yes. And anyone presenting a "benchmark" should (IMHO) spend
*their* time dotting all the I's and crossing all the T's
so that *I* don't have to spend my time assuring myself
that all of the issues that *MY* design is likely to
face are *accurately* represented (or, *discounted*!)
in the test presented.

Expecting me to be savvy on all aspects of each of the
processors that *any* particular vendor might choose to
showcase his product against *doesn't* make my job any
easier. So, what incentive is there for me to cherish
the "gift" (of information) that has been bestowed upon
me? :>

The sorts of details that were left out of this
benchmark can't be rationalized (IMHO). If you
don't want to clutter the text of the presentation
with lots of details (compiler switch settings,
floating point representations, accuracy of
values computed, etc.) then stuff them in an
appendix. At least, those of us who *know* what
questions to ask (of the document) will be
able to *find* the information and "double check"
the results...

Kelly Hall

Mar 6, 2000, 3:00:00 AM
to
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

<blac...@rtd.com> wrote in message
news:38c3ebdc....@News.RTD.com...


> I strongly suspect that the Dynamic C library implements
> "doubles" as "floats" (!!). [this should be simple for
> anyone with a copy of the compiler to verify -- as well
> as making a guesstimate of the actual size/format of
> floats, etc. -- float.h would be a great start! :>]

- From my tests, I think it's fair to assume that in Dynamic C floats
and doubles are the same thing: 32 bit values (8 exponent, 24
mantissa).

> Yes. It is quite possible that the Dynamic C compiler is
> designed to, by default, favor speed over size, etc. The
> Borland compiler, IIRC, favors *size* over speed *and*
> generates 8086 code (even though it can generate 80186
> code!) by default.

Dynamic C gives the user a check box to select optimizations for
either speed or size. I haven't looked at the generated code to see
what the differences are (if any).

> It's also interesting to see how poorly Dynamic C (Z180)
> fares against Dynamic C (Rabbit)! Of course, these
> are different compilers with different code generators,
> etc.. But, have any changes been made to the *libraries*
> in one that haven't been retrofitted to the other? It
> really makes Dynamic C (Z180) look pretty bad against
> the Dallas part or even a "plain" 8051...!

I think the floating point is entirely new for the Rabbit. I'm not
sure that the code generator is much different aside from supporting
some of the new Rabbit instructions. I haven't played around with
code generation all that much yet, but I've yet to see the compiler
generate any code that uses the alternate register banks. If I
decide to spend some more money on this hobby, I'll probably pick up
DC 5.x and a BL1500 and see what the differences are between the two
systems.

> Sure would be nice if the reasons behind each "magic
> number" were spelled out so folks didn't have to be
> "experts" with the technical aspects of each of these
> different devices to understand why the criteria
> differ... :-(

I think of it as a puzzle for those of us with too much time on our
hands. Frankly, I'm having fun peering into the corners of Dynamic
C.

> I'm dismayed that your criteria for evaluating a benchmark
> doesn't appear to include:
> - how *relevant* is the benchmark to the types of work I am
> likely to be *using* that device for...

It would have been nice for ZWorld to go for the EEMBC approach
(www.embedded-benchmarks.com).

> The sorts of details that were left out of this
> benchmark can't be rationalized (IMHO). If you
> don't want to clutter the text of the presentation
> with lots of details (compiler switch settings,
> floating point representations, accuracy of
> values computed, etc.) then stuff them in an
> appendix. At least, those of us who *know* what
> questions to ask (of the document) will be
> able to *find* the information and "double check"
> the results...

The lack of the usual boring techno details is what piqued my
curiosity about the Rabbit benchmarks in the first place.

Kelly

-----BEGIN PGP SIGNATURE-----
Version: PGPfreeware 6.5.2 for non-commercial use <http://www.pgp.com>

iQA/AwUBOMSgx+O3tORmHE48EQI1KQCgw+q9yLLUqD6VVmstgc7laQMHadUAoNCe
+Rs5rHPdVGTn9B+qFo/yxxdY
=uS3B
-----END PGP SIGNATURE-----


David Brown

Mar 7, 2000, 3:00:00 AM
to

blac...@rtd.com wrote in message <38c3ebdc....@News.RTD.com>...

>On Sun, 05 Mar 2000 06:15:50 GMT, Bill Giovino
><edi...@nospam.microcontroller.com> was overheard to say:
>
>[long post... if your eyes tire easy, quit now]
>
>[Lots of stuff about floating point benchmarks ... ]

Most uses of small 8-bit microcontrollers do not need floating point. As
you have pointed out, floating point benchmarking involves a great many
complications. We would come a long way with some basic integer benchmarks,
and some microcontroller-specific benchmarks. The kinds of things I would
like to see benchmarked would be:

16-bit add, sub, compare (signed and unsigned)
multiplication and division (8, 16 and maybe 32-bit, signed and
unsigned)
simple table search (e.g., finding a byte in a 256 byte ROM table)
Maximum speed of a software SPI (tests bit instruction timing)
Minimum interrupt overhead, including saving/restoring critical
registers
Overhead of a simple software UART

I think this sort of benchmark would be far more useful to most people than
highly disputable floating point benchmarks.
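The table-search entry, at least, is trivial to phrase as a
benchmark kernel (a sketch; the table contents and names are
invented for illustration):

```c
#include <stddef.h>

/* Benchmark kernel: linear search for a byte in a 256-entry ROM
   table.  Returns the first matching index, or -1 if absent.
   Unfilled entries default to zero. */
static const unsigned char table[256] = { 3, 1, 4, 1, 5, 9 };

int table_search(unsigned char key)
{
    size_t i;
    for (i = 0; i < sizeof table; i++)
        if (table[i] == key)
            return (int)i;
    return -1;
}
```

Timed over all 256 key values, this exposes both the loop overhead
and the indexed-addressing cost of a given part.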

Bill Giovino

Mar 7, 2000, 3:00:00 AM
to
To Anonymous Dan:

You've obviously given this some thought - let me respond to just some
of your comments:

blac...@rtd.com wrote:
<...snip!...>


> > 1. Is the procedure documented?
> > Rabbit well documents the procedure used in their file speedtest.pdf.
>
> I disagree. The most fundamental aspects of the test are never
> spelled out.
>
> - one can *assume* the test was designed to show EXECUTION SPEED
> as the criteria to be measured. This is never explicitly stated

While the benchmarks are not spelled out in the level of detail you may
have liked, let me point out that since the benchmark answers are given
in units of MICROSECONDS I find it highly doubtful in my own personal
opinion that the benchmarks represent anything other than EXECUTION
SPEED.

> - how was the "correctness" of each result determined? E.g., is

The benchmark merely supposes the successful completion of the code.
Since all the code is the same, accuracy is assumed to be normalized,
although I expect disagreement from you on this (sigh).

<...snip!...>


>
> > 4. Are representative members of each processor family used?
> > Common members of each family are used, however, the Dallas & Philips
> > devices are 8-bit microcontrollers and cannot
> > be expected to perform well in 16-bit benchmark tests (although the
> > Dallas 80C390 has a math co-processor and
> > would have performed well in the benchmarking).
>
> Yet other devices which are currently "en vogue" are
> noticeably absent...

This is an entirely new subject but... benchmarking little-used
processors that are currently "en-vogue" isn't necessarily the best
thing to do. You benchmark against your perceived market competitors in
your field, as opposed to giving "free advertising" to up-and-coming
competitors with little market share.

When designing in a microcontroller, technical considerations are not
the only thing you should look at - you need to heavily look at the
business considerations. I have watched companies choose
microcontrollers with great architectures and decent tools that have no
market share. I have watched as these micros were discontinued by the
semiconductor companies and the customers went out of business. For
example, when the Transputer was discontinued, many companies that wrote
their code in Occam (the Transputer's language) could not port their
code to another processor fast enough. They chose the best processor
with superior technical capabilities - and their companies went
bankrupt.

There is a name for these microcontrollers - "Great micros that your
boss won't let you use". Your boss may have been bitten before.
Motorola, ST-Microelectronics, TI, Analog Devices - they've all done it,
discontinued cores with little market share. Benchmarks can't save your
company when this happens.

>
> > 5. Inclusion of source files.
> > Rabbit includes all source files.
>
> I'm dismayed that your criteria for evaluating a benchmark
> doesn't appear to include:
> - how *relevant* is the benchmark to the types of work I am
> likely to be *using* that device for...

Already addressed in my first paragraph - everyone knows how fragmented
the embedded systems marketplace is (after all, this newsgroup isn't
Embedded 101). While the EEMBC is addressing some of this, as we all
know benchmarks are meant to be starting points for which to evaluate
different architectures. All our applications are different, our mileage
may vary, see your local dealer for details.

You've spent more time examining the benchmarks than anyone else here -
your investment in time, based on your posts, seems to run into hours - so
I assume you've seriously considered the Rabbit for an application. Do
you have any final, technical, dispassionate conclusions on the
architecture for the masses? For what applications would YOU recommend
the Rabbit?

-Bill Giovino
http://Microcontroller.com

Dave Hansen

Mar 7, 2000, 3:00:00 AM
to
On Tue, 07 Mar 2000 20:45:46 GMT, Bill Giovino
<edi...@nospam.microcontroller.com> wrote:
>To Anonymous Dan:

I'm not --don, but I thought I'd respond anyway. Though I really don't have
much to say (that's never stopped me before, though. ;-)

>
>You've obviously given this some thought - let me respond to just some
>of your comments:
>
>blac...@rtd.com wrote:

[...]


>> - how was the "correctness" of each result determined? E.g., is
>

>The benchmark merely supposes the successful completion of the code.
>Since all the code is the same, accuracy is assumed to be normalized,
>although I expect disagreement from you on this (sigh).

From me, too. To paraphrase Gerald Weinberg (I think), ``Optimization is
easy when you don't have to get the correct answer.''

-=Dave
--
Just my (10-010) cents.
I can barely speak for myself, so I certainly can't speak for B-Tree.
Change is inevitable. Progress is not.

blac...@rtd.com

Mar 8, 2000, 3:00:00 AM
to
On Mon, 6 Mar 2000 22:25:13 -0800, "Kelly Hall" <ha...@iname.com> was
overheard to say:

> <blac...@rtd.com> wrote in message
> news:38c3ebdc....@News.RTD.com...


> > I strongly suspect that the Dynamic C library implements
> > "doubles" as "floats" (!!). [this should be simple for
> > anyone with a copy of the compiler to verify -- as well
> > as making a guesstimate of the actual size/format of
> > floats, etc. -- float.h would be a great start! :>]
>

> From my tests, I think it's fair to assume that in Dynamic C floats
> and doubles are the same thing: 32 bit values (8 exponent, 24
> mantissa).

In which case, the Borland compiler is unfairly "penalized"
by supporting *true* doubles. Consider that binary operations
quadruple in time when doubled in length... and, that any
function computed with a power series would obviously need
to be evaluated to more terms to get accuracy approaching
the resolution of the coding scheme...

> > Yes. It is quite possible that the Dynamic C compiler is
> > designed to, by default, favor speed over size, etc. The
> > Borland compiler, IIRC, favors *size* over speed *and*
> > generates 8086 code (even though it can generate 80186
> > code!) by default.
>

> Dynamic C gives the user a check box to select optimizations for
> either speed or size. I haven't looked at the generated code to see
> what the differences are (if any).

Ah. Perhaps a revised benchmark will be written which
makes all of these issues more explicit...



> > It's also interesting to see how poorly Dynamic C (Z180)
> > fares against Dynamic C (Rabbit)! Of course, these
> > are different compilers with different code generators,
> > etc.. But, have any changes been made to the *libraries*
> > in one that haven't been retrofitted to the other? It
> > really makes Dynamic C (Z180) look pretty bad against
> > the Dallas part or even a "plain" 8051...!
>

> I think the floating point is entirely new for the Rabbit. I'm not

Huh? Surely the Dynamic C product for the Z180 supported
floating point (since it is included in the benchmark).
Or, perhaps you mean an "entirely new" IMPLEMENTATION
(for the Rabbit)? Possible. Likely.

But, then why not fold those "enhancements" back into the
Z180 product?

> sure that the code generator is much different aside from supporting
> some of the new Rabbit instructions. I haven't played around with
> code generation all that much yet, but I've yet to see the compiler
> generate any code that uses the alternate register banks. If I

I would assume that use of the alternate register set is
an *option* that can be turned on/off (since you might
choose to reserve the use of those registers for other
purposes)?

> decide to spend some more money on this hobby, I'll probably pick up

Hobby???! :-(

> DC 5.x and a BL1500 and see what the differences are between the two
> systems.
>

> > Sure would be nice if the reasons behind each "magic
> > number" were spelled out so folks didn't have to be
> > "experts" with the technical aspects of each of these
> > different devices to understand why the criteria
> > differ... :-(
>

> I think of it as a puzzle for those of us with too much time on our

(sigh) Unfortunately, I don't have that luxury. *I* look at
it as any other new product -- something that I can potentially
capitalize on in new designs. I fully expect Zilog to botch
the eZ80 (anyone care for a Z800? Z280? Z380? Z80,000?? :<)
and would like to see a migration path to move onto without
having to invest (time/money) lots of extra resources -- since
the "8 bit" market appears to have a firm price/performance
advantage in many cases (though 1MB address spaces are quickly
becoming too crowded)

> hands. Frankly, I'm having fun peering into the corners of Dynamic
> C.
>

> > I'm dismayed that your criteria for evaluating a benchmark
> > doesn't appear to include:
> > - how *relevant* is the benchmark to the types of work I am
> > likely to be *using* that device for...
>

> It would have been nice for ZWorld to go for the EEMBC approach
> (www.embedded-benchmarks.com).

Well, the advantage of a standardized benchmark is that it
allows you to compare a wider range (theoretically) of
devices. The *disadvantage* of a standard benchmark is
that it is too easily manipulated by manufacturers
(*if* customers place too much emphasis on it!) as well
as not usually defining *real* problems -- just someone
else's idea of what "real problems" *might* be...
(hence the inherent paradox in benchmarks)

I'd appreciate other metrics/feedback you (and others?)
stumble across in your "playing around"...

blac...@rtd.com

Mar 8, 2000, 3:00:00 AM
to
On Tue, 7 Mar 2000 11:04:05 +0100, "David Brown"
<david....@westcontrol.com> was overheard to say:

> blac...@rtd.com wrote in message <38c3ebdc....@News.RTD.com>...


> >On Sun, 05 Mar 2000 06:15:50 GMT, Bill Giovino
> ><edi...@nospam.microcontroller.com> was overheard to say:
>

> [Lots of stuff about floating point benchmarks ... ]
>
> Most uses of small 8-bit microcontrollers do not need floating point. As

Yes. Though (playing devil's advocate) that doesn't mean they
shouldn't be *able* to leverage that technology AS APPROPRIATE...

> you have pointed out, floating point benchmarking involves a great many
> complications. We would come a long way with some basic integer benchmarks,
> and some microcontroller-specific benchmarks. The kinds of things I would
> like to see benchmarked would be:
>
> 16-bit add, sub, compare (signed and unsigned)

Years ago, I would have considered this a moot point. But, recent
applications would add 32 bit operations to this list. In
particular, I've found that 32 bit compares are quite expensive
on smaller CPU's (hint: think "IP address") and this causes
significant changes in algorithms that manipulate those
quantities (think: "ARP cache", "routing table", etc.).
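(To illustrate the sort of algorithmic contortion I mean -- the
function name and byte ordering here are mine, purely for the sake
of example -- on an 8-bit part a 32-bit compare is four byte
compares, so you order them to bail out early:)

```c
#include <stdint.h>

/* Compare two IPv4 addresses stored as byte arrays.  Testing the
   low-order (host) byte first lets the typical cache/table scan
   reject a non-match after ONE byte compare instead of four,
   since host bytes differ far more often than network bytes. */
int ip_equal(const uint8_t a[4], const uint8_t b[4])
{
    if (a[3] != b[3]) return 0;   /* most-variable byte first */
    if (a[2] != b[2]) return 0;
    if (a[1] != b[1]) return 0;
    return a[0] == b[0];
}
```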

> multiplication and division (8, 16 and maybe 32-bit, signed and
> unsigned)

I think many (small) embedded systems developers invariably
find ways of avoiding both of these operators -- especially
division (by anything other than powers of 2). Usually,
rethinking an algorithm "upside down" (e.g., comparing
*period* of signals instead of *frequencies*).

> simple table search (e.g., finding a byte in a 256 byte ROM table)

I would possibly replace this with "table LOOKUP" instead of
a "search". Since tables are often used to store precomputed
values (e.g., fast math functions) or dispatch to particular
routines (jump tables, etc.)

I would also possibly add "queue operators" (i.e. adding and
removing entries from a FIFO)
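(By way of example -- names and sizes arbitrary -- a power-of-two
ring buffer keeps both operators down to a handful of instructions,
which is exactly the sort of primitive worth timing:)

```c
/* Minimal byte FIFO.  QSIZE must be a power of two so the index
   wrap is a single AND; one slot is sacrificed to distinguish
   "full" from "empty". */
#define QSIZE 16

static unsigned char q[QSIZE];
static unsigned char head, tail;   /* head: next write, tail: next read */

int q_put(unsigned char c)
{
    unsigned char next = (unsigned char)((head + 1) & (QSIZE - 1));
    if (next == tail)
        return -1;                 /* full */
    q[head] = c;
    head = next;
    return 0;
}

int q_get(void)
{
    unsigned char c;
    if (tail == head)
        return -1;                 /* empty */
    c = q[tail];
    tail = (unsigned char)((tail + 1) & (QSIZE - 1));
    return c;
}
```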

> Maximum speed of a software SPI (tests bit instruction timing)

See UART comment below

> Minimum interrupt overhead, including saving/restoring critical
> registers

Yes. Or, more specifically, cost of saving/restoring the
entire processor state (think: MTOS).

> Overhead of a simple software UART

Hmmm... I'm not sure of the applicability of that (though in
very small MCU's -- e.g. some of the 68705's -- this can be
a huge portion of the application's time budget)

The problem with benchmarks is illustrated perfectly, here,
in "our" respective choices for evaluation criteria. From
some of your suggestions, I'll infer a bias towards small,
cheap systems that do *everything* in software (e.g., small
data objects -- 16 bits -- and lots of bit twiddling).

Yet, my comments tend to push towards the other end -- how
easily can I implement "big" algorithms on *small*
systems (support for "huge" text, "longs", etc.).

And, we both are developing "embedded systems" -- maybe
even COMPETING PRODUCTS (with different implementation
approaches!) This is why MCU vendors have to cram
85 gazillion peripherals onto the die -- or, offer
57 different varieties of a particular MCU to "cater"
to our varied needs.

> I think this sort of benchmark would be far more useful to most people than
> highly disputable floating point benchmarks.

Floating point benchmarks have a place. *Assuming* you
*need* floating point *and* as long as they don't try
to compare "floating point" to "FloaTINg PoiNt"...

I'd rather the time spent on marketing hype/benchmarks
be spent, instead, improving the quality of the *product*
(since *that* helps me in far more tangible ways than
any benchmark ever will!)

blac...@rtd.com

Mar 8, 2000, 3:00:00 AM
to
On Tue, 07 Mar 2000 20:45:46 GMT, Bill Giovino

<edi...@nospam.microcontroller.com> was overheard to say:

> To Anonymous Dan:

Grrr.... "don"!

> You've obviously given this some thought - let me respond to just some
> of your comments:

I love Z80's (et al.). Any time there's "yet another z80 clone"
announced, I look with *hope* at the possibility of using it
to leverage my existing code base, etc. So, any benchmark
is quickly reviewed...

> blac...@rtd.com wrote:
> <...snip!...>


> > > 1. Is the procedure documented?
> > > Rabbit well documents the procedure used in their file speedtest.pdf.
> >
> > I disagree. The most fundamental aspects of the test are never
> > spelled out.
> >
> > - one can *assume* the test was designed to show EXECUTION SPEED
> > as the criteria to be measured. This is never explicitly stated
>

> While the benchmarks are not spelled out in the level of detail you may
> have liked, let me point out that since the benchmark answers are given
> in units of MICROSECONDS I find it highly doubtful in my own personal
> opinion that the benchmarks represent anything other than EXECUTION
> SPEED.

Yes, of course! :> But, there is no formal statement of
any *other* criteria involved. For example, if there are
no limits placed on code *size*, then I could choose to
"spend" that 1MB of address space on large tables and
interpolation techniques, etc. In the real world (gasp!)
you rarely have that sort of luxury! :>



> > - how was the "correctness" of each result determined? E.g., is
>

> The benchmark merely supposes the successful completion of the code.

So, as long as the processor doesn't *crash* the test is
considered "successful"? :>

> Since all the code is the same, accuracy is assumed to be normalized,
> although I expect disagreement from you on this (sigh).

I believe a post from Mr. Rogers indicated that the
different compilers gave *differing* results. I have
not seen an analysis made of the actual accuracy
of each of these results. It Shirley wasn't present
in the benchmark report. Nor was the choice of
algorithms spelled out. I.e. what does the error
function (i.e. accuracy vs. input values) look like
for the algorithms chosen, etc.

Further, as I mentioned previously, the Borland compiler
would *appear* (barring any information to the contrary)
to support "genuine" doubles and, thus, compute the
values to a finer precision (which, while not *guaranteeing*
it, *could* result in increased accuracy.)

> <...snip!...>


> >
> > > 4. Are representative members of each processor family used?
> > > Common members of each family are used, however, the Dallas & Philips
> > > devices are 8-bit microcontrollers and cannot
> > > be expected to perform well in 16-bit benchmark tests (although the
> > > Dallas 80C390 has a math co-processor and
> > > would have performed well in the benchmarking).
> >
> > Yet other devices which are currently "en vogue" are
> > noticeably absent...
>

> This is an entirely new subject but... benchmarking little-used
> processors that are currently "en-vogue" isn't necessarily the best

I don't think the AVR, Scenix, etc. parts are "little used". :>

> thing to do. You benchmark against your perceived market competitors in
> your field, as opposed to giving "free advertising" to up-and-coming
> competitors with little market share.

In this case, it would *appear* (stated without any support to
substantiate this observation!) that the *Rabbit* would be the
"up-and-coming" device... (?)



> When designing in a microcontroller, technical considerations are not
> the only thing you should look at - you need to heavily look at the
> business considerations. I have watched companies choose

Of course! And, some of those "business considerations" include:
- availability of quality tools at affordable prices
- suitability of existing tools and methodologies
- availability of staff capable of using the product effectively
- anticipated long term availability
etc.

> microcontrollers with great architectures and decent tools that have no
> market share. I have watched as these micros were discontinued by the
> semiconductor companies and the customers went out of business. For

Sure! Want to buy a 32016 ICE -- *cheap*??? :>

> example, when the Transputer was discontinued, many companies that wrote
> their code in Occam (the Transputer's language) could not port their
> code to another processor fast enough. They chose the best processor
> with superior technical capabilities - and their companies went
> bankrupt.

Of course. The "market" makes no assurances that the "best"
device/product will "win". Witness the choice of the x86
for the PC...



> There is a name for these microcontrollers - "Great micros that your
> boss won't let you use". Your boss may have been bitten before.

Of course! Though, in my case (as an independent contractor),
*I* tend to be "the boss" most of the time. It's rare that
I have found clients who *decree* that a particular processor,
tool, etc. be used for a particular job (though there are often
biases towards certain choices either due to existing
corporate standards, inventory, equipment, regulatory agency
certifications, etc.)

This leaves the burden of making these decisions AND JUSTIFYING
THEM largely on my shoulders. *I* have to weigh the relative
values of the customer's existing tools, personnel, equipment,
etc. against the potential downside of using something "new"
(different).

In addition to cost, manufacturability, etc.

> Motorola, ST-Microelectronics, TI, Analog Devices - they've all done it,
> discontinued cores with little market share. Benchmarks can't save your
> company when this happens.

Right.



> > > 5. Inclusion of source files.
> > > Rabbit includes all source files.
> >
> > I'm dismayed that your criteria for evaluating a benchmark
> > doesn't appear to include:
> > - how *relevant* is the benchmark to the types of work I am
> > likely to be *using* that device for...
>

> Already addressed in my first paragraph - everyone knows how fragmented
> the embedded systems marketplace is (after all, this newsgroup isn't
> Embedded 101). While the EEMBC is addressing some of this, as we all
> know benchmarks are meant to be starting points for which to evaluate
> different architectures. All our applications are different, our mileage
> may vary, see your local dealer for details.

Yes. But unless people CRITICALLY look at benchmarks (cynically?)
and *think* about these issues -- what IS being said, what ISN'T being
said, "is it fair", "is it meaningful", etc. those benchmarks don't
amount to much of *practical* value to any *particular* user...



> You've spent more time examining the benchmarks than anyone else here -
> your investment in time, based on your posts, seem to be into hours - so

Reviewing the benchmark took very little time. Print a copy
and drag out a red pen. Perhaps my familiarity with different
processors made it easier for me to spot cracks in the armor...

Typing USENET posts has taken the most time, of course.
Thankfully (?) I have a slow printer :>

> I assume you've seriously considered the Rabbit for an application. Do

Yes, I have a TCP/IP stack that was looking for a host. I
don't feel like waiting for Zilog to botch (?) the eZ80.
The Z180 can support the stack but even at 30MHz it will
only support a fraction of 10Mb/s Ethernet bandwidth.
I'd like to have more margin there...

I'll be looking at the Toshiba (?) part next...

> you have any final, technical, dispassionate conclusions on the

I dunno. The fact that it isn't a Z180 (by a considerable
margin!) really mucks up any plans I could have had for it.
I can't afford to live without the tools I am currently
using (source control, emulator, compiler, regression
test scaffolding, etc.) since some of the markets I have
to fit in impose requirements on the design *process*,
tools used, etc. I could have lived with the changes
in relative speed of instructions -- though it might
have caused some algorithms to have been designed
differently had that been known up front. Of all the
changes they *could* have made, fixing the MMU is the
*first* they should have addressed (the "stack segment"
is useless to me since I already have that feature
in my runtime environment)

But I *know* it's not worth the effort to try and port my
code to the Dynamic C development environment. Too much
potential for "problems" (can Dynamic C support "huge"
TEXT and "huge" DATA? how hard will it be to get 1000+
files to "fit" into a single editor window and compile
without a hitch? how much manual juggling of function
arrangement will be required to make the "locator"
happy with the "module" placements within the MMU
managed areas? etc.). The subsystem (the TCP/IP stack)
is just a bit too big for an 8 bit CPU *if* everything
goes "well"... if anything hiccups, then I'm screwed...

I'm in the *system* business -- not the *tools* business.
So, I don't really want to have to build more tools to make
that choice of *hardware* work for me. Or, if I am
going to invest more in tools, then I would like to be able
to leverage those efforts for "solutions" that better
fit *my* needs -- instead of my needs as perceived by
some company trying to sell "one size fits all".
(i.e. it gives me opportunities to add value to my
solutions and differentiate them from other approaches).

Right now, I think the best *practical* bet for *me* is
to pursue FPGA based approaches (since I'm not optimistic
about the Toshiba part). If I am putting big FPGA's
into designs *anyways* (to address I/O requirements, etc.)
then why not a *bigger* FPGA and tuck the CPU in there
with it? This then gives me the ability to "fix"
those aspects of a CPU that are "costing" a particular
application disproportionately.

(shrug) Looks like I'll have to spend more time building
tools than I had hoped. But, perhaps "just once"? :>

> architecture for the masses? For what applications would YOU recommend
> the Rabbit?

No idea. I'm not in the business of telling people how
they should apply a particular chip, etc. Rather, I
solve the reverse problem -- finding implementations
for particular *system* requirements!

Joel Baumert

unread,
Mar 8, 2000, 3:00:00 AM3/8/00
to
blac...@rtd.com wrote:
[...]

>> > It's also interesting to see how poorly Dynamic C (Z180)
>> > fares against Dynamic C (Rabbit)! Of course, these
>> > are different compilers with different code generators,
>> > etc.. But, have any changes been made to the *libraries*
>> > in one that haven't been retrofitted to the other? It
[...]

Much of the improvement in the Z80 vs. Rabbit floating point
is due to three factors. First the rabbit has a 16x16 mult.
When multiplying two 24bit mantissas, the libraries can take
advantage of this operation and can do roughly 1/2 the
multiplies that had to be done on the Z80. Second the new
exchange operations make the alternate register set more
accessible... i.e. ex de',hl (2 clocks), ld bc',bc (4 clks)
ld d',e (4 clocks). These instructions make it possible to
carefully code a routine keeping intermediate values in
the registers instead of spilling them to the stack or to
static storage. Third, the Rabbit instructions run faster
than their Z80 counterparts.
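
To make the first point concrete, here is how the mantissa product
decomposes -- a C sketch (written with 64-bit host arithmetic for
clarity, and emphatically not the actual library code) of a 24x24-bit
multiply built from partial products a 16x16 hardware multiply can
absorb:

```c
#include <stdint.h>

/* 24x24 -> 48-bit mantissa product from four partial products.
   With a 16x16 hardware multiply, the ah*bh, ah*bl, and al*bh terms
   each map onto one MUL; a plain Z80 has to grind out every partial
   product with shift-and-add loops. */
uint64_t mant_mul24(uint32_t a, uint32_t b) {   /* a, b < 2^24 */
    uint32_t ah = a >> 8, al = a & 0xFFu;       /* 16-bit high, 8-bit low */
    uint32_t bh = b >> 8, bl = b & 0xFFu;
    return ((uint64_t)(ah * bh) << 16)
         + ((uint64_t)(ah * bl) << 8)
         + ((uint64_t)(al * bh) << 8)
         + (uint64_t)(al * bl);
}
```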

Joel

Zworld may pay my bills, but I speak for myself.

Joel Baumert

unread,
Mar 8, 2000, 3:00:00 AM3/8/00
to
blac...@rtd.com wrote:
[...]

> I love Z80's (et al.). Any time there's "yet another z80 clone"
> announced, I look with *hope* at the possibility of using it
> to leverage my existing code base, etc. So, any benchmark
> is quickly reviewed...
[...]

> Of course! And, some of those "business considerations" include:
> - availability of quality tools at affordable prices
> - suitability of existing tools and methodologies
> - availability of staff capable of using the product effectively
> - anticipated long term availability
> etc.
[...]

Assuming the tool chain fit your application would you consider
the Rabbit??? I have done internal development on both the Z80
and the Rabbit and I prefer writing code for the Rabbit. Some
of the new instructions make low level assembly much easier to
write.

Examples are ld hl,(sp+8bitoffset) or ld (sp+8bitoffset),hl.
Other examples are the lcall xpc,mn instruction, which changes the
upper window and jumps to the offset mn, or ldp instructions which
allow you to directly or indirectly access memory in the 1MB
physical address space. The alternate register loads and
exchanges can be handy for jockeying things around and avoiding
unnecessary loads and stores to memory.

I don't know how it compares to some of the processors you have
been mentioning, I've mainly used the X86, Z180, and Rabbit,
but depending on your application the Rabbit can have advantages
over the other two.

Kelly Hall

unread,
Mar 8, 2000, 3:00:00 AM3/8/00
to
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

"Joel Baumert" <jbau...@mail.indirectlink.com> wrote in message
news:96nx4.226$ox3....@news.pacbell.net...


> Much of the improvement in the Z80 vs. Rabbit floating point
> is due to three factors. First the rabbit has a 16x16 mult.

Well, it's got a *signed* 16x16 multiply. That's not quite as handy
as an unsigned multiply, IMHO. But I grant that it's better than the
Z180 and Z80.

> When multiplying two 24bit mantissas, the libraries can take
> advantage of this operation and can do roughly 1/2 the
> multiplies that had to be done on the Z80. Second the new
> exchange operations make the alternate register set more
> accessable... i.e. ex de',hl (2 clocks), ld bc',bc (4 clks)
> ld d',e (4 clocks). These instructions make it possible to
> carefully code a routine keeping intermediate values in
> the registers instead of spilling them to the stack or to
> static storage.

Does this imply that Dynamic C will one day support register
variables? Will Dynamic C actually keep its own intermediate
expressions in the CPU or will it continue to treat the register set
as a fancy accumulator and spill everything else to the stack?

> Third, the Rabbit instructions run faster
> then their Z80 counterparts.

This is a good thing. I like it.

Kelly

-----BEGIN PGP SIGNATURE-----
Version: PGPfreeware 6.5.2 for non-commercial use <http://www.pgp.com>

iQA/AwUBOMYLJOO3tORmHE48EQJ56QCgkBrbZv4ymXDkx+rb2r1x9XgPl2UAoOeM
SCSlQS1H5OOz96cUaJPXF4Es
=VQR2
-----END PGP SIGNATURE-----


David Brown

unread,
Mar 8, 2000, 3:00:00 AM3/8/00
to

blac...@rtd.com wrote in message <38cbc5de....@News.RTD.com>...

>>
>> [Lots of stuff about floating point benchmarks ... ]
>>
>> Most uses of small 8-bit microcontrollers do not need floating point. As
>
>Yes. Though (playing devil's advocate) that doesn't mean they
>shouldn't be *able* to leverage that technology AS APPROPRIATE...
>

True, but floating point performance is seldom critical to the choice of
such small microcontrollers. If it is, designers will by preference choose
chips that are optimized for floating point (such as a DSP).

>> you have pointed out, floating point benchmarking involves a great many
>> complications. We would come a long way with some basic integer
>> benchmarks, and some microcontroller-specific benchmarks. The kinds of
>> things I would like to see benchmarked would be:
>>
>> 16-bit add, sub, compare (signed and unsigned)
>
>Years ago, I would have considered this a moot point. But, recent
>applications would add 32 bit operations to this list. In
>particular, I've found that 32 bit compares are quite expensive
>on smaller CPU's (hint: think "IP address") and this causes
>significant changes in algorithms that manipulate those
>quantities (think: "ARP cache", "routing table", etc.).

Fair enough - include 32 bit operations. You might also want to include
other basic operations, like the equivalent of strcpy() (although I am
thinking mainly in assembly here to keep the microcontroller benchmarking
separate from compiler benchmarking). Some micros with paging can copy
memory within a page fairly quickly, but are very slow across pages.

>
>> multiplication and division (8, 16 and maybe 32-bit, signed and
>> unsigned)
>
>I think many (small) embedded systems developers invariably
>find ways of avoiding both of these operators -- especially
>division (by anything other than powers of 2). Usually,
>rethinking an algorithm "upside down" (e.g., comparing
>*period* of signals instead of *frequencies*).
>

We often try, but it is not always possible. And when they are needed, it
is often important that they run quickly. It is also a good test of the
general computing power of the microcontroller (for example, an AVR can do
16-bit multiplies and divides at around 6 times the speed of a PIC16, which
in turn is around 9 times the speed of a COP8, all at the same clock
frequency). It would also be useful to give the results both for a
software-only routine, and routines using any hardware multiplier.
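
For a concrete baseline, here is the classic shift-and-add routine that
a software-only multiply benchmark is effectively timing -- a C sketch
of the general technique, not any particular vendor's library:

```c
#include <stdint.h>

/* Shift-and-add 16x16 -> 32 multiply: the software-only baseline that
   a hardware multiplier gets compared against.  One pass per bit of
   the multiplier, conditionally adding the shifted multiplicand. */
uint32_t mul16_soft(uint16_t a, uint16_t b) {
    uint32_t acc = 0;
    uint32_t addend = a;
    while (b) {
        if (b & 1u)
            acc += addend;   /* this bit of b contributes a shifted copy of a */
        addend <<= 1;
        b >>= 1;
    }
    return acc;
}
```

On micros with no multiply instruction this loop (or an unrolled
version of it) is essentially what the compiler's runtime does.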

>> simple table search (e.g., finding a byte in a 256 byte ROM table)
>
>I would possibly replace this with "table LOOKUP" instead of
>a "search". Since tables are often used to store precomputed
>values (e.g., fast math functions) or dispatch to particular
>routines (jump tables, etc.)
>

I agree that a lookup is more common - I was thinking of a search as a way
of combining lots of lookups. A copy from a ROM table to RAM might also be
a way to look at this performance (i.e., reading a message string from
memory).


>I would also possibly add "queue operators" (i.e. adding and
>removing entries from a FIFO)

Especially if they can take advantage of more than one pointer.
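
Something like this ring buffer is what I'd put on the bench for queue
operators -- a C sketch where the power-of-two size and the index
masking are my own illustrative choices:

```c
#include <stdint.h>

#define QSIZE 16u  /* must be a power of two (and divide 256 evenly) */

typedef struct {
    uint8_t buf[QSIZE];
    uint8_t head, tail;   /* free-running indices, masked on use */
} fifo_t;

/* Enqueue; returns 0 if full.  head and tail are the "more than one
   pointer" being exercised. */
int fifo_put(fifo_t *q, uint8_t v) {
    if ((uint8_t)(q->head - q->tail) == QSIZE) return 0;
    q->buf[q->head++ & (QSIZE - 1u)] = v;
    return 1;
}

/* Dequeue into *v; returns 0 if empty. */
int fifo_get(fifo_t *q, uint8_t *v) {
    if (q->head == q->tail) return 0;
    *v = q->buf[q->tail++ & (QSIZE - 1u)];
    return 1;
}
```

The benchmark question is how cheaply the processor maintains the two
indices and does the masked indexed access.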

>
>> Maximum speed of a software SPI (tests bit instruction timing)
>
>See UART comment below
>
>> Minimum interrupt overhead, including saving/restoring critical
>> registers
>
>Yes. Or, more specifically, cost of saving/restoring the
>entire processor state (think: MTOS).

Most small systems do not implement pre-emptive multi-tasking - working with
multiple stacks and contexts is very inefficient on many small micros. See
my comments to your comments below.

>
>> Overhead of a simple software UART
>
>Hmmm... I'm not sure of the applicability of that (though in
>very small MCU's -- e.g. some of the 68705's -- this can be
>a huge portion of the application's time budget)
>
>The problem with benchmarks is illustrated perfectly, here,
>in "our" respective choices for evaluation criteria. From
>some of your suggestions, I'll infer a bias towards small,
>cheap systems that do *everything* in software (e.g., small
>data objects -- 16 bits -- and lots of bit twiddling).
>
>Yet, my comments tend to push towards the other end -- how
>easily can I implement "big" algorithms on *small*
>systems (support for "huge" text, "longs", etc.).
>
>And, we both are developing "embedded systems" -- maybe
>even COMPETING PRODUCTS (with different implementation
>approaches!) This is why MCU vendors have to cram
>85 gazillion peripherals onto the die -- or, offer
>57 different varieties of a particular MCU to "cater"
>to our varied needs.
>

The huge variation certainly makes things difficult for benchmarking. I
mostly use slightly more powerful micros than was implied (most have a UART
and SPI) - I was thinking of software SPI and UARTs as examples that will be
familiar to many people, especially at the lower end. I have also
implemented both SPI and UARTs in software where that was more convenient
(e.g., if I needed two UARTs).

>> I think this sort of benchmark would be far more useful to most people than
>> highly disputable floating point benchmarks.
>
>Floating point benchmarks have a place. *Assuming* you
>*need* floating point *and* as long as they don't try
>to compare "floating point" to "FloaTINg PoiNt"...
>

"Aye, there's the rub." Differences in compilers can make far more
difference than individual chips (I have run simple floating point
benchmarks on different compilers for the 68332 - they varied by a factor of
10). Some compilers will stress compatibility and handle NaNs, etc.,
correctly, whereas others will stress speed and fall over on overflows.
Both sorts of systems have their uses. I suppose we need benchmarks for
both "floating point" and "FloaTINg PoiNt" on each micro. The most
important thing, however, is that they are properly labelled on the
benchmarks so that readers know what they are testing.

>
>I'd rather the time spent on marketing hype/benchmarks
>be spent, instead, improving the quality of the *product*
>(since *that* helps me in far more tangible ways than
>any benchmark ever will!)
>
>--don


Wishful thinking :-) But we could hope for more accurate hype and
benchmarks.


Joel Baumert

unread,
Mar 8, 2000, 3:00:00 AM3/8/00
to
Kelly Hall <ha...@iname.com> wrote:
> Well, it's got a *signed* 16x16 multiply. That's not quite as handy
> as an unsigned multiply, IMHO. But I grant that it's better than the
> Z180 and Z80.

I think that a 16x16 unsigned multiply would have fit the floating
point libraries better. I have absolutely expressed this opinion
in the past, but I don't design the chips. The signed multiply is
still better than anything on the Z180 and it can get the unsigned
job done.
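
For the record, the standard fixup for getting an unsigned product out
of a signed 16x16 multiply is cheap -- a C sketch of the technique
(illustrative only, not the actual Rabbit library routine):

```c
#include <stdint.h>

/* Unsigned 16x16 -> 32 multiply built on a signed 16x16 multiply.
   Reinterpreting an operand with its top bit set as signed effectively
   subtracts 2^16 times the other operand from the product, so add it
   back.  (Assumes the usual two's-complement behavior of the casts.) */
uint32_t umul16(uint16_t a, uint16_t b) {
    int32_t p = (int32_t)(int16_t)a * (int16_t)b;  /* what the hardware gives */
    uint32_t r = (uint32_t)p;
    if (a & 0x8000u) r += (uint32_t)b << 16;       /* correction terms */
    if (b & 0x8000u) r += (uint32_t)a << 16;
    return r;
}
```

Two compares and at most two shifted adds on top of the hardware
multiply -- noticeable, but far better than a software multiply loop.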

> Does this imply that Dynamic C will one day support register
> variables? Will Dynamic C actually keep it's own intermediate
> expressions in the CPU or will it continue to treat the register set
> as a fancy accumulator and spill everything else to the stack?

Sure. Can't promise when though. We are improving the tool
chain and library support daily and are always looking for more
compiler and embedded engineers (shameless recruitment plug :-)
to help us fill in the gaps.

Any engineers interested in leaving the hustle and bustle of <insert
your nasty metro area here> for the beautiful town of Davis, CA can
mail your resume to jbau...@zworld.com :-).

cbfal...@my-deja.com

unread,
Mar 8, 2000, 3:00:00 AM3/8/00
to
In article <38ccc5e8....@News.RTD.com>,

blac...@rtd.com wrote:
> On Mon, 6 Mar 2000 22:25:13 -0800, "Kelly Hall" <ha...@iname.com> was
> overheard to say:
>
> > <blac...@rtd.com> wrote in message
> > news:38c3ebdc....@News.RTD.com...
> > > I strongly suspect that the Dynamic C library implements
> > > "doubles" as "floats" (!!). [this should be simple for
> > > anyone with a copy of the compiler to verify -- as well
> > > as making a guesstimate of the actual size/format of
> > > floats, etc. -- float.h would be a great start! :>]
> >
> > From my tests, I think it's fair to assume that in Dynamic C floats
> > and doubles are the same thing: 32 bit values (8 exponent, 24
> > mantissa).
>
> In which case, the Borland compiler is unfairly "penalized"
> by supporting *true* doubles. Consider that binary operations
> quadruple in time when doubled in length... and, that any
> function computed with a power series would obviously need
> to be evaluated to more terms to get accuracy approaching
> the resolution of the coding scheme...

.... large snippety snip ...

Let's correct this misinformation. Binary operations, in the sense of
multiplications of binary numbers of length n bits, DO NOT NECESSARILY
have to be order n**2 algorithms. Sub-quadratic algorithms are well
known (Karatsuba's is roughly order n**1.58), and other orders quite
suitable for binary math operations on Z80s etc. are also known.
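
For the curious, the simplest such scheme is Karatsuba's: three
half-width multiplies instead of four. A C sketch on 32-bit operands
(the 16-bit split is just for illustration; the same split applies
recursively to longer operands, which is where the sub-quadratic
behavior comes from):

```c
#include <stdint.h>

/* One Karatsuba step: a 32x32 -> 64 product from THREE 16x16
   multiplies instead of the four a schoolbook split would need. */
uint64_t kara_mul32(uint32_t a, uint32_t b) {
    uint32_t ah = a >> 16, al = a & 0xFFFFu;
    uint32_t bh = b >> 16, bl = b & 0xFFFFu;
    uint32_t z0 = al * bl;               /* low  partial product */
    uint32_t z2 = ah * bh;               /* high partial product */
    /* (ah+al) and (bh+bl) can be 17 bits, so widen before multiplying;
       z1 = ah*bl + al*bh falls out without computing either directly */
    uint64_t z1 = (uint64_t)(ah + al) * (bh + bl) - z0 - z2;
    return ((uint64_t)z2 << 32) + (z1 << 16) + (uint64_t)z0;
}
```

The saved multiply costs a few extra adds and subtracts, which is why
the trick only pays off once operands get reasonably long.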

--
Chuck Falconer
(cbfal...@my-deja.com)

cbfal...@my-deja.com

unread,
Mar 8, 2000, 3:00:00 AM3/8/00
to
In article <96nx4.226$ox3....@news.pacbell.net>,
Joel Baumert <jbau...@mail.indirectlink.com> wrote:
> blac...@rtd.com wrote:
> [...]

> >> > It's also interesting to see how poorly Dynamic C (Z180)
> >> > fares against Dynamic C (Rabbit)! Of course, these
> >> > are different compilers with different code generators,
> >> > etc.. But, have any changes been made to the *libraries*
> >> > in one that haven't been retrofitted to the other? It
> [...]

>
> Much of the improvement in the Z80 vs. Rabbit floating point
> is due to three factors. First the rabbit has a 16x16 mult.
> When multiplying two 24bit mantissas, the libraries can take
> advantage of this operation and can do roughly 1/2 the
> multiplies that had to be done on the Z80. Second the new
> exchange operations make the alternate register set more
> accessible... i.e. ex de',hl (2 clocks), ld bc',bc (4 clks)
> ld d',e (4 clocks). These instructions make it possible to
> carefully code a routine keeping intermediate values in
> the registers instead of spilling them to the stack or to
> static storage. Third, the Rabbit instructions run faster
> than their Z80 counterparts.

I hope this use of the alternate register set is not mandated. The
original purpose for them was to have a quick way of saving context
during an interrupt, and thus being able to provide very fast service to
at least one interrupt.

Speedy things like this are one reason to use the Z80 family. I once
selected the 64180 over a V25 because the DMA cycle was one clock
shorter, and that was the critical performance limitation.

cbfal...@my-deja.com

unread,
Mar 8, 2000, 3:00:00 AM3/8/00
to
In article <MDnx4.230$ox3....@news.pacbell.net>,

Joel Baumert <jbau...@mail.indirectlink.com> wrote:
> blac...@rtd.com wrote:
> [...]
> > I love Z80's (et al.). Any time there's "yet another z80 clone"
> > announced, I look with *hope* at the possibility of using it
> > to leverage my existing code base, etc. So, any banchmark
> > is quickly reviewed...
> [...]

> > Of course! And, some of those "business considerations" include:
> > - availability of quality tools at affordable prices
> > - suitability of existing tools and methodologies
> > - availability of staff capable of using the product effectively
> > - anticipated long term availability
> > etc.
> [...]
>
> Assuming the tool chain fit your application would you consider
> the Rabbit??? I have done internal developement on both the Z80
> and the Rabbit and I prefer writing code for the Rabbit. Some
> of the new instructions make low level assembly much easier to
> write.
>
> Examples are ld hl,(sp+8bitoffset) or ld (sp+8bitoffset),hl.
> Other examples are the lcall xpc,mn instruction, which changes the
> upper window and jumps to the offset mn, or ldp instructions which
> allows you to directly or indirectly access memory in the 1MB
> physical address space. The alternate register loads and
> exchanges can be handy for jockeying things around and avoiding
> unnecessary loads and stores to memory.

I assume the 8bitoffsets are always positive - no point in encouraging
programmers to use undefined storage !;-). I remember writing a memo to
intel suggesting that very instruction for the next 8080 back around
1974. Those two instructions greatly simplify code generation and
should be a major speedup.

Joel Baumert

unread,
Mar 8, 2000, 3:00:00 AM3/8/00
to
cbfal...@my-deja.com wrote:
[...]

> I hope this use of the alternate register set is not mandated. The
> original purpose for them was to have a quick way of saving context
> during an interrupt, and thus being able to provide very fast service to
> at least one interrupt.
[...]

Some of our libraries are using the alternate registers for their
processing. If fast interrupts are your primary concern and there is
enough demand we could look into reworking some of these pieces, but
in this news group there has been at least one request for the compiler
to take better advantage of the alternate register set. These two
requests could be competing goals. I'll bring this up as a concern.

Joel

My bills are paid by Zworld, but my words are my own.

Jon Kirwan

unread,
Mar 8, 2000, 3:00:00 AM3/8/00
to
On Wed, 08 Mar 2000 17:04:56 GMT, Joel Baumert
<jbau...@mail.indirectlink.com> wrote:

>Some of our libraries are using the alternate registers for their
>processing. If fast interrupts are your primary concern and there is
>enough demand we could look into reworking some of these pieces, but
>in this news group there has been at least one request for the compiler
>to take better advantage of the alternate register set. These two
>requests could be competing goals. I'll bring this up as a concern.

Compiler option?

Jon

Norman Rogers

unread,
Mar 8, 2000, 3:00:00 AM3/8/00
to
Blackhole is correct when he says that the Borland compiler uses double
routines for most math functions. However it uses and supports single
precision floating point for add, multiply, etc. Probably the math routines
would be considerably faster if there were a single precision library
available. But, as far as I know there isn't. I had no idea that the Borland
compiler was doing this until Blackhole pointed it out. There were no warning
messages.

The ARM compiler has the same problem and it does give a warning message. This
lack of single precision math routines makes the ARM Thumb run slow for things
like square root and sine.

The Keil compiler, in contrast, does not indulge in this foolishness. The
benchmark times suggest that the Keil compiler has a well-optimized floating
point library.

The incorrect idea that floating point is of little use was evidently shared
by the authors of the ARM and Borland libraries. Perhaps they never used
floating point. For many applications floating point is critical. Some
programmers use practically nothing else. Unfortunately the floating point
implementations are so pathetic in commonly used embedded compilers that
programmers really have to avoid floating point for any speed critical
application.

Norm Rogers

Monte Dalrymple

unread,
Mar 8, 2000, 3:00:00 AM3/8/00
to

blac...@rtd.com wrote in message <38c5defb....@News.RTD.com>...
>

<snip>

>Right now, I think the best *practical* bet for *me* is
>to pursue FPGA based approaches (since I'm not optimistic
>about the Toshiba part). If I am putting big FPGA's
>into designs *anyways* (to address I/O requirements, etc.)
>then why not a *bigger* FPGA and tuck the CPU in there
>with it? This then gives me the ability to "fix"
>those aspects of a CPU that are "costing" a particular
>application disproportionately.
>

I apologize for the blatant opportunism but you say that your e-mail
is filtered. If you are serious about embedding a CPU into an FPGA
and you like the Z180, look at http://www.systemyde.com for the CPU
only (no peripherals or MMU) in Verilog HDL.

Monte Dalrymple
(Yes, the designer of the Rabbit)

Kelly Hall

unread,
Mar 8, 2000, 3:00:00 AM3/8/00
to
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

"Norman Rogers" <norman...@altavista.net> wrote in message
news:38C730CB...@altavista.net...


> Blackhole is correct when he says that the Borland compiler uses
> double routines for most math functions. However it uses and
> supports single precision floating point for add, multiply, etc.
> Probably the math routines would be considerbly faster if there
> were a single precision library available. But, as far as I know
> there isn't. I had no idea that the Borland compiler was doing this
> until Blackhole pointed it out. There were no warning messages.

I generally agree with you that the compiler ought to be a little
more clear what it's doing. On the other hand, I'm apt to cut
Borland some slack because I've never seen them claim that their
product was targeted at embedded computing platforms. For five
years or so most of us who've used their product on our desktop
machines have had access to a high quality 80 bit hardware floating
point unit. With resources like that available, why not just coerce
floats into the native 80 bit format and let the FPU deal with the
problem?

> The ARM compiler has the same problem and it does give a warning
> message. This lack of single precision math routines makes the ARM
> Thumb run slow for things like square root and sine.

I can appreciate the desire on ARM's part to follow the C standard
and just coerce floats to doubles for library routines. But I agree
with you that this behavior is undesirable for embedded platforms.
The programmer should be actively involved in selecting the FP
accuracy in his/her application. Of course, ARMs go into cell phones
and why do cell phones need floating point? Hard to know what they
were thinking.

> The incorrect idea that floating point is of little use was
> evidently shared by the authors of the ARM and Borland libraries.
> Perhaps they never used floating point. For many applications
> floating point is critical. Some programmers use practically
> nothing else. Unfortunately the floating point implementations are so
> pathetic in commonly used embedded compilers that programmers
> really have to avoid floating point for any speed critical
> application.

So allow me to ask two questions:
a) why not offer the Rabbit developers a choice between fast and
accurate libraries?
b) why not offer the Rabbit customers a die with a hardware floating
point unit? Something as small as the 8087 would be *highly* useful.

Kelly



Joel Baumert

Mar 9, 2000

> Compiler option?

It would have to be both library and compiler option
because I have written assembly code in the libraries
that makes extensive use of the alternate registers.
I'm afraid that we would need to see enough customer
demand to rewrite the library code. No promises, but
I'll bring it up as an issue.

blac...@rtd.com

Mar 9, 2000
On Wed, 8 Mar 2000 13:45:23 +0100, "David Brown"
<david....@westcontrol.com> was overheard to say:

> blac...@rtd.com wrote in message <38cbc5de....@News.RTD.com>...

[32 bit operations worthwhile benchmark?]

> Fair enough - include 32 bit operations. You might also want to include
> other basic operations, like the equivalent of strcpy() (although I am

I'm always amused at how many "specialty" instructions get added
to instruction sets with *some* intent that they could benefit HLL
implementations (e.g., CPIR for strlen(3)) yet how rarely those
seem to be capitalized upon!

> thinking mainly in assembly here to keep the microcontroller benchmarking
> separate from compiler benchmarking). Some micros with paging can copy
> memory within a page fairly quickly, but are very slow across pages.

> >> Maximum speed of a software SPI (tests bit instruction timing)
> >
> >See UART comment below
> >
> >> Minimum interrupt overhead, including saving/restoring critical
> >> registers
> >
> >Yes. Or, more specifically, cost of saving/restoring the
> >entire processor state (think: MTOS).
>
> Most small systems do not implement pre-emptive multi-tasking - working with
> multiple stacks and contexts is very inefficient on many small micros. See
> my comments to your comments below.

Well, depends on how you define "small". :> The Z180 (and by
the same token, the Rabbit) I think are quite rich for
the "small" processor end of the spectrum. Certainly
more than adequate for "healthy" MTOS and RTOS implementations.

Even the smaller ends of the spectrum -- PICs, etc. -- can
support limited multitasking environments *if* you limit
your expectations of those environments (often considerably!)
E.g., I use a nonpreemptive multitasking executive quite
nicely in a PIC...

The idea of working *without* a multitasking framework seems
severely crippling (YMMV... I've become too "spoiled" over
the years by that programming model :>)

> >I'd rather the time spent on marketing hype/benchmarks
> >be spent, instead, improving the quality of the *product*
> >(since *that* helps me in far more tangible ways than
> >any benchmark ever will!)
>

> Wishful thinking :-) But we could hope for more accurate hype and
> benchmarks.

Yeah, and Whirrled Peas... :-/

blac...@rtd.com

Mar 9, 2000
On Wed, 08 Mar 2000 07:15:17 GMT, Joel Baumert
<jbau...@mail.indirectlink.com> was overheard to say:

> blac...@rtd.com wrote:
> [...]


> >> > It's also interesting to see how poorly Dynamic C (Z180)
> >> > fares against Dynamic C (Rabbit)! Of course, these
> >> > are different compilers with different code generators,
> >> > etc.. But, have any changes been made to the *libraries*
> >> > in one that haven't been retrofitted to the other? It

> [...]
>
> Much of the improvement in the Z80 vs. Rabbit floating point
> is due to three factors. First the rabbit has a 16x16 mult.
> When multiplying two 24bit mantissas, the libraries can take
> advantage of this operation and can do roughly 1/2 the
> multiplies that had to be done on the Z80. Second the new

Agreed. Though it doesn't help with division, denormalizing
numbers for add/sub, etc.

> exchange operations make the alternate register set more
> accessible... i.e. ex de',hl (2 clocks), ld bc',bc (4 clks)
> ld d',e (4 clocks). These instructions make it possible to
> carefully code a routine keeping intermediate values in
> the registers instead of spilling them to the stack or to

Yes, though they make the "trick" of using the alternate register
set for a high frequency interrupt service routine's "state"
impossible. So, you end up burdening an ISR (which typically
*needs* to be fast) with an extra cost to save and restore its state.
It's a lot more expensive to:
push af
push bc
push de
push hl
...
pop hl
pop de
pop bc
pop af
than it is to:
ex af,af'
exx
...
exx
ex af',af
right? Perhaps the compiler has a switch that tells it
to consider the alternate register set as "precious"?

> static storage. Third, the Rabbit instructions run faster
> than their Z80 counterparts.

What about the actual *representations* used? For
example, Mr. Rogers likes to point to how *slow* the
Borland compiler was on the 188. But, I know the
Borland compiler implements <math.h> functions as
The Standard declares them -- namely, accepting *double*
arguments and computing *double* results. Note
that the Standard also declares doubles to be at least
10 significant digits, etc.

Yet, I *suspect* that Dynamic C cheats and implements
a single type of floating point representation -- a
"float". And, even the format of *that* representation
hasn't been disclosed (here or in the "benchmark").

Could you shed some light on that? As well as the
accuracy of each *algorithm* employed (i.e. if using
power series, how many terms, etc.)

Is the library completely reentrant (think in terms
of your "preemptive" environment)? What is the
overhead associated with the library (i.e. auto
variables and statics) as well as the cost of
saving its state?

> Zworld may pay my bills, but I speak for myself.

Thanks for the disclaimer! Sure would be nice if Mr Rogers
and Mr Matthews had been as "up front" from the beginning
disclosing their affiliations! Perhaps you can show
them how to add a .signature in Mozilla?

blac...@rtd.com

Mar 9, 2000
On Wed, 08 Mar 2000 07:51:08 GMT, Joel Baumert
<jbau...@mail.indirectlink.com> was overheard to say:

> blac...@rtd.com wrote:
> [...]


> > I love Z80's (et al.). Any time there's "yet another z80 clone"
> > announced, I look with *hope* at the possibility of using it
> > to leverage my existing code base, etc. So, any benchmark
> > is quickly reviewed...

> [...]


> > Of course! And, some of those "business considerations" include:
> > - availability of quality tools at affordable prices
> > - suitability of existing tools and methodologies
> > - availability of staff capable of using the product effectively
> > - anticipated long term availability
> > etc.

> [...]
>
> Assuming the tool chain fit your application would you consider
> the Rabbit???

Of course! But note that the toolchain is more than just
editor/compiler/assembler/librarian/linker/debugger, etc.
Giving up the use of an ICE would be painful -- though
I imagine I could write a disassembler for the logic analyzer
and use a more "passive" approach to debugging :-(

But, there are also tools for source code control, regression
testing scaffolding, bug tracking, etc. I try to implement
a "formal" design environment since some of the products I
design need regulatory agency approvals (and "ad hoc"
design methodologies are one sure-fire way to *fail*
those certifications!)

As a result, I have been steadily moving my toolchain into
the "open source" world. This is primarily to allow me to
support products "down the road" (10 - 20 years is not an
uncommon service life for many of these systems). If the
vendor has gone out of business, abandoned the product line
or simply decided not to support an "old" version of their
product (can you spell "Microsoft"?), then I'm screwed...

With the sources, I can usually patch the bug in the tool,
demonstrate the nature of this patch to the regulator/client
(i.e. assure them that it doesn't break anything else!) and
deal with any changes to the system in a more incremental
fashion (vs. a complete re-validation/certification/etc.)

Currently, the only tools that I have left in
the "DOS" world are the compilers and debuggers for some
of the processors. I can run a "DOS emulation" (from FBSD)
and run these since they tend to be well-behaved applications
(i.e. don't use funky DOS extensions, EMS, etc.).

This is good -- 'cuz it lets me continue to use BRIEF, too! :>
All due respect to Mr. Stallman but emacs(1) is really a pain!
And, vi(1) is almost as bad as a trip to the dentist's for
large projects!!! :-(

I don't expect ZWorld/Rabbit to adopt that model.
Indeed, Mr. Rogers seemed dismayed at the suggestion
since he couldn't figure out how to "make any money"
with it (apparently, he saw it as losing the ability
to *sell* the toolchain instead of *gaining* the
"free" support of the user base *for* that toolchain
and thereby reducing his costs). Interesting to
note that most of the semiconductor manufacturers
are looking for ways to get *out* of that market
since they have found it to be so labor intensive
(you can improve an IC process and save money but
can't just do a "die shrink" to increase the
profitability of your "tool team" :>)

So, I've started exploring that avenue as,
perhaps, a "necessary evil" if I want to be assured
a continuous supply of a "quality" product (tool).
I've already hacked up a Z180-ish port of gas(1)
that seems to be as robust as the DOS-based tools
I've been using. After adding support for some
embelishments to support the MMU more "seemlessly",
I'll see how painful it is to convert the symbol table
to a format suitable for the ICE.

I don't cherish the idea of building new tools.
And gas(1) is a piece of cake when compared to
a gcc(1) port! I'd much rather someone *else*
maintain the toolchain -- but, that comes at
a cost: will it be as "correct" as my existing
toolchain (i.e. code *accuracy* is far more
important than *speed*) and will (prompt, affordable)
support continue to be available in the *years*
to come...?

The marketing answers to both of these questions
are, of course, "YES!". But, *reality* often
proves to be quite a different story...

> I have done internal development on both the Z80
> and the Rabbit and I prefer writing code for the Rabbit. Some
> of the new instructions make low level assembly much easier to
> write.

I try to write VERY LITTLE assembly language code
since it affects portability and the skill level
required of the maintainer of that code. If, for
example, I design a product for a client but
my chosen *implementation* requires the client to
scour the Earth in search of someone who knows
ALGOL *and* the hardware characteristics of a
particular processor, I haven't done my job well!
(based on *my* criteria for "satisfactoriness")

So, it's very important (to me) that a compiler
exist, be "strictly conforming" (for reasons
mentioned above and previously) and, *correct*
(i.e. bug free)

I've invested many dozens of hours chasing down
and documenting compiler/linker/librarian/debugger
bugs and getting them repaired. I seem to have
an unhealthy ability to do things that aggravate
compilers, etc. :-( I'd like not to just
casually discard all of that investment in the
*hopes* of a toolchain of comparable quality.

This, IMO, was the biggest drawback to the
lack of binary compatibility that the Rabbit
forfeited in its design. I *can't* use my existing
tools (at least not the ones most intimately
tied to the iron) since they are designed around
an entirely different processor model.

> Examples are ld hl,(sp+8bitoffset) or ld (sp+8bitoffset),hl.
> Other examples are the lcall xpc,mn instruction, which changes the
> upper window and jumps to the offset mn, or ldp instructions which
> allow you to directly or indirectly access memory in the 1MB
> physical address space. The alternate register loads and
> exchanges can be handy for jockeying things around and avoiding
> unnecessary loads and stores to memory.
>

> I don't know how it compares to some of the processors you have
> been mentioning, I've mainly used the X86, Z180, and Rabbit,
> but depending on your application the Rabbit can have advantages
> over the other two.

My past experience is a bit more diverse... :>
But, regardless, I *don't* want to have to deal with
the raw iron. Part of the "value" that I add in my
system designs is making systems that "average"
developers can maintain.

For example, a "programmer" who has only written for
desktop environments previously (e.g., most folks
fresh out of college) can sit down and read and
modify my code without ever realizing that the
code isn't *running* on a PC. The details of
the operating system, file system, etc. are all
magically hidden. He can fopen() even though there
is no disk in the machine, etc.

If I opt for an (possibly more *expedient*, undeniably
more efficient -- in terms of execution speed) implementation
that too heavily weights ASM and exposure to the raw iron,
then I put clients back in the position of *needing*
me (or someone *like* me) to *maintain* those products.

This is not desirable from *either* of our perspectives!
- I'm more "expensive" than the "run of the mill"
programmer fresh out of school, etc. So, his maintenance
costs would be higher
- I'm not real excited about "routine maintenance" of an
existing/mature product (there's much less opportunity
to get exposed to new ideas, technologies, markets, etc.)
especially over 10 - 20 year life cycles! :-(

(shrug) See what you can offer. I'd be interested in seeing
what you come up with as well as willing to offer suggestions
for features, approaches, etc.

And, at the very least, I'm *always* interested in arguing
different points of view on a subject! :>

Joel Baumert

Mar 9, 2000
blac...@rtd.com wrote:
[...]

>> exchange operations make the alternate register set more
>> accessible... i.e. ex de',hl (2 clocks), ld bc',bc (4 clks)

>> ld d',e (4 clocks). These instructions make it possible to
>> carefully code a routine keeping intermediate values in
>> the registers instead of spilling them to the stack or to

> Yes, though they make the "trick" of using the alternate register
> set for a high frequency interrupt service routine's "state"
> impossible. So, you end up burdening an ISR (which typically
> *needs* to be fast) with an extra cost to save and restore its state.
> It's a lot more expensive to:
> push af
> push bc
> push de
> push hl
> ...
> pop hl
> pop de
> pop bc
> pop af
> than it is to:
> ex af,af'
> exx
> ...
> exx
> ex af',af

Agreed. Using the current floating point and some of the
other library routines will cause problems if you want to
use interrupt routines organized in this way. It's
possible that your work inside the interrupt routine
wouldn't require you to push all four register pairs,
but I do understand your concern...

>> static storage. Third, the Rabbit instructions run faster
>> than their Z80 counterparts.

> What about the actual *representations* used? For
> example, Mr. Rogers likes to point to how *slow* the
> Borland compiler was on the 188. But, I know the
> Borland compiler implements <math.h> functions as
> The Standard declares them -- namely, accepting *double*
> arguments and computing *double* results. Note
> that the Standard also declares doubles to be at least
> 10 significant digits, etc.

[...]

I didn't say anything about the floating point
instructions. I'm talking about comparable assembly
instructions, which in general run faster; some of
the new instructions collapse commonly used sequences
of instructions into one faster instruction.

I don't have a copy of the standard in front of me, but
from "C A Reference Manual 4th ed" section 5.2: "C does
not dictate the sizes to be used for the floating-point
types, or even that they be different." Currently,
Dynamic C supports the 32bit floating point format. The
compiler represents both floats and doubles as 32bit
numbers. Maybe I misunderstood your statement.

I wasn't involved in the benchmarking so I don't know
if Norm changed the Borland libraries to use the 32bit
format. Has anyone done this or know how easy it is
to do??? I'll ask Norm the next time I see him.

[...]


> Is the library completely reentrant (think in terms
> of your "preemptive" environment)? What is the
> overhead associated with the library (i.e. auto
> variables and statics) as well as the cost of
> saving it's state?

I understand what reentrant is :-). Floating point
add, subtract, multiply and divide _are_ reentrant,
though they do use the alternate registers to get
things done. I looked at the transcendentals and they
look reentrant, but it would take some more
investigation for me to be completely convinced.

>> Zworld may pay my bills, but I speak for myself.

> Thanks for the disclaimer! Sure would be nice if Mr Rogers
> and Mr Matthews had been as "up front" from the beginning
> disclosing their affiliations! Perhaps you can show
> them how to add a .signature in Mozilla?

I'll see what I can do, but I have been using tin
soooooo lonnnnnggggggg!!!!! I guess it is probably point
and click :-).

Joel

Mark Borgerson

Mar 9, 2000

Joel Baumert wrote:
<<SNIP>>

> Any engineers interested in leaving the hustle and bustle of <insert
> you nasty metro area here> for the beautiful town of Davis, CA can
> mail your resume to jbau...@zworld.com :-).
>

Hmmmm, so I should consider leaving Corvallis OR and my 3400sqft house
on 2 acres for Davis??? How are housing prices there these days??? I
last lived there before I graduated from UCD in '68. I suspect that both
the town and the college have grown quite a bit in 32 years! I do seem
to be caught in the 'mid-sized city with cow college' syndrome, though.

Can you still paddle canoes down Putah creek through the middle of
campus??? Is there still an airport on campus??

Mark Borgerson

(Curious Ex-Aggie)

Joel Baumert

Mar 9, 2000
Mark Borgerson <ma...@oes.to> wrote:


> Joel Baumert wrote:
> <<SNIP>>
>> Any engineers interested in leaving the hustle and bustle of <insert
>> you nasty metro area here> for the beautiful town of Davis, CA can
>> mail your resume to jbau...@zworld.com :-).
>>

> Hmmmm, so I should consider leaving Corvallis OR and my 3400sqft house
> on 2 acres for Davis??? How are housing prices there these days??? I
> last lived there before I graduated from UCD in '68. I suspect that both
> the town and the college have grown quite a bit in 32 years! I do seem
> to be caught in the 'mid-sized city with cow college' syndrome, though.

Housing prices are higher than the immediate area because of the
strong school system and calm atmosphere. If you want your two
acres you will have to live near Davis, not in it :-) because
that'll cost you. Houses in the 3400sqft range will probably cost
you in the $400s, hard to say because most of the ads just list
bedrooms, not sqft.

> Can you still paddle canoes down Putah creek through the middle of
> campus??? Is there still an airport on campus??

There is still the airport, but I don't know about paddling through
Putah creek, I think you would probably disturb some endangered
pond algae :-).

> Mark Borgerson

> (Curious Ex-Aggie)

Joel Baumert

Mar 9, 2000
blac...@rtd.com wrote:
[...]
>> Assuming the tool chain fit your application would you consider
>> the Rabbit???

> Of course! But note that the toolchain is more than just
> editor/compiler/assembler/librarian/linker/debugger, etc.
> Giving up the use of an ICE would be painful -- though
> I imagine I could write a disassembler for the logic analyzer
> and use a more "passive" approach to debugging :-(

[...]


> As a result, I have been steadily moving my toolchain into
> the "open source" world. This is primarily to allow me to
> support products "down the road" (10 - 20 years is not an
> uncommon service life for many of these systems). If the
> vendor has gone out of business, abandoned the product line
> or simply decided not to support an "old" version of their
> product (can you spell "Microsoft"?), then I'm screwed...

There isn't an open source compiler for the Rabbit that I'm
aware of. Probably the best starting points are either
SDCC or LCC, since they already have Z80 support. I think
that GCC would be a tough port, but I haven't done much
more than peek under the hood.

> I don't expect ZWorld/Rabbit to adopt that model.
> Indeed, Mr. Rogers seemed dismayed at the suggestion
> since he couldn't figure out how to "make any money"
> with it (apparently, he saw it as losing the ability
> to *sell* the toolchain instead of *gaining* the
> "free" support of the user base *for* that toolchain
> and thereby reducing his costs). Interesting to
> note that most of the semiconductor manufacturers
> are looking for ways to get *out* of that market
> since they have found it to be so labor intensive
> (you can improve an IC process and save money but
> can't just do a "die shrink" to increase the
> profitability of your "tool team" :>)

Hey, wait a minute I think you are trying to
eliminate my job. I guess I could find something
else to do :-).

> (shrug) See what you can offer. I'd be interested in seeing
> what you come up with as well as willing to offer suggestions
> for features, approaches, etc.

> And, at the very least, I'm *always* interested in arguing
> different points of view on a subject! :>

Yeah, me too. It drives my wife crazy...

blac...@rtd.com

Mar 9, 2000
On Thu, 09 Mar 2000 08:17:20 GMT, Joel Baumert
<jbau...@mail.indirectlink.com> was overheard to say:

> blac...@rtd.com wrote:
> [...]


> >> Assuming the tool chain fit your application would you consider
> >> the Rabbit???

[...]

> > As a result, I have been steadily moving my toolchain into
> > the "open source" world. This is primarily to allow me to
> > support products "down the road" (10 - 20 years is not an
> > uncommon service life for many of these systems). If the
> > vendor has gone out of business, abandoned the product line
> > or simply decided not to support an "old" version of their
> > product (can you spell "Microsoft"?), then I'm screwed...
>

> There isn't an open source compiler for the Rabbit that I'm

Correct.

> aware of. Probably the best starting points are either
> SDCC or LCC since they already have Z80 support. I think
> that GCC would be a tough port, but I haven't done much
> more than peek under the hood.

Yes, gcc(1) appears to adhere to the philosophy:
"Make nothing simply if a way can be found to
make it Wonderful and Complex"
:-/

But, gcc(1) already works with the rest of my
toolchain and would require less effort to convince
clients of its "correctness" (in the grand scheme
of things).

The copyleft is annoying but, since I don't plan on
*selling* the compiler, there's no problem! :>

> > I don't expect ZWorld/Rabbit to adopt that model.
> > Indeed, Mr. Rogers seemed dismayed at the suggestion
> > since he couldn't figure out how to "make any money"
> > with it (apparently, he saw it as losing the ability
> > to *sell* the toolchain instead of *gaining* the
> > "free" support of the user base *for* that toolchain
> > and thereby reducing his costs). Interesting to
> > note that most of the semiconductor manufacturers
> > are looking for ways to get *out* of that market
> > since they have found it to be so labor intensive
> > (you can improve an IC process and save money but
> > can't just do a "die shrink" to increase the
> > profitability of your "tool team" :>)
>

> Hey, wait a minute I think you are trying to
> eliminate my job. I guess I could find something
> else to do :-).

:> No, not *trying* to eliminate your job... just
pointing out how other companies are rationalizing
adopting the open source model. It's a decision
Mr Rogers (as Owner of Zworld/Rabbit?) has to
make for himself -- does he want to sell compilers
or hardware... (knowing that the cost of selling
hardware tends to go down with time and scale
whereas the cost of selling tools only goes *up*!)

> > And, at the very least, I'm *always* interested in arguing
> > different points of view on a subject! :>
>

> Yeah, me too. It drives my wife crazy...

If two people agree on everything, one of them is "unnecessary".
Or, alternatively, it's much harder to *learn* anything new if
you're only hearing your own beliefs/knowledge coming back
at you...

Though I wouldn't tell your *wife* that! She might
take it as a hint that *she* should argue with you more!!

blac...@rtd.com

Mar 9, 2000
On Thu, 09 Mar 2000 02:25:15 GMT, Joel Baumert
<jbau...@mail.indirectlink.com> was overheard to say:

Note that *one* ISR might only need r.a, r.bc and r.de while
another might need r.a and r.hl. As long as the two never
are active concurrently, *neither* need be concerned with
anything more than ex af,af' and exx to save state.

> >> static storage. Third, the Rabbit instructions run faster
> >> than their Z80 counterparts.
>
> > What about the actual *representations* used? For
> > example, Mr. Rogers likes to point to how *slow* the
> > Borland compiler was on the 188. But, I know the
> > Borland compiler implements <math.h> functions as
> > The Standard declares them -- namely, accepting *double*
> > arguments and computing *double* results. Note
> > that the Standard also declares doubles to be at least
> > 10 significant digits, etc.
> [...]
>
> I didn't say anything about the floating point

Yes, I know! I was attempting to get some clarification of
this since neither Mr. Rogers nor Mr. Matthews appeared
willing to share any *technical* information on the
subject.

> instructions. I'm talking about comparable assembly
> instructions which in general run faster, and some
> of the new instructions changed commonly used sequences
> of instructions into one faster instruction.
>
> I don't have a copy of the standard in front of me, but
> from "C A Reference Manual 4th ed" section 5.2: "C does
> not dictate the sizes to be used for the floating-point
> types, or even that they be different." Currently,

Right. It also doesn't dictate the size of *ints*!
But, you'll recall that short must be capable of
storing a number that would require 16 bits in a
binary implementation (my long way of saying that
a short >= 16 bits) and that an int must be at
least as large as a short and a long must be
(blah blah blah) 32 bits.

If you check the language of the floating point data types
(float, double, long double) I think you will see that
a double must not be smaller than a float. A long
double must not be smaller than a double...

I.e. just like a short can't be 64 bits (!) unless
an int is at LEAST 64 bits and a long is at least
that size, as well.

In addition, there are minimum values specified for
the number of decimal digits that must be representable
by float, double, etc. I think these are 6 and 10,
respectively. So, a 32 bit float satisfies the
"6" requirement (assuming at least 20 bits are
set aside for the "mantissa" [sic] and at least 7
bits for the exponent -- the IEEE "single" format
commonly used *does* satisfy these requirements).

A 32 bit double satisfies the requirement of being
AT LEAST as large as a float (assuming the same
format used for the "float"). But, it won't
provide the minimum *10* (decimal) digits of
mantissa (which requires at least 30 bits to
encode) and at least another 7 bits for the
exponent (IIRC, the minimum range of exponent
values for double is not any larger than that
required for a float?)

And, of course, a sign bit... :>

So, double needs (I think) at least a 5 byte
representation to "just squeak by" and most
doubles tend to be 8 bytes.

(sigh) I'm *supposed* to have this crap committed
to memory but mine seems to be "leaking bits"
lately so someone with a copy of The Standard
in front of them could correct these numbers
(I'll look for my copy next time I'm rummaging
around in the garage...)

> Dynamic C supports the 32bit floating point format. The
> compiler represents both floats and doubles as 32bit
> numbers. Maybe I misunderstood your statement.

See above. The point was made in the context of the
benchmarks mentioned previously. An attempt to
show that there MOST PROBABLY weren't "apples for apples"
comparisons being made. I had imagined that this
would be an aspect that most folks who write in C
would readily recognize as a likely source of
one such discrepancy.

> I wasn't involved in the benchmarking so I don't know
> if Norm changed the Borland libraries to use the 32bit
> format. Has anyone done this or know how easy it is
> to do??? I'll ask Norm the next time I see him.

IIRC, there aren't any "command line switches" to
magically demote all doubles to floats. So, that
change would require a rewrite of their library
(which, I believe, is only provided in ASM form).
Perhaps someone who hammers on BCC more regularly
can comment on this?

> [...]
> > Is the library completely reentrant (think in terms
> > of your "preemptive" environment)? What is the
> > overhead associated with the library (i.e. auto
> > variables and statics) as well as the cost of
> > saving its state?
>
> I understand what reentrant is :-). Floating point

My comment re: the preemptive environment was intended
to remind you that you might not have the liberty
to decide when the context switch occurs -- as you *do*
have in the nonpreemptive environment.

> add, subtract, multiply and divide _are_ reentrant,
> though they do use the alternate registers to get

Do they build a stack frame and operate *in* it?
Or, just save all the registers and work within the
CPU? I.e. :

What is the overhead associated with the library
(i.e. auto variables and statics) as well as the
cost of saving its state?

> things done. I looked at the transcendentals and they


> look reentrant, but it would take some more
> investigation for me to be completely convinced.
>

> >> Zworld may pay my bills, but I speak for myself.
>

> > Thanks for the disclaimer! Sure would be nice if Mr Rogers
> > and Mr Matthews had been as "up front" from the beginning
> > disclosing their affiliations! Perhaps you can show
> > them how to add a .signature in Mozilla?
>
> I'll see what I can do, but I have been using tin
> soooooo lonnnnnggggggg!!!!! I guess it is probably point
> and click :-).

Dunno since I use Agent from Windows and trn(1) from my
shell. :>

blac...@rtd.com

Mar 9, 2000
On Wed, 08 Mar 2000 15:30:58 GMT, cbfal...@my-deja.com was overheard
to say:

> In article <38ccc5e8....@News.RTD.com>,
> blac...@rtd.com wrote:
> > On Mon, 6 Mar 2000 22:25:13 -0800, "Kelly Hall" <ha...@iname.com> was
> >

> > > <blac...@rtd.com> wrote in message
> > > news:38c3ebdc....@News.RTD.com...
> > > > I strongly suspect that the Dynamic C library implements
> > > > "doubles" as "floats" (!!). [this should be simple for
> > > > anyone with a copy of the compiler to verify -- as well
> > > > as making a guesstimate of the actual size/format of
> > > > floats, etc. -- float.h would be a great start! :>]
> > >
> > > From my tests, I think it's fair to assume that in Dynamic C floats
> > > and doubles are the same thing: 32 bit values (8 exponent, 24
> > > mantissa).
> >
> > In which case, the Borland compiler is unfairly "penalized"
> > by supporting *true* doubles. Consider that binary operations
> > quadruple in time when doubled in length... and, that any
> > function computed with a power series would obviously need
> > to be evaluated to more terms to get accuracy approaching
> > the resolution of the coding scheme...
>
> .... large snippety snip ...
>
> Let's correct this misinformation. Binary operations, in the sense of
> multiplications of binary numbers of length n bits, DO NOT NECESSARILY
> have to be order n**2 algorithms. There are n log n algorithms known,
> and other orders quite suitable for binary math operations on Z80s etc.
> are also known.

Careful since many "textbook" algorithms only *roughly*
scale in this manner. They fail to take account of the
particular details of practical *implementations*
(which, of course, they can't be concerned with since
they are just making generalizations).

For a trivial example, you'll agree that I can
add two *8* bit numbers (on most CPUs) in
the same time that I could add two *4* bit
numbers. So, why isn't the 4 bit operation
considerably faster? (or, the 8 bit operation
considerably *slower*?)

Of course, the answer is obvious. :>

Architectural issues muddy the "textbook"
approaches to algorithms and their relative
costs. So, while an algorithm might
appear to scale nicely from 8 to 16 bits
in a given architecture, there is no
guarantee that a 32 bit algorithm will
scale as nicely to a 64 bit algorithm!

Simply because of the extra housekeeping
overhead that sneaks in when you start
to exceed the resources available to a
given architecture (i.e. run out of
registers!)

You'll note that the suggested floating point
formats in this discussion (Borland/IEEE) are
actually *worse* than the "2:1" change implied.

For example, the "mantissa" [sic] in a float
is 24 bits (23 stored, plus the implicit leading 1)
while the (IEEE) double requires *53* bits. Or, a
3 byte quantity has grown to a *7* byte quantity
(though the 1 byte exponent has only grown to
11 bits -- less than 2 bytes).

Suddenly, even using *all* of the registers
in the Rabbit, it gets difficult to keep just
two "mantissa"s in the processor (even using
IX/IY) let alone any temporary results,
iteration counters, pointers, etc.

When implementing a 32 bit operation, this same
complication was not present (or, in as significant
a manner).

Note, I'm not advocating that Rabbit support a
particular float or double format. Rather, the
point was that at least one of the compilers
mentioned in the benchmark was doing considerably
more than processing "float"s. And, that
this wasn't spelled out in the benchmark
nor "normalized" in the metrics obtained from
those comparisons.

cbfal...@my-deja.com

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to
In article <38c98526....@News.RTD.com>,

I don't really recommend any n log n operations for Wascally
Wabbits etc - those generally involve full transforms and are
not practical for real problems. The speedups that are useful
usually involve the relations:

N = 2^n, n = natural word size

(Na + b) * (Nc + d) = N^2*ac + N*(ad + bc) + bd

so that the presence of a 16*16 --> 32 multiply makes creation
of a 32 * 32 --> 64 much easier, etc. For normalized floating
point significands, the bd term is going to be largely discarded
anyhow (rounding considerations) and can be simplified. All this
makes doubling accuracy use 3 multiplies and some adds, and speed
is much better than order n^2.

My high speed floating point for the 8080 used 16 bit significands
(quite adequate for the jobs, with proper rounding), performed all
arithmetic in the registers (no ix, iy, hl' etc in an 8080), and
was significantly sped up by basing multiplication on a routine
that did 8 * 16 --> 24 bits. Two of these operations handled the
16 * 16 --> 24 bits (rounded). The resultant package was about 10x
faster than any competition, with 4.7 digit accuracy. (maybe 4.5,
depends on how you define things :-). Complete with transcendentals
it gobbled up about 2k of code space, as I recall.

BTW, I am not recommending the 16 bit significand for general use,
but the package was more accurate at such things as matrix inversion
than Bill's BASIC (with 24 bit significands and poor rounding) at the
time. (This was circa 1980). 3rd order curve fitting was about the
worst thing I had to handle.

cbfal...@my-deja.com

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to
In article <38c9fd24....@News.RTD.com>,

.... snipped ...

>
> So, I've started exploring that avenue as,
> perhaps, a "necessary evil" if I want to be assured
> a continuous supply of a "quality" product (tool).
> I've already hacked up a Z180-ish port of gas(1)
> that seems to be as robust as the DOS-based tools
> I've been using. After adding support for some
> embellishments to support the MMU more "seamlessly",
> I'll see how painful it is to convert the symbol table
> to a format suitable for the ICE.
>
> I don't cherish the idea of building new tools.
> And gas(1) is a piece of cake when compared to
> a gcc(1) port! I'd much rather someone *else*
> maintain the toolchain -- but, that comes at
> a cost: will it be as "correct" as my existing
> toolchain (i.e. code *accuracy* is far more
> important than *speed*) and will (prompt, affordable)
> support continue to be available in the *years*
> to come...?
>

... snipped ...


>
> So, it's very important (to me) that a compiler
> exist, be "strictly conforming" (for reasons
> mentioned above and previously) and, *correct*
> (i.e. bug free)
>
> I've invested many dozens of hours chasing down
> and documenting compiler/linker/librarian/debugger
> bugs and getting them repaired. I seem to have
> an unhealthy ability to do things that aggravate
> compilers, etc. :-( I'd like not to just
> casually discard all of that investment in the
> *hopes* of a toolchain of comparable quality.
>
> This, IMO, was the biggest drawback to the
> lack of binary compatibility that the Rabbit
> forfeited in its design. I *can't* use my existing
> tools (at least not the ones most intimately
> tied to the iron) since they are designed around
> an entirely different processor model.
>

.... snipped ...

Are you making your gas ports etc. available? Sounds very useful.

Don't your old tools include SLRLINK, Z80ASM, OPTASM, etc.? Those
were very nice, but never took over the world the way they should
have. Unfortunately they are no longer available.

Norman Rogers

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to
In answer to the questions:

The floating point is accurate as far as 32 bits goes. Doing computations
to 80 bits is very expensive without hardware support, and is only useful
in specialized situations, so developing a high-precision floating-point
library has a low priority -- though it would be useful to some people.

We are considering hardware floating point in future Rabbit processors.

Norm Rogers

Jon Kirwan

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to
On Thu, 09 Mar 2000 01:25:02 GMT, blac...@rtd.com wrote:

>Indeed, Mr. Rogers seemed dismayed at the suggestion
>since he couldn't figure out how to "make any money"
>with it (apparently, he saw it as losing the ability
>to *sell* the toolchain instead of *gaining* the
>"free" support of the user base *for* that toolchain
>and thereby reducing his costs).

I'd like to add another possibility: an opened toolchain will likely
become a far more robust toolchain. With the source exposed to the
eyes of many skilled programmers with a wide variety of talents, it
has an opportunity that closed sources cannot enjoy. It may be that
the more robust packages will one day be those derived from open
source.

It can be hard for the creator to do such things, though: to make
sure that your idea succeeds through open and honest nurturing of it,
even if that means that your own sense of personal control and
ownership has to suffer some in the process.

Jon
