
Is Forth About Tinkering Rather than Design?


rickman

Jun 2, 2015, 4:05:02 AM
I was thinking about a timing issue and recalled that Chuck Moore had to
write code to test his idea that he could use timing loops in the GA144
to control a video display. The timing loops didn't work and he had to
add a crystal oscillator. I think instead he ended up using a ceramic
resonator because he couldn't get the crystal to start up reliably.
Neither of these outcomes was unforeseeable.

I also remember taking my swing at the GA144 and realizing, through
analysis, that the bottleneck in the design would be the memory
performance. So I tried to analyze a sync DRAM interface, only to be
limited by the timing data provided for the GA144. When I asked for more
detailed timing info on the instructions for comms and I/O, I was told I
didn't need that info; I should just build the design, code it up, and
see if it works. When I said that was not the way I work, that rather I
do analysis before I design, I was told that was how that person had
been doing Forth design for 40 years: by building and testing.

To me, when I should be able to do analysis to determine limitations and
feasibility, it just seems silly to spend time on building and testing
only to find things don't work. I know I would never try to use timing
loops to generate video signals. I think a simple analysis would show
the lack of stability. Timing data is just so basic to the definition
of a digital device that I can't see any reason not to use it to save
days or weeks of development and testing that might well prove
fruitless.
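
As a rough illustration of the kind of back-of-envelope check I mean
(the numbers are assumptions for illustration, not GA144 data):

\ An NTSC line is ~63.5 us; assume a stable picture needs well under
\ 1 us of line-to-line timing error. If uncalibrated instruction
\ timing shifts by even a couple of percent over temperature and voltage:
: drift ( period-ns percent -- error-ns )  100 */ ;
63500 2 drift .   \ 1270 ns -- a 2% shift already blows the budget
63500 1 drift .   \ 635 ns -- even 1% is marginal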

Is it typical in the Forth community to code things up rather than to
think them through?

--

Rick

Mark Wills

Jun 2, 2015, 12:52:22 PM
Both yes and no. I'll often code something up "quick and dirty", say to myself "yep, that works", then junk it and re-write a proper application from scratch. The first attempt is nothing more than shaking the loose nuts and bolts rattling around in my head into some semblance of order.

I don't think it's unique to Forth. It's how most softies work. They will look at the real crux of the problem, the real make-or-break stuff, convince themselves that they can solve it (with some rough code, perhaps) then set about writing a proper solution.

You're a hardware guy. I know a few hardware guys. They'll sit with datasheets for a few days before breaking out the soldering iron. They'll make some circuits to shake actual facts and figures out of assumptions (how fast is this bus *really*? what is the *actual* latency when an interrupt comes in?) then they'll throw those boards away and get down to it. I think it's the same thing, only different ;-)

Mark Wills

Jun 2, 2015, 12:55:20 PM
Another point: it's very simple and satisfying to incrementally "grow" Forth programs. This might seem like tinkering, and I suppose it kind of is. But as long as, when you have completed the application, you have a sense that the way it has evolved and connects together is correct, then that's okay. You wouldn't write an autopilot program that way, but for, say, a software-based video controller, why not?

JUERGEN

Jun 2, 2015, 1:01:52 PM
Is this a serious post? I cannot believe it.
You can tinker with a supercomputer if you have access.
Whether it runs Forth or any other language - or you even program in binary - you decide whether you tinker or do serious work and have to earn money.
Apart from this, tools are there to do a job - serious or tinkering.
Actually, to answer your question: any community will have a spread of participants - some aspects you plan, some you try out, as that might be quicker or the case is borderline.

JUERGEN

Jun 2, 2015, 1:09:08 PM
On Tuesday, June 2, 2015 at 5:55:20 PM UTC+1, Mark Wills wrote:
> Another point: it's very simple and satisfying to incrementally "grow" Forth programs. This might seem like tinkering, and I suppose it kind of is. But as long as, when you have completed the application, you have a sense that the way it has evolved and connects together is correct, then that's okay. You wouldn't write an autopilot program that way, but for, say, a software-based video controller, why not?

Hi Mark, for once I would kindly disagree. The Forth language allows for incremental and interactive programming so people use it that way. Is there a definition like High End Tinkering? Or Planned Trial and Error? And as you say, there is no real difference between hardware and software - except for the soldering iron ...

Elizabeth D. Rather

Jun 2, 2015, 1:36:22 PM
There are ways and ways of "thinking things through." Artists think with
a brush or pencil in hand. Writers often just start writing to let their
thoughts emerge, and then try to shape the result into something.

In the 70's and 80's it was fashionable to design software programs
using huge modeling and design tools before writing any code, because it
took so long to compile programs and they were difficult to change. A
couple of times FORTH, Inc. was given extensively flow-charted programs
to code, and every time we were able to come up with radically different
designs that worked much faster and better.

Forth's interactivity and flexibility make it a great design tool. Many
people who are constrained to use a "more mainstream" language in
projects write a first version in Forth, as a proof of concept, because
it's such a good design tool.

If you're basically a hardware guy, it's natural for you to "think
things through" with circuit diagrams or a breadboard, but for a person
like Chuck who "thinks in Forth," it's what works for him.

Cheers,
Elizabeth

--
==================================================
Elizabeth D. Rather (US & Canada) 800-55-FORTH
FORTH Inc. +1 310.999.6784
5959 West Century Blvd. Suite 700
Los Angeles, CA 90045
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
==================================================

Walter Banks

Jun 2, 2015, 2:22:00 PM
On 02/06/2015 4:05 AM, rickman wrote:
>
> To me, when I should be able to do analysis to determine limitations and
> feasibility, it just seems silly to spend time on building and testing
> only to find things don't work. I know I would never try to use timing
> loops to generate video signals. I think a simple analysis would show
> the lack of stability. Timing data is just so basic to the definition
> of a digital device that I can't see any reason to not want to use it to
> save days or weeks of development and testing that might well prove
> fruitless.
>
> Is it typical in the Forth community to code things up rather than to
> think them through?
>

I spent a lot of time in the 90's in Asian software development shops
and learned a lot about software design by watching a different
approach. They did two things that changed how I wrote software.

1) They relied heavily on self-contained modules with very clear
interfaces, so modules could be swapped out and replaced with other
code. This was important because a fixed, clean software interface
reduces the number of series terms in the overall application
reliability (see the note after point 2).

2) They had software budgets for each module: RAM, ROM, and execution
time. In general they quit tinkering when the budget was met, and often
the budget came from an implementation prototype. Rather than draconian
negotiations over the software budget, the budgets were often developed
by the same person who would be responsible for the final
implementation.
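
(To spell out the "series terms" point - this is just the standard
series-reliability model, not anything specific to those shops: if the
application only works when all n modules work, the overall reliability
is the product

    R_system = R1 * R2 * ... * Rn

so every extra tightly-coupled piece is one more factor that can only
pull the product down, while clean, fixed interfaces keep each factor
independently testable.)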

The budgets and the complete application map predicted the part specs
that could be used for an application. Not surprisingly, they were able
to design in parts with less capability than competitive approaches.

Overall design and implementation time was generally a little less than
with most alternative approaches. There were lots of group meetings
early on but few during the implementation phase, which tended to be
shorter. Implementation language was not a big factor.

w..




Bob

Jun 2, 2015, 4:07:16 PM
On 06/02/2015 04:05 AM, rickman wrote:
> I was thinking about a timing issue and
<snip>
>
> Is it typical in the Forth community to code things up rather than to
> think them through?
>
A difference is the attitude toward the documentation. For example, I
know a guy who buys a replacement car battery based on the length of the
warranty. He figures the price divided by the months of warranty. That
is one way. I do it differently. I use a mechanic I trust and I bring
him all my car repair business. If he has enough customers he remains
available to me when I need a complicated repair. Then his judgement,
skill, and experience become important.

If the spec sheet for a part has all the data needed, all of the parts
perform correctly, and the rigorous attention to detail is sufficient,
then technology can be applied without risk. Interactive Forth and Basic
make it easier than Fortran or C++ to tinker with a new part, allowing
judgement, skill, and experience to be used to evaluate whether it is
feasible or appropriate to use that part for that purpose.

Which is more significant, the tube of 20 parts in hand or the
specification? Both are important. All parts should meet their specs,
but often some detail that matters for the application at hand isn't
specified. Testing of the tricky corner cases is necessary.

When my car engine cranks slowly but then starts I am warned that I
might need to replace the battery soon. When I turn the key and the
battery goes bang it maximizes inconvenience. I don't get to where I
want to go and I have an acid cleanup. The failure mode of the last
battery I bought wasn't specified at the time of purchase. I can hope
that it won't go bang. I know that it is a different brand than the one
that did go bang. If it does go bang I will be disappointed but not
surprised.
--
Bob

Anton Ertl

Jun 3, 2015, 1:49:52 AM
rickman <gnu...@gmail.com> writes:
>Is it typical in the Forth community to code things up rather than to
>think them through?

I don't think that the Forth community is a community when it comes to
such questions. However, Forth is better for tinkering than some
other languages. IMO there are occasions for tinkering and occasions
for thinking things through. Of course, there are terms like
"prototyping" that are used instead of "tinkering" that have less of a
negative connotation.

- anton
--
M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.forth200x.org/forth200x.html
EuroForth 2015: http://www.mpeforth.com/euroforth2015/euroforth2015.htm

Rod Pemberton

Jun 3, 2015, 3:06:49 AM
On Tue, 02 Jun 2015 04:05:02 -0400, rickman <gnu...@gmail.com> wrote:

What did you say about personal manifestos? ...

> When I asked for more detailed timing info on the instructions
> for comms and I/O I was told I didn't need that info, I should
> just build the design, code it up, and see if it works.

Now, you *seem* to be embellishing your stories, like Hugh ...
Previously, you stated this about the timing info:

"I was not able to get that info as it is proprietary."

-by rickman, "GA144 in a Serious Low Power Project (SLPP)" thread,
comp.lang.forth, Jan 7, 2013, msg-id: kc7q3g$e45$2...@dont-email.me

Was this a different situation of timing info?

If so, just how many people in the world don't
want to give you timing info?

> [...] doing Forth design for 40 years, by building and testing.

This works, but a skilled programmer generally has an idea of
what to implement, and sufficient experience in solving similar
problems to choose a path which is likely to succeed. Other
techniques like build-and-test, or shot-gun, or re-write,
or debug-it, or wing-it, or cut-n-paste, must be used too
because some people are brighter and others less so, and
there are always unforeseeable interactions. Also, the less
able or perhaps less well paid the person, the more they'll
rely on simpler techniques, because they're easier, if slower.
Agree?

> To me, when I should be able to do analysis to determine
> limitations and feasibility, it just seems silly to spend
> time on building and testing only to find things don't work.

Reality simply doesn't match theory, due to assumptions and
simplifications, or only does so rarely. Digital circuits
should, in general, be much more precise than analog.

At some point, the code must be tested. So, you might as
well test incrementally. Even if you're designing something
and not just winging it, testing as you go is easier and a
bit more thorough. You can work with the code in an
intermediate form that may allow more access and ability
to test than in the final form. Even so, you should still
produce a comprehensive final test to test the design after
completion. However, there will always be some things that
simply can't be tested in the final form.

> I know I would never try to use timing loops to generate
> video signals.

Why not?

Much the same thing is done in electronic circuits.
Isn't it? I.e., imprecise methods, but good enough.
E.g., 14.318 MHz or 3.58 MHz divided down by counters
or a digital delay line to generate sync signals or
color-burst ring oscillator, or PLL, or RC timing,
etc.
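
For concreteness, the standard published NTSC-M numbers (not anything
from this thread) show how all the timing hangs off one reference by
simple division:

    14.31818 MHz / 4   = 3.579545 MHz  (colour subcarrier)
    14.31818 MHz / 910 = 15.734 kHz    (horizontal line rate)
    15.734 kHz / 262.5 = 59.94 Hz      (field rate)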

> I think a simple analysis would show the lack of stability.

The obsolete NTSC televisions are very forgiving. Very few
of the early 8-bit computers generated correct horizontal or
vertical sync and they worked on almost every television.

> Timing data is just so basic to the definition of a digital
> device that I can't see any reason to not want to use it
> to save days or weeks of development and testing that might
> well prove fruitless.

Some chips simply aren't to spec because they're cheap,
or are thrown off by poor tolerance production parts.

E.g., a company I worked for designed audio circuits
using high-end video op-amps and 1% tolerance resistors,
but then they bought the cheapest audio versions of
the op-amp they could find for production and used 10%
resistors. The production op-amps had a high failure
rate, but they were unbelievably cheap. The resistors'
possibly poor tolerance had been accounted for by the
electrical engineers for most situations but not all,
e.g., two in parallel, _both_ with wide tolerances ...

> Is it typical in the Forth community to code things
> up rather than to think them through?

Was this a comment on someone here in particular?

You have to start somewhere to solve a problem.
If you don't already have an understanding of what
to solve, you'll likely have to learn something or
code something up first to attempt to obtain an
understanding of the issue(s).


Rod Pemberton

--
If fewer guns reduced murders, how does one explain
Moscow, Chicago, New York, and South Africa?

djc

Jun 5, 2015, 5:50:10 AM
Am Dienstag, 2. Juni 2015 19:36:22 UTC+2 schrieb Elizabeth D. Rather:
> On 6/1/15 10:05 PM, rickman wrote:
[snip]
> > Is it typical in the Forth community to code things up rather than to
> > think them through?

1) The community is not uniform in that aspect
2) There is no contradiction between coding and thinking things through

> There are ways and ways of "thinking things through." Artists think with
> a brush or pencil in hand. Writers often just start writing to let their
> thoughts emerge, and then try to shape the result into something.

Having artists in my family, I have to compliment you on finding the right words to describe parts of the creative process. Thank you very much!

> In the 70's and 80's it was fashionable to design software programs
> using huge modeling and design tools before writing any code,

It still is. Look at V-cycle, CMM, SPICE - all these require the model/architecture to exist before coding starts.

> because it
> took so long to compile programs

Turnaround cycles for embedded systems written in C may still be in the 30-45 minute range, from saving the code through compiling, linking, and flashing to starting the debug process.

In addition, there are large and distributed teams working on the code, so there must be something that allows work packages for the individual team members to be identified and distributed.

> Forth's interactivity and flexibility make it a great design tool.

Absolutely and fully agreed, especially for today's embedded systems with the above turnaround cycles - which are commonplace in some industries.

> Many
> people who are constrained to use a "more mainstream" language in
> projects write a first version in Forth, as a proof of concept, because
> it's such a good design tool.

I did that several times.

There is a SW development methodology called "TDD" (test driven development) where writing SW tests (unit tests, integration tests...) starts well before SW development.
In C, this approach is almost impossible to impose on a team; in Forth, coding and testing go hand in hand naturally.
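
As a minimal sketch of what that looks like (assuming the standard tester.fs / ttester.fs harness with T{ ... -> ... }T is loaded; the word and values are made up), the tests sit right next to the definition and re-run on every reload:

\ Hypothetical word plus its unit tests in the same source file.
: scale ( n -- n*10 )  10 * ;

T{  0 scale ->   0 }T
T{  7 scale ->  70 }T
T{ -3 scale -> -30 }T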

Daniel

Albert van der Horst

Jun 5, 2015, 7:32:16 AM
In article <564c1313-344a-486c...@googlegroups.com>,
djc <cies...@gmx.net> wrote:
<snip>
>There is a SW development methodology called "TDD" (test driven
>development) where writing SW tests (unit tests, integration tests...)
>starts well before SW development.

There is something very fishy about this. Software development doesn't
start with tests, it starts with breaking a problem into parts and
specifying the interaction between parts. On those specifications
you can base tests.

Bottom line, I think that test driven development is crucial for
those who can't make specifications, so instead they use test
sets as specifications. It works, but it may confuse the issue.

What I do is specify (design), implement and test at about the same time.

This works for smallish problems where you can afford to start over.
In large projects a very knowledgeable person, an architect (a real
one, not a job title), needs to split the problem such that the parts
are implementable. If that fails, so does the project.

Note that this requires multidisciplinary skills.



>
>Daniel
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

djc

Jun 5, 2015, 8:01:51 AM
Am Freitag, 5. Juni 2015 13:32:16 UTC+2 schrieb Albert van der Horst:
> djc wrote:
> <snip>
> >There is a SW development methodology called "TDD" (test driven
> >development) where writing SW tests (unit tests, integration tests...)
> >starts well before SW development.
>
> There is something very fishy about this. Software development doesn't
> start with tests

I did not claim that it starts with testing; I claimed that tests are created before coding the SW starts. Wikipedia has an article on TDD.

Requirements (as per requirements engineering) are sentences like
"x shall y"
and they are testable by definition (CMM, SPICE).
The primary source of requirements is the customer in the cases I know.

> What I do is specify (design), implement and test at about the same time.

When I create SW in Forth and/or in a small expert team, I often do this, too.

Daniel

rickman

Jun 5, 2015, 5:04:46 PM
On 6/5/2015 8:01 AM, djc wrote:
> Am Freitag, 5. Juni 2015 13:32:16 UTC+2 schrieb Albert van der Horst:
>> djc wrote:
>> <snip>
>>> There is a SW development methodology called "TDD" (test driven
>>> development) where writing SW tests (unit tests, integration tests...)
>>> starts well before SW development.
>>
>> There is something very fishy about this. Software development doesn't
>> start with tests
>
> I did not claim that it starts with testing, I claimed that tests are created before coding the SW starts. Wikipedia has an article on TDD.
>
> Requirements (as per requirements engineering) are sentences like
> "x shall y"
> and they are testable by definition (CMM, SPICE).
> The primary source of requirements is the customer in the cases I know.

I had the training for this and it applies to more than software. The
idea is that you define the requirements starting with the user
requirements and breaking those requirements into lower-level
requirements, which is essentially the design process. Once you reach
the bottom, where the implementation of requirements is clear, you start
defining the tests which will be applied to verify each of the
requirements at all levels, and only then do you code to meet those
tests. By this point the design issues have been hashed out pretty
thoroughly.

I haven't done a lot of design work under this regimen, so I can't say
if it is very effective or not. The company I worked for was learning
this as a group so it was initially implemented very ineffectively. But
then most things that company did were ineffective with lots of
infighting and little cooperation. It may have gotten better with time,
I don't know, I wasn't there for subsequent designs.


>> What I do is specify (design), implement and test at about the same time.
>
> When I create SW in Forth and/or in a small expert team, I often do this, too.

Yeah, for small groups the process can be very informal, or maybe no
process at all, just people talking. But for larger groups it is
important to have a defined methodology unless everyone is an expert and
just knows how the project will proceed. I've never seen that happen...

--

Rick

Paul Rubin

Jun 7, 2015, 11:59:39 PM
djc <cies...@gmx.net> writes:
> Turnaround cycles for embedded systems written in C may still be in
> the 30-45 minutes range from saving the code, compiling, linking,
> flashing to starting the debug process.

Is that for real? On current systems where people also use Forth, not
some relic from the era where the debug cycle involved waiting for a UV
eprom eraser? Even on such systems, it was often possible to do most of
the development and testing on a larger machine, then port to the
embedded target.

These days I don't think it takes more than a few seconds to compile a
reasonable size C program and flash it into an Arduino.

Paul Rubin

Jun 8, 2015, 2:12:16 AM
alb...@spenarnc.xs4all.nl (Albert van der Horst) writes:
> Bottom line I think that test driven development is crucial for
> those who can't make specifications, so instead they use tests
> sets as specifications. It works, but it may confuse the issue.

The point of TDD is that you can run all the tests every time you change
the program, to make sure (or anyway have higher confidence) that you
didn't inadvertently break something with your change. There are many
automated test frameworks and deployment tools (search terms:
"continuous integration") built around this idea. To do the same thing
with a specification would require manually checking the whole program
against the spec after every change.

TDD also helps guide the organization of the code, since you have to
write every part to work under both the real application and under the
test system. That helps keep the interfaces clean. Normally a
functional specification wouldn't reach down to such a fine grained
level.
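
A sketch of how that plays out in Forth terms (file names are
hypothetical; assumes an ANS system with INCLUDED):

\ One word reloads every test file, so the whole suite runs after
\ each change -- the Forth equivalent of a continuous-integration step.
: run-tests ( -- )
   s" tests/core-tests.fs" included
   s" tests/io-tests.fs"   included ;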

Albert van der Horst

Jun 8, 2015, 5:48:42 AM
In article <873822v...@jester.gateway.sonic.net>,
That sounds good. I've worked under such a regime once but it mainly
applied there to driver software.

The closest thing to that description is my own Forth. The source
has a combination of code, test and documentation like so.

worddoc( {STACKS},{CLS},{c_l_s},{ i*x -- },{},
{Clear the data stack},
{{DSP@@}},
{{ 1 2 3 CLS DEPTH .},{0} },
enddoc)
_HEADER({CLS},{CLS},{DOCOL})
DC SZERO, FETCH
DC SPSTO
DC SEMIS
_C

In maintenance-development you need a comprehensive
regression test. There typically is none, so I make up my own.
It mostly boiled down to:
"If it can handle one day's worth of data, and the result is the
same," then pass. You can't be sure that a 2 Gbyte data file
hits all datapaths, but in a sloppy environment it probably hits
more datapaths than have ever been tested by anyone else.
And Bayes says that the chance you've introduced an error is pretty
slim.

You're right that if "test driven development" is used as a
methodology and not a buzzword, it is powerful.

Groetjes Albert

djc

Jun 10, 2015, 8:59:35 AM
Am Montag, 8. Juni 2015 05:59:39 UTC+2 schrieb Paul Rubin:
> djc writes:
> > Turnaround cycles for embedded systems written in C may still be in
> > the 30-45 minutes range from saving the code, compiling, linking,
> > flashing to starting the debug process.
>
> Is that for real? On current systems where people also use Forth, not
> some relic from the era where the debug cycle involved waiting for a UV
> eprom eraser?

Using a current C environment for complex embedded systems.
* C preprocessor
* C compiler
* C linker
* (erase code/data flash) - optional, calibration is lost here
* flash to target via JTAG (not UV)
- not including calibration of the SW in the target
We are talking about several hundred KB of object code and megabytes of source code.
I do not say that this is desirable.

> These days I don't think it takes more than a few seconds to compile a
> reasonable size C program and flash it into an Arduino.

"Reasonable" is very dependent on the perspective and the targets. For the Arduino, it may be a matter of seconds, but think about MSP56xx and friends and their areas of application...

Daniel


Raimond Dragomir

Jun 10, 2015, 12:05:54 PM
A Cortex-M4 project, C only:
Compilation: 7 seconds
Flash downloading: 11 seconds (via a serial bootloader at 460800 bps)
binary: 230K (sources 2.9M)

I have a note here: the project is prepared for emBlocks (a codeblocks variant)
which is able to compile all files in about 7 seconds.
The same project, Makefile'd, compiles in more than 20 seconds.
Make is slow, but I don't know why.
The compiler is a 4.7 arm gcc eabi variant.

Raimond Dragomir

Jun 10, 2015, 12:28:50 PM
I also have a lot of AVR designs, ranging from 2K to about 30K. I'm using
their gcc compiler. Compile times can be noticed only if Makefile'd :)
Programming times vary but they are in the range of seconds.
For small projects of 8K or less, compilation and flash downloading are
about instant, with just two key presses or mouse clicks. Faster than any
REPL...

A funny thing about the above Cortex-M4 that I mentioned is that I'm using
a serial bootloader just for the speed of programming :)
If I use the 'native' flash programming with a dongle, the programming time
for 230K is more than a minute... For that project I have done two things
to improve the speed of the cycle:
1. switching from Makefile to emBlocks, and
2. using my bootloader.
The speed improvement is huge, only about 18 seconds compared to more than
2 minutes!

Paul Rubin

Jun 10, 2015, 1:37:22 PM
Raimond Dragomir <raimond....@gmail.com> writes:
> I have a note here: the project is prepared for emBlocks (a codeblocks
> variant) which is able to compile all files in about 7 seconds. The
> same project Makefile'd compile in more than 20 seconds. Make is
> slow, but I don't know why.

What is the host CPU? Are you using make -j? I get around a 3-4x
speedup from make -j 8 on an i7-3770 (quad core with hyperthreading).

Paul Rubin

Jun 10, 2015, 1:51:16 PM
Raimond Dragomir <raimond....@gmail.com> writes:
>> > Using a current C environment for complex embedded systems. ...
>> > We talk about several 100k of object code and Megabytes of source code.
>> > I do not say that this is desirable.

Are you saying this is taking 45 minutes? How is the 45 minutes being
spent? What kind of computer are you running the compilation on? A few
MB of source code isn't a lot these days. I've been fooling a little
with ffmpeg, which has around 30MB of C code in 1800+ files and it takes
around 2.5 minutes to build from scratch on an i7-3770, a fairly fast
machine but not a monster. Also, normally if you use something like
"make", you won't have to recompile all the source files if you just
make localized changes.

>> > For the Arduino, it may be a matter of seconds, but think about
>> > MSP56xx and friends and their areas of application...

Not sure what an MSP56xx is and wasn't able to quickly figure it out.

Stephen Pelc

unread,
Jun 10, 2015, 2:13:48 PM6/10/15
to
On Wed, 10 Jun 2015 09:05:52 -0700 (PDT), Raimond Dragomir
<raimond....@gmail.com> wrote:

>A Cortex-M4 project, C only:
>Compilation: 7 seconds
>Flash downloading: 11 seconds (via a serial bootloader at 460800 bps)
>binary: 230K (sources 2.9M)

MPE cross compiler generating about 180 kb with Forth interpreter, FAT
file system, PowerNet ...

Section Type Used Start DP BP End Page
CCMRAM UDATA 0 10000000 10000000 1000ED00 1000FFFF
PROGU UDATA C354 20001000 2000D354 2001F400 2001FFFF
PROGD IDATA 3DC 20000000 200003DC 20001000 20000FFF
STM32F4EVALPN CDATA 2B800 8000000 802B800 8080000 807FFFF

Compilation time was 0.168 seconds.

Flash programming with a Segger JLink is a couple of seconds at most
and is performed automagically after a successful compile.

Stephen


--
Stephen Pelc, steph...@mpeforth.com
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691
web: http://www.mpeforth.com - free VFX Forth downloads

rickman

Jun 10, 2015, 2:17:58 PM
I think he means DSP56xxx. The DSP56K line was Motorola's entry into
the DSP market. They did things differently from TI and had some
advantages. Much like the Intel vs. Motorola designs, the DSP56K family
was planned rather than just happening, so was architecturally better,
at least on paper. But again, in a manner parallel to the Intel vs.
Motorola competition, TI worked the market better, focusing on the one
big app: cell phones.

I've lost track of cell phones these days. Do they still mostly contain
a TI DSP chip along with an ARM? Or has the DSP become part of a custom
ASIC?

--

Rick

Bernd Paysan

Jun 10, 2015, 2:20:42 PM
rickman wrote:
> I've lost track of cell phones these days. Do they still mostly contain
> a TI DSP chip along with an ARM? Or has the DSP become part of a custom
> ASIC?

The whole thing has become a SoC, with the most important things (including
the GPU and a multi-core ARM for the smartphone functionality) all on one chip.
Broadcom and Samsung are by far the biggest contributors to this market.

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
net2o ID: kQusJzA;7*?t=uy@X}1GWr!+0qqp_Cn176t4(dQ*
http://bernd-paysan.de/

rickman

Jun 10, 2015, 3:03:36 PM
On 6/10/2015 2:20 PM, Bernd Paysan wrote:
> rickman wrote:
>> I've lost track of cell phones these days. Do they still mostly contain
>> a TI DSP chip along with an ARM? Or has the DSP become part of a custom
>> ASIC?
>
> The whole thing had become a SoC, with the most important things (including
> GPU and multi-core ARM for the smartphone functionality) all on one chip.
> Broadcom and Samsung are the by far biggest contributors to this market.

Are you saying cell phones do the DSP for the radio on the ARM or that
the ARM chip includes separate DSP hardware?

--

Rick

Bernd Paysan

Jun 10, 2015, 4:06:38 PM
Everything except power management (needs big, cheap power transistors),
memory, touch screen controller, and camera sensor is on the SoC. The
mobile part today is called something like "LTE modem" or so ;-).

Companies without that know-how, like Apple, apparently still buy a standalone
mobile part (LTE modem), which contains an ARM, DSP, antenna driver and
amplifier.

rickman

Jun 10, 2015, 7:13:57 PM
On 6/10/2015 4:06 PM, Bernd Paysan wrote:
> rickman wrote:
>
>> On 6/10/2015 2:20 PM, Bernd Paysan wrote:
>>> rickman wrote:
>>>> I've lost track of cell phones these days. Do they still mostly contain
>>>> a TI DSP chip along with an ARM? Or has the DSP become part of a custom
>>>> ASIC?
>>>
>>> The whole thing had become a SoC, with the most important things
>>> (including GPU and multi-core ARM for the smartphone functionality) all
>>> on one chip. Broadcom and Samsung are the by far biggest contributors to
>>> this market.
>>
>> Are you saying cell phones do the DSP for the radio on the ARM or that
>> the ARM chip includes separate DSP hardware?
>
> Everything except power management (needs big, cheap power transistors),
> memory, touch screen controller, and camera sensor is on the SoC. The
> mobile part today is called something like "LTE modem" or so ;-).
>
> Companies without that know-how like Apple apparently still buy a standalone
> mobile part (LTE modem), which contains an ARM, DSP, antenna driver and
> amplifier.

Any idea which DSP these two types of devices use? There had been a
metric ton of software for the TI fixed point chips. I remember seeing
a number of start-ups hawking their custom DSP designs which could have
been rolled into such SoCs, but I never followed them enough to know
which ones were chosen. I can only imagine that TI would have been
happy to be in the IP business selling software compatible DSP IP to SoC
makers as well.

So who won that race?

--

Rick

Bernd Paysan

Jun 10, 2015, 8:59:43 PM
Apparently a company called CEVA sells IP cores with LTE modem DSPs:

https://www.semiwiki.com/forum/content/4499-qualcomm-lte-modem-competitors-samsung-intel-mediatek-spreadtrum-leadcore%85-simply-ceva.html

I don't think TI is still in there, but the other big names mentioned there
do make their own LTE modems.

Paul Rubin

Jun 11, 2015, 12:41:27 AM
rickman <gnu...@gmail.com> writes:
>> Not sure what an MSP56xx is and wasn't able to quickly figure it out.
> I think he means DSP56xxx. The DSP56K line were Motorola's entry into
> the DSP market.

Oh ok, that makes sense. The 56k series was cool in the 1990's but I
think they're not used much any more, so it sounds like djc might be
using some very old development setups. 45 minutes to compile/link a
program that size is ridiculous these days. Plus as mentioned, normally
there would be lots of modules with separate compilation.

djc

Jun 11, 2015, 3:40:36 AM
Am Donnerstag, 11. Juni 2015 06:41:27 UTC+2 schrieb Paul Rubin:
> rickman writes:
> >> Not sure what an MSP56xx is and wasn't able to quickly figure it out.
> > I think he means DSP56xxx. The DSP56K line were Motorola's entry into
> > the DSP market.
>
> Oh ok, that makes sense. The 56k series was cool in the 1990's but I
> think they're not used much any more, so it sounds like djc might be
> using some very old development setups. 45 minutes to compile/link a
> program that size is ridiculous these days.

Lots of guesswork, huh?
MSP != DSP and no, it is neither a mobile phone CPU nor a signal processor.
The compilers were current and the hosts were mid-level office laptops, not top end. Make -j was used; the time relates to the full build: make depend, make boot, make application, link, and flash.

And - need I say it again - I do not claim it is good to work like that.

Daniel

Raimond Dragomir

Jun 11, 2015, 4:24:38 AM
So, it's probably an MPC56xx, isn't it?

rickman

Jun 11, 2015, 5:33:38 AM
Ah, found it...

"The Qorivva MPC56xx family of 32-bit MCUs built on Power Architecture
technology is designed for engine management, body control, gateway,
safety, chassis and driver ..."

--

Rick

Raimond Dragomir

Jun 11, 2015, 3:17:55 AM
This worked, thank you. Compiling speed with Makefile is greatly improved.