
pulse counter using LPC1768 proving to be very challenging


navman

Jun 7, 2011, 9:16:30 AM
Hi,
I'm trying to count some pulses (2-10usec width) using the LPC1768
Cortex-M3 microcontroller. There are 2 channels on which I have to count
the pulses. I have to allow only pulses that are >=2us wide (so we
cannot simply use the counter function).

But it is turning out to be an incredibly difficult feat to achieve this on
a 100MHz Cortex-M3. The problem arises when we try to measure the pulse
width to allow only pulses lasting 2us or longer. We are trying to use a
capture pin and the capture interrupt. First we set it for a falling edge
and capture the timer value, then set it to a rising edge and again capture
the timer value. Then we take the difference between the two values to see
if it is >2usec. But the processing itself is taking over 6-8usec. We also
tried simply using an external interrupt and reading the timer registers on
each edge, but with the same results.

We cannot understand how or why the processing is taking so long; there
are hardly 3-4 "C" statements in the interrupt routine (change the capture
edge, take the difference of the captured values and compare whether it is
>=2us). Any ideas how this could be accomplished on an LPC1768?
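
Roughly, the interrupt routine looks like the sketch below (an illustration
rather than our exact code: it assumes Timer0 with its prescaler set for
1 us ticks and the pulse on CAP0.0, with register and bit names taken from
the LPC17xx CMSIS headers, to be checked against UM10360):

#include "LPC17xx.h"

#define MIN_WIDTH_US  2U

static volatile uint32_t t_fall;       /* time of the leading (falling) edge */
static volatile uint32_t pulse_count;  /* accepted pulses                    */

void TIMER0_IRQHandler(void)
{
    LPC_TIM0->IR = (1U << 4);                     /* clear CR0 capture flag    */

    if (LPC_TIM0->CCR & (1U << 1)) {              /* we were armed for falling */
        t_fall = LPC_TIM0->CR0;                   /* leading edge timestamp    */
        LPC_TIM0->CCR = (1U << 0) | (1U << 2);    /* re-arm: rising edge + IRQ */
    } else {                                      /* trailing (rising) edge    */
        uint32_t width = LPC_TIM0->CR0 - t_fall;  /* pulse width in us ticks   */
        LPC_TIM0->CCR = (1U << 1) | (1U << 2);    /* re-arm: falling edge + IRQ */
        if (width >= MIN_WIDTH_US)
            pulse_count++;                        /* accept only >= 2 us       */
    }
}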


---------------------------------------
Posted through http://www.EmbeddedRelated.com

Bruce Varley

Jun 7, 2011, 9:33:37 AM

"navman" <naveen_pn@n_o_s_p_a_m.yahoo.com> wrote in message
news:rMadnYYXWcwzuXPQ...@giganews.com...

I can't help with this specific micro, but I've encountered the problem on
various other platforms, and solved it by using polling rather than
interrupts. Avoiding the context saving associated with interrupts can save
you a significant amount of time if your processing task is otherwise
reasonably trivial, as yours seems to be. This solution does depend on being
able to sacrifice some processing time for the polling loop (interrupts
disabled); that will depend on the pulse frequency and what else the device
has to contend with.

You might also consider doing your time-critical coding in asm, if that's
possible. Have you checked your listings to see how many removable
instructions the compiler is inserting?


Roberto Waltman

Jun 7, 2011, 9:47:51 AM

David Brown

Jun 7, 2011, 9:49:22 AM
On 07/06/2011 15:33, Bruce Varley wrote:
> "navman"<naveen_pn@n_o_s_p_a_m.yahoo.com> wrote in message
> news:rMadnYYXWcwzuXPQ...@giganews.com...
>> Hi,
>> I'm trying to counter some pulses (2-10usec width) using the LPC1768
>> Cortex-M3 microcontroller. There are 2 channels on which I have count the
>> pulses on. I have to allow only pulses that are>=2us pulse width (so we
>> cannot simply use the counter function).
>>
>> But it is turning out to be an incredibly difficult feat to achieve this
>> on
>> a 100MHz Cortex-M3. The problem arises when we try to measure the pulse
>> width to allow only pulses lasting 2us or higher. We are trying to use a
>> capture pin and the capture interrupt. First we set it for a falling edge
>> and capture the timer value, then set it to rising edge and again capture
>> the timer value. Then take the difference between two values to see if it
>> >2usec. But the processing itself is taking over 6-8usec. We also tried
>> simply using a external interrupt & reading timer registers with each edge,
>> but with the same results.
>>

Can you connect the signal to two pins, so that one will capture times
on a falling edge, and the other will capture times and cause an
interrupt on the rising edge?

Have you considered some analogue tricks, assuming you don't need too
much accuracy for your measurements? A diode, a capacitor and a couple
of resistors should let you charge up a capacitor during the pulse.
Measure the voltage on the capacitor with the ADC when the pulse is
complete. Or use an analogue comparator to trigger an interrupt on the
processor once the capacitor voltage is over a certain level.
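
As a rough sizing, assuming the comparator (or pin) threshold sits at about
half the pulse amplitude and ignoring the diode drop:

$$V(t) = V_{pulse}\left(1 - e^{-t/RC}\right) \quad\Rightarrow\quad t_{th} = RC\,\ln 2 \approx 0.69\,RC$$

so a 2 us threshold wants RC of roughly 2.9 us, e.g. about 3 kOhm feeding 1 nF.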

>> We cannot seem to understand how or why the processing is taking so long
>> there are hardly 3-4 "C" statements in the interrupt routine (change edge
>> of capture, take the difference in captured values and compare if it is
>> >=2us). Any ideas how this feat could be accomplished on a LPC1768?
>

> I can't help with this specific micro, but I've encountered the problem on
> various other platforms, and solved it by using polling rather than
> interrupts. Avoiding the context saving associated with interrupts can save
> you a significant amount of time if your processing task is otherwise
> reasonably trivial, as yours seems to be. This solution does depend on being
> able to sacrifice some processing time for the polling loop (interrupts
> disabled), that will depend on the pulse frequency and what else the device
> has to contend with.
>
> You might also consider doing your time-critical coding in asm, if that's
> possible. Have oyu checked your listings to see how many removeable
> instructions the compiler is inserting?
>
>

The compiler may be generating too much code for context saving. A
common cause of that is to call external functions from within the
interrupt function - since the compiler doesn't know what registers it
uses, it must save everything.

And are you using appropriate flags for the compiler? Many people
complain their compiler code is poor, when it turns out they have
disabled optimisation...

Roberto Waltman

Jun 7, 2011, 9:55:35 AM
navman wrote:
>I'm trying to counter some pulses (2-10usec width) using the LPC1768
>Cortex-M3 microcontroller. There are 2 channels on which I have count the
>pulses on. I have to allow only pulses that are >=2us pulse width (so we
>cannot simply use the counter function).
> ...

I have not used the LPC1768, so these are generic suggestions:

a) If they are running in FLASH, make your interrupt routines RAM
resident. Ditto for critical functions in normal code.

b) Do the width computation/decision inside the rising edge interrupt
handler.

c) If possible, configure the capture to work on either edge to save
the time needed to reconfigure it.
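
For (c), the LPC17xx capture logic can be armed for both edges at once, so
the edge never needs reprogramming inside the ISR. A sketch (register and
bit names per the LPC17xx CMSIS headers / UM10360, worth double-checking):

    /* capture TC into CR0 on BOTH edges of CAP0.0 and interrupt each time */
    LPC_TIM0->CCR = (1U << 0)    /* CAP0RE: capture on rising edge  */
                  | (1U << 1)    /* CAP0FE: capture on falling edge */
                  | (1U << 2);   /* CAP0I : interrupt on capture    */

The handler then only has to read the pin (or remember the previous edge) to
know which edge the captured value belongs to.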

--
Roberto Waltman

[ Please reply to the group.
Return address is invalid ]

Mark Borgerson

Jun 7, 2011, 10:26:00 AM
In article <cqOdncM8Pd0YsXPQ...@lyse.net>,
da...@westcontrol.removethisbit.com says...

You might not even need the comparator if the input pin threshold level
is consistent over the operating conditions. A simple RC filter might
trigger the interrupt only if the pulse length is over the threshold.

Using a comparator does make it easier to compute the trigger pulse
width. If you fed the reference leg of the comparator with the output
of the LPC1768's DAC, you could tune the triggering
pulse width in software.
>

<<Snip>>

Mark Borgerson


D Yuniskis

Jun 7, 2011, 11:08:33 AM
On 6/7/2011 6:16 AM, navman wrote:
> I'm trying to counter some pulses (2-10usec width) using the LPC1768
> Cortex-M3 microcontroller. There are 2 channels on which I have count the
> pulses on. I have to allow only pulses that are>=2us pulse width (so we
> cannot simply use the counter function).

Can you *prevent* (filter) short pulses from ever getting to the pin?
I.e., a simple digital filter can excise all short pulses without
affecting any *longer* pulses.

> But it is turning out to be an incredibly difficult feat to achieve this on
> a 100MHz Cortex-M3. The problem arises when we try to measure the pulse
> width to allow only pulses lasting 2us or higher. We are trying to use a
> capture pin and the capture interrupt. First we set it for a falling edge
> and capture the timer value, then set it to rising edge and again capture
> the timer value. Then take the difference between two values to see if it
> >2usec. But the processing itself is taking over 6-8usec. We also tried
> simply using a external interrupt & reading timer registers with each edge,
> but with the same results.

Is the control logic "double buffered"? E.g., some devices would allow
you to preload the *next* contents -- which would be transferred into
the *real* hardware when the first condition had been satisfied.

[sorry, I'm not familiar with the counter/timer channels on the device]

Can you, instead, set up the device to *start* a counter on the
"leading edge" (whether that is rising or falling) of the pulse
(note NO interrupt generated, here!), program the counter (timer)
for "2 us" and have the timer *overflow* trigger THE interrupt
(which just polls the state of the pin to see if it is "still"
at the right level to be countable)

[there are flaws in this scheme -- depending on the actual nature
of your input pulse stream]

Some devices with double buffered logic will let you use this
to enqueue the *next* set of controls for the channel so that
the timeout/overflow can effectively reprogram the channel
for you -- to generate an IRQ on the *trailing* edge of the
pulse, for example.

[see initial disclaimer]

> We cannot seem to understand how or why the processing is taking so long
> there are hardly 3-4 "C" statements in the interrupt routine (change edge
> of capture, take the difference in captured values and compare if it is
> >=2us). Any ideas how this feat could be accomplished on a LPC1768?

Rewrite it in ASM. This is clearly (IMO) a case where the
"lack of clarity" of ASM is easily justified by its brevity.
What do you need, a handful of instructions??

Don't do *any* work in the IRQ. Just grab the data that you need
and make the decision about how to use it *later*.
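
For illustration, a minimal "grab now, decide later" sketch in plain C --
read_capture_register() is a placeholder for whatever capture/timer read
applies on the LPC1768:

    #include <stdint.h>

    extern uint32_t read_capture_register(void);   /* placeholder, e.g. a capture register */

    #define CAP_BUF_LEN 32U                         /* power of two */
    static volatile uint32_t cap_buf[CAP_BUF_LEN];
    static volatile uint32_t cap_head;              /* written only by the ISR    */
    static uint32_t          cap_tail;              /* read only by the main loop */

    void capture_isr(void)                          /* keep this as small as possible */
    {
        cap_buf[cap_head & (CAP_BUF_LEN - 1U)] = read_capture_register();
        cap_head++;
    }

    void process_captures(void)                     /* called from the main loop */
    {
        while (cap_tail != cap_head) {
            uint32_t t = cap_buf[cap_tail & (CAP_BUF_LEN - 1U)];
            cap_tail++;
            /* pair up edges, compute widths, reject anything < 2 us here */
            (void)t;
        }
    }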

What guarantees do you have regarding the interarrival time of
subsequent pulses, etc.? What happens to your system if, for
example, a 100KHz signal suddenly appears on this pin? Will
your system effectively "lock up" because it is dutifully
catching interrupts (that probably are meaningless in this
case)? Consider designing the code so that this is a self-limiting
process...

Marc Jet

Jun 7, 2011, 11:19:15 AM
I think the solution is obvious.

Configure for falling edge interrupt (I assume from your post that LOW
is ACTIVE).

In the interrupt, do the following:

1) Clear the interrupt flag
2) Verify that input pin is still ACTIVE. If it is not, exit silently
(no event).
3) Do a CPU timed busy-wait for 2us (minus typ/max interrupt latency
up to this point). You might need inline asm or a calibrated function
for this purpose.
4) When the busy-wait function returns, check the interrupt flag
again. If there was another interrupt event, go back to 2).
5) Verify that input pin is still ACTIVE. If it is not, exit silently
(no event).
6) Accept the event and exit the interrupt function.
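
In rough C the above looks something like this (every helper here is a
placeholder for the real LPC1768 register accesses and for a calibrated
delay routine, so treat it purely as a sketch of the control flow):

    extern void clear_edge_interrupt_flag(void);    /* placeholders for the  */
    extern int  edge_interrupt_flag_pending(void);  /* actual LPC1768 pin /  */
    extern int  pin_is_active(void);                /* interrupt registers   */
    extern void delay_us_calibrated(unsigned us);   /* calibrated busy-wait  */
    extern void accept_pulse_event(void);

    void pulse_edge_isr(void)
    {
        clear_edge_interrupt_flag();                          /* step 1 */
        for (;;) {
            if (!pin_is_active())                             /* step 2 */
                return;                                       /* spurious: no event */
            delay_us_calibrated(2U /* minus known latency */);/* step 3 */
            if (!edge_interrupt_flag_pending())               /* step 4 */
                break;                                        /* no new edge seen   */
            clear_edge_interrupt_flag();                      /* new edge: retry    */
        }
        if (!pin_is_active())                                 /* step 5 */
            return;                                           /* too short: reject  */
        accept_pulse_event();                                 /* step 6 */
    }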

The interrupt function will consume 2us plus overhead. It will be as
accurate as your prediction of typ/max interrupt latency. That
depends mostly on the usage of "critical section" primitives in the
rest of your code. The LPC1768's Cortex-M3 core has no FIQ, but the
NVIC lets you reserve the highest preemption priority for this
interrupt alone, and use lower priorities throughout the rest of your
code (including the OS). That would allow you to never mask this
interrupt and thus never suffer software-induced latency. You could
also move the vector table and the handler code (which is the function
described above) and all of its variables / subfunctions into an area
that has predictable memory timing. I'm not aware of the LPC1768
details, but on other archs you can use the TCM or disable i-cache via
MMU for the memory area in question. The memory access doesn't have
to be fast, it must just be very, very predictable.

Using these tips, you can achieve very reliable acceptance/rejection.
The imperfections are:

A) There is a "blind window" from detection of the edge to clearing
the interrupt flag. If there is a spurious HIGH in this window,
it will not be detected. You could possibly extend the proposal,
using your counter mode, to improve on this imperfection.

B) Most chips (and certainly the LPC1768 too) synchronize the
input pin before detecting edges. Thus, some very fast flicker might
happen completely unnoticed by the hardware (and software). You
would need to move the pulse detection into a specially designed
hardware circuit to handle this better.

C) The samples in steps 4 and 5 are not taken at the same moment in
time. Therefore the pulse must be longer than X to be accepted
reliably, but shorter than Y to be rejected reliably. There is a
window in which a pulse may or may not be rejected. This is the crux
of using software instead of hardware. Again, you can only handle
this better in a specially designed hardware circuit.

Best regards
Marc

Tim Wescott

Jun 7, 2011, 11:52:43 AM

This is almost everything that I was going to suggest.

Look at the assembly that your compiler is generating, and make sure that
it's really as efficient as can possibly be. If it isn't, just go to the
well and write the ISR in assembly language.

Even on a 100MHz processor, 2us is an awfully short period of time.
Doing some sort of preconditioning makes a lot of sense to me, although
Schmitt trigger logic is rarely accurate enough for any practical purpose
beyond glitch reduction. It should be possible to use a monostable
multivibrator (74xx123?) and some gates to do this if an RC and a Schmitt
isn't accurate enough. An asynchronous clear counter would do the trick as
well -- hold it in reset when the pulse is inactive, and trigger the
micro whenever it counts to its 'carry out' value. Then you just need
to feed it an appropriate clock to hit your "more than 2us" criterion.

If your ISR pops off quickly enough, and if it doesn't waste too much
time, spin in the ISR until the signal goes inactive, and check the
time. If that ">2us" can mean "sometimes _much_ greater than 2us" then
this obviously won't work.

I like the "two pins" approach, if you can make it unambiguous. Make
that microcontroller hardware work for you, if you can.

--
http://www.wescottdesign.com

Roberto Waltman

Jun 7, 2011, 12:02:57 PM
"navman" wrote:
>We cannot seem to understand how or why the processing is taking so long
>there are hardly 3-4 "C" statements in the interrupt routine ...

Is flash prefetch enabled?

Bob

Jun 7, 2011, 12:08:26 PM

I don't know this part, but timer-capture hardware I've seen usually
uses the peripheral clock to "filter" out short pulses. Can you lower
the peripheral clock to the 1 or 2 MHz range and let the hardware
synchronizer do all the work? Obviously, if you have other peripherals
that need a higher clock, this wouldn't work, but the core could
continue to run at 100MHz.
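
On the LPC17xx the timer's peripheral clock divider lives in PCLKSEL0, so
the sketch would be something like this (bit positions per UM10360, worth
checking):

    /* TIMER0's PCLK select is bits 3:2 of PCLKSEL0; 0b11 = CCLK/8, the
       slowest choice, i.e. 12.5 MHz with a 100 MHz core clock */
    LPC_SC->PCLKSEL0 |= (3U << 2);

The catch, if I read the manual right, is that CCLK/8 is as slow as the
divider goes, so the input synchroniser by itself still only swallows
glitches on the order of a couple of hundred nanoseconds; the timer
prescaler divides the count further, but not the synchroniser.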

just an off-the-wall idea,
Bob

Vladimir Vassilevsky

Jun 7, 2011, 12:09:36 PM

navman wrote:

> Hi,
> I'm trying to counter some pulses (2-10usec width) using the LPC1768
> Cortex-M3 microcontroller. There are 2 channels on which I have count the
> pulses on. I have to allow only pulses that are >=2us pulse width (so we
> cannot simply use the counter function).

[...]

Put a capacitor to the ground on the interrupt pin.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com

Andy

Jun 7, 2011, 12:28:47 PM

another option might be to use a sync serial port running continuously
as a sampling engine and then run filtering over the captured stream
of bits.
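
The filtering half could be plain C over the captured bytes, something like
this sketch (assumes the active level samples as '1', one bit per sample
period, MSB first; e.g. 8 samples minimum for a 2us pulse sampled at 4 MHz):

    #include <stdint.h>
    #include <stddef.h>

    static unsigned run_len;       /* current run of consecutive ACTIVE samples */
    static unsigned pulse_count;   /* pulses that met the minimum width         */

    void filter_samples(const uint8_t *buf, size_t n, unsigned min_samples)
    {
        for (size_t i = 0; i < n; i++) {
            for (int b = 7; b >= 0; b--) {
                if (buf[i] & (1U << b)) {
                    run_len++;                     /* pulse still active     */
                } else {
                    if (run_len >= min_samples)
                        pulse_count++;             /* long enough: count it  */
                    run_len = 0;                   /* pulse ended            */
                }
            }
        }
    }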

Rob Gaddi

Jun 7, 2011, 12:50:20 PM
On 6/7/2011 6:16 AM, navman wrote:
> Hi,
> I'm trying to counter some pulses (2-10usec width) using the LPC1768
> Cortex-M3 microcontroller. There are 2 channels on which I have count the
> pulses on. I have to allow only pulses that are>=2us pulse width (so we
> cannot simply use the counter function).
>
> But it is turning out to be an incredibly difficult feat to achieve this on
> a 100MHz Cortex-M3. The problem arises when we try to measure the pulse
> width to allow only pulses lasting 2us or higher. We are trying to use a
> capture pin and the capture interrupt. First we set it for a falling edge
> and capture the timer value, then set it to rising edge and again capture
> the timer value. Then take the difference between two values to see if it
> >2usec. But the processing itself is taking over 6-8usec. We also tried
> simply using a external interrupt & reading timer registers with each edge,
> but with the same results.
>
> We cannot seem to understand how or why the processing is taking so long
> there are hardly 3-4 "C" statements in the interrupt routine (change edge
> of capture, take the difference in captured values and compare if it is
> >=2us). Any ideas how this feat could be accomplished on a LPC1768?
>
>
> ---------------------------------------
> Posted through http://www.EmbeddedRelated.com

Something seems wrong there. I've got an application where I use an
LPC1758 to close a fairly complex PI loop controller. I've only got the
clock turned up to 52 MHz, and I'm able to execute a printed page worth
of ISR in only about 2us.

A) Everyone who's told you to look at the disassembly for stupidity is
right.

B) Setting up all of the clocks on the LPC17s is non-trivial. Have you
confirmed that you're actually running at 100 MHz, and that the
peripheral clock going to the timer peripheral is too? If I recall
correctly, the CMSIS code to do so doesn't actually work. If you've got
access to the CLKOUT pin, setting it up and dropping a scope there might
do you a world of good. Also, there's an errata note that says you've
got to have the main PLL disabled while you're configuring the
peripheral clocks.

--
Rob Gaddi, Highland Technology
Email address is currently out of order

David Brown

Jun 7, 2011, 3:31:47 PM

Long before you consider writing the ISR in assembly, check that the C
code is decently written (if you can't write appropriate C code for an
interrupt routine, you are unlikely to be able to write good assembly),
and check that you are using your compiler properly (and that you have a
decent compiler). Used properly (which includes studying the generated
assembly), C on a processor like this should be very close to the speed
of optimal assembly.

> Even on a 100MHz processor, 2us is an awfully short period of time.
> Doing some sort of preconditioning makes a lot of sense to me, although
> Schmitt trigger logic is rarely accurate enough for any practical purpose
> beyond glitch reduction. It should be possible to use a multivibrator
> (74xx126??) and some gates to do this if an RC and a Schmitt isn't
> accurate enough. An asynchronous clear counter would do the trick as
> well -- hold it in reset when the pulse is inactive, and trigger the
> micro whenever it counts to it's 'carry out' value. Then you just need
> to feed it an appropriate clock to hit your "more than 2us" criterion.
>
> If your ISR pops off quickly enough, and if it doesn't waste too much
> time, spin in the ISR until the signal goes inactive, and check the
> time. If that ">2us" can mean "sometimes _much_ greater than 2us" then
> this obviously won't work.
>
> I like the "two pins" approach, if you can make it unambiguous. Make
> that microcontroller hardware work for you, if you can.
>

I fully agree here. Software is not good at doing things with a few
microsecond timing - any processor fast enough to have plenty of
instruction cycles in that time will have unpredictable latencies due to
caches, pipelines, buffers, etc. But this should be fairly simple code
- with enough care, it could be done.

David Brown

Jun 7, 2011, 3:34:51 PM

An easy way to check timings is to calculate the timer delays needed for
one second, and connect it up to an LED. You can quickly tell if your
clocks are right, without scopes or other equipment - after all, /every/
electronics board has a blinking LED.
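
A sketch of that check on the LPC1768 (assuming the LED on P0.22 as on the
LPCXpresso board, PCLK_TIMER0 at the reset default of CCLK/4 = 25 MHz, and
CMSIS register names; all of these are assumptions worth verifying):

    void blink_check_init(void)
    {
        LPC_GPIO0->FIODIR |= (1U << 22);          /* LED pin as output        */
        LPC_TIM0->PR  = 0;                        /* no prescale              */
        LPC_TIM0->MR0 = 25000000UL - 1U;          /* one second of PCLK ticks */
        LPC_TIM0->MCR = (1U << 0) | (1U << 1);    /* interrupt + reset on MR0 */
        LPC_TIM0->TCR = 1;                        /* start the timer          */
        NVIC_EnableIRQ(TIMER0_IRQn);
    }

    void TIMER0_IRQHandler(void)
    {
        LPC_TIM0->IR = (1U << 0);                 /* clear the MR0 flag */
        LPC_GPIO0->FIOPIN ^= (1U << 22);          /* toggle the LED     */
    }

If the LED blinks at anything other than once every two seconds, the PLL or
the clock dividers are not set up the way you think they are.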

Arlet Ottens

Jun 7, 2011, 3:45:05 PM
On 06/07/2011 09:34 PM, David Brown wrote:

>> A) Everyone who's told you to look at the disassembly for stupidity is
>> right.
>>
>> B) Setting up all of the clocks on the LPC17s is non-trivial. Have you
>> confirmed that you're actually running at 100 MHz, and that the
>> peripheral clock going to the timer peripheral is too? If I recall
>> correctly, the CMSIS code to do so doesn't actually work. If you've got
>> access to the CLKOUT pin, setting it up and dropping a scope there might
>> do you a world of good. Also, there's an errata note that says you've
>> got to have the main PLL disabled while you're configuring the
>> peripheral clocks
>>
>
> An easy way to check timings is to calculate the timer delays needed for
> one second, and connect it up to an LED. You can quickly tell if your
> clocks are right, without scopes or other equipment - after all, /every/
> electronics board has a blinking LED.

Another sanity check I use is to blink a LED as fast as possible in a
software loop, and see if the frequency matches what I'd expect.
Sometimes there's a difference of a factor 100, which usually means the
PLL isn't set up correctly, or the CPU clock dividers are still at the
most conservative defaults, or it's still running from an internal RC
oscillator or something like that.


Tim Wescott

Jun 7, 2011, 3:45:50 PM

The only caveat would be if the compiler wasn't very good at generating
efficient code -- but I'd have a hard time believing that for a Cortex
processor. Not setting up the optimization flags correctly, and
inadvertently writing inefficient C code -- yes.

I'm just old and suspicious, and remembering too many bad experiences
with compilers that _were_ crappy.

>> Even on a 100MHz processor, 2us is an awfully short period of time.
>> Doing some sort of preconditioning makes a lot of sense to me, although
>> Schmitt trigger logic is rarely accurate enough for any practical purpose
>> beyond glitch reduction. It should be possible to use a multivibrator
>> (74xx126??) and some gates to do this if an RC and a Schmitt isn't
>> accurate enough. An asynchronous clear counter would do the trick as
>> well -- hold it in reset when the pulse is inactive, and trigger the
>> micro whenever it counts to it's 'carry out' value. Then you just need
>> to feed it an appropriate clock to hit your "more than 2us" criterion.
>>
>> If your ISR pops off quickly enough, and if it doesn't waste too much
>> time, spin in the ISR until the signal goes inactive, and check the
>> time. If that ">2us" can mean "sometimes _much_ greater than 2us" then
>> this obviously won't work.
>>
>> I like the "two pins" approach, if you can make it unambiguous. Make
>> that microcontroller hardware work for you, if you can.
>>
>
> I fully agree here. Software is not good at doing things with a few
> microsecond timing - any processor fast enough to have plenty of
> instruction cycles in that time will have unpredictable latencies due to
> caches, pipelines, buffers, etc. But this should be fairly simple code -
> with enough care, it could be done.

However: you'll need to be exceedingly strict about your interrupt
response time elsewhere. If you have a habit of turning off interrupts
to make sure that operations are atomic, and _particularly_ if you're
one of a team of programmers that do this, then you have to be Really
Really Strict about just how long these intervals last.

Because all it'll take is one guy turning off interrupts while he
calculates pi to 100 decimal places in some bit of shared memory, and
your little interval counter will fail.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html

Arlet Ottens

Jun 7, 2011, 4:07:14 PM

The Cortex-M3 the OP is using is actually very good for this task. There
are 8 different hardware level nested interrupts, and the CPU will
automatically push your registers in 12 cycles. ISRs can be written in
pure C without any special pragmas/attributes, and without assembler stubs.

When one ISR follows the other, the registers are not restored/saved
again, but are kept in saved state, while it immediately executes the
next ISR, taking only 6 cycles of latency in between.

See http://www.arm.com/files/pdf/IntroToCortex-M3.pdf for more details.

Rich Webb

Jun 7, 2011, 4:27:04 PM
On Tue, 07 Jun 2011 21:45:05 +0200, Arlet Ottens <usen...@c-scape.nl>
wrote:

The embedded world's version of the canonical printf("Hello, world!");
One should always, always do something like this when first approaching
a new processor or a new compiler on an old processor.

--
Rich Webb Norfolk, VA

Jon Kirwan

Jun 7, 2011, 4:47:06 PM
On Tue, 07 Jun 2011 21:31:47 +0200, David Brown
<david...@removethis.hesbynett.no> wrote:

><snip>


>Long before you consider writing the ISR in assembly, check that the C
>code is decently written (if you can't write appropriate C code for an
>interrupt routine, you are unlikely to be able to write good assembly),
>and check that you are using your compiler properly (and that you have a
>decent compiler). Used properly (which includes studying the generated
>assembly), C on a processor like this should be very close to the speed
>of optimal assembly.

><snip>

A decent warning for those not fluent in assembly. Not for
those of us who know it cold.

C code is almost _never_ as good as hand-written assembly
code, and unless you are doing math expressions (which is NOT a
good idea in an interrupt routine anyway) writing assembly is not
much different from writing in C. The same hardware needs dealing
with, and the compilers usually have "constraints" that an assembly
programmer does not have, at all.

We've been down this c vs assembly path a million times here.
Some points are good, but I hate broad brush stuff. Look up
the discussion we had a few years ago on a GCD algorithm. To
this day, not even the best x86 compilers can come close even
when ALL of the c-constraints must be fully observed by the
hand assembly coder. Compilers cannot do topology inversion
and they handle status bits somewhat poorly. There are many
other issues that may relate to interrupts, as well, where
there is no syntax in c for certain semantics.

How all this applies in the ARM case I'll leave to folks
better informed than me. But I really don't like it when I
see "you are unlikely to be able to write good assembly" and
"should be very close to ... optimal assembly." Most
particularly, when discussing interrupt routines.

I will leave it there.

Jon

Jim Granville

Jun 7, 2011, 6:17:09 PM
On Jun 8, 1:16 am, "navman" <naveen_pn@n_o_s_p_a_m.yahoo.com> wrote:
> Hi,
> I'm trying to counter some pulses (2-10usec width) using the LPC1768
> Cortex-M3 microcontroller. There are 2 channels on which I have count the
> pulses on. I have to allow only pulses that are >=2us pulse width (so we
> cannot simply use the counter function).  

You have not said what is driving this unusual spec, nor the repeat-
rate, and SW alone may not be enough to reject all noise types.
i.e. two 1.8us pulses close together could pass.

So an external filter, either Schmitt + RC, or a simple state-engine
in a SPLD/CPLD or HC163 counter may be needed.

You need to minimise the SW & interrupt calls, by helping in HW, eg
capturing a value on each edge, but only INT on trailing edge, then
check the delta-time.

Some of the new NXPs have a capture-clears-timer feature, which would
be very useful on this type of problem.

David Brown

Jun 8, 2011, 4:22:26 AM

There are still some inefficient compilers around, but they are mainly
for the smaller processors that are hard to work with. On something
like the Cortex, it's easy to generate reasonable code for short C
functions. The big differences are for things like automatic use of
vector or DSP functions, smarter loop unrolling, interprocedural
optimisations, etc. - but they should not make a difference in a case
like this.

Yes, there is always someone that thinks the UART receive interrupt
routine is the best place to interpret incoming telegrams, act on them,
and build up a reply...

navman

Jun 8, 2011, 6:44:08 AM
Thanks for your valuable inputs. We tried toggling an IO pin inside
while(1) and saw that it only generates pulses of 150ns width. So is there
something wrong with the clock configuration?

We use the LPCXpresso compiler. I'll try to post the code here for the
clock init a little later.

David Brown

Jun 8, 2011, 8:12:49 AM
On 08/06/2011 12:44, navman wrote:
> Thanks for your valuable inputs. We tried toggling an IO pin inside
> while(1) and see that it only generates pulses of 150ns width. So is there
> something wrong with the clock configuration?
>

It /sounds/ likely that there is something wrong, but I am not sure what
you should expect here. Certainly for some ARM devices IO pin access is
surprisingly slow. I don't know this chip, so I'll let others give more
definite answers.

> We use the LPCXpresso compiler. I'll try to post the code here for the
> clock init a little later.
>

LPCXpresso uses gcc, which will produce solid and efficient code. But
that depends on the compiler flags - if optimisation is turned off, you
will get very big and slow object code.
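
For example, something along these lines for the time-critical files
(assuming the usual arm-none-eabi gcc under the LPCXpresso IDE; the exact
invocation the IDE generates may differ):

    arm-none-eabi-gcc -mcpu=cortex-m3 -mthumb -O2 -ffunction-sections -c capture_isr.c -o capture_isr.o

The point is simply that -O0 (the debug default in many project templates)
can easily cost a factor of several in code size and speed.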

Rich Webb

Jun 8, 2011, 8:22:57 AM
On Wed, 08 Jun 2011 05:44:08 -0500, "navman"
<naveen_pn@n_o_s_p_a_m.yahoo.com> wrote:

>Thanks for your valuable inputs. We tried toggling an IO pin inside
>while(1) and see that it only generates pulses of 150ns width. So is there
>something wrong with the clock configuration?

Take a look at the generated assembly to see how many actual
instructions are executed to implement the toggle feature. You should be
able to work back from there to the effective instruction cycle time.

Arlet Ottens

Jun 8, 2011, 8:53:27 AM
On 06/08/2011 02:12 PM, David Brown wrote:
> On 08/06/2011 12:44, navman wrote:
>> Thanks for your valuable inputs. We tried toggling an IO pin inside
>> while(1) and see that it only generates pulses of 150ns width. So is
>> there
>> something wrong with the clock configuration?
>>
>
> It /sounds/ likely that there is something wrong, but I am not sure what
> you should expect here. Certainly for some ARM devices IO pin access is
> surprisingly slow. I don't know this chip, so I'll let others give more
> definite answers.

The LPC series use a special fast gpio interface, which is actually
pretty good compared to older APB based GPIO interfaces.

I don't have a LPC17xx, but I just tried it on a LPC2478 which has a
similar FGPIO interface (but an ARM7 core instead of Cortex-M3), doing:

while( 1 )
{
    FIO0SET = BITMASK;   /* drive the pin high */
    FIO0CLR = BITMASK;   /* drive the pin low  */
}

This results in pulses of 2 cycles high, and 5 cycles low.

In assembly, this loop is implemented as 2 stores and a branch.

Toggling the same pin with:

while( 1 )
{
    FIO0PIN ^= BITMASK;   /* read-modify-write toggle */
}

results in 9 cycles high, 9 cycles low for a load, exor, store, and
branch. All of this using gcc -O2.

15 cycles for the pulse width seems a bit high in comparison.

Ulf Samuelsson

Jun 8, 2011, 1:13:43 PM
navman skrev 2011-06-07 15:16:
> Hi,
> I'm trying to counter some pulses (2-10usec width) using the LPC1768
> Cortex-M3 microcontroller. There are 2 channels on which I have count the
> pulses on. I have to allow only pulses that are>=2us pulse width (so we
> cannot simply use the counter function).
>
> But it is turning out to be an incredibly difficult feat to achieve this on
> a 100MHz Cortex-M3. The problem arises when we try to measure the pulse
> width to allow only pulses lasting 2us or higher. We are trying to use a
> capture pin and the capture interrupt. First we set it for a falling edge
> and capture the timer value, then set it to rising edge and again capture
> the timer value. Then take the difference between two values to see if it
> >2usec. But the processing itself is taking over 6-8usec. We also tried
> simply using a external interrupt & reading timer registers with each edge,
> but with the same results.
>
> We cannot seem to understand how or why the processing is taking so long
> there are hardly 3-4 "C" statements in the interrupt routine (change edge
> of capture, take the difference in captured values and compare if it is
> >=2us). Any ideas how this feat could be accomplished on a LPC1768?
>
>
> ---------------------------------------
> Posted through http://www.EmbeddedRelated.com

I know how I would implement this on an Atmel AT32UC3C.

You use the pulse input as a gate to the clock of a counter (CNT0).
CNT0 will count up, while the pulse is active.
When the pulse ends, the counter should be reset.

A compare register is used to determine if the signal is > 2 us.
If CNT0 matches the compare register, an "event" is triggered.
The event is used to clock another counter CNT1.

Best Regards
Ulf Samuelsson

Leon

Jun 9, 2011, 7:06:28 AM
On Jun 8, 11:44 am, "navman" <naveen_pn@n_o_s_p_a_m.yahoo.com> wrote:
> Thanks for your valuable inputs. We tried toggling an IO pin inside
> while(1) and see that it only generates pulses of 150ns width. So is there
> something wrong with the clock configuration?
>
> We use the LPCXpresso compiler. I'll try to post the code here for the
> clock init a little later.        

ARM chips have surprisingly slow I/O. It's better than it was with the
earlier devices like the LPC2106, but it's still not very good.

Leon

Mark Borgerson

Jun 9, 2011, 11:41:14 AM
In article <24a2423f-e48f-4981-88f7-c761120c2e63@
32g2000vbe.googlegroups.com>, leo...@btinternet.com says...
That was particularly true where setting or clearing a bit required
a read-modify-write sequence. Many of the Cortex M3 systems I'm
working with now have separate bit set and bit clear registers which
reduces the instruction count.

I just looked at a code sequence that toggles a bit to clock data
from a FIFO to an LCD display. It is a partially-unrolled loop with
a sequence of 16-bit bit-clear and bit-set instructions.

In C it is a sequence of:

GPIOB->BRR = FIFO_RD_BIT | LCD_WR_BIT;
GPIOB->BSRR = FIFO_RD_BIT | LCD_WR_BIT;

GPIOB->BRR = FIFO_RD_BIT | LCD_WR_BIT;
GPIOB->BSRR = FIFO_RD_BIT | LCD_WR_BIT;

. . .

The Thumb code generated is

STR R3, [R0]
STR R3, [R0, #4]

STR R3, [R0]
STR R3, [R0, #4]


R0 is loaded with the port base address and R3 is loaded with the
bit pattern before the start of the loop. Each instruction is a
single 16-bit word.

I don't think I'm going to beat that with any assembly-language
optimizations. ;-) The code was generated with the IAR compiler
and with optimizations level set to high.


When running on an STM32F103 with the main clock set to 64 MHz, the
bit toggles at 16 MHz. This is consistent with the fact that
the local peripheral bus for the general purpose IO bits is running at
1/2 the main clock rate, since that bus is rated for a maximum clock
rate of 36 MHz. Updating the whole QVGA display with 2 bytes/pixel
(in RGB(565) format) takes about 14 ms.

It certainly helps that the engineer who designed the board put all
the FIFO and LCD clocking bits on pins from the same peripheral
port. If they were on different ports, it would take a separate
instruction for each clock bit---doubling the number of instructions.

The Cortex M3 also implements bit banding, where each bit in
a peripheral register or RAM word is assigned a memory location
of its own. That means a bit test operation can be reduced
to:
bitstatus = UART_RCV_StatusBitBand; // returns either 0x01 or 0x00

instead of

bitstatus = UART_Status_Register & RCV_Status_Mask;
// returns either the mask bit or zero

I haven't yet had to optimize an interrupt handler to the
degree that would benefit from this capability, but it
could cut some instructions from a handler that required
you to figure out which of a number of possible bits caused
an interrupt. Writing the code to use the bit banding
would require that you pre-calculate the proper bit band
address for each port bit that you want to test.
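
The alias arithmetic is fixed by the Cortex-M3 architecture (peripheral
region 0x40000000 aliased at 0x42000000), so it can be wrapped once in a
macro -- a sketch, with the UART names below purely hypothetical:

    #include <stdint.h>

    /* word alias of one bit: 0x42000000 + (byte offset * 32) + (bit number * 4) */
    #define PERIPH_BB(addr, bit) \
        (*(volatile uint32_t *)(0x42000000UL + \
            (((uint32_t)(addr) - 0x40000000UL) * 32U) + ((uint32_t)(bit) * 4U)))

    /* e.g.:  bitstatus = PERIPH_BB(&UARTx->SR, RCV_STATUS_BIT);  reads 0 or 1 */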

Mark Borgerson

D Yuniskis

Jun 9, 2011, 12:18:40 PM
Hi Mark,

On 6/9/2011 8:41 AM, Mark Borgerson wrote:

[much elided]

> It certainly helps that the engineer who designed the board put all
> the FIFO and LCD clocking bits on pins from the same peripheral
> port. If they were on different ports, it would take a separate
> instruction for each clock bit---doubling the number of instructions.

This is where having someone comfortable in software and
hardware really pays off. A pure hardware dweeb would connect
things "wherever" without concern for how they are going to be
used/accessed. E.g., "make the schematic look pretty" (signal
names in numerical order -- even if REVERSE numerical order
would have made more sense!) or "make the layout easy".

I recall the sound output (CVSD) on some old arcade hardware
required the processor to shift the data byte (in an accumulator)
and write *one* bit of it into the CVSD, generate a clock,
lather, rinse, repeat. As a result, the quality of the speech
generated on those machines was piss poor -- with the processor
spending 100.0% of its time doing this!

A bit more forethought on the part of the hardware designer
would have made the software easier *and* more capable. I.e.,
write a routine to move data and see how clumsy it REALLY is!

navman

Jun 9, 2011, 1:30:43 PM
Just to update my findings, the LPC1768 **is** running at 100MHz as
confirmed by the CLKOUT pin. I'll have to check the disassembly and see
where the problem is. But I'm still very surprised to see 150ns pulses
when toggling a pin. An 8-bit AVR on a 16MHz clock can do about the same,
and maybe faster!

Tauno Voipio

Jun 9, 2011, 1:41:54 PM

There's something fishy in the interrupt code.

I'm running 200 kHz data capture (10 bit SPI A/D) and
a 12800 bits/s software UART simultaneously on a 50 MHz
Cortex-M3 (TI/Stellaris LM3S818). There is still plenty
of processor time left for other chores.

--

Tauno Voipio

Mark Borgerson

Jun 9, 2011, 3:16:08 PM
In article <to2dnb9SyPnenmzQ...@giganews.com>,
naveen_pn@n_o_s_p_a_m.yahoo.com says...

This definitely sounds like a code generation issue. At 10nSec
per instruction, I would expect you to be able to toggle a
bit in 20nSec. The GPIO ports go directly to the CPU,
according to the block diagram in the LPC1768 data sheet,
so I wouldn't expect an AHB slowdown.

160nSec is about what I would expect to get into an interrupt
service routine or a function call that saved a few
registers.

Mark Borgerson

Rich Webb

Jun 9, 2011, 4:41:17 PM

For a datapoint, I dragged out an old devboard & tossed a few lines of
test code into a scratch project.

Running at 96 MHz on a "Blueboard LPC1768-H" devboard with Rowley's
CrossWorks set for "Flash Release" mode and -O1 optimizations, the
output pin toggles at just over 20 nsec for an entire period (one uppie,
one downie) (nothing fancy; just a series of FIO0SETs and CLRs wrapped
in a (shudder) goto). Unoptimized ("Flash Debug" mode) the period is
about 94 nsec overall.

So it *can* do it. The issue is probably just finding that one line in
the user's manual or that one register bit that has been overlooked.
Bloody processors are so damned *literal* sometimes...

cassiope

Jun 10, 2011, 6:25:20 PM

Some older LPCs had "slow" GPIO and "fast" GPIO methods. Might this
be the case for your 1768?

Albert van der Horst

Jun 12, 2011, 8:43:23 AM
In article <rMadnYYXWcwzuXPQ...@giganews.com>,

navman <naveen_pn@n_o_s_p_a_m.yahoo.com> wrote:
>Hi,
>I'm trying to counter some pulses (2-10usec width) using the LPC1768
>Cortex-M3 microcontroller. There are 2 channels on which I have count the
>pulses on. I have to allow only pulses that are >=2us pulse width (so we
>cannot simply use the counter function).
>
>But it is turning out to be an incredibly difficult feat to achieve this on
>a 100MHz Cortex-M3. The problem arises when we try to measure the pulse
>width to allow only pulses lasting 2us or higher. We are trying to use a
>capture pin and the capture interrupt. First we set it for a falling edge
>and capture the timer value, then set it to rising edge and again capture
>the timer value. Then take the difference between two values to see if it
>>2usec. But the processing itself is taking over 6-8usec. We also tried
>simply using a external interrupt & reading timer registers with each edge,
>but with the same results.
>
>We cannot seem to understand how or why the processing is taking so long
>there are hardly 3-4 "C" statements in the interrupt routine (change edge
>of capture, take the difference in captured values and compare if it is
>>=2us). Any ideas how this feat could be accomplished on a LPC1768?

This shows the limits of interrupt processing. The GA144 with
asynchronous waits for input change (up-going and down-going)
should be able to switch between the two in a matter of ns.
(Burning one processor for the input and a couple to do the processing,
such as reading out a timer when signalled by the input processor.)

>---------------------------------------
>Posted through http://www.EmbeddedRelated.com

Groetjes Albert

--
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

Albert van der Horst

Jun 12, 2011, 8:47:28 AM
In article <3cWdnfMeAJr0rHLQ...@lyse.net>,

With decent handling of interrupt levels and UART at the lowest
priority, this may well be the cleanest design ...

Arlet Ottens

Jun 12, 2011, 8:25:04 AM
On 06/12/2011 02:43 PM, Albert van der Horst wrote:

>> But it is turning out to be an incredibly difficult feat to achieve this on
>> a 100MHz Cortex-M3. The problem arises when we try to measure the pulse
>> width to allow only pulses lasting 2us or higher. We are trying to use a
>> capture pin and the capture interrupt. First we set it for a falling edge
>> and capture the timer value, then set it to rising edge and again capture
>> the timer value. Then take the difference between two values to see if it
>> >2usec. But the processing itself is taking over 6-8usec. We also tried
>> simply using a external interrupt & reading timer registers with each edge,
>> but with the same results.
>>
>> We cannot seem to understand how or why the processing is taking so long
>> there are hardly 3-4 "C" statements in the interrupt routine (change edge
>> of capture, take the difference in captured values and compare if it is
>> >=2us). Any ideas how this feat could be accomplished on a LPC1768?
>
> This shows the limits of interrupt processing. The GA144 with
> asynchronous waits for input change (up-going and down-going)
> should be able to switch between the two in a matter of nS.
> (Burning one processor for the input and a couple to do the processing,
> such as reading out a timer when signalled by the input processor.)

Not quite. The interrupt latency of the Cortex-M3 is only 12 cycles in
normal circumstances, or 120 ns when running at 100 MHz, using fast memory.

The fact that the ISR takes 6-8 us in the OP's case doesn't mean
there's a problem with the interrupt mechanism itself.

malcolm

Jun 12, 2011, 5:54:37 PM
On Jun 8, 1:16 am, "navman" <naveen_pn@n_o_s_p_a_m.yahoo.com> wrote:
> Hi,
> I'm trying to counter some pulses (2-10usec width) using the LPC1768
> Cortex-M3 microcontroller. There are 2 channels on which I have count the
> pulses on. I have to allow only pulses that are >=2us pulse width (so we
> cannot simply use the counter function).  
>

I see in
http://ics.nxp.com/literature/other/microcontrollers/pdf/line.card.cortex-m.pdf
that the LPC18xx series has a Timer State engine, which could be
ideal for this type of pre-qualification.

Of course, the LPC18xx seems to only come in big packages,
so it might not be an easy shift to make.
It depends on how important this count+filter is.

Mr.CRC

Jun 17, 2011, 10:27:11 AM
David Brown wrote:
> I fully agree here. Software is not good at doing things with a few
> microsecond timing - any processor fast enough to have plenty of
> instruction cycles in that time will have unpredictable latencies due to
> caches, pipelines, buffers, etc. But this should be fairly simple code
> - with enough care, it could be done.


I have meaningful ISRs taking 400-600ns on a TMS320F2812 running at
150MHz. The entire executor engine of a dynamically reconfigurable
waveform generation state sequencer completes in under 4us, with that
time depending strongly on the number of possible transitions allowed
per state, currently limited to 4.

The new TI Delfino at 300MHz could halve these times. The reason these
processors can do this is that they have SRAM running at full processor
speed (albeit not in particularly generous amounts, although the Delfino
again improves this greatly) and a non-cached architecture.

What is critically important is interrupt latency and interrupt latency
jitter. In my application, I use assembly language "pre-ISRs" above the
non-critical ISRs which simply juggle a few interrupt enables, then very
quickly re-enable interrupts. The main cause of interrupt latency
jitter is that the C preamble code for each ISR takes quite a long time to
complete. So if the main() code gets interrupted, the interrupt latency
is about 100ns.

But if another low-priority interrupt is running when the highest
priority real-time interrupt is triggered, then if I just leave it to
the C compiler I have to wait for it to save full context, and run a
bunch of silly little extra instructions, before having a chance to
re-enable interrupts and let the high-priority one take over. That
might extend the effective latency to 300ns. Hence, latency jitter.

By writing a pre-ISR in .asm, I can get interrupts re-enabled,
restricted to just the more important ones, in just 3-4 instructions.
Then the high-priority ISR can preempt the preamble codes of the low
priority ISRs, greatly reducing latency jitter. I think the last time I
measured, I could guarantee less than 200ns latency on the F2812.

Once past getting interrupts reenabled, the C compiler is good enough at
writing the actual meat and potatoes ISR code.


I can't wait to get my hands on the Delfino. Being able to do serious
work in a few 100s of ns is very much fun.

--
_____________________
Mr.CRC
crobc...@REMOVETHISsbcglobal.net
SuSE 10.3 Linux 2.6.22.17

Jon Kirwan

Jun 17, 2011, 3:45:38 PM
On Fri, 17 Jun 2011 07:27:11 -0700, "Mr.CRC"
<crobc...@REMOVETHISsbcglobal.net> wrote:

><snip>


>What is critically important is interrupt latency and interrupt latency
>jitter.

><snip>

I enjoyed reading the detailed overview. And it makes the
point, again, among many other ways it can also be made.

As a side bar to just you, there are a few processor families
where interrupt latency __jitter__ for internally generated
interrupts (timers) is zero. Such interrupts are always
synchronous with the clocking system (of course) and all
instructions have the exact same execution time (1 cycle) and
so interrupts are always entered on the same relative phase
to the timer event. If you don't disable the system itself,
of course. I've used this for an operating system with a
jitter-free guarantee on starting sleeping processes using
delta queues (where only one such process is allowed to sleep
on the same timed event.)

Anyway, interesting overview. Enjoyed thinking a little
about it. Makes me want to buckle down and design, build,
and code up a personal arb-function gen for my desk. May do
that.

Jon

Mr.CRC

Jun 17, 2011, 6:14:44 PM
Jon Kirwan wrote:
> On Fri, 17 Jun 2011 07:27:11 -0700, "Mr.CRC"
> <crobc...@REMOVETHISsbcglobal.net> wrote:
>
>> <snip>
>> What is critically important is interrupt latency and interrupt latency
>> jitter.
>> <snip>
>
> I enjoyed reading the detailed overview. And it makes the
> point, again, among many other ways it can also be made.

Thanks Jon. I've mostly lurked here for over 12 years, and usually
listen to your writings with great eagerness to learn something and am
rarely disappointed.

> As a side bar to just you, there are a few processor families
> where interrupt latency __jitter__ for internally generated
> interrupts (timers) is zero. Such interrupts are always
> synchronous with the clocking system (of course) and all
> instructions have the exact same execution time (1 cycle) and
> so interrupts are always entered on the same relative phase
> to the timer event. If you don't disable the system itself,
> of course.

Is that ARM families that can basically switch context in hardware, or
some other device?


> I've used this for an operating system with a
> jitter-free guarantee on starting sleeping processes using
> delta queues (where only one such process is allowed to sleep
> on the same timed event.)

<scratches head, wonders what a "delta queue" is>

Hmm, looking at a few search results I sort of get it.

> Anyway, interesting overview. Enjoyed thinking a little
> about it. Makes me want to buckle down and design, build,
> and code up a personal arb-function gen for my desk. May do
> that.

I've gotten two of the recently produced Agilent 33522A at work. In the
past I was using a Tek AFG3022. Unfortunately, the new Agilent is
seriously bug-ridden. They fixed it somewhat with a recent update, but
there are still just embarrassing bugs. I would resign and go become a
monk if I put out something like that. I was close to buying one for
home, but I can't afford to put out my allowance funds for something
screwy.

The nice thing about the 33522A, if they ever get it to work, is that it
lets you choose an arbitrary sampling rate for arbs. Older generation
ones like the Tek in this price range had fixed sampling rates, which
seriously hampered usefulness if you wanted precise or low-frequency
arbs. The 33522A also has the ability to modulate many things with noise,
and coolest of all is that the noise bandwidth is settable!

What I'd be tempted to do if I had the time or was retired (having a
little one running amok is slowing me down to where a simple Nixie clock
project takes 2 years just to make a PCB) is an audio range arb. with
very high vertical precision output and very low distortion. That may
be doable in a DSP as well, rather than needing an FPGA which would be
my first consideration to implement anything serious. It would be cool
to extend it into a full blown audio development instrument with
analysis capabilities as well.

Weren't you one of the first people to code up a DDS on an AVR years ago?

Anyway, someone did that, and it inspired me to do something a little
different: instead of trying for the tightest inner loop and highest
sampling rate, I went for low frequencies, but with an 8-digit LED
display, with 5 frequency digits and 3 phase-shift digits for 1-degree
phase setting.

I think it was dual channel too. My interest was in making
psychedelic doodles with a laser and closed-loop scanner mirrors, laser
show style.

I got that to a working stage then threw it in a drawer. Then I did
something more sophisticated with the TMS320F2812, but with no UI.

Then that went in the drawer. Ultimately I'll use the F2812 to make a
full blown laser show. I really prefer to just goof off with eye candy
toys. Only at work do I do serious stuff.

Jon Kirwan

Jun 18, 2011, 2:50:45 PM
On Fri, 17 Jun 2011 15:14:44 -0700, "Mr.CRC"
<crobc...@REMOVETHISsbcglobal.net> wrote:

>Jon Kirwan wrote:
>> On Fri, 17 Jun 2011 07:27:11 -0700, "Mr.CRC"
>> <crobc...@REMOVETHISsbcglobal.net> wrote:
>>
>>> <snip>
>>> What is critically important is interrupt latency and interrupt latency
>>> jitter.
>>> <snip>
>>
>> I enjoyed reading the detailed overview. And it makes the
>> point, again, among many other ways it can also be made.
>
>Thanks Jon. I've mostly lurked here for over 12 years, and usually
>listen to your writings with great eagerness to learn something and am
>rarely disappointed.

Thanks. I don't usually have a lot to say, anymore, though.

With the move towards large memory systems and 32-bit cpus
with FP and memory mgmt systems capable of runing Linux on a
chip, "embedded" has blurred to the point where you can't
tell the difference between a Microsoft MSDN developer, a
Linux guru, and an embedded micro coder, anymore.

The Windows CE coder seems to imagine they are doing embedded
work. So does the Linux coder. .NET can run embedded, in
fact, though anyone familiar with its details must realize
just how abstracted that environment is. Technically, yes, I
suppose it's true that porting code from a workstation to run
on an "embedded device" using .NET, for example, might still
meet some people's definitions. A lot of the discussions
here seem to be at that level now. Although I do .NET coding
and have my paid-up annual MSDN subscription, it's dull stuff
to me.

I think of embedded as being about the skills required by us and
where they __differ__ from hosted environment development
skill sets. When a job requires familiarity with more than just
one language and requires familiarity with how compilers
compile code, with assembly, with linker semantics and how to
modify their control files for some desired result, as well
as broad acquaintances with physics, numerical methods,
signal processing, optics, and certainly electronic design,
then we find more of these differences. When it requires
less of these differences from workstation development
skills, it is less about the "embedded" I know and love.

Times are changing and the relative mix of skills found
amongst the embedded programmer body are shifting with the
capabilities being offered in todays tools. Entire operating
systems are ported over by people I do consider to be well
heeled embedded programmers, but then just used lock-stock-
and-barrel by those who know little about what took place and
don't care to know and who just use the usual workstation
tools without knowing much difference, at all.

That's a different thing to me. So I write less, today. I
haven't changed, but the audience has.

>> As a side bar to just you, there are a few processor families
>> where interrupt latency __jitter__ for internally generated
>> interrupts (timers) is zero. Such interrupts are always
>> synchronous with the clocking system (of course) and all
>> instructions have the exact same execution time (1 cycle) and
>> so interrupts are always entered on the same relative phase
>> to the timer event. If you don't disable the system itself,
>> of course.
>
>Is that ARM families that can basically switch context in hardware, or
>some other device?

Some other. For one example, the now "mature" or "older"
ADSP-21xx Family from Analog Devices is the example I had
coded that delta queue for.

>> I've used this for an operating system with a
>> jitter-free guarantee on starting sleeping processes using
>> delta queues (where only one such process is allowed to sleep
>> on the same timed event.)
>
><scratches head, wonders what a "delta queue" is>
>
>Hmm, looking at a few search results I sort of get it.

It's a very simple, easy to operate, precision tool. I first
read about the idea from Douglas Comer's first book on XINU.

I think I'd focus on an audio range device, as well. But I'm
pretty sure I'd just make it a toy and not something
professional. There is so much more "work" involved in
making something ready for others to use and although I find
some of that enjoyable, I don't find all of it to be so. And
I'd be looking more for my own hobbyist pleasure, self-test,
and education than anything else.

I looked over some of what you write elsewhere and I wish I
had your experiences with lasers, too. Lots of potential fun
there, both for me and for students I like to teach at times.

By the way, I've also got a lot of "stuff in drawers." And I
definitely get it about just goofing off with toys. Work is
work, but on my own I don't want the burden of having to do
all the extra stuff needed to productize. I'd rather play.

Jon

Don Y

Jun 18, 2011, 3:22:12 PM
Hi Jon,

The "one-liner" description of "embedded systems" that I
use to try to give folks (e.g., at a dinner party) an idea
of what I do is: "something that is quite obviously a computer
(inside) but doesn't look/act like a 'computer'" (assuming that
they will relate "desktop PC" to the term "computer").
It's easy to give them examples of all of these systems that
they've interacted with in the past 24 hours:
- your ATM
- the gas station experience
- the ECU that runs your automobile
- the *mouse* (!) attached to your PC (yup, there's a computer inside!)
- your sewing machine
- your TV
- the modem that connects your PC to the internet
- your cell phone
- your iPod
etc.

I.e., it is easy to *quickly* overwhelm them with a list of
things that they take for granted without even resorting
to more esoteric applications (e.g., the photo-radar detectors
around town; the traffic light controller; the machine that
weighs and labels the meats you purchase at the butcher/grocer)

I summarize by saying "I make *things*".

To folks in the Industry, I describe embedded systems as
"software that comes with a warranty" (*think* about it!)

Jon Kirwan

unread,
Jun 18, 2011, 9:25:57 PM6/18/11
to
On Sat, 18 Jun 2011 12:22:12 -0700, Don Y <now...@here.com>
wrote:

I don't look at it from the outside. I look at it from the
processes involved in performing the work and the skills and
talents those entail. It's not about usage. It's about what
is required by the craft.

Making a table from a shipped kit that requires assembly
using a provided wrench and screwdriver, with everything
already cut, pre-assembled and dismantled before shipping,
and nicely varnished as well is indeed a table in the end and
the end user, in fact, "made it." However, someone who does
all the design, taking into account structure, form, use,
available tools, fasteners, and skills, and then cuts each
piece after careful measurement and strategy beforehand, and
then does all the treatments and so on before getting to
assembly, also "made it." Yet the shared backgrounds,
skills, and talents are completely lacking here.

To me it is about the shared life's experience and knowledge,
skills and interests, depth and breadth, tools and so on that
are involved in the shaping that count. It's who we are, not
what we make, that makes us "embedded programmers."

When the things themselves change -- for example, when the
making of furniture goes from hand-selection of grain and
quality and orientation and colors and hand-crafted use of a
wide variety of odd styled chisels of every manner and kind,
to make a piece that will last 200 years (I have many such
pieces, by the way, which are in perfect condition today)
through all manner of humidity and temperature.... to using
30-yr old doug fir softwood stapled together with plastic and
staples and some cheesy metalwork slapped onto the outside
without any real idea of how these things get used over the
years in the end (as happens to be the huge difference
between old "roll top" desks and the new crap that could only
be said to be copied out of some catalog, by comparison)...
well, I just cannot call them the same kind of craft anymore.

Things have changed. And they have.

It's not for the bad. In many ways, the changing face of it
makes it more accessible to many who otherwise could never
have laid hands to the work before. People who couldn't have
readily fathomed what it takes to write their own O/S on the
fly, don't need to do so. They don't even need to understand
them very well. In fact, they can downright abuse them and
often come up, ignorantly, with something that "works" well
enough to get the job done even though they don't even know
they didn't use the tools well, at all. The tools are that
good and readily available. But it also means that they have
no idea what a thunk is, or a coroutine, or how c++ manages
to mechanically achieve its inheritance rules, or what a
stack unwind actually is and does.

It's a slightly different world coding for embedded Linux. So
we don't share that much between us.

Embedded to me is about the skills and shared backgrounds we
share as a community. Not products with warranties.

Jon

Don Y

unread,
Jun 18, 2011, 11:25:34 PM6/18/11
to
Hi Jon,

On 6/18/2011 6:25 PM, Jon Kirwan wrote:

>> The "one-liner" description of "embedded systems" that I
>> use to try to give folks (e.g., at a dinner party) an idea
>> of what I do is: "something that is quite obviously a computer
>> (inside) but doesn't look/act like a 'computer'"
>>

>> I summarize by saying "I make *things*".
>>
>> To folks in the Industry, I describe embedded systems as
>> "software that comes with a warranty" (*think* about it!)
>
> I don't look at it from the outside. I look at it from the
> processes involved in performing the work and the skills and
> talents those entail. It's not about usage. It's about what
> is required by the craft.

But the requirements change as the craft evolves! Do you
want to dope your own silicon? Do you prefer using a
"pocket assembler" in lieu of a HLL compiler?

> Making a table from a shipped kit that requires assembly
> using a provided wrench and screwdriver, with everything
> already cut, pre-assembled and dismantled before shipping,
> and nicely varnished as well is indeed a table in the end and
> the end user, in fact, "made it."

What's wrong with that? If it results in more people
having *tables*...

Or, if it frees them up to work on something *else* that they
couldn't have had time to do, otherwise (because of the
countless hours/weeks that they would have spent designing,
measuring, selecting wood grains, etc.)?

I.e., how much *didn't* get done, previously, because we were
screwing around with 6 character, uppercase symbols in our
code instead of writing things more expressively?

> However, someone who does
> all the design, taking into account structure, form, use,
> available tools, fasteners, and skills, and then cuts each
> piece after careful measurement and strategy beforehand, and
> then does all the treatments and so on before getting to
> assembly, also "made it." Yet the shared backgrounds,
> skills, and talents are completely lacking here.

They are redirected to other purposes.

One of the first products I was involved with was a LORAN-C
position plotter. Feed it time-difference coordinates and
it charts your progress, on paper, in real time. Anywhere on
the globe.

Hardware:
- 8085 (did that max out at fosc of 6MHz or 4?)
- 2 timer/counters
- 12KB ROM
- 256 bytes RAM
- two unipolar stepper motor drives (low power, nothing fancy)
- 2x6 digit PGD display

I.e., you could almost do this with a PIC, today.

The code -- complex for that time -- would:
- gather incoming TD's
- fit them to the appropriate "chains" based on GRI
- convert hyperbolic coordinates to lat-lon
- compensate for oblateness of Earth
- translate to a scaled Mercator projection
- drive X&Y motors to bring the pen to that position
- lather, rinse, repeat
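
Just to make that pipeline concrete for readers used to modern
tools, here is a rough C outline of the loop -- purely illustrative,
not the original 8085 firmware; every routine name and type below
is a hypothetical placeholder:

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

struct td_pair { double tdA, tdB; };   /* LORAN-C time differences */
struct latlon  { double lat, lon; };   /* radians */
struct plot_xy { long x, y; };         /* pen position, in motor steps */

/* Hooks standing in for the receiver, chain tables and motor drive. */
extern int  read_time_differences(struct td_pair *td);
extern int  match_chain_by_gri(const struct td_pair *td);   /* -1 if no fit */
extern struct latlon hyperbolic_to_latlon(int chain, const struct td_pair *td);
extern void step_pen_to(struct plot_xy target);

/* Scaled Mercator projection: x = k*lon, y = k*ln(tan(pi/4 + lat/2)). */
static struct plot_xy mercator_project(struct latlon p, double k)
{
    struct plot_xy out;
    out.x = (long)(k * p.lon);
    out.y = (long)(k * log(tan(M_PI / 4.0 + p.lat / 2.0)));
    return out;
}

void plotter_loop(double chart_scale)
{
    struct td_pair td;

    for (;;) {                                   /* lather, rinse, repeat */
        if (!read_time_differences(&td))         /* gather incoming TDs   */
            continue;

        int chain = match_chain_by_gri(&td);     /* fit TDs to a chain    */
        if (chain < 0)
            continue;

        /* Hyperbolic LOPs to lat/lon; a real version would also correct
         * for the oblateness of the Earth at this point. */
        struct latlon fix = hyperbolic_to_latlon(chain, &td);

        step_pen_to(mercator_project(fix, chart_scale));
    }
}

The interesting (and period-specific) work hides inside
hyperbolic_to_latlon() -- and in squeezing all of it into 12KB of
ROM and 256 bytes of RAM.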

Today, tackling this project would be a 4-6 *week* effort
instead of the many man-years that went into it.

Builds would take *seconds* instead of half an hour of
listening to an 8" floppy grinding away -- followed by
2 *hours* of EPROM burning. You'd be able to step through
your code "at your desk" instead of burning EPROMs and
probing for signatures with a 'scope.

> To me it is about the shared life's experience and knowledge,
> skills and interests, depth and breadth, tools and so on that
> are involved in the shaping that count. It's who we are, not
> what we make, that makes us "embedded programmers."

I don't see how evolution in tools or device capabilities
changes that. To me, the differences *are* in what we make.
You use an autopilot differently than a "word processor"
(that some desktop programmer created). The consequences
of the autopilot's failure can be more significant, and
immediate. A Therac hiccups and it *cooks* someone.

Users are more *engaged* with "devices" than "computers".
They become integrated in their lives. Sitting down at
"your PC" is an entirely different experience than
cooking your dinner in the microwave; or, making a call
on your cell phone; or driving to work; etc.

The "warranty" aspect, to me, speaks to the mindset
differences between the two product worlds. People
*expect* (yes, EXPECT!) the software on their PC to crash.
They *don't* expect -- nor are they tolerant of -- the
software *in* their microwave to crash! And, if the
latter occurs, they expect to be compensated for it
("I want a new microwave. One that *works*!").

> When the things themselves change -- for example, when the
> making of furniture goes from hand-selection of grain and
> quality and orientation and colors and hand-crafted use of a
> wide variety of odd styled chisels of every manner and kind,
> to make a piece that will last 200 years (I have many such
> pieces, by the way, which are in perfect condition today)
> through all manner of humidity and temperature.... to using
> 30-yr old doug fir softwood stapled together with plastic and
> staples and some cheesy metalwork slapped onto the outside
> without any real idea of how these things get used over the
> years in the end (as happens to be the huge difference
> between old "roll top" desks and the new crap that could only
> be said to be copied out of some catalog, by comparison)...
> well, I just cannot call them the same kind of craft anymore.

So, would you prefer to spend weeks or months building that
hand-crafted desk? Or, would you prefer to assemble it
from a kit and spend the rest of your time building a unique
set of windchimes (that no one else will ever have)?

I am thrilled at the opportunities these changes have given
me to move my art into different areas. If I was still relying
on that "pocket assembler", I'd never have time to even *imagine*
the other devices that I could create let alone actually create
them!

Or, the time to spend *commenting* on how "things have changed" :>

Jon Kirwan

unread,
Jun 19, 2011, 2:07:20 AM6/19/11
to
On Sat, 18 Jun 2011 20:25:34 -0700, Don Y <now...@here.com>
wrote:

>Hi Jon,


>
>On 6/18/2011 6:25 PM, Jon Kirwan wrote:
>
>>> The "one-liner" description of "embedded systems" that I
>>> use to try to give folks (e.g., at a dinner party) an idea
>>> of what I do is: "something that is quite obviously a computer
>>> (inside) but doesn't look/act like a 'computer'"
>>>
>>> I summarize by saying "I make *things*".
>>>
>>> To folks in the Industry, I describe embedded systems as
>>> "software that comes with a warranty" (*think* about it!)
>>
>> I don't look at it from the outside. I look at it from the
>> processes involved in performing the work and the skills and
>> talents those entail. It's not about usage. It's about what
>> is required by the craft.
>
>But the requirements change as the craft evolves!

Indeed that is so. But it isn't just that there are new
tools in town. It's also that _more_ people can participate
at a much wider variety of skill levels. I'm not complaining
about it. Just noting it.

>Do you want to dope your own silicon?

I have done that. Were you aware of a Bell Labs kit to do
just that, put out in the mid 1960's?? (I've done it since,
with my own home-brew oven, as well, made with a nickel
plated, water cooled chamber and halogen lamps. Long story
there, too.)

>Do you prefer using a
>"pocket assembler" in lieu of a HLL compiler?

><snip>

I said that the group's interests have moved away from my own
over the years. That's true. I _also_ believe that as the
processors used and tools applied increasingly look more like
traditional, hosted programming environments found on
workstations, to that degree it also is less and less a
differentiating feature. Taken to its limit, there will be
no difference in embedded development and any other and no
point in choosing to use the adjective anymore.

The group here has had this debate here. Long threads about
it. I'm not changing any of the position I took a decade
back about any of this. It's the same stand today. What
makes embedded development "embedded" to me are how the
skills and tools are differentiated from workstation
development. That's the main point. It's not about the end
product.

If a washing machine uses Windows 7 Ultimate and Microsoft
writes the .NET objects used to do the hardware interfacing
at a low level and then provides abstraction objects to the
programmer, then this particular washing machine programmer
is no more an embedded programmer -- even though it is a
washing machine -- than would be any other .NET Windows 7
Ultimate programmer dragging and dropping a few objects onto
a form.

Others have instinctively asked the questions you have asked.
But I have considered them and don't agree with them once I
thought more on it. It's not a useful dividing line. Sorry,
but that's the end of it for me. What _is_ useful to know
are the types of experiences, talents, and backgrounds
required to _do_ development for some sphere. And in that
sense, embedded has real meaning the way I apply it. It has
almost no useful meaning the way you seem to suggest.

I'll stop here. There's more to this, but I didn't want to
go too far. If you are interested, I've posted on this topic
before and with more of my views on it exposed. Still
available in google, I'm sure.

Jon

Mr.CRC

unread,
Jun 19, 2011, 11:23:02 AM6/19/11
to
Jon Kirwan wrote:
> On Fri, 17 Jun 2011 15:14:44 -0700, "Mr.CRC"
> <crobc...@REMOVETHISsbcglobal.net> wrote:
>> Thanks Jon. I've mostly lurked here for over 12 years, and usually
>> listen to your writings with great eagerness to learn something and am
>> rarely disappointed.
>
> Thanks. I don't usually have a lot to say, anymore, though.

You're welcome.

> With the move towards large memory systems and 32-bit cpus
> with FP and memory mgmt systems capable of running Linux on a
> chip, "embedded" has blurred to the point where you can't
> tell the difference between a Microsoft MSDN developer, a
> Linux guru, and an embedded micro coder, anymore.

I can relate. I prefer bit banging, writing ISRs, that sort of thing.
Though drivers can get a little tiresome. I figure if it doesn't need
an oscilloscope to debug and verify, it's not my kind of "embedded."
Perhaps I just prefer any excuse to use an oscilloscope!

> The Windows CE coder seems to imagine they are doing embedded
> work. So does the Linux coder. .NET can run embedded, in
> fact, though anyone familiar with its details must realize
> just how abstracted that environment is. Technically, yes, I
> suppose it's true that porting code from a workstation to run
> on an "embedded device" using .NET, for example, might still
> meet some people's definitions. A lot of the discussions
> here seem to be at that level now. Although I do .NET coding
> and have my paid-up annual MSDN subscription, it's dull stuff
> to me.

I've cringed at the mere sight of ".NET" since its inception. I also
hated Java since I first heard of it.

We had a guy at work who thought "embedded" meant installing Linux on a
SBC and programming it. It is "embedded" in a sense, but not quite the
sense that it seems we would pretty much agree upon.

> I think of embedded to be about the skills required by us and
> where they __differ__ from hosted environment development
> skill sets. When a job requires familiarity with more than just
> one language and requires familiarity with how compilers
> compile code, with assembly, with linker semantics and how to
> modify their control files for some desired result, as well
> as broad acquaintances with physics, numerical methods,
> signal processing, optics, and certainly electronic design,
> then we find more of these differences. When it requires
> less of these differences from workstation development
> skills, it is less about the "embedded" I know and love.
>
> Times are changing and the relative mix of skills found
> amongst the embedded programmer body are shifting with the
> capabilities being offered in today's tools. Entire operating
> systems are ported over by people I do consider to be well
> heeled embedded programmers, but then just used lock-stock-
> and-barrel by those who know little about what took place and
> don't care to know and who just use the usual workstation
> tools without knowing much difference, at all.
>
> That's a different thing to me. So I write less, today. I
> haven't changed, but the audience has.

Well I don't think the need for the more EE skill side of the trade will
go away. The changes probably amount to an overall improvement, since
more people can access more technology and tools. That's still a benefit
even if some of them don't become master craftsmen. There's a place for
developers with a cursory, high-level understanding. Think of Arduino
and kinetic sculptors, for ex. If they can get something to just "work"
then the world is a better place.

At first I thought Arduino was stupid. "I can work with a bare AVR,
what do I need that for?" I thought. Then I realized that if it makes
more people play with microcontrollers, it is good. Now I'm even
curious to check it out and see if it can spare me some time on my next
8-bit project.

>> Is that ARM families that can basically switch context in hardware, or
>> some other device?
>
> Some other. For one example, the now "mature" or "older"
> ADSP-21xx Family from Analog Devices is the example I had
> coded that delta queue for.

Oh that one. I was close to trying that out once. I actually would
have preferred to use ADI processors for what I use the TI C2000 for,
but at the time ADI had nothing like a "digital signal controller" with
DSP speed and microcontroller peripherals. Blackfin has closed the gap
a little, but it's still not what you'd pick to interface quadrature
encoders and run MOSFET SMPS front-ends.

But TI assembly language is an ugly thing. It's not that bad if you can
figure out the syntax and work with it enough to keep it memorized,
which I haven't, because the docs are all language lawyer style when
what is needed is more simple examples.

With ADI, at least for SHARC which I looked at a bit, assembly is a breeze.

>>> I've used this for an operating system with a
>>> jitter-free guarantee on starting sleeping processes using
>>> delta queues (where only one such process is allowed to sleep
>>> on the same timed event.)
>> <scratches head, wonders what a "delta queue" is>
>>
>> Hmm, looking at a few search results I sort of get it.
>
> It's a very simple, easy to operate, precision tool. I first
> read about the idea from Douglas Comer's first book on XINU.

Well I've a tidbit from you again. Thanks.


[edit]


> I think I'd focus on an audio range device, as well. But I'm
> pretty sure I'd just make it a toy and not something
> professional. There is so much more "work" involved in
> making something ready for others to use and although I find
> some of that enjoyable, I don't find all of it to be so. And
> I'd be looking more for my own hobbyist pleasure, self-test,
> and education than anything else.

Once it blurs into legalities, regulations, and injection molded die
making, I start to run for cover. Probably better that I have a 9-5 job
then.


> I looked over some of what you write elsewhere and I wish I
> had your experiences with lasers, too. Lots of potential fun
> there, both for me and for students I like to teach at times.

Yeah, well the lasers and my silly Chemistry degree cost me a lot of
time that I sometimes wish I had spent on getting a proper EE degree.

> By the way, I've also got a lot of "stuff in drawers." And I
> definitely get it about just goofing off with toys. Work is
> work, but on my own I don't want the burden of having to do
> all the extra stuff needed to productize. I'd rather play.
>
> Jon


Have a good Father's Day, whether or not you're a father!

Jon Kirwan

unread,
Jun 19, 2011, 12:19:21 PM6/19/11
to
On Sun, 19 Jun 2011 08:23:02 -0700, "Mr.CRC"
<crobc...@REMOVETHISsbcglobal.net> wrote:

>Jon Kirwan wrote:
><snip>


>
>> With the move towards large memory systems and 32-bit cpus
>> with FP and memory mgmt systems capable of running Linux on a
>> chip, "embedded" has blurred to the point where you can't
>> tell the difference between a Microsoft MSDN developer, a
>> Linux guru, and an embedded micro coder, anymore.
>
>I can relate. I prefer bit banging, writing ISRs, that sort of thing.
>Though drivers can get a little tiresome. I figure if it doesn't need
>an oscilloscope to debug and verify, it's not my kind of "embedded."
>Perhaps I just prefer any excuse to use an oscilloscope!

I don't always require an oscilloscope or an MSO, but just
being threatened that I might need one is what makes it all
the more fun for me. Without at least the threat present,
it's certain to be boring.

>> The Windows CE coder seems to imagine they are doing embedded
>> work. So does the Linux coder. .NET can run embedded, in
>> fact, though anyone familiar with its details must realize
>> just how abstracted that environment is. Technically, yes, I
>> suppose it's true that porting code from a workstation to run
>> on an "embedded device" using .NET, for example, might still
>> meet some people's definitions. A lot of the discussions
>> here seem to be at that level now. Although I do .NET coding
>> and have my paid-up annual MSDN subscription, it's dull stuff
>> to me.
>
>I've cringed at the mere sight of ".NET" since its inception. I also
>hated Java since I first heard of it.
>
>We had a guy at work who thought "embedded" meant installing Linux on a
>SBC and programming it. It is "embedded" in a sense, but not quite the
>sense that it seems we would pretty much agree upon.

If you don't need to read datasheets, study peripheral
operation, read schematics, consider sensor/transducer
physics, do some Laplace and partial fractions, look over
voltage thresholds and current limits, scan over compiler
output in assembly or machine code, set up that HP 54645D
with both 8-lead probes in hand just in case, and figure out
how to modify a linker control file, and all in the same
project, then it isn't embedded work... much.

>> I think of embedded to be about the skills required by us and
>> where they __differ__ from hosted environment development
>> skill sets. When a job requires familiarity with more than just
>> one language and requires familiarity with how compilers
>> compile code, with assembly, with linker semantics and how to
>> modify their control files for some desired result, as well
>> as broad acquaintances with physics, numerical methods,
>> signal processing, optics, and certainly electronic design,
>> then we find more of these differences. When it requires
>> less of these differences from workstation development
>> skills, it is less about the "embedded" I know and love.
>>
>> Times are changing and the relative mix of skills found
>> amongst the embedded programmer body are shifting with the
>> capabilities being offered in today's tools. Entire operating
>> systems are ported over by people I do consider to be well
>> heeled embedded programmers, but then just used lock-stock-
>> and-barrel by those who know little about what took place and
>> don't care to know and who just use the usual workstation
>> tools without knowing much difference, at all.
>>
>> That's a different thing to me. So I write less, today. I
>> haven't changed, but the audience has.
>
>Well I don't think the need for the more EE skill side of the trade will
>go away.

No, it grows. But the size of the pyramid of programmers
grows exponentially larger still. So it remains a dwindling
proportion of the conversation here despite the truth of what
you say.

>The changes probably amount to an overall improvement, since
>more people can access more technology and tools. That's still a benefit
>even if some of them don't become master craftsmen. There's a place for
>developers with a cursory, high-level understanding. Think of Arduino
>and kinetic sculptors, for ex. If they can get something to just "work"
>then the world is a better place.

Agreed. I think this is very good -- computers have come a long way
from when I first worked on building my own. What I did
caused me to get written up in a large spread, with pictures,
in the local newspaper. It was _that_ unusual, I guess. I
don't know who ratted me out at the time. But the news
people showed up, one day, all the same. To have the case
where one can get a TI Launchpad send to you for $4.30, with
cables and a crystal and two cpus, and connectors and the
rest... no shipping charges... well, what can one say? It's
a great time, indeed!!

I am glad for all this. And I'm glad others might be
interested in them for any reason of their own, at all.

>At first I thought Arduino was stupid. "I can work with a bare AVR,
>what do I need that for?" I thought. Then I realized that if it makes
>more people play with microcontrollers, it is good. Now I'm even
>curious to check it out and see if it can spare me some time on my next
>8-bit project.

I just used a Launchpad to create a parallel port to USB
"printer device" that can be used as a parallel port printer
and it saves files automatically on the PC, instead. Had to
add a DB25, some wire and a few resistors and one cap, is
all. Oh, and a tiny piece of vector board. So yes, I get
your point here.

>>> Is that ARM families that can basically switch context in hardware, or
>>> some other device?
>>
>> Some other. For one example, the now "mature" or "older"
>> ADSP-21xx Family from Analog Devices is the example I had
>> coded that delta queue for.
>
>Oh that one. I was close to trying that out once. I actually would
>have preferred to use ADI processors for what I use the TI C2000 for,
>but at the time ADI had nothing like a "digital signal controller" with
>DSP speed and microcontroller peripherals. Blackfin has closed the gap
>a little, but it's still not what you'd pick to interface quadrature
>encoders and run MOSFET SMPS front-ends.

I used a TI 'C40 quite a while back, but when I was actively
also using the ADSP-21xx. I have to say it was night and day
between the two. I had TI support on the phone because the
hardware timing I was getting was 11 clocks for a cached bit
of code that according to their docs should have taken 7
clocks. They NEVER were able to explain the timing of the
bit of source code I sent them. Even after 3 weeks of their
working on it and comparing it to their docs about register
clashes and so on. Never did resolve the issue to my
satisfaction. By comparison, the ADSP-21xx worked _exactly_
as the docs said. Always. Exactly. Never a question about
them. The assembly (up to 3 instructions per cycle) was
nice, too.

>But TI assembly language is an ugly thing. It's not that bad if you can
>figure out the syntax and work with it enough to keep it memorized,
>which I haven't, because the docs are all language lawyer style when
>what is needed is more simple examples.
>
>With ADI, at least for SHARC which I looked at a bit, assembly is a breeze.

I know.

>>>> I've used this for an operating system with a
>>>> jitter-free guarantee on starting sleeping processes using
>>>> delta queues (where only one such process is allowed to sleep
>>>> on the same timed event.)
>>> <scratches head, wonders what a "delta queue" is>
>>>
>>> Hmm, looking at a few search results I sort of get it.
>>
>> It's a very simple, easy to operate, precision tool. I first
>> read about the idea from Douglas Comer's first book on XINU.
>
>Well I've a tidbit from you again. Thanks.

It's a book worth reading through. Very clear, very easy,
and it stimulates the imagination well.

>[edit]
>> I think I'd focus on an audio range device, as well. But I'm
>> pretty sure I'd just make it a toy and not something
>> professional. There is so much more "work" involved in
>> making something ready for others to use and although I find
>> some of that enjoyable, I don't find all of it to be so. And
>> I'd be looking more for my own hobbyist pleasure, self-test,
>> and education than anything else.
>
>Once it blurs into legalities, regulations, and injection molded die
>making, I start to run for cover. Probably better that I have a 9-5 job
>then.

Hehe.

>> I looked over some of what you write elsewhere and I wish I
>> had your experiences with lasers, too. Lots of potential fun
>> there, both for me and for students I like to teach at times.
>
>Yeah, well the lasers and my silly Chemistry degree cost me a lot of
>time that I sometimes wish I had spent on getting a proper EE degree.

I did as much chemistry as I wanted to do -- mostly
explosives as a kid. Mercury fulminate was my absolute fave
-- the reaction before the crystals settle out is a mad
scientist's exothermic, boiling, vaporous dream. And what
you get after, or better still after filtering and
precipitation with glacial acetic acid, was also a lot of fun
too. I did rocket fuels, explosives, fireworks, smoke bombs,
and pretty much anything "thermodynamic." Luckily also
learned enough extra to stay alive while doing that at home.
Still have picric acid, chlorates and perchlorates, and a few
other goodies laying about here. They used to ship that to
16 yr old kids, though the picric acid had to go by train. I
know. I was one, and Boulevard Labs in Chicago shipped
to me, regularly! Organics I got into a little. Enough to
get some of the basic terms down so that I could read and
draw things when asked, but nothing much more than that. I
know what a hydroxy ketone is defined as and I can draw out a
diagram for 1-Chloro-3,5-dinitro-4-hydroxybenzene if asked,
for example. But that's about it. Although there is logic
to organic naming, there is enough memorization of various
specialized words to bother me. Inorganics is easier in that
sense.

>> By the way, I've also got a lot of "stuff in drawers." And I
>> definitely get it about just goofing off with toys. Work is
>> work, but on my own I don't want the burden of having to do
>> all the extra stuff needed to productize. I'd rather play.
>>
>> Jon
>
>
>Have a good Father's Day, whether or not you're a father!

Thanks. You too. And yes, I've 3. All in their mid 20's
now.

Jon

Mel

unread,
Jun 19, 2011, 9:26:36 PM6/19/11
to
Jon Kirwan wrote:
By comparison, the ADSP-21xx worked _exactly_
> as the docs said. Always. Exactly. Never a question about
> them. The assembly (up to 3 instructions per cycle) was
> nice, too.

That one was intriguing. I was several pages into the manual before I
realized that the sample code I was reading wasn't BASIC. Never wound up
working with one, though. There was an ADSP-2105 at the other end of the
product from me, once.

Mel.

Jon Kirwan

unread,
Jun 20, 2011, 12:33:57 AM6/20/11
to

The ADSP-2105 was their "value line" part. Cheap at $5,
memory serving. I used them. I also have an ISA board
system with one installed on it where you can download code
and play a bit. I did most of my work on an ADSP-2111. A
somewhat higher priced spread than the 2105. I liked them a
lot. Learned some things from the books that ADI produced
for them, as well.

Jon

Mr.CRC

unread,
Jun 20, 2011, 9:22:59 PM6/20/11
to
Jon Kirwan wrote:
> On Sat, 18 Jun 2011 20:25:34 -0700, Don Y <now...@here.com>
> wrote:
>> Do you want to dope your own silicon?
>
> I have done that. Were you aware of a Bell Labs kit to do
> just that, put out in the mid 1960's?? (I've done it since,
> with my own home-brew oven, as well, made with a nickel
> plated, water cooled chamber and halogen lamps. Long story
> there, too.)


Holy smokes I can't believe you mentioned that Bell Labs kit. It was to
make a silicon solar cell, and my dad and I worked through it when I was
about 12. I remember actually being able to see a change in the visual
properties at the edge of the wafer which was possibly evidence of the
doped junction.

My dad and I built the furnace with fire bricks and *asbestos panels*
that you could freely purchase in a hardware store.

We got our wafer Ni plated, but never could get any evidence of current
production. It was a lot of fun though.

Jon Kirwan

unread,
Jun 21, 2011, 3:29:03 AM6/21/11
to

It's amazing to find another who knew about the kit!! It's
still available, I think. At least, I had some contact a few
years ago with the husband/wife pair who bought up the rights
for the Bell Labs kits. I should see if they are still alive
and kicking.

Jon

Don Y

unread,
Jun 21, 2011, 11:09:07 AM6/21/11
to
Hi Jon,

On 6/18/2011 11:07 PM, Jon Kirwan wrote:

> I said that the group's interests have moved away from my own
> over the years. That's true. I _also_ believe that as the
> processors used and tools applied increasingly look more like
> traditional, hosted programming environments found on
> workstations, to that degree it also is less and less a
> differentiating feature. Taken to its limit, there will be
> no difference in embedded development and any other and no
> point in choosing to use the adjective anymore.

I disagree. Because the type of application and demands
of the "user environment/experience" differ greatly in the
two worlds.

As I said, a user interacts with a "device" (thing) differently
than with a "computer" (desktop application). Everything about
how and where he uses it is different. When you use a desktop
application, you are (typically) investing much more effort
*trying* to use it as *you* intend. OTOH, when you use a
*device*, the interface wants to be intuitive, second-nature,
unobtrusive, etc.

You don't have some (arbitrary) set of "user interface guidelines"
imposed by (e.g.) MS to adopt (ah, yes... we must support a
'cut' operation... and this must be initiated with these magic
keystrokes...). Rather, you design the interface to fit the
application and *environment* in which you expect the device to
be operated.

You don't think in terms of long, drawn out "operational sequences"
(create alpha channel from mask, darken, multiply, flood fill)
but, rather, simple, short-lived exchanges. (consider an
automobile: the most "involved" driving activity is probably
parallel parking?)

Likewise, you know the user is far less forgiving of any
"misunderstandings" (or, *gasp*, screwups!) on your part.
When you turn the steering wheel left, the car had *better*
turn left! By contrast, in the desktop world, the user
expects problems interacting with the application (either
because of a lack of familiarity with its intricacies, bugs
in the application or whatever)

Look at the design and interaction you experience when using an
iPod (or other "portable media player") vs. a desktop media
player experience. Which is friendlier? More intuitive?

The desktop player's "controls" probably "make sense" -- to
the developer and to the user -- yet they are nowhere near as
intuitive and unobtrusive as the portable media player.

[IMO, this is where most of apple's products fall down, big time!]

> The group here has had this debate here. Long threads about
> it. I'm not changing any of the position I took a decade
> back about any of this. It's the same stand today. What
> makes embedded development "embedded" to me are how the
> skills and tools are differentiated from workstation
> development. That's the main point. It's not about the end
> product.

That starts to sound elitist. As if only a person who has
developed his own film can call himself a photographer. You
may *lament* the fact that others can now *simply* do things
that were, previously, signs of supreme accomplishment in
the field. But, that doesn't make them any less so.

> If a washing machine uses Windows 7 Ultimate and Microsoft
> writes the .NET objects used to do the hardware interfacing
> at a low level and then provides abstraction objects to the
> programmer, then this particular washing machine programmer
> is no more an embedded programmer -- even though it is a
> washing machine -- than would be any other .NET Windows 7
> Ultimate programmer dragging and dropping a few objects onto
> a form.

So, if *I* were to write that .NET program, it, *somehow*
makes it an embedded program? (whereas it wasn't when *he*
did so? Even though, chances are, *he* will get the .NET
program "more correct" than I?)

I've a friend from school who has *probably* more "embedded
devices" in use than most people I know. I wouldn't trust him
to design a voltage divider -- or even solder two wires
together without *also* putting a wire nut on them! He could
easily have been designing compilers or writing desktop
applications had he not "started" by writing code for
"computers that don't look like computers". And, I suspect
he could just as easily transition to that world if he had
the interest -- despite his "embedded" (?) background.

Jon Kirwan

unread,
Jun 21, 2011, 3:47:09 PM6/21/11
to
Hi, Don,

On Tue, 21 Jun 2011 08:09:07 -0700, Don Y <now...@here.com>
wrote:

>On 6/18/2011 11:07 PM, Jon Kirwan wrote:


>
>> I said that the group's interests have moved away from my own
>> over the years. That's true. I _also_ believe that as the
>> processors used and tools applied increasingly look more like
>> traditional, hosted programming environments found on
>> workstations, to that degree it also is less and less a
>> differentiating feature. Taken to its limit, there will be
>> no difference in embedded development and any other and no
>> point in choosing to use the adjective anymore.
>
>I disagree. Because the type of application and demands
>of the "user environment/experience" differ greatly in the
>two worlds.

><snip>

I disagree, as well. What binds us is our experiences and
knowledge of tools and practices. Who we _are_. Not end
use. I think we are talking cross-purposes, though. I know
you get it. I think we are just talking across each others'
bow, so to speak. I'm talking about what it is to be a tool
developer not a tool user.

>> The group here has had this debate here. Long threads about
>> it. I'm not changing any of the position I took a decade
>> back about any of this. It's the same stand today. What
>> makes embedded development "embedded" to me are how the
>> skills and tools are differentiated from workstation
>> development. That's the main point. It's not about the end
>> product.
>
>That starts to sound elitist.

That's probably my fault. Bad writing. But it's not.

I have a great deal of respect for Windows programmers, for
example. I am one, as I have been programming nationally-
sold Windows programs since about Windows 3.0 (learned on
Win286 and Win386.) I'm working on such a product this very
year, in fact. So I'm not talking about lofting embedded
coders above Windows coders. That would be crazy.

I just happen to know, quite well, that "who I have been" as
a Windows programmer is quite a lot different than "who I
have been" as an embedded programmer. I include both types
of people inside me and reject neither of them. But I simply
see within me two different skill sets and talents arranged
in different priority arrangements, when wearing one hat or
another. The embedded side requires a far wider range of
skills, but not a "better" range. There is no "better."

I readily admit I _prefer_ the embedded side of work. That's
personal. If you feel that is elitist, I can't help that.

>As if only a person who has
>developed his own film can call himself a photographer. You
>may *lament* the fact that others can now *simply* do things
>that were, previously, signs of supreme accomplishment in
>the field. But, that doesn't make them any less so.

Why would I care, Don? Doesn't matter to me that others have
their interests and desires, different from mine.

But to use your example, that doesn't mean that a person who
owns a camera and doesn't know anything about developing film
will be someone that a film developer necessarily lumps into
their own social group. If you imagine conflating these two
things into the same mush, I guess I then understand your
confusion about where I'm coming from.

I don't mean to be abrupt about it, but it is obvious to me
this way you write here isn't a useful way to divide things.

Put simply, if I wanted to waste my precious reading and
responding time on Windows and Linux programming issues, I'd
go find a Windows or Linux group. They already have plenty
of places, in fact, for a wide variety of special interests
in their own categories.

In any case, I like your discussions here. They sing to me
and I understand them pretty well and enjoy the issues
brought out by them. Just so you know.

>> If a washing machine uses Windows 7 Ultimate and Microsoft
>> writes the .NET objects used to do the hardware interfacing
>> at a low level and then provides abstraction objects to the
>> programmer, then this particular washing machine programmer
>> is no more an embedded programmer -- even though it is a
>> washing machine -- than would be any other .NET Windows 7
>> Ultimate programmer dragging and dropping a few objects onto
>> a form.
>
>So, if *I* were to write that .NET program, it, *somehow*
>makes it an embedded program? (whereas it wasn't when *he*
>did so? Even though, chances are, *he* will get the .NET
>program "more correct" than I?)

You miss my point. This is why we are talking at cross
purposes. As I just wrote, I __like__ reading your posts.
That won't change at all just because you write a .NET
program. Cripes, __I__ write .NET programs, for gosh sake. I
like to think that I'm still an embedded programmer. My soul
doesn't change.

I enjoy the kinds of challenges, and seeing the kinds of
interesting solutions to them, that deeply embedded
development entails. That's why I'm here. Those challenges
often bring people with similar interests together. I like
that fact. To the degree that this group changes its
discussions towards .NET or Linux development, to that degree
I will read less and participate less.

Not because I don't do that development work -- I do -- but
because my personal interests lay elsewhere. And not because
I mean to judge it harshly -- I don't. But I don't define
myself as a Windows programmer, despite the fact that I've
now 25 years of such experience mixed within 38 years of
embedded experience. I don't define myself as a Linux
developer, despite having been involved with Unix since the
v6 kernel days (mid to late 1970s.) I prefer my embedded
side, love that aspect, what to share it with others. The
rest is part of who I am, too, but it's not what I enjoy as
much.

If you wrote a .NET program, it would still be a .NET program
and wouldn't rely upon the kinds of shared experiences I like
listening to. That doesn't take away from the fact that you
and I _do_ still have shared experiences and interests. And
that comes out in your posts here. It will, regardless.

Again, I'll stop here. I think we are talking at cross
purposes and it won't help anything to beat an already dead
horse.

Jon

Don Y

unread,
Jun 22, 2011, 8:30:34 AM6/22/11
to
Hi Jon,

On 6/21/2011 12:47 PM, Jon Kirwan wrote:

>>> I said that the group's interests have moved away from my own
>>> over the years. That's true. I _also_ believe that as the
>>> processors used and tools applied increasingly look more like
>>> traditional, hosted programming environments found on
>>> workstations, to that degree it also is less and less a
>>> differentiating feature. Taken to its limit, there will be
>>> no difference in embedded development and any other and no
>>> point in choosing to use the adjective anymore.
>>
>> I disagree. Because the type of application and demands
>> of the "user environment/experience" differ greatly in the
>> two worlds.
>> <snip>
>
> I disagree, as well. What binds us is our experiences and
> knowledge of tools and practices. Who we _are_. Not end
> use. I think we are talking cross-purposes, though. I know
> you get it.

No, that's the problem! I keep rephrasing *my* points and
yours in an attempt to come up with something that I can
"relate to" as an aid (to me) to understanding the distinction
you're trying to make.

And, obviously, I keep missing the mark. :-/

> I think we are just talking across each others'
> bow, so to speak.

[this was an excellent phrase -- I will have to remember it!]

> I'm talking about what it is to be a tool
> developer not a tool user.

I'm assuming this, also, isn't the "differentiating factor"
that you're using between embedded/desktop so I won't go
into that, here...

[snip]

> Again, I'll stop here. I think we are talking at cross
> purposes and it won't help anything to beat an already dead
> horse.

I'd really like to understand the point(s) you're making.
If you can be patient with me, to that end, I'd like to continue
this (in private). Watch your inbox (though I may not get to it
today as I have a tree to cut down -- on someone else's
property! :> )

[of course, if you're tired of it, feel free to click "delete" :> ]

Thx,
--don
