
baud rate autodetection on AVR 8-bit?


Ivan Shmakov

unread,
Dec 7, 2012, 9:17:16 AM12/7/12
to
BTW, is there an easy way to autodetect the baud rate while
using an AVR UART? (Preferably something that works with
ATmega8, given that those MCU's are such a cheap thing
nowadays.)

There're some ideas (and 8051 code) for that on [1], but I'd
like to know if there could be any better techniques.

TIA.

[1] http://www.pjrc.com/tech/8051/autobaud.html

PS. It seems that I'm slowly drifting into designing my own, AVR-based
Bus Pirate clone. The good news is that the parts for this one
will likely cost under $10... (connectors included.)

--
FSF associate member #7257

Rich Webb

unread,
Dec 7, 2012, 10:28:41 AM12/7/12
to
On Fri, 07 Dec 2012 21:17:16 +0700, Ivan Shmakov <onei...@gmail.com>
wrote:

> BTW, is there an easy way to autodetect the baud rate while
> using an AVR UART? (Preferably something that works with
> ATmega8, given that those MCU's are such a cheap thing
> nowadays.)
>
> There're some ideas (and 8051 code) for that on [1], but I'd
> like to know if there could be any better techniques.

It's pretty straightforward if you can use a timer capture pin; it takes
care of grabbing the timer count on the active edge and you can examine
it at leisure. Essentially, you will want to listen to an incoming
serial stream to measure the narrowest pulse and when you've seen enough
identical pulses, that determines the rate. If you get a narrower pulse
(that is wider than a glitch width), start the count over.

A little more robust is to add N to an accumulator when you see a pulse
within epsilon of the current narrow width, subtract M if there is a
wider pulse, or reset to zero and start over if there is a narrower.
When the accumulator reaches Q, you've found the pulse width. If it
counts down to zero (from too many wider pulses), assume you saw a
narrow noise pulse and start over. A divide-by-16 (>>4) works pretty
well for epsilon.
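
For the record, a minimal sketch of the capture half of that on an ATmega8
(untested; assumes avr-gcc/avr-libc, Timer1 running from clk/8, and the
incoming serial line wired to ICP1 in addition to RxD):

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

#define GLITCH_TICKS 8                 /* ignore anything narrower than this */

static volatile uint16_t last_edge;    /* ICR1 value at the previous edge    */
static volatile uint16_t narrowest = 0xFFFF;  /* narrowest pulse seen so far */

void capture_init(void)
{
    TCCR1A = 0;
    /* Noise canceler on, start with falling-edge capture, clk/8 prescaler. */
    TCCR1B = _BV(ICNC1) | _BV(CS11);
    TIMSK |= _BV(TICIE1);              /* ATmega8: Timer1 capture interrupt  */
    sei();
}

ISR(TIMER1_CAPT_vect)
{
    uint16_t now   = ICR1;
    uint16_t width = now - last_edge;  /* unsigned math handles the wrap     */
    last_edge = now;

    TCCR1B ^= _BV(ICES1);              /* look for the opposite edge next    */
    TIFR    = _BV(ICF1);               /* edge change can set a stale flag   */

    if (width > GLITCH_TICKS && width < narrowest)
        narrowest = width;             /* candidate bit time, in timer ticks */
}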

--
Rich Webb Norfolk, VA

Ivan Shmakov

unread,
Dec 7, 2012, 11:31:21 AM12/7/12
to
>>>>> Rich Webb <bbe...@mapson.nozirev.ten> writes:
>>>>> On Fri, 07 Dec 2012 21:17:16 +0700, Ivan Shmakov wrote:

>> BTW, is there an easy way to autodetect the baud rate while using an
>> AVR UART? (Preferably something that works with ATmega8, given that
>> those MCU's are such a cheap thing nowadays.)

>> There're some ideas (and 8051 code) for that on [1], but I'd like to
>> know if there could be any better techniques.

> It's pretty straightforward if you can use a timer capture pin; it
> takes care of grabbing the timer count on the active edge and you can
> examine it at leisure.

Well, I guess I can use a pin change interrupt instead, and just
save the timer's counter in the handler. (Thus introducing a
constant delay, which will disappear anyway in the delta.)
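
(Strictly, the ATmega8 has no pin-change interrupts; the nearest equivalent
is INT0/INT1 set to fire on "any logical change". A rough, untested sketch
of that, assuming avr-gcc and Timer1 free-running at clk/8:)

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

static volatile uint16_t edge_stamp;   /* TCNT1 sampled (in software) at the edge */
static volatile uint8_t  edge_seen;

void edge_init(void)
{
    TCCR1A = 0;
    TCCR1B = _BV(CS11);                          /* Timer1 free-running, clk/8 */
    MCUCR  = (MCUCR & ~_BV(ISC01)) | _BV(ISC00); /* INT0: any logical change   */
    GICR  |= _BV(INT0);                          /* enable external INT0       */
    sei();
}

ISR(INT0_vect)
{
    edge_stamp = TCNT1;   /* the ISR latency adds a roughly constant offset,
                             which cancels out in the edge-to-edge deltas   */
    edge_seen  = 1;
}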

> Essentially, you will want to listen to an incoming serial stream to
> measure the narrowest pulse and when you've seen enough identical
> pulses, that determines the rate.

Indeed, thanks!

> If you get a narrower pulse (that is wider than a glitch width),
> start the count over.

> A little more robust is to add N to an accumulator when you see a
> pulse within epsilon of the current narrow width, subtract M if there
> is a wider pulse, or reset to zero and start over if there is a
> narrower. When the accumulator reaches Q, you've found the pulse
> width. If it counts down to zero (from too many wider pulses),
> assume you saw a narrow noise pulse and start over. A divide-by-16
> (>> 4) works pretty well for epsilon.

I guess I'd need to meditate over this one for some time.
Thanks.

Rich Webb

unread,
Dec 7, 2012, 12:42:37 PM12/7/12
to
On Fri, 07 Dec 2012 23:31:21 +0700, Ivan Shmakov <onei...@gmail.com>
wrote:
The problem is that if a rogue pulse is detected then you could end up
waiting forever for the confirming pulses. Say you expect baud rates
from 4800 to 115200, corresponding to pulses from 2400 to 100 ticks wide
(picking an artificial clock rate) and selected a minimum of 80 ticks
(where 80 or less is ignored as a glitch).

Say the actual baud rate was 4800 but before you sync'd to it a noise
pulse from, say, inserting the connector was detected that was 1200
ticks wide. You'd wait forever watching the 4800 baud pulses (2400 wide)
and never see another, so there has to be some mechanism to abandon the
current minimum and start the search anew.

One way to avoid the trap is to add, say, 10 to an accumulator for every
pulse matching the currently observed minimum and declaring a valid
minimum if, say, a count of 50 is reached. Deduct 1 from the accumulator
whenever a pulse wider than the current minimum is seen and start the
search over if the count ever gets back to zero (or if a narrower pulse
is detected, of course).
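
In rough C, with the constants suggested above (add 10 per hit, deduct 1 per
wider pulse, lock at 50, epsilon = minimum/16), the bookkeeping might look
something like this (untested sketch; 'width' is a pulse width in timer
ticks from whatever edge-timing mechanism is in use):

#include <stdint.h>

#define HIT_SCORE   10       /* added when a pulse matches the current minimum */
#define MISS_SCORE   1       /* deducted for a wider pulse                     */
#define LOCK_SCORE  50       /* accumulator value that declares a valid rate   */
#define GLITCH       8       /* pulses narrower than this are noise (ticks)    */

static uint16_t min_width = 0xFFFF;  /* current candidate bit time             */
static int16_t  score;
static uint8_t  locked;

/* Feed every measured pulse width into this; when 'locked' becomes nonzero,
 * min_width is the detected bit time. */
void autobaud_feed(uint16_t width)
{
    uint16_t eps = min_width >> 4;           /* epsilon = minimum / 16         */

    if (width <= GLITCH || locked)
        return;

    if (width + eps < min_width) {           /* clearly narrower: adopt it     */
        min_width = width;
        score = HIT_SCORE;
    } else if (width <= min_width + eps) {   /* within epsilon: a hit          */
        score += HIT_SCORE;
        if (score >= LOCK_SCORE)
            locked = 1;
    } else {                                 /* wider pulse: deduct            */
        score -= MISS_SCORE;
        if (score <= 0) {                    /* probably locked onto noise:    */
            min_width = 0xFFFF;              /* abandon the current minimum    */
            score = 0;                       /* and start the search over      */
        }
    }
}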

Tim Wescott

unread,
Dec 7, 2012, 2:07:29 PM12/7/12
to
On Fri, 07 Dec 2012 23:31:21 +0700, Ivan Shmakov wrote:

>>>>>> Rich Webb <bbe...@mapson.nozirev.ten> writes: On Fri, 07 Dec 2012
>>>>>> 21:17:16 +0700, Ivan Shmakov wrote:
>
> >> BTW, is there an easy way to autodetect the baud rate while using an
> >> AVR UART? (Preferably something that works with ATmega8, given that
> >> those MCU's are such a cheap thing nowadays.)
>
> >> There're some ideas (and 8051 code) for that on [1], but I'd like to
> >> know if there could be any better techniques.
>
> > It's pretty straightforward if you can use a timer capture pin; it
> > takes care of grabbing the timer count on the active edge and you can
> > examine it at leisure.
>
> Well, I guess I can use a pin change interrupt instead, and just
> save the timer's counter in the handler. (Thus introducing a constant
> delay, which will disappear anyway in the delta.)

The delay will only be constant if you have no higher priority interrupt
active. Depending on what you're trying to do this will range anywhere
from completely painless to an absolute deal-killer.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com

Dave Nadler

unread,
Dec 7, 2012, 3:09:55 PM12/7/12
to
On Friday, December 7, 2012 11:31:21 AM UTC-5, Ivan Shmakov wrote:
> Well, I guess I can use a pin change interrupt instead, and just
> save the timer's counter in the handler. (Thus introducing a
> constant delay, which will disappear anyway in the delta.)

No "capture" function available on the timer,
so that you get an exact measurement ?
Sorry I don't remember on this part...

Ivan Shmakov

unread,
Dec 7, 2012, 3:47:26 PM12/7/12
to
>>>>> Dave Nadler <d...@nadler.com> writes:
>>>>> On Friday, December 7, 2012 11:31:21 AM UTC-5, Ivan Shmakov wrote:

>> Well, I guess I can use a pin change interrupt instead, and just
>> save the timer's counter in the handler. (Thus introducing a
>> constant delay, which will disappear anyway in the delta.)

> No "capture" function available on the timer, so that you get an
> exact measurement? Sorry I don't remember on this part...

AIUI, ATmega8 only allows for input capture to happen on an
"event" occuring either on analog comparator output, or a
specific ICP1 pin. Unfortunately, I'd need the latter for GPIO.
(Naturally, ICP1 is distinct from UART's RxD.)

mike

unread,
Dec 7, 2012, 4:35:14 PM12/7/12
to
I gave up several times on a similar project when I couldn't determine
the characteristics of the data stream...or even if there was one.

Been thinking about this for an hour or so.

Take a random pulse width measurement.
Save that as mmpw, the minimum measured pulse width.
Have a table of pulse widths for the allowable baud rates.
Scan up the table till you find a number match.
If not, multiply the table by 2, then 3, then 4...
Eventually you should find a match.
Program that into the USART. Have the character input interrupt
start looking for framing errors.

Take another random pulse width measurement.
If it's smaller than mmpw, plug it into mmpw and restart.
If it's larger, subtract the two numbers. The difference should
be an integral multiple of the REAL pulse width for the actual baud rate.
Divide the numbers. If the remainder isn't zero, mmpw isn't the
correct bit time. I think the correct bit time is an integer multiple
of the remainder, but I haven't got my head fully around that one.
Use those numbers to scan for a higher baud rate and an inferred new mmpw.
Quit when you get tired of looking for narrower mmpw or those that
aren't an integral multiple of the trial bit time and aren't
getting framing errors.

Of course, there are all kinds of issues with timing synchronization,
error bounds, glitches etc. that need to be dealt with. And the
simplest way of thinking about it seems to result in a recursive
algorithm which isn't nice on a simple processor...so maybe the
details kill the concept...but it would be interesting to play with.

I like it because I think it converges even if you never catch an actual
one-bit-wide pulse.
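
A sketch of the table-matching part (host-independent C, untested; the timer
tick rate and the set of allowed baud rates are assumptions to be adapted):

#include <stdint.h>

#define F_TIMER 1000000UL              /* timer tick rate: 1 MHz assumed here */

static const uint32_t bauds[] = { 115200, 57600, 38400, 19200, 9600, 4800 };
#define NBAUDS (sizeof bauds / sizeof bauds[0])

/* Try to explain a measured pulse width as an integral number (1..9) of
 * bit times of one of the standard rates.  Returns the fastest rate that
 * fits, or 0 if none does; further samples are needed to decide whether
 * the pulse was really more than one bit wide at a slower rate. */
uint32_t match_width(uint16_t ticks)
{
    for (uint8_t i = 0; i < NBAUDS; i++) {
        uint16_t bit = (uint16_t)(F_TIMER / bauds[i]);
        uint16_t tol = bit >> 4;                 /* roughly 6% tolerance */
        if (tol == 0)
            tol = 1;

        for (uint8_t n = 1; n <= 9; n++) {       /* 1..9 bit times */
            uint16_t ideal = bit * n;
            uint16_t diff  = (ticks > ideal) ? ticks - ideal : ideal - ticks;
            if (diff <= tol)
                return bauds[i];
        }
    }
    return 0;                                    /* nothing matched */
}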

Rich Webb

unread,
Dec 7, 2012, 8:07:28 PM12/7/12
to
For any given algorithm [*] it's probably possible to construct a data
stream that can fool it. I have found the "accumulator with decrement"
method to be reliable and simple [**] to sync with ASCII (NMEA 0183)
data that's always at 4800 baud except when it isn't. It certainly helps
that on common 8N1 lines, ASCII's 0 MSB precedes the 1 of the stop bit
and that is followed by a 0 for the next start bit.

It is a good idea to check the discovered pulse width against expected
values for legal (for the application) baud rates and then take
appropriate action if there isn't a match.

[*] The exception may be the "Hit the <some specific> key until the
terminal replies with OK" method. Even then it might be possible to find
bogus inputs that "work" at the wrong baud rate for some keys.

[**] Always a plus when coming back to do maintenance on the code several
years later.

Mark Borgerson

unread,
Dec 8, 2012, 2:43:22 AM12/8/12
to
In article <k9tnfa$8pf$1...@dont-email.me>, ham...@netzero.net says...
If you've got lots of time and continuous input, you can try the
timing approach, then try to verify your estimate by looking
for known patterns in the data. If you can't find the pattern
(or correct checksum, etc.), start over.


If the data has known patterns, and you can assume a
limited set of standard baud rates, you can simply
look for a known character stream in the data. I once
used an instrument which would accept any standard baud
rate. The manual said "when you power up the instrument,
press the space bar repeatedly until the instrument
responds with the startup message." The instrument
simply stepped through baud rates with each incoming
character until it found a space character.

You could apply the same approach, with a more complex
algorithm, to any source that provides a data stream
with known characteristics.
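
A minimal sketch of that trick for the ATmega8's USART (untested; the UBRR
table assumes an 8 MHz clock and normal-speed mode, and is an assumption to
adjust):

#include <avr/io.h>
#include <stdint.h>

static const uint16_t ubrr_table[] = {
    103,    /*  4800 baud @ 8 MHz */
     51,    /*  9600              */
     25,    /* 19200              */
     12,    /* 38400              */
      8     /* 57600 (3.5% error) */
};
#define NRATES (sizeof ubrr_table / sizeof ubrr_table[0])

/* Step through the table until a space (0x20) arrives without a framing
 * or overrun error; returns the index of the rate that worked. */
uint8_t wait_for_space(void)
{
    uint8_t i = 0;

    for (;;) {
        UBRRH = ubrr_table[i] >> 8;
        UBRRL = ubrr_table[i] & 0xFF;
        UCSRB = _BV(RXEN);
        UCSRC = _BV(URSEL) | _BV(UCSZ1) | _BV(UCSZ0);   /* 8N1 */

        while (!(UCSRA & _BV(RXC)))
            ;                                   /* wait for any character */

        uint8_t err = UCSRA & (_BV(FE) | _BV(DOR));   /* read before UDR  */
        uint8_t c   = UDR;

        if (!err && c == ' ')
            return i;                           /* this rate decoded it   */

        i = (i + 1) % NRATES;                   /* otherwise try the next */
    }
}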

Mark Borgerson

MK

unread,
Dec 8, 2012, 4:44:36 AM12/8/12
to
Since you are starting out on a new project, why use an obsolete processor?
You can buy an LPC1113 (or someone else's ARM Cortex M0) for less money
than an ATmega8, and it has much better timers with input capture. It
also has a good 10x the performance!

Michael Kellett

Ivan Shmakov

unread,
Dec 8, 2012, 6:31:13 AM12/8/12
to
>>>>> MK <m...@nospam.co.uk> writes:
>>>>> On 07/12/2012 20:47, Ivan Shmakov wrote:

(I vaguely recall that there already was a discussion on this.)

[...]

>> AIUI, ATmega8 only allows for input capture to happen on an "event"
>> occurring either on analog comparator output, or a specific ICP1
>> pin. Unfortunately, I'd need the latter for GPIO. (Naturally, ICP1
>> is distinct from UART's RxD.)

> Since you are starting out on a new project why use an obsolete
> processor? You can buy an LPC1113 (or some one else's ARM Cortex M0)
> for less money than an ATmega8 and it has much better timers with
> input capture.

Apart from lacking any experience whatsoever with ARM, I'm also
somewhat concerned about the ARM's /generally/ being unavailable
in TQFP, SO or DIP packages, which may be important should I
be making a "kit" of the design I'm currently working on.

Besides, where can I find an ARM for $0.9 (or less; including
shipping)? (For the quantity, I'd readily buy 10 or so.)

> It also has a good 10x the performance!

... And what about the power consumption?

upsid...@downunder.com

unread,
Dec 8, 2012, 8:01:44 AM12/8/12
to
On Fri, 07 Dec 2012 10:28:41 -0500, Rich Webb
<bbe...@mapson.nozirev.ten> wrote:

>On Fri, 07 Dec 2012 21:17:16 +0700, Ivan Shmakov <onei...@gmail.com>
>wrote:
>
>> BTW, is there an easy way to autodetect the baud rate while
>> using an AVR UART? (Preferably something that works with
>> ATmega8, given that those MCU's are such a cheap thing
>> nowadays.)
>>
>> There're some ideas (and 8051 code) for that on [1], but I'd
>> like to know if there could be any better techniques.
>
>It's pretty straightforward if you can use a timer capture pin; it takes
>care of grabbing the timer count on the active edge and you can examine
>it at leisure. Essentially, you will want to listen to an incoming
>serial stream to measure the narrowest pulse and when you've seen enough
>identical pulses, that determines the rate. If you get a narrower pulse
>(that is wider than a glitch width), start the count over.

When using capture pins or simple bit banging, if something is known
about the signal source, some heuristics can be used to reduce the
possible combinations.

In asynchronous communication, the line is in the "idle" state (Mark,
logical 1): 20 mA in a current loop, -12 V in RS-232, and so on. The
"fail-safe" termination will pull the line to the idle state in RS-485.
Each character starts with the start bit (Space, logical 0, 0 mA, +12
V...), followed by a number (usually 5-9) of data bits, followed by
an optional parity bit, followed by 1, 1.5 or 2 stop bits (logical 1).

After the last stop bit, if there are no more characters to send, the
idle period (logical 1) starts. The line remains in the idle state for
an arbitrary time, unless there are more characters to send. While this
time can be absolutely anything, typical UARTs start transmitting at
some clocked intervals, typically 1/16 multiples of the bit clock
period.

If there are a lot of characters to be transmitted, the next start
bit ("0") will be sent immediately after the last stop bit ("1").

However, in quite a few situations the line remains in the idle "1"
state for longer or shorter periods, so detecting the idle state will
help in detecting the start bit "0" after the idle period and hence
help in figuring out the timing.

For instance, if the signal source is a keyboard operated by a human,
even with autorepeat and line speeds above 300 bits/s, there are quite
long idle periods between characters.

With half duplex protocols (request/response), even on full duplex
capable hardware, there are some idle periods between master requests
and slave responses (Modbus especially requires a 3.5 character idle
period between request and response). If you are only listening to
master requests, or only to slave responses, there are quite long
idle periods between two requests or two responses.

Once you have identified the idle state, wait for the start bit
"1"->"0" transition. Most UARTs use some kind of false start bit
detection to see if the line is still in the "0" state, by sampling
at the middle of the start bit or using three 1/16 bit time samples
and majority voting. Of course, this assumes that the bit rate is known.

Since the bit rate is not known, check at the half-bit point of each
possible standard bit rate whether the line is still in the "0" state
and continue validation. If you get a "1" state at some of the
sampling points, it can be assumed either that the original falling
edge was not a true start bit (so discard it) or that the data rate is
higher than you expected; search for other alternatives.
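
A crude sketch of the idle-detection precondition (untested; assumes an
ATmega8 with RxD on PD0 and F_CPU defined for <util/delay.h>): once the line
has been at "1" for a couple of character times at the slowest rate of
interest, the next falling edge has to be a genuine start bit.

#include <avr/io.h>
#include <stdint.h>
#include <util/delay.h>

#define SLOWEST_BAUD 4800UL
/* One 8N1 character is 10 bit times; demand two characters' worth of idle. */
#define IDLE_US (2UL * 10UL * 1000000UL / SLOWEST_BAUD)

/* Spin until RxD (PD0) has been continuously high for IDLE_US microseconds. */
void wait_for_idle(void)
{
    uint32_t high_us = 0;

    while (high_us < IDLE_US) {
        if (PIND & _BV(PD0))
            high_us += 10;
        else
            high_us = 0;            /* any "0" level restarts the idle timer */
        _delay_us(10);
    }
}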

Dombo

unread,
Dec 8, 2012, 9:18:30 AM12/8/12
to
On 07-Dec-12 15:17, Ivan Shmakov wrote:
> BTW, is there an easy way to autodetect the baud rate while
> using an AVR UART? (Preferably something that works with
> ATmega8, given that those MCU's are such a cheap thing
> nowadays.)

You might find something useful in this article which uses a ATtiny2313:
http://spritesmods.com/?art=autobaud


Glenn

unread,
Dec 9, 2012, 4:47:29 AM12/9/12
to
(Please respond to news://comp.arch.embedded )

Hi!

I am so "pissed" about RS-232/EIA-232.

After so many years with that "stupid vintage" serial communications
protocol, we still do not have autonegotiation (and auto-baud-detection)
built into the protocol definitions. Why not?

Why has nobody made a request-for-comments about that, so that so many
people do not have to bother with a myriad of out-of-band signals and
in-band signals (XON, XOFF) manually?

It is simply incredible that, after so many decades, you manually have
to find out how to get it to work.

Please be inspired to release open and free RFC definitions now, so that
"vintage" serial communication will work smoothly - and with backward
compatibility and of course with auto "null-modem" functionality.

I am looking forward to all out-of-band signals being mapped
automatically by software, by a series of signal perturbations and
response measurements.

I know I am very demanding, but it ought to be possible? At least the
software should detect and notify the user that a null-modem cable
connection is required. But that is a bad compromise.

The communications world (and its users) would be much happier with a
full blown software solution.

Let us exterminate 232 jumper boxes. They are the ultimate time-eating
stupid solution, one that shows we have given up on finding a better one:
http://www.amazon.com/DB25-Female-RS-232-Jumper-Assembly/dp/B000I996EE

Instead we should have a 232 autonegotiation-box/cable, that can be
inserted between no-negotiation 232 equipped equipment.

;-)

Glenn

PS: I know that USB exists...

Paul E. Bennett

unread,
Dec 9, 2012, 5:22:41 AM12/9/12
to
Glenn wrote:

> On 07/12/12 15.17, Ivan Shmakov wrote:

[%X]

> I am so "pissed" about RS-232/EIA-232.
>
> After so many years with that "stupid vintage" serial communications
> protocol, we still do not have autonegotiation (and auto-baud-detection)
> built into the protocol definitions. Why not?
>
> Why has nobody made a request-for-comments about that, so that so many
> people do not have to bother with a myriad of out-of-band signals and
> in-band signals (XON, XOFF) manually?
>
> It is simply incredible that, after so many decades, you manually have
> to find out how to get it to work.

Having dealt with the RS232 protocol for many years I am wondering why you
are suddenly so vexed about it. The protocol precedes computing devices,
having been created as a communication protocol for teleprinters (originally
the 5 bit code before becoming the 7 bit and 8 bit codes we have today).
Auto-negotiation takes intelligence at each end. As that was not available
at that time we just accepted the need to get things set-up right before we
started.

> Please be inspired to release open and free RFC-definitions now, so that
> "vintage" serial communication will work smoothly - and with backward
> compatibility and of cause with auto "null-modem" functionality.
>
> I am looking forward to, that all out-of-bound signals can automatically
> be mapped by software by a series of signal pertubations and response
> measurements.

I am sure there is an official way to propose a new RFC if you need to. You
could try and do that if you have some ideas that you would like to see
implemented as standard. I am not sure you would get much support with such
an old protocol though.

--
********************************************************************
Paul E. Bennett...............<email://Paul_E....@topmail.co.uk>
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************

Bruce Varley

unread,
Dec 9, 2012, 6:07:12 AM12/9/12
to

"Glenn" <glen...@gmail.com> wrote in message
news:50c45e31$0$284$1472...@news.sunsite.dk...
The problem isn't RS232, it's simply end points that don't talk the same
language. You can have just as much grief with any other communication
protocol, for exactly the same reasons. In fact, IME some others can deliver
heaps more grief in setting up than serial async/RS232.

It's a matter of standards, and in the case of RS232 specifically, there's a
lot to like. The voltage levels are compatible, as long as you don't violate
the specified cable specs (and even if you do by a reasonable margin)
crosstalk won't cause errors, and drivers are specified against damage from
wiring errors. Higher up the stack - not RS232 any more, which deals with
electrical behaviour only - the ASCII character set is locked in, including
various standard control sequences if you want to use them.

Higher up still the situation is less clear, but that's due to the facts
that there are a huge multiplicity of client devices involved, with little
or no unifying behaviour in many cases, and multiple producers of equipment.
If the devices at each end of the link come from the same supplier, then
you're likely to find the setup plug-n-play. If they don't then the chances
of mismatch are greater, and the problem isn't clearly owned by anyone
except yourself. This lack of ownership is a large part of the problem, and
it can only be solved by more encompassing standards.


Paul

unread,
Dec 9, 2012, 8:27:28 AM12/9/12
to
In article <50c45e31$0$284$1472...@news.sunsite.dk>, glenn2233
@gmail.com says...
>
> On 07/12/12 15.17, Ivan Shmakov wrote:
> > BTW, is there an easy way to autodetect the baud rate while
> > using an AVR UART? (Preferably something that works with
> > ATmega8, given that those MCU's are such a cheap thing
> > nowadays.)
> >
> > There're some ideas (and 8051 code) for that on [1], but I'd
> > like to know if there could be any better techniques.
> >
> > TIA.
> >
> > [1] http://www.pjrc.com/tech/8051/autobaud.html
> >
> > PS. It seems that I'm slowly drifting into designing my own, AVR-based
> > Bus Pirate clone. The good news is that the parts for this one
> > will likely cost under $10... (connectors included.)
> >
>
> (Please respond to news://comp.arch.embedded )
>
> Hi!
>
> I am so "pissed" about RS-232/EIA-232.
>
> After so many years with that "stupid vintage" serial communications
> protocol, we still do not have autonegotiation (and auto-baud-detection)
> built into the protocol definitions. Why not?

As others have said it is old, but first off let's consider some things.

RS-232 is a voltage-level and cable spec; it has nothing to do with how
many bits are sent or how they are framed.

You can use RS232 quite validly for on/off control of lights as DC
levels.

There is NO protocol in RS232

Your problem is poor implementations higher up the chain.

UART and other communications can be done many ways even without any
form of RS-xxx level conversions.

--
Paul Carpenter | pa...@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/> PC Services
<http://www.pcserviceselectronics.co.uk/fonts/> Timing Diagram Font
<http://www.gnuh8.org.uk/> GNU H8 - compiler & Renesas H8/H8S/H8 Tiny
<http://www.badweb.org.uk/> For those web sites you hate

MK

unread,
Dec 9, 2012, 9:13:38 AM12/9/12
to
ARMs are easily available in TQFP packages and I rarely use anything else.
As for prices:
100 off Digikey,
LPC1113FBD48/302,1 = $2.05
ATMEGA8A-AU = $1.79

You'll need to compare power consumption yourself but generally ARM M0s
are pretty good and will easily beat the AVR in terms of useful
calculations per watt.

If you'll relax your package demands and accept LPC1111FHN33/201,5 in
QFN it'll only cost you $1.38 (Digikey 100 off).

I've hand soldered quite a few of these and it's perfectly feasible
(you may need a microscope but I use one for TQFPs any way.)

Michael Kellett


Hans-Bernhard Bröker

unread,
Dec 9, 2012, 10:19:09 AM12/9/12
to
On 09.12.2012 10:47, Glenn wrote:

> After so many years with that "stupid vintage" serial communications
> protocol, we still do not have autonegotiation (and auto-baud-detection)
> built into the protocol definitions. Why not?

Because there really is nothing to negotiate.

> Why has nobody made a request-for-comments about that,

For starters, because I'm pretty sure there is no RFC mechanism in place
for the body that this specification is from. If that body still
exists, that is.

> It is simply incredible that, after so many decades, you manually have
> to find out how to get it to work.

Actually no. It's exactly because of all those decades, and the myriad
of devices already in the field, that this specification is, for all
practical intents and purposes, immutable. No change you could come up
with now would help one bit with all those devices. And if a change
doesn't achieve anything for the overwhelming majority of applications,
what point could there possibly be?

TTman

unread,
Dec 9, 2012, 10:38:50 AM12/9/12
to

"Glenn" <glen...@gmail.com> wrote in message
news:50c45e31$0$284$1472...@news.sunsite.dk...
> On 07/12/12 15.17, Ivan Shmakov wrote:
>> BTW, is there an easy way to autodetect the baud rate while
>> using an AVR UART? (Preferably something that works with
>> ATmega8, given that those MCU's are such a cheap thing
>> nowadays.)
>>
>> There're some ideas (and 8051 code) for that on [1], but I'd
>> like to know if there could be any better techniques.
>>
>> TIA.
>>
>> [1] http://www.pjrc.com/tech/8051/autobaud.html
>>
>> PS. It seems that I'm slowly drifting into designing my own, AVR-based
>> Bus Pirate clone. The good news is that the parts for this one
>> will likely cost under $10... (connectors included.)
>>

Perfectly possible... you just have to write the software....


upsid...@downunder.com

unread,
Dec 9, 2012, 10:45:44 AM12/9/12
to
On Sun, 09 Dec 2012 10:47:29 +0100, Glenn <glen...@gmail.com> wrote:

>
>I am so "pissed" about RS-232/EIA-232.
>
>After so many years with that "stupid vintage" serial communications
>protocol, we still do not have autonegotiation (and auto-baud-detection)
>built into the protocol definitions. Why not?

RS-232 originally only specified how a DCE (Data Communication
Equipment, i.e. a modem) should be connected over a short distance (up
to 15 m) to a DTE (Data Terminal Equipment), either a central computer
or a remote terminal. Thus it made it possible to use remote terminals
(DTE to DCE) over a communication channel, e.g. a (leased) phone line,
to a central computer (DCE to DTE).

The standard specifies the voltage levels and abstract CCITT signal
numbers as more familiar signal names. The original standard did not
specify the DB25 connector or pin numbering.

However, the specification includes secondary channel, clock signals
(for synchronous communication) etc.

Later on the standard was "misused" to connect terminals directly to a
local computer, with various modem eliminators (null modems) to skip
the DCE-phone_line-DCE part of the remote circuit. In the simplest
case, just cross-connect TxD and RxD; however, various tricks are
needed to fool the handshakes. For interfacing two synchronous devices,
some electronics is actually needed in the "null modem".

The problem was that each manufacturer "misused" the DTE-DCE standard
_differently_ causing problems for "null modems" for direct DTE to DTE
connections !!

It should once more be stressed that the RS-232 standard was not
originally designed for direct terminal to computer interfacing.

Much simpler systems existed for local terminal to computer
interfacing, such as 20 mA current loop. In a mechanical Teletype, the
only semiconductors were the rectifier diodes in the power supply
(24-60 V) and a big power transistor to generate the 20 mA current
source. A Teletype with RS-232 interface required at least an
additional +/-12 V power supply and at least two ICs (e.g. 1488/1489)
or a lot of discrete components before those chips were available.

amd...@gmail.com

unread,
Dec 9, 2012, 10:45:38 AM12/9/12
to
someone beat you to it...

http://blog.hodgepig.org/busninja/

Ivan Shmakov

unread,
Dec 9, 2012, 10:52:09 AM12/9/12
to
>>>>> MK <m...@nospam.co.uk> writes:
>>>>> On 08/12/2012 11:31, Ivan Shmakov wrote:
>>>>> MK <m...@nospam.co.uk> writes:
>>>>> On 07/12/2012 20:47, Ivan Shmakov wrote:

[Cross-posting to news:comp.sys.arm, at last.]

>>>> AIUI, ATmega8 only allows for input capture to happen on an
>>>> "event" occurring either on analog comparator output, or a
>>>> specific ICP1 pin. Unfortunately, I'd need the latter for GPIO.
>>>> (Naturally, ICP1 is distinct from UART's RxD.)

>>> Since you are starting out on a new project why use an obsolete
>>> processor? You can buy an LPC1113 (or some one else's ARM Cortex
>>> M0) for less money than an ATmega8 and it has much better timers
>>> with input capture.

>> Apart from lacking any experience whatsoever with ARM, I'm also
>> somewhat concerned about the ARM's /generally/ being unavailable in
>> TQFP, SO or DIP packages, which may be important should I'll be
>> making a "kit" of the design I'm currently working on.

>> Besides, where can I find an ARM for $0.9 (or less; including
>> shipping)? (For the quantity, I'd readily buy 10 or so.)

> ARMs are easily available in TQFP packages and I rarely use anything
> else. As for prices:

> 100 off Digikey,
> LPC1113FBD48/302,1 = $2.05
> ATMEGA8A-AU = $1.79

Well, I was able to order 10 ATmega8's for $8.80 on eBay.
(Hopefully, I'll get them within a week from now.) Besides, it
isn't quite "for less money than," as was stated above.

Digi-Key doesn't seem like a sensible choice /for me/, either.
FWIW, they're located on a whole different continent.

>>> It also has a good 10x the performance!

>> ... And what about the power consumption?

> You'll need to compare power consumption yourself but generally ARM
> M0s are pretty good and will easily beat the AVR in terms of useful
> calculations per watt.

ACK, thanks.

> If you'll relax your package demands and accept LPC1111FHN33/201,5 in
> QFN it'll only cost you $1.38 (Digikey 100 off).

> I've hand soldered quite a few of these and it's perfectly feasible
> (you may need a microscope but I use one for TQFPs any way.)

The point is that should I ever end up designing my own kits,
there'd be a whole world of hobbyists that won't be able to
tackle anything with pitch finer than that of TQFP. (Or perhaps
even finer than SO; but there, ARMs seem to be at an advantage,
as some LPC111x seem to be available in SO just as well.)

John Larkin

unread,
Dec 9, 2012, 12:02:07 PM12/9/12
to
On Sun, 09 Dec 2012 10:47:29 +0100, Glenn <glen...@gmail.com> wrote:

>On 07/12/12 15.17, Ivan Shmakov wrote:
>> BTW, is there an easy way to autodetect the baud rate while
>> using an AVR UART? (Preferably something that works with
>> ATmega8, given that those MCU's are such a cheap thing
>> nowadays.)
>>
>> There're some ideas (and 8051 code) for that on [1], but I'd
>> like to know if there could be any better techniques.
>>
>> TIA.
>>
>> [1] http://www.pjrc.com/tech/8051/autobaud.html
>>
>> PS. It seems that I'm slowly drifting into designing my own, AVR-based
>> Bus Pirate clone. The good news is that the parts for this one
>> will likely cost under $10... (connectors included.)
>>
>
>(Please respond to news://comp.arch.embedded )
>
>Hi!
>
>I am so "pissed" about RS-232/EIA-232.
>
>After so many years with that "stupid vintage" serial communications
>protocol, we still do not have autonegotiation (and auto-baud-detection)
>built into the protocol definitions. Why not?

RS232 doesn't have a protocol definition. And it never will.


--

John Larkin Highland Technology Inc
www.highlandtechnology.com jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators

Lanarcam

unread,
Dec 9, 2012, 12:13:30 PM12/9/12
to
On 09/12/2012 18:02, John Larkin wrote:
> On Sun, 09 Dec 2012 10:47:29 +0100, Glenn <glen...@gmail.com> wrote:
>
>> On 07/12/12 15.17, Ivan Shmakov wrote:
>>> BTW, is there an easy way to autodetect the baud rate while
>>> using an AVR UART? (Preferably something that works with
>>> ATmega8, given that those MCU's are such a cheap thing
>>> nowadays.)
>>>
>>> There're some ideas (and 8051 code) for that on [1], but I'd
>>> like to know if there could be any better techniques.
>>>
>>> TIA.
>>>
>>> [1] http://www.pjrc.com/tech/8051/autobaud.html
>>>
>>> PS. It seems that I'm slowly drifting into designing my own, AVR-based
>>> Bus Pirate clone. The good news is that the parts for this one
>>> will likely cost under $10... (connectors included.)
>>>
>>
>> (Please respond to news://comp.arch.embedded )
>>
>> Hi!
>>
>> I am so "pissed" about RS-232/EIA-232.
>>
>> After so many years with that "stupid vintage" serial communications
>> protocol, we still do not have autonegotiation (and auto-baud-detection)
>> built into the protocol definitions. Why not?
>
> RS232 doesn't have a protocol definition. And it never will.
>
There is a "protocol" in that RTS must be followed by CTS and so on.

Apart from that, there are a lot of embedded systems that
don't need "fancy" features. When the data rate is fixed
there is no need for autodetection. It adds cost, complexity,
rampant bugs, all sorts of nastiness.

A good design is the least complexity to achieve maximum safety
and compliance, not a host of sophisticated features that
are used for no good reason, only because it is trendy.

Of course, in some cases, you need sophistication, but then,
by all means, one should use another protocol.

upsid...@downunder.com

unread,
Dec 9, 2012, 12:44:23 PM12/9/12
to
On Sun, 09 Dec 2012 18:13:30 +0100, Lanarcam <lana...@yahoo.fr>
wrote:

>On 09/12/2012 18:02, John Larkin wrote:
>> On Sun, 09 Dec 2012 10:47:29 +0100, Glenn <glen...@gmail.com> wrote:
>>
>>> On 07/12/12 15.17, Ivan Shmakov wrote:
>>>> BTW, is there an easy way to autodetect the baud rate while
>>>> using an AVR UART? (Preferably something that works with
>>>> ATmega8, given that those MCU's are such a cheap thing
>>>> nowadays.)
>>>>
>>>> There're some ideas (and 8051 code) for that on [1], but I'd
>>>> like to know if there could be any better techniques.
>>>>
>>>> TIA.
>>>>
>>>> [1] http://www.pjrc.com/tech/8051/autobaud.html
>>>>
>>>> PS. It seems that I'm slowly drifting into designing my own, AVR-based
>>>> Bus Pirate clone. The good news is that the parts for this one
>>>> will likely cost under $10... (connectors included.)
>>>>
>>>
>>> (Please respond to news://comp.arch.embedded )
>>>
>>> Hi!
>>>
>>> I am so "pissed" about RS-232/EIA-232.
>>>
>>> After so many years with that "stupid vintage" serial communications
>>> protocol, we still do not have autonegotiation (and auto-baud-detection)
>>> built into the protocol definitions. Why not?
>>
>> RS232 doesn't have a protocol definition. And it never will.
>>
>There is a "protocol" in that RTS must be followed by CTS and so on.

There is quite a lot of handshaking in the DTE-DCE connection.

Typically the DTE computer/terminal sets the DTR (Data terminal Ready)
when it is powered up. The modem (DCE) sets the DSR (Data Set Ready)
when it is powered up and ready to communicate (telephone contact
established). Until both DTR and DSR are on, there is not much point
of trying to communicate.

In the half duplex (radio) communication world, a rising RTS is an
indication that this station wants to transmit. For a radio link, this
might include listening to check whether the radio channel is free
and, if so, turning on the radio transmitter and waiting until the PLL
has stabilized, after which the CTS is raised and the actual message
transmission can begin.

Those signals are there for a reason, not to make it harder to
interface ordinary devices.

Paul

unread,
Dec 9, 2012, 4:30:56 PM12/9/12
to
In article <v1b9c8d9tvv7aj506...@4ax.com>,
upsid...@downunder.com says...
>
> On Sun, 09 Dec 2012 10:47:29 +0100, Glenn <glen...@gmail.com> wrote:
>
> >
> >I am so "pissed" about RS-232/EIA-232.
> >
> >After so many years with that "stupid vintage" serial communications
> >protocol, we still do not have autonegotiation (and auto-baud-detection)
> >built into the protocol definitions. Why not?
>
> RS-232 originally only specified, how a DCE (Data Communication
> Equipment, i.e. a modem) should be connected over a short distance (up
> to 15 m) to a DTE (Data Terminal Equipment) either a central computer
> or a remote terminal. Thus it made possible to use remote terminals
> (DTE to DCE) over a communication channel e.g. (leased) phone line to
> central computer (DCE to DTE).

Actually RS232 specifies signal levels; the cable length urban myth is a
common misreading of RS232A and RS232B, where it said that if configured
using a very capacitive cable with lots of adjacent signals the noise
level went up at 15 m. However, the noise level never went across a noise
margin to cause problems.

I have seen RS232 driven down 1km of bell wire and function correctly.

The DTE/DCE distinction comes from CCITT V.24 (now part of the ITU, in
particular ITU-T), a telecoms standard particularly for modems, for
working out what was an end point and what was a mid-point (modem). This
is what originally specified the DB25 and the signal naming.

> The standard specifies the voltage levels and abstract CCITT signal
> numbers as more familiar signal names. The original standard did not
> specify the DB25 connector or pin numbering.

The abstract CCITT numbers are a CCITT V.24 reference, not RS232, until
about RS232 Rev D.

Paul

unread,
Dec 9, 2012, 4:32:46 PM12/9/12
to
In article <50c4c6ba$0$16503$426a...@news.free.fr>, lana...@yahoo.fr
says...
>
> On 09/12/2012 18:02, John Larkin wrote:
> > On Sun, 09 Dec 2012 10:47:29 +0100, Glenn <glen...@gmail.com> wrote:
> >
> >> On 07/12/12 15.17, Ivan Shmakov wrote:
> >>> BTW, is there an easy way to autodetect the baud rate while
> >>> using an AVR UART? (Preferably something that works with
> >>> ATmega8, given that those MCU's are such a cheap thing
> >>> nowadays.)
> >>>
> >>> There're some ideas (and 8051 code) for that on [1], but I'd
> >>> like to know if there could be any better techniques.
> >>>
> >>> TIA.
> >>>
> >>> [1] http://www.pjrc.com/tech/8051/autobaud.html
> >>>
> >>> PS. It seems that I'm slowly drifting into designing my own, AVR-based
> >>> Bus Pirate clone. The good news is that the parts for this one
> >>> will likely cost under $10... (connectors included.)
> >>>
> >>
> >> (Please respond to news://comp.arch.embedded )
> >>
> >> Hi!
> >>
> >> I am so "pissed" about RS-232/EIA-232.
> >>
> >> After so many years with that "stupid vintage" serial communications
> >> protocol, we still do not have autonegotiation (and auto-baud-detection)
> >> built into the protocol definitions. Why not?
> >
> > RS232 doesn't have a protocol definition. And it never will.
> >
> There is a "protocol" in that RTS must be followed by CTS and so on.

That is CCITT V.24, which actually does not specify RS232 at all; it
could be used with all sorts of signalling levels and connectors.

Robert Wessel

unread,
Dec 10, 2012, 1:35:06 AM12/10/12
to
On Sun, 09 Dec 2012 10:47:29 +0100, Glenn <glen...@gmail.com> wrote:

As others have mentioned, RS-232 does not come close to defining the
stuff you want, and you need several layers higher in the stack. And
heck, not even the signal levels required by RS-232 are particularly
well respected.

But after doing everything you want, you're going to end up with
something largely incompatible with conventional async/serial/RS-232
style connections. At that point, there's no point - that sort of
serial connection is used *only* because support for it is ubiquitous,
not because it's a particularly good technology. So if you change it,
it becomes irrelevant.

And in this day and age serial ports are (slowly) dying anyway. And
once you add all that stuff into your link, you'll be back to
something with complexity similar to that of USB anyway, so why
reinvent it? And for peripherals, you've been able to get single USB
chip implementations for a buck or two for years now, which are only
minimally more difficult to use than a bare serial port from your
device's CPU. Or if you don't like that, throw a buck or two Ethernet
port and a TCP/IP stack on the device.

Meindert Sprang

unread,
Dec 10, 2012, 7:38:32 AM12/10/12
to
"Ivan Shmakov" <onei...@gmail.com> wrote in message
news:86sj7iu...@gray.siamics.net...
> BTW, is there an easy way to autodetect the baud rate while
> using an AVR UART? (Preferably something that works with
> ATmega8, given that those MCU's are such a cheap thing
> nowadays.)
>
> There're some ideas (and 8051 code) for that on [1], but I'd
> like to know if there could be any better techniques.
>

Here's how I've done it in a commercial product that has to support
auto-baud between 4800 and 57600 Baud on a fixed 8N1 format:

Check the framing error bit in the receive interrupt handler. After 20
framing errors, switch to the next baud rate and reset the framing counter.

Works like a charm.
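
Something along these lines, presumably (a hedged, untested sketch for an
ATmega8 at 8 MHz; register and vector names will differ on other parts, and
the exact rate table is an assumption):

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

static const uint16_t ubrr_for[] = { 103, 51, 25, 12, 8 };  /* 4800..57600 @ 8 MHz */
#define NRATES   (sizeof ubrr_for / sizeof ubrr_for[0])
#define FE_LIMIT 20

static uint8_t rate_idx;
static uint8_t fe_count;

static void set_rate(uint8_t i)
{
    UBRRH = ubrr_for[i] >> 8;
    UBRRL = ubrr_for[i] & 0xFF;
}

ISR(USART_RXC_vect)                      /* ATmega8 receive-complete vector */
{
    uint8_t status = UCSRA;              /* error flags: read before UDR    */
    uint8_t c      = UDR;

    if (status & _BV(FE)) {
        if (++fe_count >= FE_LIMIT) {    /* too many framing errors:        */
            fe_count = 0;                /* switch to the next baud rate    */
            rate_idx = (rate_idx + 1) % NRATES;
            set_rate(rate_idx);
        }
        return;
    }

    fe_count = 0;                        /* clean character: stay put and   */
    (void)c;                             /* hand it to the application      */
}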

Meindert


Ivan Shmakov

unread,
Dec 10, 2012, 8:19:41 AM12/10/12
to
>>>>> Meindert Sprang <m...@NOJUNKcustomORSPAMware.nl> writes:
>>>>> "Ivan Shmakov" <onei...@gmail.com> wrote in...

>> BTW, is there an easy way to autodetect the baud rate while using an
>> AVR UART? (Preferably something that works with ATmega8, given that
>> those MCU's are such a cheap thing nowadays.)

[...]

> Here's how I've done it in a commercial product that has to support
> auto-baud between 4800 and 57600 Baud on a fixed 8N1 format:

> Check the framing error bit in the receive interrupt handler. After
> 20 framing errors, switch to the next baud rate and reset the framing
> counter.

> Works like a charm.

Doesn't it mean that the host has to transmit considerable
amount of data for the device to adapt to the baud rate used?
Given the possibility of "interactive" use, such a delay doesn't
seem all that reasonable.

Might work as a last resort, however.

Meindert Sprang

unread,
Dec 10, 2012, 10:24:46 AM12/10/12
to
"Ivan Shmakov" <onei...@gmail.com> wrote in message
news:8638zer...@gray.siamics.net...
> >>>>> Meindert Sprang <m...@NOJUNKcustomORSPAMware.nl> writes:
> > Here's how I've done it in a commercial product that has to support
> > auto-baud between 4800 and 57600 Baud on a fixed 8N1 format:
>
> > Check the framing error bit in the receive interrupt handler. After
> > 20 framing errors, switch to the next baud rate and reset the framing
> > counter.
>
> > Works like a charm.
>
> Doesn't it mean that the host has to transmit considerable
> amount of data for the device to adapt to the baud rate used?
> Given the possibility of "interactive" use, such a delay doesn't
> seem all that reasonable.

It takes some time indeed. But in my application (receiving a constant data
stream from navigation instruments), this is no problem.

Meindert


Arlet Ottens

unread,
Dec 10, 2012, 10:49:56 AM12/10/12
to
On 12/10/2012 04:24 PM, Meindert Sprang wrote:

>> > Here's how I've done it in a commercial product that has to support
>> > auto-baud between 4800 and 57600 Baud on a fixed 8N1 format:
>>
>> > Check the framing error bit in the receive interrupt handler. After
>> > 20 framing errors, switch to the next baud rate and reset the framing
>> > counter.
>>
>> > Works like a charm.
>>
>> Doesn't it mean that the host has to transmit considerable
>> amount of data for the device to adapt to the baud rate used?
>> Given the possibility of "interactive" use, such a delay doesn't
>> seem all that reasonable.
>
> It takes some time indeed. But in my application (receiving a constant data
> stream from navigation instruments), this is no problem.

The time could be improved by keeping track of both well received data,
and framing errors. At first, you could try a new baud rate after 2 or 3
framing errors, but as soon as you receive a couple of good chars,
increase the tolerance for further framing errors.

To improve detection time, save a good baudrate in non-volatile memory
so it can be used as the first guess when the device powers up again.
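
With avr-libc that last part is only a few lines (sketch; the variable name
is made up):

#include <avr/eeprom.h>
#include <stdint.h>

static uint8_t EEMEM saved_rate_idx;     /* one EEPROM byte for the last good rate */

uint8_t initial_rate_guess(uint8_t nrates)
{
    uint8_t i = eeprom_read_byte(&saved_rate_idx);
    return (i < nrates) ? i : 0;         /* erased EEPROM reads 0xFF: fall back */
}

void remember_rate(uint8_t i)
{
    eeprom_update_byte(&saved_rate_idx, i);  /* writes only if the value changed */
}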




j.m.gr...@gmail.com

unread,
Dec 12, 2012, 12:22:45 AM12/12/12
to
On Saturday, December 8, 2012 3:17:16 AM UTC+13, Ivan Shmakov wrote:
> BTW, is there an easy way to autodetect the baud rate while
> using an AVR UART?

Most AutoBaud designs also send a known character to calibrate, and a pause.
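
A classic choice of calibration character is 'U' (0x55): on an 8N1 line the
frame becomes 0101010101 (start bit included), so every pulse is exactly one
bit wide and a single width measurement converts straight into a divisor. A
sketch of the conversion (assumes the width was taken in CPU-clock ticks,
i.e. Timer1 with no prescaler, and a USART in normal divide-by-16 mode):

#include <stdint.h>

/* baud = F_CPU / ticks  and  UBRR = F_CPU / (16 * baud) - 1,
 * so UBRR = ticks / 16 - 1 when the timer runs at the CPU clock. */
uint16_t ubrr_from_bit_ticks(uint16_t ticks)
{
    return (ticks + 8) / 16 - 1;         /* +8 rounds to the nearest divisor */
}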

If you want to auto-detect on random data, that is more complex, and you will (usually) discard info while you are deciding. Then, even if your hardware is smart enough to quickly lock onto a bit-time, you next have to decide which is actually the Start bit...

So there is no magic solution, but you can make systems that behave in a known way, reliable.

Another approach is to start a baud dialog at a known low speed, and then exchange information about mutually supported higher rates, and then switch to that.
If you want highest speeds, and widest clock-choice tolerance, that is the best approach.





> PS. It seems that I'm slowly drifting into designing my own, AVR-based
> Bus Pirate clone. The good news is that the parts for this one
> will likely cost under $10... (connectors included.)

Just to reality check that aspiration, I see Bus Pirate have moved to use a 256K Flash device.

["We didn't want to run out of space again soon, so we used a PIC 24FJ256GB106 with 256K of space."]

So you might want to look at the ATXmega parts, as they have some good prices on smaller USB models.

-jg

Robert Wessel

unread,
Dec 12, 2012, 1:29:38 AM12/12/12
to
On Tue, 11 Dec 2012 21:22:45 -0800 (PST), j.m.gr...@gmail.com
wrote:

>On Saturday, December 8, 2012 3:17:16 AM UTC+13, Ivan Shmakov wrote:
>> BTW, is there an easy way to autodetect the baud rate while
>> using an AVR UART?
>
>Most AutoBaud designs also send a known character to calibrate, and a pause.
>
> If you want to auto-detect on random data, that is more complex, and you will (usually) discard info while you are deciding. Then, even if your hardware is smart enough to quickly lock onto a bit-time, you next have to decide which is actually the Start bit...
>
> So there is no magic solution, but you can make systems that behave in a known way, reliable.


It's impossible to make baud autodetection 100% reliable without the
cooperation of the sender. Consider that at 8-N-1, the single byte
0xEF is indistinguishable from the pair 0xFE, 0xFE at double the baud
rate. Hardly the only such example, just a handy one. Remembering that
the LSB goes out first, an indistinguishable pair of bytes exists at
the higher baud rate for any byte where the fourth bit (from the low
end) is one and the fifth is zero at the lower baud rate.
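
For the skeptical, a small throwaway program (plain C, untested) that builds
each byte's 8-N-1 frame LSB-first, stretches every bit into two half-rate
cells, and checks whether the result parses as two clean frames at the
doubled rate:

#include <stdio.h>

int main(void)
{
    for (int b = 0; b < 256; b++) {
        int frame[10], cells[20], k = 0;

        frame[0] = 0;                        /* start bit            */
        for (int i = 0; i < 8; i++)
            frame[1 + i] = (b >> i) & 1;     /* data bits, LSB first */
        frame[9] = 1;                        /* stop bit             */

        for (int i = 0; i < 10; i++) {       /* each bit = 2 cells   */
            cells[k++] = frame[i];
            cells[k++] = frame[i];
        }

        /* Does this parse as two consecutive 8-N-1 frames at 2x the rate? */
        if (cells[0] != 0 || cells[9] != 1)      /* frame 1: start, stop   */
            continue;
        if (cells[10] != 0 || cells[19] != 1)    /* frame 2: start, stop   */
            continue;

        int hi1 = 0, hi2 = 0;
        for (int i = 0; i < 8; i++) {
            hi1 |= cells[1 + i]  << i;
            hi2 |= cells[11 + i] << i;
        }
        printf("0x%02X at rate B reads as 0x%02X 0x%02X at rate 2B\n",
               b, hi1, hi2);
    }
    return 0;
}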

Mark Borgerson

unread,
Dec 12, 2012, 2:43:54 AM12/12/12
to
In article <nh8gc81516b289uhc...@4ax.com>, robertwessel2
@yahoo.com says...
I think that's true if you define "without cooperation" to mean that
the receiver has absolutely no prior knowledge of the message from
the sender and the data stream is continuous with no significant
inter-character intervals. If you know that the data is ASCII
text in a given language, you probably have enough data to get
the baud rate correct, given a large enough sample size.

If the data stream is encrypted binary data in a continuous
stream, you've got more to worry about than just the baud rate!

Mark Borgerson


Robert Wessel

unread,
Dec 12, 2012, 3:38:06 AM12/12/12
to
I'd consider sending a known (or at least constrained) stream to be
cooperation. Obviously the tighter the constraints, the more quickly
you can get to a required confidence level in your baud rate
detection.

As for inter-character gaps - it depends on the speeds and characters
in question - so long as the low speed characters meet the xxx01xxx
format requirement, then the double speed stream simply looks like two
immediately adjacent characters, and gaps between individual low speed
characters are just gaps between high speed pairs.

My point was more that automatic baud rate detection is a hack, albeit
a useful one in some circumstances, although it will always have
limits. And frankly the use of RS-232/async ports should not be a
first choice these days.

Rich Webb

unread,
Dec 12, 2012, 8:50:26 AM12/12/12
to
Here https://www.dropbox.com/s/tslyhurkxpnri32/BaudRateDetermination.pdf
is an example of watching the pulse widths to infer the baud rate. It
decides on the rate by the end of the NMEA identifier field, leaving the
rest of that sentence available for sanity checks (framing errors,
reasonable character set, etc.). There are certainly other approaches
but I've been pretty successful with this.

--
Rich Webb Norfolk, VA

Mark Borgerson

unread,
Dec 12, 2012, 10:46:11 AM12/12/12
to
In article <rufgc89c170lh0o16...@4ax.com>, robertwessel2
I still find them useful when connecting to oceanographic instruments.
I have been able to get full-speed USB (12 Mbps) through a pair of
waterproof connectors, despite the impedance problems. I haven't
yet tried to get USB through the connector and 20meters from the
deck to the dry lab on a research vessel. I've also tried
Zigbee radios, but they run into problems with aluminum pressure
cases and deckhouses.

One advantage that serial ports have over USB is that, with
a properly designed receiving system, you have controlled
latency that can allow time-stamping of incoming data. That's
more difficult with USB or radio links where the data gets
mashed together into packets and the reception time has little
relation to the transmission time.


Mark Borgerson


linnix

unread,
Dec 12, 2012, 1:50:32 PM12/12/12
to
On Dec 12, 7:46 am, Mark Borgerson <mborger...@comcast.net> wrote:
> In article <rufgc89c170lh0o16ddk022rrsrhgq5...@4ax.com>, robertwessel2
> @yahoo.com says...
>
> > On Tue, 11 Dec 2012 23:43:54 -0800, Mark Borgerson
> > <mborger...@comcast.net> wrote:
>
> > >In article <nh8gc81516b289uhcdla34qrm3d1dhm...@4ax.com>, robertwessel2
> > >@yahoo.com says...
>
> > >> On Tue, 11 Dec 2012 21:22:45 -0800 (PST), j.m.granvi...@gmail.com
I don't get it. If you've got a hole for an RS232 cable, you certainly
can run a (water-proof) antenna through the case. 20 meters is no big
deal for 802.15.4 (or ZigBee).

>
> One advantage that serial ports have over USB is that, with
> a properly designed receiving system, you have controlled
> latency that can allow time-stamping of incoming data.  That's
> more difficult with USB or radio links where the data gets
> mashed together into packets and the reception time has little
> relation to the transmission time.

Yes, it's difficult to time-stamp the receiving end, but you can
always time-stamp the transmitting end. We put timing data in the
packet itself.

Jon Kirwan

unread,
Dec 12, 2012, 3:42:50 PM12/12/12
to
On Wed, 12 Dec 2012 10:50:32 -0800 (PST), linnix
<m...@linnix.info-for.us> wrote:

>On Dec 12, 7:46 am, Mark Borgerson <mborger...@comcast.net> wrote:
>>
>><snip>
>>
>> One advantage that serial ports have over USB is that, with
>> a properly designed receiving system, you have controlled
>> latency that can allow time-stamping of incoming data. That's
>> more difficult with USB or radio links where the data gets
>> mashed together into packets and the reception time has little
>> relation to the transmission time.
>
>Yes, it's difficult to time-stamp the receiving end, but you can
>always time-stamp the transmitting end. We put timing data in the
>packet itself.

That's nice for some purposes, not nice for others. For
example, if you are designing an instrument (which is what I
do for work) that may be part of a closed loop control system
(such as, for example, a GaAs boule puller) then time stamps
do NOT HELP you at all. What matters entirely is the rigorous
repeatability of the phase delay of measurements and smaller
loop times. You want a very short loop and zero variance in
the measurement timing (a delay cannot be avoided, but what
you need _most_ is no variation of that delay -- it must be
the exact same value every time, if possible.)

Jon

Mark Borgerson

unread,
Dec 12, 2012, 3:43:37 PM12/12/12
to
In article <a643e02a-fa5f-41de-a83e-5bb572f2e0c7
@i7g2000pbf.googlegroups.com>, m...@linnix.info-for.us says...
>
> On Dec 12, 7:46 am, Mark Borgerson <mborger...@comcast.net> wrote:
> > In article <rufgc89c170lh0o16ddk022rrsrhgq5...@4ax.com>, robertwessel2
> > @yahoo.com says...
><<SNIP>>
> > I still find them useful when connecting to oceanographic instruments.
> > I have been able to get full-speed usb (12mb) through a pair of
> > waterproof connectors, despite the impedance problems. I haven't
> > yet tried to get USB through the connector and 20meters from the
> > deck to the dry lab on a research vessel. I've also tried
> > Zigbee radios, but they run into problems with aluminum pressure
> > cases and deckhouses.
>
> I don't get it. If you got a hole for RS232 cable, you certainly can
> run an (water-proof) antenna through the case. 20 meters is no big
> deal for 802.15.4 (or ZigBee).

There is limited space on the end caps for new holes. There is
already a hole for power and RS-232/USB which needs to be there
as the power is plugged/unplugged at that connector.

It's also more than a hole---it's an underwater bulkhead
connector that costs about $100 and has to hold pressures
up to 5000PSI. That last requirement is a step above
"waterproof". Next there's the problem of getting the
signals through the aluminum or steel bulkheads and
watertight doors.

Another issue with Zigbee is configuring the radio addresses
so that the host communicates with the proper instrument.
That's a bit more complex than plugging in the connector
and selecting COM1.
>
> >
> > One advantage that serial ports have over USB is that, with
> > a properly designed receiving system, you have controlled
> > latency that can allow time-stamping of incoming data. That's
> > more difficult with USB or radio links where the data gets
> > mashed together into packets and the reception time has little
> > relation to the transmission time.
>
> Yes, it's difficult to time-stamp the receiving end, but you can
> always time-stamp the transmitting end. We put timing data in the
> packet itself.

That is the case with most instruments I've designed in the last decade.
Oceanographic instruments have long service lifetimes. The one I am
redesigning now was first operated in 1990. It used clocks, counters
and CMOS logic state machines to feed data from an ADC to a UART and up
an RS-485 cable about 500m to the surface. The data was streamed in 2-
byte pairs at near the cable capacity at 115KB. Since there was no RTC
in the subsurface system, the surface program was responsible for time-
stamping the data, merging it with GPS data, and recording.

My goal for the near-term replacement of the subsurface instrument
is to be able to use the same surface software----which means
duplicating the timing of the old instrument.

Mark Borgerson


Jon Kirwan

unread,
Dec 12, 2012, 3:56:47 PM12/12/12
to
On Wed, 12 Dec 2012 12:42:50 -0800, I wrote:

>exact same value every time

exact same delay every time

Jon

upsid...@downunder.com

unread,
Dec 12, 2012, 4:22:54 PM12/12/12
to

>On Wed, 12 Dec 2012 10:50:32 -0800 (PST), linnix
><m...@linnix.info-for.us> wrote:
>
>>On Dec 12, 7:46 am, Mark Borgerson <mborger...@comcast.net> wrote:
>>>
>>><snip>
>>>
>>> One advantage that serial ports have over USB is that, with
>>> a properly designed receiving system, you have controlled
>>> latency that can allow time-stamping of incoming data. That's
>>> more difficult with USB or radio links where the data gets
>>> mashed together into packets and the reception time has little
>>> relation to the transmission time.
>>
>>Yes, it's difficult to time-stamp the receiving end, but you can
>>always time-stamp the transmitting end. We put timing data in the
>>packet itself.

Time stamping at the source helps in many situations, however, you
_need_ a reliable timing generator at the source.

Of course, if the source device is connected to a good GPS or IRIG
time source, things should be pretty easy.

However, if you have to synchronize the local clock at the signal
source over the same serial link, there are a few pitfalls, mainly due
to various jitter sources.

To work around that jitter, you may have to use something like NTP
(Network Time Protocol) adapted for serial links, in order to average
out the jitter of individual synchronization attempts over the serial
link.

Of course, the NTP principle assumes equal propagation delay in both
directions, which can usually be achieved in serial communication.
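
To make the arithmetic concrete, here is a minimal C sketch of the
usual four-timestamp estimate (the function names and tick units are
made up for illustration; it assumes the remote end echoes back its
receive and transmit times):

    #include <stdint.h>

    /* t0 = host transmit time, t1 = device receive time,
       t2 = device transmit time, t3 = host receive time,
       all in the same (arbitrary) timer tick units.        */
    static int32_t offset_ticks(uint32_t t0, uint32_t t1,
                                uint32_t t2, uint32_t t3)
    {
        /* NTP-style offset; assumes equal propagation delay both ways. */
        return ((int32_t)(t1 - t0) + (int32_t)(t2 - t3)) / 2;
    }

    /* Average several exchanges to beat down per-exchange jitter. */
    static int32_t average_offset(const int32_t *sample, unsigned n)
    {
        int64_t sum = 0;
        for (unsigned i = 0; i < n; i++)
            sum += sample[i];
        return (int32_t)(sum / (int64_t)n);
    }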

Robert Wessel

unread,
Dec 12, 2012, 4:39:28 PM12/12/12
to
In Jon's application, the communications link is part of the control
loop. It's the actual jitter on that that's the problem - it doesn't
matter how accurately you time when the datum was produced at the
device, the controller, at the other end of the communications cable
can't generate a response until that datum actually shows up.

A USB isochronous pipe might fit the application, although certainly
not at the distances he's talking about.

Jon Kirwan

unread,
Dec 12, 2012, 6:13:51 PM12/12/12
to
Exactly. The analogy I like to use is this:

Imagine you need to poke a long, thin, flexible bamboo rod
into the entry hole of a bird house. You only get to hold the
rod at the end and the hole is at your eye level 50' away.
Now generalize this idea for the above: the length of the
pole is the loop delay. It is easier to do if the pole is
short. VERY MUCH easier, in fact. Distance makes a LOT of
difference -- it's non-linear. Then also, now imagine that
the length of the pole is varying all over the place as you
try? The less variation in length there is, the easier the
job is. No variation is best, even if you are stuck with a
very long pole -- because you can design an algorithm that
can anticipate the flexing, at least. But if the length is
shifting almost at random, it greatly complicates any attempt
to design such an algorithm.

Of course, it's dead easy to design an algorithm with a short
rod. The changes in control at your hand are immediately
reflected and so it's child's play then.

Anyway, that kind of gets the idea across. Time stamping will
tell you what you might have wanted to have done. But it is
useless 20/20 hindsight. Your GaAs boule has huge ripples and
you are wasting lots of money in your process and you start
looking for a different solution and you fire those who
couldn't understand how to do a proper closed loop control
design.

>A USB isochronous pipe might fit the application, although certainly
>not at the distances he's talking about.

I think I don't know enough about USB to comment here. Yes, I
read through much of the over 1000 pages of the USB 2.0
document -- not entirely enjoying it, either. But I seem to
recall that the shortest timer on the host side is set at
1ms. And I honestly don't know how that translates into
allowable variability -- but I suspect it means a need to
plan that much, 1ms, variability. And even then, it might be
worse under circumstances I'm not well informed about being
as ignorant as I am about USB nuances.

RS-232 isn't "great guns." It's still serial. But with RS-422
for example I can at least get fairly quick transfers of data
and at precision intervals that are as known and as
predictable as the crystal clocks I'm using to time them. I
will usually select a processor with fixed (or very small
variability in) latencies relative to timer interrupt events
-- the ADSP-21xx processor for example has zero variability
(if it isn't busy with interrupts locked out, of course) in
its interrupting response. These things can be important, at
times.

Jon

Jon Kirwan

unread,
Dec 12, 2012, 6:28:15 PM12/12/12
to
On Wed, 12 Dec 2012 15:13:51 -0800, I wrote:

>I think I don't know enough about USB to comment here. Yes, I
>read through much of the over 1000 pages of the USB 2.0
>document -- not entirely enjoying it, either. But I seem to
>recall that the shortest timer on the host side is set at
>1ms. And I honestly don't know how that translates into
>allowable variability -- but I suspect it means a need to
>plan that much, 1ms, variability. And even then, it might be
>worse under circumstances I'm not well informed about being
>as ignorant as I am about USB nuances.

Not to mention variability in managing USB packets and all of
the conditional code whose branch edges (different branch
pathways) do not all execute in exactly the same number of
cycles so that there is a fixed delay which can be planned
on. Compilers do NOT provide a method to force fixed timing
through all code edges, either.

>RS-232 isn't "great guns." It's still serial. But with RS-422
>for example I can at least get fairly quick transfers of data
>and at precision intervals that are as known and as
>predictable as the crystal clocks I'm using to time them. I
>will usually select a processor with fixed (or very small
>variability in) latencies relative to timer interrupt events
>-- the ADSP-21xx processor for example has zero variability
>(if it isn't busy with interrupts locked out, of course) in
>its interrupting response. These things can be important, at
>times.

In the case of simple serial communications, I can much more
easily arrange equal timing in all branches of the rather
short and simple code, by inspection. So I can guarantee, to
the cycle, that there is no variability. Which is good.

Jon

Robert Wessel

unread,
Dec 12, 2012, 7:41:45 PM12/12/12
to
On Wed, 12 Dec 2012 15:13:51 -0800, Jon Kirwan
<jo...@infinitefactors.org> wrote:

>>A USB isochronous pipe might fit the application, although certainly
>>not at the distances he's talking about.
>
>I think I don't know enough about USB to comment here. Yes, I
>read through much of the over 1000 pages of the USB 2.0
>document -- not entirely enjoying it, either. But I seem to
>recall that the shortest timer on the host side is set at
>1ms. And I honestly don't know how that translates into
>allowable variability -- but I suspect it means a need to
>plan that much, 1ms, variability. And even then, it might be
>worse under circumstances I'm not well informed about being
>as ignorant as I am about USB nuances.


USB 2.0 has service intervals down to 125us, and isochronous transfers
reserve bandwidth slots. With USB 3.0 ("Superspeed"), I think the
guaranteed jitter is 200ns; it's higher than that for hi-speed, but I
don't remember the value. In most cases it's the software stack that's
going to be the limiting factor on jitter. The audio guys seem to be
slightly unhappy, apparently wanting a 100ns jitter guarantee.


>RS-232 isn't "great guns." It's still serial. But with RS-422
>for example I can at least get fairly quick transfers of data
>and at precision intervals that are as known and as
>predictable as the crystal clocks I'm using to time them. I
>will usually select a processor with fixed (or very small
>variability in) latencies relative to timer interrupt events
>-- the ADSP-21xx processor for example has zero variability
>(if it isn't busy with interrupts locked out, of course) in
>its interrupting response. These things can be important, at
>times.


There are certainly advantages to a very lightweight stack.

OTOH, it doesn't sound like your application would really ever make any
use of automatic baud rate detection.

Jon Kirwan

unread,
Dec 12, 2012, 8:12:55 PM12/12/12
to
Oh, yeah. I'm on about something different, of course. Just
responding to this proposed idea that time-stamping solves
all problems.

Jon

josephkk

unread,
Dec 12, 2012, 11:30:38 PM12/12/12
to
On Sun, 09 Dec 2012 10:47:29 +0100, Glenn <glen...@gmail.com> wrote:

>On 07/12/12 15.17, Ivan Shmakov wrote:
>> BTW, is there an easy way to autodetect the baud rate while
>> using an AVR UART? (Preferably something that works with
>> ATmega8, given that those MCU's are such a cheap thing
>> nowadays.)
>>
>> There're some ideas (and 8051 code) for that on [1], but I'd
>> like to know if there could be any better techniques.
>>
>> TIA.
>>
>> [1] http://www.pjrc.com/tech/8051/autobaud.html
>>
>> PS. It seems that I'm slowly drifting into designing my own, AVR-based
>> Bus Pirate clone. The good news is that the parts for this one
>> will likely cost under $10... (connectors included.)
>>
>
I have two things to say to you:

Get a copy of the standard and study it; it has been TIA-232 for over 15
years now.

The widget sounds like a great idea, go ahead and make it and sell it, see
where it gets you. Be sure to cover all of the off specification
implementations out there.

Bye.

?-)

Frnak McKenney

unread,
Dec 13, 2012, 11:25:41 AM12/13/12
to
On Wed, 12 Dec 2012 12:43:37 -0800, Mark Borgerson <mborg...@comcast.net> wrote:
> In article <a643e02a-fa5f-41de-a83e-5bb572f2e0c7
> @i7g2000pbf.googlegroups.com>, m...@linnix.info-for.us says...
>>
>> On Dec 12, 7:46 am, Mark Borgerson <mborger...@comcast.net> wrote:
>> > In article <rufgc89c170lh0o16ddk022rrsrhgq5...@4ax.com>, robertwessel2
>> > @yahoo.com says...
>> >
>><<SNIP>>
>> > I still find them useful when connecting to oceanographic instruments.
>> > I have been able to get full-speed usb (12mb) through a pair of
>> > waterproof connectors, despite the impedance problems.  I haven't
>> > yet tried to get USB through the connector and 20meters from the
>> > deck to the dry lab on a research vessel.  I've also tried
>> > Zigbee radios, but they run into problems with aluminum pressure
>> > cases and deckhouses.

[...]

> It's also more than a hole---it's an underwater bulkhead
> connector that costs about $100 and has to hold pressures
> up to 5000PSI. That last requirement is a step above
> "waterproof". Next there's the problem of getting the
> signals through the aluminum or steel bulkheads and
> watertight doors.

Mark,

Apologies for the diversion from your subject, but you have brought up
a topic I have been curious about for some time.

Is it easier/cheaper to design and build a through-bulkhead connector
capable of withstanding (say) 5000psi than (say) an optical or
magnetic port through the same bulkhead?

All three approaches can pass data, but only the through-bulkhead
connector can pass power. Is that the major criterion? That is, your
application requires more power than can easily be supplied
through batteries? Or is there something else involved?

Jes' curious...


Frank
--
Perhaps the greatest mystery of the Cold War is why the Worker's
Paradise could not manage to produce a decent pair of jeans.
-- Niall Ferguson / Civilization: The West and the Rest
--
Frank McKenney, McKenney Associates
Richmond, Virginia / (804) 320-4887
Munged E-mail: frank uscore mckenney aatt mindspring ddoott com



Robert Wessel

unread,
Dec 13, 2012, 1:34:06 PM12/13/12
to
There *are* techniques to wirelessly move power. They're usually
either not terribly convenient, or not good for huge amounts of power.
For example, you could simply put a large coil of wire on either side,
and run AC through one of those, and you've basically got an
inefficient transformer. A project I was peripherally associated with
many years ago used a pump with no drive shaft (nasty liquids with
very exothermic reactions with each other, so leaks were a bad thing).
They used a solid aluminum pump housing, with a rotating magnet on the
outside, and a matching magnet attached to the impeller inside (think
magnetic stirrer). The entire assembly, along with the piping was
welded - no seals, shafts, gaskets, joints or anything else to leak.
While that didn't move electrical power, replacing the impeller with a
small generator would clearly have been possible.

The 5000psi requirement would lead one to guess steel as a primary
structural element, which will rather impact your magnetics, so my two
examples would be problematic, but there are other approaches.

OTOH, a pair of wires solves all of those problems with rather less
complexity (and higher efficiency), except for the need to actually
run them through the container wall.

Mark Borgerson

unread,
Dec 13, 2012, 7:47:38 PM12/13/12
to
In article <VuWdndaS1dcYnFfN...@earthlink.com>,
fr...@far.from.the.madding.crowd.com says...
Optical or magnetic interfaces may work OK at lower pressures, but the
mechanical strength to resist 5 to 10KPSI makes the design more
difficult. Building an end cap to hold the pressure, but having
good data transmission capability can also be expensive. Furthermore,
optical and magnetic interfaces may need more power and cost a lot
more than an RS-232 interface chip.


>
> All three approaches can pass data, but only the throug-bulkhead
> connector can pass power. Is that the major criteria? That is, your
> application requires more power than can be aeasily be supplied
> through batteries? Os is there something else involved?
>
Passing power is one criterion. There are systems that can live on
internal batteries. We put loggers on moorings on the equator that
collect data for 16 channels at up to 100 samples/second. We can
use internal lithium primary cells and have enough power to log
for a year at about 200MB/day. Those systems only need about 15mA at
7V. Other systems may need 10 or a hundred times that power and
collect less data or require external power.

We often use a bulkhead connector in place of a power switch.
With a dummy plug, power is off. With a plug routing
power through the pins, the system is on. Some instrument makers
have tried magnetic reed switches, but then you lose the ability to
power up the instrument when the batteries die, and to upload data
through the port by providing power through it.

Oceanographic instrument designers envy those doing designs
for space: Nothing grows on their instrument surfaces, the
medium isn't corrosive, and they only have to worry about
pressure differentials of about 1 atmosphere, and asian fishermen don't
tie up to satellites because they think it's an easy way to save fuel.


Of course, the space guys do face some other hazards--
micrometeorites, big temperature differentials, etc..


> Jes' curious...
>
Mark Borgerson



Jon Kirwan

unread,
Dec 13, 2012, 9:22:22 PM12/13/12
to
On Thu, 13 Dec 2012 16:47:38 -0800, Mark Borgerson
<mborg...@comcast.net> wrote:

><snip>
>Oceanographic instrument designers envy those doing designs
>for space: Nothing grows on their instrument surfaces, the
>medium isn't corrosive, and they only have to worry about
>pressure differentials of about 1 atmosphere, and asian fishermen don't
>tie up to satellites because they think it's an easy way to save fuel.
><snip>

It's not so nice in space, either. The following is a paper
just on one aspect -- the serious problem of satellite
charging effects:

http://www.dept.aoe.vt.edu/~cdhall/courses/aoe4065/NASADesignSPs/rp1375.pdf

Take a look over the long list of satellites and problems.

There are other effects, as well, and a "crud" accumulates
over time and covers optics, solar panels, and so on in ways
that impair function eventually to the point of failure.

If otherwise lucky, 30 years is about the most you hope for.

Jon

Jon Kirwan

unread,
Dec 13, 2012, 9:27:33 PM12/13/12
to

Mark Borgerson

unread,
Dec 14, 2012, 2:20:52 AM12/14/12
to
In article <2n1lc85bcrotcf4t1...@4ax.com>,
jo...@infinitefactors.org says...
>
> On Thu, 13 Dec 2012 16:47:38 -0800, Mark Borgerson
> <mborg...@comcast.net> wrote:
>
> ><snip>
> >Oceanographic instrument designers envy those doing designs
> >for space: Nothing grows on their instrument surfaces, the
> >medium isn't corrosive, and they only have to worry about
> >pressure differentials of about 1 atmosphere, and asian fishermen don't
> >tie up to satellites because they think it's an easy way to save fuel.
> ><snip>
>
> It's not so nice in space, either. The following is a paper
> just on one aspect -- the serious problem of satellite
> charging effects:
>
> http://www.dept.aoe.vt.edu/~cdhall/courses/aoe4065/NASADesignSPs/rp1375.pdf

I must admit that accumulated charge isn't so much a problem for
oceanographic instruments. ;-)
>
> Take a look over the long list of satellites and problems.
>
> There are other effects, as well, and a "crud" accumulates
> over time and covers optics, solar panels, and so on in ways
> that impair function eventually to the point of failure.
>
> If otherwise lucky, 30 years is about the most you hope for.

I doubt oceanographers would wait that long for their data.
It's tough to get high-bandwidth data back in
real time from instruments under 30m of seawater.
We're recording 2 to 3Kbytes of data per second continuously.
Most anything that can transmit with that bandwidth back from
the equator exceeds our power budget.

Here's what one of our turbulence sensors looks like after
about 6 months to a year on a mooring at the equator:

https://dl.dropbox.com/u/24841567/P3120108.JPG

The copper probe at the lower left has the fast-response
thermistor that is the primary sensor.

More stuff at:

http://mixing.coas.oregonstate.edu/research/moored_mixing/



Mark Borgerson



upsid...@downunder.com

unread,
Dec 14, 2012, 3:33:19 AM12/14/12
to
On Thu, 13 Dec 2012 12:34:06 -0600, Robert Wessel
<robert...@yahoo.com> wrote:

>>Is it easier/cheaper to design and build a through-bulhead connector
>>capable of withstanding (say) 5000psi than (say) an optical or
>>magnetic port through the same bulkhead?
>>
>>All three approaches can pass data, but only the throug-bulkhead
>>connector can pass power. Is that the major criteria? That is, your
>>application requires more power than can be aeasily be supplied
>>through batteries? Os is there something else involved?
>>
>>Jes' curious...
>
>
>There *are* techniques to wirelessly move power. They're usually
>either not terribly convenient, or not good for huge amounts of power.
>For example, you could simply put a large coil of wire on either side,
>and run AC through one of those, and you've basically got an
>inefficient transformer.

Since there are windows on manned vehicles capable of reaching the
bottom of the oceans, there should also be other high strength
materials for the port hole with suitable dielectric materials to be
used with capacitively coupling or near field RF. With a metallic
hull, the return current path should be easy to arrange.


Mark Borgerson

unread,
Dec 14, 2012, 10:43:20 AM12/14/12
to
In article <17olc8hk2ohoem6ij...@4ax.com>,
upsid...@downunder.com says...
Those windows are usually at least 8 inches thick. That's a problem
for capacitive coupling. Near-field RF is possible, but probably
has a larger power budget and higher cost than a pair of connectors
and RS-232 transceiver chips.

You don't want to use your pressure case as part of a return path.
That's just begging for electrolytic destruction of the pressure
case. For shallow pressure cases, we use Delrin plastic. For
deeper cases we use aluminum alloys. For the latter, we have
to make sure that they are isolated from contact with other
metals.

Mark Borgerson

Ivan Shmakov

unread,
Dec 14, 2012, 11:21:11 AM12/14/12
to
>>>>> Robert Wessel <robert...@yahoo.com> writes:

[...]

> And frankly the use of RS-232/async ports should not be a first
> choice these days.

Given the intended application, I tend to agree -- the use of
USB would probably simplify the things there a lot.

Unfortunately, I'm yet to find a really cheap MCU (8-, 32-, or
perhaps even 16-bit) with an on-chip USB. (V-USB doesn't seem
to fit well, for its CDC-ACM capability is necessarily a hack.)
The best I've found so far are some STM32's for under $4. I'd
like to see if there could be anything else at half that price.

(Though USB identifiers may become an issue. They provide one
allowing for relatively unrestricted use with V-USB, but I don't
know what's the current practice for the MCU's with hardware USB
ports.)

--
FSF associate member #7257

Rob Gaddi

unread,
Dec 14, 2012, 12:18:00 PM12/14/12
to
Half that price is easy, buy them at kilounit quantities. Digikey's got
the LPC1342, 250p at $2.57, and 1000p at $1.95. If you need fewer
pieces than that, then at the end of the day you're just not talking
about much money as compared to the other engineering costs.


--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order. See above to fix.

Frank Miles

unread,
Dec 14, 2012, 12:50:41 PM12/14/12
to
On Thu, 13 Dec 2012 23:20:52 -0800, Mark Borgerson wrote:

[snip]

> Here's what one of our turbulence sensors looks like after about 6
> months to a year on a mooring at the equator:
>
> https://dl.dropbox.com/u/24841567/P3120108.JPG
>
> The copper probe at the lower left has the fast-response thermistor that
> is the primary sensor.

Fun! It looks comparatively unslimed!

In my work biocompatibility is often important. Seems like you need
materials that are somewhat bio-incompatible to retard these growths. Do
you try for that? Or is there too great a concern with adding toxic
substances to the environment?

Simon Clubley

unread,
Dec 14, 2012, 12:53:01 PM12/14/12
to
On 2012-12-14, Ivan Shmakov <onei...@gmail.com> wrote:
>>>>>> Robert Wessel <robert...@yahoo.com> writes:
>
> [...]
>
> > And frankly the use of RS-232/async ports should not be a first
> > choice these days.
>
> Given the intended application, I tend to agree -- the use of
> USB would probably simplify the things there a lot.
>
> Unfortunately, I'm yet to find a really cheap MCU (8-, 32-, or
> perhaps even 16-bit) with an on-chip USB. (V-USB doesn't seem
> to fit well, for its CDC-ACM capability is necessarily a hack.)
> The best I've found so far are some STM32's for under $4. I'd
> like to see if there could be anything else at half that price.
>

I am assuming you want USB device only. If you want USB host as well,
try looking at the PIC32MX range to see if it's something that would
match your requirements.

Your previous postings imply that you are doing hobbyist type work and
you don't say what packaging you need the device in so as much as it
pains me :-), I am going to point you to the PIC18F range if you want
it in PDIP.

Start by looking at the PIC18F14K50. It's quite a limited device so
you may need to look at the other options in the PIC18F range to see
which of them also support USB device.

> (Though USB identifiers may become an issue. They provide one
> allowing for relatively unrestricted use with V-USB, but I don't
> know what's the current practice for the MCU's with hardware USB
> ports.)
>

Since I only use them for my own projects, I just make them up. Since
no one else uses my embedded code, I also don't know what the current
practice is short of having to pay for formally assigned identifiers.

In the past I have heard of some manufacturers allowing you to use
identifiers within a subset of the range assigned to the manufacturer,
but I don't know what the current policy is.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world

Ivan Shmakov

unread,
Dec 14, 2012, 1:26:44 PM12/14/12
to
>>>>> Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> writes:
>>>>> On 2012-12-14, Ivan Shmakov <onei...@gmail.com> wrote:

[...]

>> Unfortunately, I'm yet to find a really cheap MCU (8-, 32-, or
>> perhaps even 16-bit) with an on-chip USB. (V-USB doesn't seem to
>> fit well, for its CDC-ACM capability is necessarily a hack.) The
>> best I've found so far are some STM32's for under $4. I'd like to
>> see if there could be anything else at half that price.

> I am assuming you want USB device only.

Yes.

[...]

> Your previous postings imply that you are doing hobbyist type work
> and you don't say what packaging you need the device in so as much as
> it pains me :-), I am going to point you to the PIC18F range if you
> want it in PDIP.

My guess is that anything with 0.8 mm or larger spacing will be
just fine. 0.5 mm pitch feels a bit too fine to tackle for an
amateur, but I'm yet to try it myself.

> Start by looking at the PIC18F14K50. It's quite a limited device so
> you may need to look at the other options in the PIC18F range to see
> which of them also support USB device.

... As for the PIC's, given the sheer number of university
courses built around these I've seen, I've always assumed
	there's something wrong with them. (Some of their "weak
points" were referenced in recent CAE postings, BTW.)

Anyway, is there a reason I'd choose this particular MCU over,
say, STM32F103C8 (other than the package, which is the troubling
0.5 mm LQFP 48 for the latter, that is)? As for the cost, it
seems to be > $5 a piece for the former vs. < $3 for the latter
(for the smaller orders, e. g., a dozen or so.)

>> (Though USB identifiers may become an issue. They provide one
>> allowing for relatively unrestricted use with V-USB, but I don't
>> know what's the current practice for the MCU's with hardware USB
>> ports.)

> Since I only use them for my own projects, I just make them up.

Unfortunately, my ultimate intent is to share the designs with
	the community, and using "made-up" identifiers doesn't seem
like a particularly good example to teach on.

[...]

Jon Kirwan

unread,
Dec 14, 2012, 2:19:39 PM12/14/12
to
To Rob: The title of the thread includes the word "amateur,"
so probably not kilounit qty. More likely, it's more about
posting up a web page on some completed project or another.

To OP: For a cheap hobbyist one-off with USB connection I'll
probably just grab an MSP430 LaunchPad off the shelf if the
application idea fits. It's already got connectors for a
daughterboard, the cpu is socketed, comes with two cpus, a
32kHz xtal, two different colored LEDs, two different
pushbuttons, a USB cable, and uses RS232 between the target
and the host via USB. It is $4.30, good tools are available,
and most anyone can easily get one. You still write simple
rs232 drivers that way, as well.

Jon

Simon Clubley

unread,
Dec 14, 2012, 2:30:54 PM12/14/12
to
On 2012-12-14, Ivan Shmakov <onei...@gmail.com> wrote:
>>>>>> Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> writes:
>>>>>> On 2012-12-14, Ivan Shmakov <onei...@gmail.com> wrote:
>
> [...]
>
> > Your previous postings imply that you are doing hobbyist type work
> > and you don't say what packaging you need the device in so as much as
> > it pains me :-), I am going to point you to the PIC18F range if you
> > want it in PDIP.
>
> My guess is that anything with 0.8 mm or larger spacing will be
> just fine. 0.5 mm pitch feels a bit too fine to tackle for an
> amateur, but I'm yet to try it myself.
>
> > Start by looking at the PIC18F14K50. It's quite a limited device so
> > you may need to look at the other options in the PIC18F range to see
> > which of them also support USB device.
>
> ... As for the PIC's, given the sheer number of university
> courses built around these I've seen, I've always assumed
> there're something wrong with them. (Some of their "weak
> points" were referenced in recent CAE postings, BTW.)
>

Yes, including by myself. :-) They do have a lousy architecture, but
they are available in packages I am comfortable working with and they
have USB device in those same packages which the other MCU architectures
I prefer do not.

> Anyway, is there a reason I'd choose this particular MCU over,
> say, STM32F103C8 (other than the package, which is the troubling
> 0.5 mm LQFP 48 for the latter, that is)? As for the cost, it
> seems to be > $5 a piece for the former vs. < $3 for the latter
> (for the smaller orders, e. g., a dozen or so.)
>

If you are comfortable working at something smaller than PDIP, then
I cannot think of any reason to consider the 8-bit PICs in your case.

BTW, at Farnell in the UK, they are charging just over 2 UKP per unit
(well under 4 US$ at current exchange rates) for the PIC18F14K50 and I
have always considered Farnell to be a bit more expensive than some
others based on the prices I have paid for other products.

The AT90USB162-16AU comes in a TQFP 0.8mm pitch package according to
Atmel's summary PDF; the Farnell UK price seems reasonable at qty 10,
but given what you have said above, I don't know what price your
suppliers would charge for them however.

Rich Webb

unread,
Dec 14, 2012, 2:34:15 PM12/14/12
to
On Sat, 15 Dec 2012 01:26:44 +0700, Ivan Shmakov <onei...@gmail.com>
wrote:

> My guess is that anything with 0.8 mm or larger spacing will be
> just fine. 0.5 mm pitch feels a bit too fine to tackle for an
> amateur, but I'm yet to try it myself.

0.5 mm pitch is actually pretty easy. Swipe and wipe then inspect and
(if necessary) wick. Use lots of flux. For removal, if you don't have
access to hot air then try ChipQuik.

--
Rich Webb Norfolk, VA

Mark Borgerson

unread,
Dec 14, 2012, 5:28:11 PM12/14/12
to
In article <kafotg$2mn$1...@dont-email.me>, f...@u.washington.edu says...
In the past, I've seen instrument housings painted with anti-fouling
paint. The effectiveness of those paints on moorings diminished
when tin disappeared from the formulas. Many of the new AF paints
rely on occasional bursts of water velocity, which sheds surface
growth as the paint ablates. That doesn't work so well on moorings.

For the instrument shown, the growth away from the sensors
isn't much of an issue. The Delrin is generally ok for redeployment
after pressure washing. The copper around the thermistors seems
to keep the thermistors working OK for up to a year.

The folks who make optical instruments that need a clear window
into the ocean have tried small wipers, copper covers, and a number
of other approaches to handling biofouling. None are perfect,
to my knowledge, and there is still a lot of experimentation
going on.


Mark Borgerson


Paul

unread,
Dec 14, 2012, 6:25:27 PM12/14/12
to
In article <86vcc4o...@gray.siamics.net>, onei...@gmail.com
says...
>
> >>>>> Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> writes:
> >>>>> On 2012-12-14, Ivan Shmakov <onei...@gmail.com> wrote:
>
> [...]
>
> > Your previous postings imply that you are doing hobbyist type work
> > and you don't say what packaging you need the device in so as much as
> > it pains me :-), I am going to point you to the PIC18F range if you
> > want it in PDIP.
>
> My guess is that anything with 0.8 mm or larger spacing will be
> just fine. 0.5 mm pitch feels a bit too fine to tackle for an
> amateur, but I'm yet to try it myself.

Try getting a Schmart Board then to mount it very easily.

Look here http://www.schmartboard.com/



--
Paul Carpenter | pa...@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/> PC Services
<http://www.pcserviceselectronics.co.uk/fonts/> Timing Diagram Font
<http://www.gnuh8.org.uk/> GNU H8 - compiler & Renesas H8/H8S/H8 Tiny
<http://www.badweb.org.uk/> For those web sites you hate

Ivan Shmakov

unread,
Dec 14, 2012, 11:25:29 PM12/14/12
to
>>>>> Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> writes:

[...]

> The AT90USB162-16AU comes in a TQFP 0.8mm pitch package according to
> Atmel's summary PDF; the Farnell UK price seems reasonable at qty 10,

As it seems, ATmega8U2/TQFP is available at the same price. Any
specific reason to prefer one over the other?

> but given what you have said above, I don't know what price your
> suppliers would charge for them however.

	Well, Farnell may turn out to be suitable, but I'm yet to check
their shipping conditions.

Mark Borgerson

unread,
Dec 15, 2012, 12:26:13 AM12/15/12
to
In article <kafp1t$79j$1...@dont-email.me>,
clubley@remove_me.eisner.decus.org-Earth.UFP says...
I think FTDI still does that for their interface chips. I once used
IDs from a subset that they assigned to me. Now, I just use their
default IDs. That gets a standard USB serial driver loaded under
Windows and they have a config bit that will cause their direct
driver to be loaded so that you can use their API for direct
comms with their chips. Things got more complex under Win7x64
when it started requiring signed drivers. Just changing IDs in
the configuration text files caused the driver to be rejected as
unsigned.

I've used the FT245RL FIFO-to-USB chip in a number of designs.
If you have about 12 free bits---ideally with 8 of them in
a single port---you can add USB without having to worry about
any of the USB details. With an MSP430 at 8MHZ and a pretty
simple Windows host program I can transfer data at about
200KBytes/second.
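
For what it's worth, the handshake boils down to a couple of bit-banged
strobes; a rough sketch follows (the GPIO helpers and pin names are
hypothetical, and the exact strobe polarities and timing should be
checked against the FT245R data sheet):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical GPIO helpers -- map onto the MCU's port registers. */
    extern bool    pin_read(int pin);
    extern void    pin_write(int pin, bool level);
    extern uint8_t bus_read(void);        /* read  D0..D7 */
    extern void    bus_write(uint8_t v);  /* drive D0..D7 */

    enum { PIN_RXF, PIN_TXE, PIN_RD, PIN_WR };   /* placeholder pin IDs */

    /* Fetch one byte from the receive FIFO, if one is waiting. */
    static bool ft245_get(uint8_t *out)
    {
        if (pin_read(PIN_RXF))        /* RXF# high: FIFO empty         */
            return false;
        pin_write(PIN_RD, false);     /* RD# low drives data onto bus  */
        *out = bus_read();
        pin_write(PIN_RD, true);
        return true;
    }

    /* Push one byte into the transmit FIFO, if there is room. */
    static bool ft245_put(uint8_t v)
    {
        if (pin_read(PIN_TXE))        /* TXE# high: FIFO full          */
            return false;
        bus_write(v);
        pin_write(PIN_WR, true);      /* byte latched on WR high->low  */
        pin_write(PIN_WR, false);
        return true;
    }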


If you need higher speed transfers, you can move up to the
UM-232H. With that module connected to an STM32F2XX with a
hardware SDIO port, I can upload data from the SD card
at 4-6MB/second.

Mark Borgerson


Ivan Shmakov

unread,
Dec 16, 2012, 8:48:03 AM12/16/12
to
>>>>> Paul <pa...@pcserviceselectronics.co.uk> writes:
>>>>> In article <86vcc4o...@gray.siamics.net>, onei...@gmail.com says...
>>>>> Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> writes:

[...]

>>> Your previous postings imply that you are doing hobbyist type work
>>> and you don't say what packaging you need the device in so as much
>>> as it pains me :-), I am going to point you to the PIC18F range if
>>> you want it in PDIP.

>> My guess is that anything with 0.8 mm or larger spacing will be just
>> fine. 0.5 mm pitch feels a bit too fine to tackle for an amateur,
>> but I'm yet to try it myself.

> Try getting a Schmart Board then to mount it very easily.

> Look here http://www.schmartboard.com/

Indeed, I've ordered some LQFP 48 to DIP break-out boards from
http://dipmicro.com/ (I guess I'd need to wait for quite some
time for delivery, since they're on the other side of the Earth,
literally), not to mention that I have a couple of home-made SO
prototyping boards, each with a TQFP 32 landing space (the
design that could certainly be improved.)

Yet, as I've said before, my intent is to eventually publish my
designs for the others to use (and improve), and I'm still in
doubt that mounting a 0.5 mm pitch IC would be an easy thing for
a J. Random Amateur.

But well, perhaps I'm wrong on that.

Ivan Shmakov

unread,
Dec 16, 2012, 9:00:41 AM12/16/12
to
>>>>> Jon Kirwan <jo...@infinitefactors.org> writes:
>>>>> On Fri, 14 Dec 2012 09:18:00 -0800, Rob Gaddi wrote:
>>>>> On Fri, 14 Dec 2012 23:21:11 +0700 Ivan Shmakov wrote:

[...]

>>> Unfortunately, I'm yet to find a really cheap MCU (8-, 32-, or
>>> perhaps even 16-bit) with an on-chip USB. (V-USB doesn't seem to
>>> fit well, for its CDC-ACM capability is necessarily a hack.) The
>>> best I've found so far are some STM32's for under $4. I'd like to
>>> see if there could be anything else at half that price.

>> Half that price is easy, buy them at kilounit quantities. Digikey's
>> got the LPC1342, 250p at $2.57, and 1000p at $1.95. If you need
>> fewer pieces than that, then at the end of the day you're just not
>> talking about much money as compared to the other engineering costs.

> To Rob: The title of the thread includes the word "amateur," so
> probably not kilounit qty. More likely, it's more about posting up a
> web page on some completed project or another.

The point is that if I'd be able to get in touch with the fellow
amateurs in my vicinity, then we'd probably "waste" a few dozens
of chips together, in relatively short time. 1000 PCS of any
given IC would most probably be beyond our demands.

> To OP: For a cheap hobbyist one-off with USB connection I'll probably
> just grab an MSP430 LaunchPad off the shelf if the application idea
> fits. It's already got connectors for a daughterboard, the cpu is
> socketed, comes with two cpus,

Somehow, I was unable to find out what exactly comes with this
board? But given the price, it indeed looks like a nice thing
to have.

> a 32kHz xtal, two different colored LEDs, two different pushbuttons,
> a USB cable, and uses RS232 between the target and the host via USB.

	Does that mean CDC ACM? Or that the board includes a USB to
Serial (UART) converter (which is TUSB3410, I guess)?

> It is $4.30, good tools are available, and most anyone can easily get
> one.

... But what surprises me is that while one can get this one for
under $5, the TUSB3410 chip alone costs over $6.

> You still write simple rs232 drivers that way, as well.

Jon Kirwan

unread,
Dec 16, 2012, 4:46:10 PM12/16/12
to
On Sun, 16 Dec 2012 21:00:41 +0700, Ivan Shmakov
<onei...@gmail.com> wrote:

>>>>>> Jon Kirwan <jo...@infinitefactors.org> writes:
>>>>>> On Fri, 14 Dec 2012 09:18:00 -0800, Rob Gaddi wrote:
>>>>>> On Fri, 14 Dec 2012 23:21:11 +0700 Ivan Shmakov wrote:
>
>[...]
>
> >>> Unfortunately, I'm yet to find a really cheap MCU (8-, 32-, or
> >>> perhaps even 16-bit) with an on-chip USB. (V-USB doesn't seem to
> >>> fit well, for its CDC-ACM capability is necessarily a hack.) The
> >>> best I've found so far are some STM32's for under $4. I'd like to
> >>> see if there could be anything else at half that price.
>
> >> Half that price is easy, buy them at kilounit quantities. Digikey's
> >> got the LPC1342, 250p at $2.57, and 1000p at $1.95. If you need
> >> fewer pieces than that, then at the end of the day you're just not
> >> talking about much money as compared to the other engineering costs.
>
> > To Rob: The title of the thread includes the word "amateur," so
> > probably not kilounit qty. More likely, it's more about posting up a
> > web page on some completed project or another.
>
> The point is that if I'd be able to get in touch with the fellow
> amateurs in my vicinity, then we'd probably "waste" a few dozens
> of chips together, in relatively short time. 1000 PCS of any
> given IC would most probably be beyond our demands.

yeah, that's about where my mind was at after reading what
you were saying. Rob is a sharp person, but I didn't think
Rob had noticed your context -- or, if he had, had discounted
it for other reasons when he wrote, "If you need fewer
pieces than that, then at the end of the day you're just not
talking about much money as compared to the other engineering
costs." It's spoken from a professional perspective, but one
that has long since forgotten their roots and/or hobbyist
perspectives.

> > To OP: For a cheap hobbyist one-off with USB connection I'll probably
> > just grab an MSP430 LaunchPad off the shelf if the application idea
> > fits. It's already got connectors for a daughterboard, the cpu is
> > socketed, comes with two cpus,
>
> Somehow, I was unable to find out what exactly comes with this
> board? But given the price, it indeed looks like a nice thing
> to have.

You get:
• Nice box
• Paperwork
• ½ m USB cable
• Microcrystal MS3V-T1R tuning fork 32.768kHz crystal
• MSP430G2231 cpu, DIP
• MSP430G2211 cpu, DIP
• LaunchPad board, which includes a USB to host section
and a developer section with socket for cpu, two pushbuttons
for user use (as well as reset), two leds, one green, one
red, jumpers for enabling and disabling features, a special
interface for using the board to program target boards as
well (6-pin EZ430 connector), a power connector for your use
to run the board, and of course a USB connector
• four headers for daughter card extensions, 2 male-female
and 2 male-male

You can look for the documents, SLAU318 and the "Student
guide and Lab manual" for the LaunchPad (I don't know the
number for that one.)

There are two pins (dedicated if you want, or reusable for
any reason you want by pulling two jumpers) used for RxD and
TxD that talk with the USB section of the board. That section
then communicates with the host by setting up a HID virtual
COM port, automatically. Any serial port software can talk
with it.

> > a 32kHz xtal, two different colored LEDs, two different pushbuttons,
> > a USB cable, and uses RS232 between the target and the host via USB.
>
> Does that mean CDC ACM? Or that the board includes an USB to
> Serial (UART) converter (which is TUSB3410, I guess)?

Not sure what CDC ACM means. I'm sorry. But yes, I think it
uses the TUSB3410 chip plus another dedicated MSP430 as well
to run it and communicate with your target processor.

> > It is $4.30, good tools are available, and most anyone can easily get
> > one.
>
> ... But what surprises me is that while one can get this one for
> under $5, the TUSB3410 chip alone costs over $6.

Yeah. I know. And you can get completed AD9850 boards from
ebay for way less than you can buy an AD9850 chip, too.
Regardless, that's one reason this board is such a steal of a
deal. I've used it as the base of several successful projects
-- the first of which was to snap onto a parallel-port printer
output of a device and convert the output into a text file
on the PC, via the USB interface. Used both cpus for that
one, one to handle the parallel port interface comms and the
other to handle the rs-232 section, and using a versatile
protocol in between the two for synchronization and
buffering.

Jon

Simon Clubley

unread,
Dec 16, 2012, 7:29:21 PM12/16/12
to
On 2012-12-14, Ivan Shmakov <onei...@gmail.com> wrote:
>>>>>> Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> writes:
>
> > The AT90USB162-16AU comes in a TQFP 0.8mm pitch package according to
> > Atmel's summary PDF; the Farnell UK price seems reasonable at qty 10,
>
> As it seems, ATmega8U2/TQFP is available at the same price. Any
> specific reason to prefer one over the other?
>

Sorry, but I have no experience with the USB capable AVRs. I only know
about them because I have seen them mentioned while working with other
AVRs.

Simon Clubley

unread,
Dec 16, 2012, 7:53:21 PM12/16/12
to
On 2012-12-16, Jon Kirwan <jo...@infinitefactors.org> wrote:
> On Sun, 16 Dec 2012 21:00:41 +0700, Ivan Shmakov
><onei...@gmail.com> wrote:
>>
>> Does that mean CDC ACM? Or that the board includes an USB to
>> Serial (UART) converter (which is TUSB3410, I guess)?
>
> Not sure what CDC ACM means. I'm sorry. But yes, I think it
> uses the TUSB3410 chip plus another dedicated MSP430 as well
> to run it and communicate with your target processor.
>

CDC ACM is the standards-based way to implement a serial port over USB.

Many vendors (ie: FTDI and Prolific [PL-2303]) implement serial ports
over USB by using their own protocols which (a) require you to install a
driver on the host PC and (b) for which the specifications are not freely
available unless you sign a NDA.

However, the USB specifications include the Communications Device Class
(CDC) which has implemented within it the Abstract Control Model (ACM)
and which can be used to implement a serial port. Its main limitation
(for me) is that there's no way to signal CTS changes from the device
attached to the USB serial port back to the host PC.

The way Microchip (for example) appear to handle the problem in their
MCP2200 USB serial port is to implement the RTS/CTS handling on the
MCP2200 itself instead of passing the CTS signal back to the host PC.
They appear to have an out-of-band mechanism by which a host PC
configuration utility can turn on and off RTS/CTS flow control handling
on the MCP2200.

I also have implemented a CDC ACM driver within my own USB device stack
and when I add RTS/CTS flow control to it, I also plan to handle the
flow control on the USB device itself, so in practice, I don't see this
as being a major issue for at least some usage patterns.

Jon Kirwan

unread,
Dec 16, 2012, 8:46:33 PM12/16/12
to
I think I may like to discuss this subject at more length,
sometime soon. I'm actually interested in the details (data
structures that must be used, what they are filled with, the
order of communications, etc.)

I'm currently fairly ignorant, except from glancing through
the 1000 page USB 2.0 spec -- which to my reading does not do
any tutorials on the actual data structures that a slave
device must use, nor much detail on the Windows drivers (or
Linux, freeBSD, etc) side. I know there is code out there to
read, but I'd like to get an overview of subsets, one at a
time, to consider and then master before moving on. Mostly,
it seems an all-or-nothing approach in the USB 2.0 manual --
which makes taking the time quite daunting to start out. And
I consider Jan Axelrod's book almost useless, as well.

If you would consider the idea, perhaps in a few month's time
I may want to contact you. I'd be pretty dumb, to start,
except that I have 35 years embedded programming practice --
so this would mostly be about me reading and testing and then
asking questions when I get stuck on something. (Which, at
first, I expect to be often. sadly.)

Or perhaps there is a great forum for someone focused on
mastering various details but starting out largely ignorant
of the terms and the domain space?

Jon

Simon Clubley

unread,
Dec 17, 2012, 8:27:53 AM12/17/12
to
On 2012-12-16, Jon Kirwan <jo...@infinitefactors.org> wrote:
>
> I think I may like to discuss this subject at more length,
> sometime soon. I'm actually interested in the details (data
> structures that must be used, what they are filled with, the
> order of communications, etc.)
>

Assuming I have the time available when you want to discuss this,
I don't have a problem with this. However, I think the discussion
should be in comp.arch.embedded because it could benefit others
as well. Another advantage would be that any opinions or comments
I have would also be reviewable by others here.

> I'm currently fairly ignorant, excect from glancing through
> the 1000 page USB 2.0 spec -- which to my reading does not do
> any tutorials on the actual data structures that a slave
> device must use, nor much detail on the Windows drivers (or
> Linux, freeBSD, etc) side. I know there is code out there to
> read, but I'd like to get an overview of subsets, one at a
> time, to consider and then master before moving on. Mostly,
> it seems an all-or-nothing approach in the USB 2.0 manual --
> which makes taking the time quite daunting to start out. And
> I consider Jan Axelrod's book almost useless, as well.
>

Don't focus on the size of the USB 2.0 spec; you only have to read
a few chapters of it in order to implement a USB device stack as
the spec covers physical specifications as well as hub/host requirements.
You don't need to read much of that in order to implement a device only
stack.

I found I spent the vast majority of my time in Chapters 8 and 9 along
with some dipping into Chapter 5. However, I already knew much of what
is in Chapter 5 when I started, so you may spend more time there than
I did.

I also very strongly recommend you start out with a full speed device
instead of a high speed one. This means you can ignore the high speed
specific parts of the protocol until later.

Your jumping on point for following the USB protocol at data structure
level is the SETUP packet which your device will receive from the host
as part of any operation on the control endpoint (endpoint 0). This is
in table 9.2 of section 9.3. Look at this table while reading 8.5.3
(Control Transfers) which shows how this packet fits into the scheme of
things.

Higher level protocols (such as the CDC layer) have their own data
structures and are documented in their own specifications.

When you plug your USB device into the host, the USB device controller
will be reset. When you have enabled endpoint 0 and have come out of
reset, the next thing which will happen is that you will receive a
SETUP packet from the host on endpoint 0.

Your control endpoint logic is geared around interpreting these requests
and responding to them as required. One of these requests will be for the
device descriptor which, along with the configuration/interface/endpoint
descriptors, describes your device's capabilities.
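
In practice that logic ends up being little more than a switch on
bRequest; a skeletal sketch, with the SETUP packet fields passed in
directly and all the handler functions hypothetical (only a few of the
standard requests are shown):

    #include <stdint.h>

    /* Hypothetical hooks into the rest of the device stack. */
    extern void send_descriptor(uint8_t type, uint8_t index, uint16_t len);
    extern void set_device_address(uint8_t addr);
    extern void select_configuration(uint8_t cfg);
    extern void stall_endpoint0(void);

    static void handle_setup(uint8_t bRequest, uint16_t wValue,
                             uint16_t wIndex, uint16_t wLength)
    {
        (void)wIndex;
        switch (bRequest) {
        case 0x06:                               /* GET_DESCRIPTOR    */
            send_descriptor(wValue >> 8, wValue & 0xff, wLength);
            break;
        case 0x05:                               /* SET_ADDRESS       */
            set_device_address(wValue & 0x7f);
            break;
        case 0x09:                               /* SET_CONFIGURATION */
            select_configuration(wValue & 0xff);
            break;
        default:
            stall_endpoint0();                   /* unsupported       */
            break;
        }
    }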

I recommend you start out by making the device descriptor a vendor specific
descriptor (table 9.8, bDeviceClass = 0xff) so that the host does not
recognise it. In this way you don't have to worry about higher level
protocols throwing requests at your device stack (at least on Linux,
I don't know about Windows), but you will still go through the process
of being asked for your descriptors, being assigned an address by the host
and being told which configuration to use.
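
As an illustration of that first step, a vendor-specific full-speed
device descriptor could look like this (the VID/PID values are
placeholders, not assigned numbers):

    #include <stdint.h>

    /* 18-byte device descriptor, bDeviceClass = 0xFF (vendor specific). */
    static const uint8_t device_descriptor[18] = {
        18,          /* bLength                                 */
        0x01,        /* bDescriptorType: DEVICE                 */
        0x00, 0x02,  /* bcdUSB: 2.00, little-endian             */
        0xFF,        /* bDeviceClass: vendor specific           */
        0x00,        /* bDeviceSubClass                         */
        0x00,        /* bDeviceProtocol                         */
        64,          /* bMaxPacketSize0 for a full-speed device */
        0x34, 0x12,  /* idVendor  (placeholder 0x1234)          */
        0xCD, 0xAB,  /* idProduct (placeholder 0xABCD)          */
        0x00, 0x01,  /* bcdDevice: 1.00                         */
        1, 2, 3,     /* iManufacturer, iProduct, iSerialNumber  */
        1            /* bNumConfigurations                      */
    };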

This will allow you to start out by developing and testing the lower
levels of the USB stack without having to worry about the higher level
function layers just yet.

Be warned however, that if you want to write a USB device stack, there will
be a lot of work involved in understanding the details. Given the multiple
layers to the protocol as well as its scope, this cannot be avoided.

Mel Wilson

unread,
Dec 17, 2012, 10:32:24 AM12/17/12
to
Simon Clubley wrote:

> On 2012-12-16, Jon Kirwan <jo...@infinitefactors.org> wrote:
>>
>> I think I may like to discuss this subject at more length,
>> sometime soon. I'm actually interested in the details (data
>> structures that must be used, what they are filled with, the
>> order of communications, etc.)
>>
>
> Assuming I have the time available when you want to discuss this,
> I don't have a problem with this. However, I think the discussion
> should be in comp.arch.embedded because it could benefit others
> as well. Another advantage would be that any opinions or comments
> I have would also be reviewable by others here.

[ ... ]
Thanks for these tips. I know USB is not hard because the guy working next
to me did it, based on the implementation for PIC32. Because he did it, I
didn't have to. Now on another project I have to.

Mel.

m...@linnix.info-for.us

unread,
Dec 17, 2012, 2:05:56 PM12/17/12
to
On Sunday, December 16, 2012 4:29:21 PM UTC-8, Simon Clubley wrote:
> On 2012-12-14, Ivan Shmakov <onei...@gmail.com> wrote:
>
> >>>>>> Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> writes:
>
> >
>
> > > The AT90USB162-16AU comes in a TQFP 0.8mm pitch package according to
>
> > > Atmel's summary PDF; the Farnell UK price seems reasonable at qty 10,
>
> >
>
> > As it seems, ATmega8U2/TQFP is available at the same price. Any
>
> > specific reason to prefer one over the other?
>

Yes, I prefer to buy the newer ATmega8U2, but still have to use up the old AT90USB162 stock, unless the older chips are much cheaper.

m...@linnix.info-for.us

unread,
Dec 17, 2012, 2:13:53 PM12/17/12
to
There are Atmel and LUFA USB stacks for AVR and Microchip stacks for PICs. You don't have to write them, just use them.

Jon Kirwan

unread,
Dec 17, 2012, 3:03:22 PM12/17/12
to
Thanks, Simon. I agree with you about posting on here about
this subject when the time comes. I will air my ignorance and
take all the abuse, if any, with good nature. My interest is
to learn.

Yes, I'd like to actually do all of the detailed work
involved of writing everything. I'm not interested in just
using someone's library, except perhaps as another technical
reference "book" I can study along with the spec. There is
nothing quite as good at teaching all of the details. Once I
know them, I can then __intelligently__ make decisions about
what some minimal implementation can be for some application.
There are times when I may not be able to afford a "library"
and need to know if I can do with less. Just using a library
will never enable that kind of knowledge.

And thanks for the above writing, as well. That will give me
a leg up, as well.

Jon

Jon Kirwan

unread,
Dec 17, 2012, 3:17:55 PM12/17/12
to
On Mon, 17 Dec 2012 12:03:22 -0800, I wrote:

>Thanks, Simon. I agree with you about posting on here about
>this subject when the time comes. I will air my ignorance and
>take all the abuse, if any, with good nature. My interest is
>to learn.
>
>Yes, I'd like to actually do all of the detailed work
>involved of writing everything. I'm not interested in just
>using someone's library, except perhaps as another technical
>reference "book" I can study along with the spec. There is
>nothing quite as good at teaching all of the details. Once I
>know them, I can then __intelligently__ make decisions about
>what some minimal implementation can be for some application.
>There are times when I may not be able to afford a "library"
>and need to know if I can do with less. Just using a library
>will never enable that kind of knowledge.
>
>And thanks for the above writing, as well. That will give me
>a leg up, as well.

Oh, and although I know that some of this can be done using
HID drivers written already for linux or Microsoft's O/S, I
am also interested in the work involved on the host side. But
I'll start on the slave side. It is just that I may want to
write a host side implementation for embedded use, too. There
are those occasional times when that would be a "nice to
have" for hobby work so that I could attach keyboards, for
example.

Jon

Christopher Head

unread,
Dec 18, 2012, 2:36:48 AM12/18/12
to
On Mon, 17 Dec 2012 12:17:55 -0800
Jon Kirwan <jo...@infinitefactors.org> wrote:

> Oh, and although I know that some of this can be done using
> HID drivers written already for linux or Microsoft's O/S, I
> am also interested in the work involved on the host side. But
> I'll start on the slave side. It is just that I may want to
> write a host side implementation for embedded use, too. There
> are those occasional times when that would be a "nice to
> have" for hobby work so that I could attach keyboards, for
> example.

I definitely recommend NOT trying to solve both halves of the problem
(device side and host side) at the same time! Start out building a
device. Get it to enumerate and show up in lsusb (or the equivalent for
a different OS). Then start talking to it with user-space host-side
software via libusb. Then decide whether you want to move to
implementing a standard device class if it makes sense for your device,
write a kernel-mode OS driver, or just stick with libusb.
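
The user-space side really can stay tiny; a sketch against libusb-1.0
(the VID/PID and the bulk IN endpoint address 0x81 are placeholders for
whatever the device actually reports):

    #include <stdio.h>
    #include <libusb-1.0/libusb.h>

    int main(void)
    {
        if (libusb_init(NULL) != 0)
            return 1;

        libusb_device_handle *h =
            libusb_open_device_with_vid_pid(NULL, 0x1234, 0xabcd);
        if (!h) { fprintf(stderr, "device not found\n"); return 1; }

        libusb_claim_interface(h, 0);

        unsigned char buf[64];
        int got = 0;
        /* Read up to 64 bytes from bulk IN endpoint 0x81, 1 s timeout. */
        int rc = libusb_bulk_transfer(h, 0x81, buf, sizeof buf, &got, 1000);
        printf("rc=%d, got %d bytes\n", rc, got);

        libusb_release_interface(h, 0);
        libusb_close(h);
        libusb_exit(NULL);
        return 0;
    }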

Chris

Jon Kirwan

unread,
Dec 18, 2012, 2:54:00 AM12/18/12
to
On Mon, 17 Dec 2012 23:36:48 -0800, Christopher Head
<ch...@is.invalid> wrote:

>On Mon, 17 Dec 2012 12:17:55 -0800
>Jon Kirwan <jo...@infinitefactors.org> wrote:
>
>> Oh, and although I know that some of this can be done using
>> HID drivers written already for linux or Microsoft's O/S, I
>> am also interested in the work involved on the host side. But
>> I'll start on the slave side. It is just that I may want to
>> write a host side implementation for embedded use, too. There
>> are those occasional times when that would be a "nice to
>> have" for hobby work so that I could attach keyboards, for
>> example.
>
>I definitely recommend NOT trying to solve both halves of the problem
>(device side and host side) at the same time! Start out building a
>device. Get it to enumerate and show up in lsusb (or the equivalent for
>a different OS).

Yeah, I think I know that much, ignorant as I am otherwise
about USB. ;)

>Then start talking to it with user-space host-side
>software via libusb.

Hmm.

>Then decide whether you want to move to
>implementing a standard device class if it makes sense for your device,
>write a kernel-mode OS driver, or just stick with libusb.
>
>Chris

Doesn't sound like where I'd like to be. I am thinking, say,
about hooking up a standard, cheap, PC keyboard to a ...
well, let's say an MSP430 with ... let's give it 16k flash
and 512 bytes sram total. Hmm. There are six different ones
with that spec. Three at $1 and three at $2.

Any chance? It is just a keyboard, after all.

Jon

Simon Clubley

unread,
Dec 18, 2012, 8:06:04 AM12/18/12
to
On 2012-12-18, Jon Kirwan <jo...@infinitefactors.org> wrote:
>
> Doesn't sound like where I'd like to be. I am thinking, say,
> about hooking up a standard, cheap, PC keyboard to a ...
> well, let's say an MSP430 with ... let's give it 16k flash
> and 512 bytes sram total. Hmm. There are six different ones
> with that spec. Three at $1 and three at $2.
>
> Any chance? It is just a keyboard, after all.
>

I have not done anything with the HID layer yet so I don't have a
feeling for what is involved there. However, Microchip have fitted
both a CDC layer and a HID layer (to control GPIO lines, not a
keyboard) into its MCP2200 device. The MCP2200 is widely reported
to just be a PIC18F14K50 with custom Microchip firmware.

I don't know how the code density of the PIC18 compares with the MSP430
but that should give you a feeling for what is possible on the PIC18.

Note that this does not take into account the additional code needed to
poll the keyboard and also note the PIC18F14K50, when you add in the 256
bytes of USB RAM in bank 2, has 768 bytes of SRAM in total compared to
the 512 bytes you mention above for your MSP430 device.

Here is Microchip's PIC18F14K50 page so you can compare it to your
MSP430 device:

http://www.microchip.com/wwwproducts/Devices.aspx?dDocName=en533924

I am not familiar with the MSP430 range. Is the USB device controller
in the MSP430 you are looking at a high speed, full speed or a low
speed controller ?

I see TI have a PDF writeup about creating a USB keyboard using a full
speed MSP430 device connected to a keyboard:

http://www.ti.com/general/docs/lit/getliterature.tsp?literatureNumber=slaa514&fileType=pdf

j.m.gr...@gmail.com

unread,
Dec 18, 2012, 5:04:30 PM12/18/12
to
On Tuesday, December 18, 2012 8:54:00 PM UTC+13, Jon Kirwan wrote:
>
> Doesn't sound like where I'd like to be. I am thinking, say,
> about hooking up a standard, cheap, PC keyboard to a ...
> well, let's say an MSP430 with ... let's give it 16k flash
> and 512 bytes sram total. Hmm. There are six different ones
> with that spec. Three at $1 and three at $2.
>
> Any chance? It is just a keyboard, after all.

Hi Jon,

Sure, just wiggle the lines for PS/2 mode, which you can hope is still there somewhere in the corner of the silicon ;)

Otherwise, you need a USB HOST capable uC, and they are still not 'bottom end' parts.

Cheapest mini host is OTG, and Digikey says PIC32 is lowest cost stocked version.
See also
http://ww1.microchip.com/downloads/en/DeviceDoc/USB_OTG_ver_1.0.pdf

-jg

Jon Kirwan

unread,
Dec 18, 2012, 5:13:46 PM12/18/12
to
On Tue, 18 Dec 2012 14:04:30 -0800 (PST),
j.m.gr...@gmail.com wrote:

>On Tuesday, December 18, 2012 8:54:00 PM UTC+13, Jon Kirwan wrote:
>>
>> Doesn't sound like where I'd like to be. I am thinking, say,
>> about hooking up a standard, cheap, PC keyboard to a ...
>> well, let's say an MSP430 with ... let's give it 16k flash
>> and 512 bytes sram total. Hmm. There are six different ones
>> with that spec. Three at $1 and three at $2.
>>
>> Any chance? It is just a keyboard, after all.
>
>Hi Jon,
>
> Sure, just wiggle the lines for PS/2 mode, which you can
> hope is still there somewhere in the corner of the silicon
> ;)

Presume it is a new keyboard that doesn't support PS/2 mode.

>Otherwise, you need a USB HOST capable uC, and they are still not 'bottom end' parts.

But I think, if all I need is a stripped down host support
for a keyboard then perhaps I can fit it. The question is
about knowledge to know for sure.

>Cheapest mini host is OTG, and Digikey says PIC32 is lowest cost stocked version.
>See also
>http://ww1.microchip.com/downloads/en/DeviceDoc/USB_OTG_ver_1.0.pdf

Nah. Not going there for just adding a keyboard. I'd find
another way for some of the cheap stuff I'm thinking about.

I'm interested in seeing if I can get to the point where I
can code up a keyboard-minimal version of USB hosting on a
small proc. May be impossible. But by the time I can say,
from personal knowledge, that it is impossible I will know a
lot more about USB. ;)

Jon

Christopher Head

unread,
Dec 19, 2012, 5:37:52 AM12/19/12
to
On Tue, 18 Dec 2012 14:13:46 -0800
Jon Kirwan <jo...@infinitefactors.org> wrote:

> But I think, if all I need is a stripped down host support
> for a keyboard then perhaps I can fit it. The question is
> about knowledge to know for sure.

Code-wise, supporting just keyboards is certainly easier than supporting
a wider variety of devices, and if you use the HID boot protocol, even
easier than supporting the full variety of *input* devices. The boot
protocol is a simplified form of the HID protocol intended for use by
things like embedded systems and BIOS configuration menus; keyboards
(and I think mice) support it.
However, you will still need "full" host support at the hardware level.
This is a property of USB: the hardware for a host is somewhat
different to (and bigger than) the hardware for a device. It’s not
physically possible to take a chip that does device mode only and hack
around it to do any kind of host mode. I’m afraid I’m not familiar with
the MSP430 series, but I do know that the STM32F4 series does host
mode, as do some of the higher-end PICs (which jg mentioned). You sound
like you’re looking for something pretty low-end, though, and I agree
with jg, you’re unlikely to find such a chip.
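
For concreteness: the boot-protocol keyboard input report is a fixed
8 bytes, so the parsing on your side is trivial. A rough C sketch --
the struct and field names are mine, not from any particular stack:

    #include <stdint.h>

    /* HID boot-protocol keyboard input report: always 8 bytes. */
    struct boot_kbd_report {
        uint8_t modifiers;  /* bit 0 = LeftCtrl ... bit 7 = RightGUI */
        uint8_t reserved;   /* always 0 */
        uint8_t keys[6];    /* usage codes of keys currently held;
                               0x00 = slot empty */
    };

Once the keyboard is in boot protocol, the host only ever reads 8-byte
packets like that from the interrupt IN endpoint.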

You are right that my advice earlier about using libusb sounded
completely wrong. I thought you were trying to build a USB device, not
a USB host. They’re very different problems, so ignore my earlier
message!

Chris

Simon Clubley

unread,
Dec 19, 2012, 7:59:12 AM12/19/12
to
On 2012-12-19, Christopher Head <ch...@is.invalid> wrote:
>
> You are right that my advice earlier about using libusb sounded
> completely wrong. I thought you were trying to build a USB device, not
> a USB host. They're very different problems, so ignore my earlier
> message!
>

That's exactly what I thought as well and it's what my responses have
been assuming. When Jon then talked about adding a keyboard, I thought
he was planning to learn about USB by combining a PS/2 (or similar)
keyboard with a USB device capable MCU to create a USB based keyboard.
Now I don't know. :-)

Jon, are you planning to connect a PS/2 type keyboard to a MSP430 and
then create a USB keyboard for use with a host PC or are you wanting to
take a USB keyboard and connect it to a MSP430 with the final destination
for the keystrokes being a program running on the MSP430 itself ?

If it's the former, you only need an MCU with USB device support and there
are plenty of those. If it's the latter, then as already mentioned you
are going to need an MCU with USB host capability, and the most hobbyist-
or student-friendly of those (if they are building their own circuits)
is probably the PIC32MX.

I do know there's an AVR-based software-only USB stack, but it is USB device
only and only supports low speed connections to a host PC; it does not
have any host support.

Rocky

unread,
Dec 19, 2012, 1:19:56 PM12/19/12
to
On Wednesday, December 19, 2012 2:59:12 PM UTC+2, Simon Clubley wrote:

> I do know there's a AVR based software-only USB stack but it is USB device
> only and only supports low speed connectiosn to a host PC; it does not
> have any host support.

FTDI also do a host device - AFAIK with its own processor etc. - "VINCULUM" was what it used to be called.

Jon Kirwan

unread,
Dec 19, 2012, 1:39:30 PM12/19/12
to
I know that the hardware is different. That's not the issue
for me. What's important is the software behind it and how
complex it must be. There already are several project web
sites I've visited which implement USB, monitoring the Sync
Pattern in software and doing the NRZI encoding in software
using standard I/O pins coupled to the required hardware for
attaching to USB host ports. No hardware support is used if I
understand what I've read (I may not have.)
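
As far as I understand the bit coding so far, it's simple enough on
paper: NRZI just means a 0 is a change in line state and a 1 is no
change, plus a stuffed 0 after every run of six 1s that the receiver
has to throw away. A rough C sketch of the decode step (assuming some
other routine has already sampled one line state per bit time -- the
names are mine, nothing from an existing stack):

    #include <stdint.h>

    /* Decode NRZI samples and drop stuffed bits.
     * samples: raw line states, one entry per bit time
     * out:     decoded bits, packed LSB first
     * returns: number of decoded bits
     */
    static int nrzi_decode(const uint8_t *samples, int nsamples,
                           uint8_t *out)
    {
        int ones = 0, nbits = 0;
        uint8_t prev = samples[0];

        for (int i = 1; i < nsamples; i++) {
            uint8_t bit = (samples[i] == prev) ? 1 : 0; /* no change = 1 */
            prev = samples[i];

            if (ones == 6) {    /* must be the stuffed 0; a 1 here
                                   would be a bit-stuff error */
                ones = 0;
                continue;
            }
            ones = bit ? ones + 1 : 0;

            if (bit)
                out[nbits >> 3] |= (uint8_t)(1 << (nbits & 7));
            else
                out[nbits >> 3] &= (uint8_t)~(1 << (nbits & 7));
            nbits++;
        }
        return nbits;
    }

Of course, in a real software-only implementation this can't be a
leisurely loop over a buffer; at low speed there are only a handful of
CPU cycles per bit, so the projects I've seen fold it into hand-tuned
assembly.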

I do NOT know if similar things can be done for a host-side
implementation to support ONLY a USB-only keyboard, or not. I
mean a keyboard that does NOT support PS/2 mode. (I have
complete docs on doing that.) Not all keyboards support a
connection via either USB or PS/2.

I already know how to talk with PS/2 (I have the original
docs from IBM still and have used that information before --
did you know that the BIOS supports downloading code via the
keyboard interface?)

I'm curious about the idea of supporting USB-only keyboards
with processors which do not have any USB peripheral support
at all. It's likely that it isn't worth the trouble in a real
product, but I'm curious partly because some of the products
I've worked on before succeeded because they did things with
better margins, less power, and smaller size than the
competition using processors that were less obvious choices.

Jon

Jon Kirwan

unread,
Dec 19, 2012, 1:43:01 PM12/19/12
to
On Wed, 19 Dec 2012 12:59:12 +0000 (UTC), Simon Clubley
<clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:

>On 2012-12-19, Christopher Head <ch...@is.invalid> wrote:
>>
>> You are right that my advice earlier about using libusb sounded
>> completely wrong. I thought you were trying to build a USB device, not
>> a USB host. They?re very different problems, so ignore my earlier
>> message!
>>
>
>That's exactly what I thought as well and it's what my responses have
>been assuming. When Jon then talked about adding a keyboard, I thought
>he was planning to learn about USB by combining a PS/2 (or similar)
>keyboard with a USB device capable MCU to create a USB based keyboard.
>Now I don't know. :-)

See my response to Christopher.

>Jon, are you planning to connect a PS/2 type keyboard to a MSP430 and
>then create a USB keyboard for use with a host PC

No.

>or are you wanting to
>take a USB keyboard and connect it to a MSP430 with the final destination
>for the keystrokes being a program running on the MSP430 itself ?

Yes.

>If it's the former, you only need an MCU with USB device support and there
>are plenty of those. If it's the latter, then as already mentioned you
>are going to need an MCU with USB host capability, and the most hobbyist-
>or student-friendly of those (if they are building their own circuits)
>is probably the PIC32MX.

I love the PIC32 processor family and am already ramping up
skills there. But it is overkill for some other applications
and would drive up cost, size, and power consumption to the
point of killing the idea.

>I do know there's an AVR-based software-only USB stack, but it is USB device
>only and only supports low speed connections to a host PC; it does not
>have any host support.

Yes, and there is one for a PIC16:

http://www.lendlocus.com/?q=16fusb

It's been mentioned here before. But both the AVR and PIC
projects merely point in a direction. I'm curious about the
more difficult question.

Jon

Jon Kirwan

unread,
Dec 19, 2012, 1:44:43 PM12/19/12
to
On Wed, 19 Dec 2012 10:39:30 -0800, I wrote:

>No hardware support is used if I
>understand what I've read (I may not have.)

I mean... "No peripheral hardware support within the cpu ..."

Jon

Christopher Head

unread,
Dec 19, 2012, 9:42:16 PM12/19/12
to
I guess you might be able to do something like the V-USB project (the
software-only device-side USB stack for AVR). Maybe you would want a
bit faster CPU (just because the host-side stuff is probably a bit more
complex—though it does have the advantage that a host doesn’t need to
always watch the wires for traffic the way a device does), and it would
not be USB compliant (because a compliant USB host must support at
least low speed *and* full speed, while a device only needs to support
one or the other, and you would probably only implement low speed), but
in the limited case of plugging in USB keyboards, it might work. At
least, you could probably get it to work with *most* keyboards.
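
To give an idea of what the steady-state work looks like once a
keyboard is enumerated (the helper names below are made up, purely
for illustration, not from any existing stack):

    /* Hypothetical main loop for one low-speed boot keyboard.
     * lowspeed_keepalive() would emit the low-speed keep-alive EOP,
     * in_transfer() would run one bit-banged IN/DATA/ACK exchange,
     * and ms_tick() is some 1 ms timer flag.
     */
    uint32_t frame = 0;
    uint8_t kbd_addr = 1;   /* address assigned during enumeration */

    for (;;) {
        if (ms_tick()) {
            lowspeed_keepalive();        /* once per 1 ms frame, so the
                                            keyboard never suspends */
            if (++frame % 10 == 0) {     /* poll roughly every 10 ms */
                uint8_t report[8];
                if (in_transfer(kbd_addr, 1 /* endpoint */,
                                report, sizeof report) == 8)
                    handle_report(report);
            }
        }
        /* the rest of each frame is free for the application */
    }

The 10 ms poll interval is just an assumption; a real keyboard
advertises its preferred interval (bInterval) in its endpoint
descriptor.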

Chris

Jon Kirwan

unread,
Dec 20, 2012, 12:05:58 AM12/20/12
to
Thanks, Chris. That doesn't say I can't do it, or that I
necessarily can. But maybe. At least the door remains open,
so far.

I appreciate your thoughts very much,
Jon

Simon Clubley

unread,
Dec 20, 2012, 9:13:11 AM12/20/12
to
On 2012-12-19, Jon Kirwan <jo...@infinitefactors.org> wrote:
> On Wed, 19 Dec 2012 18:42:16 -0800, Christopher Head
><ch...@is.invalid> wrote:
>
>>On Wed, 19 Dec 2012 10:44:43 -0800
>>Jon Kirwan <jo...@infinitefactors.org> wrote:
>>
>>> On Wed, 19 Dec 2012 10:39:30 -0800, I wrote:
>>>
>>> >No hardware support is used if I
>>> >understand what I've read (I may not have.)
>>>
>>> I mean... "No peripheral hardware support within the cpu ..."
>>>
>>> Jon
>>
>>I guess you might be able to do something like the V-USB project (the
>>software-only device-side USB stack for AVR). Maybe you would want a
>>bit faster CPU (just because the host-side stuff is probably a bit more
>>complex - though it does have the advantage that a host doesn't need to
>>always watch the wires for traffic the way a device does), and it would
>>not be USB compliant (because a compliant USB host must support at
>>least low speed *and* full speed, while a device only needs to support
>>one or the other, and you would probably only implement low speed), but
>>in the limited case of plugging in USB keyboards, it might work. At
>>least, you could probably get it to work with *most* keyboards.
>>
>>Chris
>
> Thanks, Chris. That doesn't say I can't do it, or that I
> necessarily can. But maybe. At least the door remains open,
> so far.
>

I guess you are going to be reading the full USB spec after all. :-)

I am not familiar with the physical signalling level as you don't need
that knowledge to write a USB device or host stack which uses a hardware
USB controller so I will comment on some other issues.

First, are the USB keyboards you can buy today still low-speed devices
or are they now full speed only devices ?

Given what I currently know, I think you can justify exploring further
the idea of a software only low speed USB host, but I don't see how you
can implement a full speed host in software on the types of MCUs you
are talking about without some hardware support.
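
A quick bit of arithmetic backs that up: low speed is 1.5 Mbit/s, so a
16 MHz MCU has 16 MHz / 1.5 Mbit/s = ~10.7 CPU cycles per bit, which
is tight but workable in carefully written assembly (that is roughly
the budget the software-only AVR device stack lives in). Full speed is
12 Mbit/s, which leaves only 16 MHz / 12 Mbit/s = ~1.3 cycles per bit,
so a software-only full speed implementation really is out of the
question on this class of MCU.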

I've also been thinking about what steps are _required_ from the device
enumeration sequence after it's attached to a host. I think it may be
possible to drop the reading of the descriptors; there's certainly nothing
in my device stack which _requires_ the descriptors to be read before the
next stage can complete.

The values in the descriptors (for example, the configuration index) are
required in later steps of the sequence, but if your host stack already
has prior knowledge of the values, I don't see why it would need to read
them from the keyboard's descriptors.

However, the other steps are all required. For example, during reset I
configure endpoint 0; setting the address is required before you can
set a configuration; and the set-configuration stage (at least in my stack)
is where you run down the endpoint descriptors and configure the
application-level (i.e. HID in your case) endpoints as required.
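
To make that concrete: if the host simply assumes "configuration
value 1, boot interface 0" -- typical for keyboards, but strictly
something you would normally confirm from the descriptors -- then the
control-transfer side of enumeration boils down to three standard
8-byte SETUP packets (the values are from the USB and HID specs; the
array names are just for illustration):

    #include <stdint.h>

    /* bmRequestType, bRequest, wValue, wIndex, wLength (LSB first) */
    static const uint8_t set_address[8] =
        { 0x00, 0x05, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00 }; /* address 1 */
    static const uint8_t set_configuration[8] =
        { 0x00, 0x09, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00 }; /* config 1  */
    static const uint8_t set_protocol_boot[8] =
        { 0x21, 0x0B, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }; /* HID boot  */

each sent to endpoint 0 and each followed by a zero-length IN status
stage. SET_PROTOCOL is the HID class request that switches the
keyboard into the boot protocol so the fixed-format reports apply.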

Frnak McKenney

unread,
Dec 20, 2012, 11:00:28 AM12/20/12
to
On Sun, 16 Dec 2012 13:46:10 -0800, Jon Kirwan <jo...@infinitefactors.org> wrote:
> On Sun, 16 Dec 2012 21:00:41 +0700, Ivan Shmakov
><onei...@gmail.com> wrote:
...several attributions lost...

[...]

>> > To OP: For a cheap hobbyist one-off with USB connection I'll probably
>> > just grab an MSP430 LaunchPad off the shelf if the application idea
>> > fits. It's already got connectors for a daughterboard, the cpu is
>> > socketed, comes with two cpus,
>>
>> Somehow, I was unable to find out what exactly comes with this
>> board? But given the price, it indeed looks like a nice thing
>> to have.
>
> You get:
> • Nice box
> • Paperwork
> • ½ m USB cable
> • Microcrystal MS3V-T1R tuning fork 32.768kHz crystal
> • MSP430G2231 cpu, DIP
> • MSP430G2211 cpu, DIP

With the currently-shipping v1.5 LaunchPad boards you get:

MSP430G2553: 16KB FLASH, 512B RAM, UART/SPI/I2C, 8ch/10bit ADC,
comparator, touch-sense enabled I/Os.
MSP430G2452: 8KB FLASH, 256B RAM, UART/SPI/I2C, 8ch/10bit ADC,
comparator, touch-sense enabled I/Os.

> • LaunchPad board, which includes a USB to host section
> and a developer section with socket for cpu, two pushbuttons
> for user use (as well as reset), two leds, one green, one
> red, jumpers for enabling and disabling features, a special
> interface for using the board to program target boards as
> well (6-pin EZ430 connector), a power connector for your use
> to run the board, and of course a USB connector
> • four headers for daughter card extensions, 2 male-female
> and 2 male-male

Plus two compiler/IDE suites plus the MSP430 port of gcc.

As for USB, I'm aware of one effort in this direction:

USB v1.1 support on the MSP430 Launchpad
http://www.43oh.com/2012/12/usb-v1-1-support-on-the-msp430-launchpad/

Hope this helps...


Frank McKenney
--
Generations of students in the social sciences have been exposed
to entertaining lectures that point out how dumb everyone else
is, constantly wandering off the path of logic and getting lost
in the fog of intuition. Yet logical norms are blind to content
and culture, ignoring evolved capacities and environmental
structure. Often what looks like a reasoning error from a
purely logical perspective turns out to be a highly intelligent
social judgment in the real world. Good intuitions must go
beyond the information given, and therefore, beyond logic.
-- Gerd Gigerenzer / Gut Feelings: The Intelligence
of the Unconscious.
--
Frank McKenney, McKenney Associates
Richmond, Virginia / (804) 320-4887
Munged E-mail: frank uscore mckenney aatt mindspring ddoott com



Jon Kirwan

unread,
Dec 20, 2012, 2:54:19 PM12/20/12
to
Thanks for the correction. I do remember, now that you bring
it up, that the CPUs had changed. I just don't have any to
remind me about it. Thanks!

>> • LaunchPad board, which includes a USB to host section
>> and a developer section with socket for cpu, two pushbuttons
>> for user use (as well as reset), two leds, one green, one
>> red, jumpers for enabling and disabling features, a special
>> interface for using the board to program target boards as
>> well (6-pin EZ430 connector), a power connector for your use
>> to run the board, and of course a USB connector
>> • four headers for daughter card extensions, 2 male-female
>> and 2 male-male
>
>Plus two compiler/IDE suites plus the MSP430 port of gcc.

Yes. I've been having almost nothing but trouble with IAR's
on my "clean" install of Win7 Ultimate 64-bit, though. I
don't know what the problem is, but I've tried it with many
different LaunchPads and different USB connectors (the
motherboard has a LOT of them) and I also tried doing it with
the XP 32-bit VM from Microsoft running underneath without
any better success (I didn't expect that to work, anyway.) On
the same system, CCS does just fine. Odd thing there, _if_
both are using the same DLL (which maybe they are not.)

On a Thinkpad laptop running Win7 Professional, though, IAR
works great. And on all my other machines, just fine. But
they aren't 64-bit installs.

>As for USB, I'm aware of one effort in this direction:
>
> USB v1.1 support on the MSP430 Launchpad
> http://www.43oh.com/2012/12/usb-v1-1-support-on-the-msp430-launchpad/
>
>Hope this helps...

Oh, yes. Added to my USB favorites folder. I'll look it over.
Thanks.

Jon

Jon Kirwan

unread,
Dec 20, 2012, 2:58:57 PM12/20/12
to
I believe they are low speed, HID. I agree completely that a
full speed host would be ... difficult. ;)

>I've also been thinking about what steps are _required_ from the device
>enumeration sequence after it's attached to a host. I think it may be
>possible to drop the reading of the descriptors; there's certainly nothing
>in my device stack which _requires_ the descriptors to be read before the
>next stage can complete.
>
>The values in the descriptors (for example, the configuration index) are
>required in later steps of the sequence, but if your host stack already
>has prior knowledge of the values, I don't see why it would need to read
>them from the keyboard's descriptors.

Understood. Point taken.

>However, the other steps are all required. For example, during reset I
>configure endpoint 0; setting the address is required before you can
>set a configuration; and the set-configuration stage (at least in my stack)
>is where you run down the endpoint descriptors and configure the
>application-level (i.e. HID in your case) endpoints as required.

I'll need to understand more of the spec and think a bit to
see what can be done here, I suspect. But I'm keeping all
this for later and, as with Chris, I very much appreciate
these comments. They will most certainly help focus my
attention better.

Thanks,
Jon