
Binary protocol design: TLV, LTV, or else?


Aleksandar Kuktin

Jan 8, 2014, 4:30:09 PM
Hi all.

I'm making a protocol for communication between a PC and a peripheral
device. The protocol is expected to, at first, run on raw Ethernet but I
am also supposed to not make any blunders that would make it impossible
to later use the exact same protocol on things like IP and friends.

Since I saw these kinds of things in many Internet protocols (DNS, DHCP,
TCP options, off the top of my head - but note that these may have a
different order of fields), I have decided to make it an array of type-
length-value triplets encapsulated in the packet frame (no header). The
commands would fill the "type" field, "length" would specify the length
of data ("value") following the length field, and "value" would contain
the data for the command.
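The layout described above can be sketched as follows. This is a minimal illustration, not the actual protocol: the field widths (one octet each for type and length) are an assumption made here for the sketch.

```python
import struct

def encode_tlv(t, value):
    # One octet "type", one octet "length", then the value bytes.
    assert len(value) <= 255, "length field is one octet in this sketch"
    return struct.pack("BB", t, len(value)) + value

def decode_tlvs(buf):
    # Walk a buffer of concatenated TLVs, collecting (type, value) pairs.
    out, i = [], 0
    while i < len(buf):
        t, l = struct.unpack_from("BB", buf, i)
        out.append((t, buf[i + 2:i + 2 + l]))
        i += 2 + l
    return out

# Two commands packed back to back in one frame:
packet = encode_tlv(0x01, b"\x2a") + encode_tlv(0x02, b"hello")
```

Decoding `packet` yields the two (type, value) pairs back; a real parser would additionally reject a length that runs past the end of the buffer.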

But I would like to hear other (read: opposing) opinions. Particularly so
since I am self-taught so there may be considerations obvious to
graduated engineers that I am oblivious to.

BTW, the periphery that is on the other end is autonomous and rather
intelligent, but very resource constrained. Really, the resource
constraints of the periphery are my main problem here.


Some interesting questions:
Is omitting a packet header a good idea? In the long run?

If I put a packet header, what do I put in it? Since addressing and error
detection and "recovery" are supposed to be done by underlying protocols,
the only thing I can think of putting into the header is the total-length
field, and maybe, maybe, maybe a packet-id or transaction-id field. But I
really don't need any of these.

My reasoning with packet-id and transaction-id (and protocol-version,
really) is that I don't need them now, so I can omit them, and if I ever
do need them, I can just add a command which implements them. In doing
this, am I setting myself up for a very nasty problem in the future?

Is using flexible packets like this one (as opposed to the contents of,
say, an IP header, which has strictly defined fields) a good idea, or am I
better off rigidifying my packets?

Is there a special preference or reason as to why some protocols do TLV
and others do LTV? (Note that I am not trying to ignite a holy war, I'm
just asking.)

Is it good practice to require aligning the beginning of a TLV with a
boundary, say a 16-bit word boundary?

Grant Edwards

Jan 8, 2014, 5:14:35 PM
On 2014-01-08, Aleksandar Kuktin <aku...@gmail.com> wrote:

> I'm making a protocol for communication between a PC and a peripheral
> device. The protocol is expected to, at first, run on raw Ethernet

I've been supporting a protocol like that for many years. Doing raw
Ethernet on Windows hosts is becoming increasingly problematic due to
attempts by Microsoft to fix security issues. We anticipate it will
soon no longer be feasible and we'll be forced to switch to UDP.

I'm not the Windows guy, but as I understand it you'll have to write a
Windows kernel-mode driver to support your protocol, and users will
require admin privileges. Even then you'll have problems with various
firewall setups and anti-virus software.

If the PC is running Linux, raw Ethernet isn't nearly as problematic
as it is on Windows, but it does still require either root privileges
or special security capabilities.

If you can, I'd recommend using UDP (which is fairly low overhead).
The PC end can then be written as a normal user-space application that
doesn't require admin privileges. You'll still have problems with
some routers and NAT firewalls, but way fewer problems than trying to
use raw Ethernet.

Using TCP will allow the easiest deployment, but TCP requires quite a
bit more overhead than UDP.
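The user-space UDP approach suggested here can be sketched roughly as below; the loopback address and the OS-assigned port are illustrative assumptions, not anything from the actual product.

```python
import socket

# A user-space UDP exchange: no root/admin privileges needed,
# unlike raw Ethernet.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.settimeout(2)
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"\x01\x01\x2a", addr)   # a tiny TLV-style datagram

data, peer = server.recvfrom(1500)     # one Ethernet-sized buffer
client.close()
server.close()
```

Note that datagram boundaries are preserved: each `recvfrom()` returns exactly one sent message, which is one reason UDP maps so naturally onto a raw-Ethernet-style design.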

--
Grant Edwards grant.b.edwards Yow! HAIR TONICS, please!!
at
gmail.com

Don Y

Jan 8, 2014, 5:19:30 PM
Hi Aleksander,

On 1/8/2014 2:30 PM, Aleksandar Kuktin wrote:

> I'm making a protocol for communication between a PC and a peripheral

Here there be dragons...

> device. The protocol is expected to, at first, run on raw Ethernet but I
> am also supposed to not make any blunders that would make it impossible
> to later use the exact same protocol on things like IP and friends.
>
> Since I saw these kinds of things in many Internet protocols (DNS, DHCP,
> TCP options, off the top of my head - but note that these may have a
> different order of fields), I have decided to make it an array of type-
> length-value triplets encapsulated in the packet frame (no header). The
> commands would fill the "type" field, "length" would specify the length
> of data ("value") following the length field, and "value" would contain
> the data for the command.

Are you sure you have enough variety to merit the extra overhead
(in the packet *and* in the parsing of the packet)? Can you,
instead, create a single packet format whose contents are indicated
by a "packet type" specified in the header? Even if this means leaving
space for values/parameters that might not be required in every
packet type? For example:
<header> <field1> <field2> <field3> <field4>
Where certain fields may not be used in certain packet types
(their contents then being "don't care").

Alternatively, a packet type that implicitly *defines* the format
of the balance of the packet. For example:
type1: <header1> <fieldA> <fieldB>
type2: <header2> <fieldA>
type3: <header3> <fieldA> <fieldB> <fieldC> <fieldD>
(where the format of each field may vary significantly between
message types)

It seems like you are headed in the direction of:
<header> <fields>
where the number of fields can vary as can their individual formats.

> But I would like to hear other (read: opposing) opinions. Particularly so
> since I am self-taught so there may be considerations obvious to
> graduated engineers that I am oblivious to.
>
> BTW, the periphery that is on the other end is autonomous and rather
> intelligent, but very resource constrained. Really, resource
> constrainment of the periphery is my main problem here.

So, the less "thinking" (i.e., handling of variations) that the
remote device has to handle, the better.

Of course, this can be done in a variety of different ways!
E.g., you could adopt a format where each field consists of:
<parameterNumber> <parameterValue>
and the receiving device can blindly parse the parameterNumber
and plug the corresponding parameterValue into a "slot" in an
array of parameters that your algorithms use.
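That "blindly plug values into slots" scheme might look something like this sketch; the one-octet parameter number and fixed 16-bit value width are assumptions made here so the receiver needs no per-message format knowledge.

```python
import struct

NUM_PARAMS = 8
params = [0] * NUM_PARAMS   # the "slot" array the algorithms read from

def apply_message(buf):
    # Each field: one-octet parameter number, 16-bit big-endian value.
    # The receiver parses "blindly": every field has the same shape.
    for off in range(0, len(buf), 3):
        n, v = struct.unpack_from(">BH", buf, off)
        if n < NUM_PARAMS:   # ignore unknown slots rather than fault
            params[n] = v

msg = struct.pack(">BH", 3, 500) + struct.pack(">BH", 0, 7)
apply_message(msg)
```

The fixed field shape keeps the parser trivial on a memory-constrained device, at the cost of wasting octets when a value would fit in less than 16 bits.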

Alternatively, you could write a parser that expects an entire
message to have a fixed format and plug the parameters it
discovers into predefined locations in your app.

> Some interesting questions:
> Is ommiting a packet header a good idea? In the long run?

Headers (and, where necessary, trailers) are intended to pass
specific data (e.g., message type) in a way that is invariant of
the content of the balance of the message. Like saying, "What follows
is ...".

They also help to improve the reliability of the message, as they can
carry information that helps verify its integrity. E.g., a
checksum. Or, simply the definition of "What follows is..."
allows the recipient to perform some tests on that which follows!
So, if you are claiming that "what follows is an email address",
the recipient can expect <alphanumeric>@<domain>. Anything that
doesn't fit this template suggests something is broken -- you
are claiming this is an email address yet it doesn't conform to
the template for an email address!

> If I put a packet header, what do I put in it? Since addressing and error
> detection and "recovery" is supposed to be done by underlying protocols,

Will that ALWAYS be the case for you? What if you later decide to
run your protocol over EIA232? Will you then require inserting
another protocol *beneath* it to provide those guarantees?

Will your underlying protocol guarantee that messages are delivered IN
ORDER? *Always*?

Do you expect the underlying protocol to guarantee delivery? At most
once? At least once?

> the only thing I can think of putting into the header is the total-length
> field, and maybe, maybe, maybe a packet-id or transaction-id field. But I
> really don't need any of these.
>
> My reasoning with packet-id and transaction-id (and protocol-version,
> really) is that I don't need them now, so I can omit them, and if I ever
> do need them, I can just add a command which implements them. In doing
> this, am I setting myself up for a very nasty problem in the future?
>
> Is using flexible packets like this one (opposed the the contents of,
> say, IP header which has strictly defined fields) a good idea, or am I
> better off rigidifying my packets?

That depends on what you expect in the future -- in terms of additions
to the protocol as well as the conveyance by which your data gets
to/from the device. Simpler tends to be better.

> Is there a special prefference or reason as to why some protocols do TLV
> and others do LTV? (Note that I am not trying to ignite a holy war, I'm
> just asking.)
>
> Is it good practice to require aligning the beggining of a TLV with a
> boundary, say 16-bit word boundary?

Depends on how you are processing the byte stream. E.g., for ethernet,
if you try to deal with any types bigger than single octets, you need
to resolve byte ordering issues (so-called network byte order).
If you design your protocol to deal exclusively with octets, then
you can sidestep this (by specifying an explicit byte ordering)
but then force the receiving (and sending) tasks to demangle/mangle
the data types out of/into these forms.
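A minimal sketch of that explicit-byte-ordering idea, using network (big-endian) order for a 16-bit field so both ends agree regardless of host CPU:

```python
import struct

# "!" selects network byte order (big-endian) regardless of the host,
# so the mangle/demangle step is the same on both ends of the wire.
def put_u16(value):
    return struct.pack("!H", value)

def get_u16(octets):
    return struct.unpack("!H", octets)[0]

wire = put_u16(0x1234)   # always b"\x12\x34", on any host
```

Confining all multi-octet conversions to two helpers like these keeps the rest of the protocol code dealing purely in octets.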

Joe Chisolm

Jan 8, 2014, 6:01:35 PM
Read the RADIUS protocol RFCs and how they deal with UDP. There is a
boatload of parsing code out there in the various RADIUS server and
client implementations. If you start with UDP you can even cob together
a test system using many of the scripting languages like perl, python,
ruby, etc.

--
Chisolm
Republic of Texas

David LaRue

Jan 8, 2014, 6:09:00 PM
Aleksandar Kuktin <aku...@gmail.com> wrote in
news:lakg10$kri$1...@speranza.aioe.org:
Hello,

I originated a product that used TLV packets back in the 90s and it is
still in use today without any problems. It was similar to a
configuration file that contained various parameters for applications
that shared data. There was a root packet header. This allowed
transmission across TCP, serial, queued pipes, and file storage. We
enforced a 4-byte alignment on fields due to the machines being used to
parse the data - we had Windows, linux, and embedded devices reading the
data. Just be sure to define the byte order. We wrote and maintained
an RFC like document.

One rule we followed that may help you is that once a tag is defined it
is never redefined. That prevented issues migrating forward and
backward. Tags could be removed from use, but were always supported.

One issue we had with TLV was with one of the developers taking
shortcuts. The TLVs were built in a tree so any V started with a TL
until you got to the lowest level item being communicated. Anyway the
developer in question would read the T and presume they could bypass
reading the lower level tags because the order was fixed - it was not.
Upgraded protocols added fields (a low-level TLV) that caused read
issues. Easy to find, but frustrating that we had to re-release one of
the node devices.

The only other error you are likely to get with TLVs like this is
an issue if the entire message isn't delivered. The follow-on data
becomes part of the previous message. That is why some encapsulation
might be wise. If you are using UDP and there is no need for multiple
packets per message (ever), that might be your encapsulation method.

Good luck,

David

upsid...@downunder.com

Jan 9, 2014, 1:59:25 AM
On Wed, 8 Jan 2014 22:14:35 +0000 (UTC), Grant Edwards
<inv...@invalid.invalid> wrote:

>On 2014-01-08, Aleksandar Kuktin <aku...@gmail.com> wrote:
>
>> I'm making a protocol for communication between a PC and a peripheral
>> device. The protocol is expected to, at first, run on raw Ethernet
>
>I've been supporting a protocol like that for many years. Doing raw
>Ethernet on Windows hosts is becoming increasingly problematic due to
>attempts by Microsoft to fix security issues. We anticipate it will
>soon no longer be feasible and we'll be forced to switch to UDP.

UDP adds very little compared to raw ethernet, some more or less
stable header bytes and a small ARP protocol (much less than a page of
code). There are a lot of tools to display the various IP and UDP
headers and standard socket drivers should work OK.

If you are using raw ethernet on a big host, you most likely would
have to put the ethernet adapter into promiscuous mode, which might
be a security / permission issue.

dp

Jan 9, 2014, 2:37:02 AM
On Thursday, January 9, 2014 8:59:25 AM UTC+2, upsid...@downunder.com wrote:
> On Wed, 8 Jan 2014 22:14:35 +0000 (UTC), Grant Edwards
> <inv...@invalid.invalid> wrote:
>
> >On 2014-01-08, Aleksandar Kuktin <aku...@gmail.com> wrote:
> >
> >> I'm making a protocol for communication between a PC and a peripheral
> >> device. The protocol is expected to, at first, run on raw Ethernet
> >
> >I've been supporting a protocol like that for many years. Doing raw
> >Ethernet on Windows hosts is becoming increasingly problematic due to
> >attempts by Microsoft to fix security issues. We anticipate it will
> >soon no longer be feasible and we'll be forced to switch to UDP.
>
> UDP adds very little compared to raw ethernet, some more or less
> stable header bytes and a small ARP protocol (much less than a page of
> code). There are a lot of tools to display the various IP and UDP
> headers and standard socket drivers should work OK.

I would also advocate using UDP rather than raw Ethernet.
Implementing IP can be pretty simple if one does not intend
(as in this case) to connect the device to the internet, handle
fragmented/out-of-order datagrams, etc. UDP on top of that is almost
negligible. I can't see which MCU would have an Ethernet MAC yet lack
the resources for such an "almost IP" implementation.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/

Don Y

Jan 9, 2014, 3:28:21 AM
Hi Dimiter,

On 1/9/2014 12:37 AM, dp wrote:
> On Thursday, January 9, 2014 8:59:25 AM UTC+2, upsid...@downunder.com wrote:
>> On Wed, 8 Jan 2014 22:14:35 +0000 (UTC), Grant Edwards
>> <inv...@invalid.invalid> wrote:
>>
>>> On 2014-01-08, Aleksandar Kuktin<aku...@gmail.com> wrote:
>>>
>>>> I'm making a protocol for communication between a PC and a peripheral
>>>> device. The protocol is expected to, at first, run on raw Ethernet
>>>
>>> I've been supporting a protocol like that for many years. Doing raw
>>> Ethernet on Windows hosts is becoming increasingly problematic due to
>>> attempts by Microsoft to fix security issues. We anticipate it will
>>> soon no longer be feasible and we'll be forced to switch to UDP.
>>
>> UDP adds very little compared to raw ethernet, some more or less
>> stable header bytes and a small ARP protocol (much less than a page of
>> code). There are a lot of tools to display the various IP and UDP
>> headers and standard socket drivers should work OK.
>
> I would also advocate using UDP rather than raw Ethernet.
> Implementing IP can be pretty simple if one does not intend
> (as in this case) connect the device to the internet, fragment/defragment
> out of order datagrams etc. UDP on top of that is almost negligible.
> I can't see which MCU will have an Ethernet MAC and lack the
> resources for such an "almost IP" implementation.

UDP tends to hit the "sweet spot" between "bare iron" and the bloat
of TCP/IP. The implementer has probably the most leeway in deciding
what he *wants* to implement vs. what he *must* implement (once you
climb up into TCP, most of the "options" go away).

Having said that, the OP still has a fair number of decisions to
make if he chooses to layer his protocol atop UDP. MTU, ARP/RARP
implementation, checksum support (I'd advocate doing this in *his*
protocol if he ever intends to run it over a leaner protocol where
*he* has to provide this reliability), etc.

I've (we've?) been assuming he can cram an entire message into
a tiny "no-fragment" packet -- that may not be the case! (Or,
may prove to be a problem when run over protocols with smaller
MTU's)

dp

Jan 9, 2014, 3:53:31 AM
On Thursday, January 9, 2014 10:28:21 AM UTC+2, Don Y wrote:
> ...
> I've (we've?) been assuming he can cram an entire message into
> a tiny "no-fragment" packet -- that may not be the case! (Or,
> may prove to be a problem when run over protocols with smaller
> MTU's)

Hi Don,
UDP does not add any fragmentation overhead compared to his
raw Ethernet anyway (that is, if he stays with UDP packets
fitting in apr. 1500 bytes he will be no worse off than without
UDP).
IP does add fragmentation overhead - if it is a real IP. The sender
may choose its MTU (likely a full-size Ethernet packet) but a
receiver must be ready to get that same packet fragmented into a few
pieces, out of order, and must be able to defragment it.
But since he is OK with raw Ethernet he does not need a true IP
implementation so he can just do it as if everybody is fine with
a full-sized Ethernet MTU and get on with it as you suggest.
Will lose a few bytes for encapsulation but if losing 100 bytes
out of 1500 is an issue chances are there will be a lot of other,
real problems :-).

Don Y

Jan 9, 2014, 9:05:33 AM
Hi Dimiter,

On 1/9/2014 1:53 AM, dp wrote:
> On Thursday, January 9, 2014 10:28:21 AM UTC+2, Don Y wrote:
>> ...
>> I've (we've?) been assuming he can cram an entire message into
>> a tiny "no-fragment" packet -- that may not be the case! (Or,
>> may prove to be a problem when run over protocols with smaller
>> MTU's)
>
> UDP does not add any fragmentation overhead compared to his
> raw Ethernet anyway (that is, if he stays with UDP packets
> fitting in apr. 1500 bytes he will be no worse off than without
> UDP).

I'm thinking more in terms of any other media (protocols) he may
eventually use for transport. If he doesn't want to add support for
packet reassembly in *his* protocol, then he would be wise to pick a
message format that fits in the smallest MTU "imaginable".

For ethernet, I think that is ~60+ octets (i.e., just bigger than the
frame header). I'm a big fan of ~500 byte messages (the minimum
that any node *must* be able to accommodate). I think you have to
consider any other media that may get injected along the path
from source to destination (i.e., if it is not purely "ethernet"
from end to end). IIRC, a PPP link drops the MTU to the 200-300
range.

> IP does add fragmentation overhead - if it is a real IP. The sender
> may choose its MTU (likely a full size Ethernet packet) but a
> receiver must be ready to get that same fragmented in a few pieces
> and out of order and be able to defragment it.

As above, I think if you truly want to avoid dealing with fragments,
you have to be able to operate with an MTU that is little more than
the header (plus 4? or 8?? octets). Even a ~500 byte message could,
conceivably, appear as *100* little fragments! :-/ (and the
receiving node had better be equipped to handle all 500 bytes as
they trickle in!)

> But since he is OK with raw Ethernet he does not need a true IP
> implementation so he can just do it as if everybody is fine with
> a fullsized ethernet MTU and get on with it as you suggest.
> Will lose a few bytes for encapsulation but if losing 100 bytes
> out of 1500 is an issue chances are there will be a lot of other,
> real problems :-).

OP hasn't really indicated how complex/big his messages need to be.
Nor what the ultimate fabric might look like.

E.g., here, I've tried really hard to keep messages *ultra* tiny
by thinking about exactly what *needs* to fit in the message and
how best to encode it. So, for example, I can build an ethernet-CAN
bridge in a heartbeat and not have to worry about trading latency
and responsiveness for packet size on the CAN bus (those nodes can
have super tiny input buffers and still handle complete messages
without having to worry about fragmentation, etc.)

It must have been entertaining for the folks who came up with
ethernet, IP, etc. way back when to start with a clean slate
and *guess* as to what would work best! :>

Grant Edwards

Jan 9, 2014, 10:18:22 AM
I've never found that to be the case. However, raw ethernet access in
non-promiscuous mode still requires admin/root/special privileges and
causes a lot of security headaches (particularly under Windows).

--
Grant Edwards grant.b.edwards Yow! I'm continually AMAZED
at at th'breathtaking effects
gmail.com of WIND EROSION!!

upsid...@downunder.com

Jan 9, 2014, 4:55:53 PM
On Thu, 09 Jan 2014 07:05:33 -0700, Don Y <th...@isnotme.com> wrote:

>Hi Dimiter,
>
>On 1/9/2014 1:53 AM, dp wrote:
>> On Thursday, January 9, 2014 10:28:21 AM UTC+2, Don Y wrote:
>>> ...
>>> I've (we've?) been assuming he can cram an entire message into
>>> a tiny "no-fragment" packet -- that may not be the case! (Or,
>>> may prove to be a problem when run over protocols with smaller
>>> MTU's)
>>
>> UDP does not add any fragmentation overhead compared to his
>> raw Ethernet anyway (that is, if he stays with UDP packets
>> fitting in apr. 1500 bytes he will be no worse off than without
>> UDP).

I have been running raw Ethernet since the DIX days, with DECnet, LAT
(similar to "telnet" terminal connections) and my own protocols
forcing the network adapters into promiscuous mode on thick Ethernet
cables with vampire taps on the cable.

>I'm thinking more in terms of any other media (protocols) over
>which he may eventually use for transport. If he doesn't want to
>add support for packet reassembly in *his* protocol, then he
>would be wise to pick a message format that fits in the smallest
>MTU "imaginable".

While some X.25 based protocols might limit the frame size to 64
bytes, 576 bytes has been the norm for quite a few years.
Standard Ethernet frames are above 1400 bytes, while Jumbo frames
could be about 9000 bytes.

>For ethernet, I think that is ~60+ octets (i.e., just bigger than the
>frame header).

64 bytes is the minimum for proper collision detection size on
coaxial Ethernet networks.

Don Y

Jan 9, 2014, 5:26:10 PM
On 1/9/2014 2:55 PM, upsid...@downunder.com wrote:
> On Thu, 09 Jan 2014 07:05:33 -0700, Don Y<th...@isnotme.com> wrote:
>
>> Hi Dimiter,
>>
>> On 1/9/2014 1:53 AM, dp wrote:
>>> On Thursday, January 9, 2014 10:28:21 AM UTC+2, Don Y wrote:
>>>> ...
>>>> I've (we've?) been assuming he can cram an entire message into
>>>> a tiny "no-fragment" packet -- that may not be the case! (Or,
>>>> may prove to be a problem when run over protocols with smaller
>>>> MTU's)
>>>
>>> UDP does not add any fragmentation overhead compared to his
>>> raw Ethernet anyway (that is, if he stays with UDP packets
>>> fitting in apr. 1500 bytes he will be no worse off than without
>>> UDP).
>
> I have been running raw Ethernet since the DIX days, with DECnet, LAT
> (similar to "telnet" terminal connections) and my own protocols
> forcing the network adapters into promiscious mode on thick Ethernet
> cables with vampire taps on the cable.
>
>> I'm thinking more in terms of any other media (protocols) over
>> which he may eventually use for transport. If he doesn't want to
>> add support for packet reassembly in *his* protocol, then he
>> would be wise to pick a message format that fits in the smallest
>> MTU "imaginable".
>
> While some X.25 based protocols might limit the frame size to 64

Anything in the chain can set the MTU to 68 bytes and still be
"playing by the rules". So, if you *rely* on 70 octets coming
down the 'pike in one UNFRAGMENTED datagram, if your PMTUd gives
something less, you won't receive that level of service.

> bytes, 576 bytes has been the norm for quite a few years.
> Standard Ethernet frames are above 1400 bytes, while Jumbo frames
> could be about 9000 bytes.
>
>> For ethernet, I think that is ~60+ octets (i.e., just bigger than the
>> frame header).
>
> 64 bytes is the minimum for proper collision detection size on
> coaxial Ethernet networks.

From RFC791:

"Every internet module must be able to forward a datagram of 68
octets without further fragmentation. This is because an internet
header may be up to 60 octets, and the minimum fragment is 8
octets."

"Every internet destination must be able to receive a datagram of
576 octets either in one piece or in fragments to be reassembled."

So, a datagram could, conceivably, be fragmented into hundreds of 68
octet datagrams (which can include padding). Yet, must be able to
reassemble these to form that original datagram. I.e., I could build
a bridge that diced up incoming datagrams into itsy bitsy pieces
and be strictly compliant -- as long as I could handle a 576 octet
datagram (buffer size).

OTOH, reliable PMTU discovery is problematic on generic networks as
many nodes don't handle (all) ICMP traffic (as originally intended).

But, nothing requires the nodes/hops to handle a ~1500 octet datagram
("Datagram Too Big")

Folks working on big(ger) iron often don't see where all the dark
corners of the protocols manifest themselves. And, folks writing
stacks often don't realize how much leeway they actually have in
their implementation(s)! :<

[N.B. IPv6 increases these numbers]

Grant Edwards

Jan 9, 2014, 5:44:12 PM
On 2014-01-09, Don Y <th...@isnotme.com> wrote:

> Anything in the chain can set the MTU to 68 bytes and still be
> "playing by the rules". So, if you *rely* on 70 octets coming
> down the 'pike in one UNFRAGMENTED datagram, if your PMTUd gives
> something less, you won't receive that level of service.

Let's not forget that we're discussing UDP _as_a_substitute_for_
_raw_Ethernet_. That means the OP is willing to require that the two
nodes are on the same network segment, and that we can assume that an
Ethernet frame of 1500 bytes is OK.

If using UDP allows packets to be routed between two remote nodes
_some_ of the time, that's still pure gravy compared to using raw
Ethernet -- even if the UDP/IP implementation doesn't support
fragmentation.

--
Grant Edwards grant.b.edwards Yow! PEGGY FLEMMING is
at stealing BASKET BALLS to
gmail.com feed the babies in VERMONT.

Don Y

Jan 9, 2014, 6:56:49 PM
Hi Grant,

On 1/9/2014 3:44 PM, Grant Edwards wrote:
> On 2014-01-09, Don Y<th...@isnotme.com> wrote:
>
>> Anything in the chain can set the MTU to 68 bytes and still be
>> "playing by the rules". So, if you *rely* on 70 octets coming
>> down the 'pike in one UNFRAGMENTED datagram, if your PMTUd gives
>> something less, you won't receive that level of service.
>
> Let's not forget that we're discussing UDP _as_a_substitute_for_
> _raw_Ethernet_. That means the OP is willing to require that the two
> nodes are on the same network segment, and that we can assume that an
> Ethernet frame of 1500 bytes is OK.
>
> If using UDP allows packets to be routed between two remote nodes
> _some_ of the time, that's still pure gravy compared to using raw
> Ethernet -- even if the UDP/IP implementation doesn't support
> fragmentation.

As I said in my reply to Dimiter, upthread:
I'm thinking more in terms of any other media (protocols) over
which he may eventually use for transport.
Given that the OP is in the process of designing a protocol, he may
want to consider the inevitability (?) of his interconnect medium
(and/or underlying protocol) changing in the future. CAN-bus, zigbee,
etc. I.e., *expecting* to be able to push a 1500 byte message "in one
burst" can lead to problems down the road when/if that assumption can
no longer be met.

Too often, an ignorance of the underlying protocol ends up having
disproportionate costs for "tiny" bits of protocol overhead. E.g.,
adding a header that brings the payload to one byte beyond the MSS.
"Why is everything so much slower than it (calculated) should be?"

I try to design with a mantra of "expect the least, enforce the most".

[The OP hasn't really indicated what sort of environment he expects
to operate within nor the intent of the device and the relative
importance (or lack thereof) of the comms therein]

Aleksandar Kuktin

Jan 10, 2014, 1:52:52 PM
Well, this is reassuring. It means at least someone did what I intend to
do, so I should be able to do the same.

Aleksandar Kuktin

Jan 10, 2014, 2:04:21 PM
On Wed, 08 Jan 2014 15:19:30 -0700, Don Y wrote:

> Hi Aleksander,
>
> On 1/8/2014 2:30 PM, Aleksandar Kuktin wrote:
>
>> I'm making a protocol for communication between a PC and a peripheral
>
> Here there be dragons...
>
>> device.

Will give more details in a follow-up in a different sub-thread.

> Are you sure you have enough variety to merit the extra overhead (in the
> packet *and* in the parsing of the packet)?

Pretty sure. The packet transmitted over the wire is actually expected
to be an amalgamation of various commands, parameters and options.

> Can you,
> instead, create a single packet format whose contents are indicated by a
> "packet type" specified in the header? Even if this means leaving space
> for values/parameters that might not be required in every packet type?
> For example:
> <header> <field1> <field2> <field3> <field4>
> Where certain fields may not be used in certain packet types (their
> contents then being "don't care").
>
> Alternatively, a packet type that implicitly *defines* the format of the
> balance of the packet. For example:
> type1: <header1> <fieldA> <fieldB>
> type2: <header2> <fieldA>
> type3: <header3> <fieldA> <fieldB> <fieldC> <fieldD>
> (where the format of each field may vary significantly between message
> types)

This is explicitly what I don't want. That way, I would need to send
many, many packets to get my message across.

> It seems like you are headed in the direction of:
> <header> <fields>
> where the number of fields can vary as can their individual formats.

It seems this is what I will end up with.


>> But I would like to hear other (read: opposing) opinions. Particularly
>> so since I am self-taught so there may be considerations obvious to
>> graduated engineers that I am oblivious to.
>>
>> BTW, the periphery that is on the other end is autonomous and rather
>> intelligent, but very resource constrained. Really, resource
>> constrainment of the periphery is my main problem here.
>
> So, the less "thinking" (i.e., handling of variations) that the remote
> device has to handle, the better.

Hmmm... Not really. Availability of CPU cycles depends on other details
of the device, but if need be I can make the device drown in its own CPU
cycles. Memory, on the other hand, is constrained.

> Of course, this can be done in a variety of different ways!
> E.g., you could adopt a format where each field consists of:
> <parameterNumber> <parameterValue>
> and the receiving device can blindly parse the parameterNumber and plug
> the corresponding parameterValue into a "slot" in an array of parameters
> that your algorithms use.
>
> Alternatively, you could write a parser that expects an entire message
> to have a fixed format and plug the parameters it discovers into
> predefined locations in your app.

I now go to the other sub-thread to continue the conversation...

Aleksandar Kuktin

Jan 10, 2014, 2:15:28 PM
On Wed, 08 Jan 2014 22:14:35 +0000, Grant Edwards wrote:

> On 2014-01-08, Aleksandar Kuktin <aku...@gmail.com> wrote:
>
>> I'm making a protocol for communication between a PC and a peripheral
>> device. The protocol is expected to, at first, run on raw Ethernet
>
> I've been supporting a protocol like that for many years. Doing raw
> Ethernet on Windows hosts is becoming increasingly problematic due to
> attempts by Microsoft to fix security issues. We anticipate it will soon
> no longer be feasible and we'll be forced to switch to UDP.
>
> I'm not the Windows guy, but as I understand it you'll have to write a
> Windows kernel-mode driver to support your protocol, and users will
> require admin privileges. Even then you'll have problems with various
> firewall setups and anti-virus software.

TBH, I really don't expect to support Windows, at least for the time
being. My reasoning is that I can always patch together a Linux LiveCD
and ship it with the device.

I began honing my skills with the Linux from scratch project, so
assembling a distro should not take me more than a week.

> If the PC is running Linux, raw Ethernet isn't nearly as problematic as
> it is on Windows, but it does still require either root privileges or
> special security capabilities.

The idea is to use one program that runs as root and relays packets and
have a different program do the actual driving of the device.
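For reference, the root-only part of such a relay on Linux is basically just opening and binding an AF_PACKET socket. A minimal sketch; the EtherType is the IEEE "local experimental" value and the function name is made up:

```c
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netpacket/packet.h>
#include <net/if.h>

#define ETH_P_CUSTOM 0x88B5    /* IEEE 802 "local experimental" EtherType */

/* Open a raw socket bound to one interface; needs root/CAP_NET_RAW.
 * Returns a descriptor for recv()/send() of whole frames, or -1. */
int open_raw(const char *ifname)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_CUSTOM));
    if (fd < 0)
        return -1;             /* typically EPERM without privileges */

    struct sockaddr_ll addr;
    memset(&addr, 0, sizeof addr);
    addr.sll_family   = AF_PACKET;
    addr.sll_protocol = htons(ETH_P_CUSTOM);
    addr.sll_ifindex  = (int)if_nametoindex(ifname);
    if (addr.sll_ifindex == 0 ||
        bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

The unprivileged "driver" program can then talk to this relay over a local socket or pipe, so only the relay needs root.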

> If you can, I'd recommend using UDP (which is fairly low overhead). The
> PC end can then be written as a normal user-space application that
> doesn't require admin privileges. You'll still have problems with some
> routers and NAT firewalls, but way fewer problems than trying to use raw
> Ethernet.

UDP/IP is just an extension of IP. I considered using raw IP, but decided
against it on grounds that I didn't want to implement IP, simple as it
may be.

Of course, I eventually *will* implement IP, so then I might end up with
the whole UDP/IP stack. But honestly, at this moment the only benefit of
UDP/IP is the ease of writing the driver, and that is a very marginal
benefit.

> Using TCP will allow the easiest deployment, but TCP requires quite a
> bit more overhead than UDP.

TCP/IP is out of the question, period.

Aleksandar Kuktin

Jan 10, 2014, 2:21:24 PM
On Thu, 09 Jan 2014 07:05:33 -0700, Don Y wrote:

> It must have been entertaining for the folks who came up with ethernet,
> IP, etc. way back when to start with a clean slate and *guess* as to
> what would work best! :>

Actually, that's not how it happened at all. :)

Just like in any evolutionary process, several possible solutions were
produced and the ones that were "fittest" and most adapted to the
environment were the ones that prevailed.

Aleksandar Kuktin

Jan 10, 2014, 2:53:22 PM
On Thu, 09 Jan 2014 16:56:49 -0700, Don Y wrote:

> [The OP hasn't really indicated what sort of environment he expects to
> operate within nor the intent of the device and the relative importance
> (or lack thereof) of the comms therein]

The device is a CNC robot, to be used in manufacture. Because of that, I
can require and assume a fairly strict, secure and "proper" setup, with
or without Stuxnet and its ilk.

The protocol is supposed to support transfer of compiled G-code from the
PC to a device (really a battery of devices), transfer of telemetry,
configuration and perhaps a few other things I forgot to think of by now.

Since its main purpose is transfer of G-code, the protocol is expected to
be able to utilize fairly small packets, small enough that fragmentation
is not expected to happen (60 octets should be enough).

Don Y

Jan 10, 2014, 5:30:28 PM
Hi Aleksander,

On 1/10/2014 12:53 PM, Aleksandar Kuktin wrote:
> On Thu, 09 Jan 2014 16:56:49 -0700, Don Y wrote:
>
>> [The OP hasn't really indicated what sort of environment he expects to
>> operate within nor the intent of the device and the relative importance
>> (or lack thereof) of the comms therein]
>
> The device is a CNC robot, to be used in manufacture. Because of that, I
> can require and assume a fairly strict, secure and "proper" setup, with
> or without Stuxnet and its ilk.

Ah, well... there will be an extra service charge to have Stuxnet
installed. Check with the sales office for more details. I think
they are running a "2-for-1" promotion -- THIS MONTH ONLY! :>

> The protocol is supposed to support transfer of compiled G-code from the
> PC to a device (really a battery of devices), transfer of telemetry,
> configuration and perhaps a few other things I forgot to think of by now.
>
> Since its main purpose is transfer of G-code, the protocol is expected to
> be able to utilize fairly small packets, small enough that fragmentation
> is not expected to happen (60 octets should be enough).

Remember, UDP's "efficiency" (if you want to call it that) comes at
a reasonably high cost!

There are no guarantees that a given datagram will be delivered
(or received). The protocol that you develop *atop* UDP has
to notice when stuff "goes missing" (e.g., require an explicit
acknowledgement, sequence numbers, etc.)

There are no guarantees that datagram 1 will be received *before*
datagram 2. "Turn off plasma cutter" "Move left" can be received
as "Move left" "Turn off plasma cutter" (which might be "A Bad Thing"
if there is something located off to the left that doesn't like being
exposed to plasma! :> )

There is no sense of a "connection" between the source and destination
beyond that of each *individual* datagram. Neither party is ever
aware if the other party is "still there". (add keepalives if this is
necessary)

There is no mechanism to moderate traffic as it increases (and,
those increases can lead to more dropped/lost datagrams which
leads to more retransmission *requests*, which leads to more
traffic which leads to... keep in mind any other traffic on
your network that could worsen this -- or, be endangered by it!)

Appliances other than switches can effectively block UDP connections.
If you ever intend to support a physically distributed domain
that exceeds what you can achieve using "maximum cable lengths"
(one of the drawbacks about moving away from "orange hose" and its
ilk was the drop in maximum cable length), you have to be careful
in considering what any *other* "interconnect appliances" do to
the traffic you intend to pass (and, if your protocol will be routed!)

[It's surprising how *short* "100m" is when it comes to cable lengths!
Esp in a manufacturing setting where you might have to go *up* a
considerable way -- and, leave a suitable service loop -- before you
can even begin to go "over"! And, line-of-sight cable routing may be
impractical. For example, here (residential), the *average* length of
a network cable is close to 70 feet -- despite the fact that the
(2D) diagonal of the house is just about that same length *and* all
the drops are centrally terminated!]

When someone later decides it should be a piece of cake for your
"engineering office" to directly converse with your controllers located
in the "manufacturing facility", you'll find yourself explaining
why that's not easily accomplished. "Why not? I can talk to
Google's servers in another *state*/country... (damn consultants
always trying to charge extra for stuff they should have done in
the first place!)"

[N.B. Raw ethernet frames don't even give you the above (lack of)
assurances :> ]

upsid...@downunder.com

Jan 11, 2014, 2:50:23 AM
On Fri, 10 Jan 2014 15:30:28 -0700, Don Y <th...@isnotme.com> wrote:

>Hi Aleksander,
>
>On 1/10/2014 12:53 PM, Aleksandar Kuktin wrote:
>> On Thu, 09 Jan 2014 16:56:49 -0700, Don Y wrote:
>>
>>> [The OP hasn't really indicated what sort of environment he expects to
>>> operate within nor the intent of the device and the relative importance
>>> (or lack thereof) of the comms therein]
>>
>> The device is a CNC robot, to be used in manufacture. Because of that, I
>> can require and assume a fairly strict, secure and "proper" setup, with
>> or without Stuxnet and its ilk.

Such applications have been traditionally handled with half duplex
RS-485 multidrop request/response protocols (such as Modbus) at speeds
of 9600 or even 115200 bit/s.

In a new implementation with Ethernet hardware, you get galvanic
isolation, much higher gross throughput (at least 10 Mbit/s), bus
arbitration, message framing and CRC detection for "free", i.e. in
hardware.

In a multidrop environment you can communicate with each device in
parallel, in full duplex. This greatly compensates for the latencies
of simple half-duplex transactions between the master and a single
slave.


>> The protocol is supposed to support transfer of compiled G-code from the
>> PC to a device (really a battery of devices), transfer of telemetry,
>> configuration and perhaps a few other things I forgot to think of by now.
>>
>> Since its main purpose is transfer of G-code, the protocol is expected to
>> be able to utilize fairly small packets, small enough that fragmentation
>> is not expected to happen (60 octets should be enough).
>
>Remember, UDP's "efficiency" (if you want to call it that) comes at
>a reasonably high cost!
>
>There are no guarantees that a given datagram will be delivered
>(or received). The protocol that you develop *atop* UDP has
>to notice when stuff "goes missing" (e.g., require an explicit
>acknowledgement, sequence numbers, etc.)

On traditional RS-4xx networks there are no such guarantees either, so
request/response + timeout/retransmit protocols have been used for
decades; why not use the same approach on raw eth or UDP?

>There are no guarantees that datagram 1 will be received *before*
>datagram 2. "Turn off plasma cutter" "Move left" can be received
>as "Move left" "Turn off plasma cutter" (which might be "A Bad Thing"
>if there is something located off to the left that doesn't like being
>exposed to plasma! :> )

Still using the traditional request/response model, you do not send
the "Move left" before you receive the ack from "Turn off" command.
Better yet, use the longer message available, put all the critical
elements into a single transaction (eth/UDP frame). A frame could
consist of "Move to X,Y", "Plasma on", "Move to A,B at speed z",
"Plasma off". After this full sequence has been acknowledged, the
master should not send a new burn sequence.
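That transaction scheme amounts to stop-and-wait with one frame outstanding. The bookkeeping might look like this (the transport, raw eth or UDP, is left abstract; all names are illustrative, not from any real stack):

```c
#include <stdint.h>

/* Stop-and-wait sender state: at most one frame outstanding,
 * retransmitted until the matching ack arrives. */
struct saw_tx {
    uint8_t seq;          /* sequence number of the outstanding frame */
    int     outstanding;  /* 1 while waiting for its ack */
    int     retries;      /* retransmissions so far */
};

/* Called whenever we (re)send the current frame. */
void saw_sent(struct saw_tx *t)
{
    if (t->outstanding) {
        t->retries++;     /* timeout path: same seq, count the retry */
    } else {
        t->outstanding = 1;
        t->retries = 0;
    }
}

/* Called when an ack arrives; returns 1 if it completes the
 * outstanding frame (master may now send the next burn sequence). */
int saw_ack(struct saw_tx *t, uint8_t ack_seq)
{
    if (!t->outstanding || ack_seq != t->seq)
        return 0;         /* stale or duplicate ack: ignore it */
    t->outstanding = 0;
    t->seq++;             /* wraps at 255, harmless with one in flight */
    return 1;
}
```

The actual timeout timer and a retry limit (give up, raise an alarm) would sit around saw_sent().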

>There is no sense of a "connection" between the source and destination
>beyond that of each *individual* datagram. Neither party is ever
>aware if the other party is "still there". (add keepalives if this is
>necessary)

When the slave acknowledges the master request, this is easily
handled.

>There is no mechanism to moderate traffic as it increases (and,
>those increases can lead to more dropped/lost datagrams which
>leads to more retransmission *requests*, which leads to more
>traffic which leads to... keep in mind any other traffic on
>your network that could worsen this -- or, be endangered by it!)

The amount of traffic in a network controlling a real (mechanical)
device is limited by the mechanical movement (etc.) of that device,
not by the network capacity.

In old coaxial-based 10Base2/5 half-duplex networks, that might have
been an issue. For switch-based (but not hub-based) 10xxxBaseT
networks, this is not really an issue, since a typical industrial
device does not need more than 10BaseT.

>Appliances other than switches can effectively block UDP connections.
>If you ever intend to support a physically distributed domain
>that exceeds what you can achieve using "maximum cable lengths"
>(one of the drawbacks about moving away from "orange hose" and its
>ilk was the drop in maximum cable length), you have to be careful
>in considering what any *other* "interconnect appliances" do to
>the traffic you intend to pass (and, if your protocol will be routed!)

I would not be so foolish as to do direct machine control over
unpredictable nets such as the Internet, much less over wireless
connections such as WLANs, microwave or satellite links. All
the security issues must be handled with a wired connection and in
some cases with certified LAN systems.

>[It's surprising how *short* "100m" is when it comes to cable lengths!
>Esp in a manufacturing setting where you might have to go *up* a
>considerable way -- and, leave a suitable service loop -- before you
>can even begin to go "over"! And, line-of-sight cable routing may be
>impractical. For example, here (residential), the *average* length of
>a network cable is close to 70 feet -- despite the fact that the
>(2D) diagonal of the house is just about that same length *and* all
>the drops are centrally terminated!]

100 m is the twisted-pair cable limit for 10xBaseT; putting switches
in between will solve that. For real heavy industry, copper wiring is
a no-no and you have to use fibers anyway; single-mode fibers give
over 30 km of range without optical repeaters.

>When someone later decides it should be a piece of cake for your
>"engineering office" to directly converse with your controllers located
>in the "manufacturing facility", you'll find yourself explaining
>why that's not easily accomplished. "Why not? I can talk to
>Google's servers in another *state*/country... (damn consultants
>always trying to charge extra for stuff they should have done in
>the first place!)"

Isn't it a good thing that you can deny access from the outside
world to a critical controller?

If there is a specific need to let a qualified person from a remote
site access the device directly, build a secured VPN connection to the
PC controlling the devices and use all required firewall etc. methods
to restrict the access.

>[N.B. Raw ethernet frames don't even give you the above (lack of)
>assurances :> ]

This is a really good thing.

upsid...@downunder.com

Jan 11, 2014, 3:25:42 AM
It is interesting to note that Ethernet was not a top contender in
the beginning, due to its high cost.

One of the first DIX ethernet implementations for DECNET was the DEUNA
card for PDP-11/VAX-11, requiring two Unibus cards (very expensive),
which was connected to the RG-8 coaxial cable using vampire tap
transceivers (very expensive) using the AUI interface (essentially
RS-422 for Tx, Rx, Control and Collision detect).

In fact the AUI interface is electrically and functionally quite
similar to 10BaseT interface (except control and collision detect).

The cost of the vampire tap transceivers was so high that the first
"Ethernet" network I designed and built was a network in a computer
room between computers, using AUI cabling (with 15-pin connectors)
through a DELNI "hub", later adding some long AUI cables for terminal
servers at the other end of the office. Thus no coaxial cable was used.

10xxxBaseT based hubs and switches really made Ethernet a viable
option.

upsid...@downunder.com

Jan 11, 2014, 3:36:44 AM
On Fri, 10 Jan 2014 19:15:28 +0000 (UTC), Aleksandar Kuktin
<aku...@gmail.com> wrote:

>On Wed, 08 Jan 2014 22:14:35 +0000, Grant Edwards wrote:

>> If you can, I'd recommend using UDP (which is fairly low overhead). The
>> PC end can then be written as a normal user-space application that
>> doesn't require admin privledges. You'll still have problems with some
>> routers and NAT firewalls, but way fewer problems than trying to use raw
>> Ethernet.
>
>UDP/IP is just an extension of IP. I considered using raw IP, but decided
>against it on grounds that I didn't want to implement IP, simple as it
>may be.

IP is different from raw Ethernet.

IP requires handling the ARP issues; adding UDP on top of it is just
the port numbers.

ARP issues require about a page of code and UDP even less.

Grant Edwards

Jan 11, 2014, 11:40:20 AM
On 2014-01-11, upsid...@downunder.com <upsid...@downunder.com> wrote:

> The cost of the vampire tap transceivers was so high that the first
> "Ethernet" network I designed and built was a network in a computer
> room between computers, using AUI cabling (with 15-pin connectors)
> through a DELNI "hub", later adding some long AUI cables for terminal
> servers at the other end of the office. Thus no coaxial cable used.

The first place I worked where "Ethernet" was widely used, there was a
thick Ethernet backbone, but the vast majority of the wiring was AUI
cables and hubs.

--
Grant


Don Y

Jan 11, 2014, 11:56:42 AM
On 1/11/2014 12:50 AM, upsid...@downunder.com wrote:
> On Fri, 10 Jan 2014 15:30:28 -0700, Don Y<th...@isnotme.com> wrote:
>> On 1/10/2014 12:53 PM, Aleksandar Kuktin wrote:
>>> On Thu, 09 Jan 2014 16:56:49 -0700, Don Y wrote:

>>> The protocol is supposed to support transfer of compiled G-code from the
>>> PC to a device (really a battery of devices), transfer of telemetry,
>>> configuration and perhaps a few other things I forgot to think of by now.
>>>
>>> Since its main purpose is transfer of G-code, the protocol is expected to
>>> be able to utilize fairly small packets, small enough that fragmentation
>>> is not expected to happen (60 octets should be enough).
>>
>> Remember, UDP's "efficiency" (if you want to call it that) comes at
>> a reasonably high cost!
>>
>> There are no guarantees that a given datagram will be delivered
>> (or received). The protocol that you develop *atop* UDP has
>> to notice when stuff "goes missing" (e.g., require an explicit
>> acknowledgement, sequence numbers, etc.)
>
> On traditional RS-4xx networks there are no such guarantees either, so
> request/response + timeout/retransmit protocols have been used for
> decades; why not use the same approach on raw eth or UDP?

You're making my point for me: the protocol that is layered atop
UDP has to include these provisions. (e.g., using TCP handles
much of this -- at an added expense).

>> There are no guarantees that datagram 1 will be received *before*
>> datagram 2. "Turn off plasma cutter" "Move left" can be received
>> as "Move left" "Turn off plasma cutter" (which might be "A Bad Thing"
>> if there is something located off to the left that doesn't like being
>> exposed to plasma! :> )
>
> Still using the traditional request/response model, you do not send
> the "Move left" before you receive the ack from "Turn off" command.
> Better yet, use the longer message available, put all the critical
> elements into a single transaction (eth/UDP frame). A frame could
> consist of "Move to X,Y", "Plasma on", "Move to A,B at speed z",
> "Plasma off". After this full sequence has been acknowledged, the
> master should not send a new burn sequence.

What if the message gets fragmented (by a device along the way)
and a fragment gets dropped?

What if the message is VERY long (i.e., won't even fit in a jumbo
frame -- assuming every device accepts jumbo frames) -- like a
software update, CNC "program", etc.?

Again: the protocol that is layered atop UDP has to include these
provisions.

>> There is no sense of a "connection" between the source and destination
>> beyond that of each *individual* datagram. Neither party is ever
>> aware if the other party is "still there". (add keepalives if this is
>> necessary)
>
> When the slave acknowledges the master request, this is easily
> handled.

There needs to be a "NoOp" request -- something that can be sent
that has no other effects besides exercising the link.

Again: the protocol that is layered atop UDP has to include these
provisions.

>> There is no mechanism to moderate traffic as it increases (and,
>> those increases can lead to more dropped/lost datagrams which
>> leads to more retransmission *requests*, which leads to more
>> traffic which leads to... keep in mind any other traffic on
>> your network that could worsen this -- or, be endangered by it!)
>
> The amount of traffic in a network controlling a real (mechanical)
> device is limited by the mechanical movement (etc.) of that device,
> not by the network capacity.

You are assuming a single device is sitting on the network.
And, that all messages involve "control". E.g., any status
updates (polling) consume bandwidth as well. And, traffic
that fits into "none of the above" (e.g., firmware updates...
or, do you require the plant to be shut down when you do these?)

You're also assuming it's 10/100BaseTX from start to finish
with no lower bandwidth links along the way (or, virtual
networks sharing *physical* networks.

> In old coaxial-based 10Base2/5 half-duplex networks, that might have
> been an issue. For switch-based (but not hub-based) 10xxxBaseT
> networks, this is not really an issue, since a typical industrial
> device does not need more than 10BaseT.

I designed an integrated "air handler" many years ago. It was easy
to saturate a 10Base2 network controlling/monitoring just *one*
such device. And, that's just "moving process air". I.e., you
don't just say "turn on" and "turn off". Instead, you are querying
sensors and controlling actuators to run the (sub)system in a
particular way.

It's foolish to think you're just going to tell a wire EDM machine:
"here's your program. make me five of these." without also
monitoring its progress, responding to alarms ("running low on wire"),
etc.

As with all resources, need grows to fit the resources available.
Hence the appeal of moving up to a fatter pipe.

>> Appliances other than switches can effectively block UDP connections.
>> If you ever intend to support a physically distributed domain
>> that exceeds what you can achieve using "maximum cable lengths"
>> (one of the drawbacks about moving away from "orange hose" and its
>> ilk was the drop in maximum cable length), you have to be careful
>> in considering what any *other* "interconnect appliances" do to
>> the traffic you intend to pass (and, if your protocol will be routed!)
>
> I would not be that foolish to do direct machine control over

You are again assuming the only use for the network is "direct machine
control". Do you want the service technician to have to drive across
town (or, to another state/province) to *interrogate* a failing
device? Do you want the engineering staff that have designed the
part to be machined to have to FedEx a USB drive with the program
for the wire EDM machine to the manufacturing site? Firmware updates
to require "on site" installation?

Or, do you want to develop yet another protocol for these activities
and a gateway *product* that ties the "secured" manufacturing network
to an external network?

[There's nothing wrong with exposing networks -- if you've taken
measures to *protect* them while exposed! Otherwise, what's the
value of a WAN?]

> unpredictable nets such as the Internet or even less over some
> wireless connections, such as WLANs, microwave or satellite links. All
> the security issues must be handled with wired connection and in some
> cases with certified LAN systems.
>
>> [It's surprising how *short* "100m" is when it comes to cable lengths!
>> Esp in a manufacturing setting where you might have to go *up* a
>> considerable way -- and, leave a suitable service loop -- before you
>> can even begin to go "over"! And, line-of-sight cable routing may be
>> impractical. For example, here (residential), the *average* length of
>> a network cable is close to 70 feet -- despite the fact that the
>> (2D) diagonal of the house is just about that same length *and* all
>> the drops are centrally terminated!]
>
> 100 m is the distance for a 10xBaseT twisted pair cable limit, putting
> switches in between will solve that.

Sure! Put a switch up in the metal rafters 20 ft above the
manufacturing floor :> I've a friend who owns a *small* machine shop
(mostly traditional Bridgeports, etc. but two or three wire EDM's).
I suspect he would be hard pressed to cover the shop floor from a
single switch -- probably 20m just to get up to the rafters and
back down again.

> For real heavy industry copper
> wiring is a no no issue and you have to use fibers anyway, with single
> mode cable fibers with over 30 km range without optical repeaters.

Gee, a moment ago we were talking about CAN... now suddenly we're
running optical fibre...

>> When someone later decides it should be a piece of cake for your
>> "engineering office" to directly converse with your controllers located
>> in the "manufacturing facility", you'll find yourself explaining
>> why that's not easily accomplished. "Why not? I can talk to
>> Google's servers in another *state*/country... (damn consultants
>> always trying to charge extra for stuff they should have done in
>> the first place!)"
>
> Isn't that a good thing that you can deny access from the outside
> world to a critical controller ?

If the technology *supports* remote communication, you can *still*
"deny access from the outside world to a critical controller"...
by CUTTING THE CABLE in a physical or virtual sense. OTOH, if
the technology *can't* get beyond your four walls, you can't just
"stretch" the cable!

> If there is a specific need to let a qualified person from a remote
> site access the device directly, build a secured VPN connection to the
> PC controlling the devices and use all required firewall etc. methods
> to restrict the access.

Another product.

What if the remote site is elsewhere on the "campus"? Or, just on the
other end of the building? E.g., most factories that I've been in
have an "office space" at one end of the building with the factory
floor "out back".

E.g., one of the places I worked had all the engineering offices up
front -- and the "factory" hiding behind a single door at the back
of the office. Other buildings were within a mile of the main
offices (most buildings were entire city blocks).

The (old) Burr Brown campus, here, (now TI) is a similar layout -- you'd
need a motorized cart to get around the facility but I'm sure they
wouldn't want to have to use sneakernet to move files/data to/from
the factory floor.

Grant Edwards

Jan 11, 2014, 12:58:27 PM
On 2014-01-11, Don Y <th...@isnotme.com> wrote:

> What if the message is VERY long (i.e., won't even fit in a jumbo
> frame -- assuming every device accepts jumbo frames) -- like a
> software update, CNC "program", etc.?
>
> Again: the protocol that is layered atop UDP has to include these
> provisions.

UDP/IP handles fragmentation and reassembly of datagrams (messages) up
to 64KB in length. While you have to deal with UDP datagrams that get
dropped, you don't have to worry about fragmentation.

--
Grant

Don Y

Jan 11, 2014, 2:11:06 PM
Hi Grant,
Yes -- the OP would have to make a *full* UDP implementation instead
of "cheating" (if all your messages can be forced to fit in ~500 bytes,
then you've just satisfied the low end for the requirements).

Given that the OP claims memory to be a concern, it's unlikely he's
going to want to have a *big* datagram buffer *and* be willing to
track (potentially) lots of fragments.

Recall, my comments are geared towards pointing out issues that
will affect the OP's design of *his* protocol, based on what he
is willing to accept beneath it.

E.g., adding sequence numbers to packets/messages; implementing timers
so you know *when* you can reuse sequence numbers (which will obviously
have constraints on their widths), etc.

There is a reason TCP is so "heavy" -- it takes care of *lots*
of these messy details for you! OTOH, Aleksander has expressly
ruled it out. So, he has to be aware of all those little details
and their analogs in *his* protocol.

John Devereux

Jan 11, 2014, 2:45:36 PM
We had "cheapernet"(?) where, when a single one of the dodgy homemade
BNCs went flakey, the whole network went down.

Fun times.


--

John Devereux

Don Y

Jan 11, 2014, 3:04:31 PM
There's a fair bit of hand-waving in that statement! Dealing with
raw ethernet frames means MAC addrs and little else. No concept of
"networks"... just "my (MAC) address and your (MAC) address".

Bring in IP and now you have to map IP to MAC (and vice versa).
Another header (with its checksum, etc). The possibility of
fragmentation. TTL. Protocol demultiplexing. Routing options.
Timestamps. Etc.

You can design an application where neither ARP nor RARP are
*required*. But, in practice, you need to implement *both*
(unless you hard-code IP addresses in each node).

[And, the obvious followup question: do you want to support
ICMP? Or, move its features into *your* protocol??]

UDP sits atop IP and "just" adds port numbers (src,dest) and an
optional *datagram* checksum (IP doesn't checksum the payload).

> ARP issues require about a page of code and UDP even less.

A naive ARP implementation can be small (I think more than a
page as there are two sides to ARP -- issuing and answering
requests). But, if you are relying on IP addrs as an authentication
mechanism, you probably want to think more carefully about just how
naively you make such an implementation!

E.g., OP claims memory constraints. So, probably don't want a
sizeable ARP cache. OTOH, probably don't want to issue an ARP
request for each message!

In a synchronous protocol, you could harvest the (IP,MAC) from the
inbound "request" and use it in the reply (eliminating the need for
a cache and cache lookup). But, then each "message handler" has
to cache this information for its message (unless you have a single
threaded message handler). And, your network stack has to pass
this information "up" to the application layer.
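As a sketch, the "harvest the binding" idea is just the receive path tagging each request with its source addresses so the reply can reuse them. The struct layout here is an assumption for illustration, not any real stack's API:

```c
#include <stdint.h>
#include <string.h>

struct peer {
    uint8_t  mac[6];
    uint32_t ip;          /* network byte order */
};

struct request {
    struct peer from;     /* filled in by the receive path */
    /* ...decoded message fields would follow... */
};

/* Address the reply straight from the request: no ARP cache,
 * no lookup, nothing retained after the reply is sent. */
void reply_addr(const struct request *req, struct peer *dst)
{
    memcpy(dst->mac, req->from.mac, sizeof dst->mac);
    dst->ip = req->from.ip;
}
```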

The OP has to decide if comms are one-to-one or many-to-one
(or many-to-many) to decide how many (IP,MAC) bindings need
to be tracked -- and how best to do so depending on whether
multiple messages can be "active" at any given time (i.e.,
can you initiate a command and wait for its completion while
another message makes inquiries as to its progress?)

In my recent projects, I have a separate (secure) protocol that
carries (IP,MAC) bindings to/from nodes. One of the sanity tests
I apply to packets is to verify these fields agree with the
"authoritative" bindings that I've previously received. I.e.,
if you want to attack my network, the *first* thing you have to
do is spoof a valid MAC address and its current IP binding. The
presence of any "bogus" packets tells a node that the network
has been compromised (or, is under attack).

[There are many other such hurdles/sanity checks before you
can work around my security/integrity]

UDP can be small -- *if* it can rely on IP's services beneath it!
Too often (in resource-constrained implementations), IP and UDP
get merged into a single "crippled" layer. This can work well *if*
you make provisions for any "foreign traffic" that your "stack"
(stackette? :> ) simply can't accommodate. (Recall, if you don't
have ICMP, then you have to sort out how you are going to handle
these "exceptions" -- which, in their eyes, *aren't* exceptions!)

Don Y

Jan 11, 2014, 3:13:00 PM
Hi John,

On 1/11/2014 12:45 PM, John Devereux wrote:
> Grant Edwards<inv...@invalid.invalid> writes:
>> On 2014-01-11, upsid...@downunder.com<upsid...@downunder.com> wrote:
>>
>>> The cost of the vampire tap transceivers was so high that the first
>>> "Ethernet" network I designed and built was a network in a computer
>>> room between computers, using AUI cabling (with 15-pin connectors)
>>> through a DELNI "hub", later adding some long AUI cables for terminal
>>> servers at the other end of the office. Thus no coaxial cable used.
>>
>> The first place I worked where "Ethernet" was widely used, there was a
>> thick Ethernet backbone, but the vast majority of the wiring was AUI
>> cables and hubs.

"Orange hose" was a misnomer. Should have been called "orange ROD"!
I suspect the bend radius on that stuff was something on the order
of a quarter of a mile! :< If it wasn't for AUI taps, you could never
have run the cable to all the nodes that wanted to connect to it!

> We had "cheapernet"(?) where, when a single one of the dodgy homemade
> BNCs went flakey, the whole network went down.

I *loved* 10Base2! It made cable routing (for me) *so* much easier!
Just string *one* cable from A to B to C to...

By contrast, (physical) stars tend to have *bundles* of cables
re-traversing the same ground as they make the trek out to the
"box next to" the one you just wired.

I had more problems with bad terminators and connections *to* NIC's
failing (i.e., the T or F put too much stress on the connection
as it "hung" off the back of the NIC).

Of course, adding/removing nodes was a chore. But, all the nodes
were mine (or, were within a single "product") so it was easy to
"administer" such changes.

> Fun times.

Don Y

unread,
Jan 11, 2014, 3:24:48 PM1/11/14
to
Hi Aleksandar,

On 1/10/2014 12:04 PM, Aleksandar Kuktin wrote:
>> Can you,
>> instead, create a single packet format whose contents are indicated by a
>> "packet type" specified in the header? Even if this means leaving space
>> for values/parameters that might not be required in every packet type?
>> For example:
>> <header> <field1> <field2> <field3> <field4>
>> Where certain fields may not be used in certain packet types (their
>> contents then being "don't care").
>>
>> Alternatively, a packet type that implicitly *defines* the format of the
>> balance of the packet. For example:
>> type1:<header1> <fieldA> <fieldB>
>> type2:<header2> <fieldA>
>> type3:<header3> <fieldA> <fieldB> <fieldC> <fieldD>
>> (where the format of each field may vary significantly between message
>> types)
>
> This is explicitly what I don't want. That way, I would need to send
> many, many packets to transmit my message across.

You could always include provisions that allow two or more of these
to be "bundled" together. I.e., as the parser finishes, it returns
to its start and examines the balance of the "packet" to see if another
"message type" is present.
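A sketch of that "bundled records" loop, assuming one-byte type and
length fields (the field widths, the handler callback, and all the names
here are illustrative, not a proposal for the actual wire format):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* One frame carries several type-length-value records back to back;
 * the parser just loops until the payload is exhausted. */
typedef void (*handler_t)(uint8_t type, const uint8_t *val, uint8_t len);

/* Trivial handler for demonstration: count the records seen. */
static int records_seen;
static void count_record(uint8_t type, const uint8_t *val, uint8_t len)
{
    (void)type; (void)val; (void)len;
    records_seen++;
}

/* Returns number of records parsed, or -1 on a malformed frame. */
int parse_bundle(const uint8_t *buf, size_t n, handler_t handle)
{
    size_t off = 0;
    int count = 0;
    while (off + 2 <= n) {                 /* room for type + length? */
        uint8_t type = buf[off];
        uint8_t len  = buf[off + 1];
        if (off + 2 + (size_t)len > n)     /* truncated record: bail */
            return -1;
        handle(type, buf + off + 2, len);
        off += 2 + (size_t)len;
        count++;
    }
    return (off == n) ? count : -1;        /* trailing byte: error */
}
```

Note the truncation check: on a memory-starved device that is the one
place where a hostile or corrupt length byte could walk the parser off
the end of the buffer.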

>>> But I would like to hear other (read: opposing) opinions. Particularly
>>> so since I am self-taught so there may be considerations obvious to
>>> graduated engineers that I am oblivious to.
>>>
>>> BTW, the periphery that is on the other end is autonomous and rather
>>> intelligent, but very resource constrained. Really, resource
>>> constrainment of the periphery is my main problem here.
>>
>> So, the less "thinking" (i.e., handling of variations) that the remote
>> device has to handle, the better.
>
> Hmmm... Not really. Availability of CPU cycles depends on other details of
> the device, but if need be I can make the device drown in its own CPU
> cycles. Memory, on the other hand, is constrained.

What is "a little" to some may be "a lot" to others. :>
You'll have to decide what you can spare and where to apply it.
Much will depend on your coding style and the framework you
code within.

E.g., (memory of another reply fresh in my mind) when faced with a
missing (MAC,IP) binding to transmit a message (or reply), you could
send out an ARP request and explicitly wait for its reply; or,
consider your work "done" and wait for the ARP reply to return and
identify itself as such and handle that information *without*
remembering that you had explicitly requested it! ("Ah, I can send
this message because I have all the information I need to create
the reply datagram")

In one case, you have a separate process/thread/task (with all
that overhead) blocked awaiting the reply. In the other case,
your single thread once again notices that it doesn't have the
(MAC,IP) binding that it needs -- but, examines a timer that
tells it NOT to issue an ARP request, yet ("Gee, I wonder why?")

[Sorry, this would have been easier to explain with pseudo-code]
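For what it's worth, a rough C rendering of that stateless pattern
(the single-entry ARP cache, the tick-based retry timer, and every name
here are invented for the sketch):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Single-thread, no-blocking ARP pattern: the sender never waits.
 * If the (IP,MAC) binding is missing it fires a request (rate limited
 * to one per ARP_RETRY ticks) and gives up; the reply handler just
 * fills the cache, and the *next* send attempt succeeds. */
#define ARP_RETRY 10

static uint32_t cached_ip;
static uint8_t  cached_mac[6];
static int      have_binding;
static uint32_t last_request_tick;
static int      requests_sent;

static void send_arp_request(uint32_t ip, uint32_t now)
{
    (void)ip;                     /* stub: would emit the ARP frame */
    last_request_tick = now;
    requests_sent++;
}

/* Returns 1 if the frame could be sent, 0 if the caller retries later. */
int try_send(uint32_t dst_ip, uint32_t now)
{
    if (have_binding && cached_ip == dst_ip)
        return 1;                           /* binding known: send now */
    if (requests_sent == 0 || now - last_request_tick >= ARP_RETRY)
        send_arp_request(dst_ip, now);      /* "Gee, I wonder why?" */
    return 0;
}

/* Called for *any* ARP reply seen -- whether or not we asked for it. */
void arp_reply_seen(uint32_t ip, const uint8_t mac[6])
{
    cached_ip = ip;
    memcpy(cached_mac, mac, 6);
    have_binding = 1;
}
```

Nothing here remembers *why* the request was made -- which is the whole
point: no blocked thread, just a cache miss that heals itself.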

upsid...@downunder.com

unread,
Jan 11, 2014, 3:31:33 PM1/11/14
to
On Wed, 8 Jan 2014 21:30:09 +0000 (UTC), Aleksandar Kuktin
<aku...@gmail.com> wrote:

>Hi all.
>
>I'm making a protocol for communication between a PC and a peripheral
>device. The protocol is expected to, at first, run on raw Ethernet but I
>am also supposed to not make any blunders that would make it impossible
>to later use the exact same protocol on things like IP and friends.
>
>Since I saw these kinds of things in many Internet protocols (DNS, DHCP,
>TCP options, off the top of my head - but note that these may have a
>different order of fields), I have decided to make it an array of type-
>length-value triplets encapsulated in the packet frame (no header). The
>commands would fill the "type" field, "length" would specify the length
>of data ("value") following the length field, and "value" would contain
>the data for the command.
>
>But I would like to hear other (read: opposing) opinions. Particularly so
>since I am self-taught so there may be considerations obvious to
>graduated engineers that I am oblivious to.
>
>BTW, the periphery that is on the other end is autonomous and rather
>intelligent, but very resource constrained. Really, resource
>constrainment of the periphery is my main problem here.
>
>
>Some interesting questions:
>Is omitting a packet header a good idea? In the long run?
>
>If I put a packet header, what do I put in it? Since addressing and error
>detection and "recovery" is supposed to be done by underlying protocols,
>the only thing I can think of putting into the header is the total-length
>field, and maybe, maybe, maybe a packet-id or transaction-id field. But I
>really don't need any of these.
>
>My reasoning with packet-id and transaction-id (and protocol-version,
>really) is that I don't need them now, so I can omit them, and if I ever
>do need them, I can just add a command which implements them. In doing
>this, am I setting myself up for a very nasty problem in the future?
>
>Is using flexible packets like this one (opposed to the contents of,
>say, IP header which has strictly defined fields) a good idea, or am I
>better off rigidifying my packets?
>
>Is there a special preference or reason as to why some protocols do TLV
>and others do LTV? (Note that I am not trying to ignite a holy war, I'm
>just asking.)
>
>Is it good practice to require aligning the beginning of a TLV with a
>boundary, say 16-bit word boundary?

Have you looked at ASN.1, is there something that makes it unusable
for your application ?

Simon Clubley

unread,
Jan 11, 2014, 5:00:48 PM1/11/14
to
On 2014-01-11, Don Y <Thi...@not.Me> wrote:
> Hi John,
>
> On 1/11/2014 12:45 PM, John Devereux wrote:
>
>> We had "cheapernet"(?) where, when a single one of the dodgy homemade
>> BNCs went flakey, the whole network went down.
>
> I *loved* 10Base2! It made cable routing (for me) *so* much easier!
> Just string *one* cable from A to B to C to...
>

But then you had to deal with users deciding to unscrew a BNC connector...

Personally, I never had any problems with users doing that, but I read
somewhere one site fixed this problem by putting "Danger - high voltage!"
stickers on the BNC connectors. :-)

Simon.

PS: I only "love" 10Base2 in the sense that I "love" not having to deal
with it anymore... :-)

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
[Note: email address not currently working as the system is physically moving]
Microsoft: Bringing you 1980s technology to a 21st century world

Don Y

unread,
Jan 11, 2014, 5:22:23 PM1/11/14
to
Hi Simon,

On 1/11/2014 3:00 PM, Simon Clubley wrote:
> On 2014-01-11, Don Y<Thi...@not.Me> wrote:
>> On 1/11/2014 12:45 PM, John Devereux wrote:
>>
>>> We had "cheapernet"(?) where, when a single one of the dodgy homemade
>>> BNCs went flakey, the whole network went down.
>>
>> I *loved* 10Base2! It made cable routing (for me) *so* much easier!
>> Just string *one* cable from A to B to C to...
>
> But then you had to deal with users deciding to unscrew a BNC connector...

Only if that is an upstream/downstream connector.

As I said, it was for *me* -- all the machines were mine (so no other
"users" that I had to coordinate my activities with). At one time,
I had three SPARC LX's sitting side by side on a desktop with short
lengths of coax connecting them. Had that been a physical star
technology, I would have had to run three separate cables to them
(or, one cable and a small switch -- along with its power supply).
Currently, I have 16 nodes in my office -- over a span of 24 feet.
Using a switch there is a real PITA (especially if I want to move a
node and now have to replace a wire all the way back to the switch
with a slightly longer one!).

[I recently upgraded the switch and moved it. So, *every* wire
had to be replaced as none of their lengths were appropriate
any longer]

And, in the few products that I deployed with 10Base2, there's no
users *inside* those products, either! (i.e., do you worry that some
"user" is going to perturb the CAN bus in your automobile and render it
inoperable as you are driving down the road? Or, pull a PCI card
out of your PC while it is running?)

Simon Clubley

unread,
Jan 12, 2014, 5:55:22 AM1/12/14
to
On 2014-01-11, Don Y <th...@isnotme.com> wrote:
>
> And, in the few products that I deployed with 10Base2, there's no
> users *inside* those products, either! (i.e., do you worry that some
> "user" is going to perturb the CAN bus in your automobile and render it
> inoperable as you are driving down the road? Or, pull a PCI card
> out of your PC while it is running?)
>

If either of those were accessible in the way a 10Base2 cable could be,
then the answer is probably yes. :-)

Simon.

Robert Wessel

unread,
Jan 12, 2014, 11:11:06 AM1/12/14
to
I've made the comment before that the horror of thinnet
(cheapernet/10base2) is what gave other technologies (like
Token-Ring*) room to exist at all, and nearly killed Ethernet. For
all its faults, 10base5 was at least reliable. 10baseT pretty much
solved all the problems, and thus killed everything else.


*Token-Ring was in many ways a PITA to work with, and not really all
that reliable, but it made 10base2 look like a complete joke in terms
of reliability.

Robert Wessel

unread,
Jan 12, 2014, 11:13:53 AM1/12/14
to
I take it you've never written a parser for ASN.1? The OP seems to
have resource constrained systems in mind. Generating the stuff is
not so bad, but decoding it is a huge PITA.
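To make the "decoding is a PITA" point concrete: even the *length* field
alone in BER (the usual ASN.1 wire encoding) comes in three flavors --
short form, long form, and indefinite. A sketch of just that one
sub-parse, with the limits a small device might impose:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Decode a BER length.  Returns the content length and sets *hdr to
 * the number of octets consumed, or returns -1 for the indefinite and
 * overlong forms a constrained device may prefer to reject outright. */
long ber_length(const uint8_t *p, size_t n, size_t *hdr)
{
    if (n < 1)
        return -1;
    if (p[0] < 0x80) {            /* short form: length in one octet */
        *hdr = 1;
        return p[0];
    }
    if (p[0] == 0x80)             /* indefinite form: needs a scan */
        return -1;                /*   for end-of-contents octets   */
    size_t octets = p[0] & 0x7F;  /* long form: next N octets = length */
    if (octets > sizeof(long) - 1 || n < 1 + octets)
        return -1;
    long len = 0;
    for (size_t i = 0; i < octets; i++)
        len = (len << 8) | p[1 + i];
    *hdr = 1 + octets;
    return len;
}
```

And that's before tag classes, constructed vs. primitive encodings, and
nested structures -- compared with a fixed-width TLV where the length is
always just one field at one offset.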

Don Y

unread,
Jan 12, 2014, 2:24:28 PM1/12/14
to
On 1/12/2014 3:55 AM, Simon Clubley wrote:
> On 2014-01-11, Don Y<th...@isnotme.com> wrote:
>>
>> And, in the few products that I deployed with 10Base2, there's no
>> users *inside* those products, either! (i.e., do you worry that some
>> "user" is going to perturb the CAN bus in your automobile and render it
>> inoperable as you are driving down the road? Or, pull a PCI card
>> out of your PC while it is running?)
>
> If either of those were accessible in the way a 10Base2 cable could be,
> then the answer is probably yes. :-)

Well, I suppose you could take out a 15 foot ladder and climb up
onto a deployed device *IN USE* and start tugging on cables.
Of course, if you did so, "disrupting the network" would be
the least of your concerns (I'd worry more about breaking your
neck from the fall or getting electrocuted as you climb over a
device that isn't intended to be "walked on" :> )

Don Y

unread,
Jan 12, 2014, 2:26:52 PM1/12/14
to
Blech! You *really* don't want to go there...

Are there existing protocols used by similar devices that the
OP can "borrow" (or, learn from)? Even some existing protocol
ported to run *over* IP, etc.?

Simon Clubley

unread,
Jan 12, 2014, 2:47:02 PM1/12/14
to
It wasn't physically possible to do that in all environments unfortunately.

Consider, for example, some possible office environments from the 1990s.
These days, if someone disrupts their own connection, it's only their own
device which is affected, but in that timeframe you might have had a 10Base2
connection going from device to device within a region of a building.

Don Y

unread,
Jan 12, 2014, 3:47:07 PM1/12/14
to
Hi Simon,

On 1/12/2014 12:47 PM, Simon Clubley wrote:
> On 2014-01-12, Don Y<th...@isnotme.com> wrote:
>> On 1/12/2014 3:55 AM, Simon Clubley wrote:
>>>
>>> If either of those were accessible in the way a 10Base2 cable could be,
>>> then the answer is probably yes. :-)
>>
>> Well, I suppose you could take out a 15 foot ladder and climb up
>> onto a deployed device *IN USE* and start tugging on cables.
>> Of course, if you did so, "disrupting the network" would be
>> the least of your concerns (I'd worry more about breaking your
>> neck from the fall or getting electrocuted as you climb over a
>> device that isn't intended to be "walked on" :> )
>
> It wasn't physically possible to do that in all environments unfortunately.

Of course! Nor is it likely that you'll have a dozen or more
nodes for a single individual (or, an entire subnet, for that
matter)!

Being able to use a (bus) network *in* a product instead of having
to run control cables to a central "electronics cabinet" (star)
makes a *huge* difference in installation and maintenance costs!

E.g., a licensed electrician is required to "run cable" in most
facilities. You want to run sense leads from thermocouples,
dew point sensors, anemometers, etc. to a "controller" and you
spend several days of that electrician's time routing each cable
to the equipment cabinet. And, those costs vary depending on
how easy it is to get from points A,B,C... to that cabinet. It
also determines where you can *locate* that cabinet (without
"optional" supplemental signal conditioning).

OTOH, if you can wire all the field devices at the manufacturing
facility and just have *one* cable that the electrician has to route
(besides "utilities"), then installation costs drop by several
kilobucks!

> Consider, for example, some possible office environments from the 1990s.
> These days, if someone disrupts their own connection, it's only their own
> device which is affected, but in that timeframe you might have had a 10Base2
> connection going from device to device within a region of a building.

Of course! But, in my case, they're *all* "my" connections. And,
I'd be aware of what sort of traffic is live on the network when I
opted to disconnect a host (which can be done without interrupting the
rest of the segment provided you aren't *moving* that host and
necessitating a "cable adjustment").

I see more issues with twisted pair wiring because it "looks innocent";
people aren't "intimidated" by it. And, the connectors are total crap.
Worse yet, they *almost* work when the locking tab snaps off -- until
the connector works its way loose (because someone moved the piece
of equipment into which it was plugged).

Then, we have all the home-made cables to contend with (it seems much
easier to build a robust BNC-terminated cable than a twisted pair...
for one thing, you don't need a magnifying glass to inspect your work!)

[I received an accusatory message the other day claiming that *I*
"broke the printer". I replied: "Your handyman was there drilling
holes in the counters. Wanna bet he moved the printer to do that?
Wanna bet there's a cable to/from the printer that is now not seated
properly in its jack?" Long silence. "Um, next time you're here,
could you please fix the printer cable for us?"]

(And, we'll ignore the unfortunate "compatibility" with RJ11's...)

One thing that was great about orange hose was that **nobody** messed
with it! :>

upsid...@downunder.com

unread,
Jan 12, 2014, 10:56:48 PM1/12/14
to
On Sun, 12 Jan 2014 19:47:02 +0000 (UTC), Simon Clubley
<clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:

>On 2014-01-12, Don Y <th...@isnotme.com> wrote:
>> On 1/12/2014 3:55 AM, Simon Clubley wrote:
>>>
>>> If either of those were accessible in the way a 10Base2 cable could be,
>>> then the answer is probably yes. :-)
>>
>> Well, I suppose you could take out a 15 foot ladder and climb up
>> onto a deployed device *IN USE* and start tugging on cables.
>> Of course, if you did so, "disrupting the network" would be
>> the least of your concerns (I'd worry more about breaking your
>> neck from the fall or getting electrocuted as you climb over a
>> device that isn't intended to be "walked on" :> )
>
>It wasn't physically possible to do that in all environments unfortunately.
>
>Consider, for example, some possible office environments from the 1990s.
>These days, if someone disrupts their own connection, it's only their own
>device which is affected, but in that timeframe you might have had a 10Base2
>connection going from device to device within a region of a building.

The nasty thing about 10Base2 is that the cable shield should be
grounded at _exactly_ one point, usually at one of the terminator
resistors.

Thus if the BNC connector touched a grounded metallic cable duct, the
network failed. Thus, you had to cover the connectors with some
insulating material and also make sure that any T-connector
disconnected from a device did not make contact with any grounded
objects.

upsid...@downunder.com

unread,
Jan 12, 2014, 11:09:02 PM1/12/14
to
On Sun, 12 Jan 2014 13:47:07 -0700, Don Y <th...@isnotme.com> wrote:

>Being able to use a (bus) network *in* a product instead of having
>to run control cables to a central "electronics cabinet" (star)
>makes a *huge* difference in installation and maintenance costs!

Since branches are not allowed in 10Base2, you have to run the bus via
_all_ devices, one cable to the T-connector and an other cable back,
quickly extending past the 200 m limit.

In the 10Base5 days, the thick cable was run the shortest way around
the building and long AUI cables were run from each computer to the
vampire tap transceiver sitting on the RG-8 bus cable.

Later on external 10Base2 transceivers with AUI 15 connectors could be
placed optimally along the shortest bus path and again connect the
device via the AUI cable to the transceiver.

With the use of integrated transceivers and T-connectors, you had to
route the Ethernet traffic back and forth, losing most of the
benefits of a bus structure.

Don Y

unread,
Jan 13, 2014, 2:30:38 AM1/13/14
to
On 1/12/2014 9:09 PM, upsid...@downunder.com wrote:
> On Sun, 12 Jan 2014 13:47:07 -0700, Don Y<th...@isnotme.com> wrote:
>
>> Being able to use a (bus) network *in* a product instead of having
>> to run control cables to a central "electronics cabinet" (star)
>> makes a *huge* difference in installation and maintenance costs!
>
> Since branches are not allowed in 10Base2, you have to run the bus via
> _all_ devices, one cable to the T-connector and an other cable back,
> quickly extending past the 200 m limit.

I don't design aircraft carriers! :> 10m is more than enough to
run from one end of a piece of equipment to the other -- stopping
at each device along the way. 10Base2 was a win when you had lots of
devices "lined up in a row" where it was intuitive to just "daisy
chain" them together. E.g., imagine what a CAN bus deployment
would look like if it had to adhere to a physical star topology
(all those "nodes" sitting within inches of each other yet unable to
take advantage of their proximity for cabling economies -- instead,
having to run individual drops off to some central "hub/switch")

[As we were rolling our own hardware, no need for T's -- two BNC's
on each device: upstream + downstream.]

> In the 10Base5 days, the thick cable was run the shortest way around
> the building and long AUI cables were run from each computer to the
> vampire tap transceiver sitting on the RG-8 bus cable.

But AUI cables were *long*, of necessity. You simply couldn't
route (as in "bend") the coax to get everywhere the bus wanted
to *be*!

> Later on external 10Base2 transceivers with AUI 15 connectors could be
> placed optimally along the shortest bus path and again connect the
> device via the AUI cable to the transceiver.
>
> With the use of integrated transceivers and T-connectors, you had to
> route the Ethernet traffic back and forth, loosing most of the
> benefits of a bus structure.

You could create a "spoked wheel" distribution pattern -- each spoke
being a network segment. E.g., when I ran 10Base2 here, I ran a
cable into each room to service just the nodes within that room.
No need to "return" from the (electrically) far end of the spoke...
just let the segment end, there!

In a typical office environment, you don't have the same sort of
"high node density" that I have (simply because I have less space
to cram everything into! :< ) So, the ability to run a cable
from one device to the next device SITTING RIGHT BESIDE IT was a
huge win -- instead of having to run wires from each of these
to a *third* point that tied everything together.

For example, I just wired a "computer lab" where the machines
sit next to each other (~4 ft apart). Almost exactly 200 ft of
cable despite the fact that the two machines farthest apart are
less than 15 ft as the crow flies -- and could easily have been
tethered together with ~40ft of coax. <shrug>

George Neuner

unread,
Jan 13, 2014, 4:05:52 PM1/13/14
to
On Sun, 12 Jan 2014 10:11:06 -0600, Robert Wessel
<robert...@yahoo.com> wrote:

>*Token-Ring was in many ways a PITA to work with, and not really all
>that reliable, but it made 10base2 look like a complete joke in terms
>of reliability.

Wiring TR was a PITA and the NICs initially were too complex to be
reliable ... but that got fixed and TR's predictable timing made
analyzing systems and programming reliably timed delivery -
particularly across repeaters - easier even than on CAN.

FDDI rings had the same good features (and, of course, the same bad
ones).

YMMV,
George

Don Y

unread,
Jan 13, 2014, 5:14:52 PM1/13/14
to
Hi George,

On 1/13/2014 2:05 PM, George Neuner wrote:
> On Sun, 12 Jan 2014 10:11:06 -0600, Robert Wessel
> <robert...@yahoo.com> wrote:
>
>> *Token-Ring was in many ways a PITA to work with, and not really all
>> that reliable, but it made 10base2 look like a complete joke in terms
>> of reliability.
>
> Wiring TR was a PITA and the NICs initially were too complex to be

Connectors were expensive. But, with a centralized MAU/hub/switch,
the same sort of "star topology" related issues prevail.

> reliable ... but that got fixed and TR's predictable timing made
> analyzing systems and programming reliably timed delivery -
> particularly across repeaters - easier even than on CAN.

At one time, I did an analysis that suggested even 4Mb TR would
outperform 10Mb ethernet when you were concerned with temporal
guarantees.

Of course, you can develop a token passing protocol atop ethernet.
But, kind of defeats most of the reasons for *using* ethernet!
(esp if you don't want to constrain the network size/topology
ahead of time)

> DDI rings had the same good features (and, of course, the same bad
> ones).

Not fond of optical "switches"? :>

Robert Wessel

unread,
Jan 13, 2014, 6:21:01 PM1/13/14
to
On Mon, 13 Jan 2014 15:14:52 -0700, Don Y <th...@isnotme.com> wrote:

>Hi George,
>
>On 1/13/2014 2:05 PM, George Neuner wrote:
>> On Sun, 12 Jan 2014 10:11:06 -0600, Robert Wessel
>> <robert...@yahoo.com> wrote:
>>
>>> *Token-Ring was in many ways a PITA to work with, and not really all
>>> that reliable, but it made 10base2 look like a complete joke in terms
>>> of reliability.
>>
>> Wiring TR was a PITA and the NICs initially were too complex to be
>
>Connectors were expensive. But, with a centralized MAU/hub/switch,
>the same sort of "star topology" related issues prevail.
>
>> reliable ... but that got fixed and TR's predictable timing made
>> analyzing systems and programming reliably timed delivery -
>> particularly across repeaters - easier even than on CAN.
>
>At one time, I did an analysis that suggested even 4Mb TR would
>outperform 10Mb ethernet when you were concerned with temporal
>guarantees.


Which was one of the touted features of TRN. Unfortunately for TRN,
approximately zero users actually cared about that.

Don Y

unread,
Jan 13, 2014, 7:35:02 PM1/13/14
to
Hi Robert,

On 1/13/2014 4:21 PM, Robert Wessel wrote:

>>> reliable ... but that got fixed and TR's predictable timing made
>>> analyzing systems and programming reliably timed delivery -
>>> particularly across repeaters - easier even than on CAN.
>>
>> At one time, I did an analysis that suggested even 4Mb TR would
>> outperform 10Mb ethernet when you were concerned with temporal
>> guarantees.
>
> Which was one of the touted features of TRN. Unfortunately for TRN,
> approximately zero users actually cared about that.

It's too bad that "fast" has won out over "predictable" (in
many things -- not just network technology).

IIRC, SMC was the only firm making TR silicon. (maybe TI had some
offerings?) Not sure if they even offer any, currently.

[I think I still have some TR connectors, NICs and even a "hub"
stashed... somewhere]

Robert Wessel

unread,
Jan 13, 2014, 7:45:41 PM1/13/14
to
Heck, I've still got a ring running...

I'm not sure how much presence SMC had in TRN, Thomas Conrad and Madge
were the big non-IBM players. IBM, of course, had its own chipsets,
and they did sell them to other vendors.

Tom Gardner

unread,
Jan 13, 2014, 7:46:08 PM1/13/14
to
On 14/01/14 00:35, Don Y wrote:
> Hi Robert,
>
> On 1/13/2014 4:21 PM, Robert Wessel wrote:
>
>>>> reliable ... but that got fixed and TR's predictable timing made
>>>> analyzing systems and programming reliably timed delivery -
>>>> particularly across repeaters - easier even than on CAN.
>>>
>>> At one time, I did an analysis that suggested even 4Mb TR would
>>> outperform 10Mb ethernet when you were concerned with temporal
>>> guarantees.
>>
>> Which was one of the touted features of TRN. Unfortunately for TRN,
>> approximately zero users actually cared about that.
>
> It's too bad that "fast" has won out over "predictable" (in
> many things -- not just network technology).

Token rings, e.g. FDDI and others, had config/management
problems that largely negated predictability guarantees,
e.g. dropped tokens, duplicated tokens, complexity (have
a look at all the FSMs!)

CSMA/CD is much easier to manage and faultfind.

Don Y

unread,
Jan 13, 2014, 7:54:42 PM1/13/14
to
Hi Robert,
Grrr... I misremembered! It was ARCnet that SMC supported. (cheaper)

Don Y

unread,
Jan 13, 2014, 8:00:09 PM1/13/14
to
Hi Tom,
The problem is you have to layer a *different* protocol onto those
media if you want deterministic behavior. AND, prevent any
"noncompliant" traffic from using the medium at the same time.

E.g., you could have "office equipment" and "process control
equipment" sharing a token-passing network and *still* have
guarantees for the process control subsystems. Not the case
with things like ethernet (unless you create a special
protocol stack for those devices and/or interpose some bit of
kit that forces them to "behave" properly).

Robert Wessel

unread,
Jan 13, 2014, 8:37:03 PM1/13/14
to
And Arcnet still exists too... (Although Arcnet was token-bus, not
token-ring).

Les Cargill

unread,
Jan 13, 2014, 11:40:40 PM1/13/14
to
As you are no doubt aware, what happened there was Ethernet
switching and it well and truly solved the collision problem.

Most, if not all packets live on a collision domain with exactly
two NICs on it - except for 802.11>x< , where just about
any concept smuggled form the other old standards doubtless lives in
the air link.

--
Les Cargill



Robert Wessel

unread,
Jan 13, 2014, 11:55:35 PM1/13/14
to
And given that most 100Mb and faster (wired) Ethernet links are full
duplex these days, there's effectively no collision domain at all.

OTOH, that doesn't prevent the switch from dropping packets if the
destination port is sufficiently busy.

Don Y

unread,
Jan 14, 2014, 12:49:53 AM1/14/14
to
Hi Les,

On 1/13/2014 9:40 PM, Les Cargill wrote:
It's not collisions that are the problem. Rather, it is
timeliness guarantees. A node on an ethernet switch has
no guarantee as to when -- or *if* -- its packets will
be delivered.

Switches have fixed size memories. There are no guarantees that
packets sent to the switch ever *go* anywhere.

By contrast, token passing networks gave assigned timeslots.
You *knew* when your "turn" to use the media would come along.
And, were *guaranteed* this by the basic design of the network
itself (not some other protocol layered on top of it).

Ever seen any timing guarantees for a generic network switch?
It's left as an exercise for the user: look at the packet buffer
size in the switch, determine whether it is a store-and-forward
switch or whether it can exploit cut-through technology, whether
it is blocking/nonblocking, *and* the datacomm characteristics
of all the other nodes on your network (will they cooperate
with each other's needs? Or, blindly use as much as they can
get??) and then try to come up with a hard-and-fast number
to determine the expected latency for a specific packet on a
specific node.

Repeat the exercise for a token passing network.

[If you think that ethernet makes those guarantees, then you
can elide all the acknowledgements in the protocols and still
be VERY CONFIDENT that everything STILL works properly :> ]

Don Y

unread,
Jan 14, 2014, 12:52:24 AM1/14/14
to
Hi Robert,

On 1/13/2014 9:55 PM, Robert Wessel wrote:

[attrs elided]

>>> The problem is you have to layer a *different* protocol onto those
>>> media if you want deterministic behavior. AND, prevent any
>>> "noncompliant" traffic from using the medium at the same time.
>>>
>>> E.g., you could have "office equipment" and "process control
>>> equipment" sharing a token-passing network and *still* have
>>> guarantees for the process control subsystems. Not the case
>>> with things like ethernet (unless you create a special
>>> protocol stack for those devices and/or interpose some bit of
>>> kit that forces them to "behave" properly).
>>
>> As you are no doubt aware, what happened there was Ethernet
>> switching and it well and truly solved the collision problem.
>>
>> Most, if not all packets live on a collision domain with exactly
>> two NICs on it - except for 802.11>x< , where just about
>> any concept smuggled form the other old standards doubtless lives in
>> the air link.
>
> And given that most 100Mb and faster (wired) Ethernet links are full
> duplex these days, there's effectively no collision domain at all.
>
> OTOH, that doesn't prevent the switch from dropping packets if the
> destination port is sufficiently busy.

Or for the actions of one node influencing the delivery of traffic
from *another* node! ("Betty in accounting is printing a lengthy
report -- the CNC machines have stopped as their input buffers are
now empty...")

upsid...@downunder.com

unread,
Jan 14, 2014, 1:23:54 AM1/14/14
to
On Mon, 13 Jan 2014 00:30:38 -0700, Don Y <th...@isnotme.com> wrote:

>> Since branches are not allowed in 10Base2, you have to run the bus via
>> _all_ devices, one cable to the T-connector and an other cable back,
>> quickly extending past the 200 m limit.
>
>I don't design aircraft carriers! :> 10m is more than enough to
>run from one end of a piece of equipment to the other -- stopping
>at each device along the way. 10Base2 was a win when you had lots of
>devices "lined up in a row" where it was intuitive to just "daisy
>chain" them together. E.g., imagine what a CAN bus deployment
>would look like if it had to adhere to a physical star topology
>(all those "nodes" sitting within inches of each other yet unable to
>take advantage of their proximity for cabling economies -- instead,
>having to run individual drops off to some central "hub/switch")
>
>[As we were rolling our own hardware, no need for T's -- two BNC's
>on each device: upstream + downstream.]

Did you by chance have one male and one female BNC on the device ?

And what happens, when someone wants to pull out the device ? _All_
traffic in the net is disrupted, until the person finds how to join
the two cables together, i.e. finds a female-female adapter after a 15
minute search :-)

Robert Wessel

unread,
Jan 14, 2014, 1:41:14 AM1/14/14
to
Even without errors, the constraints weren't very tight - your node
could well get the next available slot after the 200 other nodes
waiting to transmit a 4K* frame, even with priority reservations
(admittedly that could be limited by controlling the number of nodes
on the ring). That would be the better part of two seconds. And if
there was any sort of token recovery action going on, several seconds
of disruption were normal.


*4472 bytes for 4Mb TRN, 17800 for 16Mb
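
The "better part of two seconds" figure is easy to sanity-check. A rough
sketch of the worst case (ignoring token-passing and framing overhead):
every one of the 200 other stations transmits one maximum-size frame
before your turn comes around.

```python
# Worst-case wait for the token on 4 Mb/s Token Ring: 200 other
# stations each transmit one maximum-size (4472-byte) frame first.
# Token-passing and framing overhead are ignored.
NODES = 200
FRAME_BYTES = 4472          # max frame size on 4 Mb/s TRN
LINE_RATE_BPS = 4_000_000

worst_case_wait_s = NODES * FRAME_BYTES * 8 / LINE_RATE_BPS
print(f"{worst_case_wait_s:.2f} s")   # about 1.79 s
```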

Don Y

unread,
Jan 14, 2014, 2:01:08 AM1/14/14
to
It's a subassembly. You can't operate it without all the components
in place and operational. So, if you want to remove a sensor/actuator,
the entire subassembly is down for the duration (like replacing
*one* wheel shoe on a car -- impractical to drive it in that condition!)
Once the "component" has been replaced and reinstalled on the network
segment, the entire subassembly can be brought on-line, again.

How do you move air if the actuator that runs the motor is broken
and removed from service? How do you measure the airflow if the
sensor that monitors it is broken and removed from service? How
do you heat the air if the actuator that applies heat is missing?
Sense the temperature if the sensor that monitors it is missing?
Etc.

Never a need to "patch around" a missing/removed component.

upsid...@downunder.com

unread,
Jan 14, 2014, 2:04:08 AM1/14/14
to
Horrible idea of putting office Windows machines in the same network
as some real process control. These days firewalls are used between
the networks and often even special gateways in a DMZ.

Don Y

unread,
Jan 14, 2014, 5:23:41 AM1/14/14
to
Repeat the exercise with *shared* fabric scaled back to 10Mb speeds
(to compare fairly with 4Mb TR). Or, switched fabric. How big will
the packet buffer need to be in that switch? What admission policies
will it exhibit? How will a node know a priori how long it must wait
to be *sure* the message it sends will get delivered? What happens if
one (or 199!) of the other nodes decides it has "a lot more to say"
while the node is waiting for its packet to be delivered? Will those
other nodes *appear* to be conspiring to lock out that node?

Scale TR up to today's speeds. Pick a comparable ethernet switch and
try to come up with a definitive answer. Without being able to
characterize the rest of the traffic on that network...

I.e., token passing strategies inherently implement a concept of
fairness and equal access. You have to *hope* your ethernet switch
does -- and try to get the vendor to provide you with quantitative
details of its (current!) implementation. Then, pray they never
discontinue it! *And* hope the rest of the nodes on YOUR network
behave equitably.

[Getting details of switch internals is sort of like asking The Colonel
for his "secret recipe" [1]]

I've been through this exercise before. Getting deterministic behavior
from ethernet -- WITHOUT PROTOCOL MODIFICATIONS -- is a real hassle.
Witness the assortment of automation protocols "over ethernet"...

> *4472 bytes for 4Mb TRN, 17800 for 16Mb

[1] Actually, I think an analysis of it was done in The Straight Dope
or one of Poundstone's books... short answer: "nothing special"

Don Y

unread,
Jan 14, 2014, 5:26:53 AM1/14/14
to
I don't see a reference to "Windows" anywhere in the above...
Regardless, it doesn't change the idea conveyed.


Tom Gardner

unread,
Jan 14, 2014, 5:30:12 AM1/14/14
to
You are stating the obvious "swings", and not mentioning the
"roundabouts".

If you want *guaranteed* behaviour, you have to consider:
- What are the guarantees if a node inserts a second token?
- What are the guarantees the token gets dropped?

And, of course, there are other failure modes that are more
of an issue in token rings, e.g. partitioning, subtle
incompatibility between different vendor's protocol stacks
etc.

In practice CSMA/CD is usually more robust than token ring.

Sure CSMA/CD has well-understood limitations which need to be
worked around. But the workarounds for token ring deficiencies
are very similar to those for CSMA/CD deficiencies, so there's
no added disadvantage to just using CSMA/CD.

Les Cargill

unread,
Jan 14, 2014, 1:44:38 PM1/14/14
to
So VLANs... and 802.1Q or other CoS/QoS ....

Or hie down t' the Wally World and buy a Netgear
switch or two and leave Betty's print job alone ...

--
Les Cargill

Robert Wessel

unread,
Jan 14, 2014, 1:56:39 PM1/14/14
to
I didn't say that TRN didn't provide stronger guarantees than Ethernet
(it did), rather that the guarantees were sufficiently weak that they
were mostly useless.


>Scale TR up to today's speeds. Pick a comparable ethernet switch and
>try to come up with a definitive answer. Without being able to
>characterize the rest of the traffic on that network...


TRN was going switched too. And the guarantees on TRN are impossible
to quantify "without being able to characterize the rest of the
traffic on that network."


>I.e., token passing strategies inherently implement a concept of
>fairness and equal access. You have to *hope* your ethernet switch
>does -- and try to get the vendor to provide you with quantitative
>details of its (current!) implementation. Then, pray they never
>discontinue it! *And* hope the rest of the nodes on YOUR network
>behave equitably.
>
>[Getting details of switch internals is sort of like asking The Colonel
>for his "secret recipe" [1]]


Plenty of switches implement enough VLAN and traffic shaping support
to deal with almost all applications' requirements. TRN might have
nominally better fairness (OK, you'll *definitely* get a chance to
send a packet in the next several seconds some time), but you could
still see packet losses and that did nothing for a receiver being too
busy to pick up a packet. So you basically still have all the same
problems to deal with. I'm not saying switched Ethernet solves all
(or even any) of the problems, just that TRN really didn't either.

upsid...@downunder.com

unread,
Jan 14, 2014, 2:00:54 PM1/14/14
to
As strange as it may sound, Ethernet is used on Airbus A350 and A380
planes. Of course the devices have strict throughput control
mechanisms in the form of the AFDX protocol
http://en.wikipedia.org/wiki/Avionics_Full-Duplex_Switched_Ethernet

Les Cargill

unread,
Jan 14, 2014, 2:04:24 PM1/14/14
to
It all depends on your application. You pay for better service.

> Switches have fixed size memories. There are no guarantees that
> packets sent to the switch ever *go* anywhere.
>

Sure. So don't run them at high utilization unless you
have to. If you have to, get your checkbook out.

> By contrast, token passing networks gave assigned timeslots.

NO, they do not. TDM has timeslots; token
passing { ARCnet, Token Ring } work differently
for different switch topologies. Classic coax*
Token Ring is a ring, and each NIC forwards on behalf
of its neighbor unless the destination address is
the NIC's address.

*may also have been true of twisted pair; don't recall;
most twisted pair ran just like Ethernet w.r.t cabling;
the switches/hubs did all the footwork.

Latter day Token Ring switches look just like Ethernet
switches. Indeed, products would allow layer 2 switching
between Ethernet and Token ring. At the very least they'd
route.
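
The repeat-unless-it's-addressed-to-you behavior of a classic ring can
be sketched as a toy simulation (illustrative only; real 802.5 adds the
token itself, priority/reservation bits, and an active monitor):

```python
def deliver(ring, src, dst):
    """Walk a frame around a unidirectional ring of station addresses,
    starting at the station after src; each NIC repeats the frame
    downstream unless it is the destination.  Returns the hop count."""
    n = len(ring)
    start = ring.index(src)
    for hop in range(1, n + 1):
        station = ring[(start + hop) % n]
        if station == dst:
            return hop          # destination copies the frame off the ring
        # otherwise the NIC just repeats the frame to its neighbor
    raise ValueError("destination not on ring")

ring = ["A", "B", "C", "D"]
print(deliver(ring, "A", "B"))  # immediate downstream neighbor: 1 hop
print(deliver(ring, "B", "A"))  # nearly all the way around: 3 hops
```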

TDM is *a way*, but it's not *THE* way. If you
can deal with retransmission, then Ethernet provides a lot
more bandwidth for a lot less money.


> You *knew* when your "turn" to use the media would come along.
> And, were *guaranteed* this by the basic design of the network
> itself (not some other protocol layered on top of it).
>

that "guarantee" is a holdover from the cognitive dissonance from
the largely now-abandoned COTs network. I can sympathize with
the desire to run a clock from Maine to San Diego, but ....

what that cognitive dissonance was grounded in was the sure
and certain knowledge that telecomms wasn't quite a market product,
and some sort of central dogma was needed...

It was all fine when a T1 was all the backhaul you'd ever need.

> Ever seen any timing guarantees for a generic network switch?
> It's left as an exercise for the user: look at the packet buffer
> size in the switch, determine whether it is a store-and-forward
> switch


OOf. That gets ugly.

> or whether it can exploit cut-through technology, whether
> it is blocking/nonblocking,

They're *all* blocking past some limit.

> *and* the datacomm characteristics
> of all the other nodes on your network (will they cooperate
> with each other's needs? Or, blindly use as much as they can
> get??) and then try to come up with a hard-and-fast number
> to determine the expected latency for a specific packet on a
> specific node.
>
> Repeat the exercise for a token passing network.
>

So just how low of a latency do you *need*? By "need",
I mean "will negatively affect performance by this cost
measure that relates to dollars."

SO go gitcha a big ole ATM switch, and do that
if it turns out that way. Maybe MPLS, other stuff.

You will run into much larger timers in an IP stack
than in the media, anyway.

Meanwhile, *one* half-duplex RS485 link and...

> [If you think that ethernet makes those guarantees, then you
> can elide all the acknowledgements in the protocols and still
> be VERY CONFIDENT that everything STILL works properly :> ]
>

But bit errors...

Even RS232 over six inches will have a potential
BER of more than (1/10^9th).
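
At that rate the errors add up on a busy link. A back-of-the-envelope
sketch (the 115200-baud line rate and the fully saturated link are
assumptions for illustration):

```python
# Expected bit errors per day on a saturated serial link,
# assuming (pessimistically) a bit error rate of 1e-9.
BAUD = 115_200            # bits/s, assumed line rate
BER = 1e-9
SECONDS_PER_DAY = 86_400

errors_per_day = BAUD * SECONDS_PER_DAY * BER
print(f"{errors_per_day:.1f}")    # about ten errors per day
```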

>> Most, if not all packets live on a collision domain with exactly
>> two NICs on it - except for 802.11>x< , where just about
>> any concept smuggled from the other old standards doubtless lives in
>> the air link.
>

--
Les Cargill


upsid...@downunder.com

unread,
Jan 14, 2014, 2:36:00 PM1/14/14
to
On Tue, 14 Jan 2014 13:04:24 -0600, Les Cargill
<lcarg...@comcast.com> wrote:


>TDM is *a way*, but it's not *THE* way. If you
>can deal with retransmission, then Ethernet provides a lot
>more bandwidth for a lot less money.

I was once looking at using the 68360 and PPC QUICC communication
processor TSA for TDMA multiplexing a low number of bits from a large
number of nodes, but it did not materialize.

One interesting alternative using at least some Ethernet hardware is
the Ethernet Powerlink http://en.wikipedia.org/wiki/Ethernet_Powerlink
which can also efficiently transfer a few bits from each node.


Robert Wessel

unread,
Jan 14, 2014, 2:38:28 PM1/14/14
to
On Tue, 14 Jan 2014 13:04:24 -0600, Les Cargill
<lcarg...@comcast.com> wrote:

>Don Y wrote:
>> By contrast, token passing networks gave assigned timeslots.
>
>NO, they do not. TDM has timeslots; token
>passing { ARCnet, Token Ring } work differently
>for different switch topologies. Classic coax*
>Token Ring is a ring, and each NIC forwards on behalf
>of its neighbor unless the destination address is
>the NIC's address.


Just to pick a nit... "Token-Ring" is both a general name for a
networking scheme, and a particular networking technology, popularized
by IBM and standardized as 802.5.

In the case of the latter, there never was any coax support for TRN,
although other token passing systems did support coax. The old thick
cables were *shielded* twisted pair, but definitely not coax. IBM
made provisions for using the "IBM Cabling System" (mainly all the STP
from wall ports to the patch panels) to transport the "Category A" (aka
coax) 3270 terminal connections over STP (you needed appropriate
baluns*) and twinax (5250) stuff**, as well as a bunch of other things
(blue for loops, LSCs for 8100 MCL loops, adapters for WE-404
connectors for the store systems, adapters for async/serial devices).
It was quite a menagerie.

IBM did offer a networking option for 3174 (3270 terminal
controllers), that allowed you to use a 3270 coax card in a PC as a
network adapter - the 3174 bridged that onto the TRN it was connected
to, but that was pretty removed from actual TRN.


*These were the "red" baluns (you could do "Category B" connections
too, those needed the yellow baluns). The standard baluns were
integrated into adapter cables (data connector on one end, balun in
the middle - with the color code - and the coax connector on the
other).

**Green "twinax impedance matching device".

Les Cargill

unread,
Jan 14, 2014, 6:33:58 PM1/14/14
to
There's nothing strange about it.

> Of course the devices have strict throughput control
> mechanisms in the form of the AFDX protocol
> http://en.wikipedia.org/wiki/Avionics_Full-Duplex_Switched_Ethernet
>

Yep.

--
Les Cargill



Les Cargill

unread,
Jan 14, 2014, 6:47:59 PM1/14/14
to
Robert Wessel wrote:
> On Tue, 14 Jan 2014 13:04:24 -0600, Les Cargill
> <lcarg...@comcast.com> wrote:
>
>> Don Y wrote:
>>> By contrast, token passing networks gave assigned timeslots.
>>
>> NO, they do not. TDM has timeslots; token
>> passing { ARCnet, Token Ring } work differently
>> for different switch topologies. Classic coax*
>> Token Ring is a ring, and each NIC forwards on behalf
>> of its neighbor unless the destination address is
>> the NIC's address.
>
>
> Just to pick a nit... "Token-Ring" is both a general name for a
> networking scheme, and a particular networking technology, popularized
> by IBM and standardized as 802.5.
>

Yep.

> In the case of the latter, there never was any coax support for TRN,
> although other token passing systems did support coax. The old thick
> cables were *shielded* twisted pair, but definitely not coax.

I believe that there was 802.5 over coax in some form. We
used to have to connect it for a regression test
before each release.

Although this:
http://interfacecom.blogspot.com/2011/08/network-interface-controller.html

"Madge 4 / 16 Mbit / s Token Ring ISA-16 NIC"

Hopefully, that NIC is not an Arcnet NIC masquerading as Token
Ring :)

> IBM
> made provisions for using the "IBM Cabling System" (mainly all the STP
> from wall ports to the patch panels to transport the "Category A" (aka
> coax) 3270 terminal connections over STP, (you needed appropriate
> baluns*) and twinax (5250) stuff**, as well as a bunch of other things
> (blue for loops, LSCs for 8100 MCL loops, adapters for WE-404
> connectors for the store systems, adapters for async/serial devices).
> It was quite a menagerie.
>

Sounds like it! I never dealt with actual IBM stuff; I worked
for people who competed with them.

> IBM did offer a networking option for 3174 (3270 terminal
> controllers), that allowed you to use a 3270 coax card in a PC as a
> network adapter - the 3174 bridged that onto the TRN it was connected
> to, but that was pretty removed from actual TRN.
>
>
> *These were the "red" baluns (you could do "Category B" connections
> too, those needed the yellow baluns). The standard baluns were
> integrated into adapter cables (data connector on one end, balun in
> the middle - with the color code - and the coax connector on the
> other).
>

Wow, that's kind of a mess! Realistically, by the time I got to dealing
with Token Ring ( mid-90s ) it was largely UTP or STP into RJ45.

We had some doohickey that had a BNC connector that we had to
run the regression test with, but I can't really remember what it was.

> **Green "twinax impedance matching device".
>

--
Les Cargill

Robert Wessel

unread,
Jan 14, 2014, 7:49:43 PM1/14/14
to
On Tue, 14 Jan 2014 17:47:59 -0600, Les Cargill
The top (long) card is a Madge TRN adapter (which is where the poorly
placed caption belongs), the half-high/short card below that is a
generic Ethernet NIC with a thinnet and 10baseT connection, probably
an NE2000 clone of some sort. I can't quite read the back of the chip
which might offer more specifics.

https://en.wikipedia.org/wiki/File:EISA_TokenRing_NIC.JPG
https://en.wikipedia.org/wiki/File:Network_card.jpg


>> IBM
>> made provisions for using the "IBM Cabling System" (mainly all the STP
>> from wall ports to the patch panels to transport the "Category A" (aka
>> coax) 3270 terminal connections over STP, (you needed appropriate
>> baluns*) and twinax (5250) stuff**, as well as a bunch of other things
>> (blue for loops, LSCs for 8100 MCL loops, adapters for WE-404
>> connectors for the store systems, adapters for async/serial devices).
>> It was quite a menagerie.
>>
>
>Sounds like it! I never dealt with actual IBM stuff; I worked
>for people who competed with them.
>
>> IBM did offer a networking option for 3174 (3270 terminal
>> controllers), that allowed you to use a 3270 coax card in a PC as a
>> network adapter - the 3174 bridged that onto the TRN it was connected
>> to, but that was pretty removed from actual TRN.
>>
>>
>> *These were the "red" baluns (you could do "Category B" connections
>> too, those needed the yellow baluns). The standard baluns were
>> integrated into adapter cables (data connector on one end, balun in
>> the middle - with the color code - and the coax connector on the
>> other).
>>
>
>Wow, that's kind of a mess! Realistically, by the time I got to dealing
>with Token Ring ( mid-90s ) it was largely UTP or STP into RJ45.
>
>We had some doohickey that had a BNC connector that we had to
>run the regression test with, but I can't really remember what it was.


Was it networking? Perhaps the 3174 thing I mentioned ("3174 Peer
Communications"). From the PC's perspective, it looked much like a
Token Ring card one past the device driver. IBM provided DOS and OS/2
("LAN Support") drivers that made it pretty transparent.

There was a different, and earlier, IBM networking product, PC-Net (or
"IBM PC Network" or something like that), that used coax in one of its
two forms (the broadband version), and needed a head unit to translate
between the send and receive frequencies. It was more a shared bus,
but the broadband version would be (semi) star wired. While pretty
much obsolete at that point, IBM was still supporting it.

George Neuner

unread,
Jan 14, 2014, 9:59:35 PM1/14/14
to
On Mon, 13 Jan 2014 22:40:40 -0600, Les Cargill
<lcarg...@comcast.com> wrote:

>As you are no doubt aware, what happened there was Ethernet
>switching and it well and truly solved the collision problem.
>
>Most, if not all packets live on a collision domain with exactly
>two NICs on it - except for 802.11>x< , where just about
>any concept smuggled from the other old standards doubtless lives in
>the air link.

The collision domain is now the destination port(s) in the switch.
Switch buffering can impose (bounded but) arbitrary latency or drop
packets entirely if buffer capacity is exceeded.

The trend has been to place more and more memory into switches so that
vendors can claim "no dropped packets". But that has resulted in a
pervasive latency problem commonly known as "bufferbloat".
http://cacm.acm.org/magazines/2012/1/144810-bufferbloat/fulltext

George
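
The latency cost of those big buffers is easy to put a number on. A
rough sketch (the buffer size and line rate are illustrative
assumptions): a packet arriving behind a full egress buffer waits for
the whole buffer to drain at line rate.

```python
# Worst-case queuing delay a full switch buffer adds on one egress port.
BUFFER_BYTES = 1_000_000      # 1 MB of egress buffering (assumed)
LINE_RATE_BPS = 100_000_000   # 100 Mb/s port (assumed)

drain_delay_s = BUFFER_BYTES * 8 / LINE_RATE_BPS
print(f"{drain_delay_s * 1000:.0f} ms")   # 80 ms of added latency
```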

George Neuner

unread,
Jan 14, 2014, 10:18:43 PM1/14/14
to
Hi Don,

On Mon, 13 Jan 2014 15:14:52 -0700, Don Y <th...@isnotme.com> wrote:

>> FDDI rings had the same good features (and, of course, the same bad
>> ones).
>
>Not fond of optical "switches"? :>

CDDI ran over copper wire 8-) At up to 200Mbps - until GbEthernet
came along, it was the fastest (standard) copper in town.

George

upsid...@downunder.com

unread,
Jan 14, 2014, 11:43:10 PM1/14/14
to
For hard real-time applications, the value is useless if it arrives
after the deadline. On the other hand, losing some sample values now
and then is usually not a big deal, as long as the loss is detected
(serial numbers etc.).

A lot of buffer space (either in the TCP/IP stack or in the
transmission queue in an Ethernet switch) can be quite harmful, if a
message has been obsoleted while in queue. When the frame has finally
been forwarded, it is obsolete, since no one is interested in it any
more, but still it floats around the network, potentially causing
congestion in another switch along the path.

So when designing a large realtime system, you have to think what
traffic is transported in which way, such as TCP/IP pipes vs. MAC/UDP
frames, various QoS assignments in switches etc. so that the whole
system behaves gracefully even when approaching an overload situation.
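
The detect-loss-and-drop-stale policy might look like this on the
receive side (a minimal sketch; the sequence-number field, deadline
value, and return codes are all assumptions, not from any particular
stack):

```python
def accept(sample_seq, sample_age_s, state, deadline_s=0.050):
    """Decide what to do with an arriving sample.

    Returns ('use' | 'stale' | 'duplicate', samples_lost), where
    samples_lost is detected via the gap in sequence numbers.
    """
    lost = 0
    if sample_age_s > deadline_s:
        return "stale", lost              # value useless past its deadline
    last = state.get("last_seq")
    if last is not None:
        if sample_seq <= last:
            return "duplicate", lost      # old or repeated frame
        lost = sample_seq - last - 1      # gap => that many samples lost
    state["last_seq"] = sample_seq
    return "use", lost

state = {}
print(accept(1, 0.001, state))   # ('use', 0)
print(accept(4, 0.002, state))   # ('use', 2)   -- samples 2 and 3 lost
print(accept(5, 0.200, state))   # ('stale', 0) -- arrived past deadline
```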

Aleksandar Kuktin

unread,
Jan 15, 2014, 7:16:40 PM1/15/14
to
I was busy for a few days and unable to be present for the discussion.

To my shock, you people have produced way more content than I expected
and than I can consume right now, so it'll take me a few days to catch up
to everything (especially, just like my device, I am also resource
constrained - trying to run a full time job and two non-trivial projects
at the same time (this being one of those two) is quite taxing, to say
the least).

Grant Edwards

unread,
Jan 17, 2014, 10:20:56 AM1/17/14
to
Everywhere I've ever been on four different continents, "office
equipment" means "Windows".

--
Grant Edwards grant.b.edwards Yow! I selected E5 ... but
at I didn't hear "Sam the Sham
gmail.com and the Pharoahs"!