
Satellite delays slow UUCP


Jerry Aguirre

Apr 21, 1986, 3:20:37 PM
I have noticed a problem while running UUCP through a circuit that includes
a satellite delay. Even though the channel is configured for 2400 baud,
the throughput slows to under 500 baud.

When I monitor the link I notice that the sender will send a burst of
data, the line will be idle for approximately 1 second, the receiver will
send a short burst of data (presumably one or more acks), and then the
sender will immediately send another burst of data.

The lines are clean (error corrected) and the round trip delay is
approximately 1 second. The UUCP is what came with 4.2BSD.

The UUCP 'g' protocol seems configured around the magic number of 8
packets of (I think) 64 bytes each. That should be enough to keep the
line busy until the ack for the first packet is received.

Has anyone analyzed this problem and come up with a bug fix?

Jerry Aguirre @ Olivetti ATC
{hplabs|fortune|idi|ihnp4|tolerant|allegra|glacier|olhqma}!oliveb!jerry

Erik E. Fair

Apr 22, 1986, 11:15:23 PM
In article <8...@oliveb.UUCP> je...@oliveb.UUCP (Jerry Aguirre) writes:

[UUCP `g' protocol over satellite link]

>The lines are clean (error corrected) and the round trip delay is
>approximately 1 second. The UUCP is what came with 4.2BSD.
>
>The UUCP 'g' protocol seems configured around the magic number of 8
>packets of (I think) 64 bytes each. That should be enough to keep the
>line busy until the ack for the first packet is received.
>
>Has anyone analyzed this problem and come up with a bug fix.

You are correct about the eight-packets-in-flight limit in `g' protocol.

The right thing to do is write your own protocol module for uucico,
which better fits the link layer you're using. There is precedent:

protocol   organization   link layer
--------   ------------   ----------
t          CSS (seismo)   TCP/IP
f          CWI (mcvax)    X.25 (standard with PAD)
x          AT&T           X.25 with VPM

You say that your satellite connection is error corrected; is it also
flow controlled? If so, the `t' protocol is probably what you want.
It's a waste to run `g' protocol over an error corrected link anyway
because the effort that `g' puts into checksumming is wasted.

Erik E. Fair styx!fair fa...@lll-tis-b.arpa

Chris Lewis

Apr 23, 1986, 6:41:58 PM
In article <8...@oliveb.UUCP> je...@oliveb.UUCP (Jerry Aguirre) writes:
>I have noticed a problem while running UUCP thru a circuit that includes
>a satellite delay. Even though the channel is configured for 2400 baud
>the thru-put slows to under 500 baud.
>
>When I monitor the link I notice that the sender will send a burst of
>data, the line will be idle for approximately 1 second, the receiver will
>send a short burst of data (presumably one or more acks), and then the
>sender will immediately send another burst of data.
>
>The lines are clean (error corrected) and the round trip delay is
>approximately 1 second. The UUCP is what came with 4.2BSD.
>
>The UUCP 'g' protocol seems configured around the magic number of 8
>packets of (I think) 64 bytes each. That should be enough to keep the
>line busy until the ack for the first packet is received.

We were grunging around in the "g" protocol and seem to have discovered
that, yes, UUCP does use an 8-slot circular list of packets, but it is
using a 3-packet window: it can send up to 3 packets before requiring
an acknowledgement.

Further, I believe the BSD uucps use "select" with timers to do
their reads - the receiver goes back into a "select" to read the next packet,
and only if the select times out does it actually send an acknowledgement for
a packet it has already received. However, the transmitter stalls
once it has sent 3 packets out - so in your case (if the transmitter can keep
up with the receiver), you have to wait the full round-trip time after
the last packet in a window before the receiver times out and sends the
acknowledgement.

Simple arithmetic shows that sending 3 packets at 2400 baud takes roughly
1920/2400 seconds (.80 sec., say - this is assuming each character is 10
bits (8 data, one start, one stop), and ignoring packet header overhead), and
then you have to wait for the 1 sec. round trip (plus some overhead on
the other end, of course): approximately 2 seconds to send 3 packets and get an
acknowledgement. That's only a .8/2 duty cycle - effectively something like
960 baud. Not factored in is the window timeout, etc. Not good for your
situation. You could always try changing the "WINDOWS" define in pk.h to
something more than 3 (but <= 8!) in the transmitting uucp, but I make no
guarantees that it'll work, nor that it'll be compatible with systems that
haven't been zapped in this way.

Another possibility: you could defeat the "select" and insist on
sending acknowledgements for all packets asynchronously with the transmitter;
then you'd have a string of acknowledgements spaced out going back over the
line to the transmitter. That might help. Or not....

Note: my comments are from scanning BSD 4.3 UUCP sources. Some of
the details may be different in 4.2 source.

BTW: we appear to have found a problem with 4.3 "g" protocol in this area,
but the kludge we've inserted is "tripping" more often than I believe
it should be. When I finally get a "good" solution to it, I'll post
details (it's a trivial change to pk0.c). Unless Rick Adams (who has
gotten details by mail) tells me we're all wet (eg: it should be fixed
somehow else, or we've got an old copy...). If we're not all wet, it
appears that UUCP over a line with such a delay would never work with 4.3 UUCP.

All this time I was hoping I'd never have to dive that far in to UUCP.
Sigh...
--
Chris Lewis,
UUCP: {allegra, linus, ihnp4}!utzoo!mnetor!clewis
BELL: (416)-475-8980 ext. 321

Lee M J McLoughlin

Apr 28, 1986, 5:48:28 PM
As standard, uucp has only a window of 3 and a packet size of 64 bytes
(these are compiled in under most versions). I've tried transferring stuff
over networks which seem to match your description and found the best
thing to do is to allow the window and packet size to be specified in
the L.sys file for each connection. Unfortunately the remote machine
must also be told about any changes in window/packet sizes before the
g-protocol is invoked, so if you wanted to do this you would have to
upgrade both ends.

You can then increase the amount of data you are prepared to send in
advance of an acknowledgement and so increase your throughput.

These fixes are built into UKUUCP (the UK standard uucp).

Okay let's hear the Honey DanBer crowd beat that :-)
--
UKUUCP SUPPORT Lee McLoughlin
"What you once thought was only a nightmare is now a reality!"

Janet: lm...@uk.ac.ukc, lm...@uk.ac.ic.doc
DARPA: lmjm%uk.ac.doc.ic@ucl-cs
Uucp: lm...@icdoc.UUCP, ukc!icdoc!lmjm

Sam Kendall

May 2, 1986, 11:23:12 PM
lm...@icdoc.UUCP says that UKUUCP is hacked to allow one to specify the
window and packet size in L.sys for each connection. It would be more
interesting, though of course much harder, to have UUCP experiment
during each call to find optimal values for these parameters. This
would take a new protocol, I guess.

The approach of having UUCP experiment is preferable, first, because it
requires less human parameterization; and second, because it adjusts
better to fluctuating conditions. For instance, the window and packet
size should go up when there are satellite delays on the line, but they
should go down when there is noise on the line that causes garbling.
U.S. transcontinental calls vary from call to call both in the
presence/absence of satellite delays (actually, this may only vary
between long-distance companies, not between calls using the same
company) and in the amount of noise. So it would be nice to have UUCP
compensate dynamically.

Of course, there are other types of connections that favor
different window and packet sizes, with or without automatic
experimentation. There are half-duplex 9600 baud modems, I have read,
that transmit without errors and simulate full duplex by frequent
handshaking. There can sometimes be delays of a few seconds in the
transition between send and receive, though. It seems like a very large
window and packet sizes would make UUCP work quite well with this sort
of modem, so that it could pay for itself if bought for sites that
exchange lots of data long-distance. Large packet sizes are of course
appropriate for any transmission medium that does its own
error-checking.

Finally, a couple of notes. (1) Even though it looks (when you
watch the send/receive lights) as though the packet size should be larger,
4.3 UUCP here gets 210 to 220 cps using a Hayes Smartmodem 2400. Pretty
good. (2) I'm not actually sure what "window size" is. If I look like
a total fool in mentioning it above, please forgive me.

----
Sam Kendall { ihnp4 | seismo!cmcl2 }!delftcc!sam
Delft Consulting Corp. ARPA: delftcc!s...@NYU.ARPA

Chris Torek

May 11, 1986, 7:05:12 PM
In article <3367@mnetor> clewis%mne...@mnetor.UUCP writes:
>Further, I believe BSD uucp's use "select" with timers to do their
>reads - the receiver goes back into a "select" to read the packet,
>and only if the select times out does it actually send an acknowledge
>for a packet it has already received.

Not quite. The select() to which you refer is no doubt my own
addition. Someone (I cannot recall who, if ever I knew) discovered
that at 300 and 1200 baud, uucico was causing an inordinate number
of context switches, slowing down the machine for everyone. The
cause was determined to be in pkcget(): It would try to read() 64
characters, receive about 10, try for another 54, get 6, try for
48, get 15, ..., and in the process make many essentially useless
system calls and cause all those context switches.

The System III style tty driver has a relatively clean way of fixing
this: one asks the kernel to return from a read() only after a
minimum number of input characters, or minimum time delay, whichever
occurs first. (The details of this mechanism are irrelevant here,
but I do wish to mention that they were either incorrectly or
misleadingly documented at one time---this is apparent from the
number of times I have seen answers---*differing* answers---to
questions about the actual workings of VMIN and VTIME. I do not
know whether this has been rectified.)

At any rate, we poor 4BSD folks :-) were stuck with a lesser
solution. Instead of having the kernel do all our work for us, we
had to fix it in user code. Clearly this kind of thing is appropriate
for kernels, and if you believe that, I have an ISAM file system
for sale. (Just to forestall any askers: no, no, I was only kidding!
Please do not ask me for my ISAM file system; I do not have one.)

The original 4.1BSD solution was something along these lines:

	/* linebaudrate == 0 -> unknown */
	if (linebaudrate > 0 && linebaudrate < 2400 && nleft > 50)
		sleep(1);

This has the desired effect of allowing the full packet to dribble
in, but tends to introduce ACK delays, especially at 1200 baud.
The granularity of sleep() is simply too large. With 4.2, however,
came the ability to generate sub-second sleeps, and I decided to
put this to use.

Now, the `right' amount of time to wait can be determined directly
from the number of characters needed and the baud rate: If you
need <n> characters, that will take 10*<n>/<baud> seconds to dribble
in over a serial link. Unfortunately, this simple rule tends to
fail for a number of reasons, the largest being delays between one
read() system call and the next.

What I did was to assume that each time pkcget() was called,
about 1/10th of one second had passed since the previous call. I
then computed the remaining time until <n> characters could be
expected to have been received by the kernel. If this time was at
least 1/50th of one second, I used a select() system call to wait
exactly this long. (Select() is both simpler and more efficient
than using ITIMER_REAL and alarms.) If the following read() did
not receive the expected <n> characters, I tried the whole thing
again, but this time I assumed that no time had passed since the
last pkcget().

The 1/50th of a second comes from knowing that the actual select()
delay granularity is 1/100th of a second, and from making a very
pessimistic estimate of the overhead of two context switches (out
of, and in 10 milliseconds, back into, uucico). If the machine is
heavily loaded, perhaps that is not such a pessimistic estimate
after all. At any rate, it seems to work.

The 1/10th second `guess' was simply a guess, but it too seems to
work well in practice: with some extra debugging, I determined that
the select() delays usually resulted in a full <n> characters, but
sometimes were too short, with an extra four or five needed to fill
the packet. The too-short delays were rare enough that I figured
decreasing the between-pkcget()-delay-guess would likely slow down
ACKs somewhat; and I was willing to trade the remaining bit of CPU
load for shorter-duration phone calls.
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 1415)
UUCP: seismo!umcp-cs!chris
CSNet: chris@umcp-cs ARPA: ch...@mimsy.umd.edu
