
Vacuum tubes in spacecraft?


John Benham

Jan 12, 1994, 5:27:11 PM
In article <1994Jan12....@newsgate.sps.mot.com>,
ma...@bigfoot.sps.mot.com (Mark Monninger) writes:
> I recently came across an article in another newsgroup that quoted a NASA
> report about the Mars Observer disappearance and one item caught my
> attention:
>
> >> ... Mars Observer had turned off its transmitter as a
> >> precautionary measure to protect the transmitter tubes from shock
> >> just before it pressurized its onboard propellant tanks on August
> >> 21. ...
>
> Do they use vacuum tubes in spacecraft transmitters? I understand that
> tubes are more radiation-resistant than semiconductors, that they would
> be in a near-perfect vacuum in space, etc., but I'm still surprised.
> Does anyone have any info on this?
>
> Enquiring minds want to know.
>
> Thanks.
>
> Mark

I believe they still use TWTs because of their superior frequency/output
power rating. I guess vacuum devices still can't be beat for some
applications. I'll bet you probably have one in front of you right now :)

John F. Woods

Jan 12, 1994, 5:16:19 PM
ma...@bigfoot.sps.mot.com (Mark Monninger) writes:
>I recently came across an article in another newsgroup that quoted a NASA
>report about the Mars Observer disappearance and one item caught my attention:
>>> ... Mars Observer had turned off its transmitter as a
>>> precautionary measure to protect the transmitter tubes from shock
>>> just before it pressurized its onboard propellant tanks on August
>>> 21. ...
>Do they use vacuum tubes in spacecraft transmitters? I understand that tubes
>are more radiation-resistant than semiconductors, that they would be in a
>near-perfect vacuum in space, etc., but I'm still surprised. Does anyone have
>any info on this?

They used tubes because at the time they designed it, vacuum tubes that could
provide the needed power output at microwave frequencies were cheaper and more
reliable than comparable semiconductors. In fact, power microwave
semiconductors are still kind of expensive, at least in small quantities.
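To put rough numbers on why raw transmitter power matters so much for a
deep-space link, here's a back-of-the-envelope sketch in C. Every figure in
it (8.4 GHz X-band, 2.5 AU range, 25 W of RF, a 1.5 m spacecraft dish and a
70 m ground dish at 60% efficiency) is an illustrative assumption, not Mars
Observer's actual specification.

    /* link_budget.c -- rough free-space link budget, illustrative numbers only */
    #include <stdio.h>
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* parabolic dish gain in dBi: 10*log10(eff * (pi*D/lambda)^2) */
    static double dish_gain_dbi(double diam_m, double lambda_m, double eff)
    {
        return 10.0 * log10(eff * pow(M_PI * diam_m / lambda_m, 2.0));
    }

    int main(void)
    {
        const double c    = 3.0e8;           /* m/s                          */
        const double freq = 8.4e9;           /* Hz, assumed X-band downlink  */
        const double dist = 2.5 * 1.496e11;  /* m, assumed Earth-Mars range  */
        const double pt_w = 25.0;            /* W, assumed transmitter power */
        const double lambda = c / freq;

        /* free-space path loss in dB: 20*log10(4*pi*d/lambda) */
        double fspl_db = 20.0 * log10(4.0 * M_PI * dist / lambda);

        double gt_dbi = dish_gain_dbi(1.5, lambda, 0.6);   /* spacecraft dish */
        double gr_dbi = dish_gain_dbi(70.0, lambda, 0.6);  /* DSN-class dish  */

        double pt_dbw = 10.0 * log10(pt_w);
        double pr_dbw = pt_dbw + gt_dbi + gr_dbi - fspl_db;

        printf("path loss      : %6.1f dB\n", fspl_db);
        printf("received power : %6.1f dBW (%.1f dBm)\n", pr_dbw, pr_dbw + 30.0);
        return 0;
    }

With these made-up numbers the received signal lands somewhere around
-155 dBW, which is why every dB of transmitter power counts and why tens of
watts from a TWT beat a few watts of solid state.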

Robert Casey

Jan 12, 1994, 9:23:25 PM
Well, once it's out in space, it wouldn't need the glass envelope anymore.

John Whitmore

Jan 13, 1994, 2:18:33 AM
In article <38...@ksr.com>, John F. Woods <j...@ksr.com> wrote:

>ma...@bigfoot.sps.mot.com (Mark Monninger) writes:
>>>> ... Mars Observer had turned off its transmitter as a
>>>> precautionary measure to protect the transmitter tubes from shock
>>Do they use vacuum tubes in spacecraft transmitters? I understand that tubes
>>are more radiation-resistant than semiconductors

>They used tubes because at the time they designed it, vacuum tubes that could
>provide the needed power output at microwave frequencies were cheaper and more
>reliable than comparable semiconductors. In fact, power microwave
>semiconductors are still kind of expensive, at least in small quantities.

I don't think expense was the reason. It was radiation hardness,
primarily (and vacuum tubes are a LOT more reliable in space than
semiconductors, so you wouldn't have to use as many backups).

I recall one of the Voyager spacecraft had to be operated
with the power to the onboard computer REVERSED for a period, in order
to correct some radiation-induced degradation. Yes, that's right:
they hooked up the computer to the wrong polarity of power, deliberately.

John Whitmore

Robert Davis

Jan 13, 1994, 11:06:47 AM

I dunno where I read the information about vacuum tubes, but yes,
I have read that NASA uses tubes for exactly the reasons you specify.
Tubes are resistant to the various forms of radiation in space.
And tubes do not need envelopes ... the vacuum of space is better
than the vacuum in any vacuum tube ever used on Earth.
====
Still, I think that not enclosing the tubes in something is kind of
dumb 'cause any gases vented by the spacecraft will continue to travel
along with the spacecraft and degrade the vacuum around the tubes.

Bob Breivogel

Jan 13, 1994, 12:58:37 PM
rda...@nyx10.cs.du.edu (Robert Davis) writes:


>I dunno where I read the information about vacuum tubes, but yes,
>I have read that NASA uses tubes for exactly the reasons you specify.
>Tubes are resistant to the various forms of radiation in space.
>And tubes do not need envelopes ... the vacuum of space is better
>than the vacuum in any vacuum tube ever used on Earth.

Vacuum tubes used in spacecraft still have envelopes. Since testing is
required on the ground, envelope-less tubes would be non-functional there!

Also, radiation hardness is a minor concern, otherwise tubes would also
be used in the onboard computers!

The primary tube that is still used is the TWT, or Traveling Wave Tube. It
is used for high-power, higher-frequency microwave amplifiers. I believe
that communications satellites use them widely.

Mark Monninger

Jan 12, 1994, 10:21:39 AM
I recently came across an article in another newsgroup that quoted a NASA
report about the Mars Observer disappearance and one item caught my attention:

>> ... Mars Observer had turned off its transmitter as a
>> precautionary measure to protect the transmitter tubes from shock
>> just before it pressurized its onboard propellant tanks on August
>> 21. ...

Do they use vacuum tubes in spacecraft transmitters? I understand that tubes
are more radiation-resistant than semiconductors, that they would be in a
near-perfect vacuum in space, etc., but I'm still surprised. Does anyone have
any info on this?

Enquiring minds want to know.

Thanks.

Mark

Chris Kerlin

Jan 14, 1994, 11:04:30 AM
At least as of a couple of years ago...
Several shuttle onboard computers made by IBM use magnetic core memory
("beads and string"), which limits the range of wind speeds that can be
accommodated at liftoff.

But is anyone surprised that 1960s technology is used on the shuttle?
I'd hate to guess at how much of the design is on Hollerith cards.

mofo

Jan 14, 1994, 9:06:30 PM
in article <1994Jan14.1...@lmpsbbs.comm.mot.com> eck...@email.mot.com (Chris Kerlin) scribbles in crayon:

>At least as of a couple years ago...
>Several shuttle onboard computers made by IBM use magnetic core memory
>("beads and string") which limits the range of windspeeds that can be
>accomodated at liftoff.
>
>But is anyone suprised that 1960s technology is used on the shuttle;
>I'd hate to guess at how much of the design is on Hollerith cards.
>

i can vouch for that. i've seen the core sheets. i still have a vial of
the raw core; looks like iron filings, they're so small. ibm has a peculiar
habit of taking an antiquated technology to the limit. look how much mileage
they got out of their tso/mvs 60's era operating system. that's the one where
you have to manually allocate disk tracks before saving a file. millions of
corporate customers around the world think this is the state of the art in
modern computer equipment, mostly because ibm says so.

d
--
mo...@netcom.com fold, mutilate, and spindle *this*

Lee Devlin

Jan 15, 1994, 12:35:44 PM
Chris Kerlin (eck...@email.mot.com) wrote:
: At least as of a couple years ago...

: Several shuttle onboard computers made by IBM use magnetic core memory
: ("beads and string") which limits the range of windspeeds that can be
: accomodated at liftoff.

: But is anyone suprised that 1960s technology is used on the shuttle;
: I'd hate to guess at how much of the design is on Hollerith cards.

Richard Feynman discussed this in his excellent book
_What_Do_You_Care_What_Other_People_Think_.

For those of you who don't know Feynman, he was the Nobel Prize-winning
scientist who was part of the commission that investigated the
Challenger accident. I was amazed to read that something as
sophisticated as the shuttle would use something as antiquated as core
memory. However, the shuttle's control software represents a large
development investment and parts of it are borrowed from previous
generation space programs. It has also been thoroughly tested and is
therefore prohibitively expensive to change. That's why they're stuck
with ancient hardware with which to run it.

--
Lee Devlin | HP Little Falls Site | phone: (302) 633-8697
| 2850 Centerville Rd. | email:
| Wilmington, DE 19808 | dev...@lf.hp.com

Dana Myers

Jan 15, 1994, 4:29:16 PM
In article <2h99hg$6...@hpavla.lf.hp.com> dev...@lf.hp.com (Lee Devlin) writes:
>Chris Kerlin (eck...@email.mot.com) wrote:
>: At least as of a couple years ago...
>: Several shuttle onboard computers made by IBM use magnetic core memory
>: ("beads and string") which limits the range of windspeeds that can be
>: accomodated at liftoff.
>
>: But is anyone suprised that 1960s technology is used on the shuttle;
>: I'd hate to guess at how much of the design is on Hollerith cards.
>
>Richard Feynman discussed this in his excellent book _What_Do_You_Care
>What_Other_People_Think_.
>
>For those of you who don't know Feynman, he was the Nobel Prize-winning
>scientist that was part of the commission that investigated the
>Challenger accident. I was amazed to read that something as
>sophisticated as the shuttle would use something as antiquated as core
>memory. However, the shuttle's control software represents a large
>development investment and parts of it are borrowed from previous
>generation space programs. It also been thoroughly tested and therefore
>prohibitively expensive to change. That's why they're stuck with ancient
>hardware with which to run it.


I'm sure none of us armchair designers would bother to consider that
core memory is considerably more immune to radiation-induced errors
than semiconductor memory while we're re-designing the shuttle system,
which operates in a region of considerably higher ambient radiation
levels than on Earth...
--
* Dana H. Myers KK6JQ, DoD 466 | Views expressed here are *
* (310) 348-6043 | mine and do not necessarily *
* Dana....@West.Sun.Com | reflect those of my employer *
* This Extra supports the abolition of the 13 and 20 WPM tests *

T. G. Booth

Jan 15, 1994, 6:41:44 PM
In article <1994Jan14.1...@lmpsbbs.comm.mot.com>,

eck...@email.mot.com (Chris Kerlin) wrote:
>
> At least as of a couple years ago...
> Several shuttle onboard computers made by IBM use magnetic core memory
> ("beads and string") which limits the range of windspeeds that can be
> accomodated at liftoff.

I'd like to see a bit of substantiation on this statement. I really doubt
that processor speed is the problem. I believe you'll find that the
limitation on winds is due to other factors such as structural loads
imposed by aerodynamic forces, cross-wind limits on landing if a return to
launch site abort is required, etc.



> But is anyone suprised that 1960s technology is used on the shuttle;
> I'd hate to guess at how much of the design is on Hollerith cards.

No one should be surprised to find 1960s technology in STS hardware; in my
view, NASA has had a tradition of not always rushing to embrace the latest
technology (at least in manned space flight) due to safety/reliability
issues, going back to the Mercury program. As for the use of Hollerith
cards, I wouldn't know, but I'd expect that if you can find them at NASA,
they'll be tucked away in someone's desk drawer, the information on the
cards having been transferred to 9-track tapes, floppy disks, etc. a
decade ago.

TGB

Michael Stein

Jan 15, 1994, 3:25:00 PM
>: At least as of a couple years ago...
>: Several shuttle onboard computers made by IBM use magnetic core memory
>: ("beads and string") which limits the range of windspeeds that can be
>: accomodated at liftoff.
>
>: But is anyone suprised that 1960s technology is used on the shuttle;
>: I'd hate to guess at how much of the design is on Hollerith cards.

I remember reading somewhere that the original shuttle designers
were given a choice by IBM. IBM said something like "we can use
this existing, space-rated hardware (core etc.), or do a new
design. The new design will take longer." The shuttle designers
didn't want the shuttle delayed and told IBM that. So here we
are -- even though other delays in the shuttle program added a
few *years* to the first shuttle flight.

Stephen C. Trier

Jan 15, 1994, 11:45:02 PM
In article <2h9n7c...@abyss.west.sun.com>,

Dana Myers <my...@sunspot.West.Sun.COM> wrote:
>I'm sure none of us armchair designers would bother to consider that
>core memory is considerably more immue to radiation induced errors
>than semiconductor memory....

Yes, I was thinking about that. I remember reading, not six years ago,
about _new_ core memory products designed to be used in automatically
assembled PC boards, with simple TTL interfaces to microprocessors and
all the other things we expect these days. They were fantastically
expensive and invariably rated for military temperature ranges. Their
intended use was radiation-hardened electronics.

I'm skeptical that radiation is the reason behind core on the shuttle. I'm
also not sure the shuttle still uses core. I've read that among the
post-Challenger changes, the shuttles were equipped with modernized
computers with substantially more memory. You tell me what that means
(the popular press is never too specific). It could be that it's just more
core, or perhaps it has moved up to semiconductors.

Stephen

--
Stephen Trier KB8PWA "Is this what Andy Warhol meant by '15 minutes
Other: tr...@ins.cwru.edu of fame?'" - lis...@vpnet.chi.il.us
Home: s...@po.cwru.edu "As this is alt, that should probably be '15 Mb
of flame.'" - art...@Smallworld.co.uk

Gary Coffman

Jan 16, 1994, 11:01:08 AM
In article <mofoCJn...@netcom.com> mo...@netcom.com (mofo) writes:
>in article <1994Jan14.1...@lmpsbbs.comm.mot.com> eck...@email.mot.com (Chris Kerlin) scribbles in crayon:
>>At least as of a couple years ago...
>>Several shuttle onboard computers made by IBM use magnetic core memory
>>("beads and string") which limits the range of windspeeds that can be
>>accomodated at liftoff.
>>
>>But is anyone suprised that 1960s technology is used on the shuttle;
>>I'd hate to guess at how much of the design is on Hollerith cards.
>>
>
>i can vouch for that. i've seen the core sheets. i still have a vial of
>the raw core; looks like iron filings, theyre so small. ibm has a peculiar
>habit of taking an antiquated technology to the limit.

Which in this case was exactly what NASA wanted. SEUs (single-event upsets)
are a problem with any solid-state memory device. Core doesn't suffer from
SEUs. It also has the charm of being non-volatile, so the computers can be
powered down when not in use to save power, yet still be loaded and ready
to execute when powered back up. Core is a perfectly good technology if you
don't need vast amounts of RAM, and Shuttle doesn't. Core didn't set a speed
limit on Shuttle computers; the slow bit-slice parts, and the need to *vote*
on every command, did that. Man-rated flight systems can't be allowed
to lock up, core dump, or issue fraudulent commands to the flight hardware.
At the time Shuttle was designed, there was nothing more reliable than
core available. There still isn't, but ECC-scrubbed memory can suffice if
care is taken. Ordinary microprocessors can't be trusted either, since
their registers and on-board caches are subject to SEUs.
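As a toy illustration of the "vote on every command" idea -- not the actual
Shuttle redundancy-management scheme, which is far more involved -- a voter
over four redundant channels might look something like this in C:

    /* vote.c -- toy majority voter over redundant command channels */
    #include <stdio.h>

    #define NCHAN 4   /* assume four redundant computers, as on the Shuttle */

    /* Return the command agreed on by a strict majority of channels,
     * or -1 if no majority exists (the caller must treat that as a fault). */
    static int vote(const int cmd[NCHAN])
    {
        for (int i = 0; i < NCHAN; i++) {
            int agree = 0;
            for (int j = 0; j < NCHAN; j++)
                if (cmd[j] == cmd[i])
                    agree++;
            if (agree > NCHAN / 2)
                return cmd[i];
        }
        return -1;  /* 2-2 split or worse: no safe output */
    }

    int main(void)
    {
        int all_ok[NCHAN]  = { 7, 7, 7, 7 };  /* all agree           */
        int one_bad[NCHAN] = { 7, 7, 3, 7 };  /* one upset, outvoted */
        int split[NCHAN]   = { 7, 7, 3, 3 };  /* 2-2: flag the fault */

        printf("%d %d %d\n", vote(all_ok), vote(one_bad), vote(split));
        return 0;
    }

The point is simply that a single flipped bit in one channel gets outvoted,
while a 2-2 disagreement has to be flagged rather than acted on.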

Gary
--
Gary Coffman KE4ZV | You make it, | gatech!wa4mei!ke4zv!gary
Destructive Testing Systems | we break it. | uunet!rsiatl!ke4zv!gary
534 Shannon Way | Guaranteed! | emory!kd4nc!ke4zv!gary
Lawrenceville, GA 30244 | |

Lee Devlin

Jan 16, 1994, 2:43:25 PM
Dana Myers (my...@sunspot.West.Sun.COM) wrote:

: I'm sure none of us armchair designers would bother to consider that
: core memory is considerably more immue to radiation induced errors
: than semiconductor memory while we're re-designing the shuttle system,
: which operate in a region of considerably higher ambient radiation
: levels than on Earth...

I spend a lot of time testing for radiated immunity (and radiated
emissions) problems on microprocessor-based instruments. Semiconductor
memories generally do not have immunity problems with radiated fields.
Besides, there's hardly an EM immunity problem that a Faraday cage can't
solve if you can constrain your system to operate in one. Memory and
CPU certainly fit in that category. Actually, most often they ARE caged
to constrain the EM energy their clocks produce. This radiated energy
doesn't bother them, but rather interferes with radio receivers and
small signal transducers.

The statement about ambient radiation being higher in space than on
earth is confusing. To what radiation are you referring? Cosmic?
Transmitters (intentional and otherwise) are responsible for nearly all
the EM fields in the frequency range that can induce unwanted currents
in circuits and the farther you are from the source, the smaller they are.

Dave Jacobowitz

Jan 16, 1994, 1:07:23 PM
OK, so how susceptible to SEUs is normal computer hardware? I know I
have seen pictures of the astronauts using regular laptop computers on
the shuttle to do their normal work. They obviously are not controlling
the flight hardware with them, but if I took a notebook computer with
me on the shuttle, how often should I expect it to crash or output bad
data?

dave jacobowitz
dg...@virginia.edu


hpmvd069

Jan 16, 1994, 5:03:00 PM
Lee Devlin (dev...@lf.hp.com) wrote:
> Dana Myers (my...@sunspot.West.Sun.COM) wrote:

> : I'm sure none of us armchair designers would bother to consider that
> : core memory is considerably more immue to radiation induced errors
> : than semiconductor memory while we're re-designing the shuttle system,
> : which operate in a region of considerably higher ambient radiation
> : levels than on Earth...
>

> (stuff about electromagnetic radiation deleted)


>
> The statement about ambient radiation being higher in space than on
> earth is confusing. To what radiation are you referring? Cosmic?
> Transmitters (intentional and otherwise) are responsible for nearly all
> the EM fields in the frequency range that can induce unwanted currents
> in circuits and the farther you are from the source, the smaller they are.

We're talking about *ionizing* radiation. (Ionizing) radiation
resistance is an important design spec even in low earth orbit (LEO).
The shuttle passes through or near the South Atlantic Anomaly on most
flights.


Jeff
--
=============================================================================
Jeff Gruszynski Any Standard Disclaimers Apply
Semiconductor Test Equipment
Systems Engineer
Hewlett-Packard
=============================================================================
(415) or T 694-3381
Jeff_Gr...@hpatc3.desk.hp.com
=============================================================================

Jack GF Hill

Jan 16, 1994, 11:45:37 AM
> >At least as of a couple years ago...
> >Several shuttle onboard computers made by IBM use magnetic core memory
> >("beads and string") which limits the range of windspeeds that can be
> >accomodated at liftoff.
> >
> >But is anyone suprised that 1960s technology is used on the shuttle;
>
> i can vouch for that. i've seen the core sheets. i still have a vial of
> the raw core; looks like iron filings, theyre so small. ibm has a peculiar
> habit of taking an antiquated technology to the limit.
Oh boy! As a dedicated IBM-basher, this is painful, but let me remind
all you technocrats that the environments spacecraft operate in are
quite hostile indeed to solid-state CMOS devices. This is not IBM's
doing; they have simply taken a proven technology, albeit old and
perhaps outdated for terrestrial use, and applied it to the VERY
hostile environment of space.

The core memory has several advantages over solid-state, not the
least of which is that if power is lost, for whatever reason, the
memory holds whatever settings it had... for longer than a UPS will
hold CMOS... Hollow-state is not susceptible to ultraviolet erasure,
nor to EMP erasure... sure, it takes very different designs, and has
limits that silicon technology does not, but it works in a very
hostile environment, and that is why it is used, NOT because IBM says
it should -- the equipment was SPEC'ed that way!

Ooooh.... that hurt me to type! ;^)

> they got out of their tso/mvs 60's era operating system. thats the one where
> you have to manually allocate disk tracks before saving a file. millions of
> corporate customers around the world think this is the state of the art in
> modern computer equipment, mostly because ibm says so.

And here I can breathe again, because what has been typed is so: MVS
was a pig, and the choices exercised by it to allocate resources
followed an easier path... however, as any gray-flannel three-piece
suiter will tell ya: "No consultant ever got fired for recommending
IBM..." (Unless it was to Honeywell (BULL), Sperry, Burroughs, or
DEC ;^)
73,
Jack, W4PPT/Mobile (75M SSB 2-letter WAS #1657 -- all from the mobile! ;^)

+--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--+
| Jack GF Hill |Voice: (615) 459-2636 - Ham Call: W4PPT |
| P. O. Box 1685 |Modem: (615) 377-5980 - Bicycling and SCUBA Diving |
| Brentwood, TN 37024|Fax: (615) 459-0038 - Life Member - ARRL |
| ro...@jackatak.raider.net - "Plus ca change, plus c'est la meme chose" |
+--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--+

Lee Devlin

Jan 17, 1994, 12:10:10 AM
I (dev...@lf.hp.com) wrote:

: Richard Feynman discussed this in his excellent book _What_Do_You_Care
: What_Other_People_Think_.

: For those of you who don't know Feynman, he was the Nobel Prize-winning
: scientist that was part of the commission that investigated the
: Challenger accident. I was amazed to read that something as
: sophisticated as the shuttle would use something as antiquated as core
: memory. However, the shuttle's control software represents a large
: development investment and parts of it are borrowed from previous
: generation space programs. It also been thoroughly tested and therefore
: prohibitively expensive to change. That's why they're stuck with ancient
: hardware with which to run it.

I realize it is anathema to provide sources in a forum that thrives on
speculation, conjecture, and hearsay, but I'm going to go even further,
since this will otherwise turn into a 'thread that will not die'.

Please forgive me, for I'm about to do the unthinkable. I'm actually
going to quote from the book I referenced above. Remember, this man was
a Nobel Prize-winning physicist who had inside access to the shuttle
development personnel during the investigation of the Challenger accident.

From page 192 of _What_Do_You_Care_What_Other_People_Think_:

"Although there's a lot of good software being written at Johnson, the
computers on the shuttle are so obsolete that the manufacturers don't
make them anymore. The memories in them are the old kind, made with
little ferrite cores that have wires going through them. In the
meantime we've developed much better hardware: the memory chips of
today are much, much smaller; they have much greater capacity; and
they're much more reliable. They have internal error-correcting codes
that automatically keep the memory good. With today's computers we
can design separate program modules so that changing the payload
doesn't require so much program rewriting.

Because of the huge investment in the flight simulators and all the
other hardware, to start all over again and replace the millions of
lines of code that they've already built would be very costly."

(There's a lot of associated information that discusses how each mission
is flown on simulators using the same software before the actual mission.)

So all of you core memory apologists can give it a rest now. I think it's
pretty clear from the above excerpt that the core memory is a vestige of
the shuttle's past. It is not required for its alleged reliability or
immunity to extraterrestrial electromagnetic events.

There's a lot of great information about the shuttle program in the
book. I believe someone mentioned in an earlier posting that the
shuttle doesn't need much memory and that's why core is acceptable.
Here's why that's not correct:

From page 190:

"The shuttle's computers don't have enough memory to hold all the
programs for the whole flight. After the shuttle gets into orbit, the
astronauts take out some tapes and load in the program for the next
phase of the flight -- there are as many as six in all. Near the end
of the flight, the astronauts load in the program for coming down.

The shuttle has four computers on board, all running the same
programs. All four are normally in agreement. If one computer is out
of agreement, the flight can still continue. If only two computers
agree, the flight has to be curtailed and the shuttle brought back
immediately."

If memory requirements really were minimal, it wouldn't be necessary to
be carrying all those tapes along.

While I'm at it, I should also mention that the shuttle astronauts have
taken the HP-41C calculator with them and given it good reports for its
reliability. And no, it doesn't use core memory :-).

Gary Coffman

Jan 17, 1994, 9:09:24 AM

We're not talking about RF. We're referring to *ionizing* radiation. This
creates ionized paths in the substrate of ICs that can cause flipped bits.
These effects are generally transient, hence *single event* upsets, but
cumulative damage can be permanent, and stray heavy nuclei can cause
permanently stuck bits.

Gary Coffman

Jan 17, 1994, 9:15:07 AM

The Shuttle stays in a low enough orbit to receive considerable protection
from radiation via the Earth's magnetic field, but it does pass through
the SAA on most missions, and the radiation flux is higher there. Satellites
that pass through the Van Allen belts get a real dose, and high-energy
cosmic rays are a continuous problem. If you look at the telemetry from
the amateur satellites, you can get a good idea of the frequency of SEUs.
Without ECC memory, the average laptop would likely suffer a flipped bit
one or more times a day. If that bit is in an active area of program memory,
you may have a crash. If it hits the stack, you'll almost certainly have
a crash.
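A rough sanity check of that kind of claim, with an assumed per-bit upset
rate; the rate and the "critical fraction" here are made-up illustrative
numbers, not measured figures for any real mission:

    /* seu_rate.c -- toy estimate of upsets per day for an unprotected laptop */
    #include <stdio.h>

    int main(void)
    {
        double mem_bits  = 4.0 * 1024 * 1024 * 8;   /* assume a 4 MB laptop     */
        double rate      = 3.0e-8;                  /* assumed upsets/bit/day   */
        double crit_frac = 0.10;                    /* assumed fraction of RAM
                                                       holding live code/stack  */

        double upsets_per_day = mem_bits * rate;
        double crit_per_day   = upsets_per_day * crit_frac;

        printf("expected upsets/day        : %.1f\n", upsets_per_day);
        printf("expected critical hits/day : %.2f\n", crit_per_day);
        printf("mean days between crashes  : %.1f\n", 1.0 / crit_per_day);
        return 0;
    }

With those assumptions you get about one flipped bit per day and a crash
every week or two -- consistent with the figure above, though the real
answer depends entirely on the orbit and the parts.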

Mr P I Neaves

Jan 17, 1994, 10:50:39 AM
In article <1994Jan17.1...@ke4zv.atl.ga.us>,
ga...@ke4zv.atl.ga.us (Gary Coffman) writes:
%In article <2hc5ct$k...@hpavla.lf.hp.com> dev...@lf.hp.com (Lee Devlin) writes:
%%Dana Myers (my...@sunspot.West.Sun.COM) wrote:
%%
%%: I'm sure none of us armchair designers would bother to consider that
%%: core memory is considerably more immue to radiation induced errors
%%: than semiconductor memory while we're re-designing the shuttle system,
%%: which operate in a region of considerably higher ambient radiation
%%: levels than on Earth...
%%
%%I spend a lot of time testing for radiated immunity (and radiated
%%emissions) problems on microprocessor-based instruments. Semiconductor
%%memories generally do not have immunity problems with radiated fields.

[stuff binned]

%We're not talking about RF. We're refering to *ionizing* radiation. This
%causes ionized paths in the substrate of ICs that can cause flipped bits.
%These effects are generally temporary, hence *single* event upsets, but
%cummulative damage can be permanent, and the stray heavy nuclei can cause
%permanent stuck bits.
%
%Gary
%

Thats why "silicon on saphire" technology is employed in space systems and rad-hard applications.
Prevents latchup etc.

--
Phil

Rajiv Dewan

Jan 17, 1994, 11:00:48 AM
In article <1994Jan17.1...@ke4zv.atl.ga.us>,
Gary Coffman <ga...@ke4zv.atl.ga.us> wrote:

>Without ECC memory, the average laptop would likely suffer a flipped bit
>one or more times a day. If that bit is in an active area of program memory,
>you may have a crash. If it hits the stack, you'll almost certainly have
>a crash.

Most IBM PC clones have just a parity check. No ECC. Unless a chip is bad,
you almost never see a memory fault.

Use of ECC-protected memory is much more common in minis and larger
machines.
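The difference is easy to see in a toy sketch: a single parity bit, as on
most PC memory, can only flag an odd number of flipped bits per word; it
cannot say which bit flipped, and it misses a pair of flips entirely.
Illustrative C, not any particular memory controller:

    /* parity.c -- toy even-parity check over one byte */
    #include <stdio.h>

    static int parity_bit(unsigned char b)    /* even parity over 8 data bits */
    {
        int p = 0;
        for (int i = 0; i < 8; i++)
            p ^= (b >> i) & 1;
        return p;
    }

    int main(void)
    {
        unsigned char data = 0x5A;
        int stored = parity_bit(data);         /* 9th bit stored with the data */

        unsigned char one_flip = data ^ 0x08;  /* single-bit error */
        unsigned char two_flip = data ^ 0x0C;  /* double-bit error */

        printf("one flip detected?  %s\n",
               parity_bit(one_flip) != stored ? "yes" : "no");
        printf("two flips detected? %s\n",
               parity_bit(two_flip) != stored ? "yes" : "no");
        return 0;
    }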

Rajiv
aa9ch
r-d...@nwu.edu

Gary Coffman

Jan 17, 1994, 9:53:11 AM
In article <2hd6ji$q...@hpavla.lf.hp.com> dev...@lf.hp.com (Lee Devlin) writes:

>I (dev...@lf.hp.com) wrote:
>
>Please forgive me for I'm about to do the unthinkable. I'm actually
>going to quote from the book I referenced above. Remember, this man was
>a Nobel prize-winning physisist who had inside access to the shuttle
>development personnel during the investigation of the Challenger.

Feynman was a visionary physicist, a great teacher, and a wonderful
populariser of science, but he was not a computer systems reliability
*engineer*. He's commenting outside his field here.

>From page 192 of _What_Do_You_Care_What_Other_People_Think_:
>
> "Although there's a lot of good software being written at Johnson, the
> computers on the shuttle are so obsolete that the manufacturers don't
> make them anymore. The memories in them are the old kind, made with
> little ferrite cores that have wires going through them. In the
> meantime we've developed much better hardware: the memory chips of
> today are much, much smaller; they have much greater capacity; and
> they're much more reliable. They have internal error-correcting codes
> that automatically keep the memory good. With today's computers we
> can design separate program modules so that changing the payload
> doesn't require so much program rewriting.
>
> Because of the huge investment in the flight simulators and all the
> other hardware, to start all over again and replace the millions of
> lines of code that they've already built would be very costly."
>
>(There's a lot of associated information that discusses how each mission
>is flown on simulators using the same software before the actual mission.)
>
>So all of you core memory apologists can give it rest now. I think it's
>pretty clear from the above excerpt that the core memory is a vestige of
>the shuttle's past. It is not required for its alleged reliability or
>immunity to extraterrestrial electromagnetic events.

It's unsurprising that Shuttle computers aren't made anymore. They were
specially built for the Shuttle, and we aren't building any more Shuttles.
However, the Shuttle computers have recently been upgraded, and three
of the four now do use some ECC memory for non-critical data storage.
ECC is *not* proof against SEUs, but it's much better than unprotected
memory. Typical ECC can correct single-bit errors and detect most double-bit
errors, but take a triple hit and it's game over. Core doesn't care.
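To make the correct-one/detect-two behavior concrete, here is a minimal
Hamming(7,4)-plus-overall-parity sketch in C. Real flight ECC works on much
wider words, but the failure mode under a multiple hit is the same idea;
this is an illustration, not the Shuttle's actual ECC.

    /* secded.c -- Hamming(7,4) + overall parity: correct 1 flip, detect 2 */
    #include <stdio.h>

    /* code[0] = overall parity bit, code[1..7] = standard Hamming positions */
    static void encode(const int d[4], int code[8])
    {
        code[3] = d[0]; code[5] = d[1]; code[6] = d[2]; code[7] = d[3];
        code[1] = d[0] ^ d[1] ^ d[3];   /* parity over positions 3,5,7 */
        code[2] = d[0] ^ d[2] ^ d[3];   /* parity over positions 3,6,7 */
        code[4] = d[1] ^ d[2] ^ d[3];   /* parity over positions 5,6,7 */
        code[0] = 0;
        for (int i = 1; i <= 7; i++)
            code[0] ^= code[i];         /* overall parity */
    }

    /* Returns 0 = clean, 1 = single error corrected, 2 = double error detected. */
    static int decode(int code[8])
    {
        int s = 0, p = 0;
        for (int i = 1; i <= 7; i++) {
            if (code[i]) s ^= i;        /* syndrome = position of the error */
            p ^= code[i];
        }
        p ^= code[0];                   /* overall parity check */

        if (s == 0 && p == 0) return 0; /* no error */
        if (p == 1) {                   /* odd number of flips: assume one */
            if (s) code[s] ^= 1; else code[0] ^= 1;
            return 1;
        }
        return 2;                       /* even flips, nonzero syndrome: give up */
    }

    int main(void)
    {
        int d[4] = { 1, 0, 1, 1 }, c[8];
        encode(d, c);
        c[5] ^= 1;                      /* one upset: gets corrected */
        printf("one flip  -> %d\n", decode(c));
        c[5] ^= 1; c[6] ^= 1;           /* two upsets: only detected */
        printf("two flips -> %d\n", decode(c));
        return 0;
    }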

>There's a lot of great information about the shuttle program in the
>book. I believe someone mentioned in an earlier posting that the
>shuttle doesn't need much memory and that's why core is acceptable.
>Here's why that's not correct:
>
>From page 190:
>
> "The shuttle's computers don't have enough memory to hold all the
> programs for the whole flight. After the shuttle gets into orbit, the
> astronauts take out some tapes and load in the program for the next
> phase of the flight -- there are a many as six in all. Near the end
> of the flight, the astronauts load in the program for coming down.
>
> The shuttle has four computers on board, all running the same
> programs. All four are normally in agreement. If one computer is out
> of agreement, the flight can still continue. If only two computers
> agree, the flight has to be curtailed and the shuttle brought back
> immediately."
>
>If memory requirements really were minimal, it wouldn't be necessary to
>be carrying all those tapes along.

There's no good reason to have the entire software load for the entire
flight profile in the computer at once. Loading in the proper programs
at the proper time from tape is perfectly acceptable. Does your computer
keep every bit of software you use in memory at all times? Of course not.
That's why we have hard disks. However, hard disks aren't that sturdy
when subjected to launch vibration and G forces, so Shuttle uses tape
instead for program storage.

Note, Apollo used the HP-35 calculator as backup for its computers,
so the necessary data loads aren't that large. Shuttle carries the
HP-41CV as an ultimate backup to its computers. If all four main
computers fail, the pilot can manually calculate trajectories with
its aid. Naturally it isn't as robust as the normal computers, but
it's better than nothing, and takes up little mass or volume.

Note also that program bloat can be traced almost completely to having
excess RAM available. Programs naturally expand to fill the space available,
witness Wordstar. It was a great fast program on a 48 kb Z-80 system, but
now it's a multi-megabyte dog on a Windows PC with 8+ megs of RAM. And
that's 166 times more RAM to develop a bit error that can crash the system.
To a large degree, reliability is a function of parts count. The fewer
parts, the less to go wrong.

Grant W. Petty

Jan 17, 1994, 12:02:55 PM

I don't think you have to worry about the vented gases traveling with
the spacecraft for any appreciable length of time and thus spoiling
the vacuum. Any unconfined gas in the vacuum of outer space will
expand outward at extremely high speed. In fact, each individual gas
molecule will travel more or less indefinitely (i.e., until it runs
into something) in a straight line with whatever speed and direction
it happened to have when it cleared the vent port.
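A quick order-of-magnitude check backs this up. Assuming the vented gas is
something like room-temperature nitrogen (an assumption for illustration),
its RMS thermal speed is around half a kilometer per second, so a puff of
gas is effectively gone the instant it clears the vent:

    /* gas_speed.c -- RMS thermal speed of a vented gas molecule */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double k = 1.380649e-23;        /* Boltzmann constant, J/K         */
        const double T = 300.0;               /* assumed gas temperature, K      */
        const double m = 28.0 * 1.6605e-27;   /* N2 molecular mass, kg (assumed) */

        double v_rms = sqrt(3.0 * k * T / m); /* v_rms = sqrt(3kT/m)             */

        printf("v_rms = %.0f m/s\n", v_rms);  /* roughly 500 m/s */
        return 0;
    }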


--
Grant W. Petty gpe...@rain.atms.purdue.edu
Asst. Prof. of Atmospheric Science
Dept. of Earth & Atmospheric Sciences (317) 494-2544
Purdue University, West Lafayette IN 47907-1397 FAX:(317) 496-1210

Jim Cathey

Jan 17, 1994, 9:32:39 PM
In article <2hd6ji$q...@hpavla.lf.hp.com> dev...@lf.hp.com (Lee Devlin) writes:
>While I'm at it, I should also mention that the shuttle astronauts have
>taken the HP41C calculator with them and gave it good reports for its
>reliablity.

Hmm, "Seems to work OK every time we try it" versus "Guaranteed to work
perfectly by the manufacturer under the conditions specified" are quite
different. No way I'd want to take a ride with computers put together by
the sort-parts-for-faster-than-specified-operation crowd, whose machines'
testing is about as thorough as the HP's in this report.

Maybe the HP _is_ good enough to run the shuttle, but for something that
critical I'd want some proof, both theoretical and experimental.

To continue to partially quote Feynman, while he thought the Shuttle's
computer systems seemed crufty, he also admitted that they were the only
major (read: extremely complex) subsystem of the Shuttle that _didn't_
have performance problems, and that worked exactly as designed. Maybe
these rickety old computers are doing something right? (They may be
a royal pain in the ass, but so long as they do the job right they've
got a lot going for them.)

--
+----------------+
! II CCCCCC ! Jim Cathey
! II SSSSCC ! ISC-Bunker Ramo
! II CC ! TAF-C8; Spokane, WA 99220
! IISSSS CC ! UUCP: uunet!isc-br!jimc (ji...@isc-br.isc-br.com)
! II CCCCCC ! (509) 927-5757
+----------------+
One Design to rule them all; one Design to find them.
One Design to bring them all and in the darkness bind
them. In the land of Mediocrity where the PC's lie.

John Whitmore

Jan 17, 1994, 7:12:31 PM
In article <1994Jan17.1...@ke4zv.atl.ga.us>,
Gary Coffman <ga...@ke4zv.atl.ga.us> wrote:
>In article <2hd6ji$q...@hpavla.lf.hp.com> dev...@lf.hp.com (Lee Devlin) writes:
>>I (dev...@lf.hp.com) wrote:
>>Please forgive me for I'm about to do the unthinkable. I'm actually
>>going to quote from the book I referenced above. Remember, this man was
>>a Nobel prize-winning physisist who had inside access to the shuttle
>>development personnel during the investigation of the Challenger.

>Feynman was a visionary physicist, a great teacher, and a wonderful
>populariser of science, but he was not a computer systems reliability
>*engineer*. He's commenting outside his field here.

Feynman's job description at Los Alamos (working on
the first A-bomb) was 'Head of computation section', and his
last employment (at the time of his death) was with Thinking
Machines, Inc. While he is best known for his physics work,
his fast logarithm algorithm is also pretty widely respected.

He was NOT outside his field here. And, he wouldn't
care whether he was or not (you judge the ideas on their
own merits, not on the merits of their spokesmen).

John Whitmore


David Moisan

Jan 17, 1994, 6:49:40 PM
In article <2hd6ji$q...@hpavla.lf.hp.com> dev...@lf.hp.com (Lee Devlin) writes:

>While I'm at it, I should also mention that the shuttle astronauts have
>taken the HP41C calculator with them and gave it good reports for its
>reliablity. And no, it doesn't use core memory :-).

Hmmm, will they take an HP48 with them? That thing could probably run
the Shuttle for them. <bg> And double as a Game Boy! :)

On a related topic, it was mentioned in sci.space.shuttle once, during
last year's interminable shuttle launch aborts, that the Space Shuttle
Main Engine controllers used 68000's. Those can't possibly use core.
And as the Shuttle apparently passes through the Anomaly on its way
up, the controllers would be subject to SEU's, no? Sensor failures on
the engines, I've heard of; controller crashes, I have not. Why?

...Dave

--
| David Moisan, N1KGH /^\_/^\ moi...@silver.lcs.mit.edu |
| 86 Essex St. Apt #204 ( o ^ o ) n1...@amsat.org |
| Salem. MA 01970-5225 | | ce...@cleveland.freenet.edu |
| |

maurice.r.baker

Jan 17, 1994, 7:29:18 PM
In article <1994Jan17.1...@ke4zv.atl.ga.us> ga...@ke4zv.atl.ga.us (Gary Coffman) writes:
>
>Note, Apollo used the HP-35 calculator as backup for it's computers,

Perhaps I am mistaken, but didn't the HP-35 calculator become available in
1972? Many of the Apollo flights (if not all) were over by then. Of course,
there was still Skylab and ASTP, but I'm not sure that this is what you meant.

M. Baker
WA3ZXO
(who worked all summer between 11th and 12th grade to buy an HP-35 for $395)

Gary Coffman

Jan 18, 1994, 9:37:17 AM
In article <2hec4f$e...@crocus.csv.warwick.ac.uk> es...@csv.warwick.ac.uk (Mr P I Neaves) writes:
>
>Thats why "silicon on saphire" technology is employed in space systems and rad-hard applications.
>Prevents latchup etc.

Too bad RCA is gone. :-(

Gary Coffman

Jan 18, 1994, 9:39:20 AM
In article <2hecng$3...@news.acns.nwu.edu> rde...@casbah.acns.nwu.edu (Rajiv Dewan) writes:
>In article <1994Jan17.1...@ke4zv.atl.ga.us>,
>Gary Coffman <ga...@ke4zv.atl.ga.us> wrote:
>
>>Without ECC memory, the average laptop would likely suffer a flipped bit
>>one or more times a day. If that bit is in an active area of program memory,
>>you may have a crash. If it hits the stack, you'll almost certainly have
>>a crash.
>
>Most IBM PC clones have just a parity check. No ECC. Unless a chip is bad,
>you almost never see a memory fault.

Unless you're operating in a radiation flux higher than the normal
ground-level one, as you are in space.

co...@yertle.cc.utexas.edu

Jan 18, 1994, 11:42:05 AM

>in article <1994Jan14.1...@lmpsbbs.comm.mot.com> eck...@email.mot.com (Chris Kerlin) scribbles in crayon:
>At least as of a couple years ago...
>Several shuttle onboard computers made by IBM use magnetic core memory
>("beads and string") which limits the range of windspeeds that can be
>accomodated at liftoff.

I am fairly certain that these cores are not out in the wind. The
shuttle was designed to a 2G acceleration limit and many of the bits
are no stronger than 2G requires. The wings, which are out in the
wind, are not strong enough to withstand the speeds implied by 2G
acceleration from the ground.

Electronic items must be "space qualified" which is a lengthy process
that insures less than leading edge technology. Space qualified parts
must be fairly cosmic ray and radiation resistant which guarantees
that some technology will never qualify for mission critical
applications. Many manufacturers are not interested in making qualified
parts because demand is rather limited.


Dana Myers

Jan 18, 1994, 1:41:47 PM
In article <2hc5ct$k...@hpavla.lf.hp.com> dev...@lf.hp.com (Lee Devlin) writes:
>Dana Myers (my...@sunspot.West.Sun.COM) wrote:
>
>: I'm sure none of us armchair designers would bother to consider that
>: core memory is considerably more immue to radiation induced errors
>: than semiconductor memory while we're re-designing the shuttle system,
>: which operate in a region of considerably higher ambient radiation
>: levels than on Earth...
>
>I spend a lot of time testing for radiated immunity (and radiated
>emissions) problems on microprocessor-based instruments. Semiconductor
>memories generally do not have immunity problems with radiated fields.
>Besides, there's hardly an EM immunity problem that a Faraday cage can't
>solve if you can constrain your system to operate in one. Memory and
>CPU certainly fit in that category. Actually, most often they ARE caged
>to constrain the EM energy their clocks produce. This radiated energy
>doesn't bother them, but rather interferes with radio recievers and
>small signal transducers.

I'm talking about atomic particles moving very rapidly; you know,
radioactivity like the kind produced by plutonium, or cosmic rays.
I'm not talking about EM radiation, like that produced by a radio
transmitter.

>The statement about ambient radiation being higher in space than on
>earth is confusing. To what radiation are you referring? Cosmic?
>Transmitters (intentional and otherwise) are responsible for nearly all
>the EM fields in the frequency range that can induce unwanted currents
>in circuits and the farther you are from the source, the smaller they are.

Right; but, as explained above, I'm talking about things like alpha
particles and stuff like that. The Earth's atmosphere shields us
from a lot of the stuff from outer space. Find someone at HP who
works on electronics for satellites and ask them about rad-hardness.

Martin Vuille

Jan 18, 1994, 2:30:00 PM
D>investigated the Challenger accident. I was amazed to read that
D>something as sophisticated as the shuttle would use something as
D>antiquated as core memory. However, the shuttle's control software

Perhaps you should consider that, no matter how "antiquated" it is,
there may be some technical advantages to core memory.

For one it is non-volatile, with no need for external backup supplies,
and can be modified an unlimited number of times, in contrast to many
"newer" solid-state non-volatile memory technologies. Each word/byte can
be individually erased/modified, and the access time for read is the
same as for write (that's because every read is destructive and is
therefore a write. :-)

For another, core memory is much more rad-hard than any semiconductor
memory.

Every technology has advantages as well as disadvantages.
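A tiny sketch of what "every read is a write" means in practice; this is
just a toy model of one bit of a core plane, not any real core controller:

    /* core_read.c -- toy model of core memory's destructive read cycle */
    #include <stdio.h>

    #define WORDS 8

    static int plane[WORDS];        /* one core (one bit) per word, for show */

    /* Sensing a core forces it to 0; the controller must write the sensed
     * value back, which is why read and write cycles take the same time.   */
    static int core_read(int addr)
    {
        int sensed = plane[addr];   /* sense amplifier sees the flux change  */
        plane[addr] = 0;            /* ...which leaves the core cleared      */
        plane[addr] = sensed;       /* restore cycle: write the bit back     */
        return sensed;
    }

    int main(void)
    {
        plane[3] = 1;
        printf("read %d, still holds %d after the restore cycle\n",
               core_read(3), plane[3]);
        return 0;
    }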
MV

ProControl |
R.R. No. 2 | "Your partner in
Kemptville, ON K0G 1J0 | successful product
Tel.: (613) 258-0021 | development"
Fax: (613) 258-2542 |
martin...@synapse.org |
---

Gary Coffman

Jan 19, 1994, 1:45:27 AM
In article <CJsuo...@cbnewsh.cb.att.com> mr...@cbnewsh.cb.att.com (maurice.r.baker) writes:
>In article <1994Jan17.1...@ke4zv.atl.ga.us> ga...@ke4zv.atl.ga.us (Gary Coffman) writes:
>>Note, Apollo used the HP-35 calculator as backup for it's computers,
>
>Perhaps I am mistaken, but didn't the HP-35 calculator become available in 1972 ?
>Many of the Apollo flights (if not all) were over by then. Of course, there was
>still Skylab and ASTP, but I'm not sure that this is what you meant.

Yeah, ASTP, the Soviets were impressed, especially when they found out
*anyone* could just go into a store and buy one.

T.G. Booth

Jan 19, 1994, 3:15:44 AM
In article <2hf86k$l...@bronze.lcs.mit.edu>, moi...@bronze.lcs.mit.edu (David
Moisan) wrote:
>
> [TEXT DELETED]

>
> On a related topic, it was mentioned in sci.space.shuttle once, during
> last year's interminable shuttle launch aborts, that the Space Shuttle
> Main Engine controllers used 68000's. Those can't possibly use core.
> And as the Shuttle apparently passes through the Anomaly on its way
> up, the controllers would be subject to SEU's, no? Sensor failures on
> the engines, I've heard of; controller crashes, I have not. Why?
>

Let me try to clarify this one a bit. The main engine controllers are used
only during the ascent phase to control the three SSMEs (which stands for
space shuttle main engines). Their work is done when the LOX and liquid
hydrogen have been depleted from the external tank, which happens about 9
minutes after liftoff (talk w/ the folks who cruise sci.space.shuttle, if
you want a complete ascent timeline). Once on orbit, the main engine
controllers, like the main engines themselves, are 'just along for the
ride.' I believe that the South Atlantic Anomaly is well away from the
point at which main engine cutoff occurs along the southernmost allowable
trajectory from KSC, which makes SEU from the South Atlantic Anomaly a kind
of a 'don't care' with respect to the main engine controllers. This spiel
is from my memory only, so if I've mangled things a bit, I apologize in
advance...

By the way, the computers on STS which used core memory were the general
purpose computers or GPCs, if you're acronym-happy.

TGB

mofo

Jan 19, 1994, 12:10:28 PM
in article </S=Booth/G=T/I=G/OU=MSMAIL/O=DEN.MMAG/PRMD=MMC/ADMD=TELEMAIL/C=US/-190194...@129.243.31.151> /S=Booth/G=T/I=G/OU=MSMAIL/O=DEN.MMAG/PRMD=MMC/ADMD=TELEMAIL/C=US/@x400.den.mmc.com (T.G. Booth) scribbles in crayon:

>Let me try to clarify this one a bit. The main engine controllers are used
>only during the ascent phase to control the three SSMEs (which stands for
>space shuttle main engines). Their work is done when the LOX and liquid
>hydrogen have been depleted from the external tank, which happens about 9
>minutes after liftoff (talk w/ the folks who cruise sci.space.shuttle, if
>you want a complete ascent timeline). Once on orbit, the main engine
>controllers, like the main engines themselves, are 'just along for the
>ride.' I believe that the South Atlantic Anomaly is well away from the
>point at which main engine cutoff occurs along the southernmost allowable
>trajectory from KSC, which makes SEU from the South Atlantic Anomaly a kind
>of a 'don't care' with respect to the main engine controllers. This spiel
>is from my memory only, so if I've mangled things a bit, I appologize in
>advance...
>

i've heard enough about this so-called anomaly that i wish someone would
define what they're talking about. what is it?

d
--
mo...@netcom.com does the pope crap in the woods?
does a bear fart in his robes?

Richard Steven Walz

Jan 19, 1994, 11:14:41 AM
In article <2hhah...@abyss.west.sun.com>,

Dana Myers <my...@sunspot.West.Sun.COM> wrote:
>In article <2hc5ct$k...@hpavla.lf.hp.com> dev...@lf.hp.com (Lee Devlin) writes:
>>Dana Myers (my...@sunspot.West.Sun.COM) wrote:
>>
>>: I'm sure none of us armchair designers would bother to consider that
>>: core memory is considerably more immue to radiation induced errors
>>: than semiconductor memory while we're re-designing the shuttle system,
>>: which operate in a region of considerably higher ambient radiation
>>: levels than on Earth...
>>
>>I spend a lot of time testing for radiated immunity (and radiated
>>emissions) problems on microprocessor-based instruments. Semiconductor
>>memories generally do not have immunity problems with radiated fields.
>>Besides, there's hardly an EM immunity problem that a Faraday cage can't
>>solve if you can constrain your system to operate in one. Memory and
>>CPU certainly fit in that category. Actually, most often they ARE caged
>>to constrain the EM energy their clocks produce. This radiated energy
>>doesn't bother them, but rather interferes with radio recievers and
>>small signal transducers.
>
>I'm talking about atomic particles moving very rapidly; you know,
>radioactivity like the kind produced by plutonium, or cosmic rays.
>I'm not talking about EM radiation, like that produced by an radio
>transmitter.
>
>>The statement about ambient radiation being higher in space than on
>>earth is confusing. To what radiation are you referring? Cosmic?
>>Transmitters (intentional and otherwise) are responsible for nearly all
>>the EM fields in the frequency range that can induce unwanted currents
>>in circuits and the farther you are from the source, the smaller they are.
>
>Right; but, as explained above, I'm talking about things like alpha
>particles and stuff like that. The Earth's atmosphere shields us
>from a lot of the stuff from outer space. Find someone at HP who
>works on electronics for satellites and ask them about rad-hardness.
> * Dana....@West.Sun.Com | reflect those of my employer *
------------------------------------
Just a note. Alpha particles, unless they are high-energy solar ones, won't
go through paper. Beta (electrons) won't make it through the shuttle
compartment except at monstrous temperatures (speeds). The only thing left
is the solar wind, shielded as we are in low orbit by the Van Allen belts,
and cosmic rays, which do pass all the way through the earth! Some of them!
The solar flux that is lower in velocity is responsible for most excess
computer errors. Also, let us remember that the shuttle may produce high
DC magnetic fields from some equipment. These are NOT shielded by EM
Faraday cages, and can go through even sheet steel! If you have some ferrite
beads for memory, one good mag pulse can wipe them inside anything. But they
are probably more immune, or else better proven and reliable. I have a board
out of a DEC that is 8K x 16 and it is 7" by 10" in area, and I haven't the
vaguest idea where they got the people to do all that under a microscope!
They must have used #48 gauge wire!!! Can't even see the beads without a
microscope!
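For a feel of why the microscope helps, the arithmetic on that board works
out roughly like this (the board dimensions are from the post above; the
rest is just geometry):

    /* core_density.c -- rough core spacing on an 8K x 16 plane, 7" x 10" board */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double cores    = 8192.0 * 16.0;   /* 131,072 cores            */
        double area_in2 = 7.0 * 10.0;      /* board area from the post */

        double per_in2  = cores / area_in2;
        double pitch_in = sqrt(area_in2 / cores);   /* average spacing */

        printf("%.0f cores, about %.0f per square inch\n", cores, per_in2);
        printf("average pitch about %.3f in (%.2f mm)\n",
               pitch_in, pitch_in * 25.4);
        return 0;
    }

That comes out near 1,900 cores per square inch, a pitch of roughly 0.6 mm,
which is indeed microscope-and-tweezers territory.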
-Steve Walz rst...@armory.com

Brady Joseph

Jan 19, 1994, 3:10:51 PM
Mark, they sure do use tubes in spacecraft. It is very difficult to get more
than a few watts at Ku-band with a solid-state amplifier. Almost all the
existing communication satellites use Travelling Wave Tube (TWT) amplifiers.

regards Joe Brady


hpmvd069

Jan 19, 1994, 6:34:38 PM
mofo (mo...@netcom.com) wrote:
> i've heard enough about this so-called anomaly that i wish someone would define
> what theyre talking about. what is it?

Quite right, sorry about that. The South Atlantic Anomaly is a place in
the southern Atlantic Ocean where some of the field lines of the southern
magnetic pole emerge from the earth at a "prematurely" low latitude.
The result is that charged particles from the solar wind are funneled
toward the earth at an unusually low latitude. (The poles have higher
radiation levels at altitude than the lower latitudes do because of this
funneling.)

Doug Braun

Jan 20, 1994, 10:05:01 AM
In article <qe99Fc...@jackatak.raider.net>, ro...@jackatak.raider.net (Jack GF Hill) writes:

|> The core memeroy has several advantages over solid-state, not the
|> least of which is that if power is lost, for whatever reason, the
|> memory holds whatever settings it had...for longer than a UPS will
|> hold CMOS... Hollow-state is not suseptible to Ultra-violet erasure,

Really? The one or two CMOS chips that could replace the core memory
could be backup-powered for at least a year by a lithium battery the size
of a quarter.

UV erasure is not a problem if chips are packaged...
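The arithmetic behind the battery claim is simple enough to sketch; the
capacity and standby-current figures here are typical-sounding assumptions,
not data for any particular part:

    /* cmos_backup.c -- rough CMOS RAM retention time on a small lithium cell */
    #include <stdio.h>

    int main(void)
    {
        double capacity_mah = 180.0;   /* assumed coin-cell capacity        */
        double standby_ua   = 2.0;     /* assumed CMOS SRAM standby current */

        double hours = capacity_mah * 1000.0 / standby_ua;   /* uAh / uA */

        printf("retention: about %.0f hours (%.1f years)\n",
               hours, hours / (24.0 * 365.0));
        return 0;
    }

At a couple of microamps, even a small cell holds the data for years, not
months, so the "at least a year" figure is if anything conservative.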


-------------------------------------------------------------------
Doug Braun Intel Israel, Ltd. M/S: IDC-42 (new mailstop!)
Tel: 011-972-4-655069 dbr...@inside.intel.com

Daniel T Senie

Jan 21, 1994, 9:12:59 PM
In article <1994Jan17.1...@ke4zv.atl.ga.us> ga...@ke4zv.atl.ga.us (Gary Coffman) writes:

There were plans at one point to use a 386-based system in the shuttle's
main computer systems, though the last time I heard anything about them
was before Challenger. (The garbage scow -- an earth-controlled garbage
collector droid -- also has not been mentioned in years.)

>
>>There's a lot of great information about the shuttle program in the
>>book. I believe someone mentioned in an earlier posting that the
>>shuttle doesn't need much memory and that's why core is acceptable.
>>Here's why that's not correct:
>>
>>From page 190:
>>
>> "The shuttle's computers don't have enough memory to hold all the
>> programs for the whole flight. After the shuttle gets into orbit, the
>> astronauts take out some tapes and load in the program for the next
>> phase of the flight -- there are a many as six in all. Near the end
>> of the flight, the astronauts load in the program for coming down.

From what I recall, the tapes are kind of strange too: 9-track tape which
is read as 5 separate data streams when moving one way, 4 when moving
the other, each track independent. Something about minimizing the amount
of tape movement needed to load a program.

>>
>> The shuttle has four computers on board, all running the same
>> programs. All four are normally in agreement. If one computer is out
>> of agreement, the flight can still continue. If only two computers
>> agree, the flight has to be curtailed and the shuttle brought back
>> immediately."
>>
>>If memory requirements really were minimal, it wouldn't be necessary to
>>be carrying all those tapes along.
>
>There's no good reason to have the entire software load for the entire
>flight profile in the computer at once. Loading in the proper programs
>at the proper time from tape is perfectly acceptable. Does your computer
>keep every bit of software you use in memory at all times? Of course not.
>That's why we have hard disks. However, hard disks aren't that sturdy
>when subjected to launch vibration and G forces, so Shuttle uses tape
>instead for program storage.

The latest hard disks are quite capable of handling the G forces present
in most any craft. The ratings on some of the new 3.5" and smaller drives
are something like 100Gs when off, and 20 or more Gs while running. Laptops
have had a wonderful influence on hard disk design.

>
>Note, Apollo used the HP-35 calculator as backup for it's computers,
>so the necessary data loads aren't that large. Shuttle carries the
>HP-41CV as an ultimate backup to it's computers. If all four main
>computers fail, the pilot can manually calculate trajectories with
>it's aid. Naturally it isn't as robust as the normal computers, but
>it's better than nothing, and takes up little mass or volume.

Are they still using the HP-41s? I'd have figured they'd have moved the
programs over to the GRiD laptops by now.

>
>Note also that program bloat can be traced almost completely to having
>excess RAM available. Programs naturally expand to fill the space available,
>witness Wordstar. It was a great fast program on a 48 kb Z-80 system, but

Common misconception. Wordstar NEVER fit in 48K. Sure, it would run in a
machine that had 48K, but every time you hit ^Y to delete a line, it
had to swap in an overlay from disk to do the function, then swap back
to the main code. When more memory is available, it is possible to
improve performance.

>now it's a multi-megabyte dog on a Windows PC with 8+ megs of RAM. And
>that's 166 times more RAM to develop a bit error that can crash the system.
>To a large degree, reliability is a function of parts count. The fewer
>parts, the less to go wrong.
>

Actually, in software the less-is-better philosophy does not always hold. To
get program size smaller, one could always skip the bounds checking and input
parameter checking. Fewer parts, but less reliability...

>Gary
>--
>Gary Coffman KE4ZV | You make it, | gatech!wa4mei!ke4zv!gary

--
---------------------------------------------------------------
Daniel Senie Internet: d...@world.std.com
Daniel Senie Consulting n1...@world.std.com
508-365-5352 Compuserve: 74176,1347

Bruce Walzer

Jan 22, 1994, 3:37:38 PM
d...@world.std.com (Daniel T Senie) writes:

>In article <1994Jan17.1...@ke4zv.atl.ga.us> ga...@ke4zv.atl.ga.us (Gary Coffman) writes:
>>In article <2hd6ji$q...@hpavla.lf.hp.com> dev...@lf.hp.com (Lee Devlin) writes:
>>>I (dev...@lf.hp.com) wrote:
>>

[discussion of space shuttle computers and an analogy to PC software
deleted]

>Actually, in software the less-is-better philosophy does not always hold. To
>get program size smaller, one could always skip the bounds checking and input
>parameter checking. Fewer parts, but less reliability...

Bounds checking and parameter checking make reliable software? I think
that's a bit general in the case of shuttle computers and word processors. I
got really happy when Display Write 4 used to lock up on an error whenever I
hit the left margin with the cursor. "Attempt to backspace past left
margin" it said helpfully.

Imagine the delight and relief of the shuttle pilot if the computer said
something like "Subscript out of range. Program halted." when s/he was on
final approach.

--
Bruce Walzer |Voice: (204) 783-4983
Winnipeg MB |Internet: bwa...@lark.muug.mb.ca
Canada |Amateur Radio: VE4XOR

Lyndon Nerenberg

unread,
Jan 23, 1994, 1:03:33 AM1/23/94
to
d...@world.std.com (Daniel T Senie) writes:

>The latest hard disks are quite capable of handling the G forces present
>in most any craft. The ratings on some of the new 3.5" and smaller drives
>are something like 100Gs when off, and 20 or more Gs while running. Laptops
>have had a wonderful influence on hard disk design.

Make sure you understand the fine print. Many of these drives have their
published specifications based upon a SINGLE foo-G drop. This does not
equate to the sort of cruft a drive could expect to go through during a
*single* shuttle launch, let alone many.

For comparison, look at the building code specifications for the Los
Angeles area. There are buildings and highway superstructures rated to
withstand a .4G acceleration during a 'quake. What isn't generally specified
is the *duration* of that .4G acceleration. .4G for 5 seconds can be *very*
different from .4G for 30 seconds. A shuttle launch is very comparable to
the recent Los Angeles 'quake if you're talking disk drives and buildings.
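
To put crude numbers on the duration point -- this is back-of-the-envelope
arithmetic only, not a claim about how buildings or drives actually fail --
the integrated velocity change alone differs by a factor of six:

#include <stdio.h>

int main(void)
{
    const double g = 9.81;                    /* m/s^2 */
    const double a = 0.4 * g;                 /* sustained .4G */
    const double durations[] = { 5.0, 30.0 }; /* seconds */

    for (int i = 0; i < 2; i++) {
        double dv = a * durations[i];         /* delta-v = a * t */
        printf(".4G for %4.1f s -> delta-v of about %5.1f m/s\n",
               durations[i], dv);
    }
    return 0;
}

That works out to roughly 20 m/s versus 118 m/s, and the longer shaking
also means many more stress cycles, which is what fatigue damage feeds on.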

--lyndon

Daniel T Senie

unread,
Jan 23, 1994, 11:21:12 PM1/23/94
to
In article <1994Jan22.2...@lark.muug.mb.ca> bwa...@lark.muug.mb.ca (Bruce Walzer) writes:
>d...@world.std.com (Daniel T Senie) writes:
>
>>In article <1994Jan17.1...@ke4zv.atl.ga.us> ga...@ke4zv.atl.ga.us (Gary Coffman) writes:
>>>In article <2hd6ji$q...@hpavla.lf.hp.com> dev...@lf.hp.com (Lee Devlin) writes:
>>>>I (dev...@lf.hp.com) wrote:
>>>
>
>[discussion of space shuttle computers and an analogy to PC software
>deleted]
>
>>Actually, in software the less-is-better philosophy does not always hold. To
>>get program size smaller, one could always skip the bounds checking and input
>>parameter checking. Fewer parts, but less reliability...
>
>Bounds checking and parameter checking make reliable software? I think
>that's a bit general in the case of shuttle computers and word processors. I
>got really happy when Display Write 4 used to lock up on an error whenever I
>hit the left margin with the cursor. "Attempt to backspace past left
>margin" it said helpfully.

It's too bad that the programmer of Display Write 4 failed to add code
that DID SOMETHING when a boundary condition was hit. I maintain that I'd
still rather have the checks there and DO SOMETHING about the case.

It is ENTIRELY possible for software to detect faults and respond to them.
Rather than either hitting a boundary and giving up, or not bothering to
check a boundary and corrupting data, it makes sense to detect, report,
and correct software faults where possible.

>
>Imagine the delight and relief of the shuttle pilot if the computer said
>something like "Subscript out of range. Program halted." when s/he was on
>final approach.

Imagine when the computer fails to detect a bounds condition, corrupts
memory, and fails to continue processing joystick commands to the flaps.
If the computer detects its bounds problem, it can at least take itself
out of the loop and nominate a backup system when backup systems are
available (as is the case on the shuttle). These principles apply to
real-time, mission-critical, fault-tolerant, and online transaction
processing systems. In all cases the customer has paid for a system to
solve a problem under all conditions. Stopping and giving a panic message,
or failing to detect a fault at all, are equally unacceptable.
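
A rough sketch of that detect-and-hand-off idea in C -- the names
(lookup_gain, backup_gain) and values are invented for illustration, not
taken from any flight code:

#include <stdio.h>

#define N_GAINS 8

static double gains[N_GAINS];

/* Hypothetical stand-in for "nominate the backup system": return a
 * known-safe default instead of halting the whole program. */
static double backup_gain(void)
{
    return 1.0;
}

/* Instead of corrupting memory (no check) or stopping with
 * "Subscript out of range. Program halted." (check, then give up),
 * detect the fault, report it, and keep the loop running. */
static double lookup_gain(int idx)
{
    if (idx < 0 || idx >= N_GAINS) {
        fprintf(stderr, "gain index %d out of range, using backup\n", idx);
        return backup_gain();
    }
    return gains[idx];
}

int main(void)
{
    for (int i = 0; i < N_GAINS; i++)
        gains[i] = 0.5 + 0.1 * i;

    printf("gain(3)  = %.2f\n", lookup_gain(3));
    printf("gain(42) = %.2f\n", lookup_gain(42)); /* fault handled, not fatal */
    return 0;
}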

>--
>Bruce Walzer |Voice: (204) 783-4983
>Winnipeg MB |Internet: bwa...@lark.muug.mb.ca
>Canada |Amateur Radio: VE4XOR

pete...@physc1.byu.edu

unread,
Jan 24, 1994, 11:24:44 AM1/24/94
to

Actually there are two other driving forces in the software bloat that are
only allowed to operate because of the cheap RAM: 1) The demand for software
that requires no thought or training. Some of this is good and some of it is
useless "creeping featurism" (I have yet to understand why a top-quality word
processing package would require you to remove your hands from the keyboard
to perform basic formatting functions). And 2) the trend toward using
high-level languages for all software development. Most of those "lean and
mean" packages of the past were written in optimized assembly code because
RAM was tight. Now they don't have to put any effort into optimizing the
code and are able to write using high-level languages and compilers that
generate absolutely horrendous code (from an efficiency standpoint). Yes,
that allows them to meet the demands of (1) above more quickly, but at a
tremendous cost in storage space (just for grins, look at the bloat in the
distribution disks for ANY package over the last few years - things that
used to be delivered on 2 360K floppies now require 4 to 6 1.44M floppies,
with data compression to boot). Whether this whole trend is good or bad is
a totally religious argument (hardware is cheap and features are nice
versus why do I need so much hardware just to run a simple application).

However, any way you look at it, my hat goes off to the programmers who are
able to fit the entire control program for the Shuttle into the memory on
those computers. I can guarantee they are not using the bloated high-level
languages that you normally see in the PC world to do that.

Bryan Peterson, ki7td
pete...@physc1.byu.edu

David Lesher

unread,
Jan 25, 1994, 2:14:57 PM1/25/94
to
ga...@ke4zv.atl.ga.us (Gary Coffman) writes:

>Note, Apollo used the HP-35 calculator as backup for its computers,
>so the necessary data loads aren't that large.

Err,
The 35 came out in early 72. It took me 6 months to get one.
Thus later Apollo flights may have used them, but not early
ones.

--
A host is a host from coast to coast.................wb8foz@nrk.com
& no one will talk to a host that's close...........(v)301 56 LINUX
Unless the host (that isn't close)....kibo# 665.99.........pob 1433
is busy, hung or dead..............vr....................20915-1433

John Haddy

unread,
Jan 25, 1994, 12:33:08 AM1/25/94
to

|> However, any way you look at it my hat goes off to the programmers who are able
|> to fit the entire control program for the Shuttle into the memory on those
|> computers. I can guarantee they are not using the bloated high-level languages
|> that you normally see in the PC world to do that.
|>
|> Bryan Peterson, ki7td
|> pete...@physc1.byu.edu

As far as I know (from a job application with the European Space Agency),
all the code for the Ariane system is written in Modula-2 (which, by any
definition, is a high-level language).

Modern optimizing compilers can do a _very_ good job. In many instances,
it would take a top-flight programmer (and an awful lot more time) to do
better at the assembler level.

JohnH
