[sci.space.shuttle] Shuttle Computer Info?

labc-4da@e260-4d.berkeley.edu (Bob Heiney) (05/05/89)

My recent reading of the late Richard Feynman's "What Do *You* Care What
Other People Think" (very good, by the way), and today's launch has gotten
me wondering about the shuttle's computer systems.

What kind of information can I get on the design, implementation, etc. of
shuttle software?  Is there someone I should write?

I'm interested in this info at almost any technical level, though source
code (in Fortran?) would be a little much.  :-)

Thanks,

-------------------------------------------------------------------------------
| Bob Heiney                         "And in the end, the love you            |
| labc-4da@rosebud.Berkeley.edu       take is equal to the love you make."    |
|                                                     -- The Beatles          |
-------------------------------------------------------------------------------

maine@drynix.dfrf.nasa.gov (05/06/89)

In article <24055@agate.BERKELEY.EDU> labc-4da@e260-4d.berkeley.edu 
(Bob Heiney) writes:

>   What kind of information can I get on the design, implementation, etc. of
>   shuttle software?  Is there someone I should write?

Try JSC, maybe starting with PAO.  They'll give you the name of somebody
who knows more and you can chain your way to the real information.

>   I'm interested in this info at almost any technical level, though source
>   code (in Fortran?) would be a little much.  :-)

I think that the computers are IBM AP101s (at least they were in the early
phases and I can't believe that they'd replace these with incompatible new
computers and go through the _unbelievable_ man-rating required for new
software).  I don't know what the source code is written in, but I don't 
think it's Fortran.  We used AP101s in the second phase of our F-8 Digital 
Fly-by-Wire program, in part because Shuttle was going to use these.  
The flight control system software was written by Draper Labs, but I'm 
fairly sure they didn't use Fortran.  Anyway, the source code is written 
on a mainframe and a load module is produced (translator? compiler?), which 
is then loaded into the onboard computers.  I can find out a lot more of 
the details, but it will take a while.

Incidentally, it wasn't a bad idea to use the F-8 DFBW to test the AP101s for
the Shuttle, since we found at least one generic problem.  Before we went to
the AP101s we used Apollo (the space program, not the workstation) computers
since we had to have flight-rated computers and there were _very_ few of those
in the late 60s and early 70s.

M F Shafer
NASA Ames-Dryden Flight Research Facility
shafer@elxsi.dfrf.nasa.gov

NASA management doesn't know what I'm doing and I don't know what
they're doing, and everybody's happy this way.

phil@hypatia.rice.edu (William LeFebvre) (05/06/89)

In article <24055@agate.BERKELEY.EDU> labc-4da@e260-4d.berkeley.edu (Bob Heiney) writes:
>What kind of information can I get on the design, implementation, etc. of
>shuttle software?  Is there someone I should write?
>
>I'm interested in this info at almost any technical level, though source
>code (in Fortran?) would be a little much.  :-)

Well, I happen to have access to someone who knows a great deal about the
on-board computer system.  You didn't really specify, but I assume that
you are referring to the software that's used in the on-board computers
(the General Purpose Computers, or GPC's).  Correct?

I don't know how much I can find out about the history of the programs
(design and implementation issues), but I can get an answer for just about
any technical question concerning the GPCs.

I'll tell you what I know about them off the top of my head:  there are
five General Purpose Computers (GPC) on board.  The hardware in each GPC
is identical:  it is an IBM AP-101/B computer.  Typical IBM segmented
architecture, but memory is addressed on 16-byte boundaries (IBM calls
them "half-words", the rest of the world calls them "words" or "shorts").
Each GPC has 212 half-words of iron-ferrite core (yes, iron-ferrite core)
memory (that's the equivalent of 424K).

Nominal ascent and entry configuration has GPCs 1 thru 4 running the
primary flight software and GPC 5 running "BFS", the backup flight
software.  The primary software was written by IBM, BFS was written by a
team at Rockwell.  The primary software controls all the flight critical
stuff, the BFS just calculates and only goes into action if it is needed.
Only the commander and pilot can make the decision to switch to the BFS
(it will never happen automatically), and that has never happened in a
real flight.  All the software was originally written in HAL/S, some weird
almost-structured algol-like language.
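To give a feel for the redundant-set idea, here is a toy sketch in Python.
The names and the voting details are my own invention, not how PASS
actually does it:

```python
from collections import Counter

def vote(outputs):
    # Majority-vote the outputs of the redundant machines.  In the real
    # system the comparisons (and the voting-out of a disagreeing GPC)
    # are done by the flight software itself; this just shows the idea.
    tally = Counter(outputs.values())
    winner, _ = tally.most_common(1)[0]
    dissenters = {gpc for gpc, out in outputs.items() if out != winner}
    return winner, dissenters

# Four primary GPCs computing the same flight-control output:
value, bad = vote({"GPC1": 0.52, "GPC2": 0.52, "GPC3": 0.52, "GPC4": 0.61})
# value is the agreed output; bad contains the dissenting unit(s).
```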

What else do you want to know?

			William LeFebvre
			Department of Computer Science
			Rice University
			<phil@Rice.edu>

P.S.: at one point (long ago) the wonderful media was misinforming the
public by saying that "four computers were built by IBM and one was built
by Rockwell".  Don't believe it.  It's the software that's different,
*not* the hardware.

sd@chem.ucsd.edu (Steve Dempsey) (05/06/89)

The September 1984 edition of Communications of the ACM (vol. 27, no. 9)
contains several articles on the Shuttle's computer hardware and software.
The titles are:

	Case Study: The Space Shuttle Primary Computer System

    Special Section on Computing in Space

	Development and Application of NASA's First Standard Spacecraft Computer

	Design, Development, Integration:  Space Shuttle Primary Flight
	Software System

	Architecture of the Space Shuttle Primary Avionics Software System

The cover photograph is a remote view of the Challenger taken from the
Shuttle pallet satellite (SPAS-1) with the entire background filled with
clouds.  This picture alone is worth digging out the journal.

ncoverby@ndsuvax.UUCP (Glen Overby) (05/07/89)

In article <3227@kalliope.rice.edu> phil@hypatia.rice.edu (William LeFebvre) writes:
>In article <24055@agate.BERKELEY.EDU> labc-4da@e260-4d.berkeley.edu (Bob Heiney) writes:
>I don't know how much I can find out about the history of the programs
>(design and implementation issues), but I can get an answer for just about
>any technical question concerning the GPCs.

>What else do you want to know?

Thanks for the interesting info on the computers.  I don't recall this from
the 1984 CACM article.

The flight software (both on the ground and on-board) gives me the
impression of being crufty.  On the failed attempt last Friday, the
countdown was halted at 31 seconds; that's exactly when control of the launch
is turned over to the on-board computers.  So why do they decide to hold the
launch at that point?  Most of what causes the countdown to hold will have
existed for several minutes at least, so why don't the ground computers
decide to hold the launch then, or have the on-board computers say "I'm
gonna hold when I get control".

On another subject; I recall reading that the Shuttle's computers were so
stuffed that to add something, something else had to be removed.  With a
16MB address space, it would seem that more memory could be added and this
could be avoided.
--
                Glen Overby     <ncoverby@plains.nodak.edu>
        uunet!ndsuvax!ncoverby (UUCP)   ncoverby@ndsuvax (Bitnet)

jjb@sequent.UUCP (Jeff Berkowitz) (05/07/89)

In article <24055@agate.BERKELEY.EDU>
  labc-4da@e260-4d.berkeley.edu (Bob Heiney) writes:

>What kind of information can I get on the design, implementation, etc. of
>shuttle software?  Is there someone I should write?

As noted in another posting, the Sept 84 CACM has excellent coverage.
I found the section about the synchronization of the primary computers
particularly fascinating.  The four flight computers don't run in lockstep
at the hardware level; all cross checking is under the control of the
HAL/S code.  Synch points and checks are only as good as the real time
software which implements them.  There is some discussion of the synch
bug which delayed the very first shuttle launch.  The article also goes
into the (excruciating:-) configuration control process used for the
software.  Highly recommended.
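For the curious, a sync point of that sort can be sketched with a barrier
plus a timeout.  This is purely illustrative Python; the real thing is in
HAL/S, and the names and timing constants here are mine:

```python
import threading

# Each machine in the redundant set calls sync_point() at agreed places
# in the code.  The timeout is the crucial part: if one computer never
# arrives, the others declare the sync failed rather than hang forever.
barrier = threading.Barrier(parties=4, timeout=0.5)
results = {}

def sync_point(name):
    try:
        barrier.wait()
        results[name] = "in sync"
    except threading.BrokenBarrierError:
        results[name] = "sync failed"

threads = [threading.Thread(target=sync_point, args=(f"GPC{i}",))
           for i in range(1, 5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```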
-- 
Jeff Berkowitz N6QOM			uunet!sequent!jjb
Sequent Computer Systems		Custom Systems Group

phil@titan.rice.edu (William LeFebvre) (05/09/89)

In article <2645@ndsuvax.UUCP> ncoverby@ndsuvax.UUCP (Glen Overby) writes:
>The flight software (both on the ground and on-board) gives me the
>impression of being crufty.  On the failed attempt last Friday, the
>countdown was halted at 31 seconds; that's exactly when control of the launch
>is turned over to the on-board computers.  So why do they decide to hold the
>launch at that point?  Most of what causes the countdown to hold will have
>existed for several minutes at least, so why don't the ground computers
>decide to hold the launch then, or have the on-board computers say "I'm
>gonna hold when I get control".

I don't really understand how that qualifies the software as "crufty".
It's just The Way It Is Done.  If anything happens between T-5 minutes and
T-31 seconds that requires a hold, the hold won't happen until T-31
seconds.  I remember for one of the recent flights (26, I believe), one of
the launch controllers called that there would be a T-31s hold.  This was
around T-3 mins.  But during the next minute, some people decided that the
hold was not necessary.  In other words, the situation that a computer had
flagged as violating launch criteria was determined by the launch crew as
OK, and the count was not held at T-31s.  If the computer had held right
away, it would have really disrupted the flow.

[The following paragraph is my own ideas and theory:]
Probably part of the reason they do things that way has to do with the
whole philosophy behind the countdown.  Certain things are scheduled to
take place at certain times in the count.  If you can guarantee that the
count will proceed from T-5m to T-31s without interruption, then it seems
to me that planning and executing procedures becomes much easier.  I
seriously doubt that they do it that way because of "crufty" software!
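The policy I'm describing amounts to something like this (a toy model in
Python, my own invention, not anything from the actual ground software):

```python
def count_holds_at(violations, handoff=31):
    # violations: (flagged_at, cleared_at) pairs, in seconds before
    # launch; cleared_at is None if the launch crew never waves it off.
    # Under the deferred-hold policy, a criterion flagged during the
    # T-5m to T-31s window only stops the count if it is still active
    # when control is about to pass to the on-board computers.
    for flagged_at, cleared_at in violations:
        if flagged_at >= handoff and cleared_at is None:
            return handoff      # hold at T-31 seconds
    return None                 # count proceeds

# The flight-26 case: flagged around T-3 minutes, waved off a minute
# later, so the count was never held at T-31s.
assert count_holds_at([(180, 120)]) is None
assert count_holds_at([(180, None)]) == 31
```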

>On another subject; I recall reading that the Shuttle's computers were so
>stuffed that to add something, something else had to be removed.  With a
>16MB address space, it would seem that more memory could be added and this
>could be avoided.

I know this is true for ascent.  The primary software (PASS) takes up all
the available memory in the GPC (no virtual memory here).  I'm virtually
positive that that is NOT true for the software they run on orbit (it
doesn't really have to do much anyway) and I believe that there is still a
fair amount of room in the program they use for entry.

They keep promising the great GPC upgrade:  among other things it should
include 256K words of battery backed static RAM.

			William LeFebvre
			Department of Computer Science
			Rice University
			<phil@Rice.edu>

"Flight, DPS.......GPC 4 failed to sync."

phil@titan.rice.edu (William LeFebvre) (05/09/89)

In article <3227@kalliope.rice.edu> phil@hypatia.rice.edu (William LeFebvre) writes:
>I'll tell you what I know about them off the top of my head:  there are
>five General Purpose Computers (GPC) on board.  The hardware in each GPC
>is identical:  it is an IBM AP-101/B computer.  

Well, that will teach me to write "off the top of my head"!  Let's see how
many mistakes we can find:

>Typical IBM segmented architecture,

No, not typical.  No base registers.  It is really more like "extended
addressing" or bank switching.  The data bus is only 16 bits wide.  There
are extra bits in the program status word that define which "bank" to
address.

>but memory is addressed on 16-byte boundaries

Of course I meant "16-BIT" boundaries.

>Each GPC has 212 half-words of iron-ferrite core (yes, iron-ferrite core)

Well, it is iron-ferrite core, but there's a few more of them than 212.  I
seem to have dropped a "K".  But it isn't 212K anyway, so the whole
statement was wrong.

THIS IS REALLY CORRECT (honest):  there are 208K half-words, which is the
same as 212,992 half words (K=1024), which is the same as 416K bytes.

You'd think I'd learn......
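For what it's worth, the arithmetic is easy to check (K = 1024, two bytes
per half-word):

```python
K = 1024
halfwords = 208 * K           # 10 banks x 16K in the CPU + 6 banks x 8K in the IOP
bytes_total = halfwords * 2   # a half-word is 16 bits
# 208K half-words is 212,992 half-words, or 416K bytes.
```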

			William LeFebvre
			Department of Computer Science
			Rice University
			<phil@Rice.edu>

phil@titan.rice.edu (William LeFebvre) (05/09/89)

In article <2645@ndsuvax.UUCP> ncoverby@ndsuvax.UUCP (Glen Overby) writes:
>On another subject; I recall reading that the Shuttle's computers were so
>stuffed that to add something, something else had to be removed.  With a
>16MB address space, it would seem that more memory could be added and this
>could be avoided.

Where do you get 16MB address space from?  And even if the computer can
address that much, the address bus can still be a limiting factor.

Here's some more nitty gritty (are you sure you really want to know all
this?).  A GPC really consists of two boxes.  Those of you who watched the
on-board replacement procedure last Sunday saw the two separate boxes.  I
believe they replaced both of them.  One box is the CPU and the other box
is the IOP, or Input/Output Processor.  In the CPU there are 10 banks of
16K halfwords each, for a total of 160K halfwords.  But there's also some
memory in the IOP:  6 banks of 8K each, totalling 48K.  48+160 gives 208.
In the CPU, the address bus is only 16 bits wide.  The rest of memory is
addressed with "extended addressing" via extra bits in the PSW.  But the
bus between the IOP and the CPU that is used for DMA is only 18 bits plus
a parity bit, and it must specify the entire address.  Therein lies the
problem with upgrading the GPC.  They can easily go to 256K halfwords, but
that doesn't really give them much more memory.  If they go beyond 256K,
then the IOP cannot address the additional memory (if they put the memory
in the IOP box, then the CPU couldn't address it).  This problem is not
insurmountable, but it would require a much more clever compiler to put
all the right data in the right places.  And that would result in some
serious rearranging of the resulting assembly code.  And that would
require serious testing and recertification.

They want to go to 512K halfwords, which is the most the CPU can currently
address.  The PSW contains three extra bits for data fetches and three
extra bits for program fetches, so the limit is 2^19, or 512K.  Anything
higher would require serious redesign of the hardware.  If they want the
IOP to address anything higher than 256K, that would also require a
serious redesign.
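The addressing limit can be sketched like so (illustrative Python; the
actual PSW bit layout is certainly not this simple):

```python
def effective_address(bank_bits, offset16):
    # Three extended-addressing bits from the PSW select a 64K-half-word
    # bank; the 16-bit address selects a half-word within that bank.
    assert 0 <= bank_bits < 2**3 and 0 <= offset16 < 2**16
    return (bank_bits << 16) | offset16

# 3 bank bits + 16 address bits = 19 bits, hence the 512K-half-word ceiling:
max_addressable = 2 ** (3 + 16)

# The 18-bit (plus parity) CPU/IOP DMA bus caps what the IOP can reach
# at 2**18, i.e. 256K half-words -- the upgrade headache described above.
iop_limit = 2 ** 18
```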

Fun, huh?

			William LeFebvre
			Department of Computer Science
			Rice University
			<phil@Rice.edu>

slr@skep2.ATT.COM (Shelley.L.Rosenbaum.[ho95c]) (05/10/89)

In article <2645@ndsuvax.UUCP> ncoverby@ndsuvax.UUCP (Glen Overby) writes:
>On another subject; I recall reading that the Shuttle's computers were so
>stuffed that to add something, something else had to be removed.  With a
>16MB address space, it would seem that more memory could be added and this
>could be avoided.

One of the things that had to be eliminated from the software during
countdown was the constant update from the IMUs (Inertial Measurement
Units).  During one phase of countdown, the computers would be too
overloaded, so the IMU inputs were ignored.  The IMUs are "recalibrated"
during one of the holds (T-9 minutes comes to mind, but don't hold me
to it).

[Note:  I used to work for Singer-Kearfott, which made the IMUs; my
supervisor had been on the shuttle project, and he gave me the above
info.]

-- 
Shelley L. Rosenbaum, Air Traffic Control Systems, AT&T Bell Laboratories
{allegra, att, arpa}!ho95c!slr     slr@ho95c.att.arpa      (201) 949-3615

"Surrounded by a thin, thin, thin, 16-millimeter shell."

thomas@mvac23.UUCP (Thomas Lapp) (05/12/89)

> >   I'm interested in this info at almost any technical level, though source
> >   code (in Fortran?) would be a little much.  :-)
> 
> I think that the computers are IBM AP101s (at least they were in the early

Obviously, the AP101's are fairly small, but how do they fit in with the
rest of what IBM produces?  Is it a mini-mainframe, a specialized design-
to-spec machine, or what?

thanks,

                         - tom
==============================================================================
                                          ! NOTICE: Site 'mvac' is no more.
uucp:     ...!udel!mvac23!thomas          !         all mail must be sent to
Internet: mvac23!thomas@udel.edu          !         site 'mvac23' instead.
                                          !          Thanks.
-------------------------------------------------------------------------------

hollombe@ttidca.TTI.COM (The Polymath) (05/18/89)

In article <26.UUL1.3#5131@mvac23.UUCP> mvac23!thomas@udel.edu writes:

}Obviously, the AP101's are fairly small, but how do they fit in with the
}rest of what IBM produces.  Is it a mini-mainframe, a specialized design-
}to-spec. machine, or what?

From foggy memory:

The principles of operation of the AP-101 are similar to those of the
System/370.  The major exception is that the AP-101 has two sets of 8
general purpose registers instead of the 370's one set of 16.  (This is
based on my experience with their respective assemblers.)

Physically, each AP-101 consists of two metal boxes, (very) roughly 1' x
1.5' x 3' each and weighing about 90 lbs each.  My memory gets much
foggier here, but I recall one box is the CPU and the other is sort of a
giant math co-processor.  The boxes are connected by a 2" diameter cable.

Memory is non-volatile (a source of much confusion to novices [i.e.:  Me]
and much amusement to old hands at Rockwell's Avionics Development Lab).
Mass data storage is provided by the Main Memory Unit (MMU), a mag-tape
based device that would put Rube Goldberg to shame.  Each program the
Shuttle computers need to run is stored on the MMU in triplicate.

IBM expended a _lot_ of effort (and NASA's money) to ensure that each and
every AP-101 is as near exactly like each and every other AP-101 as
possible.

Disclaimer:

The above is what I remember from working in the ADL over 6 years ago.  My
memory is probably a bit degraded and some things may have changed since
then.

-- 
The Polymath (aka: Jerry Hollombe, hollombe@ttidca.tti.com)  Illegitimati Nil
Citicorp(+)TTI                                                 Carborundum
3100 Ocean Park Blvd.   (213) 452-9191, x2483
Santa Monica, CA  90405 {csun|philabs|psivax}!ttidca!hollombe

phil@titan.rice.edu (William LeFebvre) (05/18/89)

In article <4452@ttidca.TTI.COM> hollombe@ttidcb.tti.com (The Polymath) writes:
>From foggy memory:

Let me clear some of the fog!  :-)
[Seriously, I'm not trying to show off or criticize, I'm just trying to
gently correct minor mistakes.  No insult intended.]

>Physically, each AP-101 consists of two metal boxes (very) roughly 1' x
>1.5' x 3', each and weighing about 90 lbs, each.  My memory gets much
>foggier here, but I recall one box is the CPU and the other is sort of a
>giant math co-processor.  The boxes are connected by a 2" diameter cable.

The second box is the IOP (Input/output processor).  I don't think it does
any math crunching.

>Mass data storage is provided by the Main Memory Unit (MMU), a mag-tape
                                      ^^^^
                                      Mass Memory Unit.

And there are actually two of them (they are identical, with identical
copies of the software on separate tapes).  On the panel that controls the
GPC's (O-6), you can select which MMU a given GPC should read from.

By the way, it takes a non-trivial amount of time to load a program in
from the MMU (as you would expect).  The ascent software is in parts.  For
launch, only the software needed for a nominal ascent is in primary
memory.  If they have to abort RTLS (return to launch site) or TAL
(trans-atlantic abort), they have to load a different program in from the
MMU.  This does not sit well with some people, and it is one reason they
want to extend the memory capacity of the machines.

Funny MMU story.  One of my wife's former DPS co-workers called her one
day not too long ago and told her that someone he knew was doing a school
report on the Shuttle's MMU.  He asked her if she could send this person
as much information on the MMU's as she could find (and that would be
intelligible to someone without an in-depth knowledge of the shuttle).
She was a bit baffled by this:  why would anyone care about the MMU's?  But
she honored his request anyway.  About a week later, he called back and
apologized.  Seems that this student wanted information about the Manned
Maneuvering Unit, *not* the Mass Memory Unit.  But if you say "MMU" to
someone who works (or in his case, recently worked) in DPS, their first
thought is "Mass Memory Unit"!  Can you say "acronym overload"?

			William LeFebvre
			Department of Computer Science
			Rice University
			<phil@Rice.edu>

hollombe@ttidca.TTI.COM (The Polymath) (05/20/89)

In article <3287@kalliope.rice.edu> phil@Rice.edu (William LeFebvre) writes:
}In article <4452@ttidca.TTI.COM> hollombe@ttidcb.tti.com (The Polymath) writes:
}>From foggy memory:
}
}Let me clear some of the fog!  :-)
}[Seriously, I'm not trying to show off or criticize, I'm just trying to
}gently correct minor mistakes.  No insult intended.]

And none taken.

}>Physically, each AP-101 consists of two metal boxes (very) roughly 1' x
}>1.5' x 3', each and weighing about 90 lbs, each.  My memory gets much
}>foggier here, but I recall one box is the CPU and the other is sort of a
}>giant math co-processor.  The boxes are connected by a 2" diameter cable.
}
}The second box is the IOP (Input/output processor).  I don't think it does
}any math crunching.

Right you are.  It begins to come back to me now.  My personal memory
is degraded more than I thought.

}>Mass data storage is provided by the Main Memory Unit (MMU), a mag-tape
}                                      ^^^^
}                                      Mass Memory Unit.

Right again.

Thanks for the corrections.

-- 
The Polymath (aka: Jerry Hollombe, hollombe@ttidca.tti.com)  Illegitimati Nil
Citicorp(+)TTI                                                 Carborundum
3100 Ocean Park Blvd.   (213) 452-9191, x2483
Santa Monica, CA  90405 {csun|philabs|psivax}!ttidca!hollombe