[sci.space.shuttle] Shuttle computer reprogramming

glennw@nsc.nsc.com (Glenn Weinberg) (09/30/88)

Everyone undoubtedly knows by now that the shuttle has made it into space
again.  You probably also know that the launch was delayed for over 90
minutes while NASA waited for favorable upper atmosphere wind conditions.

The networks explained the problem as being that a certain wind speed and
direction (determined by historical data) is programmed into the shuttle
computers and that reprogramming would be difficult and require retesting.

The question I, as a software engineer, have is: why should this require
reprogramming at all?  Why couldn't the program have been written to
accept input as to meteorological conditions at launch time?  Seems to me
that hard-wiring this data into the program isn't particularly good
programming practice.

I don't want to believe that the shuttle programmers simply missed this.
Is it a memory space issue?  A computation time issue?  Is the meteorological
data somehow massaged before it's loaded into the program?  Or is it simply
a testability issue, in the sense that the program couldn't be tested without
some data, and NASA wouldn't feel confident enough in the program running
with data not known in advance?

I'm interested in any information you folks out there might have, purely
from a professional curiosity point of view of course.

In any event, we software types should all be happy that the shuttle wasn't
grounded by software again, as it was the first time we tried to launch it!
-- 
Glenn Weinberg					Email: glennw@nsc.nsc.com
National Semiconductor Corporation		Phone: (408) 721-8102
(My opinions are strictly my own, but you can borrow them if you want.)

lwall@jpl-devvax.JPL.NASA.GOV (Larry Wall) (09/30/88)

My favorite theory is that the delays were intentional to foil anybody
trying to fly in a kamikaze model airplane with a zip gun.   :-)

Even the carefully orchestrated hesitation in the last couple of minutes when
they were threatening to stop the clock at 31 seconds and then didn't would
tend to throw off anyone relying on close timing.	:-) :-) :-)  (mostly)

Larry Wall
Official Spokescritter for the Pasadena Paranoid Programmers Party.
(I'm obviously not speaking for anyone else around here...  Am I?)
lwall@jpl-devvax.jpl.nasa.gov

hinojosa@hp-sdd.hp.com (Daniel Hinojosa) (09/30/88)

In article <6689@nsc.nsc.com> glennw@nsc.UUCP (Glenn Weinberg) writes:
>Everyone undoubtedly knows by now that the shuttle has made it into space
>again.  
 
Yes. To say that this launch was merely beautiful is more than an
understatement. I have been a lifelong space-phile, which I attribute
to growing up at Vandenberg AFB. Yesterday's launch was the peak of
excitement for me in the last 2.8 years. Congratulations to all of
NASA and all the engineering firms who contributed to one of America's
greatest moments.

[deleted stuff]

>The question I, as a software engineer, have is: why should this require
>reprogramming at all?  Why couldn't the program have been written to
>accept input as to meteorological conditions at launch time?  Seems to me
>that hard-wiring this data into the program isn't particularly good
>programming practice.
>
>I don't want to believe that the shuttle programmers simply missed this.
>Is it a memory space issue?  ... [deleted possibilities]

Someone here at work mentioned that the shuttle has less memory than
many home computers, in the neighborhood of 640K. Let's assume for
now that this is the case. Not only does that seem like an amazingly
low amount of memory to help resolve such issues, but it seems like
it could easily be an item that might have been considered for improvement
after the Challenger incident. 

>I'm interested in any information you folks out there might have, purely
>from a professional curiosity point of view of course.
 
I hope this information is out there. Please follow up so that I
too may learn more about this!

About the plume: I suppose, as NASA says, it COULD have been an optical
illusion, but from all of these responses I rather doubt that so many
people could be wrong. I saw it, and wondered... A plume caused the
last accident; could it be that I'm looking for a reason for this one to
fail? Nawwww, I love the shuttle program. I would NEVER want to see that
happen. Ever. So now I wonder why NASA decided to stay with O-rings
at all. I seem to recall, in the days right after Challenger, seeing
news stories about boosters. The Air Force had made boosters for use
at Vandenberg that were all one piece. I think they were a type of
fiberglass, spun into a cylindrical shape. These were lighter,
therefore allowing a heavier payload. What's the story?

-- 
| 
| e-mail: hinojosa@hp-sdd.hp.com

rubinoff@linc.cis.upenn.edu (Robert Rubinoff) (09/30/88)

In article <1543@hp-sdd.HP.COM> hinojosa@hp-sdd.hp.com.UUCP (Daniel Hinojosa) writes:
>Someone here at work mentioned that the shuttle has less memory than
>many home computers, in the neighborhood of 640K. Let's assume for
>now that this is the case. Not only does that seem like an amazingly
>low amount of memory to help resolve such issues, but it seems like
>it could easily be an item that might have been considered for improvement
>after the Challenger incident. 
>

Part of the problem is that the shuttle uses magnetic core memory (remember
that?) because it's non-volatile.  In fact, I believe they always load the
next program they're going to use into the computer as soon as they're done
with the current one, so that if they have problems, as soon as they get the
computers back up they'll be ready to run.

There was a long case study of the shuttle computer systems a few years ago
in the Communications of the ACM.  The computers are actually fairly old
technology all around; they're special purpose versions of IBM 360 processors.
They wanted technology that had been extensively tested over a long period
of time, since it would be difficult to get a service rep up to the shuttle
in case of trouble.  

   Robert

knudsen@ihlpl.ATT.COM (Knudsen) (09/30/88)

Well, the network (ABC) said it was memory limitations.
The computers each have at most 512K each (they never said exactly),
maybe only 256 or 64K.  I gather the programs are compiled with
a #define WIND_SPEED 12345 or some such thing because there's no
room for a routine to read the stuff in.  Probably so little
space that a "load immediate #speed" instruction was needed,
rather than "load address from table."

The PBS station did a special last night on the shuttle.
One of the flight crew said that every time they want to
put a new capability into the computers (like yet another
emergency abort scenario), something else has to be
taken out.  Like choose which emergency procedures you
want this time.

So I'll bet every byte of that code is hand-optimized
to hell and back.  I doubt any hi-level language,
or even C, got anywhere near those computers.

Would someone who knows like to tell us how much RAM & ROM
these babies have?  I know they use core memory (a good
idea in some senses).  Also what CACM issue described them
(around 1982 I think)?

tif@cpe.UUCP (10/01/88)

Written  4:20 pm  Sep 29, 1988 by nsc.UUCP!glennw in cpe:sci.space.shuttle
>The networks explained the problem as being that a certain wind speed and
>direction (determined by historical data) is programmed into the shuttle
>computers and that reprogramming would be difficult and require retesting.
>
>The question I, as a software engineer, have is: why should this require
>reprogramming at all?  Why couldn't the program have been written to
>accept input as to meteorological conditions at launch time?  Seems to me
>that hard-wiring this data into the program isn't particularly good
>programming practice.

If I'm right, there will be several responses saying the same thing...

I interpreted the reports differently than you.  I understood that what
was hard-coded was the "bad weather" limit, beyond which a launch would
not take place.  They felt that the limit was too strict but rather than
change and retest the software, they just made a waiver and proceeded.

Am I talking about the same thing as you?

			Paul Chamberlain
			Computer Product Engineering, Tandy Corp.
			{convex,killer}!ninja!cpe!tif

schouten@uicsrd.csrd.uiuc.edu (10/01/88)

Pardon me if this is an old rehashed issue, but
What kind of computers do they use on the shuttle?
Why do they need 5 of them in case they disagree?

I've heard they have very small memory ("less than a typical
modern PC" according to one astronaut).
Why don't they modernize?

Please e-mail responses if this has been covered already.

Dale A. Schouten
Center of Supercomputing Research and Development
UUCP:	 {seismo,pur-ee,convex}!uiucdcs!uicsrd!schouten
ARPANET: schouten%uicsrd@a.cs.uiuc.edu
CSNET:	 schouten%uicsrd@uiuc.csnet
BITNET:	 schouten@uicsrd.csrd.uiuc.edu

cjl@ecsvax.uncecs.edu (Charles Lord) (10/01/88)

In article <1543@hp-sdd.HP.COM>, hinojosa@hp-sdd.hp.com (Daniel Hinojosa) writes:
> So now I wonder why NASA decided to stay with O rings
> at all. I seem to recall in the days right after Challenger, seeing
> news stories about boosters. The Air Force had made boosters for use
> at Vandenberg that were all one piece. I think they were a type of
> fibre glass, and spun into a cylindrical shape. These were lighter,
> therefor allowing a heavier payload. What's the story?

I read that the main reason for M-T's contract was pork barrel
politics on some senator's part.  There are SRB manufacturing
facilities on the Gulf that are capable of making one-piece SRBs
and barging them to KSC.  The whole reason for o-rings is that
you cannot ship a whole SRB from Utah to Florida intact.
-- 
 *  Charles Lord               ..!decvax!mcnc!ecsvax!cjl  Usenet (old) *
 *  Cary, NC                   cjl@ecsvax.UUCP            Usenet (new) *
 *  #include <std.disclamers>  cjl@ecsvax.BITNET          Bitnet       *
 *  #include <cutsey.quote>    cjl@ecsvax.uncecs.edu      Internet     *

jetzer@studsys.mu.edu (jetzer) (10/01/88)

> So now I wonder why NASA decided to stay with O rings
> at all. I seem to recall in the days right after Challenger, seeing
> news stories about boosters. The Air Force had made boosters for use
> at Vandenberg that were all one piece. I think they were a type of
> fibre glass, and spun into a cylindrical shape. These were lighter,
> therefor allowing a heavier payload. What's the story?


As I recall, NASA has been working on redesigned boosters for quite some
time (before the Challenger disaster?).  However, the boosters won't be
ready for quite some time yet.  Rather than ground the program until the
new boosters are ready, NASA has redesigned the old boosters, incorporating
as many safety features as possible.
-- 
Mike Jetzer
"Hack first, ask questions later."

mrb1@homxc.UUCP (M.BAKER) (10/01/88)

Hi ---

"Electronic Engineering Times" has run several good articles on
the Shuttle computer systems, etc. within the last month (sorry
I don't have the exact issue dates handy).  I remember an article
describing the on-board computers (1970s vintage, mfd. by IBM,
with core memory) which mentioned plans for upgrading them in the
near future.  Also, the latest issue talks about the computers &
display systems in Mission Control, and how they are just getting
around to replacing Apollo-era stuff (monochrome text-only displays
connected to old mainframes, which show messages in hexadecimal
requiring look-up in a code book or reference card, etc.) with
more timely technology.  Both very interesting articles, and well
worth looking for at your local library, info. center, or whatever.

homxc!mrb1    (mrb1 @ homxc.att.com)

phil@titan.rice.edu (William LeFebvre) (10/01/88)

In article <6980@ihlpl.ATT.COM> knudsen@ihlpl.ATT.COM (Knudsen) writes:
>Well, the network (ABC) said it was memory limitations.
>...
>I gather the programs are compiled with
>a #define WIND_SPEED 12345 or some such thing

It's not just memory and it's not as simple as changing a constant...
more on that later.

>One of the flight crew said that every time they want to
>put a new capability into the computers (like yet another
>emergency abort scenario), something else has to be
>taken out.

This is true.  The software used on ascent almost completely fills the
memory capacity of the computers.  To add anything requires removing
something else (or at least getting the memory from somewhere).

>So I'll bet every byte of that code is hand-optimized
>to hell and back.  I doubt any hi-level language,
>or even C, got anywhere near those computers.

They call the language HAL/S (for "High-level Assembly Language").  I have
seen some of the flight software (about a page or so) and it looked a
whole lot like IBM assembly language.  When I saw "BAL" I knew instantly
what it meant.

>Would someone who knows like to tell us how much RAM & ROM
>these babies have?  I know they use core memory...

No RAM, no ROM.  They use iron-ferrite core memory.  I believe I mentioned
in this list previously what I *thought* was a correct figure for the
amount of memory.  It turns out I was wrong.  It is most definitely 208K
half-words (IBM-ese for "16 bit word").  That would be 416K bytes.  But
the machine is only half-word addressable, so giving the size in terms of
bytes is a little misleading.  Remember that K=1024, so that's almost 213
thousand half-words (it's possible that some non-technical types would
incorrectly say 212K or 213K).

Now as to why they can't put all the different scenarios into one program
(and some of this is conjecture on my part)....They have calculated ahead
of time a handful of different flight profiles.  The profiles are based,
in part, on the wind velocities and directions at different altitudes.
Apparently, this is a very complicated model and takes quite a bit of CPU
time and power to calculate the expected route.  There is just no way that
the on-board computers are going to be able to perform the calculations in
real time.  I suspect that it's hard for any computer to do the
calculations in real time (maybe a Cray could).  So they perform all these tough
calculations ahead of time and the on-board software becomes much simpler.
So it's not really a matter of simply changing a constant.  My
understanding of this is very limited and my explanation may be a bit
fuzzy, but it made sense to me.  This has a lot to do with flight dynamics,
which is apparently a very intense science.

			William LeFebvre
			Department of Computer Science
			Rice University
			<phil@Rice.edu>

phil@titan.rice.edu (William LeFebvre) (10/02/88)

In article <3665@homxc.UUCP> mrb1@homxc.UUCP (M.BAKER) writes:
>Also, the latest issue talks about the computers &
>display systems in Mission Control, and how they are just getting
>around to replacing Apollo-era stuff (monochrome text-only displays
>connected to old mainframes, which show messages in hexadecimal
>requiring look-up in a code book or reference card, etc.)

HAH!  There was a plan "in work" to replace the old Apollo-era consoles
with something based on workstation computers (Sun or Masscomp), but hope
for that happening any time soon went out the window with the budget cuts.
There is also a great deal of conservative-style inertia that prevents
such changes from taking place at a reasonable pace.  Right now, the only
real time data that the controllers see is read in from the downlink and
processed *only* by the Mission Operations Computer (an old IBM
behemoth---actually there are several available in case one goes down
during flight).  All displays are driven by the same computer.  There are,
naturally, some NASA types that believe that this is the only way to do
it, because it's the way it's always been done.  They basically don't
trust the "new-fangled" micro computers to process the data fast enough or
(maybe even) correctly.  I heard of a project that was to have a "proof of
concept" demo during this flight showing some sort of PC processing raw
data as it came off the satellite and displaying it in real time.  It was
intended to blow a few minds.  I don't know if it really demoed or what
happened with it---I've been too enthralled by the details of the flight
itself to even remember it.

Every controller position does have a Masscomp workstation that is
connected to the internal network, and they can use the Masscomps to get
chunks of "near real-time data" and to do other things (such as uplink
commands).  If you look closely on the NASA Select views, you will see the
Masscomps.  But the real time data processing---the consoles that
matter---is still driven by this old IBM thing.

[ Whoops, hope I didn't say too much. ]

	William LeFebvre
	Department of Computer Science
	Rice University
	<phil@Rice.edu>
	These are my opinions and mine alone!
	No one else can have them.  So there!

spcecdt@ucscb.UCSC.EDU (Space Cadet) (10/03/88)

In article <1543@hp-sdd.HP.COM> hinojosa@hp-sdd.hp.com.UUCP (Daniel Hinojosa) writes:
}Someone here at work mentioned that the shuttle has less memory than
}many home computers, in the neighborhood of 640K. Let's assume for
}now that this is the case. Not only does that seem like an amazingly
}low amount of memory to help resolve such issues, but it seems like
}it could easily be an item that might have been considered for improvement
}after the Challenger incident. 

	 I thought that the shuttle computers were replaced a few years ago
with new ones, since the original computers used early-70's technology.
Surely the new ones have > 640k?  Well, maybe not... :-(

paisley@cme-durer.ARPA (Scott Paisley) (10/03/88)

In article <1938@kalliope.rice.edu>, 
   phil@titan.rice.edu (William LeFebvre) writes:

[deleted stuff]

> (and some of this is conjecture on my part)....They have calculated ahead
> of time a handful of different flight profiles.  The profiles are based,
> in part, on the wind velocities and directions at different altitudes.
> Apparently, this is a very complicated model and takes quite a bit of CPU
> time and power to calculate the expected route.  There is just no way that
> the on-board computers are going to be able to perform the calculations in
> real time.  I suspect that it's hard for any computer to do the
> calculations in real time (maybe a Cray).  So they perform all these tough
> calculations ahead of time and the on-board software becomes much simpler.

[more deleted stuff]

Why can't they download the programs onto the computers on the shuttle
just a few hours before launch?  Each program would be extensively
tested, of course.  As for the wind velocity conditions, the programmers
could have (say) 5 programs prepared and tested for different wind
conditions.  So, say, six hours before launch they could get the
current atmospheric conditions, calculate all the magic numbers on the
Cray, download the tested program (plugging these magic numbers into
it), and then test the newly loaded program onboard the shuttle
(however they would do that :-) ).  Now I admit that this idea is very
rough and I'm sure it has some problems that I didn't think of.  But I
think that flexibility is what makes software so wonderful.  I would
have hated to see the launch aborted for lack of it.
-- 
Scott Paisley		ARPA : paisley@cme-durer.arpa (preferred)
			BITNET : paisley@cmeamrf

"Super Science mingles with the bright stuff of dreams."

adolph@ssc-vax.UUCP (Mark C. Adolph) (10/04/88)

In article <6689@nsc.nsc.com> glennw@nsc.UUCP (Glenn Weinberg) writes:
>The question I, as a software engineer, have is: why should this require
>reprogramming at all?  Why couldn't the program have been written to
>accept input as to meteorological conditions at launch time?  Seems to me
>that hard-wiring this data into the program isn't particularly good
>programming practice.
>
>I don't want to believe that the shuttle programmers simply missed this.
>Is it a memory space issue?  ... [deleted possibilities]

I heard on a newscast an explanation of the problem.  The shuttle
computers carry data covering the expected range of
atmospheric conditions for September over the Cape.  The upper
atmospheric winds happened to fall outside the range covered by the data
in Discovery's memory.

As a real-time programmer, this seems like a reasonable explanation to
me.  If you have limited memory and limited time to search tables, you
want to include the minimum data that it will take to accomplish the
mission.  I'm sure that the software accepts inputs and makes
corrections in real-time.  This would be much tougher to do if one had
to consider all possible atmospheric conditions for all times of the
year during each launch.  Often, things that look like bad programming
practices become indispensable in a real-time, embedded application.

-- 

					-- Mark A.
					...uw-beaver!ssc-vax!adolph

kevin@gtisqr.UUCP (Kevin Bagley) (10/05/88)

In article <6980@ihlpl.ATT.COM> knudsen@ihlpl.ATT.COM (Knudsen) writes:
>Well, the network (ABC) said it was memory limitations.
>The computers each have at most 512K each (they never said exactly),
>maybe only 256 or 64K.
	This is ridiculous!  One would assume that, with the re-vamping
	of Discovery and 2.5+ years to accomplish it, an analysis
	of the computer systems was performed.  Was this done?  Did
	they find the computer systems to be adequate?  I find it
	unbelievable that a technological feat like the shuttle is
	equipped with computer systems smaller than most cash registers.

>One of the flight crew said that every time they want to
>put a new capability into the computers (like yet another
>emergency abort scenario), something else has to be
>taken out.
	I think that NASA *MUST* do the things necessary to bring our
	(the taxpayers') Shuttle up to current computer technology.
	Any defense, NASA?

>So I'll bet every byte of that code is hand-optimized
>to hell and back.  I doubt any hi-level language,
>or even C, got anywhere near those computers.
	I would expect no hi-level language, since hand optimization is
	the only way to assure bullet-proof code.

>Would someone who knows like to tell us how much RAM & ROM
>these babies have?
	I will be very interested in hearing this also.  I would also
	be interested in hearing the justification for "just get by"
	computer capabilities.
-- 
    ____                 Kevin Bagley  "I did not say this, I am not here."
     )__) __    _   _    Global Tech. Int'l Inc.
  __/__/ (_/\_/(_) /_)_  Mukilteo WA  98275
              __/        UUCP: uw-beaver!uw-nsr!uw-warp!gtisqr!kevin

leem@jplpro.JPL.NASA.GOV (Lee Mellinger) (10/06/88)

In article <1938@kalliope.rice.edu> phil@Rice.edu (William LeFebvre) writes:
|In article <6980@ihlpl.ATT.COM> knudsen@ihlpl.ATT.COM (Knudsen) writes:
|>Well, the network (ABC) said it was memory limitations.
|>...
|>I gather the programs are compiled with
|>a #define WIND_SPEED 12345 or some such thing
|
|It's not just memory and it's not as simple as changing a constant...
|more on that later.
|
|>One of the flight crew said that every time they want to
|>put a new capability into the computers (like yet another
|>emergency abort scenario), something else has to be
|>taken out.
|
|This is true.  The software used on ascent almost completely fills the
|memory capacity of the computers.  To add anything requires removing
|something else (or at least getting the memory from somewhere).
|
|>So I'll bet every byte of that code is hand-optimized
|>to hell and back.  I doubt any hi-level language,
|>or even C, got anywhere near those computers.
|
|They call the language HAL/S (for "High-level Assembly Language").  I have
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

HAL/S was written by Intermetrics, a company associated with MIT, and
was named for the then head of development, Dr. Halcomb.  It is a
high-level realtime language based on PL/1.  It produces highly
optimized code that, in a series of trials against assembly code
written by very senior IBM programmers, imposed only about a 10%
penalty in time and memory.

|seen some of the flight software (about a page or so) and it looked a
|whole lot like IBM assembly language.  When I saw "BAL" I knew instantly
|what it meant.
|
|>Would someone who knows like to tell us how much RAM & ROM
|>these babies have?  I know they use core memory...
|
|No RAM, no ROM.  They use iron-ferrite core memory.  I believe I mentioned
|in this list previously what I *thought* was a correct figure for the
|amount of memory.  It turns out I was wrong.  It is most definitely 208K
|half-words (IBM-ese for "16 bit word").  That would be 416K bytes.  But
|the machine is only half-word addressable, so giving the size in terms of
|bytes is a little misleading.  Remember that K=1024, so that's almost 213
|thousand half-words (it's possible that some non-technical types would
|incorrectly say 212K or 213K).
|
|
|			William LeFebvre
|			Department of Computer Science
|			Rice University
|			<phil@Rice.edu>

The computers are 4Pi/AP-101's which are essentially ruggedized IBM
360's in a small box.  That is the four prime computers; the fifth is
a Rockwell/Autonetics machine.  All four are online during launch and
landing on a synchronized bus.  They all vote on the solutions, and if
one disagrees, the others vote it out and it is taken offline.  The
Autonetics machine is there in case there is a generic HW or SW
failure that would take out all four prime machines.

To answer someone's suggestion, they cannot be downloaded from the
ground.

Lee


-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
|Lee F. Mellinger                         Jet Propulsion Laboratory - NASA|
|4800 Oak Grove Drive, Pasadena, CA 91109 818/393-0516  FTS 977-0516      |
|-------------------------------------------------------------------------|
|UUCP: {ames!cit-vax,psivax}!elroy!jpl-devvax!jplpro!leem                 |
|ARPA: jplpro!leem!@cit-vax.ARPA -or- leem@jplpro.JPL.NASA.GOV            |
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

david@beowulf.JPL.NASA.GOV (David Smyth) (10/06/88)

-In article <2993@jpl-devvax.JPL.NASA.GOV> leem@jplpro.JPL.NASA.GOV (Lee Mellinger) writes:
->In article <1938@kalliope.rice.edu> phil@Rice.edu (William LeFebvre) writes:
->|In article <6980@ihlpl.ATT.COM> knudsen@ihlpl.ATT.COM (Knudsen) writes:
->|
->|>One of the flight crew said that every time they want to
->|>put a new capability into the computers (like yet another
->|>emergency abort scenario), something else has to be
->|>taken out.
->|
->|This is true.  The software used on ascent almost completely fills the
->|memory capacity of the computers.  To add anything requires removing
->|something else (or at least getting the memory from somewhere).
->|
->|>So I'll bet every byte of that code is hand-optimized
->|>to hell and back.  I doubt any hi-level language,
->|>or even C, got anywhere near those computers.

I am absolutely certain that the code could be re-written to free up a lot
of space.  I got the OK to re-write what I was responsible for, but it
was a HUGE hassle.  That code has NOT been hand optimized to hell and back,
it has been hand PATCHED, in machine code, to hell and back!  Why?  Because
the code has undergone lengthy, very expensive testing over the last decade
and a half, and nobody wants to give the compiler a chance to screw things
up.  Every once in a long while, a routine will be re-compiled, but only after
all attempts to just patch it are exhausted.

->|seen some of the flight software (about a page or so) and it looked a
->|whole lot like IBM assembly language.  When I saw "BAL" I knew instantly
->|what it meant.

Probably you saw the patches for existing code.  As far as I know, the initial
implementation of all routines was done in HAL/S, and modifications then done
in AP-101 machine code.

->The computers are 4Pi/AP-101's which are esentially ruggedized IBM
->360's in a small box.  That is the four prime computers, the fifth is
->a Rockwell/Autonetics machine.  All four are online during launch and
->landing on a synchronized bus.  They all vote on the solutions, and if
->one dissagrees, the others vote it out and it is taken offline.  The
->Autonetics machine is there in case there is a generic HW or SW
->failure that would take out all four prime machines.

When was this change implemented?  Originally, all 5 machines were AP-101s.
The four primaries ran IBM developed PASS Primary system, and the one
back-up was an identical AP-101 running an Intermetrics developed BFS
Backup Flight System.  All 5 online at launch on the synchronized bus,
the fifth watching for anytime the primaries disagree, and annunciating the
errors to the pilots.  The pilots can manually disable any of the 4 primaries
that are failing, but the standard procedure was (and still is?) to switch
to the BFS and abort.

The backup was an afterthought to take care of generic SW failures.  The
SW architecture of the BFS is completely different from the PASS in case
there is some bizarre architectural problem nobody found during testing:
for example, loss of synch at boot, which was only discovered (well, correctly
diagnosed) following the launch hold at T-12 minutes on the first flight!

I do not believe the backup computer is in any way different from the
primaries.

phil@titan.rice.edu (William LeFebvre) (10/06/88)

In article <2993@jpl-devvax.JPL.NASA.GOV> leem@jplpro.JPL.NASA.GOV (Lee Mellinger) writes:
...
>HAL/S was written by Intermetrics, a company associated with MIT, and
>was named for the then head of the development Dr. Halcomb.  It is a
>high level realtime language based on PL/1.

Well, that ain't what I saw.  If that's the case, then I'm not sure what I
was looking at that day.  Except that it was somehow associated with the
on-flight software.

Unfortunately, I am not very familiar with the background of HAL/S (I only
know what I'm told).

>The computers are 4Pi/AP-101's which are esentially ruggedized IBM
>360's in a small box.  That is the four prime computers, the fifth is
>a Rockwell/Autonetics machine.

Wrong!  All FIVE general purpose computers (GPCs) are the same hardware.
They all sit on the same synchronized bus.  Hardware-wise, they are
totally identical.  It's the SOFTWARE that's different.  Four run the
standard flight software while the fifth runs the backup flight software
(the latter being written by Rockwell).  The backup software can be loaded
into any of the 5 GPCs, just as the main flight software can be loaded
into any of them.  Excuse me for being so adamant about this, but this is
a common misconception.

			William LeFebvre
			Department of Computer Science
			Rice University
			<phil@Rice.edu>

karn@thumper.bellcore.com (Phil R. Karn) (10/07/88)

It is certainly clear that the shuttle computers are ancient technology,
and newer stuff would be much smaller and more capable. For example, the
AMSAT Microsat satellites, currently under development for a June 1989
launch, will carry a NEC V40 microprocessor and about 10 megabytes of
static RAM.  Running in this computer will be a real-time multitasking
operating system capable of executing .exe files generated by an MS-DOS
linker (with a special library) and uploaded from the ground. As far as
we know, this will be the first-ever spacecraft to support the TCP/IP
protocols.

In addition, there will be one UHF transmitter, four VHF receivers, I/O
interfaces, modems, power conditioner, Ni-Cd battery, solar arrays, etc.
Everything fits in a *7-inch* cube. Total power budget is something like
7 watts, with much of this going to the transmitter. Yet one of the five
modules making up the satellite is currently empty! (We call this the
"TSFR" module -- This Space For Rent).

Although I have every reason to believe that this design will work as
advertised, I'm not sure I'd be willing to bet my life on it -- yet.
Many of the component types will be flying in space for the first time,
and their reliability and radiation hardness aspects are not well known.
So I can understand NASA's reluctance to upgrade the STS computer
hardware unless or until the added functionality becomes necessary.

By the way, this is an excellent illustration of how excessive reliance
on large manned missions has hindered technological development. The
latter inherently requires risk-taking that is acceptable only when
human lives and/or extremely large sums of money are not at stake.

Phil

leem@jplpro.JPL.NASA.GOV (Lee Mellinger) (10/07/88)

In article <2994@jpl-devvax.JPL.NASA.GOV> david@beowulf.JPL.NASA.GOV (David Smyth) writes:
|-In article <2993@jpl-devvax.JPL.NASA.GOV> leem@jplpro.JPL.NASA.GOV (Lee Mellinger) writes:
|->In article <1938@kalliope.rice.edu> phil@Rice.edu (William LeFebvre) writes:
|->|In article <6980@ihlpl.ATT.COM> knudsen@ihlpl.ATT.COM (Knudsen) writes:
|->|
|
|Probably you saw the patches for existing code.  As far as I know, the initial
|implementation of all routines was done in HAL/S, and modifications then done
|in AP-101 machine code.
|
|->The computers are 4Pi/AP-101's which are essentially ruggedized IBM
|->360's in a small box.  That is the four prime computers, the fifth is
|->a Rockwell/Autonetics machine.  All four are online during launch and
|->landing on a synchronized bus.  They all vote on the solutions, and if
|->one disagrees, the others vote it out and it is taken offline.  The
|->Autonetics machine is there in case there is a generic HW or SW
|->failure that would take out all four prime machines.
|
|When was this change implemented?  Originally, all 5 machines were AP-101s.
|The four primaries ran IBM developed PASS Primary system, and the one
|back-up was an identical AP-101 running an Intermetrics developed BFS
|Backup Flight System.  All 5 online at launch on the synchronized bus,
|the fifth watching for anytime the primaries disagree, and annunciating the
|errors to the pilots.  The pilots can manually disable any of the 4 primaries
|which are failing, but the standard procedure (was - still is?) to switch
|to the BFS and abort.
|
|The backup was an afterthought to take care of generic SW failures.  The
|SW architecture of the BFS is completely different from the PASS in case
|there is some bizarre architectural problem nobody found during testing:
|for example, loss of synch at boot, which was only discovered (well, correctly
|diagnosed) following the launch hold at T-12 minutes on the first flight!
|
|I do not believe the backup computer is in any way different from the
|primaries.

Mea Culpa.  They tell me the first thing to go is ... I forgot.
Anyway, you are correct about the hardware being the same, but
sometime back, Aviation Week reported that the code for the backup
machine was written by Autonetics.

Lee

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
|Lee F. Mellinger                         Jet Propulsion Laboratory - NASA|
|4800 Oak Grove Drive, Pasadena, CA 91109 818/393-0516  FTS 977-0516      |
|-------------------------------------------------------------------------|
|UUCP: {ames!cit-vax,psivax}!elroy!jpl-devvax!jplpro!leem                 |
|ARPA: jplpro!leem!@cit-vax.ARPA -or- leem@jplpro.JPL.NASA.GOV            |
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

karn@thumper.bellcore.com (Phil R. Karn) (10/07/88)

> Everything fits in a *7-inch* cube. Total power budget is something like

Oops, even I get lazy sometimes by not checking my references before
posting. :-) Microsat is actually a cube 230 mm (9 inches) on a side,
not including the whip antennas. It's still pretty compact, though.

An excellent collection of articles on Microsat and related topics can
be found in the recently published proceedings of the ARRL Amateur Radio
7th Computer Networking Conference, held October 1 in
Columbia, MD.  Copies are $12 and can be purchased from ARRL, 225 Main
St, Newington, CT 06111.

Phil

wes@obie.UUCP (Barnacle Wes) (10/07/88)

In article <5485@ecsvax.uncecs.edu>, cjl@ecsvax.uncecs.edu (Charles Lord) writes:
> I read that the main reason for M-T's contract was pork barrel
> politics on some senator's part.  There are capable SRB mfg
> facilities on the Gulf that are capable of making one-piece SRBs
> and barging them to KSC.  The whole reason for o-rings is that
> you cannot ship a whole SRB from Utah to Florida intact.

Yes, yes, and no.  The reason for the o-rings is that Thiokol couldn't
figure out how to build a single-pour booster motor without having it
crack internally (which is fatal to the motor).  I know for a fact that
you can ship a booster roughly the size of an SRB from Vandenberg AFB to
Hill AFB, Utah on a rail car; we just got ours that way Friday.  I think
the shuttle SRBs are longer than the Peacekeeper booster, but that isn't
a problem - the rails between here and the coast are pretty straight
these days.

	Wes Peters

P.S. In case you're wondering, I work for GTE at the Peacekeeper System
Engineering Test Facility (SETF) at Hill AFB.
-- 
                     {hpda, uwmcsd1}!sp7040!obie!wes

         "How do you make the boat go when there's no wind?"
                                 -- Me --

wes@obie.UUCP (Barnacle Wes) (10/07/88)

In article <1938@kalliope.rice.edu>, phil@titan.rice.edu (William LeFebvre) writes:
> time and power to calculate the expected route.  There is just no way that
> the on-board computers are going to be able to perform the calculations in
> real time.  I suspect that it's hard for any computer to do the
> calculations in real time (maybe a Cray).  So they perform all these tough
> calculations ahead of time and the on-board software becomes much simpler.

In article <653@web.cme-durer.ARPA>, paisley@cme-durer.ARPA (Scott Paisley) replies:
| Why can't they download the programs onto the computers on the shuttle
| just a few hours before launch?  [....]
| But I think that the flexibility of software is what makes
| software so wonderful.  I would have hated to have seen the launch aborted
| because they didn't have flexibility of software.

Actually, it does not take all that much time to generate a flight path
for a ballistic (or semi-ballistic) object traveling to orbit (or
near-orbit).  But it does take quite a while to "fly out" the generated
path on a simulator to make sure the "targeting program" didn't screw
something up.  This is a very important step, I can assure you.
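That split -- cheap path generation up front, a comparatively expensive simulated fly-out to check it -- can be illustrated with a toy model (Python; the physics is deliberately simplified and every number is invented, nothing here reflects the actual targeting programs):

```python
import math

def generate_path(v0, angle_deg, dt=0.5, g=9.81):
    """Cheap step: closed-form ballistic path as (x, y) samples."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    path, t = [], 0.0
    while vy * t - 0.5 * g * t * t >= 0.0:
        path.append((vx * t, vy * t - 0.5 * g * t * t))
        t += dt
    return path

def fly_out(path, wind_accel, dt=0.5):
    """Expensive step (done on the ground simulator): re-fly the
    path under a crosswind acceleration and report the worst
    lateral deviation from the planned track."""
    return max(0.5 * wind_accel * (i * dt) ** 2
               for i in range(len(path)))

path = generate_path(100.0, 45.0)
# With no wind, the planned path is flown exactly:
print(fly_out(path, wind_accel=0.0))  # 0.0
```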
-- 
                     {hpda, uwmcsd1}!sp7040!obie!wes

         "How do you make the boat go when there's no wind?"
                                 -- Me --

david@beowulf.JPL.NASA.GOV (David Smyth) (10/08/88)

-phil@Rice.edu (William LeFebvre) writes:
->leem@jplpro.JPL.NASA.GOV (Lee Mellinger) writes:
->>The computers are 4Pi/AP-101's which are essentially ruggedized IBM
->>360's in a small box.  That is the four prime computers, the fifth is
->>a Rockwell/Autonetics machine.
->
->Wrong!  All FIVE general purpose computers (GPCs) are the same hardware.
->They all sit on the same synchronized bus.  Hardware-wise, they are
->totally identical.  It's the SOFTWARE that's different.  Four run the
->standard flight software while the fifth runs the backup flight software
->(the latter being written by Rockwell).  The backup software can be loaded
->into any of the 5 GPCs, just as the main flight software can be loaded
->into any of them. ...

The primary software was written by IBM, the backup by Intermetrics
under contract to Rockwell.  Rockwell only supplied the lab for debugging,
no software or software design whatsoever.

I was there...

henry@utzoo.uucp (Henry Spencer) (10/09/88)

In article <1344@thumper.bellcore.com> karn@thumper.bellcore.com (Phil R. Karn) writes:
>... Running in this computer will be a real-time multitasking
>operating system capable of executing .exe files generated by an MS-DOS
>linker (with a special library) and uploaded from the ground...

Hmm, MSDOS in space.  Better watch out for viruses... :-)
-- 
The meek can have the Earth;    |    Henry Spencer at U of Toronto Zoology
the rest of us have other plans.|uunet!attcan!utzoo!henry henry@zoo.toronto.edu

henry@utzoo.uucp (Henry Spencer) (10/10/88)

In article <213@obie.UUCP> wes@obie.UUCP (Barnacle Wes) writes:
>... I know for a fact that
>you can ship a booster roughly the size of an SRB from Vandenberg AFB to
>Hill AFB Utah on a rail car, we just got ours Friday that way.  I think
>the shuttle SRBs are longer than the Peacekeeper booster, but that isn't
>a problem - the rails between here and the coast are pretty straight
>these days.

MUCH longer, and that is precisely the problem.  "Pretty straight" is not
good enough for something that would probably span four railroad cars
(I don't have precise numbers on hand).  Also, the destination of interest
is the Cape, not Vandenberg...
-- 
The meek can have the Earth;    |    Henry Spencer at U of Toronto Zoology
the rest of us have other plans.|uunet!attcan!utzoo!henry henry@zoo.toronto.edu

karn@thumper.bellcore.com (Phil R. Karn) (10/10/88)

No, it's not MS-DOS. That should be obvious from the term
"multi-tasking".  The OS is called "qCF" and is by a company called
Quadron Services, Inc.  The principals of this company just happen to be
several very active amateur packet radio enthusiasts.

When I said that it ran .exe files, I meant that they had the same load
image format as a MS-DOS .exe file. However, standard MS-DOS executables
will NOT run under this system; they must be linked with a library
provided with qCF. The use of .exe format is simply to facilitate the
use of standard PC development tools.

To quote the major author of qCF (Harold Price, NK6K), "I expect the
Microsat CPU to crash occasionally". By that, he meant that he expected
a lively, ongoing series of software experiments on at least one
spacecraft, and that inevitably one of these would cause a crash (there
is no hardware memory protection in the V40).  In other words, the low
cost of the mission and its non-life-and-death nature (no humans on
board, remember?) make it feasible to carry out interesting and useful
experiments that would otherwise be too risky. Trading off continuous
availability in favor of development progress makes sense here. If a
spacecraft computer crashes, you just boot it again on the next pass;
it will keep itself safe until then.

By the way, the Microsat project has already attracted an enormous amount
of interest from outside the amateur satellite program, since many people
have begun to see the advantages of a simple, small and inherently
cheap satellite. Weber State College (Utah) has a Microsat of their own
that will fly with us, and it will carry a CCD camera experiment in the
otherwise unused module. (Weber State was the group that launched NUSAT
on the Shuttle some years back).

Phil

cc1@valhalla.cs.ucla.edu (R...for Rabbit) (10/11/88)

In article <1988Oct9.222636.26406@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
^In article <213@obie.UUCP> wes@obie.UUCP (Barnacle Wes) writes:
^>... I know for a fact that
^>you can ship a booster roughly the size of an SRB from Vandenberg AFB to
^>Hill AFB Utah on a rail car, we just got ours Friday that way.  I think
^>the shuttle SRBs are longer than the Peacekeeper booster, but that isn't
^>a problem - the rails between here and the coast are pretty straight
^>these days.
^MUCH longer, and that is precisely the problem.  "Pretty straight" is not
^good enough for something that would probably span four railroad cars
^(I don't have precise numbers on hand).  Also, the destination of interest
^is the Cape, not Vandenberg...

Just do something like this:



                           /\
                          /  \
                          |  |
                          |==|
                          |  |
                          |  |
                          |  |
                          |  |
                          |  |
                          |  |
                          |  |
                          |  |
                          |==|
                         /|  |\
                        / |  | \
     ...  =======--=================--============  ...
              oo    oo           oo    oo
         ##########################################


			--R for Rabbit


(Oh yeah, :-), in case you didn't figure it out.

ejnihill@sactoh0.UUCP (Eric J. Nihill) (10/12/88)

In article <16665@shemp.CS.UCLA.EDU>, cc1@valhalla.cs.ucla.edu (R...for Rabbit) writes:
> ^In article <213@obie.UUCP> wes@obie.UUCP (Barnacle Wes) writes:
> Just do something like this:
> 
>                            /\
>                           /  \
>                           |  |
>                           ~  ~
>                           |==|
>                          /|  |\
>                         / |  | \
>      ...  =======--=================--============  ...
>               oo    oo           oo    oo
>          ##########################################

This could prove to be a very interesting mode of shipping.
 If you could somehow get from point A to B with no overhead
obstructions (what's the booster height?), there may be at least
two minor details.  By not distributing X amount of weight over
four railroad cars, we would place quite a load on one car.  I wonder
what the railroad trackbed load capacity is per axle?  The wind
could also have fun with our transport.  That's quite a bit
of leverage we would have sticking up in the air.  How much wind
would it take to tip over our approx. 5-foot axle length flatcar?
(discounting roadbed tilt on curves)
  These may be just minor problems that a creative mind may be
more than able to overcome.
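The tipping question is easy to put rough numbers on (Python; every figure below -- mass, dimensions, drag coefficient -- is a guess for illustration, not SRB or railcar data):

```python
import math

def tipping_wind_speed(mass_kg, height_m, track_m, side_area_m2,
                       rho=1.225, cd=1.2):
    """Wind speed (m/s) at which the overturning moment on an
    upright load equals the restoring moment of its weight.

    Model: drag F = 0.5*rho*cd*A*v^2 acting at half the height;
    the weight resists about the downwind rail, half the track
    gauge away from the center of mass.
    """
    g = 9.81
    restoring = mass_kg * g * (track_m / 2.0)
    overturning_per_v2 = 0.5 * rho * cd * side_area_m2 * (height_m / 2.0)
    return math.sqrt(restoring / overturning_per_v2)

# Guessed numbers: a 100-tonne load, 45 m tall and 3.7 m wide,
# standing on a roughly 1.5 m track gauge:
v = tipping_wind_speed(100_000, 45.0, 1.5, 45.0 * 3.7)
print(round(v * 1.944))  # tipping wind, roughly, in knots
```

With these guessed numbers the answer comes out in the tens of knots, which suggests the wind worry is not so minor after all.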
                              Humbly submitted;-) Eric






-- 
#################################################################
#  Sign In Triplicate before   #  Serving The State Capitol Of  #
#  Discarding:________________ #  California: sactoh0           #
#################################################################

henry@utzoo.uucp (Henry Spencer) (10/12/88)

In article <1347@thumper.bellcore.com> karn@thumper.bellcore.com (Phil R. Karn) writes:
>... In other words, the low
>cost of the mission and its non-life-and-death nature (no humans on
>board, remember?) make it feasible to carry out interesting and useful
>experiments that would otherwise be too risky...
>If a spacecraft computer crashes, you just boot it again on the next pass;
>it will keep itself safe until then.

In other words, it's risky but safe?  You couldn't trust it on a manned
spacecraft, but you do trust it to be rebootable?  Be consistent, please,
Phil -- either the thing can be trusted not to explode, or it can't.  If
the only consequence of a crash is denial of non-vital services, then the
low cost is the factor that matters, since it makes occasional temporary
loss of service acceptable, and the unmanned nature is irrelevant.  (For
example, non-space-hardened computers have flown on the shuttle, in non-
critical support roles.)

I take it that the Microsat hardware is not capable of doing anything
irrevocable, like switching its receivers off?
-- 
The meek can have the Earth;    |    Henry Spencer at U of Toronto Zoology
the rest of us have other plans.|uunet!attcan!utzoo!henry henry@zoo.toronto.edu

eugene@eos.UUCP (Eugene Miya) (10/12/88)

You see, understanding the problem is very simple (based on the one
thing I helped launch).  Solving it is tough.

You have this vehicle which has to traverse the thickness of the atmosphere,
but it's not that simple: the stuff moves in different directions at
different altitudes.  Slight differences may mean big expenditures of
fuel later down the flight path.  It's like the difference between real and
apparent velocity in a plane.  Don't forget it's all transparent up
there.  If I ask you the wind direction and velocity at 30K feet, can you
tell me just by looking?  I doubt it (p.s. I want [u,v,w], not 95 deg.
true/39 knots).
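The conversion from a surface-style report to components is at least mechanical (Python; note the meteorological convention that direction is where the wind blows FROM, and that the vertical component w simply isn't in the report):

```python
import math

def met_to_uvw(dir_from_deg, speed_knots, w=0.0):
    """Convert a '95 deg true / 39 knots' wind report to [u, v, w]
    in m/s.

    Meteorological direction is where the wind blows FROM, measured
    clockwise from true north; u is eastward, v is northward, and
    w (vertical) is unknown from such a report, so it defaults to 0.
    """
    speed = speed_knots * 0.5144       # knots -> m/s
    theta = math.radians(dir_from_deg)
    return [-speed * math.sin(theta),  # u
            -speed * math.cos(theta),  # v
            w]

# The report in the post: wind from just south of due east,
# so it blows almost due west (u strongly negative):
u, v, w = met_to_uvw(95.0, 39.0)
```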

The Shuttle has a unique problem: it's got wings.  These get in the way;
cylinders have few forces working on them (I had an Atlas).  The wings
are trying to "lift" (not a good word since it's upside down).  The
number of vectors needed to describe the forces acting on a flight path in
thick atmosphere gets complex.  The Shuttle can't sense wind very well
in flight, so weather balloons are sent up beforehand.  It's just a mess.
Just deal with things in orbit and let the aerodynamics people deal with
these problems ;-).

Another gross generalization from

--eugene miya, NASA Ames Research Center, eugene@aurora.arc.nasa.gov
  resident cynic at the Rock of Ages Home for Retired Hackers:
  "Mailers?! HA!", "If my mail does not reach you, please accept my apology."
  {uunet,hplabs,ncar,decwrl,allegra,tektronix}!ames!aurora!eugene
  "Send mail, avoid follow-ups.  If enough, I'll summarize."

tif@cpe.UUCP (10/13/88)

Written  3:34 pm  Oct 10, 1988 by ucla-cs.UUCP!cc1 in cpe:sci.space.shuttle
>In article <1988Oct9.222636.26406@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
>>MUCH longer, and that is precisely the problem.  "Pretty straight" is not
>>good enough for something that would probably span four railroad cars
>
>Just do something like this:
>... [Funny picture of SRB on end]

I find it kindo' hard to believe that they couldn't solve such
a seemingly trivial problem.  How about putting it on 4 (or more)
"skateboards" so that when it went around a corner it had just
enough play so as not to strain it?  Or, suspend it from 4 (or
more) hangers (hanging thing-a-ma-jobbies).

I'm not trying to start a big discussion of possibilities, but I
don't see it as an insurmountable (or even difficult) task.
I.e., we can (could) put a man on the moon but we can't put a
long stick on a train?

Boy, I just love (using lots of) parentheses ('(' and ')').

			Paul Chamberlain
			Computer Product Engineering, Tandy Corp.
			{convex,killer}!ninja!cpe!tif

fosler@inmet.UUCP (10/18/88)

    HAL/S was developed by Intermetrics, Inc. for NASA starting in 1970.
The prototype language, HAL, was implemented in 1971, and was used
successfully to verify early shuttle software design concepts and algorithms.
The new version, HAL/S, was defined in 1972, with the first compiler for
the IBM 360/370 in 1973, and for the onboard Shuttle computer (IBM AP-101) in
1974.  (Taken from the foreword to "Programming in HAL/S".)

    While I have never worked on the HAL/S compiler, I was told that
compiler maintenance had been moved from the Cambridge office to the
Houston, Texas office four or five years ago.  At that time, the Houston
office was working on improving the optimization of the generated code,
and modifying the compiler to support more memory (to work with the new
computers that had more memory).  I cannot remember what had to be
changed to support the increased memory, but I do know from working on
a similar computer for the Navy that the IBM instruction set and bus structure
put limits on the total amount of memory the computer can work with.
The new IBM computers might have more memory, but more than likely, NASA
will have to come up with a new computer if they want it to support
more than several megabytes of memory.
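The kind of ceiling Carl means is set by address-field width (Python; the 16-bit figure below is a generic illustration of the arithmetic, not a statement about the AP-101's actual addressing modes):

```python
def addressable_words(address_bits):
    """Words reachable with a fixed-width address field."""
    return 2 ** address_bits

# A 16-bit address field reaches only 64K words, no matter how much
# physical memory is installed; going beyond that means wider
# addresses, base registers, or bank switching -- i.e., changes to
# the instruction set or bus, not just more memory boards.
print(addressable_words(16))  # 65536
```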
   I also know, from the Navy project I worked on, that such projects hate
to recompile any code for any reason.  They have found that they can get
around all of the retesting of their code if they just make patches.

Carl Fosler
Intermetrics, Inc.