[comp.software-eng] H/W vs. S/W

tada@athena.mit.edu (Michael Zehr) (03/19/88)

I was discussing this with a friend recently, and he mentioned another
difference between H/W and S/W which may account for varying reliability.

With S/W, the "turnaround" time is typically compile time.  A programmer
can make changes and almost immediately see what the differences are.

With H/W, the "turnaround" time is the time it takes to make a new chip, which
is *much* longer.

So... the hardware people have to be very certain that their work is correct,
because iterations through the design/test/modify loop are much longer and
more expensive.  This forces them to check their work very carefully.
Programmers, on the other hand, have the liberty of being careless.  They
can try something and see if it works, as opposed to designing something that
works and then trying it to verify that it works.

Now, I'm not saying that all programmers are sloppier than all hardware
engineers, but this could make programmers tend to be more careless.
(I know, I'm one of them -- I typically keep my compiler running almost
continuously :-)


-------
michael j zehr
"My opinions are my own ... as is my spelling."

palmer@ncifcrf.gov (Thomas Palmer) (03/19/88)

In article <3867@bloom-beacon.MIT.EDU>, tada@athena.mit.edu (Michael Zehr) writes:
> 
> So... the hardware people have to be very certain that their work is correct,
> because iterations through the design/test/modify loop are much longer and
> more expensive.  This forces them to check their work very carefully.
> Programmers, on the other hand, have the liberty of being careless.  They
> can try something and see if it works, as opposed to designing something that
> works and then trying it to verify that it works.
> 

Fred Brooks tells an interesting story during lectures in his
software engineering course.  The story involves a single-user mid-sixties
machine during the hardware testing stages.  Software developers were
given very limited access time to the machine.  They prepared for these
sessions *very* carefully.  They had all their test data ready, were
prepared for various outcomes, and could rapidly arrive at hypotheses
as to what went wrong.  They had a PLAN.

Obviously, at that time the goal was to make the most of scarce machine time,
not people time.  Now, people time is much more expensive than raw cycles.
However, the discipline to design and think before sitting down at a 3-MIPS
workstation can't be overrated.

I didn't mention that those early software developers had to contend
with possible hardware bugs too.  Can you imagine?


  -Tom

 Thomas C. Palmer	NCI Supercomputer Facility
 c/o PRI, Inc.		Phone: (301) 698-5797
 PO Box B, Bldg. 430	Uucp: ...!uunet!ncifcrf.gov!palmer
 Frederick, MD 21701	Arpanet: palmer@ncifcrf.gov

mjl@ritcv.UUCP (Mike Lutz) (03/19/88)

In article <3867@bloom-beacon.MIT.EDU> tada@athena.mit.edu (Michael Zehr) writes:
>With S/W, the "turnaround" time is typically compile time.  A programmer
>can make changes and almost immediately see what the differences are.
>
>With H/W, the "turnaround" time is the time it takes to make a new chip, which
>is *much* longer.
>
>So... the hardware people have to be very certain that their work is correct,
>because iterations through the design/test/modify loop are much longer and 
>more expensive.  This forces them to check their work very carefully.  

There are, of course, ways to address this, such as the Cleanroom Software
approach at IBM (described in several papers by Harlan Mills).  In this
approach, semi-formal methods of program proof and rigorous reviews replace
the compile/debug cycle.  Testing is done only to predict product reliability;
it's considered a failure to have many errors found in testing.  And it's
only at test time that the code gets *compiled* at all.
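To make the idea concrete, here is a minimal sketch (my own illustration, not
taken from the Cleanroom papers) of the kind of annotated routine that gets
argued correct on paper and in review before it ever reaches a compiler; the
routine, names, and annotations are invented for the example.

/* Illustration only: a routine annotated for desk checking and review,
 * in the spirit of arguing correctness before compiling.
 */
#include <stdio.h>
#include <assert.h>

/* max_index: return the index of a largest element of a[0..n-1].
 * Precondition:  n >= 1.
 * Postcondition: 0 <= result < n, and a[result] >= a[i] for all 0 <= i < n.
 */
int max_index(const int a[], int n)
{
    int best = 0;
    int i;

    for (i = 1; i < n; i++) {
        /* Invariant: 0 <= best < i and a[best] >= a[j] for all 0 <= j < i. */
        if (a[i] > a[best])
            best = i;
    }
    /* At loop exit i == n, so the invariant yields the postcondition. */
    return best;
}

int main(void)
{
    int data[] = { 3, 9, 4, 9, 1 };
    int k = max_index(data, 5);

    assert(k >= 0 && k < 5);
    printf("largest element is data[%d] = %d\n", k, data[k]);
    return 0;
}

The point is not the routine itself but the order of events: the invariant and
postcondition are checked by people first, and the compile/test step only
confirms what the review already established.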

Mike Lutz
rochester!ritcv!mjl
-- 
Mike Lutz	Rochester Institute of Technology, Rochester NY
UUCP:		{allegra,seismo}!rochester!ritcv!mjl
CSNET:		mjl%rit@csnet-relay.ARPA

sommar@enea.se (Erland Sommarskog) (03/21/88)

Several people in this discussion have mentioned that it is really cycle time
that decides how thorough you are before testing.  I can't but agree.  And
to toss in my two cents, I'd like to tell about my experience from a
project that involved both software and hardware development.  (I
was on the software side.)

In this project they didn't use custom chip designs, just simple PALs.
I would say there was no difference in the frequency of errors; both
sides were equally bad.  (And when an error was revealed, we always
blamed the other side for it, of course.)  You should have seen the
first prototypes of the boards: full of loose patch wires, chips mounted
upside down, and I don't know what.  But even the later CAD-produced boards
picked up patch wires after a while.
  Also, the first prototypes ran more slowly than the software did.

Someone in the Soft-Eng Digest said that when you stripped away
microprograms and such, the hardware had far less complexity than
the software.  I have to object here.  Maybe the logic is simpler, but
other factors, unknown to software people, come into play.  Hardware
designers have to take physical factors into account: electrical
disturbances, race conditions, variation between components, the influence
of temperature, and so forth.  And when you have tested a program, you have
tested it.  This is not true for a hardware board: you have to test every
single instance to check for bad solder joints, faulty chips, and so on.
-- 
Erland Sommarskog       
ENEA Data, Stockholm        
sommar@enea.UUCP           "Si tu crois l'amour tabou...
                            Regarde bien, les yeux d'un fou!!!" -- Ange

leem@jplpro.JPL.NASA.GOV (Lee Mellinger) (04/02/88)

In article <363@ncifcrf.ncifcrf.gov> palmer@ncifcrf.gov (Thomas Palmer) writes:
:In article <3867@bloom-beacon.MIT.EDU>, tada@athena.mit.edu (Michael Zehr) writes:
:> 
:Fred Brooks tells an interesting story during lectures in his
:software engineering course.  The story involves a single-user mid-sixties
:machine during the hardware testing stages.  Software developers were
:given very limited access time to the machine.  They prepared for these
:sessions *very* carefully.  They had all their test data ready, were
:prepared for various outcomes, and could rapidly arrive at hypotheses
:as to what went wrong.  They had a PLAN.
:
:I didn't mention that those early software developers had to contend
:with possible hardware bugs too.  Can you imagine?
:
:
: Thomas C. Palmer	NCI Supercomputer Facility

I don't have to imagine.  I spent many an hour in the wee hours of the
morning trying to figure out why the machine wasn't doing what I wanted
it to, if it was doing anything at all, all the while never sure whether
my programs were wrong, or the hardware documents I was writing from
were wrong, or the hardware was broken (which occurred
quite often on the prototypes and engineering models), or the hardware
logic had a design flaw (which occurred all too often).

Lee

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
|Lee F. Mellinger                         Jet Propulsion Laboratory - NASA|
|4800 Oak Grove Drive, Pasadena, CA 91109 818/393-0516  FTS 977-0516      |
|-------------------------------------------------------------------------|
|UUCP: {ames!cit-vax,psivax}!elroy!jpl-devvax!jplpro!leem                 |
|ARPA: jplpro!leem!@cit-vax.ARPA -or- leem@jplpro.JPL.NASA.GOV            |
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-