[comp.software-eng] Provocative statement

cmb@castle.ed.ac.uk (Colin Brough) (04/23/91)

The following is quoted from an article I saw in comp.parallel this
morning.  It does, however, raise some very interesting points.  (The
discussion concerns the difference between Europe and the US in their
approaches to developing an understanding of parallel processing.)

In article 2355 of comp.parallel, Steven Ericsson Zenith
<zenith@ensmp.fr> writes:

> So what's the point? Fact is the USA has a wealth of systems experience
> that Europe doesn't have. Yes, there are a few innovative thinkers in
> concurrency (in particular those I mentioned in my earlier mail) in
> Europe. In the USA computer architecture is better understood (mostly in
> the Bay Area). I fear whilst Europe is still staring into its navel,
> the USA will have built the machines and we'll find air traffic control
> systems still programmed in C with sockets. Ok, one or two might fall
> out of the air every so often .. but the odds are probably tolerable.
> 
> And that's the point most Europeans don't understand. Engineers don't
> build bridges to fine tolerances - as suggested by the Computer Science
> formal methods community. They use over-kill in the main. Materials and
> designs proven to work from experience and then some!! Very few bridges
> fall down. The number that do is a tolerable expediency.

The interesting point is not so much the difference between Europe and
the US, but rather the 'over-kill' approach.  Do people think this is
one way in which 'software engineering' will progress in the future?

I await the discussion with interest...

__________________________________________________________________________
Colin Brough                           Edinburgh Parallel Computing Centre
cmb@castle.ed.ac.uk                    James Clerk Maxwell Building
cmb%ed.ac.uk@nsfnet-relay.ac.uk        Mayfield Road
                                       Edinburgh  EH9 3JZ
Phone: +44 31-650-5022                 SCOTLAND
Fax:   +44 31-662-4712
__________________________________________________________________________

marick@m.cs.uiuc.edu (Brian Marick) (04/24/91)

cmb@castle.ed.ac.uk (Colin Brough) writes:
>In article 2355 of comp.parallel, Steven Ericsson Zenith
><zenith@ensmp.fr> writes:

>> Engineers don't
>> build bridges to fine tolerances - as suggested by the Computer Science
>> formal methods community. They use over-kill in the main. Materials and
>> designs proven to work from experience and then some!! 

>I await the discussion with interest...

Such a discussion invariably involves many uncompromising people
making absolute statements on topics they know nothing about.  I, for
one, am tired of statements beginning, "REAL Engineers do ..."  by
people who have never actually *met*, say, a civil engineer.

Henry Petroski's _To Engineer is Human_ has chapters devoted to
tolerances, overkill, the non-linear effects of design and requirement
changes, and so on.  It has case studies like the Tacoma Narrows
bridge, the Kansas City skywalk failure, and the Grumman Flxible buses.

Brian Marick
Motorola @ University of Illinois
marick@cs.uiuc.edu, uiucdcs!marick

cole@farmhand.rtp.dg.com (Bill Cole) (04/25/91)

Colin Brough writes:
|>  
|> The interesting point is not so much the difference between Europe and
|> the US, but rather the 'over-kill' approach.  Do people think this is
|> one way in which 'software engineering' will progress in the future?
|> 
|> I await the discussion with interest...

I'd tend to agree.  Non-software engineers tend to 'over-engineer' products
so that the product won't fail even under stresses beyond anything they
could have imagined.  There are, of course, those who will under-engineer to
maximize 'profit'.

Those of us in software tend to make the assumption that we know all the
variables and can write sufficiently robust code to deal with almost
anything that happens.  Nonsense!  What we do in most cases is construct
a logic diagram that excludes events we haven't considered or don't want
to consider.  My picture is that a lot of us have written the minimum
acceptable code that will work if the users don't stress it too badly.

/Bill
Disclaimer on file.........

sakkinen@jyu.fi (Markku Sakkinen) (04/25/91)

In article <9776@castle.ed.ac.uk> cmb@castle.ed.ac.uk (Colin Brough) writes:
> ...
>In article 2355 of comp.parallel, Steven Ericsson Zenith
><zenith@ensmp.fr> writes:
>> ...
>> And that's the point most Europeans don't understand. Engineers don't
>> build bridges to fine tolerances - as suggested by the Computer Science
>> formal methods community. They use over-kill in the main. Materials and
>> designs proven to work from experience and then some!! Very few bridges
>> fall down. The number that do is a tolerable expediency.
>
>The interesting point is not so much the difference between Europe and
>the US, but rather the 'over-kill' approach.  Do people think this is
>one way in which 'software engineering' will progress in the future?

If the bridge designer wants to have a greater security factor,
(s)he can specify a little thicker steel and cables than suggested
by standard calculations.  The software designer cannot say:
"This system has to be really safe and secure, so let's put in
30% more code!"

Markku Sakkinen
Department of Computer Science and Information Systems
University of Jyvaskyla (a's with umlauts)
PL 35
SF-40351 Jyvaskyla (umlauts again)
Finland
          SAKKINEN@FINJYU.bitnet (alternative network address)

hawksk@lonex.radc.af.mil (Kenneth B. Hawks) (04/26/91)

In article <1991Apr25.133216.20855@jyu.fi> sakkinen@jytko.jyu.fi (Markku Sakkinen) writes:
>
>If the bridge designer wants to have a greater security factor,
>(s)he can specify a little thicker steel and cables than suggested
>by standard calculations.  The software designer cannot say:
>"This system has to be really safe and secure, so let's put in
>30% more code!"
>
I must respectfully DISAGREE!   If a software engineer wants a "safer"
                                              ^^^^^^^^
program, then standard safety components should be incorporated into the
algorithm implemented.  For example, use a table look-up method in lieu
of calculating the tangent of 90 degrees.  This accomplishes two things:
the algorithm executes faster, and the processor cannot "hang" trying to
compute infinity.  Other "safety" measures can include such 'standard'
techniques as checking all input variables for range and
reasonableness prior to using them.  Checking one's answer can also be
done.  30% more code?  Maybe.  Nuclear release and weapon
system navigation software must have _designed_in_ safety features.
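
Something along these lines, in C (a sketch only; the names, table
layout, and one-degree resolution are invented for illustration, not
taken from any real system):

    #include <math.h>

    #define TAN_TABLE_SIZE 90        /* one entry per degree, 0..89 */
    static double tan_table[TAN_TABLE_SIZE];

    /* Build the table once.  90 degrees is deliberately excluded, so
       the program can never try to compute an "infinite" tangent at
       run time. */
    static void init_tan_table(void)
    {
        int deg;
        for (deg = 0; deg < TAN_TABLE_SIZE; deg++)
            tan_table[deg] = tan(deg * 3.14159265358979 / 180.0);
    }

    /* Range-check the input before using it; reject anything outside
       the region the table was designed for and let the caller decide
       what to do, instead of "hanging" or returning garbage. */
    static int safe_tan(int degrees, double *result)
    {
        if (degrees < 0 || degrees >= TAN_TABLE_SIZE)
            return -1;               /* out of range: caller must handle */
        *result = tan_table[degrees];
        return 0;
    }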

After 26 years of software development for aerospace systems, believe me:
there are lots of solid, standard "safety" components that are
applicable.  The fundamental premise I have always advocated is, "each
program must be engineered to solve the problem at hand."

Kenneth B. Hawks                                   |\   /|   "Fox Forever"
Rome Laboratory, Griffiss AFB, NY                   ^o.o^
hawksk@lonex.radc.af.mil                            =(v)=
Disclaimer:  There is no one else here who thinks like I do; therefore....

reggie@paradyne.com (George W. Leach) (04/26/91)

In article <1991Apr25.133216.20855@jyu.fi> sakkinen@jytko.jyu.fi (Markku Sakkinen) writes:
>If the bridge designer wants to have a greater security factor,
>(s)he can specify a little thicker steel and cables than suggested
>by standard calculations.  

	The Brooklyn Bridge is an excellent example of this philosophy.
It was built long before the automobile, and today, well over 100 years
later, it is still going strong.

>The software designer cannot say:
>"This system has to be really safe and secure, so let's put in
>30% more code!"

	No, you miss the point.  Overkill in terms of tolerances.  We expect
to process 1,000 transactions per minute.  Well, design for greater than that
number.  Etc.

	Often, we are constrained by costs.  A bridge is built once and
expected to stand for many years.  Software systems' expected lifetimes
are much shorter.  Furthermore, we constantly revamp software systems.

-- 
George W. Leach					AT&T Paradyne 
reggie@paradyne.com				Mail stop LG-133
Phone: 1-813-530-2376				P.O. Box 2826
FAX: 1-813-530-8224				Largo, FL 34649-2826 USA

jls@rutabaga.Rational.COM (Jim Showalter) (04/27/91)

>If the bridge designer wants to have a greater security factor,
>(s)he can specify a little thicker steel and cables than suggested
>by standard calculations.  The software designer cannot say:
>"This system has to be really safe and secure, so let's put in
>30% more code!"

I disagree strongly with this. It has been my experience that the
systems that are engineered from the outset to have excellent
error detection and correction mechanisms are quite robust and
fault-tolerant. Often, the amount of error code that is involved
CAN be about 30% of the total.

Paradoxically, it has also been my experience that these safety-
engineered systems are engineered well throughout, and so tend
not to NEED the error checking that was added. On the other hand,
systems that are written without much error checking seem to be
infected with an overall attitude of slovenliness, and so are the
ones most prone to failure.
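
To give a concrete flavor of what that 30% can look like, here is a
small, made-up routine (the function and its failure modes are invented
for illustration, not taken from any real system) in which the error
handling is roughly a third of the text:

    #include <stdio.h>
    #include <stdlib.h>

    /* Read an entire file into a freshly allocated buffer.  Every call
       that can fail is checked, and every failure path cleans up. */
    char *read_whole_file(const char *name, long *len_out)
    {
        FILE *fp = fopen(name, "rb");
        char *buf = NULL;
        long  len;

        if (fp == NULL)
            return NULL;                     /* no such file, or no access */
        if (fseek(fp, 0L, SEEK_END) != 0)
            goto fail;                       /* stream is not seekable */
        len = ftell(fp);
        if (len < 0 || fseek(fp, 0L, SEEK_SET) != 0)
            goto fail;
        buf = malloc((size_t)len + 1);
        if (buf == NULL)
            goto fail;                       /* out of memory */
        if (fread(buf, 1, (size_t)len, fp) != (size_t)len)
            goto fail;                       /* short read */
        buf[len] = '\0';
        fclose(fp);
        *len_out = len;
        return buf;

    fail:
        free(buf);                           /* free(NULL) is harmless */
        fclose(fp);
        return NULL;
    }
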
--
* "Beyond 100,000 lines of code, you should probably be coding in Ada." *
*      - P.J. Plauger, Convener and Secretary of the ANSI C Committee   *
*                                                                       *
*                The opinions expressed herein are my own.              *

daves@hpopd.pwd.hp.com (Dave Straker) (04/27/91)

>> formal methods community. They use over-kill in the main. Materials and
>> designs proven to work from experience and then some!! Very few bridges
>> fall down. The number that do is a tolerable expediency.
>
>The interesting point is not so much the difference between Europe and
>the US, but rather the 'over-kill' approach.  Do people think this is
>one way in which 'software engineering' will progress in the future?

'Overkill' is really about risk management. If you want high quality,
you must spend time and effort 'making certain'.

A simple example: You write a program. It compiles and links ok. It
could be considered overkill to test it. After all, you were careful
with the design and coding. But you do it, to minimise the risk of
defects remaining in the released product. And the more you test,
the more defects you find, but at a reducing rate. This is where you
decide your acceptable level of risk. For a simple program to count
words in a text file, you may well accept a high level of risk. For a 
space shuttle control program, you would put in a lot more 'overkill'
testing.

Perhaps the original author's point is that the Europeans are more
risk-averse than Americans. This seems to be true in the case of
financial investment - many good British inventions have gone to
the US for want of development capital. But there again, look at
the state of the US economy!

Dave Straker            Pinewood Information Systems Division (PWD not PISD)
[8-{)                   HPDESK: David Straker/HP1600/01
                        Unix:   daves@hpopd.pwd.hp.com

windley@ted.cs.uidaho.edu (Phillip J. Windley) (04/29/91)

I apologize; I have lost the original reference and am not able to properly
attribute this quote:

   > formal methods community. They use over-kill in the main. Materials and

This misses the point of engineering.  The goal is not overkill, but to
get an acceptable safety factor WITHIN A CERTAIN COST.  I had a professor
who said an engineer is someone who can do with a dollar what any fool
could do with two.

   > designs proven to work from experience and then some!! Very few bridges
   > fall down. The number that do is a tolerable expediency.

This is even worse.  In the 19th century this was true.  Now it is more a
matter of analysis than experience (although it is certainly true that the
analysis is built on a solid foundation of experience).  The reason that I
discount experience is that the point of engineering education is to codify
this experience so that new engineers can use it without years on the job.
(Note that trades based on experience are best passed on by an apprentice
program, not college.)  Analysis is in part codification.

The reason that a civil engineer can build a bridge and overdesign within
an acceptable cost is that engineers can analyze their designs.  Years
of development have given practicing civil engineers the math and analysis
tools necessary to say that if I use this design, I have a safety factor of
4 (or 10).  Further, the civil engineer can compare one design with a
cheaper one and conclude with reasonable confidence that they give
equivalent safety for differing costs and then select the cheapest.

I do not believe that there are many software engineers who can do this for
code.  There are certainly fewer who learned it in school.  Most graduating 
CS majors wouldn't even be able to analyze a sort routine and give a cogent
argument that it works (note that I didn't say formal proof).
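
To make that concrete, the kind of argument I have in mind is the
informal annotation below (a sketch in C, written for this posting; it
is not a formal proof, just a cogent argument):

    /* Insertion sort, with the argument for its correctness written
       alongside the code. */
    void insertion_sort(int a[], int n)
    {
        int i, j, key;

        for (i = 1; i < n; i++) {
            /* Invariant: a[0..i-1] contains the original first i
               elements, now in non-decreasing order.  It holds at the
               start (a one-element prefix is trivially sorted). */
            key = a[i];
            j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];     /* shift larger elements right */
                j--;
            }
            a[j + 1] = key;          /* key lands after everything <= it */
            /* Now a[0..i] is sorted, and it still holds exactly the
               original first i+1 elements, so the invariant carries
               over to the next iteration. */
        }
        /* At exit i == n, so a[0..n-1] is a sorted permutation of the
           original array: the routine does what a sort must do. */
    }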

Most engineers do analysis as the major part of their job.  The design is
a small part of their product; analysis is the large part.  Until software
engineering has analysis tools analogous to other engineering disciplines,
it is at best a craft.


--
Phil Windley                          |  windley@cs.uidaho.edu
Assistant Professor		      |  windley@cheetah.cs.uidaho.edu
Department of Computer Science        |
University of Idaho                   |  Phone: 208.885.6501  
Moscow, ID 83843                      |  Fax:   208.885.6645

jls@rutabaga.Rational.COM (Jim Showalter) (04/30/91)

>	Often, we are constrained by costs.  A bridge is built once and
>expected to stand for many years.  Software systems expected lifetimes
>are much smaller.

The "expected" lifetimes are smaller, but the ACTUAL lifetimes
turn out to be quite large. This is part of the problem!
--
* "Beyond 100,000 lines of code, you should probably be coding in Ada." *
*      - P.J. Plauger, Convener and Secretary of the ANSI C Committee   *
*                                                                       *
*                The opinions expressed herein are my own.              *

pcooper@yoda.eecs.wsu.edu (Phil Cooper - CS495) (05/02/91)

In article <WINDLEY.91Apr29095455@panther.ted.cs.uidaho.edu> windley@ted.cs.uidaho.edu (Phillip J. Windley) writes:
>
>Most graduating CS majors wouldn't even be able to analyze a sort routine 
>and give a cogent argument that it works (note that I didn't say formal 
>proof).

    Phillip, were it not for your excellent taste in first names, and the
fact that you teach a mere 7 miles from my home, I would be tempted to flame
you for this comment.  I can certainly recognize a functional sort routine 
when I see one, and I expect most of my fellow graduating CS majors could 
as well.  If your students can't, then you should take a long look at your
curriculum and (more importantly IMO) faculty.  I hope you don't go around
dropping these little snide remarks around your own students as casually as
you do in this group.  Instilling confidence in your students is very
important.  Comments like the one quoted above do nothing positive.  Besides,
even if what you say is true (and I don't believe it is), whose fault is
it?  It is YOUR job to TEACH students how to do this type of analysis.  If
they can't, then it is a failing of the University of Idaho's Computer
Science dept., rather than of the students.

'nuff said.

Phillip R. Cooper


-- 
/********************************************************************\
*   Real Life:   Phillip R. Cooper                                   *
*       Email:   pcooper@yoda.eecs.wsu.edu                           *
*  Disclaimer:   Disclaimer?? I don't need no stinkin' disclaimer!!! *

windley@ted.cs.uidaho.edu (Phillip J. Windley) (05/04/91)

In article <1991May2.074129.22155@serval.net.wsu.edu> pcooper@yoda.eecs.wsu.edu (Phil Cooper - CS495) writes:

   In article <WINDLEY.91Apr29095455@panther.ted.cs.uidaho.edu> 
   windley@ted.cs.uidaho.edu (Phillip J. Windley) writes:
   >
   >Most graduating CS majors wouldn't even be able to analyze a sort routine 
   >and give a cogent argument that it works (note that I didn't say formal 
   >proof).

   I can certainly recognize a functional sort routine when I see one, and
   I expect most of my fellow graduating CS majors could as well.  If your
   students can't, then you should take a long look at your curriculum and
   (more importantly IMO) faculty.  I hope you don't go around dropping
   these little snide remarks around your own students as casually as you
   do in this group.  Instilling confidence in your students is very
   important.  Comments like the one quoted above do nothing positive.
   Besides, even if what you say is true (and I don't believe it is), whose
   fault is it?  It is YOUR job to TEACH students how to do this type of
   analysis.  If they can't, then it is a failing of the University of
   Idaho's Computer Science dept., rather than of the students.

First, I apologize if you took this as a disparaging comment on students
because it wasn't.  It was a disparaging comment on curriculum and our
understanding of how to analyze software.  If you think, however, that this
is a problem limited to Idaho, you are sadly mistaken.  

You missed my point. I didn't say that most CS majors couldn't *recognize*
a functional sort routine.  Indeed, I'm sure that most of our students (and
even some of yours ;-) could recognize one and write one.  I said that they
couldn't give a cogent argument (i.e. an informal proof) as to WHY it was
correct.  Further, most couldn't even define what correctness means for a
sort routine in anything but English.
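
For the record, here is roughly what I mean.  The definition can be
stated as two checkable predicates rather than as English prose (a
sketch in C, written for this posting; a real specification would
quantify over all inputs rather than test one):

    /* sort(a, n) is correct iff, afterwards:
         1. a is in non-decreasing order, and
         2. a is a permutation of its original contents.  */

    int is_sorted(const int a[], int n)
    {
        int i;
        for (i = 1; i < n; i++)
            if (a[i - 1] > a[i])
                return 0;
        return 1;
    }

    /* Permutation test by comparing element counts; quadratic, but it
       states the requirement precisely. */
    int is_permutation(const int before[], const int after[], int n)
    {
        int i, j, nb, na;
        for (i = 0; i < n; i++) {
            nb = na = 0;
            for (j = 0; j < n; j++) {
                if (before[j] == before[i]) nb++;
                if (after[j]  == before[i]) na++;
            }
            if (nb != na)
                return 0;
        }
        return 1;
    }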

This is, of course, a problem that goes beyond correctness.  It extends to
performance, function, and most other things that people would like to
understand about software. 

You are right that something needs to change.  Not just with Idaho's
curriculum, but with WSU's and most other places.  It's more pervasive,
however, than just adding a course or two; much of what needs to be
taught is still inadequately understood and is the topic of research
efforts here and elsewhere.

Cheers,

--phil--


--
Phil Windley                          |  windley@cs.uidaho.edu
Assistant Professor		      |  windley@cheetah.cs.uidaho.edu
Department of Computer Science        |
University of Idaho                   |  Phone: 208.885.6501  
Moscow, ID 83843                      |  Fax:   208.885.6645

cole@farmhand.rtp.dg.com (Bill Cole) (05/04/91)

Jim Showalter writes:
|> >If the bridge designer wants to have a greater security factor,
|> >(s)he can specify a little thicker steel and cables than suggested
|> >by standard calculations.  The software designer cannot say:
|> >"This system has to be really safe and secure, so let's put in
|> >30% more code!"
|> 
|> I disagree strongly with this. It has been my experience that the
|> systems that are engineered from the outset to have excellent
|> error detection and correction mechanisms are quite robust and
|> fault-tolerant. Often, the amount of error code that is involved
|> CAN be about 30% of the total.
|> 
|> Paradoxically, it has also been my experience that these safety-
|> engineered systems are engineered well throughout, and so tend
|> not to NEED the error checking that was added. On the other hand,
|> systems that are written without much error checking seem to be
|> infected with an overall attitude of slovenliness, and so are the
|> ones most prone to failure.

Yup, I believe you're correct, Jim.  The issue here is why isn't
more software engineered to these standards?  Could it be that we,
as a group, get bored with the end-game and finish the product as
quickly as possible?  Do we have enough time/resources to do it right
in the first place?  Are we so certain that specific circumstances
can't happen that we never account for them in our code?  

/Bill
I'm glad to see that there are people older than me out there.

straub@jogger.cs.umd.edu (Pablo A. Straub) (05/07/91)

In article <1991May3.195844.25823@dg-rtp.dg.com> cole@farmhand.rtp.dg.com
(Bill Cole) writes:
|>Jim Showalter writes:
|>  >If the bridge designer wants to have  a  greater  security  factor,
|>  >(s)he  can specify a little thicker steel and cables than suggested
|>  >by standard calculations.  The software designer cannot say:  "This
|>  >system  has  to be really safe and secure, so let's put in 30% more
|>  >code!"
|> 
|> I disagree strongly with this.  It has been my  experience  that  the
|> systems  that  are engineered from the outset to have excellent error
|> detection  and   correction   mechanisms   are   quite   robust   and
|> fault-tolerant.  Often, the amount of error code that is involved CAN
|> be about 30% of the total.

Yes, you can engineer software robustness.  But let's not stretch the
analogy too far.  Software is still not continuous, and robustness is not
achieved by adding more of the same.  As Jim Showalter explained to us,
program robustness is achieved by changing the design, not just adding
more code.

Pablo Straub

styri@cs.hw.ac.uk (Yu No Hoo) (05/07/91)

In article <1991May3.195844.25823@dg-rtp.dg.com> cole@farmhand.rtp.dg.com
(Bill Cole) writes:
>>Jim Showalter writes:
>>  >If the bridge designer wants to have  a  greater  security  factor,
>>  >(s)he  can specify a little thicker steel and cables than suggested
>>  >by standard calculations.  The software designer cannot say:  "This
>>  >system  has  to be really safe and secure, so let's put in 30% more
>>  >code!"
>> 
>> I disagree strongly with this.  It has been my  experience  that  the
>> systems  that  are engineered from the outset to have excellent error
>> detection  and   correction   mechanisms   are   quite   robust   and
>> fault-tolerant.  Often, the amount of error code that is involved CAN
>> be about 30% of the total.

I guess we're about to mix up "design" and "implementation" here.  Robust
code is usually a result of the design.  The bridge-design analogue of adding
more lines of code (e.g. of the error-checking kind) would probably be to
add more wires and beams (just in case...), based more or less on "gut
feeling".  The resulting bridge may develop a failure due to excessive weight.

----------------------
Haakon Styri
Dept. of Comp. Sci.              ARPA: styri@cs.hw.ac.uk
Heriot-Watt University          X-400: C=gb;PRMD=uk.ac;O=hw;OU=cs;S=styri
Edinburgh, Scotland

martin@edwards-tems.af.mil (05/08/91)

In article <34081@mimsy.umd.edu>, straub@jogger.cs.umd.edu (Pablo A. Straub)
writes:
>
> Yes, you can engineer software robustness.  But let's not stretch the
> analogy too far.  Software is still not continuous, and robustness is not
> achieved by adding more of the same.  As Jim Showalter explained to us,
> program robustness is achieved by changing the design, not just adding
> more code.
>

     Neither continuity nor linearity is universally present in physical
systems.  You are just as likely to make a complex physical structure (e.g. a
bridge) weaker as stronger by adding "more of the same".

     I can guarantee you that a fiberglass airplane will be made weaker by
adding a "little" extra resin on each layer of cloth (Burt Rutan has repeatedly
proven this).  If you want to make it stronger than its original design
strength, you must redesign it.

     There is no difference between software and hardware when it comes to
engineering in robustness.  Sometimes it is a linear process (providing larger
buffers and arrays than you could possibly envision anyone ever using) and
sometimes it isn't.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
Gary S. Martin                !  (805)277-4509  DSN 527-4509
6510th Test Wing/TSWS         !  Martin@Edwards-TEMS.af.mil
Edwards AFB, CA 93523-5000    ! 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 

hogg@amos.trl.oz.au (Stephanie Hogg) (05/09/91)

In article <34081@mimsy.umd.edu>, straub@jogger.cs.umd.edu (Pablo A.
Straub) writes:
> ...
> Yes, you can engineer software robustness.  But let's not stretch the
> analogy too far.  Software is still not continuous, and robustness is not
> achieved by adding more of the same.  As Jim Showalter explained to us,
> program robustness is achieved by changing the design, not just adding
> more code.
> 
I hope you don't really think that safety in, say, building a bridge is a
result of adding a few more tonnes of concrete.  What if that concrete
went in the wrong place?  Safety in all engineering fields is (hopefully!)
achieved by design.

> Pablo Straub

Stephanie Hogg
________________________________________________________________________
Stephanie Hogg				Internet:	s.hogg@trl.oz.au
Telecom Research Laboratories
 PO Box 249 Clayton 3168		Phone:	+61 3 541 6802
 Victoria, Australia			Fax:	+61 3 543 1944
________________________________________________________________________

windley@ted.cs.uidaho.edu (Phillip J. Windley) (05/09/91)

In article <1991May9.044816.2709@trl.oz.au> hogg@amos.trl.oz.au (Stephanie Hogg) writes:

   I hope you don't really think that safety in, say, building a bridge is a 
   result of adding a few more tonnes of concrete.  What if that concrete 
   went in the wrong place?  Safety in all engineering fields is (hopefully!) 
   achieved by design.

And demonstrated by analysis.

--phil--

--
Phil Windley                          |  windley@cs.uidaho.edu
Assistant Professor		      |  windley@cheetah.cs.uidaho.edu
Department of Computer Science        |
University of Idaho                   |  Phone: 208.885.6501  
Moscow, ID 83843                      |  Fax:   208.885.6645