[comp.software-eng] Development vs Engineering

duncan@dduck.ctt.bellcore.com (Scott Duncan) (10/10/90)

I am running the CASE'90 session "The Effect of CASE on Software Engineering"
and would like some discussion/feedback on the following issue since I think
it bears on the discussion which will take place in this session.

	What methods, tools, practices do you feel must/should be
	employed in the creation of software for the process of
	creation to be called software _engineering_ as opposed
	to software _development_?  That is, are there activities
	or technologies that you feel, if lacking, would prevent
	you from considering development activity (even very
	conscientiously carried out otherwise) to deserve the
	label _engineering_?

I believe this is relevant to my session because, if CASE is taken to mean
"computer aided/assisted/automated software _engineering_," then I would like
to know what essential elements such computer support should contain.  I am
aware of various books/articles on the subject -- though mention of people's
favorites would be fine.  What I am looking for is your immediate reaction to
this question, i.e., from your own experience/practice, what comes to mind as
essential?

Email would be appreciated (which I would summarize and post), mainly to avoid
too much mutual influence at the very outset.  I am trying to get
the reaction of practitioners first, then, after posting a summary, see what
sort of consensus emerges when the collective responses are presented before
everyone.

This is a bit like how I'll be running the session, and I wanted to "trial" it
here.  I also, of course, want people's opinions, but I am equally interested
in seeing what kind of agreement might exist before group discussion on the
subject occurs.

Thanks...

Speaking only for myself, of course, I am...
Scott P. Duncan (duncan@ctt.bellcore.com OR ...!bellcore!ctt!duncan)
                (Bellcore, 444 Hoes Lane  RRC 1H-210, Piscataway, NJ  08854)
                (908-699-3910 (w)   609-737-2945 (h))

gt@hpfcmgw.HP.COM (George Tatge) (10/12/90)

>
>	What methods, tools, practices do you feel must/should be
>	employed in the creation of software for the process of
>	creation to be called software _engineering_ as opposed
>	to software _development_?  That is, are there activities
>	or technologies that you feel, if lacking, would prevent
>	you from considering development activity (even very
>	conscientiously carried out otherwise) to deserve the
>	label _engineering_?
>

Even though you asked for e-mail, I think this is a great question and
would like to see some interactive discussion...

Since the focus is on "engineering" this is one case where it is clearly
reasonable to look at other, older engineering disciplines to see what
has been accomplished.  It is probably fair to say that first civil and
then mechanical engineers defined the role.  Without going through a long
history from the pyramid engineers to Henry Ford, let's just jump to a few
conclusions.

A. Engineers are optimizers.
A1. They design/test to reduce costs.
A2. They design/test to reduce weight.
A3. They design/test to improve safety.
A4. They design/test to improve reliability.
A5... etc.

B. Engineers are innovators.  Often, the solution to the above problems
lies in a clever new design or manufacturing technique.  Rarely, but
significantly, an entirely different approach is taken to the problem
instead of further refinements to known and trusted solutions.  Examples
include such things as trusses, air bags and velcro.  By and large though,
engineering involves optimizing and incrementally improving existing, known
solutions.  In other words, engineering is an evolutionary discipline.

Applying these ideas to the original question, a few things emerge.

1. Since there can be no engineering without testing to ensure that the
product is safe and reliable, a software engineer must have access to
test equipment and tools.  Obviously, everyone would like these to be
as automated as possible (e.g., the robot arm that opens and closes the door a
few trillion times to test reliability) because testing can become boring
very quickly.

2. Since cost optimization is a key role of engineering, there must be a
reliable means of measuring cost.  This has, thus far, completely evaded
the software community at large.  If a mechanical engineer proposes a 
newly designed water pump, he can make very accurate estimates about the
manufacturing costs and the warranty costs of his design.  When a software
engineer proposes a new bottom level sort routine, nobody really has
much of an idea as to what "cost" might mean in this context.  The problem
only gets worse on the macro scale when we talk of adding "major new
features" to existing software.

3. I've saved the most important for last.  My experience is that most
software people cringe at the idea of becoming engineers.  If it is truly
the goal of someone or some group to turn software development into an
engineering discipline then I suggest that it is at least a thirty year
process and will require a radical change in the educational philosophies
espoused at the universities where software people are indoctrinated.


George Tatge

windley@cheetah.cs.uidaho.edu (Phil Windley/20000000) (10/13/90)

In article <2450009@hpfcmgw.HP.COM> gt@hpfcmgw.HP.COM (George Tatge) writes:

   3. I've saved the most important for last.  My experience is that most
   software people cringe at the idea of becoming engineers.  If it is truly
   the goal of someone or some group to turn software development into an
   engineering discipline then I suggest that it is at least a thirty year
   process and will require a radical change in the educational philosophies
   espoused at the universities where software people are indoctrinated.


I have to agree with George here (I seem to remember disagreeing with him
on something before ;-).  Software engineering isn't engineering just
because it has engineering in the name. (One could say the same for
Computer Science, but that's a different soapbox.)  Turing it into an
engineering discipline is going to take LOTS of time.  

Consider that it can take 4 or 5 years for a single curriculum change to
have an effect on graduates.  And, of course, curriculum is just a tiny
part of the whole thing.  

There's the whole registration thing which was discussed here a month or so
ago.  Most states (all states???) require engineers to be licensed to
practice engineering (I'm using "practice" in the legal sense where it is
defined by statute).  We're a long way from being able to write a test
which defines the discipline.

Having said that, I believe that even this long process is inevitable.


--
Phil Windley                          |  windley@cheetah.cs.uidaho.edu
Department of Computer Science        |  windley@ted.cs.uidaho.edu
University of Idaho                   |
Moscow, ID 83843                      |  Phone: (208) 885-6501

ogden@seal.cis.ohio-state.edu (William F Ogden) (10/13/90)

In article <2450009@hpfcmgw.HP.COM> gt@hpfcmgw.HP.COM (George Tatge) writes:

   ....
>1. Since there can be no engineering without testing to ensure that the
>product is safe and reliable, a software engineer must have access to
>test equipment and tools.

Actually quite a bit of engineering is accomplished without testing.
You rarely see testing of a bridge or building or dam, for example.
Safety and reliability are achieved by analysis, modeling and simulation,
inspection during construction, etc. I'd say that the jury is still out
on which techniques will prove the most effective for software engineering.

/Bill

cox@stpstn.UUCP (Brad Cox) (10/14/90)

>Actually quite a bit of engineering is accomplished without testing.
>You rarely see testing of a bridge or building or dam, for example.
>Safety and reliability are achieved by analysis, modeling and simulation,
>inspection during construction, etc. I'd say that the jury is still out
>on which techniques will prove the most effective for software engineering.

The testing went on behind the scenes,
in building standard reference manuals for
the materials approved for use in building 
bridges, buildings, or dams.

This is where we depart from engineering...
i.e. that programmers feel competent to reinvent
our raw materials from first principles.
-- 

Brad Cox; cox@stepstone.com; CI$ 71230,647; 203 426 1875
The Stepstone Corporation; 75 Glen Road; Sandy Hook CT 06482

bwb@sei.cmu.edu (Bruce Benson) (10/15/90)

In article <5682@stpstn.UUCP> cox@stpstn.UUCP (Brad Cox) writes:

>This is where we depart from engineering...
>i.e. that programmers feel competent to reinvent
>our raw materials from first principles.

Why does this look like we are blaming programmers for the woes of the
field?  Would we have "engineering" if it were as easy to change raw
materials as it is to change code?  Would software be as bad if we
weren't cost and schedule driven?  Beating up programmers never
solved any perceived problem....

* Bruce Benson                   + Internet  - bwb@sei.cmu.edu +       +
* Software Engineering Institute + Compuserv - 76226,3407      +    >--|>
* Carnegie Mellon University     + Voice     - 412 268 8469    +       +
* Pittsburgh PA 15213-3890       +                             +  US Air Force

jad@hpcndnm.cnd.hp.com (John Dilley) (10/15/90)

>In article <9028@fy.sei.cmu.edu> bwb@sei.cmu.edu (Bruce Benson) writes:
>>In article <5682@stpstn.UUCP> cox@stpstn.UUCP (Brad Cox) writes:

>>This is where we depart from engineering...
>>i.e. that programmers feel competent to reinvent
>>our raw materials from first principles.

>Why does this look like we are blaming programmers for the woes of the
>field?  Would we have "engineering" if it were as easy to change raw
>materials as it is to change code?  Would software be as bad if we
>weren't cost and schedule driven?  Beating up programmers never
>solved any perceived problem....

	Software engineering is a young discipline.  In the early days
I imagine those who built bridges and dams would make mistakes similar
in kind to the mistakes we make.  Without a ready supply of experience
and pre-fabricated materials, civil engineers would also have had to build
their creations from raw materials.  Only after many years of building
these things do we understand what we need to build a really good dam.
And a really safe one.

	I expect one could list similar experiences in the electrical
engineering field.  When chip design was young, how often did designers
[re-]create cells and [re-]learn techniques for creating effective and
efficient circuits?  Not being an EE, I rely on the wisdom of the net.

	Now in software engineering we are faced with another difficult
set of problems.  As we learn new techniques, and create more library
software we are able to move ahead faster and faster.  One day we will
no longer need to write string copy and sorting routines to get our jobs
done (actually, that day is here, isn't it?).  My hope is that we will
begin to truly REUSE the experience we (as a collective) gain from our
software projects.  And if we can reuse tools to build more powerful
tools we will again be able to make the leaps we have seen in other,
more successful engineering disciplines.

	Software engineering is a young science.  And it appears to be
maturing slowly from my perspective... but these things take time.  It's
already possible to see the long way we've come in SWE -- but the field
is so broad, and the problems so difficult that we can also see how much
farther we really have to go.

                          --      jad      --
			      John Dilley
			    Hewlett-Packard
                       Colorado Networks Division
UX-mail:      		     jad@cnd.hp.com
Phone:                       (303) 229-2787

gt@hpfcmgw.HP.COM (George Tatge) (10/15/90)

>
>   ....
>>1. Since there can be no engineering without testing to ensure that the
>>product is safe and reliable, a software engineer must have access to
>>test equipment and tools.
>
>Actually quite a bit of engineering is accomplished without testing.
>You rarely see testing of a bridge or building or dam, for example.
>Safety and reliability are achieved by analysis, modeling and simulation,
>inspection during construction, etc. I'd say that the jury is still out
>on which techniques will prove the most effective for software engineering.
>
>/Bill
>----------

Bill brings up a good point.  First, I'd have to say I overstated the 
case (unintentionally) and then add that I think Bill has also.  There
is some engineering that can NOT rely on testing, but this is in the 
minority.  I'd have to argue that the examples you used don't apply 
all that well.  For one thing, many of the "inspections" on large 
CE projects are made up of real tests of the subcomponents.  Core sampling
and testing of the concrete, etc.  Secondly, the bridge undergoes substantial
testing during construction from the heavy equipment traffic
(it's not like they build a bridge and there is never one vehicle on it
until they open it for Monday rush hour!), the building is constantly
"tested" during construction, and even the reservoir is (of necessity)
filled gradually.

gt

davidm@uunet.UU.NET (David S. Masterson) (10/16/90)

In article <5682@stpstn.UUCP> cox@stpstn.UUCP (Brad Cox) writes:

   This is where we depart from engineering...
   i.e. that programmers feel competent to reinvent
   our raw materials from first principles.

A couple of questions.

Before there was object-oriented programming, there was function-oriented
programming.  Everyone invented useful, *reusable* functions to do all sorts
of basic things (how many sort routines have you seen?).  Yet, it went very
little further than the basics (I never saw a reusable accounting function).
Why?  And why should one expect that reusable, object-oriented programming
will go any further?
--
====================================================================
David Masterson					Consilium, Inc.
uunet!cimshop!davidm				Mtn. View, CA  94043
====================================================================
"If someone thinks they know what I said, then I didn't say it!"

donm@margot.Eng.Sun.COM (Don Miller) (10/16/90)

In article <84754@tut.cis.ohio-state.edu> William F Ogden <ogden@cis.ohio-state.edu> writes:
>In article <2450009@hpfcmgw.HP.COM> gt@hpfcmgw.HP.COM (George Tatge) writes:
>
>   ....
>>1. Since there can be no engineering without testing to ensure that the
>>product is safe and reliable, a software engineer must have access to
>>test equipment and tools.
>
>Actually quite a bit of engineering is accomplished without testing.
>You rarely see testing of a bridge or building or dam, for example.
>Safety and reliability are achieved by analysis, modeling and simulation,
>inspection during construction, etc. I'd say that the jury is still out
>on which techniques will prove the most effective for software engineering.
>
>/Bill

Let's distinguish between the test execution phase and the whole testing
process which includes requirements and design verification as well as
test execution itself.  

Also, I bet the first car over the bridge isn't the first of rush hour the
morning after the bridge is built.  :-)

Don Miller
Sun Microsystems, Inc.
Software Quality Engineering
donm@eng.sun.com

bwb@sei.cmu.edu (Bruce Benson) (10/16/90)

In article <CIMSHOP!DAVIDM.90Oct15100313@uunet.UU.NET> cimshop!davidm@uunet.UU.NET (David S. Masterson) writes:

>Before there was object-oriented programming, there was function-oriented
>programming.  Everyone invented useful, *reusable* functions to do all sorts
>of basic things (how many sort routines have you seen?).  Yet, it went very
>little further than the basics (I never saw a reusable accounting function).
>Why?  And why should one expect that reusable, object-oriented programming
>will go any further?

I just never want to write another sort, search, link list, stack, queue,
b-tree, etc, algorithm again.  These things have been around a long time and
are fundamental to the discipline.  I would be happy if I could work at this
simple level.  Aren't these the basic building blocks of our field? True, I
can (and do) pull these algorithms out of a book (the engineering handbook
approach we seem to be striving for) but this is not as satisfying nor as
productive as just instantiating: 

     sort(sort_method, data_structure, key(s)...)  

Performance (space/time complexity) can be tuned by the sort_method and
other parameters and if all else fails THEN code your own.  

The heck with an accounting function, I would be happy with the basics.
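Benson's `sort(sort_method, data_structure, key(s)...)` is pseudocode, but the idea translates directly. Here is a minimal, hypothetical sketch in Python (a language chosen for illustration, not one discussed in the thread), where the standard library already plays the "engineering handbook" role: the generic call delegates to a pluggable, trusted building block, and only "if all else fails" would you code your own.

```python
# Hypothetical sketch of a generic, instantiable sort in the spirit of
# sort(sort_method, data_structure, key(s)...).  The default sort_method
# is the built-in `sorted`; a caller can swap in another implementation
# (their own, or a tuned one) without touching calling code.

def sort(data, sort_method=sorted, key=None, reverse=False):
    """Sort `data` using a pluggable sort_method; tune via key/reverse."""
    return sort_method(data, key=key, reverse=reverse)

records = [("carol", 31), ("alice", 25), ("bob", 28)]
by_age = sort(records, key=lambda r: r[1])
print(by_age)   # [('alice', 25), ('bob', 28), ('carol', 31)]
```

Performance tuning here amounts to passing a different `sort_method` or `key`, which is exactly the "instantiate, don't re-derive" workflow Benson is asking for.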

* Bruce Benson                   + Internet  - bwb@sei.cmu.edu +       +
* Software Engineering Institute + Compuserv - 76226,3407      +    >--|>
* Carnegie Mellon University     + Voice     - 412 268 8469    +       +
* Pittsburgh PA 15213-3890       +                             +  US Air Force

frodo@b11.ingr.com (Chuck Puckett) (10/16/90)

|windley@cheetah.cs.uidaho.edu (Phil Windley/20000000) writes:
||In article <2450009@hpfcmgw.HP.COM> gt@hpfcmgw.HP.COM (George Tatge) writes:
||
||  3. I've saved the most important for last.  My experience is that most
||  software people cringe at the idea of becoming engineers.  If it is truly
|
|I have to agree with George here (I seem to remember disagreeing with him
|on something before ;-).  Software engineering isn't engineering just
|because it has engineering in the name. (One could say the same for
|Computer Science, but that's a different soapbox.)  Turing it into an
						     >----<
|engineering discipline is going to take LOTS of time.  

A Freudian slip, nicht wahr?

Chuck "Did the machine pass The Test?? Well, did it?" Puckett
 

mcgregor@hemlock.Atherton.COM (Scott McGregor) (10/16/90)

In article <CIMSHOP!DAVIDM.90Oct15100313@uunet.UU.NET>,
cimshop!davidm@uunet.UU.NET (David S. Masterson) writes:

> Before there was object-oriented programming, there was function-oriented
> programming.  Everyone invented useful, *reusable* functions to do all sorts
> of basic things (how many sort routines have you seen?).  Yet, it went very
> little further than the basics (I never saw a reusable accounting function).

Actually, for a very long time most every accounting system was a custom
job.  It was assumed that no company would want to change its human
accounting procedures and conventions, and there was enough variation
that even though the basic accounting functions are similar, everyone had
their own twists as far as how charts of accounts were organized, etc.
At custom software shops (I worked with one in mid '70s) we did make
every installation a "custom" one with new software, but we "reused"
code by copying lines out of one customer's system into another.  In the
'80s, largely with the advent of PCs, there started to be non-custom
shrink-wrap software "accounting products".  These often come in
"modules" that do an accounting "function" like General Ledger, or
Payroll Administration, much higher level than "Subroutines", but you
can think of these things as being very large objects that are being
reused by the companies that buy them.  You can mix and match "modules"
from a common vendor to meet your needs.   Of course, many of these
small companies that started with these PC products grew, and they tend
to continue to favor "standardized" software over custom designed
software maintained by their MIS departments.  Many very large companies
continue to run the Nth generation of the software that they wrote in
the '70s, but even some of these are looking to reduce their maintenance
costs by switching to packaged solutions--even if it means some changes
to local conventions.  The pressure for these companies is that their
overhead in maintenance can be larger than their younger competitors who
have standard off-the-shelf accounting packages that they don't need a
staff of programmers to maintain.

Main points:

* People have always reused software (generally their own!)--often by
  copying in the editor.
* Granularity matters. High-level functionality (shrinkwrapped software)
  and low level (a few related lines of code) might be stable states--
  in-between levels (subroutines) may not be stable.
* Expectations and flexibility of the customer and of the developer
  affect the equilibrium solutions.

Scott McGregor
mcgregor@atherton.com

cdk@neptune.dsc.com (Colin Kelley) (10/17/90)

In article <5682@stpstn.UUCP> cox@stpstn.UUCP (Brad Cox) writes:

>This is where we depart from engineering...
>i.e. that programmers feel competent to reinvent
>our raw materials from first principles.

I see two problems which have kept software development from becoming an
engineering endeavor:

- Software is built on hardware.  And the hardware has been changing at a
  phenomenal rate.  Forth and assembly language were interesting when we
  only had 4K RAM; now that we can get 8M pretty cheap, we don't need to
  optimize for memory usage very much.  This has led to shifts in the popular
  languages and development environments.  (In turn, hardware is built on
  physics, which hasn't changed that much since the invention of the
  semiconductor transistor.  The advent of optical computing might be the
  first major hardware shake-up in a while.)

- Software is where you throw all the concepts which haven't been fleshed-out
  yet!  I'm not the only one who has wished that software development were
  as easy as putting together TTL chip building blocks.  But if the needs are
  well-understood enough (and unlikely to change every week for the next year)
  we _would_ put the software into hardware.  The problem is that software
  can be changed on a minute's notice, so it encourages techniques and designs
  which depend on this...

  [An interesting related question, though:  how can you get the software
  interfaces generic enough that you really can plug modules together,
  ala TTL?]
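One speculative answer to the bracketed question: TTL works because every chip agrees on one narrow electrical contract, so modules can be rewired freely. A software analogue (sketched here in Python; all names are illustrative, not from the post) is to make every module agree on a single minimal interface -- say, a callable that consumes and produces a stream of items:

```python
# Speculative sketch of "plug-compatible" software modules, a la TTL.
# Each stage honors one narrow contract: callable(iterable) -> iterable.
# Because the contract is uniform, stages can be rewired like chips.

def uppercase(stream):
    """A stage: map each string to upper case."""
    return (s.upper() for s in stream)

def dedupe(stream):
    """A stage: drop items already seen, preserving order."""
    seen = set()
    for s in stream:
        if s not in seen:
            seen.add(s)
            yield s

def pipeline(*stages):
    """Wire stages together; any stage honoring the contract plugs in."""
    def run(stream):
        for stage in stages:
            stream = stage(stream)
        return list(stream)
    return run

process = pipeline(uppercase, dedupe)
print(process(["a", "b", "a", "c"]))   # prints ['A', 'B', 'C']
```

The narrowness of the contract is the point: like a TTL pin-out, it says nothing about what happens inside a stage, only how stages connect.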

I do have to say that I am constantly amazed what an immature "science"
software development really is.  I look forward to the day when I don't have
to battle for complete requirements definition, rigorous unit testing, deferred
optimization, etc. at every new company I work for.  When even these basic
principles aren't agreed upon, there's no hope for software development to
become an engineering discipline.

                        -Colin Kelley, Digital Sound Corporation
                        cdk%neptune%dschub@hub.ucsb.edu
                        ...!pyramid!ucsbcsl!dschub!neptune!cdk
                        (805) 569-0154 x 247

davidm@uunet.UU.NET (David S. Masterson) (10/17/90)

In article <32090@athertn.Atherton.COM> mcgregor@hemlock.Atherton.COM
(Scott McGregor) writes:

   Main points:

   * People have always reused software (generally their own!)--often by
     copying in the editor.

Wellll, I get the point (and have seen it *a lot* around here), but, with
respect to object-orientation, it would seem that copied code is not the form
of reusable code we want to achieve.  Truly reusable objects are not changed
in their reuse (and are, therefore, saleable), they are just made part of a
new implementation through some sort of in-direction mechanism.
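Masterson's "in-direction mechanism" can be made concrete with a small sketch (Python used purely for illustration; the class names are hypothetical, not from the thread). The reused object is never edited; a new component acquires its behavior by delegation:

```python
# Minimal sketch of reuse through indirection rather than copy-and-edit.
# Stack is the "truly reusable object": its source is never modified.

class Stack:
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

class UndoHistory:
    """New functionality built on the unchanged Stack via delegation."""
    def __init__(self):
        self._undo = Stack()      # reused as-is -- hence saleable
    def record(self, action):
        self._undo.push(action)
    def undo(self):
        return self._undo.pop()

h = UndoHistory()
h.record("insert line")
h.record("delete word")
print(h.undo())   # prints delete word
```

Because `Stack` is consumed only through its interface, it could ship compiled, which is exactly what makes it a sellable component rather than copied code.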

   * Granularity matters. High-level functionality (shrinkwrapped software)
     and low level (a few related lines of code) might be stable states--in
	  between levels (subroutines) may not be stable.

Exactly my point.  The same statement of granularity can be applied to
high-level and low-level *objects*.  Might we not also have the same problem
with the in-between objects?  Or don't people think the in-between levels are
worth worrying about?

   * Expectations and flexibility of the customer and of the developer
     affect the equilibrium solutions.

I think Brad Cox's idea, though, is to move the developer up closer to the
customer.  To do this, though, requires a sound base to develop on -- which
falls back on the question of granularity (move the low-level up towards the
in-between level).
--
====================================================================
David Masterson					Consilium, Inc.
uunet!cimshop!davidm				Mtn. View, CA  94043
====================================================================
"If someone thinks they know what I said, then I didn't say it!"

mitchell@chance.uucp (George Mitchell) (10/17/90)

In article <1624@dschub.dsc.com> cdk@neptune.dsc.com (Colin Kelley) wrote:
`I see two problems which have kept software development from becoming an
`engineering endeavor:
`
`- Software is built on hardware.  And the hardware has been changing at a
`  phenomenal rate.
The instruction set architectures have not changed much more rapidly
than the IC technologies.  HOL changes are slower yet.

`- Software is where you throw all the concepts which haven't been fleshed-out
`  yet!
This IS a problem.  Is the developer responsible for accepting an
undefined task?

`  [An interesting related question, though:  how can you get the software
`  interfaces generic enough that you really can plug modules together,
`  ala TTL?]
Could (object oriented) domain analysis help?
--
George Mitchell, MITRE, MS Z676, 7525 Colshire Dr, McLean, VA  22102
email: gmitchel@mitre.org  [alt: mitchell@community-chest.mitre.org]
vmail: 703/883-6029         FAX: 703/883-5519

mcgregor@hemlock.Atherton.COM (Scott McGregor) (10/18/90)

In article <CIMSHOP!DAVIDM.90Oct16142827@uunet.UU.NET>,
cimshop!davidm@uunet.UU.NET (David S. Masterson) writes:

> Wellll, I get the point (and have seen it *a lot* around here), but, with
> respect to object-orientation, it would seem that copied code is not the
> form of reusable code we want to achieve.

Probably not.  However, I bring it up because I believe that the major
forces affecting reuse behavior are sociological and psychological, and
not technological.   People like reusing their own code because they
have faith in their previous efforts, and because they can use easy
ASSOCIATIVE memory ways of finding it.  They like copying code from
an editor because they don't have to think deeply about writing generalized
code the first time, it can be easily and incrementally modified to the task
at hand, and because copying and pasting is a daily reinforced activity in
most editing tasks (including non-coding tasks).  When uncertain, the code
can also be examined directly to give one a sense of confidence.  Laziness
plays a certain part in the copying decision, but so does enjoyment in
coding (discovering) a new algorithm which might overwhelm laziness whenever
a fit is uncertain. 

I believe that another related form of copying is use of standard
subroutine libraries (e.g. libc, libBSD, Xtk, etc.)  Here, the user
has less certainty about the implementation, and in fact it is common
to see people who have implemented their own memory allocation, sort
or string routines  because they thought they could do better than
the standard ones, because they were unaware of a standard subroutine's
existence, or because they wanted their own custom interface.  But for
another set of programmers these subroutines are merely treated as
extended primitives in the programming language they use.  

Brad Cox has noted the similarity between code module/object reuse and
interchangeable components in firearms.  One of the interesting things to
note about that is that most of the components are visible to the user
who, if a knowledgeable gunsmith, can visually determine their quality.
The packaging for most subroutine libraries and object libraries is
not their viewable code form, but a compiled form, requiring considerably
more effort for an expert to judge quality and suitability from.  On the
other hand, for non-experts this is irrelevant.  They like the fact that
they DON'T HAVE TO BE EXPERT, but can make decisions on marketing glossies
alone, and they have greater faith in the expertise of the producer than
they have in their own abilities.  I believe that Brad has noted
gunsmith opposition to interchangeable parts when introduced, but that it
was the soldiers in the field and other consumers who preferred them.

This leads to a sociological observation, namely that one will see
object-level reuse begin with the nonexperts first (accountants who
purchase a shrink-wrapped accounting package, not a programmer).  And then you
will probably see acceptance by the fringe programmers (4GL users, Lotus
macro writers et. al who don't think they are programmers), and then
applications programmers (who will use a library of "standards" such as
Xtk...)  In other words, object style reuse is most desirable to those
who have no interest in doing the work in the first place!  It should
not be surprising to see that in fact shrink-wrap software, 4GL and
other non-procedural applications use is up but we often don't count that as
object reuse.   Object oriented libraries aimed at "professional programmers"
are facing a more difficult battle. 

A related problem in the acceptance of object level libraries is the 
retrieval problem.  For many non-programmers, a human interface to a 
library (i.e. a reference librarian) is an acceptable, and often preferred
retrieval interface.  In general, program libraries are supported only
by tools such as KWIC and grep lookups and not with human reference
librarians.  From a personal experience with an exception to this 
generalization--I have observed that the reactions of the users to
the human supported and tool supported access methods are different,
and that the tool support does not in fact capture much of the
disambiguation and filtering that librarians provide.  This has
often led to the undermining of large libraries, as the categorization,
maintenance, and retrieval functions are often onerous to programmers
who would rather enjoy the discovery feelings that come from writing
their own solution.  This is further complicated by the fact that
project managers usually can only hire programmers, not reference
librarians, and suffer from the "when all you have is a hammer the
solution to every problem needs to be a nail" syndrome.

> I think Brad Cox's idea, though, is to move the developer up closer to the
> customer.  To do this, though, requires a sound base to develop on -- which
> falls back on the question of granularity (move the low-level up towards the
> in-between level).

It may be that the evidence of shrink-wrap software, 4GLs, etc. is
indicating more customers moving closer to doing development themselves
(assembling macros...) without realizing it, and that if intermediate
granularity will be supported it may move from higher levels down, and
not as quickly the other way around.  It is still hard to say for sure.

Scott McGregor

EGNILGES@pucc.Princeton.EDU (Ed Nilges) (10/18/90)

In article <32174@athertn.Atherton.COM>, mcgregor@hemlock.Atherton.COM (Scott McGregor) writes:

>gunsmith opposition to interchangeable parts when introduced, but that it
>was the soldiers in the field and other consumers who preferred them.

I didn't like Brad's gunsmithing metaphor.  Computer programs aren't
guns.  For one thing, they are creatures of a symbolic rather than
force-determined realm.  For another, because gunsmithing evolved
from a cottage industry into one of interchangeable parts does not
imply that that's a good way for software to evolve.  Brad's "primitive"
programmers were not analogous to one-of-a-kind craftspersons, but
were already (even in the 1950s) enmeshed in a highly industrialized
culture.  These programmers not only did not oppose standardization,
it was they who INVENTED the use of subroutines and programming
languages.  So it's perfectly possible that programming may represent
the Hegelian synthesis of thesis (cottage industry) and antithesis
(the assembly line), with a return to the cottage manufacture OF
STANDARDIZED PARTS which are then exchanged over the network by
neo-craftspeople.

Also, when you speak of soldiers "preferring" guns made of standardized
parts, you may have shifted levels from when you spoke of scouts
"preferring" a gun which they manufactured themselves, or at least
could field-repair.  Soldiers (and, I'd warrant, nonexpert computer
users) don't "prefer" a gun, or program. They are issued their tools by
The Powers that Be, who of course would prefer standardization.
The brass, just like individual gunsmiths, can err, but when they
err, they err big: what was the firearm which kept on failing in
Vietnam?  In like manner, compiled libraries can propagate errors
much faster than Brad's bad old craftspeople.

I like collections of source programs like NUMERICAL RECIPES...
too bad there aren't more.  Note that there is no reason why
such collections can't be distributed in both source and object
form to keep nonexperts happy (although I suspect the real reason
for an object-code-only policy is not to avoid confusing nonexperts:
I suspect it is to keep people in the dark about (1) your trade
secrets and (2) the poor quality of your code.)

UH2@psuvm.psu.edu (Lee Sailer) (10/20/90)

In article <1624@dschub.dsc.com>, cdk@neptune.dsc.com (Colin Kelley) says:

>- Software is where you throw all the concepts which haven't been fleshed-out
>  yet!  I'm not the only one who has wished that software development were
>  as easy as putting together TTL chip building blocks.  But if the needs are
>  well-understood enough (and unlikely to change every week for the next year)
>  we _would_ put the software into hardware.  The problem is that software
>  can be changed on a minute's notice, so it encourages techniques and designs
>  which depend on this...


I agree.  Imagine this taken to the ridiculous extreme.  Some programmers
develop a killer application in a language they invent.  When finished, they
announce that they are finished, and hand the whole thing over to the hardware
engineers.  The engineers then try to build a computer that implements the
language, and runs the application.  Meanwhile, the programmers complain
about how long it takes the hardware engineers to do the work, and blame
them for all the weird problems that arise.

                                           lee

coms2269@waikato.ac.nz (Brent C Summers) (10/22/90)

Distribution: comp
Organization: University of Waikato, Hamilton, New Zealand
Lines: 27

EGNILGES@pucc.Princeton.EDU (Ed Nilges) writes:
> I like collections of source programs like NUMERICAL RECIPES...
> too bad there aren't more.  Note that there is no reason why
> such collections can't be distributed in both source and object
> form to keep nonexperts happy (although I suspect the real reason
> for an object-code-only policy is not to avoid confusing nonexperts:
> I suspect it is to keep people in the dark about (1) your trade
> secrets and (2) the poor quality of your code.)

True, these are both possible reasons for object-only distribution, but consider
the reliability and standardisation issues also.  If (for instance) the ISO
Modula-2 Library standard is ever finished, I expect to be able to walk into
an organisation and - while initially ignorant of their methods and local
standards - at least be comfortable with the standard library I know and hate.

Distribution in source enables "programmer reassurance", but not as much as
formal verification (or automated verification even?) might, and it leaves
the door open to tampering with "standard" software tools/materials.  I can't
find it in my heart to believe that the majority of programmers will resist
for long the temptation to "just tweak this a bit so it'll do what I want..."
thus ruining not only that implementation, but also eventually the entire
standard.

+-bcs, U of Waikato, NZ--------------------------------------------------+
|  "Now if only I could get Comp. Serv. to give me a sane username..."   |
|      All opinions expressed are, of course, solely my own errors.      |
+-------------------------------------------------coms2269@waikato.ac.nz-+

itcp@praxis.co.uk (Tom Parke) (10/22/90)

mitchell@chance.uucp (George Mitchell) writes:

>In article <1624@dschub.dsc.com> cdk@neptune.dsc.com (Colin Kelley) wrote:
>`I see two problems which have kept software development from becoming an
>`engineering endevour:
>`
>`- Software is built on hardware.  And the hardware has been changing at a
>`  phenomenal rate.
>The instruction set architectures have not changed much more rapidly
>than the IC technologies.  HOL changes are slower yet.

As George points out, the instruction set is not the problem, but that
does not mean there is no problem.  The hardware *has* changed at a
phenomenal rate: the changes in processing power, memory, and mass
storage continually re-write the rules for all those implementation
trade-off decisions.  They increase users' expectations of what the
software should do for them.  There are changes in networking and user
interfaces that give rise to considerations that didn't even exist ten
years ago.  Finally, because of hardware changes, where the hardware is
located, how it is used, and who uses it have all changed, and these
all affect what the software has to do and how it does it.

Software is actually usually built not on hardware but on yet more
software.  This too has changed phenomenally, but in fact it offers
some salvation.  Once a layer of software is good enough (e.g. Unix,
TCP/IP, X, SQL), sufficiently established, and too expensive for most
to want to re-invent, it seems to offer some hope of actually
insulating us from the headlong rush of hardware development.  Or at
least of buffering the hardware changes so that they impact developers
less frequently.

Elsewhere recently in this or a parallel stream, Brad Cox implied
that software engineers are like civil engineers who re-invent the
brick on every project.  All too frequently, perhaps, it's the software
engineer who's given a different brick and who has to re-invent the wall.

-- 

Tom Parke (my opinions and spelling are strictly temporary) 
itcp@praxis.co.uk
Praxis, 20 Manvers St., Bath BA1 1PX, UK 

EGNILGES@pucc.Princeton.EDU (Ed Nilges) (10/22/90)

In article <2043.27232d06@waikato.ac.nz>, coms2269@waikato.ac.nz (Brent C Summers) writes:

>EGNILGES@pucc.Princeton.EDU (Ed Nilges) writes:
>> I like collections of source programs like NUMERICAL RECIPES...
>> too bad there aren't more.  Note that there is no reason why
>> such collections can't be distributed in both source and object
>
>Distribution in source enables "programmer reassurance", but not as much as
>formal verification (or automated verification even?) might, and it leaves
>the door open to tampering with "standard" software tools/materials.  I can't
>find it in my heart to believe that the majority of programmers will resist
>for long the temptation to "just tweak this a bit so it'll do what I want..."
>thus ruining not only that implementation, but also eventually the entire

Standards are supposed to be voluntary; the rewards of adhering to a
standard are supposed to outweigh the rewards of departing from it.  The
majority of programmers may, or may not, be sensible enough to make a
backup copy before "tweaking" a piece of software.  However, I find
the idea that libraries must be object-code only (OCO) solely to prevent
such "tweaks" (1) repugnant and (2) dishonest.

It's a repugnant idea because software is sufficiently error-prone that
the only way a standard library will ever be fully tested is through a
SOCIAL process, consisting of the verifications of the original devel-
opment team together with the activities of any and all readers, who
are encouraged to comment on any errors they may find.  This is the
way Knuth, for example, was able to double his reward money every year
for bugs found in the source code of TEX: the very process of publishing
TEX buys you a vast testing group.  I am not saying that you should
subject the public to Chernobyl-like tests of your code.  This process
should follow all other standard tests.

It's dishonest because the real motivation of OCO is not to protect
the public from irresponsible programmers tweaking your code (by the
way, code that is sufficiently robust and general needs fewer tweaks).
It is (1) to hide trade secrets and (2) to hide blunders.

cox@stpstn.UUCP (Brad Cox) (10/27/90)

In article <32174@athertn.Atherton.COM> mcgregor@hemlock.Atherton.COM (Scott McGregor) writes:
| Brad Cox has noted the similarity between code module/object reuse and
| interchangable components in firearms.  One of the interesting thing to 
| note about that is that most of the components are visible to the user 
| who, if a knowledgable gunsmith, can visually determine their quality. 
| The packaging for most subroutine libraries and object libraries is 
| not their viewable code form, but a compiled form, requiring considerable
| more effort for an expert to judge quality and suitability from.   On the
| other hand, for non-experts this is irrelevant.  They like the fact that
| they DON'T HAVE TO BE EXPERT, but can make decisions on marketing glossies
| alone and they have greater faith in the expertise of the producer than
| they have in their own abilities.   I believe that Brad has noted 
| gunsmith opposition to interchangable parts when introduced, but that it
| was the soldiers in the field and other consumers who preferred them.

Just a note to point out that the article Scott is referring to will appear
in the next (November) issue of IEEE Software, "Planning the Software
Industrial Revolution". A shorter (butchered...thanx, Byte...never will
you publish for me again) version appeared in this month's Byte magazine,
titled "There *is* a silver bullet".
-- 

Brad Cox; cox@stepstone.com; CI$ 71230,647; 203 426 1875
The Stepstone Corporation; 75 Glen Road; Sandy Hook CT 06482

basti@orthogo.UUCP (Sebastian Wangnick) (10/30/90)

Hello,

let me put some ideas of mine into this discussion.
IMHO, engineering began only 80 years ago, when industrial production
was reorganized so as to separate the planning of work from its
execution (Taylor, Ford et al.).

IMHO, it is a legitimate approach to compare the organization of software
making, and its evolution, to the evolution of industrial production
as a whole.  Doing so reveals that we have reached the level of
*Manufaktur* (roughly, the manufactory stage), where previously
autonomous workmen come together to share resources and ultimately
to divide the making of their goods into little pieces of work
that can then be executed by not-so-skilled workers.  Somewhere within
this process, of course, the opportunity to become an independent workman
again is lost to specialization.

Now (and again IMHO), the very term "software engineering"
points in the big direction: towards industrial software production.
Many have tried to prove that the separation of planning from the
execution of work is not all that economical in terms of productivity
and profit, but is mostly motivated by the need to control the workers.
This is true of software making as well.  Don't forget that software
is becoming an indispensable building block of highly industrialized
societies.

That leads me to another aspect of this discussion.
I often ask myself whether the fundamental organization of exchange
in all those societies (money, stemming from the ancient Mediterraneans)
will be accompanied, and maybe ultimately replaced, by the exchange of
information.  The first duty of the state was to guarantee the laws of
the market.  Extending this duty to information exchange, we might find
that the laws of the information-exchange market (for example, all those
restricted-rights legends in the headers of all those program sources
on the net) will have to be guaranteed by the state, too.  But doesn't
this lead us to an illiberal police state where every information
exchange must be supervised by the state?

Sebastian Wangnick (basti@orthogo.uucp)