[comp.software-eng] Soft-Eng Digest V3 #16

MDAY@XX.LCS.MIT.EDU (Moderator, Mark S. Day) (11/07/87)

Soft-Eng Digest             Fri,  6 Nov 87       Volume 3 : Issue  16 

Today's Topics:
                     Software Technology (8 msgs)
                HICSS-22: Call for Papers and Referees
                 Neural Networks Applied Optics Issue
----------------------------------------------------------------------

Date: 1 Nov 87 18:57:13 GMT
From: amdcad!cae780!leadsv!esl!ssh@ucbvax.Berkeley.EDU  (Sam)
Subject: Software Technology

Let's look at this cost issue.  Hardware gets faster and cheaper by
orders of magnitude.  Software productivity cannot keep up; any
improvements are measured in percentages, not in orders of magnitude.
Therefore we hear the argument "let hardware do the job... it's not
worth burning the money to improve the code for size or
performance...".

I consider this socially irresponsible.  Rather than company XYZ
paying the extra $YYY to properly structure the code, they rely on the
ZZZ thousands of consumers to pay extra for disk storage, power, time,
etc. to store, feed, and wait for this software.  This is an order of
magnitude not taken into account in rebuttals to the original
observation.

Do customers understand why processors with supposedly 10-30 times
more power than their old machines show no more than incremental
improvements in functionality and performance, yet take roughly 10
times the disk storage?  (Yes, I and others *can* cite examples of
this.)

Let's reserve the productivity / portability argument for those few
one-of-a-kind cases such as custom-designed software (e.g. military /
government contracts), but let's not get carried away by excusing the
laziness of the commercial software market.

Granted, there's a middle ground; it just isn't slanted as far in
favor of software sloth as current practice suggests.
					-- Sam Hahn

------------------------------

Date: 2 Nov 87 17:08:47 GMT
From: necntc!linus!philabs!pwa-b!mmintl!franka@ames.arpa  (Frank Adams)
Subject: Software Technology

I think one point is worth making here.  There are really three areas
of technology involved, not two:

1) Basic hardware - how small can you make your components, how close
together can you pack them, how fast do they operate?

2) Hardware design - how effectively do you put together your components?

3) Software - how well do you use the resulting system?

It is the first of these, and only the first, which has seen orders of
magnitude improvements.  I think both hardware design and software design
have made really quite impressive advances.  In any other context, they
would be seen as areas of outstanding development.

But the basic hardware has been doubling its efficiency every few years!

This is unprecedented in any area of technology.  There is no reason we
should expect hardware and software design to advance at the same pace, just
because they happen to be dealing with that technology.  Do you expect your
car to be twice as good, at half the price, as it was five years ago, just
because computers are?
-- 

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate          52 Oakland Ave North         E. Hartford, CT 06108

------------------------------

Date: 2 Nov 87 10:10:10 GMT
From: chris@mimsy.umd.edu  (Chris Torek)
Subject: Software Technology 

[...]

[The person complaining about the C compiler reserving unused registers]
appears to be using the Portable C Compiler.  The compiler is more
than a decade old, and is not bad for what it was, which most
certainly does not include optimisation.  Better compilers are
available.  DEC sells their VMS C compiler for Ultrix; Tartan
Labs sells an optimising PCC; GNU CC optimises.  The last is even
free.
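
(An illustration, not from the original post, of what the difference
means in practice -- the function here is made up: under pcc, only the
variables you explicitly declare "register" end up in registers, so
programmers hand-tune declarations like these; an optimising compiler
such as GNU CC does its own register allocation and treats the keyword
as little more than a hint.)

	/* Hypothetical example: summing an array, old-style C.      */
	/* Under pcc the explicit "register" declarations are what   */
	/* keep a, n, total, and i out of memory on every iteration; */
	/* an optimising compiler would allocate registers itself.   */
	long
	sum(a, n)
		register long *a;
		register int n;
	{
		register long total = 0;
		register int i;

		for (i = 0; i < n; i++)
			total += a[i];
		return (total);
	}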

[...]

Who cares how lazy the market is?  The idea behind competition is
that if someone *can* do better, someone *will*.  Perhaps those
who have faster, smaller software (that is nonetheless later to
market) have failed in the competition because that is not what
the customers really want.  Or perhaps not.  What does it matter?
Instead of railing against the inefficiency of some software, ask
*and wait* for something more efficient, or write it yourself.  If
everything out there is worse than it might be, improve upon it
and sell it (or give it away).

I think that everything *is* worse than it might be, but the cost
of improving it is more than people wish to spend.  That this is
not currently true of hardware is an interesting fact, but not a
reason to claim that software technology sucks.  It does make a
good point for comparison, however.  Why *are* we willing to spend
millions of dollars on hardware improvements, but not on software
improvements?  I suspect the answer is this:  an improvement in
hardware affects every bit of software that runs on that hardware.
Rewriting one program to make it more efficient affects only that
one program.  There is at least one place this is not true, namely
inside compilers.  And surprise! people are spending quite a bit
on compiler development too.

(It all comes back to counting the beans.  Some beans are multipliers,
and trimming those is more significant than trimming beans in some of
the addends.  In fact, it reminds me of tuning programs: of optimising
the 90% of the code that takes only 10% of the time versus the 10%
that takes 90% of the time.)
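
(A back-of-the-envelope illustration of that 90/10 point -- the numbers
are invented, but the arithmetic is the familiar one: overall speedup =
1 / (unimproved fraction + improved fraction / local speedup).)

	#include <stdio.h>

	/* Suppose a program spends 90% of its time in 10% of its    */
	/* code ("hot") and 10% of its time in the rest ("cold").    */
	/* Doubling the speed of one part or the other gives very    */
	/* different overall results.                                 */
	int
	main()
	{
		double hot = 0.90, cold = 0.10;	/* fractions of run time */

		printf("double the cold 90%% of the code: %.2fx overall\n",
		    1.0 / (hot + cold / 2.0));		/* about 1.05x */
		printf("double the hot  10%% of the code: %.2fx overall\n",
		    1.0 / (cold + hot / 2.0));		/* about 1.82x */
		return 0;
	}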
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7690)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

------------------------------

Date: 3 Nov 87 06:53:16 GMT
From: cbosgd!clyde!burl!codas!usfvax2!pdn!alan@ucbvax.Berkeley.EDU  (Alan Lovejoy)
Subject: Software Technology

I detect a basic philosophical disagreement over the purpose of an HLL
in this discussion.  Is an HLL meant as an abstraction that provides a
standard, portable paradigm of computation, or should it be a set of
abstraction mechanisms, or should it be both?

An HLL which is just a set of abstraction mechanisms would be based upon
the machine language and programming model of some cpu, and would rely
upon the programmer to define his own abstractions, including
information hiding, type checking, operator/procedure overloading,
inheritance/subclassing, and parameterized data and process
abstractions.  Macro assemblers are a rather primitive example of this
type of language; Forth is another (slightly more advanced, but still
not a pure example).  The idea has a certain stunning simplicity, but
no one has ever actually implemented anything that really matches
the ideal.
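
(To make the contrast concrete -- this is a sketch invented for this
digest, not something from Alan's posting: when the language supplies
only mechanisms, things like information hiding are built by the
programmer.  Even plain C shows a faint version of the idea, using an
opaque type whose representation is known only to its own module.  The
names below are made up.)

	/* stack.h -- all that clients ever see: an opaque handle.   */
	typedef struct stack *Stack;
	Stack	stk_new();		/* create an empty stack      */
	void	stk_push();		/* stk_push(s, value)         */
	int	stk_pop();		/* stk_pop(s)                 */

	/* stack.c -- the representation lives here and nowhere      */
	/* else, so it can change (array today, linked list          */
	/* tomorrow) without touching any client code.               */
	#include <stdlib.h>
	struct stack { int items[100]; int top; };

	Stack stk_new()
	{
		Stack s = (Stack) malloc(sizeof *s);
		s->top = 0;
		return s;
	}
	void stk_push(s, v) Stack s; int v;
	{
		s->items[s->top++] = v;
	}
	int stk_pop(s) Stack s;
	{
		return s->items[--s->top];
	}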

Most HLL's present the programmer with an invariant, take-it-or-leave-it
computational paradigm.  The problem with this is that the paradigm may
not interact well with the underlying hardware architecture (or with 
the application--but that's a different issue).  No one has demonstrated
that there is some 'optimum' computational paradigm, and so each
hardware and language designer happily does his own thing.  One of the
major arguments for RISC is that the level of abstraction of the computational 
paradigms of most of the popular languages is too near--yet too
different from--the level of abstraction of most hardware computational
paradigms.  By 'reducing' the abstraction level ('complexity') of the
hardware paradigm, the job of producing an efficient translation from
the software paradigm into the hardware paradigm becomes easier, because
the hardware instructions make fewer assumptions (do less) that conflict
with the semantics of the software instructions.  The term 'reduced' in
RISC should be understood as 'more generalized'.  The more microcode
executed for each machine instruction, the more likely it is that the
user didn't *need* all those microcode instructions to accomplish his
task.  Imagine being forced to program in assembly language with only
push, pop and JSR instructions.  Why call a subroutine when all you
wanted to do was increment an integer by one?

Of course, the problem can also be tackled by raising the abstraction
level of HLL computational paradigms.  This leads to the problem
of the oversized screw-driver:  it can be indispensable for some jobs
but fail miserably for others.  What's needed is a tool chest which
contains screw-drivers of every size (a language or languages that
can operate on a wide range of abstraction levels, from bare hardware
to referential transparency, polymorphism, data hiding and lambda
functions).

--alan@pdn

------------------------------

Date: 3 Nov 87 13:31:12 GMT
From: cbosgd!clyde!burl!codas!usfvax2!pdn!pdnbah!reggie@ucbvax.Berkeley.EDU  (George Leach)
Subject: Software Technology

      Unfortunately, more often than not a single tool is chosen with
which all work is performed!  Ever try to install a tape deck in your
car with a hammer?  The appropriate tool should be applied to the
appropriate job.  Different HLLs can be used for different pieces of 
a software system (provided they mesh well at the interfaces) in order
to take advantage of features to make life easier.  But it may even be
taken a step further.  If you find an organization that attempts to
prototype a system before the actual implementation work is undertaken,
many times the implementation language and the prototype language are
one and the same!  There are VHLLs (very high level languages), such
as SETL, which are intended to take the level of abstraction to a
point where something may be quickly prototyped, tested, refined, and
so on.  There are also VHLLs that are used as specification languages
in some circles.
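
(A small, invented sketch of the "mesh well at the interfaces" point:
if the rest of the system only ever sees a narrow, well-defined
interface, the piece behind it can be a quick prototype today -- even
one written in, or generated from, a very high level language -- and a
hand-tuned implementation tomorrow, without the callers noticing.  The
names here are hypothetical.)

	/* match.h -- the one interface the rest of the system sees. */
	/* Behind it may sit a throwaway prototype or a tuned        */
	/* production version; callers compile against this header   */
	/* either way.                                                */

	typedef struct match_state *Matcher;

	Matcher	match_compile();	/* match_compile(pattern)     */
	int	match_run();		/* match_run(m, text) -> 0/1  */
	void	match_free();		/* match_free(m)              */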

Furthermore, we can learn about tool design from the ergonomics world.
Biomechanical analysis is used to influence hand tool design, so that
the tools are easier to use and reduce fatigue, making the worker more
productive.  You can find some examples of this in software tools --
EMACS, for example.  However, we still have a long way to go.  Perhaps
AI will help out here.


George W. Leach					Paradyne Corporation
{gatech,codas,ucf-cs}!usfvax2!pdn!reggie	Mail stop LF-207
Phone: (813) 530-2376				P.O. Box 2826
						Largo, FL  34649-2826

------------------------------

Date: Wed, 04 Nov 1987 17:50 PST
From: PAAAAAR%CALSTATE.BITNET@wiscvm.wisc.edu
Subject: Software Technology

If this sounds a little bitter, it is because I wrote my first program
in the 50's, I keep seeing the same arguments come round all the time,
and yet nobody will warrant their software as having any value.

Why is it that when we discuss software engineering we fall into a
discussion of Languages?
Can anyone name an engineering discipline that does not use tools to
make things and graphics plus more or less math to design them?
Can anyone name a single Language construct that has NOT been
considered HARMFUL by some experienced and prize-winning Computer
Scientist?
At the last count there were about 300 high level languages - do we
need another one?
Is there any evidence to show that using the right language implies
that I solve the right problem?
Is it not true that all languages that are powerful enough (= Turing
machines) are equally powerful, and all capable of producing programs
in which a small change leads to an endless loop?  (A small fragment
below makes the point.)
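
(The fragment, invented for illustration -- change one declaration and
the loop never terminates, because an unsigned i can never go below
zero; it just wraps around.)

	#include <stdio.h>

	int
	main()
	{
		int i;	/* change this to "unsigned int i;" and the   */
			/* loop never ends: i >= 0 is always true for  */
			/* an unsigned, so i wraps around at zero      */
			/* instead of going negative                   */

		for (i = 10; i >= 0; i--)
			printf("%d\n", i);
		return 0;
	}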

Herman Rubin (purdue) asked for an assembler with a sensible syntax.
Fraser Duncan made one.
 - It ran on three micro CPUs with similar syntax and has been
 described (complete with listings) in a book:
 "Micro-processor Programming and Software Development",
 Prentice Hall International, 1979.
 It is just work to build your own.

Erland Sommarskog asked about range checking in C. It must be explicit in the
 code if done at all.
 It took us three months to remove the bugs from a shell on our Unix system
 that were solely due to this cause - most of them closed the system down...
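
(For anyone who has not been bitten yet -- a made-up fragment of the
kind of code involved, not the actual shell source: C checks nothing
for you, so every range check is one you must remember to write
yourself.)

	#include <stdio.h>
	#include <stdlib.h>

	#define NARGS	20

	char	*args[NARGS];
	int	nargs = 0;

	/* Without the explicit test, the 21st argument silently     */
	/* scribbles past the end of the array -- exactly the kind   */
	/* of bug that takes months to find and can bring a system   */
	/* down.                                                      */
	void
	addarg(s)
		char *s;
	{
		if (nargs >= NARGS) {
			fprintf(stderr, "too many arguments\n");
			exit(1);
		}
		args[nargs++] = s;
	}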

Some discussion asked for extensible languages.  The idea originated in
 the sixties (ALGOL A, B, C).  Every attempt has failed to stop the demands
 for more extensibility...

There was some discussion of speed and 'efficiency', with some people noting
 that a fast program that fails is not necessarily a good target!
 Glenn Emelko (...ncoast!crds) asks for a 200 times improvement
 in power - I want to see any program that I can not crash, PERIOD.

I am starting to flame.... so let me state my thesis

         PROGRAMMING LANGUAGES ARE PART OF THE PROBLEM,
         NOT PART OF THE SOLUTION.

Dick Botting(doc-dick), Comp Sci, Cal State, San Ber'do
PAAAAAR%CALSTATE.BITNET@WISCVM.WISC.EDU(until December 1987)
5500, State University Pkwy, San Bernardino, CA 92407
(714) 887-7368 (voice) (714)887-7365 (modem: login as guest)
Copywrite - Give credit where credit is due and send 50% of your profit.
Disclaimer - This message may or may not do anything to anybody
  if they use it any way whatsoever
  and I and my employees are not responsible for the consequences.

------------------------------

Date: 6 Nov 87 18:59:35 GMT
From: uwspan!root@unix.macc.wisc.edu  (John Plocher)
Subject: Software Technology

I would like to recommend an article in the November 1987 issue of Unix Review
for everyone's reading.  The article:

			No Silver Bullets
			      by
		      Frederick P. Brooks, Jr
		      Kenan Professor of CS at
			 UNC-Chapel Hill

"The frustrations of software development are often nightmarish.  But search
as we might, we'll never come upon a single development - be it in technology
or in management - that of itself will provide an order-of-magnitude
productivity improvement"

-- 
Email to unix-at-request@uwspan with questions about the newsgroup unix-at,
otherwise mail to unix-at@uwspan with a Subject containing one of:
	    386 286 Bug Source Merge or "Send Buglist"
(Bangpath: rutgers!uwvax!uwspan!unix-at & rutgers!uwvax!uwspan!unix-at-request)

------------------------------

Date: 6 Nov 87 23:44:22 GMT
From: pioneer!eugene@ames.arpa  (Eugene Miya N.)
Subject: Software Technology

	[Re:]	No Silver Bullets
		      by
	      Frederick P. Brooks, Jr

I enjoyed this article, also published in IEEE Computer (vol. 20, no. 4,
April 1987), because FB (project head of another well-known operating system)
was fervently anti-Unix a few years ago.  This article shows that he has
come around a little.

From the Rock of Ages Home for Retired Hackers:

--eugene miya, NASA Ames Research Center, eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {hplabs,hao,ihnp4,decwrl,allegra,tektronix}!ames!aurora!eugene

------------------------------

Date: 5 November 1987, 17:08:51 EST
From: Bruce Shriver <SHRIVER@ibm.com>
Subject: HICSS-22: Call for Papers and Referees

=====================================================================


      HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES

      HICSS-22 SOFTWARE TRACK INTENT TO PARTICIPATE FORM

             Twenty-Second Annual HICSS Conference
                     Jan. 3-6, 1989, Hawaii

 GENERAL INFORMATION
 HICSS provides a forum for the interchange of ideas, research
 results, development activities, and applications among academicians
 and practitioners in the information, computing, and system sciences.
 HICSS is sponsored by the University of Hawaii in cooperation with
 the ACM, the IEEE Computer Society, and the Pacific Research
 Institute for Information Systems and Management (PRIISM).  HICSS-22
 will consist of tutorials, open forums, task forces, a distinguished
 lecturer series, and the presentation of accepted manuscripts which
 emphasize research and development activities in software technology,
 architecture, decision support and knowledge-based systems, emerging
 technologies and advanced applications.  The best papers, selected by
 the program committee in each of these areas, are given an award at
 the meeting.  There is a high degree of interaction and discussion
 among the conference participants as the meeting is conducted in a
 workshop-like setting.

 INSTRUCTIONS FOR SUBMITTING PAPERS
 Manuscripts should be 22-26 typewritten, double-spaced pages in
 length.  Please do not send submissions that are significantly
 shorter or longer than this.  Papers must not have been previously
 presented or published, nor currently submitted for journal
 publication.  Each manuscript will be put through a rigorous
 refereeing process.  Manuscripts should have a title page that
 includes the title of the paper, full name of its author(s),
 affiliation(s), complete physical and electronic address(es),
 telephone number(s), and a 300-word abstract of the paper.

 DEADLINES FOR AUTHORS
 o   A 300-word abstract is due by March 1, 1988
 o   Feedback to author concerning abstract by March 31, 1988
 o   Six copies of the manuscript are due by June 6, 1988.
 o   Notification of accepted papers by September 1, 1988.
 o   Accepted manuscripts,  camera-ready, are due  by October
     3, 1988.

 DEADLINES FOR MINI-TRACK, SESSION, AND TASK-FORCE COORDINATORS
 If you would like to coordinate a mini-track, session, or task force,
 you must submit for consideration, before December 15, 1987, a 3-page
 abstract in which you describe the topic you are proposing, its
 timeliness and importance, and its treatment in recent conferences
 and workshops.

 PLEASE COMPLETE THE FOLLOWING FORM AND RETURN IT TO:
 Bruce D. Shriver
 HICSS-22 Conference Co-Chairman
   and Software Technology Track Coordinator
 IBM T. J. Watson Research Center
 P.O. Box 704
 Yorktown Heights, NY 10598
 (914) 789-7626
 CSnet: shriver@ibm.com
 Bitnet: shriver@yktvmh

 Name      ______________________________________________________
 Address:  ______________________________________________________
 City:     ______________________________________________________
 Phone No. ______________________________________________________
 Electronic Mail Address: _______________________________________

 The four columns below mean, from left to right:
   1) I would like to coordinate a mini-track or session in:
   2) I would like to coordinate a task-force in:
   3) I will submit a paper in:
   4) I will referee papers in:

 ___  ___  ___ ___  Algorithms, Their Analysis and Pragmatics
 ___  ___  ___ ___  Alternative Language and Programming Paradigms
 ___  ___  ___ ___  Applying AI Technology to Software Engineering
 ___  ___  ___ ___  Communication & Protocol Software Issues
 ___  ___  ___ ___  Database Formalisms, Software and Systems
 ___  ___  ___ ___  Designing & Prototyping Complex Systems
 ___  ___  ___ ___  Distributed Software Systems
 ___  ___  ___ ___  Electronic Publishing & Authoring Systems
 ___  ___  ___ ___  Language Design & Language Implementation Technology
 ___  ___  ___ ___  Models of Program and System Behavior
 ___  ___  ___ ___  Programming Supercomputers & Massively Parallel Systems
 ___  ___  ___ ___  Reusability in Design & Implementation
 ___  ___  ___ ___  Software Design Tools/Techniques/Environments
 ___  ___  ___ ___  Software Related Social and Legal Issues
 ___  ___  ___ ___  Testing, Verification, & Validation of Software
 ___  ___  ___ ___  User Interfaces
 ___  ___  ___ ___  Workstation Operating Systems and Environments
 ___  ___  ___ ___  Other ______________________________

------------------------------

Date: Wed, 4 Nov 87 18:12 EDT
From: <MIKE%BUCASA.BITNET@MITVMA.MIT.EDU>
Subject: Neural Networks Applied Optics Issue

             NEURAL NETWORKS:  A special issue of Applied Optics
                     December 1, 1987 (vol. 26, no. 23)
            Guest editors: Gail A. Carpenter and Stephen Grossberg


     The Applied Optics special issue on neural networks brings together a
selection of research articles concerning both biological models of brain and
behavior and technological models for implementation in government and
industrial applications.  Many of the articles analyze problems in pattern
recognition and image processing, notably those classes of problems for which
adaptive, massively parallel, fault-tolerant solutions are needed, and for
which neural networks provide solutions in the form of architectures that will
run in real time when realized in hardware.

     The articles are grouped into several topics: adaptive pattern recognition
models, image processing models, robotics models, optical implementations,
electronic implementations, and opto-electronic implementations. Each type of
neural network model is typically specialized to solve a variety of problems.
Models of back propagation, simulated annealing, competitive learning, adaptive
resonance, and associative map formation are found in a number of articles.
Each of the articles may thus be appreciated on several levels, from the
development of general modeling ideas, through the mathematical and
computational analysis of specialized model types, to the detailed explanation
of biological data or the fabrication of hardware. The table of contents
follows.

     Single copies of this special issue are available from the Optical Society
of America, at $18/copy. Orders may be placed by returning the form below, or
by calling (202) 223-8130 (ask for Jeana Macleod).
-------------------------------------------------------------------------------

 Please send ____ copies of the Applied Optics special issue on neural networks

 (vol. 26, no. 23) to:
NAME: __________________________________________________

ADDRESS:  _______________________________________________

_______________________________________________

_______________________________________________


TELEPHONE(S):___________________________________________

 TOTAL COST: $ ____________ $18/copy, including domestic or foreign surface
                                postage (+ $10/copy for air mail outside U.S.)

 PAYMENT: _____ Check enclosed (payable to Optical Society of America, or OSA)

        or _____ Credit card: American Express ____ VISA ____ MasterCard ____
                             Account number  __________________________________
                             Expiration date _________________________________

                             Signature (required)
                                        ____________________________

 SEND TO: Optical Society of America
           Publications Department
           1816 Jefferson Place NW
           Washington, DC 20036 USA

           Or call: (202) 223-8130 (Jeana Macleod) for credit card orders
_______________________________________________________________________________


             NEURAL NETWORKS: A special issue of Applied Optics
                     December 1, 1987 (vol. 26, no. 23)
            Guest editors: Gail A. Carpenter and Stephen Grossberg



                            TABLE OF CONTENTS



ADAPTIVE PATTERN RECOGNITION MODELS

   Teuvo Kohonen.  Adaptive, associative, and self-organizing functions in
   neural computing

   Gail A. Carpenter and Stephen Grossberg.  ART 2: Self-organization of
   stable category recognition codes for analog input patterns

   Jean-Paul Banquet and Stephen Grossberg.  Probing cognitive processes
   through the structure of event-related potentials during learning: An
   experimental and theoretical analysis

   Bart Kosko.  Adaptive bidirectional associative memories

   T.W. Ryan, C.L. Winter, and C.J. Turner.  Dynamic control of an artificial
   neural system: The Property Inheritance Network

   C. Lee Giles and Tom Maxwell.  Learning and generalization in high order
   neural networks: An overview

   Robert Hecht-Nielsen.  Counterpropagation networks

   Kunihiko Fukushima.  A neural network model for selective attention in
   visual pattern recognition and associative recall



IMAGE PROCESSING MODELS

   Michael H. Brill, Doreen W. Bergeron, and William W. Stoner.  Retinal
   model with adaptive contrast sensitivity and resolution

   Daniel Kersten, Alice J. O'Toole, Margaret E. Sereno, David C. Knill, and
   James A. Anderson.  Associative learning of scene parameters from images



ROBOTICS MODELS

   Jacob Barhen, N. Toomarian, and V. Protopopescu.  Optimization of the
   computational load of a hypercube supercomputer onboard a mobile robot

   Stephen Grossberg and Daniel S. Levine.  Neural dynamics of attentionally
   modulated Pavlovian conditioning: Blocking, inter-stimulus interval, and
   secondary reinforcement



OPTICAL IMPLEMENTATIONS

   Dana Z. Anderson and Diana M. Lininger.  Dynamic optical interconnects:
   Volume holograms and optical two-port operators

   Arthur D. Fisher, W.L. Lippincott, and John N. Lee.  Optical implementations
   of associative networks with versatile adaptive learning capabilities

   Clark C. Guest and Robert Te Kolste.  Designs and devices for optical
   bidirectional associative memories

   Kelvin Wagner and Demetri Psaltis.  Multilayer optical learning networks



ELECTRONIC IMPLEMENTATIONS

   Larry D. Jackel, Hans P. Graf, and R.E. Howard.  Electronic neural-network
   chips

   Larry D. Jackel, R.E. Howard, John S. Denker, W. Hubbard, and S.A. Solla.
   Building a hierarchy with neural networks: An example - image vector
   quantization

   A.P. Thakoor, A. Moopenn, John Lambe, and Satish K. Khanna.  Electronic
   hardware implementations of neural networks



OPTO-ELECTRONIC IMPLEMENTATIONS

   Nabil H. Farhat.  Opto-electronic analogs of self-programming neural nets:
   Architectures and methodologies for implementing fast stochastic learning
   by simulated annealing

   Yuri Owechko.  Opto-electronic resonator neural networks

------------------------------

End of Soft-Eng Digest
******************************

-------