[comp.software-eng] Soft-Eng Digest V4 #3

MDAY@XX.LCS.MIT.EDU (Moderator, Mark S. Day) (01/21/88)

Soft-Eng Digest             Thu, 21 Jan 88       Volume 4 : Issue  3 

Today's Topics:
                          Complexity Metrics
                             Top Ten Time
                  EMACS' learnability and usability 
                   Offices versus Cubicles (3 msgs)
     Examples: Optimizing Complexity Reduces Code Quality (2 msgs)
     Software Complexity Measures Will Never Be Accurate (3 msgs)
           Proposal to Build Neural Network Simulator Tools
----------------------------------------------------------------------

Date: Mon, 11 Jan 88 16:53 GMT
From: mcvax!prg.oxford.ac.uk!tom@uunet.UU.NET
Subject: Complexity Metrics

In an item in digest vol 4 iss 2, Ken Magel <ndsuvax!ncmagel@uunet.uu.net>
asks what sort of proof, evidence, or demonstration would be needed before
people would consider using metrics.
My answer's quite simple: I have a number of post-hoc metrics [time to test,
effort to test, bug rate after testing, bug rate after release, etc.]. All I
want of a metric is that it correlates at least as well with these post-hoc
metrics as does module size (i.e., number of lines of code). So far, I'm
unaware of any metric that achieves this for the sort of complex operating
system code that my outfit deals with; so the best metric is a simple line
count.
The remarks from the ex-Raytheon guy in the same digest rang a bell; remember
eliminating goto? It's easy to end up with a pile of nested ifs and a big
enough collection of flying ducks (end-ifs) to paper a room. Sure, goto is
bad if misused, but not using it where it's appropriate is worse. Metrics
are bad if misused too - but has anyone found a case where not using them is
worse?
I know one metric that's far better than line count, but it can't be
adequately formalised so it ain't scientific (but it is engineering?): get
half a dozen senior guys to look over a module and rate it on a scale of
1 to 10, then take the second most pessimistic rating.
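
For what it's worth, here is a toy sketch of that rule in C (purely
illustrative, and it assumes low scores are the pessimistic ones): sort the
panel's scores and take the second-lowest, so a single outlier can neither
sink nor rescue a module.

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    /* second most pessimistic of n ratings on a 1-10 scale (n >= 2),
       taking low scores as pessimistic */
    int panel_rating(int ratings[], int n)
    {
        qsort(ratings, n, sizeof ratings[0], cmp_int);
        return ratings[1];
    }

    int main(void)
    {
        int r[6] = { 7, 4, 9, 6, 3, 8 };    /* six senior reviewers */
        printf("module rating: %d\n", panel_rating(r, 6));  /* prints 4 */
        return 0;
    }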

------------------------------

Date: Mon, 11 Jan 88 10:37:48 PST
From: Eugene Miya <eugene@ames-nas.arpa>
Subject: Top Ten Time

Time again to assess the "top ten" texts of software
engineering.  In the past, I've taken two different tacks to
collect the information: blind solicitation and posting an
initial list.  This time, I'll take to editing the list of
specifically biased articles, etc.  I'm willing to iterate.
You can either send suggested texts now, or I will post
suggestions one at a time for yea/nay votes.

--eugene miya
  eugene@ames-nas.arpa or the other usual paths.

------------------------------

Date: Mon, 11 Jan 88 11:25:51 -0500
From: dan@WILMA.BBN.COM
Subject: EMACS' learnability and usability 

Andy Freeman <ANDY@SUSHI.STANFORD.EDU> writes:

> EMACS' ~20 basic movement and text manipulation commands are fairly
> easy to remember...

Having the basic editing commands be "fairly easy to remember" is not
good enough for something that was held up as an example of an
excellent human interface (which is what prompted my message; the fact
that my reply was delayed three weeks makes it look a bit out of
context).  If the 20 basic editing commands
were better organized, they'd be trivial to remember.

The main reason I'm picking on EMACS (GNU EMACS in particular, since
it's the only one I know, but I understand others are similar) is that
it would have required virtually no effort to solve these
problems; what was missing was the attitude that these *are* problems,
and that this program's user interface could be made better.  For
example, a recommended technique for dealing with a series of commands
which do similar things on different kinds of objects is to create a
matrix of operations vs.  objects and have the obvious command
combinations.  (Vi does this a little.)  When I first started using
GNU EMACS, I was often trying to remember some of the region-oriented
and word-oriented commands.  No matter how many times I looked them
up, they didn't stick.  I moved the EMACS word operations to all begin
with ^W, simultaneously making them more mnemonic and removing an
annoying single-keystroke destroy-the-world operation (kill-region).
I also moved the region operations to all begin with ^R (kill-region
became ^R^K).  Suddenly I had no problem at all remembering them!
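
To make the matrix idea concrete, here is a small illustrative sketch in C
(my actual change was a rebinding inside GNU EMACS, which would be done in
Emacs Lisp; the bindings named in the comments are hypothetical).  Commands
are the cross product of a small set of operations and a small set of
objects, so a user who knows one row and one column can predict the whole
table.

    #include <stdio.h>

    enum op  { OP_KILL, OP_COPY, N_OPS };
    enum obj { OBJ_WORD, OBJ_REGION, N_OBJS };

    typedef void (*command_fn)(void);

    static void kill_word(void)   { puts("kill word"); }
    static void kill_region(void) { puts("kill region"); }
    static void copy_word(void)   { puts("copy word"); }
    static void copy_region(void) { puts("copy region"); }

    /* the matrix itself: every (operation, object) cell is filled in,
       so there are no gaps for the user to memorize around */
    static command_fn commands[N_OPS][N_OBJS] = {
        { kill_word, kill_region },   /* e.g. ^W k and ^R k */
        { copy_word, copy_region },   /* e.g. ^W c and ^R c */
    };

    int main(void)
    {
        commands[OP_KILL][OBJ_WORD]();    /* prints "kill word" */
        return 0;
    }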

Unfortunately, one reaction to statements like this is "well, *I*
never had any trouble" and a smug feeling.  That reaction is part of
the problem.  Just because some people picked up EMACS in thirty
minutes and never had any trouble remembering the keystrokes for
transpose-paragraph doesn't mean EMACS is wonderful; it means those
people have an uncanny ability to memorize a bunch of only vaguely
related facts.  This is an ability programmers tend to have/pick up,
which distorts their ideas on how to make a user interface easy to
use.  Perhaps the most important idea here is that the best way for
any tool to work is not necessarily the way programmers think it
should work.

	Dan Franklin

------------------------------

Date: 18 Jan 88 14:05:08 GMT
From: mtune!codas!usfvax2!pdn!reggie@rutgers.edu  (George W. Leach)
Subject: Offices versus Cubicles

       Is anyone aware of any empirical studies or experiments to determine
the impact upon programmer productivity (or productivity in any related
field) of providing offices with walls and doors as opposed to cubicles?


       I have worked in both types of environments, and I feel that an office
by far provides the better environment, reducing the noise from other
people's phones, typing, conversations, etc.  However, if asked to gauge the
relative value in terms of productivity, I would not be able to come up with
a value for an office over a cubicle.

Thank You,


-- 
George W. Leach					Paradyne Corporation
{gatech,rutgers,attmail}!codas!pdn!reggie	Mail stop LF-207
Phone: (813) 530-2376				P.O. Box 2826
						Largo, FL  34649-2826
------------------------------

Date: 19 Jan 88 18:02:13 GMT
From: beach.cis.ufl.edu!hbh@bikini.cis.ufl.edu  (Hillard Holbrook)
Subject: Offices versus Cubicles

Two references that may help:
  DeMarco & Lister, "Programmer Performance and the Effects of the Workplace,"
     Proceedings of the 8th International Conference on Software Engineering,
     1985
  McCue, Gerald, "IBM's Santa Teresa Laboratory - Architectural Design for
     Program Development," IBM Systems Journal, Vol. 17, No. 1, 1978

- a personal note: fooey on cubicles!!!!

		Hilliard Holbrook hbh@beach.cis.ufl.edu

------------------------------

Date: 20 Jan 88 19:20:17 GMT
From: tikal!amc!pilchuck!rice@beaver.cs.washington.edu  (Ken Rice)
Subject: Offices versus Cubicles

I just finished reading a terrific new book:
	
	PEOPLEWARE: Productive Projects and Teams
	by Tom DeMarco & Timothy Lister
	Dorset House Publishing Co., Inc.

This book has quite a bit to say about the increased productivity that
can result from private, quiet offices. I heartily recommend this book
to anyone who's interested in the sociological aspects of programming
(or other) projects.

Ken Rice

------------------------------

Date: Mon, 11 Jan 88 12:30:56 est
From: Bard Bloom <bard@THEORY.LCS.MIT.EDU>
Subject: Examples: Optimizing Complexity Reduces Code Quality

> original fragment:
> 
>    if (a == value1) then
>       y = target1;
>    else if (b == value2) then
>       y = target2;
>    else if (c == value3) then
>       y = target3;
>    else if (d == value4) then
>       y = target4;
>    else if (e == value5) then
>       y = target5;
>    else if (f == value6) then
>       y = target6;
>    end if;
> 
> transformed fragment:
> 
>    new_func((a == value1), target1);
>    new_func((b == value2), target2);
>    new_func((c == value3), target3);
>    new_func((d == value4), target4);
>    new_func((e == value5), target5);
>    new_func((f == value6), target6);
> 
> with new_func defined as:
> 
> new_func(relation_value, target)
> {
>    if (relation_value) then
>       y = target;
>    end if;
> }
> 
> ...
> Some may argue that the transformed code is better.  I argue, though, that it
> hides the logic, while the original code was quite clear and therefore easily
> maintained.
>

It's worse than that: the transformed code is different from the original
whenever the cases are not exclusive.  If all of the tests are true, then the
first fragment will end with y=target1 and the second one with y=target6.
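
A minimal C demonstration of the difference (my example; the values are made
up): when two cases overlap, the chained version keeps the first match and
the flattened version keeps the last.

    #include <stdio.h>

    static int y;

    /* the transformed helper: assigns whenever its test is true */
    static void new_func(int relation_value, int target)
    {
        if (relation_value)
            y = target;
    }

    int main(void)
    {
        int a = 1, b = 1;             /* both tests true */

        /* original chain: first match wins */
        if (a == 1)      y = 10;
        else if (b == 1) y = 20;
        printf("chained:   y = %d\n", y);   /* prints 10 */

        /* transformed version: last true test wins */
        new_func(a == 1, 10);
        new_func(b == 1, 20);
        printf("flattened: y = %d\n", y);   /* prints 20 */

        return 0;
    }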

-- Bard

------------------------------

Date: Tue, 19 Jan 88 19:41:00 PST
From: PAAAAAR%CALSTATE.BITNET@MITVMA.MIT.EDU
Subject: Examples: Optimizing Complexity Reduces Code Quality

apollo!marc@beaver.cs.washington.edu  (Marc Gibian) quotes this fragment
>
>   if (a == value1) then
>      y = target1;

    [... see previous message -- MSD ]

>   end if;
>
Marc gives a less "complex" but harder-to-understand solution.

Some lateral thinking leads to an anomaly.  In ALGOL60 I wrote:
 y := if a = value1 then target1
   else if b = value2 then target2
   else if c = value3 then target3
   else if d = value4 then target4
   else if e = value5 then target5
   else if f = value6 then target6
   else y;  comment ALGOL60 conditional expressions need a final else;

In C I've used:
 y = ( a == value1 ? target1
   : b == value2 ? target2
   : c == value3 ? target3
   : d == value4 ? target4
   : e == value5 ? target5
   : f == value6 ? target6
   : y );   /* y unchanged if no test is true, as in the original chain */
What happens to the complexity if I rewrite it like this:

 y:= t[a==value1]*target1 + t[b==value2]*target2 + ...;

       where t[true]=1 and t[false]=0.

        *Bit Chasers can recode the above using AND & OR.
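
In C the lookup table isn't even needed, since a relational expression
already yields 0 or 1.  A self-contained sketch (with made-up values) that
also shows the anomaly: overlapping true cases get summed, and if no case is
true, y silently becomes 0.

    #include <stdio.h>

    int main(void)
    {
        int a = 2, b = 3;                    /* both tests true below */
        int value1 = 2, value2 = 3;
        int target1 = 10, target2 = 20;
        int y;

        /* branch-free "assignment": zero decisions for a metric to count,
           but the semantics quietly changed from the if-chain */
        y = (a == value1) * target1
          + (b == value2) * target2;

        printf("y = %d\n", y);               /* prints 30, not 10 */
        return 0;
    }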

What is the Cyclomatic Complexity of an Assignment?

Are there ANY programs that can't be "simplified" this way?

Dick Botting
PAAAAAR@CCS.CSUSCC.CALSTATE(doc-dick)        paaaaar@calstate.bitnet
            PAAAAAR%CALSTATE.BITNET@CUNYVM.CUNY.EDU
Dept Comp Sci., CSUSB, 5500 State Univ Pkway, San Bernardino CA 92407
voice:714-887-7368           modem:714-887-7365 -- Silicon Mountain
"where smog of LA is blown away, and the sun shines bright all the day"!

------------------------------

Date: 14 Jan 88 01:05:52 GMT
From: rochester!crowl@bbn.com  (Lawrence Crowl)
Subject: Software Complexity Measures Will Never Be Accurate

Algorithmic measures of algorithmic complexity will never be accurate.  The
problem is that a larger set of assumptions, or programmer state, allows
smaller programs.  Since algorithms are unlikely to be able to capture the set
of assumptions in a piece of code, the measures must rely on program size,
operator relationships, etc.  Smaller programs are more likely to have smaller
complexity values, regardless of the set of assumptions.  However, the set of
assumptions, or programmer state, required to understand a code fragment is
just what makes a piece of code difficult to understand.  Algorithmic
measures of complexity fail to capture the set of assumptions within a piece
of code, and are therefore inaccurate measures of actual complexity.
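
To make this concrete, a small and purely illustrative example: the terse
string copy below is smaller by any code-only measure, yet it leans on
unstated assumptions (a NUL terminator, room at the destination, the
value-of-an-assignment idiom) that the longer version spells out.  The
complexity lives in the programmer state the reader must carry, not in the
token count.

    #include <stddef.h>

    /* terse: small by every size-based measure */
    void copy_terse(char *d, const char *s)
    {
        while ((*d++ = *s++) != '\0')
            ;
    }

    /* explicit: larger, but the assumptions are visible */
    void copy_explicit(char *d, const char *s)
    {
        size_t i = 0;
        while (s[i] != '\0') {    /* source is NUL-terminated    */
            d[i] = s[i];          /* destination has room for it */
            i++;
        }
        d[i] = '\0';              /* copy the terminator too     */
    }
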
-- 
  Lawrence Crowl		716-275-9499	University of Rochester
		      crowl@cs.rochester.edu	Computer Science Department
...!{allegra,decvax,rutgers}!rochester!crowl	Rochester, New York,  14627

------------------------------

Date: 16 Jan 88 00:57:39 GMT
From: hao!umigw!steve@husc6.harvard.edu  (steve emmerson)
Subject: Software Complexity Measures Will Never Be Accurate

Interesting.  This dovetails nicely with another software-metric assertion:
namely, that software metrics are applicable only *within* an organization,
for the purpose of comparing one routine with another.  It could be that
each software development entity constructs, consciously or otherwise, a
standard set of assumptions.  That would enable valid *intra*-entity use of
software metrics, while hopelessly failing for inter-entity comparisons.

-- 
Steve Emmerson                     Inet: emmerson@miami.miami.edu [192.31.89.4]
SPAN: miami::emmerson (host 3.2)         emmerson%miami.span@star.stanford.edu
UUCP: ...!hao!umigw!miami!emmerson       emmerson%miami.span@vlsi.jpl.nasa.gov

------------------------------

Date: 19 Jan 88 14:50:11 GMT
From: uh2@psuvm.bitnet  (Lee Sailer)
Subject: Software Complexity Measures Will Never Be Accurate

This is a good point.  In *today's* programming languages, there are too many
pieces of required knowledge that are not coded, so complexity measures
based on the code alone will be of limited accuracy.
     
This brings to mind two developments in CS theory.
     
PROGRAM CORRECTNESS PROOFS:  All the work on this seems to require the
addition of "assertions" to the code, that is, logical statements of
all the assumptions that must hold before, after, and even during the
execution of a piece of program.  (See Gries, The Science of Programming.)
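
For instance, a minimal C sketch (far short of a real Gries-style proof) of
assertions stating knowledge the code itself never spells out:

    #include <assert.h>

    /* Precondition:  n >= 0 and a[0..n-1] is sorted ascending.
       Postcondition: returns an index of x in a, or -1 if absent. */
    int binary_search(const int a[], int n, int x)
    {
        int lo, hi;
        assert(n >= 0);            /* precondition (sortedness is
                                      assumed here, not checked)  */
        lo = 0;
        hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            /* invariant: if x is present at all, it lies in a[lo..hi] */
            if (a[mid] < x)
                lo = mid + 1;
            else if (a[mid] > x)
                hi = mid - 1;
            else
                return mid;
        }
        return -1;
    }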
     
SPECIFICATION LANGUAGES:  The ideal specification language would lead the user
to make explicit all of the *unstated* assumptions about the process.
Furthermore, research is moving towards specifications which can be executed
directly, omitting the coding step altogether.
     
I conclude, then, that complexity measures should at least have both the code
and the specifications as input.  Complexity could be perceived as a sort of
ratio of actual code complexity to specification complexity.
     
Disclaimer:  I am interested in these ideas, but unread.  If this sounds
to you like something that someone has written, please send me the reference.
     
------------------------------

Date: Sat, 16 Jan 88 23:39:58 EST
From: weidlich@ludwig.scc.com (Bob Weidlich)
Subject: Proposal to Build Neural Network Simulator Tools

             A PROPOSAL TO THE NEURAL NETWORK RESEARCH COMMUNITY
                                 TO BUILD A
       MULTI-MODELED LAYERED NEURAL NETWORK SIMULATOR TOOL SET (MLNS)

                               Robert Weidlich

                           Contel Federal Systems

                              January 11, 1988


The technology of neural networks is in its infancy.  Like all other major
new technologies at that stage, the development of neural networks is slowed
by many impediments along the road to realizing its potential to solve many
significant real world problems.  A common assumption of those on the
periphery of neural network research is that the major factor holding back
progress is the lack of hardware architectures designed specifically to
implement neural networks.  But those of us who use neural networks on a day
to day basis realize that a much more immediate problem is the lack of
sufficiently powerful neural network models.  The pace of progress in the
technology will be determined by the evolution of existing models such as
Back Propagation, Hopfield, and ART, as well as the development of completely
new models.

But there is yet another significant problem that inhibits the evolution of
those models: lack of powerful-yet-easy-to-use, standardized,
reasonably-priced toolsets.  We spend months of time building our own
computer simulators, or we spend a lot of money on the meager offerings of
the marketplace; in either case we find we spend more time building
implementations of the models than applying those models to our
applications.  And those who lack sophisticated computer programming skills
are cut out altogether.

I propose to the neural network research community that we initiate an
endeavor to build a suite of neural network simulation tools for the public
domain.  The team will hopefully be composed of a cross-section of industry,
academic institutions, and government, and will use computer networks,
primarily Arpanet, as its communications medium.  The tool set, hereinafter
referred to as the MLNS, will ultimately implement all of the significant
neural network models, and run on a broad range of computers.

These are the basic goals of this endeavor:

     1.   Facilitate the growth and evolution of neural network technology
          by building a set of powerful yet simple to use neural network
          simulation tools for the research community.

     2.   Promote standardization in neural network tools.

     3.   Open up neural network technology to those with limited computer
          expertise by providing powerful tools with sophisticated graphical
          user interfaces.  Open up neural network technology to those with
          limited budgets.

     4.   Since we expect neural network models to evolve rapidly, update
          the tools to keep up with that evolution.

This announcement is a condensation of a couple of papers I have written
describing this proposed effort.  I describe how to get copies of those
documents and get involved in the project at the end of this announcement.

The MLNS tool will be distinctive in that it will incorporate a layered
approach to its architecture, thus allowing several levels of abstraction.
In a sense, it is really a suite of neural net tools, one operating atop the
other, rather than a single tool.  The upper layers enable users to build
sophisticated applications of neural networks which provide simple user
interfaces and hide much of the complexity of the tool from the user.
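
Purely as an illustration (this is not the MLNS design, and every name below
is hypothetical), one way to get such layering in C is to put every model
behind a single common interface, so the upper tool layers can drive any
model without knowing which one is loaded:

    #include <stdio.h>

    /* the common interface an upper layer sees */
    typedef struct nn_model {
        const char *name;
        void (*run)(const double *input, double *output, int n);
    } nn_model;

    /* a stub standing in for a real model (Back Propagation, Hopfield,
       ART, ...); it just passes its input through */
    static void stub_run(const double *in, double *out, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            out[i] = in[i];
    }

    static const nn_model stub_model = { "stub", stub_run };

    int main(void)
    {
        double in[2] = { 0.3, 0.7 }, out[2];
        const nn_model *m = &stub_model;   /* upper layer picks a model */
        m->run(in, out, 2);
        printf("%s: %g %g\n", m->name, out[0], out[1]);
        return 0;
    }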

This tool will implement as many significant neural network models (e.g.,
Back Propagation, Hopfield, ART) as is feasible to build.  The first release
will probably cover only 3 or 4 of the more popular models.  We will take an
iterative approach to building the tool, and we will make extensive use of
rapid prototyping.

I am asking for volunteers to help build the tool.  We will rely on computer
networks, primarily Arpanet and those networks with gateways to Arpanet, to
provide our communications utility.  We will need a variety of skills:
programmers (much of it will be written in C), neural network "experts", and
reviewers.  Please do not be reluctant to help out just because you feel
you're not quite experienced enough; my major motivation for initiating this
project is to round out my own neural networking experience.  We also need
potential users who feel they have a pretty good feel for what is necessary
and desirable in a good neural network tool set.

The tool set will be 100% public domain; it will not be the property of, or
copyrighted by, my company (Contel Federal Systems) or any other
organization, except for a possible future non-commercial organization that
we may want to set up to support the tool set.

If you are interested in getting involved as a designer, an advisor, or a
potential user, or if you're just curious about what's going on, the next
step is to download the files in which I describe this project in detail.
You can do this via anonymous ftp.  To do that, take the following steps:

        1.   Set up an ftp session with my host:

                     "ftp ludwig.scc.com"
			(Note:  this is an arpanet address.  If you are
			 on a network other than arpanet with a gateway
			 to arpanet, you may need a modified address
			 specification.  Consult your local comm network
			 guru if you need help.)

        2.   Login with the user name "anonymous"
        3.   Use the password "guest"
        4.   Download the pertinent files:

                     "get READ.ME"         (the current status of the files)
                     "get mlns_spec.doc    (the specification for the MLNS)
                     "get mlns_prop.doc    (the long version of the proposal)

If for any reason you cannot download the files, then call or write me at the
following address:

             Robert Weidlich
             Mail Stop P/530
             Contel Federal Systems
             12015 Lee Jackson Highway
             Fairfax, Virginia  22033
                     (703) 359-7585  (or)  (703) 359-7847
                              (leave a message if I am not available)
                     ARPA:  weidlich@ludwig.scc.com

------------------------------

End of Soft-Eng Digest
******************************