[comp.software-eng] Software Complexity Measures Will Never Be Accurate

crowl@cs.rochester.edu (Lawrence Crowl) (01/14/88)

Algorithmic measures of algorithmic complexity will never be accurate.  The
problem is that a larger set of assumptions, or programmer state, allows
smaller programs.  Since algorithms are unlikely to be able to capture the set
of assumptions in a piece of code, the measures must rely on program size,
operator relationships, etc.  Smaller programs are more likely to have smaller
complexity values, regardless of the set of assumptions.  However, the set of
assumptions, or programmer state, required to understand a code fragment is
just what makes a piece of code difficult to understand.   Algorithmic
measures of complexity fail to capture the set of assumptions within a piece
of code, and therefore are inaccurate measures of actual complexity.  
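To make the point concrete, here is a rough sketch (in Python, using an invented token-count measure rather than any published metric) of how a size-based measure rewards a program whose brevity rests on an unstated assumption:

```python
# Illustrative sketch: a crude size-based "complexity" that just counts
# lexical tokens, applied to two equivalent functions.  The shorter one
# scores lower even though it leans on an unstated assumption (the
# closed-form sum identity), which is exactly the hidden programmer
# state the measure cannot see.
import io
import tokenize

def token_count(source):
    """Count the non-whitespace lexical tokens in Python source text."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    return sum(1 for tok in tokens if tok.string.strip())

# Relies on the identity 1 + 2 + ... + n = n*(n+1)/2.
short_version = "def total(n): return n * (n + 1) // 2\n"

# States its reasoning step by step; no outside assumptions needed.
long_version = (
    "def total(n):\n"
    "    acc = 0\n"
    "    for i in range(1, n + 1):\n"
    "        acc += i\n"
    "    return acc\n"
)

# The size-based measure ranks the assumption-laden version as simpler.
print(token_count(short_version) < token_count(long_version))  # True
```

The measure has no way to charge the short version for the mathematical fact it silently assumes.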
-- 
  Lawrence Crowl		716-275-9499	University of Rochester
		      crowl@cs.rochester.edu	Computer Science Department
...!{allegra,decvax,rutgers}!rochester!crowl	Rochester, New York,  14627

steve@umigw.MIAMI.EDU (steve emmerson) (01/16/88)

In article <5874@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>...   Algorithmic
>measures of complexity fail to capture the set of assumptions within a piece
>of code, and therefore are inaccurate measures of actual complexity.  

Interesting.  This dovetails nicely with another software-metric assertion:
namely, that software metrics are applicable only *within* an organization
for the purpose of comparing one routine with another.  It could be that
each software development entity constructs, consciously or otherwise, a
standard set of assumptions.  That would enable the valid *intra*-entity use of
software metrics, while hopelessly failing for inter-entity comparisons.

-- 
Steve Emmerson                     Inet: emmerson@miami.miami.edu [192.31.89.4]
SPAN: miami::emmerson (host 3.2)         emmerson%miami.span@star.stanford.edu
UUCP: ...!hao!umigw!miami!emmerson       emmerson%miami.span@vlsi.jpl.nasa.gov

UH2@PSUVM.BITNET (Lee Sailer) (01/19/88)

In article <5874@sol.ARPA>, crowl@cs.rochester.edu (Lawrence Crowl) says:
>
>Algorithmic measures of algorithmic complexity will never be accurate.  The
     
Never say "never".  8-)
     
>problem is that a larger set of assumptions, or programmer state, allows
>smaller programs.  Since algorithms are unlikely to be able to capture the set
>of assumptions in a piece of code, the measures must rely on program size,
>operator relationships, etc.
     
This is a good point.  In *today's* programming languages, there are too many
pieces of required knowledge that are not coded, so complexity measures
based on the code alone will be of limited accuracy.
     
This brings to mind two developments in CS theory.
     
PROGRAM CORRECTNESS PROOFS:  All the work on this seems to require the
addition of "assertions" to the code, that is, logical statements of
all the assumptions that must hold before, after, and even during the
execution of a piece of the program.  (See Gries, The Science of Programming.)
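For illustration, a minimal sketch of that discipline in Python, with the pre-, post-, and loop conditions written as executable assert statements (the function and its invariant are invented for the example):

```python
# Gries-style sketch: the loop carries an explicit invariant, and the
# pre- and postconditions appear as executable assertions rather than
# being left as unstated assumptions in the programmer's head.
def integer_sqrt(n):
    """Largest r with r*r <= n, derived with an explicit invariant."""
    assert n >= 0                             # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        assert r * r <= n                     # loop invariant
        r += 1
    assert r * r <= n < (r + 1) * (r + 1)     # postcondition
    return r

print(integer_sqrt(10))  # 3
```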
     
SPECIFICATION LANGUAGES:  The ideal spec. lang. would lead to the user
making explicit all of the *unstated* assumptions about the process.
Furthermore, research is moving towards specifications which can be executed
directly, omitting the coding step altogether.
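A hedged sketch of that idea in Python: the specification of sorting (an ordered permutation of the input) can itself be executed by brute-force search, with no separate coding step.  The names here are invented for illustration:

```python
# Executable-specification sketch: state WHAT the result is (a sorted
# permutation of the input) and let a generic search find it, instead
# of coding HOW to produce it.
from itertools import permutations

def is_sorted(xs):
    """The ordering half of the spec: each element <= its successor."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def sort_by_specification(xs):
    """Execute the spec: the answer is the ordered permutation."""
    return next(list(p) for p in permutations(xs) if is_sorted(p))

print(sort_by_specification([3, 1, 2]))  # [1, 2, 3]
```

Executing the spec directly is hopelessly slow, of course; the point is only that every assumption is on the page.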
     
I conclude then that complexity measures should at least have both the code
and the specifications as input.  Complexity could be perceived as
sort of a ratio of actual code complexity to specification complexity.
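One illustrative sketch of such a ratio, assuming a deliberately crude whitespace-token measure applied to both artifacts (the measure is invented here, purely to show the shape of the idea):

```python
# Invented sketch: measure both the specification and the code with the
# same crude token count, and report code-complexity / spec-complexity.
# A ratio well above 1 suggests the code carries much machinery that
# the specification never had to mention.
def token_count(text):
    """Crude measure: count whitespace-separated tokens."""
    return len(text.split())

spec = "result is the permutation of the input that is in ascending order"
code = """
def sort(xs):
    out = list(xs)
    for i in range(len(out)):
        for j in range(i + 1, len(out)):
            if out[j] < out[i]:
                out[i], out[j] = out[j], out[i]
    return out
"""

ratio = token_count(code) / token_count(spec)
print(round(ratio, 2))
```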
     
Disclaimer:  I am interested in these ideas, but unread.  If this sounds
to you like something that someone has written, please send me the reference.