[comp.software-eng] Metric for Requirements Complexity?

gary@suite.sw.oz.au (Gary Corby) (05/24/91)

Given two Software Requirements Specifications, is there a metric which
will describe the relative complexity of the systems they describe? 
Please note I am referring to the complexity of the system, rather than any
sort of stylistic measure of the document per se.

We are looking for this sort of measure because we wish to determine whether
we are getting better or worse at writing SRS documents as time goes on.
This seems to require measuring the number of faults discovered in each SRS
document and plotting the values as a function of the date of writing.
The only problem is that not all SRSs describe equally complex systems,
and more convoluted systems can be expected to contain a greater number of errors.
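
For instance (with made-up numbers), what we would like to plot is faults
normalized by some such complexity figure, so that a convoluted system does
not make us look worse than we are:

    # Made-up data: (date written, faults found in SRS, system complexity).
    SRS_HISTORY = [
        ("1990-03", 12, 4.0),
        ("1990-09", 15, 7.5),
        ("1991-02",  6, 3.0),
    ]

    # Faults per unit of complexity, as a function of date of writing.
    for date, faults, complexity in SRS_HISTORY:
        print(date, faults / complexity)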

Does anyone know a solution or have a suggestion?

Gary
-- 
Gary Corby  (Friend of Elvenkind)			Softway Pty Ltd
						ACSnet: gary@softway.oz
					UUCP: ...!uunet!softway.oz!gary

hlavaty@CRVAX.Sri.Com (05/24/91)

In article <gary.675044073@suite.sw.oz.au>, gary@suite.sw.oz.au (Gary Corby) writes...
>Given two Software Requirements Specifications, is there a metric which
>will describe the relative complexity of the systems they describe? 
>Please note I am referring to the complexity of the system, rather than any
>sort of stylistic measure of the document per se.
> 
>We are looking for this sort of measure because we wish to determine whether
>we are getting better or worse at writing SRS documents as time goes on.
>This seems to require measuring the number of faults discovered in each SRS
>document and plotting the values as a function of the date of writing.
>The only problem is that not all SRSs describe equally complex systems,
>and more convoluted systems can be expected to contain a greater number of errors.
> 
>Does anyone know a solution or have a suggestion?
> 
>Gary
>-- 
The book "Controlling Software Projects" by Tom DeMarco outlines a possible
approach that might interest you.  As you point out, the problem is always that
one SRS is not the same as another, so you can't compare them directly.  What
DeMarco proposes is that you break the SRS down into nodes, each of which falls
into a different category (database, scientific, user-interface, etc.).  He
provides starting complexity weights for each category, but recommends that you
keep a database of your own experience so that over time the weights become
more accurate for your own company.  You get the refined weights by analyzing
the end result (after coding), when that information is available.  The bottom
line is that anything done at requirements time cannot be as accurate as an
analysis done at the end, but it gives you an approximation that you can keep
refining over time.  This is just a quick overview; if it sounds interesting,
read the book.
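
The arithmetic itself is trivial.  A sketch (the category names and starting
weights below are placeholders I made up, not DeMarco's actual numbers):

    # Hypothetical starting weights per node category; recalibrate these
    # from your own post-coding defect data as it accumulates.
    WEIGHTS = {"database": 3.0, "scientific": 5.0, "user-interface": 2.0}

    def srs_complexity(nodes):
        """Sum the weights of the categorized nodes found in one SRS."""
        return sum(WEIGHTS[category] for category in nodes)

    # An SRS that broke down into two database nodes and one UI node:
    print(srs_complexity(["database", "database", "user-interface"]))  # 8.0

The database of experience is exactly what lets you keep replacing those
guessed weights with measured ones.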

jls@netcom.COM (Jim Showalter) (05/25/91)

Requirements usually specify things like the number of languages,
development hosts, targets, configurations, programming paradigms
(e.g. AI, real-time, MIS). You also usually know the number of
contractors involved, the number of sites involved, and the
mapping of processes to processors. All of these factors contribute
to complexity, so it seems that you could devise a metric that
assigned weights to each factor and did the arithmetic to arrive
at a number or set of numbers that said, essentially, "This proposed
system as specified in these requirements is about a 7 on a scale
of ten."
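
A back-of-the-envelope version of that arithmetic (the factors, weights, and
scaling ceiling here are illustrative, not calibrated against anything):

    # Illustrative factor weights; calibrate against completed projects.
    FACTOR_WEIGHTS = {
        "languages": 1.0, "hosts": 0.5, "targets": 0.5,
        "paradigms": 2.0, "contractors": 1.5, "sites": 1.0,
    }
    MAX_RAW = 40.0  # assumed ceiling for scaling to a ten-point range

    def complexity_score(counts):
        """Weighted sum of factor counts, scaled to 0..10."""
        raw = sum(FACTOR_WEIGHTS[f] * n for f, n in counts.items())
        return min(10.0, 10.0 * raw / MAX_RAW)

    # Two languages, one host, three targets, one paradigm,
    # two contractors, two sites:
    print(complexity_score({"languages": 2, "hosts": 1, "targets": 3,
                            "paradigms": 1, "contractors": 2,
                            "sites": 2}))  # 2.75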

I found it interesting in thinking about your question that the
factors that I've found to most affect complexity ARE available
in the requirements (explicitly or implicitly). It is not so much
the implementation details of whether one uses one sort algorithm
vs another that have an impact on the overall complexity of the
project--it is the macro-scale factors listed above that determine
the true complexity.

There are other things affecting the success of a project that aren't
directly related to complexity: the sophistication of the development
environments used, the skill levels of the people on the team, the
quality of the interpersonal dynamics on the team, etc. These are
NOT usually in the requirements, but might need to be taken into
account to normalize your comparisons of SRSs.
-- 
**************** JIM SHOWALTER, jls@netcom.com, (408) 243-0630 ****************
*Proven solutions to software problems. Consulting and training on all aspects*
*of software development. Management/process/methodology. Architecture/design/*
*reuse. Quality/productivity. Risk reduction. EFFECTIVE OO usage. Ada/C++.    *

orville@weyrich.UUCP (Orville R. Weyrich) (05/25/91)

In article <gary.675044073@suite.sw.oz.au> gary@suite.sw.oz.au (Gary Corby) writes:
>Given two Software Requirements Specifications, is there a metric which
>will describe the relative complexity of the systems they describe? 

Have you considered the book "Function Point Analysis" by J. Brian Dreger,
Prentice Hall 1989?
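
The gist, for anyone who hasn't seen it: you count five kinds of user-visible
components and weight them.  A minimal sketch using the standard "average"
weights (the book covers the low/high variants and the subsequent adjustment
step):

    # Standard "average" weights for unadjusted function points.
    FP_WEIGHTS = {
        "external_inputs": 4, "external_outputs": 5,
        "external_inquiries": 4, "internal_files": 10,
        "external_interfaces": 7,
    }

    def unadjusted_fp(counts):
        """Weighted sum of the five component counts."""
        return sum(FP_WEIGHTS[kind] * n for kind, n in counts.items())

    # 6 inputs, 4 outputs, 2 inquiries, 3 files, 1 interface:
    print(unadjusted_fp({"external_inputs": 6, "external_outputs": 4,
                         "external_inquiries": 2, "internal_files": 3,
                         "external_interfaces": 1}))  # 89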



--------------------------------------           ******************************
Orville R. Weyrich, Jr., Ph.D.                   Certified Systems Professional
Internet: orville%weyrich@uunet.uu.net             Weyrich Computer Consulting
Voice:    (602) 391-0821                         POB 5782, Scottsdale, AZ 85261
Fax:      (602) 391-0023                              (Yes! I'm available)
--------------------------------------           ******************************

oman@med.cs.uidaho.edu (05/29/91)

In article <gary.675044073@suite.sw.oz.au> gary@suite.sw.oz.au (Gary Corby) writes:
>Given two Software Requirements Specifications, is there a metric which
>will describe the relative complexity of the systems they describe? 
>Please note I am referring to the complexity of the system, rather than any
>sort of stylistic measure of the document per se.
>

For two years I tried to do this using standardized, IEEE-formatted SRS
documents.  All of my attempts failed because of the natural language
component of the SRS content (i.e., it cannot be parsed).

I was, however, successful in measuring the complexity of design documents, as
were a number of other researchers.  For design metrics see my paper in the
Journal of Systems and Software, July 1990, or the papers by Dieter Rombach
and Sallie Henry in IEEE Software, March 1990.

Paul.

-----------------------------------------------------------------------------
--   Paul W. Oman, Ph.D., C.S. Dept., Univ. of Idaho, Moscow, ID, 83843    --
--     em: oman@cs.uidaho.edu    ph: 208-885-6589    fx: 208-885-6645      --
-----------------------------------------------------------------------------

kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) (06/05/91)

In article <gary.675044073@suite.sw.oz.au>, gary@suite.sw.oz.au (Gary Corby) writes:
> Given two Software Requirements Specifications, is there a metric which
> will describe the relative complexity of the systems they describe? 
> Please note I am referring to the complexity of the system, rather than any
> sort of stylistic measure of the document per se.
> 
> We are looking for this sort of measure because we wish to determine whether
> we are getting better or worse at writing SRS documents as time goes on.
> This seems to require measuring the number of faults discovered in each SRS
> document and plotting the values as a function of the date of writing.
> The only problem is that not all SRSs describe equally complex systems,
> and more convoluted systems can be expected to contain a greater number of errors.
> 
> Does anyone know a solution or have a suggestion?

Solutions - never.....suggestions - always.

I have read a few of the replies to your post, and they started drifting back
into the general metrics discussion, which is relevant but IMHO not complete.
A number of the replies mentioned some very interesting metrics (rutabagas,
coffee consumed, etc.) that could at least theoretically be correlated with
success.  I don't recall seeing the old nostrum: correlation does not imply
causation.  The correlations will also be especially difficult to determine
because of the Hawthorne effect, but you gotta start somewhere.

I am in fundamental agreement with those who propose defining some measures,
collecting data for those measures, computing some metrics, and then trying
to determine some correlations.  But then you must perturb the process to find
out whether your correlation holds.  If perturbing the measure does not change
the success rate, then maybe what you are measuring is of no value.  And you
do not know a priori what measures you should use.  Start somewhere.
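
As a concrete starting point, something like the following (a sketch; the
data is made up) at least tells you whether a candidate measure tracks your
fault counts at all:

    import statistics

    def pearson_r(xs, ys):
        """Pearson correlation between a candidate measure and an outcome."""
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Made-up data: complexity scores vs. faults found per SRS.
    print(pearson_r([2.5, 4.0, 7.5, 3.0], [5, 9, 21, 7]))  # about 0.998

A high number here is still only correlation; the perturbation step above is
what hints at causation.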

Another issue: you discuss the complexity of the system.  I think what you
really want to know is: does the content of the SRS describe a successful
product?  Could you not apply McCabe's measure to any executable specification
language?  Could you in some manner construct complexity diagrams of the SRS?
Heck, try diagramming the sentences, treat each one as a graph, make the ends
connect, and compute complexity.  You'll have a number.  It may tell you
something about how you write the English language.  See if reducing this
number reduces the number of errors detected in design, etc.
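
For reference, on a connected graph McCabe's measure is just V(G) = E - N + 2.
A sketch, with a toy graph (invented for illustration) standing in for a
diagrammed specification:

    def cyclomatic_complexity(nodes, edges, components=1):
        """McCabe's V(G) = E - N + 2P."""
        return len(edges) - len(nodes) + 2 * components

    # Toy graph: entry -> a -> exit, plus a branch a -> b -> exit.
    nodes = ["entry", "a", "b", "exit"]
    edges = [("entry", "a"), ("a", "exit"), ("a", "b"), ("b", "exit")]
    print(cyclomatic_complexity(nodes, edges))  # 4 - 4 + 2 = 2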

Last issue: what do better and worse mean?  If your English is better, is your
content necessarily any better?  Vice versa?  As I said earlier, I think what
you want is a measure of how accurately your original SRS content described
the resultant product, multiplied by customer satisfaction with that product.
If customer satisfaction is zero for a perfect SRS, you are out of business.

George X. Kambic
[...]
Section 1.2.5.a.x of net functional specification
	The user shall disclaim all information on the net.

[...]
Execution: I disclaim all information I place on the net.

Wow!  I met the spec.