[comp.software-eng] Identifying high quality software development efforts

marks@ssdevo.enet.dec.com (Randy Marks) (10/24/90)

I am trying to identify companies (or groups within companies) which
are producing very high quality software.  I've spent some time in the
library following various references and browsing through trade publications
devoted to quality.  I've been unable to find information which provides
a comparative analysis of companies in terms of software quality.

One idea that I had, but which has not yielded results, is to:
1. Determine what software-specific quality awards/recognition exist
   (either "industry" awards or company-specific awards given to a supplier).
2. Find out which companies/groups have been honored with those awards.

I realize that there are an infinite number of ways to measure software
quality, and for what I'll be using this info. for, it doesn't really matter
which one is used.  Also, I don't really care what the nature of the software
is.  At this stage of my research, I will settle for anything I can get.

So, are you aware of any software quality awards, or any comparative analysis
that has been done on companies which develop software?  If so, I'd
appreciate hearing about them.

	Randy Marks 

(UUCP)	                {decvax,ucbvax,allegra}!decwrl!ssdevo.enet!marks
(INTERNET)              marks@ssdevo.enet.dec.com
(domain-based INTERNET) marks%ssdevo.enet@decwrl.dec.com
..........................................................................
		From rare places come rare experiences.
..........................................................................

cox@stpstn.UUCP (Brad Cox) (10/27/90)

In article <1869@shodha.enet.dec.com> marks@ssdevo.enet.dec.com (Randy Marks) writes:
>I am trying to identify companies (or groups within companies) which
>are producing very high quality software.

Contact the authors for a copy of "Software Technology in Japan": Frank McGarry,
Code 552, NASA/GSFC, and Prof. Victor R. Basili, Institute for Advanced Computer
Studies, Univ. of Maryland, College Park MD 20742; (301) 454-5497; basili@mimsy.umd.edu.

Specific examples: U.S: typical 1 to 3 errors/KSLOC, .01 to .1 for critical
software; Japan: some companies report .008 errors/KSLOC. 
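
For concreteness, here's a throwaway C snippet that just takes the figures
above at face value and prints the ratios (it makes no claim about whether
the underlying measurements are comparable):

    #include <stdio.h>

    int main(void)
    {
        /* Figures exactly as quoted above; errors per thousand source lines. */
        double us_typical_low  = 1.0,  us_typical_high  = 3.0;
        double us_critical_low = 0.01, us_critical_high = 0.1;
        double japan           = 0.008;

        printf("US typical  vs Japan: %.0fx to %.0fx\n",
               us_typical_low / japan, us_typical_high / japan);
        printf("US critical vs Japan: %.2fx to %.2fx\n",
               us_critical_low / japan, us_critical_high / japan);
        return 0;
    }

It prints ratios of roughly 125x to 375x against the typical U.S. figures and
1.25x to 12.5x against the critical-software figures.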

In other words, error rates one to two *orders*of*magnitude* less *in*spite*of*
being many years behind in software technology.

Of course, everyone knows that what happened to automobile manufacturing
could never happen in software. Hmmm...
-- 

Brad Cox; cox@stepstone.com; CI$ 71230,647; 203 426 1875
The Stepstone Corporation; 75 Glen Road; Sandy Hook CT 06482

rcd@ico.isc.com (Dick Dunn) (10/31/90)

cox@stpstn.UUCP (Brad Cox) writes about some interesting statistics.  I'd
like to dig a little below the surface--i.e., find out what these numbers
really mean.  My notes are intended as "devil's advocate" views, or "cheap
shots" if you will.  That is, they're not intended as an attack on what
Brad has said; they're just intended to provoke a little more discussion.

> Specific examples: U.S: typical 1 to 3 errors/KSLOC, .01 to .1 for critical
> software; Japan: some companies report .008 errors/KSLOC. 

First obvious shot: KSLOC doesn't measure program size, useful content, or
anything else, to any useful level.  That is, it might account for as much
as an order of magnitude difference in error rate.  (I've seen factors of
2-4 code expansion out of people being "paid by the line" relative to what
a decent programmer would write.)

Next, how could you possibly measure maintenance programming (by which I
mean not so much repair as modification) in KSLOC?  Most programmers do NOT
spend their time writing new code; they spend it modifying existing code to
do different things.

Then, what is the relative cost of the software?  How many dollars/yen per
KSLOC?

.008 errors/KSLOC is one error in 125,000 lines.  Many average-plodding
programmers won't write that much in a lifetime.  Hmmm...

What type of code shows that error rate?

> In other words, error rates one to two *orders*of*magnitude* less
> *in*spite*of* being many years behind in software technology.

Perhaps it's not "in spite of" but "because of"!  Perhaps by the time Japan
has accumulated thirty years or so of badly re-re-re-re-worked code,
trashed out and stretched beyond its breaking point five ways, with the
original authors long in the grave and all the original design criteria
violated and/or forgotten, their error rates will be as high as ours?
-- 
Dick Dunn     rcd@ico.isc.com -or- ico!rcd       Boulder, CO   (303)449-2870
   ...but Meatball doesn't work that way!

sartin@hplabsz.HPL.HP.COM (Rob Sartin) (11/01/90)

>First obvious shot: KSLOC doesn't measure program size, useful content, or
>anything else, to any useful level.  That is, it might account for as much

That point was really driven home for me recently.  I was working for
about a month and a half on cleanup and functionality enhancement of
some code.  It turned out that the cleanup opened the door for lots of
neat functionality enhancements for very little cost.  By the time
things were done I had a "productivity" of around -2000 LOC (or NCSS as
we call them) with several pages of documentation describing the new
functionality.  Additionally, though I didn't run the metrics, I suspect the
complexity measures went down, and I know branch coverage went way up in
the tests.

Rob

warren@eecs.cs.pdx.edu (Warren Harrison) (11/05/90)

>> Specific examples: U.S: typical 1 to 3 errors/KSLOC, .01 to .1 for critical
>> software; Japan: some companies report .008 errors/KSLOC. 

When throwing these numbers around, we need to make sure we're not comparing
apples and oranges.  I'm not sure where each individual who mentions such
numbers has gotten their statistics, but the Japanese typically report their
LOC figures in "assembler equivalent lines", which means they multiply their
FORTRAN, C, or whatever lines by some appropriate expansion constant to get
the AELs.  I strongly suspect the bug/KLOC figures are actually bugs/KAEL.
Obviously this makes things look much better ... for example, let's say a
5,000-line FORTRAN project has 15 bugs ... this gives a bug/KLOC ratio of
3/KLOC, but multiply the 5,000 by the magic expansion factor (let's say 10
assembler lines per FORTRAN line) and we get 15/50,000, or .3 bugs per KLOC.
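
To spell that back-of-the-envelope conversion out, here is a short C sketch
(the 10-to-1 factor is just the illustrative "magic" number from the example
above, not a published constant):

    #include <stdio.h>

    int main(void)
    {
        /* The hypothetical example above: a 5,000-line FORTRAN project
           with 15 bugs, and an assumed 10-to-1 expansion factor.        */
        double fortran_kloc = 5.0;   /* thousands of high-level source lines   */
        double bugs         = 15.0;
        double expansion    = 10.0;  /* assembler-equivalent lines per HLL line */

        double kael = fortran_kloc * expansion;   /* thousands of AELs */

        printf("bugs/KLOC (high-level): %.1f\n", bugs / fortran_kloc);  /* 3.0 */
        printf("bugs/KAEL             : %.1f\n", bugs / kael);          /* 0.3 */
        return 0;
    }

Same project, same 15 bugs; only the denominator changes.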

As I said, this is just a guess, but the AEL metric is mentioned many times
in a monograph entitled Japanese Software Engineering, which is a collection
of papers by Japanese software engineering figures.

Warren

==========================================================================
Warren Harrison                                          warren@cs.pdx.edu
Department of Computer Science                                503/725-3108
Portland State University   

marks@ssdevo.enet.dec.com (Randy Marks) (11/12/90)

In article <534@pdxgate.UUCP>, warren@eecs.cs.pdx.edu (Warren Harrison) writes...
>>> Specific examples: U.S: typical 1 to 3 errors/KSLOC, .01 to .1 for critical
>>> software; Japan: some companies report .008 errors/KSLOC. 
> 
> 
>As I said, this is just a guess, but the AEL metric is mentioned many times
>in a monograph entitled Japanese Software Engineering, which is a collection
>of papers by Japanese software engineering figures.

Are these papers in Japanese or English?  Can somebody provide additional
info. so that I might have our library obtain copies for me?

	Randy Marks

P.S. I have access to a translator, so even if they are in Japanese, I'm
interested.

(UUCP)	                {decvax,ucbvax,allegra}!decwrl!ssdevo.enet!marks
(INTERNET)              marks@ssdevo.enet.dec.com
(domain-based INTERNET) marks%ssdevo.enet@decwrl.dec.com
..........................................................................
		From rare places come rare experiences.
..........................................................................

stank@anvil.WV.TEK.COM (Stan Kalinowski) (11/12/90)

In article <534@pdxgate.UUCP> warren@eecs.UUCP (Warren Harrison) writes:
>>> Specific examples: U.S: typical 1 to 3 errors/KSLOC, .01 to .1 for critical
>>> software; Japan: some companies report .008 errors/KSLOC. 
>
>When throwing these numbers around, we need to make sure we're not comparing
>apples and oranges. I'm not sure where each individual has gotten their

Indeed.  In addition to the units of measure, we need to be sure that
we are comparing similar types of software.  The design approach and
Q/A method used in an embedded system for a consumer product are quite
different from the techniques used for applications on a general
purpose computer.  I know there should be no difference (quality is
quality), but in practice some markets are more tolerant of bugs and
actually prefer large gains in performance/functionality while
accepting marginally higher error rates.  A good example of this
tradeoff is the X Window System: its early releases were buggy, but
it gets better as time goes on.  It could have been delayed until
all the bugs were worked out, but then the X Consortium wouldn't have
gotten the user feedback that allowed them to better tune the product
to users' needs.

Of course, here at Tek, we achieve high performance/functionality
without the bugs. :-)

							stank

US Mail: Stan Kalinowski, Tektronix, Inc., Network Displays Division
         PO Box 1000, MS 60-850, Wilsonville OR 97070   Phone:(503)-685-2458
e-mail:  {ucbvax,decvax,allegra,uw-beaver}!tektronix!orca!stank
    or   stank@orca.WV.TEK.COM

warren@eecs.cs.pdx.edu (Warren Harrison) (11/13/90)

In article <1965@shodha.enet.dec.com> marks@ssdevo.enet.dec.com (Randy Marks) writes:
>
>In article <534@pdxgate.UUCP>, warren@eecs.cs.pdx.edu (Warren Harrison) writes...
>>As I said, this is just a guess, but the AEL metric is mentioned many times
>>in a monograph entitled Japanese Software Engineering, which is a collection
>>of papers by Japanese software engineering figures.
>
>Are these papers in Japanese or English?  Can somebody provide additional
>info. so that I might have our library obtain copies for me?

The papers are in a collection entitled "Japanese Perspectives in Software
Engineering", edited by Yoshihiro Matsumoto and Yutaka Ohno, published by
Addison-Wesley, ISBN 0-201-41629-8.

After checking, I find my acronym was incorrect - it's EASL (equivalent
assembler source lines).  Specific numbers mentioned in the papers were (for
1985) 1.6K EASL/month not including reused code, or 3.1K if you include
reused code.  The Software Engineering text by Schach (ISBN 0-256-08515-3,
Irwin) cites an expansion factor of 4 for EASL, which translates into about
400 lines of new "high-level" code a month.
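
Just to show where the 400 comes from, a trivial C check using the two numbers
cited above (my arithmetic only):

    #include <stdio.h>

    int main(void)
    {
        /* 1.6K EASL/month of new code (1985 figure) and an expansion
           factor of 4 EASL per high-level line, as cited by Schach.  */
        double easl_per_month = 1600.0;
        double expansion      = 4.0;

        printf("approx. high-level lines/month: %.0f\n",
               easl_per_month / expansion);   /* prints 400 */
        return 0;
    }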

As I said in my earlier posting, I'm not sure whether the other numbers from
Japan we always hear are EASL or actual high-level lines of code, but in the
Addison-Wesley collection, many of the authors specify their data in terms
of EASL.

Warren

==========================================================================
Warren Harrison                                          warren@cs.pdx.edu
Department of Computer Science                                503/725-3108
Portland State University   

zhou@brazil.psych.purdue.edu (Albert Zhou) (11/18/90)

This reminds me of a highly amusing book.  The title is "Frozen Keyboard", and
the subtitle is "How to Live with Bad Software".  This book should increase
users' tolerance of software bugs; it would be ideal for free distribution
with any bug-haunted software.