[comp.software-eng] Software Quality

snoopy@sopwith.UUCP (Snoopy T. Beagle) (11/30/88)

In article <1988Nov15.180821.20324@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:

| The popular software distribution from a certain
| university in southern California is a good example of interesting ideas
| often marred by first-cut [i.e. poorly thought out, messy, sometimes
| incomplete] designs and implementations.

| This is not to say that any random commercial organization, like, say,
| one whose name has three initials and an "&" in it, will *necessarily*
| do better.  But those people can, in theory, afford to spend some money
| on quality assurance.  Universities generally can't.

Does this mean I should "rm -rf cnews" rather than trying to get it to
build?  :-)  Can I trust software from a certain university in eastern
Canada? :-)

These days, a vendor is likely to be pushing both hardware and software
out the door as soon as possible so that they can rake in the bucks for
whizzy new feature foobar before their competitor beats them to it.  They
may very well argue that they can't spend any more time/money on quality.

If you want better quality, you need to get customers to demand it.
Customers with large budgets.

It isn't who you work for, it is your state of mind that counts.  Tools like
code inspections can help, but they may not buy you much if you're just going
through the motions.
    _____     
   /_____\    Snoopy
  /_______\   
    |___|     tektronix!tekecs!sopwith!snoopy
    |___|     sun!nosun!illian!sopwith!snoopy

henry@utzoo.uucp (Henry Spencer) (12/02/88)

In article <70@sopwith.UUCP> snoopy@sopwith.UUCP (Snoopy T. Beagle) writes:
>| This is not to say that any random commercial organization, like, say,
>| one whose name has three initials and an "&" in it, will *necessarily*
>| do better.  But those people can, in theory, afford to spend some money
>| on quality assurance.  Universities generally can't.
>
>Does this mean I should "rm -rf cnews" rather than trying to get it to
>build?  :-)  Can I trust software from a certain university in eastern
>Canada? :-)

You pays your money and you takes your chances! :-)

Some people can write good software without a QA group standing over them
with a club.  Some can't.  If there *is* a club-equipped QA group, the odds
of getting consistently good software are better.  If there isn't, as in
universities, much depends on who wrote the stuff, and on whether they got
out on the right side of bed that morning.  (Even I, normally the absolute
pinnacle of programming perfection, have been known to produce code with
occasional trivial, unimportant flaws on a bad day.  :-) :-) :-) :-))

>These days, a vendor is likely to be pushing both hardware and software
>out the door as soon as possible so that they can rake in the bucks for
>whizzy new feature foobar before their competitor beats them to it.  They
>may very well argue that they can't spend any more time/money on quality.

Yes, unfortunately, some QA departments come with a leash rather than a
club as standard equipment...
-- 
SunOSish, adj:  requiring      |     Henry Spencer at U of Toronto Zoology
32-bit bug numbers.            | uunet!attcan!utzoo!henry henry@zoo.toronto.edu

billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe, 2847 ) (10/12/89)

From article <16193@vail.ICO.ISC.COM>, by rcd@ico.ISC.COM (Dick Dunn):
> Of course good people will produce better software than bad people. [...] 
> But the professional won't do as good a job as if there were more 
> external interest in quality and less extreme time/money pressure.

    A long schedule can provide enough time to get a product almost
    perfect, but with very negative economic consequences.  It is 
    *management's* responsibility to balance quality requirements
    against other requirements when determining the schedule.

    At some point, the tradeoff must be made, and the engineers
    must then produce the best possible product within the specified
    minimum standards of quality and the cost and schedule constraints. 

> And what's this about advanced programming languages?  Wolfe has, in other
> postings, been an outspoken advocate of Ada, which is certainly *not*
> advanced.  

    Oh, then let's talk about how "advanced" C is, over in comp.lang.misc.


    Bill Wolfe, wtwolfe@hubcap.clemson.edu

PAAAAAR@CALSTATE.BITNET (10/13/89)

Software quality is a multidimensional idea!  There is an
excellent philosophical discussion of Quality in that very
odd piece of fiction:
Author: Pirsig
Title: Zen and the Art of Motorcycle Maintenance.
You might be able to squeeze some interesting ideas out of it.

Two other background sources are
Industrial Quality Assurance -- what's sauce for the goose is sauce for the gander
Tom Gilb's scattered publications on "Design by Objectives"

Another fundamental idea is in Herbert Simon's The Sciences of the Artificial -
He points out that normally you can only optimise on one
property of a system after specifying constraints on the
other components of "system quality".

Thus we can imagine that a piece of software, when produced will
have the following qualities:
RAM required  -- minimum, maximum, average, typical
Disk storage  -- ditto
Speeds        -- ditto
              -- for one transaction/interaction, for a batch...
CPU Cycles used -- typical and average, and worst,....
Time to produce
Cost of production
Lines of Executable Code
Readability   -- mean time to answer questions about source code
              -- etc
Maintainability -- mean time to make small change, prob of change working,...
Completeness  -- proportion of functions/transactions implemented...
Portability   -- mean time to change to another system
Debuggability -- mean time to find a seeded bug or two
Probable number of bugs -- F*(N-S)/S, where
                          N bugs are seeded and F+S are found, with
                          S of the found bugs having been seeded
                          (see the small sketch after this list).
And so on......
(And yes these are not the best,...)
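
A minimal sketch in C of that seeding estimate (the numbers are
invented; this just mechanizes the formula quoted above):

    /* Seed N bugs; testing finds F+S bugs, S of them seeded.
       Estimated real bugs still unfound: F*(N-S)/S.  */
    #include <stdio.h>

    int main(void)
    {
        int N = 50;   /* bugs deliberately seeded         */
        int S = 40;   /* seeded bugs found during testing */
        int F = 30;   /* real (unseeded) bugs found       */

        if (S > 0)
            printf("estimated real bugs remaining: %.1f\n",
                   (double)F * (N - S) / S);
        return 0;
    }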

Now in Crunch mode the Time to produce is Fixed and the rest are negotiable.
In other cases the time to produce may be the factor we try to optimise
within the requirement of 100% completeness but no portability...

You might start with
Charles Babbage - Ensuring the quality is part of the cost of every product
     (or something like that - the ref has escaped me...)
dick botting

Dr. Richard J. Botting,
Department of computer science,
California State University, San Bernardino,
PAAAAAR@CCS.CSUSCC.CALSTATE
paaaaar@calstate.bitnet
PAAAAAR%CALSTATE.BITNET@CUNYVM.CUNY.EDU

mjl@cs.rit.edu (10/13/89)

In article <6756@hubcap.clemson.edu> billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu writes:
>From article <16193@vail.ICO.ISC.COM>, by rcd@ico.ISC.COM (Dick Dunn):
>> And what's this about advanced programming languages?  Wolfe has, in other
>> postings, been an outspoken advocate of Ada, which is certainly *not*
>> advanced.  
>
>    Oh, then let's talk about how "advanced" C is, over in comp.lang.misc.

I'm into lingua-snobbery as much as anyone else, but is this really
the forum for it?  And using it to willyhorton someone else's position
certainly does nothing to give added credibility to your own.  So
let's drop this and get back to the interesting quality issues that
started this thread.

Mike Lutz
Mike Lutz	Rochester Institute of Technology, Rochester NY
UUCP:		{rutgers,cornell}!rochester!ritcv!mjl
CSNET:		mjl%rit@relay.cs.net
INTERNET:	mjl@cs.rit.edu

rcd@ico.ISC.COM (Dick Dunn) (10/14/89)

William Thomas Wolfe writes:

>     ...It is 
>     *management's* responsibility to balance quality requirements
>     against other requirements when determining the schedule.

No.  Management assists in this.  Management makes the final call, but the
engineers are a major part of the decision process.  After all, it's an
engineering exercise to determine how long it takes to do the engineering.

Also, as I've pointed out (ad nauseam for other readers, I suspect), the
quality characteristics I've been talking about are not readily quantifi-
able, whereas the requirements I've been talking about are necessarily
objective.  >>In this context<< the phrase "quality requirements" is a
contradiction.

note in passing...I had said:
> > And what's this about advanced programming languages?  Wolfe has, in other
> > postings, been an outspoken advocate of Ada, which is certainly *not*
> > advanced.  
...and Wolfe dodges it:
>     Oh, then let's talk about how "advanced" C is, over in comp.lang.misc.

No, let's not.  I won't assert that C is advanced.  It's suitable.  Nor did
I assert that "advanced programming languages" were essential.  Try to
answer the question, Bill:  Are you advocating use of an advanced program-
ming language, or are you advocating the use of Ada?
-- 
Dick Dunn     rcd@ico.isc.com    uucp: {ncar,nbires}!ico!rcd     (303)449-2870
   ...No DOS.  UNIX.

billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe, 2847 ) (10/15/89)

From rcd@ico.ISC.COM (Dick Dunn):
>>     *management's* responsibility to balance quality requirements
>>     against other requirements when determining the schedule.
> 
> No.  Management assists in this.  Management makes the final call, but the
> engineers are a major part of the decision process.  After all, it's an
> engineering exercise to determine how long it takes to do the engineering.

    Engineers repeatedly provide information as a function of the varying
    constraint parameters which are specified by management.  Management 
    makes the decision.  Engineers then comply.  

    Notice: *Engineering* assists.  *Management* DECIDES. 

> note in passing...I had said:
>> > And what's this about advanced programming languages?  Wolfe has, in 
>> > other postings, been an outspoken advocate of Ada, which is certainly
>> > *not* advanced.  
>> Oh, then let's talk about how "advanced" C is, over in comp.lang.misc.
> 
> No, let's not.  I won't assert that C is advanced.  

   Glad to hear it.

> answer the question, Bill:  Are you advocating use of an advanced 
> programming language, or are you advocating the use of Ada?

   I advocated precisely what I said: "advanced programming languages",
   along with CASE tools and so on.  YOU brought up Ada, not I.

   And I *will* assert that Ada is advanced, within the domain of
   PRODUCTION programming languages; followups to comp.lang.misc
   if you'd like to pursue it.


   Bill Wolfe, wtwolfe@hubcap.clemson.edu
 

rcd@ico.isc.com (Dick Dunn) (10/19/89)

Bill Wolfe said, in relation to the ongoing discussion about factoring
quality into scheduling:
> >>     *management's* responsibility to balance quality requirements
> >>     against other requirements when determining the schedule.

I complained about the characterizations of engineering and management
roles:
> > No.  Management assists in this.  Management makes the final call, but the
> > engineers are a major part of the decision process.  After all, it's an
> > engineering exercise to determine how long it takes to do the engineering.

There are some statements about interactions inherent in what I said.  I
really mean that the engineers had better *participate* in the decision
process.  I should have noted that, in addition to the engineering aspects
of determining schedule, you have to keep the old responsibility/authority
equation in balance--and it's the engineers who have the responsibility for
meeting the schedule.

Wolfe counters:
>     Engineers repeatedly provide information as a function of the varying
>     constraint parameters which are specified by management.  Management 
>     makes the decision.  Engineers then comply.  

The first part works, but poorly.  The problem is that (in software buzz-
terms) it starts with a waterfall model, then wraps a loop around the out-
side.  That is, management provides constraints, engineers provide infor-
mation in response--that's the waterfall pair of steps.  It can take a lot
of iterations to get a decent answer compared to what happens if you plunk
engineers and managers down together and let them work on all of the
constraints together.

As for the "management decides, engineers comply" - in some sense that's
what happens, but it's an incredibly poor choice of words.  That is,
regardless of the actions, an organization which sees its process in those
terms is not healthy...it has management problems.

>     Notice: *Engineering* assists.  *Management* DECIDES. 

In a wider context, this would be very unhealthy.  Engineering does the
work in an engineering project, and management assists the engineers in
getting it done.  (This comes from the observation that the end product is
the result of engineering effort, not management effort.)

But let's not lose sight of the fact that we're talking about setting the
schedule, budget, and related constraints--this is a different context
from the "real work" in the project.  My contention is that even this
initial work is substantially an engineering effort.  If engineers are to
be involved, you don't involve them as "subordinates".

A perspective:  When someone (even in our company) asks me who I work for,
I say "Interactive Systems."  If I'm then pressed for the name of a
*person*, I say "Oh, you must mean `Who do I report to?'" and give them my
manager's name.  The distinction between "work for" and "report to" is
important in what it says about structure versus control.
-- 
Dick Dunn     rcd@ico.isc.com    uucp: {ncar,nbires}!ico!rcd     (303)449-2870
   ...No DOS.  UNIX.

billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe, 2847 ) (10/22/89)

 [I have removed comp.sw.components from the newsgroup list, since
  this thread ceased to be relevant to that newsgroup some time ago] 
  
From rcd@ico.isc.com (Dick Dunn):
> Bill Wolfe said, in relation to the ongoing discussion about factoring
> quality into scheduling:
>> >>     *management's* responsibility to balance quality requirements
>> >>     against other requirements when determining the schedule.
> 
> I complained about the characterizations of engineering and management
> roles:
>> > No.  Management assists in this.  Management makes the final call, but the
>> > engineers are a major part of the decision process.  After all, it's an
>> > engineering exercise to determine how long it takes to do the engineering.
> 
> There are some statements about interactions inherent in what I said.  I
> really mean that the engineers had better *participate* in the decision
> process.  I should have noted that, in addition to the engineering aspects
> of determining schedule, you have to keep the old responsibility/authority
> equation in balance--and it's the engineers who have the responsibility for
> meeting the schedule.

   This is exactly correct, and here we agree completely.  (See below)
 
% Wolfe counters:
%>     Engineers repeatedly provide information as a function of the varying
%>     constraint parameters which are specified by management.  Management 
%>     makes the decision.  Engineers then comply.  
% 
% The first part works, but poorly.  The problem is that (in software buzz-
% terms) it starts with a waterfall model, then wraps a loop around the out-
% side.  That is, management provides constraints, engineers provide infor-
% mation in response--that's the waterfall pair of steps.  It can take a lot
% of iterations to get a decent answer compared to what happens if you plunk
% engineers and managers down together and let them work on all of the
% constraints together.

   But these iterations can occur with everyone in the same room,
   working together.  Management specifies some parameters, engineers
   provide information as to probable cost/schedule requirements,
   loop until the meeting is over.
 
> As for the "management decides, engineers comply" - in some sense that's
> what happens, but it's an incredibly poor choice of words.  That is,
> regardless of the actions, an organization which sees its process in those
> terms is not healthy...it has management problems.

   No, it in fact shows that the organization IS healthy.  Engineers
   are not, and should not be in the business of trying to be, responsible
   for setting cost and schedule constraints.  They ARE responsible for
   providing as much information as possible to management as it weighs
   technical and economic factors in the process of making the decision.

   If engineers were responsible for setting cost and schedule constraints,
   they would tend to disregard economic factors and the organization would
   quickly find itself out of business.  Projects would be technical works
   of genius, with complete formal specifications for each subprogram or
   data abstraction, 3-D images illustrating the design, etc., at a cost 
   which would result in fatal financial illness for the company. 
  
>>     Notice: *Engineering* assists.  *Management* DECIDES. 
> 
> In a wider context, this would be very unhealthy.  Engineering does the
> work in an engineering project, and management assists the engineers in
> getting it done.  (This comes from the observation that the end product is
> the result of engineering effort, not management effort.)
  
   The statement above relates only to the topic at hand, which is
   "Who is responsible for setting cost/schedule constraints?".  When
   we move to the new and different question, "Who is responsible for 
   delivering the product?", then in THAT context I agree completely,
   but this has nothing to do with the topic at hand.

> But let's not lose sight of the fact that we're talking about setting the
> schedule, budget, and related constraints--this is a different context
> from the "real work" in the project.  My contention is that even this
> initial work is substantially an engineering effort.  If engineers are to
> be involved, you don't involve them as "subordinates".

   Sure you do.  Who says superiors and subordinates can't participate
   in the same meeting?  Who says superiors and subordinates can't have
   free and open discussions?  I don't, and I hope you aren't implying
   this either.
 

   Bill Wolfe, wtwolfe@hubcap.clemson.edu

mjl@cs.rit.edu (10/24/89)

In article <6847@hubcap.clemson.edu> billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu writes:
>From rcd@ico.isc.com (Dick Dunn):
>> As for the "management decides, engineers comply" - in some sense that's
>> what happens, but it's an incredibly poor choice of words.  That is,
>> regardless of the actions, an organization which sees its process in those
>> terms is not healthy...it has management problems.
>
>   No, it in fact shows that the organization IS healthy.  Engineers
>   are not, and should not be in the business of trying to be, responsible
>   for setting cost and schedule constraints.  They ARE responsible for
>   providing as much information as possible to management as it weighs
>   technical and economic factors in the process of making the decision.
>
>   If engineers were responsible for setting cost and schedule constraints,
>   they would tend to disregard economic factors and the organization would
>   quickly find itself out of business.  Projects would be technical works
>   of genius, with complete formal specifications for each subprogram or
>   data abstraction, 3-D images illustrating the design, etc., at a cost 
>   which would result in fatal financial illness for the company. 

I can't let this one pass, if for no other reason than to defend the honor
of engineers.  I don't think we can (or should) use the Nuremberg defense
"I was only doing what I was told."  If engineers realize that
management's requests/requirements/demands are impossible to achieve,
they have the right -- the obligation -- to object, if for no other
reason than to protect the shareholders' interests against the
unrealizable plans of management!

Like most others who've been in this business for any time, I've seen
examples of what Bill is talking about.  Still, this was usually due
less to engineers violating budget and time constraints than to a
lack of management direction -- by which I mean setting clear goals for
what was to be done.  People fritter their time away when they don't
have a clear vision of where they're going.  But once goals are
established, the engineers should have significant if not primary
control over how to get there.  Most fiascos I've seen have resulted
from management inability or unwillingness to face facts: what the
project would cost and how long it would take.

I have no problem with management proposing budget and schedule
constraints, but the engineering staff is best qualified to determine
whether there is a feasible solution within these constraints.  And
"feasiblity" in this case includes adequate time to produce a quality
design that has a high probability of successful evolution over the
lifetime of the product.  Bill's snide remarks notwithstanding,
sometimes this will mean creating a formal spec. and a variety of
related models to ensure a well-engineered piece of software.

In my experience, managers consistently underestimate software product
lifetime, possibly because they base their estimates on hardware
development models.  When the hardware subsystems are no longer up to
snuff, they are often redesigned *from scratch* using the latest
technology.  However, most organizations work under the implicit
assumption that software need *never* be redesigned from scratch --
it can always be modified to meet new requirements.  I have in mind
several projects I've been involved with over the past couple of years,
where the hardware base evolved in stages from first generation mini's
(like PDP8's and Nova 1200's) to 68K's and 80xxx's.  While the current
processor boards and attendant interfaces look nothing like their
ancestors, the current software system architectures still reflect the
original designs (or lack thereof) from the early 70's.

For those interested in the issues surrounding long-term product
evolution, I strongly recommend Belady & Lehman's book "Program
Evolution: Processes of Software Change" (Academic Press, 1985, ISBN
0-12-442441-4).  The typography is awful, and I think the proofreading
phase was skipped, but the ideas and insights are well worth the time
it takes to ferret them out.
Mike Lutz	Rochester Institute of Technology, Rochester NY
UUCP:		{rutgers,cornell}!rochester!ritcv!mjl
CSNET:		mjl%rit@relay.cs.net
INTERNET:	mjl@cs.rit.edu

rcd@ico.isc.com (Dick Dunn) (10/24/89)

Bill Wolfe again...

(>  [I have removed comp.sw.components from the newsgroup list, since
>   this thread ceased to be relevant to that newsgroup some time ago] 
No, somehow you didn't.  I hope I have!)

> > As for the "management decides, engineers comply" - in some sense that's
> > what happens, but it's an incredibly poor choice of words...
>... No, it in fact shows that the organization IS healthy.  Engineers
>    are not, and should not be in the business of trying to be, responsible
>    for setting cost and schedule constraints...

My argument is with the obvious connotations of control.  The schedule,
budget, specifications, and other constraints are an *agreement* between
the management and technical staff.  Management makes the decision of
go/no-go on the project.  But the engineers' attitude is NOT one of compli-
ance...it is one of agreement.  You MUST understand the difference...if the
engineers don't agree, they won't really comply.  (The problem may stay
below the surface for a bit, but it WILL surface.)

>    If engineers were responsible for setting cost and schedule constraints,
>    they would tend to disregard economic factors and the organization would
>    quickly find itself out of business.  Projects would be technical works
>    of genius, with complete formal specifications for each subprogram or
>    data abstraction, 3-D images illustrating the design, etc., at a cost 
>    which would result in fatal financial illness for the company. 

Well, I've reached the limit of my patience with this bullshit.  I've
worked as an engineer for quite a few years, and I've played my part in
setting cost and schedule constraints.  None of the crap that Wolfe tosses
off (as if he knew how it works) has befallen the projects I've worked on.
Engineers do NOT disregard economic factors, for the simple reason that
they want to eat too.

None of the companies I've been involved with has quickly found itself out
of business as a result of deeply involving the engineers.  Quite the
contrary; I've found engineers more than willing to work in understanding
the totality of the project in order to make their contributions more
valuable.  They find that they can help the project by letting their
involvement exceed the boundaries of technical expertise and lap over
into other considerations.  This is often reciprocated by management taking
an interest in the technical issues, creating a more cohesive approach to
the project, reducing "cultural" misunderstandings, and building a sense
of teamwork instead of us/them-ness.

Or maybe I'm just used to working with real engineers.

[some comments establishing proper context deleted - see parent]
> > ...If engineers are to
> > be involved, you don't involve them as "subordinates".

>    Sure you do.  Who says superiors and subordinates can't participate
>    in the same meeting?  Who says superiors and subordinates can't have
>    free and open discussions?  I don't, and I hope you aren't implying
>    this either.

I am implying...nay, asserting...something much more fundamental:

    An engineer is NOT subordinate to a manager.

    A manager is NOT superior to an engineer.

    An engineering organization is NOT a caste system.

    Organizations frequently have hierarchical structures for reasons
    related to communication, accounting, reflection of project structural
    hierarchy, etc.  It is critically important to the success of a tech-
    nical organization that this hierarchy carry as little connotation as
    possible of increasing value, power, or importance as one moves up the
    hierarchy, because contributions must be valued by their worth rather
    than their origin.

    A manager is NOT a superior, but merely a person serving a qualitatively
    different purpose.  Management 101: A manager is an assistant to the
    people who report to the manager.

    This is NOT the army.
-- 
Dick Dunn     rcd@ico.isc.com    uucp: {ncar,nbires}!ico!rcd     (303)449-2870
   ...No DOS.  UNIX.

sps@ico.isc.com (Steve Shapland) (10/25/89)

	QUALITY <==> Meets the customer's needs!

Bill Wolfe and Dick Dunn have been discussing the roles of
and relationships between engineers and managers during
the software development process. This whole process should be
a team effort, with the different attributes of each team member
(engineer, manager, sales person, etc) driving the process during 
different phases of the project.

Both of you seem to have missed who sets the initial constraints
of the product.  The initial bounds are set not by anyone within
your  company, but by your customers!
The sales force (marketing) represents the customer during the
planning stages of the project. Through market surveys
(perhaps just "wish-list" conversations with potential customers),
they provide these bounds.  If you can't sell it, why bother to develop it? 
	(There are some valid reasons,
	but they do not generate current business revenue.)
Engineers and managers with "their ears to the ground" may also
have some of this information.

Marketing brings "What can we sell it for?",
"What features does it have to have?", and
"When do we have to have it available for our customers?"
to the table.

Engineering brings "What is (technically) feasible?",
"What will it cost?", and "How long will it take?"
to the table.

Management brings "What resources are available?",
corporate policy/business practices, and project authorization
to the table.

Then the fun begins!?! The endless negotiations regarding
cost, content, and delivery time, until a compromise is reached
and a project plan is formulated.
(Yes, Dick, if they don't agree, the project is doomed!)

Ultimately, it is management's job to DECIDE!
That's what they get paid for, making business decisions,
like "Is the proposed project profitable for the company?"
	Unlike the army, the engineer may choose not to work
	on the project.  The only risk is unemployment.
	But the struggle is the same, the need to eat (live)
	vs. the need to retain one's ethics and self respect. 
Having decided to do the project, management now assigns the
resources for implementation. Engineering and sales have
assisted management in making this decision by providing
alternatives and evaluations.

The next phase, design, is where the engineers call the shots.
That is what they get paid for, making technical decisions.
Does this design/coding practice perform the specified task?
Does it result in a quality product? A maintainable product?
Management assists the engineers by tracking progress against
the plan, ensuring that resources remain available, and
dealing with the administrative tasks of business.
Sales may help by providing business/marketing analysis of 
some of the technical alternatives.

Later it will be marketing's turn,
deciding where to apply their efforts for maximum profit.
Engineering again provides support and assistance,
supplying technical background for the sales force.

One thing that Dick alluded to was that management and engineers
are DIFFERENT, not superior/inferior! 
Fruit and grain are different; both are needed, but neither is better.
A quality product needs to be fed a "balanced diet" of
management, sales, and engineering.  If the fruit gets left out,
then watch out for scurvy.

Steve Shapland
ico!sps 303/449-2870 x147

mikeh@dell.dell.com (Mike Hammel) (10/26/89)

In article <1989Oct24.060132.1660@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:
-[previous comments deleted, most of which I agree with]
-I am implying...nay, asserting...something much more fundamental:
-
-    An engineer is NOT subordinate to a manager.
-
-    A manager is NOT superior to an engineer.
-
-    An engineering organization is NOT a caste system.

Up to this point I agreed with how you have viewed manager/engineer relations,
but here I have to disagree.  In the sense of abilities, no, an engineer is 
not subordinate to a manager.  But in the sense of responsibilities the 
engineer must be, to some extent, subordinate.  If the engineer were not, then
very little would get done if the two were at odds.  I worked at a place not
long ago where I *very* much disagreed with my second-level manager, and told
him so at times.  But when the rest of the group agreed to abide by the
manager's decision, I had no choice but to make the best of it or become the
scapegoat for the project's downfall (which I knew would come, but I sure 
didn't want anyone dropping it on me).

-    A manager is NOT a superior, but merely a person serving a qualitatively
-    different purpose.  Management 101: A manager is an assistant to the
-    people who report to the manager.

A manager is someone who can lead and who assists, when necessary, those who
report to the manager in finding direction and achieving goals.  My difference
with your description lies with the manager's ability to lead, which I find 
very important (not all engineers are good at setting or striving for goals;
some have to be prodded along).

Note: I agreed with most of what you said, it's just that I've had a rough time
with managers these past two years who couldn't lead, and the ones that could
ended up being forced out by higher-ups who couldn't.

Michael J. Hammel   | UUCP(preferred): ...!cs.utexas.edu!dell!Kepler!mjhammel
Dell Computer Corp. | Also: ...!dell!mikeh  or 73377.3467@compuserve.com
Austin, TX	    | Phone: 512-338-4400 ext 7169  
Disclaimer equ Standard

billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe, 2847 ) (10/30/89)

 [Somehow, comp.software-eng got dropped from the newsgroup list in 
   favor of comp.sw.components; I have added it back, since this thread
   *really* belongs in comp.software-eng rather than comp.sw.components!] 

From mjl@cs.rit.edu:
>>   [I wrote:] Engineers
>>   are not, and should not be in the business of trying to be, responsible
>>   for setting cost and schedule constraints.  They ARE responsible for
>>   providing as much information as possible to management as it weighs
>>   technical and economic factors in the process of making the decision.
>
> If engineers realize that management's requests/requirements/demands 
> are impossible to achieve, they have the right -- the obligation -- 
> to object, if for no other reason than to protect the shareholders' 
> interests against the unrealizable plans of management! [...] Most 
> fiascos I've seen have resulted from management inability or 
> unwillingness to face facts: what the project would cost and 
> how long it would take.

   Perhaps so, but I think it is a gross misinterpretation of my 
   position to imply that I am suggesting that engineers should
   not make their objections known to management.  This comes under
   the part about "providing as much information as possible..."!!

   Now assume that engineering has made its views known, and that
   management sticks with its decision.  We must consider:

     1) The engineers could be wrong.  Management could be taking
        a calculated risk that research efforts taking place in
        another part of the company will provide the tools needed
        to bridge the gap, management could be dealing with a group
        of engineers which largely is fresh out of college, etc.;
        we cannot automatically discount the possibility that the
        management has access to better information.  Indeed, it is
        a primary function of management to see to it that information
        is considered on a larger scale than simply that information
        which is visible to a single department.

     2) What if management *is* wrong?  Amdahl, I believe, gambled that
        it would be possible to perform wafer-scale integration and lost;
        but that does not mean that such projects should have been met
        with engineer-level revolution.  Management may well have more
        in mind than simply achieving the direct objectives of the project.

     3) Engineers are free to resign if they believe the project and/or
        the company will go down the tubes.  Many good products have 
        resulted from engineers deciding to form their own company, 
        and I believe that engineers should be perfectly free to take 
        this route if they deem it appropriate.  Alternatively, they 
        can buy 51% of the company's stock and then fire management.  
        Or they can try to convince upper management or the stockholders
        that management is in need of closer supervision.  But it is NOT
        within their realm of options to usurp management's rights and
        responsibilities.  
         
> sometimes [good design] will mean creating a formal spec. and a 
> variety of related models to ensure a well-engineered piece of software.

    The citation did NOT imply that formal specs are not valuable,
    only that a formal specification of EVERYTHING is not presently
    applicable to many situations due to the great cost of specifying
    things with that level of detail.  Clearly, there are some situations
    in which the currently extremely high costs are justified.


    Bill Wolfe, wtwolfe@hubcap.clemson.edu
 

billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe, 2847 ) (10/30/89)

From rcd@ico.isc.com (Dick Dunn):
> (>  [I have removed comp.sw.components from the newsgroup list, since
>>   this thread ceased to be relevant to that newsgroup some time ago] 
> No, somehow you didn't.  I hope I have!)

   Well, I edited it out, and the article didn't show up in c.s.c 
   locally, so maybe there's a news software bug out there someplace.

> My argument is with the obvious connotations of control.  The schedule,
> budget, specifications, and other constraints are an *agreement* between
> the management and technical staff.  Management makes the decision of
> go/no-go on the project.  But the engineers' attitude is NOT one of compli-
> ance...it is one of agreement.  You MUST understand the difference...if the
> engineers don't agree, they won't really comply.  (The problem may stay
> below the surface for a bit, but it WILL surface.)

   It is not necessary to agree, only to have confidence that management
   knows what it is doing.  Management may well have access to more 
   information, or have subtle objectives not directly stated in the
   list of project objectives.

> Engineers do NOT disregard economic factors, for the simple reason that
> they want to eat too.

   Perhaps some do not, but they are not trained to handle them,
   and management presumably is.  Nor do they normally have the
   experience in handling economic factors that management has.

>     Organizations frequently have hierarchical structures for reasons
>     related to communication, accounting, reflection of project structural
>     hierarchy, etc.  It is critically important to the success of a tech-
>     nical organization that this hierarchy carry as little connotation as
>     possible of increasing value, power, or importance as one moves up the
>     hierarchy, [...]

   First, superior/subordinate does not necessarily reflect any sort 
   of value judgement; it implies only that one is working at a higher
   level than another, considering things from a broader perspective.

   Secondly, I would suggest that however much it may be desirable to
   have minimal value connotations associated with one's position in
   the hierarchy, the fact is that the United States is among the most
   dedicated practitioners of this sort of value judgement, as reflected
   in the ratios of CEO to worker salaries.  If indeed the lack of such
   a situation is "critically important to the success of a technical
   organization", then there must be a large number of US technical
   organizations whose catastrophic failure is clearly imminent. 

   Finally, if it is considered appropriate to remove the linkage 
   between height in the organizational hierarchy and "increasing 
   value, power, and importance", then I would submit that this is
   not the proper forum in which to discuss this rather widespread 
   sociopolitical phenomenon.


   Bill Wolfe, wtwolfe@hubcap.clemson.edu

jmi@devsim.mdcbbs.com ((JM Ivler) MDC - Douglas Aircraft Co. Long Beach, CA.) (11/14/89)

In article <944@abvax.UUCP>, ejp@vixen (Ed Prochak) writes:
>>
>>> Bill Wolfe and Dick Dunn have been discussing the roles of
>>> and relationships between engineers and managers during
>>> the software development process. This whole process should be
>>> a team effort...
>>>...Both of you seem to have missed who sets the initial constraints
>>> of the product.  The initial bounds are set not by anyone within
>>> your  company, but by your customers!
>>
>>I didn't miss it...I just didn't think that the role of the customer was
>>all that relevant to the relative roles of engineering and management.  To
>>the customer, you have to appear unified.
>>
>>Also, be careful about how you express the role of the customer.  They are
>>the final arbiters, but they aren't the designers or implementors, and you
>>want to watch that.  (Was it Wirth or Hoare who said that if there's any-
>>thing worse than "design by committee", it's "design by customer"?)
> 
> I guess I have a much different view of who the customer is. Surely the
> company ultimately must sell product to remain viable, but that doesn't
> fully define customer. In the context of quality I was taught by my
> previous employer that the customer is the next person down the line (the
> production line that is). Thus, for software developers the customer is the
> software maintenance crew. With this perspective, the idea of customer is
> greatly expanded.
> 

At DAC we are very aware of "quality" and we insist that it be built into every
plane that you fly in. More important than "quality" is the concept of "first
time quality." This means that everything is delivered to specs the first time
out.

The key is the specs. The name of the team I'm in is Software Technologies -
Tools and we are responsible for developing tools that are used in aircraft
simulations as well as the support of the software that creates those
simulations. 

Currently we are awaiting the requirements from the users (internal) that would
define an integrated software development environment. The system would perform
all the varied support functions of the software lifecycle including initial
product development, configuration management, problem reporting and tracking,
and maintenance. As the project lead, I have requested requirements from my end
users and have received designs. I have returned them and re-requested
requirements. This is a constant problem. Everyone wants to do design! Even the
customer!

Once the requirements are delivered with clear and concise milestones, then the
design can be worked. In order to create a quality product the customer *must*
provide clear, concise instructions on *what functionality they wish*, not on how
the software should perform that functionality. This also provides a clear
benchmarking milestone for testing of the completed product.

One key requirement to developing a quality product here at DAC also includes
the concept of product maintainability and integrity. Therefore we also have
the "next generation developers" as customers. These customers define the
standards that will be used in developing the code. They are also part of the
software inspection team (here we use Fagan's inspection process) and they are
required to sign off and accept the software prior to the release of the
product.

Quality is never "built into" a product unless an effort is made to do just
that at the design level. Once the requirements are obtained, it is the
analyst/developer's responsibility to look at the requirements and determine
what the next level of functionality the customer wants will be, and then plan
for that. No additional functionality is permitted to be added to the initial
product release, although the development staff can issue the requests for the
additional functionality after the initial release. Remember, the software is
to be delivered exactly as was requested, but one key of quality software is to
plan for the future growth of a product and develop a design that allows for
future enhancements.

sbang@iesd.auc.dk (Stig Bang) (03/13/91)

Greetings all professionals:

We are a group of engineering students from Aalborg University Center
(AUC), Denmark, who are currently working on aspects of achieving
Software Quality. We have read both books and articles on the general
quality aspects (Pirsig; Juran; Deming; Crosby; Feigenbaum; etc.) and
some related to software quality (Glass; Vincent, Waters & Sinclair;
etc.)

We do _not_ believe in counting significant source statements and
cyclomatic code analysis, and similar stuff as primary activities for
achieving better software quality.

Instead, we think that:

            a) Software Quality is primarily achieved through good
               management of the development process.

            b) Successful Quality Management requires motivation 
               and engagement of the individual and a professional
               attitude towards software development.

            c) The hardest task of introducing Quality Management
               in a company is the change of people's habits,
               not the application of new tools and methodologies. 

What do YOU think about Software Quality issues?  Can anyone confirm
or dispute any of these statements? What literature would you
recommend in this area?


       ___        _       _      ______      sbang@iesd.auc.dk
      / _ \      | |     | |    / _____\     Group S10D-E2218,
     / / \ \     | |     | |   / /           Aalborg University Center,
    / /___\ \    | |     | |  | |            Institute for Electronic Systems,
   / _______ \   | |     | |  | |            Department of Mathematics 
  / /       \ \   \ \___/ /    \ \______       and Computer Science,
 /_/         \_\   \_____/      \______/     Fredrik Bajersvej 7,
                                             DK-9220 Aalborg Ost, Denmark.

   The common sense principle:               Telephone: +45 98 15 85 22
     Common sense is uncommon.               Telex:     69790 aub dk
             -- Tom Gilb.                    Telefax:   +45 98 15 81 29

--
sbang@iesd.auc.dk

jgautier@vangogh.ads.com (Jorge Gautier) (03/14/91)

Congratulations!  You are on the right track.  My experience confirms
all your statements.  If these are the opinions of students, there is
hope for the software world.  Feel free to send me e-mail if you wish
to further discuss these issues.

By the way, don't be swayed by the advocates of "integrated CASE
tools" and "software metrics"-- these people are not software
developers.  I expect you'll get the inevitable DeMento :-) line of
"We can't control what we can't measure."  Here's a good reply to
that: "We can't control or meaningfully measure what we don't
understand."
--
Jorge A. Gautier| "The enemy is at the gate.  And the enemy is the human mind
jgautier@ads.com|  itself--or lack of it--on this planet."  -General Boy
DISCLAIMER: All statements in this message are false.

jgautier@vangogh.ads.com (Jorge Gautier) (03/14/91)

I guess I should clarify that last one-liner before I start getting
too many flames.  I meant to say: "We can't apply metrics that we
don't understand."  There, much better.  Flame away...
--
Jorge A. Gautier| "The enemy is at the gate.  And the enemy is the human mind
jgautier@ads.com|  itself--or lack of it--on this planet."  -General Boy
DISCLAIMER: All statements in this message are false.

root@afit.af.mil (03/14/91)

jgautier@vangogh.ads.com (Jorge Gautier) writes:

...some stuff deleted...

>By the way, don't be swayed by the advocates of "integrated CASE
>tools" and "software metrics"-- these people are not software
>developers. 

1)  This is a pretty sweeping statement.  Seems more likely to be opinion
than fact.

2)  I guess I am to infer that CASE tools and metrics are not useful.
I would say that many organizations probably put too much stock in them,
but that is not a reason to dismiss them.  At the VERY LEAST, metrics
can provide important data for trend analysis (where is our process going
wrong?).  But I suspect that as we get a better grasp on software metrics,
we will be able to do more and more with them.   You may not agree with
any of this, but I refuse to put them aside because they might have
been misapplied in the past (or because the advocates are "not software
developers").

3) You have to remember that software metrics is still a young discipline.
There are a lot of folks looking into measurement of software -- we've
come a long way from "lines of code."  By the way, research in software
metrics can contribute to our understanding of software in general (see
below).

>I expect you'll get the inevitable DeMento :-) line of
>"We can't control what we can't measure."  Here's a good reply to
>that: "We can't control or meaningfully measure what we don't
>understand."

This does not mean that metrics are not useful; only that we still
have plenty of work to do in understanding software, so that we (whoever
we are) can better measure and control (whatever that means).


David R. Luginbuhl
Air Force Institute of Technology
AFIT/ENG
Wright-Patterson AFB, OH  45433
dluginbu@galaxy.afit.af.mil

>DISCLAIMER: All statements in this message are false.

Boy, was I glad to see this.  :--)

campbelr@hpcuhe.cup.hp.com (Bob Campbell) (03/15/91)

> We do _not_ believe in counting significant source statements and
> cyclomatic code analysis, and similar stuff as primary activities for
> achieving better software quality.

Ahhhh, metrics!

As you know, there are many different metrics.  Some of them are actually
useful.  The problem that is usually encountered is that they are put on
a chart somewhere and not actually used in the way intended.


> Instead, we think that:
> 
>             a) Software Quality is primarily achieved through good
>                management of the development process.

Biggest part of it.  There is no good excuse for not thinking a
project through start to finish.  Testing should be discussed whenever
functionality is discussed.
 
>             b) Successful Quality Management requires motivation 
>                and engagement of the individual and a professional
>                attitude towards software development.

Depends on how you define successful.  It is certainly most pleasant
when all parties involved act in a professional and interested manner.
 
>             c) The hardest task of introducing Quality Management
>                in a company is the change of people's habits,
>                not the application of new tools and methodologies. 

Yes.  When deadlines rear their ugly heads, people drop back to the old
"tried and true" methods that got them out the door on time in the
past.  A part of this is that the QA function often takes on an
adversarial role rather than being a team member.
 
> What do YOU think about Software Quality issues?  Can anyone confirm
> or dispute any of these statements? What literature would you
> recommend in this area?

Read everything.  Then visit "The real world" and find out what has
been found to be useful.  There have been times when a pundit will
give a talk with a panel of disciples present.  When questioned, it
becomes apparent that the panel only uses the parts that they like
while the pundit claims it is part of an integrated whole.


---------------------------------------------------------------------------
Bob Campbell                Some times I wish that I could stop you from
campbelr@hpda.cup.hp.com    talking, when I hear the silly things you say.
Hewlett Packard                                    - Elvis Costello

warren@eecs.cs.pdx.edu (Warren Harrison) (03/15/91)

In article <dluginbu.668955936@galaxy.afit.af.mil> root@afit.af.mil writes:
>jgautier@vangogh.ads.com (Jorge Gautier) writes:
>
>...some stuff deleted...
>
>>By the way, don't be swayed by the advocates of "integrated CASE
>>tools" and "software metrics"-- these people are not software
>>developers. 
>
>1)  This is a pretty sweeping statement.  Seems more likely to be opinion
>than fact.
>

I agree ... some of us do develop software ... for example a product I
wrote with my wife just received the 1990 Computer Language Magazine
Productivity Award (other Award winners were Microsoft Windows 3.0 and
CADRE's Teamwork product), and in general received lots and lots of
good reviews in the trade press over the past 3 years or so. Our user
base is in the thousands, and we do all our own technical support and
upgrades. I don't think you'll find many people who are bigger metric
advocates than me (of course the product that won the award was a metric
analyzer - but we wrote the product *because* we were metric advocates, we
didn't become metric advocates because we wrote the product).  Granted,
it is a relatively small project (2 people working 3 years to evolve it to
its present state - but one of us has a PhD so maybe it's better to say
1.5 people working 3 years ;-)

We had both cut lots of code at a variety of places before we ever started
tackling this project. Our experiences back then is what turned us into
metric advocates.

Warren

==========================================================================
Warren Harrison                                          warren@cs.pdx.edu
Center for Software Quality Research                          503/725-3108
Portland State University/CMPS   

seb1@druhi.ATT.COM (Sharon Badian) (03/15/91)

From: news@iesd.auc.dk
>Instead, we think that:
>
>            a) Software Quality is primarily achieved through good
>               management of the development process.
>
>            c) The hardest task of introducing Quality Management
>               in a company is the change of people's habits,
>               not the application of new tools and methodologies. 

in article <JGAUTIER.91Mar13125937@vangogh.ads.com>, jgautier@vangogh.ads.com (Jorge Gautier) says:
> By the way, don't be swayed by the advocates of "integrated CASE
> tools" and "software metrics"-- these people are not software
> developers.  I expect you'll get the inevitable DeMento :-) line of
> "We can't control what we can't measure."  Here's a good reply to
> that: "We can't control or meaningfully measure what we don't
> understand."

I put these two together because I think it is important to discuss them
together. First, what does "good management of the development process"
mean? It means to make decisions about the people you use for your
development, the processes you use, the tools you use, the metrics you
use to measure progress. It means deciding to use CASE tools, because
when you get down to it, the methods and tools work! It doesn't mean
that CASE tools will solve all your problems (and any software manager
or engineer worth their salt will tell you this).

Second, what does it mean "to change people's habits?" Habits in what?
Developing software is not a bunch of habits. It's a collection of processes
that people execute to produce a product. I don't believe Mr. Deming
would like to hear anyone say the hardest task is changing people's
habits. You make it sound like something is *wrong* with the people. The
people do what they are told to do and know how to do. They will find
better ways to do what they do when given the chance and some amount
of structure around what they do.

If we are talking about the way people view quality, then yes, we often
have to change the way they think and behave. But, I view this as
going far beyond "changing habits." This involves changing the way
people view the world and approach solving problems. But, I certainly
agree that is a very difficult thing to do (not necessarily the most
difficult though).

Sharon Badian
AT&T Bell Laboratories
att!druhi!seb1

abg@mars.ornl.gov (Alex Bangs) (03/15/91)

In article <SBANG.91Mar13155652@empty4.iesd.auc.dk> sbang@iesd.auc.dk (Stig Bang) writes:
>
>Instead, we think that:
>
>            a) Software Quality is primarily achieved through good
>               management of the development process.
>
>            b) Successful Quality Management requires motivation 
>               and engagement of the individual and a professional
>               attitude towards software development.
>
>            c) The hardest task of introducing Quality Management
>               in a company is the change of people's habits,
>               not the application of new tools and methodologies. 

I agree with most of this. I am currently taking a course in cleanroom
software engineering, where the whole process emphasizes quality from
design through implementation (including proofs of the implementation), and
finally using statistical testing to provide statistics about how
confident you are in the software. In other words, if you want to have
quality, then you should be able to back it up with something.
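
The statistical part can be made concrete.  One standard (and very
simple) argument: if n test runs drawn at random from the operational
profile all pass, then the per-run failure probability is below
1 - alpha^(1/n) with confidence 1-alpha.  A sketch in C, with
hypothetical numbers (compile with -lm):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        int    n = 1000;      /* random test runs, all passing */
        double alpha = 0.05;  /* i.e. 95% confidence           */
        double bound = 1.0 - pow(alpha, 1.0 / (double)n);

        printf("p(failure per run) < %g at %.0f%% confidence\n",
               bound, (1.0 - alpha) * 100.0);
        return 0;
    }

For n = 1000 this gives a bound of about 0.003, the familiar
"rule of three" (roughly 3/n).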

The item I would add to point (b) above is to emphasize that writing
software is a form of engineering, and it should be treated with all the
rigors of engineering. The "software-as-an-art-form" approach might have
application for some people, but in any realm where money or safety is
on the line, that attitude has got to go.

Alex L. Bangs ---> bangsal@ornl.gov         Of course, my opinions are
Oak Ridge National Laboratory/CESAR            my own darned business...
Autonomous Robotic Systems Group

alan@tivoli.UUCP (Alan R. Weiss) (03/15/91)

In article <SBANG.91Mar13155652@empty4.iesd.auc.dk> sbang@iesd.auc.dk (Stig Bang) writes:
>
>Greetings all professionals:
>
>We are a group of engineering students from Aalborg University Center
>(AUC), Denmark, who are currently working with aspects of obtaining
>Software Quality. We have read both books and articles on the general
>quality aspects (Pirsig; Juran; Deming; Crosby; Feigenbaum; etc.) and
>some related to software quality (Glass; Vincent, Waters & Sinclair;
>etc.)

I commend you for studying this new engineering and business discipline.
You may choose to study other researchers in this field, such as
Fletcher Buckley, William Howden, Boris Beizer, Tom Gilb, Michael
Fagan, and others.  Keep reading this newsgroup for net.luminaries.


>We do _not_ believe in counting significant source statements and
>cyclomatic code analysis, and similar stuff as primary activities for
>achieving better software quality.

Could you please state your case as to why you do not "believe"
that these tools are useful?  I don't believe that anyone advocates
them as primary activities; rather, they are the moral equivalent
of hammers and screwdrivers: mere tools to be used with discretion.

BTW, I AM a believer in measuring cyclomatic complexity; it's a useful
data point to guide developers/engineers in re-examining specific modules,
and can help estimate test efforts.  Lines of code vs. function points ....
Regardless of which you prefer (please no more tedious flames on this),
you need to measure defect density somehow.  Any ideas from your group?
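
To make the hammers-and-screwdrivers point concrete, a deliberately
crude sketch (Python; my own illustration, not any product): a textual
count of decision points approximates McCabe's v(G) = decisions + 1
for a single function, and defect density is just defects over a size
measure whose counting rules you fix in advance.

    import re

    # Decision points in C-like source; each adds one to McCabe's v(G).
    # Crude: this textual scan does not skip comments or string literals.
    DECISION = re.compile(r'\b(?:if|while|for|case)\b|&&|\|\||\?')

    def cyclomatic_complexity(function_source):
        # Approximate v(G) = number of decision points + 1.
        return len(DECISION.findall(function_source)) + 1

    def defect_density(defects_found, ksloc):
        # Defects per thousand source lines; the counting rules for
        # both numerator and denominator must be agreed in advance.
        return defects_found / ksloc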

>
>Instead, we think that:
>
>            a) Software Quality is primarily achieved through good
>               management of the development process.


We have a saying here in America:  "mom and apple pie."  Your statement
is valid; but can you define "good management of the development
process?"


>
>            b) Successful Quality Management requires motivation 
>               and engagement of the individual and a professional
>               attitude towards software development.

Good point.  But, again, this is fairly evident.  The *real* problem
comes in analyzing WHY so many software projects seem to go
astray.  Most engineers I know WANT to produce good software.
Most Managers I know WANT their engineers to produce good
software.  So what's the *REAL* problem, folks?

Can you say, "schedule?"  :-)
 
For more information, look at Barry Boehm's papers and articles.

>
>            c) The hardest task of introducing Quality Management
>               in a company is the change of peoples' habits,
>               not the application of new tools and methodologies. 


True, so true.  The question now shifts to, "WHY don't people
want to change?"  Perhaps this young science of software QA has
not proven its value?  Most of us know better.  Unfortunately,
as in any young science, new techniques (which involve both
tools and methodologies) are created, promoted, and then the
logic of the market takes over.  Some succeed, some fail.  We need
to find a way to increase the velocity of the experiments,
trumpet the successes, and learn from the failures.

This, of course, is exactly what management expert Tom Peters calls
the "try it/break it/fix it/try it" system.  Notice that it is
indeed a management problem.

In the United States more so than in other nations, there is an
emphasis on quarterly earnings.  How does this affect product cycles,
research cycles, and software quality?  How does time-to-market
stack up against commitment-to-market?  Fruitful areas for your
group to research!!


>What do YOU think about Software Quality issues?  Can anyone confirm
>or dispute any of these statements? What literature would you
>recommend in this area?
>

Get a firm grounding in business, and you'll understand the
mise en scene we operate in. Finance, marketing, development/engineering,
quality, manufacturing, sales, and most importantly CUSTOMERS
all interplay in a maelstrom of change.  Understand it, and
you'll advance the science.  Ignore business, and you're doomed
to the backwaters of academia.  Learn from the lesson of
macroeconomics!
 


>
>       ___        _       _      ______      sbang@iesd.auc.dk
>      / _ \      | |     | |    / _____\     Group S10D-E2218,
>     / / \ \     | |     | |   / /           Aalborg University Center,
>    / /___\ \    | |     | |  | |            Institute for Electronic Systems,
>   / _______ \   | |     | |  | |            Department of Mathematics 
>  / /       \ \   \ \___/ /    \ \______       and Computer Science,
> /_/         \_\   \_____/      \______/     Fredrik Bajersvej 7,
>                                             DK-9220 Aalborg Ost, Denmark.
>
>   The common sense principle:               Telephone: +45 98 15 85 22
>     Common sense is uncommon.               Telex:     69790 aub dk
 
>             -- Tom Gilb.                    Telefax:   +45 98 15 81 29
>
>--
>sbang@iesd.auc.dk


_______________________________________________________________________
Alan R. Weiss                           TIVOLI Systems, Inc.
E-mail: alan@tivoli.com                 6034 West Courtyard Drive,
E-mail: alan@whitney.tivoli.com         Suite 210
Voice : (512) 794-9070                  Austin, Texas USA  78730
Fax   : (512) 794-0623     "I speak only for myself, not for TIVOLI"
_______________________________________________________________________

djbailey@skyler.mavd.honeywell.com (03/15/91)

In article <JGAUTIER.91Mar13181645@vangogh.ads.com>, jgautier@vangogh.ads.com (Jorge Gautier) writes:
> [...] "We can't apply metrics that we don't understand."  

First, let me recommend a couple of my favorite books: SOFTWARE SYSTEM 
TESTING AND QUALITY ASSURANCE by Boris Beizer published in 1984 by Van 
Nostrand Reinhold, ISBN 0-442-21306-9 and WICKED PROBLEMS, RIGHTEOUS 
SOLUTIONS by Peter DeGrace and Leslie Hulet Stahl published in 1990 by 
Yourdon Press.

Peter DeGrace has some excellent comments on software development, 
software engineering, and science. He compares the present state of 
the art of software development with the level of understanding of 
architecture by the builders of the great medieval cathedrals of 
Europe. We have a lot of practical knowledge but we don't understand 
the underlying science very well.

I think that is true. The general difficulty in estimating jobs and 
meeting schedules supports this view. Software development takes place 
in our heads. It is difficult to observe and measure what happens.

Now, back to the quote. The problem is that we attach too much 
significance to metrics without understanding them. If you want to 
investigate software development scientifically, you have to measure 
something. You also have to evaluate your measurement techniques. 
Developing good measurement techniques is extremely important.
You can't develop a science without measurements.

-- Don Bailey

gwharvey@lescsse.uucp (Greg Harvey) (03/18/91)

In <1991Mar15.095125.44@skyler.mavd.honeywell.com> djbailey@skyler.mavd.honeywell.com writes:

>> [...] "We can't apply metrics that we don't understand."  
[stuff deleted]
>Now, back to the quote. The problem is that we attach too much 
>significance to metrics without understanding them. If you want to 
>investigate software development scientifically, you have to measure 
>something. You also have to evaluate your measurement techniques. 
>Developing good measurement techniques is extremely important.
>You can't develop a science without measurements.

We have a quote that may apply to "measurement without understanding"--"Fundamentally, they
don't know what they're doing."

This phrase is used generically toward those (who shall remain nameless) around us who
prefer the golden goose approach to life...i.e. if we just had a golden goose (or silver
bullet, or the pot at the end of the rainbow) life would be wonderful.  Believing that
life will be wonderful does not replace sane attempts to make it so.  The attempts to
make life wonderful must be accompanied by realistic attempts to understand just how much
more wonderful life has become.  Software quality metrics are not academic exercises!

Quality is measurable.
Good quality is quantifiable.  
[We] Measure what is important.

These statements form the rhetorical foundation for a much needed "quality revolution."  
Software QA is just one area where these belief statements can be applied effectively.
Zero defects is a difficult concept to grasp because we humans lack perfection and even
have difficulty visualizing perfection.  Zero defects is an accountability method where
we introspectively examine defects in order to determine how each occurred.  If done
honestly and carefully, the person comes to the realization that faulty results follow
faulty methods.  An honest person realizes, at the exact same instant, that faulty methods
are not character flaws, but instead are opportunities for improvement!

Simple accountability, which most people avoid (myself included!), helps us do our best in 
every life situation.  Admittedly, being responsible or accountable for our actions can 
make life uncomfortable.  Software QA strives to recreate the desire for "directed 
perfection" in the software creator.  It creates this desire by measuring the effectiveness, 
as best it is able, of the creators and discussing the results with them.  Its focus should 
be enhancing the creative powers of the individual through incremental improvement of method.

We will never accomplish what we can't at least imagine!
(--I don't know who to attribute this to, but it isn't original with me!)


On the other hand...

Now, class, let's review what happens when U.S. software has 35 defects per 100 lines
of code and <Place country name here> software has 0.1 defects per 100 lines of code.
Can innovation make up for lost competitiveness due to poor quality?  I don't know; let's
ask the Detroit automakers for a historical perspective.


--
If you get the impression I'm not qualified to speak for my company, it's
because I ain't, I can't, I don't, I won't, and I don'wanna.
Greg Harvey                    Internet: lobster!lescsse!gwharvey@menudo.uh.edu
Lockheed, Houston Texas        UUCP:     lobster!lescsse!gwharvey

reggie@paradyne.com (George W. Leach) (03/19/91)

In article <1962@pdxgate.UUCP> warren@eecs.UUCP (Warren Harrison) writes:
>I don't think you'll find many people who are bigger metric
>advocates than me (of course the product that won the award was a metric
>analyzer - but we wrote the product *because* we were metric advocates, we
>didn't become metric advocates because we wrote the product) Granted,
>it is a relatively small project (2 people working 3 years to evolve it to
>its present state - but one of us has a PhD so maybe it's better to say
>1.5 people working 3 years ;-)

>We had both cut lots of code at a variety of places before we ever started
>tackling this project. Our experiences back then is what turned us into
>metric advocates.



I tried to e-mail this, but it bounced.


	What I am interested in hearing is how you utilized metrics to
measure the processes involved in developing this product over the past
three years?


-- 
George W. Leach					AT&T Paradyne 
reggie@paradyne.com				Mail stop LG-133
Phone: 1-813-530-2376				P.O. Box 2826
FAX: 1-813-530-8224				Largo, FL 34649-2826 USA

jgautier@vangogh.ads.com (Jorge Gautier) (03/20/91)

In article <gwharvey.669271366@node_25b73> gwharvey@lescsse.uucp (Greg Harvey) writes:
>   On the other hand...
>
>   Now, class, let's review what happens when U.S. software has 35 defects per 100 lines
>   of code and <Place country name here> software has 0.1 defects per 100 lines of code.

Here's a perfect example of a silly application of metrics.  What does
"defects per line of code" mean?  It depends on how you count, doesn't
it?  Show these numbers to someone who doesn't know anything about
software and they will conclude that U.S. software is of lower quality
than the unnamed country's.  Is that a valid conclusion?
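
Even the denominator is slippery.  A rough sketch (Python, purely
illustrative, with a deliberately naive comment filter) of three
common counting conventions for the same file:

    def line_counts(source):
        # Three common "lines of code" conventions for one file.
        stripped = [line.strip() for line in source.splitlines()]
        return {
            "physical lines": len(stripped),
            "non-blank lines": sum(1 for l in stripped if l),
            # Naive comment filter for C-like code.
            "non-comment lines": sum(1 for l in stripped
                                     if l and not l.startswith(("/*", "*", "//"))),
        }

The same defect count divided by each of these gives "densities" that
can differ by a factor of two or more, so comparisons across countries
(or teams) are meaningless unless the counting rules match.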
--
Jorge A. Gautier| "The enemy is at the gate.  And the enemy is the human mind
jgautier@ads.com|  itself--or lack of it--on this planet."  -General Boy
DISCLAIMER: All statements in this message are false.

jgautier@vangogh.ads.com (Jorge Gautier) (03/20/91)

The "realization that faulty results follows faulty methods" has been
with some of us even before the advent of metrics.  Good programmers
are humble enough to admit to their faults and work towards improving
their methods.  To attach metrics to this fundamental activity of
process quality improvement--"let's see, last month you did 7.2 on our
quality index, this month you did 8.1: good job!"--is ... well, I
don't have the words to describe it.  Of course, there are those who
don't understand anything unless it's a number, so whatcha gonna do...
--
Jorge A. Gautier| "The enemy is at the gate.  And the enemy is the human mind
jgautier@ads.com|  itself--or lack of it--on this planet."  -General Boy
DISCLAIMER: All statements in this message are false.

jgautier@vangogh.ads.com (Jorge Gautier) (03/20/91)

In article <dluginbu.668955936@galaxy.afit.af.mil> root@afit.af.mil writes:
>   >By the way, don't be swayed by the advocates of "integrated CASE
>   >tools" and "software metrics"-- these people are not software
>   >developers. 
>
>   1)  This is a pretty sweeping statement.  Seems more likely to be opinion
>   than fact.

The original message referred to software quality.  I was warning the
author that there are many who believe and advocate metrics and tools
as the silver bullet for low software quality.  I am reaffirming the
author's statement that programmer attitude and mentality is the most
important and essential aspect of software quality.  "A fool with a
tool is still a fool."

>   2)  I guess I am to infer that CASE tools and metrics are not useful.

No.  I didn't mean to imply that.

>   At the VERY LEAST, metrics
>   can provide important data for trend analysis (where is our process going
>   wrong?).  

I disagree.  At the very least, metrics can provide you with useless
numbers.

>   But I suspect that as we get a better grasp on software metrics,
>   we will be able to do more and more with them.

That sure would be nice.

>   There are a lot of folks looking into measurement of software -- we've
>   come a long way from "lines of code."

Who's "we"?  There's people on this net asking for lines of code tools.

>   By the way, research in software
>   metrics can contribute to our understanding of software in general (see
>   below).

Research into software development, perhaps using some kind of
metrics, can contribute to our understanding of software in general.

>   This does not mean that metrics are not useful; only that we still
>   have plenty of work to do in understanding software, so that we (whoever
>   we are) can better measure and control (whatever that means).

Agreed.

--
Jorge A. Gautier| "The enemy is at the gate.  And the enemy is the human mind
jgautier@ads.com|  itself--or lack of it--on this planet."  -General Boy
DISCLAIMER: All statements in this message are false.

norton@manta.NOSC.MIL (Scott Norton) (03/20/91)

In article <JGAUTIER.91Mar19174642@vangogh.ads.com> jgautier@vangogh.ads.com (Jorge Gautier) writes:

>[...]  To attach metrics to this fundamental activity of
>process quality improvement--"let's see, last month you did 7.2 on our
>quality index, this month you did 8.1: good job!"--is ... well, I
>don't have the words to describe it.  Of course, there are those who
>don't understand anything unless it's a number, so whatcha gonna do...

One of the central concepts of Total Quality Management is the need
for data: numerical measures of the quality of a process.  These
measures of performance are not imposed by the manager, but developed
by the people who are most familiar with the process--the employees
who actually do the work.  The manager leads the improvement of the
process, making sure the improvement is to the quality of the product,
but the task of defining the metric is the workers'.

Of course, there are other uses for software metrics: one example I saw
in this newsgroup was the rule of thumb relating source lines of
code and test time.  But for statistical quality control, in a TQM 
organization, the measurements are determined by the people who do
the process, and used by them to improve that process.

Scott A. Norton
LT          USN
norton@NOSC.MIL

jih@ox.com (John Hritz) (03/20/91)

Although I agree with your three premises, I would not rule out the use of
metrics as a method for quality improvement.  Without an objective 
measurement by which to gauge success, you're just taking pot shots at
the defect issue.  As I think you are asserting, quality starts from the
lowest levels of the development staff, but also must have buy-in from
top management.  It's a sad fact, but most development groups reward staffers
that hack together code quickly, marvelling over their productivity.  They
then elevate them to wizardhood when they call them in on nights and weekends
prior to major releases to bail the company out of a jam.  These guys are
being heralded for correcting problems in their own code!!!
-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
     John I. Hritz                               Photons have mass?!
     jih@ox.com                                       I didn't know they
     313-930-9126                                             were catholic!

djbailey@skyler.mavd.honeywell.com (03/20/91)

In article <JGAUTIER.91Mar19174642@vangogh.ads.com>, jgautier@vangogh.ads.com (Jorge Gautier) writes:
> [...] To attach metrics to this fundamental activity of
> process quality improvement--"let's see, last month you did 7.2 on our
> quality index, this month you did 8.1: good job!"--is ... well, I
> don't have the words to describe it.  [...]

Instead of "good job!", the appropriate comment is, "Do you think our
quality index is measuring something useful?"  Measurements are
important but it's just as important to avoid giving them more 
credibility than they deserve.

-- Don J. Bailey

pattis@cs.washington.edu (Richard Pattis) (03/21/91)

I once read a quote that went something like, "It is easier to measure
something than to understand what is being measured."  No doubt a physicist
said it. Anyone got a source?

Rich Pattis

daves@hpopd.pwd.hp.com (Dave Straker) (03/21/91)

> their methods.  To attach metrics to this fundamental activity of
> process quality improvement--"let's see, last month you did 7.2 on our
> quality index, this month you did 8.1: good job!"--is ... well, I
> don't have the words to describe it.  Of course, there are those who

"Process data must not be used to compare projects or individuals. Its
 purpose is to illuminate the product being developed and to provide an
 informed basis for improving the process. When such data is used by
 management to evaluate individuals or teams, the reliability of the 
 data itself will deteriorate."

     --- Watts Humphrey, "Managing the Software Process", Addison Wesley 1989

Dave Straker            Pinewood Information Systems Division (PWD not PISD)
[8-{)                   HPDESK: David Straker/HP1600/01
                        Unix:   daves@hpopd.pwd.hp.com

dougs@baldwin.WV.TEK.COM (Doug Schwartz;685-2700;61-252;Baldwin) (03/23/91)

In article <1991Mar20.043236.1236@ox.com> jih@ox.com (John Hritz) writes:
>Although I agree with your three premises, I would not rule out the use of
>metrics as a method for quality improvement.  Without an objective 
>measurement by which to gauge success, you're just taking pot shots at
>the defect issue.  As I think you are asserting, quality starts from the

And of course we are forgetting that there can be different levels of defects
or the code can function in a manner that some of us might call defective:

   Returns incorrect value
   Program won't let you do something that you think it should
   Program does something unexpected
   Inconsistent user interface
   Confusing/missing/incorrect messages
   Program bombs with tricky/difficult to reproduce input parameters
   Program looks like it works, but doesn't
   Program doesn't save work or saves junk
   Installation program mucks with user/system files at will
   Trips switch to start WWIII

Although all could be considered defects, I think the last is more serious than
any of the previous (or all combined).  What about defects by omission?
How do we rate them?  For example, if you don't test for divide-by-zero, is
this a defect?
--
        Doug Schwartz           dougs@orca.wv.tek.com
        Tektronix
        Wilsonville, OR

jgautier@vangogh.ads.com (Jorge Gautier) (03/23/91)

In article <36650001@hpopd.pwd.hp.com> daves@hpopd.pwd.hp.com (Dave Straker) writes:
>   "Process data must not be used to compare projects or individuals. Its
>    purpose is to illuminate the product being developed and to provide an
>    informed basis for improving the process. When such data is used by
>    management to evaluate individuals or teams, the reliability of the 
>    data itself will deteriorate."

If this process data is to be used to improve the process, it must be
pretty damn good.  I mean, it must be measuring the methods used for
development and their causal relationships to the quality of the
product.  Care to share these metrics with the rest of us?  I'm
serious.  At least provide some references, please.
--
Jorge A. Gautier| "The enemy is at the gate.  And the enemy is the human mind
jgautier@ads.com|  itself--or lack of it--on this planet."  -General Boy
DISCLAIMER: All statements in this message are false.

khb@chiba.Eng.Sun.COM (Keith Bierman fpgroup) (03/23/91)

In article <10412@orca.wv.tek.com> dougs@baldwin.WV.TEK.COM (Doug Schwartz;685-2700;61-252;Baldwin) writes:

   any of the previous (or all combined).  What about defects by omission?
   How do we rate them? e.g. if you don't test for divide-by-zero, is this a
   defect?

Testing for divide by zero was rendered obsolete by IEEE 754. +-Inf
and NaN are perfectly valid states.

What your program should do with them varies with the application domain.
But testing for zero before the divide isn't necessary (nor, in fact,
desirable). 
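
A small illustration of the behavior (Python with numpy, chosen here
purely for convenience; the point is the IEEE 754 semantics, not the
library):

    import numpy as np

    # Under IEEE 754, finite/0 yields +-Inf and 0/0 yields NaN; both
    # are representable values, not errors.
    with np.errstate(divide='ignore', invalid='ignore'):
        x = np.float64(1.0) / np.float64(0.0)   # inf
        y = np.float64(0.0) / np.float64(0.0)   # nan

    # Check results where the application needs a finite value,
    # instead of guarding every divide:
    for v in (x, y):
        if not np.isfinite(v):
            print("non-finite result:", v)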
--
----------------------------------------------------------------
Keith H. Bierman    kbierman@Eng.Sun.COM | khb@chiba.Eng.Sun.COM
SMI 2550 Garcia 12-33			 | (415 336 2648)   
    Mountain View, CA 94043

kambic@iccgcc.decnet.ab.com (03/26/91)

In article <36650001@hpopd.pwd.hp.com>, daves@hpopd.pwd.hp.com (Dave Straker) writes:
[...SLIGHTLY EDITED...]
>> their methods.  To attach metrics to this fundamental activity of
>> process quality improvement--"let's see, last month THE CODE WAS 7.2 on our
>> quality index, this month THE CODE 8.1: good CODE!"--is ... well, I
>> don't have the words to describe it.  Of course, there are those who
> 
> "Process data must not be used to compare projects or individuals. Its
>  purpose is to illuminate the product being developed and to provide an
>  informed basis for improving the process. When such data is used by
>  management to evaluate individuals or teams, the reliability of the 
>  data itself will deteriorate."
> 
>      --- Watts Humphrey, "Managing the Software Process", Addison Wesley 1989
No question about the statement.  Now, even with all of the good words, how do
we get Humphrey's principle into a useful paradigm?  If x produces buggy code
from month to month to month, does management let it continue, or do they get
this person into a bug-reducing class?  Or do they continue to rework the code
because the "data must not be used..." to evaluate x?  Tough problem.  This may
be taking the problem to an extreme, but if the data indicate that the problem
may lie in a person's techniques or style, what do you do if you are management
or a member of a team?

GXKambic
standard disclaimer

ewoods@hemel.bull.co.uk (Eoin Woods) (03/26/91)

dougs@baldwin.WV.TEK.COM (Doug Schwartz;685-2700;61-252;Baldwin) writes:
>In article <1991Mar20.043236.1236@ox.com> jih@ox.com (John Hritz) writes:
>>Although I agree with your three premises, I would not rule out the use of
>>metrics as a method for quality improvement.  Without an objective 
>>measurement by which to gauge success, you're just taking pot shots at
>>the defect issue.  As I think you are asserting, quality starts from the

>And of course we are forgetting that there can be different levels of defects
>or the code can function in a manner that some of us might call defective:

>   Returns incorrect value
     [ ... deleted  ... ]
>   Trips switch to start WWIII

>Although all could be considered defects, I think the last is more serious than
>any of the previous (or all combined).  What about defects by omission?
>How do we rate them? e.g. if you don't test for divide-by-zero, is this a
>defect?
So we attempt to classify defects and then assess our software quality
taking the relative severities of the defects found into account (e.g.
A - major system defect rendering it unusable; B - intermediate defect
requiring actual user action to work around; C - minor defect that,
while it does not require the user to take action to avoid it, should
not be present).  Obviously systems with many type 'A' faults are worse
than those with many type 'C' (and should never see customers in that
state anyway).
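
One illustrative way to fold such a classification into a single
tracking number (a sketch; the weights are an invented assumption that
each team must choose for itself):

    # Illustrative severity weights for the A/B/C classes above.
    WEIGHTS = {'A': 10, 'B': 3, 'C': 1}

    def weighted_defect_index(counts):
        # counts maps a severity class to the number of defects found,
        # e.g. {'A': 1, 'B': 4, 'C': 20} -> 10 + 12 + 20 = 42.
        return sum(WEIGHTS[sev] * n for sev, n in counts.items())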

As to defects by omission, they can mostly be classified in the same way.

Eoin.
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~    Eoin Woods, Software Development Group, Bull HN Information Systems,   ~
~                Maxted Road, Hemel Hempstead, Herts HP2 7DZ, UK.           ~
~                Tel : +44 442 232222 x4823   Fax : +44 442 236072          ~
~      < Eoin.Woods@hemel.bull.co.uk  or   ...!uunet!ukc!brno!ewoods>       ~
~          < When do we start news group comp.os.emacs ?  :-) >             ~

norton@manta.NOSC.MIL (Scott Norton) (03/27/91)

In article <3955.27ee3172@iccgcc.decnet.ab.com> kambic@iccgcc.decnet.ab.com writes:
>In article <36650001@hpopd.pwd.hp.com>, daves@hpopd.pwd.hp.com (Dave Straker) writes:
>[...SLIGHTLY EDITED...]
>>>  To attach metrics to this fundamental activity of
>>> process quality improvement--"let's see, last month THE CODE WAS 7.2 on our
>>> quality index, this month THE CODE 8.1: good CODE!"--is ... well, I
>>> don't have the words to describe it.
>> 
>> "Process data must not be used to compare projects or individuals. Its
>>  purpose is to illuminate the product being developed and to provide an
>>  informed basis for improving the process. When such data is used by
>>  management to evaluate individuals or teams, the reliability of the 
>>  data itself will deteriorate."
>> 
>>      --- Watts Humphrey, "Managing the Software Process", Addison Wesley 1989
>No question about the statement.  Now, even with all of the good words, how do
>we get Humphrey's principle into a useful paradigm.  If x produces buggy code
>from month to month to month, does management let it continue or do they get
>this person into a bug reducing class.  Or do they continue to rework the code
>because the "data must not be used.." to evaluate x.  Tough problem.  This may
>be taking the problem to an extreme but if data indicates that the problem may
>be in a person's techniques or style, what do you do if you are management or a
>member of a team?

You have to "embrace the new philosophy" and "Drive out fear."

The TQM way of using metrics is that the metric that quantifies the
bugginess of the code was developed in the first place by people
who were part of the code-writing process: maybe a spec writer, a
coder, and a tester.  _They_ are the ones who decided that
mistakes in the code-writing process might be causing poor quality
in their product (internal product, such as the modules they write).
So, they establish the metrics and apply them.  They notice that coder
"X" has a higher bug rate, and ask management to get "X" more training.

In the jargon of TQM, the group that tries the metric and evaluates
its result is a "Process Action Team."  It is composed of the experts
on the process: the people who do it.  It can be augmented by outside
specialists.  The classic example is a statistician, but here an expert
in software metrics could be called in to give advice; the Process
Action Team will still apply the metrics, evaluate the results, and
recommend options to improve the process.

Finally, the team will continue to monitor the process, and see that
sending coder "X" to school actually helped.

I'm not trying to minimize the difficulty of choosing good metrics
and applying them properly.  In this example, I might look at counting
modules that failed unit test, and maybe group them by some causes
like "Ambiguous spec", "Math error", "Typo", "Control Flow Error",
and "All Others", and then if it turns out that All Others has
the only significant data, break it down again.  But I'm not in the
process; I don't know it the way those programmers do.
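
That cause-grouping is essentially a Pareto tally.  A minimal sketch
(Python; the module names and causes are hypothetical):

    from collections import Counter

    # Hypothetical failure log kept by the team: (module, cause) pairs.
    failures = [
        ("parse.c", "Ambiguous spec"),
        ("calc.c",  "Math error"),
        ("calc.c",  "Typo"),
        ("io.c",    "All Others"),
    ]

    # Which causes dominate?  If "All Others" wins, split that bucket
    # and tally again, as suggested above.
    for cause, n in Counter(c for _, c in failures).most_common():
        print(cause, n)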

LT Scott A. Norton, USN  <norton@NOSC.MIL>

dparter@shorty.cs.wisc.edu (David Parter) (04/03/91)

>In article <36650001@hpopd.pwd.hp.com>, daves@hpopd.pwd.hp.com (Dave Straker) writes:
>> [someone else writes]:
>> 
>> "Process data must not be used to compare projects or individuals. Its
>>  purpose is to illuminate the product being developed and to provide an
>>  informed basis for improving the process. When such data is used by
>>  management to evaluate individuals or teams, the reliability of the 
>>  data itself will deteriorate."
>> 
>>      --- Watts Humphrey, "Managing the Software Process", Addison Wesley 1989

[reformatted so it is more readable]

> No question about the statement.  Now, even with all of the good words,
> how do we get Humphrey's principle into a useful paradigm.  If x
> produces buggy code from month to month to month, does management let
> it continue or do they get this person into a bug reducing class.  Or
> do they continue to rework the code because the "data must not be
> used.." to evaluate x.  Tough problem.  This may be taking the problem
> to an extreme but if data indicates that the problem may be in a
> person's techniques or style, what do you do if you are management or a
> member of a team?

If a specific member of the team is producing buggy code from month to
month, that is, the quality of his or her work is significantly worse
than that of the rest of the team, the other members of the team,
probably including management, already know this. 

I was about to go on about what metrics are good for, but it is covered
in the second sentence of the quote above from Humphrey:

	"[Process data] ... Its purpose is to illuminate the product
	 being developed and to provide an informed basis for improving
	 the process."

--david
-- 
david parter					dparter@cs.wisc.edu

saxena@motcid.UUCP (Garurank P. Saxena) (04/04/91)

dparter@shorty.cs.wisc.edu (David Parter) writes:

>>In article <36650001@hpopd.pwd.hp.com>, daves@hpopd.pwd.hp.com (Dave Straker) writes:
>> No question about the statement.  Now, even with all of the good words,
>> how do we get Humphrey's principle into a useful paradigm.  If x
>> produces buggy code from month to month to month, does management let
>> it continue or do they get this person into a bug reducing class.  Or
>> do they continue to rework the code because the "data must not be
>> used.." to evaluate x.  Tough problem.  This may be taking the problem
>> to an extreme but if data indicates that the problem may be in a
>> person's techniques or style, what do you do if you are management or a
>> member of a team?

>If a specific member of the team is producing buggy code from month to
>month, that is, the quality of his or her work is significantly worse
>than that of the rest of the team, the other members of the team,
>probably including management, already know this. 

The issue raised by Dave Straker is germane to the area of metrics and
measurements in general. More so, because managers of software engineers
have very little objective data on which to  base their evaluation of these 
engineers. Thus, as soon as they get these metrics, they will start 
using them to evaluate the person's abilities. So far so good. However, if 
the manager takes positive steps and tries to improve the person's 
capabilities, that is a good use of the metrics data. What engineers
fear most is that the metrics will be utilised in a manner which will 
affect them negatively. 
From my own previous experience, when we were writing the Software Requirement 
Specifications for a new product, we decided to measure the number of 
changes made per draft version of the Specifications document. 
This measure was put in the Revision History page of the 
Specification document itself. The idea behind this was to have 
some measure of changes occurring in this document over time. 
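
Mechanizing that count is straightforward if the drafts exist as plain
text.  A rough sketch using Python's difflib, where "one change" means
one contiguous changed region (itself a counting convention the team
has to agree on):

    import difflib

    def changes_between_drafts(old_text, new_text):
        # Count contiguous changed regions (insert/delete/replace
        # hunks) between two drafts of the document.
        sm = difflib.SequenceMatcher(None, old_text.splitlines(),
                                     new_text.splitlines())
        return sum(1 for op, *_ in sm.get_opcodes() if op != 'equal')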

All the team members agreed that this was a good idea. However, 
when the time came to issue the Specifications document to the 
world (which included the upper management), all the
engineers screamed and shouted about removing the numbers from the
Revision History page of the document, as it would reflect on the
quality of the people working on this document. The final decision
was to leave the numbers in, but not explain what they meant!

As Humphrey says in a recent paper ("Predicting (Individual)
Software Productivity", IEEE Transactions on S/W Engg, Feb. 1991),
"In estimating the productivity of software individuals and groups,
it is important to have an objective method for utilizing 
available data. Without such a basis, it is hard to guard against the 
pressure to use productivity values that give the most desirable
business results."

As a colleague of mine puts it, the ideal way to use metrics is
akin to the confessional box in church.  You report your metrics
without being identified, they are never to be used against you, and
all is forgiven.

------------------------------------------------------------------
Garurank P Saxena		"when the fight begins within
CASE Development Group 		  himself, then a man's worth
Motorola Inc			  something".
Arlington Heights, Illinois	Phone: (708) - 632 - 4757
				Fax:   (708) - 632 - 4430
------------------------------------------------------------------

mct@praxis.co.uk (Martyn Thomas) (04/10/91)

I certainly agree with Stig Bang's original points. I recommend that his
group looks at some of the successful quality management system standards,
for example, ISO 9001 as applied to software.

At Praxis, we have five years' experience of working within an (originally
BS 5750, now) ISO 9001 regime. On average, we get three independent audit
visits (with two days' warning) each year from the British Standards
Institution. They can call up any project file, and any company activity,
and inspect to see that our announced quality standards are being followed.

Clearly, following standards doesn't guarantee quality (whatever you choose
to mean by the word) but it shows that your processes are probably
repeatable and under control. That would seem to be a minimum requirement
for any engineering process (which I believe sw-eng to be) and a
prerequisite for quality improvement.

There are military standards, too, such as the NATO AQAP 1 and 13.

Software Engineering is, by my definition:

1	Applied Computer Science  +
2	Controlled and repeatable processes +
3	Effective quality management.

Formal methods and ISO 9001 are two essential building-blocks, in my
opinion.

BTW, I recommend "Strategies for Software Engineering", M A Ould, Wiley
1990. (He's a colleague at Praxis, but don't hold that against him!).
-- 
Martyn Thomas, Praxis plc, 20 Manvers Street, Bath BA1 1PX UK.
Tel:	+44-225-444700.   Email:   mct@praxis.co.uk

kambic@iccgcc.decnet.ab.com (04/13/91)

In article <1991Apr2.200958.8208@spool.cs.wisc.edu>, dparter@shorty.cs.wisc.edu (David Parter) writes:
>>In article <36650001@hpopd.pwd.hp.com>, daves@hpopd.pwd.hp.com (Dave Straker) writes:

> If a specific member of the team is producing buggy code from month to
> month, that is, the quality of his or her work is siginificantly worse
> than that of the rest of the team, the other members of the team,
> probably including management, already know this. 
> 
> I was about to go on about what metrics are good for, but it is covered
> in the second sentence of the quote above from Humphry:
> 
> 	"[Process data] ... Its purpose is to illuminate the product
> 	 being developed and to provide an informed basis for improving
> 	 the process."

I don't think that there is an answer here yet.  If the other members of the
team and management know about the bug issue, how did they find out about it? 
Is it quantitatively within the bounds of acceptable number of bugs for the
current stage of the product and company, or is it greater than that bound?  If
the number is greater, then what do you do?  Quoting Humphrey is not a solution
to the problem.  If a problem resides in the code coming from one person,
then how do you improve the process without taking some action with respect to
that person that includes the measured information on bug rates?  What
does the team do, ignore it?  What does management do?  Say that they know that
the code is buggy but that no action can be taken because that action would be
using a metric to evaluate a person?  How about an answer?  What do you do? 
Say you are in a level five organization, and want to improve the process but
you have to keep removing bugs introduced by one person.  What is the action
you take?  

GXKambic
standard disclaimer

kambic@iccgcc.decnet.ab.com (04/13/91)

In article <4909@berry7.UUCP>, saxena@motcid.UUCP (Garurank P. Saxena) writes:
> dparter@shorty.cs.wisc.edu (David Parter) writes:
> So far so good. However, if 
> the manager takes positive steps and tries to improve the person's 
> capabilities, that is a good use of the metrics data. What engineers
> fear most is that the metrics will be utilised in a manner which will 
> affect them negatively. 
These are messages that top management must pound into everyone.  Both
engineers and management are measuring to improve, not to evaluate, and
both must learn to take risks: the engineers the risk of presenting
metrics data so that everyone knows product/process status, and
management the risk of waiting to let the engineers apply the metrics
to improve the process.

GXKambic
standard disclaimer