[comp.software-eng] bridge building and discipline

kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) (05/07/91)

In article <1991May3.234349.14026@auto-trol.com>, alesha@auto-trol.com (Alec Sharp) writes:
> In article <1991May3.142824.208@keinstr.uucp> chaplin@keinstr.uucp (chaplin) writes:
>>In article <9105012313.AA23259@enuxha.eas.asu.edu> koehnema@enuxha.eas.asu.edu (Harry Koehnemann) writes:
>>>In article <1259@grapevine.EBay.Sun.COM> chrisp@regenmeister.EBay.Sun.COM (Chris Prael) writes:
[...arguments removed...]

> I just happen to think that C only works with
> software engineering if discipline is mixed in, and this seems to be a
> commodity in short supply, certainly among most of the software 
> developers I've worked with in the last ten years.
> 
> So, people being people, I sort of figure I'm safer if no one has
> guns, and I sort of figure people do better software engineering if
> they have a language more conducive to it than C.

This is dangerously close to big-brotherism.  Since people are people and 
can't be taught or persuaded to be disciplined, remove <things> that permit 
them to be undisciplined and therefore easier to control.  

George X. Kambic
Disclaimers reduce the need for lawyers.

jls@netcom.COM (Jim Showalter) (05/09/91)

>> I just happen to think that C only works with
>> software engineering if discipline is mixed in, and this seems to be a
>> commodity in short supply, certainly among most of the software 
>> developers I've worked with in the last ten years.
>> 
>> So, people being people,
>> I sort of figure people do better software engineering if
>> they have a language more conducive to it than C.

>This is dangerously close to big-brotherism.  Since people are people and 
>can't be taught or persuaded to be disciplined, remove <things> that permit 
>them to be undisciplined and therefore easier to control.  

If someone with an eye on the bottom line determines that undisciplined
programmers are costing the company money, it is NOT big brotherism to
impose a more software-engineering-oriented language on them: this is
called "capitalism".

kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) (05/14/91)

In article <1991May9.053311.800@netcom.COM>, jls@netcom.COM (Jim Showalter) writes:
[...]
> 
> If someone with an eye on the bottom line determines that undisciplined
> programmers are costing the company money, it is NOT big brotherism to
> impose a more software-engineering-oriented language on them: this is
> called "capitalism".

This is a point that I thought I raised earlier, and on which I  received zero
feedback.  I will paraphrase: The Watts Humphrey (and others) paradigm is that
the process must be measured, not the people executing the process.  Then
whatever is wrong with the process must be fixed.  This implies that
no one is somehow responsible for a failure in the process.  There seems to 
be a furious effort to neglect the reality of this point.  The effort to
collect meaningful metrics data will make it possible to determine if
individual contributors are not meeting project goals.  At what point does the
"bottom-line" person determine the state and fate of this person?  The
paraphrased statement attributed to Humphrey implies never.  This is unreal.

Comments?

GXKambic
standard disclaimer

jgautier@vangogh.ads.com (Jorge Gautier) (05/14/91)

In article <4563.282e83ea@iccgcc.decnet.ab.com> kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) writes:
> In article <1991May9.053311.800@netcom.COM>, jls@netcom.COM (Jim Showalter) writes:
> [...]
> > 
> > If someone with an eye on the bottom line determines that undisciplined
> > programmers are costing the company money, it is NOT big brotherism to
> > impose a more software-engineering-oriented language on them: this is
> > called "capitalism".
>
> This is a point that I thought I raised earlier, and on which I  received zero
> feedback.  I will paraphrase: The Watts Humphrey (and others) paradigm is that
> the process must be measured, not the people executing the process.  Then
> whatever is wrong with the process must be fixed.  This implies that
> no one is somehow responsible for a failure in the process.  There seems to 
> be a furious effort to neglect the reality of this point.

Incompetence is a taboo subject, just like anything that happens below
the waist.  It's real and it happens all the time, but nobody wants to
talk about it (including me :).

> The effort to
> collect meaningful metrics data will make it possible to determine if
> individual contributors are not meeting project goals.  At what point does the
> "bottom-line" person determine the state and fate of this person?  The
> paraphrased statement attributed to Humphrey implies never.  This is unreal.

I think the desire for metrics is an admission by management types
that they really don't know what's going on in their projects.  Good
managers are able to tell if the project goals are being met, who's
doing well and who's screwing up without any metrics.  Metrics mania
indicates that someone doesn't understand software development and/or
they're desperate to figure out what's causing poor quality or project
failure.  The "logical" arguments for metrics and the assurances that
they won't be used for evaluating people are merely persuasive
techniques used to facilitate the establishment of a questionable
practice (just like the ads that tell us how "beef fits into today's
balanced diets.")
--
Jorge A. Gautier| "The enemy is at the gate.  And the enemy is the human mind
jgautier@ads.com|  itself--or lack of it--on this planet."  -General Boy
DISCLAIMER: All statements in this message are false.

kummer@possum.den.mmc.com (Jim Kummer) (05/14/91)

>I think the desire for metrics is an admission by management types
>that they really don't know what's going on in their projects.  Good
>managers are able to tell if the project goals are being met, who's
>doing well and who's screwing up without any metrics.  Metrics mania
>indicates that someone doesn't understand software development and/or
>they're desperate to figure out what's causing poor quality or project
>failure.

Metrics are not generally needed by the line supervisors and managers 
directly over a project.  Rather, the metrics are needed by the 
managers' managers, and theirs above them, who admittedly do not 
know what's going on on the project.  In an atmosphere of mutual trust 
and bilateral acceptance of responsibility, the need for metrics is 
lessened, but not removed.

---- Jim Kummer -------- kummer@pogo.den.mmc.com -----------
disclaimer:  the above opinions are mine, and not necessarily those of 
   anyone else.

-- 
 
-- Regards ----- James Kummer --- Martin Marietta Corporation --
----- kummer@pogo.den.mmc.com --- Information and Communication Systems --
 

jls@netcom.COM (Jim Showalter) (05/16/91)

I find it odd that metrics have been
successfully used in just about all other industries to improve quality,
reduce risk, identify incipient problems, increase productivity, etc, but
that many in the software industry feel that metrics cannot do the same
for software. The argument appears to be that metrics "cramp" a programmer's
"creativity", which seems to be the same argument used against any other
attempt to impose some sense of engineering discipline on the "art" of
writing software.

At root, I think we are in the midst of a paradigm shift, from software
as an artistic pursuit to software as an engineered commodity. It is not
surprising that those from the old culture do not welcome the shift to
the new culture, any more than those with a geocentric model of the
universe welcomed the shift to a heliocentric model. Their resistance to
the changes manifests itself in a variety of ways, from arguments against
software patents ("monopolistic", "stifles free exchange of ideas", etc)
to arguments against software metrics and methodologies. But, business
is business, and, in the end, things that cost businesses time and money
will be displaced by things that improve the bottom line. It will just
take time, as do all cultural shifts, particularly since many of those
currently managing software projects are themselves products of the
culture being displaced.
-- 
**************** JIM SHOWALTER, jls@netcom.com, (408) 243-0630 *****************
* Proven solutions to software problems. Consulting and training in all aspects*
* of software development. Management/process/methodology. Architecture/design/*
* reuse. Quality/productivity. Risk reduction. EFFECTIVE OO techniques. Ada.   *

alesha@auto-trol.com (Alec Sharp) (05/16/91)

>
>> The effort to
>> collect meaningful metrics data will make it possible to determine if
>> individual contributors are not meeting project goals.  At what point does the
>> "bottom-line" person determine the state and fate of this person?  The
>> paraphrased statement attributed to Humphrey implies never.  This is unreal.
>
>I think the desire for metrics is an admission by management types
>that they really don't know what's going on in their projects.  Good
>managers are able to tell if the project goals are being met, who's
>doing well and who's screwing up without any metrics.  Metrics mania
>indicates that someone doesn't understand software development and/or
>they're desperate to figure out what's causing poor quality or project
>failure.  The "logical" arguments for metrics and the assurances that
>they won't be used for evaluating people are merely persuasive
>techniques used to facilitate the establishment of a questionable
>practice (just like the ads that tell us how "beef fits into today's
>balanced diets.")
>--

Why do so many people posting news assume that we are all perfect?
Half the software developers out there are below average.  Half the 
managers are below average.  I'm sure there are substantial numbers of
managers who don't know what's going on in their projects and yes, are
desperate to figure out what's causing poor quality.  Why penalize
them just because they aren't "good" managers?  They are what their
organizations have, so they may as well use whatever tools and
techniques best help them meet the goals of their organizations.

Please, let's get away from this elitism in posting and start
addressing the "normal" software development process where many of the
people are below average and the procedures and tools must be able to
address their needs and the needs of their organizations as they
struggle to meet quality and schedule goals.

Alec...



















-- 
------Any resemblance to the views of Auto-trol is purely coincidental-----
Don't Reply - Send mail: alesha%auto-trol@sunpeaks.central.sun.com
Alec Sharp           Auto-trol Technology Corporation
(303) 252-2229       12500 North Washington Street, Denver, CO 80241-2404

orville@weyrich.UUCP (Orville R. Weyrich) (05/16/91)

In article <1991May15.223719.10256@auto-trol.com> alesha@auto-trol.com (Alec Sharp) writes:
>
>Why do so many people posting news assume that we are all perfect?
>Half the software developers out there are below average.  Half the 
>managers are below average.  I'm sure there are substantial numbers of
>managers who don't know what's going on in their projects and yes, are
>desperate to figure out what's causing poor quality.  Why penalize
>them just because they aren't "good" managers?  They are what their
>organizations have, so they may as well use whatever tools and
>techniques best help them meet the goals of their organizations.
>
>Please, let's get away from this elitism in posting and start
>addressing the "normal" software development process where many of the
>people are below average and the procedures and tools must be able to
>address their needs and the needs of their organizations as they
>struggle to meet quality and schedule goals.

Please, let's devote our energies to finding ways to empower the half of
the world which is below average to perform like the "elite" do today.
Of course this will also probably cause the performance of the elite to
improve still further, so we will be caught in an ever-deepening spiral of
IMPROVEMENT :-).



--------------------------------------           ******************************
Orville R. Weyrich, Jr., Ph.D.                   Certified Systems Professional
Internet: orville%weyrich@uunet.uu.net             Weyrich Computer Consulting
Voice:    (602) 391-0821                         POB 5782, Scottsdale, AZ 85261
Fax:      (602) 391-0023                              (Yes! I'm available)
--------------------------------------           ******************************

chrisp@regenmeister.EBay.Sun.COM (Chris Prael) (05/17/91)

From article <1991May15.180943.6796@netcom.COM>, by jls@netcom.COM (Jim Showalter):
> I find it odd that metrics have been
> successfully used in just about all other industries to improve quality,
> reduce risk, identify incipient problems, increase productivity, etc, but
> that many in the software industry feel that metrics cannot do the same
> for software. 

The problem is comparing apples and figs.  In the industries in which
metrics have been used successfully, they have been applied to the
manufacturing and service processes of the industries.  No one has ever
successfully applied metrics to the design and development  processes 
of any industry outside of software.  So why would anyone who is not a 
sleepwalker conclude that metrics could be successfully applied to the 
design and development process in software?

Chris Prael

jgautier@vangogh.ads.com (Jorge Gautier) (05/17/91)

In article <1991May15.223719.10256@auto-trol.com> alesha@auto-trol.com (Alec Sharp) writes:
> >I think the desire for metrics is an admission by management types
> >that they really don't know what's going on in their projects.  Good
> >managers are able to tell if the project goals are being met, who's
> >doing well and who's screwing up without any metrics.  Metrics mania
> >indicates that someone doesn't understand software development and/or
> >they're desperate to figure out what's causing poor quality or project
> >failure.  The "logical" arguments for metrics and the assurances that
> >they won't be used for evaluating people are merely persuasive
> >techniques used to facilitate the establishment of a questionable
> >practice (just like the ads that tell us how "beef fits into today's
> >balanced diets.")
> >--
> Why do so many people posting news assume that we are all perfect?

Where in my posting did I assume that everybody's perfect?

> Half the software developers out there are below average.  Half the 
> managers are below average.

Really?  Well, if this were so, then the other halves would be above
average, wouldn't they?  Are you sure you don't mean median instead of
average?  If you're talking average, the majority could be either
above or below the average.  And what's the metric here anyway?

> I'm sure there are substantial numbers of
> managers who don't know what's going on in their projects and yes, are
> desperate to figure out what's causing poor quality.  Why penalize
> them just because they aren't "good" managers?  They are what their
> organizations have, so they may as well use whatever tools and
> techniques best help them meet the goals of their organizations.

I don't see any penalization in my posting.  I'm not trying to prevent
anyone from using metrics.  I agree with your last sentence.  Let
everyone use whatever they want.  I'm just presenting my opinions
which are based on experience and observation.  Nobody has to agree
with them.  If you don't want to see them, just put me in your news
kill file.

> Please, let's get away from this elitism in posting and start
> addressing the "normal" software development process where many of the
> people are below average and the procedures and tools must be able to
> address their needs and the needs of their organizations as they
> struggle to meet quality and schedule goals.

You're free to address whatever you want in your postings.  I will do
the same.  Whining because others don't address your concerns strikes me
as unreasonable in an open public forum.
--
Jorge A. Gautier| "The enemy is at the gate.  And the enemy is the human mind
jgautier@ads.com|  itself--or lack of it--on this planet."  -General Boy
DISCLAIMER: All statements in this message are false.

rbe@yrloc.ipsa.reuter.COM (Robert Bernecky) (05/17/91)

In article <1991May14.150350.2837@den.mmc.com> kummer@possum.den.mmc.com (Jim Kummer) writes:
>>I think the desire for metrics is an admission by management types
>>that they really don't know what's going on in their projects.  Good
>>managers are able to tell if the project goals are being met, who's
>>doing well and who's screwing up without any metrics.  Metrics mania
>>indicates that someone doesn't understand software development and/or
>>they're desperate to figure out what's causing poor quality or project
>>failure.


I thought this was true for a long time, even when I was running a group
of about 25 people responsible for the design, development, and  support of
SHARP APL, which was used for some very critical applications, such as 
merchant banks dealing in large amounts of $$. 

We developers had an attitude that "we knew what we were doing, and how 
long it would take, etc".

Charles Chandler was on the receiving end of customer flak, and spent a lot
of time convincing us that without measurement, we could NOT make 
such claims.    It took me more than 20 years in the software biz to 
realize he was right. 

Furthermore, Chan and Hiroshi Isobe, of Hitachi, led the way in
encouraging us to adopt formal testing of our code.  Isobe was, in fact,
incredulous that we allowed code to escape without formal review.

We (developers) had an ego problem, which might be stated as:
   "Why should I spend time writing test suites, when I could be writing
      new code? I KNOW my code works!"

I instigated a program at I.P. Sharp to require a minimal degree of
code coverage of all changed modules, and the results were surprising to
all of us.  My own "extensively tested" code, when I subjected it to a
simple code coverage test ("provide a suite which executes ALL instructions
in the code", not even testing all paths!), showed up a number of system
crash bugs, even though the code had been running for several months on
internal systems.

Other people in the group, although initially skeptical, became
enthused for reasons similar to mine.
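
The same idea fits in a few lines of C.  What follows is only a sketch,
with an invented classify() routine and hand-placed COVER() marks (real
coverage tools instrument the code for you), but it shows how a "run
every statement at least once" check exposes the branch the test suite
never reaches:

#include <stdio.h>

#define NMARKS 4
static int hits[NMARKS];              /* one counter per marked statement */
#define COVER(id) (hits[(id)]++)      /* record that a statement ran      */

static int classify(int c)
{
    if (c >= '0' && c <= '9') { COVER(0); return 1; }  /* digit */
    if (c == '+' || c == '-') { COVER(1); return 2; }  /* sign  */
    if (c == EOF)             { COVER(2); return 0; }  /* end   */
    COVER(3); return 3;                                /* other */
}

int main(void)
{
    int i;

    /* a test suite that never pushes EOF through classify() */
    classify('7');
    classify('-');
    classify('x');

    for (i = 0; i < NMARKS; i++)
        if (hits[i] == 0)
            printf("coverage hole: mark %d never executed\n", i);
    return 0;
}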

This approach also holds for project scheduling, in my book:

If you educate programmers on WHEN they are late, and try to learn WHY
they are late (are programmers EVER early?), then they will
adopt those methods which tell them why, and STOP BEING LATE!

If you don't believe this, give it a FAIR try on a large project.  If
it works, great!  If it doesn't work, try to understand why.


Bob
>
>Metrics are not generally needed by the line supervisors and managers 
>directly over a project.  Rather, the metrics are needed by the 
>managers' managers, and theirs above them, who admittedly do not 
>know what's going on on the project.  In an atmosphere of mutual trust 
>and bilateral acceptance of responsibility, the need for metrics is 
>lessened, but not removed.
>
>---- Jim Kummer -------- kummer@pogo.den.mmc.com -----------
>disclaimer:  the above opinions are mine, and not necessarily those of 
>   anyone else.
>
>-- 
> 
>-- Regards ----- James Kummer --- Martin Marietta Corporation --
>----- kummer@pogo.den.mmc.com --- Information and Communication Systems --
> 

Robert Bernecky      rbe@yrloc.ipsa.reuter.com  bernecky@itrchq.itrc.on.ca 
Snake Island Research Inc  (416) 368-6944   FAX: (416) 360-4694 
18 Fifth Street, Ward's Island
Toronto, Ontario M5J 2B9 
Canada

orville@weyrich.UUCP (Orville R. Weyrich) (05/17/91)

In article <JGAUTIER.91May16132945@vangogh.ads.com> jgautier@vangogh.ads.com (Jorge Gautier) writes:
>In article <1991May15.223719.10256@auto-trol.com> alesha@auto-trol.com (Alec Sharp) writes:
>> Half the software developers out there are below average.  Half the 
>> managers are below average.
>
>Really?  Well, if this were so, then the other halves would be above
>average, wouldn't they?  Are you sure you don't mean median instead of
>average?  If you're talking average, the majority could be either
>above or below the average.  And what's the metric here anyway?

The word "average" is a bit ambiguous.  It can mean "arithmetic mean", "median",
or something else.  Most often it means "arithmetic mean".  In the case where
the distribution is assumed to be normal (very common, because of the
central limit theorem), "arithmetic mean" and "median" are both estimators
for the same quantity.
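
A small worked example, with made-up numbers, shows why the distinction
matters to the "half are below average" claim: for a skewed sample the
mean and the median can sit far apart, and only the median guarantees
that half the values fall below it.

#include <stdio.h>

int main(void)
{
    /* hypothetical "productivity" scores, already sorted */
    double x[] = { 1, 2, 2, 3, 3, 4, 25 };
    int n = sizeof x / sizeof x[0];
    double sum = 0.0, mean, median;
    int i, below = 0;

    for (i = 0; i < n; i++)
        sum += x[i];
    mean = sum / n;              /* 40/7, roughly 5.7 */
    median = x[n / 2];           /* middle value: 3   */

    for (i = 0; i < n; i++)
        if (x[i] < mean)
            below++;

    printf("mean = %.2f, median = %.2f, %d of %d below the mean\n",
           mean, median, below, n);
    return 0;
}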

--------------------------------------           ******************************
Orville R. Weyrich, Jr., Ph.D.                   Certified Systems Professional
Internet: orville%weyrich@uunet.uu.net             Weyrich Computer Consulting
Voice:    (602) 391-0821                         POB 5782, Scottsdale, AZ 85261
Fax:      (602) 391-0023                              (Yes! I'm available)
--------------------------------------           ******************************

jgautier@vangogh.ads.com (Jorge Gautier) (05/18/91)

In article <1991May17.064623.23670@yrloc.ipsa.reuter.COM> rbe@yrloc.ipsa.reuter.COM (Robert Bernecky) writes:
<deleted>

If you had seen my earlier postings, you would know that I do not
object to useful measurements of code such as branch coverage.  I do
object to metrics that are invalid for their intended purpose,
unreliable and full of unrealistic assumptions.  Why do people think
that if one metric is useful, all metrics must be?

jls@netcom.COM (Jim Showalter) (05/18/91)

>The problem is comparing apples and figs.  In the industries in which
>metrics have been used successfully, they have been applied to the
>manufacturing and service processes of the industries.  No one has ever
>successfully applied metrics to the design and development  processes 
>of any industry outside of software.  So why would anyone who is not a 
>sleepwalker conclude that metrics could be successfully applied to the 
>design and development process in software?

I think the apples and figs here are due to a completely different
perspective on programming as a creative enterprise. As far as I'm
concerned, programming-in-the-small is a solved problem, so much
so that it SHOULD be treated like an assembly line process. What,
after all, is so creative about implementing 9 line class member
functions? Design is a creative process, but implementation is in
many ways no more remarkable than shingling a roof, and just as
amenable to statistical controls.
-- 
**************** JIM SHOWALTER, jls@netcom.com, (408) 243-0630 *****************
* Proven solutions to software problems. Consulting and training in all aspects*
* of software development. Management/process/methodology. Architecture/design/*
* reuse. Quality/productivity. Risk reduction. EFFECTIVE OO techniques. Ada.   *

hlavaty@CRVAX.Sri.Com (05/21/91)

In article <4563.282e83ea@iccgcc.decnet.ab.com>, kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) writes...
>In article <1991May9.053311.800@netcom.COM>, jls@netcom.COM (Jim Showalter) writes:
>[...]
>>
>The effort to
>collect meaningful metrics data will make it possible to determine if
>individual contributors are not meeting project goals.  At what point does the
>"bottom-line" person determine the state and fate of this person?  The
>paraphrased statement attributed to Humphrey implies never.  This is unreal.

Try reading "Controlling Software Projects" by Tom DeMarco and Tim Lister.  They
propose a system where the metrics person is NOT the management chain - in fact,
it is forbidden for the metrics person to EVER reveal a specific name to a 
management person.  The reason is exactly as you describe - once anyone got
burned because of the metric data, the accuracy of all subsequent data is shot.
Of course, getting management to swallow this is the hard part...:=)  However,
if they do, at least they get metrics data that shows them more than they would
without it.  Another basic premise of the concept is that most people want to 
do a good job, and if the metrics person shows an individual how they rate 
compared to everyone else, most will try and maximize the metric result to make
themselves look (and feel) good.  In practice, I'll bet the metrics person has
to be a very smooth talker! :=)
By the way, I don't see this as really relevant to the original point of 
imposing basic software engineering principles or HOLs that have been proven
successful by others.  Collecting individual metrics is a much bigger step
fraught with a lot more problems, such as the ones you describe.

Jim Hlavaty

kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) (05/21/91)

In article <JGAUTIER.91May13125016@vangogh.ads.com>, jgautier@vangogh.ads.com (Jorge Gautier) writes:
> In article <4563.282e83ea@iccgcc.decnet.ab.com> kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) writes:
>> In article <1991May9.053311.800@netcom.COM>, jls@netcom.COM (Jim Showalter) writes:
>> [...]
> I think the desire for metrics is an admission by management types
> that they really don't know what's going on in their projects.  Good
> managers are able to tell if the project goals are being met, who's
> doing well and who's screwing up without any metrics.

I agree that a good manager can "sense" trouble before he can measure it,
but how does one know if project goals are being met if
there is no measurement of what's going on?  I also agree that the desire for
metrics is based on a desire to know, but if you don't want to know quantitatively
about the project, what is it that you want to know, and of what value is that
information to understanding where the project is?

GXKambic
standard disclaimer

jgautier@vangogh.ads.com (Jorge Gautier) (05/21/91)

In article <4639.283807a0@iccgcc.decnet.ab.com> kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) writes:
> I agree that a good manager can "sense" trouble before he can measure it,
> but how does one know if project goals are being met if
> there is no measurement of what's going on?

What are the project goals?  So many bugs per line of code, so much
code exercised by tests, so many bugs found per hour, etc.?  Or a
satisfied customer?  Can today's metrics really predict time, cost and
customer satisfaction for all the different kinds of projects that are
undertaken?

> I also agree that the desire for
> metrics is based on a desire to know, but if don't want to know quantitatively
> about the project, what is it that you want to know, and of what value is that
> information to understanding where the project is?

I'm not saying I don't want to know quantitative information, but I
think qualitative information is much more valuable and reliable.
Assuming you can interpret it, which requires knowledge about and
experience in software development and usage.  Decision making by
numbers is a poor substitute for qualitative judgement, although it
makes for good entertainment if you can afford the time.
--
Jorge A. Gautier| "The enemy is at the gate.  And the enemy is the human mind
jgautier@ads.com|  itself--or lack of it--on this planet."  -General Boy
DISCLAIMER: All statements in this message are false.

hlavaty@CRVAX.Sri.Com (05/21/91)

In article <JGAUTIER.91May20191237@vangogh.ads.com>, jgautier@vangogh.ads.com (Jorge Gautier) writes...
>In article <4639.283807a0@iccgcc.decnet.ab.com> kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) writes:

>I'm not saying I don't want to know quantitative information, but I
>think qualitative information is much more valuable and reliable.
>Assuming you can interpret it, which requires knowledge about and
>experience in software development and usage.  Decision making by
>numbers is a poor substitute for qualitative judgement, although it
>makes for good entertainment if you can afford the time.

Noooooooooooo!  I totally disagree with this.  Quantitative information must
by definition be more valuable and reliable than qualitative information.  
Whatever numbers make up the quantitative information are indisputable 
(assuming the data was gathered correctly) - they become a fact.  What the
numbers *mean* is usually still open to interpretation, but at least two or
more people have a common reference point from which to discuss things.  The
problem with qualitative information is that it is NOT reliable as soon as
more than one person enters the discussion.  I do not have your values, 
experiences, or knowledge (at least, not necessarily) so I can not possibly
accept qualitative information/decisions from you with the same impact that 
the information has on you.  How do *I* know you're right?  Why do I care? What
if it's my job to care?  Am I supposed to accept your "feelings" on a multi-
million dollar development?  If you had a fine track record on a similar
project in the past, I just might do that.  But if I am not familiar with
your past history, or this project is new for you, I cannot in good conscience
accept your unsubstantiated opinion.

Where qualitative judgements really break down is when two people with
different opinions view the same situation differently.  Who's right?  Many
times you just wind up in a pissing contest (to think how many meetings I've
sat in and watched that happen!).  Qualitative information is by definition
only reliable to yourself.  If that's all that matters in your organization
(if so, I would assume it to be small), you can get away with it.  I would
still say you're missing an opportunity, however.

The advantage of metrics is that they facilitate a common ground for discussion
between people and organizations.  You may disagree with me what the numbers
mean, but we now have a common reference point.  The real trick to metrics is
really just to start measuring *something*.  After trial and error you will
arrive at "things" to track that will work for you and your organization (all
of which are peculiar, IMHO).  What you are after are "early warnings" of
impending problems that allow you time to fix them up front - before they
are problems.  I would argue that someone with a lot of experience that isn't
using metrics consciously is actually using them unconsciously (or intuitively).


Jim Hlavaty
Standard Disclaimers Apply

jls@netcom.COM (Jim Showalter) (05/22/91)

>The advantage of metrics is that they facilitate a common ground for discussion
>between people and organizations.  You may disagree with me what the numbers
>mean, but we now have a common reference point.  The real trick to metrics is
>really just to start measuring *something*.  After trial and error you will
>arrive at "things" to track that will work for you and your organization (all
>of which are peculiar, IMHO).  What you are after are "early warnings" of
>impending problems that allow you time to fix them up front - before they
>are problems.  I would argue that someone with a lot of experience that isn't
>using metrics consciously is actually using them unconsciously (or intuitively)

Indeed. Think of metrics like the SAT college admissions test. It doesn't
purport to measure intelligence, it just claims to be a reasonably accurate
predictor of success in college. The evidence supports this claim: SOMETHING
that has some bearing on success in college is being measured by the SAT's,
since those with lower scores tend to do worse in college. So, here is an
example of a metric that seems to be a good early warning of future success
in college, and yet WHY it does so has never been established. Basically,
they tuned the tests experimentally to get the correlation they desired:
the tests were retro-engineered from the results. There is no reason to
assume that similar techniques will not work for software metrics.
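
The machinery behind "tuning a test for correlation" is nothing more
exotic than a correlation coefficient between the candidate metric and
the outcome it is supposed to predict.  Here is a sketch in C with
invented sample values (a per-unit complexity score against defects
later found in that unit); link with -lm:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* invented figures: complexity score vs. defects later found */
    double metric[]  = { 3, 5, 8, 12, 15, 20 };
    double outcome[] = { 1, 2, 2,  5,  7, 11 };
    int i, n = 6;
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0, r;

    for (i = 0; i < n; i++) {
        sx  += metric[i];
        sy  += outcome[i];
        sxx += metric[i] * metric[i];
        syy += outcome[i] * outcome[i];
        sxy += metric[i] * outcome[i];
    }
    /* Pearson correlation coefficient */
    r = (n * sxy - sx * sy) /
        sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));

    printf("correlation between metric and outcome: r = %.2f\n", r);
    return 0;
}

Nothing in the arithmetic cares whether the metric is an SAT score or a
complexity count; the hard part is collecting outcomes honest enough to
tune against.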

Furthermore, the above discussion assumes that there are no existing metrics
that have any validity, a claim I'm quite suspicious of. I should think
that, at a minimum, metrics that measure the percent reuse, the number of
bugs per source line, the number of edit/compile/debug iterations per
component, and the degree of complexity per component should all be useful
and valid measures of project sanity. The argument that bugs per line is
meaningless because complexity varies from unit to unit can be partially
offset by normalization relative to the complexity measure for the unit;
more importantly, even if this normalization is not performed, I think it
is of GREAT value to a project manager to be able to peer into the code
for a project and find, at a glance, the top 5% in terms of bugginess,
iterations, and complexity. What if, for example, having such metrics on
hand the project manager was able to determine that the vast majority of
the problems seemed to be confined to a couple of subsystems (say, the
user interface and the database)? Wouldn't this be a good thing to know?
If I had that kind of information on hand as a manager, the next logical
step would be to do some additional data collection to try to determine
if the subsystems in question needed more resource, a longer schedule,
a redesign, some diagnostic prototyping, etc. The metrics would allow
me to FOCUS my attention where it was most needed. Without such metrics,
I'm basically flying blindfolded.
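
As a sketch of how little machinery that at-a-glance view needs, here is
a toy C program that ranks components by defect density.  The component
names and counts are invented; a real project would feed in figures from
its own defect-tracking and configuration-management systems.

#include <stdio.h>
#include <stdlib.h>

struct comp {
    const char *name;
    int defects;        /* bugs logged against the component */
    int lines;          /* size of the component, in lines   */
    double density;     /* computed: defects per 1000 lines  */
};

static int by_density(const void *a, const void *b)
{
    const struct comp *ca = a, *cb = b;
    if (ca->density < cb->density) return  1;   /* sort worst first */
    if (ca->density > cb->density) return -1;
    return 0;
}

int main(void)
{
    struct comp c[] = {
        { "user interface", 42,  9000, 0.0 },
        { "database",       31,  7000, 0.0 },
        { "report writer",   4, 12000, 0.0 },
        { "math library",    2,  8000, 0.0 },
    };
    int i, n = sizeof c / sizeof c[0];

    for (i = 0; i < n; i++)
        c[i].density = 1000.0 * c[i].defects / c[i].lines;

    qsort(c, n, sizeof c[0], by_density);

    for (i = 0; i < n; i++)
        printf("%-15s %6.2f defects/KLOC\n", c[i].name, c[i].density);
    return 0;
}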

There sure is a lot of resistance to this rather simple idea. I think
it comes from the same source as bad code in the first place: the ego-
driven hacker culture fearing that all the "fun" will be taken out
of programming if it has to become a real branch of engineering. And
yet, what an immature attitude this really is. If a project of which
one is a member is in trouble, it is in the best interests of all concerned
that steps be taken to bail out the project, right? If taking such
steps results in the exposure of incompetence, poor design, sloppy
coding, etc, tough--there is absolutely NO business case for allowing
such things to exist in the first place, let alone persist. It's not
a matter of cramping artistic expression: it's a simple matter of
running a business instead of an artists' collective. I'll believe
that discipline is a bad thing when I see hardware engineers, civil
engineers, mechanical engineers, and aerospace engineers start hacking
THEIR stuff together instead of following disciplined engineering
practices.
-- 
**************** JIM SHOWALTER, jls@netcom.com, (408) 243-0630 ****************
*Proven solutions to software problems. Consulting and training on all aspects*
*of software development. Management/process/methodology. Architecture/design/*
*reuse. Quality/productivity. Risk reduction. EFFECTIVE OO usage. Ada/C++.    *

jgautier@vangogh.ads.com (Jorge Gautier) (05/23/91)

In article <24563@unix.SRI.COM> hlavaty@CRVAX.Sri.Com writes:
> more than one person enters the discussion.  I do not have your values, 
> experiences, or knowledge (at least, not necessarily) so I can not possibly
> accept qualitative information/decisions from you with the same impact that 
> the information has on you.  How do *I* know you're right?  Why do I care? What
> if it's my job to care?  Am I supposed to accept your "feelings" on a multi-
> million dollar development?  If you had a fine track record on a similar
> project in the past, I just might do that.  But if I am not familiar with
> your past history, or this project is new for you, I cannot in good conscience
> accept your unsubstantiated opinion.

If you don't trust me, why the heck did you hire me to do the job?
Why don't you just do it yourself if you're the only one who can
interpret information and make decisions?  Don't you think that I can
substantiate my "feelings"?  (Or does everything have to look like a
number? :)  Who's got a problem here, the person who is reporting
something or the person who refuses to accept the report?  If the
report is irrelevant to the project, just say so.  If you don't want
to accept it because it doesn't come in "metric normal form," that is
your problem.

> Where qualitative judgements really break down is when two people with
> different opinions view the same situation differently.  Who's right?  Many
> times you just wind up in a pissing contest (to think how many meetings I've
> sat in and watched that happen!).  Qualitative information is by definition
> only reliable to yourself.  If that's all that matters in your organization
> (if so, I would assume it to be small), you can get away with it.  I would
> still say you're missing an opportunity, however.

I would say you're the one who's missing an opportunity if you're
ignoring qualitative information.  Many things that happen at the
reality level have not been quantified, and yet they can have a
dramatic impact on the project.  For example, how do you measure a bad
design?  This can be very problem-dependent; are you including a model
of the problem in your metrics?  Or would you wait until it is
implemented so you can measure the defect density, because this is
*something* (in your words) that can be measured?  What is your early
warning for this situation?

> The advantage of metrics is that they facilitate a common ground for discussion
> between people and organizations.  You may disagree with me what the numbers
> mean, but we now have a common reference point.  The real trick to metrics is
> really just to start measuring *something*.  After trial and error you will
> arrive at "things" to track that will work for you and your organization (all
> of which are peculiar, IMHO).  What you are after are "early warnings" of
> impending problems that allow you time to fix them up front - before they
> are problems.  I would argue that someone with a lot of experience that isn't
> using metrics consciously is actually using them unconsciously (or intuitively).

"When searching for the least common denominator, beware of the
occasional division by zero."

A disadvantage of metrics is that it is easier to LIE with them.  I
recall a meeting with a previous manager during which he "re-weighted"
the severity of the outstanding defects in a system because, well,
they weren't really that bad, were they.  These were cases where the
system clearly did not satisfy the requirements, and worse.  Our
common ground for discussion simply facilitated the lie.  Nobody had
the guts to say "we don't care if the system doesn't work sometimes,"
they simply changed the numbers to make everything look good.  Similar
problem with other metrics:  An unreported defect is not a measured
defect.  The number of reported defects does not tell you how many
defects the system has.  The rate of defect discovery depends on how
hard you look for defects, and this is not constant over time.  The
amount of time spent looking for defects and/or the reporting and
classification of defects can be fudged to make the rate look better
or worse.  Lines of code or other size metrics can be twisted to make
defect density look better or worse.  Test coverage metrics don't tell
you how good your test suites are.  Etcetera.  If you think that the
numbers correspond to the reality of the project and that a change in
one implies a change in the other, think again.
--
Jorge A. Gautier| "The enemy is at the gate.  And the enemy is the human mind
jgautier@ads.com|  itself--or lack of it--on this planet."  -General Boy
DISCLAIMER: All statements in this message are false.

hlavaty@CRVAX.Sri.Com (05/24/91)

In article <JGAUTIER.91May22143130@vangogh.ads.com>, jgautier@vangogh.ads.com (Jorge Gautier) writes...
>In article <24563@unix.SRI.COM> hlavaty@CRVAX.Sri.Com writes:
>> more than one person enters the discussion.  I do not have your values, 
>> experiences, or knowledge (at least, not necessarily) so I can not possibly
>> accept qualitative information/decisions from you with the same impact that 
>> the information has on you.  How do *I* know you're right?  Why do I care? What
>> if it's my job to care?  Am I supposed to accept your "feelings" on a multi-
>> million dollar development?  If you had a fine track record on a similar
>> project in the past, I just might do that.  But if I am not familiar with
>> your past history, or this project is new for you, I cannot in good conscience
>> accept your unsubstantiated opinion.
> 
>If you don't trust me, why the heck did you hire me to do the job?
>Why don't you just do it yourself if you're the only one who can
>interpret information and make decisions?  Don't you think that I can
>substantiate my "feelings"?  (Or does everything have to look like a
>number? :)  Who's got a problem here, the person who is reporting
>something or the person who refuses to accept the report?  If the
>report is irrelevant to the project, just say so.  If you don't want
>to accept it because it doesn't come in "metric normal form," that is
>your problem.

I almost certainly *didn't* hire the person who did the job.  The contract
was awarded to a company, who decides who will manage the software development
as they see fit.  Alternatively, if I am a manager responsible for you and
your project directly, metrics offer a way for you to "substantiate" your
opinion of the health and status of your development.  I don't like to use
the phrase "don't you trust me" because it turns the issue into one of
personal integrity.  I don't view it that way at all; things are assumed to
be at a professional level.  What we're talking about here is merely an
engineering judgement, which I don't see as related to your personal integrity.
As a rule, I assume everyone has personal integrity until proven otherwise.  
Also as a rule, I question everyone's engineering judgement until I have 
worked with them enough to trust them implicitly.

Unfortunately, I seem to have conveyed the impression that I view metrics as the
sole way of knowing the health/status of a development.  This is not my
position.  I view metrics as a useful tool to try and cut through the
opinions/values/subjectiveness of a discussion and lead discussion to the areas
that warrant attention (see my post Metric Example).  If a metric is showing
something negative, does it mean the program is failing?  Not necessarily, but
it almost always points to a possible problem that may (or may not) need 
fixing.  One of the interesting things about metrics is the more you use
them together, the more powerful they each become.  One metric by itself
doesn't mean much, but five of them each measuring something different about
a development start to give a good picture of the health/status when looked
at in total, and form a good basis for a management discussion.

>> Where qualitative judgements really break down is when two people with
>> different opinions view the same situation differently.  Who's right?  Many
>> times you just wind up in a pissing contest (to think how many meetings I've
>> sat in and watched that happen!).  Qualitative information is by definition
>> only reliable to yourself.  If that's all that matters in your organization
>> (if so, I would assume it to be small), you can get away with it.  I would
>> still say you're missing an opportunity, however.
> 
>I would say you're the one who's missing an opportunity if you're
>ignoring qualitative information.  Many things that happen at the
>reality level have not been quantified, and yet they can have a
>dramatic impact on the project.  For example, how do you measure a bad
>design?  This can be very problem-dependent; are you including a model
>of the problem in your metrics?  Or would you wait until it is
>implemented so you can measure the defect density, because this is
>*something* (in your words) that can be measured?  What is your early
>warning for this situation?

Surprise!  I agree with you 100%.  I certainly don't ignore the qualitative
information.  That is another piece of the puzzle.  But it does have to be
taken with a grain of salt (or metrics). :=)

Measuring a bad design is very difficult.  So is finding out that you have
one when you're not measuring it (unless you are the developer.  The problem
arises when the developer either isn't aware of it or chooses not to do
anything about it).  I would try at all costs to find out that I had a bad
design before I got to code testing.  Errors are a lot easier to 
fix up front than later.  If I am a customer rep, I do spot checks on design
units to give me some indication (I pick the units, which gets around the
phenomenon you describe below).  If I am a manager, I make sure my team (or
manager(s) below me) are doing smart things like design walkthroughs.  I am
not aware of any well established metric here except for token/node analysis,
which can require a significant amount of extra work.  I suppose the spot
sampling of design units could be a metric (if all four that I looked at had
problems, it's time to look VERY CLOSELY at what's going on).

> 
>> The advantage of metrics is that they facilitate a common ground for discussion
>> between people and organizations.  You may disagree with me what the numbers
>> mean, but we now have a common reference point.  The real trick to metrics is
>> really just to start measuring *something*.  After trial and error you will
>> arrive at "things" to track that will work for you and your organization (all
>> of which are peculiar, IMHO).  What you are after are "early warnings" of
>> impending problems that allow you time to fix them up front - before they
>> are problems.  I would argue that someone with a lot of experience that isn't
>> using metrics consciously is actually using them unconsciously (or intuitively).
> 
>"When searching for the least common denominator, beware of the
>occasional division by zero."
> 
>A disadvantage of metrics is that it is easier to LIE with them.  I
>recall a meeting with a previous manager during which he "re-weighted"
>the severity of the outstanding defects in a system because, well,
>they weren't really that bad, were they.  These were cases where the
>system clearly did not satisfy the requirements, and worse.  Our
>common ground for discussion simply facilitated the lie.  Nobody had
>the guts to say "we don't care if the system doesn't work sometimes,"
>they simply changed the numbers to make everything look good.  Similar
>problem with other metrics:  An unreported defect is not a measured
>defect.  The number of reported defects does not tell you how many
>defects the system has.  The rate of defect discovery depends on how
>hard you look for defects, and this is not constant over time.  The
>amount of time spent looking for defects and/or the reporting and
>classification of defects can be fudged to make the rate look better
>or worse.  Lines of code or other size metrics can be twisted to make
>defect density look better or worse.  Test coverage metrics don't tell
>you how good your test suites are.  Etcetera.  If you think that the
>numbers correspond to the reality of the project and that a change in
>one implies a change in the other, think again.

Again, I have to agree with you for the most part.  But this behavior is
exactly why metrics are necessary.  People with the attitude you describe 
are the bane of the manager.  They purposely distort things for the short
term gain, leaving the fall guy later down in the development stream. 
Without metrics, how can you ever get a straight answer out of them?  They
will tell you everything is fine, and you have nothing to go on to get a 
better answer out of them.  As you point out, even by using metrics a 
person bent on distorting the picture can still muddy up the waters.  My
contention is that they can't do it as much.  With metrics, you now have a
"rope" on them that they have to deal with.  Over time, you can keep throwing
ropes on them until you do get the real picture, despite their best efforts.
For example, let's take defect density, which you correctly point out is
directly dependent on how much effort you spend looking for problems.  Well,
let's measure that too in conjunction with defect density and plot them
both over time on the same graph!  "Gee, Jorge, you found a lot of errors
last month, yet this month you spent almost no time looking for new errors.
No wonder you didn't find any new ones!  Why did you make that decision?"
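
A toy version of that combined picture, in C, with invented monthly
figures: tabulating defects found alongside the hours spent looking makes
it obvious when a "quiet" month merely reflects that nobody was testing.

#include <stdio.h>

int main(void)
{
    /* invented monthly figures for one project */
    const char *month[] = { "Feb", "Mar", "Apr", "May" };
    double found[]      = {  30,    24,     3,     2 };   /* defects reported    */
    double hours[]      = { 160,   150,    20,    15 };   /* hours spent testing */
    int i, n = 4;

    printf("%-4s %9s %7s %12s\n", "mon", "defects", "hours", "defects/hr");
    for (i = 0; i < n; i++)
        printf("%-4s %9.0f %7.0f %12.2f\n",
               month[i], found[i], hours[i],
               hours[i] > 0.0 ? found[i] / hours[i] : 0.0);
    return 0;
}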

Unfortunately, the above discussion focuses on the "metrics as oppression"
approach whereby a manager can use them to control someone below him
that he doesn't trust.  This is possible (over time), but misses the best use
of metrics.  When the developers do not have the attitudes described above,
but are really interested in the long term quality of a project, the metrics
become an invaluable control mechanism to highlight areas that MAY need 
attention, and can communicate these areas quite effectively to upper
management.  "Look, boss, see how our unit testing is falling behind our
projected rate?  If we don't fix this soon, we're not going to make 
schedule.  We've traced the problem to the fact that we don't have enough
time on the mainframe.  Do you think you could get us more time?"

More food for thought.  And let me say that I am thoroughly enjoying these
discussions.  (yes, this is a thinly veiled attempt at flame reduction) :=)

Jim Hlavaty

kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) (05/29/91)

In article <24527@unix.SRI.COM>, hlavaty@CRVAX.Sri.Com writes:
> In article <4563.282e83ea@iccgcc.decnet.ab.com>, kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) writes...
>>In article <1991May9.053311.800@netcom.COM>, jls@netcom.COM (Jim Showalter) writes:
>>[...]
[...]
> 
> Try reading "Controlling Software Projects" by Tom DeMarco and Tim Lister.  They
> propose a system where the metrics person is NOT the management chain - in fact,
> it is forbidden for the metrics person to EVER reveal a specific name to a 
> management person.  The reason is exactly as you describe - once anyone got
> burned because of the metric data, the accuracy of all subsequent data is shot.
> Of course, getting management to swallow this is the hard part...:=)  However,
> if they do, at least they get metrics data that shows them more than they would
> without it.  
I know what you mean.  This is a good approach, in fact very much like the 
position I am in right now, and I understand the "sales" problem.  I think that 
I am raising discussions on this point because management won't accept the
Humphrey viewpoint blindly, not if they have any good management sense. 

> In practice, I'll bet the metrics person has to be a very smooth talker! :=)
I'm probably the counterexample.
> By the way, I don't see this as really relevant to the original point of 
> imposing basic software engineering principles or HOLs that have been proven
> successful by others.  Collecting individual metrics is a much bigger step
> fraught with a lot more problems, such as the ones you describe.
Applying anything someone else has done is impossible, because we're
"different".  Even "top" level metrics have ramifications for groups that cause
concerns in those groups.  No one wants to be associated with the "failed"
project.

GXKambic
standard disclaimer

kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) (05/29/91)

In article <JGAUTIER.91May20191237@vangogh.ads.com>, jgautier@vangogh.ads.com (Jorge Gautier) writes:
> In article <4639.283807a0@iccgcc.decnet.ab.com> kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) writes:
[...]
> 
> I'm not saying I don't want to know quantitative information, but I
> think qualitative information is much more valuable and reliable.
> Assuming you can interpret it, which requires knowledge about and
> experience in software development and usage.  

Interesting point.  What qualitative information do you collect, and how do you
interpret it?  Not trying to be funny here, but is it 2x, 10x or 100x more
valuable than quantitative information?  Better yet, what relative worth does
it have to specific metrics?  Guess I am trying to quantify qualitative
indicators, but this would help point to where we should be spending our time
in assessing sw projects.

GXKambic
standard disclaimer

alan@tivoli.UUCP (Alan R. Weiss) (06/01/91)

Hiya, George!  Good to talk with you over the phone.

In article <4707.284370a9@iccgcc.decnet.ab.com> kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) writes:
>In article <24527@unix.SRI.COM>, hlavaty@CRVAX.Sri.Com writes:
>> In article <4563.282e83ea@iccgcc.decnet.ab.com>, kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) writes...
>>>In article <1991May9.053311.800@netcom.COM>, jls@netcom.COM (Jim Showalter) writes:
>>>[...]
>[...]
>> 
>> Try reading "Controlling Software Projects" by Tom DeMarco and Tim Lister.  They
>> propose a system where the metrics person is NOT the management chain - in fact,
>> it is forbidden for the metrics person to EVER reveal a specific name to a 
>> management person.  The reason is exactly as you describe - once anyone got
>> burned because of the metric data, the accuracy of all subsequent data is shot.
>> Of course, getting management to swallow this is the hard part...:=)  However,
>> if they do, at least they get metrics data that shows them more than they would
>> without it.  
>I know what you mean.  This is a good approach, in fact very much like the 
>position I am in right now, and I understand the "sales" problem.  I think that 
>I am raising discussions on this point because management won't accept the
>Humphrey viewpoint blindly, not if they have any good management sense. 

Watts Humphrey is a very bright person, but it's clear from the SEI
(Software Engineering Institute at Carnegie-Mellon) program that
Watts never had to answer to the board of directors, stockholders,
or customers.

>> In practice, I'll bet the metrics person has to be a very smooth talker! :=)
>I'm probably the counterexample.

Hey, me too, me too!  Smooth talk is helpful, but sincerity and a
willingness to modify your position (i.e. say you're WRONG if you ARE
wrong) are even better.


>GXKambic
>standard disclaimer



_______________________________________________________________________
Alan R. Weiss                           TIVOLI Systems, Inc.
E-mail: alan@tivoli.com                 6034 West Courtyard Drive,
E-mail: alan@whitney.tivoli.com	        Suite 210
Voice : (512) 794-9070                  Austin, Texas USA  78730
Fax   : (512) 794-0623
_______________________________________________________________________

jgautier@biscuit.ads.com (Jorge Gautier) (06/02/91)

In article <4708.28437219@iccgcc.decnet.ab.com> kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) writes:
> Interesting point.  What qualitative information do you collect, and how do you
> interpret it?  

Nothing too fancy, just things that are mostly obvious to anyone who
has developed or maintained software.  For example, a piece of
code with lots of global variables, hidden inheritance hierarchies,
spaghetti control flow and no design documentation or internal
comments will be difficult to maintain.  Another example: releasing
code without testing it is usually dangerous.  It can even be based on
theory, like for example, trying to recognize C statements
incrementally without a token stream that includes both EOFs.  If you
see things like this in your project, you may be in for some
interesting times.   Conversely, if you see well-organized, documented
and readable code, regression testing before release, or designs based
on well-established theory, many problems are prevented and it gives
you a little more confidence in the project.

> Not trying to be funny here, but is it 2x, 10x or 100x more
> valuable than quantitative information?  Better yet, what relative worth does
> it have to specific metrics?  Guess I am trying to quantify qualitative
> indicators, but this would help point to where we should be spending our time
> in assessing sw projects.

Well, I just haven't seen any quantitative information that will help
me predict the kinds of problems that I can predict based on
information like the examples above.  What are the quantitative
metrics that can predict the problems/cost of bad design, bad code or
bad process, and take into account problem and project idiosyncrasies?
This is not a rhetorical question; I really would like to see
something more formalized along these lines.  Until then, I will trust
qualitative information more than the quantitative metrics I have seen
in application.
--
Jorge A. Gautier| "The enemy is at the gate.  And the enemy is the human mind
jgautier@ads.com|  itself--or lack of it--on this planet."  -General Boy
DISCLAIMER: All statements in this message are false.

kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) (06/06/91)

In article <JGAUTIER.91Jun1141401@biscuit.ads.com>, jgautier@biscuit.ads.com (Jorge Gautier) writes:
> In article <4708.28437219@iccgcc.decnet.ab.com> kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) writes:
>> Interesting point.  What qualitative information do you collect, and how do you
>> interpret it?  
> Nothing too fancy, just things that are mostly obvious to anyone who
> has developed or maintained software.  For example, a piece of
> code with lots of global variables, hidden inheritance hierarchies,
> spaghetti control flow and no design documentation or internal
> comments will be difficult to maintain.  
This is current thinking.  What is "lots"?  Do you have any proof that this 
is so?  How much more expensive is it to maintain this code than to create 
new code?  
> Another example: releasing
> code without testing it is usually dangerous.  
What if it were designed using Cleanroom technology?  "Usually" is the key word
here.  When do you test?  How much?
> It can even be based on
> theory: for example, trying to recognize C statements
> incrementally without a token stream that includes both EOFs.  If you
> see things like this in your project, you may be in for some
> interesting times.   Conversely, if you see well-organized, documented
> and readable code, regression testing before release, or designs based
> on well-established theory, many problems are prevented and it gives
> you a little more confidence in the project.
Assuming that the content of the functional specification is what the customer
wants.
[...]
> What are the quantitative
> metrics that can predict the problems/cost of bad design, bad code or
> bad process, and take into account problem and project idiosyncrasies?
> This is not a rhetorical question; I really would like to see
> something more formalized along these lines.  Until then, I will trust
> qualitative information more than the quantitative metrics I have seen
> in application.
I think I understand the point you are making.  IMHO we have got to start
somewhere, and a lot of the first guesses are going to be wrong.  But since the
science of developing software isn't around to help yet, maybe we need to begin
to do some measurements/correlations to put in place some pragmatic/heuristic
rules to point the way.  We're Brahe, heading towards Kepler, but not Newton
yet.
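
As a purely illustrative sketch (the figures below are invented, not data
from any real project), a first pragmatic measurement might be as simple
as checking whether module size correlates with reported defects:

    #include <stdio.h>
    #include <math.h>

    /* Invented figures: lines of code and reported defects for six modules. */
    static double loc[]     = { 120, 450, 300, 900, 150, 600 };
    static double defects[] = {   2,   9,   5,  20,   1,  11 };
    #define N 6

    int main(void)
    {
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        double num, den;
        int i;

        for (i = 0; i < N; i++) {
            sx  += loc[i];
            sy  += defects[i];
            sxx += loc[i] * loc[i];
            syy += defects[i] * defects[i];
            sxy += loc[i] * defects[i];
        }
        /* Pearson correlation coefficient between size and defect count */
        num = N * sxy - sx * sy;
        den = sqrt((N * sxx - sx * sx) * (N * syy - sy * sy));
        printf("r = %.3f\n", den != 0.0 ? num / den : 0.0);
        return 0;
    }

A strong correlation on real data wouldn't prove cause and effect, but it
is the sort of first heuristic rule that Brahe-style measurement could
begin to justify.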

Good stuff.

GXKambic
The size and extent of this disclaimer is immeasurable.

jgautier@vangogh.ads.com (Jorge Gautier) (06/07/91)

In article <4796.284d01e1@iccgcc.decnet.ab.com> kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) writes:
> > What are the quantitative
> > metrics that can predict the problems/cost of bad design, bad code or
> > bad process, and take into account problem and project idiosyncrasies?
> > This is not a rhetorical question; I really would like to see
> > something more formalized along these lines.  Until then, I will trust
> > qualitative information more than the quantitative metrics I have seen
> > in application.
>
> I think I understand the point you are making.  IMHO we have got to start
> somewhere, and a lot of the first guesses are going to be wrong.  But since the
> science of developing software isn't around to help yet, maybe we need to begin
> to do some measurements/correlations to put in place some pragmatic/heuristic
> rules to point the way.  We're Brahe, heading towards Kepler, but not Newton
> yet.

That's just what my examples were, heuristics.  They are not proven or
foolproof.  A lot of these rules seem to be problem/situation
dependent, i.e., they need more specialized preconditions in order to
obtain the desired results.  But what else do we have?  I tend to be
skeptical of people who claim to have THE method that is guaranteed to
produce cheap good software on time.  As far as I know, all we have
are heuristics, and judging from the state of the practice, not very
good ones at that.  I would think that a method guaranteeing optimal
decisions about software development activities w.r.t. cost, quality
and schedule would be famous by now.

Gathering measurements and correlations by those who are directly
involved in making decisions about how software is designed,
implemented and verified is a small but important step towards a
science of software.  Unfortunately, not everyone understands that
these measurements and correlations are not perfect, that they are subject
to revision, and that they do not (yet?) fully support decision making in
software development projects.  We also need people who can make good
decisions no matter what the numbers "say."
--
Jorge A. Gautier| "The enemy is at the gate.  And the enemy is the human mind
jgautier@ads.com|  itself--or lack of it--on this planet."  -General Boy
DISCLAIMER: All statements in this message are false.

kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) (06/14/91)

In article <JGAUTIER.91Jun6154936@vangogh.ads.com>, jgautier@vangogh.ads.com (Jorge Gautier) writes:
> 
> In article <4796.284d01e1@iccgcc.decnet.ab.com> kambic@iccgcc.decnet.ab.com (George X. Kambic, Allen-Bradley Inc.) writes:
>> > What are the quantitative
>> > metrics that can predict the problems/cost of bad design, bad code or
>> > bad process, and take into account problem and project idiosyncrasies?
>> > This is not a rhetorical question; I really would like to see
>> > something more formalized along these lines.  Until then, I will trust
>> > qualitative information more than the quantitative metrics I have seen
>> > in application.
>> I think I understand the point you are making.  IMHO we have got to start
>> somewhere, and a lot of the first guesses are going to be wrong.  But since the
>> science of developing software isn't around to help yet, maybe we need to begin
>> to do some measurements/correlations to put in place some pragmatic/heuristic
>> rules to point the way.  We're Brahe, heading towards Kepler, but not Newton
>> yet.
> That's just what my examples were, heuristics.  They are not proven or
> foolproof.  A lot of these rules seem to be problem/situation
> dependent, i.e., they need more specialized preconditions in order to
> obtain the desired results.  But what else do we have?  I tend to be
> skeptical of people who claim to have THE method that is guaranteed to
> produce cheap good software on time. 
Agreed
> As far as I know, all we have
> are heuristics, and judging from the state of the practice, not very
> good ones at that.  
Not so sure about this.  May be site dependent.
> I would think that a method guaranteeing optimal
> decisions about software development activities w.r.t. cost, quality
> and schedule would be famous by now.
Absolutely.  
> Gathering measurements and correlations by those who are directly
> involved in making decisions about how software is designed,
> implemented and verified is a small but important step towards a
> science of software.  Unfortunately, not everyone understands that
> these measurements and correlations are not perfect, that they are subject
> to revision, and that they do not (yet?) fully support decision making in
> software development projects.  We also need people who can make good
> decisions no matter what the numbers "say."
Yep.  Yep.  Yep.  As usual, we had to work through the words a few times 
to get pretty close agreement.

GXKambic
standard disclaimer