[comp.object] Guthery slams OOP

plogan@mentor.com (Patrick Logan) (11/28/89)

I have not seen the DDJ article but I have read some things from his
OOPSLA presentation. They seem to be basically the same.

I agree with most responders that his arguments use incorrect logic and
slanted examples. I came away with two useful points, though.

(1) It is good to scan through papers like this to be prepared to
respond when it may really count, e.g. when someone you work for holds
similar opinions. I've had other people challenge my claims for
object-oriented programming in the past. Now that OOP is becoming
over-rated instead of under-rated, this may not happen as often.

(2) In the paper I read he mentioned (several times) the lack of hard
data for claims of productivity. This is largely true; such data is
sorely lacking, and difficult both to gather and to apply.

Can anyone provide evidence to the contrary for point #2?

Thanks.

-- 
Patrick Logan                | ...!{decwrl,sequent,tessi}!mntgfx!plogan 
Mentor Graphics Corporation  | plogan@pdx.MENTOR.COM                    
Beaverton, Oregon            |

dlw@odi.com (Dan Weinreb) (11/29/89)

In article <1989Nov28.004005.3482@mentor.com> plogan@mentor.com (Patrick Logan) writes:

   (2) In the paper I read he mentioned (several times) the lack of hard
   data for claims of productivity. This is largely true, sorely lacking,
   and difficult to gather as well as apply.

   Can anyone provide evidence to the contrary for point #2?

In fact, can anyone produce ANY study that provides evidence that ANY
programming language, or methodology, increases productivity?  The
hard part is performing a controlled experiment: you need to have two
programmers (or teams of programmers) with exactly the same skills and
talents and experience (how would you measure that?), and pose them
the same problem, and give them the same amount of time and the same
amount of distractions.  Then, when they each produce results, you
need a highly objective way to evaluate the results, and must decide
upon a metric to use.  It's hard to see how a really good experiment
could be formulated.

You might want to look into the work that the Human Factors community
has done, trying to determine which of several text editors is "best".
They made a valiant try, but their best results are still scientifically
pretty weak; there are too many holes that can be punched in the
reasoning.  Too bad it's so hard.

For productivity, the best I have found to go on is subjective
reports.  If fifteen people tell me that their C debugging
productivity was greatly increased by using Saber C, for example, my
reaction is not to say "come back when you have quantitative,
scientifically controlled results"; my reaction is to decide that it's
worth my trouble to go try it out and see what I think, and do so, and
form my own opinion.

Dan Weinreb		Object Design, Inc.		dlw@odi.com