drich@klaatu.lanl.gov (David O. Rich) (02/15/91)
When it comes time to review a particular OO design, or even an OO
implementation, what kinds of qualitative/quantitative measures are used?
Do people talk in terms of object coupling, cohesion, etc.? (Personal
preferences seem to dominate here.)

Is there anyone out there practicing the Law of Demeter? What is the
current status of that project? (I have a couple of references on it, but
the latest one is out of IEEE Software, Sept. *1989*.)

Comments, pointers, etc., to "dor@lanl.gov" -- I will post a summary (or we
can hash it out right here in public if people prefer ;-).

Dave
--
=============================================================
David Rich            | Military Systems Analysis Group (A-5)
Email: dor@lanl.gov   | Mail Stop F602
Phone: (505) 665-0726 | Los Alamos National Laboratory
FAX  : (505) 665-2017 | Los Alamos, NM 87545
<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><
  "...in the abundance of water, the fool is thirsty..."
=============================================================
pkr@media01.UUCP (Peter Kriens) (02/19/91)
> When it comes time to review a particular OO-Design, or even an
> OO-Implementation, what kinds of qualitative/quantitative measures are
> used? Do people talk in terms of object coupling, cohesion, etc?
> (seems like personal preferences tend to dominate here)
--------
You are very right. Last year I tried to start a discussion about "object
normalisation". I always liked the fact that the database people had a
rather nice mathematical technique to see how good a data model was, and I
have a hunch that a similar technique should be possible in the OO world as
well. Though I do not have any formal techniques, I often "feel" that a
design is good. I think the following elements could be the start of
formalizing such metrics:

1. A method should only do one thing.
   If you have methods that combine the implementation of different pieces
   of functionality, you end up rewriting the same functionality in several
   places, and subclassing becomes a lot more difficult. So one metric
   could be the size of the methods.

2. Physical or functional coupling.
   Often you see methods referring to classes directly. Though this is
   sometimes useful, it often makes code harder to reuse. For example, in
   ST/V you can refer to the Mouse or the Cursor: each is a global variable
   that refers to an instance of an object responsible for the mouse cursor
   or the key cursor. If you refer to the Mouse directly, your code does
   not work without a mouse. This kind of reference is very common. In
   something like "a := Dictionary new" there is not much choice, but in
   many cases it is wise to parametrize the class that is used to create
   new objects.

3. References to other objects.
   If you get the feeling that you have to write too many get/set messages
   to reach other objects' instance variables, you have usually put the
   methods in the wrong place, or your class layout is wrong. Minimizing
   access to the instance variables of other objects could be another
   metric.

4. Global variables.
   Global variables usually point to a wrong class layout, so keeping them
   down is another vital metric.

5. Reuse of components.
   If no standard components are used, you are usually doing too much work.

6. Too many classes.
   A good object-oriented design tends to have a few basic application
   classes and many standard classes. I often see designs where things are
   made classes which could better be parametrized (is this English?). For
   example, I saw a class Button with a Help, Shuffle, and Play subclass.
   The only thing these classes did was print their name and call a
   procedure. This would have been much better if the Button class had had
   instance variables for the name and the action.

7. Subclasses with too many methods.
   A subclass should reuse the code of its parent. If you end up
   rewriting, not adding, a lot of methods, you might be going in the
   wrong direction.

8. Conditionals on the class.
   If you end up with a design where decisions are made depending on the
   class, you are probably doing something wrong. (Like
   (a isKindOf: String) ifTrue: [ ] ifFalse: [ ].)

9. Minimisation of conditional statements.
   I am a strong believer that the complexity of code is proportional,
   maybe even quadratic, to the number of "if" statements. In OOP it is
   often possible to let the class decide what should be done instead of
   testing for it, which is why OO programs tend to have fewer conditional
   statements than traditional designs. I think that further reducing this
   number increases the quality of the code.

Of course, all of the above is IMHO. I would really like it if many people
reacted to this, so that these loose remarks could grow into something more
formal.

Peter Kriens
aQute software architects
pkr@media01.uucp
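[Point 2 above -- parametrizing the class used to create new objects -- can
be sketched in Python rather than ST/V; the class names here are invented
for illustration and are not from Peter's post.]

```python
# Hypothetical sketch of "parametrize the class which is used to create
# new objects". Hard-coding a class inside a method couples the code to
# that one class; accepting the class as a parameter lets callers
# substitute another implementation without editing the code.
from collections import OrderedDict

class HardCodedCache:
    def __init__(self):
        self.store = dict()            # fixed choice: always a plain dict

class ParametrizedCache:
    def __init__(self, store_class=dict):
        self.store = store_class()     # caller may swap in something else

c1 = ParametrizedCache()               # default behaviour unchanged
c2 = ParametrizedCache(OrderedDict)    # substituted without editing the class
```

The same move is what distinguishes "a := Dictionary new" hard-wired into a
method from a design where the creating class arrives as a parameter or
instance variable.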
drich@klaatu.lanl.gov (David O. Rich) (02/20/91)
In article <2040@media01.UUCP> pkr@media01.UUCP (Peter Kriens) writes:
> 3. References to other objects
>    If you get the feeling that you have to write too many
>    get/set messages to get access to other objects'
>    instance variables, you have usually put the methods in the
>    wrong place or your class layout is wrong. Minimizing access
>    to instance variables of other objects could be another
>    metric.
Yes, in fact this particular "metric" is one of the key factors in the
Law of Demeter.
"Assuring Good Style for Object-Oriented Programs," K. Lieberherr & I.
Holland, Northeastern University, IEEE Software (September 1989).
> 8. Conditionals on the class
>    If you end up having a design where decisions are made
>    which depend on the class, you probably are doing something
>    wrong. (Like (a isKindOf: String) ifTrue: [ ] ifFalse: [ ].)
I agree with you on this point. The interesting thing is that I've
been involved in some pretty heated discussion with people who argue
the exact opposite on this point. The argument is usually of the form
"static binding is better than dynamic binding," since static binding
means a programmer knows exactly which method he or she is calling, and
code written under this assumption is supposedly more readable (i.e.,
decisions made explicitly on the class make the code clearer, since it is
obvious what is going on).
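[To make the two sides of that argument concrete, here is the same decision
written both ways, translated from the thread's Smalltalk into Python; the
shape classes are invented for the example.]

```python
# Hypothetical illustration: an explicit class test (the isKindOf:-style
# code argued about above) versus ordinary dynamic dispatch.

class Circle:
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r ** 2

class Square:
    def __init__(self, s): self.s = s
    def area(self): return self.s ** 2

# The "static" style: the caller inspects the class explicitly.
def area_with_type_test(shape):
    if isinstance(shape, Circle):
        return 3.14159 * shape.r ** 2
    elif isinstance(shape, Square):
        return shape.s ** 2
    raise TypeError("unknown shape")   # every new class must be added here

# The polymorphic style: each class answers for itself; no test needed.
def area_polymorphic(shape):
    return shape.area()
```

Both versions compute the same thing; the disagreement is over whether the
explicit chain of tests is clearer or merely a maintenance burden.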
Dave
klimas@iccgcc.decnet.ab.com (02/21/91)
In article <DRICH.91Feb19110509@klaatu.lanl.gov>, drich@klaatu.lanl.gov
(David O. Rich) writes:
> "Assuring Good Style for Object-Oriented Programs," K. Lieberherr & I.
> Holland, Northeastern University, IEEE Software (September 1989).
>
> 8. Conditionals on the class
>    If you end up having a design where decisions are made
>    which depend on the class, you probably are doing something
>    wrong. (Like (a isKindOf: String) ifTrue: [ ] ifFalse: [ ].)
>
> I agree with you on this point. The interesting thing is that I've
> been involved in some pretty heated discussion with people who argue
> the exact opposite on this point. The argument is usually in the form
> of "static binding" is better than "dynamic binding" since static
> binding means a programmer knows exactly which method he/she is
> calling and that code written under this assumption is more readable.

The use of case statements is poor programming practice in OOP, as it
directly (and negatively) impacts code maintainability and reusability.
OOP languages support polymorphism precisely because it has proven to be a
good way to improve reusability and maintainability.

(NOTE: For someone who is not familiar with the problem caused by case
statements, consider that whenever a new class of object involved in a
case comparison is created, every other test against that object must be
re-examined in the code for validity. This increases the "surface area"
that a programmer must be aware of to make a change, and hence the
possibility that an error might occur.)

It is my experience that the best OO programmers write the fewest case
statements. We have also held contests with some of our better OO
programmers to write case-statement-free code, and it can be done without
too much trouble.
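[The "surface area" point can be sketched in Python; the message classes
below are invented for illustration. The key observation is that a new
class added later carries its own behaviour, so no existing dispatch code
has to be re-examined.]

```python
# Hypothetical sketch: polymorphic dispatch in place of a case statement.
# With a case statement over message kinds, every new kind forces a
# revisit of every case site; here the dispatcher never changes.

class Message:
    def handle(self):
        raise NotImplementedError

class Ping(Message):
    def handle(self): return "pong"

class Echo(Message):
    def __init__(self, text): self.text = text
    def handle(self): return self.text

def process(msg):
    # Never edited when new Message subclasses appear.
    return msg.handle()

class Shutdown(Message):   # added later: no case statements to re-examine
    def handle(self): return "bye"
```

The surface area a maintainer must inspect when adding Shutdown is exactly
one class, not every conditional that ever mentioned a message kind.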
garry@ithaca.uucp (Garry Wiegand) (02/21/91)
pkr@media01.UUCP (Peter Kriens) writes:
> 6. Too many classes.
>    ... For example I saw
>    a class button with a Help, Shuffle and Play subclass. The
>    only thing these classes did was print their name and
>    call a procedure. This was much better if the button class
>    had an instance variable for the name and action.

I was arguing exactly this point a few days ago with a MacApp fan. He
claimed that overriding a method -- by deriving a new class from "Button"
-- was *exactly* the most elegant way to program menus in MacApp C++.

I argued that creating classes for which there would probably only ever be
one instance seemed syntactically clumsy (in C++). I argued further that
if you instead get involved with pointers-to-functions as part of the
Button instance data (as Peter suggests), then you've fallen back into the
plain-C style of being object-oriented, and thus C++ hasn't done you much
good.

What's the net.religion on the subject of "too many classes" and, for
example, used-only-once menu buttons?

Garry Wiegand --- Ithaca Software, Alameda, California
...!uunet!ithaca!garry, garry%ithaca.uucp@uunet.uu.net
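[For concreteness, here is the parametrized alternative both posters
describe, sketched in Python rather than MacApp C++ or Smalltalk; the
class and labels are invented for the example.]

```python
# Hypothetical sketch: one Button class holding the label and the action
# as instance data, instead of a Help/Shuffle/Play subclass per button.
# The callable stored in `action` plays the role a method override would
# play in the subclass-per-button design.

class Button:
    def __init__(self, label, action):
        self.label = label
        self.action = action     # any callable

    def press(self):
        return self.action()

help_button = Button("Help", lambda: "showing help")
play_button = Button("Play", lambda: "playing")
```

Whether this counts as "falling back into plain-C style" or as sensible
parametrization is exactly the religious question Garry raises: the
behaviour lives in data rather than in the class hierarchy.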
cote@subsys.enet.dec.com (Michael P.A. Cote) (02/22/91)
> From: drich@klaatu.lanl.gov (David O. Rich)
> When it comes time to review a particular OO-Design, or even an
> OO-Implementation, what kinds of qualitative/quantitative measures are
> used? Do people talk in terms of object coupling, cohesion, etc?
> (seems like personal preferences tend to dominate here)

The fact that personal preferences tend to dominate is merely an
indication of the current maturity level of OO technique. :-) There are
most definitely coupling and cohesion metrics that can be applied to
objects (see below). Appropriate coupling and cohesion metrics should
apply at each and every level of abstraction of the problem space;
therefore, object coupling and cohesion metrics should be taken in
addition to metrics based on the individual methods. After all, a strongly
cohesive object with terrible coupling between methods will not help the
maintainability of the system as a whole.

> From: pkr@media01.UUCP (Peter Kriens)
> You are very right. Last year I tried to start a discussion about
> "object normalisation". I always liked the fact that the database people
> had a rather nice mathematical technique to see how good the data model
> was.

I also like the idea of "object normalisation". In fact, I have done some
designs that start with a logical data model of the "virtual database"
underlying the application, and then normalize that logical data model.
From that point, I was able simply to assign objects to represent each of
the entities in the logical data model. This technique is nice for
defining the core objects of an application that is based on some type of
object-oriented database. However, as you extend the application analysis
to cover the fringes, you need better metrics to guide you.

I found a good reference on object-oriented coupling and cohesion; the
article was recommended to me by Larry Constantine.
Embley, D., and Woodfield, S., "Cohesion and Coupling for Abstract Data
Types," Sixth Annual Phoenix Conference on Computers and Communications,
Phoenix, Feb. 1987, pp. 229-234.

They provide 5 levels of cohesion for objects (from best to worst):

  Model        - Only one domain; only operations defined on that domain;
                 no other data abstractions are enclosed.
  Concealed    - More than one useful data abstraction completely
                 encapsulated within an object.
  Undelegated  - A method is defined that operates on only a subset of the
                 entire data domain.
  Multifaceted - Multiple data domains are joined by common operations.
  Separable    - Unrelated data domains encapsulated within a single
                 object.

In addition, there are 5 levels of coupling (from best to worst; items
marked * are not discussed in the article):

  Export        - Uses only exported methods/domains, independent of
                  representation or implementation.
  Overt *       - Uses an implicitly exported method.
  Covert *      - Uses a private domain or operation of another object.
  Surreptitious - No direct reference, but implicit use of privileged
                  knowledge of another object's implementation.
  Visible       - Violates encapsulation with direct reference to the
                  internal implementation of another object.

MPAC
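[The two ends of that coupling scale can be shown side by side; the
Counter class below is my own invented example, not from the Embley &
Woodfield paper, with a leading underscore standing in for a private
domain.]

```python
# Hypothetical sketch contrasting Export coupling (best) with Visible
# coupling (worst) from the Embley & Woodfield scale.

class Counter:
    def __init__(self):
        self._count = 0        # internal representation ("private domain")

    def increment(self):       # exported operation
        self._count += 1

    def value(self):           # exported query
        return self._count

# Export coupling: the client depends only on the public interface,
# independent of Counter's representation.
def bump_twice(counter):
    counter.increment()
    counter.increment()
    return counter.value()

# Visible coupling: the client reaches into the internal representation;
# it breaks the moment Counter's implementation changes.
def bump_twice_badly(counter):
    counter._count += 2
    return counter._count
```

Both functions produce the same answer today; only the first survives a
change to how Counter stores its count, which is what the scale is
measuring.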
drich@klaatu.lanl.gov (David O. Rich) (02/28/91)
My initial posting asked for references and pointers to object-oriented
metrics. The responses sent directly to me were few; I've included
excerpts from a couple of the more interesting ones below. I was
disappointed not to hear more on the Demeter System, and specifically the
Law of Demeter (but maybe that says it all). In any case, the "Emperor
Strikes Back" series is far more interesting ;-).

--------------------------------------------------------------------
From: root@nextserver.cs.stthomas.edu (Max Tardiveau)

Luiz A. Laranjeira, "Software Size Estimation of Object-Oriented Systems,"
IEEE Transactions on Software Engineering, May 1990, pp. 510-522.

--------------------------------------------------------------------
From: Craig Hubley <craig@utcs.utoronto.ca>

Measures that we and our clients have used include:

- absolute detail size of the interface
  (i.e., # of functions + # of arguments + # of returns to each)
  (smaller is better)
  (applies to a single type, group of types, application, or domain)
- absolute detail size of the type
  (as above PLUS number of internal functions/arguments/datums)
  (following the principle that variables inhibit reusability)
- absolute code size per unit of specification (i.e., function, type)
  (smaller is better)
- relative scope restriction of the interface (in re-engineering)
  (i.e., 1 point per fn or data item removed from one scope level)
  (obviously this only works if a structured version exists)
- relative cohesion (in re-engineering)
  (i.e., 1 point per fn/datum or fn/fn interaction moved within a type,
  which was formerly exposed to a wider scope)
  (this is a form of scope restriction)
- absolute coupling between two types (measured in absolute details)
  (e.g., type calls fn in other with 2 arguments = 3 points)
  (again, smaller is better, following the weak-coupling principle)
- relative coupling
  (i.e., how much dependence is increased/decreased by a change)
- standard empirical SW quality-control metrics

Dave
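[Craig's first metric -- interface detail size -- is mechanical enough to
compute. The sketch below is my own rough construction, not his tooling:
it counts public methods plus their parameters via Python reflection, and
ignores the "# of returns" term, which is not visible statically in
Python. The Stack class is an invented example.]

```python
# Hypothetical sketch: absolute detail size of a class's interface,
# approximated as (# of public methods) + (# of their parameters,
# excluding self). Smaller is better, per the weak-coupling principle.
import inspect

def interface_detail_size(cls):
    size = 0
    for name, fn in inspect.getmembers(cls, inspect.isfunction):
        if name.startswith("_"):
            continue                            # count only the exported interface
        params = inspect.signature(fn).parameters
        size += 1 + max(len(params) - 1, 0)     # the method itself + its args
    return size

class Stack:
    def push(self, item): ...
    def pop(self): ...
```

Here push contributes 2 (one method, one argument) and pop contributes 1,
so Stack's interface detail size is 3; comparing such totals across
candidate designs is the kind of review measure the thread asks about.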