[comp.lang.c++] Obj-C 'vs.' C++ with & w/out MI

uucibg@sw1e.UUCP (Brian R. Gilstrap) (05/09/89)

In article <176@mole-end.UUCP> mat@mole-end.UUCP (Mark A Terribile) writes:
>C++ requires that if ``sprocket'' requests something of an object that has been
>passed to it, that object has been declared to have that capability (that
>member function must exist for the class of which the object is an instance).
...
>The programmer of ``sprocket'' must declare what he will accept, and the
>things that he (or his class) will accept must have a set of capabilities
>(member functions) which includes all that will be asked of them.

But in Objective-C, you can check these sorts of things at runtime if you
want.  You can do one of several things:

(A) be rigid and decide that inappropriate objects passed to your object mean
your program barfs out.  This might seem like an unacceptable choice.  But
note that you can redefine the 'error' method.  This error method is invoked
whenever the message resolution routine discovers that you're trying to
send an unsupported message to an object.  Thus, you could redefine (override)
the error method to do a graceful shutdown.  If this sort of approach is
considered horrible, then look at all the programs that do it for things like
file size limitations, running out of memory (Bison), or something simply
going screwy.  Thus, 'it bombs if you missed a bug' is no more true of
Objective-C than of a normal C program; in fact, Objective-C is significantly
more robust in this respect than normal C or C++.
This, I believe, is one of the big reasons behind the interest in exception
handling for C++.  Don't get me wrong, this isn't full exception handling, but
it helps.

(B) do runtime checks to make sure that the object passed responds to the
message you want to send.  This involves the use of the 'respondsTo' message.
This allows you to find out if object foo responds to an arbitrary message.
This is particularly useful if you want to allow varying degrees of support
for a message protocol (sure, I'll allow objects which can't be
'realtimeDisplay'-ed, but only if they are at least 'display'-able, since they
are in essence real-time displayable: they just don't change over time).

(C) (with vsn 4.0 of Stepstone's compiler) you can prototype without
type-checking and then turn on type-checking when it gets to production code.
This amounts to Smalltalk-like prototyping (without the nice browser <yet?>)
but with production code a la virtual functions in C++.  Now, if they would
just add MI and non-virtual functions for classes... :-) (of course, runtime
MI message resolution in a Smalltalk-like world can get prohibitively 
expensive very quickly I should think).

I wrote:
>> ... This means that the person writing the code for the supplier must decide
>> a priori what tasks 'sprocket' might be useful in solving.  But the whole
>> point of writing suppliers is to avoid making decisions about what kinds of
>> problems can be solved with code for 'sprocket' and instead let the client
>> programmer look at available classes and decide what to use.

Mark responded:
>This is getting pretty close to one of the uses of Multiple Inheritance.
>If I have a Doojigger which I want processed by a Sprocket, then Doojigger
>has to somehow inherit the capabilities that Sprocket will use.  If it
>doesn't, I'm free (using MI) to quickly wrap a derived class around Doojigger
>recasting the operations that are needed into those that are available.
...
>This means I have to do a little work.  In return, I am guaranteed that I won't
>get a runtime error from the method dispatch system.  In code that I am
>building into a product, I think the tradeoff is appropriate.
...
>It also means that the Sprocket must get its ``print'' from a visible, if
>not standard, set of declarations.  The first part is easy; the second is
>significantly harder.  .......

I agree wholeheartedly with this.
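
To make the wrapper idea concrete, here's a minimal C++ sketch of the sort of
thing Mark describes (Printable, Doojigger, and sprocket are names invented
purely for illustration, not anything from a real library):

    #include <stdio.h>

    // What the supplier demands of anything handed to it.
    class Printable {
    public:
        virtual void print() = 0;
        virtual ~Printable() {}
    };

    // The supplier ("sprocket") knows only the Printable interface.
    void sprocket(Printable& p) { p.print(); }

    // A vendor class we cannot modify; it has the capability,
    // but under a different name.
    class Doojigger {
    public:
        void display() { printf("Doojigger display\n"); }
    };

    // The quick wrapper: recast the operation that is needed (print)
    // into the one that is available (display).
    class PrintableDoojigger : public Doojigger, public Printable {
    public:
        void print() { display(); }
    };

    int main() {
        PrintableDoojigger d;
        sprocket(d);    // checked by the compiler; no run-time dispatch error
        return 0;
    }

Anything that hasn't been wrapped is rejected at compile time, which is
exactly the guarantee Mark is after.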

>.....................  It's less of a problem in a large project that has
>its own standard environment or in a case where you've bought a complete
>bunch of goodies from one software author/vendor.

True.  This is one that I'm still thinking on.  However, it would seem that
there would need to be certain standards (ack! yet more) regarding the format
of classes and their documentation.  Just as in the hardware world, not every
'sprocket' would be compatible with every 'cog' (TTL, ECL, CMOS, etc?).
However, just as in the hardware world, things can be tweaked.

I wrote:
>> ...  Furthermore, the 'precompiled classes' scenario prevents this option
>> anyway since you can't control the inheritance tree of the precompiled
>> classes you (theoretically) buy from some software vendor.  ...  This
>> severely limits the reusability of the code for 'sprocket'.

Mark said:
>Again, MI does provide a solution.

Very true.  I don't have enough experience with MI to know how much it overlaps
with weak-typing in terms of solving the same set of problems (as discussed
above).  I'm working on gaining that knowledge.

>I'm not so worried about software re-use.
Then I think that in some sense we're tugging at different ends of the
OO-towel :-).  Not totally, but to at least some degree.

>There are also two ways to view OOP.  One is to view it as a way to build
>programs from existing pieces; the other is to view it as a way to organize
>the structure of a problem so that a well-structured program may be written.
...
>Both approaches have their place, but on the large scale, I think that the
>latter is the way to get programs for which the efforts of ``testers''
>... will more rapidly
>converge on a program for which liability can be risked when it is
>placed in the hands of thousands of customers.

Hmmm, why do you make this assertion?

Mark writes: [in regard to strong-versus-weak typing]
>.....  Getting it right in concept and reliable in fact are more important to
>me.  A program that is not reliable is a legal, ethical, and moral liability;
>if it's so unreliable that it has to be tweaked over hundreds of times because
>it was put together from the parts that were there, rather than built from
>parts whose correct relationship to each other was established, it's a
>financial liability as well.  Those finances finance our paychecks.

So you wish to depend upon the compiler to assure that the types of the
arguments are correct.  Personally, I consider this to be about 5% of the
'relationship verification' process.  I don't think I buy this one.

>Oh, and if the C++ templates can be implemented (3.0 ?) they will probably
>improve the situation still further.

But only if you have all source code available.  This kills the 'cogs
and sprockets from a vendor' idea.  That's like every computer manufacturer
being forced to build their own chip manufacturing facility for 80+% of
their chips (and many of the rest of the components too).
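
To sketch why the source seems to be needed (templates aren't implemented yet,
so this is only a guess at how the described mechanism would look; List is a
made-up example):

    #include <stdio.h>

    // The vendor would have to ship this full definition, not just a
    // declaration, because the compiler re-expands the template's text
    // for every element type a client instantiates it with.
    template <class T>
    class List {
        T   items[32];       // fixed-size store, just to keep the sketch short
        int count;
    public:
        List() : count(0) {}
        void add(const T& t) { if (count < 32) items[count++] = t; }
        int  size() const    { return count; }
    };

    int main() {
        List<int>    li;     // one expansion of the template text
        List<double> ld;     // a second, independent expansion
        li.add(3);
        ld.add(2.5);
        printf("%d %d\n", li.size(), ld.size());
        return 0;
    }

Whether a vendor could ship that definition in some protected, non-readable
form is the open question.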

>(This man's opinions are his own.)
>From mole-end				Mark Terribile

Brian R. Gilstrap                          Southwestern Bell Telephone
One Bell Center Rm 17-G-4                  ...!ames!killer!texbell!sw1e!uucibg
St. Louis, MO 63101                        ...!bellcore!texbell!sw1e!uucibg
(314) 235-3929                             ...!uunet!swbatl!sw1e!uucibg
#include <std_disclaimers.h>

mat@mole-end.UUCP (Mark A Terribile) (05/12/89)

``For those who came in late''
	Brian Gilstrap and I have been going around the horn on the benefits of
	Objective-C, in particular the way it implements type-by-type
	invocation of objects.

> But in Objective-C, you can check these sorts of things at runtime if you
> want.  You can do one of several things:
 
> (A) be rigid and decide that inappropriate objects passed to your object mean
> your program barfs out.  This might seem like an unacceptable choice.  But
> note that you can redefine the 'error' method.

This is probably useful in a statistical analysis package.  It *might* be
acceptable in an application generator.  It probably would not be acceptable
in financial operations software, and it would be *solidly* unacceptable in
any real-time control system.

If you can avoid it by type checking, why allow it?

> ...  If this sort of approach is considered horrible, then look at all the
> programs that do it for things like file size limitations, running out of
> memory (Bison), or something simply going screwy.  Thus, 'it bombs if you
> missed a bug' is no more true of Objective-C than of a normal C program ...

Why add another failure mode?  Also, if a system is designed without free
store management and without recursion (many embedded systems are) the
likelihood of such a failure is vastly reduced.

> ... in fact, Objective-C is significantly more robust in this respect than
> normal C or C++.

Can you demonstrate this?

Brian goes on to mention the availability of exception handling.

> (B) do runtime checks to make sure that the object passed responds to the
> message you want to send.  This involves the use of the 'respondsTo' message.
> This allows you to find out if object foo responds to an arbitrary message.

I think that the only difference between this and exception handling is how
the reacting code is written.

> (C) (with vsn 4.0 of Stepstone's compiler) you can prototype without
> type-checking and then turn on type-checking when it gets to production code.

Why not prototype with type-checking from the beginning?

> This amounts to Smalltalk-like prototyping (without the nice browser <yet?>)

Well, if you want to tinker at the terminal, it may make sense.  If you prefer
to work out your design before you commit it to the computer, you can at least
get the types right to begin with.  Personally, I prefer to have my errors
caught at the earliest possible time.  If I don't catch them and the compiler
can, more power to it.

> but with production code a la virtual functions in C++.  Now, if they would
> just add MI and non-virtual functions for classes... :-) ...

Let's start by getting the simple things right first!

> ... (of course, runtime MI message resolution in a Smalltalk-like world can
> get prohibitively expensive very quickly I should think).

I suspect that there are speedups, but ambiguity problems add yet another
family of ``we can't detect it until we run that case and get an exception.''
Oh, but the name space of messages is global rather than class-based?  Thank
you, I'll take the limited scope for readability and safety.
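
As a concrete (made-up) illustration of the difference in C++ terms: when two
base classes supply the same member name, the ambiguous call is rejected
before the program ever runs.

    class Display { public: void show() {} };
    class Report  { public: void show() {} };

    // A made-up class that inherits both capabilities.
    class Gauge : public Display, public Report {};

    int main() {
        Gauge g;
        // g.show();          // rejected at compile time: ambiguous
        g.Display::show();    // the programmer must say which one is meant
        return 0;
    }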

We agree that MI allows capabilities to be added to an existing inheritance
structure.

> >.....................  It's less of a problem in a large project that has
> >its own standard environment or in a case where you've bought a complete
> >bunch of goodies from one software author/vendor.
> 
> True.  This is one that I'm still thinking on.  However, it would seem that
> there would need to be certain standards (ack! yet more) regarding the format
> of classes and their documentation.  ...

Well, do UNIX man pages represent a standard?  The value of a class is
circumscribed by the quality with which it is documented.  Vendors who wish
to sell classes for lot$ of dollar$ will provide better documentation.

Quality of documentation is a growing issue in computing.  Consider shareware
which may be freely duplicated, but for which full documentation is provided
only upon payment of a registration fee.  I was privileged, about a year ago,
to see some of the preliminary documentation for C++ 2.0.  The quality of
the documentation for some of the supplied classes was much improved over
that of 1.2.1, but the improvements served (for me) mostly to highlight how
far we have to go.

It's clearly an issue, but I can't see how the (statically) less structured
interfaces of Objective-C can help the matter.

C++ allows the definition of operators on classes.  The ability to write
arithmetic operations on arithmetic types is a big documentation aid.

The ability to overload operators for such things as extended arithmetic types
is a huge maintainability aid; it means that millions of lines of code can
be changed over from longs to extended precision with nothing but header file
changes and the writing of the class itself to provide the operations.
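
A minimal sketch of the idea, with a toy ExtInt (an invented name) standing in
for a real extended-precision class:

    #include <stdio.h>

    // Toy stand-in: a real extended-precision class would carry more digits,
    // propagate carries, and supply the full set of operators.
    class ExtInt {
        long hi, lo;                            // two-word toy representation
    public:
        ExtInt(long v = 0) : hi(0), lo(v) {}
        operator long() const { return lo; }    // lets old narrowing code compile
        friend ExtInt operator+(const ExtInt& a, const ExtInt& b) {
            ExtInt r;
            r.lo = a.lo + b.lo;                 // (carry propagation omitted)
            r.hi = a.hi + b.hi;
            return r;
        }
    };

    // The entire changeover lives in one header line:
    //     typedef long   Money;    // before
    typedef ExtInt Money;           // after

    int main() {
        Money a = 2, b = 40;
        Money c = a + b;            // identical application source either way
        printf("%ld\n", (long)c);
        return 0;
    }

The millions of lines written in terms of Money never change; only the header
and the class providing the operations do.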

> ...  Just as in the hardware world, not every 'sprocket' would be compatible
> with every 'cog' (TTL, ECL, CMOS, etc?).  However, just as in the hardware
> world, things can be tweaked.

Agreed.  It should be less costly in software; in C++ it can be.

> >I'm not so worried about software re-use.
> Then I think that in some sense we're tugging at different ends of the
> OO-towel :-).  Not totally, but to at least some degree.

Absolutely.
 
> >There are also two ways to view OOP.  One is to view it as a way to build
> >programs from existing pieces; the other is to view it as a way to organize
> >the structure of a problem so that a well-structured program may be written.
> ...
> >Both approaches have their place, but on the large scale, I think that the
> >latter is the way to get programs for which the efforts of ``testers''
> >... will more rapidly converge on a program for which liability can
> >be risked when it is placed in the hands of thousands of customers.
> 
> Hmmm, why do you make this assertion?

Experience, which is admittedly subjective.  Let me get out a rather worn
soapbox.  (Ahem!)

	The ``zero defects'' advocates state that quality is adherence to
	specifications.  If the specifications are faulty, then they fail to
	adhere to *their* specification, which, in the end, is the problem
	to be solved by the product or system being built.  (I do not say
	that Z-D is the only valid QC method, or the best, or the most
	advanced.  I do claim that it is useful and that it is applicable
	to software because it considers, in an intuitive and general way,
	the role of specifications in achieving quality--and places them
	at the same level as everything else.)

	There are several ways to ensure or gain confidence that the
	product adheres to its specification.  One is to test the
	product in service; the other is to compare its construction
	to the plans.

	Comparing the construction to the plans will only provide
	confidence if the parts are assembled in units or ``modules''
	whose grouping provides useful boundaries to check against
	the plans, which must also have useful boundaries.  These
	boundaries should be the same for the plans as specifications-for-
	the-product as they are for the plans as products-of-their-own-
	specifications.  This is hard to achieve unless the boundaries
	represent fundamental boundaries in the problem itself.

	Testing will only provide confidence if we assume that there
	are a limited number of ``corner cases'' and that we can identify
	the places where these are likely to occur.  Corner cases occur
	when one case must be handled as a subset of supercase-A in one
	place and as a subset of supercase-B in another, where the case
	falls on one side of a boundary for one purpose and on the other
	side of a corresponding boundary for another purpose.  To prevent
	these bad behaviors, we attempt to find the fundamental structure of
	the problem, and the fundamental boundaries that result.  Cases
	tend to fall unambiguously on one side or the other of fundamental
	boundaries; the same cannot be said of artificial boundaries.

	Thus both approaches to improving confidence in software depend on
	having the program follow the fundamental boundaries and structure
	of the problem, and avoid the introduction of artificial boundaries.

Because the cost of fixing a problem (or a mistargeting of effort) increases
by at least an order of magnitude at every step, it is vital to identify the
problem as early in the process as possible.  Getting the overall structure
right is more important than having linked lists right, so long as the linked
lists are coded in only one place and are coded as simply as possible.

> Mark writes: [in regard to strong-versus-weak typing]
>  ...
> So you wish to depend upon the compiler to assure that the types of the
> arguments are correct.  Personally, I consider this to be about 5% of the
> 'relationship verification' process.  I don't think I buy this one.

Rather say that I believe that decomposition of the problem and structuring
the program to reflect that decomposition are most important, and that the
compiler's type-checking is a vital part of ensuring that a particular family
of errors is avoided.

Because C++ encourages many more ``invocations'' or ``transactions'' between
classes, the likelihood that a mismatch in the design will be reflected in
an error in the interfaces is increased.  This is an intuitive statement, made
without benefit of proof or scientific evidence.  (But I have some confidence
in it until evidence to the contrary is produced.)

Except for very sophisticated classes (pattern matchers, let us say) I don't
believe that re-use of pieceparts across products will save as much time and
money as using OOP to write the program to better reflect the problem.  I
would rather spend more time programming and less time fixing what the testers
find and forcing them to test it again.  Granted, I enjoy programming and I
get paid for it, but I also believe that it's the right place to catch the
errors.

On the other hand, the C++ class structure (and maybe Objective-C; I can't
say for sure) allows an existing program of moderate size to be encapsulated
whole so that it can be re-used intact, with all the risks and benefits
involved.
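
Roughly, and with invented names, the encapsulation might look like this: the
old program's globals become members and its old top level becomes an ordinary
member function.

    // A made-up wrapper around an existing report-generating program.
    class ReportProgram {
        int records;                            // formerly a file-scope global
        void read_input()   { records = 3; }    // stand-ins for the old phases
        void write_output() {}
    public:
        ReportProgram() : records(0) {}
        int run() {                             // the old main(), re-used intact
            read_input();
            write_output();
            return records;
        }
    };

    int main() {
        ReportProgram rp;           // the whole old program, as one object
        return rp.run() > 0 ? 0 : 1;
    }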
 
> >Oh, and if the C++ templates can be implemented (3.0 ?) they will probably
> >improve the situation still further.
> 
> But only if you have all source code available. ...

It's not clear to me how much actually has to be available.  If it becomes
a problem, we will probably see the development of ``parsed, encoded, but not
compiled'' ``source code'' and compilers that accept them.

> ... This kills the 'cogs and sprockets from a vendor' idea.

Or limits it to somewhat stronger licensing agreements than we observe now.
-- 

(This man's opinions are his own.)
From mole-end				Mark Terribile