[comp.object] The Emperor Strikes Back

steve@cs.qmw.ac.uk (Steve Cook) (02/16/91)

A while ago, there was an article by Scott Guthery in Dr. Dobb's
Journal called "Are The Emperor's New Clothes Object-Oriented?"
This article was reprinted in the February edition of
Microsystems Design in the UK.  I wrote a reply article, and I
thought the net might have some fun with it.  Here it is.


The Emperor Strikes Back
Steve Cook, Object Designers Ltd

Introduction

This is a reply to Scott Guthery's article 'Are The Emperor's New Clothes
Object-Oriented?', the colourful hatchet-job on object-oriented programming
published elsewhere in this issue.  Guthery wields some punchy phrases, and
hacks out a biting metaphor or two.  If you didn't know better, you might
think he knows what he's talking about.  The trouble is, he mostly doesn't.
It's perfectly true that there is lots of hype about object-orientedness.
Today, every peddler of a software panacea must claim it's object-oriented.
There have been software panaceas for as long as there has been money to be
made from people who'll buy them.  Object-oriented is just the latest in a
long and dishonourable line; its predecessors were Artificial Intelligence
and CASE, and I'm sure you can name many others.
At the same time, there have been honest efforts to develop our
understanding of software, and sometimes these have led to genuine
improvements.  From my experience, I believe that within the collection of
techniques often called object-oriented technology there are some important
breakthroughs which need to be developed and exploited.
Guthery questions object-oriented programming, and attempts to demolish the
claims made by its proponents.   There is a grain of truth in some of his
points, but in every clear statement of fact in his article, he is just
plain wrong.
Let's look at his first claim: 'The biggest OOP projects undertaken to date
seem to be the OOP development systems'.  Off the top of my head, I can cite
the following counter-examples.   The Apple Macintosh operating system was
first developed for the Apple Lisa in Clascal, an object-oriented extension
to Pascal.  The NeXT operating system is fully object-oriented.   In the
area of applications there are large medical systems by Hewlett Packard,
spacecraft activity planning at the Jet Propulsion Laboratory, and the
Manufacturing Activity Control System at McDonnell Douglas.  Many CASE
tools, including Arthur Andersen's Foundation and Cadre's Teamwork, are built
using object-oriented languages.  Every workstation and PC window manager
uses object-oriented principles, even if it isn't written in an
object-oriented language.  There are hundreds more examples.

What is an object?

According to Guthery, 'stripped of its fancy jargon, an object is a
lexically-scoped subroutine with multiple entry points and persistent
state'.  If that's not fancy jargon, I don't know what is; but let's have a
closer look at it.
The killer is 'lexically-scoped'.  It's a while since I heard the phrase, so
I checked it in Abelson and Sussman's book, 'Structure and Interpretation of
Computer Programs'.  According to them, lexical scoping means that free
variables in procedures are statically bound to variables in
textually-enclosing procedures, just like free variables in nested
procedures in Pascal.  But the most fundamental idea of object-oriented
programming is dynamic  binding of operation names to operations at
run-time.  Lexical scoping has nothing whatever to do with objects.  Can
somebody who's failed so completely to grasp the basic idea be trusted to
criticise the whole technology?
The argument about the definition of C++ taking 9 years is disingenuous.
I'm not going to defend C++.  You can do object-oriented programming in it,
as well as other kinds (including the sort that Guthery presumably wants us
all to do).  It has good and bad points, and this isn't the place to discuss
them.  It's true to say that version 1 was released to the world in 1985,
and a slightly incompatible version 2 in 1989, but how this can be used as
an argument against object-oriented programming in general beats me.

OOP Code Reuse

Guthery's diatribe about code re-use completely overlooks the distinction
between compile-time and run-time.  Object-oriented hierarchies are not
object hierarchies, they are class hierarchies.  'Clipping the objects out
of the hierarchy' simply doesn't make sense, so no wonder you can't do it.
So what's the point?  Because there is a grain of truth in what he says, and
it has to do with linking technology.  The bottom line is his claim that
systems will be bigger and, therefore, slower.   But run-time system size
actually depends on what the linker includes in the executable file.  In C++
for example, with proper library archiving, what is linked is the transitive
closure of what is referenced, exactly as with C.
Let's be clear, because this is important.  It is false  that
object-oriented programming requires that the entire hierarchy gets linked
into your run-time.  It's also true  that many object-oriented programming
systems do this.  That is a property of the implementation of those systems,
not  a fundamental property of object-oriented programming.

Combining Object Hierarchies

Well, class hierarchies, actually.  Can objects from one language
communicate with those in another? The answer, contradicting what Guthery
says, is yes - using exactly the same technology as you would use at any
other time, with normal subroutine linkages.
'You can't send arguments from one C++ hierarchy to another'.  On the face
of it, this is a meaningless statement.  Neither C++ nor any other
object-oriented programming language defines any concept called 'sending
arguments to hierarchies'.   Let's re-interpret this statement as 'you can't
call functions defined in one hierarchy from another'.  You can.  Let's try
another interpretation: 'you can't pass parameters defined in one hierarchy
to a function defined in another'.  You can do that too.  Guthery's up a gum
tree with this one.
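Since Guthery claims this is impossible, here is a small self-contained
C++ sketch (all names invented for illustration) of an object from one
hierarchy passed as a parameter to a function defined with an entirely
unrelated hierarchy:

    #include <iostream>

    // Hierarchy one.
    class Shape {
    public:
        virtual double area() const { return 0.0; }
        virtual ~Shape() {}
    };

    // Hierarchy two, sharing no ancestor with Shape.
    class Logger {
    public:
        void record(const Shape& s)      // a parameter from hierarchy one
        {
            std::cout << "area = " << s.area() << "\n";
        }
    };

    int main()
    {
        Shape s;
        Logger log;
        log.record(s);   // calling across hierarchies: ordinary C++
        return 0;
    }

Nothing special is required: the parameter crosses the hierarchy boundary
like any other argument.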
'Object-oriented programming makes integration much more difficult'.  On the
contrary, many people think that objects are exactly the technology that's
needed to make integration easier.  The Object Management Group, with a
hundred members including almost all of the major corporate computer
vendors, has been established on exactly this premise.  Are they really all
mesmerised by the Emperor's Clothes?  It's worth noting that they have
started out from a piece of existing technology, namely HP's New Wave
product, which gives integration between PC applications at a level not
achieved by any other technology to date.  Now I'll admit that New Wave may
not be such a great commercial success at present, but nobody's claiming
that it is technically flawed.

Tuning an object-oriented program

I've had some experience of this, and what Guthery says is piffle.  You tune
an object-oriented program by finding and optimising the bottlenecks, just
like you always did.  Whether you need to drop into C, assembler, microcode,
or send your computation off to a Cray to do it quickly enough, there is
absolutely no problem, and lots of systems do it.
The overhead of 'running around the hierarchy', as Guthery calls it, must
mean the overhead of resolving method lookup in a dynamically-typed language
like Smalltalk.   Now it's absolutely true that Smalltalk-like languages
have a run-time overhead of up to 500%, as compared with comparable
algorithms written in optimised C.  Nobody's hiding this.  But how this
overhead amounts to 'the exclusion of quality, usability and
maintainability' defeats me - as far as I'm concerned, after 7 years of
Smalltalk programming experience, the very reason for sacrificing
performance is to achieve greater quality, usability and maintainability.

Managing a Team

This is the part of Guthery's article which contains the biggest grain of
truth.  It's true that managing a development in the context of hierarchies
requires different techniques, and several companies have indeed come
unstuck by adopting object-orientation.  My own reaction to these failures
is to argue for different management techniques and, more particularly,
better tools, to get the benefits of object-orientation in teams.  This is
an area of the technology where more work is definitely needed.
The sideswipe about debugging is grossly misleading.  Yes, C++ can be hard
to debug, but this is because of features of the language which have nothing
to do with object-orientation.  The debugging environment for Smalltalk is
second to none, and makes bugs remarkably simple to find.

Can OO languages co-exist?

At the level of subroutines, any object-oriented language can call any
other, with the same practical restrictions as any other language
technology.  There is a commercial Objective-C product with a Fortran
interface (NeXT) and an object-oriented Lisp product with C and Fortran
interfaces (Symbolics).
It's true that it is not normally possible for a class in one language to
inherit from a class in another (although the NeXT system does allow this up
to a point).  This is because of a lack of agreement about the semantics of
inheritance.  There is plenty of research about this, and no doubt there
will be a solution in the future.  Is an intelligent reaction to this
difficulty simply to ignore the whole technology?  I'll bet the Japanese
don't think so.

Consequences of Persistent State

It seems as if what Scott Guthery means by persistent state is what most
people mean by data abstraction.   If he thinks that data abstraction is a
semantic minefield to be avoided, he's missed out on the last 20 years of
programming methodology.

Removing the Development System

In this section, Guthery shows he doesn't know the difference between
'object-oriented' and 'interpreted'.  Does he really think C++, Eiffel and
Objective-C have interpreted virtual machine technology?  They don't.  Does
he think there are no compiled LISPs?  There are.  And it's wrong to suggest
that the runtime cost of object-oriented programming can't be estimated.
The time costs are well-known, and the space costs depend in a measurable
way upon compilation and linking technology.

Conclusion

With any technology it's possible to take the union of the problems of each
representative of the technology and present them as problems of the
technology as a whole.  This is just as dishonest as doing the same with the
technology's benefits, as the hype-merchants and panacea-peddlers do.
Guthery writes vigorously, but mostly doesn't know what he's talking about.
When he does, he adjusts his targets to suit his assaults: so when he's
attacking performance, he attacks virtual machines, and when he's attacking
debugging and language complexity, he attacks C++.
Object-oriented programming technology is still immature.  It has benefits,
and costs.  It is important, as Guthery says, to gather the evidence
carefully before we make claims.  He says there's been 'absolutely no
evidence gathered or experiments performed to validate the claims made for
object-oriented programming'.  Again, he's wrong.  He should read a pair of
papers in Journal of Object-Oriented Programming, Vol 3 No. 1 and Vol 3 No.
3, by Dennis Moreau and Wayne Dominick of the University of Southwestern
Louisiana, describing research carried out for NASA.   They carefully set up
a framework for evaluating programming methodologies, carry out experiments,
and measure the results, which show that object-oriented programming is
around twice as productive as conventional development.  They say 'as a
direct result of the success of this research, the USL NASA project has, as
of 1987, made a major commitment to object-oriented systems technology as
the primary software development and evaluation foundation for all future
workstation components of the USL NASA Project.'
Simply put, Scott Guthery hasn't done his homework.



-- 

Steve Cook                   steve@cs.qmw.ac.uk
Object Designers Ltd
Glebe House                  Telephone 0279 755396
Great Hallingbury
Bishops Stortford
Herts CM22 7TY

macrakis@gr.osf.org (Stavros Macrakis) (02/21/91)

Most of your rebuttal to Guthery's article (which I have not read)
makes sense.

However, you are wrong on the lexical scoping issue.

An object is indeed a "lexically-scoped subroutine with multiple entry
points and persistent state".  This doesn't detract from its value.

	-s

kers@hplb.hpl.hp.com (Chris Dollin) (02/22/91)

Steve Cook writes a vigorous rebuttal of Scott Guthery's article (alas, I have
not read the original; like a *really* bad book, I might have enjoyed it). I
wish to respond to two points. Steve says:

   What is an object?

   According to Guthery, 'stripped of its fancy jargon, an object is a
   lexically-scoped subroutine with multiple entry points and persistent
   state'.  If that's not fancy jargon, I don't know what is; but let's have a
   closer look at it.
   The killer is 'lexically-scoped'.  It's a while since I heard the phrase, so
   I checked it in Abelson and Sussman's book, 'Structure and Interpretation of
   Computer Programs'.  According to them, lexical scoping means that free
   variables in procedures are statically bound to variables in
   textually-enclosing procedures, just like free variables in nested
   procedures in Pascal.  But the most fundamental idea of object-oriented
   programming is dynamic  binding of operation names to operations at
   run-time.  Lexical scoping has nothing whatever to do with objects.  Can
   somebody who's failed so completely to grasp the basic idea be trusted to
   criticise the whole technology?

Guthery is "right" (ie, that is indeed one way of regarding an object; I'm not
sure that it says anything new). I think Guthery is thinking of an objectish
implementation in a Lisp-like (actually this example is Pop-like, but the
principle's the same) language where we might say:

    define new_object( f1, f2, f3 ) as
        procedure ( method ) as
            if method == "m1" then m1_expression( f1, f2, f3 )
            elseif method == "m2" then m2_expression( f1, f2, f3 )
            ...
            endif
        endprocedure
    enddefine

The m1_expression (etc) are the expressions for the method bodies. The
if-expression encodes "multiple entry points". The fi are the persistent state,
because they are lexically-scoped variables, and full lexical scoping is used to
smuggle them into the procedure representing the object. An object is thus
represented as a procedure which, when applied to a method-name argument, does
"the appropriate thing".

I seem to recall that such a method is described in Abelson and Sussman. It's
nice that it can be done at all, but it's not as nice as supporting the object
orientation (whatever that means ...) directly in the language. I *don't* think
you can use it as evidence that Guthery has "failed so completely to grasp the
basic idea" of OOP. (You may, or course, have other evidence.)

Steve also says:

   Consequences of Persistent State

   It seems as if what Scott Guthery means by persistent state is what most
   people mean by data abstraction.   If he thinks that data abstraction is a
   semantic minefield to be avoided, he's missed out on the last 20 years of
   programming methodology.

Not having read the original, I can't tell if Guthery really does mean "data
abstraction" when he says "persistent state". I presume he means that objects
have little pieces of state inside them (I sometimes call them "statelets")
which persist for the lifetime of the object and result (in reference-based
OOLs, in any case) in the possibility of lots of aliasing (I send a message to
O1, and some O2 gets changed as a result). This is a source of great power, but
it can be a source of great confusion too (witness, for example, the advocates
of functional languages, who are so worried by the non-transparency of states
that they design state-free languages [*1]). I'm not sure that I'd call it a
"semantic minefield", however.

[*1] I myself am an advocate of functional languages, when I haven't got my
Pop11 hat on ...
--

Regards, Kers.      | "You're better off  not dreaming of  the things to come;
Caravan:            | Dreams  are always ending  far too soon."

alms@cambridge.apple.com (Andrew L. M. Shalit) (02/23/91)

In article <3351@sequent.cs.qmw.ac.uk> steve@cs.qmw.ac.uk (Steve Cook) writes:

   The killer is 'lexically-scoped'.  It's a while since I heard the phrase, so
   I checked it in Abelson and Sussman's book, 'Structure and Interpretation of
   Computer Programs'.  According to them, lexical scoping means that free
   variables in procedures are statically bound to variables in
   textually-enclosing procedures, just like free variables in nested
   procedures in Pascal.  But the most fundamental idea of object-oriented
   programming is dynamic  binding of operation names to operations at
   run-time.  Lexical scoping has nothing whatever to do with objects.  Can
   somebody who's failed so completely to grasp the basic idea be trusted to
   criticise the whole technology?


I agree with Steve.  In object-oriented programming, the meaning of an
identifier (message or variable) depends on the receiver (which is
part of the run-time program state).  This is essentially a
disciplined form of dynamic binding.  It's very different from lexical
binding, in which the meaning of an identifier can be derived purely
from the surrounding program text.
--

barmar@think.com (Barry Margolin) (02/23/91)

In article <ALMS.91Feb22151210@ministry.cambridge.apple.com> alms@cambridge.apple.com (Andrew L. M. Shalit) writes:
>In article <3351@sequent.cs.qmw.ac.uk> steve@cs.qmw.ac.uk (Steve Cook) writes:
>   The killer is 'lexically-scoped'.
>I agree with Steve.  In object-oriented programming, the meaning of an
>identifier (message or variable) depends on the receiver (which is
>part of the run-time program state).  This is essentially a
>disciplined form of dynamic binding.  It's very different from lexical
>binding, in which the meaning of an identifier can be derived purely
>from the surrounding program text.

I believe the lexical scoping referred to is the implementation of
instances as procedures that remember their instance variables in lexical
variables, e.g.:

(define (make-frob iv1 iv2 iv3)
  ;; iv1..iv3 persist as lexical variables captured by the closure
  (lambda (operation args)
    (cond ((eq? operation 'set!-iv1)
           (set! iv1 (car args)))
          ((eq? operation 'iv1)
           iv1)
          ...
          ((eq? operation 'sum)
           (+ iv1 iv2 iv3)))))

(define my-frob (make-frob 1 3 5))
(my-frob 'iv1 '()) => 1
(my-frob 'set!-iv1 '(4))
(my-frob 'sum '()) => 10

--
Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar

pcg@cs.aber.ac.uk (Piercarlo Grandi) (02/26/91)

On 15 Feb 91 20:50:37 GMT, steve@cs.qmw.ac.uk (Steve Cook) said:

steve> A while ago, there was an article by Scott Guthery in Dr. Dobb's
steve> Journal called "Are The Emperor's New Clothes Object-Oriented?"
steve> [ ... the author ... ] If you didn't know better, you might think
steve> he knows what he's talking about.  The trouble is, he mostly
steve> doesn't.

No, his points mostly have merit, if you concede that he is apparently
discussing OO programming in its historical manifestations, not in its
conceptual basis. Many of his points are about serious historical
problems that await solution or whose solution is only very recent.

Still his overall conclusion is wrong; OO programming technology is not
a silver bullet, and there are problems, but it is arguably, with all
its defects, better than many alternatives.  The claim is not that OO
technology is perfect, which he seems to use as a pointless straw
man, but that it is less imperfect than others because it has one
important conceptual advantage.

It is thus especially unfortunate that your rebuttal of his article is
marred by one particular colossal misconception:

steve> What is an object?

steve> According to Guthery, 'stripped of its fancy jargon, an object is
steve> a lexically-scoped subroutine with multiple entry points and
steve> persistent state'. [ ... ]

steve> But the most fundamental idea of object-oriented programming is
steve> dynamic binding of operation names to operations at run-time.

Dynamic overloading has nothing to do with objects or OO programming;
not all languages that have dynamic overloading/latent types are OO and
not all OO languages have dynamic overloading/latent types.

steve> Lexical scoping has nothing whatever to do with objects.

Conversely, lexical scoping is essential to the principal style of
object implementation, the closure-based one; the other is based on
continuations (actors) instead of lexically scoped closures.

steve> Can somebody who's failed so completely to grasp the basic idea
steve> be trusted to criticise the whole technology?

Unfortunately he *has* grasped the basic idea of how objects are usually
*implemented* (as lexically scoped closures with multiple entry points),
and you haven't.

But Mr. Guthery has missed the main point. He is wrong in dismissing
objects as being 'just' lexical closures, because lexical closures are
very powerful technology indeed, and in particular they are an eminently
appropriate implementation for the basic idea behind OO programming.

The basic idea of OO programming is that its decomposition paradigm by
type instead of by function results in less tightly coupled and thus
more reusable modules (which may be *implemented* as closure or
continuation generators) than other decomposition paradigms, at least in
a majority of cases.

It is this idea that Mr. Guthery does not really discuss, and this
makes his article largely irrelevant to an argument on the merits of OO
programming instead of those of some of its past manifestations.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

johnson@cs.uiuc.EDU (Ralph Johnson) (02/27/91)

Piercarlo Grandi argues with Steve Cook's criticism of Guthery's
original definition of an object.

steve> What is an object?

steve> According to Guthery, 'stripped of its fancy jargon, an object is
steve> a lexically-scoped subroutine with multiple entry points and
steve> persistent state'. [ ... ]

steve> But the most fundamental idea of object-oriented programming is
steve> dynamic binding of operation names to operations at run-time.

pcg> Dynamic overloading has nothing to do with objects or OO programming;
pcg> not all languages that have dynamic overloading/latent types are OO and
pcg> not all OO languages have dynamic overloading/latent types.

steve> Lexical scoping has nothing whatever to do with objects.

pcg> Conversely, lexical scoping is essential to the principal style of
pcg> object implementation, the closure based one. The other one being the
pcg> one based on continuations (actors) instead of lexically scoped
pcg> closures.

I will stand on Steve's side here.  While closures can be used to
implement objects, it is not the principal style.  The principal
style is that an argument is a record with one field pointing to
an array of procedures, and that procedure calling is always done
by indirection through this array.  This is how C++ and Smalltalk
do it, if you are willing to think of a Smalltalk class as an array 
of procedures.
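Spelled out in code, that picture looks something like the following
C++ sketch (the names are invented; a real compiler generates the
equivalent machinery automatically):

    #include <cstdio>

    struct Point;                      // forward declaration

    typedef void (*Method)(Point*);    // one entry in the procedure array

    struct Point {
        const Method* vtable;          // the field pointing to the array
        int x, y;
    };

    void print_point(Point* p) { std::printf("(%d, %d)\n", p->x, p->y); }

    const Method point_vtable[] = { print_point };

    int main()
    {
        Point p = { point_vtable, 3, 4 };
        p.vtable[0](&p);               // call by indirection through the array
        return 0;
    }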

Further, I believe that dynamic binding is essential to object-oriented
programming, and hereby declare any programming language without it to
be not object-oriented.  In fact I am happy to consider languages like
Emerald to be object-oriented even though they do not have inheritance,
while I would never accept Ada because it rules out dynamic binding.

One of the main ideas of an object is that its user does not have to
know anything about how to implement operations upon it, that the
object is in complete control.  Different objects will have the same
sets of operations but implement them differently.  Any one of these
objects can be replaced by any other.  This is essential for the
development of groups of components that can be mixed and matched
with each other.  

There is nothing wrong with a language implementation that tries to
bind operations as early as possible, but that is an optimization.
In short, if C++ did not have virtual functions, it would not be
object-oriented.
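The difference is easy to exhibit (a sketch with invented names; the
non-virtual call is bound early, from the program text, and the virtual
call late, from the object):

    #include <iostream>

    struct Base {
        void early()        { std::cout << "Base::early\n"; }   // non-virtual
        virtual void late() { std::cout << "Base::late\n"; }    // virtual
        virtual ~Base() {}
    };

    struct Derived : Base {
        void early() { std::cout << "Derived::early\n"; }
        void late()  { std::cout << "Derived::late\n"; }
    };

    int main()
    {
        Derived d;
        Base& b = d;
        b.early();   // prints "Base::early": bound from the program text
        b.late();    // prints "Derived::late": bound from the object
        return 0;
    }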

steve> Can somebody who's failed so completely to grasp the basic idea
steve> be trusted to criticise the whole technology?

I agree.  I read Guthery's article a year or so ago and thought it was
mostly wrong. I sent him a detailed (and polite) discussion of the paper
but he seemed to ignore it, because he published his article without any
changes.  Perhaps my letter didn't get to him in time.

As far as I can tell, Guthery only made one good point.  That point was
that object-oriented programming panders to the worst side of hackers,
the side that wants to build tools for building tools for building tools,
without ever getting anything out the door or having to worry about
meeting deadlines.  OOP people say that developing libraries of reusable
programs requires a lot of time and lots of iteration and that it is hard
to predict how long it will take or even if it is done.  This makes
OOP very hard to manage.

Guthery is right.  The OOP people are right.  However, this is only
discussing the design of libraries of reusable components.  If you
are just trying to build applications then it is not as important
for the software to be reusable.  Reusable software is wonderful
to use, but hard to develop.  You should buy it from someone else
if at all possible.  You should never try to develop it yourself
under a tight schedule.

Ralph Johnson -- University of Illinois at Urbana-Champaign

jgk@osc.COM (Joe Keane) (02/27/91)

I don't want to get into a big debate about Guthery's article, but i would
like to comment on the `lexically scoped' issue.

Steve Cook's argument against lexical scoping is that the actual routine which
is called depends on which object the method is called on.  This is true, but
i don't think it contradicts lexical scoping.  Taking this argument to the
extreme, we could say that C structs are not lexically scoped because the
value of a field depends on what struct you look at.  This is simplistic, but
it shows where i'm coming from.

In languages like C++ an object belongs to some actual class, and the routine
to be run is determined entirely by this class.  The actual class is lexically
scoped, but a class which includes subclasses is not lexically scoped.  A
given instance may be of an unknown (to the base class) subclass, so it has no
idea what routine will actually be called.  I think this is what Steve's
argument really says.  We just have to be clear about whether we're talking
about an object, an actual class, or a class including subclasses.  By the
way, i really wish there were separate words for the last two concepts.

To make this a little more clear, let's try to imagine what dynamic scoping
would mean.  In Common Lisp we can declare some variables special, so let's
try the same thing for methods.  This would mean that you could change the
definition of a method in some routine, maybe for a given object or for a
whole class.  It could be changed permanently, or when you leave it could be
restored to its original definition.  This isn't supported by C++, unless of
course you bash the virtual function tables.  It seems like a neat idea,
although i'm not sure how useful it really is.  FL advocates would say this
causes more problems than it solves, and i think i agree.
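To make that concrete, here is a sketch in C++ (names invented) that
approximates a rebindable method with an explicit per-object function
pointer, no vtable-bashing required:

    #include <iostream>

    struct Widget {
        void (*draw)(Widget&);   // per-object method slot
        int id;
    };

    void draw_plain(Widget& w) { std::cout << "plain " << w.id << "\n"; }
    void draw_fancy(Widget& w) { std::cout << "fancy " << w.id << "\n"; }

    int main()
    {
        Widget w = { draw_plain, 7 };
        w.draw(w);             // prints "plain 7"
        w.draw = draw_fancy;   // rebind, for this object only
        w.draw(w);             // prints "fancy 7"
        w.draw = draw_plain;   // restore on exit, as with a special variable
        return 0;
    }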

pcg@cs.aber.ac.uk (Piercarlo Grandi) (02/28/91)

On 22 Feb 91 20:12:10 GMT, alms@cambridge.apple.com (Andrew L. M. Shalit) said:

alms> In object-oriented programming, the meaning of an identifier
alms> (message or variable) depends on the receiver (which is part of
alms> the run-time program state).

As written, this statement is nonsensical. It is also easy to find
counterexamples to any of the sensible versions that you can derive
from it.

alms> This is essentially a disciplined form of dynamic binding.  It's
alms> very different from lexical binding, in which the meaning of an
alms> identifier can be derived purely from the surrounding program
alms> text.

Think again. Do you *really* think that scoping has got anything to do
with overloading? Do you *really* think that how you match a function
and a type has got anything to do with how a variable is bound to a value?


I think that if this is how OO programming is perceived at large, a big
problem exists. Guthery confuses object instances with object types,
Cook thinks that dynamic overloading is the basic idea of OO and not its
decomposition paradigm, and you confuse overloading with scoping.



--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

pcg@cs.aber.ac.uk (Piercarlo Grandi) (02/28/91)

On 23 Feb 91 07:52:48 GMT, barmar@think.com (Barry Margolin) said:

barmar> I believe the lexical scoping referred to is the implementation
barmar> of instances as procedures that remember their instance
barmar> variables in lexical variables, e.g.: [ ... ]

I will, as is my habit, expand a bit on this. You will already know all
this, but somebody else seems in need of a bit of refreshing :-).

What you describe is actually how objects are implemented in all non-actor
OO languages. The procedure you mention is actually the code, in the
compiler or interpreter, that resolves the name of an operation to its
implementation for a given object type. It can be of the form you
mention (example in Scheme):

    (define (some-class variable1 variable2 variable3 ...)
      (letrec
          ((method1 (lambda (...) ...))
           (method2 (lambda (...) ...))
           (method3 (lambda (...) ...))
           ...
           (self
            (lambda message         ; the function/type overload resolver
              (case (car message)

                ((variable1) variable1)
                ((variable2) variable2)
                ((variable3) variable3)
                ((...)       ...)

                ((method1)   (apply method1 (cdr message)))
                ((method2)   (apply method2 (cdr message)))
                ((...)       ...)))))

        ; return the closure which is the object
        self))

    (define an-instance (some-class ... ... ...))
    (an-instance 'method1 ...)

This is almost equivalent (modulo the fact that these have manifest types
and by default do overload resolution at compile time) to the Simula 67:

    CLASS some_class(variable1,variable2,variable3);
	... variable1;
	... variable2;
	... variable3;
	... PROCEDURE method1(...) ....;
	... PROCEDURE method2(...) ....;
	... PROCEDURE method3(...) ....;
    BEGIN COMMENT overload resolution done by the compiler;
    END some_class;

    REF (some_class) an_instance;

    an_instance :- NEW some_class(...,...,...);
    an_instance.method1(...);

and of the C++:

    class some_class
    {
    public:
	...		variable1;
	...		variable2;
	...		variable3;
	...

	...		method1(...) { ... };
	...		method2(...) { ... };
	...		method3(...) { ... };
	...

	some_class(...,...,...)
	    : variable1(...), variable2(...), variable3(...)
	    { /* overload resolution done by the compiler */ };
    };

    some_class an_instance(...,...,...);
    an_instance.method1(...);

Anybody can easily add a Smalltalk version, and so on.

I hope that all this is clear not just to Margolin and company.

Please, please note that the bodies of the methods are in all three
cases lexically closed with respect to the names of instance variables
and of the methods themselves, and that this is *necessary*. If method
bodies are not bound lexically in the closure which is the object,
things will not work.

As to overloading resolution, the lambda in the Scheme version, which is
the overload resolver, can easily be factored out at compile time if
'(car message)' is statically known (the default case in C++ and Simula
67, for example, and a clever Scheme compiler can easily do that from
the above definition), or it can be implemented as a general purpose
function/type dispatcher that can resolve overloadings at run time (as
in CLOS, for example).

In the latter case overloading resolution is entirely different from
dynamic scoping; in CLOS overloading resolution is done on the types of
multiple objects, which is so clearly different in nature from scoping.

With a little syntactic sugaring one can make the overload resolver
above implicit in the definition of a class in Scheme too. Margolin will
not need it, but some unbelievers :-) may well check Abelson & Sussman,
which gives ample examples of all this, syntactic sugaring included.

It is IMNHO obvious that all non-actor-based OO implementations are
shaped like the above, irrespective of whether the operator resolution
code is in the compiler or in the interpreter, and that all the methods
of a class *must* be lexically closed with respect to the members and
methods of the same class, if at all; CLOS does not have member methods,
so it does not need to close their namespace around that of the relevant
class(es).

Clear enough now? Can we get on to serious discussion, like the merits
of the OO decomposition paradigm and its strengths and limitations?
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

pcg@cs.aber.ac.uk (Piercarlo Grandi) (03/01/91)

On 22 Feb 91 08:39:52 GMT, kers@hplb.hpl.hp.com (Chris Dollin) said:

    [ ... on whether objects are closures with an overload resolution
    procedure associated to them ... ]

kers> [ ... ] I seem to recall that such a method is described in
kers> Abelson and Sussman. It's nice that it can be done at all, but
kers> it's not as nice as supporting the object orientation (whatever
kers> that means ...) directly in the language.

Actually, as I hope I show clearly in other articles in this
newsgroup, there is absolutely no difference; if your language, like
most Schemes, has got macros, you never need to know that objects are
indeed implemented as closures with an overload resolver (I think
Poplog has got the necessary facilities too). Also, if your compiler is
clever this need not be less efficient than having OO as primitives, as
the overload resolver can be inlined and evaluated at compile time if
its parameters (the name of the method, the types of its arguments) are
statically known, as happens fairly often; using a closure as a record
can be quite efficient too, and gives wonderful ease of debugging.

The Abelson & Sussman book actually shows this quite clearly; they also
demonstrate that with a little bit of macro-based sugaring one can have
far more flexible facilities, including visibility control,
accessibility control, and multiple implementations for the same
interface and vice versa, than in most OO languages where closures and
overload resolvers are hidden in the compiler or interpreter.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

pcg@cs.aber.ac.uk (Piercarlo Grandi) (03/01/91)

On 26 Feb 91 19:13:36 GMT, johnson@cs.uiuc.EDU (Ralph Johnson) said:

johnson> While closures can be used to implement objects, it is not the
johnson> principal style.  The principal style is that an argument is a
johnson> record with one field pointing to an array of procedures, and
johnson> that procedure calling is always done by indirection through
johnson> this array.  This is how C++ and Smalltalk do it, if you are
johnson> willing to think of a Smalltalk class as an array
johnson> of procedures.

No, that is *not* how C++ and Smalltalk do it. We are speaking of
completely different things here. Scoping and overloading are completely
different issues.

In your picture the object *is* a closure, because methods of a class
are lexically closed with respect to its fields, to the point that the
relevant offsets are hardcoded in the function implementations (shallow
binding implementation of lexical scoping), in both C++ and Smalltalk.

The pointer to the array of procedures is merely one row of the 2D
associative table that the overload resolver code needs to match
function interfaces with value types and get the relevant function
implementations, and has nothing to do with whether that is a closure or
not, and indeed is not part of the object at all.

That pointer is just a type code. Some less constraining OO
implementations use an arbitrary number as type code and use hashing to
do function interface/value type resolution.

Smalltalk/Actor oriented people, being locked in this particular
implementation of the overload resolver, are blinkered and cannot see
that what they call "dynamic binding" is really dynamic overload (on
just the first argument!) resolution and has nothing to do with scoping,
and is not an essential aspect of the system, while scoping is essential
to keep together the attributes of a class (for traditional class based
OO systems, e.g. not CLOS).

This view also makes them (it made me so in the past, but I repented
and was saved!) blind to the possibility of overload resolution based on
multiple arguments, and arbitrary mix and match of interfaces and
implementations, and creates a lot of non-problems centered around
contrived things like "inheritance".

This has at times crazy consequences; for example in C++ static
overloading is resolved on all the arguments to a method, but virtuals
(dynamic overloading) are overloaded only with respect to the first
argument, an arbitrary restriction.
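A sketch (invented names) makes the asymmetry visible: overload
resolution looks at the static types of all arguments, while a virtual
call dispatches dynamically on the receiver alone:

    #include <iostream>

    struct Shape {
        virtual const char* name() const { return "shape"; }
        virtual ~Shape() {}
    };

    struct Circle : Shape {
        const char* name() const { return "circle"; }
    };

    // Static overloading: chosen at compile time from ALL argument types.
    void describe(const Shape&, const Shape&)   { std::cout << "shape/shape\n"; }
    void describe(const Circle&, const Circle&) { std::cout << "circle/circle\n"; }

    int main()
    {
        Circle c;
        Shape& s1 = c;
        Shape& s2 = c;
        describe(s1, s2);                // "shape/shape": static types decide
        std::cout << s1.name() << "\n";  // "circle": dynamic, but on the
                                         // receiver only
        return 0;
    }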

johnson> Further, I believe that dynamic binding is essential to
johnson> object-oriented programming, and hereby declare any programming
johnson> language without it to be not object-oriented.

Is the converse also true? That is, if you have dynamic binding the
language *is* OO?

johnson> One of the main ideas of an object is that its user does not
johnson> have to know anything about how to implement operations upon
johnson> it, that the object is in complete control.  Different objects
johnson> will have the same sets of operations but implement them
johnson> differently.

This also applies to languages that do not directly support the OO
decomposition paradigm, I am afraid.  ML, for example.

I actually see things very differently. I see the original OO/capability
function interface/value type matrix, and at each element a function
implementation, and OO programming being about clustering this matrix in
the type dimension(s), not in the interface one, because it is presumed
that implementations operating on the same type(s) are more strongly
related than implementations with the same interface, and thus a
decomposition paradigm that clusters them together results in better
modularization and thus reuse than other decomposition paradigms in most
cases.

Whether "interface/type(s) -> implementation" resolution is dynamic or
static or how the matrix is implemented are totally immaterial issues in
this view. I submit that OO programming is accurately described by it.

johnson> Any one of these objects can be replaced by any other.  This is
johnson> essential for the development of groups of components that can
johnson> be mixed and matched with each other.

Even more essential would be algebras that allow more flexible mix and
match than is traditionally possible in most OO languages, like one
interface with multiple implementations, or vice versa, and arbitrary
algebras on interface and implementation definitions (something that
some people call inheritance) or overloading based on multiple types (an
interface/type matrix with more than 2 dimensions). These are not
encompassed by your definition of OO, but IMNHO they are all part of the
OO decomposition paradigm, as above.

johnson> In short, if C++ did not have virtual functions, it would not
johnson> be object-oriented.

Here we disagree completely. You make OO depend on one linguistic
feature. I make it depend on support for a particular style of
decomposition.

I tend to think this is the more cogent view, unless you can show that
the one linguistic feature is *essential* in supporting the OO
decomposition paradigm, which cannot be done, because there are large OO
applications that do not need it.

Would you dare go as far as saying that a program written in C++ or
Simula 67 in which virtual is never required and used *cannot be* called
an OO program, even if it is written according to the OO decomposition
paradigm?

I would then be able to say that Smalltalk is not OO because it does not
support directly multiple inheritance (or arbitrary mix and match of
interfaces and implementations) or overloading on more than one argument
(general clustering of operations in the type dimension(s) of the
matrix).

These are all things that are in general useful, even if they are not
necessary for large and significant applications. You can still build OO
programs, that is those built according to the OO decomposition
paradigm, even if you don't have dynamic overloading, just as if you
don't have multiple overloading, just as if you don't have arbitrary mix
and match of interfaces or implementations, just as if you don't have
full latent types.

If each of these is missing you cannot tackle some class of OO
applications, but the lack of any or all of these does not mean that you
are not doing OO programming.

Naturally an argument can be made that dynamic overloading, while not
being an essential component of the OO decomposition paradigm, makes
using this paradigm much more natural on a possibly large class of
applications (those where some degree of type latency is required).

The same can be said of multiple inheritance, multiple argument
overloading, and arbitrary mix and match of interfaces and functions, of
course.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

pcg@cs.aber.ac.uk (Piercarlo Grandi) (03/01/91)

On 28 Feb 91 21:42:44 GMT, pcg@cs.aber.ac.uk (myself) wrote:

pcg> You can still build OO programs, that is those built according to
pcg> the OO decomposition paradigm, even if you don't have dynamic
pcg> overloading, just as if you don't have multiple overloading, just
pcg> as if you don't have arbitrary mix and match of interfaces or
pcg> implementations, just as if you don't have full latent types.

For example, it is IMNHO obvious that there are important applications
in which you *must* be able to change dynamically the type of a value,
and some in which you must be able to change dynamically the
*definition* of the type of a value. Self certainly allows the former,
and I am not sure that it allows the latter (other languages do).

Now, a fully featured OO system *must* have them, because some
applications require these facilities, and OO programming is a fully
general style of programming, isn't it?

So is Self the "most" OO language?

pcg> If each of these is missing you cannot tackle some class of OO
pcg> applications, but the lack of any or all of these does not mean
pcg> that you are not doing OO programming.

My point, and I want to insist strongly, is that dynamic overloading
and the other features don't define OO programming, they define its
range of *applicability* for a particular language/implementation.

Again it may be argued that, without one or another of these features,
the range of applicability of the OO decomposition paradigm is so
limited or awkward that de facto the feature is essential.  I would
disagree with such a
position, but we lack statistics or any evidence on which classes of
applications can be more naturally expressed if dynamic overloading,
or multiple overloading, or multiple inheritance, or general interface
and implementation mix and match, or unbounded latent types, or
polymorphism, or dynamic type change, or dynamic type definition
change are available.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

mario@cs.man.ac.uk (Mario Wolczko) (03/02/91)

In the Grandi-Johnson debate, I take PCG's side: dynamic binding is
*not* essential to OOP, and cite POOL-T as a perfectly good OO
language without dynamic binding.  In fact, I think PCG has explained
what constitutes OOP pretty well (rather better than most published
material!) 

However, this part has me really foxed:
> Smalltalk/Actor oriented people, being locked in this particular
> implementation of the overload resolver, are blinkered and cannot see
> that what they call "dynamic binding" is really dynamic overload (on
> just the first argument!) resolution and has nothing to do with scoping,
> and is not an essential aspect of the system, while scoping is essential
> to keep together the attributes of a class (for traditional class based
> OO systems, e.g. not CLOS).
If what we call "dynamic binding" is really "dynamic overload
resolution" (a term I have never heard before), then what *is*
"dynamic binding"?  Are you restricting the term to
non-lexically-scoped variables (aka "special" or "fluid") as in Lisp?

Mario Wolczko

   ______      Dept. of Computer Science   Internet:      mario@cs.man.ac.uk
 /~      ~\    The University              uucp:      mcsun!ukc!man.cs!mario
(    __    )   Manchester M13 9PL          JANET:         mario@uk.ac.man.cs
 `-':  :`-'    U.K.                        Tel: +44-61-275 6146  (FAX: 6280)
____;  ;_____________the mushroom project___________________________________

pcg@cs.aber.ac.uk (Piercarlo Grandi) (03/03/91)

On 1 Mar 91 20:46:29 GMT, mario@cs.man.ac.uk (Mario Wolczko) said:

mario> However, this part has me really foxed:

pcg> Smalltalk/Actor oriented people, being locked in this particular
pcg> implementation of the overload resolver, are blinkered and cannot
pcg> see that what they call "dynamic binding" is really dynamic
pcg> overload (on just the first argument!) resolution and has nothing
pcg> to do with scoping, and is not an essential aspect of the system,
pcg> while scoping is essential to keep together the attributes of a
pcg> class (for traditional class based OO systems, e.g. not CLOS).

mario> If what we call "dynamic binding" is really "dynamic overload
mario> resolution" (a term I have never heard before),

I cannot claim I invented it, unfortunately :-). I cannot remember
exactly, but I probably saw it in some text on CLOS or Trellis or
similar.

mario> then what *is* "dynamic binding"?

Let's explain this in detail :-).

What binding and overloading have in common, and what leads people into
error, is that both require a runtime (dynamic) associative search of
some database.

Binding requires searching a tree, the identifier/reference environment
list, for an identifier.

Overloading requires searching a multidimensional sparse matrix, the
function interfaces/arguments types table, for an identifier and at
least one type.

When the matrix is two-dimensional, that is, when overload resolution is
done only with respect to one argument type, each row of the sparse
matrix can be implemented as a list of identifiers, and overload
resolution may superficially resemble binding resolution, because both
then eventually imply searching an a-list for an identifier.

Naturally this is only incidental to a particularly restricted form of
overloading and to a particular implementation of that restricted form.
The fact that these are those prevalent in the Smalltalk world is
unfortunate, as I remarked above, because it induces confusion.

I would therefore not use "dynamic binding" as a synonym for the more
generic "runtime associative search" of a table using a key that
also contains an identifier. The Lisp people don't either; 'getprop' is
not usually described as doing dynamic binding, even though 'getprop'
does imply runtime associative search of a table using an identifier
(or more, in some implementations) as a key.

mario> Are you restricting the term to non-lexically-scoped variables
mario> (aka "special" or "fluid") as in Lisp?

Yes, because I think that overloading resolution and scoping
implementation are semantically completely different things, as they
involve different entities for different purposes.

I will take the opportunity to post my own "humpty-dumpty" (a word means
what I want it to mean! :->) glossary:

<constant>	A value known from the text of a program.

<reference>	A value that may denote another value.

<identifier>	An element of program text that denotes a <constant>.

<variable>	An <identifier> that denotes a <constant> <reference>.

<scope>		The domain of visibility of a <variable>.

<binding>	The technique used to implement scoping.

<static scoping>
		Where the relevant <scope> is that of where 
		a <function implementation> is defined.

<dynamic scoping>
		Where the relevant <scope> is that of where and when
		a <function implementation> is executed.

<shallow binding>
		A <binding> technique based on Dijkstra's displays.

<dynamic binding>
		A <binding> technique based on a-lists. Also called
		deep binding.

<function interface>
		The textual form for invoking a function. Usually,
		the identifier denoting that function.

<function implementation>
		The code for computing a function. Usually, the
		specification of a closure, and a block of statements.

<overloading>	The ability to use the same <function interface> for
		several <function implementations>; which of the latter
		actually gets invoked depends on context, usually the
		number and types of the arguments. Note that this is a
		property of interfaces, not implementations.
	
<static overloading>
		The ability to identify from the program text which
		<function implementation> applies when a given
		<function interface> is used from the program text.
		Usually this requires <manifest types> for the arguments.

<dynamic overloading>
		The ability to identify which <function implementation>
		applies when a given <function interface> is used, when
		this depends on values that are not evident in the
		program text.

<polymorphism>	The ability to apply the same <function implementation>
		to values of different types. Polymorphism need not
		apply to all parameters, and need not apply to all types
		for that parameter. To achieve polymorphism with
		respect to a parameter, all uses of that parameter must
		be via <overloaded> or <polymorphic> operations. The
		intersection of the applicability of these is the extent
		of polymorphism with respect to that parameter. Note
		that this is a property of implementations, not of
		interfaces.

<static polymorphism>
		<Polymorphism> thanks to <static overloadings> in the
		<function implementation>.

<dynamic polymorphism>
		<Polymorphism> thanks to <dynamic overloadings> in the
		<function implementation>.
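
For readers who prefer their definitions executable, here is a compact
C++ sketch (all names invented) in which f is statically overloaded,
Base::g is dynamically overloaded, and h is polymorphic:

    #include <iostream>

    struct Num  { int v; };
    struct Text { const char* s; };

    // <static overloading>: the implementation is chosen at compile time
    // from the manifest types of the arguments.
    void f(Num)  { std::cout << "f(Num)\n"; }
    void f(Text) { std::cout << "f(Text)\n"; }

    // <dynamic overloading>: the implementation is chosen at run time
    // from a value not evident in the program text.
    struct Base           { virtual void g() { std::cout << "Base::g\n"; }
                            virtual ~Base() {} };
    struct Derived : Base { void g() { std::cout << "Derived::g\n"; } };

    // <polymorphism>: one implementation applies to values of many types,
    // here any type that operator<< accepts.
    template <typename T>
    void h(const T& x) { std::cout << x << "\n"; }

    int main()
    {
        Num n = { 1 };
        Text t = { "hi" };
        f(n); f(t);             // static overloading

        Derived d;
        Base& b = d;
        b.g();                  // dynamic overloading: prints "Derived::g"

        h(42); h("forty-two");  // one implementation, two types
        return 0;
    }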

I would also like to offer some very unusual definitions:

<value>		A number.

<object>	Synonym for <value>.

<type>		The set of all values over which a given set of <function
		implementations> is meaningfully defined. Note that a
		<type> may be _represented_ in several ways, but this
		is incidental.

These are very unconventional, but please consider them carefully. They
have some unusual properties; for example, the same type can belong to
several different types, depending on which set of functions is chosen
to characterize it. This seems to closely correspond to intuition (for
example NATURAL and POSITIVE are different but overlapping types). Note
also that in a sense it is the death of OO programming in one aspect;
<object> has no special status. It becomes a redundant synonym for
<value>.

It is obvious at this point, I hope, that CLOS & co. have had a
mind-blowing (in every sense :->) effect on my view of what OO is all about.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

sakkinen@tukki.jyu.fi (Markku Sakkinen) (03/04/91)

In article <PCG.91Mar2185335@odin.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>
> [largest part of article deleted]
>

>I would also like to offer some very unusual definitions:
>
><value>		A number.
>
><object>	Synonym for <value>.

What utter nonsense!

>
><type>		The set of all values over which a given set of <function
>		implementations> is meaningfully defined. Note that a
>		<type> may be _represented_ in several ways, but this
>		is incidental.
>
>These are very unconventional, but please consider them carefully. They
>have some unusual properties; for example, the same type can belong to
>several different types, depending on which set of functions is chosen
>to characterize it. This seems to closely correspond to intuition (for
>example NATURAL and POSITIVE are different but overlapping types).

Eh?  A _type_ can belong to other types? Probably a writing error.
Also: "depending on which set of function is _chosen_ to characterize it"
does not make sense, because you just totally based the definition of
type on the set of functions.  Alternatively, if you want to regard
types just as sets, functions have no role to play in the definition.
Anyway, why function _implementations_?

> Note
>also that in a sense it is the death of OO programming in one aspect;
><object> has no special status. It becomes a redundant synonym for
><value>.

If _you_ want to omit the perhaps most important distinction in OOP,
that certainly does not mean the death of OOP in general.
See for instance "Values and objects in programming languages"
by B.J. MacLennan (SIGPLAN Notices, December 1982).
It has been reprinted in a recent IEEE tutorial collection on OOP.

> ...

Markku Sakkinen
Department of Computer Science and Information Systems
University of Jyvaskyla (a's with umlauts)
PL 35
SF-40351 Jyvaskyla (umlauts again)
Finland
          SAKKINEN@FINJYU.bitnet (alternative network address)

pcg@cs.aber.ac.uk (Piercarlo Antonio Grandi) (03/05/91)

On 4 Mar 91 07:26:31 GMT, sakkinen@tukki.jyu.fi (Markku Sakkinen) said:

sakkinen> In article <PCG.91Mar2185335@odin.cs.aber.ac.uk>
sakkinen> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:

pcg> I would also like to offer some very unusual definitions:
pcg> <value>		A number.
pcg> <object>		Synonym for <value>.

sakkinen> What utter nonsense!

Well, in a computer and in Godel's mind all values/objects are numbers.

pcg> <type>	The set of all values over which a given set of <function
pcg>		implementations> is meaningfully defined. Note that a
pcg> 		<type> may be _represented_ in several ways, but this
pcg> 		is incidental.

pcg> These are very unconventional, but please consider them carefully.

A line that apparently has been missed :-).

pcg> They have some unusual properties; for example, the same type can
pcg> belong to several different types, depending on which set of
pcg> functions is chosen to characterize it.

sakkinen> Eh?  A _type_ can belong to other types? Probably a writing
sakkinen> error.

Yes, a writing error: I meant to write that a *value* can be seen as
belonging to several different types.

sakkinen> Also: "depending on which set of function is _chosen_ to
sakkinen> characterize it" does not make sense, because you just totally
sakkinen> based the definition of type on the set of functions.

Precisely, and this is what every OO language does. I mean that every
different choice of a set of functions gives a different type. For
example, the +,-,*,/ operators define the type NONZERO; if we remove /,
we get the type INTEGER. The same value, say 5, thus belongs to both
NONZERO and INTEGER, because it may be subjected to both sets of
operations.

sakkinen> Alternatively, if you want to regard types just as sets,
sakkinen> functions have no role to play in the definition.

Well, what else? All one really manipulates in a computer is numbers
(this applies to maths too); these numbers are the encodings used to
represent certain entities.

How do you partition all these numbers into sets? You must have some
filter. I have come to reckon that the only sensible filter is to choose
a set of function implementations; all the values to which a particular
set may be applied define a type.

I don't want to enter a long discussion, but I think an example will
help.  Suppose that you want to represent complex numbers as two 32
bit floats juxtaposed. Now each complex value is just a 64 bit number,
and in principle indistinguishable from any other 64 bit number.
But we know that we have two functions, 're' and 'im', that applied to a
64 bit value will deliver a relevant 32 bit value. All the 64 bit values
to which I can apply 're' and 'im' define the complex type, in this
example. But this is a weak definition; this is really the definition of
pair, not of complex. But naturally a complex *is* a pair.
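
A rough C++ rendering of this view (the names, the 64 bit layout, and
the use of 'unsigned long long' are just assumptions for illustration;
byte order details are ignored):

	#include <string.h>

	typedef unsigned long long Bits64;  /* any 64 bit value whatsoever */

	/* 're' and 'im' extract the two 32 bit halves; the set of 64 bit
	   values to which they can be applied is the type they define.  */
	float re(Bits64 v)
	{ float f; memcpy(&f, &v, sizeof f); return f; }

	float im(Bits64 v)
	{ float f; memcpy(&f, (char *)&v + 4, sizeof f); return f; }

As noted, every 64 bit value passes this test, which is why this only
defines "pair", not yet "complex".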

We can add more and more restrictive functions to strengthen our
definition of complex. There are many schemes.

For example by using data or address space tagging I can encode an
arbitrary type code in each value, and then define a function 'typeof'
that extracts the typecode from a value. Then all such values to which I
can apply 'typeof' form a type, say 'object'. I can then define a tag
code for complex numbers, and then define 'complex-re' and 'complex-im'
as something like:

	complex-re = \ x : typeof x == complex -> re x;
	complex-im = \ x : typeof x == complex -> im x;

and then I have a new complex type, let's call it 'complex-object'.

As you see, a type is the set of values to which I can apply a chosen set
of functions. More or less arbitrarily.
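
In C++ the tagging scheme might be sketched like this (all names here
are hypothetical, and 'assert' merely stands in for real error
handling):

	#include <assert.h>

	enum TypeCode { PAIR, COMPLEX };      /* arbitrary tag codes */

	struct Value {
	    TypeCode tag;                     /* the embedded type code */
	    float a, b;                       /* the representation */
	};

	TypeCode type_of(const Value &v) { return v.tag; }

	/* complex-re = \ x : typeof x == complex -> re x; */
	float complex_re(const Value &x)
	{ assert(type_of(x) == COMPLEX); return x.a; }

	/* complex-im = \ x : typeof x == complex -> im x; */
	float complex_im(const Value &x)
	{ assert(type_of(x) == COMPLEX); return x.b; }

The values that 'complex_re' and 'complex_im' accept form exactly the
'complex-object' type above.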

sakkinen> Anyway, why function _implementations_?

Because it is the function implementation that accepts only a certain
subset of all values. A function interface can be overloaded dynamically
or statically, and indeed accept values of many types, as above.

If you want, this is an extremely machine oriented way of looking at
things, and not even terribly original (ancestry goes back to Wirth's
paper "Notes on data structuring" in "Structured Programming"), but it
can be regarded also as terribly mathematical.

My purpose in posting it is not just to outrage readers :-), but also to
show that the OO decomposition paradigm is really fundamentally based on
recursively clustering conceptually and textually those function
implementations that are related by the structure of data they
manipulate.

Everything else is just implementation detail.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@aber.ac.uk

sakkinen@tukki.jyu.fi (Markku Sakkinen) (03/05/91)

In article <PCG.91Mar4214858@aberdb.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Antonio Grandi) writes:
>On 4 Mar 91 07:26:31 GMT, sakkinen@tukki.jyu.fi (Markku Sakkinen) said:
>
>sakkinen> In article <PCG.91Mar2185335@odin.cs.aber.ac.uk>
>sakkinen> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>
>pcg> I would also like to offer some very unusual definitions:
>pcg> <value>		A number.
>pcg> <object>		Synonym for <value>.
>
>sakkinen> What utter nonsense!
>
>Well, in a computer and in Godel's mind all values/objects are numbers.

Sorry, I was ambiguous.  If we look from a machine-oriented perspective,
I can agree with the first definition; it's only the second one that
does not make sense.  Continuing from the machine-oriented view
with coarse definitions:
  <variable>  an area of storage
  <object>    synonym for <variable>
  <value>     a bit pattern stored in an object

>pcg> These are very unconventional, but please consider them carefully.
>
>A line that apparently has been missed :-).

Well, no:  careful consideration does not always lead to acceptance :-).
The definition of 'object' certainly was unconventional,
and I continue to disagree sharply with it.
After the new clarifications, your definition of 'type' makes sense;
but as you admit, it is not very unconventional.

> ...
>I don't want to enter a long discussion,  [...]

Hard to believe :-).

> ...
>Everything else is just implementation detail.

I can already sense a faint smell of the Turing tar-pit.

Markku Sakkinen
Department of Computer Science and Information Systems
University of Jyvaskyla (a's with umlauts)
PL 35
SF-40351 Jyvaskyla (umlauts again)
Finland
          SAKKINEN@FINJYU.bitnet (alternative network address)

davis@barbes.ilog.fr (Harley Davis) (03/05/91)

In article <4572@osc.COM> jgk@osc.COM (Joe Keane) writes:

   To make this a little more clear, let's try to imagine what dynamic scoping
   would mean.  In Common Lisp we can declare some variables special, so let's
   try the same thing for methods.  This would mean that you could change the
   definition of a method in some routine, maybe for a given object or for a
   whole class.  It could be changed permanently, or when you leave it could be
   restored to its original definition.  This isn't supported by C++, unless of
   course you bash the virtual function tables.  It seems like a neat idea,
   although i'm not sure how useful it really is.  FL advocates would say this
   causes more problems than it solves, and i think i agree.

In fact, this is in Common Lisp:  The WITH-ADDED-METHODS special form
dynamically adds methods to a generic function; these methods are
removed after control passes out of the WITH-ADDED-METHODS body.

However, I don't know of any vendors who fully support this special
form or anybody who has ever used it.  EuLisp's object system TELOS,
which is derived from CLOS, does not support this form because it was
deemed nearly useless.

-- Harley Davis
--
------------------------------------------------------------------------------
nom: Harley Davis			ILOG S.A.
net: davis@ilog.fr			2 Avenue Gallie'ni, BP 85
tel: (33 1) 46 63 66 66			94253 Gentilly Cedex, France

jgk@osc.COM (Joe Keane) (03/06/91)

In article pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>Smalltalk/Actor oriented people, being locked in this particular
>implementation of the overload resolver, are blinkered and cannot
>see that what they call "dynamic binding" is really dynamic
>overload (on just the first argument!) resolution and has nothing
>to do with scoping, and is not an essential aspect of the system,
>while scoping is essential to keep together the attributes of a
>class (for traditional class based OO systems, e.g. not CLOS).

I agree with this.  Selecting on just the first argument is really weird, but
if you only think in the OO paradigm you don't see this.  In most OO languages
the first argument even has special syntax, so the problem is not obvious.

I don't know how many times i really want an operation on two objects.  Adding
two numbers is a fine example of this.  Almost everyone is initially disgusted
by the bogus way this is done in Smalltalk.  ``Let me get this straight, you
make a + message, and you tack on the second number, and then you send that
message to the first number.''  I guess Smalltalk programmers get used to it,
which is too bad.

But in OO languages you must arbitrarily make one of the arguments special,
and then you get the benefits only on that argument.  I want reasonable
polymorphism on _both_ argument types.  What's annoying is that in C++ you can
do this with compile-time overloading, but run-time polymorphism has a
completely different mechanism.

What's even worse is that if you explain the problem to a Smalltalk person,
they don't see why this is a problem.  ``Just use double dispatching.''  Yeah
right, and i can use a case statement to implement polymorphism, but it kind
of misses the point.
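
For readers who haven't seen it, double dispatching looks roughly like
this in C++ (the classes here are hypothetical); note how the second
argument's type gets encoded into a selector name:

	struct Int; struct Flt;

	struct Num {
	    virtual Num *add(Num &b) = 0;         // dispatch on 1st argument
	    virtual Num *add_to_int(Int &a) = 0;  // dispatch on the 2nd one,
	    virtual Num *add_to_flt(Flt &a) = 0;  // encoded in the name
	    virtual ~Num() {}
	};

	struct Int : Num {
	    int v;
	    Int(int n) : v(n) {}
	    Num *add(Num &b) { return b.add_to_int(*this); }
	    Num *add_to_int(Int &a);              // Int + Int
	    Num *add_to_flt(Flt &a);              // Flt + Int
	};

	struct Flt : Num {
	    double v;
	    Flt(double x) : v(x) {}
	    Num *add(Num &b) { return b.add_to_flt(*this); }
	    Num *add_to_int(Int &a);              // Int + Flt
	    Num *add_to_flt(Flt &a);              // Flt + Flt
	};

	Num *Int::add_to_int(Int &a) { return new Int(a.v + v); }
	Num *Int::add_to_flt(Flt &a) { return new Flt(a.v + v); }
	Num *Flt::add_to_int(Int &a) { return new Flt(a.v + v); }
	Num *Flt::add_to_flt(Flt &a) { return new Flt(a.v + v); }

Every new Num class has to add an add_to_XXX case to every existing
class, which is exactly the case-statement flavour i'm objecting to.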

In article <PCG.91Mar2185335@odin.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo
Grandi) writes:
>[a lot of definitions of scoping, binding, overloading, and polymorphism]

This is pretty much right on.  Smalltalk programmers please take note of these
distinctions so you can understand what the rest of us are talking about :-).

I'd like to restrict the term `dynamic binding' to mean what it means in Lisp,
but unfortunately the word `binding' is commonly used to mean different
things, so i don't think we can get away with it.

>I would also like to offer some very unusual definitions:
>
><value>		A number.
>
><object>	Synonym for <value>.

Here's where i disagree, i bet you're not surprised.

My main objection is that an object isn't a value, it's a reference.  I'd call
it a stream, as in Lucid, but i think we're talking about the same thing.

If your language has immutable objects, then those are values.  Compile-time
constants can be included here.  Typically, OO languages don't make a good
distinction, if any, between mutable and immutable objects, which is a
weakness.  The C++ `const' keyword is a good start, but it's somewhat limited.

Defining a value as a number seems pretty silly.  It's sort of like, in set
theory, showing how you can define everything as a set.  I mean, it's nice to
know you can do it, but does it really help you?

Most people would agree that a number is just one kind of value, and you can't
have it both ways.  There are better ways of describing representations.  In C
you'd talk about byte strings and in Lisp you'd talk about S-expressions.  I'd
prefer something else, but i'll not get into that here.

><type>		The set of all values over which a given set of <function
>		implementations> is meaningfully defined. Note that a
>		<type> may be _represented_ in several ways, but this
>		is incidental.

I'd define <abstract type> and <implementation type>.  Your definition of
<type> seems to be a mixture of the two, because you talk about <function
implementations>, but then you talk about different representations.
--
Joe Keane, amateur mathematician

johnson@cs.uiuc.EDU (Ralph Johnson) (03/06/91)

It appears that Piercarlo and I use different terminology.  Perhaps
this explains our inability to agree.

It appears that Piercarlo thinks that the most important thing about
OOP is the fact that the names used to refer to instance variables
are hidden outside the object.  I think that this is what he means
by a closure.  It is not what I mean by a closure.  Moreover, I don't
think it is the most important part of OOP.  I guess the reason that
I feel this way is that it is no different from modules, and modules
are old, while OOP is new and exciting.    
	:-), in case anybody didn't guess.

More seriously, the most important aspect of object-oriented programming
is the way it changes the design of large systems.  Classes are not
large, so the design of individual classes is not what is important.
What is important is the fact that we build families of components
with identical interfaces, and can then build applications by mixing
and matching components.  The result is what I call "Tinker-toy"
programming (if I were European perhaps I would call it "Lego-block"
programming), which is where a reasonably small number of kinds of
components can be put together in infinite variety to build a
bewildering number of different applications.  User interface frameworks
like Interviews or MVC are good examples.

johnson> While closures can be used to implement objects, it is not the
johnson> principal style.  The principal style is that an argument is a
johnson> record with one field pointing to an array of procedures, and
johnson> that procedure calling is always done by indirection through
johnson> this array.  This is how C++ and Smalltalk do it, if you are
johnson> willing to think of a Smalltalk class as an array
johnson> of procedures.

pcg>No, that is *not* how C++ and Smalltalk do it. We are speaking of
pcg>completely different things here. Scoping and overloading are completely
pcg>different issues.

Please explain what you mean.  I am a Smalltalk implementor, and know
exactly how C++ is implemented.  Of course scoping and overloading are
different issues.  I stand by my statement that closures are not the
way Smalltalk is implemented.  If you look at the output of cfront then 
you will see that my description of C++ is completely accurate.  You can
think of objects as closures if you want, but that is not how most C++
(or Smalltalk) programmers think of them.

pcg>This view also makes them (it made me so in the past, but I repented
pcg>and was saved!) blind to the possibility of overload resolution based on
pcg>multiple arguments, and arbitrary mix and match of interfaces and
pcg>implementations, and creates a lot of non-problems centered around
pcg>artificial things like "inheritance".

An aside: overloading is what Ada and C++ without virtual functions
use, so I have to translate these sentences into something I can
understand.  In my opinion, Piercarlo's use of "overloading" is wrong,
but then he probably doesn't like my language, either.

It is easy to pick a method based on the classes of multiple arguments.
I do it all the time.  The standard technique used by Smalltalk programmers
is multiple dispatching.  See the paper in JOOP V2 N6 by Kurt Hebel and me
on a programming environment tool that makes it easy.  The original paper
on this technique is by Dan Ingalls in OOPSLA'86.

Smalltalk programmers mix and match interfaces and implementations
*all the time*.  This can be hard for an outsider to see, since
interfaces are not declared and are never written down, but if you
try to type-check Smalltalk (as I have done) then you will see that
inheritance and subtyping are very different in Smalltalk.  Whether
they should be is another question.

johnson> In short, if C++ did not have virtual functions, it would not
johnson> be object-oriented.

pcg>Here we disagree completely. You make OO depend on one linguistic
pcg>feature. I make it depend on support for particular style of
pcg>decomposition.

You can do object-oriented programming in languages that are not
object-oriented.  However, an object-oriented language *must* provide
some support for the essential characteristics of OOP.  I don't
know what you define as OOP.  I define OOP as the ability to plug
different objects together without worrying about their implementation,
and the ability to define abstract algorithms that only depend on the
abstract interface of their arguments.  Thus, CLOS, Self, and Emerald
are certainly object-oriented.  I believe that this shows that I also
define OO as a particular style of decomposition.

johnson> Further, I believe that dynamic binding is essential to
johnson> object-oriented programming, and hereby declare any programming
johnson> language without it to be not object-oriented.

pcg>Is the converse also true? That is, if you have dynamic binding the
pcg>language *is* OO?

"Dynamic binding" can have several definitions.  However, if I
interpret this question as asking whether I would consider CLOS
(which does not protect the state of objects from being manipulated
by outsiders) and Emerald (which does not have implementation inheritance)
to be object-oriented languages, then, to be consistent with what I've
said earlier, the answer is "Yes".

pcg>Would you dare go as far as saying that a program written in C++ or
pcg>Simula 67 in which virtual is never required and used *cannot be* called
pcg>an OO program, even if it is written according to the OO decomposition
pcg>paradigm?

Of course.  Moreover, I claim that it is not written according to the
OO decomposition paradigm.  My version, at least.  It is hard to write
a small program that is.  OOP becomes important as programs get large.

Ralph Johnson -- University of Illinois at Urbana-Champaign

johnson@cs.uiuc.EDU (Ralph Johnson) (03/06/91)

Piercarlo's glossary makes things a little clearer to me.
(I think my site missed his "definitive" characterization
of OOP.)  I never use "binding" to mean the technique used
to implement scoping.  Scoping is pretty trivial, especially
if you, like me, believe that there is only one right way.
(I guess I don't really believe that, but it *is* trivial).
Therefore, I use "binding" to mean what Piercarlo means by
"overloading".

Like I said before, "overloading" has a well-defined meaning in C++
and Ada, namely static overloading, which is not the least bit
object-oriented.

There are at least two kinds of polymorphism, parametric
polymorphism (as in ML) and inclusion polymorphism.  OOP 
polymorphism is inclusion polymorphism, though parametric 
polymorphism is also pretty useful.

Also, Piercarlo's definition of type was perhaps taken to
its most extreme by the language Russell, which had at least
one paper in TOPLAS.  The difference between OOP and Russell
is that in OOP objects must be self-interpreting.

Ralph Johnson -- University of Illinois at Urbana-Champaign

alms@cambridge.apple.com (Andrew L. M. Shalit) (03/06/91)

In article <PCG.91Mar2185335@odin.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:

   mario> Are you restricting the term to non-lexically-scoped variables
   mario> (aka "special" or "fluid") as in Lisp?

   Yes, because I think that overloading resolution and scoping
   implementation are semantically completely different things, as they
   involve different entities for different purposes.

I agree with this.

However, where does this leave instance variable reference, as performed
in Smalltalk?  It's certainly not lexical (because the references to the
instance variables are free).  However, it's also not "dynamic scoping"
in the traditional sense of the term.  The discipline it implements is
closer to dynamic scoping than lexical scoping, because the value of the
lookup depends on a runtime value.  In this sense, an object provides
an explicit environment for resolving free variable references, a very
"dynamic" kind of thing.

   I will take the opportunity to post my own "humpty-dumpty" (a word means
   what I want it to mean! :->) glossary:

   [glossary deleted]

Your glossary doesn't take into account another item which is commonly
called "dynamic binding", but which can also be called "late binding"
or "dynamic linking".  That is, at what point is a function name
resolved to a function object?  Does this occur at compile time, or
run time?  Of course, this has nothing to do with whether polymorphism
or any other sort of dispatch is involved.

 -andrew

pcg@cs.aber.ac.uk (Piercarlo Antonio Grandi) (03/07/91)

On 5 Mar 91 22:54:58 GMT, johnson@cs.uiuc.EDU (Ralph Johnson) said:

johnson> It appears that Piercarlo and I use different terminology.  Perhaps
johnson> this explains our inability to agree.

Yes, yes. Humpty Dumpty, Shakespeare, and Descartes have all waxed on
the subject of terminology.

johnson> It appears that Piercarlo thinks that the most important thing about
johnson> OOP is the fact that the names used to refer to instance variables
johnson> are hidden outside the object.

No, I did not say that. I said that in 99% of OO systems methods are
lexically closed with respect to the class definition, which makes
objects into closures.

johnson> I think that this is what he means by a closure.  It is not
johnson> what I mean by a closure.

I think I have got big communication problems...

johnson> More seriously, the most important aspect of object-oriented
johnson> programming is the way it changes the design of large systems.

I did explicitly say that OO is about a specific decomposition
paradigm...

johnson> Classes are not large, so the design of individual classes is
johnson> not what is important.  What is important is the fact that we
johnson> build families of components with identical interfaces, and can
johnson> then build applications by mixing and matching components.

This is not unique to OO programming. Any decomposition paradigm is
about that. What is unique to the OO decomposition paradigm is the
particular philosophy used to guide the design of component boundaries.

This philosophy is to design component boundaries so as to cluster
together those functions that operate on the same representation, because
it is presumed that this minimizes subsystem connectivity. Other people
(including me, by the way) think that other philosophies are more
appropriate in that they imply a better choice of component boundaries.

johnson> I am a Smalltalk implementor, and know exactly how C++ is
johnson> implemented.  Of course scoping and overloading are different
johnson> issues.  I stand by my statement that closures are not the way
johnson> Smalltalk is implemented.

I am sorry to inform you that that is the case. Objects in Smalltalk
*are* closures, and Smalltalk classes are closure generators. I know
about Smalltalk implementation and indeed objects are implemented as
closures down to their minutest detail. I can understand that this is
not immediately obvious, though.

johnson> If you look at the output of cfront then you will see that my
johnson> description of C++ is completely accurate.

I know about the output of cfront, and I can tell you that your
impression of it is inaccurate. C++ does implement objects as closures;
I will go as far as saying that 'this' is the static link of the closure
object in C++. It is made explicit simply because the underlying C does
not have nested functions and closures, so one cannot, unlike in Scheme,
use language primitives to that effect. 

Are you sure you have compared the Scheme and C++ versions of my
example? They show my point very clearly.

johnson> You can think of objects as closures if you want, but that is
johnson> not how most C++ (or Smalltalk) programmers think of them.

Too bad for them. They miss out on an important conceptual point that
Scheme programmers see very clearly. In one of my postings I have given
a class definition in C++, Scheme, and Simula 67. The three definitions are
nearly equivalent (modulo some language quirks). It is only in the Scheme
rendition that it is *evident* that objects are closures and dynamic
overloading is done by searching a sparse table. The C++ and Simula 67
versions are structurally identical, except that the fact that objects
are closures and that overload resolution searches a sparse table is
hidden in the compiler.

What's the advantage of looking at objects as closures? Well, that you
understand certain things better; also that they *need not* be closures.
For example in CLOS, which has multiple overloading, methods are not
lexically closed with respect to objects. Or maybe they are, in that
ambiguities are resolved by quoting the name of the argument.

I will give another example, in C++:

Suppose we have the following:

	struct Complex { float re, im; };

[1]	Complex operator +(const Complex &a,&b)
	{ return Complex(a.re+b.re,a.im+b.im; }

compared with

	struct Complex {
	    float re,im;

[2]	    Complex operator +(const Complex &b) const
	    { return re+b.re,im+b.im};
	};

and with

	struct Complex {
	    float re,im;

[3]	    Complex operator +(const Complex &b) const
	    { return this->re+b.re,this->im+b.im};
	};

In [1] operator + is not lexically closed in the definition of 'struct
Complex'; but in [2] and [3] it is, even if in [3] this fact is not
used.

I have already expressed my idea that having, as in Smalltalk, Simula
67, and C++ member functions, a single distinguished implicit parameter
with respect to which all overload resolution is done is not a nice
thing.

But as long as this is true, objects will be naturally implemented as
lexical closures, and methods will be lexically closed in the name space
of the class definition of their first (implicit) argument.
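
As a sketch of what a cfront-style translation of [2] roughly produces
(the names here are invented, not what cfront actually emits):

	struct Complex_rep { float re, im; };

	/* The member function becomes a free function; 'self' plays the
	   role of 'this', i.e. of the closure's static link, through
	   which the "free variables" re and im are resolved.          */
	Complex_rep Complex_plus(const Complex_rep *self, const Complex_rep *b)
	{
	    Complex_rep r;
	    r.re = self->re + b->re;
	    r.im = self->im + b->im;
	    return r;
	}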

johnson> An aside: overloading is what Ada and C++ without virtual
johnson> functions uses, so I have to translate these sentences into
johnson> something I can understand.

Ada and C++ and Algol 68 static overloading is not in any fundamental
way different from the ability of C++ or Simula 67 or Smalltalk to use
the same method name in different class definitions.

The difference is pure syntactic sugar; this is clearly illustrated in
Pop-2, where any function application "f(x,y,z)" can be written
equivalently as "x.f(y,z)". I have therefore proposed to eliminate
member functions entirely from C++ and other OO languages, as
redundant (and actually damaging, because they confuse a lot of
issues), and to allow, as in Pop-2, any function application to be
written in either way.

johnson> In my opinion, Piercarlo's use of "overloading" is wrong, but
johnson> then he probably doesn't like my language, either.

It's not the language in itself, it's the conceptual confusion it
generates. One element of this confusion is that giving different
names to the same concept tends to create the mistaken impression that
there are really two concepts.

I indeed think that the difficulty is that you see Smalltalk messaging
as something different from overloading because Smalltalk terminology is,
confusingly, rather like that of Actor systems; but it is not different.

johnson> It is easy to pick a method based on the classes of multiple
johnson> arguments.  I do it all the time.  The standard technique used
johnson> by Smalltalk programmers is multiple dispatching.

This is irrelevant. You can *simulate* multiple overloading in
Smalltalk, as you can easily *simulate* runtime overloading in C (the
Unix kernel and X windows do it all the time). But Smalltalk directly
provides overloading only on the first (implicit) argument.
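
The C simulation I have in mind is the familiar record of function
pointers, sketched here with invented names:

	struct File;

	struct FileOps {                  /* a hand-made method table */
	    int (*read)(struct File *, char *, int);
	    int (*write)(struct File *, const char *, int);
	};

	struct File {
	    const struct FileOps *ops;    /* which 'read' runs depends on */
	    void *state;                  /* the table this file carries  */
	};

	/* "runtime overloading": resolved through the table, not the name */
	int file_read(struct File *f, char *buf, int n)
	{ return f->ops->read(f, buf, n); }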
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@aber.ac.uk

johnson@cs.uiuc.EDU (Ralph Johnson) (03/07/91)

|> Your glossary doesn't take into account another item which is commonly
|> called "dynamic binding", but which can also be called "late binding"
|> or "dynamic linking".  That is, at what point is a function name
|> resolved to a function object?  Does this occur at compile time, or
|> run time?  Of course, this has nothing to do with whether polymorphism
|> or any other sort of dispatch is involved.
|> 
|>  -andrew

You can have late binding without polymorphism, but you can't have
object-oriented (i.e. inclusion) polymorphism without late binding.
Thus, I disagree with the last statement.

johnson@cs.uiuc.EDU (Ralph Johnson) (03/07/91)

In article <4655@osc.COM>, jgk@osc.COM (Joe Keane) writes:
|> 
|> I don't know how many times i really want an operation on two objects.  Adding
|> two numbers is a fine example of this.  Almost everyone is initially disgusted
|> by the bogus way this is done in Smalltalk.  ``Let me get this straight, you
|> make a + message, and you tack on the second number, and then you send that
|> message to the first number.''  I guess Smalltalk programmers get used to it,
|> which is too bad.
|> 
|> But in OO languages you must arbitrarily make one of the arguments special,
|> and then you get the benefits only on that argument.  I want reasonable
|> polymorphism on _both_ argument types.  What's annoying is that in C++ you can
|> do this with compile-time overloading, but run-time polymorphism has a
|> completely different mechanism.
|> 
|> What's even worse is that if you explain the problem to a Smalltalk person,
|> they don't see why this is a problem.  ``Just use double dispatching.''  Yeah
|> right, and i can use a case statement to implement polymorphism, but it kind
|> of misses the point.

Again, I will point to the article by Kurt Hebel and me in the
March/April 90 issue of JOOP.  It describes a double dispatching
browser for Smalltalk that makes double dispatching isomorphic to CLOS
multimethods.

The case statement-like behavior of double dispatching is a feature of
multimethods in general.  Suppose I have a number of graphical objects
(lines, circles, etc) and several display systems (X, QuickDraw, Postscript)
and I add a new kind of graphical object like GreenBlob that can't be
described in more primitive operations.  I will have to define a "draw"
method for each display system.  Similarly, if I want to add an OS/2
display window then I will have to implement lots and lots of primitive
operations.  Of course, I can define drawing a rectangle in terms of
drawing lines and use one implementation for all display systems, but
that works just as well for double dispatching as for multimethods,
especially if you use inheritance along the class of arguments other
than the primary receiver, as we describe in the paper.

The article "Experience with CommonLoops" in OOPSLA'87 measured a
couple of systems and said that about 15% of the methods dispatched
on more than one argument.  This is well within the range where a
feature should be part of a language.  However, Smalltalk does not
include "if" statements in the language, but puts them in the class
library, and they are used in at least 15% of the methods.  The point
is that some features can be built into a language, and others can be
a programming convention.  If it is easy to do something with a
programming convention then there is no real reason to add it to the
language.

Saying that double dispatching is like case statements is silly.
Case statements make programs hard to extend.  Double dispatching
does not.  Graphical objects that require new drawing primitives
are always hard to mix and match with new drawing systems, but
graphical objects that are defined in terms of existing primitives
are easy to match with new drawing systems.  Multimethods will not
solve the problem.

Ralph Johnson -- University of Illinois at Urbana-Champaign

sakkinen@tukki.jyu.fi (Markku Sakkinen) (03/07/91)

In article <1991Mar5.225458.4408@m.cs.uiuc.edu> johnson@cs.uiuc.EDU (Ralph Johnson) writes:
>It appears that Piercarlo and I use different terminology.  Perhaps
>this explains our inability to agree.
>
> [largest part of article deleted]
>
>pcg>Would you dare go as far as saying that a program written in C++ or
>pcg>Simula 67 in which virtual is never required and used *cannot be* called
>pcg>an OO program, even if it is written according to the OO decomposition
>pcg>paradigm?
>
>Of course.  Moreover, I claim that it is not written according to the
>OO decomposition paradigm.  My version, at least.  It is hard to write
>a small program that is.  OOP becomes important as programs get large.
>
>Ralph Johnson -- University of Illinois at Urbana-Champaign

What, an occasion to disagree a bit with Ralph?
I would not say that a piece of code is necessarily non-OO in principle
if it does not happen to use virtual functions.
As a somewhat weak analogy, it is not a necessary condition for
structured programming that a program contains 'while' loops.

Evidently, Ralph's real conviction is that if you really build
software of non-toy size in an object-oriented way, you will in practice
end up with some proportion of the procedures being virtual.
I am sure that he has a lot of empirical evidence to support
that conviction; I would not dare to object.

Markku Sakkinen
Department of Computer Science and Information Systems
University of Jyvaskyla (a's with umlauts)
PL 35
SF-40351 Jyvaskyla (umlauts again)
Finland
          SAKKINEN@FINJYU.bitnet (alternative network address)

sakkinen@tukki.jyu.fi (Markku Sakkinen) (03/08/91)

In article <PCG.91Mar6194500@aberdb.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Antonio Grandi) writes:
>On 5 Mar 91 22:54:58 GMT, johnson@cs.uiuc.EDU (Ralph Johnson) said:
>
>johnson> It appears that Piercarlo and I use different terminology.  Perhaps
>johnson> this explains our inability to agree.
>
> [a lot deleted]
>
>I know about the output of cfront, and I can tell you that your
>impression of it is inaccurate.  [...]
> ...
>
>I will give another example, in C++:

It would help to make people believe that you really know much about C++
if your examples were approximately correct.

>Suppose we have the following:
>
>	struct Complex { float re, im; };
>
>[1]	Complex operator +(const Complex &a,&b)
>	{ return Complex(a.re+b.re,a.im+b.im; }

That one parenthesis is missing does not mislead anybody.
However, you must either declare a constructor in the obvious way
within the 'struct' declaration
or use braces in the 'return' statement to make a structure initialiser.
Let's suppose the former alternative, i.e.
    Complex (float x, float y) {re = x; im = y;}

>compared with
>
>	struct Complex {
>	    float re,im;
>
>[2]	    Complex operator +(const Complex &b) const
>	    { return re+b.re,im+b.im};
>	};

The compiler would complain that you are returning a float instead
of a Complex.  That's the stupid sequencing operator of C and C++.
You must actually write:
    { return Complex (re+b.re, im+b.im); }
assuming the same constructor as above.
A similar correction is needed for example [3].
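
For the record, example [2] with the corrections applied would read
(a sketch only, assuming the constructor above):

	struct Complex {
	    float re, im;
	    Complex (float x, float y) {re = x; im = y;}

	    Complex operator +(const Complex &b) const
	    { return Complex (re+b.re, im+b.im); }
	};

	// Example [1], corrected, would instead be the free function
	//    Complex operator +(const Complex &a, const Complex &b)
	//    { return Complex (a.re+b.re, a.im+b.im); }
	// (it should not coexist with the member version above).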

> [rest deleted]

Markku Sakkinen
Department of Computer Science and Information Systems
University of Jyvaskyla (a's with umlauts)
PL 35
SF-40351 Jyvaskyla (umlauts again)
Finland
          SAKKINEN@FINJYU.bitnet (alternative network address)

pcg@cs.aber.ac.uk (Piercarlo Antonio Grandi) (03/11/91)

On 8 Mar 91 06:10:42 GMT, sakkinen@tukki.jyu.fi (Markku Sakkinen) said:

sakkinen> In article <PCG.91Mar6194500@aberdb.cs.aber.ac.uk>
sakkinen> pcg@cs.aber.ac.uk (Piercarlo Antonio Grandi) writes:

pcg> I will give another example, in C++:
pcg> [ ... sloppy coding ... ]

Apologies for the poor coding. I forgot a few things here and there... :-)

sakkinen> [ ... a "spelling" flame ... ]

OK, ok, it was sloppy coding. But the audience of this group is not made
just of C++ compilers :-); and my example, however flawed, was contrived
(hence the omission of irrelevant details like the constructor), and for
all its syntactic impropriety did manage to get across the point I was
making, which was not about complex numbers or C++ or
constructors...

Moreover subject oriented ("ad hominem") spelling flames ("How can we
respect *you* if you cannot even spell 'antidisestablishmentarianism'
correctly?") or coding flames are not Object Oriented, and therefore
outside the scope of this newsgroup :-).
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@aber.ac.uk

sakkinen@tukki.jyu.fi (Markku Sakkinen) (03/11/91)

In article <PCG.91Mar10183850@aberdb.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Antonio Grandi) writes:
>On 8 Mar 91 06:10:42 GMT, sakkinen@tukki.jyu.fi (Markku Sakkinen) said:
>sakkinen> ...
> ...
>
>Moreover subject oriented ("ad hominem") spelling flames ("How can we
>respect *you* if you cannot even spell 'antidisestablishmentarianism'
>correctly?") or coding flames are not Object Oriented, and therefore
>outside the scope of this newsgroup :-).

Actually I agree.  But I was enticed to pick nits from your posting
because of an earlier passage in it:

>johnson> I am a Smalltalk implementor, and know exactly how C++ is
>johnson> implemented.  Of course scoping and overloading are different
>johnson> issues.  I stand by my statement that closures are not the way
>johnson> Smalltalk is implemented.
>
>I am sorry to inform you that that is the case. Objects in Smalltalk
>*are* closures, and Smalltalk classes are closure generators. I know
>about Smalltalk implementation  [...]
>
>johnson> If you look at the output of cfront then you will see that my
>johnson> description of C++ is completely accurate.
>
>I know about the output of cfront, and I can tell you that your
>impression of it is inaccurate.  [...]

By the way, it is useful that some people like you try to make
established concepts of OOP questionable.  That's the way to see
what is the real contribution of OOP ("if any", you would add)
and what, on the contrary, is more or less word magic.
Specifically:

1. I agree with you that it is unnecessary confusion to speak about
   "message passing" in Smalltalk and similar languages (I have even
   said that in a published paper).

2. I have also tried to minimise the role of inheritance and to explain
   it by other concepts.  However, I don't quite agree with your opinion
   that inheritance is totally superfluous and perhaps harmful (isn't
   that in essence what you have said?).

3. I don't buy your view of objects being simply values.  That is
   evidently the view of functional programming (perhaps logic
   programming too), but it simply does not fit all programming
   paradigms.

Markku Sakkinen
Department of Computer Science and Information Systems
University of Jyvaskyla (a's with umlauts)
PL 35
SF-40351 Jyvaskyla (umlauts again)
Finland
          SAKKINEN@FINJYU.bitnet (alternative network address)

pcg@test.aber.ac.uk (Piercarlo Antonio Grandi) (03/16/91)

On 11 Mar 91 06:23:02 GMT, sakkinen@tukki.jyu.fi (Markku Sakkinen) said:

sakkinen> By the way, it is useful that some people like you try to make
sakkinen> established concepts of OOP questionable.

On this I thank you, but I try to question established *misconceptions*
of OOP, which usually arise from familiarity with Smalltalk or other
narrowly defined systems.

sakkinen> 1. I agree with you that it is unnecessary confusion to speak about
sakkinen>    "message passing" in Smalltalk and similar languages (I have even
sakkinen>    said that in a published paper).

Alleluiah! Your published paper is not the first to make this point, but
its contribution is not irrelevant. 

sakkinen> 2. I have also tried to minimise the role of inheritance and
sakkinen> to explain it by other concepts.

Another excellent contribution!

sakkinen> However, I don't quite agree with your opinion that
sakkinen> inheritance is totally superfluous and perhaps harmful (isn't
sakkinen> that in essence what you have said?).

No, I say that what we really want is unrestricted (when meaningful)
mixing and matching of interfaces and implementations (and
specifications) among themselves and with each other. What currently
passes for "inheritance" is really prefixing, and prefixing is a very
restrictive algebra for interfaces and implementations.

If, contrary to what I perceive to be the common usage, you take
"inheritance" to mean the ability to mix and match interfaces and
implementations (and specifications) _in general_, then I am all for
"inheritance".

I remember Bertrand Meyer in his classic book first introducing the
idea that we want mix and match just as I like it, and then settling,
in Eiffel, for what really only amounts to (whether multiple or
deferred) prefixing. A big disappointment.

sakkinen> 3. I don't buy your view of objects being simply values.  That
sakkinen> is evidently the view of functional programming (perhaps logic
sakkinen> programming too), but it simply does not fit all programming
sakkinen> paradigms.

One of my problems is that at times I am too elliptical. I did not quite
say that objects are *just* values. What I said is that if you define
<type> as the set of all naturals to which a given group of function
implementations is meaningfully applicable, then the notion of <object>
becomes quite redundant, or else misleading.

The reason, let's try to be clearer, is that what currently passes for
<object> as belonging to a <class> is really a particular, often
Smalltalk-centric, implementation of <typing>. CLOS is the obvious (if
incomplete) counterexample, even if one just uses its standard
metaobject protocol.

I am a bit reluctant to continue a discussion on my idea of <type>
because it is so unconventional and thus requires a lot of explanation,
but I will try to mention some related work.

I have recently noticed a definition of <type> that is in some way
analogous, in the language Emerald. There a <type> is the set of all
values to which a given set of function *interfaces* is meaningfully
applicable. I disagree about interfaces, because it is a purely syntactic
requirement that creates what are IMNHO meaningless type equivalence
classes; I prefer a semantics-based partitioning. Yet the similarity is
intriguing.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@aber.ac.uk

eliot@cs.qmw.ac.uk (Eliot Miranda) (03/19/91)

Piercarlo Grandi writes:
>One of my problems is that at times I am too elliptical. I did not quite
>say that objects are *just* values. What I said is that if you define
><type> as the set of all naturals to which a given group of function
>implementations is meaningfully applicable, then the notion of <object>
>becomes quite redundant, or else misleading.
>
So let's not. Let's define a type (in object-oriented systems) as a
set of possible values (elements of the type)
and a set of possible operations where
	operations are performed upon instances of the values of the type
	operations are named
	operations return an instance of a type
	operations have a (possibly empty) tuple of argument types
which is a nicely recursive definition.
And if we're serious, types are themselves elements of some type.

>The reason, let's try to be clearer, is that what currently passes for
><object> as belonging to a <class> is really a particular, often
>Smalltalk-centric, implementation of <typing>. CLOS is the obvious (if
>incomplete) counterexample, even if one just uses its standard
>metaobject protocol.
Classes are ** implementations ** of types. In particular, classes may
be supersets of types (more operations, superfluous state (cacheing)).
And most importantly, there can be ** many possible ** implementations of
a type.
For often pressing pragmatic reasons this is a ** good ** thing. A programmer
can trade time and space by choosing appropriate implementations of types.
Ralph's Typed Smalltalk really depends on this distinction.

There is a real dilemma for the object-oriented software mix + matcher.
On the one hand that person wants to compose new types & manipulate elements of
these.  On the other hand that person wants performance (space & time) and
to achieve performance they require a more intimate connection between
code and state (need to know more about representations).
By making classes explicitly implementations and having a separate typing scheme
(name space, set of values) the programmer then gets the best of both worlds.
They specify their system in terms of types & tune their system in terms of
implementations of types (classes).

As I see it:
	ACTRA provides this by renaming types as classes & having a
	separate implementation hierarchy.

	Self addresses the problem by optimising the hell out of small, loosely
	coupled software modules by aggressive inlining & speculative execution.

	Ralph's Typed Smalltalk provides this by adding types
	(albeit in a rather non-object way).

	<untyped> Smalltalk fudges the issue by horribly confusing types with
	classes.  isKindOf: is an ** abomination ** & I try never to use it.

Ralph, am I right to think that in your system types are simply denoted
rather than reified as objects? And if so are there reasons why this should
be so?

>
>I am a bit reluctant to continue a discussion on my idea of <type>
>because it is so unconventional and thus requires a lot of explanation,
>but I will try to mention some related work.
Piercarlo, you are an inveterate TEASE!
Just work on your explanation. Your idea of type is bound to be worth
hearing about (even if only to try & knock it down :-))

>I have recently noticed a definition of <type> that is in some way
>analogous, in the language Emerald. There a <type> is the set of all
>values to which a given set of function *interfaces* is meaningfully
>applicable. I disagree about interfaces, because it is a purely syntactic
>requirement that creates what are IMNHO meaningless type equivalence
>classes; I prefer a semantics-based partitioning. Yet the similarity is
>intriguing.
But in the functional world this is precisely what types are: they are
signatures of functions over values.  Which is what my definition above is.
In the object-oriented world we should (and do) define type as signatures of
functions over objects.  Of course you can reduce objects to values, by making
their identity part of their value tuple.  This may be a more useful way
of thinking about distributed object systems but for the moment I feel that
preserving an identity-value dichotomy is a useful & natural way of thinking.

-- 
Eliot Miranda			email:	eliot@cs.qmw.ac.uk
Dept of Computer Science	Tel:	071 975 5229 (+44 71 975 5229)
Queen Mary Westfield College	ARPA:	eliot%cs.qmw.ac.uk@nsf.ac.uk	
Mile End Road			UUCP:	eliot@qmw-cs.uucp
LONDON E1 4NS