[comp.lang.misc] Teaching object-oriented paradigm to beginners?

gore@nucsrl.UUCP (Jacob Gore) (01/08/87)

This is directed to all people who like (or, at least, don't dislike) the
object-oriented programming/design paradigm.  If you are not one of such
people, please skip this discussion.

Suppose you wanted to convince the Computer Science faculty at your university
(or college or institute or whatever) that students should be taught to think
and program in the object-oriented paradigm from the very beginning.  What
arguments would you use?

Or, if you are against this idea, and somebody else was introducing it, what
arguments would you use against it?

Basically, my feeling is that object-oriented programming is not an "advanced"
concept in the sense that one needs a lot of programming experience to
comprehend it.  In fact, I think that it would be _easier_ to teach to
beginners, because it is very orthogonal and consistent.  But before I
formally present this idea to people who make decisions around here (many of
whom have a very foggy (if any) idea about what object-oriented programming
is), I'd like to get some opinions from the net folks.

Thanks

Jacob Gore
Northwestern University, Computer Science Research Lab
{ihnp4,chinet}!nucsrl!gore

sierchio@milano.UUCP (01/09/87)

Basically, there are several main advantages to the object-oriented
approach. One is the modularity. Major portions of programs at 
several levels in the hierarchy can change the way they work
without changing everything else (IMAGINE! for those of you still
using C or Pascal or such).

This is because one gets information to and from other objects
via a convention called message-passing.  An object has variables
whose values are known only to itself.  It also has
METHODS, which are essentially functions, procedures, etc.

Say you have an object BLAH.

BLAH has variables Color
	    and    Size.

We create an INSTANCE of BLAH, and call it CHARLEY (essentially, this is
like saying, in C, "BLAH	CHARLEY;").

Now, CHARLEY is an object of type BLAH, meaning that BLAH is the CLASS
of CHARLEY.  BLAH objects inherit certain methods common to all
objects.  Suppose we want to discover CHARLEY's Color -- then we send
CHARLEY a message and CHARLEY responds with the value of Color.

Now, suppose at a future date, Color is no longer a variable in objs
of type  BLAH, but is calculated as a result of some process--

it doesn't matter. You still send the same message to CHARLEY. From the
outside, you don't know if what you're asking for exists, or is made
for you. Neat, huh?
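
In rough C++ terms the same idea might look like the sketch below (a hedged
illustration -- the member layout and the integer "colour" values are invented
here, not taken from any real system).  The caller always goes through the same
member function, so the class stays free to change how the answer is produced:

    #include <iostream>

    // Hedged sketch: Blah and Charley are just the names from the example
    // above; the integer colour encoding is made up for illustration.
    class Blah {
    public:
        Blah() : color_(3), size_(10) {}
        int color() const { return color_; }   // callers always ask this way
        int size()  const { return size_; }
    private:
        int color_;   // today, Color is simply a stored variable
        int size_;
        // Tomorrow color() could compute its answer from size_ instead,
        // and not one caller would have to change.
    };

    int main() {
        Blah charley;                          // roughly the "BLAH CHARLEY;" above
        std::cout << charley.color() << "\n";  // send CHARLEY a message asking its Color
        return 0;
    }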

You can see that for the novice, it's a natural. You don't need to know
anything about an object other than its name, and what kind of things
you might want to ask about it.


I recommend that you investigate the object-oriented extensions to Lisp,
LOOPS and FLAVORS. I am currently using Franz Common Lisp with Flavors
as one of the extension packages. It runs on many UNIX machines, since
it was written in C under BSD Unix, and I like it a lot.

Is Gilbert Krulee still there?
If so, please tell him hi from me.  He may not remember, but no matter.

Write me via e-mail, and let me know how it's going.  And if Krulee
has an e-mail address, send it along to me.

There's a wealth of literature on Smalltalk -- good to peruse.  I
mentioned Lisp because I know it's an available language, and it can
be a good language for novice programmers who haven't learned any
bad thinking habits yet from procedural languages, and it supports
object-oriented programming.

so much for my ramblings. Good Luck,

	Michael Sierchio



-- 
	
	Michael Sierchio @ MCC Software Technology Program

	UUCP:	ut-sally!im4u!milano!sierchio
	ARPA:	sierchio@mcc.ARPA

	"WE REVERSE THE RIGHT TO SERVE REFUSE TO ANYONE"

ee161aba@sdcc18.ucsd.EDU (David L. Smith) (01/09/87)

In article <4000001@nucsrl.UUCP> gore@nucsrl.UUCP (Jacob Gore) writes:
>This is directed to all people who like (or, at least, don't dislike) the
>object-oriented programming/design paradigm.  If you are not one of such
>people, please skip this discussion.
>
>Or, if you are against this idea, and somebody else was introducing it, what
>arguments would you use against it?

I am not necessarily against it, and am learning a couple of object oriented
languages at the moment, but I can think of a couple of arguments to be
used against teaching it exclusively as an intro to programming.

My views are that the more you know about what's going on inside of the machine,
the better.  If you want to teach Joe Freshman how to program, teach him
whatever his brain can absorb easiest.  If you're teaching a budding programmer,
I think they should start off in machine language. This doesn't have to be 
a real machine language, but possibly a very simple one to get them used to 
what's actually happening inside.  I know a lot of students who have very 
little idea of what actually happens when their program executes.  Knowing 
machine language when I started learning Pascal made understanding pointers 
simple for me, but it was a real headache for a lot of other people I know.
Starting from the bottom and then working up in levels of abstraction will
tend (in my opinion) to make a better programmer at each level.  When you
are taught English (formally), first you learn the alphabet, then you learn 
words, etc.  You aren't set up with Shakespeare right away.

Another argument is that the "real world" doesn't use object oriented languages.
I don't support this; if we were only doing "real world" stuff we'd all
be learning COBOL right now.  However, this is an argument you may run
across.

Finally, the jump from a "standard" language to an object oriented one is
not easy.  It takes a lot of re-learning.  The jump from an object oriented
language back may be just as difficult.  If the program is planning to do
a lot of "standard" languages later in the course sequence, it may be
easier to start them with a "standard" language.  

-- 
			Cheers,
			David L. Smith
			{ucbvax, ihnp4}!sdcsvax!sdcc18!ee161aba
			"Consistency is the hobgoblin of little minds"

shebs@utah-cs.UUCP (Stanley Shebs) (01/09/87)

In article <602@sdcc18.ucsd.EDU> ee161aba@sdcc18.ucsd.edu.UUCP (David L. Smith) writes:

>My views are that the more you know about what's going on inside of the machine,
>the better.  If you want to teach Joe Freshman how to program, teach him
>whatever his brain can absorb easiest.  If you're teaching a budding programmer,
>I think they should start off in machine language.

This is a really bass ackwards approach to learning.  Rather than learning
general concepts, everybody gets immersed in the minutiae of (I assume)
Von Neumann machine architecture, which is about as bad a start as I can
imagine.

>I know a lot of students who have very 
>little idea of what actually happens when their program executes.

"Actually happens"?  Do you mean assignments into registers, switching of
transistors, or flow of electrons and holes in semiconductors?  Even as you
decry abstraction, you're depending on it!

>Knowing 
>machine language when I started learning Pascal made understanding pointers 
>simple for me, but it was a real headache for a lot of other people I know.

That's why teaching Pascal is at least as bad an idea as teaching machine
language.  The semantics (i.e. true explanation) of Pascal is inextricably
tied with sequential machine architecture, so of course it's going to be
confusing to learn Pascal without knowing architecture.

It is much better to start off with a reasonable abstract idea of computation
and a language based on that idea.  For instance, Lisp and Scheme are
based on the idea of mathematical functions, Prolog is based on the idea
of proof, SETL is based on the idea of sets, and Smalltalk is based on the
idea of communication.  Although these are quite different ideas, they are
already somewhat familiar to the student and thus relate machines to the
rest of the world.  Later on, students can learn about the low-level details
of contemporary architecture and languages to go with it.  MIT for instance
starts all their CS freshcritters with Scheme, which is introduced as a sort
of fancy calculator with parens in all the wrong places :-).

>Starting from the bottom and then working up in levels of abstraction will
>tend (in my opinion) to make a better programmer at each level.

Maybe a better Von Neumann machine language hacker, but introductory computer
classes also teach the future innovators of the field, and you don't want to
deform their brains with the current set of mistaken beliefs!

>When you
>are taught English (formally), first you learn the alphabet, then you learn 
>words, etc.  You aren't set up with Shakespeare right away.

Not a good analogy.  English is a language, while computer science is a
field of study.  Of course, many people confuse the study of computer science
with the memorization of programming languages...

>Finally, the jump from a "standard" language to an object oriented one is
>not easy.  It takes a lot of re-learning.  The jump from an object oriented
>language back may be just as difficult.

It can be very easy if the basic concepts underlying all languages are
understood in the first place.  The *concept* of objects can be presented
in any language you like, even Pascal or machine language.
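
For what it's worth, here is one purely illustrative way to fake the object
concept in a language with no object support at all: a record that carries its
own operations as function pointers.  (Everything below is invented for the
example; it is deliberately written in C-style code, which also compiles as C++.)

    #include <stdio.h>

    /* Illustrative only: a hand-rolled "object" -- a record holding both
       its data and pointers to its operations. */
    struct Counter {
        int value;
        void (*bump)(struct Counter *self);
        int  (*read)(struct Counter *self);
    };

    static void counter_bump(struct Counter *self) { self->value++; }
    static int  counter_read(struct Counter *self) { return self->value; }

    int main(void) {
        struct Counter c = { 0, counter_bump, counter_read };
        c.bump(&c);                    /* "send c the bump message" */
        printf("%d\n", c.read(&c));    /* prints 1 */
        return 0;
    }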



							stan shebs
							utah-cs!shebs

holloway@drivax.UUCP (Bruce Holloway) (01/09/87)

In article <4000001@nucsrl.UUCP> gore@nucsrl.UUCP (Jacob Gore) writes:
>Or, if you are against this idea, and somebody else was introducing it, what
>arguments would you use against it?

Object programming languages are nice, and are readily understandable,
especially if your mind hasn't already been warped by BASIC or something.

However, at UNH, everyone is required to use the computer to some extent.
The first question non-engineers ask is, "what good will this do me?" There
aren't a lot of surveying programs written in Smalltalk, for instance. So they
spend some time learning an object language, and immediately forget it. Then
they get out into the field, and have to use Fortran, or Pascal, or BASIC
(my brother-in-law is a surveyor, and that's what HE uses), and are totally
unprepared.

Worse, they have been taught to think in a totally different manner wrt
programming. So the only thing left them is some experience with computers.
And even then, probably not a micro.

I'd probably pick a structured BASIC to teach to beginners who were unlikely
to take any more programming courses. Or Pascal.
-- 
....!ucbvax!hplabs!amdahl!drivax!holloway
"What do you mean, 'almost dead'?" "Well, when you stop breathing, and moving
around, and seeing things... that kind of almost dead."

socha@drivax.UUCP (Henri J. Socha (socha)) (01/09/87)

Bruce, one of the things I learned in University was that when you get out
into the real world you spend the first x months learning how to do things
their way.  They hired you (and not someone else) because you proved that
you COULD learn (being almost more important than WHAT you learned).

I found that learning CONCEPTS about data structures, structured programming,
algorithms, etc. was more important than learning WATFIV, Assembly, C, etc.
I also feel that I was lucky that I did not! learn BASIC at first.
BASIC (at least BASIC in '68) destroys one's understanding of good
programming techniques (few recover). (** please no FLAMES from BASIC lovers **)

Object oriented programming should teach the importance of data structures,
algorithms, the advantages of data hiding, function organization,
re-use of software libraries, i.e. how to organize and write a programme.
Once you know how, you can pick up WHICH language at your place of employment.

Therefore, (finally :-) I feel that learning object oriented programming
techniques is an excellent way to learn about computer programming.
(Also a teeny bit on assembly language so they can appreciate what the
compiler/interpreter is doing for them.)

billw@navajo.STANFORD.EDU (William E. Westfield) (01/11/87)

One problem with languages that directly implement high level
abstractions is that they require the user to understand
high level abstractions too.  MACSYMA is a wonderful system
for doing calculus, for example, but it won't help someone
who doesn't know calculus find the area under a curve.
Indeed, they would probably be better off knowing BASIC,
and having a picture with rectangles and what not.

In a similar way, object oriented programming seems to
take for granted a lot of things that are just as meaningless
to beginning CS types as TDZA 1,1 (inheritance of properties?
Properties? local storage? temporary variables? (all from
the first 50 pages of Smalltalk-80...))

BillW

mnc@m10ux.UUCP (01/11/87)

Thank you, Michael for a lucid, no-bullshit explanation of your view of the
object-oriented paradigm.  I have heard a similar summary from some of the
(precious few) other promoters of this paradigm who are willing to provide a
non-mystical comparison of it to more established programming techniques,
using conventional computer science terminology.  This leaves me with two
unanswered questions:

(1) What is the difference (other than a rather contrived, anthropomorphic
    view of the relationship between functions and values), between O-O
    programming and the ten (?) year old theory and practice called "abstract
    data types"?  The fundamental belief with technical importance in
    both methods (with which I completely agree) is that pieces of programs
    should be divided into chunks that have a precisely documented interaction
    with other chunks, but the internals of which are completely unknown (or
    irrelevant to) other chunks.  Put more concisely, "separate WHAT it does,
    from HOW it does it, so that the HOW can be changed without affecting the
    WHAT".  Isn't O-O programming simply a special case of, i.e. one particular
    method for achieving, abstract data types?  It seems to lie completely
    under the definition of an abstract data type as a collection of types
    and functions on those types, together with the notion that we can have
    multiple objects of the abstract data type.

(2) Why is it so hard for O-O programming advocates to explain the advantages
    of their programming methodology without slipping into obscure, specialized
    O-O jargon, or invoking the religious argument, "well you can't under-
    stand why it is better until you've done a large amount of O-O programming
    yourself"?
-- 
Michael Condict		{ihnp4|vax135}!m10ux!mnc
AT&T Bell Labs		(201)582-5911    MH 3B-416
Murray Hill, NJ

holloway@drivax.UUCP (Bruce Holloway) (01/12/87)

In article <602@sdcc18.ucsd.EDU> ee161aba@sdcc18.ucsd.edu.UUCP (David L. Smith) writes:
<Another argument is that the "real world" doesn't use object oriented languages.
<I don't support this; if we were only doing "real world" stuff we'd all
<be learning COBOL right now.  However, this is an argument you may run
<across.

If we were only doing "real world" stuff we'd all be using 'C', FORTRAN,
Pascal, BASIC, and assembly language right now.
-- 
....!ucbvax!hplabs!amdahl!drivax!holloway
"What do you mean, 'almost dead'?" "Well, when you stop breathing, and moving
around, and seeing things... that kind of almost dead."

kens@hpldory.HP.COM (Ken Shrum) (01/12/87)

I don't believe that there is one 'right' programming language to teach
to beginners.  It's important to remember that, for the vast majority
of users, software systems are tools and not of interest in their own
right.

If a user is interested in modeling a system, and if that system may be
modeled easily using an o-o approach (a non-trivial restriction), then
o-o is probably one of the best ways to go.  O-o design is built on the
concepts of modeling a system, and describing protocol independently of
how the internal state of the system is modeled.

In the course of system design I often do tasks that are best described
using linear programming techniques.  These include solving large
matrix equations, determining the extrema of constrained functions and
other numerical tasks.  The programs which I develop to determine
solutions to these problems are qualitatively different than the system
which I eventually build - the design style varies with the nature of
the problem.

	Yes, I could use o-o techniques to build such programs, and I
	do within reason.  O-o design just does not work significantly
	better than procedural design for such tasks.  What I'd really
	like is a specialized language...

O-o design is probably an excellent place to start for students who
will someday build many systems for which such techniques are
applicable.  Engineers (non-software) may want to start with numerical
methods, including ways to maximize precision, how to determine the
actual precision of results and techniques for solving matrix eqns,
etc.

	Ken Shrum
	Logic Design Operation
	Hewlett-Packard
	...!hplabs!hpldola!kens

goer@sphinx.UUCP (01/13/87)

I've been reading the discussion about novice programmers and o-o languages for
a day or two now.  One thing that strikes me is that no one is discussing things
like concrete applications.  Surely computer design, architecture, programming,
and what not are all interesting in and of themselves.  But most people who are
learning to program have some definite goal in mind - some problem they hope to
solve by programming.  People who are only cutting code for a living obviously
do not fit into this mold.  Nor do theorists of various stripes.  Folks like me
who are primarily involved in some other discipline, and who see programming
skills merely as a useful adjunct, need to concentrate on a language (or group
of languages) that will answer their immediate needs.  We probably won't 
have the time or inclination to learn several languages extremely well, so a
miss on the first try could be fatal (figure that one out :-)).
     What I'm leading up to is, in actuality, a question.  I myself am a
beginning programmer.  My experience has been with 8088 assembler and with
BASIC.  No, my mind has not been bent too much by the latter of these, since
the Microsoft Assembler for the IBM PC is structured too much like Pascal.
My interest is, however, not in low-level programming.  I just thought I ought
to get familiar with computer hardware so that freaky glitches and bugs didn't
give me the chills.
     What I really want to accomplish by programming is to handle strings --
strings of characters that make up graphic representations of natural
languages.  I need to be able to search, store, retrieve, and manipulate text
in bizarre ways.  And I want to do so with a language that is not idiosyncratic
(i.e. it will be around ten years from now, with roughly the same syntax, on a
reasonably large number of machines).  What does that leave me with?  "Good
programming practices" are nice, but I'd be quite willing to sacrifice the
theory of good programs for the actual capacity to get a program written that
gets the job done.  I mean, I'd hate to waste a year learning a language that's
great pedagogically, and find after everything that this language really isn't
designed to accomplish my aims in a very elegant fashion!

                                          -Richard Goerwitz

sierchio@milano.UUCP (01/13/87)

To respond to your question, second one first:

It is hard for advocates to explain anything effectively -- we're
talking about religious issues, not intellectual ones. If it's
someone's religion, best not to futz with it. Ever try talking
to a FORTH nut?  Of course, the real value of something ought to
be apparent without devoting one's life to it first.  The O-O paradigm
offers a promise of stability in the environment, since it is inherently
modular.  We should see benefit in terms of reusability of programs
and concepts. I think the Xerox Altos and Star are successes -- if you
count the Macintosh as a Successor.  It's now ready to support the O-O
paradigm from the programmer's view, as well as the user's. 

Remember that we hackers are users, too! And the tools we use to build
these marvelous systems would be rejected as primitive by the newly
sophisticated general public. 

But when someone says, "you have to do a large amount of O-O before you
TRULY UNDERSTAND ...."  PUT YOUR HAND ON YOUR WALLET! "My guru says..."


Now, for your first question -- Abstract Data Types are Data Types.
Objects in O-O programming are code and data. This is nothing new, at
least at first glance -- LISP has always represented (until recently,
when it has been forced to accommodate everybody) functions and data
as lists, and that meant that programs could build and evaluate their
own little worlds.

What's different about O-O is that it has little to do with whatever
embedded language is used. Objects know about themselves. They have all
the code and data it takes to perform their design function. At least
that's the way it looks to the programmer. So, it is not merely a subset
of abstract data types, though it shares some conceptual foundations
with ADT's. It is tremendously appealing, and I think that its time has
finally come -- we should see O-O in efficient implementations across
a multitude of environments, and soon. 

As they say in auto racing -- "when the green flag drops, the bullshit stops."

We'll see, I guess. Personally, if it stops me from having to reinvent
the proverbial wheel on a continuous basis, I'll give it a go.

ciao!

-- 
	
	Michael Sierchio @ MCC Software Technology Program

	UUCP:	ut-sally!im4u!milano!sierchio
	ARPA:	sierchio@mcc.ARPA

	"WE REVERSE THE RIGHT TO SERVE REFUSE TO ANYONE"

bzs@bu-cs.BU.EDU (Barry Shein) (01/14/87)

From: kens@hpldory.HP.COM (Ken Shrum)
>I don't believe that there is one 'right' programming language to teach
>to beginners.  It's important to remember that, for the vast majority
>of users, software systems are tools and not of interest in their own
>right.

I tend to agree with you, unfortunately we sort of have to pick one
anyhow.

Actually, I am more and more drawn towards the concept of surveying
programming languages in a first 1-year course, possibly getting a
little proficiency in one but touching upon more than just one or two.

There is a (relatively old) textbook by Eliot Organick which was
intended to be a survey of Computer Science in the same spirit as
Halliday and Resnick is a survey of physics, Keeton is a survey of
biology and Sienko and Plane is a survey of (well, inorganic)
chemistry (Thomas' Calculus, Samuelson's Economics, hmm, something
is missing, no?)

The idea was basically to drive the subject matter by "great thoughts
in computer science" and leave the mechanics of programming to more of
a (very important) laboratory procedure. Consequently, you might do
mathematical algorithms in Fortran, some systems in C, string
processing in Snobol and symbolic computation in Lisp (maybe even
parse a toy grammar in Lex and Yacc.) Object oriented programming
would be a fine topic in this day and age and should involve an object
oriented language (I like LOOPS but the pragmatics of an intro course
would probably have one use C++ which isn't a bad thing.)

Oh, you won't get very proficient in any of them right away, but that's the
difference between an academic program and a trade school: the former tries
to put the questions in your head and then answer them, rather than just
making you memorize other people's answers.  It's harder and a bit more
circuitous that way, but the extra ground covered is well worth the effort.

I think something like this could be designed without being impossible
or overly superficial.  I would love to see, for example, Donald Knuth
design such a text based upon his Art of Computer Programming (i.e.
condense the whole mess back into an introductory student format);
unfortunately he'd probably use MIX or WEB or TeX for the examples :-).
Maybe someone else could come up with something brilliant based on a
format like this.

But perhaps this strays from the topic...maybe not.

	-Barry Shein, Boston University

rentsch@unc.UUCP (01/17/87)

In article <4000001@nucsrl.UUCP> gore@nucsrl.UUCP (Jacob Gore) writes:

> Suppose you wanted to convince the Computer Science faculty at your
> university (or college or institute or whatever) that students should
> be taught to think and program in the object-oriented paradigm from
> the very beginning.  What arguments would you use?  

> Or, if you are against this idea, and somebody else was introducing
> it, what arguments would you use against it?  

> Basically, my feeling is that object-oriented programming is not an
> "advanced" concept in the sense that one needs a lot of programming
> experience to comprehend it.  In fact, I think that it would be
> _easier_ to teach to beginners, because it is very orthogonal and
> consistent.  But before I formally present this idea to people who
> make decisions around here (many of whom have a very foggy (if any)
> idea about what object-oriented programming is), I'd like to get some
> opinions from the net folks.  

The question raised is more complex than it appears at first blush,
and it has taken me some time to sort out in my own mind the various
concerns.  Now that I have had time to mull the question over for a
few days, I have some hope of giving some sensible answers.  The
various concerns are herein addressed in the order of most important
to least important.

The unasked question is "Is teaching OOP first a good idea?", to
which I answer yes.  I say yes because it is generally accepted that
the programming language (or: methodology) one learns first has the
strongest influence, and I believe OOP is the best (well-understood)
methodology now available for general purpose software writing.  [But
more about "well-understood" later.]  This reasoning would be my
primary argument that OOP should be taught first rather than later.

I believe, along with the original poster, that OOP is comparatively
easy to teach to beginners, although for different reasons than he
gives.  Objects in OOP have *intrinsic* behavior, which is somehow a
"more natural" way of thinking than the extrinsic behavior exhibited
by more conventional programming languages.  The "more natural"
aspect means not only that OOP will be easy to teach, but also that
OOP will be a better model for expressing programmatic ideas.  This
(not very scientific) observation -- i.e., that OOP is "more natural"
-- would be part of my second argument in favor of teaching OOP
first.  

In any discussion of OOP, one must be careful to separate the
understanding from the misconceptions, and this is far from trivial.
I presented my ideas on this in great detail several years ago
["Object Oriented Programming", SIGPLAN Notices, September 1982], and
I feel that paper is still good reading today.  "Well-understood"
does not mean "widely-understood", and it is sadly still true that
OOP is not understood or is even misunderstood by a great many
computer science professionals.  If I were to argue *against*
teaching OOP first, I would start by asking "Do we have someone who
is really qualified to teach OOP as a first, and therefore the most
important, programming discipline?"  Before deciding what you should
do, you should know what you do well, as this is crucial in
fundamental courses.  (These comments are not intended to reflect on
any particular persons -- I merely state that the question should be
asked.)  

Also, you should be careful to distinguish OOP as a principle and OOP
as embodied in a programming language.  In my experience, conveying
the ideas of OOP depends a lot on having support for OOP in the
language vehicle, more for example than communicating the ideas of
structured programming depends on the particular language used.  Even
if you have a language which supports OOP directly, the questions of
how purely the language presents OOP principles, and how much other
stuff the language contains, will make the proposed course offering
more or less attractive.  The original poster mentions OOP being
"very orthogonal and consistent".  Try looking at C++.  C++
definitely supports OOP, but is not very orthogonal and consistent.
Now, I agree that orthogonality and consistency are desirable
qualities, but some "OOP" languages have them and some do not.  The
decision of whether to teach OOP as a "first language" cannot be made
without also knowing which language would be used as the vehicle for
presenting the ideas and principles of OOP.  

I note in passing that PASCAL is an existence proof of a language
which can be successfully taught even though it was essentially
unused outside the university environment.  Furthermore, by virtue
of being taught at many universities, PASCAL has become widely
available outside the university environment.  So, if someone says
that OOP is not in demand "in the real world", you can point out
that this is a non-argument, for which PASCAL provides the proof.
(You might further ask the pointed question of whether the job of
the university is to fulfill the demands of industry or to lead
industry in the right direction in spite of itself.  But that is
another topic.)

An argument against OOP is that, although many topics in Computer
Science can be presented within the OOP style, not all can.  So, you
had better be prepared to do also some more conventional education
along the way.  This seems like a side issue to me -- every
curriculum has courses designed to round out the various styles of
programming presented.

Some people, in arguing against OOP, say things like "it's too slow".
Even if this were true (with which I disagree), it is not an argument
against teaching OOP as a first programming language.  For if it
were, the logical conclusion is that the first language should be
assembly language.  (For those poor souls who *do* think we should
start with assembly language, ask if we should try to learn
sub-atomic particle physics by starting with the "Earth Wind Water
Fire" physics model of ancient times.)  


Well, that seems like more than enough.  Hope you got some good
ideas.  And good luck.

cheers,

Tim

rentsch@unc.UUCP (01/17/87)

In article <147@m10ux.UUCP> mnc@m10ux.UUCP writes:

> Why is it so hard for O-O programming advocates to explain the advantages
> of their programming methodology without slipping into obscure, specialized
> O-O jargon, or invoking the religious argument, "well you can't under-
> stand why it is better until you've done a large amount of O-O programming
> yourself"?

I would say this differently.  It is not that you can't understand
it, but that it is difficult for me to *explain* it until you have
done some (not necessarily large) amount of O-O programming yourself.
This difficulty seems to happen in spite of my best efforts, witness
my paper "Object Oriented Programming", SIGPLAN Notices, Sept 1982.
[Incidentally, the paper was written for just that reason -- to
explain what OOP is to someone who had not done any -- and it did
not succeed.]

As to why explaining OOP is so hard:  I suspect an understanding of
OOP requires a paradigm shift, and paradigm shifts are just
inherently difficult, especially without the experience of trying
them.  Remember the difficulty the quantum mechanics people had in
the 1920's and 30's?  Even Einstein, a brilliant physicist, was
reluctant to see things from the QM point of view, but now QM is an
accepted part of physics.  The difficulty there was the (radical)
paradigm shift that QM required before it could be understood.  



> What is the difference (other than a rather contrived, anthropomorphic
> view of the relationship between functions and values), between O-O
> programming and the ten (?) year old theory and practice called "abstract
> data types"?  The fundamental belief with technical importance in
> both methods (with which I completely agree) is that pieces of programs
> should be divided into chunks that have a precisely documented interaction
> with other chunks, but the internals of which are completely unknown (or
> irrelevant to) other chunks.  Put more concisely, "separate WHAT it does,
> from HOW it does it, so that the HOW can be changed without affecting the
> WHAT".  Isn't O-O programming simply a special case of, i.e. one particular
> method for achieving, abstract data types?  It seems to lie completely
> under the definition of an abstract data type as a collection of types
> and functions on those types, together with the notion that we can have
> multiple objects of the abstract data type.

OOP is not (in my opinion, at least!) simply a special case of
abstract data types.  Here are several reasons, ranging from
specific and particular to more general (which you may feel are
vague and contrived -- even so I present them).  I only hope that
among the various reasons presented, some point will strike home.
[I use Smalltalk as the archetypal OOP language.]

In Smalltalk, variables (and parameters) do not have a type, in the
sense that a type is known at compile time.  With abstract data
types this is not so -- even though we don't know how the operations
work, with ADT's we at least know what the operations *are*.  In
Smalltalk we don't even know that.  Now, you can argue about whether
this difference is good or bad, but it is certainly a difference,
and I claim a significant and useful one.

Second, classes in Smalltalk do not have "precisely documented
interactions" in the sense that even if you know the class of some
variable, you don't know anything about the operations except their
names (and so how many parameters each operation expects).  In
particular, there is no specification of what function the operation
will perform, or even of what "type" of arguments are required.
Again, contrast this to ADT's, which do have specifications.  You
might believe that this makes ADT's superior to OOP -- on which point
I disagree -- but even so you must admit this is a difference.  (Why
that difference makes OOP better is another matter.)  

Most ADT's cannot express very polymorphic operations, for example
stacks of mixed types (not to be confused with type parameterized
stacks, which can be instantiated multiple times for different
types).  In Smalltalk, it is trivial to write a class definition for
a stack which can mix integers, floats, rectangles, and even other
stacks within a single stack.  The ease of writing very polymorphic
code clearly distinguishes OOP from ADT's.
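
By way of contrast, here is roughly what it takes to get a mixed container in
a statically typed language such as C++ (an invented example, not offered as
the definitive way): every element must first be declared under a common base
class before it may share the stack, whereas the claim above is that Smalltalk
needs no such declaration at all.

    #include <iostream>
    #include <vector>

    // Invented example.  The mixed stack works in C++ only because every
    // element is declared, in advance, to be a kind of Shape.
    struct Shape {
        virtual ~Shape() {}
        virtual double area() const = 0;   // each element answers "area" its own way
    };

    struct Square : public Shape {
        double side;
        Square(double s) : side(s) {}
        double area() const { return side * side; }
    };

    struct Circle : public Shape {
        double radius;
        Circle(double r) : radius(r) {}
        double area() const { return 3.14159 * radius * radius; }
    };

    int main() {
        std::vector<Shape*> stack;         // the "mixed" stack
        Square sq(2.0);
        Circle ci(1.0);
        stack.push_back(&sq);
        stack.push_back(&ci);
        for (unsigned i = 0; i < stack.size(); ++i)
            std::cout << stack[i]->area() << "\n";   // 4 then 3.14159
        return 0;
    }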

"Separate WHAT it does, from HOW it does it" certainly characterizes
ADT's.  But "this is what should be done" is not what I think of as
OOP; rather it is more like "what I WANT is blah", but leave the
decision of what is to be done (more than just how to do it) up to
the object.  I realize this may seem like a fine distinction (or
perhaps no distinction at all), but I assert that the distinction is
real and significant.

(The vaguest explanation of all, but still, I hope, useful.)  An ADT
is, indeed, a "collection of types and functions ON those types"
[emphasis mine].  In OOP, on the other hand, data and function are
inextricably linked -- rather than functions operating on data,
operations are intrinsic to the object just as much as data is
intrinsic to the object.  Data and operations cannot be separated in
OOP;  they are separate in ADT's, only collected together for the
purpose of presenting the data type to the "outside world", but
internally they are still separate.
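
One way to make that distinction concrete (an invented example, and admittedly
a caricature): in the ADT style the operation is a function written ON the
type and merely packaged next to it; in the OOP style the operation belongs to
the object itself.

    #include <cmath>
    #include <iostream>

    // Invented example.  ADT style: a type plus a function ON that type,
    // packaged together but still conceptually separate.
    struct AdtPoint { double x, y; };
    double adt_norm(const AdtPoint &p) { return std::sqrt(p.x * p.x + p.y * p.y); }

    // OOP style: the operation is intrinsic to the object itself.
    struct OopPoint {
        double x, y;
        double norm() const { return std::sqrt(x * x + y * y); }
    };

    int main() {
        AdtPoint a = { 3.0, 4.0 };
        OopPoint o = { 3.0, 4.0 };
        std::cout << adt_norm(a) << " " << o.norm() << "\n";   // both print 5
        return 0;
    }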

Just as I believe your questions were asked sincerely, so I have
tried to give sincere and informative answers.  Hope they helped.
(See also the paper reference earlier.)

cheers,

Tim

shebs@utah-cs.UUCP (01/17/87)

In article <624@unc.unc.UUCP> rentsch@unc.UUCP (Tim Rentsch) writes:

>OOP is not (in my opinion, at least!) simply a special case of
>abstract data types.

Depends on what you mean by "abstract data type".  Modula-2's view of
ADTs for instance is totally useless, since a type is equated with
syntax rather than with semantics.  CLU is better but still restrictive
on what can be done.  If you want to see an ADT language with the right
amount of power, check out Goguen et al's OBJ2 (described in POPL 85 and
elsewhere).  It has everything Smalltalk has and more, not to mention
that it's much cleaner and less arbitrary than much of Smalltalk.

>In Smalltalk, variables (and parameters) do not have a type, in the
>sense that a type is known at compile time.

Use of the phrase "compile time" is misleading.  What a compiler can
or cannot do to a program is irrelevant - the issue is what can be expressed
in the language and how.  I assume that you really mean to say "the
value of a variable can be an object of any type at any time".  This is
polymorphism, which is certainly not unique to Smalltalk.  OBJ2 has it
for instance, under the guise of "parametrized objects" and "subsorts".

>With abstract data
>types this is not so -- even though we don't know how the operations
>work, with ADT's we at least know what the operations *are*.  In
>Smalltalk we don't even know that.

This has to do with "incrementality" of programming environments.  If
you had a Unix Smalltalk compiler that produced an a.out file, that compiler
must definitely know what operations it will include in the executable.
There are situations in which one could construct a new method on the fly,
but this is no different than constructing and evaluating Lisp expressions;
you just have to make sure that your a.out file includes an interpreter or
compiler to use the expression once created.

>Now, you can argue about whether
>this difference is good or bad, but it is certainly a difference,
>and I claim a significant and useful one.

Incremental programming environments and the ability to construct new
pieces of program are both Good Things, but they have nothing to do with
OOP directly.

> ... there is no specification of what function the operation
>will perform, or even of what "type" of arguments are required.
>Again, contrast this to ADT's, which do have specifications.

This is mistaken.  Smalltalk operations *do* have specifications.
"Specification" covers many sins; a definition of addition that says
"takes two objects, returns an object or causes a core dump" is perfectly
valid, though perhaps vague... It's hard to imagine a program that is not
the model of at least one specification! :-) :-) :-)

>You might believe that this makes ADT's superior to OOP -- on which point
>I disagree -- but even so you must admit this is a difference.

At best a difference of words, not of concepts.

>In Smalltalk, it is trivial to write a class definition for
>a stack which can mix integers, floats, rectangles, and even other
>stacks within a single stack.  The ease of writing very polymorphic
>code clearly distinguishes OOP from ADT's.

No, it clearly distinguishes inferior ADT systems from good ADT systems.
Polymorphism is independent of OOP and ADT - for instance, vanilla Lisp
and Prolog systems are polymorphic but are not considered object-oriented
or data-abstracted.  In my book, non-polymorphic languages are down with
assembly languages as things to avoid.

>(The vaguest explanation of all, but still, I hope, useful.)  An ADT
>is, indeed, a "collection of types and functions ON those types"
>[emphasis mine].  In OOP, on the other hand, data and function are
>inextricably linked -- rather than functions operating on data,
>operations are intrinsic to the object just as much as data is
>intrinsic to the object.  Data and operations cannot be separated in
>OOP;  they are separate in ADT's, only collected together for the
>purpose of presenting the data type to the "outside world", but
>internally they are still separate.

Pretty vague all right!  From denotational semantics we know that there
is no distinction between data and operations, only artificial distinctions
imposed by pragmatic considerations.  Once again, equivalence of data and
operation is not unique to OOP, it is an essential of any halfway-decent
programming language.

There *is* one aspect of the object-oriented paradigm that comes out better
than in other paradigms - concurrency.  Hewitt-style actors communicating
with each other make more sense to me than the bizarre concurrency formalisms
promulgated for procedural, functional, and logic paradigms... Anybody
see that differently?


								stan shebs

rentsch@unc.UUCP (Tim Rentsch) (01/19/87)

Stan, I don't mean to be rude, but your posting completely misses the
point.  Normally I prefer not to respond point-by-point when
replying, but your article contains so much confusion and
misinformation that I do not see any reasonable alternative.  The
original article is here reproduced (in pieces) in entirety,
excepting signatures.  This was done in the interest of fair
presentation -- my apologies to those of you who dislike wading
through this kind of quagmire (I dislike it myself).  

To begin:

In article <4176@utah-cs.UUCP> shebs@utah-cs.UUCP (Stanley Shebs) writes:
> In article <624@unc.unc.UUCP> rentsch@unc.UUCP (Tim Rentsch) writes:
> 
> >OOP is not (in my opinion, at least!) simply a special case of
> >abstract data types.
> 
> Depends on what you mean by "abstract data type".  Modula-2's view of
> ADTs for instance is totally useless, since a type is equated with
> syntax rather than with semantics.  CLU is better but still restrictive
> on what can be done.  If you want to see an ADT language with the right
> amount of power, check out Goguen et al's OBJ2 (described in POPL 85 and
> elsewhere).  It has everything Smalltalk has and more, not to mention
> that it's much cleaner and less arbitrary than much of Smalltalk.

You take the (implicitly stated) point of view that OBJ2 is
synonymous with ADTs.  Well, that is a point of view, but it is not
the general point of view.  Smalltalk is generally considered to be
the archetypal OOP language.  OBJ2 is not generally considered to be
the archetypal ADT language.  Furthermore, Smalltalk predates the
term "OOP", whereas OBJ2 was introduced long after the term "ADT".
It seems to me that you are redefining the term to suit your
argument.  (Remember, the original poster referred to the "ten year
old idea of abstract data types".  This idea does not include OBJ2,
since OBJ2 did not then exist.)  

Also, your last sentence starts the trend of your article to argue
that OBJ2 is "better" than Smalltalk (and by implication that ADT's
are "better" than OOP).  If you want to believe that ADT's are better
than OOP (or that OBJ2 is better than Smalltalk), fine, but that is
not the issue;  the issue is how are they different.

More generally, your tone is one of debate.  My tone is one of
explanation.  The original poster asked if someone could explain how
OOP is different from ADT's;  this explanation is what I set out to
give.  Your debate style attitude does not contribute to explaining
your point of view.



> >In Smalltalk, variables (and parameters) do not have a type, in the
> >sense that a type is known at compile time.
> 
> Use of the phrase "compile time" is misleading.  What a compiler can
> or cannot do to a program is irrelevant - the issue is what can be expressed
> in the language and how.  I assume that you really mean to say "the
> value of a variable can be an object of any type at any time".  This is
> polymorphism, which is certainly not unique to Smalltalk.  OBJ2 has it
> for instance, under the guise of "parametrized objects" and "subsorts".

On the contrary, the phrase "compile time" is not misleading, nor is
it irrelevant.  Languages which are strongly typed (i.e., type is
determined and checked by compiler) are RESTRICTED as to what
programs are legal (i.e., "what can be expressed") when compared to
languages which are "typeless".  I confess to not being familiar
enough with OBJ2 to know this, but I assume that in OBJ2 the "type"
(OBJ2 may call it something different) of a variable is known and
checked by the compiler.  If so, then what can be expressed is
different that what could be expressed if the language were
"typeless", (i.e., in the sense that Smalltalk is typeless).  If not,
then how does the language represent ADT's, since variables are
"typeless"?  

Contrary to what you assume, I said what I meant, and was making a
statement about variables, not values of variables.  The difference
is significant, and non-trivial.

I did not claim that polymorphism is unique to Smalltalk, or even
that it is unique to OOP.  I accept that OBJ2 supports polymorphism,
under the guise of "parameterized objects" (are these like
parameterized types?)  and "subsorts" (are these like subclasses?).
But, Smalltalk's polymorphism does not come "under the guise" of some
special mechanism -- EVERY procedure is polymorphic, in the sense
that no special mechanisms or constructs (such as "parameterized
objects" or "subsorts") must be used to allow fully polymorphic use
of the procedure.  This flexibility is the clearest and most
identifiable distinction between OOP and ADT's.  
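
For comparison, a C++ procedure (invented example below) gets this kind of
behaviour only by opting in with a template; the claim here is that every
Smalltalk method behaves this way with no special construct at all.

    #include <iostream>

    // Invented example.  report() accepts any argument that happens to
    // understand the "describe" message -- but only because it was
    // declared as a template; an ordinary C++ function would not.
    template <class Anything>
    void report(const Anything &x) {
        std::cout << x.describe() << "\n";
    }

    struct Dog  { const char *describe() const { return "a dog";  } };
    struct Door { const char *describe() const { return "a door"; } };

    int main() {
        report(Dog());
        report(Door());
        return 0;
    }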

The last three sentences, when read in conjunction with your earlier
claim that "OBJ2 has everything Smalltalk has", raise an interesting
question.  In OBJ2, can the value of a variable be an object of any
type at any time?  If so, then why is polymorphism available (only)
under the guise of parameterized objects and subsorts?  If not, then
how can OBJ2 have everything that Smalltalk has?



> >With abstract data
> >types this is not so -- even though we don't know how the operations
> >work, with ADT's we at least know what the operations *are*.  In
> >Smalltalk we don't even know that.
> 
> This has to do with "incrementality" of programming environments.  If
> you had a Unix Smalltalk compiler that produced an a.out file, that compiler
> must definitely know what operations it will include in the executable.
> There are situations in which one could construct a new method on the fly,
> but this is no different than constructing and evaluating Lisp expressions;
> you just have to make sure that your a.out file includes an interpreter or
> compiler to use the expression once created.

It is true that Smalltalk supports incremental compilation, but that
fact had nothing to do with my statement.  The point is, when writing
a method (procedure) in Smalltalk, the operations provided by the
variables and parameters are not known, and, even in theory,
*unknowable*.  It is true that, IF they exist at all, those
operations will exist in the set of ALL operations, from any class.
But, the previous statement is a tautology -- it must be true in ANY
programming language.  Since it must be true in any programming
language, saying it is also true in Smalltalk is a rather weak
statement.

From your statement about "a Unix Smalltalk compiler" I can only
conclude that you have a severe misunderstanding both of Smalltalk
and of the point I was making.  Yes, it is true that the compiler
would have to know what operations to include -- and, from my
previous paragraph, we know that that set would be ALL operations.
That does not affect what the programmer knows when writing a
Smalltalk method, nor could it without changing the Smalltalk
language.



> >Now, you can argue about whether
> >this difference is good or bad, but it is certainly a difference,
> >and I claim a significant and useful one.
> 
> Incremental programming environments and the ability to construct new
> pieces of program are both Good Things, but they have nothing to do with
> OOP directly.

Again, you misunderstand.  My statement had nothing to do with
incremental compilation -- the point is that even in a statically
compiled OOP language we do not (in general, cannot) know the "type"
(e.g., operations) of variables and parameters.

And, although OOP does not require incremental compilation, OOP as
typified by Smalltalk does *provide* incremental compilation, and
this provision might be an argument in favor of Smalltalk over OBJ2.
(Again, I confess not knowing enough OBJ2 to be definite -- which is
why I said "might".)  While not germane to a discussion of the
differences between OOP and ADT's, this observation is certainly
germane to a discussion of which of Smalltalk and OBJ2 is "better";
once you start down the path, you had better be prepared for someone
else to follow it.



> > ... there is no specification of what function the operation
> >will perform, or even of what "type" of arguments are required.
> >Again, contrast this to ADT's, which do have specifications.
> 
> This is mistaken.  Smalltalk operations *do* have specifications.
> "Specification" covers many sins; a definition of addition that says
> "takes two objects, returns an object or causes a core dump" is perfectly
> valid, though perhaps vague... It's hard to imagine a program that is not
> the model of at least one specification! :-) :-) :-)

There EXIST specifications which do describe a method's behavior.
But, any such specifications are NOT present within Smalltalk.  ADT
languages have specifications explicitly provided as part of the
program.  Smalltalk programs do not.  

Yes, all programs fit at least one specification.  Whether that
specification is given along with the program is another matter.



> >You might believe that this makes ADT's superior to OOP -- on which point
> >I disagree -- but even so you must admit this is a difference.
> 
> At best a difference of words, not of concepts.

[This excerpt is so elliptic that I had to go back to a copy of my
 original posting to understand it.  The issue at hand is that
 Smalltalk does not have specifications or type declarations, whereas
 ADT's do.]

It is not clear what "concepts" you intend being compared here, but
there is clearly a (large) conceptual difference.  

The central point in OOP is that we don't know (and don't want to
know) how an object will behave in response to some message.
Contrast this with ADT's where specification of behavior is required
(and, by implication, desirable).  Opposite points of view always
constitute conceptual difference, in my book.



> >In Smalltalk, it is trivial to write a class definition for
> >a stack which can mix integers, floats, rectangles, and even other
> >stacks within a single stack.  The ease of writing very polymorphic
> >code clearly distinguishes OOP from ADT's.
> 
> No, it clearly distinguishes inferior ADT systems from good ADT systems.
> Polymorphism is independent of OOP and ADT - for instance, vanilla Lisp
> and Prolog systems are polymorphic but are not considered object-oriented
> or data-abstracted.  In my book, non-polymorphic languages are down with
> assembly languages as things to avoid.

(Here we go again on the "OBJ2 is the one true ADT language" kick.
Sorry, that just isn't the general opinion.)  

Certainly other languages support polymorphism, but A is independent
of B iff A can exist without B *and* B can exist without A.  A
language which supports OOP will support polymorphism, so the
statement "Polymorphism is independent of OOP ..." is just wrong.

As to my point, it is not the polymorphism but the EASE of writing
VERY polymorphic code (emphasis added for clarity) which separates
OOP from ADT's.  Lots of languages are polymorphic -- few are as
polymorphic as Smalltalk.  No ADT language with which I am familiar
is as polymorphic as Smalltalk.  Is OBJ2 as polymorphic as Smalltalk?
You offer no evidence, one way or the other.  Even if OBJ2 is as
polymorphic as Smalltalk, does that mean ADT's (as generally
understood) are as polymorphic as OOP?  No, it does not.  



> >(The vaguest explanation of all, but still, I hope, useful.)  An ADT
> >is, indeed, a "collection of types and functions ON those types"
> >[emphasis mine].  In OOP, on the other hand, data and function are
> >inextricably linked -- rather than functions operating on data,
> >operations are intrinsic to the object just as much as data is
> >intrinsic to the object.  Data and operations cannot be separated in
> >OOP;  they are separate in ADT's, only collected together for the
> >purpose of presenting the data type to the "outside world", but
> >internally they are still separate.
> 
> Pretty vague all right!  From denotational semantics we know that there
> is no distinction between data and operations, only artificial distinctions
> imposed by pragmatic considerations.  Once again, equivalence of data and
> operation is not unique to OOP, it is an essential of any halfway-decent
> programming language.

(The repetition of my "vague" characterization is the kind of
inflammatory statement that does nothing to contribute to the
discussion.  Vague statements can be useful in explaining concepts.
By giving warning of the vagueness, I had hoped to clarify the
explanation following, while avoiding the kind of childish response
exhibited by the posting.  I see my hope was in vain.)

Denotational semantics is only one formal model of describing programs;
there are others.  Whether the distinction between program and data
is really there, or is "artificial", denotational semantics cannot
tell us, for it is only one point of view, not THE point of view.
Furthermore, the discussion is of ADT's, not denotational semantics;
apparently you have them confused.

"Once again", the sentence begins, and says something for the first
and only time.  Is this an inflammatory tactic, or just sloppy
language?

And, not to put too fine a point on it, I did not say "equivalence"
of data and operation, I said inseparability.  Binding together of
operation and data is supported by many languages, including both
ADT's and OOP -- I never said OOP was "unique" in this.  What makes
OOP different *from ADT's* is the degree to which the operations are
thought to be intrinsic rather than extrinsic.

As to whether "equivalence of data and operation ... is an essential
of any halfway-decent programming" -- well, the question is not
whether it is a good thing, but to what degree and extent it is
present in each of OOP and ADT's.



> There *is* one aspect of the object-oriented paradigm that comes out
> better than in other paradigms - concurrency.  Hewitt-style actors
> communicating with each other make more sense to me than the bizarre
> concurrency formalisms promulgated for procedural, functional, and
> logic paradigms...  Anybody see that differently?  

Oh, if you understand OOP in terms of actors, I can see why you're
confused.  (half :-)

Afterthought:  I wonder if the etymology of OBJ2 is "object oriented
extension to abstract data types"?  If so, that would explain a lot.

----------------------------------------------------------------

Finally, a parable (true story):

Once upon a time there was a scientific researcher who used rats in
experiments.  He needed some rats for some new experiments, and got
them from a local rat supply house.

Now, these rats just did not behave as he expected.  So, he ran some
control experiments, duplicating experiments he had done years
previously.  Sure enough, the results came out different.

Even though he had been using the same rat supply all these years,
he suspected that the rats of the present were different in some way
from the rats of the past.  So, he called up the rat supply house to
inquire.

"Those rats you sent over last week.  Are they the same kind of rats
as you sent me five years ago?"

"Yes, sir.  Those rats are identical to the previous rats."

"No chance there could be even a small difference?"

"No, sir.  Absolutely identical."

At this point the researcher was at a loss for explanation.
Something had to be different, and he couldn't figure out what it
was.  In desperation, he asked again:

"Are you absolutely sure that the rats I just got are EXACTLY the
same as the rats I got five years ago?"

"Yes, sir.  Those rats are absolutely identical to the ones you got
before.  In fact, they're better!"

"So, they *are* different."

"No, absolutely identical."

"Well, then they can't be better."

"Yes, they ARE better."

(End of parable.  I leave it to your imagination as to whether the
rats were identical, or were instead "better".  I will say that they
were only one or the other, as indeed they could not have been both.)


The point is that we are discussing the DIFFERENCE between ADT's and
OOP.  If ADT's are better than OOP, as you seem to imply, then they
must also be different.  How are they different?  That is the
question we are trying to answer, and about which you say very
little.

cheers,

Tim

tombre@crin.UUCP (Karl Tombre) (01/19/87)

In article <147@m10ux.UUCP> mnc@m10ux.UUCP writes:
>(1) What is the difference (other than a rather contrived, anthropomorphic
>    view of the relationship between functions and values), between O-O
>    programming and the ten (?) year old theory and practice called "abstract
>    data types"? 

There are many similarities, of course.  But in my view the difference comes
more from the analysis.  Abstract Data Type reasoning is based on analyzing
the result you want to achieve, and building a system from that analysis,
through careful specification and so on.  O-O analysis means designing and
implementing a set of objects that provide various services, which the user
or other objects can use at their convenience (this became really clear to me
after reading a publication by Bertrand Meyer).  That said, you can use
o-o analysis and program with abstract data types, or use an o-o language
with a classical analysis, but you may lose a lot by doing so...
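
A crude way to see that last point in code: here is a sketch in present-day
C++ (the names are invented for this post, not taken from Meyer).  The same
little counter can be packaged ADT-style, as a type plus operations with a
specification stated up front, or packaged as an object offering services;
the interesting difference is in the analysis that produced it, not in the
few lines below.

#include <iostream>

// ADT-style packaging: a type plus operations, with the intended behaviour
// written down as a specification (here only comments stand in for the
// careful specification the analysis would produce):
//   value(make())  == 0
//   value(bump(c)) == value(c) + 1
struct Counter { int value; };

Counter make()           { return Counter{0}; }
Counter bump(Counter c)  { return Counter{c.value + 1}; }
int     value(Counter c) { return c.value; }

// OO-style packaging: an object offering "bump me" and "tell me your value"
// as services to whatever client happens to hold it.
class CounterObject {
public:
    void bump()        { value_ += 1; }
    int  value() const { return value_; }
private:
    int value_ = 0;
};

int main() {
    Counter c = bump(bump(make()));
    std::cout << value(c) << "\n";    // 2

    CounterObject o;
    o.bump(); o.bump();
    std::cout << o.value() << "\n";   // 2
}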

>(2) Why is it so hard for O-O programming advocates to explain the advantages
>    of their programming methodology without slipping into obscure,
> specialized O-O jargon, 

Well, that's a real problem.  I too have suffered from the many different
terminologies used by people building O-O environments.
-- 
--- Karl Tombre @ CRIN (Centre de Recherche en Informatique de Nancy)
UUCP:    ...!mcvax!inria!crin!tombre     EUROKOM: Karl Tombre CRIN
POST:    Karl Tombre, CRIN, B.P. 239, 54506 VANDOEUVRE CEDEX, France

lsr@apple.UUCP (Larry Rosenstein) (01/21/87)

Some of us at Apple have experience in implementing object-oriented systems
and seeing how people learn them.  A few of us implemented an
object-oriented application framework called MacApp, which people have been
using for about 2 years.  Users of MacApp are generally experienced
programmers who have little or no experience with object-oriented
programming.

There is some variation in how quickly people learn object-oriented
programming, and how effectively they apply it.  It seems to take a while
before people understand the concepts of objects, inheritance, and dynamic
binding (message passing).  Usually they flounder for a period of time (a
couple of weeks) until the concepts start making sense.  

After this point, they can start using object-oriented programming, but it
seems to take a few more weeks before they fully understand how to use
object-oriented programming in their application design.   In particular,
there is a distinction between defining an object type that serves a
particular purpose and defining an abstract object that can then be used in
many different ways without having to reimplement it each time.
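
To make that distinction concrete, here is a rough sketch in C++ (present-day
syntax; the class names are invented here and have nothing to do with
MacApp).  The single-purpose type does its one job; the abstract type is
written once and reused many ways by filling in only what varies.

#include <iostream>

// Single-purpose type: useful once, but every variation means another
// class written from scratch.
class CelsiusLogger {
public:
    void log(double celsius) {
        std::cout << "temperature: " << celsius << " C\n";
    }
};

// Abstract type: the common part (formatting the report) is written once;
// subclasses supply only what varies, and inherit the rest unchanged.
class TemperatureLogger {
public:
    virtual ~TemperatureLogger() {}
    void log(double celsius) {            // reused, never reimplemented
        std::cout << "temperature: " << convert(celsius)
                  << " " << unit() << "\n";
    }
private:
    virtual double convert(double celsius) const = 0;
    virtual const char* unit() const = 0;
};

class Fahrenheit : public TemperatureLogger {
    double convert(double c) const override { return c * 9.0 / 5.0 + 32.0; }
    const char* unit() const override { return "F"; }
};

class Kelvin : public TemperatureLogger {
    double convert(double c) const override { return c + 273.15; }
    const char* unit() const override { return "K"; }
};

int main() {
    CelsiusLogger plain;
    plain.log(20.0);        // temperature: 20 C

    Fahrenheit f;
    Kelvin k;
    f.log(20.0);            // temperature: 68 F
    k.log(20.0);            // temperature: 293.15 K
}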

I think learning an object-oriented language requires changing your
perspective on programming.  (This is especially true when using MacApp,
because most of the control structure of your application is implemented for
you.)  Some people can adapt to this change more quickly than others.

Part of the problem has been that there was no way to experiment with
object-oriented programming until a couple of years ago.  Now you can get
versions of Smalltalk on MS-DOS machines or the Macintosh, you can get C++
on a variety of machines, and you can get Object Pascal on the Macintosh.

Until these things came out, object-oriented programming was something you
could only discuss.  (I took a comparative programming languages class at
MIT several years ago, and the class on Smalltalk consisted of watching a
videotape of someone at Xerox using Smalltalk.)

-- 
Larry Rosenstein

Object Specialist
Apple Computer

AppleLink: Rosenstein1
UUCP:  {sun, voder, nsc, mtxinu, dual}!apple!lsr
CSNET: lsr@Apple.CSNET

shebs@utah-cs.UUCP (Stanley Shebs) (01/21/87)

In article <632@unc.unc.UUCP> rentsch@unc.UUCP (Tim Rentsch) writes:

>Stan, I don't mean to be rude, but your posting completely misses the
>point.  Normally I prefer not to respond point-by-point when
>replying, but your article contains so much confusion and
>misinformation that I do not see any reasonable alternative.

And here I thought *I* was clearing up confusion and misinformation!
I'm not trying to debate, but to clarify that there are multiple points
of view on OOP and ADTs.  There were statements with which I disagreed,
and the nature of the net is that controversial things will be challenged.
Saying that a controversial statement is an "explanation" is not going to
save one from that challenge!

>>>OOP is not (in my opinion, at least!) simply a special case of
>>>abstract data types.

I tend to view them as being more alike than different, but that is probably
because I see similarities between things rather than differences.  There
are good reasons to see similarities in a field (CS) which is prone to
reinventing wheels on the basis of trivial distinctions.  Unfortunately,
there is a lot of disagreement on what is "trivial" and what is not...

>You take the (implicitly stated) point of view that OBJ2 is
>synonymous with ADTs.  Well, that is a point of view, but it is not
>the general point of view.

OK then, I'm ignorant.  What are ADTs, and what is the archetypal ADT
language?  Some of the points made subsequently (but not reproduced here)
seem to assume particular limitations, but it was not at all clear what
those limitations are supposed to be.  For instance, an ADT without
subtypes ("subsorts") is worthless as far as I'm concerned.  I like OBJ2
because it has abstract data types without being crippled by them...

>Smalltalk is generally considered to be the archetypal OOP language.

Perhaps, but let's not confuse details of Smalltalk with the definition
of object-oriented programming.  Any definition of OOP should encompass
Simula, C++, the assorted Lisp object systems, and actors.  Older folks
seem to consider Simula as the archetypal OOP language, by the way, while
Scheme fans (and Abelson/Sussman's textbook) equate objects with functional
closures, and suggest that specialized OOP systems are unnecessary.
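
For the closure view, here is the flavor of the Abelson/Sussman bank-account
example, sketched with present-day C++ lambdas rather than Scheme (names and
messages invented here): the captured variable plays the instance variable,
and the returned function plays the message dispatcher.

#include <functional>
#include <iostream>
#include <string>

// An "object" built from nothing but a closure.
using Account = std::function<double(const std::string&, double)>;

Account makeAccount(double balance) {
    return [balance](const std::string& msg, double amount) mutable -> double {
        if (msg == "deposit")  { balance += amount; return balance; }
        if (msg == "withdraw") { balance -= amount; return balance; }
        if (msg == "balance")  { return balance; }
        std::cerr << "does not understand: " << msg << "\n";
        return balance;
    };
}

int main() {
    Account acct = makeAccount(100.0);
    std::cout << acct("deposit", 50.0) << "\n";    // 150
    std::cout << acct("withdraw", 30.0) << "\n";   // 120
    std::cout << acct("balance", 0.0) << "\n";     // 120
}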

>On the contrary, the phrase "compile time" is not misleading, nor is
>it irrelevant.  Languages which are strongly typed (i.e., type is
>determined and checked by compiler) are RESTRICTED as to what
>programs are legal (i.e., "what can be expressed") when compared to
>languages which are "typeless".

There are a lot more kinds of language implementations than most people
realize.  Lisp alone has a gamut from pure interpretation to pure compilation,
including some strange combined approaches.  There are Pascal interpreters
too, but that doesn't mean Pascal is not strongly typed.  A Common Lisp
program with declarations probably will not compile if there are type
mismatches - does that mean that Lisp is strongly typed?  References to
"compile time" when discussing language semantics are mistakes.  The meaning
of programs is at issue, not language implementation strategies...

>I confess to not being familiar
>enough with OBJ2 to know this, but I assume that in OBJ2 the "type"
>(OBJ2 may call it something different) of a variable is known and
>checked by the compiler.

For openers, OBJ2 is based on term rewriting, and doesn't have a compiler
at all!  The word "sort" is used instead of "type", because "type" is not
a particularly precise term, and has undesirable connotations.  The only
variables are logical variables bound in environments, and their type is
always known, although perhaps not very accurately.  By "not very accurately",
I mean that because in both Smalltalk and OBJ2, types live in a hierarchy,
one always knows that the type of something is whatever is at the top of
the hierarchy.  In Smalltalk, you know that everything is of class "object",
for instance.  OBJ2 proposes a "universal sort" U, which includes all types
as subtypes.  This is also like Common Lisp's type "t".
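
A rough sketch of the "universal sort" idea in present-day C++ (invented
names, purely for illustration): if every class derives from a single root,
then with no other information the most one can say about a value is that it
is at least an Object -- the analogue of Smalltalk's class "object", OBJ2's
U, or Common Lisp's t.

#include <iostream>
#include <memory>
#include <vector>

// Everything derives from a single root sort.
struct Object {
    virtual ~Object() {}
    virtual const char* sortName() const { return "Object"; }
};

struct Number : Object {
    const char* sortName() const override { return "Number"; }
};

struct Integer : Number {   // a "subsort" of Number
    const char* sortName() const override { return "Integer"; }
};

int main() {
    std::vector<std::unique_ptr<Object>> anything;   // holds any sort at all
    anything.push_back(std::make_unique<Integer>());
    anything.push_back(std::make_unique<Number>());
    for (const auto& x : anything)
        std::cout << x->sortName() << " is (at least) an Object\n";
}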

I recommend the paper on OBJ2 in POPL 85;  it is quite readable, and avoids
most of the category theory and logic that makes much modern ADT work
incomprehensible to ordinary mortals.  As a Lisp/Prolog hacker, I generally
abhor any notion of strong typing, but OBJ2's type system is so high-powered
that it does not seem restrictive at all.  (I hope I'm not overstating OBJ2's
capabilities - not having used it, I may be projecting features that I would
incorporate in an "ideal" language!)

>But, Smalltalk's polymorphism does not come "under the guise" of some
>special mechanism -- EVERY procedure is polymorphic, in the sense
>that no special mechanisms or constructs (such as "parameterized
>objects" or "subsorts") must be used to allow fully polymorphic use
>of the procedure.  This flexibility is the clearest and most
>identifiable distinction between OOP and ADT's.  

If you don't know what subsorts or parametrized objects are, then how
do you know they're "special constructs" and not something available
by default?  In fact they're declaration mechanisms, and don't affect
the rewrite rules directly.
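
A loose analogy in present-day C++ (my own toy names, not OBJ2) of what I
mean by declaration mechanisms: a parameterized object is declared once for
any element sorts, and a subsort declaration lets one definition accept
anything below it in the hierarchy -- the code that uses them never changes.

#include <iostream>
#include <string>

// A "parameterized object": Pair is declared once, for any element sorts.
template <typename A, typename B>
struct Pair {
    A first;
    B second;
};

// "Subsort" declarations: Square and Circle sit below Shape, so anything
// written for Shape accepts them without further ado.
struct Shape {
    virtual ~Shape() {}
    virtual double area() const = 0;
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

struct Circle : Shape {
    double radius;
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.14159 * radius * radius; }
};

// The body of this function never mentions the subsorts; the declarations
// above are all that make the polymorphic use possible.
double report(const Shape& s) { return s.area(); }

int main() {
    Pair<int, std::string> p{3, "three"};
    std::cout << p.first << " = " << p.second << "\n";
    std::cout << report(Square(2.0)) << "\n";   // 4
    std::cout << report(Circle(1.0)) << "\n";   // 3.14159
}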

>In OBJ2, can the value of a variable be an object of any
>type at any time?

Yes, to the extent that OBJ2 has variables.  But variables in a rewriting
system are not locations that you get data from and store data into - they
are more like mathematical variables that serve only to connect two places
together.  To find out more, read any Prolog textbook.
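
If a tiny example helps, here is a toy rewriting step sketched in present-day
C++ (my own invention, nothing to do with OBJ2's actual machinery), applying
the rule plus(0, X) -> X.  The X is never a storage location; it only
connects the place it matched on the left-hand side to the place it is used
on the right.

#include <iostream>
#include <string>
#include <vector>

// A term is a head symbol plus argument terms, e.g. plus(0, succ(0)).
struct Term {
    std::string head;
    std::vector<Term> args;
};

std::string show(const Term& t) {
    if (t.args.empty()) return t.head;
    std::string s = t.head + "(";
    bool first = true;
    for (const Term& a : t.args) {
        if (!first) s += ", ";
        s += show(a);
        first = false;
    }
    return s + ")";
}

// Apply  plus(0, X) -> X  at the root of a term, if it matches.
Term rewrite(const Term& t) {
    if (t.head == "plus" && t.args.size() == 2 &&
        t.args[0].head == "0" && t.args[0].args.empty())
        return t.args[1];     // "X" is whatever matched the second argument
    return t;
}

int main() {
    Term zero{"0", {}};
    Term one{"succ", {zero}};
    Term expr{"plus", {zero, one}};
    std::cout << show(expr) << "  -->  " << show(rewrite(expr)) << "\n";
    // prints: plus(0, succ(0))  -->  succ(0)
}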

>From your statement about "a Unix Smalltalk compiler" I can only
>conclude that you have a severe misunderstanding both of Smalltalk
>and of the the point I was making.  Yes, it is true that compiler
>would have to know what operations to include -- and, from my
>previous paragraph, we know that that set would be ALL operations.

Yes, I must admit to having confused optimization with necessity, and
only partly saved myself by talking about construction of code and
interpreters etc.  In the total absence of information, a Smalltalk
object must be prepared to do something with any operation whatsoever,
which means that the method lookup process might have to go through
all sorts of tables trying to match names.  You couldn't even have a
fixed-shape table, because the system might dynamically create new
types (classes) and everybody must augment their lookup tables to
accommodate the new classes.  The performance of this sort of thing
is inherently dismal, but that's the price of flexibility.
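
A back-of-the-envelope sketch of that kind of lookup, in present-day C++
(invented names; real Smalltalk systems cache and optimize far more
cleverly): each class is a table of named methods, a send is a search of
that table, and a new class is just a new table added while the program runs.

#include <functional>
#include <iostream>
#include <map>
#include <string>

// Each "class" is a table of named methods; nothing has a fixed shape.
using Method      = std::function<std::string()>;
using MethodTable = std::map<std::string, Method>;

std::string send(const MethodTable& cls, const std::string& selector) {
    auto it = cls.find(selector);      // the lookup every single send pays for
    if (it == cls.end()) return "doesNotUnderstand: " + selector;
    return it->second();
}

int main() {
    std::map<std::string, MethodTable> classes;   // the system's classes

    classes["Point"] = {
        {"printString", [] { return std::string("a Point"); }},
    };

    std::cout << send(classes["Point"], "printString") << "\n";
    std::cout << send(classes["Point"], "area") << "\n";   // no such method

    // A brand-new class created while the program runs.
    classes["Circle"] = {
        {"printString", [] { return std::string("a Circle"); }},
        {"area",        [] { return std::string("3.14159 * r * r"); }},
    };
    std::cout << send(classes["Circle"], "area") << "\n";
}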

I don't know if OBJ2 supports dynamic definition of new types, but this
particular issue touches on some of the deepest issues in type theory.
You can hack out systems like Smalltalk to do it, but key aspects of behavior
will be operationally-defined at best.  I suppose most people don't care
whether the system will crash if you redefine class Class, because they're
not going to do it anyway, but there is a fair amount of current research
directed at the question of what *should* happen.

>There EXIST specifications which do describe a methods behavior.
>But, any such specifications are NOT present within Smalltalk.  ADT
>languages have specifications explicitly provided as part of the
>program.  Smalltalk programs do not.  

In OBJ2, the only "specifications" are the rewrite rules, which, as it
happens, are also the program.  Perhaps this means OBJ2 is not an
ADT language...

>> >In Smalltalk, it is trivial to write a class definition for
>> >a stack which can mix integers, floats, rectangles, and even other
>> >stacks within a single stack.  The ease of writing very polymorphic
>> >code clearly distinguishes OOP from ADT's.
>> 
>> No, it clearly distinguishes inferior ADT systems from good ADT systems.
>> Polymorphism is independent of OOP and ADT - for instance, vanilla Lisp
>> and Prolog systems are polymorphic but are not considered object-oriented
>> or data-abstracted.  In my book, non-polymorphic languages are down with
>> assembly languages as things to avoid.
>
>(Here we go again on "OBJ2 is the one true ADT language" kick.
>Sorry, that just isn't the general opinion.)  

It's not at all clear to me how the assertion that polymorphism is a good
thing leads to the assertion that OBJ2 is the one true ADT language.
As I mentioned earlier, I am a Lisp and Prolog person.  I've never used
OBJ2, and probably never will, because I like to write real programs, and
OBJ2 just doesn't have the performance.  I do like to think about the next
generation of languages, and OBJ2 seems closer to that next generation...
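
Incidentally, the mix-anything stack quoted a few lines up is not hard
outside Smalltalk either, at least in present-day C++.  Here is a rough
sketch using std::any (my own toy, not a claim about any of the systems
under discussion).

#include <any>
#include <iostream>
#include <string>
#include <typeinfo>
#include <vector>

// A stack that mixes integers, floats, strings, and even other stacks,
// in the spirit of the Smalltalk stack mentioned above.
class AnyStack {
public:
    void push(std::any v) { items_.push_back(std::move(v)); }
    std::any pop() {
        std::any v = items_.back();
        items_.pop_back();
        return v;
    }
    bool empty() const { return items_.empty(); }
private:
    std::vector<std::any> items_;
};

int main() {
    AnyStack s;
    s.push(42);                          // an integer
    s.push(3.14);                        // a float
    s.push(std::string("not a rect"));   // a string standing in for a rectangle
    AnyStack inner;
    inner.push(7);
    s.push(inner);                       // even another stack

    while (!s.empty()) {
        std::any v = s.pop();
        if (v.type() == typeid(int))
            std::cout << "int: " << std::any_cast<int>(v) << "\n";
        else if (v.type() == typeid(double))
            std::cout << "double: " << std::any_cast<double>(v) << "\n";
        else if (v.type() == typeid(std::string))
            std::cout << "string: " << std::any_cast<std::string>(v) << "\n";
        else
            std::cout << "something else (another stack, say)\n";
    }
}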

>Lots of languages are polymorphic -- few are as
>polymorphic as Smalltalk.  No ADT language with which I am familiar
>is as polymorphic as Smalltalk.  Is OBJ2 as polymorphic as Smalltalk?
>You offer no evidence, one way or the other.

Read the POPL paper - I don't plan to type it all in myself.

>What makes OOP different *from ADT's* is the degree to which the operations
>are thought to be intrinsic rather than extrinsic.

Defining a concept by an attitude doesn't seem very precise?

>> There *is* one aspect of the object-oriented paradigm that comes out
>> better than in other paradigms - concurrency.  Hewitt-style actors
>> communicating with each other make more sense to me than the bizarre
>> concurrency formalisms promulgated for procedural, functional, and
>> logic paradigms...  Anybody see that differently?  
>
>Oh, if you understand OOP in terms of actors, I can see why you're
>confused.  (half :-)

Sigh, a frivolous response to a serious question...

>Afterthought:  I wonder if the etymology of OBJ2 is "object oriented
>extension to abstract data types"?  If so, that would explain a lot.

Ask Goguen, not me.

>The point is that we are discussing the DIFFERENCE between ADT's and
>OOP.  If ADT's are better than OOP, as you seem to imply, then they
>must also be different.  How are they different?  That is the
>question we are trying to answer, and about which you say very
>little.

I don't think I ever claimed that ADTs are "better" than OOP.  Seems unlikely,
since I believe them to be basically equivalent.  I wonder if you were
offended by my disparaging Smalltalk with respect to OBJ2.  Don't get me
wrong - I think Smalltalk is a great language, I have all the Smalltalk books,
I'm putting a section on Smalltalk internals in my thesis.  But let's not try
to equate Smalltalk with objects;  it will only confuse some and disenchant
others.  Rather, let us identify the important concepts of languages and
define them precisely, so that future languages can incorporate those
concepts both elegantly and efficiently.

>Tim

								stan

barber@rabbit1.UUCP (Steve Barber) (01/22/87)

In article <4153@utah-cs.UUCP>, shebs@utah-cs.UUCP (Stanley Shebs) writes:
> MIT for instance
> starts all their CS freshcritters with Scheme, which is introduced as a sort
> of fancy calculator with parens in all the wrong places :-).
> 
> 							stan shebs
> 							utah-cs!shebs

This is partly true, and where it is wrong may be crucial to the argument
about what to teach to beginners.  While it is true that the MIT Scheme
class is the first subject in the Computer Science curriculum there, an
argument can be made that it is not a class for beginning programmers.

First off, the class assumes that the student is already a fairly fluent
programmer (not an unreasonable assumption for an MIT freshman who intends to
be a CS major in the '80s), and they teach you all the Scheme you need to
know in the first 2-3 weeks!  (Just to nit-pick, it's not really a freshman
class anyway; it's for sophomores.  There are just a lot of overachievers at
MIT with advisors who either don't care or suffer from the same myopia the
students do.)  The MIT CS department doesn't offer a course in "programming"
at all (they dropped it claiming "lack of interest" or "lack of staff",
depending on whom you believe).  The only "programming" classes at MIT are
taught by Civil Engineering, Mechanical Engineering, or the Management
School!  Which is kind of as it should be, since programming is just a tool
for these disciplines, not a central paradigm as it is for CS.

What the "Scheme class" is all about is learning how to manage complexity
in a problem by using modularization ("divide-and-conquer") and abstraction
("hide-the-details") which is absolutely the right way to program.  It also
happens to be a pretty good general methodology for technical (as opposed
to social) problem solving.

I don't think it is necessary to teach object oriented techniques to 
beginners as long as they are taught modularization and abstraction.
An object oriented approach is just a very nice special case, to be
taught to people who can reasonably be expected to either go on with
CS studies or may get to use what you've taught them.  If you teach
a Mech E. structured Fortran, he'll thank you for it, but if you teach
him Smalltalk, he'll drop the class to concentrate on design labs.
(No sexism intended.)

On the other hand, the only reason why I really appreciated learning all
these techniques with the fancy names is because I started with BASIC,
and learned the hard way how not to do things!

(By the way, the MIT "Scheme Class", actually titled "The Structure and
Interpretation of Computer Programs", was absolutely the best CS subject
I ever took.  If you don't have the opportunity to take it, get the
book (with the same title) by Abelson and Sussman, published jointly
by McGraw-Hill and The MIT Press.)




-- 
Steve Barber    Rabbit Software Corp.
...!ihnp4!{cbmvax,cuuxb}!hutch!barber  ...!psuvax1!burdvax!hutch!barber
(215) 647-0440  7 Great Valley Parkway East  Malvern PA 19355