kurt@fluke.UUCP (01/30/87)
One of the touted advantages of Smalltalk, when it is called object oriented, is dynamic binding. Specifically, you can send a "foo" message to any class. The designer of the system ensures that all relevant objects provide a method to "foo" themselves. Naturally, if an object doesn't know how to "foo" itself, you get a message-not-understood error. So here's the question... What has dynamic binding bought you? Don't you have to know pretty much what object you are sending "foo" messages to? Don't you have to inspect something in the object, or remember when you created an object what type it is, so you know it can receive a "foo"? Ignore the boring case where ALL your objects know how to "foo". You might just as well have used a language with early binding and inheritance (CLU, Simula, ...).

Also, I have repeatedly heard the claim that dynamically bound languages are "just as fast" as static languages. OK, make me a believer. Who says so? I read a paper by a guy who said LISP could be optimized to a C-like level. In fact, he has a compiler that eliminates tail recursion. And guess what: his Lisp solves Ackermann's function about as fast as PCC. Strangely enough, PCC doesn't eliminate tail recursion. Big deal. I would like to learn of those papers that purport to show that late-bound languages can be made to compete with languages which do static binding when they can.

Finally, I should say that I consider languages like CLU, Simula, HOPE, etc. to be a major improvement on C. Inheritance and information hiding are a necessary part of any state-of-the-art language. I don't think the votes are all in on late binding yet, though.
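Kurt's "foo" scenario can be sketched in modern Python, which shares Smalltalk's dynamic dispatch; the class names here are invented for illustration, and Python's AttributeError plays the role of Smalltalk's message-not-understood error:

```python
class Circle:
    def foo(self):
        return "circle foo'd"

class Number:
    pass  # deliberately does not implement foo

for obj in (Circle(), Number()):
    try:
        print(obj.foo())
    except AttributeError:
        # the analogue of Smalltalk's "message not understood"
        print(type(obj).__name__ + " does not understand 'foo'")
```

The loop never inspects the object's type; the failure, when it comes, comes from the runtime's lookup, which is exactly the behavior being debated.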
lsr@apple.UUCP (02/03/87)
In article <364@oracle.tc.fluke.COM> kurt@tc.fluke.COM (Kurt Guntheroth) writes:

> Don't you have to know pretty much what object you are sending "foo" messages to? Don't you have to inspect something in the object or remember when you created an object what type it is so you know it can receive a "foo"?

If you write a function in C that returns a pointer, do you inspect it to see if it is valid? Part of the specification for a function should be that it returns a valid pointer. Similarly, part of the specification of a method should be that it returns an object that can understand "foo". The consequences of returning an invalid pointer are just as bad.

To be concrete, suppose the message is Draw. Then you do need to know that the object you send a message to can understand Draw, but you do not need to know if it is going to draw a circle, rectangle, line, etc. So your program has to be correct in so far as it never confuses graphical objects with numbers, for example. But other than that, it does not have to keep track of the specific kind of each graphical object. The runtime system does that for you.

> Also, I have repeatedly heard the claim that dynamically bound languages are "just as fast" as static languages. OK, make me a believer. Who says so?

Consider implementing a graphics editor in C or Pascal. (I will take Pascal, because I am more familiar with it.) In Pascal, you would define a variant record to describe the graphical objects. One field of that record would indicate the kind of object (circle, rectangle, polygon, etc.). Then you could define a Draw procedure that does a CASE statement on the kind of object and calls the proper graphics primitive.

Now consider using an object-oriented version of Pascal. You would define a separate object type for each kind of graphical object. Each graphical object would respond to a Draw message and draw itself.
When the Draw message is sent, the runtime system will look at the type of the object and call the appropriate drawing code.

Internally, both approaches do the same thing. They look at a tag on the object, and select the appropriate piece of code based on the tag. There is no reason to assume that the object-oriented approach is going to be any slower than the conventional approach. (You can make the argument that the object-oriented approach might be faster, because the runtime system would have been optimized to do the lookup very quickly. Some systems, for example, use a method cache to speed up the lookup.)

People who think of object-oriented programming as slow are considering Smalltalk, in which every invocation can potentially require the run-time lookup. In reality, it is possible to have a language like C++ or Object Pascal in which the run-time lookup is only used when required by the application. In addition, it is often possible to change a method call from being dynamically bound to being statically bound. For example, there may be cases where you can deduce at compile time the type of an object and statically bind a particular invocation. (In Object Pascal, we have a utility that can do this.) In this case, you pay for the overhead of dynamic binding only if you need it. The cases where you need dynamic binding are the same cases where you would write a CASE statement anyway, so you sacrifice nothing.

-- 
Larry Rosenstein
Object Specialist
Apple Computer
AppleLink: Rosenstein1
UUCP: {sun, voder, nsc, mtxinu, dual}!apple!lsr
CSNET: lsr@Apple.CSNET
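Rosenstein's two versions of the graphics editor -- a tagged variant record with a CASE statement versus per-type Draw methods -- can be sketched side by side in Python; the shape classes and strings below are invented for illustration:

```python
from dataclasses import dataclass

# --- "Pascal variant record" style: an explicit tag plus a CASE statement ---
@dataclass
class Shape:
    kind: str    # 'circle' or 'rect' -- the tag field of the variant record
    size: float

def draw_tagged(s: Shape) -> str:
    # the CASE statement: the programmer discriminates on the tag
    if s.kind == 'circle':
        return "circle r=" + str(s.size)
    elif s.kind == 'rect':
        return "rect w=" + str(s.size)
    raise ValueError(s.kind)

# --- object-oriented style: the runtime's method lookup replaces the CASE ---
class Circle:
    def __init__(self, r): self.r = r
    def draw(self): return "circle r=" + str(self.r)

class Rect:
    def __init__(self, w): self.w = w
    def draw(self): return "rect w=" + str(self.w)

print(draw_tagged(Shape('circle', 2.0)))
print(Circle(2.0).draw())
```

Both calls do the same work -- examine a tag, select code -- which is the point: there is no inherent reason for the second form to be slower.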
aweinste@Diamond.UUCP (02/03/87)
In article <364@oracle.tc.fluke.COM> kurt@tc.fluke.COM (Kurt Guntheroth) writes:

> objects provide a method to "foo" themselves. Naturally, if an object doesn't know how to "foo" itself, you get a message not understood error. So here's the question...What has dynamic binding bought you? Don't you have to know pretty much what object you are sending "foo" messages to? Don't you have to inspect something in the object or

All you have to know is that the object is foo-able; this is a far cry from knowing the type of the object. In typical cases, this property will be guaranteed by the flow of control in the program and shouldn't require any inspection of the object at all.

Dynamic binding is a win when you need to deal with a range of different objects that present a common interface. In such cases, dynamic binding allows the objects to be managed in a generic way. For example, imagine a display manager handling a collection of screen objects. To redraw the display, the manager can simply ask each object to draw itself. This can be done without dynamic binding, but it becomes more complex. Note also that with dynamic binding, the code to do this stays the same even if new object types are added.

Dynamic binding is also a win where the object type will not be known until run time. An example is device-independent I/O, as in the Unix system, where programs are typically written without knowing in advance whether their I/O is to/from a terminal or a file.

Anders Weinstein
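Weinstein's display-manager point -- the redraw code stays the same even when new object types are added -- can be sketched in Python; the screen-object classes are invented for illustration:

```python
class Window:
    def draw(self): return "window"

class Icon:
    def draw(self): return "icon"

def redraw(screen_objects):
    # the display manager's loop: it never inspects concrete types,
    # it just asks each object to draw itself
    return [obj.draw() for obj in screen_objects]

print(redraw([Window(), Icon()]))

# adding a new kind of screen object requires no change to redraw()
class Gauge:
    def draw(self): return "gauge"

print(redraw([Window(), Icon(), Gauge()]))
```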
rentsch@unc.UUCP (02/08/87)
An advocate of dynamic binding (that's me!) enters the fray. What follows, however, is not a rebuttal but an attempt at explanation and presentation of a point of view. Kurt, you will see that your article has been reordered somewhat as I have tried to respond -- please bear with me to the end, and see what you think.

Note to potential follow-uppers: if someone has something constructive to say, fine. But please avoid the temptation to quote a line or two out of context and reply debate style. You will be wasting our time and your breath. I realize nits can be picked, but the points I present here are made over paragraphs, not sentences. Clear?

In article <364@oracle.tc.fluke.COM> Kurt Guntheroth writes:

> Also, I have repeatedly heard the claim that dynamically bound languages are "just as fast" as static languages. OK, make me a believer. Who says so?

I will try. Please excuse the rather roundabout path we travel to the destination.

Let us for the moment ignore the possibility of "optimizing away" the actual call in a statically bound procedure call (which possibility would only help the statically bound case). Even so, one statically bound procedure call is faster than one dynamically bound procedure call (on the average). The reason for this slowdown is that the dynamic binding takes an extra step over static binding, which must (sometimes) be done sequentially. [The "sometimes" is also known as the "you can't guess right all the time" provision -- which explains the "on the average" from the previous paragraph.] So, on some level, the claim is false. But, in another way, the claim is true, and more. Please read on.

> I read a paper by a guy who said LISP could be optimized to a C-like level. [My summary: the comparison was "apples and oranges".]

It seems inherently difficult to compare early-bound and late-bound languages, and the LISP-C comparison author apparently fell into one of the many problem areas.
Even so, it is worth *something* to know that compiled LISP runs as fast as PCC [the C compiler in the comparison], because then there is no reason to choose PCC over LISP on the basis of performance (and presumably LISP wins on other counts, but that's a different story). That's good to know for those out there who have no C but PCC and prefer LISP, but it still does not show that LISP is as fast as any C (i.e., any statically bound language). Still, let's keep going.

> One of the touted advantages of Smalltalk when it is called object oriented is dynamic binding. Specifically you can send a "foo" message to any class. The designer of the system insures that all relevant objects provide a method to "foo" themselves. Naturally, if an object doesn't know how to "foo" itself, you get a message not understood error. So here's the question...What has dynamic binding bought you?

Before answering this, let me give some factual answers to the next few questions.

> Don't you have to inspect something in the object or remember when you created an object what type it is so you know it can receive a "foo"?

If by "you" you mean the interpreter, the answer is yes. If by "you" you mean the programmer, the answer is no. The Smalltalk interpreter does indeed inspect something, which is to say the class of the object in question, said class indeed having been stored by the interpreter when the object was created. The programmer is not responsible for either storing the class or inspecting it.

> Don't you have to know pretty much what object you are sending "foo" messages to?

No, and let me give an example. Suppose we want to pass a parameter X which acts like a stream of Floats; specifically, X responds to the message 'next' and always returns a Float. X could be any Stream with Floats in it (Stream being one of the supplied classes in Smalltalk), but X could also be an instance of class Random!
Now, Stream and Random have almost nothing in common (they happen to share the 'next' message, and their only common superclass is Object). This example shows a useful case where we know essentially nothing about the object X -- only that it responds to 'next' and returns a Float.

> Ignore the boring case where ALL your objects know how to "foo". You might just as well have used a language with early binding and inheritance (CLU, Simula, ...)

(1) Minor correction: CLU's provision for inheritance is weaker even than Simula's.

(2) Binding in Simula is not exactly late, but it's not early either -- virtual procedures require indexing the virtual procedure vector, and hence constitute a weak form of delayed binding.

(3) It is important to distinguish the "boring" case where all objects know how to 'foo' *the same way* (i.e., with the same procedure) from the "less boring" case where all objects know how to 'foo' but they all do it *differently*. Note that the second requires more dynamic binding than the first.

> Finally, I should say that I consider languages like CLU, Simula, HOPE, etc to be a major improvement on C. Inheritance and information hiding are a necessary part of any state-of-the-art language. I don't think the votes are all in on late binding yet though.

Side comment: I detect a hint of confusion between static binding and static typing. At the risk of repeating myself, I caution against such confusion -- static type checking is perfectly compatible with dynamic binding. This compatibility does not, however, mean that statically checked systems can be statically bound. And, I apologize if in fact no confusion indeed exists. [And, glad to see more people realizing the value of inheritance etc. as an improvement over (ugh!) C.] (End of side comment.)

Dynamic binding is slow on current machines for the same reason that indexing was slow in early machines: there is no hardware assist to speed it up.
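Tim's Stream/Random example -- two otherwise-unrelated classes that merely share a 'next' message answering a Float -- can be sketched in Python; the class and function names below are invented for illustration:

```python
import random

class FloatStream:
    """Stand-in for a Smalltalk Stream of Floats."""
    def __init__(self, values):
        self._it = iter(values)
    def next(self):
        return next(self._it)

class Random:
    """Stand-in for Smalltalk's class Random; unrelated to FloatStream."""
    def __init__(self, seed):
        self._rng = random.Random(seed)  # stdlib generator, not this class
    def next(self):
        return self._rng.random()

def sum_three(x):
    # all we assume about x is that it responds to 'next' with a Float
    return x.next() + x.next() + x.next()

print(sum_three(FloatStream([1.0, 2.0, 3.0])))
print(0.0 <= sum_three(Random(42)) <= 3.0)
```

sum_three neither knows nor cares which class it was handed, which is the sense in which "you don't have to know what object you are sending messages to".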
Note that, even in current machines, indexing is not free; even so, the (time) cost is so small, and the benefits so large, that no serious machine architect of today would consider leaving it out. Dynamic binding is similar. Late-bound procedure calls will always cost more (measured on a per-call basis) than early-bound calls. But with appropriate hardware support, the overhead of late binding can be reduced to a level where the benefits clearly outweigh the costs (sorry, no argument, just my opinion -- but read on to the advantages and see if you don't agree with me). Continuing the parallel with indexing, just because late binding is provided doesn't have to mean early binding is prohibited -- just that early binding is used only rarely, in only those cases where the last drop of speed is important.

> What has dynamic binding bought you?

We finally return to the $64 question.

The most immediately obvious consequence of Smalltalk's dynamic binding is incremental compilation. That is, I can start running Smalltalk, pick an arbitrary procedure (Smalltalk calls them methods) anywhere in the system, edit it, recompile and relink it into the running system, with immediate effect everywhere, in only a few seconds! My canonical example when giving introductory Smalltalk demos is floating point multiplication (Float *), which I usually change to return the value 50, independent of the arguments. Bang! The change takes effect instantly. Live with incremental compilation for only a little while (a week, say), and you won't want to give it up.

A more subtle advantage of the particular brand of dynamic binding in Smalltalk is insensitivity to certain kinds of difficulties which other languages usually call "errors". Consider the example I just gave: that returned value was *integer* 50, not Float 50.0. Even ignoring the fact of the wrong result value, most systems would have problems because the "shape" of the answer was wrong, i.e., integer rather than floating point.
Smalltalk does misbehave slightly, because 50 is not the right answer, but it does NOT misbehave to the extent of interpreting 50 as some incredibly ridiculous floating point number. This robustness results directly from dynamic binding -- because which operation is invoked depends on the receiver's class at the very last moment. (If you have access to a Smalltalk system I urge you to try this experiment yourself -- just comment out the method definition for Float * and substitute ^50. The results are mildly amusing, and certainly not catastrophic. Change the ^50 to ^100 to further illustrate the change. Then (of course) change it back to be the original method.)

Once you get used to writing code in typical Smalltalk style (some would say object oriented programming), you find that your procedures are naturally very polymorphic (Valley girls might even say "maximally polymorphic" :-). This polymorphism gives tremendous leverage in terms of code reusability, and in other ways more difficult to explain.

A true-life example: I wrote a tiny but intellectually complex set of procedures to perform tensor multiplication. Ever try to test a general tensor product? There are so many numbers it is very difficult. But, by defining String * (multiplication) to be concatenation, and String + (addition) to be concatenation with a "+" in the middle, I have enough symbolic manipulation machinery to test my routines symbolically! All that remains is to put together the example symbolic tensors, multiply them together, and look at the result -- because the code for multiplication depended only on the availability of '+' and '*' as operations on the tensor elements. Recompilation of the tensor code is not required, by virtue of dynamic binding.

Even given dynamic binding, full polymorphism is difficult when constrained by completely static typing. (Not all the votes on static typing are in yet, either. :-) [Please note the ':-)' and don't bother to flame.]
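Tim's symbolic-testing trick -- redefining String * and String + so numeric code can be tested symbolically -- can be sketched in Python via operator overloading; the Sym class and the inner-product routine are invented for illustration (Tim's actual code was a tensor product):

```python
class Sym:
    """Stand-in for Tim's redefined String: '*' concatenates,
    '+' concatenates with a '+' in the middle."""
    def __init__(self, s): self.s = s
    def __mul__(self, other): return Sym(self.s + other.s)
    def __add__(self, other): return Sym(self.s + "+" + other.s)
    def __repr__(self): return self.s

def dot(u, v):
    # generic inner product: depends only on '+' and '*' of the elements
    acc = u[0] * v[0]
    for a, b in zip(u[1:], v[1:]):
        acc = acc + a * b
    return acc

print(dot([2, 3], [4, 5]))                                  # numeric use
print(dot([Sym('a1'), Sym('a2')], [Sym('b1'), Sym('b2')]))  # symbolic test
```

The same dot() serves both calls unchanged: numbers exercise it numerically, Sym values make its structure readable by eye, which is exactly the leverage Tim describes.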
If even just a part of your system is not type checked, dynamic binding a la Smalltalk can turn messages like 'bus error -- core dumped' into 'message not understood'. If even ONE 'core dumped' message is changed into something more reasonable (and recoverable), that's a win in my book. This is another advantage of dynamic binding.

Finally, and saving the best for last, dynamic binding can actually be faster than static binding -- not in the small, but in the large. The argument goes as follows. Smalltalk-style polymorphism supports 'factoring', the property of one piece of code being in only one place. Smalltalk's dynamic binding allows the factored source code to be compiled into only one (factored) object-code object, which is shared among all the various "types" which use it. Because of factoring, code growth is more nearly linear as a function of system functionality, which is to say Smalltalk object code gets larger less quickly than that of statically bound systems.

At some point -- exactly where depends on what you are building and what machine you are running on -- the smaller Smalltalk code will run faster, not because it is inherently faster but because the smaller system will fit in main memory of the machine on which you are running. This has long been recognized but rarely acknowledged -- it is system performance rather than microscopic performance that counts. It doesn't take a large ratio of stuff on the other side of the "access cliff" of disk memory before the smaller system starts to run faster, because the larger system is running at disk speeds rather than memory speeds. Remember, with main memory speeds of today, disk runs 5 to 6 orders of magnitude slower. And 5 to 6 (decimal) orders of magnitude are quite a lot.

Factoring at the source code level can be accomplished by a variety of language designs, including statically bound ones.
But factoring at the object code level, which is what is needed for smaller systems, is better supported by dynamically bound languages. So, for any given size of main memory, more highly factored systems can pack more functionality in the space available. I claim that dynamic binding makes for more factored systems.

Well, that's it. If I didn't convince you, I hope at least you have more to think about.

cheers,
Tim
rentsch@unc.UUCP (02/08/87)
In article <814@unc.unc.UUCP> rentsch@unc.UUCP (Tim Rentsch) writes:

> Now, Stream and Random have almost nothing in common (they happen to share the 'next' message, and their only common superclass is Object). This example shows a useful case where we know essentially nothing about the object X (only that it responds to 'next' and returns a Float).

My mistake: Random is in fact a subclass of Stream in the standard Smalltalk system. But there is no reason it has to be -- Random could just as easily have been a subclass of Object. More to the point, as the sender of messages to an argument of class Random, I don't know (obviously!) or care whether Random is related to Stream or not.

cheers,
Tim
willc@tekchips.UUCP (02/11/87)
In article <814@unc.unc.UUCP> rentsch@unc.UUCP (Tim Rentsch) writes:

> Note to potential Follow-uppers: if someone has something constructive to say, fine. But please avoid the temptation to quote a line or two out of context and reply debate style. You will be wasting our time and your breath. I realize nits can be picked, but the points I present here are made over paragraphs, not sentences. Clear?

> Side comment: I detect a hint of confusion between static binding and static typing. At the risk of repeating myself I caution against such confusion -- static type checking is perfectly compatible with dynamic binding. This compatibility does not, however, mean that statically checked systems can be statically bound. And, I apologize if in fact no confusion indeed exists. [And, glad to see more people realizing the value of inheritance etc as an improvement over (ugh!) C.] (End of side comment.)

To the contrary, static type checking is incompatible with dynamic binding. I cannot disagree with the sentence "This compatibility does not, however, mean that statically checked systems can be statically bound" since I have been unable to divine its meaning.

To understand why static type checking is incompatible with dynamic binding, let us consider the example of a dynamically bound Lisp in which the + operation applies only to objects of type number. It is impossible to perform static type checking on the procedure foo defined by

    (define (foo n) (+ x n))

because the compiler has no idea what x is going to be bound to dynamically.
The compiler can't tell what x is going to be bound to by looking at all the places where x is bound, either:

    (define (bind-x-and-call-foo x n)
      (if (hairy-looking-predicate-that-always-returns-true n)
          (foo n)
          36))

    (define (bind-x-but-dont-call-foo x n)
      (if (not (hairy-looking-predicate-that-always-returns-true n))
          (foo n)
          37))

    (define (hairy-looking-predicate-that-always-returns-true n)
      ; n is an integer encoding a two-dimensional map
      ; returns true if there exists a four-coloring of the map
      ; algorithm: exhaustion of cases
      ...)

Similar examples can be constructed in other dynamically bound languages such as APL.

The language that Mr Rentsch seems to be most fond of, Smalltalk-80, is not a dynamically bound language. I am by no means fluent in Smalltalk, but it appears to me that Smalltalk-80 variables are either global or local; local variables have lexical scope; local variables are not permitted to shadow global variables, I've been told; and there are a few oddball variables such as self and super that may not always obey the rules. Smalltalk-80 is, however, dynamically typed.

I suspect that Mr Rentsch intended the term "dynamic binding" to refer to the dynamic method lookup described in the Smalltalk blue book. Because Smalltalk is dynamically typed, the compiler cannot in general determine the type of the object that will receive a given message. Hence the compiler cannot determine which method should be used to handle the message, and this determination must be made by the receiving object at run time. Thus the dynamic method lookup.

Someone asked whether dynamically typed languages can be made to run as fast as statically typed languages. Well, if everything else is equal, then static types are going to run faster. It's easy to overstate the performance advantage of statically typed languages, though. If, for example, you're computing with null-terminated linked lists, then you're already working with a union type (nil union ...)
so adding a few more types to the union (to get a union that encompasses all types, as in a dynamically typed language) isn't likely to slow your union discriminations very much. And if you use integer arithmetic instead of arithmetic modulo 2^32, then telling the compiler that a variable contains an integer doesn't tell the compiler whether a modulo 2^32 addition instruction can be used to increment it. And static types don't help with array subscript bounds checking.

Hence static types don't help much with performance once you get beyond low-level languages like C and Ada; you can see this quite clearly if you try to compare the performance of a statically typed language like ML with a comparable dynamically typed language like Scheme. The performance differential attributable to static versus dynamic types will be swamped by unrelated differences in the quality of the implementations. The real advantage of statically typed languages is that they detect a certain class of bugs at compile time instead of run time.

Finally, I would like to say that there is nothing in the concept of object-oriented programming that is incompatible with static types. I would like to say that, but I won't, because the concept of object-oriented programming is so ill-defined that there are sure to be people out there for whom the very term "object-oriented programming" implies dynamic types.

William Clinger
Tektronix Computer Research Lab
mnc@m10ux.UUCP (02/13/87)
In this discussion, there have been some misunderstandings and arguments caused by the failure to give clear definitions of such terms as "dynamic binding". In order to promote a higher level of discourse, I herewith offer the following definitions, which are fairly clear and precise. I trust that they also agree with the widely accepted meanings.

"dynamic vs. static binding" -- (Unfortunately vague terms, since it is not clear WHAT is being bound -- a better term for static binding is "static scope" or "lexical scope".) A language is statically bound iff, for any two references to a variable named x in any program, it is computable whether the two references refer to the same object (i.e. the same location in memory). Otherwise it is dynamically bound.

Note: This implies that programs in statically bound languages cannot compute variable names at run time -- names must appear literally in the text at the point of reference. Conversely, dynamically bound languages must employ some sort of symbol table at run time, unlike statically bound languages, even if they do not allow the construction of variable names at run time. Also, note that by "variable", above, I mean an identifier, not Pascal's notion of variable, which would include x[i], x^ and x.a. Thus in a statically bound language, you are not necessarily able to tell at compile time whether two occurrences of x[i] refer to the same object. Similarly, having pointers to functions does not make a language dynamically bound, either.

"dynamic vs. static typing" -- A language is statically typed iff, for any reference to a variable x in any program, the type of x is computable.

Note: this makes the definition depend rather strongly on how you define the word "type" for any particular language. For example, LISP can be seen as a statically typed language in which every variable has the same type.
This type is a structure containing a type tag field and a variant part (union) containing either a pair of pointers, called CAR and CDR, to other such structures, or a character string, integer, etc. This seems like cheating to me, but I don't think you can formulate a definition that prohibits this interpretation. A more reasonable definition might result if we said that you only know the type of a variable x if, given just the address of x (and without looking at the value of x), you know how to copy x to y, what the valid sub-pieces of x are, and whether it is well-defined to apply any given function f, whose type is known, to x. This, however, would seem to say that C variables of union types are dynamically typed (and perhaps they are).

Using the above definitions, it is clear that a language cannot be dynamically bound and statically typed, unless you make the bizarre requirement a la FORTRAN that every variable named x must have the same type, even if the variables are otherwise completely unrelated. Otherwise, it is not computable whether a reference to variable x is referring to the x that was defined as an integer or the x that was defined somewhere else as a character string. It is also clear that a language CAN be statically bound and dynamically typed (e.g. SCHEME, or any other statically bound version of LISP).

Does this help clear things up a bit?
-- 
Michael Condict		{ihnp4|vax135}!m10ux!mnc
AT&T Bell Labs		(201)582-5911    MH 3B-416
Murray Hill, NJ
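Condict's closing point -- a language can be statically bound (lexically scoped) yet dynamically typed -- can be sketched in Python, which is itself such a language; the counter example below is invented for illustration:

```python
def make_counter(step):
    total = 0
    def add():
        nonlocal total      # WHICH 'total' this names is fixed lexically...
        total = total + step
        return total
    return add

c = make_counter(2)
print(c(), c())             # ...but the TYPE flowing through it is not:

d = make_counter("ab")      # 0 + "ab" is an error, caught only at run time
try:
    d()
except TypeError:
    print("type error detected at run time, not compile time")
```

The binding question (which variable does 'total' name?) is answered by the program text alone; the typing question (is 'step' a number?) is not, which matches Condict's SCHEME example.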
rentsch@unc.UUCP (02/15/87)
Before starting, let me say thank you to Will Clinger (for heeding my plea, in his civilized posting) and also say thank you to Doug Moen (for his followup on dynamic {scoping versus binding}). Doug's posting pretty much represents my point of view on the terminology and scoping issues (thanks for saving me some keystrokes, Doug!), so I confine myself to the remaining issues.

In article <1040@tekchips.TEK.COM> willc@tekchips.UUCP (Will Clinger) writes:

> In article <814@unc.unc.UUCP> rentsch@unc.UUCP (Tim Rentsch) writes:
> > Side comment: I detect a hint of confusion between static binding and static typing. At the risk of repeating myself I caution against such confusion -- static type checking is perfectly compatible with dynamic binding. This compatibility does not, however, mean that statically checked systems can be statically bound. And, I apologize if in fact no confusion indeed exists. [And, glad to see more people realizing the value of inheritance etc as an improvement over (ugh!) C.] (End of side comment.)
>
> To the contrary, static type checking is incompatible with dynamic binding. I cannot disagree with the sentence "This compatibility does not, however, mean that statically checked systems can be statically bound" since I have been unable to divine its meaning.
>
> [... some paragraphs deleted ...]
>
> I suspect that Mr Rentsch intended the term "dynamic binding" to refer to the dynamic method lookup described in the Smalltalk blue book. Because Smalltalk is dynamically typed, the compiler cannot in general determine the type of the object that will receive a given message. Hence the compiler cannot determine which method should be used to handle the message, and this determination must be made by the receiving object at run time. Thus the dynamic method lookup.

Granted: by 'dynamic binding' I do indeed mean binding of method name (aka message) to method body at runtime.
Granted: a determination cannot in general be made as to the class of a message receiver (the problem is equivalent to the halting problem).

Even so: by adding "type" declarations to a Smalltalk program, it is possible for the compiler to check "types" at compile time, and guarantee that 'message not understood' errors will not occur at runtime. This is what I meant when I said "static type checking is perfectly compatible with dynamic binding".

On the other hand: even in the situation described in the paragraph above, it is not possible to know which method body will be bound to a given message (indeed more than one method body may be bound, at different times, to any particular message send). Even though we know at compile time that 'message not understood' errors cannot occur -- in other words the code is statically checked -- we do not know (cannot know) which methods will be bound to which message sends. This is what I meant when I said "This compatibility does not, however, mean that statically checked systems can be statically bound". [The quoted sentence wasn't very clear, was it? Oh, well.]

> Someone asked whether dynamically typed languages can be made to run as fast as statically typed languages. Well, if everything else is equal, then static types are going to run faster. It's easy to overstate the performance advantage of statically typed languages, though. If, for example, you're computing with null-terminated linked lists, then you're already working with a union type (nil union ...) so adding a few more types to the union (to get a union that encompasses all types, as in a dynamically typed language) isn't likely to slow your union discriminations very much. And if you use integer arithmetic instead of arithmetic modulo 2^32, then telling the compiler that a variable contains an integer doesn't tell the compiler whether a modulo 2^32 addition instruction can be used to increment it.
> And static types don't help with array subscript bounds checking. Hence static types don't help much with performance once you get beyond low-level languages like C and Ada; you can see this quite clearly if you try to compare the performance of a statically typed language like ML with a comparable dynamically typed language like Scheme. The performance differential attributable to static versus dynamic types will be swamped by unrelated differences in the quality of the implementations. The real advantage of statically typed languages is that they detect a certain class of bugs at compile time instead of run time.

Main point: the term "dynamic typing" as used here apparently means *both* dynamic method lookup and non-statically-type-checked systems. Except for that slight ambiguity I agree with most of the points made (exceptions follow). My main point is still that code may be statically checked (with the help of type declarations) even though it is dynamically bound.

I agree: "The performance differential attributable to static versus dynamic types will be swamped by unrelated differences in the quality of the implementations." Almost always true.

I agree: "The real advantage of statically type[checke]d languages is that they detect a certain class of bugs at compile time instead of run time." Amen.

However: all else equal, statically bound procedure calls will run faster *in the small*. But dynamically bound method lookup can run faster *in the large* -- because reduced code size makes for less swapping.

Minor clarification: Smalltalk method lookup is independent of how many "types" are in the "union". For this reason Smalltalk-style dynamic method lookup will be faster, after some point, than the union discrimination approach to dynamic {whatever} used by languages with static {whatever}. ({Whatever} is here used in the same sense as the 'method lookup' of 'dynamic method lookup'.)
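Tim's main point -- code can be checked statically against declared "types" while the method actually invoked is still chosen at run time -- can be sketched with Python's structural type annotations; the Drawable protocol and shape classes are invented for illustration, with a modern static checker playing the role of Tim's hypothetical Smalltalk type checker:

```python
from typing import Protocol

class Drawable(Protocol):
    def draw(self) -> str: ...

class Circle:
    def draw(self) -> str: return "circle"

class Square:
    def draw(self) -> str: return "square"

def render(items: list[Drawable]) -> list[str]:
    # a static checker can verify every item understands 'draw'
    # (no "message not understood" at run time), yet WHICH draw
    # method runs is still bound dynamically, per object
    return [item.draw() for item in items]

print(render([Circle(), Square()]))
```

The annotation rules out the 'message not understood' error class at check time, but it cannot (and need not) say whether Circle.draw or Square.draw will run for a given element: static checking coexisting with dynamic binding.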
> Finally, I would like to say that there is nothing in the concept of object-oriented programming that is incompatible with static types. I would like to say that, but I won't because the concept of object-oriented programming is so ill-defined that there are sure to be people out there for whom the very term "object-oriented programming" implies dynamic types.

Granted: the term "object oriented programming" is hard to define.

Granted: different people mean different things by the term.

Point of view: Object oriented programming is a methodological mindset, one precept of which is that when sending a message we as the sender don't know the type of the receiver, and that we don't WANT to know. Note that I did not say we *can't* know, only that we don't know. Those holding this point of view might argue that it is perfectly possible to practice object oriented programming in FORTRAN, by "pretending" not to know the type of the receiver of the message. On the other hand, those same people might believe that the full power of object oriented programming can be achieved only by fully dynamic method lookup. From this point of view, whether or not OOP implies dynamic method lookup, it is better with it than without it.

I believe: in practicing object oriented programming (and have said what I think it is, elsewhere [specifically SIGPLAN Sept 1982]).

I believe: in the benefits of dynamic method lookup (which I call 'dynamic binding', whether mistakenly or not).

I believe: in the benefits of static type checking -- at least on some days!

cheers,
Tim