[comp.object] Do we really need types in OOPL's?

dbrus.Unify.Com.brian@unify.uucp (Brian Meyerpeter ) (09/21/90)

I have a question that has been bugging me for a long time.  

Why does Eiffel try to be a typed language?  Shouldn't dynamic binding
allow for any type to be assigned to any type?  It might be a nice feature
to restrict assignments (dynamic binding) in some cases, but in others
I don't think so.

In an ideally object-oriented language shouldn't you be able to assign
a class of any type to another class of any type as long as the 
client that is assigning them knows their interface and does not 
abuse it?

I would appreciate comments on this subject.  I will repost the
responses if I get enough.

--------------------------------------------------------------------
Brian Meyerpeter    "At 10 they're all 2's and at 2 they're all 10's"
                     Quote from the Red Barn at Chico State
--------------------------------------------------------------------

wwb@sps.com (Bud Bach) (09/21/90)

In article <0yw10qr@Unify.Com>, dbrus.Unify.Com.brian@unify.uucp (Brian Meyerpeter ) writes:
> I have a question that has been bugging me for a long time.  
> 
> Why does Eiffel try to be a typed language?  Shouldn't dynamic binding
> allow for any type to be assigned to anytype?  It might be a nice feature
> to restrict assignments (dynamic binding) in some cases but in others
> I don't think so.
> 
> In an ideally object-oriented language shouldn't you be able to assign
> a class of any type to another class of any type as long as the 
> client that is assigning them knows their interface and does not 
> abuse it?
> ...
There's the rub.  In large systems it is difficult to remember or "know"
the interface.  If an object is typed, then the client has some 
confidence that the object will plug into the system and respond to 
the appropriate messages.  

Also don't forget that objects may be passed as parameters or returned
as function results.  As the implementer of a class, you will find it
useful to restrict the kinds of objects that may be passed to some
useful subset.
For example, suppose you wish to export a method for adding the
contents of a container to another container.  What if the client
sends an integer instead of a container?  Granted, you might catch
this at runtime, but why not catch it sooner?


-- 
Bud Bach   					Voice:  407 984-3370
Software Productivity Solutions, Inc.		FAX:    407 728-3957
122 4th Avenue					email:  wwb@sps.com
Indialantic, FL  32903				or:     ...!uunet!sps!wwb

kimr@eiffel.UUCP (Kim Rochat) (09/22/90)

In article <0yw10qr@Unify.Com>, dbrus.Unify.Com.brian@unify.uucp (Brian Meyerpeter ) writes:
> I have a question that has been bugging me for a long time.  
> 
> Why does Eiffel try to be a typed language? 
> 
> In an ideally object-oriented language shouldn't you be able to assign
> a class of any type to another class of any type as long as the 
> client that is assigning them knows their interface and does not 
> abuse it?

The last comment is the reason Eiffel is typed.  Because of typing, the
compiler can verify that class interfaces (i.e., types) are properly used, 
and a type error cannot occur at run time.  In contrast, dynamically 
typed languages get type errors at run time instead of compile time.

Typing inhibits dynamic binding only in the sense that you are only
allowed to send polymorphic messages to (families of) objects which are 
guaranteed to understand them, and you are not allowed to send messages 
to objects which aren't guaranteed to understand them. The combination
of multiple inheritance, static typing, and dynamic binding works quite 
well in practice.

Static typing also allows optimizations to be made which are difficult
or impossible in a dynamically typed language, such as the removal of
the code from an application for methods which the compiler can
guarantee are never executed by the application.

In Dr. Meyer's book, "Object-Oriented Software Construction", you 
will find a complete explanation of "why Eiffel is a typed language".

Kim Rochat
Interactive Software Engineering

responses to eiffel@eiffel.com

riks@servio.UUCP (Rik Smoody) (09/22/90)

In article <275@sps.com> wwb@sps.com (Bud Bach) writes:
>In article <0yw10qr@Unify.Com>, dbrus.Unify.Com.brian@unify.uucp (Brian Meyerpeter ) writes:
...
>There's the rub.  In large systems it is difficult to remember or "know"
>the interface.  If an object is typed then, the client has some 
>confidence that the object will plug into the system and respond to 
>the appropriate messages.  
>
>Also don't forget that objects may be passed as parameters or returned
>as functions also.  As an implemeter of a class, it is useful to
>restrict the kind of objects that may be passed to some useful subset.
>For example, suppose you wish to export a method for adding the
>contents of a container to another container.  What if the client
>sends an integer instead of a container?  Granted, you might catch
>this at runtime, but why not catch it sooner?
If "type" is conflated with "isKindOf: someClass", then too often it is simply
the wrong question.  Your container example is fine: somewhere in the system
should be recorded what you mean by "container".  If I have an object which
I assert qualifies, how can we tell? 
Let's encapsulate the questions.

In the predestination model (where God created the world at compile time
and everything was deterministic after that), that information is somehow compiled in.
If inheritance is used, and the type information is (mis-)mapped onto class membership,
only a new class which IS a subclass of the chosen class is admissible.

Instead, I think we need to treat the Protocol itself as a first-class object.
Then it is easy enough to ask questions such as
	"Does my class named Pocket obey the Protocol known as #Container?".
		(The details would obviously verify that it could respond to #size, #do:, #at:, etc.
		The set of questions is stored in the Protocol)
If so, then I should be able to pass a Pocket full of things to your container...
even if God (or whoever beat on the system before the Big Bang) did not imagine Pocket.

Furthermore (not my recommendation, just an observation), an inheritance system
typically allows a user defined subclass to BREAK the (unmanaged) protocol
of a superclass.  E.g. if I created Bucket, a subclass of Container, but overrode
#do: such that it is not compatible with the normal expectation of #do:, then
even though a type-checking system would allow a Bucket, it would NOT work.

> In large systems it is difficult to remember or "know" the interface.
All the more important that a computer help us remember.
If there's a rule that must be obeyed by some parameter, I want to be able to
ask my development environment to tell me exactly what I need to know...
and I want it NOW, while I'm working on the problem, not an hour down
the road when some compiler tells me about a type mismatch.


> Why not catch it sooner?
I'll go along with that: catch errors where it is easiest or
most efficient (your choice) to do so, as long as you don't expect to
catch problems *before* they are defined.  In systems with frequent Big Bangs, you
can talk about deciding a lot of things at "compile time".
But in systems composed of objects, rebuilding the
universe each time a new kind of thing comes along is just too expensive.
Instead we should be able to adjust dynamically.  It is possible for cooperating objects
to establish contracts such as "only send this message with an argument that obeys #Container".
A client agrees to the terms (for example, it sends an instance
variable which is constrained to obey #Container, or puts a single thing into a box for shipping),
so no effort is wasted on redundant checks.

There is really little difference in the kind of checking and contracts that can be formed,
but there is a big difference in flexibility of when they are formed.
Sometimes it is important to ask just the right question at the right time.

Let's not feel bad if it's not immediately obvious: even my favorite
OO environment (the Earth) is not perfect.  People sometimes ask the
wrong questions at the wrong times.  (oh what a lovely tangent to digress along  8-)

Rik Fischer Smoody			Some people at Servio might agree with me
riks@servio.slc.com	Ph: 503-690-3615	Fax: 629-8556

d87-mra@dront.nada.kth.se (Magnus Ramstr|m) (09/22/90)

In article <698@servio.UUCP> riks@servio.UUCP (Rik Fischer Smoody) writes:
>
>Furthermore (not my recommendation, just an observation),an inheritance system
>typically allows a user defined subclass to BREAK the (unmanaged) protocol
>of a superclass.  E.g. if I created Bucket,subclass of Container,but overwrote
>#do: such that it is not compatible with the normal expectation of #do:, then
>even though a type-checking system would allow a Bucket, it would NOT work.
>
Everywhere a Bucket would be allowed as a Container, it would be treated as a
Container.  Only virtual messages would invoke the differing methods, but
that is the idea.  I too do not recommend breaking methods of superclasses,
but being able to add to them, or to implement the same thing somewhat 
differently, is very useful.
/mr
d87-mra@nada.kth.se (Magnus Ramstr|m). Student @ Dep. of Computer Science.

rick@tetrauk.UUCP (Rick Jones) (09/24/90)

In article <411@eiffel.UUCP> kimr@eiffel.UUCP (Kim Rochat) writes:
>In article <0yw10qr@Unify.Com>, dbrus.Unify.Com.brian@unify.uucp (Brian Meyerpeter ) writes:
>> I have a question that has been bugging me for a long time.  
>> 
>> Why does Eiffel try to be a typed language? 
>> 
>> In an ideally object-oriented language shouldn't you be able to assign
>> a class of any type to another class of any type as long as the 
>> client that is assigning them knows their interface and does not 
>> abuse it?
>
>The last comment is the reason Eiffel is typed.  Because of typing, the
>compiler can verify that class interfaces (i.e., types) are properly used, 
>and a type error cannot occur at run time.  In contrast, dynamically 
>typed languages get type errors at run time instead of compile time.

A fundamental confusion here is the distinction (or lack of it) between types
and classes.  Classification is really all about taxonomy, and being able to
describe something by saying "this is like X except ...".  This is a very good
method for defining a software module (i.e. a class), and the basis of code
re-use in OOPLs.

A type on the other hand is a definition of an interface.  It says what an
object is prepared to do for a client.  For dynamic binding to work reliably,
it is important that an object is never asked to do something it doesn't
support, therefore the issue of type conformance is important.  If this
conformance can be checked and guaranteed early, the predictable reliability of
the system is improved.

A class and a type don't have to be the same thing, but given the state of the
art in compilers it is a useful pragmatic choice to treat them as the same.  I.e. the
inheritance tree is used as the basis for determining type conformance.  For a
compiler to analyse type conformance purely based on interface definitions
independent of inheritance is clearly a non-trivial issue.  I'm not an expert
on compilers so I shall not attempt to comment further.

A view of the future evolution of Eiffel's type system is described in two
papers by Bertrand Meyer, re-posted in comp.lang.eiffel last July.  They
anticipate support for a notion of partial conformance, still based on class
inheritance but both less restrictive and more reliable (that's the way I read
it - perhaps someone would correct me if I've got it wrong?).


Along similar lines, I am working on a client/server architecture using OO
principles (which is the obvious way to treat it), and the problems of
encapsulation and conformance are being dealt with by having the client
interrogate the server to get a definition of its interface.  The client then
knows what it can or cannot do, and can make dynamic decisions as to how to
use the server;  it avoids the run-time problem of trying to "send a message"
and getting it bounced.

In my case the domain is clearly defined and the interface definition is
tightly structured, but is this approach applicable in a more general form in
an OOPL?

-- 
Rick Jones			The definition of atomic:
Tetra Ltd.				from the Greek meaning "indivisible"
Maidenhead, Berks, UK		So what is:
rick@tetrauk.uucp			an atomic explosion?

wwb@sps.com (Bud Bach) (09/24/90)

In article <698@servio.UUCP>, riks@servio.UUCP (Rik Smoody) writes:
> ...
> Instead, I think we need to treat the Protocol itself as a first-class object.

Good point, since protocol errors are quite common.  I am not aware of OOPLs 
that treat protocol like this.  Can you give me some references?

> 
> Let's not feel bad if it's not immediately obvious: even my favorite
> OO environment (the Earth) is not perfect.  People sometimes ask the
> wrong questions at the wrong times.  (oh what a lovely tangent to digress along  8-)

I hear you!
-- 
Bud Bach   					Voice:  407 984-3370
Software Productivity Solutions, Inc.		FAX:    407 728-3957
122 4th Avenue					email:  wwb@sps.com
Indialantic, FL  32903				or:     ...!uunet!sps!wwb

d87-mra@dront.nada.kth.se (Magnus Ramstr|m) (09/25/90)

In the otherwise very well written article <736@tetrauk.UUCP> 
rick@tetrauk.UUCP (Rick Jones) writes:
>
>A fundamental confusion here is the distinction (or lack of it) between types
>and classes.  Classification is really all about taxonomy, and being able to
>describe something by saying "this is like X except ...".  This is a very good
>method for defining a software module (i.e. a class), and the basis of code
>re-use in OOPLs.
>
Even though I am sure that the poster as well as most of you are well aware
of the mistake in this quote, I still want to point it out since it is a
common mistake, and it might confuse newcomers to OO.

Classification and inheritance are one basis of code reuse in OOPLs, but not THE
basis.  The most important basis for code reuse in OOPLs is polymorphism, which
allows for programming with interchangeable parts.

Now we return to the subject of this discussion:
   Do we really need types in OOPL's?
or Do we really need early binding in OOPL's?
Several postings have pointed out advantages of early binding, but no one has
(at least in a posting that has reached this site) addressed the advantage of
late binding mentioned in the original posting:

In article <0yw10qr@Unify.Com>, dbrus.Unify.Com.brian@unify.uucp 
(Brian Meyerpeter ) writes:
> I have a question that has been bugging me for a long time.
>
> Why does Eiffel try to be a typed language?
>
> In an ideally object-oriented language shouldn't you be able to assign
> a class of any type to another class of any type as long as the
> client that is assigning them knows their interface and does not
> abuse it?
>

This can be achieved in a strongly typed OOPL by having one class as the
ancestor of all other classes, letting your variables refer to that class
and using virtual messages only. Therefore a strongly typed OOPL can
be transformed into a weakly typed one, while the reverse is not true.

When using a strongly typed OOPL the programmer has to compromise
between type checking and polymorphism.

I would say that we do not REALLY need types in OOPLs, but there are
many situations where they are useful.

Of course, it is very simple to create an OOPL using late binding only,
while it is hard to create a strongly typed OOPL (I am really impressed
by all the implementors of Simula).


d87-mra@nada.kth.se (Magnus Ramstr|m). Student @ Dep. of Computer Science.

caseau@maya.bellcore.com (Yves J Caseau) (09/26/90)

Previous postings assumed that classes and types, although different, could
be identified for the sake of compiler technology.  I don't agree with that:
there are many reasons why a type system should supersede the class taxonomy:

  - The union of some classes is not always a class, so an explicit union
    type is needed (see Johnson et al.'s work on the TS optimizer for more 
    detailed explanations).
 
  - Set operations need "list/set_of_X" kinds of constructs, which are not 
    usually in the class taxonomy (unless you have genericity).  The type 
    system should infer the type of [[1 cons ()] car] to be "integer"; you
    need a richer type system to achieve this.

  - A certain amount of polymorphism is necessary (the famous stack_of_X 
    example), which also requires an extended type system if the compiler is 
    supposed to remove the overhead due to the high-level specification.
    In other words, it is easy to implement a stack_of_something in
    SMALLTALK; it is more difficult to compile this implementation into
    very good code (without useless dynamic "type" checking).

  - Optimizations based on constant recognition may be directly supported
    by an extended type system.

There are many good papers explaining why an extended type system is necessary
(see this year's POPL).  In the LAURE language, we have found it necessary to
stop identifying classes with types in order to produce "really smart" code.  My 
experience is that you need types in a high-level, nice, .... object-oriented
language to gain safety and *efficiency*.  


-- Yves Caseau
   caseau@bellcore.com

pcb@cacs.usl.edu (Peter C. Bahrs) (09/26/90)

Without being for or against, how do you implement polymorphism without types?

  For instance in OO|| syntax:

   Class A;
   A.(behavior x);
   A.(behavior y);

s.t. x and y are different objects.  How can you tell the difference?

In C++ or OO|| you would define 
    behavior (Classname id);
    behavior (Classname2 id);
and the compiler or interpreter will invoke the correct behavior/method based
on the class of the incoming objects.  Otherwise each behavior
must contain added code looking like a switch statement:
     behavior (id)
     {
        if ( (id.(class)).(== "Classname") )
          {
           /* do this */
          }
        else 
         {
          if ( (id.(class)).(== "Classname2") )
          ...
         }
      }

This approach adds more complexity to the code.  It is compounded even more
with a variable number of arguments.  

Just a thought.

/*----------- Thanks in advance... --------------------------------------+
| Peter C. Bahrs                                                         |
| The USL-NASA Project                                                   |
| Center For Advanced Computer Studies      INET: pcb@gator.cacs.usl.edu |
| 2 Rex Street                                                           |
| University of Southwestern Louisiana      ...!uunet!dalsqnt!gator!pcb  | 
| Lafayette, LA 70504                                                    |
+-----------------------------------------------------------------------*/

tma@osc.COM (Tim Atkins) (09/27/90)

In article <1990Sep25.135145.3460@kth.se> d87-mra@dront.nada.kth.se (Magnus Ramstr|m) writes:

>
>This can be achieved in a strongly typed OOPL by having one class as the
>ancestor of all other classes, letting your variables refer to that class
>and using virtual messages only. Therefore a strongly typed OOPL can
>be transformed into a weakly typed one, while the reverse is not true.

	This is not really the case.  In a dynamically typed language
any message can be sent to any object.  In a statically typed language with
a common base class, only messages understood by the base class may be sent.
In practice the latter is much harder to deal with and tends to lead directly
to a need for MI and a lot of "adjective" classes to handle common protocol
that needs to be mixed and matched to give the desired effect.  The situation
is somewhat ameliorated by genericity in Eiffel (parameterized types) and
by casting in C++.  The latter is not always type-safe.  In C++ 2.x a virtual
base class, which is certainly likely in the presence of MI and polymorphism
needs, cannot be legally cast to a derived pointer type.  I and others have
developed runtime work-arounds, but they totally violate the supposed
goodness of static typing.

	I also disagree that "strong" typing cannot be simulated in a 
dynamically typed language.  Type or even protocol tests can be inserted at
will, and could probably be automated in a fairly clever and efficient fashion.

	As mentioned in earlier posts there are problems that cannot be 
reasonably solved in statically typed languages.  At the least a true message-
passing mechanism would be a nice addition to languages such as C++ for 
handling these cases.

Tim Atkins

rick@tetrauk.UUCP (Rick Jones) (09/27/90)

In article <15362@rouge.usl.edu> pcb@cacs.usl.edu (Peter C. Bahrs) writes:
>
>Without being for or against, how do you implement polymorphism without types?
>
>  For instance in OO|| syntax:
>
>   Class A;
>   A.(behavior x);
>   A.(behavior y);
>
>st. x and y are different objects.  How can you tell the difference?
>
>In C++ or OO|| you would define 
>    behavior (Classname id);
>    behavior (Classname2 id);
>and the compiler or interpreter will invoke the correct behavior/method based
>on the class of the incoming objects.  Otherwise each behavior
>must contain added code looking like a switch statement:
>	[code example]

What you are defining here is not polymorphism but function overloading, i.e.
multiple functions with the same name but different signatures in the same
class.  A language doesn't have to support overloading to allow polymorphism;
in fact Eiffel, the original subject of this thread, does not support
overloading of functions at all.

Polymorphism, using the same example style as above, would be:

	Class A;

	A = <instance of class B>;
	A.behaviour(x);

	A = <instance of class C>;
	A.behaviour(x);

or if you want it in Eiffel:

	a: A; b: B; c: C;

	b.Create; c.Create ;

	a := b; a.behaviour (x);
	a := c; a.behaviour (x);

The two calls to a.behaviour in fact refer to different objects, which belong
to different classes.  If the version of the "behaviour" routine invoked in
each case depends on the actual object type - i.e. (b) or (c), then the binding
is dynamic.  If the version of the routine is the one defined for (a)
regardless, the binding is static.

There is an implicit assumption here of reference semantics.  The objects (b)
and (c) are not copied into object (a) - (a) is merely a reference to the
actual objects.  If the assignments were copies, the (a) object would always be
an (a) object, and the copy operation between types would imply some form of
conversion.

The question of typing is how the rules for legality of assignment of either
(b) or (c) to (a) are defined in the language.  If the language is untyped
(Smalltalk, Objective-C), then there is no restriction whatever.  If object (c)
turns out not to have a "behaviour" routine, this is not detected until
run-time.  If the language is typed (Eiffel, C++), objects (b) and (c) must
conform to the type of (a), and are therefore guaranteed to have a "behaviour"
routine (or have object structures on which "behaviour" will work correctly if
the binding is static, as it may be in C++).  This usually means that they must
inherit from the class of (a), and so possess at least all the features of
class A.

In fact the scope of the example only requires that (b) and (c) possess a
"behaviour" routine; other routines possessed by (a) may not matter.  This
suggests that only partial conformance is actually required, but that's where
the design of the compiler starts to get fun ...

Sorry if this seems to be going back to basics, but the question did reveal
some fundamental misconceptions.

Anyone want to start a thread to discuss the benefits or otherwise of
overloading?

-- 
Rick Jones			The definition of atomic:
Tetra Ltd.				from the Greek meaning "indivisible"
Maidenhead, Berks, UK		So what is:
rick@tetrauk.uucp			an atomic explosion?

d87-mra@dront.nada.kth.se (Magnus Ramstr|m) (09/28/90)

In article <3832@osc.COM> tma@osc.UUCP (Tim Atkins) writes:
>In article <1990Sep25.135145.3460@kth.se> d87-mra@dront.nada.kth.se (Magnus Ramstr|m) writes:
>
>>
>>This can be achieved in a strongly typed OOPL by having one class as the
>>ancestor of all other classes, letting your variables refer to that class
>>and using virtual messages only. Therefore a strongly typed OOPL can
>>be transformed into a weakly typed one, while the reverse is not true.
>
>	This is not really the case.  In a dynamically typed language
>any message can be sent to any object.  In a staticly typed language with
>a common base class only messages understood by the base class may be sent.
>
Ok, I was unclear.  The statement above is meant to be theoretical.  The base
class must of course understand all messages, if only by invoking empty
methods.
>
>	I also disagree that "strong" typing cannot be simulated in a 
>dynamically typed language.  Type or even protocol tests can be inserted at
>will and could probably be automated in fairly clever and efficient fashion.
>
Again, theoretical.  I wrote "transform into", not "simulate".  A dynamically
typed language cannot provide compile-time type checking, as I am sure
you would agree.  But my article was unclear on the details, and I
thank you for pointing this out.
>
>Tim Atkins
>

d87-mra@nada.kth.se (Magnus Ramstr|m). Student @ Dep. of Computer Science.

leavens@cs.iastate.edu (Gary Leavens) (09/28/90)

In comp.object you write:

>Why does Eiffel try to be a typed language?  Shouldn't dynamic binding
>allow for any type to be assigned to anytype?  It might be a nice feature
>to restrict assignments (dynamic binding) in some cases but in others
>I don't think so. 

>In an ideally object-oriented language shouldn't you be able to assign
>a class of any type to another class of any type as long as the 
>client that is assigning them knows their interface and does not 
>abuse it?

You really answer your own question.
A programming language's type system can help ensure that
you "know the interface" of each object and do not abuse it.
What does that mean?  Each object supports a protocol
(messages with certain arities).  So think of the type of
a variable (or expression) as the minimum protocol that
the objects the variable can denote at run-time
are guaranteed to support.  This naturally leads to a (weak)
notion of subtype/supertype relationships.  That is,
if S is a weak subtype of T, then objects of type S respond to
all the messages to which objects of type T respond.

Now let the type system enforce the invariant that
each variable (and expression) of type T can only denote
objects of a weak subtype of T.  Furthermore, you are
only allowed to send a message to an expression of
type T if the message is in the protocol of T.
It follows that you will never get a "message not understood"
error at run-time.  (See why?)

Type systems like this have been known since Cardelli's
work in 1984, and have been used in Trellis/Owl.
This also seems to be the intention of the Eiffel type
system.  However, Eiffel goes further, and uses a stronger
subtype relationship that helps one reason about programs.
See Meyer's book, or my article "Reasoning about Object-Oriented
Programs that use Subtypes" (with William Weihl) that will
appear in ECOOP/OOPSLA '90.

	Gary Leavens

--
	229 Atanasoff Hall, Department of Computer Science
	Iowa State University, Ames, Iowa 50011-1040, USA
	phone: (515) 294-1580

cline@cheetah.ece.clarkson.edu (Marshall Cline) (10/02/90)

In article <3832@osc.COM> tma@osc.COM (Tim Atkins) writes:
...
>	I also disagree that "strong" typing cannot be simulated in a 
>dynamically typed language.  Type or even protocol tests can be inserted at
>will and could probably be automated in fairly clever and efficient fashion.

This reveals probably the most common communication problem: nomenclature.
What is usually meant by ``strong'' typing is ``static'' typing (cf. Booch's
OOD, Meyer's OOSC, etc.).  I.e., saying that an OOPL is strongly typed means
things like: the compiler can tell, from the *text* of the program alone,
whether or not an object will be equipped to handle a particular message.
Strong (static) typing does *NOT* mean the compiler knows what the message
will *do*; the latter is static binding.

Static binding says the compiler knows (from the text of the program alone)
the exact member function that will be called.  Dynamic binding and static
typing coexist wonderfully, existence proofs being C++, Simula, Eiffel, etc.

Strong typing is especially valuable for programming-in-the-large where the
edit-compile-debug cycle is especially tedious.  All other things being
equal (they're not), ``sooner'' error detection is better than ``later''.

Marshall Cline

--
==============================================================================
Marshall Cline / Asst.Prof / ECE Dept / Clarkson Univ / Potsdam, NY 13676
cline@sun.soe.clarkson.edu / Bitnet:BH0W@CLUTX / uunet!clutx.clarkson.edu!bh0w
Voice: 315-268-3868 / Secretary: 315-268-6511 / FAX: 315-268-7600
Career search in progress; ECE faculty; research oriented; will send vita.
PS: If your company is interested in on-site C++/OOD training, drop me a line!
==============================================================================

euaabt@eua.ericsson.se (Anders.Bjornerstedt) (10/03/90)

tma@osc.COM (Tim Atkins) writes:

>In article <1990Sep25.135145.3460@kth.se> d87-mra@dront.nada.kth.se (Magnus Ramstr|m) writes:

>>
>>This can be achieved in a strongly typed OOPL by having one class as the
>>ancestor of all other classes, letting your variables refer to that class
>>and using virtual messages only. Therefore a strongly typed OOPL can
>>be transformed into a weakly typed one, while the reverse is not true.

>	This is not really the case.  In a dynamically typed language
>any message can be sent to any object.  In a staticly typed language with
>a common base class only messages understood by the base class may be sent.
>In practice the latter is much harder to deal with and tends to lead directly
>to a need for MI and a lot of "adjective" classes to handle common protocol
>that needs to be mixed and matched to give the desired effect.  The situtation
>is somewhat ameliorated by genericity in Eiffel (parameterized types) and
>by casting in C++.  The latter is not always type-safe.  In C++ 2.x a virtual
>base, which is certainly likely in the presence of MI and polymorphism needs
>can not be legally cast to a derived pointer type.  I and others have develop-
>ed runtime work-arounds but they totally violate the supposed goodness of
>static typing.

Strong typing is not the same thing as static typing.  Magnus Ramstr|m talks
about strong typing, and you then go on to talk about static typing.
An example: in Simula you have the notion of requalification, which is like
casting in C or C++ but causes a run-time check.  This gives you both type safety
(in the sense of only allowing messages to be sent to an object that it
is guaranteed to understand) and the opportunity for compiler optimizations.
Your program also runs the risk of being aborted at run time if a 
requalification fails.  But at least your program aborts at the precise point
where it is trying to do something not intended by the programmer. 
Requalifying a variable to be of a more specialized type is then a special
case of an assertion.

-----------------------
Anders Bjornerstedt
Ellemtel Telecommunication Systems Laboratories
Box 1505
S-125 25   Alvsjo
SWEDEN

email: Anders.Bjornerstedt@eua.ericsson.se

lgm@cbnewsc.att.com (lawrence.g.mayka) (10/03/90)

In article <CLINE.90Oct1164608@cheetah.ece.clarkson.edu>, cline@cheetah.ece.clarkson.edu (Marshall Cline) writes:
> Strong typing is especially valuable for programming-in-the-large where the
> edit-compile-debug cycle is especially tedious.  All other things being
> equal (they're not), ``sooner'' error detection is better than ``later''.

Dynamically typed languages such as Common Lisp and Smalltalk perform
incremental compilation and loading, so the delay in the
edit-compile-debug cycle is near-zero.  Compile-time typing is
precisely what *requires* those long edit-compile-debug delays that
are so intolerable in conventional large software systems.


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.

tma@m5.COM (Tim Atkins) (10/03/90)

In article <CLINE.90Oct1164608@cheetah.ece.clarkson.edu> cline@sun.soe.clarkson.edu (Marshall Cline) writes:
>In article <3832@osc.COM> tma@osc.COM (Tim Atkins) writes:
>...
>>	I also disagree that "strong" typing cannot be simulated in a 
>>dynamically typed language.  Type or even protocol tests can be inserted at
>>will and could probably be automated in fairly clever and efficient fashion.
>
>This reveals probably the most common communication problem: nomenclature.
>What is usually meant by ``strong'' typing is ``static'' typing (cf. Booch
>OOD, Meyer OOSC, etc).  Ie: saying that an OOPL is strongly typed means
>things like: the compiler can tell, from the *text* of the program alone,
>whether or not an object will be equipped to handle a particular message.
>Strong (static) typing does *NOT* mean the compiler knows what the message
>will *do*, the latter being static binding.
>

Granted.  I usually tend to steer away from the terms "strong" vs. "weak"
in regards to typing but I slipped up here.


>Static binding says the compiler knows (from the text of the program alone)
>the exact member function that will be called.  Dynamic binding and static
>typing coexist wonderfully, existence proofs being C++, Simula, Eiffel, etc.

	I take a bit of exception to C++ in this context.  Dynamic binding
means that the binding is done at run time based on the run-time type of
the receiver and the message sent.  C++ emphatically does not support this.
Instead it uses static type information to calculate the offset, relative
to a type-specific pointer location within the receiver, at which the function
may be found.  There is no run-time lookup at all beyond the simple
array reference.
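The mechanism Tim describes can be hand-rolled to make it concrete. This is
only a sketch of what a cfront-style compiler generates (the names `Obj`,
`Method`, `call_area`, and the one-slot tables are all invented here): each
object carries a pointer to its class's table of function pointers, and a
"virtual call" is just an array reference at a statically known offset.

```cpp
#include <cassert>

struct Obj;                     // forward declaration
typedef int (*Method)(Obj*);    // all methods share one signature here

struct Obj { Method* vtbl; };   // first field: pointer to the class's table

int circle_area(Obj*) { return 10; }
int square_area(Obj*) { return 20; }

Method circle_vtbl[] = { circle_area };  // slot 0 = "area"
Method square_vtbl[] = { square_area };

// The slot offset (0) is computed at compile time from the static type;
// only the table pointer is fetched at run time.
int call_area(Obj* o) { return o->vtbl[0](o); }
```

Whether this counts as "dynamic binding" is exactly the point debated in the
follow-ups: the function actually called does depend on the run-time contents
of the object.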

>
>Strong typing is especially valuable for programming-in-the-large where the
>edit-compile-debug cycle is especially tedious.  All other things being
>equal (they're not), ``sooner'' error detection is better than ``later''.
>
>Marshall Cline
>

	Ah, but I would contend that static typing environments necessitate
to a large degree just the type of ills you represent them as curing.  It
is much more difficult to create an incremental, highly interactive
development environment for a statically typed language.  One also commonly
has more code to debug, as reusability is a bit more constrained by many of
the static typing models.  In a dynamically typed interactive environment I
believe I will catch errors much faster, because changes are incremental and
because I tend to test much more when I don't have to "edit-compile-link"
a new program just to test a new feature.

- Tim Atkins

craig@Neon.Stanford.EDU (Craig D. Chambers) (10/04/90)

In article <3851@m5.COM> tma@m5.UUCP (Tim Atkins) writes:
>In article <CLINE.90Oct1164608@cheetah.ece.clarkson.edu> cline@sun.soe.clarkson.edu (Marshall Cline) writes:
>>Static binding says the compiler knows (from the text of the program alone)
>>the exact member function that will be called.  Dynamic binding and static
>>typing coexist wonderfully, existence proofs being C++, Simula, Eiffel, etc.
>
>	I take a bit of exception to C++ in this context.  Dynamic binding
>means that that binding is done a run-time based on the run-time type of 
>the receiver and the message sent.  C++ emphatically does not support this.
>Instead it uses static type information to calculate the offset relative
>to a type specific pointer location within the receiver where the function
>may be found.  There is no run-time lookup at all except to simply do the
>simple array reference.

C++ *does* support dynamic binding, implemented in cfront (and most
(all?) other C++'s) using indirect procedure calls (as you describe).
The fact that this implementation may be more efficient than some
other run-time dispatching implementations (such as table lookups)
doesn't mean it's not dynamic binding.  Of course, the C++ class/type
rules have been designed so that this implementation is possible.
Adding multiple inheritance has so complicated this implementation
that some of the other implementations, perhaps coupled with
optimizations like inline caching and/or customization, may start to
look more attractive.

And I'd like to add that Trellis/Owl is the best example of an
existing statically-typed object-oriented language I'm aware of, as
far as flexibility (and correctness) in the type system.

>>Marshall Cline

>- Tim Atkins

-- Craig Chambers

cline@cheetah.ece.clarkson.edu (Marshall Cline) (10/04/90)

In article <1990Oct2.170910.4805@eua.ericsson.se> euaabt@eua.ericsson.se (Anders.Bjornerstedt) writes:
>Strong typing is not the same thing as static typing.

From the rest of your remarks, I *think* we agree in concept, but the
terminology you use is perhaps non-standard.  Strong typing *DOES* mean
static typing [ex: Booch OOD, Meyer OOSC, etc].  It does not necessitate
static *binding*, as I and others have repeatedly said.

>Magnus Ramstr|m talks
>about strong typing and you then go on and talk about static typing.
>An example: In Simula you have the notion of requalification, which is like
>casting in C or C++ but causes a runtime check. [...]

Although this is `type safe', it is not statically typed (strongly
typed).  Like pointer coercions in C++, this is a hole in Simula's
strong type system (put there intentionally for very good pragmatic
reasons; real-life software engineering isn't always as pretty as
Pascal).  Pointer coercions in C++ aren't type *safe*, but neither
they nor Simula's requalification is statically typable in general.


Also in the above referenced message, tma@osc.COM (Tim Atkins) writes:

>>In a dynamically typed language
>>any message can be sent to any object.  In a staticly typed language with
>>a common base class only messages understood by the base class may be sent.

You act like the ability to send *any* message to *any* object is a
wonderful feature of weakly (dynamically) typed languages!  In fact it
pointedly shows that the compiler can't (and doesn't try to) statically
verify that the message is ``safe'' (``safe'' meaning ``that object *IS*
equipped to handle this message'').  This very ``feature'' (``ability''
to send any message to any object without the compiler helping you
statically determine if you'll crash and burn at runtime) makes
dynamically typed languages the wrong choice for very large software
systems.  (Not that this is their goal in life -- they make *great*
interactive rapid prototyping environments -- but they don't scale --
which is ok so long as both tools are used for what they are designed).

Marshall Cline

--
PS: If your company is interested in on-site C++/OOD training, drop me a line!
PPS: Career search in progress; ECE faculty; research oriented; will send vita.
--
Marshall Cline / Asst.Prof / ECE Dept / Clarkson Univ / Potsdam, NY 13676
cline@sun.soe.clarkson.edu / Bitnet:BH0W@CLUTX / uunet!clutx.clarkson.edu!bh0w
Voice: 315-268-3868 / Secretary: 315-268-6511 / FAX: 315-268-7600

dlw@odi.com (Dan Weinreb) (10/05/90)

In article <3851@m5.COM> tma@m5.COM (Tim Atkins) writes:


	   I take a bit of exception to C++ in this context.  Dynamic binding
   means that that binding is done a run-time based on the run-time type of 
   the receiver and the message sent.  C++ emphatically does not support this.
   Instead it uses static type information to calculate the offset relative
   to a type specific pointer location within the receiver where the function
   may be found.  There is no run-time lookup at all except to simply do the
   simple array reference.

It depends on what level of abstraction you consider.  At the level of
abstraction of C++ semantics, as defined by the C++ Language Reference
Manual: when you invoke a virtual function member, the function that
gets called is determined by the type (class) of the object that the
function member is invoked on.  This is true whether it's implemented
by calculating offsets, looking up in hash tables, or whatever.  If
you ignore the question of how it's implemented, it certainly does
bind (that is, bind a particular function name to a specific virtual
function member) at runtime.

At the level of abstraction of the implementation, you say "There is
no run-time lookup at all except to simply do the simple array
reference."  Another way to say the same thing is "There is a
runtime lookup, which is implemented using a simple array reference."
No matter how you describe it, it's certainly determining at runtime
what function to call.

   >Strong typing is especially valuable for programming-in-the-large where the
   >edit-compile-debug cycle is especially tedious.  All other things being
   >equal (they're not), ``sooner'' error detection is better than ``later''.

	   Ah, but I would contend that static typing environments necessitate
   to a large degree just the type of ills you represent them as curing.  It
   is much more difficult to create an incremental, highly interactive develop-
   ment environment for a statically typed language.  One also has commonly 
   more code to debug as the re-useability is a bit more constrained by many of
   the static typing models. In a dynamically typed interactive environment I
   believe I will catch errors much faster because changes are incremental and
   because I tend to test much more when I don't have to "edit-compile-link"
   a new program just to test a new feature.  

   - Tim Atkins

As someone who has had very extensive experience using both kinds of
environment (Genera/Symbolics Common Lisp and SunOS/C++), I think I
can reliably report that there is a lot of truth in both of these
statements.  Static typing (typed variables) does catch more errors at
compile time, and it's easier to fix them when they're caught at
compile time, and if they're caught at compile time then you hear
about them even if they are in paths of the code that are very rarely
executed.  On the other hand, interactive incremental environments are
really great, and I sorely miss them in C++.

However, the two things are not mutually incompatible, although I
think I agree that it's not easy to put them together.  I recommend
that anybody interested in this look into Sabre C, which is a real
incremental interactive environment for C.  Rumor has it that they
are working on a C++ version, which I'll be looking forward to.

craig@Neon.Stanford.EDU (Craig D. Chambers) (10/05/90)

In article <CLINE.90Oct4111329@cheetah.ece.clarkson.edu> cline@sun.soe.clarkson.edu (Marshall Cline) writes:
>In article <1990Oct2.170910.4805@eua.ericsson.se> euaabt@eua.ericsson.se (Anders.Bjornerstedt) writes:
>>Strong typing is not the same thing as static typing.
>
>From the rest of your remarks, I *think* we agree in concept, but the
>terminology you use is perhaps non-standard.  Strong typing *DOES* mean
>static typing [ex: Booch OOD, Meyer OOSC, etc].  It does not necessitate
>static *binding*, as I and others have repeatedly said.
>
>>Magnus Ramstr|m talks
>>about strong typing and you then go on and talk about static typing.
>>An example: In Simula you have the notion of requalification, which is like
>>casting in C or C++ but causes a runtime check. [...]
>
>Although this is `type safe', it is not statically typed (strongly
>typed).  Like pointer coersions in C++, this is a hole in Simula's
>strong type system (put there intentionally for very good pragmatic
>reasons; real life software engineering isn't always as pretty as
>Pascal).  Pointer coersions in C++ aren't type *safe*, but neither
>they nor Simula's requalification is statically typable in general.

I disagree with your terminology.  Strong typing means that all
operations are type safe; static typing means that type safety can be
guaranteed statically by scanning the program text.  Most
dynamically-typed languages (i.e. languages that do not type check
programs statically but instead defer any necessary type checks until
run-time) are strongly typed (e.g. Lisp (at least interpreted
versions), Smalltalk, Self), since the run-time system ensures that
only legal (type safe) operations are applied to objects/values.  Many
statically-typed languages are also strongly typed.  Some languages
are mostly statically-typed, but include facilities/loopholes to defer
type checking until run-time (e.g. Simula and CLU).  Other languages
claim to be statically- and strongly-typed, but in fact contain holes
in the type system that can lead to violation of type safety (e.g.
Eiffel, Beta (I believe)).  Other statically-typed languages aren't
strongly-typed at all (e.g. C, C++); these are usually called
weakly-typed languages.

Since strong typing and static typing are frequently misused as
synonyms, I tend to avoid using the terms strong and weak typing
without including a definition of what I mean. Luckily, static typing
is pretty well universally understood to mean what I think it means.

Completely orthogonal to static vs. dynamic and strong vs. weak typing
is static vs. dynamic binding, sometimes called early and late
binding.  Languages with dynamic binding include some sort of run-time
dispatching to select an appropriate implementation for a procedure
call based on the run-time values of the call's arguments.  This
implementation may be indirect procedure calls ala C++, hash table
lookups ala Eiffel and Smalltalk, or some other implementation.  Of
course, in some situations the compiler may be able to find the single
matching implementation for a dynamically-bound call and thus reduce
it to a statically-bound call as an optimization (e.g. using type
analysis and customization in Self or using case analysis in Typed
Smalltalk), but in general this can't always be done.

To answer the original question about types in OOPLs, then, I think
that there is a place for static type information in object-oriented
programs.  The trick is to design a type system that both supports
making guarantees about type safety efficiently at compile-time and
doesn't overly constrain the kinds of programs people can write.
Using abstract interfaces/signatures/specifications as types is a good
way to do this.  See Ralph Johnson's earlier posting for a good
discussion of the relationship between types and classes.

-- Craig Chambers

lgm@cbnewsc.att.com (lawrence.g.mayka) (10/05/90)

In article <CLINE.90Oct4111329@cheetah.ece.clarkson.edu>, cline@cheetah.ece.clarkson.edu (Marshall Cline) writes:
> You act like the ability to send *any* message to *any* object is a
> wonderful feature of weakly (dynamically) typed languages!  In fact it
> pointedly shows that the compiler can't (and doesn't try to) statically
> verify that the message is ``safe'' (``safe'' meaning ``that object *IS*
> equipped to handle this message'').  This very ``feature'' (``ability''
> to send any message to any object without the compiler helping you
> statically determine if you'll crash and burn at runtime) makes
> dynamically typed languages the wrong choice for very large software
> systems.  (Not that this is their goal in life -- they make *great*
> interactive rapid prototyping environments -- but they don't scale --
> which is ok so long as both tools are used for what they are designed).

The "safety" of static/strong/compile-time typing is a guarantee of
definition/usage type agreement across a "finished" program assembled
at a single point in time.  This assumption is fundamentally
incompatible with the development and evolution of large software
systems, which - like living beings - are never "finished" until
they're *dead*!  Large systems - successful ones - undergo continuous
evolution from their initial genesis in the lab, throughout their
lifetime, all the way up until the retirement of the last remaining
specimen from service.  They must continually face new situations
different from those explicitly planned by the original designers.
The ability to reuse most software without change in new, unplanned
circumstances is essential, as is the related ability to introduce
even major architectural improvements incrementally.  Compile-time
"consistency" between two bodies of code is simply another word for
"dependency" - i.e., a need for synchronization that, on a massive
scale, slows down new development to a crawl.

The occurrence of a type error, far from being a "crash and burn at
runtime", is merely a software exception like any other, and responds
to similar treatment by the appropriate exception handler.  Keep in
mind that mere compile-time typing does not guarantee that exceptional
situations will never occur.  Far from it!  In fact, since static
typing virtually always implies a lack of dynamic typing, the former
engenders a false sense of security that may permit serious errors
(like a wild write from an uninitialized pointer) to cause
catastrophic damage before discovery.

I encourage those who claim that dynamic typing is "the wrong choice"
for large software systems to actually run experiments on large (e.g.,
over a million lines of source) software systems, comparing dynamic
vs. static typing - above all, for ease of modification and extension.

Note that for the statically typed system in the experiment, you must
not take advantage of any flexibility ascribable to dynamic typing
(e.g., multiple executables tied together by Korn shell
scripts/commands or ASCII pipes/files).  Such usage is simply an
admission that beyond a certain size, your total program is unwritable
and/or unusable without dynamic typing.  No, you must look instead at
a very large, statically typed software system that runs in a single
address space (or in multiple address spaces that communicate via
statically typed messages).


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.

dlw@odi.com (Dan Weinreb) (10/05/90)

In article <1990Oct2.230958.16544@cbnewsc.att.com> lgm@cbnewsc.att.com (lawrence.g.mayka) writes:

   Dynamically typed languages such as Common Lisp and Smalltalk perform
   incremental compilation and loading, so the delay in the
   edit-compile-debug cycle is near-zero.  Compile-time typing is
   precisely what *requires* those long edit-compile-debug delays that
   are so intolerable in conventional large software systems.

Actually I think it's a bit more complicated than that.  You could
imagine a language like Lisp or Smalltalk in which the program could
be annotated with various assertions, which would be like little
statements that say "if the type of X is not integer, signal an error
now".  You could put assertions on variables, and such a little if
statement would automatically be generated upon each assignment.  You
could also add assertions for arguments and returned values.

Then, you could make the compiler understand these assertions and
attempt to optimize them out at runtime.  For example, if "x" is
asserted to have only integer values, the expression (setq x (+ x 2))
(for you C people that means "the statement x += 2") would not need
any actual runtime checks.  (For now let's ignore any overflow
issues.)

Next, you teach the compiler that when it sees a call to a function
foo, it looks for foo's argument assertions and foo's return value
assertions, and these things get utilized by the aforementioned
optimization.  It's not so hard for the environment to know the
assertions of a called function.  The Symbolics interactive
environment already is capable of looking at a called function to see
how many arguments it expects, so that it can notify you at compile
time if you pass the wrong number of arguments.  Well, this is just an
extension of the same sort of mechanism.

Finally, you have your choice as to whether making these assertions is
purely optional (you add them wherever you want to) or totally
mandatory (every variable, every argument, every return value must
have an assertion).  Of course, since all types are subtypes of other
types, and all types are subtypes of the uber-type T (in Lisp), you
could always assert that something was of type T, which would be a
null assertion (true by definition) (easy to optimize!).
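Dan's scheme can be sketched in C++ terms: a dynamically typed cell whose
reads go through an explicit type assertion, which a compiler could elide
once proved. Everything here (`Value`, `Tag`, `as_int`, `add2`) is an
invented illustration, not any particular Lisp implementation's mechanism.

```cpp
#include <cassert>
#include <stdexcept>

enum Tag { INT, STR };
struct Value {
    Tag tag;
    int i;          // valid when tag == INT (a real cell would use a union)
    const char* s;  // valid when tag == STR
};

// The "little if statement": a run-time type assertion on each read.
int as_int(const Value& v) {
    if (v.tag != INT) throw std::runtime_error("type assertion failed");
    return v.i;   // past this point the value is known to be an integer
}

// (setq x (+ x 2)): if x is asserted integer, a compiler that tracks the
// assertion can prove both the check here and the tag of the result, and
// emit check-free code.
Value add2(const Value& x) {
    Value r;
    r.tag = INT;
    r.i = as_int(x) + 2;
    return r;
}
```

The point is that the check is semantically always there; whether any machine
code is emitted for it is an optimization question.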


Contrariwise, it is possible to write an incremental environment for
C, as witness Sabre.  Of course, there are things in C that make a
real incremental compiler rather hard to do, like the C preprocessor.
But as long as it's used sparingly, the incremental environment
shouldn't have too much trouble.  (Lisp has a similar issue with
reader macros, solved by the fact that they're not used very much.)


I'm not saying that everything is equal to everything.  There are big
differences of all sorts between languages and between environments.
All I am trying to say here is that the differences may not be as
fundamental as they seem, and there is all kind of room for compromise
and/or getting the best of both worlds, or for getting those parts of
each world that you like best.

tma@osc.COM (Tim Atkins) (10/05/90)

	It appears I stuck my foot in it when I claimed that C++ virtual
	functions are not an example of dynamic binding.  That is of course
	wrong, as they are.  However, we need another term whose meaning is
	the overly restrictive one I previously gave dynamic binding.  This
	term applies to languages that have a mechanism for fully run-time
	binding of a function to a (message, receiver) pair where no information
	was necessarily available for either member of the pair at compile
	time.  Apparently we are stuck with the somewhat pejorative
	(in some camps) term, late (or, more descriptively, run-time)
	binding.

	This type of binding is extremely useful in situations such as:

		- distributed object networks

		- ODBMS query servers

		- prototyping environments

		- any system whose actions (messages to send) are
		  specified by the user at run time and, more generally, where
		  the target receiver is also user-specified, particularly
		  in systems where the number of possible combinations
		  is too large to precompute

		- creating highly reuseable classes whose functionality is
		  user extendible.  An example would be the pluggable
		  views of Smalltalk.

	While it is possible to accomplish subsets of these types of systems
	in statically typed, dynamically bound languages it is excruciatingly
	difficult, IMHO, and I believe impossible in the most general cases.
	Yet such a capability is a logical extension of reuse techniques which
	predate OO technology.   My beef is not with the more limited dynamic
	binding generally available but with languages that provide no support
	for late binding.
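Late binding in Tim's stronger sense, where neither the receiver's type nor
the message name need be known at compile time, can be sketched in C++ with a
per-object message table keyed by the message's name. This is an invented
illustration (the names `LateObject`, `understands`, `send`, `beep` are all
hypothetical), roughly in the spirit of Smalltalk's `perform:`.

```cpp
#include <cassert>
#include <map>
#include <string>

struct LateObject {
    // Message name -> implementation, looked up entirely at run time.
    std::map<std::string, int (*)(LateObject&)> methods;

    bool understands(const std::string& msg) const {
        return methods.count(msg) != 0;
    }
    // The message can come from user input, a network, a query, etc.
    int send(const std::string& msg) {
        return methods[msg](*this);
    }
};

int beep(LateObject&) { return 1; }
```

Nothing about the call `o.send(name)` is resolvable from the program text
alone, which is exactly what distinguishes this from C++ virtual dispatch.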

- Tim Atkins

dal@ncs.dnd.ca (Andyne) (10/06/90)

In article <1990Oct5.010703.16019@Neon.Stanford.EDU> craig@Neon.Stanford.EDU (Craig D. Chambers) writes:

The trick is to design a type system that both supports
making guarantees about type safety efficiently at compile-time and
doesn't overly constrain the kinds of programs people can write.

The problem is that "making guarantees about type safety efficiently at
compile time" requires the use of the anti-monotonicity or
contravariant rule (used in Trellis/Owl, Emerald, Duo-Talk, and many
other languages for checking `conformance'), which DOES overly
constrain the kinds of programs people can write (even the most intuitive
subtype relationships do not hold).  The other alternative is to claim
that "PRACTICAL SOFTWARE ENGINEERING does not require the contravariant
rule 95% of the time (even though novices might easily come up with toy
examples where type failures occur without it); the flexibility offered by
the covariant rule in defining subtype relations is worth more than the
inability to guarantee type safety for 5% of the cases at compile time",
and create kludgy checks to handle the 5% case in an unsatisfactory manner.


Diptendu Dutta

Andyne Computing Limited
544 Princess Street, Suite 202
Kingston, Ontario, Canada K7L 1C7
(613) 548-4355

diptendu@andyne.dnd.ca

craig@Neon.Stanford.EDU (Craig D. Chambers) (10/06/90)

In article <1990Oct5.212947.19003@ncs.dnd.ca> dal@ncs.dnd.ca (Andyne) writes:
>The problem is that "making guarantees about type safety efficiently at
>compile time" requires the use of the anti-monotinicity or 
>contra-variant rule (used in Trellis/Owl, Emerald, Duo-Talk, and many 
>other languages for checking `conformance') which DOES overly 
>constrain the kinds of programs people can write (even the most intuitive 
>subtype relationships do not hold). The other alternative is to claim
>that "PRACTICAL SOFTWARE ENGINEERING does not require the contravariant
>rule 95% of the time (even though novices might easily come up toy
>examples where type failures occur without it), the flexibility offered by
> the covariant  rule
>in defining subtype relations is worth more than the inability of 
>guaranteeing type safety for 5% of the cases at compile time", and create
>kludgy checks to handle the 5% case in an unsatisfactory manner.
>
>Diptendu Dutta

I've heard this argument before, particularly from Bertrand Meyer, and
I still disagree.  I think it is possible to have type safety (e.g.
the contravariant rule) without overly constraining programs.  The
"problem" with the contravariant rule typically arises with
binary-type methods in which the argument should be the same type as
the receiver, and with parameterized collections in which a collection
of S should be assignable to a variable of type collection of T if S
is a subtype of T (this violates contravariance since the argument to
the store function is a more specific type (S) in the subtype than in
the supertype).

To support the first case, I've been working on adding static type
checking to a language with multiple dispatching. Multiple dispatching
handles binary messages much better than single dispatching, since
double dispatching is a pain both for the programmer and for the type
system.  Thus the type system would allow a subtype to specify
"covariant" methods, as long as the method dispatched on both
arguments.  It is still type safe, because the supertype's
implementation will be invoked if either of the arguments isn't the
more specific type.
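The safety argument can be made concrete with a crude multimethod sketch.
This is not Chambers's actual system; it fakes dispatch-on-both-arguments in
C++ with run-time type tests (the classes `Num`/`Int` and the constant 1000
marking the specialized path are invented for the illustration).

```cpp
#include <cassert>

struct Num { virtual ~Num() {} int v; };
struct Int : Num { explicit Int(int x) { v = x; } };

int add(const Num& a, const Num& b) { return a.v + b.v; }         // general
int add(const Int& a, const Int& b) { return a.v + b.v + 1000; }  // special

// A crude multimethod: inspect the run-time types of *both* arguments.
// The specialized ("covariant") method runs only when both operands have
// the more specific type; otherwise the supertype's code is used, which is
// what keeps the covariant specialization type safe.
int dispatch_add(const Num& a, const Num& b) {
    const Int* ia = dynamic_cast<const Int*>(&a);
    const Int* ib = dynamic_cast<const Int*>(&b);
    if (ia && ib) return add(*ia, *ib);
    return add(a, b);
}
```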

Many of the parameterized collection problems can be solved without
resorting to multiple dispatching.  I believe the main reason for the
"practical software engineering" problems is that the type hierarchy
isn't factored well enough.  In particular, the read-only interface to
collections should be factored out into a supertype of the read-write
interface to collections.  The read-only interfaces of the two
collections described above would presumably be related as desired
(read-only-collection of S is a subtype of read-only-collection of T,
if S is a subtype of T), since none of the functions of
read-only-collections take S/T as arguments.  Each read-write
interface is certainly a subtype of the corresponding read-only
interface, but two read-write interfaces are not subtypes, as the
contravariant rule dictates.  Since I contend that the primary reason
for assigning a collection-of-S to a variable of type collection-of-T
is to use the collection-of-T in a read-only way (e.g. iterate through
it), what the programmer should have done is to declare the variable
as a read-only-collection-of-T. Then the assignment is legal (since
read-write-collection-of-S is a subtype of read-only-collection-of-T)
*and* type safe.  Other kinds of problems with collections are
probably an instance of the above binary problem, and amenable to
multiple dispatching solutions.
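The read-only/read-write factoring Craig describes can be sketched directly.
The class names below (`ReadOnlyCollection`, `ReadWriteCollection`,
`VectorColl`, `sum`) are hypothetical; the point is that the read-only
interface only *produces* elements, so it never runs into the contravariance
problem that the store operation causes.

```cpp
#include <cassert>
#include <vector>

// Read-only supertype: only produces T, never consumes it.
template <class T> struct ReadOnlyCollection {
    virtual ~ReadOnlyCollection() {}
    virtual int size() const = 0;
    virtual T get(int i) const = 0;
};

// Read-write subtype adds the store operation, the one that consumes T
// and therefore violates contravariance between collection-of-S and
// collection-of-T.
template <class T> struct ReadWriteCollection : ReadOnlyCollection<T> {
    virtual void put(const T& x) = 0;
};

template <class T> struct VectorColl : ReadWriteCollection<T> {
    std::vector<T> data;
    int size() const { return (int)data.size(); }
    T get(int i) const { return data[i]; }
    void put(const T& x) { data.push_back(x); }
};

// A client that only iterates declares the weaker, read-only type.
int sum(const ReadOnlyCollection<int>& c) {
    int s = 0;
    for (int i = 0; i < c.size(); ++i) s += c.get(i);
    return s;
}
```

A caller holding a read-write collection passes it to `sum` freely, because
every read-write collection is-a read-only collection; the problematic
assignment between two read-write types never needs to be written.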

Does anyone have other common examples where these two approaches
wouldn't provide a clean solution?  Has anyone worked on static type
checking in the presence of multiple dispatching?  Seeking related
work....

Speaking of Bertrand Meyer and Eiffel, I believe his proposed fix to
the Eiffel type rules (as reposted to comp.eiffel about a month ago)
to preserve type safety in the presence of covariant type rules
amounts to changing the type checking to use contravariant rules.  As
I read it, he's proposing to allow covariant rules (and redeclaring an
inherited public method as private) as long as either there are no
assignments from the subtype to variables of the supertype, or there
are no uses of the covariant/redeclared functions in the supertype.
But if there are no assignments from the subtype to the supertype,
then there's no need for them to be related in the type hierarchy
(this means that Eiffel would have to support inheritance of code w/o
being a subtype, like private base classes in C++).  And if there are
assignments, then checking for uses simply means to ignore code that's
not used in a system, and then applying contravariant rules.  So
covariant programs that are really used in a covariant way won't pass
the proposed type checking rules, the same as if normal contravariant
rules were applied from the beginning.  So there goes the "support"
for practical software engineering.

-- Craig Chambers

stt@inmet.inmet.com (10/07/90)

Re: the advantages of static type-checking.

I think there is a fundamental difference between incremental
static checking, and run-time checking.  I whole-heartedly
agree with those who believe an incremental-development
environment is more productive and realistic than a batch-oriented
environment.  However, I still believe that it is critical
that each increment undergo some amount of static checking.

For complex systems, a huge amount of the code is only
conditionally executed, and it is infeasible to exhaustively
test such a system.  The only hope is to get as much help
as possible from static analysis, which looks at every
new line of code prior to run-time, to ensure that
it satisfies all static constraints which have been
established for the system.  

Of course, it is also important
to be able to change the set of static constraints during
the evolution of the system, and when such a change is
made, essentially all existing code of the system must
be reanalyzed.  Again, this should be done prior to
production use, though it is useful if it can be deferred
temporarily, so that all new development need not
come to a standstill.  Doing such massive reanalysis seems
like an excellent use for the millions of workstation MIPS
sitting idle every night.

In general, I would argue that if there are no
machine-enforceable static constraints on a complex
system, then the programmers themselves must have no
idea whether any given line of code is right or wrong,
except through (inherently infeasible) exhaustive testing.

S. Tucker Taft   stt@inmet.inmet.com    uunet!inmet!stt
Intermetrics, Inc.
Cambridge, MA  02138

pkr@media01.UUCP (Peter Kriens) (10/09/90)

I can't stop myself from entering the discussion.  I have been
following it for quite a while now, and it seems that most
people live in either the typed camp or the dynamic camp.

I would like to join the discussion with some "statements".

1. Typing is preferred by people who never enjoyed a typeless
   language.

2. A typed language gives you the freedom to make more errors,
   of which only some are caught by the compiler.

3. A strictly typed compiler does not allow you to skip a thorough
   test.

4. The result of an error in a typed language is usually
   disastrous.

5. A typed language forces you to write and maintain the same functionality
   multiple times.

6. A typed program makes your runtime a little faster but your
   development a lot longer.

7. With a typed language the development cycle time is
   exponentially proportional to the project size, while with a dynamic
   language it is almost independent of project size.

Because some of these statements are quite provocative, I would like
to give some background for the fanatics who are really interested:

> 1. Typing is preferred by people who never enjoyed a typeless
>   language.

Most people I have met who enjoyed the joys of a "fast" Smalltalk system
seem to have a hard time going back to an environment like C(++). And
if you look at the amount of information you have to type in for a
C(++) program versus a Smalltalk program, this is quite understandable.
When I write in C, I always feel like I am explaining to my 2-year-old
daughter what I would like her to do. If you look at a C program, it is
80 percent telling the compiler the types, the templates, and the
keywords, and 20 percent defining a function. 


> 2. A typed language gives you the freedom to make more errors, 
>   of which only some are caught by the compiler.

When you look at a C program, it seems that the number of errors you
are allowed to make is a magnitude bigger. For example, if I have
two structures A and B that each maintain something (information hiding,
even in C), and two procedures pA and pB that are supposed to
do the work on those structures, I get a crash if I call pA(B) or
pB(A). So it is nice to have a compiler that checks this for me. In
Smalltalk, the runtime mechanism makes the choice and this particular
error cannot occur. I agree that in some cases it would be nice to have
some "capability checking", but it is undoubtedly true that there are
a lot more errors to make in C than in Smalltalk.
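The pA/pB point can be sketched in Python (standing in for Smalltalk here; the class and function names are invented for illustration). Because each object carries its own behaviour, the "wrong procedure" mistake cannot even be written down:

```python
# Dynamic dispatch sketch: the runtime picks the method from the
# receiver's class, so there is no way to apply A's code to a B.

class A:
    def work(self):
        return "A's work"

class B:
    def work(self):
        return "B's work"

def do_work(obj):
    # the equivalent of a Smalltalk message send: "obj work"
    return obj.work()

print(do_work(A()))  # A's work
print(do_work(B()))  # B's work
```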

> 3. A strictly typed compiler does not allow you to skip a thorough
>   test.

A lot of people say that static typing is necessary because a test will not
reach all the places in the code. If that is true, can you really have
confidence in the code? When I was writing in Pascal, a VERY strictly
type-checked language, I was never able to have a faultless program the
first time it ran without compilation errors. So it seems that testing is
still necessary, even with static type checking. And if it is still
necessary, I would prefer an environment which tells me where it went
berserk, instead of a crash because of an overwritten pointer or
something like that.

> 4. The result of an error in a typed language is usually
>   disastrous.

It seems that errors in statically typed languages are much more disastrous
than errors in dynamically typed languages. When I was writing in C and
PL/M, the errors I got usually meant a machine that died on you. And then,
after a night of debugging, you found the pointer that went astray or the
misinterpretation of that routine. In a dynamic language the error
shows up with a smoking gun in its hands.

> 5. A typed language forces you to write and maintain the same functionality
>    multiple times.

A great example of this is the select function in Smalltalk. If I have
a collection, I can call it to make a subselection according to a certain
criterion. One method works for all collections, and the collections
can contain any kind of object. Well, let's compare that to the Booch
classes, where he needs a float set, a float ordered collection, an
integer sorted collection, etc. And then for each class you need to
write that select function.
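As a rough illustration of writing select once, here is a Python sketch (not Smalltalk; `select` is a hypothetical stand-in for the Smalltalk select: method):

```python
# One generic select for any collection of any element type: no
# separate FloatSet / IntegerSortedCollection versions required.

def select(collection, predicate):
    # rebuild the same concrete collection type from the kept elements
    return type(collection)(x for x in collection if predicate(x))

print(select([1, 2, 3, 4], lambda x: x % 2 == 0))   # [2, 4]
print(select((0, 1, 2), lambda x: x > 0))           # (1, 2)
```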

Even in Smalltalk I think it is not perfect. Because inheritance
is done by way of a tree, I still have to write that select
function in all branches that act like a collection, or build the tree
in a way which is unfavourable in some other respect. I would like to
be able to assign the select method to a number of classes.

> 6. A typed program makes your runtime a little faster but your
>   development a lot longer.

There is no doubt about the fact that typed languages are faster. They
can bypass a lot of checking and indexing at runtime. And even though
vendors claim that a message send is only 25% slower than a procedure
call, this doesn't mean that the overall difference is only 25%. In C many
statements are executed inline; in Smalltalk, everything is a message
send. This results in considerable overhead. But I have so often seen
programmers spend much more time developing code than that code will
ever save in the field that I wonder where the priority
should be. And because most programs usually have only a small critical
section, I think optimising that section and writing the remainder in
a better language is very much preferable.

> 7. With a typed language the development cycle time is
>    exponentially proportional to the project size; with a dynamic
>    language it is almost independent of project size.

In a statically typed language, the compiler needs to know "everything"
about all the classes. For example, in C++ the compiler needs to
know the size of the instance variables to create stack space, and
for virtual binding it needs to know all the superclasses. This results
in an explosion of include files. Because there are so many include
files, the chance of a change in any of those files becomes greater,
and a change forces a recompile of all dependent modules. A normal
development cycle is normally linearly dependent on project size, but
this close coupling of modules makes it exponential.

-------

When I write these lines I always try to think how my grandchildren
in 2045 will look back on these days. I think they will have
the same feeling as we have now looking back at the first days
of automobiles and televisions. Things which seem funny to us now
were taken very seriously then. But I think that they will have a much
more efficient way of telling the computer what to do than C.

=================================================================
Peter Kriens		Tel. (31)23-319075
Mediasystemen		Fax. (31)23-315210
Waarderweg 19		Home (31)23-251942
Haarlem 		Postbox 4932 Zip 2003EX

render@cs.uiuc.edu (Hal Render) (10/10/90)

In article <1426@media01.UUCP> pkr@media01.UUCP (Peter Kriens) writes:
>> 1. Typing is preferred by people who never enjoyed a typeless
>>   language.
>
>Most people I have met who enjoyed the joys of a "fast" Smalltalk system
>seem to have a hard time going back to an environment like C(++). And
>if you look at the amount of information you have to type in for a
>C(++) program versus a Smalltalk program, this is quite understandable.
>When I write in C, I always feel like I am explaining to my 2-year-old
>daughter what I would like her to do. If you look at a C program, it is
>80 percent telling the compiler the types, the templates, and the
>keywords, and 20 percent defining a function. 

Conversely, a substantial percentage of my method definitions in Smalltalk 
consists of checks to verify the class of the objects to which I am sending
messages.  These things are much more concisely done in a typed language
by constraining variables and parameters to hold only objects of a certain
type.

>> 2. A typed language gives you the freedom to make more errors, 
>>   of which only some are caught by the compiler.
>
>When you look at a C program, it seems that the number of errors you
>are allowed to make is a magnitude bigger. For example, if I have
>two structures A and B that each maintain something (information hiding,
>even in C), and two procedures pA and pB that are supposed to
>do the work on those structures, I get a crash if I call pA(B) or
>pB(A). So it is nice to have a compiler that checks this for me. In
>Smalltalk, the runtime mechanism makes the choice and this particular
>error cannot occur. I agree that in some cases it would be nice to have
>some "capability checking", but it is undoubtedly true that there are
>a lot more errors to make in C than in Smalltalk.

This is misleading at best.  In Smalltalk you can also "get a crash" if
you send the wrong message to the wrong kind of object, it is just that
you actually have to run the program to see if the message is indeed
wrong.  Worse, in Smalltalk the object could understand the message but 
still not handle it correctly because it was intended for a different kind
of object.  I can send an add: message to both a set and to an ordered
collection, but if I only want to send it to an ordered collection,
I either have to verify that only an ordered collection can be assigned
to the variable I am using as the target reference or I have to query
the target object with kindOf: to validate its class.  It is simpler
in such cases to be able to statically declare that only objects
of a particular kind can be assigned to a certain variable and let the
compiler do the checking for me.
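A rough Python sketch of the hand-written class check being described (all names here are invented; `append_ordered` plays the role of the method that must accept only an ordered collection):

```python
# Without a static declaration, the guard must be written and executed
# by hand -- the dynamic analogue of declaring "target: OrderedCollection".

def append_ordered(target, item):
    if not isinstance(target, list):   # the kindOf:-style validation
        raise TypeError("append_ordered expects an ordered collection")
    target.append(item)

log = []
append_ordered(log, "first")           # accepted
try:
    append_ordered({"a"}, "second")    # a set also understands add,
except TypeError:                      # but is rejected here at run time
    print("rejected a set")
```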

By the way, you can make just as many errors in Smalltalk as in C, it's
just that C will tell you about more of them at compile-time.
 
>> 3. A strictly typed compiler does not allow you to skip a thorough
>>   test.
>
>A lot of people say that static typing is necessary because a test will not
>reach all the places in the code. If that is true, can you really have
>confidence in the code? When I was writing in Pascal, a VERY strictly
>type-checked language, I was never able to have a faultless program the
>first time it ran without compilation errors. So it seems that testing is
>still necessary, even with static type checking. And if it is still
>necessary, I would prefer an environment which tells me where it went
>berserk, instead of a crash because of an overwritten pointer or
>something like that.

Static typing does allow the compiler to test some things at compile-time,
but there are many errors that can only be checked at run-time.   The problem 
with languages that do not support static typing is that *all* error-checking 
must be done at run-time.  This is the reason (I think) that languages
such as Smalltalk have superior symbolic debuggers--they're absolutely
necessary to make the language usable.  Good symbolic debuggers are also
necessary for statically-typed languages, because of the kinds of problems
that compilers cannot check.  Unfortunately, good symbolic debuggers for C, 
Pascal, and similar languages seem to be few and far between, except on PCs.

>> 4. The result of an error in a typed language is usually
>>   disastrous.
>
>It seems that errors in statically typed languages are much more disastrous
>than errors in dynamically typed languages. When I was writing in C and
>PL/M, the errors I got usually meant a machine that died on you. And then,
>after a night of debugging, you found the pointer that went astray or the
>misinterpretation of that routine. In a dynamic language the error
>shows up with a smoking gun in its hands.

Again, this is because it is absolutely necessary for a dynamically-typed
language not to fail completely.  It would be almost impossible to 
debug a Smalltalk program from a core file with an assembly-level debugger 
because of all the virtual machine mechanisms between a user program and 
the target machine.  You can usually, however, debug a statically-typed 
program from a core file because there is a much more direct mapping between 
a user program and the machine.  Because such programs are easier to debug 
with low-level debuggers, it is often the case that compiler writers put 
only a bare minimum of debugger support in the compiled images they generate.
This makes the compiled images smaller and faster, which is one of the 
benefits that such statically-typed languages usually have over 
dynamically-typed languages.  

Thus, you are trading load-image size and speed for debugging support.
This is why dynamically-typed languages are better for prototypes
while statically-typed languages are better for production code.

>> 5. A typed language forces you to write and maintain the same functionality
>>    multiple times.
>
>A great example of this is the select function in Smalltalk. If I have
>a collection, I can call it to make a subselection according to a certain
>criterion. One method works for all collections, and the collections
>can contain any kind of object. Well, let's compare that to the Booch
>classes, where he needs a float set, a float ordered collection, an
>integer sorted collection, etc. And then for each class you need to
>write that select function.

Again, this is not quite true.  If a language has INHERITANCE, you can often
define a method once and have it propagated to many different subclasses of 
object.  However, if some of the objects change the structure of their
instances, you may have to redefine the method.  Try subclassing any of
the Smalltalk collection classes and adding instance variables.  You'll
find that some of the copy methods will have to be redefined so that the
instance variable values get copied as well.
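The copy pitfall can be sketched in Python (hypothetical Bag/TaggedBag classes; the hand-rolled copy method plays the role of Smalltalk's copy):

```python
# The base class's copy knows nothing about state added in a subclass,
# so the subclass must redefine copy or silently lose that state.

class Bag:
    def __init__(self, items=()):
        self.items = list(items)

    def copy(self):
        # builds an instance of the receiver's class, but only passes
        # along the state the base class knows about
        return type(self)(self.items)

class TaggedBag(Bag):
    def __init__(self, items=(), tag=""):
        super().__init__(items)
        self.tag = tag

    def copy(self):
        # the redefinition the text warns about: without it, Bag.copy
        # would produce a TaggedBag with the default (empty) tag
        return type(self)(self.items, self.tag)
```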

Admittedly, the type construction mechanisms for most statically-typed
languages are crude, lacking support for inheritance, genericity or 
polymorphism.  But dynamically-typed languages do not solve all the
problems associated with type construction.

>> 6. A typed program makes your runtime a little faster but your
>>   development a lot longer.
>
>There is no doubt about the fact that typed languages are faster. They
>can bypass a lot of checking and indexing at runtime. And even though
>vendors claim that a message send is only 25% slower than a procedure
>call, this doesn't mean that the overall difference is only 25%. In C many
>statements are executed inline; in Smalltalk, everything is a message
>send. This results in considerable overhead. But I have so often seen
>programmers spend much more time developing code than that code will
>ever save in the field that I wonder where the priority
>should be. And because most programs usually have only a small critical
>section, I think optimising that section and writing the remainder in
>a better language is very much preferable.

Often the difference between a successful program and an unsuccessful program 
is speed of execution.  If I have to wait 6 more months for a fast version of 
a program, I'll usually do it.  There are still people who use micro-emacs,
Jove, and other versions of emacs instead of Gnu-emacs because Gnu-emacs
is slower.  It doesn't matter to them that Gnu-emacs is more powerful; for
some people the loss in performance is too much of a drawback.

>> 7. With a typed language the development cycle time is
>>    exponentially proportional to the project size; with a dynamic
>>    language it is almost independent of project size.
>
>In a statically typed language, the compiler needs to know "everything"
>about all the classes. For example, in C++ the compiler needs to
>know the size of the instance variables to create stack space, and
>for virtual binding it needs to know all the superclasses. This results
>in an explosion of include files. Because there are so many include
>files, the chance of a change in any of those files becomes greater,
>and a change forces a recompile of all dependent modules. A normal
>development cycle is normally linearly dependent on project size, but
>this close coupling of modules makes it exponential.

This is a fact of life, not a drawback of statically-typed languages.
If the programmer and the compiler figure out all the details of the 
type structure of a program before execution, then the program doesn't 
have to do it dynamically.  This means more work for the programmer but 
less for the program.  Thus development takes longer but the resulting program
will run faster.  

If the type structure of a program changes, then you still have to do 
the work to re-validate it, but who or what does this work depends on 
the language and the change control tools available.  It is just as
much hassle to change a class definition in Smalltalk as it is to change
a type definition in C, it's just that Smalltalk has better tools to 
help you find all references to the class.  Still, from what I've heard
some of the statically-typed OO languages (like Objective C and Eiffel)
also have good change-control tools, so Smalltalk may have some competition.

By the way, do you have any references to back up your claims of the
comparative development times?  If not, you shouldn't state such
things so freely because they may not actually be true.

>When I write these lines I always try to think how my grandchildren
>in 2045 will look back on these days. I think they will have
>the same feeling as we have now looking back at the first days
>of automobiles and televisions. Things which seem funny to us now
>were taken very seriously then. But I think that they will have a much
>more efficient way of telling the computer what to do than C.

I've written several thousand lines of C and several thousand lines
of Smalltalk.  Each has its place, although I don't think anyone
who has the option to program in one or the other would choose C
unless performance is a factor.   Part of this is due to the great 
Smalltalk environment (the browser, debugger, and change manager in 
particular) and part of it is due to the Smalltalk language itself.
OO languages are easier to use because of modularity, inheritance, 
large class libraries and other things.  I think the programming 
languages that people will use in the future will be ones that combine 
OO features and strong support environments with efficiency of execution.
What these languages will look like is anybody's guess, and I'll leave that 
to the various language proponents to debate.

hal.

dlw@odi.com (Dan Weinreb) (10/10/90)

In article <60700003@inmet> stt@inmet.inmet.com writes:

   In general, I would argue that if there are no
   machine-enforceable static constraints on a complex
   system, then the programmers themselves must have no
   idea whether any given line of code is right or wrong,
   except through (inherently infeasible) exhaustive testing.

While I don't disagree with this, I'd like to point out that even if
every single static constraint passes, the programmer still does not
know (is that the same as "has no idea"?) whether the code is correct,
has one subtle bug, or is riddled with bugs.  Many bugs are not
found by static tests.  So, while the checking of static constraints
probably increases your confidence in the correctness of your code by
some amount, you would still be well advised to test it -- including
the parts that are rarely used -- to see if all the array references
are within bounds, that you don't re-use freed storage, that there
aren't logical errors at a higher level, and so on.  The implied
assertion above, that a lot of testing is needed in the absence
of static checks but that testing can be dispensed with in the
presence of static checks, is an exaggeration.

dlw@odi.com (Dan Weinreb) (10/10/90)

In article <1426@media01.UUCP> pkr@media01.UUCP (Peter Kriens) writes:

   I can't stop myself from entering the discussion. I have been
   following it for quite a while now, and it seems that most
   people live in either the type camp or the dynamic camp. 

Although you've been following it, it appears that you have not
paid attention to everything that has been said.  For one thing,
you're using the terms "typed" and "typeless" to mean "statically
typed" versus "dynamically typed".  You could also say "languages
with typed variables" versus "languages with typeless variables".
But the latter are not typeless languages.

   2. A typed language gives you the freedom to make more errors, 
      of which only some are caught by the compiler.

Your single example to support this statement simply shows one
possible error of all possible errors that a programmer can make.  And
it really says something about the advantages of object-oriented
programming, not about "typeless" languages; C++ handles this
correctly.

   3. A strictly typed compiler does not allow you to skip a thorough
      test.

Yes.

   4. The result of an error in a typed language is usually
      disastrous.

Again, you seem to have one example about some computer you used
somewhere that crashed.  This is not what happens in most computers.
More important, this is a function of the language implementation,
not the language definition.

   5. A typed language forces you to write and maintain the same functionality
      multiple times.

No, only a poor statically typed language.  Better ones, like C++ as
currently defined including the parameterized type facility, handle
this properly.

   6. A typed program makes your runtime a little faster but your
      development a lot longer.

This is entirely a statement about the language implementation, not
the language definition.  It is possible to have an implementation of
a statically typed language that provides an incremental environment.
You can buy this right now from Symbolics (C, Pascal, Fortran) or
Sabre Software (C, and apparently soon C++).

   7. With a typed language the development cycle time is
      exponentially proportional to the project size; with a dynamic
      language it is almost independent of project size.

Same comment as 6.

dlw@odi.com (Dan Weinreb) (10/10/90)

In article <1990Oct9.190813.23402@ux1.cso.uiuc.edu> render@cs.uiuc.edu (Hal Render) writes:

								The problem 
   with languages that do not support static typing is that *all* error-checking 
   must be done at run-time.  This is the reason (I think) that languages
   such as Smalltalk have superior symbolic debuggers--they're absolutely
   necessary to make the language usable.  Good symbolic debuggers are also
   necessary for statically-typed languages, because of the kinds of problems
   that compilers cannot check.  

I agree with the last sentence above.  In my opinion, good symbolic
debuggers are needed just as much whether the language is statically or
dynamically typed.  I use C++ these days, but even with static type
checking, I spend a lot of time tracking down bugs; it's really not
that different from when I was using Lisp, except that the compiler
catches more of the easy bugs for me.  Unfortunately, there are plenty
of hard ones that don't manifest themselves as type inconsistencies.

I think the reason that implementations of dynamically typed languages
tend to be accompanied by superior debuggers is that it's easier to
produce a good debugger for that style of implementation.  Again, it's
the style of implementation that counts, not the language definition.
The debuggers that you get with Sabre C and Symbolics C are great.
The debuggers that you get with optimized compiled Lisp on a Sun are
not so great.  It all has to do with the way the language system is
implemented, in particular how much information is easily available at
runtime.

One particularly neat thing about debugging in an interactive
environment such as Symbolics provides is that you can give each
object "here's how to print me" and "here's how to describe my
contents" methods, which are a nice convenience.  While I can sort of
imagine how to hack this into the AT&T cfront implementation of C++
and the gdb debugger, let's say, it would be kind of kludgey and
error-prone for various reasons.  In practice, although it's not
impossible, it isn't done.
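In a language with such hooks, the idea looks roughly like this Python sketch (an `Account` class invented for illustration; `__repr__` plays the "print me" role and `describe` the "describe my contents" role):

```python
# Each object supplies its own display behaviour, so any tool that
# prints the object (REPL, debugger, log) gets a useful description.

class Account:
    def __init__(self, owner, balance):
        self.owner = owner
        self.balance = balance

    def __repr__(self):
        # the "here's how to print me" hook
        return f"Account(owner={self.owner!r}, balance={self.balance})"

    def describe(self):
        # the "here's how to describe my contents" hook
        return "\n".join(f"{k}: {v!r}" for k, v in vars(self).items())

print(Account("ann", 10))  # Account(owner='ann', balance=10)
```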

   Thus, you are trading load-image size and speed for debugging support.
   This is why dynamically-typed languages are better for prototypes
   while statically-typed langauges are better for production code.

<< Insert my usual comment about how you're really talking about the
distinction between language system implementations, rather than
language definitions. >> Try prototyping your C programs using Sabre
C, and then running them using a hairy optimizing C compiler of the
usual sort, and you'll see that it has nothing to do with the
language.

rnews@qut.edu.au (10/10/90)

In article <1990Oct5.013201.12459@cbnewsc.att.com>, lgm@cbnewsc.att.com (lawrence.g.mayka) writes:
> The occurrence of a type error, far from being a "crash and burn at
> runtime", is merely a software exception like any other, and responds
> to similar treatment by the appropriate exception handler.

Which in most cases of type errors will not be able to recover and
will propagate back up to the operating system and kill your system;
still not a terribly useful situation from the user's point of view.

> I encourage those who claim that dynamic typing is "the wrong choice"
> for large software systems to actually run experiments on large (e.g.,
> over a million lines of source) software systems, comparing dynamic
> vs. static typing - above all, for ease of modification and extension.

Seeing as you are the one to do the encouraging would you care to show
us the results of your tests?  Or are your opinions based on personal
preference, like everyone else's?

> Note that for the statically typed system in the experiment, you must
> not take advantage of any flexibility ascribable to dynamic typing
> (e.g., multiple executables tied together by Korn shell
> scripts/commands or ASCII pipes/files).  Such usage is simply an
> admission that beyond a certain size, your total program is unwritable
> and/or unusable without dynamic typing.  No, you must look instead at
> a very large, statically typed software system that runs in a single
> address space (or in multiple address spaces that communicate via
> statically typed messages).

Why is it that the evangelists in the dynamic typing crowd refuse to
believe that you can combine the advantages of both static and dynamic
typing in one language?  Look at Eiffel and Modula-3 as examples of
languages that have made an attempt at combining the best of both worlds.
My own language OOM2 will in its later incarnations also attempt to
achieve a good union of static and dynamic typing (it already does some).

Au revoir,

@~~Richard Thomas  aka. The AppleByter  --  The Misplaced Canadian~~~~~~~~~~~@
{ InterNet: R_Thomas@qut.edu.au           ACSNet:  richard@earth.qitcs.oz.au }
{ PSI:      PSI%505272223015::R_Thomas                                       }
@~~~~~School of Computing Science - Queensland University of Technology~~~~~~@

richieb@bony1.uucp (Richard Bielak) (10/10/90)

In article <1990Oct5.013201.12459@cbnewsc.att.com> lgm@cbnewsc.att.com (lawrence.g.mayka) writes:
[..lots of stuff left out..]
>
>I encourage those who claim that dynamic typing is "the wrong choice"
>for large software systems to actually run experiments on large (e.g.,
>over a million lines of source) software systems, comparing dynamic
>vs. static typing - above all, for ease of modification and extension.
>

I agree with Lawrence that static typing is not enough in any large
software system. However, we should look at static and dynamic typing
as complementary. One does not eliminate the need for the other. 

Type checking at compile time protects the programmer from a lot of
stupid mistakes. For example, passing the right number and right type
of parameters to a routine.

At runtime, dynamic checking catches errors that could not be detected
at compile time. For example, checking array bounds in PASCAL.
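The two complementary kinds of check can be sketched in Python (a hypothetical `nth` helper; the annotations are verifiable before the program runs by an external checker such as mypy, while the bounds test necessarily happens at run time):

```python
# Static side: the annotations below can be checked without running
# the program.  Dynamic side: a well-typed index can still be out of
# range, which only a run-time check can catch.

def nth(values: list, index: int):
    if not 0 <= index < len(values):
        raise IndexError(f"index {index} out of range for {len(values)} items")
    return values[index]

print(nth([10, 20, 30], 1))  # 20
```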

IMHO an "industrial strength" language should provide both static and
dynamic checking. Since it is really *impossible* to write completely
correct software, we should at least be able to detect errors when they
happen. That's how it's done in hardware. People don't publish proofs
that the memory chips are correct. Instead they provide facilities to
detect errors, in case cosmic rays flipped some bits.


...richie


-- 
+----------------------------------------------------------------------------+
|| Richie Bielak  (212)-815-3072      |  If it happens,                     ||
|| USENET:        richieb@bony.com    |      it is possible!                ||
+----------------------------------------------------------------------------+

lgm@cbnewsc.att.com (lawrence.g.mayka) (10/11/90)

In article <18261.27131385@qut.edu.au>, rnews@qut.edu.au writes:
> In article <1990Oct5.013201.12459@cbnewsc.att.com>, lgm@cbnewsc.att.com (lawrence.g.mayka) writes:
> > The occurrence of a type error, far from being a "crash and burn at
> > runtime", is merely a software exception like any other, and responds
> > to similar treatment by the appropriate exception handler.
> 
> Which in most cases of type errors will not be able to recover and
> will propagate back up to the operating system and kill your system;
> still not a terribly useful situation from the users point of view.

If a type error kills your program, either your program or your system
is in serious need of improvement.  Providing restart alternatives may
be appropriate in interactive applications; an error report and
resumption of regular processing are more sensible in an unstaffed,
continuously running application.
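A minimal Python sketch of that style of recovery (a hypothetical batch-processing loop; the type error is reported and regular processing resumes):

```python
# Treat a type error as an ordinary software exception: log the bad
# request and carry on, instead of letting it kill the whole program.

def process(requests):
    results, errors = [], []
    for req in requests:
        try:
            results.append(req + 1)          # fails for ill-typed input
        except TypeError as exc:
            errors.append((req, str(exc)))   # report, then resume
    return results, errors

print(process([1, "two", 3])[0])  # [2, 4]
```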

> > I encourage those who claim that dynamic typing is "the wrong choice"
> > for large software systems to actually run experiments on large (e.g.,
> > over a million lines of source) software systems, comparing dynamic
> > vs. static typing - above all, for ease of modification and extension.
> 
> Seeing as you are the one to do the encouraging would you care to show
> us the results of your tests?  Or are your opinions based on personal
> preference like everyone elses?

Any such results would be highly proprietary.

> Why is it that the evangelists in the dynamic typing crowd refuse to
> believe that you can combine the advantages of both static and dynamic
> typing in one language?  Look at Eiffel and Modula-3 as examples of
> languages that have made an attempt at combining the best of both worlds.
> My own language OOM2 will in its later incarnations also attempt to
> achieve a good union of static and dynamic typing (it already does some).

Sorry, for my purposes I must rule out

a) Immature languages without commercial support (presumably OOM2).

b) Languages that essentially preclude precise (i.e., not
"conservative") garbage collection (Modula-3).

Eiffel is an interesting alternative, but has yet to show sufficient
maturity (e.g., multiple competitive implementations) and breadth of
applicability (e.g., employment as the principal language of a
competitive workstation operating system).  In any case, I do not see
Eiffel as taking me far enough toward the goals of my work.


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.

stt@inmet.inmet.com (10/11/90)

dlw@odi.com writes:
> /* Written 12:12 am  Oct 10, 1990 by dlw@odi.com */
> In article <60700003@inmet> stt@inmet.inmet.com writes:
> 
>    In general, I would argue that if there are no
>    machine-enforceable static constraints on a complex
>    system, then the programmers themselves must have no
>    idea whether any given line of code is right or wrong,
>    except through (inherently infeasible) exhaustive testing.
> 
> . . . The implied
> assertion above, that a lot of testing is needed in the absence
> of static checks but that testing can be dispensed with in the
> presence of static checks, is an exagguration.

I am sorry if I allowed you to infer this assertion.  I certainly
don't believe it myself.  Instead, my point was that most
programmers *do* have statically checkable rules which allow
them to use code-reading (aka bench-checking) as an important step in 
debugging.
Generally, some of these statically checkable rules are well enough defined
that they can be communicated to a compiler.  My point was
that if there is really *nothing* statically checkable, then it
must be impossible to bench-check code, or for that matter write
it in the first place with any idea of whether it would work.

Of course, to really know if it works, it must always be tested.
But an ounce of prevention (static checking) is worth much more 
than a pound of cure (dynamic testing) for most complex systems,
since it is so hard to perform exhaustive testing.

S. Tucker Taft     stt@inmet.inmet.com   uunet!inmet!stt
Intermetrics, Inc.
Cambridge, MA  02138

lins@Apple.COM (Chuck Lins) (10/12/90)

In article <1990Oct11.004854.11732@cbnewsc.att.com> lgm@cbnewsc.att.com (lawrence.g.mayka) writes:
>b) Languages that essentially preclude precise (i.e., not
>"conservative") garbage collection (Modula-3).
Ah, seems to preclude C++ too.

>
>Eiffel is an interesting alternative, but has yet to show sufficient
>maturity (e.g., multiple competitive implementations) and breadth of
>applicability (e.g., employment as the principal language of a
>competitive workstation operating system).  

Since when is the latter a requirement for determining the suitability of
a language for any particular purpose? There are numerous
application domains where the language features necessary for writing an OS are
completely unnecessary.

>	Lawrence G. Mayka
>	AT&T Bell Laboratories
>	lgm@iexist.att.com
>


-- 
Chuck Lins               | "Is this the kind of work you'd like to do?"
Apple Computer, Inc.     | -- Front 242
20525 Mariani Avenue     | Internet:  lins@apple.com
Mail Stop 37-BD          | AppleLink: LINS@applelink.apple.com
Cupertino, CA 95014      | "Self-proclaimed Object Oberon Evangelist"
The intersection of Apple's ideas and my ideas yields the empty set.

lgm@cbnewsc.att.com (lawrence.g.mayka) (10/16/90)

In article <45571@apple.Apple.COM>, lins@Apple.COM (Chuck Lins) writes:
> In article <1990Oct11.004854.11732@cbnewsc.att.com> lgm@cbnewsc.att.com (lawrence.g.mayka) writes:
> >Eiffel is an interesting alternative, but has yet to show sufficient
> >maturity (e.g., multiple competitive implementations) and breadth of
> >applicability (e.g., employment as the principal language of a
> >competitive workstation operating system).  
> 
> Since when is the latter a requirement for determining the suitability of
> a language for any particular purpose? There are numerous
> application domains where the language features necessary for writing an OS are
> completely unnecessary.

I was simply giving an example of a large, complex, fairly
general-purpose, commercially applicable systems program in which
performance is typically important.  Has anyone used Eiffel in such a
program?


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.

timm@runxtsa.runx.oz.au (Tim Menzies) (10/17/90)

There was a contractor working with me once and I asked his opinion about
strongly typed languages. This guy had been around, seen a few disasters.
I said I was worried about using Smalltalk since it didn't have strong typing,
that bugs could lurk in the system till after delivery, etc. etc. I said that
for big projects, I'd prefer something typed.

He exploded. To say he disagreed would be an understatement. In his view,
the thing that killed big projects had nothing to do with typing. I never
actually got a clear view of what he thought actually did kill big
projects, but in his experience, typing/ non-typing just wasn't a 
determining factor in the long term viability of a software package.

So I kept using Smalltalk. No problems yet. Currently, I have a name for
being able to deliver solutions faster than others. Of course, this
reputation only lasts till it stops, but right now I'd recommend Smalltalk/
Actor for delivering solutions to business problems in short periods
of time.

Just thought I'd type that in, strongly.

--
 _--_|\  Tim Menzies (timm@runxtsa.oz)         #include usual.disclaimer
/      \ HiSoft Expert Systems Group,          -----------------------------
\_.--._/ 2-6 Orion Rd Lane Cove, NSW, 2066     "If it's only just ok, then it
      v  02 9297729(voice),61 2 4280200(fax)    probably isn't." 

render@cs.uiuc.edu (Hal Render) (10/20/90)

In article <2444@runxtsa.runx.oz.au> timm@runxtsa.runx.oz.au (Tim Menzies) writes:
    [an article about how Smalltalk's lack of strong-typing hasn't been
	a detriment to his work]

Assuming that any software you write is thoroughly tested, most 
type errors that would be caught at compilation can be caught during
testing.  I think that this is the reason that strong typing may
not be absolutely necessary.  However, any error that can be caught
by the compiler is one less that a human has to catch, and so I 
personally like having type-checking as part of a programming language.
In languages that don't have strongly-typed variables, I often
do the checking "by-hand" anyway.  

For example, if I write a method 'foo: bar' in Smalltalk and expect bar 
to be an instance of SomeClass, then the first thing that I'll do in
the method is send 'isKindOf: SomeClass' to bar and raise an error
if it returns false.  Although I would probably catch such an error 
during testing without this statement, checking the class explicitly 
means fewer potential errors whose cause I have to determine 
during runtime.  Now, if I had never had a problem with sending a 
message to the wrong kind of object I would say that checking an object's
class was not useful.  But, my experience has shown me that such errors 
are not unknown particularly when working with closely related objects.
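The kind of hand-rolled check Hal describes translates directly into any
dynamic language.  A minimal sketch in Python (chosen just so the example is
runnable; 'SomeClass' and 'greet' are invented names standing in for his
'foo: bar' example):

```python
class SomeClass:
    def greet(self):
        return "hello"

def foo(bar):
    # Analogous to sending 'isKindOf: SomeClass' to bar at the top of
    # a Smalltalk method and raising an error if it answers false.
    if not isinstance(bar, SomeClass):
        raise TypeError("foo: expected SomeClass, got %s"
                        % type(bar).__name__)
    return bar.greet()
```

Passing the wrong kind of object now fails immediately at the call site,
instead of somewhere deeper inside the method.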

Another potential benefit of strong typing is in allowing
compilers to produce more efficient code.  Although I haven't 
read much on the issue, I have heard that one of the reasons that
Ralph Johnson's Typed Smalltalk compiler may turn out to be quite useful 
is that the type checking may allow the compiler to optimize method 
look-up.  Anything that can improve the speed of Smalltalk code will 
help it to gain wider acceptance among project managers which in 
turn would mean that more programmers could use Smalltalk instead of 
nastier, grungier languages (pick one).  This would be A Good Thing.

hal.

craig@Neon.Stanford.EDU (Craig D. Chambers) (10/20/90)

In article <1990Oct19.180646.8649@ux1.cso.uiuc.edu> render@cs.uiuc.edu (Hal Render) writes:
>In article <2444@runxtsa.runx.oz.au> timm@runxtsa.runx.oz.au (Tim Menzies) writes:
>    [an article about how Smalltalk's lack of strong-typing hasn't been
>	a detriment to his work]
>
>Assuming that any software you write is thoroughly tested, most 
>type errors that would be caught at compilation can be caught during
>testing.  I think that this is the reason that strong typing may
>not be absolutely necessary.  However, any error that can be caught
>by the compiler is one less that a human has to catch, and so I 
>personally like having type-checking as part of a programming language.

You're assuming that you can add static type checking to a language
with little cost (just a few extra declarations).  This is not true in
existing statically-typed languages.  A statically-checkable type
system imposes restrictions on the kinds of programs that you can
write.  If the static type system isn't smart enough to guarantee that
you aren't possibly going to do anything wrong, then it will complain
and prevent you from running your program.  Static type systems and
checkers make conservative approximations about programs, and so lose
information that an "ideal type checker" would have used to show that
the program type checks.  In many situations, the approximations
aren't far off of reality, but in others the approximations lose too
much information.

Consider the perform: primitives in Smalltalk.  These are very hard to
type-check without doing analysis of the possible *values* of the
run-time selector.  No static type system I know of can statically
type check a perform: primitive in any but the most trivial cases.
The user interface in Smalltalk uses perform: all the time to execute
arbitrary user actions when a menu entry is selected. Become: is
another hard-to-type-check primitive in Smalltalk.
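For readers who haven't used perform:, the difficulty is that the selector is
an ordinary run-time value.  A rough Python analogue (an illustration of the
idea, not anything from the thread) dispatches through getattr with a string
computed at run time:

```python
class MenuHandler:
    def open_file(self):
        return "opening"

    def save_file(self):
        return "saving"

def perform(receiver, selector):
    # The selector is just a string chosen at run time (e.g. from a
    # menu table), so no static checker can verify, in general, that
    # it names a method the receiver actually understands.
    return getattr(receiver, selector)()
```

A mistyped selector here fails only when that particular menu entry is
actually exercised, which is exactly the failure mode static checking
cannot rule out.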

Another case in Smalltalk is the implementation of growable arrays (I
think; some collection in Smalltalk has this problem).  The
representation of a growable array is a vector of indexable fields, a
lower bound, and an upper bound (or maybe a length, I don't remember).
The elements of the growable array are in the fields indexed between
the lower bound and the upper bound; the contents of the extra padding
fields are nil.  This poses problems for a static type checker, since
the best static type systems that I've seen will treat the type of the
indexable fields as "T | Nil" (where "T" is the parameterized type of
the elements of the growable array).  So fetches out of the indexed
fields return objects that the type checker thinks are of type "T |
Nil", while the programmer knows that they really should be just "T".
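The representation Craig describes can be sketched as follows (Python used
for concreteness; the class is hypothetical).  The padding slots hold
nil/None, so a checker sees the slot type as "T | Nil" even though reads
inside the bounds always yield a T:

```python
class GrowableArray:
    def __init__(self, capacity=4):
        # Padding slots are None, so a static checker would infer the
        # slot type of the whole vector as "T | None".
        self._slots = [None] * capacity
        self._size = 0

    def append(self, item):
        if self._size == len(self._slots):
            self._slots.extend([None] * len(self._slots))  # grow
        self._slots[self._size] = item
        self._size += 1

    def at(self, i):
        if not 0 <= i < self._size:
            raise IndexError(i)
        # The programmer knows an in-bounds slot is never None, but
        # the checker only knows the conservative slot type.
        return self._slots[i]
```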

>Another potential benefit of strong typing is in allowing
>compilers to produce more efficient code.  Although I haven't 
>read much on the issue, I have heard that one of the reasons that
>Ralph Johnson's Typed Smalltalk compiler may turn out to be quite useful 
>is that the type checking may allow the compiler to optimize method 
>look-up.  Anything that can improve the speed of Smalltalk code will 
>help it to gain wider acceptance among project managers which in 
>turn would mean that more programmers could use Smalltalk instead of 
>nastier, grungier languages (pick one).  This would be A Good Thing.

Typed Smalltalk's type system can't handle the above cases any better
than any other type system.  In fact, Typed Smalltalk includes a type
cast operation to force "T | Nil" to "T" where the programmer tells
the compiler he knows what he's doing; I don't know how Typed
Smalltalk type-checks perform:'s and become:'s.

And I disagree that type declarations are required to get good speed.
For one thing, good type systems specify *interface*, not
*representation*, to maximize generality and reusability.  But
optimizations like compile-time message lookup require representation
information.  And techniques exist to infer the representation of
objects from code with no type declarations, like customization, type
prediction, type analysis, and splitting.  These techniques are used
in the compiler for Self, a dynamically-typed o-o language much like
Smalltalk, except that no cheating is allowed for common messages like
ifTrue, whileTrue, and +, and instance, class, and global variables
are all accessed using message passing.  The performance of Self is
around 50% to 75% of optimized C for small benchmarks translated from
C, even without type declarations, so the techniques for inferring the
representation of objects must work pretty well, at least for integers
and arrays (the data types manipulated in the benchmarks).  The
performance of Typed Smalltalk is about the same as Self for some very
small benchmarks like sumTo: (around 2.5 times slower than optimized
C), even though Typed Smalltalk has declared all the variables to be
SmallIntegers and generic arithmetic support has been disabled (i.e.
no overflow checks or type tests for arithmetic).

I agree wholeheartedly that anything that convinces programmers to use
nicer languages like Smalltalk (or, even better, Self) would be a
Great Thing, and better run-time performance is certainly one
important factor.  But I believe that static type declarations are not
that important for good run-time performance, especially if the type
declarations specify interface rather than representation; good
compiler techniques are much more important, and fortunately they now
exist.

-- Craig Chambers

dsa@dlogics.COM (David Angulo) (10/22/90)

In article <1990Oct19.220747.5536@Neon.Stanford.EDU>, craig@Neon.Stanford.EDU (Craig D. Chambers) writes:
| In article <1990Oct19.180646.8649@ux1.cso.uiuc.edu> render@cs.uiuc.edu (Hal Render) writes:
| 
| Consider the perform: primitives in Smalltalk.  These are very hard to
| type-check without doing analysis of the possible *values* of the
| run-time selector.  No static type system I know of can statically
| type check a perform: primitive in any but the most trivial cases.
| The user interface in Smalltalk uses perform: all the time to execute
| arbitrary user actions when a menu entry is selected. Become: is
| another hard-to-type-check primitive in Smalltalk.

The perform: is, in fact, one case where checking is NEEDED.  I have been
using c++ for nearly two years now and have just (the last two months) started
using Smalltalk.  I had a bug using the perform: message that took me two
days to find because the compiler wasn't smart enough to catch it for me.
(The cause: I had capitalized the symbol that was being used in the perform:
message.  The result: the compiler did not complain, the run time program
did not complain, but the application did not work - highly unsatisfactory).

| 
| Another case in Smalltalk is the implementation of growable arrays (I
| think; some collection in Smalltalk has this problem).  The
| representation of a growable array is a vector of indexable fields, a
| lower bound, and an upper bound (or maybe a length, I don't remember).
| The elements of the growable array are in the fields indexed between
| the lower bound and the upper bound; the contents of the extra padding
| fields are nil.  This poses problems for a static type checker, since
| the best static type systems that I've seen will treat the type of the
| indexable fields as "T | Nil" (where "T" is the parameterized type of
| the elements of the growable array).  So fetches out of the indexed
| fields return objects that the type checker thinks are of type "T |
| Nil", while the programmer knows that they really should be just "T".
| 

Well, I have made growable arrays in c++ for a long time with satisfactory
results, so your objections must be implementation dependent and not due
to the methodology of static type checking in itself.


-- 
David S. Angulo                  (312) 266-3134
Datalogics                       Internet: dsa@dlogics.com
441 W. Huron                     UUCP: ..!uunet!dlogics!dsa
Chicago, Il. 60610               FAX: (312) 266-4473

lins@Apple.COM (Chuck Lins) (10/24/90)

In article <1990Oct19.180646.8649@ux1.cso.uiuc.edu> render@cs.uiuc.edu (Hal Render) writes:
>Assuming that any software you write is thoroughly tested, most 
>type errors that would be caught at compilation can be caught during
>testing.  

First, "most" is not the same as "all". Second, you may be forgetting the
relative costs here. A compilation of a few seconds (or even minutes) costs
far far less than the hours or days it takes to thoroughly test software. We
are talking orders of magnitude here.

-- 
Chuck Lins               | "Is this the kind of work you'd like to do?"
Apple Computer, Inc.     | -- Front 242
20525 Mariani Avenue     | Internet:  lins@apple.com
Mail Stop 37-BD          | AppleLink: LINS@applelink.apple.com
Cupertino, CA 95014      | "Self-proclaimed Object Oberon Evangelist"
The intersection of Apple's ideas and my ideas yields the empty set.

bevan@cs.man.ac.uk (Stephen J Bevan) (10/25/90)

In article <45940@apple.Apple.COM> lins@Apple.COM (Chuck Lins) writes :
> A compilation of a few seconds (or even minutes) costs
> far far less than the hours or days it takes to thoroughly test software.

I'm not sure I understand the above as it seems to imply that
using strong/static typing means you don't have to test your software??

Given that you do test your software (no matter what sort of typing is
used) the question is what % of errors in the dynamic system are due
to typing problems and not to general logic errors.

You could say that anything other than 0% shows that dynamic/weak
typing is ``wrong''.  However, I see the small % of typing errors
(which will be caught during testing :-) as a small price to pay for
the flexibility of the language.  But then again, I'm only a student
in an ivory tower :-)

IMHO languages like ML (and also Haskell) go a long way towards
providing the sort of type protection I would like in a language.
However, both of these are functional languages.  I have yet to see an
equivalent in the OO world.  (Eiffel tries hard, but it's not there
yet).

This doesn't mean I wouldn't use languages like C++ or Eiffel, I just
wouldn't hold up the strong typing as that much of an advantage over
languages like Smalltalk/CLOS.  IMHO the main advantage of strong
typing is to help the compiler to generate more efficient code.
However, the work done on SELF would seem to suggest you can get
equivalent speeds without the need of typing.  (More power to their
collective elbows I say).

Just rambling while my program compiles,

Stephen J. Bevan		bevan@cs.man.ac.uk

lgm@cbnewsc.att.com (lawrence.g.mayka) (10/25/90)

In article <45940@apple.Apple.COM>, lins@Apple.COM (Chuck Lins) writes:
> In article <1990Oct19.180646.8649@ux1.cso.uiuc.edu> render@cs.uiuc.edu (Hal Render) writes:
> >Assuming that any software you write is thoroughly tested, most 
> >type errors that would be caught at compilation can be caught during
> >testing.  
> 
> First, "most" is not the same as "all". Second, you may be forgetting the
> relative costs here. A compilation of a few seconds (or even minutes) costs
> far far less than the hours or days it takes to thoroughly test software. We
> are talking orders of magnitude here.

First, your cited costs are system-dependent.  For some software
systems recompilation takes hours or days, but incremental testing
takes seconds or minutes.

More importantly, though, the vast majority of type "mismatches"
detected by typical compile-time typing are not errors at all, but
rather artificial limitations on algorithmic capability.  In such
cases, a dynamically typed system is not "letting incorrect types
pass" but rather dealing with arbitrary objects correctly and
elegantly, without artificial type restrictions.  Instead, any object
which offers the operations required in a particular circumstance
(e.g., the operations actually invoked on a received argument) fulfils
the protocol.  This criterion of behavioral correctness instead of
representational correctness is even more important in continuously
evolving systems.
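Mayka's behavioral criterion is what is now usually called duck typing; a
small sketch (Python, with illustrative names only):

```python
class Duck:
    def quack(self):
        return "quack"

class Robot:          # unrelated to Duck in the class hierarchy
    def quack(self):
        return "beep"

def make_noise(thing):
    # No representational restriction: any object offering the one
    # operation actually invoked fulfils the protocol.
    return thing.quack()
```

A conventional static checker would reject make_noise on a Robot unless Duck
and Robot were forced under a common declared type, which is the "artificial
limitation" being described.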


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.

lins@Apple.COM (Chuck Lins) (10/27/90)

In article <BEVAN.90Oct25122231@orca.cs.man.ac.uk> bevan@cs.man.ac.uk (Stephen J Bevan) writes:
>In article <45940@apple.Apple.COM> lins@Apple.COM (Chuck Lins) writes :
>> A compilation of a few seconds (or even minutes) costs
>> far far less than the hours or days it takes to thoroughly test software.
>
>I'm not sure I understand the above as it seems to imply that
>using strong/static typing means you don't have to test your software??

No. This would only be true if the only errors in software were errors caught
by static typing. Such is not the case. So we must still test - regardless of
language. Testing will end up consuming 30-50% of the total development cost.
Any time the computer can help detect errors long before we reach the 'test
phase' it is a good thing (and saves lots of money). This will become more
important as software is increasingly used in life-critical systems.

-- 
Chuck Lins               | "Is this the kind of work you'd like to do?"
Apple Computer, Inc.     | -- Front 242
20525 Mariani Avenue     | Internet:  lins@apple.com
Mail Stop 37-BD          | AppleLink: LINS@applelink.apple.com
Cupertino, CA 95014      | "Self-proclaimed Object Oberon Evangelist"
The intersection of Apple's ideas and my ideas yields the empty set.

lins@Apple.COM (Chuck Lins) (10/27/90)

In article <1990Oct25.131653.13463@cbnewsc.att.com> lgm@cbnewsc.att.com (lawrence.g.mayka) writes:
>In article <45940@apple.Apple.COM>, lins@Apple.COM (Chuck Lins) writes:
>> In article <1990Oct19.180646.8649@ux1.cso.uiuc.edu> render@cs.uiuc.edu (Hal Render) writes:
>> >Assuming that any software you write is thoroughly tested, most 
>> >type errors that would be caught at compilation can be caught during
>> >testing.  
>> 
>> First, "most" is not the same as "all". Second, you may be forgetting the
>> relative costs here. A compilation of a few seconds (or even minutes) costs
>> far far less than the hours or days it takes to thoroughly test software. We
>> are talking orders of magnitude here.
>
>First, your cited costs are system-dependent.  For some software
>systems recompilation takes hours or days, but incremental testing
>takes seconds or minutes.

Monolithic systems written in C and Pascal have serious problems with
recompilation. Change a comment - recompile the system :-) Which is why I
personally never use them anymore. Separate compilation has been around for
too many years to remember. Information hiding helps here too (as in Modula-2,
Ada, etc).

Incremental testing is also system-dependent. Your example of a large system
that takes days to recompile certainly cannot be adequately tested in minutes
or seconds. I do incremental compilation and testing all the time. These
features seem to me more a function of the development environment than of
the language. But that's not part of the type discussion :-)

[accidentally deleted relevant comments about the restrictions of types]
I agree that it's better not to limit types based on internal representational
details. Rather, semantic capabilities are more important here. This
seems to me similar to the 'type conformance' idea in the Emerald system (if
it's not the same thing). But this is still the concept of a type. A 'kinder,
gentler' type, but a type nonetheless. The only problem is capturing the
correct semantics for operations. What I'm trying to say (and doing it poorly)
is that "+" for numeric quantities means something very different from "+" for
strings (concatenation). Trying to "+" a number and a string may not have any
semantic meaning even though both objects have an operation called "+". (Yes,
we can postulate meaning for this specific instance; I'm reasonably certain
that there are situations where we couldn't, or everyone would disagree.
Though I'm willing to be convinced otherwise :-)

-- 
Chuck Lins               | "Is this the kind of work you'd like to do?"
Apple Computer, Inc.     | -- Front 242
20525 Mariani Avenue     | Internet:  lins@apple.com
Mail Stop 37-BD          | AppleLink: LINS@applelink.apple.com
Cupertino, CA 95014      | "Self-proclaimed Object Oberon Evangelist"
The intersection of Apple's ideas and my ideas yields the empty set.

jimad@microsoft.UUCP (Jim ADCOCK) (11/01/90)

Those of us who can still remember programming large systems using the
untype-checked interfaces of K&R C -- versus the type-checked interfaces
of ANSI-C and C++ -- will certainly vote for the latter.

tma@osc.COM (Tim Atkins) (11/02/90)

As far as the method-lookup speed advantages possible with strong typing
go, some work I did in Objective C a couple of years ago may be of some
interest.  I attempted several improvements of the standard algorithm
and finally achieved a general lookup in the absence of strong typing
on the order of 15 assembler instructions on a Sun 3.  Adding type hints
(not strong typing but just a hint of the expected type of referenced
objects) and using code of the form:

	implementation = (runtime_type == expected_type) ? precomputed_imp :
		full_lookup(...);

cut the average lookup costs in half for the applications I instrumented.
Therefore, it seems to me that not strong typing, but weak type hints are
worthwhile for gaining improvements in this area.
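Tim's conditional is what is now usually called a monomorphic inline cache.
A sketch of the same idea in Python (structure assumed from his description;
the real Objective C code is of course much lower-level):

```python
def make_cached_send(selector):
    # One-entry cache: remember the last receiver class and its
    # precomputed implementation, falling back to a full lookup
    # only when the class changes -- Tim's "type hint" in effect.
    cache = {"cls": None, "imp": None}

    def send(receiver, *args):
        cls = type(receiver)
        if cls is not cache["cls"]:                # hint missed
            cache["cls"] = cls
            cache["imp"] = getattr(cls, selector)  # full lookup
        return cache["imp"](receiver, *args)

    return send
```

When the hint holds (the common case he instrumented), dispatch is one
comparison plus a call, which is where the factor-of-two saving comes from.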

- Tim Atkins

timm@runxtsa.runx.oz.au (Tim Menzies) (11/09/90)

In article <BEVAN.90Oct25122231@orca.cs.man.ac.uk> bevan@cs.man.ac.uk (Stephen J Bevan) writes:
>In article <45940@apple.Apple.COM> lins@Apple.COM (Chuck Lins) writes :

[a lot of stuff about type checking reducing program errors]

I've posted previously re the myth of "type checking finds all your
errors". As a case in point, right now I'm debugging a highly interactive
environment written in Smalltalk  V/286. It is interesting to note that
the sort of errors I'm getting aren't type errors. Rather, the majority of
my time is taken up with state transition errors. I've been a good
OO programmer and decentralised the control amongst all my independent
objects. Each object gets its  own window and  the  user is free to leap
around from window  to window doing whatever feels best for them at the
time. Now, my current set of errors are to do with things like "this
happens after that which updates variable X inappropriately because
that-thing-over-there hasn't happened first." 

I.e., "type" is not a problem in this application right now. "Time" is.

If someone can tell me how software can automatically check for these 
sorts of runtime errors, I'd be most interested. I suspect the rigorous 
answer is something like the Eiffel assertion mechanism. 
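The Eiffel-style answer Tim suspects can at least be sketched: encode the
legal ordering as preconditions that fail loudly at the transition, rather
than later through a corrupted variable.  (Python asserts standing in for
Eiffel 'require' clauses; the Document class is invented for illustration.)

```python
class Document:
    def __init__(self):
        self.state = "closed"

    def open(self):
        # require: the document is not already open
        assert self.state == "closed", "open: already open"
        self.state = "open"

    def edit(self):
        # require: 'open' has happened first
        assert self.state == "open", "edit: must open before editing"
        self.state = "edited"
```

This catches "that hasn't happened first" at the moment of violation, though
it still relies on the programmer writing down the temporal rule by hand.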

--
 _--_|\  Tim Menzies (timm@runxtsa.oz)        "Its amazing how much 'mature
/      \ HiSoft Expert Systems Group,          wisdom' resembles being too
\_.--._/ 2-6 Orion Rd Lane Cove, NSW, 2066     tired." - Lazarus Long
      v  02 9297729(voice),61 2 4280200(fax)             (a.k.a. Bob Heinlein)