[comp.object] Dynamic typing -- To Have and Have Not ...

pcg@test.aber.ac.uk (Piercarlo Antonio Grandi) (03/14/91)

On 12 Mar 91 08:45:22 GMT, brnstnd@kramden.acf.nyu.edu (Dan Bernstein) said:

Incidentally, I hereby announce my defeat; I have realized that too many
people are using dynamic and static typing to indicate when the type of
*variables* is known. So I will use these terms like that myself. I
propose the use of mutable/immutable typing for the case where the typing
of *values* can or cannot be changed at runtime.

brnstnd> <reference>		a section of program text that refers to
brnstnd> 			a value.

Normally this is called the scope, except in Algol 68, where there is a
distinction between range and reach. In all the texts that I know,
reference is more or less synonymous with pointer. People usually use
reference following the Algol 68 revised report or Simula 67, both of
which use reference in that sense.

The normal definition of reference is something like "a value that may
refer to another value or to no particular value".

brnstnd> Note: If a statically typed language gives you ``the set of
brnstnd> expressible types'' and ``the set of all values'' as basic
brnstnd> types, and struct as a type composition operation, then you can
brnstnd> implement runtime polymorphism in that language. [ ... ]

Of course... But after you say:

brnstnd> Given this, I fail to see how dynamic typing can be regarded as
brnstnd> more than a syntactic feature. If you're given a program that
brnstnd> uses dynamic typing, you can just convert every reference in
brnstnd> the program to refer to a (type,value) pair, and poof! you have
brnstnd> a statically typed program.

This is entirely uninteresting! Recursion is merely a "syntactic
feature" too, by the same argument, as it can always be obviated by the
use of stacks or by iteration. Yet a lot of people think that recursion
should be provided as a "syntactic feature".

What we are interested in here is *architecture*, that is, designing
boundaries between layers. In our particular case, it is the boundary
between what the language provides directly and what the programmer can
build on top of that.
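For concreteness, here is roughly what the quoted (type,value) encoding
looks like when written out by hand in C (my sketch, not an example from
the thread):

```c
#include <assert.h>

/* A (type,value) pair, hand-rolled: the runtime representation that
   a dynamically typed language would maintain for you. */
enum tag { T_INT, T_DOUBLE };

struct value {
    enum tag tag;
    union { int i; double d; } u;
};

/* Every primitive operation must dispatch on the tags at run time. */
static struct value value_add(struct value a, struct value b)
{
    struct value r;
    if (a.tag == T_INT && b.tag == T_INT) {
        r.tag = T_INT;
        r.u.i = a.u.i + b.u.i;
    } else {
        double x = (a.tag == T_INT) ? (double)a.u.i : a.u.d;
        double y = (b.tag == T_INT) ? (double)b.u.i : b.u.d;
        r.tag = T_DOUBLE;
        r.u.d = x + y;
    }
    return r;
}
```

The program is statically typed (every reference has type struct value),
yet the tags make the typing of *values* fully dynamic -- which is
exactly the "poof" in the quoted argument.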

Dynamic typing as a concept is simply necessary in a wide class of
applications; one has four choices, with respect to it and static typing:

1) Don't provide dynamic typing as a language primitive and make it
prohibitive to implement it on top of the provided language primitives.

2) Don't provide dynamic typing as a language primitive, and make it
possible to implement it more or less grossly on top of the provided
language primitives.

3) Provide dynamic typing only as a language primitive, and make it
possible to implement static typing by explicit checks.

4) Provide *both* dynamic and static typing as language primitives, with
no discontinuity between the two, as one shades into the other depending
on whether the types of a variable's values are known at compile time.

I reckon that Gudeman thinks that Bernstein advocates 1), while
Bernstein really advocates 2); Gudeman himself advocates 3), and I
advocate 4).

While I advocate 4), I think that there are good reasons to believe that
3) is actually a more tenable proposition than 2): an efficient
implementation of dynamic typing as a language primitive is not much
worse (thanks to caching and hinting) than one of static typing, or at
least the difference is not large for many complex *applications*, while
a clean implementation of dynamic typing on top of static typing
primitives is much harder to do, as the X windows example demonstrates so
clearly.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@aber.ac.uk

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (03/15/91)

In article <PCG.91Mar13185855@aberdb.test.aber.ac.uk> pcg@test.aber.ac.uk (Piercarlo Antonio Grandi) writes:
> On 12 Mar 91 08:45:22 GMT, brnstnd@kramden.acf.nyu.edu (Dan Bernstein) said:
> Incidentally, I hereby announce my defeat; I have realized that too many
> people are using dynamic and static typing to indicate when the type of
> *variables* is known.

On the contrary. Typing refers to the typing of every single reference
in the program, not just ``x'' and ``y''.

> brnstnd> <reference>		a section of program text that refers to
> brnstnd> 			a value.
> Normally this is called the scope,

No. The scope of a variable is the largest section X of program text
such that any part of X may contain a reference to that variable.

> In all texts that I know reference
> is more or less synonymous with pointer.

Yes. The program text ``x + y'', for example, refers to the value
computed as the sum of x and y. You can draw an arrow from ``x + y'' to
that value if you want.

There are languages where a variable may have a reference---a section of
program text referring to a value---as a value in and of itself.
Sometimes the program text can be made implicit when you have a
reference, as in C++.

> 2) Don't provide dynamic typing as a language primitive, and make it
> possible to implement it more or less grossly on top of the provided
> language primitives.

So why do you consider this ``gross'' when it's implemented outside the
compiler but (presumably) not gross when it's inside the compiler?

As long as the language has good syntax, you can hide all the ugliness
of (e.g.) polymorphism inside a header file or library. I advocate that
this be done for any essentially syntactic feature.
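The standard C library already illustrates the pattern: qsort keeps all
the void-pointer machinery of a polymorphic sort inside the library,
while the caller's code stays typed and clean (a sketch of the general
point, not code from the thread):

```c
#include <assert.h>
#include <stdlib.h>

/* qsort is polymorphism hidden in a library: the generic machinery
   (void pointers, element sizes) is out of sight, and the caller
   supplies only a small typed comparison function. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

static void sort_ints(int *v, size_t n)
{
    qsort(v, n, sizeof v[0], cmp_int);
}
```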

> I reckon that Gudeman thinks that Bernstein advocates 1), while
> Bernstein really advocates 2); Gudeman himself advocates 3), and I
> advocate 4).

Actually, I think Gudeman thinks he advocates (4).

> an efficient
> implementation of dynamic typing as language primitive is not much worse
> (thanks to caching and hinting) than one of static typing,

For me compile time and typechecking are both important while I'm
writing a program. I cannot afford to choose between fast compilations
with no typechecking and slow compilations with full typechecking.
(Similarly, many programs take a noticeable amount of time for each
debugging run. I cannot afford to choose between fast compile time with
slow run time and slow compile time with fast run time. My solution for
those programs is to optimize by hand, once.)

Do you optimize programs while you're testing? Probably not. But when
each test run takes a noticeable amount of time, don't you wish that you
could make them run faster without wasting so much time on optimization?

Similarly, in a dynamically typed language, would you turn on strict
typechecking and other optimizations while you're testing? Probably not.
But when you make type errors, don't you wish that you could have found
them without wasting so much time on optimization? Well, you could. All
you had to do was use static typing.

---Dan

gudeman@cs.arizona.edu (David Gudeman) (03/15/91)

In article  <PCG.91Mar13185855@aberdb.test.aber.ac.uk> Piercarlo Antonio Grandi writes:
]...
]1) Don't provide dynamic typing as a language primitive and make it
]prohibitive...
]
]2) Don't provide dynamic typing as a language primitive, and make it
]possible to implement it...
]
]3) Provide dynamic typing only...
]
]4) Provide *both* dynamic and static typing...
]
]I reckon that Gudeman thinks that Bernstein advocates 1), while
]Bernstein really advocates 2); Gudeman himself advocates 3), and I
]advocate 4).

No, Gudeman thinks that Bernstein advocates (2), Gudeman himself
advocates (4), and Gudeman is willing to accept Grandi's word that
Grandi advocates (4).

Didn't anyone read "Runtime Polymorphism... part 2"?  OK, maybe part 1
was so uninspiring that no one bothered with the sequel.  Anyway,
in that posting I described my preference for a system with optional
static typing.

Maybe my bickering with the B&D language people over the importance of
static typing has led people to believe that I am opposed to static
typing in all forms, but I am not.  I am opposed to any language
feature that restricts my options and increases my effort in the name
of security.  The language designer has no idea how much security my
program requires.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

ram+@cs.cmu.edu (Rob MacLachlan) (03/16/91)

>>From: brnstnd@kramden.acf.nyu.edu (Dan Bernstein)
>Subject: Re: Dynamic typing -- To Have and Have Not (was Runti
>Date: 14 Mar 91 22:18:56 GMT
>
>For me compile time and typechecking are both important while I'm writing a
>program. [...] Do you optimize programs while you're testing? Probably not.
>But when each test run takes a noticeable amount of time, don't you wish
>that you could make them run faster without wasting so much time on
>optimization?
>
>Similarly, in a dynamically typed language, would you turn on strict
>typechecking and other optimizations while you're testing? Probably not.

I debug and test with typechecking on in Common Lisp, and so does everyone
else I know.  When properly supported (as in CMU Common Lisp), a powerful
dynamic type system is a fairly general assertion mechanism, addressing
consistency constraints outside the scope of conventional static typing
systems.  For example, you can:
    (declare (type (integer 3 27) i))

to say that I ranges from 3 to 27. And this assertion will be checked.
The Common Lisp type system is general enough to express many interesting
consistency constraints, but simple enough so that compilers can use it to
do quite a bit of type inference.
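C has no way to put such a range into a type, so the closest analogue is
an explicit run-time assertion (a hedged sketch of the comparison, not
code from the post):

```c
#include <assert.h>

/* The closest C analogue of (declare (type (integer 3 27) i)):
   the range constraint becomes an explicit run-time check instead
   of part of the type, and the compiler learns nothing from it. */
static int in_range_3_27(int i)
{
    assert(i >= 3 && i <= 27);
    return i;
}
```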

>But when you make type errors, don't you wish that you could have found
>them without wasting so much time on optimization? 

Compilation speed is much less of a concern in environments that support
incremental compilation.  Although Lisp compilers tend to be rather slow
compared to conventional compilers, Lisp seems faster because changes
require much less recompilation.

>Well, you could. All
>you had to do was use static typing.

Do your programs ever dump core?  That's a run-time error check, just not a
very graceful one.  Most run-time type errors in Lisp systems are of the
"dereferencing off into space" sort, which can't be detected at compile
time.

  Robert A. MacLachlan (ram@cs.cmu.edu)

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (03/18/91)

In article <1991Mar16.052952.10201@cs.cmu.edu> ram+@cs.cmu.edu (Rob MacLachlan) writes:
> I debug and test with typechecking on in Common Lisp, and so does everyone
> else I know.

I can't afford to use Lisp either: I don't find its slight advantages in
expressiveness to outweigh its slowness for all but the simplest
programs. Sure, these tradeoffs between compile time, run time, compile
space, run space, programming time, maintenance time, etc. will vary by
project and programmer---but static typing appears to greatly reduce
debugging time without hurting speed or space or effort for the vast
majority of projects. Why not take what you can get for free?

In contrast, the supposed conciseness of dynamically typed languages
costs dearly in compile time, run time, and (for projects with many
debugging runs) programming time. For these disadvantages it would have
to provide a huge benefit for maintenance, yet its proponents never seem
to come up with examples showing such a benefit.

> >Well, you could. All
> >you had to do was use static typing.
> Do your programs ever dump core?  That's a run-time error check, just not a
> very graceful one.

Yes, and one which could quite often have been prevented by stronger
compile-time checks.

---Dan

ram+@cs.cmu.edu (Rob MacLachlan) (03/19/91)

>From: brnstnd@kramden.acf.nyu.edu (Dan Bernstein)
>Subject: Re: Dynamic typing -- To Have and Have Not (was Runti
>Date: 18 Mar 91 03:21:05 GMT
>
>I can't afford to use Lisp either: I don't find its slight advantages in
>expressiveness to outweigh its slowness for all but the simplest
>programs.
>

If you are writing programs that can't afford to run 50% to 100% longer than
a tense C implementation, then don't use Lisp.  The advantage of Lisp (and of
object-oriented programming) is not in efficiency, but in ease of evolving
solutions to poorly understood problems.

I agree that compile time type checking is a good thing -- the Python compiler
that I wrote for CMU Common Lisp does compile-time type checking wherever
possible.  If you want to, you can write statically type-checked programs in
Common Lisp.  This will get a compile-time type warning in CMU CL:
    (defvar *var* '(a b c))	
    (declaim (list *var*))
    (defun foo ()
      (+ *var* 13))

As I see it, the main difference between CL and a language such as C is that
CL knows that it doesn't always know the types of objects, whereas C pretends
that it does.  I think that dynamic typing is especially valuable in an
object-oriented programming system, since OO programs intensively manipulate
references to mutable objects.  It is very difficult to do static type
checking in such an environment.

Determining the power of the type system is a language design decision:
 -- The more of the language semantics you bring into the type system, the
    more complex inferences you can do at compile time.
 -- It is impossible to bring all the language semantics into the type system,
    since the only way to really find out what a program is going to do is to
    run it (the halting problem, etc.)
This means that:
 -- More powerful type systems offer more opportunities for optimization, but
    are also harder for compilers to understand.
 -- In any language, some programming errors can only be detected at run time.

> [...] static typing appears to greatly reduce
>debugging time without hurting speed or space or effort for the vast
>majority of projects.

Well, you say it's so, and I say it ain't...

Static type checking detects superficial errors; errors that would be detected
if you tested that branch in the code just once.  Such bugs may be common, but
fixing them is quite easy in any language.  Here is a concrete example of a
programming problem that exemplifies what programmers in any OO language spend
most of their *time* debugging:
    (do ((current foo (foo-next current)))
	((eq (foo-a current) 'yow!) current))

We search down a linked list of FOOs for a FOO whose A is YOW!.  If for some
reason, it isn't there, then we fly off the end of the list (and get a run-time
error.)  This is a nasty bug, because the problem isn't determining *that* we
flew off the end of the list, the problem is determining *why* YOW! wasn't in
the list.  And the only relevant tool that current languages offer is
run-time assertions:
    (assert (find-in #'foo-next 'yow! foo :key #'foo-a))

Of course, you can do run-time consistency checks in any language.  The point
is that for finding the hard bugs, that is what you end up doing in any
language.
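The same bug and the same remedy translate directly into C (the names
here are hypothetical, mirroring the Lisp example):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct foo {
    const char *a;
    struct foo *next;
};

/* The search from the example: walk the list until A is "yow!".
   If "yow!" is absent, cur eventually becomes NULL and the next
   strcmp dereferences it -- the C version of flying off the end. */
static struct foo *find_yow(struct foo *list)
{
    struct foo *cur = list;
    while (strcmp(cur->a, "yow!") != 0)
        cur = cur->next;                /* no NULL check: the bug */
    return cur;
}

/* The run-time assertion: verify the precondition separately. */
static int list_contains(const struct foo *list, const char *key)
{
    for (const struct foo *p = list; p != NULL; p = p->next)
        if (strcmp(p->a, key) == 0)
            return 1;
    return 0;
}
```

As in the Lisp version, asserting list_contains(foo, "yow!") before the
search does not tell you *why* YOW! was missing; it only turns the crash
into a graceful check.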

>> Do your programs ever dump core?  That's a run-time error check, just not a
>> very graceful one.
>
>Yes, and one which could quite often have been prevented by stronger
>compile-time checks.

I do see some potential in powerful type-inferencing systems such as in ML,
but even in these languages, run-time assertions are important.

  Rob MacLachlan (ram@cs.cmu.edu)

kers@hplb.hpl.hp.com (Chris Dollin) (03/19/91)

Dan Bernstein writes:

   In contrast, the supposed conciseness of dynamically typed languages
   costs dearly in compile time, run time, and (for projects with many
   debugging runs) programming time. For these disadvantages it would have
   to provide a huge benefit for maintenance, yet its proponents never seem
   to come up with examples showing such a benefit.

Perhaps Dan would like to explain why dynamically typed languages "cost dearly
in compile-time", since the compiler is performing fewer checks? I can think of
three possible interpretations:

(a) The compiler is slower, because it has to generate extra code for the
necessary run-time checks.

Planting a few extra procedure calls is unlikely to take as much time as (say)
doing ML-style type unification.

(b) The compiler is faster, *but* it will be called on to compile entities
many more times as their trivial type errors are detected.

Removing type errors doesn't take that long (by observation). Also, for entities
that are complex enough that they take several passes before their type errors
are removed dynamically, they'll probably take several passes through the
static checking compiler - trashing any presumptive speed advantage.

(c) The compiler is written in the language it compiles, hence is dynamically
typed, hence is slow.

Begs the question. Also the compiler (being a relatively fixed application) may
have had optimisations applied to it - such as optional type declarations - to
make it go faster. It may have had optimisations applied to it that are only
possible by virtue of being dynamically typed.

Dan being Dan, he probably has some alternative (i) in mind; perhaps he would
enlighten us.
--

Regards, Kers.      | "You're better off  not dreaming of  the things to come;
Caravan:            | Dreams  are always ending  far too soon."

oz@yunexus.yorku.ca (Ozan Yigit) (03/19/91)

In article <see ref> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:

>I can't afford to use Lisp either: I don't find its slight advantages in
>expressiveness to outweigh its slowness for all but the simplest
>programs.

Lisp or Scheme's advantages in expressiveness are not slight at all, but
still, it is amusing to see you acknowledge the expressiveness of those
languages you do not know much about _exceed_ those languages that you do
know something about.

Also, do you really understand what "expressiveness" means?

>In contrast, the supposed conciseness of dynamically typed languages
>costs dearly in compile time, run time, and (for projects with many
>debugging runs) programming time.

Dan, you have no idea what you are talking about.

oz

tmb@ai.mit.edu (Thomas M. Breuel) (03/20/91)

In article <22032@yunexus.YorkU.CA>, oz@yunexus.yorku.ca (Ozan Yigit) writes:
|> >In contrast, the supposed conciseness of dynamically typed languages
|> >costs dearly in compile time, run time, and (for projects with many
|> >debugging runs) programming time.
|> 
|> Dan, you have no idea what you are talking about.

I wouldn't be quite so harsh. Static type checking is very good
at eliminating a large fraction of those mistakes that people
commonly make. As a side-benefit, simpler compilers are able to
generate better code if type information is available at compile
time.

To me, polymorphic statically typed programming languages like
ML are currently the best compromise between flexibility, compile-time
checking, and efficiency.

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (03/20/91)

In article <22032@yunexus.YorkU.CA> oz@yunexus.yorku.ca (Ozan Yigit) writes:
> Lisp or Scheme's advantages in expressiveness are not slight at all, but
> still, it is amusing to see you acknowledge the expressiveness of those
> languages you do not know much about _exceed_ those languages that you do
> know something about.

Indeed, and I'm glad to hear that you've stopped beating your children.

Of the languages that I've used much, I find Forth the most expressive.
Lisp and C come in way below, and several other languages (which,
naturally, I've given up on) compete with Pascal and Fortran for the
bottom of the bucket. I don't know enough about Scheme to judge it, but
by all accounts it is more expressive than Fortran or C. Yet I continue
to use C (and, when necessary, Fortran) much more than any other. Why?
Because Forth is not portable, Lisp on anything but a Symbolics is so
slow that my test runs often take ten times as long, and Ada compilers
are snails.

Expressiveness is nice. Anything syntactic is nice. But I don't need
niceties. I need portability. I need fast compile times and run times so
that the machine's turnaround time during testing and debugging doesn't
become a significant part of my turnaround time. I need language power:
full access to what the machine can do. Lisp doesn't have any of this.

> Also, do you really understand what "expressiveness" mean? 

Yes, I think so. Do you?

> >In contrast, the supposed conciseness of dynamically typed languages
> >costs dearly in compile time, run time, and (for projects with many
> >debugging runs) programming time.
> Dan, you have no idea what you are talking about.

Actually, I have a reasonably good idea of what I'm talking about. My
comments on dynamically typed languages are based not only on my
experience but also on many objective and subjective articles by both
detractors from and proponents of such languages. As a matter of fact,
if you want to buck the establishment, it's your problem to prove that
dynamically typed languages aren't as inefficient as most experiments
have found them to be.

Would you write a compressor in a dynamically typed language?

---Dan

ram+@cs.cmu.edu (Rob MacLachlan) (03/20/91)

Subject: Re: blip [Re: Dynamic typing -- To Have and Have Not ...]
Date: 19 Mar 91 23:59:35 GMT

>Of the languages that I've used much, I find Forth the most expressive.  Lisp
>and C come in way below [...]  Yet I continue to use C (and, when necessary,
>Fortran) much more than any other. Why?  Because Forth is not portable, Lisp
>on anything but a Symbolics is so slow that my test runs often take ten times
>as long, and Ada compilers are snails.

Are you doing number crunching?  If so, non-Symbolics Lisp products perform
badly.  But for other problems, Lispms have been surpassed in speed by Lisps
running on conventional workstations (MIPS, SPARC, etc.)  And CMU Common Lisp
provides better-than-Lispm safety and debuggability with better-than-Lispm
speed (not to mention cost-effectiveness.)  CMU CL also offers good number
crunching performance in real programs (5x Allegro.)

>I need portability. I need fast compile times and run times so
>that the machine's turnaround time during testing and debugging doesn't
>become a significant part of my turnaround time. I need language power:
>full access to what the machine can do. Lisp doesn't have any of this.

I think that you overstate the case.  Common Lisp is at least as portable as
C, and Lisp systems offer unsurpassed compile-debug turnaround times (through
incremental compilation.)  "Full access to what the machine can do" is
somewhat more nebulous.  If you mean writing device drivers, then Lisp is not
for you.  But if you just mean that you want to use all the machine's
primitive datatypes with reasonable efficiency, then Lisp *can* do this
(although not all implementations provide as consistent support as CMU CL.)

Common Lisp certainly wins big compared to, say Pascal, in that it has about
all the operations that hardware implements (boolean arithmetic, bit-vectors,
decode-float, all the flavors of division, etc.)

>[...] if you want to buck the establishment, it's your problem to prove that
>dynamically typed languages aren't as inefficient as most experiments have
>found them to be.

Some statements I am willing to defend:
 -- For some uses, efficiency isn't everything (research, prototyping,
    development.) 
 -- For the things that C, Pascal, Modula, C++, etc. are used for, it is
    almost always possible to get a Lisp program to come within a factor of
    two of the performance of these more conventional languages.
    (I am confining myself to general or "systems" programming, as opposed to
    data processing and scientific computing.)
 -- It can be very, very hard to get this good performance, but many of
    the performance tar-pits can be reliably negotiated when there is good
    compiler feedback.

>Would you write a compressor in a dynamically typed language?

If you mean something like the Unix system program "compress", then I
wouldn't.  But if your goal was to compress a complex input (like natural
language) down to the smallest amount of space, with run-time relatively
unimportant, then Lisp would be a good candidate.

  Rob MacLachlan (ram@cs.cmu.edu)

cs450a03@uc780.umd.edu (03/20/91)

Chris Dollin writes:
>Perhaps Dan would like to explain why dynamically typed languages
>"cost dearly in compile-time", since the compiler is performing fewer
>checks? I can think of three possible interpretations: ...

You missed one:  That the compiler was thrown together in a "short"
period of time, etc.   

And another:  Because the compiler is intended to optimize the hell
out of critical sections of code, it spends quite a bit of CPU on
optimizing.

Another one:  That there is a poor match between language operations
and machine architecture.  Part of the fix for this is coding style,
but it still feeds the other problems (makes them nice, healthy,
well-fed problems).

C'est la vie.

Raul Rockwell

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (03/20/91)

In article <KERS.91Mar19082354@cdollin.hpl.hp.com> kers@hplb.hpl.hp.com (Chris Dollin) writes:
> Perhaps Dan would like to explain why dynamically typed languages "cost dearly
> in compile-time", since the compiler is performing fewer checks?

You're right, I should have said ``costs dearly in either compile time
or run time, and in either case (for projects with many debugging runs)
programming time.'' My point is that I can't afford to make that choice
except for projects where the compile time or run time is going to be
very small no matter what language I use. That's rarely the case.

---Dan

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (03/20/91)

In article <1991Mar20.041716.4486@cs.cmu.edu> ram+@cs.cmu.edu (Rob MacLachlan) writes:
> I think that you overstate the case.  Common Lisp is at least as portable as
> C,

What world do you live in? About two-thirds of the UNIX systems I use
can't even compile any of the available Lisps. In any case, portability
is defined by what's out there, not what could be out there; and about
95% of the machines I use do *not* support Lisp even if they *could*.

> and Lisp systems offer unsurpassed compile-debug turnaround times (through
> incremental compilation.)

``Unsurpassed'' is exaggeration, and even if compile times were instant
I'd spend forever just waiting for most programs to run.

> But if you just mean that you want to use all the machine's
> primitive datatypes with reasonable efficiency,

No. A machine is much more than its ``primitive datatypes.'' But Lisp
doesn't even provide full access to pointers.

> >[...] if you want to buck the establishment, it's your problem to prove that
> >dynamically typed languages aren't as inefficient as most experiments have
> >found them to be.
>  -- For some uses, efficiency isn't everything (research, prototyping,
>     development.) 

In fact, I've been focusing on the prototyping and development stage of
a program, because that's when it's most important to get good compile
times *and* run times.

---Dan

quale@picard.cs.wisc.edu (Douglas E. Quale) (03/20/91)

In article <11820:Mar1923:59:3591@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>
>Actually, I have a reasonably good idea of what I'm talking about. My
>comments on dynamically typed languages are based not only on my
>experience but also on many objective and subjective articles by both
>detractors from and proponents of such languages. As a matter of fact,

And Dan also claims that debugging and program development is faster in a
statically typed language than in a dynamically typed language.  Since you
claim your beliefs are based on your vast knowledge of the applicable
literature, please give us a reference supporting this dubious claim.

In another article Dan claims that Gnu Emacs is mostly written in C,
"with a small amount of helper code."  This is absolutely false.

-- Doug Quale
quale@khan.cs.wisc.edu

oz@yunexus.yorku.ca (Ozan Yigit) (03/21/91)

In article <see ref> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:

>Indeed, and I'm glad to hear that you've stopped beating your children.

I have no children, and likewise the suggested fallacy does not exist.
My observation is based on your past and present statements indicating
significant lack of insight into other programming languages you like 
to _talk_ about, e.g.:

| No. A machine is much more than its ``primitive datatypes.'' But Lisp
| doesn't even provide full access to pointers.

Imagine that! ;-)

>Of the languages that I've used much, I find Forth the most expressive.

Good for you. It must be all the nice syntax.

>			     I need fast compile times and run times so
>that the machine's turnaround time during testing and debugging doesn't
>become a significant part of my turnaround time.

Use the right compiler [and the right machine] for fast compile times,
right language for prototyping, and right environment for testing and
debugging. [yawn]

> I need language power: full access to what the machine can do.

You seem to be confusing language "power" with language "level".

> Lisp doesn't have any of this.

Is this "have" the same as your previously re-defined "have" to mean
something in relation to general computability, or is this something
more meaningful and useful?

>Actually, I have a reasonably good idea of what I'm talking about. My
>comments on dynamically typed languages are based not only on my
>experience but also on many objective and subjective articles by both
>detractors from and proponents of such languages.

Of course. Just as interesting, the neighbourhood cabbie was telling me
about how he gave a ride to Elvis, and he had witnesses to prove it. So?

>Would you write a compressor in a dynamically typed language?

Silly question, despite its answer. 1988, compress.lisp [UN*X compress in
Common Lisp] by Paul Fuqua, done.

oz
---
In seeking the unattainable, simplicity  |  Internet: oz@nexus.yorku.ca
only gets in the way. -- Alan J. Perlis  |  Uucp: utai/utzoo!yunexus!oz

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (03/21/91)

In article <22075@yunexus.YorkU.CA> oz@yunexus.yorku.ca (Ozan Yigit) writes:
> In article <see ref> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
> | No. A machine is much more than its ``primitive datatypes.'' But Lisp
> | doesn't even provide full access to pointers.
> Imagine that! ;-)

It doesn't. I can't even find a Lisp for the Convex that takes advantage
of array indexing: in C, p[i] runs as fast as *p on that machine, but
since Lisp doesn't truly grok pointers it falls flat.

Or is it heresy in your faith to even conceive of the idea that Lisp
doesn't understand pointers as well as C?
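For reference, the C equivalence being leaned on here: the language
defines p[i] as *(p + i), so an indexed loop and a pointer-stepping loop
describe the same loads (a neutral sketch, not from the thread):

```c
#include <assert.h>

/* In C, p[i] is by definition *(p + i); on a machine with indexed
   addressing, both loops below can compile to the same one-instruction
   load, which is the Convex behavior described above. */
static int sum_indexed(const int *p, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += p[i];              /* indexed form */
    return s;
}

static int sum_stepped(const int *p, int n)
{
    int s = 0;
    for (const int *end = p + n; p != end; p++)
        s += *p;                /* pointer-stepping form */
    return s;
}
```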

> >			     I need fast compile times and run times so
> >that the machine's turnaround time during testing and debugging doesn't
> >become a significant part of my turnaround time.
> Use the right compiler [and the right machine] for fast compile times,

Oh, what a *useful* suggestion. ``Your programs don't run fast enough?
Buy a Cray.'' Believe it or not, even on the Cray, the compiler takes a
noticeable amount of time, and there do exist programs that run for
longer than a second.

> > I need language power: full access to what the machine can do.
> You seem to be confusing language "power" with language "level".

No. Ada is in some ways more powerful than C: among its primitives, for
example, are tasks. This doesn't make it higher-level or lower-level
than C; it just adds some power, by providing more access to what the
machine (in this case, OS) can do.

> > Lisp doesn't have any of this.
> Is this "have" the same as your previously re-defined "have" to mean
> something in relation to general computability, or is this something
> more meaningful and useful?

Does ``any of this'' refer to some semantic feature of a programming
language? No. Therefore my previously defined ``have'' does not apply
here. You understand overloaded operations; why don't you understand
overloaded words?

> Of course. Just as interesting, the neighbourhood cabbie was telling me
> about how he gave a ride to Elvis, and he had witnesses to prove it. So?

Hey, bud, you started it.


> >Would you write a compressor in a dynamically typed language?
> Silly question, despite its answer. 1988, compress.lisp [UN*X compress in
> Common Lisp] by Paul Fuqua, done.

You didn't answer the question. Would you write a compressor in a
dynamically typed language? I wouldn't, because each compile run takes a
noticeable amount of time, and each test run takes a noticeable amount
of time. If I used a dynamically typed language, I'd lose big on either
compile time, run time, or both. That can mean the difference between a
week and a month in project turnaround time, not to mention a slower
final program.

The dynamic-typing people claim that I'll get it all back in
maintenance, because my program will be shorter. I don't believe them:
dynamic typing wouldn't simplify my huptrie.h, for example. They claim
that I won't have as many testing and debugging runs. Sorry, but it just
isn't true: practically every bug I found and behavior I changed would
have been found and changed the same way in most languages, and I have
the project logs to prove it.

Maybe it's a silly question, but for me it exemplifies what's wrong with
dynamically typed languages.

---Dan

kers@hplb.hpl.hp.com (Chris Dollin) (03/21/91)

Dan tails with:

   Would you write a compressor in a dynamically typed language?

Odd you should ask that, but when I was trying to understand LZ compression, I
did just that.

It was fast enough. For a production version, I would have tightened up the
code, using my knowledge of the properties of the program - including types, of
course. [Actually I rewrote it in C because Pop11 wasn't - and alas, still
isn't - available on my home machine, but C is. Over the next year or so my Pop
clone should gradually start to generate native code; it will be an interesting
experiment to compare speeds then.]

--

Regards, Kers.      | "You're better off  not dreaming of  the things to come;
Caravan:            | Dreams  are always ending  far too soon."

kers@hplb.hpl.hp.com (Chris Dollin) (03/21/91)

Dan replies:

   You're right, I should have said ``costs dearly in either compile time
   or run time, and in either case (for projects with many debugging runs)
   programming time.'' My point is that I can't afford to make that choice
   except for projects where the compile time or run time is going to be
   very small no matter what language I use. That's rarely the case.

OK, but I'm still puzzled as to why you included the option of dearer
compile-time costs at all. Can you illustrate a case where a compiler for a
dynamically-typed language is *slower* than a compiler for a statically typed
language - when the compiler is written in the same language for both (ie,
we're comparing apples and apples)?

As it happens, I have been writing a program in a dynamically-checked language
recently - a typechecker [ironic, eh?] for a specification language.

I am happy to report that the number of *type* errors that occurred was
trivially small; perhaps once there was a type error that took me longer than
five minutes to track down. [Usually it's a case of "bugger, forgot to change
the call of rename_sigentry in copy_module"; static typechecking would have
caught it about thirty seconds earlier.]

I am also happy to report that the genuine logical errors could not have been
caught by anything short of a full theorem prover operating from a fully
correct abstract specification. 

[I just timed how long it took to load the complete typechecker into the
system, starting from Unix command-line prompt and including the time it took
to type "load loader.p" to the editor; 1min 6sec for compiling 8000 lines of
source to MC68K machine code, on an HP9000/350. This time includes the program
setup-time (it constructs various tables of built-in identifiers on the way,
and reads a file describing the parse-tree magic numbers). Compiling the
database component from the toolset - written in C with a small Yacc grammar -
took 8 minutes for about 12000 lines of code, not including the time it would
take to build the standard library components; same machine, of course. The
figures are open to interpretation, but they're datapoints.]

[By the way, Dan, if you find Forth "expressive", you might like Pop: it has
Forth's open stack, sort-of Lispy datatypes, and conventional syntax.]


--

Regards, Kers.      | "You're better off  not dreaming of  the things to come;
Caravan:            | Dreams  are always ending  far too soon."

lavinus@csgrad.cs.vt.edu (03/22/91)

Hello, Type warriors...

I was meaning to stay out of this, but alas...

Do people out there really think that any one language is good for all applications?
Obviously, if you want to write a UN*X device driver, you do it in C, and if you want
to do heavy number crunching, you do it in Fortran (for speed), or better yet, in
Occam or something on a multiprocessor.  There are some applications for which dynamic
typing makes programming infinitely easier, and there are some for which dynamic
typing gains you little, and thus is not worth the efficiency hit (which is often
minor - compare programs written in C and Yale's T dialect of Lisp/Scheme sometime).
Aside from all that, expressiveness is more a matter of taste than anything else - 
some people just naturally think in one paradigm or another.  The arguments are thus
rendered rather pointless - it's not as though when this argument is won, we're
going to remove from existence all languages which belong to the losing side (that's
assuming the argument *can* be won).  It all comes down to something like, "Oh yeah,
well my Scheme compiler can beat up your C compiler..." :-)

Ah well, on with the flames...

Joe
--
_______________________________________________________________
                                                   _  _  __
  Joseph W. Lavinus (lavinus@csgrad.cs.vt.edu)     | / \ |_
  Virginia Tech, Blacksburg, Virginia            __| \_/ |_

amanda@visix.com (Amanda Walker) (03/22/91)

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:

   > | Lisp doesn't even provide full access to pointers.
   > Imagine that! ;-)

   It doesn't.

That depends on the Lisp.  T, for example, not only has a very good compiler,
but provides a data type called a "locative" that acts like a pointer, but
doesn't even get confused when GCs happen.  Most modern Lisps provide just
as much machine-level access as, say, C does.

   Or is it heresy in your faith to even conceive of the idea that Lisp
   doesn't understand pointers as well as C?

In my case, it's more experience than faith.  Have you actually looked at
what's been going on in the Lisp world over the past 5-7 years?  You might
be very surprised.

   Would you write a compressor in a dynamically typed language?

Yes.  In fact, I'd *rather* write a compressor in a dynamically typed
language.

   I wouldn't, because each compile run takes a
   noticeable amount of time, and each test run takes a noticeable amount
   of time.

That has mostly to do with the particular compiler you are using, and
not much to do with any inherent properties of the language itself.
The fact that C may be easier to compile (for some value of "easier")
than Lisp is a separate claim, and one I would be happy to concede.
However, ease of compilation is not the constraining factor in
software development.  If it were, we'd all be using assembler.  Not
that C is very much of an improvement, mind you...

   Maybe it's a silly question, but for me it exemplifies what's wrong with
   dynamically typed languages.

From what you've said so far, it sounds like a description of what's wrong
with the state of much of the commercial dynamically-typed language market.  On
that I have no argument, but I think that blaming this on the fact that
a language is dynamically typed is bordering on religious belief.

--
Amanda Walker						      amanda@visix.com
Visix Software Inc.					...!uunet!visix!amanda
-- 
"Many of the truths we cling to depend greatly on our point of view."
		--Obi-Wan Kenobi in "The Empire Strikes Back"

kend@data.UUCP (Ken Dickey) (03/23/91)

{Warning: I have not been following this thread}

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>... Would you write a compressor in a
>dynamically typed language? I wouldn't, because each compile run takes a
>noticeable amount of time, and each test run takes a noticeable amount
>of time. If I used a dynamically typed language, I'd lose big on either
>compile time, run time, or both. That can mean the difference between a
>week and a month in project turnaround time, not to mention a slower
>final program.

Interesting.  I don't know what languages you typically use [Pascal,
Eiffel, C?], but find many more good environments for fast program
development in dynamically typed languages.  If you want fast code,
there are some good Scheme compilers around.

Of course, I write software systems, not just programs.  If you are
doing device drivers, and really require speed, make use of your local
assembler and the hardware caches.

-Ken Dickey			kend@data.uucp

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (03/26/91)

In article <KERS.91Mar21121931@cdollin.hpl.hp.com> kers@hplb.hpl.hp.com (Chris Dollin) writes:
> OK, but I'm still puzzled as to why you included the option of dearer
> compile-time costs at all. Can you illustrate a case where a compiler for a
> dynamically-typed language is *slower* than a compiler for a statically typed
> language - when the compiler is written in the same language for both (ie,
> we're comparing apples and apples)?

Many people have spent many years trying to optimize dynamically typed
languages (i.e., to get rid of the dynamic typing in the object code),
with some success. When a compiler *doesn't* optimize a dynamically
typed language, the final code is hellishly slow (as in most Lisps).

---Dan

rick@cua.cary.ibm.com (Rick DeNatale) (03/26/91)

In article <14160@life.ai.mit.edu> tmb@ai.mit.edu writes:

>I wouldn't be quite so harsh. Static type checking is very good
>at eliminating a large fraction of those mistakes that people
>commonly make. 

There seems to be a discrepancy between reporters on this issue that is    
language dependent.

Type errors seem to be a big source of errors in C but seem to appear rarely
in Smalltalk programs.  At least this is true in my own experience, and
others that I have talked to agree.  In Smalltalk the most common errors seem
to be object state problems.

I think that there is a reason for this, and it has to do with the purity of
object orientation extending down to the implementation/realization level.
In most 'normal' programming languages, a type error occurs when you present
a string of bits to a piece of code that expects to interpret that string of
bits in a particular way (short integer, float, char *, class foo), and
the bits aren't actually the right type of bits.

In a pure object oriented implementation, you just can't give the wrong string
of bits to a piece of code, the method dispatching sees to that.

I'm actually a little bit surprised that people put so much faith in compile
time type checking systems that they are willing to accept an implementation
that allows type errors that escape the compiler to cause wild branches and
other unsavory acts, and then demand that such stuff is required for
"industrial strength".

I haven't seen a strong typing system yet that doesn't require you to 
(hopefully carefully) circumvent it at times, or that is absolutely bullet
proof even without circumvention (array bounds checking, overflows etc.).
It takes real confidence to work in dangerous environments without safety
equipment.  Or maybe it's a less desirable quality than confidence!

Rick DeNatale

kers@hplb.hpl.hp.com (Chris Dollin) (03/26/91)

Dan responds:

   In article [one of mine] Chris Dollin writes:
   > OK, but I'm still puzzled as to why you included the option of dearer
   > compile-time costs at all. Can you illustrate a case where a compiler for a
   > dynamically-typed language is *slower* than a compiler for a statically typed
   > language - when the compiler is written in the same language for both (ie,
   > we're comparing apples and apples)?

   Many people have spent many years trying to optimize dynamically typed
   languages (i.e., to get rid of the dynamic typing in the object code),
   with some success. When a compiler *doesn't* optimize a dynamically
   typed language, the final code is hellishly slow (as in most Lisps).

Sorry, Dan, but this doesn't answer my question - can you tell us about a case
where a compiler in language L for a dynamically typed language D was slower
than a compiler in L for a statically typed language S, because of the absence
of compile-time type-testing?

As for ``the final code is hellishly slow'' when the compiler does not attempt
to optimise out dynamic typing, what sort of factor are you talking about? 2?
10? 100? [I'm prepared to pay a factor of 2, I'd be reluctant about a factor of
10, and would regard a factor of 100 as dreadful.]

As one of my earlier postings remarked, the Pop compiler (written in Pop11, a
DTL) compiles Pop code about 5 times faster than an HP-UX ANSI C compiler
(written in C, an STL). All other things are grossly unequal, of course,
because the C compiler is repeatedly re-reading header files because of
separate compilation, and because we are liberal with our use of repeated
include files; if all that accounts for a factor of 5 I'd be surprised.

I presume you mean that ``optimising out dynamic typing can be slow''. Sure.
Optimising compilers can be slow - gcc takes what seems an outlandishly long
time compiling one of my modules, because it's a single 800 line procedure.
(The Acorn C compiler is even slower on this example.) 

More will follow when I've time to gather my thoughts ...
--

Regards, Kers.      | "You're better off  not dreaming of  the things to come;
Caravan:            | Dreams  are always ending  far too soon."

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (03/27/91)

In article <KERS.91Mar26111126@cdollin.hpl.hp.com> kers@hplb.hpl.hp.com (Chris Dollin) writes:
> Sorry, Dan, but this doesn't answer my question

Look, static typing gives me both fast compile times and fast run times.
Dynamic typing used to lose big on run times; all the interesting recent
work on dynamically typed languages has been in optimization, but for
those reasonably competitive run times they lose big on compile times.
Why should I pay that price, especially during development when I need
both fast compiles and fast runs for good turnaround?

> As for ``the final code is hellishly slow'' when the compiler does not attempt
> to optimise out dynamic typing, what sort of factor are you talking about? 2?
> 10? 100?

Between 2 and 10. It usually depends on how many of the library routines
have been written in a faster language.

---Dan

jls@rutabaga.Rational.COM (Jim Showalter) (03/27/91)

>I'm actually a little bit surprised that people put so much faith in compile
>time type checking systems that they are willing to accept an implementation
>that allows type errors that escape the compiler to cause wild branches and
>other unsavory acts, and then demand that such stuff is required for
>"industrial strength".

Who are these people? I'd certainly never accept such an implementation.
I demand strong compile-time checking AND a validated compiler. But then,
that's why I prefer to work in Ada.

>I haven't seen a strong typing system yet that doesn't require you to 
>(hopefully carefully) circumvent it at times,

Agreed. But the point is that in a properly designed language you have
to go out of your way (by design) to effect such circumvention.

>or that is absolutely bullet
>proof even without circumvention (array bounds checking, overflows etc.).

Ada may not be completely bulletproof, but the examples you cite it
certainly detects and traps.
--
***** DISCLAIMER: The opinions expressed herein are my own. Duh. Like you'd
ever be able to find a company (or, for that matter, very many people) with
opinions like mine. 
              -- "When I want your opinion, I'll read it in your entrails."

px@fct.unl.pt (Joaquim Baptista [pxQuim]) (04/02/91)

In article <879@puck.mrcu> paj@mrcu (Paul Johnson) writes:

   On the other hand I am interested in the assertion that type errors
   are rare in Smalltalk development.  Does anyone have any statistics to
   back this up?  I think this discussion could probably do with an
   injection of fact, lest it degenerate into a language flame war.

I do not have any hard data, but I believe that I can give you a good
argument for it.

In a strongly typed language such as Eiffel, the programmer must
declare the types of all variables, arguments, and such. If the
programmer later changes their mind, these type declarations must be
updated everywhere, which is a tedious and error-prone process.

Having no types just means that this sort of error does not happen,
while the other kinds of error probably remain at the same level.

--
Joaquim Manuel Soares Baptista, aka px@fct.unl.pt, px@unl.uucp
Snail: CRIA, UNINOVA, FCT/UNL, 2825 Mt Caparica, Portugal

So long, and thanks for all the fish.

pallas@eng.sun.com (Joseph Pallas) (04/03/91)

In <PX.91Apr1225918@hal.fct.unl.pt> px@fct.unl.pt (Joaquim Baptista
[pxQuim]) writes:

>Having no types ...

Nope, try again.  How about, "Having no declared types ..."

>just means that this sort of error does not happen,

Nope, try again.  How about, "just means that this sort of error is
not detected at compile time, but at run time,"

>while the other kinds of error probably remain at the same level.

People who think that dynamically typed languages result in typeless
programs need to reflect on what they are doing when they write their
programs.  If your program operates on some object, you can be certain
that there is a type implied by that operation.

joe

rick@cua.cary.ibm.com (Rick DeNatale) (04/12/91)

In article <879@puck.mrcu> paj@uk.co.gec-mrc (Paul Johnson) writes:
>
>I assume that this is a sideswipe at Eiffel.  It is true that the
>current implementation has this problem (amongst others).  However ISE
>have promised that their implementation of Eiffel 3 will fix this
>problem.  I assume other vendors will be following the same policy
>(anyone from Sig Computer want to comment?)  Basically you are
>criticising the language by criticising the implementation.  This is
>not a good argument.

Au contraire mon ami, I was not swiping at Eiffel, and particularly not 
at its implementation of which I have no direct experience.  I was criticizing
the practicality of currently implementable compile time type systems, that
all seem to require various loopholes, or at least the rearrangement of 
implementation hierarchies to deal with new components added to a system.

My idea of the power of object oriented programming is to get away from
the necessity of coming up with a new consistent set of axioms whenever I
want to maintain/evolve my software, and to do this with a set of independently
developed components.  Type hierarchies tied to class hierarchies seem to
force this type of re-analysis which seems antithetical to reuse.  Reuse
means keeping as much the same as possible, whilst some things are changing.
I want to avoid reimplementation wherever possible, and the arguments of
the strong typists always seem to take the form "well, in this case
your hierarchy should look like this ..." and that just looks like too much
reimplementation to me.

>On the other hand I am interested in the assertion that type errors
>are rare in Smalltalk development.  Does anyone have any statistics to
>back this up?  I think this discussion could probably do with an
>injection of fact, lest it degenerate into a language flame war.
>
I've talked about this with a number of other experienced Smalltalk
programmers and there is a remarkable degree of consensus: at least 90%
of the "doesNotUnderstand" errors come from sending a message to nil, the
UndefinedObject.  In other words they are state errors (uninitialized
variables).  The others seem to surface right away after I do a compile, test
cycle.  In a strongly typed language this would be a series of compiles to
get the type errors out.

I've demoed a variety of Smalltalk applications written by myself and others,
and I really can't recall ever being embarrassed by a "doesNotUnderstand"
error regardless of how high level the management I've showed it to.  In
fact when I'm pressed to generate such an error intentionally to demonstrate
the nice debugger, I often have a hard time figuring out how to gen up a 
nice little error!

Rick DeNatale
Of course my opinion is my own, who else would want it?