[net.lang] strong typing and the 'magic compiler'

taylor@sdccsu3.UUCP (12/23/83)

A public response to a point Jerry made:

	You commented that it would be nice if the compiler examined the
source code and figured out what type each variable was, flashing a warning
message if it COULDN'T figure it out.  Well, that's pretty much how APL works,
and believe me, that can cause major problems!  For example, if in APL you
want to read a number as a string, no problem...then you can do 'string'
things to it (slicing, etc.).  Later on you decide that it should revert to
a number - so you merely use it as such.  THIS IS VERY CONFUSING!

	From experience, APL is very hard to understand ex post facto...even
if it's your own code!  This freedom with variables is one of the reasons.  
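	The confusion I mean looks something like this (a sketch in
Python, standing in here for APL's freedom with types; the variable
names are my own invention):

```python
# A value is read in "as a string", sliced like one, then silently
# reused as a number -- exactly the APL-style freedom described above.
age = "42"          # read a number as a string...
digits = age[0:1]   # ...do 'string' things to it (slicing)
age = int(age)      # later, decide it should revert to a number
total = age + 10    # arithmetic now works, but a reader seeing only
                    # this line cannot tell what type 'age' holds
```

	Six months later, good luck remembering which lines treat the
variable as text and which as a number.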

	Looking at it from another perspective, consider even a simple language
with relatively few constructs - Pascal - without having to explicitly type
the variables.  Two problems arise from this: 1) (the Fortran syndrome) if
you misspell a variable, the compiler will simply assume that it is a NEW
variable and create it...without telling you...  2) the code would be pretty
damn hard to understand too!  (This is from years of grading Pascal programs
here at UCSD...WITH type statements!!!!)
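	The Fortran syndrome in miniature (Python here, but any
declaration-free language behaves alike; the typo is deliberate):

```python
# Without declarations, a misspelled name on the left of an
# assignment quietly creates a brand-new variable instead of
# triggering any warning.
count = 0
for ch in "hello":
    cuont = count + 1   # typo: 'cuont' is silently created
# count is still 0 -- the loop never touched it, and nobody told us
```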

	Besides, even if you wrote a pre-processor that would create the 
appropriate declarations, it would be a semantic jungle!  Consider:

	procedure confuse ;
	begin
		for i := 1 to 10 do (* nothing *) ;
		i := 'this is a test string';
		if i > 4 then write('wow!');
		i := 2;
	end;

	How on earth can that be translated into 'proper' Pascal?
	The only thing I could envision would be:

	procedure confuse;
	var ii : integer;
	    si : string;
	begin
	    for ii := 1 to 10 do (* nothing *) ;
	    si := 'this is a test string';
	    ii := string_to_integer(si);
	    if ii > 4 then write('wow!');
	    ii := 2;
	end;

	Either way, this is pretty confusing code.  This is NOT a special 
case either...I think that this sort of thing would occur quite often if the
compiler accepted it.  I know I would like to be able to read numbers in as
strings, then later use them as if they were numerical values....BUT this
does not cloud my thinking - THIS IS A DANGEROUS THING WE DISCUSS HERE!  The
possible ramifications for programming languages are vast ... and destructive.

	The direction I would prefer the typing issue to go is toward more
general types rather than none at all:
	For example, the types 'integer' and 'real' wouldn't be distinguished,
nor would 'complex' and the other numerical types.  This way, as the
program continues, the variable could actually change type according to the
result of a given equation.  (YES, I know about the internal differences, but
so what?? This is very trivial to implement!)
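	A sketch of what I mean, in a language that already has such a
numeric tower (Python; the variable names are mine):

```python
import cmath

# The value's type follows the result of each operation
# instead of a declaration.
x = 7               # starts out integral
x = x / 2           # division yields a fraction: x is now 3.5,
                    # no remainder is thrown away
y = cmath.sqrt(-4)  # a negative square root yields a complex value
                    # (2j) instead of a runtime error
```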
	Consider the advantages - no more errors on negative square roots,
no more loss of the remainder in integer division, etcetera etcetera.  
	Output formatting could be pretty simple too - if there is a complex
subfield, display it.  If there is a fractional part, display it.  And if
the number is greater than a default display size (say 10 to the 6th) then
express it in scientific notation.  This way, something like "write(2.0/2.0);"
would produce '1' as the result, not '1.0' or '0.1E1' or whatever other
strange permutation can be generated.
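	A toy version of that display rule (the function name, the
recursion, and the exact formats are my own guesses at the idea, not
anything standard):

```python
def smart_write(z):
    # Show the complex subfield only if present, the fractional part
    # only if present, and switch to scientific notation past 10**6.
    if isinstance(z, complex):
        if z.imag != 0:
            return smart_write(z.real) + "+" + smart_write(z.imag) + "i"
        z = z.real
    if abs(z) >= 10 ** 6:
        return "%.6e" % z
    if float(z) == int(z):
        return str(int(z))
    return str(z)
```

	Under this rule smart_write(2.0/2.0) really does come out as '1',
while smart_write(0.5) stays '0.5'.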

	NOTICE that this does not only apply to Pascal!   It could be used
in all languages!

	Further, I imagine that in the future the user would merely have
to distinguish between numbers and characters...and so the two base types
would be NUMERIC and ALPHABETIC, with logical variables either as numerics
or their own subfield....
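	That two-base-type scheme could be classified like so (a purely
hypothetical sketch in Python - no real language's rule, and the
choice to fold logicals into NUMERIC is just one of the two options
above):

```python
def base_type(v):
    # The two imagined base types; logicals fold into NUMERIC here.
    if isinstance(v, (bool, int, float, complex)):
        return "NUMERIC"
    if isinstance(v, str):
        return "ALPHABETIC"
    raise TypeError("only two base types in this scheme")
```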

	Opposing AND supporting comments (Greg?) are welcome.


				Dave Taylor

				...sdccsu3!taylor  or  ...sdccs6!taylor

				at the University of California at San Diego