[comp.lang.fortran] FORTRAN 8X PRECISION

sjc@key.COM (Steve Correll) (11/09/89)

In article <MCCALPIN.89Nov6204250@masig3.ocean.fsu.edu>, mccalpin@masig3.ocean.fsu.edu (John D. McCalpin) writes:
> I have been talking with Gary Campbell of Sun Microsystems, and his
> proposal gets to the heart of the problem.  The problem with literal
> constants is that the stupid compilers insist on assigning a precision
> to them without looking at their context.  Then the context is used
> later on to decide what conversions are required to obey the
> mixed-mode arithmetic rules...
> Examples:
> 	REAL			A, B, C
> 	DOUBLEPRECISION		X, Y, Z
> 
> 	A = 2*B			! 2 is a REAL constant by implication
> 	X = Y+3.1*Z		! 3.1 is a DOUBLEPRECISION constant

1. Using context doesn't work in examples like "CALL XYZ(3.5)".

  Until you start using modules, the compiler cannot know the type of the dummy
  argument. On most machines, a real/doubleprecision mismatch gives a very
  wrong number. (An exception is the Vax, where the first half of the preferred
  doubleprecision format happens to match the bit layout of the real format.
  Thus, with help from little-endian addressing, a real/doubleprecision type
  mismatch may merely degrade precision. This is often a shock when porting VMS
  applications to other machines.)
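
  To make the mismatch concrete, here is a sketch in Python rather than
  Fortran, assuming IEEE-754 little-endian formats (not the VAX D_float
  format discussed above): the `struct` module stands in for the argument
  passing.

```python
import struct

# A REAL actual argument passed to a DOUBLEPRECISION dummy: the callee
# reads 8 bytes where only 4 were meaningfully stored.  The trailing
# 4 bytes are whatever happens to follow in memory (zeros, for this
# sketch).
real_bytes = struct.pack('<f', 3.5)                    # 4-byte REAL
as_double = struct.unpack('<d', real_bytes + b'\x00' * 4)[0]
print(as_double)        # a tiny denormal, nowhere near 3.5

# The reverse mismatch: the callee reads only the first 4 bytes of an
# 8-byte double.  On an IEEE little-endian machine those are the
# low-order bytes of the significand, so the result is garbage (0.0
# here); on the VAX, the D_float layout happened to line up with the
# REAL format instead, merely degrading precision.
double_bytes = struct.pack('<d', 3.5)
as_real = struct.unpack('<f', double_bytes[:4])[0]
print(as_real)          # 0.0
```

  Either direction, the bits do not mean what the callee thinks they mean.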

2. Another alternative mentioned earlier, the PARAMETER statement, doesn't
  solve the problem either. In the following example (please correct me if I'm
  wrong) the standard permits the compiler to convert "1e50" from text to real
  and then to doubleprecision, possibly overflowing during the intermediate
  step. When you change REAL to DOUBLEPRECISION in IMPLICIT statements, you
  still need to change "e" to "d" in the exponents in PARAMETER statements.

	implicit doubleprecision (d)
	parameter (d = 1e50)
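
  A Python sketch of the hazard (the `struct` module standing in for the
  compiler's text-to-real conversion): 1e50 exceeds the largest IEEE
  single-precision value, about 3.4e38, so the intermediate REAL step
  overflows even though the doubleprecision target could hold 1e50.

```python
import struct

# Largest IEEE single-precision value, ~3.4028e38.  Anything bigger,
# such as 1e50, cannot survive a text -> REAL intermediate step.
FLT_MAX = struct.unpack('<f', struct.pack('<I', 0x7F7FFFFF))[0]
print(FLT_MAX)

overflowed = False
try:
    struct.pack('<f', 1e50)     # the intermediate REAL conversion
except OverflowError:
    overflowed = True
print(overflowed)               # True: 1e50 does not fit in a REAL
```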

3. For many people, who simply want to change REAL to DOUBLEPRECISION globally
  because (for example) REAL on the Cray gives them the same results as
  DOUBLEPRECISION on the Vax, somehow extending the IMPLICIT statement to
  constants would work well. Unfortunately, that's not a particularly general
  or powerful approach, and therefore not attractive to a standards committee.
  The F88 approach solves the problem too, as soon as we all finish editing all
  our sources to put "_my_precision" after all the constants.:-) Unfortunately,
  it's going to be hard to find a missing "_my_precision" in a large program.

4. My experience with PL/I says that if the language designer lets the
  programmer specify the number of decimal digits of precision, the law of
  unintended consequences takes effect, and you get exactly the opposite of
  what you wanted. Very few PL/I programs say "float bin(my_precision)".  Most
  say "float bin(6)" or "float bin(5)" or whatever the programmer memorized as
  an idiom for "single precision" when s/he first learned the language. If
  several programmers work on the same source, it probably uses "6" in some
  places and "5" in others. It's actually riskier to change precision with a
  text editor than in Fortran 77, because you can be tripped up by a single "5"
  that should have been a "6".

  In this, the current F88 draft improves on the previous one. Because you must
  use an intrinsic to obtain precision in terms of digits, you are more likely
  to store the precision in a parameter and use that instead of repeating the
  number of digits (perhaps inconsistently) throughout the program.

  It's still difficult to ensure that you haven't omitted a "KIND=" after a
  REAL somewhere in a large program. The Pascal "type my_precision = real"
  facility works better, since it's easy to make sure that un-parameterized
  precision doesn't creep into your source by accident: use an editor to check
  that the word "real" appears only once. A similar facility vanished from F88,
  perhaps due to complaints about complexity.
-- 
...{sun,pyramid}!pacbell!key!sjc 				Steve Correll

mccalpin@masig3.ocean.fsu.edu (John D. McCalpin) (11/09/89)

In article <1209@key.COM> sjc@key.COM (Steve Correll) writes in reply
to my earlier posting on ways to deal with the Fortran-8X
specification of precision problem:

>1. Using context doesn't work in examples like "CALL XYZ(3.5)".

Good point....  This makes it even more important that the user can
tell the compiler what the precision of an un-suffixed literal
floating-point constant is supposed to be.

>2. Another alternative mentioned earlier, the PARAMETER statement, doesn't
> solve the problem either. In the following example (please correct me if I'm
> wrong) the standard permits the compiler to convert "1e50" from text to real
> and then to doubleprecision, possibly overflowing during the intermediate
> step. When you change REAL to DOUBLEPRECISION in IMPLICIT statements, you
> still need to change "e" to "d" in the exponents in PARAMETER statements.
>	implicit doubleprecision (d)
>	parameter (d = 1e50)

This one does not bother me too much, because the user has specified
something that the machine may not be able to give.  I would just hope
to be told about the overflow!

>3. For many people, who simply want to change REAL to DOUBLEPRECISION globally
> because (for example) REAL on the Cray gives them the same results as
> DOUBLEPRECISION on the Vax, somehow extending the IMPLICIT statement to
> constants would work well. Unfortunately, that's not a particularly general
> or powerful approach, and therefore not attractive to a standards committee.

I don't see why this approach is not "general" or "powerful", but in
any event it is one HELL of a lot more powerful than no ability to
specify at all!

> The F88 approach solves the problem too, as soon as we all finish editing all
> our sources to put "_my_precision" after all the constants.:-) Unfortunately,
> it's going to be hard to find a missing "_my_precision" in a large program.

And it is going to be difficult to read the !@#$%^ program once you've
finished making those conversions.

>4. My experience with PL/I says that if the language designer lets the
> programmer specify the number of decimal digits of precision, the law of
> unintended consequences takes effect, and you get exactly the opposite of
> what you wanted. Very few PL/I programs say "float bin(my_precision)".  Most
> say "float bin(6)" or "float bin(5)" or whatever the programmer memorized as
> an idiom for "single precision" when s/he first learned the language. 
	[....]
> In this, the current F88 draft improves on the previous one. Because you must
> use an intrinsic to obtain precision in terms of digits, you are more likely
> to store the precision in a parameter and use that instead of repeating the
> number of digits (perhaps inconsistently) throughout the program.

But you are not _required_ to use parameters.  I can virtually
guarantee that one of the most common problems that will show up with
new users (and perverse old users :-) ) is the abuse of the ability to
specify KIND's literally:

	a(1:100) = a(1:100) + 0.3_4*b(1:100)	! KIND=4 is obviously 32-bit!

>...{sun,pyramid}!pacbell!key!sjc 				Steve Correll
--
John D. McCalpin - mccalpin@masig1.ocean.fsu.edu
		   mccalpin@scri1.scri.fsu.edu
		   mccalpin@delocn.udel.edu

bill@ssd.harris.com (Bill Leonard) (11/09/89)

> I have been talking with Gary Campbell of Sun Microsystems, and his
> proposal gets to the heart of the problem.  The problem with literal
> constants is that the stupid compilers insist on assigning a precision
> to them without looking at their context.  Then the context is used
> later on to decide what conversions are required to obey the
> mixed-mode arithmetic rules.  It would solve almost all of my problems
> to simply have the constants take on the precision that they are going
> to be coerced to anyway.  The compiler has to know how to do this
> anyway, in order to do mixed-mode arithmetic, so the added complexity
> is negligible.

Subroutine calls are a glaring exception to this statement.  Constants
passed as arguments have no context to determine their precision.
Therefore, the user must supply the context.  Since this occurs quite
often, the syntax should be clear, concise, and easy to learn and use.  The
assertion that it can be ugly because it is used infrequently is plain
wrong.

Our customers have made similar suggestions to us, but their problems are
usually related to compile-time arithmetic.  If they write a constant with
12 digits in it, they would prefer the compiler not truncate it to
single-precision until it absolutely must.
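
A Python illustration of that truncation (again `struct` standing in for
the compiler's constant handling, under IEEE-754 formats): rounding a
12-digit constant to single precision keeps only about 7 of its digits.

```python
import struct

# A constant written with 12 significant digits, then rounded to
# single precision and back, as a compiler truncating early would do.
text_constant = 3.14159265358          # 12 digits, held in double here
as_single = struct.unpack('<f', struct.pack('<f', text_constant))[0]
print(text_constant)
print(as_single)     # about 3.1415927 -- digits past the 7th are lost
```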
--
Bill Leonard
Harris Computer Systems Division
2101 W. Cypress Creek Road
Fort Lauderdale, FL  33309
bill@ssd.csd.harris.com or hcx1!bill@uunet.uu.net

hirchert@uxe.cso.uiuc.edu (11/10/89)

Rich Bleikamp (bleikamp@convex.UUCP) writes:
...
>But what do you do in a more complicated expression?  For example :
>
>   REAL A
>   DOUBLE PRECISION X
>
>   X = X + A + 3.14159 - 2.71828/2.0
>
>   Some users will want all these constants to be D.P.

I haven't seen Gary Campbell's proposal, so this might not be his answer, but
it is the one I would suggest.  According to the standard, the above statement
parses as

    X = ((X + A) + 3.14159) - (2.71828/2.0)

    X is double precision and A is real, so (X + A) is double precision.

    3.14159 is a constant, so it takes on the precision of the other
    argument of the binary operator (X + A) and is double precision.

    2.71828 and 2.0 are both constants, so the assignment of precision is
    pushed one level up the parse tree.  ((X + A) + 3.14159) is double
    precision, so the constant expression (2.71828/2.0) is made double
    precision, so both 2.71828 and 2.0 are made double precision.

If the original statement had been written as

    X = X + (A + 3.14159 - 2.71828/2.0)

    all the constants would have been single precision and only the final
    parenthesized expression would have been converted to double precision.

If it had been written as

    X = X + (A + 3.14159) - 2.71828/2.0

    then 3.14159 would have been single precision, but the other two constants
    would have been double precision.

>   Should commutative operations be handled special?  Use the "nearest" neighbor?
>   Or perhaps the destination?   None of these is particularly great.

Nothing special for commutative operators.  This is the "nearest" neighbor
in parse terms.  Destination would be relevant only if the entire right
hand side were a constant expression, as in

    X = 3.14159 - 2.71828/2.0

There are a few cases where no context is available, e.g.

    CALL MYSUB(3.14159)

    In this case, 3.14159 would have default precision in order to be
    compatible with FORTRAN 77.

I don't know whether Gary would retain the notation 0.1_MYKIND for special
cases or just depend on REAL(0.1,MYKIND).  (I would lean towards the latter
because I don't think those special cases would come up often enough to
justify reserving a special syntax for them.)

One way to look at these rules is that when the kind of a constant expression
is coerced implicitly (by being combined with another operand or by being
assigned) or explicitly (by use of the REAL intrinsic), then the constant
expression is originally evaluated in the destination kind.
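
These rules can be sketched in a few lines of Python (a toy model with
invented class names, nothing from the draft): kinds propagate bottom-up,
and any constant-only subtree is left undecided until an enclosing
operator or the assignment target supplies a kind.

```python
# Toy model of the context-sensitive constant rule described above.
# Invented names throughout; 'single'/'double' stand in for kinds.

class Var:
    def __init__(self, name, kind):
        self.name, self.kind = name, kind

class Const:
    def __init__(self, text):
        self.text, self.kind = text, None   # undecided until context

class BinOp:
    def __init__(self, left, right):
        self.left, self.right, self.kind = left, right, None

def infer(node):
    """Bottom-up pass: constant-only subtrees stay undecided (None)."""
    if isinstance(node, Var):
        return node.kind
    if isinstance(node, Const):
        return None
    lk, rk = infer(node.left), infer(node.right)
    if lk is None and rk is None:
        node.kind = None                    # push decision up the tree
    else:
        node.kind = 'double' if 'double' in (lk, rk) else (lk or rk)
    return node.kind

def assign(node, context):
    """Top-down pass: pour the context kind into undecided subtrees."""
    if isinstance(node, Var):
        return
    if node.kind is None:
        node.kind = context
    if isinstance(node, BinOp):
        assign(node.left, node.kind)
        assign(node.right, node.kind)

# X = ((X + A) + 3.14159) - (2.71828/2.0), with X double and A single
x, a = Var('X', 'double'), Var('A', 'single')
c1, c2, c3 = Const('3.14159'), Const('2.71828'), Const('2.0')
top = BinOp(BinOp(BinOp(x, a), c1), BinOp(c2, c3))
infer(top)
assign(top, 'double')                       # kind of the target, X
print(c1.kind, c2.kind, c3.kind)            # all double, as described
```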

>   I'd prefer a vendor supplied compiler option to force all literals
>   (and optionally REAL variables) to be double precision.  Unfortunately,
>   this is not going to make it into the standard.  Some way to coerce
>   all constants and variables of a particular type to a different "kind"
>   without resorting to PARAMETER statements, bizarre syntax, etc. is needed,
>   but doesn't seem to fit into the language.  Perhaps a "DEFAULT_REAL_KIND"
>   which can be user specified (outside the language, via compiler options)
>   should be a capability mandated by the standard.  Although this is not
>   very good software eng., it will do the most to promote portability of
>   existing programs.

The coercion approach that Gary is suggesting _does_ fit into the language.
The problem that killed this approach when it was suggested before is that
FORTRAN 77 explicitly requires constants with the same form to have the same
value.  I believe that this requirement has no "teeth" in these cases because
there was no requirement that coercions of the same value should produce the
same value, but this argument is hard to sell to the full committee.  I hope
Gary is more successful than I was because I believe it is a much cleaner
approach than what is currently in the draft.

Kurt W. Hirchert     hirchert@ncsa.uiuc.edu
National Center for Supercomputing Applications

hirchert@uxe.cso.uiuc.edu (11/11/89)

Steve Correll (sjc@key.COM) writes:
...
>1. Using context doesn't work in examples like "CALL XYZ(3.5)".
>
>  Until you start using modules, the compiler cannot know the type of the dummy
>  argument. On most machines, a real/doubleprecision mismatch gives a very
>  wrong number. (An exception is the Vax, where the first half of the preferred
>  doubleprecision format happens to match the bit layout of the real format.
>  Thus, with help from little-endian addressing, a real/doubleprecision type
>  mismatch may merely degrade precision. This is often a shock when porting VMS
>  applications to other machines.)

One slight correction: In Fortran 8x, the necessary context information could
also come from a procedure interface block, which could have been written
explicitly in the caller or provided via an INCLUDE statement.  (I would agree,
however, that using modules would be a more convenient way of providing the
context information.)

BTW, there are a lot of big-endian computers (e.g., IBM mainframes) that have
the same property as you ascribe to the VAX above.

>2. Another alternative mentioned earlier, the PARAMETER statement, doesn't
>  solve the problem either. In the following example (please correct me if I'm
>  wrong) the standard permits the compiler to convert "1e50" from text to real
>  and then to doubleprecision, possibly overflowing during the intermediate
>  step. When you change REAL to DOUBLEPRECISION in IMPLICIT statements, you
>  still need to change "e" to "d" in the exponents in PARAMETER statements.
>
>	implicit doubleprecision (d)
>	parameter (d = 1e50)

Half right.  The trick is to use a D exponent in the first place.  If the
constant being defined is of type REAL, it will be converted.  If it is of type
DOUBLE PRECISION, you've supplied full accuracy.

>3. For many people, who simply want to change REAL to DOUBLEPRECISION globally
>  because (for example) REAL on the Cray gives them the same results as
>  DOUBLEPRECISION on the Vax, somehow extending the IMPLICIT statement to
>  constants would work well. Unfortunately, that's not a particularly general
>  or powerful approach, and therefore not attractive to a standards committee.
>  The F88 approach solves the problem too, as soon as we all finish editing all
>  our sources to put "_my_precision" after all the constants.:-) Unfortunately,
>  it's going to be hard to find a missing "_my_precision" in a large program.

I agree.  That's one reason I like Gary Campbell's approach.

>4. My experience with PL/I says that if the language designer lets the
>  programmer specify the number of decimal digits of precision, the law of
>  unintended consequences takes effect, and you get exactly the opposite of
>  what you wanted. Very few PL/I programs say "float bin(my_precision)".  Most
>  say "float bin(6)" or "float bin(5)" or whatever the programmer memorized as
>  an idiom for "single precision" when s/he first learned the language. If
>  several programmers work on the same source, it probably uses "6" in some
>  places and "5" in others. It's actually riskier to change precision with a
>  text editor than in Fortran 77, because you can be tripped up by a single "5"
>  that should have been a "6".
>
>  In this, the current F88 draft improves on the previous one. Because you must
>  use an intrinsic to obtain precision in terms of digits, you are more likely
>  to store the precision in a parameter and use that instead of repeating the
>  number of digits (perhaps inconsistently) throughout the program.
>
>  It's still difficult to ensure that you haven't omitted a "KIND=" after a
>  REAL somewhere in a large program. The Pascal "type my_precision = real"
>  facility works better, since it's easy to make sure that un-parameterized
>  precision doesn't creep into your source by accident: use an editor to check
>  that the word "real" appears only once. A similar facility vanished from F88,
>  perhaps due to complaints about complexity.

With the editors I use, it wouldn't be that hard to search for REAL followed
by a character other than "(", but not all editors allow you to search for
regular expressions.
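
For what it's worth, a regular-expression sketch of that check (Python's
re module here; the pattern is naive and ignores comments, strings, and
continuation lines):

```python
import re

source = """\
      REAL A, B
      REAL(KIND=WP) C
      DOUBLE PRECISION D
"""

# Bare REAL, i.e. REAL not immediately followed by "(".
bare_real = re.compile(r'\bREAL\b(?!\s*\()', re.IGNORECASE)
hits = [line for line in source.splitlines() if bare_real.search(line)]
print(hits)          # only the un-parameterized declaration survives
```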

Kurt W. Hirchert     hirchert@ncsa.uiuc.edu
National Center for Supercomputing Applications

sjc@key.COM (Steve Correll) (11/12/89)

 
Steve Correll (sjc@key.COM) writes:
>2. Another alternative mentioned earlier, the PARAMETER statement, doesn't
>  solve the problem either. In the following example (please correct me if I'm
>  wrong) the standard permits the compiler to convert "1e50" from text to real
>  and then to doubleprecision, possibly overflowing during the intermediate
>  step. When you change REAL to DOUBLEPRECISION in IMPLICIT statements, you
>  still need to change "e" to "d" in the exponents in PARAMETER statements.
>
>	implicit doubleprecision (d)
>	parameter (d = 1e50)

In article <50500166@uxe.cso.uiuc.edu>, hirchert@uxe.cso.uiuc.edu writes:
> Half right.  The trick is to use a D exponent in the first place.  If the
> constant being defined is of type REAL, it will converted.  If it is of type
> DOUBLE PRECISION, you've supplied full accuracy.

Sorry, I didn't make myself clear. Once you've changed the exponents from "e"
to "d", you can indeed leave them alone henceforth as you switch between
"real" and "doubleprecision". But that's irrelevant if you're faced with a
large existing program which used "e" in the first place. If it's not too
arduous and error-prone to go find all the "e"s and change them to "d"s, then
it's not too arduous and error-prone to append "_my_precision" in the fashion
of F88 (though you may debate the aesthetics thereof). I perceive that many
users feel that both approaches are too arduous and error-prone, and would
prefer an IMPLICIT facility for literal constants.
-- 
...{sun,pyramid}!pacbell!key!sjc 				Steve Correll

corbett@ernie.Berkeley.EDU (Robert Corbett) (11/12/89)

In article <1209@key.COM> sjc@key.COM (Steve Correll) writes:
>In article <MCCALPIN.89Nov6204250@masig3.ocean.fsu.edu>, mccalpin@masig3.ocean.fsu.edu (John D. McCalpin) writes:
>> I have been talking with Gary Campbell of Sun Microsystems, and his
>> proposal gets to the heart of the problem.  The problem with literal
>> constants is that the stupid compilers insist on assigning a precision
>> to them without looking at their context.  Then the context is used
>> later on to decide what conversions are required to obey the
>> mixed-mode arithmetic rules...
>> Examples:
>> 	REAL			A, B, C
>> 	DOUBLEPRECISION		X, Y, Z
>> 
>> 	A = 2*B			! 2 is a REAL constant by implication
>> 	X = Y+3.1*Z		! 3.1 is a DOUBLEPRECISION constant
>
>1. Using context doesn't work in examples like "CALL XYZ(3.5)".
>
>  Until you start using modules, the compiler cannot know the type of the dummy
>  argument.

     The example CALL XYZ(3.5) is not a problem.  Since there is no interface
block for XYZ, the type of the parameter is REAL.  Therefore, the assignment to
the dummy argument is handled as an assignment to a REAL variable.

     I have not seen Gary Campbell's proposal yet, but I suspect I shall see it
soon.

     In the December 1982 issue of SIGPLAN Notices and in the January 1983
issue of SIGNUM Newsletter, there is an article "Enhanced Arithmetic for
Fortran," which describes in detail a similar proposal.  The author describes
a scheme, originally proposed by W. Kahan, for assigning accuracies to
components of expressions in a way that produces intuitively correct results
in an efficient manner.  The scheme could easily be extended to Fortran 8x.

						Yours truly,
						Robert Paul Corbett

msf@rotary.Sun.COM (Michael Mike Fischbein) (11/13/89)

In article <1212@key.COM> sjc@key.COM (Steve Correll) writes:
>
>Sorry, I didn't make myself clear. Once you've changed the exponents from "e"
>to "d", you can indeed leave them alone henceforth as you switch between
>"real" and "doubleprecision". But that's irrelevant if you're faced with a
>large existing program which used "e" in the first place. If it's not too
>arduous and error-prone to go find all the "e"s and change them to "d"s, then
>it's not too arduous and error-prone to append "_my_precision" in the fashion
>of F88 (though you may debate the aesthetics thereof). I perceive that many
>users feel that both approaches are too arduous and error-prone, and would
>prefer an IMPLICIT facility for literal constants.

I beg to differ.  Using a regular expression search in a typical
programmer's text editor, it is easy to 'go find all the "e"s and
change them to "d"s'.  If we have available a tool such as sed(1) on
UNIX systems, the task is even simpler.

It is equally easy to use the same editor to append a "_my_precision"
string, true, but that adds 14 characters to the line for every
constant containing an exponent.  Given FORTRAN-77's attention to
exactly which column you are in (coming up on 72?), this approach
requires careful checking to ensure proper continuation for any line
which is now extended past 72, particularly if dusty decks are to be
accommodated.  Also include the possibility of extending a line past the
maximum number of allowed characters, and you have a non-trivial
problem.
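
A Python sketch of that hazard (hypothetical suffix and a deliberately
naive literal pattern; real Fortran tokenizing is messier): suffixing
each exponent-bearing constant can push a legal fixed-form line past
column 72.

```python
import re

SUFFIX = "_my_precision"                    # 13 extra characters each
literal = re.compile(r'\b\d+\.?\d*[eE][+-]?\d+\b')

def add_suffix(line):
    """Append the kind suffix to every real literal with an exponent."""
    return literal.sub(lambda m: m.group(0) + SUFFIX, line)

line = "      X = 1.0e50 + 2.5e-8*Y + 3.0e0*Z"      # 37 columns: legal
new = add_suffix(line)
print(new)
print(len(line), len(new))      # 37 -> 76: now past column 72
```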

		mike


Michael Fischbein, Technical Consultant, Sun Professional Services
Sun Albany 518-783-9613     sunbow!msf or mfischbein@sun.com
These are my opinions and not necessarily those of any other person or
organization.

PDM1881 (11/13/89)

Let me comment on my own words, in the light of what has happened since:

> For floating point, I want X3J3 to define a minimum number of decimal
> digits of precision to be delivered for an item of type REAL:  I
> suggest 10.  This number must be such that almost 60 bits are required
> to do the job.  In other words, 32 bits will not a REAL value hold.
> DOUBLE precision is defined as providing more precision than REAL.
> The new type LOW PRECISION selects a lower than REAL precision, if
> available.

Several people commented that the single precision found on IBM
mainframes, VAXes & PC's and many other 32-bit systems is valuable for
many applications.  The performance penalty on our IBM mainframe is
about 15% for going to all double, the actual precision is over 2.5
times greater, and many programmers would have no idea if they have a
precision problem or not.  However, I did not propose to eliminate
single precision, but simply to make it both (1) a compiler option, and
(2) a source change option.

However, I agree that DOUBLE PRECISION most often means about 64 bits of
data, roughly 15 decimal digits of precision.  I would be willing to
accept that as a definition.  REAL would mean less precision if
available, and a new EXTENDED PRECISION data type would be added.  The
compilers on the large word systems would then need to have an option to
"do it the old way," meaning, compile DOUBLE as EXTENDED.  (I don't use
any such systems, but I think that's CDC, Cray, etc.)

My claim to solving the IMPLICIT problem for constants has been refuted.
Extending the syntax of IMPLICIT seems attractive.  However, I am a
great fan of IMPLICIT NONE, and I'd like to explore ways of solving this
problem without extending IMPLICIT.

One argument was that the compiler has to know what precision is needed,
and should simply compile it.  This was refuted by the subroutine CALL
problem.  Then it was pointed out that this is solved by INTERFACE
blocks in 8x.  Well, I don't want interface blocks, but I agree that
this is the right approach.  What I would prefer, is to extend the
syntax of the EXTERNAL statement, for example, a routine calling

  REAL FUNCTION FUN (X, I)

could contain

  REAL     FUN
  EXTERNAL FUN (REAL, INTEGER)

Furthermore, such statements could be included in the function (or
subroutine) itself, for the compiler to verify.

I greatly prefer extensions to the existing structure of the language,
to the kinds of changes proposed by X3J3.

Pete Matthews
Draper Laboratory
Cambridge MA 02139

maine@elxsi.dfrf.nasa.gov (Richard Maine) (11/15/89)

In article <7662@xenna.Xylogics.COM> mvs.draper.com@RELAY.CS.NET!PDM1881 writes:
>  ....  Then it was pointed out that this is solved by INTERFACE
>  blocks in 8x.  Well, I don't want interface blocks, but I agree that
>  this is the right approach.  What I would prefer, is to extend the
>  syntax of the EXTERNAL statement, for example, a routine calling

>    REAL FUNCTION FUN (X, I)

>  could contain

>    REAL     FUN
>    EXTERNAL FUN (REAL, INTEGER)

>  Furthermore, such statements could be included in the function (or
>  subroutine) itself, for the compiler to verify.

>  I greatly prefer extensions to the existing structure of the language,
>  to the kinds of changes proposed by X3J3.

But what you have is little more than an interface block masquerading
under another name.  And unfortunately, yours is not as general as
the proposed 8x one.  It is far from trivial (in fact, it's downright
difficult) to come up with language constructs that fit well in
all possible contexts of something like Fortran.  Although I was
not part of it, I can well appreciate the amount of work that somebody
on X3J3 had to do to avoid a standard that was full of inconsistencies
and ambiguities.  (There are bound to still be a few, nothing being
perfect, but on the whole a lot of care for such issues is apparent
in the draft).

For instance, how does your suggestion handle even such 77 constructs
as a subroutine that takes another subroutine or function as an
argument?  One could probably invent some syntax for that, but it's
likely to be rather kludgy.  (And I hesitate to even think about how
to incorporate things like data structures in your syntax).  The
proposed 8x interface blocks are general enough not to back us into
something difficult to extend.

Also, keep in mind, that interface blocks are not generally required if
you use modules.  For module routines, the interface information is
already available and you really have to do nothing extra (except
for declaring the module in a USE statement).  The interface blocks
are needed only in special situations where you can't use modules.

Generally, lots of things in 8x are much cleaner if you use modules.
I would categorize modules as the most important change/improvement in
8x and a lot of the other changes are related.  If you don't like
modules, then I can understand that you aren't going to like 8x because
they are integral to the language.

--

Richard Maine
maine@elxsi.dfrf.nasa.gov [130.134.1.1]

hirchert@uxe.cso.uiuc.edu (11/15/89)

Pete Matthews (uucp@xylogics.UUCP) writes:
>                    The performance penalty on our IBM mainframe is
>about 15% for going to all double, the actual precision is over 2.5
>times greater, and many programmers would have no idea if they have a
>precision problem or not.

15% sounds about right for the instruction speed penalty, but what about the
effects of the increased storage?  This can include problems with the cache
or virtual memory working set, as well as I/O bandwidth issues.  (Actually,
I'm sympathetic to the idea that most numerical work is done in more than 32
bits and that in most cases the performance penalty is minor, but we should
recognize that this isn't always true.)

>However, I agree that DOUBLE PRECISION most often means about 64 bits of
>data, roughly 15 decimal digits of precision.  I would be willing to
>accept that as a definition.  REAL would mean less precision if
>available, and a new EXTENDED PRECISION data type would be added.  The
>compilers on the large word systems would then need to have an option to
>"do it the old way," meaning, compile DOUBLE as EXTENDED.  (I don't use
>any such systems, but I think that's CDC, Cray, etc.)

I see at least two potential problems with this approach:

1. Some machines have more than one precision beyond DOUBLE PRECISION.  This
   provides no standard way to support all of them.

2. Some people depend on REAL being the same size as INTEGER and DOUBLE
   PRECISION being twice the size of REAL in order to do their own storage
   management.  Your scheme would break that assumption on some machines.
   The switch you suggest might "fix" things, but at the cost of confusion
   about which switch settings are necessary to make a program work.

The necessity of being compatible with FORTRAN 77 is so overwhelming that the
people on X3J3 who favored this approach found it necessary to leave REAL and
DOUBLE PRECISION alone and instead proposed two new data types that were
associated with specific minimum precisions.  (X3's recent change in X3J3's
program of work appears to require compatibility with FORTRAN 77, so your
proposal may not be an alternative open to X3J3.)

>My claim to solving the IMPLICIT problem for constants has been refuted.
>Extending the syntax of IMPLICIT seems attractive.  However, I am a
>great fan of IMPLICIT NONE, and I'd like to explore ways of solving this
>problem without extending IMPLICIT.

[Another approach might be to provide a means of specifying what properties
are expected of the (default) REAL type.  In the absence of such a
specification, this could default to being the same size as INTEGER, but
in the presence of such a specification, both constants and variables could
be modified to your requirements with a single statement.]

>One argument was that the compiler has to know what precision is needed,
>and should simply compile it.  This was refuted by the subroutine CALL
>problem.  Then it was pointed out that this is solved by INTERFACE
>blocks in 8x.  Well, I don't want interface blocks, but I agree that
>this is the right approach.  What I would prefer, is to extend the
>syntax of the EXTERNAL statement, for example, a routine calling
>
>  REAL FUNCTION FUN (X, I)
>
>could contain
>
>  REAL     FUN
>  EXTERNAL FUN (REAL, INTEGER)

To me, you are just creating a different syntax for the interface block.
Among the disadvantages of a syntax based on extending the EXTERNAL statement
are the following:

1. You are using a different syntax to represent this information at the
   reference than was used at the definition.  Thus the user is forced to
   reenter it.  The interface block proposed by X3J3 uses the same syntax,
   so the statements that define the interface can just be copied into the
   interface block (or placed in an INCLUDE file for use in both places).

2. In a procedure with a large number of arguments, you may have trouble with
   the continuation limit.

3. This approach does not appear to deal well with the problem of specifying
   multiple attributes for an argument or function result (e.g., an array-
   valued function result).

4. This approach may have problems expressing function results whose attributes
   are related to attributes or values of its arguments (e.g., a CHARACTER
   function whose output length depends on the length and/or value of an
   input argument.)

>Furthermore, such statements could be included in the function (or
>subroutine) itself, for the compiler to verify.

1. There are problems with this in RECURSIVE functions and subroutines.

2. It is equally true of the syntax proposed by X3J3.

>I greatly prefer extensions to the existing structure of the language,
>to the kinds of changes proposed by X3J3.

You just are choosing different parts of the existing structure to extend.
In many cases, when you try to work out the details, these approaches aren't
as simple as they first seem.  When X3J3 has added a feature in a way other
than the "obvious" way, it usually means the X3J3 found serious problems in
doing things in the "obvious" way.

>Pete Matthews
>Draper Laboratory
>Cambridge MA 02139

Kurt W. Hirchert     hirchert@ncsa.uiuc.edu
National Center for Supercomputing Applications