[comp.lang.fortran] Uses for EQUIVALENCE

vsnyder@jato.jpl.nasa.gov (Van Snyder) (06/07/91)

In article <1991Jun5.220805.4653@alchemy.chem.utoronto.ca> mroussel@alchemy.chem.utoronto.ca (Marc Roussel) writes:
>In article <1991Jun1.171914.802@weyrich.UUCP> orville@weyrich.UUCP
>(Orville R. Weyrich) writes:
>[on using EQUIVALENCE with variables of differing types]
>>This leads to the $500,000 question: do algorithms exist which can do something
>>intelligent with such uses of EQUIVALENCE?
>>
>>YES, I WOULD BE VERY INTERESTED IN THE ANSWER TO THE ABOVE QUESTION.
>
>I can see one possible legitimate use of EQUIVALENCE with different data
>types: to decide on the endianism of your machine (and similarly to decide
>other quirks of the number system of the computer).  You can load up some
>variables with known values, equivalence them to appropriate-sized integers
>and then use these integers to diagnose the endianism (round-off, etc.).  Such
>diagnostic toolkits are probably the only legitimate use of equivalence
>across data types though (other than saving space).  Anyone know any others?
>
>				Marc R. Roussel
>                                mroussel@alchemy.chem.utoronto.ca
I can understand that you might want to know the endianism of your machine if
you were tearing apart telemetry data or something like that.  But a rationally
designed bit access facility should make that invisible.  You may need a
different such subprogram set for each of several classes of machines (until
it becomes standard in F90).  But scattering EQUIVALENCE all over the place
just to be able to scatter endianism-dependencies all over the place seems
like a poor programming practice.
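
For reference, the sort of probe Marc describes amounts to something like
this (a sketch only; INTEGER*2 and the mixed-size EQUIVALENCE are themselves
common extensions rather than standard FORTRAN 77, which is part of the
problem):

      PROGRAM ENDIAN
C     A sketch of the probe described above.  INTEGER*2 and the
C     mixed-size EQUIVALENCE are common extensions, not standard
C     FORTRAN 77.
      INTEGER*4 IWORD
      INTEGER*2 IHALF(2)
      EQUIVALENCE (IWORD, IHALF)
      IWORD = 1
      IF (IHALF(1) .EQ. 1) THEN
         PRINT *, 'Little-endian: low-order half stored first'
      ELSE
         PRINT *, 'Big-endian: high-order half stored first'
      END IF
      END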

I'd also be surprised if you could use EQUIVALENCE to discover the round-
off level or the overflow or underflow limits.  Mike Malcolm wrote some
routines to compute such things a long time ago.  Or you could get the
routines R1MACH, D1MACH and I1MACH from Netlib.
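
The computed approach goes roughly like this (a sketch in the spirit of
those routines, not a copy of them):

      REAL FUNCTION EPSMCH ()
C     Halve EPS until 1.0 + EPS rounds back to 1.0; the previous EPS
C     is then the machine epsilon.  Storing the sum in a variable
C     reduces (but does not eliminate) the chance that an extended-
C     precision register fools the test.
      REAL EPS, ONEPLS
      EPS = 1.0
   10 CONTINUE
         EPS = EPS / 2.0
         ONEPLS = 1.0 + EPS
      IF (ONEPLS .GT. 1.0) GO TO 10
      EPSMCH = EPS * 2.0
      RETURN
      END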


-- 
vsnyder@jato.Jpl.Nasa.Gov
ames!elroy!jato!vsnyder
vsnyder@jato.uucp

leipold@eplrx7.uucp (Walt Leipold) (06/08/91)

In article <1991Jun6.232026.10172@jato.jpl.nasa.gov> Van Snyder writes:
>I'd also be surprised if you could use EQUIVALENCE to discover the round-
>off level or the overflow or underflow limits.  Mike Malcolm wrote some
>routines to compute such things a long time ago.  Or you could get the
>routines R1MACH, D1MACH and I1MACH from Netlib.

Is anyone else bothered by D1MACH et al?  I mean, talk about gratuitously
non-portable code...  Why did the author use EQUIVALENCEd hex constants?
Was it to get more precision in specifying a floating point value than he
could get by writing a floating point constant?  If so, how would you print
one of these values, and what does that say about the state of the art in
FORTRAN compilers (and their users)?  And why do I have to ask for each
constant by number (1..5), instead of having a separate function (e.g.,
EPS(), HUGE()) for each constant?  And why aren't functions like this part
of X3.9-1978, so I can write portable numerical code?  Or (the real $64K
question) are these functions included in the long-expected and oft-delayed 
FORTRAN 90?
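
For anyone who hasn't looked, the style I'm complaining about amounts to
roughly this (my paraphrase, not the Netlib source; the hex values are the
IEEE big-endian ones and are shown only as an illustration):

      DOUBLE PRECISION FUNCTION D1MACH (I)
C     A paraphrase of the style in question, not the Netlib source.
C     The hex DATA values are the IEEE big-endian ones, shown only
C     as an illustration; on another machine every DATA statement
C     below must be replaced (Z'...' is itself an extension).
      INTEGER I
      DOUBLE PRECISION DMACH(5)
      INTEGER SMALL(2), LARGE(2), RIGHT(2), DIVER(2), LOG10(2)
      EQUIVALENCE (DMACH(1), SMALL(1)), (DMACH(2), LARGE(1)),
     1            (DMACH(3), RIGHT(1)), (DMACH(4), DIVER(1)),
     2            (DMACH(5), LOG10(1))
      DATA SMALL / Z'00100000', Z'00000000' /
      DATA LARGE / Z'7FEFFFFF', Z'FFFFFFFF' /
      DATA RIGHT / Z'3CA00000', Z'00000000' /
      DATA DIVER / Z'3CB00000', Z'00000000' /
      DATA LOG10 / Z'3FD34413', Z'509F79FF' /
      D1MACH = DMACH(I)
      RETURN
      END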

[Whew... I feel better now...]

-- 
--------------------------------------------------------------------------
"As long as you've lit one candle,                            Walt Leipold
you're allowed to curse the darkness."          (leipolw%esvax@dupont.com)
--------------------------------------------------------------------------

maine@altair.dfrf.nasa.gov (Richard Maine) (06/08/91)

On 7 Jun 91 17:13:57 GMT, leipold@eplrx7.uucp (Walt Leipold) said:

Walt> EPS(), HUGE()) for each constant?  And why aren't functions like
Walt> this part of X3.9-1978, so I can write portable numerical code?
Walt> Or (the real $64K question) are these functions included in the
Walt> long-expected and oft-delayed FORTRAN 90?

Yes.  From the S8.115 (June 1990) draft of F90:  (More detailed
descriptions are in other subsections; also pardon any typos I might
have made in transcribing this.)

  13.10.8 Numeric Inquiry Functions

    digits(x)      - Number of significant digits in the model
    epsilon(x)     - Number that is almost negligible compared to one
    huge(x)        - Largest number in the model
    maxexponent(x) - Maximum exponent in the model
    minexponent(x) - Minimum exponent in the model
    precision(x)   - Decimal precision
    radix(x)       - Base of the model
    range(x)       - Decimal exponent range
    tiny(x)        - Smallest positive number in the model

  13.10.9 Bit Inquiry Function

    bit_size(i)    - Number of bits in the model

  13.10.10 Bit Manipulation Functions

    ...

  13.10.12 Floating-point Manipulation Functions

    exponent(x)       - Exponent part of a model number
    fraction(x)       - Fractional part of a model number
    nearest(x,s)      - Nearest different processor number in given direction
    rrspacing(x)      - Reciprocal of the relative spacing
                          of model numbers near given number
    scale(x,i)        - Multiply a real by its base to an integer power
    set_exponent(x,i) - Set exponent part of a number
    spacing(x)        - Absolute spacing of model numbers near given
                          number
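
So a program can simply ask the processor; e.g. (a quick sketch, assuming
default-real x and default-integer i):

        program show_model
        real    :: x
        integer :: i
        !--- tiny, huge, and epsilon correspond roughly to r1mach(1),
        !--- r1mach(2), and r1mach(4); the rest describe the model itself.
        print *, tiny(x), huge(x), epsilon(x)
        print *, radix(x), digits(x), minexponent(x), maxexponent(x)
        print *, huge(i), bit_size(i)
        end program show_model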

Also relevant are the selected_int_kind and selected_real_kind
functions that allow you to portably specify what precision you want
as in

        !--- You probably put the parameter declaration in a module
        !--- or include file when doing this for real.
        integer, parameter :: wr_kind = selected_real_kind(10,30)

        real(wr_kind) :: x,y,z

That specifies that x, y, and z are of the "smallest" floating point
type that has at least 10 digits of precision and a range
of at least 10**(-30) to 10**30.  This would be double precision on
most 32-bit systems and single precision on 60- and 64-bit ones.
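
Combining the two, you can also check what the selected kind actually
provides (again a sketch, reusing the made-up wr_kind name from above):

        program show_kind
        !--- precision and range report what the selected kind actually
        !--- provides, which may exceed what was requested.
        integer, parameter :: wr_kind = selected_real_kind(10,30)
        real(wr_kind) :: x
        print *, precision(x), range(x)
        print *, epsilon(x), huge(x)
        end program show_kind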


--
Richard Maine
maine@altair.dfrf.nasa.gov

vsnyder@jato.jpl.nasa.gov (Van Snyder) (06/08/91)

In article <1991Jun7.171357.3941@eplrx7.uucp> leipold@eplrx7.uucp (Walt Leipold) writes:
>In article <1991Jun6.232026.10172@jato.jpl.nasa.gov> Van Snyder writes:
>>I'd also be surprised if you could use EQUIVALENCE to discover the round-
>>off level or the overflow or underflow limits.  Mike Malcolm wrote some
>>routines to compute such things a long time ago.  Or you could get the
>>routines R1MACH, D1MACH and I1MACH from Netlib.
>
>Is anyone else bothered by D1MACH et al?  I mean, talk about gratuitously
>non-portable code...  Why did the author use EQUIVALENCEd hex constants?
>Was it to get more precision in specifying a floating point value than he
>could get by writing a floating point constant?  If so, how would you print
>one of these values, and what does that say about the state of the art in
>FORTRAN compilers (and their users)?  And why do I have to ask for each
>constant by number (1..5), instead of having a separate function (e.g.,
>EPS(), HUGE()) for each constant?  And why aren't functions like this part
>of X3.9-1978, so I can write portable numerical code?  Or (the real $64K
>question) are these functions included in the long-expected and oft-delayed 
>FORTRAN 90?

You're right about equivalence and hex being non-portable.  The benefit is
that the non-portability is concentrated in one easily replaced routine,
instead of being scattered around throughout the code.

You can change the interface if you like.  I'd recommend using multiple
entries so you don't have to change 5 files, or one file in 5 places, when
porting code.
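
Something along these lines, for instance (a sketch only; the names mirror
Walt's EPS()/HUGE() suggestion, and the DATA values are the IEEE single
precision ones, purely as an illustration):

      REAL FUNCTION EPS ()
C     One machine-dependent source file with several named entry
C     points, so the rest of the code never sees an index 1..5.
C     The DATA values below are the IEEE single precision ones and
C     would be replaced when porting.
      REAL RMACH(3)
      SAVE RMACH
      DATA RMACH / 1.1920929E-07, 3.4028235E+38, 1.1754944E-38 /
      EPS = RMACH(1)
      RETURN
      ENTRY HUGE ()
      HUGE = RMACH(2)
      RETURN
      ENTRY TINY ()
      TINY = RMACH(3)
      RETURN
      END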

The stuff in *1MACH is in F90, and more.
-- 
vsnyder@jato.Jpl.Nasa.Gov
ames!elroy!jato!vsnyder
vsnyder@jato.uucp