[comp.arch] Making Rounding Modes Usable

crowl@cs.rochester.edu (Lawrence Crowl) (07/21/89)

In article <43004@bbn.COM> slackey@BBN.COM (Stan Lackey) writes:
[In a discussion of why IEEE floating point is expensive.]
>2. Rounding modes ... [had] been intended for range arithmetic.  Now, when I
>asked, lots of people knew what they were for, but no one [on comp.arch] had
>ever heard of them having ever been used.  Interesting how much hardware out
>there can do these operations that are so rarely used.  Kind of like the
>decimal string instruction set on the VAX, except that the VAX decimal
>strings are occasionally used.

There are a number of reasons that rounding modes are not used more.  And
unfortunately, the reasons feed each other.

- Programming languages historically do not support rounding modes.
- So, such use has been restricted generally to changing modes between program
  runs (for a warm fuzzy).
- So, specifying the rounding modes is part of device setup and not part of
  the operation instruction.
- So, changing the rounding mode is a difficult and time consuming operation.
- So, newer programming languages tend not to support rounding modes because
  the designers know users will not pay the cost.

We can break the cycle and provide good access to faster, more robust floating
point programs with the following efforts.

- Specify the rounding mode as part of the operation.  In essence, I am saying
  that rounding MODES are a self-defeating way of looking at the capability.
  We should have ADD-ROUNDING-UP and ADD-ROUNDING-DOWN, etc.
- Make these operations available in a programming language.
- Make these operations convenient in a programming language, e.g. operator
  overloading in Ada and C++.

The most difficult effort is the first, since it involves new hardware design.
The second two efforts would be fairly easy to implement under GNU's C++
compiler.
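
To make the proposal concrete, here is a minimal C++ sketch of what
per-operation rounding could look like to the programmer.  The names
add_up and add_down are invented for illustration, and fesetround() from
<cfenv> (which postdates this thread) merely stands in for whatever mode
control a given machine provides; on hardware that encodes the rounding
mode in the instruction, each function would compile to a single add.

    // Hedged sketch: directed-rounding adds via mode switching.
    // add_up / add_down are invented names, not a standard API.
    // (Strictly, #pragma STDC FENV_ACCESS ON is needed for this to be
    // well-defined; it is omitted here for brevity.)
    #include <cfenv>

    double add_up(double a, double b)            // ADD-ROUNDING-UP
    {
        const int old = std::fegetround();
        std::fesetround(FE_UPWARD);
        volatile double r = a + b;               // volatile discourages moving
        std::fesetround(old);                    // the add across the mode change
        return r;
    }

    double add_down(double a, double b)          // ADD-ROUNDING-DOWN
    {
        const int old = std::fegetround();
        std::fesetround(FE_DOWNWARD);
        volatile double r = a + b;
        std::fesetround(old);
        return r;
    }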

QUESTION: What effort would be required to retrofit an existing IEEE FP chip
to take the modes as part of the operation?

NOTE:  LANGUAGE DISCUSSIONS ARE NOT APPROPRIATE IN THIS NEWSGROUP.  IF YOU
WISH TO ADDRESS LANGUAGE ISSUES, POST A SEPARATE ARTICLE TO COMP.LANG.MISC.
-- 
  Lawrence Crowl		716-275-9499	University of Rochester
		      crowl@cs.rochester.edu	Computer Science Department
...!{allegra,decvax,rutgers}!rochester!crowl	Rochester, New York,  14627

cik@l.cc.purdue.edu (Herman Rubin) (07/21/89)

In article <1989Jul21.035825.27704@cs.rochester.edu>, crowl@cs.rochester.edu (Lawrence Crowl) writes:
> 
> There are a number of reasons that rounding modes are not used more.  And
> unfortunately, the reasons feed each other.
> 
> - Programming languages historically do not support rounding modes.
> - So, such use has been restricted generally to changing modes between program
>   runs (for a warm fuzzy).
> - So, specifying the rounding modes is part of device setup and not part of
>   the operation instruction.
> - So, changing the rounding mode is a difficult and time consuming operation.
> - So, newer programming languages tend not to support rounding modes because
>   the designers know users will not pay the cost.
> 
> We can break the cycle and provide good access to faster, more robust floating
> point programs with the following efforts.
> 
> - Specify the rounding mode as part of the operation.  In essence, I am saying
>   that rounding MODES are a self-defeating way of looking at the capability.
>   We should have ADD-ROUNDING-UP and ADD-ROUNDING-DOWN, etc.
> - Make these operations available in a programming language.
> - Make these operations convenient in a programming language, e.g. operator
>   overloading in Ada and C++.
> 
> The most difficult effort is the first, since it involves new hardware design.
> The second two efforts would be fairly easy to implement under GNU's C++
> compiler.

This is an example of a hardware modification which is extremely cheap to
make, but very expensive to do without.  It may be difficult to retrofit
existing chips, but for a new architecture it only adds a tag field to the
instruction, which need not be decoded by the instruction decoder but by the
computational unit, which already has more complicated decoding to do.

Another example, which more or less falls in the same category, and which
has been discussed in this subgroup, is that of integer division with 
quotient and remainder, with the precise choice of options depending on
the signs of the arguments.
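
For readers who have not run into the sign issue: truncating division
(toward zero) gives a remainder with the sign of the dividend, while
flooring division (toward minus infinity) gives a remainder with the sign
of the divisor.  Classic C left the direction implementation-defined for
negative operands; C99 and C++11 later pinned '/' to truncation.  A small
C++ sketch of the two conventions (divmod_trunc and divmod_floor are
ad-hoc names):

    #include <cstdio>
    #include <utility>

    // C/C++ '/' and '%': quotient truncated toward zero,
    // remainder takes the sign of the dividend.
    std::pair<int, int> divmod_trunc(int a, int b)
    {
        return { a / b, a % b };
    }

    // Flooring division: quotient rounded toward minus infinity,
    // remainder takes the sign of the divisor.
    std::pair<int, int> divmod_floor(int a, int b)
    {
        int q = a / b, r = a % b;
        if (r != 0 && ((r < 0) != (b < 0))) { --q; r += b; }
        return { q, r };
    }

    int main()
    {
        auto [qt, rt] = divmod_trunc(-7, 2);   // qt = -3, rt = -1
        auto [qf, rf] = divmod_floor(-7, 2);   // qf = -4, rf =  1
        std::printf("trunc: %d %d   floor: %d %d\n", qt, rt, qf, rf);
    }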

Yet another is the need for floating division with integer quotient and
floating remainder.  The same choices for quotient and remainder as in
the integer case occur.  For those who cannot see why this is an important
operation: every trigonometric or exponential computation starts out this
way.
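
What Rubin asks for is essentially the argument-reduction step: divide x
by a constant such as pi/2, keep the integer quotient (to select the
quadrant), and keep the floating remainder as the reduced argument.  A
naive C++ sketch follows; the function name is invented, and accurate
reduction of large arguments needs extra precision, which is exactly why
direct support would help.

    // Naive reduction: x = q*period + r with integral q and floating r.
    #include <cmath>
    #include <cstdio>

    void reduce(double x, double period, long *q, double *r)
    {
        double n = std::floor(x / period);   // integer quotient, held as a double
        *q = static_cast<long>(n);
        *r = x - n * period;                 // floating remainder (inexact for large x)
    }

    int main()
    {
        long q;
        double r;
        reduce(10.0, 1.5707963267948966, &q, &r);   // 10 = 6*(pi/2) + r
        std::printf("q = %ld  r = %.17g\n", q, r);
        // The closest library equivalent is std::remquo(), which returns a
        // round-to-nearest remainder plus the low bits of the quotient.
    }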

-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet, UUCP)

khb%chiba@Sun.COM (Keith Bierman - SPD Languages Marketing -- MTS) (07/25/89)

In article <1989Jul21.035825.27704@cs.rochester.edu> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>
>We can break the cycle and provide good access to faster, more robust floating
>point programs with the following efforts.
>
>- Specify the rounding mode as part of the operation.  In essence, I am saying
>  that rounding MODES are a self-defeating way of looking at the capability.
>  We should have ADD-ROUNDING-UP and ADD-ROUNDING-DOWN, etc.
>- Make these operations available in a programming language.
>- Make these operations convenient in a programming language, e.g. operator
>  overloading in Ada and C++.
>
>The most difficult effort is the first, since it involves new hardware design.
>The second two efforts would be fairly easy to implement under GNU's C++
>compiler.

I don't see why this requires new hardware (hence the comp.arch
follow-up).  Consider an f88 module which defines new operators +up,
+down, etc.  Now one could write

		result = a +up b

and get the obvious result. The fact that +up actually uses some macro
(or system call) to condition the floating point mode shouldn't matter
(performance aside).

Any language which supports operator definitions and overloading (e.g.
Ada) should suffice.
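
In C++ terms (the thread's other overloading example), the kind of thing
Keith describes might look like the sketch below: a proxy type so that
a + up(b) performs the add rounded upward by conditioning the mode around
that one operation.  The names are invented, fesetround() stands in for
the "macro or system call", and the save/set/restore on every operation is
exactly the cost at issue.

    #include <cfenv>

    struct RoundUp { double v; };                 // proxy carrying the operand
    inline RoundUp up(double v) { return { v }; }

    inline double operator+(double a, RoundUp b)  // spelled 'a + up(b)'
    {
        const int old = std::fegetround();        // read the current mode,
        std::fesetround(FE_UPWARD);               // condition the mode,
        volatile double r = a + b.v;              // do the one operation,
        std::fesetround(old);                     // restore: two mode writes
        return r;                                 // and a read per add
    }

    // usage:   double result = a + up(b);

That per-operation overhead is precisely what Crowl objects to in the
follow-up below.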

If someone can explain why the hardware MUST be different, please do.
Keith H. Bierman      |*My thoughts are my own. Only my work belongs to Sun*
It's Not My Fault     |	Marketing Technical Specialist    ! kbierman@sun.com
I Voted for Bill &    |   Languages and Performance Tools. 
Opus  (* strange as it may seem, I do more engineering now     *)

crowl@cs.rochester.edu (Lawrence Crowl) (07/26/89)

In article <117496@sun.Eng.Sun.COM> khb@sun.UUCP
(Keith Bierman - SPD Languages Marketing -- MTS) writes:
)In article <1989Jul21.035825.27704@cs.rochester.edu>
)crowl@cs.rochester.edu (Lawrence Crowl) writes:
)>[Specifying the rounding mode as part of the operation] involves new
)>hardware design.
)
)[An implementation that] actually uses some macro (or system call) to
)condition the floating point mode shouldn't matter (performance aside).

But performance is the critical issue.  If specifying the rounding mode
on each operation increases the cost by an order of magnitude, programmers
will not use them.  The costs of using rounding modes must be in line with
the benefits of using them.  Currently, the costs are generally too high
for production software.  There is light at the end of the tunnel:

In article <817@acorn.co.uk> RWilson@acorn.co.uk writes:
)the floating point system for the Acorn RISC Machine provides the rounding
)modes as part of the instruction.
-- 
  Lawrence Crowl		716-275-9499	University of Rochester
		      crowl@cs.rochester.edu	Computer Science Department
...!{allegra,decvax,rutgers}!rochester!crowl	Rochester, New York,  14627

kirchner@uklirb.UUCP (Reinhard Kirchner) (08/17/89)

From article <1989Jul21.035825.27704@cs.rochester.edu>, by crowl@cs.rochester.edu (Lawrence Crowl):
> There are a number of reasons that rounding modes are not used more.  And
> unfortunately, the reasons feed each other.
> 
> - Programming languages historically do not support rounding modes.
> - So, such use has been restricted generally to changing modes between program
>   runs (for a warm fuzzy).
> - So, specifying the rounding modes is part of device setup and not part of
>   the operation instruction.
> - So, changing the rounding mode is a difficult and time consuming operation.
> - So, newer programming languages tend not to support rounding modes because
>   the designers know users will not pay the cost.
> 
> We can break the cycle and provide good access to faster, more robust floating
> point programs with the following efforts.
> 
> - Specify the rounding mode as part of the operation.  In essence, I am saying
>   that rounding MODES are a self-defeating way of looking at the capability.
>   We should have ADD-ROUNDING-UP and ADD-ROUNDING-DOWN, etc.
> - Make these operations available in a programming language.
> - Make these operations convenient in a programming language, e.g. operator
>   overloading in Ada and C++.
> 
I have been on holidays, so this comes a little late.

I hate to say it again and again:

There are languages which provide rounding as part of their operations, so
you can write

     x := y +> z;    (* + with round upward *)

and likewise with < for rounding downward, e.g. +<, for all operations.

These languages are    PASCAL-SC    and     FORTRAN-SC.

I will stop here since I know this is not a language group :-).
Only one sentence: the only thing these languages lack is vendors who will
port them to their hardware (no market, they say).

R. Kirchner

kirchner@uklirb.uucp
kinf89@dkluni01.bitnet     ( preferred )

mmm@cup.portal.com (Mark Robert Thorson) (08/20/89)

This reminds me of something I once heard one of the two principal architects
of the NS16032 microprocessor say (quoted approximately):  "There's only
one use for rounding!  If you run the program once rounding up, then
run it again rounding down, you get a value for noise in your application."

This also reminds me of a table I once saw in a research journal for
economists.  It compared the results of running the same program (doing
multiple regression analysis) in several different environments.  If I
remember correctly, the program computed a single numeric value.  DEC
and IBM had about three digits of accuracy, which was among the best.
Some vendors only had one or two digits.  CDC got the sign wrong.

(The above-mentioned events occurred 8-12 years ago, and undoubtedly do not
reflect upon the current products of any of the fine manufacturers named.)

firth@sei.cmu.edu (Robert Firth) (08/21/89)

In article <21453@cup.portal.com> mmm@cup.portal.com (Mark Robert Thorson) writes:
>This reminds me of something I once heard one of the two principal architects
>of the NS16032 microprocessor say (quoted approximately):  "There's only
>one use for rounding!  If you run the program once rounding up, then
>run it again rounding down, you get a value for noise in your application."

A good idea, and one that should be used more often.
However, to do it properly, you need interval arithmetic,
with appropriate rounding.  I'd give a lot for a machine
with good hw support for interval arithmetic.
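
The core of interval arithmetic is just directed rounding applied
systematically: round lower bounds down and upper bounds up, so the true
result is always contained.  A bare-bones C++ sketch (the Interval type
and names are invented; fesetround() again stands in for per-instruction
rounding):

    #include <cfenv>

    struct Interval { double lo, hi; };

    // Outwardly rounded addition: for any x in [a.lo, a.hi] and
    // y in [b.lo, b.hi], the exact x + y lies in the result.
    Interval add(Interval a, Interval b)
    {
        Interval r;
        const int old = std::fegetround();
        std::fesetround(FE_DOWNWARD);
        r.lo = a.lo + b.lo;              // lower bound rounded down
        std::fesetround(FE_UPWARD);
        r.hi = a.hi + b.hi;              // upper bound rounded up
        std::fesetround(old);
        return r;
    }
    // Multiplication, division, and the elementary functions follow the same
    // pattern, which is why cheap directed rounding matters here.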

>This also reminds me of a table I once saw in a research journal for
>economists.  It compared the results of running the same program (doing
>multiple regression analysis) in several different environments.  If I

That brings back memories!  Over a decade ago, a colleague and I
took a subset of an economic model (the so-called 'Treasury model'
of the UK economy), recoded it to use software interval arithmetic,
and ran some calculations.  The general result was that the output
error intervals were large enough to swamp the real numbers, things
like 'if you cut income taxes by 5%, next year's unemployment will
be 7%, plus or minus 20'.  And the government was actually using
this thing to help make policy.