[comp.arch] condition codes

oconnor@sunset.steinmetz (Dennis M. O'Connor) (02/20/88)

IMHO, there is still one and only one good reason for
including condition codes in an architecture:

Translation of code from, or emulation of, an existing architecture
that already has them.

There's just no good way to do it unless you have a "compatible"
set of condition codes.

What I like is a "condition" or "skip" instruction that
can cause the following instruction to NOT be executed
if the test fails. This means you need to have the result
of the test ready in one instruction-interval, which
can indeed cause a new critical path. But it's clean,
and lets you eliminate branches by "conditionalizing"
other types of instructions. Since the "CONDITION" instruction
is an entire instruction, you get lots of flexibility in what's being
tested and what's being tested for.

And yes, if you want to, you can test the bits in the condition-code
register, if you've got one. :-)
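
To make the nullify-the-next-instruction semantics concrete, here is
a toy C interpreter sketch (purely illustrative, no real ISA assumed):

    #include <stdio.h>
    #include <stdbool.h>

    /* Toy model of the "CONDITION" idea: if the test fails, the one
       instruction after it is nullified.  Not any real ISA. */
    enum op { COND_NONZERO, ADD_ONE, PRINT_R, HALT };

    int main(void)
    {
        enum op prog[] = { COND_NONZERO, ADD_ONE, PRINT_R, HALT };
        int r = 0;            /* the register being tested */
        bool squash = false;  /* set when a CONDITION test fails */

        for (int pc = 0; prog[pc] != HALT; pc++) {
            if (squash) { squash = false; continue; } /* nullify one slot */
            switch (prog[pc]) {
            case COND_NONZERO: squash = (r == 0); break; /* test: r != 0? */
            case ADD_ONE:      r += 1;            break; /* conditionalized */
            case PRINT_R:      printf("r = %d\n", r); break;
            default:           break;
            }
        }
        return 0;  /* r starts at 0, so ADD_ONE is squashed: prints r = 0 */
    }
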
--
	Dennis O'Connor 	oconnor@sunset.steinmetz.UUCP ??
				ARPA: OCONNORDM@ge-crd.arpa
    "Nuclear War is NOT the worst thing people can do to this planet."

lindsay@gandalf.cs.cmu.edu (Donald Lindsay) (05/12/91)

In article <12162@mentor.cc.purdue.edu> hrubin@pop.stat.purdue.edu (Herman Rubin) writes:
>Why get rid of condition flags?  Other than the condition register, they take
>up little space, and also they cost essentially nothing.  Looking at one or
>two bits is certainly cheaper than comparing two quantities, even if they are
>in registers.  Overflow and carry are useful.

The problem with condition flags is the hardware complexity.

In a simple machine, condition flags can be supported by simple
hardware.  However, suppose that you then build a hot version of that
simple machine, and try to have some independence between the integer
unit and the FPU.  This attempt will be frustrated by the condition
codes, because both "independent" units must write those bits in the
correct program order.
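
As a toy illustration of the ordering problem (no real machine
implied): an integer compare and a floating compare both write one
shared flag, so if the FPU retires its compare early, a branch in
between reads the wrong value.

    #include <stdio.h>
    #include <stdbool.h>

    static bool cc;  /* the single shared condition flag */

    int main(void)
    {
        /* program order: int compare, branch on cc, then fp compare */
        cc = (3 > 2);      /* integer unit writes cc = true        */
        cc = (1.0 > 2.0);  /* fp unit, retiring early, clobbers it */
        printf(cc ? "taken\n" : "not taken -- wrong!\n");
        return 0;
    }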

One way out is to dump the condition codes - as several RISC machines
have done. The RS/6000 takes the other way out, and has several sets
of condition codes.
-- 
Don		D.C.Lindsay 	Carnegie Mellon Robotics Institute

hrubin@pop.stat.purdue.edu (Herman Rubin) (05/12/91)

In article <13011@pt.cs.cmu.edu>, lindsay@gandalf.cs.cmu.edu (Donald Lindsay) writes:
> In article <12162@mentor.cc.purdue.edu> hrubin@pop.stat.purdue.edu (Herman Rubin) writes:
> >Why get rid of condition flags?  Other than the condition register, they take
> >up little space, and also they cost essentially nothing.  Looking at one or
> >two bits is certainly cheaper than comparing two quantities, even if they are
> >in registers.  Overflow and carry are useful.
 
> The problem with condition flags is the hardware complexity.
 
> In a simple machine, condition flags can be supported by simple
> hardware.  However, suppose that you then build a hot version of that
> simple machine, and try to have some independence between the integer
> unit and the FPU.  This attempt will be frustrated by the condition
> codes, because both "independent" units must write those bits in the
> correct program order.
 
> One way out is to dump the condition codes - as several RISC machines
> have done. The RS/6000 takes the other way out, and has several sets
> of condition codes.

Why do we have separate integer and floating units, especially without 
communication between them?  I suggest those who push this horror look
at how difficult conversion between them is.  I have already pointed out
that every trigonometric and exponential routine does, in some way,
float/float -> integer quotient, remainder.  The integer is also used.
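
As a concrete instance, a hedged sketch of the reduction step in a
sine routine (illustrative constants, not a production-quality
reduction): the float/float division yields both an integer n and a
remainder r, and n itself is used to pick the quadrant.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double halfpi = 1.57079632679489661923;
        double x = 100.0;
        double n = floor(x / halfpi);     /* float/float -> integer part */
        double r = x - n * halfpi;        /* remainder: reduced argument */
        int quadrant = (int)fmod(n, 4.0); /* the integer is also used    */
        printf("n = %.0f  r = %f  quadrant = %d\n", n, r, quadrant);
        return 0;
    }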

Even putting integerization, addition, and subtraction in floating units
is not the answer.  Boolean operations using floats are useful.  Packing
and unpacking should be hardware operations.  If there is a dichotomy, it
is address/loop versus arithmetic, not integer versus floating.  There
have been machines with this feature.  But even if one has this, it is
still necessary to have good communication between the units.
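
To illustrate the unpacking point, here is what it costs to do in
software, sketched on an IEEE double (a hedged illustration, not any
particular machine's sequence): nothing but Boolean operations on
the float's bits.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        double d = -6.5;
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);  /* Boolean ops on a float */
        int sign     = (int)(bits >> 63);
        int exponent = (int)((bits >> 52) & 0x7FF) - 1023;  /* unbias */
        uint64_t frac = bits & 0xFFFFFFFFFFFFFULL;          /* 52 bits */
        printf("sign=%d exp=%d frac=%#llx\n",
               sign, exponent, (unsigned long long)frac);
        return 0;
    }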

The user may not see the use of more sophisticated hardware, but that does
not mean the casual user does not benefit from it.  In many situations on
the RS/6000 I would keep two copies of a loop variable, one integer and one
floating, if it is used in floating arithmetic, to avoid the cost of
converting.  I wonder if the compiler writers have thought of that?
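
The trick, as a minimal sketch: carry the index twice, an int for
control and addressing and a double for the arithmetic, so no
conversion appears inside the loop.

    #include <stdio.h>

    int main(void)
    {
        double a[10];
        double fi = 0.0;  /* floating copy of the loop index */
        for (int i = 0; i < 10; i++, fi += 1.0)
            a[i] = fi * 0.5;   /* uses the float copy: no conversion */
        printf("a[9] = %f\n", a[9]);
        return 0;
    }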

-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)

prener@watson.ibm.com (Dan Prener) (05/15/91)

In article <12236@mentor.cc.purdue.edu>, hrubin@pop.stat.purdue.edu (Herman Rubin) writes:

|> Why do we have separate integer and floating units, especially without 
|> communication between them?  I suggest those who push this horror look
|> at how difficult conversion between them is.  I have already pointed out
|> that every trigonometric and exponential routine does, in some way,
|> float/float -> integer quotient, remainder.  The integer is also used.
|> 
|> Even putting integerization, addition, and subtraction in floating units
|> is not the answer.  Boolean operations using floats are useful.  Packing
|> and unpacking should be hardware operations.  If there is a dichotomy, it
|> is address/loop versus arithmetic, not integer versus floating.  There
|> have been machines with this feature.  But even if one has this, it is
|> still necessary to have good communication between the units.
|> 

We have separate integer and floating units because it helps make the
machines fast.  It provides an easily-detected form of parallelism.

Good communication between the units requires both the hardware real
estate for the data paths and, when such communication is used, the
synchronization of the two units.  Good communication, like any other
architectural feature, isn't "necessary".  It is desirable if the
cost is low enough.  One must compute the expected performance return
from adding such features.  As with any such computation of an
expectation, this would include

   (1) how often such a feature would be used
   (2) the performance improvement it would provide, when used
 & (3) what performance degradation the machine would suffer from
         having such a feature, even if it is not used (e.g., the
         loss of other ways of spending the chip area)
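
Roughly, as a back-of-the-envelope expectation per instruction:

    expected gain  ~=  p * s  -  d

where p is the fraction of executed instructions that use the
feature (1), s is the cycles saved on each use (2), and d is the
penalty -- cycle time or displaced hardware -- paid on every
instruction whether the feature is used or not (3).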

Does anyone have any suggestions about what these numbers are?

|> The user may not see the use of more sophisticated hardware, but that does
|> not mean the casual user does not benefit from it.  In many situations on
|> the RS/6000 I would keep two copies of a loop variable, one integer and one
|> floating, if it is used in floating arithmetic, to avoid the cost of
|> converting.  I wonder if the compiler writers have thought of that?

Yes, we have.
-- 
                                   Dan Prener (prener @ watson.ibm.com)

hrubin@pop.stat.purdue.edu (Herman Rubin) (05/17/91)

In article <1991May15.003949.15076@watson.ibm.com>, prener@watson.ibm.com (Dan Prener) writes:
> In article <12236@mentor.cc.purdue.edu>, hrubin@pop.stat.purdue.edu (Herman Rubin) writes:
 
> |> Why do we have separate integer and floating units, especially without 
> |> communication between them?  I suggest those who push this horror look
> |> at how difficult conversion between them is.

			....................

> |> If there is a dichotomy, it
> |> is address/loop versus arithmetic, not integer versus floating.  There
> |> have been machines with this feature.  But even if one has this, it is
> |> still necessary to have good communication between the units.
> |> 
 
> We have separate integer and floating units because it helps make the
> machines fast.  It provides an easily-detected form of parallelism.
 
But if integers need to be used together with floating numbers, the
conversion procedure loses more than the parallelism gains.
Increased memory references are certainly not the way to get efficiency.

> Good communication between the units requires both the hardware real
> estate for the data paths and, when such communication is used, the
> synchronization of the two units.  Good communication, like any other
> architectural feature, isn't "necessary".  It is desirable if the
> cost is low enough.  One must compute the expected performance return
> from adding such features.  As with any such computation of an
> expectation, this would include

The cost of the entire arithmetic unit on a computer is a relatively small
part of the cost of the computer.  I admit there might be a data-path
problem, but why does it require synchronization?  Putting in a small
dedicated register memory, and using access to it to allow desynchronization,
should not be too difficult.  Even if the unit does conversion, the
additional hardware should be quite simple.  Even if the conversion/
communication part is not too well optimized, it should still do much better
than the present arrangement.

>    (1) how often such a feature would be used
>    (2) the performance improvement it would provide, when used
>  & (3) what performance degradation the machine would suffer from
>          having such a feature, even if it is not used (e.g., the
>          loss of other ways of spending the chip area)
> 
> Does anyone have any suggestions about what these numbers are?

As for (1), I suggest asking knowledgeable users for situations in which
they could take advantage of such features, or would be inconvenienced by
the lack of them.  These users would have to understand architectural
considerations, and not just HLLs.  These considerations force non-
obvious choices between algorithms, so testing specific algorithms is
not the answer.

I believe that the current elementary function library for the RS/6000
does quite a bit of this, and this might give an indication.

As for (2), it would cut the cost of the hybrid operations to at most
a third of their present cost, and in some cases less.  For unsigned
integer to floating conversion, at present this takes an integer store,
a floating load, and a floating add, assuming that the preparations have
been made.  Other situations are even worse.
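
The sequence in question, sketched in C (a hedged illustration of
the standard bit trick, not the exact RS/6000 code): store the
unsigned integer into the low bits of a double whose exponent field
says 2^52, reload it as a float, and subtract 2^52.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    double u32_to_double(uint32_t x)
    {
        uint64_t bits = 0x4330000000000000ULL | x;  /* bits of 2^52 + x */
        double d;
        memcpy(&d, &bits, sizeof d);    /* the integer store / float load */
        return d - 4503599627370496.0;  /* the float add: subtract 2^52   */
    }

    int main(void)
    {
        printf("%f\n", u32_to_double(123456789u)); /* 123456789.000000 */
        return 0;
    }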

As for (3), possibly we should put in another chip.
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)

haynes@felix.ucsc.edu (99700000) (05/17/91)

In article <12236@mentor.cc.purdue.edu> hrubin@pop.stat.purdue.edu (Herman Rubin) writes:
>
>Why do we have separate integer and floating units, especially without 
>communication between them?  I suggest those who push this horror look
>at how difficult conversion between them is.  I have already pointed out
>that every trigonometric and exponential routine does, in some way,
>float/float -> integer quotient, remainder.  The integer is also used.
>
Well, if we could go back to the Burroughs B5500 of 1964 vintage, we
wouldn't have.  An integer on that machine was simply a floating point
number with a zero exponent; the hardware algorithms tried to keep the
exponent zero as long as possible, rather than always normalizing.
The only type conversion instruction was one that would integerize a
float when an integer value was required.  I believe the concept
actually goes back to Householder of Oak Ridge in the 1950s.
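
A toy model of the idea (not the real B5500 word format): values
carry an explicit exponent, and arithmetic does not normalize, so
integer operands with exponent zero yield integer results.

    #include <stdio.h>

    struct uf { long mant; int exp; };  /* value = mant * 8^exp */

    static struct uf add(struct uf a, struct uf b)
    {
        while (a.exp > b.exp) { a.mant *= 8; a.exp--; }  /* align exponents */
        while (b.exp > a.exp) { b.mant *= 8; b.exp--; }  /* (no overflow check) */
        struct uf r = { a.mant + b.mant, a.exp };        /* no normalization */
        return r;
    }

    int main(void)
    {
        struct uf x = { 2, 0 }, y = { 3, 0 };
        struct uf z = add(x, y);
        printf("mant=%ld exp=%d\n", z.mant, z.exp);  /* 5 * 8^0: still integer */
        return 0;
    }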

But in that same year of 1964 IBM unleashed its new 360 line on the
world, and one of the features of that product was that for some
models floating point was an extra-cost option, requiring that microcode
and perhaps hardware be added to the machine.  So I guess they made a
floating point feature that could be strapped on the side of the
basic machine without much disturbance to it.

Now that we have a floating-point standard that requires normalization
almost everywhere, I guess we can never go back to where we were in 1964.
(Burroughs - er, ah, Unisys - is still making machines with the old
number representation, but they must be mighty lonely.)
-- 
haynes@cats.ucsc.edu
haynes@ucsccats.bitnet

"Any clod can have the facts, but having opinions is an Art."
        Charles McCabe, San Francisco Chronicle

dik@cwi.nl (Dik T. Winter) (05/17/91)

In article <15909@darkstar.ucsc.edu> haynes@felix.ucsc.edu (99700000) writes:
 > Well, if we could go back to the Burroughs B5500 of 1964 vintage, we
 > wouldn't have.  An integer on that machine was simply a floating point
 > number with a zero exponent; the hardware algorithms tried to keep the
 > exponent zero as long as possible, rather than always normalizing.
The same was true for the (Dutch) Electrologica X8 of the same vintage:
keep the absolute value of the exponent as small as possible without losing
precision.
 >                                            I believe the concept
 > actually goes back to Householder of Oak Ridge in the 1950s.
I do not think so.  Here it was always called the Grau representation of
floating-point numbers (and Householder was well known).  It was based on
the following article:
    Grau, A.A.
    On a Floating-Point Number Representation For Use with Algorithmic
    Languages.
    Communications of the ACM, 5 (1962), 160-161.
This was for a large part influenced by:
    Ashenhurst, R.L., and Metropolis, N.
    Unnormalized floating-point arithmetic.
    Journal of the ACM, 6 (July 1959), 415-428.
which is the oldest reference I have found that uses 'unnormalized' fp numbers.
--
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl