[net.arch] Oh no! More integer division

weemba@brahms.BERKELEY.EDU (Matthew P. Wiener) (02/17/86)

Followups to net.lang, please!

In article <2669@gatech.CSNET> jeff@gatech.UUCP (Jeff Lee) writes:
>>[Whether CS people should even be *allowed* to make such mathematical
>>decisions is another question.
>
>I am afraid that I must rank this statement right up there with the
>statement that was made about a year ago that only computer scientists
>should be the people allowed to program. Being a computer scientist
>with an interest in math, I find in both statements some of the most
>ignorant and arrogant attitudes that I have seen just about anywhere
>(in a professional situation). I suppose that you also believe that
>only professional mechanics should work on your car, that professional
>drivers should drive it, that architects should be the only people
>allowed to design your house, or that professional cooks should be the
>only people allowed to cook your meals? I am afraid that I do all these
>things and will continue to do so.

You do all those things for yourself and for people who know and trust you!
If you were to make something used by hundreds or thousands of people every
day, I would hope you were a professional.  I for one would not work in a
skyscraper built by an amateur.

Concerning the ignorance and arrogance of the statement: it was not made
in a vacuum, but you conveniently deleted its context, namely two
*explicit* examples of CS knuckleheadedness in the implementation of
*mathematical* functions.  The statement was arrogant, but certainly not
ignorant.  Indeed, several other postings to the effect that "Fortran
does it that way" or "Ada does it that way" or "the naive user wants it
that way" or "I don't know/care, it looks OK to me" can only confirm my
views about (some) CS people.  They strike me as being rather ignorant and
arrogant, but going the other way: it was implemented that way so it can't
be wrong, or it is too late to change, or who cares what users want, etc.

ucbvax!brahms!weemba	Matthew P Wiener/UCB Math Dept/Berkeley CA 94720

aglew@ccvaxa.UUCP (02/28/86)

>/* Written 11:04 am  Feb 17, 1986 by taylor@glasgow.UUCP in ccvaxa:net.arch */
>During my undergraduate days, one of the Great Men who taught us suggested,
>seriously I think, that rounding should be up OR down AT RANDOM, with a 50/50
>chance. This would remove systematic errors introduced by rounding in iterative
>algorithms. (Which way to round 0.5 is the hardest question; 0.5000001 and
>0.4999999 are much easier.)

Ref: Digital Signal Processing, Oppenheim and Schafer, Prentice-Hall, 1975.
Section 9.5 Effects of Finite Register Length in Discrete Fourier Transform
Computations, p. 459, Figure 9.23. A graph showing randomized vs. 
non-randomized rounding - randomized rounding is about 3 times better for
1K DFTs. Their reference is Weinstein, "Roundoff Noise in Floating Point Fast
Fourier Transform Computation", IEEE Trans. Audio Electroacoust., Vol. AU-17,
Sept. 1969, pp. 209-215.

So much for the formalities: I just felt like going back to that textbook 
since I really enjoyed my DSP courses.
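The quoted suggestion is easy to sketch (a sketch of my own in Python, not
from any of the postings): round up with probability equal to the fractional
part, so 0.5 goes each way 50/50 and the rounding is unbiased on average.

```python
import math
import random

def stochastic_round(x, rng=random):
    """Round x to an integer, rounding up with probability frac(x).

    Unbiased: the expected value of the result is exactly x, so
    systematic rounding error cancels out over many operations."""
    f = math.floor(x)
    return f + (1 if rng.random() < x - f else 0)

# 0.5 rounds up about half the time, so the mean stays near 0.5
rng = random.Random(1)
mean = sum(stochastic_round(0.5, rng) for _ in range(10000)) / 10000
print(mean)
```

Exact integers are never perturbed, since their fractional part is zero.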

Now, is this an architectural question: should or could randomized rounding be
provided by a floating point unit? Assuming you can cheaply obtain a random
number in hardware (amplify transistor noise?) doing randomized rounding in
the floating point unit would be a lot faster than doing so in software,
where you'd have to extract and play around with the guard bits.
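To make the software cost concrete, here is a sketch of my own (assuming
IEEE-style formats, which postdate some of the hardware in this thread) that
rounds a double down to single precision by randomizing the 29 discarded
fraction bits, exactly the bit-fiddling a hardware unit would hide:

```python
import random
import struct

def stochastic_round_to_single(x, rng=random):
    """Round a double to single precision, rounding the magnitude up
    with probability proportional to the 29 discarded fraction bits.
    Sketch only: assumes a finite, normal value within single range."""
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    discarded = bits & ((1 << 29) - 1)    # the bits a single can't keep
    bits &= ~((1 << 29) - 1)              # truncate toward zero
    if rng.getrandbits(29) < discarded:   # round up with prob discarded/2^29
        bits += 1 << 29                   # bump the kept fraction
    y = struct.unpack('<d', struct.pack('<Q', bits))[0]
    return struct.unpack('<f', struct.pack('<f', y))[0]

# 1 + 2**-24 lies exactly halfway between two singles, so it goes
# up to 1 + 2**-23 or down to 1.0 about equally often
```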

Don't jump on me, RISC proponents! I'm one of you (well, I used to be).
But this could be one of those applications where a bit of hardware support,
a few extra gates, might solve something that would require a lot of 
instructions in software. Hardware randomized rounding doesn't need to be
microcoded.

Related Q: do any signal processing chips have a randomized rounding mode?
If so, how do they do it? I don't recall that it is in the TI32010 specs.

ark@alice.UUCP (Andrew Koenig) (03/01/86)

> Now, is this an architectural question: should or could randomized rounding be
> provided by a floating point unit? Assuming you can cheaply obtain a random
> number in hardware (amplify transistor noise?) doing randomized rounding in
> the floating point unit would be a lot faster than doing so in software,
> where you'd have to extract and play around with the guard bits.

If randomized rounding is provided by a floating-point unit,
it had better be easy to turn off because using it is likely
to make debugging impossible.  Consider: if you run the same
program twice and get slightly different results, how will you
ever know if it is a bug in the program?

aglew@ccvaxa.UUCP (03/03/86)

> /* Written  4:17 pm  Feb 28, 1986 by ark@alice.UUCP in ccvaxa:net.arch */
> > Now, is this an architectural question: should or could randomized 
> > rounding be provided by a floating point unit? ...
> 
> If randomized rounding is provided by a floating-point unit,
> it had better be easy to turn off because using it is likely
> to make debugging impossible.  Consider: if you run the same
> program twice and get slightly different results, how will you
> ever know if it is a bug in the program?

True enough, and you can't reproduce transistor noise. But little pseudo-
random pattern generators could be placed in the floating point unit,
reseeded when you want reproducibility.  The advantage would be that the
work necessary to make the pseudo-random decision could be done in parallel,
in hardware.
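The sort of generator that costs only a few extra gates is a linear feedback
shift register. A sketch of my own (the 16-bit tap mask below is a standard
maximal-length choice, nothing from the postings):

```python
class LFSR16:
    """16-bit Galois LFSR: a shift register plus a few XOR gates.
    Taps 16,14,13,11 (mask 0xB400) give the maximal period, 65535."""

    def __init__(self, seed=0xACE1):
        assert seed & 0xFFFF, "an all-zero register is stuck at zero"
        self.state = seed & 0xFFFF

    def next_bit(self):
        bit = self.state & 1
        self.state >>= 1
        if bit:
            self.state ^= 0xB400
        return bit

# reseeding with the same value reproduces the same rounding decisions
a, b = LFSR16(seed=0x1234), LFSR16(seed=0x1234)
print(all(a.next_bit() == b.next_bit() for _ in range(1000)))  # True
```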

aglew@ccvaxa.UUCP (03/05/86)

>jer@peora.UUCP responds to an earlier posting of mine with Knuth's
>round always to odd/round always to even rule.

I blush - I forgot the most basic form of randomized rounding.  Round to
even is probably the best way of producing randomized rounding, since it is
really easy to hard wire, and reproducible.  Applause to Concurrent for
doing it right.  Question about the IBM Stretch mentioned by another poster
- since its randomized rounding mode was non-reproducible, how did it do it
(certainly not by round to even)?
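For reference, a sketch of the rule itself (my own decimal-flavoured version;
in binary hardware it needs only the guard/sticky bits and the last kept bit,
which is why it is really easy to hard wire):

```python
import math

def round_half_to_even(x):
    """Round to the nearest integer; on an exact .5 tie, take the
    even neighbour, so ties break up and down about equally often."""
    f = math.floor(x)
    frac = x - f
    if frac > 0.5:
        return f + 1
    if frac < 0.5:
        return f
    return f if f % 2 == 0 else f + 1

print([round_half_to_even(x) for x in (0.5, 1.5, 2.5, 3.5)])  # [0, 2, 2, 4]
```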

However, round to even (or round to odd) can produce systematic errors,
although I'd have to bend things to get an example where this is important.
Consider any situation where you have to keep a total T: divide every entry
by N, and if the quotients all tie in the same direction, round to even (or
round to odd) will round every one of them the same way.  Thus you can
differ from your total by N(P/2), where P is the precision of your least
significant digit.  With true randomized rounding you are likely to differ
only by about sqrt(N)(P/2), although in the worst case you might be just as
bad.  Do this a few million times, and you might discover where all the dark
mass in the universe is (:-).
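A contrived demonstration of my own, per the "bend things" caveat above: make
every quotient tie against an even integer, so round-to-even drops every half.

```python
import random

N = 1000
values = [2 * k + 0.5 for k in range(N)]  # every tie sits next to an even integer

# round-half-to-even (Python's built-in round) drops every tie: drift of N*(P/2)
det_error = sum(round(v) - v for v in values)

# randomized 50/50 rounding: errors mostly cancel, drift only ~ sqrt(N)*(P/2)
rng = random.Random(42)
rand_error = sum(int(v) + (rng.random() < 0.5) - v for v in values)

print(det_error)        # -500.0, systematic
print(abs(rand_error))  # a handful, not hundreds
```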

But you're probably right - this is not an important question for general
purpose computer architectures. Special purpose?

----

While we're on the topic, and you probably mentioned this on the net a few 
months ago, and probably a few years ago before that, has any machine ever
implemented interval arithmetic à la Knuth: all floating point numbers
represented by a pair (lowest possible, highest possible)?
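The representation itself is simple to sketch (a toy of my own; a real
implementation must also round each lower endpoint down and each upper
endpoint up, which takes directed rounding modes that Python does not expose):

```python
class Interval:
    """A pair (lowest possible, highest possible) tracking a value's range."""

    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # the extremes of a product lie at the endpoint combinations
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x, y = Interval(1, 2), Interval(-1, 3)
print(x + y)  # [0, 5]
print(x * y)  # [-2, 6]
```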