hkr4627@acf4.UUCP (Hedley K. J. Rainnie) (04/26/85)
I am attempting to perform limited-precision fractional math on a digital computer. The idea is to not use a float where a fraction would do: fraction arithmetic is just integer multiplication, addition, and so on, so no floating-point nastiness need creep into the calculation. What troubles me is how to handle normalization efficiently. I was wondering whether anyone in the mathematics community has heard of procedures that deal with this problem. I feel that for my applications it will be much faster than floating-point work. Thank you, Hedley.
karsh@geowhiz.UUCP (Bruce Karsh) (04/27/85)
> I am attempting to perform limited precision fractional math using a digital
> computer. The idea is to not use a float where a fraction would do. A
> fraction also is just integer multiplication,addition,etc... No floating
> point nastiness need creep into the calculation. I am troubled by ways to
> efficiently handle normalization.

If you want to keep the calculations exact, then probably the best you can do is to use Euclid's algorithm to find the greatest common divisor of the numerator and denominator and reduce the fraction to lowest terms.
--
Bruce Karsh                      | U. Wisc. Dept. Geology and Geophysics
1215 W Dayton, Madison, WI 53706 | This space for rent.
(608) 262-1697                   | {ihnp4,seismo}!uwvax!geowhiz!karsh
ads@mit-hermes.ARPA (Sathya D.Narayanan) (04/27/85)
The Art of Computer Programming, Vol. 2, by D. E. Knuth lists the references on p. 315. Read the ones by Matula and Kornerup; they should give all the details you need. Don't expect it to be faster than floating point, though, especially if you are on a machine that has floating-point hardware. If the machine has floating-point hardware, then you should maybe just use it if you aren't too concerned about exact results. The rational stuff can give you exact results, since you never have to approximate, provided you represent both numerator and denominator as large enough integers. -ads
ken@turtlevax.UUCP (Ken Turkowski) (04/30/85)
In article <920001@acf4.UUCP> hkr4627@acf4.UUCP (Hedley K. J. Rainnie) writes:
>I am attempting to perform limited precision fractional math using a digital
>computer. The idea is to not use a float where a fraction would do. A
>fraction also is just integer multiplication,addition,etc... No floating
>point nastiness need creep into the calculation. I am troubled by ways to
>efficiently handle normalization. I was wondering whether anyone in the
>mathematics community has heard of procedures that deal with this problem.
>I feel that for my applications it will be much faster than floating point
>work.

There are two commonly used fixed-point fractional formats: those that include 1 and those that don't. For 16-bit two's complement fractions, they are:

	s.fffffffffffffff	and	su.ffffffffffffff

where "s" is the sign bit, "f" is a fraction bit, "u" is a units bit, and "." is the radix point.

The first format is the most widely used, especially in the signal-processing community. This is probably because TRW and a host of others make multiplier chips that support this format with rounding. You can get something awfully close to 1 with 0.111111111111111, but don't be tempted to use 1.000000000000000 as 1, because it is really -1 (note the sign bit).

Addition and subtraction are easy, because they are identical to ordinary two's complement addition and subtraction. Multiplication needs to be done in a few assembly instructions, because C does not support fractional arithmetic. Roughly, the product of two fractions is the high part of the product of the two numbers considered as integers. More specifically, the product of two numbers of the form s.fffffffffffffff (sign plus 15 fraction bits) is ss.ffffffffffffffffffffffffffffff (two sign bits plus 30 fraction bits). Normal integer multiplication truncates off the top of the product, whereas fractional multiplication truncates off the bottom, after first throwing out one of the duplicated sign bits.
This is the only "normalization" necessary. One of the features of fractional multiplication is that there is no overflow: the product of any two fractions between -1 and 1 remains between -1 and 1. Division, on the other hand, is very prone to overflow. What happens when you divide 0.5 by 0.25? You get 2.0, which is not representable in a fractional format.

I prefer the second format, because you can represent 1 exactly, at the expense of giving up one fraction bit.

Below you will find PDP-11 assembly code for fractional multiplication of numbers in the s.fff... format. Because it is so simple, translation to any other machine should be trivial. Also included is a fixed-point square-root routine, which requires an even number of fraction bits (it works with su.fffff... fractions as well as integers). The code for division is left to the reader, and is only slightly more difficult than multiplication.

There is no restriction to 16 bits; for any of the modern computing machines, 32 bits would be more suitable, especially if your machine has a 32x32 --> 64-bit multiply.
-----------------------------------------------------------------
# This is a shell archive.  Remove anything before this line, then
# unpack it by saving it in a file and typing "sh file".  (Files
# unpacked will be owned by you and have default permissions.)
#
# This archive contains:
#	frmul.s
#	sqrt.s

echo x - frmul.s
cat >"frmul.s" <<'//E*O*F frmul.s//'
.globl	_frmul
_frmul:
	mov	2(sp),r0	/* Get the first parameter */
	mul	4(sp),r0	/* Multiply it by the second */
	ashc	$1,r0		/* Shift out the extra sign bit */
	add	$100000,r1	/* Rounding */
	adc	r0		/* Carry the rounding up to the high word */
	rts	pc		/* Return */
//E*O*F frmul.s//

echo x - sqrt.s
cat >"sqrt.s" <<'//E*O*F sqrt.s//'
.globl	_sqrt
.text
_sqrt:				/ int sqrt((long) arg)
~~sqrt:				/ 256ths come out with 16ths, longs with ints
~arg=4				/ high=4, low=6
~div=r1
~rem=r2
~counter=r4
	jsr	r5,csv
	clr	r0
	clr	r2
	mov	4(r5),r3	/ Enter high word
	jsr	pc,Lroot	/ Calculate root for top word
	mov	6(r5),r3	/ Enter low word
	jsr	pc,Lroot	/ Calculate root for bottom word
	jmp	cret
Lroot:
	mov	$8,r4		/ Load counter to loop for the entire high word
Lrtlp:
	ashc	$2,r2		/ Bring in the next two bits
	asl	r0		/ Get ready for the next bit in the root
	mov	r0,r1		/ Test divisor = 2 * (partial root) + 1
	asl	r1
	inc	r1
	sub	r1,r2		/ Compute next partial remainder
	jmi	Lrestor		/ Test for divisibility
	inc	r0		/ Enter a 1 into the square root
	sob	r4,Lrtlp
	rts	pc
Lrestor:
	add	r1,r2		/ Test divisor doesn't divide partial remainder
	sob	r4,Lrtlp
	rts	pc
.globl
.data
//E*O*F sqrt.s//
exit 0
--
Ken Turkowski @ CADLINC, Menlo Park, CA
UUCP: {amd,decwrl,hplabs,nsc,seismo,spar}!turtlevax!ken
ARPA: turtlevax!ken@DECWRL.ARPA