**jlh@loral.UUCP (Bilbo Baggins)** (02/14/86)

Got a little problem I could use some help with. I have some software that needs to read 12-bit integers, compute min, max, average, and some specialized calculations, then output the results in 32-bit floating point format. Now it seems to me that I can save time by doing my intermediate steps on the 12-bit integers and converting the results to 32-bit floats just before outputting them, as opposed to converting each int to a float before doing my calculations. This seems to work, but I need PROOF. The one thing I do remember from my math classes is probably wrong anyway. Also, I don't need to worry about overflow or underflow with my 12-bit numbers; it's guaranteed not to happen.

Please mail the results to me, as I don't normally read this newsgroup. Thanks in advance.

Jim Harkins
Loral Instrumentation, San Diego
{ucbvax, ittvax!dcdwest, akgua, decvax, ihnp4}!sdcsvax!sdcc6!loral!jlh