davet@oakhill.UUCP (Dave Trissel) (12/10/84)
[...] executing its instruction stream in parallel.  Operations are carried
out in strict accordance with the IEEE Floating-Point Specification (P754)
Rev 10.0 and with 80 bits of precision.  The standard clock rate of 16.666
MHz is assumed, and all times are in microseconds.

                              ---------- Memory to Reg ----------
  INSTRUCTION    Reg-to-Reg    Single      Double     Extended
  FMOVE (in)        1.5          3.0         3.4        3.3
  FADD/FSUB         2.8          4.3         4.6        4.5
  FSGLMUL           3.1          4.6         ---        ---
  FSGLDIV           3.8          5.3         ---        ---
  FMUL              4.0          5.5         5.8        5.7
  FDIV              5.9          7.4         7.7        7.6
  FSIN/FCOS        23.0         24.5        24.9       24.8
  FSINCOS (1)      25.5         27.0        27.4       27.3
  FTAN             27.2         28.7        29.1       29.0
  FASIN            33.8         35.3        35.7       35.6
  FACOS            35.9         37.4        37.7       37.6
  FATAN            22.7         24.2        25.5       24.4

  (1) This instruction returns BOTH the sine and cosine of a value.

Note that these are only a small portion of the functions and instructions
available.

Would anyone having times for transcendentals (including square root) for
the National NS32081 please post them to me?  If there is enough interest
in comparing both chips, I will post a timing contrast on the net.

Motorola Semiconductor          Dave Trissel
Austin, Texas                   {ctvax,ihnp4,seismo,gatech}!ut-sally!oakhill!davet
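A note on reading the table: at the stated 16.666 MHz clock, the microsecond
figures convert directly into cycle counts.  A minimal sketch of that
arithmetic (the clock rate and timings come from the post above; the program
itself is only an illustration, not Motorola data):

    /* Convert a few of the published 68881 reg-to-reg timings from
     * microseconds to clock cycles at the assumed 16.666 MHz clock. */
    #include <stdio.h>

    int main(void)
    {
        static const char   *name[]    = { "FMOVE", "FADD/FSUB", "FMUL", "FDIV" };
        static const double  time_us[] = { 1.5, 2.8, 4.0, 5.9 };
        const double clock_mhz = 16.666;
        int i;

        for (i = 0; i < 4; i++)                 /* cycles = microseconds * MHz */
            printf("%-10s %4.1f us = ~%.0f cycles\n",
                   name[i], time_us[i], time_us[i] * clock_mhz);
        return 0;
    }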
nather@utastro.UUCP (Ed Nather) (12/10/84)
[-+-- begone!]

Perhaps the most interesting thing about these timings is that there is
essentially NO PENALTY IN USING DOUBLE vs SINGLE PRECISION in the basic
floating point operations.  There has been much discussion on the net
about whether the C language is flawed because it specifies that all
floating operations be done in double precision, despite the "obvious
gain in speed" that would result from using single precision operations
where they offer enough precision.

Looks like it just depends on whose chip you use.
--
Ed Nather
{allegra,ihnp4}!{ut-sally,noao}!utastro!nather
Astronomy Dept., U. of Texas, Austin
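For reference, the C rule under discussion is the K&R-era requirement that
float operands be widened to double before any arithmetic is done (later
ANSI C relaxed this).  A minimal sketch of what the rule means in practice;
the variable names and values are purely illustrative:

    /* Under the K&R rule, both products below are computed in double
     * precision even though every operand is declared float; the result
     * is truncated back to float only on assignment. */
    int main(void)
    {
        float a = 1.5, b = 2.5, c;

        c = a * b;          /* a, b widened to double, multiply in double */
        c = a * b + 1.0;    /* the constant 1.0 is a double in any case   */
        return (int) c;
    }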
crandell@ut-sally.UUCP (Jim Crandell) (12/11/84)
> There has been much discussion on the net about whether the C language
> is flawed because it specifies that all floating operations be done in
> double precision, despite the "obvious gain in speed" that would result
> from using single precision operations where they offer enough precision.
>
> Looks like it just depends on whose chip you use.

Yeah, NOW.  C has been around several years now, and it seems to me that
FORTRANers have been using that excuse not to look at C (not that they
don't have others at least as good) for most of them.  I'm a C hacker
myself, and I do very little number-crunching these days, but I still
believe that the criticism was completely valid when it first appeared.

Besides, it's rather easy to show that DMR (I assume) wasn't thinking of
the 68881 when he decided to restrict evaluation to double [:-)].  He MAY,
of course, have assumed -- as nearly all of us have finally come to
realize -- that doing things in hardware gets easier as time goes by, but
extending that general observation to a prediction of a monolithic FPU
with an 80-bit path would surely have required a pretty phenomenal
crystal ball.  As for what netters are saying nowadays: well, some of
them just need some new material, I agree.
--
Jim Crandell, C. S. Dept., The University of Texas at Austin
{ihnp4,seismo,ctvax}!ut-sally!crandell
rf@wu1.UUCP (12/12/84)
Ed Nather ({allegra,ihnp4}!{ut-sally,noao}!utastro!nather) writes:
> Perhaps the most interesting thing about these timings is that there
> is essentially NO PENALTY IN USING DOUBLE vs SINGLE PRECISION in the
> basic floating point operations.
Say WHAT? The timing chart shows single precision multiply as about
1/4 faster than double precision multiply. Single precision divide is
similarly faster than double precision divide. There is also a penalty
for fetching double-precision constants from memory.
"Orion shall rise!" Randolph Fritz
UUCPnet: {ihnp4,decvax}!philabs!wu1!rf
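For what it's worth, the register-to-register figures in the table bear
this point out.  A quick check of the arithmetic (the numbers are taken
from Dave Trissel's table; the program is just an illustration):

    /* Relative speed of the single-precision multiply/divide forms
     * versus the double-precision ones, reg-to-reg, in microseconds. */
    #include <stdio.h>

    int main(void)
    {
        double fsglmul = 3.1, fmul = 4.0;   /* single vs. double multiply */
        double fsgldiv = 3.8, fdiv = 5.9;   /* single vs. double divide   */

        printf("multiply: single ~%.0f%% faster\n",
               100.0 * (fmul - fsglmul) / fmul);   /* about 22% */
        printf("divide:   single ~%.0f%% faster\n",
               100.0 * (fdiv - fsgldiv) / fdiv);   /* about 36% */
        return 0;
    }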
henry@utzoo.UUCP (Henry Spencer) (12/12/84)
> ... Would anyone having times for transcendentals (including square
> root) for the National NS32081 please post them to me?  If there is
> enough interest in comparing both chips, I will post a timing contrast
> on the net.

Would anyone outside Motorola who has actually seen a 68881 please post
word to that effect?  Many people have seen 32081s (well, 16081s before
the Great Renumbering).  For that matter, I note that the 68881 timings
appear to have been calculated rather than measured -- the line-eater
bug got the early part of the intro, so I'm not certain -- and thus
should be taken with a large bag of salt.  Real Soon Now chips *ALWAYS*
look better than ones that are already available...

I don't doubt that the 68881 is going to be a superior chip, especially
since it has full IEEE functionality instead of the drastic subset that
the National chip implements.  I just wish Motorola would stop talking
about it and start selling it.
--
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,linus,decvax}!utzoo!henry
henry@utzoo.UUCP (Henry Spencer) (12/12/84)
Now, if Motorola will only get its act in order on MMUs, it may be real
competition for the National parts...  Anybody know when the decent MMU
for the 68020 is due out?
--
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,linus,decvax}!utzoo!henry
ken@turtlevax.UUCP (Ken Turkowski) (12/15/84)
In article <896@utastro.UUCP> nather@utastro.UUCP (Ed Nather) writes:
> Perhaps the most interesting thing about these timings is that there is
> essentially NO PENALTY IN USING DOUBLE vs SINGLE PRECISION in the basic
> floating point operations.  There has been much discussion on the net
> about whether the C language is flawed because it specifies that all
> floating operations be done in double precision, despite the "obvious
> gain in speed" that would result from using single precision operations
> where they offer enough precision.
>
> Looks like it just depends on whose chip you use.

Would you like C to do 64-bit arithmetic in all fixed-point computations,
if all you needed was 16 or 32?  I'd say that the 68881 wastes a lot of
time and power computing the extra bits.
--
Ken Turkowski @ CADLINC, Menlo Park, CA
UUCP: {amd,decwrl,nsc,spar}!turtlevax!ken
ARPA: turtlevax!ken@DECWRL.ARPA
jmc@ist.UUCP (John Collins) (12/21/84)
I remember in my student days using an IBM 370/165 and discovering that
double was FASTER than single precision floating point, since the
microcode did everything in double precision and the single ops were done
by padding the operands and truncating the results.....

Maybe that is the rationale for C working in double....

(What was fun was when some user rang up while I was doing "program
advisor" duty and said "How much slower will my program run if I run it
in double precision?" - a silly question to which I could truthfully
answer "It'll run faster!!")
--
John Collins, calling courtesy of ist
Please reply to ...!mcvax!ist!inset!jmc
Phone: +44 727 57267
Snail: 47 Cedarwood Drive, St Albans, Herts, AL4 0DN, England
dsmith@hplabsc.UUCP (David Smith) (12/27/84)
> I remember in my student days using an IBM 370/165 and discovering that
> double was FASTER than single precision floating point, since the
> microcode did everything in double precision and the single ops were
> done by padding the operands and truncating the results.....
>
> Maybe that is the rationale for C working in double....

C grew up on the PDP-11/45 and /70, which had a mode bit to specify
whether floating point was to be done in single or double precision.
The C implementation did everything in double to avoid forcing the
compiler to change and track the mode bit.

David Smith
Hewlett-Packard Laboratories
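An illustrative sketch (not from the original posts) of the mode-bit
problem David Smith describes: with float and double operands mixed in one
expression, single-precision evaluation would force a PDP-11 compiler to
flip the FP11's mode bit (via SETF/SETD, if memory serves) in the middle
of the expression, whereas evaluating everything in double lets it leave
the processor in double mode throughout.

    /* Hypothetical mixed expression; names are illustrative only. */
    double mixed(float f, float g, double d)
    {
        /* Single-precision rule: compute f * g in single mode, then
         * switch to double mode for the add with d -- a mode change
         * inside one expression that the compiler must track.
         * K&R double-everything rule: widen f and g and stay in double
         * mode for the whole expression; no mode tracking needed. */
        return f * g + d;
    }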