eugene@ames.UUCP (Eugene Miya) (09/13/85)

Laura interpreted correctly what I said about methods in computer science
tending to be sloppy.  We have a Cyber 205 here, and three of us are trying
to program a test to determine whether triadic arithmetic units are cost
effective.  [See the beginnings of the Null Hypothesis?]  The issues
involved include: memory contention, pipeline startup, symmetry, different
operations, system overhead [does the OS decide to page halfway through
your 65K-long array?], the quality of the system clock, fooling potentially
smart compilers, etc.  We've had three days without writing much code.

We thought the test would start as:

	Test	Time:
		Loop:	T = A * B
			D = T + C
	Versus
		Time:
		Loop:	D = A * B + C

where A, B, C, D, and T are all contiguous arrays.  What factors are
extraneous?  What factors are significant?  What things can be subtracted
out as overhead?

The above test turns out to be too naive.  A smart compiler should
recognize the above expression and perform a strength-reduction operation,
and the times should be equal.  What about register allocation?  And so
forth.  This has become an experiment design of about 5 factors.

Architectures I know using triads include the Cyber 205 and the FPS series.
It does not yet appear cost-effective in micros or non-"vector" CPUs.

Many of the methods in computer science would leave us with simple (but
naive) tests.  This is the iterative (self-correcting) beauty of the
sciences.  Oh, for a simpler field :-).

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  {hplabs,ihnp4,dual,hao,decwrl,allegra}!ames!aurora!eugene
  emiya@ames-vmsb
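[A minimal modern sketch of the comparison described above, in Python rather
than the 205's vector Fortran.  The function names, array contents, and the
timing harness are illustrative assumptions, not the authors' actual test;
it only shows the shape of the experiment: the dyadic two-pass form versus
the fused triad form, each timed separately.]

```python
# Sketch of the dyad-vs-triad timing test.  All names here are
# hypothetical; the real test ran on a Cyber 205, not in Python.
import time

N = 65_000                      # the 65K-long contiguous arrays mentioned above
A = [1.5] * N
B = [2.0] * N
C = [0.5] * N

def dyadic(A, B, C):
    """Two passes: T = A * B into a temporary, then D = T + C."""
    T = [a * b for a, b in zip(A, B)]
    return [t + c for t, c in zip(T, C)]

def triad(A, B, C):
    """One fused pass: D = A * B + C."""
    return [a * b + c for a, b, c in zip(A, B, C)]

def timed(f, *args):
    """Wall-clock one call; a real design would repeat and subtract overhead."""
    t0 = time.perf_counter()
    result = f(*args)
    return time.perf_counter() - t0, result

t_dyadic, D1 = timed(dyadic, A, B, C)
t_triad, D2 = timed(triad, A, B, C)

# The two forms must agree numerically before their times mean anything.
assert D1 == D2
```

Even this toy version runs into the complaints above: a clever runtime could
fuse the two passes itself, and a single timing gives no handle on clock
quality or paging, which is why the real design grew to about five factors.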
Laura interpreted correctly what I said about methods in computer science tending to be sloppy. We have a Cyber 205 here, and three of us are trying to program a test to determine whether triadic arithmetic units are cost effective. [see the beginnings of the Null Hypothesis?] The issues involved include: memory contention, pipeline startup, symmetry, different operations, system overhead [does the OS decide to page half way thru your 65K long array?], the quality of the system clock, fooling potentially smart compilers, etc. We've have three days without writing much code. We thought the test would start as: Test Time: Loop: T = A * B D = T + C Versus Time: Loop: D = A * B + C where A,B,C,D,T are all contiguous arrays. What factors are extraneous? What factors are significant? What things can be subtracted out as overhead? The above test turn out to be too naive. A smart compiler should recognize the above expression and perform a strength reduction operation and the times should be equal. What about register allocation? And so forth. This has become an experiment design of about 5 factors. Architectures I know using triads include the Cyber 205 and the FPS-series. It does not yet appear cost-effective in micros or non-"vector" CPUs. Many of the methods in computer science would leave us with simple but (naive) tests. This is the iterative (self-correcting) beauty of the sciences. Oh, for a simpler field :-). From the Rock of Ages Home for Retired Hackers: --eugene miya NASA Ames Research Center {hplabs,ihnp4,dual,hao,decwrl,allegra}!ames!aurora!eugene emiya@ames-vmsb