tim@amdcad.UUCP (05/01/87)
In article <698@edge.UUCP> doug@edge.UUCP (Doug Pardee) writes:
>I wasn't referring to efficiency. I was referring to predictability. I can
>look in my book here and find that on a 10MHz 68000, the instruction
>	MOVE.W	6(A0),D0
>will take exactly 1.6+.1w microseconds, for a "w" wait-state memory. I
>haven't the vaguest idea how long it will take to execute
>	temp1 = ptr1->count;
>and what's worse, a change to some other part of the same module could
>cause this statement to be faster or slower (as the result of global
>optimization).

Okay? Good luck "looking in your book" to find the execution time on a
68020 or '030! (Let's see, is the instruction in the cache? Hmmm, maybe
the prefetch buffer... Now, how about the data -- wait a minute! This can
possibly overlap with the previous instruction's execution; now what was
*that*?)

Most, if not all, of the newer processors have fairly non-deterministic
execution times; this is the price you pay for higher performance. This is
not usually a problem, however, since realtime systems are usually
interrupt driven, and you just need to guarantee that the interrupt
response time is sufficient.

--
	Tim Olson
	Advanced Micro Devices
	Processor Strategic Development
	(tim@amdcad.AMD.COM)
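To make the comparison concrete, here is a sketch of what a typical 68000
C compiler might do with Doug's statement. The struct layout and register
assignment are invented for illustration (nothing in the original posting
specifies them); the point is that on a plain 68000 the C statement is
about as predictable as the hand-written instruction, and that on a
68020/'030 the timing of *both* versions becomes cache-dependent.

	#include <stdio.h>

	/* Hypothetical layout -- offsets invented so that "count"
	 * lands at offset 6, matching the quoted MOVE.W example.
	 */
	struct thing {
		long	id;	/* offset 0 */
		short	flags;	/* offset 4 */
		short	count;	/* offset 6 */
	};

	int main(void)
	{
		struct thing t;
		struct thing *ptr1 = &t;
		short temp1;

		t.count = 42;

		/* With ptr1 in A0, a typical compiler emits roughly:
		 *	MOVE.W	6(A0),D0	; fetch count
		 *	MOVE.W	D0,temp1	; store into temp1
		 * i.e. the very instruction whose timing is in the
		 * book, plus a store.
		 */
		temp1 = ptr1->count;

		printf("%d\n", temp1);
		return 0;
	}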
gwyn@brl-smoke.ARPA (Doug Gwyn) (05/01/87)
Generally, if one can bound the worst-case response of the system, and if
the maximum possible response time is still sufficient, then that's good
enough. Also, most reasonable data acquisition systems are buffered; if
one can guarantee that the buffering is sufficient so that no overrun will
occur, that also is good enough. This may involve guaranteed statistical
behavior of a non-deterministic system.

Note that an upper bound can be placed on the time taken by the code
generated for almost any C expression. One may have to make worst-case
assumptions, such as a jmp being generated for every conditional, but most
C compilers I've worked on have code generation tables that can be used to
make such predictions.
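The table-driven bound Doug describes can be sketched in a few lines of C.
The cycle counts below are invented for illustration (real numbers would
come from the processor manual and the compiler's own code generation
tables), but the shape of the computation is the point: enumerate the
operations an expression compiles to, assume the worst case for each
(e.g. a jmp for every conditional), and sum.

	#include <stdio.h>

	/* Invented worst-case cycle counts for a hypothetical 10MHz
	 * 68000 code generator; real values would come from the CPU
	 * manual and the compiler's code generation tables.
	 */
	struct op_cost {
		const char *op;
		int max_cycles;
	};

	static const struct op_cost table[] = {
		{ "load_indirect", 16 },	/* e.g. MOVE.W 6(A0),D0 */
		{ "store",         12 },
		{ "compare",        8 },
		{ "cond_branch",   18 },	/* assume the jmp is taken */
	};

	int main(void)
	{
		/* Worst case for:  if (p->count > 0) total = p->count;
		 * = load + compare + branch + load + store
		 */
		int cycles = table[0].max_cycles + table[2].max_cycles +
			     table[3].max_cycles + table[0].max_cycles +
			     table[1].max_cycles;

		printf("worst case: %d cycles = %d.%d us at 10MHz\n",
		       cycles, cycles / 10, cycles % 10);
		return 0;
	}

Since every entry is a worst case, the sum is a valid (if pessimistic)
upper bound, which is exactly what you need to verify interrupt response
or buffer sizing.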