gillies@m.cs.uiuc.edu (04/10/90)
> A brief examination of the 68020 instruction times pointed out a _wide_
> variation in instruction timings, depending on cache hits.  Obviously,
> normal multi-tasking operations such as preemption will wreak havoc with
> cache hits at unpredictable times.

Although I don't actually design real-time systems (I just research
scheduling), my rather extreme opinion is that the 68020 is a bullsh*t
processor for implementing real-time systems.

Even if you disable the 68020's caches, you still must worry that your
runtime data may be misaligned in memory.  The 68020 will happily
compensate for this, slowing down your program considerably.  In my
opinion, you should stop thinking about the 68020 for real-time
systems, and start thinking about highly deterministic RISC processors
such as the R2000, or perhaps the MC88000.

--------------------------------------------------------------------

Now here's a plug for our group's research.  In some hard real-time
systems, researchers have advanced the idea that if a task runs longer
than its predicted execution time, an error exception should be raised
in the task.  This exception must be handled and an "acceptable" result
produced very quickly after the exception is raised.

One way to do this is to have a "primary" (long) algorithm trying to
get the best possible result, and a "fallback" algorithm that kluges
something quickly when the system determines the primary algorithm is
taking too long.  Another way to do this is to use an algorithm whose
results converge monotonically toward the optimal solution.  At first a
"mandatory" result is produced.  Later, intermediate "optional" results
are recorded.  If the timer expires, the most accurate result yet
computed becomes the final result.  This type of system has been dubbed
the "imprecise computation model" in the literature.
Supposedly, this type of system can soak up 100% of the processor
cycles, and behaves especially well under transient overloads, as long
as the initial "mandatory" results can be produced quickly.  I.e.,
during a transient overload the system produces a bunch of crappy
results while compensating for the transient, but presumably the
system's design specification can tolerate very coarse accuracy for
short periods of time, so the system survives and continues to meet its
real-time constraints.  Perhaps later, precision can increase to "make
up" for the sloppiness during the transient overload.

Of course, both of these paradigms depend on being able to recast your
entire real-time system into this framework, which may be very
difficult.

Don Gillies, Dept. of Computer Science, University of Illinois
1304 W. Springfield, Urbana, Ill 61801
ARPA: gillies@cs.uiuc.edu   UUCP: {uunet,harvard}!uiucdcs!gillies
schmitz@fas.ri.cmu.edu (Donald Schmitz) (04/12/90)
In article <70900012@m.cs.uiuc.edu> gillies@m.cs.uiuc.edu writes:
>.. my rather extreme opinion is that the 68020 is a bullsh*t
>processor for implementing real-time systems.
>
>Even if you disable the 68020's caches, you still must worry that your
>runtime data may be misaligned in memory.  The 68020 will happily
>compensate for this, slowing down your program considerably.  In my
>opinion, you should stop thinking about the 68020 for real-time
>systems, ...

Actually, few if any compilers will generate such misaligned data (a
few of the sloppy ones may put 4-byte objects on 2-byte boundaries, but
this has surprisingly little effect on typical execution times, and it
can usually be avoided).  The '020 also has about the simplest cache
imaginable: it is only an I-cache, and it holds a single instruction
per line.  Turning off the cache on an '020 should always yield
worst-case performance (assuming no external caching).

The '020 is also a bargain today (which is often important for
real-world real-time systems): the new '030 products have pushed the
cost of single-board '020 systems down to the $3K/board range.  For
that price you usually get a 20MHz CPU/FPU, 4M of 0-1 wait-state
memory, plus some mix of timers and serial ports.

>Now here's a plug for our group's research.  In some hard real-time
>systems, researchers have advanced the idea that if a task runs longer
>than its predicted execution time, an error exception should be raised
>in the task.  This exception must be handled and an "acceptable"
>result produced very quickly after the exception is raised.

At my last job, we implemented a research system that supported this
(on an '020); I'm waiting to see if actual applications (they were
using it for robot control) can take advantage of the feature.  I'd be
interested in hearing of any results (theoretical or applications)
anyone else has along these lines...

Don