vestal@SRC.Honeywell.COM (Steve Vestal) (01/30/91)
(I'm cross-posting this message on the impacts of caching in real-time systems to comp.realtime. Should discussions of architectural considerations for trustworthy and/or real-time systems be moved to comp.realtime?)

In article <2288@tuvie.UUCP> alex@vmars.tuwien.ac.at (Alexander Vrchoticky) writes:

> vestal@SRC.Honeywell.COM (Steve Vestal) writes:
>>First, it has been argued that some real-time code is sufficiently
>>deterministic (i.e. memory access patterns are largely independent of input
>>data) that cache behavior will be largely deterministic.
>
> One should point out that this does not generally hold for systems where
> context switches can occur at arbitrary points in time.
> One task's cache hit is another task's cache miss.

Yes, the impact of caching on context switch times is a slightly different issue than the impact of caching on worst-case (unpreempted) execution times for individual tasks. In some cyclically scheduled systems there are no real preemptions. In rate monotonically scheduled systems, it is possible to include context switch times in the performance model. In general, one context switch (or two, depending on how you define a switch) will eventually occur as a result of each task dispatch in a system (but this is effectively arbitrary if your scheduling model doesn't allow you to include it).

My conjecture is that if you include the time required to completely flush/fill the cache, TLB, etc. in the context switch time, then you will be safe. This means that any speed-up due to caching is discounted when bounding context switch times but not necessarily when bounding task execution times (for those willing to assume the memory access patterns of their tasks are largely data-independent, of course). (Personally, I would be hesitant to accept such an analysis for a safety-critical system, but there are applications where this approach should be reasonable.)
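The accounting above can be sketched as a standard rate-monotonic response-time iteration in which each dispatch is charged two context switches (switch in plus switch out), with the switch cost assumed to already cover a full cache/TLB flush and refill. This is only an illustrative sketch; the task parameters and the factor of two per dispatch are invented for the example, not taken from any particular system.

```python
# Sketch: rate-monotonic response-time analysis with cache/TLB reload
# cost folded into the context-switch charge.  All numbers are invented.
import math

def response_time(tasks, cs):
    """tasks: list of (period, wcet) sorted by period (shortest period =
    highest priority, i.e. rate-monotonic order).  cs: cost of one context
    switch, assumed to include a complete cache/TLB flush and refill.
    Each dispatch is charged 2*cs (one switch in, one switch out).
    Returns worst-case response times, or None if a deadline (= period)
    is missed."""
    results = []
    for i, (t_i, c_i) in enumerate(tasks):
        r = c_i + 2 * cs
        while True:
            # Interference from all higher-priority tasks released in [0, r)
            interference = sum(math.ceil(r / t_j) * (c_j + 2 * cs)
                               for t_j, c_j in tasks[:i])
            r_new = c_i + 2 * cs + interference
            if r_new == r:
                break
            if r_new > t_i:
                return None
            r = r_new
        results.append(r)
    return results

# Example task set: (period, worst-case execution time), plus 0.2 time
# units of switch overhead each way.
print(response_time([(10, 2), (20, 4), (50, 10)], cs=0.2))
```

Note that because the flush/fill time is charged on every dispatch, any cache speed-up is discounted in the switch term exactly as described above, while the per-task WCETs may still assume warm-cache behavior if one accepts the data-independence argument.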
Steve Vestal
Mail:  Honeywell S&RC MN65-2100, 3660 Technology Drive, Minneapolis MN 55418
Phone: (612) 782-7049
Internet: vestal@src.honeywell.com
sjohn@poland.ece.cmu.edu (John Sasinowski) (01/31/91)
Dave Kirk, who just finished his Ph.D. here at Carnegie Mellon University, wrote his dissertation on a cache design for real-time systems. This design, called SMART caching (Strategic Memory Allocation for Real Time), provides predictable execution times for tasks that run in the cache.

Let me know if you want more information about this. I can send you a list of references if you wish.

John Sasinowski
sjohn@poland.ece.cmu.edu
huy@rainbow.asd.sgi.com (Huy Nguyen) (02/02/91)
| Dave Kirk, who just finished his Ph.D. here at Carnegie Mellon University,
| wrote his dissertation on a cache design for real-time systems. This
| design, called SMART caching (Strategic Memory Allocation for Real Time),
| provides predictable execution times for tasks that run in the cache.
|
| Let me know if you want more information about this. I can send you
| a list of references if you wish.
|
| John Sasinowski
| sjohn@poland.ece.cmu.edu

This sounds like an interesting paper. Could you post this information to the net so others can see it?

huy@sgi.com
gillies@cs.uiuc.edu (Don Gillies) (02/03/91)
huy@rainbow.asd.sgi.com (Huy Nguyen) writes:
>| Dave Kirk, who just finished his Ph.D. here at Carnegie Mellon University,
>| wrote his dissertation on a cache design for real-time systems. This
>| design, called SMART caching (Strategic Memory Allocation for Real Time),
>| provides predictable execution times for tasks that run in the cache.

Information on this appears in the IEEE Real-Time Systems Symposium:

"SMART (Strategic Memory Allocation for Real-Time) Cache Design,"
IEEE RTSS 1989, p. 229.

"SMART (Strategic Memory Allocation for Real-Time) Using the MIPS R3000,"
IEEE RTSS 1990, p. 322.

These papers are about how to build a cache with different partitions for different processes.
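The partitioning idea can be illustrated with a toy model: each task owns a private segment of cache lines, so one task's accesses can never evict another task's lines. This is only a sketch of the general concept, not Kirk's actual design; the line size, partition sizes, and direct-mapped placement are all invented for the example.

```python
# Toy partitioned cache in the spirit of the SMART papers: each task
# gets a private set of lines, so hits/misses are task-local and
# unaffected by context switches.  All parameters here are invented.

LINE_SIZE = 16  # bytes per cache line (arbitrary for this sketch)

class PartitionedCache:
    def __init__(self, partitions):
        # partitions: {task_id: number_of_lines_owned_by_that_task}
        self.tags = {t: [None] * n for t, n in partitions.items()}

    def access(self, task, addr):
        """Return True on hit, False on miss.  Only the owning task's
        partition is ever read or filled, so no cross-task eviction
        is possible."""
        lines = self.tags[task]
        line_no = addr // LINE_SIZE
        slot = line_no % len(lines)   # direct-mapped within the partition
        if lines[slot] == line_no:
            return True
        lines[slot] = line_no         # fill the task's own slot on a miss
        return False

cache = PartitionedCache({"taskA": 4, "taskB": 4})
cache.access("taskA", 0x100)               # miss: fills A's partition
cache.access("taskB", 0x100)               # B's miss touches only B's lines
cache.access("taskA", 0x100)               # still a hit for A
```

Because each task's worst-case hit/miss pattern depends only on its own access sequence, per-task execution-time bounds become analyzable even under arbitrary preemption, which is the predictability property claimed above.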