andrew@frip.gwd.tek.com (Andrew Klossner) (06/14/88)
Some corrections to a note I posted earlier:

> "Even when copy-back is selected, the first write to a cached
> location is written through.  This updates the main memory and
> invalidates any other copies of the data that may be present in
> other caches, to ensure that no more than one cache contains a
> modified version of the data."
>
> It writes back the entire 16-byte cache line, regardless of how much of
> it was modified.  And it does so even if the page is marked "not
> global," meaning that software guarantees that the cache line is not
> present in any other cache.  Ouch!

My comments were incorrect.  They dealt with a cache miss on write, not
with a first write to a cached location (which presumably is in the
cache because of an earlier read).  On first write to a cached location,
no write-through occurs if the page is not global.

A local CMMU expert has informed me that the Motorola documentation is
incorrect in stating that the entire cache line is written back; only
the modified data is written.

> Note that, when you want to enlarge a cache, you end up buying multiple
> MMUs to go with your additional RAM.  This is pretty pricey, but it can
> provide well scheduled software with additional opportunities for
> parallelism: since memory loads and stores are pipelined, a load from
> one CMMU can wait on a page table walk while a load from a second CMMU
> can be serviced from a cache hit.

It has been pointed out to me that these additional opportunities don't
exist.  The CPU does have a three-deep data pipeline, so software can
issue three load/store instructions without stalling, but the nature of
the PBUS (the bus between CPU and CMMUs) prevents the CPU from issuing a
new load/store before the previous one completes.  (Instruction loads
can still overlap with data loads/stores because there are separate I
and D PBUSes out of the CPU.)

  -=- Andrew Klossner   (decvax!tektronix!tekecs!andrew)  [UUCP]
                        (andrew%tekecs.tek.com@relay.cs.net)  [ARPA]
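[Editor's illustration] The corrected write policy is easy to state concretely.  Below is a minimal Python sketch of a single copy-back cache line, under the behavior described above: a first write to a cached line is written through only when the page is marked global (so other caches can snoop and invalidate), and a later write-back flushes only the bytes actually modified, not the whole 16-byte line.  All class and method names are illustrative, not Motorola's.

```python
# Hypothetical model of the corrected CMMU copy-back policy.
# Names and structure are illustrative only.

LINE_SIZE = 16  # bytes per cache line

class CacheLine:
    def __init__(self, data, page_is_global):
        self.data = bytearray(data)           # line contents (from an earlier read)
        self.page_is_global = page_is_global  # software's sharing guarantee
        self.dirty = [False] * LINE_SIZE      # per-byte modified flags
        self.ever_written = False

    def write(self, offset, value, memory, base_addr):
        """CPU store into a line already present in the cache."""
        self.data[offset] = value
        self.dirty[offset] = True
        # First write to a cached line is written through only if the
        # page is global; for a not-global page, no bus traffic occurs.
        if not self.ever_written and self.page_is_global:
            memory[base_addr + offset] = value
        self.ever_written = True

    def flush(self, memory, base_addr):
        """Copy-back: write only the modified bytes, not the whole line."""
        for i, modified in enumerate(self.dirty):
            if modified:
                memory[base_addr + i] = self.data[i]
                self.dirty[i] = False
```

For example, with `page_is_global=False`, a `write` leaves main memory untouched until `flush` is called; with `page_is_global=True`, the first `write` also updates main memory immediately.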