shaw@abp.lcs.mit.edu (Andy Shaw) (01/06/90)
In article <39807@ames.arc.nasa.gov> lamaster@ames.arc.nasa.gov (Hugh LaMaster) writes:
>It is interesting to contemplate $100-300K systems, like the SGI Power Series,
>with each CPU based on an 80MHz R6000.  The possibility is there for a
>system which looks like a Cray scaled down by a factor of about 5 for scalar
>work, a factor of 15 for vector work.  At a cost of 1/30 - 1/100.  What would
>prevent this from happening?  Memory bandwidth could.  Nobody
>really wants to talk about this in public, but I bet a lot of people are
>staying up nights trying to figure out how to scale up memory bandwidth
>with processor speed.  Cheaply (If you build it like Cray does, it will
>cost like a Cray).

Actually, I was under the impression that this was no big secret --
memory bandwidth and latency are going to be the limiting factors in
the speed of computer systems (both parallel and serial) of the very
near future.

Am I wrong to say "latency" in the same sentence with "bandwidth"?  I
don't really think that they are separate issues.  I don't think
anything spectacularly interesting has been done about memory
bandwidth or latency recently -- registers, caches, and interleaving
are all old, old ideas ... what else has come around?

-Andy Shaw
tve@sprite.berkeley.edu (Thorsten von Eicken) (01/06/90)
In article <1990Jan5.193511.3879@mintaka.lcs.mit.edu> shaw@au-bon-pain.lcs.mit.edu.UUCP (Andy Shaw) writes:
>Am I wrong to say "latency" in the same sentence with "bandwidth"?

Yes, you can very often trade off latency versus bandwidth.

>I don't really think that they are separate issues.  I don't think
>anything spectacularly interesting has been done about memory
>bandwidth or latency recently -- registers, caches, and interleaving
>are all old, old ideas ... what else has come around?

Multithreading RISC processors is about to come around, i.e. having
multiple process/thread contexts loaded in your processor and
switching context whenever a long memory operation gets initiated.

>-Andy Shaw

	-Thorsten von Eicken, tve@sprite.berkeley.edu
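[To make the multithreading idea concrete, here is a minimal toy model in C.
Every name, constant, and policy below is invented for illustration -- this is
not any real machine's design.  Each hardware context is just a PC, a register
file, and a "wait until" cycle; when the running context issues a load that
misses, the processor switches to the next ready context instead of stalling,
so memory latency is hidden as long as some context has work to do.]

/*
 * Toy model of a multithreaded CPU with NCONTEXTS hardware contexts.
 * A "load miss" parks the current context for MISS_LATENCY cycles and
 * the CPU round-robins to the next ready context rather than stalling.
 */
#include <stdio.h>

#define NCONTEXTS     4
#define MISS_LATENCY  20   /* cycles until a missed load's data arrives */

struct context {
    int pc;            /* program counter                          */
    int regs[32];      /* per-context register file (unused here)  */
    int wait_until;    /* cycle at which a pending load completes  */
};

static struct context ctx[NCONTEXTS];

int main(void)
{
    int cycle, cur = 0, busy_cycles = 0;

    for (cycle = 0; cycle < 1000; cycle++) {
        /* find a context whose outstanding load (if any) has completed */
        int tries;
        for (tries = 0; tries < NCONTEXTS; tries++) {
            if (ctx[cur].wait_until <= cycle)
                break;
            cur = (cur + 1) % NCONTEXTS;
        }
        if (tries == NCONTEXTS)
            continue;                   /* every context is stalled on memory */

        busy_cycles++;
        ctx[cur].pc++;                  /* "execute" one instruction */

        /* pretend every 5th instruction is a load that misses the cache */
        if (ctx[cur].pc % 5 == 0) {
            ctx[cur].wait_until = cycle + MISS_LATENCY;
            cur = (cur + 1) % NCONTEXTS;   /* switch away instead of stalling */
        }
    }
    printf("utilization: %d%% of 1000 simulated cycles\n", busy_cycles / 10);
    return 0;
}

[With one context the pipeline would sit idle for 20 cycles on every miss;
with four, another context usually has instructions ready, so utilization
rises even though the latency of each individual load is unchanged.]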
mash@mips.COM (John Mashey) (01/06/90)
In article <1990Jan5.193511.3879@mintaka.lcs.mit.edu> shaw@au-bon-pain.lcs.mit.edu.UUCP (Andy Shaw) writes:
>Actually, I was under the impression that this was no big secret --
>memory bandwidth and latency are going to be the limiting factors in
>the speed of computer systems (both parallel and serial) of the very
>near future.
>Am I wrong to say "latency" in the same sentence with "bandwidth"?  I
>don't really think that they are separate issues.  I don't think
>anything spectacularly interesting has been done about memory
>bandwidth or latency recently -- registers, caches, and interleaving
>are all old, old ideas ... what else has come around?

Nothing much, except for issues that arise with inexpensive silicon
that sometimes change the tradeoffs in memory system design, i.e., you
sometimes get to do things with page-mode DRAMs that encourage
different organizations than what you would have done with some of
the more expensive older memory systems.

Of course, most micros in general suffer (performance-wise) from having
just 1 path from CPU to memory, compared with supercomputers/minisupers.

As in the old saw (approx.): "You can buy bandwidth, but latency is
forever, because if you break the laws of physics, God pulls you over
and gives you a speeding ticket, and He cannot be bribed."
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	{ames,decwrl,prls,pyramid}!mips!mash  OR  mash@mips.com
DDD:  	408-991-0253 or 408-720-1700, x253
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086
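[The "buy bandwidth, latency is forever" saw can be illustrated with a toy C
fragment; sizes and constants are arbitrary and chosen only to show the two
access patterns, not to benchmark any particular memory system.  The first
loop issues independent loads, so interleaved banks or page-mode DRAM can
overlap them and throughput scales with whatever bandwidth you buy.  The
second loop chases a pointer chain: each load's address depends on the
previous load's result, so every access pays the full memory latency no
matter how many banks or how wide a bus the machine has.]

#include <stdio.h>

#define N (1 << 20)

int main(void)
{
    static long a[N];
    static long next[N];
    long i, sum = 0, p;

    for (i = 0; i < N; i++) {
        a[i] = i;
        next[i] = (i * 9973 + 1) % N;   /* arbitrary chain of indices */
    }

    /* bandwidth-bound: independent loads, memory can stream/overlap them */
    for (i = 0; i < N; i++)
        sum += a[i];

    /* latency-bound: each load depends on the one before it */
    p = 0;
    for (i = 0; i < N; i++)
        p = next[p];

    printf("sum = %ld, final index = %ld\n", sum, p);
    return 0;
}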