RWK%SCRC-YUKON@sri-unix.UUCP (12/16/83)
From: Robert W. Kerns <RWK at SCRC-YUKON>
Date: Fri, 16 Dec 1983 00:48 EST

    From: LEVITT@MIT-OZ
    Date: Thursday, 15 December 1983, 23:13-EST

        From: Robert W. Kerns <RWK at SCRC-YUKON>
        To: levitt
        Re: Your WORKS flame

        One thing that bothers me about your flame is that you compare
        the Lisa with a machine we haven't made in over a year, a
        machine whose maximum memory is the same as our current
        machine's MINIMUM memory, and which was more expensive to boot.

    I certainly didn't intend my message as an attack on Symbolics.
    As I said, those problems are true of the Star and every virtual
    memory PC I'm aware of (although the Dorado offsets it with an
    expensive associative cache). I've barely used 3600s yet (I'm
    looking forward to it), so I made sure to say I was comparing to
    an LM-2. I was interested in the technical point, which is
    apparently valid. As you note, the 3600 requires even more memory
    than the LM-2, and memory still isn't free.

    On the other hand, I've seen a PDP-11 do real-time music IO,
    graphics, print files, etc. simultaneously, all with less memory
    than the Lisa. The keys are interrupts, a small and/or segmented
    kernel, and careful memory organization in general. I was serious
    when I asked about software development systems that support
    assisted or automatic segment management, and I hope someone
    responds. I'm amazed at the number of people who assume demand
    paging is cost-effective and necessary. I'd like to know what
    other designers think. Is there an inherently large, unsegmentable
    kernel in any system you can imagine finding satisfactory? Is it
    too much trouble to think about and implement more structured ways
    of organizing the system - so it will only become cost-effective
    when someone else builds a good tool?

I understood what you were trying to say; most of what you say is
valid. It's just that when someone who doesn't know the Symbolics
product line reads your note, they will get the impression that our
product is slower than a Lisa.
Since you weren't comparing our current product, I think you should say
so. It doesn't weaken your argument; it just prevents people from
drawing unintended conclusions.

Anyway, it is certainly possible to do real-time work in a rich enough
demand-paged environment. Dan Gerson would be a better person to
address the issue, since he has thought about it a lot and has some
plans along these lines, but you can do things like putting everything
you need for a certain process in a special area, and then wiring it.
That's a pretty extreme example; you're essentially defeating demand
paging for the real-time process. Less extreme needs allow many other
options, including such things as separate working-set computation for
separate processes, etc. Take a look at what VMS offers for one model
of how this can work.

As far as segmenting the system: I don't think this is the right
approach. I think it's just a special case of the more general approach
of improving locality and effective pre-paging, swapping the working
set of entire processes, etc.
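[The "special area, then wire it" option above can be sketched in
modern terms. The sketch below uses POSIX mlock()/munlock() as a
stand-in for the 1983 VMS and Lisp Machine facilities, and wire_area()
is an invented helper name, not an interface from any system in this
thread: touch the pages once so they exist, pin them so the pager
cannot evict them, do the real-time work, then unpin.]

```c
/* Sketch of "put everything the real-time process needs in a special
 * area, then wire it". POSIX mlock()/munlock() stand in for the
 * mechanisms of the era; wire_area() is a made-up helper. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Allocate `len` bytes, fault the pages in, and wire them so the
 * pager cannot evict them while the real-time work runs.
 * Returns 0 if the area was set up, -1 if allocation failed. */
int wire_area(size_t len) {
    char *area = malloc(len);
    if (area == NULL)
        return -1;
    memset(area, 0, len);            /* touch every page once */
    if (mlock(area, len) == 0) {     /* wired: no page faults now */
        /* ... real-time work would run here ... */
        munlock(area, len);          /* let demand paging resume */
    } else {
        perror("mlock");             /* may need raised RLIMIT_MEMLOCK */
    }
    free(area);
    return 0;
}
```

[As the text says, this is the extreme case: within the wired region
you pay for physical memory up front and give up the pager's
flexibility in exchange for predictable latency.]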
LEVITT%MIT-OZ@MIT-MC.ARPA (12/22/83)
Memory still isn't free. On the other hand, I've seen a PDP-11 do
real-time music IO, graphics, print files, etc. simultaneously, all
with less memory than the Lisa. The keys are interrupts, a small
and/or segmented kernel, and careful memory organization in general.

I was serious when I asked about software development systems that
support assisted or automatic segment management, and I hope someone
responds. I'm amazed at the number of people who assume demand paging
is cost-effective and necessary. I'd like to know what other designers
think. Is there an inherently large, unsegmentable kernel in any
system you can imagine finding satisfactory? Is it too much trouble to
think about and implement more structured ways of organizing the
system - so it will only become cost-effective when someone else
builds a good tool?
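[As one toy illustration of what "assisted segment management" could
look like to a programmer - SegMgr, sm_declare(), and sm_activate()
are invented names, not the interface of any system in this
discussion: the programmer declares segments and activates the ones
the current phase of the program needs, and the manager keeps the
resident total under a fixed memory budget by swapping out the least
recently used segments.]

```c
/* Toy sketch of programmer-assisted segment management. All names
 * are invented for illustration; no system in this thread works
 * exactly this way. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_SEGS 8

typedef struct {
    const char *name[MAX_SEGS];
    size_t size[MAX_SEGS];
    int stamp[MAX_SEGS];   /* 0 = swapped out, else LRU clock value */
    int nsegs;
    int clock;
    size_t budget;         /* bytes of "real" memory available */
} SegMgr;

void sm_init(SegMgr *m, size_t budget) {
    memset(m, 0, sizeof *m);
    m->budget = budget;
}

void sm_declare(SegMgr *m, const char *name, size_t size) {
    assert(m->nsegs < MAX_SEGS && size <= m->budget);
    m->name[m->nsegs] = name;
    m->size[m->nsegs] = size;
    m->nsegs++;
}

static int sm_find(const SegMgr *m, const char *name) {
    for (int i = 0; i < m->nsegs; i++)
        if (strcmp(m->name[i], name) == 0)
            return i;
    return -1;
}

static size_t sm_resident_bytes(const SegMgr *m) {
    size_t total = 0;
    for (int i = 0; i < m->nsegs; i++)
        if (m->stamp[i])
            total += m->size[i];
    return total;
}

/* Ensure `name` is resident, swapping out least recently used
 * segments until it fits. Returns the number of segments evicted. */
int sm_activate(SegMgr *m, const char *name) {
    int target = sm_find(m, name);
    assert(target >= 0);
    if (m->stamp[target]) {            /* already resident: touch it */
        m->stamp[target] = ++m->clock;
        return 0;
    }
    int evicted = 0;
    while (sm_resident_bytes(m) + m->size[target] > m->budget) {
        int lru = -1;                  /* oldest resident segment */
        for (int i = 0; i < m->nsegs; i++)
            if (m->stamp[i] && (lru < 0 || m->stamp[i] < m->stamp[lru]))
                lru = i;
        m->stamp[lru] = 0;             /* "swap it out" */
        evicted++;
    }
    m->stamp[target] = ++m->clock;     /* "swap it in" */
    return evicted;
}
```

[The point of the sketch is the division of labor: the programmer
supplies the structure (what belongs together, what is needed when),
and the tool does the bookkeeping - as opposed to demand paging, which
asks for no structure and discovers locality one page fault at a
time.]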