chudnall%pkg@mcc.com (11/30/90)
I have a user who is running extremely CPU/memory/disk-intensive design software. I can't elaborate extensively on the software in question (it's proprietary), but a typical process can take anywhere from 30 to over 70 Mbytes of virtual memory. The platform he's using is a Sun 4/390 with 56Mb RAM and 500+Mb of swap, running SunOS 4.0.3. For all practical purposes, it is a single-user machine.

The problem we're having is that only about 10-20Mb of the process will stay resident in memory, even when there is plenty of free space. In one case, the process was taking up 70Mb of virtual memory, and all but 11Mb of that was swapped to disk, even though there had to be at least 30Mb of RAM unused, even after accounting for the size of the few other processes on the system.

My question is: how do we get the system to make effective use of the available memory for this large process? I am convinced that the slowdown we're seeing (which happens once the process starts to get large) is due to the constant waiting on disk to swap pages in and out. We are capable of purchasing enough memory to handle the largest processes, but I don't see any point in doing that when the existing memory isn't being used.

Has anyone else run into a situation like this? Please respond directly to me; I'll summarize.

Thanks,
--Christopher
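P.S. A rough way to quantify how much of the image actually stays in core: the sketch below uses plain BSD-style getrusage(2), which SunOS 4.x inherits (report_rss() is just an illustrative name, not part of our application). It prints the resident-set high-water mark and the page-fault counts; comparing its output against the SZ and RSS columns of ps(1) while the job is running should show whether the slowdown really is major faults going to disk.

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

/* Print the calling process's resident-set high-water mark and
 * page-fault counts.  Note: the units of ru_maxrss differ between
 * releases (kilobytes on 4.3BSD; check getrusage(3) locally). */
void report_rss()
{
    struct rusage ru;

    if (getrusage(RUSAGE_SELF, &ru) != 0) {
        perror("getrusage");
        return;
    }
    printf("max resident set size: %ld (see getrusage(3) for units)\n",
           (long) ru.ru_maxrss);
    printf("page reclaims (no disk I/O): %ld\n", (long) ru.ru_minflt);
    printf("page faults (disk I/O):      %ld\n", (long) ru.ru_majflt);
}

int main()
{
    report_rss();
    return 0;
}

Since getrusage(RUSAGE_SELF) only reports on the caller, this would have to be called periodically from inside the design job (or from a test program that grows to a comparable size) to put hard numbers next to the 11Mb-resident / 70Mb-virtual observation above.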