grunwald@flute.cs.uiuc.edu (Dirk Grunwald) (01/26/89)
I'm just curious whether anyone else has seen this problem. Our local computing lab runs two Encores, a 10-processor and a 4-processor. The 10-processor machine (with the appealing name `m') has about 90 to 100 users during the day. Many use xterm, gnuemacs, etc.; some run mongo simulations (me) or parallel prolog. In any case, we allegedly have about 160MB of swap configured. The first partition is about 82MB, and the rest is split into two partitions.

Now, our problem is that, between my simulations, the parallel prolog, and gnuemacs, we constantly run out of swap, as demonstrated by the ``killed'' messages you get when you try to run something. When this happens, I catch flak, so I've been trying to find out why it happens. What's interesting is that `ps' and `top' usually show only about 80MB in use at that point, even with a naive estimate (discounting shared text). Ideally, this shouldn't happen until we're burning 160MB.

Everyone who uses an Encore should know by now that the `inq_stats' call, the basis for `sysmon', `top', and `ps', is broken. Still, it just doesn't seem possible that we're really using 160MB. So, my questions: does anyone else run multiple swap partitions? If so, have you ever verified that they work, and how? Do other people have problems running out of swap?

Our local administrator has talked to Encore till she's blue in the face, but, like I said, I still catch flak when I drive the VM over 80MB, so I have an interest in finding an answer. Either post or mail me & I'll summarize.

Dirk Grunwald
Univ. of Illinois
grunwald@m.cs.uiuc.edu
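[For readers wanting to reproduce the ``naive estimate'' of swap in use mentioned above, here is a sketch that totals per-process virtual sizes from `ps'. This assumes a modern procps-style `ps' with the `-o vsz' format option; the Umax `ps' in 1989 took different flags, and summing VSZ over-counts shared text and shared pages, so treat the figure as an upper bound, not an exact swap commitment.]

```shell
# Sum the virtual size (VSZ, in KB) of every process and report it in MB.
# Over-counts shared text/libraries, so this is an upper bound on swap use.
ps -e -o vsz= | awk '{ total += $1 } END { printf "%.1f MB\n", total / 1024 }'
```

On systems that report it, comparing this figure against the kernel's own swap accounting (e.g. a `pstat -s'-style summary) is one way to see how far off the naive per-process tally is.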