jwp@larry.sal.wisc.edu (Jeffrey W Percival) (05/01/89)
We are running Ultrix 3.0 on MicroVAX II's and VS2000's. We are seeing something new and unusual: once in a while a user will type "df", for example, and get the message "malloc: not enough core". We have seen this while running the News expire program, as well as when running a large data analysis program. What's the deal here? How can a virtual memory machine deny memory to the puny df(1) program?
-- 
Jeff Percival (jwp@larry.sal.wisc.edu)
guy@auspex.auspex.com (Guy Harris) (05/02/89)
>How can a virtual memory machine deny memory to the puny df(1) program?
With ease, if you run out of swap space.
dce@Solbourne.COM (David Elliott) (05/03/89)
In article <170@larry.sal.wisc.edu> jwp@larry.sal.wisc.edu (Jeffrey W Percival) writes:
>We have seen this while running the News expire program, as
>well as when running a large data analysis program. What's the
>deal here? How can a virtual memory machine deny memory to
>the puny df(1) program?

It's funny. I've known a number of very smart software engineers who believe that "virtual memory" means "infinite memory". Sometimes, these folks write their code without ever checking to see if malloc() fails.

It's often important (or at least reasonable) for the OS to expect there to be enough swap space when a program is running. What would happen if df was running and the OS needed to swap it out to run something else? Would you rather df just keep running while your network drops packets and fails to update the filesystems?
-- 
David Elliott
dce@Solbourne.COM
...!{boulder,nbires,sun}!stan!dce
jwp@larry.sal.wisc.edu (Jeffrey W Percival) (05/03/89)
In article <925@marvin.Solbourne.COM> dce@Solbourne.com (David Elliott) writes:
>In article <170@larry.sal.wisc.edu> jwp@larry.sal.wisc.edu (Jeffrey W Percival) writes:
>>How can a virtual memory machine deny memory to the puny df(1) program?
>It's funny. I've known a number of very smart software engineers who
>believe that "virtual memory" means "infinite memory".

Then again, many know exactly how these two concepts differ. Some even go further, though, and wonder about what's happening on the margin. Useful concept, that, "marginal availability". One would expect malloc() to fail on large requests and not to fail, generally, on small requests. Therefore when one sees fairly consistent failure on small requests, one might suspect some systematic problem, which in fact was the case here.
-- 
Jeff Percival (jwp@larry.sal.wisc.edu)
mtsu@blake.acs.washington.edu (Montana State) (05/04/89)
In article <925@marvin.Solbourne.COM> dce@Solbourne.com (David Elliott) writes:
>In article <170@larry.sal.wisc.edu> jwp@larry.sal.wisc.edu (Jeffrey W Percival) writes:
>>We have seen this while running the News expire program, as
>>well as when running a large data analysis program. What's the
>>deal here? How can a virtual memory machine deny memory to
>>the puny df(1) program?
>
>It's often important (or at least reasonable) for the OS to expect
>there to be enough swap space when a program is running. What would
>happen if df was running and the os needed to swap it out to run
>something else? Would you rather df just keep running while your
>network drops packets and fails to update the filesystems?

This is a good point, but what I would like to know is why df needs so much core to run in. While df is blowing up with "not enough core", ps, top, and lisp can all fire up and run. I did a ps on a df that was stuck waiting for an NFS server, and it showed a size of 1034K of memory. What does it need all of this space for??

Jaye Mathisen
icsu6000@caesar.cs.montana.edu