rees@dabo.ifs.umich.edu (Jim Rees) (05/25/90)
In article <900524.10463981.007067@CMR.CP6>, BROWN@CMR001.BITNET writes:
> A few months ago, there were some rather disturbing reports on
> this discussion group, about disks going bezerk at a certain
> saturation level (roughly around 90% if memory serves me well)

There is no risk of anything bad happening from having a disk that is nearly full, except for the risk that it will eventually fill. I regularly run at about 92-98% full with no problems.

The problems come in when the disk actually does fill up. This is more likely to happen without your knowledge or consent with Domain/OS than with Berkeley Unix, because paging for temporary objects (like stacks) is done on the root disk instead of on a separate partition. Personally I consider this an advantage, because it makes more effective use of the available disk space, but if you're not used to it, it can surprise you.

At sr10 a new "feature" was introduced. In certain cases, if you tried to allocate disk-backed virtual memory, space for the backing store was reserved on the disk at the time of allocation instead of at the time of use (as was the case before sr10). This made it less likely that you would fill up your disk, but it also meant you couldn't create huge sparse objects unless you had a huge disk. I think this feature was later rescinded or made optional.

Before sr10, if you filled your disk you were hosed. Whatever piece of code would normally handle unexpected errors like "disk full" couldn't run, because it couldn't get the backing store for its stack and temporary data structures. I haven't filled up a disk recently, so I can't tell you whether this is still the case.