kdavid@gizzmo.UUCP (David Solan) (05/21/88)
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

Last November John Chambers commented that there is an "out-of-free-blocks"
bug in UNIX for the UNIX PC (I believe this problem also exists in other
versions of System V UNIX on other machines).  I am posting this to offer
my full agreement.

The problem can even occur in SINGLE-user mode!  If you allocate and write
to many files (a hundred or more) in rapid succession, such as within a C
program in the space of a second or less, AND then immediately unlink and
remove all these files very rapidly, AND then immediately reallocate and
write to another hundred files or so very rapidly, AND then immediately
unlink and remove this second batch of files very rapidly, then UNIX will
sometimes (NOT ALWAYS!) cough.  (A sketch of such a test program appears
after the signature.)  In the second mass file deletion, the files will
indeed be removed, but sometimes their data blocks will not, I repeat
*NOT*, be put back on UNIX's free block list.  I can assure you, this is
true.

Since this happens in single-user mode, it is clearly a bug in UNIX's file
allocation/removal algorithms, and has nothing to do with multi-user race
conditions.  Upon the next run of fsck, the orphaned blocks are put back on
the free block list.  The problem can also be avoided entirely by running
lots of sync's and sleep's between the removals and the reallocations.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

David Solan
Objective Programming Incorporated
Post Office Box 123
Norwalk, CT 06856
Voice: (203) 866-6900
attmail: !dsolan
USENET: gizzmo!kdavid
-- 
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
{codas,u1100a}-----\
                    David Solan    rutgers!rochester!pcid!kodak!gizzmo!kdavid
{lazlo,ethos,fthood}-----/
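
For concreteness, here is a minimal sketch in C of the kind of test program
described above.  The file names, batch size, and write size are illustrative
assumptions of mine; the original report does not include code.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

#define NFILES  100     /* "a hundred or more" files per batch */
#define BUFSIZE 1024    /* assumed write size; not from the report */

int
main(void)
{
        char name[64];
        char buf[BUFSIZE];
        int pass, i, fd;

        memset(buf, 'x', sizeof(buf));

        /* Two passes: allocate/remove a batch, then do it again at once. */
        for (pass = 0; pass < 2; pass++) {
                /* Allocate and write a batch of files as fast as possible. */
                for (i = 0; i < NFILES; i++) {
                        sprintf(name, "tmp%d.%d", pass, i);
                        fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
                        if (fd < 0) {
                                perror(name);
                                exit(1);
                        }
                        if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
                                perror("write");
                                exit(1);
                        }
                        close(fd);
                }
                /*
                 * Immediately unlink the whole batch.  Deliberately no
                 * sync() or sleep() here: inserting them between the
                 * removals and the reallocations reportedly avoids the bug.
                 */
                for (i = 0; i < NFILES; i++) {
                        sprintf(name, "tmp%d.%d", pass, i);
                        if (unlink(name) < 0)
                                perror(name);
                }
        }
        return 0;
}

Run on a scratch filesystem; per the report above, a subsequent fsck should
reveal any data blocks that were orphaned from the free list when the bug
strikes.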