rg@gandp (Dick Gill) (09/11/90)

A client's 32/650 is producing a "Huge directory" message when
find is used to access a particular directory. We moved some
files elsewhere, but still get the message; there are 1401 files
in the directory now.

How much of a problem is this? Is there a maximum number of
files in a directory before something unpleasant happens?

Thanks for any thoughts.

Dick
--
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Dick Gill        Gill & Piette, Inc.   "I can be a humble guy when I need to."
(703)761-1163    ..uunet!gandp!rg                             -- Donald Trump
shwake@raysnec.UUCP (Ray Shwake) (09/12/90)

rg@gandp (Dick Gill) writes:
>A client's 32/650 is producing a "Huge directory" message when
>find is used to access a particular directory. We moved some
>files elsewhere, but still get the message; there are 1401 files
>in the directory now.

Removing a file on a standard System V filesystem does nothing to
"shrink" the directory, which is itself just a special type of file.
(If you doubt this, type "od -c directoryname".) Anything that
requires a scan of the directory - even a simple ls - takes longer
when the directory is as large as described. This one is so large
that the system is forced to access the "indirect" blocks associated
with the directory file.

I suggest you spend some time reorganizing your resources, then run
the public domain 'sqzdir' utility to squeeze the empty slots out of
the cleaned-out directories. It should be portable to the NCR.

shwake@rsxtech
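[Moderator's note: Ray's point - that deleting files does not shrink the
directory file itself - can be reproduced on a throwaway directory. The
sketch below is illustrative only: path names are made up via mktemp, and
exact sizes are filesystem-dependent, though classic System V and common
modern Linux filesystems both leave the directory at its high-water size.]

```shell
# Create a scratch directory on the current filesystem
# (the ./ template keeps it out of a tmpfs-mounted /tmp).
d=$(mktemp -d ./hugedir.XXXXXX)

# Fill it with 500 empty files so the directory file grows
# past a single block.
i=0
while [ "$i" -lt 500 ]; do
    : > "$d/f$i"
    i=$((i+1))
done
ls -ld "$d"      # note the directory file's own size

# Delete every file - the directory's size stays put.
rm "$d"/f*
ls -ld "$d"      # same size: the empty slots are still there
rmdir "$d"
```

On a filesystem that does reclaim directory blocks the second ls will show
a smaller size; the non-shrinking behaviour is exactly what 'sqzdir' works
around.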
nick@bilpin.UUCP (nick) (09/12/90)

In article <309@gandp>, rg@gandp (Dick Gill) writes:
> A client's 32/650 is producing a "Huge directory" message when
> find is used to access a particular directory. We moved some
> files elsewhere, but still get the message; there are 1401 files
> in the directory now.

Ah - but of course directories never shrink. If you want to get rid
of this message, do the following:

        mvdir <dir> <olddir>
        mkdir <dir>
        cp <olddir>/* <dir>

> How much of a problem is this? Is there a maximum number of
> files in a directory before something unpleasant happens?

Well - yes. There is no easy answer to this one. Large directories
are a problem (see below), but you will generally find that certain
commands fail first: 'ls -l', 'cp * <dir>', etc.

This really is a problem. You will see a reduction in performance as
these large directories - including the entries left over from
deleted files - are searched. You should try to make sure that each
directory fits into a logical file system block (512 bytes / 1K / 4K)
using NFFS (NCR Fast File System), available under NCR's Unix V.3 MB1
platform. You only have the 1K option if you are still on V.2.

To check whether you have efficient directory scanning, use 'sar'
with the '-a' flag. If dirblk/s > igets/s, then many large
directories are being searched. You should also check that your PATH
variable is as short as it can be, and that applications find files
via relative rather than absolute path names.

regards
Nick
--
_______________________________________________________________________________
Nick Price  SRL Data                   || Apple link : UK0001
1 Perren Street London NW5 3ED         || UUCP       : nick@bilpin.uucp
Phone: +44 1 485 6665                  || Path       : nick@bilpin.co.uk
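[Moderator's note: Nick's one-block rule of thumb suggests a quick audit
for directories that have outgrown 1K. A hedged sketch - find's '-size +2'
counts 512-byte blocks, and the starting path is only an example:]

```shell
# List directories whose directory file exceeds two 512-byte
# blocks, i.e. has spilled past 1K. Adjust the start path and
# the threshold for your block size.
find /usr -type d -size +2 -print 2>/dev/null
```

Each path printed is a candidate for the rebuild treatments discussed
elsewhere in this thread.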
dmdc@ncrsea.Seattle.NCR.COM (Dennis M. Dooley) (09/14/90)

In article <206@bilpin.UUCP> nick@bilpin.UUCP (nick) writes:
>In article <309@gandp>, rg@gandp (Dick Gill) writes:
>> A client's 32/650 is producing a "Huge directory" message when
>> find is used to access a particular directory. We moved some
>> files elsewhere, but still get the message; there are 1401 files
>> in the directory now.
>
>Ah - but of course directories never shrink. If you want to get rid
>of this message, do the following:
>
>        mvdir <dir> <olddir>
>        mkdir <dir>
>        cp <olddir>/* <dir>

This will work if there are no subdirectories. An alternative
approach would be:

        mv <dir> <dir.old>
        mkdir <dir>
        cd <dir.old>
        find . -print | cpio -pduvlm ../<dir>
        cd ..
        rm -rf <dir.old>

_________________________________________________________________________
Dennis M. Dooley          NCR Seattle Washington          ncrsea!dmdc
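[Moderator's note: the pitfall Dennis points out - that a plain
'cp <olddir>/* <dir>' does not descend into subdirectories, so a rebuild
done that way silently loses them - can be demonstrated with throwaway
mktemp paths:]

```shell
# Build a source tree containing a subdirectory.
old=$(mktemp -d)
new=$(mktemp -d)
mkdir "$old/sub"
echo data > "$old/sub/file"

# cp without a recursive flag skips (or errors on) 'sub',
# so its contents never reach the new directory.
cp "$old"/* "$new" 2>/dev/null || true
ls "$new"          # 'sub/file' is missing
```

This is why the find|cpio pass-through rebuild is the safer choice when
subdirectories may be present.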
dlau@mipos2.intel.com (Dan Lau) (09/15/90)

In article <639@ncrsea.Seattle.NCR.COM> dmdc@ncrsea.UUCP (Dennis M. Dooley) writes:
>>> A client's 32/650 is producing a "Huge directory" message when
>>> find is used to access a particular directory. We moved some
    ^^^^
>>> files elsewhere, but still get the message; there are 1401 files
>>> in the directory now.
>
> An alternative approach would be:
>
>        mv <dir> <dir.old>
>        mkdir <dir>
>        cd <dir.old>
>        find . -print | cpio -pduvlm ../<dir>
         ^^^^
>        cd ..
>        rm -rf <dir.old>
>_________________________________________________________________________
>Dennis M. Dooley          NCR Seattle Washington
>ncrsea!dmdc

Am I missing something, or did the original poster CLEARLY say that
using the "find" command over the huge directory does not work?