gchamby@cs-col.Columbia.NCR.COM (Greg Chamby) (08/19/90)
I recently had a Novell file server drive become corrupted and had to send it
off for repair.  The drive was returned as an MSDOS drive using Disk Manager
partitioning software.  One subdirectory on this drive had almost 13,000 files
in it (yes, thirteen thousand).

The problem came in copying all those files off the DOS drive to another
Novell file server drive.  I used Novell's NCOPY command, which is basically
just a straight file copier for networks.  As far as I know, the only
difference between it and DOS COPY is that NCOPY will handle subdirectories
and preserves network file attributes.  The copy took about 2.5 days due to
DOS choking on the large number of files in the subdir.  Towards the end it
was copying about two files a minute: I could watch it copy a file, scan the
disk for about 30 secs, and then get the next file.

Is this an inherent problem with MSDOS?  I've seen a DIR command steadily
degrade after a couple thousand files as well.  Is DOS doing a linear search
to get the next dir entry?  I'm wondering if there is anything one can do to
speed up DOS when accessing a huge number of files such as this.  I don't
believe upping the BUFFERS would help, since we're talking about a sequential
copy of files.

Also, people kept asking me why I didn't use XCOPY, but I can't think of a
case where XCOPY would be faster than COPY when copying files sequentially
from one hard drive to another.  On a one-floppy system XCOPY is nice, but
otherwise I think it would just waste time filling memory.

Can anyone out there comment on all this?  Any info would be appreciated.
Thanks.

{-----------------------------------------------------------------------}
{gchamby%cs-col                                                          }
{          or ncrcae!cs-col!gchamby                                      }
{-----------------------------------------------------------------------}
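In case anyone wants to see where the time goes on a directory like this, here
is a minimal probe, assuming Borland Turbo C's findfirst()/findnext() from
dir.h (Microsoft C has _dos_findfirst()/_dos_findnext() in dos.h instead).
Run with no arguments it just walks the directory; run with any argument it
also opens each file, the way a copy program must, and prints elapsed seconds
every 500 entries so any slowdown shows up directly.

/* Probe for the question above (is DOS scanning the directory
 * linearly?): walk the directory with findfirst()/findnext() and,
 * if any command-line argument is given, also open each file.
 * Assumes Borland Turbo C's dir.h interface.
 */
#include <stdio.h>
#include <time.h>
#include <dir.h>

int main(int argc, char *argv[])
{
    struct ffblk fb;
    long   count   = 0;
    time_t start   = time(NULL);
    int    do_open = (argc > 1);   /* any argument: also open each file */
    int    done;

    for (done = findfirst("*.*", &fb, 0); !done; done = findnext(&fb)) {
        if (do_open) {
            FILE *fp = fopen(fb.ff_name, "rb");
            if (fp) fclose(fp);
        }
        if (++count % 500 == 0)
            printf("%ld files, %ld seconds\n",
                   count, (long)(time(NULL) - start));
    }
    printf("total: %ld files, %ld seconds\n",
           count, (long)(time(NULL) - start));
    return 0;
}

If the plain walk stays fast while the walk-plus-open run slows down as it
goes, the time is going into the per-file opens rather than the directory
enumeration itself.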
scjones@thor.UUCP (Larry Jones) (08/19/90)
In article <1990Aug19.012037.18164@cs-col.Columbia.NCR.COM>, gchamby@cs-col.Columbia.NCR.COM (Greg Chamby) writes:
> The copy took about 2.5 days due to DOS choking on the large
> number of files in the subdir.  Towards the end it was copying about two
> files a minute.  I could watch it copy a file, scan the disk for about 30
> secs and then get the next file.  Is this an inherent problem with MSDOS?
> I've seen a DIR command steadily degrade after a couple thousand files as
> well.  Is DOS doing a linear search to get the next dir entry?  I'm
> wondering if there is anything one can do to speed up DOS when accessing a
> huge number of files such as this.  I don't believe upping the BUFFERS
> would help since we're talking about a sequential copy of files.

Yes, you seem to have confused the DOS FAT and directory mess with a file
system.  ;-)  The DOS file system is TERRIBLE at handling large files, and a
subdirectory with lots of files in it is itself a large file.

In the case of your copy, the copy program is sequentially reading the
directory to find the next file.  That in itself isn't too bad, but the copy
program then has to open each file so it can copy it.  The open has to search
the directory again to find the file, and it does so by reading sequentially
from the beginning.  That's why it gets worse as it goes.

Increasing the number of BUFFERS might help if you make it large enough to
hold the entire FAT and directory, since then you'll be reading from memory
instead of having to move the heads back and forth across the disk from FAT
to directory to file.  Unfortunately, DOS's buffering strategy isn't very
sophisticated, so it might well discard a buffer full of valuable directory
or FAT information and replace it with basically worthless file data, so it
might not help as much as you'd hope.

OS/2's High Performance File System is one of its (few, in my opinion)
significant improvements over DOS.
----
Larry Jones                          UUCP: uunet!sdrc!thor!scjones
SDRC                                       scjones@thor.UUCP
2000 Eastman Dr.                     BIX:  ltl
Milford, OH  45150-2789              AT&T: (513) 576-2070
Rats.  I can't tell my gum from my Silly Putty. -- Calvin
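To put rough numbers on the open-by-name rescans described above: a DOS
directory entry is 32 bytes, so a 512-byte sector holds 16 of them, and a
13,000-file directory runs to about 813 sectors (roughly 406K) that each open
near the end has to wade through from the top.  The sketch below is plain
arithmetic in standard C, using only the 13,000 figure from the original
post; it is an illustration of the growth, not a measurement of the actual
drive.

/* Rough cost model: the directory walk is sequential, but every
 * open-by-name rescans the directory from entry 0.  A DOS directory
 * entry is 32 bytes, so a 512-byte sector holds 16 of them.  The
 * 13,000-file count comes from the original post; everything else
 * is arithmetic, not a measurement.
 */
#include <stdio.h>

#define ENTRY_BYTES   32L
#define SECTOR_BYTES  512L

int main(void)
{
    long nfiles = 13000L;
    long entries_per_sector = SECTOR_BYTES / ENTRY_BYTES;   /* 16 */
    long dir_sectors = (nfiles + entries_per_sector - 1) / entries_per_sector;
    double total_entries = (double)nfiles * (nfiles + 1) / 2.0;

    printf("directory size: %ld sectors (%ld KB)\n",
           dir_sectors, dir_sectors * SECTOR_BYTES / 1024);
    printf("entries scanned by the last open alone: %ld\n", nfiles);
    printf("entries scanned across all %ld opens: %.0f\n",
           nfiles, total_entries);
    printf("which is %.0f directory-sector reads if none of it is cached\n",
           total_entries / entries_per_sector);
    return 0;
}

The point is not the exact totals, just that the per-open scans add up to
roughly N*N/2 entries while the copying itself stays linear in N, which is
consistent with the two-files-a-minute behaviour near the end of the run.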