robtu@itx.isc.com (Rob Tulloh) (04/06/91)
I have noticed that using zoo with files on the command line causes it
to grind the disk drive for several minutes before it really gets down
to adding files. However, if I pipe the files to zoo, it adds them
right away. What gives?

    # fire up zoo from any shell...
    zoo a foo.zoo foo1 foo2 foo3 foo4             # grind, grind, grind, ...

    # using sksh find and pipes...
    find . -name 'foo*' -print | zoo aI foo.zoo   # quick!

I don't know why zoo would need to thrash the disk to add files.
Can anyone offer an explanation?

Rob Tulloh
--
INTERACTIVE Systems Corp.          Tel: (512) 343 0376 Ext. 116
9442 Capital of Texas Hwy. North   Fax: (512) 343 0376 Ext. 161 (not a typo!)
Arboretum Plaza One, Suite 700     Net: robertt@isc.com (polled daily)
Austin, Texas 78759                GEnie: R.TULLOH (polled monthly)
avadekar@sunee.waterloo.edu (Ashok Vadekar) (04/07/91)
In article <robtu.670882506@mexia> robtu@itx.isc.com (Rob Tulloh) writes:
>I have noticed that using zoo with files on the command line causes
>it to grind the disk drive for several minutes before it really gets down
>to adding files. However, if I pipe the files to zoo, it adds them
>right away. What gives?
> ...
>I don't know why zoo would need to thrash the disk to add files.
>Can anyone offer an explanation?

The code (which is originally UNIX based) does some truly horrendous
things. Zoo has to position itself in the zoo file to skip past each
existing entry. Since a compressed file has a header that states its
size, zoo performs an fseek() to position itself at the start of the
next record (compressed file). Unfortunately, all fseek() calls are made
with respect to the start of the file (instead of relative to the current
file position). This means going back to the start of the file and
scanning forward again, past the point you were already at, and on to
the start of the next header.

Ashok Vadekar
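To make the difference concrete, here is a minimal sketch assuming standard
C stdio; skip_entry_absolute() and skip_entry_relative() are made-up names
for illustration, not functions from the zoo sources:

    #include <stdio.h>

    /* Each archive entry header records its compressed size, so skipping
     * to the next header is just a seek past that many bytes. */

    /* What zoo reportedly does: seek using an offset measured from the
     * start of the file, which (per the explanation above) ends up
     * walking the file from the beginning again on the Amiga. */
    int skip_entry_absolute(FILE *fp, long entry_start, long entry_size)
    {
        return fseek(fp, entry_start + entry_size, SEEK_SET);
    }

    /* The cheaper alternative: seek forward from the current position. */
    int skip_entry_relative(FILE *fp, long entry_size)
    {
        return fseek(fp, entry_size, SEEK_CUR);
    }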
ecarroll@maths.tcd.ie (Eddy Carroll) (04/09/91)
In article <1991Apr6.170351.13111@sunee.waterloo.edu> avadekar@sunee.waterloo.edu (Ashok Vadekar) writes:
>In article <robtu.670882506@mexia> robtu@itx.isc.com (Rob Tulloh) writes:
>>I have noticed that using zoo with files on the command line causes
>>it to grind the disk drive for several minutes before it really gets down
>>to adding files. However, if I pipe the files to zoo, it adds them
>>right away. What gives?
>
>The code (which is originally UNIX based) does some truly horrendous
>things. Zoo has to position itself in the zoo file to skip past each
>existing entry. Since a compressed file has a header that states its
>size, zoo performs an fseek() to position itself at the start of the
>next record (compressed file). Unfortunately, all fseek() calls are made
>with respect to the start of the file (instead of relative to the current
>file position). This means going back to the start of the file and
>scanning forward again, past the point you were already at, and on to
>the start of the next header.

This is true, but the real reason for the delay is that the original Amiga
version of Zoo called Aztec's "expand wildcards" routine for every file
on the command line, even if the filename didn't contain any wildcard
characters. So, a command line like:

    zoo a zoofile file1 file2 file3 file4 file5 file6

ended up scanning your current directory six times. If you had any number
of files at all in your directory, this took forever.

I believe the latest version of Zoo (V2.01) fixes this problem. I did a
patch for Zoo 2.0 a while back that fixes the problem as well. Other than
that, piping in the filenames is the way to go.

Eddy
--
Eddy Carroll ----* Genuine MUD Wizard  | "You haven't lived until
ADSPnet:  cbmuk!cbmuka!quartz!ecarroll | you've died in MUD!"
Internet: ecarroll@maths.tcd.ie        |          -- Richard Bartle
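A sketch of the kind of check such a fix amounts to (illustrative only,
assuming '*' and '?' as the wildcard characters; has_wildcard(),
expand_and_add() and add_single_file() are made-up names, not the actual
zoo or Aztec routines):

    #include <string.h>

    /* Return nonzero only if the argument contains a wildcard character
     * and therefore genuinely needs a directory scan to expand it. */
    static int has_wildcard(const char *name)
    {
        return strpbrk(name, "*?") != NULL;
    }

    /* Caller sketch: plain filenames skip the directory scan entirely.
     *
     *   if (has_wildcard(arg))
     *       expand_and_add(arg);    /* scans the directory once  */
    /*   else
     *       add_single_file(arg);   /* no directory scan needed  */
    /*/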
dave@unislc.uucp (Dave Martin) (04/09/91)
From article <robtu.670882506@mexia>, by robtu@itx.isc.com (Rob Tulloh):
> ...
> I don't know why zoo would need to thrash the disk to add files.
> Can anyone offer an explanation?

I've noticed that zoo does this with requests to list the contents of the
archive (but not when extracting) too. It happens when the directory
containing the archive has grown too large for the hard disk's buffers to
cache in its entirety. It does this whether it is given the full name of
the archive, including the .zoo, or only a partial name. It seems to be
doing some kind of directory scan (possibly more than once). It does not
do it when extracting, though.

--
VAX Headroom    Speaking for myself only... blah blah blahblah blah...
Internet: DMARTIN@CC.WEBER.EDU      dave@saltlcy-unisys.army.mil
uucp: dave@unislc.uucp or use the Path: line.
Now was that civilized?  No, clearly not.  Fun, but in no sense civilized.
avadekar@sunee.waterloo.edu (Ashok Vadekar) (04/09/91)
In article <1991Apr8.232658.1589@maths.tcd.ie> ecarroll@maths.tcd.ie (Eddy Carroll) writes:
>This is true, but the real reason for the delay is that the original Amiga
>version of Zoo called Aztec's "expand wildcards" routine for every file
>on the command line, even if the filename didn't contain any wildcard
>characters.
>
>I believe the latest version of Zoo (V2.01) fixes this problem. I did a
>patch for Zoo 2.0 a while back that fixes the problem as well. Other than
>that, piping in the filenames is the way to go.

Except that I have compiled that code using Lattice, and a zoo -v does the
same thing. It is quite possible that both things are happening. But when
it takes 5 minutes to list the contents of a 1 meg zoo archive OFF OF A
HARD DISK, it is VERY annoying.

I solved the problem by hacking the zoo source to death. I also added
'real' wildcard support (i.e. the way I want it). BTW, the Lattice
wildcard routines, when running in MSDOS '*' mode, convert the filenames
to lowercase.

Ashok Vadekar
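For illustration, a minimal case-insensitive but case-preserving '*'/'?'
matcher, which side-steps the lowercasing problem by never modifying the
stored name (wild_match() is a made-up name, not a routine from the zoo or
Lattice sources):

    #include <ctype.h>

    /* Match pattern against name, ignoring case during the comparison
     * but leaving both strings untouched. */
    static int wild_match(const char *pat, const char *name)
    {
        while (*pat) {
            if (*pat == '*') {
                pat++;
                do {
                    if (wild_match(pat, name))
                        return 1;
                } while (*name++);
                return 0;
            }
            if (*name == '\0')
                return 0;
            if (*pat != '?' &&
                tolower((unsigned char)*pat) != tolower((unsigned char)*name))
                return 0;
            pat++;
            name++;
        }
        return *name == '\0';
    }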