[comp.sys.amiga] zoo enhancements

rico@oscvax.UUCP (Rico Mariani) (12/17/87)

First off, I'd like to say that I love zoo and I use it all the time,
but in order to keep up my reputation of never being satisfied, I
have some suggestions...

	- Allow recursive archival of directories, e.g.

		zoo a ram:mystuff dh0:ricos_stuff

	- Don't do wildcard matching on files that don't have any
	  wildcards in them.  I currently do large archives like the
	  above with:

		zoo a ram:mystuff dh0:ricos_stuff/.../*
						   ^
				N.B. shell 2.07 expands .../* to every file
			        from there down in the tree

	  Then I wait for 15 minutes while it scans the directory tree
	  once for each file my wildcard sequence expanded to.
	  So the solution is simple: check the filenames for wildcard
	  characters, and if there are none, don't do any searching.
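
	  A minimal sketch of the check (has_wild() is a hypothetical
	  helper, not something already in zoo, and the exact wildcard
	  character set is a guess):

		#include <string.h>

		/* nonzero if name contains any wildcard characters;
		 * only then is a directory scan needed at all */
		int has_wild(char *name)
		{
			return strpbrk(name, "*?#|()") != NULL;
		}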

	- Lose the 97 file limit.  This is a serious limitation if you're
	  archiving from a hard disk.  There's no reason to put any limit
	  on the number of filenames.

	- I wouldn't bother adding more compression types to zoo.  I think
	  arc is a big time waster with its analyzing...(think for
	  a long long time)... crunching stuff.  I've arc'd tons and tons
	  of files of various flavours and it always 'crunches' (Lempel-
	  Ziv, 12 bits, right?).  I've seen it use Huffman coding/squeeze
	  once.  The time you save by not having to figure out which
	  compression method to use is well worth the little storage you
	  might gain.  For applications that really need squeezing see
	  below...

	- I haven't tried this next part, so it might already work, but if
	  it doesn't, this would be a good thing to add.  I'd like to say

		zoo a pipe:big.zoo dh0:

	  and have my whole hard disk archived, with the archive going out
	  through the pipe: device.  You could do lots of neat tricks with
	  this, such as:

		- set the zoo flags to not compress its output and then

			run compress <pipe:big.zoo >archive.Z
		    or  run squeeze ....
			run super_compress <pipe:kryptonite ...er... ooops :-)

		  this compresses the archive as a whole rather than a file
		  at a time (you get much better compression this way), and
		  it allows for the compressor of your choice.

		- make a little utility that splits its standard input
		  into several files of a certain maximum size (a sketch
		  follows after this list) and then

			run split <pipe:big.zoo vol1:part1 vol2:part2

		  to unarchive I can just

			run join vol1:part1 vol2:part2 as pipe:big.zoo 
			zoo x/ pipe:big.zoo

	remember, big.zoo may be much bigger than the amount of RAM you
	have, so the normal fake-pipe tricks don't work.  Essentially you
	have to guarantee that there are no seeks on the archive file,
	since seeking won't work with a pipe: device.
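
	A minimal sketch of such a splitter, in plain stdio (the chunk
	size and the take-the-part-names-from-the-command-line
	convention are just illustrative):

		#include <stdio.h>

		#define CHUNK 800000L	/* max bytes per part */

		int main(int argc, char **argv)
		{
			FILE *out;
			long n;
			int c = EOF, i;

			for (i = 1; i < argc; i++) {
				if ((out = fopen(argv[i], "w")) == NULL) {
					fprintf(stderr, "can't create %s\n",
					    argv[i]);
					return 1;
				}
				/* copy at most CHUNK bytes to this part */
				for (n = 0; n < CHUNK &&
				     (c = getchar()) != EOF; n++)
					putc(c, out);
				fclose(out);
				if (c == EOF)	/* input exhausted */
					break;
			}
			return 0;
		}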

Zoo's already great, so maybe it can be greater; we want the same kind
of flexibility as Unix tar.  Say... is there a tar for the Amiga?

	Merry X-MAS
	  -Rico

PS.	Asteriods (sic) is coming...  watch for a posting soon.  Then
	you can all tear my program to bits :-)
-- 
		...{watmath|allegra|decvax|ihnp4|linus}!utzoo!oscvax!rico
		or oscvax!rico@gpu.toronto.EDU   if you're lucky

[NSA food: terrorist, cryptography, DES, drugs, CIA, secret, decode]
[CSIS food: supermailbox, tuna, fiberglass coffins, Mirabel, microfiche]
[Cat food: Nine Lives, Cat Chow, Meow Mix, Crave]

dhesi@bsu-cs.UUCP (Rahul Dhesi) (12/29/87)

In article <553@oscvax.UUCP> rico@oscvax.UUCP (Rico Mariani) writes:
[suggestions about zoo]
>	- Allow recursive archival of directories, e.g.
>
>		zoo a ram:mystuff dh0:ricos_stuff

A lot of people want this.  There's a slight implementation problem in
that getting filenames recursively is highly system-dependent.  Here's
how to get this feature in Amiga zoo:  Write a C routine for Aztec/Manx
C that will accept a directory name and recursively return a list of
all files in the directory subtree each time it is called.  Send it to
me or to Brian Waters <bsu-cs!jbwaters>.
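
Something along these lines ought to work.  This is an untested sketch
using the dos.library Lock/Examine/ExNext calls; it hands each name to
a callback rather than returning a list as described above, the header
names vary between compilers, the fixed path buffer is only for
brevity, and deep trees will eat stack:

	#include <exec/types.h>
	#include <exec/memory.h>
	#include <libraries/dos.h>
	#include <libraries/dosextens.h>
	#include <string.h>

	void walk(char *path, void (*visit)(char *))
	{
		struct FileInfoBlock *fib;
		char sub[256];
		BPTR lock;

		if ((lock = Lock(path, ACCESS_READ)) == 0)
			return;
		/* the FileInfoBlock must be longword aligned,
		 * so get it from AllocMem */
		fib = (struct FileInfoBlock *)
		    AllocMem((long) sizeof(*fib), MEMF_CLEAR);
		if (fib && Examine(lock, fib)) {
			while (ExNext(lock, fib)) {
				strcpy(sub, path);
				if (*sub && sub[strlen(sub)-1] != ':'
				         && sub[strlen(sub)-1] != '/')
					strcat(sub, "/");
				strcat(sub, fib->fib_FileName);
				if (fib->fib_DirEntryType > 0)
					walk(sub, visit);  /* subdirectory */
				else
					(*visit)(sub);	   /* plain file */
			}
		}
		if (fib)
			FreeMem(fib, (long) sizeof(*fib));
		UnLock(lock);
	}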

>	- Don't do wildcard matching on files that don't have any
>	  wildcards in them.  I currently do large archives...
>	  Then I wait for 15 minutes while it scans the directory tree
>	  once for each file my wildcard sequence expanded to.

The same problem occurred under VAX/VMS, which sometimes scans
directories very slowly, and it got fixed there.  It should get fixed
for the Amiga too (eventually).

>	- Lose the 97 file limit.  This is a serious limitation if you're
>	  archiving from a hard disk.  There's no reason to put any limit
>	  on the number of filenames.

It's a compile-time limit, because a static array of pointers is used
to hold filenames.  It can be increased to just about any number, but
maybe Brian Waters <bsu-cs!jbwaters> doesn't think anybody needs more?
Let him know.
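
For the curious, removing the cap outright is mechanical: replace the
static array with one that grows on demand.  A sketch (the identifiers
are illustrative, not zoo's actual ones):

	#include <stdlib.h>
	#include <string.h>

	static char **fnames = NULL;	/* filename table */
	static int nnames = 0;		/* names stored so far */
	static int maxnames = 0;	/* current capacity */

	/* add one filename, doubling the table when it fills;
	 * returns 0 on success, -1 if out of memory */
	int add_fname(char *s)
	{
		char **p;

		if (nnames >= maxnames) {
			maxnames = maxnames ? 2 * maxnames : 64;
			p = realloc(fnames, maxnames * sizeof(char *));
			if (p == NULL)
				return -1;
			fnames = p;
		}
		if ((fnames[nnames] = malloc(strlen(s) + 1)) == NULL)
			return -1;
		strcpy(fnames[nnames++], s);
		return 0;
	}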

>	- I haven't tried this next part, so it might already work, but if
>	  it doesn't, this would be a good thing to add.  I'd like to say
>
>		zoo a pipe:big.zoo dh0:
>
>	  and have my whole hard disk archived...

Zoo will happily archive anything that it can read through the standard
read and fread functions.  If dh0: is not thus readable, you're talking
about a highly system-dependent feature that will take forever
to implement.  This and the remaining suggestions are better
implemented with a tar-type utility.
-- 
Rahul Dhesi         UUCP:  <backbones>!{iuvax,pur-ee,uunet}!bsu-cs!dhesi