[comp.binaries.ibm.pc.d] Strange zoo problem

fireman@sunc5.cs.uiuc.edu (Neil Feiereisel) (03/11/91)

Without changing anything, I suddenly got the following error message when
doing my backups using zoo.

ERROR: Could not open ....
and I get this error for every file in the archive.

I've been using the following batch file for doing the incremental backups:

stuff / -modified ! -name *.bak ! -name temp.* ! -size +1000 > temp.bak
zoo aIunP diffback <temp.bak
erase temp.bak

This batch file was working fine - all of a sudden, I get this error
message every time.  The size of diffback is only about 700K right now.
I've got 6MB left on my hard disk, and a chkdsk revealed no problems.  Not
only that, I used Norton Disk Doctor to check every single sector on the
hard disk and found no problems.

Can anyone shed any light on this?  Even a few ideas of how to pursue this
strange problem would be helpful.

Thanks in advance,

-- 
Neil Feiereisel         e-mail: fireman@uiuc.edu OR fireman@cs.uiuc.edu

derek@sun4dts.dts.ine.philips.nl (derek) (03/20/91)

fireman@sunc5.cs.uiuc.edu (Neil Feiereisel) writes:

>Without changing anything, I suddenly got the following error message when
>doing my backups using zoo.

>ERROR: Could not open ....
>and I get this error for every file in the archive.

>I've been using the following batch file for doing the incremental backups:

>stuff / -modified ! -name *.bak ! -name temp.* ! -size +1000 > temp.bak
>zoo aIunP diffback <temp.bak
>erase temp.bak

>This batch file was working fine - all of a sudden, I get this error
>message every time.  The size of diffback is only about 700K right now.
>I've got 6MB left on my hard disk, and a chkdsk revealed no problems.  Not
>only that, I used Norton Disk Doctor to check every single sector on the
>hard disk and found no problems.

>Can anyone shed any light on this?  Even a few ideas of how to pursue this
>strange problem would be helpful.


I *think* there is a bug in zoo. I had a problem with zoo archives over 
about 700/800 K. I forget what the error message was, but it certainly
was a strange one. There are three things that you can do:

1. Use several separate zoo archives: dback1.zoo, dback2.zoo, etc. (not such
   a good option, that) - see the sketch after this list.

2. Use a different archiver - pkzip and zipper, for example (available on
   Simtel in PD1:<MSDOS.ZIP>):
   ZIPPER13.ZIP  B   13153  901130  Build ZIPs of limited size from multiple files

3. Wait for Mr. Rahul Dhesi to issue the next version of zoo, which he 
   promised to do sometime this year.
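
If you go with option 1, here is a minimal sketch of the idea.  It just
reuses the stuff/zoo invocation from Neil's batch file, but points each run
at a different part of the disk and a different archive.  The directory
names (/dos, /wp) are only placeholders, and I am assuming stuff will take a
starting directory the same way it takes / above - check your copy before
relying on it:

REM Hypothetical split of the incremental backup into two smaller archives.
stuff /dos -modified ! -name *.bak ! -name temp.* ! -size +1000 > temp1.bak
zoo aIunP dback1 <temp1.bak
erase temp1.bak
stuff /wp -modified ! -name *.bak ! -name temp.* ! -size +1000 > temp2.bak
zoo aIunP dback2 <temp2.bak
erase temp2.bak

Each archive should then stay well below the size where the trouble seems
to start.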

Not such a useful reply, but since no one seems to have posted on this, 
perhaps this is better than nothing?

Best Regards, Derek Carr
DEREK@DTS.INE.PHILIPS.NL           Philips I&E TQV-5 Eindhoven, The Netherlands 
Standard Disclaimers apply.

valley@uchicago (Doug Dougherty) (03/21/91)

derek@sun4dts.dts.ine.philips.nl (derek) writes:

>fireman@sunc5.cs.uiuc.edu (Neil Feiereisel) writes:

>>Without changing anything, I suddenly got the following error message when
>>doing my backups using zoo.

>>ERROR: Could not open ....
>>and I get this error for every file in the archive.

>I *think* there is a bug in zoo. I had a problem with zoo archives over 
>about 700/800 K. I forget what the error message was, but it certainly
>was a strange one. There are three things that you can do:


Zoo (version 2.00) also has a problem with archives that have too many
entries in them (I think this applies to the number of entries at a given
directory level, but I have not tested it thoroughly).  In any case, we had
to redesign a system that uses zoo for archival purposes so that it never
puts more than about 240 files into a given .ZOO.

davidsen@sixhub.UUCP (Wm E. Davidsen Jr) (03/25/91)

In article <695@sun4dts.dts.ine.philips.nl> derek@sun4dts.dts.ine.philips.nl (derek) writes:

| 3. Wait for Mr. Rahul Dhesi to issue the next version of zoo, which he 
|    promised to do sometime this year.

  Rahul gave up programming for Lent. He will resume work on new-zoo
after Easter. Actually, if I'd known he was going on vacation at this
time, I would have offered to bug-hunt for him while he was gone. The
new-zoo is down to one (known) bug.
-- 
bill davidsen - davidsen@sixhub.uucp (uunet!crdgw1!sixhub!davidsen)
    sysop *IX BBS and Public Access UNIX
    moderator of comp.binaries.ibm.pc and 80386 mailing list
"Stupidity, like virtue, is its own reward" -me

art@felix.UUCP (Art Dederick) (03/28/91)

I believe the problems being reported with zoo are due to a lack of free
memory and not to a bug in zoo.  I have created multi-megabyte zoo
archives and have found that I only ever have a problem when I run out
of free memory, usually while trying to update one of these large
archives.  When zoo is asked to update an existing archive, it must read
all of the entry headers and keep them in memory so it knows which files
need to be updated.  Once zoo runs out of memory, updates no longer
work.  I'm sure RD didn't expect people to use this as a backup tool, so
he didn't take into account that the number of header entries would ever
be a problem.

To get around your problem, pack the archive before each update, and/or
delete the entries you will be updating and then pack the archive.  Since
the source to zoo is available all over the world (check your local archive
site), you can always modify zoo to use a disk file to keep track of the
headers.  Expect slower updates, since zoo will then have to search the
disk file instead of just searching memory.
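
As a concrete illustration of the first workaround, the batch file from the
start of this thread could simply be preceded by a pack step.  This is only
a sketch: I am assuming here that zoo accepts a bare P command to pack an
archive - check the help screen of your copy before relying on it.

REM Pack diffback.zoo first so the update starts from a compacted archive
REM (assumption: a bare P command packs; verify against your zoo version).
zoo P diffback
REM Then run the usual incremental update exactly as before.
stuff / -modified ! -name *.bak ! -name temp.* ! -size +1000 > temp.bak
zoo aIunP diffback <temp.bak
erase temp.bak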

RD - you might want to add an option or put heuristics into the next
version of zoo to allow this.

D. Art Dederick (714)966-3618 {ccicpg,hplabs,oliveb,spsd,zardoz}!felix!art
FileNet Corp., 3565 Harbor Blvd., Costa Mesa, CA 92626