[comp.sys.ibm.pc] ARC Wars

bright@Data-IO.COM (Walter Bright) (08/13/88)

In article <6084@xanth.cs.odu.edu> rlb@cs.odu.edu (Robert Lee Bailey) writes:
>Suppose that SEA's attitude had been prevalent at the turn
>of the century?  Can you imagine the early automobile manufacturers
>suing each other because their competitors' product also happened to
>have 4 wheels, an engine, and a steering wheel?

Don't laugh! This happened. All the automobile manufacturers agreed to
pay royalties to this one outfit (I forgot the name), except Henry Ford.
Ford took them to court and beat them.

It also happened in the airplane business. The Wright bros patented the
concept of varying the curvature of the wing to effect lateral control,
and also the concept of linking this and the rudder movement to one
combined control. This basically gave them a hammerlock on the aircraft
industry, and there were many attempts to design an airplane that didn't
infringe (the Wright bros won all those cases). Lawsuits abounded, and
the Wrights lost much popularity over this. This continued until WW1,
when the government forced a patent pool to be created so that war production
wouldn't be stalled by patent fights.

By the way, for those people who are designing new arc formats, here are
things I'd like to see that nobody ever talks about:
	1. If one byte is bad in an ARC file, the rest of the files are
	   not recoverable. Arc files should put the directory at the
	   start, so that if a byte is lost in the arc file, only the
	   file containing that byte is lost.
	2. Arc files should use a FAT scheme similar to how DOS stores
	   files on a disk, so that files can be deleted or updated
	   without rewriting the entire arc file -- that rewriting is a
	   major reason why updating is slow.
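Point 1 can be sketched in a few lines of Python. This is not the real ARC format -- the layout, field sizes, and function names here are all invented for illustration -- but it shows the idea: a directory of (name, offset, length) entries written at the front, so a bad byte in one member's data leaves every other member readable.

```python
import struct

def pack_archive(files):
    """files: dict of name -> bytes.  Returns archive bytes with the
    directory at the front (hypothetical layout, not real ARC)."""
    # Each directory entry: name_len (1) + name + offset (4) + length (4).
    dir_size = 4  # leading 4-byte entry count
    for name in files:
        dir_size += 1 + len(name) + 8
    entries, data, offset = [], b"", dir_size
    for name, body in files.items():
        entries.append((name, offset, len(body)))
        data += body
        offset += len(body)
    out = struct.pack("<I", len(entries))
    for name, off, length in entries:
        nb = name.encode("ascii")
        out += struct.pack("<B", len(nb)) + nb + struct.pack("<II", off, length)
    return out + data

def read_member(archive, wanted):
    """Look up one member via the front directory alone."""
    (count,) = struct.unpack_from("<I", archive, 0)
    pos = 4
    for _ in range(count):
        (nlen,) = struct.unpack_from("<B", archive, pos)
        pos += 1
        name = archive[pos:pos + nlen].decode("ascii")
        pos += nlen
        off, length = struct.unpack_from("<II", archive, pos)
        pos += 8
        if name == wanted:
            return archive[off:off + length]
    raise KeyError(wanted)
```

Because extraction touches only the directory and the one member's bytes, corrupting a byte inside one member's data region cannot affect the others.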

Also, the BBS I run will be converted to using ZOO.

deanr@lakesys.UUCP (Dean Roth) (08/31/88)

SEA strikes again, taking PKWare to court Sept. 9.
Seems SEA does not like PKWare's PKPAK/PKUNPAK 3.61
reading and writing *.arc files.  I think the
complaint is along the lines of "contempt of court".
As I understand the agreement, PKWare can continue
reading/writing .arc files until 1/1/89, or have I
missed something?

I will publish details when I get them.

(I am not associated with either PKWare or SEA.
I just think the whole thing stinks.  Gee, I can
read and write *.arc files using DEBUG.  Maybe
SEA should sue me, IBM and Microsoft too.)

Dean A. Roth
deanr@lakesys.UUCP
{rutgers, uwvax} uwmcsd1!lakesys!deanr

campbell@hpbsla.HP.COM (gary campbell) (09/14/88)

>By the way, for those people who are designing new arc formats, here's things
>I'd like to see that nobody ever talks about:

I would like an archiver with compression that can handle multi-floppy
archives, primarily as a means of system backup.  I have never heard of
a freeware offering that addresses system backup.  This surprises me a
bit, with all of the horror stories about backup/restore floating
around.  (Is this because no one wants to trust their system backup to
a freeware product :-)?)

Zoo, with its data compression and its ability to store subdirectories
looks like a possibility, but there is the problem of handling multiple
floppy backups.  The PD Tar just distributed will handle multi-floppy
backups, but doesn't compress, at least not on the PC.

First, do you know of any existing freeware solution to this problem?
It seems like it shouldn't be too hard to add an option to Zoo, either
to sense the available room on a floppy, or to supply a maximum archive
section size.  I don't know the format of a Zoo file, but I assume that
there is or could be a header identifier to identify continuations of an
archive.  I haven't decided whether such a thing should be able to break
a file between disks, allowing processing of a file which compresses to
a size that is larger than a floppy, or whether to force a file to fit
on a floppy.  What does Backup do?  I think it should be possible to
list and extract files from any volume of the archive without having to
read the whole archive, and it would be nice to be able to get a
complete listing from the first volume along with volume number.  This
latter might be better done with a log file.
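The two splitting policies weighed above can be sketched in Python. This is not Zoo's format or DOS Backup's behavior, just the two policies in isolation, with invented function names: variant 1 cuts the archive blob at volume boundaries (so a member may span floppies, and a member larger than one floppy still fits), variant 2 never splits a member.

```python
def split_raw(blob, vol_size):
    """Variant 1: cut at volume boundaries; a member may span two
    volumes, so a member bigger than one floppy is still storable."""
    return [blob[i:i + vol_size] for i in range(0, len(blob), vol_size)]

def split_whole_files(members, vol_size):
    """Variant 2: never split a member; each must fit on one volume.
    members is a list of (name, bytes) pairs."""
    volumes, current, used = [], [], 0
    for name, body in members:
        if len(body) > vol_size:
            raise ValueError("%s is larger than one volume" % name)
        if used + len(body) > vol_size:
            # Current floppy is full; start the next one.
            volumes.append(current)
            current, used = [], 0
        current.append((name, body))
        used += len(body)
    if current:
        volumes.append(current)
    return volumes
```

Variant 2 makes per-volume listing and extraction trivial (every volume is a complete archive of its members), at the cost of refusing any member that compresses to more than one floppy.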

Someone on a local BBS made the following comments about compression in
a backup utility:  "Also, you would have to wonder about the integrity
of the backed-up files.  If the archive dropped one bit -- it could
destroy multiple files within the one archive."  I think that Zoo
compresses each file separately, so the part about losing many files
wouldn't be true, but is his other concern valid?

I am interested in any comments or suggestions you may have on this
subject.

--
Gary Campbell
{decvax,fortune}hplabs!hpbsla!campbell

wheels@mks.UUCP (Gerry Wheeler) (09/20/88)

In article <360004@hpbsla.HP.COM>, campbell@hpbsla.HP.COM (gary campbell) writes:
> I would like an archiver with compression that can handle multi-floppy
> archives, primarily as a means of system backup.

Something which fits that description is the cpio utility in the MKS
Toolkit.  It is not, however, freeware.

On Unix, one would use a command like "find /usr | cpio -ocv | compress
>/dev/xxx".  On DOS, of course, because there are no real pipes, the
entire output of cpio would be saved in a temp file before compress was
run.  This doesn't work well if you are trying to back up 10 megs of
data.  :-) So, we added an option to cpio to have it do the compression
on the fly.  The results are the same as above, but no pipe is involved. 
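The on-the-fly approach can be sketched in Python (this is not MKS's code; zlib stands in for the LZW "compress" scheme of the era, and the function name is made up): the archive stream is compressed block by block as it is written, so no temp file holding the full uncompressed output is ever needed.

```python
import io
import zlib

def copy_compressed(src, dst, chunk_size=8192):
    """Stream-compress src into dst one block at a time, avoiding a
    temporary file.  zlib here is a stand-in for the old 'compress'
    LZW scheme, which the stdlib doesn't provide."""
    comp = zlib.compressobj()
    while True:
        block = src.read(chunk_size)
        if not block:
            break
        dst.write(comp.compress(block))
    dst.write(comp.flush())  # emit any data still buffered in the compressor
```

Memory use is bounded by the chunk size plus the compressor's internal state, regardless of how many megabytes pass through.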

Our cpio also assumes that a write error indicates a full disk, and
asks if you want to continue on another volume.  This allows the cpio
file to occupy as many floppies as necessary.

> Someone on a local BBS made the following comments about compression in
> a backup utility:  "Also, you would have to wonder about the integrity
> of the backed-up files.  If the archive dropped one bit -- it could
> destroy multiple files within the one archive."

True.  You don't want to use compression if your floppies are flaky. 
(But who would use flaky floppies for backup?) If you're going to start
from scratch and create a backup program with compression, you probably
want to compress each file separately and try to arrange some way to
re-synchronize to the headers within the file in case there is some
corruption.  Possibly the use of a separate file with pointers into the
backup file would work.  Or, perhaps the indexes could be stored at the
end of the file in a fixed size area.  (Not too handy on a multi-volume
backup, though.)
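The re-synchronization idea can be sketched like so. The magic marker and record layout are invented for illustration: each member starts with a marker the reader can scan for, so a corrupted region costs only the members it overlaps, and everything after the next marker is still recoverable.

```python
MAGIC = b"\xfeZB1"  # hypothetical per-member marker

def pack_members(members):
    """members: list of (name, bytes).  Each record is
    MAGIC + name_len (1) + name + body_len (4) + body."""
    out = b""
    for name, body in members:
        nb = name.encode("ascii")
        out += MAGIC + bytes([len(nb)]) + nb + len(body).to_bytes(4, "little") + body
    return out

def recover(blob):
    """Scan for markers instead of trusting a single index, so members
    that follow a corrupted region are still found."""
    found, pos = [], 0
    while True:
        pos = blob.find(MAGIC, pos)
        if pos < 0:
            return found
        p = pos + len(MAGIC)
        try:
            nlen = blob[p]; p += 1
            name = blob[p:p + nlen].decode("ascii"); p += nlen
            length = int.from_bytes(blob[p:p + 4], "little"); p += 4
            found.append((name, blob[p:p + length]))
            pos = p + length
        except (IndexError, UnicodeDecodeError):
            pos += 1  # false marker hit; keep scanning
```

A separate index (Gerry's pointer-file idea) would make normal access fast, with the markers kept purely as a salvage path.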
-- 
     Gerry Wheeler                           Phone: (519)884-2251
Mortice Kern Systems Inc.               UUCP: uunet!watmath!mks!wheels
   35 King St. North                             BIX: join mks
Waterloo, Ontario  N2J 2W9                  CompuServe: 73260,1043