opergb@uvm-gen.UUCP (Gary Bushey) (05/01/88)
I run a small PD software library and am rebuilding it using compressed and/or crunched format. I was wondering whether I should use Zoo or Arc, and which version, to accomplish this. Which one is going to become the one used in comp.binaries.ibm.pc?

Any discussion, complaints, flames?

Gary Bushey
browning@cory.Berkeley.EDU (Craig Browning) (05/02/88)
In article <827@uvm-gen.UUCP> opergb@uvm-gen.UUCP (Gary Bushey) writes:
>I run a small PD Software library and am rebuilding it using compressed and/or
>crunched format. I was wondering if I should use Zoo or Arc and which version
>to accomplish this. Which one is going to become the one used in
>comp.binaries.ibm.pc?
>
>Any discussion, complaints, flames?
>
>Gary Bushey

ARC format. This is my vote, offered now that we have a discussion group; I feel that arc is much more the standard. The only reason given previously for not using PKARC with squashing was that Phil Katz wouldn't release the format, but a message posted from him did give it. It is fast, and compresses nicely; the 'Benchmarks' posted recently confirmed that PKARC was fastest and close to most efficient, after compress, which usually was smallest, as you'd expect with 16-bit LZW.

We no-moderator fans apparently have to have a moderator for the binaries group (I don't see why, now that we have the discussion group), but I think not using arc format for the files would be a major inconvenience. We should perhaps use the non-squashing format because not everyone has PKARC, but since the PKARC binaries are supposed to be posted regularly, maybe even straight PKARC is OK. BBSs seem to have adapted universally to the arc format, with or without squashing. Since this is an IBM group, that seems pertinent. We've been using arc, why change?

P.S. On Pibterm 4.1.3 (?), a list of recent changes would be nice to help people decide whether they want to get a new version. I do encourage people posting widely-distributed messages to include all pertinent info; there's too much noise in questions otherwise. If anyone has nice VGA programs, please post them. I'll post a nice windows program called globe; it shows a picture of the globe spinning, and you can locate cities etc. It's fun.

Craig
heiby@falkor.UUCP (Ron Heiby) (05/02/88)
Craig Browning (browning@cory.Berkeley.EDU.UUCP) writes:
> We should use the non-squashing format perhaps
> because not everyone has PKARC, but since the PKARC binaries are supposed to
> be posted regularly, maybe even straight PKARC is OK.

Let's not forget that the majority of the systems that are actually on the Usenet are *not* PCs. In fact, the majority of them are computers running a variant of the UNIX operating system. It would be extremely nice to standardize on the non-squashing format, so that the arc files could be inspected on the host UNIX system, the documentation extracted and printed, etc. Otherwise, folks like me (and I bet there are a lot of us) will have to download the arc file, extract the documentation, upload the documentation back to the host, and finally print it. I really don't think that the small additional compression that squashing adds is worth the major hassle it would impose.
-- 
Ron Heiby, heiby@mcdchg.UUCP	Moderator: comp.newprod & comp.unix
"I believe in the Tooth Fairy." "I believe in Santa Claus."
"I believe in the future of the Space Program."
phil@amdcad.AMD.COM (Phil Ngai) (05/02/88)
In article <827@uvm-gen.UUCP> opergb@uvm-gen.UUCP (Gary Bushey) writes:
>I run a small PD Software library and am rebuilding it using compressed and/or
>crunched format. I was wondering if I should use Zoo or Arc and which version
>to accomplish this. Which one is going to become the one used in
>comp.binaries.ibm.pc?

My understanding is that zoo can handle directories and arc can't. If that is true, it would seem zoo is preferable. Some may say arc is more standard, but since self-extracting zoo binaries can be created, there shouldn't be anyone who can't use a zoo posting. Plus you can easily get zoo on many other operating systems like Unix.
-- 
Make Japan the 51st state!

I speak for myself, not the company.
Phil Ngai, {ucbvax,decwrl,allegra}!amdcad!phil or phil@amd.com
feg@clyde.ATT.COM (Forrest Gehrke) (05/02/88)
In article <21371@amdcad.AMD.COM>, phil@amdcad.AMD.COM (Phil Ngai) writes:
> My understanding is that zoo can handle directories and arc can't. If
> that is true, it would seem zoo is preferable.

Will someone please explain why any archiver's ability to handle directories is important for transmitting binaries on USENET?

Forrest Gehrke
carlp@iscuva.ISCS.COM (Carl Paukstis) (05/02/88)
In article <21371@amdcad.AMD.COM> phil@amdcad.UUCP (Phil Ngai) writes:
>In article <827@uvm-gen.UUCP> opergb@uvm-gen.UUCP (Gary Bushey) writes:
>>I run a small PD Software library and am rebuilding it using compressed and/or
>>crunched format. I was wondering if I should use Zoo or Arc and which version
>>to accomplish this. Which one is going to become the one used in
>>comp.binaries.ibm.pc?
>
>My understanding is that zoo can handle directories and arc can't. If
>that is true, it would seem zoo is preferable. Some may say arc is
>more standard, but since self extracting zoo binaries can be created,
>there shouldn't be any one who can't use a zoo posting.

Zoo is inherently a better system. It does indeed handle the directory structure, which is a feature taken much too lightly by many who argue for ARC. It also includes comment facilities for the archive and for the individual files.

If you're starting your own PD software library, you should definitely use zoo. The ARC system is too limiting for complex software that must reside in several directories.

I tend to think that c.b.ibm.pc should use zoo also, but I seem to be outvoted. Those who object apparently don't have zoo for their favorite host system, and want to use ARC to look at files before downloading to their home systems. This is the only semi-rational argument I have seen for using ARC as the standard. And it's not that rational - it's EASY to get ZOO for your host system (OK, maybe not for IBM mainframes or some such, but any UNIX system, I think. And I'll bet Mr. Dhesi would work something out for the true-bluers.)
-- 
Carl Paukstis  +1 509 927 5600 x5321  |"I met a girl who sang the blues
                                      | and asked her for some happy news
UUCP: carlp@iscuvc.ISCS.COM           | but she just smiled and turned away"
  ...uunet!iscuva!iscuvc!carlp        |              - Don McLean
davidsen@steinmetz.ge.com (William E. Davidsen Jr) (05/03/88)
In article <827@uvm-gen.UUCP> opergb@uvm-gen.UUCP (Gary Bushey) writes:
| I run a small PD Software library and am rebuilding it using compressed and/or
| crunched format. I was wondering if I should use Zoo or Arc and which version
| to accomplish this. Which one is going to become the one used in
| comp.binaries.ibm.pc?

I suspect neither. I can't see forcing the moderator into repacking every submission. There are good technical reasons for using zoo, and good historical reasons for using arc. There are good technical reasons for not using PKARC at this time. I think postings will be in whatever format the poster uses, unless some other format such as DWC or PKARC is involved.

I would like to mention that this *is* a UNIX network, and many of us have to use a modem to move stuff to a PC. For that reason we want to unpack the documentation and look at it on the UNIX system. Given that, PKARC format is not unpackable on SysV, and on only a small number of BSD sites as yet. I doubt that there are 10 UNIX sites in the country which can handle DWC, and two which handle FASTARCH.

There are two major reasons why people want to use zoo over arc: (a) features such as comments and subdirectories, and (b) performance, since zoo is 5-8 times faster than arc *in UNIX*. Neither of these is a compelling reason, and I think the choice between these alternatives should be left to the poster.
-- 
bill davidsen (wedu@ge-crd.arpa)
{uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me
davidsen@steinmetz.ge.com (William E. Davidsen Jr) (05/03/88)
In article <2898@pasteur.Berkeley.Edu> browning@cory.Berkeley.EDU.UUCP (Craig Browning) writes:
| ARC format. This is my vote, offered now that we have a discussion group;
| I feel that arc is much more the standard. The only reason given previously
| for not using PKARC with squashing was that Phil Katz wouldn't release
| the format of it, but a message from him posted did give it. It is fast,
| and compresses nicely; the 'Benchmarks' posted recently confirmed that PKARC
| was fastest and close to most efficient, after compress which usually was

Please read my other posting. A lot of sites can't unpack PKARC and don't want to move large files to a PC until they have read the docs. Ergo, we want a format which UNIX and DOS share, such as arc and zoo.

I know zoo runs on VMS; how about arc? I heard that there was an unpack-only version, but it didn't work here.
-- 
bill davidsen (wedu@ge-crd.arpa)
{uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me
loci@csccat.UUCP (Chuck Brunow) (05/03/88)
In article <21371@amdcad.AMD.COM> phil@amdcad.UUCP (Phil Ngai) writes:
>My understanding is that zoo can handle directories and arc can't. If
>that is true, it would seem zoo is preferable.

??????????? What's preferable about that? Doesn't your system have a file system? You know, the things that list when you say "dir". Funny thing about the file system is that it's already got directories (an idea they picked up from unix). The real problem seems to be the clustering used by MS-DOS, which is very wasteful of disk space. Why not go to the root of the problem instead of the dumb layering scheme? Just because IBM does stupid things (PC) is no reason to emulate them.

> ...more standard,

What standard? What are you smoking? What planet are you from?

>there shouldn't be any one who can't use a zoo posting.
>
>Plus you can easily get zoo on many other operating systems like Unix.

Keep your junk on your own machine; Unix doesn't need it.

>Make Japan the 51st state!

Now that really makes me mad. I'd rather make them a crater.

>I speak for myself, not the company.
>Phil Ngai, {ucbvax,decwrl,allegra}!amdcad!phil or phil@amd.com

Maybe, but it reflects on the company, university or whatever when you express yourself, doesn't it? After all, they think you're a worthy employee (they hired you) and that reflects on their judgement.
bobmon@iuvax.cs.indiana.edu (RAMontante) (05/03/88)
carlp@iscuva.ISCS.COM (Carl Paukstis) writes:
>
>Zoo is inherently a better system. It does indeed handle the directory
>structure, which is a feature taken much too lightly by many who argue for
>ARC. It also includes comment facilities for the archive and for the
>individual files.

HOW does one add comments to the overall archive? (Or does it take two?...) I have version 2.00 zoo, 2.00 zoo documentation, and there's one hint that v2.0 can add archive comments. But the switches only seem to add file comments. Now, when I have a pom.zoo archive, containing pom.c, pom.h, pom.doc, and makefile, I can figure out what the files are -- I just need to remind myself what the heck POM is!

>I tend to think that c.b.ibm.pc should use zoo also, but I seem to be
>outvoted. Those who object apparently don't have zoo for their favorite
>host system, and want to use ARC to look at files before downloading to
[...]

So here's what we do... we post everything in BOTH formats, especially the multipart stuff, and keep track of which format generates more "Please Repost foo.bar.part_31" messages. Then we all spend 6 months arguing whether the winner is the more popular archiver, or just the one with more problems... until someone asks what uudecode is, or the net.gods decide they don't like posting binaries anyway.

My own favorite compression method guarantees a 50% savings. You just fill in all those empty, unused zero bits with one bits from the end of the string. And error detection during transmission is a snap, too -- just catch anything that isn't a one and turn it into a one. Hmmm, there might be room for some interesting heuristic data analyses here.

(BTW: I favor zoo too, although it's not a strong feeling -- we don't have zoo on the host machine here.)
~~~
"I was wondering whether I was stoned tonight." - Zonker, Doonesbury
wfp5p@euclid.acc.Virginia.EDU (William F. Pemberton) (05/04/88)
You can STILL use PKARC if you don't want squashing; all you have to do is use the "/oc" switch.
-- 
Bill Pemberton                         WFP5P@Virginia.BITNET
Academic Computing Center, University of Virginia, Charlottesville, Va 22904
(804)296-FRYD     "Life is the only thing worth living for!"
scjones@sdrc.UUCP (Larry Jones) (05/04/88)
In article <10683@steinmetz.ge.com>, davidsen@steinmetz.ge.com (William E. Davidsen Jr) writes:
> I know zoo runs on VMS, how about arc? I heard that there was an
> unpack only version, but it didn't work here.

Yep, I created and posted the VMS version of ARC to comp.os.vms a couple of months ago. When I get some free time I plan to add squashing (at least for extracting), fix some bugs, and improve performance. There is very little VMS specific code in it, so the end result should be portable to all flavors of that other operating system without too much work.
----
Larry Jones                         UUCP: ...!sdrc!scjones
SDRC                                AT&T: (513) 576-2070
2000 Eastman Dr.                    BIX: ltl
Milford, OH 45150
"When all else fails, read the directions."
randy@umn-cs.cs.umn.edu (Randy Orrison) (05/04/88)
In article <8405@iuvax.cs.indiana.edu> bobmon@iuvax.UUCP (RAMontante) writes:
|My own favorite compression method guarantees a 50% savings. You just fill
|in all those empty, unused zero bits with one bits from the end of the string.
|And error detection during transmission is a snap, too -- just catch anything
|that isn't a one and turn it into a one. Hmmm. there might be room for some
|interesting heuristic data analyses here.

You can improve on this too: just encode the length of the resulting string of 1 bits in binary, and apply the compression recursively. At some point you won't be getting any better, but that should be good enough. Probably get it into one byte...

|(BTW: I favor zoo too

Me too!
	-randy
-- 
Randy Orrison, Control Data, Arden Hills, MN		randy@ux.acss.umn.edu
(Anyone got a Unix I can borrow?)	{ihnp4, seismo!rutgers, sun}!umn-cs!randy
The best book on programming for the layman is "Alice in Wonderland"; but
that's because it's the best book on anything for the layman.
davidsen@steinmetz.ge.com (William E. Davidsen Jr) (05/04/88)
In article <8405@iuvax.cs.indiana.edu> bobmon@iuvax.UUCP (RAMontante) writes:
| carlp@iscuva.ISCS.COM (Carl Paukstis) writes:
| >
| >Zoo is inherently a better system. It does indeed handle the directory
| >structure, which is a feature taken much too lightly by many who argue for
| >ARC. It also includes comment facilities for the archive and for the
| >individual files.
|
| HOW does one add comments to the overall archive? (Or does it take two?...)
| I have version 2.00 zoo, 2.00 zoo documentation, and there's one hint that
| v2.0 can add archive comments. But the switches only seem to add file
| ...

The 'A' modifier causes the 'c' (comment) or 'g' (generation) command to apply to the archive as a whole rather than to any individual file. This info is on the one-screen help, but you have to know what to look for before you can see it ;-)
-- 
bill davidsen (wedu@ge-crd.arpa)
{uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me
davidsen@steinmetz.ge.com (William E. Davidsen Jr) (05/04/88)
In article <25745@clyde.ATT.COM> feg@clyde.ATT.COM (Forrest Gehrke) writes:
| In article <21371@amdcad.AMD.COM>, phil@amdcad.AMD.COM (Phil Ngai) writes:
|
| > My understanding is that zoo can handle directories and arc can't. If
| > that is true, it would seem zoo is preferable.
|
| Will someone please explain why any archiver's ability to
| handle directories is important for transmitting binaries
| on USENET?

Briefly, there are large PD/shareware packages which reside in several directories, such as help, graphics screens, overlays, etc. It is desirable to pack these so that they unpack where they are needed, rather than saying "create a directory fubar and unpack this archive there...", since anyone reading this group will know that there are some new users who would have trouble doing that. Use of zoo reduces the instructions to "type 'zoo x//' and press RETURN", which is more likely to be done correctly.
-- 
bill davidsen (wedu@ge-crd.arpa)
{uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me
guest@vu-vlsi.Villanova.EDU (visitors) (05/05/88)
In article <10683@steinmetz.ge.com> davidsen@crdos1.UUCP (bill davidsen) writes:
> [...]
>
> I know zoo runs on VMS, how about arc? I heard that there was an
>unpack only version, but it didn't work here.

A while back there was a version of ARC (ver 1.1?) posted to comp.os.vms that could both create and un-arc .arc files. This version of ARC is ARC compatible but not PKARC compatible, but would work with PKARC if you use the -otc switch. If there is sufficient interest, I can post the .obj file somewhere (alas, the source was not posted) or try to contact the author to get the latest version.
----
Mark Schaffer        BITNET: 16448591@VUVAXCOM
Villanova University UUCP:   ...{ihnp4!psuvax1,burdvax,cbmvax,pyrnj,bpa}
(Go Wildcats!)                  !vu-vlsi!excalibur!164485913
"Look, It's Bicycle Repair Man! He's fixing it with his own hands!"
phil@amdcad.AMD.COM (Phil Ngai) (05/05/88)
In article <10681@steinmetz.ge.com> davidsen@crdos1.UUCP (bill davidsen) writes:
. There are two major reasons why people want to use zoo over arc;
.(a) features such as comments and subdirectories, and (b) performance,
.since zoo is 5-8 times faster than arc *in UNIX*. Neither of these are
.compelling reasons, and I think the choice should be left to the poster
Subdirectories unimportant? I suggest you've been using PCs too long.
--
Make Japan the 51st state!
I speak for myself, not the company.
Phil Ngai, {ucbvax,decwrl,allegra}!amdcad!phil or phil@amd.com
carlp@iscuva.ISCS.COM (Carl Paukstis) (05/05/88)
In article <25745@clyde.ATT.COM> feg@clyde.ATT.COM (Forrest Gehrke) writes:
>In article <21371@amdcad.AMD.COM>, phil@amdcad.AMD.COM (Phil Ngai) writes:
>
>> My understanding is that zoo can handle directories and arc can't. If
>> that is true, it would seem zoo is preferable.
>
>Will someone please explain why any archiver's ability to
>handle directories is important for transmitting binaries
>on USENET?

Ahem. The ORIGINAL note, to which Mr. Ngai's note replied, was, if I remember correctly, asking for advice about which archiver to use for setting up one's own shareware/PD library. However, much of the same reasoning should apply.

The directory capability makes it easy to PACKAGE program environments, not single binaries. I'm belaboring the obvious point when I say that many useful binaries require several support files for configuration, option setup, etc. With really useful software getting ever more complex, it is desirable to organize a PACKAGE into a main directory and one or more subdirectories, to minimize the number of uninformative file names the end-user need see when in the PACKAGE directory. This is not to say that a hierarchy-maintaining archiver is VITAL (one can always provide INSTALL.BAT files to do the dirty work), only that it is preferable to use one if it is available.

On the issue of Usenet transmission, I am of two minds. On the one hand, I have acquired several VERY useful PC-type things this way, and I'd hate to give it up. On the other hand, I sympathize with those who pay the transmission costs (I don't) and would rather not transmit/receive large chunks of stuff.
-- 
Carl Paukstis  +1 509 927 5600 x5321  |"I met a girl who sang the blues
                                      | and asked her for some happy news
UUCP: carlp@iscuvc.ISCS.COM           | but she just smiled and turned away"
  ...uunet!iscuva!iscuvc!carlp        |              - Don McLean
ralf@b.gp.cs.cmu.edu (Ralf Brown) (05/05/88)
In article <21414@amdcad.AMD.COM> phil@amdcad.UUCP (Phil Ngai) writes: }Subdirectories unimportant? I suggest you've been using PCs too long. Floppy-only systems, I'd wager. I don't see how anyone could get by completely without subdirectories on a hard disk (unless they're still running Eagle MS-DOS 1.25, which handled two types of hard disks but didn't have subdirectory support yet). -- {harvard,uunet,ucbvax}!b.gp.cs.cmu.edu!ralf -=-=- AT&T: (412)268-3053 (school) ARPA: RALF@B.GP.CS.CMU.EDU |"Tolerance means excusing the mistakes others make. FIDO: Ralf Brown at 129/31 | Tact means not noticing them." --Arthur Schnitzler BITnet: RALF%B.GP.CS.CMU.EDU@CMUCCVMA -=-=- DISCLAIMER? I claimed something?
bobmon@iuvax.cs.indiana.edu (RAMontante) (05/05/88)
loci@csccat.UUCP (Chuck Brunow) writes:
,phil@amdcad.UUCP (Phil Ngai) writes:
,>
,>My understanding is that zoo can handle directories and arc can't. If
,>that is true, it would seem zoo is preferable.
,
, ??????????? what's preferable about that? Doesn't your system
, have a file system? You know, the things that list when you
, say "dir". Funny thing about the file system is that it's already
, got directories (an idea they picked up from unix).

I don't think you know what Ngai is talking about. Zoo preserves the file structure of the archived files, which is handy if you're collecting files that wish to reside in different directories most of the time; you can restore them into the same structure they came from. If "his" system didn't have a file system, _then_ he wouldn't need the ability to preserve the directory information. And "the file system" may not have picked up directories from unix; there are operating systems even older, which also had the concept of file systems and even directories.

, The real problem seems to be the clustering used by MS-DOS which
, is very wasteful of disk space. Why not go to the root of the
, problem instead of the dumb layering scheme. Just because IBM
, does stupid things (PC) is no reason to emulate them.

Are you suggesting bigger clusters, smaller clusters, or no clusters? If the latter, how do you propose to allocate disk space? A bit at a time? And why do you bring up physical disk management in an argument about logical file organization?

,>Make Japan the 51st state!
,
, Now that really makes me mad. I'd rather make them a crater.

We made Alaska a state, and got the Alaskan oil fields. We cratered Japan once already, and got the current trade situation (and 256K memory chips, as well). Whose side are you on?
larry@sgistl.SGI.COM (Larry Autry) (05/05/88)
In article <21414@amdcad.AMD.COM>, phil@amdcad.AMD.COM (Phil Ngai) writes:
>
> Subdirectories unimportant? I suggest you've been using PCs too long.
> -- 
> Phil Ngai, {ucbvax,decwrl,allegra}!amdcad!phil or phil@amd.com

Where am I? Is this a PC group or what? OK, subdirectories ARE important. Speed is important. But number one, that which is used by the majority should indicate the modus operandi with regard to archive method. Those who like zoo enough could try to effect a change; since this is a binaries group, post the binary for ZOO. I have been on the net for more than a year, and I have only seen sources for ZOO, and they were in the unix sources group.

I could be easily convinced to use zoo if it started appearing on every BBS in the country and uploads were packed in zoo format. But at present that is not true; I have yet to see zoo uploads on any BBS I call, one of which is a large one in Silicon Valley.
-- 
Larry Autry
larry@sgistl.sgi.com or {ucbvax,sun,ames,pyramid,decwrl}!sgi!sgistl!larry
derek@achel.UUCP (derek) (05/05/88)
In all the discussion of the archivers that has been going on, I have seen nothing of the limitation of ARC, PKARC and ZOO when used on a floppies-only system (does *everybody* use hard disks? :-)). That is: you can only use about half of the disk, because on updating the archive, the archiver copies the archive _onto the same disk_ while doing the update.

OK, so this is a good security measure, but is there any way to persuade the archiver to do its backup onto another disk? Or is there a way to automate the archiving facility so that I can fill, or nearly fill, a disk with one or more archives, that I have overlooked?

This question really relates to using an archiver as a substitute for backup on a floppy system, to archive all those utilities down-loaded from bulletin boards, without using a one-for-one copy of all my utility diskettes.

Derek Carr - Philips I&E, Eindhoven, The Netherlands.
davidsen@steinmetz.ge.com (William E. Davidsen Jr) (05/05/88)
In article <21414@amdcad.AMD.COM> phil@amdcad.UUCP (Phil Ngai) writes:
| In article <10681@steinmetz.ge.com> davidsen@crdos1.UUCP (bill davidsen) writes:
| . There are two major reasons why people want to use zoo over arc;
| .(a) features such as comments and subdirectories, and (b) performance,
| .since zoo is 5-8 times faster than arc *in UNIX*. Neither of these are
| .compelling reasons, and I think the choice should be left to the poster
|
| Subdirectories unimportant? I suggest you've been using PCs too long.

I'm not sure what brought this on; you've flamed me for something I didn't say, and then brought up an irrelevant point as a conclusion. What I said was that the ability to handle subdirectories in an archiver is not a compelling reason to choose an archiver. I have looked over the stuff I have pulled from the net for programs which use subdirectories, and have come up with a total of one. Count them, ONE!! Do it on the wrong finger and it looks like a rude gesture.

I have been a major proponent of zoo, but I see no reason to disallow arc, since I can handle that on UNIX nicely. For a fraction of a percent of postings we should cram a new standard down people's throats?

Finally, I'm not sure what brought on the nasty comment about using PCs too long??? I *have* been using PCs since 1976 (or so), when 16k was a big system. I've also used mainframes since 1966, and UNIX since V6. Now what has all this got to do with a standard archiver needing to be able to handle subdirectories?

If you have a problem with what I say, tell me. But don't make up quotes and then flame them.
-- 
bill davidsen (wedu@ge-crd.arpa)
{uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me
dalegass@dalcsug.UUCP (Dale Gass) (05/06/88)
It seems that one of the big reasons people like Zoo is its handling of directories. I've written a little program which uses PKARC to pack a directory structure into an arc. All it does is create .ARC files of each subdirectory, and arc these subdir arcs into the main archive file. Conversely, the unpacking program de-arcs the specified archive, and any resulting .ARC files are de-arced into directories of the same name (created if necessary).

For example, consider the following structure:

    .
    |--data
    |--plots
    |--oldstuff
       |--foo

The files in the foo subdir are arc'ed into a file FOO.ARC, which is in turn archived into a file with all the files in the oldstuff dir to make an OLDSTUFF.ARC, etc., etc., and finally DATA.ARC, PLOTS.ARC, and OLDSTUFF.ARC are arc'ed into one archive file.

Granted, this takes longer, as data must be arc'ed several times, but this is not much of a problem, as PKARC blows away ZOO speedwise anyway.

If there is sufficient interest, I'll post this program. If there isn't, I won't.

-dalegass@dalcsug.uucp
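[The nesting scheme above can be sketched in a few lines. This is a rough illustration only, using Python's zipfile module as a stand-in for PKARC, since the original program was never included in the thread; the function names and the in-memory tree representation are invented for the sketch.]

```python
# Sketch of the "archive the subdirectory archives" idea: each subdirectory
# becomes a nested archive stored as an ordinary member of its parent.
import io
import zipfile

def pack_tree(tree):
    """Pack a nested dict {name: bytes | dict} into one archive (as bytes).
    A dict value is a subdirectory; it is packed recursively and stored
    under the member name <DIR>.zip, mirroring FOO.ARC inside OLDSTUFF.ARC."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name, entry in tree.items():
            if isinstance(entry, dict):
                zf.writestr(name + ".zip", pack_tree(entry))
            else:
                zf.writestr(name, entry)
    return buf.getvalue()

def unpack_tree(blob):
    """Inverse: any member ending in .zip is expanded back into a sub-dict.
    (A real file whose name ends in .zip would be misread -- the original
    scheme has the same ambiguity with nested .ARC names.)"""
    tree = {}
    with zipfile.ZipFile(io.BytesIO(blob)) as zf:
        for name in zf.namelist():
            data = zf.read(name)
            if name.endswith(".zip"):
                tree[name[:-4]] = unpack_tree(data)
            else:
                tree[name] = data
    return tree

original = {
    "readme.txt": b"top level file",
    "oldstuff": {"foo": {"a.dat": b"deep file"}},
}
assert unpack_tree(pack_tree(original)) == original
```

As in the post, the inner data is compressed more than once, which costs time but keeps the outer archive completely flat as far as the archiver is concerned.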
tneff@dasys1.UUCP (Tom Neff) (05/06/88)
I don't think there is any need to "anoint" ZOO or ARC as the "approved" format in comp.binaries.ibm.pc to the exclusion of the other. We could argue the theoretical merits of the two systems all year (and still may do so! :-) ) but the fact remains that they're both reasonably popular and reliable packaging systems, with a substantial flow of software being distributed in each format. There is no reason we cannot accept, support and distribute both. If someone submits an ARC, distribute it as such; ditto if someone submits a ZOO.

I see no need to disassemble incoming ARChives and repackage as ZOOs or vice versa, for two reasons: (1) a tiny, fast extractor exists for each format and should always be available via netmail and/or monthly posting in the group -- LOOZ.EXE is only 9k and ARCE.COM only 7k; and (2) it does violence to the integrity of an author's product to reshuffle or repackage it without prior permission... something quite a few people are stating explicitly in their README's these days. No need for usenet to set a poor example.

Although portability to other environments happens to be taken care of for both ARC and ZOO, I don't consider it a factor. The binaries posted here are intended to be unpacked and run on PCs, which are ubiquitous. If subscribers can only download net news on a Vax at work, for instance, and need to port the result to a PC afterwards, let them squirt the whole ARC file over; it should uudecode with full integrity on the VAX even if the resulting file is opaque. Or, just as good, port the uuencoded text stream over to the PC. A perfectly good PC port of uudecode exists.
-- 
Tom Neff               UUCP: ...!cmcl2!phri!dasys1!tneff
"None of your toys     CIS: 76556,2536   MCI: TNEFF
 will function..."     GEnie: TOMNEFF    BIX: are you kidding?
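[The download-then-decode path Tom describes works because uuencoding turns arbitrary bytes into plain printable text that survives text-only hops in either direction. A minimal round-trip sketch, using Python's binascii module in place of the actual uuencode/uudecode utilities; the helper names are invented for illustration.]

```python
# uuencode round-trip: binary -> printable ASCII lines -> binary.
# Classic uuencode emits at most 45 input bytes per output line.
import binascii

def uuencode_body(data: bytes) -> str:
    """Encode data as uuencoded lines (body only, no begin/end wrapper)."""
    lines = []
    for i in range(0, len(data), 45):
        lines.append(binascii.b2a_uu(data[i:i + 45]).decode("ascii"))
    return "".join(lines)

def uudecode_body(text: str) -> bytes:
    """Decode the lines produced by uuencode_body."""
    return b"".join(binascii.a2b_uu(line) for line in text.splitlines(True))

# An arbitrary binary "archive" with all byte values present:
payload = bytes(range(256)) * 4
encoded = uuencode_body(payload)
assert encoded.isascii()                     # safe to ship as news text
assert uudecode_body(encoded) == payload     # full integrity restored
```

The encoded form is pure printable text, so it can be moved host-to-PC (or left sitting on the VAX) without any 8-bit-clean transfer path.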
heiby@falkor.UUCP (Ron Heiby) (05/06/88)
Chuck Brunow (loci@csccat.UUCP) writes:
> In article <21371@amdcad.AMD.COM> phil@amdcad.UUCP (Phil Ngai) writes:
> >
> >My understanding is that zoo can handle directories and arc can't. If
> >that is true, it would seem zoo is preferable.
>
> ??????????? what's preferable about that? Doesn't your system
> have a file system? You know, the things that list when you
> say "dir". Funny thing about the file system is that it's already
> got directories (an idea they picked up from unix).
[irrelevant flame about MS-DOS filesystem deleted]
> >Plus you can easily get zoo on many other operating systems like Unix.
> Keep your junk on your own machine, Unix doesn't need it.
[japan flame deleted]
[ad hominem attack deleted]

Looks like Chuck got up on the wrong side of the bed. Not only did he miss the entire point that Phil was making (quite eloquently, I believe), but he decided to lob a few insults at Phil to top everything off. Chuck, you really should re-read your postings before you send them off.

Now, to business! The point made about zoo being able to handle directories is that you can save a sub-tree using zoo. In arc, there is no knowledge of the directory structure the files came from. You can have only one file named "MAKEFILE" in an .ARC file, but several (one per directory) in a zoo archive. Consider how useful/useless tar and cpio would be if they didn't know how to make use of the UNIX filesystem. I'd say that zoo is to (UNIX's) tar/cpio as arc is to (UNIX's) ar. Yes, "the file system's already got directories", but that is the argument *in favor* of an archive method that understands them!
-- 
Ron Heiby, heiby@mcdchg.UUCP	Moderator: comp.newprod & comp.unix
"I believe in the Tooth Fairy." "I believe in Santa Claus."
"I believe in the future of the Space Program."
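[The MAKEFILE point can be made concrete. A rough sketch using Python's zipfile module as a stand-in for a path-aware archiver like zoo; the member names are invented for illustration.]

```python
# A path-aware archive stores the directory component in the member name,
# so two files both called MAKEFILE can coexist.  A flat archiver like arc
# keeps only the bare name, and the second MAKEFILE collides with the first.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("src/MAKEFILE", b"rules to build the program")
    zf.writestr("doc/MAKEFILE", b"rules to build the manual")

with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    names = zf.namelist()

# Both survive, distinguished by their directory prefix:
assert names == ["src/MAKEFILE", "doc/MAKEFILE"]

# Strip the paths, as a flat archiver effectively does, and the
# two entries become indistinguishable:
flat = [n.split("/")[-1] for n in names]
assert flat == ["MAKEFILE", "MAKEFILE"]
```

This is the tar/cpio-versus-ar analogy in miniature: the archive format either carries the tree or it doesn't.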
feg@clyde.ATT.COM (Forrest Gehrke) (05/06/88)
In article <10709@steinmetz.ge.com>, davidsen@steinmetz.ge.com (William E. Davidsen Jr) writes:
> In article <25745@clyde.ATT.COM> feg@clyde.ATT.COM (Forrest Gehrke) writes:
> | > My understanding is that zoo can handle directories and arc can't. If
> | > that is true, it would seem zoo is preferable.
> |
> | Will someone please explain why any archiver's ability to
> | handle directories is important for transmitting binaries
> | on USENET?
>
> Briefly, there are large PD/shareware packages which reside in several
> directories, such as help, graphics screens, overlays, etc. It is
> desirable to pack these so that they unpack where they are needed,
> rather than say "create a directory fubar and unpack this archive
> there..."

It is the recent fashion for much commercial software to choose the directory structure for the user, in an install program or a .bat file. If given the chance, I skip this procedure, because too many times these choices would write over or delete existing files, or mess up my existing directory structure.

While I agree that this capability of zoo is a nice feature for internal use, I am as dubious of external choices for my directory structure from the net as I am from, say, Microsoft. (Read Dick Flanagan's experience for an example.)

My question was more a rhetorical one: how often in USENET binary postings is such a feature desirable or even necessary?

Forrest Gehrke
greggy@infmx.UUCP (greg yachuk) (05/07/88)
In article <35@achel.UUCP>, derek@achel.UUCP (derek) writes:
> OK, so this is a good security measure, but is there any way to persuade
> the archiver to do its back up onto another disk? Or is there a way to
> automate the archiving facility so that I can fill, or nearly fill a disk
> with one or more archives, that I have overlooked?

ARC 5.20 has a couple of environment variables (ARCTEMP and TEMP) which tell it where to build the temporary ARC file. By setting one of these to another disk (e.g. SET ARCTEMP=b:) you can build the temporary file wherever you want. Unfortunately, ARC then tries to RENAME the file, which you cannot do across disks in MS-DOS.

I realize that even if this worked correctly, it still wouldn't do what you asked for (i.e. placing the resulting ARC somewhere else). I guess the answer (for ARC at least) is NO, unless you muck with the sources.

Greg Yachuk    Informix Software Inc., Menlo Park, CA    (415) 322-4100
{uunet,pyramid}!infmx!greggy    !yes, I chose that login myself, wazit tooya?

Natasha: How do you stop moose from charging?
Boris:   No problem, Dahlink, you take away credit card.
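[Editor's note: the temp-file-then-rename pattern Greg describes, and the portable fix for the cross-disk failure, can be sketched in Python. A plain rename (like DOS RENAME, or os.rename) fails across volumes, while a move that falls back to copy-and-delete succeeds. The function name and placeholder archive contents below are made up for illustration; only the ARCTEMP variable name comes from the post.]

```python
# Sketch: build the working file in the directory named by ARCTEMP,
# then move it to its final home. shutil.move copies across devices
# when a simple rename is impossible.
import os
import shutil
import tempfile

def build_archive(dest_path: str) -> None:
    tmp_dir = os.environ.get("ARCTEMP", tempfile.gettempdir())
    tmp_path = os.path.join(tmp_dir, "work.arc.tmp")
    with open(tmp_path, "wb") as f:
        f.write(b"archive contents here")  # placeholder for real packing
    # os.rename(tmp_path, dest_path) would raise OSError (EXDEV) if the
    # two paths were on different filesystems, just like DOS RENAME
    # across disks; shutil.move handles that case by copy-and-delete.
    shutil.move(tmp_path, dest_path)
```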
dhesi@bsu-cs.UUCP (Rahul Dhesi) (05/07/88)
Somebody asks:

   Is there a way to automate the archiving facility so that I can
   fill, or nearly fill, a disk with one or more archives, that I
   have overlooked?

Since zoo does not create a temporary file when adding files to an archive, just use it normally and you can fill a disk with zoo archives. Additional disk space is needed only when packing an archive to recover space from deleted files, deleted comments, etc., and even this works across disks if you give the "." modifier to the "P" command.
--
Rahul Dhesi         UUCP:  <backbones>!{iuvax,pur-ee,uunet}!bsu-cs!dhesi
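[Editor's note: the append-in-place behavior Dhesi describes, adding a member to an existing archive without building a temporary copy of the whole thing, can be sketched with Python's zipfile opened in "a" mode. This is not zoo's on-disk format, just the same pattern; the function name is made up.]

```python
# Sketch: append a member to an archive in place. No scratch copy of
# the archive is created, so a nearly full disk is not a problem.
import zipfile

def add_member(archive: str, name: str, data: bytes) -> None:
    # Mode "a" appends to the existing file (creating it if absent)
    # rather than rewriting the whole archive through a temp file.
    with zipfile.ZipFile(archive, "a") as zf:
        zf.writestr(name, data)
```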
pechter@dasys1.UUCP (Bill Pechter) (05/08/88)
In article <469@dalcsug.UUCP> dalegass@dalcsug.UUCP (Dale Gass) writes:
>
>I've written a little program which uses PKARC to pack a directory structure
>into an arc. All it does is create .ARC files of each subdirectory, and arc
>these subdir arcs into the main archive file.
>
>If there is sufficient interest, I'll post this program.

Please, please post the program. I think a lot of us can think of many uses for this gem. There's nothing like it on the BBSs around, and I think a lot of people would have a need for this kind of program.

Thanks.
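[Editor's note: the scheme Dale describes can be sketched in Python, with zipfile standing in for PKARC: each subdirectory is packed into its own archive, and those archives are then packed into one main file. Function and file names are invented for illustration; Dale's actual program was not posted here.]

```python
# Sketch: nested archives as a workaround for a flat archiver.
# Each subdirectory of `root` becomes its own archive; the main
# archive then holds one member per subdirectory archive.
import os
import zipfile

def pack_tree(root: str, main_archive: str) -> None:
    sub_arcs = []
    for entry in sorted(os.listdir(root)):
        sub = os.path.join(root, entry)
        if not os.path.isdir(sub):
            continue
        sub_arc = os.path.join(root, entry + ".arc.zip")
        with zipfile.ZipFile(sub_arc, "w") as zf:
            for fname in sorted(os.listdir(sub)):
                zf.write(os.path.join(sub, fname), arcname=fname)
        sub_arcs.append(sub_arc)
    with zipfile.ZipFile(main_archive, "w") as zf:
        for sub_arc in sub_arcs:
            zf.write(sub_arc, arcname=os.path.basename(sub_arc))
```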
heiby@falkor.UUCP (Ron Heiby) (05/08/88)
Tom Neff (tneff@dasys1.UUCP) writes:
> or just as good, port the uuencoded text
> stream over to the PC. A perfectly good PC port of uudecode exists.

What Tom fails to consider with this statement is how *extremely* nice it was to be able to read through the (almost) non-existent documentation for a recently posted demonstration, to determine that to use the really neat stuff I needed an 8087 and that I had to get some manual from UofC to use any of it, BEFORE I hassled with shoving hundreds of kilobytes at my PC.

It is VERY important to be able to pull documentation files out of the archive on the host system (which is usually a UNIX derivative), for two big reasons. One is so you know whether or not you really want to hassle with sending it to your PC. The other is so you can use the nice fast laser printer on the UNIX box to print the 300-page manual, rather than the 10-year-old Epson MX-100 or 10 cps Olympia (feed one sheet at a time) that you happen to have at home. That saves sending the manual to the PC, extracting it, and then sending it back up where it came from. (Maybe when one of them breaks, my wife will let me get something more state-of-the-art. ... Naaaaahh!)
--
Ron Heiby, heiby@mcdchg.UUCP    Moderator: comp.newprod & comp.unix
"I believe in the Tooth Fairy." "I believe in Santa Claus."
"I believe in the future of the Space Program."
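[Editor's note: Ron's workflow, reading the documentation on the host before deciding whether to download the whole package, amounts to extracting a single member from an archive. A sketch with Python's zipfile, assuming for illustration that the docs are stored under the name README.DOC:]

```python
# Sketch: pull just the documentation member out of an archive on the
# host; the binaries stay where they are, untransferred.
import zipfile

def read_docs(archive: str, member: str = "README.DOC") -> str:
    with zipfile.ZipFile(archive) as zf:
        # Only this one member is decompressed and read.
        return zf.read(member).decode("ascii", errors="replace")
```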
tneff@dasys1.UUCP (Tom Neff) (05/10/88)
In article <165@falkor.UUCP> heiby@mcdchg.UUCP (Ron Heiby) writes:
>Tom Neff (tneff@dasys1.UUCP) writes:
>> or just as good, port the uuencoded text
>> stream over to the PC. A perfectly good PC port of uudecode exists.
>
>What Tom fails to consider with this statement is how *extremely* nice
>it was to be able to read through the (almost) non-existent documentation
>for a recently posted demonstration ...
>... It is VERY important to be able to pull documentation files
>out of the archive on the host system (which is usually a UNIX derivative).

Tom didn't fail to consider that; Tom is on your side 100%. It's just that, as has already been pointed out at some length in this newsgroup, ARC and ZOO are both running on UNIX and VMS at this time. If you read my posting, you must also have read several testimonials to this effect. So get the programs and use them.

My point about squirting the uuencodes over without trying to extract them was only that if you KNOW you want the file, but just don't have a VAX-hosted copy of ARC or ZOO at hand, it's not strictly necessary -- you can do everything on the PC, including uudecoding. If you want to exercise selectivity at the mainframe stage, though, it obviously helps to be able to manipulate the archives right there. Which I believe you can do.

TMN
--
Tom Neff               UUCP: ...!cmcl2!phri!dasys1!tneff
"None of your toys     CIS: 76556,2536    MCI: TNEFF
will function..."      GEnie: TOMNEFF     BIX: are you kidding?
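[Editor's note: the uuencode step the whole thread takes for granted, turning an 8-bit archive into 7-bit-safe text for transmission and back, can be sketched with Python's binascii module, which exposes the classic uuencode line primitives. The function names are made up; this is the encoding only, without the "begin"/"end" framing lines of a full uuencode file.]

```python
# Sketch: uuencode/uudecode round trip. Each text line carries up to
# 45 raw bytes, expanded into printable ASCII.
import binascii

def uuencode_lines(data: bytes):
    for i in range(0, len(data), 45):
        yield binascii.b2a_uu(data[i:i + 45])

def uudecode_lines(lines) -> bytes:
    # Each line's leading length character tells a2b_uu how many of
    # the decoded bytes are real, so padding is trimmed automatically.
    return b"".join(binascii.a2b_uu(line) for line in lines)
```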
davidsen@steinmetz.ge.com (William E. Davidsen Jr) (05/10/88)
In article <4281@dasys1.UUCP> tneff@dasys1.UUCP (Tom Neff) writes:
| [........................................] If
| subscribers can only download net news on a Vax at work, for instance, and
| need to port the result to a PC afterwards, let them squirt the whole
| ARC file over, it should uudecode with full integrity on the VAX even if
| the resulting file is opaque; or just as good, port the uuencoded text
| stream over to the PC. A perfectly good PC port of uudecode exists.

I think the term 'dribble' would be more appropriate. I want to be able to read the docs before 'squirting them over' at 1200 baud. Using the PC to do the work is appropriate with a limited UNIX system and a fast line to the PC (or some form of unlimited message service on your phone). Many of us want to use ARC or ZOO to read the docs first.
--
bill davidsen (wedu@ge-crd.arpa)
  {uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me
davidsen@steinmetz.ge.com (William E. Davidsen Jr) (05/10/88)
In article <469@dalcsug.UUCP> dalegass@dalcsug.UUCP (Dale Gass) writes:
|
| Granted, this takes longer, as data must be arc'ed several times, but this
| is not much of a problem, as PKARC blows away ZOO speedwise anyway.

Your program sounds useful, but I would like to see the basis for your statement about relative speeds. I see times in the range of 1.3 to 1.7 times longer with zoo. Your comments imply that it is equally fast to load PKARC many times and run it at least twice as it is to run zoo. Once. I don't think results less than 2:1 really qualify as "blowing away."

Once more: those of us who must read the docs on a UNIX machine before shipping to a PC don't have PKARC. We don't have batch files, either. Your script sounds very useful, but it is not a good choice for use as a general standard for distribution.
--
bill davidsen (wedu@ge-crd.arpa)
  {uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me
loci@csccat.UUCP (Chuck Brunow) (05/10/88)
In article <164@falkor.UUCP> heiby@mcdchg.UUCP (Ron Heiby) writes:
>Chuck Brunow (loci@csccat.UUCP) writes:
>> In article <21371@amdcad.AMD.COM> phil@amdcad.UUCP (Phil Ngai) writes:
>> >
>> >My understanding is that zoo can handle directories and arc can't. If
>> >that is true, it would seem zoo is preferable.
>
>Looks like Chuck got up on the wrong side of the bed. Not only did he
>miss the entire point that Phil was making (quite eloquently, I believe),

I think you're missing my point: IT TAKES TOO LONG TO WADE THROUGH ARCHIVERS. Got it? I don't want to keep copies of 17 (and counting) archivers, and I want to see what's in a file QUICKLY. The local machine which archives the good stuff only supports ARC. Not my choice, theirs. I choose none of the above because I'm not going to keep most of this stuff: I just want a fast synopsis. Is that reasonable?

>Now, to business! The point made about zoo being able to handle directories
>is that you can save a sub-tree using zoo. In arc, there is no knowledge
>of the directory structure the files came from. You can have only one
>file named "MAKEFILE" in an .ARC file, but several (one per directory) in
>a zoo archive. Consider how useful/useless tar and cpio would be if
>they didn't know how to make use of the UNIX filesystem. I'd say that
>zoo is to (UNIX's) tar/cpio as arc is to (UNIX's) ar. Yes, "the file
>system's already got directories", but that is the argument *in favor*
>of an archive method that understands them!

Let me clarify one point: TAR is a TAPE ARCHIVER. That's where it got its name. Nobody would seriously use it for anything else. CPIO is only slightly better. But these aren't viable comparisons, because nobody is going to argue that they're very useful. If you want to make a comparison, let's talk about "ar", the object librarian, which is directly linkable into compiles, etc., etc.

Now, to business! The point about zoo being able to handle directories is that the last thing I want somebody posting is an entire directory tree. That adds even more layers to layers, to layers. Is this making sense to you? I DON'T WANT MORE LAYERS, I WANT LESS. Adding more space just invites more junk. I want a USER-FRIENDLY way to scan the contents of files quickly so I can skip a file if it's not interesting.

Furthermore, as the posting also ignores my previous posting, let me refresh your memory. These archivers are not all PD; rather, they use the USENET facilities to solicit $$$$$ from people, which I consider improper. I received a number of responses to my posting which basically said, "Hey, don't worry about it. Everybody does it." However, bending rules results in stiffer rules: something nobody really wants.

Let's review: when I see 300+ postings in a group, given limited time to peruse, and reading at 1200 baud, what counts to me is SPEED. I haven't got enough spare disk space to keep all of the archivers, and I'm not going to send them money. If there are so many creative, intelligent and committed people out there, as is claimed, it should be pretty easy to do better. That's my challenge.
keithe@tekirl.TEK.COM (Keith Ericson) (05/11/88)
In article <179@infmx.UUCP> greggy@infmx.UUCP (greg yachuk) writes:
>In article <35@achel.UUCP>, derek@achel.UUCP (derek) writes:
>> ...is there a way to
>> automate the archiving facility so that I can fill, or nearly fill a disk
>> with one or more archives, that I have overlooked?
>
>ARC 5.20 has a couple of environment variables (ARCTEMP and TEMP) which
>tell it where to build the temporary ARC file. By setting this to another
>disk (e.g. SET ARCTEMP=b:) you can build the temporary file wherever you
>want. Unfortunately, it then tries to RENAME the file, which you cannot
>do across disks in MSDOS.

Try playing with the DOS "join" command, which will allow you to make drive B: into a subdirectory of drive A:. I think the sequence might go something like this:

   JOIN B: A:\DRIVEB         makes B: a subdirectory of A:
   SET ARCTEMP=A:\DRIVEB     have ARC use B: without even realizing it...
   ARC X ARCFILE             (chug, chug, chug)
   JOIN B: /d                un-join drive B: from A:

Let me know if this works... I'm just extrapolating from what I've done with my machine. (Alternatively, you might have a ramdisk that you could JOIN to A: instead of your second floppy.)

keith