[alt.sources.d] REPOST lharc102A Part 01/04 BSD Unix to Amiga archives

tneff@bfmny0.BFM.COM (Tom Neff) (01/08/91)

In article <1991Jan8.001457.28490@zorch.SF-Bay.ORG> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
>Whether from stupidity or malice, part 2 of the previous four part
>posting got cancelled, so here comes the whole thing again, as I have no
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>record of where the previous split on the uuencoded data was done.
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Speaking of inherent robustness, eh?  We can't have polite 'REPOST - Part 4' 
source postings any more because binary gibberish doesn't split that well.
So if there's an error, we ship the whole huge twice-compressed thing TWICE!

>Complaints about the format in which this is posted will be ignored.

By whom?  Speak for yourself!  :-)

-- 
Stalinism begins at home.  }{  Tom Neff  }{  tneff@bfmny0.BFM.COM

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (01/09/91)

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
Whether from stupidity or malice, part 2 of the previous four part
posting got cancelled, so here comes the whole thing again, as I have no
record of where the previous split on the uuencoded data was done.
----
tneff@bfmny0.BFM.COM (Tom Neff) writes:
Speaking of inherent robustness, eh?  We can't have polite 'REPOST - Part 4' 
source postings any more because binary gibberish doesn't split that well.
So if there's an error, we ship the whole huge twice-compressed thing TWICE!
----
Email interactions with the moderator of alt.sources.index provided a copy
of what turned out to be a forged cancellation message; a copy was posted
to news.admin.

Some person with an axe to grind about uuencoded source archives chose to
do a little net.vandalism rather than engage in dialog.  Since the control
message was inserted directly at uunet.uu.net, it is probably impossible to
trace this childish behavior back to its originator.

The total repost turned out to have been unnecessary, as the original was
still online at this site, but I found that out considerably later when I
went back to cancel the remaining three articles of the original posting
and found the cancelled one still online here (of course).  By that point
it seemed prudent to continue on the existing course.

Suggestions that the results of vandalism should be used to judge the merits
of a source code distribution mechanism are warped at best.

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

slamont@network.ucsd.edu (Steve Lamont) (01/11/91)

In article <1991Jan9.001036.28469@zorch.SF-Bay.ORG> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
>Suggestions that the results of vandalism should be used to judge the merits
>of a source code distribution mechanism are warped at best.

Not quite so.  A well designed distribution mechanism would allow for damaged
postings, intentional or otherwise.

Clearly, taking a gonzo uuencoded source distribution and arbitrarily chopping
it into pieces leaves something to be desired (if that is indeed what was
done).

Now, obviously, it is difficult to guard against forged cancel messages.
However, it should be easier than you indicated in your posting to recover
from such net.malice.  Sometimes postings just don't get out.

							spl (the p stands for
							precisely the point)

-- 
Steve Lamont, SciViGuy -- (408) 646-2572 -- a guest at network.ucsd.edu --
NPS Confuser Center / Code 51 / Naval Postgraduate School / Monterey, CA 93943
"... most programmers don't even bother going to the metal on machines where
the metal is painful and there's no light to see by..." -J. Eric Townsend

ralph@laas.fr (Ralph P. Sobek) (01/18/91)

In article <1991Jan9.001036.28469@zorch.SF-Bay.ORG> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:

|  Some person with an axe to grind about uuencoded source archives chose to
|  do a little net.vandalism rather than engage in dialog.  Since the control
|  message was inserted directly at uunet.uu.net, it is probably impossible to
|  trace this childish behavior back to its originator.

Well, why not create alt.binaries or alt.uuencoded for such parts?

For split uuencoded files I suggest that net users use UUE to create
them, and that they get UUD to decode them.  After that one doesn't care
anymore whether the file is split or not.  UUE adds headers around each
split part, and UUD will concatenate the parts all by itself.
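
(For illustration only -- the option names below are invented for the
example, since the UUE documentation isn't at hand, but the shape of
the process is the point.)

        uue -s 30000 lharc.lzh      # split into ~30KB parts, each wrapped
                                    #   in its own header (flag assumed)
        #  ...post each part as a separate article; save them on receipt...
        cat part.* | uud            # uud finds its headers, checks part
                                    #   order, and rebuilds lharc.lzh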

Cheers,

--
Ralph P. Sobek			  Disclaimer: The above ruminations are my own.
ralph@laas.fr				   Addresses are ordered by importance.
ralph@laas.uucp, or ...!uunet!laas!ralph		
If all else fails, try:				      sobek@eclair.Berkeley.EDU
===============================================================================
THINK: Due to IMF & World Bank policies 100 million Latin American children are
living, eating, and sleeping in the streets -- Le Monde Diplomatique

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (01/19/91)

There really is something in here appropriate to both newsgroups; persevere.

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:

| Some person with an axe to grind about uuencoded source archives chose to
| do a little net.vandalism rather than engage in dialog.  Since the control
| message was inserted directly at uunet.uu.net, it is probably impossible to
| trace this childish behavior back to its originator.

ralph@laas.fr (Ralph P. Sobek) writes:

> Well, why not create alt.binaries or alt.uuencoded for such parts?

Perfectly simple: alt.sources is for _sources_; that says something about
contents, not about format.  Over the years, and with experience, most folks
learn how to handle nearly any _format_; the only things I can't unpack that
I know of by now are Mac BINHEX and IBM-PC PKZIP files, and only because
neither contains anything of interest to an Amiga user.  The advantage of
alt.sources is that, whatever the format in which the data is transmitted,
I know that "under all that manure is a pony"; I'm going to find sources
when I find a way to unpack it.  This would not be true for either of your
suggestions.

The insistence by some folks on clear text transmission of sources seems
mostly to be an unwillingness to learn how to deal with available tools,
whatever the stated motives may be.

> For split uuencoded files I suggest that net users use UUE to create
> them, and that they get UUD to decode them. After that one doesn't care
> anymore whether the file is split or not. UUE adds headers around each
> split part, and UUD will concatenate the parts all by itself.

Sounds good; can they do this without the headers and such being stripped
first, like unshar does for shars?

A whole-file checksum would be a nice standard feature of future
uuencode mechanisms, too.
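
Meanwhile the effect is easy to approximate by hand with the stock
tools; a minimal sketch, using the file names from this thread:

        uuencode lha.lzh lha.lzh > lha.lzh.uu   # sender encodes and
        sum lha.lzh                             #   quotes this checksum
        uudecode lha.lzh.uu                     # recipient decodes and
        sum lha.lzh                             #   compares to the quote

Not automatic, but it catches a corrupted or mis-joined part.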

> Ralph P. Sobek Disclaimer: The above ruminations are my own.

> THINK: Due to IMF & World Bank policies 100 million Latin American
> children are living, eating, and sleeping in the streets -- Le Monde
> Diplomatique

Yes, a little thinking shows how bogus that viewpoint is.

I'd rather put the blame a little closer to the source of the problem:
citizens who breed without regard for the lack of resources to support
larger populations, and governments and religious authorities that
encourage overbreeding by restricting easy access to birth control
technology.

Famine and lack of shelter are sad but inevitable triage mechanisms on
those whole populations that concentrate on reproduction to the
exclusion of sense, planning, agriculture, or economic infrastructure.

Blaming the world banking mechanism for refusing to carry and extend bad
debts for nations that will take no steps to attack the overpopulation
roots of their problems, or even stop directly contributing to their
problems by bad public policy, is a bit on the disingenuous side, but
there are always those who would rather see a conspiracy than suffer the
necessary pain of admitting and correcting their own stupidity.

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

darcy@druid.uucp (D'Arcy J.M. Cain) (01/20/91)

In article <1991Jan19.012025.12536@zorch.SF-Bay.ORG> Kent Paul Dolan writes:
>Perfectly simple: alt.sources is for _sources_; that says something about
>contents, not about format.  Over the years, and with experience, most folks
> [...]
>The insistence by some folks on clear text transmission of sources
>mostly to be an unwillingness to learn how to deal with available tools,
>whatever the stated motives may be.

*FLAME FUCKING ON*
Kent, I believe you have crossed the line with that statement.  Argue if
you will the advantages of one or another method of posting but please
don't suggest that those who disagree with you are lying, lazy or stupid
unless you have some proof.  I, and most people I know who prefer clear
text source posting (CTSP), have no trouble using all the tools you suggest.
We argue for CTSP for exactly the reasons that we state.  If you have proof
that the case is otherwise then please present it.

*REDUCE FLAME INTENSITY*
Also it is impolite to change the follow-up line without mentioning it in
the body of the message.

*FLAME OFF*
When I see a source posting that sounds interesting I always scan it to see
what it is like.  First I look at the readme file and if still interested I
check out some of the code.  Throwing a bunch of factors together I make a
decision about whether to keep it or pass.  I don't necessarily throw away
something if it is uuencoded but the odds against keeping it rise.  I also
post all my sources in CTSP making sure that the readme file is the first
thing in the file.  I appreciate it when others do the same for me.

-- 
D'Arcy J.M. Cain (darcy@druid)     |
D'Arcy Cain Consulting             |   There's no government
West Hill, Ontario, Canada         |   like no government!
+1 416 281 6094                    |

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (01/20/91)

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:

> Perfectly simple: alt.sources is for _sources_; that says something
> about contents, not about format. Over the years, and with experience,
> most folks [learn some minimal competence in unwrapping archives...]
> The insistence by some folks on clear text transmission of sources
> seems mostly to be an unwillingness to learn how to deal with
> available tools, whatever the stated motives may be.

darcy@druid.uucp (D'Arcy J.M. Cain) writes:

> *FLAME FUCKING ON*

> Kent, I believe you have crossed the line with that statement. Argue
> if you will the advantages of one or another method of posting but
> please don't suggest that those who disagree with you are lying, lazy
> or stupid unless you have some proof. I, and most people I know who
> prefer clear text source posting (CTSP), have no trouble using all the
> tools you suggest. We argue for CTSP for exactly the reasons that we
> state. If you have proof that the case is otherwise then please
> present it.

Sounds like a personal problem to me, D'Arcy. It is much less trouble
for me to save and unpack a coded archive and then see if its subject
really indicates useful material, and throw it away a minute or two
later if not, than to debug problems caused by code known to be useful,
but containing no useful information because the shipping method didn't
protect the code. Since I have to believe that this trade-off is the
same for anyone with even my modest competence with the unwrapping
tools, I must conclude that those who find using the tools so
difficult that the trade-off runs in the opposite direction have
_deliberately_ _chosen_ to bitch rather than exert the small effort
required to learn a minimum competence with the tools.

> *REDUCE FLAME INTENSITY*

> Also it is impolite to change the follow-up line without mentioning it
> in the body of the message.

A minimal competence with news software includes looking to see where you
are posting.  Warnings for the less competent are not a requisite for
polite posting.  If you can't play the game, get off the field.

Also, you posted a flame in alt.sources.d.  The proper newsgroup for
flames is alt.flame.

From your inability to target your posting correctly, what should I
conclude about your competence, and your claims of competence, with
simple news article processing software and conventions?

> *FLAME OFF*

> When I see a source posting that sounds interesting I always scan it
> to see what it is like. First I look at the readme file and if still
> interested I check out some of the code. Throwing a bunch of factors
> together I make a decision about whether to keep it or pass. I don't
> necessarily throw away something if it is uuencoded but the odds
> against keeping it rise. I also post all my sources in CTSP making
> sure that the readme file is the first thing in the file. I appreciate
> it when others do the same for me.

What prevents you from doing all of this after you have taken the few
moments to unpack an archive?  Only if you blunder around spending
tens of minutes doing this simple task would this seem to you the
insurmountable problem your religiously toned posting would suggest it
is.  You want to trade off a few moments' work for archive integrity;
I maintain that is a poor tradeoff, as you will lose far more time in
repairing the damages done by perverse news software to unprotected
desired archives than you ever spend unpacking protected undesired ones.

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

slamont@network.ucsd.edu (Steve Lamont) (01/23/91)

In article <1991Jan20.144049.3404@zorch.SF-Bay.ORG> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
>                                          ... It is much less trouble
>for me to save and unpack a coded archive and then see if its subject
>really indicates useful material, and throw it away a minute or two
>later if not, than to debug problems caused by code known to be useful,
>but containing no useful information because the shipping method didn't
>protect the code. ...

With all due respect, I think these are separate issues.

As I've said before, the network transport mechanisms should be fixed if they
scrozzle code.  The burden should be placed on the system, not the user.

I favor cleartext because it is exactly that -- clear text.  I can read it.
I can tell immediately whether it is something I can use or not.

Yes, I can save the files, unpack them, and then scan them.  However, since I
am a guest on this very overloaded machine, this means forwarding the postings
to an account on another machine (I'm sure that the system owners wouldn't
appreciate me filling up their disks, even for only a few minutes, with some
multimegabyte uuencoded, compressed, shar files) and, often as not, fiddling
with ftps and so forth.  The process is cumbersome and, due to circumstances,
not readily automated.

While I clearly don't expect the net to bend to my own peculiar set of
circumstances, I do suggest that some kind of common denominator be adhered
to.  At the present time, cleartext is that common denominator.

							spl (the p stands for
							packed, uuencoded,
							shared, and
							compressed)
-- 
Steve Lamont, SciViGuy -- (408) 646-2572 -- a guest at network.ucsd.edu --
NPS Confuser Center / Code 51 / Naval Postgraduate School / Monterey, CA 93943
"It's not what you know, it's who you know to go ask..."
					- Richard W. Hamming

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (01/23/91)

peter@sugar.hackercorp.com (Peter da Silva) writes:

> Apologies in advance for the low-temp setting on this article.

Oh, I'll warm it right up for you, never fear.

> The highest costs associated with Usenet are telecommunications costs,
> and they are lower with plain text sources. Why? Because the most
> expensive links are compressed, and the
> compressed-uuencoded-recompressed version is quite a bit larger than
> the compressed version itself.

> It really is not appropriate to send stuff in uuencoded compressed
> archives unless there is some technical reason plain text won't work.

Well, let's address your points out of order.

First, doing a shar of the original clear text code produced the following
report:

	Found 592 control chars in "'lh.doc.japanese'"
	Found 124 control chars in "'lh.inst.japanese'"
	Found 320 control chars in "'lh.n.japanese'"

So, using the recommended clear text technology, three of the enclosed
files would have arrived damaged.
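
(Anyone who wants to check files before posting can count the
characters outside printable ASCII, tab, and newline with something
like the line below -- an illustration of the test, not necessarily
what shar itself does, and tr's range syntax varies between System V
and BSD:

        tr -cd '\000-\010\013-\037\177' < lh.doc.japanese | wc -c

Anything nonzero means a plain shar of that file is unsafe.)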

Second, "compress,uuencode,recompress" is not the best use of technology;
I did a little test with the same files in just one big shar, to simplify
the reporting of the results:

The size of the original clear text shar:

-rw-r--r--  1 xanthian   179346 Jan 22 21:53 lha.sh

As typically compressed from clear text using sixteen bit "compress" to
transmit news:

-rw-r--r--  1 xanthian    76691 Jan 22 22:06 lha.sh.Z

The same shar as lharc'ed and uuencoded and then typically compressed
for news transmission:

-rw-r--r--  1 xanthian    58303 Jan 22 21:56 lha.lzh
-rw-r--r--  1 xanthian    80356 Jan 22 21:58 lha.lzh.uu
-rw-r--r--  1 xanthian    73077 Jan 22 21:58 lha.lzh.uu.Z

So in fact, for the files being sent, there is some modest _gain_ in
telecommunications efficiency by using the best compression technology
on text, and then uuencoding it and letting the standard net node to
node compression have its way with the files.
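
For anyone who wants to reproduce this, the steps amount roughly to the
following; the lharc command syntax is from memory and may differ with
your version, and "lha" here is the directory holding the sources:

        shar lha/* > lha.sh                     # the clear text shar
        compress -c -b 16 lha.sh > lha.sh.Z     # as news would ship it
        lharc a lha lha.sh                      # pack the same shar -> lha.lzh
        uuencode lha.lzh lha.lzh > lha.lzh.uu   # 7-bit armor for news
        compress -c lha.lzh.uu > lha.lzh.uu.Z   # as news would ship it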

The conclusions are thus exactly opposite to both your arguments.

Don't feel bad, though, Peter; most folks don't realize how far
behind best technology "compress" has fallen, and continue to spout
the same superstitious nonsense you did.

Actually, things are a bit worse than that yet for the clear text case,
and better for the best technology case.

First, the standard response to the control characters problem is to
uuencode just the files with the control characters, and put them into a
shar with the remaining files as clear text. This gives a still bigger
shar:

-rw-r--r--  1 xanthian   187221 Jan 22 22:36 lhb.sh

which is quite a bit bigger than before when typically compressed for
transmission:

-rw-r--r--  1 xanthian    84945 Jan 22 22:32 lhb.sh.Z

Second, with a competent archiving compression tool there is no reason
to pay the shar overhead, nor to uuencode the files containing control
characters separately; compressing the original files filewise saves
that overhead:

-rw-r--r--  1 xanthian    57965 Jan 22 22:35 lhc.lzh
-rw-r--r--  1 xanthian    79890 Jan 22 22:38 lhc.lzh.uu

and leads to a modestly smaller _yet_ typically compressed file for
transmission:

-rw-r--r--  1 xanthian    72595 Jan 22 22:38 lhc.lzh.uu.Z

So, at the end, for the particular files under discussion in this
thread, best technology as opposed to the existing clear text methods
transmits 72595 bytes instead of 84945 bytes, or about 85% as much.
Fifteen percent off the phone bills would warm the cockles of any system
manager's heart.

At the recipient's site, clear text requires 187221 bytes of spool space
to store, as opposed to the lharc'd uuencoded file's 79890 bytes, making
the latter 43% as much as the former, a huge savings in a crucial area
to every site at which the data is stored.

This is such fun, I always love arguing against indefensible positions.

Who's next with some wimpy excuse why the source file transmission
method that has been successfully used in comp.binaries.ibm.pc and
comp.sources.atari.st for just ages can't possibly work in the other
source groups?

I have yet to see a single argument for the present methods that
comes down, at the last, to anything but sheer laziness on the part
of those who don't want to change their habits.  Compressed, uuencoded
transmission methods win on every reasonable criterion.

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>
--
By the way, it is _not_ a solution to replace compress with a filter
form of lharc as the typical file compressor for telecommunications;
lharc is _much_ too slow to use at every step along the way, so it
needs to be done just once at the originating site to accomplish these
savings.

bernie@metapro.DIALix.oz.au (Bernd Felsche) (01/31/91)

De-flamed deliberately.

In <1991Jan23.071609.1401@zorch.SF-Bay.ORG> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
>Second, "compress,uuencode,recompress" is not the best use of technology;
>I did a little test with the same files in just one big shar, to simplify
>the reporting of the results:

WHOAH THERE! Shouldn't you be using tar to generate the archive
instead of shar?  Its wrapper information is more compact and
efficient.

Then you compress the tar archive... and uuencode it. Please try this
and publish the results for comparison.

Depending on software versions, you can do all this in a pipe (which
you undoubtedly know): "tar cf - files | compress | uuencode bugs.tar.Z
>bugs.tar.Z.uu"  (uuencode takes the file name for its begin line as an
argument).

For transmission it can be compressed again (it would be smarter to
uudecode first), though this _should_ be done by a network layer, even
though it often isn't.  Wouldn't it be nice if modem transfer protocols
were smart enough to compress on the fly?
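
The unpacking side of that pipe, for completeness:

        uudecode bugs.tar.Z.uu          # recreates bugs.tar.Z
        zcat bugs.tar.Z | tar xf -      # uncompress and extract in one pass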

>So in fact, for the files being sent, there is some modest _gain_ in
>telecommunications efficiency by using the best compression technology
>on text, and then uuencoding it and letting the standard net node to
>node compression have its way with the files.

Agreed.  In fact, the more text, the better the gain.

>I have yet to see a single argument for the present methods that
>comes down, at the last, to anything but sheer laziness on the part
>of those who don't want to change their habits.  Compressed, uuencoded
>transmission methods win on every reasonable criterion.

One should be wary of zoo archives, though, which don't work well if
they hold many small text files (i.e. typical source code).
Compression can be as little as 10-15%, which uuencoding then explodes
past the original size.

>Kent, the man from xanth.
><xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>
>--
>By the way, it is _not_ a solution to replace compress with a filter
>form of lharc as the typical file compressor for telecommunications;
>lharc is _much_ too slow to use at every step along the way, so it
>needs to be done just once at the originating site to accomplish these
>savings.

TANSTAAFL.
-- 
 _--_|\  Bernd Felsche         #include <std/disclaimer.h>
/      \ Metapro Systems, 328 Albany Highway, Victoria Park,  Western Australia
\_.--._/ Fax: +61 9 472 3337   Phone: +61 9 362 9355  TZ=WST-8
      v  E-Mail: bernie@metapro.DIALix.oz.au | bernie@DIALix.oz.au

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (02/01/91)

bernie@metapro.DIALix.oz.au (Bernd Felsche) writes:

> De-flamed deliberately.

Spoil sport.

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:

>> Second, "compress,uuencode,recompress" is not the best use of
>> technology; I did a little test with the same files in just one big
>> shar, to simplify the reporting of the results:

> WHOAH THERE! Shouldn't you be using tar to generate the archive
> instead of shar? Its wrapper information is more compact and
> efficient.

It is more efficient yet, because putting everything in one big file
lets compression proceed across file boundaries rather than starting
fresh at each file, but filewise storage is nearly as efficient.
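
(The cross-boundary effect is easy to measure; compress restarts its
string table for each separate input, so under sh:

        tar cf - *.c | compress | wc -c                 # one shared stream
        for f in *.c; do compress -c $f; done | wc -c   # fresh table per file

The first count should come out smaller on a typical source directory.)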

> Then you compress the tar archive... and uuencode it. Please try this
> and publish the results for comparison.

You had to ask; well, I was sitting home grumpy because I was too sick to
make the party tonight, so why not:
-------------------------------------------------------------------------
original data:

  3091 Makefile
  3841 amiga_patch
  2885 generic_patch
 11521 lh.doc.japanese
  2800 lh.inst.japanese
  6783 lh.n.japanese
 13133 lhadd.c
 29556 lharc.c
  7568 lharc.doc.posted
 11220 lharc.doc.revised
  9279 lharc.h
  9588 lharc.l
  2010 lhdir.c
   886 lhdir.h
  6154 lhext.c
  6504 lhio.c
  1483 lhio.h
  6672 lhlist.c
 22476 lzhuf.c
  1229 read.me_1
   486 read.me_2
  1770 read.me_3

original data size, as the total of the file sizes (from wc -c):

160935 lha

three files uuencoded because they contain control characters:

   15910 lh.doc.japanese.uu
    3895 lh.inst.japanese.uu
    9376 lh.n.japanese.uu

original data size but with those three uuencodings instead:

169012 lha3uu


Plan a, just sharing the original files, is unworkable; shars with control
        characters won't unpack reliably:

176274 lha.sh

Plan b: current net practice; shar, compress:

184153 lha3uu.sh              shar three files uuencoded, rest plain text;
 82885 lha3uu.sh.Z            its size as transmitted after compression

Plan c: other current net practice; tar, compress, uuencode, compress:

180224 lha.tar               original data tarred - not transmittable, so
 73149 lha.tar.Z             compress it and
100810 lha.tar.Z.uu          uuencode it for safety;
 91533 lha.tar.Z.uu.Z        its size as transmitted after compression

Plan d: improve plan b by replacing compress with lharc, uuencode, compress:

 63604 lha3uu.sh.lzh         lharc of shar file is binary
 87666 lha3uu.sh.lzh.uu      must be uuencoded to hide control characters;
 79863 lha3uu.sh.lzh.uu.Z    its size as transmitted after compression

Plan e: improve plan c by replacing first compress by lharc:

 56476 lha.tar.lzh           lharc of tar file is binary
 77844 lha.tar.lzh.uu        must be uuencoded to hide control characters;
 70839 lha.tar.lzh.uu.Z      its size as transmitted after compression

Plan f: improve plan c by replacing tar | compress by lharc:

 56944 lha.lzh               lharc of original files is binary
 78484 lha.lzh.uu            must be uuencoded to hide control characters;
 71211 lha.lzh.uu.Z          its size as transmitted after compression


Note: plan c is not the same as simple news transmission, where tar |
compress | transmit | uncompress | untar is the paradigm; that process
is not required to produce a news article as an intermediate product,
while plans b through f must (and do).

Note: zoo could also have been used wherever lharc was, but lharc compresses
better, and so dominates the zoo data.

Results:

      Costs in bytes
 Data   Telecomm
storage  volume   Plan


184153   82885             b: partial uuencode, shar, compress
100810   91533             c: tar, compress, uuencode, compress
 87666   79863             d: partial uuencode, shar, lharc, uuencode, compress
 77844   70839             e: tar, lharc, uuencode, compress
 78484   71211             f: lharc, uuencode, compress

The absolute storage champion is plan e, but plan f is nearly as good, and
requires one fewer tool; neither of the current plans, nor plan d, has a lot
to recommend it.  The choice between e and f should be made mostly on economic
grounds.
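
Spelled out as commands, the winning plan e is roughly this, with the
lharc syntax again from memory:

        tar cf lha.tar lha              # the source directory as one file,
                                        #   so compression crosses members
        lharc a lha.tar lha.tar         # -> lha.tar.lzh
        uuencode lha.tar.lzh lha.tar.lzh > lha.tar.lzh.uu
        compress -c lha.tar.lzh.uu > lha.tar.lzh.uu.Z   # what actually ships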
-------------------------------------------------------------------------

> Depending on software versions, you can do all this in a pipe (which
> you undoubtedly know): "tar cf - files | compress | uuencode bugs.tar.Z
> >bugs.tar.Z.uu"

> For transmission it can be compressed again (it would be smarter to
> uudecode first), though this _should_ be done by a network layer, even though
> it often isn't. Wouldn't it be nice if modem transfer protocols were
> smart enough to compress on the fly?

>> So in fact, for the files being sent, there is some modest _gain_ in
>> telecommunications efficiency by using the best compression
>> technology on text, and then uuencoding it and letting the standard
>> net node to node compression have its way with the files.

> Agreed. In fact, the more text, the better the gain.

>> I have yet to see a single argument for the present methods that
>> comes down, at the last, to anything but sheer laziness on the part
>> of those who don't want to change their habits. Compressed, uuencoded
>> transmission methods win on every reasonable criterion.

> One should be wary of zoo archives, though, which don't work well if
> they hold many small text files (i.e. typical source code).
> Compression can be as little as 10-15%, which uuencoding then explodes
> past the original size.

Yeah, lharc is _much_ better at compressing small files than is zoo, which
is why putting a shar or tar wrapper around them and then zooing them looks
better than zooing them separately.

>> By the way, it is _not_ a solution to replace compress with a filter
>> form of lharc as the typical file compressor for telecommunications;
>> lharc is _much_ too slow to use at every step along the way, so it
>> needs to be done just once at the originating site to accomplish
>> these savings.

> TANSTAAFL.

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>