[comp.mail.uucp] Batch mail or batch uucp

karl@sugar.uu.net (Karl Lehenbauer) (07/26/88)

I've been sitting around, watching my system, with its Trailblazer,
connect to uunet, and have noticed that the connection is idle about 
50% of the time, while the systems are turning around uucp jobs, 
transmitting X files and such.  It's quite fast while transferring big 
files, achieving uucico-reported throughput of 1200-1800 bytes/sec.

It has been said that BSMTP is the way to go on this, and maybe it is, but
it seems to me like it's more of a problem that there are a lot of small
uucp jobs, not just uucp mail jobs, and it could be handled at the uucp job
level, rather than at the mail level.  So, is there some way in which we could 
batch up a bunch of uucp jobs, without looking much at their contents, into 
bigger, say 50-300 KByte batches and send the batches as uucp jobs?

I envision it something like this (this is all more-or-less off the top of
my head, so be warned):

A program, uubatch, runs before uucico is going to do a poll.  It finds files,
maybe pairs of files, destined for that system, takes all the ones that 
are smaller than some limit, say 30K, batches them up into a file with 
cpio, and queues that file as a single job to the remote system, with the 
remote program being uu_unbatch or something like that.

uu_unbatch then dearchives everything and uuxes it all.

The sender may need to unbatch the batch again if it can't deliver it in order
to bounce the jobs individually.  No doubt there's other nasty stuff as well.

What do you think?  I am not a great uucp hack, so I don't know how viable or
hard this is.  It doesn't *look* all that hard, but hey.
-- 
-- Karl Lehenbauer, karl@sugar.uu.net aka uunet!sugar!karl, +1 713 274 5184

les@chinet.chi.il.us (Leslie Mikesell) (07/28/88)

In article <2345@sugar.uu.net> karl@sugar.uu.net (Karl Lehenbauer) writes:
>A program, uubatch, runs before uucico is going to do a poll.  It finds files,
>maybe pairs of files, destined for that system, takes all the ones that 
>are smaller than some limit, say 30K, batches them up into a file with 
>cpio, and queues that file as a single job to the remote system, with the 
>remote program being uu_unbatch or something like that.
>
>uu_unbatch then dearchives everything and uuxes it all.
>
>The sender may need to unbatch the batch again if it can't deliver it in order
>to bounce the jobs individually.  No doubt there's other nasty stuff as well.
>
>What do you think?  I am not a great uucp hack, so I don't know how viable or
>hard this is.  It doesn't *look* all that hard, but hey.

Could be as simple as using cpio | compress to batch the files, with the
reverse to unbatch.  I do this with a lot of files ahead of uux, then
uux the unbatching command.  Possible problems are the directory ownership
if cpio creates new directories running under uux, and the fact that cpio
(up to SysVr3 anyway) doesn't return an error status if it can't write all
the files.  Doing this automatically ahead of all uucico traffic should work
as well, but it will involve some new problems:

Do you want to delay outgoing calls to allow a batch to accumulate?
What about incoming calls?  Maybe uucp and uux should become (different)
front-end programs that generate the bundle (in which case an archiver
like zoo that can add to an existing file might work best).  Personally,
I think a better solution would be a streaming protocol for uucico that
doesn't wait for acks except at the end of the connection, when all files
would have to be acknowledged to consider the transfer complete.  This
would also work nicely over satellite links and do away with the need for
protocol spoofing where there is a turnaround delay in the modems.  

Les Mikesell