[net.news.b] Method to reduce spool overhead to down systems.

jerry@oliveb.UUCP (Jerry Aguirre) (06/20/85)

Spooling news to a system that is down can use up a lot of disk space.
For sites that use the "F" batching option, the disk usage can be
minimized.

The ":F:" option of news writes the pathname of the article to a file,
usually "/usr/spool/batch/rmtsysname".  Sometime later, usually via
crontab during the night, a sendbatch or csendbatch command is run to
translate the list of filenames into a batch file containing the actual
articles themselves.  The expansion of the list of filenames to the
batch file increases the disk usage by a large amount.  If the site is
not currently accessible then it would make sense to keep the queue in
its most compact form.
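To see the difference in spool cost, here is a small simulation you can
run in /tmp.  It is only an illustration: the paths, file names, and
article bodies are made up, and the batch-building loop is just a rough
stand-in for what sendbatch does (B news batches preface each article
with a "#! rnews <bytecount>" line):

```shell
#!/bin/sh
# Illustration only: compare the size of a ":F:" queue file (a list of
# pathnames) with the batch built from it (the articles in full).
# All names below are stand-ins, not the real news spool.
dir=/tmp/batch-demo.$$
mkdir $dir

# Two fake articles standing in for files under /usr/spool/news,
# plus a queue file listing their pathnames, one per line.
for n in 1 2
do
    echo "This is the full text of fake news article $n; real articles run to kilobytes." > $dir/article.$n
    echo $dir/article.$n >> $dir/rmtsysname
done

# Roughly what sendbatch does: copy each listed article into the
# batch, preceded by a "#! rnews <bytecount>" separator line.
> $dir/batch
while read f
do
    echo "#! rnews `wc -c < $f`" >> $dir/batch
    cat $f >> $dir/batch
done < $dir/rmtsysname

qsize=`wc -c < $dir/rmtsysname | tr -d ' '`
bsize=`wc -c < $dir/batch | tr -d ' '`
echo "queue file: $qsize bytes   batch file: $bsize bytes"
rm -r $dir
```

With real articles the ratio is far larger than in this toy run, which
is exactly why the queue form is the one to hold while a site is down.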

I have modified my news.poll script to not sendbatch unless the last
poll of the remote system succeeded.  It looks like this:

    PATH=/usr/lib/news:/usr/lib/uucp:/bin:/usr/bin
    export PATH
    test -f /usr/spool/uucp/STST.hplabs  || csendbatch hplabs
    uucico -r1 -shplabs
    test -f /usr/spool/uucp/STST.tymix   || csendbatch tymix
    uucico -r1 -stymix
    .
    .
    .

The "test -f /usr/spool/uucp/STST.rmtsystem" will prevent the execution
of csendbatch if the previous attempt to call "rmtsystem" failed.  The
uucico is still executed in an attempt to clear the STST file.
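The same lockout test can be exercised without touching uucp at all.
The sketch below simulates it in a scratch directory, with echo standing
in for csendbatch; the site names and paths are illustrative, not a
recommendation for how to structure the real script:

```shell
#!/bin/sh
# Simulation of the STST lockout logic using a scratch directory as a
# stand-in for /usr/spool/uucp.  Pretend the last call to hplabs failed
# by leaving its STST file in place.
spool=/tmp/stst-demo.$$
mkdir $spool
touch $spool/STST.hplabs

batched=""
for site in hplabs tymix
do
    # Same test as in the news.poll script: an STST file means the
    # previous call failed, so leave the queue in its compact form.
    if test -f $spool/STST.$site
    then
        echo "last call to $site failed; leaving queue unbatched"
    else
        echo "would run: csendbatch $site"
        batched="$batched $site"
    fi
done
rm -r $spool
```

Here only tymix gets batched, which is the behavior the real script
relies on.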

Thus, if one of the sites that oliveb feeds is down for several days,
my spool overhead is limited to one file of ~20 Kbytes.  This is much
better than dozens of files totaling a megabyte or more clogging up my
uucp directories.

I hope this saves someone from having to clean up a 100% full /usr file
system, or from having to cancel a feed to a site that is having a lot
of down-time.