[news.admin] How long are you keeping news these days?

karl@tut.cis.ohio-state.edu (Karl Kleinpaste) (01/06/88)

I'm rather curious.  With the high volume of news, we haven't got the
space to keep a whole lot of news for a long time.  We keep most news
less than a week, except for local newsgroups which we keep for as
long as a month.  (The traffic in those groups isn't even a drop in
the bucket compared to the general Usenet traffic.)  On the 3B2 we use
as a UUCP gateway, whose sole purpose is news transfer and mail
passing, we keep news for about 10 days.  I'm using another 3B2 in my
basement as a backup for the regular gateway machine, in case it
decides to collapse under the load; I can drop the backup into the
original's place in about 10 minutes: just change its name, update
its sys file, and away it goes.  This backup (`loquat') keeps
news for 14 days.

I'd like to know, though, how this compares with what other sites are
doing.  How long do you keep news?  What policies do you keep for
expiring some groups fast and other groups slowly?  How much space do
you allow for Usenet?  How full do you let it get?  We have a 30Mb
partition that runs 60-70% full most of the time.

Comments?

Karl

fair@ucbarpa.Berkeley.EDU (Erik E. Fair) (01/06/88)

Disk space was a problem on ucbvax last fall for the first
time in years. We run netnews (both the spool and lib directory)
out of an Eagle's "h" partition, which is 140Mbytes.

Our policy prior to August of last year:

	keep all netnews for 30 days
	run expire once a week (early Monday morning)

As of the first of August, we ran out of inodes.

I started running expire every night, because we couldn't afford
a week's worth of slop in used inodes. This was OK until September.
Then we ran out of inodes again, so I cut the expire time back to
25 days, and the daily script mails me a "df -i" every night after
expire completes, so that I can keep an eye on it.
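The nightly report step reduces to a few lines of shell.  This is a
sketch, not Erik's actual script: a canned df line stands in for
`df -i /usr/spool/news` so the parsing is visible, and the 2500-inode
warning threshold is borrowed from the figures he quotes.

```shell
#!/bin/sh
# Sketch of a nightly inode watchdog (not Erik's actual script).
# A canned df line stands in for `df -i /usr/spool/news`.
df_line="/dev/hp1h     140564   89593   36914    71%   27576   11336    71%   /usr/spool/news"
ifree=`echo "$df_line" | awk '{print $7}'`      # 7th field is ifree
if [ "$ifree" -lt 2500 ]; then
    echo "WARNING: only $ifree inodes free on /usr/spool/news"
else
    echo "$ifree inodes free"
fi
# real version: df -i /usr/spool/news | mail usenet
```

Run after expire from cron, with the output mailed to the news admin.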

The inode count has lately been floating at between 2500 and 4000
free, although the Christmas/New Year's week has brought that
back up to almost 10,000 free. I shudder to think what will happen
when everyone gets back from vacation.

Disk space has been less of a problem; I keep between 10 and 30 Mbytes
free. UUCP has its spool elsewhere, so UUCP feed queues aren't taking
away from that. NNTP (most of what we do these days for news transfer)
takes articles right out of the spool, so that's no problem either.

If I had my druthers, I'd expire the soc, rec, and talk newsgroups
after two weeks rather than 25 days, and I'd let the rest of them go
back out to a 30-day expire, but right now that would mean running
expire twice every night, and we can't afford that either; ucbvax
is just too busy. I'd run the supposedly stupendous C news expire
(which, aside from blinding speed, allows for selective expire
times on a per-group basis), except that it produces a non-B news
compatible history file, and NNTP as currently written can't deal
with that. Someday.

In the meantime, I watch the daily "df -i" that comes in my morning
mail, with one hand poised on the expire-time dial in case the free
inode or disk space counts get too low...

Here's this morning's report:

Date: Wed, 6 Jan 88 02:08:07 PST
From: usenet@ucbvax.Berkeley.EDU (USENET News Administration)
Subject: USENET disk space report

Filesystem    kbytes    used   avail capacity iused   ifree  %iused  Mounted on
/dev/hp1h     140564   89593   36914    71%   27576   11336    71%   /usr/spool/news

	Erik E. Fair	ucbvax!fair	fair@ucbarpa.berkeley.edu

P.S.	Yes, I know, I could back up the filesystem and mkfs with more
	inodes, but that would eventually leave me with a disk space
	problem instead of a lack of inodes - I can basically take
	my pick on this one, AND ucbvax would have to come down to
	single-user mode while we do that...

spaf@cs.purdue.EDU (Gene Spafford) (01/06/88)

All the news, spool and lib, in the Dept. of Computer Sciences here is
kept in a 32Mb partition on "arthur" and all the other machines mount
it via NFS.  A larger partition would be nice, but we don't have 
enough disk to go around as is.

Right now, the disk is 70% full, but that is due largely to schools
being out for break; normally, that partition runs about 95% full.  We
seem to hit 100% about every 4 to 6 weeks (averaged over the last 5
months that I've been here!).

Expiration times:
	2 days -- junk,talk.all,comp.binaries.all,comp.sys.all,control
	7 days -- purdue.cs.news
	14 days -- purdue.cs.all,!purdue.cs.news
	5 days -- all other groups, including inet groups, no alt groups
All history information is kept 5 weeks (35 days).
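Spelled out as cron entries, that policy might look something like the
following sketch.  The hours are invented, and the flag syntax assumes
B news expire's -e (days) and -n (newsgroup pattern) options; the last
pass negates the groups already handled so they keep their longer times.

```
# four expire passes a night, one per expiration tier (times invented)
0 1 * * * /usr/lib/news/expire -e 2  -n junk,talk.all,comp.binaries.all,comp.sys.all,control
0 2 * * * /usr/lib/news/expire -e 7  -n purdue.cs.news
0 3 * * * /usr/lib/news/expire -e 14 -n purdue.cs.all,!purdue.cs.news
0 4 * * * /usr/lib/news/expire -e 5  -n all,!junk,!talk.all,!comp.binaries.all,!comp.sys.all,!control,!purdue.cs.all
```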

Yes, expire runs 4 times a night.  It's a Vax 8600, soon to be switched
to a Sequent Symmetry, so the cpu cycles aren't a big concern,
especially considering the small number of articles present.  It's
unlikely that we'll switch to C news expire until NNTP interfaces to it
and it exits beta test.
-- 
Gene Spafford
Dept. of Computer Sciences, Purdue University, W. Lafayette IN 47907-2004
Internet:  spaf@cs.purdue.edu	uucp:	...!{decwrl,gatech,ucbvax}!purdue!spaf

jeff@gatech.edu (Jeff Lee) (01/07/88)

We expire junk and control in 5 days, and the talk and binary groups
get expired in 10. Everything else (right now) is being expired in
17 days. This is after the Christmas break, though. When everyone
gets started posting again I expect to have to shorten the timeout to
about 2 weeks. We run the lib portion out of /usr but our UUCP queues
are on the same partition (a "g" partition on a CDC 9766 drive, ~67Mb).

As late as the beginning of last summer, we were able to keep almost
everything around for 3 weeks (with room to spare). Now I time out
news so as to keep about 6-10 meg spare for the UUCP traffic. This is
sometimes as little as 12 days.
-- 
Jeff Lee
Internet:	jeff@gatech.edu
UUCP:		...!{decvax,hplabs,ihnp4,linus,rutgers}!gatech!jeff

jerry@oliveb.olivetti.com (Jerry Aguirre) (01/07/88)

In article <22409@ucbvax.BERKELEY.EDU> fair@ucbarpa.Berkeley.EDU (Erik E. Fair) writes:
>Disk space has been a problem on ucbvax last fall for the first
>time in years. We run netnews (both the spool and lib directory)
>out of an Eagle's "h" partition, which is 140Mbytes.

Interesting, my /usr/spool/news is also 1h of an Eagle.

>Then we ran out of inodes again, so I cut the expire time back to
>25 days, and the daily script mails me a "df -i" every night after
>expire completes, so that I can keep an eye on it.

With the variations in volume of news I was also having problems coming
up with an expire time that made full use of the disk without
running over.  The problem with such a "hand" solution is that it uses
my time, and if I am unavailable then things break down.

I finally came up with a script that maintains things without attention
from me.  Using df and awk the script checks the available blocks and
inodes on /usr/spool/news.  (Somewhat like what patch 14 adds to
sendbatch.)  When this drops below a specified level (10K blocks and 3K
inodes) news is expired at 28 days.  If there is still not enough space
news is expired again at 21 days and mail is sent to me about it.  This
is repeated with shorter expire times until enough blocks and inodes are
free.
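The logic Jerry describes can be sketched in shell roughly as follows.
The thresholds are the ones he quotes (10K blocks, 3K inodes), but
free_space is a stand-in that returns canned numbers so the control
flow can be shown; a real version would parse `df /usr/spool/news` and
mail the admin when the second pass runs.

```shell
#!/bin/sh
# Sketch of a self-tuning expire wrapper (thresholds from the article,
# everything else a stand-in, not Jerry's actual script).
MIN_BLOCKS=10000
MIN_INODES=3000

free_space() {
    # real version: df -i /usr/spool/news | awk 'NR==2 {print $4, $7}'
    echo "$FAKE_BLOCKS $FAKE_INODES"
}

FAKE_BLOCKS=4000 FAKE_INODES=5000       # simulate a nearly full disk

for days in 28 21 14 7; do
    set -- `free_space`
    if [ "$1" -ge "$MIN_BLOCKS" ] && [ "$2" -ge "$MIN_INODES" ]; then
        break                           # enough room - stop expiring
    fi
    echo "expiring news older than $days days"
    # real version: /usr/lib/news/expire -e $days -v
    FAKE_BLOCKS=`expr $FAKE_BLOCKS + 9000`   # pretend expire freed space
done
```

Each pass through the loop shortens the expire time, so the disk settles
at whatever retention the current news volume allows.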

In practice expire only runs every 4 days or so.  Due to the unevenness
of the flow sometimes this is not enough and the 21 day expire will run.

>Here's this morning's report:

>Filesystem    kbytes    used   avail capacity iused   ifree  %iused  Mounted on
>/dev/hp1h     140564   89593   36914    71%   27576   11336    71%   /usr/spool/news
Mine looks like:
Filesystem    kbytes    used   avail capacity iused   ifree  %iused  Mounted on
/dev/hp1h     115311   92688   11091    89%   35405   21939    62%   /usr/spool/news

>P.S.	Yes, I know, I could backup the filesystem and mkfs with more
>	inodes, but that would eventually leave me with disk space
>	problem instead of lack of inodes - I can basically take
>	my pick on this one, AND ucbvax would have to come down to
>	single user mode for a while we do that...

I ran into the problem of too few inodes and found out that mkfs was
creating fewer inodes than the default.  It seems there is a hard limit
on inodes/cylinder-group and specifying more won't help.  On an eagle
this hard limit is less than the default!  I finally rebuilt
/usr/spool/news with a cylinder-group size of 8 and that fixed the
problem.  Of course waiting hours for restore to finish was a pain.
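For anyone facing the same rebuild, the sequence was presumably
something like the following (4.3BSD-style commands; the tape device
and disk names are placeholders, so don't run this verbatim):

```
dump 0f /dev/rmt8 /usr/spool/news         # level-0 dump to tape
umount /dev/hp1h
newfs -c 8 hp1h eagle                     # remake with 8 cylinders/group
mount /dev/hp1h /usr/spool/news
cd /usr/spool/news; restore rf /dev/rmt8  # the hours-long part
```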

					Jerry Aguirre

kimcm@ambush.UUCP (Kim Chr. Madsen) (01/07/88)

karl@tut.cis.ohio-state.edu (Karl Kleinpaste) writes:
>I'm rather curious.  With the high volume of news, we haven't got the
>space to keep a whole lot of news for a long time.  We keep most news
>less than a week, except for local newsgroups which we keep for as
>long as a month.

Usually a little over a month.  We collect the Usenet news on a
separate 147Mb disk on our 3B2, keeping an eye on the free space.

We don't do automatic backups or expiry of news - since only a
limited group here reads the news, and some of us do not have the
time to read it regularly enough to live with a fixed expire
deadline.  When expiring news we usually back up all news that has
arrived since the last backup and then expire all groups - unless
someone objects (as I said, the number of netreaders is limited, so
we can afford to do personal favors in this area).

>How full do you let it get?

Well, usually we let the freespace drop to 10 - 15 Mb before starting
expire.

						Regards
						Kim Chr. Madsen.

nyssa@terminus.UUCP (The Prime Minister) (01/07/88)

I administer news on two machines, terminus and rolls.  Terminus 
only receives a subset of newsgroups, which are kept for 7 days.
The exceptions are:  junk is eliminated after a day, articles
crossposted to alt.* or talk.* are eliminated after three days,
and rec.arts.drwho is expired after 1 day, but the expire is run
monthly.

Rolls gets everything.  alt.* and talk.* are expired after 3
days, soc.* after 4.  Everything else is after 7 days.

In both cases, we allow the Expire header to override our expire
command.
-- 
James C. Armstrong, Jnr.	(nicmad,ulysses,ihnp4)!terminus!nyssa

roy@phri.UUCP (Roy Smith) (01/08/88)

In <22409@ucbvax.BERKELEY.EDU> fair@ucbarpa.Berkeley.EDU (Erik E. Fair) writes:
> I'd expire the soc, rec, and talk newsgroups in two weeks [...] and I'd let
> the rest of them back out to 30 days expire, but right now that would mean
> running expire twice every night.

	Why not just expire soc/rec/talk on Monday, Wednesday, Friday, and
Sunday, and sci/news/comp/whatever on Tuesday, Thursday, and Saturday?
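In crontab form, a sketch of that schedule (the hour and flag syntax
are assumptions, using B news expire's -e/-n options; 0 is Sunday in
the day-of-week field):

```
# soc/rec/talk on Mon, Wed, Fri, Sun; everything else on Tue, Thu, Sat
0 4 * * 0,1,3,5  /usr/lib/news/expire -v -e 14 -n soc,rec,talk
0 4 * * 2,4,6    /usr/lib/news/expire -v -e 30 -n all,!soc,!rec,!talk
```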

	For what it's worth, we have:

Filesystem   kbytes    used   avail capacity iused   ifree  %iused  Mounted on
/dev/ra0d     96547   32247   54645    37%    7905   28959    21%   /usr/spool

which has both our news and uucp spool directories, but not /usr/lib/news,
with its 6 Mbytes worth of log and history files.  This is the least full I've
seen it in a long time (due to the January college vacations, no doubt).  We
get everything except for talk and alt, expire everything in 8 days, every
night, and provide 4 full feeds, plus a number of minor ones.  We used to run
with /usr/spool as part of the /usr file system; it was split out about 6
months ago to its own file system, and depending on how things are after all
the college people come back, we'll probably push our expire back up to 14
days (or more).
-- 
Roy Smith, {allegra,cmcl2,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

fair@ucbarpa.Berkeley.EDU (Erik E. Fair) (01/08/88)

In the referenced article, roy@phri.UUCP (Roy Smith) writes:
>	Why not just expire soc/rec/talk on Monday, Wednesday, Friday, and
>Sunday, and sci/news/comp/whatever on Tuesday, Thursday, and Saturday?

We just started doing this. It should have occurred to me sooner.
Thanks for the suggestion!

	Erik E. Fair	ucbvax!fair	fair@ucbarpa.berkeley.edu

dave@spool.cs.wisc.edu (Dave Cohrs) (01/09/88)

We currently spool news on three sites in the UW Comp Sci dept.
This should decrease to two sometime soon, one for the research
machines (spool.cs.wisc.edu) and one for the other machines
(puff.cs.wisc.edu).  Other machines access the news via either
NNTP or NFS (for a few machines).

Spool receives all the standard newsgroups, as well as the inet, alt,
and unix-pc distributions.  It keeps news for two weeks.  We keep news
on the "h" partition of an old SI (fuji) 160 drive.  That's about 100
Meg.  Both the news itself and the news/lib are kept here; uucp, et al,
are kept on another partition.  Since moving to this filesystem, I
haven't had any problems running out of either inodes or blocks.  We
also archive a few newsgroups on the same partition as uucp.

Puff has a smaller news partition, only 48 Meg.  We keep news there
for 8 days, down from 9 as of November.  The inodes get rather
scarce there occasionally, but it hasn't overrun yet.

We expire nightly, an old habit picked up when our news partitions
were smaller and shared space with other spool directories; this is
still necessary on puff.  I get a little status report in the mail
every morning giving me the current news disk usage and uucp
queues on these machines.

The biggest problem I've had with disk space lately is nntpd processes
getting stuck for a day (or two) with the 3 Meg history file open.  After a
couple of these are stuck, the news partition really seems to be
filling up.  Killing them off frees up the space.  If this gets
bothersome, I'll probably have to put timeout code into the nntpd.

Dave Cohrs
+1 608 262-6617                        UW-Madison Computer Sciences Department
dave@cs.wisc.edu                 ...!{harvard,ihnp4,rutgers,ucbvax}!uwvax!dave

ntm1569@dsacg3.UUCP (Jeff Roth) (01/09/88)

In article <3899@tut.cis.ohio-state.edu>, karl@tut.cis.ohio-state.edu
(Karl Kleinpaste) asks:

> ...  How long do you keep news?  What policies do you keep for
> expiring some groups fast and other groups slowly?  How much space do
> you allow for Usenet?  How full do you let it get?  We have a 30Mb
> partition that runs 60-70% full most of the time.

We keep comp, news, misc, and our local groups two weeks, archive sources
and comp.bugs.4bsd, and expire soc, rec, etc. after 5 days. We are using
a roughly 95Mb partition which stays around 50-60 percent full, with a
higher percentage of inodes used.
-- 
Jeff Roth               {uunet!gould,cbosgd!osu-cis}!dsacg1!jroth 
Defense Logistics Agency Systems Automation Center | 614-238-9421
DSAC-TMP, P.O. Box 1605, Columbus, OH 43216        | Autovon 850-
All  views  expressed  are  mine,  not  necessarily anyone else's

david@ms.uky.edu (David Herron -- Resident E-mail Hack) (01/10/88)

Well, since we're all saying what we do ...

In the CS/MA/STAT depts here we have all our news on one machine.
It's on a separate partition which has a bunch of stuff on it.
It's a 140 meg partition which holds some of the spooling area
for news coming in and out, the /usr/lib/news directory, and
all of the news articles.  Right now it's running at 60 megs
used with about 60 megs free.  (Remember, BSD reserves 10%.)

We get every last possible newsgroup we can get which includes "alt",
"inet", "unix-pc", "usrgroup", and "bionet".  Plus a couple of distributions 
of our own making.  The expiration is completely standard (i.e. just
"expire") and is done every day at 3am.

For spooling in/out of the system ... the machine news lives on
isn't connected to either bitnet or uucp.  The only news transfers
it does directly is NNTP.  For others I have the batches left
in a directory on the news machine, then shell scripts on
the bitnet and uucp machines (they're separate machines) pick
up the batches and do the appropriate things to uucp/netcopy
them to the neighbor.  Occasionally we'll have a situation where
either bitnet will be down for awhile or one of our uucp neighbors
doesn't call for a long time.  In THOSE cases we have space 
problems ... but in each case the spooling directories are on
their own partitions and we don't blow other things up.

Now, along about last November I discovered this 20+ meg
log file which NNTP had made and I'd forgotten it was there.
Erik, your script will run on that large a log file but it
takes fooorrreeeevvvveeerrrrr.. :-)

-- 
<---- David Herron -- The E-Mail guy            <david@ms.uky.edu>
<---- or:                {rutgers,uunet,cbosgd}!ukma!david, david@UKMA.BITNET
<----
<---- Winter health warning:  Remember, don't eat the yellow snow!

fair@ucbarpa.Berkeley.EDU (Erik E. Fair) (01/11/88)

In the referenced article, david@ms.uky.edu (David Herron -- Resident E-mail Hack) writes:

	Now, along about last November I discovered this 20+ meg
	log file which NNTP had made and I'd forgotten it was there.
	Erik, your script will run on that large a log file but it
	takes fooorrreeeevvvveeerrrrr.. :-)

I'll bet the numbers were interesting, though...

	Erik E. Fair	ucbvax!fair	fair@ucbarpa.berkeley.edu

P.S.	I write those scripts in awk because, for me, flexibility is
	more important than speed. If the reverse is true for you,
	please feel free to rewrite them in C...

rwa@auvax.UUCP (Ross Alexander) (01/11/88)

Well, for what it's worth, we take a full feed less the soc, talk,
and alt groups and expire after two weeks with history being held for
4 weeks.  Disk usage hangs around 25 to 35 MBytes (currently
29.5Mbytes) for /usr/spool/news and about 4-6 Mbytes for
/usr/lib/news depending on how dutiful I am about trimming the log
file.  

Ross Alexander, Athabasca University
alberta!auvax!rwa

hls@oce-rd1.oce.nl (Harry Schreurs) (01/12/88)

Regarding the problems some people have administering news, I wonder
why news doesn't allow me to use symbolic links.
Instead of running expire to create the necessary free space,
I would like to add some extra disk space to my spool directory:
just move a high-volume newsgroup to another partition and replace
the directory with an appropriate symbolic link.

Any comments?

--
Harry Schreurs
Internet:	hls@oce.nl
UUCP:		...!{..., ..., uunet}!mcvax!oce-rd1!hls
/*
 * This note does not necessarily represent the position
 * of Oce-Nederland B.V. Therefore no liability or
 * responsibility whatsoever will be accepted.
 */

fair@ucbarpa.Berkeley.EDU (Erik E. Fair) (01/13/88)

Unfortunately, using a symbolic link to add disk space to the
netnews tree would make newsgroups in that portion of the tree
impossible to cross-post to from the rest of the tree (on your
site), because cross-posting is represented in the filesystem with
hard links, and they don't work across filesystems...
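The mechanism is easy to see on any Unix system.  This little demo
(group directories and article numbers invented, not taken from any
site in this thread) shows the hard link that cross-posting relies on,
which link(2) refuses to make across filesystems:

```shell
#!/bin/sh
# B news stores a cross-posted article once and hard-links it into
# each group's directory; link(2) fails across filesystems.
mkdir -p spool/news/misc spool/rec/foo
echo "article body" > spool/news/misc/101
ln spool/news/misc/101 spool/rec/foo/55    # same filesystem: works
# ln spool/news/misc/101 /elsewhere/55     # other filesystem: "cross-device link"
```

Both names now share one inode, so the article is stored only once -
exactly the property a symlinked group directory would break.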

	Erik E. Fair	ucbvax!fair	fair@ucbarpa.berkeley.edu

dan@rna.UUCP (Dan Ts'o) (01/14/88)

In article <22560@ucbvax.BERKELEY.EDU> fair@ucbarpa.Berkeley.EDU (Erik E. Fair) writes:
>Unfortunately, using a symbolic link to add disk space to the
>netnews tree would make newsgroups in that portion of the tree
>impossible to cross-post to from the rest of the tree (on your
>site), because cross-posting is represented in the filesystem with
>hard links, and they don't work across filesystems...

	Unfortunate indeed, as you can't use a separate filesystem partition
either. I had wanted to use a different partition for comp, but even though
most cross-posting is within a single major group (a root group?), apparently
the news software builds a temporary file and then hard-links it to all the
target newsgroups. So all newsgroups must reside on the same mounted partition.

rees@apollo.uucp (Jim Rees) (01/20/88)

We have about 100 Gbytes free right now (hard to figure for sure, since
you have to find it first).  News is restricted to a single 450 Mbyte
disk, which currently has about 135 Mbytes free.  We never run out
of inodes because Apollo Unix dynamically extends the inode list
when you run out.

steve@nuchat.UUCP (Steve Nuchia) (01/23/88)

[news.software.b added to followup to news.admin article]

In article <12312@oliveb.olivetti.com>, jerry@oliveb.olivetti.com (Jerry Aguirre) writes:
> 
> With the variations in volume of news I was also having problems coming
> up with an expire time that made full use of the disk without
> running over.  The problem with such a "hand" solution is that it uses
> my time, and if I am unavailable then things break down.

I went through the same series of observations...

> I finally came up with a script that maintains things without attention
> from me.  Using df and awk the script checks the available blocks and
> inodes on /usr/spool/news.  (Somewhat like what patch 14 adds to
> sendbatch.)  When this drops below a specified level (10K blocks and 3K
> inodes) news is expired at 28 days.  If there is still not enough space
> news is expired again at 21 days and mail is sent to me about it.  This

but came up with a slightly different solution, which addresses another
problem as well.

My poor little 286 machine, with a half-dozen+ news neighbors, often
couldn't unbatch news fast enough to prevent uuxqt from blowing away
the lock and starting another unbatch, which slowed things down further...

Anyway, it wasn't pretty.  So, I replaced /usr/bin/rnews with:

umask 0222				# queued batches are created read-only
cat - >> /usr/spool/news/incoming/in.$$
ls -l /usr/spool/news/incoming/in.$$ >> /usr/lib/news/log

and wrote a daemon to do the real rnews, serially and at low
priority.  It also serializes expire with respect to unbatching
and the original also managed sendbatch.  That version was,
I believe, posted a while back.  Since then I've done some
work that makes it more stable and moved sendbatch back to
cron's arena.  I'll include my "sendsome" script below too - 
you may need to twiddle it to parse the df output properly
on your system.


/*
 *	newsd.c - daemon for unbundling news and doing other
 *	essential stuff, like running expire.  Obviates certain
 *	cron processing and allows the slow components of news
 *	to run niced without playing uucp games.
 *
 *		Steve Nuchia
 *		27 Sept 1987
 */

#include <sys/types.h>
#include <sys/stat.h>
#include <ustat.h>
#include <sys/dir.h>
#include <stdio.h>

#define INCOMING	"/usr/spool/news/incoming"
#define RNEWS		"/usr/lbin/rnews"
#define IHAVE		"/files/news/.ihave"
#define OUTGOING	"/usr/spool/batch"
#define NEWSDIR		"/files/news"
#define SPOOLDIR	"/usr/spool"

main()
{
	int	incoming;
	struct direct art;
	char	aname[DIRSIZ+1], fullname[80], best[80], command[160];
	long	btime, xtime;
	int	i;
	struct stat sbuf;
	struct ustat usbuf;
	int	scount = 25, days = 15;

    time(&xtime);
    nice ( 20 );
    if ( (incoming = open ( INCOMING, 0 )) < 0 ) perror ( INCOMING );

    while ( 1 )
    {
	sleep ( 60 );
	/* see how the space situation looks */
	stat ( NEWSDIR, &sbuf );
	ustat ( sbuf.st_dev, &usbuf );
	if ( usbuf.f_tfree > 1000 && usbuf.f_tinode > 500 )
	{
	    scount = 0;
	    /* look around in INCOMING */
	    lseek ( incoming, 2L * sizeof(struct direct), 0 );
	    best[0] = 0;

	    while ( read ( incoming, &art, sizeof(struct direct) ) > 0 )
	    {
		if ( ! art.d_ino ) continue;
		for ( i = 0; i < DIRSIZ; i++ ) aname[i] = art.d_name[i];
		aname[i] = 0;
		sprintf ( fullname, "%s/%s", INCOMING, aname );
		stat ( fullname, &sbuf );
		if ( ! best[0] || btime > sbuf.st_mtime )
		{
		    btime = sbuf.st_mtime;
		    strcpy ( best, fullname );
		}
	    }
	    /* if there is anything, take care of oldest */
	    if ( best[0] )
	    {
		sprintf ( command, "%s < %s", RNEWS, best );
		if ( ! system ( command ) ) unlink ( best );
		continue;
	    }
	}
	else
	{
	    printf ( "space problem in NEWSDIR %d\n", ++scount );
	    fflush ( stdout );
	    sleep ( 120 );
	}
	/* otherwise we are free to do housekeeping */
	stat ( SPOOLDIR, &sbuf );
	ustat ( sbuf.st_dev, &usbuf );
	if ( usbuf.f_tfree > 5000 && usbuf.f_tinode > 500 )
	{
	    if ( scount > 30 ) /* 30 times around with no space */
	    {
		time(&btime);
		scount = 20;
		days = days - 1 + (btime - xtime) / (23 * 3600L);
		xtime = btime;
		sprintf ( command, "expire -e%d -v -a >> expire.log 2>&1",
									days );
		printf ( "%s\n", command );
		fflush ( stdout );
		system ( command );
	    }
	}
	else
	{
	    if ( scount > 25 ) scount = 25;
	    printf ( "space problem in SPOOLDIR\n" );
	    fflush ( stdout );
	    sleep ( 180 );
	}
    }
}


and here's sendsome, with some of the clients removed - I made two
mods to sendbatch to make this work - one to make it exit after
one pass through its loop and another to allow command-line specification
of the compress program.  The latter was added because the memory
consumption for a full 16-bit compress is _much_ greater than
that for a compress compiled for 13-bit, and it just isn't
worth it for local links.


:
eval `df | grep /usr | awk '{ print "BLKS=" $3 "; INODS=" $5 }'`

if test -z "$BLKS" -o -z "$INODS" -o "$BLKS" -lt 7000 -o "$INODS" -lt 100
then
	exit
fi

eval `uustat -q | tail +1 | awk '{print $1 "=" $3}'`

if test -z "$sugar" -o "$sugar" -lt 4
then
	nice -20 /usr/lib/news/sendbatch -cp/usr/lib/news/comp12 sugar
fi

if test -z "$uunet" -o "$uunet" -lt 4
then
	nice -20 /usr/lib/news/sendbatch -c uunet
fi

if test -z "$uhnix1" -o "$uhnix1" -lt 6
then
	nice -20 /usr/lib/news/sendbatch -cp/usr/lib/news/comp12 uhnix1
fi



Oh, and one more piece of technology - I've got the following being
run as part of my rc startup script - it ensures that the spool files,
with PID-based names, don't get overwritten.  You'll notice in the
daemon source that it bases its selection of which batch to process
on the mtime so all is well.

cd /usr/spool/news/incoming
for x in *
do
	for l in a b c d e f g h i j k l m n o p
	do
		if [ ! -f $l$x ]
		then
			mv $x $l$x
			break
		fi
	done
done

cd /usr/lib/news
mv newsd.log onewsd.log
PATH=/usr/lib/news:/usr/lbin:/usr/bin:/bin:. ; export PATH
../newsd > newsd.log 2>&1 &
-- 
Steve Nuchia	    | [...] but the machine would probably be allowed no mercy.
uunet!nuchat!steve  | In other words then, if a machine is expected to be
(713) 334 6720	    | infallible, it cannot be intelligent.  - Alan Turing, 1947