[net.news.b] Inews vs. df

roy@phri.UUCP (Roy Smith) (07/25/86)

	About the title of this article: around here we have two programs
we use a lot.  Inews runs every night, and every morning I run df to see
what the damage was.  Df usually loses. :-)

	It seems to me that a typical scenario when we run out of space on
/usr is that the history file gets trashed and articles get lost.  What
this really means is that expire doesn't find old articles, so they never
get removed.  This makes it even more likely that we will run out of space
again the next day and trash the history file all over again.  The
potential for a vicious cycle should be obvious.

	So, it seems to me that it would probably be a good idea to keep
/usr/lib/news and /usr/spool/news on different file systems.  That way,
news overflowing /usr/spool won't trash /usr/lib/news/history and start the
cycle all over again.
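To be concrete, I'm thinking of something like this (the partition name
is invented):

	mkdir /usr/spool/news
	/etc/mount /dev/xy1d /usr/spool/news

plus the matching /etc/fstab entry, so that a flood of news can only
fill its own partition.  Comments?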
-- 
Roy Smith, {allegra,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

phil@amdcad.UUCP (Phil Ngai) (07/25/86)

In article <2400@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>	So, it seems to me that it would probably be a good idea to keep
>/usr/lib/news and /usr/spool/news on different file systems.  That way,
>news overflowing /usr/spool won't trash /usr/lib/news/history and start the
>cycle all over again.  Comments?

On amdcad, /usr/spool is a disk partition.  We don't back it up or
anything.
-- 
 Gray cars are hard to see in the rain and should be outlawed.

 Phil Ngai +1 408 749 5720
 UUCP: {ucbvax,decwrl,ihnp4,allegra}!amdcad!phil
 ARPA: amdcad!phil@decwrl.dec.com

aburt@isis.UUCP (Andrew Burt) (07/26/86)

In article <2400@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>	So, it seems to me that it would probably be a good idea to keep
>/usr/lib/news and /usr/spool/news on different file systems.  That way,
>news overflowing /usr/spool won't trash /usr/lib/news/history and start the
>cycle all over again.  Comments?

Yes, I've set that up here and made /usr/spool/news a symbolic link to the
real location.

Another potential pain is when /usr/spool/uucp fills up with compressed/batched
files to send out, e.g., if csendbatch is running while you're receiving news.
For this reason I've put /usr/spool/uucp on the same filesystem as .../news
(and .../oldnews, too).
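In outline, the layout here amounts to this (the /u1 filesystem name is
just an example; existing spool contents have to be carried over first):

	mkdir /u1/news /u1/uucp
	# (move the old directories aside before making the links)
	ln -s /u1/news /usr/spool/news
	ln -s /u1/uucp /usr/spool/uucp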

Another approach is to mount a whole (large) file system on /usr/spool.
-- 

Andrew Burt
isis!aburt   or   aburt@udenver.csnet

dave@lsuc.UUCP (David Sherman) (07/28/86)

In article <2400@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>
>							What
>this really means is that expire doesn't find old articles, so they never
>get removed...

Aside from keeping /usr/spool on a separate file system from
/usr/lib, you might consider a simple alternative to expire:
find(1). I've given up on expire, both because it's slow and
because it's unreliable (if anything happens to the history file).
Much simpler, and with few harmful side-effects, is
	find /usr/spool/news/net -mtime +15 -type f -exec rm {} ';'
with whatever modifications for directory name and expiry time you like.
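Run nightly out of cron, the whole thing is one crontab line (the hour
is your choice):

	15 4 * * * find /usr/spool/news/net -mtime +15 -type f -exec rm {} ';'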

The only significant side-effect is that articles with explicit
long expiry dates will be lost anyway. No big deal, in my opinion.
A less-significant effect is that, under certain circumstances,
rn will give you "Skipping unavailable article" messages due to
the active file being out of sync with the real world.

Dave Sherman
The Law Society of Upper Canada
Toronto
-- 
{ ihnp4!utzoo  seismo!mnetor  utzoo  hcr  decvax!utcsri  } !lsuc!dave

geoff@desint.UUCP (Geoff Kuenning) (07/29/86)

In article <2400@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:

> 	About the title of this article: around here we have two programs
> we use a lot.  Inews runs every night and every morning I run df to see
> what the damage was.  Df usually loses. :-)

I used to have that problem, too, so I wrote the enclosed shell script,
inspired by /usr/lib/acct/ckpacct.  I run it with the following crontab line:

0,15,30,45 * * * * exec /usr/lib/uucp/ckuucp 1000 500 1600 800 '11 / 10' 2>/dev/null

(I don't get compressed news;  for compressed news you will want '11 / 4' in
place of '11 / 10').

Don't forget to remove my .signature from the bottom.

#!/bin/sh
#
#	%W%	%G% %U%
#
#	Periodically check the amount of disk space left on /usr
#	If it falls below $1 blocks (500 default), kill any running uucico
#	as soon as its current temp file disappears.  If it falls below
#	$2 blocks (100 default), kill all running uucico's regardless of
#	whether the current temp file is complete.
#
#	The size of the news spool directory is also watched.  If the sum of
#	the sizes of the D.* and TM.* files in /usr/spool/uucp, subtracted
#	from the free space on NEWSSYS, is less than $3 (default is the
#	value of $1), a soft limit on uucico's is invoked.  Similarly, if
#	it falls below $4 (default $2), uucico's will be killed immediately.
#
#	$5 is a multiplicative factor that will be applied to the number of
#	blocks in D.* files.  This is intended to allow space for fragmentation
#	and archiving.  The factor must be expressed as a rational number.  It
#	must be quoted, and if it contains shell metacharacters they must be
#	escaped *inside* the quotes.  For example,
#
#	    "5 \* 3"
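#
#	What follows is only a bare-bones sketch of the main check; the
#	df parsing, the ps incantation, and the soft-limit handling are
#	guesses to be adapted to your system, not the real thing.

LIMIT=${1-500}			# soft limit on free /usr blocks
PANIC=${2-100}			# hard limit

# Free blocks on /usr; the field number assumes a df that prints
# "filesystem kbytes used avail ..." with one line per filesystem.
free=`df /usr | awk 'NR == 2 { print $4 }'`

if test "$free" -lt "$PANIC"
then
	# Hard limit: kill every uucico on the spot.
	pids=`ps ax | awk '/uucic[o]/ { print $1 }'`
	test -n "$pids" && kill $pids
elif test "$free" -lt "$LIMIT"
then
	# Soft limit: the real thing waits for the current TM. file to
	# go away before killing; here we just holler for help.
	echo "ckuucp: /usr down to $free free blocks" | mail root
fi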

geoff@desint.UUCP (Geoff Kuenning) (07/29/86)

Oh yes.  I forgot to mention that if your uucp tends to leave TM. files
hanging around, you will also need the following crontab line:

0 * * * * exec /usr/lib/uucp/uuclean -pTM. -n3

or your spool directory will just fill up with half-completed temp files.

Also, take note that 'ckuucp' just pushes the spool load back onto
your news feed.  Make sure that this is ok with them before installing it.
I try to be real religious about using 'ckuucp' only as an emergency
fallback, and swallow my news more or less on time like a good boy.
-- 

	Geoff Kuenning
	{hplabs,ihnp4}!trwrb!desint!geoff

lwall@sdcrdcf.UUCP (Larry Wall) (07/29/86)

In article <1299@lsuc.UUCP> dave@lsuc.UUCP (David Sherman) writes:
> Much simpler, and with few harmful side-effects, is
>	find /usr/spool/news/net -mtime +15 -type f -exec rm {} ';'
> with whatever modifications for directory name and expiry time you like.
>
> The only significant side-effect is that articles with explicit
> long expiry dates will be lost anyway. No big deal, in my opinion.
> A less-significant effect is that, under certain circumstances,
> rn will give you "Skipping unavailable article" messages due to
> the active file being out of sync with the real world.

One more effect:

If the minimum article number is not updated in the active file, be prepared
for .newsrc lines longer than 1024 bytes.  Rn doesn't mind this, but if
you go back to readnews/vnews you could have problems.
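
If you expire with find, it's worth patching the minimums up once in a
while.  A quick and dirty sketch (it assumes the 2.10-style active
format of "group high low flags", and does no locking against a running
inews):

	#!/bin/sh
	# Rewrite the minimum-article field of each active line to the
	# lowest-numbered article still in that group's spool directory.
	cd /usr/spool/news
	while read group high low flags
	do
		dir=`echo $group | tr . /`
		min=`ls $dir 2>/dev/null | grep '^[0-9][0-9]*$' | sort -n | sed -n 1p`
		case "$min" in
		'')	min=$low ;;	# empty group: leave the old minimum
		esac
		echo "$group $high $min $flags"
	done </usr/lib/news/active >/tmp/active.$$
	# look over /tmp/active.$$ before copying it onto the real active file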

Larry Wall
sdcrdcf!lwall

faunt@spar.SPAR.SLB.COM (Doug Faunt) (08/09/86)

In article <1299@lsuc.UUCP> dave@lsuc.UUCP (David Sherman) writes:
>
>Aside from keeping /usr/spool on a separate file system from
>/usr/lib, you might consider a simple alternative to expire:
>find(1). I've given up on expire, both because it's slow and
>because it's unreliable (if anything happens to the history file).
>Much simpler, and with few harmful side-effects, is
>	find /usr/spool/news/net -mtime +15 -type f -exec rm {} ';'
>with whatever modifications for directory name and expiry time you like.
>
>The only significant side-effect is that articles with explicit
>long expiry dates will be lost anyway. No big deal, in my opinion.
>A less-significant effect is that, under certain circumstances,
>rn will give you "Skipping unavailable article" messages due to
>the active file being out of sync with the real world.


Michael Ellis, who set up the system here, set up something he calls
"prune" that trims individual directories in a script, so that some
newsgroups stay around longer than others.  This is fine, and seems
to work well, BUT our history file is continuously growing.  What do
people think is the best way of getting the history file back in sync
(and smaller)?
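
I don't have the script in front of me, but it boils down to a handful
of find commands with different ages, something like this (the groups
and times here are made up):

	find /usr/spool/news/net/sources -type f -mtime +60 -exec rm {} ';'
	find /usr/spool/news/mod/sources -type f -mtime +60 -exec rm {} ';'
	find /usr/spool/news/net/flame -type f -mtime +4 -exec rm {} ';'

plus a catch-all sweep at the end, which has to be kept out of the
long-lived directories (find's -prune will do it, if yours has it).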

grr@cbmvax.cbm.UUCP (George Robbins) (08/10/86)

In article <543@spar.SPAR.SLB.COM> faunt@spar.UUCP (Doug Faunt) writes:
>
>Michael Ellis, who set up the system here, set up something he calls
>"prune" that trims individual directories in a script, so that some
>news-groups stay around longer than others.  This is fine, and seems
>to work well, BUT our history file is continuously growing.  What do
>people think is the best way of getting the history file back in sync
>(and smaller)?

The simplest solution is to do an 'expire -r' on an occasional basis.
If you are using the 'DBM' option for the history file, this is the only
easy way to trim the history file. If you aren't using this option, then
you can clean up the file any way you choose, but it is important to do
so, since history file searching is one of the things that makes news
slow.

If you only store a small amount of news (< 5MB), then the expire -r is no
big deal, but if you have 50-100MB of news stashed away, it can take
hours and eat up all your /tmp space for sort work files.  However,
running it monthly would probably be ok.
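
A crontab line along these lines (day and hour picked arbitrarily, and
assuming expire lives in the usual place) does the monthly run:

	30 4 1 * * /usr/lib/news/expire -r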
-- 
George Robbins - now working with,	uucp: {ihnp4|seismo|caip}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@seismo.css.GOV
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)