[ont.uucp] news

molnar@gpu.utcs.toronto.edu (Tom Molnar) (02/05/88)

The amount of news flowing through utgpu has finally caught my attention.
News is eating up my /spool partition at an alarming rate.  During the
past 12 days, we've had about 45 megabytes of news traffic go through this
site.   If you take a look at the utgpu summaries in tor.news.stats for
utzoo and water, you'll be amazed.  I hope this flood will abate soon.
I've got a big spool partition, but...

Just how much news traffic did Geoff predict we'd see anyway?

oz@yunexus.UUCP (Ozan Yigit) (02/08/88)

In article <8802050452.AA03771@gpu.utcs.toronto.edu> molnar@gpu.utcs.toronto.edu (Tom Molnar) writes:
>
>The amount of news flowing through utgpu has finally caught my attention.
>News is eating up my /spool partition at an alarming rate.  During the
>past 12 days, we've had about 45 megabytes of news traffic go through this
>site.
	I think the full-news-feed sites (including us) have a problem.
	The news traffic increase Geoff predicted appears to be right on
	the money. I have no idea how to deal with this. A couple of days
	ago, I had to do an emergency expire (7-day) on misc,talk,soc,rec
	in order to get some inodes back. We use the default 15-day expiry
	scheme, but I do not know if this can go on for long. The current
	holdings of my spool area are as follows:
		
	1036	alt		60	bionet		84	can
	30434	comp		2708	misc		2	na
	1205	news		118	ont		12637	rec
	1730	sci		3381	soc		2628	talk
	12	to		404	tor		55	ut
		
	I think this adds up to over 50 megs, excluding the 7 days' worth I
	zapped a couple of days ago. Of course, this is only half the story,
	considering the amount of news "saved" by readers.

	So, we get Telebits, use C News for faster processing, and when
	the money is available, buy larger disks, just so that we can
	keep and pass around news??!!  Somehow, this doesn't sound right.
	A while ago, I mentioned a "subscription"-based news delivery
	scheme, and I am trying to put down the specifics of an
	implementation as a minor "fix" for the growing load of unread
	articles. But in general, I have no idea where all of this will
	end up. And all the while, someone out there is paying major
	phone bills.

oz
-- 
Those who lose the sight	     Usenet: [decvax|ihnp4]!utzoo!yunexus!oz
of what is really important 	    	     ......!seismo!mnetor!yunexus!oz
are destined to become 		     Bitnet: oz@[yusol|yulibra|yuyetti]
irrelevant.	    - anon	     Phonet: +1 416 736-5257 x 3976

dan@maccs.UUCP (Dan Trottier) (02/09/88)

I too had to do a panic expiration of high-volume newsgroups. Our spool
directory sits in a 96-megabyte filesystem and we were down to 1.5 MB
with news still coming in! Needless to say, the talk and binary groups were
the first to be sacrificed.

Disk utilization by Netnews Categories

988	/usr/spool/news/alt 	80	/usr/spool/news/can
31761	/usr/spool/news/comp 	30	/usr/spool/news/control
6	/usr/spool/news/local 	2670	/usr/spool/news/misc 
1669	/usr/spool/news/news 	145	/usr/spool/news/ont 
16767	/usr/spool/news/rec 	3485	/usr/spool/news/sci 
2085	/usr/spool/news/soc 	1409	/usr/spool/news/talk 
12	/usr/spool/news/to 	555	/usr/spool/news/tor
----------------------------
61734 KB total (roughly 60 MB) + the 10 MB or so I gained from panic expires
+ 10 MB from expires in the last couple of days (effects of switching news feeders)

Here's an idea, we wire the Toronto (and Hamilton :-) area with Ethernet
and use the Cray at UofT as the news server. Actually in the long run
this could save us money.  :-)

The problem with "Don't store newsgroups that nobody is subscribed to" is
that downstream sites may want that newsgroup. I could clear a lot of disk
space if I could just delete the groups that nobody reads.

C News and Trailblazers will help in moving news, but storage remains a
problem. A proposed solution: write a new expire that works on the
following logic:

	IF an article has been sent to all downstream sites AND
	everyone who is subscribed to that newsgroup has read that article THEN
	mark the article for expiration within X days.

X should be settable for different newsgroups.

This would probably be a minor patch to the expire program, but it would introduce
the extra load of reading everyone's ".newsrc" file. Since expire runs in
the middle of the night, who really cares about the extra time? News still flows
to all downstream sites, and once everyone has read an article you don't need
to keep it around much longer.
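
To give a feel for the ".newsrc" half, here is a rough sketch (not a patch to
expire, just an illustration; the home-directory paths and the .newsrc layout
it assumes may not match your system):

#!/bin/sh
# Sketch only: print, for the group named in $1, the highest article number
# that every subscriber has already read -- i.e. the newest article that
# could be expired early under the scheme above.
# Assumes .newsrc lines look like "group: 1-2345,2347" when subscribed
# ("group! ..." when not), and that home directories live under /usr or /u.
GROUP=$1
for rc in /usr/*/.newsrc /u/*/.newsrc
do
	[ -f "$rc" ] || continue
	grep "^$GROUP:" $rc
done |
sed 's/.*[-,: ]//' |	# keep only the last (highest) article number on each line
sort -n | sed 1q	# the minimum over all subscribers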

What do you think? Who would be interested in such an expire?

-- 
       A.I. - is a three toed sloth!        | ...!uunet!mnetor!maccs!dan
-- Official scrabble players dictionary --  | dan@mcmaster.BITNET

henry@utzoo.uucp (Henry Spencer) (02/09/88)

> This would probably be a minor patch to the expire program...

I wouldn't call it minor, actually.  More to the point, however, this can
be done with no changes to expire at all.  C News expire does its thing
according to a control file which can be arbitrarily detailed.  Just
write a program that looks at the .newsrcs and the batcher queues and
generates a suitable control file for expire.
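
For illustration, the batch-queue half of such a generator might look roughly
like this; it assumes (my guess, not gospel) that C News keeps one queue file
per downstream as /usr/spool/news/out.going/<site>/togo, each line naming a
queued article:

#!/bin/sh
# Sketch only: exit 0 if the article named in $1 (e.g. comp/sys/att/1234) is
# no longer queued for any downstream, exit 1 if some site still has it
# waiting.  The out.going/*/togo layout is an assumption; adjust to taste.
ART=$1
if grep "^$ART" /usr/spool/news/out.going/*/togo >/dev/null 2>&1
then
	exit 1		# still queued for at least one downstream; keep it
else
	exit 0		# batched (or never queued) everywhere; eligible for early expiry
fi

A control-file generator would run tests like these per group, plus the
.newsrc check, and emit correspondingly short or long expiry lines.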
-- 
Those who do not understand Unix are |  Henry Spencer @ U of Toronto Zoology
condemned to reinvent it, poorly.    | {allegra,ihnp4,decvax,utai}!utzoo!henry

dave@lsuc.uucp (David Sherman) (02/09/88)

In article <984@maccs.UUCP> dan@maccs.UUCP (Dan Trottier) writes:
>The problem with "Don't store newsgroups that nobody is subscribed to" is
>that downstream sites may want that newsgroup.

That's easy -- just expire them in 2 days or so, by which
time you can be sure they've been batched, even if not yet uucp'ed.

If you don't have C News expire, just write a shell script with
cd /usr/spool/news
find talk -type f -mtime +2 -exec rm {} ';'
find alt/flame -type f -mtime +2 -exec rm {} ';'

etc.  (Of course, you can optimize this with xargs, as in the sketch
below, but it's no big load anyway.)
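
For instance, the xargs form might look like this (the group directories and
the two-day age are only examples):

cd /usr/spool/news
# -type f and the numeric name pattern keep rm away from directories and
# any non-article files; list as many group directories as you like
find talk alt/flame -type f -name '[0-9]*' -mtime +2 -print | xargs rm -f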

If you do have C News expire, you can write yourself a
version of explist (call it, say, explist-extreme) with
greatly reduced expire times. Any time space gets really
low (you can even implement this check automatically if
you want), run expire using explist-extreme.  C News expire
uses few enough cycles that running it during the day is no
big deal (I just did it on a busy system).
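
Such an explist-extreme might look roughly like this; the four-field layout
(group pattern, m/u/x flag, days to keep, archive directory) is from my
reading of the C News documentation, so check expire(8) on your own system
before trusting it:

# explist-extreme: drastic holding times, for use only when space is tight.
# The /expired/ line controls how long history remembers expired articles.
/expired/	x	7	-
talk,alt	x	1	-
rec,soc,misc	x	3	-
all		x	7	-

Then run expire with this file in place of your normal explist whenever free
space drops below whatever threshold you pick.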

P.S. Check out the recent postings in tor.news.stats for some
wild numbers. We've shipped out 30Mb over the past 3 days...

David Sherman
-- 
{ uunet!mnetor  pyramid!utai  decvax!utcsri  ihnp4!utzoo } !lsuc!dave

clewis@spectrix.UUCP (Chris R. Lewis) (02/11/88)

In article <1988Feb9.095822.9656@lsuc.uucp> dave@lsuc.UUCP (David Sherman) writes:
>In article <984@maccs.UUCP> dan@maccs.UUCP (Dan Trottier) writes:
>>The problem with "Don't store newsgroups that nobody is subscribed to" is
>>that downstream sites may want that newsgroup.
>
>That's easy -- just expire them in 2 days or so, by which
>time you can be sure they've been batched, even if not yet uucp'ed.

Quite true, except in the version of news *you're* running...

C News, or B News enhanced by my batcher program (e.g. tmsoft, mnetor), or other
similar batch throttles will not necessarily have batched any given
article by a given time.  Just take a look at your statistics for queues -
the number given in the batch status line is the number of articles that
have not been batched at that point in time.  This can happen either because
the downstream hasn't picked up its stuff in so long that it is using an
undesirable amount of spool area, or because your own spool area has gotten
too full.  If one of those articles gets expired before the downstream's
batching gets to it (it doesn't talk to you for a long time, or your spool stays
close to the edge for a long time), that downstream will never see that article.

Standard B News has no outgoing throttles, so the next time the sendbatch
program is invoked all outstanding articles will be packed.  The problem is,
what if you run out of space...  But the two-day force delete would
be okay in this case.

>P.S. check out tor.news.stats recent postings for some
>wild numbers. We've shipped out 30Mb over the past 3 days...

I've been watching your stats.  YIPES!
-- 
Chris Lewis, Spectrix Microsystems Inc,
UUCP: {uunet!mnetor, utcsri!utzoo, lsuc, yunexus}!spectrix!clewis
Phone: (416)-474-1955

brian@ncrcan.Toronto.NCR.COM (Brian Onn) (02/11/88)

In article <314@yunexus.UUCP> oz@yunexus.UUCP (Ozan Yigit) writes:
>In article <8802050452.AA03771@gpu.utcs.toronto.edu> molnar@gpu.utcs.toronto.edu (Tom Molnar) writes:
>>
>>The amount of news flowing through utgpu has finally caught my attention.
>>News is eating up my /spool partition at an alarming rate.  During the
>>past 12 days, we've had about 45 megabytes of news traffic go through this
>>site.
>	I think the full-news-feed sites (including us) have a problem.
>	The news traffic increase geoff predicted appears to be right on
>	the money. I have no idea how to deal with this. Couple of days
>	ago, I had to do an emergency expire (7-day) on misc,talk,soc,rec

Me too!  Where is it all coming from??   ncrcan is a full-feed site, and last
week I had 15 Meg free on /usr/spool.  This morning, there were 100 blocks!
We blew up.

I too had to do an emergency expire.

We can get, and are getting, more disks, but that's not a solution.  Is this news
explosion a passing fad, or is it to be expected again?

Brian.
-- 
 +-------------------+--------------------------------------------------------+
 | Brian Onn         | UUCP:..!{uunet!mnetor, watmath!utai}!lsuc!ncrcan!brian |
 | NCR Canada Ltd.   | INTERNET: Brian.Onn@Toronto.NCR.COM                    |
 +-------------------+--------------------------------------------------------+

clewis@spectrix.UUCP (Chris R. Lewis) (02/12/88)

In article <584@ncrcan.Toronto.NCR.COM> brian@ncrcan.Toronto.NCR.COM (Brian Onn) writes:
>We can and are getting more disks, but that's not a solution.  Is this news
>explosion a passing fad? or is it to be expected again?

I'm wondering whether we're getting some sort of positive feedback loop
around here someplace.  There may be some idiosyncrasy in the batch
throttles being used that is causing this.  Henry once suggested the same.

Particularly this last flood.  E.g.: lsuc was merrily taking 900K to 1700K
bytes per day from utzoo for a long period; it abruptly went over 3 megabytes
for several days, then dropped back down to 300K, and is now starting
to settle out again.

Pardon the disjointed nature of this posting - I'm not quite sure I
understand all of the ramifications, so consider this "thinking aloud".

Consider the following scenario:

	1) You have an "incoming" throttle - if free spool space gets too
	   low, you stop unpacking news, or, more drastically, inhibit
	   uucico from your feed site.
	2) You have outgoing throttles - if free spool space gets too low,
	   you start inhibiting the creation of batches for a downstream.
	3) You're running close to the edge.

Now, let us say that your outgoing batch queue is at the limit or close to it
(particularly if one of your downstreams is stuck and is using all
of your "headroom").  Then your delivery to other downstreams gets pretty
slow, your incoming feed takes up the rest of your space, and then the
incoming feed is turned way down.  Things slow down a lot.  If the downstream
picks up again, the spool empties, your incoming feed gets turned on again,
and you get a huge flood.  Expire helps, but perhaps not a lot if you
have many downstream sites.  When the huge flood comes in, you're in
deep trouble, because you'll have a big jump in disk usage until things
get old enough to start expiring again.  Which, for example, is why
lsuc, tmsoft, yunexus and ourselves were accelerating our expiry schedules
during this last flood (we were doing rm -fr's at one point!).

Oscillations in incoming load will lead to corresponding (and probably
worse) oscillations in your disk usage.  Spectrix has no real outgoing
feeds, and we don't get a full feed either - still our spool area seems 
to "breathe" by 50% over a 3-7 day cycle.  A lot of this is due to
our incoming throttle slowing down the incoming feed.

Lsuc's spool oscillates between about 500K (last-ditch throttles kick in)
and 5Mb free spool...  What might be happening with lsuc is the following:

	1) a single downstream slows down (as a trigger)
	2) spool fills, other batching slows down
	3) incoming starts overrunning spool
	4) incoming throttled down (lsuc disables uucico as a last-ditch
	   defence; the first line of defence is simply not unpacking the
	   news which is still in spool).
	5) expire cleans up some space, and/or stuck downstream starts to
	   catch up.
	6) batcher uses up space for other downstreams and batching
	   speeds up (lsuc runs batcher far more frequently than it
	   successfully connects to upstream).
	7) *eventually* downstreams catch up and spool gets more space
	8) incoming throttled up
	9) delayed incoming batches cause a flood and the spool fills.
	10) downstreams slow down due to lack of spool - we're back
	   at step two.

It doesn't take much to see that without sufficient "damping" this could
be self-perpetuating.  And, perhaps more importantly, it would induce very
similar problems on both upstream and downstream neighbors.  Particularly
if your throttles were set very close to the end of the disk.

So far the damping is solely manual - like everybody's emergency expires.

Without throttles this wouldn't be such a big problem because you wouldn't 
be trying to run anywhere near so close to the edge on your disks.  For 
example, at lsuc the throttles come partially on at 1Mb free, and go to 
panic mode at .5Mb (as I remember how I set it up) - but as mentioned before,
the system hovers at 1Mb to 5Mb free - one sneeze and the throttles kick in
and possibly make the problem worse later.

This probably requires a considerable amount of thought about recommended 
free-space, and carefully selected thresholds for per-system batch limits, 
outbatching spool limits and incoming spool limits.

Things that I would think would help:

1) keep incoming batches, outgoing batches and unpacked articles on 
   different file systems - this will reduce throttle interaction ("impacted 
   spools" - "I can't get rid of any of this s**t because there isn't any
   room to send it!").
2) Making sure that the queue limit for a downstream is quite small compared
   to your average free spool space (ideally, the queue-limit total for all
   downstreams is less than your average free spool space).
3) Invoking the batcher often enough to keep up reasonably with a downstream
   that is being connected to at the "desired rate".  Ideally, if the
   downstream is connecting, invoke the batcher often enough that the queue
   never empties.  E.g.: if a downstream's queue limit can be transferred in
   an hour and 15 minutes, invoke the batcher every hour; a hypothetical
   crontab sketch follows.
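
The entries might look like this (script names and paths are only
placeholders; C News installations usually have a sendbatches script, B News
a sendbatch script, and "slowsite" is a made-up system name):

# run the batcher hourly so a connecting downstream's queue never runs dry
0 * * * *	/usr/lib/newsbin/batch/sendbatches
# poll the chronically slow downstream a bit more often than its queue fills
30 * * * *	/usr/lib/uucp/uucico -r1 -sslowsite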
-- 
Chris Lewis, Spectrix Microsystems Inc,
UUCP: {uunet!mnetor, utcsri!utzoo, lsuc, yunexus}!spectrix!clewis
Phone: (416)-474-1955

mason@tmsoft.UUCP (Dave Mason) (02/12/88)

In article <440@spectrix.UUCP> clewis@spectrix.UUCP (Chris R. Lewis) writes:
+----
|In article <584@ncrcan.Toronto.NCR.COM> brian@ncrcan.Toronto.NCR.COM (Brian Onn) writes:
|>We can and are getting more disks, but that's not a solution.  Is this news
|>explosion a passing fad? or is it to be expected again?
|
|I'm wondering whether we're getting some sort of positive feed-back loop
|around here someplace.  There may be some idiosyncrasy with the batch 
|throttles being used that might be causing this.  Henry once suggested the same.
|
|Particularly this last flood.  Eg: lsuc was merrily taking 900K to 1700K
|bytes per day from utzoo for a long period, it abruptly went over 3 megabytes 
|for several days, then dropped back down to 300K.  And is now starting
|to settle out again.
+----
In this case, there were many articles with posting dates >12 days
old.  Therefore I presume it was a large pile of news that got stalled at
?mnetor? (maybe they ran out of disk space; since they are running the throttled
batcher, they could have produced this problem - Ron?), rather than a feedback
loop.

But next time? who knows?	../Dave

henry@utzoo.uucp (Henry Spencer) (02/15/88)

Although there is indeed room for local oscillations due to things like
throttling batchers, it is clear from conversations at Usenix that the
news fluctuations are network-wide to some degree.
-- 
Those who do not understand Unix are |  Henry Spencer @ U of Toronto Zoology
condemned to reinvent it, poorly.    | {allegra,ihnp4,decvax,utai}!utzoo!henry

mmt@dciem.UUCP (Martin Taylor) (02/17/88)

>Although there is indeed room for local oscillations due to things like
>throttling batchers, it is clear from conversations at Usenix that the
>news fluctuations are network-wide to some degree.
>-- 
>Those who do not understand Unix are |  Henry Spencer @ U of Toronto Zoology
>condemned to reinvent it, poorly.    | {allegra,ihnp4,decvax,utai}!utzoo!henry

If the speculations are correct, this would be an expected consequence.
The oscillations would propagate both upstream and downstream.  Anyone
could start it.  A real analysis is needed, to see whether the idea is
more than speculation.
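
As a crude first step, here is a toy model one can run (just a
back-of-the-envelope sketch, not a real analysis; every number in it is
invented).  News is created at a steady 4 MB/day, the spool holds 60 MB,
unpacking stops when less than 2 MB would remain free, articles expire after
15 days, and one downstream stalls from day 20 to 27:

awk 'BEGIN {
	cap = 60000; reserve = 2000	# spool size and free-space reserve (KB)
	rate = 4000; keep = 15		# creation rate (KB/day) and expiry (days)
	for (day = 1; day <= 60; day++) {
		backlog += rate				# news created upstream today
		used -= cohort[day - keep]		# expire the cohort unpacked "keep" days ago
		if (day < 20 || day > 27) queue = 0	# downstream drains its queue, except while stalled
		room = cap - reserve - used - queue	# space left for unpacking today
		if (room < 0) room = 0
		take = backlog; if (take > room) take = room
		cohort[day] = take; used += take; backlog -= take
		queue += take				# articles unpacked today are also batched downstream
		printf "day %2d: spool %5d KB  unpacked %5d KB  backlog %5d KB\n", day, used + queue, take, backlog
	}
}' < /dev/null

It is nowhere near a real analysis, but even with perfectly smooth input it
shows a single eight-day stall echoing as roughly fifteen-day waves of feast
and famine long after the downstream has recovered.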
-- 

Martin Taylor
...uunet!{mnetor|utzoo}!dciem!mmt
mmt@zorac.arpa
Magic is just advanced technology ... so is intelligence.  Before computers,
the ability to do arithmetic was proof of intelligence.  What proves
intelligence now?  Obviously, it is what we can do that computers can't.

shields@yunccn (Paul Shields) (03/16/88)

I'm re-posting this article because it didn't get out the last time
(about 3 weeks ago).

In article <440@spectrix.UUCP> clewis@spectrix.UUCP (Chris R. Lewis) writes:
>...Spectrix has no real outgoing
>feeds, and we don't get a full feed either - still our spool area seems
>to "breathe" by 50% over a 3-7 day cycle.  A lot of this is due to
>our incoming throttle slowing down the incoming feed.

[...]
>Doesn't take much to see that without sufficient "damping" this could
>be self-perpetuating.  And, perhaps more importantly would induce very
>similar problems on both upstream and downstream neighbors.  Particularly
>if you're throttles were set very close to the end of the disk.

Here are a couple of ways to damp the oscillations: 

1) If we assume that the rate of news creation doesn't oscillate, but its 
delivery just comes in waves, we can make the waves cancel each other out
as follows: 

    (a) Have redundant connections to upstream neighbours (use the
	ihave/sendme protocol).

    (b) Divide the work amongst upstream neighbours. Ideally, the upstream
        sites will be chosen so that they are at different points on the
        wave from each other.

	This reminds me, would it be a good idea to send the message size
	in the ".ihave" messages?  This would enable a site to pick up equal 
	amounts of news from two adjacent upstream neighbours. 

This may not be possible, as one upstream site may oscillate at a different 
frequency.  It would also not work if the entire network oscillates in one 
giant wave.

2) Make the flow as continuous as possible, by making the batches smaller
and sending them more often.  Sites should send no more than X MB in any
given session, and connect as frequently as possible to upstream and
downstream sites.  This would increase the frequency of the oscillations but
decrease their amplitude.  (A rough sketch of one way to cap a session follows.)
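
For instance (a hypothetical wrapper, not part of any news release; the queue
directory and the 2 MB limit are placeholders):

#!/bin/sh
# Skip batching for a site once roughly LIMIT kilobytes are already queued
# for it, so no single session can grow much beyond that.
SITE=$1
LIMIT=2000			# about 2 MB per session; tune per site
QUEUED=`du -s /usr/spool/batch/$SITE 2>/dev/null | awk '{ print $1 }'`
case $QUEUED in
'')	QUEUED=0 ;;
esac
if [ $QUEUED -lt $LIMIT ]
then
	: invoke your usual batcher for $SITE here
fi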
-- 
Paul Shields, shields@yunccn.UUCP

Communication is a two-way street.  Don't get run over.

brad@looking.UUCP (Brad Templeton) (03/17/88)

I have written the answer to your problem, namely an expire program
that is based on disk space rather than time.  You say "keep 10000 blocks,
toss the rest".

With minor mods the program could be keyed into inews itself, so that
it keeps a list of the articles that are due to go, and every time inews
gets a new article, it dumps enough space (and keeps track of the balance)
so that your news disk or inode usage is always constant to within one
article.

I will post this after a couple of days safe use here.
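
In the meantime, here is a rough sketch of the basic idea (an illustration
only, not the program itself; a real version must also clean up the matching
history entries, and whether du reports kilobytes or 512-byte blocks varies,
so adjust KEEP to suit):

#!/bin/sh
# Delete the oldest article files until the spool tree fits in KEEP kilobytes.
# Illustration only: re-running du per file is slow, xargs may split the list
# so the oldest-first ordering is only approximate, and history is not updated.
KEEP=10000
cd /usr/spool/news || exit 1
find . -type f -name '[0-9]*' -print | xargs ls -tr | while read art
do
	USED=`du -s . | awk '{ print $1 }'`
	[ "$USED" -le "$KEEP" ] && break
	rm -f "$art"
done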
-- 
Brad Templeton, Looking Glass Software Ltd. - Waterloo, Ontario 519/884-7473