[tor.news] Latest news flood

dave@lsuc.uucp (David Sherman) (01/25/88)

mnetor's feed from uunet was stuck for a long time, and this
meant all the Toronto news was coming from clyde!watmath!water!utgpu.
This route (presumably clyde) is not as good, and we seemed to
be missing a lot of stuff (we'd see followups to as-yet-unreceived
articles all the time).  Now with the mnetor problem solved, all
those two-week-old never-received articles have appeared. Look
at the Date: fields on your articles and you'll see what I mean.
There's even the occasional repeat article, posted so long ago
all record of it has vanished from history.

There's a possibility that a couple of other corporations may
be able to start assisting in bringing news into Toronto in
the not-too-distant future, directly from U.S. sites.  We're
working on it.

David Sherman
-- 
{ uunet!mnetor  pyramid!utai  decvax!utcsri  ihnp4!utzoo } !lsuc!dave

mason@tmsoft.UUCP (Dave Mason) (01/25/88)

Here's a list of recent news arrivals here (the figure in parens is what's unexpired):

News received on Jan 23   4383 k bytes
News received on Jan 22   5446 (  5443) k bytes
News received on Jan 21   4861 (  4292) k bytes
News received on Jan 20   3333 (  1343) k bytes
News received on Jan 19   4133 (  1387) k bytes
News received on Jan 18    377 (    69) k bytes
News received on Jan 17   1588 (   160) k bytes
News received on Jan 16   3489 (     0) k bytes
News received on Jan 15   3346 (     0) k bytes
News received on Jan 14   3183 (     0) k bytes
News received on Jan 13   2254 (     0) k bytes
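The flood is easier to see as a total.  A quick sketch of the arithmetic (the daily figures are copied from the table above; the sum and daily average are my own back-of-the-envelope numbers, not from the original logs):

```python
# Daily news arrivals in k bytes, copied from the table above.
received = {
    "Jan 23": 4383, "Jan 22": 5446, "Jan 21": 4861, "Jan 20": 3333,
    "Jan 19": 4133, "Jan 18": 377,  "Jan 17": 1588, "Jan 16": 3489,
    "Jan 15": 3346, "Jan 14": 3183, "Jan 13": 2254,
}

total_k = sum(received.values())
days = len(received)
print(f"{total_k} k bytes over {days} days "
      f"(about {total_k / days:.0f} k/day on average)")
```

Note the pre-flood days (Jan 16-18) swing between a few hundred k and a few thousand k, so the average smooths over a very lumpy feed.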

I was looking through the expire(8) man page & it talks about 5000
articles/week (circa 1986).  Well, I've expired about 1/3 and still have
7316 articles totaling 19985k for 5 days (which works out to about
14000 per week, in line with the claimed doubling of news volume per
year).  Most of the
recent flood has come via utzoo (rather than water).  I've noticed
several articles that I'd seen before and had been expired.  My nightly
expire job expires junky things in 2 days, then keeps expiring
everything else until there's 5Mb free or it gets to 6 days left.  As
you may surmise from the above, it got down to 6 days, but there was
only about 1Mb free. I manually did an 'expire -p -e 11 -E 13' today
and got back 6.5Mb.
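The per-week figure above can be reconstructed roughly like this (a sketch only: the exact fraction expired is "about 1/3" in Dave's wording, so the adjustment is an assumption, and the result lands in the same ballpark as his "about 14000"):

```python
# Figures from the post: after expiring about 1/3 of the articles,
# 7316 remain, covering 5 days of news.
remaining = 7316
fraction_expired = 1 / 3      # "I've expired about 1/3" -- approximate
days_covered = 5

# Reconstruct the pre-expire article count, then scale to a 7-day week.
before_expire = remaining / (1 - fraction_expired)
per_week = before_expire * 7 / days_covered
print(f"roughly {per_week:.0f} articles/week")
```

Against the 5000 articles/week cited in the 1986 man page, that is consistent with volume doubling roughly once a year or so.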

So, what's going on?

My guess is that utzoo (or possibly someone further away) had a huge
backlog (more than 16Mb (2+ weeks)), which they flushed over the last
3 days (reinstalled C rnews? :-).  What I find interesting is that
utgpu (our feed) seems to only keep 2 weeks' worth of history file, as
they were passing along these articles as new.

I hope I didn't throw away too much real news with my manual expire,
but spool space was getting pretty minimal (I know I didn't throw away
anything that I read, and my feeds (with the possible exception of
spectrix (we'll see when they start picking up news)) haven't missed
anything (more good luck than good management on my part)).  I've
heard that the flood filled ontmoh's spool space; I wonder about others.

Any comments?  And what groups does mnetor bring in these days vs.
what water brings in?
	../Dave

brian@ncrcan.Toronto.NCR.COM (Brian Onn) (01/27/88)

In article <272@tmsoft.UUCP> mason@tmsoft.UUCP (Dave Mason) writes:
>...I've
>heard that the flood filled ontmoh's spool space, I wonder about others.

Yep.  ncrcan blew up over the weekend, too.

Brian.