[ont.uucp] genat down with disk troubles

geoff@utstat.uucp (Geoff Collyer) (12/24/87)

I spoke with Mike Stephenson of Genamation (the local Pyramid dealers)
this afternoon, and he reports that genat has been down for the past few
days with a broken Eagle, but repairmen are working on the Eagle now,
and genat should either be up by the end of the afternoon, or will be
down for the holidays.

If genat is down for the holidays, utzoo does not have the disk space
to hold the unsent news that is queued for genat, so genat and sites
downstream from it will lose some news.
-- 
Geoff Collyer	utzoo!utstat!geoff, utstat.toronto.{edu,cdn}!geoff

clewis@spectrix.UUCP (Chris Lewis) (12/28/87)

In article <1987Dec23.161042.2776@utstat.uucp> geoff@utstat.uucp writes:
>I spoke with Mike Stephenson of Genamation (the local Pyramid dealers)
>this afternoon, and he reports that genat has been down for the past few
>days with a broken Eagle, but repairmen are working on the Eagle now,
>and genat should either be up by the end of the afternoon, or will be
>down for the holidays.

Genat still seems to be down, but "gen400" (an NCR Tower 32/400) is
answering its phone calls.  Mike is trying to bring news up on it.
Keep your fingers crossed.

>If genat is down for the holidays, utzoo does not have the disk space
>to hold the unsent news that is queued for genat, so genat and sites
>downstream from it will lose some news.

C-news's batcher is supposed to prevent that - it meters outgoing
traffic so as not to fill up the queue.  Genat won't lose anything (on
utzoo's side) until utzoo starts expiring articles that are queued but
not yet batched.  Then again, gen400 may start dropping stuff once
it starts picking things up again.
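
In outline, the metering is just a check before each batching run: if
free disk space or the outbound queue is past a limit, don't batch.  A
rough sketch of the policy (the names and thresholds here are invented
for illustration; this is not C-news code):

        /*
         * Illustrative sketch only: "stop batching when disk or queue
         * limits are hit".  Thresholds and names are made up, not
         * taken from C-news itself.
         */
        #include <stdio.h>

        #define MIN_FREE_KB 6000L   /* floor of free space to preserve */
        #define MAX_QUEUED  50      /* cap on unsent batches per neighbour */

        static int should_batch(long free_kb, int queued)
        {
            if (free_kb < MIN_FREE_KB)
                return 0;   /* disk too full: wait for expire to catch up */
            if (queued >= MAX_QUEUED)
                return 0;   /* neighbour down: don't pile up more batches */
            return 1;
        }

        int main(void)
        {
            /* stand-in numbers; in real life these would come from
               df(1) and from counting files in the outbound queue */
            printf("%s\n", should_batch(4500L, 10) ? "batch" : "hold off");
            return 0;
        }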

Lsuc is running EXTREMELY short of disk space, and I have had to take
drastic action (disabled uucico for inbound traffic, and commented out
all C-news batching and unbatching) until expire can catch up.  C-news
had blown both /dev/spool and /dev/usr - 4.5Mb of *compressed* news
sitting in the incoming queue from *one* day!  C-news isn't supposed
to blow disks, and I've added even more metering - I don't know why
this is happening.  utzoo has been pumping an incredible amount of
stuff.  Where in hell is it all coming from?

HCR seems to have been down since Dec 24th, chewing up valuable
spool space on lsuc.  Sickkids seems partially stuck.

Spectrix seems to have weathered most of the storm pretty well (but
we're not a full feed, and we have some suspicious zero-length
files lying around...).
-- 
Chris Lewis, Spectrix Microsystems Inc,
UUCP: {uunet!mnetor, utcsri!utzoo, lsuc}!spectrix!clewis
Phone: (416)-474-1955

oz@yunexus.UUCP (Ozan Yigit) (12/29/87)

In article <360@spectrix.UUCP> clewis@spectrix.UUCP (Chris Lewis) writes:
>
>utzoo has been pumping an incredible amount of stuff.  Where in hell is 
>it all coming from?
>
>Spectrix seems to have weathered most of the storm pretty well ...
>-- 
>Chris Lewis, Spectrix Microsystems Inc,

We managed to handle the storm as well, by far the biggest I have
seen.  [No spool problems, only dialer overload...]  It seems that for
each new trick usenet sites pull off (faster modems, better
compression, better news software, etc.), the volume increases at a
matching rate.  Sigh... If only we had a subscription service...

oz
-- 
Those who lose the sight	     Usenet: [decvax|ihnp4]!utzoo!yunexus!oz
of what is really important 	    	     ......!seismo!mnetor!yunexus!oz
are destined to become 		     Bitnet: oz@[yusol|yulibra|yuyetti]
irrelevant.	    - anon	     Phonet: +1 416 736-5257 x 3976

mark@sickkids.UUCP (Mark Bartelt) (12/31/87)

In article <360@spectrix.UUCP> clewis@spectrix.UUCP (Chris Lewis) writes:
 [ ... ]
> HCR seems to have been down since Dec 24th.  Chewing up valuable
> spool space on lsuc.  Sickkids seems partially stuck.

Not stuck at all; just down for a half day getting a disk fixed, and
restoring various filesystems from tape.  All is back to normal now.

molnar@gpu.utcs.toronto.edu (Tom Molnar) (01/04/88)

In article <260@yunexus.UUCP> oz@yunexus.UUCP (Ozan Yigit) writes:
# In article <360@spectrix.UUCP> clewis@spectrix.UUCP (Chris Lewis) writes:
# >
# >utzoo has been pumping an incredible amount of stuff.  Where in hell is 
# >it all coming from?
# >
# >Spectrix seems to have weathered most of the storm pretty well ...
#
# We managed to handle the storm as well, by far the biggest I have
# seen.

Storm?  What storm?  Never felt a thing.
-- 
Tom Molnar
Unix Systems Group, University of Toronto Computing Services.

clewis@lsuc.uucp (Chris Lewis) (01/04/88)

In article <1988Jan3.201804.10833@gpu.utcs.toronto.edu> molnar@gpu.utcs.UUCP (Tom Molnar) writes:
#In article <260@yunexus.UUCP> oz@yunexus.UUCP (Ozan Yigit) writes:
## In article <360@spectrix.UUCP> clewis@spectrix.UUCP (Chris Lewis) writes:
## >
## >utzoo has been pumping an incredible amount of stuff.  Where in hell is 
## >it all coming from?
## >
## >Spectrix seems to have weathered most of the storm pretty well ...
##
## We managed to handle the storm as well, by far the biggest I have
## seen.
#
#Storm?  What storm?  Never felt a thing.

You must have more disk space than us, Gunga Din.
-- 
Chris Lewis, Non-resident C-news Hacker,
Real: {uunet!mnetor,utcsri!utzoo,ihnp4!utzoo,utcsri!utzoo}!spectrix!clewis
Virtual: {same as above}!lsuc!clewis

gerry@syntron.UUCP (G. Roderick Singleton) (01/05/88)

In article <1988Jan3.201804.10833@gpu.utcs.toronto.edu>, molnar@gpu.utcs.toronto.edu (Tom Molnar) writes:
> In article <260@yunexus.UUCP> oz@yunexus.UUCP (Ozan Yigit) writes:
> # In article <360@spectrix.UUCP> clewis@spectrix.UUCP (Chris Lewis) writes:
> # >
> # >utzoo has been pumping an incredible amount of stuff.  Where in hell is 
> # >it all coming from?
> # >
> # >Spectrix seems to have weathered most of the storm pretty well ...
> #
> # We managed to handle the storm as well, by far the biggest I have
> # seen.
> 
> Storm?  What storm?  Never felt a thing.
> -- 
> Tom Molnar
> Unix Systems Group, University of Toronto Computing Services.


Must have been a local squall 'cause I didn't notice anything either.

-- 
G. Roderick Singleton              |  "ALL animals are created equal,
   <gerry@syntron.uucp>,           |   BUT some animals are MORE equal
or <gerry@geac.uucp>,              |   than others." a warning from
or <gerry@eclectic.uucp>           |  "Animal Farm" by George Orwell

geoff@utstat.uucp (Geoff Collyer) (01/05/88)

I am the news administrator for utzoo during Henry's absence.  Just to
set the record straight, there was a news storm or flood originating at
utzoo at Christmas time.  It has passed except that dciem hasn't picked
it all up yet and genat has been down, or at least not ready to receive
news, and so hasn't seen it yet.  (So all you sites downstream of dciem
and genat can gird your loins now: dciem is now receiving some news and
genat appears to be healing. :-)  I think there is a lesson (or two) in
the story of the flood...

Until mid-December, utzoo ran B 2.10 rnews [gasp! well, it is public
information; you could have discovered this by sending a version control
message], but only from 6:30 PM each weeknight until about 8 AM the
next morning, and around the clock on weekends, to prevent interference
with real work during the day.  Around the end of November, Henry
noticed that not only was B rnews failing to process the nightly news
flow by 8 AM, but even the round-the-clock processing on weekends was
leaving some unprocessed news behind on Monday mornings.  I did some
gross measurements in early December which suggested that unprocessed
news was accumulating at slightly over 1Mb/day.  As you might expect
from that rate, by mid-December, about three weeks after Henry first
noticed a Monday-morning backlog, utzoo had roughly 25Mb of unprocessed
incoming news, and the pile was growing rapidly.
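
(A quick sanity check on those figures, reading "slightly over
1Mb/day" as roughly 1.2Mb/day - my guess, not a measurement:)

        /* backlog ~= rate * days: three weeks at ~1.2Mb/day gives
           approximately the 25Mb observed by mid-December */
        #include <stdio.h>

        int main(void)
        {
            double rate = 1.2;   /* Mb/day, assumed */
            int days = 21;       /* three weeks */
            printf("estimated backlog: %.0f Mb\n", rate * days);
            return 0;
        }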

On December 15th, Henry installed C rnews and let it loose (subject
to the same time-of-day restrictions as above) on the backlog at 6 PM.
During the night of December 15th-16th, C rnews processed all but a few
megabytes of the incoming queue, and at 10:03 PM on December 16th it
polished off the last article of the backlog.

There's a lesson here for any other VAX-750-class machines (utzoo is a
PDP-11/44 with 2 Eagles and 3Mb or 4Mb main memory) which don't process
news during the day:  Switch to C news.  Soon.  It's no fun putting up
news software when there's a gun (consisting of a rapidly-growing
backlog) pointed at your disks (but it certainly does concentrate the
mind! :-).  Thus endeth the advertisement.

Then there was the small problem of distributing all that news.  Just
filing it all reduced utzoo's free space on /usr to 6Mb, the threshold
at which utzoo's news batcher refuses to generate batches, and then
Henry left for vacation (December 21st).  I spent a couple of days
removing files, forcing calls to sites with dead autodialers, and
waiting for the storm to expire.  Around Christmas, expire started
producing a lot of free space, so batching could proceed even faster
(now that the articles being batched had mostly expired :-).

utstat gets a fairly small (and shrinking) subset of the available news,
so I don't have a good feel for how many megabytes are in a day's news
flow, but I gather it is now about 1.5Mb-2Mb.  A permanent, standing
1200-baud UUCP connection can pump no more than 9.5Mb/day (assuming
110 bytes/second, which is the empirical upper bound at 1200 baud).  A
surprising number of utzoo's news neighbours in Toronto have only
1200-baud modems, so this is not entirely academic.  Looking a little
into the future, keeping 9.5Mb/day for 10 days (as utzoo currently
does) will consume 95Mb under /usr/spool/news, plus a site-specific
volume in the outgoing UUCP queues.  You will also need about 5Mb free
for a single day's incoming, unprocessed (but compressed) news.

Let's look a little further into the future.  B rnews on a VAX 750 under
4.2BSD processed about 67 bytes/second.  C rnews, when I last measured
it (over a year ago), was processing over 1,000 bytes/second on the same
machine, so C rnews should not limit news volume until major news links
use Telebit Trailblazers.  Assuming 1,000 bytes/second through the
Trailblazers and standing connections, one can pump only 86.4Mb/day.
Retaining news for 10 days will consume 908Mb (864Mb in /usr/spool/news
+ 44Mb incoming), or 2.4 Sun Eagles, or about 2 Swallows.

Somewhat later, CSRI should have a good FDDI Internet connection, so we
should be able to transfer 10Mb of news per second, but C rnews will
likely be running at only several kilobytes/second, unless we use
a Cray as the main U of T news server.  Unfortunately current disks
typically transfer data no faster than about 2.5Mb/second, but we shall
assume that disks will get magically faster.  Assuming that the C rnews
on the Cray can keep up with FDDI transfer rates, we can transfer only
864Gb of news per day.  Keeping it for 10 days will consume 9Tb in
/usr/spool/news, which will have to be on fast optical disks or in Cray
4 main memory.  :-) :-)
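
All of the capacity arithmetic in the last three paragraphs is one
formula applied repeatedly: daily volume = link rate x 86,400 seconds,
and spool consumption = daily volume x days of retention.  A throwaway
program reproducing the figures (the rates are the ones quoted above;
the 44Mb of incoming space and the Eagle/Swallow conversions are left
out):

        /* reproduces the link-capacity arithmetic from the preceding
           paragraphs; rates are the ones quoted in the text */
        #include <stdio.h>

        static void scenario(const char *link, double bytes_per_sec, int days)
        {
            double per_day = bytes_per_sec * 86400.0;   /* one day's flow */
            double spool = per_day * days;              /* retention total */
            printf("%-12s %10.1f Mb/day %12.1f Mb for %d days\n",
                   link, per_day / 1e6, spool / 1e6, days);
        }

        int main(void)
        {
            scenario("1200 baud", 110.0, 10);     /* 9.5 Mb/day, 95 Mb */
            scenario("Trailblazer", 1000.0, 10);  /* 86.4 Mb/day, 864 Mb */
            scenario("FDDI", 10.0e6, 10);         /* 864 Gb/day, ~9 Tb */
            return 0;
        }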

More seriously, I am interested in the growth of Usenet traffic vs. the
increases in speed of news hardware and software, and may try to plot
the curves.  I would guess that in a few years, the growth of traffic
will exceed the ability of machines smaller than Sun 4s running C news
to keep up.  I suspect that eventually only relatively large machines
will be able to keep up with traffic volumes, especially if the owners
of machines carrying news want people to get work (other than news
maintenance) done on those machines.

I see some final lessons: don't volunteer to be a backbone site :-); and
only moderated groups will survive in the long term.  I wonder how long
"long-term" is: three years or five?  I can remember news flows of
100kb/day; it all seemed so harmless then :-).
-- 
Geoff Collyer	utzoo!utstat!geoff, utstat.toronto.{edu,cdn}!geoff

dave@lsuc.uucp (David Sherman) (01/06/88)

In article <1988Jan5.011219.2676@utstat.uucp> geoff@utstat.uucp writes:
>There's a lesson here for any other VAX-750-class machines (utzoo is a
>PDP-11/44 with 2 Eagles and 3Mb or 4Mb main memory) which don't process
>news during the day:  Switch to C news.  Soon.  It's no fun putting up
                                          ^^^^^
>news software when there's a gun (consisting of a rapidly-growing
>backlog) pointed at your disks (but it certainly does concentrate the
>mind! :-).  Thus endeth the advertisement.

So, despite Henry being a co-author of C news and utzoo being a
machine that really needed help, he wasn't even running it?
Maybe that explains why C news was so much "fun" to install.

The message that I have from clewis is that C news is rather
difficult to install.  From my (personal) point of view, not
having had to do the installation work, I say C news is great,
since it runs so much faster.  But unless/until you guys can
repackage it so it installs more easily (perhaps Chris can
give you some advice on that score), our Official Statement
to the world is: don't install C news unless you have lots
and lots of expert hacker time to spare.  Thus endeth the
comment to the advertisement.

David Sherman
-- 
{ uunet!mnetor  pyramid!utai  decvax!utcsri  ihnp4!utzoo } !lsuc!dave

jeff@hcr.UUCP (Jeffrey Roberts) (01/07/88)

In article <81@sickkids.UUCP> mark@sickkids.UUCP (Mark Bartelt) writes:
 [ ... ]
> HCR seems to have been down since Dec 24th.  Chewing up valuable
> spool space on lsuc.  Sickkids seems partially stuck.

We were never down over the holidays.  Space is very limited, and it
may have appeared that we were down.  We got a very large feed of news
from Dec. 29 on.  We are coping with the large volume; however, we are
still backed up quite a bit in terms of processing it.
---- 
				Jeffrey Roberts
				HCR Corporation
				{utzoo,utcsri,lsuc}!hcr!jeff

mike@genpyr.UUCP (Mike Stephenson) (01/16/88)

Genat is back up again, limping along as I clear the backlog I've accumulated
in the meantime.  The machine, however, has been changed to an NCR
Tower 32/400.

-- 
						Mike Stephenson

Mail:	Genamation Inc.		Phone:	(416) 475-9434
	351 Steelcase Rd. W
	Markham, Ontario.	UUCP:	uunet!{mnetor,utzoo}!genat!genpyr!mike
	Canada   L3R 3W1		uunet!{mnetor,utzoo}!genat!mike

henry@utzoo.uucp (Henry Spencer) (01/17/88)

> So, despite Henry being a co-author of C news and utzoo being a
> machine that really needed help, he wasn't even running it?

I was running a lot of parts of it, but not the whole thing.  In particular,
I wasn't running C rnews.  This was partly sheer inertia, partly reluctance
to mess with working software without need, and partly lack of real need --
until quite recently, the antique B rnews we ran was more or less coping
with the load.  Oh yes, and partly being decidedly busy.  I am relieved to
have made the switch -- it was long overdue -- but it was a significant
effort at a time when I had too many other things to do.  The growing
backlog forced my hand.

Also, calling me "co-author" is perhaps a bit strong; Geoff did most of the
hard parts.  I've done more of the interfacing to the outside world, but
that's for other reasons (e.g. I talk more!).

> The message that I have from clewis is that C news is rather
> difficult to install... But unless/until you guys can
> repackage it so it installs more easily... our Official Statement
> to the world is: don't install C news unless you have lots
> and lots of expert hacker time to spare.

Chris seems to have had an unusually difficult time of it, based on the
reports we hear.  However, note the following in README.FIRST as sent
out with the alpha release:

	If you don't have time to explore its idiosyncrasies and babysit its
	problems, you should not even try to put it up.

You wuz warned.
-- 
Those who do not understand Unix are |  Henry Spencer @ U of Toronto Zoology
condemned to reinvent it, poorly.    | {allegra,ihnp4,decvax,utai}!utzoo!henry