[news.sysadmin] uuxqt problems with news

dave@bvax.UUCP (Dave Wallace) (05/06/87)

I recently joined the 'net but have a problem. I am running System V,
Release 2.2 on a VAX 730 (no, that's NOT the problem). When I get my
'feed', uuxqt runs, forking a sh, which forks a sh, which forks a
compress and rnews (plus the compress forks an unbatch). I can
get up to 6 of these running at once, which causes the 730 to slow down
just a bit (-: . I currently get my feed from one site and don't send
to anyone else (until I get this problem resolved). It takes the 730
most of a working day to process 6 or 7 batch files in /usr/spool/uucp! It
seems like these processes are fighting themselves. Anybody have
any suggestions?
I apologize for the length of this posting, but I have included a sample
ps to show what is going on.

  F S   UID   PID  PPID  C PRI NI   ADDR  SZ    WCHAN TTY      TIME COMD
  3 S     0     0     0  0   0 20    322   4 8003bfb0 ?        0:01 swapper
  1 S     0     1     0  0  39 20    4cb  93 7ffff800 ?        3:10 init
  3 S     0     0     0  0   0 20    4d2   4 8000456c ?        3:01 swapper
  1 S     0    44     1  0  28 20    59d  56 8003c864 console  0:02 getty
  1 S     0  9609     1  0  28 20    f05  56 8003a834 ttyp12   0:02 getty
  1 S     0    33     1  0  26 20    72d  94 8002a802 ?        0:54 cron
  1 S     0    35     1  0  39 20    6e6  12 7ffff800 console  0:35 update
  1 S     0    37     1  0  26 20    6d1  32 8001e828 console  0:00 errdemon
  1 S    71    40     1  0  26 20    67d  90 8002a142 ?        0:03 lpsched
  1 S   100 10040     1  0  30 20    c40  97 8002d300 ttyp03   0:11 sh
  1 S     9 10119     1  0  30 20    f45 117 8002d350 ttyp12   0:02 uuxqt
  1 S     0    48     1  0  28 20    86c  56 8003a5cc ttyp05   0:02 getty
  1 S     0    49     1  0  28 20    8a2  56 8003a624 ttyp06   0:02 getty
  1 S     0    50     1  0  28 20    8b8  56 8003a67c ttyp07   0:02 getty
  1 S     9 10343 10119  0  30 20    d20  90 8002d490 ttyp12   0:00 sh
  1 S     0    52     1  0  28 20    8e6  56 8003a7dc ttyp11   0:02 getty
  1 S     9 10344 10343  0  30 20    b83  90 8002d530 ttyp12   0:01 sh
  1 S     0    54     1  0  28 20    90e  56 8003a88c ttyp13   0:02 getty
  1 S     0    55     1  0  28 20    92d  56 8003a8e4 ttyp14   0:02 getty
  1 S     9 10368     1  0  30 20    c3e 117 8002d620 ttyp12   0:01 uuxqt
  1 S     9 10369 10368  0  30 20    c74  90 8002d670 ttyp12   0:00 sh
  1 S   101 10438     1  0  28 20    e00  94 8003a574 ttyp04   0:09 sh
  1 S     9 10384     1  0  30 20    b60 117 8002d710 ttyp12   0:01 uuxqt
  1 S     9 10385 10384  0  30 20    ac2  90 8002d760 ttyp12   0:00 sh
  1 R     9 10579 10387 10  63 20    9e6 203          ttyp12   0:46 rnews
  1 S     9 10370 10369  0  30 20    860  90 8002d800 ttyp12   0:01 sh
  1 S   107  4224     1  0  30 20    9eb 139 8002d850 ttyp02   0:15 csh
  1 S     9 10371 10370  0  30 20    c60  56 8002d8a0 ttyp12   0:07 unbatch
  1 S     9 10346 10344  0  30 20    a5e  56 8002d8f0 ttyp12   0:07 unbatch
  1 S     9 10386 10385  0  30 20    5df  90 8002d940 ttyp12   0:01 sh
  1 S     9 10372 10371  0  26 20    d0c 866 8002aec4 ttyp12   0:18 compress
  1 S     9 10387 10386  0  30 20    c72  56 8002d9e0 ttyp12   0:07 unbatch
  1 S     9 10388 10387  0  26 20    9c4 866 8002b884 ttyp12   0:16 compress
  1 S     9 10347 10346  0  26 20    80b 866 8002c484 ttyp12   0:17 compress
  1 R     5 10580 10281  4  64 22    b74 203          ?        0:28 rnews
  1 R     5 10581 10529  4  64 22    bd6 203          ?        0:20 rnews
  1 R     9 10583 10371 29  74 20    e13 203          ttyp12   0:21 rnews
  1 S     3 10525 10522  0  39 26    eda  95 7ffff800 ?        0:02 sadc
  1 S   107 10263  4224  3  28 20    ce5 316 8003a4c4 ttyp02   1:10 vi
  1 S     5 10279 10278  0  30 22    fcc  90 8002dc60 ?        0:00 sh
  1 S     5 10280 10279  0  30 22    505  90 8002dcb0 ?        0:01 sh
  1 S     5 10527 10526  0  30 22    7e0  90 8002dd00 ?        0:00 sh
  1 S     3 10522     1  0  30 26    f37  89 8002dd50 ?        0:01 sh
  1 S     5 10278     1  0  30 22    b62 116 8002dda0 ?        0:01 uuxqt
  1 S     5 10281 10280  0  30 22    bab  56 8002ddf0 ?        0:11 unbatch
  1 S     9 10586 10346 13  20 20    fb4 203 8006ddc8 ttyp12   0:11 rnews
  1 S     5 10528 10527  0  30 22    bba  90 8002de90 ?        0:01 sh
  1 S     5 10526     1  0  30 22    dbf 116 8002dee0 ?        0:01 uuxqt
  1 S     5 10529 10528  0  30 22    bc6  56 8002df30 ?        0:02 unbatch
  1 S     5 10530 10529  0  26 22    4ff 866 8002a324 ?        0:05 compress
  1 R   100 10588 10040 15  67 20    ad3 102          ttyp03   0:03 ps
  1 S   103  4257     1  0  28 20    fd2  97 8003a784 ttyp10   0:12 sh

Thank you
---------------------------------------------------------------------
Dave Wallace
Bell Canada ( on week days )
Toronto, Ontario, Canada

mnetor!genet!clunk!bvax!dave  or
utzoo!psddevl!hqtd!dave

or
(416) 599-1588
____________________________________________________________________

jgd@pollux.UUCP (Dr. James George Dunham) (05/19/87)

In article <116@bvax.UUCP> dave@bvax.UUCP (Dave Wallace) writes:
>
>I recently joined the 'net but have a problem. I am running System V,
>Release 2.2 on a VAX 730( No, that's NOT the problem). When I get my
>'feed', uuxqt runs, forking a sh, which forks a sh, which forks a
>compress and rnews (plus the compress forks an unbatch). I can
>get up to 6 of these running at once, which causes the 730 to slow down
>just a bit (-: . I currently get my feed from one site and don't send
>to anyone else ( until I get this problem resolved. ) It takes the 730
>most of a working day to process 6 or 7 batch files in /usr/spool/uucp! It
>seems like these processes are fighting themselves. Anybody have
>any suggestions? 

    We reduced the priority of our news jobs so they would not interfere
with other users doing work on our system, and have a similar problem.
It seems that the lock on uuxqt is a timed lock, so if you have a uuxqt
job running for a long time and someone else calls, they can start a new
uuxqt job once the time on the lock has expired. We have seen several uuxqt
jobs get started when we have long jobs running in the background. On our
system the problem gets worse, since it appears there is a limit to the number
of processes one user can run. When the limit is reached, everything
comes to a halt.
     I feel that the correct solution is to make the lock on uuxqt a pid
lock rather than a timed lock. That way, only one copy of uuxqt can run
at a time. If uuxqt is killed for some reason, then the pid will be wrong
and a new one is allowed to start up. A quick fix is to lengthen the time
of the lock, which is X_LOCKTIME in uucp.h, and, when multiple jobs are
found, to kill all but the oldest one. I am hoping to get the lock changed
to a pid lock this summer.
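     Until then, a rough sketch of the "kill all but the oldest" cleanup
in plain sh might be something like this (it assumes a System V ps, and
the first uuxqt listed is not guaranteed to be the oldest, so check by
hand before trusting it):

	# list the PIDs of the running uuxqt jobs and kill all but the first
	ps -ef | grep '[u]uxqt' | awk '{ print $2 }' | sed 1d |
	while read pid
	do
		kill $pid
	done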

grr@cbmvax.cbm.UUCP (George Robbins) (05/22/87)

In article <311@pollux.UUCP> jgd@pollux.UUCP (Dr. James George Dunham) writes:
> In article <116@bvax.UUCP> dave@bvax.UUCP (Dave Wallace) writes:
> >
> >I can
> >get up to 6 of these running at once, which causes the 730 to slow down
> >just a bit (-: . I currently get my feed from one site and don't send
> >to anyone else ( until I get this problem resolved. ) It takes the 730
> >most of a working day to process 6 or 7 batch files in /usr/spool/uucp! It
> >seems like these processes are fighting themselves. Anybody have
> >any suggestions? 
> 
>     We reduced the priority of our news jobs so they would not interfere
> with other users doing work on our system and have a similar problem.

One good solution is to create an rnews script that gets executed via uucp
and simply copies the files into a temporary directory with a unique name.
Then you can have something running from crontab that starts up each hour
and processes the queued news.  Of course this script can test, either via
ps or with a lock file, whether one is still running and abort...

This kind of approach can be extended to also keep the incoming stream
around for a couple of days to make for easy recovery.
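
For what it's worth, a bare-bones sketch of the two halves might look
like this (the names and paths are just examples, and how you get uuxqt
to run the front end instead of the real rnews depends on your uucp
setup):

#! /bin/sh
# front end run by uuxqt in place of rnews: stash the incoming batch
# under a unique name and get out of the way
SPOOL=/usr/spool/newsin
cat > $SPOOL/batch.$$

and the hourly half, run from crontab:

#! /bin/sh
# abort if a previous run is still busy, otherwise feed each saved
# batch to the real rnews and remove it once it has been digested
SPOOL=/usr/spool/newsin
LOCK=$SPOOL/LOCK
test -f $LOCK && exit 0
echo $$ > $LOCK
for f in $SPOOL/batch.*
do
	test -f $f || continue
	/usr/bin/rnews < $f && rm -f $f	# path to the real rnews is an example
done
rm -f $LOCK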

Another area to look at is how much time is being sucked up by compress.
Compress used to get real hoggy if you used full 16-bit compression.  While
this may have been improved, you may find that changing your feed to 12-bit
compression reduces the cpu/memory load at only a slight increase in your
telephone costs.
 
-- 
George Robbins - now working for,	uucp: {ihnp4|seismo|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@seismo.css.GOV
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

rick@seismo.CSS.GOV (Rick Adams) (05/22/87)

	One good solution is to create an rnews script that gets
	executed via uucp that simply copies the files into a temporary
	directory with a unique name.  Then you can have something
	running from crontab that each hour starts up and processes the
	queued news.  Of course this script can test, either via ps or
	with a lock file, whether one is still running and abort...

If you define SPOOLNEWS when you compile rnews, you will get this
behaviour automatically. (Just remember to run rnews -U from crontab or
you will wonder where your news is going!)
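
A crontab entry along these lines is all it takes (the path to rnews is
whatever it is on your system); most crons will mail you anything the
job prints:

	30 * * * * /usr/bin/rnews -U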

---rick

avolio@decuac.dec.com (Frederick M. Avolio) (05/22/87)

In article <116@bvax.UUCP> dave@bvax.UUCP (Dave Wallace) writes:
>
>I recently joined the 'net but have a problem. I am running System V,
>Release 2.2 on a VAX 730( No, that's NOT the problem). ...
>> ... takes the 730
>>most of a working day to process 6 or 7 batch files in /usr/spool/uucp! It
>>seems like these processes are fighting themselves. Anybody have
>>any suggestions? 

2.11 news allows you to accept news in batched form and spirit it away
for later processing.  Why not set things up to unbatch news only
during the night between certain hours (see inews regarding "inews
-U")?  Ask your news feed not to call you during that time.

Fred

sl@van-bc.UUCP (Stuart Lynne) (05/23/87)

In article <311@pollux.UUCP> jgd@pollux.UUCP (Dr. James George Dunham) writes:
>In article <116@bvax.UUCP> dave@bvax.UUCP (Dave Wallace) writes:
>>
>>I recently joined the 'net but have a problem. I am running System V,
>>Release 2.2 on a VAX 730( No, that's NOT the problem). When I get my
>>'feed', uuxqt runs, forking a sh, which forks a sh, which forks a
>>compress and rnews (plus the compress forks an unbatch). I can
>>get up to 6 of these running at once, which causes the 730 to slow down
>>just a bit (-: . I currently get my feed from one site and don't send
>>to anyone else ( until I get this problem resolved. ) It takes the 730
>>most of a working day to process 6 or 7 batch files in /usr/spool/uucp! It
>>seems like these processes are fighting themselves. Anybody have
>>any suggestions? 

What's probably happening is that the multiple copies of compress are
fighting for table space.

>
>    We reduced the priority of our news jobs so they would not interfere
>with other users doing work on our system and have a similar problem.
>It seems that the lock on uuxqt is a timed lock, and so if you have a uuxqt
>job running for a long time and someone else calls, they can start a new
>uuxqt job if the time on the lock is expired. We have seen several uuxqt
>jobs get started when we have long jobs running in the background. On our
>system the problem gets worse since it appears there is a limit to the number
>of processes one user can run. When the limit is reached, then everything
>comes to a halt.
>     I feel that the correct solution is to make the lock on uuxqt a pid
>lock rather than a timed lock. This way, only one copy of uuxqt can run
>at a time. If uuxqt is killed for some reason, then the pid will be wrong
>and a new one is allowed to start up. A quick fix is to lengthen the time
>of the lock which is X_LOCKTIME in uucp.h and when multiple jobs are found,
>kill all but the oldest one. I am hoping to get the lock changed to a pid
>lock this summer.


The best (?) way around this problem seems to be to define SPOOLNEWS and
remake your news system. rnews/inews will then simply copy the incoming
files to /usr/spool/news/.rnews.

Then once (or twice) per hour run a script from crontab to run rnews -U. With
a couple of tests you can figure out if rnews is already unbatching and
exit if it is.

Defining NICENESS does seem to be a big win for small systems.  You stop
noticing when news is being unbatched.

The other problem with running rnews from uuxqt is that if it does lock
correctly, so that no second copy of rnews starts up, your mail gets delayed
until the news is unbatched, which on some small systems can be hours.

Spooling the news takes only a few seconds and then the system is ready to
forward incoming mail.

In general you should try to avoid running jobs which take a long time to
execute directly from uuxqt. Instead, try to use uuxqt to spool the
information and initiate the job as a separate task, allowing uuxqt to exit,
or run the job from a cron entry.

--------- Cut here for sample news hourly script -----------

# hourly script for news
#
# check for various lock files
# initiate rnews -U to unspool news 
#

ACTIVELOCK="active.lock"
ACTIVEBATCH="active.unbatch"
RNEWSLOCK="/usr/spool/news/.rnews.lock"

cd /usr/local/lib/news
if test -f $ACTIVELOCK
then
	echo newshourly: $ACTIVELOCK exists
	exit 1
fi

if test -f $ACTIVEBATCH
then
	echo newshourly: $ACTIVEBATCH exists
	exit 1
fi

if test -f $RNEWSLOCK
then
	echo newshourly: $RNEWSLOCK exists
	exit 1
fi

# grab the unbatch lock by linking it to the active file
ln active $ACTIVEBATCH

/usr/local/bin/rnews -U

rm $ACTIVEBATCH

# end of news hourly
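#
# a matching crontab entry for this script might be (installed path is
# just an example):
#
#	0 * * * * /usr/local/lib/news/newshourly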

-- 
Stuart Lynne	ihnp4!alberta!ubc-vision!van-bc!sl     Vancouver,BC,604-937-7532

dave@lsuc.UUCP (05/25/87)

In article <43878@beno.seismo.CSS.GOV> rick@seismo.CSS.GOV (Rick Adams) writes:
>
>	One good solution is to create an rnews script that gets
>	executed via uucp that simply copies the files into a temporary
>	directory with a unique name...
>
>If you define SPOOLNEWS when you compile rnews, you will get this
>behaviour automatically.

Almost, but not quite. With SPOOLNEWS (which requires news 2.11,
incidentally), unbatching and installing of news will be postponed,
but uncompression will be done right away, if your feeds are sending
you files which are totally compressed (2.10.*-style).

If you're on a system with limited memory, you may not want compress
running during the day either. We use a "cunbatch" which is basically
just a "cat".

David Sherman
The Law Society of Upper Canada
-- 
{ seismo!mnetor  cbosgd!utgpu  watmath  decvax!utcsri  ihnp4!utzoo } !lsuc!dave

scotty@l5comp.UUCP (06/10/87)

First, some further questions about uuxqt and the news.

I get my news by calling a remote site for it. The remote site batches it
and sets it up to be processed by cunbatch on my system.

I have spooling turned on in my news stuff. I also have nice set.

Uuxqt still runs for a good hour after the actual uucp operation finishes.
I've also noticed that articles get posted onto my local system while uuxqt is
running. I suspect that after the first file is decompressed, news starts
running and posting begins. This has to slow down further decompressing,
which of course also lengthens the time that uuxqt is left active.

Do I have some switch set wrong in my system?

Second, there be time slot hoppers in the world. I have an assigned dial in
time, but every so often someone jumps it. Last week my system didn't get
through till 05:00 (time slot was at 02:00) one night and then didn't
finish uucp till 06:30. However, I have an expire-daily that runs at 06:00.
The system was still running the expire AND the rnews -U at 13:00.

I assume this was a result of expire running at the same time as news was
being munched down, BUT I don't know enough about the news system to say
for sure. I had to kill -9 the rnews -U at 13:20 to shut everything down.
The pacct files had run up to 2.6 megs so I figured something was fatally
wrong. :)

As a side issue, is it 'safe' to kill -9 an active expire?

Was the expire-daily the problem? And if so what should I do to prevent
this mess from happening again? Move expire-daily till MUCH further past
the 02:00 news feed slot?

Third, I'm sure everyone here has seen the 'rebel approved' posting to
comp.sources.misc. Is there anything I can do to the news software on my
system to prevent someone posting such a message using my system? I know
trust is a great protection against a lot of stuff, and I DO trust my
regular users, but that posting seems to have been made from a 'guest'
account (if that wasn't faked as well.) I'd rather build in protection than
yank the guest account on my system.

And to whoever made that posting: You should have signed it rather than
skulking around like a criminal with a mask on. Your efforts are more
likely to focus attention on locking the system up rather than on
unmoderating newsgroups. When I read your posting there was an answering
pang of rebellion in my heart, then I looked to see whom to converse with
and my heart froze as cold as the ice in Siberia.

Thanks in advance for any help,

Scott Turner
-- 
UUCP-stick: stride!l5comp!scotty | If you want to injure my goldfish just make
UUCP-auto: scotty@l5comp.UUCP    | sure I don't run up a vet bill.
GEnie: JST			 | "The bombs drop in 5 minutes" R. Reagan
		Disclaimer? I own L5 Computing. Isn't that enough?

news@jpusa1.UUCP (06/13/87)


In article <287@l5comp.UUCP> scotty@l5comp.UUCP (Scott Turner) writes:
-First, some further questions about uuxqt and the news.
-I get my news by calling in for it to a remote site. The remote site batches it
-and sets it up to be processed by cunbatch on my system.
-I have spooling turned on in my news stuff. I also have nice set.
-Uuxqt still runs for a good hour after the actual uucp operation finishes. 
You should turn on spooling to cut down on uucp time.  What's going on is that
uuxqt is firing up news with the associated compress/batch stuff and the
machine is bogging down.  Much better to let uucp squirrel away the batches to
be processed later.  I use batching, and if the uucp port is active, I don't
start the unbatch.  This is handled in a script run hourly by cron.
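The "is the uucp port active" test is nothing fancy; something like this
does it (the lock file name depends on which line uucp calls out on):

	# skip this hour if uucico holds the lock on the dial-out line
	if test -f /usr/spool/uucp/LCK..tty00
	then
		exit 0
	fi
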
-
-The system was still running the expire AND the rnews -U at 13:00.
News 2.11 handles the expire/rnews interlocking so this shouldn't be a problem.
At the end of expire, an 'rnews -U' is started up so it's normal to see this.
-As a side issue, is it 'safe' to kill -9 an active expire?
You'll probably trash your history file if you do.  Use an 'expire -r' to fix it.
-Was the expire-daily the problem? And if so what should I do to prevent
-this mess from happening again? Move expire-daily till MUCH further past
-the 02:00 news feed slot?
Separating the uuxqt from news via spooling is what's needed.  I also don't
start up expire if news or the uucp port are active.  The expire.sh script
checks for this and sleeps 5 min until things have quieted down.  This is more
to keep the load down than rnews/expire paranoia, as the interlocking seems to
work now.
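The check in expire.sh amounts to something like this (the lock file
names are examples; use whatever your news and uucp actually create):

	# wait until neither news nor uucp is busy, then expire as usual
	while test -f /usr/lib/news/LOCK -o -f /usr/spool/uucp/LCK..tty00
	do
		sleep 300
	done
	/usr/lib/news/expire
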
-Third, I'm sure everyone here has seen the 'rebel approved' posting to
-comp.sources.misc. Is there anything I can do to the news software on my
-system to prevent someone posting such a message using my system? I know
-trust is a great protection against alot of stuff, and I DO trust my
-regular users, but that posting seems to have been made from a 'guest'
-account (if that wasn't faked as well.) I'd rather build in protection than
-yank the guest account on my system.
You can hack Pnews but anyone who knows how can post an article even if you have
no news connections or news software.

Stu Heiss {gargoyle,ihnp4}!jpusa1!stu

jerry@oliveb.UUCP (Jerry F Aguirre) (06/15/87)

Even with "spooling" turned on you may still have lengthy overhead at
uuxqt time.  If the transmitting site is sending you news in "cunbatch"
format rather than "#!cunbatch" format then the "uncompress", which is a
large part of the overhead, will happen at uuxqt time.  The rnews
invoked will save the article for later processing but the uncompress
overhead will slow things down.  If the timeout on the uuxqt lock file
expires then you will get another "cunbatch" will happen with another
uncompress running in parallel.

Older versions of news (2.10) execute the "cunbatch" command instead of
"rnews".  Newer versions (2.11) will use this format if the "-o" option
of "sendbatch" is used.  Frequently this is done when a site running the
new version is feeding a site that requires the old one.

The preferable solution is for the site feeding you to switch to the
new format.  (Remove the "-o" option to "sendbatch".)  If they are
queueing a single batch for multiple sites and some of them are running
old versions, then this may be a problem.  If your feed is itself running
an old version, then urge them to upgrade.

The second alternative is to replace "cunbatch" with something that just
saves the "cunbatch" for later processing.  Several shell scripts have
been posted in the past.  I have some C programs that I use here, but
they are somewhat BSD dependent.  The uncompression, unbatching, and
rnews processing is done only when the load is below a specified level.
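
A shell approximation of the load check, on a system with a BSD-style
uptime, would be something like this (the cutoff of 4 is arbitrary):

	# bail out while the 1-minute load average is 4 or more
	load=`uptime | sed 's/.*load average: *\([0-9]*\).*/\1/'`
	if test "$load" -ge 4
	then
		exit 0
	fi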

As far as the rnews/expire timing problem goes, wouldn't it make more
sense to run expire BEFORE polling for new news?  If you ran expire at
midnight and then polled at 2AM, you would have less chance of an
overlap.  After all, you control the expire; you can only make a try for
the uucp.  This also makes more sense if you think of the expire as
clearing out disk space for the news that is about to arrive.
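
In crontab terms (paths and the name of the feed site are just
examples), that ordering is simply:

	# expire first, then poll the feed two hours later
	0 0 * * * /usr/lib/news/expire
	0 2 * * * /usr/lib/uucp/uucico -r1 -sfeedsite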