dave@lsuc.UUCP (David Sherman) (07/22/85)
Our system is heavily loaded during office hours and can't handle the
additional load of uuxqt/cunbatch/news-unpack/rnews which news feeding
causes. However, I don't mind a uucico running, and in fact I like to
have uucico running, so we can get mail in and out of our system during
the day.

Mark Brader (lsuc!msb) and I have been playing with a shell script which
would let us batch incoming news by sending it to (say)
/usr/spool/newsbatches/* and running the cunbatches from there at night.
Before we actually try this kind of thing, has anyone done it already?
Is there any other obvious solution which will permit mail to work but
gracefully prevent news from running during the day?

Dave Sherman
The Law Society of Upper Canada
Toronto
-- 
{ ihnp4!utzoo pesnta utcs hcr decvax!utcsri } !lsuc!dave
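The scheme Dave describes can be sketched in Bourne shell roughly as
follows. All names here are hypothetical stand-ins, and a scratch
directory plays the part of /usr/spool/newsbatches, with plain cat
standing in for the real "zcat | rnews" unpacking:

```shell
#!/bin/sh
# Sketch: "cunbatch" only spools its standard input; a nightly cron
# job feeds the spooled batches to the real unbatcher.
SPOOL=/tmp/newsbatches.demo
rm -rf $SPOOL
mkdir $SPOOL

# What the replacement cunbatch would do with each incoming batch:
# write under a temporary name, then rename, so the night-time run
# never picks up a half-written file.
echo 'pretend compressed batch' | {
    cat > $SPOOL/making.1
    mv $SPOOL/making.1 $SPOOL/cunb.1
}

# What the nightly cron job would do: hand each spooled batch to the
# real unbatcher (plain cat stands in for "zcat | rnews" here).
for b in $SPOOL/cunb.*
do
    cat $b
    rm -f $b
done
```

The rename-after-write step matters: uux delivers the batch through the
fake cunbatch's standard input, and without it a cron run starting at
the wrong moment could unpack a partial batch.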
chuqui@nsc.UUCP (Chuq Von Rospach) (07/23/85)
In article <726@lsuc.UUCP> dave@lsuc.UUCP (David Sherman) writes:
>Our system is heavily loaded during office hours and can't
>handle the additional load of uuxqt/cunbatch/news-unpack/rnews
>which news feeding causes. However, I don't mind a uucico
>running, and in fact I like to have uucico running, so we
>can get mail in and out of our system during the day.
>
>Mark Brader (lsuc!msb) and I have been playing with a shell
>script which would let us batch incoming news by sending it
>to (say) /usr/spool/newsbatches/* and running the cunbatches
>from there at night. Before we actually try this kind of
>thing, has anyone done it already?

Well, I'm now running a version of uucp that allows me to grade
batches -- it should be part of 4.3, and I believe honey-danber does
this as well. It means that news is simply not transmitted around
during prime hours. This has helped things out to a great degree. If
you can't fix uucp, there are still things you can do:

o Simply don't queue up anything until the evening hours, and have your
  upstream feed do it as well (only the main feed has to do it -- I've
  found that even with 8 or 9 local feeds running into my site it
  doesn't really bother things much). If your site generates the
  batches at 8PM and again at midnight or some such, and you poll that
  site every hour or two, the chances are VERY good that you'll get all
  your news by the time you get in for work the next morning. If they
  simply don't batch it up during the day, you don't need to worry
  about it. Using the 'F' protocol in the sys file will keep them from
  using a lot of disk.

o There is a program written locally by Mark Stein (ex-Fortune hacker)
  that allowed you to shove incoming news into a queue to free up uux
  for mail. It was written for a batcher called bnproc, but there is
  probably a version somewhere for cunbatch (or it could be hacked up
  reasonably easily), and if you take the exec() of the unbatcher out
  of it, the news will just sit there.
You can then use cron to run the unbatcher when you want.

I've found, though, that a little coordination with your upstream site
and your downstream sites tends to save a lot of hassle -- I rarely
transfer large amounts of news during the day anymore, and 95% of the
changes I made were administrative.

chuq
-- 
:From the carousel of the autumn carnival:
Chuq Von Rospach
{cbosgd,fortune,hplabs,ihnp4,seismo}!nsc!chuqui	  nsc!chuqui@decwrl.ARPA

Your fifteen minutes are up. Please step aside!
dsp@ptsfa.UUCP (David St. Pierre) (07/24/85)
>In article <726@lsuc.UUCP> dave@lsuc.UUCP (David Sherman) writes:
>>Our system is heavily loaded during office hours and can't
>>handle the additional load of uuxqt/cunbatch/news-unpack/rnews
>>
>>Mark Brader (lsuc!msb) and I have been playing with a shell
>>script which would let us batch incoming news by sending it
>>to (say) /usr/spool/newsbatches/* and running the cunbatches
>>from there at night. Before we actually try this kind of
>>thing, has anyone done it already?

I've been running this way for about 3 months. My "cunbatch" just
leaves the files in .XQTDIR. Starting about 6PM, my cron kicks off the
real cunbatch shell every hour for the rest of the night. This allows
evening transmission to be processed in almost real time.

I've built a trivial locking scheme with the PID in a lockfile. If the
process is still running, the new cunbatch (in this case) just goes
away. Otherwise it updates the lockfile and starts off. I've also put
a shell front-end onto expire to honor the lockfile; it doesn't go away
but sleeps for a while and tries again.

While we do try to coordinate hours with our upstream neighbor,
sometimes we aren't able to connect in the evening. Having
cunbatch/compress/rnews kick off at about 8 AM was a real bummer for
everyone.
-- 
David St. Pierre	{ihnp4,dual,qantel}!ptsfa!dsp
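The PID-in-a-lockfile scheme described above might look something like
this (a hedged sketch with hypothetical paths; `kill -0` only tests
whether a process with that PID exists, without signalling it):

```shell
#!/bin/sh
# Hypothetical sketch of the lockfile scheme: the lock holds the PID
# of the running unbatcher; a later invocation exits if that PID is
# still alive, otherwise it takes over the lock.
LOCK=/tmp/cunbatch.demo.LCK
rm -f $LOCK    # demo only: start from a known clean state

if test -f $LOCK && kill -0 `cat $LOCK` 2>/dev/null
then
    # the earlier unbatcher is still running; just go away
    exit 0
fi

# stale or absent lock: record our own PID and do the real work
echo $$ > $LOCK
echo "unbatching would run here"    # real script: zcat batches | rnews
rm -f $LOCK
```

The expire front-end mentioned above would differ only in the first
branch: instead of exiting, it would sleep for a while and test the
lockfile again.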
ladm@kitc.UUCP (John Burgess - Local tools ADM) (07/24/85)
In article <726@lsuc.UUCP> dave@lsuc.UUCP (David Sherman) writes:
>Our system is heavily loaded during office hours and can't
>handle the additional load of uuxqt/cunbatch/news-unpack/rnews
	...
>thing, has anyone done it already? Is there any other obvious
>solution which will permit mail to work but gracefully prevent
>news from running during the day?
>
>Dave Sherman

Yes, have your sender ONLY batch the news to you in off hours.
Try something like this as your cron entry:

30 0-7,17-23 * * * /<wherever>/sendbatch ...

We stick an extra ",12" into the hours field to pick up news during
lunch.
-- 
John Burgess - Local Tools Administrator
ATT-IS Labs, So. Plainfield NJ (HP 1C-221)
{most Action Central sites}!kitc!ladm
(201) 561-7100 x2481 (8-259-2481)
stuart@sesame.UUCP (Stuart Freedman) (07/25/85)
> o Simply don't queue up anything until the evening hours, and have
> your upstream feed (only the main feed has to do it -- I've found that
> even with 8 or 9 local feeds running into my site it doesn't really
> bother things much) do it as well. If your site generates the batches
> at 8PM and again at midnight or some such, and you poll that site every
> hour or two, the chances are VERY good that you'll get all your news by
> the time you get in for work the next morning. If they simply don't
> batch it up during the day, you don't need to worry about it. Using the
> 'F' protocol in the sys file will keep them from using a lot of disk.

Having your news feed do their batching at certain times is a very good
control on when you get news. We do it at ncoast and it has cut down on
problems with news polling at the wrong hours.
-- 
Stuart Freedman		{genrad|ihnp4|ima}!wjh12!talcott!sesame!stuart
			{cbosgd|harvard}!talcott!sesame!stuart
			or mit-eddie!futura!stuart
honey@down.FUN (Peter Honeyman) (07/25/85)
it is trivial to set up a cron entry to unbatch news in the wee hours.
this being epsilon less than perfect, rick adams and i, independently
and simultaneously, added grade-dependent transfer to uucp. amazingly,
our hacks are compatible with one another. coincidences can happen!

rick's will be in 4.3; mine is, well, sitting in a 3b2 in summit nj,
waiting for bianchi to care.

	peter
heiby@cuae2.UUCP (Ron Heiby) (07/25/85)
I found that I had a similar problem. Response time on my system was
kinda draggin' with a uucico or two, plus five users doing software
development, plus the news unbatching (on an AT&T 3B2/300 running some
experimental O.S. code). So, I modified inews a bit to make the news
unbatching "nice".

The following fragment is from around line 735 in ifuncs.c. My addition
is marked with the comment "RWH".

	sprintf(unbatcher, "%s/%s", LIB, BATCH);
	reset_stdin();
	nice(15);	/* RWH */
	execl(unbatcher, "news-unpack", (char *)0);
	xerror("Unable to exec shell to unpack news.");

It could have been 19 instead of 15, for all I care. If someone more
familiar with the netnews code sees a problem with this, please let us
know. I can't see one.
-- 
Ron Heiby	heiby@cuae2.UUCP (via ihnp4)
AT&T-IS, /app/eng, Lisle, IL	(312) 810-6109
rbp@investor.UUCP (Bob Peirce) (07/25/85)
> dave@lsuc.UUCP (David Sherman) writes in <726@lsuc.UUCP>
> Our system is heavily loaded during office hours and can't
> handle the additional load of uuxqt/cunbatch/news-unpack/rnews

We have an identical problem. In our case, anytime we even try to send
mail out of the company, uucp starts up another uuxqt. We have had as
many as four running at once on large pieces of news. Talk about death
in the afternoon! You would think LCK.XQT was put there to prevent
this, but on our machine it must have a more subtle goal in life.

We are a binary site, so kludges are our standard method of fixing
problems. Before trying the more extreme approaches you might want to
explore some of the things we are trying.

1.  We modified cunbatch to run nice -20.

2.  Last night (so I don't know if it works yet) we moved uuxqt to
    uuxqt.pgm and added a uuxqt that checks /usr/spool/uucp for the
    absence of LCK.XQT before running uuxqt.pgm&.

This should make life livable. If we still have problems we will
probably resort to your solution. However, instead of moving the
batched files, I think we would first try moving the control files in
/usr/spool/uucp. They are the ones uuxqt looks to. I suspect they
could be moved out and brought back without serious problems.
-- 
Bob Peirce, Pittsburgh, PA				412-471-5320
uucp: ...!{allegra, bellcore, cadre, idis} !pitt!darth!investor!rbp
NOTE:  Mail must be < 30,000 bytes/message
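The wrapper in point 2 might be sketched like this (a guess at what it
would look like; a scratch directory stands in for /usr/spool/uucp, and
the real wrapper would be installed under the name uuxqt and actually
start uuxqt.pgm in the background):

```shell
#!/bin/sh
# Hypothetical uuxqt wrapper: refuse to start another uuxqt while
# LCK.XQT exists in the uucp spool directory.
SPOOLDIR=/tmp/uucp.demo        # stand-in for /usr/spool/uucp
rm -rf $SPOOLDIR
mkdir $SPOOLDIR

if test -f $SPOOLDIR/LCK.XQT
then
    # another uuxqt is already at work; don't pile on
    echo "uuxqt already running"
else
    # real wrapper would run:  /usr/lib/uucp/uuxqt.pgm &
    echo "starting uuxqt.pgm"
fi
```

One caveat: the real uuxqt is expected to create and remove LCK.XQT
itself, so the wrapper only narrows the window in which a second copy
can start; it doesn't close it entirely.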
itkin@luke.UUCP (Steven List) (07/26/85)
In article <726@lsuc.UUCP> dave@lsuc.UUCP (David Sherman) writes:
>Our system is heavily loaded during office hours and can't
>handle the additional load of uuxqt/cunbatch/news-unpack/rnews

Since cunbatch is just a shell script, why not either:

  1)  change it to save the file to a special directory and then run
      the original cunbatch at the time of your choice, or
  2)  replace cunbatch with csavebatch and coordinate with your
      newsfeed to use the new name in their sys file.

Both solutions are basically the same. They just implement the
suggestions from the original article. The script csavebatch could be
as simple as:

	SAVE=/usr/spool/savebatch
	if [ ! -d $SAVE ]; then mkdir $SAVE; fi
	cd $SAVE
	# the name of the single all-digits file here is the next
	# sequence number
	NEXTID=`ls [0-9]* 2>/dev/null`
	set x $NEXTID
	shift
	if [ $# -ne 1 ]
	then
		NEXTID=1
		touch $NEXTID
	fi
	cat > save.$NEXTID
	NEXT=`expr $NEXTID + 1`
	mv $NEXTID $NEXT
-- 
***
* Steven List @ Benetics Corporation, Mt. View, CA
* Just part of the stock at "Uncle Bene's Farm"
* {cdp,greipa,idi,oliveb,sun,tolerant}!bene!luke!itkin
***
dave@lsuc.UUCP (David Sherman) (07/29/85)
Well, thanks for the kind posted replies everyone; because of our uucp
problems, I wasn't able to see any of them before we had to resolve the
problem on Friday.

What we did was change cunbatch to a shell file which copies its
standard input into /usr/spool/batchnews/, and create a new
/usr/lib/news/unspoolnews which we run from crontab in the evenings.
It meant we could run uucp all day on Friday to bring in batches, which
was helpful because our uucp is presently only working at 300 baud, for
some reason. If anyone wants our source to cunbatch and unspoolnews,
let me know.

As to other solutions suggested:

 - queueing in the evening was already done; the problems came with our
   UUCP acting up or our dialer going on the blink, and not being fixed
   till the morning

 - upgraded UUCP would be interesting. Can anyone tell me what the
   availability is of a "better" UUCP for us? We're source-licensed
   for System III and are running v7 UUCP.

Dave Sherman
The Law Society of Upper Canada
Toronto
-- 
{ ihnp4!utzoo pesnta utcs hcr decvax!utcsri } !lsuc!dave
dhp@ihnp3.UUCP (Douglas H. Price) (08/16/85)
I ran into the problem of what to do about unpacking during the middle
of the day also. In my case, the sites downstream were complaining that
they were not getting news until after we did our unbatches at night.

The fix was to use an rnews replacement which spooled the news in a
private directory and directly forwarded the inbound news out through
uux. This meant that the downstream sites received their news right
away, and we had a private copy of the batched news to unpack at our
convenience. This will only work if you are forwarding all news to the
next site rather than a selected set of newsgroups, however.
-- 
Douglas H. Price
Analysts International Corp. @ AT&T Bell Laboratories
..!ihnp4!ihnp3!dhp
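A rough sketch of such an rnews replacement follows. Everything here is
hypothetical: the directory is a scratch stand-in, the site names are
invented, a here-document plays the part of the batch arriving on
standard input, and `echo` stands in for the real uux invocation:

```shell
#!/bin/sh
# Hypothetical spool-and-forward rnews: save the incoming batch for
# later unpacking, and queue the same copy to each downstream leaf
# site right away.
SPOOL=/tmp/rnews.spool.demo
rm -rf $SPOOL
mkdir $SPOOL
DOWNSTREAM="siteA siteB"        # leaf sites that get a full feed

# save one private copy of the incoming batch
cat > $SPOOL/batch.1 <<'EOF'
pretend compressed news batch
EOF

for site in $DOWNSTREAM
do
    # real script would run:  uux - $site!rnews < $SPOOL/batch.1
    echo "queued batch.1 for $site"
done
```

As the follow-ups note, forwarding the raw batch this way skips the
normal Path-header processing, which is why it only works for leaf
sites that never echo news back.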
roy@phri.UUCP (Roy Smith) (08/17/85)
> The fix was to use an rnews replacement which spooled the news in a
> private directory, and directly forwarded the inbound news out
> through uux.
>					Douglas H. Price
>					..!ihnp4!ihnp3!dhp

If I understand this right, that means your system doesn't add its site
name to the Path headers. Besides generating incorrect paths, the site
downstream from you will probably bounce the articles back to you,
since it won't realize that you have seen them already.
-- 
Roy Smith <allegra!phri!roy>
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016
dhp@ihnp3.UUCP (Douglas H. Price) (08/19/85)
In my case, it is true that incorrect paths are generated, but I only
feed leaf sites which do not forward news themselves. So as a matter
of practice, the problem of getting news echoed back to me doesn't
occur.

In point of fact, the REAL problem here is that netnews unpacking is so
dad-blamed inefficient and expensive in CPU time. It's getting to the
point where I am extremely tempted to rewrite rnews/inews. (I'm not
volunteering...yet).
-- 
Douglas H. Price
Analysts International Corp. @ AT&T Bell Laboratories
..!ihnp4!ihnp3!dhp
msb@lsuc.UUCP (Mark Brader) (08/20/85)
We receive almost all of our news in compressed batches.
We have replaced cunbatch by the following shell script:
-------------------------------- cut here ---------------------------------
cd /usr/spool/batchnews
name=cunb-$$
cat >making-$$
mv making-$$ $name
-------------------------------- eher tuc ---------------------------------
(/usr/spool/batchnews is generally-writable on our system, analogous to
/usr/spool/uucppublic. I don't know what you do if this is unacceptable.)
The fun part is the shell script to uncompress them. We want news to
appear on the system fairly quickly outside of working hours, and have
chosen 1 hour as the acceptable delay. So we run the script below,
called "unspoolnews", once an hour outside working hours.
But the first run in the evening may take well over an hour, and we
could end up with two or more news-unpacks competing, especially the
day after any kind of delay. So to prevent this, unspoolnews writes a
lock file called LCK.
But if an unspoolnews crashes for some reason (system crash, for instance),
we don't want the LCK file to hang around stopping the next run. The
solution is to check when the LCK file was last written. If it is more
than 40 minutes ago, or (for simplicity) not this month, we assume that
the older unspoolnews has gone away, and throw out the old LCK.
Note that the 40-minute checking is done right in the shell using bc.
If new batches arrive while unspoolnews is running, it will unspool them
when it has finished the ones that were there when it started.
We assume that there are some people who want to hear about certain
errors that may be produced by rnews or compress. The code catches
stderr from both programs. Any error from compress is considered worth
telling about; certain messages from rnews are considered ignorable, and
are only mentioned if there are any other errors. The behavior is to
send mail to the people mentioned in $WHOM if anything not containing
the strings in $EXCLUDEMSGS shows up in stderr of rnews, or if anything
is in stderr of compress. You will want to configure this according to
what you think is serious. At our site it is governed by the fact that
our junk directory is unwritable, for instance, and we know it.
Two further notes: zcat is a link to compress -d; and mail -s mails
an article with the specified subject line. You may have to change
these. We run Edition VII, essentially V7, and this is Bourne Shell code.
Here's unspoolnews:
-------------------------------- cut here ---------------------------------
PATH=/bin:/usr/bin:/usr/ucb
WHOM="news dave"
EXCLUDEMSGS="Newsgroups in active, but not sys
Cannot install article as
failed, errno 13, check dir permissions.
rejected. linecount expected
Article too old
Unknown newsgroup
No valid newsgroups found, moved to junk"
cd /usr/spool/batchnews
if test -r LCK && sh -c '
IFS=":$IFS"
cat <LCK >/dev/null
set x`ls -lu LCK 2>/dev/null`
MinU="$6 * 1440 + $7 * 60 + $8"
MonU="$5"
set x`ls -l LCK 2>/dev/null`
MinL="$6 * 1440 + $7 * 60 + $8"
MonL="$5"
if test "$MonU" != "$MonL" \
-o `echo "(($MinU)-($MinL))/40" | bc 2>&1` != 0
then
rm -f LCK
fi
test -r LCK'
then
exit 0
fi
while :
do
for i in cunb-*
do
if test "$i" = "cunb-*"
then
rm -f LCK .err-?-$$ 2>/dev/null
exit 0
else
>LCK
mv $i $$-$i
if test ! -s $$-$i ||
zcat <$$-$i 2>.err-z-$$ | rnews 2>.err-r-$$ &&
test ! -s .err-z-$$ &&
(fgrep -v "$EXCLUDEMSGS" .err-r-$$ >.err-s-$$
test ! -s .err-s-$$)
then
rm $$-$i
else
mv $$-$i .$i
(file `pwd`/.$i
ls -l `pwd`/.$i
cat .err-[sz]-$$
if test -s .err-r-$$
then
echo "
Full error listing:
"
cat .err-[rz]-$$
fi) |
mail -s "zcat or rnews error!" $WHOM
rm .err-?-$$
fi
fi
done
done
-------------------------------- eher tuc ---------------------------------
{ decvax | ihnp4 | watmath | ... } !utzoo!lsuc!msb
also via { hplabs | amd | ... } !pesnta!lsuc!msb
Mark Brader and uw-beaver!utcsri!lsuc!msb