[news.software.b] ihave/sendme problems and fixes

mrm@sceard.Sceard.COM (M.R.Murphy) (06/29/90)

It would appear that ${NEWSBIN}/batch/batchsplit has a problem with
ihave/sendme.

The ${NEWSARTS}/out.going/*.ihave/togo* and ${NEWSARTS}/out.going/*.sendme/togo*
files have lines that contain only an article id and not a size. Therefore
${NEWSBIN}/batch/batchsplit assumes a default article size of 3000 when
splitting togo to make togo.0, togo.1, ... This is wrong for ihave and sendme
messages; for ihave/sendme the size of the article ID, not the size of the
article, is what should be compared against the size-of-batch parameter in
${NEWSCTL}/batchparms. The value 60 (doesn't C news generate nice long
article IDs :-) would seem more appropriate for ihave/sendme than 3000.
The result is lots and lots of little control messages, and a togo backlog
in somesystem.ihave and somesystem.sendme that reached 30000 articles :-)

I fixed the problem by changing the line in sendbatches that used to read

			batchsplit $batchsize
to
			batchsplit $batchsize $batcher

and adding

# set the default size for the result of a togo line based on the batcher
if [ x$2 = xbatcher ]
then
	defsize=3000
else
	defsize=60
fi

before

# pick an input file, shuffling togo aside if needed, and unlock
...

in batchsplit. I also changed the usage message in batchsplit to be

0)	echo 'Usage: batchsplit size [batcher]' >&2

but that's pretty much cosmetic. I tried to fix this behavio(u)r by making
the minimum number of changes to what came in the C news box.
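Put together, the effect of the change is just a two-way default based on batchsplit's new second argument. A minimal sketch (the function name `defsize_for` is mine, for illustration; it is not part of C news):

```shell
# hypothetical sketch of the default-size choice added to batchsplit;
# $1 stands in for the batcher name passed as batchsplit's second argument
defsize_for() {
	if [ "x$1" = xbatcher ]
	then
		echo 3000	# normal feed: togo lines lack sizes, assume ~3000 bytes/article
	else
		echo 60		# ihave/sendme: only the article ID goes out, ~60 bytes
	fi
}

defsize_for batcher	# prints 3000
defsize_for batchih	# prints 60
```

Note that anything other than the literal `batcher` (including an empty argument) falls through to the small default, which matches the patch above.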

Also, ${NEWSBIN}/queuelen doesn't do what I expected for ihave/sendme.
queuelen always returns zero for ihave/sendme, since neither a directory for
somesystem.ihave nor one for somesystem.sendme exists in the /usr/spool/uucp tree. The
directory for somesystem would be. I modified queuelen to be the following
(System V(tm) HDB flavo(u)red, sorry, other environments left as an exercise
for the interested reader :-)
-----
#! /bin/sh
# Find size of current queue of news outbound to $1.  HDB/BNU version.

# =()<. ${NEWSCONFIG-@<NEWSCONFIG>@}>()=
. ${NEWSCONFIG-/usr/lib/news/bin/config}

PATH=$NEWSCTL/bin:$NEWSBIN:$NEWSPATH ; export PATH
umask $NEWSUMASK

cd /usr/spool/uucp
# strip the .ihave/.sendme suffix to get the real UUCP neighbor name
sys=`echo $1 | cut -d "." -f1`
if test -d "$sys"
then
	cd "$sys"
	# count queued C.* work files for news, skipping X. (execution file) entries
	grep "news" C.* 2>/dev/null | grep -v X | wc -l
else
	echo 0
fi
-----

I'm sure someone will come up with a more elegant way of doing the
same thing, but this works the way I think that it should given the
descriptions of the parameters on a line in ${NEWSCTL}/batchparms.
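The heart of the change is just cut(1) taking the first dot-separated field, so the .ihave and .sendme feeds share the base system's spool directory:

```shell
# map a batchparms site name to its UUCP neighbor directory name;
# a plain site name with no dot passes through unchanged
sys=`echo somesystem.ihave | cut -d "." -f1`
echo $sys	# prints somesystem
```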


Maybe I did this right, maybe not, and it may work by accident, much
as it sort of didn't work before, at least not in the expected manner.
Maybe all of this has already been discussed and I didn't see it. Maybe
nobody cares and this is just a waste of bandwidth, but I think probably
not.

C news has changed the idle CPU cycles on our news/mail machine from 0%
(over a four-day period, with a mounting news backlog, which was what
finally forced me to change to C news :-) to 40%. Thanks to Henry
and Geoff for 4/10ths of a machine, and for software that is easy to
maintain and to modify for one's particular needs.
-- 
Mike Murphy  Sceard Systems, Inc.  544 South Pacific St. San Marcos, CA  92069
mrm@Sceard.COM        {hp-sdd,nosc,ucsd,uunet}!sceard!mrm      +1 619 471 0655

henry@zoo.toronto.edu (Henry Spencer) (06/30/90)

In article <1990Jun28.222640.12674@sceard.Sceard.COM> mrm@Sceard.COM (M.R.Murphy) writes:
>It would appear that ${NEWSBIN}/batch/batchsplit has a problem with
>ihave/sendme.

As noted in newsbatch(8), ihave/sendme batch processing is pretty much
a kludge that happens to work tolerably well.  Mike has some good thoughts
on making it work better; I plan to take a look at them for inclusion
in the official distribution.

Just in case people are curious, another thing which is more or less on
the list for ihave/sendme is an idea some folks hereabouts came up with:
delayed ihave/sendme, so that an ihave/sendme used as a backup feed won't
pass zillions of articles just because the main feed was a bit slow today.
-- 
"Either NFS must be scrapped or NFS    | Henry Spencer at U of Toronto Zoology
must be changed."  -John K. Ousterhout |  henry@zoo.toronto.edu   utzoo!henry

red@redpoll.uucp (Richard E. Depew) (06/30/90)

     In article <1990Jun28.222640.12674@sceard.Sceard.COM>
mrm@sceard.Sceard.COM (M.R.Murphy) describes an elegant and correct
way to increase the maximum size of ihave/sendme batches.

     There is, of course, an inelegant and incorrect way to achieve
the same end without mucking about in the source code: simply increase
the "size" definition in the /usr/lib/news/batchparms file from
100,000 to 5,000,000.  This has the advantage of not causing rejected
patches the next time Henry and Geoff update sendbatches or
batchsplit.  Of course it could result in rather large files if the
sizes were ever calculated correctly.  :-)
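For reference, the inelegant fix is a one-field edit to the site's batchparms line. A hypothetical entry (field layout per newsbatch(8); the site name, queue limit, and commands here are illustrative, not prescriptive):

```
# site              size     queue  builder  muncher  sender
somesystem.ihave    5000000  40     batchih  compcun  viauux
```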

Dick Depew
-- 
Richard E. Depew,  Village of Munroe Falls, OH.      red@redpoll.uucp
uunet!aablue!redpoll!red (east)  lll-winken!neoucom!redpoll!red (west)

zeeff@b-tech.ann-arbor.mi.us (Jon Zeeff) (06/30/90)

>
>Just in case people are curious, another thing which is more or less on
>the list for ihave/sendme is an idea some folks hereabouts came up with:
>delayed ihave/sendme, so that an ihave/sendme used as a backup feed won't
>pass zillions of articles just because the main feed was a bit slow today.

Several years ago someone posted a script that used at(1) to do this.


-- 
Jon Zeeff (NIC handle JZ)	 zeeff@b-tech.ann-arbor.mi.us

henry@zoo.toronto.edu (Henry Spencer) (07/01/90)

In article <MV0%=PC@b-tech.uucp> zeeff@b-tech.ann-arbor.mi.us (Jon Zeeff) writes:
>>Just in case people are curious, another thing which is more or less on
>>the list for ihave/sendme is an idea some folks hereabouts came up with:
>>delayed ihave/sendme, so that an ihave/sendme used as a backup feed won't
>>pass zillions of articles just because the main feed was a bit slow today.
>
>Several years ago someone posted a script that used at(1) to do this.

There are several ways of doing it, but some are better than others.
The obvious techniques all require tinkering with the feed sites, but
that can be avoided -- so all the funniness is at the recipient site,
where it belongs -- by being more clever.  Stay tuned.
-- 
"Either NFS must be scrapped or NFS    | Henry Spencer at U of Toronto Zoology
must be changed."  -John K. Ousterhout |  henry@zoo.toronto.edu   utzoo!henry

karl@naitc.uucp (Karl Denninger) (07/02/90)

In article <1990Jun29.181256.3508@zoo.toronto.edu> henry@zoo.toronto.edu (Henry Spencer) writes:

>Just in case people are curious, another thing which is more or less on
>the list for ihave/sendme is an idea some folks hereabouts came up with:
>delayed ihave/sendme, so that an ihave/sendme used as a backup feed won't
>pass zillions of articles just because the main feed was a bit slow today.

We've done this for some of our feeds, and it seems to work.

All it took was a couple of cron scripts.
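A minimal sketch of one way such a setup might look, assuming sendbatches accepts site names as arguments (as the stock script does) and that the backup feed's batches are simply held until an off-peak run; the path and site name are illustrative:

```
# hypothetical crontab entry: run the backup ihave feed only once a day,
# off-peak, so it stays quiet while the main feed keeps up
0 4 * * * su news -c "/usr/lib/news/bin/batch/sendbatches somesystem.ihave"
```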


--
Karl Denninger
karl@kbox.naitc.com
x3285