[comp.mail.uucp] Limiting Simultaneous 'uuxqts' on Pre-HDB UUCP

tkevans@fallst.UUCP (Tim Evans) (10/13/90)

Is there any way of limiting the number of simultaneous uuxqts in
pre-HDB UUCP?  I receive a pre-compressed newsfeed on my SCO Xenix
2.2.3 box and find that once several rnews/uncompress processes
get running, the machine is pretty much brought to its knees for
doing any other (real) work.

Thanks
-- 
UUCP:		{rutgers|ames|uunet}!mimsy!woodb!fallst!tkevans
INTERNET:	tkevans%fallst@wb3ffv.ampr.org
Tim Evans	2201 Brookhaven Ct, Fallston, MD 21047

s872007@otto.bf.rmit.oz.au (David Burren [Athos]) (10/15/90)

In <1820@fallst.UUCP> tkevans@fallst.UUCP (Tim Evans) writes:

>Is there any way of limiting the number of simultaneous uuxqts in
>pre-HDB UUCP?  I receive a pre-compressed newsfeed on my SCO Xenix
>2.2.3 box and find that once several rnews/uncompress processes
>get running, the machine is pretty much brought to its knees for
>doing any other (real) work.

My understanding is that most non-HDB uucps limit themselves to one
uuxqt at a time.  When uuxqt starts up it checks for the file LCK.XQT,
and if that lock looks valid it assumes a job is already in progress.
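
In outline, the startup check amounts to something like the following
shell paraphrase (a sketch only; the real test lives in uuxqt's C code,
and the real lock file holds a binary PID rather than text):

	# sketch of uuxqt's startup logic, not the actual source
	LOCK=/usr/spool/uucp/LCK.XQT
	if [ -f "$LOCK" ]
	then
		# a valid-looking lock: assume another uuxqt is at work
		exit 0
	fi
	# no lock, so claim it and process the queued jobs
	echo $$ > "$LOCK"	# the real program stores its PID in binary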

eyrie (see my sig) is an SCO Xenix 2.2.3 box whose main news feed is
compressed and batched.  Yes, when rnews is uncompressing, things tend
to get a bit busy, but there's only ever one uuxqt running.
If you _are_ getting more than one I'd be most interested in hearing more.
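
A quick way to check is to count them while a batch is being chewed on,
using whatever flag your ps takes to list all processes (-e on SysV-ish
systems such as Xenix, ax on BSD):

	ps -e | grep uuxqt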

Other things, such as limited RAM and the fact that my spool drive is an
old stepper unit, tend to kill performance more than anything else.
_________________________________________________________________________
David Burren (Wookie Athos)		Work:	david@bacchus.esa.oz.au
Software Development Engineer		Home:	athos%eyrie@labtam.oz.au
Expert Solutions Australia		Study:	s872007@minyos.rmit.oz.au

jtc@van-bc.wimsey.bc.ca (J.T. Conklin) (10/16/90)

In article <s872007.655997255@otto> s872007@otto.bf.rmit.oz.au (David Burren [Athos]) writes:
>In <1820@fallst.UUCP> tkevans@fallst.UUCP (Tim Evans) writes:
>>Is there any way of limiting the number of simultaneous uuxqts in
>>pre-HDB UUCP?  I receive a pre-compressed newsfeed on my SCO Xenix
>>2.2.3 box and find that once several rnews/uncompress processes
>>get running, the machine is pretty much brought to its knees for
>>doing any other (real) work.
>
>My understanding is that most non-HDB uucps limit themselves to one
>uuxqt at a time.  When uuxqt starts up it checks for the file LCK.XQT,
>and if that lock looks valid it assumes a job is already in progress.

A problem I have observed is that pre-HDB SCO uuxqt erroneously
considers the lock file stale if it is "too" old and starts anyway.
This is likely to happen if you are processing a lot of compressed
news.  While a long string of uncompresses grinds away, the running
uuxqt holds LCK.XQT past the staleness threshold, so the next uuxqt
to come along treats the lock as stale and runs alongside the first.
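
You can see it coming when it happens: while a long uncompress runs,
LCK.XQT just sits there ageing.  Comparing the lock's timestamp with
the current time shows how close it is to being judged "stale" (the
exact threshold is vendor-specific; the path is SCO's usual one):

	ls -l /usr/spool/uucp/LCK.XQT	# lock's last-modified time
	date				# versus now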

The way I solved the problem was to recompile bnews with SPOOLNEWS
defined in defs.h.  The SPOOLNEWS option causes all inbound news to be
spooled, rather than unpacked, by rnews.  Since spooling takes almost
no time at all, uuxqt completes its job quickly, and there is never a
chance for subsequent uuxqts to find a "stale" lock.

I ran "/usr/bin/rnews -U" every 15 minutes from cron to unpack news
from the spool.  The news lock mechanism must be more robust than what
SCO used, as I never had any problems after I made the change.
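
For reference, the crontab side of it looks like the entry below (old
crons don't understand */15, so the minutes are spelled out; substitute
whatever path your rnews lives at):

	# unpack spooled news every 15 minutes
	0,15,30,45 * * * * /usr/bin/rnews -U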

	--jtc

-- 
J.T. Conklin	UniFax Communications Inc.
		...!{uunet,ubc-cs}!van-bc!jtc, jtc@wimsey.bc.ca

per@erix.ericsson.se (Per Hedeland) (10/19/90)

In article <2588@van-bc.wimsey.bc.ca>, jtc@van-bc.wimsey.bc.ca (J.T. Conklin) writes:
|> A problem I have observed is that pre-HDB SCO uuxqt erroneously
|> considers the lock file stale if it is "too" old and starts anyway.
|> This is likely to happen if you are processing a lot of compressed
|> news.

This problem is certainly not unique to SCO's uuxqt...

|> The way I solved the problem was to recompile bnews with SPOOLNEWS
|> defined in defs.h.

This is probably the best solution, but in case you prefer not to do
that for some reason or other, here is a script I hacked together for
SunOS 3.x/4.0[.x].  It may well work on other systems, perhaps with
some tweaking; the one requirement is that the uucp programs store the
process ID of the locker in the lock file.  Btw, it will also help with
long-running uucicos, where a corresponding problem can occur.  Run it
from crontab with an interval shorter than the "stale" time (e.g.
half-hourly).
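
For example, with the script installed as /usr/lib/uucp/uulckchk (the
path is only a suggestion; put it wherever suits you), a crontab entry
like this runs it on the half-hour:

	0,30 * * * * /bin/sh /usr/lib/uucp/uulckchk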

Regards
--Per Hedeland
per@erix.ericsson.se  or
per%erix.ericsson.se@uunet.uu.net  or
...uunet!erix.ericsson.se!per

uulckchk--------------------------------------------------------------------
#!/bin/sh
# Update the mod, access, and change times (using touch(1)) of all uucp
# lock files that are still valid (i.e. the process that created the file
# is still active).  This is a workaround for primitive uucps that just
# check the age of the lock file, rather than checking for the process,
# when determining validity.
#
# Where the lock files live
LCKDIR=/usr/spool/uucp

cd $LCKDIR
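# collect the lock file names as the positional parameters; if nothing
# matches, the pattern comes through literally (tested for below)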
set LCK.*
if [ "$1" = "LCK.*" ]
then
	# No lock files found
	exit 0
fi
for file
do
	if [ -r "$file" ]
	then
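		# the lock file holds the locker's PID as a binary long;
		# od -l dumps it and awk picks the value off the data line
		# (the line whose od offset ends in 0)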
		pid=`od -l $file | awk '$1~/0$/{print $2}'`
		if [ "$pid" != "" ]
		then
			# SunOS 3.2 kill(1) doesn't give a meaningful exit
			# code, so we rely on "ps <pid>" instead (undocumented,
			# but present in SunOS 3.2 & 4.0 and 4.3BSD)
			#if kill -0 $pid >/dev/null 2>&1
			if ps $pid >/dev/null 2>&1
			then
				# Mustn't recreate the file if it went away
				# while we weren't looking; hence -c
				touch -c $file >/dev/null 2>&1
			fi
		fi
	fi
done

exit 0