stv@qantel.UUCP (Steve Vance@ex2499) (01/22/85)
Isn't there a locking mechanism to prevent batches of news from our
newsfeed sites from all being processed at once?  When I came in this
morning, there were 5 editions of news processing going on, as well as
last night's expire run.  Things were slow enough that people were
complaining about their nroffs taking too long.

Is there nothing to prevent multiple rnews's, or do I have a bug in
uuxqt, or something wrong with inews?
--
Steve Vance
{dual,hplabs,intelca,nsc,proper}!qantel!stv
Qantel Corporation, Hayward, CA
matt@oddjob.UChicago.UUCP (Matt Crawford) (01/24/85)
In article <qantel.337> stv@qantel.UUCP (Steve Vance@ex2499) writes:
>Isn't there a locking mechanism to prevent batches of news from our
>newsfeed sites from all being processed at once?  When I came in
>this morning, there were 5 editions of news processing going on,
>as well as last night's expire run.
>
>Steve Vance, Qantel Corporation, Hayward, CA

We were unhappy when multiple troff processes were active
simultaneously.  Having them all 'nice'd didn't really ease the memory
hogging, so I wrote a couple of interlock subroutines which I am
posting to net.sources.  They can be called by any program that has a
spare file descriptor, but they are specific to 4.2.
_____________________________________________________
Matt		University	crawford@anl-mcs.arpa
Crawford	of Chicago	ihnp4!oddjob!matt
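
[A minimal sketch of the flock()-based interlock Matt describes,
assuming 4.2BSD's flock(2).  The lock file path and function names
below are illustrative; this is not the code he posted to net.sources.]

/* interlock.c -- serialize access to a shared resource (e.g. troff
 * runs) using 4.2BSD's flock(2) on a dedicated lock file. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/file.h>

#define LOCKFILE "/tmp/troff.lock"	/* hypothetical shared lock file */

/* Open the lock file and sleep until an exclusive lock is held.  The
 * open descriptor is the lock token: closing it, or exiting for any
 * reason, releases the lock, so no stale lock is ever left behind.
 * Returns the descriptor, or -1 on error. */
int
interlock_acquire(void)
{
	int fd = open(LOCKFILE, O_RDWR | O_CREAT, 0666);

	if (fd < 0)
		return -1;
	if (flock(fd, LOCK_EX) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

/* Drop the lock and its descriptor. */
void
interlock_release(int fd)
{
	flock(fd, LOCK_UN);
	close(fd);
}

[A wrapper could call interlock_acquire() before exec'ing troff, so
only one formatter runs at a time; since the kernel releases the lock
when the holding process dies, a crashed job never blocks the next one.]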
geoff@desint.UUCP (Geoff Kuenning) (01/26/85)
In article <337@qantel.UUCP> stv@qantel.UUCP (Steve Vance@ex2499) writes:
>When I came in
>this morning, there were 5 editions of news processing going on,
>as well as last night's expire run.

Your problem is in uuxqt.  Older versions of it have no concept of
jobs that take several hours to run.  After news has been unbatching
for an hour, a new uuxqt comes along (probably out of crontab),
discards the old LCK.XQT file, and starts up a new unbatching job,
usually on the same batch as the previous one.  This continues until
the first uuxqt deletes the X. and D. files; subsequent uuxqt's then
pick up the second batch.  Eventually everything dies down, but I once
had an especially heavy news load (our feed was down) take over 24
hours to unbatch!

The solution is trivial: add the following lines to your crontab or to
your hourly uucp demon:

# Ensure that long-running uuxqt's continue to run without interference.
# ***NOTE*** If uuxqt crashes and a LCK.XQT file gets left hanging around,
# no more uuxqt's will run until the LCK.XQT file is removed.  This is
# done by the /etc/rc startup file.
test -f /usr/spool/uucp/LCK.XQT && touch /usr/spool/uucp/LCK.XQT
--
Geoff Kuenning
Unix Consultant
...!ihnp4!trwrb!desint!geoff
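
[Why the touch helps: the assumption here is that these older uuxqt's
treat LCK.XQT as abandoned once its modification time is old enough,
so refreshing the timestamp every hour keeps a live lock from looking
stale.  A sketch of that kind of mtime test follows, with a
hypothetical cutoff; this is illustrative, not the actual uucp source.]

/* Return 1 if LCK.XQT looks abandoned (mtime older than STALE
 * seconds), 0 if it still looks live, -1 if no lock file exists. */
#include <time.h>
#include <sys/stat.h>

#define LOCKFILE "/usr/spool/uucp/LCK.XQT"
#define STALE	 (90 * 60)	/* assumed 90-minute staleness cutoff */

int
lock_is_stale(void)
{
	struct stat st;

	if (stat(LOCKFILE, &st) < 0)
		return -1;
	return time((time_t *)0) - st.st_mtime > STALE;
}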