[news.admin] --- Multiple UUXQTs causing thrashing

fortin@zap.UUCP (Denis Fortin) (10/05/87)

Greetings...

	I have recently installed News 2.11 on my system (an 
iAPX286 machine running Microport System V/AT 2.2L), and have
arranged a full compressed newsfeed with one of my neighbors.  Everything 
works fine except that I have noticed a few times that the unpacking of
the news seemed to take (much) longer than usual, and on top of that,
the response time of the system was really rotten.  A quick "ps -ef"
informed me that no less than three UUXQTs were running on the system,
each one with associated news-unpack, compress, etc.

	Now, my system has only 2 MB of RAM, and compress is fairly
large...  Running three compress at the same time means that the
swap space starts getting overused, and that the system starts thrashing!

	Each time this has happened, I noticed that there *was* a LCK.XQT
file in /usr/spool/uucp --> shouldn't this prevent the appearance of new
UUXQTs?  (the new ones seem to get spawned off by a "uucico -r1" command
from my uudemon.hr script)

	Anyway, what I'd like to know is: (1) is this normal, and 
(2) what can be done to limit the number of concurrent compresses to
a single one???

	Thanks!
-- 
Denis Fortin,				| fortin@zap.UUCP
CAE Electronics Ltd			| rutgers!mit-eddie!musocs!zap!fortin
The opinions expressed above are my own	| fortin%zap.uucp@uunet.uu.net

lindsay@dscatl.UUCP (Lindsay Cleveland) (10/18/87)

In article <173@zap.UUCP>, fortin@zap.UUCP (Denis Fortin) writes:
> 	I have recently installed News 2.11 on my system (an 
> iAPX286 machine running Microport System V/AT 2.2L), and have
> arranged a full compressed newsfeed with one of my neighbors.  Everything 
> works fine except that I have noticed a few times that the unpacking of
> the news seemed to take (much) longer than usual, and on top of that,
> the response time of the system was really rotten.  A quick "ps -ef"
> informed me that no less than three UUXQTs were running on the system,
> each one with associated news-unpack, compress, etc.
> 
I have the identical setup and also do a full feed to two
additional sites.

The trick is to move the processing from UUXQT, which occurs when
the stuff arrives, to something which *you* control.

My technique is to have the following in /usr/bin/rnews (the place
where UUXQT will find it):

  #  This pseudo-"rnews" program copies the standard input
  #  into a queue directory for processing at a later time.
  # 
  # NOTE: the SAVEDIR's higher directory must have
  # permissions/ownership such that this program can do the "mkdir".
  SPOOLDIR=/usr/spool/news
  SAVEDIR=$SPOOLDIR/.rnews
  OWNER=news
  
  if [ ! -d $SPOOLDIR ] 
   then mkdir $SPOOLDIR; chmod 777 $SPOOLDIR; chown $OWNER $SPOOLDIR
  fi
  
  if [ ! -d $SAVEDIR ] 
   then mkdir $SAVEDIR; chmod 777 $SAVEDIR; chown $OWNER $SAVEDIR
  fi
  
  # Make file name from year/month/day/hour/min/sec/PID
  FILENM=`date '+%y%m%d%H%M%S'``expr "00$$" : '.*\(..\)'`
  
  # Store the stdin into the file for processing by the local system.
  cat - > $SAVEDIR/$FILENM 
  chmod 666 $SAVEDIR/$FILENM
  
  #  Force a zero return code
  exit 0
  
  
Using "crontab", you then have "/usr/lib/rnews -U" invoked whenever
you wish (such as in the wee hours when you're not using the
system).  It will wander down the /usr/spool/news/.rnews directory
and process the articles in the order they were received.  By spacing
the running of it judiciously, (or with your own interlocking technique,
or by using "batch", or whatever), you will usually have only one "rnews"
running on your system.
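For the interlocking mentioned above, one minimal sketch (the lock path and the command are assumptions; the useful property is that mkdir is atomic, so it works as a mutex even without "flock" or "batch"):

```shell
# Sketch of a cron-driven guard (paths assumed): mkdir either creates
# the lock directory and succeeds, or fails because a previous pass
# still holds it -- an atomic test-and-set in plain sh.
LOCK=${LOCK:-/tmp/rnews-U.lock}
RNEWS=${RNEWS:-"echo would run /usr/lib/rnews -U here"}

if mkdir "$LOCK" 2>/dev/null; then
    $RNEWS                   # drain the queued articles
    rmdir "$LOCK"            # release for the next cron pass
else
    echo "previous pass still running; skipping this one"
fi
```

Substitute the real "/usr/lib/rnews -U" for the placeholder echo once the paths match your installation.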

There are other techniques, but this one is reasonably
straightforward.  Hope it helps.

Cheers,
  Lindsay

Lindsay Cleveland         Digital Systems Co.   Atlanta, Ga
  gatech!dscatl!lindsay     (404) 497-1902
                         (U.S. Mail:  PO Box 1140, Duluth, GA  3

jane@tolerant.UUCP (Jane Medefesser) (10/19/87)

In article <173@zap.UUCP>, fortin@zap.UUCP (Denis Fortin) writes:
> 	I have recently installed News 2.11 on my system (an 
> iAPX286 machine running Microport System V/AT 2.2L), and have
> arranged a full compressed newsfeed with one of my neighbors.  Everything 
> works fine except that I have noticed a few times that the unpacking of
> the news seemed to take (much) longer than usual, and on top of that,
> the response time of the system was really rotten.  A quick "ps -ef"
> informed me that no less than three UUXQTs were running on the system,
> each one with associated news-unpack, compress, etc.

Yes, I have encountered the SAME THING. Drove me nuts for a month or two. I
brainstormed this with Carl G. at pyramid and we sort of determined that
the LCK.XQT lock times out after about an hour.  

This occurred at our site (Vax 780, 4.2 BSD) whenever I got a "checkgroups"
control message. inews would hang or block or something at the control
message. An hour would pass. UUXQT files would be waiting. The lock would
time out and be removed. Uucp would see the UUXQT's sitting there and start
another uuxqt. *IT* would hang on "checkgroups". An hour would pass......
Then, around 7:00 am I waltz in to find the process table half full, and
the system's load average running around 12.8!!!

Since I don't have loads of time on my hands to debug the kernel, nor have
I got the time to figure out WHY checkgroups is freaking out, I backed up
checkgroups and changed the script to have 1 executable line: "exit 0".
I have never encountered the problem since. Of course, I don't have a real
up to date "newsgroups" file either. (I update it by hand from time to
time), but I feel that it's a small price to pay under the circumstances.

Now, I feel pretty confident that I have a configuration error somewhere
that causes checkgroups to hang. (It's the ONLY control message that causes
this behavior.) If anyone can shed some light on this, I'd appreciate it.

Hope this helps...



-- 
* Not me, baby - I'm too precious * <-- ( it's only rock & roll..)

Jane Medefesser		uucp: {pyramid,mordor,oliveb,sci}!tolerant!jane
Tolerant Systems 	tele: +1 408 433 5588

david@ms.uky.edu (David Herron -- Resident E-mail Hack) (10/20/87)

In article <768@tolerant.UUCP> jane@tolerant.UUCP (Jane Medefesser) writes:
>In article <173@zap.UUCP>, fortin@zap.UUCP (Denis Fortin) writes:
>Yes, I have encountered the SAME THING. Drove me nuts for a month or two. I
>brain stormed this with Carl G. at pyramid and we sort of determined that
>the LCK.XQT lock times out after about an hour.  

Yes ... this is true of many of the "older" UUCPs.  If you've got sources
it's real easy to fix (it's a constant in one of the .h files).  If you
don't have sources ... well ...



Another way around this is to make rnews a one line script like:

	cat >/usr/spool/some-dir/uunews.$$
	exit 0

Then you have another shell script like:

	cd /usr/spool/some-dir
	for i in uunews*; do
		[ -f "$i" ] || continue	# glob matched nothing; skip
		if /usr/lib/news/real-rnews <$i; then
			rm -f $i
		else
			mv $i P.$i
		fi
	done

I've always run this using "flock" to make sure that only one of these
guys is running at a time.  "flock" is more intelligent than the
locking mechanism in uuxqt...

The only thing I can say about hanging-on-checkgroups is that it's
never hung here.  It's usually come up with a really strange message,
but it's never hung.

Oh, that deal in the script about checking the exit status from real-rnews
and mv'ing the file to P.$i really helps a lot ... but then we have
news coming in from many strange paths.  Anyway, I just go look for P.
files from time to time ...
-- 
<---- David Herron,  Local E-Mail Hack,  david@ms.uky.edu, david@ms.uky.csnet
<----                    {rutgers,uunet,cbosgd}!ukma!david, david@UKMA.BITNET
<---- I thought that time was this neat invention that kept everything
<---- from happening at once.  Why doesn't this work in practice?

rick@seismo.CSS.GOV (Rick Adams) (10/21/87)

People keep referring to shell-script hacks to place incoming news
into a spool directory for later processing.

I'd like to point out that the define SPOOLNEWS has been around for 6 months
now and does exactly that. It also does better locking, etc., than you
can probably throw together in a shell script. Just run "rnews -U" every
once in a while from crontab and you're all set.
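For reference, the crontab entry Rick describes might look something like this (the path and the schedule are assumptions; point it wherever your rnews actually lives):

```
# Hypothetical crontab line for the news owner: drain the SPOOLNEWS
# queue at a quarter past every hour.
15 * * * * /usr/lib/rnews -U
```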

--rick

jane@tolerant.UUCP (Jane Medefesser) (10/22/87)

In article <7539@e.ms.uky.edu>, david@ms.uky.edu (David Herron -- Resident E-mail Hack) writes:
> In article <768@tolerant.UUCP> jane@tolerant.UUCP (Jane Medefesser) writes:
> >In article <173@zap.UUCP>, fortin@zap.UUCP (Denis Fortin) writes:
> >Yes, I have encountered the SAME THING. Drove me nuts for a month or two. I
> >brain stormed this with Carl G. at pyramid and we sort of determined that
> >the LCK.XQT lock times out after about an hour.  
> 
> yes ... This is true of many of the "older" UUCP's.  

OLDER? You call BSD 4.2 Unix OLDER??  I thought V7 and rel3 AT&T versions
were "older" - ???




============================================

Jane Medefesser		uucp: {pyramid,mordor,oliveb,sci}!tolerant!jane
Tolerant Systems 	tele: +1 408 433 5588
81 E. Daggett Dr. 
San Jose, CA  95134

csg@pyramid.pyramid.com (Carl S. Gutekunst) (10/23/87)

In article <173@zap.UUCP> fortin@zap.UUCP (Denis Fortin) writes:
>A quick "ps -ef" informed me that no less than three UUXQTs were running on
>the system, each one with associated news-unpack, compress, etc.

Ah yes, the Great Glacier News Flood. :-)

As you surmised, there must be only one uuxqt running at any one time;
otherwise you have multiple daemons trying to process the same queue files.
(HoneyDanBer allows as many as you want, default two, and they don't get
confused.)

Uuxqt uses a lock file (variously named /usr/spool/uucp/XQT.LCK) to prevent
multiple daemons. The problem is that all versions of UUCP except 4.3BSD and
HoneyDanBer use an excessively simple mechanism for determining whether a lock
is dead: the running uuxqt touches the lock file between each job, and any new
daemons that get started override the lock if the c_time hasn't changed within
the last hour. The assumption is that if uuxqt needed more than an hour to run
a job, then it probably dumped core somewhere.

On a small machine (anything smaller than a 68020, apparently) it's not that
unusual for a single uncompress/unbatch/rnews run to take more than an hour.
So the lock gets swept away, and another uuxqt starts running the same job.

4.3BSD and HoneyDanBer read the PID of the locking process out of the lock
file, then use a kill(PID, 0) call to find out if the process is still alive.
(All versions of UUCP write the PID into the lock file.)
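That liveness test is easy to mimic in shell, since "kill -0" delivers no signal and only asks whether the PID exists. A sketch (it assumes the lock file holds an ASCII PID, which not every UUCP writes):

```shell
# check_lock FILE: classify a UUCP-style lock file by probing the
# PID stored in it.  Signal 0 tests for existence without signalling.
check_lock () {
    if [ ! -f "$1" ]; then
        echo "no lock file"
        return
    fi
    pid=`cat "$1"`
    if kill -0 "$pid" 2>/dev/null; then
        echo "lock held by live process $pid"
    else
        echo "stale lock; process $pid is gone"
    fi
}
```

One caveat: kill fails with "permission denied" for live processes you don't own, so run this as the same user as uuxqt (the real C code can tell EPERM from ESRCH and get this right).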

You have the following solutions available:

- Regularly touch /usr/spool/uucp/XQT.LCK from cron. The most portable way to
  do this is:

	chmod 644 /usr/spool/uucp/XQT.LCK 2> /dev/null

  which alters the c_time if the file exists, and nothing else.

- If you have UUCP source, you can increase the time on lock files to some
  humongous number. If you are more ambitious, you can fix ulockf.c to check
  the PID instead; this only works in 4BSD and System V.

- Enable SPOOLNEWS in Netnews src/defs.h, so all incoming news is held in the
  /usr/spool/news/.rnews directory. This allows uuxqt to run very quickly,
  since the time-consuming rnews is deferred. Then add 'rnews -U' to be run by
  cron every hour or so. Or if you don't have any impatient downstream loads,
  then run 'rnews -U' only after working hours, to completely relieve your
  machine of news processing during the day.
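A middle ground between the first two options is to refresh the lock from cron only while a uuxqt is actually alive, so a daemon that really did die still times out on schedule. A sketch (the ps pipeline and the lock path are assumptions; adjust them to your ps flavor):

```shell
# Hypothetical cron job: keep LCK.XQT fresh only while some uuxqt
# is really running; a dead uuxqt still loses its lock after an hour.
LOCK=${LOCK:-/usr/spool/uucp/LCK.XQT}

if ps -e 2>/dev/null | grep uuxqt | grep -v grep >/dev/null; then
    # A live uuxqt exists: bump the lock's timestamps, but never
    # create the file if uuxqt itself hasn't.
    if [ -f "$LOCK" ]; then
        touch "$LOCK"
    fi
fi
```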

<csg>

steve@nuchat.UUCP (Steve Nuchia) (10/27/87)

On my system, one of those silly intel-based boxes, the load
from a few compress -d's and rnews's would bring the thing to
its knees.  Also, when I ran out of disk space rnews would
happily dump stuff into the bit bucket.

To solve these problems I hacked together the following little
program.  I know there is a SPOOLNEWS option, but I haven't
tried it, since I would still have to wrap a disk-space-checker
around it.  This program is run as a daemon.  It nices itself
and always sleeps at least a minute between bouts of work to
let things rest.  I have uuxqt executing a shell script that
copies the incoming news to a hold directory, where this daemon
finds it and tries to despool it.

I also hacked sendbatch to send only one batch per invocation,
and the "sendsome" script just calls sendbatch for each neighbor
with the proper magic options.  The daemon will run sendsome
in any pass that can't do any de-spooling, either because there
are no new batches or because of space problems.  It should fire
off an expire when it runs out of room, but I've got bigger plans
and until then I'll just do it by hand (killing the daemon
and restarting it afterwards).  It suspends outgoing batching
if there is not enough room (a different threshold) on
the uucp spool device.  (My system has a separate partition
for the news spool.)

I also started calling uunet recently, and wanted to implement
a "delayed sendme" feature.  I noticed that the "ihave" processing
in inews was incredibly slow, and cooked up a solution, also
included below.  Basically I hacked control.c to stick incoming
ihave messages into a hold file, named after the machine that
sent the ihave.  The "sendme" program, when run, loads the entire
history database into memory in a reasonably efficient form and
then reads the ihave spool file.  It rejects articles it has
seen in the history file or already in the spool and writes
the novel ones to either "order" or "toonew".  One sets the
"toonew" threshold in the hope that a cheaper neighbor will
supply the article "soon" so you won't have to order it from
the expensive neighbor.  You then replace the old spool file
with "toonew", limiting its growth.

----this is the "rnews" that you let UUXQT find----
umask 0222
cat - >> /usr/spool/news/incoming/in.$$
ls -l /usr/spool/news/incoming/in.$$ >> /usr/lib/news/log
-----

----this is the news de-spooler daemon-----
/*
 *	newsd.c - daemon for unbundling news and doing other
 *	essential stuff, like running expire.  Obviates certain
 *	cron processing and allows the slow components of news
 *	to run niced without playing uucp games.
 *
 *		Steve Nuchia
 *		27 Sept 1987
 */

#include <sys/types.h>
#include <sys/stat.h>
#include <ustat.h>
#include <sys/dir.h>
#include <stdio.h>

#define INCOMING	"/usr/spool/news/incoming"
#define RNEWS		"/usr/lbin/rnews"
#define IHAVE		"/files/news/.ihave"
#define OUTGOING	"/usr/spool/batch"
#define SENDSOME	"/usr/lib/news/sendsome"
#define NEWSDIR		"/files/news"
#define SPOOLDIR	"/usr/spool"

main()
{
	int	incoming;
	struct direct art;
	char	aname[DIRSIZ+1], fullname[80], best[80], command[160];
	long	btime;
	int	i;
	struct stat sbuf;
	struct ustat usbuf;

    nice ( 20 );
    if ( (incoming = open ( INCOMING, 0 )) < 0 ) perror ( INCOMING );

    while ( 1 )
    {
	/* see how the space situation looks */
	stat ( NEWSDIR, &sbuf );
	ustat ( sbuf.st_dev, &usbuf );
	if ( usbuf.f_tfree > 1000 && usbuf.f_tinode > 500 )
	{
	    /* look around in INCOMING */
	    sleep ( 60 );
	    lseek ( incoming, 2L * sizeof(struct direct), 0 );
	    best[0] = 0;

	    while ( read ( incoming, &art, sizeof(struct direct) ) > 0 )
	    {
		if ( ! art.d_ino ) continue;
		for ( i = 0; i < DIRSIZ; i++ ) aname[i] = art.d_name[i];
		aname[i] = 0;
		sprintf ( fullname, "%s/%s", INCOMING, aname );
		stat ( fullname, &sbuf );
		if ( ! best[0] || btime > sbuf.st_mtime )
		{
		    btime = sbuf.st_mtime;
		    strcpy ( best, fullname );
		}
	    }
	    /* if there is anything, take care of oldest */
	    if ( best[0] )
	    {
		sprintf ( command, "%s < %s", RNEWS, best );
		if ( ! system ( command ) ) unlink ( best );
		continue;
	    }
	}
	else
	{
	    printf ( "space problem in NEWSDIR\n" );
	    fflush ( stdout );
	    sleep ( 600 );
	}
	/* otherwise we are free to do housekeeping */
	stat ( SPOOLDIR, &sbuf );
	ustat ( sbuf.st_dev, &usbuf );
	if ( usbuf.f_tfree > 1000 && usbuf.f_tinode > 500 )
	{
	    /* for now, just fire sendbatch.all and sleep */
	    system ( SENDSOME );
	}
	else
	{
	    printf ( "space problem in SPOOLDIR\n" );
	    fflush ( stdout );
	    sleep ( 600 );
	}
	sleep ( 60 );
    }
}
----

----this is the ihave line processor----
#define reg	register
#include <stdio.h>

main ( argc, argv )
	char	*argv[];
{
	int	fn, i;
	char	line[256];
	char	art[128];
	FILE	*from, *hf, *toonew, *order;
	int	am, ad;
	int	m, d;
    
    d = atoi ( argv[2] );
    m = atoi ( argv[3] );

    from = fopen ( argv[1], "r" );
    if ( ! from ) { perror ( argv[1] ); exit ( 1 ); }
    toonew = fopen ( "toonew", "w" );
    order = fopen ( "order", "w" );

    fprintf ( order, "Newsgroups: to.%s.ctl\n", argv[1] );
    fprintf ( order, "Subject: sendme nuchat\n" );
    fprintf ( order, "Control: sendme nuchat\n\n" );

    for ( fn = 0; fn < 10; fn++ )
    {
	sprintf ( line, "/usr/lib/news/history.d/%d", fn );
	hf = fopen ( line, "r" );
	if ( ! hf ) continue;	/* tolerate a missing history segment */
	while ( fgets ( line, 256, hf ) )
	{
	    if ( sscanf ( line, "<%[^>\n ]>", art ) == 1 ) save ( art );
	}
	fclose ( hf );
    }
    while ( fgets ( line, 256, from ) )
    {
	if ( sscanf ( line, "<%[^>\n ]> %d/%d", art, &am, &ad ) == 3 )
	{
	    if ( save ( art ) )
	    {
		if ( am < m || ad <= d )
		    fprintf ( order, "<%s>\n", art );
		else
		    fprintf ( toonew, "%s", line );
	    }
	}
	else
	{
	    fprintf ( stderr, "can't grok '%s'\n", line );
	}
    }
    report();
}

typedef	struct	_saved	saved;
struct	_saved
{
	saved	*sp;
	char	ss[4];
};

extern	char	*malloc(), *ssave();

char	*my_alloc ( n )
{
static	char	*chunk;
static	int	size;

    if ( size < n )
        if ( !(chunk = malloc ( size = 32000 )) ) return ( (char *) 0 );
    size -= n;
    chunk += n;
    return ( chunk - n );
}

#define	NHASH	(16000/sizeof(char *))

typedef	struct	_smsg	smsg;
struct	_smsg
{
	smsg	*nx;
	char	*seq;
	char	*site;
};

smsg	*sites[NHASH];

save ( name )
	char	*name;
{
	char	buf[128];
reg	smsg	*p;
reg	int	i = 0;
	char	*site, *seq;

    while ( *name && *name != '@' ) buf[i++] = *name++;
    buf[i] = 0;
    if ( *name ) name++;
    site = ssave ( name );
    seq = ssave ( buf );
    i = ((long)site >> 16) ^ ((long)seq >> 16) ^ (long)site ^ (long)seq;
    i &= 0x7fff;
    i %= NHASH;
    for ( p = sites[i]; p; p = p->nx )
	if ( p->seq == seq && p->site == site ) return ( 0 );
    if ( !(p = (smsg *) my_alloc ( sizeof(smsg) )) ) abort();
    p->nx = sites[i];
    p->seq = seq;
    p->site = site;
    sites[i] = p;
    return ( 1 );
}

static	saved	*savedp[NHASH];

report()
{
	int	i, count;
reg	saved	*p;
reg	long	strs = 0, total = 0;
    
    for ( i = 0; i < NHASH; i++ )
    {
	count = 0;
	for ( p = savedp[i]; p; p = p->sp ) { strs++; count++; }
	total += count*((long)count/2);
    }
    printf ( "strs = %ld, NHASH = %d, total = %ld\n", strs, NHASH, total );
}

char	*ssave ( s )
reg	char	*s;
{
reg	saved	*t;
	int	h;
reg	char	*hs;

    if ( !s || !*s ) return ( (char *) 0 );
    for ( hs = s, h = 0; *hs; h = ((h << 1) ^ (h >> 3) ^ *hs++) & 0x7fff );
    h %= NHASH;
    for ( t = savedp[h]; t; t = t->sp )
	if ( t->ss[0] == *s && !strcmp ( t->ss, s ) ) return ( t->ss );
    t = (saved *) my_alloc ( sizeof(saved) + strlen(s) - 3 );
    if ( !t ) abort();
    t->sp = savedp[h];
    strcpy ( t->ss, s );
    savedp[h] = t;
    return ( t->ss );
}
----

----this is the replacement body for ihave in control.c----
#define SPOOLIHAVE "/usr/spool/news/ihave/%s"
#ifdef SPOOLIHAVE
c_ihave(argc, argv)
register char **	argv;
{
	long	t;
	char	tstamp[40], lineout[200];
	int	i, ihv_file;
	struct	tm *tm;

    (void) time(&t);
    tm = localtime(&t);
#ifdef USG
    sprintf(tstamp,"%2.2d/%2.2d/%d %2.2d:%2.2d",
#else /* !USG */
    sprintf(tstamp,"%02d/%02d/%d %02d:%02d\tcancelled",
#endif /* !USG */
    tm->tm_mon+1, tm->tm_mday, tm->tm_year, tm->tm_hour, tm->tm_min);

    if (argc < 2)
	error("ihave: Too few arguments.");
    if (strncmp(FULLSYSNAME, argv[argc - 1], SNLN) == 0)
	return 0;

    sprintf ( lineout, SPOOLIHAVE, argv[argc - 1] );
    if ( (ihv_file = open ( lineout, 1 )) < 0 )
	ihv_file = creat ( lineout, 0600 );
    lseek ( ihv_file, 0L, 2 );

    if (argc > 2)
    {
	for (i = 1; i < (argc - 1); ++i)
	{
	    sprintf ( lineout, "%s %s\n", argv[i], tstamp );
	    write ( ihv_file, lineout, strlen(lineout) );
	}
    }
    else
    {
	char	myid[256];

	while ( fgets(myid, sizeof myid, infp) )
	{
	    for ( i = 0; myid[i] && myid[i] != ' ' && myid[i] != '\n'; i++ );
	    myid[i] = '\0';
	    sprintf ( lineout, "%s %s\n", myid, tstamp );
	    write ( ihv_file, lineout, strlen(lineout) );
	}
    }

    close ( ihv_file );
    return 0;
}
#else
<old c_ihave, ifdef'ed out>
#endif
----
-- 
Steve Nuchia	    | [...] but the machine would probably be allowed no mercy.
uunet!nuchat!steve  | In other words then, if a machine is expected to be
(713) 334 6720	    | infallible, it cannot be intelligent.  - Alan Turing, 1947

fortin@zap.UUCP (Denis Fortin) (11/19/87)

In article <8800@pyramid.pyramid.com> csg@pyramid.UUCP (Carl S. Gutekunst) writes:
>In article <173@zap.UUCP> fortin@zap.UUCP (Denis Fortin) writes:
>>A quick "ps -ef" informed me that no less than three UUXQTs were running on
>>the system, each one with associated news-unpack, compress, etc.
>
>Ah yes, the Great Glacier News Flood. :-)
>
[...]
>On a small machine (anything smaller than a 68020, apparently) it's not that
>unusual for a single uncompress/unbatch/rnews run to take more than an hour.
>So the lock gets swept away, and another uuxqt starts running the same job.
>
>You have the following solutions available:
>
>- Regularly touch /usr/spool/uucp/XQT.LCK from cron. 
[...]
><csg>

Greetings...

	First of all, I'd like to thank all of the news.admin readers
who responded to my inquiry about multiple UUXQTs running on my System V
machine (including Dave@arnold, rbl@nitrex, lyndon@ncc, tanner@ki4pv,
jerry@oliveb.atc.olivetti.com, steve@mahendo.jpl.nasa.gov, root@investor,
lindsay@dscatl, jane@tolerant, david@ms.uky.edu, etc).

	Basically, the consensus is that with "old" UUCPs (including
System V Rel.2), the "LCK.XQT" file in /usr/spool/uucp is considered
"old" (and therefore ignored) after about 60 minutes (or is that 30?).

	Anyway, various solutions were proposed, including:

   * touching LCK.XQT every once in a while,
   * obtaining HDB UUCP  (maybe when I get info on the Microport System V/AT
     2.2 to 2.3 upgrade, there will be some information in there on obtaining
     HDB UUCP?!?)
   * using the "SPOOLNEWS" feature.

	Now, having very little time to play with this, I chose the easy 
solution and simply added a

		27  *  *  *  *  touch -c /usr/spool/uucp/LCK.XQT

	in the crontab entry for user uucp.  I know that this might
cause problems if a UUXQT *really* died, but so far this hasn't happened.
(Somebody *did* send me a nice script that checks whether or not UUXQT
is still executing before doing the "touch"...  It's on my list of things 
to install!)

	So far, this has worked *remarkably well* and my system unpacks
news very happily (now if I could just find the time to recode parts of
"compress" in assembler in order to speed it up a bit, my machine might
not be unpacking stuff from 0:00 to 13:00!!!).

	Thanks again...
-- 
Denis Fortin,				| fortin@zap.UUCP
CAE Electronics Ltd			| rutgers!mit-eddie!musocs!zap!fortin
The opinions expressed above are my own	| fortin%zap.uucp@uunet.uu.net