[news.software.b] More about C News and barfing.

mcr@Sandelman.OCUnix.On.Ca (Michael Richardson) (02/07/91)

  Recently I read something here to the effect that a (new) C news
site was producing a large number of barfs on an adjacent B news site?
  Henry responded to the effect that this can't be, as C news was bashed
quite extensively against B news.

  Well, one of my downfeeds seems to have the same problem -- to
quote:

Received: by Sandelman.OCUnix.On.Ca (4.1/smail2.5/09-15-89)
	id AA02490; Wed, 6 Feb 91 19:44:26 EST
From: revcan!dave (David Blackwood)
X-Mailer: SCO System V Mail (version 3.2)
To: latour!mcr
Subject: Bad news batches
Date: Wed, 6 Feb 91 17:37:29 EST
Message-Id:  <9102061737.aa05706@revcan.UUCP>
Status: OR

We seem to be getting a significant number of bad news batches which cannot be
uncompressed.  This is causing us a major problem as the SCO news software
hangs on a bad batch but continues to write the last log entry over and over
until the file system fills completely at which time the system is effectively
dead.  Any ideas, suggestions?

Dave

----

  Does this ring ANY bells at all? I'm afraid that the previous
discussion has expired off my system already. (I'm quite behind in my
news.all reading)
 
  My batchparms says:

revcan		100000  20 	batcher compcun viauux

  compcun is:
#! /bin/sh
# Invoke compress, adding silly 2.11-compatible header.
# 12-bit compression is the lowest common denominator among news sites,
# and is often almost as good as the much-more-costly 16-bit compression.

echo "#! cunbatch"
compress -b 12
status=$?
case "$status"
in
	2)
	status=0		# compress stupidity
	;;
esac
exit $status
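  For reference, the receiving end of such a feed has to strip that
"#! cunbatch" header line before decompressing. A minimal sketch of that
step (not SCO's actual unbatcher; cat stands in for "compress -d" and the
/tmp path is made up, so it runs anywhere):

```shell
#! /bin/sh
# Build a toy cunbatch-style batch: header line, then what would normally
# be compressed data (here left as plain text for the sake of the sketch).
printf '#! cunbatch\n#! rnews 6\nhello\n' > /tmp/toybatch

# The unbatching side drops the "#! cunbatch" line and decompresses the
# rest.  cat stands in for "compress -d"; note that compress reads the bit
# width from the stream's own header, so the receiver never needs to know
# whether -b 12 or -b 16 was used on the sending side.
sed 1d /tmp/toybatch | cat
```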

  (So this isn't the B news problem with compressing the whole file.)
  
  My C News is at:
  25-May-1990

  I'm in the process of applying patches. (Which never works right for
me... alas). Oh -- I run SunOS 4.1

  
-- 
   :!mcr!:            |  The postmaster never | - Pay attention only
   Michael Richardson |    resolves twice.    | to _MY_ opinions. -  
 HOME: mcr@sandelman.ocunix.on.ca +   Small Ottawa nodes contact me
 Bell: (613) 237-5629             +    about joining ocunix.on.ca!

clewis@ferret.ocunix.on.ca (Chris Lewis) (02/08/91)

In article <1991Feb7.023059.3082@Sandelman.OCUnix.On.Ca> mcr@Sandelman.OCUnix.On.Ca (Michael Richardson) writes:

>  Recently I read something here to the effect that a (new) C news
>site was producing a large number of barfs on an adjacent B news site?
>  Henry responded to the effect that this can't be, as C news was bashed
>quite extensively against B news.

>  Well, one of my downfeeds seems to have the same problem -- to
>quote:

I'm also one of your downfeeds, and I'm running B 2.11.19, without any
problems whatsoever.

>>From: revcan!dave (David Blackwood)
>>Subject: Bad news batches

>>We seem to be getting a significant number of bad news batches which cannot be
>>uncompressed.  This is causing us a major problem as the SCO news software
                                                           ________
>>hangs on a bad batch but continues to write the last log entry over and over
>>until the file system fills completely at which time the system is effectively
>>dead.  Any ideas, suggestions?

I think the underlined phrase is the likely cause.  Anybody know what version
or patchlevel or how hacked a version of news "SCO news" is?

If SCO news appears to be a B 2.11, you might want to try again with a
prolog of "#! rnews" and do the uux to rnews, not cunbatch.  (cunbatch
became unnecessary in the 2.10.3 -> 2.11 transition; the uncompression is
all done within rnews without going thru a separate unbatcher.)  You could
probably figure out what kind of
news SCO's is, including the patch level, by sending a "sendversion" to it.
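  A sketch of that alternative on the C news side: assuming the fields of
the batchparms entry quoted earlier are batch builder, compressor, and
transmitter, replacing compcun with a pass-through script (hypothetically
called nocomp here) would ship plain "#! rnews" batches:

```shell
#! /bin/sh
# hypothetical batchparms entry: same builder and transmitter, but a
# pass-through in place of compcun (field layout assumed from the entry
# quoted earlier in the thread):
#   revcan	100000	20	batcher	nocomp	viauux

# nocomp itself: batcher already prefixes each article with
# "#! rnews <count>", so an uncompressed batch needs no extra header --
# the "compressor" just copies stdin through
cat > /tmp/nocomp <<'EOF'
#! /bin/sh
exec cat
EOF
chmod +x /tmp/nocomp

# exercise it on a toy one-article batch
printf '#! rnews 6\nhello\n' | /tmp/nocomp
```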
-- 
Chris Lewis, Phone: (613) 832-0541, Internet: clewis@ferret.ocunix.on.ca
UUCP: uunet!mitel!cunews!latour!ecicrl!clewis
Moderator of the Ferret Mailing List (ferret-request@eci386)
Psroff enquiries: psroff-request@eci386, current patchlevel is *7*.

stealth@caen.engin.umich.edu (Mike Pelletier) (02/09/91)

In article <1991Feb7.023059.3082@Sandelman.OCUnix.On.Ca>
	mcr@Sandelman.OCUnix.On.Ca (Michael Richardson) writes:
>
>  Recently I read something here to the effect that a (new) C news
>site was producing a large number of barfs on a adjacent B news site?
>  Henry responded to the effect that this can't be as C news as bashed
>quite extensively against B news.
>
>  Well, one of my downfeeds seems to have the same problem -- to
>quote:
>
>To: latour!mcr
>Subject: Bad news batches
>Date: Wed, 6 Feb 91 17:37:29 EST
>Message-Id:  <9102061737.aa05706@revcan.UUCP>
>
>We seem to be getting a significant number of bad news batches which cannot be
>uncompressed.  This is causing us a major problem as the SCO news software
>hangs on a bad batch but continues to write the last log entry over and over
>until the file system fills completely at which time the system is effectively
>dead.  Any ideas, suggestions?
>
>Dave
>
>----
>
>  Does this ring ANY bells at all? I'm afraid that the previous
>discussion has expired off my system already. (I'm quite behind in my
>news.all reading)

With the mention of uncompression failure, my first thought is to check
to make sure the downstream site is using the same number of bits as the
upstream site in the compression algorithm.

># 12-bit compression is the lowest common denominator among news sites,
># and is often almost as good as the much-more-costly 16-bit compression.
>
>echo "#! cunbatch"
_________________
>compress -b 12
^^^^^^^^^^^^^^^^^

I.e., perhaps your downfeed is trying to uncompress the batches using 16 bits
instead of 12 bits.  What is "a significant number"?

mcr@Sandelman.OCUnix.On.Ca (Michael Richardson) (02/11/91)

In article <1991Feb8.210554.22633@engin.umich.edu> stealth@caen.engin.umich.edu (Mike Pelletier) writes:
>In article <1991Feb7.023059.3082@Sandelman.OCUnix.On.Ca>
>With the mention of uncompression failure, my first thought is to check
>to make sure the downstream site is using the same number of bits as the
>upstream site in the compression algorithm.

  Well, unless SCO really loses in the neuron count (they aren't _that_
bad), they shouldn't have screwed up compress. My compress
man page says:

     The  bits
     parameter specified during compression is encoded within the
     compressed file, along with a magic number  to  ensure  that
     neither  decompression  of  random data nor recompression of
     compressed data is subsequently allowed.
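  That header can be inspected directly: a compress(1) stream begins with
the magic bytes 0x1f 0x9d, and the third byte carries maxbits in its low 5
bits (bit 7 flags block mode). A sketch that fakes such a header rather
than depending on compress being installed (the /tmp path is made up):

```shell
#! /bin/sh
# Fake the 3-byte header of a 12-bit, block-mode compress stream
# (0x8c = block-mode bit set + maxbits of 12).  A real batch from the
# compcun quoted earlier in the thread would start with these same three
# bytes right after the "#! cunbatch" line.
printf '\037\235\214' > /tmp/hdr.Z

# pull out the flags byte (offset 2) and mask off the low 5 bits
flags=$(od -An -tu1 -j2 -N1 /tmp/hdr.Z | tr -d ' ')
echo "maxbits=$((flags & 31))"       # -> maxbits=12
```

Running this against the start of an actual bad batch would show whether
the header survived transmission at all, which seems more likely to be the
trouble than a bit-width mismatch.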


>IE, perhaps your downfeed is trying to uncompress the batches using 16 bits
>instead of 12 bits.  What is "a significant number"?
 
  I'm not sure. Given that a single bad batch trashes the system
(fills the disk), it would be rather hard to get a nice percentage...

  



-- 
   :!mcr!:            |  The postmaster never | - Pay attention only
   Michael Richardson |    resolves twice.    | to _MY_ opinions. -  
 HOME: mcr@sandelman.ocunix.on.ca +   Small Ottawa nodes contact me
 Bell: (613) 237-5629             +    about joining ocunix.on.ca!

root@ledgepc.uucp (System Administrator) (02/13/91)

From article <111@ledgepc.uucp>, by wayne@ledgepc.uucp (Wayne Brown):
> I had a similar problem under an old version of B News (2.8).  Replacing the

Sorry, that _should_ have been B News 2.11, patchlevel 8.  I guess that's what I
get for posting after a long day of COBOL debugging.  Tends to fry the brain
pretty badly, but it does put food on the table. . .  :-)

-- 
Wayne Brown	wayne@ledgepc.uucp
		uunet!{loft386,dsuvax}!ledgepc!wayne
		72447.2645@compuserve.com
Warning:  .signature truncated; maximum trivia level exceeded.