[news.admin] More duplicates from 19 October

tale@pawl.rpi.edu (David C Lawrence) (11/13/89)

We have duplicates coming in today dated 19 Oct.  The sites in
common in the paths:

...!think!mintaka!oliveb!pyramid!decwrl!...

This has been in six articles I've seen so far.  gem has been feeding
them to us, but in two out of the three cases the gem!think hop has
samsung in the middle.  It seems most likely that the real source of
the problem is somewhere in the above sites.  Anyone have any ideas?
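
One way to find the shared hops mechanically is to intersect the site
lists from the Path: headers of the duplicates.  A rough sketch in
Python (purely illustrative; the sample paths are invented):

    # Intersect the site lists from the Path: headers of suspected
    # duplicates to find the hops they all share.  The endpoint
    # sites below are made up.
    paths = [
        "siteA!think!mintaka!oliveb!pyramid!decwrl!siteX",
        "siteB!think!mintaka!oliveb!pyramid!decwrl!siteY",
        "gem!samsung!think!mintaka!oliveb!pyramid!decwrl!siteZ",
    ]

    common = set(paths[0].split("!"))
    for p in paths[1:]:
        common &= set(p.split("!"))

    # Report in first-path order; each relayer prepends its name,
    # so the leftmost site is the most recent hop.
    print([s for s in paths[0].split("!") if s in common])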

Dave
-- 
 (setq mail '("tale@pawl.rpi.edu" "tale@ai.mit.edu" "tale@rpitsmts.bitnet"))

tale@pawl.rpi.edu (David C Lawrence) (11/13/89)

Wugh.  Just got hit with a whole bunch (> 20) more and I have a
feeling it will continue.  More data points, however, have cut:

...!think!mintaka!oliveb!pyramid!decwrl!...

to

...!think!mintaka!oliveb!... as the sites in common.  I suspect oliveb
is the culprit.  I haven't been able to get a message through to them.

Dave
-- 
 (setq mail '("tale@pawl.rpi.edu" "tale@ai.mit.edu" "tale@rpitsmts.bitnet"))

coolidge@brutus.cs.uiuc.edu (John Coolidge) (11/13/89)

tale@pawl.rpi.edu (David C Lawrence) writes:
>Wugh.  Just got hit with a whole bunch (> 20) more and I have a
>feeling it will continue.  More data points, however, have cut:
>...!think!mintaka!oliveb!pyramid!decwrl!...
>to
>...!think!mintaka!oliveb!... as the sites in common.  I suspect oliveb
>is the culprit.  I haven't been able to get a message through to them.

Yet more data points: we just got one with the same sites in common.

--John

--------------------------------------------------------------------------
John L. Coolidge     Internet:coolidge@cs.uiuc.edu   UUCP:uiucdcs!coolidge
Of course I don't speak for the U of I (or anyone else except myself)
Copyright 1989 John L. Coolidge. Copying allowed if (and only if) attributed.
You may redistribute this article if and only if your recipients may as well.
New NNTP connections always available! Send mail if you're interested.

ambar@bloom-beacon.mit.edu (Jean Marie Diaz) (11/13/89)

   From: tale@pawl.rpi.edu (David C Lawrence)
   Date: 12 Nov 89 22:10:00 GMT

   We have duplicates coming in today dated 19 Oct.  The sites in
   common in the paths:

   ...!think!mintaka!oliveb!pyramid!decwrl!...

I'll bet the problem is oliveb->mintaka.  mintaka has been down for the
past two weeks with disk problems, and I just brought it back from the
dead last night (running C News instead of B News 3.0).

I suspect that mintaka's other feeds
	a) are running the nntpsend script
	   written in Perl, which tends to trash batch files for sites
	   that don't respond, or
	b) don't keep a full month's worth of net news.

Given the duplication problems we've been seeing lately, I'm seriously
considering jacking up the 'retain' time for history lines to 8 weeks
(56 days), at least on the systems I run that have disk space enough for
such foolishness.  Any comments?
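
For scale, a back-of-the-envelope estimate of the disk this costs
(both input numbers below are guesses, not measurements):

    # Rough disk cost of 56-day history-line retention.  Both
    # inputs are guesses about a late-1989 full feed.
    articles_per_day = 15000    # assumed article volume
    bytes_per_line   = 100      # assumed average history-line size
    retain_days      = 56

    megabytes = articles_per_day * bytes_per_line * retain_days / 1e6
    print("history file: roughly %.0f megabytes" % megabytes)   # ~84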

				 AMBAR
ambar@bloom-beacon.mit.edu		   {mit-eddie,uunet}!bloom-beacon!ambar

scs@itivax.iti.org (Steve Simmons) (11/13/89)

ambar@bloom-beacon.mit.edu (Jean Marie Diaz) writes:

>   From: tale@pawl.rpi.edu (David C Lawrence)
>   We have duplicates coming in today dated 19 Oct.  The sites in
>   common in the paths . . . 

>I suspect that . . . other feeds
>	b) don't keep a full month's worth of net news.

We had a similar problem here in MI.  One site retransmitted all kinds
of articles.  The only long-term cure has been for all the neighbors
to go to 30 day histories.  Your proposed 8 weeks is probably excessive.
-- 
Steve Simmons	       scs@iti.org         Industrial Technology Institute
You're not a big name on Usenet until someone puts you in their .sig file.

karl@cheops.cis.ohio-state.edu (Karl Kleinpaste) (11/13/89)

ambar@bloom-beacon.mit.edu writes:
   ... I'm seriously
   considering jacking up the 'retain' time for history lines to 8 weeks
   (56 days), at least on the systems I run that have disk space enough for
   such foolishness.  Any comments?

My paranoia about such stuff is unbounded above; I use 70-day
retention.  I don't get hit with old dups.

--Karl

bin@primate.wisc.edu (Brain in Neutral) (11/15/89)

From article <AMBAR.89Nov12190742@portnoy.mit.edu>, by ambar@bloom-beacon.mit.edu (Jean Marie Diaz):
> I'll bet the problem is oliveb->mintaka.  mintaka has been down for the
> past two weeks with disk problems, and I just brought it back from the
> dead last night (running C News instead of B News 3.0).
> 
> I suspect that mintaka's other feeds
> 	a) are running the nntpsend script
> 	   written in Perl, which tends to trash batch files for sites
> 	   that don't respond, or

This is news to me and I am a bit surprised, since the perl nntpsend
doesn't do anything with the batch files after forking the nntpxmit.
What are the particulars of how files are trashed?

Also, how could trashing a batch file result in transmission of old news?
I would think that that would result, instead, in a loss of news.

Paul DuBois
dubois@primate.wisc.edu

jerry@olivey.olivetti.com (Jerry Aguirre) (11/21/89)

In article <AMBAR.89Nov12190742@portnoy.mit.edu> ambar@bloom-beacon.mit.edu (Jean Marie Diaz) writes:
>   ...!think!mintaka!oliveb!pyramid!decwrl!...
>
>I'll bet the problem is oliveb->mintaka.  mintaka has been down for the
>past two weeks with disk problems, and I just brought it back from the
>dead last night (running C News instead of B News 3.0).

I did notice the queue for mintaka getting a little big while it was
down.  I keep news on oliveb for 28 days so mintaka could be getting
articles from me that were that old.

Isn't rnews supposed to trash articles that were posted before the
history expire limit?  I suggest that sites keep their history longer,
especially non-leaf sites such as "think".  I use a value of 40 days.
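
The check I have in mind amounts to parsing the Date: header and
refusing anything older than the history window.  A rough sketch of
the idea in Python (illustrative only; not the actual rnews code):

    # Refuse any article whose Date: header is older than the
    # history window, so it cannot sneak back in after its history
    # line has expired.  Illustrative only, not rnews itself.
    import email.utils
    import time

    RETAIN_DAYS = 40    # keep this at or below history retention

    def too_old(date_header):
        parsed = email.utils.parsedate_tz(date_header)
        if parsed is None:
            return False                 # unparseable: let it through
        age = time.time() - email.utils.mktime_tz(parsed)
        return age > RETAIN_DAYS * 86400

    print(too_old("19 Oct 89 12:00:00 GMT"))   # True well after October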

				Jerry Aguirre

ambar@bloom-beacon.mit.edu (Jean Marie Diaz) (11/21/89)

[mail failed; hence post]

   Date: Tue, 14 Nov 89 11:40:06 -0600
   From: Brain in Neutral <bin@primate.wisc.edu>

   > I suspect that mintaka's other feeds
   > 	a) are running the nntpsend script
   > 	   written in Perl, which tends to trash batch files for sites
   > 	   that don't respond, or

   This is news to me and I am a bit surprised, since the perl nntpsend
   doesn't do anything with the batch files after forking the nntpxmit.
   What are the particulars of how files are trashed?

Well, I'll explain the different behaviours I've seen.  We have an nntp
feed to a site with a flaky Internet connection.  Pre-perl, the batch
file would grow, and a sitename.nntp file would also be present.  The
next attempt to send after a transmission failure would begin by using
the sitename.nntp file already present, and nothing would be lost.
Post-perl, any sitename.nntp files left by previous (failed)
transmission attempts just get overwritten (I suspect), since the result
is that the queue is always short.  I don't care enough to have bothered
to fix it (or even report it, you might have noticed :-), but someone
else might.
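
In other words, the pre-perl scripts guarded the leftover file before
claiming a new batch.  A sketch of that guard in Python (illustrative
only; the /usr/spool/batch path is an assumption, and this is not the
actual nntpsend code):

    # Retry any leftover from a failed send before claiming a new
    # batch, as the pre-perl scripts did.
    import os

    def next_batch(site, batchdir="/usr/spool/batch"):
        batch = os.path.join(batchdir, site)
        leftover = batch + ".nntp"
        if os.path.exists(leftover):
            return leftover              # finish the failed send first
        if os.path.exists(batch):
            os.rename(batch, leftover)   # claim the current batch
            return leftover
        return None                      # nothing queued for this site

    # Renaming unconditionally here would clobber the leftover and
    # lose the articles queued in it -- consistent with the short
    # queues seen after the perl rewrite.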

   > I'll bet the problem is oliveb->mintaka.  mintaka has been down for the
   > past two weeks with disk problems, and I just brought it back from the
   > dead last night (running C News instead of B News 3.0).

   Also, how could trashing a batch file result in transmission of old news?
   I would think that that would result, instead, in a loss of news.

Precisely.  The problem is oliveb->mintaka; I suspect mintaka's _other_
feeds have trashed (or deliberately shortened) their batch files, and
that's why we're only having trouble with oliveb.

				AMBAR

henry@utzoo.uucp (Henry Spencer) (11/22/89)

In article <51035@oliveb.olivetti.com> jerry@olivey.UUCP (Jerry Aguirre) writes:
>Isn't rnews supposed to trash articles that were posted before the
>history expire limit?  I suggest that sites keep their history longer,
>expecially a non-leaf site such as "think".  I use a value of 40 days.

C News relaynews doesn't do the trash-prehistoric-articles business, partly
because Geoff dislikes the idea of having to parse dates (on grounds of
complexity, program size, and execution time), partly because there is a
real problem with slow links that take a long time to get articles out
to the world for the *first* time.  Our current official opinion is that
keeping more history around is a better solution.
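
Concretely, there are two possible defenses: the history lookup
(cheap, but an old article gets back in once its line expires) and an
added date cutoff (stops prehistoric articles, but must parse dates
and will drop articles that were merely slow the first time).  A
side-by-side sketch in Python (illustrative only, not relaynews):

    # The two duplicate defenses discussed in this thread.
    def accept(msgid, age_days, history, date_cutoff=None):
        if msgid in history:
            return False    # normal rejection: Message-ID in history
        if date_cutoff is not None and age_days > date_cutoff:
            # The prehistoric check: needs date parsing, and also
            # drops articles that were merely slow getting out the
            # first time -- the cost described above.
            return False
        return True

    # With 14-day retention, a 24-day-old dup's history line is gone,
    # so only the date cutoff stops it:
    history = set()                                  # line expired
    print(accept("<old@site>", 24, history))                   # True
    print(accept("<old@site>", 24, history, date_cutoff=14))   # False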

Unofficially and tentatively, we're starting to wonder about this policy,
given the amount of trouble there's been lately.
-- 
A bit of tolerance is worth a  |     Henry Spencer at U of Toronto Zoology
megabyte of flaming.           | uunet!attcan!utzoo!henry henry@zoo.toronto.edu