earle@mahendo.Jpl.Nasa.Gov (Greg Earle) (04/05/88)
Has anyone thought about implementing a Netnews shadow capability?

In an informal discussion recently, I broached the notion that if one had a
large organizational net (say, for example, 4-5 disk servers each with their
own subnets of diskless machines and PCs, et al., along with perhaps some
other diskful machines on the backbone), it would be nice to set up one
machine as the News server for all of these.  Being basically fascistic, we
install NNTP 1.5 and rrn on all the machines, and away we go.  Of course, if
the news server goes down, your friendly local system administrator has the
screaming multitudes chanting, `Off with his head!'  (-:

Now we can't have that, so an obvious solution would be to have a `shadow'
server that talks NNTP with the main server and keeps all of the same
articles as the main server on-line, in case of a main server crash.  The
problem is that you DON'T want to have separate .newsrc files - you just
want to be able to tell people that if `server' goes down, just say
`setenv NNTPSERVER shadow-server' and all will be well.

This immediately presents the main roadblock - namely, how do you keep the
two machines perfectly in sync so that each stores the exact same articles
in the exact same numbered file names in its own news spool hierarchy?
Other than adding some sort of extra `article number' tag parameter to NNTP
(which would be sent along with each article to the NNTP peer, to be used
or discarded as necessary), at first crack we all shook our heads and
agreed that, yes, it was Not A Trivial Thing To Do.  And we left it at
that ...

I'm sure someone somewhere has given this some thought before.  Any bright
ideas, anyone?
--
Greg Earle			earle@mahendo.JPL.NASA.GOV
Indep. Sun consultant		earle%mahendo@jpl-elroy.ARPA	[aka:]
(Gainfully Unemployed)		earle%mahendo@elroy.JPL.NASA.GOV
Lake View Terrace, CA		...!{cit-vax,ames}!elroy!jplgodo!mahendo!earle
dave@spool.cs.wisc.edu (Dave Cohrs) (04/06/88)
In article <238@mahendo.Jpl.Nasa.Gov> earle@mahendo.JPL.NASA.GOV (Greg Earle) writes:
>Has anyone thought about implementing a Netnews shadow capability?
>
>This immediately presents the main roadblock - namely,
>how do you keep the two machines perfectly in sync so that each stores the
>exact same articles in the exact same numbered file names in its own news
>spool hierarchy?

How about setting up a line in your sys file:

	backupserver:all:F:/usr/spool/news/batch/backupstuff

(or whatever is appropriate for your "ME" line).  Then, occasionally, run
something like this on the main server:

	#!/bin/csh -f
	set LIBDIR=/usr/spool/news/lib
	set BACKUP=backup.hostname.domain
	cd /usr/spool/news/batch
	mv backupstuff backupstuff.work
	rdist -c `cat backupstuff.work` ${LIBDIR}/{active,history*} ${BACKUP}:
	exit 0
	# end

(with other stuff added to make it reliable).  This, of course, assumes
that the main and backup server are both of the same architecture (or you
can't copy the history files; anyone have a byte-order independent DBM
package?).

The machines won't be perfectly in sync, but they should be close; it
depends on how often you run the shell script.

dave

Dave Cohrs
+1 608 262-6617
UW-Madison Computer Sciences Department
dave@cs.wisc.edu  ...!{harvard,ihnp4,rutgers,ucbvax}!uwvax!dave
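[Editor's note: the "other stuff added to make it reliable" might look
something like the sketch below.  None of this is from the original post -
the paths and the `push_batch` helper are illustrative assumptions.  The
idea is simply to move the batch file aside before transferring, and to
keep the work file around if the transfer fails, so that the next run
retries the whole list and no article names are lost while the backup host
is down.]

```shell
#!/bin/sh
# pushbackup - a hedged sketch of a reliable batch push (hypothetical;
# paths and helper names are assumptions, not from the original post).

BATCH=/usr/spool/news/batch/backupstuff   # queued by the sys-file "F" flag
WORK=$BATCH.work

# push_batch TRANSFER-CMD: TRANSFER-CMD is invoked with the work file as
# its single argument and must exit non-zero on failure.
push_batch() {
    if [ -f "$WORK" ]; then
        # a previous transfer failed: fold any newly queued names into
        # the leftover work file and retry the whole list
        [ -f "$BATCH" ] && cat "$BATCH" >> "$WORK" && rm -f "$BATCH"
    elif [ -f "$BATCH" ]; then
        mv "$BATCH" "$WORK"
    else
        return 0                          # nothing queued, nothing to do
    fi
    if "$1" "$WORK"; then
        rm -f "$WORK"                     # delivered: clear the queue
    fi
}
```

In Dave's script, the rdist invocation would sit inside the transfer
command handed to `push_batch`, and a lock file around the whole run would
keep overlapping cron invocations from racing each other.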
matt@oddjob.UChicago.EDU (My Name Here) (04/06/88)
earle@mahendo.JPL.NASA.GOV (Greg Earle) writes:
) Has anyone thought about implementing a Netnews shadow capability?
) ...
) how do you keep the two machines perfectly in sync so that each stores the
) exact same articles in the exact same numbered file names in its own news
) spool hierarchy?
I think an hourly "rdist" (See 4.3 BSD or the more recent SunOS
releases) of the spool and LIB directories should take care of it
quite simply.
Matt
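[Editor's note: an hourly rdist of that sort might be driven by a Distfile
along these lines - a sketch only; the host name `shadow` and the
directory paths are assumptions, and the paths should match whatever your
news installation actually uses.]

```
HOSTS = ( shadow )
FILES = ( /usr/spool/news /usr/spool/news/lib )

${FILES} -> ${HOSTS}
	install ;
```

Run from cron as `rdist -f Distfile`; rdist compares each file against the
remote copy and transfers only those that differ.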
jerry@oliveb.olivetti.com (Jerry Aguirre) (04/07/88)
In article <5518@spool.cs.wisc.edu> dave@spool.cs.wisc.edu (Dave Cohrs) writes:
>In article <238@mahendo.Jpl.Nasa.Gov> earle@mahendo.JPL.NASA.GOV (Greg Earle) writes:
>>Has anyone thought about implementing a Netnews shadow capability?
>How about setting up a line in your sys file:
>
>backupserver:all:F:/usr/spool/news/batch/backupstuff

This won't work.  The "F" flag will save only the name of the first link
to each article.  If the article is cross-posted then the other links will
not get created on the shadow machine.

An obvious solution is to add a new flag that works like "F" but lists all
the links.  You could then use that output to periodically update the
secondary server using "tar" or some other transfer method that would
maintain the link structure.

Another idea is to process the history file to create the file names to
transfer.  It includes the names of all the links and is in the order in
which they arrived.

As someone else mentioned, "rdist" could be used.  I think that the
overhead of scanning the entire news spool directory, file by file, would
be prohibitive.  It might be OK for a nightly check but you couldn't use
it every hour.

The same is probably true of running "find" on the news spool directory,
though it would run faster than "rdist" in that it would only have to look
for recent files, not compare each one to the server.

					Jerry Aguirre
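[Editor's note: the history-file idea above might be sketched as follows.
This is an assumption, not code from the thread: it presumes the common
B News history layout of one line per article -
message-ID, a tab, the receipt date, a tab, then a space-separated list of
group.name/number entries (empty for expired articles).  Check your own
history file before relying on this format.]

```shell
#!/bin/sh
# history_links - read history lines on stdin, emit one spool-relative
# article pathname per line, including every cross-post link.
history_links() {
    awk -F'	' 'NF >= 3 && $3 != "" {
        n = split($3, links, " ")
        for (i = 1; i <= n; i++) {
            gsub(/\./, "/", links[i])  # news.admin/456 -> news/admin/456
            print links[i]
        }
    }'
}
```

The output (restricted to lines added since the last run) could then be
fed to tar from the top of the news spool, which would recreate the links
as separate files on the shadow - acceptable, since the shadow only has to
serve the articles, not conserve disk.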