earle@smeagol.UUCP (03/04/87)
I was reading Gene Spafford's `Concise History Of Usenet' posting, and got to thinking.  One of the paragraphs states:

>This protocol allows hosts to exchange articles via TCP/IP connections
>rather than using the traditional uucp.  It also permits users to read
>and post news (using a modified version of "rn" or other user agents)
>from machines which cannot or chose not to install the USENET news
>software.  Reading and posting are done using TCP/IP messages to a
>server host which does run the USENET software.  Sites which have many
>workstations like the Sun and Apollo products find this a convenient
>way to allow workstation users to read news without having to store
>articles on each system.

My own personal decision was to place the posting/reading commands in a universally mounted /usr/local/bin ({check,post,read,v,r}news).  I also universally NFS mount /usr/lib/news (allowing access to inews) and /usr/spool/news (so everyone can get at the articles), and finally I have my system set to `ME' in /usr/lib/news/sys (so any machine can post).  I have a network of over 10 Suns, and this scheme seems to work just fine for me.

What I was wondering was whether anyone out there has tried to gather any quantitative statistics comparing doing things this way (i.e., doing NFS mounts everywhere to allow all workstations access to news) versus using NNTP to allow remote news access.  I'm not sure I could make a definitive statement either way, but my first guess would be that using NFS would be slightly quicker, if less `elegant' (i.e., having to do all those NFS mounts).  But then again, *I'm* the only person who gives a damn about that sort of thing anyway; the users just care about reading news from anywhere (plus, they also get to use any news reader rather than being tied to `rrn').

Comments?
--
	Greg Earle		UUCP: sdcrdcf!smeagol!earle; attmail!earle
	JPL			ARPA: elroy!smeagol!earle@csvax.caltech.edu
	AT&T: +1 818 354 4034	earle@jplpub1.jpl.nasa.gov (For the daring)

	Is this an out-take from the ``BRADY BUNCH''?
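For concreteness, here is a rough sketch of the client side of the arrangement Greg describes.  His own host name, smeagol, stands in for the news server purely for illustration, and the mount commands and sys-file line are assumptions about a typical SunOS/B news setup, not his actual configuration.

    # on each Sun client: mount the three file systems from the news server
    mount smeagol:/usr/local/bin   /usr/local/bin    # {check,post,read,v,r}news
    mount smeagol:/usr/lib/news    /usr/lib/news     # inews, sys, active, ...
    mount smeagol:/usr/spool/news  /usr/spool/news   # the articles themselves

    # first line of the shared /usr/lib/news/sys uses `ME' instead of a host
    # name, so the same entry matches whichever machine inews runs on
    # (the subscription field is site-dependent; `all' is only an example):
    #   ME:all::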
lear@aramis.UUCP (03/05/87)
Regarding NFS vs. NNTP, Mel Pleasant at Rutgers has written NFS support into News.  The benefits are considerable in my view, for the following reasons:

(1) In general, support does not have to be added to specific programs such as rn or postnews (although you may run into byte order problems when using vnews and its artfile).  Mounting one file system and making several links seems far more elegant than rewriting all the utilities you have so that they use rrn.

(2) Surveys should be considerably faster (although I do not have figures) using NFS mounts and vnews, because of the artfile.  The reason is that there is much less traffic between machines, since the survey is already compiled.  Furthermore, NFS should be faster than TCP, which is what NNTP uses.

(3) For diskless Suns, the cost of NFS news should be the same as the cost of normal news.

So why use NNTP?  I came to the conclusion that the big plus of NNTP is that you don't need NFS.  This is what it came down to when I wrote an NNTP client program for a -20.

						...eliot
--
[lear@rutgers.rutgers.edu]
[{harvard|pyrnj|seismo|ihnp4}!rutgers!lear]
dave@uwvax.UUCP (03/05/87)
There's one major point *against* just using NFS to access news.

There is no easy way to separate the binaries et al in /usr/lib/news.  So what?  Well, what if your server is a Gould and the workstations are all uvaxen.  The Gould version of inews doesn't work too well on a vax :-)  I guess this is OK if you never want to *post* news.

This problem caused me to go the NNTP route.  We used to use NFS when the server was a (really slow) vax.

Dave Cohrs                                        Proud member of NOTHING
+1 608 262-2196
UW-Madison Computer Sciences Dept.
dave@rsch.wisc.edu  ...!{harvard,ihnp4,seismo,rutgers}!uwvax!dave
jordan@ucbarpa.Berkeley.EDU.UUCP (03/05/87)
NNTP was not designed for a DFS environment.  In fact, it was specifically for a NON-DFS setup.  If you have NFS, you should clearly use it.

/jordan
lear@aramis.RUTGERS.EDU (eliot lear) (03/05/87)
Dave,

The problem of binaries is not that hard to get over.  Make inews a link to a remotely mounted directory containing the appropriate binaries.  For example, on the Gould you could have the following:

	/usr/client/sun/inews
	/usr/client/vax/inews
	/usr/client/pyr/inews
	/usr/local/machine/inews	(which, on the Gould, would be its own inews)

The other systems would mount the appropriate /usr/client/??? as /usr/local/machine.  /usr/lib/news/inews (and programs of that nature) could then point into /usr/local/machine.

Note that this won't necessarily solve byte order or word boundary problems for binary dbs, but at least alignment problems can be fixed with minor hackery...

						...eliot
--
[lear@rutgers.rutgers.edu]
[{harvard|pyrnj|seismo|ihnp4}!rutgers!lear]
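A sketch of the indirection Eliot outlines, spelled out as commands.  The host name `gould' and the Sun/VAX mounts are placeholders; the point is only that the single symlink in the shared /usr/lib/news resolves through whatever /usr/local/machine each client has mounted.

    # on the server, one directory of client binaries per architecture:
    #   /usr/client/sun/inews   /usr/client/vax/inews   /usr/client/pyr/inews
    # plus /usr/local/machine/inews for the server's own use

    # in the shared /usr/lib/news, inews becomes an architecture-independent link
    ln -s /usr/local/machine/inews /usr/lib/news/inews

    # each client then mounts the tree matching its own architecture
    mount gould:/usr/client/sun /usr/local/machine    # on a Sun client
    mount gould:/usr/client/vax /usr/local/machine    # on a microVAX client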
jerry@oliveb.UUCP (03/05/87)
I used to run news on 5 systems.  This was a constant series of headaches.  First there was the disk overhead of storing 5 copies of the news.  Then there was the CPU overhead of transmitting the news to 5 systems, running 5 expires, etc.  Then there was the administrative overhead of having to manage newsgroup cancellation, corrupted files, etc.  And finally there was the problem of alignment: each system diverged from the others in article numbering, number of newsgroups, etc.  This was a pain to the users, because if they were moved from one system to another, their .newsrc became useless.  I worked out a script using "rsh" to send news and handle queueing for down systems, but it was all quite unsuitable for an integrated group of systems.

Now I have the news spool directory, the active file, etc. symbolically linked across the network.  I am not using NFS (we plan to get it) but rather the public domain RFS that was posted to the net.  I am not faced with the problem of differing binaries, and the executables used to be shared as well.  They are not currently, because of a bug in RFS and because of performance considerations.  However, having to compile the executables for each type of machine and distribute them would still be a lot less work than supporting multiple news systems.  It would even be less work than installing different versions using NNTP.  And, of course, the users get to use whatever news reading program they desire without installing the NNTP mods.  I don't want to knock NNTP, as not everyone has NFS, and NNTP can be used for transmitting news as well as reading it.

There is one other modification useful when news is shared.  I have modified the news software so that it uses the server's host name regardless of which system it is actually running on.  Before I did this, posters on "client" systems would get a message ID and "Path:" line showing that system's hostname only.  This caused my neighbors to send the article back to the "server" system.  It also created confusion in the various statistics, as they would show news connections to the "client" systems when no such connection actually existed.  I think there was also a problem with article cancellation.  While these were not serious problems, they were easy to fix and worth doing.

It is beginning to look like enough people are using NFS to share news that support for it should be included in the release.  I kind of brute-force figured out which files needed to be on the clients, which files needed to be accessible to the client, and which were only necessary for the server.  Makefile support for installing the client would have made my work easier.

On a related topic: is anyone using the HIDDENNET option and happy with it?  I got confused trying to figure out exactly what it did, and from comments on the net I suspect I am not alone.

				Jerry Aguirre
				Systems Administration, Olivetti ATC
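For concreteness, a sketch of the client-side links Jerry describes.  The mount point /n/server is hypothetical and the exact set of linked files varies by site; the idea is simply that the spool directory, the active file, and friends all point back at the server's copies.

    # assumes the server's file systems are already remote-mounted (RFS or NFS)
    ln -s /n/server/usr/spool/news        /usr/spool/news
    ln -s /n/server/usr/lib/news/active   /usr/lib/news/active
    ln -s /n/server/usr/lib/news/history  /usr/lib/news/history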
dave@rsch.wisc.edu (Dave Cohrs) (03/06/87)
[ I replied to Eliot in person, but he "Cc:"ed the net, so... ]

In article <309@aramis.RUTGERS.EDU> lear@aramis.RUTGERS.EDU (eliot lear) writes:
>The problem of binaries is not that hard to get over.

True, there's nothing that can't be solved by another level of indirection.  Maybe we'll try something like that eventually.  For now, though, using NNTP takes no additional hacks, which is the perfect thing to use when one has no time for even the most trivial hacking.

----------
[ end of forwarded msg ]

Using NFS, in retrospect, is probably better CPU-wise.  You don't need the nntpd running on the server, and you pay the price for disk access either way.  Of course, you have to convince the system admin (I'm not one) to set up the server the way you want.  This isn't a problem for most people, I'm sure.

The main reason I went with NNTP in this case (we used to use NFS for news when the server and the clients were all vaxen) was that I didn't have the time to set everything up right so it wouldn't break the next time someone did "make install".  Using NNTP took less forethought.  When one must contend with gremlins on a daily basis, it changes one's outlook.

Dave Cohrs                                        Proud member of NOTHING
+1 608 262-2196
UW-Madison Computer Sciences Dept.
dave@rsch.wisc.edu  ...!{harvard,ihnp4,seismo,rutgers}!uwvax!dave
brian@sdcsvax.UCSD.EDU (Brian Kantor) (03/07/87)
In article <17710@ucbvax.BERKELEY.EDU> /jordan wrote:
>NNTP was not designed for a DFS environment.  In fact, it was
>specifically for a NON-DFS setup.  If you have NFS, you should clearly
>use it.
>/jordan

NOT TRUE!  NNTP was written to be independent of environment, and it becomes a clear winner as soon as the client mix includes at least ONE non-DFS system.

Should I also store news on my Sun in a distributed file system when I have a vax serving a dozen or so machines which don't (and probably WON'T) use a DFS?  No, of course not.

Now if all you have is one network with a file system that's common to all the machines on it, NFS might be better.  Maybe.  But it's a shortsighted solution when a rich mix of clients is considered!

	Brian Kantor	UCSD Office of Academic Computing
			Academic Network Operations Group
			UCSD B-028, La Jolla, CA 92093 USA
pwl@fluke.UUCP (03/12/87)
In article <3311@rsch.WISC.EDU>, dave@rsch.wisc.edu (Dave Cohrs) writes:
> There's one major point *against* just using NFS to access news.
>
> There is no easy way to separate the binaries et al in
> /usr/lib/news.  So what?  Well, what if your server is a Gould and
> the workstations are all uvaxen.  The Gould version of inews doesn't
> work too well on a vax :-)  I guess this is OK if you never want to
> *post* news.
>
> This problem caused me to go the NNTP route.  We used to use NFS
> when the server was a (really slow) vax.

This article brings up one of my pet peeves with the layout of the news system.  News has always had the notion that it was reasonable to take the files which describe the news database and lump them in with the binaries of news maintenance programs, shell scripts for housecleaning, etc.  This is tolerable when news is isolated to a particular machine or architecture, but it causes no end of problems when one decides to distribute news using NFS or some other form of remote access.

At our site we have five vaxen running 4.2bsd and almost forty Sun workstations, with six Sun file servers.  About a year ago we decided to make a single Sun system the news server for all the other Sun systems.  NFS made this look like a reasonable goal: we could just ship the news to our news server and every client could NFS-mount the news database.  Unfortunately, the database administration files (active, history, sys, log, errlog) all lived in /usr/local/news, rather than somewhere in /usr/spool/news.  Each file server has its own copy of /usr/local/news, so the database admin files had to somehow be moved out of the binary directory to someplace where they could be accessed as easily as the news database itself.

Now I realize that symbolic links could be used as a bandage to fix this problem.  However, we decided that we really wanted to repair the underlying problem.  Our solution was to move the administration files to /usr/spool/news/.admin, a sub-directory within the news database.  We then modified the news software by adding a new #define called "ADMIN".  Every reference to "LIB/active" was replaced with "ADMIN/active", and likewise for the other database admin files.  ADMIN was defined to be /usr/spool/news/.admin.

The result is that a client machine merely does an NFS mount of /usr/spool/news from the news server.  The news binaries (inews, rnews, etc.) reside on each of the file servers, in the various flavors needed by the clients (MC68010 and MC68020, Sun 2.2 and 3.2).  To read news, the client just fires up the news reader of his choice.  When a client posts an article, it is spooled in /usr/spool/news/.rnews through the services of the SPOOLNEWS option in 2.11 news.  The news server checks this directory once an hour for news articles to be processed.  This scheme has been working successfully for over a year now.

Now for the good news/bad news.  We are in the process of switching our vaxen over to MORE/bsd with NFS support.  I had originally hoped that we could just NFS-mount news from the news server on all the vaxen.  It turns out that reading news in this manner is no problem.  Unfortunately, posting news is a different matter.  When inews gets an article that should be batched for the news server, it first goes out to the history DBM database to see if the article is a duplicate.  The DBM database is in a binary format which is architecture dependent, and the vax dbm reading routines choke on the Sun-generated database.  So much for vax posting!
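A rough sketch of the ADMIN rearrangement described above, as it might look on a 2.11 news server.  The file list, the source change, and the hourly `rnews -U' unspooling entry are assumptions made for illustration, not a copy of Fluke's installation.

    # move the database administration files under the spool directory
    # (the dbm files history.dir and history.pag go along with history)
    mkdir /usr/spool/news/.admin
    for f in active history history.dir history.pag sys log errlog
    do
            mv /usr/local/news/$f /usr/spool/news/.admin/$f
    done

    # in the news source, define ADMIN as /usr/spool/news/.admin, change each
    # LIB/<admin file> reference to ADMIN/<admin file>, and rebuild

    # on the server, a crontab entry to unspool client postings that the
    # SPOOLNEWS option drops into /usr/spool/news/.rnews:
    #   0 * * * * /usr/lib/news/rnews -U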
There seem to be a couple of work-arounds for the vax posting problem.  The one I am seriously considering is a scheme that was suggested recently by Chuq Von Rospach of Sun.  He described their system, which uses NFS mounts to support news reading.  News posting is done by using the Berkeley NNTP to transmit articles to the news server.  This means that all article processing is done at the news server, which is where it belongs anyway.  The nice thing about this approach is that you don't need a special version of the news readers.  Also, installing and maintaining the NNTP code just to do posting looks like a reasonable task.

So there you have it.  I plan to implement the NNTP approach in the coming months.  Please keep the ideas flowing; I have gotten many valuable ideas from these discussions.
--
Paul Lutt  KE7XT                        (206) 356-5059
John Fluke Mfg. Co.                     new: pwl@tc.fluke.COM
P.O. Box C9090                          old: uw-beaver!fluke!pwl
Everett WA 98206                        or:  allegra!fluke!pwl
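Finally, a sketch of what the hybrid Chuq suggested might look like on a client.  It assumes the small replacement inews from the NNTP distribution has been built with the news server's name configured in; the host name `newsserver' and the source path are placeholders only.

    # reading: NFS-mount the article spool from the news server (read-only)
    mount -r newsserver:/usr/spool/news /usr/spool/news

    # posting: install the NNTP client inews in place of the local one, so
    # postnews and the unmodified readers hand articles to the server over NNTP
    cp nntp/inews/inews /usr/lib/news/inews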