sewilco@datapg.MN.ORG (Scot E Wilcoxon) (08/26/88)
The discussion has wandered off to another domain vs site name battle, so
I changed the Subject line.

My proposal was that an article from site "abc" should be forced to be sent
to site "abc" unless the article was received from site "abc".  This will
stop sites with duplicate names from being invisible to each other.

Fully-qualified domain (FQD) names in the Path list will eliminate the
invisibility.  If rnews/inews were to use a dynamically allocated string,
the capability of fitting hundreds of such FQD names in 32K or 64K should
be sufficient.  The Path field presently is truncated when too long.

The Path field is needed to track where a message has already been and is
used to prevent unnecessary transmission of articles.  The Message-ID
history mechanism can also stop articles from circulating endlessly in the
net, but well-connected sites would be flooded with duplicates.

The existing Path method, including truncation, serves its purpose.  It
stops well-connected sites from being flooded with articles which have
already passed through them, and it stops regional loops.  It also handles
the worst case of an article circulating around a path which avoids
well-connected sites, as well-connected sites will eventually rebroadcast
the article and kill it through the Message-ID mechanism.

But sites with identical ("uucp-style") names are invisible to each other.
My method at least makes the articles which the other sites post visible
to the duplicates.
-- 
Scot E. Wilcoxon  sewilco@DataPg.MN.ORG   {amdahl|hpda}!bungia!datapg!sewilco
Data Progress     UNIX masts & rigging    +1 612-825-2607  uunet!datapg!sewilco
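[Editor's note: a minimal sketch of the two loop-prevention mechanisms the
post describes, the Path check and the Message-ID history.  This is an
illustration only, not the actual rnews/inews code; the function and
variable names are invented.]

```python
# Editorial sketch, NOT actual B News code.  Two checks from the post:
#  1. Path check: never offer an article to a neighbor whose name already
#     appears in the Path: list (it has already handled the article).
#  2. Message-ID history: reject any article whose Message-ID has been
#     seen before, killing the remaining duplicates.

seen_message_ids = set()  # stands in for the on-disk history file


def should_forward(path_header, neighbor):
    """Path check: True if the article should be offered to `neighbor`."""
    # Path: is a !-separated list of sites the article has passed through.
    sites = path_header.split("!")
    return neighbor not in sites


def accept(message_id):
    """History check: True the first time a Message-ID is seen."""
    if message_id in seen_message_ids:
        return False
    seen_message_ids.add(message_id)
    return True
```

Note that the Path check fails in exactly the situation the post is about:
two distinct sites both named "datapg" are indistinguishable in the Path
list, so each silently suppresses the other's articles.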
david@ms.uky.edu (David Herron -- One of the vertebrae) (08/26/88)
In article <1597@datapg.MN.ORG> sewilco@datapg.MN.ORG (Scot E Wilcoxon) writes:
>The discussion has wandered off to another domain vs site name battle, so
>I changed the Subject line.
>
>My proposal was that an article from site "abc" should be forced to be sent
>to site "abc" unless the article was received from site "abc".  This will
>stop sites with duplicate names from being invisible to each other.

This won't work.  The news software doesn't know what site the news
arrived from.  Especially here, where rnews on my uucp machine is a little
script along the lines of:

	cat >/usr/spool/inews/uunews.$$

You may be thinking of log file entries where it looks as if the software
knows that the message has arrived from some particular place.  All it's
doing is looking at the first component of the Path: line.

Assuming for the moment that all the sites with duplicate names knew of
each other ... a decidedly non-trivial task, since one or all of these
hosts *must* be un-registered ... how are you going to handle the
administrative hassle of telling the neighbors of these sites to install
the header-munging stuff for them?  And what lever will you use on the
SAs to get them to actually DO it?

In general, I can't think of a good way to accomplish what you're wanting
to accomplish: discovering duplicate Usenet hosts and getting them to
change in some way.
-- 
<---- David Herron -- The E-Mail guy                   <david@ms.uky.edu>
<---- ska: David le casse\*'    {rutgers,uunet}!ukma!david, david@UKMA.BITNET
<---- Problem: how to get people to call ...; Solution: Completely reconfigure
<---- your mail system then leave for a weeks vacation when 90% done.
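[Editor's note: a sketch of the point about log entries above, that the
"arrived from" site in the news log is merely the first component of the
article's Path: header, not anything the transport layer actually told
the software.  The function name is invented for illustration.]

```python
# Editorial sketch of what the post describes: the news software's
# "received from" log entry is derived purely from the first component
# of the Path: header in the article itself.  A site with a forged or
# duplicated name is therefore indistinguishable from the real one.

def apparent_sender(headers):
    """Return the first Path: component, as the news log would report it."""
    for line in headers.splitlines():
        if line.startswith("Path:"):
            path = line[len("Path:"):].strip()
            return path.split("!", 1)[0]
    return None  # no Path: header present
```

With headers containing "Path: ukma!rutgers!abc!user", this reports the
article as having arrived from "ukma", whichever machine actually
delivered it.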