[comp.mail.uucp] A bigger problem

gmp@rayssd.ray.com (Gregory M. Paris) (09/06/88)

In article <4753@b-tech.UUCP> zeeff@b-tech.UUCP (Jon Zeeff) writes:
> I don't know.  How would any site be handled if it advertised
> many low cost links and then screwed up the mail it received?

This brings up what I think is a much bigger problem than the active
rerouter problem that's gotten so much airplay in this group.  What
do you do about sites that claim low-cost links that aren't
low-cost?  These erroneous links can have a dramatic impact on the
timeliness and reliability of *incoming* mail.  The problem stems
from the fact that altering your map data causes a change only in
the way(s) mail *leaves* your site.  A mail administrator cannot
control which way(s) the mail comes in.

Readers are thinking, "coordinate that with your neighbors," but it's
not just your immediate neighbors that contribute to the problem.
Often, the problem can be two or three hops away, and admins at these
sites sometimes don't know what they're doing, or don't put any
priority on making somebody else's mail work.  No names mentioned,
but just recently we had a month's worth of mail for this domain jam
up at a site two hops away for exactly the reason described above
(some two hundred letters if I'm remembering correctly).  Credit the
admins there for correcting their map when I brought the problem to
their attention, but that was long after the problem became severe.
Another site advertised a DIRECT connection to us when there was no
connection at all!  Even worse, pathalias on uunet apparently
calculated that the best way to send mail to us was through this
nonexistent link!  (A way to extort new uucp connections if you're
short on them.)

If you want to discuss a real problem, this is it.

-- 
Greg Paris                    <gmp@rayssd.ray.com>
{decuac,gatech,necntc,sun,uiucdcs,ukma}!rayssd!gmp

                    NO KILL I

lear@NET.BIO.NET (Eliot Lear) (09/06/88)

Greg:

What you are saying is that there are two sources of the same
information, and they tend to lose sync with each other quite often.
The first source is what is actually the case, and the second is what
is in the map entries.  Perhaps the best way to irradicate this
dichotomy is to make a program that would generate pathalias entries
for each system, based on UUCP connections listed in an L.sys file,
and maybe a little extra information.  Stick that program in cron and
let it run once a month (once a week if you're an active site, once a
day if you happen to be uunet ;-).
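
A minimal sketch of what such a generator might look like, assuming a
conventional L.sys with the remote system name in the first field and the
calling schedule in the second; the file location and the schedule-to-cost
mapping are illustrative assumptions, not a standard:

#!/bin/sh
# gen-map-entry.sh -- rough sketch: emit a one-line pathalias entry for
# this host from the systems named in L.sys.  The field layout and the
# schedule-to-cost mapping below are assumptions; adjust them to taste.

printf '%s\t' "`uuname -l`"

awk '
    /^#/ || NF == 0 { next }            # skip comments and blank lines
    {
        cost = "DAILY"                  # fallback for unrecognized schedules
        if ($2 ~ /Any/)   cost = "DEMAND"
        if ($2 ~ /Never/) cost = "POLLED"
        links[$1] = cost                # last entry for a system wins
    }
    END {
        sep = ""
        for (s in links) { printf "%s%s(%s)", sep, s, links[s]; sep = ", " }
        printf "\n"
    }
' /usr/lib/uucp/L.sys

The output would still want a human eye before being mailed off to the
map coordinators, which is rather the point of the objections that follow.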

Eliot
-- 
Eliot Lear
[lear@net.bio.net]

vixie@decwrl.dec.com (Paul Vixie) (09/06/88)

Sigh.  This just goes on and on.

lear@NET.BIO.NET (Eliot Lear) writes:
# What you are saying is that there are two sources of the same
# information, and they tend to lose sync with each other quite often.
# The first source is what is actually the case, and the second is what
# is in the map entries.

Eliot,

The map is not the territory.

Yes, people could set up their systems to send in map data every week or day
or hour.  I don't wish to be required to do two things, though:

1. Publish all my connections.  Some of my connections are for private use
  only, and I publish them to a handful of sites only.  Or I don't publish
  them at all, and I type the routes in by hand whenever I use them.

2. Tell the UUCP Project about it whenever I add something I _do_ intend to
  publish.  I will tell them eventually, like in about three weeks or so
  after I've added and deleted and tuned and munged and stabilized things.

I am coming to believe that you are simply unwilling to even begin to try to
understand why someone wouldn't want to have a constant 1:1 correspondence
between their connectivity and their public map data.

Even if it's desired and attempted, there are reasons why it will never be
the ideal that would be required to make active rerouting a safe practice.

And that's what I mean when I say: The Map Is Not The Territory.  You cannot
make it so, for reasons of psychology, of distributed self-interest, and for
the simple lack of any intrinsic reason why the Territory should conform to
your Map.

I agree that the maps can be made much better, and that more updates should
be sent in and that the maps should reflect the territory as accurately as
possible.  But: only within the limits of the desires of the local admins.

That means you can't ask people to publish every connection they have.  And
that, in turn, means that you can't (short of rudeness) reroute their mail
according to your (definitionally) limited view of the Territory.

Are we on the same wavelength yet?  Earth to Eliot, come in, Eliot...
-- 
Paul Vixie
Work:    vixie@decwrl.dec.com    decwrl!vixie    +1 415 853 6600
Play:    paul@vixie.sf.ca.us     vixie!paul      +1 415 864 7013

gmp@rayssd.ray.com (Gregory M. Paris) (09/06/88)

In <Sep.5.16.53.29.1988.11588@NET.BIO.NET> lear@NET.BIO.NET (Eliot Lear) writes:
> What you are saying is that there are two sources of the same
> information, and they tend to lose sync with each other quite often.

Or were never in sync to begin with.

> The first source is what is actually the case, and the second is what
> is in the map entries.  Perhaps the best way to eradicate this
> dichotomy is to write a program that generates pathalias entries
> for each system, based on the UUCP connections listed in its L.sys file,

Yipes!  Judging by how many sites run arbitron, I don't think this idea
has a chance in hell of flying.  Besides, it can't work properly.
For instance, this site calls several others that never call us.
I want our mail to come in through some of those sites.  Using a
program like the one you suggest, their cost to us would be POLLED
and mail would never come to us through them.  Not a win.

I was thinking more along the lines of changing pathalias or the way
the maps are generated and/or verified.  Some ideas:

1.  The cost names are all wrong.  People think that LOCAL means
any site that's close, even if you never call them.  This thinking
is often incorrect, but inspired by the name.  The same goes for
FAST.  Without looking up the numbers, which is faster, HOURLY+FAST
or DEMAND?  Which should be faster?  And are there really any sites
that call each other WEEKLY?

2.  I want to be able to tell the rest of the world that site
xxxxxx is wrong about the cost of calling my site and that
the cost is *higher* than what xxxxxx is advertising.

3.  Maybe map coordinators should check with both sides of
each link before publishing map entries.  New links, at
least, should be checked out before being published.  New
sites should have all links checked before being published.

4.  No map data for a site should be published unless the postmaster
alias for the site is proved to get human attention.  Many sites
seem to ignore all mail to postmaster.  (I found about five such
sites last year when I was getting bombarded by 100 letters a day
from an insane mailer and couldn't find a contact to stop it.)
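
The probe half of point 4 could at least be automated.  A rough sketch,
assuming a file of candidate site names, a mailx-style mail command that
takes -s, and a mailer that can route site!postmaster (all assumptions
about one particular setup, not an established procedure):

#!/bin/sh
# probe-postmasters.sh -- rough sketch: ask for a human reply from the
# postmaster at each site named in new-sites.list (one site per line).
# The file name, the -s flag, and the bang-path addressing are assumptions.

while read site
do
        echo "Please have a human reply so we know the postmaster alias at $site is read." |
        mail -s "UUCP map verification for $site" "${site}!postmaster"
done < new-sites.list

Sites that never answer would be the ones to hold out of the published maps.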


-- 
Greg Paris                    <gmp@rayssd.ray.com>
{decuac,gatech,necntc,sun,uiucdcs,ukma}!rayssd!gmp

                    NO KILL I

wisner@killer.DALLAS.TX.US (Bill Wisner) (09/06/88)

There already is such a program, or rather, script. It was written by
Erik Fair and is in volume 7 of the comp.sources.unix archives as
"uucp+nuz.tulz". There may very well be a newer version by now.

chip@ateng.uucp (Chip Salzenberg) (09/06/88)

Oh, this is rich.

According to gmp@rayssd.ray.com (Gregory M. Paris):
>This brings up what I think is a much bigger problem than the active
>rerouter problem that's gotten so much airplay in this group.
>... sites that claim low-cost links that aren't low-cost ...

Yes, this can be a problem.  But wait!  He proceeds to write:

>Often, the problem can be two or three hops away, and admins at these
>sites sometimes don't know what they're doing, or don't put any
>priority on making somebody else's mail work.

So even the proponents of active routing must admit:

    Sysadmins don't always know what they're doing.

Let's expand that sentence:

    Sysadmins THAT WRITE MAP ENTRIES don't always know what they're doing.

So even if the world maps were propagated instantaneously, they would
*still* be wrong.

Can we all agree on this point?  It does seem so obvious.
-- 
Chip Salzenberg                <chip@ateng.uu.net> or <uunet!ateng!chip>
A T Engineering                My employer may or may not agree with me.
	  The urgent leaves no time for the important.

brisco@pilot.njin.net (Thomas Paul Brisco) (09/07/88)

In article <3728@rayssd.ray.com> gmp@rayssd.ray.com (Gregory M. Paris) writes:

> 
> ....
> For instance, this site calls several others that never call us.
> I want our mail to come in through some of those sites.  Using a
> program like the one you suggest, their cost to us would be POLLED
> and mail would never come to us through them.  Not a win.
> 
	Extracted from site njin:

njin	[others ...], barfy(POLLED), [others ...]

	Extracted from site rutgers:

rutgers [others ...], barfy(POLLED), [others ...]

	Extracted from site barfy:

barfy	rutgers(DAILY), [others ...], njin(DAILY)

	What is all that stuff on my disk then? If you want
routing through them, then you should raise the (declared) 
frequency of the calls _to_ them. If you have other sites that
_are_ calling you (a case which "barfy" does not have), then
you need to juggle the declared weights into the ratio you want.

> 
> 1.  The cost names are all wrong.  People think that LOCAL means
> any site that's close, even if you never call them.  This thinking
> is often incorrect, but inspired by the name.  

	Sounds like a good case for "RTFM". (or, more appropriately,
RTFSourceCode)

>                                          The same goes for
> FAST.  Without looking up the numbers, which is faster, HOURLY+FAST
> or DEMAND?  Which should be faster?  

	Quick - what is pi to 100 digits? (Hint: that is what
computers are for)

> 
> 2.  I want to be able to tell the rest of the world that site
> xxxxxx is wrong about the cost of calling my site and that
> the cost is *higher* than what xxxxxx is advertising.

	Why do you care what cost site xxxxxx is advertising?
If it's filling up your disk, then call postmaster@xxxxxx
and get her/him to fix his/her maps.

> 
> 3.  Maybe map coordinators should check with both sides of
> each link before publishing map entries.  New links, at
> least, should be checked out before being published.  New
> sites should have all links checked before being published.
> 
> 4.  No map data for a site should be published unless the postmaster
> alias for the site is proved to get human attention.  Many sites
> seem to ignore all mail to postmaster.  (I found about five such
> sites last year when I was getting bombarded by 100 letters a day
> from an insane mailer and couldn't find a contact to stop it.)
> 
> Greg Paris                    <gmp@rayssd.ray.com>

	I suspect that there is rarely a problem with links when
they are first put in.  The problems I have observed have been
mainly with sites "falling off" and not mentioning it and not
altering their maps correctly.  I had one such problem
with a site -- I got about 2 meg backed up for them and then
phoned the guy.  He said that the link was dead and
not coming back.  I asked him to update the map appropriately
and he said he would.  I had similar conversations for every
2 meg that I accumulated.  At 10 meg, I put the mail on a tape
and gave it to him, and then deleted him from my local information.
Thus - mail went a-bouncing back to wherever it came from
(this elicited some complaints from the innocent - but I
redirected them to him).
	I have also had very bad experiences with "postmaster"
(even some at what I would normally consider "nice" sites, such as
"fed" - a federal site).  The problem is that at some places
"administrator" is defined as "the first person that gets tired of
having stuff blow up on them" - they usually have other jobs and don't
want to spend much of their time doing "computer stuff".  The other
group is people who do things out of ignorance - these people
usually respond quickly, and with the best intentions.  Contacting people
of the second type will usually work - contacting people of the first
will probably never work.  I (and I suspect you also) have better
things to do with my time than call administrators who don't get their
work done -- why actively look for further ulcer food?  Also -
how are {you,I} to know when a "good" administrator leaves
a company/institution and is replaced by a "no-good" administrator?
Do you suggest that I contact _all_ of the administrators that
my machines talk to on some sort of regular basis?  Telephone?
Email?
	The USENET is a _cooperative_ effort -- that means we
all try to work together for the best overall effect (though
sometimes our opinions differ in small ways); I'd suggest
that you start dropping sites that do not conform (or attempt
to conform) to the established standards and methods. It might
sound a bit like overkill, but I would think that 3 or 5 steady
sites would beat 20 undependable sites.  Note: you should try
notifying these people that you are dropping them before doing
something as obnoxious as the above - I've only ever had need to do
it once.


					Tp.
-- 

...!rutgers!brisco (UUCP)               brisco@pilot.njin.net (ARPA)
    brisco@ZODIAC (BITNET)              201-932-2351          (VOICE)

rroot@edm.UUCP (Stephen Samuel) (09/29/88)

From article <800@bacchus.dec.com>, by vixie@decwrl.dec.com (Paul Vixie):
> Yes, people could set up their systems to send in map data every week or day
> or hour.  I don't wish to be required to do two things, though:
> 1. Publish all my connections.  Some of my connections are for private use

> 2. Tell the UUCP Project about it whenever I add something I _do_ intend to
>   publish.  I will tell them eventually, like in about three weeks or so
>   after I've added and deleted and tuned and munged and stabilized things.

I agree that people shouldn't be forced to publish private sites, but
what they SHOULD be willing to do is publish the sites they DON'T connect
to any more.  More precisely: when you stop feeding a site, or know that
you soon will, you should publish a new map with that site removed ASAR!
(As Soon As Reasonable).

  It doesn't do people much good to send mail to a dead-end site because
of out-of-date maps, but it DOESN'T hurt if they send things thru
a less-than-optimal path.

-- 
-------------
Stephen Samuel 	  (userzxcv@ualtamts.bitnet   or  alberta!edm!steve)
(Only in Canada, you say??.... Pity!)

mml@magnus.UUCP (Mike Levin) (09/30/88)

In article <3283@edm.UUCP> rroot@edm.UUCP (Stephen Samuel) writes:
>
>I agree that people shouldn't be forced to publish private sites, but
>they SHOULD be willing to do is publish the sites that you DON'T connect
>to.  More precisely: when you stop feeding a site, or know that you soon
>will, you should publish a new map with that site removed ASAR! (As Soon
>As Reasonable).
>
	A slight twist to your idea, Stephen.  What if a newsgroup were
set up (probably moderated) for map changes, i.e. ONLY when
a site DISCONNECTS somebody or is experiencing serious connection
problems, the SA posts (to this new group) an updated map.  Then, if some
really clever person out there could write a map patcher, the new info
could be applied as a patch to the appropriate map.  Thus, the info would
be as UTDAP (Up To Date As Possible) at all times.
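
For the "really clever person," a rough sketch of what such a patcher
could look like, assuming the updates are posted as context diffs against
the published map segments; the spool path and the group name are invented
for illustration, not real:

#!/bin/sh
# apply-map-patches.sh -- rough sketch of the map-patcher idea.
# Assumes each posted article is a context diff against a published
# map segment; the spool path and group name are made up for illustration.

MAPDIR=/usr/lib/uucp/maps
SPOOL=/usr/spool/news/comp/mail/map-patches

for art in $SPOOL/*
do
        [ -f "$art" ] || continue
        # strip the news header (everything through the first blank line),
        # then let patch pick the target file out of the diff itself
        sed '1,/^$/d' "$art" | (cd "$MAPDIR" && patch -p0)
done

A real version would also have to remember which articles it had already
applied, and probably rerun pathalias afterwards.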

	I wonder if somebody could standardize on a map update program.  I
use a shell script, run from cron, whose job is to pull a copy of any
new maps that came in, unshar them, and stick them into my maps directory.
What if this were improved so that it kept a simple DB of the entire uucp
universe, and the whole process were standardized and optimized?  I think
that could be made to work REALLY WELL!
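
A minimal sketch of the kind of script described above; the spool path,
the stamp-file trick, and the assumption that the maps arrive as shell
archives in comp.mail.maps are particulars of one possible setup, not
anything standardized:

#!/bin/sh
# update-maps.sh -- rough sketch: unpack new comp.mail.maps articles
# (shell archives) into the local maps directory.  Paths are assumptions.

MAPDIR=/usr/lib/uucp/maps
SPOOL=/usr/spool/news/comp/mail/maps
STAMP=$MAPDIR/.last-update              # create this by hand the first time

[ -f "$STAMP" ] || exit 1
cd "$MAPDIR" || exit 1

find "$SPOOL" -type f -newer "$STAMP" -print |
while read art
do
        # everything after the first blank line is the shell archive
        sed '1,/^$/d' "$art" | sh
done

touch "$STAMP"

Running pathalias over the result to rebuild the paths database would be
the obvious next step for the standardized version.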

	I wonder if it will rain tomorrow????


					Mike Levin

-- 
+---+  P L E A S E    R E S P O N D   T O: +---+  *  *  *  *  *  *  *  *  *  *
| Michael M. Levin, Silent Radio, Los Angeles  | I never thought I'd be LOOKING
|{aeras|csun|mtune|pacbell|srhqla}!magnus!levin|   for something to say! ! !
+----------------------------------------------+------------------------------+

carlo@electro.UUCP (Carlo Sgro) (10/04/88)

In article <318@magnus.UUCP> mml@magnus.UUCP (Mike Levin) writes:
>	A slight twist to your idea, Stephen.  What if a newsgroup was
>setup (probably should be moderated) for map changes-  i.e.  ONLY when
>a site DISCONNECTS somebody or is experiencing serious connection
>problems, the SA posts (to this new group) an updated map.  Then, if some
>really clever person out there could write a map patcher, the new info
>could be applied as a patch to the appropriate map.  Thus, the info would
>be as UTDAP (Up To Date As Possible) at all times.

Another benefit of this would be that sites that cannot justify getting maps
regularly (due to transfer time) would be able to get maps once and then 
patch them.  We would fall into this category.

Or are we talking about *.config?
-- 
Carlo Sgro                              <Insert interesting thought here>
watmath!watcgl!electro!carlo