[net.mail] name change

hokey@plus5.UUCP (Hokey) (02/16/85)

In article <405@lsuc.UUCP> dave@lsuc.UUCP (David Sherman) writes:
>This silliness exemplifies the problems we'll continue to have
>as the net grows, if people insist on picking names which are
>unrelated to their organization.
>
>Couldn't you use "digi-snow" or "digi-white"? Or (horrors) "digi-at"?

Another, better, solution is to use a decent domain scheme (or a decent
edge database system) to avoid the problem of name ambiguities.

And I don't mean geographic domains, either!  There is no *good* reason for
using the proposed geographic domain system.  If the primary reason for going
to a geographic domain system is to reduce the administrative load and table
maintenance, there are easier and better ways.

A "flatter" namespace is much easier to use.  I would *greatly* prefer our
machines to be known as plus5.uucp instead of plus5.geographic.uucp for
several reasons:

	- shorter to type
	- makes sense if we have sites in many geographic regions
	- fewer table entries to maintain

There are *many* sites on the net which do not meet the proposed requirements
for a second level domain but have offices distributed amongst many geographic
domains.  Other than Sun, HP, Dec, Tek, ATT, and any others which clearly
meet the proposed requirement (now), we have Gould, Perkin-Elmer, Interactive
Systems, Intel, Masscomp, CCI, and untold smaller firms which span the
proposed geographic boundaries.  "Forcing" these people to join local domains
or to set up multiple domain addresses just makes for bigger tables and
greater frustration for users (because all the names are "hidden" below
the geographic subdomain).

Administration/registration of a "flatter" namespace can be easily handled
by separating the "assistant" administrators/registrars from the actual
second level domain names.

We could even use the current regional map coordinators for the job.

-- 
Hokey           ..ihnp4!plus5!hokey
		  314-725-9492

hokey@plus5.UUCP (Hokey) (02/20/85)

I received a reply to my posting from Mark Horton.  What follows is his
letter (reprinted with permission, of course) and my response.

To: wucs!ihnp4!cbosgd!mark
Subject: Re: name change

> Hokey - what exactly are you proposing?  Clearly there must be some
> rule for who is entitled to a 2nd level domain.  If anyone is allowed
> to have a 2nd level domain, the number would grow so rapidly that
> there would be no way to keep an accurate map, and no reasonable way
> to deal with mail sent to a 2nd level domain that isn't recognized.

I would prefer that any organization can get a second level domain.  I
realize that, at present, sites are treated as second level domains, but I
believe this to be unnecessary.

Why is an accurate map necessary (accurate with respect to completeness,
not correctness)?  I realize that by limiting the number of second level
domains, an "accurate" map of second level domains is easy.

There is a very easy way to deal with unrecognized second level domains:
the "ring" scheme I mentioned in my mail to the mailing list.  I realize I
haven't followed up with an in-depth description of the scheme, but it
seems pretty easy.

(Subsequent emphasis around *route* and *address* is partially for my
benefit, as well as for any others who might see this but do not have a
clear understanding of the ramifications.)

Basically, we define a "list" of backbone sites.  Mail from The Depths (or
from The Outside) eventually reaches a backbone site.  If the backbone does
not know how to route the mail to the correct destination, it passes the
message to its "right-hand" neighbor.  If the message returns to the initial
backbone site (with the same *address*), it can be rejected.  We can tell
if the message has been seen at "our" site by parsing the header for a
Received-by: line with our name on it.  Small amounts of intelligence will
be required because Other Mailers might try to *route* the mail through
our site; in this case the destination will be a *route* instead of an
*address*.
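In today's terms, the ring scheme might be sketched like this (purely
illustrative: the ring membership, routing tables, and the hop-counting
shortcut are all invented, and a real implementation would parse
Received-by: lines as described above rather than count hops):

```python
# Sketch of the backbone "ring" scheme described above.
# All site names and routing tables here are invented for illustration.

BACKBONE_RING = ["cbosgd", "ihnp4", "decvax", "seismo"]  # hypothetical ring order

ROUTES = {  # each backbone's partial routing table (invented)
    "cbosgd": {"plus5"},
    "ihnp4": {"fluke"},
    "decvax": set(),
    "seismo": {"umcp-cs"},
}

def ring_route(entry_site, dest):
    """Pass a message around the ring until some backbone knows dest.

    If the message comes back to the entry site still unrecognized,
    it can be rejected.  (Here we just count hops; the article's scheme
    detects the full circle by finding our own Received-by: line.)"""
    i = BACKBONE_RING.index(entry_site)
    for step in range(len(BACKBONE_RING)):
        site = BACKBONE_RING[(i + step) % len(BACKBONE_RING)]
        if dest in ROUTES[site]:
            return site          # this backbone can deliver the message
    return None                  # full circle: reject the message
```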

> There is no requirement that domains be geographic.  In fact, we
> expect to have some non-geographic domains, but only those that
> would be as large as the geographic domains.

The requirements for non-geographic domains are too strict, and there will
be, I believe, great resistance toward their creation.

Geographic domains Stink for organizations which have machines in different
geographic domains.

The only other alternatives are for "groups" to band together.  If we are
going to go to that trouble, we might as well register directly under EDU,
COM, GOV, MIL, or ORG, and bypass the .UUCP domain entirely.  (I have
already discussed this point with Jon Postel.)  Either that, or form the
same domains under .UUCP (which seems silly to me).

> I gather you have some alternative in mind.  What is it?
> 
> 	Mark

Fix the netnews software to permit "permanent" articles.  This will help solve
many existing problems the net now faces, and will also provide a perfect
mechanism for automatically updating site connectivity and domain information.

Help me feel good about the way the uucp-mail project is going, so I feel good
about spending my (mostly non-existent) spare time on the mailer.

Discuss these issues over the net in a moderated format, thereby providing
as many people as possible with as much information as we can, and also
limiting the rehash of well-understood issues by those who have slogged
through so much in the past.

Consider the possibility of a higher level of funding to support the design,
documentation, and software effort (the order of the projects is significant).
I don't really care *who* gets paid to do this; I do get frustrated because
this is taking so long, primarily because we all have higher-priority things
to do (mostly, we all have to earn a living).

How did I do?

Hokey

-- 
Hokey           ..ihnp4!plus5!hokey
		  314-725-9492

hokey@plus5.UUCP (Hokey) (02/20/85)

> Date: Tue, 19 Feb 85 11:18:16 est
> From: cbosgd!mark (Mark Horton)
> To: hokey@plus5.uucp
> 
> Go ahead and publish, including this reply.
> 
> What you are proposing is really a two-level plan anyway.  You are
> suggesting backbone machines (these are really "2nd level domains") and
[I assume you mean second level domains under "your" proposal]
> that any org, no matter how small, goes into a table on at least one
> backbone machine.  The only difference between this and considering the
> orgs to have 3rd level domains (which is what will probably happen, unless
> a 2nd level domain chooses to subdivide differently) is that the 2nd level
> is hidden.  The only advantage to hiding the 2nd level is to save typing
> of about 5 characters in a mail address.

Not true; the difference is in what is *represented* by the "extra" subdomain.
Segregating (commercial) sites by geographic area makes it harder to locate
a particular site.  By making the organization the higher level domain, the
task of locating the organization is that much easier.  Furthermore, this
sort of organization will tend to route mail *through* an organization's
"domain" rather than through other organizations'.

> However, there are plenty of disadvantages.  Once you try to make the map
> seem logically flat, you'll get thousands of organizations that all appear
> equal.  The map for these orgs will be so large that it will never be up
> to date, and it will be too large to distribute very far.  (Essentially
> the same problem we have with the current map.)  So you have to assume that
> local machines don't have a very good map.  This in turn means that more mail
> will be sent to unrecognized host names, which will get routed through the
> backbone, putting a huge load on the backbone.  If you were the SA on a
> backbone machine, would you want that much mail going through your machine?
> The only hope we have of having mail be well routed is to have good maps of
> local areas and frequently used destinations on each machine.  And having
> mail be well routed is crucial, otherwise the only machines willing to be
> backbones will be commercial services that charge for each message they
> transmit.

I disagree.  If we get the netnews software elevated to the point where
permanent postings are handled, we will have no trouble automating the
routing tables.

Until the netnews software is "enhanced", we can still *drastically*
reduce the volume of mail by using short routes when replying to messages
from netnews.  I wish I could get rn to use From: instead of Path:.

So, in order to support a large number of second level domains, we need to
make it easy to maintain the maps.  I don't think this is too hard at all.
The easier we make it to maintain the maps, the more sites we shall see
with maps, and the easier it will be to distribute the load.

With geographic domains, there is no incentive or mechanism to "improve"
routing.

> As for getting funding, I don't have a source for it.  The net is not willing
> to pay money to support UUCP.  Usenix doesn't want to get involved with
> domains.  Do you have a source in mind?

If the members of Usenix want it, they should ask for it.  If that fails,
what about the members of /usr/group?

We are members of both; I suspect if enough members of either organization
ask for it...

> Joe Kelsey at Fluke has also expressed an interest in a different
> organization, along the EDU, COM, etc lines.  You might be interested in
> working with him to create a reasonably concrete proposal, if you two agree
> on how it should be done.
> 
> 	Mark

Hi Joe!  What do you think?
-- 
Hokey           ..ihnp4!plus5!hokey
		  314-725-9492

mark@cbosgd.UUCP (Mark Horton) (02/21/85)

Well, I guess the rest of this discussion now has to be done in public.
If the reader is bored, feel free to turn it off or skip ahead.

Hokey - I don't see the connection between enhancing netnews
and maintaining a huge flat domain space.  We already have permanent
postings - namely articles with expiration dates in the future.  We also
have a facility for keeping a separate database - putting an extra
pseudo-site in the sys file, feeding the article to a program that
does something with the article text.  What we don't have is a way to
keep the same article number on different versions, or to automatically
expire the previous version when a new version comes in.  These features
only matter when you want humans to read the article using standard
netnews interfaces, and this doesn't make sense for a map.

So you could write a program to take updates from net.adm.map (or whatever)
and update the database locally.  This would give each host the ability
to optionally subscribe to the service, just by adding a line to their
sys file and installing a program.
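Mark's pseudo-site mechanism could feed a small update program along
these lines (a sketch only; the "Site:"/"Links:" article format and the
in-memory database are invented for illustration, not an existing
netnews interface):

```python
# Sketch of a program a site could hang off a sys-file pseudo-site:
# it takes one posted map article and folds it into a local database.
# The "Site:"/"Links:" article format here is invented for illustration.

def apply_update(article_text, db):
    """Parse a posted map update and replace that site's entry."""
    site, links = None, []
    for line in article_text.splitlines():
        if line.startswith("Site:"):
            site = line.split(":", 1)[1].strip()
        elif line.startswith("Links:"):
            links = line.split(":", 1)[1].split()
    if site:
        db[site] = links      # a newer posting supersedes the old entry
    return db
```

Each subscribing host would run something like this over every article
arriving in the map group, keeping its local tables current automatically.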

What I don't see is how this solves the problem of a huge flat name space.
Who has responsibility for the database?  Who has the master copy?  Where
do you go if you want an accurate copy to send someone in another domain?
Who resolves name conflicts?  How do you prevent site B from changing site
A's data against site A's wishes?  What are hosts that don't get netnews
supposed to do?  That don't have the disk space to store such a huge
database?  That don't want to have UNIX directories with thousands of
files in them?  What do you do about situations where the entire 2nd
level must be published somewhere - it will be huge!  How do new sites
come on-line?  What if a machine doesn't run UNIX so it can't get news?

	Mark

honey@down.FUN (code 101) (02/22/85)

what i want is a simple way to tell people my mail address.  to me this
means always giving my user name, frequently giving my host name, and
sometimes naming my network, but no more than that.  to achieve this, i
pick a host that has high visibility.  in my case, this happens not to
be the machine on which i work.  this is common, e.g., cbosgd!mark ->
cbpavo!mark, allegra!jpl -> presto!jpl, bellcore!everybody -> god knows
where.

what does it take to make this simple idea work well?

first, we must have a means to make a host name well-known.  i believe
that the efforts of karen (map management) and lauren (name registry)
are making this a reality.

next, we need a way to hide the names of "minor" hosts.  this is partly
for convenience, so that people can use familiar gateways, partly to
minimize data rot, and partly to avoid host name conflicts.  it appears
people have sendmail tricks for hiding host names; i have also seen a
one line sed script that does the job.

this seems a good moment to mention an undocumented capability of the
latest pathalias.  a line of the form

private {host, ...}

declares the list of host names that follows as private, or local, for
the remainder of the current input file.  connection data for a private
host is associated with a host guaranteed to be distinct from all others.

for example, eecs at princeton has a machine called iris, but there is
at least one other iris on the network, at brown.  so i declare iris
private in the eecs connection data file.  pathalias then distinguishes
princeton's iris connection info from brunix's, so that i don't end up
routing to princeton!iris!brunix!...
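the effect peter describes can be modeled roughly as follows (a sketch
of the idea only, not pathalias source; the connection data is
invented):

```python
# Sketch of the "private" idea: a host declared private in one input
# file is keyed by (file, host), so its connection data can never
# merge with a same-named public host declared elsewhere.
# This models the behavior described above; it is not pathalias code.

def load_links(files):
    """files: list of (filename, private_set, [(from, to), ...])."""
    links = {}
    for fname, private, edges in files:
        for a, b in edges:
            # qualify private names with their file so they stay distinct
            key = ("%s/%s" % (fname, a)) if a in private else a
            links.setdefault(key, set()).add(b)
    return links

FILES = [  # invented connection data, after the iris example above
    ("princeton", {"iris"}, [("iris", "princeton")]),
    ("brown",     set(),    [("iris", "brunix")]),
]
```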

there are two reasons why this is not documented anywhere ('til now,
such as it is).  first, i'm not happy with my yacc description, which
requires that private and { appear on the same line.  (this is to avoid
making private a keyword.  i invite the interested to clean this up.)

second, what do you do with it once you've got it?  should i produce
output for both irises?  how do i build a dbm file on that?

at the moment, i don't produce output for private hosts at all, nor for
any host that clashes with a private host.  this means that i can
specify princeton!iris!user or brunix!iris!user, but not iris!user.

of course, i cheat by adding the local iris to my route tables.  also,
i accommodate other people's broken software:  we changed iris' name to
ivy this week.

finally, my hacked up delivermail consults an edge database built from
karen's data to help optimize the route.  this enables me to re-route a
long path ending in ...!brunix!iris!.. to brown, or even to hosts to
the right of iris on the specified address.
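that re-routing step amounts to: scan the bang path from the right for
the first host the edge database knows, and splice in our best route to
it (a sketch; the database contents are invented):

```python
# Sketch of re-routing a long bang path: if a host late in the path
# is in our edge database, route to it directly and keep only the
# hops to its right.  Database contents are invented for illustration.

KNOWN = {"brunix": "seismo!brown!brunix"}   # host -> our best route to it

def optimize(path):
    hops = path.split("!")
    user = hops.pop()            # last component is the user
    for i in range(len(hops) - 1, -1, -1):
        if hops[i] in KNOWN:
            tail = hops[i + 1:]  # hops to the right of the known host
            return "!".join([KNOWN[hops[i]]] + tail + [user])
    return path                  # nothing recognized; leave it alone
```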

i maintain that these techniques have the same effect as domaining:  by
adding more information to the address specification, ambiguous routes
and host names can be accommodated, and route optimization can take
place.  i also maintain that this is more powerful than domaining,
since it achieves the economy of host!user (or user@host, if you must)
wherever possible, is compatible with existing practices, and works
even when !'s and @'s are mixed.

	peter

ps:  if you care to write a followup to this note, please do not
include huge chunks of it, or even small pieces.  i know the temptation
to make a point-by-point reply is irresistible, but i suggest you
simply state your ideas and let the news reading software provide the
right level of context.

hokey@plus5.UUCP (Hokey) (02/23/85)

Is the conversational followup technique good or bad?

In article <890@cbosgd.UUCP> mark@cbosgd.UUCP (Mark Horton) writes:
>Well, I guess the rest of this discussion now has to be done in public.
>If the reader is bored, feel free to turn it off or skip ahead.

People would be bored if they didn't care, or were burned out, or felt
we were discussing really stupid things.  If they don't care, they probably
aren't reading this newsgroup.  If they are burned out, I can understand it,
but I would appreciate it if they would rekindle enough to get the application
fully designed and documented.  If this is a stupid discussion, they can
always ignore it or flame me (or anybody else...).

>Hokey - I don't see the connection between enhancing netnews
>and maintaining a huge flat domain space.  We already have permanent
>postings - namely articles with expiration dates in the future.

Expiration dates in the future won't work for reasons I mentioned in another
article.

Basically, lots of people run "find" scripts which delete old articles because
expire lets things fall through the cracks, disk systems fill up and the
history file gets truncated, and several other fun things.  Furthermore,
new sites won't get any of the existing articles.

>What we don't have is a way to
>keep the same article number on different versions, or to automatically
>expire the previous version when a new version comes in.  These features
>only matter when you want humans to read the article using standard
>netnews interfaces, and this doesn't make sense for a map.

Again, not quite sufficient.  What about new sites?  The extensions noted
would be *useful* (although perhaps not necessary) for map maintenance,
but would be of *great* help for net-etiquette and other documents, including
RFCs, oft-posted sources or manual pages, and "Standards" documents (like
design documents for mail and news software).

I'm interested in generalizing the software, so we can avoid problems like
this in the future.

>What I don't see is how this solves the problem of a huge flat name space.
>Who has responsibility for the database?  Who resolves name conflicts?  

The Domain Registrar, and the regional registrars/administrators,
just like they do now.

>Who has the master copy? Where
>do you go if you want an accurate copy to send someone in another domain?

Anybody who keeps the postings on line.  I realize that the sent copy would
be accurate to the moment it was retrieved, similar to the way a balance
sheet reflects the books at a point in time.

How accurate should the registrars be?

>How do you prevent site B from changing site A's data against site A's wishes?

We're talking moderated postings of public data.  If A wants secure data, A
can maintain a separate database using the mechanism you mentioned earlier,
and manually post changes (similar to rmgroup control messages).

>What are hosts that don't get netnews supposed to do?

They can either run the netnews software and only receive the appropriate
postings, or wait until some Angel writes the software, or let some other
site be their smartmailer, or ...

>That don't have the disk space to store such a huge database?
>That don't want to have UNIX directories with thousands of files in them?

Parts of the previous answer apply here.  Again, if there is going to be
any significant traffic between two sites, they should connect.  This will
tend to reduce the amount of tables needed at "minimal" routing sites.

>What do you do about situations where the entire 2nd
>level must be published somewhere - it will be huge!

What cases are these?  So what if it is huge?  It is big because there are
a lot of sites!

>How do new sites come on-line?

They contact the Domain Registrar, who initially verifies that the name
is not already in use, and probably turfs the request over to a regional
registrar for ongoing maintenance of connectivity information.  The
"assistant" registrar/administrator is the one who posts updates to the
moderated map group.

>What if a machine doesn't run UNIX so it can't get news?

Then what are they doing in the UUCP domain, and how are they running
this mail system?

Otherwise, the news software seems to be a superset of RFC822, and that is
what mail will probably look like, so they can find a way to get the map
postings (if they want them; see earlier answers about "lack of space",
"we don't run netnews", and a couple of others).

>	Mark

Hokey

PS - catch the next posting for Answers to Questions Not Yet Asked!
-- 
Hokey           ..ihnp4!plus5!hokey
		  314-725-9492

hokey@plus5.UUCP (Hokey) (02/23/85)

There is no problem with other Internet domains seeking path info for the
UUCP domain even when the second level domains change frequently; if
queried, the gateway can always say "Send mail to site.uucp via me",
because what is really being asked is whether the name is valid, in order
to reduce traffic.  If the UUCP domain gateway knows how to reach the
site, all is well.  If the gateway does not, it is simply promising to
assume responsibility for the message according to the rules of RFC822
(unless it is a different RFC) message delivery.

In all honesty, there are probably other Unasked Questions floating around.
The preceding paragraph may, in all cases, be used as an answer to those
questions.  It might not be a correct answer...
-- 
Hokey           ..ihnp4!plus5!hokey
		  314-725-9492

joe@fluke.UUCP (Joe Kelsey) (02/28/85)

Well, since my name was mentioned, I guess I should speak up.

There was a discussion on header-people (I think) about a month ago
which was started when Mark posted a note asking everyone if they
thought dashes (-) were ok to use in domain names.  He outlined the
current UUCP Domain proposal along the way.  What ensued was a variety
of messages, mostly about whether or not it was appropriate to name a
domain after the transport mechanism.  I thought about this for a
while, especially after a particularly enlightening message from Jon
Postel about the real meaning of Domains.

What struck me most was that the UUCP domain was in some respects
re-inventing the wheel.  Way back when the ARPAnet started, they had a
flat name space.  After a while, they decided this wasn't good enough,
so someone came up with Domains.  It started to catch on when DoD
forced a change to TCP/IP, and everyone started tacking .ARPA onto
their hostname.  This works fine until you try to start adding "real"
domains, like .COM, .EDU, etc.  Suddenly, your hostname no longer has
anything to do with the network you are attached to, it simply
describes you!  Then along comes the UUCP Domain Spec, and suddenly we
are transported back in time 5 years!  Our hostname no longer simply
describes us, but rather ties us down to a particular transport
mechanism.

Now I ask you, how are you going to deliver a message to ISIF.USC.EDU?
That IS going to be the name of one of the ISI machines at USC.  I know
that they are connected to ARPAnet now, but you can't tell that from
the name, can you?  How about BEAVER.UWASH.EDU?  A possible name for
uw-beaver, but they are on ARPAnet AND UUCP!  In fact, they are a
backbone site for USENET.  Are you going to special-case all of the
non-UUCP domain names and ASSUME they are ARPA-based?

Don't get me wrong - I think the UUCP Domain Spec is a GREAT first
draft.  However, there are some issues that need to be clarified, and I
think that the issue of names is the primary one.  If you read the
Domain Spec RFC you will notice that a host may have ONE AND ONLY ONE
name.  (Well, LOCAL nicknames are OK as long as they don't filter out to
the rest of the world.)  This means that hosts like uw-beaver will have
to choose a name to be known as in Internet-land.  They can still keep
uw-beaver as a uucp transport name, but they can't use
UW-BEAVER.WA.UUCP and BEAVER.UWASH.EDU.  I suspect that they will
choose a name in the .EDU domain over a name in the .UUCP domain.  That
means that people who send messages to the Beav' can either send to
user@BEAVER.UWASH.EDU or to host!host!host!uw-beaver!user.  So, having
a .UUCP domain does nothing for us.

What I propose is a distributed routing algorithm, loosely based on the
current UUCP Domain Spec. and the UUCP Mail Transmission Format Std.
and including a concept of a distributed name server.  Basically, you
start with a mapping from domain style names to transport names,
similar to the way you currently map hostnames to Internet addresses.
In the strictly UUCP case, your table would contain the mapping from
domain name to uucp name, with a possible step to further specify the
uucp name using some sort of path finder (or you could store the actual
path in the database).  This is not much more complicated than the
current setup, and we will still allow directly routed paths in the
traditional uucp bang format.
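The first stage Joe describes is essentially a table lookup (a sketch
with invented table entries; a real implementation might call a path
finder instead of storing fixed paths):

```python
# Sketch of mapping a domain-style name to a uucp transport route.
# Table contents are invented for illustration.

DOMAIN_TABLE = {
    "beaver.uwash.edu": "ihnp4!uw-beaver",
    "plus5.uucp": "ihnp4!plus5",
}

def transport_route(addr):
    """user@domain -> bang path; pass traditional bang paths through."""
    if "@" not in addr:
        return addr              # already a directly routed uucp path
    user, domain = addr.split("@", 1)
    path = DOMAIN_TABLE.get(domain.lower())
    if path is None:
        raise KeyError("no route for domain %s" % domain)
    return "%s!%s" % (path, user)
```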

Now, what happens when we come across a host that we have no data on?
Well, let's peruse RFC882 (Domain Names - Concepts and Facilities) for
some help.  Instead of having direct access to a name server, we must
rely on relatively static databases, but we can use the same concepts.
We can establish regional or corporate (for ATT, HP) centers which
would be responsible for keeping fairly accurate routing information,
and then specify MF (mail forwarders) or MD (mail destination) records
for at least the major top-level domains.  Then, if we have mail for
some unknown place, we should at least have a MF record in our database
and we can send the message to them for further processing.  They in
turn will guarantee us that they can either directly send the mail or
they will return it to us with an error.  In the case of Class 1 uucp
hosts, they may be two or more hops away from an authority, and so
won't get that kind of service.  Class 2 uucp hosts should be only one
hop away from an authority.  After sending the message, the authority
COULD send us a database update for the domain, if we requested it, so
that we could update our database.
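The MF/MD fallback might look like this (a sketch loosely after RFC
882's record types; all record contents are invented):

```python
# Sketch of the MF-record fallback: unknown host, known domain ->
# hand the message to the domain's registered mail forwarder, which
# guarantees delivery or an error.  Records are invented.

MD = {"plus5.uucp": "ihnp4!plus5"}          # direct destinations we know
MF = {"edu": "seismo", "com": "ihnp4"}      # forwarders per top-level domain

def next_hop(domain):
    if domain in MD:
        return MD[domain]                    # deliver directly
    top = domain.rsplit(".", 1)[-1]
    if top in MF:
        return MF[top]                       # forwarder assumes responsibility
    return None                              # no authority known: bounce
```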

I guess that what it boils down to is that the UUCP Domain Spec is a
great Administrative concept, but it is not backed by enough
understanding of the Domain system, including the concepts in RFC 882
and 883.  If we don't do something to address the issue of name servers
and non-.UUCP domains, we will end up being even more cut off from the
Internet world than we are now.   I basically see the establishment of
a .UUCP domain as the beginning of true isolation from other networks
unless we really face up to the issue of a REAL domain implementation.
I would not be opposed to the use of .UUCP as a domain as long as I can
choose to be in the .COM domain if I decide that I prefer that one.
(I'm not saying that Fluke will actually choose that - we are nowhere
near that kind of decision yet).

I will post RFC882 and RFC883 to mod.sources so that we can discuss
issues in those documents.

/Joe Kelsey			John Fluke Mfg. Co., Inc.
				(206)356 5933

zben@umd5.UUCP (03/02/85)

I decided to break site names loose entirely from transport mechanisms.
Perhaps engendered by the unfortunate fact I need to transfer mail between
UMD2.ARPA and UMDA.ARPA by BitNet/RSCS rather than InterNet/SMTP, but there
you are.  DON'T ask why...

In any case break address parsing loose from transfer channel selection.
You can consider UMDC!ZBEN and ZBEN@UMDC to be identical addresses; they
specify a host and a user.  Even with forwarding, the two addresses:

     MARYLAND!UMD2!UMDC!ZBEN
and
     @MARYLAND,@UMD2:ZBEN@UMDC

have the same SEMANTICS.  Now, make your transport channel a property of
UMDC rather than anything from the SYNTAX format.  In this case my channel
to UMDC is *neither* Internet *nor* UUCP - it's BitNet.
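Ben's claim that the two syntaxes carry the same semantics can be
demonstrated by converting each form into the other (a sketch with
deliberately simplified parsing):

```python
# Sketch: a bang path and a source-route address name the same relay
# list and final mailbox, so each can be derived from the other.
# Parsing here is deliberately simplified for illustration.

def bang_to_route(path):
    """MARYLAND!UMD2!UMDC!ZBEN -> @MARYLAND,@UMD2:ZBEN@UMDC"""
    hops = path.split("!")
    user, host = hops[-1], hops[-2]
    relays = hops[:-2]
    if not relays:
        return "%s@%s" % (user, host)
    return "%s:%s@%s" % (",".join("@" + r for r in relays), user, host)

def route_to_bang(route):
    """@MARYLAND,@UMD2:ZBEN@UMDC -> MARYLAND!UMD2!UMDC!ZBEN"""
    if ":" in route:
        relays, mailbox = route.split(":", 1)
        hops = [r.lstrip("@") for r in relays.split(",")]
    else:
        hops, mailbox = [], route
    user, host = mailbox.split("@")
    return "!".join(hops + [host, user])
```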

-- 
Ben Cranston        ...seismo!umcp-cs!cvl!umd5!zben    zben@umd2.ARPA