[comp.protocols.tcp-ip] Naive questions about subnets & domains

mogul@decwrl.dec.com (Jeffrey Mogul) (08/15/89)

In article <1072@adobe.UUCP> shore@adobe.COM (Andrew Shore) writes:
>[Adobe is] NOT currently a "connected network" -- we
>aren't on the Internet for IP traffic.
>
>We may soon have private IP connections to some of our remote offices 
>(e.g., Boston and Amsterdam) through various leased services.
>
>Subnet question:  Is it best for me to give each remote office a subnet
>of my class B net, or get them their own (class C) network number?
>I ask this because it is my impression that subnet topology should
>ideally be invisible wrt. external routing decisions, and if we ever DO
>connect up to the Internet (especially if we connect in more than one
>location), then some very strange things could get routed through Adobe.
>Another way to phrase this question is: was it the intention of the subnet
>scheme that subnets must be geographically close or only topologically "close" 
>for routing purposes?  If they must be topologically close, am I better off
>	1) using subnets for remote networks and limiting my connections
>	   in the future
>or	2) getting distinct network numbers to leave me flexibility in the
>	   future

The most basic rule of subnetting is that, if you go with option #1,
the subnets must be connected to each other via a path that doesn't
ever leave your class B network.  If you cannot arrange internal links
between the home office and the branch offices, then you are not really
allowed to use option #1.
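As a concrete sketch of option #1 (using Python's ipaddress module; the
class B prefix 138.72.0.0/16 and the per-office subnet assignments are
invented for illustration):

```python
import ipaddress

# A hypothetical class B network carved into 8-bit subnets,
# one per office.  All numbers are invented.
corp_net  = ipaddress.ip_network("138.72.0.0/16")
home      = ipaddress.ip_network("138.72.1.0/24")   # California
boston    = ipaddress.ip_network("138.72.2.0/24")
amsterdam = ipaddress.ip_network("138.72.3.0/24")

# Every subnet lies inside the single class B, so the outside
# world sees only one route: 138.72.0.0/16.
for subnet in (home, boston, amsterdam):
    assert subnet.subnet_of(corp_net)
```

Because external routers see only the /16, they cannot route between the
subnets -- which is exactly why traffic between offices must stay on
internal links.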

If you can use option #1, there are two potential problems:
    (a) Except for sites with hand-crafted routes into your network, it
    will be nearly impossible to say "use gateway X between the Internet
    and the home office, but use gateway Y between the Internet and the
    Amsterdam office."  This means that there may be some packets that
    go around the world when they only need to travel a few miles.  For
    example, if your primary Internet gateway is in California, and a
    customer in Amsterdam tries to send a packet to the Amsterdam office,
    the packet will flow via California.
    
    This is sad, but nothing is perfect.  You could hope that this doesn't
    happen too often (how many Europeans run IP, after all?); in cases
    where it is important, your customers could install hand-crafted
    routes to your externally-visible Amsterdam host(s).

    (b) Nasty people in Amsterdam, if they know that Adobe is paying
    for an internal IP link between their city and California, could
    try to save money on their own phone bills by routing their
    packets through your network.  This should not happen with normal
    routing protocols; anyway, it is a simple matter to provide access
    control mechanisms in your routers to deny forwarding of such
    "transit" packets.
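The transit-denial rule can be sketched as a simple forwarding predicate
(Python; the class B number is invented, and the outside addresses are
arbitrary examples from the TEST-NET ranges):

```python
import ipaddress

CORP = ipaddress.ip_network("138.72.0.0/16")   # invented class B

def should_forward(src, dst):
    """Refuse "transit" packets: forward only if at least one
    endpoint is inside our own network."""
    src_in = ipaddress.ip_address(src) in CORP
    dst_in = ipaddress.ip_address(dst) in CORP
    return src_in or dst_in

# A packet between two outside hosts is denied transit:
assert not should_forward("192.0.2.1", "198.51.100.7")
# Traffic to or from an internal host passes:
assert should_forward("192.0.2.1", "138.72.3.9")
```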

If you use option #2, then neither of these two problems exists.
On the other hand, the size of the Internet routing tables is
growing at a frightening rate, and I'm sure people would rather that
you kept the number of networks as low as possible.  Although
option #2 may be better for some specific situations, for the
community as a whole, the fewer networks the better.

-Jeff

dpz@convex.com (David Paul Zimmerman) (08/16/89)

You'll probably see a lot of similarities to your original in this reply...

Convex currently has an official domain name (convex.com), an MX forwarder
(uxc.cso.uiuc.edu), and an official class B network number (with 8 bit
subnets).  However, we are NOT currently a "connected network" -- we aren't on
the Internet for IP traffic.

We currently do have private IP connections to some of our remote
offices (e.g., CA, FL, MD) through various leased services.

Subnet answer: So far, our connected remote sites are subnets under our class
B, whether US or foreign.  Since I'm in Engineering, not MIS, I don't have
direct control over this, but hopefully (with a little nudging from me if
necessary :-) we can keep going in that direction.  I believe that subnets are
intended for topological groupings, so under that presumption, all the hosts
of your company would have addresses under your single class B.  If you've got
a lot, like Rutgers University does, but not enough to warrant a class A, you
may eventually have to go to multiple B networks.  How you deal with that is
pretty much up to you -- you probably could organize geographically by B
network, but I suspect that by the time you need the additional networks it
would be a major piece of work to pull off.

Distinct class C network numbers can get to be more of a hassle than a source of
flexibility.  When I was at Rutgers, they had a remote net that was a class C,
and eventually redid them to be under their class B.  That net has since gone
forth and easily multiplied into a healthy number of subnets.  That is a
flexibility that you don't have when you have to get a class C allocated every
time you want to split or add a remote network or something.  Plus, it's a
waste of Internet network address mapping space if you've already got the
class B allocated.
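Zimmerman's flexibility point can be made concrete (Python sketch; the
class B prefix is invented): an 8-bit subnet plan yields 256
class-C-sized nets inside one class B, and splitting one further is a
purely local decision -- no new Internet allocation, no new route.

```python
import ipaddress

class_b = ipaddress.ip_network("138.72.0.0/16")   # invented

# 8-bit subnetting gives 256 class-C-sized pieces internally,
# while the Internet still carries a single route for the /16.
subnets = list(class_b.subnets(new_prefix=24))
assert len(subnets) == 256

# A remote office's subnet can later be split again, locally:
office = subnets[5]
halves = list(office.subnets(new_prefix=25))
assert len(halves) == 2
```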

Then again, Convex is only planning a single connection to the Internet, and
Rutgers didn't care if it played go-between for packets.  For multiple
connections, you could probably play routing protocol games to keep the
unwanted traffic off.  Haven't hacked routing protocols yet, so I can't say
how easy or hard this is.  It could just be a matter of telling your
routers-to-the-real-world to not advertise their knowledge to one another.

Domain answer: Our US sites are simply hosts as part of .convex.com, no
subdomains yet.  However, we do have a couple of remote sites in Europe, and
those just conform to the European geographical domains (.convex.oxford.ac.uk,
.convex.nl).  They're not on our network yet, so we get to them via UUCP.  You
can probably get more information about this whole issue from Piet Beertema
(piet@cwi.nl), who handles the European UUCP maps.

						David

David Paul Zimmerman                                             dpz@convex.com
CONVEX Computer Corp                                                 convex!dpz

dpz@convex.com (David Paul Zimmerman) (08/16/89)

> It could just be a matter of telling your routers-to-the-real-world to not
> advertise their knowledge to one another.

Argh... That doesn't make sense.  Your non-world-connected internal routers
could still propagate that information between your world-connected routers.
Told you I haven't hacked that stuff yet :-) What you would really want is to
have your internal net have all the routing information it wants, while, to
the real world, making all paths coming into your network look like dead ends.
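What Zimmerman describes -- giving the outside world only routes to your
own network, so every inbound path looks like a dead end -- might be
sketched as an advertisement filter (Python; all prefixes are invented):

```python
import ipaddress

CORP = ipaddress.ip_network("138.72.0.0/16")   # invented class B

def externally_advertised(known_routes):
    """To the real world, announce only routes that fall inside
    our own network; routes learned from other external gateways
    are suppressed, so we never appear to offer transit."""
    return [r for r in known_routes
            if ipaddress.ip_network(r).subnet_of(CORP)]

# Internal subnets plus a route heard from another border router:
known = ["138.72.1.0/24", "138.72.2.0/24", "192.0.2.0/24"]
assert externally_advertised(known) == ["138.72.1.0/24", "138.72.2.0/24"]
```

(In practice one would announce the aggregate /16 rather than the
individual subnets; the filter above just shows the suppression idea.)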

						David

David Paul Zimmerman                                             dpz@convex.com
CONVEX Computer Corp                                                 convex!dpz

lear@NET.BIO.NET (Eliot Lear) (08/16/89)

This is more of a question than an answer, but you might find it
interesting...

It would seem to me that whether you stick your other sites in your
class B depends on whether those remote sites will eventually have
entry points to outside networks.  Existing routing technology is
pretty wretched about such things.  Case in point:

Suppose you have a network that consists of two or more locations,
that looks something like the following:


	Site A			  T1		Site B
	Highspeed Internet	<--->		Backup Link
	Link

Well, what happens if the T1 goes down?  If each site has a different
net number, then with the blessing of appropriate routing gods, one
might even route through the Internet to get around the break
(forgetting policy issues, for the moment).  If you use the same
network, then Site A continues to advertise it as before, and the
chances are that Site B will most likely be screwed, depending on what
routing protocol is in use.

My question:

Does anyone see an answer to this problem, or have I defined the
problem incorrectly?

One way to handle such a break would be to transmit a subnet mask with
the route.  Yes, this would increase routing traffic, but one would
only do such a thing when attempting to correct a situation like the
one I described.
-- 
Eliot Lear
[lear@net.bio.net]

mogul@decwrl.dec.com (Jeffrey Mogul) (08/17/89)

In article <Aug.16.09.04.47.1989.26446@NET.BIO.NET> lear@NET.BIO.NET (Eliot Lear) writes:
>Suppose you have a network that consists of two or more locations,
>that looks something like the following:
>
>
>	Site A			  T1		Site B
>	Highspeed Internet	<--->		Backup Link
>	Link
>
>Well, what happens if the T1 goes down?  If each site has a different
>net number, then with the blessing of appropriate routing gods, one
>might even route through the Internet to get around the break
>(forgetting policy issues, for the moment).  If you use the same
>network, then Site A continues to advertise it as before, and the
>chances are that Site B will most likely be screwed, depending on what
>routing protocol is in use.
[...]
>One way to handle such a break would be to transmit a subnet mask with
>the route.  Yes, this would increase routing traffic, but one would
>only do such a thing when attempting to correct a situation like the
>one I described.

If I understand your suggestion, what you propose is that routing
protocol servers on the boundaries between a subnetted network and the
rest of the Internet would transmit subnet information about the
subnetted network to "external" routers.  I perceive an intention here
to allow routers outside the "broken" network to send packets via the
right boundary gateway.

 [I'm not quite sure how sending a "subnet mask" would accomplish this;
 knowing only the mask for a network doesn't allow one to infer what
 gateway is the optimal entry point for a given destination.]

The IP subnetting model does not allow this.  The whole point of
subnetting is to hide a certain amount of detail, in order to simplify
protocols and administration, and to keep the routing tables small.
Doing this only when the subnetted network is broken doesn't help: if a
large network with oodles of subnets partitions, the other routers on
the Internet would be hit with a tremendous increase in routing table
complexity ... like a dam which bursts and floods the villages
downstream.

Subnetting reduces complexity at the expense of functionality.  You
can't compute "optimal" routes if you ignore certain kinds of
information, but that may be the price we must pay to be able to
compute routes at all.

There are better solutions to the "partitioned subnetted network with
multiple external gateways problem" (PSNWMEGP?).  For example, one could
do the responsible thing and provide multiple internal paths between
Site A and Site B.  Or, one could do something topologically
equivalent, which is to set up a "tunnel" between Site A and Site B
that acts like a real link between the sites, but is actually
implemented by source-routing packets through the Internet.  (There are
policy issues to worry about here.)  The most cost-effective solution
might be to set up a temporary SLIP link, over a dialup line, between
the two sites for the duration of the partition.
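Mogul's "tunnel" can be sketched at the level of wrap and unwrap
(Python; a toy encapsulation to show the topology trick, not a real
IP-in-IP implementation, and all names and addresses are invented):

```python
# A packet addressed to a host at the partitioned site is wrapped
# in an outer packet addressed to that site's externally visible
# gateway, shipped across the Internet, then unwrapped there --
# so the tunnel behaves like one extra internal link.
def encapsulate(inner_packet, tunnel_endpoint):
    return {"dst": tunnel_endpoint, "payload": inner_packet}

def decapsulate(outer_packet):
    return outer_packet["payload"]

pkt = {"dst": "138.72.3.9", "payload": b"hello Site B"}
wrapped = encapsulate(pkt, "gateway-b.example.com")   # invented name
assert wrapped["dst"] == "gateway-b.example.com"
assert decapsulate(wrapped) == pkt
```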

keith@spider.co.uk (Keith Mitchell) (08/18/89)

I believe the solution which puts international subsidiaries into
subdomains of the country they are in is not, in general at least,
the correct solution.

My understanding is that the domain name space reflects an
*organisational*, not geographical, hierarchy.  It is thus valid to
have sites which are in another country be subdomains of the parent
company that is a sub-domain in its own country.

i.e. We are "spider.co.uk".  We have US and French subsidiaries, which
are "boston.spider.co.uk" and (soon) "paris.spider.co.uk" . These
are sub-organisations within the bigger spider organisation, and the
names reflect the organisational hierarchy.

I would say that ".convex.oxford.ac.uk" is invalid, as a company
cannot be part of Oxford University in its role as part of the UK
Academic Community.

For spider, mail routing works because our only point of contact
with the external world is a UK site; international delivery is an
internal operation.

Now, if our international subsidiaries had their own links to the
outside world, in the country they are geographically located in,
then it would be appropriate for them to be registered in that
country's domain (e.g. spider.com and spider.fr).

Without the external links in the relevant country, routing which is
done on top-level domains will get confused.  E.g., if we were to
register our US site as spider.com, someone in the UK mailing that
address would have it routed to a US backbone site that knows about .com,
which would know you get to Spider via the UK, so back it goes.

Whether the internal and external links use UUCP, the Internet or damp
string is actually irrelevant to naming.

So, I think the general rule is to register a site as a sub-domain of
the country its mail link to the outside world is in. This does not
preclude registering a site more than once.

What we would ideally like is to have a domain ".spider", which all
our machines and sites are in from an internal point of view. This
would be registered as a sub-domain of all necessary countries, with
an external mail link in each of them. Thus, edinburgh.spider.com,
boston.spider.fr, and paris.spider.co.uk are all valid, the top level
domain merely dictating the point of entry to our internal mail
system, and the bottom one where it finishes up.  This fits in with
global mail routing based on domains.

Is this sensible ? Does the domain name system permit the same entity
appearing in distinct sub-domains, or have I the wrong end of the stick ?

I'd better make it clear that the above represents my current thoughts
on this topic, rather than any officially decided company policy.

Keith Mitchell

Spider Systems Ltd.             Spider Systems Inc.
Spider Park    		        12 New England Executive Park
Stanwell Street                 Burlington
Edinburgh, Scotland             MA 01803
+44 31-554 9424                 +1 (617) 270-3510

keith@spider.co.uk              keith%spider.co.uk@uunet.uu.net
...!uunet!ukc!spider!keith      keith%uk.co.spider@nsfnet-relay.ac.uk

dcrocker@AHWAHNEE.STANFORD.EDU (Dave Crocker) (08/20/89)

The mail path for a specific host is unrelated to its location in the
domain name space.  Hence, Keith, your suggestion does seem to fit my
understanding of domain name/mail system behavior, although it does NOT
match most people's intuition about the system.

Any leaf entry (host reference) may have any IP address for itself or its
MX (mail relay) attributes.
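Crocker's point -- that a name's place in the tree says nothing about
where its mail actually goes -- might be illustrated with a toy record
table (Python; the names echo examples from the thread, and the address
is an invented one from the TEST-NET range):

```python
# A host under a UK domain whose MX relay is a machine registered
# in the US: nothing in the DNS ties the name hierarchy to
# network topology or mail paths.
records = {
    "paris.spider.co.uk": {
        "MX": "relay.spider.com",    # mail goes via a US relay
    },
    "relay.spider.com": {
        "A": "192.0.2.25",           # invented TEST-NET address
    },
}

def mail_relay(host):
    """Follow the MX attribute, falling back to the host itself."""
    return records[host].get("MX", host)

assert mail_relay("paris.spider.co.uk") == "relay.spider.com"
```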

Dave

morgan@Jessica.stanford.edu (RL "Bob" Morgan) (08/22/89)

>Another way to phrase this question is: was it the intention of the subnet
>scheme that subnets must be geographically close or only topologically "close"
>for routing purposes?  If they must be topologically close, am I better off
>       1) using subnets for remote networks and limiting my connections
>          in the future
>or     2) getting distinct network numbers to leave me flexibility in the
>          future

This problem is a rather deep one, I think, that many institutions
will have to struggle through in the near future.  I have been
somewhat involved with the case of a local university that is in much
the same boat.  The computer science dept is very anxious to get
connected, and wants to lease a T1 line right away to the nearby
connection point of the local NSFNET-sponsored regional network.  The
computer center, however, is much happier to wait for the statewide
university system, which has an existing 56Kbps network, to start
supporting IP, connect to the Internet in one place, and eventually
upgrade the campus to T1.  

The question, to paraphrase the quote above, is the relative
importance of geographical versus institutional "closeness."  The
issues in this case have more to do with the cost of long-distance T1
circuits, funding timelines, and the quality of central support than
the technical details of IP address structure.  The choices that
individual institutions make will determine the structure of the
Internet for years to come, and the nature of the technical problems
(like enormous routing tables) that have to be solved.

The domain name system (one might note) reflects the same split.  Most
domains are now organized by institutional type (.edu, .gov) but the
use of geographical domains (.us, .au) is increasing, it seems.  Maybe
someday we'll be "stanford.ca.us."

 - RL "Bob" Morgan
   Networking Systems
   Stanford

pvm@ISI.EDU (Paul Mockapetris) (08/23/89)

Hierarchical names, such as those used in the DNS, X.500, and DNANS are
popular because you can distribute authority for name creation by
assigning a node to an organization, and then let it "grow" nodes
underneath.  Because of this use, the hierarchy must always follow the
delegation of control.

We might also like to have our hierarchies correspond to something else
as well.  For example, some like the organizational structure, others
want to have a top-level be network names, others feel EDU vs COM vs
ORG is right, etc.

Sooner or later, there are problems creating one hierarchy that follows
two different criteria.  The DNS doesn't show this problem much because
the organizational criteria and tree-delegation criteria are virtually
identical.  X.400 ORname allocation schemes are debated so much because
the designers are trying to serve multiple masters.

paul

epsilon@wet.UUCP (Eric P. Scott) (08/23/89)

In article <4699@portia.Stanford.EDU> morgan@Jessica.UUCP (RL "Bob" Morgan) writes:
>                                                   I have been
>somewhat involved with the case of a local university that is in much
>the same boat.  The computer science dept is very anxious to get
>connected, and wants to lease a T1 line right away to the nearby
>connection point of the local NSFNET-sponsored regional network.  The
>computer center, however, is much happier to wait for the statewide
>university system, which has an existing 56Kbps network, to start
>supporting IP, connect to the Internet in one place, and eventually
>upgrade the campus to T1.

Mr. Morgan somewhat misrepresents the facts.  The local
university campus in question has its own IP network, does not
subnet, and is for the most part in one place geographically.
It has its own secondary domain and no current plans to
subdivide it.  As such, I don't see what relevance this case has
to do with the current thread of discussion.

The statewide data network will never offer the performance or
reliability of a BARRNET connection.  The "connection in one
place" would be hundreds of miles away "in the wrong direction,"
through many gateways under different administrative control.
Furthermore, the majority of IP traffic is expected to be
exchanged with current BARRNET members.  The "computer center"
people have no experience with IP networking, having been
sidetracked into 3Com/Bridge XNS by an ex-employee.

The major problem comes from BARRNET's insistence that potential
members purchase Proteon hardware, while the University uses
Cisco equipment almost exclusively.  Nearly all serious
connectivity failures have been traced to existing Proteon
equipment.  The Cisco equipment has proven itself admirably.  If
the University has two active connections to the Internet, the
gateways must enforce policy-based routing.  Cisco, itself an
associate BARRNET member, has experience with this.  I've noticed
that, even with two competent vendors, you end up with a lot of
finger pointing when things don't work.

					-=EPS=-

// Opinions are mine, and do not necessarily reflect those of my
// employer.  I have no relation to either Proteon or Cisco.

ggm@bunyip.cc.uq.OZ (George Michaelson) (08/24/89)

From article <8908222057.AA06211@venera.isi.edu>, by pvm@ISI.EDU (Paul Mockapetris):
> 
> Sooner or later, there are problems creating one hierarchy that follows
> two different criteria.  The DNS doesn't show this problem much because
> the organizational criteria and tree-delegation criteria are virtually
> identical.  X.400 ORname allocation schemes are debated so much because
> the designers are trying to serve multiple masters.

Also, where the DNS (or any other naming scheme) is open to
interpretation in multiple ways, problems can arise within that naming
community.

Also Also, where two or more initially disjoint (inter)nets join together,
naming conflict is to be expected. 

Actually, you can probably collapse them two also's into one. 

Question: 
	if the DNS authors had a second stab, would they 
	"put an "e" on the end of creat()" so to speak and make 
	the edu/org/gov domains lie within the CCITT preferred 
	naming model? 

	Would they be under "pressure" to do so?

	-George
ACSnet: ggm@brolga.cc.uq.oz                    Phone: +61 7 377 4079
Postal: George Michaelson, Prentice Computer Centre
          Queensland University, St Lucia, QLD 4067