[comp.sys.apollo] changing our net topology. advice needed.

herb@blender.UUCP (Herb Peyerl) (01/11/90)

We are in the midst of submitting a proposal to the big-wigs to
obtain some more admin nodes and alter our network topology... Presently
we have 42 Apollos hooked up on three rings like so--->

	O-O-O

However, we're buying approximately 30 new nodes in the next year, and
this will present us with a rather disgusting backup problem.  So we
thought we'd alter the topology to reflect 4 rings in box formation
like so--->

	O-O
	| |
	O-O

We figure that this way, if any ONE of the four gateways were to
fail, we'd still have network traffic between the rings.  My
question(s) is/are:

1) Is the Domain routing smart enough to recognize that a gateway has
gone down and automatically route packets the other way around???

2) Is there any sort of special setup in the rtsvc or startups that we
should do to reflect this topology and possibly account for the alternate
routing???

3) Is anyone doing this and have you ever experienced any rude and
twisted problems with this sort of thing?

4) How good is TCP/IP at dealing with this sort of thing?  I would
think routed would handle this with relative ease.

Unrelated Questions
-------------------

1) out of idle curiosity, how many nodes does each sys_admin at your
site have to deal with?  Should I feel justified in being stressed
out with 42 nodes at the present time?  Will I be near suicide when
this figure approaches 80 nodes?

2) What're everyone's feelings and experience with Technet? We're
having some rather disgusting problems with it and aren't overly 
pleased with the product.  Of course, we're running it on a DN3000
with 4 megs, which is likely part of the problem.  It only works
about half the time, and the VAX cluster likes to report the Apollo
as being down whenever a DECnet operation is performed on the Apollo.

3) Does anyone have any insight as to when the 1.2 update of Technet 
is due? (Apollo?).


    Any help is greatly appreciated...  I will summarize any interesting
    private replies that I get.


-- 
---------------------------------------------------------------------
UUCP: herb@blender.UUCP   ||  ...calgary!xenlink!blender!{herb||root}
ICBM: 51 03 N / 114 05 W  || Apollo Sys_admin, Novatel Communications
"The other day, I...... No wait..... That wasn't me!" <Steven Wright>

dbfunk@ICAEN.UIOWA.EDU (David B Funk) (01/16/90)

Herb, you say:

> We are in the midst of submitting a proposal to the big-wigs to
> obtain some more admin nodes and alter our network topology... Presently
> we have 42 Apollo's hooked up on three rings like so--->
> 
> 	O-O-O
> 
> however, we're buying approximately 30 new nodes in the next year and
> this will present us with a rather disgusting backup problem.  So we
> thought we'd alter the topology to reflect 4 rings in box formation
> like so--->
> 
> 	O-O
> 	| |
> 	O-O

    I have a silly question for you: why not configure your network
like so --> O ? (i.e., have all your nodes in one ring?) Traffic around
a single ring is much faster than traffic through gateways. A single
network is actually easier and quicker to administer if you have
reliable hardware. Do you have some hardware limitation (such as a T1
bridge) that forces you to run several small rings? I can't imagine a
need to break up such a small net except due to physical limitations.
    Our network currently is configured as 2 rings. The ring in our main
building has 100 nodes on it and has almost 4 miles of wire in it. We
average 5 to 10 million packets a day of network traffic. Our other ring
has only 35 nodes on it but they are spread out in 4 other buildings
that are connected to our main building with a total of over 3 miles of
fiber-optic cable & DFL100 modems. The only reason that I set the net up
as 2 rings was because I worried about the reliability of the fiber
links and the total ring size. I have heard of nets larger than ours
with hundreds of nodes in one ring. So I don't think that you need to be
afraid of a large network.
    I'm curious as to what the practical limitations are for network
size: in node count, total network circumference, and network traffic.
Does anybody have experience with a ring of over 10 miles in
circumference? I heard a rumor of somebody at Motorola wanting to connect
together rings that were in different parts of the country (like Texas
and Illinois).

WRT your question:
> 1) out of idle curiosity, how many nodes does each sys_admin at your
> site have to deal with?  Should I feel justified in being stressed
> out with 42 nodes at the present time?  Will I be near suicide when
> this figure approaches 80 nodes?

That depends upon a lot of things: how old & flaky is the hardware?
how many different things do you do as "sys_admin"? How much user
support do you do? How many users do you have? (is that 80 users or
800 users for those 80 nodes?)

We have the equivalent of 3 sys_admins (2 full-time people and 2
part-time) for our 135 nodes, and I find that our 2000+ users create
far more heat than the nodes do.

Dave Funk

kint@software.org (Rick Kint) (01/17/90)

In article <90@blender.UUCP> herb@blender.UUCP (Herb Peyerl) writes:
>We are in the midst of submitting a proposal to the big-wigs to
>obtain some more admin nodes and alter our network topology

	We have this:

        1
       /|\
      | 4 |
      |/ \|
      2---3

(My apologies for the diagram; there are limits to character graphics).

	The idea is that rings 1, 2, and 3 (which correspond to floors
in our building) can get to each other directly, and ring 4 (the control
ring, which has central servers and the systems on which backups are run)
can get to rings 1, 2, and 3 directly--we didn't want backup traffic
competing with user traffic.  We use six routers, between the following
pairs of rings: 12, 23, 31, 41, 42, and 43.  The first three are also
gateways between the Apollos and Ethernet and run NFS.  These are DN3000s
with extra memory.  There are about 180 nodes total.

	We can lose any two routers without losing connectivity.  With a
year of experience behind us, yes, it's overbuilt, but these nodes handle
TCP/IP traffic as well.  Older TCP/IP was very unstable, so we needed
redundancy then.
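
	That claim is easy to check by brute force.  Here is a quick
sketch (Python; the ring numbers and router pairs are transcribed from
the diagram above, and nothing in it is Apollo-specific):

    from itertools import combinations

    # Rings and router pairs, straight from the diagram above.
    rings = {1, 2, 3, 4}
    routers = [(1, 2), (2, 3), (3, 1), (4, 1), (4, 2), (4, 3)]

    def connected(edges):
        """True if every ring is reachable from ring 1 over 'edges'."""
        reached, frontier = {1}, [1]
        while frontier:
            ring = frontier.pop()
            for a, b in edges:
                if a == ring and b not in reached:
                    reached.add(b)
                    frontier.append(b)
                elif b == ring and a not in reached:
                    reached.add(a)
                    frontier.append(a)
        return reached == rings

    # Knock out every possible pair of routers and re-test.
    print(all(connected([r for r in routers if r not in dead])
              for dead in combinations(routers, 2)))
    # -> True: no two router failures can partition the four rings.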

>1) Is the Domain routing smart enough to recognize that a gateway has
>gone down and automatically route packets the other way around???

	Sure.  We've gone for extended periods without noticing that
a gateway has crashed (or that routing has died).  One may have doubts
about Apollos in some areas (like UNIX compatibility), but it's hard to
fault their networking.  It just works...

>2) Is there any sort of special setup in the rtsvc or startups that we
>should do to reflect this topology and possibly account for the alternate
>routing???

	No.  The alternate routing will happen magically at need.  Just
make sure that the rtsvc commands are correct.

>3) Is anyone doing this and have you ever experienced any rude and
>twisted problems with this sort of thing?

	The only twisted problem we've seen was our own fault.  If two
gateways broadcast different network numbers (set in the rtsvc command)
on the same ring, life gets very interesting very quickly.  If you move
gateways around, don't be sloppy.
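
	The invariant is easy to state: every gateway on a given ring
must broadcast the same network number.  If you keep an inventory of
what each gateway advertises, a few lines can flag a mismatch.  A
sketch (Python; the data layout and the numbers here are hypothetical,
not pulled from a real rtsvc setup):

    # Hypothetical inventory: for each ring, the network number that
    # each gateway broadcasts on it.
    advertised = {
        "ring_1": {"gw_12": 0x1001, "gw_31": 0x1001, "gw_41": 0x1001},
        "ring_2": {"gw_12": 0x2002, "gw_23": 0x2002, "gw_42": 0x9999},
    }

    for ring, gateways in advertised.items():
        if len(set(gateways.values())) > 1:
            print("%s: gateways disagree on network number: %s"
                  % (ring, gateways))
    # -> flags ring_2, where gw_42 is broadcasting the wrong number.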

>4) How good is TCP/IP for dealing with this sort of thing.  I would
>think routed would handle this with relative ease.

	Yes, with a couple of reservations.  In early SR10 releases,
Apollo recommended using the "-h" option to routed on non-gateways.
DO NOT DO THIS;  it was removed from rc.local at SR10.2.  This turns
routed into the equivalent of makegate;  it sets up the routing table
and then exits.  You get a process slot back but lose dynamic routing,
which is often not a beneficial tradeoff.

	The length of time it takes to update the routing table depends
on a number of circumstances (see the man page for routed, or RFC 1058).
If you are replacing a route with a more expensive route, it takes longer
and a connection may die if one end is pounding on it in the interim.
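
	The rule behind that delay is worth spelling out.  Under RIP
(RFC 1058), a cheaper route, or any update from the gateway already in
use, replaces the table entry immediately; a more expensive route from
a different gateway is ignored until the old entry has been silent for
the route timeout (180 seconds).  A sketch of that decision (Python;
the names are illustrative, and this compresses the RFC's logic rather
than reproducing Apollo's routed):

    ROUTE_TIMEOUT = 180.0   # seconds of silence before a route is dropped

    def consider_update(table, dest, gw, metric, now):
        """Apply one advertised route to the table, per the RIP rule."""
        entry = table.get(dest)
        if entry is None or metric < entry["metric"] or gw == entry["gw"]:
            # Cheaper routes, and any news from the gateway we already
            # use, take effect immediately.
            table[dest] = {"gw": gw, "metric": metric, "heard": now}
        elif now - entry["heard"] > ROUTE_TIMEOUT:
            # A worse route from a different gateway wins only after
            # the old one has timed out; hence the slow switchover.
            table[dest] = {"gw": gw, "metric": metric, "heard": now}
        # Otherwise: ignore the worse route and keep waiting.

    table = {}
    consider_update(table, "ring3", "gw_a", 1, now=0.0)    # initial route
    consider_update(table, "ring3", "gw_b", 3, now=10.0)   # worse: ignored
    consider_update(table, "ring3", "gw_b", 3, now=200.0)  # gw_a silent
    print(table["ring3"]["gw"])   # -> gw_b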

	Note that you don't get TCP/IP routing automatically when you
have Domain routing: TCP/IP routing is handled by routed, Domain
routing by rtsvc.

--
Rick Kint                          CSNET:   kint@software.org
Software Productivity Consortium   ARPANET: kint%software.org@relay.cs.net   
Herndon, VA                        UUNET:   ...!uunet!sunny!kint