[comp.dcom.lans] Request For Opinions: Optical Fiber Physical Topologies

dd@ariel.unm.edu (04/12/89)

At UNM, our network is organized as a backbone network (broadband, with
each active 6 MHz channel functioning as a 5 Mbps CSMA/CD
"Ethernet-like" data channel), serving as an inter-building transport
for Ethernets.  The Ethernets are "glued" to a particular broadband
channel in a MAC-filtered fashion - no flames please, TCP/IP isn't
everywhere it should be yet, and we have a very cooperative user
community.

As we move into distributed image processing (expected to occur in the
next 2-3 years), I need a better backbone.  Also, the cost of extending
our private CATV system is now (finally!) higher than the cost of
extending an optical fiber backbone.  And there are other advantages
(which you all know more about than I do, in all likelihood).  So fiber
it is.  And of course, the "right" way to go is to install an
FDDI-compatible wiring plant and drive it with FDDI-non-compliant
electronics, temporarily.

FDDI is logically a token ring system, but the wiring plant may
optionally be "star-shaped", i.e., go out to a building, and come back
to a "wiring center", and go out to the next building, and come back
in,... and so forth, ad nauseam.
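
To make the wiring consequence of that concrete: in a star-shaped
ring, every radial run is traversed twice, once out and once back.  A
toy calculation (Python, with made-up distances, not any real campus):

    # Star-wired ring: the logical ring goes out to each building and
    # back to the wiring center, so each radial run is traversed twice.
    radials_ft = [900, 1200, 1500, 1100]   # hypothetical radial distances
    ring_path_ft = 2 * sum(radials_ft)     # total path the token traverses
    print(ring_path_ft)                    # 9400 ft for these numbers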

My question, addressed to anyone who has a solid argument, but
particularly to folks who might have installed an FDDI-compatible
wiring plant is:  What is the best way to go?  Should I build a ring,
or a star-shaped ring, or a ring of rings, or a ring of stars, or
what?  I realize that the ultimate decision will depend on the
particulars of our campus, but if anyone can think of factors that I
ought to consider, please let me know.

Thank you all for your help!

Don Doerner				dd@ariel.unm.edu
University of New Mexico CIRT
2701 Campus Blvd, NE
Albuquerque, NM, 87131			(505) 277-8036

kwe@bu-cs.BU.EDU (kwe@bu-it.bu.edu (Kent W. England)) (04/13/89)

In article <4824@charon.unm.edu>, dd@ariel.unm.edu writes:
> 
> My question, addressed to anyone who has a solid argument, but
> particularly to folks who might have installed an FDDI-compatible
> wiring plant is:  What is the best way to go?  Should I build a ring,
> or a star-shaped ring, or a ring of rings, or a ring of stars, or
> what?  I realize that the ultimate decision will depend on the
> particulars of our campus, but if anyone can think of factors that I
> ought to consider, please let me know.
> 
> Thank you all for your help!
> 
> Don Doerner				dd@ariel.unm.edu

	[This turned out rather long.]

	We at Boston University installed Pronet-80 in '87 on a fiber
optic cable plant, and we are now writing the contract for a
contractor to install a new fiber plant in new conduit, extending our
fiber network for Pronet-80 and other uses.

	Perhaps what we did in our pilot installation and what we
designed for the extension would be instructive.

	As you know, Pronet-80 is essentially the same as FDDI with
the important exception that FDDI has Station Management.  Pronet-80
has no management of any kind.  The cable plants are the same.  Other
uses for fiber include f/o Ethernet, and sometime in the future
telephone services over fiber might be useful on campus.  I am
including single-mode fiber in the entire cable plant so I can
distribute external services anywhere on campus as needed.

	When we designed our pilot cable plant, we installed the fiber
in a big ring around a street in our campus.  We installed patch
panels in the basement of each building and we ran 24 fibers between
each patch panel.  We could install an f/o interface in each building
and the distance between f/o interfaces would be a short hop on the
trunk between panels.  Of course, for those buildings with no f/o
service, we patched through the panel to get to the next panel.  I
think we have about a dozen patch panels, but only seven routers.
That means that there are a lot of patch connections in some of our
links and most of our loss budget is in the connectors.  This is no
problem over a short distance.
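
	(A back-of-the-envelope version of that loss arithmetic, in
Python.  The per-connector and per-km figures are typical planning
numbers I am assuming here, not measurements from our plant:)

    # Loss for one hop that patches through idle panels.
    CONNECTOR_DB = 1.0        # assumed loss per mated connector pair
    FIBER_DB_PER_KM = 3.5     # assumed 850 nm multimode attenuation

    def link_loss_db(km, idle_panels):
        # two connector pairs at the end equipment, plus two per
        # panel that the link is patched through
        pairs = 2 + 2 * idle_panels
        return pairs * CONNECTOR_DB + km * FIBER_DB_PER_KM

    print(link_loss_db(0.5, 3))   # 9.75 dB, of which 8 dB is connectors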

	But then, in the process of debugging a problem with the
receivers in the f/o interfaces, we ran up against too much loss in
some links and had to go back and take out some patch cords to reduce
the loss.  Still not really a problem.

	Then we had a need to do some f/o Ethernet extension to solve
a problem we had to grandfather, so we needed some fiber with a
different topology.  Still workable, but getting a little kludgy.

	Now, scale up the distance, number of buildings, etc and you
can see that this topology design begins to get unworkable very fast.

	Our extension project triples the geographic scope of our
fiber cable plant with almost no expansion of our router base.  That
part of campus is less compute-intensive today, so the fiber is really
more for the future than to solve today's problems.  However, our
admin computing people will be joining this extension, so we will have
our hands full.  Anyway, we have a big sparse network and the need for
some low budget networking.  Not every Mac cluster can afford a
FastPath and a $25k p4200 with Pronet-80.   :-)

	After much looking around at other designs and much thought,
I came up with the multihub star topology as our preferred solution.
A full-blown star was impractical because we didn't have a logical
place to put the hub, and a ring topology just wasn't practical in
terms of providing low-loss paths where we wanted them in our sparse
matrix.

	We defined three hub sites in our extension: one on each end
for service coverage and further extension, and one roughly in the
middle.  We joined one end hub to our pilot plant thru our computer
center. 

	The hubs "tile the plane".  That is, they each have a service
area that covers the whole extension area without overlap.  Each hub
serves from three to four buildings.  The building service is star
configured from each hub to each building in its service area.  We are
running twelve multimode fibers to each building.

	The center hub is linked to each of the end hubs with dual
trunk cables, each containing 24 multimode and 4 single-mode fibers.
Unfortunately, I do not have widely separated paths to route each of
my trunk links thru, but I recommend physically dispersing your trunk
runs as much as possible.  I am having them routed through different
ducts in the duct bank and down each side of the vaults, etc. to
maximize physical redundancy and fault tolerance.  You could tie your
hubs together in a ring or a matrix, depending on geography and number
of hubs.

	I will end up placing a router in each hub and they will be
linked on the unspliced, continuous trunk cables.  Then I can install
other routers or f/o Ethernet extensions from the hubs to the
buildings in the service area.  I can place most anything in the hub
and link it up with fiber to any or every building in the service
area.  Maybe even PBXs.

	We designed a broadband extension with exactly the same
topology and a trunk amp in each hub.  All trunk service is in one of
three hub closets.  No equipment in any manhole.

	Now, when we plug our f/o interfaces together, we just have
to be sure that there is enough loss in each path (a short, low-loss
run can overdrive a receiver) so we don't have to buy attenuators.
Our longest trunk run is 2800 feet, I think, so, again,
loss is determined by connectors and not fiber loss.
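
	(The check itself is simple; the thresholds below are
illustrative only, since the real numbers depend on the receivers:)

    # A receiver has a dynamic window: too little path loss can
    # overdrive it, too much starves it.  Hypothetical thresholds.
    MIN_LOSS_DB = 3.0    # below this, pad the link (attenuator/patch cord)
    MAX_LOSS_DB = 11.0   # above this, the link goes marginal

    def path_ok(loss_db):
        return MIN_LOSS_DB <= loss_db <= MAX_LOSS_DB

    print(path_ok(1.5))  # False: a very short hop may need padding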

	I am sure we will like our multihub topology much better.  I
hope the trunk fibers last a while before we have to pull more.

	BTW, the broadband trunk pulls are going to be very tough,
requiring winches and all kinds of special work.  The fiber following
the same path can be pulled in by one man by hand.  You could probably
do it yourself.  Our broadband trunk is one-inch cable.  My tech is
very demanding about signal quality.  :-)  The fiber is about half an
inch, feather-light, and easily flexed.

	--Kent England, Boston University

morgan@Jessica.stanford.edu (RL "Bob" Morgan) (04/13/89)

As my last act at San Francisco State University in 1987, I designed a
fiber plant to support FO Ethernet (using Codenoll xcvrs) now and FDDI
very eventually.  I ended up with something that is a lot like what
Kent just described for his new net at BU:  a set of interconnected
stars.  In this case it was three stars, each with a "passive" center,
each serving four to six buildings with 12 multi-mode fibers per
building.  The stars were connected together with 12 more fibers
(probably should have been more).  This allowed a logical design that
is currently very Computer-Center-centric (it's at one of the stars),
using Ethernet bridges to link the stars, while allowing ring or
whatever eventually.

(Stanford, BTW, is a living museum of cable and signalling methods,
and probably not a model for anyone's planning, at least for baseband
applications.) 

 - RL "Bob" Morgan
   Networking Systems
   Stanford

kwe@bu-cs.BU.EDU (kwe@bu-it.bu.edu (Kent W. England)) (04/13/89)

In article <1507@Portia.Stanford.EDU> 
morgan@Jessica.stanford.edu (RL "Bob" Morgan) writes:
>
>(Stanford, BTW, is a living museum of cable and signalling methods,
>and probably not a model for anyone's planning, at least for baseband
>applications.) 
>
	I have this theory of installed wire as a complex system.
Once your installed base of wire exceeds a certain critical threshold
(no one ever pulls anything out of his wire "museum"), the cabling
takes on new and complex behaviours.  The wire becomes alive, capable
even of eating technicians who foolishly venture into the zone above the
ceiling tiles.
	We lost two techs.  We left the ladders where they were and
the ceiling tile off, hoping the wire beast would at least spit out
the bones, but it never did.

dsmith@oregon.uoregon.edu (Dale Smith) (04/13/89)

In article <1507@Portia.Stanford.EDU>, morgan@Jessica.stanford.edu (RL "Bob" Morgan) writes:
> As my last act at San Francisco State University in 1987, I designed a
> fiber plant to support FO Ethernet (using Codenoll xcvrs) now and FDDI
> very eventually.  I ended up with something that is a lot like what
> Kent just described for his new net at BU:  a set of interconnected
> stars.  In this case it was three stars, each with a "passive" center,
> each serving four to six buildings with 12 multi-mode fibers per
> building.  The stars were connected together with 12 more fibers
> (probably should have been more).  This allowed a logical design that
> is currently very Computer-Center-centric (it's at one of the stars),
> using Ethernet bridges to link the stars, while allowing ring or
> whatever eventually.

At the University of Oregon, we have taken a slightly different approach
and installed a "star of rings" rather than the "ring of stars"
described by Kent of BU and Bob of Stanford (for the network at SFSU).
We have three interconnected rings that cover most of our 1-square-mile
campus.  From our Computing Center, which is a node in one of the rings,
we run point-to-point to the other rings (thus the star of rings).  As
Kent points out, you must be careful about loss in buildings with no
active devices.  You should engineer for losing 1 dB per building that
you patch through.  Actual practice will likely be closer to 0.5 dB,
but you should engineer for 1 dB.  With FDDI, this can become a big
problem fast.  FDDI has an 11 dB loss budget.  If you want to survive a
failed station that has gone into bypass, then you need to figure
you'll lose 4 dB through the bypassed station, leaving you 7 dB.  But
you have to split the 7 dB between the runs on each side of the failed
station, giving you a 3.5 dB budget between any two stations.  Figuring
1 dB per inactive building plus a little loss in the cable, you can see
you don't want to build huge rings.  Note that I have been very
conservative in the figures above; you can get by with a lot more.
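
(The same arithmetic worked through in a few lines of Python, using
only the figures above:)

    budget_db = 11.0      # FDDI loss budget between stations
    bypass_db = 4.0       # loss through one station in optical bypass
    per_span_db = (budget_db - bypass_db) / 2    # 3.5 dB per side
    per_building_db = 1.0                        # conservative figure

    # Inactive buildings you can patch through on one span, ignoring
    # the (small) fiber loss itself:
    print(int(per_span_db // per_building_db))   # 3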

Special point-to-point applications also pose problems in a ring
topology.  You can potentially end up using lots of fiber for simple
links.  We have carefully planned where our star presences are
so that we can minimize the amount of fiber required for special
applications (we hope). 

A drawback of running FDDI over a physical star is that a single media
failure can partition your network, whereas no single failure can
partition your FDDI network if you have a true ring.

Dale Smith			Internet: dsmith@oregon.uoregon.edu
University of Oregon		BITNET: dsmith@oregon.bitnet
Computing Center		UUCP: ...hp-pcd!uoregon!dsmith
Eugene, OR  97403-1212		Voice: (503)686-4394

bud@ut-emx.UUCP (04/13/89)

In article <29516@bu-cs.BU.EDU> kwe@buit13.bu.edu (Kent England) writes:
>In article <1507@Portia.Stanford.EDU> 
>morgan@Jessica.stanford.edu (RL "Bob" Morgan) writes:
>>
>>(Stanford, BTW, is a living museum of cable and signalling methods,
>>and probably not a model for anyone's planning, at least for baseband
>>applications.) 
>>
>	I have this theory of installed wire as a complex system.
>Once your installed base of wire exceeds a certain critical threshold
>(no one ever pulls anything out of his wire "museum"), 

As a former curator of the Stanford wire museum I can second the
observation that no one ever retires a network system that runs, no
matter how poorly designed or implemented it is.  

There ain't no such beast as a "temporary" network.  Network designers
be warned: if you install a quick-and-dirty hack (like we did at
Stanford), odds are it will become a permanent embarrassment.

morgan@Jessica.stanford.edu (RL "Bob" Morgan) (04/14/89)

Kent writes:

> The wire becomes alive, capable even of eating technicians who
> foolishly venture in the zone above the ceiling tiles.  We lost two
> techs.  We left the ladders where they were and the ceiling tile off,
> hoping the wire beast would at least spit out the bones, but it never
> did.

Ah, there's your mistake: true cable techs have no bones, the better
to sneak cables thru skinny conduits (though, of course, *real* cable
techs don't use conduits, or plenum-rated cable).  Your report of
disappearing technicians adds fuel to my conjectures about the true
composition of Yellow-77 . . . *8^)*

 - RL "Bob"

ciriello@lafcol.UUCP (Patrick Ciriello II) (04/14/89)

      We have just finished designing a cabling plant that divides the
campus buildings into 16 logical rings, all connected by fiber, and a
backbone that connects all the rings.  Each building will get 18 fibers.
This will allow for a main path (with built-in backup path), a full
backup set of fibers (used 4 so far), 2 for another net to run
concurrently (if necessary), 2 for its backup, and the rest to be used
for any future needs (including telephone, video, security, direct
links between certain departments, etc.).
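
(One plausible tally of that 18-fiber allotment in Python; the
per-category counts are my reading of the breakdown above, not a
spec:)

    # Hypothetical split of the 18 fibers per building.
    allocation = {
        "main path w/ built-in backup": 4,   # e.g. a counter-rotating pair
        "full backup set":              4,   # "used 4 so far"
        "second net":                   2,
        "second-net backup":            2,
    }
    committed = sum(allocation.values())     # 12
    print(18 - committed)                    # 6 left for future needs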

We are linking 44 buildings, and the total length of fiber is about 5.85
miles.