[comp.protocols.tcp-ip] Thick or Thin

mep@AQUA.WHOI.EDU (Michael E. Pare) (10/03/89)

You've seen pros and cons of both.  The biggest problem with either is
trying to run wiring from the host to the 'backbone', whatever it may be.
Thick is a pain if you have several machines in the same area: each requires
a drop using bulky transceiver cable, and you can quickly run out of space
on the cable (transceivers must be 2.5m apart) unless you use a multiport
transceiver, but then you are still stuck with bulky transceiver cables.  An
entire backbone of Thinnet is clumsy, can easily lead to wiring faults and
hard-to-trace network problems, and has severe node limitations.

One escape may be a thick ethernet backbone with thick-to-thin repeaters
used to hook up local groupings of hosts.  One repeater can support, say, 8
areas, with each area supporting several machines (or just one).  This
provides better fault isolation and enables a large node installation.
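
To put rough numbers on "a large node installation" - this is a
back-of-the-envelope sketch only, written out in Python just to keep the
arithmetic honest.  The 500m/100-tap thick and 30-node thin segment limits
are the usual 802.3 figures; the 8-port fanout is the one assumed above:

	# Capacity of the thick-backbone-plus-thick-to-thin-repeater layout.
	THICK_SEGMENT_M = 500    # max thick segment length (802.3)
	TAP_SPACING_M   = 2.5    # minimum spacing between taps
	THICK_MAX_TAPS  = 100    # spec limit on taps per thick segment
	THIN_MAX_NODES  = 30     # spec limit on nodes per thin segment
	REPEATER_PORTS  = 8      # "say 8 areas" per repeater, as above

	# 500m at 2.5m spacing marks out 200 positions, but the spec stops
	# you at 100 taps per segment:
	taps = min(int(THICK_SEGMENT_M / TAP_SPACING_M), THICK_MAX_TAPS)

	# Worst case, every tap feeds a repeater and every repeater port
	# feeds a fully loaded thin segment:
	hosts = taps * REPEATER_PORTS * THIN_MAX_NODES
	print(taps, "taps ->", hosts, "hosts")    # 100 taps -> 24000 hosts

Nobody would load it anywhere near that fully, but it shows why a host
count that would swamp a single Thinnet run is no problem for this layout.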

I would definitely suggest you look into twisted pair format (as someone
mentioned).  This can be the least costly to install if the twisted pairs
(just one using 3COM's system, or two for Synoptics or eventually the 
10BASET standard) are already available.  You can even install the twisted
pairs separately for a lower cost than trying to run a lot of coax.  This
method provides for the best fault isolation and is the easiest to support
if people move around, as well as allowing for a large node installation.

I've installed and supported all three and twisted pair wins hands down.
By the way, 3COM's twisted pair ethernet is based on thinnet technology
while Synoptics' is based more on thick.

ian@lassen.wpd.sgi.com (Ian Clements) (10/03/89)

In article <8910022022.AA09420@aqua.whoi.edu>, mep@AQUA.WHOI.EDU (Michael E. Pare) writes:
> 
> ...This can be the least costly to install if the twisted pairs
> (just one using 3COM's system, or two for Synoptics or eventually the 
> 10BASET standard) are already available.  

 I disagree.  Remember that you need one transceiver for each workstation
regardless of whether you use thick, thin, or twisted pair (assuming that
no workstation already has a thinnet xcvr installed).  I'm calling the xcvr
costs a wash even though there is a $50 difference between thin and twisted
pair (thin being least expensive and twisted pair most expensive).  So we can 
assume that all that is being compared is transmission medium and associated 
equipment, correct?  
 
 Following that assumption, backbone cable for thick or thin costs
$1.25/ft and $.29/ft respectively.  Twisted pair cable costs $.05/ft.  The
twisted pair cable itself is much cheaper; however, to make the whole thing
work you need a box (commonly referred to as an active star repeater) that
can cost somewhere around $8k for more than 10 connections.
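
To make the break-even concrete - a quick sketch, again in Python, using
only the per-foot prices and the ~$8k star repeater quoted above.  The
75ft-per-drop run length is an invented example; plug in your own:

	# Per-foot prices quoted above.  Thick ($1.25/ft) is left out:
	# it is priced per backbone run, not per drop.
	THIN_PER_FT = 0.29
	TP_PER_FT   = 0.05
	STAR_BOX    = 8000.00    # active star repeater, >10 connections

	def thin_cost(drops, ft_per_drop=75):
	    # daisy-chained thin coax: essentially cable only
	    return drops * ft_per_drop * THIN_PER_FT

	def tp_cost(drops, ft_per_drop=75):
	    # home runs of twisted pair back to one active star repeater
	    return STAR_BOX + drops * ft_per_drop * TP_PER_FT

	for n in (10, 50, 100, 500):
	    print("%3d drops: thin $%8.2f  tp $%8.2f"
	          % (n, thin_cost(n), tp_cost(n)))

With those made-up run lengths the $8k box dominates until you are several
hundred drops in, which is the point: the cheap wire doesn't buy you much
until the installation is big or the pairs are already in the walls.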

 Admittedly, once the twisted pair system is up and running, maintenance,
additions and other changes are far easier to deal with.  One can almost 
always install more twisted pair cable than either thick or thin backbone
and drop cables for the same cost.  Here at SGI, for example, every office and
cube gets a 4-port data block.  From that block you can have Ethernet, 
PhoneNET (AppleTalk) and serial connections.

 Then of course there is the issue of the 10BASET spec, which is to be 
released soon.  What does that mean to those of us with large twisted pair
installations?


	Cheers,

	Ian

ecf_hap@jhunix.HCF.JHU.EDU (Andrew Poling) (10/04/89)

In article <42457@sgi.sgi.com> ian@lassen.wpd.sgi.com (Ian Clements) writes:
[...]
> I disagree.  Remember that you need one transceiver for each workstation
>regardless of whether or not you use thick, thin or twisted pair (assuming that
>no workstation already has a thinnet xcvr installed).  I'm calling the xcvr
>costs a wash even though there is a $50 difference between thin and twisted
>pair (thin being least expensive and twisted pair most expensive).


I've been watching this discussion go by and I want in.  Over and over I've
been muttering to myself, "But they've probably already bought a thinnet
xcvr - whether they know it or not".

Fact is, a whole bunch of workstations come from the factory with an xcvr on
board and that cute little BNC connector on the back.  And what about
PCs/compatibles?  Almost EVERY ethernet card that I've laid eyes on for PCs
has both thick and thinnet interfaces.  Some machines are even available ONLY
with thinnet interfaces (DEC VS2000's come immediately to mind - there must
be others, though).  In order to put a machine like this on thicknet or
twisted-pair, you must insert a $1,000-or-so repeater of some sort.  Uh-oh,
kinda makes those sweeping generalizations start looking like expensive
oversights.

When considering the relative costs of thicknet, thinnet, and twisted-pair,
you have to consider that a lot of machines come thinnet-ready.  If we're
figuring the costs of putting such machinery on alternate cable types, we
have to admit that we're buying two xcvrs for each machine and leaving one
idle.  We may also end up jacking up our repeater count when converting from
one cable type to another.
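
To put a number on that hidden cost - the $1,000 repeater figure is the one
quoted above, but the $250 xcvr price is my own round-number assumption:

	# Penalty for moving thinnet-ready machines onto another medium:
	# you buy (and idle) a second xcvr per machine, plus the extra
	# repeaters needed to convert between cable types.
	XCVR_COST     = 250.00     # assumed price of one external xcvr
	REPEATER_COST = 1000.00    # "a $1,000-or-so repeater of some sort"

	def conversion_penalty(thinnet_ready_machines, extra_repeaters):
	    return (thinnet_ready_machines * XCVR_COST
	            + extra_repeaters * REPEATER_COST)

	# e.g. 40 thinnet-ready machines and 5 extra repeaters:
	print("$%.2f" % conversion_penalty(40, 5))    # $15000.00

Forty machines and a handful of repeaters and you have quietly spent the
price of another active star box, much of it on xcvrs that sit idle.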

We've been using thinnet more and more, recently, for in-building wiring
for several reasons:
	1) it's smaller in diameter and more flexible and thus easier to
		put in place than thicknet (twisted-pair likewise)
	2) thinnet cable termination is a breeze compared to xcvr cable
		termination (we're talking about putting a connector on
		the end of the cable - fabricating custom-length cables)
	3) when we want to put several machines in one room, we can
		daisy-chain
	4) people with thinnet-ready equipment save money - no additional
		cost for a xcvr
	5) people with DB-15 interfaces spend less because thinnet costs
		considerably less than xcvr cable per foot and we can put
		the thinnet xcvr very close to the machine

We still use thicknet for the "backbone" (inter-building cabling) and for
some "building risers" for mostly obvious reasons:
	1) greater allowable cable length - important with our geography
		and layout
	2) greater (perceived at least) durability under the sometimes
		adverse conditions encountered - I'm not sure that there
		isn't sufficiently sturdy thinnet cable available nowadays;
		in fact, I'm sure there probably is
	3) the best reason of all - a lot of our cabling of this type
		predates the wide availability and usage of thinnet


I mostly wanted to point out some easily overlooked "hidden" costs.  It's
amazing how fast the costs can climb when you start figuring in repeaters
and pairs of xcvrs.

-Andy

--
Andy Poling                              Internet: andy@gollum.hcf.jhu.edu
Network Services Group                   Bitnet: ANDY@JHUVMS
Homewood Academic Computing              Voice: (301)338-8096    
Johns Hopkins University                 UUCP: mimsy!aplcen!jhunix!gollum!andy

roy@phri.UUCP (Roy Smith) (10/04/89)

In <2781@jhunix.HCF.JHU.EDU> andy@gollum.hcf.jhu.edu (Andy Poling) writes:
> We still use thicknet for the "backbone" (inter-building cabling) and for
> some "building risers" for [...] greater (perceived at least) durability
> under the sometimes adverse conditions encountered

	I suppose the thick trunk cable is pretty tough, and the way the
typical vampire tap xceiver connects to the cable is pretty good, but the
stupid D-15 connectors for the xceiver drop cable are a disaster.  The
xceiver ends don't give us any trouble because they are hidden away in a
ceiling or wall where nobody can touch them, and we have the drop cable
firmly lashed to the trunk cable with 2 or 3 nylon cable ties to keep them
from shifting.  On the other hand, the connection from the xceiver cable to
the workstations is our single most common cause of network failures.  You
just can't take a stiff heavy cable and expect it to stay attached to a
flimsy slide-lock widget, especially where the cable and/or the workstation
are capable of being moved by accident.

	The Sun 3/50 is about the worst in this respect.  We've totally
given up on thick cable for PC's too.  Our DEC, Kinetics, and TCL gear
seem to have somewhat better designed slides, but still suffer from the
inherent absurdity of the basic design.  Our UB gear has the slide locks
replaced with screws.  It makes for a non-standard cable, but at least it
doesn't fall out (this is, by the way, about the only good thing I can
think to say about our UB ethernet bridges).  We have a few machines in one
office suite on thin (with a DEC DESPR thick-to-thin repeater) and have
never had any trouble with the connections at all.
-- 
Roy Smith, Public Health Research Institute
455 First Avenue, New York, NY 10016
{att,philabs,cmcl2,rutgers,hombre}!phri!roy -or- roy@alanine.phri.nyu.edu
"The connector is the network"

jfinke@itsgw.rpi.edu (Jon Finke) (10/04/89)

In <4028@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>	I suppose the thick trunk cable is pretty tough, and the way the
>typical vampire tap xceiver connects to the cable is pretty good, but the
>stupid D-15 connectors for the xceiver drop cable are a disaster.  The
>xceiver ends don't give us any trouble because they are hidden away in a
>ceiling or wall where nobody can touch them, and we have the drop cable
>firmly lashed to the trunk cable with 2 or 3 nylon cable ties to keep them
I have had cables like this cause problems when someone else pulls
a wire through the ceiling, or opens the ceiling for any other reason.

>from shifting.  On the other hand, the connection from the xceiver cable to
>the workstations is our single most common cause of network failures.  You
>just can't take a stiff heavy cable and expect it to stay attached to a
>flimsy slide-lock widget, especially where the cable and/or the workstation
>are capable of being moved by accident.

We also found this to be our most common ethernet failure at RPI.
It is now standard practice here to replace slide lock hardware
with screw lock hardware for all installations.  We mostly use
thinnet, but enough equipment comes through with the DB15s
that we still convert when it arrives.   About half the equipment can
be modified without opening the covers.   I don't think we have had
a failure of a screw-lock-connected DB15.  We also make a lot of our
own drop cables; that way we can get the correct hoods and hardware
for them.  Some cables can also be modified for screwlock hardware.
This does preclude the use of right angle cables, but that seems a 
small price to pay for the greatly increased reliability.

-- 
Jon Finke                              jfinke@itsgw.rpi.edu
Network Systems Engineer               USERB239@RPITSMTS.BITNET
Information Technology Services        518 276 8185 (voice)
Rensselaer Polytechnic Institute       518 276 2809 (fax)

henry@utzoo.uucp (Henry Spencer) (10/05/89)

In article <4028@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
> The Sun 3/50 is about the worst in this respect...  Our DEC, Kinetics,
> and TCL gear seem to have somewhat better designed slides...

It's worth pointing out that the problems on the Suns are not just poor
design, they are out-and-out violations of the specs for the connector
mounting.  The connectors never get a chance to seat to their full depth.
Actually meeting the letter of the law of the specs takes work and is a
bit inconvenient; Sun didn't bother.
-- 
Nature is blind; Man is merely |     Henry Spencer at U of Toronto Zoology
shortsighted (and improving).  | uunet!attcan!utzoo!henry henry@zoo.toronto.edu

earle@poseur.JPL.NASA.GOV (Greg Earle - Sun JPL on-site Software Support) (10/08/89)

In article <1989Oct4.153141.19593@rpi.edu> jfinke@itsgw.rpi.edu (Jon Finke) writes:
>It is now standard practice here to replace slide lock hardware
>with screw lock hardware for all installations.  We mostly use
>thinnet, but enough equipment comes through with the DB15s
>that we still convert when it arrives.

Please, people, think about what you are talking about before you post.
There is no reason a thread about Ethernet cabling and DB-15 connectors should
be cluttering up the TCP-IP mailing list (a.k.a. comp.protocols.tcp-ip).
Please take it over to comp.dcom.lans where it belongs.

(We now return you to the "ASCII vs. PostScript" 15-round title bout (^: )

--
	Greg Earle
	Sun Microsystems, Inc. - JPL on-site Software Support
	earle@poseur.JPL.NASA.GOV	(direct)
	earle@Sun.COM			(indirect)