uggora@sunybcs.UUCP (02/13/87)
I am new in this area and have several questions related to Ethernet. I hope that somebody in this newsgroup can help me. Here are the questions:

1) What are the major factors that limit the length of cable in Ethernet?
2) Why is the network length of broadband Ethernet longer than that of baseband?
3) What are the major factors that limit the data transfer rate of Ethernet?
4) Why can broadband Ethernet handle a higher data transfer rate than baseband?

Thank you,
Michael Gora
berger@clio.UUCP (02/17/87)
Briefly, the length of the cable is limited by capacitance and the number of terminations. Splices will change the SWR and also affect performance and maximum length.

Baseband networks use the entire cable bandwidth for a single channel. A broadband cable of the same bandwidth might be divided into dozens or hundreds of channels. Although you could theoretically squeeze the same performance out of either system, high-performance baseband systems are rare (and usually wasteful, since every device would have to be very fast to take advantage of the entire bandwidth). If broadband networks are longer than baseband networks, it's only because broadband repeaters are easier and cheaper to build.

Data transfer rates, as noted above, are based on a number of factors. A typical intra-city television cable may carry 150 channels, each of which is several megahertz wide; 150 devices can thus use the cable concurrently. Only one device can use a broadband cable at a time, so each of 150 devices would have to wait its turn (and possibly arbitrate to decide who grabs the line, wasting more bandwidth). Slow devices tie up the bus for longer periods.

Your last question is sort of a non sequitur - I'm not convinced that broadband systems have inherently more bandwidth than baseband systems - but you can see why it's easier to use all the bandwidth in a broadband system.

Mike Berger
Center for Advanced Study, University of Illinois
{ihnp4|convex|pur-ee}!uiucuxc!clio!berger
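The channel arithmetic above can be sketched in a few lines of Python. Both figures below are assumptions chosen for illustration only, not taken from any particular cable plant:

```python
# Back-of-the-envelope sketch of frequency-division on a broadband cable:
# one wide spectrum sliced into many independent TV-style channels.
# Both figures are assumed values for illustration only.
CABLE_SPECTRUM_MHZ = 450   # assumed usable spectrum of a CATV-style plant
CHANNEL_WIDTH_MHZ = 6      # assumed TV-style channel width

channels = CABLE_SPECTRUM_MHZ // CHANNEL_WIDTH_MHZ
print(f"{channels} independent channels can share the cable concurrently")
```

With these assumed numbers the cable carries 75 concurrent channels; a baseband cable of the same spectrum carries exactly one.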
gardner@uxc.cso.uiuc.edu.UUCP (02/18/87)
The length limitation of baseband Ethernet is a function of the minimum packet size, the speed of propagation in the specific cable, and the collision detect algorithm. The maximum length of cable ensures that if two packets are transmitted from either end of the cable, each sender (and everyone in between) will see the other's packet, and hence the collision, while it is still transmitting. Broadband Ethernets get around the problem by using different collision schemes. Most broadband Ethernets actually offer less bandwidth than baseband, and fewer packets/second, due to their collision detection.

mgg
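The round-trip argument above can be made concrete with a short Python sketch. The bit rate and minimum frame size are the well-known 10 Mb/s Ethernet figures; the coax velocity factor is an assumption, and the real standard's limit is much shorter once repeaters, jitter, and transceiver delays are also budgeted:

```python
# Sketch: the minimum frame must outlast one cable round trip, so that a
# transmitter at one end still hears a collision started at the far end.
BIT_RATE = 10_000_000       # bits/s (10 Mb/s Ethernet)
MIN_FRAME_BITS = 512        # 64-byte minimum frame
C = 3.0e8                   # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.77      # assumed signal speed in coax, as a fraction of c

min_frame_time = MIN_FRAME_BITS / BIT_RATE       # 51.2 microseconds
# The frame time must cover the round trip: out to the far end and back,
# so the one-way length bound uses half the frame time.
raw_length_bound = (min_frame_time / 2) * C * VELOCITY_FACTOR
print(f"raw propagation bound: {raw_length_bound:.0f} m")
```

The raw bound comes out near 6 km; the actual configuration rules are far tighter because the collision must survive repeaters and still arrive with margin.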
phil@amdcad.UUCP (02/19/87)
In article <18500001@clio> berger@clio.Uiuc.ARPA writes:
>Briefly, the length of the cable is limited by capacitance and
>the number of terminations. Splices will change the SWR and
>also affect performance and maximum length.

This is sort of correct but not really. Maximum cable length is limited by several things, but most notably by *attenuation*. Capacitance does have something to do with this, but so does the resistance of the copper. If you've ever looked at Thin Wire Ethernet cable vs regular Ethernet trunk cable, you'll see that there's a lot more copper in the regular trunk cable. That's why you can use 500 meters vs 185 meters. Other considerations in setting Ethernet physical dimensions include timing jitter and collision propagation times.

An Ethernet cable should only have two terminations, one at each end. The transceivers are *taps*, not terminations. (By Ethernet cable, I mean the trunk cable. The cables to the transceivers are called either transceiver cables or AUI cables.)

>If broadband networks are longer than baseband networks, it's only
>because broadband repeaters are easier and cheaper to build.

In the case of Ethernet, the limit is imposed by collision propagation times. Ethernet repeaters are not particularly difficult to build.

>Data transfer rates, as noted above, are based on a number of factors.
>A typical intra-city television cable may carry 150 channels, each of
>which is several megahertz wide. 150 devices can thus use the cable
>concurrently. Only one device can use a broadband cable at a time,
>thus, each of 150 devices would have to wait their turn (and possibly
>arbitrate to decide who grabs the line, wasting more bandwidth). Slow
>devices tie up the bus for longer periods.

I've stayed away from broadband systems like the plague (who needs modems that stop working because the temperature changed from 75 degrees to 70 degrees) but I believe you are wrong about broadband cable usage.
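The attenuation point can be illustrated with a loss-budget sketch in Python. The dB figures below are assumptions picked to reproduce the familiar 500 m / 185 m segment lengths, not values from the standard:

```python
# Sketch: same allowable end-to-end loss, different cable loss per metre.
# Thicker coax (more copper) loses fewer dB per 100 m, so the same budget
# buys a longer segment. All dB figures are assumed, for illustration.
LOSS_BUDGET_DB = 8.5            # assumed allowable trunk loss
THICK_COAX_DB_PER_100M = 1.7    # assumed loss of thick trunk cable
THIN_COAX_DB_PER_100M = 4.6     # assumed loss of RG-58-class thin cable

def max_length_m(db_per_100m):
    """Segment length at which the assumed loss budget is exhausted."""
    return 100 * LOSS_BUDGET_DB / db_per_100m

print(f"thick trunk: {max_length_m(THICK_COAX_DB_PER_100M):.0f} m")
print(f"thin wire:   {max_length_m(THIN_COAX_DB_PER_100M):.0f} m")
```

With these assumed losses the budget runs out at roughly 500 m for thick trunk cable and roughly 185 m for thin cable, matching the figures in the post.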
The way I understood it, broadband bandwidth is divided up into a number of channels, much like television. Each group of devices which need to communicate with each other gets a channel. I think they are about 6 MHz. If this seems like television, you're right. There is a reuse of cable TV components, due to the great economy of scale those components have achieved, as well as the large base of people who know how to work with them. I could be wrong about the broadband stuff, but I'm pretty sure about the baseband stuff.

--
How can I be Asian when I like milk so much?
Phil Ngai +1 408 982 7840
UUCP: {ucbvax,decwrl,hplabs,allegra}!amdcad!phil
ARPA: amdcad!phil@decwrl.dec.com
lien@osu-eddie.UUCP (02/20/87)
One reason that a broadband system provides a higher data rate is that the spectrum of baseband signals is wider than that of broadband signals, so the bandwidth utilization of broadband systems is higher. Although the carrier frequency of broadband signals is higher than that of baseband signals, the harmonic content is much less: broadband signals use sine waves while baseband signals use square waves. The following diagram can give you an idea why the spectrum of a baseband (square-wave) signal is wider after a Fourier transform:

          +------+      +------+
          |      |      |      |    <---- sharp edges carry the
          |      |      |      |          high-frequency components
    ------+      +------+      +------
          flat tops carry the low-frequency component

Yao-Nan Lien
--
Yao-Nan Lien
Department of Computer and Information Science
Ohio State University
2036 Neil Ave. Mall, Columbus, Ohio 43210-1277
Tel 614 292-5236
CSNet: lien@ohio-state.CSNET
Arpa: lien@ohio-state.arpa
UUCP: {cbosgd, ihnp4}!osu-eddie!lien
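The square-wave-vs-sine argument above is just the Fourier series of a square wave, which can be sketched in Python. This is a textbook result, shown here only to make the harmonic content concrete:

```python
import math

# Fourier series of an ideal unit square wave: only odd harmonics are
# present, with amplitude 4/(pi*n) for the n-th harmonic. A pure sine
# (such as a modulated broadband carrier) has a single spectral line.
def square_wave_harmonic(n):
    """Amplitude of the n-th harmonic of a unit square wave."""
    return 4.0 / (math.pi * n) if n % 2 == 1 else 0.0

for n in range(1, 10):
    print(f"harmonic {n}: amplitude {square_wave_harmonic(n):.3f}")
```

The amplitudes fall off only as 1/n, so significant energy remains many multiples of the fundamental up the spectrum - which is why the square-wave baseband signal occupies a wide band while the sine carrier does not.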
andersa@kuling.UUCP (02/23/87)
In article <14863@amdcad.UUCP> phil@amdcad.UUCP (Phil Ngai) writes:
>I've stayed away from broadband systems like the plague (who needs
>modems that stop working because the temperature changed from 75
>degrees to 70 degrees) but I believe you are wrong about broadband
>cable usage.

Environment dependencies like this seem like a major drawback of broadband systems compared to baseband ones. Is this particular kind of fault verified by experienced broadband users? Are there other similar pros and cons to take into consideration?

I'm asking because we have finally installed a baseband network at our site. Our "managers" wanted to supply us with a broadband network instead. As a compromise, we now have both kinds (of which we only use the ether)... Now, afterwards (and in preparation for future expansion of our site network) I would like to find out what really is the best choice, technically and economically.

It seems to me that connecting more than 300 terminals via broadband modems would be far more expensive than using Ethernet-based terminal servers. Is this true or false? These terminals are located in close proximity to each other, within a single building, and they are supposed to be used in conjunction with a couple of UNIX systems and two DEC-2060s.

What I need is info like Phil's complaint above, and suggestions for important criteria to observe when comparing the two kinds of systems. Some important facts are already clear: we (those guys who will be using and operating the system) already have some experience with baseband and none with broadband, but I would like to hear some purely technical arguments. I'll welcome public follow-ups as well as mail, and I'll try to compile the essence of what I get for the benefit of the net.

I would also appreciate suggestions on what to do with our currently unused broadband cable (except for linking together all those Ethernet segments which will be found all over the site in years to come)... :-)

--
Anders Andersson, Dept. of Computer Systems, Uppsala University, Sweden
Phone: +46 18 183170
UUCP: andersa@kuling.UUCP (...!{seismo,mcvax}!enea!kuling!andersa)
jimbi@copper.UUCP (02/24/87)
Here at Tektronix, we have both broadband and baseband. From my observation of both systems I have formed the opinion that if you don't have RF engineers to maintain and tune the broadband system, you're in trouble.

When one system was laid out here at Tek, taps were planned on 40-foot centers, and the attenuation both downstream from the taps and at the taps themselves was carefully accounted for. When the system grew to need more taps than those present, the whole system had to be reworked to account for the attenuation from the extra taps. Contrast this with baseband, where taps may be installed by people without RF training, in any order or time frame, up to the limit of the maximum number of taps for a given length of cable. If the demands on the baseband system grow too large, another hunk of cable can be installed and the two nets connected with a repeater or bridge (depending on the overall size).

Certainly, more networks may be placed on the same piece of cabling, simply by having one set of talking and listening frequencies for one network and another set for another network. However, the non-trivial tasks of designing and setting up the headend, and of maintaining the RF characteristics of the broadband, seem to outweigh the gain of multiple nets on the same cable. A special hunk of hardware is needed to map the different nets' frequencies into each other, should you ever want the nets to talk to each other. I'm sure that some sites have the expertise available, so this would not be too hard. But other sites exist where this experience is not there. These sites could install a baseband system and keep it going with less difficulty than a broadband system.

Jim Bigelow
CASE Division, Tektronix, Inc.
tektronix!copper!jimbi
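The tap-attenuation problem described above amounts to a running loss budget, which can be sketched in Python. All the dB figures here are assumptions for illustration; a real plant would use the measured insertion loss of its actual taps and cable:

```python
# Sketch (assumed figures): why adding broadband taps forces a redesign.
# Each passive tap inserts a small through loss on the trunk; the losses
# add up, so a run engineered for N taps may fall below the receive
# threshold when more taps are installed later.
TAP_THROUGH_LOSS_DB = 0.4   # assumed insertion loss per tap on the trunk
CABLE_LOSS_DB = 10.0        # assumed loss of the cable run itself
LOSS_BUDGET_DB = 18.0       # assumed total loss the receivers tolerate

def total_loss_db(num_taps):
    """End-to-end trunk loss with num_taps passive taps installed."""
    return CABLE_LOSS_DB + num_taps * TAP_THROUGH_LOSS_DB

for taps in (15, 20, 25):
    verdict = "OK" if total_loss_db(taps) <= LOSS_BUDGET_DB else "over budget"
    print(f"{taps} taps: {total_loss_db(taps):.1f} dB - {verdict}")
```

With these assumed numbers the run is fine at 20 taps but over budget at 25 - at which point amplifiers must be moved or added and the whole plant re-tuned, which is exactly the rework described above.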
michael@m-net.UUCP (Michael McClary) (03/12/87)
In article <24@kuling.UUCP>, andersa@kuling.UUCP writes:
> In article <14863@amdcad.UUCP> phil@amdcad.UUCP (Phil Ngai) writes:
> >I've stayed away from broadband systems like the plague (who needs
> >modems that stop working because the temperature changed from 75
> >degrees to 70 degrees) but I believe you are wrong about broadband
> >cable usage.
>
> Environment dependencies like this seems like a major drawback with
> broadband systems, as compared to baseband ones. Is this particular
> kind of fault verified by experienced broadband users? Are there other
> similar pros and cons to take into consideration?

When GM did the research leading up to the MAP protocol specification, they decided to go with broadband rather than baseband because of temperature problems associated with baseband active cable taps.

In auto plants, the cables must be run near the ceiling to keep them out of the way of forklift trucks (which will hit anything automated, for some strange reason B-) ). Hitting the main cable takes out the entire system, while hitting a broadband drop cable only takes out the drop. (On Ethernet it may also kill power to the tap, which could adversely affect the main cable, depending on the design of the tap.) The temperature near the ceiling in many plants is high enough to cause Ethernet's active taps to fail, and a failed tap can take out the whole cable.

With a broadband system, on the other hand, the taps are passive, and the few active components (repeater amplifiers) in the environmentally exposed parts of the distribution system were designed for outdoor service in extreme climates. The modems and headend channel converters end up in controlled environments. (The headend is usually in an air-conditioned computer room, and the modems in offices or inside the electronic cabinets of machine tools, which have their own air conditioning.)

I find the complaint about broadband modems flaking out on minor temperature changes surprising.
At the last site where I worked with them (Ford Ypsilanti), we were using several types of broadband modems designed by ISI (which became a division of 3M and is now owned by Allen Bradley). The one I had the most experience with was the LAN 1, which has RF electronics designed by an engineer they hired away from Motorola. In the environments where we ran them (heavily- and lightly-air-conditioned offices, air-conditioned equipment cabinets) I don't recall a single failure in the RF modules. (We did have a power supply go out, a couple of failures in the digital electronics, and a continuing battle with the firmware, which is excessively paranoid about receiver failure, taking the modems offline if they temporarily lose communication with the head end and requiring manual intervention (a local power reset, or remote control by an extra-cost "network monitor" machine) to get them back up.)

A well-designed broadband modem should easily handle any environment where a computer terminal will operate at all. The complaint above may mean there are some modems out there that were designed without proper attention to RF stability, but at least one vendor has quality hardware, and it should be possible to find others.

===========================================================================
"I've got code in my node."        Michael McClary
UUCP:  ...!ihnp4!itivax!node!michael
AUDIO: (313) 973-8787
SNAIL: 2091 Chalmers, Ann Arbor MI 48104
---------------------------------------------------------------------------
Above opinions are the official position of McClary Associates. Customers
may have opinions of their own, which are given all the attention paid for.
===========================================================================