seanwilliams@attmail.com (01/06/91)
The following is a summary of several newswire stories about the interruption in AT&T's long distance service which occurred yesterday:

American Telephone & Telegraph Co. accidentally ripped apart one of its own fiber optic cables, disabling major commodity exchanges and disrupting service throughout New York City. The company revealed earlier this afternoon that its own construction crews had inadvertently severed an active cable under a Newark avenue yesterday while attempting to remove an inoperative one.

AT&T began investigating technical problems at 0930 EST as the cut created hours of havoc in long-distance calling to and from New York. About 60 percent of the calls into and out of the metropolitan area were met with a recorded message saying that all circuits were busy, said Jim Messenger, a spokesman for the American Telephone and Telegraph Co. The problem also disrupted some overseas calls, the company said. Hundreds of flights to and from Newark, Kennedy and LaGuardia airports were delayed, and some incoming planes were diverted elsewhere because air traffic controllers were unable to communicate.

The loss of the cable, which could transmit more than 100,000 calls at once, underlined how society's rising reliance on new technology carries a risk because it concentrates so much information in one potentially vulnerable place. A few years ago, that volume of calls would have been spread over numerous, less efficient cables. "These failures don't occur very often, but when they do occur, there's the potential to have an impact across a broad part of the population," said Casimir Skrzypczak, vice president of science and technology at the New York regional phone company Nynex Corp.

Local service and long-distance service provided by other companies, such as MCI Communications Corp. and US Sprint Communications Co. (a unit of United Telecommunications Inc.), were not affected.
In fact, an AT&T spokesman said that the company instructed operators in the New York area to provide customers with access codes to its long-distance competitors at about 1000 EST/1500 GMT. AT&T was criticized last year when it waited more than three hours to distribute the special codes required for AT&T customers to place calls on MCI or Sprint networks.

Disruption was widespread, however, because American Telephone & Telegraph Co. is the United States' largest long-distance carrier, handling about 70 percent of all toll calls. AT&T began directing calls away from the affected area at midmorning, and the company said that service had been restored almost to normal by 5:30 p.m.

The incident was a severe embarrassment for AT&T, which cultivates an image of reliability but which a year ago suffered a virtual shutdown of its network due to errant computer software. It depicted yesterday's failure as a freak accident. "Despite the commitment that (AT&T) people make day in and day out," said AT&T spokesman Herb Linnen, "the dice roll against us."

The disruption focused on lower Manhattan, where the U.S. financial industry is headquartered. "The phones went down and you could not make telephone calls out of New York City to just about anywhere," said Richard Berner, director of bond market research at the securities firm Salomon Brothers Inc. Not everyone was upset. "We've got almost no phone calls all day," said one secretary at a Manhattan company, who asked not to be identified, "which was wonderful."

In the 1980s, long-distance companies laid thousands of miles of high-capacity optical fiber cables, which carry phone calls or data in enormous volume as rapid pulses of light. But some research has raised concerns that concentration of calling through single wires brings a higher threat of disruption.
Jeff Held, a telecommunications specialist at the Ernst & Young accounting and consulting firm, said many long-distance companies, because of cost, have not yet put in enough alternate cable routes to handle potential problems. But he said that in view of the Newark line's importance, "It's really pretty amazing to me that that route would not be totally backed up" already.

Jim Carroll, AT&T's vice president for network operations, said the disruption dragged on in part because workers had to reprogram computers and physically rearrange cables - tasks that soon will be done using new software. "If this had happened this time next year," said Carroll, "the length of this outage would have been in the range of 15 minutes."

________

This article was compiled from various sources. Credits are as follows:

Joanne Kelley, "AT&T Phone Outage Paralyzes Certain Markets", Reuter, 01/04/91
Bart Ziegler, "AT&T Problem", AP Business Newswire, 01/04/91
John Burgess, "Severed Cable Disables N.Y. Markets, Airports; AT&T Accident Creates Telephone Havoc", {Washington Post}, 01/05/91

Sean E. Williams -- seanwilliams@attmail.com

[Moderator's Note: Sean is a new subscriber/contributor to the Digest, and I want to thank him for an excellent report. PAT]
roy@phri.nyu.edu (Roy Smith) (01/07/91)
In article <15817@accuvax.nwu.edu> seanwilliams@attmail.com writes:

> An AT&T spokesman said that the company instructed operators in the New
> York area to provide customers with access codes to its long-distance
> competitors at about 1000 EST/1500 GMT.

Ignoring for the moment the political problems involved, how difficult would it be to implement automatic load-shedding without requiring customers to manually dial a different 10xxx code? It seems that all that would be needed is for the AT&T computers to tell the local telcos' computers "OK, until further notice, take all [or half, or whatever fraction is appropriate] of the calls you would normally route to us because we're the default dial-1 long distance carrier, and send them to Sprint or MCI instead."

There would be some details to work out with the billing, but that's not really a technical issue. Callers might get billed directly by the alternate carriers, or the carriers might bill AT&T under some sort of treaty; AT&T could then bill the customer normally, and they might never know what had happened (or, presumably, care).

Assuming this could all be made to work (at worst, it's probably a Simple Matter Of Programming), would it be a good idea? Would the overall integrity of the long distance network be improved by this, or would the greater coupling between the various pieces create the possibility of an inter-carrier meltdown, making things worse?

Roy Smith, Public Health Research Institute
455 First Avenue, New York, NY 10016
roy@alanine.phri.nyu.edu -OR- {att,cmcl2,rutgers,hombre}!phri!roy
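The load-shedding proposal above can be sketched as a few lines of code. This is a toy model only, not anything a real switch runs; the class, the `shed()` request, and the carrier names are all invented for illustration. The idea is simply that the default carrier asks the local switch to divert some fraction of its dial-1 traffic, while an explicitly dialed 10xxx code is always honored.

```python
import random

# Toy sketch of automatic load-shedding at a local telco switch.
# All names (LocalSwitch, shed, route_call) are hypothetical.

class LocalSwitch:
    def __init__(self, default_carrier, alternates):
        self.default = default_carrier
        self.alternates = alternates
        self.shed_fraction = 0.0   # set remotely by the default carrier

    def shed(self, fraction):
        """Default carrier asks that `fraction` of its calls be diverted."""
        self.shed_fraction = max(0.0, min(1.0, fraction))

    def route_call(self, dialed_10xxx=None):
        # An explicitly dialed 10xxx code always wins, as it does today.
        if dialed_10xxx:
            return dialed_10xxx
        # Otherwise divert the requested fraction of default-carrier calls.
        if self.alternates and random.random() < self.shed_fraction:
            return random.choice(self.alternates)
        return self.default

switch = LocalSwitch("ATT", ["MCI", "SPRINT"])
switch.shed(0.6)   # "take 60 percent of the calls you'd normally send us"
carriers = [switch.route_call() for _ in range(10000)]
```

Roy's billing question is visible even here: the switch knows which carrier actually completed each diverted call, so a settlement record per diverted call would be a bookkeeping matter, not a routing one.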
Jim.Redelfs@iugate.unomaha.edu (Jim Redelfs) (01/12/91)
> In the 1980s, long-distance companies laid thousands of miles of
> high-capacity optical fiber cables, which carry phone calls or data in
> enormous volume as rapid pulses of light. But some research has raised
> concerns that concentration of calling through single wires brings a
> higher threat of disruption.

US WEST Communications (NE) is offering special, "self-healing" (whatever THAT means) fiber service to major businesses. I have forgotten the two options, but one includes installing TWO cables to the business, fed from opposite directions. One is (presumably) idle (spare?) while the other one operates. In the event of an outage, the system automatically (again, presumably) switches to the back-up cable.

JR

Copernicus V1.02 Elkhorn, NE [200:5010/666.14] (200:5010/2.14)
macy@fmsystm.uucp (Macy Hallock) (01/14/91)
In article <16013@accuvax.nwu.edu> JR writes:

>> In the 1980s, long-distance companies laid thousands of miles of
>> high-capacity optical fiber cables, which carry phone calls or data in
>> enormous volume as rapid pulses of light. But some research has raised
>> concerns that concentration of calling through single wires brings a
>> higher threat of disruption.

>US WEST Communications (NE) is offering special, "self-healing"
>(whatever THAT means) fiber service to major businesses. I have
>forgotten the two options, but one includes installing TWO cables to
>the business, fed from opposite directions. One is (presumably) idle
>(spare?) while the other one operates. In the event of an outage, the
>system automatically (again, presumably) switches to the back-up
>cable.

Due to the mindset of many phone companies, this is a poor option. In most (but not all) areas, what you get is a feed to the same CO for both cables. The protection you receive is partial at best. What I have seen is: two entrance cables, entering at separate points ... that meet somewhere down the street and use the same feed cable back to the same central office. This yields no protection against many, if not most, types of failure. Examples: a truck hits phone poles and takes out a major cable, or a backhoe digs up backbone cables.

In Chicago and a couple of other cities, there are companies that offer intra-city local feed cable (usually fiber) that can be used to access your IXC independently of the telco's cables and CO. We advise our customers with critical communications needs to have two separate feeds to two separate IXCs, using a different link to each.

Around here, the only real alternative to using the local telco for network access is a microwave link. And that's what we suggest. Many customers do not want to pay the costs associated with this kind of redundant service. And in every instance, they have been out of communications at some time for a period.
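The shared-feed point above can be put in rough numbers. Assume each independent cable segment survives a given year with probability p (the figure below is invented purely for illustration): two fully disjoint paths fail together only if both fail, while two entrance cables that merge into one shared feed are no better than that shared segment.

```python
# Back-of-the-envelope availability of "redundant" feeds.
# p = probability a single independent segment survives the period.

def avail_disjoint(p):
    # Two fully separate paths: outage only if BOTH paths fail.
    return 1 - (1 - p) ** 2

def avail_shared(p):
    # Two entrance cables merging into one shared feed back to the same
    # CO: the shared segment is a single point of failure in series.
    return p * (1 - (1 - p) ** 2)

p = 0.99   # illustrative figure, not a measured value
print(avail_disjoint(p))   # ~0.9999
print(avail_shared(p))     # ~0.9899 - dominated by the shared cable
```

In other words, the shared-feed version buys back almost none of the risk: its availability is essentially that of the single shared cable, which is why two separate feeds to two separate IXCs (or a microwave alternative) is the advice given above.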
The reasons are many: cut cable, CO outage, IXC failure ... the effect is the same.

Another curiosity: In Ohio, the telcos have written into their tariffs that each premises shall have only one entrance point. Ask for redundant feed cables, and the first thing they do is cite the tariff. I have also seen them violate this tariff provision repeatedly for their own convenience. When confronted with this, the answer is almost always "necessary to provide required service" or some other variation.

Macy M. Hallock, Jr.  macy@fmsystm.UUCP  macy@NCoast.ORG
uunet!aablue!fmsystm!macy