[comp.dcom.telecom] Use of T3 for Packet Switched Networks on the Increase

telecom@eecs.nwu.edu (TELECOM Moderator) (02/13/91)

[Moderator's Note: The attached was sent along by Jody Kravitz and is
being sent out as a special mailing because of its size. Regarding the
flow of stuff into telecom, readers should be advised I still have
over 100 messages backlogged in the queue of things to print. Most of
it will get out in the next two or three days .... all will be redated
as needed to keep it from expiring in comp.dcom.telecom. 

Again please: DO NOT send further messages to the group until toward
the weekend. Thanks.    PAT

  From: foxtail!kravitz@ucsd.edu
  Subject: Use of T3 for Packet Switched Networks on the Increase
  Date: 13 Feb 91 7:18:12 GMT
  Organization: The Foxtail Group, San Diego, CA


This month's California Education and Research Federation Network
(CERFnet) newsletter talks about the installation of T3 circuits in
NSFNET.  NSFNET and CERFnet are part of the packet-switched
"Internet".  The growth in use of T3, which operates at 45Mbps, may be
of interest to some of the readers of the TELECOM Digest.  I've
excerpted the relevant articles from the newsletter and included them
below:

	CERFnet News
	February 1991
	Volume 3, Number 1

INSIDE THIS ISSUE:
* 	The new faster NSFNET reaches CERFnet
		SDSC installs new 45 Mbps connection to the 	
		NSFNET. This is part of the NSFNET migration to 45 
		Mbps. This article also discusses the benefits to
		CERFnet users. 

	.... articles deleted ....

* 	Initial T3 deployment in place on NSFNET
		This article discusses NSFNET's new 45 Mbps (T3) 
		backbone and future improvements of the T3 architecture.

	.... articles deleted ....

Staff for this issue of CERFnet News includes:

Editor		          		Advisors
Karen McKelvey           		Susan Estrada 
	            	          	Robert Morgan 	

Writers					Contributors  
ESnet staff				Mike Beach     
Ken Horning				Rachel Chrisman
Paul Love				Carlos Robles	
Cathy Wittbrodt			       

CERFnet News is published monthly by the California Education and 
Research Federation Network (CERFnet). CERFnet is a mid-level network 
linking academic, government, and industrial research facilities throughout 
California. CERFnet receives partial funding from the National Science 
Foundation (NSF), operating under grant number NCR8819851.  

	Any opinions or recommendations expressed in this publication
are those of the author(s) and do not necessarily reflect the views of
NSF, other funders, General Atomics, SDSC, or CERFnet.

	If you would like to receive CERFnet News or would like
further information about CERFnet, please send your request to
help@cerf.net, or telephone 800-876-CERF, or contact the CERFnet
office at the following address:

		CERFnet
		c/o San Diego Supercomputer Center			
		P. O. Box 85608
		San Diego, CA  92186-9784


A NEW FASTER NSFNET REACHES CERFNET
SDSC installs new 45 Mbps connection to the NSFNET.

by Paul Love

During the year-end holidays, the San Diego Supercomputer Center
(SDSC) installed its new 45 Mbps (T3) connection to NSFNET. This was
accomplished during a visit to SDSC by Hans-Werner Braun, then the
Merit Principal Investigator on the NSFNET project. (Braun has since
become one of SDSC's networking specialists.) CERFnet users can now
enjoy faster service and gain access to even greater computational
capabilities.

	The NSFNET T3 circuit at SDSC uses a fiber that was recently
installed from UC San Diego and SDSC to the local MCI office. This
fiber connection bypasses the local telephone office, which reduces
rates for connections to MCI. The fiber installation was possible
because most of it runs through land owned by UC San Diego, Chevron,
and General Atomics, who cooperated in its installation. The fiber
also means new technologies can be added quickly. For example, the
CASA testbed network (see article in CERFnet News, Aug-Sept 1990) of
the National Research and Education Network (NREN) will use
connections running at 1 gigabit per second -- a rate available only
over fiber. SDSC is a node on the CASA network.

	As of January 25, traffic on the T3 from SDSC was limited to
Merit (in Ann Arbor, MI) and the National Center for Supercomputing
Applications (in Urbana-Champaign, IL). Today, connections include
Stanford University (Palo Alto, CA) and the Pittsburgh Supercomputing
Center.

	Initially, six of the thirteen NSFNET sites were scheduled to
receive T3 service in addition to their T1 service. Also, two new
sites (Cambridge, MA and Argonne, IL) were scheduled to receive only
T3 service.

	At the FARNET meeting in January, Steve Wolff of NSF reported
that the agreement with Merit/MCI/IBM has been modified. All sixteen
sites on the NSFNET backbone will receive T3 service. These
installations are expected to be completed by the end of the year. The
T1 network will be dismantled when all of the sites are operating T3
service.

Benefits for CERFnet users
 
	The T3 migration means that CERFnet users will not experience
any degradation of service during 1991. It was projected that some of
the links on the T1 network would become congested this year if
additional bandwidth was not provided.

	Also, the new T3 network will make new services available.
Remotely mounted file systems will seem much more like those mounted
just across your local Ethernet. Distributed computing across the
country will now be practical. The new network will make document and
data retrieval a faster, simpler, and more common operation -- a
necessary service as more full-text libraries and databases become
available via the Internet.
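A rough sense of why remote file systems start to feel local at T3
rates can be had from the raw line rates alone. The sketch below is a
back-of-the-envelope illustration only: the 10 MB file size is an
assumption, and real traffic adds protocol overhead, latency, and
congestion on top of pure serialization time.

```python
# Back-of-the-envelope serialization times for moving a file over an
# idle T1 vs. T3 line. Protocol overhead and round trips are ignored.

T1_BPS = 1_544_000       # T1 line rate: 1.544 Mbps
T3_BPS = 44_736_000      # T3 line rate: 44.736 Mbps

def transfer_seconds(size_bytes: int, line_bps: int) -> float:
    """Time to clock size_bytes onto the wire at line_bps."""
    return size_bytes * 8 / line_bps

file_size = 10 * 1024 * 1024    # hypothetical 10 MB remote file

t1 = transfer_seconds(file_size, T1_BPS)
t3 = transfer_seconds(file_size, T3_BPS)
print(f"T1: {t1:.1f} s   T3: {t3:.1f} s   ratio: {t1 / t3:.0f}x")
```

The ratio of the two line rates, roughly 29 to 1, is the best-case
speedup a single transfer can see from the upgrade.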

Effect on mid-level/local networks 

	The growth of most mid-level and local networks is another
important facet of the NSFNET migration to T3. Most mid-level networks
have a high percentage of T1 links. The new T3 connections will keep a
mid-level's connection to the NSFNET backbone from becoming a
bottleneck. Also, the T3 connections will provide better service to
local network resources, especially where the supporting LANs use
FDDI.

Summary

	Merit has been tracking the growth in network usage for the
last several years and offers these eye-opening statistics. In
November 1988, the network carried less than 400 million packets. In
November 1989, the rate was 2.1 billion. And, by November 1990, the
rate was over 3.8 billion.  While the percentage increase is falling,
the absolute number of packets carried has grown each year by the same
amount: 1.7 billion. If this growth continues, we can expect NSFNET to
carry over 5.5 billion packets by November 1991.
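The projection is a straight linear extrapolation from the monthly
counts Merit quotes above (in billions of packets), and can be checked
in a few lines:

```python
# NSFNET packets carried in November of each year, in billions,
# as quoted by Merit above.
nov_counts = {1988: 0.4, 1989: 2.1, 1990: 3.8}

# Absolute year-over-year growth has held constant at 1.7 billion.
deltas = [nov_counts[y + 1] - nov_counts[y] for y in (1988, 1989)]

# Linear extrapolation: assume the same absolute growth continues.
projection_1991 = nov_counts[1990] + deltas[-1]
print(f"Projected November 1991: {projection_1991:.1f} billion packets")
```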

	The NSFNET migration to T3 will keep the backbone ahead of
network usage patterns. It will allow users to adopt new
network-intensive services as they become available, resulting in a
positive impact on scientific inquiry and industrial R&D. *


INITIAL T3 DEPLOYMENT IN PLACE ON THE NSFNET

by Ken Horning


[Editor's Note: This article is reprinted from LINK LETTER, V3 N 5, 
December 1990. Ken Horning works for Merit/NSFNET.]

Operational deployment of NSFNET's new T3 backbone was started in the
final months of 1990. T3 installations are now complete and ready for
operational traffic at the backbone end nodes in Ann Arbor, MI,
Urbana-Champaign, IL, San Diego, CA, and Palo Alto, CA.

	"This upgrade again reflects the National Science Foundation's
(NSF) commitment to keep NSFNET the world's leading computer network
for the support of research and education," said Dr. Stephen S. Wolff,
Division Director, Division of Networking and Communications Research
and Infrastructure, at the NSF. "New applications that were not
feasible on slower networks will be possible with the availability of
T3 bandwidth."

Production ready

	Prior to their installation, the T3 connections at the four
installed nodes were thoroughly tested. Testing procedures included
continued verification of hardware, software, and circuits to evaluate
reliability. A suite of testing tools and procedures has also been
created which will facilitate the installation of the T3 connections
at the remaining nodes.

	The model developed for high-speed backbone transmission
involves a new generation of Nodal Switching Subsystem technology
developed by IBM. Advanced circuit technology for the T3 upgrade is
being provided by MCI.

Future improvements

	The architecture for the T3 network is utilizing a collection
of IBM Core Nodal Switching Subsystems (C-NSS) within the MCI
infrastructure, forming a cloud of co-located packet switching
capability. Exterior Nodal Switching Subsystems (E-NSS) are located at
client sites and connect into the C-NSS cloud.
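A toy model of this architecture can make the cloud arrangement
concrete. The node and site names below are hypothetical -- the
article does not enumerate the attachments -- and the core is treated
as fully meshed, so crossing it costs a single hop:

```python
# Toy model of the T3 backbone described above: Core Nodal Switching
# Subsystems (C-NSS) form a mesh inside the carrier's infrastructure;
# an Exterior NSS (E-NSS) at each client site attaches to one core
# node. All names here are illustrative only.

core = {"cnss-a", "cnss-b", "cnss-c"}

# E-NSS -> attached C-NSS (hypothetical attachments)
edge = {
    "enss-ann-arbor": "cnss-a",
    "enss-san-diego": "cnss-b",
    "enss-palo-alto": "cnss-c",
}
assert set(edge.values()) <= core   # every E-NSS hangs off a core node

def path(src: str, dst: str) -> list[str]:
    """Client-to-client path: E-NSS -> core cloud -> E-NSS."""
    hops = [src, edge[src]]
    if edge[dst] != edge[src]:
        hops.append(edge[dst])      # one hop across the core mesh
    hops.append(dst)
    return hops

print(path("enss-san-diego", "enss-ann-arbor"))
```

The design keeps client sites simple: each E-NSS needs only its one
link into the cloud, while the carrier-side C-NSS mesh handles
wide-area switching.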

	With the deployment of the new T3 architecture, the node
packet switching performance will improve significantly. The initial
T3 deployment employs an Ethernet interface to the local area network,
providing a substantial performance improvement over the T1 NSS.

	As the NSFNET partnership completes the FDDI interoperability
testing and deploys FDDI with the new technology, even more significant
performance improvement will be realized.

Additional new technology due in '91

	Later in 1991, the partnership plans to deploy new technology
which will use intelligent subsystems for the extended interfaces.
These subsystems -- powerful RISC-based adapters -- utilize bus-master
and slave capabilities on high-bandwidth implementations of the
Micro Channel to achieve very high-speed card-to-card forwarding with
no system intervention.

	Coupled with optimized distributed protocol code, these
systems can achieve very high throughput rates. IBM's RISC on RISC
architecture utilizes RS/6000 RISC chipsets for the control processor
and a 25 MHz superscalar, RISC embedded controller with on-chip cache
and data RAM for the adapter engines. The new technology, with on-card
packet forwarding, will dramatically improve performance on the T3
network. *


CERFNET NEWS AVAILABLE VIA ANONYMOUS FTP

	Issues are available via anonymous ftp to NIC.CERF.NET 
in the subdirectory cerfnet/cerfnet_news.  *