
Network Working Group                              Gigabit Working Group
Request for Comments: 1077                             B. Leiner, Editor
                                                           November 1988


              Critical Issues in High Bandwidth Networking


Status of this Memo

   This memo presents the results of a working group on High Bandwidth
   Networking.  This RFC is for your information and you are encouraged
   to comment on the issues presented.  Distribution of this memo is
   unlimited.

ABSTRACT

   At the request of Maj. Mark Pullen and Maj. Brian Boesch of DARPA, an
   ad-hoc working group was assembled to develop a set of
   recommendations on the research required to achieve a ubiquitous
   high-bandwidth network as discussed in the FCCSET recommendations for
   Phase III.

   This report outlines a set of research topics aimed at providing the
   technology base for an interconnected set of networks that can
   provide high-bandwidth capabilities.  The suggested research focus
   draws upon ongoing research and augments it with basic and applied
   components.  The major activities are the development and
   demonstration of a gigabit backbone network, the development and
   demonstration of an interconnected set of networks with gigabit
   throughput and appropriate management techniques, and the development
   and demonstration of the required overall architecture that allows
   users to gain access to such high bandwidth.


   1.  Introduction and Summary



   1.1.  Background


   The computer communications world is evolving toward both high-
   bandwidth capability and high-bandwidth requirements.  The recent
   workshop conducted under the auspices of the FCCSET Committee on High
   Performance Computing [1] identified a number of areas where
   extremely high-bandwidth networking is required to support the
   scientific research community.  These areas range from remote
   graphical visualization of supercomputer results through the movement
   of high-rate sensor data from space to the ground-based scientific
   investigator.  Similar requirements exist for other applications,
   such as military command and control (C2) where there is a need to
   quickly access and act on data obtained from real-time sensors.  The
   workshop identified requirements for switched high-bandwidth service
   in excess of 300 Mbit/s to a single user, and the need to support
   service in the range of a Mbit/s on a low-duty-cycle basis to
   millions of researchers.  When added to the needs of the military and
   commercial users, the aggregate requirement for communications
   service adds up to many billions of bits per second.  The results of
   this workshop were incorporated into a report by the FCCSET [2].

   Fortunately, technology is also moving rapidly.  Even today, the
   installed base of fiber optics communications allows us to consider
   aggregate bandwidths in the range of Gbit/s and beyond to limited
   geographical regions.  Estimates arrived at in the workshop suggest
   that raw bandwidth approaching terabits per second will become
   available.

   The critical question to be addressed is how this raw bandwidth can
   be used to satisfy the requirements identified in the workshop: 1)
   provide bandwidth on the order of several Gbit/s to individual users,
   and 2) provide modest bandwidth on the order of several Mbit/s to a
   large number of users in a cost-effective manner through the
   aggregation of their traffic.

   Through its research funding, the Defense Advanced Research Projects
   Agency (DARPA) has played a central role in the development of
   packet-oriented communications, which has been of tremendous benefit
   to the U.S. military in terms of survivability and interoperability.
   DARPA-funded research has resulted in the ARPANET, the first packet-
   switched network; the SATNET, MATNET and Wideband Network, which
   demonstrated the efficient utilization of shared-access satellite
   channels for communications between geographically diverse sites;
   packet radio networks for mobile tactical environments; the Internet
   and TCP/IP protocols for interconnection and interoperability between
   heterogeneous networks and computer systems; the development of
   electronic mail; and many advances in the areas of network security,
   privacy, authentication and access control for distributed computing
   environments.  Recognizing DARPA's past accomplishments and its
   desire to continue to take a leading role in addressing these issues,
   this document provides a recommendation for research topics in
   gigabit networking.  It is meant to be an organized compendium of the
   critical research issues to be addressed in developing the technology
   base needed for such a high bandwidth ubiquitous network.


   1.2.  Ongoing Activities


   The OSTP report referred to above recommended a three-phase approach
   to achieving the required high-bandwidth networking for the
   scientific and research community.  Some of this work is now well
   underway.  An ad-hoc committee, the Federal Research Internet
   Coordinating Committee (FRICC) is coordinating the interconnection of
   the current wide area networking systems in the government, notably
   those of DARPA, Department of Energy (DoE), National Science
   Foundation (NSF), National Aeronautics and Space Administration
   (NASA), and the Department of Health and Human Services (HHS).  In
   accordance with Phases I and II of the OSTP report, this activity
   will provide for an interconnected set of networks to support
   research and other scholarly pursuits, and provide a basis for future
   networking for this community.  The networking is being upgraded
   through shared increased bandwidth (current plans are to share a 45
   Mbit/s backbone) and coordinated interconnection with the rest of the
   world.  In particular, the FRICC is working with the European
   networking community under the auspices of another ad-hoc group, the
   Coordinating Committee for Intercontinental Research Networks
   (CCIRN), to establish effective US-Europe networking.

   However, as the OSTP recommendations note, the required bandwidth for
   the future is well beyond currently planned public, private, and
   government networks.  Achieving the required gigabit networking
   capabilities will require a strong research activity.  There is
   considerable ongoing research in relevant areas that can be drawn
   upon; particularly in the areas of high-bandwidth communication
   links, high-speed computer switching, and high-bandwidth local area
   networks.  Appendix A provides some pointers to current research
   efforts.


   1.3.  Document Overview


   This report outlines a set of research topics aimed at providing the
   technology base for an interconnected set of networks that can
   provide the required high-bandwidth capabilities discussed above.
   The suggested research focus draws upon ongoing research and augments
   it with basic and applied components.  The major activities are the
   development and demonstration of a Gigabit Backbone network (GB) [3],
   the development and demonstration of an interconnected set of
   networks with gigabit throughput and appropriate management
   techniques, and the development and demonstration of the required
   overall architecture that allows users to gain access to such high
   bandwidth.  Section 2 discusses functional and performance goals
   along with the anticipated benefits to the ultimate users of such a
   system.  Section 3 provides the discussion of the critical research
   issues needed to achieve these goals.  It is organized into the major
   areas of technology that need to be addressed: general architectural
   issues, high-bandwidth switching, high-bandwidth host interfaces,
   network management algorithms, and network services.  The discussion
   in some cases contains examples of ongoing relevant research or
   potential approaches.  These examples are intended to clarify the
   issues, not to propose any particular approach.  A discussion of
   the relationship of the suggested research to other ongoing
   activities and optimal methods for pursuing this research is provided
   in Section 4.


   2.  Functional and Performance Goals


   In this section, we provide an assessment of the types of services a
   gigabit network (GN), four or five orders of magnitude faster than
   the current networks, should provide to its users.  In instances
   where we felt there would be a significant impact on performance, we
   have provided an estimate of the amount of bandwidth needed and the
   delay allowable to provide these services.


   2.1.  Networking Application Support


   It is envisioned that the GN will be capable of supporting all of the
   following types of networking applications.


   Currently Provided Packet Services

      It is important that the network provide the users with the
      equivalent of services that are already available in packet-
      switched networks, such as interactive data exchange, mail
      service, file transfer, on-line access to remote computing
      resources, etc., and allow them to expand to other more advanced
      services to meet their needs as they become available.

   Multi-Media Mail

      This capability will allow users to take advantage of different
      media types (e.g., graphics, images, voice, and video as well as
      text and computer data) in the transfer of messages, thereby
      increasing the effectiveness of message exchange.

   Multi-Media Conferencing

      Such conferencing requires the exchange of large amounts of
      information in short periods of time.  Hence the requirement for
      high bandwidth at low delay.  We estimate that the bandwidth would
      range from 1.5 to 100 Mbit/s, with an end-to-end delay of no more
      than a few hundred msec.

   Computer-Generated Real-time Graphics

      Visualizing computer results in the modern world of supercomputers
      requires large amounts of real-time graphics.  This in turn will
      require about 1.5 Mbit/s of bandwidth and no more than several
      hundred msec delay.

   High-Speed Transaction Processing

      One of the most important reasons for having an ultra-high-speed
      network is to take advantage of supercomputing capability.  There
      are several scenarios in which this capability could be utilized.
      For example, there could be instances where a non-supercomputer
      may require a supercomputer to perform some processing and provide
      some intermediate results that will be used to perform still
      further processing, or the exchange may be between several
      supercomputers operating in tandem and periodically exchanging
      results, such as in battle management, war gaming, or process
      control applications.  In such cases, extremely short response
      times are necessary to accomplish as many as hundreds of
      interactions in real time.  This requires very high bandwidth, on
      the order of 100 Mbit/s, and minimum delay, on the order of
      hundreds of msec.


   Wide-Area Distributed Data/Knowledge Base Management Systems

      Computer-stored data, information, and knowledge are distributed
      around the country for a variety of reasons.  The ability to
      perform complex queries, updates, and report generation as though
      many large databases were one system would be extremely powerful,
      yet requires low-delay, high-bandwidth communication for
      interactive use.  The Corporation for National Research
      Initiatives (NRI) has promoted the notion of a National Knowledge
      base with these characteristics.  In particular, an attractive
      approach is to cache views at the user sites, or close by, to allow
      efficient repeated queries and multi-relation processing for
      relations on different nodes.  However, with caching, a processing
      activity may incur a miss in the midst of a query or update,
      causing it to be delayed by the time required to retrieve the
      missing relation or portion of relation.  To minimize the overhead
      for cache directories, both at the server and client sites, the
      unit of caching should be large---say a megabyte or more.  In
      addition, to maintain consistency at the caching client sites,
      server sites need to multicast invalidations and/or updates.
      Communication requirements are further increased by replication of
      the data.  The critical parameter is latency for cache misses and
      consistency operations.  Taking the distance between sites to be
      on average 1/4 the diameter of the country, a one Gbit/s data rate
      is required to reduce the transmission time to be roughly the same
      as the propagation delay, namely around 8 milliseconds for this
      size of unit.  Note that this application is supporting far more
      sophisticated queries and updates than normally associated with
      transaction processing, thus requiring larger amounts of data to be
      transferred.
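
      The arithmetic behind the one Gbit/s figure can be made explicit
      with a short sketch; the distance and fiber propagation speed
      below are assumed illustrative values, not part of the analysis
      above:

         # Rough latency arithmetic for a cache miss, using the figures
         # assumed in the text: a 1-megabyte caching unit, a 1 Gbit/s
         # data rate, and a path of ~1/4 the diameter of the country.

         UNIT_BITS   = 1e6 * 8   # 1-megabyte caching unit, in bits
         DATA_RATE   = 1e9       # 1 Gbit/s
         DISTANCE_M  = 1.2e6     # ~1200 km, about 1/4 coast-to-coast
         FIBER_SPEED = 2e8       # signal speed in fiber, ~2/3 c (m/s)

         transmission = UNIT_BITS / DATA_RATE      # ~8 ms
         propagation  = DISTANCE_M / FIBER_SPEED   # ~6 ms one way

         print("transmission: %.1f ms" % (transmission * 1e3))
         print("propagation:  %.1f ms" % (propagation * 1e3))
         # At 1 Gbit/s the two delays are comparable, as argued above;
         # at lower rates the transmission term dominates the miss
         # latency.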


   2.2.  Types of Traffic and Communications Modes


   Different types of traffic may impose different constraints in terms
   of throughput, delay, delay dispersion, reliability and sequenced
   delivery.  Table 1 summarizes some of the main characteristics of
   several different types of traffic.



                Table 1: Communication Traffic Requirements

   +------------------------+-------------+-------------+-------------+
   |                        |             |             | Error-free  |
   | Traffic                | Delay       | Throughput  | Sequenced   |
   | Type                   | Requirement | Requirement | Delivery    |
   +------------------------+-------------+-------------+-------------+
   | Interactive Simulation | Low         |Moderate-High| No          |
   +------------------------+-------------+-------------+-------------+
   | Network Monitoring     | Moderate    | Low         | No          |
   +------------------------+-------------+-------------+-------------+
   | Virtual Terminal       | Low         | Low         | Yes         |
   +------------------------+-------------+-------------+-------------+
   | Bulk Transfer          | High        | High        | Yes         |
   +------------------------+-------------+-------------+-------------+
   | Message                | Moderate    | Moderate    | Yes         |
   +------------------------+-------------+-------------+-------------+
   | Voice                  |Low, constant| Moderate    | No          |
   +------------------------+-------------+-------------+-------------+
   | Video                  |Low, constant| High        | No          |
   +------------------------+-------------+-------------+-------------+
   | Facsimile              | Moderate    | High        | No          |
   +------------------------+-------------+-------------+-------------+
   | Image Transfer         | Variable    | High        | No          |
   +------------------------+-------------+-------------+-------------+
   | Distributed Computing  | Low         | Variable    | Yes         |
   +------------------------+-------------+-------------+-------------+
   | Network Control        | Moderate    | Low         | Yes         |
   +------------------------+-------------+-------------+-------------+

   The topology among users can be of three types: point-to-point (one-
   to-one connectivity), multicast (one sender and multiple receivers),
   and conferencing (multiple senders and multiple receivers).  There
   are three types of transfers that can take place among users.  They
   are connection-oriented network service, connectionless network
   service, and stream or synchronous traffic.  Connection and
   connectionless services are asynchronous.  A connection-oriented
   service assumes and provides for relationships among the multiple
   packets sent over the connection (e.g., to a common destination)
   while connectionless service assumes each packet is a complete and
   separate entity unto itself.  For stream or synchronous service a
   reservation scheme is used to set up and guarantee a constant and
   steady amount of bandwidth between any two subscribers.
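
   The distinctions above could be captured in a per-flow service
   descriptor that a subscriber hands to the network.  The following is
   a minimal sketch in Python; the field names are illustrative, not a
   proposed interface:

      from dataclasses import dataclass
      from enum import Enum

      class Topology(Enum):
          POINT_TO_POINT = 1   # one-to-one
          MULTICAST      = 2   # one sender, multiple receivers
          CONFERENCE     = 3   # multiple senders, multiple receivers

      class Transfer(Enum):
          CONNECTION     = 1   # asynchronous; packets related
          CONNECTIONLESS = 2   # asynchronous; each packet standalone
          STREAM         = 3   # synchronous; bandwidth reserved

      @dataclass
      class ServiceRequest:
          topology: Topology
          transfer: Transfer
          throughput_bps: float   # requested throughput
          max_delay_s: float      # delay bound, if any
          sequenced: bool         # error-free sequenced delivery?

      # Example: packet voice per Table 1 (low constant delay, moderate
      # throughput, no sequenced delivery), carried as a reserved
      # stream.
      voice = ServiceRequest(Topology.CONFERENCE, Transfer.STREAM,
                             64e3, 0.2, False)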


   2.3.  Network Backbone


   The GB needs to be of high bandwidth to support a large population of
   users, and additionally to provide high-speed connectivity among
   certain subscribers who may need such capability (e.g., between two
   supercomputers).  These users may access the GN from local area
   networks (LANs) directly connected to the backbone or via high-speed
   intermediate regional networks.  The backbone must also minimize
   end-to-end delay to support highly interactive high-speed
   (supercomputer) activities.

   It is important that the LANs that will be connected to the GN be
   permitted data rates independent of the data rates of the GB.  LAN
   speeds should be allowed to change without affecting the GB, and the
   GB speeds should be allowed to change without affecting the LANs.  In
   this way, development of the technology for LANs and the GB can
   proceed independently.

   Access rate requirements to the GB and the GN will vary depending on
   user requirements and local environments.  The users may require
   access rates ranging from multi-kbit/s in the case of terminals or
   personal computers connected by modems, through multi-Mbit/s and
   beyond for powerful workstations, up to the Gbit/s range for high-
   speed computing and data resources.


   2.4.  Directory Services


   Directory services similar to those found in CCITT X.500/ISO DIS 9594
   need to be provided.  These include the mapping of user names to
   electronic mail addresses and distribution lists; support for
   authorization checking, access control, and public key encryption
   schemes; multimedia mail capabilities; and the ability to keep
   track of mobile users (those who move from place to place and host
   computer to host
   computer).  The directory services may also list facilities available
   to users via the network.  Some examples are databases,
   supercomputing or other special-purpose applications, and on-line
   help or telephone hotlines.

   The services provided by X.500 may require some extension for GN.
   For example, there is no provision for multilevel security, and the
   approach taken to authentication must be studied to ensure that it
   meets the requirements of GN and its user community.


   2.5.  Network Management and Routing


   The objective of network management is to ensure that the network
   functions smoothly and efficiently; it consists of the following:
   accounting, security, performance monitoring, fault isolation, and
   configuration control.

   Accounting ensures that users are properly billed for the services
   that the network provides.  Accounting enforces a tariff; a tariff
   expresses a usage policy.  The network need only keep track of those
   items addressed by the tariff, such as allocated bandwidth, number of
   packets sent, number of ports used, etc.  Another type of accounting
   may need to be supported by the network to support resource sharing,
   namely accounting analogous to telephone "900" numbers.  This
   accounting, performed by the network on behalf of resource providers
   and consumers, is a pragmatic solution to the problem of getting
   providers and consumers into a financial relationship with each
   other, a problem which has stymied previous attempts to achieve
   widespread use of specialized resources.

   Performance monitoring is needed so that the managers can tell how
   the network is performing and take the necessary actions to keep its
   performance at a level that will provide users with satisfactory
   service.  Fault isolation using technical control mechanisms is
   needed for network maintenance.  Configuration management allows the
   network to function efficiently.

   Several new types of routing will be required by GN.  In addition to
   true type-of-service, needed to support diverse distributed
   applications, real-time applications, interactive applications, and
   bulk data transfer, there will be need for traffic controls to
   enforce various routing policies.  For example, policy may dictate
   that traffic from certain users, applications, or hosts may not be
   permitted to traverse certain segments of the network.
   Alternatively, traffic controls may be used to promote fairness;
   that is, to make sure that a busy link or network segment isn't
   dominated by a particular source or destination.  The ability of
   applications to reserve network bandwidth in advance of its use, and
   the use of strategies such as soft connections, will also require
   development of new routing algorithms.


   2.6.  Network Security Requirements


   Security is a critical factor within the GN and one of those features
   that are difficult to provide.  It is envisioned that both
   unclassified and classified traffic will utilize the GN, so
   protection mechanisms must be an integral part of the network access
   strategy.  Features such as authentication, integrity,
   confidentiality, access control, and nonrepudiation are essential to
   provide trusted and secure communication services for network users.

   A subscriber must have assurance that the person or system he is
   exchanging information with is indeed who he says he is.
   Authentication provides this assurance by verifying that the claimed
   source of a query request, control command, response, etc., is the
   actual source.  Integrity assures that the subscriber's information
   (such as requests, commands, data, responses, etc.) is not changed,
   intentionally or unintentionally, while in transit or by replays of
   earlier traffic.  Unauthorized users (e.g., intruders or network
   viruses) would be denied use of GN assets through access control
   mechanisms which verify that the authenticated source is authorized
   to receive the requested information or to initiate the specified
   command.  In addition, nonrepudiation services can be offered to
   assure a third party that the transmitted information has not been
   altered.  And finally, confidentiality will ensure that the contents
   of a message are not divulged to unauthorized individuals.
   Subscribers can decide, based upon their own security needs and
   particular activities, which of these services are necessary at a
   given time.


   3.  Critical Research Issues


   In the section above, we discussed the goals of a research program in
   gigabit networking; namely to provide the technology base for a
   network that will allow gigabit service to be provided in an
   effective way.  In this section, we discuss those issues which we
   feel are critical to address in a research program to achieve such
   goals.


   3.1.  General Architectural Issues


   In the last generation of networks, it was assumed that bandwidth was
   the scarce resource and the design of the switch was dictated by the
   need to manage and allocate the bandwidth effectively.  The most
   basic change in the next generation network is that the speeds of the
   trunks are rising faster than the speeds of the switching elements.

   This change in the balance of speeds has manifested itself in several
   ways.  In most current designs for local area networks, where
   bandwidth is not expensive, the design decision was to trade off
   effective use of the bandwidth for a simplified switching technique.
   In particular, networks such as Ethernet use broadcast as the normal
   distribution method, which essentially eliminates the need for a
   switching element.

   As we look at still higher speed networks, and in particular networks
   in which the bandwidth is still the expensive component, we must
   design new options for switching which will permit effective use of
   bandwidth without the switch itself becoming the bottleneck.

   The central thrust of new research must thus be to explore new
   network architectures that are consistent with these very different
   speed assumptions.

   The development of computer communications has been tremendously
   distorted by the characteristics of wide-area networking: normally
   high cost, low speed, high error rate, large delay.  The time is ripe
   for a revolution in thinking, technology, and approaches, analogous
   to the revolution caused by VCR technology over 8 and 16 mm film
   technology.

   Fiber optics is clearly the enabling technology for high-speed
   transmission; so much so, in fact, that there is an expectation that
   the switching elements will now hold down the data rates.  Both
   conventional circuit switching and packet switching have significant
   problems at higher data rates.  For instance, circuit switching
   requires increasing delays for FTDM synchronization to handle skew.
   In the case of packet switching, traditional approaches require too
   much processing per packet to handle the tremendous data flow.  The
   problem for both switching regimes is the "intelligence" in the
   switches, which in turn requires electronics technology.

   Besides intelligence, another problem for wide-area networks is
   storage, both because it ties us to electronics (for the foreseeable
   future) and because it produces instabilities in a large-scale
   system.  (See, for instance, the work by Van Jacobson on self-
   organizing phenomena for self-destruction in the Internet.)
   Techniques are required to eliminate dependence on storage, such as
   cut-through routing.
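
   A small calculation shows why cut-through is attractive; the header
   and packet sizes below are illustrative assumptions:

      # Cut-through vs. store-and-forward: a switch that forwards as
      # soon as the header arrives need not buffer the whole packet.

      HEADER_BITS = 20 * 8      # assumed header size
      PACKET_BITS = 576 * 8     # one 576-byte packet
      RATE        = 1e9         # 1 Gbit/s links

      store_and_forward = PACKET_BITS / RATE   # wait for whole packet
      cut_through       = HEADER_BITS / RATE   # wait for header only

      print("per-hop delay: %.2f us vs %.2f us"
            % (store_and_forward * 1e6, cut_through * 1e6))
      # ~4.61 us vs ~0.16 us per hop; more importantly, the switch no
      # longer needs a packet's worth of storage in the common
      # (non-blocking) case.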

   Overall, high-speed WANs are the greatest agents of change, the
   greatest catalyst both commercially and militarily, and the area ripe
   for revolution.  Judging by the attributes of current high-speed
   network research prototypes, WANs of the future will be photonic,
   multi-gigabit networks with enormous throughput, low delay, and low
   error rate.

   A zero-based budgeting approach is required to develop the new high-
   speed internetwork architecture.  That is, the time is ripe to
   significantly rethink the Internet, building on experience with this
   system.  Issues of concern are manageability, understanding,
   evolvability, and support for the new communication requirements,
   including remote procedure call, real-time service, security, and
   fault tolerance.

   The GN must be able to deal with two sources of high-bandwidth
   requirements.  There will be some end devices (computers) connected
   more or less directly to the GN because of their individual
   requirements for high bandwidth (e.g., supercomputers needing to
   drive remote high-bandwidth graphics devices).  In addition, the
   aggregate traffic due to large numbers of moderate rate users
   (estimates are roughly up to a million potential users needing up to
   1 Mbit/s at any given time) results in a high-bandwidth requirement
   in total on the GN.  The statistics of such traffic are different and
   there are different possible technical approaches for dealing with
   them.  Thus, an architectural approach for dealing with both must be
   developed.
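
   The two demand sources can be put in rough numbers.  In the sketch
   below, the per-user rate and population come from the estimates
   above, while the duty cycle is an assumption chosen purely for
   illustration:

      # Two sources of high-bandwidth demand on the GN.

      individual_user = 1e9   # one supercomputer-class user, ~1 Gbit/s

      users      = 1e6        # up to a million moderate-rate users
      peak_rate  = 1e6        # up to 1 Mbit/s each when active
      duty_cycle = 0.01       # assume ~1% active at any given time

      aggregate = users * peak_rate * duty_cycle

      print("single high-end user:        %.0f Gbit/s"
            % (individual_user / 1e9))
      print("aggregate of moderate users: %.0f Gbit/s"
            % (aggregate / 1e9))
      # Even at 1% activity the aggregate is ~10 Gbit/s, with very
      # different statistics (many small bursts versus one sustained
      # stream); hence the need for distinct technical approaches.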

   Overall, the next-generation architecture has to be, first and
   foremost, a management architecture.  The directions in link speeds,
   processor speeds and memory solve the performance problems for many
   communication situations so well that manageability becomes the
   predominant concern.  (In fact, fast communication makes large
   systems more prone to performance, reliability, and security
   problems.)  In many ways, the management system of the internetwork
   is the ultimate distributed system.  The solution to this tough
   problem may well require the best talents from the communications,
   operating systems and distributed systems communities, perhaps even
   drawing on database and parallelism research.


   3.1.1.  High-Speed Internet using High-Speed Networks


   The GN will need to take advantage of a multitude of different and
   heterogeneous networks, all of high speed.  In addition to networks
   based on the technology of the GB, there will be high-speed LANs.  A
   key issue in the development of the GN will be the development of a
   strategy for interconnecting such networks to provide gigabit service
   on an end to end basis.  This will involve techniques for switching,
   interfacing, and management (as discussed in the sections below)
   coupled with an architecture that allows the GN to take full
   advantage of the performance of the various high-speed networks.


   3.1.2.  Network Organization


   The GN will need an architecture that supports the need to manage the
   system as well as obtain high performance.  We note that almost all
   human-engineered systems are hierarchically structured from the
   standpoint of control, monitoring, and information flow.  A
   hierarchical design may be the key to manageability in the next-
   generation architecture.

   One approach is to use a general three-level structure, corresponding
   to interadministrational, intraadministrational, and cluster
   networks.  The first level interconnects communication facilities of
   truly separate administrations where there is significant separation
   of security, accounting, and goals.  The second level interconnects
   subadministrations which exist for management convenience in large
   organizations.  For example, a research group within a university may
   function as a subadministration.  The cluster level consists of
   networks configured to provide maximal performance among hosts which
   are in frequent communication, such as a set of diskless workstations
   and their common file server.  These hosts are typically, but not
   necessarily, geographically collocated.  For example, two remote
   networks may be tightly coupled by a fiber optic link that bridges
   between the two physical networks, making them function as one.

   Research along these lines should study the interorganizational
   characteristics of communications, such as those being investigated
   by the IAB Task Force on Autonomous Networks.  Based on current
   results, we expect that such work would clearly demonstrate that
   considerable communication takes place between particular
   subadministrations in different administrations; communication
   patterns are not strictly hierarchical.  For example, there might be
   intense direct communication between the experimental physics
   departments of two independent universities, or between the computer
   support group of one company and the operating system development
   group of another.  In addition, (sub)administrations may well also
   require divisions into public information and private information.


   3.1.3.  Fault-Tolerant System


   Although the GN will be developed as part of an experimental research
   program, it will also serve as part of the infrastructure for
   researchers who are experimenting with applications which will use
   such a network.  The GN must have reasonably high availability to
   support these research activities.  In addition, to facilitate the
   transfer of this technology to future operational military and
   commercial users, it will need to be designed to become highly
   reliable.  This can be accomplished through diversity of transmission
   paths, the development of fault-tolerant switches, use of a
   distributed control structure with self-correcting algorithms, and
   the protection of network control traffic.  The architecture of a GN
   should support and allow for all of these things.


   3.1.4.  Functional Division of Control Between Network Elements


   Current protocol architectures use the layered model of functional
   decomposition first developed in the early work on ARPANET protocols.
   The concept of layering has been powerful, allowing dramatic
   variation in network technologies without requiring the complete
   reimplementation of applications.  It has had a first-order impact
   on the development of international standards for data
   communication---witness the ISO "Reference Model for Open Systems
   Interconnection."

   Unfortunately, however, the powerful concept of layering has been
   paired, both in the DoD Internet work and the ISO work, with an
   extremely weak concept of the interface between layers.  The
   interface designs are all organized around the idea of commands and
   responses plus an error indicator.  For example, the TCP service
   interface provides the user with commands to set up or close a TCP
   connection and commands to send and receive datagrams.  The user may
   well "know" whether they are using a file transfer service or a
   character-at-a-time virtual terminal, but can't tell the TCP.  The
   underlying network may "know" that failures have reduced the path to
   the user's destination to a single 9.6 kbit/s link, but it also can't
   tell the TCP implementation.

   All of the information that an analyst would consider crucial in
   diagnosing system performance is carefully hidden from adjacent
   layers.  One "solution" often discussed (but rarely implemented) is
   to condense all of this information into a few bits of "Type of
   Service" or "Quality of Service" request flowing in one direction
   only---from application to network.  It seems likely that this
   approach cannot succeed, both because it applies too much compression
   to the knowledge available and because it does not provide two-way
   flow.

   We believe it to be likely that the next-generation network will
   require a much richer interface between every pair of adjacent layers
   if adequate performance is to be achieved.  Research is needed into
   the conceptual mechanisms, both indicators and controls, that can be
   implemented at these interfaces and that, when used, will result in
   better performance.  If real differences in performance can be
   observed, then the implementors of every layer will have a strong
   incentive to make use of the mechanisms.
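
   As a minimal sketch of what a richer, two-way interface might look
   like (the class, method names, and upcall mechanism are invented for
   illustration; nothing of the sort exists in the interfaces described
   above):

      # Controls flow down; indications flow back up, instead of
      # commands plus an error code only.

      class TransportService:
          def __init__(self):
              self.handlers = []

          # controls: application -> transport
          def open(self, dest, traffic_profile):
              # traffic_profile might say "bulk transfer" or
              # "keystrokes", information today's interfaces give the
              # transport no way to learn.
              ...

          # indications: transport -> application
          def on_indication(self, handler):
              # Register for upcalls such as "path capacity dropped to
              # 9.6 kbit/s", so the layer above can adapt in real time.
              self.handlers.append(handler)

          def _indicate(self, event):
              for h in self.handlers:
                  h(event)

      svc = TransportService()
      svc.on_indication(lambda e: print("layer above notified:", e))
      svc._indicate("path capacity reduced to 9.6 kbit/s")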

   We can observe the first glimmers of this sort of coordination
   between layers in current work.  For example, in the ISO work there
   are 5 classes of transport protocol which are supposed to provide a
   range of possible matches between application needs and network
   capabilities.  Unfortunately, it is the case today that the class of
   transport protocol is chosen statically, by the implementer, rather
   than dynamically.  The DARPA Wideband net offers a choice of stream
   or datagram service, but typically a given host uses all one or all
   the other---again, a static rather than a dynamic choice.  The
   research that we believe is needed, therefore, is not how to provide
   alternatives, but how to provide them and choose among them on a
   dynamic, real-time basis.


   3.1.5.  Different Switch Technologies


   One approach to high-performance networking is to design a technology
   that is expected to work as a stand-alone demonstration, without
   addressing the need for interconnection to other networks.  Such an
   experiment may be very valuable for rapid exploration of the design
   space.  However, our experience with the Internet project suggests
   that a primary research goal should be the development of a network
   architecture that permits the interconnection of a number of
   different switching technologies.

   The Internet project was successful to a large extent because it
   could incorporate a number of new and preexisting network
   technologies: various local area networks, store and forward
   switching networks, broadcast satellite nets, packet radio networks,
   and so on.  In this way, it decoupled the use of the protocols from a
   particular technology base.  In fact, the technology base evolved
   rapidly, but the Internet protocols themselves provided a stability
   that led to their success.

   The next-generation architecture must similarly deal with a diverse
   and evolving technology base.  We see "fast-packet" switching now
   being developed (for example in B-ISDN); we see photonic switching
   and wavelength division multiplexing as more advanced technologies.
   We must divorce our architecture from dependence on any one of these.

   At the host interface, we must divorce the multiplexing of the medium
   from the form of data that the host sees.  Today the packet is used
   both as the unit of multiplexing and as the interface element.  In
   the future, the host
   may see the network as a message-passing system, or as memory.  At
   the same time, the network may use classic packets, wavelength
   division, or space division switching.

   A number of basic functions must be rethought to provide an
   architecture that is not dependent on the underlying switching model.
   For example, our transport protocols assume that data will be lost in
   units of a packet.  If part of a packet is lost, we discard the whole
   thing.  And if several packets are systematically lost in sequence,
   we may not recover effectively.  There must be a host-level unit of
   error recovery that is independent of the network.  This sort of
   abstraction must be applied to all the aspects of service
   specification: error recovery, flow control, addressing, and so on.


   3.1.6.  Network Operations, Monitoring, and Control


   There is a hierarchy of progressively more effective and
   sophisticated techniques for network management that applies
   regardless of network bandwidth and application considerations:

      1.  Reactive problem management

      2.  Reactive resource management

      3.  Proactive problem management

      4.  Proactive resource management.

   Today's network management strategies are primarily reactive rather
   than proactive:  Problem management is initiated in response to user
   complaints about service outages; resource allocation decisions are
   made when users complain about deterioration of quality of service.
   Today's network management systems are stuck at step 1 or perhaps
   step 2 of the hierarchy.

   Future network management systems will provide proactive problem
   management---problem diagnosis and restoral of service before users
   become aware that there was a problem; and proactive resource
   management---dynamic allocation of network bandwidth and switching
   resources to ensure that an acceptable level of service is
   continuously maintained.

   The GN management system should be expected to provide proactive
   problem and resource management capabilities.  It will have to do so
   while contending with three important changes in the managed network
   environment:

      1.  More complicated devices under management

      2.  More diverse types of devices

      3.  More variety of application protocols.

   Performance under these conditions will require that we seriously
   rethink how a network management system handles the expected high
   volumes of raw management-related data.  It will become especially
   important for the system to provide thresholding, filtering, and
   alerting mechanisms that can save the human operator from drowning in
   data, while still permitting access to details when diagnostic or
   fault isolation modes are invoked.
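
   A sketch of the thresholding-and-filtering idea follows; the class
   and the threshold value are illustrative assumptions, not a proposed
   design:

      # Keep raw management data out of the operator's way until a
      # threshold trips, but retain the detail for later diagnosis.

      from collections import deque

      class AlertFilter:
          def __init__(self, threshold, history=10000):
              self.threshold = threshold
              self.raw = deque(maxlen=history)  # kept for diagnosis

          def observe(self, link, utilization):
              self.raw.append((link, utilization))
              if utilization > self.threshold:
                  self.alert(link, utilization)  # exceptions surface

          def alert(self, link, utilization):
              print("ALERT: %s at %.0f%% utilization"
                    % (link, utilization * 100))

      f = AlertFilter(threshold=0.9)
      f.observe("trunk-3", 0.55)   # filtered; nothing shown
      f.observe("trunk-3", 0.97)   # raises an operator alert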

   The presence of expert assistant capabilities for early fault
   detection, diagnosis, and problem resolution will be mandatory.
   These capabilities are highly desirable today, but they will be
   essential to contend with the complexity and diversity of devices and
   applications in the Gigabit Network.

   In addition to its role in dealing with complexity, automation
   provides the only hope of controlling and reducing the high costs of
   daily management and operation of a GN.

   Proactive resource management in GNs must be better understood and
   practiced, initially as an effort requiring human intervention and
   direction.  Once this is achieved, it too must become automated to a
   high degree in the GN.


   3.1.7.  Naming and Addressing Strategies


   Current networks, both voice (telephone) and data, use addressing
   structures which closely tie the address to the physical location on
   the network.  That is, the address identifies a physical access
   point, rather than the higher-level entity (computer, process, human)
   attached to that access point.  In future networks, this physical
   aspect of addressing must be removed.

   Consider, for example, finding the desired party in the telephone
   network of today.  For a person not at his listed number, finding the
   number of the correct telephone may require preliminary calls, in
   which advice is given to the person placing the call.  This works
   well when a human is placing the call, since humans are well equipped
   to cope with arbitrary conversations.  But if a computer is placing
   the call, the process of obtaining the correct address will have to
   be incorporated in the architecture as a core service of the network.

   Since it is reasonable to expect mobile hosts, hosts that are
   connected to multiple networks, and replicated hosts, the issue of
   mapping to the physical address must be properly resolved.

   To permit the network to maintain the dynamic mapping to current
   physical address, it is necessary that high-level entities have a
   name (or logical address) that identifies them independently of
   location.  The name is maintained by the network, and mapped to the
   current physical location as a core network service.  For example,
   mobile hosts, hosts that are connected to multiple networks, and
   replicated hosts would have static names whose mapping to physical
   addresses (many-to-one, in some cases) would change with time.
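
   A toy stand-in for such a mapping service is sketched below; the
   class and its operations are illustrative only, not a design for the
   distributed directory discussed elsewhere in this report:

      # The network maintains a mapping from a stable name to its
      # current (possibly multiple) physical attachment points.

      class NameService:
          def __init__(self):
              self.mapping = {}   # name -> set of attachment points

          def register(self, name, address):
              self.mapping.setdefault(name, set()).add(address)

          def move(self, name, old, new):   # mobile/migrating host
              self.mapping[name].discard(old)
              self.mapping[name].add(new)

          def resolve(self, name):
              return self.mapping[name]     # many-to-one in some cases

      ns = NameService()
      ns.register("host-a", "net1.port7")
      ns.register("host-a", "net4.port2")   # multi-homed host
      ns.move("host-a", "net1.port7", "net2.port9")
      print(ns.resolve("host-a"))           # current attachment points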

   Hosts are not the only entities whose physical location varies.
   Users' electronic mail addresses change.  Within distributed systems,
   processes and files migrate from host to host.  In a computing
   environment where robustness and survivability are important, entire
   applications may move about, or they may be redundant.

   The needed function must be considered in the context of the mobility
   and address resolution rates that would result if all addresses in a
   global data network were of this sort.  The distributed network
   directory discussed elsewhere in this report should be designed to
   provide the necessary flexibility and responsiveness.  The nature and
   administration of names must also be considered.

   Names that are arbitrary or unwieldy would be barely better than the
   addresses used now.  The name space should be designed so that it can
   easily be partitioned among the agencies that will assign names.  The
   structure of names should facilitate, rather than hinder, the mapping
   function.  For example, it would be hard to optimize the mapping
   function if names were flat and unstructured.


   3.2.  High-Speed Switching


   The term "high-speed switching" refers to changing the switching at a
   high rate, rather than merely switching high-speed links; the latter
   is not difficult at low switching rates (consider, for example,
   manual switching of fiber connections).  The switching regime chosen
   for the network determines various aspects of its performance, its
   charging policies, and even its effective capabilities.  As an
   example of the latter, it is difficult to expect a circuit-switched
   network to provide strong multicast support.

   A major area of debate lies in the choice between packet switching
   and circuit switching.  This is a key research issue for the GN;
   feasible combinations of the two approaches should also be
   considered.


   3.2.1.  Unit of Management vs. Multiplexing


   With very high data rates, either the unit of management and
   switching must be larger or the speed of the processor elements for
   management and switching must be faster.  For example, at a gigabit,
   a 576-byte packet takes roughly 5 microseconds to be received, so a
   packet switch must act extremely fast to avoid being the dominant
   source of per-packet delay.  Moreover, the storage time for the
   packet in a conventional store-and-forward implementation also
   becomes a significant component of the delay.  Thus, for packet
   switching to remain attractive in this environment, it appears
   necessary to increase the size of packets (or switch on packet
   groups), do so-called virtual cut-through, and use high-speed
   routing techniques, such as high-speed route caches and source
   routing.
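
   The per-packet time budget cited above is simple to reproduce (a
   sketch; the packet size is the one used in the text):

      # At 1 Gbit/s, a 576-byte packet arrives in under 5 microseconds,
      # which is all the time a switch has per packet if it is to stay
      # out of the critical path.

      PACKET_BITS = 576 * 8
      RATE        = 1e9                        # 1 Gbit/s

      per_packet = PACKET_BITS / RATE          # ~4.6 us
      packets_per_sec = RATE / PACKET_BITS     # ~217,000/s

      print("time per packet: %.1f us" % (per_packet * 1e6))
      print("packets per sec: %.0f" % packets_per_sec)
      # Larger packets (or packet groups) scale this budget linearly,
      # which is the motivation for the techniques listed above.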

   Alternatively, for circuit switching to be attractive, it must
   provide very fast circuit setup and tear-down to support the bursty
   nature of most computer communication.  This problem is rendered
   difficult (and perhaps impossible for certain traffic loads) because
   the delay across the country is so large relative to the data rate.
   That is, even with techniques such as so-called fast select,
   bandwidth is reserved by the circuit along the path for almost twice
   the propagation time before being used.
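
   The reserved-but-unused bandwidth can be quantified with a sketch;
   the coast-to-coast propagation delay and burst size below are
   assumed figures for illustration:

      # Fast select still reserves path bandwidth for roughly one
      # round-trip time before the first data bit uses it.

      RATE    = 1e9        # 1 Gbit/s circuit
      ONE_WAY = 0.023      # ~23 ms coast-to-coast in fiber (assumed)
      BURST   = 1e6 * 8    # a 1-megabyte burst (assumed)

      reserved_idle = 2 * ONE_WAY * RATE   # capacity held unused, bits
      burst_time    = BURST / RATE

      print("capacity idle during setup: %.0f Mbit"
            % (reserved_idle / 1e6))
      print("burst transmission time:    %.0f ms"
            % (burst_time * 1e3))
      # ~46 Mbit of capacity sits idle to move an 8 Mbit burst, which
      # is why fast setup alone may not rescue circuit switching for
      # bursty computer traffic.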

   With gigabit circuit switching, because it is not feasible to
   physically switch channels, the low-level switching is likely to be
   FTDM on micro-packets, as is currently done in telephony.  Performing
   FTDM at gigabit data rates is a challenging research problem if the
   skew introduced by wide-area communication is to be handled with
   reasonable overhead for spacing of these micro-packets.  Given the
   lead and resources of the telephone companies, this area of
   investigation should, if pursued, be pursued cooperatively.


   3.2.2.  Bandwidth Reservation Algorithms


   Some applications, such as real-time video, require sustained high
   data rate streams over a significant period of time, such as minutes
   if not hours.  Intuitively, it is appealing for such applications to
   pre-allocate the bandwidth they require to minimize the switching
   load on the network and guarantee that the required bandwidth is
   available.  Research is required to determine the merits of bandwidth
   reservation, particularly in conjunction with the different switching
   technologies.  There is some concern that bandwidth reservation may
   require excessive intelligence in the network, reducing its
   performance and reliability.  In addition, bandwidth reservation
   opens a new option for denial of service by an intruder or malicious
   user.  Thus, investigations in
   this area need to proceed in concert with work on switching
   technologies and capabilities and security and reliability
   requirements.


   3.2.3.  Multicast Capabilities


   It is now widely accepted that multicast should be provided as a
   user-level service, as described in RFC 1054 for IP, for example.
   However, further research is required to determine the best way to
   support this facility at the network layer and lower.  It is fairly
   clear that the GN will be built from point-to-point fiber links that
   do not provide multicast/broadcast for free.  At the most
   conservative extreme, one could provide no support and require that
   each host or gateway simulate multicast by sending multiple,
   individually addressed packets.  However, there are significant
   advantages to providing very low level multicast support (besides the
   obvious performance advantages).  For example, multicast routing in a
   flooding form provides the most fault-tolerant, lowest-delay form of
   delivery which, if reserved for very high priority messages, provides
   a good emergency facility for high-stress network applications.
   Multicast may also be useful as an approach to defeat traffic
   analysis.
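
   The most conservative option, host-simulated multicast, is easy to
   state precisely.  In this sketch "send_unicast" is a stand-in, not a
   real interface:

      def send_unicast(dest, packet):
          print("unicast to %s: %r" % (dest, packet))

      def simulated_multicast(group_members, packet):
          # One transmission per member: host load and access-link
          # usage grow linearly with group size, which network-layer
          # multicast (e.g., the RFC 1054 model for IP) is meant to
          # avoid.
          for dest in group_members:
              send_unicast(dest, packet)

      simulated_multicast(["hostA", "hostB", "hostC"], b"state update")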

   Another key issue arises with the distinction between so-called open
   group multicast and closed group multicast.  In the former, any host
   can multicast to the group, whereas in the latter, only members of
   the group can multicast to it.  The latter is easier to support and
   adequate for conferencing, for example.  However, for more client-
   server structured applications, such as those using file/database
   servers, computation servers, etc., as groups, open multicast is
   required.
   Research is needed to address both forms of multicast.  In addition,
   security issues arise in controlling the membership of multicast
   groups.  This issue should be addressed in concert with work on
   secure forms of routing in general.


   3.2.4.  Gateway Technologies


   With the wide-area interconnection of local networks by the GN,
   gateways are expected to become a significant performance bottleneck
   unless significant advances are made in gateway performance.  In
   addition, many network management concerns suggest putting more
   functionality (such as access control) in the gateways, further
   increasing their load and the need for greater capacity.  This would
   then raise the issue of the trade-off between general-purpose
   hardware and special-purpose hardware.

   On the general-purpose side, it may be feasible to use a general-
   purpose multiprocessor based on high-end microprocessors (perhaps as
   exotic as the GaAs MIPS) in conjunction with a high-speed block
   transfer bus, as proposed as part of the FutureBus standard (which is
   extendible to higher speeds than currently commercially planned), and
   intelligent high-speed network adaptors.  This would also allow the
   direct use of hardware, operating systems, and software tools
   developed as part of other DARPA programs, such as Strategic
   Computing.  It also appears to make this gateway software more
   portable to commercial machines as they become available in this
   performance range.

   The specialized hardware approach is based on the assumption that
   general-purpose hardware, particularly the interconnection bus,
   cannot be fast enough to support the level of performance required.
   The expected emphasis is on various interconnection network
   techniques.  These approaches appear to entail greater expense, less
   commercial availability, and more specialized software.  They need to
   be critically evaluated with respect to the general-purpose gateway
   hardware approach, especially if the latter is using multiple buses
   for fault-tolerance as well as capacity extension (in the absence of
   failure).

   The same general-purpose vs. special-purpose contention is an issue
   with operating system software.  Conventionally, gateways run
   specialized run-time executives that are designed specifically for
   the gateway and gateway functions.  However, the growing
   sophistication of the gateway makes this approach less feasible.  It
   appears important to investigate the feasibility of using a standard
   operating system foundation on the gateways that is known to provide
   the required security and reliability properties (as well as real-
   time performance properties).


   3.2.5.  VLSI and Optronics Implementations


   It appears fairly clear that gigabit communication will use fiber
   optics for at least the near future.  Without major advances in
   optronics to allow effectively for optical computers, communication
   must cross the optical-electronic boundary two or more times.  There
   are significant cost, performance, reliability, and security benefits
   to minimizing the number of such crossings.  (As an example of a
   security benefit, optics is not prone to electronic surveillance or
   jamming while electronics clearly is, so replacing an optic-
   electronic-optic node with a pure optic node eliminates that
   vulnerability point.)

   The benefits of improved technology in optronics are so great that
   its application here is purely another motivation for an already
   active research area (that deserves strong continued support).
   Therefore, we focus here on the issue of matching current (and near-
   term expected) optronics capabilities with network requirements.

   The first and perhaps greatest area of opportunity is to achieve
   totally (or largely) photonic switches in the network switching
   nodes.  That is, most packets would be switched without crossing the
   optics-electronics boundary at all.  For this to be feasible, the
   switch must use very simple switching logic, require very little
   storage and operate on packets of a significant size.  The source-
   routed packet switches with loopback on blockage of Blazenet
   illustrate the type of techniques that appear required to achieve
   this goal.

   Research is required to investigate the feasibility of optronic
   implementation of switches.  It appears highly likely that networks
   will at some point in the future be totally photonically switched,
   with an impact on networking comparable to the effect of integrated
   circuits on processors and memories.

   A next level of focus is to achieve optical switching in the common
   case in gateways.  One model is a multiprocessor with an optical
   interconnect.  Packets associated with established paths through the
   gateway are optically switched and processed through the
   interconnect.  Other packets are routed to the multiprocessor,
   crossing into the electronics domain.  Research is required to marry
   the networking requirements and technology with optronics technology,
   pushing the state of the art in both areas in the process.

   Given the long-term presence of the optic-electronic boundary,
   improvements in technology in this area are also important.  However,
   it appears that there is already enormous commercial research
   activity in this area, particularly within the telephone companies.
   This is another area in which collaborative investigation appears far
   better than a new independent research effort.

   VLSI technology is an established technology with active research
   support.  The GN effort does not appear to require major new
   initiatives in the VLSI area, yet one should be open to significant
   novel opportunities not identified here.


   3.2.6.  High-Speed Transfer Protocols


   To achieve the desired speeds, it will be necessary to rethink the
   form of protocols.

      1.  The simple idea of a stateless gateway must be replaced by a
          more complex model in which the gateway understands the
          desired function of the end point and applies suitable
          optimizations to the flow.

      2.  If multiplexing is done in the time domain, the elements of
          multiplexing are probably so small that no significant
          processing can be performed on each individually.  They must
          be processed as an aggregate.  This implies that the unit of
          multiplexing is not the same as the unit of processing.

      3.  The interfaces between the structural layers of the
          communication system must change from a simple
          command/response style to a richer system which includes
          indications and controls.

      4.  An approach must be developed that couples the memory
          management in the host and the structure of the transmitted
          data, to allow efficient transfers into host memory.

   The result of rethinking these problems will be a new style of
   communications and protocols, in which there is a much higher degree
   of shared responsibility among the components (hosts, switches,
   gateways).  This may have little resemblance to previous work either
   in the DARPA or commercial communities.


   3.3.  High-Speed Host Interfaces


   As networks get faster, the most significant bottleneck will turn out
   to be the packet processing overhead in the host.  While this does