[comp.protocols.tcp-ip] Rumors about the death of the ARPANET

boesch@VAX.DARPA.MIL (03/15/88)

There have been a number of rumors about the impending death of the
ARPANET.  Here is the current DARPA position.

Brian Boesch

------------------------------------------------------------------------


	DEATH OF THE ARPANET AND OTHER PARANOIA

There have been a number of rumors throughout the community that the
ARPANET project is being terminated. Many individuals and
organizations have expressed concern that the service that they have
become accustomed to will be terminated.

Enough rumors, now a word from your sponsor, DARPA.

The ARPANET project in fact is being terminated, but not soon.  DARPA 
is in the business of conducting research into critical NEW technologies 
that will advance the state of the art.  ARPANET is neither new
nor state of the art.  It is slow and expensive.

ARPANET was founded in the early 70's when 56Kbit/second trunks were
on the cutting edge of modulation and transmission technology. Packet
switching was unheard of.  (An interesting fact is that the average
terminal of the day ran at 30 cps, making the net trunks about a factor
of 230 faster than the average user interface.) Since that time the
project expanded into the INTERNET where a number of dissimilar
networks could be interconnected relatively transparently.  The
internet grew from about 63 hosts to over 20,000. The local nets that
connect to the ARPANET and other Wide Area Nets (WANs) progressively
increased in speed.  The result is that while in '73 a large number of
users could effectively share one trunk, today, one user on a PC can
overload the entire capacity of the ARPANET.
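As a back-of-the-envelope check on that "factor of 230" (assuming 8 bits per character for the 30 cps terminals of the day):

```python
# Rough arithmetic behind the "factor of 230" claim: a 56 Kbit/s trunk
# versus a 30 character-per-second terminal (assuming 8 bits/character).
trunk_bps = 56_000
terminal_bps = 30 * 8          # 30 cps * 8 bits = 240 bit/s
ratio = trunk_bps / terminal_bps
print(round(ratio))            # 233, i.e. "about a factor of 230"
```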

In addition to being overloaded, the ARPANET is no longer able to
support its other prime function, that of a research base.  To conduct
any kind of experiment on the ARPANET causes too much service
disruption to the community.

Finally, the ARPANET is absorbing a significant fraction of our total
research budget in what is really a support function.

Solution: eliminate the source of the problem.  Rather than cutting
off the community, our approach is to outgrow the ARPANET in a few
years.

The follow-on network experiment will be called the Defense Research
Internet (DRI). We are also working in conjunction with other Federal
agencies, most notably National Science Foundation, to integrate our
networking experiments with the new regional networks, the NSFNET 
project, and other agency networks.

An additional source of confusion is the fact that we are currently
arranging for NSFNET to support some ARPANET users, as part of a joint 
effort to reduce costs by phasing out overlapping service.  Our
intention, as always, is to do this with minimal disruption to the
research community.

While this is happening, we will be putting together the initial version
of the DRI apart from the ARPANET.  From the beginning the DRI will provide
the long distance trunk capacity that the ARPANET lacks. Initial
speeds will be 1.5Mbit/second per link (a factor of 25 improvement).
The DRI will also be segregated into an "experimental" and an
"operational" side.  The experimental side will have higher performance,
with the possibility of a higher degree of network problems; the
operational side will support high data-rate applications such as image
transfer.
The experimental side will be phased from 1.5Mbit to higher and higher
bandwidths with the intent of eventually reaching gigabit/second
performance; the operational side will take over for the ARPANET.
It will be operated by a contractor, and will be funded as overhead on
individual users' projects rather than becoming a drain on the Networking
research budget.  After the DRI is stable, the ARPANET will be phased out.  


PLEASE DON'T BURY US WITH QUERIES ON THE DETAILS OF THE
IMPLEMENTATION; WE DON'T HAVE TIME TO ANSWER THEM.  AS DETAILS ARE
FINALIZED AND READY FOR PUBLIC DISSEMINATION, WE WILL POST THEM.


Mark Pullen & Brian Boesch
- -------

KASTEN@MITVMA.MIT.EDU (Frank Kastenholz) (03/19/88)

When the ARPANET was built, 56Kb lines were used - the leading edge
of technology of the day. The "New and Improved" network is proposed
to use 1.5Mbit lines (I assume T1). In keeping with the leading edge
philosophy of yore, the just erupting 45Mbit/sec technology (T3?) comes
to mind.

Any thoughts given to T3? After all, current experience with the ARPANET
tends to indicate that applications expand to fill the available bandwidth.

This is not a flame! Merely an honest question.

Frank Kastenholz
Atex Inc.

All opinions are mine and mine alone. My employer has no idea I am even
saying this.

CERF@A.ISI.EDU (03/21/88)

Frank,

DS-3 is a reality for many people who either mux it down to manageable
bandwidth channels or build special interfaces that can push/pull data
at 45 Mbit/sec.

About your earlier messages concerning trusting local I/O and not doing
checksums end to end (by implication) - we have tried that in the past and
been burned - what is different today? Fiber? The problem is that the end
to end channel may still contain some weakness in terms of S/N and bit 
error rate. I'd rather see silicon checksums to speed them up than doing
away with them because they take time...

Vint

kwe@bu-cs.BU.EDU (kwe@bu-it.bu.edu (Kent W. England)) (03/21/88)

In article <8803210118.AA19481@ucbvax.Berkeley.EDU> KASTEN@MITVMA.MIT.EDU (Frank Kastenholz) writes:
>
>When the ARPANET was built, 56Kb lines were used - the leading  edge
>of technology of the day. The "New and Improved" network is proposed
>to use 1.5Mbit lines (I assume T1). In keeping with the leading edge
>philosophy of yore, the just erupting 45Mbit/sec technology (T3?) comes
>to mind.
>
>Any thoughts given to T3? 

	There is the FCCSET [fixit!] committee of the President's
Office of Science and Technology which has done studies and set up
subcommittees on supercomputing, networking, and technology
development, etc.  Gordon Bell is the chairman of the subcommittee on
networking.  He recently wrote an article [with an unfortunate
derogatory leading remark on BU networking!] in the February issue of
IEEE Spectrum about the need for a new national research and education
network.  He proposes a series of steps upgrading the internet,
leading eventually to a T3 link to every major university and research
center.  He discusses some costs associated with upgrading to T1, but
says nothing about the cost of T3, which may be appropriate given the
lead time to deployment, except to say that the institutions served
should pay (telephone analogy).  I wonder if this is reasonable?
	Paying for T3 would be dear compared to paying for today's
services.  I think the telephone companies' rates, based on message
units and not cost-to-provide, are probably too high.
	Bell says it's up to us to take the lead since our government
probably won't.  Anybody want AT&T to provide T3 service for a new
super-internet?  Anybody up to building our own private network?  The
providers of supercomputer services (like, but not necessarily,
Boeing) already have complained about unfair competition from the NSF
supercomputer centers.  Wonder what the telecommunications vendors
will say when someone puts Bell's proposal before the Congress, if
ever?
	Oh, well, I think I'll get our network users used to paying
for service now on a recurring basis.  Soften them up for the $5k to
$30k per month the super-internet will cost.  :-)
	As Bell says, technology is not the issue.  We can build T3
networks and we can build super-routers and we can fix TCP {VJ can,
anyway}...  The problem is figuring out how to do it cooperatively.

	Kent England
	Boston University (which does have the networks the Medical
	Center doesn't have. [read the article])

haverty@CCV.BBN.COM (03/22/88)

Vint/Frank - my recollection of some of the conflagrations around
checksumming is that the end-end checksum is valuable even if the
piecewise error control is perfect (10 to however many you like);
what the end-end often catches are plain and simple bugs deep down
in the middle of the path, e.g., a pointer off-by-one in a packet
switch buffer under some obscure condition and the like.  Since
software is never debugged fully until you retire it, such situations
will crop up, and according to Murphy will happen at the worst time.
What would be interesting is to see if there is a way to design
a simple end-end "checksum" designed to catch errors which are not
like those in communications media, i.e., result from bugs, configuration
mistakes, etc., rather than line noise and the like.
Jack

KASTEN@MITVMA.MIT.EDU (Frank Kastenholz) (03/22/88)

Jack, Vint, and friends,

By trusting the local I/O channel I had in mind just the transfer of
data from the host's main storage to the protocol processor. For example,
the Block Multiplexer channel on an IBM mainframe provides a guaranteed
transfer - if the application issues a write to the channel, the final status
of the operation indicates success or failure. Also, the channel will transfer
the data intact or report an error to the application. There is little need
(other than paranoia?) to provide an application-level checksum on the
transferred data, or a high-level ack mechanism across the channel.

End-to-end checksums, acks, etc. are needed. Nolo contendere. But they
can be between intelligent control units.

(This whole argument also assumes that the protocol processor has a
reasonably powerful CPU and amount of memory - imagine an IBM 3090
dumping data into an 8 MHz 68000 with 512 KB of memory!!! :-)

Frank

braden@VENERA.ISI.EDU (03/23/88)

	What would be interesting is to see if there is a way to design
	a simple end-end "checksum" designed to catch errors which are not
	like those in communications media, i.e., result from bugs, configuration
	mistakes, etc., rather than line noise and the like.
	Jack
	
Actually, Jack, I think that is what you (as part of the TCP WG) did.
The TCP checksum is probably too weak for communications media, but it
DOES, as you point out, catch subtle host software errors!
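For readers who haven't seen it spelled out, the checksum under discussion is the RFC 1071 style ones'-complement sum over 16-bit words. The sketch below is a minimal illustration, not a production implementation; it also demonstrates one well-known reason the sum is weak for media errors: reordering 16-bit words leaves it unchanged, while a zeroed word is still caught.

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 style ones'-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF

# Worked example from RFC 1071: words 0001 f203 f4f5 f6f7 sum to ddf2,
# so the transmitted checksum is its complement, 220d.
msg = bytes.fromhex("0001f203f4f5f6f7")
assert internet_checksum(msg) == 0x220D

# Weakness: swapping two 16-bit words does not change the checksum,
# since ones'-complement addition is commutative...
swapped = msg[2:4] + msg[0:2] + msg[4:]
assert internet_checksum(swapped) == internet_checksum(msg)
# ...yet a buggy switch that zeroes a word WILL be caught:
clobbered = msg[:2] + b"\x00\x00" + msg[4:]
assert internet_checksum(clobbered) != internet_checksum(msg)
```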

Bob Braden

haverty@CCV.BBN.COM (03/23/88)

Hi Bob,

Yes, good observation.  Actually it's not so much the weakness of the
TCP algorithm, it's the fact that so much has to remain intact for the
bad packet to make it to the destination at all to be checksummed!

The other thing to note is that error-correcting codes are related to
checksumming and to fault isolation - they detect, isolate the cause,
and repair all at the same time (but they DON'T usually report the
event to someone who might, for example, dispatch service to fix a
flaky component).

Here's part of an exchange from Vint commenting on checksums that
would have a bit that is set for 'pointer-off', and another for
'count-off-by-one', etc. that you might find interesting:

Vint - You've touched on an interesting point.  Right now, checksums
are almost always binary, i.e., the data is either intact to some high
probability, or it's bad in some undefined way.  Thus checksums become
good fault detection tools, but lousy fault isolation tools.  If we
could design a checksum algorithm that produced more than a yes/no
output, it could be a useful network management tool; e.g., even if it
could differentiate between a single-bit error in a packet, which
might be a 'normal' behavior of certain lines, and a totally clobbered
packet, e.g., all bytes zeroed, which might indicate a major hardware
or software failure, it would be very useful.  The former would be
ignored unless it happens a lot; the latter could trigger alarms,
memory snapshots, datascope traces, etc.
             
Of course the ultimate such algorithm would set bit 1 for off-by-one,
bit 2 for pointer problem, etc.  It may sound crazy, but I think it's
not so farfetched.  From lots of hours looking over people's shoulders
as they debug problems (and doing it myself), the key principle is to
look for something 'unusual' in the behavior.  If that test could be
converted into an AI-ish algorithm, it could become part of the 
network technology itself.  Deja vu - we talked about this kind of
thing when getting the original Automated Network Management project
off the ground.  Maybe we should get the academic community to try to
invent a new branch of science and engineering focused around 'checksums'?
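The "more than yes/no" idea above can be sketched in a few lines. Everything here is hypothetical illustration: a toy 16-bit ones'-complement sum plus a brute-force classifier that tries every single-bit repair, which is only practical for short packets. A real design would need something cheaper, but it shows the shape of a checksum used as a fault-isolation tool rather than a bare pass/fail test.

```python
# Toy sketch of a "diagnostic" checksum check.  All names hypothetical.
def csum16(data: bytes) -> int:
    """Simple 16-bit ones'-complement checksum (illustration only)."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def classify(data: bytes, carried: int) -> str:
    """Return a diagnosis rather than a bare pass/fail verdict."""
    if csum16(data) == carried:
        return "ok"
    if all(b == 0 for b in data):
        return "clobbered"            # e.g. a buffer was zeroed: raise an alarm
    for bit in range(len(data) * 8):  # does any single bit flip repair it?
        fixed = bytearray(data)
        fixed[bit // 8] ^= 0x80 >> (bit % 8)
        if csum16(bytes(fixed)) == carried:
            return "single-bit"       # plausible line noise: just count it
    return "unknown"                  # a software bug, pointer error, etc.

good = b"\x12\x34\x56\x78"
c = csum16(good)
assert classify(good, c) == "ok"
assert classify(b"\x00\x00\x00\x00", c) == "clobbered"
assert classify(b"\x12\x34\x56\x79", c) == "single-bit"   # one flipped bit
```

The "single-bit" case would be logged and ignored unless frequent; the "clobbered" and "unknown" cases are the ones worth a memory snapshot or a datascope trace.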

CERF@A.ISI.EDU (03/23/88)

The FCCSET committees, the NSF, the EDUCOM Network Forum, and a number
of other individuals, groups, organizations, random parties, etc. all
are interested in seeing a high speed network emerge which could benefit
the research community and ultimately the entire business population.

Serious work is going on in industry and in the research labs on very
high speed switching capable of operating in excess of 45 Mbps. A SONET
switch (circuit) was demonstrated recently at 135 Mb/s; a packet
mode switching fabric is under development at Bellcore which will
operate at 100-200 Mb/s per channel (packet mode).

Cost is an important consideration and it does seem as if various
forms of subsidy will be needed in the early stages, just as the
ARPANET was subsidized fully by the R&D funding activity of DARPA.
In the longer term, though, there are services which will demand the
bandwidth and make it far less expensive on average. To give a
trivial example, once ISDN emerges, 64 kb/s will be the standard rate
for voice - so you get to use it for data, too, without the cost
of a modem.

This isn't to say everything will happen easily or even very quickly,
but there are enough forces in motion that I believe the will and
the wherewithal to make high speed nets a reality will be available.

Vint

Mills@UDEL.EDU (03/24/88)

Vint,

Your suggestion that, once we get ISDN, we will have ubiquitous 64-kbps
digital service tripped me up. When I made much the same comment at
a recent meeting, carrier representatives were quick to point out that
2B + D (two 64-kbps bearer channels plus a 16-kbps data/control channel)
might well be ubiquitous in the feeder and distribution plants, but it
may take much longer for this to occur in the interexchange plant. For
many years, 64-kbps data service may not be ubiquitous outside your local
area. In fact, I was told the 16-kbps data/control channel would not
be end-to-end for a long time.

Why don't we chuck the whole thing and move to B-ISDN and SONET? The
general consensus of the carrier's R&D guys is that this would really be
the best course. Are you ready for 51.84 Mbps? You get 125 microseconds to
make up your mind.
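The 51.84 Mbps and 125 microsecond figures go together: assuming the standard SONET STS-1 frame of 9 rows by 90 byte-columns, delivered at the 8 kHz voice sampling rate, the arithmetic works out as:

```python
# SONET STS-1 frame arithmetic: one 9-row x 90-column frame of bytes
# every 125 us (i.e. 8000 frames/s, the 8 kHz voice sampling rate).
frame_bytes = 9 * 90            # 810 bytes per frame
frames_per_s = 8000             # one frame per 125 microseconds
rate_bps = frame_bytes * 8 * frames_per_s
print(rate_bps)                 # 51840000, i.e. 51.84 Mbps
```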

Dave

CERF@A.ISI.EDU (03/24/88)

Dave,

I'm sure not holding my breath for ISDN or B-ISDN, but I would not
mind having them both!  The local loop is the last part and the one
which will take the longest, but for many city-based systems,
the pairs are short enough that there are no loading coils, so
the switchover requires only CPE and CO equipment. It will probably be
1996-2000 before we see 65% business penetration and 35% residential, but
your guess is as good as mine.

Vint

budden@tetra.NOSC.MIL (Ray A. Buddenberg) (03/25/88)

Maybe I'm missing something, but:

- we are getting 10-20 km between repeaters on fiber links these days,
perhaps somewhat more with high clarity and laser transmitters.

- FDDI rings accommodate 512 nodes before we have to split the net.

- FDDI starts life at 100 Mbits.

Looks to me that 500 nodes times 20 km multiplies out to a pretty
decent regional backbone -- internet a half dozen of these and you've
got the country covered.  Why are we fiddling around with ISDN?

Rex Buddenberg

braun@drivax.UUCP (Kral) (03/26/88)

In article <8803141728.AA08682@sun46.darpa.mil> boesch@VAX.DARPA.MIL writes:
>
>The follow on network experiment will be called the Defense Research
>Internet (DRI). We are also working in conjunction with other Federal
>agencies, most notably National Science Foundation, to integrate our
>networking experiments with the new regional networks, the NSFNET 
>project, and other agency networks.
>

Ahem.  I would like to take this opportunity to point out that nodes in the
to-be implemented drinet.com domain will have nothing to do with the to-be
implemented Defense Research Internet :-).


-- 
kral 	408/647-6112			...{ism780|amdahl}!drivax!braun
DISCLAIMER: If DRI knew I was saying this stuff, they would shut me d~-~oxx

grr@cbmvax.UUCP (George Robbins) (03/27/88)

In article <670@tetra.NOSC.MIL> budden@tetra.nosc.mil.UUCP (Rex A. Buddenberg) writes:
> 
> Maybe I'm missing something, but:
...
> Looks to me that 500 nodes times 20 km multiplies out to a pretty
> decent regional backbone -- internet a half dozen of these and you've
> got the country covered.  Why are we fiddling around with ISDN?

Yes, and who do you have lined up to pay for a little fiber cable
terminating at your residence?  ISDN gives hopes of getting reasonable
bandwidth almost anywhere at a reasonable cost, whereas fiber will
probably be limited to backbone applications and connecting super
computer ($$$) centers.

-- 
George Robbins - now working for,	uucp: {uunet|ihnp4|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

budden@tetra.NOSC.MIL (Rex A. Buddenberg) (03/29/88)

In article <3528@cbmvax.UUCP> grr@cbmvax.UUCP (George Robbins) writes:
>In article <670@tetra.NOSC.MIL> budden@tetra.nosc.mil.UUCP (Rex A. Buddenberg) writes:
>> Looks to me that 500 nodes times 20 km multiplies out to a pretty
>> decent regional backbone -- internet a half dozen of these and you've
>> got the country covered.  Why are we fiddling around with ISDN?
>
>Yes, and who do you have lined up to pay for a little fiber cable
>terminating at your residence?  ISDN gives hopes of getting reasonable
>bandwidth almost anywhere a reasonable cost, whereas fiber will
>probably be limited to backbone applications and connecting super
>computer ($$$) centers.
>
Who do you know who has a 56k trunk in his home now?  I was talking
about backbone connectivity (where the fiber is already laid),
not local loop.

By the way, at least some local phone companies are providing local
loop fiber now.  They figure it is more cost effective, considering
growth and weather resistance.

B

ted@ultra.UUCP (Ted Schroeder) (03/30/88)

In article <671@tetra.NOSC.MIL> budden@tetra.nosc.mil.UUCP (Rex A. Buddenberg) writes:
>Who do you know who has a 56k trunk in his home now?  I was talking
>about backbone connectivity (where the fiber is already laid),
>not local loop.

One of the wonders of ISDN is that the local loop supports the BRI (2B+D)
directly with the proper driving chips.  So the answer to the question is:
everyone who happens to have a CO that's set up to talk ISDN.

      Ted Schroeder                   ultra!ted@Ames.arc.nasa.GOV
      Ultra Network Technologies
      2140 Bering drive               with a domain server:
      San Jose, CA 95131                 ted@Ultra.COM
      408-922-0100

smiller@umn-cs.cs.umn.edu (Steven M. Miller) (03/30/88)

> terminating at your residence?  ISDN gives hopes of getting reasonable
> bandwidth almost anywhere a reasonable cost, whereas fiber will
> probably be limited to backbone applications and connecting super
> computer ($$$) centers.

I recently attended a presentation on ISDN given by a rep from 
Northwestern Bell.  He talked about the local loop being fiber, with
ISDN being only a small part of the bandwidth and the rest being
things like super hi-res cable TV and other goodies.

-- 
			-Steve Miller, U of MN

SNELSON@STL-HOST1.ARPA (03/31/88)

Just keep in mind that on wire the CENTRAL OFFICE is supplying the power
for your telephones. When you go fiber: no 'lectricity = no 'lectrons
= no signal.
Steve

slevy@UC.MSC.UMN.EDU ("Stuart Levy") (04/01/88)

It might not be that bad...  the Bell System Tech. J. a few years ago had
an article on supplying telephone power directly over the optical fiber,
using LEDs and photocells.  Efficiency was quite high -- photocells do
a good job when they see light of the right wavelength.
	Stuart Levy