[comp.protocols.tcp-ip] trace route to OZ

J.Crowcroft@CS.UCL.AC.UK (Jon Crowcroft) (07/18/89)

FYI

Script started on Tue Jul 18 11:26:46 1989

traceroute to 128.250.1.21 (128.250.1.21), 30 hops max, 38 byte packets
 1  ucl-cisco (128.16.6.150)  0 ms  40 ms  0 ms
 2  192.5.28.6 (192.5.28.6)  140 ms  80 ms  80 ms
 3  192.42.155.2 (192.42.155.2)  660 ms  860 ms  640 ms
 4  192.42.155.2 (192.42.155.2)  660 ms  640 ms  640 ms
 5  192.52.71.2 (192.52.71.2)  660 ms  680 ms  680 ms
 6  10.3.0.5 (10.3.0.5)  760 ms  1460 ms  740 ms
 7  arc-psn.arc.nasa.gov (26.1.0.16)  1360 ms  1400 ms  2240 ms
 8  192.52.195.11 (192.52.195.11)  2040 ms  1400 ms  1420 ms
 9  132.160.249.1 (132.160.249.1)  1360 ms  1120 ms  1820 ms
10  132.160.1.2 (132.160.1.2)  1200 ms  1340 ms  1240 ms
11  132.160.253.2 (132.160.253.2)  1780 ms  2000 ms  1780 ms
12  128.250.1.21 (128.250.1.21)  1960 ms  2440 ms  2000 ms

script done on Tue Jul 18 11:33:46 1989

i wish i could put names to all the numbers, but that's London to
Melbourne for you; the performance is pretty reasonable, but i guess
everyone West and South of here is still in the land of Nod, their
Ruby slippers under the bed...

jon

Mills@UDEL.EDU (07/19/89)

Jon,

(Sputter) About that hop from ARPANET to MILNET at NASA/Ames. Either the
Munchkins are upon us or the Wicked Witch ain't dead. I conclude the route
to Oz is via US DoD, but who would find that surprising?

Dave

torben@DORSAI.ICS.HAWAII.EDU ("Torben N. Nielsen") (07/19/89)

>From:	Mills@udel.edu
>To:	Jon Crowcroft <J.Crowcroft@cs.ucl.ac.uk>
>cc:	G.Michaelson@cc.uq.oz.au, tcp-ip@sri-nic.arpa, ja@cs.ucl.ac.uk
>Subject:  Re:  trace route to OZ
>Message-ID: <8907182209.aa04438@huey.udel.edu>
>Status: R
>
>Jon,
>
>(Sputter) About that hop from ARPANET to MILNET at NASA/Ames. Either the
>Munchkins are upon us or the Wicked Witch ain't dead. I conclude the route
>to Oz is via US DoD, but who would find that surprising?
>
>Dave

It may be going that way, but it shouldn't be. If my understanding of the 
situation is correct, the UK is linked via JVNC. Now, if I run a simple thing
like a ``traceroute" to ``jvnc.csc.org" (and Oz is one hop South of here), I get
the following:

traceroute to jvnca.csc.org (128.121.50.1), 30 hops max, 40 byte packets
 1  menehune.Hawaii.Net (128.171.1.6)  0 ms  0 ms  10 ms
 2  132.160.249.2 (132.160.249.2)  60 ms  70 ms  70 ms
 3  ARC1.BARRNET.NET (192.52.195.7)  70 ms  70 ms  70 ms
 4  ARC.SU.BARRNET.NET (131.119.3.6)  70 ms  70 ms  70 ms
 5  129.140.79.13 (129.140.79.13)  140 ms  150 ms  140 ms
 6  129.140.81.15 (129.140.81.15)  220 ms  200 ms  200 ms
 7  129.140.72.17 (129.140.72.17)  240 ms  230 ms  230 ms
 8  * * *
 9  zaphod-gateway.jvnc.net (128.121.54.72)  260 ms  270 ms  300 ms
10  * * * 
11  * * * 
12  * * * 
13  * * * 
14  * * * 
15  * * * 
16  * * * 
17  * * * 
18  * * * 
19  jvnca.csc.org (128.121.50.1)  390 ms !  530 ms !  490 ms ! 

and when I run ``traceroute" to ``nsfnet-relay.ac.uk", I get the following:

traceroute to nsfnet-relay.ac.uk (128.86.8.6), 30 hops max, 40 byte packets
 1  menehune.Hawaii.Net (128.171.1.6)  0 ms  0 ms  10 ms
 2  132.160.249.2 (132.160.249.2)  70 ms  70 ms  70 ms
 3  ARC1.BARRNET.NET (192.52.195.7)  60 ms  70 ms  70 ms
 4  ARC.SU.BARRNET.NET (131.119.3.6)  80 ms  70 ms  70 ms
 5  129.140.79.13 (129.140.79.13)  140 ms  140 ms  140 ms
 6  129.140.81.15 (129.140.81.15)  190 ms  200 ms  200 ms
 7  129.140.72.17 (129.140.72.17)  230 ms  240 ms  240 ms
 8  * * *
 9  ford-gateway.jvnc.net (128.121.54.73)  250 ms  260 ms  240 ms
10  fenchurch-gateway.jvnc.net (128.121.54.78)  270 ms  260 ms  270 ms
11  * * *
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  NSFNET-RELAY.AC.UK (128.86.8.6)  3840 ms !  930 ms ! *

Up through the seventh hop, it's clear to me what is happening. But after that,
I get kind of confused. I can't really tell how many hops there are in between
the JVNC gateways and ``nsfnet-relay.ac.uk". What I can tell is that I have
a stable RTT of about 250ms between me and the JVNC gateways. But when I try
to reach ``nsfnet-relay.ac.uk" with a ``ping", this is what I get:

----nsfnet-relay.ac.uk PING Statistics----
104 packets transmitted, 88 packets received, 15% packet loss
round-trip (ms)  min/avg/max = 930/3123/5270

And that is not so good. Even the minimum of 930ms, less the ~250ms to the
JVNC gateways, indicates an RTT of almost 700ms across that little pond, and
even if this is a 56Kbps satellite link, that's pretty stiff. I presume some
other gateway is hiding in between, plus likely a *very* high load on that line.
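
As a rough check on that arithmetic, here is a back-of-envelope sketch; the
link rate, geostationary propagation time and ping packet size below are
assumptions, not figures from the thread, and the point is only that an idle
56Kbps satellite hop accounts for well under the observed 930ms minimum.

# Back-of-envelope RTT for one idle 56Kbps geostationary satellite hop.
# Assumed figures (not from the thread): ~270ms one-way propagation via
# geostationary orbit, 84 bytes on the wire per default ping packet.

LINK_BPS = 56_000          # assumed transatlantic link rate
PROP_ONE_WAY_S = 0.270     # assumed up-and-down propagation, one direction
PACKET_BYTES = 84          # assumed 64-byte ICMP echo plus 20-byte IP header

serialization_s = PACKET_BYTES * 8 / LINK_BPS     # time to clock one packet onto the line
rtt_s = 2 * (PROP_ONE_WAY_S + serialization_s)    # echo request out, echo reply back

print(f"serialization: {serialization_s * 1000:.0f} ms per packet")
print(f"idle-hop RTT:  {rtt_s * 1000:.0f} ms")
# Roughly 560ms; the 930ms minimum (less ~250ms to the JVNC gateways) leaves
# about 120ms unexplained, and the 3s average points at queueing or extra
# relays rather than the raw link.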

Things are bad since I also have New Zealand swimming around somewhere South
of here and they're having trouble getting mail through to the UK. And they
tend to send a lot to the UK I gather :-(

						Torben

P.S. For those who care, Oz is a 56Kbps satellite hop South of here; it converts
to a 64Kbps terrestrial line (via ANZCAN) around the middle of August. Or at
least that's what the carriers tell us. But maybe they're just trying to keep
us happy while thinking up an excuse to delay a bit. Doubt it though....

J.Crowcroft@CS.UCL.AC.UK (Jon Crowcroft) (07/19/89)

 >>(Sputter) About that hop from ARPANET to MILNET at NASA/Ames. Either the
 >>Munchkins are upon us or the Wicked Witch ain't dead. I conclude the route
 >>to Oz is via US DoD, but who would find that surprising?
 
Dave

yes i was a little surprised at the milnet hop

 >It may be going that way, but it shouldn't be. If my understanding of the 
 >situation is correct, the UK is linked via JVNC. Now, if I run a simple thing
 >like a ``traceroute" to ``jvnc.csc.org" (and Oz is one hop South of here), I get
 >the following:

not from me it ain't, the jvnc/uk link is academic traffic - i was
using the uk MoD path... but below is right for nsfnet-relay (i.e.
Univ of London Comp Center)


 >and when I run ``traceroute" to ``nsfnet-relay.ac.uk", I get the following:

 >21  NSFNET-RELAY.AC.UK (128.86.8.6)  3840 ms !  930 ms ! *

 >Up through the seventh hop, it's clear to me what is happening. But after that,
 >I get kind of confused. I can't really tell how many hops there are in between
 >the JVNC gateways and ``nsfnet-relay.ac.uk". What I can tell is that I have
 >a stable RTT of about 250ms between me and the JVNC gateways. But when I try
 >to reach ``nsfnet-relay.ac.uk" with a ``ping", this is what I get:

Torben,

i don't understand all the extra hops either, but between JVNC and ULCC are 2
cisco gateways running X.25 (over a satellite link) which will account
for the extra delay and not a little of the loss...

 >----nsfnet-relay.ac.uk PING Statistics----
 >104 packets transmitted, 88 packets received, 15% packet loss
 >round-trip (ms)  min/avg/max = 930/3123/5270

 >And that is not so good. Even the minimum of 930ms, less the ~250ms to the
 >JVNC gateways, indicates an RTT of almost 700ms across that little pond, and
 >even if this is a 56Kbps satellite link, that's pretty stiff. I presume some
 >other gateway is hiding in between, plus likely a *very* high load on that line.

also, the cisco at the UK end goes thru an X.25 switch into a
microvax, which runs IP over X.25 (don't ask why, it's history) - all
this does not help a lot - someday, we may persuade folks here that
X.25 and IP mix like oranges and onions, then (when the line goes to
TAT-8) the performance will get reasonable...

 jon


------- End of Forwarded Message

Mills@UDEL.EDU (07/19/89)

Torben,

The NTP clocks in Norway show a most peaceful Atlantic crossing, with
stable delays, small dispersions and negligible loss most times. I 
believe those circuits depart via JvNC too, but I can't recall just now
how UCL is connected at the UK end. There is a DoD/MoD (?) line to
RSRE in Malvern and a tail to UCL, but I don't know what magic happens
at the UCL termination blocks.

Dave

medin@NSIPO.NASA.GOV ("Milo S. Medin", NASA ARC NSI Project Office) (07/20/89)

First off, it's rather inappropriate for this list to be used for
network debugging, so let me try and categorically explain what is going
on here and then carry on a discussion, if necessary, offline from the
tcp-ip list.

The Australian connection goes via 64 Kbps satellite to Hawaii, where
it is switched over a new 512 Kbps TPC-3 fiber link back to CONUS into
the NSN here at Ames.  From there it is sent into the NSFNET backbone,
as an interim measure through BARRNET, but as soon as the NSS connection
here is operational, directly from NSN into NSFnet.  We also have a DDN
link that is used as a path of last resort for talking to everything
else.  All NSN traffic will use a direct connection to the BMILAMES
mailbridge here at Ames as soon as that system becomes operational
(it's still a bit flaky here after an ethernet controller was
added to it).

It appears that while 128.250 is known via the NSFNET path, it is not making
it to the UK via this path, but the UK is routing this via the ARPANET/MILNET
path instead.  Those of us who manage the Internet routing system are trying
to find out why the UK is being routed this way and not via the primary path
via NSFNET.

In any case, if you find oddities in routing or other such interesting
tidbits of information, please send them to whoever manages your
connections to the Internet.  The tcp-ip list is too big and too diverse
to be used for network debugging.  The MERIT folks and NASA both have mailing
lists for operational issues which are appropriate for this type of discussion.

						Thanks,
						   Milo

torben@DORSAI.ICS.HAWAII.EDU ("Torben N. Nielsen") (07/20/89)

Dave,


	Apparently, there are *two* lines to the UK. And only one goes through
JVNC. That poor line is successively mauled by X.25 and then a Microvax. So
the delays are understandable. I'd guess the load is quite heavy too since
the ``ping" RTTs exhibit quite a high variance.

	The real problem is that it's oft hard to reach the SMTP ports over
there.

						Torben

pvo3366@OCE.ORST.EDU (Paul O'Neill) (07/20/89)

>	Apparently, there are *two* lines to the UK......

You've neglected the CORE route to the UK.  The DoD has recently been
conducting experiments with this route around lunchtime on Wednesdays.

example:
-------------------------------------------------------------------------
traceroute to cs.ucl.ac.uk (128.16.6.4), 30 hops max, 40 byte packets
 1  * * * 
 2  orst-nwnet-gw.UCS.ORST.EDU (128.193.16.2)  10 ms  10 ms  10 ms
 3  ogc-gwy.ogc.edu (129.95.20.54)  40 ms  30 ms  40 ms 
 4  130.42.46.6 (130.42.46.6)  70 ms  70 ms  70 ms 
 5  192.31.173.2 (192.31.173.2)  100 ms  110 ms  100 ms 
 6  Palo_Alto.CA.NSS.NSF.NET (129.140.77.14)  120 ms  120 ms  120 ms
 7  Salt_Lake_City.UT.NSS.NSF.NET (129.140.79.13)  220 ms  200 ms  210 ms 
 8  Palo_Alto.CA.NSS.NSF.NET (129.140.77.15)  220 ms  260 ms  260 ms 
 9  Salt_Lake_City.UT.NSS.NSF.NET (129.140.79.13)  360 ms  360 ms  370 ms 
10  Palo_Alto.CA.NSS.NSF.NET (129.140.77.15)  400 ms  370 ms  330 ms 
11  Salt_Lake_City.UT.NSS.NSF.NET (129.140.79.13)  420 ms *  510 ms 
12  Palo_Alto.CA.NSS.NSF.NET (129.140.77.15)  670 ms  750 ms  770 ms 
13  Salt_Lake_City.UT.NSS.NSF.NET (129.140.79.13)  1240 ms  690 ms  550 ms 
14  Palo_Alto.CA.NSS.NSF.NET (129.140.77.15)  530 ms  570 ms  660 ms 
15  Salt_Lake_City.UT.NSS.NSF.NET (129.140.79.13)  680 ms  640 ms  650 ms 
16  Palo_Alto.CA.NSS.NSF.NET (129.140.77.15)  1360 ms  760 ms  710 ms 
17  Salt_Lake_City.UT.NSS.NSF.NET (129.140.79.13)  720 ms *  680 ms 
18  Boulder.CO.NSS.NSF.NET (129.140.71.15)  860 ms Salt_Lake_City.UT.NSS.NSF.NET (129.140.79.13)  790 ms * 
19  * La_Jolla,CA.NSS.NSF.NET (129.140.6.11)  980 ms * 
20  La_Jolla,CA.NSS.NSF.NET (129.140.6.14)  1240 ms *  1370 ms 
21  La_Jolla,CA.NSS.NSF.NET (129.140.6.11)  1390 ms  1320 ms  1460 ms 
22  192.35.180.1 (192.35.180.1)  90 ms !N La_Jolla,CA.NSS.NSF.NET (129.140.6.14)  1900 ms * 
23  La_Jolla,CA.NSS.NSF.NET (129.140.6.11)  2090 ms *  1230 ms 
24  * * * 
25  * * * 
26  Urbana_Champaign.IL.NSS.NSF.NET (129.140.76.5)  480 ms  1010 ms  490 ms 
27  Pittsburgh.PA.NSS.NSF.NET (129.140.69.12)  570 ms Ithaca.NY.NSS.NSF.NET (129.140.74.5)  650 ms Pittsburgh.PA.NSS.NSF.NET (122.52.1959.140.69.10)  620 ms 
28  Ithaca.NY.NSS.NSF.NET (129.140.74.5)  640 ms  610 ms  710 ms 
29  Pittsburgh.PA.NSS.NSF.NET (129.140.69.10)  1080 ms  580 ms * 
30  * * * 
---------------------------------------------------------------------------

By using Strict Source Routing to run packets around selected backbone sites,
a phased-array emitting ELF (read "earth penetrating") synchrotron radiation
is created.  This particular example emits a focused "pencil-beam" at
Greenwich.  Examples of "fan-beams" directed upwards have also been reported.
Presumably these are Directed Information Energy tests.  With capabilities
like this, it's not hard to imagine reprogramming re-entry vehicles in flight
to impact harmlessly in the ocean or Canada.

Paul O'Neill                 pvo@oce.orst.edu
Coastal Imaging Lab
OSU--Oceanography
Corvallis, OR  97331         503-737-3251

ps--The traceroute output is real.

mckee@MITRE.MITRE.ORG (H. Craig McKee) (07/20/89)

>From: "Milo S. Medin" (NASA ARC NSI Project Office) <medin@nsipo.nasa.gov>
>
>First off, it's rather inappropriate for this list to be used for
>network debugging ....

Well ... yes and no.  We don't need the detailed results of the
trace route routine, but we do need frequent reinforcement of the 
suspicion/notion/fact that the present (EGP) routing paradigm is
inadequate.  Something must be done, and that something will likely be
painful.  The community will not accept painful unless convinced of the
need, and "trace route to OZ" is convincing.

Regards - Craig  (standard disclaimers)

Mills@UDEL.EDU (07/20/89)

Paul,

Your synchrotron radiation was apparently intended to detect CONUS
emitters. The ionospheric heating experiments are being conducted in
Alaska. You are maybe on the wrong frequency. Perhaps whistlers are
to blame.

Dave

Mills@UDEL.EDU (07/24/89)

Milo,

I don't regard the discussion of interesting routing phenomena between the
furthest outflings of the Internet as operational issues at all; in fact,
some of the oddities being discovered may have very real impact on the
understanding and solving of basic problems for users and operators of
campus, regional and backbone networks. Your comments are out of place and
leave the misleading impression that if our users simply call their
network gurus everything will get well.

Dave

WHITESID@McMaster.CA (Fred Whiteside) (07/24/89)

        Dave Mills writes:
>  Milo,
>
>  I don't regard the discussion of interesting routing phenomena between
>  the furthest outflings of the Internet as operational issues at all;
>  in fact, some of the oddities being discovered may have very real
>  impact on the understanding and solving of basic problems for users
>  and operators of campus, regional and backbone networks. Your comments
>  are out of place and leave the misleading impression that if our users
>  simply call their network gurus everything will get well.
>
>  Dave

        It is worthy of note that this is the *very* first note from
Dave that I have understood in its entirety and (as I long suspected)
I find that I agree completely with what he has to say.

        Those who feel that this list should not have discussion of
any of the real-world application difficulties of the topic of the
list, please feel free to vent your spleen elsewhere.

        Thank you for your attention

Fred Whiteside               POSTMAST@MCMASTER.BitNet
McMaster NetNorth Postmaster POSTMASTER@McMaster.CA
Development Analyst          WHITESID@SSCvax.McMaster.CA
McMaster University          ...!uunet!utai!utgpu!maccs!fred
Hamilton, Canada

heker@JVNCA.CSC.ORG (Sergio Heker) (07/24/89)

Jon,

The link between JvNC and JANET is provided by *one* cisco router at our
end, connected through a 56kbps link to an x.25 switch in London.  The
"nsfnet-relay" gateway is a SUN connected to the x.25 switching system.
The *very* high delay seems to be mostly due to the x.25 switch (the
satellite delay only accounts for approximately 500 msec).  In the case
of our link to NORDUnet, which is also satellite, we use a cisco router
at each end and we get a delay of approximately 600msec (average).  There
are discussions to add a cisco router at the JANET end in order to reduce
the delay.  

I ran ping to 128.86.8.6 and to 128.86.8.1.  The first address is the address
of the "nsfnet-relay" in England, while the second one is the address of 
the cisco router at JvNC.  Note below the round trip delays and packets
lost for each.

Mon Jul 24 08:23:03 EDT 1989
PING 128.86.8.6: 10 data bytes
----128.86.8.6 PING Statistics----
100 packets transmitted, 96 packets received, 4% packet loss
round-trip (ms)  min/avg/max = 7560/11579/16280


Mon Jul 24 08:23:14 EDT 1989
PING 128.86.8.1: 10 data bytes
----128.86.8.1 PING Statistics----
100 packets transmitted, 100 packets received, 0% packet loss
round-trip (ms)  min/avg/max = 10/46/1750

The routing and delay to the 128.86 network (our side of the point-to-point
link) are reasonably good, while the other side of the point-to-point link
is extremely slow and some packets are lost.
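
The comparison being made here can be written out as a straight difference of
the two ping summaries; the sketch below simply restates the numbers above,
charging whatever the near-side RTT does not explain to the satellite hop, the
x.25 switch and the relay host.

# Difference of the two ping summaries: RTT to the far end of the
# point-to-point link minus RTT to the near end (the cisco at JvNC).

near = {"min": 10,   "avg": 46,    "max": 1750}   # 128.86.8.1, cisco at JvNC (ms)
far  = {"min": 7560, "avg": 11579, "max": 16280}  # 128.86.8.6, nsfnet-relay (ms)

for k in ("min", "avg", "max"):
    print(f"{k}: {far[k] - near[k]} ms added beyond the JvNC cisco")
# Even the minimum difference (~7.5s) is an order of magnitude more than the
# ~500ms the satellite itself costs, which is why the x.25 switch and the
# relay host take the blame.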

We have two Technical Reports (TRs) describing the connection between JvNC
and JANET and between JvNC and NORDUnet.  These two TRs are, at this point,
drafts and are being "certified".  For more information on either of these
international links, please send mail to "nisc@nisc.jvnc.net".

					-- Sergio
-----------------------------------------------------------------------------
|		John von Neumann National Supercomputer Center		    |
|  Sergio Heker				tel:	(609) 520-2000		    |
|  Director for Networking		fax:	(609) 520-1089		    |
|  Internet: "heker@jvnca.csc.org"	Bitnet:	"heker@jvnc"		    |
-----------------------------------------------------------------------------

J.Crowcroft@CS.UCL.AC.UK (Jon Crowcroft) (07/24/89)

Sergio, et al

 >The link between JvNC and JANET is provided by *one* cisco router at our
 >end, connected through a 56kbps link to an x.25 switch in London.  The
 >"nsfnet-relay" gateway is a SUN connected to the x.25 switching system.

not quite according to ja,
it's a microvax connected to the x.25 switch...it will be a sun someday
soon, possibly on a 'null' ethernet with a cisco - still all under
negotiation/decision...

sorry about the misinformation on two ciscos - I was anticipating
what *will* be the setup at ULCC

 >The *very* high delay seems to be mostly due to the x.25 switch (the
 >satellite delay only accounts for approximately 500 msec).  In the case
 >of our link to NORDUnet, which is also satellite, we use a cisco router
 >at each end and we get a delay of approximately 600msec (average).  There
 >are discussions to add a cisco router at the JANET end in order to reduce
 >the delay.  

yep, the x.25 switch + the uVax have pretty appalling performance

this is due to the window of 2, the limit of 1 on the number of X.25 VCs the
switch/uVax can support, and the 128 byte X.25 packet size, amongst other
things...(this from John Andrews who manages the nsfnet-relay
machine).
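
Those figures put a hard ceiling on the circuit regardless of the line rate:
at most window * packet size bytes can be in flight per round trip. A small
sketch of that bound, taking Torben's ~3123ms average ping RTT as the
round-trip time (an assumption; the real figure varies with load):

# Window-limited throughput ceiling for the single X.25 VC described above.

WINDOW = 2             # X.25 window of 2 (from the post)
PACKET_BYTES = 128     # X.25 packet size (from the post)
RTT_S = 3.1            # assumed: Torben's ~3123ms average ping RTT

ceiling_bps = WINDOW * PACKET_BYTES * 8 / RTT_S   # bits per round trip / RTT
print(f"ceiling: {ceiling_bps:.0f} bit/s "
      f"({ceiling_bps / 56_000:.1%} of the nominal 56Kbps line)")
# About 660 bit/s, a bit over 1% of the line rate, before any loss at all.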

 >I ran ping to 128.86.8.6 and to 128.86.8.1.  The first address is the address
 >of the "nsfnet-relay" in England, while the second one is the address of 
 >the cisco router at JvNC.  Note below the round trip delays and packets
 >lost for each.

if i use my defense path (ucl - rsre - (different satellite path)- bbn - 
nsfnet backbone - jvnc - ulcc), i get similar effects, although exaggerated:

Mon Jul 24 13:42:56 BST 1989

---128.86.8.6 PING Statistics----
17 packets transmitted, 11 packets received, 35% packet loss
round-trip (ms)  min/avg/max = 1760/3483/6300


----128.86.8.1 PING Statistics----
11 packets transmitted, 10 packets received, 9% packet loss
round-trip (ms)  min/avg/max = 1140/1291/1479


 >We have two Technical Reports (TRs) describing the connection between JvNC
 >and JANET and between JvNC and NORDUnet.  These two TRs are, at this point
 >drafts and are being "certified".  For more information on either of these
 >international links, please send mail to "nisc@nisc.jvnc.net".

will do,

thanks for info...

(now of course none of this has anything to do with OZ anymore, or my
traceroute which was via our *defense* path - has anyone got a
traceroute for the original "furthest telnet" path from OZ to
Scandinavia?)

 jon

Mills@UDEL.EDU (07/25/89)

Sergio,

Some clue to the interesting behavior of our burgeoning Euroswamps may
lie in the far different behavior of the JvNC-Oslo circuit compared to the
JvNC-London circuit. The data I have here was collected over the last few
weeks with a Network Time Protocol peer path between time servers in
Delaware and Oslo. The delays are about 850 ms and clock offsets about
10 ms, not bad at all for any circuit, domestic or otherwise. The servers
ping each other about once every 17 minutes and run continuously, so they
make a useful record of connectivity and congestion. If the path becomes
flaky, the delay, offset or dispersion quickly reflect the fact, so these
time-server gizmos make good tripwires for detecting waves in the swamps.
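
For reference, the delay and offset figures come out of NTP's standard
on-wire arithmetic over the four timestamps of each exchange; the sketch
below uses invented timestamps chosen to match the ~850ms delay and ~10ms
offset quoted above, not real measurements.

# NTP on-wire calculation: round-trip delay and clock offset from the four
# timestamps of one exchange between a local and a remote time server.

def ntp_sample(t1, t2, t3, t4):
    # t1: request sent (local clock), t2: request received (remote clock),
    # t3: reply sent (remote clock),  t4: reply received (local clock)
    delay = (t4 - t1) - (t3 - t2)            # round trip minus remote hold time
    offset = ((t2 - t1) + (t3 - t4)) / 2     # remote clock minus local clock
    return delay, offset

# Invented exchange: ~850ms path delay, remote clock running 10ms ahead.
delay, offset = ntp_sample(t1=0.000, t2=0.435, t3=0.455, t4=0.870)
print(f"delay = {delay * 1000:.0f} ms, offset = {offset * 1000:.0f} ms")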

Okay, now the shoe drops. Jon, how would you like to bring up NTP on one
of your Gower Street munchkins? While you're at it, you might interest Robert
Cole at HP Labs in joining chimes. This way we could all watch each other's
clocks and get a better handle on (a) what is going on and (b) assess how
well precision time capability can help in cases like this. Come to think of
it, I'm sure you and Robert know each other and may have already discussed
this.

Perhaps further discussion should be offline.

Dave

heker@JVNCA.CSC.ORG (Sergio Heker) (07/25/89)

Dave,

It would be very useful to get more data, like the data that NTP
can provide, to better understand the behavior of all these networks,
domestic and foreign.  By the way, the JvNC link to the Scandinavian
countries ends in Sweden.  The fact that you are getting good response
from Oslo is due to the good connectivity, which exists not only here
but also within the NORDUnet network.  NORDUnet provides connectivity
between Sweden and Norway.  Cheers!


						-- Sergio


-----------------------------------------------------------------------------
|		John von Neumann National Supercomputer Center		    |
|  Sergio Heker				tel:	(609) 520-2000		    |
|  Director for Networking		fax:	(609) 520-1089		    |
|  Internet: "heker@jvnca.csc.org"	Bitnet:	"heker@jvnc"		    |
-----------------------------------------------------------------------------

rick@sbcs.sunysb.edu (Rick Spanbauer) (07/25/89)

In article <89Jul24.083917edt.57352@ugw.utcs.utoronto.ca>, WHITESID@McMaster.CA (Fred Whiteside) writes:
> >  Milo,
> >
> >  I don't regard the discussion of interesting routing phenomena between
> >  the furthest outflings of the Internet as operational issues at all;
> >  in fact, some of the oddities being discovered may have very real
> >  impact on the understanding and solving of basic problems for users
> >  and operators of campus, regional and backbone networks. Your comments
> >  are out of place and leave the misleading impression that if our users
> >  simply call their network gurus everything will get well.
> >
> >  Dave

	Just want to file my $0.02 on this subject.  I've found the
	recent routing discussions that have taken place here to be
	both informative and useful.  Useful at least insofar as
	I might not otherwise have been motivated to install traceroute
	and use it to investigate problems with our regional network,
	Nysernet :-)  I don't think it does anyone a favour to insist
	that the Internet be thought of in the same inviolate black-box
	model the telephone companies like us to apply to their system.
	Let's continue the free flow of information and tools - our
	regional networks and the Internet in general can only stand to
	gain by the exposure!

					Rick Spanbauer
					SUNY/Stony Brook

roy@phri.UUCP (Roy Smith) (07/25/89)

In article <8907191847.AA06030@cincsac.arc.nasa.gov> medin@NSIPO.NASA.GOV ("Milo S. Medin", NASA ARC NSI Project Office) writes:
> The tcp-ip list is too big and too diverse to be used for network debugging.
> The MERIT folks and NASA both have mailing lists for operational issues
> which are appropriate for this type of discussion.

	While this may "officially" be true, I'd like to point out that I
for one have no objection to this on tcp-ip.  While I don't have any direct
involvement with routing issues, I am interested in knowing as much as I
can about it.  Listening in on the discussions about problems seems like a
good way of picking stuff up.
-- 
Roy Smith, Public Health Research Institute
455 First Avenue, New York, NY 10016
{att,philabs,cmcl2,rutgers,hombre}!phri!roy -or- roy@alanine.phri.nyu.edu
"The connector is the network"

hoey@aic.nrl.navy.mil (Dan Hoey) (07/26/89)

In article <8907200854.AA02438@sapphire.oce.orst.edu> pvo3366@OCE.ORST.EDU (Paul O'Neill) writes:
...
>20  La_Jolla,CA.NSS.NSF.NET (129.140.6.14)  1240 ms *  1370 ms 
>21  La_Jolla,CA.NSS.NSF.NET (129.140.6.11)  1390 ms  1320 ms  1460 ms 

Say what you like about the bad old host table days, but at least we
didn't get any commas in the host names back then.  I can't even find
out the mail address for whatever zone 14.6.140.129.IN-ADDR.ARPA is in,
but I can hope that the postmasters at the authoritative servers may be
able to figure it out.

Considering what would happen if this name got into a mail header, I
just wonder if it's possible to get CRLFs into the canonical name,
leading to wholesale rewriting of mail headers through host name
canonicalization.  But surely that's impossible....

Dan

Mills@UDEL.EDU (07/26/89)

Sergio,

Yummy, you sailed right into that one. I hear you say you appreciate the
value of accurate clock synchronization. You have a weenie Fuzzball chiming
second-class time right in the middle of your swamp that just begs for a
first-class radio to improve the quality of chime to the max. Do you think
you could add such a thing to your rate base? Then we can both gang up on
our Norse friends and try to talk them into hooking a cesium clock I know
about near Oslo (NTARE) into our chime network. Tempus frugit or something
like that.

Dave

heker@JVNCA.CSC.ORG (Sergio Heker) (07/26/89)

Dave,

I am certainly interested. This is in my list of things to explore.

Cheers!.


						-- Sergio


-----------------------------------------------------------------------------
|		John von Neumann National Supercomputer Center		    |
|  Sergio Heker				tel:	(609) 520-2000		    |
|  Director for Networking		fax:	(609) 520-1089		    |
|  Internet: "heker@jvnca.csc.org"	Bitnet:	"heker@jvnc"		    |
-----------------------------------------------------------------------------

zweig@p.cs.uiuc.edu (07/27/89)

> Written Jul 24, 1989 by roy@phri.UUCP in comp.protocols.tcp-ip
> In article <8907191847.AA06030@cincsac.arc.nasa.gov> medin@NSIPO.NASA.GOV
>    ("Milo S. Medin", NASA ARC NSI Project Office) writes:
> > The tcp-ip list is too big and too diverse to be used for network debugging.
> > The MERIT folks and NASA both have mailing lists for operational issues
> > which are appropriate for this type of discussion.
> 
> 	While this may "officially" be true, I'd like to point out that I
> for one have no objection to this on tcp-ip.  While I don't have any direct
> involvement with routing issues, I am interested in knowing as much as I
> can about it.  Listening in on the discussions about problems seems like a
> good way of picking stuff up.
> -- 

  I agree very strongly with this and the other similar opinions that have
been expressed recently. Finding problems people are having can be very
instructive about complex software systems like communication protocol
subsystems. I think the "we don't have the time for your silly little
problems" sentiment would be more appropriate in comp.protocols.ivory-tower.


-Johnny Zweig
 University of Illinois at Urbana-Champaign
 Department of Computer Science
--------------------------------Disclaimer:------------------------------------
   Rule 1: Don't believe everything you read.
   Rule 2: Don't believe anything you read.
   Rule 3: There is no Rule 3.
-------------------------------------------------------------------------------

kre@cs.mu.oz.au (Robert Elz) (08/04/89)

In article <8907282200.AA06660@ucbvax.Berkeley.EDU>,
J.Crowcroft@CS.UCL.AC.UK (Jon Crowcroft) writes:
> (now of course none of this has anything to do with OZ anymore, or my
> traceroute which was via our *defense* path - has anyone got a
> traceroute for the original "furthest telnet" path from OZ to
> Scandinavia?)

Sure ...

traceroute to sics.se (192.16.123.90), 30 hops max, 40 byte packets
 1  HW.GW.AU (128.250.1.1)  10 ms  10 ms  10 ms
 2  132.160.253.1 (132.160.253.1)  560 ms *  560 ms
 3  132.160.1.1 (132.160.1.1)  570 ms  560 ms  560 ms 
 4  132.160.249.2 (132.160.249.2)  620 ms  610 ms  610 ms 
 5  ARC1.BARRNET.NET (192.52.195.7)  620 ms  620 ms  620 ms 
 6  ARC.SU.BARRNET.NET (131.119.3.6)  620 ms  620 ms  630 ms 
 7  Salt_Lake_City.UT.NSS.NSF.NET (129.140.79.13)  690 ms  690 ms  690 ms 
 8  Ann_Arbor.MI.NSS.NSF.NET (129.140.81.15)  750 ms  740 ms  750 ms 
 9  Princeton.NJ.NSS.NSF.NET (129.140.72.17)  790 ms *  790 ms 
10  * * * 
11  slartibartfast-gateway.jvnc.net (128.121.54.76)  800 ms  800 ms  800 ms 
12  * kth-ptp-gw.nordunet.se (192.36.148.66)  1750 ms  1720 ms 
13  se-gw.nordunet.se (192.36.148.21)  1670 ms  1650 ms  1650 ms 
14  * ipsthlm-gw.sunet.se (192.36.125.10)  1660 ms  1670 ms 
15  130.237.210.1 (130.237.210.1)  1660 ms  1920 ms  1720 ms 
16  ipkista-gw.kth.se (130.237.72.204)  1740 ms  1870 ms  1670 ms 
17  * * * 
18  * * *

I assume that sics.se has the IP TTL bug still in it, and if I had let
it run out to 34 hops (or so) eventually the packet from the destination
would have made it back.  (I did let it run beyond 18, but the rest
wasn't interesting).
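
The bug kre refers to is a host that copies the nearly exhausted TTL of the
arriving probe into its reply instead of using a full default value, so the
reply itself expires on the way back until the probe's TTL is roughly twice
the path length. A small sketch of that arithmetic, assuming a 17-hop forward
path to sics.se (hop 16 was the last to answer in the trace above):

# Why a destination with the "IP TTL bug" only answers traceroute once the
# probe TTL reaches about twice the path length: the buggy host reuses the
# leftover TTL in its reply, and the reply must survive the trip back too.

PATH_HOPS = 17   # assumed forward path length to sics.se in the trace above

def reply_gets_back(probe_ttl, path_hops):
    if probe_ttl < path_hops:
        return False                      # probe expires en route (a normal traceroute hop)
    leftover = probe_ttl - path_hops      # TTL the buggy host copies into its reply
    return leftover >= path_hops          # the reply must also cross path_hops routers

first = next(ttl for ttl in range(1, 64) if reply_gets_back(ttl, PATH_HOPS))
print(f"first probe TTL that gets an answer back: {first}")   # 34 = 2 * 17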

kre