hrp@windsor.CRAY.COM (Hal Peterson) (02/12/88)
Dave,

In view of your remark "Having swept the alligators from the wrong end of the IMP, you are cordially invited to swat the lizards on the NSFNET Backbone and dependent tributaries": it's much like a municipal waterworks system going out to the users, but also like a municipal sewer system coming back.

I (as a tester of TCP/IP and user of the Internet) hope that somebody out there is collecting the gooiest, scaliest monsters for eventual domestication in a test suite. If there is bizarre behavior out there in the swamps that uncovers bugs, then we must have samples with which to inoculate our implementations.

``Those who cannot remember the past are condemned to repeat it.'' - George Santayana

-- Hal Peterson / Cray Research / 1440 Northland Dr. / Mendota Hts, MN 55120
hrp%hall.CRAY.COM@umn-rei-uc.ARPA   ihnp4!cray!hrp   (612) 681-3145
Mills@UDEL.EDU (02/12/88)
Hal,

Once upon a time Vint Cerf was keeper of the alligators, and even Bob Braden collected a few. I've got a backyard full of the critters myself. However, the point of my remark was that we don't need to invent bizarre test suites, just to see how well it all works in the current environment. What may be more useful for you would be to find out what the current environment really is (loss rates, mangle angles, quench characteristics, etc.), then build a flakeway (broken-network simulator) with similar characteristics and do war with it. That's in fact how we did the initial IP testing (with credit to Bob Braden) in the bakeoffs of antiquity.

I'll rephrase my homily: We have met the enemy and he is us. Now you may understand my preoccupation with swamps. Pass the stogies, Albert.

Dave
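Dave's suggestion -- first measure what the current environment really does, then imitate it -- can be sketched as a simple UDP probe loop. This is a modern illustration, not period code; the loopback echo server, probe count, and timeout are assumptions for the sake of a self-contained example:

```python
import socket
import threading
import time

def udp_echo_server(sock):
    """Echo datagrams back to the sender until the socket is closed."""
    while True:
        try:
            data, addr = sock.recvfrom(2048)
        except OSError:
            return  # socket closed; shut down
        sock.sendto(data, addr)

def measure(addr, probes=20, timeout=0.5):
    """Send UDP probes; return (loss rate, mean RTT in seconds or None)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    rtts = []
    for i in range(probes):
        t0 = time.monotonic()
        s.sendto(str(i).encode(), addr)
        try:
            s.recvfrom(2048)
            rtts.append(time.monotonic() - t0)
        except socket.timeout:
            pass  # no reply within the timeout: count it as a loss
    s.close()
    loss = 1 - len(rtts) / probes
    return loss, (sum(rtts) / len(rtts) if rtts else None)

# Exercise the probe loop against a loopback echo server.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
threading.Thread(target=udp_echo_server, args=(srv,), daemon=True).start()
loss, rtt = measure(srv.getsockname())
srv.close()
```

Against a real path, the measured loss rate and RTT distribution would then become the parameters of the flakeway.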
braden@VENERA.ISI.EDU (02/13/88)
Dave,

I love the idea and wish it were mine, but I cannot take credit for the flakeway. I stole both the concept and the term from the BBN wizards Bill Plummer and Ray Tomlinson. They came up with the first flakeway under TOPS-20, during the early TCP/IP days. [I have seen the listing of that program, and I think I remember that Ray actually wrote it; but Bill was responsible for popularizing it in the research community at that time (about 1976-1978).] My only contribution was to write a flakeway for the UCLA ACP; it ran there for many years, reachable via source routing.

(For those who don't know, "flakeway" is a contraction of "flakey gateway": a test gateway implementation set to deliberately and randomly reorder, delay, and/or drop packets with some specified frequency distributions. The idea was to test your TCP/IP implementation through such a monster.)

From an historical perspective, it is interesting that the early work DID include real concern about robustness in the presence of long delays and packet losses, and there was active testing of implementations under such conditions. Then the second generation of implementors came along, worked entirely in the Ethernet environment, and promptly ignored/forgot the lessons already learned. One can speculate that if there were no LANs in the world, we would have discovered (perforce) all the wonderful Van Jacobson ideas about 8 years ago.

Bob Braden
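The flakeway idea is easy to sketch today. The following is a minimal, deterministic simulation -- the function name, parameters, and reordering rule are illustrative assumptions, not the BBN or UCLA implementations, which sat on the wire as real gateways:

```python
import random

def flakeway(packets, drop_p=0.1, reorder_p=0.05, seed=None):
    """Simulate a 'flakey gateway': randomly drop and reorder packets.

    Pure-function sketch over a list of packets; drop_p is the
    per-packet drop probability, reorder_p the probability that a
    packet is delivered behind its predecessor.
    """
    rng = random.Random(seed)
    out = []
    for pkt in packets:
        if rng.random() < drop_p:
            continue                       # drop the packet
        if out and rng.random() < reorder_p:
            out.insert(len(out) - 1, pkt)  # deliver behind its predecessor
        else:
            out.append(pkt)
    return out
```

With both probabilities at zero the function is the identity, so an implementation can be exercised against steadily increasing flakiness.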
LYNCH@A.ISI.EDU (Dan Lynch) (02/13/88)
Regarding testing: it is an economic issue. Who could argue that bakeoffs aren't very useful? Who can argue that conformance testing against a "standard" isn't useful? Neither is sufficient to ensure interoperability in all cases. Heck, the randomness of network behaviour ensures that we will never get it perfect. (Folks still find bugs in 20-year-old Fortran and Cobol compilers...)

If we had a test suite that was in some sense "official" we would not have fiascos like I saw at Uniforum this week! There was the usual "hook all the TCP/IP-speaking booths together" party. And it barely worked. Why? Two reasons: 1) not everyone did subnetting "right", and 2) the rwho broadcast storms made the net unusable much of the time. If we had a conformance test suite available that everyone could test against, then these two rather simple hurdles could be tested for, and vendors would have to pass them to get a "certificate". Would this make the world "perfect"? Probably not, but it would make it a lot, lot better!

Folks, we are actually trying to get our work done with these marvelous networks. And the world is going to be a lot better off when we are all able to communicate with each other with ease. Let's all vote to support whatever it takes to make it work well. No one approach will suffice.

Dan
-------
satz@clash.cisco.COM (02/14/88)
>> If we had a test suite that was in some sense 'official" we would not
>> have fiascos like I saw at Uniforum this week! There was the usual
>> "hook all the TCP/IP speaking booths together" party. And it barely
>> worked. Why? Two reasons: 1) Not everyone did subnetting "right"
>> and 2) the rwho broadcast storms made the net unusable much of
>> the time. If we had a conformance test suite available that everyone
>> could test against, then these two rather simple hurdles could be tested
>> for and vendors would have to pass them to get a "certificate". Would
>> this make the world "perfect"? Probably not, but it would make it a lot, lot
>> better!

The major problem with the Uniforum network was misconfiguration and lack of understanding of all of the broadcast addresses. However, the misconfiguration was so bad that it was almost impossible to discern broadcasts from other packets. What happened was that the show-net started out to be network 89 with a subnet number of 1. People who requested individual subnet numbers got them starting at some larger number. Interestingly enough, however, people weren't able to live with this arrangement; for some reason, networks like 8.0.0.0 and 1.0.0.0 started appearing instead. Unfortunately, some hosts were still sending out [IP] subnet broadcasts instead of network broadcasts or general broadcasts (all ones). Test suites can do little to solve this problem.

I also saw random ICMP message types flying around and packets with bad checksums. A real live test suite would go a long way toward eliminating this problem. The unusability of the network stemmed from a few hosts that were generating error rates of 10%. Excelan, the show-net manager, quickly resolved the problem when it was pointed out to them, much to their credit.

Aside from all of that, it seems that Sun was advertising all of its many networks via RIP, and HP was offering a portal into its network with an IGRP route.
Sun refused to pass packets to the MILnet and HP blocked access to the ARPAnet.
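The broadcast confusion described above is concrete: for the show-net (network 89, subnet 1), the correct broadcast has an all-ones host part, while the obsolete 4.2BSD convention used all zeros. A quick sketch with Python's `ipaddress` module -- the 8-bit subnet field (mask 255.255.0.0, i.e. a /16) is an assumption about the show-net's configuration:

```python
import ipaddress

# Class A network 89 with subnet number 1; assume an 8-bit subnet
# field, giving mask 255.255.0.0 (a /16).
subnet = ipaddress.IPv4Network("89.1.0.0/16")

all_ones = subnet.broadcast_address   # correct subnet broadcast (all-ones host part)
all_zeros = subnet.network_address    # the obsolete 4.2BSD all-zeros "broadcast"

print(all_ones)    # → 89.1.255.255
print(all_zeros)   # → 89.1.0.0
```

Hosts using the all-zeros form, or broadcasting to the whole network (89.255.255.255) instead of the subnet, are exactly the kind of misbehavior the show-net exhibited.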
bobj@MCL.UNISYS.COM (Bob Jones) (02/16/88)
Dan, I was not nearly as verbose as you in my response to Dave, but you expressed my sentiments exactly. Anyway, perfection is found only in heaven. This is earth, and all of the solutions for that perfect world are just not here... yet. Again, thanks for your input. bobj
LYNCH@A.ISI.EDU (Dan Lynch) (02/16/88)
Bob, You're welcome. I calls 'em as I sees 'em, eh? Actually, it is time for some leadership in all this, and I have been a patient person for over a year now. Things don't get fixed if there's no incentive to fix them. Something as simple as doing subnetting right or plugging in the right broadcast address should be done by now. Must be we have to kick the vendors in the butt! Dan
-------
sra@MITRE-BEDFORD.ARPA (Stan Ames) (02/19/88)
Dan,

It has been bad enough that existing vendors have not seen fit to fix such simple things as using the correct IP broadcast address. What I see as troubling is that entirely new products are perpetuating the problem. Perhaps the emerging ISO vendors will learn from our mistakes and control products that claim to implement the protocols but have taken creative interpretations. Perhaps it is time for the internet group to list vendors that implement the protocols correctly and also publish a list of those that do not.

Stan Ames
STJOHNS@SRI-NIC.ARPA (02/19/88)
Stan, the reason no one has done this (and it has crossed my mind more than once) is that is leaves them liable to a law suite. Without some sort of *formal* testing procedure, with objective results, a vendor could sue and most probably win if I got up and said its product didn't work. Unfortunately, passing the formal testing is no guarantee of being able to interoperate. We could start holding the internet bake-offs as an annual event and publish a matrix of who could talk to who and the type of performance we saw. And we could also publish which vendors declined to compete. Comments?
kleonard@PRC.Unisys.COM (Ken Leonard --> kleonard@gvlv2@prc.unisys.com) (02/23/88)
In article <[SRI-NIC.ARPA]Fri,.19.Feb.88.06:21:34.PST.STJOHNS> STJOHNS@SRI-NIC.ARPA writes:
>...
>more than once) is that is leaves them liable to a law suite [sic].
>...
>results, a vendor could sue and most probably win if I got up and
>said its product didn't work. Unfortunately, passing the formal
>testing is no guarantee of being able to interoperate.
>...

Which is why the DCA/NBS "ProtoLab NVLAP" program needs our support. The output from a ProtoLab test suite of a protocol implementation IS NOT JUST a good/nogood tag; it is a COMPLETE and TRACEABLE standard-compliance report. Which means that any potential user can evaluate, in light of OWN/REAL/ACCEPTABLE/NECESSARY task to be accomplished, whether or not to buy a particular implementation. And, properly done, the ProtoLab results show a HECK OF A LOT about the chances of interoperability, too.

Regardz,
[Engineering: The Art of Science]
Ken Leonard
---
This represents neither my employer's opinion nor my own: It's just something I overheard in a low-class bar down by the docks.
JBVB@AI.AI.MIT.EDU ("James B. VanBokkelen") (02/24/88)
Well, the unclear comments and generalizations had better not get any stamp of approval from the organization involved (just like a good bug-reporting system will bounce anything like that right back to the originator for clarification without a developer ever having to read it...). The list needs an editor.

I think a simple list of easy-to-prove facts would do a good deal of good, and it could be done in fairly short order. List the vendors who are perpetuating 4.2 bugs (UDP checksums, TFTP, non-RFC959 FTP, the old broadcast address, thinking they're a router by default). List the vendors who don't support nameservers (the majority). List the vendors who don't support subnets (maybe a minority, now?). List the vendors who don't support ICMP redirects (harder to determine). Military-related people might even be able to collect a list of who can and can't handle IP packets with options.

The sophisticated, repeatable data from a test suite is desirable, but we can begin the process much sooner. Perhaps we can even get many of the simple problems out of the way (RFC959 FTP only requires adding 4 table entries to the server's parser), and maybe even get most of the vendors used to sophisticated input from the field, perhaps even including directions for new development...

jbvb

PS: maybe it all looks too easy for me, because both of my PCs can run our network monitor program at a moment's notice, but the Sun people have their own tools, too.
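One of the 4.2 bugs named above -- UDP checksums -- refers to stacks that shipped with the checksum disabled (transmitted as zero). The checksum itself is just the standard Internet one's-complement sum shared by IP, UDP, and TCP; a minimal sketch, checked here against a sample IPv4 header (the specific header bytes are an illustrative example, not from this thread):

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words with end-around carry,
    complemented -- the Internet checksum used by IP, UDP, and TCP."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold in the carry
    return ~total & 0xFFFF

# A sample IPv4 header with its checksum field (bytes 10-11) zeroed:
header = bytes.fromhex("4500 0073 0000 4000 4011 0000 c0a8 0001 c0a8 00c7")

print(hex(internet_checksum(header)))  # → 0xb861
```

Recomputing the sum over a packet that carries the correct checksum yields zero, which is how a receiver verifies it -- and why a host that simply writes zero into the field silently defeats the check.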
tcs@USNA.MIL (Terry Slattery) (02/25/88)
Is there any reason why the regular vendor list cannot be expanded to include additional 'facts' about an implementation? Such facts would be subnet support, nameserver support, and specific questions relating to whether known widespread bugs are fixed (i.e. UDP checksums, TFTP, etc.). Vendors then voluntarily complete the form. It then becomes a "keep up with the Joneses" task to fill out the form with as much positive information as possible. False listings would be discovered, and possible negative publicity would eliminate them.

Someone who knows most of the pitfalls needs to come up with the first template. As new items need to be added, a new field is added to each entry. Old fields can be aged away as the problem subsides (i.e. we don't need to decide right here and now ALL the right questions to ask - just get close.) Announcements of a new field will eventually reach the vendors, and they will answer the empty item.

-tcs
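The scheme above -- a voluntary form whose fields are added as problems appear and aged away as they subside -- amounts to a trivial data structure. A sketch, with field names that are illustrative guesses rather than an actual template:

```python
# Hypothetical field list; the real template would come from someone
# who knows the pitfalls, per the post above.
TEMPLATE = ["subnet_support", "nameserver_support",
            "udp_checksums_fixed", "tftp_fixed", "broadcast_addr_correct"]

def make_entry(vendor, **facts):
    """Build one vendor row, leaving unanswered fields empty."""
    entry = {"vendor": vendor}
    for field in TEMPLATE:
        entry[field] = facts.get(field, "")  # empty until the vendor answers
    return entry

def retire_field(entries, field):
    """Age away a field once the problem it tracked subsides."""
    TEMPLATE.remove(field)
    for e in entries:
        e.pop(field, None)
```

Announcing a new field simply means every entry grows an empty slot that vendors have an incentive to fill in.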