[comp.protocols.tcp-ip] protocol verification

art@ACC.ARPA (02/09/88)

With the recent discussion about TCP testing and verification, I thought
I would pass on an observation.  I don't know whether this reflects on
just these specific instances or on protocol testing in general.

I was involved with GM in testing MAP protocols for Autofact '84 and '85.
Both times, all of the vendors were eventually able to pass the
conformance test suite.  When interoperability testing began, MOST of
the vendors found problems affecting interoperability.  On other occasions,
I have seen systems working on real networks fail conformance tests, and
systems that passed the conformance tests fail on the real network.

I feel that this probably stems from a tendency by the test writer(s)
to concentrate more on protocol details (such as which diagnostic
error code should be returned) than on the basic functionality of the
protocol.  The implementor may then end up spending effort just trying
to make the test suite happy at the expense of a robust design.

						Art Berggreen
						art@acc.arpa

------

kozel@SPAM.ISTC.SRI.COM (Edward R. Kozel) (02/10/88)

Art,
	I could not agree more with you.  Validation of a protocol
based on the formal specification is certainly fundamental, but it does
not ensure either proper operation or interoperability.  As another
observation, I know of one officially validated DDN X.25
implementation that passed the DCA protocol tests yet broke when put
into actual operation.  "Bake-offs", while perhaps colloquial, serve a
very useful and important function.

Ed Kozel

ahill@CC7.BBN.COM ("Alan R. Hill") (02/10/88)

Art,
	My experience in providing support to host subscribers trying to
bring new software products onto the DDN indicates that the biggest
oversight in protocol testing is impairment testing.  Sure, products
obey all the rules if the operating or test environment is perfect and
also obeys the rules, but products routinely misbehave or fail if the
environment is degraded or the communicating partner is not obeying
the protocol.
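
To make the idea concrete, here is a rough sketch of one such check,
written in C against a made-up host and port.  It plays the part of a
misbehaving partner: it connects, sends a burst of bytes the protocol
never allows, and then waits a bounded time to see whether the
implementation under test answers, closes the connection, or simply
hangs.  Error handling and a real protocol exchange are omitted; it is
only meant to show the flavor of an impairment test.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/time.h>

#define TARGET_ADDR "10.0.0.2"   /* made-up address of the host under test */
#define TARGET_PORT 7777         /* made-up port of the service under test */

int main(void)
{
    struct sockaddr_in sa;
    struct timeval tv;
    char junk[256], reply[256];
    ssize_t n;
    int fd;

    fd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = inet_addr(TARGET_ADDR);
    sa.sin_port = htons(TARGET_PORT);
    if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof sa) < 0) {
        perror("connect");
        return 1;
    }

    /* Misbehave: send a burst of bytes no well-formed request contains. */
    memset(junk, 0xff, sizeof junk);
    write(fd, junk, sizeof junk);

    /* A robust peer rejects the input or closes the connection;
       a fragile one hangs or wedges the service.  Allow ten seconds. */
    tv.tv_sec = 10;
    tv.tv_usec = 0;
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    n = read(fd, reply, sizeof reply);
    if (n > 0)
        printf("peer answered with %ld bytes\n", (long)n);
    else if (n == 0)
        printf("peer closed the connection\n");
    else
        printf("no response within ten seconds\n");

    close(fd);
    return 0;
}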

Regards,
Alan

hrp@windsor.CRAY.COM (Hal Peterson) (02/12/88)

Alan,

You wrote that

   Sure, products obey all the rules if the operating or test
   environment is perfect and also obeys the rules, but products
   routinely misbehave or fail if the environment is degraded or the
   communicating partner is not obeying the protocol.

This is absolutely correct, and is a widespread problem in the testing
of ANY software:  most programmers don't think to try invalid inputs.
For an excellent treatment of the topic, read the first chapter of
``The Art of Software Testing'' by Glenford J. Myers; then read the
rest of the book, which is well worth the time of anyone who tests,
builds, or otherwise copes with software.

On the matter at hand:  the DCA Protocol Laboratory Suite includes
many tests that deliberately violate the protocols.

This points up a couple of arguments in favor of having a verification
suite as a part of comprehensive testing.  First, the test procedure
can create arbitrary errors at arbitrary intervals, taking into
account known implementation mistakes of the past and anticipating
tomorrow's buggy code.  Second, once such violations are in the suite,
they are there for good; with bake-off testing, implementors fix their
bugs as they go along and some kinds of flakiness become extinct, so some of
everyone's error detection code goes untested.  A good (and
well-maintained!) verification suite can give you the effect of
talking to every dumb mistake that's been made plus a few still
waiting to be made.
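
As a crude illustration of injecting errors at arbitrary intervals,
here is a skeleton of a one-way relay, in C, that sits between two
implementations and randomly drops or corrupts pieces of the byte
stream passing through it.  The addresses, ports, and percentages are
made up; a real suite violates the protocol rules themselves rather
than just mangling bytes, relays both directions, and checks the
results.  The point is only how cheaply such interference can be
produced and repeated.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define LISTEN_PORT  6000        /* made-up: where the tested client connects */
#define TARGET_ADDR  "10.0.0.2"  /* made-up: the real server */
#define TARGET_PORT  6001

int main(void)
{
    struct sockaddr_in la, ta;
    int lfd, cfd, sfd;
    char buf[512];
    ssize_t n;

    /* Accept one connection from the implementation being exercised. */
    lfd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&la, 0, sizeof la);
    la.sin_family = AF_INET;
    la.sin_addr.s_addr = htonl(INADDR_ANY);
    la.sin_port = htons(LISTEN_PORT);
    bind(lfd, (struct sockaddr *)&la, sizeof la);
    listen(lfd, 1);
    cfd = accept(lfd, NULL, NULL);

    /* Connect to the real peer on its behalf. */
    sfd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&ta, 0, sizeof ta);
    ta.sin_family = AF_INET;
    ta.sin_addr.s_addr = inet_addr(TARGET_ADDR);
    ta.sin_port = htons(TARGET_PORT);
    connect(sfd, (struct sockaddr *)&ta, sizeof ta);

    srand(1);    /* fixed seed, so a failing run can be repeated exactly */
    while ((n = read(cfd, buf, sizeof buf)) > 0) {
        if (rand() % 100 < 5) {
            fprintf(stderr, "dropped %ld bytes\n", (long)n);
            continue;                    /* lose this chunk entirely */
        }
        if (rand() % 100 < 5) {
            buf[rand() % n] ^= 0xff;     /* invert every bit of one byte */
            fprintf(stderr, "corrupted one byte\n");
        }
        write(sfd, buf, n);
    }

    close(sfd);
    close(cfd);
    close(lfd);
    return 0;
}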

I should stress, though, that a verification suite is only part of a
comprehensive testing strategy, and that bake-offs can be extremely
valuable for finding bugs.  A verification suite is software, and like
all software it is imperfect.  It is entirely possible that two
implementations both pass the suite but don't interoperate.  Bake-offs
can catch that, and when they do, they may have found as many as FOUR bugs:

1.  a bug in one of the implementations.
2.  a bug in the other implementation; after all, they could both be
    at fault.
3.  a bug in the specification.  It may be unclear or inconsistent or
    incomplete and so have misled the implementors.
4.  a bug in the verification suite.  It should have caught the
    problem, and next time it will, since it's a simple matter to add
    a test to a well-designed suite.  The suite gets better as time
    goes on.

--
Hal Peterson / Cray Research / 1440 Northland Dr. / Mendota Hts, MN  55120
hrp%hall.CRAY.COM@umn-rei-uc.ARPA	ihnp4!cray!hrp	    (612) 681-3145