[comp.theory] Correctness in parallel and distributed systems.

jbn@glacier.STANFORD.EDU (John B. Nagle) (02/01/88)

      Proof of correctness has been a disappointing technology.  The
basic problems are 

    1)  How do you write down what the program is "supposed to do"?
	Formal specifications of real programs turn out to be at least
	half the size of the program itself, and much harder to
	debug.  In any case, if we really had a way to express formal
	specifications that was any good, we would be better off trying
	to generate code from them than verifying handwritten code against
	them.  Formal specification languages turn out, in the end, to be
	just another kind of programming language.
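
	To illustrate the point (a hypothetical sketch of mine, not
	something from any real verifier): even a "declarative"
	specification of sorting ends up being code, with its own
	opportunities for bugs.

```python
# Hypothetical sketch: a formal specification of sorting, written as
# executable predicates.  Note that the spec is itself a program --
# forget one clause and a wrong implementation will "verify".
from collections import Counter

def is_sorted(xs):
    # Postcondition, part 1: output is in nondecreasing order.
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def is_permutation(xs, ys):
    # Postcondition, part 2: output is a rearrangement of the input.
    return Counter(xs) == Counter(ys)

def meets_sort_spec(inp, out):
    # The full specification.  Drop is_permutation and a program that
    # always returns [] satisfies what remains.
    return is_sorted(out) and is_permutation(inp, out)

print(meets_sort_spec([3, 1, 2], [1, 2, 3]))   # True
print(meets_sort_spec([3, 1, 2], [1, 1, 1]))   # False: sorted, but not a permutation
```

	Debugging the second clause into existence is exactly the kind of
	work one hoped the specification would save.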

    2)  A useful verifier is about as complicated as a globally optimizing
	compiler for the same language.  Such compilers typically cost
	tens to hundreds of man-years to develop.  Building a verifier
	is a big, expensive job.

    3)  The low-rent approaches don't work.  Proving "partial correctness"
	(the answer is right only if the program happens to halt)
	generates partially correct programs.  We can do that without
	verification technology.  Doing verification by hand, without
	automated tools, is not only error prone, but a task of mind-numbing
	boredom, and the boredom itself results in errors, many of which
	consist of proving something correct that wasn't.
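
	A hypothetical sketch of why partial correctness is so weak: a
	program that never halts is vacuously partially correct against
	any postcondition.  The usable guarantee is total correctness --
	partial correctness plus a termination argument.

```python
# Hypothetical sketch, not from the post.  Partial correctness says
# "IF the program halts, the answer is right"; nontermination is free.

def vacuously_correct_sort(xs):
    # Partially correct w.r.t. ANY sorting postcondition: it never
    # terminates, so the postcondition never comes into play.
    while True:
        pass

def totally_correct_sort(xs):
    # Total correctness = partial correctness + termination.
    # Insertion sort with explicit loop variants noted in comments.
    xs = list(xs)
    for i in range(1, len(xs)):                # variant: len(xs) - i decreases
        j = i
        while j > 0 and xs[j - 1] > xs[j]:     # variant: j decreases
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs

print(totally_correct_sort([3, 1, 2]))   # [1, 2, 3]
```

	The termination proof is the part the "low-rent" approach skips.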

    4)  It is brutally hard to push a program through a verifier.  It's
	like doing your taxes while being audited at the same time.  So
	verified programs are very expensive to develop. 


Verification is not impossible.  But there are some very tough problems.
Verification theory isn't the problem, though.  Nor is automatic theorem
proving.  We actually know enough to build and use verification systems,
but it is very, very hard.  See my paper (with Scott Johnson) in POPL 83 and
Don Good's GYPSY project at U. Texas for examples of some non-trivial systems
that were built.

If problem 1) above ever gets cracked, the others can be solved.

					John Nagle