[comp.object] Reasons why you don't prove your programs correct

johnson@p.cs.uiuc.edu (01/10/90)

/* Written  1:22 pm  Jan  7, 1990 by mmm@cup.portal.com in p.cs.uiuc.edu:comp.object */
There was an excellent article about 10 years ago in CACM on this.
I believe the title was "The Social Process and Proofs of Program
Correctness".  This article marked the death of program verification;
feelings which had echoed in the halls of academia and industry
for more than a decade were expressed in a clear form in print,
and from that point on the "conventional wisdom" was that program
verification was a dead-end for research.  In retrospect, it's hard
to see why this wasn't obvious from the very beginning.
/* End of text from p.cs.uiuc.edu:comp.object */

Actually, I (and many others) think that the article was full of
half-truths and that its conclusions were generally invalid.  Program
verification is just as popular in academia and industry as it ever
was, and is still receiving government funding.

Now, this doesn't mean that I (or anybody else) actually prove my
programs correct, but the reasons are not the ones given in the article
you mention.  Program verification may not yet be practical, but it
is still theoretically interesting.

johnson@p.cs.uiuc.edu (01/14/90)

Arguments that program verification is impossible because the
halting problem is undecidable are silly.  Don't you think that
the people who work(ed) on program verification knew that?  
Just because it isn't possible to verify every program doesn't
mean that it isn't possible to verify all the programs I want
to verify.  Undecidability applies to a class of problems, not to
individual problems.
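
To make that concrete, here is a small made-up example (Python-style
notation, chosen only because it is compact): the halting problem rules
out one decision procedure that works for every program, but it says
nothing about proving this one particular loop correct.

    def sum_first_n(n):
        """Return 0 + 1 + ... + (n - 1), for n >= 0."""
        assert n >= 0                     # precondition
        total, i = 0, 0
        # Invariant: total == 0 + 1 + ... + (i - 1), and 0 <= i <= n.
        # Variant:   n - i shrinks every iteration, so the loop halts.
        while i < n:
            total += i
            i += 1
        assert total == n * (n - 1) // 2  # postcondition, by induction
        return total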

There are lots of good arguments against 1970's style verification.
The best (to me) is that proving the equivalence of two programs
is not really interesting, and that is essentially what proving that
a program meets its complete specification amounts to.

A more general argument is that most system bugs are due to bad
specifications, not bad implementations of good specifications.
Thus, worrying about verification is a waste of time.  This is
a less compelling argument because, after all, there are quite a
few implementation errors, even if most errors are design errors.

One of the popular approaches to developing verified code today
is to start with the specification and transform it into a
program.  The idea is that the specification is a sort of program,
only extremely inefficient, and the transformations make it efficient.
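
A toy sketch of what that means (both functions below are invented for
illustration, not taken from any real transformation system): the
specification is itself executable, just hopelessly slow, and the
implementation is what the transformations are supposed to deliver.

    from itertools import permutations

    def sorted_spec(xs):
        # Specification as a program: "the permutation of xs that is in
        # nondecreasing order" -- obviously right, factorially slow.
        for p in permutations(xs):
            if all(p[i] <= p[i + 1] for i in range(len(p) - 1)):
                return list(p)

    def sorted_impl(xs):
        # Transformed version: insertion sort.  The transformation steps
        # are what carry the proof that this agrees with sorted_spec.
        result = []
        for x in xs:
            i = 0
            while i < len(result) and result[i] <= x:
                i += 1
            result.insert(i, x)
        return result

    # The obligation the transformational approach discharges:
    # sorted_impl(xs) == sorted_spec(xs) for every list xs.
    assert sorted_impl([3, 1, 2]) == sorted_spec([3, 1, 2]) == [1, 2, 3]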

Another is the idea of developing the program and the proof hand
in hand, but not necessarily making them equivalent, as the first
approach tends to do.

Then there was the idea expressed earlier of just letting the
system verify that certain assertions were true.  I like this
approach because you could combine it with Eiffel-style assertions
and a good optimizing compiler to eliminate a lot of checks and
make programs faster.
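
Roughly what I have in mind, with plain asserts standing in for Eiffel's
require/ensure (the example itself is invented): if a verifier can
discharge a check statically, the compiler is free to delete it.

    def binary_search(xs, key):
        # require (precondition): xs is sorted -- an O(n) runtime check.
        assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))
        lo, hi = 0, len(xs)
        while lo < hi:
            mid = (lo + hi) // 2
            if xs[mid] < key:
                lo = mid + 1
            else:
                hi = mid
        # ensure (partial postcondition): if lo is in range, xs[lo] >= key.
        assert lo == len(xs) or xs[lo] >= key
        return lo if lo < len(xs) and xs[lo] == key else -1

    # If the verifier proves at a call site that the argument is already
    # sorted (say it came straight out of a sort routine), the precondition
    # check is redundant and an optimizing compiler may drop it.  (Python's
    # -O flag strips asserts unconditionally, which is the unverified
    # version of the same trick.)
    print(binary_search(sorted([5, 2, 8]), 5))    # prints 1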

I think everybody (except Dijkstra) would agree that it isn't
currently practical to verify the correctness of large programs.
Nevertheless, I think that verification technology is important now
because it helps us think about programming better, and it might be
an important part of the way we build systems in the future.

Ralph Johnson

render@m.cs.uiuc.edu (01/17/90)

Written 12:41 pm  Jan 15, 1990 by joshua@athertn.Atherton.COM:
>No problem.  Specify the following programs, in a machine-readable
>format useful to a program verifier.  These should be commercial products,
>so they must look good and feel good so that people want to use them:
>
>   A source code control facility like SCCS or RCS
>       If you use system calls you must have some way of proving that
>       they do what your spec says they do.

For this, you might take a look at:

	Ian D. Cottam, "The Rigorous Development of a System Version 
	Control Program," IEEE Transactions on Software Engineering,
	Vol. SE-10, No. 2, March 1984, pp. 143-154.  

I think you'll find basically what you've asked for.  

>My last example points out a current problem with proving programs correct.
>They must only use libraries which have been proven correct, must run on 
>an operating system which has been proven correct, and must use hardware
>which is proven correct.  All of this is impossible right now.

Yes, it is impossible to prove anything COMPLETELY correct.  This is because 
the machines and programs involved in doing so are all devised by humans, and
humans are notoriously unreliable.  It is, however, very possible to increase 
one's assurance that a given program is correct using formal program
development and/or verification techniques.  The benefits of such assurances
to mission-critical software and systems should be obvious.  Thus, continued
research into formal development and proving methods seems quite worthwhile.

hal.

render@m.cs.uiuc.edu (01/18/90)

Written 12:15 pm  Jan 16, 1990 by jimad@microsoft.UUCP:
>Again: 'proving' that software is correct does nothing to improve the
>quality of that software.  Finding and removing bugs does improve the
>quality of software.  

Wrong. Finding and removing errors is testing.  And, as has oft been noted, 
testing can only determine the presence of errors, not their absence.  By 
proving software "correct" (actually proving that the software meets some 
formal specification), one attempts to show that there are no errors in 
the software (w.r.t. the specification).  The process of proving a program
correct should either indicate that there are no errors or should indicate
that there are (and often what and where they are).  Thus, successful program 
proving methods should eliminate the need for testing.  In practice things 
are not so straightforward, because verification is tough to do for many kinds
of programs, and one still has to contend with erroneous specifications.
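
Put symbolically (my wording, not anything from the CACM exchange): a
passing test run only establishes

    \forall x \in T :  Q(x, f(x))        for some finite test set T,

while a correctness proof establishes

    \forall x :  P(x) \Rightarrow Q(x, f(x)),

which is why a proof can rule errors out and a test suite cannot.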

BTW, if anyone hasn't read the December CACM exchange between Dijkstra, Parnas
and the others, they should do so now.  Even though he sounds like a crank,
Dijkstra raises a lot of good points pertinent to this discussion. 

hal.