[comp.lang.misc] Formal definitions

chip@tct.com (Chip Salzenberg) (04/05/91)

According to schwartz@groucho.cs.psu.edu (Scott Schwartz):
>How do you know that the validation suite tests for the language that
>ANSI specified?

I don't.  I suppose that should worry me.  *yawn*

>Some languages, like Turing, were formally specified from the time
>of inception; you can run a program and mechanically decide if it
>did what the language definition said it should do.

How do you know that the program that makes that decision doesn't have
a bug in it?

At some point, you have to stop assuring and start doing.
-- 
Brand X Industries Custodial, Refurbishing and Containment Service:
         When You Never, Ever Want To See It Again [tm]
     Chip Salzenberg   <chip@tct.com>, <uunet!pdn!tct!chip>

schwartz@groucho.cs.psu.edu (Scott Schwartz) (04/07/91)

According to Chip Salzenberg:
| According to Scott Schwartz:
| >How do you know that the validation suite tests for the language that
| >ANSI specified?
| 
| I don't.  I suppose that should worry me.  *yawn*

It worries me.  It also worries me that you aren't worried.

| >Some languages, like Turing, were formally specified from the time
| >of inception; you can run a program and mechanically decide if it
| >did what the language definition said it should do.
| 
| How do you know that the program that makes that decision doesn't have
| a bug in it?

It might, but that's just one program, not 50 different compilers with
5,000 test cases.  It's easier to feel sure about.  Consider this...

* Because the language was unambiguously specified I have more
confidence in the people writing the software.  For one thing, if you
know what the mission is in advance, it is that much easier to
accomplish it.  For another, having a mechanically interpretable
formal specification means you can build software tools to help the
entire process along, just as having a yacc grammar for some language
makes it easier and more reliable to write a parser.  There may be
bugs, but debugging small well specified tools is easier and more
effective than debugging large ill-specified final products.
 
* If a formal description of a programming language is available
it makes it _possible_ to decide the question you ask.  Absent such a
description the best you can do is flame about it on usenet for a few
weeks. 

| At some point, you have to stop assuring and start doing.

Sure.  That's the argument in favor of dynamic typing that has been so
well received here recently.  What I'm suggesting is that providing a
formal specification of a language is the software engineering analog
of using things like parser generators and static type checking.  Yes,
no system is perfect, but gee, why make it harder on everyone?

mrs@netcom.COM (Morgan Schweers) (04/08/91)

In article <p79Gyx1j1@cs.psu.edu> schwartz@groucho.cs.psu.edu (Scott Schwartz) writes:
>
>According to Chip Salzenberg:
>| According to Scott Schwartz:
>| >How do you know that the validation suite tests for the language that
>| >ANSI specified?
>| 
>| I don't.  I suppose that should worry me.  *yawn*
>
>It worries me.  It also worries me that you aren't worried.
>
    This is because you see everything in terms of 'theory' and not
'practice'.  (Inferred from your various comments.)  I'm not a theory person
either, so it doesn't really worry me either.  The point is that if I can get
something to *WORK* then to hell with all the theory in the world.
    I don't need a language to be formally accepted, or formally tested, to
write code in it.  I do the testing on my program.  If the compiler breaks it,
I'll worry about it.  If the compiler doesn't break my code, whassamatter with
it?

>| >Some languages, like Turing, were formally specified from the time
>| >of inception; you can run a program and mechanically decide if it
>| >did what the language definition said it should do.
     Yes, but can you get any *WORK* done in it in a reasonable amount of time?
How powerful is it?  How can you tell the difference between a program that
is 'decided to be incorrect' and an OS which forces an error condition on the
program?  Does it *REALLY* matter?  You can build your theoretical languages
all you want, but if they don't add to my ability to produce smooth, working
code, I'M NOT GOING TO TOUCH THEM!
     I'm a systems programmer, so decide before you respond, if your language
(probably supposedly universal across all implementations, if I know you theory
weenies) will do the job for what I consider real work.

>| 
>| How do you know that the program that makes that decision doesn't have
>| a bug in it?
>
>It might, but that's just one program, not 50 different compilers with
>5,000 test cases.  It's easier to feel sure about.  Consider this...
>
>* Because the language was unambiguously specified I have more
>confidence in the people writing the software.  For one thing, if you
>know what the mission is in advance, it is that much easier to
>accomplish it.  For another, having a mechanically interpretable
>formal specification means you can build software tools to help the
>entire process along, just as having a yacc grammar for some language
>makes it easier and more reliable to write a parser.  There may be
>bugs, but debugging small well specified tools is easier and more
>effective than debugging large ill-specified final products.
> 
>* If a formal description of a programming language is available
>it makes it _possible_ to decide the question you ask.  Absent such a
>description the best you can do is flame about it on usenet for a few
>weeks. 
    Get serious.  If you come up with a language that is guaranteed to be
bug-proof, I'll show you a language with a BEGIN and an END which can be
interchanged, and NOTHING ELSE.
    Show me one that is supposedly USEFUL, and I'll *WRITE* a bug in it! ;-)

>
>| At some point, you have to stop assuring and start doing.
>
>Sure.  That's the argument in favor of dynamic typing that has been so
>well received here recently.  What I'm suggesting is that providing a
>formal specification of a language is the software engineering analog
>of using things like parser generators and static type checking.  Yes,
>no system is perfect, but gee, why make it harder on everyone?

    YOU are the one looking to make things harder.  Software Engineering?  Get
real.  Perhaps it's the bad associations I've had with people calling
themselves Software Engineers (they worked in Ada, Modula-2 and COBOL), but
Software Engineering For Better Programs is nonsense.  If you have a language
that makes it possible to 'prove' the program correct, then you have a language
which is probably putting too many restrictions on the programmer.

    Also, to me, a formal specification consists of a grammar, and a
description of the basic keywords.  Period.  At that point, the language is
defined in terms of what I can use it for.  What do you mean by a 'formal
specification'?  Don't give me math crap.  I'm a programmer, not a math
junkie.

    *IF* you can make me more productive, I'd look at what you were talking
about.  The important point here is that *I KNOW WHAT I'M DOING* and I don't
want a language that told me I didn't.  I could program in Pascal or some other
pseudo-useless language if I wanted that.  (I have a friend who is forced to
program in Pascal.  You should hear him scream for type-casting!)

     The point is that there are tradeoffs involved, and that there is *NO WAY*
you are going to design a language that will be both completely useful and
completely 'safe'.  The two are incompatible, in my (admittedly limited)
experience.  Prove me wrong.  Feel free.
                                                           --  Morgan Schweers
+---------------
  My company wouldn't really understand this discussion, so I can't imagine
what their opinions would be.                              --  mrs@netcom.com
---------------+

new@ee.udel.edu (Darren New) (04/09/91)

In article <1991Apr8.080931.23209@netcom.COM> mrs@netcom.COM (Morgan Schweers) writes:

>    I don't need a language to be formally accepted, or formally tested, to
>write code in it.  I do the testing on my program.  If the compiler breaks it,
>I'll worry about it.  If the compiler doesn't break my code, whassamatter with
>it?

Maybe that it doesn't do the same thing on my configuration, or over my
network, or with my version of the compiler as it does with yours.  As
an example, there are programs on the Amiga written before the formal method
for determining PAL vs NTSC was decided. These usually didn't work when
written in Europe and run in the USA.

Ever get a C program that compiled when int==long but not when int==short?
That was formally defined (in some sense) back in K&R.  When people ignored
that definition and played loose with what their compiler accepted, things
broke.
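
Here's a made-up sketch of the kind of assumption that bites (assume a
compiler where int is 16 bits; the numbers are arbitrary):

    #include <stdio.h>

    int main(void)
    {
        long size = 100000L;   /* always fits in a long                  */
        int  n    = size;      /* fine when int==long (32 bits), quietly
                                  truncates when int is 16 bits          */

        printf("long: %ld  int: %d\n", size, n);
        return 0;
    }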

>     I'm a systems programmer, so decide before you respond, if your language
>(probably supposedly universal across all implementations, if I know you theory
>weenies) will do the job for what I consider real work.

No need to insult formal languages or their users just because they
don't do what *you* want, especially when you later admit you have no
clue what you are talking about.  It's like saying "before you tell me
that Ryder is better than U-Haul, understand that I drive race cars for
a living."  You can't draw conclusions on which truck is better for
moving on such a basis.

If you program where every cycle counts (and you should, if you are
doing what I would call systems programming) then formal languages
probably can't be compiled as efficiently as you want them to, because
they either have to work the same independent of machine or there have
to be decisions you make for your machine that you wouldn't otherwise
need to make.  However, most of us don't program where every cycle counts
(especially when it's not worthwhile after you systems programmer clowns
destroy the efficiency with hack coding :-) and do program where continued
correct execution is important.

>    Get serious.  If you come up with a language that is guaranteed to be
>bug-proof, I'll show you a language with a BEGIN and an END which can be
>interchanged, and NOTHING ELSE.

What do you mean by "bug-proof"?  You mean a language where no programs
have bugs, or a language where all bugs are caught?  Normally, a formal
language defines the exact effects of any syntactically valid program.
Of course, if what you tell it to do is not what you want it to do, then
you are not going to get what you want.

>    Show me one that is supposedly USEFUL, and I'll *WRITE* a bug in it! ;-)

Take a look at LOTOS.  It's completely mathematically defined.  It's
not even difficult to understand.  It is very powerful, allowing
definitions of any arbitrary data types that are no more infinite than
integers.  It also handles complex parallel evaluation.  Is it useful?
Very.  Is it compilable?  Not yet, that I know of.  Then what good is
it?  I can describe TCP and IP completely and formally in about 5
pages.  I can *communicate* to you exactly what to do in all cases,
including reception of packets not addressed to you, conflicting option bits,
empty packets, all the errors which you would normally have missed
because you didn't think about that case, etc, etc, etc.  Also, I can
*know* that all of that has been communicated and that I have not
forgotten anything.  This is an example of a very useful formal
language (I can direct you to conferences which will attest to its
usefulness) that you can't even compile.

>If you have a language
>that makes it possible to 'prove' the program correct, then you have a language
>which is probably putting too many restrictions on the programmer.

LOTOS is quite a bit more flexible than any compilable language I've seen.
At some point, you have to make your program work on more than one
configuration.  You also have to know that it meets the customer's
specifications.  How do you do that with C?  With LOTOS, you can prove
that your functionality is a superset of the customer's specification.
You can also write programs which could not possibly be implemented in
C, even by Dan Bernstein :-).  What more proof do you need?

>    Also, to me, a formal specification consists of a grammar, and a
>description of the basic keywords.  Period.  

Then you have never seen a *real* formal specification and you
shouldn't be claiming they are useless, since you've obviously never
tried to use one.  :-)

>Don't give me math crap.  I'm a programmer, not a math junkie.

I don't think you need to be a "math junkie" to understand LOTOS, but
you can't have a formal definition without math.  LOTOS's definition
is really very simple.  Much simpler than C.  About as simple as FORTH
or LISP.

>    *IF* you can make me more productive, I'd look at what you were talking
>about.  The important point here is that *I KNOW WHAT I'M DOING* and I don't
>want a language that told me I didn't.  

This leads me to believe you have no clue what you are talking about.
*IF* your productivity ever depended on anybody else understanding what
you are doing (like a project long enough that 80% of the people
finishing it were not around when it was started and big enough that
many people don't ever see other people working on the same program),
then formal languages are a great bonus.  A formal definition does not
necessarily make a language any more restrictive than C is. From
everything I know about Turing (not much :-), it is about as flexible
as C and somewhat more powerful.

>I could program in Pascal or some other
>pseudo-useless language if I wanted that.  (I have a friend who is forced to
>program in Pascal.  You should hear him scream for type-casting!)

This has nothing to do with formal definitions.  Pascal has a formal
definition, but certainly your friend isn't using it.  (Pascal was not
originally formally defined, but somewhere I have a reference for a
formal definition thereof.)

>     The point is that there are tradeoffs involved, and that there is *NO WAY*
>you are going to design a language that will be both completely useful and
>completely 'safe'.  

I suspect you have the wrong idea what "safe" means.  Formal means
that every sequence of operations you perform either has a defined result
or you know it doesn't.  For example, in C, running off the end of a malloc'ed
array isn't formally defined.  "Faithful" means that it either does what
is defined or the run-time system tells you about it.  For example, if
array bounds are checked, then array bounds are "faithful."  If everything
is faithful, then referencing disposed memory is caught, uninitialized
variables are caught, and so on.  I'm not sure what you mean by "safe."
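
In C terms, the difference looks something like this (a sketch only; the
checked_store() routine is something I just made up, not part of any real
library):

    #include <stdio.h>
    #include <stdlib.h>

    /* a "faithful" store: either the write is defined or you hear about it */
    static void checked_store(int *base, size_t bound, size_t index, int value)
    {
        if (index >= bound) {
            fprintf(stderr, "store out of bounds: %lu >= %lu\n",
                    (unsigned long)index, (unsigned long)bound);
            exit(1);
        }
        base[index] = value;
    }

    int main(void)
    {
        int *a = malloc(10 * sizeof *a);
        if (a == NULL)
            return 1;

        /* a[10] = 42;  -- not formally defined: anything may happen,
                           and nothing promises to tell you             */
        checked_store(a, 10, 10, 42);   /* "faithful": caught, reported */

        free(a);
        return 0;
    }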

>The two are incompatible, in my (admittedly limited)
>experience.  Prove me wrong.  Feel free.

Check out LOTOS.  Then talk to anybody who designed a 50,000 line network
protocol product after specifying it in LOTOS.  Talk to the people they
hired halfway through the development and ask them how long it took
them to get up to speed (probably about a week).  Talk to the customers,
who had a contract based on a formal definition of the desired service,
and how they could 1) settle whether there was a bug or not, and 2) get
realistic estimates on construction and modification times/costs,
etc etc etc.

How do you know Ada's validation suite tests all the semantics of Ada?
Do you want to build SDI based on "I think so"?

>+---------------
>  My company wouldn't really understand this discussion, so I can't imagine
>what their opinions would be.                              --  mrs@netcom.com
>---------------+

Actually, I bet *somebody* there *would* understand this discussion (given a
name like "netcom", if that really is where you work).  Ask around, and
see if you can find somebody who understands LOTOS or Estelle or SDL.
Networking is really where formal techniques are best applied, because
they tremendously aid in human-to-human communication as well as human-
to-machine.             -- Darren
-- 
--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
----- Network Protocols, Graphics, Programming Languages, FDTs -----
  +=+=+ My time is very valuable, but unfortunately only to me +=+=+
+ When you drive screws with a hammer, screwdrivers are unrecognisable +

schwartz@groucho.cs.psu.edu (Scott Schwartz) (04/09/91)

Morgan Schweers rants:
| This is because you see everything in terms of 'theory' and not
| 'practice'.

Not a bit.  I'm not a "theory weenie".  On the other hand, part of the
fun of writing programs is that you get tangible benefits from
precision.

| I don't need a language to be formally accepted, or formally tested,
| to write code in it.  I do the testing on my program.  If the
| compiler breaks it, I'll worry about it.  If the compiler doesn't
| break my code, whassamatter with it?

How can you tell if it is a compiler bug, or a bug in your code, or a
bug in your understanding?  Someone has to specify what the language
is supposed to do.  It seems to me that if you can build software
tools to help implement the language you will get a better result.

| Yes, but can you get any *WORK* done in it in a reasonable amount of time?

I type "make".  How do you do it?

| How powerful is it? 

A flip answer is, "More powerful than what you are using now".  A
serious answer notes that your question has nothing to do with whether
or not the language is formally defined.

Suppose someone writes denotational semantics for C, and the ANSI
committee stamps it with their seal of approval.  Will you suddenly
decide you hate C?

| Also, to me, a formal specification consists of a grammar, and a
| description of the basic keywords.  Period.  At that point, the language is
| defined in terms of what I can use it for.  What do you mean by a 'formal
| specification'?  Don't give me math crap.  I'm a programmer, not a math
| junkie.

Great... just when hostile usenet junkies get tired of accusing Dan
Bernstein of being a computer scientist, they start accusing me of
being a mathematician.

You realize that a BNF grammar is a formal specification of syntax,
right?  (Or are you opposed to parser generators too?)  What objection
do you have to an analogous formal description of semantics?  Why
shouldn't the C compiler implementor be able to directly code the fact
that type widening is unsigned preserving (or is it value
preserving?), or that you are allowed to index but not dereference one
element past the end of an array?
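
Concretely, that last rule is what makes the usual pointer-walking idiom
legal.  A trivial sketch, nothing more:

    #include <stdio.h>

    #define N 8

    int main(void)
    {
        int a[N];
        int *p;
        int  i = 0;

        for (p = a; p != &a[N]; p++)   /* &a[N] may be formed and compared */
            *p = i++;

        /* *(&a[N]) would not be defined: the pointer is legal,
           but there is no object behind it                      */

        for (i = 0; i < N; i++)
            printf("%d ", a[i]);
        putchar('\n');
        return 0;
    }

A compiler writer working from a formal semantics gets that rule handed to
him; one working from folklore gets to guess.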

| *IF* you can make me more productive, I'd look at what you were
| talking about. 

The first step is for you to sit down and drink some orange juice and
watch some tv.  Don't read usenet until you feel better. :-)

Next time you run "lint" think about the kinds of things it is
checking.  Then think about why not all compilers come with things
like lint.  Then think about why.

dc@sci.UUCP (D. C. Sessions) (04/10/91)

  The same old arguments are always dragged out to justify bad workmanship, 
  whether in software, hardware, or carpentry.  This time around, we're 
  hearing them from Morgan Schweers.  Nothing personal, but they set off 
  some major BS alarms.

In article <1991Apr8.080931.23209@netcom.COM> mrs@netcom.COM (Morgan Schweers) writes:
# In article <p79Gyx1j1@cs.psu.edu> schwartz@groucho.cs.psu.edu (Scott Schwartz) writes:

# >It worries me.  It also worries me that you aren't worried.
# >
#     This is because you see everything in terms of 'theory' and not
# 'practice'.  (Inferred from your various comments.)  I'm not a theory person
# either, so it doesn't really worry me either.  The point is that if I can get
# something to *WORK* then to hell with all the theory in the world.

  I routinely get the same argument when I show someone a design violation 
  in a circuit design:  "Hey, it works!  Who cares about some theoretical 
  problem?"

  Of course, I can't *prove* that a transient error was due to substrate 
  injection resulting from pin overdrive.  Since I can't prove it, let's 
  forget the whole thing, right?

#     I don't need a language to be formally accepted, or formally tested, to
# write code in it.  I do the testing on my program.  If the compiler breaks it,
# I'll worry about it.  If the compiler doesn't break my code, whassamatter with
# it?

  Translation: "It worked; ship it!"

#      I'm a systems programmer, so decide before you respond, if your language
# (probably supposedly universal across all implementations, if I know you theory
# weenies) will do the job for what I consider real work.

  Is this one-downmanship?  I'm one step down from a systems programmer: I
  design the stinking HARDWARE your stuff runs on.  Why don't you REALLY
  get down and dink with the *voltages* like I do?  Bits are for wimps. 

  The same arguments are used universally to justify spending a lot of time 
  on the easy problems of micro-optimizing the 5% instead of making sure 
  the 95% works reliably.

# >| At some point, you have to stop assuring and start doing.
# >
# >Sure.  That's the argument in favor of dynamic typing that has been so
# >well received here recently.  What I'm suggesting is that providing a
# >formal specification of a language is the software engineering analog
# >of using things like parser generators and static type checking.  Yes,
# >no system is perfect, but gee, why make it harder on everyone?
# 
#     YOU are the one looking to make things harder.  Software Engineering?  Get
# real.  Perhaps it's the bad associations I've had with people calling
# themselves Software Engineers (they worked in Ada, Modula-2 and COBOL), but
# Software Engineering For Better Programs is nonsense.  If you have a language
# that makes it possible to 'prove' the program correct, then you have a language
# which is probably putting too many restrictions on the programmer.

  We used to hear this argument from radio hobbyists who would grab the old 
  breadboard, a handful of tubes, and a soldering iron and DISCOVER a 
  circuit.  I trust it as much as I trust a bridge which was built by 
  sticking together available beams however the designer thought they fit 
  well.

#     Also, to me, a formal specification consists of a grammar, and a
# description of the basic keywords.  Period.  At that point, the language is
# defined in terms of what I can use it for.  What do you mean by a 'formal
# specification'?  Don't give me math crap.  I'm a programmer, not a math
# junkie.

  I'm glad you know from the grammar and keywords what would happen if you 
  executed 

    c = f( c, c = g( c ) );

  but frankly, I'd like to know that the results wouldn't change with the 
  compiler version.
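
  For anyone who hasn't been burned by this yet, here's a made-up sketch 
  of the two plausible readings; the exact numbers obviously depend on 
  the compiler:

    #include <stdio.h>

    static int f(int x, int y) { return 10 * x + y; }
    static int g(int x)        { return x + 1; }

    int main(void)
    {
        int c = 3;

        c = f( c, c = g( c ) );
        /* first argument fetched before the embedded assignment: f(3,4) == 34
           embedded assignment done first:                        f(4,4) == 44
           strictly, modifying c and reading it again with no intervening
           sequence point isn't even unspecified -- it's undefined          */
        printf("%d\n", c);
        return 0;
    }

  Both 34 and 44 are "correct" outputs as far as the compiler is concerned.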

#     *IF* you can make me more productive, I'd look at what you were talking
# about.  The important point here is that *I KNOW WHAT I'M DOING*

  That's funny; so did the carpenter who cut before he measured.  I'm glad 
  that there are at least two infallible humans in the world.  As for the 
  rest of us, we make occasional mistakes and some even appreciate having 
  those mistakes caught early in the cycle rather than late.

#                                                                  and I don't
# want a language that told me I didn't.  I could program in Pascal or some other
# pseudo-useless language if I wanted that.  (I have a friend who is forced to
# program in Pascal.  You should hear him scream for type-casting!)

#      The point is that there are tradeoffs involved, and that there is *NO WAY*
# you are going to design a language that will be both completely useful and
# completely 'safe'.  The two are incompatible, in my (admittedly limited)
# experience.  Prove me wrong.  Feel free.
#                                                            --  Morgan Schweers

  Sounds like Goedel's Theorem.  But I forgot; that's for math weenies.
-- 
| The above opinions may not be original, but they are mine and mine alone. |
|            "While it may not be for you to complete the task,             |
|                 neither are you free to refrain from it."                 |
+-=-=-    (I wish this _was_ original!)        D. C. Sessions          -=-=-+

peter@ficc.ferranti.com (Peter da Silva) (04/10/91)

In article <50097@nigel.ee.udel.edu> new@ee.udel.edu (Darren New) writes:
> You can also write programs which could not possibly be implemented in
> C, even by Dan Bernstein :-).  What more proof do you need?

This is as insane a claim as any of Dan's. If it's implementable in
principle it's implementable in C. It might take building another
language on top of C to make it convenient, but it's implementable.

> How do you know Ada's validation suite tests all the semantics of Ada?
> Do you want to build SDI based on "I think so"?

Is there an alternative? How do you specify hardware and human actions
based on a formal language specification?
-- 
Peter da Silva.  `-_-'  peter@ferranti.com
+1 713 274 5180.  'U`  "Have you hugged your wolf today?"

gudeman@cs.arizona.edu (David Gudeman) (04/11/91)

In article  <969@mgt3.sci.UUCP> D. C. Sessions writes:
]  Of course, I can't *prove* that a transient error was due to substrate 
]  injection resulting from pin overdrive.  Since I can't prove it, let's 
]  forget the whole thing, right?

That isn't a parallel situation.  If you can show that some insecurity
exists in a program then it should be fixed.  That does not mean that
it is useful to try to prove mathematically that there are no
insecurities.

]  We used to hear this argument from radio hobbyists who would grab the old 
]  breadboard, a handful of tubes, and a soldering iron and DISCOVER a 
]  circuit.  I trust it as much as I trust a bridge which was built by 
]  sticking together available beams however the designer thought they fit 
]  well.

How is direct experimentation with circuit C less reliable than
(1) experimentation with general circuits followed by (2) theorizing
about the nature of circuits followed by (3) inferences about the
circuit C based on the assumption that the theory is correct?

I think that this "faith in formalism" betrays a lack of thought about
the philosophy of science, math, and models.  Not that I blame you for
this.  The fault lies with modern university educations which -- at
least in the US -- are painfully lacking some important foundational
studies.

Can anyone who thinks it is useful to "prove" programs correct provide
me with an explanation of exactly what the proof accomplishes?  What
do you know after the proof that you didn't know before the proof?
Before you answer, consider this: the process of deriving a
mathematical proof is every bit as prone to errors as is the process
of writing a computer program.

Also consider that when you have written a program, you can test it to
get some level of confidence that it works.  When you have written a
proof, the only way to "test" it is to let others read it and try to
find problems with it.  (Note that "testing" a proof is not the same
as "testing" a theorem.  You can test a theorem by looking for
multiple proofs, and by looking for contradictions.)

Finally, consider the relationship of the proof to the real world,
what assumptions are essential to this relationship, and whether those
assumptions are more reliable than the reliability that you can get by
direct testing.  To answer this question you will have to deal with
the philosophy of mathematical models.  What relationship does the
concept of a "set" have in common with the real world?  Sets follow
strict rules; how can we know that anything in the real world follows
any rules similar to those for sets?
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

new@ee.udel.edu (Darren New) (04/11/91)

In article <RNMAX3C@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>In article <50097@nigel.ee.udel.edu> new@ee.udel.edu (Darren New) writes:
>> You can also write programs which could not possibly be implemented in
>> C, even by Dan Bernstein :-).  What more proof do you need?

>This is as insane a claim as any of Dan's. If it's implementable in
>principle it's implementable in C. It might take building another
>language on top of C to make it convenient, but it's implementable.

If you look closely at the complete post, you will notice that I never
claimed that all LOTOS programs are implementable.  For example, LOTOS
very easily allows you to use integers and queues and such, and you
must go out of your way to use integers mod 2^32 or bounded queues or
whatever.  Of course, programs which you write based on a LOTOS
specification will not necessarily implement the full specification,
but it is possible to determine where there will be problems.  In
practice, people write things like "every time you get a connection,
either pick a new port or return an error."  The specification says
that it is possible to always pick a new port or always return an
error.  For different and hopefully obvious reasons, neither is
practical.  But the point of LOTOS is not the automatic implementation
from the specification (use Estelle for that :-) but the concise and
formal description of complex reactive behavior.

>> How do you know Ada's validation suite tests all the semantics of Ada?
>> Do you want to build SDI based on "I think so"?

>Is there an alternative? How do you specify hardware and human actions
>based on a formal language specification?

I understand a CPU called "viper" has in some way been formally validated.
Check out ACM architecture-oriented journals.  Also, law-governed systems
can formally specify the ways in which people are allowed to interact
with the system, including the ways in which they are allowed to change
the code and the laws themselves. (The laws are enforced, so it really
is formal.) LGS's are described in various Rutgers University tech reports.

In any case, a formal semantics for Ada would allow you to know that
the validation suite tested all parts of the compiler.  These tests
would not necessarily assure you that the compiler is correct, of course.
Kind of like testing the implementation of a protocol by making sure
that all transitions in the state machine fire at least once and wind
up in the correct final state; without that, you don't even know if
all the code is linked in, let alone correct.
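
A sketch of what I mean by transition coverage (the states, events and
table below are invented, not from any real protocol):

    #include <stdio.h>

    enum state { CLOSED, LISTEN, OPEN, NSTATES };
    enum event { EV_LISTEN, EV_SYN, EV_CLOSE, NEVENTS };

    static const int next[NSTATES][NEVENTS] = {
        /* CLOSED */ { LISTEN, CLOSED, CLOSED },
        /* LISTEN */ { LISTEN, OPEN,   CLOSED },
        /* OPEN   */ { OPEN,   OPEN,   CLOSED },
    };
    static int fired[NSTATES][NEVENTS];     /* coverage record */

    static enum state step(enum state s, enum event e)
    {
        fired[s][e] = 1;
        return next[s][e];
    }

    int main(void)
    {
        enum state s = CLOSED;
        int i, j;

        /* one test run */
        s = step(s, EV_LISTEN);
        s = step(s, EV_SYN);
        s = step(s, EV_CLOSE);

        /* report the transitions this run never exercised */
        for (i = 0; i < NSTATES; i++)
            for (j = 0; j < NEVENTS; j++)
                if (!fired[i][j])
                    printf("transition (%d,%d) never fired\n", i, j);

        return (s == CLOSED) ? 0 : 1;       /* expected final state */
    }

Without the formal machine you don't even have the table to check against.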

              --                 --                --

BTW, in reference to the FORTH-based NOVA chip:
I can't find the reference, but that isn't surprising since most of
my journals are in boxes in another state.  I suspect I read about
it in one of the ACM publications, probably ASPLOS.  Definitely sometime
around 1987 or 1988.                        -- Darren


-- 
--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
----- Network Protocols, Graphics, Programming Languages, FDTs -----
+=+=+ My time is very valuable, but unfortunately only to me +=+=+
+=+=+ When you drive screws with a hammer, nails look better +=+=+

mhcoffin@tolstoy.waterloo.edu (Michael Coffin) (04/11/91)

In article <1726@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>Can anyone who thinks it is useful to "prove" programs correct provide
>me with an explanation of exactly what the proof accomplishes?  What
>do you know after the proof that you didn't know before the proof?
>Before you answer, consider this: the process of deriving a
>mathematical proof is every bit as prone to errors as is the process
>of writing a computer program.

If I prove a program correct I have a high degree of confidence that
the program will work.  If I both prove it and test it my confidence
is extremely high.  Even then I'm not *certain* that it will work.  I
don't "know" anything after the proof than I knew before, but then I
don't after testing either.  This is especially true of parallel
programs, where nondeterminism makes it notoriously difficult to find
errors by testing.  After testing I suppose I might be said to
"know" at least that the program works on the test input, but even
there I might make a mistake about what the output should be.
Deriving a mathematical proof is error prone, but deriving a proof
*and* a program is certainly less prone to error than just writing a
program. 
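
Here's a tiny sketch of why testing is so weak there (POSIX threads,
compile with -pthread; the constants are arbitrary):

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;              /* shared, unprotected */

    static void *bump(void *arg)
    {
        long i;
        (void)arg;
        for (i = 0; i < 1000000; i++)
            counter++;                    /* read-modify-write race */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;

        pthread_create(&a, NULL, bump, NULL);
        pthread_create(&b, NULL, bump, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        printf("%ld\n", counter);         /* 2000000 only if you're lucky */
        return 0;
    }

It may print 2000000 in every test you ever run and still be wrong; a
proof obligation on the shared counter exposes the race before any run.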

>Also consider that when you have written a program, you can test it to
>get some level of confidence that it works.  When you have written a
>proof, the only way to "test" it is to let others read it and try to
>find problems with it.  (Note that "testing" a proof is not the same
>as "testing" a theorem.  You can test a theorem by looking for
>multiple proofs, and by looking for contradictions.)

Why do you set proofs against testing?  Why not do whichever is
appropriate, or do both?  For code that counts the lines in a file,
testing is fine.  For a distributed program that flies an airplane or
monitors my vital signs, both testing and proof are appropriate.

Also, I don't understand the parenthetical comment.  You *can* test a
proof by independently deriving another. (The resulting proof may or
may not be different.) And you *can* test a proof by looking for
counterexamples.  What are you getting at here?

Michael Coffin				mhcoffin@watmsg.waterloo.edu
Dept. of Computer Science		office: (519) 885-1211
University of Waterloo			home:   (519) 725-5516
Waterloo, Ontario, Canada N2L 3G1

gudeman@cs.arizona.edu (David Gudeman) (04/12/91)

In article  <1991Apr11.161347.26513@watmath.waterloo.edu> Michael Coffin writes:
]In article <1726@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
]>Can anyone who thinks it is useful to "prove" programs correct provide
]>me with an explanation of exactly what the proof accomplishes?  What
]>do you know after the proof that you didn't know before the proof?...
]
]If I prove a program correct I have a high degree of confidence that
]the program will work.  If I both prove it and test it my confidence
]is extremely high.  Even then I'm not *certain* that it will work.

I will accept that, but it isn't really what I was getting at.  What I
meant is this: you have a programming language which is a formal
notation, and you have written a program in that notation that you
believe accomplishes some goal.  You also have a mathematical notation
in which you can prove the "correctness" of the program in the
programming language notation.  Now I have the following questions:

(1) given that you already think the program is correct by the rules
of its own notation, why does it increase your confidence just because
you have another "program" in another notation that says the first one
is correct?

(2) Is the mathematical notation more reliable in some sense?  If so,
how?

(3) Is formalism (proof of correctness) more reliable than experience
(testing)?  If so, how?

I would answer that yes, proofs of correctness do increase the
confidence in a program, but not because of (2) or (3), to which I
would answer "no".  There is another reason that proofs of correctness
are useful that I think Mike knows but hasn't expressed clearly.
Hint: what is the main way to achieve fault tolerance?

]Why do you set proofs against testing?  Why not do whichever is
]appropriate, or do both?  For code that counts the lines in a file,
]testing is fine.  For a distributed program that flies an airplane or
]monitors my vital signs, both testing and proof are appropriate.

Absolutely, I don't think we disagree on this point.  My complaint is
with people who think that proofs of correctness are some form of
magic that give supreme confidence in a piece of software, and that
testing is just a dubious hack.  In fact, (and this is where
philosophy comes in) any confidence we might have in formalism rests
ultimately on experience.  I see no basis to put this sort of
generalized, historical experience at a higher level of reliability
than direct experience.

]Also, I don't understand the parenthetical comment.  You *can* test a
]proof by independently deriving another.

The point is that just because a proof proves something that happens
to be true, that doesn't mean the proof is correct.  The proof can
have errors that act together in such a way to still give the correct
result.  If you independently derive another proof, you have increased
confidence that the _theorem_ is correct, but I don't think it has as
big an effect on showing that the original _proof_ was correct.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

cs450a03@uc780.umd.edu (04/12/91)

>Programming languages, at least imperative languages, are constructed
>from statements, which say "do this, then this, etc.".  They don't
>say "achieve this goal", which is really the purpose of running a
>program.

eh?  imperative programming languages don't say "achieve this goal" ?

huh?

that's flat out contradictory.

I mean, if I say "sort this list" that's the sort of thing that an
imperative language does.  That's an equivalent statement to "make
this list sorted".

Maybe you don't like the levels of abstraction available to you in
some imperative languages?

>  [example of navigating city streets]

It is perfectly valid imperative programming practice to say things
like:

    move such-and-such (distance, direction)
    locate blah-de-blah (landmark)
    move ...
    locate ...

Which it seems to me you are claiming can only be done in proofs.

Raul Rockwell

mhcoffin@tolstoy.waterloo.edu (Michael Coffin) (04/12/91)

In article <1755@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:

>(1) given that you already think the program is correct by the rules
>of its own notation, why does it increase your confidence just because
>you have another "program" in another notation that says the first one
>is correct?
>
>(2) Is the mathematical notation more reliable in some sense?  If so,
>how?
>
>(3) Is formalism (proof of correctness) more reliable than experience
>(testing)?  If so, how?
>
>I would answer that yes, proofs of correctness do increase the
>confidence in a program, but not because of (2) or (3), to which I
>would answer "no".  There is another reason that proofs of correctness
>are useful that I think Mike knows but hasn't expressed clearly.
>Hint: what is the main way to achieve fault tolerance?

I assume you mean that proofs are a just a form of redundancy---a way
to "write the program twice".  There's something to that, but it
isn't the whole story.  Programming languages, at least imperative
languages, are constructed from statements, which say "do this, then
this, etc.".  They don't say "achieve this goal", which is really the
purpose of running a program.  A proof connects these two ideas; it's
a way of verifying that executing a sequence of statements achieves
a particular goal.  

Here's an analogy to illustrate the difference (off the top of my
head, so let's not push this too far).  Suppose I'm telling you how to
get to my house.  A strictly imperative program looks like

    go two blocks north,
    turn left,
    go straight 4.1 miles,
    turn left,
    then right at the second corner.

Proving a program is like adding landmarks to the program and then
showing that the actions get from one landmark to the next:

    { you are at your house }
    go two blocks north,
    { there's a McDonalds on your right }
    turn left,
    { you are now on Main St. and Limberlost}
    go straight 4.1 miles,
    { You will see a blue house on the corner }
    turn left,
    { You are at the corner of Birch Ave. and Main }
    then right at the second corner.
    { You're in my driveway; look for my grey Mazda }

Besides giving you more confidence while you're driving, the proof has
the property that it abstracts away detail.  If you make it to Birch
Ave. and Main, you're in good shape regardless of the twists and turns
it took to get there.  But more importantly, notice that the original
program doesn't say anything about what each step accomplishes, or
even what the program as a whole accomplishes.  In the original
program, it's difficult to see if "go straight 4.1 miles" is right
without trying the entire route; there's no mention of what "go
straight 4.1 miles" ought to *do*. That's all informally
(mis)understood.  In the proven program, that statement can be checked
in isolation: it ought to get you from Main and Limberlost to a corner
with a blue house.
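
In program terms the landmarks are just assertions, which is what lets
each leg be checked in isolation.  A sketch (the route functions are
invented, and blocks stand in for miles):

    #include <assert.h>

    struct pos { int x, y; };          /* blocks east, blocks north */

    static struct pos go_north(struct pos p, int blocks)
    {
        p.y += blocks;
        return p;
    }

    static struct pos go_east(struct pos p, int blocks)
    {
        p.x += blocks;
        return p;
    }

    int main(void)
    {
        struct pos p = { 0, 0 };            /* { you are at your house }    */

        p = go_north(p, 2);
        assert(p.x == 0 && p.y == 2);       /* { McDonalds on your right }  */

        p = go_east(p, 41);                 /* "go straight 4.1 miles"      */
        assert(p.x == 41 && p.y == 2);      /* { blue house on the corner } */

        return 0;
    }

Each assert can be verified by looking only at the step just before it.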

As far as your last question goes, I don't know if proofs are more
reliable than testing for sequential programs.  For concurrent
programs, where nondeterminism can cause a program to run for weeks
and then suddenly fail, I would guess that proofs are far more
reliable. 

Michael Coffin				mhcoffin@watmsg.waterloo.edu
Dept. of Computer Science		office: (519) 885-1211
University of Waterloo			home:   (519) 725-5516
Waterloo, Ontario, Canada N2L 3G1

new@ee.udel.edu (Darren New) (04/12/91)

In article <1755@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>(1) given that you already think the program is correct by the rules
>of its own notation, why does it increase your confidence just because
>you have another "program" in another notation that says the first one
>is correct?

If by "correct" you mean "meets the specification" then the ability to
show that the behavior of the program is related to the behavior
of the specification in a testing-equivalence sense can be very helpful.

For example, your customer gives you the following requirements:

1) a phone will not ring if it is off-hook.
2) a phone will ring if it is on-hook and its number is dialed.
3) a non-ringing phone will issue a dialtone if off-hook and no
   digits have been dialed.
4) a phone which has had 7 digits dialed will emit either a ring signal
   or a busy signal.
5) ring signals will never be associated with phones which are not ringing.
 
 etc etc etc

Clearly, this is the kind of specification a customer wants to give.
Such a specification, translated into (say) LOTOS, could be used to
find unspecified behavior. For example, the above does not specify that
busy signals won't be sent to phones which are on-hook. Now you can go
to the customer and say "here, formally, is what I think you want.
It's missing this information. Am I correct in my interpretation?"
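
You don't need LOTOS to see how mechanical that check is.  Here is a toy
C sketch of the same idea (the states, signals and the specified()
predicate are my own inventions, cut down from the rules above):

    #include <stdio.h>

    enum hook   { ON_HOOK, OFF_HOOK };
    enum signal { RING, DIALTONE, BUSY };

    static const char *hook_name[]   = { "on-hook", "off-hook" };
    static const char *signal_name[] = { "ring", "dialtone", "busy" };

    /* 1 if some requirement says whether this signal may go to a phone
       in this state, 0 if the requirements are silent about it */
    static int specified(int h, int s)
    {
        if (s == RING)                      return 1;   /* rules 1, 2, 5 */
        if (s == DIALTONE && h == OFF_HOOK) return 1;   /* rule 3        */
        if (s == BUSY && h == OFF_HOOK)     return 1;   /* rule 4        */
        return 0;
    }

    int main(void)
    {
        int h, s;

        for (h = ON_HOOK; h <= OFF_HOOK; h++)
            for (s = RING; s <= BUSY; s++)
                if (!specified(h, s))
                    printf("unspecified: %s signal to an %s phone\n",
                           signal_name[s], hook_name[h]);
        return 0;
    }

The real formalism does the same enumeration over a much bigger state
space, and does it exhaustively rather than by hand.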

Once the specification is complete and correct in LOTOS, it is possible
to implement a program in LOTOS. This program, due to circumstances
which I won't explain here, should be written in a style different from
the specification; for example, the specification may be
constraint-based and the "implementation" may be state-machine based.
However, by using the same formalism and the rules of equivalence, it
is possible to show that the state-based version presents the same
external behavior as the constraint-based version.  Now we have an
implementable specification for a phone switch which we know meets the
customer's specifications, which are unambiguous because we made sure
they were unambiguous formally.

Once implemented, if you really, really, really, need assurance that it
is correct, you need to prove that the compiler generated the correct
code, that the CPU correctly interprets the machine code, and so on.
Neither testing nor formal methods will show you that the hardware
is reliable; they only reduce the probability that it is wrong. Both are
appropriate at different levels.

If you want an "informal" language with formal semantics (say, Turing,
which gives formal semantics for a language which looks like an
otherwise pretty standard imperative language), the formal semantics
allow precise communication between the compiler-writer and the
compiler-user. You can analyze a program and say "if the compiler
behaves as specified, my code can never index off the end of this
array." Alternately, you can turn on runtime checking and test it a few
times to see if it crashes. The difference is: what are you testing?
The compiler, or your code?  I think that all formalisms are purely for
human-to-human communication.  There is no way to formally prove that a
formalism is isomorphic with an informal phenomenon.  The advantage of
using a formalism to communicate is that it provides modularity of
understanding:  1) do you understand the formalism? 2) do you
understand the specification I have written in that formalism? 3) Does
your understanding of what you want match the specification I have
written? 4) Does the implementation behavior match the specification I
have written? By using the right level of detail/abstraction/whatever
at each level, interpersonal communication can proceed quite quickly
and accurately.  Only step (4) really needs informal testing; I can't
imagine how you could "test" a specification of what is wanted.

One of the grad students here managed the implementation of a network
protocol with some sophisticated bandwidth-reduction mechanisms.  He
learned Estelle (a formalism for specifying protocols) about half-way
through the project.  He says that if he knew Estelle at the start, he
could have cut development time in half, not because Estelle is so
great, but because it is easier to communicate precise and accurate
protocol details in Estelle than in English. Formalisms are just
jargon; if you know them, they really speed up communication, but they
don't mean that what you said is right.

-- 
--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
----- Network Protocols, Graphics, Programming Languages, FDTs -----
+=+=+ My time is very valuable, but unfortunately only to me +=+=+

mrs@netcom.COM (Morgan Schweers) (04/12/91)

Some time ago schwartz@groucho.cs.psu.edu (Scott Schwartz) happily mumbled: 
>
>Morgan Schweers rants:
>| This is because you see everything in terms of 'theory' and not
>| 'practice'.
>
>Not a bit.  I'm not a "theory weenie".  On the other hand, part of the
>fun of writing programs is that you get tangible benefits from
>precision.

    Yes, I was ranting.  I *hope* I didn't use the phrase 'theory weenie'
in my text, though I admit that I was thinking it.  <Grin>  Perhaps I
underused (can you say, none-at-all?) smileys, or some-such.  I really
did mean most of what I said, but I didn't mean it as a flame.
    Tangible benefits from precision, eh?  I tend to see people putting
too much emphasis on 'precision' and not enough on 'does it work?'

>
>| I don't need a language to be formally accepted, or formally tested,
>| to write code in it.  I do the testing on my program.  If the
>| compiler breaks it, I'll worry about it.  If the compiler doesn't
>| break my code, whassamatter with it?
>
>How can you tell if it is a compiler bug, or a bug in your code, or a
>bug in your understanding?  Someone has to specify what the language
>is supposed to do.  It seems to me that if you can build software
>tools to help implement the language you will get a better result.

    I recently had a chance to observe JUST THIS SYMPTOM.  The Microsoft
C(6.0) compiler has an optimization bug that broke my code.  The program
got tested (I don't *LIKE* the way it was tested, but that's my boss's
decision) and we found that it failed under certain circumstances.  I
fixed the problem (fixed the optimization) and went on with life.
    Is this a compiler bug?  Of certainty!  Is this something which would
make me consider changing languages?  Certainly not!  Would it even make
me consider changing compilers?  It's not in my hands to do that.  Would
I have the same problem under other optimizers?  Maybe, maybe not.  Would
I have the same problem under LOTOS, if it were compilable?  Maybe, maybe
not.  You can brag about 'bug-checking' in the source files all you want,
but the executables are where it matters.
    If your optimizer has the same 'features' as the Microsoft C compiler,
then it'll still break my code.  It's the object code that does the work,
and that's where everything matters.  You can 'formalize' all you want,
but the end result is whatever the compiler does.

>
>| Yes, but can you get any *WORK* done in it in a reasonable amount of time?
>
>I type "make".  How do you do it?

    This is a statement just BEGGING for a flame...
#ifdef FLAME
    Really?  'make' writes all your code for you, automatically choosing
the correct language, for the problem that you've envisioned in your
mind?  It then proceeds to 'prove' the programs correctness within
the parameters not supplied to it?  It then does beta-testing on the
program, as well as speed and accuracy tests?
    D'you mind if I borrow your 'make' program, I'd *REALLY* like a
copy of it.  As I said, if it'll improve my productivity, no
problem!
    Goddamn dweeb.  (Note:  Obligatory flaming ad hominem attack.)
#endif

>
>| How powerful is it? 
>
>A flip answer is, "More powerful than what you are using now".  A
>serious answer notes that your question has nothing to do with whether
>or not the language is formally defined.
>
>Suppose someone writes denotational semantics for C, and the ANSI
>committee stamps it with their seal of approval.  Will you suddenly
>decide you hate C?

    Ah, but this wasn't the original point.  The original point was a
person (yourself?) discussing the advantages of a language in which it
could (supposedly) be proven that a program would work as designed.
    If the ANSI committee stripped C of its functionality in the
process of making it 'formalized' I'd simply not use an ANSI compiling
C compiler.
    Standards exist to make things work between different platforms.  I
understand that this statement is going to be met with *FLAMES*, BUT
the following holds true for my particular field.  IF you are doing
bit level systems programming, your code ain't likely to be very
portable.  It's also *NECESSARY* to be fast and compact.
    I understand the need for portable programming.  It's a great thing,
and I've made excellent use of it in the past (porting my favorite
utilities to various platforms.)  However, no language (IMHO) can be
fully formalized and be fully useful on all platforms.  The fact is that
each individual platform has certain features that you can take advantage
of that will boost your productivity and your code's power.
    Yes, you should create a separate module for system-specific stuff,
*BUT* the point is that the language has to allow it.  If a language is
fully formalized (in the manner I've noted) then it's not likely (note,
I said not *LIKELY*) to be able to encompass new features on new hardware.
    If it can't allow for system specific modules, then it's going to be
a rapidly dead language.  If it does allow for system specific modules,
it's got to understand that it's going to be different from system to
system.  And after all, how *DO* you formalize what the input value from
port 70h is?
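
    For the non-PC folks, port 70h/71h is the CMOS index/data pair.  Here is
roughly what the system-specific module ends up looking like -- a sketch
only; port_in() and port_out() stand for whatever your compiler or OS gives
you for port I/O, they are not real library calls:

    /* cmos.c -- the system-specific module; nothing above this layer
       needs to know that port 70h exists at all                      */

    extern unsigned char port_in(unsigned short port);            /* made up */
    extern void          port_out(unsigned short port, unsigned char value);

    unsigned char cmos_read(unsigned char reg)
    {
        port_out(0x70, reg);       /* select a CMOS register               */
        return port_in(0x71);      /* read whatever the hardware has there */
    }

You can pin down the *interface* formally; the value itself is whatever the
hardware feels like handing you, which is exactly why it belongs in its own
little box.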

>
>| Also, to me, a formal specification consists of a grammar, and a
>| description of the basic keywords.  Period.  At that point, the language is
>| defined in terms of what I can use it for.  What do you mean by a 'formal
>| specification'?  Don't give me math crap.  I'm a programmer, not a math
>| junkie.
>
>Great... just when hostile usenet junkies get tired of accusing Dan
>Bernstein of being a computer scientist, they start accusing me of
>being a mathematician.
>
>You realize that a BNF grammar is a formal specification of syntax,
>right?  (Or are you opposed to parser generators too?)  What objection
>do you have to an analogous formal description of semantics?  Why
>shouldn't the C compiler implementor be able to directly code the fact
>that type widening is unsigned preserving (or is it value
>preserving?), or that you are allowed to index but not dereference one
>element past the end of an array?

     Yes, I realize that a BNF grammar <etc>, but I also *feel* that
the grammar of a language should be variable by implementation.  With
a fully 'formalized' language, this isn't (to the best of my knowledge)
true.  It's also worth noting that you *did* mention a compiler
IMPLEMENTOR making these choices.  That's up to the implementor, and
I'll live with their changes.  What you (or whoever) were advocating
was that it be in the formal spec.  I also hold the opinion that the
programmer should be able to override the above-mentioned options and
make them either warnings, or totally ignored.  If not that, then
give me an implementation where it is possible to get AROUND those
limitations.

>
>| *IF* you can make me more productive, I'd look at what you were
>| talking about. 
>
>The first step is for you to sit down and drink some orange juice and
>watch some tv.  Don't read usenet until you feel better. :-)
>
>Next time you run "lint" think about the kinds of things it is
>checking.  Then think about why not all compilers come with things
>like lint.  Then think about why.

    <Grin>  Thanks.  I've got some CaffeineCola in front of me, and
a bag of chips.  I'm feelin' fine...  I don't watch TV, but I do have
Kraftwerk blaring in my ears.  I understand your meaning, but *DESPITE*
the sound of my article, I really don't mean to flame.

    In re: lint...  Some compilers don't come with 'lint' because they
chop the programmer's head off in the name of 'pretty programming'.
I like 'lint' and similar programs, because I can feel free to ignore
their results.  The languages which *don't* include them tend to FORCE
me to follow their idea of good programming.

    This entire bag of...chips...is really a religious war, in
disguise.  (A *THIN* disguise.)  I wasn't really aware of that
when I posted my message, but I am now.  <Sigh>  My entire set of
experiences with programming has been mainly to get the most
ability out of the equipment I'm on.  I often resort to assembly
language to do it, so I have a certain callous attitude towards
the idea that a language with a lot of constraints can fix the
world's problems.  To bastardize HEAVILY a standard quote,
"When the main tool one has used is a swiss-army knife, one tends to
look down on people who claim that a single screwdriver could do it
better."

    I've used Lisp, Snobol, Fortran and many other languages that
constrain me.  I'm very guilty of using the best tool for the job,
I suppose.  It's an unforgivable trait, I've noticed.  However, the
*IDEA* of programming in a language that was *DESIGNED* to constrain
the programmer is insane.  (Note, I've also used Pascal, but not for
anything serious, thank you.)

    This is a *HELL* of a lot longer than I had intended it to be, but
here it is.  Flamage is in the eye of the beholder.  I admit, now, to
being a poor speller, a poor writer, and any other grammar flames you
might have.  I also admit to being a short-sighted, ignorant, clueless
programmer if it'll make you feel better.  (*I* don't have to believe
it, but if it'll shut the people who are likely to flame me up, I
don't care.)

    OBTW, if my facts are out of order feel free to correct me.  I
read this group often, and am always willing to learn.  (Despite
whatever you may think of my postings.)

                                              --  Morgan Schweers
+--
    The people at my place of employment wouldn't understand this
discussion, except for one.  He's the other programmer.  He's not
around, so I can't ask his opinion.  So there.  --  mrs@netcom.com
--+

mrs@netcom.COM (Morgan Schweers) (04/12/91)

Some time ago dc@mgt3.sci.com (D. C. Sessions) happily mumbled: 
>
>  The same old arguments are always dragged out to justify bad workmanship, 
>  whether in software, hardware, or carpentry.  This time around, we're 
>  hearing them from Morgan Schweers.  Nothing personal, but they set of 
>  some major BS alarms.
>
    Acceptable.  I don't consider low-level work to be bad workmanship,
while you clearly do.  The major difference between what you are talking
about and what I am talking about is levels of ABSOLUTEs.  If I have an
implementation of a compiler that won't allow me to use a feature of a
system, then that compiler is (by my definition) broke.  If there is a
language which has features to 'prove' that the program does what it's
supposed to do, then the odds are that that language can't allow system-
dependent features.  Am I wrong?  Tell me why.


>In article <1991Apr8.080931.23209@netcom.COM> mrs@netcom.COM (Morgan Schweers) writes:
># In article <p79Gyx1j1@cs.psu.edu> schwartz@groucho.cs.psu.edu (Scott Schwartz) writes:
>
># >It worries me.  It also worries me that you aren't worried.
># >
>#     This is because you see everything in terms of 'theory' and not
># 'practice'.  (Inferred from your various comments.)  I'm not a theory person
># either, so it doesn't really worry me either.  The point is that if I can get
># something to *WORK* then to hell with all the theory in the world.
>
>  I routinely get the same argument when I show someone a design violation 
>  in a circuit design:  "Hey, it works!  Who cares about some theoretical 
>  problem?"
>
>  Of course, I can't *prove* that a transient error was due to substrate 
>  injection resulting from pin overdrive.  Since I can't prove it, let's 
>  forget the whole thing, right?
>
    Can you reproduce it?  Can you demonstrate it?  If you can, then you can
trace it.  This is true in software, but I don't know about hardware, having
never worked down at the hardware level.

>#     I don't need a language to be formally accepted, or formally tested, to
>#            <etc>
>
>  Translation: "It worked; ship it!"
>
    Interesting translation.  I'd like to know what language you were
translating.  It's my opinion that a product should work in ALL cases
before you ship it.  It's also my opinion that this cannot be proven on
a systems programming level, since the variables are far too complex.
(As an example, we have one program which can't be used in a certain mode
with certain other programs, because they use memory in an odd way.  Could
this have been proven to happen?  Heck no.  Could a 'formal' language prove
that we can successfully use the disk to swap in portions of our code during
runtime?  Heck no.  Am I wrong about this?  Someone feel free to tell me of
a method that will tell me (just from looking at my source) whether, if I'm running
<program X> in memory and another program swaps in to do some system checking,
there will be a memory conflict between the two.  Assume, as is the normal
case, that I don't have the source code to program X.  Do you even think that
there is a language in which this *IS* possible?  IMO, hell no.)

>#  <Can a systems-programming language be compatible across platforms>
>
>  Is this one-downmanship?  I'm one step down from a systems programmer: I
>  design the stinking HARDWARE your stuff runs on.  Why don't you REALLY
>  get down and dink with the *voltages* like I do?  Bits are for wimps. 

   <Grin!>  I respect hardware people as much as I disrespect <sigh>
theory weenies.  (I really did use that word, didn't I...  Oh well.)
Theory is, IMHO, for people who want to be paid for not doing real work.
If you theorize for weeks, and come up with a new language for a problem
(but not an implementation of it, oh no!) that could be solved in ten
minutes with an existing language, then you are (by my, admittedly
small-minded definition) not getting any useful work done.

    IF, on the other hand, you design and IMPLEMENT a new language that has
applications beyond one problem, is useful, allows low level access across
a multitude of platforms, can be optimized reasonably at the executable level,
and doesn't impose heavy burdens on the user, *THEN* you're getting some
use out of your theory.  The theory-folk (I'll try to avoid 'weenies', okay?)
I've met in the past (and the 'software engineers') tend to REALLY hate
getting their fingers dirty.  That's why I don't have much respect for them,
I'm afraid.  It's also why I have a lot of respect for hardware engineers,
especially on the bit-level.  They get their hands dirty constantly.
They *WORK*.
   
>  The same arguments are used universally to justify spending a lot of time 
>  on the easy problems of micro-optimizing the 5% instead of making sure 
>  the 95% works reliably.

    What arguments of mine supported this?  A formal language won't allow
either.  (An optimization of the time-consuming 5%, *OR* a real-world
test of reliability.)  So what's your point?

>#
># <To start programming in a language all I need is a grammar and a
>#  explanation of the keywords>
>#
>
>  I'm glad you know from the grammar and keywords what would happen if you 
>  executed 
>
>    c = f( c, c = g( c ) );
>
>  but frankly, I'd like to know that the results wouldn't change with the 
>  compiler version.
>
    Really?  Pick any language that allows you to do system level work, and
I'll show you a language that returns different values on different systems.
It's up to the programmer to know what's different between a language on a
bitty-box and a language on a large system.  *UNLESS* the programmer is
working in a field where the system-level information isn't important.
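
As a minimal C sketch of why that quoted expression is a trap (f and g
here are made-up placeholders, not from any of the posts above): the
order in which a function's arguments are evaluated is unspecified in C,
and c is both modified and read within the argument list, so different
compilers may legitimately produce different results.

    #include <stdio.h>

    static int g(int x) { return x + 1; }
    static int f(int a, int b) { return 10 * a + b; }

    int main(void)
    {
        int c = 1;
        /* Argument evaluation order is unspecified, and c is both
           assigned and read here without an intervening sequence
           point, so the result is not portable.  In practice one
           compiler may print 12 and another 22. */
        c = f(c, c = g(c));
        printf("%d\n", c);
        return 0;
    }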

>#     *IF* you can make me more productive, I'd look at what you were talking
># about.  The important point here is that *I KNOW WHAT I'M DOING*
>
>  That's funny; so did the carpenter who cut before he measured.  I'm glad 
>  that there are at least two infallible humans in the world.  As for the 
>  rest of us, we make occasional mistakes and some even appreciate having 
>  those mistakes caught early in the cycle rather than late.
>
    Never claimed to be perfect, never will.  Just don't be stupid, and claim
that formally defined languages are perfect.  You'll be wrong.  There *ARE*
tradeoffs.

>
>  Sounds like Goedel's Theorem.  But I forgot; that's for math weenies.

    Yeah, it does, doesn't it.  Well, it is similar, in fact.  Attempts to
'formalize' a sufficiently powerful mathematical system are what yielded
Godel's theorem: such a system cannot be both fully powerful and completely
formalized.  To flame for a second, "Get a clue from Godel's Theorem,
theory-people!  You cannot completely formalize a language without having
holes!"

    Thanks, I hadn't thought about Godel's theorem the first time through.

                                                      --  Morgan Schweers
+--
   One other person where I work would understand this.  I'll recommend
him to this group, and see what he thinks.  Everyone else is Non-Computer
People.                                               --  mrs@netcom.com
--+

sakkinen@jyu.fi (Markku Sakkinen) (04/12/91)

In article <1726@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>In article  <969@mgt3.sci.UUCP> D. C. Sessions writes:
> ...
>]  We used to hear this argument from radio hobbyists who would grab the old 
>]  breadboard, a handful of tubes, and a soldering iron and DISCOVER a 
>]  circuit.  I trust it as much as I trust a bridge which was built by 
>]  sticking together available beams however the designer thought they fit 
>]  well.
>
>How is direct experimentation with circuit C less reliable than
>(1) experimentation with general circuits followed by (2) theorizing
>about the nature of circuits followed by (3) inferences about the
>circuit C based on the assumption that the theory is correct?

And how is something like Newton's law of gravitation more reliable
and useful than experimenting with each pertinent object separately?
"That ball fell onto the ground when I left hold of it ...
by golly, this glass bowl does the same!"

>Can anyone who thinks it is useful to "prove" programs correct provide
>me with an explanation of exactly what the proof accomplishes?  What
>do you know after the proof that you didn't know before the proof?
> ...
>Finally, consider the relationship of the proof to the real world,
>what assumptions are essential to this relationship, and whether those
>assumptions are more reliable than the reliability that you can get by
>direct testing.  [...]

Suppose someone says he has written a procedure that takes two
integers and returns their sum as result.  Would you really prefer
to test it (exhaustively:  in a 32-bit computer, 2**64 additions
take a lot of time;  quickly:  add (1, 1) returns 2, add (1, 2)
returns 3, I guess this really is an addition procedure)
to a "proof":  reasoning about the source code?

If the Fortran source read like this:
      FUNCTION ADD (I, J)
      ADD = I + J
      RETURN
      END
I would have very high confidence that the function is all right.
On the other hand, if there were additionally a statement like
      IF (DAY(TODAY) .EQ. 13) ADD = ADD - 3
I would promptly "prove" that the code will not do what was promised.
Even exhaustive testing would not discover that unless one should
happen to test on the 13th of some month.

Many authors through the decades (I don't know who was the first)
have pointed out in the literature that testing computers and software
is tremendously more difficult and less conclusive than doing physical
experiments and measurements, because software and digital hardware
can have utterly discontinuous behaviour.

I may interpret you incorrectly, but here
you seem to have an attitude similar to James H. Fetzer,
who in his article "Program verification: the very idea" (CACM,
September 1988) stubbornly refused to view source code as an entity
distinct from its execution on a concrete computer.

Markku Sakkinen
Department of Computer Science and Information Systems
University of Jyvaskyla (a's with umlauts)
PL 35
SF-40351 Jyvaskyla (umlauts again)
Finland
          SAKKINEN@FINJYU.bitnet (alternative network address)

mrs@netcom.COM (Morgan Schweers) (04/12/91)

Some time ago new@ee.udel.edu (Darren New) happily mumbled: 
>In article <1991Apr8.080931.23209@netcom.COM> mrs@netcom.COM (Morgan Schweers) writes:
>
>>    I don't need a language to be formally accepted, or formally tested, to
>>write code in it.  I do the testing on my program.  If the compiler breaks it,
>>I'll worry about it.  If the compiler doesn't break my code, whassamatter with
>>it?
>           [Very good arguments about playing 'fast and loose']
>
    I agree.  However, there are (IMO) circumstances in which you *MUST*
have the capability to play 'fast and loose.'  If you think this isn't
true, then you're just not doing the same jobs as I am.

>>     I'm a systems programmer, so decide before you respond, if your language
>>(probably supposedly universal across all implementations, if I know you theory
>>weenies) will do the job for what I consider real work.
>
>No need to insult formal languages or their users just because they
>don't do what *you* want, especially when you later admit you have no
>clue what you are talking about.  It's like saying "before you tell me
>that Ryder is better than U-Haul, understand that I drive race cars for
>a living."  You can't draw conclusions on which truck is better for
>moving on such a basis.
>
    You're right.  No argument.  *IF* however, someone claims that
I should drive a truck on the racetrack because it's got a better
safety record than my racecar, I'll laugh in their face.  I think
you would too, in the same circumstances.

>              However, most of us don't program where every cycle counts
>(especially when it's not worthwhile after you system programmer clowns
>destroy the efficiency with hack coding :-) and do program where continued
>correct execution is important.
    <Grin>  Hack coding improves the efficiency.  I never claimed that it
was PRETTY, but it improves the efficiency.  If it didn't, it wouldn't be
hack coding.  ;-)

>
>>    Get serious.  If you come up with a language that is guaranteed to be bug-
>>proof, I'll show you a language with a BEGIN and an END which can be inter-
>>changed, and NOTHING ELSE.
>
>What do you mean by "bug-proof"?  You mean a language where no programs
>have bugs, or a language where all bugs are caught?  Normally, a formal
>language defines the exact effects of any syntactically valid program.
>Of course, if what you tell it to do is not what you want it to do, then
>you are not going to get what you want.

    And if the input is undefined, and you have to adjust to circumstances?
What if you're writing a neural network program?  Will the 'formal language'
tell you all possible exact effects?  How about an OCR program?  I call
bullshit on that.  For a limited set of inputs, in a controlled environment,
it's possible.  For real-world problems and real-world systems, I say it's
not possible.  See my other responses for why.

>
>>    Show me one that is supposedly USEFUL, and I'll *WRITE* a bug in it! ;-)
>
>Take a look at LOTOS.  It's completely mathematically defined.  It's
>not even difficult to understand.  It is very powerful, allowing
>definitions of any arbitrary data types that are no more infinite than
>integers.  It also handles complex parallel evaluation.  Is it useful?
>Very.  Is it compilable?  Not yet, that I know of.  Then what good is
        ^^^^^^^^^^^^^^^^
>it?
    Really?  It's not compilable, eh?  Sounds real similar to BNF to me.
I accept that these are useful.  You can't *PROGRAM* in them, but they
are useful as references.  The point is that you have to write it in a
compilable language at some point.  Your original can be a flowchart,
a BNF, a LOTOS program, or scribblings on a piece of toilet paper, but
your code has to go through a real compiler.  That's where the work gets
done.  By the programmer who does the implementation.  He can't be
programming in a language that doesn't run, for obvious reasons.
    He can define the algorithms however he wants, as long as the
result is what comes off the spec.  I understand this, as does (I
would imagine) all other programmers doing Real Work(tm).  It's the
concept that I should program in a 'formalized' language that makes
me laugh.  It's the idea that I should be worried that the language
ISN'T formally provable that makes me laugh.

>
>>If you have a language
>>that makes it possible to 'prove' the program correct, then you have a language
>>which is probably putting too many restrictions on the programmer.
>
>LOTOS is quite a bit more flexible that any compilable language I've seen.
>At some point, you have to make your program work on more than one
>configuration.  You also have to know that it meets the customer's
>specifications.  How do you do that with C?  With LOTOS, you can prove
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^  (You test it.)
>that your functionality is a superset of the customer's specification.
>You can also write programs which could not possibly be implemented in
>C, even by Dan Bernstein :-).  What more proof do you need?

    Who *IS* this Dan Bernstein guy?  You're the second person to
mention him.
    Yeah, 'more flexible than any compilable language', etc.  Sheesh.
Do you ever get any work done?  You write the specs for the
customer, no problem.  I'll take it and make it work.  Right?  Right.

>
>>    Also, to me, a formal specification consists of a grammar, and a
>>description of the basic keywords.  Period.  
>
>Then you have never seen a *real* formal specification and you
>shouldn't be claiming they are useless, since you've obviously never
>tried to use one.  :-)
    <Grin>  So enlighten me, mortal.

>
>>Don't give me math crap.  I'm a programmer, not a math >junkie.
>
>I don't think you need to be a "math junkie" to understand LOTOS, but
>you can't have a formal definition without math.  LOTOS's definition
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  (Why?)
>is really very simple.  Much simpler than C.  About as simple as FORTH
>or LISP.

>
>     [Good stuff about big projects, and such.]
    You're right, to an extent.  However, some projects are never
'finished' and cannot have a formal spec at the beginning because it
*MUST* change by the end, and would hamper more than help.  You've
probably never worked on a project like that, so we really are talking
from two different points of view.

>    [More good points about 'faithful' as a concept.]
    Too true.  I meant 'safe' as 'faithful' I suppose.  Consider that
much of the code involved in the sort of work that I (and many other
programmers) do would fall under the heading of 'undefined' results.
(More specifically, platform-defined, or situation-defined.)  If the
sole purpose for the language is to have a language whose results
can be checked, as long as they are defined, then it makes no sense to
use it for things that will normally be site-defined.  Does that make
sense?  Why use an inappropriate tool?

>
>How do you know Ada's validation suite tests all the semantics of Ada?
>Do you want to build SDI based on "I think so"?
>
     For the Real Time mode of something like SDI, I wouldn't trust
Ada as far as I could throw Congress.  You're talking about
'Catastrophic Failure' if speed-deadlines aren't met.  Catastrophic
failure in a Big Way!  ( ;-) )

     Netcom is a service from which people 'lease' accounts.  Network
Communications, obviously.  It's a service.  My company is a small
software firm, so some of your comments don't apply.  Of course, SINCE
it's a small company, I'm sure many of my comments don't apply to
you, wherever you work.

>--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
>----- Network Protocols, Graphics, Programming Languages, FDTs -----
>  +=+=+ My time is very valuable, but unfortunately only to me +=+=+
>+ When you drive screws with a hammer, screwdrivers are unrecognisable +

                                                 --  Morgan Schweers

P.S.   This has gotten *WAY* out of hand.  We're talking about
    messages 200+ lines in length, constantly.  I love the discussion,
    and am waiting for someone to define Formal Languages, but it's
    getting a bit long.  Will anyone settle for a pathetic, "We have
    different opinions, because we do different jobs?" or is this
    determined to be a continuing thread?

+--
  "When you've used a swiss-army-knife for years, it's hard to imagine
working with only a spoon."                      --  mrs@netcom.com
--+

mhcoffin@tolstoy.waterloo.edu (Michael Coffin) (04/12/91)

In article <11APR91.19210188@uc780.umd.edu> cs450a03@uc780.umd.edu writes:
>eh?  imperative programming languages don't say "achieve this goal" ?

No.  Imperative languages specify a series of actions, not the results
of those actions.  Some imperative languages have assertions, which
look as though they specify results, but they're interpreted as
run-time checks: if this isn't true, print an error message or raise
an exception.

>I mean, if I say "sort this list" that's the sort of thing that an
>imperative language does.  That's an equivalent statement to "make
>this list sorted".

But neither is equivalent to "this list IS sorted", which is what
proofs allow you to say, and which is the way the specification of the
sorting program is probably phrased.  The closest you can get in an
imperative language is saying "Check whether the list is sorted; if it
isn't, abort the program or raise an exception".
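
A minimal C sketch of that last point, with hypothetical helpers
(is_sorted and sort_and_check are illustrative names, not from the
posts): the strongest thing the imperative program can do is check the
property at run time and abort if it does not hold for this input.

    #include <assert.h>
    #include <stdlib.h>

    static int cmp_int(const void *p, const void *q)
    {
        int a = *(const int *)p, b = *(const int *)q;
        return (a > b) - (a < b);
    }

    static int is_sorted(const int *a, size_t n)
    {
        for (size_t i = 1; i < n; i++)
            if (a[i - 1] > a[i])
                return 0;
        return 1;
    }

    void sort_and_check(int *a, size_t n)
    {
        qsort(a, n, sizeof *a, cmp_int);
        /* A run-time check, not a proof: a failure is only detected
           on the particular inputs we happen to run. */
        assert(is_sorted(a, n));
    }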

>Maybe you don't like the levels of abstraction available to you in
>some imperative languages?

No, I do like them, but they don't provide the same things as do
proofs.  Encapsulating in a procedure named "sort" all the swaps that
accomplish sorting just pushes the problem down a level.  The question
remains: how do I know that after calling "sort(x)" the array x is
sorted?  Well, I can either test the routine "sort", or I can prove
that after calling it the array will be sorted.  Which to do is what
the discussion is all about.

Michael Coffin				mhcoffin@watmsg.waterloo.edu
Dept. of Computer Science		office: (519) 885-1211
University of Waterloo			home:   (519) 725-5516
Waterloo, Ontario, Canada N2L 3G1

mathew@mantis.co.uk (mathew) (04/12/91)

new@ee.udel.edu (Darren New) writes:
> In article <RNMAX3C@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Sil
> >                         How do you specify hardware and human actions
> >based on a formal language specification?
> 
> I understand a CPU called "viper" has in some way been formally validated.
> Check out ACM architecture-oriented journals.

This is correct. The Viper CPU is a relatively simple CPU design; it was
necessary to keep it so in order to make it validatable. The formal proof
system for hardware is based around the language ML; you specify the actions
of various elements of the design, and the system uses inference rules to
construct a formal proof that the design is correct (or incorrect :-)

There are still a number of problems with proving hardware correctness. For
example, it is difficult to codify the behaviour of transistors accurately.
Most transistor models either make too many assumptions to be generally
applicable, or are too complicated to be tractable using automated proof
methods.

It is also the case that the proof is only as good as the information put in
at the start. If your description of the hardware does not match the actual chip
layout, your proof is useless.

In spite of this, formal verification of hardware is useful, and research
continues. If people are interested I could dig out some examples of hardware
verification. I think they are working on Viper II, which will be a slightly
more useful chip (i.e. one which you might actually want to build into
something more complicated than a toaster).

> In any case, a formal semantics for Ada would allow you to know that
> the validation suite tested all parts of the compiler.  These tests
> would not necessarily assure you that the compiler is correct, of course.

The problem is that most powerful and expressive languages have complicated
and messy formal semantics. As soon as you start allowing pointers, goto
statements, side-effects like x = y++ and the like, your semantics become
horribly complicated or even impossible to formulate.

I'm personally unconvinced by denotational semantics; it seems to be a way of
spraying some maths over the problem to make it look like a formal proof of
something, and I can't see that big an advantage over operational semantics.
Still, it wasn't my favourite course by any means, and I may have been
misled.

Certainly operational semantics is useful as a way of expressing what a
particular bit of code does in a much less ambiguous and more tractable way
than is possible via an English-language description.

> BTW, in reference to the FORTH-based NOVA chip:
> I can't find the reference, but that isn't suprising since most of
> my journals are in boxes in another state.  I suspect I read about
> it in one of the ACM publications, probably ASPLOS.  Definitely sometime
> around 1987 or 1988.

It's called the NOVIX.


mathew

--
If you're a John Foxx fan, please mail me!

peter@ficc.ferranti.com (Peter da Silva) (04/12/91)

In article <50419@nigel.ee.udel.edu> new@ee.udel.edu (Darren New) writes:
> In article <RNMAX3C@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
> >In article <50097@nigel.ee.udel.edu> new@ee.udel.edu (Darren New) writes:
> >> You can also write programs which could not possibly be implemented in
> >> C, even by Dan Bernstein :-).  What more proof do you need?

> >This is as insane a claim as any of Dan's. If it's implementable in
> >principle it's implementable in C. It might take building another
> >language on top of C to make it convenient, but it's implementable.

> If you look closely at the complete post, you will notice that I never
> claimed that all LOTOS programs are implementable.

Well, you said there wasn't currently a compiler available for it. That
doesn't say the same thing as "it's not implementable".

> Of course, programs which you write based on a LOTOS
> specification will not necessarily implement the full specification,

... and may be written in C as well as anything ...

[ formal specifications for SDI ]
> >Is there an alternative? How do you specify hardware and human actions
> >based on a formal language specification?

> I understand a CPU called "viper" has in some way been formally validated.

Fine. How about the rest of the system (up to and including launch vehicles,
ordnance, sensors, etc...)?

> Also, law-governed systems
> can formally specify the ways in which people are allowed to interact
> with the system, including the ways in which they are allowed to change
> the code and the laws themselves.

I see, the formal specification rules out infiltrating the U.S. with small
nuclear devices hidden in Ford Econoline vans?
-- 
Peter da Silva.  `-_-'  peter@ferranti.com
+1 713 274 5180.  'U`  "Have you hugged your wolf today?"

gudeman@cs.arizona.edu (David Gudeman) (04/13/91)

In article  <1991Apr12.075520.25394@jyu.fi> Markku Sakkinen writes:
]In article <1726@optima.cs.arizona.edu> David Gudeman writes:
]>How is direct experimentation with circuit C less reliable than...
                                                    --------

]And how is something like Newton's law of gravitation more reliable
]and useful than experimenting with each pertinent object separately?
     ------

Newton's law is _less_ reliable than direct experimentation.  Dragging
"useful" into it strikes me as a transparent attempt to change my
meaning.  Of course formalism is useful in some circumstances -- did I
mention that I consider myself primarily a theorist?  My view on
formal proofs of correctness is not from a super-pragmatic disdain of
formalism, it comes from contemplating the nature of formalism.

]Suppose someone says he has written a procedure that takes two
]integers and returns their sum as result.  Would you really prefer
]to test it (exhaustively:  in a 32-bit computer, 2**64 additions
]take a lot of time;  quickly:  add (1, 1) returns 2, add (1, 2)
]returns 3, I guess this really is an addition procedure)
]to a "proof":  reasoning about the source code?

I reject both of your methods of test, neither is what I call testing.
Furthermore, it is obviously necessary to reason about the source code
but that is not the same thing as a formal proof of correctness.  A
formal proof of correctness involves a sequence of written expressions
in another language --distinct from the programming language-- that
purports to "prove" that the program meets some specification, also
written in a formal language.  The question is how writing proofs in
this other formal language is better than testing the program
directly.  I claim that it is not.

(I also claim that in general writing specifications in a formal
language is superfluous.  Please note that "in general" is not the same
as "universally".  Specific examples where a formal specification is
useful are not going to influence my opinion.)

]      IF (DAY(TODAY) .EQ. 13) ADD = ADD - 3
]I would promptly "prove" that the code will not do what was promised.

It doesn't take a formal proof in a different language to see the
problem, as shown by the fact that you assume I will see the problem
without such a formal proof.

]Even exhaustive testing would not discover that unless one should
]happen to test on the 13th of some month.

To do testing correctly, you have to take into account the nature of
the code being tested.  If the code has a conditional as above then
you had better test it both on a 13th and on other days or you have
not tested correctly.
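
As a concrete (and entirely hypothetical) C rendering of that testing
point, with the day passed in explicitly so both branches can be
exercised at will; the bug from the quoted Fortran is kept on purpose:

    #include <assert.h>

    static int add_with_bug(int i, int j, int day_of_month)
    {
        int sum = i + j;
        if (day_of_month == 13)
            sum -= 3;            /* violates the promised behaviour */
        return sum;
    }

    int main(void)
    {
        assert(add_with_bug(1, 2, 12) == 3);  /* passes: ordinary day */
        assert(add_with_bug(1, 2, 13) == 3);  /* fails: a test that
                                                 covers the 13th branch
                                                 exposes the bug */
        return 0;
    }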

]Many authors through the decades (I don't know who was the first)
]have pointed out in the literature that testing computers and software
]is tremendously more difficult and less conclusive than doing physical
]experiments and measurements, because software and digital hardware
]can have utterly discontinuous behaviour.

Nonsense.  Testing software is much easier than physical experiments.
You have complete control over the environment and you can guarantee
that the test subject (the program) is identical to the previous
subject.  In physical experiments things wear out, dirt gets into
parts, loose coat sleeves throw things out of alignment...  This
discontinuity in behavior in computers is an _advantage_.  It lets you
ignore history.

]I may interpret you incorrectly, but here
]you seem to have an attitude similar to James H. Fetzer,
]who in his article "Program verification: the very idea" (CACM,
]September 1988) stubbornly refused to view source code as an entity
]distinct from its execution on a concrete computer.

I'll have to look up the article, maybe I'm not the only one who sees
that the emperor has no clothes.

I'm willing to view source code as a distinct entity, I accept the
usefulness of language semantics and even formal proofs of
correctness in some circumstances.  My argument is that these formal
proofs do not give the same level of confidence that testing does.
And further, that anyone who thinks that formalism gives more
confidence than testing has not thought enough about the nature of
formalism.

I have yet to see a defense of formal proofs of correctness that deals
with my philosophical objections.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

cs450a03@uc780.umd.edu (04/13/91)

Michael Coffin >
Me             >|

>|eh?  imperative programming languages don't say "achieve this goal" ?
>No.  Imperative languages specify a series of actions, not the
>results of those actions.

umm.. hmm...

I'm not totally sure I see the difference.

If I have a routine that sorts numbers so that the result is
monotonically increasing, doesn't the closure of that routine with
some numeric list specify the results?

>|...if I say "sort this list" that's [what] an imperative language
>|does.  That's an equivalent statement to "make this list sorted".
>But neither is equivalent to "this list IS sorted", which is what
>proofs allow you to say, and which is the way the specification of
>the sorting program is probably phrased.  The closest you can get in
>an imperative language is saying "Check whether the list is sorted;
>if it isn't, abort the program or raise an exception".

Er.. yes, in that proofs generally are sets of equivalent statements
about the same "data" (often denumerable sets, or what-have-you).  But
even in proofs, you can deal with expressing A in form B.

But if I have a routine called "sort" which sorts, and I feed a list
into it which it can sort, the result is a sorted list.  Somebody has
to establish that "sort" does indeed sort, but that would be true of a
formal proof as well (you need to establish that you have a definition
of "sorted" which fits within your formalism -- or else accept a
priori that there is this thing called "sorted" and use its
properties).

I'll grant you that an imperative language transforming data is doing
something different than a proof about some form(s) of that data.  But
what about an imperative language transforming some set of statements
to some equivalent set of statements (like a compiler, or a symbolic
math engine)?

I think it's just a matter of what's being transformed.

No??

Well, I guess some proof techniques are O(godawful).  But that's what
people are for...

>Encapsulating in a procedure named "sort" all the swaps that
>accomplish sorting just pushes the problem down a level.  The
>question remains: how do I know that after calling "sort(x)" the
>array x is sorted?  Well, I can either test the routine "sort", or I
>can prove that after calling it the array will be sorted.

Hmm... so what you're talking about is introducing new concepts to a
language or system.  I think that technically this falls into the
category of "meta-proofs" rather than "formal proofs".

Correct me if I'm wrong :-)

I know somebody will...

Raul Rockwell

new@ee.udel.edu (Darren New) (04/13/91)

In article <1991Apr12.082128.11654@netcom.COM> mrs@netcom.COM (Morgan Schweers) writes:
>    Really?  It's not compilable, eh?  Sounds real similar to BNF to me.

Actually, it is a combination of CCS and ACT ONE (i.e., a calculus of
communicating processes and equational data type definitions).

>I accept that these are useful.  You can't *PROGRAM* in them, but they
>are useful as references.  The point is that you have to write it in a
>compilable language at some point.  

LOTOS can be compiled if you don't take advantage of (say) infinite numbers
of processes or unlimited-size queues.  I made another post explaining
how to program in LOTOS.

>your code has to go through a real compiler.  That's where the work gets
>done. By the programmer who does the implementation.  He can't be
>programming in a language that doesn't run, for obvious reasons.

I disagree.  "work" gets done all up and down the line. Work gets done
by the person who gathers the specs, by the person who writes the
C code, by the person who writes the C compiler and libraries, the
testers, and so on.  Formal languages help these people work together.

>    He can define the algorithms however he wants, as long as the
>result is what comes off the spec.  I understand this, as does (I
>would imagine) all other programmers doing Real Work(tm).  It's the
>concept that I should program in a 'formalized' language that makes
>me laugh.  It's the idea that I should be worried that the language
>ISN'T formally provable that makes me laugh.

The informality of the language you use does not keep you from writing
code in that language that works well. Rather, the formality helps
you write better code faster. Analogy: assembler doesn't keep
you from writing good code. C helps you write better code faster.

>>You also have to know that it meets the customer's
>>specifications.  How do you do that with C?  
>                  ^^^^^^^^^^^^^^^^^^^^^^^^^^  (You test it.)

You mean the customer tests it.  You can't possibly know whether for
subtle problems the program does what the customer wants because you
may not know what the customer wants.  In my experience, the *customer*
doesn't know what the customer wants :-), and hence the "prototype"
approach.

>    Who *IS* this Dan Bernstein guy?  You're the second person to
>mention him.

He brings up good points, but he really really likes C. :-)

>    Yeah, 'more flexible than any compilable language', etc.  Sheesh.
>Do you ever get any work done?  You write the specs for the
>customer, no problem.  I'll take it and make it work.  Right?  Right.

Exactly.  And the formalism makes sure that what I'm telling you is
what you are understanding.

>>Then you have never seen a *real* formal specification and you
>>shouldn't be claiming they are useless, since you've obviously never
>>tried to use one.  :-)
>    <Grin>  So enlighten me, mortal.

I'll post references at the end of this article, since you are about the
tenth person who's asked.

>>you can't have a formal definition without math.  LOTOS's definition
>     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  (Why?)

Because "math" means "formal."  (I think the word "math" comes from
the name of the Arabic person who invented math, or something like that.)

"Formal" means "based on the form". I.e., you can understand what is
expressed by observing the "form" or "shape" of the argument without
knowing what real-world things it relates to.

    if P then Q;    P;    therefore Q

I don't need to know what P or Q represent in the real world to know
that this is "valid" in a formal sense.  It may not be "true" if the
things P and Q refer to don't match the formalism, but it is "valid"
in my formalism. 

>    You're right, to an extent.  However, some projects are never
>'finished' and cannot have a formal spec at the beginning because it
>*MUST* change by the end, and would hamper more than help.  

Heh.  The formalism would let you know exactly what parts of the 
project need to change when requirements change.

>You've
>probably never wortked on a project like that, so we really are talking
>from two different points of view.

Double heh. You haven't lived in "shifting spec" world until you've
programmed life insurance software.  It was always much easier when we
got 1-2-3 spreadsheets than when we got papers with lots of scribbles
and arrows.  ESPECIALLY when changes were made, because we kept track
in our source of which functions corresponded to which rows/columns.

>    Too true.  I meant 'safe' as 'faithful' I suppose.  Consider that
>much of the code involved in the sort of work that I (and many other
>programmers) do would fall under the heading of 'undefined' results.
>(More specifically, platform-defined, or situation-defined.)  

"Platform defined" is not "undefined."  
Example of platform-defined: the open() call returns either a file handle
or -1 if you reach a platform-defined limit.
Example of undefined: the open() call returns a file handle. (Note no mention
of error-behavior.)

Which documentation would you rather use?
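
A small illustration of the difference, using the POSIX open() call,
which is documented to return -1 and set errno on failure
(open_or_report is a made-up wrapper name):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    /* Platform-defined behaviour you can actually program against:
       the documented -1/errno failure case covers hitting whatever
       descriptor limit this particular platform imposes. */
    int open_or_report(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd == -1) {
            fprintf(stderr, "open(%s): %s\n", path, strerror(errno));
            return -1;
        }
        return fd;
    }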

>If the sole purpose for the language is to have a language whose results
>can be checked, as long as they are defined, then it makes no sense to
>use it for things that will normally be site-defined.  Does that make
>sense?  

Sorry. No.  :-(       Whether a language is formally defined has little
or nothing to do with site-defined behavior.  Again, formal definitions
are for inter-personal communication, not for checking results of
a program against what *should* be the results of a program.

>P.S.   This has gotten *WAY* out of hand.  We're talking about
>    messages 200+ lines in length, constantly.  I love the discussion,
>    and am waiting for someone to define Formal Languages, but it's
>    getting a bit long.  Will anyone settle for a pathetic, "We have
>    different opinions, because we do different jobs?" or is this
>    determined to be a continuing thread?

I see no problems as long as good information is being communicated.
I think 200+ line posts are usually clearer and better thought-out
than the "yes it is/no it isn't" posts we often get.  As long
as people keep posting, the discussion will continue.

Below is the appropriate bits of my biblio file.

The Brinksma tutorial is probably the easiest to understand. The
international standard is of course the most complete. Even including
the formal definitions of things like addition and subtraction, arabic
decimal, hex, and binary numbers, flattening (i.e., scope resolution),
and so on, the standard is still only about 50-100 pages.  I would
recommend the other tutorials (for Estelle and SDL) also.  BTW, if you
don't follow the "piano" examples in the tutorial, ignore them, learn
LOTOS, and then go back and look at the piano examples. LOTOS is so
different from most programming languages that trying to take an
informal approach is more difficult than taking a formal approach.

Let me know if there's anything else I can do for you.

@techreport{Tenn88,
	title = "A Tutorial Introduction to Estelle",
	author = "R. Tenney",
	number = "88--1",
	institution = "Univerisity of Mass., Boston",
	month = jun,
	year = 1988
	}

@article{BB87,
	author = "T. Bolognesi and E. Brinksma",
	title = "Introduction to the ISO Specification Language LOTOS",
	journal = "Computer Networks and ISDN Systems",
	year = 1987,
	volume = 14,
	number = 1,
	pages = "25-60"
	}

@article{BH89,
	author = "F. Belina and D. Hogrefe",
	title = "The CCITT Specification and Description Language SDL",
	journal = "Computer Networks and ISDN Systems",
	year = 1989,
	volume = 16,
	number = 4,
	pages = "311-341"
	}


@manual{ISO8807,
	key = "ISO8807",
	title = "ISO International Standard 8807: LOTOS - A Formal Description 
Technique Based on the Temporal Ordering of Observat
ional Behaviour",
	Xyear = 1989,
	organization = "Information Processing Systems - Open System Interconne
ction"
	}

@manual{GLOTOS,
	key = "ISO8807AD1",
	title = "ISO International Standard 8807: LOTOS - A Formal Description 
Technique Based on the Temporal Ordering of Observat
ional Behaviour: Proposed Draft Addendum 1 (G-LOTOS)",
	Xyear = 1989,
	organization = "Information Processing Systems - Open System Interconne
ction"
	}


@book{BNT92,
	authors="Tommaso Bolognesi, Elie Najm and Paul Tilanus",
	title="G-LOTOS: a Graphical Language for Concurrent Systems",
	year="In preparation"
	}


%NEW, 20 Jun 90, SCC
@manual{ISO10167,
	key = "ISO10167",
	title = "ISO Draft International Standard 10167: Guidelines for the App
lication of Estelle, LOTOS and SDL",
	Xnumber = "JCT1/SC21 N2549",
	organization = "Information Processing Systems - Open System Interconne
ction",
	Xmonth = mar,
	Xyear = 1988
	}

@techreport{KSW88,
	author = "Steve King and Ib H. S\o{}rensen and James C.P. Woodcock",
	title = "Z: Grammar and Concrete and Abstract Syntaxes",
	institution = "Programming Research Group, Oxford University",
	address = "Oxford, UK",
	year = "1988",
	type = "Technical Monograph",
	number = "PRG-68",
	note = "ISBN 0902928503"
	}

@techreport{Haye85,
	author = "Ian J. Hayes",
	title = "Specification Case Studies",
	type = "Technical Monograph",
	number = "PRG-46",
	institution = "Programming Research Group, Oxford University",
	address = "Oxford, UK",
	year = "1985",
	month = "July"
	}

%NEW, 20 Jun 90
@article{THoo87,
	author = "G. T'Hooft",
	title = "Formal Description Techniques: Communication Tools for Data Co
mmunication Specialists: Formal Specification and Im
plementation of a File Transfer Protocol",
	journal = "Computer Networks and ISDN Systems",
	year = 1987,
	volume = 14,
	number = 1,
	pages = "311--321"
	}

-- 
--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
----- Network Protocols, Graphics, Programming Languages, FDTs -----
+=+=+ My time is very valuable, but unfortunately only to me +=+=+

new@ee.udel.edu (Darren New) (04/13/91)

In article <MBOA8C9@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>In article <50419@nigel.ee.udel.edu> new@ee.udel.edu (Darren New) writes:

>Well, you said there wasn't currently a compiler available for it. That
>doesn't say the same thing as "it's not implementable".

I said "Can you compile it?  No."  if you want to be picky :-)

>> Of course, programs which you write based on a LOTOS
>> specification will not necessarily implement the full specification,
>... and may be written in C as well as anything ...

No argument. Of course, if you *do* write it in LOTOS (the subset that
you could compile) then you can be sure it implements what was
specified.

>> I understand a CPU called "viper" has in some way been formally validated.
>Fine. How about the rest of the system (up to and including launch vehicles,
>ordnance, sensors, etc...)?

Probably not.  What's your point?  I'm not trying to say that formal
specifications can prevent you from needing to test anything.  I'm
merely saying that they *help*.  Tastes great, less filling.

>> Also, law-governed systems
>> can formally specify the ways in which people are allowed to interact
>> with the system, including the ways in which they are allowed to change
>> the code and the laws themselves.

>I see, the formal specification rules out infiltrating the U.S. with small
>nuclear devices hidden in Ford Econoline vans?

Well, the quality of this post is certainly not up to the standards I've
come to expect from your posts.  If you forgot the :-)'s then I understand.

I can't imagine what blowing up the computer has to do with whether
there are bugs in a program due to misunderstanding of what the
compiler is supposed to support.  It's also possible to have the police
arrest and beat to death the programmers; does that mean we shouldn't
have laws?

Sorry for mentioning these other formal systems.  It was my impression
that you might be interested in hearing about something you might not
know about, rather than pointless flamage.   Geez.    

>Peter da Silva.  `-_-'  peter@ferranti.com

	 -- Darren

-- 
--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
----- Network Protocols, Graphics, Programming Languages, FDTs -----
+=+=+ My time is very valuable, but unfortunately only to me +=+=+

jallen@libserv1.ic.sunysb.edu (Joseph Allen) (04/13/91)

In article <50453@nigel.ee.udel.edu> new@ee.udel.edu (Darren New) writes:

>For example, your customer gives you the following requirements:

>1) a phone will not ring if it is off-hook.
>2) a phone will ring if it is on-hook and its number is dialed.
>3) a non-ringing phone will issue a dialtone if off-hook and no
>   digits have been dialed.
>4) a phone which has had 7 digits dialed will emit either a ring signal
>   or a busy signal.
>5) ring signals will never be associated with phones which are not ringing.

> etc etc etc

>Clearly, this is the kind of specification a customer wants to give.

:-) Yeah.  (Do you work in the real world (tm)?)  If customers could give this
good of a specification we probably really could automate coding and we would
all be out of a job. 

--
#define h 23 /* Height */         /* jallen@ic.sunysb.edu (129.49.12.74) */
#define w 79 /* Width */                       /* Amazing */
int i,r,b[]={-w,w,1,-1},d,a[w*h];m(p){a[p]=2;while(d=(p>2*w?!a[p-w-w]?1:0:0)|(
p<w*(h-2)?!a[p+w+w]?2:0:0)|(p%w!=w-2?!a[p+2]?4:0:0)|(p%w!=1?!a[p-2]?8:0:0)){do
i=3&(r=(r*57+1))/d;while(!(d&(1<<i)));a[p+b[i]]=2;m(p+2*b[i]);}}main(){r=time(
0L);m(w+1);for(i=0;i%w?0:printf("\n"),i!=w*h;i++)printf("#\0 "+a[i]);}

jallen@libserv1.ic.sunysb.edu (Joseph Allen) (04/13/91)

In article <1991Apr11.214018.19443@watmath.waterloo.edu> mhcoffin@tolstoy.waterloo.edu (Michael Coffin) writes:
>In article <1755@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:

>>[Proofs are equivalent to programs (they had better be).  The only reason
   proofs make programs less buggy is that they add redundancy (it's almost
   as hard to derive the proof as it is to write the program in the first
   place)]

I agree with this completely.

>[Proofs are better because instead of giving detailed steps, they are closer
  in showing that the goals sketched out in the specs are achieved]

If the proofs are detailed enough to detect programming errors then why not
just program in proof-steps (or whatever they're called)?  They are probably
just a "higher" level of abstraction than current programming languages.  We'll
have to make a proof compiler...
--
#define h 23 /* Height */         /* jallen@ic.sunysb.edu (129.49.12.74) */
#define w 79 /* Width */                       /* Amazing */
int i,r,b[]={-w,w,1,-1},d,a[w*h];m(p){a[p]=2;while(d=(p>2*w?!a[p-w-w]?1:0:0)|(
p<w*(h-2)?!a[p+w+w]?2:0:0)|(p%w!=w-2?!a[p+2]?4:0:0)|(p%w!=1?!a[p-2]?8:0:0)){do
i=3&(r=(r*57+1))/d;while(!(d&(1<<i)));a[p+b[i]]=2;m(p+2*b[i]);}}main(){r=time(
0L);m(w+1);for(i=0;i%w?0:printf("\n"),i!=w*h;i++)printf("#\0 "+a[i]);}

cs450a03@uc780.umd.edu (04/13/91)

Michael Coffin writes:
>The definition of "sorted" can be translated into a logical predicate
>pretty easily.  E.g., for a list of numbers A, "A is sorted in
>ascending order" is translated to
> 
>    "for elements i and j in A, i <= j implies A[i] <= A[j]"
>
>...  It's quite different to point to a routine for quicksort and
>claim that the partitions and exchanges somehow describe
>"sortedness."

But it is not all that different to point to a routine for bubble-sort
and claim that the comparisons and exchanges somehow describe
"sortedness".

Assuming that A can be ordered.

Raul

mhcoffin@tolstoy.waterloo.edu (Michael Coffin) (04/13/91)

In article <12APR91.20195376@uc780.umd.edu> cs450a03@uc780.umd.edu writes:
>But if I have a routine called "sort" which sorts, and I feed a list
>into it which it can sort, the result is a sorted list.  Somebody has
>to establish that "sort" does indeed sort, but that would be true of a
>formal proof as well (you need to establish that you have a definition
>of "sorted" which fits within your formalism -- or else accept a
>priori that there is this thing called "sorted" and use its
>properties).

This is the whole point.  The definition of "sorted" can be translated
into a logical predicate pretty easily.  E.g., for a list of numbers
A, "A is sorted in ascending order" is translated to

    "for elements i and j in A, i <= j implies A[i] <= A[j]"

This is perhaps a leap of faith, but it's across a pretty narrow
canyon.  It's quite different to point to a routine for quicksort and
claim that the partitions and exchanges somehow describe "sortedness."

By the way, I agree with David Gudeman that proofs are prone to error.
But short of exhaustive testing, which is rarely possible, proofs are
the only way I know to reconcile the specification of a program with
its imperative realization.  It may not be a particularly *good* tool, 
but as far as I can see it's the only tool.

Michael Coffin				mhcoffin@watmsg.waterloo.edu
Dept. of Computer Science		office: (519) 885-1211
University of Waterloo			home:   (519) 725-5516
Waterloo, Ontario, Canada N2L 3G1

pattis@june.cs.washington.edu (Richard Pattis) (04/14/91)

In article <13APR91.15060840@uc780.umd.edu> cs450a03@uc780.umd.edu writes:
>Michael Coffin writes:
>>The definition of "sorted" can be translated into a logical predicate
>>pretty easily.  E.g., for a list of numbers A, "A is sorted in
>>ascending order" is translated to
>> 
>>    "for elements i and j in A, i <= j implies A[i] <= A[j]"
>>
>>...  It's quite different to point to a routine for quicksort and
>>claim that the partitions and exchanges somehow describe
>>"sortedness."
>
>But it is not all that different to point to a routine for bubble-sort
>and claim that the comparisons and exchanges somehow describe
>"sortedness".
>
>Assuming that A can be ordered.
>
>Raul

I always use this specification in class and ask whether it is a necessary
and sufficient condition to ensure their intuitive notion of what a sorting
subprogram should do.  Pretty much everyone agrees that this just about says
it all.  Then I show them a "sorting" subprogram that sets each element,
A[i], to the value i.  Sure enough, it satisfies the postconditions. There is
silence, then everyone starts to yell that the question was unfair since it
was "obvious" that I also meant the output was a permutation of the input.
Then we talk about "obvious", "proof", and some other things. Finally, we go
on to try to write code that correctly checks permutations, which itself gets
hairy.

  I know the definition above didn't quite say that it was the correct
postcondition for a sort subprogram; it just describes what "ordered"
means (and, if checked by code that directly implements the "logic", it will
require n**2 operations, because there are n**2 pairs of i,j).
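
A small C sketch of the classroom example (the names are illustrative
only): the "sort" below satisfies the quoted ordering postcondition yet
plainly is not a sort, and the direct check of the predicate as stated
really does cost on the order of n**2 comparisons.

    #include <stddef.h>

    /* Satisfies "i <= j implies A[i] <= A[j]" but discards the data,
       because the postcondition says nothing about permutations. */
    void bogus_sort(int *a, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            a[i] = (int)i;
    }

    /* Direct O(n**2) check of the ordering predicate as written. */
    int is_ordered(const int *a, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            for (size_t j = i; j < n; j++)
                if (a[i] > a[j])
                    return 0;
        return 1;
    }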

Also, it might be better to write

    "for elements i and j in A, i <= j implies A[i] is not > A[j]"

Most of my students (freshmen) don't know about partial orders, so
they see little difference between the two descriptions.

Rich Pattis
-- 
------------------------------------------------------------------------------
  Richard E. Pattis			"Programming languages are like
  Department of Computer Science	 pizzas - they come in only "too"
    and Engineering			 sizes: too big and too small."

gudeman@cs.arizona.edu (David Gudeman) (04/15/91)

In article  <1991Apr11.214018.19443@watmath.waterloo.edu> Michael Coffin writes:
]I assume you mean that proofs are just a form of redundancy---a way
]to "write the program twice".

More than just writing the program twice, you also write the program
in a different style -- hopefully leading to different kinds of
mistakes.

]...  Programming languages, at least imperative
]languages, are constructed from statements, which say "do this, then
]this, etc.".  They don't say "achieve this goal", which is really the
]purpose of running a program.

Proofs don't say "achieve this goal" either.  They say "if the
preconditions are such, then the postconditions are such".  It isn't
clear to me that this is closer to the original idea.

]Here's an analogy to illustrate the difference (off the top of my
]head, so let's not push this too far).  Suppose I'm telling you how to
]get to my house.  A strictly imperative program looks like

I'll take the analogy with the caveat that the person following the
directions will make no mistakes.  It is the directions that might be
in error...

]Proving a program is like adding landmarks to the program and then
]showing that the actions get from one landmark to the next:

How is adding landmarks different from redundancy?

]Besides giving you more confidence while you're driving, the proof has
]the property that it abstracts away detail.

I don't believe that.  The amount of detail you have to specify is
dependent on the data structures you have available and to some extent
on how well the paradigm matches the problem.  It is not true that the
paradigm of predicate logic requires less details in general than the
paradigm of imperative procedures.

]In the original
]program, it's difficult to see if "go straight 4.1 miles" is right
]without trying the entire route; there's no mention of what "go
]straight 4.1 miles" ought to *do*. That's all informally
](mis)understood.  In the proven program, that statement can be checked
]in isolation...

I don't believe that either.  The effects of any procedure are as
formally understood as predicate logic.  How do you know what 3 < 4
means?  At some level all formalism is defined informally, and I don't
see why this is any less legitimate for programming languages than it
is for logic.  Also, it is not true that you can't see if "go straight
4.1 miles" is correct without trying the entire route.  You know
unambiguously what the effect of an instruction is given any
particular start state.

My gripe is that you _know_ what effect "x := 3" has.  It doesn't give
any better level of understanding to write "{true} x := 3 {x = 3}".
All you are doing is defining one notation in terms of another.  You
can also define predicate logic in terms of another notation, so that
instead of writing "x = 3" you would write "x = 3 |- <x,3> elem S".
At what point do you stop defining one notation in terms of another?
(For logicians out there: yes, this notation stacking has some uses,
but program verification is not one of them.)
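
For reference, the standard Hoare assignment axiom that the quoted
triple instantiates (stated here in LaTeX for precision; this is
background, not anything claimed in the posts):

    % Assignment axiom: P holds after x := E exactly when P with E
    % substituted for x held before.
    \[ \{P[E/x]\}\ x := E\ \{P\} \]
    % Taking P to be (x = 3) and E to be 3 gives the quoted triple,
    % since (x = 3)[3/x] is (3 = 3), i.e. true:
    \[ \{\mathrm{true}\}\ x := 3\ \{x = 3\} \]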

]...  For concurrent
]programs, where nondeterminism can cause a program to run for weeks
]and then suddenly fail, I would guess that proofs are far more
]reliable. 

It may be that there are as yet no adequate testing methods for
parallel programming in procedural languages.  It may also be that
predicate logic is a more convenient paradigm for parallel programming
than imperative procedures.  But that is a separate question from the
one of whether formalism for its own sake gives some sort of magical
verification advantages.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

gudeman@cs.arizona.edu (David Gudeman) (04/15/91)

In article  <50453@nigel.ee.udel.edu> Darren New writes:
]If by "correct" you mean "meets the specification" then the ability to
]show that the behavior of the program is related to the behavior
]of the specification is a testing-equivalence sense can be very helpful.

Of course that's what "correct" means.  But the "specification" is an
informal description of what the program should do.  A _formal_
specification can be as wrong as a program can.  And in general the
process of writing a formal specification can be as buggy as the
process of writing a program.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

peter@ficc.ferranti.com (Peter da Silva) (04/15/91)

In article <50577@nigel.ee.udel.edu> new@ee.udel.edu (Darren New) writes:
> No argument. Of course, if you *do* write it in LOTOS (the subset that
> you could compile) then you can be sure it implements what was
> specified.

OK, fine. You can implement a subset of LOTOS, and anything implementable
in that subset is implementable in C. End of argument.

> >> I understand a CPU called "viper" has in some way been formally validated.
> >Fine. How about the rest of the system (up to and including launch vehicles,
> >ordnance, sensors, etc...)?

> Probably not.  What's your point?

That bringing up the SDI bugaboo is pretty much an irrelevant digression.

Is viper fast enough to do the job? Will it remain so?

> Well, the quality of this post is certainly not up to the standards I've
> come to expect from your posts.  If you forgot the :-)'s then I understand.

I'm running out of smileys. I don't find SDI amusing. "Nuclear War can
ruin your whole compile" -- Karl Lehenbauer.

> I can't imagine what blowing up the computer has to do with whether
> there are bugs in a program due to misunderstanding of what the
> compiler is supposed to support.

The symptom of a failure in SDI is nuclear explosions in major cities,
remember.

Hey, you brought up SDI. If you want to use a reasonable example against
informal programming methods, or whatever, how about the Therac incident?
Much more appropriate.
-- 
Peter da Silva.  `-_-'  peter@ferranti.com
+1 713 274 5180.  'U`  "Have you hugged your wolf today?"

frank@grep.co.uk (Frank Wales) (04/15/91)

In article <RJqa13w164w@mantis.co.uk> mathew@mantis.co.uk (mathew) writes:
>This is correct. The Viper CPU is a relatively simple CPU design; it was
>necessary to keep it so in order to make it validatable.

I heard recently that the company that developed it is currently
embroiled in legal action against the Ministry of Defence; something
about making claims that implied it was more reliable than it
really was, I think.

>I think they are working on Viper II, which will be a slightly
>more useful chip (i.e. one which you might actually want to build into
>something more complicated than a toaster).

Indeed.  I remember reading the spec for Viper when it came out;
I put it down again when I got to the part about how, if a run-time
error was encountered, the processor dealt with it by halting.
--
Frank Wales, Grep Limited,             [frank@grep.co.uk<->uunet!grep!frank]
Kirkfields Business Centre, Kirk Lane, LEEDS, UK, LS19 7LX. (+44) 532 500303

dww@math.fu-berlin.de (Debora Weber-Wulff) (04/16/91)

mrs@netcom.COM (Morgan Schweers) writes:

[bunch of stuff deleted]
>It's my opinion that a product should work in ALL cases
>before you ship it.  It's also my opinion that this cannot be proven on
>a systems programming level, since the variables are far too complex.
>(As an example, we have one program which can't be used in a certain mode
>with certain other programs, because they use memory in an odd way.  Could
>this have been proven to happen?  Heck no.  Could a 'formal' language prove
>that we can successfully use the disk to swap in portions of our code during
>runtime?  Heck no.  Am I wrong about this?  Someone feel free to tell me of
>a method that will tell me (just from looking at my source) that if I'm running
><program X> in memory, and another program swaps in to do some system checking
>that there will be a memory conflict between the two.  Assume, as is the normal
>case, that I don't have the sourcecode to program X.  Do you even think that
>there is a language in which this *IS* possible?  IMO, hell no.)

Well, we're getting there. J Moore did a proof of correctness for
an assembler, William Bevier did a proof for a small kernel,
and Warren Hunt did one for the gate level implementation
of a chip using the Boyer-Moore prover (see Journal of Automated
Reasoning, December 1989). The prover - an obstinate, nagging proof
checker that won't accept any nonsense - required the proof of
all sorts of lemmata that get close to your problems. For example, it
needed to be proven that the assembler program code could not modify
itself during execution.

I think a lot of people are just scared of formal methods because
they smell like the m-word (*mathematics*), and they are afraid
that they won't understand how to use them and might make fools
out of themselves, so they just try and flame away in the hopes
that formal methods will go away.

"Wat de Buuer nit kunnt, dat frisst he nit" - (loose translation
from the Low German: if the farmer doesn't recognize it, he
won't eat it.)

I quite agree with the opinions expressed in other postings:
Use testing and/or proving as appropriate.

-- 
Debora Weber-Wulff
snail: FU Berlin, ZI Fachdidaktiken, Habelschwerdter Allee 45, W-1000 Berlin 33
email: weberwu@inf.fu-berlin.de, dww@math.fu-berlin.de

new@ee.udel.edu (Darren New) (04/16/91)

In article <1991Apr13.022129.29607@sbcs.sunysb.edu> jallen@libserv1.ic.sunysb.edu (Joseph Allen) writes:
>>Clearly, this is the kind of specification a customer wants to give.
>:-) Yeah.  (Do you work in the real world (tm)?)  If customers could give this
>good of a specification we probably really could automate coding and we would
>all be out of a job. 

I misspoke, I guess.  I meant that this is the kind of specification you
eventually develop from what the customer tells you he/she wants. I was
contrasting this with a "specification" of the form "main(){ ... }".

And, yes, I *used* to work in the "real world".  That's why I returned
to graduate school :-).
-- 
--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
----- Network Protocols, Graphics, Programming Languages, FDTs -----
     +=+=+ My time is very valuable, but unfortunately only to me +=+=+
+=+ Nails work better than screws, when both are driven with screwdrivers +=+

new@ee.udel.edu (Darren New) (04/16/91)

In article <1882@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>But the "specification" is an
>informal description of what the program should do.  

I'm arguing that formally-specified languages should be used so that
the "specification" can be made formal and then be formally compared
to the "code".

>A _formal_
>specification can be as wrong as a program can.  

Usually not, because the specification is at a higher level and closer
to the way a person thinks than an implementation.  Compare the 
specification of a relational database schema and an RDBMS implementation.
Which is the customer more likely to get right?

>And in general the
]process of writing a formal specification can be as buggy as the
>process of writing a program.

It can be, but experience has again and again shown that formal
specifications tend to have fewer bugs than informal specifications.
See any FORTE conference proceedings.  Every time I've heard of
somebody taking (say) an ISO network protocol specification and
respecifying it formally, bugs have been found.  I consider this a
benefit.
			     -- Darren

-- 
--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
----- Network Protocols, Graphics, Programming Languages, FDTs -----
     +=+=+ My time is very valuable, but unfortunately only to me +=+=+
+=+ Nails work better than screws, when both are driven with screwdrivers +=+

martelli@cadlab.sublink.ORG (Alex Martelli) (04/16/91)

new@ee.udel.edu (Darren New) writes:
	...
:>>you can't have a formal definition without math.  LOTOS's definition
:>     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  (Why?)
:
:Because "math" means "formal."  (I think the word "math" comes from
:the name of the Arabic person who invented math, or something like that.)

You're probably thinking of "algorithm".  "Mathematics" is a deeply 
Indo-European word: immediate derivation from Greek "mathein, manthanein",
"to learn", but undoubted relationships link it to other I.E. words (in
various Germanic languages, and Sanskrit) such as "muntar" (prompt, awake),
"munda" (to aim), "mundon" (to pay attention), "medha" (intelligence,
wisdom), ultimately to something like "manas" (mind) + "dadhati" (to put,
to place) - yes, "mind", "mental", "mnemonic" all come from the same
root-word family.  Sorry to intrude, but, while not formally treated,
this definitely IS a MISCellaneous LANGuage problem...:-).

-- 
Alex Martelli - CAD.LAB s.p.a., v. Stalingrado 53, Bologna, Italia
Email: (work:) martelli@cadlab.sublink.org, (home:) alex@am.sublink.org
Phone: (work:) ++39 (51) 371099, (home:) ++39 (51) 250434; 
Fax: ++39 (51) 366964 (work only), Fidonet: 332/401.3 (home only).

new@ee.udel.edu (Darren New) (04/16/91)

In article <FWQA._6@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>That bringing up the SDI bugaboo is pretty much an irrelevant digression.

My point is that for certain jobs, even if formal techniques don't 
guarantee perfection, the job may be important enough to get right 
that the extra effort done on a formal validation may be worthwhile.

Hacking xfish doesn't need formal techniques.

>I'm running out of smileys. I don't find SDI amusing. "Nuclear War can
>ruin your whole compile" -- Karl Lehenbauer.

I don't find SDI amusing either.  I can see I touched a raw nerve here
by mentioning SDI.  Let me assure everyone that I don't agree with trying
to implement SDI, and as far as *I* can tell, it is doomed to failure
(and us along with it) if it ever is invoked.

>> I can't imagine what blowing up the computer has to do with whether
>> there are bugs in a program due to misunderstanding of what the
>> compiler is supposed to support.

>The symptom of a failure in SDI is nuclear explosions in major cities,
>remember.

I remember.  I still don't see the relevance. The explosion you were
talking about was caused by a station wagon, not an incoming
missile.  I don't think there are any plans (at least not publicly :-O )
to use SDI against station wagons.  

I don't see how you equate my stance (that formal techniques are better
than informal techniques) with
a. desire to implement SDI
b. desire to not test systems 
c. desire to blow up computers
d. failure to test missile launch vehicles
e. any of the other stuff you brought up

			  -- Darren




-- 
--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
----- Network Protocols, Graphics, Programming Languages, FDTs -----
     +=+=+ My time is very valuable, but unfortunately only to me +=+=+
+=+ Nails work better than screws, when both are driven with screwdrivers +=+

gudeman@cs.arizona.edu (David Gudeman) (04/16/91)

In article  <50835@nigel.ee.udel.edu> Darren New writes:
]
]>A _formal_
]>specification can be as wrong as a program can.  
]
]Usually not, because the specification is at a higher level and closer
]to the way a person thinks than an implementation.

That is an argument for higher-level programming languages (with
dynamic typing, for instance), not an argument for adding another
language (the specification language) to the process of implementing
software.  If your specification is so great, why not make that your
programming language?

]... experience has again and again shown that formal
]specifications tend to have fewer bugs than informal specifications.

I'll bet that is heavily dependent on the application.

]...  Every time I've heard of that
]somebody has taken (say) an ISO network protocol specification and
]respecified it formally, bugs have been found...

When I try to implement an informal specification in any programming
language I usually find "bugs" in the specification.  No win for
formal specs there either, it is just the process of going into
details that finds the problems.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

cs450a03@uc780.umd.edu (04/16/91)

Debora Weber-Wulff writes:
   [about validating system utilities]

>Well, we're getting there. J Moore did a proof of correctness for an
>assembler, William Bevier did a proof for a small kernel, and Warren
>Hunt did one for the gate level implementation of a chip using the
>Boyer-Moore prover (see Journal of Automated Reasoning, December
>1989). The prover - an obstinate, nagging proof checker that won't
>accept any nonsense - required the proof of all sorts of lemmata that
>get close to your problems. For example, it needed to be proven that
>the assembler program code could not modify itself during execution.

This is fun.

But there is assembler code which modifies itself during execution,
and then there is assembler code which modifies machine code before
execution (linking loaders, loaders in general, debuggers, and
possibly future functional languages).

I'd be interested in hearing how the formalisms worked around those
cases.  (I'm surrounded by libraries, but my schedule makes it hard to
get to any while they're open :-(

Raul Rockwell

yodaiken@chelm.cs.umass.edu (victor yodaiken) (04/16/91)

In article <RJqa13w164w@mantis.co.uk> mathew@mantis.co.uk (mathew) writes:
>new@ee.udel.edu (Darren New) writes:
>> In article <RNMAX3C@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Sil
>> >                         How do you specify hardware and human actions
>> >based on a formal language specification?
>> 
>> I understand a CPU called "viper" has in some way been formally validated.
>> Check out ACM architecture-oriented journals.
>
>This is correct. The Viper CPU is a relatively simple CPU design; it was
>necessary to keep it so in order to make it validatable. The formal proof
>system for hardware is based around the language ML; you specify the actions
>of various elements of the design, and the system uses inference rules to
>construct a formal proof that the design is correct (or incorrect :-)



In fact, formal proofs of discrete systems are still objects of 
research more than development. Nobody really knows how to do them, or
even if discrete systems are amenable to rigorous mathematical analysis.

The Viper proof is a classic example of overselling the utility
of formal proofs.  See the following paper by Cohn for a nice
discussion of what "proof" really means here.

@article{Cohn,
  author  = {Avra Cohn},
  title   = {The Notion of Proof in Hardware Verification},
  journal = {Journal of Automated Reasoning},
  volume  = {5},
  number  = {2},
  year    = {1989},
  pages   = {127--140}
}

@incollection{Viper,
  author    = {Avra Cohn},
  title     = {A Proof of Correctness of the Viper Microprocessor: The First Level},
  booktitle = {VLSI Specification and Synthesis},
  editor    = {Graham Birtwistle and P. A. Subrahmanyam},
  publisher = {Kluwer},
  year      = {1988},
  pages     = {27--72}
}


Also, since someone mentioned it, here is a ref to Bevier.


@techreport{Bevier,
  author      = {William R. Bevier},
  title       = {Kit: A Study in Operating System Verification},
  institution = {Computational Logic Incorporated},
  year        = {1988},
  number      = {28}
}

yodaiken@chelm.cs.umass.edu (victor yodaiken) (04/16/91)

In article <1726@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>In article  <969@mgt3.sci.UUCP> D. C. Sessions writes:
>]  Of course, I can't *prove* that a transient error was due to substrate 
>]  injection resulting from pin overdrive.  Since I can't prove it, let's 
>]  forget the whole thing, right?
>
>That isn't a parallel situation.  If you can show that some insecurity
>exists in a program then it should be fixed.  That does not mean that
>it is useful to try to prove mathematically that there are no
>insecurities.
>

Why not?  

>
>I think that this "faith in formalism" betrays a lack of thought about
>the philosophy of science, math, and models.  Not that I blame you for
>this.  The fault lies with modern university educations which -- at
>least in the US -- are painfully lacking some important foundational
>studies.
>
>Can anyone who thinks it is useful to "prove" programs correct provide
>me with an explanation of exactly what the proof accomplishes?  What
>do you know after the proof that you didn't know before the proof?
>Before you answer, consider this: the process of deriving a
>mathematical proof is every bit as prone to errors as is the process
>of writing a computer program.

I want to write, say, a real-time process synchronization program
that will allow distributed process control programs to communicate.
Before I begin to code, it seems only reasonable that I should be
able to describe the proposed algorithm, the assumed time costs
of key operations, and the correctness requirements, and then prove
that the algorithm can work. A chunk of Scheme, "C", or any other code
will not be very informative here. 


>Finally, consider the relationship of the proof to the real world,
>what assumptions are essential to this relationship, and whether those
>assumptions are more reliable than the reliability that you can get by
>direct testing.  To answer this question you will have to deal with
>the philosophy of mathematical models.  What relationship does the
>concept of a "set" have in common with the real world?  Sets follow
>strict rules, how can we know that anything in the real world follows
>any rules similar to those for sets?


Not sure what you are getting at here. Of course mathematical notions are
just abstractions from reality. But, they may be useful abstractions. 
An apple does not care about Newton's laws, but the laws give us a good
tool for understanding how the apple might fall.

cok@islsun.Kodak.COM (David Cok) (04/16/91)

In article <1954@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>In article  <50835@nigel.ee.udel.edu> Darren New writes:
>]
>]>A _formal_
>]>specification can be as wrong as a program can.  
>]
>]Usually not, because the specification is at a higher level and closer
>]to the way a person thinks than an implementation.
>
>That is an argument for higher-level programming languages (with
>dynamic typing, for instance), not an argument for adding another
>language (the specification language) to the process of implementing
>software.  If your specification is so great, why not make that your
>programming language?
>

Specification languages are and should be different in character than
programming languages.  A specification will state the properties of a
result (given restrictions on the input) or of the state of a system, such
as that a given sequence is ordered.  It says nothing about the method used
to achieve that.  Indeed there are many methods (algorithms) for sorting.
Some of them are quite complex in order to achieve some desired level of
performance.  Different ones will be applicable in different situations.
The question is whether the implementation in the programming 
language actually will produce a result with the desired properties.  It is
my gut feel that for computations which are somewhat close to mathematics
(e.g. sorting), the statement of a predicate giving a specification for a
function can usually be considerably simpler than the statement of an algorithm
for computing the result.  It seems iffier for things like GUIs, but that may
simply reflect a deficiency in current specification languages.  An automated tool which
will statically produce cases in which the result of a piece of code will not
satisfy the specification would be a valuable contribution to program 
correctness.  
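
To make that concrete, here is a minimal sketch in C (my own
illustration, not the output of any specification tool): the property
"this array of strings is in order" is a few lines, and it stays the
same whether the implementation behind it is insertion sort, quicksort,
or a call to the system sort.

	#include <string.h>

	/* The specification as a predicate: the array is in
	 * non-decreasing order.  (A complete spec would also require
	 * the output to be a permutation of the input; that check is
	 * only slightly longer.) */
	static int is_sorted(char *lines[], int n)
	{
		int i;
		for (i = 0; i + 1 < n; i++)
			if (strcmp(lines[i], lines[i + 1]) > 0)
				return 0;
		return 1;
	}

Run against the output of any sort implementation, a predicate like
this is exactly the kind of oracle that makes verification useful as a
bug-finder.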

In this context I see program verification in the same light
as testing.  Testing finds bugs; it does not show that a program is correct.
Since specifications can not cover all conditions, a proof of correctness
on code fragments does not show that code compiled with a real compiler 
running on a real machine will execute without error, but if the proof fails, 
it may well point out a case that the code does not handle properly.  Thus 
program verification also serves to find bugs rather than to verify 
correctness.  Since it can be run on the static text, it could point out 
some errors much earlier in the programming process than testing could.
This would be true even if the specifications covered only some aspects of
program logic, leaving out practically important areas such as computation
time, peak memory usage, and average paging as a function of physical memory.

David R. Cok
Eastman Kodak Company -- Imaging Science Lab
cok@Kodak.COM

mhcoffin@tolstoy.waterloo.edu (Michael Coffin) (04/16/91)

In article <1954@optima.cs.arizona.edu> David Gudeman writes:

>... If your specification is so great, why not make that your
>programming language?

Because the specification abstracts away details that are necessary
for producing efficient code but unnecessary for correctness.  The
specification for a program that sorts a file might be some formal
translation of "the output file consists of the same lines as the
input file, but in alphabetical order".  But as years of research and
tons of journal articles demonstrate, turning that specification into
an efficient program is not trivial---certainly not so trivial that a
compiler could accomplish it.
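
One way to see the gap: the specification *can* be run directly, by
searching the permutations of the input for one that satisfies the
ordering predicate, but what you get is an O(n!) "sort".  A throwaway
C sketch (illustration only, not something anyone would ship):

	#include <string.h>

	/* The ordering predicate from the specification. */
	static int is_sorted(char *a[], int n)
	{
		int i;
		for (i = 0; i + 1 < n; i++)
			if (strcmp(a[i], a[i + 1]) > 0)
				return 0;
		return 1;
	}

	/* "Execute the spec": permute the array until the predicate
	 * holds.  Correct by construction and useless beyond a handful
	 * of lines, which is why deriving an *efficient* program from
	 * the spec is the hard part.  Call as spec_sort(a, n, 0). */
	static int spec_sort(char *a[], int n, int k)
	{
		int i;
		char *t;

		if (k == n)
			return is_sorted(a, n);
		for (i = k; i < n; i++) {
			t = a[k]; a[k] = a[i]; a[i] = t;
			if (spec_sort(a, n, k + 1))
				return 1;
			t = a[k]; a[k] = a[i]; a[i] = t;
		}
		return 0;
	}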

Michael Coffin				mhcoffin@watmsg.waterloo.edu
Dept. of Computer Science		office: (519) 885-1211
University of Waterloo			home:   (519) 725-5516
Waterloo, Ontario, Canada N2L 3G1

peter@ficc.ferranti.com (Peter da Silva) (04/16/91)

In article <50842@nigel.ee.udel.edu> new@ee.udel.edu (Darren New) writes:
> In article <FWQA._6@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
> >That bringing up the SDI bugaboo is pretty much an irrelevant digression.

> My point is that for certain jobs, even if formal techniques don't 
> guarantee perfection, the job may be important enough to get right 
> that the extra effort done on a formal validation may be worthwhile.

Oh, I quite agree. End of argument here. (now if they could all be that
easy). I was reacting to the implication that formal techniques could make
the impossible practical. There *are* people who think that way.

> I remember.  I still don't see the relevance. The explosion you were
> talking about was caused by a station wagon, not an incoming
> missile.  I don't think there are any plans (at least not publicly :-O )
> to use SDI against station wagons.  

Well, actually it was a minivan. I guess it would depend on if you
could use active homing on the fuzzbuster. :->
-- 
Peter da Silva.  `-_-'  peter@ferranti.com
+1 713 274 5180.  'U`  "Have you hugged your wolf today?"

peter@ficc.ferranti.com (Peter da Silva) (04/16/91)

The use of formal specification can be considered a first implementation
of the problem, if you like, building on Brooks' admonition in "The Mythical
Man-Month": "build one to throw away -- you will anyway". The first one
firms the spec so the second one works. And because the second one is based
so closely on the first one it avoids the "second system" syndrome.

So, the scientists and engineers agree, they just don't know it.
-- 
Peter da Silva.  `-_-'  peter@ferranti.com
+1 713 274 5180.  'U`  "Have you hugged your wolf today?"

new@ee.udel.edu (Darren New) (04/17/91)

In article <1954@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>That is an argument for higher-level programming languages (with
>dynamic typing, for instance), not an argument for adding another
>language (the specification language) to the process of implementing
>software.  If your specification is so great, why not make that your
>programming language?

If what I wrote about LOTOS was written well enough to be clear :-),
you would have noticed that this is what I recommend.  You can express
constraints in LOTOS and then use formal manipulations to generate a
state machine which meets those constraints.  I saw one conference paper
where an algorithm was presented whereby the network service is specified
in LOTOS and the algorithm slices and dices it into a finite-state-machine-based
protocol engine.

>]... experience has again and again shown that formal
>]specifications tend to have fewer bugs than informal specifications.

>I'll bet that is heavily dependent on the application.

Maybe.  The applications here were specifications of network protocols.

>When I try to implement an informal specification in any programming
>language I usually find "bugs" in the specification.  No win for
>formal specs there either, it is just the process of going into
>details that finds the problems.

I think there is a qualitative difference as well.  For example, it is possible
to formally subject the specification to "undefined reception" error tests.
(The equivalent, if you will, of "message not understood", except much
more common.) Using the right level of formalism lets you get away from
the nitty gritty details of storage management, timing, system dependencies,
and so on, while you are specifying what you want. 
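
As a toy illustration of what an "undefined reception" check buys you
(hand-rolled C with made-up states and messages, nothing to do with
LOTOS proper): once the state/message behaviour is written down in
machine-readable form, the holes the spec never mentions can be
enumerated mechanically instead of being discovered in the field.

	#include <stdio.h>

	/* Hypothetical three-state, three-message protocol. */
	enum state   { IDLE, WAIT_ACK, OPEN, NSTATES };
	enum message { CONNECT, ACK, DATA, NMSGS };

	/* next[s][m] is the successor state, or -1 where the spec is
	 * silent -- an "undefined reception". */
	static const int next[NSTATES][NMSGS] = {
		/*              CONNECT   ACK   DATA */
		/* IDLE     */ { WAIT_ACK, -1,   -1   },
		/* WAIT_ACK */ { -1,       OPEN, -1   },
		/* OPEN     */ { -1,       -1,   OPEN },
	};

	int main(void)
	{
		int s, m;

		for (s = 0; s < NSTATES; s++)
			for (m = 0; m < NMSGS; m++)
				if (next[s][m] < 0)
					printf("undefined reception: state %d, message %d\n", s, m);
		return 0;
	}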

If you are speaking about network protocol specifications, then getting rid
of the bugs in the specification instead of in the implementation makes
things more robust (as people tend to make the same implementation decisions)
and prevents people from having to bugfix things over and over with each
implementation.  

The real win is when you use a formalism so well suited to the task that
even the customer can understand it.  For example, if you are programming
life-insurance software (:-), using Lotus 1-2-3 (not LOTOS) for a specification
language makes a lot of sense, and even the underwriters can understand what
you are talking about. There's even a tool to test it :-)!   -- Darren




>--
>					David Gudeman
>gudeman@cs.arizona.edu
>noao!arizona!gudeman


-- 
--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
----- Network Protocols, Graphics, Programming Languages, FDTs -----
     +=+=+ My time is very valuable, but unfortunately only to me +=+=+
+=+ Nails work better than screws, when both are driven with screwdrivers +=+

peter@ficc.ferranti.com (Peter da Silva) (04/17/91)

Taking a quick flip-flop, here:

In article <1991Apr16.130341.27345@kodak.kodak.com> cok@islsun.Kodak.COM (David Cok) writes:
> [a specification language] says nothing about the method used
> to achieve that.  Indeed there are many methods (algorithms) for sorting.
> Some of them are quite complex in order to achieve some desired level of
> performance.  Different ones will be applicable in different situations.

Yes, and you need to specify a bit about the behaviour of the sort:
does it retain the order of non-sorted fields, for example? In any case,
how does this differ from top-down programming, where you say:

	sort frammistan

and later on fill in sort with code (or pull an appropriate routine from
a library). I have no idea how the UNIX 'sort' program works... does that
make the following shell script a formal specification?

	awk -F: '{print $1}' /etc/passwd | sort | uniq -c

In fact I know nothing about what algorithms are used by uniq, sort, or awk.
A great advantage of higher level languages is that you no longer have to
deal with the implementation details of larger and larger components of
the solution.
-- 
Peter da Silva.  `-_-'  peter@ferranti.com
+1 713 274 5180.  'U`  "Have you hugged your wolf today?"

cs450a03@uc780.umd.edu (04/17/91)

David Cok writes:

>It is my gut feel that for computations which are somewhat close to
>mathematics (e.g. sorting), the statement of a predicate giving a
>specification for a function can usually be considerably simpler than
>the statement of an algorithm for computing the result.

Using gut feelings with mathematics is a good way to learn, but not a
good way to be correct.  As I have found many times, from personal
experience :-/

As for this specific example:  in a dynamically typed language, you
can have a generic sort (which is presumably some hand-crafted marvel
with beautiful performance characteristics for both general case and
common specific cases).  So you could say  'sort blarg'.  And if you
have to give it any additional data, besides what you want sorted, it
would be to indicate some non-default sorting order (sort blarg by
date).
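
The closest C analogue I can think of is handing qsort() a comparison
function; the record type and the date field below are made up purely
for illustration.  A dynamically typed language lets you say the same
thing without spelling out the layout or the casts.

	#include <stdio.h>
	#include <stdlib.h>

	struct rec {                    /* hypothetical record type */
		long date;              /* days since some epoch */
		const char *name;
	};

	/* Non-default ordering: by date. */
	static int by_date(const void *a, const void *b)
	{
		const struct rec *ra = a, *rb = b;
		return (ra->date > rb->date) - (ra->date < rb->date);
	}

	int main(void)
	{
		struct rec blarg[] = {
			{ 7300, "gamma" }, { 6900, "alpha" }, { 7100, "beta" }
		};
		int i, n = sizeof blarg / sizeof blarg[0];

		qsort(blarg, n, sizeof blarg[0], by_date);   /* "sort blarg by date" */
		for (i = 0; i < n; i++)
			printf("%ld %s\n", blarg[i].date, blarg[i].name);
		return 0;
	}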

As for predicates being easier to specify than algorithms whose
results satisfy the predicate:  both can be implemented in a
programming language.  If implemented in executable form, the results
are tangible, and can be tested in the strictest sense of the word.

Sadly, many computer languages are considered inadequate for this kind
of development.

Raul Rockwell

kers@hplb.hpl.hp.com (Chris Dollin) (04/17/91)

David Gudeman says:
[quote]
   ]
   ]Usually not, because the specification is at a higher level and closer
   ]to the way a person thinks than an implementation.

   That is an argument for higher-level programming languages (with
   dynamic typing, for instance), not an argument for adding another
   language (the specification language) to the process of implementing
   software.  If your specification is so great, why not make that your
   programming language?
[end quote]

Because specifications need not be constructive. Consider

    fn sqrt
        is sqrt x pre x >= 0 return y post y * y = x

[This is a piece of HP-SL, a VDM derivative we're working on.] Any suggestions
on how to compile this to code that you'd use in Real Life? Or

    fn invf
        is invf (f, f') == (FORALL x. f' (f x) = x)

which tests two functions to see if one is the inverse of the other? I'm all in
favour of raising the level of programming languages, but the point of a
specification language is to describe the intended behaviour without being
constrained by the details of a language that is perforce intended to manifest
that behaviour on (usually) a sequential machine in (usually) a reasonably
short time.
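
(About the best an executable language can do with that sqrt spec is to
bolt the pre- and post-conditions on as run-time checks.  A C sketch,
with the tolerance EPS invented by me because floating point will never
give you y * y = x exactly, which is rather the point about
specifications not being tied to the machine:)

	#include <assert.h>
	#include <math.h>

	#define EPS 1e-9   /* arbitrary tolerance; the spec needs none */

	double checked_sqrt(double x)
	{
		double y;

		assert(x >= 0.0);                     /* pre:  x >= 0 */
		y = sqrt(x);
		/* post: y * y = x, weakened to a relative tolerance */
		assert(fabs(y * y - x) <= EPS * (x > 1.0 ? x : 1.0));
		return y;
	}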

[Incidentally, the type-checker tells me that the types of those functions are

fn invf : (_A -> _B) -> ((_B -> _A) -> Bool)
fn sqrt : Num -> Num

Just what I wanted. *In this case*, static type-checking is fine.]
--

Regards, Kers.      | "You're better off  not dreaming of  the things to come;
Caravan:            | Dreams  are always ending  far too soon."

new@ee.udel.edu (Darren New) (04/18/91)

>:Because "math" means "formal."  (I think the word "math" comes from
>:the name of the Arabic person who invented math, or something like that.)
>You're probably thinking of "algorithm".  

Actually, having found the door on which the original tidbit was taped,
I meant "algebra" which derives from "Al Jabar".        -- Darren
-- 
--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
----- Network Protocols, Graphics, Programming Languages, FDTs -----
     +=+=+ My time is very valuable, but unfortunately only to me +=+=+
+=+ Nails work better than screws, when both are driven with screwdrivers +=+

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (04/19/91)

In article <51179@nigel.ee.udel.edu> new@ee.udel.edu (Darren New) writes:
> >:Because "math" means "formal."  (I think the word "math" comes from
> >:the name of the Arabic person who invented math, or something like that.)
> >You're probably thinking of "algorithm".  
> Actually, having found the door on which the original tidbit was taped,
> I meant "algebra" which derives from "Al Jabar".        -- Darren

``Algorithm'' also comes from an Arabic name. Naturally, both of these
biographical tidbits may be found in Knuth.

---Dan

torek@elf.ee.lbl.gov (Chris Torek) (04/20/91)

>In article <1726@optima.cs.arizona.edu> gudeman@cs.arizona.edu
>(David Gudeman) writes:
>>Finally, consider the relationship of the proof to the real world ...
>>you will have to deal with the philosophy of mathematical models.

(This goes well off the beam of comp.lang.misc.  You must also deal
with the question of objective reality, and whether, if it exists at
all, it can be perceived.  In order to have useful discussions about
computer theory one has to make a `leap of faith', however justified
that leap may be in one's own experiences, and assume that all this
mathematical stuff really reflects---or perhaps even molds---the `real
world', whatever that is.  For many thousands of words on such subjects
please move to talk.philosophy.* [and say hi to my brother Paul :-) ].)

In article <29344@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu
(victor yodaiken) writes:
>Not sure what you [DG] are getting at here. Of course mathematical notions
>are just abstractions from reality. But, they may be useful abstractions. 
>An apple does not care about Newton's laws, but the laws give us a good
>tool for understanding how the apple might fall.

This is actually a better analogy than one might first notice.  We now
believe that Newtonian physics does *not* accurately describe the real
world---but Newtonian physics is nonetheless useful in everyday
situations.  We may use Newton's 3 laws, but we should be aware of
their limits before blindly applying them.

Likewise---and perhaps this is what David Gudeman is talking about---we
can use mathematical models of computer behaviour quite effectively,
but we should keep in mind that they are only models.  Those who work
with hardware will attest to the fact that even a simple NAND gate is
not *really* described by

		 { 0  if a = 0 and b = 0
	a NAND b {
		 { 1  otherwise

(for instance, we have completely ignored the issue of indeterminate
inputs, fan-out, noisy power supplies, and so on).

We make assumptions because that makes the problems we want to solve
tractable.  If we can then `prove' mathematically that the model system
works, we can have some degree of confidence that the real system works
(assuming there is a real system, objective reality, etc.).

To get back onto the topic, then, people advocate `proving programs
correct' because they believe that the activity really does lead to
correct programs.  Others advocate testing because they believe that
testing leads to correct programs.  Both groups are partially right:
the process of reasoning about programs often turns up bugs, and the
process of testing often turns up bugs.  Both methods are fallible,
because they are done by people, and because they make assumptions that
are sometimes false.

My own position on this is that both methods should be used in
moderation, and in proportion to the importance of the problem.  If the
consequences of a bug are minor, it is silly to expend a great deal of
energy avoiding it; if the consequences are serious, it is unwise or
irresponsible not to take safety measures.  Of course, the proper
place at which the line should be drawn is a matter of much dispute.
-- 
In-Real-Life: Chris Torek, Lawrence Berkeley Lab CSE/EE (+1 415 486 5427)
Berkeley, CA		Domain:	torek@ee.lbl.gov

markh@csd4.csd.uwm.edu (Mark William Hopkins) (04/20/91)

In article <12297@dog.ee.lbl.gov> torek@elf.ee.lbl.gov (Chris Torek) writes:
>To get back onto the topic, then, people advocate `proving programs
>correct' because they believe that the activity really does lead to
>correct programs.  Others advocate testing because they believe that
>testing leads to correct programs.  Both groups are partially right:
>the process of reasoning about programs often turns up bugs, and the
>process of testing often turns up bugs.  Both methods are fallible,
>because they are done by people, and because they make assumptions that
>are sometimes false.

After spending the better part of a week testing software that I have already
proven correct to find bugs that were never in the software but actually a
result of the test conditions not exactly matching running conditions, and
having this happen on *three* consecutive occasions, I have no sympathy for
the testing point of view.

If it's proven mathematically, it *will* work.  And if it doesn't, then it's
provably the fault of the underlying hardware (or compiler or assembler).
And yes, I've run into these conditions countless times already.

But the point is, after verification, responsibility for proper functioning is
no longer the software writer's job.  That job is complete.

Software is valid, even when it does not function normally due to faulty
hardware or translators.

torek@elf.ee.lbl.gov (Chris Torek) (04/21/91)

(First one small note: my NAND definition was wrong; it should have read
`0 if a=1 and b=1, 1 otherwise'.  I am surprised that, given the usual
usenet tendency to turn one error into `your entire argument is wrong',
no one has done this yet. :-) )

>In article <12297@dog.ee.lbl.gov> I wrote:
>>[Some] people advocate `proving programs correct' because they believe
>>that the activity really does lead to correct programs.  Others advocate
>>testing because they believe that testing leads to correct programs.
>>Both groups are partially right: the process of reasoning about programs
>>often turns up bugs, and the process of testing often turns up bugs.
>>Both methods are fallible, because they are done by people, and because
>>they make assumptions that are sometimes false.

In article <11204@uwm.edu> markh@csd4.csd.uwm.edu (Mark William Hopkins)
writes:
>After spending the better part of a week testing software that I have already
>proven correct to find bugs that were never in the software but actually a
>result of the test conditions not exactly matching running conditions, and
>having this happen on *three* consecutive occasions, I have no sympathy for
>the testing point of view.

This happens to me as well: a routine that I have `reasoned' correct fails
a test, and it turns out that the test is wrong.

>If it's proven mathematically, it *will* work.

(To argue David Gudeman's side:)  *What* will work?

If your proof is correct, you have shown that your program P1 in
language L1 is mathematically equivalent to some other program P2 in
some other language L2.  There are no guarantees that the assumptions
made in the process (that compiler C1 for language L1 `works', that
program P2 has the desired effect in all circumstances, and so on) are
all correct.

>And if it doesn't, then it's provably the fault of the underlying
>hardware (or compiler or assembler).

Quite so: software does not exist in a vacuum (unless perhaps Objective
Reality does not exist: perhaps I myself do not exist except as a
concept in a vacuum---but never mind that).

Moreover, you may have made an error in your proof (mathematicians *do*
make errors [why, I remember back in 1802 when . . . :-) ]), or your
specification language L2 and program P2 may not describe what was
really supposed to have been programmed (i.e., the specification may be
in error; perhaps it omits a detail that should have been included, or
maybe it is just entirely wrong).

>But the point is, after verification, responsibility for proper
>functioning is no longer the software writer's job.  That job is complete.

If you wish to partition the jobs, you can certainly make this
statement.  Nonetheless I, for one, would not want to be put under an
X-ray machine running software that was proven but never tested.
Whether the software writer's job is complete or not, *someone* should
test such a critical system as this one.  Indeed, there should be
redundant error checks all down the line: from the specifications
through the hardware and software all the way to the machine's
operator.

>Software is valid, even when it does not function normally due to faulty
>hardware or translators.

Yet software that is `valid' by this definition can still `do the wrong
thing', even when the hardware or translators work.  In addition, if
the hardware and translators are fallible (and they are!), critical
software should incorporate redundant error checks anyway.

Again, this draws perilously close to the argument over where the
`line between wasted effort and irresponsibility' is to be drawn.

Remember, I am not claiming that proving programs correct is
valueless.  I (and I think David Gudeman as well) only claim that
a correctness proof is not a `magic bullet', and that it should be
used, where appropriate, in conjunction with other `bug avoidance'
techniques.  `When' and `how much' proof and/or testing is adequate
is problem-dependent.
-- 
In-Real-Life: Chris Torek, Lawrence Berkeley Lab CSE/EE (+1 415 486 5427)
Berkeley, CA		Domain:	torek@ee.lbl.gov

kpv@ulysses.att.com (Phong Vo[drew]) (04/22/91)

In article <12318@dog.ee.lbl.gov>, torek@elf.ee.lbl.gov (Chris Torek) writes:
> >If it's proven mathematically, it *will* work.
> (To argue David Gudeman's side:)  *What* will work?
> If your proof is correct, you have shown that your program P1 in
> language L1 is mathematically equivalent to some other program P2 in
> some other language L2.  There are no guarantees that the assumptions
> made in the process (that compiler C1 for language P1 `works', that
> program P2 has the desired effect in all circumstances, and so on) are
> all correct.
> 
Just to make this concrete, the following example is mathematically correct
given all the usual definitions of the C operators. It is incorrect because
it is not portable.

	int strlen(register char *str)
	{
		register char	*sp = str-1;
		while(*++sp)
			;
		return(sp-str);
	}

The problem is in the assignment 'sp = str-1'. On some machines, this will
cause a core dump the first time the while() loop is executed, even though
the ++ should make sp valid.

guido@cwi.nl (Guido van Rossum) (04/22/91)

kpv@ulysses.att.com (Phong Vo[drew]) writes:
>Just to make this concrete, the following example is mathematically correct
>given all the usual definitions of the C operators. It is incorrect because
>it is not portable.

>	int strlen(register char *str)
>	{
>		register char	*sp = str-1;
>		while(*++sp)
>			;
>		return(sp-str);
>	}

>The problem is in the assignment 'sp = str-1'. On some machines, this will
>cause a core dump the first time the while() loop is executed, even though
>the ++ should make sp valid.

The program makes *another* assumption which makes it non-portable: it
assumes that the string size is less than the maximum integer, which
may be as small as 32767.

It is hard enough to prove that a given C program is correct if one
knows all the environment parameters (integer size, etc.).  How about
proving that a C program is correct under all legal combinations of
environment parameters?
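
For what it's worth, a version that sidesteps both traps is hardly
longer: no pointer is ever formed outside the string, and the count is
a size_t.  (Renamed so it doesn't collide with the library's strlen;
and there is still the residual assumption that the string fits in a
ptrdiff_t, so the environment never quite goes away.)

	#include <stddef.h>

	size_t my_strlen(const char *str)
	{
		const char *sp = str;        /* never strays outside the string */

		while (*sp != '\0')
			sp++;
		return (size_t)(sp - str);   /* size_t, not int */
	}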

--Guido van Rossum, CWI, Amsterdam <guido@cwi.nl>
"You're so *digital*, girl!"

schwartz@groucho.cs.psu.edu (Scott Schwartz) (04/22/91)

In article <14633@ulysses.att.com> kpv@ulysses.att.com (Phong Vo[drew]) writes:
   Just to make this concrete, the following example is mathematically correct
   given all the usual definitions of the C operators. It is incorrect because
   it is not portable.

The "usual" definitions usually assume various nonportable things;
part of the ANSI effort was to better specify the language.  As an
aside, K&R2 says that if the pointer resulting from address arithmetic
points outside the array, the effect is undefined.  Surely you
couldn't show that example to be "mathematically correct" given that
rule.

davis@barbes.ilog.fr (Harley Davis) (04/22/91)

In article <11204@uwm.edu> markh@csd4.csd.uwm.edu (Mark William Hopkins) writes:

   If it's proven mathematically, it *will* work.  And if it doesn't, then it's
   provably the fault of the underlying hardware (or compiler or assembler).
   And yes, I've run into these conditions countless times already.

   But the point is, after verification, responsibility for proper functioning is
   no longer the software writer's job.  That job is complete.

   Software is valid, even when it does not function normally due to faulty
   hardware or translators.

Sadly, for those of us who are paid to deliver working software, this
is not true.  The software has to work despite flaky hardware or
underlying levels which do not meet their (often poorly) documented
interfaces.

-- Harley Davis
--
------------------------------------------------------------------------------
nom: Harley Davis			ILOG S.A.
net: davis@ilog.fr			2 Avenue Gallie'ni, BP 85
tel: (33 1) 46 63 66 66			94253 Gentilly Cedex, France