[comp.lang.ada] Pre-condition vs. Post-condition

cml8@robin.cs.uofs.edu (Chris M. Little) (03/15/91)

Say we have a function CAPITAL which, given a country's name, returns its
capital city.  If the given country does not exist, an exception COUNTRY_ERROR
is raised.  Should the given country's presence be listed as a pre-condition
for this function, or should its absence (it doesn't exist) and the raising
of COUNTRY_ERROR be listed as a post-condition?

I brought this question up in class today and the outcome was a split decision.
I think exception raising and/or handling is as valid an outcome of a function
or procedure as any other outcome, so I'm tempted to cover the issue in the
post-condition comment.  My opponents believe that a function's pre-conditions
should be the conditions under which it would complete "normally", that is,
without any exceptions being raised.

I'd appreciate any insight on the issue.  E-mail please.  Thanks.

-- 
Chris Little, Graduate Assistant	-     CML8@JAGUAR.UOFS.EDU	(VMS)
Department of Computing Sciences	-     CML8@SCRANTON.BITNET	(VMS)
University of Scranton, Pennsylvania.	-     CML8@ROBIN.CS.UOFS.EDU    (UNIX)

mfeldman@seas.gwu.edu (Michael Feldman) (03/16/91)

>Say we have a function CAPITAL which, given a country's name, returns its
>capital city.  If the given country does not exist, an exception COUNTRY_ERROR
>is raised.  Should the given country's presence be listed as a pre-condition
>for this function, or should its absence (it doesn't exist) and the raising
>of COUNTRY_ERROR be listed as a post-condition?
>
>I brought this question up in class today and the outcome was a split decision.
>I think exception raising and/or handling is as valid an outcome of a function
>or procedure as any other outcome, so I'm tempted to cover the issue in the
>post-condition comment.  My opponents believe that a function's pre-conditions
>should be the conditions under which it would complete "normally", that is,
>without any exceptions being raised.

Hmmm. Interesting question. I have always taught - and thought of - pre-
conditions as a set of "contract terms" which, if they are met, would
obligate the function writer to write code that delivers the right results.
From a verification point of view, I think you are correct that raising
an exception is a _valid_ outcome of the function, and so the function has
to be tested with cases of "bad" input to check that the exception indeed
is raised under those conditions. If the pre- and post-conditions are used
to drive tests (or formal verification), I agree that _explicit_ exception-
raising by the function is a post-condition matter: it needs to be tested.

Failure of a caller to meet a pre-condition violates the contract between
function writer and function user, in such a way that the behavior of the
function is _unpredictable_; for example, the actual parameter may be
uninitialized. If the "garbage" in the parameter _happens_ to constitute
an in-range value, the function delivers a "correct" answer, coincidentally.
Otherwise, an exception may be _unexpectedly_ raised (constraint_error, say).
Since the caller has violated the contract, he gets what he deserves (an
unexpected propagation of an exception). So the pre-conditions are really
saying "if you violate these, all bets are off on what this function does."

This argument makes sense to me from a theoretical standpoint. From a 
practical standpoint, in describing the interface to a function, how does
one distinguish between violations that result in a _predictable_ behavior
and those that do not? I can see why your students may have disagreed.
It's a confusing matter. I'm posting this to the net to provoke other
readers to join this thread if they are interested.

Mike Feldman

knoll@coopn1.csc.ti.com (Ed Knoll @(719)593-5182) (03/16/91)

>is raised.  Should the given country's presence be listed as a pre-condition
>for this function, or should its absence (it doesn't exist) and the raising
>of COUNTRY_ERROR be listed as a post-condition?

The answer is that neither a pre- nor a post-condition is best in general;
rather, the choice depends on the context of the situation.  How is the module being 
used?  What is the scope of the module?  Is it a reusable component?  
Does it interface with other modules in a known or "friendly" environment, 
or is it interfacing/interacting with external subsystems of unknown quality?

For a module which is part of the internal workings of a subsystem, a 
pre-condition would be more reasonable.  It makes more sense for every
module to ensure that the data it passes on or generates is always correct.
If modules of the same working subsystem have to verify all inputs, too 
much overhead/coupling is incurred.

However, if a module is interacting with components external to a subsystem
and these components are unknown/unrelated to the local subsystem, it makes
more sense to do the error checking and to document the exceptions as part of 
the behavior of the subsystem.  Well behaved subsystems under valid and 
invalid stimulus will be more portable/reusable than subsystems which react 
unpredictably in the presence of invalid stimulus.

Ed Knoll
Texas Instruments
knoll@coopn1.csc.ti.com

g_harrison@vger.nsu.edu (George C. Harrison, Norfolk State University) (03/17/91)

In article <2865@sparko.gwu.edu>, mfeldman@seas.gwu.edu (Michael Feldman) writes:
>>Say we have a function CAPITAL which, given a country's name, returns its
>>capital city.  If the given country does not exist, an exception COUNTRY_ERROR
>>is raised.  Should the given country's presence be listed as a pre-condition
>>for this function, or should its absence (it doesn't exist) and the raising
>>of COUNTRY_ERROR be listed as a post-condition?
>>
>>I brought this question up in class today and the outcome was a split decision.
>>I think exception raising and/or handling is as valid an outcome of a function
>>or procedure as any other outcome, so I'm tempted to cover the issue in the
>>post-condition comment.  My opponents believe that a function's pre-conditions
>>should be the conditions under which it would complete "normally", that is,
>>without any exceptions being raised.
> 
> Hmmm. Interesting question. I have always taught - and thought of - pre-
> conditions as a set of "contract terms" which, if they are met, would
> obligate the function writer to write code that delivers the right results.
> From a verification point of view, I think you are correct that raising
> an exception is a _valid_ outcome of the function, and so the function has
> to be tested with cases of "bad" input to check that the exception indeed
> is raised under those conditions. If the pre- and post-conditions are used
> to drive tests (or formal verification), I agree that _explicit_ exception-
> raising by the function is a post-condition matter: it needs to be tested.
> 

Lots of stuff deleted.  

This problem raises some interesting questions:  Should pre- and post-conditions
define the complete functionality of a subroutine?  Should a function which has
only one returned value (in Ada) be allowed to have a compound post-condition? 
And the old question: how exceptional must a situation be before an exception
should be used?


> This argument makes sense to me from a theoretical standpoint. From a 
> practical standpoint, in describing the interface to a function, how does
> one distinguish between violations that result in a _predictable_ behavior
> and those that do not? I can see why your students may have disagreed.
> It's a confusing matter. I'm posting this to the net to provoke other
> readers to join this thread if they are interested.
> 
> Mike Feldman

On a practical (or theoretical) view, the user probably should redo his
function as a procedure returning TWO values (the capital and a boolean object
SUCCESSFUL); write the usual pre- and post-conditions for that procedure; then
make a functional isomorphism back to the original function.  


Actually, IMHO, if a practical intent of the function IS to guard against
wrong countries, then a procedure might be better anyway.  
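
The procedure form suggested above might look like this (a sketch; the
parameter names and the use of STRING are illustrative, not part of the
original posting):

```ada
procedure CAPITAL (COUNTRY    : in  STRING;
                   CITY       : out STRING;
                   SUCCESSFUL : out BOOLEAN);
-- precondition  : TRUE (any string may be passed)
-- postcondition : if COUNTRY names a known country then
--                 SUCCESSFUL = TRUE and CITY holds its capital;
--                 otherwise SUCCESSFUL = FALSE and CITY is undefined.
```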

-- George C. Harrison                              -----------------------
----- Professor of Computer Science                -----------------------
----- Norfolk State University                     -----------------------
----- 2401 Corprew Avenue, Norfolk, Virginia 23504 -----------------------
----- INTERNET:  g_harrison@vger.nsu.edu ---------------------------------

holly@python.cis.ohio-state.edu (Joe Hollingsworth) (03/18/91)

In article <2865@sparko.gwu.edu> mfeldman@seas.gwu.edu () writes:
>>Say we have a function CAPITAL which, given a country's name, returns its
>>capital city.  If the given country does not exist, an exception COUNTRY_ERROR
>>is raised.  Should the given country's presence be listed as a pre-condition
>>for this function, or should its absence (it doesn't exist) and the raising
>>of COUNTRY_ERROR be listed as a post-condition?
>>
>>I brought this question up in class today and the outcome was a split decision.
>>I think exception raising and/or handling is as valid an outcome of a function
>>or procedure as any other outcome, so I'm tempted to cover the issue in the
>>post-condition comment.  My opponents believe that a function's pre-conditions
>>should be the conditions under which it would complete "normally", that is,
>>without any exceptions being raised.
>
>Hmmm. Interesting question. I have always taught - and thought of - pre-
>conditions as a set of "contract terms" which, if they are met, would
>obligate the function writer to write code that delivers the right results.
>From a verification point of view, I think you are correct that raising
>an exception is a _valid_ outcome of the function, and so the function has
>to be tested with cases of "bad" input to check that the exception indeed
>is raised under those conditions. If the pre- and post-conditions are used
>to drive tests (or formal verification), I agree that _explicit_ exception-
>raising by the function is a post-condition matter: it needs to be tested.
....stuff deleted.....
>Mike Feldman

I'm glad you raised the point of formal verification.  I'd like to point
out how the use of exception handling complicates verification (be it
formal or informal).

Examine the following possible implementation for Pop (popping a stack).

procedure pop(s: stack)
-- this pop doesn't return the top of the stack, just discards it.
begin
   if(not empty(s)) then
      -- pop the stack
   else
      raise underflow
end pop;

The point here is that there are 2 distinct paths through the procedure
which will have to be verified with respect to the pre and post conditions
(pre and post conditions not given).  The two paths are easy to spot.

Now look at another implementation of Pop based on Booch's stack package:

procedure pop(s: stack)
-- this pop doesn't return the top of the stack, just discards it.
begin
   stack.top := stack.top - 1;
   exception
      when Constraint_Error => raise underflow
end pop;

Here there are also two paths through the procedure, but it's not so
obvious to the reader (verifier) how the path through the exception
handler gets taken.  The verifier has to realize that the variable
stack.top has been defined as Natural and that a Constraint_Error
will be raised by the run time system when trying to set it to -1.
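
For readers following along, here is a sketch of the sort of declarations
assumed above (MAX and the field names are illustrative; this is not
Booch's actual package):

```ada
MAX : constant := 100;
type ITEM_LIST is array (1 .. MAX) of INTEGER;
type STACK is record
   top   : NATURAL := 0;   -- NATURAL excludes negative values
   items : ITEM_LIST;
end record;
-- With top = 0 (an empty stack), "stack.top := stack.top - 1"
-- attempts to give a NATURAL the value -1; the run-time range
-- check fails, CONSTRAINT_ERROR is raised, and the handler above
-- turns it into underflow.
```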

Since the control flow in this example is not explicit, let's call
it implicit.  I believe that verifying (formally or informally) code
written using implicit control flow is harder than code using explicit
control flow.  If this point is valid, one might then believe that 
code written using implicit control flow stands a better chance of NOT
being verified correctly, especially if it is informally verified.  What's
more I also believe that using exceptions leads one toward this kind
of programming (i.e. depending on implicit control flow).

I'm a follower of many of the rules found in Kernighan & Plauger's
"The Elements of Programming Style"; two of those rules read as follows:

"Write clearly - don't be too clever."
"Say what you mean, simply and directly."

I believe that exception handling leads one to writing programs that are not
as clear as they could be (see the second example above which doesn't
say what it means simply and directly).

With respect to verification, if the verifier is mechanical, then it probably
doesn't matter if the code uses implicit or explicit control, the
verifier will be able to handle it.  But, when was the last time any of
us used a mechanical verifier on our Ada programs?  Chances are
the verifier is the person writing the code, and that person is doing it
informally.  All the more reason to "Write clearly..." and "Say what you
mean, simply and directly."

The above arguments lead me to believe that exception handling should be used
only sparingly (if at all)!

(Mike, you wanted to get a debate going on this?  The above opinions
should help!)

Joe 

Joe Hollingsworth  Computer and Information Science @ OSU
holly@cis.ohio-state.edu

NCOHEN@IBM.COM ("Norman H. Cohen") (03/18/91)

For exceptions raised by predefined basic operations, it is generally
impossible to write a precondition guaranteeing that the exception WILL
be raised.*  In such cases it is clearly appropriate to incorporate in
the precondition sufficient conditions to guarantee that no predefined
exception** will be raised.

For programmer-defined exceptions raised under predictable circumstances,
it is reasonable to incorporate in formal specifications the conditions
under which a subprogram completes normally, the circumstances under
which such exceptions are raised, and the effect on the program state
in each case.  The approach I've adopted in developing proof rules for
Ada is to assume a separate precondition/postcondition pair for each such
case:

   function CAPITAL (COUNTRY: STRING) return STRING;

   -- Let Country_Map be the mapping { "USA" -> "Washington",
   -- "Mongolia" -> "Ulan Bator", ... }.

   -- Precondition for normal completion:
   --    COUNTRY in domain Country_Map
   -- Postcondition for normal completion:
   --    function_result = Country_Map(COUNTRY)
   --
   -- Precondition for completion with COUNTRY_ERROR raised:
   --    COUNTRY not in domain Country_Map
   -- Postcondition for completion with COUNTRY_ERROR raised:
   --    TRUE  [no change of state]

It is certainly possible to incorporate all this in a postcondition:

   -- Precondition: TRUE
   -- Postcondition:
   --      (    COUNTRY in domain Country_Map
   --       and exception_raised=none
   --       and function_result=Country_Map(COUNTRY))
   --   or (    COUNTRY not in domain Country_Map
   --       and exception_raised=COUNTRY_ERROR)

However, I find this presentation more obscure for the human reader.
The first approach more naturally decomposes the behavior of the function
into separate cases of interest.  For mechanical program analysis (e.g.
for the generation of verification conditions) the first approach is also
clearly preferable, because it allows proof rules to refer to the
exception status at the end of a statement.  For example, if Pre(S,P,E)
is the precondition for statement S to complete with postcondition P and
exception status E, then the proof rule for a sequence of two statements
is:

  (1) Pre(S1;S2, P, normal) = Pre(S1, Pre(S2, P, normal), normal)

  (2) If E is the raising of some exception,
         Pre(S1;S2, P, E) =
            Pre(S1, P, E) or Pre(S1, Pre(S2, P, E), normal)

(In the exception-raising case, case (2), the left operand of "or"
corresponds to the case where S1 raises the exception and the right
operand corresponds to the case where S1 completes normally and then
S2 raises the exception.)

----------

*-Among the reasons for this are the liberty provided by RM 4.5.7(7) to
  ignore real overflow, by RM 11.6(6) to use a wider type for computation
  of intermediate results, and by RM 11.6(7) to delete an operation whose
  "only possible effect is to propagate a predefined exception."  Also
  there are cases in which DIFFERENT exceptions may be raised for
  implementation-dependent reasons, such as the order in which
  subexpressions are evaluated or an implementation's adherence to
  nonbinding interpretation AI-00387, which recommends the raising of
  CONSTRAINT_ERROR wherever the  RM calls for NUMERIC_ERROR to be raised.

**-This really means "no predefined exception other than STORAGE_ERROR
   and IO_EXCEPTIONS.DEVICE_ERROR."  These two exceptions occur for
   reasons that cannot be expressed in terms of the abstract program
   state.  The raising of one of these exceptions is best viewed as an
   unpredictable event that causes the underlying abstract machine to
   break.  It is possible to factor the possibility of such unpredictable
   events into a formal proof of program behavior, but a simpler and
   equally useful approach is to reason about program behavior under the
   assumption that such events will not occur and to prominently
   document the fact that this assumption has been made.

nwebre@polyslo.CalPoly.EDU (Neil Webre) (03/19/91)

When the questions of exceptions and pre- and post-conditions came up,
I answered by mail to the poster. Since there have been some replies
via news, I am posting my reply which follows:

To: cml8@robin.cs.uofs.edu
Subject: Re: Pre-condition vs. Post-condition
Organization: Cal Poly State Univ,CSC Dept,San Luis Obispo,CA 93407

   If a pre-condition is not met, the result of execution is undefined
   (maybe unspecified is a better word). In the case of exceptions, if
   you write into your postcondition the fact that an exception will
   occur in certain cases, then it seems to me that you have written
   a specification of results for when the "error" condition happens.
   Therefore the error condition was not a precondition, since your
   algorithm has a well-defined and specified result for that case. 

   I am in the process of writing a textbook. What we have done is to
   write the specs of procedures and functions in the following form:

   procedure kaboom(...);
   -- precondition : ....
   -- postcondition : ....
   -- exceptions : ...

   Properly speaking, the exceptions clause is part of the postcondition.
   However, since exceptions are a standard way of handling "errors"
   in Ada, we broke them out into a separate clause.
   Preconditions are reserved to screen out conditions that truly have no
   defined results. It is the job of the client to assure that the
   precondition is met prior to execution of the procedure or function.
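
   Applied to the CAPITAL example from the start of this thread, that
   form might read as follows (a sketch; the wording of the clauses is
   illustrative):

```ada
function CAPITAL (COUNTRY : STRING) return STRING;
-- precondition  : (none; any string may be passed)
-- postcondition : returns the capital city of COUNTRY
-- exceptions    : COUNTRY_ERROR raised if COUNTRY does not
--                 name a known country
```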

      

sss@nic.cerf.net (Marlene M. Eckert) (03/19/91)

How about exceptions should be raised only in _EXCEPTIONAL_ 
situations?  Reaching the end-of-file or trying to POP off an
empty stack are NOT exceptional conditions.
  
I would go so far as to say an exception should never be raised after 
a system has been delivered.  Don't get me wrong, I love Ada exception
handling...  Exception handling made my last large integration go much 
more smoothly than I expected. 

Any comments??

Michael Reznick
Structured Systems & Software (3S)
sss@cerf.net
----------------------------------------------------------------------
Standard disclaimer...
----------------------------------------------------------------------

jls@rutabaga.Rational.COM (Jim Showalter) (03/19/91)

>procedure pop(s: stack)
>begin
>   if(not empty(s)) then
>      -- pop the stack
>   else
>      raise underflow
>end pop;

>procedure pop(s: stack)
>begin
>   stack.top := stack.top - 1;
>   exception
>      when Constraint_Error => raise underflow
>end pop;

Note that in the second case the procedure is faster, since it doesn't
have to do the check first. Not only is it faster, it is safer, since
without using tasks you cannot guarantee that between the time you
checked and the time you popped it hadn't been popped elsewhere. For
both of these reasons, I'd say the second version is far better than
the first version, and that the original poster's thesis that exceptions
should be used rarely if ever has been contradicted by the very examples
provided to support his/her case!

P.S. You need an "end if" in the first example.
--
***** DISCLAIMER: The opinions expressed herein are my own. Duh. Like you'd
ever be able to find a company (or, for that matter, very many people) with
opinions like mine. 
              -- "When I want your opinion, I'll read it in your entrails."

holly@python.cis.ohio-state.edu (Joe Hollingsworth) (03/19/91)

In article <jls.669368339@rutabaga> jls@rutabaga.Rational.COM (Jim Showalter) writes:
>>procedure pop(s: stack)
>>begin
>>   if(not empty(s)) then
>>      -- pop the stack
>>   else
>>      raise underflow
     end if;
>>end pop;
>
>>procedure pop(s: stack)
>>begin
>>   stack.top := stack.top - 1;
>>   exception
>>      when Constraint_Error => raise underflow
>>end pop;
>
>Note that in the second case the procedure is faster, since it doesn't
>have to do the check first. 

I figured I'd see this argument, but I don't buy it.

1) Referring back to Kernighan & Plauger's "The Elements of Programming
Style," I'll quote another rule:

"Write clearly - don't sacrifice clarity for 'efficiency'."


2) Ok, so you think that the second version of Pop is faster
because it is not doing the test.  Well there may not be an explicit 
test, but there sure has to be an implicit one.  The compiler has
to generate code to test to see if a constraint error needs to be
raised, i.e. there is run time checking going on here.  And you 
sure can't use the SUPPRESS pragma to get rid of it.

There is testing going on here, either you see it
explicitly in the code, or it gets done for you by run time checks.


>Not only is it faster, it is safer, since
>without using tasks you cannot guarantee that between the time you
>checked and the time you popped it hadn't been popped elsewhere. 

The examples given were for the sequential world, there was no
mention in the original posting or my follow up about tasking.

>For both of these reasons, I'd say the second version is far better than
>the first version, and that the original poster's thesis that exceptions
>should be used rarely if ever has been contradicted by the very examples
>provided to support his/her case!

I don't think the follow-up poster has really come up with a good
case.  His/her case is mainly based on efficiency concerns,
which obviously don't hold, since run time checks have to be performed.
And, the follow-up poster's far better version is still not as clear
as the first version.


>P.S. You need an "end if" in the first example.
Thank you, I've edited the above example to include "end if."


Joe 
holly@cis.ohio-state.edu

jng@sli.com (Mike Gilbert) (03/20/91)

>>Say we have a function CAPITAL which, given a country's name, returns its
>>capital city.  If the given country does not exist, an exception
>>COUNTRY_ERROR
>>is raised.  Should the given country's presence be listed as a pre-condition
>>for this function, or should its absence (it doesn't exist) and the raising
>>of COUNTRY_ERROR be listed as a post-condition?
>>
>> ...
>
>Hmmm. Interesting question. I have always taught - and thought of - pre-
>conditions as a set of "contract terms" which, if they are met, would
>obligate the function writer to write code that delivers the right results.
> ...
>Failure of a caller to meet a pre-condition violates the contract between
>function writer and function user, in such a way that the behavior of the
>function is _unpredictable_; for example, the actual parameter may be 
>uninitialized. If the "garbage" in the parameter _happens_ to constitute
>an in-range value, the function delivers a "correct" answer, coincidentally.
>Otherwise, an exception may be _unexpectedly_ raised (constraint_error, say).
>Since the caller has violated the contract, he gets what he deserves (an
>unexpected propagation of an exception). So the pre-conditions are really
>saying "if you violate these, all bets are off on what this function does."
>
>This argument makes sense to me from a theoretical standpoint.  From a 
> practical standpoint, in describing the interface to a function, how does
>one distinguish between violations that result in a _predictable_ behavior
>and those that do not?
>
>Mike Feldman

I agree with Mike Feldman's description of a function's pre-conditions as a
contract, which, if satisfied, will cause the function to produce the
documented post-conditions, and, if violated, will produce an _unpredictable_
result.

Given this point of view, if a function guarantees to raise a specific
exception under certain pre-defined conditions, then the raising of that
exception must be considered to be part of the contract.  Because of the
guarantee of an exception, this case is very different from a function's
unpredictable behavior if preconditions are violated.

Thus, to answer the original question, the non-existence of a country causing
COUNTRY_ERROR to be raised should be listed as a post-condition.

If COUNTRY_ERROR is a post-condition, then the programmer has several options
for how to handle the possible COUNTRY_ERROR exception in the calling context:

	a If the programmer can guarantee that only valid countries will be
	  passed to CAPITAL, then, by the terms of the post-condition, the 
	  function will never raise COUNTRY_ERROR, and the calling context
	  doesn't have to allow for it.

	b If the programmer can't absolutely guarantee that the country name
	  is valid, but he doesn't expect to pass an invalid one, then the
	  choice of whether the calling context should handle COUNTRY_ERROR
	  depends on how robust the programmer wishes to make the program.

	  (For example, the country name could be generated by another function
	  which has not been formally verified but is believed to generate
	  correct output.)

	c If the programmer has no idea whether the country name is valid, then
	  the calling context had better handle COUNTRY_ERROR.  (For example,
	  the country name could be read as input from a user.)
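
In Ada, option "c" might look like this sketch (Name and the message
text are illustrative):

```ada
begin
   Put_Line (CAPITAL (Name));
exception
   when COUNTRY_ERROR =>
      Put_Line ("No such country: " & Name);
      -- recover here, e.g. prompt the user again
end;
```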

Contrast these options with an alternate specification of CAPITAL in which the
country name must be valid as a pre-condition, and thus an invalid country name
violates the function's contract.  With this specification, the function
CAPITAL will be _unpredictable_ if passed an invalid country name.  That is,
CAPITAL _could_ raise COUNTRY_ERROR, it _could_ raise any other exception, it
_could_ always return "SHANGRI-LA", it _could_ go into a loop, etc. etc.

So, if COUNTRY_ERROR is not a possible post-condition, then options "b" and
"c" above are not possible, because the programmer can't count on COUNTRY_ERROR
being raised.

What this all comes down to is that the post-conditions are what a function's
user can _count on_ given input that satisfies the pre-conditions.  Thus, since
the original question stated that COUNTRY_ERROR _is raised_ for a non-existent
country, then that specific exception should be listed as a post-condition.

Alternatively, if the original question had said something like "we've
currently implemented CAPITAL to raise COUNTRY_ERROR if the input country
doesn't exist, but we don't want to promise that it will always do so," then
users of CAPITAL would _not_ be able to count on that behavior, and then the
appropriate specification would be a pre-condition that the country must exist.

BTW, this is a distinction with a great deal of practical importance.  Many
large integrated software systems (e.g., a complex database package modified to
support distributed data by using a commercial network package) develop errors
because the integrator counts on behavior that a software package _currently_
exhibits (esp. under unusual circumstances), but that the developer never
intended to _guarantee_.  Then, when the next release of the software package
comes out, with different behavior under those circumstances, the integrated
system fails.

Mike Gilbert
Software Leverage

mfeldman@seas.gwu.edu (Michael Feldman) (03/20/91)

In article <311@nic.cerf.net> sss@nic.cerf.net (Marlene M. Eckert) writes:
>How about exceptions should be raised only in _EXCEPTIONAL_ 
>situations?  Reaching the end-of-file or trying to POP off an
>empty stack are NOT exceptional conditions.
>  
Well, this is getting to be an interesting thread on exception-handling
philosophy. I disagree with you. A client of a stack package which 
- due to a logic bug in the client algorithm - tries to pop a stack which
turns out to be empty, is indeed committing an error. IMHO this is indeed
an exceptional situation, and raising Stack_Underflow or whatever is
quite appropriate. The client should use a Stack_Is_Empty boolean
function to test for the empty condition, but s'pose he doesn't?
IMHO _both_ entities should be exported from a stack package.
I agree that a program that tests for the empty condition by trying to
pop and then handling the exception is doing violence to exceptions.

Roughly the same is true for end-of-file conditions. Text_IO exports a
perfectly good function End_Of_File for this purpose. Nevertheless,
s'pose a client of Text_IO screws up and doesn't test, or tests in
the wrong place? The package should raise End_Error for this unintentional
attempt to read past EOF. As in the stack case, one should NOT write
clients that test for normal EOF by just reading and reading until the
exception is raised. Once again, that's abusive.

Here's a more obvious one:

Given TYPE Days IS (Mon, Tue, Wed, Thu, Fri, Sat, Sun);

we find Tomorrow by writing

   IF Today = Days'Last THEN 
     Tomorrow := Days'First;
   ELSE
     Tomorrow := Days'Succ(Today);
   END IF;

NOT

   BEGIN
     Tomorrow := Days'Succ(Today);
   EXCEPTION
     WHEN Constraint_Error => 
       Tomorrow := Days'First;
   END;

Some bit-fiddlers have argued that the latter is slightly more efficient,
but I still think it's abusive. Monday follows Sunday EVERY WEEK. The
only thing surprising about it is that Ada doesn't allow cyclic types!
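
(For what it's worth, the wraparound can also be written with neither an
IF nor a handler, using the 'Pos and 'Val attributes; a sketch:)

```ada
   -- 7 is the number of values in Days
   Tomorrow := Days'Val ((Days'Pos (Today) + 1) MOD 7);
```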

Mike Feldman

sampson@cod.NOSC.MIL (Charles H. Sampson) (03/20/91)

In article <311@nic.cerf.net> sss@nic.cerf.net (Marlene M. Eckert) writes:
>How about exceptions should be raised only in _EXCEPTIONAL_ 
>situations?  Reaching the end-of-file or trying to POP off an
>empty stack are NOT exceptional conditions.

     O. K., but that just pushes the issue off to deciding what the word
_exceptional_ means.  The definition I prefer to use is: "An exceptional
condition is one that occurs infrequently or unexpectedly."  (Not original,
but I forgot whom I stole it from.  I think it was John Barnes.)  When
teaching Ada, immediately after giving that definition I point out to the
students that it does not require all exceptional conditions to be handled
by exceptions.

     What's happening here is that we're in an area where a lot of design
decisions have to be made.  I'm not sure that there is a rule that can be
applied to all cases and I am sure that if there is one we haven't found
it yet.  Unlike Mike Feldman, I have no philosophical problem with using
End_Error to detect end-of-file or stack underflow to determine that the
stack is now empty, both satisfying the _infrequent_ criterion.  (I doubt
that I would ever use the latter myself, but my reasons are more esthetic
than anything else.)  Before I would condemn these uses in any particular
situation, I would want to hear the reasons for them.

     This has been an interesting thread to follow as we grapple with
this problem.  I particularly appreciate the fact that very few dogmatic
positions have been put forward.

                              Charlie

jls@rutabaga.Rational.COM (Jim Showalter) (03/20/91)

>Reaching the end-of-file or trying to POP off an
>empty stack are NOT exceptional conditions.

How do you justify this? Exceptions protect clients from incorrect
algorithmic implementation. If a client attempts to read past end of
file or pop past empty stack, it should be notified of the fact,
since the client is in error. Other mechanisms (e.g. status flags)
must be checked by the client to be effective (polling model), whereas
exceptions cannot be overlooked (interrupt model). Thus, exceptions
are more fail-safe.
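
The two models can be contrasted in an Ada spec (a sketch only; the package and names here are invented for illustration, not taken from any poster's code):

```ada
package Stacks is
   type Stack is limited private;
   Underflow : exception;

   -- Polling model: the caller must remember to test Ok on every call;
   -- a forgotten test lets a failed Pop go unnoticed.
   procedure Pop (S : in out Stack; Item : out Integer; Ok : out Boolean);

   -- Interrupt model: raises Underflow on an empty stack; a caller
   -- that overlooks the empty case cannot silently continue.
   procedure Pop (S : in out Stack; Item : out Integer);
private
   type Int_Array is array (1 .. 100) of Integer;
   type Stack is record
      Data : Int_Array;
      Top  : Natural := 0;
   end record;
end Stacks;
```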

You also say that you don't think an exception should ever be raised
after a system is delivered. I don't see how you can meet this constraint.
For example, consider interaction with a user, a la Enumeration_Io.
If the user enters bogus data--which you have no control over from
within the program--an exception will be raised. This is not only
unavoidable, but seems desirable to me.
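
In Ada that interaction might be sketched as follows (the Color type and messages are invented for illustration; Data_Error is what Enumeration_IO's Get raises on bogus input):

```ada
with Text_IO;
procedure Ask_Color is
   type Color is (Red, Green, Blue);
   package Color_IO is new Text_IO.Enumeration_IO (Color);
   Choice : Color;
begin
   loop
      begin
         Text_IO.Put ("Color? ");
         Color_IO.Get (Choice);
         exit;                       -- good input: leave the loop
      exception
         when Text_IO.Data_Error =>  -- bogus input: discard and reprompt
            Text_IO.Skip_Line;
            Text_IO.Put_Line ("Please enter Red, Green, or Blue.");
      end;
   end loop;
end Ask_Color;
```
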
--
***** DISCLAIMER: The opinions expressed herein are my own. Duh. Like you'd
ever be able to find a company (or, for that matter, very many people) with
opinions like mine. 
              -- "When I want your opinion, I'll read it in your entrails."

jls@rutabaga.Rational.COM (Jim Showalter) (03/21/91)

>1) Refering back to Kernighan & Plauger's "The Elements of Programming
>Style," I'll quote another rule:

>"Write clearly - don't sacrifice clarity for 'efficiency'."

Is this Kernighan of Kernighan & Ritchie? (If so, he doesn't follow
his own advice.)

>2) Ok, so you think that the second version of Pop is faster
>because it is not doing the test.  Well there may not be an explicit 
>test, but there sure has to be an implicit one.  The compiler has
>to generate code to test to see if a constraint error needs to be
>raised, i.e. there is run time checking going on here.

Yes, but the checking doesn't have to involve the overhead of a context
switch, as is required in your version with the call to another
subprogram. So my version is still faster, even with the checking.

>>Not only is it faster, it is safer, since
>>without using tasks you cannot guarantee that between the time you
>>checked and the time you popped it hadn't been popped elsewhere. 

>The examples given were for the sequential world, there was no
>mention in the original posting or my follow up about tasking.

Yeah, but if you do it right the first time, migrating to the
parallel world isn't a maintenance nightmare.

>I don't think the follow-up poster has really come up with a good
>case.  His/her case is mainly based on efficiency concerns,

Well, I didn't make the case, but I actually think the version with
the exception is at least as easy to understand as the first version.
In this particular case the distinction is not as great, but consider
the classic "if/then" checking that is performed in languages without
exceptions:

      begin
        if some_precondition and then
           some_other_precondition and then
           yet_another_precondition then
             do_what_you_really_wanted_to_do;
        end if;
      end;

Contrast this to the exception version:
     
      begin
        do_what_you_really_wanted_to_do;
      exception
        when this_problem_occurs =>
          take_corrective_action;
        when that_problem_occurs =>
          take_other_corrective_action;
      end;

I argue that it is far easier in the exception case to figure out
what is supposed to happen, and that the reader can skip all the
error handling/checking stuff if he/she is just interested in the
main control flow. I've seen code written with so much precondition
checking that finding the actual statement that did anything was like
finding a needle in a haystack. With exceptions I don't have this
problem.

Furthermore, consider the use of procedures that raise exceptions
with the intent that they express the preconditions required:

      procedure assert_condition_valid (...) is
      begin
        if not some_precondition then
          raise this_problem_occurs;
        end if;
      end assert_condition_valid;

      procedure assert_another_condition_valid (...) is... -- Similar.

      begin
        assert_condition_valid (...);
        assert_another_condition_valid (...);
        do_what_you_really_wanted_to_do;
      exception
        when this_problem_occurs =>
          take_corrective_action;
        when that_problem_occurs =>
          take_other_corrective_action;
      end;

Now in this case, I not only can find what is really being done,
I can also find out what the preconditions are, all without getting
snarled up in N levels of if-then checking. It reads well, it is
easy to maintain, and it depends on exceptions.
--
***** DISCLAIMER: The opinions expressed herein are my own. Duh. Like you'd
ever be able to find a company (or, for that matter, very many people) with
opinions like mine. 
              -- "When I want your opinion, I'll read it in your entrails."

jls@rutabaga.Rational.COM (Jim Showalter) (03/21/91)

>Some bit-fiddlers have argued that the latter is slightly more efficient,

BIT FIDDLER!?!?!? BIT FIDDLER!?!?!?!?

Stab me in the heart and TWIST THE KNIFE, why don't you?

Sheesh.

Of all the low blows over the years, this has GOT to take the cake.
I realize I made an argument based on efficiency, but I assure you
that was a momentary lapse. I've been accused of over-abstraction
before, but never bit fiddling. I HATE bits--if there was a way
to run software without hardware I'd jump at it.

I'm a software architect, software process and methodology consultant,
and OO/Ada trainer. Efficiency is about 96th on my list of concerns:
I generally tell people the way to improve performance is to bid faster
hardware, not worry about this or that register allocation, word alignment,
or whatever.

Bit fiddler.

Sheesh.
--
***** DISCLAIMER: The opinions expressed herein are my own. Duh. Like you'd
ever be able to find a company (or, for that matter, very many people) with
opinions like mine. 
              -- "When I want your opinion, I'll read it in your entrails."

jls@rutabaga.Rational.COM (Jim Showalter) (03/21/91)

>When teaching Ada, immediately after giving that definition I point out to the
>students that it does not require all exceptional conditions to be handled
>by exceptions.

Exactly. Some are better handled by status parameters, or status fields
in abstract data types, or whatever.

>     What's happening here is that we're in an area where a lot of design
>decisions have to be made.  I'm not sure that there is a rule that can be
>applied to all cases and I am sure that if there is one we haven't found
>it yet.

Exactly. Although I do think there are heuristics, such as the infrequency
criterion. For example, I have no problem with blowing up on underflow
instead of pre-checking it. But I wouldn't use an exception to implement
a cyclic type, since the whole point of a cyclic type is that it will
be cycled through, so using the exception on 'succ is poor form.

In short, I use exceptions whenever their being raised would signal to
someone in a debugger that something undesirable has occurred. Underflowing
a stack is undesirable. Cycling a type is normal, so trapping that in
a debugger would confuse the hell out of the maintenance programmer (it
brings in the oxymoronic notion of a "normal exception"!).
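
For instance, a wrap-around successor for an enumeration can be written arithmetically, with no exception in sight (a sketch; the Day type is invented for illustration):

```ada
procedure Cyclic_Demo is
   type Day is (Mon, Tue, Wed, Thu, Fri, Sat, Sun);

   -- Normal cycling: wrap with 'Pos/'Val arithmetic; no exception involved.
   function Next (D : Day) return Day is
   begin
      return Day'Val ((Day'Pos (D) + 1) mod (Day'Pos (Day'Last) + 1));
   end Next;

   -- The discouraged form: trapping Constraint_Error on 'Succ turns an
   -- entirely expected event into a "normal exception".
   function Next_By_Exception (D : Day) return Day is
   begin
      return Day'Succ (D);
   exception
      when Constraint_Error => return Day'First;
   end Next_By_Exception;
begin
   null;
end Cyclic_Demo;
```
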
--
***** DISCLAIMER: The opinions expressed herein are my own. Duh. Like you'd
ever be able to find a company (or, for that matter, very many people) with
opinions like mine. 
              -- "When I want your opinion, I'll read it in your entrails."

mfeldman@seas.gwu.edu (Michael Feldman) (03/22/91)

In article <jls.669524498@rutabaga> jls@rutabaga.Rational.COM (Jim Showalter) writes:
>>Some bit-fiddlers have argued that the latter is slightly more efficient,
>
>BIT FIDDLER!?!?!? BIT FIDDLER!?!?!?!?
>
>Stab me in the heart and TWIST THE KNIFE, why don't you?

Well, OK, you're not a bit fiddler. It's just that in discussing design-
oriented questions like pre- and post- conditions, appropriate design 
with exceptions, and the like, too many folks STILL jump too quickly into
micro-efficiency matters and questions of how compilers implement exceptions.
This gives me a sense of deja-vu from the early days of structured
programming etc., when people argued that go-to's were faster
than loops and procedures, and self-modifying code was even faster than that.
So what? We all know that computers and compilers both get faster over time.

Actually, I wrote the "bit fiddler" comment even before I read your note,
which was several notes later in my news reader. I chuckled when I read
your note, because I knew SOMEONE would raise the issue.

Performance issues are NOT unimportant - even this fuzzyheaded professor
would agree with that - but I think discussions of marginal gains or losses
in efficiency, traded off against more important design matters, just
muddy the waters. 

Sorry, Jim .. I didn't even know your note was out there...nothing personal.

Mike

brad@terminus.umd.edu (Brad Balfour) (03/22/91)

In article <jls.669368339@rutabaga>
jls@rutabaga.Rational.COM (Jim Showalter) writes:
>>procedure pop(s: stack)
>>begin
>>   if(not empty(s)) then
>>      -- pop the stack
>>   else
>>      raise underflow
>>end pop;
>>procedure pop(s: stack)
>>begin
>>   stack.top := stack.top - 1;
>>   exception
>>      when Constraint_Error => raise underflow
>>end pop;
>Note that in the second case the procedure is faster, since it doesn't
>have to do the check first. Not only is it faster, it is safer, since
>without using tasks you cannot guarantee that between the time you
>checked and the time you popped it hadn't been popped elsewhere. For
>both of these reasons, I'd say the second version is far better than
>the first version, and that the original poster's thesis that exceptions
>should be used rarely if ever has been contradicted by the very examples
>provided to support his/her case!

   It should be kept in mind, however, that the first example will always
produce correct results (except in the presence of tasks, where it should
be replaced with a concurrent component), but that the second example breaks
in the presence of a pragma Suppress. However, on most compilers, it is
not even necessary to change the code to get this effect: all one has
to do is add a switch to the compilation to turn off the constraint checks.
Then, push will trash memory randomly and pop will return garbage rather
than raise underflow.

   Also, the second example is not safe in the presence of multiple tasks.
It is possible for a second thread of control to be changing the contents of
the stack at the same time as the first so that between the read of
stack.top and the computation of the -1 and then the assignment there
are plenty of opportunities for a push to change (and write to) stack.top.
It is a mistake to assume that the line "stack.top := stack.top - 1;" is a
single atomic assignment.




Brad Balfour
EVB Software Engineering, Inc.
brad@terminus.umd.edu

stt@inmet.inmet.com (03/25/91)

Re: Should documentation on exception be preconditions or postconditions

This is pretty much a style issue in my view, but
I much prefer Norm Cohen's approach for general readability.
That is, document the preconditions for normal action,
and then document the result of violating the preconditions.

I don't see why it really matters whether the exception is raised
explicitly or implicitly, or whether it is a predefined or user-defined
exception, for in Ada, the nearly universal result of violating preconditions 
is an exception, whether you state it or not.

Seeing exceptions as the result of violating preconditions emphasizes
their "exceptional" nature, and properly discourages using
exceptions as a kind of "status code."  A good rule (subject to the
usual exceptions that prove it!) is that any exception raised at 
run-time represents a program bug or an external failure, and the only 
reason to have user-defined exceptions is to provide better diagnostics
in post-mortem debugging of what are essentially unrecoverable errors.

Exceptions might trigger recovery, but probably only at a high level
(e.g., in an interactive program, they would flush the current
activity and reprompt the human operator; in a fault-tolerant system
they might cause the failing task to be decommissioned, or reset
and reelaborated.)

I realize this is a pretty extreme view of exceptions, namely that
they are primarily a debugging tool, not a programming tool, but
it is consistent with the "extreme prejudice" for efficient non-exceptional
execution speed over exception-handling speed.

Another implication of this view of exceptions is that surrounding
a single subprogram call with an exception handler is generally a bad
idea, since it implies that an exceptional condition is in fact
expected to happen!  Further, it implies that design rules stating that
undocumented exceptions should never be propagated are possibly misguided,
since handling "others" and raising some catch-all exception
is throwing away information which may be critical to post-mortem debugging.

Of course, once a subsystem gets to the point of being "fully" debugged,
and is being reused more and more, all exceptions which can be
propagated should be documented, though it may still be more appropriate
to document certain exceptions on a subsystem-wide basis, rather
than trying to identify each individual subprogram which could propagate them.
The exception handler attempting the recovery (if any), probably does
not "know" which particular subprogram call failed anyway, and it
may be more useful to know what is a reasonable recovery strategy
(e.g., how to "reset" the subsystem so as to allow clients to continue
to use it), than to know exactly which subprograms can cause the
subsystem to enter its exceptional state.

Therefore, if an exception is intended to be used for recovery
rather than simply debugging, the most important thing is that
the particular exception raised identifies which subsystem failed,
in what error state (if any) it is now, and what sort of reset
operation is appropriate.  If the exception
simply indicates that a bad parameter was passed in somewhere,
there is probably no obvious recovery strategy other than to
take two aspirin and fire up the source-level debugger in the morning...

S. Tucker Taft    stt@inmet.inmet.com
Intermetrics, Inc.
Cambridge, MA  02138

ae@sei.cmu.edu (Arthur Evans) (03/26/91)

Tucker Taft (stt@inmet.inmet.com) states that, in general, exceptions
should be used only for serious errors, and that it is rarely proper to
provide local handling.  I disagree.

In many cases, a program is continually processing data, and (as
others have already remarked) it is almost as hard to determine in
advance if the data are flawed as it is to do the processing.  Examples:

  - An application is processing radar data.  The code might determine
    that a plane has moved 100 miles since the last report a few
    milliseconds ago, or changed altitude by 4000 feet, both impossible.
    Probably the problem is noise in the returned radar data, and the
    best way to deal with it is to raise a BAD_RADAR_DATA exception.
    The calling code can note the problem and ignore the data, if the
    problem is infrequent. It might report frequent errors in a trouble
    report.  In any case, though, the code processing the data can best
    serve the application by just saying (in effect), "These data are no
    good.  Do something about the problem."

  - Consider reading a text file in which each line contains formatted
    data.  The caller would check for EOF before calling the routine
    which reads the next line and processes it.  Again, invalid data
    might be reported by a BAD_DATA exception.  However, a bad input
    file might be manifested in the processing code by an unexpected EOF
    -- a line with missing trailing fields or a missing line-terminator.
    The data processing routine would probably find it most convenient
    to catch the EOF exception and reflect it to the caller as BAD_DATA.
    We have here a mixture of checking for the EOF by the caller, for
    whom it is an expected event, and catching an EOF exception in the
    line processor, for whom it represents incorrect data.
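
That mixture might be sketched as follows (the package, field type, and names are invented for illustration; the caller is assumed to test End_Of_File before each call):

```ada
with Text_IO;
package Line_Reader is
   Bad_Data : exception;
   procedure Get_Field (File  : in Text_IO.File_Type;
                        Field : out Integer);
end Line_Reader;

package body Line_Reader is
   package Int_IO is new Text_IO.Integer_IO (Integer);

   -- Inside the line processor an EOF is NOT an expected event -- it
   -- means a line with missing trailing fields -- so both End_Error
   -- and malformed data are reflected back to the caller as Bad_Data.
   procedure Get_Field (File  : in Text_IO.File_Type;
                        Field : out Integer) is
   begin
      Int_IO.Get (File, Field);
   exception
      when Text_IO.End_Error | Text_IO.Data_Error =>
         raise Bad_Data;
   end Get_Field;
end Line_Reader;
```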

I think most dogmatic statements about how exceptions should be used
turn out to have so many exceptions as to be useless.  (Sorry about
that.)  Exceptions represent yet another tool in the hands of the
application designer; as with other tools, they must be used with care
and taste.

Art Evans

mfeldman@seas.gwu.edu (Michael Feldman) (03/26/91)

In article <23141@as0c.sei.cmu.edu> ae@sei.cmu.edu (Arthur Evans) writes:
>Tucker Taft (stt@inmet.inmet.com) states that, in general, exceptions
>should be used only for serious errors, and that it is rarely proper to
>provide local handling.  I disagree.
>
 ...lots of good stuff deleted
>
>I think most dogmatic statements about how exceptions should be used
>turn out to have so many exceptions as to be useless.  (Sorry about
>that.)  Exceptions represent yet another tool in the hands of the
>application designer; as with other tools, they must be used with care
>and taste.

I couldn't agree more. On the other hand, that there is so much discussion
and controversy about the proper use of exceptions - as there always is
about any language feature - testifies to the value of threads like this
on the net. As is the case with all tools, different people and different
projects have differing ideas about what constitutes "care" and "taste."
In the end, a consistent project-level convention about exceptions - a
well-thought out and careful design - will of course be the best policy.

This thread started with a discussion of pre- and post-conditions, to which
I'd like to return. It seems that we are using two different definitions
of preconditions. One is
(1) "A precondition is my requirement that must be met by the client, and my
 program can detect whether or not it is met."

The other is
(2) "A precondition is my requirement that must be met by the client, and my
 program CANNOT ALWAYS detect whether or not it is met."

The nastiest precondition is the one that requires that IN parameters be
initialized. This is an implicit precondition on ALL subprograms - indeed,
on all expressions - that CANNOT be tested reliably. We say - glibly -
that an uninitialized variable contains "garbage." But garbage is still
a bit pattern, AND THE BIT PATTERN MAY HAPPEN TO LIE IN THE RANGE OF
THE VARIABLE. If it does, there's no way to raise an exception on it.

In some recent discussions with folks close to Ada9x, I have discovered that
one of the proposals is to allow default initial values for all types and
subtypes. As you know, Ada83 allows default initial values only for objects,
not for types, except for fields in a record. It has come to my attention
that this is a controversial proposal; it's not clear if it will survive
review. 

If Ada allowed default initial values for all types and subtypes, e.g.

  SUBTYPE Little IS Integer RANGE -100..100 := 0;

or even

  TYPE Vector IS ARRAY (IndexType RANGE <>) OF Integer := (OTHERS => 0);

it would be much easier for projects to require that all project types be
initialized, which would greatly simplify design, since that nasty
precondition could be met globally for the whole project. (Of course
Ada could not check whether the project rule was being followed, but at
least the humans could...)
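
One Ada83 approximation is to wrap a scalar in a one-component record, since record fields may already carry defaults (a sketch with invented names):

```ada
package Counters is
   -- The field default does in Ada83 what the proposed subtype
   -- initializer would do directly for scalars.
   type Counter is record
      Value : Integer range -100 .. 100 := 0;
   end record;
end Counters;

-- Every declared object then starts in range:
--   C1, C2 : Counters.Counter;   -- both elaborate with Value = 0
```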

I can't think of anything that would make this harder to implement than
default _object_ initialization, and therefore it's a fairly small change
with a big potential payoff.

If you agree that default initialization of types is an important
feature for Ada9x, write to ada9x@ajpo.sei.cmu.edu about it. 

Mike Feldman

jls@rutabaga.Rational.COM (Jim Showalter) (03/26/91)

I was one of the submitters to 9x of the proposal that all types
have default initialization. This seems so obviously right I
can't see why it wouldn't survive review. Why should records
be singled out for special treatment?--it's one of those annoying
"gotchas" that piss people off when they're trying to learn the
language. I'm tired of seeing the user-view of the language--the
only one the user cares about--distorted by concerns for the language
implementers. Compiler writers are SUPPOSED to have a hard job so
that the user community has an easy job--consider the relative
percentages of time saved! One week of additional compiler writer
time is probably worth 50 years of additional programmer time.

An even more radical proposal would be to introduce constructors,
a la C++.
--
***** DISCLAIMER: The opinions expressed herein are my own. Duh. Like you'd
ever be able to find a company (or, for that matter, very many people) with
opinions like mine. 
              -- "When I want your opinion, I'll read it in your entrails."

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (03/26/91)

In article <jls.669961889@rutabaga>, jls@rutabaga.Rational.COM (Jim Showalter) writes:
> I was one of the submitters to 9x of the proposal that all types
> have default initialization. This seems so obviously right I
> can't see why it wouldn't survive review. Why should records
> be singled out for special treatment?

It is obviously a good idea to make the language consistent with itself.
I must admit, though, that I really like Dijkstra's notation, in which
it is simply impossible to have an uninitialised variable, the language
(and the array data structure) being designed that every "location" has
to be initialised explicitly before it can come into existence.  I
recently spent rather more time than I wanted to helping someone fix a
C program.  We eventually discovered that an initialisation routine that
had the job of allocating a bunch of arrays was being called BEFORE the
global variables with the desired array sizes were initialised.  Now C
has this helpful little rule that global variables are initialised to
0 (0.0, NIL, ASCII.NUL, FALSE, or whatever the equivalent happens to be).
Precisely *because* the variables were initialised to a "sensible" value
the error was unexpectedly hard to detect.

I would rather see features that help people detect or avoid the error
of using an uninitialised variable rather than features which define
the problem away.  For example, if arrays with fill pointers were a
standard part of the language (perhaps defined as a standard package),
then we'd be close enough to Dijkstra's arrays to get some of the
protection without being too far from the kind of array already present.

Don't expect default initial values for types to be an unmixed blessing.

-- 
Seen from an MVS perspective, UNIX and MS-DOS are hard to tell apart.

mfeldman@seas.gwu.edu (Michael Feldman) (03/27/91)

In article <5070@goanna.cs.rmit.oz.au> ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes:
>
>global variables with the desired array sizes were initialised.  Now C
>has this helpful little rule that global variables are initialised to
>0 (0.0, NIL, ASCII.NUL, FALSE, or whatever the equivalent happens to be).
>Precisely *because* the variables were initialised to a "sensible" value
>the error was unexpectedly hard to detect.

I don't think the _compiler_ (or the standard) should micro-manage what
should be a programmer's responsibility, namely determining, type by type,
what a "sensible" value means.
>
>Don't expect default initial values for types to be an unmixed blessing.
>
Perhaps we have a terminological problem here. By "default initial value"
we do _not_ mean "the compiler determines the value." We _do_ mean "the
programmer has the option of specifying the initializing value, and
all declared objects then have this value when they are elaborated."
This is only inconsistently possible in Ada83. 

If we wanted the compiler to do it, that's easier said than done.
Given things like range constraints, etc., which C doesn't have to
worry about, it could be messy for the compiler to determine what the
initial value should be. E.g. 0 isn't a sensible initial value for
a Positive subtype. Perhaps Type'First would make sense, but I
still think this would micro-manage what should be a project choice.
Give the programmer the option.

Taking it a step further, the Ada9x standard _could_ REQUIRE that the
programmer give all types default initial values. I favor this;
I think it corresponds to the Dijkstra notation you were referring to.
Going that far may be controversial; I'd settle for a consistent rule
_allowing_ the programmer to do it.

Mike Feldman

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (03/27/91)

In article <2929@sparko.gwu.edu>, mfeldman@seas.gwu.edu (Michael Feldman) writes:
> In article <5070@goanna.cs.rmit.oz.au> ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes:
> >Don't expect default initial values for types to be an unmixed blessing.

> Perhaps we have a terminological problem here. By "default initial value"
> we do _not_ mean "the compiler determines the value." We _do_ mean "the
> programmer has the option of specifying the initializing value, and
> all declared objects then have this value when they are elaborated."
> This is only inconsistently possible in Ada83. 

No, we have no terminological problem here.  I understood quite clearly
what was intended.  For heaven's sake, we were shown an *example*.  We
both agree that a compiler can't be expected to pick a sensible "default"
value for a type.  Why should anyone expect the designer of a package to
be able to pick a sensible default value either?  The whole point of
re-usable components is that you don't know _how_ your component is to
be misused.

Perhaps I have completely misunderstood the point of default values for
fields of records.  I thought that this, like default values for
omitted parameters, was a dodge to help people cope with the problem
where there is
	-- a package P providing a record type or procedure, and
	-- a package U using that record type or procedure
and package P is updated to include extra fields in the record type
or procedure, and it may not be practical to update the source of
package U.  (In fact, U may be a customer who won't let you anywhere
near their source.)  In this practically important but conceptually
limited case, the package provider _can_ determine sensible default
values: "those values which make the new version of the software
act just like the old version".

If I've understood correctly, then default initialisation of fields
and arguments makes the most sense for things where package U
*couldn't* have initialised those fields or parameters because
they didn't exist in the previous version of the software.

Now consider the kinds of problems that can arise with default
initialisations, *where the package user knows what the default is*.
Suppose, for argument's sake, that there is a subtype T of integer
with default value 1.  Suppose further that a user of that type wants
a variable initialised to that value.  In Ada83, you *have* to
initialise the variable *explicitly*, you had to write X: T := 1;
But when you *know* that T is initialised by default to 1, you know
that exactly the same effect can be obtained by writing X: T;
Ah, but if the providing package is updated, they do _not_ have the
same effect.  If the package changes so that T's default value is 0,
then X: T := 1; still initialises X to 1, while X: T; will initialise
X to 0.

So there is a temptation here to depend on something you perhaps
ought not to.  One thing which would reduce this temptation would
be to set things up so that a type or subtype can be specified in
a package specification and its default value in the package body.
Something along the lines of
	package FOO is
	    subtype BAZ is INTEGER range -10 .. 10 := deferred;
	    ...
	end FOO;
	...
	package body FOO is
	    subtype BAZ := BAZ'LAST;
	    ...
	end FOO;

> Taking it a step further, the Ada9x standard _could_ REQUIRE that the
> programmer give all types default initial values. I favor this;
> I think it corresponds to the Dijkstra notation you were referring to.

No it does not, not at all, no way!  It is the direct opposite!
Dijkstra's notation leaves the programmer with the obligation of
initialising every variable explicitly.  The point is that this
is defined so simply that it is *easy* for a compiler to detect.
There is no particular reason to expect that two variables of the
same type should be initialised to the same value.  Requiring that
every type be given a default initial value is what I call
"defining the problem away".  The problem of variables which the
programmer has failed to give the appropriate value _remains_,
it just isn't _called_ "being uninitialised" any more.  (It might
be called "the problem of variables which are implicitly
initialised to the wrong value".)

By the way, I never said that the *compiler* should bear all (or even
any) of the responsibility for detecting uninitialised variables.  It
is quite possible to have a debugger or interpreter that does this.
*Requiring* all types to have default initial values would make it
extremely hard, if not impossible, for such a tool to work.  _Allow_
programmer-specified defaults for programmer-defined types if you
wish (it doesn't let you do anything you couldn't do before, I think;
you could always have used records with one component).  But please
don't make a "Saber-Ada" impossible.

-- 
Seen from an MVS perspective, UNIX and MS-DOS are hard to tell apart.

mfeldman@seas.gwu.edu (Michael Feldman) (03/27/91)

O'Keefe's posting gave such a good discussion of the issues that there's not
much to add, except the following:

The whole thread started with preconditions. I think Ada has a chance to
make life a whole lot easier by making it _possible_ to initialize all
types with default initial values, consistently. I am not interested in
having clients depend upon the initial values; rather I want clients
to be able to depend on variables, types, whatever, _not_ being
_uninitialized._ There will always be something in there that's
"in-range" for the type. Clearly Ada83 thought this was worth doing for
access types, and look how much easier it is to write linked-list
packages because there are no garbage pointers to worry about (leave
aside the dangling-pointer problem left unsolved by Unchecked_Deallocation!)

I was told once by an Ada83 high priest that the reason record fields can
be initialized is that, syntactically, it's "free" because the syntax
of a field declaration is the same as the syntax of a variable declaration.
I got the impression that the thinking didn't go much deeper than that.
They didn't bother with initializers for other types because it wasn't free.

Mike

jls@rutabaga.Rational.COM (Jim Showalter) (03/27/91)

>I would rather see features that help people detect or avoid the error
>of using an uninitialised variable rather than features which define
>the problem away.

I think we disagree here. I don't view this:

    type Foo is Integer := 10;

    Bar : Foo;

as producing an uninitialized variable. I view it as initializing a variable
to a value deemed (for whatever reason) to be a good value.
--
***** DISCLAIMER: The opinions expressed herein are my own. Duh. Like you'd
ever be able to find a company (or, for that matter, very many people) with
opinions like mine. 
              -- "When I want your opinion, I'll read it in your entrails."

jls@rutabaga.Rational.COM (Jim Showalter) (03/28/91)

>Perhaps I have completely misunderstood the point of default values for
>fields of records.  I thought that this, like default values for
>omitted parameters, was a dodge to help people cope with the problem
>where there is
>	-- a package P providing a record type or procedure, and
>	-- a package U using that record type or procedure
>and package P is updated to include extra fields in the record type
>or procedure, and it may not be practical to update the source of
>package U.  (In fact, U may be a customer who won't let you anywhere
>near their source.)  In this practically important but conceptually
>limited case, the package provider _can_ determine sensible default
>values: "those values which make the new version of the software
>act just like the old version".
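(For concreteness, the compatibility dodge described above might look like this -- all names invented. Because the new formal parameter has a default, existing client calls compile unchanged:)

    --  Old specification:
    --    procedure Log (Msg : in String);
    --
    --  New specification adds a parameter whose default is chosen so
    --  the new version acts just like the old one:
    procedure Log (Msg : in String; Urgent : in Boolean := False);

    --  A pre-existing client call such as
    --    Log ("starting up");
    --  still compiles and behaves exactly as before.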

Since I tend to use private types, this argument is spurious. If I
am providing a private type in P to which I have added a new field, U is
unable to initialize the fields ANYWAY: I must do it for the client,
since I am providing the abstraction upon which the client depends.

Furthermore, if each U was required to initialize the values in my
type, the odds of at least someone screwing it up increase linearly
with the number of U's.

Finally, consider this case:

    package Some_Package is

        type T is private;

        function Initialize (From_Some_Input : in A_Type) return T;

        procedure Do_Something (To_This_Object : in out T);

        Not_Initialized : exception;

    private
   
        type T is record
                      Some_Field : Some_Type;
                      Is_Initialized : Boolean := False;
                  end record;

    end Some_Package;

I, the provider of T, have a field in the record that is used to ensure
consistency. The default initial value is not something the client can
override, so there is no way the client can pass an uninitialized value
of T to me and get away with it. Without default initialization I cannot
ensure this.

    package body Some_Package is

        function Initialize (From_Some_Input : in A_Type) return T is
       
            The_T : T;

        begin
            The_T.Some_Field := Some_Normalization_On (From_Some_Input);
            The_T.Is_Initialized := True; <====***
            return The_T;
        end Initialize;

        procedure Do_Something (To_This_Object : in out T) is
        begin
            if not To_This_Object.Is_Initialized then
                raise Not_Initialized; <====****
            end if;
            --  do whatever to the object
        end Do_Something;

    end Some_Package;

This sort of thing is very useful for applications where data integrity
must be ensured (can you think of any where data integrity is NOT
important?...).

>Now consider the kinds of problems that can arise with default
>initialisations, *where the package user knows what the default is*.

That's not the issue: default initialization, particularly of private
types, is for the benefit of the SUPPLIER, not the client.

>One thing which would reduce this temptation would
>be to set things up so that a type or subtype can be specified in
>a package specification and its default value in the package body.

Actually, this is a pretty good idea, at least for visible types
(it doesn't matter for private types).
--
***** DISCLAIMER: The opinions expressed herein are my own, except in
      the realm of software engineering, in which case I've borrowed
      them from incredibly smart people.

mfeldman@seas.gwu.edu (Michael Feldman) (03/28/91)

In article <jls.670109695@rutabaga> jls@rutabaga.Rational.COM (Jim Showalter) writes:
  ... much good stuff deleted
>
>That's not the issue: default initialization, particularly of private
>types, is for the benefit of the SUPPLIER, not the client.

Right. As long as the client program(mer) doesn't make hidden assumptions
in the client code about what the default value is.
>
>>One thing which would reduce this temptation would
>>be to set things up so that a type or subtype can be specified in
>>a package specification and its default value in the package body.
>
>Actually, this is a pretty good idea, at least for visible types
>(it doesn't matter for private types).

True, but see my comment above. This is probably no worse a problem than
arises with Ada private types in general, where the private part is visible
to the human even if it's hidden from the human's code.


Mike Feldman

carson@tron.UUCP (Dana Carson) (03/29/91)

In article <2937@sparko.gwu.edu> mfeldman@seas.gwu.edu () writes:
>make life a whole lot easier by making it _possible_ to initialize all
>types with default initial values, consistently. I am not interested in
>having clients depend upon the initial values; rather I want clients
>to be able to depend on variables, types, whatever, _not_ being
>_uninitialized._ There will always be something in there that's
>"in-range" for the type. Clearly Ada83 thought this was worth doing for
>Mike

  Actually IMHO it should be possible to initialize variables to INVALID,
a value that isn't anything legal, so you can tell that the variable
hasn't been set.  I believe Rational allows this on its machines?

  I do this with enumerated types quite often, but you can't do it with
integers, and I have to write the checks for the enumerated types by
hand rather than getting an automatic exception as with ranges.
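(For what it's worth, the enumerated-type trick reads roughly like this --
all names invented:)

    package Sensor_Demo is

       --  A spare Invalid literal, used as the component default,
       --  marks values that were never set.
       type Reading is (Invalid, Low, Normal, High);

       type Sample is record
          Level : Reading := Invalid;
       end record;

       procedure Process (S : in Sample);

    end Sensor_Demo;

    package body Sensor_Demo is

       procedure Process (S : in Sample) is
       begin
          --  The check is hand-written; nothing raises automatically
          --  the way an out-of-range assignment would.
          if S.Level = Invalid then
             raise Constraint_Error;
          end if;
          --  ... act on S.Level ...
       end Process;

    end Sensor_Demo;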

--
Dana Carson
Westinghouse Electronic Systems Group  Mail Stop 1615
UUCP:carson@tron.UUCP 
     carson@tron.bwi.wec.com
     ...!uunet!tron!carson
AT&T: (301) 765-3513
WIN: 285-3513

jls@rutabaga.Rational.COM (Jim Showalter) (03/29/91)

>>That's not the issue: default initialization, particularly of private
>>types, is for the benefit of the SUPPLIER, not the client.

>Right. As long as the client program(mer) doesn't make hidden assumptions
>in the client code about what the default value is.

But a programmer who makes such assumptions is a doofus. I don't want
to remove features in the language that are useful for good programmers
just to make it harder for doofus programmers to act like doofuses: I
want fewer doofus programmers to have jobs. Just because it is possible
to write an erroneous program in Ada (and I submit that depending on
default values in private types is highly erroneous) is no reason to
damn the features that make such erroneousness possible. This is not
a language issue: it's an education issue.
--
***** DISCLAIMER: The opinions expressed herein are my own, except in
      the realm of software engineering, in which case I've borrowed
      them from incredibly smart people.