[comp.software-eng] Tolerance

dave@cs.arizona.edu (Dave P. Schaumann) (02/04/91)

According to G.Joly@cs.ucl.ac.uk (Gordon Joly):
GJ: Can the tolerance idea get off the starting blocks?

In article <27A9B451.48BF@tct.uucp>, chip@tct.uucp (Chip Salzenberg) writes:
CS: "The strlen() function returns the number of characters in the
CS: given string, plus or minus two."
 
CS: Right.


In article <15863.27ad36b6@levels.sait.edu.au> xtbjh@levels.sait.edu.au writes:
BH: Sorry, can't let this opportunity pass...

BH: How long does your favourite system take to find the answer at this level 
BH: of precision?  How much variance in the time can you tolerate?  What 
BH: resources are you willing to trade for a higher-accuracy result?

BH: (This is of course a variant of the comment that "correctness is not an 
BH: absolute in an engineering domain" that I posted earlier.)

BH: For me, I think the notion of tolerance is a good one, and is already 
BH: implicit in many areas of software.

I'd certainly be interested in hearing a discussion on this topic (tolerance
implicit in software).

Numerical analysis aside, I can't imagine a single instance of "close is good
enough".

CS: Chip Salzenberg at Teltronics/TCT     <chip@tct.uucp>, <uunet!pdn!tct!chip>

BH: Brenton Hoff (behoffski)			       xtbjh@levels.sait.edu.au

xtbjh@levels.sait.edu.au (02/04/91)

In article <27A9B451.48BF@tct.uucp>, chip@tct.uucp (Chip Salzenberg) writes:
> According to G.Joly@cs.ucl.ac.uk (Gordon Joly):
>>Can the tolerance idea get off the starting blocks?
> 
> "The strlen() function returns the number of characters in the
> given string, plus or minus two."
> 
> Right.

Sorry, can't let this opportunity pass...

How long does your favourite system take to find the answer at this level 
of precision?  How much variance in the time can you tolerate?  What 
resources are you willing to trade for a higher-accuracy result?

(This is of course a variant of the comment that "correctness is not an 
absolute in an engineering domain" that I posted earlier.)

For me, I think the notion of tolerance is a good one, and is already 
implicit in many areas of software.

> -- 
> Chip Salzenberg at Teltronics/TCT     <chip@tct.uucp>, <uunet!pdn!tct!chip>
>  "I want to mention that my opinions whether real or not are MY opinions."
>              -- the inevitable William "Billy" Steinmetz
-- 
Brenton Hoff (behoffski)			       xtbjh@levels.sait.edu.au
Transponder Australia	       My opinions are mine (and they're really weird).

cwk@ORCRIST.GANDALF.CS.CMU.EDU (Charles Krueger) (02/04/91)

In article <777@caslon.cs.arizona.edu>, dave@cs.arizona.edu (Dave P. Schaumann) writes:

> I'd certainly be interested in hearing a discussion on this topic (tolerance
> implicit in software).
> 
> Numerical analysis aside, I can't imagine a single instance of "close is 
> good enough".

How about weather predictions?  In fact, any time software simulates an
external reality (this includes numerical analysis), the input and output
abstractions are probably approximations.  The designer, programmer, and
user should all understand that there are tolerances.  These tolerances
should be quantified.

What goes on inside the computer, however, is still as precise as ever.

shimeall@TAURUS.CS.NPS.NAVY.MIL (timothy shimeall) (02/05/91)

In article <777@caslon.cs.arizona.edu> dave@cs.arizona.edu (Dave P. Schaumann) writes:
>Numerical analysis aside, I can't imagine a single instance of "close is good
>enough".

There are a lot of device-control problems where a certain amount
of imprecision is acceptable (e.g., the automatic transmission doesn't have
to shift at EXACTLY 32001 rpm, but reasonably close).

There are also a lot of acceptable-but-not-optimal problems.  Two quick 
examples:
   Network traffic routing
   Memory management
There is a reasonably large class of problems where an OPTIMAL solution is 
very difficult to calculate, but where you can get a REASONABLY CLOSE
to optimal solution very rapidly.  
					Tim
-- 
Tim Shimeall ((408) 646-2509)

khb@chiba.Eng.Sun.COM (Keith Bierman fpgroup) (02/05/91)

In article <777@caslon.cs.arizona.edu> dave@cs.arizona.edu (Dave P. Schaumann) writes:

...
   Numerical analysis aside, I can't imagine a single instance of
   "close is good enough".

Most code optimizations, to wit:

	register allocation (must be correct, but need not be optimal)
	instruction scheduling (ditto)
	etc.
and
	traveling salesfolk (ditto)
	chess/games in general  programs (ditto)
	etc.

Are these numerical analysis?


--
----------------------------------------------------------------
Keith H. Bierman    kbierman@Eng.Sun.COM | khb@chiba.Eng.Sun.COM
SMI 2550 Garcia 12-33			 | (415 336 2648)   
    Mountain View, CA 94043

dave@cs.arizona.edu (Dave P. Schaumann) (02/05/91)

In article <777@caslon.cs.arizona.edu> I wrote:
>   Numerical analysis aside, I can't imagine a single instance of
>   "close is good enough".

khb@chiba.Eng.Sun.COM (Keith Bierman fpgroup) wrote:
>Most code optimizations, to wit:
>	register allocation (must be correct, but need not be optimal)
>	instruction scheduling (ditto)
>	etc.
>and
>	traveling salesfolk (ditto)
>	chess/games in general  programs (ditto)
>	etc.

cwk@ORCRIST.GANDALF.CS.CMU.EDU (Charles Krueger) wrote:
>How about weather predictions?  [Mentions other simulations]
>
>What goes on inside the computer, however, is still as precise as ever.

djbailey@skyler.mavd.honeywell.com wrote
>[...]
>"Tolerance" is appropriate to higher level system requirements.  If 
>you could quantify the tolerance in requirements, you could have a 
>very significant impact on how software development contracts are 
>written and probably reduce a lot of arguments.

Ok, so I stand corrected.  There are a lot of applications out there that
don't need exact answers.  But my question was in the context of code re-use.
How could you say something like, "well, I really need a stack, but I'll
settle for something sort of stacky"?  Will we see code like this in the
future:

  assert( stackyness(re_used_type) > 0.9 ) ; /* stackyness(a real stack)=1.0 */

I can't see how the "tolerance paradigm" could possibly lead to a reasonable
means of code re-use.

Dave Schaumann		|  And then -- what then?  Then, future...
dave@cs.arizona.edu	|  		-Weather Report

ogden@seal.cis.ohio-state.edu (William F Ogden) (02/05/91)

In article <777@caslon.cs.arizona.edu> dave@cs.arizona.edu (Dave P. Schaumann) writes:
   ...
>GJ: Can the tolerance idea get off the starting blocks?
   ...
>BH: For me, I think the notion of tolerance is a good one, and is already 
>BH: implicit in many areas of software.

>I'd certainly be interested in hearing a discussion on this topic (tolerance
>implicit in software).
>Numerical analysis aside, I can't imagine a single instance of "close is good
>enough".

One difficulty in recognizing instances of tolerances in software is that
we tend to have a fixation on the first setting where we were exposed to
the notion of tolerance -- namely the domain of real numbers. There
it got tangled up with the metric notion of the distance between an
ideal desired answer and an actual answer.

So one problem is that we tend to look for the distance metric in our
problem domains. However, there is a more subtle perception bias:
we expect that for a given input, a computation should produce a single
right answer. In other words, we expect that the specifications for
our programs should take the form of functions (which, after all, are
the familiar objects of real analysis).

The better perspective is that program specifications generally take
the form of a relation between input values and output values rather
than always being a function. This works for the familiar analysis
examples. For, say, a Cos(x) calculation, the input/output relation
might be R(x,y) = `|y - Cos(x)| < 10^-6', which says that for a given
input x, we're willing to accept a whole host of outputs y. This spec
involves a math function (Cos) and a distance metric, to be sure, but
in the end they are just incidental concomitants of describing a
relation.

A relation, then, can prescribe the precise range of answers that we
are willing to accept (tolerate) for any given input. Moreover, this
perspective on tolerances is applicable to any programming domain
whatsoever. Familiar examples in everyday computing arise in, say,
(unstable) sorting, where our specifications might insist that records
be put in order on a certain field, but the output order of records
with identical values in that field is of no moment to us. In fact,
certain efficient algorithms take advantage of this particular
tolerance.

Most optimization problems admit multiple solutions (before we even
begin to consider approximate solutions). Etc.
So I suggest that if we bothered to specify our current software (provided
of course we didn't over specify it), we would find tolerances to be
quite common -- even leaving numerical analysis aside.
/Bill

mikef@cs.ed.ac.uk (Mike Fourman) (02/05/91)

In article <87999@tut.cis.ohio-state.edu> William F Ogden <ogden@cis.ohio-state.edu> writes:
>In article <777@caslon.cs.arizona.edu> dave@cs.arizona.edu (Dave P. Schaumann) writes:
>   ...
>>GJ: Can the tolerance idea get off the starting blocks?

I was hoping it wouldn't, but it has! I was tempted to flame the
original message, and I can contain myself no longer... :-/
 
[stuff deleted]

>
>One difficulty in recognizing instances of tolerances in software is that
>we tend to have a fixation on the first setting where we were exposed to
>the notion of tolerance -- namely in the domain of real numbers. There
>it got tangled up with the metric notion of the distance between an
>ideal desired answer and an actual answer.

This analogy certainly colours our thinking. Unfortunately, it makes us
unwittingly think as though outputs must vary _continuously_ with
inputs. For example, small errors in machining a bearing produce only
small deviations from the desired behaviour of a mechanical system.
Unfortunately, digital systems don't behave like this; there is no
guarantee that changing one bit of an executable will only have small
(whatever that may mean) effects on the behaviour of a software system.
In traditional engineering, continuity allows us to derive tolerances
for parts from the tolerances specified for the whole. In digital
systems engineering deriving tolerances for parts from the tolerances
specified for the whole is much harder (it amounts to formal program
development).
[stuff deleted]

>The better perspective on the problem is that program specifications
>generally take the form of a relation between input values and output
>values rather than always being a function. 

Of course this is correct - but formal logic, the calculus for deriving
specifications of parts from the desired specification of the whole, is
(currently?) less tractable than real analysis.

[stuff deleted]

>So I suggest that if we bothered to specify our current software (provided
>of course we didn't over specify it), we would find tolerances to be
>quite common -- even leaving numerical analysis aside.

Yes, the question is, "Can we handle them scientifically?"

--
Prof. Michael P. Fourman                     email        mikef@lfcs.ed.ac.uk
Dept. of Computer Science                    'PHONE (+44) (0)31-650 5198 (sec)
JCMB, King's Buildings, Mayfield Road,              (+44) (0)31-650 5197
Edinburgh EH9 3JZ, Scotland, UK                 FAX (+44) (0)31 667 7209

Chris.Holt@newcastle.ac.uk (Chris Holt) (02/05/91)

dave@cs.arizona.edu (Dave P. Schaumann) writes:

[about "close is good enough" programming]
>Ok, so I stand corrected.  There are a lot of applications out there that
>don't need exact answers.  But my question was in the context of code re-use.
>How could you say something like, "well, I really need a stack, but I'll
>settle for something sort of stacky"?  Will we see code like this in the
>future:
>  assert( stackyness(re_used_type) > 0.9 ) ; /* stackyness(a real stack)=1.0 */

>I can't see how the "tolerance paradigm" could possibly lead to a reasonable
>means of code re-use.

Imagine you've got two objects/modules that you want to stick together;
e.g. A calls B, which returns something to A.  Suppose both of their
specifications are defined in terms of tolerances, so that
        A sends a value with tolerance t to B;
        A requires a result with tolerance u from B.
Then B's specification has to be such that given a tolerance t', it
returns a result with tolerance u'; and we have to show that t<=t',
u'<=u.  Given this, the combination is proved okay.

Applications?  Who's to know?  Perhaps, for example:
        Find the least element of a nearly sorted sequence.
If the sequence has tolerance
        Each element is no more than 3 positions from where it
        would be in the ordered sequence,
then only the first 4 elements need be examined.  This might mean
that the result could be returned within some given time constraint,
e.g. delta_t; and the proof would require knowing the time constraint
of a single comparison, and how many processors were available.
-----------------------------------------------------------------------------
 Chris.Holt@newcastle.ac.uk      Computing Lab, U of Newcastle upon Tyne, UK
-----------------------------------------------------------------------------
 "[War's] glory is all moonshine... War is hell." - General Sherman, 1879

mrp@minster.york.ac.uk (02/06/91)

The point being missed so far is when a system can 'make do' with an
'imprecise/less-accurate/less-tolerant/inexact/imperfect' result.

The driving force behind a system using imprecision is the
time-precision trade-off.  This requires that algorithms be written
so that the precision of the result, as sampled during the execution
of the task, rises as more time is spent executing the task.  Thus when
the task stops, the precision of the result depends on how much of
the code the task managed to execute.  [See Lin, Swaminathan & Liu,
IEEE RTS Symposium 1987].

Now suppose you have a system dealing with locating objects, and that
precisely locating one object takes a task one second.  Assuming you
are using a single-processor system and the number of seconds you have
available outnumbers the number of objects to locate, all results
will be as precise as they can be.  Now suppose you have 3 seconds in
which to locate 6 objects.  In 'normal' real-time systems, 3 tasks
would run to completion and return their results and 3 would not be
scheduled at all.  In an imprecise system each task could be run for
half a second, each returning an 'imprecise' result.  These could be
in the form of an area where the object is, rather than an exact
location.  For example, a result of 'at position x with a precision of
75%' could be interpreted as 'in a circular area centred on x with
radius 7.5 metres'.  (Defining metrics for imprecise results is a very
tricky problem, especially in non-numerical domains.)

Depending on the application, having imprecise data on six objects may
be considered more useful than knowing the exact locations of three
objects and nothing about the other three.  I would definitely prefer
to know the approximate positions of six other vehicles on the road
around me than to know exactly where three were and have no idea about
the other three.

   - Martin

#===============================#===========================================#
| Martin Portman                | Phone: (0904) 432735                      |
| Dept. of Comp. Science        | JANET:  mrp@uk.ac.york.minster            |
| University of York            | UUCP: ...!mcsun!ukc!minster!mrp           |
| Heslington                    | ARPA:                                     |
| YORK YO1 5DD ENGLAND          | mrp%minster.york.ac.uk@nsfnet-relay.ac.uk |
#===============================#===========================================#

xtbjh@levels.sait.edu.au (02/06/91)

In article <785@caslon.cs.arizona.edu>, dave@cs.arizona.edu (Dave P. Schaumann) writes:
> [...] But my question was in the context of code re-use.
> How could you say something like, "well, I really need a stack, but I'll
> settle for something sort of stacky"?  Will we see code like this in the
> future:
> 
>   assert( stackyness(re_used_type) > 0.9 ) ; /* stackyness(a real stack)=1.0 */
> 
> I can't see how the "tolerance paradigm" could possibly lead to a reasonable
> means of code re-use.

Every specification has areas that are black and white and areas that
are shades of gray.  There isn't enough time to specify everything; 
you can only specify the most important things and hope that the 
remainder is tolerable.  For example, how long will a particular 
circuit board run before something on it breaks down?  Usually your 
only "specification" in such a case is the manufacturer's warranty 
(or in extreme cases the consumer protection laws).

The stack example above is typical: you wanted the operation of the 
stack to be very reliable, but you didn't specify:
	- how quickly you wanted the stack to operate, and
	- how the stack would perform if one of its resources 
		was exhausted.
You might not accept a stack implementation that took 1 second to 
execute a push or a pop, or a stack that had a limit of 2 elements.
Any implementation of a stack will have restrictions; your design, 
specification and testing are intended to ensure that the "stackyness" 
of the final implementation is sufficiently high.

The nub of the tolerance issue is that you are tricked by the 
languages you use into believing that there is an infinite amount 
of some resource (most typically CPU and/or memory).  This is 
because these are mostly irrelevant to the interface that you 
are building; they mostly remain hidden by the implementation.
Yet when you look to reuse a component, these details are often 
significant.  

Another problem is that the interface itself is often shaped by 
the underlying architecture of the machine.  The machine in this 
case refers mostly to the programming language; other items such
as CPU and memory are involved at a lesser level.

> Dave Schaumann		|  And then -- what then?  Then, future...
> dave@cs.arizona.edu	|  		-Weather Report
--
Brenton Hoff (behoffski)			       xtbjh@levels.sait.edu.au
Transponder Australia	       My opinions are mine (and they're really weird).

G.Joly@cs.ucl.ac.uk (Gordon Joly) (02/06/91)

cwk@ORCRIST.GANDALF.CS.CMU.EDU (Charles Krueger) writes

> How about weather predictions?  In fact, any time software simulates an
> external reality (this includes numerical analysis), the input and output
> abstractions are probably approximations.  The designer, programmer, and
> user should all understand that there are tolerances.  These tolerances
> should be quantified.
> 
> What goes on inside the computer, however, is still as precise as ever.

Yes, I think that is true. Digital computers, at the bit level are
precise, but I feel sure that genetic algorithms need their fill of a
Really Good Random Number Generator. Also, Monte Carlo methods and
simulated annealing have a need for randomness.

Weather "predictions" are doomed to failure once they find a chaotic
path in the solution space.

dave@cs.arizona.edu (Dave P. Schaumann) writes:

> I'd certainly be interested in hearing a discussion on this topic (tolerance
> implicit in software).
> 
> Numerical analysis aside, I can't imagine a single instance of "close is 
> good enough".

Implicit? What is implicit should be made explicit! SDI has the
problem of being implicitly faulty, but that has not stopped the
current rebirth.

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

G.Joly@cs.ucl.ac.uk (Gordon Joly) (02/08/91)

dave@cs.arizona.edu (Dave P. Schaumann) writes 
 
> Ok, so I stand corrected.  There are a lot of applications out there that
> don't need exact answers.  But my question was in the context of code re-use.
> How could you say something like, "well, I really need a stack, but I'll
> settle for something sort of stacky"?  Will we see code like this in the
> future:
> 
>   assert( stackyness(re_used_type) > 0.9 ) ; /* stackyness(a real stack)=1.0 */
> 
> I can't see how the "tolerance paradigm" could possibly lead to a reasonable
> means of code re-use.
> 
> Dave Schaumann		|  And then -- what then?  Then, future...
> dave@cs.arizona.edu	|  		-Weather Report

Returning to the idea of an engine piston: a piston made in one
factory must fit into the engine block made in another.  The machining
of the piston and cylinder must be correct to within so many
thousandths of an inch.

The pre- and post-conditions associated with this stack, written in
SOLVE, suggest that you could have a tolerance associated with ``what
happens when you attempt to push a stack to overflow''.  The "amount"
of action could be set in the Error Method called.

However, more generally it seems that the notion of a "metric" is
basic to setting a tolerance gap of any sort.

--*98--*98--*98--*98--*98--*98--*98--*98--*98--*98--*98--*98--*98--*98

Signature Stack

SuperTypes (Object)

InstanceOperations

  push : (<Integer>) -> <Stack>
     precondition
     [ size [ (self <- size) <- lt(self <- maxsize) ]]
     postcondition
     [ cont [ (self <- top) <- eq(result) ]]
     
  pop  : ()-><Integer>
  top  : ()-><Integer>
  isEmpty : ()-><Boolean>
  size    : ()-><Integer>
  maxsize : ()-><Integer>

TypeOperations
  new: (<Type>)-><Stack>

Equations

TemporalProtocol
nonEmpty [ tr 
           inwhich $a
           iff $a!send(pop) <= $a!send(push) ]

End

--*89--*89--*89--*89--*89--*89--*89--*89--*89--*89--*89--*89--*89--*89

Another example of fault tolerance: computerised telephone exchanges,
which can fail to route, say, 1 in 100 calls, but must never crash
completely.  The reason that one in a hundred is OK is that
human-originated misdials (e.g. slippery fingers) can account for
1 in 10 failures.

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

           Email: Les jeux sans frontiers du monde

gerety@hpfcbig.SDE.HP.COM (Colin Gerety) (02/09/91)

> engineering is to be considered an *engineering* discipline, rather
> than a science, the notion of tolerance should be developed.
...

> modern production line, and is not original. If you buy a new spark
> plug or a tyre, then it has to fit in place, within a certain margin
> of error - tolerance. Cylinders fit into an engine block with a
> certain gap, within a certain range of error. And so on.

> Can the tolerance idea get off the starting blocks?

> Gordon Joly 

  While I think that specified, guaranteed behaviour is extremely
important to software re-use, the notion of tolerance probably won't
get off the starting blocks.

   In most physical systems there is some continuity between structure
and behaviour.  If you have a beam, making it bigger usually makes it 
stronger.  This is not the case in software.  Changing a single character 
in a program may lead from a correct program to catastrophic failure.  It
is not clear that adding more code will either detect or be able to
correct for these errors.

  For a good discussion of this, see "Evaluation of Safety-Critical
Software" by Parnas, van Schouwen, and Kwan in CACM, June 1990.

Colin Gerety
gerety@hpfclp.sde.hp.com

ejp@icd.ab.com (Ed Prochak) (02/12/91)

In article <665834913.1874@minster.york.ac.uk>, mrp@minster.york.ac.uk writes:
|> The point being missed so far is when a system can 'make do' with an
|> 'imprecise/less-accurate/less-tolerant/inexact/imperfect' result.
|> 
|> The driving force behind a system using imprecision is the
|> time-precision trade off.  This requires that algorithms are written
|> so the preciseness of the result as it is sampled during the execution
|> of the task rises as more time is spent executing the task.  Thus when
|> the task stops the precision of the result is dependent on how much of
|> the code the task managed to execute.  [see Lin,Swaminathan & Liu,
|> IEEE RTS Symposium 1987].

I think that the real point being missed is that the tolerance has nothing
to do with the result, be it numerical, alphabetic, or other.
|> 
	[deleted description of time critical numerical problem & solution]
|> 
|>    - Martin

In article <785@caslon.cs.arizona.edu>, dave@cs.arizona.edu (Dave P. Schaumann) writes:
>In article <777@caslon.cs.arizona.edu> I wrote:
>>   Numerical analysis aside, I can't imagine a single instance of
>>   "close is good enough".
>
>khb@chiba.Eng.Sun.COM (Keith Bierman fpgroup) wrote:
>>Most code optimizations, to wit:
>>	register allocation (must be correct, but need not be optimal)
>>	instruction scheduling (ditto)
>>	etc.
>>and
>>	traveling salesfolk (ditto)
>>	chess/games in general  programs (ditto)
>>	etc.
>
>cwk@ORCRIST.GANDALF.CS.CMU.EDU (Charles Krueger) wrote:
>>How about weather predictions?  [Mentions other simulations]
>>
>>What goes on inside the computer, however, is still as precise as ever.
>
>djbailey@skyler.mavd.honeywell.com wrote
>>[...]
>>"Tolerance" is appropriate to higher level system requirements.  If 
>>you could quantify the tolerance in requirements, you could have a 
>>very significant impact on how software development contracts are 
>>written and probably reduce a lot of arguments.

An excellent observation, IMHO.
>
>Ok, so I stand corrected.  There are a lot of applications out there that
>don't need exact answers.  But my question was in the context of code re-use.
>How could you say something like, "well, I really need a stack, but I'll
>settle for something sort of stacky"?  Will we see code like this in the
>future:
>
>  assert( stackyness(re_used_type) > 0.9 ) ; /* stackyness(a real stack)=1.0 */

I don't think so, because this is not where the tolerance lies.

>
>I can't see how the "tolerance paradigm" could possibly lead to a reasonable
>means of code re-use.
>
>Dave Schaumann		|  And then -- what then?  Then, future...
>dave@cs.arizona.edu	|  		-Weather Report


Saying software has a tolerance on the
output produced misses the point of tolerances completely.

Some folks are on the right track, but I still haven't seen a clear-cut
example.  Since Dave provided the question, I will use his stack as an
example.  The "tolerance paradigm" doesn't require that the operations
be performed imprecisely, but that certain other parameters may vary
greatly according to the implementation.

Now a stack is a stack is a stack.  All things that claim to be stacks
must be stacks 100%.  There must be at least three fundamental
operations: Initialize, Push, and Pop.  The tolerance comes in when
you try to compare two implementations of stacks.

Consider first a stack implemented using a fixed-size array.  All
three operations in this case are fast.  But there is a finite limit
to how much can be pushed onto the stack before the array is filled.
The specifications might provide the information that push/pop takes
X microseconds and that there is a fixed memory cost of Y kilobytes
(even when the stack is empty, that array is there).  This stack has
very tight timing and memory tolerances, but at the cost of always
having more memory for the stack than you need.

Now a second implementation uses a linked list, with elements
allocated as needed.  Maybe the Push and Pop operations are slower in
this case, say 2X microseconds, but the memory constraint is removed.
The stack size limit is essentially infinite, or at least as big as
the disc swap space on a virtual memory system.  And the timing of the
Push and Pop operations has more variation, as the OS might do swaps
during the calls.

The stack example is very trivial, but the point is that tolerance
applies to more than just the operation being performed.  It is an
object view of the software: you don't look at the internals, or how
it is implemented.  You think, "I need something to do these
operations with this transaction rate."

Want a real-life example?  Consider a printer.  It must locate the
characters and image them at the correct point size.  There is little
or no tolerance there, but how fast the pages come out has a good
deal of tolerance.  A customer expects simple pages to come out faster
than more complex pages.  There are two measurements that may vary:
print speed and page complexity.  Print speed is relatively easy to
define and can be measured precisely.  Complexity of the text is
not so easily defined and may have a wide tolerance in the customer's
view.

In summary: the software has properties that can be measured.
The measurements are sometimes imprecise, and other times are allowed
to vary because that property is less important.  The software must
still provide correct results!

As a final example, consider the tolerance in the hardware world.
What property of a memory IC has tolerances associated with it?
Certainly the memory cell must return the exact value placed there
by the last write operation (zero tolerance here). The IC is sold
on the properties of how fast and how many, and the greatest
tolerance is on the speed.

Edward J. Prochak   Voice: w(216)646-4663  h(216)349-1821
               Email: {cwjcc,pyramid,decvax,uunet}!ejp@icd.ab.com
USmail: Allen-Bradley, 747 Alpha Drive, Highland Heights,OH 44143
Wellington: ENGINEERING is "the ability to do for one dollar,
                            what any damn fool can do for two."

djbailey@skyler.mavd.honeywell.com (02/13/91)

In article <2053@abvax.UUCP>, ejp@icd.ab.com (Ed Prochak) writes:
> I think that the real point being missed is that the tolerance has nothing
> to do with the result, be it numerical, alphabetic, or other.
> [...] 
> Want a real-life example?  Consider a printer.  It must locate the
> characters and image them at the correct point size.  There is little
> or no tolerance there, but how fast the pages come out has a good
> deal of tolerance.  A customer expects simple pages to come out faster
> than more complex pages.  There are two measurements that may vary:
> print speed and page complexity.  Print speed is relatively easy to
> define and can be measured precisely.  Complexity of the text is
> not so easily defined and may have a wide tolerance in the customer's
> view.
> 

Right, but tolerance in requirements is even harder to define. For 
example, customer A says, "Yes, I want to print things." Customer B 
says, "I want to print my 20 page listings in less than five minutes."
Customer C says, "I need to print text and graphics at 1200 dpi, 10 
pages per minute, on A, B, C, or D size paper, and I don't want to 
spend more than $10000."  Customer D says, "I want to print reports."

There is a lot more tolerance in Customer A's requirements than in 
Customer C's requirements.  Customer D's requirements could mean 
anything depending on the definition of a "report."

You need a way to measure the semantic content of requirements.  
Tolerance in requirements could be the number of design factors 
where your customer's response is "I don't care" or "I don't care as 
long as [...] is true."  
-- Don J. Bailey