[comp.lang.functional] Term rewriting and denotation

pop@cs.umass.edu (06/09/90)

Since at least some people, including myself, may not be entirely on top of
the whole of the literature of functional programming, I think it might be
enlightening to talk about a very small example of a formal system, namely
propositional logic, and then try to relate this to the more complex 
functional systems. Corrections and/or filling in and/or pointers to the
literature w.r.t. anything in this message will be most welcome.

In propositional logic, we work with forms rather than functions.

The terms of propositional logic consist of (a) variables drawn from an
alphabet V, and (b) well-formed combinations of terms built with the
propositional operations, {and, or, implies, not, true, false, equiv}
say.

A domain of interpretation contains only two elements,  B = {True, False},
with associated functions  AND, OR, IMPLIES, NOT, TRUE, FALSE, EQUIV, defined by
truth-tables. An environment is a mapping from a finite set  V1 contained in V
to  B. The function eval(T,E) is defined by:                     

    eval(T1 and T2,E) = eval(T1,E) AND eval(T2,E)
      etc. for or, not ...
      eval(v,E) = E(v)  if v is a variable in V1

[Algebraically, eval(-,E) is a homomorphism from the algebra of terms to B.]
A term T is said to be a tautology if eval(T,E) = True  for all environments
E. An environment is minimal for a given term if its domain is precisely the
set of variables occurring in the term. In the propositional calculus we can
decide if a term is a tautology since the set of minimal environments for a
given term is finite (tho' exponential in the number of variables). More
interesting logics do not admit this possibility.
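
To make this concrete, here is a sketch of the above in Haskell notation
(the names Term, eval and tautology are mine, purely for illustration):

    -- A sketch of the definitions above; names and representation are mine.
    import Data.List (nub)
    import Data.Maybe (fromJust)

    data Term = Var String | T | F | Not Term
              | And Term Term | Or Term Term
              | Implies Term Term | Equiv Term Term

    type Env = [(String, Bool)]        -- a finite map  V1 -> B

    eval :: Term -> Env -> Bool
    eval (Var v)       e = fromJust (lookup v e)   -- assumes v is in V1
    eval T             _ = True
    eval F             _ = False
    eval (Not t)       e = not (eval t e)
    eval (And s t)     e = eval s e && eval t e
    eval (Or s t)      e = eval s e || eval t e
    eval (Implies s t) e = not (eval s e) || eval t e
    eval (Equiv s t)   e = eval s e == eval t e

    vars :: Term -> [String]
    vars (Var v)       = [v]
    vars (Not t)       = vars t
    vars (And s t)     = nub (vars s ++ vars t)
    vars (Or s t)      = nub (vars s ++ vars t)
    vars (Implies s t) = nub (vars s ++ vars t)
    vars (Equiv s t)   = nub (vars s ++ vars t)
    vars _             = []            -- T and F

    -- decidable, but exponential: try all 2^n minimal environments
    tautology :: Term -> Bool
    tautology t = all (eval t) (envs (vars t))
      where envs []     = [[]]
            envs (v:vs) = [(v,b):e | b <- [True,False], e <- envs vs]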

If  S1 equiv T1, ..., Sn equiv Tn  are tautologies, then the
term-rewriting system:

E =
	{S1 -> T1,
	 ...,
	 Sn -> Tn}

has the property of preserving truth-functional equivalence; that is, if S
can be rewritten to T using E, written  S -->(E) T, then  S equiv T  is a
tautology. E may or may not have other nice properties, e.g. confluence.
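
For a one-rule illustration (mine, reusing the Term type from the sketch
above): since  not (not p) equiv p  is a tautology, applying the rewrite
not (not p) -> p  anywhere in a term cannot change its value in any
environment:

    -- Sketch (mine): exhaustively applying  not (not p) -> p.
    -- For every suitable environment e,  eval t e == eval (simplify t) e.
    simplify :: Term -> Term
    simplify (Not (Not t)) = simplify t
    simplify (Not t)       = Not (simplify t)
    simplify (And s t)     = And (simplify s) (simplify t)
    simplify (Or s t)      = Or (simplify s) (simplify t)
    simplify (Implies s t) = Implies (simplify s) (simplify t)
    simplify (Equiv s t)   = Equiv (simplify s) (simplify t)
    simplify t             = t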

I understand that interpretation in the Propositional Calculus corresponds to
denotation in Scott-Strachey semantics. (Indeed, given an ordering on the set
of variables, you could associate a truth function with a term). Thus it would
appear possible to relate syntactic manipulations using term-rewriting to
denotational identity, as can be done for the Propositional Calculus.  

What is clear is that such a treatment precludes certain kinds of
pattern-matching definitions, particularly those which are order-dependent. E.g.,

	lookup(x,[]) = false
	lookup(x,(x::y)::l) = y
	lookup(x,z::l) = lookup(x,l)

This issue is discussed in Peyton Jones (1987) (see e.g. section 5.5, p. 98,
"Uniform Definitions").

There is, of course, an intermediate formalism, namely the lambda calculus,
which has its own syntactic manipulations, e.g. beta-reduction, which are
complicated by rules about avoiding name-clashes. I take it that these
have been proved sound with respect to the Scott-Strachey semantics.
So one could construct a diagram:

          compile                    denote
 (E1,T) -------------->  L1 -------------------------->  F1
   |                     |                              /
   | R(spec)             | R_L(Derive(E1,spec))        /
   v      compile        v                            /
 (E2,T) -------------->  L2 ----------------------/
                                   denote

Here (E1,T) is a set of rewrite rules E1, together with a term T to be
reduced by the rules. Given a specification, spec, a new set of rules E2 can
be derived, using E1 to rewrite itself (with the Burstall-Darlington approach,
the rules may be used bi-directionally in this rewriting). A compiler can
transform the pairs (E1,T) and (E2,T) into terms L1, L2 in the lambda-calculus,
either of which denotes a (possibly constant) function in the semantic domain F1.
A function  R_L provides a way of applying reductions in the lambda-calculus.
Now comes the question - does there exist a function  Derive  which takes  E1
and spec and produces a new specification for a series of reductions in the
lambda-calculus which will convert  L1  to  L2?  If there is, the soundness
of the rewriting  R(spec) depends on the soundness of derivations in the
lambda-calculus.

Term rewriting systems are normally first-order - no function variables
are permitted. Thus only some functional programs using patterns could
actually be treated with them. Some of the associated theory of term rewriting
systems (e.g. Knuth-Bendix) is heavily dependent on the unification algorithm,
which is first-order. I understand that Sorted Universal Algebras also provide
only a first-order system. Thus a more general approach would seem to be called
for. I know that Sheard and Stemple in this Department have used an extended
Boyer-Moore method of inference that depends heavily on higher order functions
like  _reduce_  (_reduce_ in APL terminology = Strachey's _lit_ function).
I will endeavour to obtain their response to this message - they are not
news-junkies as I am becoming.
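
For concreteness (the identification is mine): Strachey's _lit_ is, as far
as I know, the list-iteration functional nowadays written foldr -- exactly
the kind of higher-order object that first-order unification cannot handle:

    -- Strachey's "lit" / APL-style reduce, in Haskell notation (mine);
    -- this is the functional usually called foldr today.
    lit :: (a -> b -> b) -> b -> [a] -> b
    lit f z []     = z
    lit f z (x:xs) = f x (lit f z xs)

    -- e.g.  lit (+) 0 [1,2,3]  ==  6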

Peyton Jones, S.L. (1987) The Implementation of Functional Programming
Languages. Prentice Hall.

fs@rex.cs.tulane.edu (Frank Silbermann) (06/12/90)

In article <15341@dime.cs.umass.edu> pop@cs.umass.edu () writes:
>
>	I think it might be enlightening to talk about
>	a very small example of a formal system, namely
>	propositional logic, and then try to relate this
>	to the more complex functional systems.

These words of yours are key to the debate between
the typed-lambda-calculus/denotational-semantics approach
and the term-rewriting (equational logic) approach
to functional programming:

>	Term rewriting systems are normally first-order
>	- no function variables are permitted.
>	Thus only some functional programs using patterns
>	could actually be treated with them.
>	Some of the associated theory of term rewriting systems
>	(e.g. Knuth-Bendix) is heavily dependent
>	on the unification algorithm, which is first-order.
>	I understand that Sorted Universal Algebras
>	also provide only a first-order system.
>	Thus a more general approach would seem to be called for.

Until now, I have argued the necessity of
the denotational-semantics-based approach
for higher-order functional programming.
Now let me say a word for the opposition.

Because higher-order functions are so easily
simulated in first-order functional programming,
and because equational logic is so much simpler
than the denotational approach, many argue that
higher-order capabilities are both unnecessary and undesirable
(except as syntactic sugars in a semantically 1st-order system).
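
(A sketch of how the simulation typically goes -- Reynolds-style
defunctionalization; the Haskell rendering and the names are mine. Each
function value becomes a first-order constructor, and a first-order
apply does the dispatch:)

    -- Sketch (mine): higher-order behaviour simulated first-order.
    data Fn = AddN Int          -- stands for  \x -> x + n
            | Double            -- stands for  \x -> 2 * x

    apply :: Fn -> Int -> Int   -- first-order dispatch
    apply (AddN n) x = x + n
    apply Double   x = 2 * x

    -- a "map" whose function argument is just data
    mapFn :: Fn -> [Int] -> [Int]
    mapFn _ []     = []
    mapFn f (x:xs) = apply f x : mapFn f xs

    -- mapFn (AddN 3) [1,2,3]  ==  [4,5,6]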

This is a valid point of view.
Many decisions in programming language design
involve a trade-off between simplicity and expressiveness.
Good work may maximize expressiveness
for a given complexity tolerance, or, alternatively,
minimize complexity to meet a given standard of expressiveness.

	Frank Silbermann	fs@rex.cs.tulane.edu

yodaiken@freal.cs.umass.edu (victor yodaiken) (06/12/90)

In article <3570@rex.cs.tulane.edu> fs@rex.cs.tulane.edu (Frank Silbermann) writes:
>Until now, I have argued the necessity of
>the denotational-semantics-based approach
>for higher-order functional programming.
>Now let me say a word for the opposition.

This seems like an opportune time to put in a plug for Primitive
Recursive Arithmetic [1].
While this formal method has received little attention in the c.s.
community, it has numerous advantages over more widely known methods.
In particular, p.r.a. has an algorithmic notion of function that seems
better suited to the world of c.s. than the set theoretic notions
implicit in predicate calc and denotational semantics.  
Also p.r.a. allows for a completely formal approach, i.e. functions as 
textual objects, without requiring clumsy axiomatizations of ordinary
arithmetic. The proof rules are simple substitution rules for equality
e.g.  A = B |- F(A) = F(B) , and the axioms are the defining axioms of
the p.r. functions. 
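
To give the flavour (sketch mine, in modern functional notation rather
than Goodstein's): the only recursion permitted is the primitive
recursion scheme itself.

    -- The primitive recursion scheme as a combinator (sketch mine):
    --   f 0 = base;  f (n+1) = step n (f n).   n >= 0 assumed.
    primRec :: a -> (Integer -> a -> a) -> Integer -> a
    primRec base step = go
      where go 0 = base
            go n = step (n - 1) (go (n - 1))

    -- the usual defining equations of addition and multiplication
    add, mul :: Integer -> Integer -> Integer
    add m = primRec m (\_ r -> r + 1)
    mul m = primRec 0 (\_ r -> add m r)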


1. Goodstein, R.L., Recursive Number Theory (North-Holland, 1957)

guttman@mitre.org (Joshua D. Guttman) (06/12/90)

In article <15421@dime.cs.umass.edu> yodaiken@freal.cs.umass.edu (victor yodaiken) writes:
   This seems like an opportune time to put in a plug for Primitive
   Recursive Arithmetic [1]. 
   While this formal method has received little attention in the c.s.
   community, it has numerous advantages over more widely known methods.

But doesn't PRA have the intrinsic problem that it contains much too small a
set of functions?  E.g. no "unbounded search" and other sources of partial
recursive functions.  
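
(For anyone following along: "unbounded search" is the mu-operator, the
one construct that separates the partial recursive functions from the
primitive recursive ones. In modern notation, transcription mine:)

    -- The mu-operator: least n with p n, diverging if there is none --
    -- precisely what PR excludes.
    mu :: (Integer -> Bool) -> Integer
    mu p = head [n | n <- [0..], p n]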

	Josh

yodaiken@freal.cs.umass.edu (victor yodaiken) (06/13/90)

In article <GUTTMAN.90Jun12105623@darjeeling.mitre.org> guttman@mitre.org (Joshua D. Guttman) writes:
>In article <15421@dime.cs.umass.edu> yodaiken@freal.cs.umass.edu (victor yodaiken) writes:
>   This seems like an opportune time to put in a plug for Primitive
>   Recursive Arithmetic [1]. 
>   While this formal method has received little attention in the c.s.
>   community, it has numerous advantages over more widely known methods.
>
>But doesn't PRA have the intrinsic problem that it contains much too small a
>set of functions?  E.g. no "unbounded search" and other sources of partial
>recursive functions.  
>
>	Josh

Too small for what? I have yet to see an example of a proposed or
implemented digital computer that can actually compute an unbounded search,
and do not understand why a programming language would need to be able to
express functions that are not feasible.  The class of p.r. functions
is very large, and contains a great deal of number theory. It would be
interesting to hear of examples of functions that are not p.r. and are
objects of interest for a programming language.

guttman@mitre.org (Joshua D. Guttman) (06/14/90)

In article <15484@dime.cs.umass.edu> yodaiken@freal.cs.umass.edu (victor yodaiken) writes:
>   In article <GUTTMAN.90Jun12105623@darjeeling.mitre.org> guttman@mitre.org (Joshua D. Guttman) writes:
>   >In article <15421@dime.cs.umass.edu> yodaiken@freal.cs.umass.edu (victor yodaiken) writes:
>   >   >This seems like an opportune time to put in a plug for Primitive
>   >   >Recursive Arithmetic [1]. 
>
>   >But doesn't PRA have the intrinsic problem that it contains much too small a
>   >set of functions?  
>
>   Too small for what? I have yet to see an example of a proposed or
>   implemented digital computer that can actually compute an unbounded search,
>   and do not understand why a programming language would need to be able to
>   express functions that are not feasible.  

An example too cheap to put much weight on is the non-terminating function 

(lambda (x)
  (let loop ((x x))
    (if #t              ; the test is always true,
        (loop x)        ; so this recursion never returns
        2)))

which is not PR simply because PR functions are total.  But I think a more
valuable point is that there are programming language idioms that (in the ideal
sense) allow many non-PR algorithms to be expressed, and which are too useful
to do without.  That is, to replace them by a PR expression that would do the
same computation in the cases of practical interest would require an impossibly
fine analysis of which cases will be of practical interest, and what
termination-guaranteeing conditions will be true in those cases.  

	Josh

yodaiken@freal.cs.umass.edu (victor yodaiken) (06/14/90)

In reply to:guttman@linus.mitre.org
>>   >In article <15421@dime.cs.umass.edu> yodaiken@freal.cs.umass.edu (victor yodaiken) writes:
>>   >   >This seems like an opportune time to put in a plug for Primitive
>>   >   >Recursive Arithmetic [1]. 
>>
>>   >But doesn't PRA have the intrinsic problem that it contains much too small a
>>   >set of functions?  
>>
>>   Too small for what? I have yet to see an example of a proposed or
>>   implemented digital computer that can actually compute an unbounded search,
>>   and do not understand why a programming language would need to be able to
>>   express functions that are not feasible.  
>
> ... But I think a more
>valuable point is that there are programming language idioms that (in the ideal
>sense) allow many non-PR algorithms to be expressed, and which are too useful
>to do without.  That is, to replace them by a PR expression that would do the
>same computation in the cases of practical interest would require an impossibly
>fine analysis of which cases will be of practical interest, and what
>termination-guaranteeing conditions will be true in those cases.  
>
>	Josh
>

This is a common argument, but I'd like to see something a little more
concrete. What's an example of a function that is of real interest
to a programmer, and that cannot be  expressed cleanly as a p.r. function.
It is oft stated that p.r. function definitions will be far
more cumbersome than those of more general frameworks, but I have yet
to see any real evidence of this.
There is an old result (Blum (?)) which
shows that the complete text of a p.r. function definition is quite a bit
longer than the complete text of a partial recursive definition for a large
class of functions, but this is not a result with much meaning in this
context (any reasonable p.r. programming language will give the programmer
a collection of pre-defined p.r. functionals). 

If the programmer requires unbounded
search for a program to work correctly, then the program is undoubtedly
incorrect. If a programmer cannot bound the depth of a recursion by, say,
a k-exponential function, then it is hard to believe that the function can
be computed.

td@alice.UUCP (Tom Duff) (06/14/90)

I read a bizarre interchange that starts
>This seems like an opportune time to put in a plug for Primitive
>Recursive Arithmetic
and ends
>What's an example of a function that is of real interest
>to a programmer, and that cannot be  expressed cleanly as a p.r. function.

The bizarre part is that people replying to this junk took
it seriously!

What are you people smoking?
Is your lisp interpreter primitive-recursive?
Is your UNIX kernel?
Is your X server?
Is your VAX microcode?

gudeman@cs.arizona.edu (David Gudeman) (06/15/90)

In article  <15548@dime.cs.umass.edu> yodaiken@freal.cs.umass.edu (victor yodaiken) writes:
>
>This is a common argument, but I'd like to see something a little more
>concrete. What's an example of a function that is of real interest
>to a programmer, and that cannot be  expressed cleanly as a p.r. function.

How about eval(P) to execute a program P in some more powerful language?
-- 
					David Gudeman
Department of Computer Science
The University of Arizona        gudeman@cs.arizona.edu
Tucson, AZ 85721                 noao!arizona!gudeman

gb@cs.purdue.EDU (Gerald Baumgartner) (06/15/90)

In article <15548@dime.cs.umass.edu> you write:
>In reply to:guttman@linus.mitre.org
>> ... But I think a more
>>valuable point is that there are programming language idioms that (in the ideal
>>sense) allow many non-PR algorithms to be expressed, and which are too useful
>>to do without.  That is, to replace them by a PR expression that would do the
>>same computation in the cases of practical interest would require an impossibly
>>fine analysis of which cases will be of practical interest, and what
>>termination-guaranteeing conditions will be true in those cases.  
>>
>>	Josh
>>
>
>This is a common argument, but I'd like to see something a little more
>concrete. What's an example of a function that is of real interest
>to a programmer, and that cannot be  expressed cleanly as a p.r. function.
>It is oft stated that p.r. function definitions will be far
>more cumbersome than those of more general frameworks, but I have yet
>to see any real evidence of this.

Since there is still `term rewriting' in the subject: take the
Knuth-Bendix algorithm for transforming rewrite rule systems into a
canonical form. It is a straightforward algorithm if you write it with
a while-loop or with recursion. You'll have trouble writing it with PR
constructs since no general bound is known.

Another example is the Groebner basis algorithm for transforming a set
of polynomials into standard form. As soon as you have this standard
form (`Groebner basis') you can decide membership of the ideal
generated by the set of polynomials, which you can't with the original
set -- so the algorithm is of practical use.

This Groebner basis algorithm has a similar structure to the
Knuth-Bendix algorithm (Knuth-Bendix can be simulated by the Groebner
basis algorithm, and probably vice versa). The Groebner basis algorithm
runs in doubly exponential time (doubly exponential in the degree of the
polynomials and exponential in the number of polys or so) and still it
is useful (at least many mathematicians are using it). The algorithm
has been around since 1967; it took some 10 years until a bound was
discovered for the case of polys of degree at most 2. There is still
no bound known for the general case. Without a bound for the running
time you cannot program this algorithm using PR constructs.

A more practical example: the Roider algorithm for collision detection
(of a robot with an obstacle). It has a simple structure (with a
while loop) and it's fast. I don't know whether a bound is known. If
there is one, it must be quite pessimistic, since the running time depends
on the form of the obstacle and the robot (input data) and on the
starting point for the iteration (arbitrary).

In mathematical or technical software especially, you often have
algorithms that are naturally expressed with non-PR constructs and
where no bound is known or where the bound is extremely pessimistic.
So better keep recursion and/or while loops in the language.

Even partial recursive functions might be useful, for example for
writing an automatic theorem prover.

Gerald

--
Gerald Baumgartner
Dept. of Computer Sciences, Purdue University, W. Lafayette, IN 47907
Internet: gb@cs.purdue.edu  UUCP: ...!{decwrl,gatech,ucbvax}!purdue!gb

murthy@algron.cs.cornell.edu (Chet Murthy) (06/15/90)

gudeman@cs.arizona.edu (David Gudeman) writes:

>In article  <15548@dime.cs.umass.edu> yodaiken@freal.cs.umass.edu (victor yodaiken) writes:
>>
>>This is a common argument, but I'd like to see something a little more
>>concrete. What's an example of a function that is of real interest
>>to a programmer, and that cannot be  expressed cleanly as a p.r. function.

>How about eval(P) to execute a program P in some more powerful language?

A better example is an implementation of the Knuth-Bendix completion
algorithm (for rewriting).  Or the program that searches for a pair of
embeddable trees in a stream of them.  Or a program that looks for a
witness for the intermediate Paris-Harrington theorem.

There are lots of examples of algorithms that depend for their
termination upon the well-orderedness of ordinals of greater height
than epsilon_0, and the fact that every primitive recursive function
sits properly below epsilon_0 tells us that we cannot hope to do all
programming just with p.r. functions.

	--chet--
	murthy@cs.cornell.edu

yodaiken@freal.cs.umass.edu (victor yodaiken) (06/15/90)

In article <22132@megaron.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>In article  <15548@dime.cs.umass.edu> yodaiken@freal.cs.umass.edu (victor yodaiken) writes:
>>
>>This is a common argument, but I'd like to see something a little more
>>concrete. What's an example of a function that is of real interest
>>to a programmer, and that cannot be  expressed cleanly as a p.r. function.
>
>How about eval(P) to execute a program P in some more powerful language?
>-- 
>					David Gudeman
>Department of Computer Science
>The University of Arizona        gudeman@cs.arizona.edu
>Tucson, AZ 85721                 noao!arizona!gudeman


1. Not a very concrete example. Why do we need the more powerful language?
The question here is whether or not we need non-p.r. functions to
write programs.

2. We can easily write eval_{k,C}(P) where 
eval_{k,C} must complete within  AckermannFunction(k,C*Length(P)) 
steps. If we cannot write such a function, then is eval(P) feasible?
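
(The shape of such a bounded eval, as a sketch of mine -- a toy
expression type stands in for real programs P. The recursion is on the
step budget, which is what keeps the definition p.r.:)

    -- Sketch (mine): the "fuel" pattern behind a bounded evaluator.
    -- Recursion is on the budget, so termination is structural;
    -- Nothing signals that the budget ran out.
    data Expr = Lit Integer | Add Expr Expr | Mul Expr Expr

    evalB :: Integer -> Expr -> Maybe Integer
    evalB fuel _ | fuel <= 0 = Nothing
    evalB _    (Lit n)   = Just n
    evalB fuel (Add a b) = (+) <$> evalB (fuel - 1) a <*> evalB (fuel - 1) b
    evalB fuel (Mul a b) = (*) <$> evalB (fuel - 1) a <*> evalB (fuel - 1) b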

yodaiken@freal.cs.umass.edu (victor yodaiken) (06/15/90)

In article <42209@cornell.UUCP> murthy@algron.cs.cornell.edu (Chet Murthy) writes:
>There are lots of examples of algorithms that depend for their
>termination upon the well-orderedness of ordinals of greater height
>than epsilon_0, and the fact that every primitive recursive function
>sits properly below epsilon_0 tells us that we cannot hope to do all
>programming just with p.r. functions.
>
>--chet--


And 
In article <10838@medusa.cs.purdue.edu> gb@cs.purdue.edu (Gerald Baumgartner) writes:
>Since there is still `term rewriting' in the subject: take the
>Knuth-Bendix algorithm for transforming rewrite rule systems into a
>canonical form. It is a straightforward algorithm if you write it with
>a while-loop or with recursion. You'll have trouble writing it with PR
>constructs since no general bound is known.
>

I'm not arguing that one can dispense with all but p.r. mathematics. I'm
questioning the need for anything beyond p.r. functions in 
a *programming language*. Since programming languages are usually intended
as aids in describing computations that we desire a digital computer to
perform, it is not clear to me why we need to be able to describe
computations that are not feasible.  One can write a p.r. function which
computes a bounded Knuth-Bendix algorithm, say KB(r,p,q), where "r"
bounds the number of steps. Suppose we write a program in a language which
allows expression of unbounded algorithms and have an unbounded version of
Knuth-Bendix, KB'(p,q). What is the formal semantics of this function?
Is it that the algorithm will be executed until termination?
If so, the program is not intended for any existing or proposed digital
computer that I know of.  If KB'(p,q) is intended for the class of
mundane digital computers then the
proposed semantics is incorrect --- a programmer who believes the formal
semantics will be deceived; his/her program will behave in a manner not
matching the formal semantics. The p.r. version provides the programmer with
a better grasp of the actual nature of digital computation. The programmer
is forced to consider the case where time runs out. 

>This Groebner basis algorithm has a similar structure to the
>Knuth-Bendix algorithm (Knuth-Bendix can be simulated by the Groebner
>basis algorithm, and probably vice versa). The Groebner basis algorithm
>runs in doubly exponential time (doubly exponential in the degree of the
>polynomials and exponential in the number of polys or so) and still it
>is useful (at least many mathematicians are using it).

BTW, I'm sure you know that double exponential functions are p.r.

>The algorithm
>has been around since 1967; it took some 10 years until a bound was
>discovered for the case of polys of degree at most 2. There is still
>no bound known for the general case. Without a bound for the running
>time you cannot program this algorithm using PR constructs.
>

But when you compute the algorithm on a real computer, you do have bounds
on time and storage. Thus, even without a known bound, you implicitly 
program the algorithm in a bounded way. Why is it better to use implicit
bounds?

[more, similar examples deleted].

verwer@ruuinf.cs.ruu.nl (Nico Verwer) (06/15/90)

In article <15639@dime.cs.umass.edu> yodaiken@freal.cs.umass.edu (victor yodaiken) writes:
>Since programming languages are usually intended
>as aids in describing computations that we desire a digital computer to
>perform, it is not clear to me why we need to be able to describe
>computations that are not feasible.
>[...] Suppose we write a program in a language which
>allows expression of unbounded algorithms and have an unbounded version of
>Knuth-Bendix, KB'(p,q). What is the formal semantics of this function?
>Is it that the algorithm will be executed until termination?
>If so, the program is not intended for any existing or proposed digital
>computer that I know of.  [...]
>But when you compute the algorithm on a real computer, you do have bounds
>on time and storage. Thus, even without a known bound, you implicitly 
>program the algorithm in a bounded way. Why is it better to use implicit
>bounds?

Theoretically, you are right, and we can do without such things as
Turing machines, lambda-calculus, and languages which are modelled by
these, since these can `execute' algorithms which are unbounded in 
space and/or time.
In this view, any digital computer is just a finite state machine, the
state being represented by the contents of memory and registers (and I/O
devices). This can be coded as a natural number (take the binary number
represented by all the bits in the system).

This is, however, not a very practical way of programming your computer.
In most cases, one prefers to think of a digital computer as modelling a
Turing machine, or a lambda-calculus reduction machine. This allows you
to express algorithms in a more or less concise way, and doesn't force
the algorithm designer to account for memory or time limitations.

It is `impossible' to program the Knuth-Bendix algorithm on a
digital computer, since it has finite memory. Fortunately, in many cases
(theoretically, `many' should read `almost none'...)
the program doesn't run out of memory, and our finite state machine
does a good job in pretending it has infinite storage.

It is better to use implicit bounds because, even though a computer
memory is always finite, one may always buy a faster computer with more
memory, and change those bounds.
Thus, the formal semantics of the program KB'(p,q) is:
 Choose non-deterministically between 
 - KB(p,q)
 - an "out of memory" error
 - the user gets frustrated from waiting and kills the process.
The outcome depends on your particular system, and your patience...
It's the best you can get, I'm afraid :-) .

Nico Verwer                                       |  verwer@cs.ruu.nl
Dept. of Computer Science, University of Utrecht  |  +31 30 533921
p.o. box 80.089 
3508 TB Utrecht
The Netherlands

jrk@sys.uea.ac.uk (Richard Kennaway) (06/15/90)

In article <15594@dime.cs.umass.edu> yodaiken@freal.cs.umass.edu (victor yodaiken) writes:
>The question here is whether or not we need non-p.r. functions to
>write programs.

That is one question, and several examples have been posted; another is,
why should even a p.r. function necessarily be programmed in p.r.?

--
Richard Kennaway          SYS, University of East Anglia, Norwich, U.K.
Internet:  jrk@sys.uea.ac.uk		uucp:  ...mcvax!ukc!uea-sys!jrk

murthy@algron.cs.cornell.edu (Chet Murthy) (06/15/90)

yodaiken@freal.cs.umass.edu (victor yodaiken) writes:

>I'm not arguing that one can dispense with all but p.r. mathematics. I'm
>questioning the need for anything beyond p.r. functions in 
>a *programming language*. Since programming languages are usually intended
>as aids in describing computations that we desire a digital computer to
>perform, it is not clear to me why we need to be able to describe
>computations that are not feasible.  One can write a p.r. function which
>computes a bounded Knuth-Bendix algorithm, say KB(r,p,q), where "r"
>bounds the number of steps. Suppose we write a program in a language which
>allows expression of unbounded algorithms and have an unbounded version of
>Knuth-Bendix, KB'(p,q). What is the formal semantics of this function?
>Is it that the algorithm will be executed until termination?
>If so, the program is not intended for any existing or proposed digital
>computer that I know of.  If KB'(p,q) is intended for the class of
>mundane digital computers then the
>proposed semantics is incorrect --- a programmer who believes the formal
>semantics will be deceived; his/her program will behave in a manner not 
>matching the formal semantics. The p.r. version provides the programmer with
>a better grasp of the actual nature of digital computation. The programmer
>is forced to consider the case where time runs out. 

The beef I have with just prim. rec. is not about nontermination.  If
every algorithm had to have a bound, I'd be happy.  But prim. rec.
does not encompass all recursive functions, or even all
computationally interesting recursive functions.

>>The algorithm
>>is around since 1967, it took some 10 years until a bound was
>>discovered for the case of polys of degree at most 2. There is still
>>no bound known for the general case. Without a bound for the running
>>time you cannot program this algorithm using PR constructs.
>>

>But when you compute the algorithm on a real computer, you do have bounds
>on time and storage. Thus, even without a known bound, you implicitly 
>program the algorithm in a bounded way. Why is it better to use implicit
>bounds?

But your bound might not be p.r.  Your bound might be some ordinal
ABOVE epsilon_0.  Such a program would still be bounded, still be
terminating, still efficient for the average case, but not provably
p.r.  To exclude such algorithms simply because they can't be coded
using primitive recursion seems a bit extreme.
	--chet--
	murthy@cs.cornell.edu

gb@cs.purdue.EDU (Gerald Baumgartner) (06/16/90)

In article <15639@dime.cs.umass.edu> yodaiken@freal.cs.umass.edu (victor yodaiken) writes:
>In article <10838@medusa.cs.purdue.edu> gb@cs.purdue.edu (Gerald Baumgartner) writes:
>>This Groebner basis algorithm has a similar structure to the
>>Knuth-Bendix algorithm (Knuth-Bendix can be simulated by the Groebner
>>basis algorithm, and probably vice versa). The Groebner basis algorithm
>>runs in doubly exponential time (doubly exponential in the degree of the
>>polynomials and exponential in the number of polys or so) and still it
>>is useful (at least many mathematicians are using it).
>
>BTW, I'm sure you know that double exponential functions are p.r.

But I need to know more than just the asymptotic behaviour to use
it for a bound.

>But when you compute the algorithm on a real computer, you do have bounds
>on time and storage. Thus, even without a known bound, you implicitly 
>program the algorithm in a bounded way. Why is it better to use implicit
>bounds?

The usual way to run this algorithm on more difficult examples is to
start it and see whether you get a useful result before the machine
crashes the next time. I've seen useful results after 1-2 weeks of CPU
time on a Microvax II.

(BTW, you might have used the algorithm already; it's built into
Mathematica, for simplifying systems of poly equations.)

Sure you can use bounds on the running time (say 1 year). But with
algorithms whose running time is completely unpredictable (that's how
it appears to the user), why not use the natural bound of waiting until
the next downtime of the machine.

I don't think your proposed PR constructs are a bad thing, but I'd like
to have non-PR constructs as well.

Gerald

--
Gerald Baumgartner
Dept. of Computer Sciences, Purdue University, W. Lafayette, IN 47907
Internet: gb@cs.purdue.edu  UUCP: ...!{decwrl,gatech,ucbvax}!purdue!gb

yodaiken@freal.cs.umass.edu (victor yodaiken) (06/16/90)

In article <10929@alice.UUCP> td@alice.UUCP (Tom Duff) writes:
>I read a bizarre interchange that starts
>>This seems like an opportune time to put in a plug for Primitive
>>Recursive Arithmetic
>and ends
>>What's an example of a function that is of real interest
>>to a programmer, and that cannot be  expressed cleanly as a p.r. function.
>
>The bizarre part is that people replying to this junk took
>it seriously!
>
>What are you people smoking?
>Is your lisp interpreter primitive-recursive?
>Is your UNIX kernel?
>Is your X server?
>Is your VAX microcode?


I've never used a computer that was not finite state, so I assume all the
programs that run on the computer are certainly primitive-recursive.
Counter-example?

norvell@csri.toronto.edu (Theo Norvell) (06/17/90)

Someone (Victor?) writes:

>>>What's an example of a function that is of real interest
>>>to a programmer, and that cannot be  expressed cleanly as a p.r. function.

>In article <10929@alice.UUCP> td@alice.UUCP (Tom Duff) writes:

>>[Inflammatory remarks deleted]
>>Is your lisp interpreter primitive-recursive?
>>[Non-function examples deleted]

In article <15713@dime.cs.umass.edu> yodaiken@freal.cs.umass.edu (victor yodaiken) writes:

>I've never used a computer that was not finite state, so I assume all the
>programs that run on the computer are certainly primitive-recursive.
>Counter-example?

Tom Duff was only answering the question; a (pure) Lisp interpreter is
	(a) ``a function of real interest to a programmer'' and 
	(b) ``cannot be expressed cleanly [or not] as a p.r. function''.
The fact that no complete Lisp interpreter will fit on a real computer
misses the point that he answered the question.  Arguments that people
ought to be satisfied with ``not-quite Lisp interpreters'' are interesting,
but miss the point that people really are interested in real Lisp interpreters.

Here are some points in this discussion I haven't seen mentioned (perhaps
because everyone knows them; I am rather new to this subject, so they
are new to me):
	--So far all the examples have been of nontotal functions, like
	  interpreters.  P.r. does not cover all total g.r. functions.
	  Ackermann's function is a classic counterexample (sketched below).
	--As has been pointed out, Blum showed that the size of the smallest
	  p.r. program for a p.r. function may be much much bigger than the
	  size of the smallest g.r. program for the same function.
	  But 
		(a) this is only so for functions with a high time
		    complexity (Constable), and
		(b) the run time of the p.r. program can be within a
		    linear factor of the run time of the g.r. program
		    (Constable and Borodin).
	-- No one has mentioned any programmer-friendly language proposals
	   that admit only always-terminating programs.  Nor even recursive
	   programming constructs, other than for-loops, that guarantee
	   termination.  Are there any?  (Let me know by email if you
	   think this net discussion is silly).
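
For the record, the counterexample mentioned in the first point, in
modern notation (transcription mine):

    -- Ackermann's function: total and computable, but it grows faster
    -- than any primitive recursive function, so it is not p.r.
    ack :: Integer -> Integer -> Integer
    ack 0 n = n + 1
    ack m 0 = ack (m - 1) 1
    ack m n = ack (m - 1) (ack m (n - 1))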

My apologies for taking this discussion seriously.

Theo Norvell

Disclaimer:  I don't claim to be an expert on this subject.  I may have
misstated the results.