[comp.lang.prolog] compatibility/elegance & *theory*

jackson@esosun.UUCP (Jerry Jackson) (04/01/88)

After reading Richard O'Keefe's response to Cris Kobryn's article, I would
like to make a *few* points...

	First of all, I agree with the idea of being able to plug in
different readers -- however, I don't think this is the area of
compatibility to which Mr. Kobryn was referring.  The chief problems
with CommonLisp (according to me) are semantic, not syntactic.  The
CommonLisp designers paid too much attention to the M**lisp and
Z***lisp communities who moaned and whined about the AWFUL cost of
updating old code. 

>>    It is theoretically *impossible* to write a *correct* automatic
>>    conversion program for Prolog or Lisp.

What an interesting statement.... Is it also theoretically *impossible* to
write a *correct* automatic conversion program from Lisp to assembler? 
Somehow, this seems like a similar problem to me... 


>>  For example, in Prolog-as-we-know-it,
>>      atom(X)
>>  fails quietly if X is a variable.  This is not terribly logical,
>>  but that is the way it has always been, and people have written
>>  their programs assuming that that's the way it works.  Even NU
>>  Prolog, which supports coroutining in a particularly nice way,
>>  doesn't change that.  (NU Prolog has isAtom/1, which IS logical.)
>>  Come to that the equivalents in VM/Prolog and Waterloo Prolog are
>>  similar.  But in the current BSI fragments, we find that
>>      atom(X)
>>  will report an ERROR if X is a variable.


Prolog is a relatively new language.  Why should we expect its form to
be frozen in stone so soon?  It's hard to think in terms of "the way it
has always been" with regard to Prolog.  I have a sneaking suspicion that
"Prolog-as-we-know-it" will change quite a bit in the
future.  It's a shame that code might need to be re-written, but do we
want to burden future generations of logic programmers with today's
short-sightedness?  In this particular example (which I consider a
strange one for Mr. O'Keefe to use to make a point), it seems to me
that the BSI behavior is much more sensible... after all, does it
really make sense to ask the question: "Is this unknown object an
atom?"



I'm sure Mr. O'Keefe has some valid complaints about the semantics of
BSI Prolog (I haven't read the grimoire), but I fear many of his complaints are
grounded in a desire to avoid the (admittedly painful) task of upgrading
code.

How many times have I heard...

"But, we have too much COBOL code that already works -- we can't change now"

or,

"But, Fortran does everything I need..."

or even,

"BASIC is just fine.. after all, all languages are Turing-machine equivalent"


In some ways these are extreme examples; on the other hand, they are
less extreme in that the changes these folks are fighting are
far more pervasive and difficult than a few rather subtle changes.


Finally, do we want people 20 years from now thinking about Prolog the
way most people think about Fortran today?  That is what will happen
if we concern ourselves too much with compatibility at this early
date.


--Jerry Jackson

ok@quintus.UUCP (Richard A. O'Keefe) (04/01/88)

In article <136@vor.esosun.UUCP>, jackson@esosun.UUCP (Jerry Jackson) writes:
> After reading Richard O'Keefe's response to Cris Kobryn's article, I would
> like to make a *few* points...

A fair summary of what he says is "---- the users".

> >>    It is theoretically *impossible* to write a *correct* automatic
> >>    conversion program for Prolog or Lisp.
> What an interesting statement.... Is it also theoretically *impossible* to
> write a *correct* automatic conversion program from Lisp to assembler? 
> Somehow, this seems like a similar problem to me... 

Yes, it *is* theoretically impossible to write a correct automatic conversion
program from Lisp to assembler.  You have to have an interpreter as well.
Why?  Because you can construct code on the fly and execute it.  The problem
with automatic conversion from dialect X of Lisp (or Prolog, or Pop) to
dialect Y is that you need an interpreter for dialect **X** around at run
time, but what you get is an interpreter for dialect Y.  I gave an example
of this problem.  You don't even have to be able to construct arbitrary
chunks of code at run time, it is enough to be able to call a predicate
whose identity cannot be determined without running the code (or, to be
pedantic, without doing something approximately as costly).  I gave an
example in my previous message.  Here's another one:

	ordered(Test, List) :-
		ordered_1(List, Test).

	ordered_1([], _).
	ordered_1([Head|Tail], Test) :-
		ordered_1(Tail, Head, Test).

	ordered_1([], _, _).
	ordered_1([Head|Tail], Prev, Test) :-
		call(Test, Prev, Head),
		ordered_1(Tail, Head, Test).

	descending(List) :-
		ordered(compare(>), List).

In order to convert this to some other dialect,
    (a) the effect of compare(>, X, Y) must be available somehow.
    (b) the translator must be smart enough to know that it is ok to
        replace THIS instance of compare(>) by some other term, whereas
        other instances of compare(>) are data and should not be changed.
Unfortunately, because Lisp has EVAL and Prolog has call/1, whether a
particular instance of some term will be called or used as data is not a
simple syntactic property.  It's at least as hard as solving the halting
problem.  (In fact, a term may be used as BOTH code and data.)
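A rough modern analogue of this point (in Python rather than Lisp or
Prolog, purely for illustration; the function and names are invented)
shows how source text assembled at run time defeats any purely static
translation -- the translator would have to ship an evaluator for the
source language:

```python
# The operator string below is data right up to the moment it is
# executed, so no translator can statically decide, once and for all,
# whether it is "code" or "data".

def run_formula(op):
    source = "lambda x, y: x " + op + " y"  # source text built at run time
    f = eval(source)                        # data becomes code here
    return f(6, 7)

print(run_formula("*"))   # 42
print(run_formula("+"))   # 13
```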

> Prolog is a relatively new language.  Why should we expect its form to
> be frozen in stone so soon.

Because that's what a standard ***IS***.  I started using DEC-10 Prolog in
1979, and I think it had existed for over a year before I first saw it.
A language which is ten years old is hardly "relatively new".  Pascal
wasn't much older when it was standardised, and ADA was *born* standardised.

>  In this particular example, (which I consider a
> strange one for Mr O'Keefe to use to make a point), it seems to me
> that the BSI behavior is much more sensible.. after all, does it
> really make sense to ask the question: "Is this unknown object an
> atom?".

Jackson has managed to completely ignore my point.
I too explicitly said that the BSI behaviour is more "logical".
I explicitly said that they could add a new is_atom/1 predicate
similar to NU Prolog's isAtom/1 with my good will.
What I objected to was them CHANGING the definition of atom/1.
Look, all they have to do is leave atom/1 out of the standard entirely,
and define in its place is_atom/1 doing what they and Jackson want it to.

If var/1 makes sense (and it IS in the BSI fragments), then the existing
atom/1 makes sense.  My message included an example where the fact that
integer/1 fails for variables makes sense.  But that's not the point at
issue.  The point is that the BSI group *could* have provided the feature
they wanted in a way that would have made it possible for people to write
compatibility packages and would have significantly reduced the cost of
conversion, and by doing so they could have followed in the footsteps of
NU Prolog which already has this feature, but they chose not to do so.

> I'm sure Mr. O'Keefe has some valid complaints about the semantics of 
> BSI.. (I haven't read the grimoire), but I fear many of his complaints are
> grounded in a desire to avoid the (admittedly painful) task of upgrading
> code.

I read that as implying that I wanted to avoid changing MY code.
(BSI Prolog is hardly an improvement on anything, so you couldn't call it
"upgrading".)  That is not the case.  I don't intend to change any of my
code.  What I am worried about is paying customers.

> Finally, do we want people 20 years from now thinking about Prolog the
> way most people think about Fortran today.  This is what will happen
> if we concern ourselves too much with compatibility at this early
> date.

I want people 20 years from now thinking about Prolog the way most people
think about Autocoder.  Interesting, influential, useful in its time, but
long since surpassed.

Actually, come to think of it, I wouldn't mind too much if people DID
think of Prolog as being like Fortran.  I don't _like_ Fortran, but at
least I know what it does, and CALGO, JCAM, CJ, AS, LINPACK, TSPACK, ...
and many other collections of useful algorithms are in Fortran, so as
long as a Fortran processor does what the standard says it does, it can
be of use to me whether I know Fortran or not.  When it comes to cooking
a meal, I'd rather have an ugly old frying pan that works than a shiny new
one with no handle.

The point of a standard is precisely to freeze the state of the art so that
people who are interested in USING something can get on with the job,
secure in the knowledge that they have a workable definition of what their
tools are supposed to do.

Jackson quoted COBOL.  Well, I've news for him.  Successive versions of the
COBOL standard have been dramatically different.  It took *years* after
the introduction of COBOL 74 before many sites had upgraded, and the
conversion was barely finished when COBOL programmers were hit with a NEW
standard which had still more dramatic differences.  Jackson quoted FORTRAN.
Has he seen a draft of Fortran 8X, I wonder?  Having a standard which has
been ratified in a particular year does not freeze the language for all
time.

I keep trying to get this point across, and seem to keep failing:

    -	DESIGNING a language is one thing, with one set of criteria.

    -	STANDARDISING a language is another thing with another set.

    -	Just because there is a standard for a language doesn't mean
	you can't design a new language, only that your new language
	isn't the existing standard.

jackson@esosun.UUCP (Jerry Jackson) (04/02/88)

In response to my previous posting, Richard O'Keefe has had some interesting
things to say:

>> A fair summary of what he says is "---- the users".

no comment.


>>Yes, it *is* theoretically impossible to write a correct automatic conversion
>>program from Lisp to assembler.  You have to have an interpreter as well.
>>Why?  Because you can construct code on the fly and execute it.  The problem
>>with automatic conversion from dialect X of Lisp (or Prolog, or Pop) to
>>dialect Y is that you need an interpreter for dialect **X** around at run
>>time, but what you get is an interpreter for dialect Y. 

Golly, so you need an interpreter too? Having written a couple of Lisp 
interpreters, a couple of subset Lisp compilers, and a couple of Prolog
subset interpreters, I never would have guessed that.  So why can't the 
interpreter be part of the translation... Can you say 'action routine'???

For your information, most compilers have pieces of *pre-written* code that
are used to implement certain operations...

In any case, the original statement that prompted your *theoretically* 
*impossible* response merely stated that automatic conversion programs could
be helpful -- not that they were the ultimate solution to compatibility
problems.


>> > Prolog is a relatively new language.  Why should we expect its form to
>> > be frozen in stone so soon.

>>Because that's what a standard ***IS***.  I started using DEC-10 Prolog in
>>1979, and I think it had existed for over a year before I first saw it.
>>A language which is ten years old is hardly "relatively new".  Pascal
>>wasn't much older when it was standardised, and ADA was *born* standardised.

The "frozen in stone" that I was referring to was not the new developing
standard (which does have 10 years behind it), but the de facto standard that
O'Keefe has been pushing (DEC-10)... which had effectively NO history when
it first became widely used.



>>Jackson has managed to completely ignore my point.
>>I too explicitly said that the BSI behaviour is more "logical".
>>I explicitly said that they could add a new is_atom/1 predicate
>>similar to NU Prolog's isAtom/1 with my good will.
>>What I objected to was them CHANGING the definition of atom/1.
>>Look, all they have to do is leave atom/1 out of the standard entirely,
>>and define in its place is_atom/1 doing what they and Jackson want it to.


No, I didn't ignore your point -- I simply disagreed with it.  I feel that
part of the purpose of creating a standard is to clean up the language by
acting upon the experience of the user community -- even a language such as
'C' (which has MUCH more money and effort invested in pre-existing software)
has gone through some changes in the course of standardization.  (And 'C' is
typically used in production environments where change is much harder to
deal with than in the research-oriented environments where most Prolog work
is still done.)

>> > Finally, do we want people 20 years from now thinking about Prolog the
>> > way most people think about Fortran today.  This is what will happen
>> > if we concern ourselves too much with compatibility at this early
>> > date.

>>I want people 20 years from now thinking about Prolog the way most people
>>think about Autocoder.  Interesting, influential, useful in its time, but
>>long since surpassed.

O'Keefe has managed to completely miss my point -- I consider Fortran (and
Autocoder) to be major WINS for the time they were created...  When I referred
to the way people think about Fortran today, I wasn't referring to Fortran
as an interesting historical artifact (which it is), but to the fact that
inertia has managed to keep it in an objectionable state up to the present
day...


>>Jackson quoted COBOL.  Well, I've news for him.  Successive versions of the
>>COBOL standard have been dramatically different.  It took *years* after
>>the introduction of COBOL 74 before many sites had upgraded, and the
>>conversion was barely finished when COBOL programmers were hit with a NEW
>>standard which had still more dramatic differences.  Jackson quoted FORTRAN.
>>Has he seen a draft of Fortran 8X, I wonder?  Having a standard which has
>>been ratified in a particular year does not freeze the language for all
>>time.

Apparently a grain-size problem occurred here...  I was referring to people
changing FROM COBOL or Fortran to another language (that is why I mentioned
that it was an extreme example).


>>I keep trying to get this point across, and seem to keep failing:
>>
>>    -	DESIGNING a language is one thing, with one set of criteria.
>>
>>    -	STANDARDISING a language is another thing with another set.
>>
>>    -	Just because there is a standard for a language doesn't mean
>>	you can't design a new language, only that your new language
>>	isn't the existing standard.

Who could argue with this?  Perhaps the reason you keep failing to get
your point across is that this really isn't the point you are pushing.

--Jerry Jackson

lhe@sics.se (Lars-Henrik Eriksson) (04/06/88)

In article <136@vor.esosun.UUCP> jackson@esosun.UUCP (Jerry Jackson) writes:
>After reading Richard O'Keefe's response to Cris Kobryn's article, I would
>like to make a *few* points...

ROK wrote:
>>>    It is theoretically *impossible* to write a *correct* automatic
>>>    conversion program for Prolog or Lisp.

>What an interesting statement.... Is it also theoretically *impossible* to
>write a *correct* automatic conversion program from Lisp to assembler? 
>Somehow, this seems like a similar problem to me... 

Well, yes, it is. UNLESS you are willing to include in the resulting
assembler program a complete Lisp interpreter. The reason for this, of
course, is that a Lisp program can construct code and execute it while
it is running. Even if the Lisp program itself is translated into assembler,
what it constructs is still Lisp, and must be interpreted by a Lisp
interpreter.

So, if you want a correct automatic conversion program for Prolog or Lisp,
the output of the conversion must include a complete interpreter for the
particular dialect of Prolog you happen to be using. The same problem
would arise in a program to convert between different Prolog dialects.

This does not mean that you cannot in particular cases (or perhaps even
in most cases) get away without the interpreter, of course.  But if you
want a conversion program that works 100% of the time you have no choice.

Lars-Henrik Eriksson				Internet: lhe@sics.se
Swedish Institute of Computer Science		Phone (intn'l): +46 8 752 15 09
Box 1263					Telefon (nat'l): 08 - 752 15 09
S-164 28  KISTA, SWEDEN

jeff@aiva.ed.ac.uk (Jeff Dalton) (04/25/88)

In article <841@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>Yes, it *is* theoretically impossible to write a correct automatic conversion
>program from Lisp to assembler.  You have to have an interpreter as well.
>Why?  Because you can construct code on the fly and execute it.

I take your point, Richard, but it's worth pointing out that you don't
need an interpreter per se: you can just keep the translator around.

>Unfortunately, because Lisp has EVAL and Prolog has call/1, whether a
>particular instance of some term will be called or used as data is not a
>simple syntactic property.  It's at least as hard as solving the halting
>problem.  (In fact, a term may be used as BOTH code and data.)

Scheme (which is after all a dialect of Lisp) doesn't have EVAL in
the R3 Scheme Report in part because they want to prevent arbitrary
data being turned into code.

It's interesting that call/1 in Prolog is basically an EVAL, while
Lisp has EVAL but also a less troublesome notion in APPLY.  
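The distinction can be sketched in Python (used here only as a neutral
notation; Lisp's EVAL and APPLY are the real subjects): eval interprets a
source representation, while applying a function object involves no such
step.

```python
# EVAL-style: a source representation (here a string) is data that
# gets interpreted as code at the point of the eval.
code_as_data = "2 + 3"
print(eval(code_as_data))        # data -> code -> 5

# APPLY-style: the value held is the procedure itself, not its name or
# its source text, so nothing is re-interpreted at call time.
def add(x, y):
    return x + y

f = add                          # a function object, not a string
print(f(2, 3))                   # plain application -> 5
```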

>> Prolog is a relatively new language.  Why should we expect its form to
>> be frozen in stone so soon.

Note that this is an argument that Prolog is not ready to be standardized,
not that the standard should be for an improved Prolog.  Unfortunately (for
that argument), people would very much like to have a standard now rather
than wait for Prolog to develop into something better.

>The point of a standard is precisely to freeze the state of the art so that
>people who are interested in USING something can get on with the job,
>secure in the knowledge that they have a workable definition of what their
>tools are supposed to do.

Just so.

But there's a problem with this view, and I think both the Lisp and Prolog
standardization efforts have encountered it.  The problem is that some
parts of the existing language do not, to some people, seem worthy of
standardization.  And so long as we're standardizing, they say, we may
as well clean things up a little.  These things may have been ok for an
individual implementation, they add, but a standard must meet higher
standards...

This sort of thing is fine to a limited extent.  Where different
implementations disagree, there has to be some adjustment.  But when such
changes amount to designing a new language, it is certainly legitimate to
complain that they have gone too far.

Whether the BSI changes do amount to designing a new language is another
matter, but one on which others are better qualified to comment than I.

Jeff Dalton,                      JANET: J.Dalton@uk.ac.ed             
AI Applications Institute,        ARPA:  J.Dalton%uk.ac.ed@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton

ok@quintus.UUCP (Richard A. O'Keefe) (04/26/88)

In article <365@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> In article <841@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
> >Yes, it *is* theoretically impossible to write a correct automatic conversion
> >program from Lisp to assembler.  You have to have an interpreter as well.
> >Why?  Because you can construct code on the fly and execute it.

> I take your point, Richard, but it's worth pointing out that you don't
> need an interpreter per se: you can just keep the translator around.

I regard keeping the translator around as just another way of implementing
an interpreter.  (I wish someone would implement a Brown-et-al-style
"throw-away-compiler" for Prolog and get us some measurements.)  The point
is that you need a non-trivial chunk of code which in some sense defines the
semantics of the _OLD_ language.

> It's interesting that call/1 in Prolog is basically an EVAL, while
> Lisp has EVAL but also a less troublesome notion in APPLY.  

Actually, call/1 in Prolog is a pretty straightforward APPLY.  (There is
a family of call/N things each adding N-1 extra arguments.)  The problem
is that ','/2 and so on are FEXPRs.  (Is everyone nicely confused now?)
The point is that call(','(write(a)), write(b)) ends up calling write(a)
and write(b), not because call/2 does anything special, but because (A,B)
calls A and B.  It is as if you did
	(APPLY #'(LAMBDA (X Y) (AND (EVAL X) (EVAL Y)))
	    '(PRINT 'a) '(PRINT 'b) '())
in Lisp.

There is a splendid little article in the April 1988 issue of SigPlan
Notices:
	"The Role of the Language Standards Committee".
I particularly like his point 3.  (Which amongst other things suggests that
a standard is not the place for _MY_ pet ideas either.)

richard@aiva.ed.ac.uk (Richard Tobin) (04/28/88)

In article <903@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>Actually, call/1 in Prolog is a pretty straightforward APPLY.  (There is
>a family of call/N things each adding N-1 extra arguments.)  The problem
>is that ','/2 and so on are FEXPRs.  (Is everyone nicely confused now?)

Yup.

>The point is that call(','(write(a)), write(b)) ends up calling write(a)
>and write(b), not because call/2 does anything special, but because (A,B)
>calls A and B.  It is as if you did
>	(APPLY #'(LAMBDA (X Y) (AND (EVAL X) (EVAL Y)))
>	    '(PRINT 'a) '(PRINT 'b) '())
>in Lisp.

Is this really true?  Isn't it the case that in all reasonable prologs
call/1 *is* a simple interpreter, with the result that (in particular)
cut works reasonably?

For example,
  X = (repeat, write(x), !, fail), call(X).
prints just one x, but if I define the operator 'and' (with precedence like
that of comma) to be
  A and B :- call(A), call(B).
then
  X = (repeat and write(x) and ! and fail), call(X).
prints rather a lot of x's.

Are you being disingenuous here, or have I just eaten too much microwave 
popcorn?

-- Richard
-- 
Richard Tobin,                         JANET: R.Tobin@uk.ac.ed             
AI Applications Institute,             ARPA:  R.Tobin%uk.ac.ed@nss.cs.ucl.ac.uk
Edinburgh University.                  UUCP:  ...!ukc!ed.ac.uk!R.Tobin

jeff@aiva.ed.ac.uk (Jeff Dalton) (04/28/88)

In article <365@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
- It's interesting that call/1 in Prolog is basically an EVAL, while
- Lisp has EVAL but also a less troublesome notion in APPLY.  

In article <903@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A.
O'Keefe) writes:
> Actually, call/1 in Prolog is a pretty straightforward APPLY.  (There is
> a family of call/N things each adding N-1 extra arguments.)  The problem
> is that ','/2 and so on are FEXPRs.  (Is everyone nicely confused now?)

I don't think I'm confused (yet).  You're saying call/1 is a normal
procedure but ','/2 and friends are not.  So call/1, in itself,
is like APPLY, but you can apply things like ','/2, which actually
implement any unusual behavior.

> The point is that call(','(write(a)), write(b)) ends up calling write(a)
> and write(b), not because call/2 does anything special, but because (A,B)
> calls A and B.  It is as if you did
>	(APPLY #'(LAMBDA (X Y) (AND (EVAL X) (EVAL Y)))
>	    '(PRINT 'a) '(PRINT 'b) '())
> in Lisp.

I could put my point like this: in the Lisp "as if" you end up having
to call EVAL in order to convert "source code representations" to code.
Without EVAL, you couldn't make that kind of conversion.  (There might
be other ways, such as COMPILE in Common Lisp, but the basic point
still holds.  Note too that CL's APPLY does make such conversions,
but Scheme's doesn't.  Nor can Scheme apply fexprs.)

The difference between Prolog and Lisp is that in Prolog just saying

     call(X).

takes some "source code representation" (the value of X) and
interprets it as code while in Scheme-like Lisps

     (apply f ...)

doesn't.  The value of F is a procedure, not the name of one.  That
is why call/1 makes problems for Prolog module systems (you have a
procedure name whose reference has not yet been resolved with respect
to modules) while APPLY doesn't make analogous problems in Lisp.
(Actually, there are other factors involved as well; and some Prolog
module proposals get around the problem one way or another, but
anyway...)
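A hedged sketch of that module problem (Python stands in for both
languages; the "modules" and names are invented for illustration):
calling by name forces a choice of namespace at call time, while
applying a procedure object does not.

```python
# Two invented "modules", each exporting a procedure under the same name.
module_a = {"greet": lambda: "hello from a"}
module_b = {"greet": lambda: "hello from b"}

def call_by_name(name, namespace):
    # call/1-style: a NAME arrives, so someone must decide, at call
    # time, which namespace to resolve it in.
    return namespace[name]()

def call_procedure(proc):
    # APPLY-style: the value IS the procedure; there is nothing to resolve.
    return proc()

print(call_by_name("greet", module_a))    # caller must pick the module
print(call_procedure(module_b["greet"]))
```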

In article <841@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. 
O'Keefe) writes:
+ Yes, it *is* theoretically impossible to write a correct automatic
+ conversion program from Lisp to assembler.  You have to have an
+ interpreter as well.  Why?  Because you can construct code on the fly
+ and execute it.

Bear in mind the "construct code on the fly and execute it".  This is
just what's difficult in Scheme.  To do it, you need something like
EVAL (or LOAD, it turns out).  APPLY won't do *unless* you think in
terms of Common Lisp's "auto-coercion" idea of APPLY, where you can
say (APPLY (CONS 'LAMBDA ...) ...).  That won't work in (R3RS) Scheme.

+ Unfortunately, because Lisp has EVAL and Prolog has call/1, whether a
+ particular instance of some term will be called or used as data is not
+ a simple syntactic property.  It's at least as hard as solving the
+ halting problem.  (In fact, a term may be used as BOTH code and data.)

- Scheme (which is after all a dialect of Lisp) doesn't have EVAL in
- the R3 Scheme Report in part because they want to prevent arbitrary
- data being turned into code. 

So, while EVAL and call/1 bring in the problem of whether or not
some term will be called, APPLY needn't do so.

Something trivial...

- I take your point, Richard, but it's worth pointing out that you don't
- need an interpreter per se: you can just keep the translator around.

> I regard keeping the translator around as just another way of 
> implementing an interpreter.  The point is that you need a
> non-trivial chunk of code which in some sense defines the
> semantics of the _OLD_ language.

We're not disagreeing.  I said I took your point, and by the "per se"
I meant to indicate that I meant an interpreter in a narrow sense.

Jeff Dalton,                      JANET: J.Dalton@uk.ac.ed             
AI Applications Institute,        ARPA:  J.Dalton%uk.ac.ed@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton

ok@quintus.UUCP (Richard A. O'Keefe) (04/30/88)

In article <383@aiva.ed.ac.uk>, richard@aiva.ed.ac.uk (Richard Tobin) writes:
> Is this really true?  Isn't it the case that in all reasonable prologs
> call/1 *is* a simple interpreter, with the result that (in particular)
> cut works reasonably?

Yes it is true, and no it is not the case that call/1 *is* an interpreter.
Be careful to distinguish what something _IS_ from how it is _implemented_.
Let me sketch a different implementation:

	call(Goal) :-
		callable(Goal),
		locate_procedure_record(Goal, Proc),
		!,
		unpack_goal_and_jump(Goal, Proc).
	call(Goal) :-
		callable(Goal),
		!,
		<report a call to an undefined predicate>.
	call(Goal) :-
		<report a call to a variable, number, &c>.

So, for example, call(append(A,B,C)) is basically an indirect jump to the
(presumably compiled) code for append/3; this jump having occurred nothing
further takes place that could be called "interpretation".

Accompanying this would be definitions like

	','(A,B) :-
		label(ChoicePoint),
		interpret((A,B), ChoicePoint).

	';'(A,B) :-
		label(ChoicePoint),
		interpret((A;B), ChoicePoint).

This is what I meant by saying that __call__ is like EVAL, but that
__,__ and __;__ and so on are FEXPRs.  It is not that 'call' interprets
anything, but that if you call ','/2 IT will do certain things.  (Think
of the compiler as doing partial execution.)

> Are you being disingenuous here, or have I just eaten too much microwave 
> popcorn?
I'm afraid it must be the latter.  Try cooking it first (:-).

ok@quintus.UUCP (Richard A. O'Keefe) (05/01/88)

In article <385@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> In article <365@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> - It's interesting that call/1 in Prolog is basically an EVAL, while
> - Lisp has EVAL but also a less troublesome notion in APPLY.  
> In article <903@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A.
> O'Keefe) writes:
> + Actually, call/1 in Prolog is a pretty straightforward APPLY.  (There is
> + a family of call/N things each adding N-1 extra arguments.)  The problem
> + is that ','/2 and so on are FEXPRs.  (Is everyone nicely confused now?)

> The difference between Prolog and Lisp is that in Prolog just saying
>      call(X).
> takes some "source code representation" (the value of X) and
> interprets it as code while in Scheme-like Lisps
>      (apply f ...)
> doesn't.  The value of F is a procedure, not the name of one.

There are two differences between EVAL and APPLY
(1) EVAL evaluates the arguments of its argument, APPLY does not.
    In this respect, call/1 resembles APPLY, not EVAL.  call/1 as such
    does not do any interpretation other than to locate the predicate
    which is to be called.
(2) EVAL is given (a form in which there occurs) the _name_ of a function,
    not the function itself, APPLY is given a function pointer/closure &c.
    In this respect call/1 resembles EVAL, not APPLY.  There is no Prolog
    object which "directly" represents a predicate.

I have always regarded the essential feature of EVAL as being (1), and
have regarded (2) as a detail of implementation, so I understood "is
basically an EVAL" to mean "is like an interpreter which is responsible
for evaluating sub-forms".  That is what is false.  Jeff Dalton is quite
right about (2), and as he points out, you _do_ have to work a bit to
make a module system and call/1 work together.  If Prolog were typed,
with the void/N type constructor as Alan Mycroft suggested, it would be
possible to have a version of call/1 which resembled APPLY in both respects.

jeff@aiva.ed.ac.uk (Jeff Dalton) (05/05/88)

The first 4 paragraphs or so are a review of the story so far...

The original question was whether it is possible to write a correct
(and complete) automatic conversion program from Prolog (or Lisp) to
assembler.  It was pointed out (by OK and perhaps others) that this
was not possible because call/1 (in Prolog) or EVAL (in Lisp) could
use arbitrary data ("source code representations") as code and so
would require that the "conversion" include an interpreter for the
language -- the conversion wouldn't be complete, for further
conversions might be required.

I then suggested that Lisp's APPLY was less troublesome in this regard
than call/1 or EVAL because it just calls a function (which is a kind
of data object in Lisp) rather than processing some source representation
that might turn out to do anything whatsoever.

Richard replied that call/1 was just APPLY, not EVAL.

[Aside: it isn't always implemented that way.  Sometimes call/1 is
an interpreter rather than having the "interpretation" put off into
"fexprs" such as ','/2.]

I replied that call/1 takes (and in Prolog must take) the name of a
procedure while APPLY can take the actual procedure and that this
causes problems for some kinds of Prolog module systems.  It turns
out that this name vs. procedure point isn't quite the right one
though: see below.

In article <922@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>There are two differences between EVAL and APPLY
>(1) EVAL evaluates the arguments of its argument, APPLY does not.
>    In this respect, call/1 resembles APPLY, not EVAL.  call/1 as such
>    does not do any interpretation other than to locate the predicate
>    which is to be called.
>(2) EVAL is given (a form in which there occurs) the _name_ of a function,
>    not the function itself, APPLY is given a function pointer/closure &c.
>    In this respect call/1 resembles EVAL, not APPLY.  There is no Prolog
>    object which "directly" represents a predicate.
>
>I have always regarded the essential feature of EVAL as being (1), and
>have regarded (2) as a detail of implementation, so I understood "is
>basically an EVAL" to mean "is like an interpreter which is responsible
>for evaluating sub-forms".  That is what is false.

The question of whether APPLY accepts procedures or (also) procedure
names is one of the key differences between Scheme and other Lisps
such as Franz and CL.  It is one way in which Scheme makes a clearer
distinction between procedures and source representations.

By saying call/1 is basically an EVAL, I meant that it brings in the
problems that EVAL does while APPLY does not.  One of these problems
is that procedure names can be passed around rather than procedures.
Then you may have to decide at call time which module to dereference
the name in unless (as in Common Lisp packages) the module information
is part of the name.  So this is a problem.  But it's not the right
problem, or at least not all of it.

The right problem is that EVAL means you can never completely
translate Lisp without supplying an EVAL on the object side too.
Call/1 shares this problem.

APPLY, defined as in Scheme, does not because
  (a) You can call procedures but not, say, lists like (LAMBDA (X) ...).
      So you can't build an arbitrary expression as a list and then
      call APPLY to get that list evaluated.
  (b) You can't call fexprs, only functions of the normal sort.
      So you can't say

        (apply if '(t <arbitrary expression as a list>))

      Richard's explanation of call/1 shows that it can do this.
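In Python terms (again only an analogy, with invented data), restriction
(a) amounts to this: a structure that merely *looks* like code cannot be
applied, which is exactly what keeps a Scheme-style APPLY harmless.

```python
# A structure that merely resembles code: to APPLY it is only data.
looks_like_code = ("lambda", ("x",), "x")

def try_apply(f, *args):
    # Scheme-style apply: only genuine procedures may be called;
    # arbitrary data is never promoted to code.
    if not callable(f):
        return "error: not a procedure"
    return f(*args)

print(try_apply(looks_like_code, 1))     # rejected: data stays data
print(try_apply(lambda x: x + 1, 1))     # a real procedure applies fine
```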

>Jeff Dalton is quite right about (2), and as he points out, you _do_
>have to work a bit to make a module system and call/1 work together.

By the way, Richard, are you still opposed to the so-called atom-based
module schemes?  I suppose it should be a separate topic...

>If Prolog were typed, with the void/N type constructor as Alan
>Mycroft suggested, it would be possible to have a version of call/1
>which resembled APPLY in both respects.

Tell me more.

Jeff Dalton,                      JANET: J.Dalton@uk.ac.ed             
AI Applications Institute,        ARPA:  J.Dalton%uk.ac.ed@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton