[comp.lang.prolog] BSI standards

vantreeck@curie.dec.com (03/09/88)

RE: Chris Moss' comments on the proposed BSI standards for PROLOG syntax.

I read the reports also. I'd rather see user defined operators left out of the
standard entirely because it complicates the parsing of the current Edinburgh
syntax such that it is difficult to write a parser in a low level language
(which would provide significant space and time advantages). And I also don't
like user defined operators because it makes it impossible to write a simple
BNF to describe the syntax, i.e., makes the language much more difficult to
understand for very little gain in language capability. It seems like a lot
of work is being done to make PROLOG syntax work for a feature of very minor
importance.

I know others disagree. And seeing as how user defined operators are going
to be part of PROLOG, I am *VERY HAPPY* to see that the syntax for operators
is being done such that it will be easier to write deterministic parsers.
I'd much rather have half a loaf than none.

RE: the rest of the standard.

I also see proposals for new data types, e.g., character, string, and list. I
wish someone could explain why these are really needed as part of the standard.
PROLOG is supposed to be a high level language. Its advantage over lower
level languages is that it allows one to program at a more abstract level. One
should not have to be burdened with low level implementation details of how
data is physically represented.

It seems to me that type string is totally unnecessary. I have a small PROLOG
compiler on my Mac+ that does not make a distinction between string and atom.
The symbol table is garbage collected on backtracking. Yes, it is a little
slower than putting a string on top of the heap. But how much time do most
PROLOG programs spend doing string functions? 

I doubt that most PROLOG programs spend enough time doing string functions
that it should drive the PROLOG programmer into thinking about physical data
representation. If a lot of that kind of string manipulation is significantly
impacting performance of a PROLOG program, it would be better to write that
portion of the program in a language more suited to that task, and
link it into the PROLOG program.

If the consensus is that it is necessary to have implementation details drive
the standard, they should be clearly isolated, as ANSI C and Ada do, e.g., with
a ":- pragma" directive.

-George

shebs%defun.utah.edu.uucp@utah-cs.UUCP (Stanley T. Shebs) (03/09/88)

In article <8803082357.AA01587@decwrl.dec.com> vantreeck@curie.dec.com writes:

>I also see proposals for new data types, e.g., character, string, and list. I
>wish someone could explain why these are really needed as part of the standard.
>PROLOG is supposed to be a high level language. Its advantage over lower
>level languages is that it allows one to program at a more abstract level.

String and list types have good and bad points, but the lack of character
types in older Prologs is a great embarrassment.  Numbers like 26 and 65
are sadly common, as is doing arithmetic on these to implement character
operations.  I think it's safe to say that any language requiring the use
of numbers to represent some other type of object is not "high level", at
least with respect to that type.
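
A concrete example of the style I mean (a sketch, and assuming ASCII):
case conversion by arithmetic on bare codes, where 97, 122, and 32 mean
nothing unless you carry the code table around in your head.

	% "character" handling in an older Prolog: raw codes and arithmetic
	to_upper(C, U) :- C >= 97, C =< 122, !, U is C - 32.
	to_upper(C, C).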

							stan shebs
							shebs@cs.utah.edu

shardy@teknowledge-vaxc.ARPA (Steve Hardy) (03/10/88)

vantreeck@curie.dec.com writes:
	            
	RE: Chris Moss' comments on the proposed BSI standards for
	PROLOG syntax.    
                                                                             
	I read the reports also. I'd rather see user defined
	operators left out of the standard entirely because it
	complicates the parsing of the current Edinburgh syntax such
	that it is difficult to write a parser in a low level
	language (which would provide significant space and time
	advantages).

I disagree.  Even though user-defined operators do complicate
prolog syntax, I have found them extremely useful when using
prolog to prototype interpreters for new problem-specific
languages.  We used Prolog to build the first version of the M.1
expert system shell.  Because of user-definable operators, it
was not necessary to write a parser for M.1 per se.  This
made development much simpler.  Recoding M.1 in C took more
effort than the original development.  Much of the extra cost
was incurred because of the need to write an explicit M.1 parser
in the C version.

DISCLAIMER: Even though I am an employee of Teknowledge, these
are my personal opinions and not those of the company.

Steve Hardy, SHARDY@TEKNOWLEDGE.ARPA, (415) 424-0500

ok@quintus.UUCP (Richard A. O'Keefe) (03/10/88)

In article <8803082357.AA01587@decwrl.dec.com>, vantreeck@curie.dec.com writes:
> RE: Chris Moss' comments on the proposed BSI standards for PROLOG syntax.
> 
> I read the reports also. I'd rather see user defined operators left out of the
> standard entirely because it complicates the parsing of the current Edinburgh syntax
> such that it is difficult to write a parser in a low level language (which
> would provide significant space and time advantages).

Hmm.  The difficulty of writing a Prolog parser in a low-level language
(let's say "C", for argument's sake) is largely a matter of the skill of
the programmer.  The operator ambiguities in Prolog *do* cause problems
(though not nearly as many as people think!), but *not* user-defined
operators as such.  I've written a couple of Prolog parsers in C, and
the existence of user-defined operators as such doesn't complicate the
parser, no, not one little bit.  And I'll back mine against yours for 
speed.

> And I also don't like user defined operators because it makes it
> impossible to write a simple BNF to describe the syntax, i.e., makes
> the language much more difficult to understand for very little
> gain in language capability.

What do you want BNF for?  A DCG for Prolog is every bit as clear as BNF
and has no trouble with user-defined operators.  Let me show you two rules
from the grimoire:

5.8		term = prefix op, term | term, postfix op ;
Constraint	X		  f      f
Priority	N      N,R        R      L     N,L
Abstract func(F,[A])   F          A      A     F

5.9		term	       = term, infix op, term ;
Constraint	X		 f		 f
Priority	N		 L     L,N,R	 R
Abstract	func(F,[A,B])    A     F         B

There are also three rules (8, 9, 10 in the grimoire) to say which
things are prefix, postfix, and infix operators, but they just point to
tables, which have to exist anyway for built-in operators.  With the
exception of the rules for postfix operators, all of this is needed for
built-in operators like '<' and '-'. This is complicated?  If you don't
like the grimoire's NIH definition style, use an attribute grammar;
it comes to the same thing.

As DCGs, we'd have
	term(N, func(F,[A])) --> prefix_op(F, N, R), term(R, A).
	term(N, func(F,[A])) --> term(L, A), postfix_op(F, N, L).
	term(N, func(F,[A,B])) --> term(L, A), infix_op(F, N, L, R), term(R, B).

If you are prepared to object to the existence of built-in operators like
'<' and '-', then you are entitled to complain about the complexity of
these rules.  But if you want those operators, then you need two of these
rules to handle them, and user-defined operators do NOTHING to increase
their complexity.

> It seems like a lot of work is being done to make PROLOG 
> syntax work for a feature of very minor importance. 

By definition, "a feature of very minor importance" means "something
I don't use".  I think the same way.  For example, I would be quite happy
to see postfix operators removed entirely from the language; I've never
had any use for them at all.  But there are other people who use them.
So I'm wrong.

> It seems to me that type string is totally unnecessary.

Yes and no.  I whole-heartedly agree that Prolog as such has no need of
strings whatsoever.  There *is* a good reason for having strings in the
standard, though.  (There are grounds for doubting that it is the reason
that the BSI committee have in mind.  They appear to think that strings
are needed for text processing.  Someone should have told David Warren
before he wrote his editor in DEC-10 Prolog...)  The reason is this:
there are other programming languages in the world, and many Prolog systems
have to interface to them.  For example, a Prolog system embedded in Lisp may
be given an atom and a string, and should be able to give them back.
There are enough Prolog+Lisp systems (Quintus have a good one, and of course
there's PopLog) to warrant standardising what strings look like, _if_ they
exist.  I agree that the standard should not _require_ their existence.

It is interesting to compare the speed of character list operations with
the speed of byte-vector operations.  In Xerox Quintus Prolog, for example,
text-processing operations which touch all the characters of a text object
are rather faster working on Prolog lists of characters than working on
Lisp strings.  For the most part, the desire to represent text as byte
vectors seems to result from either Basic envy or the use of slow Prologs.

ok@quintus.UUCP (Richard A. O'Keefe) (03/10/88)

In article <5334@utah-cs.UUCP>, shebs%defun.utah.edu.uucp@utah-cs.UUCP (Stanley T. Shebs) writes:
> String and list types have good and bad points, but the lack of character
> types in older Prologs is a great embarrassment.  Numbers like 26 and 65
> are sadly common, as is doing arithmetic on these to implement character
> operations.  I think it's safe to say that any language requiring the use
> of numbers to represent some other type of object is not "high level", at
> least with respect to that type.

A distinction:
	the lack of a _notation_ for characters other than the numbers.
	PopLog has `x`, DEC-10 Prolog, (some versions of) C Prolog, and
	Quintus Prolog have 0'x (yes, it's a hack, but there weren't
	any characters left over), ALS Prolog has ~x.

	the lack of a separate datatype for characters.

For once I'm not going to express an opinion!  Comparing Pascal code I've
seen with C suggests that using ORD and CHR a lot isn't a good idea either.
I've used Interlisp enough to know that I *definitely* don't want to use
atoms to represent characters, but I haven't used Common Lisp enough to
know whether that's better.

What problems are people having that could be reduced by having a separate
character type (as opposed to having characters coded as integers but with
a tolerable notation such as `x`)?  Anyone using a Prolog that has such a
data type?

shebs%defun.utah.edu.uucp@utah-cs.UUCP (Stanley T. Shebs) (03/10/88)

In article <751@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:

>	the lack of a _notation_ for characters other than the numbers.
>	PopLog has `x`, DEC-10 Prolog, (some versions of) C Prolog, and
>	Quintus Prolog have 0'x (yes, it's a hack, but there weren't
>	any characters left over),

Notation works for input, but fails for output.  Not so bad for batch
programming, but you get garbage (= mysterious numbers) when listing
a predicate.
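
For instance (assuming ASCII, where 0'a reads in as 97):

	vowel(0'a).              % what the source file says

	% ?- listing(vowel).
	% vowel(97).             <- what comes back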

>For once I'm not going to express an opinion!  Comparing Pascal code I've
>seen with C suggests that using ORD and CHR a lot isn't a good idea either.

What was it that made it a bad idea?  This is the first time I've heard
this view expressed by a non-assembly programmer.

>I've used Interlisp enough to know that I *definitely* don't want to use
>atoms to represent characters, but I haven't used Common Lisp enough to
>know whether that's better.

In my opinion, character objects are one of the few things that Common Lisp
can't be faulted on!  The first time you get back #\s instead of 115 is
enough to convince anyone that hasn't memorized the ASCII table.  Also we
should not neglect the sense of security gained when a function refuses to
use a character as an index, or to multiply two characters together (there
are char<->int conversion functions if needed).

							stan shebs
							shebs@cs.utah.edu

miller@ACORN.CS.ROCHESTER.EDU (Brad Miller) (03/11/88)

    Date: 10 Mar 88 15:12:29 GMT
    From: shebs%defun.utah.edu.uucp@utah-cs.UUCP (Stanley T. Shebs)

[...]

    In my opinion, character objects are one of the few things that Common Lisp
    can't be faulted on!  The first time you get back #\s instead of 115 is
    enough to convince anyone that hasn't memorized the ASCII table.  [...]

							    stan shebs
							    shebs@cs.utah.edu

Indeed, it need not even be ASCII... so it's even less dependent on the
hardware you happen to be hacking on!

------
miller@cs.rochester.edu {...allegra!rochester!miller}
Brad Miller
U. Rochester Comp Sci Dept.
'If anything is True, this is.'

chik@icot.JUNET (Chikayama Takashi) (03/11/88)

In article <751@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>What problems are people having that could be reduced by having a separate
>character type (as opposed to having characters coded as integers but with
>a tolerable notation such as `x`)?

I had at least one such experience.  When I wrote a CROSS compiler (or
rather, a cross preprocessor, that translates some language with
Prolog-like syntax to another) several years ago, it was quite nasty
that 0'x could not be distinguished from 120 once read in.  On the
target machine, which uses a different character code, 120 should mean
120 but 0'x shouldn't.
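
To spell it out (assuming an ASCII host), these two facts are identical
terms once read/1 has seen them, so there is nothing left for the
preprocessor to re-map:

	wanted_code(0'x).     /* meant as "the code of the character x" */
	wanted_code(120).     /* meant as the integer 120 */

	% ?- X = 0'x, Y = 120, X == Y.
	% yes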

Takashi Chikayama

vantreeck@curie.dec.com (03/12/88)

>I disagree.  Even though user-defined operators do complicate
>prolog syntax, I have found them extremely useful when using
>prolog to prototype interpreters for new problem-specific
>languages.  We used Prolog to build the first version of the M.1
>expert system shell.  Because of user-definable operators, it
>was not necessary to write a parser for M.1 per se.  This
>made development much simpler.  Recoding M.1 in C took more
>effort than the original development.  Much of the extra cost
>was incurred because of the need to write an explicit M.1 parser
>in the C version.
> 
>Steve Hardy, SHARDY@TEKNOWLEDGE.ARPA, (415) 424-0500

Of course user defined operators can be useful for things like prototyping.
They can also end up as code that needs to be maintained by programmers who
have to carefully reverse-engineer the thinking behind the priorities and
associativities of user defined operators to understand exactly how the
program should behave. Often the maintainers are people who don't
thoroughly understand operator precedence and associativity.
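
For example (a made-up pair of declarations, but typical of what one finds):

	:- op(700, xfx, ==>).
	:- op(600, xfy, &).

	% now  a & b ==> c & d  reads as  (a & b) ==> (c & d),
	% and  a & b & c        reads as  a & (b & c),
	% but only to someone who has those priorities and types in mind.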

Careful use of a user defined operator can increase readability. But the
potential for abuse is large. Languages that are being actively used by
business and government are evolving into languages that strongly "encourage"
programmers to write easily maintainable code. For example, it's tough to get
an Ada program to compile. But once it does compile, it tends to be more
maintainable than something typically written in K&R's C or BASIC. 

Some complain that this trend makes it more difficult to write quick
programs. But that's not the interest of people who increasingly have to bet
their business or their country on correct, maintainable, software. If PROLOG
is to be accepted by business and government, it had better emphasize good
software engineering methodology (at the expense of convenience).

If user defined operators are such a good idea, how come other languages
are not incorporating them into their standards?

>Newsgroups: comp.lang.prolog
>Path: decwrl!sun!quintus!ok
>Subject: Re: BSI standards
>Posted: 10 Mar 88 05:47:53 GMT
>Organization: Quintus Computer Systems, Mountain View, CA

>Hmm.  The difficulty of writing a Prolog parser in a low-level language
>(let's say "C", for argument's sake) is largely a matter of the skill of
>the programmer.  The operator ambiguities in Prolog *do* cause problems
>(though not nearly as many as people think!), but *not* user-defined
>operators as such.  I've written a couple of Prolog parsers in C, and
>the existence of user-defined operators as such doesn't complicate the
>parser, no, not one little bit.  And I'll back mine against yours for 
>speed.

You have a precedence parser for PROLOG with user defined operators that can
run as fast and be as small as a one-pass recursive descent or LALR(1) parser
on a PROLOG without user defined operators? Is this code public domain?
I'd like to look at it.
 
>What do you want BNF for?  A DCG for Prolog is every bit as clear as BNF
>and has no trouble with user-defined operators. 
 
>This is complicated?  If you don't
>like the grimoire's NIH definition style, use an attribute grammar;
>it comes to the same thing.

Look in the back of a C, BASIC, Pascal, FORTRAN, PL/I, etc., manual. You can
generally find a simple BNF and/or syntax diagram describing the language. It
doesn't require special non-standard BNFs (ones with extra lines describing
priority and associativity), nor does it require DCGs with explicit
definitions of priority and associativity, nor does it require attribute
grammars. Operator precedence and associativity in the above languages are
implicit in the BNF or diagram. Why can't we have a simple syntax that the
larger world can understand via traditional means of specifying syntax? Or
is this to be a standard which is fathomable only by the initiated?

If there were no user defined operators allowed, then traditional means of
specifying syntax would work just fine. I doubt the mountain of programmers in
this world will come to Mohammed. 

RE: the need for strings in PROLOG.

I agree PROLOG must be able to communicate with other languages. But that is
what a calling standard is for. And conversion of an atom to/from a string
should be isolated to the call of the foreign language. There should be no
string type visible to the PROLOG programmer. There's no reason why
conversions can't be handled automatically by the PROLOG compiler via glue
routines. If strings are not visible to the programmer, then there's no need
for them in the standard.
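
Something like the following (a hypothetical declaration syntax, just to show
the idea) is all the programmer should ever see; the compiler-generated glue
does the atom-to-C-string conversion in both directions:

	% hypothetical foreign-language declaration: the arguments are atoms
	% on the PROLOG side and C strings on the C side; the conversion
	% happens in generated glue code, not in user code
	:- foreign(getenv, c, getenv(+atom, -atom)).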

But I suspect the real reason the BSI are putting strings in the standard is
that they want to manipulate atoms like strings without having to expand them
to lists, but they feel that it is too slow to assert/retract new atoms from
symbol tables. This is because most implementations never garbage collect the
symbol tables; they use hash tables, which would have to be extended more
frequently; and they don't keep a reference count for each string in the symbol
table to know when it can be deleted, i.e., they are letting their
implementation details drive the standard. Speaking from experience, I can say
that it is easy to add string functions that operate on atoms and efficiently
assert/retract them from the symbol tables. I resent the idea of having to add
them to my PROLOG (if I want to conform to a standard) because others don't
wish to garbage collect their symbol tables!
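
For the flavour of it, here is a string-style operation written entirely over
atoms (a sketch only, using the old name/2 and a local append/3; a real system
would supply this as a single primitive, but note that no string type is
visible anywhere):

	% concatenation of two atoms via their character-code lists
	concat_atoms(A, B, C) :-
		name(A, As),
		name(B, Bs),
		append(As, Bs, Cs),
		name(C, Cs).	% caveat: name/2 turns an all-digit code list
				% into a number; a real primitive would not

	append([], L, L).
	append([H|T], L, [H|R]) :- append(T, L, R).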

My symbol table on my Mac PROLOG is an AVL tree (with a reference count). All
atoms are simply pointers to the strings in the tree. Often, the newly created
atom becomes part of a clause that is asserted. So, when backtracking happens, I
simply decrement the reference count. Yes, this is slower than just moving the
pointer to the top of the heap back. But I doubt most _PROLOG_ programs spend
their time doing string functions, i.e., asserting/retracting from the symbol
table is infrequent. I think it's more important to have a simple, consistent
language than to have the fastest possible language.

George Van Treeck
Digital Equipment Corporation

These are my opinions and not necessarily those of my employer.

ok@quintus.UUCP (Richard A. O'Keefe) (03/12/88)

In article <8803112331.AA09995@decwrl.dec.com>, vantreeck@curie.dec.com writes:
> Of course user defined operators can be useful for things like prototyping.
> [other stuff deleted]
> If user defined operators are such a good idea, how come other languages
> are not incorporating them into their standards?

To paraphrase his argument, "if it's so great, why doesn't ADA do it"?
Interestingly enough, ADA and C++ _do_ support overloaded operators,
which has at least as much potential for causing maintenance problems
as user-defined operators.

If I were designing Prolog, I wouldn't have user-defined operators in
the language either.  (It was Bill Clocksin who suggested this to me.)
So what?  Prolog ALREADY EXISTS!  It was designed years ago.

The fundamental question is "what is a standard FOR"?
As far as I am concerned, the purpose of a standard is not to advance
the state of the art, not to force the very latest word in straitjackets
onto people, not to pander to my ego by designing the glossiest language
on the block, but to protect the interests of USERS of the language, by
providing as clear as possible a description of the language so that
(a) someone who writes a program in the language can be confident about
    what it _should_ do
(b) someone who wants to run his program on another machine or with another
    vendor's product can be confident that if his program uses only
    standard constructions and if the vendor's claims to follow the
    standard are truthful his program will require minimal changes

To adapt an old joke:
    I am enamoured of the existing defects of Prolog,
    the BSI committee want to replace them by new ones.

I have two examples of what I think are good standards:
 1. The Pascal standard (particularly "A Model Implementation...").
 2. The draft ANSI C standard.

> You have a precedence parser for PROLOG with user defined operators that can
> run as fast and be as small as a one-pass recursive descent or LALR(1) parser
> on a PROLOG without user defined operators?

I repeat, user-defined operators as such do *NOTHING* to the cost of
parsing.  There are enough built-in operators in Prolog that it is
cleaner to look them up in a table than to wire them into the control
structure of your program (disgusting) or use a giant LALR table.
Of _course_ my code is as fast as a recursive-descent parser,
it _IS_ a recursive-descent parser (plus an operator-precedence level).

> Is this code public domain?
You must be joking.  But I gave half the secret away in a previous message.
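
But the shape of the thing is no secret.  Here is a toy version (a sketch,
not the Quintus parser: a token list in, numbers and non-operator atoms as
the only primaries, infix operators only, and a made-up table
infix(Op, Priority, LeftMax, RightMax) standing in for the real one).  The
only place a user-defined operator shows up is as a row in that table:

	% primaries: numbers, and atoms which are not in the operator table
	primary([num(N)|Ts], num(N), Ts).
	primary([atom(A)|Ts], atom(A), Ts) :- \+ infix(A, _, _, _).

	% parse(Tokens, MaxPriority, Tree, Rest): recursive descent for the
	% primaries, an operator-precedence climb for the operators
	parse(Ts0, Max, Tree, Ts) :-
		primary(Ts0, Left, Ts1),
		infix_loop(Ts1, 0, Max, Left, Tree, Ts).

	% infix_loop(Tokens, PriorityOfLeft, Max, Left, Tree, Rest)
	infix_loop([atom(Op)|Ts0], LP, Max, Left, Tree, Ts) :-
		infix(Op, P, LMax, RMax),
		P =< Max,		% operator fits under the current limit
		LP =< LMax,		% left argument's priority is acceptable
		!,
		parse(Ts0, RMax, Right, Ts1),
		infix_loop(Ts1, P, Max, app(Op,[Left,Right]), Tree, Ts).
	infix_loop(Ts, _, _, Tree, Tree, Ts).

	% two rows of the kind 500 yfx + and 200 xfy ^ would produce;
	% op/3 adds or changes rows like these, and nothing else changes
	infix(+, 500, 500, 499).
	infix(^, 200, 199, 200).

Call parse([num(1),atom(+),num(2),atom(+),num(3)], 1200, T, []) and you get
the left-associated app(+,[app(+,[num(1),num(2)]),num(3)]); change the table
and the shapes change, but not one clause of the parser does.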

> Look in the back of a C, BASIC, Pascal, FORTRAN, PL/I, etc., manual. You can
> generally find a simple BNF and/or syntax diagram describing the language.

Now I **KNOW** he's joking!  The Pascal grammar is huge.  The C grammar is
not only far too large, it's far too confusing.  How many people knew that
	unsigned long x;
is legal, but that
	long unsigned x;
has always been supposed to be illegal (though some compilers accept it)?
Have you ever read the PL/I standard?  It is _most_ illuminating.  How do
you get a simple PL/I grammar?  You lie.  More generally, the typical C,
Pascal, Fortran &c grammar or syntax diagram over-generates grossly;
there are additional context conditions which simply aren't in the grammar.
For example, in an old PL/I manual I have, the rules for "DEFINED" run to
six pages, including two tables and a flow-chart.

> Why can't we have a simple syntax that the
> larger world can understand via traditional means of specifying syntax?

Because the traditional means LIE.

We're not talking about a back-of-the-manual grossly-over-generating
sketch of a language.  Prolog has one of those, and arguments and all it
is half a page.  We're talking about the COMPLETE, FINAL, AUTHORITATIVE
DEFINITION of a language in a STANDARD.  Again, I suggest that a fairer
comparison is not with the lies in the back of a PL/I manual, but with
the actual PL/I standard.  Take a look at Algol 68.  Just about the only
language I know of which was actually given a formal definition that a
mortal has any hope of reading is Turing.  I dislike the language, but
the definition is great.

> If there were no user defined operators allowed, then traditional means of
> specifying syntax would work just fine.

You mean operator precedence isn't traditional?

Van Treeck has missed the biggest difference between Prolog and Pascal.
No disgrace:  the BSI committee seem never to have seen it either.  (By
the way:  don't imagine that I think I'm criticising Chris Moss.  The
BSI committee has been obstinately determined on that kind of grammar
since its inception, despite having been warned in December 1984 that
the approach couldn't possibly work.  I believe Chris Moss's contribution
to have been the method of presentation, which is admirably lucid.)

The point is this:  a language like Pascal or Turing is defined by saying
what a PROGRAM looks like and what a PROGRAM means.  But there isn't any
such animal as a Prolog program.  Prolog is the same kind of language as
Lisp:  you don't see grammars for "Lisp programs"!  What you see in
"Common Lisp, the Language" is a definition of what the language primitives
are (including "read") and a definition of how data structures are
interpreted as code.

The basic point which the BSI grimoire completely misses is that I may
want to use read/1 for other things than Prolog clauses.  I may want to
read some kind of circuit description language where
	r: p -> q.
means that there is a resistor connecting node p to node q.  Why not?
A grammar for Prolog programs which rigidly defines p->q to mean identically
the same thing as (p->q;fail) -- and the grimoire is just such a grammar --
is going to render read/1 utterly useless for this task.

There is a second basic point which the BSI grimoire also misses.
Telling me that consulting a file containing ... p->q ... will give
me sys(ifthenelse,[p,q,true]) -- the interpretation of which is _NOT_
given in the grimoire -- doesn't tell me what I am to do if I want to
construct such a term at run-time.

Let me give you another example of why it is useless to try to give a
grammar of the Pascal/ADA/BSI variety for real Prolog.  term_expansion/2.
DEC-10 Prolog has a built-in predicate expand_term/2, which takes care of
grammar rules.  The basic loop is

	see(File),
	repeat,
	    read(Term),
	    expand_term(Term, Clause),
	    (   Clause == end_of_file, !
	    ;   process(Clause), fail
	    ),
	/* ! done above */
	seen.

C Prolog added a user-definable hook term_expansion/2 to expand_term/2,
so that you can plug in your own translation rules.  Quintus Prolog
retains this feature.  It is extraordinarily useful.  For example, I
have used this hook to make

	rev(L) = rev(L, []).

	rev([], L) = L.
	rev([H|T], L) = rev(T, [H|L]).

acceptable as Prolog source code.  These things are read as terms, and
term_expansion/2 turns them into rather different clauses.
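
A hedged sketch of that hook, to make it concrete (this is not the version I
actually use, and it handles only the simple cases above: the result is either
a data term or a single call, never a nested one):

	% (Head = Result) definitions are rewritten into ordinary clauses
	% with one extra argument carrying the result
	term_expansion((Head0 = Result), (Head :- Body)) :-
		add_result_arg(Head0, R, Head),
		expand_result(Result, R, Body).

	% a variable, an atomic term, or a list cell is just data: return it
	expand_result(Result, Result, true) :-
		( var(Result) ; atomic(Result) ; Result = [_|_] ), !.
	% anything else is taken to be one call in the same style, so it
	% also gets the extra result argument
	expand_result(Call0, R, Call) :-
		add_result_arg(Call0, R, Call).

	add_result_arg(Term0, R, Term) :-
		Term0 =.. [F|Args0],
		append(Args0, [R], Args),
		Term =.. [F|Args].

	append([], L, L).			% the usual append/3
	append([H|T], L, [H|R]) :- append(T, L, R).

With that loaded, the three definitions above come out as
rev(L,R) :- rev(L,[],R), then rev([],L,L) :- true, then
rev([H|T],L,R) :- rev(T,[H|L],R), which is the reading I wanted.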

The grimoire doesn't leave these terms undefined.  It states clearly
and unambiguously that they *MUST* be treated as equivalent to

	rev(L) = rev(L, []) :- true.

and so on.  This is not because the grimoire is badly done, but because
it is a grammar for Prolog *code*, not a grammar for Prolog *data*.

> But I suspect the real reason the BSI are putting strings in the standard is
> that they want to manipulate atoms like strings without having to expand them
> to lists, but they feel that it is too slow to assert/retract new atoms from
> symbol tables.

I suspect this too.  In fact I know for a fact that one outfit represented
on the committee has exactly this motive.  But from my perspective, the
things you happen to like and the things I happen to like aren't all that
important.  Having argued that the point of a Prolog standard is to
protect the interests of people who are already using Prolog by standardising
the language we already have, I find that strings are a de facto part of
Prolog.  There are a lot of Prolog systems in daily use which have a string
data type.  The fact that I may think Prolog programmers who _use_ strings
are, um, not as skilled as they suppose, isn't a good enough reason for me
to break their programs.  Nobody asked me what Prolog should be like.
Nobody asked the BSI Prolog group either.  If I went over to someone using
Arity Prolog and told him "I think strings are so disgusting that none of
your programs should be allowed to run", he'd tell me where to get off.
It's the same with user-defined operators:  I don't believe that Steve
Hardy ever asked anyone on the Prolog committee "please design a new
version of Prolog for me which will break my programs."  For better or
worse, user-defined operators are part of most Prologs, and strings are
part of too many Prologs, and the BSI has no more of a divine mandate to
rubbish anyone else's work than I have.

dnm@mulga.oz (David Morley) (03/17/88)

In article <8803112331.AA09995@decwrl.dec.com> vantreeck@curie.dec.com writes
to Richard O'Keefe:
>You have a precedence parser for PROLOG with user defined operators that can
>run as fast and be as small as a one-pass recursive descent or LALR(1) parser
>on a PROLOG without user defined operators?

It is not difficult to modify SLR(1) or LALR(1) parser generators so that
they accept user definable operators.  I am working on a generalisation
of LR-style parser generating algorithms which makes this easy.  For
example, the modified SLR(1) parser generator can be given a DCG such as:

term(Prec) --> [fy_op(Prec)], term(P1), { Prec>=P1 }.
term(Prec) --> [fx_op(Prec)], term(P1), { Prec>P1 }.
term(Prec) --> term(P1), [yf_op(Prec)], { Prec>=P1 }.
term(Prec) --> term(P1), [xf_op(Prec)], { Prec>P1 }.
term(Prec) --> term(P1), [yfx_op(Prec)], term(P2), {Prec>=P1, Prec>P2}.
term(Prec) --> term(P1), [xfy_op(Prec)], term(P2), {Prec>P1, Prec>=P2}.
term(Prec) --> term(P1), [xfx_op(Prec)], term(P2), {Prec>P1, Prec>P2}.
term(0) --> simple_term.

and automatically churn out a deterministic shift-reduce parser for the
language (ignoring the fy500 atom yf500 conflicts, etc.).  No need to get
your hands dirty hacking YACC to shreds.

(By the way, if you hate DCG's, just think of a DCG as a finite
representation of an infinite BNF grammar).

As for a parser for PROLOG with all the bells and whistles, I'm working
on it, but don't hold your breath (my interest in this field is
improving automatic parser generation so that ambiguity can be resolved
in the grammar rather than by hacking the parser generator algorithm a la
YACC).  <-: By the way, does anyone want user definable brackets? :->

The only reason I haven't said anything about the BSI standards debate
before is that ROK says everything I want to say (and more) better than I can.
I hope the committee takes note.

	David Morley