[net.lang.prolog] PROLOG Digest V3 #25

RESTIVO@SU-SCORE.ARPA (05/24/85)

From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>


PROLOG Digest            Friday, 24 May 1985       Volume 3 : Issue 25

Today's Topics:
                   Query - Qualitative Reasoning,
                 Announcement - Seminar (Stanford),
                  Implementation - '%' Notation,
                LP Library - Berkeley PLM Benchmarks
----------------------------------------------------------------------

Date: Mon, 20-May-85 09:52:10 PDT
From: (David Sherman) pesnta!dave@UCB-Vax
Subject: Suggestions needed for tax rules

I am trying to design a system which will apply the rules of
the Income Tax Act (Canada) to a set of facts and transactions
in the area of corporate reorganizations.

This is not a typical "AI and law" problem, because the Income
Tax Act is highly technical and extremely specific about what
rules apply and when. To the extent there is "open texture", or
issues which require legal judgment (e.g., whether an amount
is "reasonable"), I assume the lawyer using the system will
provide the judgment as an input fact. The problems are therefore
quite different from those addressed by McCarty@Rutgers' TAXMAN
program, because the approaches of the Income Tax Act and the
Internal Revenue Code are very different.

The difficulty in programming the system is simply the complexity
of the Income Tax Act. On a given transaction, a large number of
rules have to be examined to determine possible tax effects. Some
of these rules create new facts (e.g., a deemed dividend or a deemed
disposition). The passage of time is very important: steps happen
in a particular sequence, and the "state of the world" at the moment
a step is taken is crucial to determining the tax effects. In the
context of corporate transactions, this state includes such things
as who controls a corporation; who owns what assets; the residence
and status of and relationships among taxpayers; cost bases and
proceeds of disposition; and so on.

(My background to this: I'm a tax lawyer and an experienced C
programmer. I'm doing this work towards an LL.M. thesis.)

I've tried several approaches to this field before. Last year
I did a small version in C which uses an event-driven simulation,
and as it encounters each event calls a function for each rule in
the database to generate new events and tax results. (Incidentally,
if anyone wants a copy of that paper, "Towards a
Comprehensive Computer-Based Problem Solving Model of the
Income Tax Act: A Suggested Approach and Implementation of
Examples from Corporate Reorganizations", let me know.)

One of the problems with the C implementation was designing the
order in which the rules should be applied. For even the small subset
I implemented, this was awkward and difficult.

I came across Prolog at an ICS course on expert systems a few
weeks ago. It looks like the perfect tool for much of what I want
to do. Besides the workshops at the course (taught by sdcsvax!vis!greg),
I've now read Clocksin & Mellish and skimmed through How To Solve
It In Prolog.

The definitional stuff is fine. Using C-Prolog 1.4 on a VAX (not this
machine), I've already coded the Income Tax Act's definitions for
things like "resident", "private corporation", etc.
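
To give the flavour (the condition names below are simplified, and
the real definitions carry more tests than this), "private
corporation" comes out as an ordinary rule:

    % simplified sketch -- the actual clause tests more conditions
    private_corporation(Corp) :-
        corporation(Corp),
        resident_in_canada(Corp),
        \+ public_corporation(Corp),
        \+ controlled_by_public_corporations(Corp).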

The problem I have (if you've read this far, thank you!) is how
to deal with _time_. I suppose the overall model I need to work with
is one of "changing states", where a transaction (e.g., X transfers
property to corporation Y in exchange for shares in Y) changes the
world from state A to state B, and the rules can examine differences
between the states to determine the effects. But then how does that
jibe with definitional tests which may need to look back in time?
For example, a corporation is resident in Canada in a given year if
it meets certain conditions and it was resident in Canada, or
carried on business in Canada, in the previous year. This is a
perfect recursive definition for Prolog, and I've implemented the
rule, but only as a fixed definition, not as part of a "changing
state" system.

I guess part of my problem is that I'm not working in an
AI department, or even a Computer Science department, and
so I don't have knowledgeable people to bounce my ideas off.
If you have any suggestions as to approaches I should be taking,
I'd greatly appreciate hearing from you. I'd be happy to send you
a copy of the code I've written so far to explain what I mean.
(You could even learn all about Canadian tax law!)

-- David Sherman
   The Law Society of Upper Canada
   Osgoode Hall
   Toronto, Canada  M5H 2N6
   (416) 947-3466

------------------------------

Date: 23 May 85  0000 PDT
From: Yoni Malachi <YM@SU-AI.ARPA>
Subject: Special seminar

Thursday 6-6-85, 11am in MJH 352

                FUNCTIONAL PROGRAMMING AND THE LOGICAL VARIABLE

                                Gary Lindstrom
                        Department of Computer Science
                              University of Utah
                          Salt Lake City, Utah 84112

Logic programming offers a variety of computational effects
which go beyond those customarily found in functional
programming languages.  Among these effects is the notion of
the "logical variable," i.e. a value determined by the
intersection of constraints, rather than by direct binding.
We argue that this concept is "separable" from logic
programming, and can sensibly be incorporated into existing
functional languages.  Moreover, this extension appears to
significantly widen the range of problems which can
efficiently be addressed in function form, albeit at some
loss of conceptual purity.  In particular, a form of
side-effects arises under this extension, since a function
invocation can exert constraints on variables shared with
other function invocations.  Nevertheless, we demonstrate
that determinacy can be retained, even under parallel
execution.  The graph reduction language FGL is used for
this demonstration, by being extended to a language FGL+LV
permitting formal parameter expressions, with variables
occurring therein bound by unification.  The determinacy
argument is based on a novel dataflow-like rendering of
unification.  In addition the complete partial order
employed in this proof is unusual in its explicit
representation of demand, a necessity given the "benign"
side-effects that arise.  An implementation technique is
suggested, suitable for reduction architectures.
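
[ As a small illustration of the "logical variable" -- a value
  pinned down by the intersection of constraints rather than by one
  direct assignment -- the following Prolog fragment (not from the
  abstract) shows two goals each contributing a partial constraint
  on the shared variable P:

        first_slot_is_one(pair(1, _)).
        second_slot_is_two(pair(_, 2)).

        ?- first_slot_is_one(P), second_slot_is_two(P).
        P = pair(1, 2)

  Neither goal binds P outright; unification intersects the two
  partial structures to produce the final value. ]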

------------------------------

Date: Sun, 19 May 85 03:22:53 cdt
From: (Raghu Ramakrishnan) Raghu@UT-sally
Subject: '%' notation

Examples 1 and 2 discuss design decisions.

Example 1

This example discusses why unification with an unannotated
term or variable in the call should be defined to fail.
Suppose it is not so defined.

                Call:    Equivalence(Y, Y)
                Clause:  Equivalence(5, X%) <---  ...

Clearly, the two Y's equivalence X to 5, and a
left-to-right unification will succeed with X being
instantiated to 5. However, a right-to-left unification
will instantiate Y to X, with or without preserving the '%'
annotation depending on our definition of '%'.

If both occurrences of Y are annotated with '%', unification
will suspend until some other process instantiates Y regardless
of the order of unification, consistently yielding the semantics
we desire. []

Example 2

This example emphasizes the fact that no assurance is given
as to which process instantiates the variables in a
%-annotated term.

             Call:    No_guarantee([X|Y]%, X, Y)
             Clause:  No_guarantee(Z%, 5, 6) <---  ...

Unification always succeeds, instantiating X to 5 and Y to
6. The only fact guaranteed by the %-annotation, that the
argument corresponding to Z will be instantiated to a term
by some other process, has been satisfied trivially by
virtue of the corresponding argument being a term [X|Y].
This term, moreover, is %-annotated, thus indicating that
this call is to be matched with a clause that expects to
find its first argument instantiated to a term; and since
this is precisely what the given clause expects,
unification succeeds. []

Examples 3 - 5 show why the syntactic restrictions are
necessary.

Example 3

This example shows why variables in the clause head must
be unique.

        Call:    Sneaky_Equivalence([X|Y%], [X|5%])
        Clause:  Sneaky_Equivalence(Z, Z) <---  ...

Left-to-right unification will succeed with Y being
instantiated to 5.  Right-to-left unification will suspend
until some other process instantiates Y, and will then try
to unify it with 5. Note that %-annotating the two Z's and
the corresponding terms would still yield the same undesirable
behaviour. The problem lies in the fact that the Z's
equivalence  Y with 5.  It might be argued that this
process will in any case fail if Y is not eventually
instantiated to 5, and there is no harm in letting this
failure be discovered later. The flaw lies in the fact that
there might be several alternative clauses (in other words,
potential process descriptions). The call, as we observed
earlier, is the specification of a needed process, and
this entire exercise of matching with the clause heads and
trying to satisfy the guards is intended to find a process
template (one of the clauses) which meets the
specifications. If, for instance, we assume an empty guard,
the choice is made once the unification of the call with a
clause head succeeds. The above order-dependent behaviour
could thus lead to the call reducing to a process that
fails or one that succeeds. Such a possibility is inherent
in any committed-choice logic programming language, but it
should at least be independent of things like the order of
unification! The fact that this objective is compromised by
some extra-logical features should not be  interpreted as
licence to design other extra-logical features with the same
deficiency. []

Example 4

This illustrates why variables in a call must be annotated
consistently.

                    Call:    Fickle(X%, X)
                    Clause:  Fickle(Y%, 5) <---  ...

Left-to-right unification will suspend until some other
process instantiates X but right-to-left unification will
succeed with X and Y instantiated to 5.  If both
occurrences of X in the call are %-annotated, this problem
does not arise since unification always suspends until X is
instantiated by another process. []

Example 5

If a term T in a clause head contains a %-annotated subterm
T1, then every sub-term of T, including T itself, must be
%-annotated. This is because the call could equivalence two
terms in the head, as in this example.

            Call:    Machiavelli(Z, Z)
            Clause:  Machiavelli([5|Y%], [5|6%]) <---  ...

Left-to-right unification will suspend whereas
right-to-left unification will succeed with Y instantiated
to 6. However, if both the terms in the clause head as well
as the two Z occurrences are %-annotated, unification will
always suspend until Z is instantiated by another process,
hopefully to some term of the form [_|U%], where the
underscore stands for an arbitrary term or variable, and U
is either '6' or a variable. At this point, unification
will again suspend if U is a variable. []

The following two examples illustrate the programming power
of '%'.

Example 6

This is an implementation of Quicksort in CP with the
%-annotation. The logic is exactly as in Shapiro's original
program.

quicksort(Unsorted%, Sorted) <---  qsort(Unsorted%, Sorted-[]).

qsort([X|Unsorted]%, Sorted-Rest) <---
        partition(Unsorted%, X%, Smaller, Larger),
        qsort(Smaller%, Sorted-[X|Sorted1]),
        qsort(Larger%, Sorted1-Rest).

qsort([]%, Rest-Rest).

partition([X|Xs]%, A%, Smaller, [Y|Larger]) <---
        A < X, X = Y | partition(Xs%, A%, Smaller, Larger).

partition([X|Xs]%, A%, [Y|Smaller], Larger) <---
        A >= X, X = Y | partition(Xs%, A%, Smaller, Larger).

partition([]%, _%, [], []).

While the logic is the same as in Shapiro's program, the
above version has one important difference. Each clause
describes precisely what annotations it expects of its
arguments. This information is available without looking
at any of its calls, and so each clause can be understood
by considering it statically, with the assurance that
nothing unexpected will happen at run-time due to some
annotations that are passed on during execution. []
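
For instance (this call does not appear in the original program,
and is only meant to show how the annotations line up), a
top-level invocation would be written as

             Call:    quicksort([2,1,3]%, Sorted)

The %-annotated list in the call matches the %-annotated variable
Unsorted in the clause head, each body goal re-annotates the
sub-lists it passes down, and the process network eventually
instantiates Sorted to [1,2,3].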

Example 7

An important feature of CP is the ability to pass partially
instantiated data structures, which can then be filled in
by the receiver and thus provide a response for the sender.
The following example, again adapted from Shapiro's paper,
demonstrates that this feature is retained.

stack(S%) <--- stack(S%, []).

stack([pop(X)|S]%, [Y|Xs]) <---  X = Y | stack(S%, Xs).

stack([push(X)|S]%, Xs) <---  stack(S%, [X|Xs]).

stack([]%, []).

This program receives a stream of push and pop instructions
and services them, putting the response in the variable X.
It initialises itself using the first clause. []
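
To make the partially instantiated structure concrete (the call
below is illustrative and does not appear in Shapiro's paper), a
client could send

             Call:    stack([push(a), push(b), pop(X), pop(Y)]%)

The stack process consumes the two pushes and then fills in the
pop responses, instantiating X to b and Y to a; the client reads
the replies out of the very term it sent.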


-- Raghu

------------------------------

Date: Wed, 22 May 85 10:47:11 pdt
From: (Tep Dobry) Tep%ucbdali@Berkeley 
Subject: The Berkeley PLM Benchmarks


     At the Warren Abstract Machine Workshop a few weeks ago
I was asked to publish the set of benchmark programs I've been
using on my simulator for the Berkeley Prolog Machine (PLM).
I've finally got them all collected together in Prolog form
(CProlog) and have sent them to the Digest.  They're really
too big to just publish in the Digest, so they are being set
up in a directory in the PROLOG directory at SU-SCORE.  There
are 11 files with a total of 400 lines.  Since our machine is
based on compiled Prolog, the top level queries are also
compiled in, generally as the predicate main/0.

     The benchmarks were primarily chosen to exercise all of
the features of the PLM, not for any complexity of
programming.  About half of them come from Warren's thesis,
and the others we've added here.  Our original performance
figures were based on simulations of hand-compiled versions
of these benchmarks.  We are currently looking for larger,
more complex benchmarks to run on the hardware when it is
available.  So I'd be interested in seeing large benchmarks
sent to the Digest.

-- Tep Dobry 

[ the code for these benchmarks is available from the SCORE
  PROLOG library under the subdirectory PS:<Prolog.BM> ]

------------------------------

End of PROLOG Digest
********************