[comp.lang.prolog] PROLOG Digest V5 #97

PROLOG-REQUEST@SUSHI.STANFORD.EDU (Chuck Restivo, The Moderator) (12/14/87)

PROLOG Digest            Monday, 14 Dec 1987       Volume 5 : Issue 97

Today's Topics:
     Announcement - Call for Kitchen Sink & Fellowship Available,
                         Theory - Impediments
----------------------------------------------------------------------

Date: 10 Dec 87 23:28:09 GMT
From: munnari!mulga!isaac@uunet.uu.net  (Isaac Balbin)
Subject: Call For Papers

                           Call for Papers
            International Computer Science Conference '88
                Artificial Intelligence: Theory and Applications

International Computer Science Conference '88 is to be the first
international conference in Hong Kong devoted to computer science.
Its purpose is to bring together people from academia and industry,
East and West, who are interested in problems related to computer
science.  The main focus of the conference will be the theory and
applications of Artificial Intelligence.  We expect the conference to
provide a forum for sharing research advances and practical
experiences among those working in computer science.

Topics of interest include, but are not limited to:

     AI Architectures       Expert Systems        Knowledge Engineering
     Logic Programming      Machine Learning      Natural Languages
     Neural Networks        Pattern Recognition   Robotics
     CAD/CAM                Chinese Computing     Distributed Systems
     Information Systems    Office Automation     Software Engineering

Paper Submissions

Submit four copies of the paper by June 15, 1988 to either of the Program
Co-Chairmen:

     Dr. Jean-Louis Lassez
     Room H1-A12
     IBM Thomas J. Watson Research Center
     P.O. Box 218
     Yorktown Heights, NY 10598
     U.S.A.
     e-mail: JLL@ibm.com

     Dr. Francis Y.L. Chin
     Centre of Computer Studies and Applications
     University of Hong Kong
     Pokfulam Road
     Hong Kong
     e-mail: hkucs!chin@uunet.uu.net
     (for papers from the Pan-Pacific region only)

The first page of the paper should contain the author's name,
affiliation, address, electronic address if available, phone number,
a 100-word abstract, and key words or phrases.  Papers should be no
longer than 5000 words (about 20 double-spaced pages).  The paper
should be accompanied by a submission letter containing a commitment
to present the paper at the conference if it is accepted.

Tutorials

The day after the conference will be devoted to tutorials. Proposals
for tutorials on Artificial Intelligence topics, especially advanced
topics, are welcome. Send proposals by June 15, 1988 to the Program
Co-Chairmen.

Conference Timetable and Information

     Papers due: June 15, 1988
     Tutorial proposals due: June 15, 1988
     Acceptance letters sent: September 1, 1988
     Camera-ready copy due: October 1, 1988

     In cooperation with:

     Center for Computing Studies and Services, Hong Kong Baptist College
     Centre of Computer Studies and Applications, University of Hong Kong
     Department of Computer Science, The Chinese University of Hong Kong
     Department of Computer Studies, City Polytechnic of Hong Kong
     Department of Computing Studies, Hong Kong Polytechnic

------------------------------

Date: 8 Dec 87 02:07:57 GMT
From: munnari!mulga!rwt@uunet.uu.net  (Rodney Topor)
Subject: Research Fellowship Available


Applications  are  invited  for a Research Fellow (Grade 2) to work on
the Machine Intelligence Project in the Department of Computer Science
at the University of Melbourne.

This internationally recognized project is performing research into
knowledge-based systems, databases, logic programming, parallel
computing, and programming environments.  Project facilities include
a network of SUN workstations and access to other departmental
computers and networks.  The project is supported by the ARGS and
DITAC.

Applicants should have a PhD or equivalent qualifications and research
experience in appropriate areas.  The appointee will be expected to
perform independent and joint research in any one or more of the
project research areas.

The period of appointment is from 1 January 1988 to 31 December 1988,
with an expectation of renewal in 1989, and possible subsequent
renewals.

Salary: A$28,380 to A$37,122 per annum depending on qualifications and
experience.

Mail  enquiries  to  Dr K.R. Rao  (rao@mulga.oz.au)  or  Dr R.W. Topor
(rwt@mulga.oz.au).

Applications quoting Position Number 622A845 and including full
personal and academic details and the names and addresses of two
referees should be sent to the Director, Personnel Services,
University of Melbourne, Parkville, Vic. 3052, Australia.  Further
information about the application procedure and conditions of
appointment is available from Mr Butters, +61 3 344 7546.

Closing date: 18 December 1987.

------------------------------

Date: Fri, 11 Dec 87 12:21:34 PST
From: narain%pluto@rand-unix.ARPA
Subject: Reply to O'Keefe


Results about the complexity of proof procedures may not be relevant
to Prolog, but they are certainly relevant to SLD-resolution, which
was born in the world of theorem proving.  If complexity results that
apply to general resolution also apply to it, what then is its
importance?  Kowalski pointed out that its importance is its
**procedural interpretation**.
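
(An illustration, not from the original posting: under the procedural
reading, a clause such as

        grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

is not only the assertion "X is a grandparent of Z if X is a parent of
some Y who is a parent of Z"; it is also the procedure "to solve
grandparent(X, Z), solve parent(X, Y) and then parent(Y, Z)".
SLD-resolution with Prolog's computation rule executes exactly this
left-to-right, top-down reading, whereas general resolution fixes no
such order.)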

Well, why doesn't general resolution have a procedural interpretation?
I believe it is because its behavior is more difficult to visualize
and predict than that of SLD-resolution.  Would any proponents of
programming in full first-order logic care to comment?

For similar points, see J.A. Robinson's introduction in the first
issue of the Journal of Logic Programming, and "Logic Programming
with Equations" by M. van Emden and K. Yukawa, Tech. Report CS-86-05,
University of Waterloo.

-- Sanjai Narain

------------------------------

Date: Sun, 13 Dec 87 00:19:35 PST
From: quintus!ok@Sun.COM (Richard A. O'Keefe)
Subject: Theoretical question

I've had a little bit of mail in answer to this.

The method for generating a random permutation of a list is very
direct.  Oddly enough, it looks like quick-sort, which is a pretty
appalling sorting algorithm because it cannot guarantee an even
partition; here, however, we can force one.

random_permutation(A, B) :-
        same_length(A, B, N),           % library pred: B is N fresh variables
        random_permutation(N, A, B, []).

% random_permutation(N, A, B0, B): B0\B is a random permutation of
% the N-element list A, built by permuting a random half of A into
% the front and the complementary half into the back.
random_permutation(0, [], B, B).
random_permutation(1, [X], [X|B], B).
random_permutation(N, A, B0, B) :-
        N >= 2,
        P is N >> 1,                    % P = floor(N/2),
        Q is N - P,                     % Q = ceil(N/2): an even split
        random_partition(P, N, A, X, Y),
        random_permutation(P, X, B0, B1),
        random_permutation(Q, Y, B1, B).

random_partition/5 makes a random partition of A into a subsequence X
having P elements and a complementary subsequence Y having Q = N-P
elements.  That needs no explanation; you'll find the method in Knuth,
and in Bentley, and I sent a C version to comp.sources.  (A sketch
appears below.)  I pass the length N of the list A around for
efficiency; recomputing it would not change the O(N.lgN) bound.
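
Here is a minimal sketch of such a random_partition/5, using the
selection-sampling method (each element is kept for X with probability
P/N, where P and N count the elements still needed and still
remaining).  It assumes a random/1 that binds its argument to a
uniform float in [0.0,1.0); substitute your system's generator.

random_partition(0, _, L, [], L).
random_partition(P, N, [H|T], X, Y) :-
        P > 0,
        random(U),                      % assumed: U uniform in [0.0,1.0)
        (   U * N < P ->                % keep H with probability P/N
            X = [H|X1], Y = Y1, P1 is P - 1
        ;   X = X1, Y = [H|Y1], P1 = P
        ),
        N1 is N - 1,
        random_partition(P1, N1, T, X1, Y1).

With this in place (and same_length/3 from the library), the query

        ?- random_permutation([a,b,c,d], P).

yields each of the 24 orderings with equal probability; one possible
answer is P = [c,a,d,b].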

If you have terms of unbounded length, it should be obvious that
you can APPLY a permutation of [1..N] in O(N) time:

apply_permutation(Permutation, A, B) :-
        length(Permutation, N),
        same_functor(A, B, f, N),       % A, B both have functor f, arity N
        apply_permutation(Permutation, 0, A, B).

apply_permutation([], _, _, _).
apply_permutation([P|Ps], I, A, B) :-
        J is I+1,
        arg(J, A, X),                   % arg/3 is 1-based: read slot J of A
        arg(P, B, X),                   % and send it to slot P of B
        apply_permutation(Ps, J, A, B).
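
For instance, with N = 3 (assuming same_functor/4 unifies both of its
first two arguments with terms of functor f and arity N):

        ?- apply_permutation([2,3,1], f(a,b,c), B).

        B = f(c,a,b)

since element 1 of A is sent to argument 2 of B, element 2 to
argument 3, and element 3 to argument 1.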

Note that it is not feasible to solve for the permutation: for
terms of length N there could be as many as N! permutations to
enumerate.

Even with terms of unbounded length, it is not clear whether a
random permutation of [1..N] can be constructed in O(N) time.
It is easy to do in a conventional language, but any one element
of the array may be changed several times.

An obvious technique is something like this:
        make a term with k*N arguments, for some fixed k
        make a pass over the input, trying to put each
        item into a randomly chosen argument of the term.
        If the argument has already been filled, divert
        the item to an overflow list.

        Read the permutation off the term, ignoring unset
        arguments.

        If the overflow list is empty, stop.
        Otherwise, generate a random permutation of the overflow
        list, and append it to the read-off list.
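
Here is a hedged Prolog rendering of that sketch, using k = 2 slots
per item; it needs var/1 and an unbounded functor/3, i.e. class 3b)
below.  The predicate names are illustrative, and random/1 binding a
uniform float in [0.0,1.0) is again assumed:

random_permutation_3b([], []).
random_permutation_3b([I0|Is], Permutation) :-
        length([I0|Is], N),
        K is 2*N,
        functor(Slots, slots, K),       % K fresh (unbound) arguments
        scatter([I0|Is], K, Slots, Overflow),
        read_off(K, Slots, Placed, []),
        random_permutation_3b(Overflow, Rest),
        append(Placed, Rest, Permutation).

scatter([], _, _, []).
scatter([Item|Items], K, Slots, Overflow) :-
        random(U),
        I is truncate(U*K) + 1,         % random slot in 1..K
        arg(I, Slots, Slot),
        (   var(Slot) ->                % slot still free: claim it
            Slot = Item, Overflow = Overflow1
        ;   Overflow = [Item|Overflow1] % slot taken: divert the item
        ),
        scatter(Items, K, Slots, Overflow1).

read_off(0, _, Placed, Placed).                 % collect the set slots
read_off(I, Slots, Placed0, Placed) :-
        I > 0,
        arg(I, Slots, Slot),
        I1 is I - 1,
        (   var(Slot) ->                        % unset argument: skip it
            read_off(I1, Slots, Placed0, Placed)
        ;   read_off(I1, Slots, Placed0, [Slot|Placed])
        ).

The first item scattered always lands in an empty slot, so the
overflow list shrinks on every round and the recursion terminates.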

The most straightforward approach chips off ~0.63N elements in the
first pass (with k = 1, a fraction 1 - 1/e =~ 0.63 of the items land
in empty slots), leaving ~0.37N for the overflow list, so
        cost(N) = c.N + cost(0.37N)
which has the linear solution cost(N) = (c/0.63).N.  So this gives us
a linear-expected-time algorithm for generating random permutations
if you have terms of unbounded length available.  The worst case
appears to be O(N**2) (it has quicksortitis) but it is very unlikely.
Obviously it is possible to attain a version with an O(N.lgN) worst
case by monitoring the cost so far, and switching to another method
when it reaches N.lgN random number generations.  But is there a more
direct modification with an O(N.lgN) worst case?

Note that this approach requires a hack:
        we have to be able to tell whether a particular argument
        of the working term has been filled in or not.

We can distinguish eight levels of power:

1a) PURE FUNCTIONAL LANGUAGES.
    tuples are created with ground elements.
    tuples are bounded by some constant T characteristic of
    the given program.

1b) PURE FUNCTIONAL LANGUAGES + ARRAY-FORMING OPERATOR.
    tuples are created with ground elements.
    there is no prior bound on the size of a tuple.
    Typically there are functions
        mk-array f n : (int -> *) -> int -> * array
        array-elt a n : (* array) -> int -> *
    satisfying array-elt (mk-array f N) i = f i, for 1<=i<=N

2a) PURE HORN CLAUSE LANGUAGES.
    tuples are created with arbitrary terms as elements.
    there is an arg(N,T,A) operation, but no var/1 or !/0
    tuples are bounded by some constant T characteristic of
    the given program.

    Note that a program in such a language has no way of
    telling whether a particular element has been set or not.

2b) PURE HORN CLAUSE LANGUAGES + ARRAY-FORMING OPERATOR.
    As 2a) + functor(T,F,N), where N has no prior bound.

3a) BOUNDED-TERM PROLOG.
    as 2a) + var/1 or !/0.
    Most Prologs are in this class.
    What you get here are write-once arrays where you can tell
    whether a particular element has been written yet or not
    (a small sketch follows this list).

3b) UNBOUNDED-TERM PROLOG.
    as 2b) + var/1 or !/0.
    I think DEC-10 Prolog was in this class (I never found a bound
    other than memory size, which was NOT large, as Edinburgh never
    had an extended-addressing machine), I think POPLOG is in this
    class, and I have been told that VM/PROLOG is in this class.

    You can emulate conventional arrays in this class; array access
    and update have O(1) expected cost and O(N) worst case cost;
    I haven't analysed the use of this method for permutations yet.
    Also, the emulation doesn't make anything remotely resembling
    logical sense.

4a) BOUNDED-ARRAY IMPERATIVE LANGUAGES.
    Conventional arrays can be made.  Elements can be accessed and
    updated in O(1) average and worst case time, and any element
    can be updated any number of times.  However, the size of an
    array is bounded.

    Most programming languages are in this class.  Or rather, their
    implementations are.  For example, on the B6700, the absolute
    maximum size for an array was 2^20 words.  (An array could have
    arrays as elements, so you could have data structures that were
    larger, but it took more than one step to get to the leaves.)
    I used to use a Fortran compiler where an array could not
    contain more than 2^16 16-bit words, even though the machine
    was supposed to have an address space of 2^28 16-bit words.
    On a typical UNIX system, the size of local arrays in C or
    PASCAL had better be limited!

    If we're really honest about it, even implementations which impose
    no prior restrictions on the size of arrays are not accurately
    modelled by the RAM model for large data structures:  the pointers
    which make these things really trees are there, but they are in
    the operating system's paging tables.

4b) UNBOUNDED-ARRAY IMPERATIVE LANGUAGES.
    4a) + no prior bound on array size other than machine memory.
    This is what most programmers think they've got.
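
As promised under 3a), here is a minimal sketch of a write-once array:
a term whose arguments start out unbound, with var/1 telling us
whether a slot has been written yet.  The predicate names are
illustrative, not from any particular library:

array_new(Size, Array) :-
        functor(Array, array, Size).    % Size fresh arguments

array_put(Array, Index, Value) :-
        arg(Index, Array, Slot),
        var(Slot),                      % fail rather than overwrite
        Slot = Value.

array_get(Array, Index, Value) :-
        arg(Index, Array, Value0),
        nonvar(Value0),                 % fail on slots not yet written
        Value = Value0.

Writing to a slot twice fails instead of updating it; that is exactly
the limitation the overflow list in the permutation method above
works around.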

Now, remember that we can solve the permutation problems with
T-bounded terms in O(N) space and O(N.log(T,N)) time in languages
of class 1a).  It is straightforward to write the code for T=8.
For 2^30 words (all you can fit in a 32-bit byte-addressed machine),
log(8,N) <= 10, so the difference between O(N.log(8,N)) and O(N) is
not all that great.  For problems bigger than that, the pretence of
languages like Pascal to belong to class 4b) -- reading "machine
memory" as "store + disc space" -- is unpleasantly exposed.

One of the reasons for being interested in functional languages at
all is the hope that they might be useful for parallel systems.
The connection between forming permutations and data routing in
networks of processors is fairly clear, and connection networks
provide each processor with a fixed smallish number of neighbours.
O(1) access and update time across such a network is out.

There are a lot of problems where languages of class 1a) are no worse
than languages of class 4b):  the Fast Fourier transform is one,
transposing a matrix is another.  In these two cases, we are shuffling
data around, but we can analyse the shuffle in advance and route the
data so that each "Logical Inference" moves a bounded number of items
a bounded number of hops.

Where we are with respect to the permutation problems:

    generating a random permutation (of any list) is
        O(N.log(T,N))
    in a language of class 1a) or better, or
        O(N) average, O(N.lgN) worst case
    in a language of class 3b) or better, or
        O(N) average and worst case
    in a language of class 4b)

These are upper bounds.  O(N) is a lower bound.
Can these bounds be tightened?  Allen Van Gelder showed me a
random permutation method for class 3b) once, and I have a
nasty feeling it was O(N) worst case, but I can't find my copy.

It is trivially easy to show that if a bound of O(f(N)) is attainable
for some algorithm in a language of class 4b), a bound of
O(f(N).T.log(T,N)) is attainable in a language of class 1a).
Often O(f(N)) itself is attainable.

What I'm hoping is that someone will be able to think of some
general techniques, either proof methods for showing that certain
problems must suffer, or coding methods for showing that they
needn't.

It is worth noting that in parallelising conventional algorithms,
paying an O(lgN) time and space cost by allocating new data
instead of changing the old may permit a subsequent O(N) time
speedup, for a speedup of O(N/lgN) relative to the original
algorithm.  I am not skilled in such matters, but I believe that
permutation generation may be such a case.  (That is, with N
processors, it would take lg(N) parallel steps).  Anyone know?

------------------------------

End of PROLOG Digest
********************