[comp.lang.prolog] Prolog's Pitfalls Revisited

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (07/02/90)

In article <1990Jun26.170835.11037@chaos.cs.brandeis.edu>,
dsmith@chaos.cs.brandeis.edu (Donald A. Smith) writes:

> Here's a summary of my polite counter-response to Richard O'Keefe's 
> response to my first posting on "Prolog's Pitfalls".

> Cut the vituperative, confrontational style and the character assassination.

This is the POLITE counter-response.  Thank goodness he didn't send a
rude one!  Consulting a dictionary, I find
	Character assassination is the deliberate attempt to destroy
	someone's reputation, especially by criticizing them in an
	unfair and dishonest way when they are not there, so that
	everyone doubts their good qualities.

Now, what precisely did I do that is being described as "character
assassination"?  I said
! Get this and get this good:  use of quicksort in ANY programming
! language, is a mark of INCOMPETENCE...
I did spell out a circumstance (extreme shortage of storage) under
which the use of quicksort is entirely appropriate.

Whose reputation, precisely, was I trying to destroy?  Name him!
Is there anything dishonest about this claim?  No, I really believe it.
I am, in fact, prepared to defend the claim in court.
Is there anything unfair about it?  Well, what precisely _is_ my claim?
	A programming professional takes the trouble to discover
	the properties (quicksort is unstable) and costs (quicksort is
	slow) of the tools he uses.  The properties and costs of
	quicksort (relative to, say, merge sort) are obtainable from
	every good book on sorting or algorithms (Knuth, Mehlhorn,
	Sedgewick, &c).
What's unfair about that?
	
Ok, how about "vituperative"?  Back to the dictionary:
	vituperative means full of spiteful and bitter verbal abuse

Saying that using quicksort is a mark of incompetence is "spiteful"?
Spare me days.  It's sad times we've come to when one is not allowed
to disparage an algorithm which has been known to be inferior for
nearly 30 years without being accused of showing spite to someone.
WHO, precisely, am I being alleged to have abused?
Certainly not Donald A. Smith!  The force of my argument (quicksort is
not an argument against Prolog because no good programmer uses quicksort,
surely Smith *knows* that, so it is hard to suppose that any argument
against Prolog based on quicksort was intended seriously) would have
been lost entirely if anyone had imagined that I thought Smith didn't
know quicksort is inferior.

One last appeal to the dictionary:
	A confrontation is an attempt to accept the realities of
	a difficult or unpleasant situation and to deal with them.
If my style is indeed confrontational, that is high praise indeed.
Long may it continue so!


Now, let's recall the context.  Smith's original message said

>>	I will list some criticisms of logic programming/Prolog.

Criticisms of LOGIC PROGRAMMING were entirely absent from that
message.  When I pointed this out, Smith replied:

> I don't know what you mean by "None of the points addressed logic
> programming!"   All of the points addressed logic programming (which
> includes Prolog).  

No.  All the points addressed *Prolog*.  Let me point out a deficiency
of English.  The Polynesian and (most) Australian languages have a
similar pronoun system: there are three numbers (singular, dual, and
plural) and four persons (first person exclusive, first person inclusive,
second person, and third person).  [I ignore "honorific", "avoidance",
and "kinship" variations.]  I have often been frustrated by the absence
of dual pronouns in English.  English is an Indo-European language.  Have
I found a problem with Indo-European languages?  No!  Ancient Greek had
a perfectly good set of dual pronouns.  What has _really_ bothered me in
English is the lack of distinction between first person inclusive and
exclusive.  I sometimes shake my head in wonder that we can manage
without them.  Greek didn't have that either.  _Now_ have I found a
problem with Indo-European languages?  No!  Pijin makes the distinction
between "mi" and "yumi" perfectly plain.

In just the same way, a criticism of Prolog syntax or search order is
not a criticism of any other logic programming language (until it has
been _shown_ to apply to that language), still less of logic programming
as such.  You might as well try to discredit OOP by criticising C++.

> 0. Awkwardness
>   Prolog is often used for tasks where other languages are better suited.

Not a criticism of logic programming.
Not a criticism of Prolog, either.  Which programming language
*isn't* "often used ... where other languages are better ..."?

> Interfacing different languages tends to be ugly.
> While one could combine different languages (e.g., Prolog, C, and 
> Scheme),  in my experience the interfaces are hard to manage.

BE SPECIFIC.  Which Prolog systems have you had experience with?
Interfacing to which other languages?  Have you had experience
with Xerox Quintus Prolog mixing Lisp and Prolog?  Have you had
experience with ZYX's older Prolog system, again mixing Lisp and Prolog?
Have you had experience of PopLog, mixing Lisp, Pop, SML, and Prolog?

It's interesting to note that the interface between T ("A Dialect of Lisp")
and C is very close in style to the Quintus/SICStus/BIM/NU/ZYX style of
interface between Prolog and C.

> 1. Speed
> Getting better, but still deficient; faster systems not widely available.

The systems which *ARE* widely available include Quintus Prolog, for which
several programs that I have written outperformed C, SICStus Prolog (which
has a native code compiler for 680x0s that can be about twice as fast as
Quintus Prolog), BIM Prolog (with a native code compiler of similar quality),
and so on.

> My quoting of an associate wasn't intended to be a rational argument 
> that could itself discredit Prolog or logic programming.  It was just
> a sarcastic remark ...

This is the man who tells _me_ to cut the vituperation...

> Quicksort was offered as an example of slowness,
> not as an example of a good sorting algorithm.

I really don't see what relevance the slowness of a bad algorithm has
to anything.  Who *cares* how fast quicksort goes when you'd have to
be ignorant or incompetent to use it anyway?

As an example of slowness, it failed utterly, because we were given
NO DATA WHATSOEVER.  He specifically claimed that quicksort is an example
that shows that van Roy et al.'s results aren't a defence against claims
that Prolog is slow.  But quicksort is precisely the kind of thing that
the new Berkeley compiler ought to be very good at!  Why should we
believe that van Roy's compiler can't do a good job of it (compared to C)
simply on Smith's say-so without any numbers?  And if the new Berkeley
compiler _can_ do a good job of it, the utility of quicksort as an
example of slowness vanishes entirely, so that if Smith didn't mean us to
believe that van Roy's compiler wouldn't make quicksort nearly as fast as
in C, all we are left with is the sarcasm.

There is an extremely important point about comparing languages to be
made.  The question "how well does language L handle algorithm A" is of
little practical interest.  Algorithms exist to solve PROBLEMS.  The
question which is of great practical interest is "how well can problem
P be solved in language L".  In this case, the question "how well can
Prolog do quicksort?" was *always* uninteresting.  The interesting
question is "how well can Prolog sort?"  That's a very different question,
which may involve finding a new algorithm.  I have often found that
designing an algorithm in Prolog is like designing a parallel algorithm
(the difficulty in both cases being the absence of instantaneous
communication: you have to plan in advance to get the right data in the
right place at the right time).  Now, it is a platitude of parallel
programming that the most efficient parallel algorithms are often different
from the most efficient sequential algorithms for the same task.  (Sorting
is one of the cases where this platitude is true.)

So, we are left with the strange situation that Smith appears to insist
that Prolog should do well with an algorithm designed for conventional
languages and bad in them, and that he uses this to discredit van Roy's
results without presenting any figures at all for how van Roy's system
actually does it.

I'm still waiting for a serious argument against *Prolog* here,
let alone logic programming.

> 2. Logic-functional marriage

> I doubt that MERELY adding functional syntax to Prolog "disposes" of
> the issue of combining logic and functional programming.

Nobody ever said it did.  What it disposed of was the claim that you
had to use a functional language to get the benefits of functional
syntax.  Let's quote the original posting again:

Smith> 2. I have yet to see an elegant marriage of logic and functional 
Smith> programming. It seems clear that functional syntax is needed.
                    ************************************************

That looks to me a whole lot like a claim that functional SYNTAX is
needed.  I pointed out that you can *HAVE* it in Prolog, seamlessly.
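
For concreteness, here is a toy sketch of how functional notation can be
layered on plain Prolog.  Everything in it (eval/2, the fn/2 registry) is
a hypothetical illustration, not any particular package; real systems do
the flattening once, at compile time, via term_expansion/2:

```prolog
% Evaluate a functional term: evaluate the arguments innermost-first,
% then look the resulting call up in the (user-supplied) fn/2 table.

eval(X, X) :- \+ compound(X), !.        % numbers and atoms are values
eval(T, V) :-
        T =.. [F|Args],
        maplist(eval, Args, Vals),
        T1 =.. [F|Vals],
        fn(T1, V).

% Two sample "functions":
fn(plus(X,Y), V) :- V is X + Y.
fn(double(X), V) :- V is 2*X.
```

So | ?- eval(double(plus(1,2)), V). binds V to 6, with nested functional
syntax flattened into ordinary relational calls.  A compile-time expander
does exactly this flattening at load time, with no interpretive overhead.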

> Clearly that's part of it, but there are other issues motivating all the
> work on narrowing, equational logic programming, CLP languages, etc.

Let's dig deeper into what Smith says.  He evidently HAS "seen"
narrowing, equational logic programming, and CLP.  Referring to his
original letter, we find that he is saying that NONE of these is
"elegant".  I think we are entitled to more than a bare assertion.
What, precisely, is wrong with them?  How does narrowing fail to be
elegant?  What's wrong with EqLog?  What's wrong with LogScheme?

Bear in mind that a functional language like Haskell *IS* a logic
programming language.  The point of Haskell is that it is based on
a very strong logical theory; Haskell, unlike C, isn't warmed-over
assembly code.  (Yes, Haskell has bignums.)


> 3. Inflexible computation/search rule

I pointed out that Prolog is a programming language.

> Yes, but people often advertise Prolog as an AI programming language.
> And Prolog incorporates search as a fundamental part of its computation.

They say the same thing about Lisp and Smalltalk, neither of which
offers even depth-first search.  Why is it ok to call Lisp an AI language,
but not Prolog?

I mentioned iterative deepening.

> Isn't there an overhead (of execution time and programming time) 
> incurred by using iterative deepening?

Compared with pure depth first search, yes.  If it's built into the
engine, the cost can be as low as one machine instruction per
procedure call, per choice point creation, and per choice point
restoration.  Mark Stickel's Prolog Technology Theorem Prover does
this.  (To be perfectly frank, that's where I got the idea.)  If it
isn't built into the engine, but done by means of a (trivial) preprocessor
the overhead is still under 5% for the programs where I've measured it.
(People who attend my Prolog course are given the source code for the
preprocessor.  It's a rather obvious modification of the DCG translator.)
Compared with pure breadth-first search, iterative deepening does about
B/(B-1) times as many node expansions (B being the branching factor), but
in practice the execution time is greatly REDUCED.  There is no programming
overhead compared with depth-first, and there is in practice a reduction
compared with breadth-first.
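
A minimal sketch of both pieces, assuming pure clauses reachable through
clause/2 (bounded/2 and deepen/1 are my names for illustration, not the
course preprocessor's):

```prolog
% Depth-bounded proof: fails quietly when the bound runs out,
% just as DEC-10 Prolog's maxdepth/1 did.
bounded(true, _) :- !.
bounded((A,B), D) :- !,
        bounded(A, D),
        bounded(B, D).
bounded(Goal, D) :-
        D > 0,
        D1 is D - 1,
        clause(Goal, Body),
        bounded(Body, D1).

% Iterative deepening: retry with ever larger bounds.
deepen(Goal) :- deepen(Goal, 1).
deepen(Goal, D) :- bounded(Goal, D).
deepen(Goal, D) :- D1 is D + 1, deepen(Goal, D1).
```

(This naive version re-finds shallow solutions at every new depth and
never terminates when the goal has no proof at any depth; a serious
implementation reports only solutions needing the full new bound and
stops when deepening adds no new nodes.)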

> Iterative deepening is a programming technique used by Prolog programmers,
> but it's not built into the language.

It doesn't have to be built in any more than Definite Clause Grammars
have to be wired into the compiler.  It could be built in, and in Stickel's
system it _is_ built in.  What's more, the only thing which really needs to
be built in is depth bounding, and that *WAS* built into DEC-10 Prolog.
(Look in your Prolog manual for maxdepth/1.)

Alas, maxdepth/1 has gradually decayed (NU and SICStus Prolog both trap
to the debugger on a depth bound failure instead of failing quietly as
DEC-10 Prolog did, and some high-performance systems have dropped it).
The reason for this is that nobody used it.  But the feature *WAS* there.

> Furthermore, breadth first search (or its simulation by iterative deepening)
> does not exhaust the various search rules investigated in AI.

Neither it does.  So?  I repeat: why does the entire absence of any kind
of built-in searching method from Lisp or Smalltalk leave *them* acceptable
as AI languages (which they are), yet the presence in Prolog of depth-first
search (and iterative deepening, practically speaking for free)
isn't enough, and Prolog has to have _everything_?

I pointed out that ZYX Prolog, SICStus Prolog, NU Prolog, IBM Prolog,
and others have coroutining, so they *haven't* got a fixed left to right
goal ordering.

> I assume you're referring to features like the "freeze" predicate and 
> "wait" declarations.

I really had in mind ?- when declarations and the 'nac' utility, but yes.

> These add a coroutining component to Prolog without
> altering its basic, default left-to-right computation rule.

They leave left to right as the DEFAULT, but the programmer doesn't
need to do one tiny thing to get other orders.  What, precisely, is
wrong with having a default left to right order?  Let me give you an
example.  In NU Prolog, if I write
	| ?- length(X, N), length(Y, N), N = 3.
what happens is that the first call to length/2 goes to sleep,
then the second call to length/2 goes to sleep,
then the binding N = 3 is made, which wakes up the two sleeping goals,
and solutions are then found for X and Y.  What's more, if I reject
the solution I'm offered, the query fails instead of diverging.
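
The same suspend-until-bound behaviour is available through freeze/2 in
SICStus-style systems; the predicate name safe_positive/1 below is just
my illustration:

```prolog
% The test X > 0 is delayed until X is bound, instead of
% raising an instantiation error.
safe_positive(X) :- freeze(X, X > 0).
```

Now | ?- safe_positive(X), X = 3. succeeds, with the comparison running
only after the binding is made, and | ?- safe_positive(X), X = -1. fails
quietly.  The caller never inspects the computation state.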

> (Contrast query optimizers which will re-order calls to database
> predicates in order to speed up joins.)

In NU Prolog, there is a program which you can use to generate the
'when' declarations automatically.  It's called 'nac'.  Let me quote
from the NU Prolog manual:
	The second way nac adds control information is the
	REORDERING of calls in the bodies of clauses.
What was that contrast, again?

> Programmers have to concern themselves with details of the computation
> state (whether a variable is bound or not) and so use of the coroutining
> features places a great burden on the programmers' shoulders that ought
> to be better supported in the basic operations of the language.

Eh?  The whole point of coroutining in NU Prolog is that programmers
DON'T have to concern themselves with details of the computation state
and it IS supported in the basic operations of the language.  John Lloyd
once said "I never want to see another call to var/1 again, and with
MU Prolog I don't have to".  I really do wonder which, if any, coroutining
Prolog system Donald A. Smith has ever actually programmed in.

> 4. No nested scopes

As with ALL the points, this is a criticism of Prolog, not of logic
programming.  (I think Ptah has nested scopes, but I could be wrong.)

> I rather think that the lack of nested variable scopes makes it more
> difficult to reach the ideal of "short clauses".

This has not been my experience.  Really, an example of something you
have written where you found it very hard to produce short clauses
because of the lack of nested scopes would be extremely helpful.  Then
I could show you how easy it all is...  It is rare for me to have long
clauses except when a clause is doing a lot of output, and the advent
of format/2 changed *that*.

I wrote
> > To be perfectly frank with you, I find deeply nested definitions where 
> > I have to look back over hectares of code to find the definition of a 
> > variable rather hard to read.

> How about looking back a _few_ lines?

I was talking about the *REAL* *ACTUAL* programs I have seen written
in versions of ML.  You see these lovely little examples in books (examples
which transliterate perfectly into Prolog), and then you look at a real
compiler or a real implementation of YACC and things are not so simple and
not so local.

> I think there are cases when nested
> scopes are preferable to high-arity clauses

There is no such animal as a high-arity clause.  Clauses do not have arity.
I think my personal record for the arity of a predicate may be 10 (excluding
the output of a program transformation system).

> (or the use of constructors and selectors to "package-up" parameters).

I am entirely ready to believe this.  To convince me, all you have to do
is to present an example.

> > Design your program right in the first place.  I'm serious.

> Even if one does that, one often later decides to add functionality. 
> Specifications change, software grows.  Prolog is often used for 
> rapid prototyping and design exploration: the specifications 
> and the implementation evolve(sic).

Yes, but what has this to do with nested scopes?  If the specification
changes, the simpler your code was in the first place, the easier it is
going to be to adapt your code to the new specification.  [By the way,
the "(sic)" is there because "populations evolve, individuals die".]

Look, I'm really sympathetic to nested scopes.  I'm a long-time Algol fan.
I'm a Scheme fan right now (at *last* Algol 60 has a successor!).  When
Lawrence Byrd saw the way I used to lay my Prolog code out, he commented
"you really miss Algol, don't you".  But I eventually learned how to write
Prolog.  Try it, it's easier than you think.

> 5. Memory hog due to single-assignment.
>  Not solved. Garbage collection cleans up the garbage, whereas compile-time
> garbage collection might prevent it from being created at all.

This is in the summary, but it is not argued in the body of the message.
Calling something a "memory hog" is a very strong statement.  I pointed
out that ML implementations have got the cost of garbage collection down
to under one instruction per memory cell.  If this cost were eliminated
entirely, the greatest possible speedup would be 33% (a cell is going to
be written once, and if it isn't likely to be read at least once, why make
it in the first place?).  Compile time garbage collection isn't going to
do one tiny little thing, no not anything at all, to the amount of 
memory a Prolog program requires at any one time.  A program is a memory
hog if and only if it claims a large volume of memory from the OS.  But
can you call a program (Prolog) running in 1 Megabyte (this is a realistic
figure) on a 4 Megabyte machine a memory hog?  If a program turns over a
lot of memory with the aid of a garbage collector, that may make it slow,
but it most certainly does not make it a memory hog.  I repeat, compile
time garbage collection doesn't reduce the amount of memory a Prolog
program has to have at any one time, it merely reduces the cost of reusing
the same amount of memory.  I already pointed out that Smith's only example
(quicksort) requires 2N list cells to sort a list of length N.  Compile time
garbage collection doesn't change that.  Having destructive assignment
would reduce the space claim by a factor of 2, but no more.

It's worth pointing out that some Prolog systems (IBM Prolog, PopLog, others)
*have* arrays with assignment as a built in data type and that some others
(C Prolog, SICStus, NU, ALS, others) have destructive assignment for
compound terms.  I think this is a major wart, but Standard ML also has
destructive assignment in what is otherwise a functional language.

> 6. Impure features
 (not argued in Smith's latest posting)

> 7. Program transformation/parallelism
> Prolog hasn't lived up to the hype surrounding 5th Generation Project.

- this tells us nothing about logic programming
- complain to the Japanese about their hype.  Prolog wasn't designed in
  Japan or by the 5th Generation Project.
- I would point out that if you read the proceedings of the Japanese
  logic programming conferences (yes, logic programming, a lot of what
  they do is not Prolog) they don't seem to be disappointed...
- As for program transformation, it's a tool I use.
- As for parallelism, what precisely is your gripe about the work done
  by the GigaLips project?  They have demonstrated parallel systems with
  good speedups.  What don't you like about Parallel NU Prolog?  Have
  you ever tried Logix or Strand?

Smith originally wrote
> >> Prolog as it is though is clearly deficient.

I replied
> >I am still waiting to hear the evidence for this.

and Smith responded:
> I think many of my points represented evidence of this.

Nope.  Smith's points were *ASSERTIONS*.  Assertions are not evidence.
They're evidence that he experiences Prolog as deficient, but they're
not evidence that it IS deficient.  In this message, as in my previous
reply, I have asked for some specific examples.  What, precisely, is
inelegant about narrowing?  What's wrong with LogScheme?  Is quicksort
in fact slow in the new Berkeley system?  Why does the absence of A*
as a built-in mechanism of Prolog (though it can easily be programmed
in only a few clauses) disqualify Prolog from being an AI language on
the grounds of being too rigid, when it doesn't disqualify Lisp?  Give
us an example of something which is forced to use long clauses because
there are no nested variable scopes.  Let's see some EVIDENCE.

> Your responses often twisted the intentions behind the points (quicksort
> was just an example),

This contradicts Smith's claim elsewhere that quicksort was intended
as sarcasm.  But I *reacted* to the mention of quicksort as to an example.
It simply isn't possible for me to "twist the intentions", they exist
inside Smith's head and I can't reach out and twist inside his head.
Nor did I make any allegations about what his intentions were.  I reacted
to what he wrote, is all.

> (You) denied the problem existed (variable scoping),

I didn't deny that the absence of variable scoping is a fact.
I denied that it is a PROBLEM.  I am still waiting to be shown any
evidence that it is a problem.  All we've seen so far is proof by assertion.
In any case, it isn't a problem with *logic programming*.

> or proposed as a total solution what is in fact just a partial
> solution (functional syntax, garbage collection, use of programming
> tools, iterative deepening, coroutining).

I never in my posting proposed ANYTHING as a total solution.  Now who
is twisting whose words?

> >Yes, there are many details in which Prolog could be improved.
> >There are many directions in which Prolog _is being_ improved.

> I don't think Prolog's many problems are just "details".

Sorry, it's dictionary time again.
	A detail is an individual fact, piece of information,
	or visual feature which you notice when you look at
	something carefully or remember when you think about it.
Is Smith denying that his criticisms are (facts or pieces of information)
or that you notice them when you look at Prolog carefully?  If he isn't
denying that, then they are DETAILS.  There's nothing "just" about details.
Who was it said "God lives in the details"?

> And I do think Prolog is good for certain applications (which is as much
> as one could hope).

Indeed it is.

> When I posted "Prolog's Pitfalls" I was playing devil's 
> advocate in order that we can come to an accurate appraisal of the 
> strengths and weaknesses of Prolog and related logic programming systems.  

If you want to help us come to an accurate appraisal,
post EXAMPLES, not ASSERTIONS.  Give detail!

-- 
Science is all about asking the right questions.  | ok@goanna.cs.rmit.oz.au
I'm afraid you just asked one of the wrong ones.  | (quote from Playfair)

mgv@usceast.UUCP (Marco Valtorta) (07/03/90)

In article <3357@goanna.cs.rmit.oz.au> ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes:
>
>                I said
>! Get this and get this good:  use of quicksort in ANY programming
>! language, is a mark of INCOMPETENCE...
>I did spell out a circumstance (extreme shortage of storage) under
>which the use of quicksort is entirely appropriate.
>
 .....
>Saying that using quicksort is a mark of incompetence is "spiteful"?
>Spare me days.  It's sad times we've come to when one is not allowed
>to disparage an algorithm which has been known to be inferior for
>nearly 30 years without being accused of showing spite to someone.
>WHO, precisely, am I being alleged to have abused?

Here is a quote from David Harel's _Algorithmics_, p.136:
"... Taking into account the fact that it requires only a small fixed amount
of additional storage space, and despite its inferior worst-case performance,
quicksort is actually one of the best sorting algorithms known, and is
definitely the best among the ones mentioned here."  Note that
mergesort is one of the algorithms mentioned in the same section (in
fact, on the previous page). 

It seems that Harel does know that quicksort has been known to be inferior
for nearly 30 years :-).  More seriously, I do not think that either
Knuth or Sedgewick support the view that quicksort is "inferior."
All experimental data (on collections of "random" arrays) I know 
of (admittedly not recent data) points
to quicksort being faster than mergesort on average.  I would like to know
of references to data that supports the opposite view.  Sedgewick's dissertation
on quicksort even has a variant that approaches (as the size of the array to be
sorted increases) the worst-case time performance of mergesort.  (He notes that
the variant should not be used in practice, of course.)

As a bit of trivia
for the readers of this newsgroup, Van Emden published in CACM an
algorithm called qsort that was supposed to be an improvement on Hoare's
quicksort, but was shown (first empirically, then, by Sedgewick, analytically)
to be inferior.  (To be precise, Sedgewick showed that Van Emden's analysis
of qsort was incorrect, thus explaining why experiments indicated qsort to
be slower than quicksort.  Still, qsort is a beautiful algorithm!)
I have always assumed that qsort's Van Emden is the same as the one
who co-authored "The Semantics of Predicate Logic as a Programming
Language" with R.A. Kowalski.  Can someone confirm this?

Marco Valtorta			usenet: ...!ncrcae!usceast!mgv
Department of Computer Science	internet: mgv@cs.scarolina.edu
University of South Carolina	tel.: (1)(803)777-4641
Columbia, SC 29208		tlx: 805038 USC
U.S.A.				fax: (1)(803)777-3065
usenet from Europe: ...!mcvax!uunet!ncrlnk!ncrcae!usceast!mgv

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (07/03/90)

In article <3309@usceast.UUCP>, mgv@usceast.UUCP (Marco Valtorta) writes:
> It seems that Harel does know that quicksort has been known to be inferior
> for nearly 30 years :-).  More seriously, I do not think that either
> Knuth or Sedgewick support the view that quicksort is "inferior."
> All experimental data (on collections of "random" arrays) I know
> of (admittedly not recent data) points
> to quicksort being faster than mergesort on average.

The *worst* case behaviour of quicksort is not what I was referring to.
What I was referring to is the well-known fact that the average case of
quicksort does about 30% more comparisons than the *worst* case of
merge sort.  This information *IS* in Knuth Volume 3 and in Sedgewick's
"Algorithms" (at least the 1st edition, I haven't seen the second).

My reference to the known inferiority of quicksort was of course to
Hoare's 1961 paper in Computer Journal, where quicksort was first published.
He gave the formula there for the number of comparisons done by quicksort,
and the number of comparisons done by mergesort was known when the paper
was submitted in 1960 to be LESS.
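
Anyone who doubts the counts can measure them directly.  Here is a sketch,
with a comparison counter threaded through naive list-based versions of
both algorithms (all predicate names are mine; the point is only to count
calls to @=</2 on the same input):

```prolog
% Quicksort, counting one comparison per partition step.
count_qsort([], [], C, C).
count_qsort([H|T], Sorted, C0, C) :-
        part(T, H, Le, Gt, C0, C1),
        count_qsort(Le, SLe, C1, C2),
        count_qsort(Gt, SGt, C2, C),
        append(SLe, [H|SGt], Sorted).

part([], _, [], [], C, C).
part([X|Xs], Pivot, Le, Gt, C0, C) :-
        C1 is C0 + 1,                     % one key comparison
        (   X @=< Pivot -> Le = [X|Le1], Gt = Gt1
        ;   Le = Le1, Gt = [X|Gt1]
        ),
        part(Xs, Pivot, Le1, Gt1, C1, C).

% Merge sort, counting one comparison per merge step.
count_msort([], [], C, C).
count_msort([X], [X], C, C).
count_msort([X,Y|Zs], Sorted, C0, C) :-
        halve([X,Y|Zs], [X,Y|Zs], Front, Back),
        count_msort(Front, SF, C0, C1),
        count_msort(Back, SB, C1, C2),
        merge_(SF, SB, Sorted, C2, C).

halve([], Back, [], Back).
halve([_], Back, [], Back).
halve([_,_|T], [H|Hs], [H|F], Back) :- halve(T, Hs, F, Back).

merge_([], Ys, Ys, C, C).
merge_([X|Xs], [], [X|Xs], C, C).
merge_([X|Xs], [Y|Ys], [Z|Zs], C0, C) :-
        C1 is C0 + 1,                     % one key comparison
        (   X @=< Y -> Z = X, merge_(Xs, [Y|Ys], Zs, C1, C)
        ;   Z = Y, merge_([X|Xs], Ys, Zs, C1, C)
        ).
```

Run both on the same shuffled list (start the counter at 0) and compare
the counts; then try an already-sorted list, where this first-element-pivot
quicksort degenerates to N(N-1)/2 comparisons while the merge sort count
stays near N lg N.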

Unfortunately, there is a tradition of performing sorting experiments
on arrays of integers.  This is certainly convenient for coding, because
then you can use the language's built-in "<" operator.  It is, however,
wildly unrealistic if you are NOT sorting things known at compile time
to be integers.  Now, if you KNOW at compile time that you are sorting
integers, then there are sorting algorithms with O(N) average cost (e.g.
radix sort).  If you don't know that you have integers, e.g. you are
sorting strings, then the cost of comparisons dominates, and quick sort
loses *badly*.

Sedgewick's paper in Acta Informatica contains detailed cost formulas.
It turns out that to get the favourable result for quicksort that everyone
has taken as Gospel, he assumes that a comparison costs 1/4 as much as
a memory reference.  In his dissertation, he was looking at code on a
/370 or that kind of machine, and charging a CR instruction as 1/4 of an
L instruction is not unreasonable.  But that ONLY applies when you know at
compile time that you are sorting integers.  It does not apply when you
are sorting strings or atoms or doing a procedure call for your comparison.

> I would like to know of references to data that supports the opposite view.

Look at any paper that compares quicksort and merge sort for anything
but integers.

It's worth pointing out that in a language which lacks arrays (such as
most Prologs and many functional languages) the space advantage of quick
sort does not exist in any form at all, and if you are sorting variable-
length records, there is no space advantage for quicksort in conventional
languages either.

-- 
Science is all about asking the right questions.  | ok@goanna.cs.rmit.oz.au
I'm afraid you just asked one of the wrong ones.  | (quote from Playfair)