kam@druhi.ATT.COM (MorrisseyKA) (07/29/87)
Here's a naive (:-) question: What is the formula for determining the
number of LIs needed to reverse a list of length N, using the following
definition?

    nairev([],[]).
    nairev([X|Xs],Zs) :- nairev(Xs,Ys), append(Ys,[X],Zs).

    append([],L,L).
    append([X|L1],L2,[X|L3]) :- append(L1,L2,L3).

It seems that the answer should be easy to determine, but I'm having a
heck of a time coming up with an answer that I feel confident about.

Karen A. Morrissey
richard@aiva.ed.ac.uk (Richard Tobin, JANET: R.Tobin@uk.ac.ed ) (07/31/87)
In article <2080@druhi.ATT.COM> kam@druhi.ATT.COM (MorrisseyKA) writes:
> What is the formula for determining the number of LIs needed
> to reverse a list of length N, using the following definition?
>
> [ usual definition omitted ]

A logical inference is a successful unification with the head of a clause.

append([], L, L) requires one logical inference.
append([X|L1], L2, [X|L3]) requires one logical inference plus those
required for append(L1, L2, L3).

Writing a(N) for the logical inferences required for append with a first
argument of length N, we have

    a(0) = 1
    a(N) = 1 + a(N-1)

so ('trivially') a(N) = N+1.

nairev([], []) requires one logical inference.
nairev([X|Xs], Zs) requires one logical inference plus those required for
nairev(Xs,Ys), plus those required for append(Ys, [X], Zs).

Writing n(N) for the logical inferences required for reversing a list of
N elements, we have

    n(0) = 1
    n(N) = 1 + n(N-1) + a(N-1)
         = N+1 + n(N-1)               since a(N-1) is N
         = N+1 + N + n(N-2)           expanding n(N-1)
         = N+1 + N + ... + 2 + n(0)   expanding all the way down
         = N+1 + N + ... + 2 + 1
         = (N+1)(N+2)/2               as is well known (:-)

You could do the last bit by induction if you wanted to be rigorous.

In particular, n(30) = 496.
-- 
Richard Tobin,                  JANET: R.Tobin@uk.ac.ed
AI Applications Institute,      ARPA:  R.Tobin%uk.ac.ed@cs.ucl.ac.uk
Edinburgh University.           UUCP:  ...!ukc!ed.ac.uk!R.Tobin
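The derivation can also be checked mechanically.  Here is a minimal
sketch, not part of the original posting, in which each clause counts
itself as one inference; the predicate names nrev_li/3, append_li/4 and
nrev_li_formula/2 are invented for illustration.

    nrev_li([], [], 1).
    nrev_li([X|Xs], Zs, N) :-
        nrev_li(Xs, Ys, N1),
        append_li(Ys, [X], Zs, N2),
        N is 1 + N1 + N2.

    append_li([], L, L, 1).
    append_li([X|L1], L2, [X|L3], N) :-
        append_li(L1, L2, L3, N1),
        N is 1 + N1.

    /* closed form from the derivation above: (N+1)(N+2)/2 */
    nrev_li_formula(Len, LIs) :- LIs is (Len+1)*(Len+2)//2.

Both agree with the figure quoted above: for a 30-element list,
?- length(L, 30), nrev_li(L, _, N) and ?- nrev_li_formula(30, N) each
give N = 496 (length/2 is assumed to be available as a library
predicate).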
tree@sun.soe.clarkson.edu (Tom Emerson) (03/20/89)
I am looking for the Prolog source for the naive reverse benchmark.
Thanks in advance for your help.

Tom
+----------------------------------------------------------------------+
| It matters not how strait the gate,      |                           |
| How charged with punishments the scroll, |                           |
| I am the master of my fate;              | Tom Emerson               |
| I am the captain of my soul.             | tree@sun.soe.clarkson.edu |
|     --- From W.E. Henley, "Invictus"     |                           |
+----------------------------------------------------------------------+
broome@adm.BRL.MIL (Paul Broome ) (03/20/89)
In <2695@sun.soe.clarkson.edu> tree@sun.soe.clarkson.edu (Tom Emerson) writes:
:I am looking for the Prolog source for the naive reverse benchmark.
The naive reverse benchmark is popular because it generates a lot of procedure
calls. But it seems that benchmarks for NP-complete problems would be
most interesting since they require search and search is easy in Prolog. Plus
we'd be attacking a famous unsolved problem.
I know about the 'stock-cutting problem' of Dincbas et al. in LP'88.  Has
anyone heard of other benchmarks for NP-complete problems?
Paul <Broome@brl.mil>
p.s. The following is bench/nrev.P from the SBProlog distribution.
-----------------
/* The naive reverse benchmark */
nrev([],[]).
nrev([X|Rest],Ans) :- nrev(Rest,L), append(L,[X],Ans).
append([],L,L).
append([X|L1],L2,[X|L3]) :-
	append(L1,L2,L3).
bench(Count) :-
	cputime(T0), dodummy(Count), cputime(T1),
	dobench(Count), cputime(T2),
	report(Count,T0,T1,T2).
dobench(Count) :- data(List),repeat(Count),nrev(List,_),fail.
dobench(_).
dodummy(Count) :- data(List),repeat(Count),dummy(List,_),fail.
dodummy(_).
dummy(_,_).
data(X) :- data(X,30).
data([],0).
data([a|Y],N) :- N > 0, N1 is N-1, data(Y,N1).
/* data([a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,aa,bb,cc,dd]). */
/* repeat(N): succeeds N times on backtracking */
repeat(_).
repeat(N) :- N > 1, N1 is N-1, repeat(N1).
report(Count,T0,T1,T2) :-
	write(T0), nl, write(T1), nl, write(T2), nl,
	Time1 is T1 - T0, write(Time1), nl,   /* dummy loop time */
	Time2 is T2 - T1, write(Time2), nl,   /* benchmark loop time */
	Time is Time2 - Time1, write(Time), nl,   /* net nrev time */
	Lips is (496*Count*1000)/Time,   /* cputime/1 assumed to be in milliseconds */
	write('Lips = '), write(Lips), nl.
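For what it's worth, a typical invocation of this harness would look
something like

    ?- bench(1000).

which times Count iterations of a dummy loop, then Count iterations of
nrev/2 on the 30-element list built by data/1, and reports the LIPS
figure using the 496 logical inferences derived earlier in this thread.
(The count of 1000 is only an example; pick whatever gives adequate
timing resolution for cputime/1 on your system.)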
ok@aucsv.UUCP (Richard Okeefe) (03/30/89)
In article <18765@adm.BRL.MIL>, broome@adm.BRL.MIL (Paul Broome ) writes:
> The naive reverse benchmark is popular because it generates a lot of
> procedure calls.  But it seems that benchmarks for NP-complete problems
> would be most interesting since they require search and search is easy
> in Prolog.  Plus we'd be attacking a famous unsolved problem.

It wouldn't be terribly interesting to try NP-complete benchmarks.
NP-complete problems are hard no matter _what_ the programming language.
A Prolog program _cannot_ be asymptotically better than the best RAM
algorithms.  Just because hard problems can be _stated_ easily in a
language doesn't mean they can be _solved_ fast.  (This is not to say
that Prolog may not be an appropriate language for parallel systems, but
it can't do any better than the best parallel algorithms either.)
broome@adm.BRL.MIL (Paul Broome ) (04/03/89)
In article <193@aucsv.UUCP> ok@aucsv.UUCP (Richard Okeefe) writes:
:In article <18765@adm.BRL.MIL>, broome@adm.BRL.MIL (Paul Broome ) writes:
:> ... benchmarks for NP-complete problems would be most interesting since
:> they require search and search is easy in Prolog.  Plus we'd be attacking
:> a famous unsolved problem.
:
:It wouldn't be terribly interesting to try NP-complete benchmarks. ...
:A Prolog program _cannot_ be asymptotically better than the best RAM
:algorithms.

Sorry, my article was terribly terse and not really about benchmarks.
I had in mind using the flexibility of Prolog to explore different
algorithms for NP-complete problems.  In particular, replacing generate
and test methods with those that test (or constrain) while generating.
There's an example in CHIP (Constraint Handling in Prolog (Dincbas,
LP'88)) where these methods beat existing ones in Fortran.  Are there
others?

Who knows, there might yet be a deterministic polynomial time algorithm
for these problems.  Failing that, the comparison with existing methods
would still be interesting.

Paul <broome@brl.mil>
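To make the two styles concrete, here is a minimal 4-queens sketch.  It
is not taken from CHIP or from the original posting; all predicate names
are invented, and pick/3 is defined locally rather than relying on a
library select/3.

    /* Qs is a list of column numbers, one queen per row. */

    /* Generate and test: commit to a complete permutation, then check it. */
    queens_gt(Qs) :-
        permute([1,2,3,4], Qs),
        safe(Qs).

    /* Constrain while generating: check each queen against those already
       placed as soon as it is chosen, pruning bad branches early.  The
       answer comes out with the most recently placed row first. */
    queens_twg(Qs) :-
        place([1,2,3,4], [], Qs).

    place([], Placed, Placed).
    place(Unused, Placed, Qs) :-
        pick(Q, Unused, Rest),
        no_attack(Q, Placed, 1),
        place(Rest, [Q|Placed], Qs).

    permute([], []).
    permute(L, [X|Xs]) :- pick(X, L, Rest), permute(Rest, Xs).

    pick(X, [X|T], T).
    pick(X, [H|T], [H|T1]) :- pick(X, T, T1).

    safe([]).
    safe([Q|Qs]) :- no_attack(Q, Qs, 1), safe(Qs).

    /* Columns are distinct by construction, so only diagonals are checked:
       a queen D rows away must not sit D columns to either side. */
    no_attack(_, [], _).
    no_attack(Q, [Q1|Qs], D) :-
        Q =\= Q1 + D,
        Q =\= Q1 - D,
        D1 is D + 1,
        no_attack(Q, Qs, D1).

Both ?- queens_gt(Qs) and ?- queens_twg(Qs) enumerate the same two boards
([2,4,1,3] and [3,1,4,2], in some order), but the second rejects
inconsistent partial placements before the remaining queens are chosen,
which is the essence of the constrain-while-generating approach described
above.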
@DOUGHNUT.CS.ROCHESTER.EDU:miller@CS.ROCHESTER.EDU (04/04/89)
From: Brad Miller <miller@CS.ROCHESTER.EDU>
Date: 3 Apr 89 00:48:56 GMT

    From: broome@adm.BRL.MIL (Paul Broome )

    Sorry, my article was terribly terse and not really about benchmarks.
    I had in mind using the flexibility of Prolog to explore different
    algorithms for NP-complete problems.  In particular, replacing
    generate and test methods with those that test (or constrain) while
    generating.  There's an example in CHIP (Constraint Handling in
    Prolog (Dincbas, LP'88)) where these methods beat existing ones in
    Fortran.  Are there others?

I don't understand how using Prolog as a test bed for algorithms has
something to do with benchmarks.

I certainly concur that naive reverse only exercises the unifier and
procedure call; I think a better benchmark of logic programming systems
would include something that required backtracking, at least in the
naive case.  The benchmark would then examine the strategy of the logic
system to best implement search for a particular solution, or search
for all solutions.

The problem with too general a benchmark, however, is that it won't
allow you to isolate exactly what you are testing.  I'd be interested if
anyone has an idea for a benchmark that would, for example, do well in a
system that does intelligent backtracking and poor in a system that does
naive, but doesn't otherwise stress the sorts of things naive reverse
does, i.e. unification and procedure call.
----
Brad Miller
U. Rochester Comp Sci Dept.
miller@cs.rochester.edu
{...allegra!rochester!miller}
debray@arizona.edu (Saumya K. Debray) (04/04/89)
> From: Brad Miller <miller@CS.ROCHESTER.EDU>
>
> I certainly concur that naive reverse only exercises the unifier and
> procedure call;

On modern Prolog systems, it doesn't even do this too well, since it can
be compiled into very compact code with much of the procedure calling
and unification optimized away.  I've heard stories that one commercial
Prolog vendor[*] managed to compile append/3 into sufficiently few
machine instructions that code for the whole procedure fit into the
instruction cache of the 68020, giving huge LIPS numbers.  A competing
vendor[*] apparently felt compelled to introduce an "append" instruction
in their interpreter after this, to match these LIPS figures.

[*] I have friends at both places, and financial interests in neither.
Hence (and also because aforementioned friends are quite likely to
referee my papers in the future), both these systems will go unnamed.

> I'd be interested if anyone has an idea for a benchmark that would, for
> example, do well in a system that does intelligent backtracking and
> poor in a system that does naive ...

I'd think any "search"-type program with a poorly chosen order of goals
would do (e.g. map coloring).  The problem with most intelligent
backtracking schemes is that because of the runtime overhead they incur,
they end up doing well on badly written programs but poorly on well
written ones.
-- 
Saumya Debray          CS Department, University of Arizona, Tucson
     internet: debray@arizona.edu
     uucp:     arizona!debray
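Here is a minimal sketch of the kind of program Debray has in mind; it
is an illustration only, not anything from the posting, and the
predicate names are invented.  Five regions are colored with three
colors, and every color is chosen before any adjacency constraint is
tested:

    colour(red).  colour(green).  colour(blue).

    /* Deliberately poor goal order: generate all five colors first,
       then test the adjacency constraints. */
    bad_order_map([A,B,C,D,E]) :-
        colour(A), colour(B), colour(C), colour(D), colour(E),
        A \== B, A \== C, A \== D,
        B \== C,
        C \== D, C \== E,
        D \== E.

When a constraint such as A \== B fails, a naive system backtracks into
the choice points for E, D and C first, even though none of them can
cure the failure; an intelligent backtracker can jump straight back to B
(or A).  That is the behavior gap such a benchmark would expose, while
stressing relatively little unification and procedure call.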
broome@adm.BRL.MIL (Paul Broome ) (04/05/89)
In article <10073@megaron.arizona.edu> debray@arizona.edu (Saumya K. Debray) writes:
:> From: Brad Miller <miller@CS.ROCHESTER.EDU>
:> I certainly concur that naive reverse only exercises the unifier and
:> procedure call;
:
:On modern Prolog systems, it doesn't even do this too well, since
:it can be compiled into very compact code with much of the procedure
:calling and unification optimized away.

I don't know that it's done, but I'd say it's even proper for a compiler
to transform naive reverse into its more efficient cousin.  If it
implements the general unfold/fold technique it's just good
optimization.  But that changes the complexity of the computation and
defeats the purpose of the benchmark.  The two goals, optimization and
benchmarking, clash.

> I don't understand how using Prolog as a test bed for algorithms has
> something to do with benchmarks.

The graph (map) coloring problem, on the other hand, is a member of the
class of problems for which no deterministic polynomial time algorithm
is known; it seems to _require_ search.  So whatever tricks the
compiler/programmer does would be welcomed as a contribution toward an
important problem instead of as a defeat of the benchmark.

Paul <broome@brl.mil>
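The "more efficient cousin" here is presumably the familiar accumulator
version of reverse; a standard sketch (not part of the original posting)
for comparison:

    /* Linear-time reverse: each element is pushed onto the accumulator
       as the input list is traversed, so no calls to append/3 are needed. */
    reverse_acc(Xs, Ys) :- reverse_acc(Xs, [], Ys).

    reverse_acc([], Acc, Acc).
    reverse_acc([X|Xs], Acc, Ys) :- reverse_acc(Xs, [X|Acc], Ys).

By the counting used earlier in the thread, reversing a 30-element list
this way costs 32 logical inferences rather than 496, which is exactly
why an unfold/fold transformation of nrev would change the complexity
and defeat the benchmark.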