bimandre@saturnus.cs.kuleuven.ac.be (Andre Marien) (02/14/91)
> As a computational engine, Prolog is Turing complete. The issue
> is not whether the application can be implemented Prolog, but
> whether one can implement this particular data structure, with
> its particular performance characteristics.)
>
> Russell

Don't blame Prolog; some Prologs can do it: you need support for
infinite terms. PrologII and PrologIII are the leading examples, I
think. (not that others don't have it)

The following program works as requested with inf tree support:

register_person(P,L,L) :-
    member_chk(P,L), ! .
register_person(P,L,[P|L]) .

write_person(P) :-
    name_person(P,NN), age_person(P,AN),
    ref_partner(P,S),
    name_person(S,NS), age_person(S,AS),
    write(name = NN), nl,
    write(' age' = AN), nl,
    write(partner = NS), nl,
    write(' age' = AS), nl .

name_person(p(N,S,A),N) .
ref_partner(p(N,S,A),S) .
age_person(p(N,S,A),A) .

p(L,a(NN,NS,NA),NL) :-
    name_person(PS,NS),
    ref_partner(PS,PN),
    name_person(PN,NN),
    ref_partner(PN,PS),
    age_person(PN,NA),
    register_person(PN,L,TL),
    register_person(PS,TL,NL),
    write('added:'), nl,
    write_person(PN)more, nl .

p(L,n(NN),NL) :-
    name_person(PN,NN),
    register_person(PN,L,NL),
    write_person(PN), nl .

member_chk(X,[Y|L]) :-
    member_chk(X,Y,L) .

member_chk(X,X,L) :- ! .
member_chk(X,_,[Y|L]) :-
    member_chk(X,Y,L) .

test :-
    p([],a(john,jane,22),L1),
    p(L1,a(jane,john,21),L2),
    p(L2,a(mark,mia,60),L3),
    p(L3,a(mia,mark,18),L),
    p(L,n(john),_),
    p(L,n(jane),_),
    p(L,n(mark),_),
    p(L,n(mia),_) .

Andre' Marien
bimandre@cs.kuleuven.ac.be
turpin@cs.utexas.edu (Russell Turpin) (02/15/91)
-----
I wrote:
>> As a computational engine, Prolog is Turing complete. The issue
>> is not whether the application can be implemented Prolog, but
>> whether one can implement this particular data structure, with
>> its particular performance characteristics.)

In article <1948@n-kulcs.cs.kuleuven.ac.be> bimandre@saturnus.cs.kuleuven.ac.be (Andre Marien) writes:
> Don't blame Prolog; some Prologs can do it: you need support for
> infinite terms. PrologII and PrologIII are the leading examples,
> I think. (not that others don't have it)
>
> The following program works as requested with inf tree support:

Point taken. After some study, I was even able to understand how
the infinite terms corresponded to this simple data structure.

But Jesus! H! Christ! What is the point of putting into a *logic*
language a mechanism by which even simple data structures become
impenetrable mazes? I suspect it would be easier to prove correct
the C program, in which the intended data structure is clear.

If a simple data structure such as two mutually referencing lists is
impenetrable in Prolog, what will become of a complex data structure
like a cone? (A cone of depth N is N circular queues. The i-th
queue has i elements, each of which points to the first i elements
of the (i+1)-st queue.)

Russell
thom@ecrc.de (Thom Fruehwirth) (02/15/91)
One of the main achievements of Prolog is that pointer manipulation
is replaced by the concept of unification and the logical variable.

I don't see why infinite data structures are 'impenetrable mazes'
and 'tangled webs' (as pop@cs.umass.edu put it). I also can't follow
her/his comments that 'variables would be unbound as soon as each
recursive subgoal suceeded' and why (s)he suggests using the
assert-predicate.

thom
newton@gumby.cs.caltech.edu (Mike Newton) (02/15/91)
[this was copied/input by mouse from another system, pardon any typos.... ]

%% Andre' Marien's beautiful example of the partners program had a few
%% problems if not called just right.  I've made it a little more general
%% and added a few comments....
%%
%% (Note that it doesnt take Prolog III to run this... SICStus ran it fine.)
%%
%% - mike (newton@gumby.cs.caltech.edu)
%%
%% ---------------- snip snip ----------------

/* 91.2.14, Andre' Marien, bimandre@cs.kuleuven.ac.be, Author */
/*
 * 91.2.15, Mike Newton, newton@gumby.cs.caltech.edu:
 *    Original version had some serious errors if you wanted to add people
 *    ''in the future''.  See the example people 'spam', 'tuna', 'mort' and
 *    'mindy' in the test predicate.  Runs under SICStus... [uses nonvar/1].
 */

/* register_person/3 -- add new entry to db w/o creating duplicates */

register_person(P,L,L) :-
    member_chk(P,L), ! .
register_person(P,L,[P|L]) .

write_person(P) :-
    name_person(P,NN), age_person(P,AN),
    ref_partner(P,S),
    name_person(S,NS), age_person(S,AS),
    write(name = NN), nl,
    write(' age' = AN), nl,
    write(' partner' = NS), nl,
    write(' partner age' = AS), nl .

/* N,S,A == name, significant_other, age */

name_person(p(N,_S,_A),N) .
ref_partner(p(_N,S,_A),S) .
age_person(p(_N,_S,A),A) .

p(L,a(NN,NS,NA),NL) :-
    name_person(PS,NS),        /* build up new Partner: PS = p(NS, PN, _) */
    ref_partner(PS,PN),        /* PN becomes record of PS and ...         */
    name_person(PN,NN),        /* build up Newuser: PN = p(NN, PS, NA)    */
    ref_partner(PN,PS),        /* ... PS becomes record of PN             */
    age_person(PN,NA),
    register_person(PN,L,TL),
    register_person(PS,TL,NL),
    write('added:'), nl,
    write_person(PN)/* more */,    /* as received by news, this had   */
                                   /* write_person(PN)more,           */
    nl .

p(L,n(NN),NL) :-
    name_person(PN,NN),
    register_person(PN,L,NL),
    write_person(PN), nl .

/*
 * check membership.  note that a var as a name should instantly fail
 * so that we create a new record (which will later become instantiated).
 */

member_chk(p(N,_,_),_) :-      /* can be eliminated     */
    var(N), !, fail.           /* by a nonvar test here */

member_chk(X,[Y|L]) :-
    member_chk(X,Y,L) .        /* assumes names are unique -- MON */

/*
 * note we can't unify either of N, or S in the head, as
 * subterms (infinite) may cause infinite hassle!
 */

member_chk(p(N1,S1,A), p(N2,S2,A),_L) :-
    nonvar(N1), nonvar(N2),
    N1 = N2, S1 = S2, ! .
member_chk(X,_,[Y|L]) :-
    member_chk(X,Y,L) .

test :-
    p([],a(john,jane,22),L1),
    p(L1,a(jane,john,21),L2),
    p(L2,a(mark,mia,60),L3),
    p(L3,a(mia,mark,18),L4),
    /* added ... -- MON */
    p(L4,a(spam,_,25),L5),
    p(L5,a(tuna,spam,34),L6),
    p(L6,a(mindy,_,30),L7),
    p(L7,a(_,mindy,-34),L8),   /* mindy's partner is -34 */
    p(L8,a(mort,mindy,_),L),   /* and its a mort         */
    p(L,n(tuna),_),
    p(L,n(spam),_),
    p(L,n(mindy),_),
    p(L,n(mort),_),
    /* ... added -- MON */
    p(L,n(john),_),
    p(L,n(jane),_),
    p(L,n(mark),_),
    p(L,n(mia),_) .
turpin@cs.utexas.edu (Russell Turpin) (02/16/91)
-----
In article <1991Feb15.101435.16112@ecrc.de> thom@ecrc.de (Thom Fruehwirth) writes:
> I don't see why infinite data structures are 'impenetrable mazes'
> and 'tangled webs' (as pop@cs.umass.edu put it). ...

It takes greater mathematical sophistication to understand infinite
data structures. I can explain a circular queue to the average
sophomore taking a Pascal class in about three minutes. To explain
its equivalence as an infinite data structure, I have to either talk
about the view of the finite structure as its pointers are
infinitely chased, or I have to talk about taking quotients, i.e.,
mapping the infinite structure onto the finite structure.

What is the point? What is the advantage of considering something
that is essentially finite as an infinite expansion? Oh, yeah. So
you can program it in Prolog.

I would still like to see a more complex structure, such as a cone,
treated in this fashion. Any takers?

Russell
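[For concreteness, a minimal sketch of the kind of term under discussion; this is illustrative only and not from the thread. It assumes a Prolog with rational-tree unification, such as PrologII/III.]

```prolog
% Illustrative sketch: a one-element circular queue viewed as a
% rational (infinite) term.  In a Prolog with rational-tree
% unification, the query
%
%     ?- Q = [a|Q].
%
% succeeds, binding Q to the infinite unrolling [a,a,a,...].
% The finite circular queue is the quotient of this infinite term,
% which is exactly the "infinitely chased pointers" view described
% in the posting above.
```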
dave@quintus.UUCP (David Bowen) (02/17/91)
The point of using Prolog is to program at a "higher level" than is
possible with languages like C; that is, to write code which is more
closely related to the problem that you are trying to solve than to
the machine on which the program is to be run.

Pointers are a low-level concept. The lack of pointers in Prolog is
one of its major advantages. Since there are no pointers there are
no dangling pointers, and one major source of difficult-to-debug
programming errors is entirely eliminated.

I agree with you that using infinite structures in Prolog is more
trouble than it's worth. If the problem to be solved intrinsically
requires a complex data structure such as a cone, then I would
suggest writing at least part of your program in a language like C.
You may be able to have the best of both worlds by defining the cone
as an abstract data type in C, and making all the operations on that
data type available to Prolog.

The way that I would program your original example of a tree where
each node contains data about a person, and you want constant-time
access from a person to his/her spouse, is to put all the data about
the individuals into the Prolog database. Assuming that you have
some good way of identifying individuals (either by name or by some
unique identifying number), and assuming that your Prolog system
gives you good (hash-table) indexing, you can have essentially
constant-time access to any record. You can then build the tree with
the nodes containing only the identifying numbers of the persons,
and look in the database whenever more information is needed.
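[The database approach described above might be sketched as follows; the predicate names and the use of assertz/1 are the editor's illustrative assumptions, not from the posting.]

```prolog
% Hypothetical sketch: people stored as facts keyed by a unique
% identifier.  First-argument indexing on Id gives essentially
% constant-time lookup in most Prolog systems.

:- dynamic(person/3).        % person(Id, Name, Age)
:- dynamic(spouse/2).        % spouse(Id, SpouseId)

add_couple(Id1, Name1, Age1, Id2, Name2, Age2) :-
    assertz(person(Id1, Name1, Age1)),
    assertz(person(Id2, Name2, Age2)),
    assertz(spouse(Id1, Id2)),
    assertz(spouse(Id2, Id1)).

spouse_of(Name, SpouseName) :-
    person(Id, Name, _),
    spouse(Id, SpouseId),
    person(SpouseId, SpouseName, _).
```

The tree nodes then need hold only the Id; a database lookup replaces the pointer dereference.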
turpin@cs.utexas.edu (Russell Turpin) (02/17/91)
-----
In article <1489@quintus.UUCP> dave@quintus.UUCP (David Bowen) writes:
> The point of using Prolog is to program at a "higher level" than
> is possible with languages like C; that is, to write code which
> is more closely related to the problem that you are trying to
> solve than to the machine on which the program is to be run.
>
> Pointers are a low-level concept. ...

Sometimes the problem one wants to solve directly concerns pointers
and data structures based on them. Consider someone who has invented
a new data structure with promising performance characteristics, who
wants to run some experiments by making a trial implementation.
Alternatively, consider someone who wants to make one spreadsheet
program run against the file produced by a different application.
These kinds of problems are going to remain with us for a long time
yet.

> I agree with you that using infinite structures in Prolog is
> more trouble than it's worth. If the problem to be solved
> intrinsically requires a complex data structure such as a cone,
> then I would suggest writing at least part of your program in
> a language like C. ...

I would argue that if a logic language is "more trouble than it's
worth" to deal with a particular domain, that is because the
underlying logic does not include the necessary concepts. Finite
lists are not enough, and the programs for infinite structures are
complex, even in simple cases. (Nor is it clear that one can always
create the data structure of concern.)

Of course, I am biased, because I am researching the use of logic
languages for data structure programming. In part, I have developed
a first-order logic that includes pointer-based data structures as
natural objects.

Russell
lee@munnari.oz.au (Lee Naish) (02/18/91)
This is all getting rather pointless, but who can resist a bit of
hacking....

One of the limitations of other people's code is that you can get
from one record in the data structure to another record, but you
can't go on from there to other parts of the data structure (eg,
children of that node). The following code allows this more complex
navigation. One consequence of the extra generality is that the
basic data structure has to be set up completely before adding the
extra "pointers". Dynamic update does not seem feasible without
destructive assignment (it would be nice if someone could prove this
wrong).

Another feature of the code is that it is pure (unlike the other
postings, which contain cut and nonvar). The declarative semantics
is straightforward if you allow rational trees. The code actually
works on systems such as NU-Prolog which don't support infinite
terms properly.

lee

% Implementation of binary search tree with "pointers"
% from one tree node to another.  Nodes can point to each other
% or higher in the tree (so the data structure can be infinite).
% Note that it could just as easily be an arbitrary binary tree,
% or even a set of binary trees linked together, but insertion
% and retrieval code would have to be changed.
%
% Due to the logical nature of things (no destructive update), updating
% the tree will not change "pointers" to the old version of the tree.
% Thus the tree should be constructed first, then all the pointers
% added.
%
% Each internal node is of the form pt(Key, LTree, RTree, PTree)
% For generality, there should be a data field as well as key
%
% All the code is "pure"; there has been no attempt to remove excess
% choice points or avoid creating unnecessary heap structures.

% Check something is a "pointer tree"
% Doesn't check pointers (can this be done
% without looping, even in languages like Prolog-III?)

ptree(nil).
ptree(pt(_K, LT, RT, _PT)) :-
    % key(K),      % could check type of key
    ptree(LT),
    ptree(RT).
    % pt(PT).      % this could cause loops -> don't check!

% insert key (+ no data) plus pointer into ptree
% (generally best to leave pointer uninstantiated here)

ptree_insert(nil, K, PT, pt(K, nil, nil, PT)).
ptree_insert(pt(K0, LT0, RT0, PT0), K, PT, pt(K0, LT, RT0, PT0)) :-
    K0 @> K,
    ptree_insert(LT0, K, PT, LT).
ptree_insert(pt(K0, LT0, RT0, PT0), K, PT, pt(K0, LT0, RT, PT0)) :-
    K0 @=< K,      % = case rather dubious for BST
    ptree_insert(RT0, K, PT, RT).

% retrieve subtree with given key at root

ptree_find(pt(K0, LT0, RT0, PT0), K0, pt(K0, LT0, RT0, PT0)).
ptree_find(pt(K0, LT0, RT0, PT0), K, T) :-
    K0 @> K,
    ptree_find(LT0, K, T).
ptree_find(pt(K0, LT0, RT0, PT0), K, T) :-
    K0 @< K,
    ptree_find(RT0, K, T).

% given tree with particular key in the root,
% follow the pointer

ptree_ptr(pt(_, _, _, PT), PT).

% convert list of pairs+singletons into ptree with pointers
% between members of each pair

list_to_ptree(L, T) :-
    build_ptree_a(L, nil, T),
    add_ptrs_l(L, T).

% as above with accumulator
% ignore pointers for pairs

build_ptree_a([], T, T).
build_ptree_a(single(A).As, T0, T) :-
    ptree_insert(T0, A, nil, T1),    % single -> pointer = nil
    build_ptree_a(As, T1, T).
build_ptree_a(pair(A, B).As, T0, T) :-
    ptree_insert(T0, A, _, T1),
    ptree_insert(T1, B, _, T2),
    build_ptree_a(As, T2, T).

% add pointers for each pair

add_ptrs_l([], _).
add_ptrs_l(single(_).As, T) :-
    add_ptrs_l(As, T).
add_ptrs_l(pair(A, B).As, T) :-
    add_ptrs(T, A, B),
    add_ptrs_l(As, T).

% as above, but for single pair

add_ptrs(T, A, B) :-
    ptree_find(T, A, TA),
    ptree_find(T, B, TB),
    ptree_ptr(TA, TB),
    ptree_ptr(TB, TA).    % may create infinite term!

% beware of printing infinite terms when testing!

test(Baz) :-
    list_to_ptree([single(mid), pair(black,white), pair(foo,baz)], T),
    ptree_find(T, foo, TF),
    ptree_ptr(TF, pt(Baz, _, _, _)).
ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (02/19/91)
In article <17914@cs.utexas.edu>, turpin@cs.utexas.edu (Russell Turpin) writes:
> It takes greater mathematical sophistication to understand
> infinite data structures. I can explain a circular queue to
> the average sophomore taking a Pascal class in about three
> minutes.

*PLEASE* come and teach my 2nd year Pascal data structures class for me!

Look, folks, the answer is simple. There *are* data structures
("rats' nests") that are hard to do using Prolog. Never mind the
fact that Prolog implementations (some via hacks, some via
principled redefinition of equality) may let your program handle
cyclic terms. If the Pascal operations involve *changing* pointers,
cyclic terms aren't enough. They don't like being changed.

There are three approaches.

1. Sit back and whine. (If the hat fits...)

2. Decide to be logical about it. If you want to represent a finite
   set of arcs *DO* *IT*, don't delude yourself that Prolog
   variables are the same thing as Pascal pointers. That means not
   going as fast as Pascal pointer hacking. So? What god said on
   which mountain that Prolog _had_ to be good at everything?

3. Remember that data structures are MEANS, not ENDS. Redesign the
   program so that it can use data structures that Prolog _is_ good
   at. (We have an absolute guarantee that the slowdown is never
   going to be more than O(lgN) where N is the number of items in a
   collection.)

What, for example, is the task for which `cones' are an appropriate
tool? (I've never met them before, so this is a genuine question.
From the brief description we've had so far, I've no idea what they
represent or what the operations on them are.)
--
Professional programming is paranoid programming
aipdc@castle.ed.ac.uk (Paul Crowley) (02/20/91)
In article <4789@goanna.cs.rmit.oz.au> ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes:
>1. Sit back and whine. (If the hat fits...)

A classic. I take it that in order that no-one thinks _you're_ a
whiner, you program mainly in INTERCAL.

Prolog - Love it or Leave it. Riiiiight.
 ____
\/ o\   Paul Crowley    aipdc@uk.ac.ed.castle
\  /
/\__/   Trust me.  I know what I'm doing.
 \/
Chris.Holt@newcastle.ac.uk (Chris Holt) (02/20/91)
ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes:
>In article <17914@cs.utexas.edu>, turpin@cs.utexas.edu (Russell Turpin) writes:
>> It takes greater mathematical sophistication to understand
>> infinite data structures. I can explain a circular queue to
>> the average sophomore taking a Pascal class in about three
>> minutes.

>What, for example, is the task for which `cones' are an appropriate
>tool? (I've never met them before, so this is a genuine question.
>From the brief description we've had so far, I've no idea what they
>represent or what the operations on them are.)

Not cones, exactly, but: Given a large data base, with many ways of
accessing a given piece of data (e.g. "snow" can be accessed via
"white", "crystalline", and "cold"); where updates are allowed
(since one cannot create an entire new copy of the data base with
the updated value). There has to be an axiom that insists that the
terminus of all the access paths is always the same. And the usual
declarative model, although fine for trees, isn't good enough for
graphs. Does this help?
-----------------------------------------------------------------------------
Chris.Holt@newcastle.ac.uk      Computing Lab, U of Newcastle upon Tyne, UK
-----------------------------------------------------------------------------
 "[War's] glory is all moonshine... War is hell." - General Sherman, 1879
markh@csd4.csd.uwm.edu (Mark William Hopkins) (02/24/91)
In article <1489@quintus.UUCP> dave@quintus.UUCP (David Bowen) writes:
>
>The point of using Prolog is to program at a "higher level" than is
>possible with languages like C; that is, to write code which is more
>closely related to the problem that you are trying to solve than to the
>machine on which the program is to be run.
>
>Pointers are a low-level concept. The lack of pointers in Prolog is one of
>its major advantages. Since there are no pointers there are no dangling
>pointers, and one major source of difficult-to-debug programming errors is
>entirely eliminated.

I disagree with the premise that there are no pointers in Prolog.
They're just called something else (see below). Anyhow...

I used to think this way several years ago too, before I began to
intuit and experience the inherent relationship between pointers and
associative memory. The conclusion of these experiences (and the
consequent innovations in my programming practice) is that pointers
are an extremely high-level concept that would be used in programs
whose behavior borders on the same kind of intelligence we have. The
mathematics underlying their use is graph theory. They're difficult
to deal with NOT because they're low-level (as if low-level things
are supposed to somehow be difficult), but because their mathematics
is so substantial.

The tendency to try and cast the whole universe into trees (not to
mention directed acyclic graphs), and eliminate the inherently
graph-theoretical concept of pointers, only serves in the long run
to re-introduce the very kinds of redundancies that intelligence was
supposed to eliminate. Also, trying to cast everything into trees is
just a cop-out from having to do Real Graph Theory.

For instance, I know of an efficient parsing algorithm that could be
characterized as breadth-first non-deterministic LR(k), which uses
pointers in a non-trivial way to enable extensive sharing amongst
its parse trees. The parse graph of a sentence with a 1000-fold
ambiguity would take space, and time to create, maybe 5 to 10 times
larger than that of the individual trees (polynomial parsing for
exponential ambiguity). Pointers figure into this application via
the concepts of "aliasing" and "sharing", which are ultimately the
high-level concepts that have to do with graph theory.

A good data base-handling program, if it needed to organize the same
information sorted several ways at once, would also do this by
sharing. Each sorted structure would be a structure consisting only
of pointers to the actual data objects.

A program I wrote to read in Context-Free Grammars made non-trivial
use of cyclic structures to represent CFG's as cyclic AND/OR trees.
This enabled the high-level and more or less standard set of tools
for dealing with AND/OR trees to be used in processing the grammar.

The semantics of these kinds of programs will often use the concept
of "least fixed points". CFG's can be represented by Context-Free
Expressions (analogous to Regular Expressions) defined by mutually
recursive systems of equations involving regular expressions, whose
meanings are defined by least-fixed-point semantics. And this fact
is effectively used in programs like the one described above.
There's this inherent relation between cyclic structures and
recursion.

In any case, I thought Prolog was supposed to be so powerful partly
because of the way it integrates pointers into logic programming.
Only you call them logical variables. Unification is really a graph
algorithm in disguise...

So my perspective today is that something which does NOT use
pointers, or something equivalent, where associative memory or
sharing is required, is low-level, rather than the opposite.
ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (02/25/91)
In article <1991Feb19.235323.7748@newcastle.ac.uk>, Chris.Holt@newcastle.ac.uk (Chris Holt) writes:
> ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes:
> >What, for example, is the task for which `cones' are an appropriate
> >tool? (I've never met them before, so this is a genuine question.
> >From the brief description we've had so far, I've no idea what they
> >represent or what the operations on them are.)

> Not cones, exactly, but: Given a large data base, with many ways
> of accessing a given piece of data (e.g. "snow" can be accessed
> via "white", "crystalline", and "cold"); where updates are allowed
> (since one cannot create an entire new copy of the data base with
> the updated value). There has to be an axiom that insists that
> the terminus of all the access paths is always the same. And the
> usual declarative model, although fine for trees, isn't good enough
> for graphs. Does this help?

No, it doesn't help, and that for two reasons.

1) "cones" were presented as an example of something Prolog *should*
   do. It doesn't help to explain something else instead.

2) The response is rather vague, so I am far from sure that I
   understand it. As I understand it, we have two mappings:

        st : strings -> termini
        tv : termini -> values

   and an interface like

        fetch(s:string) = s.st.tv

        create(L: string*, v: value) is
            t := NEW terminus; t.tv := v; for s in L do s.st := t

        update(s:string, v:value) is s.st.tv := v

   modulo hand-waving.

Cyclic terms have nothing to offer for this problem; indeed cyclic
terms only make updates very much harder.

Assuming this interface, it is very simple to implement the required
mappings. Assume that we have

        empty_mapping(-EmptyMapping)
        lookup_mapping(+Key, +Mapping, -Val)
        update_mapping(+Key, +OldMapping, +Val, -NewMapping)

We'll represent an instance of this problem by a pair db(ST, TV)
where ST is a mapping which maps names to representatives (think
UNION-FIND) and TV is a mapping which maps representatives to
values.
empty_data_base(db(E,E)) :-
    empty_mapping(E).

create_data_base_cluster(Keys, db(ST0,TV0), Val, db(ST,TV)) :-
    /* from library(Ordered), picks the smallest */
    min_member(Representative, Keys),
    update_mapping(Representative, TV0, Val, TV),
    update_cluster(Keys, ST0, Representative, ST).

update_cluster([], ST, _, ST).
update_cluster([Key|Keys], ST0, Representative, ST) :-
    update_mapping(Key, ST0, Representative, ST1),
    update_cluster(Keys, ST1, Representative, ST).

update_data_base(Key, db(ST,TV0), Val, db(ST,TV)) :-
    lookup_mapping(Key, ST, Representative),
    update_mapping(Representative, TV0, Val, TV).

lookup_data_base(Key, db(ST,TV), Val) :-
    lookup_mapping(Key, ST, Representative),
    lookup_mapping(Representative, TV, Val).

The asymptotic cost of these operations is determined by the
asymptotic cost of the "mapping" operations, which depends on what
sort of indexing you have available. Given that you have to enter
and look up the keys *somehow*, the absolute maximum that some
non-logical construct could buy you would be a factor of two. If you
want to add new equivalences dynamically, a UNION-FIND data
structure is appropriate for ST.

I don't quite grasp what "one cannot create an entire new copy of
the data base with the updated value" means. That is precisely what
one normally does in Prolog, EXCEPT that the new copy shares most of
the structure of the old. If I do

    X = db(ST,TV0),         % unpack X
    foo(...TV0...TV...),    % make a "new copy" of TV
    Y = db(ST,TV)           % construct Y

then the "ST" component of Y will be the same physical storage
structure as the "ST" component of X, and in practice, virtually all
of TV will be shared with TV0. It turns out that in Prolog, updating
data structures involves building new ones, thus turning over
garbage, but that has low cost in good implementations.

Nor is it clear to me what "the usual declarative model" is supposed
to be, nor in what way it fails for graphs.
The usual declarative model of a graph is a pair containing a set of nodes and a set of edges (that's what all the textbooks I checked use) and that works just fine. I am a bear of very little brain. I can understand quite well what it means to have several mappings, but cyclic data structures confuse me in _any_ programming language. I have used Lisp systems that could print cyclic terms (CIRCLPRINT in InterLisp) and I'm afraid that anyone who calls X = #1=[a,#2=[#1^]|#2^] or the like "readable" is considerably more intelligent than I. {Read #n= as defining n, #n^ as referring to n.} I am still interested in a *clear* description of - what "cones" _are_, and - what they are _for_. -- The purpose of advertising is to destroy the freedom of the market.
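[The db/2 code in O'Keefe's posting leaves empty_mapping/1, lookup_mapping/3 and update_mapping/4 abstract. A naive association-list instantiation, purely for illustration (O(N) per operation; a balanced tree would give the O(lg N) bound he cites), might be:]

```prolog
% Illustrative-only instantiation of the abstract mapping interface
% as an unordered list of Key-Val pairs.  Not from the thread.

empty_mapping([]).

lookup_mapping(Key, [Key-Val|_], Val) :- !.
lookup_mapping(Key, [_|Rest], Val) :-
    lookup_mapping(Key, Rest, Val).

update_mapping(Key, [], Val, [Key-Val]).
update_mapping(Key, [Key-_|Rest], Val, [Key-Val|Rest]) :- !.
update_mapping(Key, [Pair|Rest0], Val, [Pair|Rest]) :-
    update_mapping(Key, Rest0, Val, Rest).
```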
Chris.Holt@newcastle.ac.uk (Chris Holt) (02/28/91)
ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes, in response to
my rather confused attempt to describe when circular data structures
might be useful:

>> [Chris Holt] Given a large data base, with many ways
>> of accessing a given piece of data (e.g. "snow" can be accessed
>> via "white", "crystalline", and "cold"); where updates are allowed
>> (since one cannot create an entire new copy of the data base with
>> the updated value). There has to be an axiom that insists that
>> the terminus of all the access paths is always the same. And the
>> usual declarative model, although fine for trees, isn't good enough
>> for graphs.

>2) The response is rather vague, so I am far from sure that I understand
>   it. As I understand it, we have two mappings:
>        st : strings -> termini
>        tv : termini -> values
>   and an interface like
>        fetch(s:string) = s.st.tv
>        create(L: string*, v: value) is
>            t := NEW terminus; t.tv := v; for s in L do s.st := t
>        update(s:string, v:value) is s.st.tv := v
>   modulo hand-waving.

>Cyclic terms have nothing to offer for this problem; indeed cyclic
>terms only make updates very much harder.

The problem involves cycles when values contain strings and/or
termini. E.g. where "snow".st returns a terminus and "snow".st.tv
returns a value that is a function, whose domain includes name and
properties:

    "snow".st.tv.name = "snow"
    "snow".st.tv.properties = {"white","crystalline","cold"}.

We can also have substances in value domains:

    "white".st.tv.substances = {"snow","paper","salt"}
    "crystalline".st.tv.substances = {"snow","salt","diamond"}
    "cold".st.tv.substances = {"snow","liquid nitrogen"}
    "frozen".st.tv.substances = {"snow","ice cream"}.

Now suppose we want to add that snow melts at 32 degrees F. We
create a new value,

    "snow".st.tv + (melting_point -> 32_degrees_F)

where the latter just indicates a mapping from one value to the
other.

We can create a new terminus if we know everything that returns the
old "snow".st as its value; in general, this may be difficult. As
I've written it above, everything goes through "snow", so we can
change "snow".tv without worrying about dangling references;

    (("white" and "frozen").st.tv.substances).st.tv.melting_point

yields 32_degrees_F (where the "and" indicates intersection). Don't
read too much into this, I wasn't trying to be very profound.

>Nor is it clear to me what "the usual declarative model" is supposed to
>be, nor in what way it fails for graphs. The usual declarative model of
>a graph is a pair containing a set of nodes and a set of edges (that's
>what all the textbooks I checked use) and that works just fine.

If one can get from node x to another node via either path a or path
b, there is an implicit invariant x.a = x.b. Creating a new graph by
x.a := new_node_value doesn't maintain the invariant; that's all I
meant.

>I am still interested in a *clear* description of
> - what "cones" _are_, and
> - what they are _for_.

Russell Turpin posted this; I assume you saw it.
-----------------------------------------------------------------------------
Chris.Holt@newcastle.ac.uk      Computing Lab, U of Newcastle upon Tyne, UK
-----------------------------------------------------------------------------
 "A peace I hope with honour." - Disraeli 1878
thom@ecrc.de (Thom Fruehwirth) (03/01/91)
Instead of encoding the problem with a data structure, one could
encode it with a program structure. Functions become predicates:

properties(snow,white).
properties(snow,crystalline).
properties(snow,cold).

substances(white,snow).
substances(white,paper).
substances(white,salt).
substances(crystalline,snow).
substances(crystalline,salt).
substances(crystalline,diamond).
substances(cold,snow).
substances(cold,liquid_nitrogen).
substances(frozen,snow).
substances(frozen,ice_cream).

Adding that snow melts at 32 degrees F:

:- assert(melting_point(snow,'32_degrees_F')).

Asking the melting point of something white and frozen:

:- substances(white,S), substances(frozen,S), melting_point(S,MP).

yields S=snow, MP='32_degrees_F'.

Thom Fruehwirth