[comp.lang.prolog] Can I talk about Parlog here?

eiverson@nmsu.edu (Eric Iverson) (11/13/90)

I am in the process of writing a parser in Parlog on the Sequent
Symmetry and am having a rather difficult time of it.  I am beginning
to suspect that it is Parlog and not me that is at fault.  One main
reason for this suspicion is that I typed in a parser program from
Conlon's "Programming in Parlog" and it does *not* work.  Let me
rephrase that.  It does not work *consistently.*  I can get it to do
sentence fragments but not sentences.  Does anyone have a parallel
parser that does work in Parlog?  Has anyone else had problems with
deeply recursive and/or large programs?  Why can I only recompile about
4 or 5 times on average before I get a segmentation error and have to
reboot?  This is getting very annoying.

--
------------------------------------------------------------------------
Eric Iverson				Internet: eiverson@nmsu.edu
Computing Research Lab
Box 30001/3CRL				Life is something to do when
New Mexico State University		you can't get to sleep.
Las Cruces, NM 88003-0001			-Fran Lebowitz
(505) 646-5711

steve@CompSci.Bristol.AC.UK (Steve Gregory) (11/14/90)

In article <EIVERSON.90Nov12212535@thrinakia.nmsu.edu> eiverson@nmsu.edu (Eric Iverson) writes:
>
>I am in the process of writing a parser in Parlog on the Sequent
>Symmetry and am having a rather difficult time of it.  I am beginning
>to suspect that it is Parlog and not me that is at fault.  One main
>reason for this suspicion is that I typed in a parser program from
>Conlon's "Programming in Parlog" and it does *not* work.........

The Parlog parser published in Tom Conlon's book is correct, as far as
I'm aware, if not very efficient.  There is no difficulty in running it 
on a full implementation of Parlog, such as MacParlog or PC-Parlog.  The 
problem appears to be that the Parallel Parlog system (as on the Sequent)
imposes the restriction that "deep" guards, which are heavily used in
Tom's parser, must be tried sequentially, not in (or-)parallel.
The solution is to change the clause search operator to ';' in the
procedures that use deep guards.  This seems to work, though I don't
know why the original version simply fails with no error message.
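
To illustrate (schematically; this is not Conlon's code, and the
relation try_forms/2 is made up for the example): a guard that calls a
user-defined relation such as parse_form/3 is "deep", as opposed to a
"flat" guard built only from primitive tests.  Writing ';' instead of
'.' between the clauses makes the system try them one at a time, in
textual order:

    mode try_forms(?, ^).

    % Both guards call the user-defined parse_form/3, so both are deep.
    % The ';' gives sequential search: the second clause is tried only
    % after the first clause's guard has failed.
    try_forms(Words, P) <- parse_form(Words, noun_phrase, P1) : P = P1 ;
    try_forms(Words, P) <- parse_form(Words, verb_phrase, P2) : P = P2 .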

Specific questions about the behaviour of the Parallel Parlog system are
best sent to Imperial College (parlog@doc.ic.ac.uk).

Steve Gregory
University of Bristol
steve@cs.bris.ac.uk

eiverson@nmsu.edu (Eric Iverson) (11/14/90)

In article <EIVERSON.90Nov12212535@thrinakia.nmsu.edu> eiverson@nmsu.edu (Eric Iverson) writes:

> I am in the process of writing a parser in Parlog on the Sequent
> Symmetry and am having a rather difficult time of it.  I am beginning
> to suspect that it is Parlog and not me that is at fault.  One main
> reason for this suspicion is that I typed in a parser program from
> Conlon's "Programming in Parlog" and it does *not* work.  Let me
> rephrase that.  It does not work *consistently.*  

It has come to my attention that the above statement may be
interpreted to mean that Conlon's parser does not work.  In fact, I
was using this parser as proof that it was IC Parlog that was
malfunctioning.  As it turns out, this "malfunction" is actually an
inability to handle deep guards and is not a malfunction at all.  When
I asked if there were any parsers that worked in Parlog, I should have
specified IC Parlog, as Conlon's parser is no doubt perfectly
functional in dialects which handle deep guards.  The question still
stands:  Is there an efficient parallel parser that works under IC
Parlog without crashing it?  Even after minor modification, Conlon's
parser is still generating copious segmentation fault errors in IC
Parlog when run in conjunction with my lexicon lookup routine.  This
does not surprise me, as *everything* I write in IC Parlog seems to
generate segmentation faults and dead child processes given enough
time or processors.  If that doesn't happen, I can usually crash it
after recompiling my code a few times.  Perhaps I have some deep
guards lurking around that are causing these problems.  Why then do
they not cause these problems consistently?  Any help on these matters
would be appreciated.

--
------------------------------------------------------------------------
Eric Iverson				Internet: eiverson@nmsu.edu
Computing Research Lab
Box 30001/3CRL				Life is something to do when
New Mexico State University		you can't get to sleep.
Las Cruces, NM 88003-0001			-Fran Lebowitz
(505) 646-5711

ken@aiai.ed.ac.uk (Ken Johnson) (11/15/90)

In article <EIVERSON.90Nov13182825@aigyptos.nmsu.edu> eiverson@nmsu.edu
(Eric Iverson) writes:

> Is there an efficient parallel parser that works under IC
> Parlog without crashing it?

Here is a version of Conlon's parser written in Strand-88.  The changes
needed to convert it to Parlog should be obvious, i.e.  I have not tried
to do it.  The code was written by Malcolm Brown of Strand Software; I
had nothing to do with it. 
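
Roughly, though (an untested sketch on my part): Strand binds outputs
with ':=' assignments and uses 'otherwise' for default clauses, while
Parlog declares argument modes and binds outputs by head unification,
with ';' between clauses giving the sequential search that 'otherwise'
approximates.  For instance, add_parse/3 from the listing below might
become:

    mode add_parse(?, ?, ^).

    add_parse([], Sofar, Sofar) ;              % no parse: pass list on
    add_parse(Parse, Sofar, [Parse|Sofar]).    % got one: add it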


%	Malcolm Brown
%	22/02/1990

% 	parallel parser.  based on a prolog program given to me by Ken Johnson
%	from AIAI in Edinburgh, who in turn took it from Tom Conlon's Parlog book
%	pages 241 - 252

%	This parser finds all possible parses of an input sentence.

%	The module has two entry points:
%	1) parse([list of words], Resulting_Parse)
%	2) parse_form([list of words], form, Resulting_Parse)
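%	e.g. parse([the,boy,smiles], Parse) binds Parse to a tuple
%	{sentence, Parses} where Parses lists the parses found;
%	[] is used as the failure token when no parse exists.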

%	The program is similar in style to the original prolog program, with
%	the exception of linking the dictionary with the grammar. If a dictionary
%	for an expansion cannot be found then a grammar rewrite is looked for in
%	the grammar database.  This takes care of the case:
%		parse_form([noun],noun_expression,Parse) 
%	and there is a rewrite of the form: noun_expression -> noun 
%	
%	The original prolog program was roughly 1.75 A4 pages, so about 100 lines
%	The Parlog program was about 2 pages long so about 120 lines
%	The Strand program (without comments, to be comparable) is about 160 lines
%	


%--------------------------------------------------

-compile(free).
-exports([parse/2,parse_form/3,expansions_parse/5,splits_parse/4]).


%--------------------------------------------------

parse(S,P):-
	parse_form(S,sentence,P).



parse_form([],Form,P):-
	P:={Form,[]}.

parse_form([Word],Form,P):-
	dictionary(Form,Dict_or_grammar),
	terminal_or_not(Dict_or_grammar,Word,P,Found,Form). % check if Form is a terminal 

parse_form(Words,Form,P):-
	otherwise |
	grammar(Form,Expansions),
	expansions_parse(Words,Expansions,[],P_list,Res),
	check_parse(Res,P_list,Form,P).	% check result

% if Form is not a terminal, i.e. a rewrite rule has been returned by dictionary,
% then parse the Expansions
terminal_or_not({grammar,Expansions},Word,P,Res,Form):-
	expansions_parse([Word],Expansions,[],P_list,Res),
	check_parse(Res,P_list,Form,P).

terminal_or_not(Dict,Word,P,Found,Form):-
	otherwise |
	member(Word,Dict,Found),
	check_parse(Found,Word,Form,P).

% find out if the parse was successful

check_parse(fail,_,Form,P):-
	P:= {Form,[]}.  	% [] is the failure token.

check_parse(_,P_list,Form,P):-
	otherwise |
	P:= {Form,P_list}.	% return the list of parses


%--------------------------------------------------

expansions_parse(Words,[Exp | Exp_rest],Sofar,P,Res):-
	one_expansion(Words,Exp,One_parse),
	add_parse(One_parse,Sofar,New_sofar),	% add parse if Exp parsed Words
	expansions_parse(Words,Exp_rest,New_sofar,P,Res).	% try next expansion


expansions_parse(W,[],[],_,Res):-  	%  couldn't parse any expansion
	Res := fail.

expansions_parse(_,[],Parses,P,Res):-	% found some parses
	otherwise |
	Res := success,
	P:= Parses.


add_parse([],Sofar,New_sofar):-
	New_sofar := Sofar.		% no parses so short circuit

add_parse(Parse,Sofar,New_sofar):-	% got one.  add to the list
	otherwise |
	New_sofar := [Parse | Sofar].


%--------------------------------------------------

%	All possible parses resulting from all splits of words are
% 	inserted into the Parse_stream.  Thus each element in this stream
%	represents a valid parse of Words using the expansion [Form1,Form2]
	
one_expansion(Words,[Form1,Form2],Parse):-
	gen_splits(Words,Splits),
	splits_parse(Splits,Form1,Form2,Parse_stream),
	check_split_parse(Parse_stream,Parse).	% were any parses found?

one_expansion(Words,[Form],Parse):-
	parse_form(Words,Form,Parse1),
	check_single_exp(Parse1,Parse).

check_split_parse([[] | Rest], Parse):-		% failed parse. look in the tail
	check_split_parse(Rest,Parse).

check_split_parse([[Parse1,Parse2]| _ ],Parse):-	% got one
	Parse := [Parse1,Parse2].

check_split_parse([],Parse):-			% no more
	Parse := [].


%--------------------------------------------------

check_single_exp({_,[]},P):-		% didn't parse
	P := [].

check_single_exp(Parse,P):-		% did parse
	otherwise |
	P:= Parse.


%--------------------------------------------------

%	In this procedure an individual split is parsed and the result
%	is merged with the results of parsing the remaining splits, into
%	the return stream.  (Note an inefficient, software merge is used here)
 	
splits_parse([pair(Front,Back)|Splits],Form1,Form2,P_stream):-
	parse_form(Front,Form1,Parse1),
	parse_form(Back,Form2,Parse2),
	merge(P_new,Res_stream,P_stream),
	check_split(Parse1,Parse2,Res_stream),
	splits_parse(Splits,Form1,Form2,P_new).


splits_parse([],_,_,P_stream):-
	P_stream := [].

check_split({_,[]},_,Res_stream):-		% no parse for Front split
	Res_stream := [].

check_split(_,{_,[]},Res_stream):-		% no parse for Back split
        Res_stream := [].

check_split(P1,P2,Res_stream):-			% parsed front and back
	otherwise |
	Res_stream := [[P1,P2]].


%--------------------------------------------------
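
%	gen_splits/2 enumerates the splits of a word list into a non-empty
%	front and back, e.g. gen_splits([the,boy,smiles], S) binds S to
%	[pair([the],[boy,smiles]), pair([the,boy],[smiles])].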

gen_splits([Word], Splits):-
	Splits:=[].

gen_splits([Word | Words], Splits):-
	otherwise |
	Splits := [pair([Word],Words) | S1],
	gen_splits(Words,Tsplits),
	front_insert(Word,Tsplits,S1).

	

front_insert(_,[],S):-
	S:=[].

front_insert(Word,[pair(F,B) | Trest],Splits):-
	Splits := [pair([Word|F],B) | S1],
	front_insert(Word,Trest,S1).
	

merge([H|T],B,O):-
	O:=[H|O1],
	merge(T,B,O1).


merge(A,[H|T],O):-
	O:=[H|O2],
        merge(A,T,O2).

merge(A,[],O):-
	O:=A.

merge([],A,O):-
	O:= A.
	
	
member(I,[I | _], Res):-
	Res := I.

member(I,[_|T],Res):-
	otherwise |
	member(I,T,Res).

member(_,[],Res):-
	Res := fail.

%-------------------------

-mode dictionary(?,^).

dictionary(adverb,[quickly,easily]).
dictionary(determiner,[a,an,the]).
dictionary(adjective,[good,bad,naughty,drunk]).
dictionary(noun,[boy,girl,table,tree,apple,ball]).
dictionary(verb,[likes,kicks,smiles,admires,eats]).
dictionary(Cat,Res):-   	% not a terminal so try a rule
	otherwise |
	grammar(Cat,G),
	Res := {grammar,G}.

-mode grammar(?,^).

% repeat expansions force the system to find the same solution twice

grammar(noun_expression,[[noun],[adjective,noun_expression]]).
grammar(noun_phrase,[[determiner,noun_expression]]).
grammar(sentence,[[noun_phrase,verb],[noun_phrase,verb_phrase]]).
grammar(verb_expression,[[verb],[adverb,verb]]).
grammar(verb_phrase,[[verb_expression,noun_phrase]]).
grammar(_,Res):-
	otherwise |
	Res := [].


-- 
Ken Johnson, AI Applications Inst., 80 South Bridge, Edinburgh EH1 1HN
       E-mail ken@aiai.ed.ac.uk,   ******************************************
phone 031-225 4464 extension 213   **  `You can resole your boot, but you  **
                 new quotation --> ** can't reboot your soul' [The Oracle] **

felkl@aste16.Berkeley.EDU (Feliks Kluzniak) (11/16/90)

|> The question still
|> stands:  Is there an efficient parallel parser that works under IC
|> Parlog without crashing it?  Even after minor modification, Conlon's
|> parser is still generating copious segmentation fault errors in IC
|> Parlog when run in conjunction with my lexicon lookup routine.  This
|> does not surprise me, as *everything* I write in IC Parlog seems to
|> generate segmentation faults and dead child processes given enough
|> time or processors.  If that doesn't happen, I can usually crash it
|> after recompiling my code a few times.  Perhaps I have some deep
|> guards lurking around that are causing these problems.  Why then do
|> they not cause these problems consistently?  Any help on these matters
|> would be appreciated.

Ah, I've never used IC Parlog, and I'm not particularly interested in
parallel parsing. What I AM interested in is why people choose to use
logic programming systems (parallel or otherwise) that ever crash with a
segmentation fault. This is not an error in your program, no matter how
erroneous it is: this is an inexcusable error in the implementation. The
only solution is to find an implementation written by responsible
professionals.
 
No offence intended to anyone. And yes, I think I do know what I am
talking about.

-- Feliks Kluzniak

mmh@cs.qmw.ac.uk (Matthew Huntbach) (11/19/90)

In article <1990Nov15.190217.21923@ida.liu.se> felkl@aste16.Berkeley.EDU (Feliks Kluzniak) writes:
>Ah, I've never used IC Parlog, and I'm not particularly interested in
>parallel parsing. What I AM interested in is why people choose to use
>logic programming systems (parallel or otherwise) that ever crash with a
>segmentation fault.

Because these are experimental systems. If nobody ever
experimented with new languages, we'd still be writing in
machine code. The developers of new languages need experience
and feedback from potential users before doing all the tedious
work involved in getting something to the perfection required
by a commercial system.

Those taking the brave step of using an experimental language
have the advantage of being one step ahead in their research if
the language really is useful and enables one to express things
with ease which would otherwise be extremely difficult to work
with (as I believe is the case with Parlog).

Matthew Huntbach

felkl@aste16.Berkeley.EDU (Feliks Kluzniak) (11/21/90)

In article <3056@sequent.cs.qmw.ac.uk>, mmh@cs.qmw.ac.uk (Matthew
Huntbach) writes:

|> The developers of new languages need experience
|> and feedback from potential users before doing all the tedious
|> work involved in getting something to the perfection required
|> by a commercial system.

The questions are: is it TEDIOUS to write your program so that it will
not crash? Can you expect only commercial programs not to crash?

I thought writing programs that don't crash (and in general do what they
are supposed to do) is what programming is all about...

But if the "potential users" are happy, who am I to complain? 

-- F.K.

ad@doc.ic.ac.uk (11/21/90)

Jim Crammond, the implementor of the Parallel Parlog system,
has asked me to post the following mail message to the news.

I should point out that Jim is no longer a researcher in the
Parlog Group (we miss him), and so technical questions should
be sent to :

          parlog@doc.ic.ac.uk


    Andrew Davison



_____________________________________________________________
From: Jim Crammond
Date: Mon, 19 Nov 90 10:20:41 pst
Subject: Re: Can I talk about Parlog here?

> Ah, I've never used IC Parlog, and I'm not particularly interested in
> parallel parsing. What I AM interested in is why people choose to use
> logic programming systems (parallel or otherwise) that ever crash with a
> segmentation fault. This is not an error in your program, no matter how
> erroneous it is: this is an inexcusable error in the implementation. The
> only solution is to find an implementation written by responsible
> professionals.

Sorry, but as the "irresponsible" professional who wrote the Parallel Parlog
system I remain offended.  How can you possibly know what you're talking about
if you have never used the Parlog system and don't use/have interest in
parallel implementations?

Segmentation faults are in fact caught in the parallel parlog system but
I considered it too difficult to attempt to return the system to a
consistent state so that you return to the top level *given the limited
resources available to do this*. We are not simply dealing with one
processor's inconsistent state here but several, and the issue was
non-trivial. Thus I simply ensure that all processors clean up and exit.

I would agree that the compiler was - and is - erroneous for not detecting
bad Parlog programs and rejecting them, which is possible to do; however
given limited resources this was not high enough on the priority list.

The parallel parlog system isn't a commercial piece of software (although
when used correctly it is more robust than some commercial systems I have
seen). When you develop a system as part of research you tend to have to
sacrifice some robustness/usability in favour of more "publishable" things.

-Jim Crammond.

felkl@aste16.Berkeley.EDU (Feliks Kluzniak) (11/23/90)

In article <2540@gould.doc.ic.ac.uk>, Jim Crammond says:

|> 
|> Sorry, but as the "irresponsible" professional who wrote the Parallel Parlog
|> system I remain offended....
|>  ...
|> I would agree that the compiler was - and is - erroneous for not detecting
|> bad Parlog programs and rejecting them, which is possible to do; however
|> given limited resources this was not high enough on the priority list.
|> 
|> The parallel parlog system isn't a commercial piece of software (although
|> when used correctly it is more robust than some commercial systems I have
|> seen). When you develop a system as part of research you tend to have to
|> sacrifice some robustness/usability in favour of more "publishable" things.

Since Jim Crammond chose to post only his own part of the exchange we
had through e-mail, I will refrain from posting my answer. This part of
his own text says it all: what remains is whether we deem such behaviour
responsible and professional or not. I DON'T think this is a matter of
taste, but let it rest. It is not a particular system or person that is
of interest here, but whether people wish to maintain (and insist on
maintaining) certain standards of programming.  Personally, I find the
distinction between commercial software and research software (i.e.
software that just does not have to work, as long as it works
sometimes!) rather strange. As a practising programmer, I am shocked by
the apparently widespread acceptance of this distinction. 

--- Feliks Kluzniak

ted@nmsu.edu (Ted Dunning) (11/23/90)

In article <1990Nov22.174222.8614@ida.liu.se> felkl@aste16.Berkeley.EDU (Feliks Kluzniak) writes:

   In article <2540@gould.doc.ic.ac.uk>, Jim Crammond says:
	...
   |> The parallel parlog system isn't a commercial piece of software
	...

   ...  Personally, I find the distinction between commercial software
   and research software (i.e.  software that just does not have to
   work, as long as it works sometimes!) rather strange. As a
   practising programmer, I am shocked by the apparently widespread
   acceptance of this distinction.



given the above statement, i find it hard to imagine that mister
kluzniak has _ever_ worked on a commercial grade software product.  it
may be that he has been involved in some garage scale products, but
even this seems unlikely.

anyone who has actually had to deliver a working product to several
thousand customers would know that there is a world of difference
between writing a program that satisfies your own needs and then
making it available to others and producing a program that can be sold
on any reasonable scale.

writing the code in the first place is only a minor part of the
problem.  installation instructions, user documentation, internal
documentation for maintenance by others, and even setting up systems
for producing shippable copies are all an enormous time burden.  after
the sale, the support issue raises its head with problems which
utterly dwarf the original problem of getting some code to do something
interesting. 

in a research environment, it _is_ satisfactory to solve part of the
problem, and it may be that this solution will be interesting or
useful to others.  in a product, it is _not_ usually satisfactory to
create a product which does not live up to reasonable expectations of
completeness and documentation on the part of the user.

if mister kluzniak would like to learn the difference between the two
worlds, i suggest he get a job in support for a large computer
company.  this is where the difference between research and product is
most apparent.


--
I don't think the stories are "apocryphal".  I did it :-)  .. jthomas@nmsu.edu

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (11/23/90)

In article <1990Nov22.174222.8614@ida.liu.se>, felkl@aste16.Berkeley.EDU (Feliks Kluzniak) writes:
> Personally, I find the
> distinction between commercial software and research software (i.e.
> software that just does not have to work, as long as it works
> sometimes!) rather strange. As a practising programmer, I am shocked by
> the apparently widespread acceptance of this distinction. 

There is an important difference between research software and
commercial software, and I think it's this difference Jim Crammond had
in mind.

Money.

Commercial software has money behind it.  There is someone paying the
programmers to produce a product.  Research software is developed by
people who are being paid to produce interesting ideas.  I think that it
is ok to include experimental features that don't work yet in research
software.

But as soon as you *sell* the program to someone it stops being research
software.  There can be any number of mistakes in SB Prolog and I have no
grounds for complaint:  the source code is free, and my having a copy of
it does _nothing_ to contribute to its development or repair.  As a
practical matter, let's agree to consider programs distributed in source
form for a reasonable handling charge as being in the non-commercial
category.  If you paid more than postage and handling for it, it's a
commercial product, and the vendor has a moral obligation to try to make
it work.  (Legal obligations are a different matter.)

Even for a commercial product, I think it's ok to provide new experimental
features as long as they are not part of what the customer was led to
expect in return for his money.  If, for example, ALS were to offer an
experimental coroutining facility in ALS Prolog, and it sometimes crashed,
that would be ok, as long as it _didn't_ crash when you _didn't_ use that
feature and as long as you hadn't been talked into buying on the strength
of _having_ coroutining.  (I use this example because to the best of my
knowledge ALS do _not_ yet offer a coroutining feature.)

As a (former) practising programmer, I fully agree that products should
work as advertised.  Writing programs that work is _hard_, but that's
what professionalism is all about.  
-- 
I am not now and never have been a member of Mensa.		-- Ariadne.

felkl@aste16.Berkeley.EDU (Feliks Kluzniak) (11/26/90)

In article <TED.90Nov22143601@kythera.nmsu.edu>, ted@nmsu.edu (Ted
Dunning) writes:

|> given the above statement, i find it hard to imagine that mister
|> kuzniak has _ever_ worked on a commercial grade software product.  it
|> may be that he has been involved in some garage scale products, but
|> even this seems unlikely....

Well, I need not react to the tone of this posting. 
However, Ted Dunning completely missed the point. I am NOT saying that
there is no difference  between the work involved in writing commercial
software and that involved in writing software for more limited
purposes.  What I am saying is that the difference is no excuse for
sloppiness. You may not want to spend time on extensive user manuals and
menus, but if your program bombs, then you just haven't done your job
properly.  "It's only research software" is often accepted as an excuse,
and I find it ridiculous.
-- F.K.

mmh@cs.qmw.ac.uk (Matthew Huntbach) (11/26/90)

In article <1990Nov20.182520.9609@ida.liu.se> felkl@aste16.Berkeley.EDU (Feliks Kluzniak) writes:
>The questions are: is it TEDIOUS to write your program so that it will
>not crash? Can you expect only commercial programs not to crash?
>
Suppose the point which causes the program to crash occurs
somewhere where it is very difficult to track down, and will
require months of programming to solve. A version of the
program released for experimental use might result in the users
deciding that the bit which causes the crash isn't important
and can be left out. The implementors are then freed from
wasting their time debugging something that wasn't useful
anyway.

Releasing experimental languages before the code is fully
debugged is like getting your papers refereed. It's better for
the research community that interesting papers are debugged
through the refereeing process than that research results don't
get heard about for years because the researchers are spending
all their time trying to get their papers to the point where
they can be published without alteration.

Matthew Huntbach

felkl@aste16.Berkeley.EDU (Feliks Kluzniak) (11/28/90)

In article <3079@sequent.cs.qmw.ac.uk>, mmh@cs.qmw.ac.uk (Matthew
Huntbach) writes:
|> >
|> Suppose the point which causes the program to crash occurs
|> somewhere where it is very difficult to track down, and will
|> require months of programming to solve. A version of the
|> program released for experimental use might result in the users
|> deciding that the bit which causes the crash isn't important
|> and can be left out. The implementors are then freed from
|> wasting their time debugging something that wasn't useful
|> anyway.
|> 
|> Releasing experimental languages before the code is fully
|> debugged is like getting your papers refereed. It's better for
|> the research community that interesting papers are debugged
|> through the refereeing process than that research results don't
|> get heard about for years because the researchers are spending
|> all their time trying to get their papers to the point where
|> they can be published without alteration.
|> 

I heartily disagree!  If you've got points in your program that require
months of debugging and modifications to fix, then you haven't written
the program: you only kidded yourself that you had! And if you need that
kind of experiment to find out what is important, then you have no
business implementing the language: design it first!

The refereeing process can often improve the paper, but as a referee I
HATE to have my time wasted by people who just don't care about how they
write. Write it as well as you can, then submit: don't worry, there will
be enough comments anyway!

A badly written program or paper contributes very little and wastes
other people's valuable time. If one doesn't strive for excellence,
one's activity is probably more harmful than beneficial.

-- Feliks Kluzniak