larsaam@idt.unit.no (Lars AAsmund Maele) (04/30/91)
I am writing a thesis comparing explanation-based learning (EBL) with partial evaluation in logic programming (partial deduction). I am looking for an article by Tadepalli at Oregon State University about the use of training examples in partial evaluation. Does anyone know where I can find it?

AAsmund Maehle
Trondheim
Norway
tadepall@godel.orst.EDU (Prasad Tadepalli) (05/02/91)
> I am looking for an article written by Tadepalli at Oregon State
> University about the use of training examples in partial evaluation.

You must have picked up the reference from the Readings in ML. The report that was referenced there underwent a few revisions and will be appearing as an IJCAI-91 paper titled "A Formalization of Explanation-Based Macro-operator Learning".

This paper may not be exactly what you are looking for, because it does not explicitly address the relationship of EBL to PE. However, it shows that EBL benefits from training examples in two ways: first, the examples tell it the distribution of problems it must tune itself to; second, it uses the examples to avoid searching for solutions. One of the main results is that EBL can achieve exponential speedup WITHOUT using the algorithm of Kedar-Cabelli and McCarty, which is the critical link between the PE work and the EBG work in the paper by van Harmelen and Bundy (AI Journal, 36).

The key reasons for the success of EBL seem to be (a) the ability to decompose a solution into a small set of macros which can be composed, and/or (b) the sparseness of the solution space. Since neither of these is emphasized in the PE work, the paper provides a counterpoint to the thesis "EBG = PE" and, in this sense, might be relevant to your work. I should also point out that my work builds on Korf's pioneering work on macro-operator learning.

I will be glad to send a preprint of the paper if you are interested.

Thanks,
Prasad Tadepalli
Department of Computer Science
Oregon State University
Corvallis, OR 97331-3202
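P.S. Since you are comparing the two formally, a toy Python sketch may make the "two uses of examples" point concrete. This is only my illustration, not the formalization in the paper: the domain and the names (MacroLearner, solve_bfs) are invented for the example, and it omits the composition of macros that is central to Korf's method. Training problems are solved once by brute-force search, their solutions are cached as macros (each intermediate state along a solution gets the remaining suffix), and performance-time problems are answered by retrieval rather than search:

from collections import deque

# Toy domain: permutations of (0,1,2,3); each operator swaps two
# adjacent positions.  GOAL is the sorted permutation.
OPS = [(i, i + 1) for i in range(3)]
GOAL = (0, 1, 2, 3)

def apply_op(state, op):
    i, j = op
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def solve_bfs(start):
    # Uninformed search; this cost is what the learner avoids
    # paying at performance time.
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == GOAL:
            return plan
        for op in OPS:
            nxt = apply_op(state, op)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [op]))

class MacroLearner:
    # Training examples are solved once, by search, and the resulting
    # operator sequences are stored as macros.  Every intermediate
    # state along a training solution gets the remaining suffix as its
    # macro, so coverage reflects the training distribution.
    def __init__(self):
        self.macros = {GOAL: []}      # state -> operators to GOAL

    def train(self, example):
        plan = solve_bfs(example)     # the only search is at training time
        state = example
        for k, op in enumerate(plan):
            self.macros.setdefault(state, plan[k:])
            state = apply_op(state, op)

    def solve(self, start):
        # Performance time: retrieval instead of search.  Problems the
        # training distribution never produced come back as None.
        return self.macros.get(start)

learner = MacroLearner()
for example in [(3, 2, 1, 0), (1, 3, 0, 2), (2, 0, 3, 1)]:
    learner.train(example)
print(learner.solve((1, 3, 0, 2)))    # answered by lookup, no search

By contrast, a partial evaluator needs only the program and the static part of its input -- never a training example. An equally toy specializer, unfolding power(x, n) over a known exponent n (again, specialize_power is just an invented name for the illustration):

def specialize_power(n):
    # Unfold the recursion on the static argument n, leaving the
    # dynamic argument x as a parameter of the residual program.
    expr = "1"
    for _ in range(n):
        expr = f"({expr} * x)"
    code = f"def power_{n}(x):\n    return {expr}"
    namespace = {}
    exec(code, namespace)             # build the residual program
    return namespace[f"power_{n}"]

cube = specialize_power(3)
print(cube(2))                        # 8, computed by the residual program

The contrast is the point: the residual program depends only on the program text and the static input, while the macro learner's competence depends on which problems the training distribution happened to supply.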