[comp.ai.digest] Newell's response to KL questions

Allen.Newell@CENTRO.SOAR.CS.CMU.EDU (09/15/88)

To: acharya@CENTRO.SOAR.CS.CMU.EDU
Subject: Newell's response to KL questions
Date: Mon, 5 Sep 88 12:14 EDT
From: Allen.Newell@CENTRO.SOAR.CS.CMU.EDU


> From: IT21%SYSB.SALFORD.AC.UK@MITVMA.MIT.EDU
> Subject: Newell's Knowledge Level
> From: Andrew Basden, I.T. Institute, University of Salford, Salford.

> Please can anyone help clarify a topic?

> In 1982 Allen Newell published a paper, 'The Knowledge Level' (Artificial
> Intelligence, v.18, p.87-127), in which he proposed that there is a level
> of description above and separate from the Symbol Level.  He called this
> the Knowledge Level.  I have found it a very important and useful concept
> in both Knowledge Representation and Knowledge Acquisition, largely
> because it separates knowledge from how it is expressed.
> 
> But to my view Newell's paper contains a number of ambiguities and
> apparent minor inconsistencies as well as an unnecessary adherence to
> logic and goal-directed activity which I would like to sort out.  As
> Newell says, "to claim that the knowledge level exists is to make a
> scientific claim, which can range from dead wrong to slightly askew, in
> the manner of all scientific claims."  I want to find a refinement of it
> that is a bit less askew.
> 
> Surprisingly, in the 6 years since the idea was introduced there has
> been very little discussion about it in AI circles.  In psychology
> circles likewise there has been little detailed discussion, and here the
> concepts are only similar, not identical, and bear different names.  SCI
> and SSCI together give only 26 citations of the paper, of which only four
> in any way discuss the concepts, most merely using various concepts in
> Newell's paper to support their own statements.  Even in these four there
> is little clarification or development of the idea of the Knowledge
> Level.

[[AN: I agree there has been very little active use or development of the 
  concept in AI, although it seems to be increasing somewhat.  The two most 
  important technical uses are Tom Dietterich's notion of KL vs SL learning and 
  Hector Levesque's work on knowledge bases.  Zenon Pylyshyn uses the notion
  as an appropriate way to discuss foundation issues in an upcoming book
  on the foundations of cognitive science (while also using the term
  semantic level for it).  And David Kirsh (now at the MIT AI Lab) did a thesis in 
  philosophy at Oxford on the KL some time ago, which has not been published, 
  as far as I know.  We have continued to use the notion in our own research 
  and it played a strong role in my William James Lectures (at Harvard).  But, 
  importantly, the logicists have not found it very interesting (with the 
  exception of Levesque and Brachman).  I would say the concept is doing 
  about as well as the notion of weak methods did, which was introduced in 
  1969 and didn't begin to play a useful role in AI until a decade later.

  I might say that the evolution of the KL in our own thinking has been 
  (as I had hoped) in the direction of seeing the KL as just another systems
  level, with no special philosophical character distinguishing it from the
  other levels.
  In particular, there seems to me no more reason to talk about an observer 
  taking an intentional stance when using the knowledge level to describe
  a system than there is to talk about an engineer taking the electronic-
  circuits stance when he says "consider the circuit used for ...".  It is
  ok, but the emphasis is on the wrong syllABLE.  One other point might be
  worth making.  The KL is above the SL in the systems hierarchy.  However,
  in use, one often considers a system whose internal structure is described
  at the SL as a collection of components communicating via languages and
  codes.  But the components may themselves be described at the KL, rather
  than at any lower level.  Indeed, design is almost always an approximation
  to this situation.  Such usage doesn't stretch the concept of KL and SL in
  any way or put the KL below the SL.  It is just that the scheme to be used
  to describe a system and its behavior is always pragmatic, depending on
  what is known about it and what purposes the description is to serve.
]]
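
[[Ed: A minimal sketch, in Python, of the usage just described (an editorial
  illustration, not Newell's; all names are invented): the system is
  described at the SL as components exchanging coded messages, while each
  component is described only at the KL, by a goal and a body of knowledge
  plus the principle of rationality.

    from collections import deque

    class KLComponent:
        """A component described at the knowledge level: we state what it
        knows and what it is after, not how it represents or processes
        either."""
        def __init__(self, goal, knowledge):
            self.goal = goal              # what the component is to attain
            self.knowledge = knowledge    # situation -> action serving the goal

        def act(self, situation):
            # Principle of rationality: take the action that the component's
            # knowledge links to its goal in this situation.
            return self.knowledge.get(situation, "no-op")

    # Symbol-level description of the system: two components communicating
    # via a code (strings passed over a message queue).
    queue = deque()
    planner = KLComponent("door-open", {"door-closed": "send: push"})
    effector = KLComponent("obey", {"push": "apply-force"})

    queue.append(planner.act("door-closed"))      # planner emits a coded message
    msg = queue.popleft().removeprefix("send: ")  # symbol-level transmission
    print(effector.act(msg))                      # -> apply-force
]]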

> So I am turning to the AILIST bulletin board.  Has anyone out there any
> understanding of the Knowledge Level that can help in this process?
> Indeed, is Allen Newell himself listening to the board?

[[AN: No, but one of my friends (Anurag Acharya) is and forwarded it to me,
  so I return it via him.]]

> Some of the questions I have are as follows:
> 
> 1.  Some (eg. Dennett) mention 3 levels, while Newell mentions 5.  Who is
> 'right' - or rather, what is the relation between them?

[[AN: The computer systems hierarchy (sans the KL), which is what I infer
  the "5" refers to, is familiar, established, and technical (i.e., welded 
  into current digital technology).  There may also exist other such
  systems hierarchies.  Dennett (and Pylyshyn, loc cit) talk about 3, simply
  because the details of the lower implementations are not of interest to
  them, so they simply talk about some sort of physical systems.  There is
  no doubt that the top two levels correspond: the program or symbol level,
  and above that the knowledge, semantic or intentional systems level.  That
  does not say the formulations or interpretations of the intentional systems
  level and the KL are identical, but they are aimed at the same phenomena
  and the same systems possibilities.  There is an upcoming Behavioral and
  Brain Sciences treatment of Dennett's new book on the Intentional 
  Stance, in which my own (short) commentary raises the question of the
  relation of these two notions, but I do not know what Dennett says about
  it, if anything.]]

> 2.  Newell says that logic is at the Knowledge Level.  Why?  I would have
> put it, like mathematics, very firmly in the Symbol Level.

[[AN: Here Basden mystifies me.  However obscure I may have been in the KL
  paper, I did not say that logic was at the KL.  On the contrary, as
  the paper says in section 4.4, "A logic is just a representation of 
  knowledge.  It is not the knowledge itself, but a structure at the symbol 
  level."]]

> 3.  Why the emphasis on logic?  Is it necessary to the concept, or just
> one form of it?  What about extra-logical knowledge, and how does his
> 'logic' include non-monotonic logics?

[[AN: Again, the paper seems to me rather clear about this.  Logics are
  simply languages that are designed to be clear about what knowledge
  they represent.  They have lots of family resemblances, because certain
  notions (negation, conjunction, disjunction, functions and parameters) 
  are central to saying things about domains.  Nonmonotonic logics are so
  called because they are members of this family.  I don't have any special 'logic'
  that I am talking about, just what the culture calls logic.  The emphasis 
  on logic is real, just like the emphasis on analysis (the mathematics of 
  the continuum) is real for physics.  But there are lots of other ways of 
  representing knowledge, for example, modeling the situations being known.  
  And there is plenty of evidence that logics are not necessarily efficient 
  for extracting new useful expressions.  This evidence is not just from AI, 
  but from all of mathematics and science, which primarily use formalisms 
  that are not logics.  As to "extra-logical" knowledge, I understand that
  term ok as a way of indicating that some knowledge is difficult to express
  in logics, but I do not understand it in any more technical way.  Certainly,
  the endeavor of people like McCarthy has been to seek ways to broaden
  the useful expressiveness of logic -- to bring within logic kinds of
  knowledge that heretofore seemed "extra-logical".  Certainly, there is lots
  of knowledge we use for which we have not yet developed ways of expression in
  external languages (data structures outside the head); and, not having done
  so, we cannot be quite sure that it can be done.

  I should say that in other people's (admittedly rare) writings about the
  KL there sometimes seems to be a presumption that logic is necessary and 
  that, in particular, some notion of implicational closure is necessary.
  Neither is the case.  Often (read: usually) agents have an indefinitely
  large body of knowledge if expressed in terms of ground expressions of
  the form "in situation S with goal G take action A".  Thus, such knowledge
  needs to be represented (by us or by the agent itself) by a finite physical 
  object plus some processes for extracting the applicable ground expressions
  when appropriate.  With logics this is done by taking the knowledge to be 
  the implicational closure over a logic expression (usually a big 
  conjunction).  But it is perfectly possible to have other productive ways 
  (models with legal transformations), and it is perfectly possible to 
  restrict logics so that modus ponens does not apply (as Levesque and others 
  have recently emphasized).  I'm not quite sure why all this is difficult to 
  be clear about.  It may indeed be because of the special framing role of 
  logics, where to be clear in our analyses of what knowledge is there we 
  always return to the fact that other representations can be transduced 
  to logic in a way that preserves knowledge (though it does not preserve
  the effort profile of what it takes to bring the knowledge to bear).]]
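
[[Ed: An editorial sketch, in Python, of the paragraph above (the rules and
  names are invented): the indefinitely large body of ground expressions
  "in situation S with goal G take action A" is held as a finite object (a
  few rules) plus a process that extracts the applicable ground expressions
  on demand.  With a logic that process is implicational closure; nothing
  below depends on logic in particular.

    RULES = [
        # (situation test, goal, action) -- the finite physical object
        (lambda s: s["door"] == "closed", "get-outside", "open-door"),
        (lambda s: s["door"] == "open", "get-outside", "walk-through"),
    ]

    def applicable_ground_expressions(situation, goal):
        """Extract the ground expressions that apply now, rather than
        enumerating the indefinitely large closure in advance."""
        for test, g, action in RULES:
            if g == goal and test(situation):
                yield (situation, goal, action)

    for expr in applicable_ground_expressions({"door": "closed"}, "get-outside"):
        print(expr)   # ({'door': 'closed'}, 'get-outside', 'open-door')
]]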
  
> 4.  The definition of the details of the Knowledge Level is in terms of
> the goals of a system.  Is this necessary to the concept, or is it just
> one possible form of it?  There is much knowledge that is not goal
> directed.

[[AN: In the KL formulation, the goals of the system are indeed a necessary
  concept.  The KL is a systems level, which is to say, it is a way of
  describing the behavior of a system.  To get from knowledge to behavior
  requires some linking concept.  This is all packaged in the principle
  of rationality, which simply says that an agent uses its knowledge to
  take actions to attain its goals.  You can't get rid of goals in that
  formulation.  Whether there are other formulations of knowledge that might
  dispense with this I don't rightly know.  Basden appears to be focusing 
  simply on the issue of a level that abstracts from representation and 
  process.  If that is all that is asked of the level, it would seem so.  And 
  certainly, generally
  speaking, the development of logic and epistemology has not taken goals as 
  critical.  But if one attempts to formulate a system level and not just
  a level of abstraction, then some laws of behavior are required.  And
  knowledge in action by agents seems to presuppose something in the agents
  that impels them to action.
	
  Dennett, D. The Intentional Stance, Cambridge, MA: Bradford Books MIT
  Press, 1988 (in press).
  
  Dietterich, T. G. Learning at the knowledge level.  Machine Learning, 
  1986, v1, 287-316.

  Levesque, H. J. Foundations of a functional approach to knowledge 
  representation, Artificial Intelligence, 1984, v23, 155-212.

  Levesque, H. J. Making believers out of computers, Artificial Intelligence, 
  1987, v30, 81-108.

  Newell, A. The intentional stance and the knowledge level: Comments on
  D. Dennett, The Intentional Stance.  Behavioral and Brain Sciences (in 
  press).

  Newell, A., Unified Theories of Cognition, The William James Lectures.
  Harvard University, Spring 1987  (to be published).  (See especially
  Lecture 2 on Foundations of Cognitive Science.)

  Pylyshyn, Z., Computing in cognitive science, in Posner, M. (ed) Foundations
  of Cognitive Science, MIT Bradford Press  (forthcoming).

  Rosenbloom, P. S., Laird, J. E., & Newell, A. Knowledge-level learning
  in Soar.  Proceedings of AAAI-87, 1987.

  Rosenbloom, P. S., Newell, A., & Laird, J. E. Towards the knowledge level
  in Soar: The role of architecture in the use of knowledge, in VanLehn, K.,
  (ed), Architectures for Intelligence, Erlbaum (in press).

]]
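
[[Ed: An editorial sketch, in Python, of the principle of rationality as the
  law of behavior that links knowledge to action through goals (the function
  and names are invented for illustration):

    def predict_action(knows, goal, actions):
        """Knowledge-level prediction: the agent takes an action that its
        knowledge connects to its goal.  Nothing is said about how the
        knowledge is represented or processed."""
        for action in actions:
            if knows(action, goal):   # the agent knows this action attains the goal
                return action
        return None                   # the principle is silent otherwise

    # An observer ascribes knowledge and a goal, then predicts behavior.
    knows = lambda action, goal: (action, goal) == ("open-door", "get-outside")
    print(predict_action(knows, "get-outside", ["wave", "open-door"]))  # -> open-door
]]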

goel-a@TUT.CIS.OHIO-STATE.EDU (Ashok Goel) (09/30/88)

I appreciate Professor Allen Newell's explanation of his scheme of
knowledge, symbolic, and device levels for describing the architecture
of intelligence. More recently, Prof. Newell has proposed a scheme
consisting of bands, specifically, the neural, cognitive, rational,
and social bands, for describing the architecture of the mind-brain.
Each band in this scheme can have several levels; for instance, the
cognitive band contains (among others) the deliberation and the
operation levels.  What is not clear (at least not to me) is the
relationship between the two schemes.  One possible relationship is
collinearity, in that the device level corresponds to the neural band,
the symbolic level to the cognitive band, and the knowledge level to
the rational band. Another possibility is containment, in the sense
that each band consists of (the equivalents of) knowledge,
symbolic, and device levels. Yet another possibility is
orthogonality of one kind or another. Which relationship (if any)
between the two schemes does Prof. Newell imply?

A commonality between Newell's two schemes is their emphasis on
structure.  A different scheme, David Marr's, focuses on the
processing and functional aspects of cognition. Again, what (if any)
is the relationship between Newell's levels/bands and Marr's levels?
Collinearity, containment, or some kind of orthogonality?

--ashok--