[mod.ai] semantic knowledge

KRULWICH@C.CS.CMU.EDU (Bruce Krulwich) (09/25/86)

howdy.

i think there was a discussion on searle that i missed a month or so
ago, so this may be a rehash.  i disagree with the basic conjecture on
which searle bases all of his reasoning, namely that since computers
represent everything in terms of 1's and 0's they are by definition
storing knowledge syntactically and not semantically.  this seems wrong
to me.
as a simple counterexample, consider any old integer stored within a
computer.  it may be stored as a string of bits, but the program
implicitly has the "semantic" knowledge that it is an integer.
similarly, the activation levels and connection strengths in a
connectionist model simulator (or better, in a true hardware
implementation) may be stored as a bunch of numerical values, but the
software (i.e., the model, not the simulator) semantically "knows" what
each value is, just as the brain knows the meaning of activation patterns
over neurons and synapses (or so goes the theory).
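
to make that concrete, here is a tiny c sketch (my own illustration, not
from searle or from any real ai system; it assumes 4-byte ints and ieee
floats, which is the common case): the same four bytes are just a bit
string until the program commits to an interpretation of them, and the
"semantics" lives in that commitment, not in the bits.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char bits[4] = { 0x00, 0x00, 0x80, 0x3f };  /* just a string of bits */
        int   as_int;
        float as_float;

        /* the program, not the bit string, supplies the "semantic"
           knowledge of what the bits are: */
        memcpy(&as_int, bits, sizeof as_int);      /* read the bits as an integer */
        memcpy(&as_float, bits, sizeof as_float);  /* read the same bits as a float */

        printf("as an integer: %d\n", as_int);    /* 1065353216 on a little-endian machine */
        printf("as a float:    %f\n", as_float);  /* 1.000000 on a little-endian ieee machine */
        return 0;
    }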

i think the same can be said for data stored in a more conventional AI
program.  in response to a recent post, i don't think that there is a
fundamental difference between a human's knowledge of a horse and a
computer's manipulation of the symbol it uses to represent one.  the
only differences are the inherently associative nature of the brain and
the amount of knowledge stored in the brain.  i think that it is these
two things that give us a "feel" for what a horse is when we think of
one, while most computer systems would make only a small fraction of those
associations and would have much less knowledge and experience to
associate with.  these are both computational differences, not
fundamental ones.
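
to illustrate what i mean by a computational rather than a fundamental
difference, here is a toy sketch (again mine, not a description of any
particular system): a symbol like "horse" is just an index into a table,
but the program can follow weighted links out of it; the claim is that a
brain does the same sort of thing with enormously more links and
enormously more knowledge hanging off each one.

    #include <stdio.h>

    #define NSYM 5

    /* a handful of symbols; a real system (or a brain) would have far more */
    const char *symbol[NSYM] = { "horse", "gallop", "saddle", "hay", "mane" };

    /* assoc[i][j] = strength of the association from symbol i to symbol j;
       rows not listed default to zero */
    float assoc[NSYM][NSYM] = {
        /* horse -> */ { 0.0, 0.9, 0.7, 0.6, 0.8 },
    };

    int main(void)
    {
        int horse = 0, j;

        printf("thinking of \"%s\" brings to mind:\n", symbol[horse]);
        for (j = 0; j < NSYM; j++)
            if (assoc[horse][j] > 0.5)             /* follow the stronger links */
                printf("  %s (strength %.1f)\n", symbol[j], assoc[horse][j]);
        return 0;
    }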

none of this is to say that we are close or getting close to a seriously
"intelligent" computer system.  i just don't think that there are
fundamental philosophical barriers in our way.

bruce krulwich

arpa: krulwich@c.cs.cmu.edu
bitnet: krulwich%c.cs.cmu.edu@cmccvma