[comp.ai] Defining Machine Intelligence and Consciousness

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/15/88)

In article <1648@nmtsun.nmt.edu> caasnsr@nmtsun.nmt.edu (Clifford Adams) 
joins the discussion on proofs of machine consciousness:

 > In article <42835@linus.UUCP> bwk@mbunix (Barry Kort) writes:
 > > I would be convinced if, upon acquiring language skills, the intelligent
 > > machine unexpectedly uttered the assertion, "I am."
 >
 >	Would you be convinced if a machine wrote the above after
 > reading comp.ai?  I have never heard a person simply say "I am."
 > without any challenge or suggestion to do so.  Would it be fair to let
 > the machine read comp.ai?  Or "attend" a philosophy lecture?  Perhaps
 > the machine does not believe that the statement is adequate, or has a
 > different philosophy of existence.

I think reading comp.ai is an excellent way to acquire consciousness.
But suppose we humans were discussing the desirability of imbuing an
AI system with consciousness, and I asked, "Who among us is in favor
of doing so?"  I would be delighted if, among the responses, the
candidate AI system unexpectedly piped up and said, "I am."

 >	On another track... 
 >
 > I define intelligence as "the ability to solve problems."
 > Finding that a solution is impossible/impractical also counts. 

Of course.  In identifying a workable solution, it is useful if the
problem solver can discard candidate solutions which turn out not
to work.  But as you suggest, Clifford, sometimes the answer is,
"There is no answer."

 > Now "ability" and "problems" need to be defined. 

OK.  I define "ability" as "capacity" or "power".  I define "problem"
as "an undesirable state of affairs for which an appropriate idea
has not yet been generated or agreed upon."  (I define "idea" as
"a possibility for changing the state of affairs".)

 > This simple definition
 > describes fairly well what many people call intelligence.
 > Intelligence is always (in my experience) "measured" by
 > problem-solving abilities.  The problems vary, but solutions are
 > usually required.  Solutions to trivial problems use trivial
 > intelligence.  More complex problems require "more" intelligence.
 > Adding two numbers needs trivial intelligence.  "Intelligent"
 > activities, or ones which are needed to pass the Turing test, are more
 > difficult.

Yes.  Your definition agrees with mine:  "Intelligence is the faculty
which enables a sapient system to think and solve problems."  (And
going down one level, "sapience is the ability of a system to repose
knowledge" and "thinking is a rational form of information processing
which reduces the entropy or uncertainty of a knowledge base, generates
solutions to outstanding problems, and conceives goal-oriented courses
of action".)

 > 	I define consciousness as "the ability to create problems."

I would amend your definition to replace "create" with "recognize
and identify".  But I have an alternate definition:  "Consciousness
is the capacity of a sentient system to monitor itself."  If we combine
the two definitions, we get that consciousness is the ability to
recognize opportunities for self-improvement.  That is, consciousness
is a prerequisite for self-directed learning.
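
As a toy picture of that combined definition (entirely my own sketch,
with a made-up error-rate threshold), a self-monitoring system watches
its own track record and flags a problem when performance slips:

    class SelfMonitor:
        """Recognizes opportunities for self-improvement by
        monitoring the system's own outcomes."""

        def __init__(self, threshold=0.2):
            self.threshold = threshold   # tolerable error rate
            self.trials = 0
            self.errors = 0

        def record(self, success):
            self.trials += 1
            self.errors += 0 if success else 1

        def recognized_problems(self):
            if self.trials and self.errors / self.trials > self.threshold:
                return ["error rate too high; revise the knowledge base"]
            return []

    m = SelfMonitor()
    for outcome in (True, False, False, True):
        m.record(outcome)
    print(m.recognized_problems())   # flags the 50% error rate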

 > The consciousness uses intelligence like a person uses a computer.

Good analogy.  I use computers as an aid to problem solving.

 > Problems are fed into the intelligence for a solution.  The answers
 > can then be used to find more problems to solve.

Ah.  Scientific inquiry expands to meet the needs of an expanding
consciousness.
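
Clifford's loop can even be written down directly.  In this sketch,
solve and new_problems are hypothetical functions standing in for the
intelligence and for whatever follow-on questions an answer suggests:

    def inquire(agenda, solve, new_problems, rounds=100):
        """Consciousness poses problems; intelligence answers them;
        each answer may put new problems on the agenda."""
        solved = []
        while agenda and len(solved) < rounds:
            problem = agenda.pop()
            answer = solve(problem)
            if answer is not None:
                solved.append((problem, answer))
                agenda.extend(new_problems(problem, answer))
        return solved

    # Toy run: "factor n" problems, where each factor found poses
    # the cofactor as a fresh problem until only primes remain.
    def smallest_factor(n):
        return next((d for d in range(2, n) if n % d == 0), None)

    print(inquire([91], smallest_factor,
                  lambda n, d: [n // d] if n // d > 1 else []))
    # prints [(91, 7)]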

 > 	[Without consciousness we would be like mobile plants.  
 >       You live, you die.  No problem.  :-)]
 
No problem.  No solution.  No joy in life. 

Clifford, it is a joy to exchange ideas with you.

--Barry Kort