[mod.ai] Consciousness?

sas@BFLY-VAX.BBN.COM.UUCP (01/29/87)

I always thought that a scientific theory had to undergo a number of
tests to determine how "good" it is.  Needless to say, a perfect score
on one test may be balanced by a mediocre score on another test.  Some
useful tests are:

- Does the theory account for the data?
- Is the theory simple?  Is it free of superfluities?
- Is the theory useful?  Does it provide the basis for a fruitful
	program of research?

There are theories of the mind that treat consciousness as central, and
theories that argue it is secondary - a side effect of thought.  It seems
quite probable that the bulk of artificial intelligence work (machine
reasoning, qualitative physics, theorem proving ... ) can be performed
without considering this thorny issue.  While I frequently accuse my
computers of malice, I doubt they are consciously malicious when they
flake out on me.

While the study of consciousness is fascinating and lies at the base of
numerous religions, it doesn't seem to be scientifically useful.  Do I
rewrite my code because the machine is conscious or because it is
getting the wrong answer?  Is there a program of experimentation
suggested by the search for consciousness?  (Don't confuse this with
using conscious introspection to build unconscious intelligence, as I
would to guide a toy tank from my office to the men's room.)  Does
consciousness change the way artificial intelligence must be
programmed?  The evidence so far says NO.  [How is that for a bald-faced
assertion?  Send me your code, with comments showing how consciousness
is taken into account, and I'll see if I can rewrite it without
consciousness.]

I don't think scientific theories of consciousness are incorrect; I
think they are barren.

					Seth

P.S. For an excellent example of a nifty but otherwise barren theory,
read the essay "Adam's Navel" in Stephen Jay Gould's book The
Flamingo's Smile.