[net.ai] AI Projects on the Net

sts@ssc-vax.UUCP (Stanley T Shebs) (08/16/83)

OOPS! that should be "express some sort of INtolerance"...

This is a really fun topic.  The problem of the Turing Test is
enormously difficult and *very* subtle (either that or we're
overlooking something really obvious).  Now the net provides
a gigantic lab for enterprising researchers to try out their
latest attempts.  So far I have resisted the temptation, since
there are more basic problems to solve first!  The curious
thing about an AI project is that it can be made infinitely
complicated (programs are like that; consider emacs or nroff),
certainly enough to simulate any kind of behavior desired,
whether it be bigotry, right-wingism, irascibility, mysticism,
or perhaps even ordinary rational thought.  This has been
demonstrated by several programs, among them PARRY (which simulates
paranoia) and POLITICS (which simulates arguments between ideologues)
(mail me for refs if interested).  So it doesn't appear that
there is a way to detect an AI project, based on any *particular*
behavior.  

A more productive approach might be to look for the capability 
to vary behavior according to circumstances (self-modifiability).  
I can note that all humans appear capable of modifying their 
behavior, and that very few AI programs can do so.  However, not 
all human behavior can be modified, and much cannot be modified 
easily.  "Try not to think of a zebra for the next ten minutes" - 
humans cannot change their own thought processes to manage this 
feat, while an AI program would not have much problem.  In fact,
Lenat's Eurisko system (assuming we can believe all the claims)
has the capability to speed up its own operation!  (It learned that
Lisp 'eq' and 'equal' give the same answer for atoms, so it replaced
the slower 'equal' with the faster 'eq' in its own code.)  Since
neither humans nor programs have a monopoly on self-modification,
the ability to change behavior cannot be the criterion either.
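The eq/equal distinction behind that optimization can be sketched in a few lines.  This is not the actual Eurisko code (which was Lisp); it is a rough analogy using Python's identity test 'is' and structural test '==', which play roles similar to Lisp's 'eq' and 'equal':

```python
# Lisp 'eq' checks pointer identity (cheap); 'equal' checks structural
# equality (general but slower).  Python's 'is' and '==' are analogous.

a = ["zebra"]   # a fresh list
b = ["zebra"]   # structurally identical, but a distinct object
c = a           # another reference to the very same object

print(a == b)   # structural equality holds ('equal'-like test)
print(a is b)   # identity fails: different objects ('eq'-like test)
print(a is c)   # identity holds, so the cheap test suffices here

# For atoms (symbols in Lisp; interned values in Python), identity and
# structural equality coincide, which is exactly why the cheaper
# identity test could safely replace the general one in that case.
```

The speedup comes from the fact that an identity test is a single pointer comparison, while a structural test may have to walk an arbitrarily large structure.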

So how does one decide?  The question is still open....

					stan the leprechaun hacker
					ssc-vax!sts (soon utah-cs)

ps I thought about Zeno's Paradox recently - the Greeks (especially
Archimedes) were about a hair's breadth away from discovering
calculus, but Zeno had crippled everybody's thinking by making a
"paradox" where none existed.  Perhaps the Turing Test is like that....
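The calculus point can be made concrete with a short sketch (mine, not the poster's): in Zeno's dichotomy, the distances to cover form a geometric series, and the "paradox" dissolves once the partial sums are seen to converge to a finite limit:

```python
# Zeno's dichotomy: to cross a unit distance you first cover 1/2,
# then 1/4, then 1/8, ...  Infinitely many steps, but the total
# distance is finite -- the limit idea calculus made rigorous.

def partial_sum(n):
    """Sum of the first n terms of 1/2 + 1/4 + ... + 1/2**n."""
    return sum(0.5 ** k for k in range(1, n + 1))

for n in (1, 2, 10, 50):
    print(n, partial_sum(n))

# The partial sums approach 1, never exceeding it.
```

After 50 terms the sum differs from 1 by less than 1e-12, which is the whole resolution: an infinite number of sub-journeys need not take an infinite distance (or time).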

brh@wjh12.UUCP (Holt) (08/18/83)

	I realize this article was a while ago, but I'm just
catching up with my news reading, after vacation.  Bear with me.
	I wonder why folks think it would be so easy for an
AI program to "change its thought processes" in ways we humans 
can't.  I submit that (whether it's an expert system, experiment
in KR or what) maybe the suggestion to 'not think about zebras'
would have a similar effect on an AI proj. as on a human.  After
all, it IS going to have to decipher exactly what you meant by the
suggestion.  On the other hand, might it not be easier for one
of you humans .... we, I mean ... to consciously think of something
else, and 'put it out of your mind'??
	Still an open question in my mind...
(Now, let's hope this point isn't already in an article I haven't read...)

			Brian Holt
			wjh!brh