[comp.ai] Constructive Question

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (06/01/88)

What's the difference between Cognitive Science and AI?  Will the
recent interdisciplinary shift, typified by the PDP work, be the end
of AI as we knew it?

What IS in a name?
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

	     The proper object of the study of humanity is humans, not machines

ok@quintus.UUCP (Richard A. O'Keefe) (06/03/88)

In article <1313@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> What's the difference between Cognitive Science and AI?  Will the
> recent interdisciplinary shift, typified by the PDP work, be the end
> of AI as we knew it?
> 
> What IS in a name?

To answer the second question first, what's in a name is "history".

I do not expect much in the way of agreement with this, but for me
- Cognitive Science is the discipline which attempts to understand
  and (ideally) model actual human individual & small-group behaviour.
  People who work in this area maintain strong links with education,
  psychology, philosophy, and even AI.  Someone who works in this
  area is likely to conduct psychological experiments with ordinary
  human subjects.  It is a science.

- AI is the discipline which attempts to make "intelligent" artefacts,
  such as robots, theorem provers, IKBSs, & so on.  The primary goal
  is to find *any* way of doing things; whether that's the way humans
  do it or not is not particularly interesting.  Machine learning is a
  part of AI:  a particular technique may be interesting even if humans
  *couldn't* use it.  (And logic continues to be interesting even though
  humans normally don't follow it.)  Someone trying to produce an IKBS may
  obtain and study protocols from human experts; in part it is a matter
  of how well the domain is already formalised.
  AI is a creative art, like Mathematics.

- The "neural nets" idea can be paraphrased as "it doesn't matter if you
  don't know how your program works, so long as it's parallel."

If I may offer a constructive question of my own:  how does socialisation
differ from other sorts of learning?  What is so terrible about learning
cultural knowledge from a book?  (Books are, after all, the only contact
we have with the dead.)

ok@quintus.uucp (Richard A. O'Keefe) (06/13/88)

In article <1335@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>IKBS programs are essentially private readings which freeze, despite
>the animation of knowledge via their inference mechanisms (just a
>fancy index really :-)).  They are only sensitive to manual reprogramming,
>a controlled intervention.  They are unable to reshape their knowledge
>to fit the current interaction AS HUMANS DO.  They are insensitive,
>intolerant, arrogant, overbearing, single-minded and ruthless.  Oh,
>and they usually don't work either :-) :-)

This is rather desperately anthropomorphic.  I am surprised to see Gilbert
Cockton, of all people, ascribing such human qualities to programs.

There is no reason why a program cannot learn from its input; as a
trivial example, Rob Milne's parser for PRESS could acquire new words
from the person typing to it.  What does it mean "to reshape one's
knowledge to fit"?  Writing programs which adapt to the particular
client has been an active research area in AI for several years now.  As
for insensitivity &c, if we could be given some examples of what kinds
of IKBS behaviour Gilbert Cockton interprets as having these qualities,
and/or of otherwise similar behaviours not so interpreted, perhaps we could
get some constructive criticism out of this.
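
To make the lexical-acquisition point concrete, here is a minimal
sketch of the idea, in Python rather than the Prolog of the original
PRESS work, with a lexicon and word categories invented for
illustration (none of this is taken from Milne's actual parser):

    # Toy parser that acquires unknown words from the person typing
    # to it.  LEXICON maps each known word to its category.
    LEXICON = {"solve": "verb", "the": "det", "equation": "noun"}

    def category(word):
        """Return word's category, asking the user if it is unknown."""
        if word not in LEXICON:
            reply = input("I don't know %r; what category is it? " % word)
            LEXICON[word] = reply.strip()   # remembered for next time
        return LEXICON[word]

    def parse(sentence):
        """Tag each word; a real parser would build structure too."""
        return [(w, category(w)) for w in sentence.lower().split()]

    print(parse("Solve the quadratic equation"))

The first run asks about "quadratic"; subsequent runs do not, because
the program has reshaped its lexicon to fit the interaction.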

The fact that "knowledge", once put into an IKBS, is fossilised bothers
me.  I am so far in sympathy with Cockton as to think that any particular
set of facts & rules is most valuable when it is part of a tradition/
practice/social-context for interpreting, acquiring, and revising such
facts & rules, and I am worried that chunks of "knowledge", once handed
over to computers, may be effectively lost to human society.  But this
is no different from the human practice of abdicating responsibility to
human experts, who are also insensitive, &c.  Expert systems which are
designed to explain (in ICAI style) the knowledge in them as well as to
deploy it may in fact be a positive social factor.
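
The point about explanation can also be made concrete.  Below is a
minimal sketch, again in Python, of a rule interpreter that records
which rules fired, so that it can say why it concluded something and
not merely what it concluded; the rules and facts are invented for
illustration and come from no actual system:

    # Toy IKBS that deploys its rules and can explain its reasoning.
    RULES = [
        ("has-will", ["owns-property", "has-dependants"]),
        ("needs-lawyer", ["has-will", "estate-is-complex"]),
    ]

    def prove(goal, facts, trace):
        """Backward-chain on RULES, recording each rule that fires.
        (A real system would retract trace entries on failure.)"""
        if goal in facts:
            return True
        for head, body in RULES:
            if head == goal and all(prove(g, facts, trace) for g in body):
                trace.append((head, body))
                return True
        return False

    facts = {"owns-property", "has-dependants", "estate-is-complex"}
    trace = []
    if prove("needs-lawyer", facts, trace):
        for head, body in trace:
            print("Concluded %s because %s." % (head, " and ".join(body)))

Even this toy can answer "why?" after the fact, which is the ICAI-style
behaviour that might make such systems a positive social factor rather
than a fossil.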

Instead of waffling on in high-level generalisations, how would it be if
one particular case were examined?  I propose that the social effect of
Nolo Press's "WillWriter" (or a similar product) should be examined.
It is worth noting that the ideology of Nolo Press is quite explicitly to
empower the masses and reduce the power of lawyers.  What _might_ such a
program do to society?  What _is_ it doing?  Do people who use it experience
it as more or less intolerant than a lawyer?  And so on.  This seems like a
worthy topic for a Master's in Sociology, whatever attitude you take to AI.
(Not that WillWriter is a notable AI program, but it serves the function of
an IKBS.)  Has a study like this already been done?

jeff@aiva.ed.ac.uk (Jeff Dalton) (07/06/88)

This exchange between Richard O'Keefe and Gilbert Cockton is the
first worthwhile discussion to come out of the recent flames, and
I would like to thank Richard for initiating it.