[comp.ai] AI and Sociology

ok@quintus.UUCP (Richard A. O'Keefe) (05/28/88)

I believe it was Gilbert Cockton who raised the question "why does AI
ignore Sociology?"  Two kinds of answers have been given so far:
(1) AI is appallingly arrogant trash.
(2) Sociology is appallingly arrogant trash.

I want to suggest that there is a straightforward reason for the mutual
indifference (some Sociologists have taken AI research as _subject_
material, but AI ideas are not commonly adopted by Sociologists) which
is creditable to both disciplines.

Lakatos's view of a science is that it is not so much a set of theories as
a research programme: a way of deciding what questions to pursue, what
counts as an explanation, and a way of dealing with puzzles.  For example,
he points out that Newton's theory of gravity is in principle unfalsifiable,
and that the content of the theory may be seen in the kinds of explanations
people try to come up with to show that apparent exceptions to the theory
are not real exceptions.

The key step here is deciding what to study.

Both in its application to Robotics and its application to Cognitive
Science, AI is about the mental processes of individuals.

As a methodological basis, Sociology looks for explanations in terms of
social conditions and "forces", and rejects "mentalistic" explanations.

Let me provide a concrete example.  One topic of interest in AI is how
a program could make "scientific discoveries".  AM and Eurisko are
famous.  A friend with whom I have lost contact was working on a program
to try to predict the kinetics of gas phase reactions.  Pat Langley's
"BACON" programs are well known.

Scientific discovery is also of interest to Sociology.  One book on this
topic (the only one on my shelves at the moment) is
	The Social Basis of Scientific Discoveries
	Augustine Brannigan
	Cambridge University Press, 1981
	0 521 28163 6
I take this as an *example*.  I do not claim that this is all there is
to Sociology, or that all Sociologists would agree with it, or that all
Sociological study is like this.  All I can really claim is that I am
interested in scientific discovery from an AI point of view, and when I
went looking for Sociological background this is the kind of thing I found.

Brannigan spends chapter 2 attacking some specific "mentalistic" accounts
of scientific discovery, and in chapter 3 rubbishes the mentalistic
approach completely.  If I understand him, his major complaint is that
accounts such as Koestler's "bisociation" fail to be accounts of
*scientific* *discovery*. Indeed, a section of chapter 3 is headed
    "Mentalistic models confuse learning with discovery."

It turns out that he is not concerned with the question "how do scientific
discoveries happen", but with the question "what gets CALLED a scientific
discovery, and why?"  Which is a very interesting question, but ignores
everything about scientific discovery which is of interest to AI people.

The very reason that AI people are interested in scientific discovery
(apart from immediately practical motives) is that it is a form of learning
in semi-formalised domains.  If one of Pat Langley's programs discovers
something that happens not to be true (such as coming up with Phlogiston
instead of Oxygen) he is quite happy as long as human scientists might have
made the same mistake.  As I read Brannigan's critical comments on the
"mentalistic" theories he was rubbishing, I started to get excited, seeing
how some of the suggestions might be programmable.

Page 35 of Brannigan:
    "... in the social or behavioural sciences we tend to obfuscate the
    social significance of familiar phenomena by explaining them in terms
    of 'underlying' causes.  Though this is not always the case, it is
    true with discovery and learning."
This is to reject in principle attempts to explain discovery and learning
in terms of underlying causes.
    "... the equivalence of learning and discovery is a _confusion_.
    From a social perspective, 'to _learn_' means something quite
    different from 'to _discover_'."
Emphasis his.  He would classify a rediscovery as mere learning,
which at the outset rejects as uninteresting precisely the aspects that
AI is interested in.

Something which is rather shocking from an AI perspective is found on
page 64:
    "... the hallmark of this understanding is the ascription of learning
    to some innate abilities of the individual.  Common sensically,
    learning is measured by the degree of success that one experiences
    in performing certain novel tasks and recalling certain past events.
    Mackay's ethnographic work suggests, on the contrary, that learning
    consists in the institutional ascription of success whereby certain
    performances are ordered and identified as learning achievements to the
    exclusion of other meaningful performances."
Page 66:
    "Although as folk members of society we automatically interpret
    individual discovery or learning as the outcome of a motivated
    course of inference, sociologically we must consider the cognitive
    and empirical grounds in terms of which such an achievement is
    figured.  From this position, cleverness in school is understood,
    not as a function of innate mental powers, but as a function of
    the context in which the achievements associated with cleverness
    are made accountable and remarkable."

To put it bluntly, if we take statements made by some AI people or some
Sociologists at face value, they cast serious doubts on the sanity of
the speakers.  But taken as announcements of a research programme to
be followed within the discipline, they make sense.

AI says "minds can be modelled by machines", which is, on the face of it,
crazy.  But if we read this as "we propose to study the aspects of mind
which can be modelled by machines, and as a working assumption will suppose
that all of them can", it makes sense, and is not anti-human.
Note that any AI practitioner's claim that the mechanisability of mind is
a discovery of AI is false; that is an *assumption* of AI.  You can't
prove something by assuming it!

Sociology says "humans are almost indefinitely plastic and are controlled
by social context rather than psychological or genetic factors", which is,
on the face of it, crazy.  But if we read this as "we propose to study the
influence of the social context on human behaviour, and as a working
assumption will suppose that all human behaviour can be explained this way",
it makes sense, and is not as anti-human as it at first appears.
Note that any Sociologist's claim that determination by social forces is
a discovery of Sociology is false; that is an *assumption* of Sociology.

Both research programmes make sense and both are interesting.
However, they make incompatible decisions about what counts as interesting
and what counts as an explanation.  So for AI to ignore the results of
Sociology is no more surprising and no more culpable than for carpenters
to ignore Musicology (both have some sort of relevance to violins, but
they are interested in different aspects).

What triggered this message at this particular date rather than next week
was an article by Gilbert Cockton in comp.ai.digest, in which he said

    "But perhaps this is what AI is doing, trying to improve our
    understanding of ourselves.  But it may not do this because of 
    (2) it forbids something
    that is, any approach, any insight, which does not have a computable
    expression.  This, for me, is anathema to academic liberal traditions ..."

But of course AI does no such thing.  It merely announces that
computational approaches to the understanding are part of _its_ territory,
and that non-computational approaches are not.  AI doesn't say that a
Sociologist can't explain learning (away) as a function of the social
context, only that when he does so he isn't doing AI.

A while back I sent a message in which I cited "Plans and Situated Actions"
as an example of some overlap between AI and Sociology.  Another example
can be found in chapter 7 of
	Induction -- Processes of Inference, Learning, and Discovery
	Holland, Holyoak, Nisbett, and Thagard
	MIT Press, 1986
	0-262-08160-1

Perhaps we could have some other specific examples to show why AI should
or should not pay attention to Sociology?

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (05/30/88)

Firstly, thanks very much to Richard O'Keefe for taking time to put
together his posting.  It is a very valuable contribution.  One
objection though:
In article <1033@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>But of course AI does no such thing.  It merely announces that
>computational approaches to the understanding are part of _its_ territory,
>and that non-computational approaches are not.
This may be OK for Lakatos, but not for me.  Potty as some of the
ideas I've presented may seem, they are all well rehearsed elsewhere.

It is quite improper to cut out a territory which deliberately ignores
others.  In this sense, psychology and sociology are guilty like AI,
but not nearly so much, as they have territories rather than a
territory.  Still, the separation of sociology from psychology is
regrettable, but areas like social psychology and cognitive sociology
do bridge the two, as do applied areas such as education and management.
Where are the bridges to "pure" AI?  Answer that if you can.

The place such arguments appear most is in curriculum theory (and
also some political theory, especially Illich/"Tools for Conviviality"
and democrats concerned about technical imperialism).  The argument
for an integrated approach to the humanities stems from the knowledge
that academic disciplines will always adopt a narrow perspective, and
that only a range of disciplines can properly address an issue.  AI can be
multidisciplinary, but it is, for me, unique in its insistence on a single
paradigm which MUST distort the researcher's view of humanity, as well as the 
research consumer's view on a bad day.  Indefensible.

Some sociologists have been no better, and research here has also lost support
as a result.  I do not subscribe to the view that everything has nothing but a
social explanation.  Certainly the reason the soles stay on my shoes has
nothing much to do with my social context.  Many societies can control the
quality of their shoe production, but vary on nearly everything else.
Undoubtedly my mood states and my reasoning performance have a physiological
basis as amenable to causal doctrines as my motor car.  But I am part of a
social context, and you cannot fully explain my behaviour without appeal to it.

Again, I challenge AI's rejection of social criticisms of its paradigm.  We
become what we are through socialisation, not programming (although some
teaching IS close to programming, especially in mathematics).  Thus a machine
can never become what we are, because it cannot experience socialisation in the
same way as a human being.  Thus a machine can never reason like us, as it can
never absorb its model of reality in a proper social context.  Again, there are
well documented examples of the effect of social neglect on children.  Machines
will not suffer in the same way, as they only benefit from programming, and
not all forms of human company.  Anyone who thinks that programming is social
interaction is really missing out on something (probably social interaction :-))

RECOMMENDED READING

Jerome Bruner on MACOS (Man: A Course of Study), for the reasoning
behind interdisciplinary education.

Skinner's "Beyond Freedom and Dignity" and the collected essays in
response to it, for an understanding of where behaviourism takes you
("pure" AI is neo-behaviourist, it's about little s-r modelling).

P. Berger and T. Luckmann's "The Social Construction of Reality" and
E. Goffman's "The Presentation of Self in Everyday Life" for the social
aspects of reality.

Feigenbaum and McCorduck's "Fifth Generation" for why AI gets such a bad name
(Apparently the US invented computers single-handedly, presumably while John
 Wayne was taking the Normandy beaches in the film :-))
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

	     The proper object of the study of humanity is humans, not machines

smoliar@vaxa.isi.edu (Stephen Smoliar) (06/04/88)

In article <1301@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>  AI can be
>multidisciplinary, but it is, for me, unique in its insistence on a single
>paradigm which MUST distort the researcher's view of humanity, as well as the 
>research consumer's view on a bad day.  Indefensible.
>
. . . and patently untrue!  Perhaps Mr. Cockton has suffered from an attempt to
study AI in such a dogmatic environment.  His little anecdote about the
advisor who put him off AI is quite telling.  I probably would have been
put off by such an attitude, too.  Fortunately, I could afford the luxury
of changing advisors without changing my personal interest in questions I
wanted to pursue.

First of all, it is most unclear that there is any single paradigm for the
pursuit of artificial intelligence.  Secondly, it is at least somewhat unclear
that any paradigm which certainly will INFLUENCE one's view of humanity also
necessarily DISTORTS it.  To assume that the two thousand years of philosophy
which have preceded us have provided an undistorted view of humanity is
arrogance in its most ignorant form.  Finally, having settled that there
is more than one paradigm, we can hardly accuse the AI community of INSISTING
on any paradigm.
>
>Again, I challenge AI's rejection of social criticisms of its paradigm.  We
>become what we are through socialisation, not programming (although some
>teaching IS close to programming, especially in mathematics).  Thus a machine
>can never become what we are, because it cannot experience socialisation in
>the same way as a human being.  Thus a machine can never reason like us, as
>it can never absorb its model of reality in a proper social context.  Again,
>there are well documented examples of the effect of social neglect on
>children.  Machines will not suffer in the same way, as they only benefit
>from programming, and not all forms of human company.

Actually, if there is any agreement at all in the AI community it is in the
conviction to be sceptical of all authoritative usage of the word "never."
I, personally, do not feel that any social criticisms are being rejected
wholesale.  However, AI is a very difficult area to pursue (at least if
you are really interested in a research pursuit, as opposed to marketing
a new shell for building expert systems).  One of the most important keys
to getting any sort of viable result at all is understanding how to break
off a piece of the whole, big, intimidating problem whose investigation is
likely to provide some insight.  This generally leads to the construction
of a model, usually in the form of a software artifact.  The next key is
to investigate that model to see what it has REALLY told us.  A good example
of such an investigation is the one by Lenat and Brown on why AM and EURISKO
APPEAR (their words) to work.

There are valid questions about socialization which can probably be formulated
in terms of communities of automata.  However, we need to form a better vision
of what we can expect by way of the behavior of individual automata before we
can express those questions in any useful way.  There is no doubt that this
will take some time.  However, there is at least a glimmer of hope that when
we get around to expressing them, we will have a better idea of what we are
talking about than those who have chosen to reject the abstraction of
automata out of hand.
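
(To make that concrete, here is one toy formulation, entirely my own
construction and purely illustrative: a ring of identical automata, each of
which repeatedly adopts the majority "convention" among itself and its two
neighbours.  Once the individual automaton is fixed, socialisation-like
questions -- does a shared convention emerge? how fast? -- become well-posed.

import random

def step(states):
    """One synchronous round of neighbour-majority imitation on a ring."""
    n = len(states)
    return [1 if states[(i - 1) % n] + states[i] + states[(i + 1) % n] >= 2
            else 0
            for i in range(n)]

random.seed(0)
community = [random.randint(0, 1) for _ in range(16)]  # initial conventions
for round_no in range(10):
    print(round_no, community)
    community = step(community)

Nothing here settles the argument; it only shows the kind of precise
question one can ask once the individual automata are specified.)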

dharvey@wsccs.UUCP (David Harvey) (06/11/88)

In article <1301@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> It is quite improper to cut out a territory which deliberately ignores
> others.  In this sense, psychology and sociology are guilty like AI,
> but not nearly so much, as they have territories rather than a
> territory.  Still, the separation of sociology from psychology is
> regrettable, but areas like social psychology and cognitive sociology
> do bridge the two, as do applied areas such as education and management.
> Where are the bridges to "pure" AI?  Answer that if you can.
> 
You are correct in asserting that these are the bridges between
Psychology and Sociology, but my limited observation of people in both
groups is that people in Social Psychology rarely poke their heads into
the Sociology department, and people in Cognitive Sociology rarely
interact with the people in Cognitive Psychology.  The reason I know is
that I have observed them first-hand while getting degrees in Math and
Psychology.  In other words, the bridges are quite superficial, since
the interaction between the two groups is minimal.  In regards to this
situation I am referring to the status quo as it existed at the
University of Utah where I got my degrees and at Brigham Young
University which I visited fairly often.  And in answer to your demands
of AI, perhaps you had better take a very good look at how good social
scientists are at answering questions about thinking.  They are making
progress, but it is not in the form of a universal theory, a la Freud.
In other words, they are snipping away at this little idea and that
little paradigm, just like AI researchers are doing.
>
> Again, I challenge AI's rejection of social criticisms of its paradigm.  We
> become what we are through socialisation, not programming (although some
> teaching IS close to programming, especially in mathematics).  Thus a machine
> can never become what we are, because it cannot experience socialisation in the
> same way as a human being.  Thus a machine can never reason like us, as it can
> never absorb its model of reality in a proper social context.  Again, there are
> well documented examples of the effect of social neglect on children.  Machines
> will not suffer in the same way, as they only benefit from programming, and
> not all forms of human company.  Anyone who thinks that programming is social
> interaction is really missing out on something (probably social interaction :-))
>
You obviously have not installed a new operating system on a VAX only to
discover that it has serious bugs.  Down comes the machine to the >>>
prompt, and the process of starting the machine up with the old OS that
worked begins.  Since the machine does not have feelings (AHA!) it
doesn't care, but it certainly was not beneficial to its performance.
A student's program with severe bugs that cause core dumps doesn't help
either.  Then there is the case of our electronic news feed being
down for several weeks.  When it finally resumed operation it completely
filled the process table, making it impossible to even sign on as
super-user and do an 'ls'!  The kind of programming that allowed it to
spawn that many child processes is not my idea of something beneficial!
In other words, bad programming is to a certain extent analogous to
social neglect.  Running a machine in bad physical conditions and
physically abusing a person are also similar.  Yes, you can create
enough havoc with Death Valley heat to totally kill a computer!
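
For the curious, that failure mode is easy to reproduce in miniature.  A
hypothetical sketch in Python (mine, not the actual news software; Unix
only): a parent that forks one child per queued article and never waits
for any of them, so every exited child lingers in the process table as a
zombie.  With a few weeks of backlog the table fills and even root cannot
start a new process.

import os
import time

QUEUED_ARTICLES = 20    # capped here; a real backlog would be thousands

for _ in range(QUEUED_ARTICLES):
    pid = os.fork()
    if pid == 0:        # child: "process" one article, then exit
        os._exit(0)
    # BUG: the parent never calls os.wait(), so each exited child
    # remains in the process table as a zombie (<defunct> in ps output).

time.sleep(30)          # run `ps -ef | grep defunct` meanwhile to see them
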
>
> RECOMMENDED READING
> 
> Jerome Bruner on MACOS (Man: A Course of Study), for the reasoning
> behind interdisciplinary education.
>
   ^^^ No qualms with the ideas presented in this book
>
> Skinner's "Beyond Freedom and Dignity" and the collected essays in
> response to it, for an understanding of where behaviourism takes you
> ("pure" AI is neo-behaviourist, it's about little s-r modelling).
>
   ^^^ And I still think his model has lots of holes in it!

dharvey @ WSCCS (David A Harvey)