[comp.ai] Artificial Intelligence and Intelligence

peru@soleil.UUCP (Dave Peru) (11/15/88)

Definition of Intelligence:

1. Know how to solve problems.
2. Know which problems are unsolvable.
3. Know #1, #2, and #3 defines intelligence.

This is the correct definition of intelligence.  If anyone disagrees, please
state so and why.

If you take into account the unsolvable problems of Turing machines, then this
proves Artificial Intelligence is impossible.

"Artificial Intelligence" is an unsolvable problem.

Human beings are not machines.

Human beings are capable of knowing which problems are unsolvable, while
machines are not.

joe@athena.mit.edu (Joseph C Wang) (11/15/88)

In article <484@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>Definition of Intelligence:
>
>1. Know how to solve problems.
>2. Know which problems are unsolvable.
>3. Know #1, #2, and #3 defines intelligence.
>
>This is the correct definition of intelligence.  If anyone disagrees, please
>state so and why.

Find an even number that is not the sum of two prime numbers.

Find a set of integers a, b, c, and n in which a^n + b^n = c^n where n is
greater than two.

Are these problems solvable?

Don't know?  Must not be intelligent.
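(For the record, a brute-force hunt for a counterexample to the first problem
takes only a few lines -- a rough sketch in hypothetical Python, with made-up
helper names; the catch is that if Goldbach's conjecture is true, the search
never halts, and nobody has proved it either way:)

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def goldbach_counterexample():
        """Search the even numbers >= 4 for one that is no sum of two primes.

        If Goldbach's conjecture is true, this loop never terminates.
        """
        n = 4
        while True:
            if not any(is_prime(p) and is_prime(n - p)
                       for p in range(2, n // 2 + 1)):
                return n        # a counterexample -- none has ever been found
            n += 2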
--------------------------------
Joseph Wang (joe@athena.mit.edu) 
450 Memorial Drive C-111
Cambridge, MA 02139

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (11/15/88)

In article <484@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
:Definition of Intelligence:
:
:1. Know how to solve problems.
:2. Know which problems are unsolvable.
:3. Know #1, #2, and #3 defines intelligence.
:
:This is the correct definition of intelligence.  If anyone disagrees, please
:state so and why.
:
I disagree.  Is a steelworker intelligent?  Does a steelworker
know which problems are unsolvable (without being told)?
:
:Human beings are not machines.
:
Says who?  Can you prove this?  All the evidence I know points
toward human beings as being machines.

:Human beings are capable of knowing which problems are unsolvable, while
:machines are not.

What does knowing mean?

I thought Douglas Hofstadter's book put this argument to rest some
time ago. 

ok@quintus.uucp (Richard A. O'Keefe) (11/15/88)

In article <484@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>Definition of Intelligence:
>
>1. Know how to solve problems.
>2. Know which problems are unsolvable.
>3. Know #1, #2, and #3 defines intelligence.
>
>This is the correct definition of intelligence.  If anyone disagrees, please
>state so and why.
>
(Gilbert Cockton is going to love me for this, I can tell...)
Intelligence is a social construct, an ascription of value to certain
characteristics and behaviours deemed to be mental.  One child who has
memorized the periodic table of the elements will be deemed intelligent,
another child who has memorized baseball scores for the last N years
will be deemed sports-mad, even though they may have acquired comparable
bodies of information _by_comparable_means_.  If we have three people in
a room: Subject, Experimenter, and Informant, if Subject does something,
and Informant says "that was intelligent", Experimenter is left wondering
"is that a fact about Subject's behaviour, or about Informant's culture?"
The answer, of course, is "yes it is".

Dijkstra's favourite dictionary entry is
    "Intelligent, adj. ... able to perform the functions of a computer ..."
(Dijkstra doesn't think much of AI...)

In at least some respects, computers are already culturally defined as
intelligent.

>Human beings are not machines.
I agree with this.

>Human beings are capable of knowing which problems are unsolvable, while
>machines are not.
But I can't agree with this!  There are infinitely many unsolvable
problems, and determining whether a particular problem is unsolvable
is itself unsolvable.  This does _not_ mean that a machine cannot
determine that a particular problem _is_ solvable, only that there
cannot be a general procedure for classifying _all_ problems which is
guaranteed to terminate in finite time.  Human beings are also capable
of giving up, and of making mistakes.  Most of the unsolvable problems
I know about I was _told_; machines can be told!

Human beings are not machines, but they aren't transfinite gods either.

ap1i+@andrew.cmu.edu (Andrew C. Plotkin) (11/15/88)

/ Definition of Intelligence:
/
/ 1. Know how to solve problems.
/ 2. Know which problems are unsolvable.
/ 3. Know #1, #2, and #3 defines intelligence.
/
/ This is the correct definition of intelligence.  If anyone disagrees, please
/ state so and why.

Well, it seems to somehow miss the point... but I won't argue 'cause I don't
know enough about intelligence. (Does anyone?)

/ If you take into account the unsolvable problems of Turing machines, then this
/ proves Artificial Intelligence is impossible.
/
/ "Artificial Intelligence" is an unsolvable problem.
/
/ Human beings are not machines.
/
/ Human beings are capable of knowing which problems are unsolvable, while
/ machines are not.

Wrong, wrong. I've had enough comp.sci. to have heard the proofs that certain
problems are unsolvable. A notable fact: all those proofs were presented
formally -- that is, the problem "is this (particular) problem Turing-solvable?"
*was* Turing-solvable.

There's also a misconception that humans really *can* solve all those
Turing-insoluble problems. For example, the problem "Will this program always
terminate for all inputs, or might it go into an infinite loop?" That's
definitely Turing-insoluble, but can a human solve it? For *any* program given
to him?

I maintain that a human can be simulated by a Turing machine. Comments?

--Z

arrom@aplcen.apl.jhu.edu (Ken Arromdee ) (11/15/88)

>Human beings are capable of knowing which problems are unsolvable, while
>machines are not.

Says who?
--
"I don't care if you _did_ do it in a movie once, Gilligan is not breathing
through that reed!"

--Kenneth Arromdee (ins_akaa@jhunix.UUCP, arromdee@crabcake.cs.jhu.edu,
	g49i0188@jhuvm.BITNET) (not arrom@aplcen, which is my class account)

rolandi@gollum.UUCP (w.rolandi) (11/15/88)

In response to O'Keefe's:
>>Human beings are not machines.
>I agree with this.

I too agree.  Just the same, I would be interested in hearing your 
opinions as to how or why human beings differ from machines.  


Walter Rolandi
rolandi@ncrcae.Columbia.NCR.COM
NCR Advanced Systems Development, Columbia, SC

marty@homxc.UUCP (M.B.BRILLIANT) (11/15/88)

In article <484@soleil.UUCP>, peru@soleil.UUCP (Dave Peru) writes:
> Definition of Intelligence:
> 
> 1. Know how to solve problems.
> 2. Know which problems are unsolvable.
> 3. Know #1, #2, and #3 defines intelligence.

Definitions are arbitrary.  My only criteria for definitions are
whether they create useful terms, and whether they conflict with
previously accepted definitions.  This one might be useful, conflicts
with other definitions, but I'm willing to try it out.

One problem with it is the use of the word ``know.''  There is no such
thing as ``knowing'' a definition abstractly.  Definitions change, as
people change the way they use words, because they change the way they
use the ideas the words represent.  A definition that is useful one day
may be found later to be inadequate or self-contradictory.

> This is the correct definition of intelligence.  If anyone disagrees, please
> state so and why.

I disagree.  There is no such thing as a correct definition, only
useful or useless.  The definition is interesting, at least, but, as
noted above, it has problems.

There is already a vague concept of ``intelligence'' waiting for a
precise definition.  The concept contains the notion that human
behavior is characterized by some property called ``intelligence''
which is absent from the behavior of most other living things and most
machines.  That notion prejudices any attempt to define artificial
intelligence, because it presupposes that machines are not intelligent.

Any definition of ``artificial intelligence'' must allow intelligence
to be characteristically human, but not exclusively so.

> If you take into account the unsolvable problems of Turing machines, then this
> proves Artificial Intelligence is impossible.
> 
> "Artificial Intelligence" is an unsolvable problem.

This is a statement of a proposition and a preface to a proof.  It is
not a proof.  Proof required.

> Human beings are not machines.

Therefore..... what?  Is that supposed to mean that all machines are
bound by certain unsolvability rules derived from the study of Turing
machines, but the human mind is exempt?  Can that be proved?

> Human beings are capable of knowing which problems are unsolvable, while
> machines are not.

Proof?  It seems to me that all we humans have is an existence proof that
some problems are unsolvable.  Classifying all problems into solvable and
unsolvable may be itself an unsolvable problem - does anybody know?

We are, I think, back to the primitive notion that (a) intelligence is
what only humans do, (b) machines are not human, hence (c) machines are
not intelligent.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201)-949-1858
Holmdel, NJ 07733	att!houdi!marty1

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

jsb@dasys1.UUCP (The Invisible Man) (11/16/88)

In article <7974@bloom-beacon.MIT.EDU> joe@athena.mit.edu (Joseph C Wang) writes:
:Find an even number that is not the sum of two prime numbers.
How about 2?  Or do you consider 1 a prime?
:
:Find a set of integers a, b, c, and n in which a^n + b^n = c^n where n is
:greater than two.
How about a = b = c = 0?
:
:Are these problems solvable?
:
:Don't know?  Must not be intelligent.
I won't say you're not intelligent; just careless.
-- 
Jim Baumbach  uunet!actnyc!jsb   
Disclaimer: The Tao that appears in this signature is not the eternal Tao.

bradb@ai.toronto.edu (Brad Brown) (11/16/88)

Claiming that the existence of Turing-unsolvable problems precludes AI
is just plain wrong.  That old chestnut has been around for a long time
and has been debunked many times, so here's my attempt to debunk it 
(again :-)

Assumption 1:

	Turing machines are capable of computing any function that 
	any imaginable computer (or what we know of as a computer)
	can compute.  Therefore the limits of Turing machines apply
	to any computer that we can construct.

Assumption 2:

	There are problems that Turing machines cannot compute.
	The canonical example is the halting problem -- there is
	a simple proof that it is impossible to write a program 
	that will take as input another program and the input to
	the other program and decide whether the program will
	halt.  There are other examples which can be proved to 
	be non-computable because they can be related to
	(technically, "reduced to,") the halting problem.

False assumption 3:

	Since Turing machines cannot compute everything imaginable,
	they can't compute (ie simulate) human intelligence.


Assumption 3 is implicitly based on an assumption that 
artificial intelligence requires a solution to the halting
problem.  It's not at all clear that this is necessarily
the case.

To show that a solution to the halting problem is required,
you would have to show that intelligent beings could perform
some task which was reducible to the halting problem.
For instance, you would have to show me that you were able
to determine whether a program would halt given the program
and the input -- for any program, not just a specific one.
Bet you can't think of *any* task humans perform that 
is *provably* halting-reducible.
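
(For reference, the "simple proof" behind Assumption 2 is short enough to
sketch.  This is only an illustration in hypothetical Python -- halts() and
contrary() are names made up for the argument, not real library calls:)

    def halts(prog, inp):
        """Hypothetical perfect halting decider -- assumed to exist for the
        sake of argument; the proof shows it cannot."""
        raise NotImplementedError("no such decider can be written")

    def contrary(prog):
        """Do the opposite of whatever halts() predicts about prog run on
        its own text."""
        if halts(prog, prog):
            while True:         # halts() said "it halts", so loop forever
                pass
        else:
            return              # halts() said "it loops", so stop at once

    # Now ask: does contrary(contrary) halt?  Whatever halts(contrary, contrary)
    # answers is contradicted by contrary's own behaviour, so no correct
    # halts() can exist.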

People have known for some time that there are limits to 
what can be computed by *any* mathematically-based system.
The halting problem for Turing machines is only one example --
Godel's Incompleteness Theorem is a more general result
stating that any axiomatic system of sufficient power will
	be fundamentally incomplete.  (NB  Systems that are not of
'sufficient power' are also not of sufficient interest to
consider for the purpose of AI)

Unfortunately, Godel's result is another example of a limitation
of formal systems that is misunderstood to mean that formal 
systems cannot exhibit intelligence.  In this case, Godel
shows that there will exist theorems which are true within
a system that the system will not be able to prove true.

Again, the fallacy of the popular opinion is that intelligent
systems will require that such truths be provably true.  This
is an even more tenuous claim than the requirement for solutions
to the halting problem -- given the number of mistakes that
humans make all the time, why should we expect that humans
will have to know the truths of these pathological cases?
Indeed, most of the "errors" that formal systems will make
in determining the truth of a theorem will occur only for
pathological cases that are probably not important anyway.


So, I regard claims that AI requires some form of unachievably
"perfect" reasoning with great schepticism.  I feel strongly
that if you are a person who looks at the brain from the 
point of view of a scientist who examines causes and effects,
then the possibility (though certainly not the achievability)
of AI is very credible.  IMHO, beliefs to the contrary imply
a belief in a "magic" component to the brain that defies
rational explanation.  Sorry, but I just don't buy that.

					(-:  Brad Brown  :-)
					bradb@ai.toronto.edu

					+:   O
					+:  -+-
	Flame-retardant shielding -->   +:   |
					+:  / \

nick@hp-sdd.hp.com (Nick Flor) (11/16/88)

In article <484@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>Definition of Intelligence:
>
>1. Know how to solve problems.

Okay, my calculator (an HP28S) can do this.

>2. Know which problems are unsolvable.

My 28S can do this too.  (It beeps when I ask it to solve an  unsolvable
equation).

>3. Know #1, #2, and #3 defines intelligence.

The only  thing  interesting  about  this last  statement  is that  it's
recursive.  Well...  my 28S can even do recursion.

So, by your  definition, my calculator is  intelligent.  But by current
standards of  intelligence,  it isn't.  Therefore  your  definition  is
wrong.

>"Artificial Intelligence" is an unsolvable problem.

But is the  creation  of  intelligence  an  unsolvable  problem?  
Nah, people make babies all the time.  


Nice try.


Nick
-- 
+ Disclaimer: The above opinions are my own, not necessarily my employer's.   +
+ Oh, sure. Sure. I'm going. But I got  | Nick V. Flor           * * o * *    +
+ your number, see? And one of these    | Hewlett Packard SDD   * * /I\ * *   +
+ days, the joke's gonna be on you.     | ..hplabs!hp-sdd!nick  * * / \ * *   +

peru@soleil.UUCP (Dave Peru) (11/16/88)

In article <484@soleil.UUCP> I write:

>>Definition of Intelligence:
>>
>>1. Know how to solve problems.
>>2. Know which problems are unsolvable.
>>3. Know #1, #2, and #3 defines intelligence.

There is a misunderstanding about what I meant by this statement, especially #2.

Human beings KNOW the "halting problem for Turing machines", my point is
that machines can NOT know the "halting problem for Turing machines".

Please describe how you would give this knowledge to a computer.

All uncomputability problems come from dealing with infinity.

Like naive set theory, naive Artificial Intelligence does not deal with
paradoxes and the concept of "infinity".

Human beings understand the concept of "infinity", most of mathematics would
be meaningless if you took out the concept of "infinity".  Mathematicians
would be quite upset if you told them that they were really fooling themselves
all this time.  Physicists use "infinity" for predicting reality.

Finite machines cannot understand "infinity".

For the concept of "infinity" to have any meaning at all you MUST have the
computational strength of reality.

ap1i+@andrew.cmu.edu (Andrew C. Plotkin) (11/16/88)

/ Human beings KNOW the "halting problem for Turing machines", my point is
/ that machines can NOT know the "halting problem for Turing machines".
/ Please describe how you would give this knowledge to a computer.
/ All uncomputability problems come from dealing with infinity.
/ Like naive set theory, naive Artificial Intelligence does not deal with
/ paradoxes and the concept of "infinity".
/ Finite machines cannot understand "infinity".

Finite machines can understand both infinity and the halting problem the same
way they can understand any other theorem in a formal system. Give one the
definitions and axioms, and let it crank out theorems. "The halting problem is
Turing-insoluble" is a theorem that's not hard to prove. Programs to do this
sort of thing have been written. Hell, that's how humans did it.

"Infinity" is a single concept; you don't need an infinite amount of information
to comprehend it or use it. You only need the axioms and definitions that
mention it.
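
(A toy version of "give it the axioms and let it crank" fits on one screen.
The sketch below is hypothetical Python and the predicate names are invented,
but it shows the point: two finite lines of axiom and rule generate an
unbounded supply of theorems about the naturals.)

    AXIOM = "nat(0)"                       # axiom: zero is a natural number

    def successor_rule(theorem):
        """Inference rule: from nat(X), derive nat(s(X))."""
        inner = theorem[len("nat("):-1]    # strip the outer nat( ... )
        return "nat(s(" + inner + "))"

    def crank(limit):
        """Enumerate the first `limit` theorems the finite axioms yield."""
        theorems, current = [], AXIOM
        for _ in range(limit):
            theorems.append(current)
            current = successor_rule(current)
        return theorems

    print(crank(4))   # ['nat(0)', 'nat(s(0))', 'nat(s(s(0)))', 'nat(s(s(s(0))))']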

/ For the concept of "infinity" to have any meaning at all you MUST have the
/ computational strength of reality.

 In mathematics, the concept of "infinity" has no meaning at all. In "real
life," it is useful in describing certain aspects of the universe, but no more
useful than the concept "two". A program which can handle one can handle the
other.

--Z

bjornl@blake.acs.washington.edu (Bjorn Levidown) (11/16/88)

In article <490@soleil.UUCP> it is stated that

>For the concept of "infinity" to have any meaning at all you MUST have the
>computational strength of reality.

What is the computational strength of reality and what is inherent in 
a computer system which forces this limitation?  Why can the human
brain/mind comprehend reality?  It is, after all, only a biological
computer of a sort.

Bjorn Levidow
bjornl@blake.UUCP
University of Washington
Department of Psychology, NI-25

ap1i+@andrew.cmu.edu (Andrew C. Plotkin) (11/16/88)

/:: Find a set of integers a, b, c, and n in which a^n + b^n = c^n where n is
/:: greater than two.
/::
/:: Are these problems solvable?
/::
/:: Don't know?  Must not be intelligent.
/
/ How about a = b = c = 0?
/ I won't say you're not intelligent; just careless.

Joking aside, I'd sort of expect an AI to be as careless as a human...

--Z

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (11/16/88)

In article <1738@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>Says who?  Can you prove this?  All the evidence I know points
>toward human beings as being machines.
Can't know much then.

See C18 enlightenment debate, Descartes, goals of encyclopaedists,
limitations of enlightenment rationality (unfortunately ossified in
American political values).

This is at least a 300-year-old debate (not counting the earlier millennium
of debate on free will), and it is still going strong.

I cannot see how an educated (and intelligent :-)) person could
possibly be so ignorant of the cultural context of mechanistic models of
humans.  Looking at other US intellectual traditions like functionalism
and sociobiology, it's no surprise though.

All machines are artefacts.
Humans are not artefacts.
Humans are not machines.

Humanity and culture are inseparable.
Techies are uncultured.
Techies are inhuman.

Don't flame me, it was only my deterministic logic which inferred this
:-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-)

-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

ok@quintus.uucp (Richard A. O'Keefe) (11/16/88)

In article <490@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>In article <484@soleil.UUCP> I write:
>>>Definition of Intelligence:
>>>1. Know how to solve problems.
>>>2. Know which problems are unsolvable.
>>>3. Know #1, #2, and #3 defines intelligence.

>There is a misunderstanding what I meant by this statement, especially #2.
>Human beings KNOW the "halting problem for Turing machines", my point is
>that machines can NOT know the "halting problem for Turing machines".
>Please describe how you would give this knowledge to a computer.

I was afraid for a minute there that you were going to say "know how to
solve _practical_ problems, have a _practical_ grasp of which problems
are _feasible_", but no, it's the halting-problem type of problem after all.

What does it mean to say 'Human beings KNOW the Halting Problem'?
As a plain matter of fact, most of them _don't_.  I think I do, but what
I mean by that is that I have enough formal knowledge about mathematical
logic to follow the books, to relate some of the key concepts to each
other, and to deploy this information in further formal reasoning.  I do
_not_ know, for any given program, whether or not it halts until I have
examined that particular program, and even then the Law I _really_ rely
on is Murphy's 1st law of bugs:  "There is a mistake in this program."

Boyer & Moore have a paper on a machine-generated proof of Goedel's Theorem.
Read that paper to see how to "give this knowledge to a computer".
Getting a computer to "know" mathematical things is a Small Matter of
Programming.

>All uncomputability problems come from dealing with infinity.

In a strict sense, yes.  But you can find oodles of NP-hard problems
without thinking once about infinities, and NP-hard is just as good as
uncomputable for most practical purposes.

>Like naive set theory, naive Artificial Intelligence does not deal with
>paradoxes and the concept of "infinity".

There have been several conferences and workshops on knowledge
representation.  There is no shortage of papers discussing paradoxes
in these areas.  Smullyan has a fine paper on some paradoxes of belief.
Not all AI work is naive.  (There is an infinite regress in "I think 
that he thinks that I think ..." which has to be headed off; this _has_
occurred to people.)

Surely set theory has taught us by now that there is no such thing as
THE concept of infinity:  omega is not Aleph-null is not the-point-at-
infinity is not ... is not the common-sense notion of infinity.

>Human beings understand the concept of "infinity", most of mathematics would
>be meaningless if you took out the concept of "infinity".

As for the second clause, clearly you are not an Intuitionist.
As for the first, this is simply false:  the vast majority of human beings
have not had the technical training to distinguish between omega, Aleph-null,
and the reciprocal of an infinitesimal in non-standard arithmetic, and those
who _have_ had such training would probably be humble enough to admit that
we are still nibbling at the edges of the concepts.

>Finite machines cannot understand "infinity".

Why not?  The whole human race has so far encountered only a finite number
of facts about the various infinities.  By starting this joust in a FORMAL
field you've lost the game in advance.

>For the concept of "infinity" to have any meaning at all you MUST have the
>computational strength of reality.

For the concept of infinity to have any meaning at all to WHOM?
If you mean "in order to understand infinity correctly, the understander
must have the computational strength of infinity", maybe, but it is not
clear to me that any human being understands infinity that well,
particularly not one who talks about THE concept of infinity.
What _is_ "the computational strength of reality", and how is it
possible for finite beings to possess it?

[Someone who believes that we are god could consistently believe that
 humans are not finite bounded creatures and so can "know" infinity.
 New AIge?  On the other hand, a god so easily deceived could be wrong...]

kers@otter.hpl.hp.com (Christopher Dollin) (11/16/88)

Dave Peru says:

| Human beings KNOW the "halting problem for Turing machines", my point is
| that machines can NOT know the "halting problem for Turing machines".
|
| Please describe how you would give this knowledge to a computer.

The same way I would give the knowledge to a student. Oh, machines aren't
intelligent? But isn't that what we're discussing already?

| All uncomputability problems come from dealing with infinity.

Hm. Wouldn't "unboundedness" be a better term to use than "infinity"? [Since
you have elsewhere claimed that "zero = infinity" isn't against your
intuition, doesn't it follow that uncomputability problems come from dealing
with zero too?]

| Like naive set theory, naive Artificial Intelligence does not deal with
| paradoxes and the concept of "infinity".

But does that matter?

| Human beings understand the concept of "infinity", most of mathematics would
  ^^^^^^^^^^^^
| be meaningless if you took out the concept of "infinity".  Mathematicians
| would be quite upset if you told them that they were really fooling themselves
| all this time.  Physicists use "infinity" for predicting reality.

*Some* human beings do. Some don't. Some mathematicians have a downer on
"infinity".

| Finite machines cannot understand "infinity".

Hm. I claim human beings are counterexamples, but you don't believe that
humans are machines. [Probably none of us agree on what "machine" means
anyway. We might have better agreement on "human".] Do you claim that an
infinite concept can't be captured in a finite way? I would have thought that
many of our early notions of infinity were derived by a uniform extension of
finite patterns ... to borrow the book title, "One ... Two ... Three ...
Infinity".

| For the concept of "infinity" to have any meaning at all you MUST have the
| computational strength of reality.

Is reality finite? ["Is there an odd or even number of elementary particles in
the universe?" - no I *don't* propose to reopen that flamefeast.] What is
"computational strength"? Are machines unreal? Is the filling in my sandwiches
determined only when I bite through the bread? When will the long dark teatime
of the soul be a paperback?

Regards,    | "Once   I would have been glad   to have made your acquaintance
Kers.       | Now   I feel the danger   brought about   by circumstance."

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (11/16/88)

In article <490@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>
>Finite machines cannot understand "infinity".
>
Are human beings not finite?
Are they not machines?
If they are not machines, then how do they differ?
What is it about this difference that allows us to understand infinity?

Since you made this assertion, I must assume you have some belief
or evidence to back it up, would you please provide it?  If the
answer is "I don't know", then how do you know that the essence
of understanding infinity isn't mechanical?

>For the concept of "infinity" to have any meaning at all you MUST have the
>computational strength of reality.

I don't understand this statement.  What computational strength does reality
have?  Are computers not as real as humans?

anderson@secd.cs.umd.edu (Gary Anderson) (11/17/88)

In article <490@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>In article <484@soleil.UUCP> I write:
>

>All uncomputability problems come from dealing with infinity.
>
>Like naive set theory, naive Artificial Intelligence does not deal with
>paradoxes and the concept of "infinity".
>
>Human beings understand the concept of "infinity", most of mathematics would
>be meaningless if you took out the concept of "infinity".  Mathematicians
>would be quite upset if you told them that they were really fooling themselves
>all this time.  Physicists use "infinity" for predicting reality.


There are very many different concepts of infinity even within mathematics.
I do not know the relevant comp sci literature, but I would be surprised if
you could not in some way model or otherwise provide some of these concepts
to a program designed to prove theorems in certain contexts.

It seems to me that the power of the concept of infinity lies in how it is
used to solve or to characterize the solution of certain questions in
mathematics, and that so far as mathematics is concerned it does not have
an existential basis independent of how it is or ought to be used.


Anecdote:

Professor Garrett Birkhoff, a prominent mathematician not unfamiliar with the
use of infinity in mathematics, often remarks in his undergraduate ODE class
that when mathematicians begin to speak about infinity, they don't know 
what they are talking about.



From this observation, I suggest another difference between man and machine.

Perhaps man is just better at coping with the "reality" of not "knowing".

jbn@glacier.STANFORD.EDU (John B. Nagle) (11/17/88)

      We already had this discussion this year.  We had it at least
twice last year.  So can it.
		
					John Nagle

hassell@tramp.Colorado.EDU (Christopher Hassell) (11/17/88)

Upon jumping into this fray, first I might notice that so much of this
(quite logically so) is a discussion of the meaning of intelligence as
we try to get it *NOT* related to humans.  (We are our own first model, it seems.)

I suggest that we cannot approach a definition that is not embedded with human
implications, so it might be a better idea to *somehow* find another word, given
that "intelligence" is too loaded to formalize.

Here goes the main stuff:
The most ticklish of the issues confronting any formalization of intelligence
is simply the observation that we (*humans*) seem innately different from
computers (*machines*), and, as with any good guess, we seek to support that
impression, since it has some apparently solid stuff behind it.

I think this issue boils down to something else: -Determinism-.   The typical
picture, and one that I myself note causes revulsion, is that of us as machines,
which seems to imply something deterministic, with predictable boundaries
(finite), and without the ability to modify itself.  (This is usually the issue
in a lot of sci-fi on the subject.)

Determinism in humans is always a fun issue, but also an irritatingly difficult
one to REALLY nail down.  In other words, we ain't.  At the nth degree, we
aren't predictable, so no one need worry about the limits on their thoughts.
Other issues like seeing the future or the general flow of things [fate!!]
always seem relative to the person believing them and are therefore moot.

The hypothesis that seems to make sense is that, given those noisy neurons
and their interconnections, CHAOS seems to be the logical assumption at 
some level (hopefully a lower one, well for some people |-)

On to more meaningful stuff: other issues are that if we are machines, where
do we arrive at conclusions about ourselves, and where do we get this knowledge
of the world?  That would seem to come from the fact that any answer lies in a
*complete* definition of the problem.  So given problems we can find answers,
because the problems carry the arbitrary information.  (All of the above
supposes a closure theorem on information as a substance: it can only be lost,
supposedly (the world is full of chaos 'making' more?))

Given this, it seems natural that if we get a *SYMBOLIC* scheme of
observation we can adapt to learning anything within our grasp,
(worked out pretty neat huh?) and that includes introspection 
                                           (weird, but still symbolizable).

------THE LAST POINT-------
The MOST ticklish concept left would now be old hat.
       We started with stuff like believing we didn't have the only nice 
          flat perspective (i.e. heavenly bodies existing, w/round earth)
       The next general milestone was our conceding that we *ain't* at the
           center of our universe either (This one's still going)
       After this was a shocker, we are not separate from animals and the
           `unintelligent' entities we see around us. (Tough one)
    
    The last may be that we can derive the instructions for making an inherently
         large part of what we call in ourselves `human', (`thinking beings')

What this may lead to, if the link is proven, is the most fun of all. (ugh)
   if our self-proclaimed `essence' is creatable and finite, and the tools
      that could make another for ourselves are now laid out ... egad

What, completely defined, would we create once we figure out how to use them????
       (being the notably imperfect and selfish creatures we are!!)
  
If you've persevered this far through the text, you must either have a flame in
    mind or have fallen asleep thumping the space bar  |^/ zzzzzz

-------------------------------------------------------------------------------
YES, more mental masturbation for use in your home or office!!
provided by (without responsibility of course!  He was probably possessed at the
             time.  What else could explain it?)          >>>>C>>>H>>>>>
...ncar!boulder!tramp!hassell (and oh so much of it)
-------------------------------------------------------------------------------

fransvo@htsa (Frans van Otten) (11/18/88)

In article <88Nov15.170837est.707@neat.ai.toronto.edu> bradb@ai.toronto.edu (Brad Brown) writes:
>of AI is very credible.  IMHO, beliefs to the contrary imply
                          ^^^^

What does IMHO stand for ?

-- 
                         Frans van Otten
                         Algemene Hogeschool Amsterdam
			 Technische en Maritieme Faculteit
                         fransvo@htsa.uucp

fransvo@htsa (Frans van Otten) (11/18/88)

In article <484@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>Human beings are not machines.

I don't know how important intuition etc. is for (human) intelligence, nor
do I know much about neurology, but viewing the human brain as a lot of
'biological transistors' (neurons etc.), in what way do we differ from a
computer?  Why would it be impossible to make a machine, maybe as big as
the World Trade Center, that performs (part of) the functions the
human brain (small as it is) performs?

Besides, there is not just one kind of intelligence.

-- 
                         Frans van Otten
                         Algemene Hogeschool Amsterdam
			 Technische en Maritieme Faculteit
                         fransvo@htsa.uucp

dhw@itivax.UUCP (David H. West) (11/18/88)

In article <88Nov15.170837est.707@neat.ai.toronto.edu> bradb@ai.toronto.edu (Brad Brown) writes:
>Godel's Incompleteness Theorem is a more general result
>stating that any axiomatic system of sufficient power will
>be fundimentally incomplete.  (NB  Systems that are not of
>'sufficient power' are also not of sufficient interest to
>consider for the purpose of AI)

That's by no means obvious.  Formal reasoning is a relatively recent
addition to the human behavioral repertoire; people do it neither
naturally (it takes them until well into adolescence to become
halfway competent, whereas they walk and talk adequately ten years
earlier) nor well (look at all the incorrect 'proofs' that get 
published), and most people manage perfectly well most of the time 
without using it at all (see the work of Kahneman and Tversky).  Let's 
not get too logicocentric when talking about intelligence.  It's at
least plausible that nature wouldn't produce complete implementations 
of computationally problematic paradigms if there were a cheaper but
adequate alternative. (Cf. Simon's satisficing, optical illusions
etc.) 

>[...] Godel
>shows that there will exist theorems which are true within
>a system that the system will not be able to prove true.
[...]

>a belief in a "magic" component to the brain that defies
>rational explanation.  Sorry, but I just don't buy that.

Eh?  You have to admit the possibility, because you've just declared
your belief in theorems which are true-but-unprovable in a system.
If that isn't "defying rational explanation" (within the system), I
don't know what is.

-David West            dhw%iti@umix.cc.umich.edu
		       {uunet,rutgers,ames}!umix!itivax!dhw
CDSL, Industrial Technology Institute, PO Box 1485, 
Ann Arbor, MI 48106

ok@quintus.uucp (Richard A. O'Keefe) (11/18/88)

In article <4714@boulder.Colorado.EDU> hassell@tramp.Colorado.EDU (Christopher Hassell) writes:
>The MOST ticklish concept left would now be old hat.
>       We started with stuff like believing we didn't have the only nice 
>          flat perspective (i.e. heavenly bodies existing, w/round earth)
That has indeed been old hat for >2200 years.

>       The next general milestone was our conceding that we *ain't* at the
>           center of our universe either (This one's still going)
Note that Ptolemaic astronomy did not put the Earth at the centre of
the universe & that the mediaeval perspective could more accurately be
described as "at the bottom of a pit" than "at the centre of the universe".

>       After this was a shocker, we are not separate from animals and the
>           `unintelligent' entities we see around us. (Tough one)
There is a fascinating book called "The Criminal Prosecution and Execution
of Animals" (that's from memory, the book is at home) which was republished
a couple of years ago.  Up until about the 18th century, it turns out that
it was a common occurrence for animals (even a swarm of locusts) to be
tried in court.  This doesn't sound as though premodern Europe thought
there was such a great separation.  How can you possibly try a sow which
ate its piglets for murder unless you believe that it was a responsible
moral being?

History is not only stranger than we imagine, it is _other_ than we imagine.

smryan@garth.UUCP (Steven Ryan) (11/18/88)

>Professor Garrett Birkhoff, a prominent mathematician not unfamiliar with the
>use of infinity in mathematics, often remarks in his undergraduate ODE class
>that when mathematicians begin to speak about infinity, they don't know 
>what they are talking about.

Just a short note that it is a mortal sin for a mathematician to say
`inf***ty.'  Penance is usually five Hail Davids.

The word `infinite' only occurs before `set,' as in `infinite set,' where it
has a very precise meaning.

The correct term is `arbitrary.'

Discrete transition machines with arbitrary resources cannot necessarily
navigate through a partial recursive set in finite time. Whether humans
can is an open question.
-- 
                                                   -- s m ryan
+---------------------------------------+--------------------------------------+
|    ....such cultural highlights as    |    Nature is Time's way of having    |
|    Alan Thicke, and, uh,....          |    pausible deniability.             |
+---------------------------------------+--------------------------------------+

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (11/18/88)

In article <0XTukNy00Xol41W1Ui@andrew.cmu.edu> ap1i+@andrew.cmu.edu (Andrew C. Plotkin) writes:
>I maintain that a human can be simulated by a Turing machine. Comments?
OK, propose a Turing Machine configuration that will emulate the
decision making which made Quayle candidate for vice-president :-)

(Mickey Mouse is rumoured to wear a Dan Quayle wrist-watch, BBC radio
 4 morning news!)
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (11/18/88)

In article <1654@hp-sdd.HP.COM> nick@hp-sdd.hp.com.UUCP (Nick Flor) writes:
>But is the  creation  of  intelligence  an  unsolvable  problem?  
>Nah, people make babies all the time.  

Babies are pretty dumb.  Left to themselves, their MTBF is about a
day, and then you don't get another chance to measure it.

Intelligence arises through socialisation.  No-one can guarantee to
give any child a given level of 'intelligence' by the age of 18, although
the bases of educational attainment are fairly well understood.

Leave a child to themself and they will develop little of what non-AI
types call intelligence (although they will pick up the blue block).
This is a fact, as a few very miserable individuals have had the
misfortune to prove.

-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

bradb@ai.toronto.edu (Brad Brown) (11/18/88)

In article <392@itivax.UUCP> dhw@itivax.UUCP (David H. West) writes:
>In article <88Nov15.170837est.707@neat.ai.toronto.edu> bradb@ai.toronto.edu (Brad Brown) writes:
>>Godel's Incompleteness Theorem is a more general result
>>stating that any axiomatic system of sufficient power will
>>be fundimentally incomplete.  (NB  Systems that are not of
>>'sufficient power' are also not of sufficient interest to
>>consider for the purpose of AI)
>
>That's by no means obvious.  Formal reasoning is a relatively recent
>addition to the human behavioral repertoire...

I concede that I don't have the background to support this statement,
but my article was a followup to a proposal that because Turing 
machines could not compute some functions (e.g. the halting problem)
no computer could be made to compute "intelligence."  I was arguing
that the incompleteness of formal systems is not sufficient to 
prove that formal systems cannot be intelligent.


>>[...] Godel
>>shows that there will exist theorems which are true within
>>a system that the system will not be able to prove true.
>[...]
>
>>a belief in a "magic" component to the brain that defies
>>rational explanation.  Sorry, but I just don't buy that.
>
>Eh?  You have to admit the possibility, because you've just declared
>your belief in theorems which are true-but-unprovable in a system.
>If that isn't "defying rational explanation" (within the system), I
>don't know what is.

NO!!  All Godel's theorem says is that every sufficiently powerful
formal system will not be able to prove the truth of some true
theorems.  I implied earlier in my article that I felt the brain
could not prove these truths *either*, so there is no contradiction.

				(-:  Brad Brown  :-)

				bradb@ai.toronto.edu

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (11/18/88)

From article <392@itivax.UUCP>, by dhw@itivax.UUCP (David H. West):
" In article <88Nov15.170837est.707@neat.ai.toronto.edu> bradb@ai.toronto.edu (Brad Brown) writes:
" >Godel's Incompleteness Theorem is a more general result
" >stating that any axiomatic system of sufficient power will
" >be fundimentally incomplete.  (NB  Systems that are not of
" >'sufficient power' are also not of sufficient interest to
" >consider for the purpose of AI)
" 
" That's by no means obvious.  Formal reasoning is a relatively recent
" addition to the human behavioral repertoire; people do it neither
" naturally (it takes them until well into adolescence to become

And for that matter, we can do a certain amount of formal reasoning
within the bounds of a complete system, since predicate logic
is complete.  Goedel published a proof in 1930.

		Greg, lee@uhccux.uhcc.hawaii.edu

marty@homxc.UUCP (M.B.BRILLIANT) (11/19/88)

In article <17847@glacier.STANFORD.EDU>, jbn@glacier.STANFORD.EDU
(John B. Nagle) writes:
> 
>       We already had this discussion this year.  We had it at least
> twice last year.  So can it.

Interesting comment.  The title of the newsgroup is an abbreviation for
``artificial intelligence.''  Its subject is ``artificial
intelligence.''  The discussion is about what it means when you
juxtapose the two words ``artificial'' and ``intelligence.''  The poor
sap who started it wanted to know whether he had the right definition
of intelligence.  Did he?

If we had the same discussion already this year, and twice last year,
did we settle it?  What was the outcome?  Do we know what ``artificial
intelligence'' is?  Or are we just talking about something we don't
know anything about?

I am ready to allow that ``artificial intelligence'' could be a word
all by itself, which does not derive its meaning from the two separate
words ``artificial'' and ``intelligence.''  But I certainly don't want
to reopen a discussion that has already been settled.  Just somebody
tell us what the answer is.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!houdi!marty1

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

mark@verdix.com (Mark Lundquist) (11/19/88)

In article <484@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>Definition of Intelligence:
>
>1. Know how to solve problems.
>2. Know which problems are unsolvable.
>3. Know #1, #2, and #3 defines intelligence.
>
>This is the correct definition of intelligence.  If anyone disagrees, please
>state so and why.

OK, I'll bite.

	First of all, in regard to (2), since there is no algorithm for deciding
whether or not an arbitrary problem is solvable, it's hard to see how
anything could possess property (2).
	Also, I don't understand why you include (3).  What claims does it allow
you to make about intelligent agents or intelligent behavior?  What would
you say about a being that possessed (1) and (2) but not (3)?
	Each of (1), (2), and (3) is formulated in terms of 'knowing'.  Just
what do you mean by "knows"?  It seems possible that there is a sense of
'knowing' such that if a being could be truly said to 'know' even one thing
in that sense, that being would be intelligent.
	As for proclaiming the above definition to be "correct", again it's not
clear to me what you mean.  You need to show that your definition is what
people mean (or at least ought to mean) when they use the word
"intelligent".  Your definition certainly doesn't seem to be correct in this
sense.  You might choose to propose this definition as that of some
particular _species_ of intelligence (I don't know why you would; I confess
that I'm quite unable to imagine what such a species of intelligence would
be like), but I don't think in that case that you would have arrived at any
useable concept of intelligence.
	This leads to my final point, which is that there's a significant
burden-of-proof issue.  Defining things is tricky business.  There are
predicates that we can and do use meaningfully but which don't appear to
have any correct definition.  Such terms are normative rather than
descriptive, and they appear to be most adequately formulated in terms of "what
it isn't" rather than "what it is" (I personally suspect that "intelligent" is one
of these terms).  When proposing a definition of something, especially a
definition as problematic as this one, a little explanation of how you arrived
at the definition is in order.  You might try to anticipate some of the
difficulties with it, and show why you believe that your definition isn't
specious.  As it is, the definition is little better than

	(4) Prefers pink grapefruit to yellow grapefruit
	(5) Stares mindlessly at toothpicks
	(6) Knows that (4), (5), and (6) constitute intelligence
"This is right and if you don't think so, say why".

ap1i+@andrew.cmu.edu (Andrew C. Plotkin) (11/19/88)

/ In article <1738@cadre.dsl.PITTSBURGH.EDU>
/ geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks)  writes:
/ >Says who?  Can you prove this?  All the evidence I know points
/ >toward human beings as being machines.
/
/ Can't know much then.
/ See C18 enlightenment debate, Descartes, goals of encylopaedists,
/ limitations of enlightenment rationality

I know some Descartes, little of the others. However, I know a lot of human
beings, and I've never seen evidence that they (or I) are not machines.

/ I cannot see how an educated (and intelligent :-)) person could
/ possibly be so ignorant of the cultural context of mechanistic models of
/ humans.

What does cultural context have to do with it? But if they've shown evidence
that human beings aren't machines, by all means post a summary -- many many AI
researchers will be fascinated.

/ All machines are artifacts.
/ Humans are not artifacts.
/ Humans are not machines.

Fallacy there. You should know better.

--Z

numork@ndsuvax.UUCP (James Mork) (11/20/88)

>
>Dijkstra's favourite dictionary entry is
>    "Intelligent, adj. ... able to perform the functions of a computer ..."
>(Dijkstra doesn't think much of AI...)
>

   Does anyone besides myself have this almost cynical hatred for
   Dijkstra and the way that the Computer Science community reveres
   him?  I have seen so many algorithms attributed to Dijkstra that
   have been common solutions in everyday problems for thousands
   of years.  It makes me want to throw up.

--
                  UUCP                Bitnet                Internet
          uunet!ndsuvax!numork    numork@ndsuvax     numork@plains.nodak.edu

ok@quintus.uucp (Richard A. O'Keefe) (11/20/88)

In article <1808@ndsuvax.UUCP> numork@ndsuvax.UUCP (James Mork) writes:
>   Does anyone besides myself have this almost cynical hatred for
>   Dijkstra and the way that the Computer Science community reveres
>   him?  I have seen so many algorithms attributed to Dijkstra that
>   have been common solutions in everyday problems for thousands
>   of years.  It makes me want to throw up.

From Funk&Wagnall's dictionary:
	cynical:  Distrusting or contemptuous of virtue in others;
		  sneering; sarcastic.
	[The] doctrine [of the Cynics] came to represent
	insolent self-righteousness.

If you have seen any "algorithms attributed to Dijkstra" which "have
been common solutions in everyday problems for thousands of years",
NAME THEM.  Ok, so Dijkstra is a Dutch curmudgeon with a low tolerance
for hype, and has in consequence little liking for AI (but not robotics).
That's a reason for throwing up?  Not on _my_ terminal, thank you!

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (11/20/88)

From article <1919@garth.UUCP>, by smryan@garth.UUCP (Steven Ryan):
" >Professor Garrett Birkhoff, a prominent mathematician not unfamiliar with the
" >use of arbitrary in mathematics, often remarks in his undergraduate ODE class
" >that when mathematicians begin to speak about arbitrary, they don't know 
" >what they are talking about.
" 
" Just a short note that is a mortal sin for a mathematician to say
" `arb***ry.' Penance usually is usually five Hail Davids.
" 
" The word `infinite' only occurs before `set,' `infinite set,' where it has
" a very precise meaning.
" 
" The correct term is `arbitrary.'

Now it makes even more sense.

morrison@grads.cs.ubc.ca (Rick Morrison) (11/21/88)

In article <4264@homxc.UUCP> marty@homxc.UUCP (M.B.BRILLIANT) writes:
> ...  Do we know what ``artificial intelligence'' is?  Or are we just talking
> about something we don't know anything about? ...  Just somebody
>tell us what the answer is.

The answer is "who cares?" Does anyone in this group actually _do_ AI? I'm 
beginning to think that the most appropriate definition of AI is "a discipline
concerned with the uninformed examination of unresolvable philosophical
and psychological issues."

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (11/21/88)

In article <1908@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>
>Intelligence arises through socialisation.  
>
Why is this a good argument against the possibility of machine intelligence?
If a large enough neural network could be created, could it also not
receive socialization a la HAL?

doug@feedme.UUCP (Doug Salot) (11/21/88)

Gilbert Cockton writes:
>Intelligence arises through socialisation.

So do other diseases, but that doesn't mean it's the only way to get
sick.  The Eskimos have lots of words to distinguish between various
types of snow; would somebody from sci.lang care to give us a few more
words for intelligence?

Culturation is one contributing factor to our peculiar brand of
intelligence.  Our particular set of sensory detectors and motor
actuators is another.  Neurophysiological constraints and mechanisms
are yet another.  The chemical makeup of the earth is one.  The physics
of the universe plays a role.  So!?  Is our definition of intelligence
really so limited as to exclude all other domains?

Machines will exhibit the salient properties of human intelligence.
A fun book to read is Braitenberg's "Vehicles: Experiments in
Synthetic Psychology."  Another is Edelman's "Neural Darwinism."
Bury Descartes already.  Connectionist modeling and neurobiological
research will bear fruit; why fight it?

We should already be starting on the other hard task: creating
robust, self-organizing, self-motive, self-sustaining, self-replicating
machines.  Artificial Intelligence will look like a cake-walk compared
to Artificial Life.  Now, leave me alone while I wire-wrap this damn
frog.


-- 
Doug Salot || doug@feedme.UUCP || ...{zardoz,dhw68k}!feedme!doug
                    "vox populi, vox canis"

throopw@xyzzy.UUCP (Wayne A. Throop) (11/22/88)

> peru@soleil.UUCP (Dave Peru)
> Definition of Intelligence:
>  1. Know how to solve problems.
>  2. Know which problems are unsolvable.
>  3. Know #1, #2, and #3 defines intelligence.
> This is the correct definition of intelligence.  If anyone disagrees, please
> state so and why.

I disagree, because this matches very poorly with what people seem
operationally to mean when they use the term.  Better matches are
gotten by "Intelligence is the extent to which one knows things about
stuff", or perhaps "Intelligence is the ability to come to know more
things about more stuff."


Of course, despite being better than Dave's attempts, my trial definitions
suffer the same flaw as Dave's... namely, they don't tie down what it
means to know something.  And, in fact, this is really the basic question
here, compared to which the question of "what is Intelligence" is a mere
quibble, and that is "what does it mean to know something?".  (And even
more basic, and even harder, "what does it mean to mean something?".)

It's questions like these that keep people arguing in talk.philosophy.misc.

And we can see fairly clearly that the question about "to know" is
what Dave is really getting at, because the assertions at the end of
his posting:

> If you take into account the unsolvable problems of Turing machines,
> then this proves Artificial Intelligence is impossible.

> "Artificial Intelligence" is an unsolvable problem.

> Human beings are not machines.

> Human beings are capable of knowing which problems are unsolvable, while
> machines are not.

... boil down to the assertion that humans "know" things in some
mysterious way different from the way that machines "know" things.
If Dave wished to convince me of the assertions he made, he would have
to convince me that machines and humans "know" things in ways
fundamentally distinct (as opposed to being distinct only in
complexity or superficial organization).

--
Alan Turing thought about criteria to settle the question of whether
machines can think, a question of which we now know that it is about
as relevant as the question of whether submarines can swim.
                                        --- Edsger Dijkstra
-- 
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw

throopw@xyzzy.UUCP (Wayne A. Throop) (11/22/88)

> peru@soleil.UUCP (Dave Peru)
> There is a misunderstanding what I meant by [...my definition of
> Intelligence...] especially [...the part reading "know which
> problems are unsolvable." ...]

> Human beings KNOW the "halting problem for Turing machines", my point is
> that machines can NOT know the "halting problem for Turing machines".

But there is no single "halting problem for Turing machines".  There
is only the halting problem for a particular machine programmed in a
particular way.  Other "machines" can "know" the problems of some
given machine.

> Please describe how you would give this knowledge to a computer.

If by "this knowledge", is meant the ability to solve the halting
problem for all possible machines, it is of course impossible.  But
then again, there is no particular reason to think that humans can
solve the problem for all machines either, as pointed out in
Hofstadter's rebuttal to Lucas in "The Mind's I".

The rest of Dave's posting boils down to the notion that

> Finite machines cannot understand "infinity".

... which seems an interesting, though unsupportable assertion.  All
that is needed to model/understand "infinity" is the notion of
boundlessness.  Finite machines can encompass this notion easily, as in
symbolic mathematical systems dealing with infinite series, and prolog
and like systems having the ability to specify "infinite" lists or
sequences.
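
(A concrete illustration, in hypothetical Python rather than prolog: the
whole "infinite" sequence of squares is captured by a three-line generator,
and a finite consumer peels off as much of it as it needs.)

    from itertools import count, islice

    def squares():
        """The unending sequence 0, 1, 4, 9, ... described in finitely many lines."""
        for n in count():                  # count() is itself an unbounded generator
            yield n * n

    print(list(islice(squares(), 6)))      # prints [0, 1, 4, 9, 16, 25]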

Further, is there any reason to suppose that humans are "infinite" in
any relevant respect?  I see no particular reason to suppose so.

--
#2:  ... just think of what we'll have!
#6   A row of cabbages.
#2   But highly educated cabbages!
     --- from "The Prisoner"
-- 
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw

smryan@garth.UUCP (Steven Ryan) (11/22/88)

>/ >toward human beings as being machines.

What do you mean by machine?

I mean a turing machine or any proven equivalent system.
-- 
                                                   -- s m ryan
+---------------------------------------------------------------+--------------+
| And they looked across the PDP7 and saw that it was idle and  |  OSF is the  |
| without users. They `said let there be code' and it was.      |  antiUnix.   |
+---------------------------------------------------------------+--------------+
  There was a read and a write and it was the first memory cycle.

smryan@garth.UUCP (Steven Ryan) (11/22/88)

>" Just a short note that is a mortal sin for a mathematician to say
>" `arb***ry.' Penance usually is usually five Hail Davids.
>" 
>" The word `infinite' only occurs before `set,' `infinite set,' where it has
>" a very precise meaning.
>" 
>" The correct term is `arbitrary.'
>
>Now it makes even more sense.

Without wanting to start a jihad: in most math and computer science there
is a real attempt to avoid the word `infinity', and `infinite' really is
restricted to `infinite set.'

That little thing that looks like a propeller is read as `unbounded':
x -> oo, f(x) -> oo. `As x increases unbounded, f at x increases unbounded.'

The problem is that `infinity' is such a transcendental kind of word that any
attempt to define it causes problems. Aleph-0 is an infinite set, but
2**aleph-0 is an even larger infinite set (it equals aleph-1 only if you
assume the continuum hypothesis). If we try to define infinity as the largest
possible set, then we are left with the contradiction infinity=2**infinity.

`Arbitrary' means the same, more or less, as `unbounded.'

Finally, note that `unbounded' does not always mean infinite. If a Turing
machine accepts, its tape is unbounded in principle, but the portion it
actually uses must be finite.
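
To make that last distinction concrete, here is a toy Turing-machine step
loop in Python (illustrative only; the table and names are invented for the
example).  The tape is a dictionary that grows on demand, so it is unbounded
in principle, yet any run that reaches the accept state has touched only
finitely many cells:

    from collections import defaultdict

    def run(machine, input_symbols, max_steps=10000):
        # machine maps (state, symbol) -> (new state, written symbol, move)
        tape = defaultdict(lambda: 'B')          # blank everywhere by default
        for i, s in enumerate(input_symbols):
            tape[i] = s
        state, head, visited = 'start', 0, set()
        for _ in range(max_steps):
            visited.add(head)
            if state == 'accept':
                return visited                   # finitely many cells were used
            state, tape[head], move = machine[(state, tape[head])]
            head += 1 if move == 'R' else -1
        return None                              # gave up within max_steps

    # A machine that scans right over 1s and accepts on the first blank.
    scan_right = {('start', '1'): ('start', '1', 'R'),
                  ('start', 'B'): ('accept', 'B', 'R')}
    cells = run(scan_right, "111")               # visits only cells 0..4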
-- 
                                                   -- s m ryan
+---------------------------------------------------------------+--------------+
| And they looked across the PDP7 and saw that it was idle and  |  OSF is the  |
| without users. They `said let there be code' and it was.      |  antiUnix.   |
+---------------------------------------------------------------+--------------+
  There was a read and a write and it was the first memory cycle.

sp299-ad@violet.berkeley.edu (Celso Alvarez) (11/22/88)

In article <151@feedme.UUCP> doug@feedme.UUCP (Doug Salot) asks:
>would somebody from sci.lang care to give us a few more
>words for intelligence?

Intelligence is the art of sounding intelligent.

Celso Alvarez
sp299-ad@violet.berkeley.edu

ok@quintus.uucp (Richard A. O'Keefe) (11/22/88)

In article <151@feedme.UUCP> doug@feedme.UUCP (Doug Salot) writes:
>Machines will exhibit the salient properties of human intelligence.
>A fun book to read is Braitenberg's "Vehicles: Experiments in
>Synthetic Psychology."  Another is Edelman's "Neural Darwinism."
>Bury Descartes already.  Connectionist modeling and neurobiological
>research will bear fruit; why fight it?

To continue this rather constructive approach of suggesting good books
to read that bear on the subject, may I recommend

	Women, Fire, and Dangerous Things
	-- what categories reveal about the mind
	George Lakoff, 1987
	U of Chicago Press, ISBN 0-226-46803-8

I don't think the data he presents are quite as much of a challenge to
the traditional view of what a category is as he thinks, provided you
think of the traditional view as an attempt to characterise ``valid''
categories rather than actual cognition, just as classical logic is
an attempt to characterise valid arguments rather than what people
actually do.  As an account of what people do, it is of very great
interest for both AI camps, and

	it provides *evidence* for the proposition that non-human
	intelligences will not "naturally" use the same categories
	as humans.
	
As for connectionist modelling, it doesn't tell us one teeny tiny little
thing about the issues that Lakoff discusses that "classical AI" didn't.
Why pretend otherwise?  Case-based reasoning, now, ...

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (11/22/88)

In article <1791@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>In article <1908@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>>
>>Intelligence arises through socialisation.  
>>
>Why is this a good argument against the possibility of machine intelligence?
Cos you can't take a computer, not even the just truly awesomest
nooral network ever, to see the ducks, get it to throw them bread,
etc, etc.

Take a walk through your life.  Can you really see a machine going
through that with an identical outcome?  If so, lay off the cyberpunk
and get some fresh air with some good folk :-)

-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

leverich@randvax.UUCP (Brian Leverich) (11/23/88)

In article <17@ubc-cs.UUCP> morrison@grads.cs.ubc.ca (Rick Morrison) writes:
>
>Does anyone in this group actually _do_ AI?
>

Yup, there are readers who are researchers in areas generally considered
to be AI-related disciplines.

Quality of the postings seems to have fallen off as quantity has climbed.
Perhaps we need a talk.ai newsgroup?  :-)  -B
-- 
  "Simulate it in ROSS"
  Brian Leverich                       | U.S. Snail: 1700 Main St.
  ARPAnet:     leverich@rand-unix      |             Santa Monica, CA 90406
  UUCP/usenet: decvax!randvax!leverich | Ma Bell:    (213) 393-0411 X7769

alexandr@surya.cad.mcc.com (Mark Alexandre) (11/23/88)

In article <17347@agate.BERKELEY.EDU> sp299-ad@violet.berkeley.edu
(Celso Alvarez) writes:

>In article <151@feedme.UUCP> doug@feedme.UUCP (Doug Salot) asks:
>>would somebody from sci.lang care to give us a few more
>>words for intelligence?
>
>Intelligence is the art of sounding intelligent.
>

And art is what I say it is.

bwk@mitre-bedford.ARPA (Barry W. Kort) (11/24/88)

I enjoyed reading Brad Brown's comments regarding Turing-computability
and the limits of AI.

It seems to me that one of the most powerful forms of human reasoning
is reasoning by analogy, or model-based reasoning.  We use models,
metaphors, parables and analogies to transform problems from one
domain to another, thereby borrowing ideas and translating them to
novel situations.

Reasoning by analogy requires pattern matching with respect to the
structure of a knowledge base.  We look at the shape of the semantic
network or the shape of the tree and ask ourselves if we have encountered
a similarly structured knowledge base before.  Then, mutatis mutandis,
we map out the analogy and fill in the missing pieces.  Natural language
is a powerful tool for suggesting metaphors.
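
A toy sketch of what that structural matching might look like (Python, purely
illustrative -- the relations and domains are invented for the example, and
no real analogy engine is this naive): represent each domain as a set of
(relation, arg1, arg2) facts and search for an entity mapping that preserves
them.

    from itertools import permutations

    solar = {("attracts", "sun", "planet"), ("orbits", "planet", "sun")}
    atom  = {("attracts", "nucleus", "electron"),
             ("orbits", "electron", "nucleus")}

    def entities(facts):
        return sorted({x for (_, a, b) in facts for x in (a, b)})

    def analogies(source, target):
        src, tgt = entities(source), entities(target)
        for image in permutations(tgt, len(src)):
            mapping = dict(zip(src, image))
            mapped = {(r, mapping[a], mapping[b]) for (r, a, b) in source}
            if mapped <= target:        # every mapped fact holds in the target
                yield mapping

    # list(analogies(solar, atom)) -> [{'planet': 'electron', 'sun': 'nucleus'}]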

In mathematical circles, proof by analogy is emerging as an interesting
research frontier.  Saul Kripke has done seminal work in modal reasoning
and intuitionist logic, which formalize these ideas.

I wonder how the "limits of AI" argument will look after machines
learn how to mimic human-style model-based reasoning.

--Barry Kort

smryan@garth.UUCP (Steven Ryan) (11/24/88)

I've heard mazerunners define intelligence as the ability to learn. Place
motivated rats, dogs, humans, .... in a maze and see how many trials it
takes for them to learn the maze.

Well, this isn't math, so maybe the Pope will be quiet.
-- 
                                                   -- s m ryan
--------------------------------------------------------------------------------
As loners, Ramdoves are ineffective in making intelligent decisions, but in
groups or wings or squadrons or whatever term is used, they respond with an
esprit de corps, precision, and, above all, a ruthlessness...not hatefulness,
that implies a wide ranging emotional pattern, just a blind, unemotional
devotion to doing the job.....

linhart@topaz.rutgers.edu (Phil) (11/24/88)

The gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes (jokingly?):
[ How does "Intelligence arises thru socialisation." exclude machines? ]
-=> Cos you can't take a computer, not even the just truly awesomest
-=> nooral network ever, to see the ducks, get it to throw them bread,
-=> etc, etc.

Nor a blind paraplegic.

If a computer can launch a missile, then surely it can launch a crust
of bread.  Why, it'd be like shooting ducks in a pond... :-)

-=> Take a walk through your life.  Can you really see a machine going
-=> through that with an identical outcome?  If so, lay off the cyberpunk
-=> and get some fresh air with some good folk :-)

Why need it be identical?  Human intelligence is the example, not the
definition.

( The smiley may have applied to the whole message, seeing as how the
  poster works in a CS department.  Think before you flame. )

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (11/25/88)

In article <1918@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>Cos you can't take a computer, not even the just truly awesomest
>nooral network ever, to see the ducks, get it to throw them bread,
>etc, etc.
>
>Take a walk through your life.  Can you really see a machine going
>through that with an identical outcome?  If so, lay off the cyberpunk
>and get some fresh air with some good folk :-)
>
I agree that we aren't nearly advanced enough to see our way through
to making a robot capable of "acting human", but why do you think
that this is impossible?  (For now let's not assume I am saying that
we SHOULD try to make a mechanical human, that's another question.)
Did we not become what we are through a
natural and (at least potentially) comprehensible process?  Then
could we not be functionally duplicated, given sufficient knowledge?
It isn't like we were talking about exceeding
the speed of light or making time run backwards.  Or is it?  I wish
I could get you to say why you really don't believe it is possible.
Or do you think we are just dumb American "techies" who wouldn't be
able to understand your learned discourse?  (If Descartes answered
this to your satisfaction, pray give us the reference, then.)

maddoxt@novavax.UUCP (Thomas Maddox) (11/25/88)

In article <1791@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>In article <1908@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:

>>Intelligence arises through socialisation.  

>Why is this a good argument against the possibility of machine intelligence?
>If a large enough neural network could be created, could it also not
>receive socialization a la HAL?

	Not in Cockton's eyes.  He wants to keep the argument focused
on what he calls "socialization" rather than experience, which is the
central issue.  Almost certainly, whatever we mean when we say
intelligence has reference to abilities acquired through interaction
with the universe.  However, if like Cockton you restrict the
possibilities of acquisition of intelligence to social situations,
then you have demonstrated (albeit through circular reasoning) the
impossibility of machine intelligence.  Which is of course the
hobbyhorse he continues to ride.  

jack@cs.glasgow.ac.uk (Jack Campin) (11/25/88)

bwk@mbunix (Kort) wrote:
> In mathematical circles, proof by analogy is emerging as an interesting
> research frontier.  Saul Kripke has done seminal work in modal reasoning
> and intuitionist logic, which formalize these ideas.

I find this utterly unintelligible.  What has intuitionistic logic got to do
with analogy?  What specific work of Kripke's are you talking about?

[ To anticipate one likely question: if anyone wants to know what modal and
  intuitionistic logics are and what Kripke's contributed to them, read the
  relevant articles in the Handbook of Philosophical Logic.  I am not very
  interested in debating people who haven't read some exposition like that. ]

I am not saying there can't be a logic of analogy - though I have no idea what
shape it would take.

-- 
ARPA: jack%cs.glasgow.ac.uk@nss.cs.ucl.ac.uk       USENET: jack@cs.glasgow.uucp
JANET:jack@uk.ac.glasgow.cs      useBANGnet: ...mcvax!ukc!cs.glasgow.ac.uk!jack
Mail: Jack Campin, Computing Science Dept., Glasgow Univ., 17 Lilybank Gardens,
      Glasgow G12 8QQ, SCOTLAND     work 041 339 8855 x 6045; home 041 556 1878

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (11/26/88)

In article <819@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>However, if like Cockton you restrict the
>possibilities of acquisition of intelligence to social situations,

   Intelligence is a social construct.  The meaning of the word is
   defined through interaction.  Dictionary definitions are
   irrelevant, and certainly never accurate or convincing.  

   I have kept referring to the arguments against (or grounds for failure of)
   the C18 encyclopaedists.  Dictionaries arose in the enlightenment as
   well.  Diderot, amongst others, recognised that the impossibility
   of prescribing meaning was a major obstacle, if not an unavoidable
   barrier, to the encyclopaedic endeavour.  If dictionaries were only
   meant as spelling aids, there was less of a problem here.

   Since AI is just Diderot on disc, arguments against the C18
   encyclopaedists, apart from being more convincing than the 
   encyclopaedists' case, are also highly relevant today.  Someone
   mailed me with the ignorant comment that C18 philosophy was adolescent
   and whiggishly outdated by modern developments.  Is it hell.  Read
   before you wallow in ignorance.  Wittgenstein however backs up much
   of the case against the encyclopaedists.  His arguments on the
   centrality of practice to knowledge and meaning rule out a
   logocentric theory of truth.  I regard all symbol systems as
   effectively logocentric.

   Intelligence can only be acquired in social situations, since its
   presence is only acknowledged in social situations.  The meanings
   are fluid, and will only be accepted (or contested) by humans in
   social contexts.  AI folk can do what they want, but no one will
   ever buy their distortions, nor can they ever have any grounds for
   convincement in this case.

   What I am saying is that you cannot prove anything in this case by
   writing programs.  Unlike sociology, they are irrelevant.  You can
   stick any process you like in a computer, but its intelligence is a
   matter for human negotiation.  It may seem smart at first, but
   after a few days' use in a real task? (ho, ho, ho - ever seen that
   with an AI program? - Prospector does seem to have done well, so
   no quibbles here.  Anything else though?)  Also, even Prospector's
   domain is restricted, unlike smart humans.

   AI cannot prove anything here.  It can try to convince (but doesn't
   because of a plague of mutes), but the judgement is with the wider
   public, not the self-satisfied insiders.

   Now brave freedom fighter against the tyranny of hobbyhorses, show
   me my circular reasoning?

linhart@topaz.rutgers.edu (Phil) writes in <Nov.24.09.07.42.1988.6716@topaz.rutgers.edu>

> The gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes (jokingly?):
> -=> Cos you can't take a computer, not even the just truly awesomest
> -=> nooral network ever, to see the ducks, get it to throw them bread,
> -=> etc, etc.

Not completely jokingly, just a less direct European style.  There was
a serious point in there, but the etc. etc. marked out my feeling that
full elaboration was unnecessary.

I'll try to sum up in a more direct manner for SAT level literacy :-)
(but really, it's a question of styles across cultures, which is
ironic, for those cultures which understand irony that is!)

Until a machine can share in socialisation, as a normal human, it will
not be able to match the best human capabilities for any task.  Thus
> a blind paraplegic.
does suffer some disadvantages, but for reasons which I cannot
comprehend, but wonder at, such an individual can still get a lot out
of life.  This presence of humanity in the face of gross disability (or
in the face of cruel oppression, e.g. U.S. slavery), is, for me,
further proof that a mechanistic or biological account of being is
going to miss out on the fundamentals of being.

I'd still take a blind paraplegic to see the ducks.  Even though they
couldn't see, they could hear.  Even though they can't throw, they
might drop.  If not, I'd throw for them.

> If a computer can launch a missile, then surely it can launch a crust
> of bread.  Why, it'd be like shooting ducks in a pond... :-)
And I don't think for one minute your machine would reflect on the
morality of its action, as a group of children would. (no :-))

> Human intelligence is the example, not the  definition.
Example for what?  I need to see more of the argument, but this
already looks a healthier position than some in AI.

> ( The smiley may have applied to the whole message, seeing as how the
>   poster works in a CS department.  Think before you flame. )
This is Britain.  You will find a range of people working in CS
departments.  As part of the Alvey programme, HCI research was
expanded in the UK.  You'll find sociologists, historians, fine
artists, literature graduates, philosophers and educationalists working
in CS departments here, as well as psychologists and ergonomists.
As part of the Alvey HCI programme, technical specialists HAVE come in
(perhaps unfairly at times) for a lot of flack over the way they
design (on a good day) computer systems.  No need to think before I
flame, as we don't expect blind dormitory brotherhood loyalty over
here.  This is a university, not a regiment.

HCI isn't about automating everything (the AI mania), it's about improved
use of computers, whatever the level of technology.  The CS
contribution is on the construction of programs.  Robust and
verifiable programs are a major requirement for improved interaction,
but this alone will not significantly stop the problems of misuse,
disuse and abuse associated with poorly designed human-computer
systems.  Good code bricklayers and engineers will design good walls,
but they will rarely make it to a habitable building, and even then
never by intent.  I talk of roles here of course, not individuals.
Some good CS trained code brickies are also sensitive to human issues,
and effectively so.  Many of the non-CS graduates in UK HCI research
also happen to be good code brickies as well.  To build habitable
systems, you need technical *AND* human skills.  There are two roles,
and you can fill them with one or many people.  Both roles MUST be
filled though.  AI rarely fills either.

CS needs taking down a peg on some issues (largely ones it ignores
when it cannot, and thus does not), so there's no worry about
righteous indignation from the unbigoted.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (11/26/88)

In article <819@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:

>>receive socialization a la HAL?
>
>	Not in Cockton's eyes.  He wants to keep the argument focused
>on what he calls "socialization" rather than experience, which is the
>central issue.  Almost certainly, whatever we mean when we say
>intelligence has reference to abilities acquired through interaction
>with the universe.  However, if like Cockton you restrict the
>possibilities of acquisition of intelligence to social situations,
>then you have demonstrated (albeit through circular reasoning) the
>impossibility of machine intelligence.  Which is of course the
>hobbyhorse he continues to ride.  

Oh, I see.  I thought he was going to trot out the old self-reference
argument or some rehash of Dreyfus.  That is interesting. But
that would mean that we couldn't really talk about intelligence
as being a property of an individual, but collectively of a species,
sort of like a hive of bees or something.  This would rule out 
intelligence, I suppose, in species without significant socializing,
such as egg-layers where the parents don't stick around.
I suppose one could speculate on two types of attempts to create
artificial intelligence(s).  One would be to create an artificial
member of the human species (like an android) and another to
create an entire artificial species whose individuals would interact with
each other.  The latter seems more difficult and probably more
dangerous.  I don't think Cockton has given a good argument for
why eventually (given enough understanding of neural connections in
humans and sufficient advances in silicon) an artificial child could
not be created which could be socialized much as our real children are
(even as far as teaching it about feeding ducks).

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (11/28/88)

In article <1976@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>
>   Intelligence can only be acquired in social situations, since its
>   presence is only acknowledged in social situations. 

If a tree falls in the woods and nothing with ears is around, does
it make a sound?  I think it is imaginable that there could be
in this universe solitary entities that "think".  The social
definition of intelligence might also lead to making the mistake
of attributing intelligence to natural processes which (at least
by current definitions) are not, e.g. theories that planets
hold their orbits voluntarily in response to God's law, which 
were at one time common.

>   AI cannot prove anything here.  It can try to convince (but doesn't
>   because of a plague of mutes), but the judgement is with the wider
>   public, not the self-satisfied insiders.
>

AI will not convince except with results.  The proof of the pudding
is in the eating.  So far there has been too much talking and too
little results.  As for the value of HCI endeavors, the same applies
there.  We'll see how much the eclectic nature of this effort pays
off by the hard results, won't we?
>
>Not completely jokingly, just a less direct European style. 
..
>I'll try to sum up in a more direct manner for SAT level literacy :-)
>(but really, it's a question of styles across cultures, which is
>ironic, for those cultures which understand irony that is!)
..
>in the face of cruel oppression, e.g. U.S. slavery), 

Your not so subtle needling of Americans would seem less
hypocritical if it originated in a country (and continent)
whose own hands were cleaner.

>further proof that a mechanistic or biological account of being is
>going to miss out on the fundamentals of being.
>
I agree that socialization of the machine is probably going to be
needed to create anything that we might recognize as an artificial
person.  But can you talk about what the fundamentals of being
are and what denies them to even the most complex artifact (other than
feeding the ducks)?  Is this some supernatural notion, or is
it the result of evolution, or culture, or some other natural process?

>>   Why, it'd be like shooting ducks in a pond... :-)
>And I don't think for one minute your machine would reflect on the
>morality of its action, as a group of children would. (no :-))
>
Depends entirely what the children have been taught.  There are
lots of children who wouldn't reflect on the morality of such actions.
Conversely, the program could well be made to reflect on such moral
issues.

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (11/28/88)

From article <1976@crete.cs.glasgow.ac.uk>, by gilbert@cs.glasgow.ac.uk (Gilbert Cockton):
"    ....  I regard all symbol systems as effectively logocentric.

Would you tell us why *we* should?  E.g., taking a look at some axiom
sets for symbolic logic, it is less than obvious that prescribing
the meanings of words is what is going on.

"    Intelligence can only be acquired in social situations, since its
"    presence is only acknowledged in social situations.

whoops.

"    The meanings
"    are fluid, and will only be accepted (or contested) by humans in
"    social contexts.  AI folk can do what they want, but no one will
"    ever buy their distortions, nor can they ever have any grounds for
"    convincement in this case.

As applied to the meaning of 'intelligence', there seems to be a
kind of contradiction here.  "No one will ever buy their distortions"
translates to "language use ascribing intelligence to computers
will never come to be regarded as fair play in the language game."
If we accept the translation, we see that the thesis is obviously
false, since language has already come to be used this way.  Listen
around. "Dumb program!"  "If you put the name in quotes, the
computer will understand you." ...

		Greg, lee@uhccux.uhcc.hawaii.edu

sbigham@dukeac.UUCP (Scott Bigham) (11/28/88)

In article <1972@garth.UUCP> smryan@garth.UUCP (Steven Ryan) writes:
>I've heard mazerunners define intelligence as the ability to learn. Place
>motivated rats, dogs, humans, .... in a maze and see how many trials it
>takes for them to learn the maze.

Just one problem.  The computer you're reading this article on can learn the
maze in -one- trial, and I don't think anyone would call it intelligent.
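
(To be concrete about "one trial": the sketch below, in Python, is all the
"learning" involved -- explore the maze once with a breadth-first search,
record the route, and replay it forever after.  The maze and names are
invented for the example.)

    from collections import deque

    def learn_maze(maze, start, goal):
        # maze: dict mapping each cell to the list of neighbouring cells
        parent, frontier = {start: None}, deque([start])
        while frontier:
            cell = frontier.popleft()
            if cell == goal:
                path = []
                while cell is not None:      # walk the parent links back
                    path.append(cell)
                    cell = parent[cell]
                return list(reversed(path))  # the "learned" route
            for nxt in maze[cell]:
                if nxt not in parent:
                    parent[nxt] = cell
                    frontier.append(nxt)
        return None

    maze = {'A': ['B'], 'B': ['A', 'C', 'D'], 'C': ['B'], 'D': ['B']}
    route = learn_maze(maze, 'A', 'C')       # ['A', 'B', 'C'] after one pass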

						sbigham

-- 
Scott Bigham                         "The opinions expressed above are
Internet sbigham@dukeac.ac.duke.edu   (c) 1988 Hacker Ltd. and cannot be
USENET   sbigham@dukeac.UUCP          copied or distributed without a
...!mcnc!ecsgate!dukeac!sbigham       Darn Good Reason."

engelson@cs.yale.edu (Sean Philip Engelson) (11/28/88)

In article <1918@crete.cs.glasgow.ac.uk>, gilbert@cs (Gilbert Cockton) writes:
>In article <1791@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>>In article <1908@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>>>
>>>Intelligence arises through socialisation.  
>>>
>>Why is this a good argument against the possibility of machine intelligence?
>Cos you can't take a computer, not even the just truly awesomest
>nooral network ever, to see the ducks, get it to throw them bread,
>etc, etc.

Yet, my good friend.  YET.

>Take a walk through your life.  Can you really see a machine going
>through that with an identical outcome?

Of course not.  Intelligent machines won't act much like humans at
all.  They will have different needs, different feelings, different
goals, plans, desires for life than we.  But they'll be no less
intelligent, thinking, feeling beings than we, for it.

>If so, lay off the cyberpunk
>and get some fresh air with some good folk :-)

Perhaps you should lay off the mysticism and get some fresh
rationality with some good folk :-)

>-- 
>Gilbert Cockton, Department of Computing Science,  The University, Glasgow
>	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

Sean

----------------------------------------------------------------------
Sean Philip Engelson, Gradual Student
Yale Department of Computer Science
51 Prospect St.
New Haven, CT 06520
----------------------------------------------------------------------
The frame problem and the problem of formalizing our intuitions about
inductive relevance are, in every important respect, the same thing.
It is just as well, perhaps, that people working on the frame problem
in AI are unaware that this is so.  One imagines the expression of
horror that flickers across their CRT-illuminated faces as the awful
facts sink in.  What could they do but "down-tool" and become
philosophers?  One feels for them.  Just think of the cut in pay!
		-- Jerry Fodor
		(Modules, Frames, Fridgeons, Sleeping Dogs, and the
		 Music of the Spheres)

rjc@aipna.ed.ac.uk (Richard Caley) (11/28/88)

In article <1976@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:

>   Intelligence is a social construct.  The meaning of the word is
>   defined through interaction.  Dictionary definitions are
>   irrelevant, and certainly never accurate or convincing.  

This is one argument. . .

>   Intelligence can only be acquired in social situations, since its
>   presence is only acknowledged in social situations.  


This is another. They are not in any way equivalent nor does one
necessarily follow from the other. ( I agree with the first but not
with the second - the property of being a chair is also socially
defined, however a naturally formed object can be recognised as a chair
without having attained its form and function via social interaction ).


>   The meanings
>   are fluid, and will only be accepted (or contested) by humans in
>   social contexts.  AI folk can do what they want, but no one will
>   ever buy their distortions, nor can they ever have any grounds for
>   convincement in this case.

So what's new here?  I should think you would have to look hard for an AI
researcher who didn't believe this. This is what the Turing test is all
about, putting a machine in a context where its being a machine will not
bias social interaction and seeing if it is accepted as intelligent. What
distortions are you talking about here? This sounds like a straw man to
me.

>   What I am saying is that you cannot prove anything in this case by
>   writing programs.  Unlike sociology, they are irrelevant.

Now you argue against yourself. If intelligence can only be recognised
via social interaction, then the systems which are purported to have this
property _must_ be built ( or programmed ) to be tested. Sociology can
not say yes or no, though it can point out hopeless paths. You have
yourself, if I remember correctly, said that AI workers lack training in
experimental design as would be given to psychology undergraduates - are
you now saying that experimentation is useless after all?

>   Also, even Prospector's
>   domain restricted, unlike smart humans.

Most "smart humans" also have restricted domains ( though admittedly
rather larger than that of Prospector ). I doubt many people have expert
level knowledge in, say, both 12th century history and particle physics.

Where people differ from so called "expert systems" is in their ability
to cope with non "expert" tasks, such as throwing bread to ducks.

>   Now brave freedom fighter against the tyranny of hobbyhorses, show
>   me my circular reasoning?

I am not the brave freedom fighter addressed, but . . . .

The argument seems to go something like the following -

	1 ) "Intelligence" can only be judged by social interaction with
		the supposedly intelligen system.

	2 ) I can not concieve of a computer system capable of
		succesfully interacting in this way.

	3 ) Therfore no computer system can ever be intelligent.

	4 ) Therfore (2)

Just saying that intelligence requires socialisation does not prove the
impossibility of machine intelligence without the lemma that machines
can not be social entities, which is at least as big an assumption as
the impossibility of machine intelligence itself.


>Until a machine can share in socialisation, as a normal human, it will
>not be able to match the best human capabilities for any task.

I agree with reservations ( there are tasks in which a machine can
exceed the capabilities of any human, take leveling a city as an
example.)

>And I don't think for one minute your machine would reflect on the
>morality of its action, as a group of children would. (no :-))

This would seem to be based on another circular argument - machines can
not be socialised, so machines cannot acquire a morality, so I would
never accept a machine as a social entity . . .

>> Human intelligence is the example, not the  definition.
>Example for what?  I need to see more of the argument, but this
>already looks a healthier position than some in AI.

If I may once again answer for the person addressed ( I have managed to
delete their name, my apologies ), I believe he meant an example for
the kind of abilities and behaviours which are the target for AI. That
is, human beings are intelligent entities, but the reverse is not
necessarily the case.

>HCI isn't about automating everything (the AI mania), 

Except in a derivative sense, AI is not about automation. Although it
often proceeds by trying to automate some task, in order to gain insight
into it, that is a research strategy, not a definition of the field.

>	{ paragraph about system design vs. implementation }
>
>Both roles MUST be
>filled though.  AI rarely fills either.

So what, AI is not doing HCI, it is also not doing biochemistry, why
should it be?

AI has created some tools which people are using to create computer
systems for various tasks, and you are quite at liberty to criticise the
design of such systems. However that is not criticism of AI any more
than criticism of the design of a television is criticism of the physics
which led to the design of the electronics.
-- 
	rjc@uk.ac.ed.aipna	AKA	rjc%uk.ac.ed.aipna@nss.cs.ucl.ac.uk

"We must retain the ability to strike deep into the heart of Edinburgh"
		- MoD

maddoxt@novavax.UUCP (Thomas Maddox) (11/28/88)

In article <1976@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>In article <819@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>>However, if like Cockton you restrict the
>>possibilities of acquisition of intelligence to social situations,
>
>   Intelligence is a social construct.  The meaning of the word is
>   defined through interaction.  Dictionary definitions are
>   irrelevant, and certainly never accurate or convincing.  

	Indeed dictionary definitions are irrelevant to this argument,
and I have no idea why Cockton brought them in.  In fact, *definition*
is not in question; *acquisition* is.

>   I have kept referring to the arguments against (or grounds for failure of)
>   the C18 encyclopaedists.  Dictionaries arose in the enlightenment as
>   well.  

	More irrelevancies.  Prescribing meaning is no more an issue
than definition was to begin with. 
 
>   Since AI is just Diderot on disc, arguments against the C18
>   encyclopaedists, apart from being more convincing than the 
>   encyclopaedists' case, are also highly relevant today.  Someone
>   mailed me with the ignorant comment that C18 philosophy was adolescent
>   and whiggishly outdated by modern developments.  Is it hell.  Read
>   before you wallow in ignorance.  Wittgenstein however backs up much
>   of the case against the encyclopaedists.  His arguments on the
>   centrality of practice to knowledge and meaning rule out a
>   logocentric theory of truth.  I regard all symbol systems as
>   effectively logocentric.

	"AI is just Diderot on disk":  a remarkable statement, one
that if true would surprise a number of people; if Diderot could be
around, him most of all.  
	In what sense is your statement true? one might
ask.  Also, are you referring to the Diderot of the Encyclopedia, of
Rameau's Nephew, Jacques the Fatalist?
	Then Wittgenstein gets dragged in, and Derrida is invoked
indirectly through the mention of logocentrism.  This is not an
argument so much as a series of irrelevancies and non sequiturs.

>   Intelligence can only be acquired in social situations, since its
>   presence is only acknowledged in social situations.  The meanings
>   are fluid, and will only be accepted (or contested) by humans in
>   social contexts.  AI folk can do what they want, but no one will
>   ever buy their distortions, nor can they ever have any grounds for
>   convincement in this case.

	Again a series of disconnected and entirely unsupported
remarks.  Given that intelligence is acknowledged in social
situations, how does this affect the case for or against AI?
Presumably any artificial intelligence could be acknowledged in a
social situation as easily as organic intelligence howsoever defined.

>   What I am saying is that you cannot prove anything in this case by
>   writing programs.  Unlike sociology, they are irrelevant.  You can
>   stick any process you like in a computer, but its intelligence is a
>   a matter for human negotiation.  

	This is no more than a restatement of Turing's position,
hardly, therefore, a refutation of AI.

[. . .]
>   AI cannot prove anything here.  It can try to convince (but doesn't
>   because of a plague of mutes), but the judgement is with the wider
>   public, not the self-satisfied insiders.
>
>   Now brave freedom fighter against the tyranny of hobbyhorses, show
>   me my circular reasoning?

	At this point your reasoning is not so much circular as
non-existent.  You presume that the current non-existence of
artificial intelligence proves the impossibility of same.  I leave to
the reader the elementary disproof of this position.  

>linhart@topaz.rutgers.edu (Phil) writes in <Nov.24.09.07.42.1988.6716@topaz.rutgers.edu>
>
>> The gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes (jokingly?):
>> -=> Cos you can't take a computer, not even the just truly awesomest
>> -=> nooral network ever, to see the ducks, get it to throw them bread,
>> -=> etc, etc.
>
>Not completely jokingly, just a less direct European style.  There was
>a serious point in there, but the etc. etc. marked out my feeling that
>full elaboration was unnecessary.

	Note Cockton's implication that those wily Europeans need to
be explained to us simpletons over here in the USA.

>I'll try to sum up in a more direct manner for SAT level literacy :-)

	Sorry, pal, but that smiley doesn't obscure the offensiveness
of the remark.  Your continuing snottiness in these matters perhaps deserves
equally snotty and waspish rebuttal.

>(but really, it's a question of styles across cultures, which is
>ironic, for those cultures which understand irony that is!)

	Now what cultures might those be, Cockton?   Could you be
referring to all those complex and indirect European cultures, where
everyone could be expected to sift through the fine ironies you employ
to the wisdom buried within?  Certainly you couldn't be referring to
American culture, that callow, mechanistic, unironic haven for
soulless technocrats.  

	You really are a pretentious, overbearing shit.  While you are 
certainly free to have contempt for things American, you are not free to 
substitute your prejudices for discussion without being answered with a 
rudeness equal to your own.
 
[. . .] 

>This is Britain.  You will find a range of people working in CS
>departments.  As part of the Alvey programme, HCI research was
>expanded in the UK.  You'll find sociologists, historians, fine
>artists, literature graduates, philosophers and educationalists working
>in CS departments here, as well as psychologists and ergonomists.
>As part of the Alvey HCI programme, technical specialists HAVE come in
>(perhaps unfairly at times) for a lot of flack over the way they
>design (on a good day) computer systems.  No need to think before I
>flame, as we don't expect blind dormitory brotherhood loyalty over
>here.  This is a university, not a regiment.

	Again the implication that *there* things are done properly,
humanly, with a due respect for all human culture, while *here* . . .
well, my dear, we are (see above) a narrow group of regimented
technophiles. 

	Cockton, I've said it before, but as you've done nothing to
change my evaluation, I'll say it again:  you read like a particularly
narrowly-conceived language-generating program compounded of equal
parts Dreyfus and Jeremy Rifkin.  Now, however, you've apparently added
a sub-program intended to reproduce the rude anti-Americanism of Evelyn 
Waugh on an especially nasty day.

ok@quintus.uucp (Richard A. O'Keefe) (11/28/88)

In Chapter 2 of  "Ten Philosophical Mistakes", Mortimer J. Adler says

	We ordinarily speak of any living organism that has some
	consciousness of its environment and of itself as having a
	mind.  We also attribute intelligence to that organism if, in
	addition to such consciousness, it reacts in some
	discriminating fashion to the environment of which it is aware.

	It should be added, perhaps, that we generally regard mind and
	intelligence as the means by which sentient organisms learn
	from experience and modify their behaviour accordingly.

	By these criteria, the only animals to which we would not
	attribute mind or intelligence are those the behaviour of which
	is completely determined by innate, preformed patterns of
	behaviour that we call instincts.

	...
	For [humans] as well as for [animals], mind or intelligence
	stands for faculties or powers employed in learning from
	experience and in modifying behaviour in consequence of such
	learning.

This definition of intelligence would appear to be one that could
meaningfully be applied to machines.  A machine which learned in fewer
trials, or was capable of learning more complex ideas, could be said
to possess super-human intelligence.  It is interesting that Adler's
definition implicitly takes into account the varying sensory capacities
of organisms:  unlike a fish we cannot learn to adapt our behaviour to
the presence or absence of a weak electric field, but that is not a
defect of intelligence, because we are incapable of experiencing the
presence or absence of the field.  It would be a defect of intelligence
to be unable to learn from something that we _are_ capable of experiencing.
In fact, psychologists sometimes test whether an organism can learn to
discriminate between two conditions as a method of determining whether
the organism can perceive the difference.
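
(A throwaway illustration of the "trials to criterion" reading of that
definition, in Python; the update rule and numbers are invented for the
sketch and carry no psychological weight:)

    import random

    def trials_to_learn(rewarded='A', criterion=10, seed=0):
        # The learner must discover that responding to 'A' pays and 'B' doesn't.
        random.seed(seed)
        value = {'A': 1.0, 'B': 1.0}           # start optimistic: try everything
        streak = trials = 0
        while streak < criterion:              # run until reliably correct
            trials += 1
            stimulus = random.choice(['A', 'B'])
            respond = value[stimulus] > 0.5
            reward = 1.0 if (respond and stimulus == rewarded) else 0.0
            if respond:                        # learn only from what was tried
                value[stimulus] += 0.5 * (reward - value[stimulus])
            correct = respond == (stimulus == rewarded)
            streak = streak + 1 if correct else 0
        return trials                          # fewer trials = faster learner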

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (11/28/88)

In article <1816@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>it make a sound?  I think it is imaginable that there could be
>in this universe solitary entities that "think".
Not the same as intelligence.  You can think dumb things.

>AI will not convince except with results.
Wrong, and you misunderstand the problem of 'results' in human
science.   An experiment proves very little in psychology.  It is the
design and the generalisation which are crucial, and these can only proceed
by argument.

>Your not so subtle needling of Americans would seem less
>hypocritical if it originated in a country (and continent)

Before this gets out of hand, the reference to U.S. slavery was just
the best example I know of, due to the quality of the historical
sources.  I wasn't singling out America, and I apologise to anyone who
was (mistakenly) annoyed, either by my posting, or by believing this
misinterpretation.

As for cultural styles, I was only drawing attention to the
differences.  I was not trying to create division or claim superiority.
As for approaches to literacy, well there are differences here, and it
looks like at least one American takes gentle jokey references
to this the wrong way.  Still, this is an international net, and we
can't all tailor our style of writing to suit one culture.

>Is this some supernatural notion, or is
>it the result of evolution, or culture, or some other natural process?
Don't know, do you?  There is a whole range of experience which does
not seem to have a mechanical basis.  Which behaviour is AI trying to
cover (and do say 'intelligent' behaviour, since this means nothing here)?
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (11/28/88)

In article <2717@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
>Would you tell us why *we* should?  E.g., taking a look at some axiom
>sets for symbolic logic, it is less than obvious that prescribing
>the meanings of words is what is going on.

On the contrary, most AIers believe the assertion that logic encapsulates
the rules of thought, and that all sentences can be given a semantics
in formal logic (note how some famous mathematical logicians disagree
and stick to formal languages as being very different things).  

>translates to "language use ascribing intelligence to computers
>will never come to be regarded as fair play in the language game."
>If we accept the translation, we see that the thesis is obviously
>false, since language has already come to be used this way.  Listen
>around. "Dumb program!"  "If you put the name in quotes, the
>computer will understand you." ...

None of your examples would be accepted as anything except sloppy
language, but acceptable sloppiness given that "Dumb programmer who
wrote this program" or "Dumb design team who got the functionality of
this software wrong" are far too long winded.

Part of the consciousness raising of HCI is that programs are never
dumb, only the programmers and designers who make them inadequate, or
the design which commercial considerations forced on the designers or
programmers.

Anyone who talks of computers "understanding" does so:

  a) to patronise users whom they don't know how to instruct properly;
  b) because they are AI types.

The majority of competent instructors and realists wouldn't use
"understand" for "fit the dumbly designed syntax".

People are loose with their language.  What counts is what they stick
out for.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (11/28/88)

In article <821@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>
>	Cockton, I've said it before, but as you've done nothing to
>change my evaluation, I'll say it again:  you read like a particularly
>narrowly-conceived language-generating program compounded of equal
>parts Dreyfus and Jeremy Rifkin.  Now, however, you've apparently added
>a sub-program intended to reproduce the rude anti-Americanism of Evelyn 
>Waugh on an especially nasty day.

Well said!  Could this display of snobbery reflect an
attempt at creating a simulation of a hierarch
of the class-bound British educational establishment?
Maybe he dislikes our meritocracy, but at least here
a child of working class parents can become a professional
without having to learn to disguise an accent.  

jackson@freyja.css.gov (Jerry Jackson) (11/29/88)

In article <44150@yale-celray.yale.UUCP>, engelson@cs (Sean Philip Engelson) writes:
>
>Of course not.  Intelligent machines won't act much like humans at
>all.  They will have different needs, different feelings, different
>goals, plans, desires for life than we.  But they'll be no less
>intelligent, thinking, feeling beings than we, for it.

I can accept the needs, goals, and plans... but why does everyone
assume that an intelligent machine would be a *feeling* being?  I see
no reason to assume that an IM would be capable of experiencing
anything at all.  This doesn't imply that it wouldn't be intelligent.
For instance: some machines are already capable of distinguishing blue
light from red.  This doesn't mean that they have anything like our
*experience* of blue. (Or pain, or sorrow, or pleasure... etc.)
Personally, I think this is a good thing.  I would rather not have a
machine that I would be afraid to turn off for fear of harming
*someone*.  It does seem that our experience is rooted in some kind of
electro-chemical phenomenon, but I think it is an incredible leap of
faith to assume that logic circuits are all that is required :-).

BTW: It is perfectly consistent to assume that only I experience
anything.  i.e. it seems other people can be explained quite well
without resorting to notions of experience.  I claim that it is very
likely that this position would be accurate in the case of intelligent
machines (at least...  intelligent digital computers ;-)

--Jerry Jackson

sarge@metapsy.UUCP (Sarge Gerbode) (11/29/88)

In article <757@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>In Chapter 2 of  "Ten Philosophical Mistakes", Mortimer J. Adler says
>
>	...
>	For [humans] as well as for [animals], mind or intelligence
>	stands for faculties or powers employed in learning from
>	experience and in modifying behaviour in consequence of such
>	learning.
>
>This definition of intelligence would appear to be one that could
>meaningfully be applied to machines.

The significance of this definition would depend on what is to be
included as "learning".  A mere modification of behavior based on a
change of environment would not, to my mind, qualify as "learning".
For instance, the switching action of a thermostat in response to
environmental changes in temperature would not entitle it to be
considered to have "learned" anything, nor to be considered
intelligent.

And a person can exercise intelligence without behaving (physically),
e.g. by thinking up a brilliant thought.  Some very intelligent people
("effete intellectual snobs", I believe they used to be called :-) )
are very good at not applying their intelligence to real life.

So the "behavior" part is extraneous to intelligence.  It is the
"learning" that is crucial.

We could say that anything that could learn could be intelligent.  Or,
intelligence is the ability to learn.  Intelligence tests were
originally designed to predict school performance, i.e. learning
ability, so that would fit this definition.

The next question is whether machines could be said to "learn" in
anything but a metaphorical sense.  Perhaps they can be taught to
behave in a way that imitates behavior that is thought to be
consequent to actual learning, but would that mean that they actually
"learned" anything?

Each of us humans has direct subjective apperception of what it is to
learn -- it is to acquire knowledge, to come to know something, to
acquire a fact.  What we do behaviorally with what we learn is another
matter.

Do machines have the same subjective experience that we do when we
say we have learned something, or any subjective experience at all?
It seems quite questionable.  Since their behavior is completely
explainable in terms of the hardware design, the software program,
and the input data, Occam's Razor demands that we not attribute
subjectivity to them.
-- 
--------------------
Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
Institute for Research in Metapsychology
950 Guinda St.  Palo Alto, CA 94301

ok@quintus.uucp (Richard A. O'Keefe) (11/29/88)

In article <1821@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>Well said!  Could this display of snobbery reflect an
>attempt at creating a simulation of a hierarch
>of the class-bound British educational establishment?
>Maybe he dislikes our meritocracy, but at least here
>a child of working class parents can become a professional
>without having to learn to disguise an accent.  

Several false assumptions in here:
(a) the USA has a meritocracy.  A meritocracy is "rule by persons chosen
    for their superior talents or intellect".  The ruling class in the
    USA is chosen for its ability to pay enough to look good in the media.
    As Ambrose Bierce put it:  "The best Congress money can buy."
    [That may well be the best practical criterion, what do I know?]

(b) maybe the assumption was that the educational system in the USA
    is meritocratic.  In relation to students, this may well be so, but
    considering the number of students who have to go into debt to
    finance their education at tertiary level, and the growing home-
    schooling movement, one suspects "The best education money can buy."

(c) A child of working class parents cannot become a professional in the
    UK without having to disguise an accent.  Maybe I disbelieve this
    because I studied in Edinburgh, but I visited friends in Oxford where
    there were N different accents, _and_ working-class students.

(d) This discussion is getting us anywhere.

Once before I tried to give an account of why it was reasonable for
people working on AI to pay little attention to sociology.

This time I'm going to attempt a sociological explanation.

  It is a lot of work trying to stay informed in one subject, let alone
  several.  I for one am trying to keep reasonably current in half a dozen
  topics, and I'm really stretched thin (my pockets are suffering too).  I
  literally haven't got the _time_ to study the philosophy and sociology I
  would like to.  (Adler and Bok are all I can manage at the moment, thank
  you.)  So what do I do?  I have to trust someone.  Given the choice of
  trusting John McCarthy (say) or Gilbert Cockton (say), who do I trust?
  Well, one of these people belongs to some of the same fields that I do.
  If he puts me crook, a field _he_ helped found is injured.  What's more,
  he keeps working on trying to solve the problems, and a few years ago
  came up with an approach which is of considerable mathematical interest,
  if nothing else.  (I could say similar things about quite a lot of
  people in AI.)  I claim that it is rational for me to trust McCarthy's
  (say) evaluation of the possibility of AI in preference to Cockton's.
  [It would _not_ be rational for me to prefer J.Random Hacker's; she
  hasn't the background of McCarthy.]

There _is_ a very serious challenge to the possibility of AI (in the
let's-build-a-god sense; the let's-make-amplifiers-for-the-mind camp
can only gain) in Lakoff's "Women, Fire, and Dangerous Things".  I
think that's a very exciting book.  He tackles some very basic topics
about language, thought, and meaning, and attempts to show that the
physical-symbol-system approach is founded on sand.  But he doesn't
leave us with mysteries like "socialisation"; an AI/NL person could
expect to read this book and come away with some ideas to try.  I
would really like to see that book discussed in this newsgroup.

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (11/30/88)

In article <562@metapsy.UUCP> sarge@metapsy.UUCP (Sarge Gerbode) writes:
>
>We could say that anything that could learn could be intelligent.  Or,
>intelligence is the ability to learn.  Intelligence tests were
>originally designed to predict school performance, i.e. learning
>ability, so that would fit this definition.
>
Then I presume that patients with various brain lesions, such as
bilateral lesions of the hippocampus, are to be considered non-intelligent?
They certainly can't learn new facts (although there is good evidence that
they still can be conditioned operantly to change their behavior).
When presented with new problems to solve, they do about as well as normals,
but on repeated presentation of the same problem, normals obviously improve
quickly, whereas these patients do not.  I would argue that their I.Q.
is definitely not 0 by any reasonable definition of intelligence.

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (11/30/88)

From: ok@quintus.uucp (Richard A. O'Keefe)
>In article <1821@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>> but at least here
>>a child of working class parents can become a professional
>>without having to learn to disguise an accent.  
>
>(c) A child of working class parents cannot become a professional in the
>   UK without having to disguise an accent.  Maybe I disbelieve this
>   because I studied in Edinburgh, but I visited friends in Oxford where
>   there were N different accents, _and_ working-class students.

I agree that this probably isn't the newsgroup for this discussion,
but I will just make this one explanation of what I said.
I didn't mean this as a universal absolute about the UK but intended to mock 
the spirit of Cockton's stereotypes of the US.  I also studied at Edinburgh and
at Newcastle-upon-Tyne.  My statement was based on my personal experiences
in the UK.  I met professionals from the working class who had
changed their accent.  I remember one chap whose accent changed from "BBC"
to Geordie as he drank more beer at the pub.  There was definitely
the idea that a proper physician simply could not speak with a working
class accent.  That certainly is not true in the States, where some of
the most prominent neurologists I know have strong New York City accents,
for example.  Others sound like the Southern sheriff in "Smokey and the 
Bandit".  While a Scots accent in Edinburgh was fine, I wonder how it would 
play in London?  I found British society to be an order of magnitude more
class conscious than that of the US, and accent was the main way you were
typed.  This may have all changed since I was there (1976), but I doubt it.
No one should get the idea that I don't like the British.  I had a great
experience and made a lot of friends, but all cultures have their faults,
not just the US, ok?

ap1i+@andrew.cmu.edu (Andrew C. Plotkin) (11/30/88)

/>Of course not.  Intelligent machines won't act much like humans at
/>all.  They will have different needs, different feelings, different
/>goals, plans, desires for life than we.  But they'll be no less
/>intelligent, thinking, feeling beings than we, for it.
/
/ I can accept the needs, goals, and plans... but why does everyone
/ assume that an intelligent machine would be a *feeling* being?  I see
/ no reason to assume that an IM would be capable of experiencing
/ anything at all.  This doesn't imply that it wouldn't be intelligent.
/ For instance: some machines are already capable of distinguishing blue
/ light from red.  This doesn't mean that they have anything like our
/ *experience* of blue. (Or pain, or sorrow, or pleasure... etc.)

Aren't feeling and emotion automatic byproducts of thought? I find it hard to
imagine an entity capable of thought which would *not* have (for example) some
negative reaction to a threat, involving some attempt to rapidly find
alternatives, etc... which is to say, fear and excitement. Other emotional
responses can be argued similarly.

It's true that our physical bodies provide extra sensation by pumping out
adrenaline and so forth, but the original emotions are often generated by the
mind first and -then- relayed to the glands; for example, the panic you feel
when a pay cut notice appears on your desk.

(One can still argue that "emotional reactions" don't prove that the machine is
"really feeling emotion." I'll be glad to answer this point, if you can first
prove to me that *you* "really feel" pain, sorrow, or pleasure, and don't just
react mechanically...)

/ I would rather not have a
/ machine that I would be afraid to turn off for fear of harming
/ *someone*.

If you don't want a "someone", you'd better stay out of AI research... :-)

/  It does seem that our experience is rooted in some kind of
/ electro-chemical phenomenon, but I think it is an incredible leap of
/ faith to assume that logic circuits are all that is required :-).

It's a point of faith, certainly, since we don't have more than one example. But
I don't think it's unjustified, since nothing more than logic circuits has ever
been observed. ("Logic circuits" is a bit of a misnomer, since neurons don't act
like single standard logic gates. However, it looks possible to simulate them
with electronics.)

--Z

jeff@lorrie.atmos.washington.edu (Jeff L. Bowden) (11/30/88)

In article <1985@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:

>None of your examples would be accepted as anything except sloppy
>language, but acceptable sloppiness given that "Dumb programmer who
>wrote this program" or "Dumb design team who got the functionality of
>this software wrong" are far too long winded.
>
>Part of the consciousness raising of HCI is that programs are never
>dumb, only the programmers and designers who make them inadequate, or
>the design which commercial considerations forced on the designers or
>programmers.
>
>Anyone who talks of computers "understanding" does so:
>
> a) to patronise users whom they don't know how to instruct properly;
> b) because they are AI types.
>

  If someone says something to you and you don't understand is it
	a) Your fault?
	b) God's fault?
	c) Your mother's fault?
	d) The fault of some other thing to which you give
	   credit (blame?) for your existence?

  Certainly it is the fault of the programmer if a program is deficient in
understanding something, but it is certainly not sloppy English to say that
the program does not understand.  It doesn't.  It was not imbued by its
creator with the ability to understand.  Fault has little to do with this.

  It appears to me that Mr. Cockton has an axe to grind with those who assume
that every computer scientist accepts materialism.

bwk@mitre-bedford.ARPA (Barry W. Kort) (11/30/88)

In article <0XTukNy00Xol41W1Ui@andrew.cmu.edu> ap1i+@andrew.cmu.edu
Andrew C. Plotkin writes:
>I maintain that a human can be simulated by a Turing machine.  Comments?

As a human being, I occasionally cast lots to choose a course of action
when my value system is balanced on the razor's edge between two
alternatives.  Because I want to be sure that my Random Number
Generator is not deterministic (like a pseudo-random sequence
generator), I use a quantum amplifier in my coin flipper.

Correct me if I'm wrong.  But a Turing Machine is obliged to follow
a deterministic program.  Hence a Turing machine cannot simulate my
dice-tossing method.

--Barry Kort

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (11/30/88)

From article <1985@crete.cs.glasgow.ac.uk>, by gilbert@cs.glasgow.ac.uk (Gilbert Cockton):
" ...
" People are loose with their language.  What counts is what they stick
" out for.

I guess I didn't make myself clear.  You had argued, as I understood you,
that language is loose, and AI approaches do not take this adequately
into account.  But in judging the prospects for creating artificial
minds, you use a preconceived notion of what intelligence "really"
means, rather than letting the meaning emerge, loosely, from a social and
conversational consensus (as appears to be happening).  There appear
to be two different and conflicting ideas about the nature of meaning
in language involved.

		Greg, lee@uhccux.uhcc.hawaii.edu

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (12/01/88)

From article <562@metapsy.UUCP>, by sarge@metapsy.UUCP (Sarge Gerbode):
" ...
" Do machines have the same subjective experience that we do when we
" say we have learned something, or any subjective experience at all?
" It seems quite questionable.  Since their behavior is completely
" explainable in terms of the hardware design, the software program,
" and the input data, Occam's Razor demands that we not attribute
" subjectivity to them.

A more proper application of Occam's Razor would be that it prevents
us from assuming a difference between humans and machines in this
regard without necessity.  What does explaining behavior have to
do with it?  If I could explain your behavior, would this have the
consequence that you cease to have subjective experience?  Of course
not.  (If *you* could explain your behavior, perhaps the case could
be made ...)

		Greg, lee@uhccux.uhcc.hawaii.edu

jackson@freyja.css.gov (Jerry Jackson) (12/01/88)

In article <IXYmnvy00XoGA0UVFk@andrew.cmu.edu>, ap1i+@andrew (Andrew C. Plotkin) writes:
>/>goals, plans, desires for life than we.  But they'll be no less
>/>intelligent, thinking, feeling beings than we, for it.
>/
>/ I can accept the needs, goals, and plans... but why does everyone
>/ assume that an intelligent machine would be a *feeling* being?  I see
>/ no reason to assume that an IM would be capable of experiencing
>/ anything at all.  This doesn't imply that it wouldn't be intelligent.
>
>Aren't feeling and emotion automatic byproducts of thought? I find it hard to
>imagine an entity capable of thought which would *not* have (for example) some
>negative reaction to a threat, involving some attempt to rapidly find
>alternatives, etc... which is to say, fear and excitement. Other emotional
>responses can be argued similarly.
>

I agree that any thinking entity would have a negative reaction to a threat,
involving some attempt to rapidly find alternatives.  I just don't see this
as being "fear" and "excitement".  Let me explain why with an analogy:

Why does a person take aspirin?  I don't believe that the following
goes on in his head -- "I say, It appears that those neurons over
there are firing excessively.  Perhaps I should interrupt their overly
enthusiastic behavior..".  I claim it is more like:  "Owww... that really
*hurts*.  Gimme some aspirin... NOW!"  Although the physical effect of
the aspirin might be to cut off some signal in the nervous system, this
has very little to do with a person's immediate motivation for taking it.
I claim that the signal and the pain are two entirely different sorts of
beasts.

>"really feeling emotion." I'll be glad to answer this point, if you can first
>prove to me that *you* "really feel" pain, sorrow, or pleasure, and don't just
>react mechanically...)

I've heard people (usually behaviorists) make this point but I'm never sure
if they're serious (I didn't see a smiley :-).  An attempt to answer the
riddle of subjective experience by denying its existence seems somewhat 
pointless.  BTW: In a torture situation, I don't think I would have a hard
time convincing *anyone* that they can "really feel" pain. Would you agree
that torture is wrong?  Why? :-)

>
>/ I would rather not have a
>/ machine that I would be afraid to turn off for fear of harming
>/ *someone*.
>
>If you don't want a "someone", you'd better stay out of AI research... :-)
>

I am definitely *not* an opponent of AI.  I think it is very likely
that we will be able to create systems that are *operationally*
indistinguishable from humans doing the same tasks.  I think this will
be a great thing.  I do claim, however, that there is still likely to
be a difference between an intelligent machine (here referring to a
machine that models intelligent behavior in a functionalist sense, not
by physically copying the brain) and a human (or other animal).


>/  It does seem that our experience is rooted in some kind of
>/ electro-chemical phenomenon, but I think it is an incredible leap of
>/ faith to assume that logic circuits are all that is required :-).
>
>It's a point of faith, certainly, since we don't have more than one example. But
>I don't think it's unjustified, since nothing more than logic circuits has ever
>been observed. ("Logic circuits" is a bit of a misnomer, since neurons don't act
>like single standard logic gates. However, it looks possible to simulate them
>with electronics.)
>
>--Z


As I mentioned earlier, I believe the standard functionalist approach
to AI will bear fruit -- In fact, I think we will be able to generate
systems to perform any tasks we can think of... even simulate a human!
It seems unlikely that the same approach will generate artificial
*beings* with subjective experience, but this is just fine with me. ;-)

--Jerry Jackson

gibsong@gtephx.UUCP (Greggo) (12/01/88)

There are lots of good comments on this subject, but it's starting to
degrade a bit into nit-picking on fine points of definitions, without
attending to the main subject at hand.

As to my views, I agree that learning and intelligence are related.
However, much of the discussion has focused on technical definitions
of intelligence.  Don't emotions enter into intelligence at all, or
do they just "get in the way"?  One of the prime foundations for
intelligence would surely be "an awareness of self".  Most of the
comments about considering whether the computer or the programmer
understands assume a central point of control for intelligence.  Are
we intelligent first because we realize that we exist as an independent
mind?  How does this then apply to AI?

Also, it is the _ability_ to learn, interpret environment, build
experience, etc. that forms the foundation for intelligence, not the
actual use.  This explains why the fact that someone doesn't hear or
care about what you're saying doesn't mean they're not intelligent.
Again, this brings in attitudes and emotions, which at least influence
our ability to exercise intelligence, if not directly a part of
intelligence.

In summary, some main ingredients of intelligence (one man's opinion):
	- awareness of self
	- ability to learn
	- emotions (curiosity, drive, satisfaction)?

Anyway, I find this whole conversation fascinating.  Please forgive
the rambling nature of this posting.

- greggo

Disclaimer:  Me, not GTE!

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (12/01/88)

In article <1983@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>In article <1816@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>
>>Your not so subtle needling of Americans would seem less
>>hypocritical if it originated in a country (and continent)
>
>As for cultural styles, I was only drawing attention to the
>differences.  I was not trying to create division or claim superiority.
>As for approaches to literacy, well there are differences here, and it
>looks like at least one American takes gentle jokey references
>to this the wrong way.  Still, this is an international net, and we
>can't all tailor our style of writing to suit one culture.
>
Hmmm, well, ok. It looked to me like you were being pretty snide, but
I'll take your word for it that you weren't.  I certainly
am quite familiar with British and European styles of humor, having
spent a good deal of time there both as student and visiting professor.
Anyhow I think it best we
look more to the problems closer to home (of which there are plenty
on both sides of the Atlantic) and leave the foreigners to generate
their own social criticism from now on.
>>Is this some supernatural notion, or is
>>it the result of evolution, or culture, or some other natural process?
>Don't know, do you?  There is a whole range of experience which does
>not seem to have a mechanical basis.  Which behaviour is AI trying to
>cover (and do say 'intelligent' behaviour, since this means nothing here)?
No, I don't know.  I was the one asking the questions.  
From your strong statements that "humans are not machines" it appeared that 
you (at least thought you) had some answers.  If by "not seem to have a 
mechanical basis" you mean that we can not duplicate the behavior (yet) or
understand it mathematically, then fine, I agree.  But prior to Newton,
the same could be said for the motions of the planets.  At least back
to the time of Helmholtz, people began to realize that the body was
a machine.  That coupled with the idea that the brain is a
physical system and is the locus of the mind and behavior, seems to
me to indicate that there is a very significant probability that
what we observe as our very complex behavior may be that of machines.
This does not prove that this is so.  Even if we could duplicate
our behavior with machines of our own creation, one could never disprove
absolutely that there wasn't some other force involved (perhaps
God would send souls to occupy the machine bodies of the robots).

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/01/88)

In article <281@esosun.UUCP> jackson@freyja.css.gov (Jerry Jackson) writes:

> ... why does everyone
>assume that an intelligent machine would be a *feeling* being?  

An ambulatory robot would be well advised to have a sensory alarm
system to report mechanical stress in its limbs.  Otherwise it is
liable to damage itself while navigating through a hazardous
environment.

An intelligent machine that seeks to explore and learn about the
world in which it is embedded would be well advised to have an
emotional system which monitors its success or failure in knowledge
acquisition.  Successful lines of investigation would thereby be
encouraged, while fruitless efforts would be abandoned in favor
of a fresh tack.

By monitoring its emotions, a learning system would also know whether
to report its progress or ask for assistance when its mentor inquires,
"How are you doing, today?"
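
A minimal sketch of such a progress-monitoring "emotional" system, in
(anachronistic) Python, with every name invented for illustration: each line
of investigation carries a running score of recent success, fruitless lines
are dropped once the score falls below a boredom threshold, and the same
scores are what would be reported when asked how things are going.

# Hypothetical sketch only; names and thresholds are invented, not anything
# specified above.  Recent success raises a line's score, failure lowers it,
# and a line whose score drops too low is abandoned in favour of a fresh tack.
import random

class Investigator:
    def __init__(self, lines, decay=0.8, boredom=0.1):
        self.scores = {name: 0.5 for name in lines}   # one score per line
        self.decay = decay          # how quickly old results are forgotten
        self.boredom = boredom      # below this score, a line is abandoned

    def record(self, line, success):
        # exponentially weighted record of recent success or failure
        old = self.scores[line]
        self.scores[line] = self.decay * old + (1 - self.decay) * (1.0 if success else 0.0)

    def choose(self):
        # drop fruitless lines, then pursue the most encouraging survivor
        live = {k: v for k, v in self.scores.items() if v >= self.boredom}
        if not live:
            return None             # time to ask the mentor for assistance
        return max(live, key=live.get)

    def report(self):
        # what it might say when asked "How are you doing, today?"
        return ", ".join(f"{k}: {v:.2f}" for k, v in self.scores.items())

if __name__ == "__main__":
    agent = Investigator(["chemistry", "stamp collecting"])
    for _ in range(20):
        line = agent.choose()
        if line is None:
            break
        agent.record(line, success=random.random() < (0.7 if line == "chemistry" else 0.2))
    print(agent.report())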

--Barry Kort

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/01/88)

In article <1829@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu
(Gordon E. Banks) writes in response to Sarge Gerbode:

>Then I presume that patients with various brain lesions, such as
>bilateral lesions of the hippocampus are to be considered non-intelligent?

Intelligence is not a binary trait.  Last night on the PBS series,
"The Mind", we saw how a stroke affected the mental life of a promising
young attorney.  The loss of function in his prefrontal lobes impaired
his ability to conceive and plan a course of action and to solve
problems.  He is now a truck driver.  After his stroke, it took a
long time for the therapists to identify which faculties of intellect
were lost.  It is not yet clear whether the lost faculties can be
reacquired.

--Barry Kort

paulg@iisat.UUCP (Paul Gauthier) (12/02/88)

In article <281@esosun.UUCP>, jackson@freyja.css.gov (Jerry Jackson) writes:
> In article <44150@yale-celray.yale.UUCP>, engelson@cs (Sean Philip Engelson) writes:
> >all.  They will have different needs, different feelings, different
> 
> I can accept the needs, goals, and plans... but why does everyone
> assume that an intelligent machine would be a *feeling* being?  I see
> no reason to assume that an IM would be capable of experiencing
> anything at all.  This doesn't imply that it wouldn't be intelligent.
	Actually, I think it does. Feelings are simply products of
intelligence. Once any form of intelligence reaches the complexity of
the human mind it will undoubtedly experience 'feelings.' Feelings are
simply manifestations of the mind's goals and needs. You feel 'sad' when
you don't attain a goal; this is simply a negative feedback response to
prod you into trying harder. It might not work in all cases, but it helps.
The word feeling is very broad. Feelings of fear are manifestations of your
mind's attempts to deal with the unknown or with threats. What you experience
as fear is the workings of your mind trying to come to a decision in a tough
situation.
	This whole topic is very hard to discuss and I'm sure I've
bungled it quite nicely, but I hope I have put across something resembling
my true opinion on this. All these things people refer to as feelings,
things which many consider to be for humans alone, are results of
inconsistencies in our knowledge-bases and signs of our intelligence
working. A feeling is an educated guess that our mind makes based on what
it can puzzle out from known facts. As you can see, the word 'feeling'
doesn't do well to classify all the myriad types of feeling there are, so
they are hard to discuss...

> For instance: some machines are already capable of distinguishing blue
> light from red.  This doesn't mean that they have anything like our
> *experience* of blue. (Or pain, or sorrow, or pleasure... etc.)

	All your *experience* of blue is your brain searching its
memory to figure out what 'blue' is. Undoubtedly it flashes through
memories connected to 'blue' which trigger the *experience* of blue. When
machines have large enough inter-connected knowledge-bases they too will
come across past experiences which relate to blue and *experience* the
color.

> Personally, I think this is a good thing.  I would rather not have a
> machine that I would be afraid to turn off for fear of harming
> *someone*.  It does seem that our experience is rooted in some kind of
> electro-chemical phenomenon, but I think it is an incredible leap of
> faith to assume that logic circuits are all that is required :-).

	Personally, I find the idea exciting. I'm patiently waiting for
the first machine sentience to emerge. I feel it is possible, and it is
only a matter of time. After all, humans are only carbon-based machines.

> 
> --Jerry Jackson


-- 
|=============================================================================|
| Paul Gauthier:    {uunet, utai, watmath}!dalcs!iisat!{paulg | brains!paulg} |
|                   Cerebral Cortex BBS: (902)462-7245  300/1200  24h/7d  N81 |
|==============================================================================

hawley@icot32.icot.junet (David John Hawley) (12/02/88)

In article <2732@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
>From article <562@metapsy.UUCP>, by sarge@metapsy.UUCP (Sarge Gerbode):
>" ...
>" Do machines have the same subjective experience that we do when we
...
>" and the input data, Occam's Razor demands that we not attribute
>" subjectivity to them.
>
>A more proper application of Occam's Razor would be that it prevents
>us from assuming a difference between humans and machines in this
>regard without necessity.  What does explaining behavior have to
...

What are the criteria by which I may judge the suitability of an application of
Occam's Razor? I know the folktale is basically the KISS principle,
and I have heard that the actual criterion of simplicity is the number of
'blat' that need to be postulated (where a blat can be a thing, entity,
?property?, ...). Is this correct? 

This has something to do with theory formation, as per for example
David Poole's Theorist default-reasoning system.
Does anyone have any pointers to literature on theory preference,
relative strength of arguments, preferably in a how-could-we-build-it vein?

Yoroshiku (AdvTHANKSance)
	David Hawley

ok@quintus.uucp (Richard A. O'Keefe) (12/02/88)

In article <3ffb7cfc.14c3d@gtephx.UUCP> gibsong@gtephx.UUCP (Greggo) writes:
>Don't emotions enter into intelligence at all, or
>do they just "get in the way"?

Emotions have often been discussed in the AI literature.  See, for example,
Aaron Sloman's "You don't need a soft skin to have a warm heart."
Emotions have a large cognitive component; they aren't just physiological.
(C.S.Lewis in his essay "Transposition" pointed out that Samuel Pepys
reported the same physical sensations when seasick, when in love with his
wife, and on hearing some wind music, and in the latter case promptly
decided to practice the instrument.)  Considering the range of human
temperaments, the experience and expression of emotion probably isn't
necessary for "intelligence".  I wonder, though.  I have seen programs
which nauseated me, and they were bad programs, and I have seen programs
which brought tears of pleasure to my eyes, and they were good programs.
If emotions can be aroused by such "mathematical" things as programs,
and aroused *appropriately*, perhaps they are more important than I
think.  Such emotions certainly motivate me to write better programs.

>One of the prime foundations for
>intelligence would surely be "an awareness of self".

"Foundation" in what sense?  Let's be science fictional for a moment,
and imagine a sessile species, which every spring buds off a mobile
ramet.  The mobile ramet sends sense data to the sessile ramet, and
the sessile ramet sends commands to the mobile one.  The sessile
ramets live in a colony, and the mobile ones gather food and bring
it back, and otherwise tend the colony.  Every winter the mobile
ramets die and the sessile ones hibernate.  The mobile ramets are
"cheap" to make because they have just enough brain to maintain their
bodies and communicate with the sessile ones, which means that they
can be quite a bit smaller than a human being and still function
intelligently, because the brain is back in the sessile ramet.

Is it necessary for the sessile ramet to know which of the ones in the
colony is itself?  No, provided all the sessiles are maintained, it
doesn't much matter.  (It helps if physiological states of the sessiles
such as hunger and illness are obvious from the outside, wilting leaves
or something like that.)  These creatures would presumably be aware of
themselves *as*mobiles*.

I was about to write that an intelligent entity would need access to its
own plans in order to criticise them before carrying them out, but even that
may not be so.  Imagine a humanoid robot which is *not* aware of its own
mental processes, but where information about those processes is visible
on a debugging panel at the back.  Two such robots could check each other
without being able to check themselves.

"An awareness of self" might be important to an intelligent organism,
but it might be a *consequence* of intelligence rather than a
*precondition* for it.  It is usually claimed that human babies have
to learn to distinguish self from non-self.  (How anyone can _know_
this I've often wondered.)

ok@quintus.uucp (Richard A. O'Keefe) (12/03/88)

In article <177@iisat.UUCP> paulg@iisat.UUCP (Paul Gauthier) writes:
>Feelings are
>simply manifestations of the mind's goals and needs. You feel 'sad' when
>you don't attain a goal, this is simply a negative feedback response to
>prod you into trying harder. It might not work in all cases, but it helps.

If I heard that you had died unexpectedly, I would feel sad.
But the preservation of your life would not previously have been one of
my goals.  (I would, for example, be unable to distinguish you from
Gilbert Cockton.)

There's an interesting question here about human psychology:  which
emotions are innate, and which emotions are culture-dependent.  Lakoff's
book includes a list of apparently culture-independent emotional states
(unfortunately I left W,F&DT at home, so I can't quote it), and I was
surprised at how short it was.  One anthropological commonplace is the
distinction between "guilt" cultures and "shame" cultures.  Most humans
presumably have the capacity to learn these emotions, but apparently we
do not all experience the same emotions in the same circumstances.
(Which emotions a culture has basic _terms_ for is another question.)

raino@td2cad.intel.com (Rodger Raino) (12/03/88)

Hi I'm an outsider to this group, but our newsfeed seems blocked so
I stumbled into this lively discussion.  You guys are having fun arguing
it looks like.  I thought I'd let you know how it's *really* gonna
work out.  Now most of you are going to read this and go, "boy is
that simpleminded/dumb/naive" but just stick this in the back of your
head and remember it in about fifty years.  You'll see.

Shortly lots of "smart" traffic lights are going to start talking
to each other upstream and downstream to make a smoother flow of 
traffic.  After awhile this will set up a bunch of resonance patterns
in the flow of traffic.  This in turn will become coupled to a
host of "other" information passing networks (ei. drivers = capacitors,
you're now always turing (triple pun intended) on Entertainment Tonight at 
exactly the same time).  What you end up with is a vast complex and fuzzy
network that will spontaneously become intelligent.  Of course this
"new" entinity mignt not be able to feed the ducks, but then the
duck feeder can't decide to power down the planet either.

cheers
rodger
-- 
-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
i know i'm a bad speller, don't waste FlameWidth pointing out old news
intel agrees with this, but not necessarly anything above the line.
 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

chrispi@microsoft.UUCP (Chris Pirih) (12/03/88)

In article <42328@linus.UUCP> bwk@mbunix (Kort) writes:
>Andrew C. Plotkin writes:
>>I maintain that a human can be simulated by a Turing machine.  Comments?
> ... I use a quantum amplifier in my coin flipper.
>Correct me if I'm wrong.  But a Turing Machine is obliged to follow
>a deterministic program.  Hence a Turing machine cannot simulate my
>dice-tossing method.

Nothing prevents the Turing machine from flipping a coin and acting
[deterministically] on the [random[?]] result.  (What exactly are we
trying to simulate here, Barry?  Is the coin, with its quantum
randomness, a part of the human who consults it?)  Besides, is it
necessary that a simulated coin-flip be "truly" random, or just
effectively unpredictable (i.e., that the Turing android eschew
foreknowledge of its pseudo-random pseudo-coin-flip)?  The latter
seems sufficient to me.
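
A minimal sketch of this arrangement, in (anachronistic) Python with invented
names: the whole procedure is deterministic, yet the deciding routine has no
foreknowledge of the next flip, which seems to be all that is required.

# Illustrative only: a deterministic "coin" drawn from a hash-based
# pseudo-random stream, consulted by a deterministic decision procedure
# that cannot anticipate the flip's outcome.
import hashlib

def coin(seed, n):
    # bit n of a deterministic pseudo-random stream
    digest = hashlib.sha256(f"{seed}:{n}".encode()).digest()
    return digest[0] & 1

def decide(option_a, option_b, seed, n):
    # the Turing android acts deterministically on the flip it cannot predict
    return option_a if coin(seed, n) else option_b

if __name__ == "__main__":
    for i in range(5):
        print(decide("tea", "coffee", seed=42, n=i))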

---
chris

(Besides, I never toss dice to make a decision; your Turing machine
should have no problem simulating me...)

maddoxt@novavax.UUCP (Thomas Maddox) (12/04/88)

In article <800@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>There's an interesting question here about human psychology:  which
>emotions are innate, and which emotions are culture-dependent.  Lakoff's
>book includes a list of apparently culture-independent emotional states
>(unfortunately I left W,F&DT at home, so I can't quote it), and I was
>surprised at how short it was. 

	From Lakoff, _Women, Fire and Dangerous Things_, p. 38:

In a major crosscultural study of facial gestures expressing emotion,
Ekman and his associates discovered that there were basic emotions
that seem to correlate universally with facial gestures:  happiness,
sadness, anger, fear, surprise, and interest.  Of all the subtle
emotions that people feel and have words and concepts for around the
world, only these have consistent correlates in facial expressions
across cultures.

	end-quote

	I agree that Lakoff's book is extremely interesting with
regard to key problems in AI, particularly in its replacement of what
he calls the "classical theory that categories are defined in terms 
of common properties of their members" with a new view ("experiential
realism" or "experientialism").  In his "Preface," Lakoff says,

The issue is this:

Do meaningful thought and reason concern merely the manipulation of
abstract symbols and their correspondence to an objective reality,
independent of any embodiment (except, perhaps, for limitations
imposed by the organism)?

Or do meaningful thought and reason essentially concern the nature of
the organism doing the thinking--including the nature of its body, its
interactions in its environment, its social character, and so on?

	end-quote

	Like Lakoff, I'm convinced that the second set of answers
points in the correct direction.  As a science fiction writer who has
tried to present an artificial intelligence realistically, I saw from
the start that the *embodied* categories Lakoff speaks of had to be
presupposed in order to present a being I could consider intelligent.

	(By the way, I hope readers see there is a difference between 
Lakoff's view, which poses interesting questions for AI research, and 
the views of eminent anti-AI theorists such as Dreyfus and Weizenbaum 
[and vocal net anti-AI types such as Cockton].  Lakoff says (p. 338):

I should point out that the studies discussed in this volume do not in
any way contradict studies in artificial intelligence . . . in
general. . . . We shall discuss only computational approaches to the
study of mind.  Even there, our results by no means contradict all
such approaches.  For example, they do not contradict what have come
to be called "connectionist" theories, in which the role of the body
in cognition fits naturally. 

	end-quote) 

	Lakoff's work is especially interesting when set next to
a recent book by Terry Winograd and Fernando Flores, _Understanding
Computers and Cognition_.  They likewise reject the tradition which
sees reason as "the systematic manipulation of representations."
However, they use a philosophical tradition very different from that
employed in usual AI studies:   to wit, the Continental tradition of
hermeneutics and phenomenology that includes Heidegger and Gadamer.
They also include "speech act" theory, from Austin and Searle and, 
in biology, Maturana's work.

	What these books (along with some essays of Daniel Dennett's)
represent to me is an attempt at coming to terms conceptually with the
high-level problems posed by AI.  The doctrinaire anti-AI group
continue to snipe from the sidelines, with arguments that say (1) it
can't be done, and (2) even if it could, it shouldn't; the workers who
are trying to create artificial intelligence (i.e., the makers of the
hardware and software) quite often are submersed entirely in their
particular problems and speak almost exclusively in the technicalities
appropriate to those problems.  Thus, the intelligent and approachable
work being done by Lakoff et alia serves us all:  this is one of the
characteristic problems of our time and one of our civilization's
greatest wagers, and those of us who are trying to understand it
(rather than deride or implement it) need a coherent universe of
discourse in which understanding might take place.

ap1i+@andrew.cmu.edu (Andrew C. Plotkin) (12/04/88)

/>Aren't feeling and emotion automatic byproducts of thought? I find it hard to
/>imagine an entity capable of thought which would *not* have (for example) some
/>negative reaction to a threat, involving some attempt to rapidly find
/>alternatives, etc... which is to say, fear and excitement. Other emotional
/>responses can be argued similarly.
/>
/
/ I agree that any thinking entity would have a negative reaction to a threat,
/ involving some attempt to rapidly find alternatives.  I just don't see this
/ as being "fear" and "excitement".  Let me explain why with an analogy:
/
/ Why does a person take aspirin?  I don't believe that the following
/ goes on in his head -- "I say, It appears that those neurons over
/ there are firing excessively.  Perhaps I should interrupt their overly
/ enthusiastic behavior..".  I claim it is more like:  "Owww... that really
/ *hurts*.  Gimme some aspirin... NOW!"  Although the physical effect of
/ the aspirin might be to cut off some signal in the nervous system, this
/ has very little to do with a person's immediate motivation for taking it.
/ I claim that the signal and the pain are two entirely different sorts of
/ beasts.

Even today we have computer programs that have no "idea" (no access to) what
goes on in their lower levels (the machine language.) A Lisp program manipulates
lists without ever referring to the RAM addresses they're stored at. This seems
to me to be an exact equivalent (much simpler of course) to the way we don't
worry about what neurons are firing. If an AI is written in a high-level
language, I would expect that it would have no idea of what routines are running
it. Similarly, if an AI is developed by making a big neural net and kicking it,
it would not know what sort of patterns are running around its circuits. It
would just react by saying things like "OW!"
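
A trivial sketch of the same separation of levels, in Python with invented
names: the routine below talks only about the list it is handed, and nothing
at this level of description refers to, or could reveal, where that list is
stored in memory.

# Purely list-level operations; no addresses appear anywhere at this level.
def reverse_onto(xs, acc=None):
    acc = [] if acc is None else acc
    for x in xs:
        acc.insert(0, x)
    return acc

print(reverse_onto([1, 2, 3]))   # -> [3, 2, 1]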

/ I think we will be able to generate
/ systems to perform any tasks we can think of... even simulate a human!
/ It seems unlikely that the same approach will generate artificial
/ *beings* with subjective experience, but this is just fine with me. ;-)

You mentioned torture -- if you had a computer console in front of you with a
button marked "pain" (any human simulator had better have some sort of sensory
input), would you consider it okay to push it? How about if a screen (or
speaker) was printing the output produced by the program as it did a nice
simulation of a human begging you to stop? If you first spent an hour or so
discussing your favorite music or movies? (Excuse me; I mean "using the
simulation to see how the human being simulated would respond to your opinions
on music or movies.")
    Yes, I know, that paragraph is intended to play on your sympathy. But
consider it seriously. You go into a room, spend a while talking to and getting
answers from a computer console. Being produced by a good human simulation,
the conversation is typical of any you'd have with a random stranger. Would you
then feel morally justified pushing that button, saying "it's only a simulation"?

/>"really feeling emotion." I'll be glad to answer this point, if you can first
/>prove to me that *you* "really feel" pain, sorrow, or pleasure, and don't just
/>react mechanically...)
/
/ I've heard people (usually behaviorists) make this point but I'm never sure
/ if they're serious (I didn't see a smiley :-).  An attempt to answer the
/ riddle of subjective experience by denying its existence seems somewhat
/ pointless.

I'm not -denying- subjective experience. I'm saying that, -whatever subjective
experience is-, if system X and system Y both act the same, it's silly to say X
has subjective experience and Y doesn't. Especially when the only difference is
that X grew by itself and Y was built by X.

This subject has been covered by more impressive people than me; dig up _The
Mind's I_ by Hofstadter and Dennett, which has many essays (and fiction) by lots
of people and commentary by H & D, all on AI and minds and brains and stuff. Fun
to read, too.

--Z

sarge@metapsy.UUCP (Sarge Gerbode) (12/04/88)

In article <2732@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
>From article <562@metapsy.UUCP>, by sarge@metapsy.UUCP (Sarge Gerbode):
>" Since [machines']  behavior is completely
>" explainable in terms of the hardware design, the software program,
>" and the input data, Occam's Razor demands that we not attribute
>" subjectivity to them.
>
>A more proper application of Occam's Razor would be that it prevents
>us from assuming a difference between humans and machines in this
>regard without necessity.  What does explaining behavior have to
>do with it?  If I could explain your behavior, would this have the
>consequence that you cease to have subjective experience?  Of course
>not.  (If *you* could explain your behavior, perhaps the case could
>be made ...)

I don't need a mechanistic explanation of my own behavior (much of
it, at least), because I am directly aware of causing it by
intention.  Furthermore, the most major observable difference between
myself and a machine is that the latter is explainable in mechanistic
terms, whereas I am not.  On the other hand, if I could explain
*your* behavior entirely on mechanistic grounds, then I think I would
have grounds (Occam's Razor) for not attributing subjectivity to
you.  However, I don't think I can do that, and so I don't think you
are a machine.  It is because others are observably *not* machines,
not explainable in mechanistic terms, that we attribute subjectivity
(and humanity) to them, in order to explain their behavior.

People don't like to be manipulated, programmed, treated like machines,
and part of the reason why is, I believe, that they have an immediate
awareness of themselves as not being mechanistically determined, and
that sort of treatment observably embodies a lie.
-- 
--------------------
Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
Institute for Research in Metapsychology
950 Guinda St.  Palo Alto, CA 94301

ap1i+@andrew.cmu.edu (Andrew C. Plotkin) (12/05/88)

/In article <42328@linus.UUCP> bwk@mbunix (Kort) writes:
/>Andrew C. Plotkin writes:
/> >I maintain that a human can be simulated by a Turing machine.  Comments?
/>
/> ... I use a quantum amplifier in my coin flipper.
/>Correct me if I'm wrong.  But a Turing Machine is obliged to follow
/>a deterministic program.  Hence a Turing machine cannot simulate my
/>dice-tossing method.

chrispi@microsoft.UUCP (Chris Pirih) replies...

/Nothing prevents the Turing machine from flipping a coin and acting
/[deterministically] on the [random[?]] result.  (What exactly are we
/trying to simulate here, Barry?  Is the coin, with its quantum
/randomness, a part of the human who consults it?)  Besides, is it
/necessary that a simulated coin-flip be "truly" random, or just
/effectively unpredictable (i.e., that the Turing android eschew
/foreknowledge of its pseudo-random pseudo-coin-flip)?

I agree with Chris here. When I flip a coin in my head, I have serious doubts
that the results are truly random (based on an amplified quantum randomness.) It
might just as well be a complex pseudo-random generator. It would work just as
well for any practical purpose, such as the balanced ethical dilemma (sp?) you
mentioned.

(Practical purpose meaning anything where you need a single random bit. If you
try to spit out a long string of random bits, the non-randomness of the process
becomes painfully clear -- there is a strong correlation between each bit and
the several-bit sequence that precedes it. This can be checked with a simple
computer program (on each bit, using the previous four or five bits to predict
the next). I've found it a 60% to 70% accurate predictor; no doubt a more
complex program could do better.)
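
A sketch of such a predictor, in (anachronistic) Python: for each bit, guess
whichever bit has more often followed the preceding k bits so far, then learn
from the actual bit.  The 60% to 70% figure above is an empirical report, not
something this fragment guarantees.

# Illustrative sketch of the simple predictor described above.
from collections import defaultdict

def predict_bits(bits, k=4):
    counts = defaultdict(lambda: [0, 0])   # context -> [count of 0s, count of 1s]
    correct = 0
    for i in range(k, len(bits)):
        context = tuple(bits[i - k:i])
        zeros, ones = counts[context]
        guess = 1 if ones > zeros else 0   # guess the historically likelier bit
        if guess == bits[i]:
            correct += 1
        counts[context][bits[i]] += 1      # then learn from the actual bit
    return correct / (len(bits) - k)

# usage: feed it a list of 0s and 1s that someone typed "at random"
# print(predict_bits([0,1,1,0,1,0,0,1,1,0,1,0,1,1,0,0,1,0,1,1]))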

--Z

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/05/88)

In article <1069@microsoft.UUCP> chrispi@microsoft.UUCP (Chris Pirih) 
joins the discussion on quantum randomness vs. Turing-computable randomness:

 >  ...  Besides, is it
 > necessary that a simulated coin-flip be "truly" random, or just
 > effectively unpredictable (i.e., that the Turing android eschew
 > foreknowledge of its pseudo-random pseudo-coin-flip)?  The latter
 > seems sufficient to me.

It is enough that the random number generator be unpredictable
(by *any* predictor, including one who would cheat and examine
my random number generation process).  The only way I know to
do this is to use a quantum amplifier, so that no one can
anticipate the outcome (myself included).  A Turing machine
can compute a pseudo-random sequence, but it cannot implement
a random number generator based on a quantum amplifier.  (Such
a device would cease to be a Turing Machine.)

--Barry Kort

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (12/05/88)

In article <563@metapsy.UUCP> sarge@metapsy.UUCP (Sarge Gerbode) writes:
>I don't need a mechanistic explanation of my own behavior (much of
>it, at least), because I am directly aware of causing it by
>intention.  Furthermore, the most major observable difference between
>myself and a machine is that the latter is explainable in mechanistic
>terms, whereas I am not.  

Neither of these two propositions can be demonstrated reliably.
The behaviorists have shown that behavior which subjectively seems
to us to be caused by intention can be determined (even hypnotists
can demonstrate this), therefore your impressions are unreliable.
In addition, a complex enough neural network can demonstrate behavior
the cause of which is not immediately apparent.  Obviously no network
has been invented as complex as the human brain, and until one is
we won't be able to answer the question experimentally.  Those bothered
by possible loss of free will should recall that in a system complex
enough, there is room for the possibility of indeterminacy, be
it a biological system or whatnot.  

I will ask Sarge the same questions I asked Gilbert: if humans are
not machines, what elements are added to the body (which seems to
be a physical machine as far as we can tell) which make it otherwise?
Are these material or immaterial?  Is there some aspect of human
beings which does not obey the laws of nature?

hajek@gargoyle.uchicago.edu.UUCP (Greg Hajek) (12/06/88)

In article <1841@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>Those bothered
>by possible loss of free will should recall that in a system complex
>enough, there is room for the possibility of indeterminacy, be
>it a biological system or whatnot.

Well, it's not immediately apparent that indeterminacy is a function of 
complexity, in any sense.  The two-slit experiment is extremely simple,
analyzed within the context of quantum mechanics, but that doesn't resolve the
question of point-wave duality.  Similarly, no PDP network will exhibit behavior
that defies a deterministic explanation when run on a computer; indeed, just
dump every step of processing, and you have a low-level explanation right there
(of course, as complexity increases, you increase the possibility that, say,
a cosmic-ray particle will come screaming through your computer, but even such
an event as this is not "indeterminate").

>I will ask Sarge the same questions I asked Gilbert: if humans are
>not machines, what elements are added to the body (which seems to
>be a physical machine as far as we can tell) which make it otherwise?
>Are these material or immaterial?  Is there some aspect of human
>beings which does not obey the laws of nature?

I wasn't asked, but while I'm shooting my mouth off . . . if humans are not
machines, of course there is no material addition to the body, since that would
just comprise a different machine.  Nor is there any assumption that humans do
not obey the laws of nature, but rather that our perspective on the laws of
nature as being equivalent to the "laws" of physics is erroneous.  This is
required from a dualist point of view, for instance:  if a non-physical event
can govern physical behavior, conservation of energy goes right out the window,
and not just such that (delta E)*(delta t) <= h.

But assuming such a dualist stance leaves some ugly questions:  why do the
empirical observations of physicists generalize so well to encompass new
situations (that is, why does physics work)?  If a nonphysical theory of the
world is going to be adopted, it's really not a good enough reason to do so
just so that the task of creating an intelligent machine is impossible, sans
any other motivation.  I would certainly expect God to pick on the economists,
too, at least.
----------
Greg Hajek {....!ihnp4!sphinx!gargoyle!hajek}
"I don't know what the big deal is, I don't feel anything yeeeEEEEAAAAAA...."

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/06/88)

In article <563@metapsy.UUCP> sarge@metapsy.UUCP (Sarge Gerbode) writes:

 > I don't need a mechanistic explanation of my own behavior (much of
 > it, at least), because I am directly aware of causing it by
 > intention.

I agree with Sarge that, as a human being, I frequently engage in 
goal-seeking behavior.  That is, I have intentions.

I also engage (from time to time) in goal-choosing behavior.  But
unlike my goal-seeking behavior, my goal-choosing behavior seems
much more unintentional.  Sometimes goals are thrust upon me by
circumstances or cultural expectations.  Sometimes goals surface
as part of a natural progression of learning (as in research).

In any event, I find it hard to predict what goals I will adopt
after I complete my current agenda.  (But I also suspect that a
more sagacious soul than I would have less trouble anticipating
my future goals.)

--Barry Kort

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/06/88)

In article <wXaC5Ky00V4U05hEUM@andrew.cmu.edu> ap1i+@andrew.cmu.edu
(Andrew C. Plotkin) writes:

 > ... if you had a computer console in front of you with a button
 > marked "pain" (any human simulator had better have some sort of sensory
 > input), would you consider it okay to push it? 

Yes, if I was testing the computer's pain circuits.  When a computer
is in pain (i.e. a circuit board is burning out, or a cable is being
cut), I want to be sure that it can sense its distress and accurately
report its state of well-being.  Similarly, if I put the machine in
emotional pain (by giving it a program that runs forever and does
no useful work), I hope the machine can diagnose the problem and
gracefully apprise me of my error.  Getting an incomprehensible
core dump is like having a baby throw up because something it ate
was indigestible.  (I find core dumps indigestible.)

Barry Kort

jeff@censor.UUCP (Jeff Hunter) (12/06/88)

In article <177@iisat.UUCP>, paulg@iisat.UUCP (Paul Gauthier) writes:
> In article <281@esosun.UUCP>, jackson@freyja.css.gov (Jerry Jackson) writes:
> > I can accept the needs, goals, and plans... but why does everyone
> > assume that an intelligent machine would be a *feeling* being?  I see
> > no reason to assume that an IM would be capable of experiencing
> > anything at all.  This doesn't imply that it wouldn't be intelligent.
> 	Actually, I think it does. Feelings are simply products of
> intelligence. Once any form of intelligence reaches the complexity of
> the human mind it will undoubtedly experience 'feelings.' Feelings are
> simply manifestations of the mind's goals and needs. You feel 'sad' when

    I disagree. Emotions form a relatively simple reasoning system. (Lest 
you get the wrong impression from the start let me hasten to add that I
enjoy my emotions [most of the time at least]. I'm just saying that they're
not all that bright.)
    For example : I like you. I share with you. You like me. You share back.
There's a viable society without needing deep thought on the economics of
co-operation vs competition, or long computer modelling runs, etc...
"Like", "trust", "distrust", and "hate" form such useful models of 
behaviour that just about any mammal or bird use them to reason about
relationships.
    I assume that any relatively dumb intelligences that need to operate
in some social environment would need some similar shortcuts to reason with.
Smarter intelligences "evolved" from the dumb ones would probably retain
the emotions just from design economy.

    Emotional reasoning can often outperform logical reasoning (watch any
episode of Star Trek :-). Lots of people have stopped smoking because of
guilt rather than reasoned argument. However emotions (especially strong
ones) can make people do really dumb things too. Blind love and blinding
hatred are cliches.

    If I was dealing everyday with an artificial intelligence then I'd 
prefer it to have human-like emotions (or at least dog-like). I'd make 
an emotional attachment and I'd be sort-of disappointed if it declared
me surplus protein and fed me to the grinder :-).

    However an intelligence that didn't have to interact with others
wouldn't need to be run by emotions. A robot asteroid miner, for example,
could be set the task of converting a planetoid into can openers and 
paper weights. It wouldn't have to have a favourite ore truck, or be
pleased with the day's output, or panic and freeze if a major mechanical
failure happens. It wouldn't even have to feel disappointed or noble
as it melts itself down to make the last crate of paper weights.
    Conversely I could see an emotional version of the same machine
that could probably do just as good a job. (The emotions would have 
to be adjusted from human norms though.)

    In summary I think that intelligence doesn't require emotions, but
near-term "real" artificial intelligences will need them to interact
with humans, and the emotions will probably hang around unless they
are designed out for a purpose.
-- 
      ___   __   __   {utzoo,lsuc}!censor!jeff  (416-595-2705)
      /    / /) /  )     -- my opinions --
    -/ _ -/-   /-     The first cup of coffee recapitulates phylogeny...
 (__/ (/_/   _/_                                    Barry Workman

pepke@loligo.fsu.edu (Eric Pepke) (12/06/88)

A Turing machine cannot just consult a truly random coin flipper.  If it
could, it wouldn't be a Turing machine.  However, there is a much simpler
objection to the argument.  A random number generator can only be consulted
a finite number of times in a lifetime.  For every finite sequence of
such random numbers, you can produce a partial Turing machine specification
which produces that sequence.  So, there's no problem.
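
A minimal illustration of the point, in (anachronistic) Python with invented
names: given any finite record of flips, a trivial deterministic machine that
simply replays the record reproduces it exactly.

# Any finite record of "truly random" outcomes can be reproduced by a
# deterministic lookup table; this is just such a table.
def make_replayer(recorded):
    tape = list(recorded)
    def next_flip(step):
        return tape[step]          # deterministic: step n always yields the same bit
    return next_flip

observed = [1, 0, 0, 1, 1, 0, 1]   # a lifetime's worth of quantum coin flips
machine = make_replayer(observed)
assert [machine(i) for i in range(len(observed))] == observed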

-EMP

ok@quintus.uucp (Richard A. O'Keefe) (12/06/88)

In article <1841@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>Neither of these two propositions can be demonstrated reliably.
>The behaviorists have shown that behavior which subjectively seems
>to us to be caused by intention can be determined (even hypnotists
>can demonstrate this), 

Er, how do hypnotists demonstrate that?  Perhaps I've read too many issues
of the Skeptical Inquirer and not enough of the National Enquirer, but my
understanding was that hypnotism is these days regarded as a form of
voluntary fantasy.  (We'll just have to put up with "voluntary" until
Skinner sends me the Official Phrasebook.)

As for the first part of this, there is a philosophical tradition called
"compatibilism", which holds that "it was caused by intention" and
"it was determined" are not contradictory.

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (12/06/88)

In article <286@gargoyle.uchicago.edu> hajek@gargoyle.uchicago.edu.UUCP (Greg Hajek) writes:
>
>Well, it's not immediately apparent that indeterminacy is a function of 
>complexity, in any sense.  

Simple systems of macroscopic dimensions are clearly deterministic,
would you agree?  Thus, any hope for indeterminacy lies in a system
being complex enough that non-analyticity is guaranteed.  For
example, when you stand at the base of a waterfall, you will from time
to time be splashed by jets of water.  But you can not mechanistically
compute when you will be splashed because of the complexity of the system.

>Similarly, no PDP network will exhibit behavior
>that defies a deterministic explanation when run on a computer; indeed, just
>dump every step of processing, and you have a low-level explanation right there
Ah, but the low level explanation may not make any sense of the behavior,
but only describes it.  Making sense of it requires interpretation.  Take
simple backprop programs, for example.  The experimenter knows what the
input and output units are to be, but does not determine the final
successful configuration of the hidden units.  Often, their final state
is a surprise, but still makes sense after interpretation.
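
In the spirit of those simple backprop programs, a generic modern sketch
(Python with numpy, not any particular program from the time): the input and
output units are fixed in advance (XOR here), the hidden units end up wherever
training puts them, and their final configuration is inspected and interpreted
only afterwards.

# Generic illustrative sketch; architecture, seed and learning rate invented.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

H = 3                                      # number of hidden units
W1 = rng.normal(scale=1.0, size=(2, H))    # input -> hidden weights
b1 = np.zeros((1, H))
W2 = rng.normal(scale=1.0, size=(H, 1))    # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)               # hidden activations
    out = sigmoid(h @ W2 + b2)             # network output
    d_out = out - y                        # output gradient (sigmoid + cross-entropy)
    d_h = (d_out @ W2.T) * h * (1 - h)     # backpropagated to the hidden layer
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# The hidden-unit states were never specified in advance; we inspect them
# afterwards and try to make sense of whatever representation was found.
h = sigmoid(X @ W1 + b1)
print(np.round(h, 2))                      # emergent hidden representation
print(np.round(sigmoid(h @ W2 + b2), 2))   # should approximate XOR: 0 1 1 0
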
>
>>I will ask Sarge the same questions I asked Gilbert: if humans are
>>not machines, what elements are added to the body (which seems to
>>be a physical machine as far as we can tell) which make it otherwise?
>>Are these material or immaterial?  Is there some aspect of human
>>beings which does not obey the laws of nature?
>
>I wasn't asked, but while I'm shooting my mouth off . . . if humans are not
>machines, of course there is no material addition to the body, since that would
>just comprise a different machine.  Nor is there any assumption that humans do
>not obey the laws of nature, but rather that our perspective on the laws of
>nature as being equivalent to the "laws" of physics is erroneous.  This is
>required from a dualist point of view, for instance:  if a non-physical event
>can govern physical behavior, conservation of energy goes right out the window,
>and not just such that (delta E)*(delta t) <= h.
>
This recapitulates Helmholtz's reasoning when he decided that conservation
of energy required humans to be machines.  I have yet to see anyone
make a satisfactory argument to the contrary.  Obviously, if one brings
religion or magic into the equation then it opens many possibilities,
but so far no one in this discussion has cited either of those as
their reasons for denying that humans are machines.

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (12/07/88)

In article <817@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>In article <1841@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>>Neither of these two propositions can be demonstrated reliably.
>>The behaviorists have shown that behavior which subjectively seems
>>to us to be caused by intention can be determined (even hypnotists
>>can demonstrate this), 
>
>Er, how do hypnotists demonstrate that?

People who were hypnotized usually report not that they were 
compelled to perform the suggested actions but that they "felt
like it".  In other words, the subjective impression was that the
actions were voluntary, yet they do ridiculous things that are clearly
determined by the suggestion.  If you wish to claim that post-hypnotic
suggestions are true free-will voluntary actions, then I can
only argue with your definition.

ray@bcsaic.UUCP (Ray Allis) (12/07/88)

ok@quintus.uucp (Richard A. O'Keefe) says:

>"An awareness of self" might be important to an intelligent organism,
>but it might be a *consequence* of intelligence rather than a
>*precondition* for it.

"Self-awareness" is not required for intelligent behavior (my definition
of intelligence, of course), but it IS necessary for the association of
experiences which constitutes symbol definition.  Note that symbol
*manipulation*, as by digital computers, can proceed in the total absence
of intelligence.

rjc@aipna.ed.ac.uk (Richard Caley) (12/07/88)

In article <563@metapsy.UUCP> sarge@metapsy.UUCP (Sarge Gerbode) writes:

>I don't need a mechanistic explanation of my own behavior (much of
>it, at least), because I am directly aware of causing it by
>intention.

It is precisely because of this that I need an explanation. I have a
reasonably strong belief in causality and without an explanation I am
reduced to talking about "intention" or something similar as a magical
something to break the causal chain at me. 

>Furthermore, the most major observable difference between
>myself and a machine is that the latter is explainable in mechanistic
>terms, whereas I am not.  On the other hand, if I could explain
>*your* behavior entirely on mechanistic grounds, then I think I would
>have grounds (Occam's Razor) for not attributing subjectivity to
>you.

Come on, I cannot explain all the behaviour of the global weather system
on mechanistic grounds - in fact, nobody can at this time. Does this
mean we have to go back to rain gods and thunder gods, attributing
'subjectivity' to the weather?

I would challenge you to explain the behaviour of the computer on which
you read this on mechanistic grounds. I can't. In fact, until the
physicists have solved all their problems no one will be able to. By
your own argument. . . .

>It is because others are observably *not* machines,
>not explainable in mechanistic terms, that we attribute subjectivity
>(and humanity) to them, in order to explain their behavior.

Defining subjectivity as humanity is rather a good way of winning the
argument. However, this is usually called "cheating". I can prove that
it is impossible to sit on a stool by defining the property of being a
seat as being "chairness" and so since stools are not chairs . . .

>People don't like to be manipulated, programmed, treated like machines,
>and part of the reason why is, I believe, that they have an immediate
>awareness of themselves as not being mechanistically determined, and
>that sort of treatment observably embodies a lie.


This does not make them correct.

People believe all sorts of things. I believe that pushing something
twice as hard makes it go twice as fast. This is incorrect, but it gets
me around the supermarket with my trolly so what the hell.

-- 
	rjc@uk.ac.ed.aipna	AKA	rjc%uk.ac.ed.aipna@nss.cs.ucl.ac.uk

"We must retain the ability to strike deep into the heart of Edinburgh"
		- MoD

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (12/07/88)

From article <163@censor.UUCP>, by jeff@censor.UUCP (Jeff Hunter):
" 
"     I disagree. Emotions form a relatively simple reasoning system.... 
" There's a viable society without needing deep thought on the economics of
" co-operation vs competition, or long computer modelling runs, etc...

This is a parochially human view.  Emotions seem simple to you, because
you can experience them without effort or reflection, and because
"lower" animals have them too.  Why not reason that since the
mechanisms that permit animals to form societies took much longer
to evolve than those permitting human reason that the former must
be more complex?  I think that would make at least as much sense.

		Greg, lee@uhccux.uhcc.hawaii.edu

ap1i+@andrew.cmu.edu (Andrew C. Plotkin) (12/08/88)

/ > ... if you had a computer console in front of you with a button
/ > marked "pain" (any human simulator had better have some sort of sensory
/ > input), would you consider it okay to push it?

/ Yes, if I was testing the computer's pain circuits.  When a computer
/ is in pain (i.e. a circuit board is burning out, or a cable is being
/ cut), I want to be sure that it can sense its distress and accurately
/ report its state of well-being.

Ah, but I'm not talking about a system that senses damage to the computer. I'm
talking about something that applies stimuli to the simulated pain inputs of the
simulated human.

    You brought up "computers being able to simulate humans," and I'm using that
concept. To clarify it, let me describe it as a program running on a computer;
with input routines that feed data to the same thought-mechanisms that human
sensory nerves feed to in the human mind; with output routines that take data
from the appropriate thought-mechanisms and display it in suitable form. Given
any input, it will produce output as a typical human would. (Passing the Turing
test, therefore.)

    (The "easiest" way to this is to create a trillion CPU's, each capable of
simulating one neuron, and hooking them together. Sensory input could then be
pushed into the "sensory neurons" directly. However, the exact mechanism is not
relevant here.)

    Now, there's a big difference between damage to the computer and simulated
pain. One degrades the performance of the simulation; the other makes the
simulation yell "ouch!" (assuming it's a good simulation.)
    Obvious example: if a brain surgeon is working on a conscious patient, the
patient feels no pain (assuming the cut scalp has been numbed.) The surgeon can
poke around, feed minute electrical currents in, and so forth; the patient will
see strange flashes, have odd bits of memory pop up, and so forth. If the
surgeon drops his scalpel in, the patient will stop thinking or suffer
functional loss, but no pain is involved, unless sensory centers are hit.

/   Similarly, if I put the machine in
/ emotional pain (by giving it a program that runs forever and does
/ no useful work), I hope the machine can diagnose the problem and
/ gracefully apprise me of my error.

Keep thinking human simulation. The machine would simulate reactions like "Damn,
this is boring." Or, more likely, "Why should I do this idiot work? Program it
into a computer!"
   (Of course, if it was a simulation of a reasonably open-minded human, you
could easily convince it that it was really a computer. That its optical inputs
are coming from cameras would be a giveaway. But I doubt it would settle down and
execute C for the rest of its existence. Assume it was a simulation of you --
would you?)

--Z

ok@quintus.uucp (Richard A. O'Keefe) (12/08/88)

In article <1847@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>In article <817@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>>In article <1841@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>>>The behaviorists have shown that behavior which subjectively seems
>>>to us to be caused by intention can be determined (even hypnotists
>>>can demonstrate this), 
>>
>>Er, how do hypnotists demonstrate that?
>
>People who were hypnotized usually report not that they were 
>compelled to perform the suggested actions but that they "felt
>like it".  In other words, the subjective impression was that the
>actions were voluntary, yet they do ridiculous things that are clearly
>determined by the suggestion.  If you wish to claim that post-hypnotic
>suggestions are true free-will voluntary actions, then I can
>only argue with your definition.

I note that Banks didn't quote the bit where I pointed out that hypnosis
is understood these days as a sort of voluntary fantasy:  the subject
does what s/he thinks a hypnotic subject ought to do.  To say that the
actions "are clearly _determined_ by the suggestion" begs the question.
How would you show that an action performed in response to a post-
hypnotic suggestion is not voluntary?  (Anyone who wants to claim "I
was hypnotised" as a defence in court had better be prepared for a
nasty surprise.)  The thing is, being-a-hypnotic-subject is a social
context in which it is acceptable, even *expected*, for the subject to
"do ridiculous things".  Try instead a hypnotic experiment where the
subjects are told in advance that for every command they obey they
will be fined $100, or try a stage hypnotist's act with an audience of
confederates who boo whenever the subject does something silly.

Instead of arguing with my definition of "voluntary", it might be better
to read up on hypnotism in the scientific literature.

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/08/88)

In article <180@loligo.fsu.edu> pepke@loligo.UUCP (Eric Pepke) writes:
 > A random number generator can only be consulted a finite number of
 > times in a lifetime.  For every finite sequence of such random numbers,
 > you can produce a partial Turing machine specification which produces
 > that sequence.  So, there's no problem.

Just one problem, Eric.  You have to build your Turing Machine
emulator before I have finished living my life.  The information
you need to construct it is not available just yet.

--Barry Kort
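
A minimal sketch of the claim quoted above (that any finite recorded
sequence of "random" numbers can be reproduced by a fixed, finite
program), assuming Python purely for illustration:

    class ReplayGenerator:
        """Replays a finite recorded sequence, one value per call."""
        def __init__(self, recorded):
            self.recorded = list(recorded)   # the finite transcript
            self.index = 0
        def next(self):
            value = self.recorded[self.index]
            self.index += 1
            return value

    # Whatever finite sequence a lifetime of consultations happened to produce...
    transcript = [4, 17, 3, 42, 8]
    generator = ReplayGenerator(transcript)
    assert [generator.next() for _ in transcript] == transcript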

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/08/88)

In article <1841@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu
(Gordon E. Banks) writes:

 > In addition, a complex enough neural network can demonstrate behavior
 > the cause of which is not immediately apparent.  Obviously no network
 > has been invented as complex as the human brain, and until one is
 > we won't be able to answer the question experimentally.  Those bothered
 > by possible loss of free will should recall that in a system complex
 > enough, there is room for the possibility of indeterminacy, be
 > it a biological system or whatnot.  

Not only is there room for the possibility of indeterminacy, some
of us deliberately incorporate elements of randomness into our
behaviors.  (I for one don't want certain people to be able to predict
every move I'm going to make.)

As to Free Will, I define it as the capacity to make and enact
choices consistent with my knowledge and values.

 > If humans are not a machine, what elements are added to the body
 > (which seems to  be a physical machine as far as we can tell)
 > which make it otherwise?  Are these material or immaterial? 

One of the more interesting elements that is added to the human
body is the element of information.  (There is at least one
school of physics which proposes that the Universe is composed
of matter, energy, and information.)  The key information added
to the human body is in the form of Knowledge and Values.  In
deference to our Eastern philosophical friends, we may think of
such information as "Patterns of Organic Energy".  (It is immaterial
whether we think of such patterns as material or immaterial.)

 > Is there some aspect of human beings which does not obey the
 > laws of nature?

Not to my knowledge.

--Barry Kort

tmoody@sjuvax.UUCP (T. Moody) (12/08/88)

In article <1841@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:

>I will ask Serge the same questions I asked Gilbert: if humans are
>not a machine, what elements are added to the body (which seems to
>be a physical machine as far as we can tell) which make it otherwise?
>Are these material or immaterial?  Is there some aspect of human
>beings which does not obey the laws of nature?

The assumption here is that anything that "obeys the laws of nature" [as
currently understood, or some future set?] is a machine.  I have stayed
out of the discussion so far, because this is a singularly uninteresting
conception of "machine," in my view.  If you don't understand "machine"
in a way that lets you distinguish between, say, trees and clocks, then
you are taking this word on a long holiday.

-- 
Todd Moody * {allegra|astrovax|bpa|burdvax}!sjuvax!tmoody * SJU Phil. Dept.
            "The mind-forg'd manacles I hear."  -- William Blake

throopw@xyzzy.UUCP (Wayne A. Throop) (12/09/88)

> gilbert@cs.glasgow.ac.uk (Gilbert Cockton)
> [...] most AIers believe the assertion that logic encapsulates
> the rules of thought, and that all sentences can be given a semantics
> in formal logic (note how some famous mathematical logicians disagree
> and stick to formal languages as being very different things).

I dunno.  I thought "most AIers" agreed with the formal logicians.
It is true that "Knowledge Engineers" and "Expert System" designers
are trying to model knowledge as immense sets of formal propositions.
But it isn't clear to me that these constitute research so much as
technologists spinning off results from former generations of AI
research that didn't pan out *as* *AI* (as a whole)
(but which might pan out as technology).

> Anyone who talks of computers "understanding" does so:
>   a) to patronise users whom they don't know how to instruct properly;
>   b) because they are AI types.

I dunno.  The forms "the program knows <mumble>", or "the program will
try to <mumble>" seem apt to me.  No worse than "the moth knows its mate
is upwind" or "the moth will try to reach its mate".  I don't think
these forms are necessarily silly anthropomorphisms.  Hence, I don't
think there is necessarily anything wrong with the AIer's use of
"understanding", "know", "try" and so on as technical terms for what
their programs are doing (or states that their programs are in).

--
"Technical term" is a technical term for a common word or phrase whose
meaning is, in some contexts, distorted beyond mortal comprehension.
                                --- Hal Peterson cray!hrp
-- 
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw

peru@soleil.UUCP (Dave Peru) (12/09/88)

I was wrong.  It's only a matter of time.

Artificial Intelligence = Intelligence

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (12/09/88)

In article <1736@sjuvax.UUCP> tmoody@sjuvax.UUCP (T. Moody) writes:
>
>The assumption here is that anything that "obeys the laws of nature" [as
>currently understood, or some future set?] is a machine.  I have stayed
>out of the discussion so far, because this is a singularly uninteresting
>conception of "machine," in my view.  If you don't understand "machine"
>in a way that lets you distinguish between, say, trees and clocks, then
>you are taking this word on a long holiday.

Perhaps children would require such a restrictive concept of machine in
order to differentiate trees and clocks, but I do not.  I would be happy
to hear of some other word, broad enough to include trees and clocks
which we could use instead of machine.  The concept as I am using it
is that of a system which is potentially capable of being created,
given sufficient natural (as opposed to supernatural) knowledge of
its workings.  The controversy is over whether humans (and I suppose
plants and animals) are such systems.  I hold that if humans are
such "machines" then it is possible that someday we will be able
to construct an artificial person.

cam@edai.ed.ac.uk (Chris Malcolm) (12/10/88)

In article <817@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>In article <1841@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu
>(Gordon E. Banks) writes:
>> .....
>>The behaviorists have shown that behavior which subjectively seems
>>to us to be caused by intention can be determined (even hypnotists
>>can demonstrate this), 
>
>Er, how do hypnotists demonstrate that?
> ....
>As for the first part of this, there is a philosophical tradition called
>"compatibilism", which holds that "it was caused by intention" and
>"it was determined" are not contradictory.

I think what Gordon Banks is referring to is the rationalisation of
post-hypnotic suggestion, where the victim is instructed under hypnosis
to do something mildly bizarre at a certain time, and also to "forget"
(have no conscious knowledge of) the instruction. At the appointed time
the victim will perform the bizarre act, and on being asked why, will
produce some spurious rationalisation, and insist under questioning
that this rationalisation is the true, real, sincere motive of an act
which was performed freely and with intention.

Exactly the same phenomenon is exhibited by split-brain victims, when
one half of the brain is asked to account for an action performed by
the other half, where the action was taken on the basis of perceptual
data only available to the performing half, and not to the explaining
half. Once again, a spurious and often ingeniously contrived
rationalisation is offered, with the speaker (speaking half) apparently
quite sincere in believing it, and without any sensation of strain or
puzzlement.

A natural conclusion is that introspection is not a privileged window
into the operations of the mind, but exactly the same kind of gifted
hypothesizing we perform when "seeing" the motives of others as revealed
by their behaviour in the context of our knowledge and suspicions, only
of course with access to a larger fund of knowledge.  In other words,
the "subjective seeming" of one's introspection has exactly the same
epistemic status as one's educated guess about the feelings, 
motivation, and mental processes of one's employer (for example).

If this is true - and I think it is - then the commonsense folk
psychology which is justified by appeals to our shared introspective
experiences, although very useful for negotiating with one another, is
dangerous mental luggage when doing AI research.

Supposing free-will to be contradicted by determinism has always seemed
to me to be due to confusing "determined" in the sense of "coerced"
(which is not free behaviour) with "determined" in the sense of "in
principle predictable", which is quite another thing.  Anyone who
supposes that exercise of their free-will must depend upon some
essentially unpredictable (random) component is clearly suffering from a
shortage of good reasons for doing things, no?

Chris Malcolm

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (12/10/88)

In article <42571@linus.UUCP> bwk@mbunix (Kort) writes:
>Not only is there room for the possibility of indeterminacy, some
>of us deliberately incorporate elements of randomness into our
>behaviors.

Interesting.  Do you use a true or a pseudo random number generator
to introduce this randomness?

>One of the more interesting elements that is added to the human
>body is the element of information. 

The information necessary to reproduce the organism seems to be
encoded in DNA.  One would suppose that knowledge and values are
also encoded in physical systems whether neural networks or something
as yet undiscovered.

> (There is at least one
>school of physics which proposes that the Universe is composed
>of matter, energy, and information.)

What school is this?  I hadn't heard of it.

>deference to our Eastern philosophical friends, we may think of
>such information as "Patterns of Organic Energy". 

How does this Organic Energy differ from other forms of energy?  Does
it have separate conservation laws?  Is this a new form of vitalism?

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/10/88)

Richard A. O'Keefe says:

 > "An awareness of self" might be important to an intelligent organism,
 > but it might be a *consequence* of intelligence rather than a
 > *precondition* for it.

The ability to repose a self model is a consequence of the ability
to repose models.  Model-based reasoning is one facet of intelligence,
and a useful one for a sentient being who wishes to survive in a
dangerous world.  One of the interesting issues in defining a
self model is the location of the boundary between the self and
non-self portions of the model.  There is some evidence that humans
don't agree on the location of such boundaries.

--Barry Kort

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/11/88)

In article <1859@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu
(Gordon E. Banks) inquires about my earlier remarks:

>In article <42571@linus.UUCP> bwk@mbunix (Kort) writes:
>>Not only is there room for the possibility of indeterminacy, some
>>of us deliberately incorporate elements of randomness into our
>>behaviors.
>
>Interesting.  Do you use a true or a pseudo random number generator
>to introduce this randomness?

Both, Gordon.  For very long sequences, the difference becomes important
if there is an adversary who is attempting to model my pattern of moves.
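
A small sketch of why the distinction can matter over long sequences,
assuming Python's random module as the pseudo-random source and
os.urandom as a stand-in for a hardware ("true") source:

    import os
    import random

    # A pseudo-random generator is a deterministic function of its seed:
    # an adversary who recovers the seed can reproduce every "move".
    defender = random.Random(12345)
    adversary = random.Random(12345)
    moves = [defender.randint(0, 9) for _ in range(20)]
    predicted = [adversary.randint(0, 9) for _ in range(20)]
    assert moves == predicted        # the pattern is modelled perfectly

    # An operating-system entropy source has no seed to recover, so past
    # output gives an observer no practical handle on future output.
    unpredictable_moves = [b % 10 for b in os.urandom(20)]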

>> (There is at least one
>>school of physics which proposes that the Universe is composed
>>of matter, energy, and information.)
>
>What school is this?  I hadn't heard of it.

I ran across this item in the Science and the Citizen column
of Scientific American some months ago.  Perhaps another netter
can provide a more detailed account.

>>In deference to our Eastern philosophical friends, we may think of
>>such information as "Patterns of Organic Energy". 
>
>How does this Organic Energy differ from other forms of energy?  Does
>it have separate conservation laws?  Is this a new form of vitalism?

It differs only in its structural relationship, the way pieces of
a jigsaw puzzle relate to each other when they are assembled to
form a big picture.  Other than that, same pieces as the ones in
the box.  I am not familiar with the theory of vitalism.

--Barry Kort

ok@quintus.uucp (Richard A. O'Keefe) (12/12/88)

In article <215@edai.ed.ac.uk> cam@edai (Chris Malcolm) writes:
>In article <817@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>>Er, how do hypnotists demonstrate that?

>I think what Gordon Banks is referring to is the rationalisation of
>post-hypnotic suggestion, where the victim is instructed under hypnosis
>to do something mildly bizarre at a certain time, and also to "forget"
>(have no conscious knowledge of) the instruction. At the appointed time
>the victim will perform the bizarre act, and on being asked why, will
>produce some spurious rationalisation, and insist under questioning
>that this rationalisation is the true, real, sincere motive of an act
>which was performed freely and with intention.

There is a fairly major sort of non-sequitur here, plus a misapprehension
of hypnotism.  The non-sequitur is this:
from:	the act was suggested by the hypnotist
	the act was performed
	the actor produces a "spurious" rationalisation
it does **NOT** follow that the act was not done freely.  The misapprehension
is this: hypnotism is not a process whereby the hypnotist controls the will of
the subject, but a voluntary fantasy which is particularly good at implanting
false memories.  For example, it is not the case that all people can be
hypnotised, whereas if a subject can be hypnotised by one mesmerist he or
she can usually be hypnotised by another.  This ought to suggest to us that
just maybe hypnosis might be something that subjects do, rather than something
that hypnotists do.  Let me propose another account of what might be going on.
	Subject agrees to play in hypnotic drama.
	Hypnotist makes suggestion.
	Subject voluntarily agrees to do so, rather than spoil the game.
	However, this doesn't seem like a good enough motive for the act,
	so subject confabulates another reason.  (This is just cognitive
	dissonance at work.)
	Subject comes out of trance believing confabulation.
	Subject performs act.

A key point here is cognitive dissonance.  (Look it up in any good
Psychology library.)  People make up stories to account for their actions
all the time, and believe them too.  But it doesn't follow from that
that their actions are not free.

fransvo@htsa (Frans van Otten) (12/12/88)

In article <1736@sjuvax.UUCP> tmoody@sjuvax.UUCP (T. Moody) writes:
>                                      If you don't understand "machine"
>in a way that lets you distinguish between, say, trees and clocks, then
>you are taking this word on a long holiday.

When you are 'normally' talking about machines, you are right.  But in this
discussion, you are wrong.  A 'biological system' (a human being, an animal,
a plant, a tree, etc.) can be considered as a black box.  Put something into
it (food): what happens?  (It grows.)  Now give it information instead of food:
what happens?  That's the way we are discussing human beings as machines.

-- 
                         Frans van Otten
                         Algemene Hogeschool Amsterdam
			 Technische en Maritieme Faculteit
                         fransvo@htsa.uucp

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (12/13/88)

From article <840@quintus.UUCP>, by ok@quintus.uucp (Richard A. O'Keefe):
" ...  For example, it is not the case that all people can be
" hypnotised, whereas if a subject can be hypnotised by one mesmerist he or
" she can usually be hypnotised by another.  This ought to suggest to us that
" just maybe hypnosis might be something that subjects do, rather than something
" that hypnotists do.

Some bottles are harder to open than others, therefore bottles open themselves.
Some programs are harder to write than others, so computers have free
will.  (This method of argument has great possibilities.)

" ...
" A key point here is cognitive dissonance.  (Look it up in any good
" Psychology library.)  People make up stories to account for their actions
" all the time, and believe them too. ...

All the time.  Precisely.  And one of the most popular is the story
about free will.

		Greg, lee@uhccux.uhcc.hawaii.edu

ok@quintus.uucp (Richard A. O'Keefe) (12/13/88)

In article <2804@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
>From article <840@quintus.UUCP>, by ok@quintus.uucp (Richard A. O'Keefe):
>" ...  For example, it is not the case that all people can be
>" hypnotised, whereas if a subject can be hypnotised by one mesmerist he or
>" she can usually be hypnotised by another.  This ought to suggest to us that
							    ^^^^^^^
>" just maybe hypnosis might be something that subjects do, rather than something
	^^^^^	       ^^^^^
>" that hypnotists do.

>Some bottles are harder to open than others, therefore bottles open themselves.
>Some programs are harder to write than others, so computers have free
>will.  (This method of argument has great possibilities.)

I think you had better find some better analogies.  At *NO* point in my
message did I say ">therefore< people have free will".  All I suggested
in the passage quoted was that the fact that some people can be
hypnotised and others can't **suggests** that hypnosis is something that
the subjects do.  THIS IS A TESTABLE HYPOTHESIS!  (And in fact it is the
current theory.)  It does not follow from that that people have free will,
nor that they don't.  In this series of messages on this topic I have
been careful to avoid stating my opinion about free will at all.  I only
criticised a naive and out-of-date view of hypnosis.

	Some bottles are harder to open than others, this ought to suggest
	to us that a property of the bottle rather than the opener might
	be involved.

Now _that_ would have been a fair analogy.

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/13/88)

In article <215@edai.ed.ac.uk> cam@edai (Chris Malcolm) writes:

 > Anyone who supposes that exercise of their free-will must depend
 > upon some essentially unpredictable (random) component is clearly
 > suffering from a shortage of good reasons for doing things, no?

I define Free Will as the capacity to make and enact choices consistent
with my knowledge and values.  It is only when my value system is
teetering on the razor's edge between two choices that I turn to my
random number generator to resolve the choice and get on with my life.
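
A minimal sketch of the tie-breaking rule described above, assuming
Python and a purely numeric value function standing in for "knowledge
and values":

    import random

    def choose(options, value):
        """Pick the highest-valued option; consult chance only on an exact tie."""
        best = max(value(option) for option in options)
        top = [option for option in options if value(option) == best]
        if len(top) == 1:
            return top[0]            # knowledge and values decide outright
        return random.choice(top)    # razor's edge: fall back to the RNG

    # Example: two options of equal value force a coin flip.
    print(choose(["tea", "coffee"], value=lambda o: {"tea": 3, "coffee": 3}[o]))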

--Barry Kort

smoliar@vaxa.isi.edu (Stephen Smoliar) (12/13/88)

In article <1859@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu
(Gordon E. Banks) writes:
>
>> (There is at least one
>>school of physics which proposes that the Universe is composed
>>of matter, energy, and information.)
>
>What school is this?  I hadn't heard of it.
>
The only proponent of this school with whom I am familiar is  David Bohm,
who gave two lectures on the subject at UCLA last year.  The best place to
find a summary of Bohm's ideas is in a book entitled UNFOLDING MEANING:  A
WEEKEND OF DIALOGUE WITH DAVID BOHM, edited by Donald Factor and published
by Foundation House Publications in Gloucestershire.

pepke@loligo.fsu.edu (Eric Pepke) (12/13/88)

In article <42569@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>In article <180@loligo.fsu.edu> pepke@loligo.UUCP (Eric Pepke) writes:
> > A random number generator can only be consulted a finite number of
> > times in a lifetime.  For every finite sequence of such random numbers,
> > you can produce a partial Turing machine specification which produces
> > that sequence.  So, there's no problem.
>
>Just one problem, Eric.  You have to build your Turing Machine
>emulator before I have finished living my life.  The information
>you need to construct it is not available just yet.

No, I just need to have it complete by the time you are ready to compare
your life against it.  Betcha that given enough resources I can make one
between the time you die and the time you get used to flapping around your
ectoplasm.

>--Barry Kort

-EMP

"I don't know what this ectoplasm is that Arthur Conan Doyle keeps talking
 about, but it sounds like it would be great stuff to mend furniture with."
                                                           -Archy

mintz@io.UUCP (Richard Mintz) (12/14/88)

In article <2221@xyzzy.UUCP> throopw@xyzzy.UUCP (Wayne A. Throop) writes:

>> gilbert@cs.glasgow.ac.uk (Gilbert Cockton)
>> Anyone who talks of computers "understanding" does so:
>>   a) to patronise users whom they don't know how to instruct properly;
>>   b) because they are AI types.

>I dunno.  The forms "the program knows <mumble>", or "the program will
>try to <mumble>" seem apt to me.  No worse than "the moth knows its mate
>is upwind" or "the moth will try to reach its mate".  I don't think
>these forms are necessarily silly anthropomorphisms.
>Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw

As a professional writer of user documentation, I heartily agree. Often
the phrasing "if you do X, the software will do Y" is far preferable to 
the alternatives (usually "if you do X, you will see Y"). The former 
offers a more precise level of detail in an approximately equal
number of words, with the added advantage of more active language. How 
(for example) can you concisely explain the steps in a complex software 
algorithm ("How a Document is Paginated") without resorting to 
constructions such as "First the software checks X, then it adjusts Y, 
then...."? The alternative ("The pagination algorithm consists of the 
following steps: [a] Verification of value X. [b] Adjustment of Y 
accordingly. [c]...") is likely to cause migraine headache in everyone 
except those who already understand, thus defeating the purpose of 
technical documentation.

Rich Mintz	eddie.mit.edu!ileaf!mintz
The foregoing does not represent the opinion of Interleaf, Inc.

harper@hi-csc.UUCP (Paul L. Harper) (12/14/88)

I am continually amazed at the faith of AI "researchers"
(programmers?). I have seen nothing whatsoever from the AI
community that indicates there is any hope of producing
intelligence by running instructions on computers. 

It is an incredible leap of faith, completely unfounded 
by science, to assume that computers can obtain the human
quality we call intelligence. Where is the scientific justification
for the assumption? 

peru@soleil.UUCP (Dave Peru) (12/15/88)

>>In article <215@edai.ed.ac.uk> cam@edai (Chris Malcolm) writes:
>>
>> Anyone who supposes that exercise of their free-will must depend
>> upon some essentially unpredictable (random) component is clearly
>> suffering from a shortage of good reasons for doing things, no?

In article <42939@linus.UUCP> (Barry W. Kort) writes:

>I define Free Will as the capacity to make and enact choices consistent
>with my knowledge and values.  It is only when my value system is
>teetering on the razor's edge between two choices that I turn to my
>random number generator to resolve the choice and get on with my life.

Sometimes I think my brain is a chunk of clay.  Incoming reality molds and
defines my knowledge and values.  Sometimes I wonder whether I or anyone else
has Free Will at all.  Just think, if the universe is deterministic, at the
time of the Big-Bang, you could have predicted this article and the exact
words used.

Remember when a person could get a 30-year fixed-rate mortgage at 7%?

Cigarette smoke is toxic waste; do you have a choice?

andy@cs.columbia.edu (Andy Lowry) (12/15/88)

In article <4040a289.9d8d@hi-csc.UUCP> harper@hi-csc.UUCP (Paul L. Harper) writes:
>I am continually amazed at the faith of AI "researchers"
>(programmers?). I have seen nothing whatsoever from the AI
>community that indicates there is any hope of producing
>intelligence by running instructions on computers. 
>
>It is an incredible leap of faith, completely unfounded 
>by science, to assume that computers can obtain the human
>quality we call intelligence. Where is the scientific justification
>for the assumption? 

I am continually amazed at the closed-mindedness of certain
individuals.  On the contrary... it is an incredible leap of faith (in
my book) to assume that this goal is unattainable.  That is, I cannot
conceive of ANY argument that intelligence cannot be fabricated other
than one based on a belief in God.  And that is a belief that I do not
hold any part of, and that I consider an "incredible leap of faith."

That I believe "true" artificial intelligence to be attainable does
not mean that I necessarily believe it will be attained.  That depends
on a fair amount of luck, among other things.  It does mean that I
consider it a worthy goal for research effort.

In fact, even if I were not so convinced that the goal can,
theoretically, be achieved, I would still consider it a worthy
pursuit.  How many programs of research are undertaken with full
confidence in their eventual success?  Attempting to reach a goal is
certainly one valid way to go about seeing how attainable it is.

-Andy

josh@klaatu.rutgers.edu (J Storrs Hall) (12/16/88)

Paul Harper writes:
    I am continually amazed at the faith of AI "researchers"
    (programmers?). I have seen nothing whatsoever from the AI
    community that indicates there is any hope of producing
    intelligence by running instructions on computers. 

Why "continually"?  If this is so off-the-wall a pursuit, why does Mr.
Harper bother "continually" reading this newsgroup, and worrying about
it?  Why doesn't he read talk.politics instead, and be continually
amazed that there are so many people that believe any of the creeds
expounded there?

    It is an incredible leap of faith, completely unfounded 
    by science, to assume that computers can obtain the human
    quality we call intelligence. Where is the scientific justification
    for the assumption? 

The scientific method has little to say about justification for
assumptions.  AI is a scientific hypothesis, a theory.  AI
practitioners are doing experiments, or if you wish, contributing
to the big overall experiment, to test the theory.  

There is an attitude, all too common among business DP types,
that "If I can't program it in COBOL, it's worthless."  (For 
science/engineering types, /COBOL/FORTRAN/.)  It turns out that
there are plenty of reasons to believe that (a) intelligence
is possible in computers of sufficient memory and processing 
power; (b) it will be possible for us to create such intelligent
programs after a sufficient investment in software capital; and
(c) the knowledge gained in attempting this will be interesting
and useful.  

However, it would be boring and worthless to explain this to
Mr. Harper, so I shall not try.  Instead I will merely offer him
the advice that if AI baffles him, ignore it, and we'll all be
better off.

--JoSH

bph@buengc.BU.EDU (Blair P. Houghton) (12/16/88)

In article <4040a289.9d8d@hi-csc.UUCP> harper@hi-csc.UUCP (Paul L. Harper) writes:
>I am continually amazed at the faith of AI "researchers"
>(programmers?). I have seen nothing whatsoever from the AI
>community that indicates there is any hope of producing
>intelligence by running instructions on computers. 
>
>It is an incredible leap of faith, completely unfounded 
>by science, to assume that computers can obtain the human
>quality we call intelligence. Where is the scientific justification
>for the assumption? 

In a hundred and fifty years of neural science, which has determined the
primary functions of the elements involved in thought and has only to
determine the architecture before it understands the function
of the whole machine.

This member of the AI community says:  "give me a map of the brain
and I'll make it compose piano sonatas while solving the middle-east
peace problem, and I'll do it all on my little Connection Machine.
Just don't expect it to do it quickly, and don't expect it this
century.  A trillion neurons is a lot of code."

				--Blair
				  "Expert System is an oxymoron."

sworking@teknowledge-vaxc.ARPA (Scott Workinger) (12/16/88)

In article <42571@linus.UUCP> bwk@mbunix (Kort) writes:

>(There is at least one
>school of physics which proposes that the Universe is composed
>of matter, energy, and information.)

Having come to the same conclusion, independently, I would be
very interested in seeing a reference on this.  (Another
way of looking at it is that everything that exists is
categorizable as matter, energy, and structure, where
structure is equivalent to information.)

Some people of a more philosophical bent would say that
it is the hole in the center of the wheel that makes
it useful.

Scott

throopw@xyzzy.UUCP (Wayne A. Throop) (12/17/88)

> tmoody@sjuvax.UUCP (T. Moody)
> If you don't understand "machine"
> in a way that lets you distinguish between, say, trees and clocks, then
> you are taking this word on a long holiday.

This is a little unclear to me.  Do you mean that not to place a boundary
on the definition of "machine" between trees and clocks is wrong-headed, or
do you mean that the definition of machine must be segmented in such a way
that a boundary (of what fuzziness?) must exist between trees and clocks?

But question: *Is* there any difference in the way trees and clocks
operate (except for the obvious difference in complexity)?

And further: what about between trees and humans?

--
The real problem is not whether machines think, but whether men do.
                                        --- B.F. Skinner
-- 
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (12/17/88)

From article <26161@teknowledge-vaxc.ARPA>, by sworking@teknowledge-vaxc.ARPA (Scott Workinger):
" In article <42571@linus.UUCP> bwk@mbunix (Kort) writes:
" 
" >(There is at least one
" >school of physics which proposes that the Universe is composed
" >of matter, energy, and information.)
" 
" Having come to the same conclusion, independently, I would be
" very interested in seeing a reference on this.  (Another

I have seen something by Erwin Schrödinger sort of in this
direction.  Sorry I can't give a specific reference.
		Greg, lee@uhccux.uhcc.hawaii.edu

sbigham@dukeac.UUCP (Scott Bigham) (12/17/88)

In article <Dec.15.11.41.17.1988.12131@klaatu.rutgers.edu> josh@klaatu.rutgers.edu (J Storrs Hall) writes:
>                                              It turns out that
>there are plenty of reasons to believe that (a) intelligence
>is possible in computers of sufficient memory and processing 
>power; (b) it will be possible for us to create such intelligent
>programs after a sufficient investment in software capital; and
>(c) the knowledge gained in attempting this will be interesting
>and useful.  

What are they?  I'd love to know, because I'm inclined not to believe any of
the above (except for (c), which I firmly support).  Inquiring minds wanna
know (computers rarely if ever inquire...)

						sbigham
-- 
Scott Bigham                         "The opinions expressed above are
Internet sbigham@dukeac.ac.duke.edu   (c) 1988 Hacker Ltd. and cannot be
USENET   sbigham@dukeac.UUCP          copied or distributed without a
...!mcnc!ecsgate!dukeac!sbigham       Darn Good Reason."

marty@homxc.UUCP (M.B.BRILLIANT) (12/17/88)

In article <4040a289.9d8d@hi-csc.UUCP>, harper@hi-csc.UUCP (Paul L.
Harper) drew a lot of FLAK for writing:

> I am continually amazed at the faith of AI "researchers"
> (programmers?). I have seen nothing whatsoever from the AI
> community that indicates there is any hope of producing
> intelligence by running instructions on computers. 
> 
> It is an incredible leap of faith, completely unfounded 
> by science, to assume that computers can obtain the human
> quality we call intelligence. Where is the scientific justification
> for the assumption? 

The probable reason why he drew so much FLAK is that there is some
truth in what he said, some elements that are unfair, and much that is
unwelcome and confusing to some people.

It is a matter of faith to some that the activities of living beings
are produced by the interaction of objectively detectable matter and
energy, and therefore technology can come progressively closer to
understanding and reproducing them.

It is a matter of faith to others that the same activities of living
beings depend on something that is not explainable by the interaction
of objectively detectable matter and energy, and therefore technology
will always fall short of understanding and reproducing them.

Two points in the preceding statements are worth noting:

First, the philosophical bases are diametrically opposite.

Second, the conclusions are totally consistent and cannot be
distinguished by observation.

To a perfect scientist or engineer, it does not matter whether you say
that we will come ever closer to simulating life, or that we will never
perfectly simulate life.  Both are good working hypotheses.

Once we realize that the argument is essentially religious, we will
stop arguing about it.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858		Home (201) 946-8147
Holmdel, NJ 07733	att!houdi!marty1

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

josh@klaatu.rutgers.edu (J Storrs Hall) (12/17/88)

Marty Brilliant writes:
    In article <4040a289.9d8d@hi-csc.UUCP>, harper@hi-csc.UUCP (Paul L.
    Harper) drew a lot of FLAK for writing:
    > I am continually amazed at the faith of AI "researchers"
     ...

    The probable reason why he drew so much FLAK is that there is some
    truth in what he said, some elements that are unfair, and much that is
    unwelcome and confusing to some people.

The real reason he drew so much "FLAK" is that he wanted to, and said
exactly what would press the most buttons on an AI devotee.

Even if every word he said were true, and AI really a "leap of faith"
religious creed, his action in posting would be equivalent to running
into a church service and shouting denunciations of the worshippers.
If AI is a religion, why not let us practice it in peace?  --and of
course if it is not, the denunciations are stupid as well as malicious.

--JoSH

jeff@aipna.ed.ac.uk (Jeff Dalton) (12/18/88)

In article <4040a289.9d8d@hi-csc.UUCP> harper@hi-csc.UUCP (Paul L. Harper) writes:
>It is an incredible leap of faith, completely unfounded 
>by science, to assume that computers can obtain the human
>quality we call intelligence. Where is the scientific justification
>for the assumption? 

Where is anyone assuming this?  People are (1) trying to see what
machines can do, (2) wondering whether they might be able to do
sufficiently complex, etc. things so as to be considered intelligent,
(3) noticing that computers are pretty general beasts and that so
far we haven't found any hard limits that say they must fall short
of intelligence, (4) pointing out what they think are mistakes in
arguments that AI won't work.

Why should no one ever try to make machines that can identify objects in
images or perform other new and interesting tasks until they are sure it
could be done?  Very little AI work is directed at "real intelligence".
It's clear that we're very far from that and not clear that it's even
possible.  But artificial insects, for example, might be another matter.

rjc@aipna.ed.ac.uk (Richard Caley) (12/18/88)

In article <4040a289.9d8d@hi-csc.UUCP> harper@hi-csc.UUCP (Paul L. Harper) writes:

>It is an incredible leap of faith, completely unfounded 
>by science, to assume that computers can obtain the human
>quality we call intelligence. Where is the scientific justification
>for the assumption? 

Where is the scientific justification for the opposite assumption?  It is
an open question, and one certainly gets no answers by assuming it is
impossible, so the best strategy seems to be to assume it is possible
and try to disprove this, since that could be done by exhibiting one facet
of human "intelligence" which is provably incomputable.

There is no "scientific" justification for the assumption that the
entire universe obeys those laws we have observed locally on the earth;
it is an assumption.  One must assume something to get anywhere.

-- 
	rjc@uk.ac.ed.aipna	AKA	rjc%uk.ac.ed.aipna@nss.cs.ucl.ac.uk

"We must retain the ability to strike deep into the heart of Edinburgh"
		- MoD

dc@gcm (Dave Caswell) (12/19/88)

In article <4639@homxc.UUCP> marty@homxc.UUCP (M.B.BRILLIANT) writes:

(ONE)
.It is a matter of faith to some that the activities of living beings
.are produced by the interaction of objectively detectable matter and
.energy, and therefore technology can come progressively closer to
.understanding and reproducing them.
 
(TWO)
.It is a matter of faith to others that the same activities of living
.beings depend on something that is not explainable by the interaction
.of objectively detectable matter and energy, and therefore technology
.will always fall short of understanding and reproducing them.

Is it safe to say that the people who don't think there is any faith
involved in believing (ONE) think that it is at least possible we
shall someday attain machine intelligence?  Is there any reason to
think that believing in (ONE) is an "act of faith"?  Is there any
reason to believe in (TWO) except faith?  My mind is made up; I
wouldn't have used the words "a matter of faith" in (ONE).


-- 
Dave Caswell
Greenwich Capital Markets                             uunet!philabs!gcm!dc

fransvo@htsa.uucp (Frans van Otten) (12/19/88)

In article <2804@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
>From article <840@quintus.UUCP>, by ok@quintus.uucp (Richard A. O'Keefe):
>" ...  For example, it is not the case that all people can be
>" hypnotised, whereas if a subject can be hypnotised by one mesmerist he or
>" she can usually be hypnotised by another.  This ought to suggest to us that
>" just maybe hypnosis might be something that subjects do, rather than something
>" that hypnotists do.
>
>Some bottles are harder to open than others, therefore bottles open themselves.
>Some programs are harder to write than others, so computers have free
>will.  (This method of argument has great possibilities.)
>
>		Greg, lee@uhccux.uhcc.hawaii.edu

Let's define hypnosis as 'being in an altered state of consciousness'.  Then
the question becomes: how does someone get into this state?  There are some
possibilities:

  1) you do it yourself,
  2) you agree with someone that (s)he helps you,
  3) someone having 'hypnotizing power' 'does it to you' before you are aware
     it's happened.

Note on 3): In my opinion, you must have some vulnerable spot in your
personality which the other person (knowing it or not) uses.  I have never met
someone without such a spot.  By the way, the hypnotizing person doesn't
need to be aware of what (s)he is doing; (s)he may very well be doing this
unconsciously (because (s)he has some psychological trauma/.../...)

Also, from personal experience I believe that everyone can be hypnotized. It
is true that this is very hard sometimes; even someone who wants to get
hypnotized may have blocks on other (unconscious/subconscious) levels.
-- 
                         Frans van Otten
                         Algemene Hogeschool Amsterdam
			 Technische en Maritieme Faculteit
                         fransvo@htsa.uucp

fransvo@htsa.uucp (Frans van Otten) (12/19/88)

In article <4040a289.9d8d@hi-csc.UUCP> harper@hi-csc.UUCP (Paul L. Harper) writes:
>I am continually amazed at the faith of AI "researchers"
>(programmers?). I have seen nothing whatsoever from the AI
>community that indicates there is any hope of producing
>intelligence by running instructions on computers. 
>
>It is an incredible leap of faith, completely unfounded 
>by science, to assume that computers can obtain the human
>quality we call intelligence. Where is the scientific justification
>for the assumption? 

Where is the (scientific?) justification for the above assumption?
-- 
                         Frans van Otten
                         Algemene Hogeschool Amsterdam
			 Technische en Maritieme Faculteit
                         fransvo@htsa.uucp

markh@csd4.milw.wisc.edu (Mark William Hopkins) (12/21/88)

In article <17@ubc-cs.UUCP> morrison@grads.cs.ubc.ca (Rick Morrison) writes:
>In article <4264@homxc.UUCP> marty@homxc.UUCP (M.B.BRILLIANT) writes:
>> ...  Do we know what ``artificial intelligence'' is?  Or are we just talking
>> about something we don't know anything about? ...  Just somebody
>>tell us what the answer is.
>
>The answer is "who cares?" Does anyone in this group actually _do_ AI?

Well, if Natural Language processing counts as AI, the answer is yes.

>I'm beginning to think that the most appropriate definition of AI is
>"a discipline concerned with the uninformed examination of unresolvable 
>philosophical and psychological issues."

I do believe that human intelligence is beyond human comprehension.  The
reason is that the day we learn about our intelligence as it currently
exists, we'll experience a quantum leap in our own intelligence AS A RESULT.
So we'll always be one step behind ourselves.

None of this rules out AI, though ... especially if AI were to be
automated ;-).

Another, related posting described the impossibility of AI by using the
classic Sentimentalist argument: a machine won't be able to recognize a Duck
when it sees one, so that it could "throw" it a piece of bread out of
humanitarian concern.

The machine's response: "I see no cause for that kind of insult! Really!
Comparing *ME* to a human!"

Signals in our nervous system travel at about 700 MPH (if my memory is 
correct).  Signals in Silicon travel about 1 *MILLION* times faster.
It's not whether AI is possible, no, the question is how long it will be
before the machine's capacity exceeds our own, as it will.
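
A rough check of that ratio, assuming (not from the posting) that on-chip
electrical signals propagate at roughly half the speed of light:

    # Back-of-the-envelope comparison of the two signal speeds.
    MPH_TO_MS = 0.44704
    nerve_speed = 700 * MPH_TO_MS        # the poster's figure, about 313 m/s
    silicon_speed = 0.5 * 3.0e8          # assumed fraction of c, about 1.5e8 m/s
    print(silicon_speed / nerve_speed)   # about 5e5, i.e. within a factor of
                                         # two of "one million times faster"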

Being scared of the possibility is no excuse for denying its inevitability,
because the machine is our own creation and our own tool, albeit an intelligent
one.  But we don't deify cars because they travel faster than us (well, normal
people don't, at least), so nobody would ever deify a human artifact that
happened to think faster and better than us.

And as for unresolvable philosophical problems such as the one behind Goedel's
Theorem: you may not realise it, but even there, there is a tiny crack that may
allow for a resolution.  You see, nobody ever showed that the negation of
the key statement which Goedel used in his theorem is not provable.  In fact,
the Number Theory that Goedel looked at may actually be both complete and
consistent if that negation IS provable within the original theory itself.
What Number Theory would then be is "omega-inconsistent" ... in which case you
would have infinite numbers, infinitesimals and the like rammed down your
throat ... a possibility I savor, and a perfect resolution to the
philosophical dilemma.

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (12/22/88)

In article <4040a289.9d8d@hi-csc.UUCP> harper@hi-csc.UUCP (Paul L. Harper) writes:
>I am continually amazed at the faith of AI "researchers"
>(programmers?). I have seen nothing whatsoever from the AI
>community that indicates there is any hope of producing
>intelligence by running instructions on computers. 
>
>It is an incredible leap of faith, completely unfounded 
>by science, to assume that computers can obtain the human
>quality we call intelligence. Where is the scientific justification
>for the assumption? 

Let me try to briefly answer your question.  
The assumption derives from the following assumptions:

1. Intelligence is a function of the human brain.
2. The brain is a physical object and its functioning is
   explainable in terms of its organization and the laws of physics.
3. Given sufficient understanding of the composition and organization
   of the brain, and sufficient progress in technology, it should
   be possible to artificially create intelligence (or for Gilbert
   Cockton, a system capable of acquiring intelligence through
   appropriate socialization).

As far as creating a complete intelligence by "executing instructions",
I would have to know what you mean by that term.  If you are talking
about a Turing machine, then I would say that if you had one fast enough,
you could probably simulate all of the functions of the massively
parallel brain serially and create such an intelligence, but I seriously
doubt if that is the way it will be done (at least at first), since that
would require more knowledge than other approaches.

Of course if you do not accept the 3 assumptions (for example, if you
believe that intelligence is a function of man's spirit rather than
the brain) then you have a logical reason for rejecting the idea
of an artificial intelligence.  (Even that objection might be met
with the notion that given a sufficiently complex machine, a spirit
might be found to inhabit it, as well, although that is truly a leap
of faith!)

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (12/22/88)

In article <4639@homxc.UUCP> marty@homxc.UUCP (M.B.BRILLIANT) writes:
>
>It is a matter of faith to others that the same activities of living
>beings depend on something that is not explainable by the interaction
>of objectively detectable matter and energy, and therefore technology
>will always fall short of understanding and reproducing them.
>
>Two points in the preceding statements are worth noting:
>
>First, the philosophical bases are diametrically opposite.
>
>Second, the conclusions are totally consistent and cannot be
>distinguished by observation.
>

Perhaps I can agree with your first statement but not your second.
If indeed an artificial intelligence is created, that action will
negate the premise.  Almost all unknown phenomena were initially given 
supernatural explanations.  The history of science traces out a point by point
collapse of such notions (although there are still stragglers
who refuse to give up even the notion of the flat earth).  I feel that
it would be better if people would avoid staking their religious beliefs
on such questions, but I am sure religion will survive the creation
of artificial persons.  Wouldn't it be interesting if these artificial
persons were also religious?  (Maybe I'll write an SF story about this.)

bickel@nprdc.arpa (Steven Bickel) (12/23/88)

In article <1901@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
> {...}  for example, if you
>believe that intelligence is a function of man's spirit rather than
>the brain) then you have a logical reason for rejecting the idea
>of an artificial intelligence.  (Even that objection might be met
>with the notion that given a sufficiently complex machine, a spirit
>might be found to inhabit it, as well, although that is truly a leap
>of faith!)

  An interesting twist that I have often pondered is that intelligent
  life might require a machine (brain etc.) within an energy field
  (i.e. electromagnetic or some derivative that we may or may not currently
  be able to measure).  Just a metaphysical thought. :-)

Steve Bickel

anderson@secd.cs.umd.edu (Gary Anderson) (12/23/88)

The successful application of artificial intelligence tools
and techniques will increase the degree to which humans depend
on machines.  Clearly, humankind will be able to construct
smart, productive, efficient machines to carry out many important tasks.
These machines will  become  superior to humans 
along certain dimensions (i.e. computational speed, physical strength, agility,
stamina) in specific applications.  As the performance of these machines 
improves, and their use becomes more pervasive, we will come to
rely on them more and more for our survival and comfort.
These ubiquitous machines will touch the lives of nearly everyone and
they will  be a source of great anxiety for many people.  


As these machines become more complex and more versatile,  
their behavior in new and unanticipated situations will become more 
difficult to predict. Even hindsight understanding  of the behavior 
of these machines in new situations may be very difficult to achieve.
I wonder if  even the developers of these machines will be able to 
distinguish  free will from poorly understood programmed behavior.
If the machines  become so smart and so complex that we cannot 
easily predict their behavior in new situations, we 
will have no recourse but to ask "them" what they would do.


It is my impression that a major reason for hoping to observe consciousness
and free will in our machines is that observing these anthropomorphic 
characteristics  would ease our understandable anxiety about depending on 
the behavior of machines we don't fully understand.
Consciousness would provide another 
channel for communicating with and "understanding"  
( or at least *ir*rationalizing  :-} )
the behavior of these complicated machines.

I wonder how effective this channel would be in reducing anxiety about
smart machines, and how useful it would be in predicting, monitoring 
and controlling their behavior. The results are mixed for human-human
exploitation of this channel. Additionally,  I think there would be some 
difficult moral/ethical issues associated with  manipulating the actions of 
an agent with "free will".

-- 
              Gary S. Anderson               | Probity, sincerity, candor,
                                             | conviction, the idea of duty,
              +-+-+-+-+-+-+-+-+-+-+-+-+      | are things which, mistaken, may
      email:  anderson@secd.cs.umd.edu       | become hideous, but which even
 U.S. Snail:  University of Maryland         | though hideous, remain great;
              Department of Economics        | their majesty, peculiar to the
              Room 3147c Tydings Hall        | human conscience, continues in
              College Park, MD 20742         | all their horror; they are
      Voice:  (301)-454-6356                 | virtues with a single vice ---
                                             |      error.
---------------------------------------------- Victor Hugo, Les Miserables

ned@h-three.UUCP (ned) (12/24/88)

In article <16@csd4.milw.wisc.edu>, markh@csd4.milw.wisc.edu (Mark William Hopkins) writes:
> I do believe that human intelligence is beyond human comprehension.  The
> reason is that the day we learn about our intelligence as it currently
> exists, we'll experience a quantum leap in our own intelligence AS A RESULT.
> So we'll always be one step behind ourselves.

I don't think that the *mechanics* of human intelligence are beyond human
comprehension or will change due to any increase in knowledge.  Of course,
the brain will evolve, but I think we have time to figure it out before
our information becomes obsolete. :-)

> Signals in our nervous system travel at about 700 MPH (if my memory is 
> correct).  Signals in Silicon travel about 1 *MILLION* times faster.
> It's not whether AI is possible, no, the question is how long it will be
> before the machine's capacity exceeds our own, as it will.

What if it turns out that chemical processes are practically the only means
of creating human-like intelligence?  If so, our machines may not be any
faster or more capable than the brain.

-- Ned Robie		uunet!h-three!ned

harper@hi-csc.UUCP (Paul L. Harper) (12/27/88)

(Andy Lowry @ Columbia University Department of Computer Science)
writes:
> In article <4040a289.9d8d@hi-csc.UUCP> harper@hi-csc.UUCP (Paul L. Harper) writes:
> >I am continually amazed at the faith of AI "researchers"
> >(programmers?). I have seen nothing whatsoever from the AI
> >community that indicates there is any hope of producing
> >intelligence by running instructions on computers. 
> >
> >It is an incredible leap of faith, completely unfounded 
> >by science, to assume that computers can obtain the human
> >quality we call intelligence. Where is the scientific justification
> >for the assumption? 
> 
> I am continually amazed at the closed-mindedness of certain
> individuals.  On the contrary... it is an incredible leap of faith (in
> my book) to assume that this goal is unattainable.  That is, I cannot
> conceive of ANY argument that intelligence cannot be fabricated other
> than one based on a belief in God.  And that is a belief that I do not
> hold any part of, and that I consider an "incredible leap of faith."
> 
> That I believe "true" artificial intelligence to be attainable does
> not mean that I necessarily believe it will be attained.  That depends
> on a fair amount of luck, among other things.  It does mean that I
> consider it a worthy goal for research effort.
> 
> In fact, even if I were not so convinced that the goal can,
> theoretically, be achieved, I would still consider it a worthy
> pursuit.  How many programs of research are undertaken with full
> confidence in their eventual success?  Attempting to obtain a goal is
> certainly one valid way to go about seeing how attainable it is.

It is interesting that the response to my posting makes no
attempt at answering the major query, i.e. what about scientific 
justification? The only justification offered is exactly the one I have
complained about: "I have faith".  I will grant that the feeling that "I'm on 
the right track" or something similar is viable in scientific pursuits, 
especially for us humans. But after so many years of AI promises, little
of consequence seems to have been produced.

A couple of questions:
What is ' "true" artificial intelligence ' ?
Will "true" artificial intelligence have consciousness?

              Paul

harper@hi-csc.UUCP (Paul L. Harper) (12/28/88)

J Storrs Hall @ Rutgers Univ., New Brunswick, N.J.
writes (in response to my posting):

> It turns out that
> there are plenty of reasons to believe that (a) intelligence
> is possible in computers of sufficient memory and processing 
> power; (b) it will be possible for us to create such intelligent
> programs after a sufficient investment in software capital; and
> (c) the knowledge gained in attempting this will be interesting
> and useful.  
> 
> However, it would be boring and worthless to explain this to
> Mr. Harper, so I shall not try.  Instead I will merely offer him
> the advice that if AI baffles him, ignore it, and we'll all be
> better off.                      

Whew! Rather than respond to the tone of the above, I'd like
to ask what the reasons are for believing (a) and (b) above.

What are the foundations for believing in the above?
Is this based on a belief in functional AI (if the functions that
make up intelligence can be identified, in terms of input and output
relationships, then any implementation is adequate), 
or does the requirement of memory and processing power stem from
the belief that simulating the brain (to presumably a rather fine
level of detail) would be a means to attain AI?

I am not *attacking* AI; I'm just looking for scientific justification
for the many claims, etc.

                             Paul

bwk@mbunix.mitre.org (Barry W. Kort) (12/28/88)

In article <1904@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu
(Gordon E. Banks) injects an interesting thought:

 > ... I am sure religion will survive the creation
 > of artificial persons.  Wouldn't it be interesting if these artificial
 > persons were also religious?  (Maybe I'll write a SF story about this.)

If artificial persons exhibit the property of seeking new knowledge,
then they will profess a belief system which motivates their behavior.
They will profess the belief that new knowledge awaits their discovery,
and that puzzles and unsolved problems will succumb to thoughtful
lines of reasoning and meticulous methods of investigation.

Perhaps we should coin a name for this system of religious belief.
I propose we call it Science.

--Barry Kort

bwk@mbunix.mitre.org (Barry W. Kort) (12/28/88)

In article <15152@mimsy.UUCP> anderson@secd.cs.umd.edu (Gary Anderson)
speculates about future generations of thinking machines:

 > As these machines become more complex and more versatile,  
 > their behavior in new and unanticipated situations will become more 
 > difficult to predict.  Even hindsight understanding of the behavior 
 > of these machines in new situations may be very difficult to achieve.
 > I wonder if even the developers of these machines will be able to 
 > distinguish free will from poorly understood programmed behavior.
 > If the machines  become so smart and so complex that we cannot 
 > easily predict their behavior in new situations, we 
 > will have no recourse but to ask "them" what they would do.

I suspect the distinction between programmed behavior and free will
will become a fuzzy boundary.  A chess playing computer, when confronted
with a novel situation, may choose at random from a small set of alternatives.
The outcome (win or lose) may then become compiled knowledge about the
wisdom of the chosen line of play.  The next time around, the chess
machine won't be so naive, and may choose its course of action with
more conviction.  The bemused observer would be hard pressed to
distinguish free will from such random decision.
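
To make the compiled-knowledge idea concrete, here is a toy sketch --
illustrative only; the position/move bookkeeping is made up for the
example -- of a player that chooses at random among untried moves,
records the outcomes, and thereafter prefers moves that have won before:

    import random

    experience = {}          # (position, move) -> wins minus losses

    def choose(position, moves):
        # Pick a move that has won before, if any; else pick at random.
        scored = [(experience.get((position, m), 0), m) for m in moves]
        best = max(score for score, move in scored)
        if best > 0:         # compiled knowledge: play with "conviction"
            return random.choice([m for s, m in scored if s == best])
        return random.choice(moves)   # novel situation: choose at random

    def record(position, move, won):
        # Fold a game's outcome back into the experience table.
        key = (position, move)
        experience[key] = experience.get(key, 0) + (1 if won else -1)

After a few calls to record(), the same position no longer looks novel
and the choice is no longer random -- which is the fuzziness in the
boundary described above.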

So, asking them what they would do in a hypothetical situation
might generate the honest answer, "I don't know.  It depends on
whether I learn something useful by the time I have to make that
decision."

--Barry Kort

bwk@mbunix.mitre.org (Barry W. Kort) (12/31/88)

In article <4081e1ba.75f0@hi-csc.UUCP> harper@hi-csc.UUCP
(Paul L. Harper) asks:

 > What is ' "true" artificial intelligence ' ?
 > Will "true" artificial intelligence have consciousness?

I define true artificial intelligence as intelligence residing
on a substrate other than a biologically grown carbon-based neural
network.  (Possibly a silicon substrate fabricated with currently
available technology.)  

I believe that an intelligent system which pursues the goal of
knowledge acquisition over time will exhibit behavior indistinguishable
from that of a conscious sentient being with transient emotional states.

--Barry Kort

peru@soleil.UUCP (Dave Peru) (01/04/89)

"But essential abilities for intelligence are certainly:

     to respond to situations very flexibly;
     to take advantage of fortuitous circumstances;
     to make sense out of ambiguous or contradictory messages;
     to recognize the relative importance of different elements of a situation;
     to find similarities between situations despite differences which may
          separate them;
     to draw distinctions between situations despite similarities which may
          link them;
     to synthesize new concepts by taking old concepts and putting them
          together in new ways;
     to come up with ideas which are novel.

 Here one runs up against a seeming paradox.  Computers by their very nature
 are the most inflexible, desireless, rule-following of beasts.  Fast though
 they may be, they are nonetheless the epitome of unconsciousness.  How, then,
 can intelligent behavior be programmed?  Isn't this the most blatant of
 contradictions in terms?  One of the major theses of this book is that it is
 not a contradiction at all.  One of the major purposes of this book is to urge
 each reader to confront the apparent contradiction head on, to savor it, to
 turn it over, to take it apart, to wallow in it, so that in the end the reader
 might emerge with new insights into the seemingly unbreachable gulf between 
 the formal and the informal, the animate and the inanimate, the flexible and
 the inflexible.

 This is what Artificial Intelligence (AI) research is all about.  And the 
 strange flavor of AI work is that people try to put together long sets of
 rules in strict formalisms which tell inflexible machines how to be flexible."

Taken from p.26 of "Goedel, Escher, Bach..." by D.R. Hofstadter.

Does anyone disagree with this?

Does anyone strongly disagree if I include in the definition of "intelligence"
the ability to recognize a paradox?

fransvo@htsa.uucp (Frans van Otten) (01/05/89)

In article <552@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>"But essential abilities for intelligence are certainly:
>
>     to respond to situations very flexibly;
>     to take advantage of fortuitous circumstances;
>     to make sense out of ambiguous or contradictory messages;

[ etc. ]

>Does anyone disagree with this?

I disagree. This is not a definition of intelligence, nor are the listed
abilities essential for intelligence. These are mere examples. I really
stick to my definition of intelligence:

  ***  Intelligence: The ability to draw a conclusion.

  ***  Needed:        A database and an algorithm to reach a conclusion
                      based on the data.

  ***  Improvements:  The ability to change the database.
		      The conclusion-algorithm being part of the database,
		      so that the system can add/change algorithms.

I would like to know how other people think about my definition.
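
To make the definition concrete, here is a minimal sketch (illustrative
only; the rule format and the name forward_chain are invented for the
example) of a database, an algorithm that draws conclusions from it, and
the suggested improvement of keeping the algorithm in the database
itself so the system can change it:

    def forward_chain(db):
        # Apply every if-then rule until no new conclusion is added.
        changed = True
        while changed:
            changed = False
            for condition, conclusion in db["rules"]:
                if condition <= db["facts"] and conclusion not in db["facts"]:
                    db["facts"].add(conclusion)
                    changed = True
        return db["facts"]

    db = {
        "facts": {"has_feathers", "lays_eggs"},
        "rules": [({"has_feathers"}, "is_bird"),
                  ({"is_bird", "lays_eggs"}, "builds_nest")],
        "algorithm": forward_chain,   # the conclusion-algorithm is data too
    }

    db["rules"].append(({"is_bird"}, "can_fly"))  # the system edits its database
    print(db["algorithm"](db))                    # draw conclusions from the data

Because the algorithm is just another entry in the database, the system
is free to replace it with a different one, which is the second
improvement listed above.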

-- 
                         Frans van Otten
                         Algemene Hogeschool Amsterdam
			 Technische en Maritieme Faculteit
                         fransvo@htsa.uucp

peru@soleil.UUCP (Dave Peru) (01/07/89)

Please consider the following thoughts of three people concerning the physics
of the mind.  Notice the difference between the first person and the next two.

COMPUTER SCIENTIST:

In the book "The Society of Mind" Marvin Minsky writes (p.50):

"When people have no answers to important questions, they often give some
 anyway.

      What controls the brain?  The Mind.
      What controls the mind?   The Self.
      What controls the Self?   Itself.

 To help us think about how our minds are connected to the outer world, our
 culture teaches schemes like this:

      (diagram ...)

 This diagram depicts our sensory machinery as sending information to the
 brain, wherein it is projected on some inner mental movie screen.  Then,
 inside that ghostly theater, a lurking Self observes the scene and then
 considers what to do.  Finally, that Self may act--somehow reversing all
 those steps--to influence the real world by sending various signals back
 through yet another family of remote-control accessories.

 This concept simply doesn't work.  It cannot help for you to think that
 inside yourself lies someone else who does your work.  This notion of
 "hommunculus"--a little person inside each self--leads only to a paradox
 since, then, that inner Self requires yet another movie screen inside itself,
 on which to project what *it* has seen!  And then, to watch that
 play-within-a-play, we'd need yet another Self-inside-a-Self--to do the
 thinking for the last.  And then this would all repeat again, as each new
 Self requires yet another one to do its job!

     The idea of a single, central Self doesn't explain anything.  This
     is because a thing with no parts provides nothing that we can use
     as pieces of explanation!

 Then why do we so often embrace the strange idea that what we do is done
 by Someone Else--that is, our Self?  Because so much of what our minds
 do is hidden from the parts of us that are involved with verbal
 consciousness."

MATHEMATICIAN/PHYSICIST/ASTRONOMER:

In the book "Bridges To Infinity" Michael Guillen (Ph.D in physics, mathema-
matics, and astronomy from Cornell University) writes (p.98):

"In his thirteen-page manuscript, "All Numbers, Great and Small," Conway
 begins as Frege began, with a few primitive ideas, including the null set
 and two rules.  The first rule, Conway's logical definition of a number,
 can be visualized in terms of encyclopedia volumes lined up in order in
 a library shelf.  According to the definition, a volume's place in the
 lineup, its number, can be inferred from the set of volumes on its left
 and the set of volumes on its right.  We could determine where volume nine
 belongs, for instance, simply by locating that place where volumes zero
 through eight are on the left and volumes ten through infinity are on the
 right.  Therefore, every volume, every number, has its own niche, determined
 uniquely by the left and right sets.  That's the thrust of Conway's first
 rule.

 His second rule, again explained here in terms of a set of encyclopedias,
 decrees that one number, such as 5, is smaller than (or equal to) another
 number, such as 9, if two things are true simultaneously: (A) all the volumes
 to the left of the first number (5) are less than the second number (9),
 and (B) all the volumes to the right of the second number (9) are bigger
 than the first number (5).  This rule is necessary in order for Conway
 to impose an order on the numbers he creates, beginning with zero: Zero
 is less than 1, so it precedes 1; 1 is less than 2, so it precedes 2; and
 so forth.

 As he does not assume the existence of any numbers to begin with, Conway,
 like Frege, has only the null set with which to start creating the sequence
 of natural numbers.  Consequently, Conway first contemplates the number
 whose left and right sets are both null sets, written symbolically as {}:{}.
 He names this *zero*.  That is, in Conway's theory, as in Frege's, nothingness
 is the most primitive realization of nothing.

 After creating the number zero, Conway has two sets with which to continue
 creating numbers: the null set, {}, and the set containing zero, {0}. 
 Conway identifies the number 1 as the number whose left set contains zero
 and whose right set is the null set.  Thus, at this point in Conway's genesis,
 the number 1 is flanked to the left by nothingness and to the right by
 nothing.  To the left is potential already realized (as zero), and to the
 right is potential not yet realized.

 At each point in his creation, Conway always selects the next number as
 the number whose left set contains all the previously created
 numbers and whose right set is the null set.  It's as though he were being
 guided by an image of those encyclopedias.  At each point, the newly created
 volume is placed to the right of all those volumes already shelved and
 to the left of empty space, which in this analogy has the aspect of the
 physicist's vacuum in representing the potential of numbers not yet brought
 into being.  By proceeding in this fashion indefinitely, Conway creates
 the entire sequence of natural numbers.

 From there he goes on, however, to create an infinity of in-between numbers,
 such as the number whose left set contains zero, {0}, and whose right set
 contains one through infinity {1, 2, 3, ...}.  This defines a number somewhere
 between zero and one.  Thus the standard set of encyclopedias, the natural
 numbers, is embellished by an interminable number of in-between volumes.
 And it doesn't stop there.

 Pursuing the logic of his method, Conway is able to create between in-between
 numbers, then numbers between *these*, and so on, literally ad infinitum.
 The result is limitless hierarchies of in-between numbers, never before
 named in mathematics.

 Conway's theory has ineffable graphic implications as well.  Traditional
 mathematical wisdom has it that a ruler's edge, a number line, is a blur
 of points, each of which can be labeled with either a whole number, a
 fraction, or an irrational number such as .1345792 ..., where the string
 of digits goes on forever.  All these points (or their numerical labels)
 together are imagined to form a continuum, with no space between adjacent
 points.  Conway's theory, however, asks us to imagine numbers that fall
 somehow between unimaginable cracks in this blur of points, and between
 the cracks left behind by those numbers, and so on and so on.  With his
 theory, Conway has made credible what many persons before him had merely
 speculated about: there is conceptually no limit to how many times an object
 can be divided.

 Conway's "All Numbers, Great and Small" shows off the boundless potential
 of the null set, but also of the human mind.  Human creative energy, like
 nothing, isn't anything if it isn't potential.  It is also an indomitable
 part of being alive, as countless experiments have documented.  People
 who are deprived of their senses by being floated in silent, dark tanks
 of water warmed to body temperature will hallucinate.  It is as though
 the human mind will not be stilled of its propensity to make something
 of nothing even, or especially, when immersed in nothingness.

 Like a physicist's vacuum, the human mind can be induced to create thoughts
 that come seemingly out of nowhere.  Mathematicians over the years have
 documented this common phenomenon.  The German Carl Friedrich Gauss recalled
 that he had tried unsuccessfully for years to prove a particular theorem
 in arithmetic, and then, after days of not thinking about the problem,
 the solution came to him "like a sudden flash of lightning."  The French
 mathematician Henri Poincare, too, reported working futilely on a problem
 for months.  Then one day while conversing with a friend about a totally
 unrelated subject, Poincare recalled that "... the idea came to me without
 anything in my former thoughts seeming to have paved the way for it."

 In this sense, the human mind is the real null set in Frege's and Conway's
 number theories; the mathematical null set is but a subordinate entity
 created after the mind's self-image."

PHYSICIST:

In the book "The Turning Point" Fritjof Capra (Ph.D in high-energy physics
from University of Vienna) writes (p.101):

"While the new physics was developing in the twentieth century, the
 mechanistic Cartesian world view and the principles of Newtonian physics
 maintained their strong influence on Western scientific thinking, and even
 today many scientists still hold to the mechanistic paradigm, although
 physicists themselves have gone beyond it.

 ...

 In biology the Cartesian view of living organisms as machines, constructed
 from separate parts, still provides the dominant conceptual framework.
 Although Descartes' simple mechanistic biology could not be carried very
 far and had to be modified considerably during the subsequent three hundred
 years, the belief that all aspects of living organisms can be understood
 by reducing them to their smallest constituents, and by studying the
 mechanisms through which these interact, lies at the very basis of most
 contemporary biological thinking.  This passage from a current textbook
 on modern biology is a clear expression of the reductionist credo: 'One of
 the acid tests of understanding an object is the ability to put it together
 from its component parts.  Ultimately, molecular biologists will attempt
 to subject their understanding of cell structure and function to this sort
 of test by trying to synthesize a cell.'

 Although the reductionist approach has been extremely successful in biology,
 culminating in the understanding of the chemical nature of genes, the basic
 units of heredity, and in the unraveling of the genetic code, it nevertheless
 has its severe limitations.  As the eminent biologist Paul Weiss has
 observed:

     We can assert definitely ... on the basis of strictly empirical investiga-
     tions, that the sheer reversal of our prior analytic dissection of the
     universe by putting the pieces together again, whether in reality or
     just in our minds, can yield no complete explanation of the behavior
     of even the most elementary living system.

 This is what most contemporary biologists find hard to admit.  Carried
 away by the success of the reductionist method, most notable recently in
 the field of genetic engineering, they tend to believe that it is the only
 valid approach, and they have organized biological research accordingly.
 Students are not encouraged to develop integrative concepts, and research
 institutions direct their funds almost exclusively toward the solution
 of problems formulated within the Cartesian framework.  Biological phenomena
 that cannot be explained in reductionist terms are deemed unworthy of
 scientific investigation.  Consequently biologists have developed very
 curious ways of dealing with living organisms.  As the distinguished biologist
 and human ecologist Rene Dubos has pointed out, they usually feel most at
 ease when the thing they are studying is no longer living.

 ...

 An extreme case of integrative activity that has fascinated scientists
 throughout the ages but has, so far, eluded all explanation is the phenome-
 non of embryogenesis--the formation and development of the embryo--which
 involves an orderly series of processes through which cells specialize
 to form the different tissues and organs of the adult body.  The interaction
 of each cell with its environment is crucial to these processes, and the
 whole phenomenon is a result of the integral coordinating activity of the
 entire organism--a process far too complex to lend itself to reductionist
 analysis.  Thus embryogenesis is considered a highly interesting but quite
 unrewarding topic for biological research.

 ...

 Transcending the Cartesian model will amount to a major revolution in medical
 science, and since current medical research is closely linked to research
 in biology--both conceptually and in its organization--such a revolution
 is bound to have a strong impact on the further development of biology."

***

I think it is quite interesting that "The Turning Point" was published
before "The Society of Mind" in reference to Fritjof Capra's comment,
"and even today many scientists still hold to the mechanistic paradigm."

Paradoxically, these three people's thoughts may sound unrelated.  It is up
to you to decide.  Any comments?

kevinc@auvax.UUCP (Kevin "auric" Crocker) (01/07/89)

In article <15152@mimsy.UUCP>, anderson@secd.cs.umd.edu (Gary Anderson) writes:
>The successful application of  artificial intelligence tools 
>and techniques will increase the degree to which humans depend
>on machines. Clearly, human kind will be able to construct 
>smart, productive, efficient machines to carry out many important tasks.
>These machines will  become  superior to humans 
>along  certain dimensions (ie computational speed, physical strength, agility,
>stamina) in specific applications.  As the performance of these machines 
>improves, and their use becomes more pervasive, we will come to
>rely on them more and more for our survival and comfort.
>These ubiquitous machines will touch the lives of nearly everyone and
>they will  be a source of great anxiety for many people.  


I couldn't help but have visions of Isaac Asimov's robot and Foundation
series flitter through my consciousness as I read the above.

It sounds like such a nice productive arena to have all those robots
doing all the tedious, meaningless, physically trying tasks that we
humans really don't want to do.

Hmm, doesn't sound a whole lot beyond the primitive first stages of
computers that we have now, what with Artificial Intelligence, and - I
know the dirty words here - Expert systems.

The almost endurable world that would permit humans to exercise their
true `right' - devotion to thinking, and pleasure.

Should I really include a |=#^ here or just a :-).

Kevin Crocker
-- 
Kevin "Auric" Crocker @Athabasca University {alberta ncc}auvax!kevinc

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (01/07/89)

From article <558@soleil.UUCP>, by peru@soleil.UUCP (Dave Peru):
" ...
" Paradoxically, these three people's thoughts may sound unrelated.  It is up
" to you to decide, any comments?

Yes.  Guillen (not Conway) doesn't make sense, and Minsky's and
Capra's views seem to be in contradiction -- Minsky urging analysis
into parts and Capra denigrating it.  In the quoted passage from
Capra, I believe one can detect some confusion among:

(1) analysis into component parts
(2) analysis into independently acting component parts (= Cartesianism?)
(3) analysis in terms of more fundamental entities (= reductionism)

It's hard for me to see that there can be any real objection to (1).

I have been interested in analogues to the assumption of orthogonal
axes, (2), for a long time, but have been unable to find any general
discussions of the matter.  Maybe someone can provide a reference?
Here's a little example of this sort of reasoning from my own field.  In
1783, in Elements of Phonetics, Geoffrey Holder pointed out that
(voiceless) p, t, k are similar to (voiced) b, d, g, except for the
action of the vocal cords, and that the latter are similar to (nasal)
m, n, ng, except for the passage of air through the nose.  He argued,
on this basis, that there must exist in some language voiceless
nasals -- this fills the gap in the paradigm.  (It's very much like
the prediction of new elements to fill holes in the periodic table.)
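
The gap-filling argument can be stated mechanically.  A small sketch
(illustrative only, with a deliberately tiny feature set) crosses the
two features and reports the combination that is not yet attested:

    attested = {
        ("voiceless", "oral"):  ["p", "t", "k"],
        ("voiced",    "oral"):  ["b", "d", "g"],
        ("voiced",    "nasal"): ["m", "n", "ng"],
    }
    for voicing in ("voiceless", "voiced"):
        for airflow in ("oral", "nasal"):
            if (voicing, airflow) not in attested:
                print("predicted gap:", voicing, airflow)  # voiceless nasal

The move is the same as with the periodic table: assume the dimensions
vary independently, then predict the missing cell.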

		Greg, lee@uhccux.uhcc.hawaii.edu

bwk@mbunix.mitre.org (Barry W. Kort) (01/07/89)

In article <552@soleil.UUCP> peru@soleil.UUCP (Dave Peru) asks:

 > Does anyone strongly disagree if I include in the definition of
 > "intelligence" the ability to recognize a paradox?

Contributions to knowledge are made by people who recognize
paradoxes in physics, math, philosophy, logic, etc.  But their
contributions come not from the act of recognition, but from
acts of cognition which resolve the paradox.

--Barry Kort

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (01/07/89)

In article <687@htsa.uucp> fransvo@htsa.UUCP (Frans van Otten) writes:
>abilities essential for intelligence. These are mere examples. I really
>stick to my definition of intelligence:
>
>  ***  Intelligence: The ability to draw a conclusion.
>
>  ***  Needed:        A database and an algorithm to reach a conclusion
>                      based on the data.
>
Wouldn't an if-then rule be intelligent under such a definition?
I think you need more flexibility than that and prefer Dave's
definition.  There are many different aspects of intelligence.
Intelligent entities, including humans, can rarely display
all of the properties that people would like to say constitute
intelligence.  I suppose that is why the Turing test is so elegant.

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (01/08/89)

In article <558@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>
> Conway's "All Numbers, Great and Small" shows off the boundless potential
> of the null set, but also of the human mind.  Human creative energy, like
> nothing, isn't anything if it isn't potential.  It is also an indomitable
> part of being alive, as countless experiments have documented.  People
> who are deprived of their senses by being floated in silent, dark tanks
> of water warmed to body temperature will hallucinate.  It is as though
> the human mind will not be stilled of its propensity to make something
> of nothing even, or especially, when immersed in nothingness.
>
This comment is quite naive, and I am surprised to find that someone trained
in physics can make it.  Even though all sensory input is extinguished,
the neural circuitry which is tuned to evaluate such input is still quite
active.  In the absence of input, it is the nature of such circuits to
increase their sensitivity until artifactual output is obtained.  The
situation is somewhat analogous to amplifier circuits which go into
oscillation when the gain is increased beyond a certain point.  All neural
tissues, including muscles do this.  If you denervate a living muscle, it will,
after a period of a few days, begin to twitch spontaneously.  There is nothing
mystical or infinite about such behavior.  It can be explained on a purely
mechanistic basis, and all neurologists are familiar with such behavior.
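
The amplifier analogy is easy to simulate.  A toy sketch (illustrative
only; the numbers are made up) of a feedback loop that settles when the
loop gain is below one and rings ever more strongly once the gain is
pushed past it:

    def run(gain, steps=40):
        # Feed a tiny disturbance around an inverting loop `steps` times.
        y = 0.01
        for _ in range(steps):
            y = -gain * y          # inverting feedback on each pass
        return y

    print(run(0.8))   # shrinks toward zero: the loop is stable
    print(run(1.2))   # grows without bound: the loop rings on its own

Nothing beyond pushing the gain past the critical point is needed for
output to appear in the absence of any real input.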

>PHYSICIST:
>
>In the book "The Turning Point" Fritjof Capra (Ph.D in high-energy physics
>from University of Vienna) writes (p.101):
>
>
>     We can assert definitely ... on the basis of strictly empirical investiga-
>     tions, that the sheer reversal of our prior analytic dissection of the
>     universe by putting the pieces together again, whether in reality of
>     just in our minds, can yield no complete explanation of the behavior
>     of even the most elementary living system.
>

All this says is that such systems, even the most elementary, are very 
complex.  As far as the nervous system is concerned, we are about at
the level of snails (see the work of Eric Kandel, for example) in coming
up with a more or less "complete" understanding of what is going on, at
least on a macro level.  

Capra is definitely on the lunatic fringe on this subject.  He embraces
"holistic" medicine, chiropractic, and other even more bizarre medical
quack systems which I suppose only enhance his popularity among his new
age followers.  He certainly isn't considered a touchstone among the
physicists I know.  I find little in his work to lead me to believe he
knows anything substantial about the brain or biology.

bwk@mbunix.mitre.org (Barry W. Kort) (01/08/89)

In article <687@htsa.uucp> fransvo@htsa.UUCP (Frans van Otten)
proposes a definition of intelligence:

 >  ***  Intelligence: The ability to draw a conclusion.
 >
 >  ***  Needed:        A database and an algorithm to reach a conclusion
 >                      based on the data.
 >
 >  ***  Improvements:  The ability to change the database.
 >		        The conclusion-algorithm being part of the database,
 >		        so that the system can add/change algorithms.
 >
 > I would like to know how other people think about my definition.

I would suggest amending the first part to read "The ability to
efficiently draw provably valid conclusions."  This change suggests
criteria for the improvements sought in the third part of your
definition.  Children (and adults) can easily draw invalid conclusions,
but I'm not sure I want to label such behavior as highly intelligent.

I am interested in the names of the distinct methods (algorithms) for
efficiently drawing provably valid conclusions.  So far, I have come
up with:

	Abductive reasoning
	Deductive reasoning
	Inductive reasoning
	Inferential reasoning
	Model-based reasoning
	Combinatorial logic
	Intuitionist logic

I would appreciate some discussion aimed at completing the list of
known reasoning methods, together with their definitions.

--Barry Kort

bwk@mbunix.mitre.org (Barry W. Kort) (01/08/89)

I continue to marvel at Dave Peru's fertile contributions to
our discussions in this newsgroup.  The Minsky/Conway/Capra
excerpts were most stimulating.

Reductionist (analytical) reasoning is easy to describe and
easy to teach.  But reductionism has a shortcoming.

If I give you a large, assembled jigsaw puzzle, and you examine
it piece by piece, you will end up with a pile of carefully
examined pieces.  But you will have missed seeing the big
picture hidden in the assembled puzzle.  This is called the
Forest and the Trees syndrome.  After examining every element,
you must painstakingly reassemble them to see the big picture.
When you do so, you experience a profound psychological
transformation, called Insight or Epiphany.  This rare and
treasured mental event is accompanied by a biochemical rush
of neurotransmitters such as serotonin, which yield a sense
of euphoria ("Eureka!  I have found it!")

Another place reductionism fails is in the understanding of
emergent properties of circular systems.  The simplest of
circular systems is the Furnace-and-Thermostat system.
When the furnace and thermostat are connected in a feedback
loop, the system exhibits the emergent property of maintaining
a stable room temperature in the face of unpredictable changes
in the outside weather.  Feedback control theorists and
cyberneticians appreciate the emergent properties of circular
systems, but their appreciation is akin to seeing the big
picture in the pile of jigsaw puzzle pieces.
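
The emergent property is easy to demonstrate.  A toy simulation
(illustrative only; the constants are made up) in which an on/off
thermostat holds the room near its setpoint while the outside
temperature wanders unpredictably:

    import random

    setpoint, room = 20.0, 12.0
    for hour in range(72):
        outside = random.uniform(-10.0, 15.0)   # unpredictable weather
        furnace_on = room < setpoint            # the feedback connection
        heat = 3.0 if furnace_on else 0.0
        room += heat + 0.1 * (outside - room)   # furnace input plus heat leak

After the first few hours, room stays within a few degrees of setpoint
no matter what the weather does; neither the furnace nor the thermostat
has that property by itself.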

Minsky and Conway, and Gauss and Poincare engage in 
synthetic reasoning (the complement of analytic reasoning).
Instead of understanding something by taking it apart,
they understand something by putting it together.  It is
harder to teach synthetic reasoning.  Artists and sculptors,
playwrights and poets, theoreticians and children -- these
are the synthetic thinkers, the practitioners of creative
intelligence.

The feedback loops of these discussion groups give rise to
an emergent property:  the synthesis of ideas from diverse
quarters.  The melting pot of ideas and the melding of minds
is the synthetic product of circular information channels.

--Barry Kort

hes@ecsvax.uncecs.edu (Henry Schaffer) (01/09/89)

In article <558@soleil.UUCP>, peru@soleil.UUCP (Dave Peru) writes:
>... 
> MATHEMATICIAN/PHYSICIST/ASTRONOMY:
> 
> In the book "Bridges To Infinity" Michael Guillen (Ph.D in physics, mathema-
> matics, and astronomy from Cornell University) writes (p.98): ...
> 
>  Pursuing the logic of his method, Conway is able to create between in-between
>  numbers, then numbers between *these*, and so on, literally ad infinitum.
>  The result is limitless hierarchies of in-between numbers, never before
>  named in mathematics.
 
  Hmm (even though this has nothing to do with reductionism), how is this
different from what is done in traditional mathematics?

> ... 
> PHYSICIST:
> 
> In the book "The Turning Point" Fritjof Capra [writes] ...
> 
>  Although the reductionist approach has been extremely successful in biology,
> ...
>    As the eminent biologist Paul Weiss has observed:
> 
>    We can assert definitely ... on the basis of strictly empirical investiga-
>    tions, that the sheer reversal of our prior analytic dissection of the
>    universe by putting the pieces together again, whether in reality of 
>    just in our minds, can yield no complete explanation of the behavior
>    of even the most elementary living system.
> 
  This seems to be an example of "proof by assertion".  

>... 
>  An extreme case of integrative activity that has fascinated scientists
                                                    ^^^^^^^^^^ - yes
>  throughout the ages but has, so far, eluded all explanation is the phenome-
                                               ^^^ - the large
       community of embryologists and developmental biologists
       would probably feel that they've explained *something*
>  non of embryogenesis-- ...
>  --a process far too complex to lend itself to reductionist analysis.  ...

Another proof by assertion -

  This whole controversy makes me think again about a question which
has bothered me before.  If reductionism is not sufficient - how can
one show/prove that it is not sufficient?  Clearly if a process is
very complex, then much work must be done to reduce it sufficiently
far to explain everything via a reductionist scenario.  

  I doubt that any reductionist is willing to believe that embryogenesis
is beyond reductionist analysis.  We haven't even finished cataloging
all the enzymes and enzymatic pathways in a single cell; there is still
lots of (reductionist) work left to be done, and it clearly can be
done (I haven't heard anyone say that sequencing the genome of a
higher organism can't be done - just that it is a lot of work) and so
clearly one can't give up on reductionism just yet.

  Is that the answer?  One can't disprove reductionism as long as there
is more work left to be done?  That would mean essentially never.


--henry schaffer  n c state univ

bwk@mbunix.mitre.org (Barry W. Kort) (01/10/89)

In article <833@auvax.UUCP> kevinc@auvax.UUCP (Kevin "auric" Crocker) 
refers to the "dirty word" of AI -- Expert Systems.

Although the expression hasn't caught on, I personally prefer
the phrase "Competent System" to "Expert System".  It's not
as sexy sounding, but it may be a more accurate label.

--Barry Kort

marty@homxc.ATT.COM (M.B.BRILLIANT) (01/10/89)

In article <687@htsa.uucp>, fransvo@htsa.uucp (Frans van Otten) writes:
> ....  I really
> stick to my definition of intelligence:
> 
>   ***  Intelligence: The ability to draw a conclusion.
> 
>   ***  Needed:        A database and an algorithm to reach a conclusion
>                       based on the data.
> 
>   ***  Improvements:  The ability to change the database.
> 		      The conclusion-algorithm being part of the database,
> 		      so that the system can add/change algorithms.

I like this formulation because I think it contains the key to the
definition of intelligence.

An expert system can draw a conclusion using a database and an algorithm.
I would not call it intelligent.  Intelligence is a generalization of
this capability, namely, the "Improvements" at the end of the list.

It is not sufficient to be able to go only one step further, and have a
database of databases, and a database of algorithms, and an algorithm
for choosing new databases and new algorithms.  Intelligence implies
the ability to create an arbitrary number of levels of databases of
databases, and databases of algorithms, and algorithms for choosing
databases and algorithms from the preceding level.

I suggest that the ability to reach an indefinite number of levels of
generalization distinguishes intelligence from mere computation.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858		Home (201) 946-8147
Holmdel, NJ 07733	att!houdi!marty1

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

EGNILGES@pucc.Princeton.EDU (Ed Nilges) (01/10/89)

In article <558@soleil.UUCP>, peru@soleil.UUCP (Dave Peru) writes:

>
> Then why do we so often embrace the strange idea that what we do is done
> by Someone Else--that is, our Self?  Because so much of what our minds
> do is hidden from the parts of us that are involved with verbal
> consciousness."
>

     Interesting.  Have you read Philosophy and the Mirror of Nature,
     by Richard Rorty, which deconstructs the notion of the self as
     the lonely spectator in an otherwise deserted movie theater
     (damned sad picture, not so?).  According to Rorty, this notion
     got its start in the seventeenth century, and it is unnecessary.



Edward Nilges

"Where is the wisdom we lost in knowledge?  Where is the knowledge we
 lost in information?" - T. S. Eliot

meadors@cogsci.ucsd.EDU (Tony Meadors) (01/10/89)

In article <558@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>Please consider the following thoughts of three people concerning the physics
>of the mind.  
>   COMPUTER SCIENTIST:
>In the book "The Society of Mind" Marvin Minsky writes (p.50):
>"When people have no answers to important questions, they often give some
> anyway.
>      What controls the brain?  The Mind.
>      What controls the mind?   The Self.
>      What controls the Self?   Itself.
>   ....
> It cannot help for you to think that
> inside yourself lies someone else who does your work.  This notion of
> "hommunculus"--a little person inside each self--leads only to a paradox

An infinite regress.
One of the challenges of psychological explanation is 
to explain our overall intelligent behavior and cognitive
abilities with a model whose parts are not themselves possessors of 
those abilities...this is how homunculi can creep into real world models.
   What Minsky is doing in the quoted passages is simply noting how
commonsense notions such as self and mind entail the idea of a "detached
controller" and this quickly leads down the homunculi trail.
>   MATHEMATICIAN/PHYSICIST/ASTRONOMY:
>In the book "Bridges To Infinity" Michael Guillen (Ph.D in physics, mathema-
>matics, and astronomy from Cornell University) writes (p.98):
> ........
> From there he goes on, however, to create an infinity of in-between numbers,
> such as the number whose left set contains zero, {0}, and whose right set
> contains one through infinity {1, 2, 3, ...}.  
> This defines a number somewhere between zero and one.  Thus the standard
> set of encyclopedias, the natural numbers, is embellished by an interminable
> number of in-between volumes.
> And it doesn't stop there.
>
> Pursuing the logic of his method, Conway is able to create between in-between
> numbers, then numbers between *these*, and so on, literally ad infinitum.
> The result is limitless hierarchies of in-between numbers, never before
> named in mathematics.

I'm no mathematician, but if I take 
the numbers 2 & 3 and stick a bunch of
new items between them (no matter how cleverly)
I certainly won't have created "numbers never
before named in mathematics." Numbers seem rather fixed to me, those that
might be found on a simple numberline; the labels I attach to various
points shouldn't make any difference...Unless these new numbers are not
expressible in decimal form at all. If this is the case I missed the
point but my point is below anyway...

> points.  Conway's theory, however, asks us to imagine numbers that fall
> somehow between unimaginable cracks in this blur of points, and between
> the cracks left behind by those numbers, and so on and so on.  With his
> theory, Conway has made credible what many persons before him had merely
> speculated about: there is conceptually no limit to how many times an object
> can be divided.

Cosmic cracks, eh.
Again, I'm not a numbers man, but was there ever any doubt that, given two
points on a line, one may always be found which lies between them?

> Conway's "All Numbers, Great and Small" shows off the boundless potential
> of the null set, but also of the human mind.  Human creative energy, like
> nothing, isn't anything if it isn't potential.  It is also an indomitable
> part of being alive, as countless experiments have documented.  People
> who are deprived of their senses by being floated in silent, dark tanks
> of water warmed to body temperature will hallucinate.  It is as though
> the human mind will not be stilled of its propensity to make something
> of nothing even, or especially, when immersed in nothingness.
>
> Like a physicist's vacuum, the human mind can be induced to create thoughts
> that come seemingly out of nowhere.  Mathematicians over the years have
> documented this common phenomenon.  The German Carl Friedrich Gauss recalled
> that he had tried unsuccessfully for years to prove a particular theorem
> in arithmetic, and then, after days of not thinking about the problem,
> the solution came to him "like a sudden flash of lightning."  The French
> mathematician Henri Poincare, too, reported working futilely on a problem
> for months.  Then one day while conversing with a friend about a totally
> unrelated subject, Poincare recalled that "... the idea came to me without
> anything in my former thoughts seeming to have paved the way for it."
>
> In this sense, the human mind is the real null set in Frege's and Conway's
> number theories; the mathematical null set is but a subordinate entity
> created after the mind's self-image."

I must say it's really getting deep at this point. 
I realize that the "wondrous parallels between profound mathematical
principles with the human mind" is the idea here. But I see no more that a
paper thin relatedness between the specifics under discussion.
This reminds me of other cases where "deep fundamental"
mathematical principles are put forward as "the essence" of thinking
or mind (recursion being a common one).
  Let's go over this again:
> Conway's "All Numbers, Great and Small" shows off the boundless potential
> of the null set, but also of the human mind.  Human creative energy, like
> nothing, isn't anything if it isn't potential.  

  So roughly the claim is "the mind is like, the null set." (a California 
  surfer dude accent would go nicely here).
  I find this a very strange claim but let's consider the two examples...
First,
> People
> who are deprived of their senses by being floated in silent, dark tanks
> of water warmed to body temperature will hallucinate.  It is as though
> the human mind will not be stilled of its propensity to make something
> of nothing even, or especially, when immersed in nothingness.

Yes people do eventually have all sorts of wild experiences. How does this
relate to the mind being like a null set or the mathematical discussion at
all? Does the null set notion PREDICT that those in such chambers will
hallucinate? THERE IS ONLY A VERY CRUDE SEMANTIC RELATIONSHIP BETWEEN
THE NULL SET AND SENSORY DEPRIVATION. "Oh, like both have to do with 
complete nothingness man..."
Second,
> Like a physicist's vacuum, the human mind can be induced to create thoughts
> that come seemingly out of nowhere.  Mathematicians over the years have
> documented this common phenomenon.  The German Carl Friedrich Gauss recalled
>..... 
Yes, yes, such cases are well known. But now the relationship between the
null set and the "example" is almost hard to find at all. First, there is
no reason to suppose any sort of emptiness involved. Research on this
"incubation" period of problem solving indicates that active
though unconscious processing is involved in producing "the answer."
And the individual, through his long and arduous pursuit of a solution
to fulfill some set of constraints, has set up a situation where
when the "answer" is unconsciously conceived of, it is "recognized" and
brought to consciousness. Anyway THERE IS NOTHING MORE THAN A CRUDE
SEMANTIC RELATIONSHIP BETWEEN THE NULL SET AND THE INCUBATION PHENOMENON
IN PROBLEM SOLVING.

> In this sense, the human mind is the real null set in Frege's and Conway's
> number theories; the mathematical null set is but a subordinate entity
> created after the mind's self-image."
 
1 THE HUMAN MIND IS NO MORE "THE REAL NULL SET IN...NUMBER THEORIES"
  THAN IT IS A BASEBALL BAT OR A TORNADO.

2 The notion that the null set arose as a mathematical concept due to
  man's perception of some nothingness within his psyche is absurd.
   
>   PHYSICIST:
>In the book "The Turning Point" Fritjof Capra (Ph.D in high-energy physics
>from University of Vienna) writes (p.101):
>
>"While the new physics was developing in the twentieth century, the
> mechanistic Cartesian world view and the principles of Newtonian physics
> maintained their strong influence on Western scientific thinking, and even
> today many scientists still hold to the mechanistic paradigm, although
> physicists themselves have gone beyond it.
> ...
> In biology the Cartesian view of living organisms as machines, constructed
> from separate parts, still provides the dominant conceptual framework.
> Although Descartes' simple mechanistic biology could not be carried very
> far and had to be modified considerably during the subsequent three hundred
> years, the belief that all aspects of living organisms can be understood
> by reducing them to their smallest constituents, and by studying the
> mechanisms through which these interact, lies at the very basis of most
> contemporary biological thinking.  
  So is this a tirade against a mechanistic approach, or the reductionist
  enterprise? They are not the same of course. 
>....
> Transcending the Cartesian model will amount to a major revolution in medical
> science, and since current medical research is closely linked to research
> in biology--both conceptually and in its organization--such a revolution
> is bound to have a strong impact on the further development of biology."

Yeah this sounds like Capra. I don't know what it would mean to "transcend
the cartesian model", and no explanation of what that would be like is
offered in this passage. If what is meant is to "look for causes and
processes outside the normal realm of measurable cause and effect,"
then I would say that it's hogwash. If it's just a childlike hope that
taking new perspectives - sometimes a "systems" or "cybernetic"
perspective - may yield new insight into complex systems, then
point taken.

>Paradoxically, these three people's thoughts may sound unrelated.  It is up
>to you to decide, any comments?

  Yes, not only unrelated, they are unremarkable. Dave, your postings remain
  without peer in being provocative and interesting. But trust me, the
  "deep stuff" concerning minds and brains, the meta-psychology,
  is largely fluff. Move up the scientific foodchain a bit. You know
  the old saying, fact is stranger than fiction. It's never been more true 
  than in psychology. Get down to real data and yet 
  keep these larger questions in mind. Read about the bizarre
  dissociations brain damaged patients exhibit, study up on perceptual
  illusions, investigate the cases of extraordinary memories (people can 
  literally tell you what shirt they wore or the change they made on
  a given day in 1966, and it's not a trick or learned ability). Well,
  you get the picture...these sorts of phenomena baffle
  and challenge, and if there are secrets to be found and profound changes
  to take place in how we understand the mind it will likely be fueled
  by these inexplicable sorts of data. 

tonyM

markh@csd4.milw.wisc.edu (Mark William Hopkins) (01/11/89)

In article <43472@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>Reductionist (analytical) reasoning is easy to describe and
>easy to teach.  But reductionism has a shortcoming.
>
>If I give you a large, assembled jigsaw puzzle, and you examine
>it piece by piece, you will end up with a pile of carefully
>examined pieces.

I don't know about that.  I solve most of my puzzles by classifying pieces
on the basis of their shape and printed color, with little or no regard
for the place where they fit in the "big" picture.

Yet, I also claim that I'm solving the puzzle holistically in the process.
The "big" picture always emerges out of the jumble of pieces near the end.

bwk@mbunix.mitre.org (Barry W. Kort) (01/11/89)

In article <6177@ecsvax.uncecs.edu> hes@ecsvax.uncecs.edu
(Henry Schaffer) worries about the overthrow of reductionism:

>  This whole controversy makes me think again about a question which
>has bothered me before.  If reductionism is not sufficient - how can
>one show/prove that it is not sufficient?  Clearly if a process is
>very complex, then much work must be done to reduce it sufficiently
>far to explain everything via a reductionist scenario.  
>
>... Clearly one can't give up on reductionism just yet.
>
>  Is that the answer?  One can't disprove reductionism as long as there
>is more work left to be done?  That would mean essentially never.

I don't think anyone is suggesting that reductionism (or analysis
of a complex system into its constituent elements) is a doomed
activity.  I think the argument is that additional insight is
gained through synthetic reasoning (constructing novel systems
from known pieceparts).  Nature does this all the time.  The
cerebral cortex of the species Homo sapiens sapiens is believed
to be one of the most complex systems found in nature.  We learn
by taking apart, and we learn by putting together.  There is room
(and need) for both activities.

Personally, I find that, as a species, we devote more time to
disassembly than to assembly, and I would like us to spend
more time developing our creative intelligence.  But I wouldn't
want a world in which we have to choose between holism and
reductionism.  Both are essential ingredients in cognitive growth.

Now and then, I even like to rest and simply enjoy what is.

--Barry Kort

peru@soleil.UUCP (Dave Peru) (01/11/89)

>>In the book "The Society of Mind" Marvin Minsky writes (p.50):
>>"When people have no answers to important questions, they often give some
>> anyway.
>>      What controls the brain?  The Mind.
>>      What controls the mind?   The Self.
>>      What controls the Self?   Itself.
>>   ....
>> It cannot help for you to think that
>> inside yourself lies someone else who does your work.  This notion of
>> "hommunculus"--a little person inside each self--leads only to a paradox

In article <686@cogsci.ucsd.EDU> (Tony Meadors) writes:

>An infinite regress.
>One of the challenges of psychological explanation is 
>to explain our overall intelligent behavior and cognitive
>abilities with a model whose parts are not themselves possessors of 
>those abilities...this is how homunculi can creep into real world models.
>   What Minsky is doing in the quoted passages is simply noting how
>commonsense notions such as self and mind entail the idea of a "detatched
>controller" and this quickly leads down the homunculi trail.

I would like to humbly express my opinion about the way Marvin Minsky
describes "hommunculus" as "leads only to paradox".   Using the word
"only" is misleading, like there's something wrong with hommunculus
or even having a paradox.  Or as you have stated, "simply noting how".
Personally, these kind of statements in any explanation are not very
satisfying, in fact, I start to get uncomfortable.  All I'm saying,
considering the subject matter, is simply that things never turn
out so simple.  Or at least, seem so simple to me.

    "The idea of a single, central Self doesn't explain anything.  This
     is because a thing with no parts provides nothing that we can use
     as pieces of explanation!" MM.

If to explain something, you must have parts, then at some point you got to
reduce down to physics.  I think our knowledge in physics is great, but
limited.  Physicists might have egos as big as atomic blasts, but unfortunately
God is still alive.  This bothers me and is why I have problems with
reductionist thinking.  Einstein said God does not play dice, or was it God 
that said Einstein does not play dice.  Anyway, as far as I know, according to
our current knowledge of physics, God does play dice and is probably living in
Atlantic City.  Who knows, maybe Donald Trump is the second coming of
Christ.  :-)

Seriously, is there anyone out there who really thinks reductionism can explain
everything there is to be explained?

>>In the book "Bridges To Infinity" Michael Guillen (Ph.D in physics, mathema-
>>matics, and astronomy from Cornell University) writes (p.98):
>> ........
>> From there he goes on, however, to create an infinity of in-between numbers,
>> such as the number whose left set contains zero, {0}, and whose right set
>> contains one through infinity {1, 2, 3, ...}.  
>> This defines a number somewhere between zero and one.  Thus the standard
>> set of encyclopedias, the natural numbers, is embellished by an interminable
>> number of in-between volumes.
>> And it doesn't stop there.
>>
>> Pursuing the logic of his method, Conway is able to create between in-between
>> numbers, then numbers between *these*, and so on, literally ad infinitum.
>> The result is limitless hierarchies of in-between numbers, never before
>> named in mathematics.

>I'm no mathematician, but if I take 
>the numbers 2 & 3 and stick a bunch of
>new items between them (no matter how cleverly)
>I certainly won't have created "numbers never
>before named in mathematics." Numbers seem rather fixed to me, those that
>might be found on a simple numberline; the labels I attach to various
>points shouldn't make any difference...Unless these new numbers are not
>expressable in decimal form at all. If this is the case I missed the
>point but my point is below anyway...

Don't wave this off; spend some time with this.  What Conway does is really
awesome.  In fact, it defines the word awesome.  The idea of "nothingness"
as opposed to "nothing as something", i.e. the set {0}, is really neat!  And
then boom, all the rational and irrational numbers spring to life.  To say
"Numbers seem rather fixed to me" seems fixed or closed-minded to me.

>> points.  Conway's theory, however, asks us to imagine numbers that fall
>> somehow between unimaginable cracks in this blur of points, and between
>> the cracks left behind by those numbers, and so on and so on.  With his
>> theory, Conway has made credible what many persons before him had merely
>> speculated about: there is conceptually no limit to how many times an object
>> can be divided.
>
>Cosmic cracks eh.
>Again, Im not a numbers man, but was there ever any doubt that a given two
>points on a line one may always be found which lies between them?

"Cosmic", interesting word choice.

When you were younger, did you ever get the feeling while you were half asleep
that you were falling off your bed?  You suddenly wake up as you slam your
hand down on the mattress.   I have this feeling all the time, but nothing
to slam against.  :-)  And mathematically speaking, the way Conway generates
numbers is the closest thing I've seen to expressing this feeling.

>> People
>> who are deprived of their senses by being floated in silent, dark tanks
>> of water warmed to body temperature will hallucinate.  It is as though
>> the human mind will not be stilled of its propensity to make something
>> of nothing even, or especially, when immersed in nothingness.
>
>Yes people do eventually have all sorts of wild experiences. How does this
>relate to the mind being like a null set or the mathematical discussion at
>all? 

I knew I should have left the float-tank part out.  People have all kinds
of prejudices.  Tony, have you ever floated?  I haven't, but maybe Guillen
has.  Apparently, Guillen thought the experience related closely enough to
the discussion to justify using the analogy.  You think the analogy doesn't
apply, okay.  I still
think it's a neat idea and I'll reserve judgement until after I've floated.

> Does the null set notion PREDICT that those in such cahmbers will
>hallucinate? THERE IS ONLY A VERY CRUDE SEMANTIC RELATIONSHIP BETWEEN
>THE NULL SET AND SENSORY DEPRIVATION. "Oh, like both have to do with 
>complete nothingness man..."

This California surfer stuff is indicative of your closed-mindedness and
adds nothing to the conversation.  Which is appropriate considering the
subject matter.  

When you say "there is only a very crude semantic relationship between
the null set and sensory deprivation" are you speaking from experience?

>> In this sense, the human mind is the real null set in Frege's and Conway's
>> number theories; the mathematical null set is but a subordinate entity
>> created after the mind's self-image."
> 
>1 THE HUMAN MIND IS NO MORE "THE REAL NULL SET IN...NUMBER THEORIES"
>  THAN IT IS A BASEBALL BAT OR A TORNADO.
>
>2 The notion that the null set arose as a mathematical concept due to
>  man's perception of some nothingness within his psyche is absurd.

Considering the quality of your comments, your mind is a perfect example
of the null set.  All you've really said is this is bullshit with bullshit
reasons.  Maybe this is all we can ever say about this subject.

If you see something that is blatantly wrong then say so and state why.
However, if these interpretations are simply contrary to your own
interpretations or intuition, then don't come off so condescending with
words like "absurd".  Like you know better, maybe you do.

Personally, my belief system is evolving.  I remain open to new ideas.

>> Transcending the Cartesian model will amount to a major revolution in medical
>> science, and since current medical research is closely linked to research
>> in biology--both conceptually and in its organization--such a revolution
>> is bound to have a strong impact on the further development of biology."
>
>Yeah this sounds like Capra. I don't know what it would mean to "transcend
>the cartesian model", and no explanation of what that would be like is
>offered in this passage. If what is meant is to "look for causes and
>processes outside the normal realm of measurable cause and effect 
>then I would say that its hogwash.

I think what Capra means by "transcend the cartesian model" is that a human
being as an organism is affected by the environment in such a way that
some processes will not be explainable outside of that context.  Things may
be so interconnected that reductionism may be inadequate.  I think this
is interesting when you consider the relationship of the mind in respect to
understanding the physics of the environment or the physics of the mind.

> If its just a childlike hope that
>taking new perspectives, sometimes a "systems" or "cybernetic" 
>perspective may yield new insight into complex systems, then 
>point taken.

Childlike?  I don't understand.

What distinguishes childlike from adultlike?

>>Paradoxically, these three people's thoughts may sound unrelated.  It is up
>>to you to decide, any comments?
>
>  Yes, not only unrelated, they are unremarkable.

Then why did you make a remark?  I was trying to show some ideas about and of
the mind with respect to the reductionist approach.  Some people liked it.

>  Dave, your postings remain without peer in being provocative and
>  interesting.  But trust me, the
>  "deep stuff" concerning minds and brains, the meta-psychology,
>  is largely fluff.

Trust you?  Is it safe?  :-)

Some fluff hardens.

I think a lot of people have been a little hard on Guillen.  This guy has
some really neat things to say.  Consider from his essay "Irrational Thinking"
from his book "Bridges to Infinity" (p.38-39):

"Despite this preeminence of rational numbers, science does need irrational
 numbers.  For well over a century, scientists have been taking note of a
 growing inventory of special quantities whose appearance in nearly every
 scientific theory signifies their import in the modern description of
 space-time.  These natural constants can be seen as nature's vital statistics,
 and right now it looks as though every one of them is an irrational number.
 For example, one of these constants, the speed of light, has been measured
 out to nine decimal places, and the digits have yet to show any pattern.
 (Expressed in millions of meters per second, our best measurement of the
 speed of light is the number .299792458.)  Another constant is one that is
 descriptive of dynamic behavior at the atomic level.  It is called the
 fine-structure constant, and there is no pattern to its digits even when
 measured out to ten decimal places.  (Our best measurement of the fine-
 structure constant, which is a dimensionless quantity, is .0072973502.) In
 physics alone there are more than a dozen of these constants, which have
 been measured out to anywhere from a few to eleven decimal places, and not
 one of them has a pattern to its digits."

When I read this I was astonished.  Of course, some of these constants may
not be irrational numbers.  But what would be really awesome is to come up
with some physics that would predict these irrational numbers.

Anyway, some more fluff for the pile.

>  Move up the scientific foodchain a bit. You know
>  the old saying, fact is stranger than fiction. Its never been more true 
>  than in psychology. Get down to real data and yet 
>  keep these larger questions in mind. Read about the bizzare
>  dissociations brain damaged patients exhibit, study up on perceptual
>  illusions, investigate the cases of extraordinary memories (people can 
>  literally tell you what shirt they wore or the change they made on
>  a given day in 1966, and its not a trick or learned ability). Well,
>  you get the picture...these sorts of phenomenon baffle
>  and challenge, and if there are secrets to be found and profound changes
>  to take place in how we understand the mind it will likely be fueled
>  by these inexplicable sorts of data. 

I try to get down to real data as much as I can.  That's why I like USENET:
after I read all the fluff, I can see what real people think.

In reference to "move up the scientific foodchain", I'm currently reading
Paul Kennedy's "Rise and Fall of Great Powers".  I want to find out why it
is so hard nowadays for a person my age to buy a house.

dmocsny@uceng.UC.EDU (daniel mocsny) (01/11/89)

In article <331@csd4.milw.wisc.edu>, markh@csd4.milw.wisc.edu (Mark William Hopkins) writes:
> In article <43472@linus.UUCP> bwk@mbunix (Barry Kort) writes:
> >If I give you a large, assembled jigsaw puzzle, and you examine
> >it piece by piece, you will end up with a pile of carefully
> >examined pieces.
> I don't know about that.  I solve most of my puzzles by classifying pieces
> on the basis of their shape and printed color, with little or no regard
> for the place where they fit in the "big" picture.
> 
> Yet, I also claim that I'm solving the puzzle holistically in the process.
> The "big" picture always emerges out of the jumble of pieces near the end.

How about those irritating puzzles with large washes of featureless
background (sky, ocean, forest)?  Even with our terrific holistic
pattern-matching power, the best we can often do is try every
combination of pieces to see which ones fit together (the problem gets
worse when a malicious jigsaw operator makes similar cuts that permit
close but erroneous fits). Assembling a solid-color puzzle reduces us
to the level of a slow, awkward serial computer, with perhaps some
advantage in avoiding certain obvious misfits. Is the solid-color
puzzle problem NP-complete?
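
Just to put a rough number on "try every combination" (a back-of-the-envelope
sketch of my own, not a real solver):

    from math import factorial

    # With no picture to prune candidates, every loose piece is a candidate
    # for every open slot, and each piece can sit in any of 4 rotations.
    # A crude upper bound on the assignments a brute-force solver faces:

    def worst_case_assignments(n_pieces):
        return factorial(n_pieces) * 4 ** n_pieces

    for n in (9, 25, 100):
        print(n, worst_case_assignments(n))   # grows hopelessly fast

Edge-shape checks prune most of that in practice, of course, but the point
stands: with a featureless picture the pruning has to come from somewhere
other than holistic pattern-matching.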

Then again, I don't know anyone who has spent enough time assembling
solid-color puzzles to perform at an expert level. Perhaps subtle
cues exist that would allow our holistic power to get its foot in
the door and fetch the complexity monster a swift kick.

A portion of the puzzle with more information content provides
suitable grist for our holistic mill. Fixing the position of a piece
is "only" a matter of spotting when the pattern on it corresponds
uniquely to a detail on the puzzle box (if the puzzle came in a plain
box, toss it in the incinerator), or to a partially-assembled
structure. The holistic pattern-matcher must work in the face of
rotations, and know when to ignore or exploit the shape of a
particular puzzle piece.

But I think I am subverting Barry's original comment. He seemed to
be saying that the way the puzzle happens to divide into pieces
has _nothing_at_all_to_do_ with the picture that appears on the
puzzle. The "obvious" reductionist approach to "understanding" or
"explaining" the picture on the puzzle is doomed from the start.

However, I think the futility of the reductionist approach here
follows from the nature of the puzzle. I.e., the puzzle is an
artifact, and as such its decomposition is arbitrary. Do we in fact
see such "arbitrary" or misleading decomposition in nature, or do we
explain our failure to explain as due to nothing more than our limited
knowledge and bookkeeping ability?

Cheers,

Dan Mocsny
dmocsny@uceng.uc.edu

"God is subtle, but He is not malicious." -- A. Einstein
"Stop telling God what to do." -- P.A.M. Dirac (?) (Sorry, science historians,
if I botched this one)

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (01/11/89)

In article <564@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>God is still alive.  This bothers me and is why I have problems with
>reductionist thinking.  Einstein said God does not play dice, or was it God 
>that said Einstein does not play dice. 

Einstein did say "God does not play dice with the universe", and one of
his friends (I think it was Pauli) finally retorted: "When are you
going to quit telling God what to do?"
>
>Seriously, is there anyone out there who really thinks reductionism can explain
>everything there is to be explain?
>
I doubt if the human race will survive long enough to explain everything
there is to explain, whatever method is used.  That isn't the point.
The point is, when dealing with complex systems, reductionism is a
necessary step if we are to understand them.  Only a first step, since
then we have to learn how to assemble the reduced parts back into a whole
again.  But it has worked splendidly in the past and there is no sign at all
that it is exhausted as a method, despite the ravings of Capra and others.
This all has nothing whatever to do with God.  If reductionism allows us
to make progress in understanding all parts of the universe we have 
heretofore investigated, why should the same method not work in the
case of the human mind?

mark@verdix.com (Mark Lundquist) (01/12/89)

In article <687@htsa.uucp> fransvo@htsa.UUCP (Frans van Otten) writes:
>I really stick to my definition of intelligence:
>
>  ***  Intelligence: The ability to draw a conclusion.
>
>  ***  Needed:        A database and an algorithm to reach a conclusion
>                      based on the data.
>
>  ***  Improvements:  The ability to change the database.
>		      The conclusion-algorithm being part of the database,
>		      so that the system can add/change algorithms.
>
>I would like to know how other people think about my definition.

Well...I'm not sure that it's very useful.  The problem is that it doesn't
really answer anything for us; instead, it takes all the interesting and
difficult issues that used to be part of the question "What is intelligence?"
and pushes them back into different questions.  For instance, "How do we
describe the particular unique way that human beings draw conclusions?"
This would be perfectly valid if this definition accurately reflected what
the word 'intelligence' does in our language.  Unfortunately, it doesn't
(I think), unless you mean something very special by the phrase "draw a
conclusion".  Suppose I said "Well, I have this tic-tac-toe-playing program.
It must be intelligent, since it correctly draws a conclusion about what the
next move should be."  One might respond by saying, "Not really; your
program can't be said to hold a _belief_ about the correctness of the
proposed tic-tac-toe move.  You've simply set up the bits in the machine in
such a way that it prints something on the screen, which happens to
correspond to a correct tic-tac-toe move, but only because you've rigged it
that way.  It's a conclusion in a sense, but it's not the kind of conclusion
that I meant".  This of course would beg the question "Then just what sort
of 'conclusion' do you mean?"
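
(To make the example concrete, here is the sort of thing I have in mind -- a
toy sketch of my own, not anyone's actual program.  It "draws a conclusion"
about the next move only because the rules were wired in by the programmer:)

    def next_move(board):
        """board: 9 chars, 'X', 'O' or ' '.  Returns an index for 'X' to play."""
        lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),
                 (1,4,7),(2,5,8),(0,4,8),(2,4,6)]
        # 1. take a winning move if one exists, 2. else block the opponent,
        # 3. else take the first free square -- hardly a "belief".
        for mark in ('X', 'O'):
            for a, b, c in lines:
                trio = [board[i] for i in (a, b, c)]
                if trio.count(mark) == 2 and ' ' in trio:
                    return (a, b, c)[trio.index(' ')]
        return board.index(' ')

    print(next_move("XX OO    "))   # -> 2, completing X's top row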
   However, one might respond to my tic-tac-toe suggestion as follows:
"You're quite right.  Your tic-tac-toe program _is_ intelligent.  Of course,
it's far less intelligent than a baboon.  Humans, in turn, exhibit
intelligence on a grander scale yet.  But in principle, it's the same."
This response would also be question-begging.  How is it that humans and
baboons apply this principle, to be able to exhibit their respective degrees
of intelligence?
	No matter how you slice it, it comes up peanuts.

	I would say that intelligence is what minds do.  Of course, this
definition is _at_least_ as question-begging as the last one, and almost as
useless, except for one thing:  it does seem to describe what people mean
when they use the word 'intelligence'.  I suspect that we'll never find a
definition of intelligence that escapes the difficulty that I've
described.  I guess I still don't understand the necessity of formulating
such a definition.

stevev@uoregon.uoregon.edu (Steve VanDevender) (01/12/89)

In article <564@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>I think alot of people have been a little hard on Guillen.  This guy has
>some really neat things to say.  Consider from his essay "Irrational Thinking"
>from his book "Bridges to Infinity" (p.38-39):
>
>"Despite this preeminence of rational numbers, science does need irrational
> numbers.  For well over a century, scientists have been taking note of a
> growing inventory of special quantities whose appearance in nearly every
> scientific theory signifies their import in the modern description of
> space-time.  These natural constants can be seen as nature's vital statistics,
> and right now it looks as though every one of them is an irrational number.
> For example, one of these constants, the speed of light, has been measured
> out to nine decimal places, and the digits have yet to show any pattern.
> (Expressed in millions of meters per second, our best measurement of the
> speed of light is the number .299792458.)  Another constant is one that is
> descriptive of dynamic behavior at the atomic level.  It is called the
> fine-structure constant, and there is no pattern to its digits even when
> measured out to ten decimal places.  (Our best measurement of the fine-
> structure constant, which is a dimensionless quantity, is .0072973502.) In
> physics alone there are more than a dozen of these constants, which have
> been measured out to anywhere from a few to eleven decimal places, and not
> one of them has a pattern to its digits."
>
>When I read this I was astonished.  Of course, some of these constants may
>not be irrational numbers.  But what would be really awesome is to come up
>with some physics that would predict these irrational numbers.
>
>Anyway, some more fluff for the pile.
>

This is definitely fluff.  It is ridiculous to try to read meaning into
the digits of the number for the speed of light in meters per second.
Meters and seconds are entirely arbitrary measurements of space and time
and it's not surprising that physical constants are going to show no
patterns when expressed in base ten and when measured in metric units.

You should know that in many physics problems, measurements are normalized
so that, say, c is 1.  The values of the constants themselves are not
important.  The relationships between the values of physical constants are.
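
A quick illustration of how much the digits owe to the units (my own toy
example; note that since the 1983 redefinition of the metre, c in metres per
second is exact by definition):

    c_m_per_s = 299792458                   # metres per second, exact by definition
    c_miles_per_s = c_m_per_s / 1609.344    # statute miles per second
    c_natural = 1                           # units chosen so that c == 1

    print(c_m_per_s)        # 299792458      -- one set of "patternless" digits
    print(c_miles_per_s)    # ~186282.397    -- a different set of digits
    print(c_natural)        # 1              -- no digits to read tea leaves in

Same constant, three unit systems, three completely different digit strings.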

It is slightly more interesting to consider dimensionless constants
like the fine structure constant, which are independent of any
measuring system.  However, who is to say that there is no pattern to
its digits when we only have ten decimal places and uncertainty to
work with, and we're looking at it in base ten anyway?  When the Planck
length is on the order of 10^-35 meters, is ten or eleven decimal digits
of a constant enough to make a pronouncement on?

Guillen's title "Irrational Thinking" may apply to more than just his
essay.  To claim that numbers we can only measure to finite precision
and which involve uncertainty are therefore irrational is, well,
irrational.  Rational numbers are perfectly adequate for representing
the physical constants he talks about.

From what I've seen of Guillen so far, I can see why people are hard
on him.  He may have fascinating mystic insights but his attempts to
justify them in scientific or mathematical terms don't work.  The best
I can say about his attempt to make an analogy between creating a
continuum of numbers out of the null set and the ability of the mind
to produce unpredictable thoughts is that the analogy is strained.
Does he show that the mind produces some insights out of nothing?  No.
Can he know that it does?  I think not.  It is just as tenable to say
that a mind produces insights via processes that are not accessible to
that mind's own consciousness, from information it already has.  This
also counters the justification that sensory deprivation somehow shows
that the mind makes something out of nothing.  People who climb into a
tank have memories, and when they start to hallucinate their minds
presumably aren't creating visions out of nothing--they're creating
hallucinations out of what is already in their minds.  Would a mind
that is completely blank, with no prior experiences, and that is
deprived of all input hallucinate?  Is this experiment possible?
Probably not.  Guillen isn't talking about this experiment, anyway,
but it's what he really should be talking about if he wants to claim
that a mind can generate something from nothing like Conway's theory
can generate numbers from the null set.

I think the reductionism/holism argument boils down to what I think is
a pair of clearer questions:  Is the universe explainable by rules?
Can those rules be derived by observing the universe?  Science assumes that
the answer to both of those questions is "yes."  My understanding of
holism leads me to think that it would answer "no" to one or both of those
questions.
-- 
Steve VanDevender 	stevev@drizzle.cs.uoregon.edu
"Bipedalism--an unrecognized disease affecting over 99% of the population.
Symptoms include lack of traffic sense, slow rate of travel, and the
classic, easily recognized behavior known as walking."

bwk@mbunix.mitre.org (Barry W. Kort) (01/13/89)

In article <331@csd4.milw.wisc.edu> markh@csd4.milw.wisc.edu
(Mark William Hopkins) takes me up on my jigsaw puzzle metaphor:

>In article <43472@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>>If I give you a large, assembled jigsaw puzzle, and you examine
>>it piece by piece, you will end up with a pile of carefully
>>examined pieces.
>
>I don't know about that.  I solve most of my puzzles by classifying pieces
>on the basis of their shape and printed color, with little or no regard
>for the place where they fit in the "big" picture.
>
>Yet, I also claim that I'm solving the puzzle holistically in the process.
>The "big" picture always emerges out of the jumble of pieces near the end.

I grant that you are solving the puzzle holistically.  After all, the
big picture does in fact emerge at the end.  But the *process* of
solution seems to be occurring outside the focus of conscious attention.
We can teach people how to examine the jigsaw pieces and classify them
by color, shape, and texture.  But the method of assembly which yields
the "Aha! Insight" seems to be a fuzzier, less algorithmic activity.

Perhaps it is occurring largely in the right hemisphere, using parallel
processing and combinatorial logic.  Why is it that holistic thinking
and insight seem to come during periods of sleep, or during periods
when our attention is diverted away from the problem at hand?  Why
is it that the solution "shows up" without warning?

--Barry Kort

bwk@mbunix.mitre.org (Barry W. Kort) (01/13/89)

In article <564@soleil.UUCP> peru@soleil.UUCP (Dave Peru) opines:

 > To say "Numbers seem rather fixed to me"
 > seems fixed or closed minded to me.

In Howard Rheingold's book, _Tools of Thought_, there is a sketch
of the neurophysiologist and pioneering cyberneticist, Warren McCulloch.
As Rheingold repeats the story, McCulloch was an abnormally gifted and
colorful person who had a firm background in mathematics.  A teacher
asked McCulloch what he wanted to do with his obviously brilliant
future.  In McCulloch's own telling: "Warren," said he, "what is thee
going to be?"  And I said,
"I don't know,"  "And what is thee going to do?"  And again I said,
"I have no idea, but there is one question I would like to answer:
What is a number, that man may know it, and a man that he may know
a number?"  He smiled and said, "Friend, thee will be busy as long
as thee lives."


 > What distinguishes childlike from adultlike?

On weekends I work as a volunteer in the Children's Discovery Room
at the Boston Museum of Science.  Occasionally I ask a parent,
"What is the difference between a child and a scientist?"  Most
of them quickly respond, "No difference?"

I often feel sorry for adults who have lost their childlike curiosity
somewhere along the way.  Fortunately a few children grow up to be
scientists.  It is a shame that so many people become adulterated
en route to maturity.

--Barry Kort

Today's Quote:	"Nothing is as simple as it seems at first,
		 or as hopeless as it seems in the middle,
		 or as finished as it seems in the end."

bwk@mbunix.mitre.org (Barry W. Kort) (01/13/89)

In article <568@uceng.UC.EDU> dmocsny@uceng.UC.EDU (Daniel Mocsny) writes:

 > Is the solid-color puzzle problem NP-complete?

There are two kinds of extra-hard jigsaw puzzles:  the solid-color
puzzles (Little Red Riding Hood's Hood) and the puzzle in which
all the pieces are the same shape (Schmuzzles).

But curiously enough, the solid-color Schmuzzle puzzle isn't even
NP-hard.  It's NP-ridiculous.  :-)

On a more sublime note, Dan returns to the original point of discussion:

 > But I think I am subverting Barry's original comment. He seemed to
 > be saying that the way the puzzle happens to divide into pieces
 > has _nothing_at_all_to_do_ with the picture that appears on the
 > puzzle.  The "obvious" reductionist approach to "understanding" or
 > "explaining" the picture on the puzzle is doomed from the start.

I suppose Mother Nature is not so devilish as the jigsaw puzzle
maker.  But our own category boundaries are still somewhat arbitrary.
And, by studying the "elements" we don't automatically understand how
they assemble themselves into "molecules".

What I am saying is that analysis and differentiation are valuable
tools, but creative intelligence also requires synthesis and
integration.

--Barry Kort

mark@verdix.com (Mark Lundquist) (01/14/89)

In article <686@cogsci.ucsd.EDU> meadors@cogsci.UUCP (Tony Meadors) writes:
>  "deep stuff" concerning minds and brains, the meta-psychology,
>  is largely fluff. Move up the scientific foodchain a bit. You know
>  the old saying, fact is stranger than fiction. Its never been more true 
>  than in psychology. Get down to real data and yet 
>  keep these larger questions in mind. Read about the bizzare
>  dissociations brain damaged patients exhibit, study up on perceptual
>  illusions, investigate the cases of extraordinary memories (people can 
>  literally tell you what shirt they wore or the change they made on
>  a given day in 1966, and its not a trick or learned ability). Well,
>  you get the picture...these sorts of phenomenon baffle
>  and challenge, and if there are secrets to be found and profound changes
>  to take place in how we understand the mind it will likely be fueled
>  by these inexplicable sorts of data. 

Try any of the books written by Oliver Sacks ("A Leg To Stand On",
"The Man Who Mistook His Wife For A Hat", etc).  These books are accounts
of some really strange disorders experienced by patients who had had trauma
to the right hemisphere of the brain.  These disorders profoundly change
the patients' whole experience of being a human being.  Their symptoms
are not easily measured or quantified, and the disorders (according to Sacks)
do not lend themselves well to traditional case studies.  Sacks decided that the
appropriate form of 'case study' for these disorders is the story.  He tells
these stories with acumen, compassion, insight, and humor.
	He's also got another book (I can't remember the title) in which he
discusses the relationships between Parkinson's and Tourette's syndromes.

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (01/15/89)

From article <43582@linus.UUCP>, by bwk@mbunix.mitre.org (Barry W. Kort):
" ...  But our own category boundaries are still somewhat arbitrary.
" And, by studying the "elements" we don't automatically understand how
" they assemble themselves into "molecules".

Of course not, but is this a fair analogy for reductionism?  I don't
think so.  Reductionist theories may occasionally arise by identifying
elements apart from the patterns they assemble into (perhaps molecular
biology would be a case?), but more typically the pattern is observed
first.  Later, a reduction into elements which assemble according to
certain rules is proposed to explain the patterns.  There is no step of
analysis apart from synthesis -- the rules of assembly are intrinsically
a part of the theory.

Instances are the analysis of atoms to explain the pattern of the
periodic table and the analysis of particles to explain the 8-fold way,
probably.  An instance I know more about may be drawn from Ventris'
decipherment of Linear B. The molecules were, let us say, the signs of
the writing system, and the atoms the vowels and consonants they stand
for.  A pattern in the data was that some signs appeared only at the
beginning of (what were inferentially) words.  One basis of the
decipherment was the identification of such signs as standing for single
vowels, the reasoning being that if the script was syllabic and if the
language being written did not permit vowels to occur next to one
another within a word (which is a common restriction), vowel-only
syllables and their signs could occur only word-initially.  This
would explain the pattern.  This, and other such inferences comprised
Ventris' theory.  (Other previous failed attempts to decipher
the script were, however, based on direct assignments of phonetic
values to signs.)

One cannot find here a step of synthesis that is chronologically
or logically "after" the analysis.

I suspect that the criticism proponents of holistic theories make
of reductionism is founded on a play on words -- an equivocation.
There is traditionally a use of the term 'analysis' which opposes
it to synthesis, but more commonly, 'analysis' does not refer merely
to a decomposition somehow apart from composition.

		Greg, lee@uhccux.uhcc.hawaii.edu

mirk@warwick.UUCP (Mike Taylor) (01/16/89)

In article <1995@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>If reductionism allows us to make progress in understanding all parts
>of the universe we have heretofore investigated, why should the same
>method not work in the case of the human mind?

Because the human mind is, by its very nature, something that can only
be observed in its entirety from within, and this viewpoint of consciousness
that we have is not susceptible to reductionist methods because we cannot
view the phenomenon objectively.  It is an intrinsically subjective thing.

Thomas Nagel's article "What is it like to be a bat?" (which can be
found in Hofstadter & Dennett's "The Mind's I") makes this point in
rather more detail, but in a very dull and dry way, IMHBDO.  His basic
point is that we cannot understand what it is like to be a bat because
it is a feeling subjective to the bat (if it is conscious at all).  We
can imagine what it would be like for ourselves to be a bat - but to
have a true picture of the phenomenon of bat-consciousness, we must
understand what it is like for the bat to be a bat.  Clear?  No, I
didn't think so :-( :-)

I will try to restate the point in its bare form: to analyse something
by reductive techniques, we must be able to view it objectively.  But
to view consciousness objectively is to omit the most important aspect
of the phenomenon, namely the subjective experience of it, and thus
any reductionist analysis made on this basis will be incomplete and/or
inaccurate.

There - that wasn't so bad, was it? :-)
______________________________________________________________________________
Mike Taylor - {Christ,M{athemat,us}ic}ian ...  Email to: mirk@uk.ac.warwick.cs
*** Unkle Mirk sez: "Em9 A7 Em9 A7 Em9 A7 Em9 A7 Cmaj7 Bm7 Am7 G Gdim7 Am" ***
------------------------------------------------------------------------------

bwk@mbunix.mitre.org (Barry W. Kort) (01/17/89)

In article <3498@uoregon.uoregon.edu> stevev@drizzle.UUCP
(Steve VanDevender) writes:

 > I think the reductionism/holism argument boils down to what I think is
 > a pair of clearer questions:  Is the universe explainable by rules?
 > Can those rules be derived by observing the universe?  Science assumes that
 > the answer to both of those questions is "yes."  My understanding of
 > holism leads me to think that it would answer "no" to one or both of those
 > questions.

Steve, did you really mean "derive" rather than "discover"?

Einstein supposed that the laws of physics would appear the same to all
observers.  From this supposition, he derived the Theory of Relativity.
His starting point was neither a discovery nor a derivation.  But he
discovered that his derivation led to predictions which were borne out
by experimental observation.

But Einstein's nemesis was the Quantum Theory with its dice-playing
lack of rhyme or reason.  So one of the "rules" appears to be lawless
and chaotic behavior.  Whether Stephen Hawking and others will
ultimately imagine/discover/derive rules underlying quantum randomness
remains to be seen.

Personally, I believe that quantum indeterminacy will survive the razor
of Occam, and that we will end up thanking our "lucky stars" for the
gift of life, including intelligent life.

--Barry Kort

abrown@homxc.ATT.COM (A.BROWN) (01/17/89)

Can someone please E-mail me the difference between reductionism and
non-reductionism?  I'm doing a paper on Artificial Intelligence which
argues that, given the proper sample space, computers can adequately
simulate the 'Primitive Visual System'.  I now need to conclude, but am
stuck as to whether this validates either system.
                                           Thanks a million
                                                abrown

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (01/18/89)

From article <5038@homxc.ATT.COM>, by abrown@homxc.ATT.COM (A.BROWN):
" 
" Can someone please E-mail the difference between reductionism and
" non-reductionism...

The Encyclopedia of Philosophy has some stuff under Laws and Theories,
Reductionism, which begins:  "Since theories do not refer directly to
observables, at least prima facie, and do not make directly testable
statements, the first attempt to clarify their status was the suggestion
that they make a disguised reference to observables; that is, that they
provide some kind of shorthand for observation statements, or that their
content can be exhaustively translated into or reduced to observation
statements.  ..."

The article opposes reductionism to instrumentalism and realism.

So far as I can tell, this "proper" sense of 'reductionism' has
no relation to the way the term was being used in the recent
discussion in these groups, where it meant 'science'.

		Greg, lee@uhccux.uhcc.hawaii.edu

throopw@xyzzy.UUCP (Wayne A. Throop) (01/18/89)

> mark@verdix.com (Mark Lundquist)
> [...] one might respond to my tic-tac-toe suggestion as follows:
> "You're quite right.  Your tic-tac-toe program _is_ intelligent.  Of course,
> it's far less intelligent than a baboon.  Humans, in turn, exhibit
> intelligence on a grander scale yet.  But in principle, it's the same."
> This response would also be question-begging.  How is it that humans and
> baboons apply this principle, to be able to exhibit their respective degrees
> of intelligence?

This response does not beg the question at all.  Or rather, it has a
simple and straightforward answer for the question Mark claims it
begs.  The tic-tac-toe program models a game.  The human models the
game, the world, the human's self, the relationship among these
entities, and on and on.  The baboon (presumably) has fewer models of
less accuracy than does the human.

Or to put it another way, the answer to the question Mark poses is,
humans and baboons apply the same principles as does the tic-tac-toe
program, but they apply them to more things, more richly, and more
accurately.

Before anybody thinks I'm saying that AI is a SMOP, (to add lots of
models and make them rich and accurate) let me assure you all that
I *don't* minimize the difficulties or the unknowns in this way.  
After all, it is not known how one goes about building rich and
accurate models of things, and tying these to perceptions.  All
I'm saying is that the position of "understanding is modeling" is
not an obviously flawed position to take, nor does the position
lack something as obvious as a distinguishing factor between
levels of understanding.

--
God made integers, all else is the work of man.
                              --- Leopold Kronecker
-- 
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw

demers@beowulf.ucsd.edu (David E Demers) (01/18/89)

In article <2894@xyzzy.UUCP> throopw@xyzzy.UUCP (Wayne A. Throop) writes:
>
>Before anybody thinks I'm saying that AI is a SMOP, 
					       ^^^^
Um, are we all supposed to know what this means?

>-- 
>Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw
					      ^^^^^
a maze of twisty mailpaths, all different...

Dave DeMers
demers@cs.ucsd.edu

litow@csd4.milw.wisc.edu (Bruce E Litow) (01/18/89)

Recently some postings have appeared in which the type of argument (so-called)
indicated in the summary has been invoked to maintain that reductionist 
methods cannot succeed in mind studies. I cannot accept that we can use
the construction: `the mind by its very nature ...' when we haven't
a clue as to what that `very nature' might be. In arguments based on
this construction one is always forced at some point into actually
accepting that there is a mind in toto which escapes whatever approach
is being argued against.  That is, the mind is an entity.
(Following Rilke, perhaps the ``Angels'' see it entire.)
Once this is admitted, then mind study is on a par with physics, which also
faces a unity (the universe) about which all our understanding has come from
reductionist methods.
An interesting extended attempt in support of the claim that mind studies
cannot proceed via reduction is given in Fodor's ``Modularity of Mind''.
However, Fodor only makes the case for cognition being beyond our present
reductions, and nothing more.

I believe that there is tremendous confusion in mind studies between, e.g.,
general, metaphysical speculation about mind and reductions such as
neurophysiology, molecular physiology, linguistic inquiries, etc.  The first
is limited because it can only rarely provide testable hypotheses.  It is
unscientific.  Its utility comes from inspiring people to examine things,
but it is useless for carrying out research about mind.  We have nothing
else but reduction when it comes to science.

bettingr@sunybcs.uucp (Keith E. Bettinger) (01/19/89)

In article <906@ubu.warwick.UUCP> mirk@uk.ac.warwick.cs (Mike Taylor) writes:
>
>  [ ... ]
>
>I will try to restate the point in its bare form: to analyse something
>by reductive techniques, we must be able to view it objectively.  But
>to view consciousness objectively is to omit the most important aspect
>of the phenomenon, namely the subjective experience of it, and thus
>any reductionist anaysis made on this basis will be incomplete and/or
>inaccurate.
>
>There - that wasn't so bad, was it? :-)

Maybe it IS so bad.  Let me try taking this position to its logical
conclusion. 

We study things objectively, but consciousness is "subjectivity" itself, so
that any objective study of subjectivity will be just missing the essence of
what is being studied.  Thus, we should reject objectivity for the one case of
studying subjectivity.

But we have already rejected subjective studies of phenomena, because their
person-dependent results are only useful to the person making the discovery.
Eliminating both means of investigation, this argument would seem to rule out
any reductive attempt to fully study consciousness.

Does this, then, argue for holism?

I'm sure this line of reasoning has a gaping hole in it, but it seems
interesting on the surface.  (Another in a series of Why-bother? arguments.)
Pardon my laziness, but I leave it to the esteemed members of the net to
plunge the dagger into the heart of this demon...

-------------------------------------------------------------------------
Keith E. Bettinger                  "Paradise
SUNY at Buffalo Computer Science     Is exactly like
                                     Where you are right now
CSNET:    bettingr@Buffalo.CSNET     Only much much
BITNET:   bettingr@sunybcs.BITNET    Better"    - Laurie Anderson
INTERNET: bettingr@cs.buffalo.edu
UUCP:     ..{bbncca,decvax,dual,rocksvax,watmath,sbcs}!sunybcs!bettingr
-------------------------------------------------------------------------

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (01/19/89)

In article <906@ubu.warwick.UUCP> mirk@uk.ac.warwick.cs (Mike Taylor) writes:
>In article <1995@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>>If reductionism allows us to make progress in understanding all parts
>>of the universe we have heretofore investigated, why should the same
>>method not work in the case of the human mind?
>
>Because the human mind is, by its very nature, something that can only
>be observed in its entirety from within, and this viewpoint of conciousness
>that we have is not succeptible to reductionist methods because we cannot
>view the phenomenon objectively.  It is an intrinsically subjective thing.
>
Certainly the mind can not be observed in its entirety from within.
Introspection is a very poor tool for understanding the mind.  If
we were able to understand the hardware (wetware) in which the mind
is implemented, and create simulations which show similar behavior
to minds, then don't you think we would be able to better understand
the natural mind?  Especially since we could perform experiments with
the simulations which we cannot do easily with the mind?

bwk@mbunix.mitre.org (Barry W. Kort) (01/20/89)

In article <2894@xyzzy.UUCP> throopw@xyzzy.UUCP (Wayne A. Throop) writes:

 > After all, it is not known how one goes about building rich and
 > accurate models of things, and tying these to perceptions.  All
 > I'm saying is that the position of "understanding is modeling" is
 > not an obviously flawed position to take, nor does the position
 > lack something as obvious as a distinguishing factor between
 > levels of understanding.

As it happens, I build models for a living. And, Wayne's comment
notwithstanding, I think I know how I do it.  :-)

In my work, I like to think of the modeling step as "capturing
the structure and behavior of the real system with the model."
Note that the English word, "comprehension", means "to capture
with."  So I agree with Wayne that modeling is a way of understanding
(comprehending) something.

After I capture/comprehend/understand the system, I use the model
to think about what would happen if I tweak one of the "knobs" on
the model.  I like to think of this activity as "cognition".
I can then add automatic feedback loops which maintain stable
behavior under nominal perturbing influences.

Next, I like to subject the model to abnormal conditions (like
overloading it, or introducing a fault condition).  Then I can
observe the behavior and see how the feedback loops compensate
for my nefarious disruption and disturbance of the peace.  Since
this step provides awareness of cause-and-effect patterns, I call
this step "consciousness".
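
As a toy illustration of the nominal-versus-fault exercise (numbers invented
for the example; nothing like one of my real models):

    def simulate(setpoint=10.0, gain=0.5, steps=20, fault_at=None):
        level = 0.0
        history = []
        for t in range(steps):
            if fault_at is not None and t >= fault_at:
                level -= 2.0                     # fault condition: a persistent leak
            level += gain * (setpoint - level)   # feedback loop closes the gap
            history.append(round(level, 2))
        return history

    print(simulate())              # nominal: settles at the setpoint
    print(simulate(fault_at=10))   # faulted: the loop compensates, but only
                                   # partially, settling below the setpoint

Watching where the compensated level ends up is exactly the kind of symptom
I then try to map back to the organic cause.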

Finally, I collect all the observations of behavior under overload
and fault conditions, and learn how to map the observable symptoms
back to the organic cause.  The diagnostic model is the inverse
model of the original system.  So I call this step "insight."

The whole process is referred to as CCCI (C-cubed I):
Comprehension, Cognition, Consciousness, and Insight.

--Barry Kort

throopw@xyzzy.UUCP (Wayne A. Throop) (01/25/89)

> bwk@mbunix.mitre.org (Barry W. Kort)
>> throopw@xyzzy.UUCP (Wayne A. Throop)

>> After all, it is not known how one goes about building rich and
>> accurate models of things, and tying these to perceptions. 
> As it happens, I build models for a living. And, Wayne's comment
> notwithstanding, I think I know how I do it.  :-)

My statement above is intended to mean that we don't know of any
algorithms that can build models as rich and accurate as those we
humans build automatically or at least "subconsciously" (at least, not
for as broad a domain and with reasonable efficiency).  In particular,
the step in model building algorithm that Barry calls "comprehension"
or "capture" is not itself an algorithm, or an atomic operation of any
automaton known.

It is in that narrow sense that I meant that "it is not known how
one goes about building rich and accurate models of things".

--
English language human-machine dialog is roughly equivalent to a
conversation between an idiot and a very young child.
          --- Vincent Rauzino - Datamation May 1982
-- 
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw

bwk@mbunix.mitre.org (Barry W. Kort) (01/28/89)

In article <3046@xyzzy.UUCP> throopw@xyzzy.UUCP (Wayne A. Throop) discusses
the non-algorithmic aspect of model construction.  Wayne writes:

 > the step in model building algorithm that Barry calls "comprehension"
 > or "capture" is not itself an algorithm, or an atomic operation of any
 > automaton known.

The selection of the substrate upon which to build a model is indeed
a non-systematic step.  It takes an Eastman, or a Land to find the
apporpriate substrate.  But after that, the transcription from real
object to image can be utterly mechanical, as we see in Xerography
and photography.

On Saturdays, I work as a volunteer in the Children's Discovery
room at the Boston Museum of Science.  One of the educational
toys found in the room is a construction set called Marble Works.
The plastic pieces can be put together to build wonderful structures
through which a marble rolls from top to bottom.  One day I discovered
that I could construct a tower in the form of a double helix.  Two
marbles raced down the intertwined paths and came out at separate exits.

A few weeks later, I coached a 9-year old boy to build a double helix
as his bemused father looked on.  It wasn't until later that it
dawned on the father that the boy had constructed a model of DNA.  The lad
was just mechanically assembling the parts according to a suggested
pattern.

--Barry Kort