[comp.ai] Sound and complete definitions of intelligence.

josh@klaatu.rutgers.edu (J Storrs Hall) (12/07/88)

Mark Plutowski writes:
    Back to the subject.  Until these terms are better defined, 
    one can be perfectly justified in claiming that they apply to current 
    computers.  Perhaps this is acceptable; if not, then the definition
    needs revision, since obviously from one perspective the application
    to computers is (although tongue firmly planted in cheek) not so
    far-fetched.  I'm looking forward to any sound and complete definitions
    of:		KNOWLEDGE, BELIEF, INTUITION, INDUCTION, 
		    IMAGINATION, INTELLIGENCE.
    believe me.
    ----------------------------------------------------------------------

Let me take a tangent that may shed some light on the subject:
Is it wrong to call a teddy bear a "bear" or Sherlock Holmes a 
"person"?  A real bear is an altogether more serious and thoroughgoing
thing: its "bearness" is generative, that of the teddy ascribed.
The bearness of a teddy bear is a *metaphorical shadow* of that 
of the real bear.

Now let's look at a pc running ELIZA.  The "statementness" of its
character strings, the "knowledgeness" of its stored keywords, are
metaphorical shadows of the real things.
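
To make the metaphor concrete, here is a minimal sketch of the kind of
keyword matching ELIZA does (Python is my choice of notation, and the
keyword table is invented for illustration; it is not the actual ELIZA
script):

    # Toy ELIZA-style keyword matcher (illustrative only).
    # Its "knowledge" is just a table of keywords and canned replies.
    RULES = {
        "mother":   "Tell me more about your family.",
        "always":   "Can you think of a specific example?",
        "computer": "Do machines worry you?",
    }
    DEFAULT = "Please go on."

    def respond(statement):
        words = statement.lower().split()
        for keyword, reply in RULES.items():
            if keyword in words:
                return reply
        return DEFAULT

    print(respond("My mother is always busy"))  # keyword "mother" fires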

Now I claim that the relation of the pc to the human mind is 
like that of a kitten to a tiger or a dollhouse to a bungalow.
They are quite similar in many respects, form and function, 
but so drastically different in scale as to be qualitatively
separate things.  All the ancillary details may be the same,
but the defining characteristic is missing:  the kitten is not
deadly, the dollhouse is not shelter, the pc is not intelligent.

Moravec estimates 10 teraops/10 terawords to be human-equivalent
computational power.  I would be quite comfortable with one teraop/
one terabyte:  the scale of a pc to such a machine is fairly precisely
that of a bacterium to a human body.  
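
As a rough check on that scale claim (using round numbers that are my
assumptions--a late-80s pc at about a million ops/sec and a megabyte of
memory--not figures from Moravec):

    # Order-of-magnitude scale comparison (all figures assumed).
    pc_ops, pc_bytes   = 1e6, 1e6     # ~1 MIPS, ~1 MB personal computer
    big_ops, big_bytes = 1e12, 1e12   # one teraop, one terabyte

    print(big_ops / pc_ops)           # 1e6: a factor of a million in speed
    print(big_bytes / pc_bytes)       # 1e6: and a million in memory

    bacterium_m, human_m = 1e-6, 2.0  # ~1 micron vs. ~2 meters
    print(human_m / bacterium_m)      # 2e6: about the same linear ratio

On these assumptions the gap is about six orders of magnitude either
way, which is indeed roughly the linear scale of a bacterium against a
human body.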

I am not even sure we have to talk about the things the pc does
as requiring any amplification but the quantitative:  if it did a
million template matches for every one it does, held a million
facts for every one it holds, and selected its statements from a set
a million times the size, would this be understanding, knowledge,
judgement?  Maybe so.  But until then, there is a qualitative
difference hiding in the quantitative one.

--JoSH

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (12/07/88)

From article <Dec.6.15.57.28.1988.9988@klaatu.rutgers.edu>, by josh@klaatu.rutgers.edu (J Storrs Hall):
" ...
" Now I claim that the relation of the pc to the human mind is 
" like that of a kitten to a tiger ...

" but the defining characteristic is missing:  the kitten is not
" deadly, ...

Not bad, but why don't we make it a tiger cub instead of a kitten?
A tiger cub can grow into a tiger, and probably will ...

" Moravec estimates 10 teraops/10 terawords to be human-equivalent
" computational power. ...

Surely such estimates are frivolous.  We don't know what or how humans
compute in at least one crucial area, language, except functionally by
the gross results we can observe.  Could you estimate the computational
resources consumed by an unknown program executing under an unknown
operating system given some small samples of its input and output
and fragmentary information about the device in use?  Not feasible,
without (re)constructing the program, at least, which we haven't
yet managed to do for humans.

		Greg, lee@uhccux.uhcc.hawaii.edu

josh@klaatu.rutgers.edu (J Storrs Hall) (12/09/88)

I wrote:
" Moravec estimates 10 teraops/10 terawords to be human-equivalent
" computational power. ...

Greg, lee@uhccux.uhcc.hawaii.edu replied:

    Surely such estimates are frivolous.

They are not.  Let me recommend to you not only Hans' published
work but Sejnowski in AAAI-88 and Merkle in AIAA Computers in
Aerospace 87.  It is obviously of critical importance to AI to
have some understanding of the size of the problem it is trying
to solve, relative to the tool it is trying to use.

Surely the estimates are imprecise--I haven't seen even one that
claimed to be better than order-of-magnitude--but estimates I have
read from widely varying sources fall into the 10^12 to 10^15
ops/sec range with surprising consistency.

You should not so blithely dismiss an area where serious, informed 
estimation dates back to von Neumann ("The Computer and the Brain").

    We don't know what or how humans
    compute in at least one crucial area, language, except functionally by
    the gross results we can observe.

So what?  The parts we do know about, the retina for example, give
us some guidelines for estimating an upper bound for whatever
computation is being done.  And we can theorize and conjecture.
Our estimates may be wrong, but they are not frivolous.
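
For a flavor of how such an estimate goes, here is one back-of-the-
envelope version of the retina argument.  The specific figures are
round-number assumptions in the spirit of Moravec's published
calculation, not his exact numbers:

    # Back-of-envelope brain estimate scaled up from the retina.
    # Every figure below is an assumed round number.
    fibers       = 1e6   # optic nerve output fibers
    frames       = 10    # useful "frames" per second
    ops_per_cell = 100   # ops to compute one edge/motion detection

    retina_ops = fibers * frames * ops_per_cell   # ~1e9 ops/sec

    brain_to_retina = 1e4   # assumed brain-to-retina scale factor
    print(retina_ops * brain_to_retina)           # ~1e13 ops/sec

Change the assumptions by a factor of ten in either direction and you
stay inside the 10^12 to 10^15 range quoted above.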

    Could you estimate the computational
    resources consumed by an unknown program executing under an unknown
    operating system given some small samples of its input and output
    and fragmentary information about the device in use?  Not feasible,
    without (re)constructing the program, at least, which we haven't
    yet managed to do for humans.

Again, let me start vivisecting the computer with appropriate test
instruments and I can begin to give you some believable upper and
lower bounds.

...Mind you, this is not to say that there aren't significantly better
ways the brain could be doing some of the things it does.  Consider
what a Cray could do to all those long division problems you slaved
over in grade school.  And the existence of "idiot savant" human
calculators proves that there are significantly faster ways that
even the brain can do some things like that.

--JoSH

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (12/09/88)

From article <Dec.8.18.59.38.1988.10409@klaatu.rutgers.edu>, by josh@klaatu.rutgers.edu (J Storrs Hall):
" ...
"     We don't know what or how humans
"     compute in at least one crucial area, language, except functionally by
"     the gross results we can observe.
" 
" So what?  The parts we do know about, the retina for example, give
" us some guidelines for estimating an upper bound for whatever
" computation is being done.

That seems reasonable, in general terms at least.  But the reasoning in
the article I was commenting on required estimating a lower bound.
That's different.  Language might require much or little of the
maximum computational capacity one can impute to the human organism.
Without knowing the algorithms involved, or what data needs to be
stored, or how it is stored, there's no way of telling.  If it requires
much, one might agree that human intellectual abilities are
qualitatively different than those which could ever be exhibited
by machines of the sort that are familiar to us.  If it requires
little, one might disagree.  Without knowing, there is no means to
judge.

" And we can theorize and conjecture.
" Our estimates may be wrong, but they are not frivolous.

I'm all for theory and conjecture.  And so far as maximum capacity goes,
maybe the estimates make sense.  I should have qualified my charge of
frivolity more carefully.  Putting it better:  the conclusion that
computers cannot in principle match human intellectual abilities on the
grounds that humans have much more computational capacity available
involves a frivolous interpretation of an estimate perhaps meaningful in
other applications.

"     Could you estimate the computational
"     resources consumed by an unknown program executing under an unknown
"     operating system given some small samples of its input and output
"     and fragmentary information about the device in use?  Not feasible,
"     without (re)constructing the program, at least, which we haven't
"     yet managed to do for humans.
" 
" Again, let me start vivisecting the computer with appropriate test
" instruments and I can begin to give you some believable upper and
" lower bounds.

I say you would have to reconstruct the program, at least in part, with
your test instruments, to get the lower bounds.  Perhaps it's arguable, but
I think this has not been done for humans in the exercise of their
intellectual capacities, and there is no reasonable prospect of its
being done in the near future.

" ...
		Greg, lee@uhccux.uhcc.hawaii.edu

josh@klaatu.rutgers.edu (J Storrs Hall) (12/10/88)

Starting in the middle:

    " And we can theorize and conjecture.
    " Our estimates may be wrong, but they are not frivolous.

    I'm all for theory and conjecture.  And so far as maximum capacity goes,
    maybe the estimates make sense.  I should have qualified my charge of
    frivolity more carefully.  Putting it better:  the conclusion that
    computers cannot in principle match human intellectual abilities on the
    grounds that humans have much more computational capacity available
    involves a frivolous interpretation of an estimate perhaps meaningful in
    other applications.

Aha.  On the contrary, I claim that a human-equivalent computer is
buildable now, would be a million-dollar supercomputer in the
mid-90's, and a personal computer by 2010.  

Let me put that another way.  It is the consensus of people I have
read and heard on the subject (respected in their fields) that the
state of the technology will produce a one-rack, $100K, human-
processing-power-equivalent machine around the year 2000.  *It is
much less likely that the appropriate software will be available*.

--JoSH

dmocsny@uceng.UC.EDU (daniel mocsny) (12/11/88)

In article <Dec.9.15.13.42.1988.10600@klaatu.rutgers.edu>, josh@klaatu.rutgers.edu (J Storrs Hall) writes:
> It is the consensus of people I have
> read and heard on the subject (respected in their fields) that the
> state of the technology will produce a one-rack, $100K, human-
> processing-power-equivalent machine around the year 2000.  *It is
> much less likely that the appropriate software will be available*.

If my right arm is as strong as Da Vinci's was, will I now paint _The
Last Supper_?

I'm glad you included the disclaimer about software. Perhaps we will
find that gross computational power is even less of an issue than we
now believe.

Since we have essentially no understanding of how much leverage the
brain gets from its emergent properties, time domain multiplexing,
or analog processing, I regard such comparisons with some suspicion.
Nonetheless, I greedily await the opportunity to own and program
such a machine as you predict, even if I cannot reproduce my own 
thoughts on it.

Cheers,

Dan Mocsny
dmocsny@uceng.uc.edu

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (12/11/88)

From article <Dec.9.15.13.42.1988.10600@klaatu.rutgers.edu>, by josh@klaatu.rutgers.edu (J Storrs Hall):
"...
" state of the technology will produce a one-rack, $100K, human-
" processing-power-equivalent machine around the year 2000.  *It is
" much less likely that the appropriate software will be available*.

Sounds cost-effective, if it weren't for the darned software problem.
		Greg

fransvo@htsa (Frans van Otten) (12/12/88)

In article <Dec.8.18.59.38.1988.10409@klaatu.rutgers.edu> josh@klaatu.rutgers.edu (J Storrs Hall) writes:
>                       And the existence of "idiot savant" human
>calculators proves that there are significantly faster ways that
>even the brain can do some things like that.
>
>--JoSH

I disagree. In these cases a part of the brain is over-developed. Let's
agree that the human brain is very powerful. 'Normal' people spread this
power quite thinly (see my article about multiple 'kinds' of
intelligence). Idiot savants concentrate most of their brain power on a
very small task. To compare: take a big mainframe with hundreds of
users. If you ran a single-user, single-tasking operating system on the
same hardware, wouldn't that be fast!
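
Frans's mainframe analogy in round numbers (the figures are mine, purely
for illustration):

    # Timesharing analogy (illustrative figures only).
    total_mips = 50.0   # a big late-80s mainframe
    users      = 200    # a heavy timesharing load

    per_user = total_mips / users
    print(per_user)               # 0.25 MIPS per user
    print(total_mips / per_user)  # dedicate it to one task: 200x faster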

-- 
                         Frans van Otten
                         Algemene Hogeschool Amsterdam
			 Technische en Maritieme Faculteit
                         fransvo@htsa.uucp

markh@csd4.milw.wisc.edu (Mark William Hopkins) (12/22/88)

In article <Dec.6.15.57.28.1988.9988@klaatu.rutgers.edu> josh@klaatu.rutgers.edu (J Storrs Hall) makes reference to:

Mark Plutowski's challenge:
>    Back to the subject.  Until (intelligence, intuition, etc.) are better
>    defined,  one can be perfectly justified in claiming that they apply to
>    current computers.  Perhaps this is acceptable; if not, then the
>    definition needs revision, since obviously from one perspective the 
>    application to computers is (although tongue firmly planted in cheek) not
>    so far-fetched.  I'm looking forward to any sound and complete definitions
>    of:		KNOWLEDGE, BELIEF, INTUITION, INDUCTION, 
>		    IMAGINATION, INTELLIGENCE.

Let's take a stab at it:

	   INDUCTION: Having inductive ability means being able to derive
		      more general facts from less general instances in a
		      reliable (though not infallible) way.

	   INTELLIGENCE: The ability to successfully cope with unexpected
			 problems is the core of intelligence.

One could say that the pinnacle of intelligence lies in being able to
program (or teach!) this kind of intelligence.

     Some people also view intelligence as having a lot of specialized
knowledge, but I think the idea that it is nothing more than that is an insult 
to everyone's intelligence.

The other terms are momentarily beyond me.
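
As a toy illustration of the induction definition above (a deliberately
trivial sketch; the instance data and the intersection rule are mine,
and real induction is of course far harder):

    # Toy induction: generalize whatever is common to all instances.
    instances = [
        {"swan", "white", "bird"},
        {"swan", "white", "large"},
        {"swan", "white", "waterfowl"},
    ]

    # Properties shared by every instance become the general rule.
    rule = set.intersection(*instances)
    print(rule)   # {'swan', 'white'}: "all swans are white"

The derived rule is reliable on the evidence seen, and famously
fallible, which is exactly the reliable-but-not-infallible character
the definition asks for.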

bwk@mbunix.mitre.org (Barry W. Kort) (12/24/88)

In article <44@csd4.milw.wisc.edu> markh@csd4.milw.wisc.edu
(Mark William Hopkins) writes:
 > In article <Dec.6.15.57.28.1988.9988@klaatu.rutgers.edu>
 > josh@klaatu.rutgers.edu (J Storrs Hall) makes reference to:
 > Mark Plutowski's challenge:
 > >    I'm looking forward to any sound and complete definitions of:
 > >   	KNOWLEDGE, BELIEF, INTUITION, INDUCTION, IMAGINATION, INTELLIGENCE.
 > 
 > Let's take a stab at it:
 > 
 > 	   INDUCTION: Having inductive ability means being able to derive
 > 		      more general facts from less general instances in a
 > 		      reliable (though not infallible) way.
 > 
 > 	   INTELLIGENCE: The ability to successfully cope with unexpected
 > 			 problems is the core of intelligence.

Permit me to gently remove Mark's dagger from the corpus of discussion,
and quote from the HyperCard stack, "Semantic Network":

Knowledge is a structured integration of information that
enables thoughtful action.

A theory is a belief about a system for which the evidence is
consistent but inconclusive.

Intuition is a form of theory construction using model-based reasoning
on partial information.

Inductive reasoning (backward chaining or goal-directed reasoning)
is a form of reasoning in which a knowledge base is traversed to
find causal antecedents consistent with asserted facts.

Imagination is the process of conceiving ideas (or possibilities)
for changing the state-of-affairs of a system.

Intelligence is the ability to think and solve problems.

Inferential reasoning is a form of information processing that
transforms observations of correlated events into theories
about cause and effect relationships.

Thinking is a rational form of information processing which reduces
the entropy or uncertainty of a knowledge base, generates solutions
to outstanding problems, and conceives goal-oriented courses of action.

[There's more, but we'll save the rest for later.]
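
The backward chaining mentioned in the entry on inductive reasoning can
be sketched in a few lines.  The rules and facts here are invented for
illustration and are not from the "Semantic Network" stack:

    # Minimal backward chainer: is the goal derivable from the facts?
    # Rules map a consequent to its list of antecedents.
    RULES = {
        "wet_grass": ["rain"],
        "rain":      ["clouds", "low_pressure"],
    }
    FACTS = {"clouds", "low_pressure"}

    def prove(goal):
        if goal in FACTS:
            return True
        antecedents = RULES.get(goal)
        return (antecedents is not None and
                all(prove(a) for a in antecedents))

    print(prove("wet_grass"))   # True: traced back through "rain"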

--Barry Kort