[comp.ai] Algorithms, Turing, Semantics

kadickey@phoenix.Princeton.EDU (Kent Andrew Dickey) (01/13/90)

I clearly must be missing something very important to all of these
discussions.  Therefore, before I spew out my version of the world, I
have three questions:

1) Could someone name one process which cannot be broken down into an
algorithm?  That is, many people have said the mind may not be simulated
through an algorithm, but personally, no other examples of
non-algorithmic processes occur to me.  Please, someone, tell me what
obvious example I'm missing.

2) Others have claimed that current computers are not Turing machines--is
this not false?  Aren't all digital computers simulatable on a Turing
machine?  Or again, am I missing something obvious?  What's the simplest
problem a digital computer can solve that a Turing machine cannot?

3)  What is the definition of semantics?  Before even attempting to
argue that syntax can give rise to semantics, I'd like to hear some
definitions of what all of you think semantics means.  To me, semantics
is just a fancy term (aren't we all fond of using fancy terms) for
having a defined meaning--for example, that any time anyone in the
universe sees the word "telephone" they would think of a telephone, just
like any of us would :-).  It seems that nothing in life, by this
loose, derived definition, could ever have any semantics, so please,
someone define it for me before I post something I'll regret.

As this post may indicate, I have something to say on each of these
topics, but wish to avoid the serious Net-gaffe of posting something ill-
informed.  I feel, however, that others may need the same clarification
as I, so I am posting this note first.  Feel free to either post your
responses or send E-Mail, as I will summarize any replies I get.

			Kent Dickey
kadickey@phoenix.Princeton.EDU

gall@yunexus.UUCP (Norm Gall) (01/14/90)

kadickey@phoenix.Princeton.EDU (Kent Andrew Dickey) writes:


| I clearly must be missing something very important to all of these
| discussions.  Therefore, before I spew out my version of the world, I
| have three questions:

| 1) Could someone name one process which cannot be broken down into an
| algorithm?  That is, many people have said the mind may not be simulated
| through an algorithm, but personally, no other examples of
| non-algorithmic processes occur to me.  Please, someone, tell me what
| obvious example I'm missing.

Well, no one has convincingly established that 'the mind' is a process
(or collection of processes, etc.), so I can grant that all processes
_can_ be broken down into an algorithm, add that nothing which isn't a
process can be 'algorithmised', and still conclude that the mind cannot
be broken down into an algorithm.

I am quite sure few _here_ will concur.

nrg

-- 
York University          | "Philosophers who make the general claim that a 
Department of Philosophy |       rule simply 'reduces to' its formulations
Toronto, Ontario, Canada |       are using Occam's razor to cut the throat
_________________________|       of common sense."             - R. Harris

kp@uts.amdahl.com (Ken Presting) (01/14/90)

In article <12883@phoenix.Princeton.EDU> kadickey@phoenix.Princeton.EDU (Kent Andrew Dickey) writes:
>
>I clearly must be missing something very important to all of these
>discussions.  Therefore, before I spew out my version of the world, I
>have three questions:
>
>1) Could someone name one process which cannot be broken down into an
>algorithm?  That is, many people have said the mind may not be simulated
>through an algorithm, but personally, no other examples of
>non-algorithmic processes occur to me.  Please, someone, tell me what
>obvious example I'm missing.

There has been a discussion here lately as to whether all processes are
algorithmic, and some controversy perhaps remains.  Algorithms are finite
sets of rules determining transitions among a finite set of states.
Physical systems usually have an infinite number of states, so any
algorithm can at best approximate the behavior of a physical system. In
a physical system with unstable components (where a small perturbation
can produce large changes) the approximation may never be good enough.
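
To make the instability point concrete, here is a rough sketch in
Python (a toy of my own, not anything from the earlier discussion):
iterate the logistic map from two starting points a tiny perturbation
apart, and the trajectories soon differ completely, so any
finite-precision approximation loses track of the "real" system.

    # Logistic map x -> r*x*(1-x), a standard unstable (chaotic) system.
    # Two trajectories starting 1e-12 apart end up completely different.
    r = 4.0
    x, y = 0.3, 0.3 + 1e-12
    for step in range(61):
        if step % 10 == 0:
            print(step, abs(x - y))
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)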

At least that is one point of view.  Quantum mechanics complicates the
story, but doesn't seem to change the conclusion.  I think there is some
agreement that finitary algorithms are *different* in behavior from
physical systems, and disagreement is only about the significance of the
difference.

Much more controversial is the question whether algorithms (or the tokens
they manipulate) lack meaning.  It is agreed that most symbol tokens in
most algorithms can be assigned arbitrary meanings without changing the
algorithm, and that therefore the algorithm is at least partly independent
of meaning.  Some infer that this makes algorithms different from
thinking, which is what the Chinese Room debate is all about.
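
A toy illustration of that independence (entirely my own invention):
the table-driven procedure below shuffles the tokens "a" and "b"
without caring what they mean, and two quite different denotations can
be bolted on afterwards without changing the procedure at all.

    # The algorithm proper: pure symbol manipulation over 'a' and 'b'.
    table = {('a', 'a'): 'a', ('a', 'b'): 'b',
             ('b', 'a'): 'b', ('b', 'b'): 'a'}

    def combine(x, y):
        return table[(x, y)]

    # Two different meanings assigned to the same tokens after the fact:
    as_parity = {'a': 0, 'b': 1}          # read the tokens as addition mod 2
    as_logic = {'a': False, 'b': True}    # or read them as exclusive-or

    result = combine('b', 'b')
    print(result, as_parity[result], as_logic[result])   # a 0 False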

>2) Others have claimed that current computers are not Turing machines--is
>this not false?  Aren't all digital computers simulatable on a Turing
>machine?  Or again, am I missing something obvious?  What's the simplest
>problem a digital computer can solve that a Turing machine cannot?

A guiding principle behind most of computer science is the "Church-Turing
Thesis" - that all possible types of computation can be performed by
Turing machines.  Many formal systems for defining computations have been
devised, and all can be reduced to Turing machine programs.

The only differences between computers and Turing machines are:
(1) Computers don't have an infinitely long tape
(2) Computers are real physical objects, whereas Turing machines are
    abstract objects (like numbers or functions)
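
For concreteness, the abstract machine is small enough to write down in
a few lines.  Here is a rough sketch in Python (the transition table is
a made-up toy that just appends a "1" to a unary string); note where
point (1) bites: the dictionary standing in for the tape can only grow
as far as real memory allows.

    # A Turing machine as a table: (state, symbol) -> (state, write, move).
    rules = {('scan', '1'): ('scan', '1', +1),   # skip over the 1s
             ('scan', '_'): ('done', '1',  0)}   # write a 1 on the first blank

    def run(input_string):
        tape = dict(enumerate(input_string))     # finite stand-in for the tape
        state, pos = 'scan', 0
        while state != 'done':
            symbol = tape.get(pos, '_')          # unvisited cells read as blank
            state, write, move = rules[(state, symbol)]
            tape[pos] = write
            pos += move
        return ''.join(tape[i] for i in sorted(tape))

    print(run('111'))   # -> 1111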

>3)  What is the definition of semantics?  Before even attempting to
>argue that syntax can give rise to semantics, I'd like to hear some
>definitions of what all of you think semantics means.  To me, semantics
>is just a fancy term (aren't we all fond of using fancy terms) for
>having a defined meaning--for example, that any time anyone in the
>universe sees the word "telephone" they would think of a telephone, just
>like any of us would :-).  It seems that nothing in life, by this
>loose, derived definition, could ever have any semantics, so please,
>someone define it for me before I post something I'll regret.

The situation is not so bad as it sometimes appears, at least for the
semantics of formal languages (e.g. mathematics or programs).  Basically,
the semantics of any language is a function from the words to objects and
events.

Syntax is a set of rules (e.g. a grammar or regular expressions) which
defines which strings of symbols are in the language.

Semantics has several parts:
(1) a "model", which is a set of objects (perhaps the integers, or physical
    things) plus relations among those objects (such as {(x,y): x=y}).
    Usually properties (such as being an even number) are lumped in with
    relations as a special case.
(2) a "denotation function", which maps most of the symbols to objects and
    relations in the model (e.g. "1" |-> the smallest positive integer)
(3) "truth conditions" for sentences (e.g. "a=b" is true if and only if
    the denotation of "a" = the denotation of "b")

It is crucial that the truth conditions be defined recursively, so that
the truth of a complex sentence can be determined from the truth of the
parts, e.g. "apples are red and pears are green" is true when both parts
are true.  Meaning is the combination of denotation and truth conditions.
Denotation is also called "reference".
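
Here is a toy version of (1)-(3) in Python, just to fix ideas (the
model, the names, and the sentences are all my own invention):

    # (1) A model: some objects plus relations among them.
    objects = {1, 2, 3}
    relations = {'even': {2}, 'less': {(1, 2), (1, 3), (2, 3)}}

    # (2) A denotation function from names in the language to objects.
    denotes = {'a': 1, 'b': 2, 'c': 3}

    # (3) Truth conditions, defined recursively over the parts of a sentence.
    def true(s):
        op = s[0]
        if op == 'and':
            return true(s[1]) and true(s[2])
        if op == 'not':
            return not true(s[1])
        if op == 'even':
            return denotes[s[1]] in relations['even']
        if op == 'less':
            return (denotes[s[1]], denotes[s[2]]) in relations['less']

    # "a is less than b, and c is not even" -- true in this model.
    print(true(('and', ('less', 'a', 'b'), ('not', ('even', 'c')))))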

I've ignored issues of consciousness, intentions, and lots of others,
because with even this simple type of semantics, lots of really
horrible problems can be productively studied.

This is a sketchy definition, but it's enough to see why the Chinese Room
argument gets started.  When Searle says that he "doesn't understand", he
is saying that he doesn't know the denotation function and truth
conditions for the Chinese sentences.  Again, this is a sketchy reading of
the Chinese room problem, and not the whole picture.  But perhaps it's
enough to get started.

>As this post may indicate, I have something to say on each of these
>topics, but wish to avoid the serious Net-gaffe of posting something ill-
>informed.  I feel, however, that others may need the same clarification
>as I, so I am posting this note first.  Feel free to either post your
>responses or send E-Mail, as I will summarize any replies I get.

Thanks for asking.  I have given my own opinions more prominence in this
reply than they perhaps deserve, but I hope I've been fair to those who
disagree.

You might want to look up the article "The Semantic Conception of Truth"
by Alfred Tarski, reprinted in several places including a collection
edited by Leonard Linsky (I've forgotten the title).  This is certainly
the most famous paper ever written about semantics - Tarski shows how to
solve the Liar paradox.  Tarski was a fine writer and one of the great
mathematicians of the century.

Any first-year logic textbook will have a better account of semantics than
I've given above.  I like Benson Mates's "Elementary Logic", but that's just
because it was my first logic text.

cash@muse.uucp (Peter Cash) (01/15/90)

In article <91Eq02wy7eX=01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>In article <12883@phoenix.Princeton.EDU> kadickey@phoenix.Princeton.EDU (Kent Andrew Dickey) writes:
>>
>>I clearly must be missing something very important to all of these
>>discussions.  Therefore, before I spew out my version of the world, I
>>have three questions:
>>
>>1) Could someone name one process which cannot be broken down into an
>>algorithm?  That is, many people have said the mind may not be simulated
>>through an algorithm, but personally, no other examples of
>>non-algorithmic processes occur to me.  Please, someone, tell me what
>>obvious example I'm missing.
>
>There has been a discussion here lately as to whether all processes are
>algorithmic, and some controversy perhaps remains.  Algorithms are finite
>sets of rules determining transitions among a finite set of states.
>Physical systems usually have an infinite number of states, so any
>algorithm can at best approximate the behavior of a physical system. In
>a physical system with unstable components (where a small perturbation
>can produce large changes) the approximation may never be good enough.

You are, of course, operating with a very narrow definition of what an
"algorithm" is (Turing's definition).  But to me, it sounds very odd to say
that an algorithm can "approximate" a physical system; I might concede that
a *model* can--in some strict sense--"approximate" a physical system, but an
algorithm, never.  An algorithm is a formal statement of rules (e.g., the
rules used to solve a problem, or construct a model, etc.); it is not the
thing itself.

For example, a topographical map can be said to correspond to the terrain
(I can't bring myself to say that it "approximates" the terrain).  But by
no stretch of the imagination do the cartographer's "algorithms" (his
procedures and methods) correspond to the terrain.

It seems to me that this is a confusion that arises out of conflating the
"states" of a Turing machine with the "states" of a physical system.  They
have something in common only in the most abstract way.  To speak of them
in the same breath begs the question of artificial intelligence.  If we
think that people are in some sense "systems" that have "states" like
Turing machines, then the outcome has already been decided:  people can be
"approximated" by algorithms.  Thus, the really important point that must
be decided first is in what sense (if any) people are like Turing machines.

>...[lots of formal stuff about semantics deleted]...

>I've ignored issues of consciousness, intentions, and lots of others,
>because with even this simple type of semantics, lots of really
>horrible problems can be productively studied.

But you imply that there *is* a connection between the concerns of the
formal semanticist and "consciousness, intentions, and lots of others".
This connection is what is really interesting and important here, and this
connection is what needs to be demonstrated.  After all, much of the
interest in semantics would wane if people didn't assume that there is a
connection.  But is this more than an unfounded assumption?  If you think
that this assumption has a foundation, I would like to know what it is.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
             |      Die Welt ist alles, was Zerfall ist.     |
Peter Cash   |       (apologies to Ludwig Wittgenstein)      |    cash@convex
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

kpfleger@phoenix.Princeton.EDU (Karl Robert Pfleger) (01/16/90)

In article <91Eq02wy7eX=01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>In article <12883@phoenix.Princeton.EDU> kadickey@phoenix.Princeton.EDU (Kent Andrew Dickey) writes:
>>
>>1) Could someone name one process which cannot be broken down into an
>>algorithm?  That is, many people have said the mind may not be simulated
>>through an algorithm, but personally, no other examples of
>>non-algorithmic processes occur to me.  Please, someone, tell me what
>>obvious example I'm missing.
>
>There has been a discussion here lately as to whether all processes are
>algorithmic, and some controversy perhaps remains.  Algorithms are finite
>sets of rules determining transitions among a finite set of states.
>Physical systems usually have an infinite number of states, so any
>algorithm can at best approximate the behavior of a physical system. In
>a physical system with unstable components (where a small perturbation
>can produce large changes) the approximation may never be good enough.

First of all, algorithms can determine transitions among an infinite
number of states. There is no reason why an algorithm can't operate on
analog quantities. For instance, I consider Gaussian elimination to be an
algorithm for solving systems of equations.
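
To put that example in code (a sketch of my own, using exact rational
arithmetic): the elimination rule itself is finite, but the quantities
it manipulates range over an infinite domain.

    from fractions import Fraction

    # Gaussian elimination on a 2x2 system, 2x + 3y = 5 and 4x + 9y = 13,
    # carried out over exact rationals rather than a finite set of states.
    a = [[Fraction(2), Fraction(3), Fraction(5)],
         [Fraction(4), Fraction(9), Fraction(13)]]

    m = a[1][0] / a[0][0]                        # eliminate x from row 2
    a[1] = [a[1][j] - m * a[0][j] for j in range(3)]

    y = a[1][2] / a[1][1]                        # back-substitute
    x = (a[0][2] - a[0][1] * y) / a[0][0]
    print(x, y)                                  # -> 1 1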

Second, I see no reason why a variable can't be part of a rule with the
stipulation that the rule is usable for an infinite number of possible
values of the variable. This can be considered an infinite number of fixed
rules. (A minor point.)

Lastly, there is no proof that physical systems have an infinite number
of states. Space could be discrete. Time could be discrete. Either could
be finite.

-Karl		kpfleger@phoenix.princeton.edu
		kpfleger@pucc (bitnet)

bls@cs.purdue.EDU (Brian L. Stuart) (01/16/90)

In article <91Eq02wy7eX=01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>In article <12883@phoenix.Princeton.EDU> kadickey@phoenix.Princeton.EDU (Kent Andrew Dickey) writes:
>>
>>2) Others have claimed that current computers are not Turing machines--is
>>this not false?  Aren't all digital computers simulatable on a Turing
>>machine?  Or again, am I missing something obvious?  What's the simplest
>>problem a digital computer can solve that a Turing machine cannot?
>
>A guiding principle behind most of computer science is the "Church-Turing
>Thesis" - that all possible types of computation can be performed by
>Turing machines.  Many formal systems for defining computations have been
>devised, and all can be reduced to Turing machine programs.
>
>The only differences between computers and Turing machines are:
>(1) Computers don't have an infinitely long tape
>(2) Computers are real physical objects, whereas Turing machines are
>    abstract objects (like numbers or functions)

If you'll indulge me one last waste of bandwidth, I'd like to elaborate
on this point briefly.  Yes, all computers (except those with physical
noise sources) are simulatable on a TM.  The converse is not true because
they don't have the infinite tape as mentioned in (1).  To be precise,
here is the relationship among computer programs running on finite computers,
algorithms and TMs.
a)  Since a finite computer has a finite (albeit large) number of states,
    it is a finite automaton, and finite automata recognize only regular
    languages.
b)  Algorithms by definition can recognize recursive languages, which are
    a proper super-set of regular languages.
c)  TMs recognize recursively enumerable languages, which are in turn a
    proper super-set of recursive languages.
So the question is not "what can a computer do that a TM can't?"  It is
"what can a TM do that a computer can't?"  And the answer is a non-empty
set of things.  There are no things that a computer can do that a TM
can't (except for certain computers that have quantum noise sources to
generate "true" random numbers).

The issue for AI is whether or not the brain's operation fits into one
of these categories.  If it fits into the first, then strong AI is assured.
If it fits into the other two, then we want to know if a finite approximation
will suffice.  If it will, then again strong AI is assured.  If not, well,
then we can still have fun showing it.  Besides, there are still some
practical problems that we can solve without achieving a full intelligence.

Brian L. Stuart
Department of Computer Science
Purdue University

aboulang@bbn.com (Albert Boulanger) (01/17/90)

In article <12945@phoenix.Princeton.EDU>, Karl Robert Pfleger writes:

  Lastly, there is no proof that physical systems have an infinite number
  of states. Space could be discrete. Time could be discrete. Either could
  be finite.
 

Yeah. Interestingly, T. Erber and S. Putterman in "Randomness in Quantum
Mechanics -- Nature's Ultimate Cryptogram?", Nature, Vol 318 #7,
November 1985, pp 41-43, suggest looking for pseudo-random signatures
in isolated single-ion systems, based in part on the
quantum-mechanical quantization of phase-space at Planck lengths. If QM
were deterministic (and the axiomatic development of QM says nothing
about this quality), then single isolated QM systems would recur.

Looking for a QM computer with maximal algorithmic complexity quantities,
Albert Boulanger
BBN Systems & Technologies Corp.
aboulanger@bbn.com

adamsf@turing.cs.rpi.edu (Frank Adams) (01/18/90)

In article <4593@convex.UUCP> cash@convex.COM (Peter Cash) writes:
>For example, a topographical map can be said to correspond to the terrain
>(I can't bring myself to say that it "approximates" the terrain).  But by
>no stretch of the imagination do the cartographer's "algorithms" (his
>procedures and methods) correspond to the terrain.

Of course they do not.  But then, neither do the maps correspond to my
automobile.

More generally, an algorithm does not, I think, correspond to any particular
object.  It is in the nature of an algorithm to be general.  But an algorithm
plus data can correspond to some physical system, with a more precise
correspondence than just the data has.  For example, consider the programs
meteorologists use, which are fed huge amounts of data, and do even more
computation, producing predictions about the weather.  The data alone
corresponds to the current state of the weather; the data plus program
corresponds to the behavior of the weather.
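
A drastically scaled-down caricature of that (my own toy, nothing to do
with real meteorological codes): the integrator below is the general
algorithm; only when it is handed data describing a particular system
does the combination correspond to that system's behavior.

    # The general part -- a forward-Euler integrator, the "algorithm".
    def integrate(state, derivative, dt, steps):
        for _ in range(steps):
            state = [s + dt * d for s, d in zip(state, derivative(state))]
        return state

    # The particular part -- data about one system (a cooling cup of
    # coffee standing in for the meteorologist's observations).
    def cooling(state, ambient=20.0, k=0.1):
        return [-k * (state[0] - ambient)]

    print(integrate([90.0], cooling, dt=0.1, steps=100))   # temp. at t=10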

(As an aside, if one may speak of an algorithm as a process with the
specifics of the situation factored out, as I think one can; and if by
the behavior of a system we mean how it acts in general, not in a particular
instance, which is one of the possible meanings of behavior; then it is
quite legitimate to speak of an algorithm as corresponding to the behavior
of a system.)

Finally, note that when one speaks of a "computer program", one is not
necessarily just talking about [the implementation of] an algorithm; as
much data as desired can be incorporated into the program.

utility@quiche.cs.mcgill.ca (Ronald BODKIN) (01/19/90)

In article <12945@phoenix.Princeton.EDU> kpfleger@phoenix.Princeton.EDU (Karl Robert Pfleger) writes:
>Lastly, there is no proof that physical systems have an infinite number
>of states. Space could be discrete. Time could be discrete. Either could
>be finite.
	I think that the ultimate test of the Church-Turing thesis is
whether a Turing machine exists which could compute the universe's
behaviour.  If so, then, naturally, everything which is "really" computable
is Turing-computable and, indeed, strong AI is 1000% correct.  As
for whether a Turing machine could do this, it's a hard question -- I
think that given discrete space/time that does not extend infinitely (although
my best knowledge of modern physics is that such a description is incredibly
simplistic) it could certainly be done.  However, I would contend that
if the universe is in any way infinite (which it does not currently appear
to be), then it is not Turing-computable.  In that case there is an algorithm
for deciding things which is not Turing-computable, so the thesis fails (I
consider the universe to be a massive algorithm that decides things).  And
as for questions of randomness, everyone knows that deterministic Turing
machines can simulate non-deterministic ones, and probability densities
can likewise be carried around for given events, and no operation can create
a number out of two finitely-representable ones which is not itself finitely
representable (e.g. from 1 and 5 I can get sin(6), but sin(6) is a finite
representation).
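
(The standard trick behind that deterministic simulation is a
breadth-first search over all of the nondeterministic machine's
possible configurations; a rough sketch, with a transition relation I
made up purely for illustration:)

    from collections import deque

    # A nondeterministic transition relation: each state has a *set* of
    # possible successors.  'acc' is the accepting state.
    step = {'s': {'a', 'b'}, 'a': {'acc'}, 'b': {'b'}, 'acc': set()}

    def accepts(start, max_steps=10):
        # Deterministically explore every branch, breadth-first.
        frontier = deque([(start, 0)])
        seen = {(start, 0)}
        while frontier:
            state, depth = frontier.popleft()
            if state == 'acc':
                return True
            if depth < max_steps:
                for nxt in step[state]:
                    if (nxt, depth + 1) not in seen:
                        seen.add((nxt, depth + 1))
                        frontier.append((nxt, depth + 1))
        return False

    print(accepts('s'))   # True: one branch of guesses reaches 'acc'
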
	Has anyone heard mention of the universe in relation to the old
Church-Turing thesis?
			Ron