[sci.philosophy.tech] Algorithms, Turing, Semantics

kp@uts.amdahl.com (Ken Presting) (01/14/90)

In article <12883@phoenix.Princeton.EDU> kadickey@phoenix.Princeton.EDU (Kent Andrew Dickey) writes:
>
>I clearly must be missing something very important to all of these
>discussions.  Therefore, before I spew out my version of the world, I
>have three questions:
>
>1) Could someone name one process which cannot be broken down into an
>algorithm?  That is, many people have said the mind may not be simulated
>through an algorithm, but personally, no other examples of
>non-algorithmic processes occur to me.  Please, someone, tell me what
>obvious example I'm missing.

There has been a discussion here lately as to whether all processes are
algorithmic, and some controversy perhaps remains.  Algorithms are finite
sets of rules determining transitions among a finite set of states.
Physical systems usually have an infinite number of states, so any
algorithm can at best approximate the behavior of a physical system. In
a physical system with unstable components (where a small perturbation
can produce large changes) the approximation may never be good enough.
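
To illustrate, here is a toy sketch (in Python; the logistic map is only
a hypothetical stand-in for an unstable system, not a claim about any
real one):

# Two simulated trajectories of the logistic map, started a tiny
# perturbation apart.  The gap grows until one is useless as an
# approximation of the other.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10            # a small perturbation
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 15 == 0:
        print(step, abs(x - y))    # difference grows by orders of magnitude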

At least that is one point of view.  Quantum mechanics complicates the
story, but doesn't seem to change the conclusion.  I think there is some
agreement that finitary algorithms are *different* in behavior from
physical systems, and disagreement is only about the significance of the
difference.

Much more controversial is the question whether algorithms (or the tokens
they manipulate) lack meaning.  It is agreed that most symbol tokens in
most algorithms can be assigned arbitrary meanings without changing the
algorithm, and that therefore the algorithm is at least partly independent
of meaning.  Some infer that this makes algorithms different from
thinking, which is what the Chinese Room debate is all about.
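
Here is a toy sketch of that independence (Python; the tokens and rules
are made up for illustration):

# The algorithm below just applies substitution rules to tokens; it
# never consults what the tokens "mean".
def run(rules, tape):
    return [rules.get(t, t) for t in tape]

rules = {"A": "B"}
print(run(rules, ["A", "C", "A"]))        # ['B', 'C', 'B']

# Now assign arbitrary new "meanings" by relabeling every token.  The
# algorithm itself is untouched, and the run is isomorphic to the first.
relabel = {"A": "telephone", "B": "7", "C": "red"}
rules2 = {relabel[k]: relabel[v] for k, v in rules.items()}
print(run(rules2, [relabel[t] for t in ["A", "C", "A"]]))   # ['7', 'red', '7']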

>2) Others have claimed that current computers are not Turing machines--is
>this not false?  Aren't all digital computers simulatable on a Turing
>machine?  Or again, am I missing something obvious?  What's the simplest
>problem a digital computer can solve that a Turing machine cannot?

A guiding principle behind most of computer science is the "Church-Turing
Thesis" - that all possible types of computation can be performed by
Turing machines.  Many formal systems for defining computations have been
devised, and all can be reduced to Turing machine programs.

The only differences between computers and Turing machines are:
(1) Computers don't have an infinitely long tape
(2) Computers are real physical objects, whereas Turing machines are
    abstract objects (like numbers or functions)
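
To make the abstraction concrete, here is a minimal simulator sketch
(Python; the three-rule machine is a hypothetical toy that increments a
binary number written least-significant-bit first):

# A Turing machine is just a transition table plus an unbounded tape.
def run_tm(delta, tape_input, state="start", blank="_"):
    tape = dict(enumerate(tape_input))   # a dict standing in for the tape
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

delta = {
    ("start", "0"): ("halt",  "1", "R"),   # flip a 0 to 1 and stop
    ("start", "1"): ("start", "0", "R"),   # carry: 1 becomes 0, move right
    ("start", "_"): ("halt",  "1", "R"),   # carried off the end: new 1
}
print(run_tm(delta, "111"))   # "0001", ie 7 + 1 = 8, LSB first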

>3)  What is the definition of semantics?  Before even attempting to
>argue that syntax can give rise to semantics, I'd like to hear some
>definitions of what all of you think semantics means.  To me, semantics
>is just a fancy term (aren't we all fond of using fancy terms) for
>having a defined meaning--for example, that any time anyone in the
>universe sees the word "telephone" they would think of a telephone, just
>like any of us would :-).  It seems that nothing in life, through this
>loose, derived, definition, could ever have any semantics, so please,
>someone define it for me before I post something I'll regret.

The situation is not so bad as it sometimes appears, at least for the
semantics of formal languages (ie mathematics or programs).  Basically,
the semantics of any language is a function from the words to objects and
events.

Syntax is a set of rules (ie a grammar for regular expressions) which
defines which strings of symbols are in the language.

Semantics has several parts:
(1) a "model" which is a set of objects (perhaps the integers, or phsyical
    things) plus relations among those objects (such as {(x,y): x=y}).
    Usually properties (such as being an even number) are lumped in with
    realtions as a special case.
(2) a "denotation function" which maps most of the symbols to objects and
    relations in the model (ie "1" |-> the smallest positive number)
(3) "Truth conditions" for sentences (ie "a=b" is true if and only if
    the denotation of "a" = the denotation of "b")

It is crucial that the truth conditions be defined recursively, so that
the truth of a complex sentence can be determined from the truth of the
parts, ie "apples are red and pears are green" is true when both parts
are true.  Meaning is the combination of denotation and truth conditions.
Denotation is also called "reference".
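
A toy version of (1)-(3) in code may help (Python; the three-object
model and the one connective are made up, just enough to show the
recursion):

model  = {1, 2, 3}                      # (1) a model: a set of objects
denote = {"a": 1, "b": 1, "c": 2}       # (2) the denotation function
assert all(v in model for v in denote.values())

def is_true(sentence):
    # (3) truth conditions, defined recursively: the truth of "P and Q"
    # is computed from the truth of its parts.
    if " and " in sentence:
        left, right = sentence.split(" and ", 1)
        return is_true(left) and is_true(right)
    lhs, rhs = sentence.split("=")      # atomic case: "x=y"
    return denote[lhs] == denote[rhs]   # true iff the denotations coincide

print(is_true("a=b"))           # True:  both denote 1
print(is_true("a=b and b=c"))   # False: "b=c" fails, since 1 != 2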

I've ignored issues of consciousness, intentions, and lots of others,
because with even this simple type of semantics, lots of really
horrible problems can be productively studied.

This is a sketchy definition, but it's enough to see why the Chinese Room
argument gets started.  When Searle says that he "doesn't understand", he
is saying that he doesn't know the denotation function and truth
conditions for the Chinese sentences.  Again, this is a sketchy reading of
the Chinese room problem, and not the whole picture.  But perhaps it's
enough to get started.

>As this post may indicate, I have something to say on each of these
>topics, but wish to avoid the serious Net-gaffe and post something ill-
>informed.  I feel, however, that others may need the same clarification
>as I, so I am posting this note first.  Feel free to either post your
>responses or send E-Mail, as I will summarize any replies I get.

Thanks for asking.  I have given my own opinions more prominence in this
reply than they perhaps deserve, but I hope I've been fair to those who
disagree.

You might want to look up the article "The Semantic Conception of Truth"
by Alfred Tarski, reprinted in several places including a collection
edited by Leonard Linsky (I've forgotten the title).  This is certainly
the most famous paper ever written about semantics - Tarski shows how to
solve the Liar paradox.  Tarski was a fine writer and one of the great
mathematicians of the century.

Any first-year logic textbook will have a better account of semantics than
I've given above.  I like Benson Mates's "Elementary Logic", but that's just
because it was my first logic text.

cash@muse.uucp (Peter Cash) (01/15/90)

In article <91Eq02wy7eX=01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>In article <12883@phoenix.Princeton.EDU> kadickey@phoenix.Princeton.EDU (Kent Andrew Dickey) writes:
>>
>>I clearly must be missing something very important to all of these
>>discussions.  Therefore, before I spew out my version of the world, I
>>have three questions:
>>
>>1) Could someone name one process which cannot be broken down into an
>>algorithm?  That is, many people have said the mind may not be simulated
>>through an algorithm, but personally, no other examples of
>>non-algorithmic processes occur to me.  Please, someone, tell me what
>>obvious example I'm missing.
>
>There has been a discussion here lately as to whether all processes are
>algorithmic, and some controversy perhaps remains.  Algorithms are finite
>sets of rules determining transitions among a finite set of states.
>Physical systems usually have an infinite number of states, so any
>algorithm can at best approximate the behavior of a physical system. In
>a physical system with unstable components (where a small perturbation
>can produce large changes) the approximation may never be good enough.

You are, of course, operating with a very narrow definition of what an
"algorithm" is (Turing's definition).  But to me, it sounds very odd to say
that an algorithm can "approximate" a physical system; I might concede that
a *model* can--in some strict sense--"approximate" a physical system, but an
algorithm, never.  An algorithm is a formal statement of rules (e.g., the
rules used to solve a problem, or construct a model, etc.); it is not the
thing itself.

For example, a topographical map can be said to correspond to the terrain
(I can't bring myself to say that it "approximates" the terrain).  But by
no stretch of the imagination do the cartographer's "algorithms" (his
procedures and methods) correspond to the terrain.

It seems to me that this is a confusion that arises out of conflating the
"states" of a Turing machine with the "states" of a physical system.  They
have something in common only in the most abstract way.  To speak of them
in the same breath begs the question of artificial intelligence.  If we
think that people are in some sense "systems" that have "states" like
Turing machines, then the outcome has already been decided:  people can be
"approximated" by algorithms.  Thus, the really important point that must
be decided first is in what sense (if any) people are like Turing machines.

>...[lots of formal stuff about semantics deleted]...

>I've ignored issues of consciousness, intentions, and lots of others,
>because with even this simple type of semantics, lots of really
>horrible problems can be productively studied.

But you imply that there *is* a connection between the concerns of the
formal semanticist and "consciousness, intentions, and lots of others".
This connection is what is really interesting and important here, and this
connection is what needs to be demonstrated.  After all, much of the
interest in semantics would wane if people didn't assume that there is a
connection.  But is this more than an unfounded assumption?  If you think
that this assumption has a foundation, I would like to know what it is.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
             |     The world is all that is decay.          |
Peter Cash   |       (apologies to Ludwig Wittgenstein)      |    cash@convex
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

bls@cs.purdue.EDU (Brian L. Stuart) (01/16/90)

In article <91Eq02wy7eX=01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>In article <12883@phoenix.Princeton.EDU> kadickey@phoenix.Princeton.EDU (Kent Andrew Dickey) writes:
>>
>>2) Others have claimed that current computers are not Turing machines--is
>>this not false?  Aren't all digital computers simulatable on a Turing
>>machine?  Or again, am I missing something obvious?  What's the simplest
>>problem a digital computer can solve that a Turing machine cannot?
>
>A guiding principle behind most of computer science is the "Church-Turing
>Thesis" - that all possible types of computation can be performed by
>Turing machines.  Many formal systems for defining computations have been
>devised, and all can be reduced to Turing machine programs.
>
>The only differences between computers and Turing machines are:
>(1) Computers don't have an infinitely long tape
>(2) Computers are real physical objects, whereas Turing machines are
>    abstract objects (like numbers or functions)

If you'll indulge me one last waste of bandwidth, I'd like to elaborate
on this point briefly.  Yes, all computers (except those with physical
noise sources) are simulatable on a TM.  The converse is not true, because
computers lack the infinite tape mentioned in (1).  To be precise, here is
the relationship among programs running on finite computers, algorithms,
and TMs:
a)  Since a finite computer has a finite (albeit large) number of states,
    it is a finite automaton, and finite automata recognize only regular
    languages.
b)  Algorithms by definition can recognize recursive languages, which are
    a proper superset of the regular languages.
c)  TMs recognize recursively enumerable languages, which are in turn a
    proper superset of the recursive languages.
So the question is not "what can a computer do that a TM can't?"  It is
"what can a TM do that a computer can't?"  And the answer is a non-empty
set of things.  There are no things that a computer can do that a TM
can't (except for certain computers that have quantum noise sources to
generate "true" random numbers).

The issue for AI is whether or not the brain's operation fits into one
of these categories.  If it fits into the first, then strong AI is assured.
If it fits into the other two, then we want to know if a finite approximation
will suffice.  If it will, then again strong AI is assured.  If not, well
then we can still have fun showing it.  Besides, there are still some
practical problems that we can solve without achieving full intelligence.

Brian L. Stuart
Department of Computer Science
Purdue University

adamsf@turing.cs.rpi.edu (Frank Adams) (01/18/90)

In article <4593@convex.UUCP> cash@convex.COM (Peter Cash) writes:
>For example, a topographical map can be said to correspond to the terrain
>(I can't bring myself to say that it "approximates" the terrain).  But by
>no stretch of the imagination do the cartographer's "algorithms" (his
>procedures and methods) correspond to the terrain.

Of course they do not.  But then, neither do the maps correspond to my
automobile.

More generally, an algorithm does not, I think, correspond to any particular
object.  It is in the nature of an algorithm to be general.  But an algorithm
plus data can correspond to some physical system, with a more precise
correspondence than just the data has.  For example, consider the programs
meteorologists use, which are fed huge amounts of data, and do even more
computation, producing predictions about the weather.  The data alone
corresponds to the current state of the weather; the data plus program
corresponds to the behavior of the weather.
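
Here is a toy sketch of that division of labor (Python; Newtonian
cooling is a made-up stand-in for the weather, and the numbers are
invented):

# The algorithm -- Euler integration -- is fully general; it corresponds
# to no particular system.
def euler(deriv, state, dt, steps):
    for _ in range(steps):
        state = state + dt * deriv(state)
    return state

# The data pick out one system: this cup of coffee, cooling toward room
# temperature.  Algorithm plus data together track that system's behavior.
ambient, k, temp0 = 20.0, 0.1, 90.0
cooling = lambda temp: -k * (temp - ambient)
print(euler(cooling, temp0, dt=0.5, steps=20))   # temperature 10 units later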

(As an aside, if one may speak of an algorithm as a process with the
specifics of the situation factored out, as I think one can; and if by
the behavior of a system we mean how it acts in general, not in a particular
instance, which is one of the possible meanings of behavior; then it is
quite legitimate to speak of an algorithm as corresponding to the behavior
of a system.)

Finally, note that when one speaks of a "computer program", one is not
necessarily just talking about [the implementation of] an algorithm; as
much data as desired can be incorporated into the program.