[comp.lang.misc] Halting Problem Solved! Film at 11!

mathew@mantis.co.uk (mathew) (04/30/91)

rockwell@socrates.umd.edu (Raul Rockwell) writes:
> David Gudeman  []
> [] Strong typing: the feature that an operation cannot return an
> [] undefined result because it did not know the typing of one of its
> [] operands.
> 
> Strong typing also means that the function will terminate.

I beg your pardon?!

Last I heard, the Halting Problem was still insoluble.

Is there some new theoretical result you'd like to share with us?


mathew

yodaiken@chelm.cs.umass.edu (victor yodaiken) (05/03/91)

In article <R73812w164w@mantis.co.uk> mathew@mantis.co.uk (mathew) writes:
>rockwell@socrates.umd.edu (Raul Rockwell) writes:
>> David Gudeman  []
>> [] Strong typing: the feature that an operation cannot return an
>> [] undefined result because it did not know the typing of one of its
>> [] operands.
>> 
>> Strong typing also means that the function will terminate.
>
>I beg your pardon?!
>
>Last I heard, the Halting Problem was still insoluble.
>
>Is there some new theoretical result you'd like to share with us?

The halting problem does not really apply to actual programming languages
(i.e. those that are compiled or interpreted into machine code).
The sooner computer scientists begin to understand the actual meaning
of the undecidability and complexity results, the better off the field
will be.

truesdel@nas.nasa.gov (David A. Truesdell) (05/04/91)

yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:

>>Last I heard, the Halting Problem was still insoluble.
>>
>>Is there some new theoretical result you'd like to share with us?

>The halting problem does not really apply to actual programming languages
>(i.e. those that are compiled or interpreted into machine code).
>The sooner computer scientists begin to understand the actual meaning
>of the undecidability and complexity results, the better off the field
>will be.

Care to share the reason why the halting problem does not apply?  Or, is
this just some handwaving by someone who really doesn't understand what the
halting problem means.
--
T.T.F.N.,
dave truesdell (truesdel@nas.nasa.gov)

Ita erat quando hic adventi.

yodaiken@chelm.cs.umass.edu (victor yodaiken) (05/04/91)

In article <truesdel.673301155@sun418> truesdel@nas.nasa.gov (David A. Truesdell) writes:
>yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>
>>>Last I heard, the Halting Problem was still insoluble.
>>>
>>>Is there some new theoretical result you'd like to share with us?
>
>>The halting problem does not really apply to actual programming languages
>>(i.e. those that are compiled or interpreted into machine code).
>>The sooner computer scientists begin to understand the actual meaning
>>of the undecidability and complexity results, the better off the field
>>will be.
>
>Care to share the reason why the halting problem does not apply?  Or, is
>this just some handwaving by someone who really doesn't understand what the
>halting problem means.
>--
>T.T.F.N.,
>dave truesdell (truesdel@nas.nasa.gov)

Real computers correspond to finite state machines, not Turing machines,
thus the halting problem is trivially (in theory of course, not in
practice) solvable for real computers. If you provide a formal 
language L and an algorithm C that will transform programs over
L into machine language for a reasonable computing device, then
C transforms L programs into finite state machines.  In these
situations, the pumping lemma is of far more interest than the 
halting problem. To suggest that type checking a compilable
language is impossible because of the Turing theorem is to misapply
the Turing theorem.  In fact, even if we were to ignore the fundamental
boundedness of actual computing devices, we would be wise to 
apply Turing's theorem very cautiously. It is well known that there is no
decision method for arithmetic, but number theorists don't seem to 
be too crippled by this situation. Turing showed that there was
*no general algorithm* that would solve the halting problem for all
turing machines. This does not mean that there is no algorithm for
solving the halting problem for all interesting cases. Turing's
result should be of interest primarily to meta-mathematicians and
logicians. Those concerned with applied mathematics (and programming
languages certainly seem applied) don't really have to worry about it.
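The "trivially solvable in theory" procedure alluded to at the top of this post can be sketched in a few lines (a hypothetical illustration, not code from the thread): run the finite-state device and apply the pigeonhole principle.

```python
def halts(step, start, halt_state):
    """Decide halting for a deterministic machine with finitely many
    states, given its transition function `step`.  The run must either
    reach `halt_state` or revisit some state; a revisit proves, by the
    pigeonhole principle, that the machine will loop forever.
    """
    seen = set()
    state = start
    while state != halt_state:
        if state in seen:
            return False      # state repeated: the machine never halts
        seen.add(state)
        state = step(state)
    return True

# A countdown reaches its halt state; a mod-3 rotation never does.
print(halts(lambda s: s - 1, 3, 0))        # True
print(halts(lambda s: (s % 3) + 1, 1, 0))  # False
```

The catch, of course, is that "finitely many states" for a real machine means a `seen` set astronomically larger than the machine itself.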

beede@sctc.com (Mike Beede) (05/05/91)

yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:

>In article <R73812w164w@mantis.co.uk> mathew@mantis.co.uk (mathew) writes:
>>rockwell@socrates.umd.edu (Raul Rockwell) writes:
>>> David Gudeman  []
>>> [] Strong typing: the feature that an operation cannot return an
>>> [] undefined result because it did not know the typing of one of its
>>> [] operands.
>>> 
>>> Strong typing also means that the function will terminate.
>>
>>I beg your pardon?!
>>
>>Last I heard, the Halting Problem was still insoluble.
>>
>>Is there some new theoretical result you'd like to share with us?

>The halting problem does not really apply to actual programming languages
>(i.e. those that are compiled or interpreted into machine code).
>The sooner computer scientists begin to understand the actual meaning
>of the undecidability and complexity results, the better off the field
>will be.

Trivially, this is true.  We can always run the program on the machine
and look for a cycle in the total state (all memory, all register, all
etc.)  Then we wait for on the order of 

	2 ^ ( wordsize * ( memorysize + registercount ) ) 

cycles (corrected for any non-binary machine) and if we don't
get a repeat, the program never halts.  The approximate length of time
for my workstation (2^23 8-bit bytes and an arbitrarily chosen 8
registers; the exact count really doesn't matter), if it didn't have any
disk, and assuming 2^24 operations/sec, is about 2^67108840 seconds ~=
2^67108815 years, or something like 10^20,000,000 years.  Net bandwidth
considerations demand exponential notation for numbers of this size.
Any even near-reasonable postulated speed increase will have no
noticeable effect on this number (it will still exceed the lifetime of
the universe by an unreasonable factor).
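The state-counting arithmetic can be checked exactly with Python's arbitrary-precision integers. This sketch uses the post's assumed parameters (2^23 eight-bit bytes, 8 registers, 2^24 operations per second) and counts every bit of memory as state:

```python
import math

wordsize_bits = 8
memory_words = 2 ** 23        # the post's 8 MB workstation
ops_per_sec_log2 = 24

# Total bits of machine state; the 8 registers add only 64 more bits,
# which is negligible at this scale, so they are ignored here.
state_bits = wordsize_bits * memory_words

# Any run longer than 2**state_bits cycles must repeat a total state,
# so the wait before declaring "never halts" is about that many cycles.
seconds_log2 = state_bits - ops_per_sec_log2
years_log2 = seconds_log2 - 25          # one year is roughly 2**25 seconds

print(state_bits)                        # 67108864 bits of state
print(int(years_log2 * math.log10(2)))   # decimal exponent of the wait in years
```

Whatever the exact exponent, the qualitative point survives any plausible hardware speedup: the wait dwarfs the age of the universe.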

I would say that this is close enough to insoluble for government
work, as the saying goes.  Am I missing something in your post?  I'd
love for the field to be better off . . . .

	Mike Beede




-- 
Mike Beede         SCTC
beede@sctc.com     1210 W. County Rd E, Suite 100           
			Arden Hills, MN  55112
                         (612) 482-7420

jeenglis@alcor.usc.edu (Joe English) (05/05/91)

beede@sctc.com (Mike Beede) writes:
>yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:

>>The halting problem does not really apply to actual programming languages
>>(i.e. those that are compiled or interpreted into machine code).
>>The sooner computer scientists begin to understand the actual meaning
>>of the undecidability and complexity results, the better off the field
>>will be.
>
>Trivially, this is true.  We can always run the program on the machine
>and look for a cycle in the total state (all memory, all register, all
>etc.)  Then we wait for on the order of 
>
>	2 ^ ( wordsize * ( memorysize + registercount ) ) 
>
>cycles (corrected for any non-binary machine) and if we don't
>get a repeat, the program never halts.

This wouldn't work even if you did have more than all the time in the
universe.  If the program does any file I/O, you have to examine
the state of the external store.  If it does terminal or network I/O,
this algorithm doesn't even apply.

I have to agree that the halting problem doesn't apply to 
real computer programs, though.  Many programs aren't 
*supposed* to halt on every input (interactive processes,
network servers, etc.; these act like finite state transducers
and theoretically *should* be able to run forever.)
Most programs alternate between doing some I/O and doing some
computation.  The "doing some computation" part is what 
we'd like to solve the Halting Problem for.  Theory says
that we can't write a program that will correctly answer
"yes" or "no" to the question "will this procedure halt on 
all inputs?"  but if we allow it to answer "I can't tell," 
it becomes entirely possible.  If the algorithm were smart enough,
this might even be a useful tool for a limited class 
of procedures.

Of course, since we know the HP is insoluble, that's a 
pretty strong hint that the algorithm would have to 
be pretty smart not to answer "I can't tell" on all but
the most trivial procedures, but it isn't impossible.
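The three-valued checker described above can be sketched concretely (a toy illustration, not an actual tool; the `fuel` cutoff and the loop shape are invented for this example):

```python
def check_halts(step, n, fuel=10_000):
    """Sound but incomplete halting check for `while n != 1: n = step(n)`.

    Answers "halts" or "loops" only when it is certain; when the fuel
    budget runs out it gives the third answer, "can't tell", instead of
    guessing.
    """
    seen = set()
    for _ in range(fuel):
        if n == 1:
            return "halts"
        if n in seen:
            return "loops"          # revisited a state: provably infinite
        seen.add(n)
        n = step(n)
    return "can't tell"

print(check_halts(lambda n: n // 2, 2 ** 40))  # "halts"
print(check_halts(lambda n: n, 5))             # "loops"
print(check_halts(lambda n: n + 2, 5))         # "can't tell"
```

A smarter checker would shrink the "can't tell" bucket (e.g. by spotting the monotonically growing counter in the last case), but undecidability guarantees the bucket can never be emptied.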


--Joe English

  jeenglis@alcor.usc.edu

jwl@sag4.ssl.berkeley.edu (Jim Lewis) (05/05/91)

In article <30082@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
-In article <truesdel.673301155@sun418> truesdel@nas.nasa.gov (David A. Truesdell) writes:
->yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
->
->>The halting problem does not really apply to actual programming languages
->>(i.e. those that are compiled or interpreted into machine code).
->
->Care to share the reason why the halting problem does not apply?  Or, is
->this just some handwaving by someone who really doesn't understand what the
->halting problem means.
-
-Real computers correspond to finite state machines not turing machines,
-thus the halting problem is trivially (in theory of course, not in
-practice) solvable for real computers. If you provide a formal 
-language L and an algorithm C that will transform programs over
-L into machine language for a reasonable computing device, then
-C transforms L programs into finite state machines.  In these
-situations, the pumping lemma is of far more interest than the 
-halting problem. 

This thread began when someone mentioned the relationship between strong
typing and the problem of deciding whether a given function specifies a
terminating computation.  In this context, "termination" is generally 
understood to be a property of the algorithm (and perhaps its input), NOT
the machine used to run the algorithm.   Your formulation requires us
to tie the definition of "terminating" to a specific compiler and machine
configuration (including the amount of off-line storage available for the
computation).   It's about as useful and interesting as saying that
"Fermat's conjecture is true (in BASIC on a TRS-80 with 4k of memory...)"

-Turing showed that there was
-*no general algorithm* that would solve the halting problem for all
-turing machines. This does not mean that there is no algorithm for
-solving the halting problem for all interesting cases. 

It is also the case that there is no general algorithm to solve the
halting problem for *any* universal TM.  And decision procedures
for the halting problem on non-universal TM's (e.g. the ones that
are equivalent to FSMs) usually aren't very interesting, because they
don't tell you what happens if you throw a bigger state space at the
problem.

->>The sooner computer scientists begin to understand the actual meaning
->>of the undecidability and complexity results, the better off the field
->>will be.

On behalf of computer scientists everywhere, "Eat my shorts!" :-)

-- Jim Lewis
   U.C. Berkeley

quale@khan.cs.wisc.edu (Douglas E. Quale) (05/05/91)

In article <30082@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>turing machines. This does not mean that there is no algorithm for
>solving the halting problem for all interesting cases. Turing's

Naturally ``interesting'' is a subjective term, but I find the busy beaver
problem interesting and it's unsolvable.

-- Doug Quale
quale@khan.cs.wisc.edu

quale@khan.cs.wisc.edu (Douglas E. Quale) (05/05/91)

In article <30082@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>
>Real computers correspond to finite state machines not turing machines,
>thus the halting problem is trivially (in theory of course, not in
>practice) solvable for real computers. If you provide a formal 

and later

>turing machines. This does not mean that there is no algorithm for
>solving the halting problem for all interesting cases. Turing's

Naturally ``interesting'' is a subjective term, but I find the busy beaver
problem interesting, and it's unsolvable.

I should also point out that real computers are *not* finite state machines.

-- Doug Quale
quale@khan.cs.wisc.edu

truesdel@nas.nasa.gov (David A. Truesdell) (05/06/91)

yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:

>In article <truesdel.673301155@sun418> truesdel@nas.nasa.gov (David A. Truesdell) writes:
>>yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>>
>>>The halting problem does not really apply to actual programming languages
>>>(i.e. those that are compiled or interpreted into machine code).
>>>The sooner computer scientists begin to understand the actual meaning
>>>of the undecidability and complexity results, the better off the field
>>>will be.
>>
>>Care to share the reason why the halting problem does not apply?  Or, is
>>this just some handwaving by someone who really doesn't understand what the
>>halting problem means.

>Real computers correspond to finite state machines not turing machines,
>thus the halting problem is trivially (in theory of course, not in
>practice) solvable for real computers.

Wrong!  Last time I looked, a Turing machine consisted of a finite state
machine and an infinitely large memory.  "Real computers" really differ only
in that they have a finite amount of memory.
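That "only difference" can be made concrete in a toy simulator (a hypothetical sketch; the transition-table encoding is invented here): the same step function drives an ideal Turing machine or, with a memory bound, a mere finite device.

```python
def run(delta, state, tape, accept, max_cells=None, max_steps=10_000):
    """Run a Turing machine whose transition table `delta` maps
    (state, symbol) -> (state, symbol_to_write, head_move).

    With `max_cells=None` the tape may grow without bound (the ideal
    machine); a finite `max_cells` turns it into the finite-memory
    device a real computer actually is.
    """
    cells = dict(enumerate(tape))   # sparse tape, blank symbol is 0
    pos = 0
    for _ in range(max_steps):
        if state == accept:
            return "halted"
        state, write, move = delta[(state, cells.get(pos, 0))]
        cells[pos] = write
        pos += move
        if max_cells is not None and not (0 <= pos < max_cells):
            return "out of memory"  # the real machine hits its bound
    return "still running"

# A machine that writes 1s and marches right forever:
sweeper = {("go", 0): ("go", 1, +1)}
print(run(sweeper, "go", [], accept="acc", max_cells=8))  # "out of memory"
print(run(sweeper, "go", [], accept="acc"))               # "still running"
```

The bounded run fails fast; the unbounded one just keeps going, which is exactly the gap the thread is arguing about.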

>                                       If you provide a formal 
>language L and an algorithm C that will transform programs over
>L into machine language for a reasonable computing device, then
>C transforms L programs into finite state machines.  In these
>situations, the pumping lemma is of far more interest than the 
>halting problem.

Of course you leave "reasonable" undefined. :-)  Other than that, it seems
that the since the halting problem doesn't "interest" you, it isn't important.
You sound like a typical academic.

>                                   Turing showed that there was
>*no general algorithm* that would solve the halting problem for all
>turing machines. This does not mean that there is no algorithm for
>solving the halting problem for all interesting cases.

I'd guess your definition of "interesting" probably means "trivial".

>                                                       Turing's
>result should be of interest primarily to meta-mathematicians and
>logicians. Those concerned with applied mathematics, and programming
>languages certainly seems applied, don't really have to worry about it.

Beware, problems of "theoretical interest only" have this nasty habit of
sneaking up and biting you on the ass.

--
T.T.F.N.,
dave truesdell (truesdel@nas.nasa.gov)
'And Ritchie said, "Let there be portability!"  And nothing happened, so
 Ritchie realized that he had his work cut out for him.'

truesdel@nas.nasa.gov (David A. Truesdell) (05/06/91)

jeenglis@alcor.usc.edu (Joe English) writes:

>I have to agree that the halting problem doesn't apply to 
>real computer programs, though.  Many programs aren't 
>*supposed* to halt on every input (interactive processes,
>network servers, etc.; these act like finite state transducers
>and theoretically *should* be able to run forever.)

What?  You mean my text editor isn't supposed to exit when I'm done?  (Well,
an editor is an interactive process, isn't it?)  "Input" refers to everything
that is input into the process.  Not individual lines, nor individual commands,
everything.  A FSM representing an editor will have a halting state.

On the other hand, if the FSM lacks a "halting state", the solution is trivial.
There is no "halting state" to reach, so of course it can't reach it.
--
T.T.F.N.,
dave truesdell (truesdel@nas.nasa.gov)
'And Ritchie said, "Let there be portability!"  And nothing happened, so
 Ritchie realized that he had his work cut out for him.'

yodaiken@chelm.cs.umass.edu (victor yodaiken) (05/06/91)

In article <1991May4.192440.5527@sctc.com> beede@sctc.com (Mike Beede) writes:
>yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>
>>The halting problem does not really apply to actual programming languages
>>(i.e. those that are compiled or interpreted into machine code).
>>The sooner computer scientists begin to understand the actual meaning
>>of the undecidability and complexity results, the better off the field
>>will be.
>
>Trivially, this is true.  We can always run the program on the machine
>and look for a cycle in the total state (all memory, all register, all
>etc.)  Then we wait for on the order of 
>
>	2 ^ ( wordsize * ( memorysize + registercount ) ) 
>
>cycles (corrected for any non-binary machine) and if we don't
>get a repeat, the program never halts.  The approximate length of time
>for my workstation (2^23 8-bit bytes and an arbitrarily chosen 8
>registers; the exact count really doesn't matter), if it didn't have any
>disk, and assuming 2^24 operations/sec, is about 2^67108840 seconds ~=
>2^67108815 years, or something like 10^20,000,000 years.  Net bandwidth
>considerations demand exponential notation for numbers of this size.
>Any even near-reasonable postulated speed increase will have no
>noticeable effect on this number (it will still exceed the lifetime of
>the universe by an unreasonable factor).

If the halting problem applied to actual computers, then there would
be no bound on any algorithm to solve the problem. There is an obvious,
although impractical algorithm, however, so the invocation of Turing's
theorem is misplaced. Large numbers are not equal to infinite numbers,
hard problems are not equivalent to impossible problems. If you want to
say that deciding type problems for programming languages is unlikely
because the immense state spaces are not tractable to analysis, then fine.
But, if you want to say that Turing's theorem, or Herbrand's theorem,
or Fermat's theorem tells us that type checking real programs is 
impossible, then you don't understand any of these theorems (or you've
figured out something that it would be interesting to hear about).

>I would say that this is close enough to insoluble for government
>work, as the saying goes.  Am I missing something in your post?  I'd

While "hard = insoluble" may be close
enough for government work, it's not true. There may be a wonderful
algorithm out there that in practice can verify type safety of 
very complex programs in a reasonable time. Is there such an
algorithm? I don't know. But I do know that there is no proof
that such an algorithm is impossible.

truesdel@nas.nasa.gov (David A. Truesdell) (05/06/91)

yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:

>If the halting problem applied to actual computers, then there would
>be no bound on any algorithm to solve the problem. There is an obvious,
>although impractical algorithm, however, so the invocation of Turing's
>theorem is misplaced. Large numbers are not equal to infinite numbers,
>hard problems are not equivalent to impossible problems. If you want to
>say that deciding type problems for programming languages is unlikely
>because the immense state spaces are not tractable to analysis, then fine.
>But, if you want to say that Turing's theorem, or Herbrand's theorem,
>or Fermat's theorem tells us that type checking real programs is 
>impossible, then you don't understand any of these theorems (or you've
>figured out something that it would be interesting to hear about).

>>I would say that this is close enough to insoluble for government
>>work, as the saying goes.  Am I missing something in your post?  I'd

>While "hard = insoluble" may be close
>enough for government work, it's not true. There may be a wonderful
>algorithm out there that in practice can verify type safety of 
>very complex programs in a reasonable time. Is there such an
>algorithm? I don't know. But I do know that there is no proof
>that such an algorithm is impossible.

It's getting fairly obvious that you don't really understand what the problem
in the halting problem is.  The halting problem does not exist because a
Turing machine has a potentially infinite state space and infinite storage.
It arises from the fact that, for certain cases, the question "will this
program reach the halting state" is inherently undecidable, even given
infinite time.  The physical limits of "actual computers" have no bearing on
the problem.  Your "obvious, although impractical algorithm" does not exist.
--
T.T.F.N.,
dave truesdell (truesdel@nas.nasa.gov)
'And Ritchie said, "Let there be portability!"  And nothing happened, so
 Ritchie realized that he had his work cut out for him.'

jwl@sag4.ssl.berkeley.edu (Jim Lewis) (05/06/91)

In article <30095@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>But, if you want to say that Turing's theorem, or Herbrands theorem,
>or Fermats theorem tells us that type checking real programs is 
>impossible, then you don't understand any of these theorems ( or you've
>figured out something that it would be interesting to hear about).

If type safety is a property of *programs*, you can't let your answer
depend on the size of the machine running the program.  This forces you 
to use the TM model!

If you want to let "type safety" depend on what compiler and machine you're 
using, fine, but please realize that you're saying more about the machine
configuration than about the program you're supposed to be analyzing.
If adding one bit's worth of state space can change the answer you get, I 
claim that this notion of "type safety" is useless.

Your reasoning seems to go like this:

1. To run a program, one must use a physically realizable machine.
2. Real machines are only equivalent to FSMs.
3. Therefore, programs aren't REALLY equivalent to Turing machines.   
4. Since programs are not equivalent to TM's, we *can* solve the halting
   problem, do rigorous type checking,  etc...

No one is disputing (1) or (2).  You get into trouble with (3) though,
because programs and programming languages are mathematical abstractions.
We can reason about properties of programs without having to actually run 
them on a real machine, much as we can reason about the properties of 
integers without having to write them down on a real sheet of paper.

If we accept step (3) in your reasoning, then it follows that there are
REALLY only a finite number of prime numbers, since we don't have to
consider the ones that are too large to write down on any real sheet
of paper...sorry, Euclid!  

-- Jim Lewis
   U.C. Berkeley

stephen@pesto.uchicago.edu (Stephen P Spackman) (05/06/91)

This thread is getting me down, because intelligent people are making
fools of themselves. So now I'm going to have another go at making a
fool of MYself. :-)

I think the observation here is that actual computers are NOT
(equivalent to) turing machines BECAUSE they have finite resources.
Note that it doesn't matter WHAT the bound on resources is; so long as
it is there, and has some value in each case, the theoretical
observation that this is NOT REALLY a turing machine holds.

So the question is, *can we get any mileage out of that*? What useful
theorems could we prove if we were to grant (or assume, if it makes
you feel better - maybe your computer is A LOT bigger than mine :-)
that there IS SOME upper bound on machine state complexity?

Now, I don't do theory, and I haven't spent a lot of time worrying
about the structure of the appropriate theorems (though it does seem
pretty obvious that a FINITE computer has an easy check for programme
termination: wait until it has run enough cycles to be in EVERY state.
By this point it has either halted or it is in an infinite loop, by
a pigeonhole argument. It's not PRACTICAL, of course, but that
doesn't mean it has no practical CONSEQUENCES); but it doesn't seem at
all incoherent that it MIGHT be very important.
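The pigeonhole check described above can even be done without recording every visited state (which would take more memory than the machine being analyzed). Floyd's tortoise-and-hare cycle detection, a standard technique not mentioned in the thread, stores only two machine states; this sketch assumes a closed machine (no input) whose halt state is a fixed point of `step`:

```python
def halts_floyd(step, start, halt):
    """Pigeonhole halting check via Floyd's tortoise-and-hare.

    `step` is the machine's deterministic transition function and is
    assumed to satisfy step(halt) == halt.  Two copies of the run,
    advanced at different speeds, must eventually meet if the run
    cycles, so only two total states are ever stored at once.
    """
    tortoise, hare = start, step(start)
    while True:
        if tortoise == halt:
            return True
        if tortoise == hare:
            return False     # the runs met inside a cycle avoiding `halt`
        tortoise = step(tortoise)
        hare = step(step(hare))

# A countdown to 0 halts; a mod-3 rotation never reaches its halt state.
print(halts_floyd(lambda s: max(s - 1, 0), 5, 0))  # True
print(halts_floyd(lambda s: (s + 1) % 3, 0, 99))   # False
```

This trades memory for time; the run length is still bounded only by the size of the state space, so it remains impractical in exactly the way the post concedes.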

And it certainly means there's room for different perspectives on
what's "possible".
----------------------------------------------------------------------
stephen p spackman         Center for Information and Language Studies
systems analyst                                  University of Chicago
----------------------------------------------------------------------

truesdel@nas.nasa.gov (David A. Truesdell) (05/06/91)

stephen@pesto.uchicago.edu (Stephen P Spackman) writes:

>I think the observation here is that actual computers are NOT
>(equivalent to) turing machines BECAUSE they have finite resources.
>Note that it doesn't matter WHAT the bound on resources is; so long as
>it is there, and has some value in each case, the theoretical
>observation that this is NOT REALLY a turing machine holds.

This is true in a theoretical sense, but in practical terms, it really
doesn't make that much of a difference.  Remember the number of possible
states grows exponentially!

>So the question is, *can we get any mileage out of that*? What useful
>theorems could we prove if we were to grant (or assume, if it makes
>you feel better - maybe your computer is A LOT bigger than mine :-)
>that there IS SOME upper bound on machine state complexity?

>Now, I don't do theory, and I haven't spent a lot of time worrying
>about the structure of the appropriate theorems (though it does seem
>pretty obvious that a FINITE computer has an easy check for programme
>termination: wait until it has run enough cycles to be in EVERY state.
>By this point it has either halted or it is in an infinite loop, by
>a pigeonhole argument.

Nice idea, but you've missed something fundamental.  A turing machine
consists of a state machine AND its "tape".  You need to account for the
initial state of the tape as well.  You could have a two state FSM (one is the
halted state) which will run, not in an infinite loop, but as long as the
tape "tells" it to.  The only "easy" cases I know of are:  No halted state,
and no loops.  Anything else can get really hard, really fast.  Assuming that
there IS an answer, which is not always the case.

>                       It's not PRACTICAL, of course, but that
>doesn't mean it has no practical CONSEQUENCES); but it doesn't seem at
>all incoherent that it MIGHT be very important.

>And it certainly means there's room for different perspectives on
>what's "possible".

Indeed.
--
T.T.F.N.,
dave truesdell (truesdel@nas.nasa.gov)
'And Ritchie said, "Let there be portability!"  And nothing happened, so
 Ritchie realized that he had his work cut out for him.'

jeenglis@alcor.usc.edu (Joe English) (05/06/91)

truesdel@nas.nasa.gov (David A. Truesdell) writes:
>jeenglis@alcor.usc.edu (Joe English) writes:
>>I have to agree that the halting problem doesn't apply to 
>>real computer programs, though.  Many programs aren't 
>>*supposed* to halt on every input (interactive processes,
>>network servers, etc.; these act like finite state transducers
>>and theoretically *should* be able to run forever.)

>What?  You mean my text editor isn't supposed to exit when I'm done?  (Well,
>an editor is an interactive process, isn't it?) "Input" refers to everything
>that is input into the process.  Not individual lines, nor individual commands,
>everything.  A FSM representing an editor will have a halting state.

Yeah, but that doesn't mean that it has to _reach_ the halt
state.  There are an infinite number of strings that don't end
in 'q <RET>', and "ed" won't halt on any of them.[*]

The interesting question is not whether or not an editor
will halt on all inputs.  Looking at the problem
this way is like looking at Emacs as a finite state machine
accepting the regular expression ".*^X^C".  What's interesting
are questions like "If I invoke ESC-X towers-of-hanoi, will
it finish?"


[*] Actually, this isn't correct.  In an editor if you simply stop
    typing, it will "halt", but not in an "accepting state"
    (returning to the operating system).  


--Joe English

  jeenglis@alcor.usc.edu

sabry@rice.edu (Amr Sabry) (05/06/91)

In article <truesdel.673495042@sun418>, truesdel@nas.nasa.gov (David A. Truesdell) writes:
|> program reach the halting state" is inherently undecidable, even given
|> infinite time.  The physical limits of "actual computers" have no bearing on

I am not sure I agree that given infinite time the halting problem is still 
undecidable. The question does not make sense in this case as the 
distinction between recursive languages and r.e. languages goes away if you 
have infinite time. 

I should point out that infinite time is different from arbitrarily large time.

--Amr

gudeman@cs.arizona.edu (David Gudeman) (05/07/91)

In article  <truesdel.673495042@sun418> David A. Truesdell writes:

]...the question "will this
]program reach the halting state" is inherently undecidable, even given
]infinite time.

I suppose you mean "an arbitrarily large time", not "infinite time".
The halting problem _can_ be solved given infinite time.  You run the
TM in question; if it halts within an infinite time then the answer is
"yes" and if it doesn't halt within an infinite time the answer is
"no".
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

mathew@mantis.co.uk (mathew) (05/07/91)

yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
> In article <R73812w164w@mantis.co.uk> mathew@mantis.co.uk (mathew) writes:
> >Last I heard, the Halting Problem was still insoluble.
> >
> >Is there some new theoretical result you'd like to share with us?
> 
> The halting problem does not really apply to actual programming languages
> (i.e. those that are compiled or interpreted into machine code).

I see. So you are claiming that it is possible to write a program to predict
whether a given piece of code will terminate or not, for any compiled or
interpreted programming language?


mathew


yodaiken@chelm.cs.umass.edu (victor yodaiken) (05/07/91)

In article <1991May5.135342.13792@daffy.cs.wisc.edu> quale@khan.cs.wisc.edu (Douglas E. Quale) writes:
>In article <30082@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>>
>>Real computers correspond to finite state machines not turing machines,
>>thus the halting problem is trivially (in theory of course, not in
>>practice) solvable for real computers. If you provide a formal 
>
>and later
>
>>turing machines. This does not mean that there is no algorithm for
>>solving the halting problem for all interesting cases. Turing's
>
>Naturally ``interesting'' is a subjective term, but I find the busy beaver
>problem interesting, and it's unsolvable.
>

In the context of comp.lang.misc, the busy-beaver problem, whatever its
intrinsic value, seems quite uninteresting. Diagonalization arguments
are simply not the bread and butter of programming language design.

>I should also point out that real computers are *not* finite state machines.
>

Pointing it out is nice, but most computer engineers would be surprised by
this claim. If we build a device from 3-dimensional switching elements, there
are only two alternatives --- finite state device, or infinite volume device. 
Of course, if we ignore the discrete nature of the switching elements and
start worrying about analog behavior then things are different.  But, 
this does not appear to be a very useful way of thinking about computation.

gudeman@cs.arizona.edu (David Gudeman) (05/08/91)

In article  <30164@dime.cs.umass.edu> victor yodaiken writes:
]In article <1991May5.135342.13792@daffy.cs.wisc.edu> quale@khan.cs.wisc.edu (Douglas E. Quale) writes:
]>I should also point out that real computers are *not* finite state machines.
]
]Pointing it out is nice, but most computer engineers would be surprised by
]this claim...

Probably so, but not those who have put some thought into the matter.
Keep in mind that "state" in this sense refers to the entire body of
information accessible by the computer.  If a computer runs out of
storage, you can always go buy another tape.  Now there are two
possible ways to interpret the question of whether this gives the
computer unbounded storage: the practical and the theoretical.

Whether this gives the computer _practical_ unbounded store depends on
your opinion of what is practical.  If either you don't expect your
problems to run out of tape, or your problems are important enough
that you are willing to keep the computer supplied with all the tape
it needs, then the computer has a practically unbounded state space.
If you don't agree that the above is practical, then the computer has
a _practically_ bounded state space.

If you want to get theoretical, then you can _theoretically_ keep
buying tapes until the world's supply of tape runs out.  Then you can
manufacture more.  When you use up all the earth's supply of materials
for making tapes, you can build mining space ships.  The question of
whether a computer is a finite-state device or not boils down to the
question of whether the universe holds a finite amount of material for
making tape.  In this case whether you believe that a computer is a
finite-state device depends on whether you believe there is a finite
amount of matter in the universe.

The question of whether an electronic computer is an FSA or an
infinite-state device is not as trivial as most people seem to think.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

stephen@pesto.uchicago.edu (Stephen P Spackman) (05/08/91)

In article <2861@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:

|If you want to get theoretical, then you can _theoretically_ keep
|buying tapes until the world's supply of tape runs out.  Then you can
|manufacture more.  When you use up all the earth's supply of materials
|for making tapes, you can build mining space ships.  The question of
|whether a computer is a finite-state device or not boils down to the
|question of whether the universe holds a finite amount of material for
|making tape.  In this case whether you believe that a computer is a
|finite-state device depends on whether you believe there is a finite
|amount of matter in the universe.

There is definitely and without question a finite amount of matter in
the universe that I am willing to make tapes out of.

For more than a finite amount of matter to become available to us on a
timescale in which we are in an energetic position to make some use of
it, would require certain (IMHO unlikely) breakthroughs in physics.

|The question of whether an electronic computer is an FSA or an
|infinite-state device is not as trivial as most people seem to think.

Evidently, I disagree. I am PERSONALLY willing to engineer my
compilers on the assumption that (a) FTL travel, (b) many-worlds
parallelism and (c) anything comparably revolutionary will not become
available within the useful lifetime of the software.

And even WITH those breakthroughs, I still think computers would for
the most part remain finite, making finite computation a useful field
of research.
----------------------------------------------------------------------
stephen p spackman         Center for Information and Language Studies
systems analyst                                  University of Chicago
----------------------------------------------------------------------

yodaiken@chelm.cs.umass.edu (victor yodaiken) (05/08/91)

In article <1991May6.021753.4373@agate.berkeley.edu> jwl@sag4.ssl.berkeley.edu (Jim Lewis) writes:
>If type safety is a property of *programs*, you can't let your answer
>depend on the size of the machine running the program.  This forces you 
>to use the TM model!
>
>If you want to let "type safety" depend on what compiler and machine you're 
>using, fine, but please realize that you're saying more about the machine
>configuration than about the program you're supposed to be analyzing.
>If adding one bit's worth of state space can change the answer you get, I 
>claim that this notion of "type safety" is useless.
>
>Your reasoning seems to go like this:
>
>1. To run a program, one must use a physically realizable machine.
>2. Real machines are only equivalent to FSMs.
>3. Therefore, programs aren't REALLY equivalent to Turing machines.   
>4. Since programs are not equivalent to TM's, we *can* solve the halting
>   problem, do rigorous type checking,  etc...
>
>No one is disputing (1) or (2).  You get into trouble with (3) though,
>because programs and programming languages are mathematical abstractions.
>We can reason about properties of programs without having to actually run 
>them on a real machine, much as we can reason about the properties of 
>integers without having to write them down on a real sheet of paper.
>

That's a good point, but there are really two different issues. If
you give me a sorting algorithm A so that A(sequence) -> sorted-sequence
I really want to know if A is correct on every sequence, and 
knowing A(sequences-representable-on-a-TRS-80) is not very satisfactory.
But, it is often of interest to know if an algorithm can be sensibly
compiled on a real machine. Thus, if our compiler provides a sorting
algorithm, and we attempt to compile a program 
which applies the sorting operator to a sequence that cannot be sorted
on our machine, then it would be both useful and theoretically
possible to have the compiler find and report the error.
So, maybe we don't have an algorithm that will decide if our program is
type-safe on all possible computers -- i.e.
we can't decide (forall Machine)TypeSafe(Prog, Machine), but maybe
we can have an algorithm that will decide for any given machine
if the program is typesafe --- i.e. maybe there is an algorithm
that will decide TypeSafe(Prog, Machine) for every Machine.  Maybe
we can even do better than that, and have an algorithm that
will compute TypeSafe(Prog, Bound), establishing that Prog is typesafe
on any machine M that fits within the Bound.  If our compiler could test
TypeSafe(Prog, Anything-with-at-least-2^32-addressable-words)
this would be quite useful.
Remember that while $(forall x)f(x)=0$ is not decidable for
arbitrary primitive recursive function f,  for every integer n, 
$f(n)=0$ is decidable. I'm certainly not arguing that we should
dispense with unbounded quantifiers in ordinary mathematical 
discourse (although some folks do, see Nelson, Predicative Arithmetic, for
example). But I am arguing that in computer engineering, where we often
would find it more interesting to know if Prog is type safe on 
the set of all computers made in the last 10 years (a decidable question)
than if there is a hypothetical  computer on which Prog is not typesafe
(not decidable for any decent programming language), we should be a little
more careful about invoking Turing's theorem.
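The bounded/unbounded distinction here can be made concrete. A minimal
sketch (the function name is mine, not anything proposed above): for a
total function f, the bounded question is decided by brute evaluation,
while no such loop settles the unbounded one.

```python
def check_up_to(f, bound):
    """Decide (forall n <= bound) f(n) == 0 by direct evaluation.

    For a total function f (e.g. any primitive recursive f) this loop
    always terminates, even though the unbounded question
    (forall n) f(n) == 0 is undecidable in general.
    """
    return all(f(n) == 0 for n in range(bound + 1))
```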

yodaiken@chelm.cs.umass.edu (victor yodaiken) (05/08/91)

In article <truesdel.673495042@sun418> truesdel@nas.nasa.gov (David A. Truesdell) writes:
>It's getting fairly obvious that you don't really understand what the problem
>in the halting problem is.  The halting problem does not exist because a
>turing machine has an potentially infinite state space and infinite storage.
>It arises from the fact that, for certain cases, the question "will this
>program reach the halting state" is inherently undecidable, even given
>infinite time.  The physical limits of "actual computers" have no bearing on
>the problem.  Your "obvious, although impractical algorithm" does not exist.
>--

I'm a little bothered by the tone of this post. These are sometimes tricky
questions, and the news group includes people of diverse backgrounds, so
a little patience with errors would be appreciated.  As far as I understand
it, the Turing proof relies quite a good deal on the infinite state
space of turing machines. Note that if we limit a turing machine to
a fixed number of squares on the tape, we can transform it into an
equivalent finite state machine. Say that we fix tape length at k, and
we have an alphabet of n symbols, including the blank. Then, there
are n^k possible configurations on the tape. The tape head can be
in positions 1,2,... k and there is a finite state control for the
turing machine with, say, r states. We can then build a FSA that
emulates the original turing machine by letting its state set be
the set of triples (s,p,t) where "s" is one of the states of the turing
machine, "p" is the current position of the turing machine read/write
head, and "t" is the tape in current configuration --- t is a sequence
of length k over the alphabet. The alphabet of the FSA would consist
of  a single symbol -- "go-to-next-step". If the FSA was in state
(s,p,(x_1,....,x_k)) then we ask what the Turing machine would have done
in state s with the head over square p -- where the stored value will
be x_p. If the turing machine moves left we get a new state
(s',p-1,(x_1,...,x_k)), etc. If there is a path from the state
(start,1,blank) to some state (halt,p,t) then the Turing machine halts
on the blank tape input. Otherwise it does not.
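For what it's worth, the construction above is easy to mechanize. A
minimal sketch in Python (the encoding of the transition table is my
own choice, not from any post here): the simulator decides halting for
a k-square machine by watching for a repeated configuration, of which
there are at most r * k * n^k.

```python
def halts_on_bounded_tape(delta, start, halt_state, k, blank='B'):
    """Decide halting for a Turing machine restricted to k tape squares.

    delta maps (state, symbol) -> (new_state, written_symbol, move),
    with move in {-1, +1}.  Because there are only finitely many
    configurations (state, head position, tape contents), the run
    either halts or revisits a configuration and loops forever.
    """
    tape = [blank] * k
    state, pos = start, 0
    seen = set()
    while state != halt_state:
        config = (state, pos, tuple(tape))
        if config in seen:
            return False          # repeated configuration: infinite loop
        seen.add(config)
        if (state, tape[pos]) not in delta:
            return True           # no applicable move: machine halts
        state, write, move = delta[(state, tape[pos])]
        tape[pos] = write
        pos = max(0, min(k - 1, pos + move))   # head clamped to the tape
    return True
```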

Now, this is an exponential algorithm (exponential in the size of the
tape), but that does not necessarily make it impractical as
your local trucking or phone company will tell you (the simplex
algorithm for linear programming is exponential in the worst case,
but is in very wide use).

Any corrections in this exposition will be accepted gratefully.

gudeman@cs.arizona.edu (David Gudeman) (05/09/91)

In article  <STEPHEN.91May7205714@pesto.uchicago.edu> Stephen P Spackman writes:
]In article <2861@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
]
]|If you want to get theoretical, then you can _theoretically_ keep
]|buying tapes...
]
]There is definitely and without question a finite amount of matter in
]the universe that I am willing to make tapes out of.
]
]Evidently, I disagree. I am PERSONALLY willing to engineer my
]compilers on the assumption that (a) FTL travel, (b) many-worlds
]parallelism and (c) anything comparably revolutionary will not become
]available within the useful lifetime of the software.

Not fair, Stephen.  You took the theoretical argument and made
practical objections to it.  Your objections do not address the issue
of whether a computer is _theoretically_ a finite state machine -- and
I covered the practical issues in the part you did not cite.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

yodaiken@chelm.cs.umass.edu (victor yodaiken) (05/09/91)

In article <a8uk215w164w@mantis.co.uk> mathew@mantis.co.uk (CNEWS MUST DIE!) writes:
>yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>> The halting problem does not really apply to actual programming languages
>> (i.e. those that are compiled or interpreted into machine code).
>
>I see. So you are claiming that it is possible to write a program to predict
>whether a given piece of code will terminate or not, for any compiled or
>interpreted programming language?
>

Sure. Can it be done in practice, for every language and every program? 
I'll bet, no. But, there is no theoretical reason why a clever design
could not produce a useful programming language with a compiler that
verified termination.  
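One hedged illustration of how such a design could go (a toy language
of my own invention, not any real compiler): if the only loop form
takes a bound that is fixed before the body runs, every program
terminates by construction, so no separate termination proof is needed.

```python
def run(prog, env=None):
    """Interpret a toy language whose only loop is ('repeat', var, body),
    where the iteration count is read from var before the body runs.
    With no unbounded loop or recursion available, every program in the
    language terminates.  Statements:
        ('set', var, value)   -- assign a constant
        ('add', var, value)   -- add a constant
        ('repeat', var, body) -- run body env[var] times
    """
    env = {} if env is None else env
    for stmt in prog:
        op = stmt[0]
        if op == 'set':
            env[stmt[1]] = stmt[2]
        elif op == 'add':
            env[stmt[1]] += stmt[2]
        elif op == 'repeat':
            for _ in range(env[stmt[1]]):   # bound is fixed on entry
                run(stmt[2], env)
    return env
```

For example, multiplication by repeated addition still works: set x to
0, then repeat n times adding 3.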

>
>mathew
>
> 

yodaiken@chelm.cs.umass.edu (victor yodaiken) (05/09/91)

In article <2861@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>Keep in mind that "state" in this sense refers to the entire body of
>information accessible by the computer.  If a computer runs out of
>storage, you can always go buy another tape.  Now there are two
>possible ways to interpret the question of whether this gives the
>computer unbounded storage: the practical and the theoretical.
>
>Whether this gives the computer _practical_ unbounded store depends on
>your opinion of what is practical.  If either you don't expect your
>problems to run out of tape, or your problems are important enough
>that you are willing to keep the computer supplied with all the tape
>it needs, then the computer has a practically unbounded state space.
>If you don't agree that the above is practical, then the computer has
>a _practically_ bounded state space.
>

Correct me if I'm wrong, but it seems as if proving that a program
will require at least the lifetime of the solar system to finish
computing on a device that switches in the time it takes an electron
to move one smallest measurable quantum would be sufficient for most
practical purposes.

>If you want to get theoretical, then you can _theoretically_ keep
>buying tapes until the world's supply of tape runs out.  Then you can

Of course, but now you are asking for a mathematical model which takes
your behavior into account. This is a rather more involved question than
the one we started with.

truesdel@nas.nasa.gov (David A. Truesdell) (05/09/91)

yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:

>In article <a8uk215w164w@mantis.co.uk> mathew@mantis.co.uk (CNEWS MUST DIE!) writes:
>>yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>>> The halting problem does not really apply to actual programming languages
>>> (i.e. those that are compiled or interpreted into machine code).
>>
>>I see. So you are claiming that it is possible to write a program to predict
>>whether a given piece of code will terminate or not, for any compiled or
>>interpreted programming language?
>>

>Sure. Can it be done in practice, for every language and every program? 
>I'll bet, no. But, there is no theoretical reason why a clever design
>could not produce a useful programming language with a compiler that
>verified termination.  

Oh terrific!  You've spent the past several days pretending that you're an
expert in the theory of computation.  You've generated an impressive heap of
buzzwords to defend your assertion that the halting problem does not
apply to real computers, and then you produce the statement above.

The answer to the question is "NO".  It is not possible in practice, it is not
possible in theory.  THAT is what the halting problem tells us.  It is
impossible to do that type of analysis in a truly general fashion.  Please
read even a basic description of the problem before wasting any more net
bandwidth.

While it might be possible for a compiler to do this type of analysis on
some SIMPLE programs, it's unlikely to work for even a moderately
sized program, and one thing it wouldn't be able to compile and verify is
itself.
--
T.T.F.N.,
dave truesdell (truesdel@nas.nasa.gov)
'And Ritchie said, "Let there be portability!"  And nothing happened, so
 Ritchie realized that he had his work cut out for him.'

stephen@estragon.uchicago.edu (Stephen P Spackman) (05/09/91)

In article <2953@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
|In article  <STEPHEN.91May7205714@pesto.uchicago.edu> Stephen P Spackman writes:
|]In article <2861@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
|]
|]|If you want to get theoretical, then you can _theoretically_ keep
|]|buying tapes...
|]
|]There is definitely and without question a finite amount of matter in
|]the universe that I am willing to make tapes out of.
|]
|]Evidently, I disagree. I am PERSONALLY willing to engineer my
|]compilers on the assumption that (a) FTL travel, (b) many-worlds
|]parallelism and (c) anything comparably revolutionary will not become
|]available within the useful lifetime of the software.
|
|Not fair, Stephen.  You took the theoretical argument and made
|practical objections to it.  Your objections do not address the issue
|of whether a computer is _theoretically_ a finite state machine -- and
|I covered the practical issues in the part you did not cite.

I wasn't cheating, I was just trying to reiterate - and defend - a
particular cut on the problem. There was a particular view given here
that it is interesting to study the THEORETICAL properties of
PRACTICAL machines.  I *could* talk about the theoretical properties
of theoretical machines or about the practical properties of practical
machines, yes, and that would be "fairer pool" in that it would
address what, perhaps, you thought I was talking about. But I
disagree!

The point is: bar some MAJOR breakthroughs in physics, the machines we
WORK with have limits. The fact that machines have limits has
potential theoretical implications. It is proposed by some to explore
them.

Objections that theoretical machines lack practical limits or that the
practical limits of practical machines are more immediate are SIMPLY
NOT GERMANE.

I addressed your comment that I understood to address my argument.

Let me try one more time. PRACTICAL computers are FSMs and not fully
general turing machines because PRACTICAL computers are finite. The
fact that PRACTICAL computers are finite is an important THEORETICAL
property of PRACTICAL computers and may need to be addressed before
certain ideas are dismissed. And this is true even if you include all
the input they're ever going to get as state space.

...I feel a bit like you're telling me that it's impossible to code a
correct sort, because you never know when your input is going to be
an infinite stream and the function just might not terminate....
----------------------------------------------------------------------
stephen p spackman         Center for Information and Language Studies
systems analyst                                  University of Chicago
----------------------------------------------------------------------

stephen@estragon.uchicago.edu (Stephen P Spackman) (05/09/91)

In article <30275@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:

|Sure. Can it be done in practice, for every language and every program? 
|I'll bet, no. But, there is no theoretical reason why a clever design
|could not produce a useful programming language with a compiler that
|verified termination.  

Well, this minimal condition has of course been met. The language
Brouwer was useful, and its compiler guaranteed termination (it didn't
verify it directly; its type system was based on Intuitionistic Type
Theory, which simply lacks nonterminating constructs).

Of course, what Brouwer was useful FOR was merely proving the point
that it could be done <smilie smilie smilie>: it wasn't what you'd
call a _practical_ programming language (it was a bitch to code in and
took forever to compile).

But for all the world it felt INCONVENIENT and underdeveloped rather
than HOPELESS. And that's speaking as someone who counts cycles and
criticises code generators.
----------------------------------------------------------------------
stephen p spackman         Center for Information and Language Studies
systems analyst                                  University of Chicago
----------------------------------------------------------------------

gudeman@cs.arizona.edu (David Gudeman) (05/10/91)

In article  <STEPHEN.91May9022258@estragon.uchicago.edu> Stephen P Spackman writes:
]
]... PRACTICAL computers are FSMs and not fully
]general turing machines because PRACTICAL computers are finite. The
]fact that PRACTICAL computers are finite is an important THEORETICAL
]property of PRACTICAL computers and may need to be addressed before
]certain ideas are dismissed...

I'm assuming that by "practical computers are finite" you mean that
computers are finite for practical purposes -- if you meant something
else, please correct me.  OK, suppose that there are important
theoretical results based on the assumption that computers are finite.
That does not preclude the possibility that there are important
theoretical results based on the assumption that computers are not
finite.  And it is certainly the case that there are many applications
where computers are not finite -- for practical purposes.

I have not claimed that computers are unbounded-state machines, I've
only said that it isn't the case that computers are obviously FSMs as
some on this group seem to think.  Both models of computers have their
uses.  It is not the case that all results based on the assumption
that computers are USMs are impractical, because as long as your
problems do not hit the boundaries, the boundaries may as well not
exist.  Most algorithm analyses and programming language semantics
assume that the machines are USM's and the results from such
assumptions are valid for most practical purposes.

]....I feel a bit like you're telling me that it's impossible to code a
]correct sort, because you never know when your input is going to be
]an infinite stream and the function just might not terminate....

I don't understand what you mean by that.  If your input is another
process of unknown origin, then there _are_ certain correctness
criteria that you cannot satisfy with your own sort.  Are you
suggesting that you wouldn't want to be told that?
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

davgot@auto-trol.com (David Gottner) (05/10/91)

In article <STEPHEN.91May9022258@estragon.uchicago.edu> stephen@estragon.uchicago.edu (Stephen P Spackman) writes:

>Let me try one more time. PRACTICAL computers are FSMs and not fully
>general turing machines because PRACTICAL computers are finite. The
>fact that PRACTICAL computers are finite is an important THEORETICAL
>property of PRACTICAL computers and may need to be addressed before
>certain ideas are dismissed. And this is true even if you include all
>the input they're ever going to get as state space.

PRACTICAL computers may be FSM's but programming languages are not.  Nowhere
in the ANSI C standard does it give a maximum amount of memory that a C
program can run in.  Thus your program checker must assume that the program
that it is analyzing will never run out of tape, which means that the analyzer
must assume that the program will run on a Turing Machine, and the 
undecidability theorem applies.

>
>...I feel a bit like you're telling me that it's impossible to code a
>correct sort, because you never know when your input is going to be
>an infinite stream and the function just might not terminate....

Of course some simple algorithms may be proven to terminate, like a
sorting algorithm, but not >ALL<
-- 
David Gottner
davgot@auto-trol.COM                           Auto-trol Technology Corporation
{...}ncar!ico!auto-trol!younam                 12500 North Washington Street
(303) 252-2321                                 Denver, CO 80241-2404

jallen@eeserv1.ic.sunysb.edu (Joseph Allen) (05/10/91)

In article <17115@chaph.usc.edu> jeenglis@alcor.usc.edu (Joe English) writes:
>truesdel@nas.nasa.gov (David A. Truesdell) writes:
>>jeenglis@alcor.usc.edu (Joe English) writes:
>The interesting question is not whether or not an editor
>will halt on all inputs.  Looking at the problem
>this way is like looking at Emacs as a finite state machine
>accepting the regular expression ".*^X^C".  What's interesting
>are questions like "If I invoke ESC-X towers-of-hanoi, will
>it finish?"

Yes, after 64 disks everything will finish  :-)

Actually, for real-time systems the halting problem gets very weird.  For
example, in an OS, if user 1 runs towers of hanoi in gnu-emacs before user 2
finishes compiling gnu C, the OS will finish because it ran out of swap space,
but if user 2 finishes first....

For some things the halting problem is important.  I seem to remember that
some register allocation techniques in an optimizing compiler needed halting
analysis.
-- 
/*  jallen@ic.sunysb.edu  */     /* Amazing */     /* Joe Allen 129.49.12.74 */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}

stephen@estragon.uchicago.edu (Stephen P Spackman) (05/10/91)

In article <3007@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
 
|In article  <STEPHEN.91May9022258@estragon.uchicago.edu> Stephen P Spackman writes:
|]
|]... PRACTICAL computers are FSMs and not fully
|]general turing machines because PRACTICAL computers are finite. The
|]fact that PRACTICAL computers are finite is an important THEORETICAL
|]property of PRACTICAL computers and may need to be addressed before
|]certain ideas are dismissed...
|
|I'm assuming that by "practical computers are finite" you mean that
|computers are finite for practical purposes -- if you meant something
|else, please correct me.

No. I mean that practical computers are ACTUALLY finite: for both
practical AND theoretical purposes. I don't know any simpler way of
phrasing this. Practical computers are finite. They ARE. Really. In
actuality, real, actual, practical computers are actually really
finite in every way. They are limited, bounded and finite. Really.

|                           OK, suppose that there are important
|theoretical results based on the assumption that computers are finite.
|That does not preclude the possibility that there are important
|theoretical results based on the assumption that computers are not
|finite.

Of course not. But any important theoretical result that DEPENDS on a
computer being infinite will have NO DIRECT APPLICATION to computers,
since actual computers are not, EVER, infinite.

|         And it is certainly the case that there are many applications
|where computers are not finite -- for practical purposes.

This is absurd. Of course there are applications in which a finite
practical machine of certain capacity and an infinite theoretical
machine would exhibit the same behaviour. But does this make the
finite machine less finite? No way, no how. Practical computers are,
for all purposes, always finite. That's just the way it is. Any
infinite computer is a mere approximation of a real computer and you
use techniques and results derived for infinite computers at your own
peril.

|I have not claimed that computers are unbounded-state machines, I've
|only said that it isn't the case that computers are obviously FSMs as
|some on this group seem to think.

I don't understand what you're saying here. Are you telling me that
you actually KNOW that computers are FSMs but that you consider this
fact sufficiently non-obvious that you feel justified in persisting in
the pretence that they are infinite, despite the facts? Or are you
merely defending OTHER people's rights to err without fear of
correction?

In any event, the finitude of computers is obvious to about the same
extent as it is obvious that the population of ducks in Tahiti is
finite, I'd say. I mean, computers are finite, have discrete states,
and are machines. Therefore they're REALLY QUITE LIKELY (ahem) to be
FSMs.

|                                  Both models of computers have their
|uses.  

You bet. No question.

|      It is not the case that all results based on the assumption
|that computers are USMs are impractical, because as long as your
|problems do not hit the boundaries, the boundaries may as well not
|exist.

EXACTLY!!!! THANK YOU!!!! THIS IS THE POINT!!!! And any theorem that
depends on the fact that the machine will *NOT* hit a boundary does
not in fact apply to a real computer - and this is true of any theorem
that requires that encodings exist for arbitrary objects, for example.
Remember: computers ARE FINITE and (like humans) CANNOT DO ARITHMETIC.

|        Most algorithm analyses and programming language semantics
|assume that the machines are USM's and the results from such
|assumptions are valid for most practical purposes.

Oh, yes. Correct me if I'm wrong, but it seems to me that they are not
adequate to THEORETICAL purposes like the standard
incompleteness/inconsistency proofs. You're confusing building up with
building down: these theorems, at least as I've seen them, apply to
"computers" that are AT LEAST infinite, not to ones that are AT MOST
infinite.  They may be useful approximations, but they aren't precise
results.

|]....I feel a bit like you're telling me that it's impossible to code a
|]correct sort, because you never know when your input is going to be
|]an infinite stream and the function just might not terminate....
|
|I don't understand what you mean by that.  If your input is another
|process of unknown origin, then there _are_ certain correctness
|criteria that you cannot satisfy with your own sort.  Are you
|suggesting that you wouldn't want to be told that?

No; I'm suggesting that it's important to attribute cause accurately.
There's nothing INCORRECT about my sort, just because it can be handed
unbounded input. And there's nothing INFINITE about my computer just
because it can be approximated in some respects by an infinite
machine.
----------------------------------------------------------------------
stephen p spackman         Center for Information and Language Studies
systems analyst                                  University of Chicago
----------------------------------------------------------------------

stephen@estragon.uchicago.edu (Stephen P Spackman) (05/10/91)

In article <1991May9.205401.12382@auto-trol.com> davgot@auto-trol.com (David Gottner) writes:

|PRACTICAL computers may be FSM's but programming languages are not.  Nowhere
|in the ANSI C standard does it give a maximum amount of memory that a C
|program can run in.  Thus your program checker must assume that the program
|that it is analyzing will never run out of tape, which means that the analyzer
|must assume that the program will run on a Turing Machine, and the 
|undecidability theorem applies.

I never disagreed. But.

First of all, I would be more interested in a verifier that told me
about what my programme would do *if compiled* than one that told me
what it would do if run on a non-realisable machine.

Secondly, your argument applies to languages as they are presently
defined; personally I wouldn't BOTHER writing a verifier for C (though
if I had one there are times when I'd use it). A *new* type system,
for instance, might be able to get very good mileage out of the
assumption that resources are finite (though I'd be very unhappy if it
assumed any PARTICULAR bound, I think). I wouldn't know, of course;
I'm no theorist.

But I grant your point: a C programme could consume an unbounded
amount of stack and still be correct by the standard (though it would
take a VERY imaginative implementation to get the heap to contain
more than ULONG_MAX distinct objects, whatever ULONG_MAX is).

But frankly, I think I'd rather have that flagged as an error, even if
that's technically wrong! (Not that this invalidates your point.)

Honestly I didn't think this was what we were talking about....
----------------------------------------------------------------------
stephen p spackman         Center for Information and Language Studies
systems analyst                                  University of Chicago
----------------------------------------------------------------------

rockwell@socrates.umd.edu (Raul Rockwell) (05/10/91)

Stephen P Spackman:
   Oh, yes. Correct me if I'm wrong, but it seems to me that they are
   not adequate to THEORETICAL purposes like the standard
   incompleteness/inconsistency proofs.

Which is not at all to say that a computer is complete -- if it was
you wouldn't have to worry about being able to write programs that can
not run because of machine limits :-)

Raul Rockwell

torek@elf.ee.lbl.gov (Chris Torek) (05/10/91)

A quick note that might serve to end all this fuss:

It seems to me that many of you are busy energetically expounding on
a different question than the one which whoever-started-this intended
to ask.  That is,

	given program P in language L, I want an analyser that will say:
		P does not halt for these inputs: <list>
		P might or might not halt for these inputs: <list>
	and I would like as few of the latter answers as possible.

For instance, given the program [yes, I use `unnecessary' semicolons]

	program p (input, output);
	    procedure spin; begin while true do {nothing}; end;
	var v : integer;
	begin
	    write('gimme a number');
	    read(v);
	    if v < 0 then spin;
	end.

we would like to have the program analyser say:

	P does not halt when the input value is less than zero.

We would not like to have it say:

	P might or might not halt for any input.

but the latter is acceptable when P is `sufficiently complicated'.
-- 
In-Real-Life: Chris Torek, Lawrence Berkeley Lab CSE/EE (+1 415 486 5427)
Berkeley, CA		Domain:	torek@ee.lbl.gov

rang@cs.wisc.edu (Anton Rang) (05/12/91)

In article <30227@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:

>Thus, if our compiler provides a sorting algorithm, and we attempt to
>compile a program which applies the sorting operator to a sequence
>that cannot be sorted on our machine, then it would be both useful
>and theoretically possible to have the compiler find and report the
>error.

  Right.  We have ways of doing this in a conservative way (e.g. assume
that all branches are taken), but they're not as useful as we'd like,
since real programs don't actually take every branch.  For instance, the
program (in pseudo-code):

	list := []
	if A == B then
	  list := append(list, 3)
	else
	  list := append(list, 'x')
	fi
	if A == B then
	  list := append(list, 4)
	else
	  list := append(list, 'y')
	fi
	sort(list)

  It's pretty obvious that the list at this point will contain either
only integers, or only characters, so that we should be able to sort
it.  However, a compiler may not be able to figure this out, at least
in the general case.

>Maybe we can even do better than that, and have an algorithm that
>will compute TypeSafe(Prog,Bound) so that any machine M that fits
>within the Bound is typesafe. If our compiler could test
>TypeSafe(Prog,Anything-with-at-least-2^32-addressable-words) this
>would be quite useful.

  It's possible, but it's unlikely that you can write a compiler which
will check this *within* the bounds of the machine, or which can do it
in a reasonable amount of time, etc.  It's conceivable, but how about
this example....

	list := ['a']
	for i in (some finite stream of even numbers)
	  if (we cannot find two prime numbers which add up to i) then
	    list := append(list, i)
	  fi
	rof

  Now, in the general case, the list will be safe iff Goldbach's
conjecture holds.  If we can figure out what numbers are generated in
the stream (perhaps bound them by some maximum integer), we can
obviously test whether this will hold, but it's not really feasible.

	Anton
   
+---------------------------+------------------+-------------+----------------+
| Anton Rang (grad student) | rang@cs.wisc.edu | UW--Madison | "VMS Forever!" |
+---------------------------+------------------+-------------+----------------+

rang@cs.wisc.edu (Anton Rang) (05/12/91)

In article <truesdel.673758212@sun418> truesdel@nas.nasa.gov (David A. Truesdell) writes:
>yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>
>>Sure. Can it be done in practice, for every language and every program? 
>>I'll bet, no. But, there is no theoretical reason why a clever design
>>could not produce a useful programming language with a compiler that
>>verified termination.  
>
>The answer to the question is "NO".  It is not possible in practice, it is not
>possible in theory.  THAT is what the halting problem tells us.  It is
>impossible to do that type of analysis in a truly general fashion.  Please
>read even a basic description of the problem before wasting any more net
>bandwidth.

  It's not possible *if* your language allows you to represent all
decidable functions and all computable relations.  It may be that
there are useful languages which do not have this property, though I'm
not convinced of this.  (Well, OK, finite-loop languages are useful
for quite a few applications, I guess.)

	Anton
   
+---------------------------+------------------+-------------+----------------+
| Anton Rang (grad student) | rang@cs.wisc.edu | UW--Madison | "VMS Forever!" |
+---------------------------+------------------+-------------+----------------+

davgot@auto-trol.com (David Gottner) (05/13/91)

In article <STEPHEN.91May10020459@estragon.uchicago.edu> stephen@estragon.uchicago.edu (Stephen P Spackman) writes:
>
>First of all, I would be more interested in a verifier that told me
>about what my programme would do *if compiled* than one that told me
>what it would do if run on a non-realisable machine.
>

You are right: a good verifier should be able to tell you approximately how much
time/space overhead a program will require.  (After all, humans can do this for
algorithms, so a verifier should be able to do so also.)

However, suppose the verifier said "This program will not halt on a machine
with no more than 16 MB."  Well, we can get a 32 MB machine and run it.  In
other words, if the verifier can show the program works or doesn't work on
a machine with memory <= M, then we can run it on a machine with memory M+1.
Thus, there is really no way to avoid Turing undecidability.

>Secondly, your argument applies to languages as they are presently
>defined; personally I wouldn't BOTHER writing a verifier for C (though
>if I had one there are times when I'd use it). A *new* type system,
>for instance, might be able to get very good mileage out of the
>assumption that resources are finite (though I'd be very unhappy if it
>assumed any PARTICULAR bound, I think). I wouldn't know, of course;
>I'm no theorist.
>
>But I grant your point: a C programme could consume an unbounded
>amount of stack and still be correct by the standard (though it would
>take a VERY imaginative implementation to get the heap to contain
>more than ULONG_MAX distinct objects, whatever ULONG_MAX is).
>

I don't think I'd want to use a language that placed limits on the number
of active objects (unless it is a large limit like ULONG_MAX :-) )
-- 
David Gottner
davgot@auto-trol.COM                           Auto-trol Technology Corporation
{...}ncar!ico!auto-trol!younam                 12500 North Washington Street
(303) 252-2321                                 Denver, CO 80241-2404

quale@khan.cs.wisc.edu (Douglas E. Quale) (05/13/91)

In article <30164@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>In article <1991May5.135342.13792@daffy.cs.wisc.edu> quale@khan.cs.wisc.edu (Douglas E. Quale) writes:
>>>turing machines. This does not mean that there is no algorithm for
>>>solving the halting problem for all interesting cases. Turing's
>>
>>Naturally ``interesting'' is a subjective term, but I find the busy beaver
>>problem interesting, and it's unsolvable.
>>
>
>In the context of comp.lang.misc, the busy-beaver problem, whatever its
>intrinsic value, seems quite uninteresting. Diagonalization arguments
>are simply not the bread and butter of programming language design.
>
>>I should also point out that real computers are *not* finite state machines.
>>
>
>Pointing it out is nice, but most computer engineers would be surprised by
>this claim. If we build a device from 3-dimensional switching elements, there
>are only two alternatives --- finite state device, or infinite volume device. 
>Of course, if we ignore the discrete nature of the switching elements and
>start worrying about analog behavior then things are different.  But, 
>this does not appear to be a very useful way of thinking about computation.

Someone posted earlier hitting some of what will follow, but I'll be
redundant anyway.

Real computers are *NOT* FSMs, and I hope most computer engineers are
not surprised by this.  The rather small workstation on which I'm writing
this is currently running about 34 asynchronous processes.  The next state
of the machine depends very greatly on which key I strike next, the exact
time at which I strike it, the exact current locations of the 5 hard disks'
read and write heads, the values of all the peripheral devices' hardware
registers (many of which cannot be read by the CPU), and whether
or not someone decides to log on to play Nethack.  (In fact, the state of
the machine will be rather different if he decides to play a priest rather
than a knight, etc. ad nauseam.)

This particular machine is fairly old and unreliable hardware.  Disk and
other failures are rather common, and unpredictable.  Memory read and write
errors are always possible, and they are not always corrected or even
detected.  Via Ethernet, this machine is linked to dozens of others at the
UW and literally tens or hundreds of thousands of machines on the Internet.
The current state of this lowly workstation is dependent on the current state
of all these machines.  The future state of this machine depends on what new
machines will be linked to the Internet.

1) Explain to me how this workstation qualifies to be called a deterministic
machine.

I completely disagree with you as to whether analog considerations are a
useful way to think about computation.  It's essential to realize that our
machines are not infallible, digital computation devices.  It's funny to
see someone complain that TMs are an inappropriate abstraction for
computation because they don't model real machines but that FSMs are -- and
then completely disregard the "real" in "real machines".

When I asked a favorite math prof of mine what he disliked so about the
computer proof of the 4 color theorem he started by saying that the proof
was unsatisfying because he would naturally prefer a proof that he could
verify in his head, without relying on a machine.  His next statement
struck me more deeply; he stated that computer proofs are really proofs
consisting of an experiment in physics.  At first I thought this somewhat
silly, but I now understand that he was right.  Execution of a program on a
real machine is really nothing more than an experiment in physics.

In regards to TMs and interesting problems,

2) I won't speculate as to what the context of this group is, but of course
you realize that a solution to the halting problem would instantly provide the
means to decide Fermat's last conjecture, Goldbach's conjecture,
the Riemann hypothesis, etc.

-- Doug Quale
quale@khan.cs.wisc.edu

stephen@estragon.uchicago.edu (Stephen P Spackman) (05/13/91)

In article <1991May13.055104.11872@daffy.cs.wisc.edu> quale@khan.cs.wisc.edu (Douglas E. Quale) writes:

|Real computers are *NOT* FSMs, and I hope most computer engineers are
|not surprised by this.  The rather small workstation on which I'm writing
|this is currently running about 34 asynchronous processes.  The next state
|of the machine depends very greatly on which key I strike next, the exact
|time at which I strike it, the exact current location of the 5 hard disks
|read and write heads, the value of all the peripheral devices hardware and
|hardware registers (many of which cannot be read by the CPU), and whether
|or not someone decides to log on to play Nethack.  (In fact, the state of
|the machine will be rather different if decides to play a priest rather
|than a knight, etc. ad. nauseum.)

I don't think you've demonstrated your point at all. Surely they are
i/o devices, but this idea that input is somehow not subject to type
constraints and could be "anything" is really weird. A serial line can
bring in only 256 (say) distinct codes; each one can arrive at any one
of a few billion cycles each second (it may not be synchronous when
it's sent but it damn well better be synchronised by the input port!).
That's a lot of state, but it's still about as finite as you can get.

The trick you are perhaps missing is that at some point you end up
distinguishing the machine from its environment. Since the bandwidth
of the communication channel is finite and the active lifetime of the
channel is finite, and the channel is digital, this in no way
compromises the FSM-hood of the machine (even if there is feedback
in the channel, of course, since this can only constrain the input
space further).

The entire situation is formally equivalent to an FSM that starts off
with one "random" number which is formally an oracle for all the input
it will receive while running - any further information you can give
about the form of the input stream only serves to constrain things.

|This particular machine is fairly old and unreliable hardware.  Disk and
|other failures are rather common, and unpredictable.  Memory read and write
|errors are always possible, and they are not always corrected or even
|detected.

If the hardware is flakey (or rather WHEN it is) we clearly can't talk
about language semantics or programme behaviour at all. This is a red
herring of the highest order. (Yes, you can talk about fault-tolerant
behaviour, but in order to be theoretically meaningful you have to list
a finite set of failure modes of bounded consequences, which is ABSURD
from a practical point of view (whoever heard of STABLE storage? What
about nuclear war?) and actually doesn't damage the model ANYWAY, it
just means that there's one more i/o device, formally).

|I completely disagree with you as to whether analog considerations are a
|useful way to think about computation.  It's essential to realize that our
|machines are not infallible, digital computation devices.  It's funny to
|see someone complain that TMs are an inappropriate abstraction for
|computation because they don't model real machines but that FSMs are -- and
|completely disregard the reality in real.

Look, we all know about synchroniser failures, but from a formal
semantics point of view, they ARE failures. And from a hardware point
of view too - even if there's nothing that can be done about them. The
machine glitched and the programme that's running is no longer the
programme I wrote. I don't think you CAN formalise software behaviour
in THAT level of reality - certainly you can't without talking about a
programme compiled and running on a specific hardware configuration -
and even if you could the machine would STILL be finite and an FSM
would STILL be a "better" formal model than a turing machine, at least
until the first hardware failure manifested.

|When I asked a favorite math prof of mine what he disliked so about the
|computer proof of the 4 color theorem he started by saying that the proof
|was unsatisfying because he would naturally prefer a proof that he could
|verify in his head, without relying on a machine.  His next statement
|struck me more deeply; he stated that computer proofs are really proofs
|consisting of an experiment in physics.  At first I thought this somewhat
|silly, but I now understand that he was right.  Execution of a program on a
|real machine is really nothing more than an experiment in physics.

I take it that he doesn't use a physical device - say, his brain -
when he verifies a proof of his own devising? :-)
----------------------------------------------------------------------
stephen p spackman         Center for Information and Language Studies
systems analyst                                  University of Chicago
----------------------------------------------------------------------

kers@hplb.hpl.hp.com (Chris Dollin) (05/13/91)

David Gudeman says:

   If you want to get theoretical, then you can _theoretically_ keep
   buying tapes until the world's supply of tape runs out.  Then you can
   manufacture more.  When you use up all the earth's supply of materials
   for making tapes, you can build mining space ships.  

Careful, David; maybe the material for tapes is also an essential part of the
mining ships. What do you do if you can't build the ship because the last
gramme of poly-unobtainable-thene, needed for the scrawlbar-widget on the
pilot's desktop, has just been used to record a Very Significant Digit of the
answer (maybe the ship's flight plan ...)?

Resource allocation is hard.

--

Regards, Kers.      | "You're better off  not dreaming of  the things to come;
Caravan:            | Dreams  are always ending  far too soon."

jallen@csserv2.ic.sunysb.edu (Joseph Allen) (05/14/91)

In article <truesdel.673758212@sun418> truesdel@nas.nasa.gov (David A. Truesdell) writes:
>yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>
>>In article <a8uk215w164w@mantis.co.uk> mathew@mantis.co.uk (CNEWS MUST DIE!) writes:
>>Sure. Can it be done in practice, for every language and every program? 
>>I'll bet, no. But, there is no theoretical reason why a clever design
>>could not produce a useful programming language with a compiler that
>>verified termination.  

>The answer to the question is "NO".  It is not possible in practice, it is not
>possible in theory.  THAT is what the halting problem tells us.  It is
>impossible to do that type of analysis in a truly general fashion.  Please
>read even a basic description of the problem before wasting any more net
>bandwidth.

No, it can be done!  Really!  Just don't allow loops, gotos or recursion.
Now who wants to write a compiler in a language with these restrictions?

-- 
/*  jallen@ic.sunysb.edu  */     /* Amazing */     /* Joe Allen 129.49.12.74 */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (05/14/91)

In article <KERS.91May13111546@cdollin.hpl.hp.com>, kers@hplb.hpl.hp.com (Chris Dollin) writes:
> Careful, David; maybe the material for tapes is also an essential part of the
> mining ships. What do you do if you can't build the ship because the last
> gramme of poly-unobtainable-thene, needed for the scrawlbar-widget on the
> pilots desktop, has just been used to record a Very Significant Digit of the
> answer (maybe the ships flight plan ...)?

This one has already been answered in this newsgroup, this year.
You use space itself as your store (think "delay-line").
Your ground station transmits a string of pulses, N per second,
to a space-ship D metres away, which simply reflects them back
to the ground station.  The total distance travelled is 2D, at
a round-trip time of 2D/c seconds, so space holds 2ND/c pulses.
When you want more storage, you just move the spaceship further
away.  You can have as much storage as you want, if you're willing
to wait for it.  Have you read the SF story "the Very Slow Time
Machine"?  This is a Very Slow Turing Machine.

-- 
There is no such thing as a balanced ecology; ecosystems are chaotic.

rockwell@socrates.umd.edu (Raul Rockwell) (05/14/91)

   >Victor Yodaiken:
   >>But, there is no theoretical reason why a clever design could not
   >>produce a useful programming language with a compiler that
   >>verified termination.

   David A. Truesdell:
   >The answer to the question is "NO".  It is not possible in
   >practice, it is not possible in theory.

Joseph Allen:
   No, it can be done!  Really!  Just don't allow loops, gotos or
   recursion.  Now who wants to write a compiler in a language with
   these restrictions?

Well, you can allow loops as long as they can be guaranteed to be
bounded.  And you don't really need a new compiler, just take an
existing one and remove anything which allows unbounded loops or
recursion.

I suppose such a thing could be quite instructive.  Ask the question
"how useful is this language when I remove everything which
potentially prevents halting?".  If nothing else, it might tell you
where the language designers were at.

Raul Rockwell

yodaiken@chelm.cs.umass.edu (victor yodaiken) (05/14/91)

In article <1991May14.054813.18427@sbcs.sunysb.edu> jallen@csserv2.ic.sunysb.edu (Joseph Allen) writes:
>In article <truesdel.673758212@sun418> truesdel@nas.nasa.gov (David A. Truesdell) writes:
>>yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>>
>>>In article <a8uk215w164w@mantis.co.uk> mathew@mantis.co.uk (CNEWS MUST DIE!) writes:
>>>Sure. Can it be done in practice, for every language and every program? 
>>>I'll bet, no. But, there is no theoretical reason why a clever design
>>>could not produce a useful programming language with a compiler that
>>>verified termination.  
>
>>The answer to the question is "NO".  It is not possible in practice, it is not
>>possible in theory.  THAT is what the halting problem tells us.  It is
>>impossible to do that type of analysis in a truly general fashion.  Please
>>read even a basic description of the problem before wasting any more net
>>bandwidth.
>
>No, it can be done!  Really!  Just don't allow loops, gotos or recursion.
>Now who wants to write a compiler in a language with these restrictions?
>
>-- 

Care to explain why these restrictions make a language Turing uncomputable?
And please explain why primitive recursive functions are then, contrary
to common belief, Turing uncomputable.


>/*  jallen@ic.sunysb.edu  */     /* Amazing */     /* Joe Allen 129.49.12.74 */
>int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
>+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
>]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}

yodaiken@chelm.cs.umass.edu (victor yodaiken) (05/14/91)

In article <ROCKWELL.91May14081634@socrates.umd.edu> rockwell@socrates.umd.edu (Raul Rockwell) writes:
>
>   >Victor Yodaiken:
>   >>But, there is no theoretical reason why a clever design could not
>   >>produce a useful programming language with a compiler that
>   >>verified termination.
>
>   David A. Truesdell:
>   >The answer to the question is "NO".  It is not possible in
>   >practice, it is not possible in theory.
>
>Joseph Allen:
>   No, it can be done!  Really!  Just don't allow loops, gotos or
>   recursion.  Now who wants to write a compiler in a language with
>   these restrictions?
>
>Well, you can allow loops as long as they can guaranteed to be
>bounded.  And you don't really need a new compiler, just take an
>existing one and remove anything which allows unbounded loops or
>recursion.
>

The kind of looping that makes Turing machines undecidable is quite
different from the kind of looping we get in real machines. On
a Turing machine
            while(true)i = i+1;
is a truly infinite loop; each iteration takes us to a new state.
But on a real computer,
            while(true)i = i+1;
*will* repeat a state (or crash the program). Here we have a *finite* loop.

Similarly, bounded recursion, even with implicit bounds, is all we have
on real computers -- eventually stacks will overflow. So it is, in
principle, quite possible to write an ANSI "C" compiler that would
detect non-terminating loops. The compiler would have to know something
about the limits of addressable memory on the machine, but that does not
seem unreasonable.

torek@elf.ee.lbl.gov (Chris Torek) (05/15/91)

>Joseph Allen:
>   No, it can be done!  Really!  Just don't allow loops, gotos or
>   recursion.  Now who wants to write a compiler in a language with
>   these restrictions?

In article <ROCKWELL.91May14081634@socrates.umd.edu> rockwell@socrates.umd.edu
(Raul Rockwell) writes:
>Well, you can allow loops as long as they can guaranteed to be
>bounded. ...
>
>I suppose such a thing could be quite instructive.  Ask the question
>"how useful is this language when I remove everything which
>potentially prevents halting?".

Languages without loops can indeed be useful.  One such example is the
interpreter inside BPF, the Berkeley Packet Filter.  The `language' is
in effect an accumulator machine with packet-memory operations and a
small auxiliary memory.  The compiler takes packet matching
specifications (`tcp port 25') and emits load-and-compare operations to
decide whether the packet should be copied.  The interpreter allows
only forward branches and has no call instruction, so it is not possible
to create a loop.

BPF is very useful in its particular problem domain.  It is not much
good as a general purpose programming language. :-)
-- 
In-Real-Life: Chris Torek, Lawrence Berkeley Lab CSE/EE (+1 415 486 5427)
Berkeley, CA		Domain:	torek@ee.lbl.gov

rockwell@socrates.umd.edu (Raul Rockwell) (05/15/91)

Victor Yodaiken:

   The kind of looping that makes Turing machines undecidable is quite
   different from the kind of looping we get in real machines. On a
   turing machine
	       while(true)i = i+1;
   is a truly infinite loop, each iteration takes us to a new state.
   But on a real computer,
	       while(true)i = i+1;
   *will* repeat state (or crash the program). Here we have a *finite*
   loop.

er.. I hate to say this, but (brought to you through sheer brute effort): 

Assuming I agree with you, so what?


First off, a loop of the form   while(true) i=i+1   implemented on a
Turing machine is NOT what makes the Turing machine undecidable.  (In
fact, it is trivial to determine if that loop halts:  if the machine
is using arbitrary precision arithmetic, it won't halt; if the
machine is using finite state arithmetic it will or won't halt
depending on how the overflow condition is defined.)

What makes Turing machines undecidable (to gloss over all the details)
is that you can write programs of arbitrary complexity.  (NOTE: not
infinite complexity, arbitrary complexity -- circularly defined for
the purposes of this article as complexity sufficient that no program H
can determine whether the complex program halts.)

Second off, all I said was that unbounded loops and recursion must be
eliminated.  I carefully did not describe any specific mechanism for
doing so.  If your recognizer can determine that   while(true) i=i+1
will halt on your system, then it is fine to include it.  The key here
is to eliminate any ambiguous cases.  A simple example of which is
while(true) i = f(i)    :-).  Personally though, I wouldn't bother
considering this case as a halting case (why bother?).

Anyways, if your point is that for a finite state machine with n
states, a table with n entries is going to be able to list whether
each state halts, I understand the concept.  Or, if your point was
that a binary machine capable of representing at most n bits of
information has at most 2^n states, then yes I understand that too.
Thank you for pointing it out.

Raul Rockwell

mathew@mantis.co.uk (mathew) (05/15/91)

jallen@csserv2.ic.sunysb.edu (Joseph Allen) writes:
[ On solving the halting problem: ]
>
> No, it can be done!  Really!  Just don't allow loops, gotos or recursion.

Or I/O.

> Now who wants to write a compiler in a language with these restrictions?

More importantly, who wants to use it?


mathew

 

mathew@mantis.co.uk (CNEWS MUST DIE!) (05/15/91)

yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
> 
> The kind of looping that makes Turing machines undecidable is quite
> different from the kind of looping we get in real machines. On
> a turing machine
>             while(true)i = i+1;
> is a truly infinite loop, each iteration takes us to a new state.
> But on a real computer, 
>             while(true)i = i+1;
> *will* repeat state (or crash the program). Here we have a *finite* loop.

Only because either the machine's memory is finite, or because the specific
implementation of the language being used imposes some arbitrary limit on
integer size.

> Similarly, bounded recursion, even with implicit bounds, is all we have
> on real computers -- eventually stacks will overflow.

True. But all these bounds may be intractably large given virtual memory
systems.

>                                                          So it is, in
> principle, quite possible to write an ANSI "C" compiler that would
> detect non-terminating loops. The compiler would have to know someting
> about the limits of adressable memory onthe machine, but, that does not
> seem unreasonable.

It would have to know EXACTLY how much memory was going to be available to
the compiled program on execution, TO THE BYTE.

You would lose the abstraction of "int", "long" and so on; you would have to
know EXACTLY how each data type was represented, and the results printed by
the compiler would be critically dependent on that information.

What's more, the compiler would probably have to run on a machine which was
significantly larger than the target machine. You could write your program
and have it compile and have the compiler say "yes, this will terminate";
you could then try running the program on a machine with slightly more disk
space, or a different version of the OS, and suddenly it would fail to
terminate.

Now, I don't personally think it's worth putting in a great deal of effort in
order to get that sort of information.


mathew

 

yodaiken@chelm.cs.umass.edu (victor yodaiken) (05/15/91)

In article <1991May13.055104.11872@daffy.cs.wisc.edu> quale@khan.cs.wisc.edu (Douglas E. Quale) writes:
>Real computers are *NOT* FSMs, and I hope most computer engineers are
>not surprised by this.  The rather small workstation on which I'm writing
>this is currently running about 34 asynchronous processes.  The next state
>of the machine depends very greatly on which key I strike next, the exact
>time at which I strike it, the exact current location of the 5 hard disks
>read and write heads, the value of all the peripheral devices hardware and
>hardware registers (many of which cannot be read by the CPU), and whether
>or not someone decides to log on to play Nethack.  (In fact, the state of
>the machine will be rather different if decides to play a priest rather
>than a knight, etc. ad. nauseum.)
>

Finite state machines are models of what happens when you hit the key.
A mathematical model that also predicted your behavior would be interesting,
but please try to remember the problem at hand.  The physical laws which
predict the path of a projectile in flight do not predict whether or not
you will launch the projectile, what the weather will be like, or whether
or not a bird will collide with the projectile. Nevertheless, we find these
laws to be quite useful.

>This particular machine is fairly old and unreliable hardware.  Disk and
>other failures are rather common, and unpredictable.  Memory read and write
>errors are always possible, and they are not always corrected or even
>detected.  Via Ethernet, this machine is linked to dozens of others at the
>UW and literally tens or hundreds of thousands of machines on the Internet.
>The current state of this lowly workstation is dependent on the current state
>of all these machines.  The future state of this machine depends on what new
>machines will be linked to the Internet.
>

Again, you are arguing that the behavior of the people who manage the
Internet cannot be completely captured with a finite state machine. You
are correct. I doubt that a Turing machine provides a satisfactory model
either. Any machine model is defective according to this analysis. After
all, if someone shows up with a bag of chips and a soldering iron, they
can quite radically change machine behavior. Do we really need to model
all that?

>I completely disagree with you as to whether analog considerations are a
>useful way to think about computation.  It's essential to realize that our
>machines are not infallible, digital computation devices.  It's funny to
>see someone complain that TMs are an inappropriate abstraction for
>computation because they don't model real machines but that FSMs are -- and
>completely disregard the reality in real.


So, look at the analog behavior, look at the quantum level behavior, 
explain how brownian motion can alter the behavior of a Pascal program.
Please note, however, that TMs are just as discrete and deterministic
as FSMs. You seem to be arguing for a randomized non-linear differential
equation model of computation. Good luck.

>In regards to TMs and interesting problems,
>
>2) I won't speculate as to what the context of this group is, but of course
>you realize that a solution to the halting problem would instantly provide the
>means to decide Fermat's last conjecture, Goldbach's conjecture,
>the Riemann hypothesis, etc.

Let's try to be a little more precise. The halting problem for *Turing
machines* is not Turing decidable. It is decidable by machines which can
complete infinite computation sequences. Similarly,  the halting
problem for FSMs is not decidable by any one universal FSM. It is, however,
decidable by Turing machines and even weaker machines such as LBAs. 

A Turing machine provides quite a useful model of constructive mathematics.
This is the intended purpose of Turing machines. Turing, among others, 
wanted to provide a formal definition for the intuitive mathematical notion
of "algorithm". Since Turing's definition was intuitively acceptable
to many mathematicians, and since it was shown to be equivalent to competing
definitions given by Godel, Church, Kleene, and Post, it has been
generally (but not universally, c.f. Bishop) accepted. 
Finding constructive proofs and algorithms in mathematics is very
interesting. But,  the existence or non-existence of a Turing computable
algorithm for a problem tells us little about what we can or cannot
do with actual computers. 

yodaiken@chelm.cs.umass.edu (victor yodaiken) (05/16/91)

In article <eqJZ22w164w@mantis.co.uk> mathew@mantis.co.uk (CNEWS MUST DIE!) writes:
>yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>> 
>> The kind of looping that makes Turing machines undecidable is quite
>> different from the kind of looping we get in real machines. On
>> a turing machine
>>             while(true)i = i+1;
>> is a truly infinite loop, each iteration takes us to a new state.
>> But on a real computer, 
>>             while(true)i = i+1;
>> *will* repeat state (or crash the program). Here we have a *finite* loop.
>
>Only because either the machine's memory is finite, or because the specific

Generally the case with the machines I've seen.

>> Similarly, bounded recursion, even with implicit bounds, is all we have
>> on real computers -- eventually stacks will overflow.
>
>True. But all these bounds may be intractably large given virtual memory

This bears repeating: intractably large != infinite. What is intractably
large today may be easy tomorrow if someone thinks hard enough. What
is infinite today, will be infinite tomorrow. The state space of
the 8 queens problem was intractably large to Gauss, yet it's no problem
for anyone with access to a PC. The traveling salesman problem
is probably inherently exponential, but smart algorithms make it
reasonably tractable.  "Intractable" means "nobody has thought of
a good algorithm yet, and perhaps there is no such algorithm". 
"Infinite" means "there is no good algorithm". 

>>                                                          So it is, in
>> principle, quite possible to write an ANSI "C" compiler that would
>> detect non-terminating loops. The compiler would have to know something
>> about the limits of addressable memory on the machine, but that does not
>> seem unreasonable.
>
>It would have to know EXACTLY how much memory was going to be available to
>the compiled program on execution, TO THE BYTE.

Not true. It would have to know the limits of addressable memory on the
machine. Thus, it would sometimes err by detecting an "infinite loop"
in a program that will really crash. Or, under many operating systems,
the compiler would be able to know the maximum size of process data
space without much effort.

>You would lose the abstraction of "int", "long" and so on; you would have to
>know EXACTLY how each data type was represented, and the results printed by
>the compiler would be critically dependent on that information.
>

The programmer would not have to bother with these details, but the compiler
writer would have to know how data types were represented. But, it's hard
to imagine a compiler writer operating in ignorance of this information.

>What's more, the compiler would probably have to run on a machine which was
>significantly larger than the target machine. You could write your program

Only if the algorithm were anything I can come up with
right now (but then, I would never have thought of quicksort either),
or if the programming language was laxly designed. 

>and have it compile and have the compiler say "yes, this will terminate";
>you could then try running the program on a machine with slightly more disk
>space, or a different version of the OS, and suddenly it would fail to
>terminate.

Similarly, some programs will run correctly only if certain o.s. or
machine dependent conditions are met.  Do you honestly believe that
a compiler that would warn you about exceeding memory limits or 
looping indefinitely would not be of interest to most programmers?

marv@chine.ism.isc.com (Marvin Rubinstein) (05/17/91)

Followup-To: comp.lang.misc
Organization: Interactive Systems Corp., Santa Monica, CA

In article <P3iZ21w164w@mantis.co.uk> mathew@mantis.co.uk (CNEWS MUST DIE!) writes:
>jallen@csserv2.ic.sunysb.edu (Joseph Allen) writes:
>[ On solving the halting problem: ]
>>
>> No, it can be done!  Really!  Just don't allow loops, gotos or recursion.
>
>Or I/O.
>
>> Now who wants to write a compiler in a language with these restrictions?
>
>More importantly, who wants to use it?

Historical note:  The first digital machine that I used (in 1953) had these
properties.  It was an IBM CPC (Card Programmed Calculator).  The machine
did not have a program store (it did have a small data store).  The way it
worked was that it read a card which contained a few instructions.  The
instructions were executed and the next card was read.  The process continued
until it ran out of cards and halted.  I used it to reduce wind tunnel data.
One of the models I tested was a Sparrow missile.  I noticed that Sparrows
were used in our latest war.

The machine was extremely useful compared to the alternative, pencil, paper
and a desk calculator.

BTW: Most early machines were called calculators.  Computer was a job title.

    Marv Rubinstein

mathew@mantis.co.uk (mathew) (05/18/91)

yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
> In article <eqJZ22w164w@mantis.co.uk> mathew@mantis.co.uk (CNEWS MUST DIE!) w
> >yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
> >> The kind of looping that makes Turing machines undecidable is quite
> >> different from the kind of looping we get in real machines. On
> >> a turing machine
> >>             while(true)i = i+1;
> >> is a truly infinite loop, each iteration takes us to a new state.
> >> But on a real computer, 
> >>             while(true)i = i+1;
> >> *will* repeat state (or crash the program). Here we have a *finite* loop.
> >
> >Only because either the machine's memory is finite, or because the specific
> 
> Generally the case with the machines I've seen.

Yes, but do you want to have to be aware of exactly how much memory is
available to your programs?

> >>                                                          So it is, in
> >> principle, quite possible to write an ANSI "C" compiler that would
> >> detect non-terminating loops. The compiler would have to know something
> >> about the limits of addressable memory on the machine, but that does not
> >> seem unreasonable.
> >
> >It would have to know EXACTLY how much memory was going to be available to
> >the compiled program on execution, TO THE BYTE.
> 
> Not true.

Consider the following pseudo-code:

i = 0
repeat
   allocate 1 byte of memory
   i++
until allocation fails
if i is prime, while(1);
end

You can write such a program in ANSI C. In order to predict whether it will
enter a non-terminating loop you have to know exactly how much memory is
available to the program when it runs, to the byte.

The result of running the program will be different according to the amount
of memory the program is running in -- either virtual memory, or physical
memory if the machine has no VM.

The above example is contrived, but the same principles apply to any program
which does dynamic memory management; you cannot in general guarantee that
such programs will terminate. Look at buggy Lisp systems on small machines
and their tendency to go into infinite-GC loops.

You could maybe guarantee termination for the above program if your "allocate
memory" routine aborted the program on failure, but making ANSI C's malloc()
do that wouldn't be a very popular move.

> >You would lose the abstraction of "int", "long" and so on; you would have to
> >know EXACTLY how each data type was represented, and the results printed by
> >the compiler would be critically dependent on that information.
> 
> The programmer would not have to bother with these details, but the compiler
> writer would have to know how data types were represented. But, it's hard
> to imagine a compiler writer operating in ignorance of this information.

OK. Suppose the programmer writes his code without worrying about type
representation. He then compiles it on machine "A". The compiler says "Yes, it
will terminate".

Now, if the programmer wants to run his program on any other machine "B", he
needs to know if "B"'s data type representation for the types he uses is the
same as "A"'s in order to know whether his program will terminate. That's what
I mean by "...would have to know EXACTLY how each data type was
represented..."

> >What's more, the compiler would probably have to run on a machine which was
> >significantly larger than the target machine.
> 
> If the algorithm was anything that I can come up with
> right now  (but, I would never have thought of quicksort either).

I doubt that there will ever be an algorithm which will let a computer decide
whether a program running on itself will halt.

[ I expect there's a proof of that, which is probably much the same as the
  proof of the halting problem, but I'll leave it to someone else as it's
  Friday evening... ]

> Or if the programming language was laxly designed.

The restrictions you have to place on a language in order to enable
guaranteed termination are such that the language is not useful for general
programming.
 
>                                        Do you honestly believe that
> a compiler that would warn you about exceeding memory limits or 
> looping indefinitely would not be of interest to most programmers?

Yes.

Firstly, the amount of memory I have free in this machine depends upon what I
am running at the moment.

Secondly, I want to know whether my programs will work on my friends'
machines.

Thirdly, I don't want to have to compile on a bigger machine than the one I'm
going to run the program on.

Finally, my compiler is slow enough already. The kind of analysis it would
have to do for your hypothetical "terminating ANSI C" would make it
mindbogglingly slow.

I suspect that at least one of the above conditions holds for most
programmers.


mathew

 

rockwell@socrates.umd.edu (Raul Rockwell) (05/18/91)

Mathew "CNEWS MUST DIE" writes:
   The result of running the program will be different according to
   the amount of memory the program is running in -- either virtual
   memory, or physical memory if the machine has no VM.

Yeah, but you can always be fascist about it and have your compiler
specify a fixed-size memory model.  In some applications this is even
a feature :-)

Of course, if the size of data objects depends on run-time conditions,
you're still out of luck (at best you'll be able to implement a
crippled version of your program), but FORTRAN programmers have been
living with constraints like that forever.  (One workaround is to
re-compile a lot.)

Raul Rockwell

jallen@eeserv1.ic.sunysb.edu (Joseph Allen) (05/19/91)

In article <30506@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>In article <1991May14.054813.18427@sbcs.sunysb.edu> jallen@csserv2.ic.sunysb.edu (Joseph Allen) writes:
>>In article <truesdel.673758212@sun418> truesdel@nas.nasa.gov (David A. Truesdell) writes:
>>>yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>>>>In article <a8uk215w164w@mantis.co.uk> mathew@mantis.co.uk (CNEWS MUST DIE!) writes:
>>No, it can be done!  Really!  Just don't allow loops, gotos or recursion.
>>Now who wants to write a compiler in a language with these restrictions?

>Care to explain why these features make a language turing uncomputable?

You need loops to have an infinite loop.  If you don't have infinite loops,
the program will halt.  Naturally, "loop" is a bit broad (as another poster
pointed out, many loops can easily be proved to terminate or not).

>And please explain why primitive recursive functions are then, contrary
>to common belief, turing uncomputable.

I'll transform the problem:

/* 6502 Simulator */

#define HLT 0x02 /* an unused opcode, standing in for "halt" */

unsigned char ram[65536],a,x,y,sp,cc; unsigned short pc;

void run() {
 switch(ram[pc++]) {
  /* a case...break for each 6502 instruction */
 case HLT: return; }
 run(); }

int main() { /* Load program into ram */ pc=ram[0xfffe]+ram[0xffff]*256; run(); return 0; }

So now all you have to do is prove the haltability of any program you can make
on a 6502 microprocessor.  (the simulator does assume an infinite stack space,
but if we're talking about turing machines...) 

I think "primitive recursive functions" must mean something different from
what I was talking about.

-- 
/*  jallen@ic.sunysb.edu  */     /* Amazing */     /* Joe Allen 129.49.12.74 */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}

jallen@eeserv1.ic.sunysb.edu (Joseph Allen) (05/19/91)

In article <30164@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>In article <1991May5.135342.13792@daffy.cs.wisc.edu> quale@khan.cs.wisc.edu (Douglas E. Quale) writes:
>>In article <30082@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>>I should also point out that real computers are *not* finite state machines.

>Pointing it out is nice, but most computer engineers would be surprised by
>this claim. If we build a device from 3dimensional switching elements, there
>are only two alternatives --- finite state device, or infinite volume device. 
>Of course, if we ignore the discrete nature of the switching elements and
>start worrying about analog behavior then things are different.

I'm a communications ee (well almost anyway) and I'll make this statement even
stronger.  Even if the device is analog, you still have a finite amount of
information because of noise and QM.  Therefore it's still a FSM- only each
state is not exactly known (the state is a probability function instead).

And I'm almost 100% sure about this statement: the whole universe has a
finite amount of information in it if it has a finite amount of energy. 

But this is getting a little silly.  Whether finite programs are haltable is
also unprovable (right? CS people?).
-- 
/*  jallen@ic.sunysb.edu  */     /* Amazing */     /* Joe Allen 129.49.12.74 */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}

jallen@eeserv1.ic.sunysb.edu (Joseph Allen) (05/19/91)

In article <1991May18.225337.24416@sbcs.sunysb.edu> jallen@eeserv1.ic.sunysb.edu (Joseph Allen) writes:
>In article <30164@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>>In article <1991May5.135342.13792@daffy.cs.wisc.edu> quale@khan.cs.wisc.edu (Douglas E. Quale) writes:
>>>In article <30082@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>>>I should also point out that real computers are *not* finite state machines.

>But this is getting a little silly.  Whether programs on finite machines will
>halt is also unprovable.  Isn't it?

Ack!  I apologize for wasting net bandwidth.  The following article answers
this question:

From: yodaiken@chelm.cs.umass.edu (victor yodaiken)
>[A finite TM is translatable into a FSM.  A FSM has a state space S and a
>mapping S -> S.  To see if it halts, pick a starting state, run through the
>sequences.  If it reaches the halt state then it halts.  If it ever touches a
>state which already occured, it doesn't.  Since there is only a finite number
>of states, it takes a finite amount of time to decide (since at most it has
>to go through all the states)]

So on any real computer with no I/O, you would know.  Often it will halt
because of a "segmentation fault -- stack overflow -- core dumped" when on a
real TM it is undecidable. 

Now the entire universe can be modeled with a FSM with part of the state being
random for each step.  (This assumes the universe has finite storage capacity,
which I believe to be true if there's a finite amount of energy in the
universe.)  If you run a program on a computer in the universe you get three
possible answers: halts, doesn't, undecidable. Obviously if the haltability
depends on the randomness then it's not decidable.  For some cases you can
tell if it depends, for others you can't.  The random part is kind of a door
to an infinite state space.  Hmm... 
-- 
/*  jallen@ic.sunysb.edu  */     /* Amazing */     /* Joe Allen 129.49.12.74 */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}

yodaiken@chelm.cs.umass.edu (victor yodaiken) (05/19/91)

In article <TcT422w164w@mantis.co.uk> mathew@mantis.co.uk (CNEWS MUST DIE!) writes:
>yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>> In article <eqJZ22w164w@mantis.co.uk> mathew@mantis.co.uk (CNEWS MUST DIE!) w
>> >yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>> >> The kind of looping that makes Turing machines undecidable is quite
>> >> different from the kind of looping we get in real machines. On
>> >> a turing machine
>> >>             while(true)i = i+1;
>> >> is a truly infinite loop, each iteration takes us to a new state.
>> >> But on a real computer, 
>> >>             while(true)i = i+1;
>> >> *will* repeat state (or crash the program). Here we have a *finite* loop.
>> >
>> >Only because either the machine's memory is finite, or because the specific
>> 
>> Generally the case withthe machines I've seen.
>
>Yes, but do you want to have to be aware of exactly how much memory is
>available to your programs?

Not always. But, for the compiler to warn me that 
"while(1)i = i+1" will either crash or cycle state forever does not require
me to know this information. And if the compiler were compiling a program for 
a Turing machine the program would do neither.

>Consider the following pseudo-code:
>
>i = 0
>repeat
>   allocate 1 byte of memory
>   i++
>until allocation fails
>if i is prime, while(1);
>end
>
>You can write such a program in ANSI C. In order to predict whether it will
>enter a non-terminating loop you have to know exactly how much memory is
>available to the program when it runs, to the byte.

If your point is that one can write ANSI "C" programs that are machine
specific, sure. And if your point is that one can write ANSI "C" programs
that will depend on I/O or operating system behavior, I also agree. I
do not claim that one can write a "C" compiler that will predict how
an unspecified o.s. or an input channel will behave (except that the behavior
will fall in some range. e.g., when we read a "byte", we get one of
256 values). The program "read byte into x, if x=='a' while(1); else exit"
is inherently unpredictable unless we have a model of the i/o channel.
The compiler cannot predict such paths, but it can warn us when our
programs contain i/o dependent paths that can lead to indefinite cycles.
This is the case in your example.  Note, the turing theorem has nothing
to do with predicting behavior of your code. If we had an oracle for
Turing machines, the oracle would still not be able to predict the
behavior of "alloc".

>> The programmer would not have to bother with these details, but the compiler
>> writer would have to know how data types were represented. But, it's hard
>> to imagine a compiler writer operating in ignorance of this information.
>
>OK. Suppose the programmer writes his code without worrying about type
>representation. He then compiles it on machine "A". The compiler says "Yes, it
>will terminate".
>
>Now, if the programmer wants to run his program on any other machine "B", he
>needs to know if "B"'s data type representation for the types he uses is the
>same as "A"'s in order to know whether his program will terminate. That's what
>I mean by "...would have to know EXACTLY how each data type was
>represented..."

Again, you raise a valid issue, but I'm not clear how it applies to the
point at hand. If I write a program that can run correctly given a "C"
compiler that uses 32bits to represent integers, but which will loop
given an 8 bit "C" compiler, the compilers can flag the error, without
requiring the programmer to worry about exact representation. The
program "int i = 1; while(i++ != 0)print(i)" will behave differently
under the different compilers. This is an inescapable fact. We provide
abstractions in our programming languages, but these are conveniences which
do not negate the facts that real machines do not have "integers" and
"real numbers", but only have limited representations of these mathematical
abstractions. Even if the compiler implements "i" with a variable format
(e.g. bignums), the programmer will want to know if the program may
exceed the limits of machine representation. A correctly written program
will not depend on exactly how much memory is available, and if the compiler
flags such a dependency, then it has found what is most likely an error.


>> >What's more, the compiler would probably have to run on a machine which was
>> >significantly larger than the target machine.
>> 
>> If the algorithm was anything that I can come up with
>> right now  (but, I would never have thought of quicksort either).
>
>I doubt that there will ever be an algorithm which will let a computer decide
>whether a program running on itself will halt.
>

There won't be. The key phrase here is "significantly larger".

>> Or if the programming language was laxly designed.
>
>The restrictions you have to place on a language in order to enable
>guaranteed termination are such that the language is not useful for general
>programming.

Demonstrate this? Any demonstration would seem to require unusually
precise information about the limits of ingenuity.

> 
>>                                        Do you honestly believe that
>> a compiler that would warn you about exceeding memory limits or 
>> looping indefinitely would not be of interest to most programmers?
>
>Yes.
>
>Firstly, the amount of memory I have free in this machine depends upon what I
>am running at the moment.
>
See above.

>Secondly, I want to know whether my programs will work on my friends'
>machines.
>

Try compiling on those machines, or compile with a "portability" flag.

>Thirdly, I don't want to have to compile on a bigger machine than the one I'm
>going to run the program on.
>

For many applications, this is not a problem, either because the target
machine cannot support a compiler (e.g., embedded systems), because
correctness of the application is of great interest (e.g., if you are
releasing a program that you hope will run on all Mac's with at least
1 meg of memory), or because the o.s. limits each program to an addressable
memory space that is less than the available machine space (e.g. 
most operating systems  on memory mapped machines).

If I had a "C" compiler that would run on, say, an IBM PC with at least
6 meg of memory and a large disk, which would tell me within a few days
whether or not an arbitrary "C" program would fail given at most
X bytes of memory (X < 3 meg), then 
I believe that at least a few people would be interested
in using it.


>Finally, my compiler is slow enough already. The kind of analysis it would
>have to do for your hypothetical "terminating ANSI C" would make it
>mindbogglingly slow.
>

We started with a question about Turing computability and end up
with arguments about how slow "C" compilers are these days.
Why I remember running the "C" compiler on a PDP-11/45 with 128K and
in those days we didn't need all these fancy optimizers, portability
testers, and other stuff, we were happy just to get away from VM and
PL/1. You kids these days ....

gudeman@cs.arizona.edu (David Gudeman) (05/20/91)

In article  <1991May19.024820.25262@sbcs.sunysb.edu> Joseph Allen writes:
[about the halting problem for real computers]
]Ack!  I apologize for wasting net bandwidth.  The following article answers
]this question:

I missed the original article, but no, it does not answer the
question, it only confuses it further.

]From: yodaiken@chelm.cs.umass.edu (victor yodaiken)
]>[A finite TM is translatable into a FSM.  A FSM has a state space S and a
]>mapping S -> S.  To see if it halts, pick a starting state, run through the
]>sequences.  If it reaches the halt state then it halts.  If it ever touches a
]>state which already occured, it doesn't.  Since there is only a finite number
]>of states, it takes a finite amount of time to decide (since at most it has
]>to go through all the states)]

What machine is going to do this?  If computers are FSM's then you
only have an FSM to do this, and I can always find a program too big
to analyze in this way.

By the way, how do you know that the TM is finite in the first place?

By the way, the term "Turing machine" refers to a specific
architecture, and does not describe all machines with infinite state
spaces.  (PDAs are infinite-state devices that can't even recognize
the recursive sets.)  A Turing machine has an infinite read-write tape
with a movable head and only four possible actions (as I recall).
There are very few applications where it is useful to model a computer
as a TM -- infinite or not.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

rockwell@socrates.umd.edu (Raul Rockwell) (05/20/91)

David Gudeman:
   By the way, the term "Turing machine" refers to a specific
   architecture, and does not describe all machines with infinite
   state spaces.  ...  A Turing machine has an infinite read-write
   tape with a movable head and only four possible actions (as I
   recall).  There are very few applications where it is useful to
   model a computer as a TM -- infinite or not.

Except that each "cell" on a turing machine tape represents a FSM of
arbitrary complexity.  The "four actions" of a TM are:

(1) do a state transition in the FSM
(2) move to the FSM on the left
(3) move to the FSM on the right
(4) "halt" or "suspend" (transition to an undefined state)

So any FSM is a restricted case of a turing machine, and turing
machines are trivially useful for anything that a FSM is useful for.

Often when people speak of turing machines, they think of one where
every FSM has only two states, and numbers are implemented as strings
of ones.  This particular instance of a turing machine is quite a bit
more limited than turing machines in general.

And while the basic definition of a turing machine only has rules for
a transition to an adjacent FSM you can usually make transitions to
FSMs located arbitrary distances away (as long as the states of the
intervening FSMs are known).  Note that you don't have to enumerate
the states and transitions, as long as each FSM is finite.

The way I understand it, the only kind of machine with an infinite
state space that can't be modeled by a TM is one with a non-denumerable
state space.

Raul Rockwell

gudeman@cs.arizona.edu (David Gudeman) (05/22/91)

In article  <ROCKWELL.91May19193927@socrates.umd.edu> Raul Rockwell writes:
]David Gudeman:
]   ...  There are very few applications where it is useful to
]   model a computer as a TM -- infinite or not.
]
]Except that each "cell" on a turing machine tape represents a FSM of
]arbitrary complexity...

This is rather opaque.  I can think of several functions to map a
turing machine with a fixed cell to a FSM.  None of them seem
particularly useful.

]So any FSM is a restricted case of a turing machine, and turing
]machines are trivially useful for anything that a FSM is useful for.

Not so.  If an FSM is useful for some purpose, then it would not
necessarily be useful to use a TM (with the added complications) for
the same thing.  It is not useful to build a TM to recognize a regular
expression.

For that matter, I can't think of any application where it is useful
to model a computer as an FSM.  If you are interested in machine
limits, then it seems simpler to begin with a nonfinite machine model.
For example, if you want to answer the question "For what inputs will
this program exceed the machine parameters?" then the easiest thing to
do is to derive a function of the needed machine resources in terms of
the input (assuming arbitrary resources).  If you were really using an
FSM model then the effect of the resources limits would be pervasive,
making the work much harder.

]The way I understand it, the only kind of machine with an infinite
]state space that can't be modeled by a TM is one with a non-denumerable
]state space.

That depends on what you want to model.  If you are only
interested in whether a set can be generated, then this is true.  But
if you are interested in how many operations you need to generate a
given set, then your statement is false.

Besides, just because you _can_ do something a certain way doesn't
mean that is a good way to do it.  It would be silly to define
programming language semantics in terms of a TM.  The Lambda calculus,
the predicate calculus, or a specialized state transition system is
far easier to use.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

doug@netcom.COM (Doug Merritt) (05/23/91)

In article <1991May18.225337.24416@sbcs.sunysb.edu> jallen@eeserv1.ic.sunysb.edu (Joseph Allen) writes:
>
>But this is getting a little silly.  Whether finite programs are haltable is
>also unprovable (right? CS people?).

Untrue. For any finite program A of size N bits (with number of states = 2^N),
there exists (in a mathematical sense) another finite program B with a
lookup table of size 2^N (each entry corresponds to a state) which says
whether A will halt when in some particular state.

Similarly, for *all* finite programs A* with size < N bits (that as usual
start in a particular known state), there exists a finite program C with a
lookup table of size 2^N (each entry corresponds to a program) that says whether
each program in A* must eventually halt.

This reasoning does not work on all programs, because that is an infinite
set including unboundedly large programs and therefore a lookup table
would have to be infinite in size.

For a program with 4 Megabytes (2^25 bits) of memory, one would need a
lookup table of size 2^(2^25) bits, making this a bit unrealistic in
practice.

Generating the lookup tables is left as an exercise for the reader. :-)
	Doug
-- 
Doug Merritt	doug@netcom.com		apple!netcom!doug

jallen@eeserv1.ic.sunysb.edu (Joseph Allen) (05/25/91)

In article <3354@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>In article  <1991May19.024820.25262@sbcs.sunysb.edu> Joseph Allen writes:
>]From: yodaiken@chelm.cs.umass.edu (victor yodaiken)
>]>[A finite TM is translatable into a FSM.  A FSM has a state space S and a
>]>mapping S -> S.  To see if it halts, pick a starting state, run through the
>]>sequences.  If if goes on the halt state then it halts.  If it ever touches a
>]>state which already occured, it doesn't.  Since there is only a finite number
>]>of states, it takes a finite amount of time to decide (since at most it has
>]>to go through all the states)]

>What machine is going to do this?  If computers are FSM's then you
>only have an FSM to do this, and I can always find a program too big
>to analyze in this way.

I agree, none (plus it would be much too slow - the number of states in real
computers is huge).  The 64K question is whether it can be done symbolically on
the language source code.  I believe the answer is sometimes.  If you don't
have malloc and you don't have other forms of bignums then it might even be
practical.  Maybe.  It would be useful because you could tell at compile time
if you're going to overflow, go past array bounds, or run out of stack space.
At any rate, the halting problem proof should not be justification for not at
least researching this.  I originally thought that it was, because I
misunderstood the halting problem and its scope.

If you're programming in an idealized lisp-like environment, then I guess the
problem starts to approach the actual halting problem.

>By the way, how do you know that the TM is finite in the first place?

Well, it's not as defined, but real computers are.  And, more importantly, so
are the "functions" (or whatever the modularity structure of the language in
question is) much of the time.

This is not to say that the halting problem is not useful - it is, but for
fundamental questions about discrete algorithms in idealized environments.

>By the way, the term "Turing machine" refers to a specific
>architecture, and does not describe all machines with infinite state
>spaces.  (PDAs are infinite-state devices that can't even recognize
>the recursive sets.)  A Turing machine has an infinite read-write tape
>with a movable head and only four possible actions (as I recall).
>There are very few applications where it is useful to model a computer
>as a TM -- infinite or not.

TMs are used because they can represent any machine with a finite or infinite
(but not continuum) state space (including PDAs).  It's not pretty to
represent real computers on TMs, but it can be done, and that's the point.  

Similarly, a FSM as a classification means much more than an EPROM and a
latch.
-- 
/*  jallen@ic.sunysb.edu  */     /* Amazing */     /* Joe Allen 129.49.12.74 */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}

doug@netcom.COM (Doug Merritt) (05/28/91)

In article <1991May25.135757.6912@sbcs.sunysb.edu> jallen@eeserv1.ic.sunysb.edu (Joseph Allen) writes:
> [...]  The 64K question is whether it can be done symbolically on the
>language source code.  I believe the answer is sometimes.  If you don't have
>malloc and you don't have other forms of bignums then it might even be
>practical.  Maybe.  It would be useful because you could tell at compile time
>if you're going to overflow or go past array bounds or run out of stack space.

Bignums and malloc are important in the theoretical sense in which they
were brought up in this thread, but eliminating their consideration doesn't
help all that much in practice.

>At any rate, the halting problem proof should not be justification for not at
>least researching this.

It gets researched all the time, the halting problem notwithstanding.  Papers
frequently discuss heuristics or, more often, adopt simplifying assumptions
that allow some progress to be made but leave the algorithm in question
useless for general application, because of the difficulty/impossibility of
proving that the assumptions hold in any particular real-world case.

If you actually try to design an analyzer that e.g. tries to predict whether
an array index will ever go out of bounds (as good an example as any), you
almost always end up modeling behavior which is highly input-data-dependent,
dependent on the results of other function calls (whether they were called
or how often and what they computed), etc. In short you get a model which
represents significant chunks of the original program, with very few
possible simplifications, and hence can't make much progress.

A great deal of the work on this subject comes up under the heading "symbolic
execution", and it *is* useful, just not as much as you would like to hope.
Another related area is "the aliasing problem", which refers to the
problem of figuring out when/whether two names (or pointers, depending
on the language/problem being analyzed) refer to the same memory cell(s).
It turns out that a lot hangs on this, and an awful lot of people would
be very happy if some general solution to this could be found. Unfortunately
in most forms it maps to the halting problem, or impractical simplifying
assumptions need to be made.

So although it is generally thought to be hopeless in the completely
general case, people do tackle subsets, and some people hope that eventually
some significantly powerful subset of the problem may prove tractable.
But still, the halting problem says that you'll never get a hundred percent
solution.

Oh, btw: another interesting related problem is that there is no
"normalized form" for algorithms in any of the universal models of computation
developed thus far. I.e. you can express, say, a shell sort in any of
numerous different ways, and within broad limits, no single one of those
forms can be called "the base case"/"the natural form"/"the simplest form".
This means that even so simple-sounding a task as comparing two algorithms
for functional equality is undecidable in general.  In number theory, positive
integers have a unique factorization, and this makes a lot of things very
tractable.
But most other things (e.g. general two- and multi- dimensional polynomials)
have no unique factorization. They can be factored several different ways.
And that significantly complicates things. Same thing with algorithms.

So if you discovered a universal model of computation that *did* have
a unique factorization, then that would be a breakthrough. Much would
become possible within that system. Only problem is that it would necessarily
be intractable or impossible to automatically translate algorithms from
other systems into that one, which probably means that it would be very
difficult to prove that it really was a universal model of computation.
	Doug
-- 
Doug Merritt	doug@netcom.com		apple!netcom!doug

smryan@clipper.ingr.com (Steven Ryan) (06/04/91)

>>]>[A finite TM is translatable into a FSM.

Consider an input tape with a left end marker '<' and right end marker '>'.
Between the markers are the symbols '(' and ')'.

The following Markov machine terminates with 'Y' if the '()' are semi-Dyck,
'N' if not.

	<	->	[p
	p(	->	(p
	(p)	->	p
	[p>	->	Y

	[p)	->	q
	q(	->	q
	q)	->	q
	q>	->	N

	(r>	->	r
	(r	->	r
	)r	->	r
	[r	->	N

This has a fixed number of rules. For any input string of length n, it uses
finite space O(n) and finite time O(n).

>>]>[A finite TM is translatable into a FSM.

This is finite and TM-equivalent. Please translate into a FSM for any legal
input string.
-- 
In compliance with DoC,            Steven Ryan
this posting is devoid             2400 Geng Road, Palo Alto, CA
of all information.                ...!{apple|pyramid}!clipper!wyrmham!smryan
fxxx!                              A new xxxx for xxxxxxx  xxxxxxxxxx.

dcooper@berlioz.nsc.com (D. Cooper) (06/04/91)

In article <1991Jun3.210908.17574@clipper.ingr.com> smryan@clipper.ingr.com (Steven Ryan) writes:
>>>]>[A finite TM is translatable into a FSM.
>
>Consider an input tape with a left end marker '<' and right end marker '>'.
>Between the markers are the symbols '(' and ')'.
>
> ...
>
>This has a fixed number of rules. For any input string of length n, it uses
>finite space O(n) and finite time O(n).
>
>This is finite and TM-equivalent. Please translate into a FSM for any legal
>input string.

Quite true.  A TM in which the size of the tape is limited to the size of the
input is a Linear Bounded Automaton (LBA).  An LBA accepts the Context
Sensitive Languages (CSLs), and the set of CSLs is a strict superset of the
set of Context Free Languages.

The difference between a FSM and a TM with a finite tape is that the FSM has a fixed
finite amount of "memory" while the amount of "memory" available to the TM with a finite
tape must necessarily grow as the length of the input grows (i.e. the tape must be at
least large enough to hold the input string).

The only way (that I am aware of) to restrict the power of a TM to that of a FSM is to
not allow the TM to write to the tape.