park@usceast.UUCP (Kihong Park) (01/10/90)
Here are the results I have obtained for the revised definition of "computational irreducibility" which I posted earlier. The results draw on algorithmic information theory.

(1) Finite sequences

Let w be a finite sequence over some alphabet, and let w be algorithmically random, i.e., the size of the minimal program generating w on some standard universal Turing machine is approximately |w|. Any such minimal program P capable of generating w must more or less "copy" w to the output, either from internal storage or from its input; no compaction is possible. Since P is the most compact representation of w, the amount of computation P performs to generate the whole of w is the minimum possible, and hence "computationally irreducible".

The above generalizes straightforwardly to algorithmically non-random sequences. If w is any such sequence, then its minimal program P (which need not be unique) is its most compact representation, and again the amount of computation required by P to generate the whole of w is "computationally irreducible".

(2) Infinite sequences

Given a (one-way) infinite sequence w and a minimal program P which generates it, by the same reasoning as above, the amount of computation needed by P to produce w is "computationally irreducible". By the definition of infinite random sequences, no such program, i.e., no finitely specifiable procedure, can correspond to an infinite random sequence. We implicitly require that the Turing machines acting as generators be deterministic.

As the above definitions show, there exist programs for which the amount of work required to generate their corresponding sequences in full is the minimum possible, and hence computationally irreducible.
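As an illustration only (not part of the formal argument above): Kolmogorov complexity is uncomputable, but a general-purpose compressor gives a computable upper bound on minimal program size, so compressibility can serve as a crude, practical stand-in for algorithmic randomness. The function name below is my own invention for this sketch.

```python
import os
import zlib

def compressed_ratio(w: bytes) -> float:
    """Return len(compressed)/len(w), a rough incompressibility score.

    A sequence the compressor cannot shrink behaves like an
    "algorithmically random" one in this crude approximation.
    """
    return len(zlib.compress(w, 9)) / len(w)

random_w = os.urandom(100_000)   # high-entropy sequence
regular_w = b"ab" * 50_000       # highly patterned sequence

# The random sequence resists compression (ratio near or above 1);
# the periodic one admits massive compaction (ratio near 0).
print(compressed_ratio(random_w) > 0.95)
print(compressed_ratio(regular_w) < 0.05)
```

Of course this only bounds the minimal program size from above; it cannot certify that a given w is random, just as no effective procedure can.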
But even the definition given above carries a problem: although, to faithfully produce a given sequence w in full, the amount of computation needed by a corresponding minimal program is "computationally irreducible", this observation is more or less obvious. When one investigates "one-instance" problems, one asks questions about the properties of the problem. In the case of the root-of-2 problem described in the previous posting (does the decimal expansion of the square root of 2 contain a finite substring of some specified pattern?), the definition above does not imply that, to answer this question, one has in theory no recourse but to explicitly simulate the algorithm ad infinitum. As far as I know, knowledge about the properties of the square root of 2 is still meager, but this does not mean that a better understanding of this infinite sequence is impossible. Hence, in general, when one asks questions about properties of infinite sequences and their associated generators, these properties are independent of the generating mechanism, and the definition given above is of little use. By properties of infinite sequences, I mean computable functions on the subsequences of the sequence.

If we call the definition above "weak" computational irreducibility, then a "strong" definition would say that there exist sequences, with their generators, having properties which can be computed only by explicitly generating the whole sequence.

I am investigating these issues basically because I am trying to understand infinite sequences from a computational point of view. Questions arising in chaos theory deal with infinite processes (iterative equations) whose behavior one wants to analyze. Hence, understanding the nature of interesting infinite sequences (a subset of the computable irrational numbers) seems to be a core problem at hand.
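A small sketch of the root-of-2 problem's asymmetry (the construction and names are mine, for illustration): a prefix of the decimal expansion can confirm that a pattern occurs, but failing to find it in any finite prefix settles nothing, which is exactly why the "strong" irreducibility question is interesting. The digits are obtained via the integer square root, since the first n fractional digits of the square root of 2 are given by floor(sqrt(2 * 10^(2n))).

```python
from math import isqrt

def sqrt2_digits(n: int) -> str:
    """Return "1" followed by the first n fractional decimal digits
    of the square root of 2, computed exactly via integer square root."""
    return str(isqrt(2 * 10 ** (2 * n)))[: n + 1]

# Searching a prefix can only answer "yes"; a "no" on any finite
# prefix leaves the original question open.
digits = sqrt2_digits(10_000)
print("14142" in digits)  # True: the expansion begins 1.41421...
```

Whether some specified pattern that has not yet turned up ever occurs is precisely the kind of property that, as far as I can see, we have no general shortcut for.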
I would appreciate any suggestions concerning the definitions and problems cited above.