harnad@mind.UUCP (Stevan Harnad) (10/23/86)
michaelm@bcsaic.UUCP (michael maxwell) writes:

> I believe the Turing test was also applied to orangutans, although
> I don't recall the details (except that the orangutans flunked)...
> As an interesting thought experiment, suppose a Turing test were done
> with a robot made to look like a human, and a human being who didn't
> speak English -- both over a CCTV, say, so you couldn't touch them to
> see which one was soft, etc. What would the robot have to do in order
> to pass itself off as human?

They should all three in principle have a chance of passing. For the
orang, we would need to administer the ecologically valid version of
the test. (I think we have reasonably reliable cross-species intuitions
about mental states, although they're obviously not as sensitive as our
intraspecific ones, and they tend to be anthropocentric and
anthropomorphic -- perhaps necessarily so; experienced naturalists are
better at this, just as cross-cultural ethnographic judgments depend on
exposure and experience.)

We certainly have no problem in principle with foreign speakers (the
remarkable linguist, polyglot and Bible translator Kenneth Pike has a
"magic show" in which, after less than an hour of "turing" interactions
with a speaker of any of the [shrinking] number of languages he doesn't
yet know, they are babbling mutually intelligibly before your very
eyes), although most of us may have some problems in practice with such
a feat, at least without practice. Severe aphasics and mental
retardates may be tougher cases, but there perhaps the orang version
would stand us in good stead (and I don't mean that disrespectfully; I
have an extremely high regard for the mental states of our fellow
creatures, whether human or nonhuman).

As to the robot: Well, that's the issue here, isn't it? Can it or can
it not pass the appropriate total test that its appropriate non-robot
counterpart (be it human or ape) can pass? If so, it has a mind, by
this criterion (the Total Turing Test).
I certainly wouldn't dream of flunking either a human or a robot just
because he/it didn't feel soft, if his/its total performance was
otherwise turing-indistinguishable.

Stevan Harnad
princeton!mind!harnad  harnad%mind@princeton.csnet
harnad@mind.UUCP (Stevan Harnad) (10/26/86)
In Message-ID: <8610190504.AA08059@ucbvax.Berkeley.EDU> on mod.ai,
CUGINI, JOHN <cugini@nbs-vms.ARPA> replies to my claim that

>> there is no rational reason for being more sceptical about robots'
>> minds (if we can't tell their performance apart from that of people)
>> than about (other) peoples' minds.

with the following:

> One (rationally) believes other people are conscious BOTH because
> of their performance and because their internal stuff is a lot like
> one's own.

This is a very important point and a subtle one, so I want to make sure
that my position is explicit and clear: I am not denying that there
exist some objective data that correlate with having a mind
(consciousness) over and above performance data. In particular, there's
(1) the way we look and (2) the fact that we have brains. What I am
denying is that this is relevant to our intuitions about who has a mind
and why. I claim that our intuitive sense of who has a mind is
COMPLETELY based on performance, and our reason can do no better. These
other correlates are only inessential afterthoughts, and it's
irrational to take them as criteria.

My supporting argument is very simple: We have absolutely no intuitive
FUNCTIONAL ideas about how our brains work. (If we did, we'd have long
since spun an implementable brain theory from our introspective
armchairs.) Consequently, our belief that brains are evidence of minds,
and that the absence of a brain is evidence of the absence of a mind,
is based on a superficial black-box correlation. It is no more rational
than being biased by any other aspect of appearance, such as the color
of the skin, the shape of the eyes or even the presence or absence of a
tail. To put it in the starkest terms possible: We wouldn't know what
device was and was not relevantly brain-like if it was staring us in
the face -- EXCEPT IF IT HAD OUR PERFORMANCE CAPACITIES (i.e., it
could pass the Total Turing Test).
That's the only thing our intuitions have to go on, and our reason has
nothing more to offer either. To take one last pass at setting the
relevant intuitions: We know what it's like to DO (and be able to do)
certain things. Similar performance capacity is our basis for inferring
that what it's like for me is what it's like for you (or it). We do not
know anything about HOW we do any of those things, or about what would
count as the right way and the wrong way (functionally speaking).
Inferring that another entity has a mind is an intuitive judgment based
on performance. It's called the (total) turing test. Inferring HOW
other entities accomplish their performance is ordinary scientific
inference. We're in no rational position to prejudge this profound and
substantive issue on the basis of the appearance of a lump of grey
jelly to our untutored but superstitious minds.

> [W]e DO have some idea about the functional basis for mind, namely
> that it depends on the brain (at least more than on the pancreas,
> say). This is not to contend that there might not be other bases, but
> for now ALL the minds we know of are brain-based, and it's just not
> dazzlingly clear whether this is an incidental fact or somewhat
> more deeply entrenched.

The question isn't whether the fact is incidental, but what its
relevant functional basis is. In other words, what is it about the
brain that's relevant, and what is incidental? We need the causal basis
for the correlation, and that calls for a hefty piece of creative
scientific inference (probably in theoretical bio-engineering). The
pancreas is no problem, because it can't generate the brain's
performance capacities. But it is simply begging the question to say
that brain-likeness is an EXTRA relevant source of information in
turing-testing robots, when we have no idea what's relevantly
brain-like. People were sure (as sure as they'll ever be) that other
people had minds long before they ever discovered they had brains.
I myself believed the brain was just a figure of speech for the first
dozen or so years of my life. Perhaps there are people who don't learn
or believe the news throughout their entire lifetimes. Do you think
these people KNOW any less than we do about what does or doesn't have a
mind? Besides, how many people do you think could really pick out a
brain from a pancreas anyway? And even those who can have absolutely no
idea what it is about the brain that makes it conscious; whether a
cow's brain or a horse-shoe crab's has it; or whether any other device,
artificial or natural, has it or lacks it, or why. In the end everyone
must revert to the fact that a brain is as a brain does.

> Why is consciousness a red herring just because it adds a level
> of uncertainty?

Perhaps I should have said indeterminacy. If my arguments for
performance-indiscernibility (the turing test) as our only objective
basis for inferring mind are correct, then there is a level of
underdetermination here that is in no way comparable to that of, say,
the unobservable theoretical entities of physics (say, quarks, or, to
be more trendy, perhaps strings). Ordinary underdetermination goes like
this: How do I know that your theory's right about the existence and
presence of strings? Because WITH them the theory succeeds in
accounting for all the objective data (let's pretend), and without them
it does not. Strings are not "forced" by the data, and other rival
theories may be possible that work without them. But until these rivals
are put forward, normal science says strings are "real" (modulo
ordinary underdetermination).

Now try to run that through for consciousness: How do I know that your
theory's right about the existence and presence of consciousness (i.e.,
that your model has a mind)? "Because its performance is
turing-indistinguishable from that of creatures that have minds." Is
your theory dualistic? Does it give consciousness an independent,
nonphysical, causal role? "Goodness, no!"
Well then, wouldn't it fit the objective data just as well (indeed,
turing-indistinguishably) without consciousness? "Well..." That's
indeterminacy, or radical underdetermination, or what have you. And
that's why consciousness is a methodological red herring.

> Even though any correlations will ultimately be grounded on one side
> by introspection reports, it does not follow that we will never know,
> with reasonable assurance, which aspects of the brain are necessary
> for consciousness and which are incidental... Now at some level of
> difficulty and abstraction, you can always engineer anything with
> anything... But the "multi-realizability" argument has force only if
> it's obvious (which it ain't) that the structure of the brain at a
> fairly high level (eg neuron networks, rather than molecules), high
> enough to be duplicated by electronics, is what's important for
> consciousness.

We'll certainly learn more about the correlation between brain function
and consciousness, and even about the causal (functional) basis of the
correlation. But the correlation will really be between function and
performance capacity, and the rest will remain the intuitive inference
or leap of faith it always was. And since ascertaining what is relevant
about brain function and what is incidental cannot depend simply on its
BEING brain function, but must instead depend, as usual, on the
performance criterion, we're back where we started. (What do you think
is the basis for our confidence in introspective reports? And what are
you going to say about robots' introspective reports...?)

I don't know what you mean, by the way, about always being able to
"engineer anything with anything at some level of abstraction." Can
anyone engineer something to pass the robotic version of the Total
Turing Test right now? And what's that "level of abstraction" stuff?
Robots have to do their thing in the real world.
And if my groundedness arguments are valid, that ain't all done with
symbols (plus add-on peripheral modules).

Stevan Harnad
princeton!mind!harnad  harnad%mind@princeton.csnet
(609)-921-7771
harnad@mind.UUCP (Stevan Harnad) (10/26/86)
In mod.ai, Message-ID: <861016-071607-4573@Xerox>,
"charles_kalish.EdServices"@XEROX.COM writes:

> About Stevan Harnad's two kinds of Turing tests [linguistic
> vs. robotic]: I can't really see what difference the I/O methods
> of your system makes. It seems that the relevant issue is what
> kind of representation of the world it has.

I agree that what's at issue is what kind of representation of the
world the system has. But you are prejudging "representation" to mean
only symbolic representation, whereas the burden of the papers in
question is to show that symbolic representations are "ungrounded" and
must be grounded in nonsymbolic processes (nonmodularly -- i.e., NOT by
merely tacking on autonomous peripherals).

> While I agree that, to really understand, the system would need some
> non-purely conventional representation (not semantic if "semantic"
> means "not operable on in a formal way"; as I believe [given the
> brain is a physical system] all mental processes are formal, then
> "semantic" just means governed by a process we don't understand yet),
> giving and getting through certain kinds of I/O doesn't make much
> difference.

"Non-purely conventional representation"? Sounds mysterious. I've tried
to make a concrete proposal as to just what that hybrid representation
should be like. "All mental processes are formal"? Sounds like
prejudging the issue again. It may help to be explicit about what one
means by formal/symbolic: Symbolic processing is the manipulation of
(arbitrary) physical tokens in virtue of their shape, on the basis of
formal rules. This is also called syntactic processing. The formal
goings-on are also "semantically interpretable" -- they have meanings;
they are connected to objects in the outside world that they are about.
The Searle problem is that so far the only devices that do semantic
interpretations intrinsically are ourselves.
My proposal is that grounding the representations nonmodularly in the
I/O connection may provide the requisite intrinsic semantics. This may
be the "process we don't understand yet." But it means giving up the
idea that "all mental processes are formal" (which in any case does not
follow, at least on the present definition of "formal," from the fact
that "the brain is a physical system").

> Two for-instances: SHRDLU operated on a simulated blocks world. The
> modifications to make it operate on real blocks would have been
> peripheral and not have affected the understanding of the system.

This is a variant of the "Triviality of Transduction (& A/D, & D/A, and
Effectors)" Argument (TT) that I've responded to in another iteration.
In brief, it's toy problems like SHRDLU that are trivial. The complete
translatability of internal symbolic descriptions into the objects they
stand for (and the consequent partitioning into a substantive symbolic
module and trivial nonsymbolic peripherals) may simply break down, as I
predict, for life-size problems approaching the power to pass the Total
Turing Test.

To put it another way: There is a conjecture implicit in the solutions
to current toy/microworld problems, namely, that something along
essentially the same lines will suitably generalize to the
grown-up/macroworld problem. What I'm saying amounts to a denial of
that conjecture, with reasons. It is not a reply to me to simply
restate the conjecture.

> Also, all systems take analog input and give analog output. Most
> receive finger pressure on keys and return directed streams of ink or
> electrons. It may be that a robot would need more "immediate" (as
> opposed to conventional) representations, but it's neither necessary
> nor sufficient to be a robot to have those representations.

The problem isn't marrying symbolic systems to any old I/O. I claim
that minds are "dedicated" systems of a particular kind: the kind
capable of passing the Total Turing Test.
That's the only necessity and sufficiency in question. And again, the
mysterious word "immediate" doesn't help. I've tried to make a specific
proposal, and I've accepted the consequences, namely, that it's just
not going to be a "conventional" marriage at all, between a
(substantive) symbolic module and a (trivial) nonsymbolic module, but
rather a case of miscegenation (or a sex-change operation, or some
other suitably mixed metaphor). The resulting representational system
will be grounded "bottom-up" in nonsymbolic function (and will, I hope,
display the characteristic "hybrid vigor" that our current pure-bred
symbolic and nonsymbolic processes lack), as I've proposed
(nonmetaphorically) in the papers under discussion.

Stevan Harnad
princeton!mind!harnad  harnad%mind@princeton.csnet
(609)-921-7771
harnad@mind.UUCP (Stevan Harnad) (10/26/86)
In mod.ai, Message-ID: <8610190504.AA08083@ucbvax.Berkeley.EDU>,
17 Oct 86 17:29:00 GMT, KRULWICH@C.CS.CMU.EDU (Bruce Krulwich) writes:

> i disagree...that symbols, and in general any entity that a computer
> will process, can only be dealt with in terms of syntax. for example,
> when i add two integers, the bits that the integers are encoded in
> are interpreted semantically to combine to form an integer. the same
> could be said about a symbol that i pass to a routine in an
> object-oriented system such as CLU, where what is done with
> the symbol depends on its type (which i claim is its semantics)

Syntax is ordinarily defined as formal rules for manipulating physical
symbol tokens in virtue of their (arbitrary) SHAPES. The syntactic
goings-on are semantically interpretable; that is, the symbols are also
manipulable in virtue of their MEANINGS, not just their shapes. Meaning
is a complex and ill-understood phenomenon, but it includes (1) the
relation of the symbols to the real objects they "stand for" and (2) a
subjective sense of understanding that relation (i.e., what Searle has
for English and lacks for Chinese, despite correctly manipulating its
symbols). So far the only ones who seem to do (1) and (2) are
ourselves. Redefining semantics as manipulating symbols in virtue of
their "type" doesn't seem to solve the problem...

> i think that the reason that computers are so far behind the
> human brain in semantic interpretation and in general "thinking"
> is that the brain contains a hell of a lot more information
> than most computer systems, and also the brain makes associations
> much faster, so an object (ie, a thought) is associated with
> its semantics almost instantly.

I'd say you're pinning a lot of hopes on "more" and "faster." The
problem just might be somewhat deeper than that...

Stevan Harnad
princeton!mind!harnad  harnad%mind@princeton.csnet
(609)-921-7771
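[Archivist's aside: Harnad's definition of syntax -- manipulating
arbitrary tokens purely in virtue of their shapes, by formal rules --
can be made concrete with a toy sketch (mine, not from the thread): a
binary adder that rewrites the tokens '0' and '1' by table lookup. The
procedure never "knows" the strings denote integers; that reading is
projected onto it from outside, which is exactly the point at issue
with Krulwich's addition example.]

```python
# A binary adder as pure symbol manipulation: tokens are combined by
# shape-matching rules alone. That the token strings are INTERPRETABLE
# as integers is extrinsic to the procedure.

# Formal rules: (token_a, token_b, carry_in) -> (sum_token, carry_out)
RULES = {
    ('0', '0', '0'): ('0', '0'),
    ('0', '1', '0'): ('1', '0'),
    ('1', '0', '0'): ('1', '0'),
    ('1', '1', '0'): ('0', '1'),
    ('0', '0', '1'): ('1', '0'),
    ('0', '1', '1'): ('0', '1'),
    ('1', '0', '1'): ('0', '1'),
    ('1', '1', '1'): ('1', '1'),
}

def add_tokens(a: str, b: str) -> str:
    """Combine two token strings right-to-left by rule lookup."""
    width = max(len(a), len(b))
    a, b = a.rjust(width, '0'), b.rjust(width, '0')
    out, carry = [], '0'
    for ta, tb in zip(reversed(a), reversed(b)):
        s, carry = RULES[(ta, tb, carry)]
        out.append(s)
    if carry == '1':
        out.append('1')
    return ''.join(reversed(out))

print(add_tokens('101', '11'))   # '1000' -- "5 + 3 = 8" only under our reading
```

The table could be relabeled (say, 'A'/'B' for '0'/'1') and the
procedure would run identically, which is what makes the tokens
"arbitrary" in Harnad's sense.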
harnad@mind.UUCP (Stevan Harnad) (10/26/86)
Topic: Machines: Natural and Man-Made

On mod.ai, in Message-ID: <8610240550.AA15402@ucbvax.Berkeley.EDU>,
22 Oct 86 14:49:00 GMT, NGSTL1::DANNY%ti-eg.CSNET@RELAY.CS.NET (Daniel
Paul) cites Daniel Simon's earlier reply in AI Digest (V4 #226):

> One question you haven't addressed is the relationship between
> intelligence and "human performance". Are the two synonymous? If so,
> why bother to make artificial humans when making natural ones is so
> much easier (not to mention more fun)?

Daniel Paul then adds:

> This is a question that has been bothering me for a while. When it
> is so much cheaper (and possible now, while true machine intelligence
> may be just a dream) why are we wasting time training machines when
> we could be training humans instead? The only reasons that I can see
> are that intelligent systems can be made small enough and light
> enough to sit on bombs. Are there any other reasons?

Apart from the two obvious ones -- (1) so machines can free people to
do things machines cannot yet do, if people prefer, and (2) so machines
can do things that people can only do less quickly and efficiently, if
people prefer -- there is the less obvious reply already made to Daniel
Simon: (3) because trying to get machines to display all our
performance capacity (the Total Turing Test) is our only way of
arriving at a functional understanding of what kinds of machines we
are, and how we work.

[Before the cards and letters pour in to inform me that I've used
"machine" incoherently: A "machine" (writ large, Deus Ex Machina) is
just a physical, causal system. Present-generation artificial machines
are simply very primitive examples.]

Stevan Harnad
princeton!mind!harnad  harnad%mind@princeton.csnet
(609)-921-7771
harnad@mind.UUCP (Stevan Harnad) (10/26/86)
freeman@spar.UUCP (Jay Freeman) replies:

> Possibly a more interesting test [than the robotic version of
> the Total Turing Test] would be to give the computer
> direct control of the video bit map and let it synthesize an
> image of a human being.

Manipulating digital "images" is still only symbol-manipulation. It is
(1) the causal connection of the transducers with the objects of the
outside world, including (2) any physical "resemblance" the energy
pattern on the transducers may have to the objects from which it
originates, that distinguishes robotic functionalism from symbolic
functionalism and that suggests a solution to the problem of grounding
the otherwise ungrounded symbols (i.e., the problem of "intrinsic vs.
derived intentionality"), as argued in the papers under discussion.

A third reason why internally manipulated bit-maps are not a new way
out of the problems with the symbolic version of the turing test is
that (3) a model that tries to explain the functional basis of our
total performance capacity already has its hands full anticipating and
generating all of our response capacities in the face of any potential
input contingency (i.e., passing the Total Turing Test), without having
to anticipate and generate all the input contingencies themselves. In
other words, it's enough of a problem to model the mind and how it
interacts successfully with the world, without having to model the
world too.

Stevan Harnad
{seismo, packard, allegra}!princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
harnad@mind.UUCP (Stevan Harnad) (10/27/86)
On mod.ai, in Message-ID: <8610160605.AA09268@ucbvax.Berkeley.EDU>,
16 Oct 86 06:05:38 GMT, eyal@wisdom.BITNET (Eyal Mozes) writes:

> I don't see your point at all about "categorical
> perception". You say that "differences between reds and differences
> between yellows look much smaller than equal-sized differences that
> cross the red/yellow boundary". But if they look much smaller, this
> means they're NOT "equal-sized"; the differences in wave-length may
> be the same, but the differences in COLOR are much smaller.

There seems to be a problem here, and I'm afraid it might be the
mind/body problem. I'm not completely sure what you mean. If all you
mean is that sometimes equal-sized differences in inputs can be made
unequal by internal differences in how they are encoded, embodied or
represented -- i.e., that internal physical differences of some sort
may mediate the perceived inequalities -- then I of course agree. There
are indeed innate color-detecting structures. Moreover, it is the
hypothesis of the paper under discussion that such internal categorical
representations can also arise as a consequence of learning.

If what you mean, however, is that there exist qualitative differences
among equal-sized input differences with no internal physical
counterpart, and that these are in fact mediated by the intrinsic
nature of phenomenological COLOR -- that discontinuous qualitative
inequalities can occur when everything physical involved, external and
internal, is continuous and equal -- then I am afraid I cannot follow
you.

My own position on color quality -- i.e., "what it's like" to
experience red, etc. -- is that it is best ignored, methodologically.
Psychophysical modeling is better off restricting itself to what we CAN
hope to handle, namely, relative and absolute judgments: What
differences can we tell apart in pairwise comparison (relative
discrimination), and what stimuli or objects can we label or identify
(absolute discrimination)?
We have our hands full modeling this. Further concerns about trying to
capture the qualitative nature of perception, over and above its
performance consequences [the Total Turing Test], are, I believe,
futile. This position can be dubbed "methodological epiphenomenalism."
It amounts to saying that the best empirical theory of mind that we can
hope to come up with will always be JUST AS TRUE of devices that
actually have qualitative experiences (i.e., are conscious) as of
devices that behave EXACTLY AS IF they had qualitative experiences
(i.e., turing-indistinguishably), but do not (if such insentient
look-alikes are possible). The position is argued in detail in the
papers under discussion.

> Your whole theory is based on the assumption that perceptual
> qualities are something physical in the outside world (e.g., that
> colors ARE wave-lengths). But this is wrong. Perceptual qualities
> represent the form in which we perceive external objects, and they're
> determined both by external physical conditions and by the physical
> structure of our sensory apparatus; thus, colors are determined both
> by wave-lengths and by the physical structure of our visual system.
> So there's no apriori reason to expect that equal-sized differences
> in wave-length will lead to equal-sized differences in color, or to
> assume that deviations from this rule must be caused by internal
> representations of categories. And this seems to completely cut the
> grounds from under your theory.

Again, there is nothing for me to disagree with if you're saying that
perceived discontinuities are mediated by either external or internal
physical discontinuities. In modeling the induction and representation
of categories, I am modeling the physical sources of such
discontinuities. But there's still an ambiguity in what you seem to be
saying, and I don't think I'm mistaken if I think I detect a note of
dualism in it. It all hinges on what you mean by "outside world."
If you only mean what's physically outside the device in question, then
of course perceptual qualities cannot be equated with that. It's
internal physical differences that matter. But that doesn't seem to be
all you mean by "outside world." You seem to mean that the whole of the
physical world is somehow "outside" conscious perception. What else can
you mean by the statement that "perceptual qualities represent the form
[?] in which we perceive external objects," or that "there's
no...reason to expect that...[perceptual] deviations from [physical
equality]...must be caused by internal representations of categories"?
Perhaps I have misunderstood, but either this is just a reminder that
there are internal physical differences one must take into account too
in modeling the induction and representation of categories (but then
they are indeed taken into account in the papers under discussion, and
I can't imagine why you would think they would "completely cut the
ground from under" my theory), or else you are saying something
metaphysical with which I cannot agree.

One last possibility may have to do with what you mean by
"representation." I use the word eclectically, especially because the
papers are arguing for a hybrid representation, with the symbolic
component grounded in the nonsymbolic. So I can even agree with you
that I doubt that mere symbolic differences are likely to be the sole
cause of psychophysical discontinuities, although, being physically
embodied, they are in principle sufficient. I hypothesize, though, that
nonsymbolic differences are also involved in psychophysical
discontinuities.

> My second criticism is that, even if "categorical perception" really
> provided a base for a theory of categorization, it would be very
> limited; it would apply only to categories of perceptual qualities. I
> can't see how you'd apply your approach to a category such as
> "table", let alone "justice".
How abstract categories can be grounded "bottom-up" in concrete
psychophysical categories is the central theme of the papers under
discussion. Your remarks were based only on the summaries and abstracts
of those papers. By now I hope the preprints have reached you, as you
requested, and that your question has been satisfactorily answered.

To summarize "grounding" briefly: According to the model, (learned)
concrete psychophysical categories are formed by sampling positive and
negative instances of a category and then encoding the invariant
information that will reliably identify further instances. This might
be how one learned the concrete categories "horse" and "striped," for
example. The (concrete) category "zebra" could then be learned without
need for direct perceptual ACQUAINTANCE with positive and negative
instances, by simply being told that a zebra is a striped horse. That
is, the category can be learned by symbolic DESCRIPTION, by merely
recombining the labels of the already-grounded perceptual categories.
All categorization involves some abstraction and generalization (even
"horse," and certainly "striped," did), so abstract categories such as
"goodness," "truth" and "justice" could be learned and represented by
recursion on already grounded categories, their labels and their
underlying representations. (I have no idea why you think I'd have a
problem with "table.")

> Actually, there already exists a theory of categorization that is
> along similar lines to your approach, but integrated with a detailed
> theory of perception and not subject to the two criticisms above;
> that is the Objectivist theory of concepts. It was presented by Ayn
> Rand... and by David Kelley...

Thanks for the reference, but I'd be amazed to see an implementable,
testable model of categorization performance issue from that source...

Stevan Harnad
{allegra, bellcore, seismo, packard}!princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
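[Archivist's aside: the acquaintance/description grounding story in the
post above can be caricatured in a few lines of code. The sketch is
mine, not Harnad's model: the feature encoding, the interval
"detector," and the hand-picked invariant dimensions are all
illustrative assumptions. Detectors for "horse" and "striped" are
induced from sampled instances; "zebra" is then defined by description
alone, as a recombination of the two grounded labels.]

```python
# Toy sketch of grounding by acquaintance vs. description.
# All encodings and category choices here are illustrative assumptions.

def learn_detector(positives, dims):
    """Induce a crude invariant from positive instances: an interval
    per (assumed-relevant) feature dimension. In a real learner,
    negative instances would drive the choice of `dims`."""
    lo = {i: min(p[i] for p in positives) for i in dims}
    hi = {i: max(p[i] for p in positives) for i in dims}
    return lambda x: all(lo[i] <= x[i] <= hi[i] for i in dims)

# Feature vectors: (leg_count, body_length_m, stripe_contrast)
horse_samples = [(4, 2.0, 0.0), (4, 2.4, 0.1)]     # sampled horses
striped_samples = [(6, 0.1, 0.9), (4, 2.1, 0.8)]   # wasp, tiger

# Concrete categories, grounded by ACQUAINTANCE with instances.
is_horse = learn_detector(horse_samples, dims=(0, 1))
is_striped = learn_detector(striped_samples, dims=(2,))

# Category by DESCRIPTION: "a zebra is a striped horse" -- no zebra
# instance was ever sampled; the new label recombines grounded ones.
is_zebra = lambda x: is_horse(x) and is_striped(x)

print(is_zebra((4, 2.2, 0.85)))   # True: identified without acquaintance
print(is_zebra((4, 2.2, 0.05)))   # False: an unstriped horse
```

The hard, unmodeled part is of course the one the papers are about:
where the invariant features and their detectors come from in the first
place.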
MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU (12/01/86)
Lambert Meertens asks: If some things we experience do not leave a
recallable trace, then why should we say that they were experienced
consciously? I absolutely agree. In my book, "The Society of Mind,"
which will be published in January, I argue, with Meertens, that the
phenomena we call consciousness are involved with our short-term
memories. This explains why, as Meertens suggests, it makes little
sense to attribute consciousness to rocks. It also means that there are
limits to what consciousness can tell us about itself. In order to do
perfect self-experiments upon ourselves, we would need perfect records
of what happens inside our memory machinery. But any such machinery
must get confused by self-experiments that try to find out how it works
-- since such experiments must change the very records that they're
trying to inspect! This doesn't mean that consciousness cannot be
understood, in principle. It only means that, to study it, we'll have
to use the methods of science, because we can't rely on introspection.

Below are a few more extracts from the book that bear on this issue. If
you want the book itself, it is being published by Simon and Schuster;
it will be printed around New Year but won't get to bookstores until
mid-February. If you want it sooner, send me your address and I should
be able to send copies early in January. (The price will be $18.95 or
less.) Or send the name of your bookstore so I can get S&S to lobby the
bookstore. They don't seem very experienced at books in the
AI-Psychology-Philosophy area.

In Section 15.2 I argue that although people usually assume that
consciousness is knowing what is happening in our minds, right at the
present time, consciousness is never really concerned with the present,
but with how we think about the records of our recent thoughts. This
explains why our descriptions of consciousness are so queer: whatever
people mean to say, they just can't seem to make it clear.
We feel we know what's going on, but can't describe it properly. How
could anything seem so close, yet always keep beyond our reach? I
answer: simply because of how thinking about our short-term memories
changes them!

Still, there is a sense in which thinking about a thought is different
from thinking about an ordinary thing. Our brains have various agencies
that learn to recognize -- and even name -- various patterns of
external sensations. Similarly, there must be other agencies that learn
to recognize events *inside* the brain -- for example, the activities
of the agencies that manage memories. And those, I claim, are the bases
of the awarenesses we recognize as consciousness. There is nothing
peculiar about the idea of sensing events inside the brain; it is as
easy for an agent (that is, a small portion of the brain) to be wired
to detect a *brain-caused brain-event* as to detect a world-caused
brain-event. Indeed, only a small minority of our agents are connected
directly to sensors in the outer world, like those that sense the
signals coming from the eye or skin; most of the agents in the brain
detect events inside of the brain!

In particular, I claim that to understand what we call consciousness,
we must understand the activities of the agents that are engaged in
using and changing our most recent memories. Why, for example, do we
become less conscious of some things when we become more conscious of
others? Surely this is because some resource is approaching some
limitation -- and I'll argue that it is our limited capacity to keep
good records of our recent thoughts. Why, for example, do thoughts so
often seem to flow in serial streams? It is because whenever we lack
room for both, the records of our recent thoughts must displace the
older ones. And why are we so unaware of how we get our new ideas?
Because whenever we solve hard problems, our short-term memories become so involved with doing *that* that they have neither time nor space for keeping detailed records of what they, themselves, have done. To think about our most recent thoughts, we must examine our recent memories. But these are exactly what we use for "thinking" in the first place - and any self-inspecting probe is prone to change just what it's looking at. Then the system is likely to break down. It is hard enough to describe something with a stable shape; it is even harder to describe something that changes its shape before your eyes; and it is virtually impossible to speak of the shapes of things that change into something else each time you try to think of them. And that's what happens when you try to think about your present thoughts - since each such thought must change your mental state! Would not any process become confused that alters the very thing it's looking at?

What do we mean by words like "sentience," "consciousness," or "self-awareness"? They all seem to refer to the sense of feeling one's mind at work. When you say something like "I am conscious of what I'm saying," your speaking agencies must use some records of the recent activity of other agencies. But what about all the other agents and activities involved in causing everything you say and do? If you were truly self-aware, why wouldn't you know those other things as well? There is a common myth that what we view as consciousness is measurelessly deep and powerful - yet, actually, we scarcely know a thing about what happens in the great computers of our brains.

Why is it so hard to describe your present state of mind? One reason is that the time-delays between the different parts of a mind mean that the concept of a "present state" is not a psychologically sound idea.
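The self-inspecting-probe problem can also be made concrete. In this toy illustration (my own construction, with made-up record names), each act of introspection writes a record of itself into the very buffer it is reading, so repeated probing displaces the thoughts it set out to examine.

```python
from collections import deque

# Three "recent thoughts" in a capacity-three short-term buffer.
memory = deque(["plan dinner", "recall meeting", "hum a tune"], maxlen=3)

def introspect(mem):
    snapshot = list(mem)                 # what we hoped to observe
    mem.append("act of introspection")   # the probe writes into the buffer,
    return snapshot                      # displacing the oldest record

first = introspect(memory)
second = introspect(memory)
# After two probes, the buffer is mostly records of probing, not of
# the original thoughts - the probe has altered what it inspected.
```

The second snapshot already differs from the first, and the buffer ends up dominated by records of the inspection itself: a minimal version of the confusion described above.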
Another reason is that each attempt to reflect upon your mental state will change that state, which means that trying to know your state is like photographing something that is moving too fast: such pictures will always be blurred. And in any case, our brains did not evolve primarily to help us describe our mental states; we're more engaged with practical things, like making plans and carrying them out.

When people ask, "Could a machine ever be conscious?" I'm often tempted to ask back, "Could a person ever be conscious?" I mean this as a serious reply, because we seem so ill-equipped to understand ourselves. Long before we became concerned with understanding how we work, our evolution had already constrained the architecture of our brains. However, we can design our new machines as we wish, and provide them with better ways to keep and examine records of their own activities - and this means that machines are potentially capable of far more consciousness than we are. To be sure, simply providing machines with such information would not automatically enable them to use it to promote their own development; until we can design more sensible machines, such knowledge might only help them find more ways to fail: the easier to change themselves, the easier to wreck themselves - until they learn to train themselves. Fortunately, we can leave this problem to the designers of the future, who surely would not build such things unless they found good reasons to.

(Section 25.4) Why do we have the sense that things proceed in smooth, continuous ways? Is it because, as some mystics think, our minds are part of some flowing stream? I think it's just the opposite: our sense of constant, steady change emerges from the parts of the mind that manage to insulate themselves against the continuous flow of time!
In other words, our sense of smooth progression from one mental state to another emerges not from the nature of that progression itself, but from the descriptions we use to represent it. Nothing can *seem* jerky, except what is *represented* as jerky. Paradoxically, our sense of continuity comes not from any genuine perceptiveness, but from our marvelous insensitivity to most kinds of changes. Existence seems continuous to us, not because we continually experience what is happening in the present, but because we hold on to our memories of how things were in the recent past. Without those short-term memories, all would seem entirely new at every instant, and we would have no sense at all of continuity or of existence.

One might suppose it would be wonderful to possess a faculty of "continual awareness." But such an affliction would be worse than useless, because the more frequently your higher-level agencies change their representations of reality, the harder it is for them to find significance in what they sense. The power of consciousness comes not from ceaseless change of state, but from having enough stability to discern significant changes in your surroundings. To "notice" change requires the ability to resist it, in order to sense what persists through time - but one can do this only by being able to examine and compare descriptions from the recent past. We notice change in spite of change, and not because of it.

Our sense of constant contact with the world is not a genuine experience; instead, it is a form of what I call the "Immanence Illusion": we have the sense of actuality when every question asked of our visual systems is answered so swiftly that it seems as though the answers were already there. And that is what frame-arrays provide us with: once any frame fills its terminals, this also fills the terminals of the other frames in its array.
When every change of view engages frames whose terminals are already filled, albeit only by default, then sight seems instantaneous.
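A rough sketch of the frame-array idea, in code. This is my own construction (the class and terminal names are invented for illustration, not taken from the book): the frames for different viewpoints share a single set of terminals, initially filled by defaults, so filling a terminal from one viewpoint instantly fills it for every other viewpoint in the array. A new view then finds its terminals already filled, which is why the change of view can seem instantaneous.

```python
class FrameArray:
    """Toy frame-array: several viewpoint-frames sharing one terminal store."""

    def __init__(self, viewpoints, terminals_with_defaults):
        self.viewpoints = viewpoints
        # One shared store: every frame in the array reads these terminals.
        self.shared = dict(terminals_with_defaults)

    def fill(self, terminal, value):
        # Filling a terminal in any one frame fills it for the whole array.
        self.shared[terminal] = value

    def view(self, viewpoint):
        # Each viewpoint finds every terminal already filled,
        # if only by default - no terminal is ever empty.
        return dict(self.shared)

room = FrameArray(
    viewpoints=["from the door", "from the window"],
    terminals_with_defaults={"walls": "default-wall", "floor": "default-floor"},
)
room.fill("walls", "blue wallpaper")          # noticed from the door...
print(room.view("from the window")["walls"])  # ...already filled at the window
```

The "floor" terminal is never explicitly filled, yet every view still answers for it by default - a minimal version of sight seeming instantaneous because the answers were already there.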