BINDNER@auvm.auvm.edu (10/05/90)
A few thoughts on the limits and potentials of AI.

First, the limits. I don't think AI will ever be able to really duplicate human judgement. It may one day be the complement of man's rational thought, but it will never duplicate man, as it will not evolve the same way. Thus the thinking machine will never happen, and the expert system is limited to an automated rulebook with data processing functions. Emotion, intuition, inspiration (gasp, a dualist) and other things which are physical or spiritual are outside the scope of our abilities (at least I hope they are).

Next, the potentials. Although AI will never duplicate thought, it may enhance it. It will do this by aiding humans in their use of computers. Managers and scientists have the ability to think, and in fact they do it well. What they can't do is digest the enormous amounts of data which automation makes gatherable. Expert systems and their successors can aid this analysis. However, hunches and judgement are beyond the capabilities of automation (at least for the present), as they are non-rational.

A further potential, possibly AI's grandest, is to make computers accessible to all. Let me elaborate. Nothing discourages a new user more than the literal nature of computers. As all hackers know, computers like exact commands (and will accept nothing less). Correct this problem and AI will have served its function nicely. An attempt has been made to work around it with the rise of the menu-driven system. However, this is not a true solution (though it has the same effect).

Here's what I would like in an AI system:

- User affection (as opposed to user friendliness). I expect my PC to cuss at me if I cuss at it and compliment me if I compliment it. It should know the answer I want based on context.

- Mistake correction. If I type Logim and it needs Logon, it should ask me "Do you mean Logon?"
and if I respond Yes (or Y, or Sure, depending upon how well it knows me). After a number of repeated trials (2 to 5, depending upon how similar the command or error is to other commands) it will automatically say "I assume you meant Logon" and implement the command without asking me. Instead of saying "command not understood" it would search for permutations of the command from the front or back. These might be based on context (for instance, if the machine is not logged on it would expect a logon command and query for one, and if none is found, query for synonyms). Model high-level pseudocode:

    if command = error
        search permutations
        if search positive, go to presenter
        if search negative, search synonyms
            if search positive, go to presenter
    presenter:
        present command with question "Do you mean ()?"
        if answer = y (or a synonym, or a close permutation)
            execute, and add the command or misprint to the synonym structure
        if answer = n
            ask "What do you mean?" (in case of typos, not ignorance) and requery
        if after 2 tries nothing comes up
            ask "Do you wish to try something else?"
            if n, offer a help screen or menu; if y, restart

Loops should also be included for vulgarity, etc.

- Voice recognition, response, OCR and handwriting deciphering (of course). I suggest a closed loop between VR and speech, with the system attempting to answer me in my own voice. When the comparator figures a close enough match (or my ear does) it should be able to decipher most words. A training vocabulary could be developed (possibly a personalized version which could be recorded once and plugged into any similar machine).

- VR will make language content access easier, because language interaction could occur all the time. The mistake correction/language acquisition feature would obviously be incorporated into the DOS and root systems. A dual processor would also be helpful.
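The mistake-correction pseudocode above could be sketched roughly as follows. This is only an illustration: the class name, the confirmation threshold, and the use of the standard-library fuzzy matcher `difflib.get_close_matches` in place of the front/back permutation search are my assumptions, not part of any existing system.

```python
import difflib

class CommandCorrector:
    """Illustrative sketch of the "Do you mean Logon?" loop.
    Thresholds and names are assumptions, not from a real shell."""

    def __init__(self, commands, auto_threshold=3):
        self.commands = set(commands)
        self.synonyms = {}       # learned misprint -> real command
        self.confirmations = {}  # misprint -> times the user said yes
        self.auto_threshold = auto_threshold

    def resolve(self, typed):
        """Return (command, question); question is None when the
        command can be executed without asking the operator."""
        if typed in self.commands:
            return typed, None
        if typed in self.synonyms:
            cmd = self.synonyms[typed]
            if self.confirmations[typed] >= self.auto_threshold:
                # after repeated trials: "I assume you meant Logon"
                return cmd, None
            return cmd, "Do you mean %s?" % cmd
        # fuzzy search stands in for the permutation/synonym search
        close = difflib.get_close_matches(typed, self.commands, n=1)
        if close:
            return close[0], "Do you mean %s?" % close[0]
        return None, "What do you mean?"

    def confirm(self, typed, cmd):
        """Operator answered yes: add the misprint to the synonym
        structure and count the confirmation."""
        self.synonyms[typed] = cmd
        self.confirmations[typed] = self.confirmations.get(typed, 0) + 1
```

For example, `resolve("logim")` against the command set `{"logon", "logoff", "dir"}` would suggest "logon" with a question the first few times, then execute it silently once the misprint has been confirmed often enough.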
If it determines a job takes over 2 minutes to run, the job will be sent to a batch "subconscious" while the talk system chats with the operator, using every opportunity to build associations between concepts; i.e., if a new word is found it will try to put it into its synonym structure. This structure would contain such things as emotional loading (polite-to-vulgar scales and sterile-to-emotive scales), tense, gender, etc. This time might be used to clean up synonym ambiguities, or be hooked into a news net which gives briefs on current events or discusses them (events tailored to the operator, from sport to politics to sex). Time would also be used to identify which subjects are important to the operator.

- If keyboards are eliminated, an input/edit toggle would be necessary, as would a larger screen with standard keys listed in a sidebar.

- Patterns of use could be recorded for possible duplication. The macro storage facility does this. However, it is not sensitive to environmental variation. For instance, if a common set of commands (a macro) is used in cell a5, but a different, similar spreadsheet has an extra column, the macro would need to start at a6 instead.

- Systems would have diagnostics (temperature, memory) built into them, to complain in quite human terms (anthropomorphism strikes again) if a problem occurs or could occur.

- The guiding principle here is to make the computer seem human, though the thought processes are far from human (though maybe not too far). The key is to take the fear out of computing.

There are drawbacks to this approach. A new DOS, memory structure, and hardware would be needed. However, advances in memory are made every day, so this might be feasible soon (comments on doability?). I hope some of these ideas are useful. I've been kicking them around for a few years now. Have any been tried? Am I too late and just ill informed on the state of the art? Discussion please.
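One way to make macros sensitive to environmental variation, as suggested in the spreadsheet example above, is to record cell references as offsets from a labelled anchor cell rather than as absolute coordinates, and re-locate the anchor before each replay. The toy sheet model (a dict of (row, col) -> value) and all names here are my own illustration, not any real spreadsheet's macro facility.

```python
def replay_macro(sheet, anchor_label, offsets, op):
    """Replay a macro recorded relative to an anchor cell.

    sheet        : dict mapping (row, col) -> cell value
    anchor_label : contents of the cell the macro was recorded against
    offsets      : (d_row, d_col) steps from the anchor
    op           : operation to apply at each target cell
    """
    # find the anchor by content, wherever an inserted row/column moved it
    anchor = next((rc for rc, v in sheet.items() if v == anchor_label), None)
    if anchor is None:
        raise KeyError("anchor %r not found" % anchor_label)
    row, col = anchor
    results = {}
    for d_row, d_col in offsets:
        cell = (row + d_row, col + d_col)
        results[cell] = op(sheet.get(cell, 0))
    return results
```

With this scheme the same macro lands on the right cells in both a sheet where the data starts in the first column and a copy with one extra column inserted, because the replay starts from the anchor's current position rather than a hard-coded a5.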
If by some chance I have hit on something, feel free to use it, but I want a working copy (or 6). I'll interface (internet?) with you all later,

Mike Bindner
schraudo@beowulf.ucsd.edu (Nici Schraudolph) (10/06/90)
BINDNER@auvm.auvm.edu writes:

>A few thoughts on the limits and potentials of AI.
>First, the limits. I don't think AI will ever be able to really duplicate
>human judgement. It may one day be the complement of man's rational
>thought, but it will never duplicate man, as it will not evolve the same way.

Never say never (especially on Usenet :-). There is no empirical evidence suggesting that "true AI" is impossible - only that it is very hard. The "designed vs. evolved" argument against true AI can easily be refuted: what is to keep us from *evolving* intelligent machines? Granted, they wouldn't evolve in exactly the same way, but their evolution might be set up such as to converge with ours (all we need is some religious fanatics that kill off all androids that betray themselves as non-human :-).

>Thus the thinking machine will never happen, and the expert system is limited
>to an automated rulebook with data processing functions. Emotion, intuition,
>inspiration (gasp, a dualist) and other things which are physical or spiritual
>are outside the scope of our abilities (at least I hope they are).

Modern philosophers find the dualist position increasingly hard to defend. Claims about the supposed impossibility of one thing or another have fallen by the hundreds through the history of science, and should therefore be viewed with suspicion unless backed up by solid arguments - which have not been forthcoming from the dualists.

>this analysis. However, hunches and judgement are beyond the capabilities of
>automation (at least for the present), as they are non-rational.

Non-rational with respect to what theory? Behaviors that seem completely irrational to you (i.e. wrt. folk psychology) may be perfectly rational to a psychologist, or maybe to some future psychologist. Non-rational behaviors are defined by exclusion; the current definition is "not explicable by scientific psychology as of 1990".
You cannot a priori rule out the possibility of a future Predictive Psychology of Hunches and Judgement. I am not expecting to see true AI in my lifetime. However, from this I do not jump to the foregone conclusion that it will never happen: the jury is still out on arguments against the reduction of psychology to biology. An excellent argument *for* the possibility of such a reduction can be found in "Neurophilosophy" by Patricia S. Churchland (MIT Press, 1986).

Please note that I am directing followups to comp.ai.philosophy.
--
Nicol N. Schraudolph, C-014             "Big Science, hallelujah.
University of California, San Diego      Big Science, yodellayheehoo."
La Jolla, CA 92093-0114                            - Laurie Anderson.
nici%cs@ucsd.{edu,bitnet,uucp}