markh@csd4.milw.wisc.edu (Mark William Hopkins) (09/18/88)
Any time that one sets out to deal with a major problem, there is usually some kind of end-state that is desired -- an IDEAL, if you will. It's a necessary component of the problem-solving task; so much so that if you were to lack the goals and direction, you would just end up floundering and meandering -- and that's what is often (wrongly) perceived as doing philosophy.

So this brings up the question on my mind: Why does anyone want artificial intelligence? What is it that you're seeking to gain by it? What is it that you would have an intelligent machine do? And when you answer these questions, then answer how and why pursuing AI seems more urgent today than ever before.

Link what I've just said in the first two paragraphs. You'll see that it is a recursive problem: it applies both to AI and to you in the quest of seeking AI. If you want to deal successfully with the problem of AI, then you are going to have to know just what it is that you are trying to do. Human curiosity (about the nature of our mind) is one thing, but even that has to be directed toward a pressing need -- so the question remains just what the pressing need is. To say that we merely desire to understand the mind is just a way of rephrasing the question -- it is not an answer.

I asked the question and raised the issue, so probably I should try to answer it too. The first thing that comes to mind is our current situation as regards science -- its increasing specialization. Most people will agree that this is a trend that has gone way too far ... to the extent that we may have sacrificed global perspective and competence in our specialists; and further, that it is a trend that needs to be reversed. Yet few would dare to suggest that we can overcome the problem. I dare. One of the most important functions of AI will be to amplify our own intelligence.
In fact, I believe the time is upon us when this symbiotic relation between human and potentially intelligent machine is triggering an evolutionary change in our species, as far as its cognitive abilities are concerned. Seen this way, we'll realise that the axiom still holds: THE COMPUTER IS A TOOL. It's an Intelligent Tool -- but a tool nevertheless. Nowadays, for instance, we credit ourselves with the ability to go at high speeds (60 mph in a car) even though it is really the machine that is doing it for us. Likewise it is going to be with intelligent tools.

So in this way, the problem of the information explosion is going to be solved. Slowly, it is dawning on us that the very need for specialization is becoming obsolete. A major determinant of how fragmented science is, is how much communication takes place. I submit here that the information explosion is for the most part an explosion in redundancy, brought about by a communication bottleneck. Our goal is then to find a way to open up this bottleneck. It is here, again, that AI (especially in relation to intelligent databases) may come to the rescue. So much for the whys.
bstev@pnet12.cts.com (Barry Stevens) (09/19/88)
markh@csd4.milw.wisc.edu (Mark William Hopkins) writes:

> Why does anyone want artificial intelligence?
>
> A major determinant of how fragmented science is is how much communication
> takes place. I submit here that the information explosion is for the most part
> an explosion in redundancy brought about by a communication bottleneck. Our
> goal is then to find a way to open up this bottle neck. It is here, again that
> AI (especially in relation to intelligent data bases) may come to the rescue.

Along with the need to handle increasing amounts of information comes an increased need for performance:

Timeliness -- the speed at which information must be processed has increased dramatically (e.g. computer console messages in a commercial datacenter with multiple CPUs need to be analyzed at rates of 5 to 50 per SECOND).

Accuracy -- decisions must be made at accuracies that are beyond the sustained ability of human experts (e.g. process control systems needing 0.1% accuracy in set-point values for hundreds of variables, set every minute, 24 hrs/day).

Cost -- expert knowledge must be employed in situations where the presence of experts can't be afforded (e.g. stock or commodity trading systems based on expert systems and/or neural nets).

Availability -- most experts are fond of their weekends and evenings, and make a very big deal over their vacations. AI methods can make their skills available 24 hrs/day, 365 days/year.

I have surveyed many companies on their use of AI techniques. My personal feeling, supported by no one else at this point, is that the "why" of AI will be answered when the following application is implemented and becomes widespread: A mid-level manager must analyze a budget report once a week. He uses the rules he follows as the basis for an expert system: "If the variance is greater than $1000 in Acct 101, OR the TOTAL in Line 5 is greater than 10% of plan, OR ...
" an then delegates the expert system and his rule base of 10, 15, or 20 rules to HIS SECRETARY, AI and expert systems will have come of age in industry. The big question will be answered not by robotics applications, or speaker independent speech recognition, or writer-independent character recognition, or even smart data bases. (Most professionals don't use data bases), but by simple tasks, done by almost everyone in the work environment, taken over or delegated to someone else as a result of AI. The AI applications that do that will propogate across the workplace like LOTUS or other truly horizontal applications. UUCP: {crash ncr-sd}!pnet12!bstev ARPA: crash!pnet12!bstev@nosc.mil INET: bstev@pnet12.cts.com
shani@TAURUS.BITNET (09/19/88)
In article <6823@uwmcsd1.UUCP>, markh@csd4.milw.wisc.edu.BITNET writes:

> Why does anyone want artificial intelligence?
>
> What is it that you're seeking to gain by it? What is it that you would have
> an intelligent machine do?

Well, well, waddaya know! :-) Not long ago, an endless argument was held in this newsgroup regarding AI and value-systems. It seems that the reason this argument did not (as far as I know) reach any constructive conclusions is that the question above was never raised... So really, what do we expect an intelligent machine to be like? Or let me sharpen the question a bit:

How will we know that a machine is intelligent, if we lack the means to measure (or even to define) intelligence?

This may sound a bit cynical, but it is my opinion that setting up such misty goals, and using terms like 'intelligence' or 'value-systems' to describe them, is mainly meant to fund something which MAY BE beneficial (since research is almost always beneficial in some way), but will never reach those goals... After all, who would want to fund research which will only end up with easier-to-use programming languages or faster computers?

O.S.

BTW: I wish it weren't like that. It would be wonderful if R&D financing were not goal-dependent... all in all, the important thing is the research itself.
jeff2@certes.UUCP ( jeff) (09/22/88)
In article <867@taurus.BITNET>, shani@TAURUS.BITNET says:

> In article <6823@uwmcsd1.UUCP>, markh@csd4.milw.wisc.edu.BITNET writes:
>> Why does anyone want artificial intelligence?
>>
>> What is it that you're seeking to gain by it? What is it that you would have
>> an intelligent machine do?
>
> Or let me sharp the question a bit:
>
> How will we know that a machine is intelligent, if we lack
> the means to measure (or even to define) intelligence ?
>
> This may sound a bit cynical, but it is my opinion that setting up such
> misty goals, and useing therms like 'intelligence' or 'value-systems' to
> describe them, is mainly ment to fund something which MAY BE beneficial
> (since research is allmost always beneficial in some way), but will never
> reach those goals... why who would like to fund a research which will only
> end up with easyer to use programming languages or faster computers?

Consider the following:

1) It takes nearly 30 years (from conception to expert level) to train a new programmer/software engineer.

2) The average "expert expectancy" of this person is (I'm guessing) probably 10-15 years.

3) There are nearly 100,000,000 working people with ideas to improve the way their jobs are done.

4) Perhaps 1 person in 10 of these has the skills to automate the job.

At least two people are required to automate some portion of a task: one to describe the process and one to automate it. This increases the cost of the automation process (two salaries are being paid to do one job), and limits the number of tasks that can be automated at any one time to the number of automaters available. As a result, the number of tasks to be automated is expanding much more rapidly than the number of people to automate them. Given that few automaters remain experts in their field long enough to be fully replaced, we have no choice but to reduce the skill level required to automate a task if we want to improve our abilities to automate tasks.
This alone is justification for research into "easy to use" languages. Additionally, it would be nice if AI could create tools for developing the other automation tools -- tools sufficiently close to those in current use (e.g. English) that little training is required to use them.

/*---------------------------------------------------------------------------*/
Jeff Griffith
Teradyne/Attain, Inc., San Jose, CA 95131  (408) 434-0822
Disclaimer: The views expressed here are strictly my own.
Paths: jeff@certes!quintus or jeff@certes!aeras!sun
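Jeff's four points can be run as a quick back-of-the-envelope calculation. All figures below are his stated guesses from the post, and the 12-year expert span is an assumed midpoint of his 10-15 year range -- this is a sketch of the arithmetic, not data:

```python
# Jeff's guessed figures, taken from his four numbered points.
years_to_expert = 30          # "conception to expert level"
expert_years = 12             # assumed midpoint of his 10-15 year guess
workers = 100_000_000         # working people with ideas to improve their jobs
automaters = workers // 10    # "1 person in 10 ... has the skills"

# Fraction of an automater's span actually spent at expert level:
expert_fraction = expert_years / (years_to_expert + expert_years)

print(f"potential automaters: {automaters:,}")
print(f"fraction of span at expert level: {expert_fraction:.0%}")  # about 29%
```

Even on these generous guesses, fewer than a third of an automater's years are spent at expert level, and pairing each automater with a describer halves effective capacity again -- which is the bottleneck Jeff's argument turns on.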
smoliar@vaxa.isi.edu (Stephen Smoliar) (09/23/88)
In article <6823@uwmcsd1.UUCP> markh@csd4.milw.wisc.edu (Mark William Hopkins) writes:

> Human curiosity (about the nature of our mind) is one thing, but even that
> has to be directed toward a pressing need -- so the question remains just
> what the pressing need is.

My strongest urge is to respond to this statement in your own fashion: Why? Do you not admit the possibility that the human mind is inclined to pose difficult problems for itself WITHOUT that "pressing need"?