rej%Cornell@sri-unix.UUCP (11/19/83)
From: rej@Cornell (Ralph Johnson)

The recent discussions on AIList have been boring, so I have another idea for discussion. I see no evidence that AI is going to make as much of a change in the world as data processing or information retrieval. While research in AI has produced many results in side areas such as computer languages, computer architecture, and programming environments, none of the past promises of AI (automatic language translation, for example) have been fulfilled. Why should I expect anything more in the future?

I am a soon-to-graduate PhD candidate at Cornell. Since Cornell puts little emphasis on AI, I decided to learn a little on my own. Most AI literature is hard to read, as very little concrete is said. The best book that I read (best for someone like me, that is) was the three-volume "Handbook of Artificial Intelligence". One interesting observation was that I already knew a large percentage of the algorithms; I did not even think of most of them as being AI algorithms. The searching algorithms (with the exception of alpha-beta pruning) are used in many areas, and algorithms that do logical deduction are part of computational mathematics (just my opinion, as I know some consider this hard-core AI). Algorithms in areas like computer vision were completely new, but I could see no relationship between those algorithms and the algorithms in programs called "expert systems", another hot AI topic.

[Agreed, but the gap is narrowing. There have been one or two dozen good AI/vision dissertations, but the chief link has been that many individuals and research departments interested in one area have also been interested in the other. -- KIL]

As for expert systems, I could see no relationship between one expert system and the next. An expert system seems to be a program that uses a lot of problem-related hacks to usually come up with the right answer. Some of the "knowledge representation" schemes (translated: "data structures") are nice, but everyone seems to use different ones. I have read several tech reports describing recent expert systems, so I am not totally ignorant. What is all the noise about? Why is so much money being waved around? There seems to be nothing more to expert systems than to other complicated programs.

[My own somewhat heretical view is that the "expert system" title legitimizes something that every complicated program has been found to need: hackery. A rule-based system is sufficiently modular that it can be hacked hundreds of times before it is so cumbersome that the basic structures must be rewritten. It is software designed to grow, as opposed to the crystalline gems of the "optimal X" paradigm. The best expert systems, of course, also contain explanatory capabilities, hierarchical inference, constrained natural language interfaces, knowledge base consistency checkers, and other useful features. -- KIL]

I know that numerical analysis and compiler writing are well-developed fields because there is a standard way of thinking associated with each area, and because a non-expert can use tools provided by experts to perform computation or write a parser without knowing how the tools work. In fact, a good test of an area within computer science is whether there are tools that a non-expert can use to do things that, ten years ago, only experts could do. Is there anything like this in AI? Are there natural language processors that will do for natural language what YACC does for parsing computer languages?
There seem to be a number of possible answers:

1) Because of my indoctrination at Cornell, I categorize many of the important results of AI under other areas, thus discounting the achievements of AI.
2) I am even more ignorant than I thought, and you will enlighten me.
3) Although what I have said describes other areas of AI well enough, yours is an exception.
4) Although what I have said describes past results of AI, major achievements are just around the corner.
5) I am correct.

You may be saying to yourself, "Is this guy serious?" Well, sort of. In any case, this should generate more interesting and useful information than trying to define intelligence, so please treat me seriously.

Ralph Johnson
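[Since alpha-beta pruning is the one search algorithm Ralph singles out above as distinctively AI, here is a minimal sketch of it, written in Python for concreteness. The game-tree representation (nested lists with numeric leaves scored for the maximizing player) and the example values are invented purely for illustration.]

    # Minimal alpha-beta pruning over a toy game tree.
    # A node is either a number (a leaf's static evaluation) or a list of child nodes.
    def alphabeta(node, alpha, beta, maximizing):
        if isinstance(node, (int, float)):        # leaf: return its evaluation
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:                 # cutoff: remaining children pruned
                    break
            return value
        else:
            value = float("inf")
            for child in node:
                value = min(value, alphabeta(child, alpha, beta, True))
                beta = min(beta, value)
                if alpha >= beta:                 # cutoff
                    break
            return value

    # A small 3-ply example: the result is 5, and the leaves 9, 0, and -1
    # are never examined because of the cutoffs.
    tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
    print(alphabeta(tree, float("-inf"), float("inf"), True))   # -> 5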
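[KIL's remark above, that a rule-based system is modular enough to be "hacked hundreds of times", can be made concrete with a small sketch of a forward-chaining rule interpreter. The rules, facts, and toy spill-diagnosis domain below are invented for illustration and are not taken from any system mentioned in this discussion; the point is only that each rule is an independent unit that can be added, removed, or patched without rewriting the interpreter.]

    # A minimal forward-chaining rule system.
    RULES = [
        # (name, preconditions, conclusion) -- each rule stands alone.
        ("leak-upstream", {"spill-in-duct-3", "valve-2-open"},   "check-tank-A"),
        ("leak-local",    {"spill-in-duct-3", "valve-2-closed"}, "check-duct-3-joint"),
        ("acid-source",   {"check-tank-A", "ph-low"},            "source-is-tank-A"),
    ]

    def forward_chain(facts):
        """Fire any rule whose preconditions are all satisfied, adding its
        conclusion to the fact set, until nothing new can be derived."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for name, pre, concl in RULES:
                if pre <= facts and concl not in facts:
                    facts.add(concl)
                    changed = True
        return facts

    print(forward_chain({"spill-in-duct-3", "valve-2-open", "ph-low"}))
    # -> the derived facts include "source-is-tank-A"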
dietz%usc-cse%USC-ECL%SRI-NIC@sri-unix.UUCP (11/21/83)
I too am skeptical about expert systems. Their attraction seems to be as a kind of intellectual dustbin into which difficulties can be swept. Have a hard problem that you don't know (or that no one knows) how to solve? Build an expert system for it. Ken Laws' idea of an expert system as a very modular, hackable program is interesting. A theory or methodology on how to hack programs would be interesting and useful, but would become another AI spinoff, I fear.
leff@smu.UUCP (11/24/83)
#R:sri-arpa:-1384100:smu:10900001:000:2566 smu!leff Nov 23 08:44:00 1983

There was a recent discussion of an AI project done at ONR on determining the cause of a chemical spill in a large chemical plant with various ducts, pipes, manholes, etc. I argued that the thing was just an application of graph algorithms and searching techniques. (That project was what could be done in three days by an AI team as part of a challenge from ONR, and quite possibly is not representative.)

Theorem proving using resolution is something that someone with just a normal algorithms background would not simply come up with as "an application of normal algorithms." Using if-then rules is perhaps a kind of search of the sort you might see in an algorithms book, although I don't expect the average CS person with a background in algorithms to come up with that application, even though once it was pointed out it would be quite intuitive.

One interesting note: although most AI work is done in LISP, a big theorem-proving program discussed by Wos at a recent IEEE meeting here was written in PASCAL. It did some very interesting things. One point that was made is that they submitted a paper to a logic journal; although the journal agreed the results were worth publishing, the "computer stuff" had to go. Continuing this rambling aside, some people submitted results in mechanical engineering obtained with a symbolic manipulator, referencing the use of the program only in a footnote. The poor referee conscientiously tried to duplicate the derivations manually. Finally he noticed the reference and sent a letter back saying that they must mention the symbolic manipulation by computer in the covering letter.

Getting back to the original subject, I had a discussion with someone doing research on daemons. After he explained to me what daemons were, I came to the conclusion that they were a fancy name for what you described as a hack. A straightforward application of theorem proving or if-then rule techniques would be inefficient or otherwise infeasible, so one puts in an exception to handle a certain kind of case. What is the difference between that and an error handler for zero divides, rather than putting a check everywhere one does a division?

On the subject of hacking, there was a DATAMATION article, "Real Programmers Don't Use PASCAL", in which the author complained about the demise of the person who would modify a program on the fly using the switch register, etc. He remarked at the end that some of the debugging techniques in LISP AI environments were starting to look like the old-style techniques of assembler hackers.
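[For readers with the "normal algorithms background" leff mentions, here is a minimal sketch of the propositional form of the resolution rule he refers to. Clauses are represented as Python sets of literals, with "~p" standing for a negated literal; the representation and names are my own choices for illustration and have nothing to do with the Wos program.]

    # Propositional resolution: from two clauses containing complementary
    # literals, derive the clause made of the remaining literals.
    def negate(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolve(c1, c2):
        """Return all resolvents of clauses c1 and c2 (each a set of literals)."""
        resolvents = []
        for lit in c1:
            if negate(lit) in c2:
                resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
        return resolvents

    # Example: from (p or q) and (~p or r) we derive (q or r).
    print(resolve({"p", "q"}, {"~p", "r"}))   # -> [{'q', 'r'}]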
notes@pur-ee.UUCP (11/25/83)
#R:sri-arpa:-1384100:ecn-ee:15300001:000:2084 ecn-ee!davy Nov 24 21:47:00 1983

As an aside to this discussion, I'm curious as to just what everyone thinks of when they think of AI. I am a student at Purdue, which has absolutely nothing in the way of courses on what *I* consider AI. I have done a little bit of reading on natural language processing, but other than that, I haven't had much of anything in the way of instruction on this stuff, so maybe I'm way off base here. When I think of AI, I primarily think of:

1) Natural language processing, first and foremost. In this I include being able to "read" it and understand it, along with being able to "speak" it.

2) Computers "knowing" things - i.e., stuff along the lines of the famous "blocks world", where the "computer" has notions of pyramids, boxes, etc.

3) Computers/programs which can pass the Turing test (I've always thought that ELIZA sort of passes this test, at least in the sense that lots of people actually think the computer understood their problems).

4) Learning programs, like the tic-tac-toe programs that remember that "that" didn't work out, only on a much more grandiose scale.

5) Speech recognition and understanding (see #1).

For some reason, I don't think of pattern recognition (like analyzing satellite data) as AI. After all, it seems to me that this stuff is mostly just "if <cond 1> it's trees, if <cond 2> it's a road, etc.", which doesn't really seem like "intelligence".

What do you think of when I say "artificial intelligence"? Note that I'm NOT asking for a definition of AI; I don't think there is one. I just want to know what you consider AI, and what you consider "other" stuff.

Another question -- assuming the (very) hypothetical situation where computers and their programs could be made "infinitely" intelligent, what is your "dream program" that you'd love to see written, even though it realistically will probably never be possible? Jokingly, I've always said that my dream is to write a "compiler that does what I meant, not what I said".

--Dave Curry
decvax!pur-ee!davy
eevax.davy@purdue
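[A toy version of the "if <cond 1> it's trees, if <cond 2> it's a road" style of classification Dave describes, just to make his point concrete. The feature names and thresholds are invented for illustration and do not come from any real remote-sensing system.]

    # Threshold-rule pixel classification: a chain of hand-written conditions.
    def classify(pixel):
        """pixel is a dict of measured features, e.g. normalized band values."""
        if pixel["green"] > 0.6 and pixel["texture"] > 0.5:
            return "trees"
        if pixel["brightness"] > 0.7 and pixel["elongation"] > 0.8:
            return "road"
        return "unknown"

    print(classify({"green": 0.8, "texture": 0.7,
                    "brightness": 0.3, "elongation": 0.1}))   # -> "trees"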