LEVITT%MIT-OZ@MIT-MC.ARPA (12/07/83)
SO hard, the senior faculty typically give up programming altogether and lose touch with the problems. Nobody seems to realize how close we would be to practical AI if just a handful of the important systems of the past were maintained and extended, and if the most powerful techniques were routinely applied to new applications - if an engineered system with an ongoing, expanding knowledge base were developed.

Students looking for theses and "turf" are reluctant to engineer anything familiar-looking. But there's every indication that the proven techniques of the 60's/early 70's could become the core of a very smart system with lots of overlapping knowledge in very different subjects, opening up much more interesting research areas - IF the whole thing didn't have to be (re)programmed from scratch. AI is easy now, showing clear signs of diminishing returns; CS/software engineering is hard.

I have been developing systems for the kinds of analogy problems music improvisers and listeners solve when they use "common sense" descriptions of what they do/hear, and of learning by ear. I have needed basic automatic constraint satisfaction systems (Sutherland '63), extensions for dependency-directed backtracking (Sussman '77), and example comparison/extension algorithms (Winston '71), to name a few. I had to implement everything myself.

When I arrived at MIT AI there were at least 3 OTHER AI STUDENTS working on similar constraint propagator/backtrackers, each sweating out his own version on a thesis critical path, resulting in a draft system too poorly engineered and documented for any of the other students to use. It was idiotic. In a sense we wasted most of our programming time, and would have been better off ruminating about unfamiliar theories like some of the faculty. Theories are easy (for me, anyway). Software engineering is hard. If each of the 3 ancient discoveries above were an available module, AI researchers could have theories AND working programs - a fine show.
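To make the "available module" point concrete, here is a minimal sketch (in a modern notation, not the original Lisp) of the kind of constraint propagator the post describes - loosely in the spirit of Sketchpad-style constraint satisfaction, with each deduced value recording the premises it rests on, which is the bookkeeping dependency-directed backtracking builds on. All class and variable names here are illustrative, not taken from any of the cited systems.

```python
class Cell:
    """Holds at most one value, plus the premise cells it was deduced from."""
    def __init__(self, name):
        self.name = name
        self.value = None
        self.support = set()   # premises this value ultimately depends on
        self.constraints = []  # constraints watching this cell

    def set(self, value, support=None):
        if self.value is not None:
            if self.value != value:
                # A contradiction; a dependency-directed backtracker would
                # inspect self.support here to find the culprit premises.
                raise ValueError(f"contradiction at {self.name}")
            return
        self.value = value
        self.support = support if support is not None else {self.name}
        for c in self.constraints:
            c.propagate()

class Adder:
    """Constraint a + b = s: deduces any one cell from the other two."""
    def __init__(self, a, b, s):
        self.a, self.b, self.s = a, b, s
        for cell in (a, b, s):
            cell.constraints.append(self)
        self.propagate()

    def propagate(self):
        a, b, s = self.a, self.b, self.s
        if a.value is not None and b.value is not None:
            s.set(a.value + b.value, a.support | b.support)
        elif a.value is not None and s.value is not None:
            b.set(s.value - a.value, a.support | s.support)
        elif b.value is not None and s.value is not None:
            a.set(s.value - b.value, b.support | s.support)

# Usage: x + y = z; asserting any two cells propagates the third,
# and the deduced value carries the set of premises it came from.
x, y, z = Cell("x"), Cell("y"), Cell("z")
Adder(x, y, z)
x.set(3)
z.set(10)
print(y.value, sorted(y.support))  # deduced y = 7 from premises x and z
```

The point of the support sets is exactly the reuse argument above: a propagator engineered once with this hook can later be extended with a backtracker that retracts only the premises implicated in a contradiction, instead of being rewritten from scratch by the next student.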