marty1@houdi.UUCP (M.BRILLIANT) (07/16/87)
Suppose one wanted to build a robot that does what a Seeing-Eye dog does (that is, helping a blind person to get around), but communicates in the blind person's own language instead of by pushing and pulling. Clearly this robot does not have to imitate a human being. But it does have to recognize objects and associate them with the names that humans use for them. It also has to interpret certain situations in its owner's terms: for instance, walking in one direction leads to danger, and walking in another direction leads to the goal.

What problems will have to be solved to build such a robot? Will its hypothetical designers have to deal with the problem of mere recognition, or the deeper problem of grounding symbols in meaning? Could it be built by hardwiring sensors to a top-down symbolic processor, or would it require a hybrid processor?

M. B. Brilliant			Marty
AT&T-BL HO 3D-520		(201)-949-1858
Holmdel, NJ 07733		ihnp4!houdi!marty1
merrill@iuvax.cs.indiana.edu (07/17/87)
In comp.ai, marty1@houdi (M.B. Brilliant) writes:

> Suppose one wanted to build a robot that does what a Seeing-Eye dog
> does (that is, helping a blind person to get around), but communicates
> in the blind person's own language instead of by pushing and pulling.
> [Commentary on some of the essential properties of the robot.]
> What problems will have to be solved to build such a robot? Will its
> hypothetical designers have to deal with the problem of mere
> recognition, or the deeper problem of grounding symbols in meaning?
> Could it be built by hardwiring sensors to a top-down symbolic
> processor, or would it require a hybrid processor?

I seriously doubt that recognition itself would be adequate. As Brilliant observes, one of the functions the robot must perform is the detection of "danger to its master." Consider the problem of crossing a street. Is it enough to recognize cars (and trucks, and motorcycles, and other already-known objects)? No. The robodog has to generalize beyond cars, trucks, and buses, since their shapes change, to "things travelling along this stretch of road {and what's a stretch of road?} which are a) moving {and what does it mean to move?} b) fast {and what is fast? Why, fast enough to be dangerous... which begs the question} c) in this direction."

At this point, I think we have exceeded the bounds of recognition and entered a realm where "judgement" is required; but if not, I imagine I can probably extend this situation to meet most specific objections. (I assume that the blind woman needs to cross roads without undue delay. Traffic lights don't eliminate these problems, since the robodog must "recognize" drivers who are turning, some of whom would be safe, since they're either stopped or slow-moving, but some of whom (at least, here in Bloomington) would run *any* pedestrian down. :-))

BTW: I like this example very much.
It raises quite nicely the underlying issue in the symbol grounding problem discussion without using the terminology that many of the readers of comp.ai seem to have objected to. Congratulations, Mr. Brilliant!

John Merrill			merrill@iuvax.cs.indiana.edu
Dept. of Comp. Sci.		UUCP: seismo!iuvax!merrill
Lindley Hall 101
Indiana University
Bloomington, Ind. 47405
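[Archivist's note: the recognition-versus-judgement distinction Merrill draws above can be caricatured in a few lines of code. This is only an illustrative sketch; every name, field, and threshold below is a hypothetical assumption, not anything proposed in the posts. A recognizer that checks an object's label against a fixed list of known categories misses a novel vehicle, while a judgement predicate over motion properties (moving, fast, heading this way) handles it.]

```python
# Hypothetical sketch (not from the posts): category recognition vs.
# property-based judgement for the robodog's street-crossing problem.

KNOWN_VEHICLES = {"car", "truck", "motorcycle", "bus"}  # assumed fixed inventory

def recognizer_says_danger(label):
    """Pure recognition: dangerous only if the label is an already-known vehicle."""
    return label in KNOWN_VEHICLES

def judgement_says_danger(obj, speed_threshold=2.0):
    """Merrill's generalization: judge by properties, not category.
    Danger = a thing on the road that is (a) moving, (b) fast, (c) in this direction.
    The threshold is an arbitrary stand-in for 'fast enough to be dangerous'."""
    return obj["moving"] and obj["speed"] > speed_threshold and obj["heading_toward_us"]

# A novel object: not in the known-vehicle list, but clearly a threat.
horse_cart = {"label": "horse-cart", "moving": True,
              "speed": 4.0, "heading_toward_us": True}

print(recognizer_says_danger(horse_cart["label"]))  # False: recognition misses it
print(judgement_says_danger(horse_cart))            # True: judgement catches it
```

Of course, the sketch begs exactly the questions Merrill raises in braces: the fields "moving", "speed", and "heading_toward_us" are handed to the predicate for free, and deciding how a robot grounds those properties in its sensors is the hard part of the problem.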