taylor@hplabsc.UUCP (Dave Taylor) (07/17/86)
This article is from Spiros Triantafyllopoulos <SPIROS@GMR.COM> and was received on Tue Jul 15 21:05:55 1986

>>My point is this: I think it is intrinsically impossible to program common
>>sense because a computer is not a man.  A computer cannot experience what
>>man can; it cannot see or make ubiquitous judgements that man can.  We may
>>be able to program common-sense-like rules into it, but this is not
>>tantamount to real-world common sense, because real-world common sense is
>>drawn from a 'database' that could never be matched by a simulated one.
>
>Why not?  Computers *have* been programmed with common-sense rules and
>information in limited domains (i.e., expert systems).  They sometimes
>exhibit better "common-sense" reasoning than people, within these domains.
>What is the theoretical or philosophical reason that these domains could not
>be extended to the larger but still limited ones that humans use?
>
>	Len Popp
>	{allegra,decvax,ihnp4,tektronix,ubc-vision}!watmath!watdaisy!lmpopp

Nope.  Expert systems are known to behave perfectly when faced with
situations for which the appropriate rule/action pairs exist.  However,
expert systems can fail pretty miserably when faced with situations beyond
the scope of their knowledge.

Most people's idea of expert systems comes from quite inflated claims by
software developers, research-paper writers, and the like.  An article in
AI Magazine about a year ago discussed this phenomenon quite extensively.
Claims of 90% success etc. were usual, but nowhere were the limitations
and/or failures of the presented system mentioned (typical
let-the-user-find-out conditions).  There have been cases of expert systems
working with VERY specialized problems (where common sense wouldn't do much
good anyway), thereby paving the way for generalization of claims about
expert systems in general.  But we're a long way from getting there.
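[Editorial aside: the rule/action-pair behavior described above can be seen
in a toy sketch.  The rules and fact names below are hypothetical, invented
for illustration; real systems of the period (MYCIN, DENDRAL) used far
richer rule languages and certainty factors.  The point is only the
brittleness: a first-matching-rule lookup answers confidently inside its
rule base and has nothing at all to say outside it.]

```python
# Toy rule-based "expert system": an ordered list of condition/action
# pairs.  Hypothetical rules for illustration only.

RULES = [
    (lambda facts: facts.get("fever") and facts.get("rash"), "suspect measles"),
    (lambda facts: facts.get("fever") and not facts.get("rash"), "suspect flu"),
]

def diagnose(facts):
    """Fire the first rule whose condition matches the given facts."""
    for condition, action in RULES:
        if condition(facts):
            return action
    # Beyond the scope of its knowledge, the system simply has nothing
    # to say -- the failure mode complained about above.
    return "no applicable rule"

print(diagnose({"fever": True, "rash": True}))  # inside the rule base
print(diagnose({"chest_pain": True}))           # outside it: no answer
```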
-----------------------------------------------------------------------------

> A Macintosh computer was recently a star witness in the first-degree-murder
> trial of Sagon Penn, who was charged with the shooting deaths of two San
> Diego policemen.
> .....
> the case had not been reached at press time, Poza said that the attorneys,
> the judge, and the jury were impressed with the Macintosh technology and
> its contribution to the analysis of important evidence.
>
>. The most disturbing thing about this article was the closing line "the
>. attorneys, the judge and the jury were impressed with the Macintosh
>. technology..."  It's just a precursor to courtrooms where testimony is
>. processed by computer and available for review in the jury deliberation
>. area, and then what's to stop someone breaking into the system and
>. altering testimony records?  Etcetera, etcetera -- Dave]

Couldn't agree more.  But, then, the prosecutor could bring up a CRAY-2 or
something and prove, beyond any doubt, that the people who wrote the MAC
software (or software in general) are not "competent" and disqualify them
(therefore disqualifying the produced evidence).  Or they could hire a
Ph.D. from a local university to testify about program complexity (show the
jury Knuth's books?) and arrive at similar conclusions.  Then jury, judge,
and prosecutors could be introduced a bit more to computerese, as a hung
jury could be renamed an infinite loop...  I'm getting *really* scared.

--------------------------------------------------------------------

> In this high-tech age, it was inevitable that someone would think of
> inventing a computer to solve the problem of contraception.  The first on
> the market is a hand-held computer made by Rabbit Computer Corp. in Los
> Angeles.

Any home computer would do...  Computers tend to be very discouraging in
the childmaking process (not now, honey, I've got a program to finish).
There are also physiological changes caused by computers, such as headaches
(what an excuse!), backaches (?), VDT radiation that may cause infertility,
etc.  So the Rabbit Co. is plainly re-inventing the wheel...

				Spiros Triantafyllopoulos
				Spiros@GMR.COM
taylor@hplabsc.UUCP (07/19/86)
This article is from Len Popp <tektronix!watmath!watdaisy!lmpopp> and was received on Fri Jul 18 20:00:58 1986

In article <472@hplabsc.UUCP> Spiros Triantafyllopoulos writes:
>> . . . Computers *have* been programmed with common-sense rules and
>>information in limited domains (i.e., expert systems).  They sometimes
>>exhibit better "common-sense" reasoning than people, within these domains.
>>What is the theoretical or philosophical reason that these domains could not
>>be extended to the larger but still limited ones that humans use?
>>
>
>Nope.  Expert systems have been known for behaving perfectly when faced
>with situations for which the appropriate rule/action pairs exist.  However,
>expert systems can fail pretty miserably when faced with situations
>beyond the scope of their knowledge.

"Nope" to what???  I never said that expert systems could reason outside
their domains, just that people can't either!!!!  Compare:

	People have been known for behaving (almost) perfectly when faced
	with situations for which they have appropriate experience or
	knowledge.  However, common sense can fail pretty miserably when
	faced with situations outside of everyday experience.

Isn't that just as true as your statement about expert systems?  Are you
really trying to say that because humans have MORE "appropriate rule/action
pairs" (i.e., knowledge or experience), our common sense covers ALL
situations in the universe?  Don't be silly!  I won't bore you with
examples, because I'm sure you can come up with your own, but surely it is
obvious that there are a great many situations not covered by human common
sense.  The major difference between the limited scope of expert systems
and the limited scope of common sense is one of degree.

I still claim that the reason computer "common sense" only works within
small domains is that we have never programmed a computer with all the
rules covering a large domain of problems.
And the reason that this hasn't been done is not that we don't know how to
build expert systems, but that we don't really understand the reasoning
behind our "common sense".  In other words, the knowledge built into
current expert systems is limited by *our* knowledge.  We can't explain
what we don't understand.

>Most people's idea of expert systems comes from quite inflated claims
>by software developers, research-paper writers, and the like.  An article
>in AI Magazine about a year ago discussed this phenomenon quite
>extensively.  Claims of 90% success etc. were usual, but nowhere were the
>limitations and/or failures of the presented system mentioned (typical
>let-the-user-find-out conditions).  There have been cases of expert
>systems working with VERY specialized problems (where common sense
>wouldn't do much good anyway), thereby paving the way for generalization
>of claims about expert systems in general.  But we're a long way from
>getting there.

My claim that expert systems have been made to reason within small domains
was not based on advertising or media claims, but on documented performance
of expert systems.  I agree that many of the claims in ads and popular
articles are utterly ridiculous.  (It seems that any program with a
slightly clever algorithm or data structure is using "AI" these days.)

However, how is reasoning on "VERY specialized problems" different from
common sense?  Our "common sense" is limited to the domain of phenomena
encountered in everyday human life, which is also very specialized in the
grand scheme of things.  And since when does common sense *not* do much
good to, for example, doctors diagnosing diseases, or chemists trying to
divine the structure of organic molecules from indirect data?

Again, I say that the difference between computer "common sense" and human
"common sense" is that we haven't figured out enough of our common sense to
tell it to a computer.  That's not a reflection on the computer.
	Len Popp
	{allegra,decvax,ihnp4,tektronix,ubc-vision}!watmath!watdaisy!lmpopp
taylor@hplabsc.UUCP (Dave Taylor) (07/25/86)
This article is from Eugene Miya <ames!aurora!eugene> and was received on Thu Jul 24 16:37:28 1986

Re: Spiros' posting about using computers at trials.

Two issues of Time ago, several investigative stories were noted to have
used IBM mainframes for DBMS matching work, with reporters (huh!)
PROGRAMMING the analyses.  Wow, investigative programming.  I can see the
two sides in a trial getting bigger and bigger guns.  Just another set of
toys to distract juries and judges from the issues at hand.  Sure, one or
two real uses, but justice?  Lawyers need more education.  Naw.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA
taylor@hplabsc.UUCP (Dave Taylor) (07/25/86)
Eugene Miya says:

> [computers are] just another set of toys to distract juries and judges from
> issues at hand.  Sure, one or two real uses, but justice?  Lawyers need
> more education.  naw.

Just to play devil's advocate, I disagree.  I think that there are a lot of
cases where concise, unbiased analysis would have been very significant in
the ultimate decision made.  For example, in a case where a person feigns
insanity to escape a harsh verdict: instead of having a person who can be
easily cross-examined and have their otherwise excellent testimony called
into question, if a computer system were accepted by the community (e.g.
the Bar + Courts) as a valid test, no amount of familial abuse, or
what-have-you, could bias the testimony.

I think that in the more `prestigious' cases (what a strange concept), like
celebrities on trial for something or, perhaps a better example, cases to
do with organized crime (another strange concept, if you think about it),
no amount of coercion could influence the system (no "if you testify
against Bugsy we'll break ya kid's face!" phone calls at three in the
morning or anything).  I think that would be a significant step forward.

People are always complaining that justice isn't fair; well, here's a way
to perhaps even it up a bit.  It would be GREAT if it couldn't tell whether
the person were rich, poor, a minority, or whatever, and simply be a
witness based on THE FACTS ALONE.  Let's face it, we're all human and we
all have biases.  Even the best judges prefer certain
racial/religious/sexual/etc. traits in people and are therefore liable to
be slightly more lenient towards people they like.  Computers can't do
that.

On the other hand, it really leaves the possibility of tampering wide
open.  You can harass a person, but you can't alter the structure of their
mind (well, you can, but they won't be much use at a trial afterwards).
Computers could be reprogrammed, or could be given a set of routines to
determine if a name is a minority, or a certain sex, or whatever (in fact,
they could hook into IRS or NCIC or something!) (a truly frightening
thought - "You're innocent according to the computer analysis, but since
you turn out to owe $200 in back taxes we're going to toss you into jail
anyway").

						-- Dave
taylor@hplabsc.UUCP (07/25/86)
This article is from coller@utah-cs.ARPA (Lee D. Coller) and was received on Fri Jul 25 02:09:53 1986

In article <497@hplabsc.UUCP> hplabs!taylor (Dave Taylor) writes:
>...
>People are always complaining that justice isn't fair, well here's a
>way to perhaps even it up a bit.  [Use computers]

I don't think I would want a computer making moral judgements (which is
what "justice" is really all about).  At least when a human is doing this,
you know exactly whose morals you're dealing with.  If a computer is making
these judgements, you are dealing with the morals of an unseen person(s)
(the programmer).  I don't think computers should ever be given equal
status with people.  Someone has to take responsibility.  Some things are
better left to humans.

>On the other hand, it really leaves the possibility of tampering wide
>open.  ...
>Computers could be reprogrammed, or could be given a set of
>routines to determine if a name is a minority, or a certain sex, or
>whatever (in fact, they could hook into IRS or NCIC or something!)

or the programmer's name:

	(if (equal name "Lee Coller")
	    (setq verdict 'innocent)
	    ( ... rest of program ... ))

						-Lee
taylor@hplabsc.UUCP (Dave Taylor) (07/28/86)
This article is from munnari!cidam.rmit.oz!mg@hplabs.HP.COM (Mike A. Gigante) and was received on Sun Jul 27 17:44:48 1986

In article <497@hplabsc.UUCP>, taylor@hplabsc.UUCP (Dave Taylor) writes:
>
> Eugene Miya says;
>
>>[computers are] just another set of toys to distract juries and judges from
>>issues at hand.  Sure, one or two real uses, but justice?  Lawyers need more
>
>[...] Let's face it, we're all human and we all have biases. [...]

This one sentence (admittedly, removed from its context) serves to
illustrate one problem I perceive here.  We *are* only human, and our code
sometimes reflects this :-)

Unless it has been *proven* that the machine/program is *completely error
free*, I think that a miscarriage of justice is a definite possibility if a
machine/program is used as incriminating evidence.  Honestly, would you
like someone else's code to decide your fate?

Taken to its extreme, you could have duelling debug sessions in a
courtroom :-)  (Hah! but I just fixed that bug 2 minutes ago!)

-- Mike

[a good point...indeed a very good point - worth a 'real' followup... -- Dave]
taylor@hplabsc.UUCP (Dave Taylor) (07/29/86)
This article is from Eugene Miya <aurora!eugene@hplabs.HP.COM> and was received on Mon Jul 28 22:01:15 1986

Devil's Daniel Webster:

Don't mistake "unbiased" for "unbiased" estimators in statistics.
Unbiased is at best a marginal adjective in statistics (similarly the
concept of "ideals" in number theory).  I think that for most every
computer expert analysis, you could find a matching counter-analysis.
Only the simplest analyses could result in what you term a FACT.  Those
can be handled by legalese.  What makes life tough are dilemmas, and
dilemmas are what courts are about.  I would like to think causality,
logic, and "facts" would be enough to "solve" court cases, but computers
are still little more than glorified calculators and sorting/searching
machines for a court.

Consider the proposal for a science court.  Suppose you sent computers back
to the age when funding for research on the properties of light (wave or
particle) was being debated.  The science court convenes; both sides give
their evidence: computer models.  Both have FACTS, both represent the
truth.  Humans deduce paradox/dualism.  The problem is a bit harder for
computers.  I'm glad I would not be funded by the decisions of such a
science court.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA