Barry_A_Stevens@cup.portal.com (12/04/87)
I am interested in the legal aspects of using expert systems. Consider, and please comment on, this scenario.

* * * * * * * * * * *

A well-respected, well-established expert systems (ES) company constructs an expert financial advisory system. The firm employs the top ES applications specialists in the country. The system is constructed with help from the top domain experts in the financial services industry. It is exhaustively tested, including verification of rules, verification of reasoning, and further analyses to establish the system's overall value. All results are excellent, and the system is offered for sale.

Joe Smith is looking for a financial advisory system. He reads the sales literature, which lists the names of experts whose advice was used when building the system. It lists the credentials of the people in the company who were the implementors. It lists names of satisfied users and quotes comments that praise the product. Joe wavers, weakens, and buys the product. "The product IS good," Joe explains. "I got it up and running in less than an hour!" Joe spends the remainder of that evening entering his own personal financial data, answering questions asked by the ES, and anticipating the results.

By now, you know the outcome. On the Friday morning before Black Monday, the expert system tells Joe to "sell everything he has and go into the stock market." ESs can usually explain their actions, and Joe asks for an explanation. The ES replies "because ... it's only been going UP for the past five years and there are NO PROBLEMS IN SIGHT." Joe loses big on Monday.

Since he lives in California (where there is one lawyer for every four households, or so it seems, and a motion asking that a lawsuit be declared frivolous is itself declared frivolous), he is going to sue someone. But who? The company that implemented the system? The domain experts who built their advice into the system? The knowledge engineers who turned expertise into a system?
The distributor who sold an obviously defective product?

Will a warranty protect the parties involved? Probably not. If real damages are involved, people will file lawsuits anyway.

Can the domain experts hide behind the company? Probably not. The company will specifically want to use their names and reputations as the source of credibility for the product. The user's reaction could be, "There's the so-and-so who told me to go into the stock market."

Can the knowledge engineers be sued for faulty construction of a system? Why not, when people who build anything else badly can be sued?

How about the distributor -- after all, he ultimately took money from the customer and gave him the product.

* * * * * * * * * * *

I would be very interested in any of your thoughts on this subject. I'd be happy to summarize the responses to the net.

Barry A. Stevens
Applied AI Systems, Inc.
PO Box 2747
Del Mar, CA 92014
619-755-7231
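[Editor's note: the explanation facility the scenario leans on ("ESs can usually explain their actions") can be illustrated with a minimal sketch. This is a hypothetical toy, not any real product: the rule, facts, and names are invented; a forward-chaining rule matches its premises against facts and records which rules fired so a "why" question can be answered from the trace.]

```python
# Hypothetical sketch of a rule-based advisor with a "why" explanation.
# Rules, facts, and the advice itself are invented for illustration only.

RULES = [
    # (rule name, premises that must all hold, (conclusion key, value))
    ("R1", {"market_trend_5yr": "up", "problems_in_sight": "no"},
     ("advice", "buy stocks")),
]

def run(facts):
    """Forward-chain once over RULES, recording fired rules for explanation."""
    conclusions = {}
    trace = []
    for name, premises, (key, value) in RULES:
        if all(facts.get(k) == v for k, v in premises.items()):
            conclusions[key] = value
            trace.append((name, premises))
    return conclusions, trace

facts = {"market_trend_5yr": "up", "problems_in_sight": "no"}
advice, trace = run(facts)
print(advice)  # {'advice': 'buy stocks'}
for name, premises in trace:
    print(f"because {name} fired on {premises}")
```

The point of the sketch is that the "explanation" is nothing more than a replay of the rule that fired -- which is exactly why the ES in the scenario can answer Joe with full confidence and still be catastrophically wrong.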
smoliar@vaxa.isi.edu (Stephen Smoliar) (12/07/87)
In article <1788@cup.portal.com> Barry_A_Stevens@cup.portal.com writes:
>
>Consider, and please comment on, this scenario.
>
> * * * * * * * * * * *
>
>A well-respected, well-established expert systems(ES) company constructs
>an expert financial advisory system. The firm employs the top ES
>applications specialists in the country. The system is constructed with
>help from the top domain experts in the financial services industry. It
>is exhaustively tested, including verification of rules, verification of
>reasoning, and further analyses to establish the system's overall value.
>All results are excellent, and the system is offered for sale.
>
Anyone who is willing to accept these premises at face value may be more interested in investing in the bridge I have between Manhattan and Brooklyn than in expert systems. The sort of "ideal" product envisaged here is certainly beyond the grasp of current development technology and may remain so for quite some time.

The most important omission from this scenario is any mention of a disclaimer attached to the product. I have encountered a variety of advertisements for human financial consultants; and, as a rule, there is always some disclaimer about risk present. The idea that there would be a machine-based product which would be risk-free borders on the ludicrous. If a customer were hooked by such a claim, most likely the only place he would be able to complain would be to the Better Business Bureau.

>By now, you know the outcome. On the Friday morning before Black Monday,
>the expert system tells Joe to "sell everything he has and go into the
>stock market." ESs can usually explain their actions, and Joe asks for
>an explanation. The ES replies "because ... it's only been going UP for
>the past five years and there are NO PROBLEMS IN SIGHT."
>
Would Joe have accepted such an explanation from a human advisor? If so, he has gotten what he deserved.
(I happened to be discussing an analogous case with my lawyer-neighbor. Our scenario involved medical systems and malpractice, but the theme is basically the same.)

This raises another question: Assuming Joe is no dummy (and that he can afford good human advice), why would he be interested in a machine advisor? I would argue that the area in which machines tend to have it over humans is that of quantitative risk assessment. Thus, the machine is more likely to synthesize and justify concrete quantitative predictive models than is a human expert, whose skills are fundamentally qualitative. The best Joe could hope for would be such a model. INTERPRETING the model would remain his responsibility (although that interpretation may be linked to the machine's justification of the model itself).

I would conclude that this scenario is far too simplistic for the real world. I suggest that Mr. Stevens debug it a bit. Then we might be able to have a more realistic debate on the matter.
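[Editor's note: the division of labor Smoliar proposes -- the machine synthesizes a quantitative model, the human interprets it -- can be sketched in a few lines. The return series below is invented; the point is that the program reports numbers and deliberately issues no buy/sell verdict.]

```python
# Minimal sketch of "quantitative model from the machine, interpretation
# from the human". The return series is hypothetical illustration data.
from statistics import mean, stdev

def risk_summary(returns):
    """Summarize a return series; interpreting it is left to the user."""
    return {
        "mean_return": mean(returns),      # average period return
        "volatility": stdev(returns),      # sample standard deviation
        "worst_period": min(returns),      # largest single-period loss
    }

monthly_returns = [0.03, 0.02, 0.04, 0.01, 0.05, -0.22]  # invented data
summary = risk_summary(monthly_returns)
for key, value in summary.items():
    print(f"{key}: {value:.3f}")
```

Note that a single bad month dominates both the mean and the worst-period figure -- the model surfaces that fact, but deciding whether it means "sell everything" is precisely the qualitative judgment the machine does not make.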
bondono@imag.UUCP (Philippe Bondono) (12/17/87)
In article <1788@cup.portal.com> Barry_A_Stevens@cup.portal.com writes:
>
>Consider, and please comment on, this scenario.
>
> * * * * * * * * * * *
>
>A well-respected, well-established expert systems(ES) company constructs
>an expert financial advisory system. The firm employs the top ES
>applications specialists in the country. The system is constructed with
>help from the top domain experts in the financial services industry. It
>is exhaustively tested, including verification of rules, verification of
>reasoning, and further analyses to establish the system's overall value.
>All results are excellent, and the system is offered for sale.

No comment on this point: I place great trust in expert systems (I must, in fact, since I am working in that field). Nevertheless, I think that the two most important features of expert systems are:
1) their capacity to verify the consistency of their database(s), and
2) the domain they are concerned with.

>By now, you know the outcome. On the Friday morning before Black Monday,
>the expert system tells Joe to "sell everything he has and go into the
>stock market." ESs can usually explain their actions, and Joe asks for
>an explanation. The ES replies "because ... it's only been going UP for
>the past five years and there are NO PROBLEMS IN SIGHT."

The expert system was right: it made a deduction from the knowledge it was fed! But the real problem is the domain of expertise, or more precisely the suitability of an expert system for a particular field. It seems to me quite unreasonable to build an expert system for financial advice, since this field is continuously evolving. Moreover, for the particular problem of the stock market, it is a question neither of months nor of days: it is a question of hours!
Everybody knows that the stock market is particularly precarious, since it can easily go up or down depending on "abstract" parameters, such as feelings, or interpretations of public officials' declarations (remember the effect of Reagan's declarations!), or even the fact that someone is tense! This kind of knowledge cannot be modeled, at least for now, in an expert system database.

This is all to say that the problem is not whether to start a discussion on the qualities and drawbacks of expert systems, but rather on what kind of field is suitable for building expert systems.
______________________________________________________________________________
Meryem MARZOUKI
Laboratoire TIM3/IMAG
INPG - 46 avenue Felix VIALLET
38031 Grenoble Cedex - FRANCE
e-mail marzouki@archi.uucp
"my tailor is rich, but my english is poor!"
______________________________________________________________________________
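[Editor's note: point 1) above -- the capacity to verify the consistency of the rule base -- admits a very simple illustration. The sketch below is hypothetical and names are invented: it flags pairs of rules whose premises are identical but whose conclusions conflict, the most basic form of such a check.]

```python
# Hypothetical sketch of a rule-base consistency check: flag rule pairs
# with identical premises but conflicting conclusions. Rules are invented.

def find_conflicts(rules):
    """rules: list of (name, premises_frozenset, conclusion) tuples."""
    conflicts = []
    for i, (n1, p1, c1) in enumerate(rules):
        for n2, p2, c2 in rules[i + 1:]:
            if p1 == p2 and c1 != c2:
                conflicts.append((n1, n2))
    return conflicts

rules = [
    ("R1", frozenset({"trend=up"}), "buy"),
    ("R2", frozenset({"trend=up"}), "sell"),   # contradicts R1
    ("R3", frozenset({"trend=down"}), "sell"),
]
print(find_conflicts(rules))  # [('R1', 'R2')]
```

Of course, a rule base can pass this check and still be disastrously wrong about the world -- which is Marzouki's point: consistency is checkable, suitability of the domain is not.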
varol@cwi.nl (Varol Akman) (12/18/87)
Meryem Marzouki writes:
>
> ... material deleted
>
>No comment on this point: I am very trustful in expert systems (I must be so,
>in fact, since I am working in that field), nevertheless, I think that the two
>most important features of expert systems are:
>1) their capacity to verify the consistency of their database(s), and
>2) the domain they are concerned with.
>
> ... material deleted
>
>The expert system was right: it made a deduction from the knowledge it was fed
>on with!
>But the real problem is the domain of expertise, more precisely the suitability
>of an expert system in a particular field.
>It seems to me quite unreasonable to build an expert system for financial
>advice, since this field is continuously in evolution. Moreover, for the
>particular problem of stock market, it is neither a question of months, nor of
>days: it is a question of hours!
>Everybody knows that stock market is particularly precarious, since it can
>easily go up or down, depending on "abstract" parameters, such as feelings, or
>interpretations of official people's declarations (remember the effect of
>Reagan's declarations!), or even the fact that one is tense!
>This kind of knowledge cannot be modeled, at least till now, in an expert system
>database.
>This was to say that the problem is not whether or not to start a discussion on
>qualities/drawbacks of expert systems, but rather on what kind of field is
>suitable for building expert systems.

Expert systems, at this stage of their evolution, are tools for modeling surface knowledge in an area. They have no ability whatsoever to reason about the underlying mechanisms of the domain they try to model. Thus they lack deep knowledge; human beings have deep knowledge. There is also a lot of high-quality work in the area of modeling deep knowledge, but it is very much experimental.
In fact, if we are successful (to an extent) in modeling deep knowledge, then AI will prove that it is a discipline which can deal with realistic (read: non-toy) problems. Until then, expert systems will serve as advisors whose advice needs close scrutiny by human experts.

I would never try to sue an expert system, because I KNOW that it can't be trusted, given the level of sophistication at this time. I can't trust something that is the subject matter of my own field of research, because that field is very much in its infancy. To me, that kind of trust would be about the worst thing I could have.

Programs should be trusted not because we feel a warm parental affinity toward them. They should be trusted if they are worth our trust. The road to that trust is not a path of roses; it is a path full of hard work, correctness proofs, wide and general field tests, etc. Until then, let's just work and hope that everything turns out all right in the end.

-Varol Akman
CWI, Amsterdam
Barry_A_Stevens@cup.portal.com (12/20/87)
I have received many replies to my original posting on the legal aspects of using expert systems. Many were useful; a few thought the scenario was trivial and, therefore, so was the discussion. I'll be summarizing the results and posting them shortly. Thanks for your help.

-- Barry Stevens