[mod.comp-soc] Computers and Society Digest, #14

taylor@hplabsc.UUCP (06/23/86)

                    Computers and Society Digest, Number 14
 
                          Thursday, February 6th 1986
 
Topics of discussion in this issue...
 
                 Technological versus Social/Political problems
                             More on the NCIC system
                         Ethical Issues with Technology
----------------------------------------------------------------------
 
From: hplabs!aurora!eugene (Eugene miya)
Date: 30 Jan 1986 1059-PST (Thursday)
Subject: Technological versus Social/Political problems

A thought on computers and society:

1) Agreed that you will overlap human-nets@rutgers, but let's not
stop yet; perhaps you can be a springboard for larger discussion
since you have a smaller group.

2) When I was an undergrad and "socially relevant" was the big phrase,
I took a couple of classes on "Science and Society" as general
education requirements (in one case we had 2 science majors in a
class of 100).

In two of those classes, distinctions were made between technological
problems and social/political problems.  The arms race was, for the
physics community, the classical social problem which "was not
solved by technological solutions."  Now on various nets, with
different compositions of people, I have heard it argued that technological
solutions do solve social problems: an example given was the slavery
issue really being `solved' by the technology of the cotton gin
(certainly debatable by both sides).  This can certainly be generalized
to the arms race and SDI.  Many are impatient with political solutions
or not satisfied by other opinions.  My question is: what are the
social problems that cannot be solved by technological solutions?
South Africa, for instance?  Should technologists stand back?  How
do we as technologists know when to step back?  When do we learn this or
receive training, or are all problems technologically solvable?
Should computers decide who is the most compatible and form
matches by machine?

 From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  {hplabs,hao,dual,ihnp4,vortex}!ames!aurora!eugene
  eugene@ames-nas.ARPA

------------------------------
 
Date: 31 Jan 1986 13:40-EST 
From: hplabs!Benjamin.Pierce@GANDALF.CS.CMU.EDU
Subject: More on the NCIC system

>From: jefu <ihnp4!seismo!rochester!steinmetz!putnam>
>Subject: NCIC Systems and so on...
>...
>There should be a single national database with all the information the 
>government and credit agencies collect.  This database should be accessible
>to anyone with a terminal and a telephone.  However, for each individual
>access should be granted and denied by class of organization, and on an 
>individual basis.  For example, I should be able to read anything about
>myself, even classified information - if it is about myself, I must already
>know it (:-).  The FBI may be able to read/write certain parts of the 
>information, and the local police others.  Sam from down the street should 
>be able to read/write nothing, unless I grant him permission.  Credit agencies
>should be able to read/write only when I grant them permission.  Traces 
>should be made of anyone reading my entry, and especially of anyone writing 
>it.  Perhaps a search warrant might be necessary to read some parts.  Any 
>information that is written and is in error, I should be able to protest.  

This scheme sounds technologically more sound than the present mess,
but I must admit to feeling much more comfortable with the mess than
with the idea of an alternative that works too well.  
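
To make the proposal a bit more concrete, the access rules being
described might look something like the rough sketch below.  The
organization classes, field layout, and grant mechanism are invented
purely for illustration, not a description of the NCIC or any real
system.

    # Hypothetical sketch of the access rules described above: default
    # rights by class of organization, overridden by grants the subject
    # of the record makes (or withholds) individually.

    READ, WRITE = "read", "write"

    class Record:
        def __init__(self, subject):
            self.subject = subject                # the person the entry is about
            self.fields = {}                      # e.g. address, credit history, ...
            self.class_rights = {
                "fbi":           {READ, WRITE},   # certain parts only, in practice
                "local_police":  {READ},
                "credit_agency": set(),           # nothing until granted
                "neighbor":      set(),           # Sam from down the street
            }
            self.individual_grants = {}           # e.g. {"sam": {READ}}

        def allowed(self, who, who_class, action):
            # The subject can always read everything about themselves.
            if who == self.subject and action == READ:
                return True
            # An explicit grant from the subject overrides the class default.
            if who in self.individual_grants:
                return action in self.individual_grants[who]
            return action in self.class_rights.get(who_class, set())

Under this sketch, Sam from down the street gets nothing until the
subject adds him to individual_grants, and a credit agency likewise
reads or writes only by explicit permission.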

Still, there's one idea here that I like.  One of the reasons that the
present scheme is a mess is that it's difficult or impossible to assign
responsibility for dissemination of incorrect information.  If each
entry had a complete audit trail, it would at least be possible to
determine where the bad data came from.  Furthermore, it might be
possible to require that an organization which has passed along bad
information contact and correct records at all the other organizations
who now hold the data, with appropriate penalties for failure to
correct a reported mistake.
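
As a sketch of what that might mean in practice, assume each entry
keeps an append-only log of every read and write; the names below are
invented, and a real system would need far more (authentication, a
trusted clock, and so on):

    # Hypothetical sketch: an append-only audit trail kept with each
    # entry, so that the origin of a bad value -- and everyone who has
    # read it since -- can be reconstructed afterwards.

    import time

    class AuditedEntry:
        def __init__(self):
            self.value = None
            self.trail = []                      # (timestamp, who, action, value)

        def write(self, who, value):
            self.trail.append((time.time(), who, "write", value))
            self.value = value

        def read(self, who):
            self.trail.append((time.time(), who, "read", self.value))
            return self.value

        def source_of_current_value(self):
            # Walk backwards to find who last wrote the entry.
            for _, who, action, _ in reversed(self.trail):
                if action == "write":
                    return who
            return None

        def readers_since(self, when):
            # Everyone who has read the entry since `when` -- i.e. the
            # organizations that would have to be contacted and corrected.
            return [who for t, who, action, _ in self.trail
                    if action == "read" and t >= when]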

		Benjamin Pierce  

------------------------------
 
Date: Thu, 6 Feb 86 23:30:15 MST
From: hpcnou!dat (Dave Taylor)
Subject: Ethical Issues with Technology

Recently I've been wondering about the ethical responsibilities
of people designing equipment.

For example, is the person who designed or constructed the faulty
system that caused the Shuttle to explode (assuming, for sake of
argument, that it did) somehow morally or ethically responsible
for the deaths of the astronauts?

Closer to our own area, with the computerized pharmaceutical database
system we've been talking about, what happens if it makes an error
and someone dies?

Say patient 'a' is currently taking drug alpha by prescription.  He
gets a new prescription for drug beta from his orthodontist to
alleviate pain from his recent jaw operation, BUT doesn't know that
alpha and beta interact in a way that is potentially fatal if taken
within two hours of eating.

He goes to the pharmacy and the pharmacist enters the new prescription
into the database.  The computer system correlates that the patient is
in fact taking alpha already along with the new drug, beta.  Due to a
flaw in the programming of the system, however, the computer doesn't
"realize" that the two drugs fatally interact...

Later that afternoon, after having a large lunch of soup (remember,
the patient is in considerable pain from his operation) and taking
both drugs as prescribed, the patient dies of a massive coronary.

The question is - who's responsible?

The pharmacist, for relying on the computer for information when he
should have known from his years of pharmaceutical school that alpha +
beta are potentially fatal?

The computer programmer, for designing the faulty database/knowledge
system that failed to issue a warning about the fatal drug interaction?

The person who tested the system before installation, for failing to
verify that it does indeed have all the knowledge and information it's
meant to have?

Or the person taking the drugs, for trusting a pharmacist who uses
flawed computer systems?

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

My feeling is that in this situation the burden of legal responsibility
falls on the parties that did something wrong or, with criminal intent,
failed to do something "right".  For example, if the pharmacist was
later shown to have known that the combination was potentially lethal,
but trusted the computer to be right when he might be 'confused', then
he is indeed responsible for the death of the patient.

Those parties that unintentionally caused the death of the patient
are not legally responsible.

This, of course, is not what the actual law tends to indicate.  

For example, A.H. Robbins, makers of the Dalkon Shield Intra-Uterine Device
(IUD), introduced the birth control product here in the US in the early
1970s with the best intentions and with the approval of the US Food and
Drug Administration.  By the beginning of the 80s, though, it was linked to
abnormally high occurrences of cervical cancer and other major health
problems in women.  Recently, due to a number of lawsuits brought by women
with health problems, the courts have ruled that A.H. Robbins must offer
restitution to women adversely affected by the IUD.  This could potentially
cost the company hundreds of millions of dollars - perhaps driving it out
of business entirely.

There was NO criminal intent when they introduced the IUD, however, and 
they also had the approval of the FDA (the branch of the American government 
involved with testing and approving foods and drugs for consumer purchase).  

The point?  That the US government feels that the act is all, and that
the intention is irrelevant.

Enough legalities, however.  The more interesting questions in the
hypothetical scenario presented are the moral ones.  Should the programmer, 
hearing about this tragedy, feel responsible in any sense for the death?  
I don't know.  My suspicion is that if I were put in that role I would feel 
devastated.  In a sense I would have, by not finding the defect in my 
program, committed murder.  Murder by omission, but murder nonetheless.

Morally, then, the creator of a system is certainly at least somewhat
responsible for the health and well-being of those using the system.

Another example of this dilemma is the person who designs a faulty
jet engine on an airliner.  The engine later fails and causes hundreds
of people to lose their lives in a terrible tragedy.  Is the designer
"guilty", by omission or by negligence, of the deaths of those people?

From another direction, legally the designer would not be responsible,
at least in states like Colorado.  Here in Colorado engineers are licensed 
by the state after taking rigorous exams, or are vouched for by 
the employer, who must accept some legal responsibility for this.
Hewlett Packard chooses the second route.  This is one of the main
reasons that engineers almost always MUST have degrees to be able
to perform certain tasks.

I've rambled on for too long, however.

	I welcome comments and disagreement.

					-- Dave


-----------------------------------

	To have your item included in this digest, please mail it to any of
the following addresses: ihnp4!hpfcla!d_taylor, {ucbvax}!hplabs!hpcnof!dat,
or hpcnof!dat@HPLABS.CSNET.  You can also simply respond to this mailing.
                                      
-----------------------------------
End of Computers and Society Digest 
***********************************