[comp.ai] My parents own my output.

jac@allegra.UUCP (Jonathan Chandross) (11/18/87)

If I write a program that generates machine code from a high-level language,
do I not own the output?  Of course I own it.  I also own the output from
a theorem prover, a planner, and similar systems, no matter how elaborate.

One of the assumptions being made in this discussion is that an AI can be
treated as a person.  Let us consider, for the moment, that it is merely
a clever piece of programming.  Then I most *certainly* do own its output
(assuming I wrote the AI) by the reason given above.  (Software piracy is a 
whole other ball of wax.)  

The alternative is to view the AI as a sentient entity with rights, that
is, a person.  Then we can view the AI as a company employee who developed
said work on a company machine and on company time.  Therefore the employer
owns the output, just as my employer owns my output done on company time.

The real question should be: did the AI knowingly enter into a contract with
the employer?

I wonder if the ACLU would take the case.


Jonathan A. Chandross
AT&T Bell Laboratories
Murray Hill, New Jersey
{moss, rutgers}!allegra!jac

roberts@cognos.uucp (Robert Stanley) (11/23/87)

In article <7880@allegra.UUCP> jac@allegra.UUCP (Jonathan Chandross) writes:

>If I write a program that generates machine code from a high-level language,
>do I not own the output?  Of course I own it.  I also own the output from
>a theorem prover, a planner, and similar systems, no matter how elaborate.

You do indeed, unless you perform (or fail to perform) some act or acts which,
in the eyes of the Law, strip you either of your status as owner or of your
right to compensation for its use.  Giving a copy to a friend without explicit
(read: a witnessed contract) injunction against passing it on, using it other
than for private purposes, etc. is just as much a reduction of your legal
rights as selling it under a contract of sale/lease.  There is still some
considerable controversy as to the status of software license agreements under
a variety of legal systems, which is why no consensus has been reached on the
subject of how best to protect your software against theft.  

Failing to take positive legal steps to protect your rights of ownership of a
piece of software is tantamount to surrendering those rights once you have
made, or allowed to be made, even one copy of the (suite of) programs.  This
may not be fair, but it is what appears to have been established by precedent
in all the major industrialized nations where cases involving software rights
have been tried.  At present, in the US and to a large degree in Canada, the
only really successful legal defences have been for ROM software, notably the
Apple Macintosh, which is why there are as yet *no* Macintosh clones in the
marketplace.  It is rumoured (comment anyone?) that this is one of the reasons
for IBM's approach to the design of the PS2, with critical components of the
system architecture in ROM.

For those with a speculative approach to the future, it will be interesting if
history repeats itself.  In the 1970s, IBM was taken to court by a number
of PCMs (Plug-Compatible Manufacturers) and eventually lost a ruling, being
forced to disclose the details of their internal architecture to a degree
sufficient to allow other manufacturers to design compatible equipment.  At the
time IBM was viewed as holding a monopolistic position, which is not currently
the case with any one personal computer manufacturer nor, as yet, for any
specific piece of software.

>The alternative is to view the AI as a sentient entity with rights, that
>is, a person.  Then we can view the AI as a company employee who developed
>said work on a company machine and on company time.  Therefore the employer
>owns the output, just as my employer owns my output done on company time.

Whether your employer owns your output is exactly and only a matter of legal
contract.  Either you have signed a legally binding contract of employment with
your employer or your (and your employer's) rights are protected by clauses in
one or more current labour relations bills.  Precise terms of the latter will,
of course, vary from country to country.  It is possible that some aspects of
an explicit contract of employment may be challengeable in court as being overly
restrictive; there have been several US and Canadian precedents within the last
year.

I, for instance, have a contract of employment into which I insisted that
several waivers be written,
several waivers, simply because the wording of the standard contract gave my
employer the right to everything I did anywhere at any time (24 hours a day,
365.25 days per year) while I was still their employee.  I doubt that the
original contract would actually have withstood a challenge in court, but that
would have taken money and time; much, much better to avoid the situation
completely.

>The real question should be: did the AI knowingly enter into a contract with
>the employer?

This will only be an issue if an AI can first be demonstrated to be a legal
individual within the eyes of the court.  Remember, there are plenty of humans
who do not have this status, but for whom some other legal individual is deemed
to have legal responsibility: the legally insane and the under-aged, to name
but two.

>I wonder if the ACLU would take the case.

Not until there is seen to be some benefit to be gained from protecting the
rights of an AI.  Let's face it, more working human beings are likely to
oppose the establishment of such precedents right now than to support it.
How soon do you see this attitude changing?  Especially if white-collar workers
start being displaced by intelligent management systems!

Robert_S
-- 
R.A. Stanley             Cognos Incorporated     S-mail: P.O. Box 9707
Voice: (613) 738-1440 (Research: there are 2!)           3755 Riverside Drive 
  FAX: (613) 738-0002    Compuserve: 76174,3024          Ottawa, Ontario 
 uucp: decvax!utzoo!dciem!nrcaer!cognos!roberts          CANADA  K1G 3Z4