pmd@cbscd5.UUCP (10/25/83)
I'm interested in getting some feedback on some philosophical questions that have been haunting me:

1) Is there any reason why developments in artificial intelligence and computer technology could not someday produce a machine with human consciousness (i.e. an I-story)?

2) If the answer to the above question is no, and such a machine were produced, what would distinguish it from humans as far as "human" rights were concerned? Would it be murder for us to destroy such a machine? What about letting it die of natural (?) causes if we have the ability to repair it indefinitely?

(Note: Merely having a unique, human genetic code does not legally make one human, as per the 1973 *Roe v. Wade* Supreme Court decision on abortion.)

Thanks in advance.

Paul Dubuc