[comp.ai.digest] punishment metaphor

NICK@AI.AI.MIT.EDU (Nick Papadakis) (06/04/88)

Date: Fri, 3 Jun 88 16:17 EDT
From: Bruce E. Nevin <bnevin@cch.bbn.com>
Subject: punishment metaphor
To: ailist@ai.ai.mit.edu
cc: bn@cch.bbn.com

In article <1209@cadre.dsl.PITTSBURGH.EDU> Gordon E. Banks writes:
>>Are there any serious examples of re-programming systems, i.e. a system
>>that redesigns itself in response to punishment?
>
>Certainly!  The back-propagation connectionist systems are "punished"
>for giving an incorrect response to their input by having the weight
>strengths leading to the wrong answer decreased.

This is an error of logical typing.  It may be that punishment results
in something analogous to reduction of weightings in a living organism.
Even supposing that hypothesis to have been proven to everyone's
satisfaction, direct manipulation of such analogs to connectionist
weightings (could they be found and manipulated) would not in itself
be punishment.

An analog to punishment would be if the machine reduces its own
weightings (or reduces some and increases others) in response to being
kicked, or to being demeaned, or to being caught making an error
("Bad hypercube!  BAD!  No user interactions for you for the rest of the
morning!").  
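The distinction can be sketched in code.  This is a hypothetical toy
(the class and function names are mine, not anything from the original
discussion): both systems end up with identical, smaller weights, but
only the second alters itself in response to a punishment signal --
which is the difference of logical type at issue.

```python
# Toy illustration of the logical-typing point: same final weights,
# different causal story.  Names and numbers are illustrative only.

class Net:
    """A trivial linear 'network' with three weights."""
    def __init__(self):
        self.weights = [0.8, 0.5, 0.3]

    def output(self, inputs):
        return sum(w * x for w, x in zip(self.weights, inputs))

# (a) Direct manipulation: an outside agent edits the weights.
# On the argument above, this is not punishment, even if punishment
# would have produced the very same final weights.
def external_edit(net, factor=0.9):
    net.weights = [w * factor for w in net.weights]

# (b) Analog of punishment: the system applies its *own* rule for
# reducing its weights when it receives an aversive signal from
# its environment ("Bad hypercube!").
class SelfModifyingNet(Net):
    def punish(self, strength=0.1):
        self.weights = [w - strength * w for w in self.weights]

a = Net()
external_edit(a)   # weights changed *for* the system

b = SelfModifyingNet()
b.punish()         # weights changed *by* the system, in response
```

The end states are numerically indistinguishable; what differs is
whether the change is something done to the machine or something the
machine does.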

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>