[comp.ai.neural-nets] Step Function

danforth@riacs.edu (Douglas G. Danforth) (09/02/89)

Newsgroups: comp.ai.neural-nets
Subject: Re: Step Function
Summary: Bias and Basis
Keywords: learning,generalization

Tony Russo writes:
>
>No. You bring up a good point, because your argument is really, "What functions
>shall we consider in the hypothesis space?" 
>
>I can't tell you; this appears to be getting very deep. On the surface,
>it seems that biases are  !necessary! for learning anything at all.
>If so, then the biases are probably hard-wired and not learned, since
>they would have to be learned in terms of other biases, etc.
>
>Does this make sense to anyone else, or have I gone off the deep end?
>
> ~ tony ~
>
>	~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>	~  	 Tony Russo		" Surrender to the void."	~
>	~   apr@cbnewsl.ATT.COM		   put disclaimer here		~
>	~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Two comments:
 
     (1) If you will allow me to modify your words, it is interesting that
     bias can also be looked upon as a "basis".  By choosing a biased view of
     the world one at least has a place to stand.  If an organism has a
     collection of biased views, then there is a possibility that the
     collection can "span" a space and act as a basis for representing any
     other biased view in that space (a small numeric sketch follows
     comment (2)).

     (2) In a deep way we are all limited by the sensory space in which we
     live.  A function that spans a space larger than the one we are aware
     of can only be approximated by functions within the smaller space.
     We are blind to that which we do not know.  We know we don't know some
     things; however, there are things we don't know we don't know.  Very
     humbling (see Terry Winograd and Fernando Flores, Understanding
     Computers and Cognition: A New Foundation for Design, Ablex, 1986).
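
Both comments can be made concrete with a toy linear-algebra sketch in
Python (the vectors and dimensions below are invented for illustration):

    import numpy as np

    # (1) Three "biased views" of a 3-dimensional world.  Because they
    #     are linearly independent, they span the space, so any other
    #     biased view is a weighted combination of them.
    B = np.array([[1.0, 0.0, 0.0],
                  [1.0, 1.0, 0.0],
                  [1.0, 1.0, 1.0]])
    new_view = np.array([0.2, -0.7, 1.5])
    w = np.linalg.solve(B.T, new_view)       # weights over the biases
    print(np.allclose(B.T @ w, new_view))    # True: the basis suffices

    # (2) Suppose our senses report only the first two dimensions.  The
    #     best representation available is the orthogonal projection onto
    #     that subspace; the third component is invisible to us.
    S = np.eye(2, 3)                         # basis of the sensory subspace
    approx   = S.T @ (S @ new_view)          # the part we can represent
    residual = new_view - approx             # the part we are blind to
    print(approx)                            # [ 0.2 -0.7  0. ]
    print(residual)                          # [ 0.   0.   1.5]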




--------------------------------------
Doug Danforth
danforth@riacs.edu
--------------------------------------

zmacv61@flamingo.doc.ic.ac.uk (L K C Leighton) (09/09/89)

Subject: Re: Step Function
Summary: Bias and Basis

>Re: message from danforth@hydra.riacs.edu.UUCP (Douglas G. Danforth), 768
>Tony Russo writes:
>
>>I can't tell you; this appears to be getting very deep. On the surface,
>>it seems that biases are  !necessary! for learning anything at all.
>>If so, then the biases are probably hard-wired and not learned, since
>>they would have to be learned in terms of other biases, etc.
>
>>Does this make sense to anyone else, or have I gone off the deep end?
>
>     bias can also be looked upon as a "basis".  By choosing a biased view of
>     the world one does at least have a place to stand.  If an organism has
>     a  collection of biased views then there is a possibility that the collection
>     can "span" a space and act as a basis for representing any other biased
>     view in that space.

I read a book on improving memory recall recently.  It noted that we use
our existing knowledge, i.e. the current state of a neural network, to
'cross-reference' and memorise a new pattern: the more cross-referencing
we do, the less likely we are to forget it.
	If we don't know ANYTHING, then we have nothing against which to
judge what we are presented with, and cannot learn anything new.  At the
neural-net level, if we have no neural links at present, then no new
neurons can fire!  So yes, a "basis" - some sort of (biased) neural links -
is vital!  I'll never knock someone for having biased views of the world
again; after all, it only appears to be human...
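
A toy Hopfield-style associative net makes that point concrete (a sketch
with made-up patterns, not taken from the book mentioned above):

    import numpy as np

    # Store two bipolar (+1/-1) patterns with the simple Hebb rule.
    patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                         [ 1,  1, -1, -1,  1,  1]])
    W = (patterns.T @ patterns) / patterns.shape[1]   # the "neural links"

    # The trained links pull a corrupted cue back to the stored pattern.
    cue = np.array([1, -1, 1, -1, -1, -1])  # pattern 0 with one bit flipped
    state = cue
    for _ in range(3):                       # synchronous updates
        state = np.where(W @ state >= 0, 1, -1)
    print(state)                             # [ 1 -1  1 -1  1 -1]

    # With no links at all (W = 0), the cue evokes nothing: W @ cue is the
    # zero vector, so nothing is recalled -- some "basis" really is vital.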


comments welcomed...


| Luke Leighton @ Imperial College          |   zmacv61@uk.ac.ic.doc           |
| they said it was impossible.  i agreed.   |   tharg@uk.ac.ic.cc              |
| i said it was impossible. they disagreed. |   (On Janet :-)                  |

danforth@riacs.edu (Douglas G. Danforth) (09/09/89)

Luke Leighton writes:
===============================================================================
Reply-To: zmacv61@doc.ic.ac.uk (L K C Leighton)
Organization: Imperial College Department of Computing

....  I saw memory recall-type neural nets as limited, as they have no
means to link one neural pattern to another (something our thought
processes do all the time).
===============================================================================

  Please take a look at Pentti Kanerva's book, "Sparse Distributed Memory",
MIT Press, Cambridge MA, 1988.

  The fundamental place from which Pentti begins is precisely the ability
of memory to link one pattern to another.  Input to memory triggers a
reconstructed output, which in turn can act as more input.  This linking
of input-output associations can form long "pointer chains", so that
sequences, such as musical sonatas, can be recalled.  The patterns can be
very complex (hundreds to thousands of bits) and still be manageable and
learnable.
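
A tiny sparse-distributed-memory sketch in Python gives the flavour of
such pointer chains (the parameters and helper names below are
illustrative, not Kanerva's design verbatim):

    import numpy as np
    rng = np.random.default_rng(0)

    N, M, R = 256, 2000, 111    # word size, hard locations, Hamming radius
    hard_addrs = rng.integers(0, 2, size=(M, N))  # fixed random addresses
    counters   = np.zeros((M, N))

    def active(addr):
        # hard locations within Hamming distance R of the probe address
        return np.count_nonzero(hard_addrs != addr, axis=1) <= R

    def write(addr, word):
        counters[active(addr)] += 2 * word - 1   # +1 per 1-bit, -1 per 0-bit

    def read(addr):
        # majority vote over the counters of the active locations
        return (counters[active(addr)].sum(axis=0) > 0).astype(int)

    # Store a 5-pattern sequence as a pointer chain: each pattern is
    # written at the address of its predecessor, so reading the memory
    # at pattern t reconstructs pattern t+1.
    seq = rng.integers(0, 2, size=(5, N))
    for prev, nxt in zip(seq, seq[1:]):
        write(prev, nxt)

    # Recall: start at the head and let each output address the next.
    x = seq[0]
    for t in range(1, 5):
        x = read(x)
        print(t, np.array_equal(x, seq[t]))      # True for each step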

   The dividing line between an artificial neural net and a "memory" is a
fuzzy one.  They share many similarities.

------------------
Doug Danforth
danforth@riacs.edu
------------------