[comp.ai.neural-nets] ART and non-stationary environments

adverb@bucsb.UUCP (Josh Krieger) (04/28/89)

I think it's important to say one last thing about ART:

ART is primarily useful in a statistically non-stationary environment
because its learned categories will not erode with the changing input.
If your input environment is stationary, then there may be little reason
to use the complex machinery behind ART; your vanilla backprop net will
work just fine.
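
For concreteness, here is a minimal sketch of the ART-1-style matching
cycle in Python, assuming binary inputs. The merged choice/vigilance
test and the names are my own simplification, not Carpenter and
Grossberg's exact dynamics; the point to notice is that a nonmatching
input commits a new category instead of overwriting an old one:

    import numpy as np

    def art1_step(x, prototypes, rho=0.75):
        # One presentation of a binary input to a simplified ART-1 layer.
        # Categories whose match ratio |x AND w| / |x| clears the
        # vigilance rho are eligible; the winner's prototype is sharpened
        # toward x by intersection (fast learning).  If nothing resonates,
        # a fresh category is committed, so old categories never erode.
        x = np.asarray(x, dtype=bool)
        best, best_overlap = None, -1
        for j, w in enumerate(prototypes):
            overlap = int(np.logical_and(x, w).sum())
            if overlap / max(int(x.sum()), 1) >= rho and overlap > best_overlap:
                best, best_overlap = j, overlap
        if best is None:
            prototypes.append(x.copy())      # commit a new category
            return len(prototypes) - 1
        prototypes[best] = np.logical_and(x, prototypes[best])
        return best

    protos = []
    for x in ([1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1]):
        print(art1_step(x, protos), len(protos))

Feeding a drifting input stream through this loop only ever adds or
sharpens categories; a backprop net trained on the same stream would
overwrite its earlier weights.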

-- Josh Krieger

myke@gatech.edu (Myke Reynolds) (04/29/89)

In article <2503@bucsb.UUCP> adverb@bucsb.bu.edu (Josh Krieger) writes:
>I think it's important to say one last thing about ART:
>
>ART is primarily useful in a statistically non-stationary environment
>because its learned categories will not erode with the changing input.
>If your input environment is stationary, then there may be little reason
>to use the complex machinery behind ART; your vanilla backprop net will
>work just fine.
>
BAM is the stationary version of ART, and it blows backprop out of the
water in both power and simplicity. It's less than a linear equation solver,
but that's enough to outperform backprop.
The claim that backprop is not much worse is not only wrong, it makes for a
skimpy last-ditch effort to argue for a model that has no other defense.
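
To make the simplicity claim concrete, here is a minimal sketch of a
Kosko-style BAM in Python, under the usual bipolar (+1/-1) coding
assumption; the function names and the toy patterns are mine:

    import numpy as np

    def _sgn(v):
        return np.where(v >= 0, 1, -1)    # break ties to +1

    def bam_train(pairs):
        # Kosko-style weights: the sum of outer products of the
        # bipolar pattern pairs -- one pass, no gradients.
        return sum(np.outer(x, y) for x, y in pairs)

    def bam_recall(W, x, steps=20):
        # Bounce activity between the two layers until it settles.
        x = _sgn(np.asarray(x))
        for _ in range(steps):
            y = _sgn(W.T @ x)
            x_next = _sgn(W @ y)
            if np.array_equal(x_next, x):
                break
            x = x_next
        return x, y

    pairs = [(np.array([1, -1, 1, -1]), np.array([1, 1, -1])),
             (np.array([-1, -1, 1, 1]), np.array([-1, 1, 1]))]
    W = bam_train(pairs)
    print(bam_recall(W, [1, -1, -1, -1]))   # first key, third bit flipped

Training is one pass of outer products; recall is a handful of
matrix-vector products that settle back onto the stored pair.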
-- 
Myke Reynolds
School of Information & Computer Science, Georgia Tech, Atlanta GA 30332
uucp:	...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!myke
Internet:	myke@gatech.edu

kavuri@cb.ecn.purdue.edu (Surya N Kavuri ) (05/01/89)

In article <18583@gatech.edu>, myke@gatech.edu (Myke Reynolds) writes:
> BAM is the stationary version of ART, and it blows backprop out of the
> water in both power and simplicity. It's less than a linear equation solver,
> but that's enough to outperform backprop.
> Myke Reynolds

   I do not understand what you mean by "power", but if you look at the
  memory capacity, BAMs look pathetic.
   I do not speak for BP, but I have heard explanations that the
   hidden layers serve as feature detectors (as in the 4-2-4 encoder
   problem), which shows an intuitive likeness to pattern classification
   methods.
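
The 4-2-4 encoder is easy to reproduce; here is a tiny Python sketch.
The 4-2-4 sizes are from the classic experiment, but the learning rate,
epoch count, and initialization are my guesses, and an unlucky seed can
stall in a local minimum:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.eye(4)                             # the four one-hot patterns
    W1 = rng.normal(0, 0.5, (4, 2)); b1 = np.zeros(2)   # 4 -> 2 bottleneck
    W2 = rng.normal(0, 0.5, (2, 4)); b2 = np.zeros(4)   # 2 -> 4 reconstruction
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    lr = 2.0

    for _ in range(20000):                    # plain batch gradient descent
        H = sig(X @ W1 + b1)
        O = sig(H @ W2 + b2)
        dO = (O - X) * O * (1 - O)            # squared-error delta, output layer
        dH = (dO @ W2.T) * H * (1 - H)        # delta pushed back to the bottleneck
        W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

    print(np.round(sig(X @ W1 + b1), 2))      # ~ a distinct 2-bit code per pattern

The printed hidden activities are what the "feature detector" reading
points at: the two units end up approximating a binary code for the
four patterns.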


                                             Surya Kavuri
                                             (FIAT LUX)

  P.S:  What I despise in relation to BP is the apparent tendency
        people have to romanticize it.  (I should say that the
        problem is not with BP but with its researchers.)  I have
        seen sinful explanations of what the hidden units stand for.
        I have seen claims that they stand for concepts that could
        be given physical meanings (sic!).  These are baseless
        dreams that people come up with.  This is a disgrace to the
        serious scientific community, as it indicates a degeneration.
        
   BP is not even a steepest-descent approach, strictly speaking.
   It does minimization of an error measure.

   (1) There are no measures of its convergence time.
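
   One way to see the "not steepest descent" point: a per-pattern
   update follows one example's gradient, not the gradient of the
   total error. A minimal Python illustration, using a linear
   one-layer case for simplicity (the data here are made up):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(8, 3))
    y = X @ np.array([1.0, -2.0, 0.5])     # a linear target, for simplicity
    w = np.zeros(3)

    # Steepest descent on E(w) = 0.5 * sum_i (x_i.w - y_i)^2 follows
    # the full gradient over all patterns:
    batch_grad = X.T @ (X @ w - y)

    # A per-pattern backprop step uses one example's gradient only:
    online_grad = X[0] * (X[0] @ w - y[0])

    cos = (online_grad @ batch_grad /
           (np.linalg.norm(online_grad) * np.linalg.norm(batch_grad)))
    print(cos)   # typically well below 1: not the steepest-descent direction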
       
                                        

myke@gatech.edu (Myke Reynolds) (05/01/89)

Surya N Kavuri writes:
>   I do not understand what you mean by "power" but if you look at the 
>  memory capacity, BAM's look pathetic.  
Its memory capacity is no less than that of a linear filter, and its size is
not limited, unlike BP's. Since size = memory capacity, its memory capacity
is limited only by your implementation of a linear equation solver. If you
don't make the obvious step of using a sparse solver, then it will be pathetic.
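
A minimal sketch of that idea in Python, treating storage as one
least-squares solve per output unit with scipy's sparse solver; the
sizes and the pattern density are arbitrary choices of mine:

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(2)
    # 50 sparse binary key patterns of length 200, paired with 40-bit codes.
    X = sparse_random(50, 200, density=0.05, format='csr',
                      random_state=2, data_rvs=np.ones)
    Y = rng.integers(0, 2, size=(50, 40)).astype(float)

    # One least-squares solve per output unit; the sparse solver keeps
    # this cheap as the pattern dimension grows.
    W = np.column_stack([lsqr(X, Y[:, j])[0] for j in range(Y.shape[1])])

    recalled = (X @ W) > 0.5
    print((recalled == Y.astype(bool)).mean())   # ~1.0: stored pairs recalled

With 50 pairs in 200 dimensions the system is underdetermined, so the
solver fits the stored pairs exactly; capacity then scales with the
dimensions you can afford to solve for, which is the point above.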
-- 
Myke Reynolds
School of Information & Computer Science, Georgia Tech, Atlanta GA 30332
uucp:	...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!myke
Internet:	myke@gatech.edu