[comp.ai.neural-nets] Meta-Nets

abbott@aerospace.aero.org (Russell J. Abbott) (03/17/90)

Has there been any work on building neural nets that find weights for
neural nets?  For example, suppose one wanted to construct a neural-net
to recognize handwritten letters.  Traditionally one would use a
learning algorithm to construct a set of weights.  Why not instead build
a meta-net that was trained to take a set of category instances and
produce a set of weights that would differentiate among the given
categories?

Presumably such a meta-net would have to be bigger than the nets for which
it was finding weights (otherwise it could be asked to produce weights for
itself, and a diagonalization could be constructed), and it would probably
be difficult to train.  But is there any a priori reason why such a meta-net
could not be built?

Input to such a meta-net might be something like an array of instances
of the desired categories.  Each column would correspond to a category;
the entries in each column would be examples of that category.  The
output would be a set of weights for a neural net of a given
architecture.
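
To make that concrete, here is a small illustrative sketch (not from the
original post; the sizes, names, and the single-layer target architecture
are all hypothetical) of the shapes such a meta-net would map between:

    import numpy as np

    # Hypothetical sizes, purely for illustration.
    n_categories = 10      # e.g. ten letter classes
    n_examples   = 5       # instances per category (one "column" per class)
    input_dim    = 64      # e.g. an 8x8 pixel grid per instance

    # Meta-net input: an array of category instances, one column per category.
    instances  = np.random.rand(n_categories, n_examples, input_dim)
    meta_input = instances.reshape(-1)          # flattened for the meta-net

    # Target architecture: a single-layer net from input_dim to n_categories.
    n_weights = input_dim * n_categories + n_categories   # weights + biases

    # The meta-net itself is only a stand-in here; it should map the
    # instance array to a weight vector for that fixed architecture.
    def meta_net(x, rng=np.random.default_rng(0)):
        return rng.standard_normal(n_weights)   # placeholder output

    w = meta_net(meta_input)
    W = w[:input_dim * n_categories].reshape(input_dim, n_categories)
    b = w[input_dim * n_categories:]

    def application_net(x):                     # the net the weights define
        return np.argmax(x @ W + b)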

The meta-net could be trained in a number of ways.  One way would simply
be to compare the output weights to weights produced for those
categories by a traditionally trained net.  Another way would be by
actually applying the output weights to the given instances in an
application-level net to see how well they categorized the examples.
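
In code, those two training signals might look roughly like this (again a
hypothetical sketch, continuing the names from the snippet above; w_backprop
would be the weight vector of a conventionally trained application net):

    # Each row of the flattened instance array belongs to one category.
    labels = np.repeat(np.arange(n_categories), n_examples)

    def loss_weight_matching(meta_output, w_backprop):
        # Way 1: compare the produced weights to weights obtained by
        # training an application net on the same categories as usual.
        return np.mean((meta_output - w_backprop) ** 2)

    def loss_task_performance(meta_output, instances, labels):
        # Way 2: plug the produced weights into the application net and
        # score how well they categorize the given examples.
        W = meta_output[:input_dim * n_categories].reshape(input_dim, n_categories)
        b = meta_output[input_dim * n_categories:]
        scores = instances.reshape(-1, input_dim) @ W + b
        return np.mean(np.argmax(scores, axis=1) != labels)   # error rate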

In any event, since neural net training is a highly parallel and
continuous process and since neural nets tend to be most applicable to
highly parallel and continuous tasks, production of neural net weights
would seem to be the sort of job for which neural nets are well suited.


-- 
-- Russ abbott@itro3.aero.org

gblee@maui.cs.ucla.edu (Geunbae Lee) (03/19/90)

In article <68940@aerospace.AERO.ORG> abbott@itro3.aero.org (Russell J. Abbott) writes:
>
>Has there been any work on building neural nets that find weights for
>neural nets?  For example, suppose one wanted to construct a neural-net
>-- Russ abbott@itro3.aero.org

I don't know if you are familiar with Jordan Pollack's _cascaded neural net_
stuff.  He tried to train a high-level network to produce the correct weights
for a low-level network in which several functions can be implemented.
I think it is in the 8th cogsci proceedings.  (? sorry, my memory may not
be correct about this.)


+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+  Geunbae Lee, Artificial Intelligence Lab, Computer Science Dept, UCLA.   +
+  INTERNET:gblee@cs.ucla.edu, PHONE:213-825-5199 (office)                  +
+  Sir, AI is the science that makes machines smart, but people dumb!!!     +

smagt@fwi.uva.nl (Patrick van der Smagt) (03/19/90)

In article <68940@aerospace.AERO.ORG> abbott@itro3.aero.org (Russell J. Abbott) writes:
>
>Has there been any work on building neural nets that find weights for
>neural nets?

Please post your responses to this newsgroup.  There are more people
interested in this subject.


+--------------------------------------------------------------+
| Patrick van der Smagt                                        |
|                                                              |
| X-Organisation: Faculty of Mathematics & Computer Science,   |
|                 University of Amsterdam, Kruislaan 409,      |
|                 NL-1098 SJ  Amsterdam, The Netherlands       |
| X-Phone:        +31 20  592 5022                             |
| X-Telex:        10262 hef nl                                 |
| X-Fax:          +31 20  592 5155                             |
+--------------------------------------------------------------+

kolen-j@toto.cis.ohio-state.edu (john kolen) (03/19/90)

In article <33177@shemp.CS.UCLA.EDU> gblee@maui.UUCP (Geunbae Lee) writes:
>In article <68940@aerospace.AERO.ORG> abbott@itro3.aero.org (Russell J. Abbott) writes:
>>
>>Has there been any work on building neural nets that find weights for
>>neural nets?  For example, suppose one wanted to construct a neural-net
>>-- Russ abbott@itro3.aero.org
>
>I don't know if you are familiar with Jordan Pollack's _cascaded neural net_
...
>I think it is in the 8th cogsci proceedings. (? sorry, my memory may not

It's the 9th cogsci proceedings.  Cascaded nets can be thought of
as two single-layer nets where the output of the first network
serves as the weights of the second network.
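
For readers who have not seen the construction, a minimal sketch of the idea
(sizes and names are hypothetical; this is not Pollack's code):

    import numpy as np

    # Cascaded nets: the first net's output is used as the weight
    # matrix of the second net.
    ctx_dim, in_dim, out_dim = 4, 3, 2

    W1 = np.random.randn(ctx_dim, in_dim * out_dim)   # first (context) net

    def cascaded(context, x):
        w2 = (context @ W1).reshape(in_dim, out_dim)  # output of net 1 ...
        return x @ w2                                 # ... is the weights of net 2

    y = cascaded(np.ones(ctx_dim), np.ones(in_dim))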

John Kolen
-=-


--
John Kolen (kolen-j@cis.ohio-state.edu)|computer science - n. A field of study
Laboratory for AI Research             |somewhere between numerology and
The Ohio State University	       |astrology, lacking the formalism of the
Columbus, Ohio	43210	(USA)	       |former and the popularity of the latter

apr@cbnewsl.ATT.COM (anthony.p.russo) (03/20/90)

I don't want to put a damper on the idea of meta-nets, but
I would like to spark some debate on their use.  Clearly,
the idea of using one network to teach another is an important
step toward emulating the human thought process.

However, from the discussions I've read lately, I have doubts as to
whether such efforts will prove fruitful. Here's why.

We could, in principle and probably in actuality, train a net to
teach other nets. I don't argue with that. However, the meta-net
has been trained by some standard algorithm.  What we have, then,
is a standard algorithm (say backprop) that is a teacher
of a teacher (the meta-net) of a learner (other networks).

I tend to think that the meta-net, at best, would learn to
implement the standard algorithm.  That is, we are training it to
learn some known algorithm.  If this is the case, why not just use
the standard algorithm and skip the meta-net?

If this is NOT the case, then how do you propose to train the meta net?
I'm interested in any ideas addressing this.

 ~ tony ~

	~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	~  	 Tony Russo		" Surrender to the void."	~
	~  AT&T Bell Laboratories					~
	~   apr@cbnewsl.ATT.COM						~
	~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

jim@se-sd.NCR.COM (Jim Ruehlin) (03/22/90)

In article <4670@cbnewsl.ATT.COM> apr@cbnewsl.ATT.COM (anthony.p.russo) writes:
>I tend to think that the meta-net, at best, would learn to
>implement the standard algorithm.  That is, we are training it to
>learn some known algorithm.  If this is the case, why not just use
>the standard algorithm and skip the meta-net?

As I remember from my NN classes, the exciting thing about neural nets is
that they begin to generalize (if designed and trained properly).  Hopefully,
the meta-net would generalize learning heuristics from the standard
algorithm it was taught.

- Jim Ruehlin

abbott@aerospace.aero.org (Russell J. Abbott) (03/22/90)

In article <4670@cbnewsl.ATT.COM> apr@cbnewsl.ATT.COM (anthony.p.russo) writes:
>We could, in principle and probably in actuality, train a net to
>teach other nets. I don't argue with that. However, the meta-net
>has been trained by some standard algorithm. What we have then,
>is a standard algorithm (say backprop) that is a teacher 
>of a teacher (the meta net) of a learner (other networks).

The original question was intended as a thought experiment whose purpose
was to examine the limits of neural nets.  If one could develop a
meta-net, then for a large class of problems the training phase would be
by-passed since the meta-net would be able to come up with the weights
directly.  It wouldn't be a trainer of the application net; it would
determine the weights for that net itself.  But is that reasonable: a
neural net system without the need for training?  If not, then why is a
meta-net impossible?
-- 
-- Russ abbott@itro3.aero.org