[ont.events] 23 June AI/Theory Seminar Abe, Learnability

greiner@ai.toronto.edu (Russ Greiner) (06/23/89)

The title and abstract for the joint AI/Theory seminar tomorrow appear below.
Notice the room has changed: it will now be in  *GB 244*.
Please contact me if you are interested in talking with the speaker.
	Russ Greiner -- 978-6277

[This msg supersedes all prior msgs about this talk.]



* * Joint AI/Theory seminar --  Friday, 23/VI --  at 11am -- GB 244 * *

          `Polynomial Learnability of Formal Languages' 

                           Naoki Abe 
         Department of Computer and Information Science, 
                  University of Pennsylvania

                           Abstract

This talk considers learnability and non-learnability of various classes
of formal languages with respect to the distribution-free notion of
learnability called `polynomial learnability' originally formulated by
L. G. Valiant.  Polynomial learnability requires that the learning
algorithm arrive at an approximately correct hypothesis with high
confidence, and that it do so in time polynomial in the complexity of
the target concept and in the parameters quantifying the desired
accuracy and confidence.

The first half of the talk will present a nearly complete 
characterization of a host of learnability questions for subsets of $N^m$ 
called `semilinear sets', i.e. the class of `letter-counts' (or Parikh images) 
of regular sets.   In particular, the class of semilinear sets 
of dimensions 1 and 2 is shown to be learnable when the integers are 
encoded in unary.  This result is complemented with negative results of 
several different characters verifying the non-triviality of this 
learnability result, relying on hardness assumptions of varying strength 
-- from $P \neq NP$ and $RP \neq NP$ to the hardness of learning DNF.  
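To make the objects concrete, here is a minimal sketch (illustrative only; the function name and setup are my own, not from the talk) of the Parikh image, the letter-count map under which regular sets yield exactly the semilinear subsets of $N^m$ studied in the first half:

```python
# Sketch: the Parikh image maps a string to its vector of letter counts
# over a fixed m-letter alphabet, so a language over that alphabet maps
# to a subset of N^m.  For regular languages the image is semilinear.
from collections import Counter

def parikh_image(s, alphabet):
    """Return the letter-count vector of s over the given alphabet."""
    counts = Counter(s)
    return tuple(counts.get(a, 0) for a in alphabet)

# The regular set (ab)* has Parikh image {(n, n) : n >= 0},
# a linear (hence semilinear) subset of N^2:
print(parikh_image("ababab", "ab"))  # (3, 3)
```

Learning the letter-count image is in general easier than learning the language itself, which is one reason these classes are a natural testbed for the polynomial-learnability framework.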

The second half of the talk will demonstrate how similar proof techniques 
can be used to show results of a similar character for some subclasses 
of formal languages of more direct linguistic relevance.
On the one hand, a novel, nontrivial constraint on grammars called k-locality 
is presented, which renders learnable not only context-free grammars but also 
a rich class of `mildly context-sensitive' grammars. 
On the other hand, another constraint, which amounts to a bound on the number 
of `recursion-free derivations' in a grammar, is shown not to help efficient 
learning.  Possible implications of these results for the theory of natural
language acquisition will also be touched upon.

-- 
| Russ Greiner                     U of T, Toronto M5S 1A4 CDN | 
| greiner@ai.toronto.edu		  uunet!utai!greiner   |
| greiner%ai.toronto.edu@csnet-relay            (416) 978-6277 |