[ont.events] U of Toronto Computer Science activities, Feb. 1-5

clarke@csri.toronto.edu (Jim Clarke) (01/27/88)

         (SF = Sandford Fleming Building, 10 King's College Road)
              (GB = Galbraith Building, 35 St. George Street)

SUMMARY:

COLLOQUIUM, Tuesday, February 2, 11 am, SF1105 -- Michael D. Tilson:
     "The Future of UNIX in the Year 2001"

THEORY SEMINAR, Thursday, February 4, 3 pm, GB244 -- Andrei Broder:
     "On the second eigenvalue of random regular graphs"

THEORY SEMINAR, Friday, February 5, 1 pm, GB412 -- Prabhakar Raghavan:
     "Ramdomized Algorithms and Pseudorandom Numbers"

-------------

              COLLOQUIUM, Tuesday, February 2, 11 am, SF1105

                           Mr. Michael D. Tilson
                              HCR Corporation

                   "The Future of UNIX in the Year 2001"

     UNIX has been available outside Bell Labs since the mid-70's; the
first UNIX system at the University of Toronto was installed in early 1975.
Thirteen years ago the system was new, still experimental, and rarely used.
Today, UNIX is mature, becoming standardized, and widely used.  In thirteen
years we will see the dawn of a new (calendar) millennium.  What can we
expect from UNIX? This colloquium discusses the technology trends that will
determine the status of UNIX at the turn of the century, with special
reference to the role UNIX may play in "wiring the world".

     UNIX has become a standard.  The lifetime of standards is surprisingly
long.  On the other hand, technology continues to advance at a rapid rate.
There is no reason to believe that the rate of change will slow between now
and the end of the century.  However, there are good reasons to believe
that "conventional" architectures will continue to dominate general-purpose
computing for the next decade, and if this is true, then UNIX has a long
and bright future as a higher-level machine definition.

     In the next thirteen years UNIX will open the door to possibilities
for distributed processing and distributed applications that go far beyond
anything we can do today.  In particular, LAN networking issues will be
solved soon, but long-haul connectivity of diverse international systems
presents some new technical challenges.  UNIX, standards, high performance
machines, networking (especially ISDN), and "internationalization" will
combine to create a basis for practical, everyday world-wide network com-
puting.

     A goal of this talk is to understand why a typical obsolete C
application written in the mid-80's might still be running on an incredibly
advanced networked architecture, moving data transparently from New York to
Tokyo in the year 2001, and why that might actually represent a good
solution.

     This talk is based upon a paper presented at the Summer '87 Usenix
Conference and new material to be presented at the forthcoming UniForum '88
conference.

             THEORY SEMINAR, Thursday, February 4, 3 pm, GB244

                             Dr. Andrei Broder
                        DEC Systems Research Center

            "On the second eigenvalue of random regular graphs"

Expanders have many applications in Computer Science.  It is known that
random d-regular graphs are very efficient expanders, almost surely.
However, checking whether a particular graph is a good expander is
co-NP-complete.  We show that random d-regular graphs are certifiably
efficient expanders with high probability.  The certificate is based on
computing the second largest eigenvalue, lambda-2, which we show is
concentrated within an interval of width O(sqrt(d)) whose center, equal
to E[lambda-2], is less than O(d ** (3/4)).  The bound on the width of
the interval is derived from a simple application of martingale theory,
and the bound on E[lambda-2] is obtained by exploring the properties of
random walks in random graphs.

This is joint work with Eli Shamir of the Hebrew University, Jerusalem.
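
As a rough, unofficial illustration of the quantity discussed in the
abstract, the Python sketch below samples random d-regular graphs and
reports their second largest adjacency eigenvalue.  The networkx and
numpy libraries, the parameter choices, and the function names are
assumptions made for this example only; they are not part of the talk.

import networkx as nx
import numpy as np

def second_eigenvalue(n, d, seed=None):
    """Return lambda-2 of a uniformly random d-regular graph on n vertices."""
    G = nx.random_regular_graph(d, n, seed=seed)
    A = nx.to_numpy_array(G)                      # adjacency matrix
    eigenvalues = np.sort(np.linalg.eigvalsh(A))  # ascending order
    return eigenvalues[-2]                        # largest eigenvalue is d

if __name__ == "__main__":
    d = 8
    samples = [second_eigenvalue(1000, d, seed=s) for s in range(5)]
    print("d =", d, " sqrt(d) =", d ** 0.5, " d**(3/4) =", d ** 0.75)
    print("observed lambda-2 values:", [round(x, 3) for x in samples])

Running it for a few values of n gives a feel for how tightly lambda-2
concentrates as the graphs grow.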

              THEORY SEMINAR, Friday, February 5, 1 pm, GB412

                          Dr. Prabhakar Raghavan
                     IBM, T.J. Watson Research Center

             "Ramdomized Algorithms and Pseudorandom Numbers"

There are two major areas of interaction between computation and random-
ness: (1) randomized algorithms, and (2) the generation of pseudorandom
sequences. Traditionally, the two studies have remained divorced from each
other; randomized algorithms are analyzed as if an unlimited amount of perfect
randomness were available, while pseudorandom generation is studied from
the perspective of cryptographic security. Bach [1] recently proposed
studying the interaction between pseudorandom number generators and random-
ized algorithms.

There are two approaches one might take to studying this interaction.  On
the one hand, one might adopt the "entropic approach" - to seek to deter-
mine the minimum quantity of randomness needed to run a particular random-
ized algorithm attaining a certain performance - such an approach is
adopted by Peleg and Upfal [9].  On the other hand, viewing randomness as a
scarce resource, one might wish to design simple randomized algorithms that
are especially economical in their use of random bits.  We adopt the latter
approach in this paper and study randomized algorithms for (1) sorting, (2)
selection and (3) routing.  We assume, as does Bach, that a small amount of
perfect randomness - the "seed" - is initially available; from this seed we
generate a pseudorandom sequence that is used by the algorithm.
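
As an informal sketch of the setup in the last paragraph (not taken from
the paper), the Python fragment below stretches one small seed into a
pseudorandom sequence with an ordinary linear congruential generator and
lets a randomized selection routine draw all of its random choices from
that sequence.  The generator constants, the choice of quickselect, and
the function names are illustrative assumptions only.

def lcg(seed, modulus=2**31 - 1, a=16807, c=0):
    """Yield an endless pseudorandom sequence grown from one small seed."""
    x = seed
    while True:
        x = (a * x + c) % modulus
        yield x

def quickselect(items, k, rand):
    """Return the k-th smallest element (0-based), pivoting pseudorandomly."""
    items = list(items)
    while True:
        pivot = items[next(rand) % len(items)]   # every choice comes from rand
        lower = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        if k < len(lower):
            items = lower
        elif k < len(lower) + len(equal):
            return pivot
        else:
            k -= len(lower) + len(equal)
            items = [x for x in items if x > pivot]

if __name__ == "__main__":
    data = [9, 3, 7, 1, 8, 2, 6, 5, 4, 0]
    rand = lcg(seed=12345)             # the only true randomness used
    print(quickselect(data, 4, rand))  # prints 4, the 5th smallest element

The point of the arrangement is that the algorithm itself never touches
fresh random bits; everything after the seed is deterministic.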
-- 
Jim Clarke -- Dept. of Computer Science, Univ. of Toronto, Canada M5S 1A4
              (416) 978-4058
BITNET,CSNET: clarke@csri.toronto.edu     CDNNET: clarke@csri.toronto.cdn
UUCP: {allegra,cornell,decvax,linus,utzoo}!utcsri!clarke