[mod.ai] Course - Parallel Architecture and AI

BORGIDA@RED.RUTGERS.EDU.UUCP (12/02/86)

Posted-Date: Mon, 17 Nov 86 09:51 EST
From: Tim Finin <Tim@cis.upenn.edu>


Here is a description of a three-day course we are putting on for
the Army Research Office.  We are opening it up to some people from
other universities and nearby industry.  We have set a modest fee of
$200 for non-academic attendees; attendance is free for academic
colleagues.
Please forward this to anyone who might be interested.


                      SPECIAL ARO COURSE ANNOUNCEMENT

     COMPUTER ARCHITECTURES FOR PARALLEL PROCESSING IN AI APPLICATIONS

As a part of our collaboration with the Army Research Office, we are
presenting a three-day course on computer architectures for parallel
processing with an emphasis on their application to AI problems.  Professor
Insup Lee has organized the course which will include lectures by
professors Hossam El Gindi, Vipin Kumar (from the University of Texas at
Austin), Insup Lee, Eva Ma, Michael Palis, and Lokendra Shastri.

Although the course is being sponsored by the ARO for researchers from
various Army research labs, we are making it available to colleagues from
within the University of Pennsylvania as well as some nearby universities
and research institutions.

If you are interested in attending this course, please contact Glenda Kent
at 898-3538 or send electronic mail to GLENDA@CIS.UPENN.EDU and indicate
your intention to attend.  Attached is some additional information on the
course.

Tim Finin



TITLE           Computer Architectures for Parallel Processing in AI
                Applications

WHEN            December 10-12, 1986 (from 9:00 a.m. on 12/10 to noon on
                12/12)

WHERE           room 216, Moore School (33rd and Walnut), University of
                Pennsylvania, Philadelphia, PA.

FEE             $200 for non-academic attendees

PRESENTERS      Hossam El Gindi, Vipin Kumar, Insup Lee, Eva Ma, Michael
                Palis, Lokendra Shastri

POC             Glenda Kent, 215-898-3538, glenda@cis.upenn.edu
                Insup Lee, lee@cis.upenn.edu

INTENDED FOR    Research and application programmers, technically oriented
                managers.

DESCRIPTION     This course will provide a tutorial on parallel
                architectures, algorithms and programming languages, and
                their applications to Artificial Intelligence problems.

PREREQUISITES   Familiarity with basic computer architectures, high-level
                programming languages, and symbolic logic; knowledge of
                LISP and analysis of algorithms desirable.

COURSE CONTENTS This three day tutorial seminar will present an overview of
                parallel computer architectures with an emphasis on their
                applications to AI problems.  It will also supply the
                necessary background in parallel algorithms, complexity
                analysis and programming languages.  A tentative list of
                topics is as follows:

                   - Introduction to Parallel Architectures - parallel
                     computer architectures such as SIMD, MIMD, and
                     pipeline; interconnection networks including
                     ring, mesh, tree, multi-stage, and cross-bar.

                   - Parallel Architectures for Logic Programming -
                     parallelism in logic programs; parallel execution
                     models; mapping of execution models to
                     architectures.

                   - Parallel Architectures for High Speed Symbolic
                     Processing - production system machines (e.g.,
                     DADO); tree machines (e.g., NON-VON); massively
                     parallel machines (e.g., Connection Machine,
                     FAIM).

                   - Massive Parallelism in AI - applications of the
                     connectionist model in the areas of computer
                     vision, knowledge representation, inference, and
                     natural language understanding.

                   - Introduction to Parallel Computational Complexity
                     - formal parallel computation models such as
                     Boolean circuits, alternating Turing machines,
                     parallel random-access machines; relations
                     between sequential and parallel models of
                     computation; parallel computational complexity of
                     AI problems such as tree and graph searches,
                     unification, and natural language parsing.

                   - Parallel Algorithms and VLSI - interconnection
                     networks for VLSI layout; systolic algorithms and
                     their hardware implementations.

                   - Parallel Programming Languages - language
                     constructs for expressing parallelism and
                     synchronization; implementation issues.

              COMPUTER ARCHITECTURES FOR PARALLEL PROCESSING
                            IN AI APPLICATIONS

                              COURSE OUTLINE

The course will consist of seven lectures, each lasting two to three
hours.

The first lecture introduces the basic concepts of parallel computer
architectures.  It explains the organization and applications of different
classes of parallel computer architectures such as SIMD, MIMD, and
pipeline.  It then discusses the properties and design tradeoffs of various
types of interconnection networks for parallel computer architectures. In
particular, the ring, mesh, tree, multi-stage, and cross-bar will be
evaluated and compared.
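As a rough illustration of the kind of comparison the first lecture will make (this sketch is mine, not the lecturers'; the figures of merit and the 64-node size are chosen only for the example), two standard measures of an interconnection network are its diameter (worst-case hops between nodes) and its link count:

```python
# Illustrative comparison of interconnection topologies named above,
# for an N-node machine: link count vs. diameter (worst-case hops).
import math

def ring(n):
    # n nodes in a cycle: n links, farthest node is n // 2 hops away
    return {"links": n, "diameter": n // 2}

def mesh(n):
    # square 2-D mesh of side s = sqrt(n), without wraparound links
    s = math.isqrt(n)
    return {"links": 2 * s * (s - 1), "diameter": 2 * (s - 1)}

def crossbar(n):
    # n x n crossbar: every node reaches every other in one hop,
    # at the cost of n*n crosspoints
    return {"links": n * n, "diameter": 1}

for name, topo in [("ring", ring), ("mesh", mesh), ("crossbar", crossbar)]:
    print(name, topo(64))
```

The tradeoff the lecture evaluates is visible even in this toy form: cheaper networks (ring) pay in latency, while constant-latency networks (crossbar) pay in hardware.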

The second and third lectures concentrate on parallel architectures for AI
applications. The second lecture overviews current research efforts to
develop parallel architectures for executing logic programs. Topics covered
will include the potential for exploiting parallelism in logic programs,
parallel execution models, and mapping of execution models to
architectures.  Progress made so far and problems yet to be solved in
developing such architectures will be discussed. The third lecture
overviews the state-of-the-art of architectures for performing high speed
symbolic processing. In particular, we will describe parallel architectures
for executing production systems (e.g., DADO), tree machines (e.g.,
NON-VON), and massively parallel machines (e.g., the Connection Machine and
FAIM).
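One source of parallelism in logic programs is OR-parallelism: alternative clauses matching a goal are tried concurrently instead of by sequential backtracking. The following sketch (the tiny fact base and goal are invented for illustration, and Python threads stand in for real parallel hardware) shows the idea:

```python
# Hypothetical sketch of OR-parallelism in logic-program execution:
# each fact that might match the goal is tried as an independent branch.
from concurrent.futures import ThreadPoolExecutor

# parent(X, Y) facts, as (X, Y) pairs
PARENT = [("tom", "bob"), ("tom", "liz"), ("bob", "ann")]

def solve_clause(goal, fact):
    # try to match a goal pattern (X, None) against one fact;
    # each such attempt is an independent OR-branch
    x, _ = goal
    return fact[1] if fact[0] == x else None

def or_parallel_solve(goal):
    # launch every candidate fact as a parallel branch and
    # collect all successful bindings
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda f: solve_clause(goal, f), PARENT)
    return [r for r in results if r is not None]

print(or_parallel_solve(("tom", None)))   # all children of tom
```

Mapping such execution models onto real architectures -- scheduling branches, combining bindings -- is exactly the open problem area the lecture surveys.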

The fourth lecture explains why the von Neumann architecture is
inappropriate for AI applications and motivates the need for pursuing the
connectionist approach.  To justify the thesis, some specific applications
of the connectionist model in the areas of computer vision, knowledge
representation, inference, and natural language understanding will be
discussed.  Although the level of detail will vary across these topics, we
plan to examine at least one effort in depth, namely the applicability and
usefulness of adopting a connectionist approach to knowledge representation
and limited inference.
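The basic computational element in the connectionist style is much simpler than a von Neumann processor: a unit that sums its weighted inputs and fires past a threshold, with the "knowledge" stored in the weights. A minimal sketch (weights and the AND wiring below are invented for illustration, not taken from the course):

```python
# A minimal connectionist unit: fires (outputs 1) iff the weighted
# sum of its inputs exceeds its threshold.

def unit(inputs, weights, threshold):
    # one unit's computation: weighted sum, then threshold
    s = sum(i * w for i, w in zip(inputs, weights))
    return 1 if s > threshold else 0

# a two-input unit whose weights make it behave like logical AND
and_weights, and_threshold = [1.0, 1.0], 1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, unit([a, b], and_weights, and_threshold))
```

The applications named above (vision, knowledge representation, inference, language) arise from networks of many such units computing in parallel, not from any single unit's power.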

The fifth lecture introduces the basic notions of parallel computational
complexity. Specifically, the notion of ``how efficiently a problem can be
solved in parallel'' is formalized. To formulate this notion precisely, we
will define various formal models of parallel computation such as Boolean
circuits, alternating Turing machines, and parallel random-access machines.
Then, the computational complexity of a problem is defined in terms of the
amount of resources such as parallel time and number of processors needed
to solve it. The relations between sequential and parallel models of
computation, as well as characterizations of ``efficiently parallelizable''
and ``inherently sequential'' problems are also given. Finally, the
parallel computational complexity of problems in AI (e.g., tree and graph
searches, unification, and natural language parsing) is discussed.
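A small worked example of ``efficiently parallelizable'' (mine, not from the lecture): summing n numbers takes O(n) steps sequentially, but on an idealized parallel random-access machine, n/2 processors can combine pairs simultaneously, so only O(log n) parallel time units are needed. The simulation below counts those parallel steps:

```python
# Simulate a PRAM-style parallel sum: each pass of the loop stands for
# ONE parallel time unit in which all pairwise additions happen at once.

def parallel_sum(values):
    vals, steps = list(values), 0
    while len(vals) > 1:
        # all adjacent pairs are combined "simultaneously"
        vals = [vals[i] + vals[i + 1] if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
        steps += 1
    return vals[0], steps

total, steps = parallel_sum(range(16))
print(total, steps)   # 120 in 4 parallel steps, vs. 15 sequential additions
```

Problems like this sit in the ``efficiently parallelizable'' class; the lecture's point is that some AI problems provably do not, no matter how many processors are used.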

The sixth lecture discusses how to bridge the gap between design of
parallel algorithms and their hardware implementations using the present
VLSI technology. This lecture will overview interconnection networks
suitable for VLSI layout. Then, different systolic algorithms and their
hardware implementations will be discussed. To evaluate their
effectiveness, we compare how important data structures and operations,
such as queues (FIFO), dictionaries, and matrix manipulation, can be
implemented on various systolic architectures.
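As one representative systolic algorithm (my illustrative choice, not necessarily one covered in the lecture), consider matrix-vector multiplication on a linear array of cells: data flows through the array in a skewed wavefront, and each cell performs one multiply-add per clock tick. The simulation below mimics that schedule:

```python
# Simulated systolic matrix-vector product y = A * x on a linear array
# of n cells, where cell i accumulates row i of the result.

def systolic_matvec(A, x):
    n = len(A)
    y = [0] * n
    # at global tick t, cell i works on element x[t - i]: the input is
    # skewed in time, so computation sweeps through the array as a wave
    for t in range(2 * n - 1):
        for i in range(n):
            j = t - i
            if 0 <= j < n:
                y[i] += A[i][j] * x[j]   # one multiply-add per cell per tick
    return y

print(systolic_matvec([[1, 2], [3, 4]], [5, 6]))   # [17, 39]
```

The appeal for VLSI is visible in the structure: every cell is identical, communicates only with its neighbors, and needs no global memory access.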

The seventh lecture surveys various parallel programming languages.  In
particular, the lecture will describe extensions made to sequential
procedural, functional, and logic programming languages for parallel
programming.  Language constructs for expressing parallelism and
synchronization, either explicitly or implicitly, will be overviewed and
their implementation issues will be discussed.
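The explicit style of construct described above can be sketched in modern terms with a fork/join pattern (this example is mine, using Python's thread-pool executor as a stand-in for a parallel language's primitives; the work function is invented):

```python
# Explicit parallelism: the programmer names the parallel tasks (fork)
# and the synchronization point (join); implicit approaches instead
# leave this discovery to the compiler or runtime.
from concurrent.futures import ThreadPoolExecutor

def work(k):
    # stand-in for an independent unit of work
    return k * k

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(work, k) for k in range(8)]   # fork 8 tasks
    results = [f.result() for f in futures]              # join: wait for all
print(results)
```

Implementation issues of the kind the lecture raises -- task scheduling, synchronization cost, and preserving result order -- are all hidden inside the executor here.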
-------