[comp.research.japan] Kahaner Report: Comp in High Energy Physics '91

rick@cs.arizona.edu (Rick Schlichting) (05/27/91)

  [Dr. David Kahaner is a numerical analyst visiting Japan for two years
   under the auspices of the Office of Naval Research-Asia (ONR/Asia).  
   The following is the professional opinion of David Kahaner and in no 
   way has the blessing of the US Government or any agency of it.  All 
   information is dated and of limited life time.  This disclaimer should 
   be noted on ANY attribution.]

  [Copies of previous reports written by Kahaner can be obtained from
   host cs.arizona.edu using anonymous FTP.]

To: Distribution
From: David K. Kahaner ONR Asia [kahaner@xroads.cc.u-tokyo.ac.jp]
Re: Computing in High Energy Physics '91, Tsukuba Japan 11-15 March 1991.
22 May 1991

ABSTRACT.
The international Computing in High Energy Physics meeting, held 11-15 
March 1991, in Tsukuba Science City Japan is summarized.

CHEP'91.
High energy physicists are engaged in "big ticket" physics. These are the 
people whose experiments require the large accelerators at CERN, Fermi 
National Accelerator Lab (FNAL), National Lab for High Energy Physics in 
Japan (KEK), the Super Collider at Texas (SSC), etc. The experiments 
generate massive amounts of data: an experiment can produce 100 
megabytes/second, a terabyte a day, and a petabyte a year.  
Acquiring, moving, and storing this data requires high-speed, 
high-bandwidth networks, libraries of tapes and other external storage 
devices, as well as automated retrieval systems.  
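As a rough back-of-the-envelope check (my own figures, not any speaker's), 
the little sketch below converts an instantaneous acquisition rate and an 
assumed duty cycle into daily and yearly volumes; the numbers quoted above 
are of this order of magnitude.

    # Back-of-the-envelope data volumes for a HEP experiment.  The rate
    # and duty cycle below are illustrative assumptions, not CHEP'91 figures.
    RATE_MB_PER_S = 100        # instantaneous acquisition rate
    DUTY_CYCLE    = 0.12       # fraction of the day data is actually logged

    seconds_per_day = 24 * 3600
    tb_per_day  = RATE_MB_PER_S * seconds_per_day * DUTY_CYCLE / 1.0e6
    pb_per_year = tb_per_day * 365 / 1000.0

    print("~%.1f TB/day, ~%.2f PB/year" % (tb_per_day, pb_per_year))
    # prints: ~1.0 TB/day, ~0.38 PB/year -- the terabyte-a-day,
    # petabyte-a-year order of magnitude mentioned above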

Processing the data is required first in real time during the 
experiments, and then as post processing afterwards for analysis.  For 
both of these requirements, the computing needs have always outstripped 
the capabilities of whatever was the current fastest supercomputer.  
Related theoretical analysis, such as lattice gauge theory, which does 
not depend on the experimental data, also requires tremendous computing 
resources which will barely be satisfied by teraflop computers.  In fact, 
special purpose computers are being built specifically for some of these 
analyses.  

Each year high energy physicists from around the world who are interested 
in computing come together for their annual meeting (Computing in High 
Energy Physics-CHEP).  This year it was held in Tsukuba Science City, 
about one hour outside Tokyo, from 11-15 March 1991. This was truly an 
international meeting, as the summary of the participants' list below shows.  

    Country           No. of participants
     Brazil               2
     Canada               1
     China                5
     Denmark              1
     France              12
     Germany             12
     Israel               2
     Italy               17
     Japan              118
     Malaysia             1
     Spain                1
     Switzerland         29
     UK                   4
     USA                 35
     USSR                32
    -------------------------
     Total              272


There were 30 plenary talks and 84 presentations in the parallel and
poster sessions. There was also a small exhibition by vendors.  Because
several other meetings were being held during the same week I was only
able to attend the first day and a half of this conference, and I missed
a great many fascinating-sounding papers.  The purpose of this summary
is to provide my general impressions of the work as far as I was able to
assess it. Many thanks to Professor Yoshio Oyanagi (University of
Tokyo), and Mr.  Sverre Jarp (CERN) who participated in all of CHEP '91,
read this report and made important suggestions.  A complete list of the
titles and authors of the papers is attached to the end of this report.
A Proceedings is not yet available, but will be published this summer by
        Universal Academy Press       (this is NOT Academic Press)
        Ohgi-ya Building
        Hongo 5-26-5, Bunkyo-ku
        Tokyo, 113 Japan
         Tel: +81-3-3813-7232


(1) High energy physics computing demands are at least as great as those 
in some better known fields, such as fluid dynamics, molecular modeling, 
etc.  

(2) The scientists working in high energy physics are already using
large, interconnected, state of the art hardware for their experiments.
Thus the use of complicated computer networks and collections of
distributed computers for data processing and analysis does not put them
off. Rather, they have been doing distributed and parallel computing for
years using "farms" of minicomputers, typically Vax's.  Vax computers
are so ingrained into the culture that performance is measured in VUPs
(Vax Units of Performance).  Postprocessing of data is also done on
whatever is the largest machine available (in Japan these are typically
FACOM or Hitachi mainframes). A good deal of the "tracking" computations
can be vectorized but not much else.  However, there is a definite
movement toward RISC workstations and parallel computers. In fact, the
computing environment surrounding some of these experiments may be more
sophisticated (although often homegrown) than in laboratories that are
famous for supercomputing.  Also at the software level these labs are
already dealing with some extremely large (often multi-million-line)
source programs. Few software tools are being used, and most of
those are either homegrown or vendor supplied utilities. Thomas Nash
(FNAL, nash@fnal.fnal.gov) emphasized the need for research in software
engineering to aid in software management.
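To make the "farm" idea mentioned above concrete: since individual events are
statistically independent, a farm simply fans raw events out to many cheap
processors and collects the reconstructed results.  Here is a minimal sketch
of that pattern (my own illustration; the reconstruct() routine is a
hypothetical stand-in, and the real farms of the period were VAXes running
home-grown dispatch software):

    # Minimal event-farm sketch: independent events fanned out to a pool
    # of worker processes, results collected centrally.
    from multiprocessing import Pool

    def reconstruct(raw_event):
        """Hypothetical stand-in for per-event track finding/fitting."""
        event_id, samples = raw_event
        return event_id, sum(samples)       # pretend 'reconstruction'

    def main():
        raw_events = [(i, [i, i + 1, i + 2]) for i in range(1000)]
        with Pool(processes=8) as farm:     # eight 'farm nodes'
            results = farm.map(reconstruct, raw_events)
        print(len(results), "events reconstructed")

    if __name__ == "__main__":
        main()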

(3) The high energy physics community has more or less spurned
mainframes and related centralized services.  HEP code has never been
cost-justified on expensive supercomputers or mainframes because the
computationally intensive programs (1) are often small and (2) rarely
vectorize.  Hence the interest in cheaper systems (Unix and RISC) that
offer the promise of the huge quantities of computing that the community
needs.  At the same time, they realize that supercomputer companies can
offer some services that cannot be duplicated elsewhere.

David O. Williams from CERN (davidw@cernvm.cern.ch) discussed 
the relationships between mainframes and workstations as seen by his 
constituency. With respect to the question "is the role of the mainframe 
terminated" he made the following conclusions.  
  * General purpose mainframes as we know them in HEP are at the start of 
    their run-down phase. This phase will take about 5 years in HEP and 
    longer in the general marketplace.  
  * The services provided by these mainframes are essential and over time 
    will be provided by more specialized systems.  
He urged mainframe builders to realign prices towards the workstation 
server market, emphasize integration, push the mainframe's I/O advantage 
relative to workstations, perform research related to quickly accessing 
vast quantities of data on a worldwide basis, and emphasize 
dependability, service, ease of use, and other things that will have a 
big payoff for scientists. (Robert Grossman from U Illinois, 
grossman@uicbert.eecs.uic.edu, echoed a part of this by pointing out that 
performance of database systems will have to be dramatically improved. 
For example the "distance" between two physical events in a 10^(15) item 
DB can be very great. Thus the first query will always be expensive, but 
research needs to be done on methods to speed up subsequent queries.) 
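As one concrete (and entirely my own) illustration of the kind of technique
Grossman has in mind, the expensive first selection can be materialized as a
cached event list, and refinements of it can then be answered against that
cached subset rather than against the full store:

    # Sketch: cache the event IDs selected by an expensive first query,
    # then run refined queries against the cached subset.  The names and
    # the toy data are mine, not from the talk.
    class EventStore:
        def __init__(self, events):
            self.events = events        # event_id -> attribute dict
            self._cache = {}            # selection name -> set of IDs

        def select(self, name, predicate, within=None):
            """Full scan unless 'within' names a cached selection."""
            if name in self._cache:
                return self._cache[name]
            candidates = self._cache[within] if within else self.events
            hits = {eid for eid in candidates if predicate(self.events[eid])}
            self._cache[name] = hits
            return hits

    store  = EventStore({i: {"energy": i % 100, "ntracks": i % 7}
                         for i in range(100000)})
    high_e = store.select("high_e", lambda e: e["energy"] > 90)    # expensive
    busy   = store.select("busy", lambda e: e["ntracks"] > 4,
                          within="high_e")                         # cheap
    print(len(high_e), len(busy))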

Williams' advice for workstation builders is to maintain aggressive 
pricing, emphasize integration, push I/O capacity, develop good 
peripherals and multiprocessors.  

I believe that most of the audience agreed with his points. 


(4) Specialized computers for simulation, in particular quantum 
chromodynamics (QCD), have been built or are under development in the US, 
Japan, Italy, and perhaps other countries. These QCD machines include one at 
Columbia University, Italy's APE, Tsukuba University's QCDPAX, IBM 
Yorktown Heights' GF11, and FermiLab's ACP-MAPS.  The Japanese PAX 
project began in the late 1970's; its fifth-generation machine, QCDPAX, is 
now running with 480 nodes and a peak speed of about 14 GFlops (see also 
my report PAX, 12 April 1990). The Columbia machine has almost as long a 
history and is of similar performance.  The following table (from the 
paper given by Iwasaki) gives some details of existing parallel computer 
projects dedicated to lattice gauge theory.  

Name      -          GF11       ACP-MAPS   APE        QCDPAX     APE100(*)
Where     Columbia U IBM        FNAL       Rome       Tsukuba    Rome

#CPUs     256        566        256        16         480        2048

Architec- MIMD       SIMD       MIMD       SIMD       MIMD       SIMD
  ture

CPU       80286      -          Weitek     -          68020      MAD
FPU       80287 +    Weitek     XL8032     Weitek     LSI Logic  (custom)
          Weitek     1032x2 +   chip set   1032x4 +   L64133
          3364x2     1033x2                1033x4

Memory
  SRAM    2MB        64KB       2MB        -          2MB
  DRAM    8MB        2MB        10MB       16MB       4MB        4MB

1-CPU     64MF       20MF       20MF       64MF       32MF       50MF
  perform.

Network   2D NNM     Memphis    X-bar &    Linear     2D NNM     3D NNM
                     switch     hyper^3    array
          (NNM = nearest neighbor mesh)

Host      Vax        IBM        MicroVax   MicroVax   Sun
          11/780     3090                             3/260

Peak      16GF       11GF       5GF        1GF        15GF       100GF

Status    (1)        (2)        (2)        (1)        (1)        (3)
  (1): Running, physical results reported
  (2): Nearly working
  (3): Well underway
(*): For additional details concerning APE100 contact M. Malek
(mmalek@onreur-gw.navy.mil), who is writing about high performance
computing in ONR's London office.

Several new machines are in the pipeline.  A teraflop machine for QCD has 
been proposed to the US Department of Energy by a collaboration of 
scientists from (mostly) US universities and National Laboratories. New 
machines are under development at Fermi Lab, and other places. The 
Japanese Ministry of Education, Science, and Culture (MONBUSHO) has just 
approved funding of the next generation PAX (about $10M US from 1992-
1996). All of these are estimating performance in the range of several 
hundred gigaflops within the next few years.  The network topology of QCD 
machines has been getting more sophisticated too, moving from 1D (16 
CPU), 2D (16x16), 3D (16x16x8), to 4D (16x16x8x8). There are still plenty 
of problems though, as neither the topology nor control structure (SIMD, 
MIMD, ?) is  really settled. In addition, reliability (MTBF) as well as 
pin and cabling issues have to be addressed.  Nevertheless at the leading 
edge, some of these scientists are already talking about performance 
beyond one teraflop.  
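For readers outside the lattice community: on a nearest-neighbor mesh machine
each node holds a block of the lattice and, on every sweep, exchanges only its
boundary sites with the adjacent nodes, which is why such simple topologies
suffice.  The toy sketch below (my own, not taken from any of these projects)
just computes the neighbor ranks of a node on a periodic 2D mesh, i.e. the
communication pattern behind the "2D NNM" entries in the table above:

    # Toy model of a periodic 2D nearest-neighbor mesh (torus): every
    # node communicates only with its four mesh neighbors.
    def neighbors(rank, px, py):
        """Return the E/W/N/S neighbor ranks of 'rank' on a px-by-py torus."""
        x, y = rank % px, rank // px
        return {
            "east":  (x + 1) % px + y * px,
            "west":  (x - 1) % px + y * px,
            "north": x + ((y + 1) % py) * px,
            "south": x + ((y - 1) % py) * px,
        }

    # e.g. a 16x16 mesh (256 nodes); node 0 wraps around in both directions
    print(neighbors(0, 16, 16))
    # {'east': 1, 'west': 15, 'north': 16, 'south': 240}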

(5) The community is very international, with visits to each other's
labs and joint projects being very common.  For example, Katsuya Amako
(KEK, Japan) pointed out that physicists from some 17 institutes are
participating in each of the Tristan experiments (Venus, Topaz, Amy).
Frankly, this is one of the most thoroughly mixed international research
communities that I have seen. A glance at the joint authorship of many
papers presented at this symposium testifies to this fact.  Consequently
there is a great deal of data sharing and savvy about advanced computing
and networking.  There is not much going on within their world that is
not rapidly known by all the active participants.  (I urge readers, even
nonspecialists, to scan the attached list of papers in order to get a
sense of how close to the front of the technology wave this community is
riding.) On the other hand there does not seem to be nearly as much
communication between this group and others doing high performance
computing. I see several reasons for this, including an intuitive sense
by the physicists that they have the best expertise needed to treat
their problems (because their computing needs are so special purpose),
and an almost exclusive dependence on VMS software until recently--thus
an isolation from the Unix world.  High energy physicists are moving
heavily and rapidly from minicomputers to workstations, and a "wind of
Unix" was definitely blowing through the conference.  The growth of Unix
is already bringing people closer together and I am very optimistic that
all parties can learn from each other.  In particular, it seems to me
that as computing becomes more distributed, the experiences of the
physics community who have actually been doing this for some time can be
beneficial in more general situations. Similarly, the physicists can
learn from computer scientists and algorithm developers who have broader
views.  Incidentally, Japanese contributions in this area are bound to
increase rapidly; once Unix is the accepted standard, the best
hardware will be easily adopted worldwide.

(6) For the future, the participants see data storage, cpu power, and 
software as three crisis issues. Networking between remote scientists and 
the experiment, or among scientists, was seen as something that needed to 
be beefed up, but it was not viewed as being at a crisis stage.  In Japan, future 
high energy physics projects are viewed as large international 
collaborations, and there is a strong feeling that a more unified 
worldwide HEP computing environment is needed.  

(7) Parallel computing is moving more into the mainstream of Japanese 
science. Two Japanese parallel computers that I reported on within 
the past year (QCDPAX and AP1000) were used to perform real work 
presented at this meeting. In addition, some applications of transputers 
were also shown.  I am predicting that we will see this trend continue as 
Japanese-built parallel machines are installed in other "friendly" 
outside user installations. 

                              TITLES AND AUTHORS
                     COMPUTING IN HIGH ENERGY PHYSICS '91
                                Tsukuba, Japan
                               March 11-15, 1991

GENERAL SESSIONS

Opening Remarks
     S. Shibata (KEK)

HEP Computing: Where are we now? Open questions for this conference.
     J.J. Thresher (CERN)

HEP Computing in Japan
     K. Amako (KEK)

Computing at LEP
     M. Delfino (LFAE)

Computing at HERA
     J. May (DESY)

Computing at Fermilab
     T. Nash (FNAL)

How is the large volume of data acquired?
     P. Le Du (Saclay)

Database Computing and HEP
     R. Grossman (U. Illinois)

Is the role of the mainframe terminated? Mainframe vs workstations.
     D. Williams (CERN)

Computing plan for SSC
     P. Leibold, B. Scipion (SSCL)

Computer in the next generation
     K. Fuchi (ICOT)

Special Architectures for HEP
     R. Bock (CERN)

Special Systems for Lattice Gauge Simulations
     Y. Iwasaki (U. Tsukuba)

Using Neural Networks to Identify Jets
     C. Peterson (U. Lund)
 
Pattern Recognition with Neural Nets
     M. Campbell (U. Michigan)

Symbolic and Formula Processing in HEP
     D.V. Shirkov (JINR)

Discussion: Collaboration with Industries
     K. Uchida (Fujitsu), R. Bock (CERN)

Discussion: Software Project Management in Shinkansen Control System
     T. Daniels (RAL)

Software Engineering for Large Collaboration
     K. Hashimoto (Fujitsu)

Reality of Software Engineering in HEP
     J. Knobloch (CERN)

Database Management and Distributed Data in HEP: Present and Future.
     L.M. Barone (U Rome)

Computing for HEP in IHEP, China
     T. Wang (IHEP)

HEP Computing in USSR
     V. Schegelsky (Leningrad)

UNIX in Future and its Impact to HEP
     J. Butler (FNAL)

UNIX and SHIFT Project at CERN
     L. Robertson (CERN)

The role of the UNIX central computing facility in a multi-purpose
        national laboratory
     C. Eades (LBL)

Physics Analysis Tools and its Integration
     P. Kunz (SLAC)

Library and Management Issues
     R. Brun (CERN)

Why Fortran 90?
     M. Metcalf (CERN)

Graphics over Networks and Graphical User Interface
     F. Etienne (CPP Marseille)

Network and Communication Environment for Large Collaboration
     R. Mount (Caltech)

Video Conference for HEP Collaboration
     G. Chartrand (SSCL)

HEPnet in Europe: Status and Trends
     F. Fluckiger (CERN)

Future Prospects for Networking in the United States
     W. Lininsky (FNAL)

Banquet speech. Teraflops Computer Using Josephson Elements
     E. Goto (U Tokyo)

Discussion:
Unified Environment in world-wide HEP community and Collaboration to get
        it; Operating System, Library, Database,...
     T. Schalk (SCIPP)

PARALLEL SESSIONS

Prototype of Computer Farm for KEK B-Factory
     Ryosuke Itoh  (National Lab. for High Energy Physics, KEK)

Novel Aspects of the SLAC B Factory Computing Model
     Tom Glanzman (SLAC)

Operating HEP Simulation Codes on the Parallel Computer T. Node
     A. Jejcic, J. Maillard, J. Silva (Laboratoire de physique corpusculaire,
     College de France - Paris)

An Object-Oriented Composition Environment for Scientific Applications
     Claudia M.L. Werner, Jano M. de Souza (COPPE/CERN Project, Federal
     University of Rio de Janeiro, Brazil)

GISMO:  Application of OOP to HEP Detector Design, Simulation, and
        Reconstruction
     B.W. Atwood (Stanford Linear Accelerator Center, Stanford, CA94309, USA)
     T.H. Burnett (Physics Department, University of Washington, Seattle,
     WA98915, USA)
     R. Cailliau, D.R. Myers, K.M. Storr (European Laboratory for Particle
     Physics (CERN), 1211 Geneva 23, Switzerland)

Object Oriented Approach to B Reconstruction
     Nobu Katayama (Cornell University)

Object Oriented Design and Programming for Experiment Online Applications -
        Experiences with a Prototype Application
     Gene A. Oleynik (Online Support Department, Fermi National Accelerator
     Laboratory, Batavia, IL, USA)

ARGUS:  A Distributed Graphic "C" Object Oriented Package for the L-3 Slow
        Control Graphic Display
     Jean-Marie Le Goff (CERN, 1211 Geneva 23, Switzerland)

Automatic Fortran Code Generation in the Entity Relationship Model
     Anne Sauvage, Alain Bonissent (CPPM IN2P3 CNRS Campus de Luminy, Case 907
     13288 Marseille Cedex 9, France)

The L3 Database Management System
     S. Banerje (Tata Institute of Fundamental Research, Bombay, India)
     L.M. Barone (INFN-Sezione di Rome and University of Rome, "La Sapienza",
     Italy)
     D. Boutigny, Y. Karyotakis (Laboratoire de Physique des Particules, LAPP,
     Annecy, France)
     P. Cardenal, N. Colino (CERN, Geneva, Switzerland)
     E. Gonzalez (Centro de Investigaciones Energeticas, Medioambientales y
     Tecnologicas, CIEMAT, Madrid, Spain)
     F. Linde (Carnegie Mellon University, Pittsburgh, USA)
     L. Niessen, J. Rose (I. Physikalisches Institut, RWTH, Aachen, Federal
     Republic of Germany)
     J. Perrier (University of Geneva, Switzerland)
     M. Pieri, Y.F. Wang (INFN sezione di Firenze and University of Firenze,
     Italy)
     S. Shevchenko, I. Vorobiev (Institute of Theoretical and Experimental
     Physics, ITEP, Moscow, Soviet Union)

Data Management, Access and Presentation in a Distributed, Heterogeneous
        Environment
     J.D. Shiers (CERN, 1211 Geneva 23, Switzerland)

An Environment for Building Control and Diagnosis Systems
     F. Corazziari, S. Falciano, L. Luminari, M. Savarese, E. Trasatti (INFN
     and Dipartimento di Fisica, P.le A. Moro 2, I-00185 Roma, Italy)

S.A.C.A.D. An Expert System for Data Multidimensional Analysis
     J. Jousset, J. Proriol, J.C. Chevaleyre (L.P.C. Clermont-Ferrand)

From Event Display to Monitoring Display: Use of Colour Graphics to Monitor a
        Complex Apparatus
     G. Zito (I.N.F.N. Sez. Bari)

How to Represent Three Dimensional Data of High Energy Physics Events?
     H. Drevermann, C. Grab (CERN, 1211 Geneva 23, Switzerland)
     B.S. Nilsson (Niels Bohr Institute, 2100 Copenhagen, Denmark)

Providing a Computing Environment for a High Energy Physics Workshop
     Judy Nicholls (Fermi National Accelerator Laboratory, Batavia IL, USA)

The Aleph Data Processing Chain at CERN: A Successful Combination of Three
        Heterogeneous Computer Architectures
     M. Delfino (Laboratori de Fisica d'Altes Energies Universitat Autonoma de
     Barcelona E-08193 Bellaterra, Barcelona, Spain)
     C. Georgiopoulos (Supercomputer Computation Research Institute, Florida
     State University, Tallahassee, FL 32301-4052, USA)
     T.F. Edgecock (Particle Physics Department, Rutherford Appleton
     Laboratory, Chilton, Didcot, Oxon OX11 0QX, UK)
     J. Knobloch (European Laboratory for Particle Physics (CERN), CH-1211
     Geneva 23, Switzerland)

Experiences on Parallelization of Venus Data Analysis and GEANT Codes
     S. Ichikawa, N. Ishida (Facom-Hitac Limited)
     Y. Takaiwa, J. Kanzaki, K. Amako, T. Tsuboyama, Y. Watase (National
     Laboratory for High Energy Physics, Japan)

Application of CAP-II to HEP Computation
     T. Matsuura, S. Ichikawa (Facom-Hitac Limited)
     H. Shiraishi, M. Ikesaka (Fujitsu Laboratories Limited)
     A. Miyamoto, T. Nozaki, A. Manabe (National Laboratory for High Energy
     Physics, Japan)

Parallelizing HEP Fortran Programs for the GP-MIMD Distributed Memory Machine
     Adrian King, Andre Schneider (CERN, 1211 Geneva 23, Switzerland)
     C. Battista (Univ. of Rome - INFN)

THE APE100 Supercomputer
     C. Battista (Univ. of Rome - INFN)
     
An Artificial Neural Network Computational Scheme for Pattern Matching
        Problems in High Energy Physics
     M. Castellano, E. Nappi, G. Satalino (Istituto Nazionale di Fisica
     Nucleare-Bari, Via Amendola 173 cap. 70126-Bari, Italy)

Studies in Reconstructing Circular Tracks using Neural Networks
     Wan Ahmad Tajuddin Wan Abdullah (Department of Physics, University of
     Malaya, 5900, Kuala Lumpur, Malaysia)

SIM: A Software Information Manager
     C. Maidantchik, A.R.C. Rocha, J.M. de Souza, G. Xexeo (COPPE/UFRJ - Rio
     de Janeiro/Brazil)
     C. Maidantchik, G. Xexeo (World Lab. - LAA/HED Project - Lausanne,
     Switzerland)
     G. La Commare (CERN - Geneva, Switzerland)

DALI, A Software Development Environment for Data Reduction and the Analysis
        of Nuclear Physics Events
     D. Heuer, J.C. Durand (Institut des Sciences Nucleaires, 53 Bd des
     Martyrs, 38026 Grenoble Cedex, France)

A KUIP-Based Interface for Track Fitting with Splines
     M.P. Bussa, L. Busso, L. Fava, L. Ferrero, R. Garfagnini, G. Grasso, I.
     Goulas, A. Maggiora, D. Panzieri, L. Santi, F. Tosello, G. Zosi (Istituto
     Nazionale di Fisica Nucleare, Torino-Udine, Italia)

The Architecture of TABA-HEP Workstation
     Jano de Souza, Ana Regina Rocha (Federal University of Rio de Janeiro
     (COPPE/CERN Research Project Caixa Postal 68511, 21945 Rio de Janeiro,
     Brazil)

Formula Manipulation System GAL
     Tateaki Sasaki (The Institute of Physical and Chemical Research)

The GAP-Project of Computer Aided Theoretical Calculations for Collider
        Physical Programs
     E. Boos, L. Gladilin, M. Dubinin, V. Edneral, V. Ilyin, A. Pukhov, V.
     Savrin, S. Shichanin (Inst. for Nucl. Phys., Moscow State University,
     119899 Moscow, USSR)

Computing the QCD alpha_s^3 Correction to the sigma_tot (e+e- --> HADRONS)
        with the Symbolic Manipulation System
     S.A. Larin (INR, Moscow)

Numerical Approach to the One-Loop Integral
     J. Fujimoto, Y. Shimizu (National Laboratory for High Energy Physics
     (KEK), Tsukuba, Ibaraki 305, Japan)
     K. Kato (Department of Physics, Kogakuin University, Shinjuku, Tokyo 160,
     Japan)
     Y. Oyanagi (Institute of Information Science, University of Tsukuba,
     Tsukuba, Ibaraki 305, Japan)

First Experiences with a Systolic Trigger Processor for RICH Detector
     R. Baur, J. Glaeb, R. Maenner (Physics Institute, University of
     Heidelberg)

The MPPC Project (Massively Parallel Processing Collaboration); Status and
        First Results
     Francois Rohrbach (CERN)

Radiation-Hard Associative String Processors - A High-Density Scalable SIMD
        Architecture
     E.G. Friedman (Hughes Aircraft Company, Carlsbad, California and Aspex
     Microsystems)
     R.M. Lea (Brunel University, Uxbridge, Middlesex, United Kingdom)

Use of Massive Parallel Processors for the Second Level Trigger At SSC/LHC
     Patrik Le Du (DPhPE-SEPH-CEN Saclay)

Fast Cluster Finding System for Future HEP Experiments
     Dario Crosetto (CERN, 1211 Geneva 23, Switzerland)

Architecture and Performance of the Delphi Data Acquisition and Control System
     J.N. Albert, L. Beneteau, T. Camporesil, Ph. Charpentier, J. Fuster, C.
     Gaspar, Ph. Gavillet, A. Grant, F. Harris, M. Jonker, P. Moreau, D.
     Ruffinoni, G.R. Smith, C. Stubenrauch (CERN, CH-1211 Geneva 23,
     Switzerland)
     T. Adye, B. Franek, G. Gopal, R. Sekulin, G.R. Smith (Rutherford Appleton
     Laboratory, Chilton, GB-Didcot OX11 OQX, UK)
     J.N. Albert (Universite de Paris-Sud, Laboratoire de l'Accelerateur
     Lineaire, Bat. 200, F-91405 Orsay)
     J.P. Laugier (CEN-Saclay, DPhPE, F-91191 Gif-sur-Yvette, France)
     A. Bassi, T. Rovelli, G. Valenti (INFN Bologna, Via Irnerio 46, I-40126
     Bologna, Italy)
     M. Donszelmann (NIKHEF-H, Postbus 41882, NL-1009 DB Amsterdam, The
     Netherlands)
     A. Tilquin (College de France, Lab. de Physique Corpusculaire, 11 Place
     M. Berthelot, F-75231 Paris Cedex 05, France)
     F. Harris (Nuclear Physics Laboratory, University of Oxford, Keble Road,
     GB- Oxford OX1 3PH, UK)

The H1 Data Acquisition System
     W.J. Haynes (DESY, H1 Collaboration, D-2000 Hamburg 52, SERC Rutherford
     Appleton Laboratory, Oxfordshire, UK)

The Data Acquisition System for the CLEO-B, At the Cornell B-Factory
     Klaus Honscheid, Chris Bebek (Wilson Lab., Cornell University)

The Delphi Fastbus Readout System
     L. Beneteau, T. Camporesil, Ph. Charpentier, J. Fuster, C. Gaspar, Ph.
     Gavillet, F. Harris, J. Javello, M. Jonker, Y. Miere, P. Moreau, H.
     Muller, E. Murzeau (CERN, CH 1211 Geneva 23, Switzerland)
     G. Goujon, M. Gros, M. Mur, P. Siegrist (CEN-Saclay, DPhPE, F-91191 Gif-
     sur-Yvette, France)
     B. Bouquet (Universite de Paris-Sud, Laboratoire de l'Accelerateur
     Lineaire, Bat. 200, F-91405 Orsay)
     J. Buytaert (Physics Department, Univ. Instelling Antwerpen,
     Universiteitsplein 1, B-2610 Wilrijk, Belgium)
     L. Cerrito (Istituto Superiore di Sanita, INFN, Viale Regina Elena 299,
     I-00161 Rome, Italy)
     W. Adam (Institut fur Hochenergiephysik, Oesterreich Akad. Wissenschaft,
     Nikolsdorfergasse 18, A-1050 Vienna, Austria)
     L. Guglielmi, J.B. Burnet (College de France, Lab. de Physique
     Corpusculaire, 11 Place M. Berthelot, F-75231 Paris Cedex 05, France)
     H. Lebbolo (LPNHE, Universites Paris VI et VII, Tour 33 (RdC), 4 Place
     Jussieu, F-75230, Paris Cedex 05, France)
     R.M.A. Lucock (Rutherford Appleton Laboratory, Chilton, GB-Didcot OX11
     OQX, UK)
     B. Nielsen (Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen O,
     Denmark)
     F. Harris (Nuclear Physics Laboratory, University of Oxford, Keble Road,
     GB- Oxford OX1 3RH, UK)

High Speed Data Exchange System for Data Acquisition
     Yoshiji Yasu, Hirofumi Fujii, Atsushi Manabe, Masaharu Nomachi, Yoshiyuki
     Watase, Shigeo Yashiro (KEK National Laboratory for High Energy Physics,
     1-1 Oho, Tsukuba, 305 Japan)

If it's RISC must it be UNIX?
     Rochelle Lauer (Yale University Physics Department)

The Fermilab Experience: Integration of UNIX Systems in a HEP Computing
        Environment
     Uday Pabrai (Fermi National Accelerator Laboratory, Batavia IL, USA)

Running Batch Services on a UNIX RISC Workstation
     Eril Jagel (CERN, 1211 Geneva 23, Switzerland)

The Aleph Offline Workstation Environment: Optimization of a Large VMS Cluster
     G. Kellner (European Laboratory for Particle Physics (CERN), CH-1211
     Geneva 23, Switzerland)
     K. Garnto, M. Ikeda, D. Levinthal (Supercomputer Computation Research
     Institute and Department of Physics, Florida State University,
     Tallahassee, FL 32306, USA)

Fermilab UNIX Environment
     Judy Nicholls (Fermi National Accelerator Laboratory, Batavia IL, USA)

Simulation, Status and Future Trends for GEANT
     F. Carminati (CERN, 1211, Geneva 23, Switzerland)

Parallel GEANT SGI 280 Benchmark Report
     Harald Johnstad (Physics Software Support Group, SSC Laboratory)

The Delphi Off-Line Software: Description, Difficulties and Trends
     G. Grosdidier (The DELPHI Collaboration, LAL-IN2P3-CNRS)

The ZEUS Offline Software Environment
     Tobias M. Haas (Deutsches Elektronen Synchrotron, Notkestr. 85 D-2000
     Hamburg 52)

The EOS TPC Analysis Shell
     D.L. Olson (Lawrence Berkeley Laboratory, Berkeley, CA, USA)

Control of an Experimental Apparatus by Means of an Expert System on a
        Transputer Network
     R. Campanini, I. D'Antone, G. Di Caro, G. Giusti (INFN Sezione di Bologna
     and Dipartimento di Fisica, Univ. Bologna)

The New Nature Generator for Lattice Gauge Simulation
     N.Z. Akopov, E.M. Madunts, G.K. Savvidi (Yerevan Physics Institute)

The Use of "Logic Cell Arrays" in a Fast Trigger Processor for H1 at HERA
     H.J. Behrend, W. Zimmermann (DESY, Notkestrasse 85, 2000 Hamburg 52,
     Germany)

Capabilities of a Software Bus Based Visualization Paradigm
     Bill Johnston, Brian Tierney, David Robertson (Imaging Technologies
     Group, Lawrence Berkeley Laboratory, Berkeley, CA, USA)

An Environment for Software Quality Evaluation in HEP
     Ana Regina Rocha, Simoni Palermo (Federal University of Rio de Janeiro
     COPPE/CERN Research Project, Caixa Postal 68511, 21945 - Rio de Janeiro -
     Brazil)

The Data Acquisition System of the Obelix Tracking Drift Chambers (JDC) at
        Lear
     M.P. Bussa, L. Busso, L. Fava, L. Ferrero, R. Garfagnini, A. Grasso, A.
     Maggiora, D. Panzieri, G. Piragino, T. Tosello, G. Zosi (INFN & Istituto
     di Fisica Generale, Torino)
     A. Lanaro (CERN, Geneva)
     F. Balestra (INFN & Dipartimento di Fisica, Cagliari)
     L. Santi (INFN, Udine)

A High-Speed Data Acquisition System Using VME and RISC/UNIX Workstation
     H. Fujii, E. Inoue, H. Kodama, M. Nomachi, Y. Yasu (KEK)

A Special Function Coprocessor and its Uses in the D0 Data Acquisition System
     D. Cutts, J.S. Hoftun, D. Nesic (Physics Department Brown University,
     Providence, R.I. 02912)
     C.R. Johnson, R.T. Zeller (ZRL 500 Wood Street, Bristol, R.I. 02809)

The Venus Second Level Parallel Processor Trigger
     Timo Korhonen, Hiroshi Sakamoto, Yoshiyuki Watase (KEK National
     Laboratory for High Energy Physics)

Quasi-Online Data Processing with the Aleph Event Reconstruction Facility: One
        Year of Total Success
     M. Delfino, A. Pacheco (Laboratori de Fisica d'Altes Energies,
     Universitat Autonoma de Barcelona, E-08193 Bellaterra, Barcelona, Spain)
     J. Knobloch (European laboratory for Particle Physics (CERN), CH-1211
     Geneva 23, Switzerland)

Integrating UNIX Workstations into Existing Online Data Acquisition Systems
        for Fermilab Experiments
     Gene Oleynik (Online Support Department, Fermi National Accelerator
     Laboratory, Batavia IL, USA)

The Obelix Online Monitor and Display
     F. Balestra, M.P. Bussa, L. Fava, L.Ferrero, R. Garfagnini, I. Goulas, A.
     Masoni, B. Minetti, G. Puddu, L. Santi, G. Zosi (Istituto Nazionale di
     Fisica Nucleare, Cagliari-Torino-Udine, Italia)

Building a Mass Storage System for Physics Applications
     Harvard Holmes, Stewart Loken (Lawrence Berkeley Laboratory, Berkeley,
     CA, USA)

Rapid Access to Event Subsamples in Large Disk Files through Random Access
        Techniques
     M. Delfino (Laboratori de Fisica d'Altes Energies, Universitat Autonoma
     de Barcelona, E-08193 Bellaterra, Barcelona, Spain)
     E. Blucher, J. Knobloch, M. Talby (European Laboratory for Particle
     Physics (CERN), CH-1211 Geneva 23, Switzerland)

Academic Research Network in Japan or Todai International Science Network
     T. Kamae (Tokyo Univ.)

HEP Network Environment in Japan
     S. Ichii, F. Abe, Y. Banno, H. Goto, H. Hirose, Y. Karita, R. Ogasawara,
     S. Yashiro, F. Yuasa (KEK)

Strategy of Multiprotocol Campus HEP Network
     Katsuo Hasegawa (Faculty of Science, Tohoku Univ.)

Matrix Approximation in Track Finding
     F. Abe, K. Amako, Y. Takaiwa (KEK National Laboratory for High Energy
     Physics)
     M. Asai (Hiroshima Institute of Technology)

Automatic Track Reconstruction in Events with Several Hundreds of Particle
        Tracks
     M. Fuchs (GSI/Darmstadt)

A Fast Vertex Fitting Algorithm with the Perigee Parametrization of Tracks
     P. Billoir (LPNHE, Universites Paris VI et VII, Paris, France)
     S. Qian (CERN, Geneva, Switzerland/INFN, Frascati, Italy)

A Spot Description Algorithm Applied to Data Analysis for Imaging Detectors
     M. Castellano, E. Nappi, G. Tomasicchio, G. Satalino (Istituto Nazionale
     di Fisica Nucleare-Bari, Italy)

CAB - The Cosmos Application Builder
     Geraldo Xexeo (World Lab-LAA/HED Project-Lausanne)
     Geraldo Xexeo, Jano de Souza (COPPE/UFRJ-Rio de Janeiro, Giuseppe La
     Commare, CERN, Geneva)


Poster Session

A Method for Data Registration and "ISTRA" Installation Control
     Y.M. Klubakov (Inst. for Nucl. Res., Acad. of Sci of USSR)

ACSD - Software Package for Engineering Calculations of Proton Accelerator
        Shielding
     K.L. Belyanski, I.N. Kopeykin, S.V. Serezhnikov (Inst. for Nucl. Research
     of the Academy of Sci.)

The Experimental Data Analysis Methods Based on the Nonparametric Goodness-of-
        Fit Criteria W(N,2) and W(N,3)
     V.V. Ivanov, P.V. Zrelov (JINR, Dubna, USSR)

Fractals in Quantum Theory: Analytical Approach and Simulations
     O.A. Khrustalev, P.K. Silaev, E.N. Tyurin (Moscow State Univ.)

About One Method for Determining the Transmission Function Parameters for
        Drift Chambers of the "Neutrino Detector" Type
     I.M. Ivanchenko, P.V. Moissenz (Joint Institute for Nuclear Research,
     Laboratory of Computing Techniques and Automation, Department of
     Mathematical Analysis of Experimental Data, Dubna, USSR)

Safety Controlled by an Expert System on Experimental Sites in High Energy
        Physics
     Francois Chevrier (CERN/ECP, Geneva, Switzerland)

FBNEXPERT: An Intelligent Tool for Fault Diagnosis in Fastbus Data Acquisition
        Systems
     F. Corazziari, S. Falciano, L. Luminari, M. Savarese, E. Trasatti (INFN
     and Dipartimento di Fisica, P. le A. Moro 2, I-00185 Roma, Italy)
     E.M. Rimmer (ECP Division, CERN, 1211 Geneva 23, Switzerland)

Some Remarks on the Architecture of Multiprocessor Systems for Elementary
        Particle Physics
     I. Kolpakov (JINR, Dubna, USSR)

A Systolic Track Finding Trigger Processor
     F. Klefenz, R. Maenner (Physics Institute, University of Heidelberg)

A Method for Data Registration for "ISTRA"
     O.V. Karavichev, Yu. M. Klubakov, et al. (Inst. for Nucl. Res., Acad. of
     Sci. of USSR)

A Flexible Data Acquisition System which Can Support Simultaneously both NIM
        and CAMAC Data Modules
     Yasuo Nagashima, Hiromi Kimura, Terushi Kaikura (Tandem Accelerator
     Center, University of Tsukuba)

Computing for TAU Charm in Dubna USSR (Some Initial Parameters, Architecture,
        Trends)
     V.M. Kotov (JINR, Dubna)

Graphics-Orientated Operator Interfaces at H1
     M. Zimmer (DESY)

On a Project of Satellite Computer Links Between JINR Member States and
        International Networks (KOKOS Project)
     S.G. Kadantsev (JINR, Dubna, USSR)

Application of Neural Net to RICH Pattern Recognition
     Y. Chiba (Hiroshima Univ.)

CND, A Software for Sorting and Packing Nuclear Physics Events
     D. Heuer and J.C. Durand (Institut des Sciences Nucleaires, 53 Av des
     Martyrs, 38026 Grenoble Cedex, France)

Computational Methods for Generating Subgraphs of Some Lattices
     F.M. Bhatti (Shah Abdul Latif University, Khairpur, Pakistan)

Recognition of a Hyperon's Decay Vertex using Neural Network for High Energy
        Nuclear Experiments
     Y. Igarashi, Y. Yamashita, I. Arai, K. Yagi (Institute of Physics,
     University of Tsukuba, Tsukuba, Ibaraki 305, Japan)

Data Acquisition and Event Filtering by Using Transputers
     Y. Nagasaka, I. Arai, K. Yagi (Institute of Physics, University of
     Tsukuba, Tsukuba, Ibaraki 305, Japan)

----------------END OF REPORT----------------------------------------