WBD.TYM@OFFICE-2.ARPA (08/19/85)
From: William Daul / McDonnell-Douglas / APD-ASD <WBD.TYM@OFFICE-2.ARPA>

...news of at least one IBM research effort in high-speed computing surfaced at last month's National Computer Conference in Chicago. A team of physicists will soon take over a specially built computer designed to solve a single physics problem. According to an IBM official, this computer is supposed to take less than a year to solve a problem that would take a CRAY-1 supercomputer more than 300 years to do.

The IBM machine, developed at the Thomas J. Watson Research Center in Yorktown Heights, N.Y., consists of an array of 576 processors, each one capable of 20 million "floating point" operations per second (equivalent to multiplying two decimal numbers 20 million times). In contrast, a typical personal computer performs 1,000 or so such operations per second. When all the processors are working in parallel, each one handling a small part of a computation, the IBM computer can handle more than 10 billion floating point operations per second.

The machine will be used to calculate the mass of a proton from "first principles," applying the theory of quantum chromodynamics. This year-long exercise should give physicists some clues as to the validity of their concepts of quarks and gluons. Once this project is over, the machine could be used for other purposes, says IBM's George Paul. And the computer's design team is already thinking about how to extend the ideas they developed for the original machine.
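The aggregate throughput quoted above follows directly from the per-processor figure; a quick arithmetic check (all numbers taken from the article itself):

```python
# Figures from the article: 576 processors, each doing 20 million
# floating point operations per second.
processors = 576
flops_per_processor = 20_000_000

aggregate = processors * flops_per_processor
print(aggregate)  # 11520000000 -- i.e. "more than 10 billion", as claimed
```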
karsh@geowhiz.UUCP (Bruce Karsh) (08/22/85)
In article <509@sri-arpa.ARPA> WBD.TYM@OFFICE-2.ARPA writes:
>From: William Daul / McDonnell-Douglas / APD-ASD <WBD.TYM@OFFICE-2.ARPA>
>
>A team of physicists will soon take over a specially built computer
>designed to solve a single physics problem. According to an IBM official,
>this computer is supposed to take less than a year to solve a problem
>that would take a CRAY-1 supercomputer more than 300 years to do.
>
>The IBM machine, developed at the Thomas J. Watson Research Center in
>Yorktown Heights, N.Y., consists of an array of 576 processors, each one
>capable of 20 million "floating point" operations per second (equivalent
>to multiplying two decimal numbers 20 million times). In contrast, a
>typical personal computer performs 1,000 or so such operations per
>second. When all the processors are working in parallel, each one
>handling a small part of a computation, the IBM computer can handle more
>than 10 billion floating point operations per second.

Does anybody know how you would go about retaining significant digits in a
computation like this? If you figure there will be about 10**9 round-off
errors per second accumulating for one year, there must be some plans for
designing the calculations to be *EXTREMELY* insensitive to round-off
problems. How is this going to work? Is there literature on this subject?
--
Bruce Karsh
U. Wisc. Dept. Geology and Geophysics
1215 W Dayton, Madison, WI 53706
(608) 262-1697
{ihnp4,seismo}!uwvax!geowhiz!karsh
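One well-known device from the numerical-analysis literature for exactly this worry is compensated (Kahan) summation, which carries along the low-order bits that a naive running sum discards at each addition. A minimal sketch of the contrast (function names are illustrative):

```python
def naive_sum(values):
    # Plain running sum: each addition can lose low-order bits.
    total = 0.0
    for v in values:
        total += v
    return total

def kahan_sum(values):
    # Kahan compensated summation: recover the bits lost in each
    # addition and feed them back in on the next step.
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

vals = [0.1] * 10**6  # 0.1 is not exactly representable in binary
print(abs(naive_sum(vals) - 100000.0))  # visible round-off drift
print(abs(kahan_sum(vals) - 100000.0))  # many orders of magnitude smaller
```

The compensated version makes the error essentially independent of the number of terms, which is one way a year-long accumulation can stay well-conditioned.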
matt@oddjob.UUCP (Matt Crawford) (08/25/85)
In article <230@geowhiz.UUCP> karsh@geowhiz.UUCP (Bruce Karsh) writes:
>Does anybody know how you would go about retaining significant digits
>in a computation like this? If you figure there will be about
>10**9 round-off errors per second accumulating for one year, there must
>be some plans for designing the calculations to be *EXTREMELY*
>insensitive to round-off problems.

I have done calculations of this sort (on a much smaller scale), and
round-off errors do not accumulate. The system being simulated is
represented by discrete parts, and each part is repeatedly altered at
random to any of its allowable states, with the probability of each state
dependent on the energy of that state and on a parameter which plays the
role of temperature. As the parameter is gradually reduced, the lowest
energy state of the system should be discovered.

IBM has used this technique to lay out chips on a circuit board. The
"energy" of a given configuration is a count of how many wire crossings,
or how much wire length, is needed to connect the chips.
_____________________________________________________
Matt Crawford
University of Chicago
crawford@anl-mcs.arpa
ihnp4!oddjob!matt
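The procedure described above is simulated annealing with a Metropolis-style acceptance rule. A minimal sketch, using a toy one-dimensional stand-in for the wire-length "energy" (all names, the cooling schedule, and the toy problem are illustrative, not IBM's actual code):

```python
import math
import random

def anneal(energy, neighbor, state, t_start=10.0, t_end=0.01,
           steps=20000, seed=0):
    # Propose a random change; accept it with a probability that depends
    # on the energy difference and a temperature parameter that is
    # gradually lowered, as the post describes.
    rng = random.Random(seed)
    e = energy(state)
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)  # geometric cooling
        cand = neighbor(state, rng)
        e_cand = energy(cand)
        if e_cand <= e or rng.random() < math.exp((e - e_cand) / t):
            state, e = cand, e_cand
    return state, e

# Toy "energy": sum of gaps between adjacent items of an ordering --
# a rough 1-D analog of total wire length between placed chips.
def energy(perm):
    return sum(abs(a - b) for a, b in zip(perm, perm[1:]))

def neighbor(perm, rng):
    # Swap two randomly chosen positions.
    i, j = rng.sample(range(len(perm)), 2)
    out = list(perm)
    out[i], out[j] = out[j], out[i]
    return out

start = random.Random(1).sample(range(20), 20)
best, e = anneal(energy, neighbor, start)
print(energy(start), "->", e)  # e should land at or near the minimum, 19
```

Note that round-off plays almost no role here: each step only compares two energies, so errors do not carry forward from iteration to iteration, which is the substance of the reply above.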