K312240%AEARN@CORNELLC.CIT.CORNELL.EDU (Klaus Kusche) (03/09/90)
Dear Mailing List:

As someone from Parsytec responded to my emails about Parsytec, some explanations and comments are in order:

1.) It's true, I'm *not* a user of Parsytec hardware or software, *but* I seriously considered going with Parsytec in the past.

First, we wanted to use Parsytec's PS/2 boards as masters for our Inmos-compatible transputer box. I asked a very precise question about whether this is possible, and as described in my previous mail, in spite of many faxes and phone calls it took them more than one month to come up with a useful answer, which seriously delayed our projects. Don't blame it on the Austrian dealer - he was very helpful, but unable to obtain any more information from them than I did.

Secondly, for more than two years now I have tried to get information about their Lisp and Prolog projects. In the past, the response was zero, no matter whether I tried by phone, fax or surface mail. They didn't even say 'there is no such project' or 'that's a secret'. After a phone call three weeks ago, I received a single report about their Prolog. I still consider buying their Prolog, but again, up to now I have been unable to obtain a definite statement from them telling me:

* whether they sell it to people not using their hardware,
* whether it will run on such hardware,
* what the requirements are,
* what the price is.

Again, the dealer was unable to get these questions answered either.

Summary: I don't know how they support their users, but speaking as a non-user, my experiences are not very promising.

2.) I never said that Parsytec products are bad:

* They have a very wide product range.
* Their products have very high mechanical and electrical quality.
* They have done a lot for the development of Helios.

3.)
The Parsytec representative and I seem to have very different points of view: He is mainly talking about parallel supercomputing, and about systems for specific applications or application software development, where people want to use one or a few important software environments with minimum trouble and maximum safety.

I'm talking about research and education, not about multi-user supercomputing. Here the main goal is to be able to try almost every piece of software developed for transputers:

* There are four parallel Prologs available for transputers? - Ok, I want to try and compare them all!
* Trollius and GNU C for transputers? Fine, I want to have a copy!
* and so on....

Here, experience shows that:

* The widest software range exists for PCs with Inmos-compatible boards.
* The PC versions usually become available several months before any other versions of the same software.
* The PC versions are by far the cheapest and easiest to get.
* The PC versions cause the fewest troubles with respect to distribution medium or host hardware/software dependencies.
* It is easiest to port servers and device drivers to the PC/DOS environment, and doing so doesn't require any modifications to the host operating system (have you ever tried to install device drivers for a dozen different pieces of transputer software in a single Unix kernel?).

This is true not only for software products, but also for exchanging research software with other academic institutions. This is why I absolutely insist on binary compatibility down to the hardware level! I didn't say that the Inmos hardware standards are the best (definitely not), but they are the most widespread and the easiest to live with. We don't need a fool-proof, highly sophisticated multi-user system; we need a system dedicated to a single user which is able to run as many different software systems as possible, one after another, with minimum trouble.
For us, the most useful reset scheme is a global big red button: it guarantees minimum interference with any software (and Inmos comes closest to that). Similarly, I didn't say that DOS is the best or most comfortable environment for working with transputers, just the one best suited to our needs. Sun is better, Apple is better, but both are by far not as universal with respect to transputers (and DOS isn't that bad). Moreover, you can always run Unix on your PC and execute your transputer applications from the DOS emulator, or port the servers.

4.) About Helios: We are a non-Helios site, and this will not change in the near future. Helios is very good for providing a standard environment, for using ready-to-run applications, and for hiding and simplifying parallelism as much as possible. But for research and education in parallel computing, I need exactly the opposite:

* Complete knowledge about and control of what is going on in hardware and software, with absolutely reproducible timings, process placements etc.
* A simple, consistent, formally well-founded model of parallelism and communication (like Occam), which is also efficient for medium- to fine-grain programs. The pipes in Helios are too high-level, too coarse-grain, and too slow, and parallelism and communication using Helios system calls is far too complicated and unintuitive.

Helios definitely has its place in the transputer world, but not in our institute. Besides, currently perhaps 25 % of all transputer software packages have been ported to Helios, and this number is increasing quite slowly. This is definitely not enough for us to accept Helios as the one and only standard for transputers.

I hope this provides a better understanding of what I've said before (again, everything is strictly my private opinion). By the way, could we hear the opinions of some Parsytec users about the discussion going on here? (Dealers are welcome, too, but please clearly indicate your status!)
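For readers who haven't worked with Occam: what I mean by a simple, explicit process/channel model is processes that rendezvous on unbuffered, point-to-point channels (in Occam: a PAR of processes using `chan ! x` and `chan ? y`). The sketch below is purely illustrative and uses Go, whose unbuffered channels happen to behave much the same way; it is my own analogy, not anything shipped with Occam, Helios or any Parsytec product.

```go
package main

import "fmt"

// squares sends the squares 0..n-1 over a synchronous (unbuffered)
// channel and collects them on the receiving side. The producer and
// consumer rendezvous on every value, as two Occam processes joined
// by a channel would.
func squares(n int) []int {
	c := make(chan int) // unbuffered: each send blocks until it is received

	go func() { // producer process (Occam: one branch of a PAR)
		for i := 0; i < n; i++ {
			c <- i * i // blocks until the consumer takes the value
		}
		close(c)
	}()

	var out []int
	for v := range c { // consumer process
		out = append(out, v)
	}
	return out
}

func main() {
	fmt.Println(squares(3)) // prints [0 1 4]
}
```

The point of the model is that communication and synchronisation are the same primitive, timings are explicit, and nothing happens behind the programmer's back - exactly the control I said Helios pipes take away.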
Greetings

************************************************************************
* Klaus Kusche                                                         *
* Research Institute for Symbolic Computation                          *
* Johannes Kepler University       Tel:   +43 7236 3231 67             *
* A-4040 Linz                      Telex: (Austria) 22323 uni li a     *
* Austria (Europe)                 Fax:   +43 7236 3231 30             *
*                                                                      *
* Bitnet:           K312240@AEARN                                      *
* Arpa/CS/Internet: K312240%AEARN.BITNET@CUNYVM.CUNY.EDU               *
* UUCP:             mcvax!aearn.bitnet!K312240                         *
* Janet:            k312240@earn.aearn or k312240%aearn@earn-relay     *
************************************************************************