Schauble@MIT-MULTICS.ARPA (11/04/85)
From: Paul Schauble <Schauble@MIT-MULTICS.ARPA>

I've been reading about vector processing machines lately, and it has started me wondering how you program these things.

FORTRAN used to be the language of choice for number crunching. Is it still? What else is used? I'm very curious what people consider the relative merits of the various languages.

For the vector machines, is the language extended, or are the compilers smart enough to recognize things that the vector instructions are good for? What kinds of extensions?

Please reply to the list. I read regularly and will be very curious about the discussion.

Thanks,
Paul
eugene@ames.UUCP (Eugene Miya) (11/05/85)
> From: Paul Schauble <Schauble@MIT-MULTICS.ARPA>
>
> FORTRAN used to be the language of choice for number crunching. Is it
> still? What else is used? I'm very curious what people consider the
> relative merits of the various languages.

It is my understanding that F-66 style programs constitute 99% of all Cray codes. The F-77 compiler has not been released yet. The other 1% are Pascal and C. Cray Pascal does now vectorize, and that is important because the new F-77 based compiler is written in Pascal to make it more maintainable. No one to my knowledge has written an APL, but with Unix running on a Cray this becomes easier to try.

Relative merits? Try dusty decks. Also dusty brains (mine included). There is not much else because of dust. It takes a lot to move LISP or other "esoterica," and my typical user (read physicist, ME, EE) does not use these languages, nor do they wish to maintain them. I suspect this will change as interest in AI grows. PSL was ported to the Cray-1 and XLISP was ported to the Cray-2 [neither are products].

> For the vector machines, is the language extended, or are the compilers
> smart enough to recognize things that the vector instructions are good
> for? What kinds of extensions?
>
> Thanks,
> Paul

Extensions come principally in the form of compiler directives to ignore vector dependences. These machines are quite fast scalar machines, too. The new thing to watch for will be user-defined "multi-tasking" (Cray terminology). Automatic decomposition won't come fast enough, so it will be placed in the users' hands. Fortran 99? Maybe.

By the way, at this time on Crays only the innermost nested loops vectorize. The Convex will vectorize more deeply nested loops (I did get it to do this for me when I ran on one), and so will the Amdahl-1200 [FAI VP-200].

Recently, I visited a group where I began work in the real world. We had an FPS-120B, serial #4. My old friends rationalized that programming vector machines was too hard. This is far from true. In some senses, vector programming is a bit too easy. We are overloading the function of the DO-loop. It's not just a DO-loop anymore. If you don't vectorize, you still get a fast loop, but otherwise you end up with a program of interconnected vectorized DO-loop "nodules." The compiler does help you by telling you what it did and did not vectorize.

Japanese vectorizing software is apparently like nothing in this country. It not only tells you what to vectorize, but suggests more global optimizations interactively. It's very impressive stuff; you can send mail to fai [Fujitsu] on the net.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  {hplabs,ihnp4,dual,hao,decwrl,allegra}!ames!aurora!eugene
  emiya@ames-vmsb
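[A minimal sketch of the directive style described in the post above, assuming the Cray CFT spelling CDIR$ IVDEP; the subroutine name and data are illustrative, not taken from the posting.]

      SUBROUTINE SCATTR(A, B, IX, N)
      INTEGER N, I, IX(N)
      REAL A(N), B(N)
C     IX(I) is assumed to lie in 1..N with no repeated values, but the
C     compiler cannot prove that, so it will not vectorize the indexed
C     update on its own.  The directive below (Cray CFT) tells it to
C     ignore the apparent dependence and vectorize the innermost loop.
CDIR$ IVDEP
      DO 10 I = 1, N
         A(IX(I)) = A(IX(I)) + B(I)
   10 CONTINUE
      RETURN
      END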
kurtk@tektronix.UUCP (Kurt Krueger) (11/08/85)
I still see Fortran as the number crunching language. The approach for vector machines has been twofold: to work on language extensions, and to create compilers that recognize language constructs that can be vectorized. The best success appears to be in the language extension camp, as Fortran's constructs don't lend themselves to easy vectorization (don't forget, Fortran was 'invented' when computers had a single accumulator and vacuum tubes). I must say, though, that the 'smart' compilers are getting there. Two big forces in this currently are CDC (the 205 line) and Cray. Another force is represented by the array processor folks. Floating Point Systems has a Fortran compiler for their processor.

An approach that I've never seen suggested would be to use the APL language. It has vector and matrix operations built into the language. It would require NO language extensions to use vector hardware, and since each command is so powerful, an interpreter (which most APL implementations are) running on a vector machine should really perform. I won't get into a discussion of the relative merits/demerits of APL. It is not a very popular language, but it is a natural for vector processors.
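[To make the contrast concrete, a hypothetical example, not from the posting: in APL the entire computation below is the single expression C <- A + B, one array primitive that can map straight onto vector hardware, while in Fortran the same thing is an explicit loop that a vectorizing compiler has to recognize.]

      SUBROUTINE VADD(A, B, C, N)
C     Element-wise add; equivalent to the APL one-liner  C <- A + B.
      INTEGER N, I
      REAL A(N), B(N), C(N)
      DO 10 I = 1, N
         C(I) = A(I) + B(I)
   10 CONTINUE
      RETURN
      END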
hes@ecsvax.UUCP (Henry Schaffer) (11/12/85)
> In the meantime, as you say, we seem to be stuck with FORTRAN.
> --
> D Gary Grady

I was at a seminar on large scale scientific computing given by Bob Voigt (of ICASE) and he was asked what language would be used for number crunching in the year 2000. His response was that he didn't know what the language would be --- but he predicted that it would be called FORTRAN.

--henry schaffer
eugene@ames.UUCP (Eugene Miya) (11/15/85)
> > --
> > D Gary Grady
>
> I was at a seminar on large scale scientific computing given by Bob
> Voigt (of ICASE) and he was asked what language would be used for
> number crunching in the year 2000. His response was that he didn't
> know what the language would be --- but he predicted that it would
> be called FORTRAN.
> --henry schaffer

Funny you should mention that. I know Bob was not the first person to say that, and the most famous attribution I've heard was to Cray himself back in the early 1970s. [BTW, I'm trying to make an effort to see Bob in a couple of weeks and I will ask him if he was paraphrasing.]

Alan Perlis said it as epigram 42: "You can measure a programmer's perspective by noting his attitude on the continuing vitality of FORTRAN."

I gave a talk here on trends in programming languages. 2000 is getting closer: F8X != F77 != F66 != FIV != FII. F8X has some pretty radical stuff proposed, but the question is whether you (generically) will still be writing in F66.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  {hplabs,ihnp4,dual,hao,decwrl,allegra}!ames!aurora!eugene
  emiya@ames-vmsb
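[For a concrete sense of the F8X vs. F66/F77 gap mentioned above, a rough sketch based on the array-notation proposals under discussion; the exact spelling in the draft may differ. The F66/F77 loop

      DO 10 I = 1, N
         A(I) = B(I) + S * C(I)
   10 CONTINUE

would be written in proposed F8X as a single whole-array assignment, stating the vector semantics directly in the source rather than leaving the compiler to rediscover them:

      A = B + S * C
]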
ljdickey@watmath.UUCP (Lee Dickey) (11/16/85)
> An approach that I've never seen suggested would be to use the
> APL language. It has vector and matrix operations built into
> the language. It would require NO language extensions to use
> vector hardware and since each command is so powerful, an
> interpreter (which most APL implementations are) running on a
> vector machine should really perform.

There is a quiet revolution going on. Here are three examples of array machines and array languages that match very well.

The first was The APL Machine, an interpreter that claims to meet the forthcoming ISO APL Standard, running on hardware that uses array processors originally designed and built by Analogic (the people who bring you CAT scanners and NMR devices) for developing images.

The second and third examples are from somewhat larger companies that you are more likely to have heard of. Honeywell's CP6 version of APL running on the model 90, built by NEC, uses the array instructions. This was announced at HLSUA. IBM's APL2 running on a 3090 makes use of the vector instruction set. Some users thought that the 3090 would come with only two language processors that would make use of the vector instructions; APL2 is the first really high-level language to do so.

All are very fast indeed, and much easier to use than FORTRAN. I guess the question now is, do you want to spend your life writing programs, or do you want to get your results right away? The learning curve for APL is a bit steeper at the beginning than for lower-level languages (such as C and Fortran), but the payoff is in productivity.
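[As an illustration of the productivity argument, a hypothetical example, not from the posting: a matrix product in APL is the one-liner C <- A +.x B, while the spelled-out Fortran equivalent is a triple loop.]

      SUBROUTINE MXMULT(A, B, C, N)
C     C = A * B for N x N matrices; the APL one-liner is  C <- A +.x B.
      INTEGER N, I, J, K
      REAL A(N,N), B(N,N), C(N,N)
      DO 30 J = 1, N
         DO 20 I = 1, N
            C(I,J) = 0.0
            DO 10 K = 1, N
               C(I,J) = C(I,J) + A(I,K) * B(K,J)
   10       CONTINUE
   20    CONTINUE
   30 CONTINUE
      RETURN
      END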