bglenden@mandrill.cv.nrao.edu (Brian Glendenning) (11/21/90)
It is often stated that Fortran is better than C for numerical work. I am interested in compiling both the standard list of such arguments and what any counterarguments might be. Please email me directly; I will post a summary to the net when the replies have stopped trickling in. (I am interested in things like pointer aliasing, optimizer assumptions, inlining ability, etc., not language wars.)

Thank you,
Brian
--
Brian Glendenning - National Radio Astronomy Observatory
bglenden@nrao.edu bglenden@nrao.bitnet (804) 296-0286
ghe@comphy.physics.orst.edu (Guangliang He) (11/22/90)
In article <BGLENDEN.90Nov21003342@mandrill.cv.nrao.edu> bglenden@mandrill.cv.nrao.edu (Brian Glendenning) writes:
>
>It is often stated that Fortran is better than C for numerical work.
>[Some deleted text here]

It may not be true any more. A friend of mine brought a little Fortran program (it is two big DO loops with some intrinsic function calculations in the loop) and the C translation of the Fortran program. We compiled the two programs on an IBM RISC System 6000/530 with xlc and xlf. To my surprise, the executable from C is faster than the executable from Fortran by a few percent.

Guangliang He
ghe@physics.orst.edu
hegl@ORSTVM.BITNET
paco@rice.edu (Paul Havlak) (11/22/90)
In article <21884@orstcs.CS.ORST.EDU>, ghe@comphy.physics.orst.edu (Guangliang He) writes:
|> It may not be true any more. A friend of mine brought a little fortran
|> program (It is two big do loops with some instrinsic function calculation in
|> the loop.) and the C translation of the fortran program. We compiled two
|> program on a IBM RISC System 6000/530 with xlc and xlf. To my surprise, the
|> excutable from C is faster than the excutable from Fortran by a few percent.

Presumably the Fortran-to-C translation preserved the array structure and indexing found in the original Fortran program. A good compiler can optimize Fortran, no matter what language it's written in.

But watch out if you use C in its full generality. All but the simplest pointers will confuse a compiler and reduce its ability to optimize. Heap-allocated dynamic data structures will reduce data locality and increase page faults.

To paraphrase Jack Schwartz: "We don't know what the numerical programming language of the year 2000 will be called, but it will look like Fortran." (Well, at least the loops will.)
-----------
Paul Havlak
These are the opinions of a single grad student, working on compiler analysis of scientific programs.
salomon@ccu.umanitoba.ca (Dan Salomon) (11/22/90)
In article <BGLENDEN.90Nov21003342@mandrill.cv.nrao.edu> bglenden@mandrill.cv.nrao.edu (Brian Glendenning) writes:
>
>It is often stated that Fortran is better than C for numerical work. I
>am interested in compiling both the standard list of such arguments
>and what any counterarguments might be.

Here are the reasons that FORTRAN has not been replaced by C:

 1) C is definitely for wizards, not beginners or casual programmers.
    Usually people who are heavily into numerical work are not hacker
    types. They are mathematicians, scientists, or engineers. They want
    to do calculations, not tricky pointer manipulations. FORTRAN's
    constructs are more obvious to use, while even simple programs in C
    tend to be filled with tricks. Even the fundamental operation of
    reading input is tricky in C, as shown by the recent postings on
    scanf, gets, and fgets.

 2) FORTRAN is dangerous to use, but not as dangerous as C. For
    instance, most FORTRAN compilers have subscript checking as an
    option, while I have never encountered a C compiler with this
    feature. The ANSI standard for function prototypes will give C an
    edge over FORTRAN in catching parameter mismatch errors, but that
    improvement is relatively recent and not widely enforced yet.

 3) There is a large body of well-tested mathematical packages
    available for FORTRAN that are not yet available in C; for example,
    the IMSL package. However, this situation is improving for C.

 4) FORTRAN still gives the option of using single-precision floating
    calculations for speed and space optimizations, whereas C forces
    some calculations into double precision.

 5) Optimizers are a non-issue, since FORTRAN optimizers can match C
    optimizers on numerical expressions.

The reasons that C should replace FORTRAN for numerical work:

 1) C allows recursive functions, whereas portable FORTRAN doesn't.
    Recursive functions can often solve a problem more clearly than
    iterative methods, even if they are usually less efficient.

 2) FORTRAN has no dynamic array allocation. Although C has dynamically
    allocated arrays, they are not trivial to describe or allocate.
--
Dan Salomon -- salomon@ccu.UManitoba.CA
Dept. of Computer Science / University of Manitoba
Winnipeg, Manitoba, Canada R3T 2N2 / (204) 275-6682
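[Salomon's point 2 above, that dynamically allocated arrays in C are "not trivial to describe or allocate", usually comes down to the row-pointer idiom. A minimal sketch of one common scheme, with invented function names, allocating the rows as one contiguous block so the matrix can still be passed to routines that expect flat storage:]

```c
#include <stdlib.h>

/* Allocate an m-by-n matrix of doubles as one contiguous block of
 * elements plus a vector of row pointers, so callers can write
 * a[i][j].  Returns NULL if either allocation fails. */
double **alloc_matrix(size_t m, size_t n)
{
    double **a = malloc(m * sizeof *a);
    if (a == NULL)
        return NULL;
    a[0] = malloc(m * n * sizeof *a[0]);   /* one contiguous block */
    if (a[0] == NULL) {
        free(a);
        return NULL;
    }
    for (size_t i = 1; i < m; i++)
        a[i] = a[0] + i * n;               /* row i starts at offset i*n */
    return a;
}

void free_matrix(double **a)
{
    if (a != NULL) {
        free(a[0]);   /* the element block */
        free(a);      /* the row-pointer vector */
    }
}
```

Because the elements are contiguous, `a[0]` can also be handed to a Fortran-style routine that wants a flat array.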
rosenkra@convex.com (William Rosencranz) (11/22/90)
In article <21884@orstcs.CS.ORST.EDU> ghe@comphy.PHYSICS.ORST.EDU.UUCP (Guangliang He) writes:
>It may not be true any more. A friend of mine brought a little fortran
>program (It is two big do loops with some instrinsic function calculation in
>the loop.) and the C translation of the fortran program. We compiled two
>program on a IBM RISC System 6000/530 with xlc and xlf. To my surprise, the
>excutable from C is faster than the excutable from Fortran by a few percent.

this says nothing about the *language*, only the *compilers*. actually, it may not be that surprising when you consider that the 6000 runs unix and unix needs a good C compiler. IBM may have spent more time on the C compiler than the fortran compiler, figuring that more people may use C on the box than fortran. believe it or not, i have seen similar behavior on crays (cft77 vs scc [actually vc at the time], though also by only a few percent).

this does say that this particular code seems well suited for C on the 6000 today. it implies that C is not a bad language for numerical work, if performance is the criterion.

-bill
rosenkra@convex.com
--
Bill Rosenkranz |UUCP: {uunet,texsun}!convex!c1yankee!rosenkra
Convex Computer Corp. |ARPA: rosenkra%c1yankee@convex.com
avery@netcom.UUCP (Avery Colter) (11/22/90)
paco@rice.edu (Paul Havlak) writes:
>To paraphrase Jack Schwartz:
>"We don't know what the numerical programming language of the year 2000 will
>be called, but it will look like Fortran." (Well, at least the loops will.)

Well sure, from what I can tell, the primary structures of C, Pascal, and Fortran look pretty similar. After learning Fortran, C is coming along pretty naturally. The pointers and structures and unions are interesting new things, kinda like "Fortran with some bells and whistles".
--
Avery Ray Colter {apple|claris}!netcom!avery {decwrl|mips|sgi}!btr!elfcat (415) 839-4567
"I feel love has got to come on and I want it: Something big and lovely!" - The B-52s, "Channel Z"
wvenable@spam.ua.oz.au (Bill Venables) (11/22/90)
paco@rice.edu (Paul Havlak) writes:
> To paraphrase Jack Schwartz:
> "We don't know what the numerical programming language of the year 2000
> will be called, but it will look like Fortran."

Actually this is an inversion rather than a paraphrase. I recall it being exactly the other way round:

"We don't know what the numerical programming language of the year 2000 will look like, but it will be called Fortran."

which seems all too distressingly plausible! (Take that any way you like... :-)
--
Bill Venables, Dept. of Statistics, | Email: venables@spam.adelaide.edu.au
Univ. of Adelaide, South Australia. | Phone: +61 8 228 5412
john@ghostwheel.unm.edu (John Prentice) (11/23/90)
In article <1990Nov22.051446.1871@ccu.umanitoba.ca> salomon@ccu.umanitoba.ca (Dan Salomon) writes:
>
>The reasons that C should replace FORTRAN for numerical work:
>
> 1) C allows recursive functions, whereas portable FORTRAN doesn't.
>    Recursive functions can often solve a problem more clearly
>    than iterative methods, even if they are usually less efficient.
>
> 2) FORTRAN has no dynamic array allocation.

It should be mentioned, however, that the proposed Fortran 90 standard does have allocatable arrays, and most current-generation Fortran compilers either already allow for this or can be trivially linked to C to do it. There are also recursive Fortran compilers available now, and (if I remember right) recursion is a feature of Fortran 90, should we live so long as to actually see the standard adopted.

John Prentice
Amparo Corporation
john@ghostwheel.unm.edu (John Prentice) (11/23/90)
Another interesting point is that in studies done at Cray Research, they found it took SIGNIFICANTLY longer for their programmers to learn C, and the number of errors generated in coding in C (as opposed to Fortran) was much higher. Anyone who has programmed in C should be familiar with that problem. It is not a particularly straightforward language.

I would also raise the point that neither Fortran nor C is really all that great as a scientific language. They are both old languages which lack a lot of the features one would like in a modern language, particularly in a world where the future looks increasingly to be in parallelism. I laughingly agree that the scientific language of the future will be "called Fortran", but I don't know that I necessarily believe it. There is a whole generation of programmers (and scientists) coming on line who don't particularly pledge allegiance to Fortran. Also, the traditional argument for not ever throwing anything away in Fortran (i.e., there are billions of dollars worth of old Fortran codes around, which is true I admit) will cease to be that significant, I expect, as we move away from serial machines and as we concede that there is a finite lifetime to codes, even ones written in Fortran.

This, by the way, is written from the perspective of a computational physicist who has authored two hydrodynamic codes, both of which are on the order of 100,000 lines of code.

John Prentice
Amparo Corporation
Albuquerque, NM
ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (11/23/90)
In article <1990Nov22.051446.1871@ccu.umanitoba.ca>, salomon@ccu.umanitoba.ca (Dan Salomon) writes:
> The ANSI standard for function prototypes will
> give C an edge over FORTRAN in parameter mismatch errors, but
> that improvement is relatively recent and not enforced yet.

There are several checkers around for Fortran: several Fortran "lint" programs (a perennial topic in this newsgroup), PFORT, something in ToolPack.

> 3) There is a large body of well tested mathematical packages available
>    for FORTRAN, that are not yet available in C.

Given the existence of f2c, any math package available _in_ Fortran is effectively available in C, and in UNIX and VMS at least, it isn't hard to call from C anything that could have been called from Fortran.

> 4) FORTRAN still gives the option of using single precision floating
>    calculations for speed and space optimizations, whereas C forces
>    some calculations into double precision.

This is not true of ANSI C, and many vendors provided something like Sun's "-fsingle" as an option for years before that. It is also worth noting that on a number of machines, single-precision calculations are not faster than double precision.

> 1) C allows recursive functions, whereas portable FORTRAN doesn't.
>    Recursive functions can often solve a problem more clearly
>    than iterative methods, even if they are usually less efficient.

Solved in Fortran Extended.

> 2) FORTRAN has no dynamic array allocation. Although C has dynamically
>    allocated arrays, they are not trivial to describe or allocate.

Solved in Fortran Extended. Some vendors have provided pointers of some sort for several years, and it is easy to fake on some systems.
--
I am not now and never have been a member of Mensa. -- Ariadne.
gwyn@smoke.brl.mil (Doug Gwyn) (11/24/90)
In article <1990Nov22.051446.1871@ccu.umanitoba.ca> salomon@ccu.umanitoba.ca (Dan Salomon) writes:
>The reasons that C should replace FORTRAN for numerical work:

 3) C has decent support for nontrivial data structures, while they are
    sufficiently painful to emulate in Fortran that few Fortran
    programmers even try. Most really interesting algorithms are
    associated with interesting data structures.
gwyn@smoke.brl.mil (Doug Gwyn) (11/24/90)
In article <17290@netcom.UUCP> avery@netcom.UUCP (Avery Colter) writes:
>Well sure, from what I can tell, the primary structures of C, Pascal,
>and Fortran look pretty similar. After learning Fortran, C is coming
>along pretty naturally. The pointers and structures and unions are
>interesting new things, kinda like "Fortran with some bells and whistles".

While there aren't many legitimate applications for unions, most good applications in C lean VERY heavily on structures and pointers. For example, to add an item to a list in Fortran one normally hopes that the array (or parallel set of arrays) holding list members was declared with enough room, then stores the item at the next available location in the array and increments the integer variable that is used to keep track of the next available location. In C, however, the following is much more likely:

	node *AddItem( node *itemp, node **list )
	{
		node *np = (node *)malloc( sizeof(node) );

		if ( np == NULL )
			return NULL;	/* out of heap space (unlikely) */
		assert(itemp != NULL);	/* else usage error */
		*np = *itemp;		/* copy the data */
		assert(list != NULL);	/* else usage error */
		np->link = *list;	/* attach current list, if any */
		return *list = np;	/* current node is new head */
	}

You'll know you're reasonably proficient in C when code like the above makes perfect sense to you.
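[Gwyn's fragment assumes a `node` type declared elsewhere. A self-contained, compilable version of the same pattern, with a hypothetical `node` type supplied for illustration:]

```c
#include <stdlib.h>

/* Hypothetical node type; any payload works as long as the struct
 * carries a `link` pointer to the next node. */
typedef struct node {
    double value;
    struct node *link;
} node;

/* Copy *itemp into a fresh heap node and push it on the front of
 * the list.  Returns the new head, or NULL if malloc fails. */
node *AddItem(node *itemp, node **list)
{
    node *np = malloc(sizeof *np);
    if (np == NULL)
        return NULL;        /* out of heap space */
    *np = *itemp;           /* copy the data */
    np->link = *list;       /* attach current list, if any */
    return *list = np;      /* new node becomes the head */
}
```

Usage is simply `AddItem(&item, &list)` with `list` initialized to NULL; each call prepends a copy of `item`, so no preallocated maximum size is ever needed, which is exactly the contrast with the Fortran array idiom above.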
henry@zoo.toronto.edu (Henry Spencer) (11/24/90)
In article <1990Nov22.051446.1871@ccu.umanitoba.ca> salomon@ccu.umanitoba.ca (Dan Salomon) writes:
> ...Even the fundamental
> operation of reading input is tricky in C, as shown by the recent
> postings on scanf, gets, and fgets.

Actually, Fortran has much the same problems in this area: the facilities for formatted input make little provision for clean error recovery. This doesn't show up very much because the stereotypical use of Fortran is for batch jobs, not interaction.

> 2) FORTRAN is dangerous to use, but not as dangerous as C. For
>    instance, most FORTRAN compilers have subscript checking as an
>    option, while I have never encountered a C compiler with this
>    feature. The ANSI standard for function prototypes will
>    give C an edge over FORTRAN in parameter mismatch errors, but
>    that improvement is relatively recent and not enforced yet.

One might ask what compilers you are using. C compilers have trouble doing subscript checking because of the complexity of C pointers, but debugging compilers/interpreters which do this checking *do* exist. And there are already many C compilers which implement prototypes.

> 3) There is a large body of well tested mathematical packages available
>    for FORTRAN, that are not yet available in C. For example the
>    IMSL package. However, this situation is improving for C.

As others have mentioned, given f2c, this is a non-issue. They are all available in C now. (Sometimes they run faster that way, too...!)

> 4) FORTRAN still gives the option of using single precision floating
>    calculations for speed and space optimizations, whereas C forces
>    some calculations into double precision.

Not any more.
--
"I'm not sure it's possible | Henry Spencer at U of Toronto Zoology
to explain how X works." | henry@zoo.toronto.edu utzoo!henry
rh@smds.UUCP (Richard Harter) (11/24/90)
In article <21884@orstcs.CS.ORST.EDU>, ghe@comphy.physics.orst.edu (Guangliang He) writes:
> It may not be true any more. A friend of mine brought a little fortran
> program (It is two big do loops with some instrinsic function calculation in
> the loop.) and the C translation of the fortran program. We compiled two
> program on a IBM RISC System 6000/530 with xlc and xlf. To my surprise, the
> excutable from C is faster than the excutable from Fortran by a few percent.

This probably has nothing to do with the merits of C versus Fortran and has everything to do with the merits of the compilers involved. In the UNIX world C compilers are often optimized to a gnat's posterior, whereas Fortran compilers are often relatively primitive. The converse is true in environments where Fortran is big and C is just another minor language.

Fundamentally, Fortran compilers can be faster because the Fortran language specification forbids aliasing (but makes the user responsible for making sure that it is not present), whereas C has to deal with it.
--
Richard Harter, Software Maintenance and Development Systems, Inc.
Net address: jjmhome!smds!rh Phone: 508-369-7398
US Mail: SMDS Inc., PO Box 555, Concord MA 01742
This sentence no verb. This sentence short. This signature done.
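[The aliasing restriction Harter describes can be shown in miniature. In the sketch below (names invented), a plain C compiler must assume each store through `a` might change what `b` points at, so it reloads `b[i]` every iteration; a Fortran compiler may assume the dummy arguments are distinct. C99 much later added the `restrict` qualifier to let the programmer state that same guarantee.]

```c
/* a and b may legally overlap in C, e.g. scale(x + 1, x, s, n - 1),
 * so without extra information the compiler cannot cache b's
 * elements in registers or vectorize this loop freely. */
void scale(double *a, const double *b, double s, int n)
{
    int i;
    for (i = 0; i < n; i++)
        a[i] = s * b[i];
}
```

With `double *restrict a, const double *restrict b` the loop becomes as optimizable as the equivalent Fortran subroutine, at the cost of making overlapping calls undefined.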
steve@taumet.com (Stephen Clamage) (11/25/90)
ghe@comphy.physics.orst.edu (Guangliang He) writes:
>In article <BGLENDEN.90Nov21003342@mandrill.cv.nrao.edu> bglenden@mandrill.cv.nrao.edu (Brian Glendenning) writes:
>>
>>It is often stated that Fortran is better than C for numerical work...
>It may not be true any more. A friend of mine brought a little fortran
>program ...
>the excutable from C is faster than the excutable from Fortran by a few percent

What we have here is an example of one program compiled by one FORTRAN compiler and a translation of that program compiled by one C compiler. Comparison of the execution speeds of the two programs on one machine cannot lead to any valid conclusions about the relative utility of the two languages for numerical work.

FORTRAN has a large body of standard libraries for numerical work whose operation and reliability have been well-tested for many years. This cannot be said for C, although some multi-language environments allow such FORTRAN libraries to be called from C programs.

The utility of a language must be judged by more criteria than just the execution speed of one sample program.
--
Steve Clamage, TauMetric Corp, steve@taumet.com
gt4512c@prism.gatech.EDU (BRADBERRY,JOHN L) (11/26/90)
On Nov 25 18:55:29 EST 1990, in Article <34828>, (Dan Salomon) <salomon@ccu.umanitoba.ca> writes:
>Here are the reasons that FORTRAN has not been replaced by C:
>
> 1) C is definitely for wizards, not beginners or casual programmers.
>    Usually people who are heavily into numerical work are not hacker
>    types. They are mathematicians, scientists, or engineers.

I agree! The group described happens to represent most of the clients I deal with in Radar and Antenna applications. As in the original derivation of the name, these people are FOR-mula TRAN-slators!

>..(text deleted)...
>
>The reasons that C should replace FORTRAN for numerical work:
>
>..(text deleted)...

 3) FORTRAN ANSI standards take entirely too long to pass through committees, and there is little or no effort made to 'purge' old-style Watfor/V methods from 'current' teaching texts! I wouldn't be surprised to see references to 'card decks' in 'current' FORTRAN books into the year 3000 (programmers live forever!)...

Sorry about the digression!
--
John L. Bradberry |Georgia Tech Research Inst|uucp:..!prism!gt4512c
Scientific Concepts Inc. |Microwaves and Antenna Lab|Int : gt4512c@prism
2359 Windy Hill Rd. 201-J|404 528-5325 (GTRI) |GTRI:jbrad@msd.gatech.
Marietta, Ga. 30067 |404 438-4181 (SCI) |'...is this thing on..?'
hp@vmars.tuwien.ac.at (Peter Holzer) (11/27/90)
paco@rice.edu (Paul Havlak) writes:
>But watch out if you use C in its full generality. All but the simplest
>pointers will confuse a compiler and reduce its ability to optimize.

Even simple pointers can confuse a compiler on brain-dead architectures. I had a little program that did something with two arrays:

	item *a, *b;
	int i;

	/* init a, b */
	for (i = 0; i < N; i++) {
		a[i] = f(a[i], b[i]);
	}

After I optimized it to:

	item *a, *b, *pa, *pb;

	/* init a, b */
	for (pa = a + N - 1, pb = b + N - 1; pa >= a;
	     /* I know that is not portable, but it works
	      * with this compiler if sizeof (item) <= 8 */
	     pa--, pb--) {
		*pa = f(*pa, *pb);
	}

it ran 80% slower, because on this architecture (80286) far pointers don't fit into registers whereas the integer i does, and indexing is relatively fast. To make things more complicated: on the same computer, program 2 is faster than program 1 if both are compiled with near pointers.

Moral: write your code readably. Usually the compiler knows more about the architecture of the target computer than the programmer (especially if the program has to be compiled on lots of different computers) and can therefore optimize better.
--
| _    | Peter J. Holzer               | Think of it  |
| |_|_)| Technical University Vienna   | as evolution |
| | |  | Dept. for Real-Time Systems   | in action!   |
| __/  | hp@vmars.tuwien.ac.at         | Tony Rand    |
bglenden@mandrill.cv.nrao.edu (Brian Glendenning) (11/27/90)
A few days ago I posted a note asking:

>It is often stated that Fortran is better than C for numerical work. I
>am interested in compiling both the standard list of such arguments
>and what any counterarguments might be.

Here is my summary of the responses I received. If anyone wants to read the raw responses please email me and I will be happy to forward them (160k+!). Many thanks to all the respondents who so generously answered my query.

1. Pointer aliasing.

	SUBROUTINE FOO(A,B)          void foo(a,b)
	REAL A(*), B(*)              float *a, *b;

The Fortran standard requires that A and B be unaliased. In C a and b may well be aliased, and there is no portable way to say that they are unaliased. Compilers on serious vector machines (at least) will have ways of declaring unaliased pointers. The programmer can make a mistake doing this, but of course the programmer can also really pass aliased arrays in Fortran as well. Although I understand that "noalias" is hated by C purists, I wish that it had made it into the ANSI standard. (Maybe I just don't understand the arguments against it.)

2. C has no conformant arrays, i.e. you can't do the equivalent of:

	SUBROUTINE FOO(A, M, N)
	REAL A(M,N)

In C you either have to do your own indexing *(a + j*m +i) or have pointers to pointers *(*(a + i) + j). You can in either case use a macro expansion ARRAY(a,i,j) to take some of the sting out of the syntax.

3. In fortran functions like sin, cos, ** are intrinsic.

I think that ANSI C has a method by which compilers may make sin, cos etc. intrinsic, but I don't remember how it works. Maybe a wizardly followup could answer this question. A ** builtin _is_ handy.

4. Fortran has complex variables.

If you need to do a lot of complex arithmetic this might be a show stopper unless you have a good source of C complex arithmetic routines. Even then it is not going to be as convenient as in Fortran.

5. There are many numerical libraries written for Fortran.

This is likely not a fundamental problem on any modern system scientific programmers would use, e.g. either use f2c to convert them or link in the Fortran directly, but it does impose either some programmer-time overhead in the translation or make the linking process (at least) a bit non-portable.

6. C can ignore the placement of parentheses.

7. "C has too many system dependent aspects (e.g. round up or down when dividing negative integers)."

Both of these need to be understood by a scientific programmer so they can work around them.

8. C does everything in double.

Not (necessarily) with ANSI C.

======

I will not go into the reasons why C was claimed to be better than Fortran for numerical work (basically better data typing, control structures, dynamic memory, etc.).

_MY_ summary-of-the-summary is as follows:

I conclude that for scientific programming there are no overwhelming reasons not to use C unless you do a lot of complex arithmetic. Personally I don't consider "pointer aliasing defeats optimizers" to be too serious. Anyone who cares about speed is going to profile their code, and at that time it shouldn't be too difficult to tell the compiler what is not aliased in the "hot spot" routines.

Whether or not the switch to C is worthwhile will depend on whether the above quirks in C outweigh the benefits of having "more modern" data typing and control structures. Fortran is probably more portable (*) and will run faster without tweaking. On the other hand Fortran may be harder to maintain, and it is a poor fit to algorithms that are best expressed with types more involved than n-dimensional arrays.

(*) I realize that it is not that hard to write portable C. I think it's fair to say that it's easier to write portable Fortran for numeric work, though.

When and if we have fortran9? available the story may be different (but it's getting pretty late in the day for F9x to come riding over the horizon to save us).
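[The macro workaround mentioned under point 2 might look like the following sketch (the `ARRAY` macro here also carries the leading dimension n, just as a Fortran subroutine receives M and N; the `trace` function is an invented example of use):]

```c
/* Row-major indexing into an m-by-n array passed as a flat pointer.
 * The column count n must accompany the array, like the dimension
 * arguments of a Fortran conformant array. */
#define ARRAY(a, i, j, n)  ((a)[(i) * (n) + (j)])

/* Example use: sum of the diagonal of an n-by-n matrix. */
double trace(const double *a, int n)
{
    double t = 0.0;
    int i;
    for (i = 0; i < n; i++)
        t += ARRAY(a, i, i, n);
    return t;
}
```

The macro hides the `*(a + i*n + j)` arithmetic without the extra indirection of a pointer-to-pointer scheme, at the cost of threading the dimension through every call.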
Brian
--
Brian Glendenning - National Radio Astronomy Observatory
bglenden@nrao.edu bglenden@nrao.bitnet (804) 296-0286
olstad@uf.msc.umn.edu (Ken Olstad) (11/27/90)
paco@rice.edu (Paul Havlak) writes:
> To paraphrase Jack Schwartz:
> "We don't know what the numerical programming language of the year 2000
> will be called, but it will look like Fortran."

wvenable@spam.ua.oz.au (Bill Venables) writes:
> "We don't know what the numerical programming language of the year 2000
> will look like, but it will be called Fortran."

I've always heard it closer to the latter, but attributed to Seymour Cray. Does anybody here really know where this came from?

-Ken
olstad@msc.edu
hp@vmars.tuwien.ac.at (Peter Holzer) (11/27/90)
bglenden@mandrill.cv.nrao.edu (Brian Glendenning) writes:
>A few days ago I posted a note asking:
>>It is often stated that Fortran is better than C for numerical work. I
>>am interested in compiling both the standard list of such arguments
>>and what any counterarguments might be.
>Here is my summary of the responses I received.

A few remarks on the summary:

>2. C has no conformant arrays, i.e. you can't do the equivalent of:
>	SUBROUTINE FOO(A, M, N)
>	REAL A(M,N)
>In C you either have to do your own indexing *(a + j*m +i) or have
>pointers to pointers *(*(a + i) + j). You can in either case
>use a macro expansion ARRAY(a,i,j) to take some of the sting out of
>the syntax.

You would ordinarily write the pointers-to-pointers form as a[i][j], which is a rather nice syntax :-)

>3. In fortran functions like sin, cos, ** are intrinsic.
>I think that ANSI C has a method by which compilers may make sin, cos
>etc intrinsic, but I don't remember how it works. Maybe a wizardly
>followup could answer this question.

No special method. An ANSI C compiler may treat any function defined in the standard as intrinsic if the header file that defines the function has been included.

>A ** builtin _is_ handy.
>
>4. Fortran has complex variables.
>If you need to do a lot of complex arithmetic this might be a show
>stopper unless you have a good source of C complex arithmetic
>routines. Even then it is not going to be as convenient as in Fortran.

It would be nice sometimes. But other special types are also handy sometimes (e.g. very long integers), and you cannot have everything. If you want C++ you know where to find it.

>6. C can ignore the placement of parentheses

Not anymore. The standard says that the compiler may regroup expressions only if it does not change the result.
--
| _    | Peter J. Holzer               | Think of it  |
| |_|_)| Technical University Vienna   | as evolution |
| | |  | Dept. for Real-Time Systems   | in action!   |
| __/  | hp@vmars.tuwien.ac.at         | Tony Rand    |
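[The "good source of C complex arithmetic routines" that point 4 asks for is usually just a small struct plus helper functions, along these lines (type and function names invented for illustration):]

```c
/* A minimal complex type for C, standing in for Fortran's COMPLEX. */
typedef struct {
    double re, im;
} cplx;

/* (a+bi)(c+di) = (ac - bd) + (ad + bc)i */
cplx cmul(cplx x, cplx y)
{
    cplx z;
    z.re = x.re * y.re - x.im * y.im;
    z.im = x.re * y.im + x.im * y.re;
    return z;
}

cplx cadd(cplx x, cplx y)
{
    cplx z = { x.re + y.re, x.im + y.im };
    return z;
}
```

The inconvenience Brian notes is real: every `A*B + C` becomes `cadd(cmul(a, b), c)`, which is why Fortran keeps the edge here until C grows operator support for such types.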
henry@zoo.toronto.edu (Henry Spencer) (11/28/90)
In article <BGLENDEN.90Nov26162335@mandrill.cv.nrao.edu> bglenden@mandrill.cv.nrao.edu (Brian Glendenning) writes:
>3. In fortran functions like sin, cos, ** are intrinsic.
>
>I think that ANSI C has a method by which compilers may make sin, cos
>etc intrinsic, but I don't remember how it works...

It's really very simple: they are allowed to be intrinsic, essentially. There is no complexity or mystery. C and Fortran are no longer different in this regard, except insofar as the Fortran libraries are larger.

>6. C can ignore the placement of parentheses

Not any more. This too is an obsolete argument.

>7. "C has too many system dependent aspects (e.g. round up or down when
>   dividing negative integers)."

A lot of the purportedly "system dependent aspects" also exist in Fortran. This particular one doesn't, but that is a concession to efficiency in an area that rarely matters to programmers. What it is, in fact, is permission to the hardware to do things the wrong way because Fortran wants it that way!

People who use this argument are missing an important point: C may have system-dependent aspects, but well-crafted C programs do not. Those who believe that Fortran programs are automatically system-independent have not tried to port very many amateur-written Fortran programs. (Programs written by competent professionals avoid these problems regardless of the choice of language.)
--
"I'm not sure it's possible | Henry Spencer at U of Toronto Zoology
to explain how X works." | henry@zoo.toronto.edu utzoo!henry
khb@chiba.Eng.Sun.COM (chiba) (11/28/90)
In article <1990Nov27.175023.26039@zoo.toronto.edu> henry@zoo.toronto.edu (Henry Spencer) writes:
>>I think that ANSI C has a method by which compilers may make sin, cos
>>etc intrinsic, but I don't remember how it works...
>
>It's really very simple: they are allowed to be intrinsic, essentially.
>There is no complexity or mystery. C and Fortran are no longer different
>in this regard, except insofar as the Fortran libraries are larger.

But in ANSI C one is still stuck with errno, which makes computing things out of order, and/or at the same time, far more entertaining and challenging.
--
----------------------------------------------------------------
Keith H. Bierman kbierman@Eng.Sun.COM | khb@chiba.Eng.Sun.COM
SMI 2550 Garcia 12-33 | (415 336 2648)
Mountain View, CA 94043
ds@juniper09.cray.com (David Sielaff) (11/28/90)
In article <1990Nov27.175023.26039@zoo.toronto.edu> henry@zoo.toronto.edu (Henry Spencer) writes:
>In article <BGLENDEN.90Nov26162335@mandrill.cv.nrao.edu> bglenden@mandrill.cv.nrao.edu (Brian Glendenning) writes:
>>3. In fortran functions like sin, cos, ** are intrinsic.
>>
>>I think that ANSI C has a method by which compilers may make sin, cos
>>etc intrinsic, but I don't remember how it works...
>
>It's really very simple: they are allowed to be intrinsic, essentially.
>There is no complexity or mystery. C and Fortran are no longer different
>in this regard, except insofar as the Fortran libraries are larger.

At the risk of beating a dead horse, the only gotcha here is that if `static double sin(double);' is seen in a compilation unit, the compiler had better wait to find the definition in the compilation unit, and not treat calls to sin() as an intrinsic (after it has seen the declaration, anyway). This is a situation which does not arise in FORTRAN.

Dave Sielaff
Cray Research, Inc.
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (11/28/90)
Summary: Each of the disadvantages that Brian mentions for C as compared to Fortran should disappear within a few years without any further standardization. In the same areas, Fortran has many big disadvantages, each of which will be quite difficult to fix.

In article <BGLENDEN.90Nov26162335@mandrill.cv.nrao.edu> bglenden@mandrill.cv.nrao.edu (Brian Glendenning) writes:
> 1. Pointer aliasing.

If you allow interprocedural analysis, the compiler can detect the ``aliasing signature'' of every call to a function, and generate separate versions that run as fast as possible for each signature. If, in fact, no aliasing is present, the compiler will generate just one version. So this is only a temporary disadvantage---when C compilers get smarter, they'll be able to generate code just as fast as Fortran's (at the expense of somewhat slower compilation).

In my experience, pointer aliasing in C is an advantage. I don't have to, e.g., write two versions of a large-number addition routine, one where the output overlaps the input and one where it doesn't. The price I pay for this is that most current compilers only generate the slower version (so in speed-critical applications I'm now forced to write two versions, one using some unportable assertion like ``noalias''). But a smarter compiler could do the job. The extra compile time is worth the programming time saved.

This disadvantage of Fortran (that it doesn't allow aliasing at all) is much more difficult to fix. Even if a compiler appears that allows aliasing, code taking advantage of it won't be portable. So nobody'll bother.

Part of the flame war in comp.lang.misc is over my contention that the compiler can do a respectable job of aliasing detection *without* interprocedural analysis---by generating a small, static set of signatures and working with those. But this is a side issue.

> 2. C has no conformant arrays, i.e. you can't do the equivalent of:
> [ ... ]
> In C you either have to do your own indexing *(a + j*m +i) or have
> pointers to pointers *(*(a + i) + j).

I agree that this is a problem. However, the double-pointer solution usually allows faster access than the standard method of storing arrays, doesn't waste much memory, allows more flexibility, gets around the memory management problems of some small architectures, and lets you use a[i][j] for *(*(a + i) + j), all within current C. It is extremely difficult to do this efficiently in Fortran, and it will continue to be.

In some applications, though, dynamically sized flat multidimensional arrays may be better than multiple-pointer arrays. At least one very popular current compiler, namely gcc, lets you declare arrays the way you want.

> 3. In fortran functions like sin, cos, ** are intrinsic.

Fortran and ANSI C treat intrinsics the same way.

> 4. Fortran has complex variables.

Given the number of inlining compilers, this is at most an issue of what syntax you prefer. Many programmers don't like infix notation for operations that are generally simulated by the compiler rather than executed directly by the machine. For them, Fortran is at a disadvantage here. This is hardly a major issue, though.

> 5. There are many numerical libraries written for Fortran.

Which, given f2c, is no longer an issue.

> 6. C can ignore the placement of parentheses

ANSI clamps down on this.

> 7. "C has too many system dependent aspects (e.g. round up or down when
>    dividing negative integers)."

Huh? Fortran has system-dependent aspects too. Fortran does not have standard I/O powerful enough to, e.g., respectably checkpoint a computation. This is a huge disadvantage. It will not be fixed except by further standardization.

> 8. C does everything in double.

ANSI fixes this cleanly.

> (*) I realize that it is not that hard to write portable C. I think
>     it's fair to say that it's easier to write portable Fortran for
>     numeric work, though.

In my experience with numerical code, it has been extremely easy to write portable C. Numerical programs use input and output too.

> When and if we have fortran9? available the story may be different
> (but it's getting pretty late in the day for F9x to come riding over
> the horizon to save us).

Fortran 8X: the teenage mutant ninja offspring of Modula-2 and Ada. A few years ago I was in a room full of Fortran programmers listening to a presentation about the language. They hated almost everything about it. Now it's Fortran 9X and still somewhere over the horizon.

C is a de facto standard. ANSI C is a standard. So is Fortran 77. The new Fortran is not. C, ANSI C, and Fortran 77 are the language variants that will be widely used over the next several years, and those are the languages we should be comparing.

---Dan
ttw@lanl.gov (Tony Warnock) (11/28/90)
Re: >> 2. C has no conformant arrays, i.e. you can't do the equivalent of: > [ ... ] >> In C you either have to do your own indexing *(a + j*m +i) or have >> pointers to pointers *(*(a + i) + j). Dan Bernstein writes: >I agree that this is a problem. However, the double-pointer solution >usually allows faster access than the standard method of storing arrays, >doesn't waste much memory, allows more flexibility, gets around the >memory management problems of some small architectures, and lets you use >a[i][j] for *(*(a + i) + j), all within current C. It is extremely >difficult to do this efficiently in Fortran, and it will continue to be. What is it that Dan thinks is difficult? He uses only "this", so it is not clear what he has in mind. Fortran has allowed one to write a(i,j,k,l,m,n,p) for years. If done inside a loop where one of the subscripts is varying, there is only a single addition (if that) to do the indexing. If the subscripts are chosen at random, how does having a pointer help? The necessary offset must still be computed somehow.
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (11/28/90)
In article <7097@lanl.gov> ttw@lanl.gov (Tony Warnock) writes: > >I agree that this is a problem. However, the double-pointer solution > >usually allows faster access than the standard method of storing arrays, > >doesn't waste much memory, allows more flexibility, gets around the > >memory management problems of some small architectures, and lets you use > >a[i][j] for *(*(a + i) + j), all within current C. It is extremely > >difficult to do this efficiently in Fortran, and it will continue to be. > What is it that Dan thinks is difficult? He uses only "this" > so it is not clear what he has in mind. Fortran has allowed > one to write a(i,j,k,l,m,n,p) for years. Sorry. I said ``The double-pointer solution allows X, Y, Z, W, and A, all within current C. It is extremely difficult to do this efficiently in Fortran, and it will continue to be.'' By ``this'' I was referring to the double-pointer solution, not to any of its particular features. > If the subscripts > are chosed at random, how does having a pointer help: the > necessary offset must still be gotten somehow. Huh? A double-pointer array, as we were discussing, is a single-dimensional array of pointers to single-dimensional arrays. To access a random element of the array takes two additions and two memory references. In contrast, to access a random element of a flat array takes two additions, a multiplication, and a memory reference. On most widely used machines, a multiplication is quite a bit slower than a memory reference, particularly a cached memory reference. That's why double-pointer arrays are better than flat arrays for so many applications. Fortran can't deal with a double-pointer array efficiently because it doesn't have pointers. Simulating pointers efficiently is what I was calling difficult. Do you disagree? ---Dan
patrick@convex.COM (Patrick F. McGehearty) (11/29/90)
In article <17680:Nov2806:04:1090@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes: ...stuff deleted in interest of brevity... >Huh? A double-pointer array, as we were discussing, is a >single-dimensional array of pointers to single-dimensional arrays. To >access a random element of the array takes two additions and two memory >references. In contrast, to access a random element of a flat array >takes two additions, a multiplication, and a memory reference. On most >widely used machines, a multiplication is quite a bit slower than a >memory reference, particularly a cached memory reference. That's why >double-pointer arrays are better than flat arrays for so many >applications. > >Fortran can't deal with a double-pointer array efficiently because it >doesn't have pointers. Simulating pointers efficiently is what I was >calling difficult. Do you disagree? > >---Dan Actually, simulating pointers in Fortran is not very hard, just a bit ugly. First declare an array SPACE for all data that might be pointed to. Then access it as you please. For example: VAL = SPACE(IPTR(I)) I'm not claiming it's a wonderful approach, just doable. I did it 17 years ago in FTN66 because that was the only efficient compiler for the machine I was using. While a single access to a random element of a flat array takes two additions, one multiply and a memory reference, a successive access on a normal loop iteration takes only one addition and a memory reference. That is, if we save the address of a(i,j) in a register, then computing the address of either a(i+1,j) or a(i,j+1) takes only a single addition, assuming a rectangular array. The double-pointer array still takes two memory accesses and an addition. Also, on leading-edge machines, a multiplication is as fast as or faster than a memory reference, especially if you miss the cache. As killer micros continue to increase their clock rates, this phenomenon will spread.
However, double-pointer arrays still will be used for sparse data.
ttw@lanl.gov (Tony Warnock) (11/29/90)
Dan Bernstein answers [correctly]: >Sorry. I said ``The double-pointer solution allows X, Y, Z, W, and A, >all within current C. It is extremely difficult to do this efficiently >in Fortran, and it will continue to be.'' By ``this'' I was referring to >the double-pointer solution, not to any of its particular features. > >> If the subscripts >> are chosed at random, how does having a pointer help: the >> necessary offset must still be gotten somehow. > >Huh? A double-pointer array, as we were discussing, is a >single-dimensional array of pointers to single-dimensional arrays. To >access a random element of the array takes two additions and two memory >references. In contrast, to access a random element of a flat array >takes two additions, a multiplication, and a memory reference. On most >widely used machines, a multiplication is quite a bit slower than a >memory reference, particularly a cached memory reference. That's why >double-pointer arrays are better than flat arrays for so many >applications. Thanks, I didn't get the idea from your first posting. With respect to speed, almost all machines that I have used during the last 25 or so years have had faster multiplications than memory accesses. (I have been doing mostly scientific stuff.) For most scientific stuff, I think that the scales are tipped in favor of array-hood because of the rarity of accessing an arbitrary element. Most computations access an array along one of its dimensions, holding the others constant. In this case, there is only one addition and one memory access whatever the dimensionality of the array. There is also no storage overhead associated with keeping arrays of pointers. For multi-dimensional problems, this overhead could be quite large. Again, for scientific problems, there is usually no left over room as the entire memory will be taken up by arrays. It doesn't matter how much memory is available, my problems will easily fill it and still be at too coarse a granularity to be nice. 
Anyway, Dan's answer points out the performance differences in the array versus pointer access stuff. Personally I just don't use pointers much because my problems don't call for them. If I had to access multi-dimensional arrays in random fashion very often, the pointer solution might be acceptable. On a slightly different issue: often it is necessary to do row or column accesses (or whatever you call them in 3 or more dimensions) in the same code. How does one set up a pointer array for allowing easy access along each dimension (for example in a typical 5-dimensional array)? I use 5 dimensions as typical because a physical grid usually is indexed by x,y,z, and time indices and also by variable type. That is, each point in x,y,z,t space has an array of variables present (on bad days it has arrays of matrices).
tong@convex.csd.uwm.edu (Shuk Y Tong) (11/29/90)
In article <17680:Nov2806:04:1090@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes: >In article <7097@lanl.gov> ttw@lanl.gov (Tony Warnock) writes: >> >I agree that this is a problem. However, the double-pointer solution >> >a[i][j] for *(*(a + i) + j), all within current C. It is extremely >> > difficult to do in Fortran. >> Fortran has allowed >> one to write a(i,j,k,l,m,n,p) for years. > >Huh? A double-pointer array, as we were discussing, is a >single-dimensional array of pointers to single-dimensional arrays. To >access a random element of the array takes two additions and two memory >references. In contrast, to access a random element of a flat array >takes two additions, a multiplication, and a memory reference. On most >widely used machines, a multiplication is quite a bit slower than a >memory reference, particularly a cached memory reference. That's why >double-pointer arrays are better than flat arrays for so many >applications. > >Fortran can't deal with a double-pointer array efficiently because it >doesn't have pointers. Simulating pointers efficiently is what I was >calling difficult. Do you disagree? Yes. First of all, arrays are the most common data structure in scientific computing. By not providing a generic array type (i.e., one which can have a variable in its declaration in functions), C basically says it is not for numerical work. As for two-dimensional arrays, in C they have to be simulated by a pointer array, which is clumsy (it becomes clumsier for 6-dimensional arrays) and runs much SLOWER on vector machines. The reason is simply that the compiler is unable to tell where each element a[i][j] points to. It might be possible to put in tons of directives to tell the compiler what is going on, but why bother when you can do it by using a(i,j) alone? As to accessing a random element a(i,j), it is true that a(i,j) is slower than a[i][j], but the point is that in scientific programming, accessing a random element in a random place is RARE.
Even if it is not rare, it doesn't matter, because the CPU hog in scientific computing is LOOPS with ARRAYS in them. When a(i,j) is within a loop, its address usually can be gotten by an addition (a constant stride or an increment, depending on which index runs faster). Before ANSI C compilers became widely available, it was a nightmare to do numerical work in C: arbitrary evaluation order, mindless promotion, too many problems to name. ANSI C is much better in this regard, but still does not quite have the (numerical) capabilities of Fortran. Some of the common functions, like sqrt etc., are generated inline by a Fortran compiler, but there is no way of doing that in C. If complex math is desired, there is no end of trouble in C. A few dozen functions (and the compiler had better have inline capability, which is not presently true for most compilers) or macros need to be written. I am almost sure people would rather write c=c1/c2, c=r/c1, than CDIV(c,c1,c2), RCDIV(c,r,c1). The only possible candidate among languages that could replace Fortran is C++ (I doubt that too), but not C, because C itself will be dead when a true C++ compiler is written.
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (11/29/90)
In article <7200@lanl.gov> ttw@lanl.gov (Tony Warnock) writes: > With respect to speed, almost all machines that I have used during > the last 25 or so years have had faster multiplications than > memory accesses. Hmmm. What machines do you use? In my experience (mostly mainframes and supers, some micros, and only recently a bit with minis) local memory access is up to several times as fast as integer multiplication. (I don't like this situation; converting to floating point just to multiply quickly on a Cray seems rather silly.) > Most computations access an array along one of its > dimensions, holding the others constant. Yes, but just because you use pointers doesn't mean you have to give up the advantages of flat arrays. > There is also no storage overhead > associated with keeping arrays of pointers. For multi-dimensional > problems, this overhead could be quite large. I assume you meant ``there is storage overhead...'' That's true, but it's really not a problem. If you have a 5 by 5 by 2 by 3 by 15 array, can you begrudge space for thirty pointers so that you save 5% of your computation time? Thought so. Even if you set up pointers within pointers for every dimension, you can do it so that the pointer space is 1/N of the array space, where N is the widest single dimension. > Anyway, Dan's answer points out the performance differences in the > array versus pointer access stuff. Personally I just don't user > pointers much because my problems don't call for them. If I had to > access multi-dimensional arrays in random fashion very often, the > pointer solution might be acceptable. But you really do access random spots in arrays; it's a rare problem that always starts from the top left corner of every matrix. Take a typical pivot-based algorithm, for instance: you're dealing with essentially random rows at each step. > How does one set up a pointer array > for allowing easy access along each dimension (for example in a > typical 5-dimensional array)? 
You keep pointers to the top of each 4-dimensional hyperplane in the array. You can get some of the benefit of this from storing integer indexes, but you still lose at least an addition per array sweep, more for higher-dimensional arrays if you store more pointers. Since the sweeps run 20x faster on (e.g.) a Convex, the extra computation outside each sweep becomes noticeable. ---Dan
sef@kithrup.COM (Sean Eric Fagan) (11/29/90)
In article <2392:Nov2902:59:0590@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes: >In article <7200@lanl.gov> ttw@lanl.gov (Tony Warnock) writes: >> With respect to speed, almost all machines that I have used during >> the last 25 or so years have had faster multiplications than >> memory accesses. >Hmmm. What machines do you use? In my experience (mostly mainframes and >supers, some micros, and only recently a bit with minis) local memory >access is up to several times as fast as integer multiplication. On both Cybers and Crays, multiplication can easily take fewer cycles than accessing memory. (No cache, remember?) But most machines aren't like that, I believe. >(I >don't like this situation; converting to floating point just to multiply >quickly on a Cray seems rather silly.) Uhm... you don't have to, I don't think. A Cyber had only one type of multiply instruction, but if the exponent were 0, it did an integer multiplication. I believe Crays do the same thing. -- -----------------+ Sean Eric Fagan | "That's my weakness: vulnerable poultry." sef@kithrup.COM | -----------------+ Any opinions expressed are mine, shared with none.
dik@cwi.nl (Dik T. Winter) (11/29/90)
In article <1990Nov29.040910.7400@kithrup.COM> sef@kithrup.COM (Sean Eric Fagan) writes: > In article <2392:Nov2902:59:0590@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes: > >(I > >don't like this situation; converting to floating point just to multiply > >quickly on a Cray seems rather silly.) > > Uhm... you don't have to, I don't think. A Cyber had only one type of > multiply instruction, but if the exponent were 0, it did an integer > multiplication. I believe Cray's do the same thing. > No. The Cybers do indeed give the lower half of the product of two integers (and the other multiply instruction gives the upper part, although that is not documented). The Cray returns the upper part if the two exponents are zero. But the Cray has a 24x24->24 bit integer multiply, and as a compiler option you can use 24-bit integers. -- dik t. winter, cwi, amsterdam, nederland dik@cwi.nl
jlg@lanl.gov (Jim Giles) (11/30/90)
From article <1990Nov29.040910.7400@kithrup.COM>, by sef@kithrup.COM (Sean Eric Fagan): > [...] > Uhm... you don't have to, I don't think. A Cyber had only one type of > multiply instruction, but if the exponent were 0, it did an integer > multiplication. I believe Cray's do the same thing. No. The Crays have an integer multiply unit for addresses. This mult takes 4 clocks. Memory access costs 14 or 17 clocks depending on the model of machine you have. J. Giles
ttw@lanl.gov (Tony Warnock) (11/30/90)
Dan Bernstein asks: RE: >In article <7200@lanl.gov> ttw@lanl.gov (Tony Warnock) writes: >> With respect to speed, almost all machines that I have >> used during the last 25 or so years have had faster >> multiplications than memory accesses. >Hmmm. What machines do you use? In my experience (mostly mainframes and >supers, some micros, and only recently a bit with minis) local memory >access is up to several times as fast as integer multiplication. (I >don't like this situation; converting to floating point just to multiply >quickly on a Cray seems rather silly.)

    Model      Multiplication Time    Memory Latency
    YMP          5 clock periods      18 clock periods
    XMP          4 clock periods      14 clock periods
    CRAY-1       6 clock periods      11 clock periods
    Compaq      25 clock periods       4 clock periods
    Zenith     120 clock periods      30 clock periods

The times on the PC-clones are approximate, depending on the type of variables being accessed and the sizes of the indices. Most of my work has been on CRAY-type computers. It is always a win to avoid memory accesses. On the PC-type machines, memory access is faster, but in the particular cases that I have been programming, one does many more constant-stride accesses than random accesses, usually in a ratio of many thousands to one. For an LU decomposition with partial pivoting, one does roughly N/3 constant-stride memory accesses for each "random" access. For small N, say 100 by 100 matrices or so, one would do about 30 strength-reduced operations for each memory access. For medium (1000 by 1000) problems, the ratio is about 300, and for large (10000 by 10000) it is about 30000.
ttw@lanl.gov (Tony Warnock) (11/30/90)
In one of the smaller problems that I had to run, I found that I needed two large arrays declared as follows: COMPLEX a(3,3,4,12,12,12,32), b(3,4,12,12,12,32) These are BIG suckers for a small (CRAY-1) machine. Actually the first array was dimensioned as a(3,2,4,12,12,12,32) using a standard trick of the underlying physics. It was necessary to access the arrays letting each of the last four dimensions be the innermost loop. (I used more programming tricks to eliminate memory-bank conflicts.) Every few hundred steps, some of the a's and b's were treated specially as single 3x3 (realized as 3x2) and 4x3 complex matrices. There is not much room left over for the pointer arrays to get at these matrices.
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (11/30/90)
In article <7318@lanl.gov> jlg@lanl.gov (Jim Giles) writes: > The Crays have an integer multiply unit for addresses. This mult > takes 4 clocks. But isn't that only for the 24-bit integer? If you want to multiply full words you have to (internally) convert to floating point, multiply, and convert back. I have dozens of machines that can handle a 16MB computation; I'm not going to bother with a Cray for those. The biggest advantage of the Cray line (particularly the Cray-2) is its huge address space. So what's the actual time for multiplying integers? ---Dan
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (11/30/90)
Several of you have been missing the crucial point. Say there's a 300 to 1 ratio of steps through a matrix to random jumps. On a Convex or Cray or similar vector computer, those 300 steps will run 20 times faster. Suddenly it's just a 15-to-1 ratio, and a slow instruction outside the loop begins to compete in total runtime with a fast floating-point multiplication inside the loop. Anyone who doesn't think shaving a day or two off a two-week computation is worthwhile shouldn't be talking about efficiency. In article <7339@lanl.gov> ttw@lanl.gov (Tony Warnock) writes:

> Model      Multiplication Time    Memory Latency
> YMP          5 clock periods      18 clock periods
> XMP          4 clock periods      14 clock periods
> CRAY-1       6 clock periods      11 clock periods

Um, I don't believe those numbers. Floating-point multiplications and 24-bit multiplications might run that fast, but 32-bit multiplications? Do all your matrices really fit in 16MB?

> Compaq      25 clock periods       4 clock periods

Well, that is a little extreme; I was talking about real computers.

> For an LU decomposition with partial pivoting, one does roughly N/3
> constant-stride memory accesses for each "random" access. For small N,
> say 100 by 100 matrices or so, one would do about 30 strength-reduced
> operations for each memory access. For medium (1000 by 1000) problems,
> the ratio is about 300 and for large (10000 by 10000) it is about 30000.

And divide those ratios by 20 for vectorization. 1.5, 15, and 150. Hmmm. ---Dan
mcdonald@aries.scs.uiuc.edu (Doug McDonald) (11/30/90)
In article <7339@lanl.gov> ttw@lanl.gov (Tony Warnock) writes:

> Model      Multiplication Time    Memory Latency
> YMP          5 clock periods      18 clock periods
> XMP          4 clock periods      14 clock periods
> CRAY-1       6 clock periods      11 clock periods
> Compaq      25 clock periods       4 clock periods
> Zenith     120 clock periods      30 clock periods
>
> The times on the PC-clones are approximate depending on the type
> of variables being accessed and the sizes of the indices.

I don't know what kind of Compaq or Zenith you are using, but on a 25 or 33 MHz 386 or 486 machine with a 3167 or 4167 floating-point unit the memory latency and FPU multiplication time are roughly equal. The manuals of the compilers I use say that memory accesses slow computations down by up to a factor of two compared to operations on the FPU stack. Doug McDonald
ttw@lanl.gov (Tony Warnock) (11/30/90)
>From: brnstnd@kramden.acf.nyu.edu (Dan Bernstein) > >In article <7318@lanl.gov> jlg@lanl.gov (Jim Giles) writes: >> The Crays have an integer multiply unit for addresses. This mult >> takes 4 clocks. > >But isn't that only for the 24-bit integer? If you want to multiply full >words you have to (internally) convert to floating point, multiply, and >convert back. > >I have dozens of machines that can handle a 16MB computation; I'm not >going to bother with a Cray for those. The biggest advantage of the Cray >line (particularly the Cray-2) is its huge address space. > >So what's the actual time for multiplying integers? >---Dan The time for multiplying 32-bit integers on the YMP is 5 clock periods. Normally YMP addresses are interpreted as 64-bit words, not as bytes. On the previous models of CRAYs, 24 bits are used to address 16 Mwords, not Mbytes. (This saves 3 wires per address data path.) As most work on CRAYs is done on words (numerical) or packed character strings, multiplication of longer integers is not provided for in the hardware. Personally I would like to have long integer support. The CRAY architecture supports a somewhat strange multiplication method which will yield a 48-bit product if the input words have total length less than 48 bits. That is, one can multiply two 24-bit quantities, a 16-bit and a 32-bit quantity, a 13-bit and a 35-bit quantity, or shorter things. This operation takes two shifts and one multiply. The shifts may be overlapped, so the time is 3 clocks for the two shifts and 7 clocks for the multiply if the shifts are known, or 4 clocks for the shifts and 7 clocks for the multiply if the shifts are variable. It's a bit of a pain to program, but the compiler does it for us. Another form of integer multiplication is used sometimes: the integers are converted to floating, then multiplied, and the result converted back to integer. This method fails if an intermediate value exceeds 46 bits of significance.
The time is 2 clocks for producing a "magic" constant, 3 clocks each for two integer adds (reduces to 4 total because of pipelining), 6 clocks each for two floating adds (reduces to 6 because of pipelining overlap with the integer add), 7 clocks for the floating multiply, 6 clocks for another floating add, and 6 clocks for another integer multiply. Total is 29 clocks if no other operations may be pipelined with these operations. If the quantities being multiplied are addresses, some of the above is eliminated, bringing the result down to 20 clocks. Still this is not as good as the floating point performance. All of the above may be vectorized which would result in 3 clocks per result in vector mode.
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (12/01/90)
In article <1990Nov30.145649.17688@ux1.cso.uiuc.edu> mcdonald@aries.scs.uiuc.edu (Doug McDonald) writes: > I don't know what kind of Compaq or Zenith you are using, but > on a 25 or 33 MHz 386 or 486 machine with a 3167 or 4167 floating > point unit the memory latency and FPU multiplication time are > roughly equal. We are not talking about floating point. ---Dan
mroussel@alchemy.chem.utoronto.ca (Marc Roussel) (12/01/90)
I think I'm pretty typical of Fortran users. I know Fortran and a smattering of other languages. I use Fortran mostly because a) I can write code quickly in this language. b) Compilers exist for it on any machine I'm ever likely to use. I don't want to use C. From what little I've been exposed to it, I don't like it. C has a nasty syntax which I don't have time to learn. Now everybody who's been trying to convince scientific programmers like me to learn C, go away! Maybe you have the time to waste, but I don't. Every algorithm I've ever used is expressible in Fortran. (I've even written algorithms that create trees in Fortran using no extensions other than recursion... That's right, no pointers, just good old arrays.) If ever I run across a problem that I can't code in Fortran, then I'll consider other languages. When the time comes, I may even ask some of you what language you think is appropriate. Until then, I don't want your silly-ass opinion. If you want to compare languages, do it on comp.lang.misc where someone cares (notice the followup-to line). Look, if someone out there can suggest a computer language that's easy to learn and code in and that has the sort of widespread base that Fortran does, I'll listen. C just isn't for scientific programmers like me so it's no use trying to convince me (and probably 90% of the rest of the readership of this group) otherwise. No one sensible would say that Fortran is the best language for everything, but it's a more than adequate language for most scientific computing. While I'm at it, I sincerely hope that some cleaner language like Turing wipes C off the face of this planet. I've about had it with all this "my language is better than yours" garbage from the C folk and can wish nothing for them other than extinction. Marc R. Roussel mroussel@alchemy.chem.utoronto.ca
hwr@pilhuhn.uucp (Heiko W.Rupp) (12/01/90)
Organization: Not an Organization In article <7339@lanl.gov>, Tony Warnock writes:

> Model      Multiplication Time    Memory Latency
> YMP          5 clock periods      18 clock periods
> XMP          4 clock periods      14 clock periods
> CRAY-1       6 clock periods      11 clock periods
> Compaq      25 clock periods       4 clock periods
> Zenith     120 clock periods      30 clock periods

These data also depend on clock speed! But has anyone seen a self-vectorizing compiler for C, as there are many for Fortran? -Heiko -- O|O Heiko W.Rupp hwr%pilhuhn@bagsend.ka.sub.org | Gerwigstr.5 | There is someone in my head, but it's not me U FRG-7500 Karlsruhe 1 | - Pink Floyd Voice : + 49 721 693642| Do You know where Your towel is ???
salomon@ccu.umanitoba.ca (Dan Salomon) (12/01/90)
In article <9458:Nov2721:51:5590@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes: > >> 5. There are many numerical libraries written for Fortran. > >Which, given f2c, is no longer an issue. Does f2c handle conformant arrays properly? If so, is the code that it generates maintainable? I remember once trying to translate a FORTRAN fast-fourier analysis procedure into PL/I. Sounds like a piece of cake, right? The problem was that the FORTRAN procedure accepted arrays with not only any size of dimensions, but also with any number of dimensions. I devised a way of doing it, but my method crashed the version of the PL/I compiler that IBM was distributing at the time (circa 1972), got stuck in the operating system, and the O/S had to be brought down by the operators to get it deleted. -- Dan Salomon -- salomon@ccu.UManitoba.CA Dept. of Computer Science / University of Manitoba Winnipeg, Manitoba, Canada R3T 2N2 / (204) 275-6682
ds@juniper09.cray.com (David Sielaff) (12/01/90)
In article <6690:Nov3006:15:3890@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes: >Several of you have been missing the crucial point. > >Say there's a 300 to 1 ratio of steps through a matrix to random jumps. >On a Convex or Cray or similar vector computer, those 300 steps will run >20 times faster. Suddenly it's just a 15-1 ratio, and a slow instruction >outside the loop begins to compete in total runtime with a fast >floating-point multiplication inside the loop. > >Anyone who doesn't think shaving a day or two off a two-week computation >is worthwhile shouldn't be talking about efficiency. > >In article <7339@lanl.gov> ttw@lanl.gov (Tony Warnock) writes: >> Model Multiplication Time Memory Latency >> YMP 5 clock periods 18 clock periods >> XMP 4 clock periods 14 clock periods >> CRAY-1 6 clock periods 11 clock periods > >Um, I don't believe those numbers. Floating-point multiplications and >24-bit multiplications might run that fast, but 32-bit multiplications? >Do all your matrices really fit in 16MB? On late-model X-MP's and all Y-MP's, those times are correct for 32 bit integer multiplications. The change (from 24 to 32 bit multiplies) corresponds to when the address space on the Cray 1/X-MP/Y-MP line was bumped up from 24 bits to 32 bits (it was always 32 bits on a Cray-2). But this certainly seems to be getting an awfully long way from C ;-) Dave Sielaff Cray Research, Inc.
henry@zoo.toronto.edu (Henry Spencer) (12/02/90)
In article <1990Nov30.183032.5420@ccu.umanitoba.ca> salomon@ccu.umanitoba.ca (Dan Salomon) writes:
>>> 5. There are many numerical libraries written for Fortran.
>>Which, given f2c, is no longer an issue.
>Does f2c handle conformant arrays properly?

I don't recall the fine points, but f2c is a full Fortran 77
"compiler"; if conformant arrays are legal, portable F77, f2c does
them.

>If so, is the code that it generates maintainable?

f2c deliberately does not try to generate maintainable code.  That is
hard, and nobody has yet produced a program that can do it without
human tinkering with the output.  In one sense, f2c really ought to be
cc2fc -- its primary mission is to be a pre-pass to turn a C compiler
into a Fortran compiler.  Code maintenance is still better done on the
Fortran.
-- 
"The average pointer, statistically,    |Henry Spencer at U of Toronto Zoology
points somewhere in X." -Hugh Redelmeier| henry@zoo.toronto.edu   utzoo!henry
john@newave.UUCP (John A. Weeks III) (12/02/90)
In a somewhat long discussion...

> > > It is often stated that Fortran is better than C for numerical work.
>
> > It may not be true any more. A friend of mine brought a little Fortran
> > program (it is two big DO loops with some intrinsic function calculation
> > in the loop) and the C translation of the Fortran program. We compiled
> > the two programs on an IBM RISC System 6000/530 with xlc and xlf. To my
> > surprise, the executable from C is faster than the executable from
> > Fortran by a few percent.

I hope that numerical quality is not always measured by speed.  If my
recent experience means anything, I think that Fortran implementations
generally have better-behaved numerical libraries, documented
behavior, and better error handling.  And since FORTRAN is noted for
numerical work, someone usually tests the floating-point stuff before
the compiler is shipped.  I have encountered C compilers that made me
wonder if + and - were even tested.  Of course, I remember things that
don't work longer than I remember things that work flawlessly 8-).

The speed that FORTRAN is noted for can be found on the big IBM boat
anchors.  COBOL and FORTRAN rule on those machines for speed
(discounting assembler), with C being something like 10 times slower
when performing similar tasks (based on benchmarks with the IBM & SAS
compilers).  C-style I/O is especially slow on IBM mainframes because
those machines work best with record I/O and C programmers tend to
think in streams of characters.

-john-

-- 
===============================================================================
John A. Weeks III                (612) 942-6969              john@newave.mn.org
NeWave Communications           ...uunet!rosevax!tcnet!wd0gol!newave!john
===============================================================================
ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (12/03/90)
In article <1990Nov30.183032.5420@ccu.umanitoba.ca>, salomon@ccu.umanitoba.ca (Dan Salomon) writes:
> I remember once trying to translate a FORTRAN fast-Fourier analysis
> procedure into PL/I.  Sounds like a piece of cake, right?  The problem
> was that the FORTRAN procedure accepted arrays with not only any size
> of dimensions, but also with any number of dimensions.

I don't recall that being legal in F77.  To be sure, I've met
compilers that didn't bother to check, and I think there was a special
case for DATA statements, but argument passing?
-- 
I am not now and never have been a member of Mensa.
		-- Ariadne.
salomon@ccu.umanitoba.ca (Dan Salomon) (12/04/90)
In article <1990Dec1.232408.13365@zoo.toronto.edu> henry@zoo.toronto.edu (Henry Spencer) writes:
> ...  In one sense, f2c really ought to be cc2fc -- its primary
>mission is to be a pre-pass to turn a C compiler into a Fortran compiler.
>Code maintenance is still better done on the Fortran.

If you have to maintain the numerical libraries in FORTRAN, then you
cannot really say that you are doing your numerical work in C.
-- 
Dan Salomon -- salomon@ccu.UManitoba.CA
               Dept. of Computer Science / University of Manitoba
               Winnipeg, Manitoba, Canada  R3T 2N2 / (204) 275-6682
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (12/04/90)
In article <1990Dec4.011220.9302@ccu.umanitoba.ca> salomon@ccu.umanitoba.ca (Dan Salomon) writes:
> If you have to maintain the numerical libraries in FORTRAN, then you
> cannot really say that you are doing your numerical work in C.

One of the great advantages of the classical Fortran numerical
libraries is that they are so reliable that the code never has to be
maintained.  A library is a library is a library.

---Dan
john@ghostwheel.unm.edu (John Prentice) (12/05/90)
In article <26434:Dec404:42:4990@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>One of the great advantages of the classical Fortran numerical libraries
>is that they are so reliable that the code never has to be maintained. A
>library is a library is a library.

I hate to do it, but I have to at least qualify this point.  I have a
good friend who is in charge of maintaining the SLATEC and IMSL
libraries at the Air Force Weapons Laboratory (Cray 2, IBM 3070, and a
bunch of workstation systems).  SLATEC is a joint project by Sandia,
Livermore, Los Alamos, and the Air Force Weapons Laboratory, and is a
large mathematical library, somewhat like IMSL but free.  The software
there is easily as good as what I have seen in other libraries like
IMSL or NAG.

However, I am digressing.  His experience with these well-worn and
tested libraries is that they quite often will not compile on new
machines and will often fail the quick checks until someone goes in
and makes minor changes to the code.  The changes are usually minor;
more often than not it is just a question of changing some floating
point test for small numbers, etc.  However, there have also been
cases where the answers are just plain wrong.

So, on a system where a library has been resident for a long period,
there is a good chance it is reliable (though it is not an absolute
certainty).  However, from what I have seen and heard of these
libraries, they are not easily transported to new systems, and
unfortunately in science new systems are always happening.  Perhaps
someone involved with SLATEC, IMSL, NAG, etc. could comment on all
this.

So, I basically agree with Dan's comment, but it is perhaps not quite
as simple as his comment suggests.

John Prentice
Amparo Corporation
john@unmfys.unm.edu
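The "floating point test for small numbers" that so often needs retuning is typically a machine-epsilon probe of roughly the following shape; older libraries often hard-coded the constant per machine, which is exactly what breaks on a new one. This is a hedged sketch, not SLATEC's or IMSL's actual code:

```c
#include <float.h>

/* Probe the machine epsilon at run time: halve eps until adding
   half of it to 1.0 no longer changes the stored double result.
   Libraries that instead hard-code this constant must be retuned
   for every new floating-point format they meet. */
double machine_eps(void)
{
    double eps = 1.0;
    while (1.0 + eps / 2.0 > 1.0)
        eps /= 2.0;
    return eps;
}
```

On an IEEE double this returns 2^-52 (the value `<float.h>` exposes as `DBL_EPSILON`); on the non-IEEE machines of the day it returned whatever the local format required, which is the whole point of probing rather than hard-coding.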
gwyn@smoke.brl.mil (Doug Gwyn) (12/05/90)
In article <184a59d8.ARN0ebd@pilhuhn.uucp> hwr%pilhuhn@bagsend.ka.sub.org writes:
>But did someone see a self vectorizing compiler for C as there are many
>for Fortran ?????

There are vectorizing C compilers, particularly on large machines, but
if you're interested in comparisons you need to appreciate that
Fortran has essentially only one form of data structuring, the array,
while in C arrays are much less commonly used, other more appropriate
data structures taking their place.  Thus, while vectorization is
important for many Fortran applications, the same optimization is of
much less importance in C.  There are numerous other forms of
optimization that can be (and often are) applied in the course of
generating code from C programs.

As others have mentioned, the semantics of pointers raise more severe
aliasing concerns than apply to Fortran, so in some cases C code must
be less highly optimized than corresponding Fortran code.  This is
traded off against more freedom for the C programmer, since it is the
Fortran programmer's responsibility not to alias function arguments,
whereas C permits aliasing (which can at times be very useful).

Anyway, discussions about code optimization should have little to do
with selection of a programming language.
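The aliasing point can be made concrete. In the sketch below (names illustrative), a C compiler must assume `y` and `x` may overlap, because C callers are allowed to pass overlapping pointers, so every store to `y[i]` can potentially change a later `x[j]`; a Fortran compiler may assume dummy arguments do not alias and vectorize freely:

```c
/* y[i] += a * x[i].  A legal C caller may pass overlapping y and x,
   so the compiler must assume each store to y can modify x, which
   inhibits vectorization.  The equivalent Fortran call with aliased
   arguments would violate the standard, so xlf-style compilers can
   vectorize the loop without such checks. */
void axpy(double *y, const double *x, double a, int n)
{
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];
}
```

The call `axpy(v, v + 1, 2.0, n)` is the kind of aliasing C permits and Fortran forbids; it is also why 1990-era C had no portable way to promise "these don't overlap" to the optimizer.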
shenkin@cunixf.cc.columbia.edu (Peter S. Shenkin) (12/05/90)
In article <1990Dec4.190148.4026@ariel.unm.edu> john@ghostwheel.unm.edu (John Prentice) writes:
>In article <26434:Dec404:42:4990@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>>One of the great advantages of the classical Fortran numerical libraries
>>is that they are so reliable that the code never has to be maintained. A
>>library is a library is a library.
>
>I hate to do it, but I have to at least qualify this point.  I have a good
>friend who is in charge of maintaining the SLATEC and IMSL libraries at the
>Air Force Weapons Laboratory....  His experience with these well worn and
>tested libraries is that they quite often will not compile on new machines
>and will often fail the quick checks until someone goes in and makes minor
>changes to the code.  Now, the changes are usually minor, more often than
>not it is just a question of changing some floating point test for small
>numbers, etc...  However, there have also been cases where the answers are
>just plain wrong....

I have had the same experience with the Harwell subroutines....

-P.
************************f*u*cn*rd*ths*u*cn*gt*a*gd*jb**************************
Peter S. Shenkin, Department of Chemistry, Barnard College, New York, NY 10027
(212)854-1418  shenkin@cunixf.cc.columbia.edu(Internet)  shenkin@cunixf(Bitnet)
***"In scenic New York... where the third world is only a subway ride away."***
dik@cwi.nl (Dik T. Winter) (12/05/90)
In article <1990Dec4.190148.4026@ariel.unm.edu> john@ghostwheel.unm.edu (John Prentice) writes:
> However, from what I have seen and heard of
> these libraries, they are not easily transported to new systems and
> unfortunately in science, new systems are always happening. Perhaps
> someone involved with SLATEC, IMSL, NAG, etc... could comment on all this.

I am not involved with those, but I have some experience porting
stuff.  My stuff consists of two layers; both layers are in Fortran
and in C.  I ported (part of) it to some 30 platforms.  Everything
goes through the C preprocessor because of machine peculiarities (as
some parts are written in assembler this is necessary).  However, the
C preprocessor is also used to avoid compiler bugs.  There were 19
platforms that had a Fortran compiler when I used the system; in 6
cases I needed conditionals because of bugs.  I am now porting the
highest level in C.  I did port that to two platforms, and on one of
those I needed to avoid a compiler bug.  So much for porting an (in
principle) perfectly portable package.  And this package contains only
a fraction of the code that is in the libraries mentioned above.

So what is involved:

1.  Many compilers have bugs that you may encounter, especially if the
    code is large.  (My favourite is the 68000 compiler that generates
    VAX assembler for some legal constructs.  But I have also seen
    compilers generate non-existing, apparently legal, instructions.)

2.  Do not rely on properties that seem to be intuitive.  See the
    ongoing debate on pointers in C (especially NULL).  But also do
    not expect that the construct 'sqrt(1-sin(x))' is valid, because
    it will trap on some machines, etc.  (To quash any arguments:
    within your precision constraints it is perfectly possible that
    sin(x) > 1.)

3.  Especially if you have a large body of code, be prepared to
    customize it for every platform you may encounter.  Allow the use
    of preprocessors to do this (the C preprocessor, m4, your own
    special-built macro processor, etc.).
I know that NAG uses approach 3 (and the last count I heard was over
80 platforms).  They use generic sources and a preprocessor that
customizes them for the platform required.
-- 
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl
scs@adam.mit.edu (Steve Summit) (12/05/90)
Are you all still arguing about C and Fortran?  The discussion has
gone from responding to a few inappropriate criticisms of C (which was
reasonable) to trying to downplay examples in which Fortran might well
be superior (which is silly) to discussing multiplication vs. memory
access time (which belongs in comp.arch, if anywhere).

In article <1990Dec4.011220.9302@ccu.umanitoba.ca> salomon@ccu.umanitoba.ca (Dan Salomon) writes:
>In article <1990Dec1.232408.13365@zoo.toronto.edu> henry@zoo.toronto.edu (Henry Spencer) writes:
>>Code maintenance is still better done on the Fortran.
>
>If you have to maintain the numerical libraries in FORTRAN, then you
>cannot really say that you are doing your numerical work in C.

Henry isn't saying you should do your numerical work in C, nor am I.
If your data structures are (as has recently been asserted) all
arrays, and you don't mind a few of Fortran's other weaknesses, use it
in good health.  C is better than a lot of people think it is at
numerical work, but it certainly isn't perfect, and C apologists don't
need to get up in arms when someone proposes an example which Fortran
can probably handle better.  Numerical work has never been C's claim
to fame, anyway.

                                            Steve Summit
                                            scs@adam.mit.edu
dik@cwi.nl (Dik T. Winter) (12/05/90)
(Yes, I can also play with Followup-To.  Apparently Doug Gwyn does not
want to have a discussion about the merits of C versus Fortran in the
C group.  This is the second time such a thing has happened to my
followup, but I gave an answer to the following remark; you can find
it in comp.lang.fortran.)

In article <14651@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
> Anyway, discussions about code optimization should have little to do
> with selection of a programming language.
-- 
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl
sarima@tdatirv.UUCP (Stanley Friesen) (12/06/90)
In article <2392:Nov2902:59:0590@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>In article <7200@lanl.gov> ttw@lanl.gov (Tony Warnock) writes:
>> There is also no storage overhead
>> associated with keeping arrays of pointers. For multi-dimensional
>> problems, this overhead could be quite large.
>... That's true, but
>it's really not a problem. If you have a 5 by 5 by 2 by 3 by 15 array,
>can you begrudge space for thirty pointers so that you save 5% of your
>pointers

Except for one thing: in scientific computation typical dimensions
would be more like 1000 by 1000 by 1000 by 1000 by 50, which requires
a great deal more than a mere thirty pointers.  [In biological work it
may be even larger, though in that case there are usually fewer
dimensions.]

Really, I have done some *small* studies with hundreds of
rows/columns.  [In fact my data set was so small that I failed to
produce usable results; I actually needed something like 5 times as
many 'rows'.]
-- 
---------------
uunet!tdatirv!sarima  (Stanley Friesen)
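The overhead at issue grows with the product of the leading dimensions: a pointer-tree (Iliffe-vector) layout needs one pointer per sub-array at every level, not just at the top. A sketch of the count, assuming that representation (the function name is illustrative):

```c
#include <stddef.h>

/* Count the pointers a pointer-tree (Iliffe-vector) array needs:
   dims[0] pointers at the top level, dims[0]*dims[1] at the next,
   and so on; only the final dimension holds actual data, so the
   last extent contributes no pointers. */
size_t pointer_overhead(const size_t *dims, int ndims)
{
    size_t total = 0, prod = 1;
    for (int i = 0; i < ndims - 1; i++) {
        prod *= dims[i];         /* sub-arrays at this level */
        total += prod;           /* one pointer each */
    }
    return total;
}
```

For the 5 by 5 by 2 by 3 by 15 example this comes to 230 pointers, small change; for 1000 by 1000 by 1000 by 1000 by 50 the last level alone needs 10^12 pointers, which is Friesen's point.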
rob@mowgli.eng.ohio-state.edu (Rob Carriere) (12/06/90)
Would everybody please remove the word `(SUMMARY)' from the subject line? Thanks. SR "We now rejoin the following discussion, already in progress" ---
mat@mole-end.UUCP (Mark A Terribile) (12/09/90)
> > ... To access a random element of the array takes two additions and two
> >memory references. In contrast, to access a random element of a flat array
> >takes two additions, a multiplication, and a memory reference. On most
> >widely used machines, a multiplication is quite a bit slower than a
> >memory reference, particularly a cached memory reference. ...

> With respect to speed, almost all machines that I have used during
> the last 25 or so years have had faster multiplications than
> memory accesses. (I have been doing mostly scientific stuff.) ...

Please note that superscalar machines may change this again!  If a
superscalar machine has fewer multiplication resources than
instruction pipelines, the memory lookup may once again win, depending
upon how much other stuff is done to the datum being looked up.

Superscalar may not be a factor for the very fastest machines
(immersion cooled, GaAs, ECL, ballistic semiconductor, whatever) but
it will probably become more important for engineering workstations
and for the second tier of fast machines (mini-supers, whatever).
But then, I could be wrong ...

Say, whatever happened to the Numerical C Extensions Group?
-- 
(This man's opinions are his own.)  From mole-end  Mark Terribile
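The two access patterns being compared look roughly like this in C (a sketch; function names are illustrative):

```c
#include <stddef.h>

/* Flat (Fortran-style) storage: index arithmetic costs a multiply
   plus an add, followed by one memory reference. */
double flat_get(const double *a, int ncols, int i, int j)
{
    return a[(size_t)i * ncols + j];
}

/* Pointer-vector (Iliffe) storage: no multiply, but an extra memory
   reference to fetch the row pointer first.  Which form wins is the
   multiply-vs-memory-latency trade-off under discussion. */
double iliffe_get(double *const *rows, int i, int j)
{
    return rows[i][j];
}
```

Both return the same element; the argument in the thread is purely about which sequence of operations the hardware executes faster.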
avery@netcom.UUCP (Avery Colter) (12/14/90)
mroussel@alchemy.chem.utoronto.ca (Marc Roussel) writes:
> I don't want to use C. From what little I've been exposed to it, I
>don't like it. C has a nasty syntax which I don't have time to learn.
>Now everybody who's been trying to convince scientific programmers like
>me to learn C, go away! Maybe you have the time to waste, but I don't.

Well, like it or not, I HAVE to.  Well, not quite; there is Pascal or
assembly.  I'm interested in getting into desktop tools for Apple
computers.  I'm currently on a GS, and might get into Macintoshes
later on.  Even if I ever consider (rrrrack, pttthhh) Windows 3.0,
Fortran doesn't loom large as a desktop lingo.  In fact, Fortran
hasn't loomed large on Apple computers at all!  The only version I've
ever seen is some dinky little thing for the eight-bit II models in
the ancient UCSD Pascal operating system.

I guess I'll just have to rely on my dial-in site for f77 to keep my
Fortran in practice.  Maybe I'll come back to Fortran when I start
working.  It looks like lady-C will rule the homefront though.

With that, I think I'm getting out of this flamefest...
-- 
Avery Ray Colter    {apple|claris}!netcom!avery   {decwrl|mips|sgi}!btr!elfcat
(415) 839-4567   "I feel love has got to come on and I want it:
                  Something big and lovely!"          - The B-52s, "Channel Z"
andyn@stpstn.UUCP (Andy Novobilski) (12/19/90)
In article <18756@netcom.UUCP> avery@netcom.UUCP (Avery Colter) writes:
>Well, like it or not, I HAVE to. Well, not quite, there is Pascal or assembly.
>I'm interested in getting into desktop tools for Apple computers. I'm
>currently on a GS, and might get into Macintoshes later on. Even if I ever
>consider (rrrrack, pttthhh) Windows 3.0, Fortran doesn't loom large as
>a desktop lingo. In fact, Fortran hasn't loomed large on Apple computers
>at all! The only version I've ever seen is some dinky little thing for the
>eight-bit II models in the ancient UCSD Pascal operating system.

There are several Fortran packages for the Macintosh (including two
listed in the Summer '90 APDAlog) that will work with the Mac toolkit.
The ones listed in the APDAlog work in the MPW environment.  This
would allow you to construct an app by keeping your original Fortran
computational engine and using MacApp (Pascal or C++) to build the
interface.  There is another Fortran, by a company named DCL (Ft
Worth, TX), that provides a package of routines for constructing
simple windows and dialogs to build Mac-like user interfaces.
-- 
Andy Novobilski      | The Stepstone Corp.  | Object-Oriented Programming:
andyn@stepstone.com  | 75 Glen Rd.          | TV Network scheduling based on
(203)426-1875        | Sandy Hook, CT 06482 | the number of insulted viewers.