gessel@carthage.cs.swarthmore.edu (Daniel Mark Gessel) (11/15/90)
Along the lines of optimization and pointerful/pointerless languages, does anyone know much about languages which take suggestions for compilation (like many Lisps), or compilers which automatically use profiling to improve optimization? How much of a bitch are these kinds of things to implement? How useful are they?

This kind of thing could allow both camps to be satisfied, I would think, in that there would be a very portable abstract specification (a program in a high-level language), but it would also allow bit-twiddlers to twiddle (if implemented "right").
--
Daniel Mark Gessel                       Independent Software Consultant
Internet: gessel@cs.swarthmore.edu
Opinions expressed by me are not necessarily those of Swarthmore College.
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (11/16/90)
In article <GESSEL.90Nov14134457@carthage.cs.swarthmore.edu> gessel@carthage.cs.swarthmore.edu (Daniel Mark Gessel) writes:
> Along the lines of optimization and pointerfull/pointerless languages, any
> one know much about languages which take suggestions for compilation (like
> many lisps), or compilers which automatically use profiling to improve
> optimization?

By now there are quite a few compilers that use feedback information to improve code placement. I haven't seen any that use it for more intelligent decisions, but presumably that'll become common during this decade.

``Suggestions for compilation'' is too broad for me to answer the first half of your question. Lots of languages and compilers let you make suggestions for one thing or another.

What I'd love to see is a way to turn on optimizer debugging output for a section of code. The optimizer would print out everything that it looks for while optimizing, what it finds, and what it's not sure about. Then the programmer can make assertions to fill in the missing gaps.

---Dan
misha@just-right.ai.mit.edu (Mike Bolotski) (11/16/90)
In article <5020:Nov1518:23:0790@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>What I'd love to see is a way to turn on optimizer debugging output for
>a section of code. The optimizer would print out everything that it
>looks for while optimizing, what it finds, and what it's not sure about.
>Then the programmer can make assertions to fill in the missing gaps.

This is something that everyone in the group will hopefully agree on.

The Lucid Common Lisp compiler can do this, to an extent. It spits out, on request, the optimizations it made and the optimizations that it could have made, had it additional information. Granted, most of these involve type declarations, and are therefore inapplicable to C, but it's a definite step in the right direction. Unfortunately, I have yet to see a C compiler with a similar feature.

And by the way, Dan, are we ever going to see a public admission of error on the "optimal addition sequence" issue? Or do you claim to be more of an authority on the issue than Aho, Sethi, or Ullman?

Mike Bolotski          Artificial Intelligence Laboratory, MIT
misha@ai.mit.edu       Cambridge, MA
chased@rbbb.Eng.Sun.COM (David Chase) (11/16/90)
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>What I'd love to see is a way to turn on optimizer debugging output for
>a section of code. The optimizer would print out everything that it
>looks for while optimizing, what it finds, and what it's not sure about.

It's been done, somewhat, for the case of dependence-driven transformations -- get info on PTOOL at Rice. I am told that in general (i.e., their experience has been) you really *don't* want "everything" that the optimizer looks for, finds, or is unsure about. If you think about how large some optimizing compilers get while optimizing, you can see why you might not want to know everything.

The last time I paid much attention to the design of the user interface on this (years ago), it was proposed that it take the form:

1) compiler performs transformations;
2) user is presented with transformed code;
3) user queries compiler as to why certain transformations were not performed;
4) compiler answers, and the user either says "oh yeah, you're right", or supplies assertions, or says "do it anyway" (sometimes chaotic convergence is ok, for example).

Ideally, the feedback would be remembered in the form of pragmas/assertions automatically inserted into the source code.

I believe that the reason you see this done for dependence analysis, but not for more ordinary optimizations, is that the pay-offs are so much larger (vectorize, or not? parallelize, or not?), and that the cost of the machine (e.g., Cray) time saved is often greater than the cost of the time spent by humans to do the machine-aided transformations. This is less often the case for more ordinary transformations on more ordinary computers, especially as cycles get cheaper and cheaper. Of course, we could always strive for ultimate program speed as a sort of a sport -- we do something like that for chess already.

David Chase
Sun Microsystems
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (11/16/90)
In article <11896@life.ai.mit.edu> misha@just-right.ai.mit.edu (Mike Bolotski) writes:
> In article <5020:Nov1518:23:0790@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
> >What I'd love to see is a way to turn on optimizer debugging output for
> >a section of code. The optimizer would print out everything that it
> >looks for while optimizing, what it finds, and what it's not sure about.
> >Then the programmer can make assertions to fill in the missing gaps.
> This is something that everyone in the group will hopefully agree on.

I'm sure someone will disagree just for the sake of arguing with me. :-(

> And by the way, Dan, are we ever going to see a public admission of
> error on the "optimal addition sequence" issue?

You mean my statements about optimizing addition chains in a general computation? In the ``array references cannot be made optimal'' thread I gave something that might stand for a proof of my assertions. Jim admits that I'm right too.

> Or do you claim
> to be more an authority on the issue than Aho, Sethi, or Ullman?

I claim that you're misapplying their statements about something entirely different. In fact, I said so before in a followup to your article.

---Dan
jlg@lanl.gov (Jim Giles) (11/16/90)
From article <8515:Nov1522:16:1490@kramden.acf.nyu.edu>, by brnstnd@kramden.acf.nyu.edu (Dan Bernstein):
> [...] In the ``array references cannot be made optimal'' thread I
> gave something that might stand for a proof of my assertions. Jim admits
> that I'm right too.

False. But read the thread yourself. There are _optimizations_ that the compiler can't do. They must, therefore, be expressed by the human programmer. Pointers are _not_ inherently superior to arrays for expressing these improvements.

J. Giles
gl8f@astsun.astro.Virginia.EDU (Greg Lindahl) (11/16/90)
In article <11896@life.ai.mit.edu> misha@just-right.ai.mit.edu (Mike Bolotski) writes:
>And by the way, Dan, are we ever going to see a public admission of
>error on the "optimal addition sequence" issue? Or do you claim
>to be more an authority on the issue than Aho, Sethi, or Ullman?

I think we'll see that about as fast as Dan will admit that his little alias scheme isn't sufficiently general to catch the more obscure cases of FORTRAN guaranteed non-aliasing. That is to say, never.
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (11/16/90)
In article <6083@lanl.gov> jlg@lanl.gov (Jim Giles) writes:
> From article <8515:Nov1522:16:1490@kramden.acf.nyu.edu>, by brnstnd@kramden.acf.nyu.edu (Dan Bernstein):
> > [...] In the ``array references cannot be made optimal'' thread I
> > gave something that might stand for a proof of my assertions. Jim admits
> > that I'm right too.
> False. But read the thread yourself. The are _optimizations_ that
> the compiler can't do. They must, therefore, be expressed by the
> human programmer. Pointers are _not_ inherently superior to
> arrays for expressing these improvements.

Again, you must be defining ``pointers'' and ``arrays'' differently from the rest of the world. You have admitted my point that no algorithm can determine the optimal choice of whether to store &(x[0]) or &(x[1]), because no algorithm can determine for all programs which pointer will be used more. I'll even quote the article for you if you want. This is just a special case of the assertion that no algorithm can solve the addition chain problem in a general computation.

More precisely, the fundamental difference between pointers and arrays is that the former can find &(x[a]) given &(x[b]), while without pointers this can only be done for &(x[0]) (and in fact &(x[b]) can only be stored for b = 0). No algorithm can convert an optimal pointer-free program into an optimal pointer version. Do you agree with that assertion?

To put it differently, it is impossible to optimize array references unless there's some way to express array shifts (pointer arithmetic). Do you agree with that assertion?

Now Fortran, Pascal, and Ada have arrays. They can't express array shifts. Agreed? C can express array shifts. Agreed? Therefore arrays cannot be optimized as well as pointers in some programs. Agreed? You have said the opposite of the above statement. Agreed? (I'll quote it for you if you want.)

I can think of two reasons why:

1. When you said ``array'' you didn't mean array as in Fortran, or Pascal, or Ada.

2. You simply didn't think enough to realize that optimizing array references is equivalent to solving the halting problem.

Now I'm tempted to believe #2, since in several articles before I posted the outline of my proof and the proof itself, you challenged my assertion that optimizing array references is equivalent to solving the halting problem. (I'll quote it for you if you want.)

But I'll be charitable. I'm perfectly willing to believe that you have simply been using a different definition of ``array'' from the rest of the world (except people who use the new Fortran). So which is it?

---Dan
gessel@carthage.cs.swarthmore.edu (Daniel Mark Gessel) (11/17/90)
I've seen a number of mentions of pragmas/assertions. Can anybody give references to how these are done? (I.e., what kind of language is used to specify these assertions, and where do they go? When I read "assertion", I think math/logic, etc.) Anything about using multiple pragmas/assertions in a program for different target machines?

Dan
--
Daniel Mark Gessel                       Independent Software Consultant
Internet: gessel@cs.swarthmore.edu       and Developer
My opinions are not necessarily representative of those of Swarthmore College.