chris@mimsy.UUCP (04/12/87)
In article <995@wanginst.EDU> mckeeman@wanginst.EDU (William McKeeman)
writes:
>Conclusion: when making a pitch for a better C definition rule, identify
>which of the problems:
>	unrestricted optimization
>	overflow anomalies
>	roundoff anomalies
>	side effect anomalies
>it is designed to solve.  If you don't intend for your rule to have
>language-wide implications, state the limited area of application.

Here is a question: Which of those problems are sidestepped by
volatile variables?  I would guess that at least roundoff anomalies,
as demonstrated by the code fragment

	double a, b, n, d;

	a = n / d;
	func();		/* clobber fp accumulator */
	b = n / d;
	if (a != b)
		printf("eh?\n");

where a 64-bit `a' might be compared with an 80-bit accumulator,
would be avoided if `b' were declared `volatile'.  But there may
be more to this.

Just what does `volatile' *do* in a compiler?  What are the precise
semantics of volatile variables?  If a volatile variable is just
one whose value can never be assumed from context, a compiler could
eliminate `dead' stores, so this definition is clearly wrong:

	volatile v;

	v = 0;
	v = 1;
	/*
	 * Altering this to just `v = 1' makes no assumption about
	 * previous values of v, but is no doubt not what is desired.
	 */

I am not quite comfortable with the definition `a volatile variable
is one which may never be used in any optimisation', but if we take
this as a working definition, what are the semantics of volatile
variables within larger expressions?  Could this be extended somehow
to include a concept of volatile expressions?

Let us use as another example the h-bar expression someone noted
recently:

	answer = value / h_bar / h_bar;

which, since h_bar is small, is not the same as

	h_bar_squared = h_bar * h_bar;
	answer = value / h_bar_squared;	/* division by zero */

If we have

	/*
	 * Using eV-sec is cheating, since we can represent its square
	 * (handy having a physics book lying around at home!).
	 */
	volatile double h_bar = 6.626e-34 / (2 * PI);	/* J-sec */

it is then obvious that the compiler cannot optimise the expression

	answer = value / h_bar / h_bar;

We will run into trouble, however, if we use

	#define PI	3.1415926535897932384626433832795
	#define h_bar	(6.626e-34 / 2 / PI)
	...
	answer = value / h_bar / h_bar;

since we now have a floating point constant.  (Of course, if the
optimiser is smart *enough*, it will notice that h_bar*h_bar is
zero and not do the `optimisation'

	h_bar_squared = h_bar * h_bar;
	answer = value / h_bar_squared;

but assuming this is also cheating.)  I now run a bit out of my
depth, not having the ANSI draft handy, but propose the definition

	#define h_bar	((volatile double) (6.626e-34 / 2 / PI))

Since the semantics of a cast are precisely those of assignment to
a variable of the same type as the cast, this would seem to solve
the immediate problem.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7690)
UUCP:	seismo!mimsy!chris	ARPA/CSNet:	chris@mimsy.umd.edu
lmiller@venera.isi.edu (Larry Miller) (04/13/87)
In article <6264@mimsy.UUCP> chris@mimsy.UUCP (Chris Torek) writes:
>
>Here is a question: Which of those problems are sidestepped by
>volatile variables?  I would guess that at least roundoff anomalies,
>as demonstrated by the code fragment
>
>	double a, b, n, d;
>
>	a = n / d;
>	func();		/* clobber fp accumulator */
>	b = n / d;
>	if (a != b)
>		printf("eh?\n");
>
>where a 64-bit `a' might be compared with an 80-bit accumulator,
>would be avoided if `b' were declared `volatile'.  But there may
>be more to this.

This is a mess.  C is looking more and more like a throwback to an
earlier era, disregarding all we've learned about program semantics
and abstract data types in the last decade.

Either the language can be strongly typed, or it can't.  If it can,
then a comparison of `a' with an 80-bit accumulator is a type
violation.  What can we prove about any program if we can't even
make the kinds of simple comparisons in the cited passage (assuming
that func() does not change the value of `a')?

I WANT to be able to say

	float a, b;

and KNOW that they are the same TYPE.  A TYPE has two characteristics:
a set of operations (with precise semantics), and a representation.

It is insufficient to say ``use C++, or use LISP'' because many of
these are written in C.

Larry Miller
mckeeman@wanginst.UUCP (04/13/87)
Chris Torek writes:
>Here is a question: Which of those problems are sidestepped by
>volatile variables?...
>Just what does `volatile' *do* in a compiler?  What are the precise
>semantics of volatile variables?  If a volatile variable is just
>one whose value can never be assumed from context, a compiler could
>eliminate `dead' stores, so this definition is clearly wrong:
>
>	volatile v;
>
>	v = 0;
>	v = 1;

Bill McKeeman (me) responds:

I worry that volatile, designed to deal with asynchronous processes,
is accidentally picking up some sequential access semantics baggage.
The relevant paragraph from X3J11 is:

	3.5.2.4  const and volatile

	The properties associated with the const and volatile type
	specifiers are meaningful only for expressions that are
	lvalues....

and

	...Therefore any expression referring to such an object
	[volatile] shall be evaluated strictly according to the
	sequence rules of the abstract machine...

That is pretty handy for those who want to keep the optimizer at
bay.  I interpret it to mean that both v = 0; and v = 1; must be
executed in sequence in Torek's example above.

A cast (volatile) can only be applied to an lvalue.  I interpret
this to mean that as a compiler writer, I am obligated to flag
Torek's suggestion:

	#define h_bar	((volatile double) (6.626e-34 / 2 / PI))

under the present rules.

QUESTION: Do we want to use volatile variables as a (subtle) way
to indicate that our expressions are not to be optimized?  Would
this be more intuitive and transparent than ignored parentheses
and unary +?
-- 
W. M. McKeeman			mckeeman@WangInst
Wang Institute			decvax!wanginst!mckeeman
Tyngsboro MA 01879