[comp.sys.mac.programmer] printf and 68881 code

gs@eddie.MIT.EDU (Gordon Strong) (02/10/89)

Using LSC 3.0p2, Mac II, 5MB, System 6.0.2:

I am having problems getting printf to correctly format variables
which are declared/set using 68881 code.  The variables have the
correct values (judged by examination using the debugger).  When
the value is approximately 0.5, printf displays 0.000008, and as the
actual variables fluctuate, the displayed value remains constant.

I have tried casting the 68881 values to non-68881 floats, but
get the same results.  The LSC manual for the math881 library says
that "some conversion must be done" due to 96-bit 68881 representation.
I assumed this meant casting -- does it mean something else?

The (simplified) code looked something like this:

#define _MC68881_
#include <math.h>
#include <stdio.h>
...
register double a,b,c;
float x;
...
c = sqrt((a*a) + (b*b));
x = (float)c;
printf("%f",c);
printf("%f",x);


I linked with the MacTraps, math881, and stdio libraries.

Any ideas?

Gordon Strong
gs@eddie.mit.edu

siegel@endor.harvard.edu (Rich Siegel) (02/11/89)

In article <11054@eddie.MIT.EDU> gs@eddie.MIT.EDU (Gordon Strong) writes:
>Using LSC 3.0p2, Mac II, 5MB, System 6.0.2:
>
>I am having problems getting printf to correctly format variables
>which are declared/set using 68881 code.

	You need to link with the "stdio881" library, which has the code
for correctly formatting floating-point output when 68881 code generation
is turned on.
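
	For example, with stdio881 in the project in place of stdio (and
math881 and MacTraps still linked, as in the original posting), the
original code should work unchanged; no casting is needed, since stdio881
contains the conversion and formatting code for the 68881's 96-bit
representation.  A minimal check along those lines:

#define _MC68881_           /* 68881 code generation, as in the original posting */
#include <math.h>
#include <stdio.h>          /* project libraries: MacTraps, math881, stdio881 */

int main(void)
{
    double c = sqrt(0.25);  /* the roughly-0.5 case from the original posting */

    printf("%f\n", c);      /* with stdio881 linked, should print 0.500000 */
    return 0;
}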

		--Rich

Rich Siegel
Staff Software Developer
THINK Technologies Division, Symantec Corp.
Internet: siegel@endor.harvard.edu
UUCP: ..harvard!endor!siegel
Phone: (617) 275-4800 x305

Any opinions stated in this article do not necessarily reflect the views
or policies of Symantec Corporation or its employees.