desj@brahms.BERKELEY.EDU (David desJardins) (03/16/86)
>From: ihnp4!gargoyle!sphinx!fdot (Tom Lippincott)
>
>How about error handling? I've never seen *any* program where you could,
>for example, add 1.23+/-.02 to 4.56+/-.04 and get 5.79+/-.06, let alone
>perform more complicated math, graph them with error bars automatically, etc.

   Maybe that's because (1.23 +- .02) + (4.56 +- .04) = (5.79 +- .045)?
(Or do you want the program to figure out the correlation coefficient? :-))

   -- David desJardins
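[A minimal sketch of the arithmetic behind that .045 figure -- Python is
an editorial choice here, and the function name is illustrative, not any
poster's package. It assumes the two errors are independent standard
errors, combined in quadrature:]

    import math

    def add_in_quadrature(a, sa, b, sb):
        # Best estimates add; for uncorrelated measurements the
        # standard errors combine as the root-sum-of-squares (RSS).
        return a + b, math.sqrt(sa**2 + sb**2)

    value, error = add_in_quadrature(1.23, 0.02, 4.56, 0.04)
    print(f"{value:.2f} +- {error:.3f}")   # prints 5.79 +- 0.045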
jin@hropus.UUCP (Bear) (03/17/86)
> >From: ihnp4!gargoyle!sphinx!fdot (Tom Lippincott)
> >
> >How about error handling? I've never seen *any* program where you could,
> >for example, add 1.23+/-.02 to 4.56+/-.04 and get 5.79+/-.06, let alone
> >perform more complicated math, graph them with error bars automatically, etc.
>
> Maybe that's because (1.23 +- .02) + (4.56 +- .04) = (5.79 +- .045)?
> (Or do you want the program to figure out the correlation coefficient? :-))
>
> -- David desJardins

If I remember my stats correctly, the standard error of the sum of two
measurements is the square root of the sum of the squares of the individual
errors only if the two measurements are uncorrelated. I don't know the
context of this discussion, but thought I'd drop my $.02 in.
--
Jerry Natowitz
ihnp4!houxm!hropus!jin (official)
ihnp4!opus!jin (temporary)
Institute for the Study of Non-existent Phenomena
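[A sketch of the general rule Jerry alludes to, with an explicit
correlation coefficient rho; the function name is illustrative. Note
that full correlation (rho = 1) recovers exactly the .06 from the
original post:]

    import math

    def sum_error(s1, s2, rho=0.0):
        # Standard error of x + y when x and y have standard errors
        # s1 and s2 and correlation coefficient rho.
        # rho = 0 gives the RSS result; rho = 1 gives the plain sum.
        return math.sqrt(s1**2 + s2**2 + 2.0 * rho * s1 * s2)

    print(f"{sum_error(0.02, 0.04):.4f}")        # 0.0447 (uncorrelated)
    print(f"{sum_error(0.02, 0.04, 1.0):.4f}")   # 0.0600 (fully correlated)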
gwyn@brl-smoke.ARPA (Doug Gwyn ) (03/20/86)
In article <12410@ucbvax.BERKELEY.EDU> desj@brahms.UUCP (David desJardins) writes:
>>From: ihnp4!gargoyle!sphinx!fdot (Tom Lippincott)
>>How about error handling? I've never seen *any* program where you could,
>>for example, add 1.23+/-.02 to 4.56+/-.04 and get 5.79+/-.06, let alone
>>perform more complicated math, graph them with error bars automatically, etc.
>
> Maybe that's because (1.23 +- .02) + (4.56 +- .04) = (5.79 +- .045)?
>(Or do you want the program to figure out the correlation coefficient? :-))

If one adds two Gaussian distributions, does he get a Gaussian? How about
other arithmetic operations (e.g. sqrt)? Tom seems to have had what is
known as "range arithmetic" in mind; there have been several such packages,
some in Fortran. The trouble with range arithmetic is that the range of
uncertainty grows really fast as more and more operations are performed.

It is true that "n +- s" conventionally means: the best estimate for a
quantity is "n" and the standard error of that estimate is "s". I have
seen simplified rules for combining such (n,s) entities under arithmetic
operations, on the assumption that the relative errors are sufficiently
small; unfortunately, many such rules have been simply wrong. The best
discussion of error analysis I know of is in Bevington's book, "Data
Reduction and Error Analysis for the Physical Sciences".
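[Doug's point about ranges growing can be seen in a few lines. A minimal
sketch of range arithmetic on closed intervals -- the tuple representation
and function names are illustrative, not one of the Fortran packages he
mentions:]

    def r_add(a, b):
        # [a0, a1] + [b0, b1] = [a0 + b0, a1 + b1]
        return (a[0] + b[0], a[1] + b[1])

    def r_sub(a, b):
        # [a0, a1] - [b0, b1] = [a0 - b1, a1 - b0]
        return (a[0] - b[1], a[1] - b[0])

    x = (1.21, 1.25)      # 1.23 +- .02 as a range
    y = (4.52, 4.60)      # 4.56 +- .04
    print(r_add(x, y))    # (5.73, 5.85), i.e. 5.79 +- .06

    # The growth Doug mentions: x - x is not zero-width, because
    # range arithmetic forgets that the two operands are the same
    # (perfectly correlated) quantity.
    print(r_sub(x, x))    # ~(-0.04, 0.04), not (0, 0)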
desj@brahms.BERKELEY.EDU (David desJardins) (03/22/86)
ihnp4!gargoyle!sphinx!fdot (Tom Lippincott) in <1794@sphinx.UChicago.UUCP>:
>How about error handling? I've never seen *any* program where you could,
>for example, add 1.23+/-.02 to 4.56+/-.04 and get 5.79+/-.06, ...

My response in article <12410@ucbvax.BERKELEY.EDU>:
> Maybe that's because (1.23 +- .02) + (4.56 +- .04) = (5.79 +- .045)?

jsdy@hadron.UUCP (Joseph S. D. Yao) in <314@hadron.UUCP>:
>If you treat the +/- .02 etc. as errors (which seems to be what the
>poster said), the first formulation is correct (.06).
>[... some stuff about percentage errors ...]
>Obviously .045 comes from somewhere else entirely. I was terrible
>in stats, and can't remember if there is a proper way of adding
>s.d.'s. But it seems that one should read messages before asserting
>flatly that they are wrong!

bill@utastro.UUCP (William H. Jefferys) in <547@utastro.UUCP>:
>In Engineering situations it is common to add the absolute values of the
>errors in order to come up with a "worst case" error, so that you can
>design the equipment to be resistant to failure modes.
>
>Scientists are of course more familiar with the RSS error, where you
>take the square root of the sum of the squares of the errors.

   The point, on which I suppose I was not clear, is that in any sort of
scientific setting the errors which you are adding are standard errors,
which essentially means that your measurement has some distribution about
the true value with standard deviation at most the given error. The
correct way to add these is, as William Jefferys says, to take the square
root of the sum of the squares of the errors.

   Admittedly there might *occasionally* be a need for a worst-case error
analysis (the fact that this is commonly done by engineers does not make
it correct!). And, as other posters have noted, you might also have a
nonzero correlation coefficient. But all of this should go to support my
original point that software that tries to add errors, *especially* if it
is done naively, is not of much use.

   As for Joseph Yao's comments, I find it mildly amusing to be criticized
on this subject by someone who says, "I was terrible in stats...." I, in
turn, suggest to you that in fields about which you know little (error
analysis is a subfield of statistics) you refrain from criticizing others.

   -- David desJardins
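[A quick Monte Carlo check of the RSS rule -- a sketch only, assuming the
two measurement errors are independent and Gaussian; the sample size is
arbitrary:]

    import math
    import random

    N = 100_000
    sums = [random.gauss(1.23, 0.02) + random.gauss(4.56, 0.04)
            for _ in range(N)]
    mean = sum(sums) / N
    sd = math.sqrt(sum((s - mean) ** 2 for s in sums) / (N - 1))
    # Prints roughly 5.790 +- 0.0447: the RSS value, not the
    # worst-case 0.06.
    print(f"{mean:.3f} +- {sd:.4f}")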