wei@princeton.UUCP (P Wei) (02/11/86)
I recently ported the curve fitting program listed in BYTE, May 1984, p. 340,
to the IBM PC using TURBO Pascal.  The authors claimed that the program could
fit *any* equation with *any* number of parameters and variables to
experimental data.  Now I am wondering whether there are any disadvantages or
potential problems in this algorithm.  (Up to now we haven't met any problem
yet.  Is it a panacea?)  Thanks in advance.

HP Wei (wei@princeton)
weemba@brahms.BERKELEY.EDU (Matthew P. Wiener) (02/11/86)
In article <1217@princeton.UUCP> wei@princeton.UUCP (P Wei) writes:
>I recently ported the curve fitting program listed in BYTE, May 1984, p. 340,
>to the IBM PC using TURBO Pascal.
>The authors claimed that the program could fit *any* equation with *any*
>number of parameters and variables to experimental data.
>Now I am wondering whether there are any disadvantages or potential problems
>in this algorithm.  (Up to now we haven't met any problem yet.  Is it a
>panacea?)
>Thanks in advance.
>HP Wei (wei@princeton)

I don't know the algorithm, but the claim is garbage.  It is impossible to
fit for a, b, c, d from data points to the function y = a*exp(b*x) + c*exp(d*x).
The fitting process here is completely unstable.  (I believe this example
is due to Wilkinson in the late 1950s.)

ucbvax!brahms!weemba    Matthew P Wiener/UCB Math Dept/Berkeley CA 94720
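A quick numerical illustration of that instability (made-up parameter values,
not taken from the BYTE program): the short C program below evaluates
y = a*exp(b*x) + c*exp(d*x) for two noticeably different parameter sets and
prints the curves side by side.  Over 0 <= x <= 2 they agree to roughly a
percent, which is comparable to the noise in much experimental data, so a
least-squares fit has almost nothing to choose between them.

/* Illustration only -- made-up parameter values, not the BYTE program.
 * Two quite different (a,b,c,d) sets for y = a*exp(b*x) + c*exp(d*x)
 * trace out nearly the same curve over a finite range.
 */
#include <stdio.h>
#include <math.h>

static double model(double a, double b, double c, double d, double x)
{
    return a * exp(b * x) + c * exp(d * x);
}

int main(void)
{
    double x;
    for (x = 0.0; x <= 2.001; x += 0.25) {
        double y1 = model(1.0, -1.00, 1.0, -1.20, x);   /* parameter set 1 */
        double y2 = model(0.5, -0.90, 1.5, -1.18, x);   /* parameter set 2 */
        printf("x = %4.2f   y1 = %.4f   y2 = %.4f   y1-y2 = %+.4f\n",
               x, y1, y2, y1 - y2);
    }
    return 0;
}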
blm@chinet.UUCP (Brad L. McKinley) (02/13/86)
To whom it may concern: I translated the above-mentioned program from Pascal
to C.  Interested parties can E-mail requests for it to me.  For as little as
I've used it, it seems to run very well.  (Please excuse the grammar.  I'm a
programmer, not an English teacher.)
perry@vu-vlsi.UUCP (Rick Perry) (02/14/86)
> I recently ported the curve fitting program listed in BYTE, May 1984, p. 340,
> The authors claimed that the program could fit *any* equation with *any*
> number of parameters and variables to experimental data.

I've implemented that (or a similar) simplex method in both APL and Pascal and
have had great success with it.  The simplex algorithm is very robust and
stable if programmed properly (taking proper account of machine precision,
roundoff, and the convergence criteria).  It has some advantages over other
methods in that it tends to avoid local minima if started over a big enough
region.  But like any global minimization algorithm, there is always some
chance that it could converge to a local minimum.

The only disadvantage that I would note is that it has more or less linear
convergence, so it may take more CPU time to get the answers with this method.
But I like it for its stability and easy understandability - that is, you
don't need 10 Ph.D.'s in math to comprehend how it works, and it's fairly
easy to program in a straightforward, bug-free (if there is such a thing!)
way.

...Rick   ...{pyrnj,psuvax1}!vu-vlsi!perry
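For anyone who wants to see the shape of the algorithm Rick describes without
digging out the BYTE listing, here is a bare-bones downhill simplex minimizer
in C.  It is only a sketch, not Rick's code or the magazine's: the model
y = p[0]*exp(p[1]*x), the synthetic data, and the starting simplex are all
made up, and the contraction step is simplified relative to the published
algorithm.

/* Bare-bones downhill simplex minimizer -- illustration only. */
#include <stdio.h>
#include <math.h>

#define N    2                  /* number of parameters            */
#define NPTS 5                  /* number of synthetic data points */

static const double xd[NPTS] = { 0.0, 0.5, 1.0, 1.5, 2.0 };
static const double yd[NPTS] = { 2.00, 1.21, 0.74, 0.45, 0.27 };  /* ~ 2*exp(-x) */

static double sse(const double p[])     /* objective: sum of squared residuals */
{
    double s = 0.0;
    int i;
    for (i = 0; i < NPTS; i++) {
        double r = yd[i] - p[0] * exp(p[1] * xd[i]);
        s += r * r;
    }
    return s;
}

int main(void)
{
    double v[N + 1][N] = { {1.0, 0.0}, {1.5, 0.0}, {1.0, -0.5} };  /* starting simplex */
    double f[N + 1];
    int i, j, iter;

    for (i = 0; i <= N; i++)
        f[i] = sse(v[i]);

    for (iter = 0; iter < 500; iter++) {
        int lo = 0, hi = 0, nh;
        double cen[N], xr[N], xe[N], xc[N], fr, fe, fc;

        for (i = 1; i <= N; i++) {              /* find best and worst vertices */
            if (f[i] < f[lo]) lo = i;
            if (f[i] > f[hi]) hi = i;
        }
        if (f[hi] - f[lo] < 1e-12)              /* crude convergence test       */
            break;
        nh = lo;                                /* second-worst vertex          */
        for (i = 0; i <= N; i++)
            if (i != hi && f[i] > f[nh]) nh = i;

        for (j = 0; j < N; j++) {               /* centroid of all but the worst */
            cen[j] = 0.0;
            for (i = 0; i <= N; i++)
                if (i != hi) cen[j] += v[i][j];
            cen[j] /= N;
        }

        for (j = 0; j < N; j++)                 /* reflect worst through centroid */
            xr[j] = cen[j] + (cen[j] - v[hi][j]);
        fr = sse(xr);

        if (fr < f[lo]) {                       /* very good: try expanding further */
            for (j = 0; j < N; j++)
                xe[j] = cen[j] + 2.0 * (xr[j] - cen[j]);
            fe = sse(xe);
            if (fe < fr) { for (j = 0; j < N; j++) v[hi][j] = xe[j]; f[hi] = fe; }
            else         { for (j = 0; j < N; j++) v[hi][j] = xr[j]; f[hi] = fr; }
        } else if (fr < f[nh]) {                /* decent: accept the reflection */
            for (j = 0; j < N; j++) v[hi][j] = xr[j];
            f[hi] = fr;
        } else {                                /* poor: contract toward centroid */
            for (j = 0; j < N; j++)
                xc[j] = cen[j] + 0.5 * (v[hi][j] - cen[j]);
            fc = sse(xc);
            if (fc < f[hi]) { for (j = 0; j < N; j++) v[hi][j] = xc[j]; f[hi] = fc; }
            else {                              /* still bad: shrink toward best  */
                for (i = 0; i <= N; i++) {
                    if (i == lo) continue;
                    for (j = 0; j < N; j++)
                        v[i][j] = v[lo][j] + 0.5 * (v[i][j] - v[lo][j]);
                    f[i] = sse(v[i]);
                }
            }
        }
    }

    i = 0;
    for (j = 1; j <= N; j++)
        if (f[j] < f[i]) i = j;
    printf("best vertex: a = %.4f  b = %.4f  (sse = %.3g after %d iterations)\n",
           v[i][0], v[i][1], f[i], iter);
    return 0;
}

The convergence test (the spread of the function values across the simplex) is
deliberately crude; as Rick says, tuning that tolerance to the machine
precision matters, since too tight a value just leaves the simplex thrashing
at roundoff level.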
carlc@tektronix.UUCP (Carl Clawson) (02/14/86)
In article <11800@ucbvax.BERKELEY.EDU> weemba@brahms.UUCP (Matthew P. Wiener) writes:
>>The authors claimed that the program could fit *any* equation with *any*
>>number of parameters and variables to experimental data.
>
>I don't know the algorithm, but the claim is garbage.  It is impossible to
>fit for a, b, c, d from data points to the function y = a*exp(b*x) + c*exp(d*x).
>The fitting process here is completely unstable.  (I believe this example
>is due to Wilkinson in the late 1950s.)

If the parameters b and d are not "too close," and if you have data well
beyond the "knee" in a log plot of y vs. x, then curve-fitting works for this
case.  I'm not talking about stability theory, I'm talking about what I've
observed.

For arbitrary b, d this is a tough problem; unfortunately, it is common in
radioactive decay, spectroscopy, and many other fields.  There are other ways
of treating it than curve fitting.  See D. N. Swingler, IEEE Trans. on
Biomedical Engineering, BME-24, p. 408, July 1977.

I expect to be severely scorched by serious numerical analysts.  Go ahead,
send me your proofs of instability, or your flames, or whatever.
--
Carl Clawson
Solid State Research Lab / Tek Labs
{decvax, ucbvax, ihnp4, ???}!tektronix!carlc
(503) 627-6304
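One concrete way to see Carl's point about the knee (an illustration with
made-up numbers, not the method in the Swingler paper): when the two rates are
well separated, the fast exponential has died out on the tail, so a
straight-line fit to ln(y) there recovers the slow pair, which could then be
peeled off before fitting the rest.

/* Illustration only: recover the slow exponential from the tail. */
#include <stdio.h>
#include <math.h>

#define NPTS 21

int main(void)
{
    /* "true" parameters: fast component (a, b), slow component (c, d) */
    const double a = 5.0, b = -4.0, c = 1.0, d = -0.5;
    double x[NPTS], y[NPTS];
    double sx = 0, sy = 0, sxx = 0, sxy = 0, slope, icept;
    int i, n = 0;

    for (i = 0; i < NPTS; i++) {
        x[i] = 0.25 * i;                        /* x from 0 to 5 */
        y[i] = a * exp(b * x[i]) + c * exp(d * x[i]);
    }

    /* least-squares line through (x, ln y), using only the tail (x >= 2)
     * where the fast exponential is well under 1% of the slow one */
    for (i = 0; i < NPTS; i++) {
        if (x[i] < 2.0)
            continue;
        sx  += x[i];         sy  += log(y[i]);
        sxx += x[i] * x[i];  sxy += x[i] * log(y[i]);
        n++;
    }
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    icept = (sy - slope * sx) / n;

    printf("slow component from the tail: c ~ %.3f, d ~ %.3f (true: 1.0, -0.5)\n",
           exp(icept), slope);
    return 0;
}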
gwyn@brl-smoke.ARPA (Doug Gwyn ) (02/16/86)
>>The authors claimed that the program could fit *any* equation with *any*
>>number of parameters and variables to experimental data.
>>Now I am wondering whether there are any disadvantages or potential problems
>>in this algorithm.  (Up to now we haven't met any problem yet.  Is it a
>>panacea?)
>
>I don't know the algorithm, but the claim is garbage.  It is impossible to
>fit for a, b, c, d from data points to the function y = a*exp(b*x) + c*exp(d*x).
>The fitting process here is completely unstable.  (I believe this example
>is due to Wilkinson in the late 1950s.)

I don't have experience with the "simplex algorithm" for curve fitting, but I
did implement a Marquardt gradient-expansion nonlinear least-squares fitting
routine (that took error ellipses into account) years ago and found that in
practice it performed quite well, although a bit slowly.

For most functions of practical interest, a good fitting technique will
converge if one feeds it a reasonable initial guess at the parameter values.
Certainly, one can invent unstable situations that will cause the fit to
diverge or to produce a bogus fit (such as a fit to harmonics of a periodic
function instead of to the fundamental).  Visual inspection of the fitted
curve against the empirical data set is essential if one is to have real
confidence in the result (which should of course include error estimates for
the parameters).  If the data is really noisy, a straight line may well have a
better chi-squared than any more elaborate functional form.

It is true that there is no panacea, but intelligent use of a good scheme can
work wonders.  We were able to fit Mössbauer and TDPAC data that was so noisy
that the features measured by curve-fitting could not even be detected by
visual examination of the raw data.
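For concreteness, here is a compact sketch of the Marquardt damping idea Doug
describes, applied to a two-parameter model with made-up data.  It is not his
routine: it omits the error ellipses and parameter error estimates, and it
solves the 2x2 normal equations directly rather than calling a general linear
solver.

/* Marquardt (damped Gauss-Newton) sketch for y = p0*exp(p1*x) -- illustration only. */
#include <stdio.h>
#include <math.h>

#define NPTS 6

static const double xd[NPTS] = { 0.0, 0.4, 0.8, 1.2, 1.6, 2.0 };
static const double yd[NPTS] = { 3.05, 2.01, 1.36, 0.92, 0.60, 0.41 };

static double chisq(double p0, double p1)      /* sum of squared residuals */
{
    double s = 0.0;
    int i;
    for (i = 0; i < NPTS; i++) {
        double r = yd[i] - p0 * exp(p1 * xd[i]);
        s += r * r;
    }
    return s;
}

int main(void)
{
    double p0 = 1.0, p1 = -0.1;        /* a deliberately rough initial guess */
    double lambda = 0.001;
    int iter, i;

    for (iter = 0; iter < 100; iter++) {
        /* build the damped normal equations (JtJ + lambda on diag) dp = Jt r */
        double a00 = 0, a01 = 0, a11 = 0, g0 = 0, g1 = 0;
        double det, dp0, dp1, oldchi = chisq(p0, p1);

        for (i = 0; i < NPTS; i++) {
            double e  = exp(p1 * xd[i]);
            double r  = yd[i] - p0 * e;        /* residual       */
            double j0 = e;                     /* d(model)/d(p0) */
            double j1 = p0 * xd[i] * e;        /* d(model)/d(p1) */
            a00 += j0 * j0;  a01 += j0 * j1;  a11 += j1 * j1;
            g0  += j0 * r;   g1  += j1 * r;
        }
        a00 *= 1.0 + lambda;                   /* Marquardt damping   */
        a11 *= 1.0 + lambda;

        det = a00 * a11 - a01 * a01;           /* solve the 2x2 system */
        dp0 = (a11 * g0 - a01 * g1) / det;
        dp1 = (a00 * g1 - a01 * g0) / det;

        if (chisq(p0 + dp0, p1 + dp1) < oldchi) {
            p0 += dp0;  p1 += dp1;             /* step helped: take it */
            lambda *= 0.1;
        } else {
            lambda *= 10.0;                    /* step hurt: damp harder */
        }
        if (fabs(dp0) + fabs(dp1) < 1e-8)
            break;
    }
    printf("fit: p0 = %.4f  p1 = %.4f  chi-squared = %.4g\n",
           p0, p1, chisq(p0, p1));
    return 0;
}

The whole Marquardt trick is in the (1.0 + lambda) factor: a large lambda
turns the step into a short, safe move down the gradient, while a small lambda
gives the full Gauss-Newton step that converges quickly near the answer.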
ice@trwrba.UUCP (Douglas L. Ice) (02/18/86)
In article <236@vu-vlsi.UUCP> perry@vu-vlsi.UUCP (Rick Perry) writes:
>> I recently ported the curve fitting program listed in BYTE, May 1984, p. 340,
>> The authors claimed that the program could fit *any* equation with *any*
>> number of parameters and variables to experimental data.
>
>[...]
> The only disadvantage that I would note is that it has more or less linear
>convergence, so it may take more CPU time to get the answers with this method.
>But I like it for its stability and easy understandability - that is, you
>don't need 10 Ph.D.'s in math to comprehend how it works, and it's fairly
>easy to program in a straightforward, bug-free (if there is such a thing!)
>way.
>
>...Rick   ...{pyrnj,psuvax1}!vu-vlsi!perry

In general, the simplex algorithm is classified as NP-complete -- in other
words, given an arbitrary set of variables and constraints, it is currently
unknown if the algorithm terminates in an amount of time which is a polynomial
function of the input length.  Many problems whose only known solutions take
time exponential in the input length can be modeled using the simplex method
(for example, the Travelling Salesman problem).  Of course, there are many
problems amenable to quick solution via the simplex method, and, as Rick
points out, it is dependable and simple(x).

---Doug
hofbauer@utcsri.UUCP (John Hofbauer) (02/19/86)
> I expect to be severely scorched by serious numerical analysts.
> Go ahead, send me your proofs of instability, or your flames, or whatever.

As long as you are computing with approximations to the real numbers, i.e.
floating-point numbers, things can be made to fail.  They may not fail for
what you are interested in, but they can be made to fail.  There is no free
lunch.
--
a serious numerical analyst
blm@chinet.UUCP (Brad L. McKinley) (02/20/86)
For those of you who have requested my Pascal to C translation: I have
submitted it to mod.sources.  Look for it anytime now.
-------
Name  : Brad L. McKinley --- (503) 889-4321
USMail: M D R Professional Software, Inc., 915 SW 3rd Avenue, Ontario, OR 97914
Usenet: ihnp4!chinet!blm  OR  ihnp4!chinet!mdr!blm
CIS   : 70015,1205
Source: BDY171
"God created Arrakis to test the faithful."
bill@utastro.UUCP (William H. Jefferys) (02/22/86)
In article <1836@trwrba.UUCP>, ice@trwrba.UUCP (Douglas L. Ice) writes:
> In article <236@vu-vlsi.UUCP> perry@vu-vlsi.UUCP (Rick Perry) writes:
> >> I recently ported the curve fitting program listed in BYTE, May 1984, p. 340,
> >> The authors claimed that the program could fit *any* equation with *any*
> >> number of parameters and variables to experimental data.
> >
> >[...]
> > The only disadvantage that I would note is that it has more or less linear
> >convergence, so it may take more CPU time to get the answers with this method.
> In general, the simplex algorithm is classified as NP-complete -- in
> other words, given an arbitrary set of variables and constraints, it
> is currently unknown if the algorithm terminates in an amount of time
> which is a polynomial function of the input length.

Wrong simplex method.  What Rick and his respondent are talking about here is
the Simplex Method for minimizing nonlinear functions, due to Nelder and Mead
(Comput. J. *7*, 308, 1965), and NOT Dantzig's Simplex Method of Linear
Programming, which was invented in the late 1940's.  The two methods have
nothing whatsoever to do with each other, other than that they unfortunately
have the same name.

I believe that Smale recently showed that Dantzig's Simplex Method "usually"
gets its answer in polynomial time, where "usually" was described precisely.
This explains why it is such a good method in practice even though there are
pathological cases for which it doesn't get the answer in polynomial time.
But this result has nothing to do with the rate of convergence of Nelder and
Mead's algorithm.
--
Glend.  I can call spirits from the vasty deep.
Hot.    Why, so can I, or so can any man;
        But will they come when you do call for them?
                                        -- Henry IV Pt. I, III, i, 53

Bill Jefferys  8-%
Astronomy Dept, University of Texas, Austin TX 78712   (USnail)
{allegra,ihnp4}!{ut-sally,noao}!utastro!bill   (UUCP)
bill@astro.UTEXAS.EDU   (Internet)