
Function approximation with Maclaurin series

I need to approximate (1-x)^0.25 to a given accuracy (e.g. 0.0001). I'm using the expansion found on Wikipedia for (1+x)^0.25. I need to stop summing when the current term is less than the accuracy.
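For reference, the expansion in question appears to be the generalized binomial series, whose term ratio is what drives the loop below:

    (1+u)^{\alpha} = \sum_{k=0}^{\infty} \binom{\alpha}{k} u^{k},
    \qquad \frac{t_k}{t_{k-1}} = \frac{\alpha - k + 1}{k}\, u

With alpha = 0.25 and u = -x, the ratio (alpha - k + 1)/k becomes 1.25/n - 1, which matches the multiplier (1.25 / n - 1) * (-x) in the code.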

long double s(long double x, long double d) {
    long double w = 1;   // running sum of the series
    long double n = 1;   // index of the current term
    long double tmp = 1; // current term

    // keep summing while the last term is at least as large as the accuracy
    while (fabsl(tmp) >= d) {
        tmp *= (1.25 / n - 1) * (-x); // compute the next term
        w += tmp;                     // and add it to the approximation
        n++; 
    } 

    return w;
}

Don't mind the long double n. :P This works well when I don't check the value of the current term and instead just compute 1000 or more terms. The domain of the function is [-1; 1], and s() approximates well for x in [-1; ~0.6]. The larger the argument, the larger the error; from about 0.6 onward it exceeds the accuracy.
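To make the failure visible, here is a minimal test harness (the comparison against powl and the sample grid are my additions, purely for illustration):

#include <math.h>
#include <stdio.h>

long double s(long double x, long double d); /* the function above */

int main(void) {
    const long double d = 0.0001L;
    for (int i = -10; i <= 9; i++) {          /* x = -1.0, -0.9, ..., 0.9 */
        long double x = i / 10.0L;
        long double approx = s(x, d);
        long double exact = powl(1.0L - x, 0.25L);
        printf("x = % .1Lf  s = %.6Lf  powl = %.6Lf  err = %.2Le\n",
               x, approx, exact, fabsl(approx - exact));
    }
    return 0;
}

Per the behaviour described above, the error column should stay below the tolerance for negative x and grow past it as x approaches 1.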

I'm not sure if the problem is stated clearly, because I don't know English mathematical vocabulary well. The question is: what is wrong with the while condition, and why doesn't the function s() approximate correctly?

EDIT: Problem mostly solved. When x > 0 I have to subtract the absolute value of each successive term from 1:

if (x<0)
   w += tmp;
else
   w -= fabsl(tmp);

After that, the accuracy improves a lot (for x > 0, of course). The remaining error stems from long double round-off. That's all. Thanks anyway to you guys.
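For completeness, here is the routine with that fix folded in; a sketch of the corrected version as I understand the edit:

#include <math.h>

// Approximate (1 - x)^0.25 by its binomial series; stop once the
// magnitude of the current term falls below the accuracy d.
long double s(long double x, long double d) {
    long double w = 1;   // running sum
    long double n = 1;   // index of the current term
    long double tmp = 1; // current term

    while (fabsl(tmp) >= d) {
        tmp *= (1.25 / n - 1) * (-x); // next term
        if (x < 0)
            w += tmp;        // terms alternate in sign: add as-is
        else
            w -= fabsl(tmp); // terms share one sign: subtract magnitudes
        n++;
    }
    return w;
}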


Try drawing the graph of the function

    abs((1.0 + x)^alpha - binomial_formula(alpha, x, tolerance))

Even in a small x range such as [-0.5; 0.5] you will get something like:

[plot of the absolute error of the truncated binomial expansion]

This means that your binomial expansion implementation is numerically unstable. As x gets further and further from zero, the series must include more and more terms for a given accuracy. But doing so in the current implementation causes catastrophic cancellation (a floating-point error-accumulation mechanism). Try reading the linked material on how to design a numerically stable algorithm.
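One standard way to limit that accumulation, going beyond what this answer spells out, is compensated (Kahan) summation; a sketch applied to the same term recurrence:

#include <math.h>

// Same series, but the additions into w are Kahan-compensated:
// c tracks the low-order bits lost in each addition of a term.
long double s_kahan(long double x, long double d) {
    long double w = 1;
    long double c = 0;   // running compensation
    long double tmp = 1; // current term
    long double n = 1;

    while (fabsl(tmp) >= d) {
        tmp *= (1.25 / n - 1) * (-x);
        long double y = tmp - c; // corrected term
        long double t = w + y;   // low-order bits of y are lost here...
        c = (t - w) - y;         // ...and recovered into c
        w = t;
        n++;
    }
    return w;
}

This does not fix the stopping criterion discussed in the next answer, but it does reduce the round-off accumulated over many terms.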

BTW, thanks for a really interesting problem!


Your problem is that whilst the iteration part of your algorithm is fine, the termination is not what you think it is.

The Taylor series expansion you are using is exact when the infinite sum is evaluated. However, you cannot evaluate an infinite sum, so you are truncating it.

I suppose you are assuming that when tmp becomes smaller than your desired tolerance, then the error in w is also less than that tolerance.

However, this is not true. The error at each iteration is the sum of the infinitely many remaining terms you are throwing away. The first of these, the value of tmp at the point of termination, may be less than your tolerance, but the sum of them all may be greater than your tolerance.

You happen to get away with it when x is negative, because then (-x) is positive, the sign of tmp alternates, and for an alternating series the truncation error is bounded by the first omitted term. When x is positive the terms all share one sign, and you only get away with it when x is close to zero.

However, I'm not convinced there is an easy way to come up with a simple general-purpose stopping criterion. You'd have to be able to put some bound on the terms which you are throwing away. This now becomes a mathematical problem rather than a programming problem.
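For this particular series, though, such a bound exists: |1.25/n - 1| < 1 for every n >= 1, so each term is at most |x| times the previous one, and the discarded tail is bounded by the geometric sum |tmp| * |x| / (1 - |x|). A sketch of a stopping rule built on that bound (my construction, not the answerer's; valid for |x| < 1):

#include <math.h>

// Stop when the geometric bound on the whole remaining tail,
// |tmp| * |x| / (1 - |x|), falls below the tolerance d,
// rather than when the single next term does.
long double s_bounded(long double x, long double d) {
    long double w = 1;
    long double tmp = 1;
    long double n = 1;
    long double r = fabsl(x) / (1 - fabsl(x)); // tail factor

    do {
        tmp *= (1.25 / n - 1) * (-x);
        w += tmp;
        n++;
    } while (fabsl(tmp) * r >= d);

    return w;
}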
