I have just made a program which calculates pi. However, even with 10 million iterations my result is a bit off: I get 3.1415927535897831, which already differs from the true value in the seventh decimal place. It is supposed to be 3.141592653589793238...
So my question is: how many iterations are required to get an answer that is accurate to within 10^-16?
Here is my code if anyone is interested:
#include <iostream>
#include <iomanip>
#include <cstdlib>
using namespace std;
int main()
{
    long double pi = 4.0;
    long double tempPi;
    for (int i = 1, j = 3; i <= 10000000; i++, j += 2)
    {
        tempPi = static_cast<double>(4) / j;
        if (i % 2 != 0)
        {
            pi -= tempPi;
        }
        else if (i % 2 == 0)
        {
            pi += tempPi;
        }
    }
    cout << "Pi has the value of: " << setprecision(16) << fixed << pi << endl;
    system("pause");
    return 0;
}
Any performance-related tips would also be appreciated.
You are using the Leibniz series, which converges very, very slowly. In an alternating series such as this one, the magnitude of the first omitted term is a good bound on the error of the partial sum. Your first omitted term is 4/20000003, so you should expect only about seven correct significant digits here.
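That same bound answers your 10^-16 question: you would need 4/(2N + 3) < 10^-16, i.e. roughly N ≈ 2×10^16 iterations of this loop, which is hopelessly impractical. The fix is a better algorithm, not more iterations.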
Note well: rounding errors and the use of double-precision numbers have nothing to do with the lack of precision here. The sole factor is that you are using a crappy algorithm.
There are lots of methods for calculating pi. Some converge faster than others.
Also see "Modern Formulae", where the sequence 1/a_k converges quartically to pi, giving about 100 digits in three steps and over a trillion digits after 20 steps.
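If you want something you can drop into your program right away, here is a minimal sketch of a different fast method, the Gauss–Legendre iteration (quadratically convergent, not the quartic iteration mentioned above). It roughly doubles the number of correct digits per step, so a handful of iterations already exhausts long double precision:

#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;
int main()
{
    // Gauss-Legendre iteration: a and b converge to a common limit,
    // and (a + b)^2 / (4t) converges to pi, roughly doubling the
    // number of correct digits with every step.
    long double a = 1.0L;
    long double b = 1.0L / sqrt(2.0L);
    long double t = 0.25L;
    long double p = 1.0L;
    for (int i = 0; i < 5; ++i)  // 5 steps are already more than long double can resolve
    {
        long double aNext = (a + b) / 2.0L;
        b = sqrt(a * b);
        t -= p * (a - aNext) * (a - aNext);
        p *= 2.0L;
        a = aNext;
    }
    cout << setprecision(18) << fixed << (a + b) * (a + b) / (4.0L * t) << endl;
    return 0;
}

Four or five iterations are enough here; beyond that the result is limited by the precision of long double, not by the algorithm.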
The problem is that double is not nearly as accurate as you hope. You can't even represent decimal 1.2 with 100% accuracy.
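You can see this yourself with a minimal sketch like the following, which prints the double closest to 1.2 with more digits than the type actually carries (the exact digits depend on your platform):

#include <iostream>
#include <iomanip>
using namespace std;
int main()
{
    double x = 1.2;  // stored as the nearest representable binary double, not exactly 1.2
    cout << setprecision(20) << fixed << x << endl;  // prints something like 1.19999999999999995559
    return 0;
}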
I didn't look at the code closely to see if there are other problems.
Since the result is wrong after 10 million iterations due to round-off errors, you won't get the correct answer with more loops; you will only add more error.