Mathematica internal number formats and precision

Tangentially related to this question, what exactly is happening here with the number formatting?

In[1]  := InputForm @ 3.12987*10^-270
Out[1] := 3.12987`*^-270

In[2]  := InputForm @ 3.12987*10^-271
Out[2] := 3.1298700000000003`*^-271

If you use *10.^ as the multiplier, the transition is where you would naively expect it to be:

In[3]  := InputForm @ 3.12987*10.^-16
Out[3] := 3.12987`*^-16

In[4]  := InputForm @ 3.12987*10.^-17
Out[4] := 3.1298700000000004`*^-17

whereas *^ pushes the transition a bit further, although here it is the machine precision that starts flaking out:

In[5]  := InputForm @ 3.12987*^-308
Out[5] := 3.12987`*^-308

In[6]  := InputForm @ 3.12987*^-309
Out[6] := 3.12987`15.954589770191008*^-309

The base (the mantissa digits) starts breaking up only much later:

In[7]  := InputForm @ 3.12987*^-595
Out[7] := 3.12987`15.954589770191005*^-595

In[8]  := InputForm @ 3.12987*^-596
Out[8] := 3.1298699999999999999999999999999999999999`15.954589770191005*^-596

I am assuming these transitions relate to the format in which Mathematica internally keeps its numbers, but does anyone know, or care to hazard an educated guess at, how?


If I understand correctly, you are wondering when InputForm will show more than six digits. If so, it happens haphazardly: whenever more digits are required to "best" represent the number obtained after evaluation. Since the evaluation involves an explicit multiplication by 10^(some power), and since the decimal input need not be (and in this case is not) exactly representable in binary, you can get small differences from what you expect.

In[26]:= Table[3.12987*10^-j, {j, 10, 25}] // InputForm

Out[26]//InputForm=
{3.12987*^-10,
 3.12987*^-11, 
 3.12987*^-12, 
 3.12987*^-13, 
 3.12987*^-14, 
 3.12987*^-15, 
 3.12987*^-16, 
 3.1298700000000004*^-17, 
 3.1298700000000002*^-18, 
 3.12987*^-19, 
 3.12987*^-20, 
 3.1298699999999995*^-21, 
 3.1298700000000003*^-22, 
 3.1298700000000004*^-23, 
 3.1298700000000002*^-24, 
 3.1298699999999995*^-25}
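
That last point is easy to check directly. As a small illustration (not part of the original answer), SetPrecision can be used to spell out the exact decimal value of the underlying binary numbers; the sketch below assumes only that documented behavior:

SetPrecision[3.12987, 25] // InputForm
(* exposes the exact value of the machine double nearest to 3.12987;
   it is close to, but not exactly, 3.12987 *)

SetPrecision[3.12987*10^-17, 25] // InputForm
(* the same check after the explicit multiplication by 10^-17, where
   the accumulated rounding shows up as the extra digits in the table *)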

As for the *^ input syntax, that's effectively a parsing (actually lexical) construct. No explicit exact power of 10 is computed. A floating-point value is constructed that is as faithful to your input as decimal-to-binary conversion allows. The InputForm will show as many digits as were used in inputting the number, because that is indeed the closest decimal to the corresponding binary value that got created.
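
A minimal side-by-side sketch of the two routes (the expected outputs are taken from the examples above, so treat them as illustrative rather than authoritative):

InputForm[3.12987*^-17]
(* the *^ literal is parsed straight into the closest double, so
   InputForm echoes the typed digits: 3.12987*^-17 *)

InputForm[3.12987*10^-17]
(* here the closest double to 3.12987 is multiplied by the exact
   power 10^-17; the product is a neighboring double, shown in the
   table above as 3.1298700000000004*^-17 *)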

When you surpass the limitations of machine floating-point numbers, you get an arbitrary-precision analog (that's the bignum analog to machine floats in Mathematica). Its precision is no longer MachinePrecision but the numerical value $MachinePrecision.
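
The crossover in the question lines up with the smallest normalized machine number; a quick way to confirm this, as a sketch (exact values may differ slightly by platform):

$MinMachineNumber
(* roughly 2.22507*10^-308; a literal smaller than this cannot be
   stored as a machine double *)

{MachineNumberQ[3.12987*^-308], MachineNumberQ[3.12987*^-309]}
(* expect {True, False}: the second literal becomes an
   arbitrary-precision (bignum) number *)

Precision[3.12987*^-309]
(* expect a numeric value equal to $MachinePrecision, about 15.9546,
   rather than the symbol MachinePrecision *)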

What you see in InputForm for 3.12987*^-596 (a decimal ending with a slew of 9's) is, I believe, caused by Mathematica's internal representation using guard bits. Were there only 53 mantissa bits, as in a machine double, the closest decimal representation would be the expected six digits.
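
A small check consistent with that reading (again just a sketch, comparing the nominal precision with what InputForm prints):

Precision[3.12987*^-596]
(* about 15.95 digits, yet the InputForm above prints several dozen
   digits; the surplus digits come from the internal (guard) bits,
   which InputForm spells out in full *)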

Daniel Lichtblau, Wolfram Research
