
Parsing a hex-formatted DEC 32-bit single precision floating point value in Python

I'm having problems parsing a hex-formatted DEC 32-bit single precision floating point value in Python. The value I'm parsing is represented as D44393DB in hex. The original floating point value is ~108, read from a display of the sending unit.

The format is specified as: 1-bit sign + 8-bit exponent + 23-bit mantissa. Byte 2 contains the sign bit plus the 7 most significant bits of the exponent; byte 1 contains the least significant bit of the exponent plus the most significant bits of the mantissa.

The only thing I have found that differs between the two formats is the exponent bias, which is 128 in DEC32 and 127 in IEEE-754 (http://www.irig106.org/docs/106-07/appendixO.pdf).

Using http://babbage.cs.qc.edu/IEEE-754/32bit.html does not give the expected result.

/Kristofer


Is it possible that the bytes got shuffled somehow? The arrangement of bits that you describe (sign bit in byte 2, LSB of the exponent in byte 1) is different from the Appendix O document that you link to. It looks like bytes 1 and 2 were exchanged.

I'll assume that bytes 3 and 4 were also exchanged, so that the real hex value is 43D4DB93. This translates to 0100 0011 1101 0100 1101 1011 1001 0011 in binary, so the sign bit is 0, indicating a positive number. The exponent is 10000111 (binary) = 135 (decimal), indicating a factor of 2^(135-128) = 128. Finally, the mantissa is 0.1101 0100 1101 1011 1001 0011 (binary), using the rule from Appendix O that you have to put 0.1 in front; this is approximately 0.8314 in decimal. So your number is 0.8314 * 128 = 106.4 under my assumptions.

Added: Some Python 2 code might clarify:

word = 0xD44393DB  # raw 32-bit value as received
# swap the two bytes within each 16-bit half: D4 43 93 DB -> 43 D4 DB 93
reshuffled = ((word & 0xFF00FF00) >> 8) | ((word & 0x00FF00FF) << 8)
signbit = (reshuffled & 0x80000000) >> 31            # bit 31: sign
exponent = ((reshuffled & 0x7F800000) >> 23) - 128   # bits 23-30: exponent, bias 128
# bits 0-22 plus the hidden leading bit, scaled so that 0.5 <= mantissa < 1
mantissa = float((reshuffled & 0x007FFFFF) | 0x00800000) / 2**24
result = (-1)**signbit * mantissa * 2**exponent

This yields result = 106.42885589599609.

Here is an explanation of the line computing the mantissa. Firstly, reshuffled & 0x007FFFFF yields the 23 bits encoding the mantissa: 101 0100 1101 1011 1001 0011. Then ... | 0x00800000 sets the hidden bit, yielding 1101 0100 1101 1011 1001 0011. We now have to compute the fraction 0.1101 0100 1101 1011 1001 0011. By definition, this equals 1*2^(-1) + 1*2^(-2) + 0*2^(-3) + ... + 1*2^(-23) + 1*2^(-24), which can also be written as (1*2^23 + 1*2^22 + 0*2^21 + ... + 1*2^1 + 1*2^0) / 2^24. The expression in brackets is the value of 1101 0100 1101 1011 1001 0011 (binary), so we can find the mantissa by dividing (reshuffled & 0x007FFFFF) | 0x00800000 by 2^24.
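
As a cross-check, here is a different route to the same number (my own sketch, not from the original answer). Since DEC32 reads the bits as 0.1m * 2^(e-128) and IEEE-754 reads the very same bits as 1.m * 2^(e-127), the same pattern interpreted as an IEEE-754 single comes out exactly 4 times too large, so the struct module can do the decoding:

import struct

# Reinterpret the reshuffled word as an IEEE-754 single, then divide by 4
# to undo the combined mantissa (factor 2) and bias (factor 2) difference.
ieee = struct.unpack('>f', struct.pack('>I', 0x43D4DB93))[0]
print(ieee / 4)  # 106.42885589599609, matching the result above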


From my copy of "Microcomputers and Memories" (DEC, 1981), you are correct about the difference between the two formats. The DEC mantissa is normalized to 0.5 <= f < 1 and the IEEE mantissa is normalized to 1 <= f < 2, both with the MSB implicit and not stored. Thus the mantissa bit layouts are the same. Jitse Niesen's assumptions look like a plausible explanation, since the value of D44393DB would be -0.7639748 * 2^40 (which is -8.3999923E11).
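
To verify that figure, here is a quick sketch (mine, not from the book) that decodes the un-swapped word under the DEC rules described above:

x = 0xD44393DB
sign = (x >> 31) & 1                                   # 1, so negative
exponent = ((x >> 23) & 0xFF) - 128                    # 168 - 128 = 40
fraction = ((x & 0x007FFFFF) | 0x00800000) / 2.0**24   # 0.7639748...
print((-1)**sign * fraction * 2**exponent)             # -839999225856.0, i.e. ~ -8.3999923E11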


Found under "Related" on the RHS: these answers from last month

One of the references helps with understanding the "wired" (weird?) byte 2 / byte 1 notation.


Is it definitely a DEC32 value? The sign bit seems to be 1, which would indicate a negative number in this format. However, you do get a result very close to your ~108 value if you ignore the sign and assume that the exponent bias is 15, retaining the 0.1 factor on the mantissa:

def decode(x):
    exp = (x >> 30) & 0xff          # takes only the top two bits of the word
    mantissa = x & ((2**24) - 1)    # low 24 bits, read as an integer
    return 0.1 * mantissa * (2**(exp - 15))

>>> decode(0xD44393DB)
108.12409668
