What is the meaning of pData[1+2*i]<<8|pData[2+2*i], where pData[] is the array containing BYTE data?
I have the following call in the main function:
{
..........
....
BYTE Receivebuff[2048];
..
ReceiveWavePacket(&Receivebuff[i], nNextStep);
....
...
..
}
Where Receivebuff is the array of type BYTE.
void ReceiveWavePacket(BYTE * pData, UINT nSize)
{
    CString strTest;
    for(int i = 0; i < 60; i++)
    {
        strTest.Format("%d\n", (USHORT)(pData[1+2*i] << 8 | pData[2+2*i]));
        m_edStatData.SetWindowTextA(strTest);
    }
}
I want to know the meaning of (USHORT)(pData[1+2*i]<<8|pData[2+2*i]). Can anybody please help me?
This seems to be code for synthesizing a 16-bit value out of two eight-bit values. If you'll note, the math has the form

(a << 8) | b

for suitable values of a and b. The first part, (a << 8), takes the eight bits in a and shifts them up eight positions, giving a 16-bit value whose upper eight bits are the bits from a and whose lower eight bits are all zero. Applying the bitwise OR operator between this new value and the value of b creates a sixteen-bit value whose upper eight bits are the bits of a (because zero-extending b for the OR step leaves those bits intact) and whose lower eight bits are the bits of b, since ORing zero bits with the bits of b yields b.
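As a minimal sketch of the idea (standalone C++; the variable names hi, lo and combined are mine, not from the original code):

#include <cstdio>

typedef unsigned char  BYTE;
typedef unsigned short USHORT;

int main()
{
    BYTE hi = 0xAB;               // will become the upper eight bits
    BYTE lo = 0xCD;               // will become the lower eight bits

    // Shift the high byte up eight positions, then OR in the low byte.
    USHORT combined = (USHORT)((hi << 8) | lo);

    printf("0x%04X\n", combined); // prints 0xABCD
    return 0;
}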
I think @templatetypedef is correct; it does look like code that creates a 16-bit value by bitwise OR'ing two 8-bit values (since it's OR'ing two elements from your array of BYTEs).
Bitwise OR'ing works bit by bit: if either of the corresponding bits is 1, then the result bit is 1 ("this bit OR that bit").
As a further example, take a look at this function that takes a pointer to char data (each char is 8 bits in size) and returns an int (32 bits in this implementation):
int Read32(const char *pcData)
{
    // mask each byte with 0xff before shifting so sign extension of char
    // values cannot corrupt the result
    return ((pcData[3] & 0xff) << 24) |
           ((pcData[2] & 0xff) << 16) |
           ((pcData[1] & 0xff) << 8)  |
            (pcData[0] & 0xff);
}
If pcData is a pointer to an array of char, the function takes the char at index 3 and shifts it up by 24 bits. For example, if pcData[3] was 10110001, it is now

10110001000000000000000000000000

It takes this 32-bit value and subsequently OR's it with pcData[2] shifted by 16 bits, which means that if pcData[2] is 11111111, then the 32-bit value is now:

10110001111111110000000000000000

and so on with pcData[1] and pcData[0].
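A quick usage sketch of that function (the buffer contents are made up for illustration):

#include <cstdio>

// Reassemble four bytes, least significant first, into a 32-bit value,
// as in the Read32 function above.
int Read32(const char *pcData)
{
    return ((pcData[3] & 0xff) << 24) |
           ((pcData[2] & 0xff) << 16) |
           ((pcData[1] & 0xff) << 8)  |
            (pcData[0] & 0xff);
}

int main()
{
    char buffer[4] = { 0x78, 0x56, 0x34, 0x12 };  // made-up sample bytes
    printf("0x%08X\n", Read32(buffer));           // prints 0x12345678
    return 0;
}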
@templatetypedef's answer is correct, but there is another interesting thing going on here:

(pData[1+2*i]<<8|pData[2+2*i])

i goes from 0 to 59, so this will process 60 words, starting with pData[1] (1+2*0 == 1). This means that the first byte in the array is never processed, which seems odd. Why isn't this the more natural pData[2*i]<<8|pData[2*i+1]?
One possibility: word data can be stored in a byte stream in two ways: the word 0xAA11 can be stored as 0xAA 0x11 or as 0x11 0xAA. Imagine it is the latter.

For the words 0xAA11 0xBB22 0xCC33 ... the byte stream will be 11 AA 22 BB 33 CC ...

Parsing with the 'natural' method would give 0x11AA 0x22BB ..., which is obviously wrong. This code will print 0xAA22 0xBB33 0xCC44 ..., which will probably pass a quick-look sanity check but is actually totally incorrect. I hope the extra +1 wasn't added to "fix" the endianness issue.
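A small sketch of what that looks like in practice (the byte values and loop bounds are my own illustration, not from the original code):

#include <cstdio>

typedef unsigned char  BYTE;
typedef unsigned short USHORT;

int main()
{
    // The words 0xAA11 0xBB22 0xCC33 stored low byte first.
    BYTE stream[] = { 0x11, 0xAA, 0x22, 0xBB, 0x33, 0xCC };

    // Correct decoding for this byte order: low byte at 2*i, high byte at 2*i+1.
    for (int i = 0; i < 3; i++)
        printf("0x%04X ", (USHORT)(stream[2*i+1] << 8 | stream[2*i]));
    printf("\n");   // prints 0xAA11 0xBB22 0xCC33

    // The questioner's indexing: pairs the high byte of one word with the
    // low byte of the next word, producing plausible-looking but wrong values.
    for (int i = 0; i < 2; i++)
        printf("0x%04X ", (USHORT)(stream[1+2*i] << 8 | stream[2+2*i]));
    printf("\n");   // prints 0xAA22 0xBB33
    return 0;
}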