My understanding is that a C++ reinterpret_cast and a C pointer cast are just compile-time functionality, and that they have no performance cost at all.
Is this true?
It's a good assumption to start with. However, the optimizer may be restricted in what it can assume in the presence of a reinterpret_cast<> or C pointer cast: even though the cast itself emits no instructions, the resulting code can be slower.
For instance, if you cast an int to a pointer, the optimizer likely has no idea what that pointer could be pointing to. As a result, it has to assume that a write through that pointer can change any variable. That defeats very common optimizations such as keeping variables in registers.
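Here is a minimal sketch of that effect (the function and its parameters are hypothetical, just for illustration). Because p is manufactured from an integer, the compiler must assume the store through it may alias *src, so *src has to be reloaded from memory on every iteration instead of being cached in a register:

#include <cstdint>

int sum_through_alias(const int* src, int n, std::uintptr_t addr)
{
    int* p = reinterpret_cast<int*>(addr); // opaque: could point anywhere
    int sum = 0;
    for (int i = 0; i < n; ++i) {
        sum += *src; // cannot be cached in a register across iterations...
        *p = sum;    // ...because this store may alias *src
    }
    return sum;
}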
That's right. There is no cost other than any gain or loss in performance from performing instructions at the new width, which, I might add, is only a concern in rare cases. Casting between pointers on every platform I've ever heard of has zero cost, and no performance change whatsoever.
C-style casts in C++ will attempt a static_cast first and only perform a reinterpret_cast if a static_cast cannot be performed. A static_cast may change the value of the pointer in the case of multiple inheritance (or when casting an interface to a concrete type); this offset calculation may involve an extra machine instruction. That is at most one machine instruction, so really very small.
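A small sketch of that pointer adjustment (the struct names are made up for illustration). With multiple inheritance, static_cast adjusts the address to the B subobject, while reinterpret_cast just reuses the raw address:

#include <cstdio>

struct A { int a; };
struct B { int b; };
struct C : A, B {};

int main()
{
    C c;
    B* via_static = static_cast<B*>(&c);           // adjusted past the A subobject
    B* via_reinterpret = reinterpret_cast<B*>(&c); // raw address, no adjustment
    // The two pointers typically differ by sizeof(A); using via_reinterpret
    // as a B* is undefined behavior here.
    std::printf("%p %p\n", (void*)via_static, (void*)via_reinterpret);
}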
Yes, this is true. The cast that does have a runtime cost is dynamic_cast.
You're right, but think about it: reinterpret_cast usually means either a questionable design or that you're doing something very low level.
dynamic_cast, on the other hand, will cost you something, because it has to consult a lookup table (the RTTI data) at runtime.
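For contrast, a minimal sketch of that runtime lookup (types are hypothetical). The dynamic_cast below walks the RTTI data at runtime to decide whether the downcast is valid; that check is what you pay for, unlike reinterpret_cast:

#include <iostream>

struct Base { virtual ~Base() = default; };
struct Derived : Base { void hello() { std::cout << "Derived\n"; } };

void use(Base* b)
{
    // Runtime type check: returns nullptr if b does not point to a Derived.
    if (Derived* d = dynamic_cast<Derived*>(b))
        d->hello();
}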
reinterpret_cast does not incur runtime cost. However, you have to be careful, as every use of reinterpret_cast is implementation-defined. For example, reinterpreting a char array as an int array could cause the target architecture to throw an interrupt, because different types may have different alignment rules.
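A sketch of the hazard and a common workaround (function names are made up). The first version may fault on architectures that require aligned loads, and it violates strict aliasing anyway; the memcpy version expresses the same byte reinterpretation safely, and compilers typically turn it into a plain load where that is legal:

#include <cstring>

int unsafe_read(const char* buf)
{
    // buf + 1 is usually not 4-byte aligned: may trap, and is UB regardless.
    return *reinterpret_cast<const int*>(buf + 1);
}

int safe_read(const char* buf)
{
    int value;
    std::memcpy(&value, buf + 1, sizeof value); // well-defined byte copy
    return value;
}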
Get correct first, then worry about efficiency.
I was looking at my assembler code before and after reinterpret-casting a signed char as an unsigned char. The output grew by about three or four instructions.
int main()
{
    signed char i = 0x80;    // implementation-defined: typically -128
    (unsigned char&)i >>= 7; // reinterpret the storage as unsigned char,
                             // hoping for a logical right shift
    return i;
}
I was casting to unsigned char to make the compiler use the SHR instruction rather than SAR, so that the newly shifted-in bits would be zeros instead of copies of i's sign bit.
The compiler still seems to always use the SAR instruction, but the reinterpret cast made the compiler add more instructions. Three or four more instructions!
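For what it's worth, casting the value rather than reinterpreting the storage usually gets the logical shift without the extra instructions. A minimal sketch (exact codegen varies by compiler and optimization level):

#include <cstdio>

int main()
{
    signed char i = 0x80;                        // implementation-defined: typically -128
    unsigned char u = static_cast<unsigned char>(i); // value conversion, no reference cast
    u >>= 7;                                     // unsigned shift: zeros shifted in
    return u;                                    // 1
}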
I was concerned about why my Unicode function for converting a UTF-8 string to UTF-16 was almost three times slower than Win32 MultiByteToWideChar(). Now I am worried that casting is one of the main factors.
Which is IRONIC, as we use reinterpret_cast for speed.