I know that polymorphism can add a noticeable overhead: calling a virtual function is slower than calling a non-virtual one. (All my experience is with GCC, but I think/heard that this is true for any real compiler.)
Many times a given virtual function gets called on the same object over and over; I know the object's type doesn't change, and most of the time the compiler could easily deduce that as well:
BaseType &obj = ...;
while( looping )
    obj.f(); // BaseType::f is virtual
To speed up the code, I could rewrite it like this:
BaseType &obj = ...;
FinalType &fo = dynamic_cast< FinalType& >( obj );
while( looping )
    fo.f(); // the hope: this call can be resolved statically (devirtualized)
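As an aside, dynamic_cast itself performs a run-time check, so it adds its own cost. If the concrete type is known for certain, static_cast avoids even that, at the price of undefined behaviour if the guess is wrong; a minimal sketch:

FinalType &fo = static_cast< FinalType& >( obj ); // no run-time check; UB if obj is not a FinalType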
I wonder what's the best way to avoid this overhead due to polymorphism in these cases.
The idea of downcasting (as shown in the second snippet) doesn't look that good to me: BaseType could be inherited by many classes, and trying to downcast to each of them would be pretty verbose.
Another idea could be storing obj.f in a member function pointer (I didn't test this, and I'm not sure it would kill the run-time overhead), but again this method doesn't look perfect: like the method above, it would require writing more code, and it wouldn't be able to exploit some optimizations (e.g. if FinalType::f were an inline function, it wouldn't get inlined -- but I guess the only way to achieve that would be to downcast obj to its final type...)
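For reference, here is a minimal sketch of the function-pointer idea (assuming f has the signature void(), as in the snippets above). Note that a pointer to a virtual member function still dispatches on the object's dynamic type at the call site, so this does not actually remove the overhead:

void (BaseType::*pf)( ) = &BaseType::f; // the pointer records that f is virtual
while( looping )
    (obj.*pf)( ); // still a virtual dispatch, plus the indirection of the pointer itself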
So, is there any better method?
Edit: Well, of course this is not going to have that much impact. This question was mostly to find out whether there is anything to be done: since this overhead buys nothing here and looks very easy to kill, I don't see why not to.
A lightweight keyword for little optimizations, like C99's restrict, to tell the compiler that a polymorphic object is of a fixed type, is what I was hoping for.
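For what it's worth, C++11 later added something close to this: the final specifier, which promises the compiler that no further override can exist, allowing it to devirtualize such calls. A minimal sketch, assuming a C++11 compiler:

struct FinalType final : BaseType { // C++11: nothing can derive from FinalType
    void f( ) override { }
};

FinalType &fo = static_cast< FinalType& >( obj );
fo.f( ); // the dynamic type is known exactly, so the call may be bound statically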
Anyway, just to answer the comments: a little overhead is present. Look at this ad hoc, extreme example:
struct Base { virtual void f(){} };
struct Final : public Base { void f(){} };

int main( ) {
    Final final;
    Final &f = final;
    Base &b = f;
    for( int i = 0; i < 1024*1024*1024; ++ i )
#ifdef BASE
        b.f( );
#else
        f.f( );
#endif
    return 0;
}
Compiling and running it, measuring user time in seconds:
$ for OPT in {"",-O0,-O1,-O2,-O3,-Os}; do
      for DEF in {BASE,FINAL}; do
          g++ $OPT -D$DEF -o virt virt.cpp &&
          TIME="$DEF $OPT: %U" time ./virt;
      done;
  done
BASE : 5.19
FINAL : 4.21
BASE -O0: 5.22
FINAL -O0: 4.19
BASE -O1: 3.55
FINAL -O1: 1.53
BASE -O2: 3.61
FINAL -O2: 0.00
BASE -O3: 3.58
FINAL -O3: 0.00
BASE -Os: 6.14
FINAL -Os: 0.00
I guess only -O2, -O3 and -Os are inlining Final::f.
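One way to check (a sketch; the .s file names are just illustrative) is to compare the generated assembly:

$ g++ -O2 -DFINAL -S -o final.s virt.cpp # the loop should be optimized away entirely
$ g++ -O2 -DBASE -S -o base.s virt.cpp   # the loop should still contain an indirect call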
These tests were run on my machine, with the latest GCC, on an AMD Athlon(tm) 64 X2 Dual Core Processor 4000+. I guess it could be a lot slower on a cheaper platform.
If dynamic dispatch is a performance bottleneck in your program, then the way to solve the problem is not to use dynamic dispatch (don't use virtual functions).
You can replace some run-time polymorphism with compile-time polymorphism by using templates and generic programming instead of virtual functions. This may or may not result in better performance; only a profiler can tell you for sure.
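A minimal sketch of that idea, reusing the Base/Final structs from the question (the run function is illustrative, not from the original code):

struct Base { virtual void f( ){ } };
struct Final : public Base { void f( ){ } };

template< typename T >
void run( T &obj, long iterations ) {
    for( long i = 0; i < iterations; ++ i )
        obj.f( ); // bound at compile time; can be inlined
}

int main( ) {
    Final final;
    run( final, 1024L*1024*1024 ); // instantiated as run<Final>; no vtable lookup
    return 0;
}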
To be clear though, as wilhelmtell has already pointed out in comments to the question, it's rare that the overhead caused by dynamic dispatch is significant enough to worry about. Be absolutely sure that it's your performance hot-spot before you go replacing built-in convenience with an unwieldy custom implementation.
If you need to use polymorphism, then use it. There is really no faster way to do it.
However, I would respond with another question: Is this your biggest problem? If so, your code is already optimal or nearly so. If not, find out what the biggest problem is, and concentrate on that instead.