I see some code like this:
float num2 = ( ( this.X * this.X ) + ( this.Y * this.Y ) ) + ( this.Z * this.Z );
float num = 1f / ( ( float ) Math.Sqrt ( ( double ) num2 ) );
this.X *= num;
this.Y *= num;
this.Z *= num;
Does it matter if it was like this?:
float num2 = ( ( this.X * this.X ) + ( this.Y * this.Y ) ) + ( this.Z * this.Z );
float num = 1 / ( ( float ) Math.Sqrt ( ( double ) num2 ) );
this.X *= num;
this.Y *= num;
this.Z *= num;
Would the compiler use (float) / (float), or try to use (double) / (float), for line 2 of the second example?
EDIT: Btw would there be any performance difference?
In the second example the operands are an int and a float. Since Int32 is implicitly convertible to Single, the compiler converts the literal 1 to float and performs a (float)/(float) division, so it won't complain, and it will work fine.
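A minimal standalone sketch of that case (the variable names mirror the question; the sample value 9f is an assumption for illustration):

```csharp
using System;

class IntOverFloatDemo
{
    static void Main()
    {
        float num2 = 9f;

        // The literal 1 is an Int32, but it converts implicitly to Single,
        // so this compiles and the division is performed as float / float.
        float num = 1 / (float)Math.Sqrt((double)num2);

        Console.WriteLine(num.GetType()); // System.Single
    }
}
```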
That being said, it will complain if you do:
float num = 1.0 / ( ( float ) Math.Sqrt ( ( double ) num2 ) );
This would make the division (double)/(float), which effectively becomes (double)/(double). The compiler will then complain when you try to implicitly assign that double result to a float variable.
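For illustration, a minimal sketch of both the failing case (kept as a comment, since it does not compile) and the explicit-cast fix:

```csharp
using System;

class DoubleLiteralDemo
{
    static void Main()
    {
        float num2 = 9f;

        // Does not compile: 1.0 is a double, so the whole expression is double.
        // float bad = 1.0 / (float)Math.Sqrt((double)num2);
        // error CS0266: Cannot implicitly convert type 'double' to 'float'.

        // An explicit cast back to float fixes it:
        float ok = (float)(1.0 / Math.Sqrt((double)num2));
        Console.WriteLine(ok.GetType()); // System.Single
    }
}
```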
EDIT: Btw would there be any performance difference?
Probably not a measurable one. That said, you will be creating extra conversion operations in the IL. These may get eliminated by the JIT compiler, but either way the difference will be microscopic.
Personally, I would probably handle this using double precision math, since it would make the code easier to read:
double num2 = (this.X * this.X) + (this.Y * this.Y) + (this.Z * this.Z);
float num = (float) (1.0 / Math.Sqrt(num2));
this.X *= num;
// ...
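Spelled out as a complete, runnable method (the Vector3 struct, its constructor, and the Main driver here are assumptions for illustration, mirroring the fields in the question):

```csharp
using System;

struct Vector3
{
    public float X, Y, Z;

    public Vector3(float x, float y, float z) { X = x; Y = y; Z = z; }

    // Normalize using a double-precision intermediate, narrowing to float once.
    public void Normalize()
    {
        double num2 = ((double)X * X) + ((double)Y * Y) + ((double)Z * Z);
        float num = (float)(1.0 / Math.Sqrt(num2));
        X *= num;
        Y *= num;
        Z *= num;
    }
}

class Program
{
    static void Main()
    {
        var v = new Vector3(3f, 0f, 4f); // length 5
        v.Normalize();
        Console.WriteLine($"{v.X} {v.Y} {v.Z}"); // approximately 0.6 0 0.8
    }
}
```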
No; it would be the same. If you changed 1f to 1.0 (or 1d), though, the result would be a double.
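A minimal sketch confirming how the literal's type drives the result type (variable names are assumptions):

```csharp
using System;

class LiteralTypeDemo
{
    static void Main()
    {
        float f = 2f;
        var a = 1f / f;  // float / float -> float
        var b = 1.0 / f; // f is promoted, so double / double -> double
        Console.WriteLine(a.GetType()); // System.Single
        Console.WriteLine(b.GetType()); // System.Double
    }
}
```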