Why use Decimal.Multiply vs operator multiply?

decimal result = 100 * 200;

vs

decimal result = Decimal.Multiply(100, 200);


Using Decimal.Multiply forces both inputs to be of type decimal, rather than letting the multiplication run in whatever types the operands happen to have and converting the result to decimal afterwards.

Decimal.Multiply(decimal d1, decimal d2) also guarantees an output of type decimal. Whereas with * you could do:

decimal result = yourDecimal * yourInt; 

This lets you mix and match types in some cases, and the compiler handles the conversions for you, but the result type is not guaranteed to be decimal; it depends on how the right-hand side is defined.
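
For example (a minimal sketch; the variable names are illustrative):

decimal price = 19.99m;
int quantity = 3;

// decimal * int: the int operand is implicitly converted to decimal,
// so the whole multiplication is done in decimal arithmetic.
decimal total = price * quantity;

// int * int: both operands are int, so the product is computed in int
// arithmetic; only the finished (possibly overflowed) result is
// converted to decimal by the assignment.
decimal product = quantity * quantity;

// Decimal.Multiply forces both arguments to decimal up front.
decimal forced = Decimal.Multiply(quantity, quantity);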


The * operator is overloaded and calls Multiply internally. It's really just a matter of readability.
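
For instance, these two statements are interchangeable; the compiler resolves the first one to the decimal overload of *:

decimal x = 1.5m, y = 2.5m;
decimal viaOperator = x * y;                  // operator syntax
decimal viaMethod = Decimal.Multiply(x, y);   // explicit method call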


The * operator is overloaded for decimal operands, and in that case it is identical to Decimal.Multiply(). However, the overloaded * operator requires that at least one of the operands is decimal; otherwise, some other * operator is chosen. In decimal result = 100 * 200 the two int values are multiplied first, and only the product is converted to decimal. If that product is bigger than Int32.MaxValue, the int multiplication overflows.

decimal d1 = 2147483647 * 2;                  // Overflow: int * int (reported at compile time for constants)
decimal d2 = 2147483647m * 2;                 // OK: decimal * int, computed in decimal arithmetic
decimal d3 = Decimal.Multiply(2147483647, 2); // OK: both arguments converted to decimal first
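
With constant operands, as above, the compiler catches the int overflow at compile time. With non-constant operands it cannot; a small sketch of the runtime behavior (variable names are illustrative):

int a = 2147483647;  // Int32.MaxValue
int b = 2;

// int * int in the default unchecked context wraps around silently,
// and only the wrong result is converted to decimal.
decimal wrapped = a * b;                   // -2

// Forcing decimal operands avoids int arithmetic entirely.
decimal correct = Decimal.Multiply(a, b);  // 4294967294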


Some languages do not support operator overloading; code written in those languages must call the Multiply() method instead.


There is no requirement for a .NET language to directly support decimals, but in such a language you can still use the System.Decimal struct. Since there is also no requirement that a .NET language support operator overloading, the methods are needed so the type can still be used fully in that case.
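
To make this visible from C#: an overloaded * operator is compiled to a public static method named op_Multiply, which a language without operator syntax can bind to like any other static method. A sketch using reflection, purely for illustration:

using System;
using System.Reflection;

MethodInfo op = typeof(decimal).GetMethod(
    "op_Multiply", new[] { typeof(decimal), typeof(decimal) });
object result = op.Invoke(null, new object[] { 3m, 4m });
Console.WriteLine(result); // 12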
