The problem is that the SqlDecimal type holds more significant digits (up to 38) than the Decimal type native to the CLR (28-29). So how does one map between the two in the most practical way? This won't work reliably:
SqlDecimal x = ...
decimal z = x.Value; // can throw OverflowException
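
To make the limits concrete, here is a small self-contained demo; the specific value is just an illustration chosen to exceed Decimal's 28-29 significant digits while staying within SqlDecimal's 38:

using System;
using System.Data.SqlTypes;

class OverflowDemo
{
    static void Main()
    {
        // 33 significant digits: valid for SqlDecimal, too many for System.Decimal.
        SqlDecimal wide = SqlDecimal.Parse("123456789012345678901234567890.123");

        try
        {
            Console.WriteLine(wide.Value);
        }
        catch (OverflowException)
        {
            Console.WriteLine("Value overflowed System.Decimal");
        }
    }
}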
One can make more values convert by stripping trailing zeros. But if you accept the loss of precision that the conversion entails, you would expect there to be a function that performs this lossy conversion for you.
Is there one? Or what would best practice be here?
I've already written a function that both crops and removes trailing zeros to do the conversion, but I'd rather use a standard .NET BCL function if one exists. A sketch of the kind of helper I mean is below.
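
Here is a minimal sketch of such a helper (the name ToDecimalLossy and the 28-digit cap are choices made for this illustration, not BCL members, and this is not the exact code in question): it first drops trailing fractional zeros losslessly, then rounds away whatever excess precision remains.

using System;
using System.Data.SqlTypes;

static class SqlDecimalHelper
{
    // System.Decimal holds at most 28-29 significant digits; 28 is a safe cap.
    private const int MaxDecimalDigits = 28;

    public static decimal ToDecimalLossy(SqlDecimal value)
    {
        // 1. Strip trailing zeros: lowering the scale by one is lossless
        //    whenever the dropped fractional digit is zero, which we detect
        //    by checking that the rounded value is still numerically equal.
        while (value.Scale > 0)
        {
            SqlDecimal shorter = SqlDecimal.Round(value, value.Scale - 1);
            if ((shorter != value).IsTrue)
                break; // the dropped digit was significant
            value = shorter;
        }

        // 2. Crop: round away fractional digits until the total number of
        //    significant digits fits into System.Decimal.
        if (value.Precision > MaxDecimalDigits)
        {
            int digitsToDrop = value.Precision - MaxDecimalDigits;
            int newScale = Math.Max(0, value.Scale - digitsToDrop);
            value = SqlDecimal.Round(value, newScale);
        }

        // Still throws OverflowException if the integer part alone is too
        // large to represent as a System.Decimal.
        return value.Value;
    }
}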
Use SqlDecimal.Round with a precision that matches the .NET decimal type.
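
A minimal sketch of that suggestion (the 28-digit cap and the scale arithmetic are assumptions made here, not part of the answer): round off just enough fractional digits that at most 28 significant digits remain, then read Value.

using System;
using System.Data.SqlTypes;

class RoundExample
{
    static void Main()
    {
        // 38 significant digits: too many for System.Decimal as-is.
        SqlDecimal x = SqlDecimal.Parse("12345678901234567890.123456789012345678");

        // Drop enough fractional digits to keep at most 28 significant digits.
        int excess = Math.Max(0, x.Precision - 28);
        SqlDecimal rounded = SqlDecimal.Round(x, Math.Max(0, x.Scale - excess));

        decimal z = rounded.Value;  // no longer overflows for this value
        Console.WriteLine(z);       // 12345678901234567890.12345679
    }
}

Note that this only helps with excess fractional digits; if the integer part alone exceeds what decimal can represent, the conversion is lossy beyond rounding and Value will still throw.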