I was just reading the MSDN article on Precision Handling.
Taken from the table on that page:
Operation: e1 / e2
Result precision: p1 - s1 + s2 + max(6, s1 + p2 + 1)
Result scale: max(6, s1 + p2 + 1)
For the explanation of the used expressions:
The operand expressions are denoted as expression e1, with precision p1 and scale s1, and expression e2, with precision p2 and scale s2.
What I do not understand (or rather, am not 100% sure I understand) is this expression:
max(6, s1 + p2 + 1)
Can someone explain it to me?
Many thanks :)
See my worked example here: T-SQL Decimal Division Accuracy
It means the scale of the result is the maximum of 6 and (s1 + p2 + 1). In other words, the result of a division always carries at least 6 decimal places, and more when s1 + p2 + 1 exceeds 6.
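As a rough sketch, the quoted formulas can be computed directly. This is a minimal Python illustration (the function name and example operand types are my own); note it ignores SQL Server's adjustment when the computed precision exceeds the maximum of 38:

```python
def division_type(p1, s1, p2, s2):
    """Result precision/scale for decimal(p1, s1) / decimal(p2, s2),
    per the MSDN table quoted above (no overflow capping at 38)."""
    scale = max(6, s1 + p2 + 1)
    precision = p1 - s1 + s2 + scale
    return precision, scale

# decimal(5, 2) / decimal(5, 2):
# scale = max(6, 2 + 5 + 1) = 8; precision = 5 - 2 + 2 + 8 = 13
print(division_type(5, 2, 5, 2))  # (13, 8)

# decimal(4, 0) / decimal(1, 0):
# s1 + p2 + 1 = 2, so the max(6, ...) floor kicks in and scale = 6
print(division_type(4, 0, 1, 0))  # (10, 6)
```

So the max(6, ...) term is simply a floor: even when both operands have little or no scale, the division result is still typed with at least 6 decimal places.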