I usually program in C++, but for school I have to do a project in C#.
So I went ahead and coded the way I was used to in C++, but was surprised when the compiler complained about code like the following:
const uint size = 10;
ArrayList myarray = new ArrayList(size); // Arg 1: cannot convert from 'uint' to 'int'
OK, they expect int as the argument type, but why? I would feel much more comfortable with uint as the argument type, because uint fits much better in this case.
Why do they use int as the argument type pretty much everywhere in the .NET library, even though in many cases negative numbers don't make any sense (since no container or GUI element can have a negative size)?
If the reason they used int is that they didn't expect the average user to care about signedness, why didn't they additionally add overloads for uint?
Is this just MS not caring about sign correctness, or are there cases where negative values make some sense / carry some information (error codes?) for container/GUI widget/... sizes?
I would imagine that Microsoft chose Int32 because UInt32 is not CLS-compliant (in other words, not all languages that use the .NET Framework support unsigned integers).
Because unsigned integers are not CLS-compliant. There are languages that lack support for them; Java would be an example.
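To make the CLS point concrete, here is a minimal sketch (the Sizes class and its method names are invented for illustration): once an assembly opts into CLS compliance, the C# compiler emits warning CS3001 for any public member that exposes uint, which is why the BCL sticks to int and rejects negative values at runtime instead.

```csharp
using System;

// Opting the assembly into CLS compliance makes the compiler flag
// public surface area that non-CLS languages cannot consume.
[assembly: CLSCompliant(true)]

public static class Sizes
{
    // Compiler warning CS3001: argument type 'uint' is not CLS-compliant.
    public static void Reserve(uint capacity) { }

    // The CLS-compliant pattern the BCL uses: take int, reject negatives.
    public static void ReserveCls(int capacity)
    {
        if (capacity < 0)
            throw new ArgumentOutOfRangeException(nameof(capacity));
    }
}
```

The code still compiles; CS3001 is only a warning, but it tells library authors their API would be unusable from languages without unsigned types.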
In addition to the answers talking about CLS compliance, consider that integer math (e.g. 10 + 2) results in integer (as in signed) data, which only makes sense; now consider the bother of having to cast every mathematical expression to uint to pass it to one of the methods you refer to.
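A small illustration of that casting friction (SetCapacity is a hypothetical stand-in for a uint-taking API):

```csharp
using System;
using System.Collections;

class CastBother
{
    // Hypothetical variant of an API that takes uint instead of int.
    public static void SetCapacity(uint capacity) { }

    static void Main()
    {
        int itemCount = 10;
        int padding = 2;

        // int + int yields int, so the real int-taking constructor
        // is called with no ceremony:
        int needed = itemCount + padding;
        var list = new ArrayList(needed);
        Console.WriteLine(list.Capacity);    // 12

        // A uint-taking API would force a cast at every call site:
        SetCapacity((uint)needed);
    }
}
```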
As for overloads that take uint: in many cases method arguments are stored as (or used to calculate) property values, which are usually of type int (again for CLS compliance, and possibly for integer-math convenience); the discrepancy in sign would be confusing, not to mention vulnerable to overflow.
Stroustrup prefers int over uint in The C++ Programming Language, and I think his reasons apply to C# too:
It's about underflow with no warning:
// Unintended very large uint (in C# the constant form is a compile
// error, so use variables; the subtraction silently wraps around)
uint a = 5, b = 10;
uint oops = a - b; // 4294967291
or:
// Unintended infinite loop
for (uint counter = 10; counter >= 0; counter--)
    ; // Do something
The extra bit of info is rarely worth having to watch for these kinds of bugs.
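Both pitfalls can be demonstrated safely in C# (the Decrement helper is made up here, with a cap so the "infinite" loop actually terminates):

```csharp
using System;

class UnsignedPitfalls
{
    // Decrements a uint 'times' times; wraps past zero instead of failing.
    public static uint Decrement(uint start, int times)
    {
        uint counter = start;
        for (int i = 0; i < times; i++)
            counter--;               // 0 - 1 wraps to uint.MaxValue
        return counter;
    }

    static void Main()
    {
        // Subtraction that "goes negative" silently wraps around
        // (runtime uint arithmetic is unchecked by default):
        uint a = 5, b = 10;
        Console.WriteLine(a - b);            // 4294967291, i.e. 2^32 - 5

        // counter >= 0 is always true for a uint, so the loop in the
        // answer never terminates; here the decrements are capped instead:
        Console.WriteLine(Decrement(2, 5));  // 4294967293
    }
}
```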
This possibly comes a little late, but I just found the question and want to add a missing bit.
A prime example where negative values make perfect sense is in graphical frameworks. For sizes, as stated in the question, negatives are out of the question, but for position values it's perfectly acceptable to have negative values. Such values make objects appear off-screen, or at least partially cropped:
It follows the very same principle as in mathematics: negative coordinates just make points move in the direction opposite to where the axis grows. Assuming that (0,0) is at the upper-left corner of the screen, negative values displace things to the left of and above that point, making them only partially visible.
This is useful, for example, if you want to implement a scrolling region where the contents are larger than the available space. As you scroll, object positions simply become negative so they begin to disappear off the top, or grow larger than the region's height so they disappear off the bottom.
Such things aren't limited to C#. WinForms and WPF use this, as mentioned in the question, but most other graphical environments behave the same way. HTML+CSS can place elements in the same way, and the C/C++ library SDL can also make use of this effect.
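The scroll math described above can be sketched like this (ItemHeight and the helper names are invented for illustration): each item's screen position is its content position minus the scroll offset, so items scrolled past simply get negative Y coordinates.

```csharp
using System;

class ScrollSketch
{
    const int ItemHeight = 40;

    // Screen-space Y of item 'index' after scrolling by 'scrollOffset'.
    public static int ItemY(int index, int scrollOffset) =>
        index * ItemHeight - scrollOffset;

    // An item is (at least partially) visible if any part of it overlaps
    // the viewport [0, viewportHeight).
    public static bool IsVisible(int y, int viewportHeight) =>
        y + ItemHeight > 0 && y < viewportHeight;

    static void Main()
    {
        int viewportHeight = 100, scrollOffset = 60;

        Console.WriteLine(ItemY(0, scrollOffset)); // -60: fully above the viewport
        Console.WriteLine(ItemY(1, scrollOffset)); // -20: cropped at the top
        Console.WriteLine(IsVisible(ItemY(1, scrollOffset), viewportHeight)); // True
        Console.WriteLine(IsVisible(ItemY(0, scrollOffset), viewportHeight)); // False
    }
}
```

Note that the position type has to be signed for this to work at all, which is exactly why int is the natural choice for coordinates.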