
Why do programming languages allow assignment from integer to shortint?

program TypeCategory;
{$R+}                      { enable range checking }
var
    sInt : shortint;
    iInt : integer;
begin
    readln(iInt);
    sInt := iInt;          { assignment from integer to shortint, no cast }
    writeln(sInt);
end.

Considering the above example, Pascal does allow assignment from integer to shortint, or even from longint to shortint, without an explicit type cast. That is, Pascal allows assignment within a type category (here, the integer types).

Pascal is famous for being strongly typed, so why does it allow this kind of weakly typed behaviour? I think this kind of syntax encourages inconsistent code.

What are the pros of this kind of syntax? Are there any other languages that apply it, apart from the famous C and C++?

thanks.

EDIT:

I have only tested Turbo Pascal and Free Pascal with the fpc/objfpc/tp/delphi modes. Also, gcc/g++ and MSVC produce the same result. That is, assignment from int (which is 4 bytes on my machine) to short int (2 bytes) does not trigger any compile errors, although you can set the appropriate options to make the compilers warn about a possible loss of data.
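As a minimal sketch of that (Free Pascal assumed, where shortint is 1 byte; the program name and the value 300 are just for illustration), the range-checking directive plays a similar role to those compiler options: the narrowing assignment still compiles, but an out-of-range value is caught at run time instead of being silently truncated.

program RangeCheckDemo;
{$R+}                       { enable range checking }
var
    sInt : shortint;        { -128..127 in Free Pascal }
    iInt : integer;
begin
    iInt := 300;            { does not fit into shortint }
    sInt := iInt;           { still compiles; with $R+ this stops with
                              runtime error 201 (range check error) }
    writeln(sInt);
end.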


Part 1. About definitions

First of all, which language and implementation do you call "Pascal"? If you are talking about ISO Pascal, it has been dead for many years. If you are talking about some other language or implementation, please provide more information.

Secondly (as Teun D already mentioned), there is no single agreed definition of the term strong typing. Take a look at the Wikipedia article on strong typing:

However, these terms have been given such a wide variety of meanings over the short history of computing that it is often difficult to know, out of context, what an individual author means when using them.

Let's assume that we follow the definition from Luca Cardelli's article Typeful Programming, described below on the Wikipedia page:

Luca Cardelli's article Typeful Programming describes strong typing simply as the absence of unchecked run-time type errors. In other writing, the absence of unchecked run-time errors is referred to as safety or type safety; Tony Hoare's early papers call this property security.

Anyway, the described behaviour can't be classified as a static (or safe) typing discipline. I personally really dislike this, hmmm... Well, that's not a feature, that's a bug. =)

Part 2. Answer

I think the problem is not this weak typing itself but the large variety of integer types available in some languages.

Are there any other languages which applied this kind of syntax,except the famous C and C++?

I think almost every statically typed language with a variety of integer types behaves this way. It was a good idea to have SHORTINTs and all that jazz in the early years to save memory. But now, when nearly every PC has 1 GB of RAM or more... Suppose we have 1 million 4-byte INTEGERs instead of 2-byte SHORTINTs. That is only around 4 MB of RAM instead of 2 MB. I think it's a reasonable price for not having all the strange behaviour you described.
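To put rough numbers on that, here is a small sketch (Free Pascal sizes assumed; note that in Pascal the 2-byte type is actually smallint, while shortint is 1 byte, and the program name is just for illustration) that prints the element sizes and the total size of a million-element array of each:

program SizeDemo;
var
    smallArr : array[1..1000000] of smallint;   { the 2-byte case }
    longArr  : array[1..1000000] of longint;    { the 4-byte case }
begin
    writeln('smallint : ', SizeOf(smallint), ' byte(s) per element');
    writeln('longint  : ', SizeOf(longint),  ' byte(s) per element');
    writeln('1 000 000 smallints : ', SizeOf(smallArr), ' bytes');   { ~2 MB }
    writeln('1 000 000 longints  : ', SizeOf(longArr),  ' bytes');   { ~4 MB }
end.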

Take a quick look at Wirth's Oberon-07 (Language Report, PDF). There is only one integer type: the 32-bit INTEGER.

One can also mention Python (or some other modern dynamically typed language), whose int type represents numbers in an unlimited range, subject only to available (virtual) memory.

So you can see the trend: a large variety of integer types is a holdover from the 70s. =)

What are pros for this kind of syntax?

The pro is (probably) a reduction in verbosity. These statically typed languages are verbose enough already, so if we add explicit integer type conversions the way Wirth did in Oberon-2 (take a look at the SHORT() and LONG() functions), they become even more verbose. As a compromise, one can allow implicit conversion. Also, in many languages the actual size of the integer types is not fixed and differs from one implementation to another; the only guarantee is that size(shortint) <= size(int). In the case of equality, an explicit conversion looks quite strange.
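For comparison, here is a sketch of what the explicit style would look like in (Free) Pascal terms, with a value typecast playing the role of Oberon-2's SHORT(); Pascal accepts both forms, the cast merely makes the narrowing visible (program name and values are only illustrative):

program ExplicitNarrowing;
var
    sInt : shortint;
    iInt : integer;
begin
    iInt := 42;
    sInt := iInt;              { implicit narrowing: accepted by Pascal }
    sInt := shortint(iInt);    { explicit narrowing via a value typecast }
    writeln(sInt);
end.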

Part 3. Dithyramb to Oberon-2 =)

By the way, don't worry too much about Pascal. It's dead, but in Oberon-2 Niklaus Wirth corrected his mistake.

In Chapter 6 of the Language Report you can find information about types. For our discussion the important statement is:

Types 3 to 5 are integer types, types 6 and 7 are real types, and together they are called numeric types. They form a hierarchy; the larger type includes (the values of) the smaller type: LONGREAL >= REAL >= LONGINT >= INTEGER >= SHORTINT

In Chapter 9 we can read about assignments:

The expression must be assignment compatible with the variable

Finally in Appendix A:

Assignment compatible

An expression e of type Te is assignment compatible with a variable v of type Tv if one of the following conditions hold:

Te and Tv are the same type;

Te and Tv are numeric types and Tv includes Te;

...

So here we are: you can't assign an INTEGER expression to a SHORTINT variable. If you are interested, you can also take a look at Component Pascal, a minor variant and refinement of Oberon-2. BlackBox Component Builder is an IDE for it on Windows.


In response to Justin Smith's comment.

I am amazed he said the larger type includes (the values of) the smaller type: LONGREAL >= REAL >= LONGINT >= INTEGER >= SHORTINT, given that there are LONGINTS that cannot be expressed as "REAL"s.

I'm a little bit confused by your statement:

there are LONGINTS that cannot be expressed as "REAL"s

Actually, on my machine the IDE mentioned above reports

MAX(LONGINT) = 9223372036854775807

MAX(REAL) = 1.797693134862316E+308

So you can represent every LONGINT as a REAL number, but the representation may not be exact. I think that is actually what you were talking about, but here we are discussing conversion between different integer types; the conversion between REALs and INTEGERs is another story. The story of bad and confusing naming: REAL numbers are not actually real numbers from the mathematical point of view, they are an approximate representation. One could use a rational-number approximation (storing the numerator and denominator as integers), but the common way is a floating-point approximation. The IEEE Standard for Floating-Point Arithmetic (also known as IEEE 754) is the most widely used standard for floating-point computation.
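If you want to see this on your own machine, here is a small check in Free Pascal terms (int64 and an 8-byte double assumed; the program name is just for illustration): the largest int64 is well inside double's range, but the stored value gets rounded because a double carries only 53 bits of mantissa.

program RealPrecision;
var
    big : int64;
    d   : double;
begin
    big := High(int64);            { 9223372036854775807, the MAX(LONGINT) above }
    d   := big;                    { allowed: double's range includes it }
    writeln(big);
    writeln(d:25:0);               { rounded: the low digits differ }
end.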

Everyone should know that REAL numbers are not real numbers but the numbers described in the IEEE 754 standard. And everyone should read "What Every Computer Scientist Should Know About Floating-Point Arithmetic" to clarify some points.

But it's another story... =)


Regardless of which language is used, there are always ways for the programmer to shoot themselves in the foot. When using variables, you need to be aware of the type of data you're storing and how it will be used. Strong typing makes this safer, but you could still mistakenly cast from a larger variable to a smaller one.


I think this behavior is not weakly typed in the sense that you can never get a pointer to a shortint that really points to an integer. It would also never cause a crash (although the numeric value in the second variable could differ from what you expect).
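To illustrate that point, a sketch in Free Pascal terms (objfpc mode and the typed-address directive assumed, so that @ yields a typed pointer; names are only illustrative): the value assignment is accepted, but a shortint pointer may not silently alias an integer.

program PointerTypes;
{$mode objfpc}
{$typedaddress on}         { make @ return a typed pointer }
var
    iInt : integer;
    pInt : ^integer;
    pSml : ^shortint;
begin
    iInt := 42;
    pInt := @iInt;         { fine: the pointer types match }
    { pSml := @iInt; }     { rejected with typed @: incompatible pointer types }
    { pSml := pInt;  }     { rejected in any case: ^shortint <> ^integer }
    writeln(pInt^);
end.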

What you mean is that, apparently, Pascal does not check for overflow. In e.g. C# you can control checking behavior from code (http://msdn.microsoft.com/en-us/library/khy08726(VS.71).aspx), but often this is not done for performance reasons. I assume the same goes for Pascal.
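In Free Pascal, for instance, the local {$R+}/{$R-} directives play roughly the role of C#'s checked/unchecked for this narrowing assignment; a sketch (the program name and the value 300 are just for illustration):

program CheckToggle;
var
    sInt : shortint;
    iInt : integer;
begin
    iInt := 300;
{$R-}
    sInt := iInt;          { unchecked: silently wraps (prints 44) }
    writeln(sInt);
{$R+}
    sInt := iInt;          { checked: stops with runtime error 201 }
    writeln(sInt);
end.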


If a language never allowed assignment from int to short int, then it would be impossible to write code like this:

var
    sInt : shortint;
    iInt : integer;

if (iInt < 100) and (iInt > -100) then
begin
    sInt := iInt;   { this would not compile }
end;

or this

var
    sInt : shortint;
    iInt : integer;

sInt := iInt and $FFFF;   { mask off the high bits; this would not compile }

You also couldn't write code that uses ints as temporary values while doing math on short ints, like this:

var
    sInt1 : shortint;
    sInt2 : shortint;
    iInt  : integer;

{ I want to multiply sInt2 by sInt1 and then divide by 100 }
iInt  := sInt2;
sInt2 := (iInt * sInt1) div 100;   { this would not compile }

But using more bits in a temporary value is a common technique to make sure that you don't get errors due to overflow of temporary values.
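Here is a runnable variant of that idea in Free Pascal terms (32-bit integer in objfpc mode and 64-bit int64 assumed; the program name and values are only illustrative): the 32-bit intermediate product overflows, while widening the temporary first gives the expected result.

program WideTemporary;
{$mode objfpc}             { 32-bit integer, 64-bit int64 assumed }
var
    a, b   : integer;
    narrow : integer;
    wide   : int64;
begin
    a := 100000;
    b := 100000;
    narrow := (a * b) div 100;         { 32-bit product overflows and wraps }
    wide   := (int64(a) * b) div 100;  { 64-bit temporary: correct result }
    writeln(narrow);                   { some wrapped value }
    writeln(wide);                     { 100000000 }
end.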


As far as I know, there are several things to consider:

  1. There is only ONE integer type in the classic Pascal sense: integer. The others are merely subranges, and thus not strictly fully separate types. Assignments are allowed depending on range, and runtime range checks are part of the standard to enforce this.
  2. In general, Pascal allows assignment between numeric types as long as the ranges match. This is why adding an integer to a real is allowed (real is considered to contain the range of integer), but not the other way around (see the sketch after this list). This is a special exception though, a language-defined conversion for ease of use, since real is not even an ordinal type.
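A small sketch of point 2 (Free Pascal assumed; the program name and values are only illustrative): an integer may be assigned to a real, but the reverse direction requires an explicit Trunc or Round.

program MixedAssign;
var
    i : integer;
    r : real;
begin
    i := 7;
    r := i;            { allowed: real includes the range of integer }
    { i := r; }        { rejected: no implicit real -> integer conversion }
    i := Trunc(r);     { an explicit Trunc (or Round) is needed instead }
    writeln(r:0:2, ' ', i);
end.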

Classic Pascals often defined the base integer type differently from the machine word size (typically twice as large), up to the largest type the machine could handle, and used subranges for all variable types. This means various calculations, both at runtime and at compile time (constants), keep working even if the machine word size is occasionally exceeded, at the cost of some performance; values are merely widened to the bigger integer type. I only got this from newsgroups and don't have much experience with it, though, and I don't know if multiplication was automatically widened. This is probably less applicable nowadays, when most processing isn't batchwise with a well-defined input and a staff of engineers to handle it.

Putting ranges on all variables also made the range checking more efficient, e.g.:

var x : 0..10;

x := 10;
x := x + 1;   { runtime range check error here (with $R+) }

However, this use of subranges to define variables waned as the DOS era brought in a more casual type of programmer. In Turbo Pascal nearly everything was typed "integer", which therefore had to be machine-word sized (or smaller) for efficiency reasons.

A longint of twice the size had to be introduced (and later int64 in Delphi; maybe int128 for x86_64 in time), but it always remained somewhat peculiar, and the normal ranging rules don't apply to it. Also, the problems with the unsigned type corresponding to the largest signed type were never fully resolved (since the Pascal base type is signed and thus can't contain the largest unsigned type).
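A quick way to see that last problem in Free Pascal terms (the int64 and qword type names are Free Pascal's; the program name is only illustrative): the largest unsigned type has values the largest signed type cannot represent, so neither can safely include the other.

program SignedUnsigned;
begin
    writeln('High(int64) = ', High(int64));    { 9223372036854775807 }
    writeln('High(qword) = ', High(qword));    { 18446744073709551615 }
end.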

