Is there merit to having less-than-8-byte pointers on 64-bit systems?

Developer https://www.devze.com 2023-01-21 07:04 Source: Internet
We know that on 64-bit computers pointers are 8 bytes, which allows us to address a huge amount of memory. But on the other hand, the memory available to ordinary users today tops out at around 16 GB, which means that at the moment we do not need 8 bytes for addressing, but 5 or at most 6 bytes.

I am a Delphi user.

The question (probably for the developers of 64-bit compilers) is:

Would it be possible to declare somewhere how many bytes you would like to use for pointers, valid for the whole application? If you have an application with millions of pointers and can declare that pointers are only 5 bytes, the amount of memory occupied would be much lower. I can imagine that this could be difficult to implement, but I am curious about it anyway.

Thanks in advance.


A million 64-bit pointers will occupy less than eight megabytes. That's nothing. A typical modern computer has 6 GB of RAM, so 8 MB is only slightly more than one permille of the total.


There are other uses for the excess width of 8-byte pointers: you can, for example, encode the class of a reference (as an ordinal index) into the pointer itself, stealing 10 or 20 of the 64 available bits and still leaving more than enough address space for currently available systems.

This can let the compiler writer do inline caching of virtual methods without the cost of an indirection when confirming that the instance is of the expected type.


Actually, it wouldn't save memory. Memory allocations have to be aligned based on the size of what you're allocating. E.g., a 4-byte value has to be placed at an address that is a multiple of 4. So, due to the padding needed to align your 5-byte pointers, they would actually consume the same amount of memory.


Remember that actual OSes don't let you use physical addresses. User processes always work with virtual addresses (usually only the kernel can access physical addresses); the processor transparently translates virtual addresses into physical ones. That means your program may use pointers to virtual addresses that have no physical counterpart on a given system. This always happened in 32-bit Windows, where DLLs are mapped into the upper 2 GB of the 4 GB virtual process address space even when the machine has far less than 2 GB of memory (actually it started to happen when PCs had only a few megabytes - it doesn't matter). Therefore using "small" pointers is nonsense (even ignoring all the other factors, i.e. memory access, register sizes, standard instruction operand sizes, etc.) and would only reduce the available virtual address space. Also, techniques like memory-mapped files need "large" pointers to access a file that could be far larger than the available memory.


Another use for some of the excess pointer space would be storing certain value types without boxing. I'm not sure one would want a general-purpose mechanism for small value types, but it would certainly be reasonable to encode all 32-bit signed and unsigned integers, as well as all single-precision floats, and probably many values of type 'long' and 'unsigned long' (e.g. all those that could be represented exactly by an int, unsigned int, or float).

