I have read somewhere that to handle very large number arithmetic (really, really large) we should store the number in a large base, at most sqrt(MAXNUMBER), because representing a number in a larger base needs fewer digits, e.g. 120 in decimal = 1111000 in binary. So if we store large numbers in a large base, does that reduce the number of bits at the lowest level? I don't think so, because any number in hexadecimal surely takes fewer digits on paper, but not in hardware.
I think I am missing something here. Could someone help me understand how I can store a large-base number at the bit level in fewer bits?
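To make the question concrete, here is a small sketch (names are my own) of storing a big number as "digits" (limbs) in base 2^32. The point the question is circling is visible in the numbers: fewer limbs than decimal digits, but each limb occupies 32 bits, so the total bit count is unchanged.

```python
# Sketch: storing a big number as "digits" (limbs) in base 2^32.
# Whatever base is chosen, a number n still needs about log2(n) bits
# in total; a bigger base only means fewer, wider digits.

BASE = 2 ** 32  # one limb fits in a 32-bit word

def to_limbs(n):
    """Split n into base-2^32 digits, least significant first."""
    limbs = []
    while n:
        limbs.append(n % BASE)
        n //= BASE
    return limbs or [0]

n = 10 ** 40            # 41 decimal digits, ~133 bits
limbs = to_limbs(n)
# Only 5 limbs instead of 41 decimal digits, but 5 limbs * 32 bits
# = 160 bits of storage -- no saving at the bit level.
print(len(limbs), len(limbs) * 32)  # -> 5 160
```

The win from a large base is not storage, it is speed: each primitive add/multiply handles 32 bits of the number at once instead of one decimal digit.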
It depends on what encoding you want to use. You can always use binary-coded decimal (BCD), in which each decimal digit is encoded with four binary bits, or you can use extended binary representations, e.g. 128-bit numbers, etc. You should be able to find libraries online for both of these. P.S. BCD takes little space for small numbers but a huge number of bits for large numbers; maybe you can find some variation of BCD which is not so wasteful.
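A minimal sketch of the BCD idea from this answer (helper names are my own): pack each decimal digit into four bits, two digits per byte. It shows both the encoding and the overhead versus plain binary.

```python
# Sketch of binary-coded decimal (BCD): each decimal digit is stored
# in four bits, two digits per byte. Easy to convert to and from
# decimal text, but less compact than plain binary for large numbers
# (4 bits per digit instead of ~3.32).

def to_bcd(n):
    """Pack the decimal digits of n into bytes, two digits per byte."""
    s = str(n)
    if len(s) % 2:
        s = "0" + s                      # pad to an even digit count
    return bytes((int(a) << 4) | int(b) for a, b in zip(s[::2], s[1::2]))

def from_bcd(b):
    """Unpack BCD bytes back into an integer."""
    return int("".join(f"{byte >> 4}{byte & 0xF}" for byte in b))

n = 123456
enc = to_bcd(n)                          # -> b'\x12\x34\x56'
assert from_bcd(enc) == n
# 6 decimal digits -> 3 bytes (24 bits), versus 17 bits in plain binary.
```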
The idea is to represent the number with fewer bits. You could represent the number 1,000,000,000,002 as approximately 1,000,000^2. Of course, you lose precision, but if it's a really, really big number, you usually don't care so much about the loss of precision.
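This answer's trade-off can be sketched as a mantissa/exponent pair, as in scientific notation (the function and its parameters are mine, for illustration): keep only the leading digits and record how many were dropped.

```python
# Sketch: a lossy "mantissa, exponent" representation, as in scientific
# notation. Keeping only the leading digits gives a compact fixed-size
# encoding at the cost of precision, which is this answer's trade-off.

def approx(n, digits=6):
    """Keep the leading `digits` decimal digits of n plus an exponent."""
    s = str(n)
    exponent = len(s) - digits
    if exponent <= 0:
        return n, 0                      # small enough to store exactly
    return int(s[:digits]), exponent

mantissa, exp = approx(1_000_000_000_002)
# (100000, 7): the value is reconstructed as 100000 * 10**7,
# i.e. 1,000,000,000,000 -- the trailing 2 is lost, as the answer says.
print(mantissa, exp)
```

This is essentially what floating-point formats do, just in base 2 with a fixed-width mantissa.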