I'm taking a Computer Systems class as a prerequisite for my Masters and came across something I found fascinating but hard to see a practical use for: "faking subtraction", i.e. the fact that there doesn't need to be a subtraction instruction.
Something like:
x - y
Can be written as:
x + (~y + 1)
Now, that's all well and good, but it seems overly complicated for a simple subtraction, especially when you could just as easily write "x - y". Are there situations where it would be necessary to do this, or is it just something that CAN be done but isn't?
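For reference, here's a quick check of the identity in C (my own snippet, not from the course material):

```c
#include <stdio.h>

int main(void) {
    int x = 42, y = 17;
    /* ~y flips every bit of y; adding 1 then yields the two's
       complement (the negation) of y, so x + (~y + 1) == x - y. */
    printf("%d %d\n", x - y, x + (~y + 1)); /* both print 25 */
    return 0;
}
```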
This is often how it's done at the hardware level (i.e. inside the ALU).
At the software level, it's generally useless, as it can never be more efficient than the straightforward subtraction (unless you have a truly bizarre compiler/platform combination).
The two's complement implementation is done in hardware, so you do not need to implement it like that for built-in datatypes.
If you are making an n-bit integer arithmetic library, then you need to emulate integer addition, subtraction, multiplication, division, and so on. In that case such a technique could be used to add and subtract the n-bit numbers, but propagating the carry flag explicitly is a better implementation in my opinion, as in the sketch below.
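A minimal sketch of what I mean (function names and the 32-bit limb size are my own assumptions): C exposes no carry flag, so the carry is recomputed from a widened per-limb sum, and subtraction drops out of the same ~b + 1 identity.

```c
#include <stddef.h>
#include <stdint.h>

/* Add two little-endian n-limb numbers, propagating the carry by hand. */
uint32_t bignum_add(uint32_t *sum, const uint32_t *a,
                    const uint32_t *b, size_t n) {
    uint32_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t t = (uint64_t)a[i] + b[i] + carry;
        sum[i] = (uint32_t)t;
        carry  = (uint32_t)(t >> 32);
    }
    return carry; /* final carry-out */
}

/* a - b == a + ~b + 1: add the complemented limbs of b with an
   initial carry of 1. A final carry of 1 means no borrow occurred. */
uint32_t bignum_sub(uint32_t *diff, const uint32_t *a,
                    const uint32_t *b, size_t n) {
    uint32_t carry = 1;
    for (size_t i = 0; i < n; i++) {
        uint64_t t = (uint64_t)a[i] + (uint32_t)~b[i] + carry;
        diff[i] = (uint32_t)t;
        carry   = (uint32_t)(t >> 32);
    }
    return carry;
}
```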
It should be obvious that this is how subtraction is done internally, so I'm not sure what you mean by "being used in the real world". This is why two's complement was chosen in the first place: subtraction is just negative addition that is allowed to overflow.
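To make that concrete, here is a small 8-bit example of my own: the bit pattern ~y + 1 is exactly the value that, when added, wraps around to the right answer.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t x = 5, y = 3;
    uint8_t neg_y = (uint8_t)(~y + 1);    /* 253, i.e. -3 mod 256 */
    uint8_t diff  = (uint8_t)(x + neg_y); /* 5 + 253 = 258, wraps to 2 */
    printf("%u\n", diff);                 /* prints 2 */
    return 0;
}
```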
I do not see any reason to do it in your C code. Doing it in software is no faster than using the minus operator, and it is a lot less clear.
However, that is the way processors execute subtraction. I bet you have seen this code as an example of what the hardware does, since it is easier to see how x + (~y + 1) becomes a logic circuit.
So... no, you will not use this code in the real world, but this operation is executed a great many times inside your processor.
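If you want to see why the circuit view is simpler, here is a rough C model of a 32-bit ripple-carry adder/subtractor (entirely my own sketch, not real hardware): subtracting just inverts the second operand and sets the initial carry-in to 1, which computes a + ~b + 1.

```c
#include <stdint.h>

/* Per-bit full-adder logic: sum bit = a ^ b ^ cin,
   carry-out = majority(a, b, cin). */
uint32_t alu_addsub(uint32_t a, uint32_t b, int subtract) {
    uint32_t carry  = subtract ? 1u : 0u; /* the "+ 1" of ~b + 1 */
    uint32_t result = 0;
    if (subtract)
        b = ~b;                           /* the "~b" */
    for (int i = 0; i < 32; i++) {
        uint32_t ai = (a >> i) & 1u;
        uint32_t bi = (b >> i) & 1u;
        result |= (ai ^ bi ^ carry) << i;
        carry = (ai & bi) | (ai & carry) | (bi & carry);
    }
    return result;
}
```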
I can't see the point of doing this. It is not any more efficient; in fact, if it's not optimised out by the compiler, it ends up generating more opcodes.
Stuff like this was more common back before CPUs had billions of transistors to play with. A particular CPU might not implement a specific subtract opcode, and so a compiler (or assembly programmer) targeting it would have to know this trick.
These manipulations can also help you understand the internal implementation of CPUs. For example, CPU division is sometimes accomplished by taking the reciprocal of the divisor and multiplying it by the dividend; computing the reciprocal is the only actual "division" being performed.
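A software cousin of that idea, which compilers really do emit for division by a constant, looks like this (my own illustrative sketch; the magic constant is the standard one for dividing a 32-bit unsigned value by 10):

```c
#include <stdint.h>
#include <stdio.h>

/* 0xCCCCCCCD == ceil(2^35 / 10), a fixed-point reciprocal of 10.
   One multiply and one shift replace the divide. */
static uint32_t div10(uint32_t x) {
    return (uint32_t)(((uint64_t)x * 0xCCCCCCCDu) >> 35);
}

int main(void) {
    printf("%u %u\n", div10(12345u), 12345u / 10u); /* both print 1234 */
    return 0;
}
```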