I have done some searching but have not found a truly satisfactory answer. As a developer I want to invest the necessary time in understanding this, so I am looking for a complete explanation; feel free to provide any useful references.
Thanks.
I would recommend a book by Andrew S. Tanenbaum. He developed Minix, one of the predecessors to Linux. I used his Structured Computer Organization as part of my university course.
Why computers use binary is not just a matter of switches having two states.
Relative to a reference voltage of, say, 3 V: +1 V (4 V) = true or 1, and -1 V (2 V) = false or 0.
It also has to do with the most efficient way of creating control or logic circuits, which comes down to cost of implementation: how much does it cost to build circuits that work with binary compared to circuits that work with decimal or analogue values? (See this answer.)
Compare how many billions of binary transistors fit onto a modern CPU: the cost of doing the same with, say, a decimal (or analogue) system grows enormously with every digit you want to add, because you need that much more controlling circuitry.
If you want to understand the most important factors that made binary the default standard for logic and control circuitry, read through the following topics on Wikipedia. It will take about 4 hours to cover the most important ones, which concern some of the electrical engineering used to create the circuits.
I have tried to make this list of concepts complete: it covers how the actual switches work, why they are used, and why binary arithmetic is such an efficient form of computation in hardware.
- Transistor types. Understand the PNP and NPN transistor types to see how the circuitry that forms the switches works. These circuits are very cheap to make and can be shrunk to minuscule (nanometre) size.
- Logic circuitry. If you understand the basic logic gates, you will see how the transistor types above are used to implement them. These relate to programming constructs such as "and" (&&), "or" (||), and "if"/branch constructs.
- Digital circuitry. The article has a useful disadvantages section comparing analog and digital circuits.
- The NAND logic gate is important because every other logic gate can be implemented using just this one gate. That simplifies the manufacturing process, as the machinery used to create the circuits can be streamlined.
- Adder circuits, to understand how basic addition is done using logic gates.
- Two's complement is very helpful in understanding number representation in actual CPUs. It is also very cheap to implement in a CPU, as it requires fewer transistors: a simple addition circuit is all that is needed to do both addition and subtraction. If you add a negative number you get the correct answer, i.e. +7 + (-4) = +3. This also helps in understanding integer overflow.
- Binary numbers
- Decoders and encoders. These are some of the most-used circuits for controlling other circuits: they control when circuits are switched on and off, and they are how (if/branch) condition logic is implemented.
- Multiplexers are fundamental to how routing is done: in a CPU, on a bus, and in a network. They are among the most common logic circuits found in digital devices.
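The NAND point above can be sketched as a behavioural model (Python booleans standing in for actual transistor circuits): NOT, AND, and OR each built from NAND alone.

```python
# Behavioural sketch of NAND universality: every other basic gate
# expressed using only the NAND function.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    # NOT(a) = NAND(a, a)
    return nand(a, a)

def and_(a, b):
    # AND = NOT of NAND
    return not_(nand(a, b))

def or_(a, b):
    # OR(a, b) = NAND(NOT a, NOT b), by De Morgan's law
    return nand(not_(a), not_(b))

# Exhaustively check all input combinations against Python's own logic.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert not_(a) == (0 if a else 1)
```

This is exactly why a fab only needs to perfect one gate: the rest are wiring.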
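The adder and two's-complement items above fit together neatly, and can be sketched in a few lines, assuming an 8-bit word (a behavioural model, not gate-accurate hardware): the same ripple-carry adder does both addition and subtraction.

```python
# Sketch: a ripple-carry adder built from one-bit full-adder cells,
# then reused unchanged for subtraction via two's complement.

BITS = 8
MASK = (1 << BITS) - 1   # 0xFF: keeps results 8 bits wide

def full_adder(a, b, cin):
    """One-bit full adder: returns (sum_bit, carry_out)."""
    return a ^ b ^ cin, (a & b) | (cin & (a ^ b))

def ripple_add(x, y):
    """Add two 8-bit values bit by bit, like chained full-adder cells."""
    out, carry = 0, 0
    for i in range(BITS):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out  # the carry out of the top bit is simply dropped (overflow)

assert ripple_add(7, 5) == 12

# Subtraction needs no new circuit: encode -4 as two's complement
# (invert all bits and add one), then feed it through the same adder.
minus_four = (~4 + 1) & MASK           # bit pattern 0b11111100
assert ripple_add(7, minus_four) == 3  # +7 + (-4) = +3, as claimed above
```

Dropping the final carry is also where integer overflow comes from: the hardware is doing arithmetic modulo 2^8.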
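The multiplexer's routing behaviour can also be modelled briefly (illustrative only; the select comparison is written as plain Python rather than as gates):

```python
# Sketch of a 4-to-1 multiplexer: two select bits route exactly one of
# four inputs to the single output, the way a bus picks a source line.

def mux4(inputs, s1, s0):
    """Return inputs[s1*2 + s0]; all other inputs are masked off."""
    sel = (s1 << 1) | s0
    out = 0
    for i, value in enumerate(inputs):
        # each input is ANDed with "my index matches the select lines"
        out |= value & (1 if i == sel else 0)
    return out

assert mux4([0, 1, 0, 1], 0, 1) == 1  # select lines 01 pick input 1
```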
Now for some hardcore stuff. C and C++ are used to write device drivers that talk to actual hardware. If you really want to get into how certain devices work (your CPU, or external devices), learn assembler. You will begin to see how you can switch a device off by setting a certain device register to a specific value, which is read by a logic circuit to change the device's state. For example, you will understand why (0101) base 2 = 5 will route a specific way through the circuits to switch the device on or off.
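The register idea can be sketched as follows. Note that the bit names and values here are invented for illustration; on real hardware the register address and bit meanings come from the device's datasheet.

```python
# Hypothetical sketch: driving a device by writing bit patterns to a
# control register. Simulated here with a plain integer; the bit
# positions below are made up for the example.

DEVICE_ENABLE = 0b0001   # bit 0: power the device on (hypothetical)
IRQ_ENABLE    = 0b0100   # bit 2: allow interrupts (hypothetical)

register = 0b0000        # simulated control register, everything off

# Switch on: set both bits, i.e. write the pattern 0101 = 5
register |= DEVICE_ENABLE | IRQ_ENABLE
assert register == 0b0101 == 5

# Clear just bit 0: device off, interrupts still enabled
register &= ~DEVICE_ENABLE
assert register == 0b0100
```

In a real driver the `|=` and `&= ~mask` idioms look the same; the difference is that the write lands on a memory-mapped address, where decoder logic turns those bits into physical switching.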
Computers could have been built to work even with decimal numbers, but from the engineering point of view it is a lot safer to distinguish only two states.
The voltage for the value 1 (+5 V) is only a theoretical value; in real life it always differs a bit. Had they built computers with decimal digits, there would be no way to tell whether +4.75 V meant a 9 or a 10.
It is because of how logic gates work: there is a logical output (1) if the input voltage exceeds a certain threshold, and no logical output (0) if not.
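The robustness this buys can be shown with a tiny sketch, assuming a 0-5 V logic family with a 2.5 V threshold (illustrative values): noisy voltages still decode to the intended bit.

```python
# Sketch: thresholding is what makes binary noise-tolerant. Any voltage
# above the threshold reads as 1, anything at or below reads as 0.

THRESHOLD = 2.5   # volts; assumed midpoint of a 0-5 V logic family

def read_bit(voltage):
    return 1 if voltage > THRESHOLD else 0

# Noisy signals still decode correctly:
assert read_bit(4.75) == 1   # a slightly sagging "high" is still a 1
assert read_bit(0.30) == 0   # a slightly raised "low" is still a 0
```

With ten voltage levels per digit, that same 4.75 V reading would be ambiguous between two adjacent digits; with two levels it is unambiguous.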
But probably much more crucial:
Maybe one day computers won't work in binary anymore, once quantum-based machines arise (or other technologies that might encourage more complex state representations). But since binary values are the simplest possible representation of any more complex state, even in "quantum times" it would probably be most appropriate to stick with computers working in binary (setting aside other physical representations such as ternary, where available).
Stumbled upon this question. I recommend two books which address it faithfully:
A Peek at Computer Electronics: Things you Should Know - by Caleb Tennis
CODE : The Hidden Language of Computer Hardware and Software - by Charles Petzold
And if you really want to understand how computers work, then take up:
The Elements of Computing Systems: Building a Modern Computer from First Principles
- by Noam Nisan and Shimon Schocken
Computers use electricity as a means to transport information, and the easiest way to use electricity as information is as on or off (1 or 0).
Sure, you could use different voltages to represent different numbers, but the electronic components to do so are really complicated.
It is also important to note that the ability to write and read 1s and 0s is enough to perform any computation; this is called Turing completeness. So there is no need for a more complex system than binary.
(OK, to be thorough, Turing completeness can be achieved only with infinite memory, but that isn't really relevant here.)
Well, I guess you need to consider the ICs within a PC. Each IC has millions of gates, mostly NANDs or NORs, and every computation comes down to true or false, i.e. 1 or 0 respectively, so binary numbers suffice. Hope that's clear :-)
OK... I will give you my opinion about it, but first it's necessary to say that I'm far from being an expert, so take my answer carefully.
At the bottom of all this hardware, gates and transistors, a computer is no more than a circuit. In every part of a circuit, the electric pulses can flow... or not flow (this is a simplified version; read paxdiablo's comment). Two states. These two states can be represented by a 0 or a 1. And that's binary!
In fact, maths can be done in any base; the only reason human beings use base 10 is that we (tend to) have 10 fingers, so it is easy for us to understand. Digital systems have two states, so base 2 is the best choice for them.
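The "any base works" point is easy to demonstrate: the value stays the same, only the written representation changes.

```python
# Sketch: rendering the same number in several bases by repeated
# division, the standard base-conversion algorithm.

def to_base(n, base):
    """Render a non-negative integer in the given base (2..10)."""
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(str(n % base))   # least significant digit first
        n //= base
    return "".join(reversed(digits))

assert to_base(42, 2) == "101010"
assert to_base(42, 10) == "42"
assert int("101010", 2) == 42   # and converting back recovers the value
```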
Well, in computers we always go with whatever has the least complexity, since that helps speed up computations. And by that measure, binary is the least complex of them all.
Consider the number 1000 here,
For unary: symbols {0}; digits to represent 1000: 1000; complexity: 1000 × 1 = 1000
For binary: symbols {0, 1}; digits to represent 1000: 10; complexity: 10 × 2 = 20
For ternary: symbols {0, 1, 2}; digits to represent 1000: 7; complexity: 7 × 3 = 21
For decimal: symbols {0, 1, ..., 9}; digits to represent 1000: 4; complexity: 4 × 10 = 40
Thus we see binary has the least complexity.
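The digits × radix measure used above can be computed directly, which makes the comparison easy to reproduce for any number:

```python
# Sketch of the "complexity" measure above: the number of digits needed
# to write n in a given base, multiplied by the base's symbol count.

def complexity(n, base):
    if base == 1:        # unary: n marks, one symbol
        return n
    digits = 0
    while n:             # count digits by repeated division
        n //= base
        digits += 1
    return digits * base

assert complexity(1000, 1) == 1000   # unary
assert complexity(1000, 2) == 20     # binary: 10 digits × 2 symbols
assert complexity(1000, 3) == 21     # ternary: 7 digits × 3 symbols
assert complexity(1000, 10) == 40    # decimal: 4 digits × 10 symbols
```

(As an aside, by this measure base 3 sometimes beats base 2 for other values of n; binary's real advantage is the hardware argument made elsewhere in this thread, with this cost model as a supporting intuition.)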
Because computers are state machines, and they mainly understand two states: on and off, which is a matter of electricity. That is the main reason.
Also, how else would you find T-shirts saying that there are 10 types of people: those who understand binary and those who do not? :)
Computers basically work on electric signals; as a dumb machine, a computer can only understand "high" and "low". High is +5 V and low is 0 V (V = volt). Thus, the 1 in binary represents high or "on", and 0 represents low or "off". So binary is needed to make the computer understand anything at all.