I am new to computer architecture and design. My question: a high-level program's instructions are executed on the CPU one after another. Does executing them also involve operating system instructions as overhead? For example: a 2 GHz processor can execute 2*10^9 instructions in 2*10^9 clock cycles (assuming one instruction per cycle). If the operating system always takes about 1*10^9 instructions per second to execute, is that overhead always there, leaving only the other 1*10^9 instructions per second free for other scheduled programs to execute?
Does that mean an operating system should execute as few instructions as possible, so it can accommodate more instructions from other programs?
Yes to both questions, within limits.
First, yes, if the OS is using 1e9 instructions/sec, there are only 1e9 instructions/sec left.
Second, yes, you'd like to reduce that as much as possible; it's called "overhead".
The "limits" are that the OS does do good things for you. Consider, for example, multitasking, where the OS lets you run several programs concurrently, sharing the processor among them. On the one hand, there is overhead involved. On the other hand, without it you'd either leave the machine idle for long stretches when no program could run, or you'd have to simulate multitasking yourself -- which would take at least as many instructions as the OS would.
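You can observe this overhead directly. A minimal sketch in Python (my own illustration, not from the answers above): compare wall-clock time with the CPU time actually charged to your process while it spins. The gap is time the OS spent on other tasks and on its own bookkeeping.

```python
import time

def busy_loop(n):
    # Spin doing trivial arithmetic so the process wants the CPU continuously.
    total = 0
    for i in range(n):
        total += i
    return total

wall_start = time.perf_counter()
cpu_start = time.process_time()
busy_loop(10_000_000)
wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start

# For a single-threaded loop, cpu <= wall: the difference is time the OS
# gave to other tasks or spent on its own work (scheduling, interrupts).
print(f"wall-clock: {wall:.3f}s, CPU time for this process: {cpu:.3f}s")
print(f"approximate share of the CPU this process received: {cpu / wall:.0%}")
```

On a lightly loaded machine the share is close to 100%; under load, the OS's scheduling shrinks it, which is exactly the budget question you asked about.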
To expand a bit on Mr. Martin's response (beware, this is highly simplified): the job of the OS is to handle the things a program doesn't want to do for itself, like servicing I/O interrupts and scheduling multiple tasks to share the machine.

In a perfect world, on a machine running one application program, the program would have control of the CPU until it needed the OS to do something for it, like read the next record from a disk file (which calls layered 'services' to figure out which disk, which file, which record, which byte, and to calculate which disk block on which track to ask for from the disk controller).

A typical 'real' machine also has a bunch of background tasks running: keeping the screen updated, reading the clock, checking for new mail, downloading patches, etc. This is where priorities come in. Some tasks run at low priority because we don't care when they finish, like updating the system tray icon in Windows for a New Mail notification. Other tasks run at high priority but are very short, like following the mouse on the screen and changing the pointer to a hand.

Keep in mind that a typical task executes only a few hundred instructions before needing some OS service and going to sleep while it happens. Large applications may have hundreds of thousands of instructions, but again spend some of their time waiting for something else, from a button push or keyboard entry to a response from a database lookup on another machine. The most CPU-intensive applications, like calculating pi to a million decimal places, may consume 99.9% of the processor for long periods, but the OS will still interrupt them periodically just to see if something else needs to be done.

Back in the days of DOS (1980s), a program could actually take the whole CPU for a while, but if it needed to read or write or type something to the screen, it had to ask the BIOS to do that, unless the program was written to do those basic operations itself.
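The "read the next record from a disk file" path above can be sketched with raw system calls. A minimal Python illustration (the file name is made up for the example): each `os.open`, `os.read`, and `os.close` below traps into the kernel, whose layered services resolve the path to a device, file, and block on the program's behalf while the program waits.

```python
import os
import tempfile

# Create a small file, then read it back through raw OS system calls.
path = os.path.join(tempfile.gettempdir(), "demo_record.txt")
with open(path, "w") as f:
    f.write("record 1\n")

fd = os.open(path, os.O_RDONLY)   # system call: kernel resolves path -> disk/file
data = os.read(fd, 64)            # system call: kernel fetches up to 64 bytes
os.close(fd)                      # system call: kernel releases the descriptor
os.remove(path)
print(data)  # b'record 1\n'
```

All of those kernel instructions count against the same clock-cycle budget as the application's own instructions, which is the overhead from the original question.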
Bypassing the OS is how some computer games get their responsiveness: they do the specific operations needed to modify the screen directly, and read directly from the keyboard or mouse device buffers. Hopefully, I haven't confused you more...