Possible Duplicate:
C# driver development?
Why do we use C for device driver development rather than C#?
Because C# programs cannot run in kernel mode (Ring 0).
The main reason is that C is far better suited than C# for low-level (close-to-the-hardware) development; C was designed as a kind of portable assembly language. There are also many situations where using C# would be difficult or unsafe. The key case is when drivers run at ring 0, i.e. in kernel mode. Most applications are not meant to run in this mode, and that includes the .NET runtime.
While it may be theoretically possible, C is usually more suited to the task. Among the reasons: C gives tighter control over the machine code that is produced, and being closer to the processor is almost always better when working directly with hardware.
Setting aside C#'s managed-language limitations for a moment, object-oriented languages such as C# often do things under the hood which get in the way when developing drivers. Driver development -- actually reading and manipulating bits in hardware -- often has tight timing constraints and relies on programming practices that are avoided in other kinds of programming, such as busy waits and dereferencing pointers that were set from integer constant values. Device drivers are often written in a mix of C and inline assembly (or use some other method of issuing instructions that C compilers don't normally produce). The low-level locking mechanisms alone (written in assembly) are enough to make using C# difficult. C# has plenty of locking functionality, but when you dig down it operates at the thread level. Drivers need to be able to block interrupts as well as other threads to perform some tasks.
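To make that concrete, here is a minimal C sketch of the kind of code meant here -- a pointer formed from an integer constant (a memory-mapped register) and a busy wait. The register address and bit position are invented for illustration; real values come from the hardware datasheet:

    #include <stdint.h>

    /* Hypothetical memory-mapped UART status register. The address and
     * bit layout are made up for this example. */
    #define UART_STATUS  ((volatile uint32_t *)0x4000C018u)
    #define TX_READY     (1u << 5)

    /* Busy-wait until the device can accept another byte. 'volatile'
     * stops the compiler from hoisting the load out of the loop. */
    static void wait_for_tx_ready(void)
    {
        while ((*UART_STATUS & TX_READY) == 0) {
            /* spin */
        }
    }

A garbage-collected runtime gives you no equivalent of that cast from a constant to a pointer, and no guarantee the loop won't be interrupted by the collector.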
Object-oriented languages also tend to allocate and deallocate (via garbage collection) lots of memory over and over. Within an interrupt handler, though, your ability to use heap allocation is severely restricted: a heap allocation may trigger garbage collection (expensive), and you must deal with the possibility of interrupt code and non-interrupt code trying to allocate at the same time. Without lots of limitations placed on this code, on the compiler's output, and on whatever is managing the code (a VM, or possibly natively compiled code that only uses libraries), you will likely end up with some very odd bugs.
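The usual C workaround is to preallocate everything, so the interrupt handler never touches the heap. A minimal single-producer/single-consumer ring buffer sketch (names are illustrative; a real driver would also need memory barriers or interrupt masking appropriate to the platform):

    #include <stdint.h>

    /* Fixed-size ring buffer: the interrupt handler fills it, normal
     * code drains it -- no heap allocation on either side. */
    #define BUF_SIZE 256  /* power of two, so indices wrap with a mask */

    static volatile uint8_t  buf[BUF_SIZE];
    static volatile unsigned head = 0, tail = 0;  /* ISR writes head, reader writes tail */

    /* Called from the interrupt handler: store a byte, drop it if full. */
    void isr_push(uint8_t byte)
    {
        unsigned next = (head + 1) & (BUF_SIZE - 1);
        if (next != tail) {        /* not full */
            buf[head] = byte;
            head = next;
        }
    }

    /* Called from non-interrupt code: -1 if empty, else the next byte. */
    int reader_pop(void)
    {
        if (tail == head)
            return -1;
        int byte = buf[tail];
        tail = (tail + 1) & (BUF_SIZE - 1);
        return byte;
    }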
Managed languages have to be managed by someone, and that management probably relies on a functioning underlying operating system. This creates a bootstrapping problem for using managed code in many (but not all) drivers.
It is entirely possible to write device drivers in C#. If you are driving a serial device, such as a GPS unit, it may even be trivial, since the work can be done as an ordinary application (though somewhere lower down you will be relying on a UART chip or USB driver, which is probably written in C or assembly). If you are writing a driver for an Ethernet card (one that isn't used for network booting), it may be theoretically possible to use C# for some parts, but you would probably rely heavily on libraries written in other languages and/or on an operating system's userspace driver facilities.
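For the serial-GPS case, the "driver" really can be an ordinary program. A rough POSIX C sketch (the device path and baud rate are assumptions; the kernel's UART or USB-serial driver underneath does the real hardware work):

    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        /* Assumed device node; varies by system and adapter. */
        int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);              /* raw mode: no line editing */
        cfsetispeed(&tio, B9600);     /* 9600 baud is typical for GPS */
        cfsetospeed(&tio, B9600);
        tcsetattr(fd, TCSANOW, &tio);

        char line[128];
        ssize_t n = read(fd, line, sizeof line - 1);
        if (n > 0) {
            line[n] = '\0';
            printf("%s", line);       /* e.g. an NMEA sentence */
        }
        close(fd);
        return 0;
    }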
C is used for drivers because its output is relatively predictable. If you know a little assembly for a processor, you can write some C, compile it for that processor, and make a pretty good guess at what the resulting assembly will look like. You will also know how deterministic those instructions are. None of them is going to surprise you by kicking off the garbage collector or calling a destructor (or finalizer).
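For example, an experienced C programmer can predict the code generated for a routine like the following almost instruction for instruction -- roughly a load, an OR, a store, and a return, with nothing hidden behind it:

    #include <stdint.h>

    /* Set bits in a device register. No hidden allocation, no garbage
     * collector, no finalizers -- just the memory access you wrote. */
    void set_bits(volatile uint32_t *reg, uint32_t mask)
    {
        *reg |= mask;
    }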
Because C# is a high-level language, it cannot talk to the processor directly. C code compiles straight into native code that the processor can execute.
Just to build a little on Darin Dimitrov's answer: yes, C# programs cannot run in kernel mode.
But why can't they?
In Patrick Dussud's interview for Behind the Code, he describes an attempt made during the development of Vista to include the CLR at a low level*. The wall they hit was that the CLR takes dependencies on the OS security library, which in turn takes dependencies on the UI layer. They were not able to resolve this for Vista. Apart from Singularity, I don't know of any other effort to do this.
*Note that while "low level" may not have been sufficient for writing drivers in C#, it was at the very least necessary.
C is a language with better support for interfacing with peripherals, which is why it is used to develop system software. The main drawback is that the developer has to manage memory manually, which can be a nightmare. In C#, memory management is handled easily and automatically (there are many other differences between C and C#; try googling them).
Here's the honest truth. People who tend to be good at hardware or hardware interfaces are not very sophisticated programmers. They tend to stick to simpler languages like C. Otherwise, frameworks would have evolved that allow languages like C++ or even C# to be used at kernel level. Entire OSes have been written in C++ (eCos). So, IMHO, it is mostly tradition.
Now, there are some legitimate arguments against using more sophisticated languages in demanding code like drivers and kernels. There is the visibility aspect: C# and C++ compilers do a lot behind the scenes. An innocuous-looking assignment statement may hide a shitload of code (operator overloads, properties). The cost of exceptions may not be clearly visible. Garbage collection makes the lifetime of objects and memory unclear. All the features that make programming easier are also rope to hang yourself with.
Then there is the required ecosystem for the language. If the required features pull in too many components, the size alone may become a factor. If the primitives in the language tend to be heavy (for the sake of useful software abstractions), that's a burden drivers and kernels may not be willing to carry.