The question seems settled, beaten to death even. Smart people have said smart things on the subject. To be a really good programmer, you need to know C.
Or do you?
I was enlightened twice this week. The first time made me realize that my assumptions reach no further than the knowledge behind them, and given the complexity of the software running on my machine, that knowledge is almost non-existent. But what really drove it home was this Slashdot comment:
The end result is that I notice the many naive ways in which traditional C "bare metal" programmers assume that higher level languages are implemented. They make bad "optimization" decisions in projects they influence, because they have no idea how a compiler works or how different a good runtime system may be from the naive macro-assembler model they understand.
Then it hit me: C is just one more abstraction, like all others. Even the CPU itself is only an abstraction! I've just never seen it break, because I don't have the tools to measure it.
I'm confused. Has my mind been mutilated beyond recovery, like Dijkstra said about BASIC? Am I living in a constant state of premature optimization? Is there hope for me, now that I realized I know nothing about anything? Is there anything to know, even? And why is it so fascinating, that everything I've written in the last five years might have been fundamentally wrong?
To sum it up: is there any value in knowing more than the API docs tell me?
EDIT: Made CW. Of course this also means now you must post examples of the interpreter/runtime optimizing better than we do :)
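To start off the requested examples, here is a minimal sketch (mine, assuming CPython) where the runtime beats a hand-rolled implementation: the built-in sorted() is Timsort written in C, and it easily outruns a textbook insertion sort written in pure Python.

```python
import random
import timeit

def insertion_sort(items):
    """Hand-rolled O(n^2) sort in pure Python."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

data = [random.random() for _ in range(2000)]
# Time both approaches on the same data; the exact numbers will
# vary by machine, but the built-in wins by orders of magnitude.
hand = timeit.timeit(lambda: insertion_sort(data), number=5)
builtin = timeit.timeit(lambda: sorted(data), number=5)
print(f"hand-rolled: {hand:.3f}s  built-in: {builtin:.3f}s")
```

The "optimization" of dropping down to your own loop is exactly the kind of naive decision the quoted Slashdot comment warns about.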
Neither knowing C nor knowing the lower-level details of implementation hurt you -- in themselves. What can and will hurt you is if you consistently think and work in terms of the low-level details, even when it's inappropriate.
The old saying was that "real programmers can write FORTRAN in any language." If you do the same using C, it's not an improvement. If you're writing Lisp, write Lisp. If you're writing Python, write Python. What's appropriate and reasonable for C is not for either of those (or any of many others).
A great programmer needs to be able to think at many different levels of abstraction, and (more importantly) recognize and apply the level of abstraction that's appropriate to the task at hand.
Knowledge of C's level of abstraction doesn't hurt. Ignorance of alternatives can (and will).
Knowledge doesn't harm. Ever. Those who wrote bad code in a higher-level language did so because they never properly mastered that language; they were simply bad developers.
To a bad developer, any type of knowledge can be dangerous.
To a good developer, any type of knowledge is an asset.
Using languages - natural (spoken) or artificial (programming) - requires the mind to adapt in a certain way. Each language has its own grammar, its own vocabulary (APIs), etc. If you're mostly a Java programmer and switch to Ruby, you will at least follow the thought patterns of a Java programmer, if not write what is basically Java code in Ruby. It takes a bit of effort and practice until you begin to feel comfortable in the new environment (Ruby grammar, Ruby APIs) and start writing Ruby code.
So, the process is perfectly normal and any adverse effect of previous patterns is very short lived. What's more important, every language you learn broadens your horizons and makes learning the next one easier. Enjoy the journey. :]
Programming is not about programming languages. It is about solving problems. The tools used to solve the problems just happens to be programming languages. You don't write code to write code, you write code to execute it and solve the problem.
The better you know your tools, the better and faster you can solve the problems. But while you will have serious trouble when you physically try to drive a screw into wood using a hammer, software has a nice property: There are an absurdillion different solutions to any given problem.
So it is perfectly possible to write a hammer that hits a screw in such an angle that the screw will tell the wood to make a hole into itself so that the screw fits in. Then you can hide it behind a button, the user doesn't even need to know what a hammer actually is.
While it is not the most efficient solution, it is still a valid and working solution. When you get better with the tool you've used, finally you will discover how you can write a screwdriver when the API doesn't provide one.
The more tools you know and the more ways you know to solve any problem, the more choice you have and the better your decisions about which solution to use will be. Choose the right tool for the job. But how could you, when you don't know the tools and the possible solutions?
To expand on others' comments... While I'm not sure that I believe in the Whorfian Hypothesis (http://en.wikipedia.org/wiki/Whorfian_hypothesis) when it comes to natural languages, it's pretty clearly true when it comes to programming. The languages you know affect how you solve a problem. Two examples:
1) From a professor I had many, many years ago: He was trying to find out if there were any duplicates in his array of strings. This was in the '70s, so he was writing it in FORTRAN. His brute-force n^2 implementation was taking way too long. So he spoke to a friend. His friend knew PL/1 (I think it was, maybe it was APL), which has a sort operator. In that language you learn to sort things, and how useful that can be, because it's easy. The friend came up with the obvious sort-first, then look-at-adjacent-elements algorithm. Much faster, and it would not have occurred to my FORTRAN-writing professor, even though it was perfectly implementable in FORTRAN.
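The friend's algorithm can be sketched like this (the code and names are mine, not from the story; Python stands in for PL/1):

```python
def has_duplicates_bruteforce(items):
    # The professor's approach: compare every pair, O(n^2).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_sorted(items):
    # The friend's approach: after sorting, any duplicates must be
    # adjacent. O(n log n) for the sort plus O(n) for the scan.
    ordered = sorted(items)
    for a, b in zip(ordered, ordered[1:]):
        if a == b:
            return True
    return False
```

Both are expressible in FORTRAN; only one of them occurs to you if sorting feels cheap in your language.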
2) When I was in grad school, I had a Physics grad student for a roommate. He went to MIT, and only took one programming class, which was of course in Scheme. One day, I stopped by his office, and he said "Hey, Brian, can you take a look at this code and tell me if it should work?" It was a sorting routine. I glanced at it, and said it couldn't possibly work, because it was clearly bubblesort, and yet it only had one loop (and no, it wasn't the one funky loop that you can write bubblesort with if you are sick and twisted). So, I said that. He responded "Oh, but it has a recursive call at the bottom!" It never would have occurred to me to write a recursive bubblesort. But more importantly, it never would have occurred to HIM to write a non-recursive function!
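A recursive bubblesort of the kind described might look like this (a sketch of my own, translated from Scheme-style thinking into Python):

```python
def bubble_pass(items):
    # One pass of bubblesort: swap adjacent out-of-order pairs.
    # Returns True if anything was swapped, i.e. another pass is needed.
    swapped = False
    for i in range(len(items) - 1):
        if items[i] > items[i + 1]:
            items[i], items[i + 1] = items[i + 1], items[i]
            swapped = True
    return swapped

def recursive_bubblesort(items):
    # The "one loop" version: the outer loop over passes is replaced
    # by a recursive call at the bottom.
    if bubble_pass(items):
        recursive_bubblesort(items)
    return items
```

To a Scheme-trained mind the recursive call is the obvious way to repeat a pass; to a C-trained mind it is the last thing that comes to mind.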
The point being that the languages you know determine, to a large extent, the kind of code you will write. The more languages you know, the more tools you have, and more tools are never bad, as long as you know when to use each of them...
To be a really good programmer, you need to know C.
I agree with this.
But I also agree that to be a really good programmer, you have to really know how to write code in another language (not how to write C code in another language).
Knowing C will not hurt the quality of your code, but knowing only C surely will.
no.
If you lose your desire to know and improve though you are damaged.
Software engineering is about understanding abstraction and how to use abstraction to solve a problem efficiently (whether efficiently means lower cost, faster performance or shortest schedule to delivery of the functionality.) Understanding C is just another insight into a layer of the abstraction we use every day and the skill required to 'zoom in' to this level of detail is valuable, as long as you also develop the skill to 'zoom out' when necessary. This skill set will serve you well in every aspect of the discipline, whether it's designing an object model, setting up clean functional compositions, or even just structuring an individual method for clarity and maintainability.
Knowing different languages is an asset. Knowing how compilers and interpreters are built is also an asset. Finally, every programmer should spend some time in assembly language to appreciate the higher languages. :-)
At my university, we took a class, "Programming Languages", in which we learned LISP, SNOBOL and ADA. These languages open your mind to different conceptual thinking in solving the programming problems. The summary was to choose the language that best fits the problem.
Knowing a programming language is only the foundation. I would not be very far in my career if I didn't know other related topics: Data Structures, Algorithms, Linear Algebra, Boolean Algebra, Microprocessor Design and Communications (between people). Anybody can pick up a book, learn a language and call themselves a programmer. It's the other skills that differentiate a skilled developer from one pulled off the street.
Learn a programming language. Learn it well, so that you can focus more brain power on the other tasks at hand. You should not be referencing a programming manual often. Most of my concentration is on the requirements of the task and the algorithms and data structures to get it implemented correctly in the least amount of time.
Short and sweet:
To be a good programmer, you need to be able to think in an organized way. C or Lua or Java, whatever.
It'll only hurt if you apply that knowledge to higher-level languages when it really isn't required. Sure, with some low-level C experience in writing my own collection classes, I could also do that in, say, Java. But would it be a better alternative to the existing Collections library (both the Java API and the Commons Collections extras)? Maybe.
In practice, you'd have to calculate if the time invested is worth it.
In truth, you should simply do research before hacking away at your code. See if what you're trying to do can be done using built-in or 3rd-party tools. If it can, then see if those tools do what you want, and if they perform well. If they don't, find out why not. If they /really/ don't, rewrite.
As others have stated, all knowledge is worthwhile. And by that I mean /all/ - both the low-level optimized C code and the high-level calls to well-developed libraries. If you know both, you will know which one to use when, and why.
No, knowing multiple implementations of a programming language can only help you understand those abstractions better.
The problem is if you accept one as the best abstraction: that will keep you from succeeding with the others you perceive as lesser.
Each abstraction is designed with a specific, different goal; choose the one that works best for your needs. Does knowing Linux make knowing Windows or Mac OS harder? It's accepting that they are different.
The real problem here is assumption. Those other developers assume they know how it works. Assumptions are the devil whether it's from an experienced dev who thinks they have it all figured out or from a newbie who thinks they have it all figured out.
At least, that's what I'm assuming here
Learning C is a good thing. Trying to write C code in a higher level language is a bad thing.
That learning C (or any language, for that matter) could hurt you as a programmer seems to hinge upon you not being able to learn anything after having learned C. The lesson here should be: don't stop learning. In addition, don't assume that any other language or technology necessarily works like C (or your favorite C compiler).
That said, I think learning C can be a good way to learn how hardware works and what is actually occurring in the machine. I can't see how knowing this could hurt you in any way. I don't think being ignorant has any benefits (unless they are accidental). I do admit that learning C is not the only way to learn about the machine; it is merely one way to do so.
Understanding multiple languages/frameworks and programming paradigms should never hurt - it should be beneficial.
The important bit is to understand the language/framework/environment you are currently working in to the extent that you know the implications of making implementation choices. Here, knowledge gained in working with other languages may open your eyes to a wider range of possibilities - but you have to evaluate those possibilities in terms of your current environment.
The folks that get themselves into real trouble are those that learned some language, C for example, and then learn another language in terms of C as opposed to learning it for its own merits, strengths and weaknesses (kind of like the handyman with a hammer as his only tool - all problems look like nails to him).
Knowing C, and then working in my favorite very-high-level language (Python), for example, is an example of why I find it helpful to know C.
Knowing C, when I use Python is helpful in several ways:
(a) I am thankful for Python's lists, dictionaries, and built-in types, because they make it easy to do, repeatably, in one line of code, something that would require me to select some code library, learn it, and link to it (hash tables, data structures, etc.), and to avoid shooting myself in the foot with it.
(b) Python is written in C. Being a C programmer also means that if Python gets me 99% of the way there, but some additional abstraction would be handy, I can write that abstraction in C. I can look into the source code of the CPython interpreter and understand what is happening internally. As a Python programmer, I am, in effect, still using something built atop the C language. Thus, knowing that language is still valuable.
Everything I said above is also true of people using Perl, Ruby, and PHP.
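Point (a) can be illustrated with a small sketch of mine: the hash table that a C programmer would have to pick a library for is simply there in Python.

```python
from collections import Counter

# In C this would mean choosing a hash-table library, learning its
# API, and linking against it; in Python the hash table is built in.
words = "the quick brown fox jumps over the lazy dog the end".split()
counts = Counter(words)          # dict subclass: word -> occurrences
print(counts.most_common(2))     # the two most frequent words
```

Knowing what a hash table costs in C is exactly what makes you appreciate getting one for free here.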