
Learning Java and logic using debugger. Did I cheat? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.

Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.

Closed 9 years ago.


After a break from coding in general, my way of thinking logically has faded (as if it was there to begin with...). I'm no master programmer. Intermediate at best. I decided to see if I could write an algorithm to print out the Fibonacci sequence in Java. I got really frustrated because it was something so simple, so I used the debugger to see what was going on with my variables. Solved it in less than a minute with the help of the debugger.
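For context, the kind of exercise being described might look something like the iterative sketch below. This is purely illustrative, not the poster's actual code; the reordering of previous and current at the bottom of the loop is exactly the sort of detail a debugger makes visible.

    // Minimal iterative Fibonacci printer (illustrative sketch only).
    public class Fibonacci {
        public static void main(String[] args) {
            int count = 10;    // how many terms to print
            long previous = 0; // F(0)
            long current = 1;  // F(1)
            for (int i = 0; i < count; i++) {
                System.out.print(previous + " ");
                long next = previous + current; // F(n) = F(n-1) + F(n-2)
                previous = current;
                current = next;
            }
            System.out.println(); // prints: 0 1 1 2 3 5 8 13 21 34
        }
    }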

Is this cheating?

When I read code, either from a book or someone else's, I now find that it takes me a little more time to understand. If the algorithm is complex (to me), I end up writing notes about what's going on in the loop. A primitive debugger, if you will.

When you other programmers read code, do you also need to write things down to track what the code is doing? Or are you a genius who just retains it?


No, it's not cheating.

I think that if your sense of programming "logic" has faded a bit, then the absolute, best, 100% way to refresh or even learn this is to watch code in the debugger.

This is such a cool idea that if I ever teach a beginning programming class again, I should have a computer right there running code in the debugger so that the students can watch what happens.

In answer to your second question: if I really had to worry about what the code was doing, then I'd start writing things down. When I'm looking at code by navigating around in Eclipse, though, I rarely have to, because the history of where I just was is readily available. If that history were not recorded by the computer, I would absolutely be scribbling furiously on a pad as I navigated around the code.


This was something I needed time to realize:

Understanding the code written by someone else is not voodoo magic, it's just practice.

It's not a matter of intelligence or logic; it's a skill you develop by actually having to understand other people's code. I really began to understand this, and to build this skill, when I started working, because I needed to make changes in others' code.

Don't be afraid: the more code you read, the easier it gets.


It's not "cheating" to use the debugger to find bugs or to observe your program behavior, but you have to be careful not to let it turn into a crutch. Too much reliance on the debugger can also lead to "programming by accident", which is also not very productive. Also, you really want to be able to conceptualize how something is supposed to work before you even observe in the debugger whether it works the way you think it should.

Programming is largely an abstract, mental activity. You have to work out in your head how something is going to work (the design), then you go and write the code (the implementation). The more you can work out in your head how something is going to work, the more productive you will be in the long run.

As others mentioned, there are many times when you can't use a debugger. I think in the long run you will be best served by writing your code so it is easier to understand its behavior.


Even the most experienced programmers turn to the debugger for answers and clarification, or write plain old printf's to understand program state, or write things down when they're reading or reviewing someone else's code.

I think that learning anything by looking at what happens under the hood is not cheating; indeed, you'll come away with a clearer, more concrete understanding than you would from just reading books and holding it as an abstract idea.

So no, it isn't cheating at all.


Using a debugger and step-by-step execution is one of the best ways to understand the internals of code, libraries, and APIs, and to learn. So it's definitely not cheating; it's learning, it's gaining knowledge.


Most of the time the exercise is to get you to think about what might happen in the code, not what does happen on a particular run. So running through a couple of times with a debugger might help, but you still have to do the work to generalise from those specific runs. For algorithms, this often means thinking about how the paths grow with increasing input size. For concurrent programs, this means thinking about how the paths of different execution threads will interact with each other in your code. A debugger won't tell you these things, however many times you run it.

Stepping through with a debugger can only tell you what did happen in one trial; it won't train you to think about your program abstractly - it's one apple dropping from a tree, not the theory of gravity.
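As a concrete illustration (my example, not part of the answer above), consider a naive recursive Fibonacci. Stepping through one call in a debugger shows a single call tree; the fact that the number of calls grows roughly exponentially with n is something you have to work out by reasoning about the recurrence, though a call counter can hint at it:

    // Illustrative sketch: the growth of the call count has to be reasoned
    // about; a single debugger run only shows one concrete call tree.
    public class FibGrowth {
        static long calls = 0;

        static long fib(int n) {
            calls++;
            if (n < 2) {
                return n;
            }
            return fib(n - 1) + fib(n - 2);
        }

        public static void main(String[] args) {
            for (int n = 10; n <= 30; n += 10) {
                calls = 0;
                long result = fib(n);
                System.out.println("fib(" + n + ") = " + result + ", calls = " + calls);
            }
        }
    }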


It's OK to use the debugger to try to figure out why something happened, especially in mysterious code, but it's better to try to predict what should happen first and then see if the debugger confirms your reasoning and intuition. In the future this will help you write test cases that catch unanticipated errors, and increasingly you'll write code that works from the start.
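A small, hypothetical sketch of that "predict first, then verify" habit: write down the expected values by hand before running anything, and let the program confirm or refute them (the fib method here is just an iterative stand-in, not code from the answer):

    // Predict-then-verify sketch: expected values are worked out by hand first.
    public class FibCheck {
        static long fib(int n) {
            long previous = 0, current = 1;
            for (int i = 0; i < n; i++) {
                long next = previous + current;
                previous = current;
                current = next;
            }
            return previous;
        }

        public static void main(String[] args) {
            long[] expected = {0, 1, 1, 2, 3, 5, 8, 13}; // predicted by hand
            for (int n = 0; n < expected.length; n++) {
                long actual = fib(n);
                if (actual != expected[n]) {
                    System.out.println("Mismatch at n=" + n + ": expected "
                            + expected[n] + ", got " + actual);
                }
            }
            System.out.println("Check complete.");
        }
    }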


Absolutely not! That's the exact opposite of cheating! The best way to learn what's happening is to get in there in the guts of things and watch it happen. Reading code out of a book can be eye-glazingly boring... but watching it execute in the debugger can be magic, especially if you are having problems with a particular section.

I NEVER release code I have written without having run it in a debugger and watched almost all of it.


I agree that you didn't cheat in the general sense. I can only see two cases where you could conceivably consider using a debugger as cheating:

  1. you have set a bar for yourself where you want to complete the task at hand without any aid such as a debugger. Obviously if you use a debugger in this case you can consider it cheating.

  2. you have been given instructions by your teacher or the proctor for an interview test not to use external tools.


Well, it's cheating if you're taking a midterm for a class and there's a "programming" problem that requires you to analyze something on paper, without running it. But I always hated that kind of test, precisely because it didn't seem at all useful - it's nothing like actually programming. And heck, even there, you would still probably end up "running" it by writing down intermediate results in your notes.

I do find that it's good not to over-rely on an actual debugger too much, because occasionally you have to rely on simpler methods of debugging (if, say, you're trying to debug a problem that can only be reproduced on a computer owned by a tester who doesn't want you fiddling about on her computer). But even there, you're still running code and seeing what happens (and possibly adding "debugging" message boxes or writing-text-to-disk functions). Trying to read any program much more complicated than "Hello World" without actually running it (even on paper) isn't avoiding cheating, it's masochism.
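A minimal sketch of that kind of fallback (purely illustrative; the answer doesn't prescribe any particular technique): temporary trace output written to a file, so a run on a machine where you can't attach a debugger still leaves a record of what happened.

    // Illustrative "debugging without a debugger": append trace lines to a file.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class TraceDemo {
        public static void main(String[] args) throws IOException {
            Path log = Path.of("trace.log");
            long previous = 0, current = 1;
            for (int i = 0; i < 10; i++) {
                // One line per iteration, so the run leaves a record behind.
                Files.writeString(log,
                        "i=" + i + " previous=" + previous + " current=" + current
                                + System.lineSeparator(),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                long next = previous + current;
                previous = current;
                current = next;
            }
            System.out.println("Trace written to " + log.toAbsolutePath());
        }
    }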

That said, it is true, you start having to do this less the more you've seen of a particular language / class of problem. But certainly, if there's ever a bug, even a bug in code you've seen similar versions of millions of times, the fastest and least painful way of finding it is always going to be running the code and seeing what it does.


I think there is extreme value in debuggers and they can be especially great as a learning tool. Sometimes you simply need them to diagnose a defect. However, I think bad programming habits can emerge if you continually rely on a debugger as you develop software.

The perfectly efficient software developer understands the abstractions that she works with -- the languages, frameworks, OSes -- well enough to be able to declare her problem to the system and know that it is correct. Of course this is an unrealistic ideal, but I think it is the right thing to strive for. If you find yourself frequently in a debugger, it means you don't understand your abstractions and this means you are going to move a lot slower when writing code. Even if you sort out that one issue with the help of the debugger, your ignorance is going to slow you down as you try to layer on capability and features.

Clearly you're in "learning" mode, so whatever gets you to that level of understanding depends on your learning style. If the debugger helps you get there, great. But if you're looking to be productive, it is a good goal to reach a level of understanding where you can get code correct as you write it, not when you run it.

Related to this question is Linus Torvalds' rant on debuggers, which I particularly like:

http://lwn.net/2000/0914/a/lt-debugger.php3
