
Should I use Hungarian notation when coding WinAPI apps? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.

Closed 8 years ago.

I've recently started learning the Win32 API, and I hate Hungarian notation (those stupid prefixes in variable names that make the code look ugly and almost unreadable). However, as you may know, it is absolutely everywhere in there, and that fact pushes everyone to use it in their own code as well, just to keep some kind of consistency... I suppose it's a silly question, but should I do the same? Will my code look strange or wrong if I don't?


It seems to be everywhere only because it's Microsoft's internal coding standard. I personally use it only in its original (intended) sense, where it sometimes does make sense (e.g., I prefer int size_kb to defining a separate type for a size in kilobytes).
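As a rough sketch of what I mean (the names and values here are made up purely for illustration), carrying the unit in the name keeps the conversion visible at the call site without inventing a whole type for it:

```cpp
#include <cstdio>

int main() {
    int size_kb = 256;               // the unit lives in the name, not the type
    int size_bytes = size_kb * 1024; // so the conversion is explicit where it happens

    std::printf("%d KB = %d bytes\n", size_kb, size_bytes);
    return 0;
}
```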

Also check this article: http://www.joelonsoftware.com/articles/Wrong.html


No -- the time for HN is long past. In fact, it's a good example of a second-system type of artifact that was really obsolete before it was even invented. A long time ago, the idea kind of made sense. When most compilers did little or no type checking, a somewhat systematic method of checking types by hand was an idea that at least had enough merit to be worth considering.

That simply isn't the case any more, though (and hasn't been for decades). Even C compilers do real type checking nowadays, and compilers for nearly all other languages do considerably more still. Even a compiler that was obsolete and mediocre a decade ago does this job far more effectively than you could ever hope to by encoding type information into variable names. For that matter, even assemblers now do enough type checking to prevent many, if not most, of the kinds of problems HN was intended to help with.

Worse, people who use HN frequently seem to assume it will do something useful, and because of that they ignore what the compiler could do. As a result, they actually end up with code that's considerably worse (fragile, borderline buggy) than if they hadn't used it at all. Huge amounts of Microsoft code display this sort of problem (e.g., DWORD getting used for all sorts of incompatible purposes, and people figuring that assigning one to another makes sense because they both have "dw" prefixes).
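For example, here is a made-up sketch of the problem (not actual Microsoft code; the variables and the stand-in typedef are invented for illustration): nothing in the names or the type stops a meaningless assignment from compiling cleanly.

```cpp
typedef unsigned long DWORD;    // stand-in for the Win32 typedef

int main() {
    DWORD dwTimeoutMs = 5000;   // a duration in milliseconds
    DWORD dwThreadId  = 0x1A4;  // an opaque thread identifier

    dwTimeoutMs = dwThreadId;   // compiles without complaint; the matching "dw"
                                // prefixes suggest a compatibility that isn't there
    (void)dwTimeoutMs;
    return 0;
}
```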

Worse still, code that uses HN is much less maintainable than code without it. As a prime example, look around at how much current code still uses an "lpsz" prefix, even though the concept of "long" versus "short" pointers was last relevant in Windows 3.1, close to 20 years ago now. In theory, this should all have been fixed during the migration to 32-bit code ~15 years ago -- but that would require huge amounts of work editing code for essentially no benefit (since the HN is worthless anyway, having a "correct" psz wouldn't really be any better than the "incorrect" lpsz).


NO!!!


If you're working on a larger team that has adopted a coding convention (HN or not), then follow it, even if you hate it (with one exception described below). If you're just coding for yourself, or you're on a team that has a more laissez faire attitude towards style, then go with what makes life easier for you.

The problem with coding conventions (especially something as artificial as HN) is that everybody has to follow them; if only half the team bothers to use HN properly, then nobody gets any benefit out of it at all, and you've just made your code harder to read for nothing.

Having said that, do not follow the idiotic convention of prefixing integer variables with i or strings with lpsz; the sooner that bit of silliness dies off, the better. HN was intended to convey more abstract semantics; the goal was to identify which objects were safe or appropriate to use for a particular operation (the Joel on Software article linked to by ruslik makes a pretty good case for it). In other words, variable names should denote usage, not type, and HN was an attempt to codify usage.

Personally, I prefer using meaningful English (or native language of your choice) names that tell you explicitly what you're dealing with, rather than some artificial coding scheme; going from Joel's example linked above, I'd just use names like rawName and encodedName as opposed to usName and sName. It conveys much the same semantic information, and it reads better as well (IMO, anyway).
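To make that concrete, here is a small sketch along the lines of Joel's example; the htmlEncode helper and the literal values are mine, invented purely for illustration:

```cpp
#include <cstdio>
#include <string>

// Toy stand-in: a real encoder would escape every HTML special character.
static std::string htmlEncode(const std::string& text) {
    std::string out;
    for (char c : text) {
        if (c == '<')      out += "&lt;";
        else if (c == '>') out += "&gt;";
        else               out += c;
    }
    return out;
}

int main() {
    std::string rawName = "<Bobby>";               // Apps Hungarian would call this usName
    std::string encodedName = htmlEncode(rawName); // ...and this sName
    std::printf("%s\n", encodedName.c_str());      // only the encoded form goes to output;
                                                   // printing rawName would be the bug the
                                                   // naming is meant to make visible
    return 0;
}
```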


Joel on Software has a fantastic article explaining (a) why you SHOULD hate the Hungarian notation you see in Windows apps, and (b) why you should seriously consider using it anyway.

If you are developing in C, the flavor of notation called Apps Hungarian is indispensable and will help you write correct code. If you are developing in a type-safe OO language like C++, you should NOT use Hungarian, as it's just not needed.
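For what it's worth, here is a hedged sketch of why the prefix becomes unnecessary in a type-safe language (the RawString/EncodedString types, htmlEncode, and writeToPage are all invented for illustration): distinct wrapper types let the compiler reject the very mix-up that Apps Hungarian only documents.

```cpp
#include <cstdio>
#include <string>

struct RawString     { std::string value; };   // untrusted, unencoded text
struct EncodedString { std::string value; };   // text already safe to emit

// Toy conversion; a real encoder would escape every HTML special character.
static EncodedString htmlEncode(const RawString& raw) {
    EncodedString out;
    for (char c : raw.value) {
        if (c == '<')      out.value += "&lt;";
        else if (c == '>') out.value += "&gt;";
        else               out.value += c;
    }
    return out;
}

// Only already-encoded text can be written out.
static void writeToPage(const EncodedString& text) {
    std::printf("%s\n", text.value.c_str());
}

int main() {
    RawString userInput{"<b>hello</b>"};
    writeToPage(htmlEncode(userInput));   // fine
    // writeToPage(userInput);            // would not compile: RawString is not EncodedString
    return 0;
}
```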
