This is a best practices question. I am making an array
type * x = malloc(size*sizeof(type));
AFAIK sizeof gives a return value of size_t. Does that mean that I should use a size_t to declare, or pass around, size? Also, when indexing the array, should I use a size_t for the index variable? What is the best practice for these? This is not something that they taught in school, and now that I'm getting into serious C++ I want to know.
Also, if anyone has references for where I can find best practices for this kind of stuff, that would be helpful. Kind of an "etiquette for programmers" book.
EDIT: The malloc should be cudaHostAlloc or cudaMalloc, since I am developing a class that stores an array simultaneously on the device and host, and updates both at the same time. So malloc here is just a placeholder for what I'll actually be doing.
In general, I use whatever minimizes the number of implicit or explicit casts and warnings. Generally there is a good reason why things are typed the way they are. size_t is a good choice for an array index, since it's unsigned and you don't generally want to access myarray[-1], say.
btw, since this is C++ you should get out of the habit of using malloc (and free), which are part of the CRT (C runtime library). Use new (and delete), preferably with smart pointers to minimize manual memory handling.
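A minimal sketch of that progression (the function names here are just illustrative): new[]/delete[] as a direct malloc/free replacement, then a smart pointer that frees itself, then std::vector, which also carries its own size:

```cpp
#include <memory>
#include <vector>

// malloc/free replaced by new[]/delete[] — still manual cleanup:
double first_with_new(std::size_t n) {
    double* x = new double[n]();   // value-initialized to 0.0
    x[0] = 42.0;
    double result = x[0];
    delete[] x;                    // must remember to free manually
    return result;
}

// A smart pointer frees the array automatically on scope exit:
double first_with_unique_ptr(std::size_t n) {
    std::unique_ptr<double[]> x(new double[n]());
    x[0] = 42.0;
    return x[0];                   // no delete[] needed
}

// std::vector is usually the simplest choice: it frees itself
// and, unlike a raw pointer, knows how many elements it holds:
double first_with_vector(std::size_t n) {
    std::vector<double> x(n, 0.0);
    x[0] = 42.0;
    return x[0];
}
```

For host-side buffers, std::vector covers most needs; the raw-pointer styles mainly survive where an API (like CUDA's allocators) hands you a pointer directly.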
Once you have mastered the basics, a good practices reference (language-specific) is Effective C++ by Scott Meyers. The logical next step is Effective STL.
In reference to your follow-on question:
The best reference I have used for general high-level programming "current good practices" sort of thing is:
Code Complete by Steve McConnell (ISBN 0-7356-1967-0)
I reference it all the time. When my company formalized its coding standards, I wrote them based off of it. It doesn't go into design or architecture as much, but for actually banging out code, the book is appropriately named.
cudaMalloc takes a size of type size_t, so for consistency, that's what you should use for indices.
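As an illustration of that consistency (using std::vector as a stand-in for a cudaMalloc'd buffer, since cudaMalloc likewise takes its byte count as a size_t): the size, the loop bound, and the index all share one type, so no casts or sign-conversion warnings appear:

```cpp
#include <cstddef>
#include <vector>

// Fill a buffer of n elements; n and the loop index are both size_t,
// matching the size type that cudaMalloc (and malloc/new) expect.
std::vector<float> make_ramp(std::size_t n) {
    std::vector<float> buf(n);
    for (std::size_t i = 0; i < n; ++i)
        buf[i] = static_cast<float>(i);  // buf[i] = 0, 1, 2, ...
    return buf;
}
```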
Well, since you've already abandoned best practice, even GOOD practice, by using malloc...why does it really matter?
That said, I generally use size_t unless I need a type that can go negative for various semi-rare conditions.
I would prefer int over size_t for a couple of reasons:
- primitive types should be preferred unless the typedef provides something fundamentally new, and size_t here doesn't
- size_t is defined differently on different systems, possibly creating surprises for some developers
- signed int avoids for (size_t i = 9; i >= 0; --i) bugs, as well as other bugs in conditionals, e.g. if (result < 1) or if ((i - 2) / 2 < 1)
- size_t is a useless abstraction that masks undesired behavior by silently wrapping on underflow
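The countdown bug above comes from unsigned wraparound: decrementing a size_t that holds 0 produces SIZE_MAX rather than -1, so the condition i >= 0 is always true and the loop never terminates. A small sketch (function names are illustrative), including the usual fix of testing i > 0 and indexing with i - 1:

```cpp
#include <cstddef>
#include <cstdint>

// Decrementing an unsigned zero wraps to the maximum value instead of
// going negative — which is why for (size_t i = 9; i >= 0; --i) never ends.
std::size_t decrement_zero() {
    std::size_t i = 0;
    --i;                            // wraps around to SIZE_MAX
    return i;
}

// A correct unsigned countdown: test i > 0 and use i - 1 as the index.
std::size_t reverse_iteration_count(std::size_t n) {
    std::size_t visits = 0;
    for (std::size_t i = n; i > 0; --i) {
        std::size_t index = i - 1;  // runs from n - 1 down to 0
        (void)index;                // index would be used here
        ++visits;
    }
    return visits;
}
```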