I am new to C, and am having some fun playing around with bitwise operations. What I am doing seems to work, I am just wondering if it is considered good style.
Alright, so let's say my program has 3 command-line options: -a, -b, and -c. Previously I would have had 3 ints act as booleans, say aFlag, bFlag, and cFlag. Then when I call my processOpts(int *aFlag, int *bFlag, int *cFlag) function, I would pass &aFlag, &bFlag, and &cFlag as arguments, and set them like *aFlag = 1.
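For concreteness, here's a minimal sketch of that first approach (I've added argc and argv to the prototype so the sketch is self-contained; the hand-rolled scanning loop is only illustrative):

void processOpts(int argc, char **argv,
                 int *aFlag, int *bFlag, int *cFlag)
{
    int i;
    for (i = 1; i < argc; i++)
    {
        if (argv[i][0] == '-')
        {
            switch (argv[i][1])
            {
                case 'a': *aFlag = 1; break;
                case 'b': *bFlag = 1; break;
                case 'c': *cFlag = 1; break;
            }
        }
    }
}

int main(int argc, char *argv[])
{
    int aFlag = 0, bFlag = 0, cFlag = 0;
    processOpts(argc, argv, &aFlag, &bFlag, &cFlag);
    /* ... use aFlag, bFlag, cFlag ... */
    return 0;
}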
Here's my new approach: have 1 int *options to represent all of the options, and treat it like an array of booleans. So to set them in the function:
case 'a':
    *options |= 1;
    break;
case 'b':
    *options |= 2;
    break;
case 'c':
    *options |= 4;
    break;
Then, back in main (or wherever), when I want to test to see if an option is chosen:
if ( *options & 1 )
    // Option 'a' selected
if ( *options & 2 )
    // Option 'b' selected
if ( *options & 4 )
    // Option 'c' selected
My question is: which method is considered better style? The first way could be clearer and less error-prone, whereas the second would probably make for easier refactoring (no need to change the function prototype, as it's just one int).
Or, is there an even better way to do this? :D
EDIT: added breaks per Mat's suggestion.
Thanks for all the responses, I am quite impressed with this community and its willingness to help everybody learn—you guys rock!
Using a single variable to represent a set of boolean flags works well if the flags are related; in general, I'd avoid doing this if they were not. However, in your case, where the 3 flags all relate to the program and how it runs, I'd say this is a good use.
As far as the implementation goes, rather than using hard-coded constant values for your flags, you should define macros or an enum to represent them. That way it is clear which flag you are setting (or clearing). Also, it's probably not necessary to use pointers here as you have in your code.
e.g.,
enum option
{
    OPTION_none = 0,
    OPTION_a = 0x1,
    OPTION_b = 0x2,
    OPTION_c = 0x4,
};

enum option processOpts(int argc, char **argv)
{
    enum option options = OPTION_none;
    if (argc > 1)
    {
        int i;
        for (i = 1; i < argc; i++)
        {
            if (argv[i][0] == '-')
            {
                switch (argv[i][1])
                {
                    case 'a': options |= OPTION_a; break;
                    case 'b': options |= OPTION_b; break;
                    case 'c': options |= OPTION_c; break;
                }
            }
        }
    }
    return options;
}

int main(int argc, char *argv[])
{
    enum option options = processOpts(argc, argv);
    if (options & OPTION_a)
    {
        /* do stuff for option a */
    }
    if (options & OPTION_b)
    {
        /* do stuff for option b */
    }
    if (options & OPTION_c)
    {
        /* do stuff for option c */
    }
    return 0;
}
The better way to do this is to use an unsigned integer type to represent a collection of flags. Otherwise, your new approach is in many respects far superior to a bunch of individual flags.
Of course, the bit-masks that correspond to each option should be represented by named constants instead of "magic numbers" (1, 2, 4...) in the middle of the code.
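For example, a minimal sketch of what that looks like (the OPT_* names here are just illustrative):

#define OPT_A 0x1u
#define OPT_B 0x2u
#define OPT_C 0x4u

void example(void)
{
    unsigned int options = 0;   /* unsigned type holds the flag set */

    options |= OPT_B;           /* set option b   */
    if (options & OPT_B)        /* test option b  */
    {
        /* ... */
    }
    options &= ~OPT_B;          /* clear option b */
}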
This is fine - you're just packing bits into an int - the only suggestion I would make is that 1, 2, 4 etc should be symbolic constants, and not literal hard-coded values as above, e.g.
enum {
    OPTION_A = 1 << 0,
    OPTION_B = 1 << 1,
    OPTION_C = 1 << 2,
};
Using bit flags is imho better style, because the scheme can easily be extended if you happen to need more command-line options.
However, I'd strongly recommend that you not use plain magic numbers; instead, do something like this:
#define OPTION_A 1
#define OPTION_B 2
...

if (*options & OPTION_A) {
    ...
}
This makes checking for options more or less self-explanatory, while bitwise ops on raw numbers always smell a bit like hidden magic.
Your new approach (with an unsigned variable) is usual (and idiomatic).
It is usual to define the 1, 2, 4, etc. with an enum (or #define) and give them names:
enum {
    OPTION_LEFT_TO_RIGHT = 1,
    OPTION_TOP_TO_BOTTOM = 2,
    OPTION_BOTTOM_TO_TOP = 4,
    OPTION_RIGHT_TO_LEFT = 8,
};
...
case 'a': *options |= OPTION_BOTTOM_TO_TOP; break;
...
if (*options & OPTION_RIGHT_TO_LEFT) { /* ... */ }
As I suggested in my comment, my preference would be to use bit fields to do this. For me, they have a slightly more intuitive usage, whilst allowing the compiler to do most of the work for me (checking the assembly output, at least with my compiler, confirms it is doing the expected AND/OR operations when checking/setting flags). So, taking Jeff M's solution as a starting point, I'd alter it to look like this:
// Assume TRUE defined as 1
struct option
{
    unsigned int OptionA : 1; // defines 1 bit for option A flag
    unsigned int OptionB : 1; // next bit is for option B
    unsigned int OptionC : 1; // note, you can define more than one bit per flag
                              // if more complex options are required.
};

struct option processOpts(int argc, char **argv)
{
    struct option options = {0,0,0}; // explicit setting of all flags to false
                                     // compiler optimizes to options=0
    if (argc > 1)
    {
        int i;
        for (i = 1; i < argc; i++)
        {
            if (argv[i][0] == '-')
            {
                switch (argv[i][1])
                {
                    case 'a': options.OptionA = TRUE; break;
                    case 'b': options.OptionB = TRUE; break;
                    case 'c': options.OptionC = TRUE; break;
                }
            }
        }
    }
    return options;
}

int main(int argc, char *argv[])
{
    struct option options = processOpts(argc, argv);
    if (options.OptionA)
    {
        /* do stuff for option a */
    }
    if (options.OptionB)
    {
        /* do stuff for option b */
    }
    if (options.OptionC)
    {
        /* do stuff for option c */
    }
    return 0;
}
The approach you have wriggled your way to is an outstanding one. It might even be the "best" way. Congratulations! Well done.
-- pete
The usual approach to parsing command-line options is to use getopt(). A good example of its usage can be found in its man page.
Note that while getopt() conforms to POSIX.2 and POSIX.1-2001, getopt_long() is a GNU extension and should IMO be avoided.
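As a minimal sketch, getopt() applied to the -a/-b/-c options from the question might look like this (the OPT_* constants are illustrative, not part of getopt()):

#include <unistd.h>

enum { OPT_A = 1 << 0, OPT_B = 1 << 1, OPT_C = 1 << 2 };

int main(int argc, char *argv[])
{
    unsigned int options = 0;
    int opt;

    /* "abc" says: accept -a, -b, and -c, none of which takes an argument */
    while ((opt = getopt(argc, argv, "abc")) != -1)
    {
        switch (opt)
        {
            case 'a': options |= OPT_A; break;
            case 'b': options |= OPT_B; break;
            case 'c': options |= OPT_C; break;
            default:  return 1; /* getopt() has already printed a diagnostic */
        }
    }

    if (options & OPT_A)
    {
        /* do stuff for option a */
    }
    return 0;
}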