Thursday, January 31, 2019

Why Does C++ Support Hex Assignment, But Lack Binary Assignment? How best to store flags?



I have a set of bit flags that are used in a program I am porting from C to C++.



To begin...




The flags in my program were previously defined as:



/* Define feature flags for this DCD file */
#define DCD_IS_CHARMM 0x01
#define DCD_HAS_4DIMS 0x02
#define DCD_HAS_EXTRA_BLOCK 0x04


...Now, I've gathered that #defines for constants (versus class constants, etc.) are generally considered bad form in C++.




This raises two questions: how best to store bit flags in C++, and why C++ doesn't support assigning a binary literal to an int the way it allows hex values to be assigned (via the "0x" prefix). Both questions are summarized at the end of this post.



I could see one simple solution is to simply create individual constants:



namespace DCD {
    const unsigned int IS_CHARMM       = 1;
    const unsigned int HAS_4DIMS       = 2;
    const unsigned int HAS_EXTRA_BLOCK = 4;
};



Let's call this option 1.
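For reference, I use these flags in the usual way, combining and testing them with bitwise operators; the flags variable below is just illustrative:



unsigned int flags = DCD::IS_CHARMM | DCD::HAS_EXTRA_BLOCK;   /* set two flags */

if (flags & DCD::HAS_EXTRA_BLOCK) {
    /* the extra block is present */
}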



Another idea I had was to use an integer enum:



namespace DCD {
    enum e_Feature_Flags {
        IS_CHARMM       = 1,
        HAS_4DIMS       = 2,
        HAS_EXTRA_BLOCK = 8
    };
};


But one thing that bothers me about this is that it seems less intuitive once the values get larger, e.g.:



namespace DCD {
    enum e_Feature_Flags {
        IS_CHARMM       = 1,
        HAS_4DIMS       = 2,
        HAS_EXTRA_BLOCK = 8,
        NEW_FLAG        = 16,
        NEW_FLAG_2      = 32,
        NEW_FLAG_3      = 64,
        NEW_FLAG_4      = 128
    };
};


Let's call this approach option 2.




I'm considering using Tom Torf's macro solution:



#define B8(x) ((int) B8_(0x##x))

#define B8_(x) \
    ( ((x) & 0xF0000000) >> ( 28 - 7 ) \
    | ((x) & 0x0F000000) >> ( 24 - 6 ) \
    | ((x) & 0x00F00000) >> ( 20 - 5 ) \
    | ((x) & 0x000F0000) >> ( 16 - 4 ) \
    | ((x) & 0x0000F000) >> ( 12 - 3 ) \
    | ((x) & 0x00000F00) >> (  8 - 2 ) \
    | ((x) & 0x000000F0) >> (  4 - 1 ) \
    | ((x) & 0x0000000F) >> (  0 - 0 ) )
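

For illustration, my understanding is that the macro would be used roughly like this (my own example, not part of the original macro):



unsigned int flags = B8(00000101);   /* expands to 0x05, i.e. bits 0 and 2 set */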


The macro could then be converted to inline functions, e.g.:



#include <sstream>
#include <stdexcept>
#include <string>

....

/* TAKEN FROM THE C++ LITE FAQ [39.2]... */
class BadConversion : public std::runtime_error {
public:
    BadConversion(std::string const& s)
        : std::runtime_error(s)
    { }
};


inline unsigned int convertToUI(std::string const& s)
{
    std::istringstream i(s);
    unsigned int x;
    if (!(i >> std::hex >> x))   /* the string carries a "0x" prefix, so parse it as hex */
        throw BadConversion("convertToUI(\"" + s + "\")");
    return x;
}
/** END CODE **/


inline unsigned int B8(std::string x) {
    /* prepend "0x" so the digit string is parsed as hex, then spread each nibble into a bit */
    unsigned int my_val = convertToUI(x.insert(0, "0x"));
    return ((my_val) & 0xF0000000) >> ( 28 - 7 ) |
           ((my_val) & 0x0F000000) >> ( 24 - 6 ) |
           ((my_val) & 0x00F00000) >> ( 20 - 5 ) |
           ((my_val) & 0x000F0000) >> ( 16 - 4 ) |
           ((my_val) & 0x0000F000) >> ( 12 - 3 ) |
           ((my_val) & 0x00000F00) >> (  8 - 2 ) |
           ((my_val) & 0x000000F0) >> (  4 - 1 ) |
           ((my_val) & 0x0000000F) >> (  0 - 0 );
}

namespace DCD {
    enum e_Feature_Flags {
        IS_CHARMM       = B8("00000001"),
        HAS_4DIMS       = B8("00000010"),
        HAS_EXTRA_BLOCK = B8("00000100"),
        NEW_FLAG        = B8("00001000"),
        NEW_FLAG_2      = B8("00010000"),
        NEW_FLAG_3      = B8("00100000"),
        NEW_FLAG_4      = B8("01000000")
    };
};


Is this crazy, or does it seem more intuitive? Let's call this option 3.



So to recap, my over-arching questions are:



1. Why doesn't C++ support a "0b" prefix for binary literals, the way it supports "0x" for hex?
2. Which is the best style for defining the flags:
i. Namespace-wrapped constants.
ii. Namespace-wrapped enum of unsigned ints assigned directly.
iii. Namespace-wrapped enum of unsigned ints assigned using readable binary strings.




Thanks in advance! And please don't close this thread as subjective, because I really want help deciding which style is best and understanding why C++ lacks built-in binary literal support.






EDIT 1



A bit of additional info: I will be reading a 32-bit bit field from a file and then testing it against these flags, so bear that in mind when you post suggestions.
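

In other words, the usage pattern will look roughly like the following sketch (the function name, file path, and stream handling are placeholders):



#include <cstdint>
#include <fstream>

void read_dcd_flags(const char* path)   /* placeholder name for illustration */
{
    std::ifstream in(path, std::ios::binary);
    std::uint32_t bitfield = 0;
    in.read(reinterpret_cast<char*>(&bitfield), sizeof(bitfield));   /* raw 32-bit flag field */

    if (bitfield & DCD::IS_CHARMM) {
        /* CHARMM-style DCD handling */
    }
    if (bitfield & DCD::HAS_EXTRA_BLOCK) {
        /* an extra block follows */
    }
}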


Answer



Prior to C++14, binary literals had been discussed off and on over the years, but as far as I know, nobody had ever written up a serious proposal to get it into the standard, so it never really got past the stage of talking about it.




For C++14, somebody finally wrote up a proposal and the committee accepted it, so if you can use a current compiler, the basic premise of the question is false: you can use binary literals, which have the form 0b01010101.
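

For example, with C++14 the enum from the question can be written directly in binary, keeping the same values used above:



namespace DCD {
    enum e_Feature_Flags {
        IS_CHARMM       = 0b00000001,
        HAS_4DIMS       = 0b00000010,
        HAS_EXTRA_BLOCK = 0b00001000,
        NEW_FLAG        = 0b00010000,
        NEW_FLAG_2      = 0b00100000,
        NEW_FLAG_3      = 0b01000000,
        NEW_FLAG_4      = 0b10000000
    };
};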



In C++11, instead of adding binary literals directly, they added a much more general mechanism: user-defined literals, which you could use to support binary, or base 64, or other kinds of things entirely. The basic idea is that you write a number (or string) literal followed by a suffix, and you can define a function that receives that literal and converts it to whatever form you prefer (and you can maintain its status as a "constant" too).
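

A minimal sketch of what that can look like for binary follows; the _b suffix is an arbitrary name chosen here, not something the standard defines, and digits other than 0 and 1 are not checked:



/* Raw literal operator: receives the literal's characters and builds the value at compile time. */
constexpr unsigned long long parse_binary(const char* s, unsigned long long acc = 0)
{
    return *s == '\0' ? acc
                      : parse_binary(s + 1, (acc << 1) | (*s == '1' ? 1u : 0u));
}

constexpr unsigned long long operator"" _b(const char* s)
{
    return parse_binary(s);
}

static_assert(01010101_b == 0x55, "the result is still a compile-time constant");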



As to which to use: if you can, the binary literals built into C++14 or above are the obvious choice. If you can't use them, I'd generally prefer a variation of option 2: an enum with initializers in hex:



namespace DCD {
    enum e_Feature_Flags {
        IS_CHARMM       = 0x1,
        HAS_4DIMS       = 0x2,
        HAS_EXTRA_BLOCK = 0x8,
        NEW_FLAG        = 0x10,
        NEW_FLAG_2      = 0x20,
        NEW_FLAG_3      = 0x40,
        NEW_FLAG_4      = 0x80
    };
};



Another possibility is something like:



#define bit(n) (1 << (n))

enum e_feature_flags {
    IS_CHARMM       = bit(0),
    HAS_4DIMS       = bit(1),
    HAS_EXTRA_BLOCK = bit(3),
    NEW_FLAG        = bit(4),
    NEW_FLAG_2      = bit(5),
    NEW_FLAG_3      = bit(6),
    NEW_FLAG_4      = bit(7)
};
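

If you prefer to avoid macros entirely, a constexpr function can do the same job in C++11 and later, something like:



constexpr unsigned int bit(unsigned int n) { return 1u << n; }

enum e_feature_flags {
    IS_CHARMM       = bit(0),
    HAS_4DIMS       = bit(1),
    HAS_EXTRA_BLOCK = bit(3),
    NEW_FLAG        = bit(4),
    NEW_FLAG_2      = bit(5),
    NEW_FLAG_3      = bit(6),
    NEW_FLAG_4      = bit(7)
};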
