I have 3 base representations for positive integer numbers:
- Decimal, in an unsigned long variable (e.g. unsigned long int NumDec = 200).
- Hex, in a string variable (e.g. string NumHex = "C8").
- Binary, in a string variable (e.g. string NumBin = "11001000").
I want to be able to convert between numbers in all 3 representations in the most efficient way, i.e. to implement the following 6 functions:
unsigned long int Binary2Dec(const string & Bin) {}
unsigned long int Hex2Dec(const string & Hex) {}
string Dec2Hex(unsigned long int Dec) {}
string Binary2Hex(const string & Bin) {}
string Dec2Binary(unsigned long int Dec) {}
string Hex2Binary(const string & Hex) {}
What is the most efficient approach for each of them? I can use C and C++, but not boost.
Edit: By “efficiency” I mean time efficiency: Shortest execution time.
As others have pointed out, I would start with sscanf(), printf(), and/or strtoul(). They are fast enough for most applications, and they are less likely to have bugs. I will say, however, that these functions are more generic than you might expect: they have to deal with non-ASCII character sets, with numbers represented in any base, and so forth. For some domains it is possible to beat the library functions. So, measure first, and if the performance of these conversions is really an issue, then:
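As a baseline, the standard-library route for all six conversions might look like the sketch below (function names match the question's signatures; error handling for malformed input is omitted):

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

unsigned long Binary2Dec(const std::string& Bin) {
    return std::strtoul(Bin.c_str(), nullptr, 2);   // parse base-2 digits
}

unsigned long Hex2Dec(const std::string& Hex) {
    return std::strtoul(Hex.c_str(), nullptr, 16);  // parse base-16 digits
}

std::string Dec2Hex(unsigned long Dec) {
    char buf[2 * sizeof(unsigned long) + 1];        // 2 hex digits per byte + NUL
    std::snprintf(buf, sizeof buf, "%lX", Dec);
    return buf;
}

std::string Dec2Binary(unsigned long Dec) {
    if (Dec == 0) return "0";
    std::string s;
    while (Dec) {                                   // peel off bits, LSB first
        s.insert(s.begin(), char('0' + (Dec & 1)));
        Dec >>= 1;
    }
    return s;
}

// The string-to-string conversions can simply round-trip through decimal.
std::string Binary2Hex(const std::string& Bin) { return Dec2Hex(Binary2Dec(Bin)); }
std::string Hex2Binary(const std::string& Hex) { return Dec2Binary(Hex2Dec(Hex)); }
```

For Binary2Hex and Hex2Binary, a nibble-at-a-time lookup would avoid the round trip through unsigned long, but the version above is the simplest correct starting point to measure against.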
1) In some applications/domains certain numbers appear very often. For example, zero, 100, or 200 may be so common that it makes sense to optimize your functions to convert such numbers with a bunch of if() statements, and then fall back to the generic library functions.
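Restricted to the question's unsigned-integer domain, that fast path might look like this (the name Dec2HexFast and the chosen hot values are illustrative assumptions):

```cpp
#include <cstdio>
#include <string>

// Hypothetical fast path: special-case values assumed to dominate the
// workload, then fall back to the generic library routine.
std::string Dec2HexFast(unsigned long Dec) {
    if (Dec == 0)   return "0";
    if (Dec == 100) return "64";
    if (Dec == 200) return "C8";
    char buf[2 * sizeof(unsigned long) + 1];  // 2 hex digits per byte + NUL
    std::snprintf(buf, sizeof buf, "%lX", Dec);
    return buf;
}
```

Whether this wins depends entirely on the value distribution, so profile with real inputs before committing to it.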
2) Use a table lookup for the most common 100 numbers, and then fall back on a library function. Remember that large tables may not fit in your cache and may require multiple indirections for shared libraries, so measure these things carefully to make sure you are not decreasing performance.
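A sketch of that approach for the decimal-to-binary direction, assuming small values dominate (the table bound of 100 and the name Dec2BinaryCached are illustrative):

```cpp
#include <string>

static const unsigned long kTableSize = 100;  // assumed hot range: 0..99

// Generic bit-peeling conversion used both to build the table and as fallback.
static std::string SlowDec2Binary(unsigned long Dec) {
    if (Dec == 0) return "0";
    std::string s;
    while (Dec) {
        s.insert(s.begin(), char('0' + (Dec & 1)));
        Dec >>= 1;
    }
    return s;
}

static const std::string* BuildTable() {
    static std::string table[kTableSize];
    for (unsigned long i = 0; i < kTableSize; ++i)
        table[i] = SlowDec2Binary(i);
    return table;
}

std::string Dec2BinaryCached(unsigned long Dec) {
    static const std::string* table = BuildTable();  // built once, on first call
    if (Dec < kTableSize) return table[Dec];         // fast path: table hit
    return SlowDec2Binary(Dec);                      // fall back for large values
}
```

The table costs 100 heap-allocated strings; as the answer warns, measure cache effects before assuming the lookup is a win.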
You may also want to look at the boost lexical_cast functions, though in my experience the latter are relatively slow compared to the good old C functions.
Though many have said it, it is worth repeating over and over: do not optimize these conversions until you have evidence that they are a problem. If you do optimize, measure your new implementation to make sure it is faster, and make sure you have a ton of unit tests for your own version, because you will introduce bugs 🙁