I was working on Project Euler 40 and was a bit bothered that there is no int.Parse(char) overload. Not a big deal, but I did some asking around, and someone suggested char.GetNumericValue. GetNumericValue seems like a very odd method to me:
- Takes in a char as a parameter and returns… a double?
- Returns -1.0 if the char is not ‘0’ through ‘9’
So what’s the reasoning behind this method, and what purpose does returning a double serve? I even fired up Reflector and looked at InternalGetNumericValue, but it’s just like watching Lost: every answer just leads to another question.
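For reference, here's how a digit char typically gets turned into an int in the meantime; this is just a quick sketch of the common idioms, not anything from the original post:

```csharp
using System;

char c = '7';

// '0' through '9' are contiguous code points, so subtracting '0'
// yields the digit's value directly.
int viaArithmetic = c - '0';                         // 7

// The most literal translation: parse a one-character string.
int viaParse = int.Parse(c.ToString());              // 7

// Or cast GetNumericValue's double back down; fine for '0'..'9'.
int viaNumericValue = (int)char.GetNumericValue(c);  // 7

Console.WriteLine($"{viaArithmetic} {viaParse} {viaNumericValue}"); // 7 7 7
```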
Remember that it takes a Unicode character and returns that character's numeric value. ‘0’ through ‘9’ are the standard decimal digits, but there are many other Unicode characters that represent numbers, and some of them represent fractional values; that's why the return type is a double, and why -1.0 serves as the sentinel for characters with no numeric value at all.
Like this character: ¼
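Feed it to GetNumericValue (a one-liner, using nothing beyond the documented API):

```csharp
using System;

// U+00BC VULGAR FRACTION ONE QUARTER carries the Unicode numeric value 1/4.
Console.WriteLine(char.GetNumericValue('¼'));
```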
This outputs 0.25 in the console window.
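The same applies to other numeric characters; the picks below are just illustrative, relying only on their standard Unicode numeric values:

```csharp
using System;

Console.WriteLine(char.GetNumericValue('½')); // 0.5 (U+00BD VULGAR FRACTION ONE HALF)
Console.WriteLine(char.GetNumericValue('²')); // 2   (U+00B2 SUPERSCRIPT TWO)
Console.WriteLine(char.GetNumericValue('٣')); // 3   (U+0663 ARABIC-INDIC DIGIT THREE)
Console.WriteLine(char.GetNumericValue('x')); // -1  (no numeric value at all)
```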