A thought struck me as I was writing a piece of JavaScript code that processed some floating point values. What is the decimal point symbol in JavaScript? Is it always .? Or is it culture-specific? And what about .toFixed() and .parseFloat()? If I’m processing a user input, it’s likely to include the local culture-specific decimal separator symbol.
Ultimately I’d like to write code that supports both decimal points in user input – the culture-specific one and . – but I can’t write such code if I don’t know what JavaScript expects.
Added: OK, Rubens Farias suggests looking at a similar question, which has a neat accepted answer:
function whatDecimalSeparator() {
    var n = 1.1;
    n = n.toLocaleString().substring(1, 2);
    return n;
}
That’s nice, it lets me get the locale decimal point. A step towards the solution, no doubt.
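One caveat: the substring(1, 2) trick assumes the formatted integer part is exactly one character. If the runtime supports the Intl API (an assumption about your environment), formatToParts is a sturdier way to ask for the separator; the function name here is my own:

```javascript
// Look up the locale's decimal separator explicitly instead of relying on
// the position of the separator in the formatted string.
function getDecimalSeparator(locale) {
    // locale is optional; omit it to use the runtime's default locale
    return Intl.NumberFormat(locale)
        .formatToParts(1.1)
        .find(part => part.type === "decimal")
        .value;
}

getDecimalSeparator("en-US"); // "."
getDecimalSeparator("de-DE"); // ","
```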
Now, the remaining part would be to determine the behavior of parseFloat(). Several answers point out that for floating-point literals only . is valid. Does parseFloat() act the same way? Or might it require the local decimal separator in some browser? Are there other methods for parsing floating-point numbers? Should I roll my own, just to be sure?
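For what it’s worth, a quick experiment shows that parseFloat only ever treats . as the decimal point, in every conforming engine:

```javascript
// parseFloat parses the longest numeric prefix of its input and stops at
// the first character that cannot be part of the number.
parseFloat("1.5");   // 1.5
parseFloat("1,5");   // 1    -- the "," ends the number; it is never a separator
parseFloat("1.5e3"); // 1500 -- exponent notation is part of the grammar
```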
According to the specification, a DecimalLiteral is defined as:

DecimalLiteral ::
    DecimalIntegerLiteral . DecimalDigits(opt) ExponentPart(opt)
    . DecimalDigits ExponentPart(opt)
    DecimalIntegerLiteral ExponentPart(opt)
and for satisfying the parseFloat argument, a StrDecimalLiteral is defined as:

StrDecimalLiteral :::
    StrUnsignedDecimalLiteral
    + StrUnsignedDecimalLiteral
    - StrUnsignedDecimalLiteral

StrUnsignedDecimalLiteral :::
    Infinity
    DecimalDigits . DecimalDigits(opt) ExponentPart(opt)
    . DecimalDigits ExponentPart(opt)
    DecimalDigits ExponentPart(opt)
So numberString becomes the longest prefix of trimmedString that satisfies the syntax of a StrDecimalLiteral, meaning the first parseable literal number it finds at the start of the input. Only . can be used to specify the decimal point of a floating-point number. If you’re accepting input from different locales, use a string replace to normalize the separator first.

The function uses the unary + operator instead of parseFloat because it seems to me that you want to be strict about the input: parseFloat("1ABC") would be 1, whereas the unary operator +"1ABC" returns NaN. This makes it MUCH easier to validate the input. Using parseFloat is just guessing that the input is in the correct format.
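The replace-based function itself didn’t survive in the text above; a minimal sketch of the approach described (the name convertToFloat and the explicit separator parameter are my own):

```javascript
// Normalize the locale's decimal separator to ".", then use unary + for a
// strict whole-string conversion: any leftover junk yields NaN instead of
// a silently truncated number.
function convertToFloat(input, separator) {
    return +input.replace(separator, ".");
}

convertToFloat("1,5", ",");  // 1.5
convertToFloat("1ABC", ","); // NaN -- unlike parseFloat, which would return 1
```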