I’m optimizing a sorting function for a numerics/statistics library based on the assumption that, after filtering out any NaNs and doing a little bit twiddling, floats can be compared as 32-bit ints without changing the result and doubles can be compared as 64-bit ints.
This seems to speed up sorting these arrays by somewhere on the order of 40%, and my assumption holds as long as the bit-level representation of floating point numbers is IEEE 754. Are there any real-world CPUs that people actually use (excluding in embedded devices, which this library doesn’t target) that use some other representation that might break this assumption?
- https://en.wikipedia.org/wiki/Single-precision_floating-point_format (binary32, aka `float` in systems that use IEEE 754)
- https://en.wikipedia.org/wiki/Double-precision_floating-point_format (binary64, aka `double` in systems that use IEEE 754)
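For reference, the bit twiddling I mean is the usual monotone key transform: positive floats already order correctly as unsigned ints once the sign bit is set, and negative floats order correctly after flipping all their bits. A minimal sketch in C (the name `float_sort_key` is mine, not from any particular library; NaNs are assumed to have been filtered out already):

```c
#include <stdint.h>
#include <string.h>

/* Map a float's bits to a uint32_t whose unsigned order matches the
   float's numeric order. Precondition: f is not NaN.
   Note: -0.0f maps strictly below +0.0f (a total order, as in IEEE 754
   totalOrder), which is harmless for sorting. */
static uint32_t float_sort_key(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);  /* bit-level copy without aliasing UB */
    /* Negative floats: flip every bit, so larger magnitudes sort lower.
       Non-negative floats: set the sign bit, so they sort above all
       negatives. */
    return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
}
```

The same transform works for doubles with `uint64_t` and a `0x8000000000000000` mask.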
Other than flawed Pentiums, any x86- or x64-based CPU uses IEEE 754 as its floating-point arithmetic standard.
Here is a brief overview of the floating-point arithmetic standards and their adoption.
Unless you're planning on supporting your library on fairly exotic CPU architectures, it is safe to assume that, for now, 99% of CPUs are IEEE 754 compliant.