I’m trying to determine the asymptotic runtime of one of my algorithms, which uses exponents, but I’m not sure how exponents are calculated programmatically.
I’m specifically looking for the pow() algorithm used for double-precision floating-point numbers.
I’ve had a chance to look at fdlibm’s implementation. The comments describe the algorithm used, followed by a listing of all the special cases handled (0, 1, inf, nan).
The most intense sections of the code, after all the special-case handling, involve the log2 and 2** calculations. And there are no loops in either of those. So, the complexity of the floating-point primitives notwithstanding, it looks like an asymptotically constant-time algorithm. Floating-point experts (of which I’m not one) are welcome to comment. 🙂