I have two floats in Python that I’d like to subtract, i.e.
v1 = float(value1)
v2 = float(value2)
diff = v1 - v2
I want “diff” to be computed to two decimal places, that is, computed from the %.2f representations of v1 and v2. How can I do this? I know how to print v1 and v2 to two decimals, but not how to do arithmetic on the rounded values.
The particular issue I am trying to avoid is this. Suppose that:
v1 = 0.982769777778
v2 = 0.985980444444
diff = v1 - v2
and then I print to file the following:
myfile.write("%.2f\t%.2f\t%.2f\n" %(v1, v2, diff))
then I will get the output: 0.98 0.99 0.00, suggesting that there’s no difference between v1 and v2, even though the printed values 0.98 and 0.99 differ by 0.01. How can I get around this?
Thanks.
You said in a comment that you don’t want to use decimal, but it sounds like that’s what you really should use here. Note that it isn’t an “extra library”: it has been included with Python by default since version 2.4, you just need to import decimal. When you want to display the values, you can use Decimal.quantize to round the numbers to 2 decimal places for display purposes, and then take the difference of the resulting Decimals.
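For example, a minimal sketch of that approach using the values from your question (print is used here instead of the file write, purely for illustration):

from decimal import Decimal

v1 = 0.982769777778
v2 = 0.985980444444

# Quantize (round) each value to two decimal places first, then subtract,
# so the computed difference agrees with the values you display.
d1 = Decimal(str(v1)).quantize(Decimal("0.01"))
d2 = Decimal(str(v2)).quantize(Decimal("0.01"))
diff = d1 - d2

print("%s\t%s\t%s" % (d1, d2, diff))   # prints: 0.98    0.99    -0.01

Because the subtraction happens after quantizing, the output shows the -0.01 difference you expect rather than 0.00.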