Assert.Equal(1000000.0, table.Convert("g", "mcg", 1.0)); // Pass
Assert.Equal(2000000.0, table.Convert("g", "mcg", 2.0)); // Pass
Assert.Equal(3200000.0, table.Convert("g", "mcg", 3.2)); // Fail
// The failing case is equivalent to the following calculation, which also fails:
Assert.Equal(3200000.0, 3.2 * 1.0 / (1.0 / 1000000.0)); // Fail
Assert.Equal(3200000.0, 3.2 * (1.0 / (1.0 / 1000000.0)));// Pass, WTF?!?!
===================================================================
Assert.Equal() Failure
Expected: 3200000
Actual: 3200000
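The Expected/Actual lines look identical because of how the values are formatted, not because the values are equal. As a sketch (a plain console app, no xUnit dependency), the default `double` formatting can hide the trailing error digits that the round-trip `"R"` format reveals:

```csharp
using System;

class FormattingDemo
{
    static void Main()
    {
        double actual = 3.2 * 1.0 / (1.0 / 1000000.0);

        // The comparison really is unequal, even though both values
        // may display as "3200000".
        Console.WriteLine(actual == 3200000.0);      // False

        // On .NET Framework, double.ToString() rounds to 15 significant
        // digits, hiding the error; .NET Core 3.0+ prints the shortest
        // round-trippable string instead, so this line is runtime-dependent.
        Console.WriteLine(actual);

        // The "R" (round-trip) format always emits enough digits to
        // reconstruct the exact double, exposing the difference.
        Console.WriteLine(actual.ToString("R"));
    }
}
```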
With the different order of operations, the floating point binary rounding errors appear to be propagating up differently. You can get “less surprising” but potentially slower results with the Decimal type.
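As a sketch of that suggestion (a standalone console app, not the poster's `ConversionTable`), redoing the same arithmetic with `decimal` literals keeps the conversion exact, because base-10 `decimal` represents 0.000001 exactly while binary `double` cannot:

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        // double: 1.0 / 1000000.0 has no exact binary representation,
        // and the rounding error surfaces after dividing 3.2 by it.
        double viaDouble = 3.2 * 1.0 / (1.0 / 1000000.0);

        // decimal: 0.000001 is exactly representable in base 10,
        // so the same expression yields exactly 3200000.
        decimal viaDecimal = 3.2m * 1.0m / (1.0m / 1000000.0m);

        Console.WriteLine(viaDouble == 3200000.0);   // False
        Console.WriteLine(viaDecimal == 3200000m);   // True
    }
}
```

The trade-off: `decimal` arithmetic is slower and has a smaller range than `double`, so it fits quantities like currency or unit conversions where base-10 exactness matters more than speed.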
3.2 * 1.0 / (1.0 / 1000000.0)   -> 3200000.0000000005
3.2 * (1.0 / (1.0 / 1000000.0)) -> 3200000.0

(try (3.2 * 1.0 / (1.0 / 1000000.0)).ToString("R");)
If you don’t already understand the differences between floating point and decimal types, please read: http://docs.sun.com/source/806-3568/ncg_goldberg.html
Or, if you prefer something in plainer English:
http://floating-point-gui.de/
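If switching to `decimal` is not an option, the usual alternative in tests is to compare within a tolerance rather than bit-for-bit. A minimal sketch (the `RoughlyEqual` helper is hypothetical; xUnit also ships an `Assert.Equal(double expected, double actual, int precision)` overload that rounds both sides before comparing):

```csharp
using System;

class ToleranceDemo
{
    // Hypothetical helper: treat two doubles as equal when they differ
    // by no more than an absolute tolerance.
    static bool RoughlyEqual(double a, double b, double tolerance = 1e-6)
        => Math.Abs(a - b) <= tolerance;

    static void Main()
    {
        double actual = 3.2 * 1.0 / (1.0 / 1000000.0);

        Console.WriteLine(actual == 3200000.0);              // False: exact compare
        Console.WriteLine(RoughlyEqual(3200000.0, actual));  // True: tolerant compare
    }
}
```

An absolute tolerance works here because the expected magnitude is known; for values spanning many orders of magnitude, a relative tolerance is the safer choice.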