(This question is asked in a Ruby context, but I wouldn't be upset if it were answered in a more general way.)

So (just about) everyone knows that testing the results of floating point arithmetic for exact equality is a no-go, and that you have to test against a tolerance instead (in Ruby, like so):

assert_in_delta 5.0, 3.5+1.5, 0.0001

However, is this still necessary when sanity checking a floating point number for equality when it is passed from one subsystem to another, say (in a Ruby context) like so?

json = { :foo => 0.5 }.to_json
# POST the JSON to a Rails controller, which puts it in a Mongo database
found_obj = pull_obj_out_of_Mongo # details not important here
assert_in_delta 0.5, found_obj[:foo], 0.0001

I would personally argue that assert_in_delta is still a good idea here, because I don't know (and perhaps many people don't know) whether the conceptual floating point value 0.5 is passed along in a way that would let it be compared to the floating point literal 0.5 with ==. Is that actually the case, or am I being too paranoid about how floating point numbers are stored and passed along? How much does the answer depend on the language being used?
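As a quick sanity check of my own example: 0.5 is a power of two, so it is exactly representable in binary floating point, and this particular value does survive a decimal round-trip losslessly:

```ruby
# 0.5 = 2**-1 is exactly representable as an IEEE 754 double,
# so converting it to a decimal string and back loses nothing:
raise unless 0.5.to_s == "0.5"
raise unless "0.5".to_f == 0.5
```

Whether something like this holds for every value that might pass through the pipeline is exactly what I'm unsure about.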

1 Answer

This depends a lot on what the subsystem does with the Float. In Ruby (1.8 and 1.9 at least), a Float simply wraps a double-precision number (literally a double in C, stored in IEEE 754 binary64 format on most modern architectures). The bits don't change when you pass the number around or copy it in that representation.
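You can see this directly (a sketch: `pack("G")` encodes a Float as a big-endian IEEE 754 double, so equal bit strings mean bit-identical floats):

```ruby
# Helper: dump the raw IEEE 754 bits of a Float as a 64-char bit string.
bits = ->(f) { [f].pack("G").unpack1("B*") } # "G": big-endian double

x = 3.5 + 1.5 # both operands and the sum are exactly representable
y = x         # copying/passing around within Ruby never alters the bits
raise unless bits.call(x) == bits.call(y) && x == 5.0

# Arithmetic, by contrast, can land on bits you did not expect:
raise if 0.1 + 0.2 == 0.3 # false for IEEE 754 doubles
```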

However, when you pass the Float into a subsystem, that subsystem may change the underlying representation. In the example you give, you're serializing the number with JSON, which converts the float to a decimal string and later parses it back. It takes up to 17 significant decimal digits to round-trip an IEEE 754 double exactly, so a serializer that emits only 15-16 digits can collapse distinct doubles onto the same value. That sounds like a lot of precision, but it's not enough to guarantee strict equality for all possible floating point values.
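As an illustration (assuming current MRI, where `Float#to_s` emits a shortest round-tripping decimal, so Ruby's own JSON generator happens to preserve the bits; a serializer that truncates to 15 significant digits would not):

```ruby
require "json"

x = 0.1 + 0.2 # 0.30000000000000004, not 0.3
y = JSON.parse({ :foo => x }.to_json)["foo"]
raise unless y == x # survives: to_s emits all the digits needed

# A serializer that keeps only 15 significant digits collapses
# this value onto the nearby double 0.3:
truncated = format("%.15g", x).to_f
raise unless truncated == 0.3 && truncated != x
```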

Answered 2012-08-03T03:33:24.663