Given a range of representable floating-point numbers, how can I calculate the number of bits of precision that an IEEE 754 32-bit float gives me within that range?
For instance, if I'm performing a calculation where the operands and the result are expected to fall in the range -1 to 1, or say 0 to 16, how would I work out how many bits of precision are theoretically available within that range?
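To make what I'm asking concrete, here's roughly the kind of measurement I have in mind (a rough C sketch using nextafterf to find the gap between adjacent floats at the top of each range; I'm not sure this is even the right way to frame it):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Precision is worst at the largest magnitude in a range,
       so measure the spacing between adjacent floats there. */
    float tops[] = {1.0f, 16.0f};  /* tops of the ranges [-1,1] and [0,16] */
    for (int i = 0; i < 2; i++) {
        float x = tops[i];
        float ulp = nextafterf(x, INFINITY) - x;  /* gap to the next float up */
        printf("near %g: spacing = %g, ~%g bits of value below %g\n",
               x, ulp, log2f(x / ulp), x);
    }
    return 0;
}
```

Is the ratio of the range's top value to the spacing there actually the "bits of precision" I should be counting, or am I measuring the wrong thing?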
I realize the values aren't evenly spaced and are denser near 0, which complicates the question. Ultimately, I want to understand which values can be stored without rounding and how many significant digits I can expect within a range. For instance, can I expect to store, without rounding, a value with a granularity of 0.000001 in the range -1 to 1? How would I go about calculating this?
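Something like this is how I'd naively check that last question (again just a sketch; since the gaps are widest near the ends of the range, I test just below 1.0):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Check whether the float spacing at the coarsest point of
       [-1, 1] is finer than the 0.000001 granularity I care about. */
    float x = 1.0f;
    float gap = x - nextafterf(x, 0.0f);  /* gap to the next float down */
    printf("widest gap just below 1.0: %g\n", gap);
    printf("finer than 1e-6? %s\n", gap < 1e-6f ? "yes" : "no");
    return 0;
}
```

Is checking the spacing at the endpoints like this sufficient, or is there a more principled formula based on the exponent and the 23-bit mantissa?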