Say I have two values 0 <= a < b <= 1. How can I choose an x such that a <= x < b with the shortest possible binary expansion?
My approach so far is to take the binary strings of a and b, with the decimal point removed, and at the first place they differ, take the expansion of a up to that point. If there's more of a to consume, strip the last bit. Finally, append a 1.
In JavaScript:
var binaryInInterval = function(a, b) {
    if (a < 0 || b > 1 || a >= b) return undefined;

    var i, u, v, x = '';

    // Binary expansions of a and b with the point removed, e.g. 0.25 -> '001'.
    a = a.toString(2).replace('.', '');
    b = b.toString(2).replace('.', '');

    for (i = 0; i < Math.max(a.length, b.length); i++) {
        // Digit i of each expansion; missing digits count as 0.
        u = parseInt(a.substr(i, 1), 10) || 0;
        v = parseInt(b.substr(i, 1), 10) || 0;
        x += u.toString();

        // First place the expansions differ.
        if (u != v) {
            // If there's more of a to consume, strip the last bit, then append a 1.
            if (i + 1 < a.length) x = x.slice(0, -1);
            x += '1';
            break;
        }
    }

    // Drop x's integer digit and put the '0.' prefix back.
    return '0.' + x.substr(1);
};
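For what it's worth, it does return what I'd expect on a quick spot check (my own example, not a proof of anything):

binaryInInterval(0.1, 0.3);   // '0.01', i.e. 0.25, which lies in [0.1, 0.3)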
This works, but I'm not convinced that it's generally correct. Any thoughts?...
EDIT I've already found a case that doesn't work correctly :P
binaryInInterval(0.25, 0.5) = '0.1'
0.25  0.01
0.5   0.1
        ^ difference

but a hasn't been fully consumed, so we strip 00 to 0 before appending 1. That returns '0.1' = 0.5, which fails x < b; the answer here should be '0.01', i.e. a itself.
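For checking candidates by hand, a quick throwaway helper (my own, and it assumes results of the form '0.xxx'):

var checkResult = function(bits, a, b) {
    // Ad-hoc helper, just for illustration: convert a '0.xxx' binary string
    // back to a number and test a <= x < b.
    var frac = bits.replace('0.', '');
    var x = parseInt(frac, 2) / Math.pow(2, frac.length);
    return a <= x && x < b;
};

checkResult('0.1', 0.25, 0.5);   // false: 0.5 is not < b
checkResult('0.01', 0.25, 0.5);  // true: 0.25 is in [0.25, 0.5)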
EDIT 2 An alternative algorithm would be to iterate over increasing n and check whether any multiple of 2^-n fits within the interval. This would be more expensive, however.
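A rough sketch of that idea (my own, only lightly tested): for each n, take the smallest multiple of 2^-n that is at least a and see whether it is still below b. The first n for which this succeeds gives the shortest expansion, and the loop must terminate because a is itself a dyadic rational (it's a double), so at worst its own expansion is reached.

var binaryInIntervalBruteForce = function(a, b) {
    if (a < 0 || b > 1 || a >= b) return undefined;
    // step is 2^-n; halve it until some multiple of it lands in [a, b).
    for (var step = 1; step > 0; step /= 2) {
        var x = Math.ceil(a / step) * step;   // smallest multiple of step that is >= a
        if (x < b) return x.toString(2);
    }
};

binaryInIntervalBruteForce(0.25, 0.5);   // '0.01', the case the prefix version gets wrong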