I am having difficulty understanding why the following binary subtraction gives the result it does; I keep getting a different answer. I am trying to compute 0.1 - x, where x = 0.00011001100110011001100 (binary). The answer should be 0.000000000000000000000001100[1100]... (the 1100 block repeats forever), but when I do the subtraction by hand, the 1100 pattern shows up right at the beginning instead. To double-check the expected result, I worked it through in the sketch below.
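Here is a small Python sketch that computes the difference exactly with the fractions module. My assumption here is that 0.1 means the exact decimal value one tenth (not the binary number 0.1), and that x is the 23-bit truncation of its binary expansion:

```python
from fractions import Fraction

# Assumption: 0.1 is the exact decimal value one tenth,
# not the binary number 0.1 (which would be one half).
one_tenth = Fraction(1, 10)

# x = 0.00011001100110011001100 in binary: 23 bits after the point.
x_bits = "00011001100110011001100"
x = Fraction(int(x_bits, 2), 2 ** len(x_bits))

diff = one_tenth - x

# Print the first 40 binary digits of the difference after the point
# by repeatedly doubling and peeling off the integer part.
remainder = diff
digits = []
for _ in range(40):
    remainder *= 2
    bit = int(remainder)  # 0 or 1
    digits.append(str(bit))
    remainder -= bit

print("0." + "".join(digits))
# Prints twenty-three 0s followed by the repeating 1100 pattern:
# 0.0000000000000000000000011001100110011001
```

Under that interpretation the 1100 block only starts at the 24th bit, which matches the answer I expect but not what my hand calculation gives.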
What am I not doing correctly?