I have a coprocessor that does not support floating point. I tried using 32-bit fixed point, but it cannot handle very small numbers. My numbers range from 1 down to 1e-18. One approach is floating-point emulation, but it is too slow. Given that we know the numbers are never larger than 1 and never smaller than 1e-18, can we make it faster? Or is there some way to make fixed point work with very small numbers?
4 Answers
It is not possible for a 32-bit fixed-point encoding to represent numbers from 10^-18 to 1. This can be seen immediately from the fact that the span from 10^-18 to 1 is a ratio of 10^18, but the nonzero encodings of a 32-bit integer span a ratio of less than 2^32, which is far less than 10^18. Therefore, no choice of scale for a fixed-point encoding will provide the required span.
So a 32-bit fixed-point encoding will not work, and you must use some other technique.
In some applications, it may be appropriate to use multiple fixed-point encodings. That is, various input values would be encoded in fixed point, but each with a scale suited to it, and intermediate values and outputs would also have custom scales. Obviously, this is possible only if suitable scales can be determined at design time. Otherwise, you should abandon 32-bit fixed-point encodings and consider alternatives.
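When the scales really are fixed at design time, the bookkeeping reduces to shifts. A minimal sketch of the idea (the quantities, names, and scales here are hypothetical, and it assumes the hardware can form a 64-bit product of two 32-bit integers):

```c
#include <stdint.h>

/* Hypothetical per-variable scales chosen at design time:
     volts:   Q4.28, LSB = 2^-28
     amperes: Q0.31, LSB = 2^-31
     watts:   Q4.27, LSB = 2^-27
   The raw product carries 28 + 31 = 59 fraction bits, so shifting
   right by 59 - 27 = 32 lands exactly on the chosen output scale. */
typedef int32_t q4_28; /* volts */
typedef int32_t q0_31; /* amperes */
typedef int32_t q4_27; /* watts */

q4_27 power_from_vi(q4_28 v, q0_31 i)
{
    return (q4_27)(((int64_t)v * i) >> 32);
}
```

For example, 2.5 V (0x28000000) times 0.5 A (0x40000000) yields 0x0A000000, which is 1.25 W at the 2^-27 scale.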
Would a simplified 24-bit floating point be fast enough and accurate enough?:
#include <stdio.h>
#include <limits.h>

#if UINT_MAX >= 0xFFFFFFFF
typedef unsigned myfloat;
#else
typedef unsigned long myfloat;
#endif

#define MF_EXP_BIAS 0x80

/* Format: bits 16 and up hold a biased exponent, bits 0-15 the mantissa,
   normalized to [0x8000, 0xFFFF]. The value is
   mantissa * 2^(exponent - MF_EXP_BIAS - 16); zero is encoded as all-zero. */

myfloat mfadd(myfloat a, myfloat b)
{
    unsigned ea = a >> 16, eb = b >> 16;
    if (ea > eb)
    {
        if (ea - eb >= 16)            /* b is too small to affect the sum */
            return a;                 /* (also avoids an oversized shift) */
        a &= 0xFFFF;
        b = (b & 0xFFFF) >> (ea - eb); /* align b's mantissa to a's exponent */
        if ((a += b) > 0xFFFF)        /* mantissa overflow: renormalize */
            a >>= 1, ++ea;
        return a | ((myfloat)ea << 16);
    }
    else if (eb > ea)
    {
        if (eb - ea >= 16)
            return b;
        b &= 0xFFFF;
        a = (a & 0xFFFF) >> (eb - ea);
        if ((b += a) > 0xFFFF)
            b >>= 1, ++eb;
        return b | ((myfloat)eb << 16);
    }
    else
    {
        /* equal exponents: the sum of two normalized mantissas always
           carries out one bit */
        return (((a & 0xFFFF) + (b & 0xFFFF)) >> 1) | ((myfloat)++ea << 16);
    }
}

myfloat mfmul(myfloat a, myfloat b)
{
    unsigned ea = a >> 16, eb = b >> 16, e = ea + eb - MF_EXP_BIAS;
    /* p lands in [0x4000, 0xFFFF]; it may come out unnormalized,
       which costs up to one bit of precision downstream */
    myfloat p = ((a & 0xFFFF) * (b & 0xFFFF)) >> 16;
    return p | ((myfloat)e << 16);
}

myfloat double2mf(double x)
{
    myfloat f;
    unsigned e = MF_EXP_BIAS + 16;
    if (x <= 0)
        return 0;
    while (x < 0x8000)    /* scale the mantissa into [0x8000, 0xFFFF] */
        x *= 2, --e;
    while (x >= 0x10000)
        x /= 2, ++e;
    f = x;
    return f | ((myfloat)e << 16);
}

double mf2double(myfloat f)
{
    double x;
    unsigned e = (f >> 16) - 16;
    if ((f & 0xFFFF) == 0)
        return 0;
    x = f & 0xFFFF;
    while (e > MF_EXP_BIAS)
        x *= 2, --e;
    while (e < MF_EXP_BIAS)
        x /= 2, ++e;
    return x;
}

int main(void)
{
    double testConvData[] = { 1e-18, .25, 0.3333333, .5, 1, 2, 3.141593, 1e18 };
    unsigned i;
    for (i = 0; i < sizeof(testConvData) / sizeof(testConvData[0]); i++)
        printf("%e -> 0x%06lX -> %e\n",
               testConvData[i],
               (unsigned long)double2mf(testConvData[i]),
               mf2double(double2mf(testConvData[i])));
    printf("300 * 5 = %e\n", mf2double(mfmul(double2mf(300),double2mf(5))));
    printf("500 + 3 = %e\n", mf2double(mfadd(double2mf(500),double2mf(3))));
    printf("1e18 * 1e-18 = %e\n", mf2double(mfmul(double2mf(1e18),double2mf(1e-18))));
    printf("1e-18 + 2e-18 = %e\n", mf2double(mfadd(double2mf(1e-18),double2mf(2e-18))));
    printf("1e-16 + 1e-18 = %e\n", mf2double(mfadd(double2mf(1e-16),double2mf(1e-18))));
    return 0;
}
Output (ideone):
1.000000e-18 -> 0x459392 -> 9.999753e-19
2.500000e-01 -> 0x7F8000 -> 2.500000e-01
3.333333e-01 -> 0x7FAAAA -> 3.333282e-01
5.000000e-01 -> 0x808000 -> 5.000000e-01
1.000000e+00 -> 0x818000 -> 1.000000e+00
2.000000e+00 -> 0x828000 -> 2.000000e+00
3.141593e+00 -> 0x82C90F -> 3.141541e+00
1.000000e+18 -> 0xBCDE0B -> 9.999926e+17
300 * 5 = 1.500000e+03
500 + 3 = 5.030000e+02
1e18 * 1e-18 = 9.999390e-01
1e-18 + 2e-18 = 2.999926e-18
1e-16 + 1e-18 = 1.009985e-16
Subtraction is left as an exercise, and likewise better conversion routines.
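For what it's worth, here is one possible shape of that subtraction (my sketch, not part of the answer above; it assumes a >= b, since the format has no sign bit, and it ignores exponent underflow):

```c
#include <limits.h>

#if UINT_MAX >= 0xFFFFFFFF
typedef unsigned myfloat;
#else
typedef unsigned long myfloat;
#endif

/* Sketch only: subtract b from a, assuming a >= b. Aligns the
   exponents, subtracts the mantissas, then shifts left to restore
   the normalized mantissa range [0x8000, 0xFFFF]. */
myfloat mfsub(myfloat a, myfloat b)
{
    unsigned ea = a >> 16, eb = b >> 16;
    myfloat ma = a & 0xFFFF, mb = b & 0xFFFF;

    if (ea - eb >= 16)        /* b is too small to affect a */
        return a;
    mb >>= ea - eb;           /* align b's mantissa to a's exponent */
    if (ma <= mb)
        return 0;             /* the difference collapses to zero */
    ma -= mb;
    while (ma < 0x8000)       /* renormalize; ma != 0 here */
        ma <<= 1, --ea;
    return ma | ((myfloat)ea << 16);
}
```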
Use 64-bit fixed point and be done with it.
Multiplication will be four times slower than with 32-bit fixed point, but it is still far more efficient than floating-point emulation.
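A sketch of what that might look like for this problem, assuming a Q1.63 format (value = raw * 2^-63, so both 1 and 1e-18 are representable, the latter at roughly 9 ulps) and only 32x32 -> 64-bit hardware multiplies. The four partial products below are exactly where the "four times slower" comes from:

```c
#include <stdint.h>

/* Q1.63: value = raw * 2^-63, range [0, 2). Multiplication needs the
   full 128-bit product of the two raw values, shifted right by 63,
   built here from four 32x32->64 multiplies (schoolbook mulhi). */
typedef uint64_t q1_63;

q1_63 q1_63_mul(q1_63 a, q1_63 b)
{
    uint64_t al = (uint32_t)a, ah = a >> 32;
    uint64_t bl = (uint32_t)b, bh = b >> 32;

    uint64_t ll = al * bl;            /* the four partial products */
    uint64_t lh = al * bh;
    uint64_t hl = ah * bl;
    uint64_t hh = ah * bh;

    /* middle column plus the carry out of the low 64 bits */
    uint64_t mid = (ll >> 32) + (uint32_t)lh + (uint32_t)hl;
    uint64_t hi  = hh + (lh >> 32) + (hl >> 32) + (mid >> 32);
    uint64_t lo  = (mid << 32) | (uint32_t)ll;

    return (hi << 1) | (lo >> 63);    /* (a * b) >> 63 */
}
```

Mind that, as with any fixed point, relative precision still degrades toward the bottom of the range: 1e-18 keeps only about 3 significant bits.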
In embedded systems I'd suggest using a 16+32, 16+16, 8+16 or 8+24-bit redundant floating-point representation, where each number is simply M * 2^exp.
In this case you can choose to represent zero with both M = 0 and exp = 0. There are 16-32 representations for each power of 2, which mainly makes comparison a bit harder than usual. One can also postpone normalization, e.g. after subtraction.
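A sketch of the 8+24-bit variant of that (names mine; assumes arithmetic right shift for negative int32_t, which is implementation-defined in C but what common compilers do):

```c
#include <stdint.h>

/* Redundant representation: value = m * 2^e, with m a 24-bit signed
   mantissa and no normalization requirement, so one value has many
   encodings (1 = 1 * 2^0 = 2 * 2^-1 = ...). Zero is m = 0, any e. */
typedef struct { int32_t m; int8_t e; } rfloat;

rfloat rf_add(rfloat a, rfloat b)
{
    if (a.e < b.e) { rfloat t = a; a = b; b = t; }  /* ensure a.e >= b.e */
    int d = a.e - b.e;
    if (d >= 24)                   /* b vanishes at a's scale */
        return a;
    rfloat r = { a.m + (b.m >> d), a.e };
    /* drop a bit only when the sum spills out of 24 bits; otherwise
       the (possibly unnormalized) result is returned as-is */
    if (r.m > 0x7FFFFF || r.m < -0x800000)
        r.m >>= 1, ++r.e;
    return r;
}
```

Comparison, as noted, takes more work: equal values can carry different (m, e) pairs, so you normalize or align before comparing.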