We've been discussing this in the C# chat. The original question was:
Is calculating, e.g., (Int32) 5 + 5 faster than 1234723847 + 32489237?
My initial thought was that there would be some optimization at the binary level that skips over the leading zero bits, so the smaller numbers would add faster.
So, I tested it. Here's the program if you're interested; if not, skip ahead to the results.
Stopwatch sw = new Stopwatch();
Int64 c = 0;
long ticksDifferential = 0; // running total of (big - small) differences, in ticks
long msDifferential = 0;    // running total of (big - small) differences, in milliseconds
int reps = 10;              // number of times to run the entire benchmark

for (int j = 0; j < reps; j++)
{
    sw.Start(); //
    sw.Stop();  // Just in case there's any kind of overhead for the first Start()
    sw.Reset(); //

    sw.Start(); // One hundred million additions of "small" numbers
    for (Int64 i = 0, k = 1; i < 100000000; i++, k++)
    {
        c = i + k;
    }
    sw.Stop();
    long tickssmall = sw.ElapsedTicks;
    long mssmall = sw.ElapsedMilliseconds;

    sw.Reset();
    sw.Start(); // One hundred million additions of "big" numbers
    for (Int64 i = 100000000000000000, k = 100000000000000001; i < 100000000100000000; i++, k++)
    {
        c = i + k;
    }
    sw.Stop();
    long ticksbig = sw.ElapsedTicks;
    long msbig = sw.ElapsedMilliseconds;

    // Accumulate this rep's differences
    ticksDifferential += ticksbig - tickssmall;
    msDifferential += msbig - mssmall;
}

// Average differences per 100,000,000 additions
long averageDifferentialTicks = ticksDifferential / reps;
long averageDifferentialMs = msDifferential / reps;

// Average differences per single addition
long unitAverageDifferentialTicks = averageDifferentialTicks / 100000000;
long unitAverageDifferentialMs = averageDifferentialMs / 100000000;

System.IO.File.AppendAllText(@"C:\Users\phillip.schmidt\My Documents\AdditionTimer.txt", "Average Differential (Ticks): " + unitAverageDifferentialTicks.ToString() + ", ");
System.IO.File.AppendAllText(@"C:\Users\phillip.schmidt\My Documents\AdditionTimer.txt", "Average Differential (Milliseconds): " + unitAverageDifferentialMs.ToString());
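One thing I wasn't sure about: since `c` is overwritten every iteration and never read afterwards, the optimizer might delete the loop bodies entirely in release mode, which would explain measuring almost nothing. As a sanity check I tried a variant like the sketch below (a minimal, self-contained version of one timing loop; the `sink` field is just a name I made up to keep the result observable):

```csharp
using System;
using System.Diagnostics;

class AdditionTimerCheck
{
    // Storing the result in a static field after timing means the JIT
    // can't prove the additions are unused and eliminate the loop.
    static long sink;

    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        long c = 0;
        for (long i = 0, k = 1; i < 100000000; i++, k++)
        {
            c += i + k; // accumulate so each iteration's result is actually needed
        }
        sw.Stop();
        sink = c; // observe the result outside the timed region

        // Sum of (2i + 1) for i = 0..99999999 is 10^8 squared, i.e. 10^16.
        Console.WriteLine("Ticks: " + sw.ElapsedTicks + ", c = " + c);
    }
}
```

If the ticks here come out much larger than in my release-mode runs above, that would suggest the original loops really were being optimized away.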
Results
Debug mode
- Average unit differential: 2.17 nanoseconds
Release mode (optimizations enabled)
- Average unit differential: 0.001 nanoseconds
Release mode (optimizations disabled)
- Average unit differential: 0.01 nanoseconds
So, in debug mode, each addition of the "big" numbers took about 2.17 ns longer than each addition of the "small" numbers. In release mode, however, the difference was nowhere near as significant.
Questions
So I have a few follow-up questions:
- Which mode is the most accurate for my purposes? (Debug, Release, Release with optimizations disabled)
- Are my results accurate? If so, what is the cause of the speed difference?
- Why is the difference so much larger in debug mode?
- Is there anything else I should take into account?