Assuming you're talking about the resolution of the data returned, the POSIX specification for gettimeofday states:

The resolution of the system clock is unspecified.
This is because systems may have a widely varying capacity for tracking small time periods. Even the ISO standard clock() function includes a caveat like that.
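To see what that means in practice, here's a small sketch of mine (not part of either standard's text, and the numbers will vary by platform) that probes how coarse clock() actually is by spinning until the reported value changes; note that CLOCKS_PER_SEC only defines the unit clock_t is expressed in, not how finely the clock actually ticks:

#include <stdio.h>
#include <time.h>

int main (void) {
    clock_t start = clock();
    clock_t next;

    /* Busy-wait until clock() reports a different value. */
    do {
        next = clock();
    } while (next == start);

    printf ("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);
    printf ("smallest observed step = %ld tick(s), about %.6f seconds\n",
        (long)(next - start), (double)(next - start) / CLOCKS_PER_SEC);

    return 0;
}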
If you're talking about how long it takes to call it, the standard makes no performance guarantees along those lines. An implementation would be perfectly free to wait 125 minutes before giving you the time, although I doubt such an implementation would have much market success :-)
As an example of the limited resolution, I typed in the following code to check it on my system:
#include <stdio.h>
#include <sys/time.h>

#define NUMBER 30

int main (void) {
    struct timeval tv[NUMBER];
    int count[NUMBER], i, diff;

    gettimeofday (&tv[0], NULL);

    for (i = 1; i < NUMBER; i++) {
        gettimeofday (&tv[i], NULL);
        count[i] = 1;
        /* Spin until the reported time actually changes, counting how
           many calls it took. */
        while ((tv[i].tv_sec == tv[i-1].tv_sec) &&
               (tv[i].tv_usec == tv[i-1].tv_usec))
        {
            count[i]++;
            gettimeofday (&tv[i], NULL);
        }
    }

    printf ("%2d: secs = %ld, usecs = %6ld\n",
        0, (long)tv[0].tv_sec, (long)tv[0].tv_usec);
    for (i = 1; i < NUMBER; i++) {
        diff = (tv[i].tv_sec - tv[i-1].tv_sec) * 1000000;
        diff += tv[i].tv_usec - tv[i-1].tv_usec;
        printf ("%2d: secs = %ld, usecs = %6ld, count = %5d, diff = %d\n",
            i, (long)tv[i].tv_sec, (long)tv[i].tv_usec, count[i], diff);
    }

    return 0;
}
The code basically records the changes in the underlying time, keeping a count of how many calls to gettimeofday() it took for the time to actually change. This is on a reasonably powerful machine, so it's not short on processing power (the count indicates how often it was able to call gettimeofday() within each time quantum, around the 5,800 mark; ignore the first one, since we don't know at what point within that quantum we started measuring).
The output was:
0: secs = 1318554836, usecs = 990820
1: secs = 1318554836, usecs = 991820, count = 5129, diff = 1000
2: secs = 1318554836, usecs = 992820, count = 5807, diff = 1000
3: secs = 1318554836, usecs = 993820, count = 5901, diff = 1000
4: secs = 1318554836, usecs = 994820, count = 5916, diff = 1000
5: secs = 1318554836, usecs = 995820, count = 5925, diff = 1000
6: secs = 1318554836, usecs = 996820, count = 5814, diff = 1000
7: secs = 1318554836, usecs = 997820, count = 5814, diff = 1000
8: secs = 1318554836, usecs = 998820, count = 5819, diff = 1000
9: secs = 1318554836, usecs = 999820, count = 5901, diff = 1000
10: secs = 1318554837, usecs = 820, count = 5815, diff = 1000
11: secs = 1318554837, usecs = 1820, count = 5866, diff = 1000
12: secs = 1318554837, usecs = 2820, count = 5849, diff = 1000
13: secs = 1318554837, usecs = 3820, count = 5857, diff = 1000
14: secs = 1318554837, usecs = 4820, count = 5867, diff = 1000
15: secs = 1318554837, usecs = 5820, count = 5852, diff = 1000
16: secs = 1318554837, usecs = 6820, count = 5865, diff = 1000
17: secs = 1318554837, usecs = 7820, count = 5867, diff = 1000
18: secs = 1318554837, usecs = 8820, count = 5885, diff = 1000
19: secs = 1318554837, usecs = 9820, count = 5864, diff = 1000
20: secs = 1318554837, usecs = 10820, count = 5918, diff = 1000
21: secs = 1318554837, usecs = 11820, count = 5869, diff = 1000
22: secs = 1318554837, usecs = 12820, count = 5866, diff = 1000
23: secs = 1318554837, usecs = 13820, count = 5875, diff = 1000
24: secs = 1318554837, usecs = 14820, count = 5925, diff = 1000
25: secs = 1318554837, usecs = 15820, count = 5870, diff = 1000
26: secs = 1318554837, usecs = 16820, count = 5877, diff = 1000
27: secs = 1318554837, usecs = 17820, count = 5868, diff = 1000
28: secs = 1318554837, usecs = 18820, count = 5874, diff = 1000
29: secs = 1318554837, usecs = 19820, count = 5862, diff = 1000
This shows that the resolution seems to be limited to no better than one thousand microseconds. Of course, your system may differ from this; the bottom line is that it depends on your implementation and/or environment.
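If your platform provides the POSIX clocks API, you can also ask the system directly what resolution it claims to offer, rather than measuring it empirically as above. A minimal sketch, assuming clock_getres() is available (on some older systems you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main (void) {
    struct timespec res;

    /* Ask the system what resolution it advertises for the wall clock. */
    if (clock_getres (CLOCK_REALTIME, &res) == 0)
        printf ("CLOCK_REALTIME resolution: %ld s, %ld ns\n",
            (long)res.tv_sec, (long)res.tv_nsec);
    else
        perror ("clock_getres");

    return 0;
}

Keep in mind that the advertised resolution is not necessarily the same as the effective granularity you see through gettimeofday(), which is capped at microseconds by its interface in any case.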
One way to get around this type of limitation is not to do something once, but to do it N times and then divide the elapsed time by N.
For example, let's say you call your function and the timer says it took 125 milliseconds, something you suspect seems a little high. I would then suggest calling it a thousand times in a loop and measuring the time it takes for the entire thousand.
If that turns out to be 125 seconds then, yes, it's probably slow. However, if it takes only 27 seconds, that would indicate your timer resolution is what's causing the seemingly large times, since that equates to 27 milliseconds per iteration, on a par with what you're seeing from the other results.
Modifying your code to take this into account would be along the lines of:
#include <iostream>
#include <sys/time.h>

int a(int, int);   // the function being timed (from the question)

int main() {
    const int count = 1000;
    timeval tim;

    gettimeofday(&tim, NULL);
    double t1 = 1.0e6 * tim.tv_sec + tim.tv_usec;

    // Call the function under test many times so the total elapsed
    // time is well above the timer's resolution.
    int v;
    for (int i = 0; i < count; ++i)
        v = a(3, 4);

    gettimeofday(&tim, NULL);
    double t2 = 1.0e6 * tim.tv_sec + tim.tv_usec;

    // Print the last result and the average microseconds per call.
    std::cout << v << '\n' << ((t2 - t1) / count) << '\n';
    return 0;
}