I am trying to understand the precision of the gettimeofday()
system call. Here is my program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>

int main(int argc, char *argv[])
{
    struct timeval t;
    long prev = 0;

    if (argc < 2)
    {
        fprintf(stderr, "usage: %s <iterations>\n", argv[0]);
        return 1;
    }

    for (int i = 0; i < atoi(argv[1]); i++)
    {
        gettimeofday(&t, NULL);
        /* tv_sec (time_t) and tv_usec (suseconds_t) have
           implementation-defined widths, so cast to long for
           portable printing. */
        printf("secs: %ld, micro_secs: %ld, delta: %ld\n",
               (long)t.tv_sec, (long)t.tv_usec, (long)t.tv_usec - prev);
        prev = t.tv_usec;
        sleep(1);
    }
    return 0;
}
If I run this program (./a.out 10), the output is:
secs: 1643494972, micro_secs: 485698, delta: 485698
secs: 1643494973, micro_secs: 490785, delta: 5087
secs: 1643494974, micro_secs: 491121, delta: 336
secs: 1643494975, micro_secs: 494810, delta: 3689
secs: 1643494976, micro_secs: 500034, delta: 5224
secs: 1643494977, micro_secs: 501143, delta: 1109
secs: 1643494978, micro_secs: 506397, delta: 5254
secs: 1643494979, micro_secs: 509905, delta: 3508
secs: 1643494980, micro_secs: 510637, delta: 732
secs: 1643494981, micro_secs: 513451, delta: 2814
The seconds column tracks the 1-second sleep as expected, but can someone explain what is going on with the values in the microseconds column? It looks like the time jumps forward by a few milliseconds from one sleep to the next.
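For reference, here is a minimal sketch of how one might time each sleep(1) directly, assuming a POSIX system where clock_gettime(2) and CLOCK_MONOTONIC are available (older glibc may also need -lrt). The loop count of 5 is arbitrary. The idea is to separate sleep() overshoot from gettimeofday() precision: any elapsed time beyond one second is scheduling latency, not clock imprecision.

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 5; i++)
    {
        struct timespec start, end;

        /* CLOCK_MONOTONIC is unaffected by wall-clock adjustments,
           so the difference measures real elapsed time. */
        clock_gettime(CLOCK_MONOTONIC, &start);
        sleep(1);
        clock_gettime(CLOCK_MONOTONIC, &end);

        long long elapsed_ns =
            (long long)(end.tv_sec - start.tv_sec) * 1000000000LL
            + (end.tv_nsec - start.tv_nsec);

        /* Overshoot is how far past the requested 1 second we slept. */
        printf("sleep(1) took %.6f s (overshoot: %lld us)\n",
               elapsed_ns / 1e9, (elapsed_ns - 1000000000LL) / 1000);
    }
    return 0;
}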