
I am implementing a timer and need it to run every 50 ms or so and would like the resolution to be 1 ms or less. I started by reading these two articles:

http://www.codeproject.com/Articles/1236/Timers-Tutorial

http://www.virtualdub.org/blog/pivot/entry.php?id=272

Oddly enough they seem to contradict one another. One says queue timers are good for high resolution, the other posts results from a Windows 7 system showing resolution around 15ms (not good enough for my application).

So I ran a test on my system (Win7 64-bit, i7-4770 CPU @ 3.4 GHz). I started with a period of 50 ms and this is what I see (time since the beginning on the left, gap between executions on the right; all in ms):

150   50.00
200   50.01
250   50.00
...
450   49.93
500   50.00
550   50.03
...
2250  50.10
2300  50.01

I see that the maximum error is about 100 us and that the average error is probably around 30 us or so. This makes me fairly happy.

So I started dropping the period to see at what point it gets unreliable. I started seeing bad results once I decreased the period to 5 ms or less.

With a period of 5 ms it was not uncommon to see some gaps jump between 3 and 6 ms every few seconds. If I reduce the period to 1 ms, gaps of 5, 10, even 40 ms show up. I presume the jumps up to 40 ms may be due to the fact that I'm printing stuff to the screen, I dunno.

This is my timer callback code:

VOID CALLBACK timer_execute(PVOID p_parameter, 
   BOOLEAN p_timer_or_wait_fired)
{ 
   /* d_start, d_last_tick and d_frequency are globals initialized
      before the timer is created (see the setup sketch below). */
   LARGE_INTEGER l_now_tick;

   QueryPerformanceCounter(&l_now_tick);

   /* Elapsed time since the start and since the previous callback, in microseconds. */
   double now = ((l_now_tick.QuadPart - d_start.QuadPart) * 1000000.0) / d_frequency.QuadPart;
   double us = ((l_now_tick.QuadPart - d_last_tick.QuadPart) * 1000000.0) / d_frequency.QuadPart;

   //printf("\n%.0f\t%.2f", now / 1000.0, us / 1000.0);

   /* Only report callbacks that arrive more than ~2 ms late or implausibly early. */
   if (us > 2000 ||
       us < 100)
   {
      printf("\n%.2f", us / 1000.0);
   }

   d_last_tick = l_now_tick;
}
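For reference, the setup isn't shown above; a minimal sketch of how such a queue timer and the QPC globals might be created (the names, the 50 ms values and the 10 s run length are illustrative) is:

#include <windows.h>
#include <stdio.h>

static LARGE_INTEGER d_frequency;  /* QPC ticks per second            */
static LARGE_INTEGER d_start;      /* QPC value when the test started */
static LARGE_INTEGER d_last_tick;  /* QPC value at the last callback  */

VOID CALLBACK timer_execute(PVOID p_parameter, BOOLEAN p_timer_or_wait_fired);  /* callback shown above */

int main(void)
{
   HANDLE l_timer = NULL;

   QueryPerformanceFrequency(&d_frequency);
   QueryPerformanceCounter(&d_start);
   d_last_tick = d_start;

   /* 50 ms due time, 50 ms period, default thread-pool execution. */
   if (!CreateTimerQueueTimer(&l_timer, NULL, timer_execute, NULL,
                              50, 50, WT_EXECUTEDEFAULT))
   {
      printf("CreateTimerQueueTimer failed: %lu\n", GetLastError());
      return 1;
   }

   Sleep(10000);  /* let the timer run for a while */

   /* Wait for any in-flight callback to finish before tearing down. */
   DeleteTimerQueueTimer(NULL, l_timer, INVALID_HANDLE_VALUE);
   return 0;
}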

Anyway, it looks to me as if queue timers are very good tools as long as you're executing at 100 Hz or slower. Are the bad results posted in the second article I linked to (accuracy of about 15 ms) possibly due to a slower CPU, or a different configuration?

I'm wondering if I can expect this kind of performance across multiple machines (all as fast or faster than my machine running 64-bit Win7)? Also, I noticed that if your callback doesn't exit before the period elapses, the OS will run the callback again on another thread-pool thread, so invocations can overlap. This may be obvious, but it didn't stand out to me in any documentation and it has significant implications for the client code, as sketched below.
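To illustrate that last point: since invocations can overlap, the client code has to either be re-entrant or guard itself. A minimal sketch of one such guard (the flag is purely illustrative, not part of the timer API):

#include <windows.h>

static volatile LONG g_in_callback = 0;

VOID CALLBACK guarded_execute(PVOID p_parameter, BOOLEAN p_timer_or_wait_fired)
{
   /* If a previous invocation is still running on another pool thread,
      skip this one instead of doing the work twice concurrently. */
   if (InterlockedCompareExchange(&g_in_callback, 1, 0) != 0)
      return;

   /* ... the real timer work goes here ... */

   InterlockedExchange(&g_in_callback, 0);
}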


2 Answers


The default Windows timer resolution is 15.625 ms. That is the granularity you are observing. However, the system timer resolution can be modified as described on MSDN: Obtaining and Setting Timer Resolution. This allows the granularity to be reduced to about 1 ms on most platforms. This SO answer shows how to obtain the current system timer resolution.
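A minimal sketch of requesting 1 ms granularity through the multimedia timer API (link with winmm.lib; the surrounding function and the clamping are illustrative):

#include <windows.h>
#include <timeapi.h>   /* older SDKs: mmsystem.h */

void run_with_1ms_resolution(void)
{
   TIMECAPS tc;

   /* Query the supported range and request the finest period, typically 1 ms. */
   if (timeGetDevCaps(&tc, sizeof(tc)) == MMSYSERR_NOERROR)
   {
      UINT period = tc.wPeriodMin < 1 ? 1 : tc.wPeriodMin;

      timeBeginPeriod(period);   /* raise the system timer resolution */

      /* ... create and run the queue timer here ... */

      timeEndPeriod(period);     /* always pair with timeBeginPeriod */
   }
}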

Where the platform supports it, the hidden function NtSetTimerResolution(...) even allows the timer resolution to be set to 0.5 ms. See this SO answer to the question "How to set timer resolution to 0.5 ms?".
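A sketch of setting 0.5 ms through that hidden call (the prototype below is the commonly described one, resolved from ntdll.dll at run time; it is not an official SDK declaration):

#include <windows.h>
#include <stdio.h>

typedef LONG (NTAPI *NtSetTimerResolution_t)(ULONG DesiredResolution,  /* in 100 ns units */
                                             BOOLEAN SetResolution,
                                             PULONG CurrentResolution);

void set_half_ms_resolution(void)
{
   HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
   NtSetTimerResolution_t set_res = ntdll != NULL
      ? (NtSetTimerResolution_t)GetProcAddress(ntdll, "NtSetTimerResolution")
      : NULL;

   if (set_res != NULL)
   {
      ULONG actual = 0;
      set_res(5000, TRUE, &actual);   /* 5000 * 100 ns = 0.5 ms */
      printf("actual resolution: %.3f ms\n", actual / 10000.0);
   }
}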

...a different config? That depends on the underlying hardware and the OS version. Use the tools mentioned above to check the timer resolution.

...all as fast or faster than my machine running 64-bit Win7)? Yes, you can. However, other applications are allowed to set the timer resolution too. Google Chrome is a known example. Such an application may also change the timer resolution only temporarily. Consequently, you can never rely on the timer resolution staying constant across platforms or over time. The only way to make sure the timer resolution is under your application's control is to set the timer granularity to the minimum of 1 ms (0.5 ms) yourself.

Note: Lowering the system timer granularity increases the system interrupt frequency. It shortens the thread quantum (time slice) and increases power consumption.

Answered 2014-11-25T07:38:05.873

I believe the difference comes from the resource management in the system. I just learned about this in a presentation I had to give for my operating-systems class. Since there are many processes running, the scheduler may not be able to queue your process quickly enough when the period is very short. On the other hand, when there is more time, the process gets queued on time, and priority plays a role as well. I hope this helps a bit.

Answered 2014-11-25T00:31:22.687