
I come mainly from a C++ background, but I think this question applies to threads in any language. Here's the scenario:

  1. We have two threads (ThreadA and ThreadB), and a value x in shared memory

  2. Assume that access to x is appropriately controlled by a mutex (or other suitable synchronization control)

  3. If the threads happen to be running on different processors, what happens if ThreadA performs a write, but its processor places the result in its L2 cache rather than main memory? Then, if ThreadB tries to read the value, won't it just look in its own L1/L2 cache or main memory and use the stale value that was there?

If that's not the case, then how is this problem managed?

If it is the case, then what can be done about it?


4 Answers


Your example will work just fine.

Multiple processors use a coherency protocol such as MESI to ensure that data stays in sync between caches. With MESI, each cache line is considered to be either Modified, Exclusive, Shared between CPUs, or Invalid. Writing a cache line that is shared between processors forces it to become invalid in the other CPUs, keeping the caches in sync.

However, this is not quite enough. Different processors have different memory models, and most modern processors support some level of reordering of memory accesses. In these cases, memory barriers are needed.

For example, if you have Thread A:

DoWork();
workDone = true;

and Thread B:

while (!workDone) {}
DoSomethingWithResults();

With both running on separate processors, there is no guarantee that the writes done within DoWork() will be visible to thread B before the write to workDone, and DoSomethingWithResults() could proceed with potentially inconsistent state. Memory barriers guarantee some ordering of the reads and writes - adding a memory barrier after DoWork() in Thread A would force all reads/writes done by DoWork to complete before the write to workDone, so that Thread B would get a consistent view. Mutexes inherently provide a memory barrier, so that reads/writes cannot pass a call to lock or unlock.
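
In C++11 terms, this pattern can be sketched with atomics (the names result, workDone, and the value 42 are illustrative, not from the question): a release store after the work, paired with an acquire load in the reader, supplies exactly the barrier described above.

```cpp
#include <atomic>
#include <thread>

int result = 0;                         // written by A, read by B
std::atomic<bool> workDone{false};

void threadA() {
    result = 42;                        // the "work"
    // Release barrier: all prior writes (result) become visible
    // before workDone is seen as true.
    workDone.store(true, std::memory_order_release);
}

int threadB() {
    // Acquire barrier: once workDone reads true, the writes that
    // preceded the release store are guaranteed visible here.
    while (!workDone.load(std::memory_order_acquire)) {}
    return result;                      // guaranteed to see 42
}

int runDemo() {
    std::thread a(threadA);
    int seen = 0;
    std::thread b([&] { seen = threadB(); });
    a.join();
    b.join();
    return seen;
}
```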

In your case, one processor would signal to the others that it dirtied a cache line and force the other processors to reload from memory. Acquiring the mutex to read and write the value guarantees that the change to memory is visible to the other processor in the order expected.

answered 2009-07-09T17:57:41.763

Most locking primitives like mutexes imply memory barriers. These force a cache flush and reload to occur.

For example,

ThreadA {
    x = 5;         // probably writes to cache
    unlock mutex;  // forcibly writes local CPU cache to global memory
}
ThreadB {
    lock mutex;    // discards data in local cache
    y = x;         // x must read from global memory
}
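
A runnable C++ rendering of the sketch above (variable names assumed): the unlock at the end of the writer's critical section publishes the write to x, and taking the lock in the reader guarantees it observes the published value rather than a stale cached one.

```cpp
#include <mutex>
#include <thread>

std::mutex m;
int x = 0;
bool ready = false;

void writer() {
    std::lock_guard<std::mutex> guard(m); // lock; unlock on scope exit
    x = 5;                                // probably hits the cache first...
    ready = true;                         // ...but unlock publishes both writes
}

int reader() {
    for (;;) {
        std::lock_guard<std::mutex> guard(m); // lock discards any stale view
        if (ready) return x;                  // must observe x == 5 once ready
    }
}

int runMutexDemo() {
    std::thread a(writer);
    int seen = 0;
    std::thread b([&] { seen = reader(); });
    a.join();
    b.join();
    return seen;
}
```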
answered 2009-07-09T18:01:13.207

In general, the compiler understands shared memory and takes considerable effort to ensure that shared memory is placed in a sharable place. Modern compilers are very sophisticated in the way they order operations and memory accesses; they tend to understand the nature of threading and shared memory. That's not to say they're perfect, but in general, much of the concern is taken care of by the compiler.

answered 2009-07-09T18:02:21.070

C# has some built-in support for this kind of problem. You can mark a variable with the volatile keyword, which forces it to be synchronized across all CPUs.

public static volatile int loggedUsers;

The other part is a syntactic wrapper around the .NET methods Threading.Monitor.Enter(x) and Threading.Monitor.Exit(x), where x is the variable to lock. This causes other threads trying to lock x to wait until the locking thread calls Exit(x).

public List<string> users;
// In some function:
System.Threading.Monitor.Enter(users);
try {
   // do something with users
}
finally {
   System.Threading.Monitor.Exit(users);
}
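
For comparison, a hypothetical C++ analogue of this Monitor pattern (the usersLock mutex and addUser function are illustrative names): an explicit lock()/unlock() pair mirrors the try/finally shape, though idiomatic C++ would use std::lock_guard to get the Exit behavior automatically on scope exit.

```cpp
#include <mutex>
#include <string>
#include <vector>

std::mutex usersLock;
std::vector<std::string> users;

void addUser(const std::string& name) {
    usersLock.lock();            // Monitor.Enter(users)
    try {
        users.push_back(name);   // do something with users
    } catch (...) {
        usersLock.unlock();      // finally: Monitor.Exit(users)
        throw;
    }
    usersLock.unlock();          // normal-path Monitor.Exit(users)
}
```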
answered 2009-07-09T18:08:41.977