Java's memory model is built around the "happens-before" relationship: it enforces visibility rules, but it also leaves the virtual machine's implementation room for optimization, for instance in how aggressively it must invalidate caches.
For example, consider the following case:
class Example {
    private final Object lockA = new Object();
    private final Object lockB = new Object();

    // called by thread A
    private void method() {
        // code before lock
        synchronized (lockA) {
            // code inside
        }
    }

    // called by thread B
    private void method2() {
        // code before lock
        synchronized (lockA) {
            // code inside
        }
    }

    // called by thread B
    private void method3() {
        // code before lock
        synchronized (lockB) {
            // code inside
        }
    }
}
If thread A calls method() and thread B then tries to acquire lockA inside method2(), the synchronization on lockA requires that thread B observe all changes thread A made to its variables before releasing the lock, including variables changed in the "code before lock" section.
On the other hand, method3() uses a different lock and therefore does not establish a happens-before relationship with method(). This creates an opportunity for optimization.
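To make the guarantee concrete, here is a minimal sketch; the class name VisibilityDemo and the field counter are my own illustration, not part of the snippet above. A plain int written by thread A before it ever touches the lock must still become visible to thread B once B acquires the same lock (assuming B's acquisition comes after A's release in the synchronization order), while synchronizing on a different lock gives no such guarantee:

// Hypothetical illustration of the guarantee described above;
// the names here are my own, not from the snippet.
class VisibilityDemo {
    private final Object lockA = new Object();
    private final Object lockB = new Object();
    private int counter; // deliberately NOT volatile

    // runs on thread A
    void writer() {
        counter = 42;            // write in the "code before lock" section
        synchronized (lockA) {
            // releasing lockA publishes all of thread A's earlier writes
        }
    }

    // runs on thread B, after thread A has released lockA
    void readerSameLock() {
        synchronized (lockA) {   // acquiring the SAME lock establishes
            int v = counter;     // happens-before, so this read must see 42
        }
    }

    // runs on thread B
    void readerOtherLock() {
        synchronized (lockB) {   // a DIFFERENT lock: no happens-before with
            int v = counter;     // thread A, so this read may legally see 0
        }
    }
}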
My question is: how does the virtual machine implement these complex semantics? Does it avoid a full cache flush when one is not needed?
How does it track which variables were changed by which thread at what point, so that it loads from memory only the cache lines that are actually needed?