While reading the asio source code, I became curious about how asio synchronizes data between threads, even in an implicit strand. Here is the relevant code in asio:
io_service::run
mutex::scoped_lock lock(mutex_);

std::size_t n = 0;
for (; do_run_one(lock, this_thread, ec); lock.lock())
  if (n != (std::numeric_limits<std::size_t>::max)())
    ++n;
return n;
io_service::do_run_one
while (!stopped_)
{
  if (!op_queue_.empty())
  {
    // Prepare to execute first handler from queue.
    operation* o = op_queue_.front();
    op_queue_.pop();
    bool more_handlers = (!op_queue_.empty());

    if (o == &task_operation_)
    {
      task_interrupted_ = more_handlers;

      if (more_handlers && !one_thread_)
      {
        if (!wake_one_idle_thread_and_unlock(lock))
          lock.unlock();
      }
      else
        lock.unlock();

      task_cleanup on_exit = { this, &lock, &this_thread };
      (void)on_exit;

      // Run the task. May throw an exception. Only block if the operation
      // queue is empty and we're not polling, otherwise we want to return
      // as soon as possible.
      task_->run(!more_handlers, this_thread.private_op_queue);
    }
    else
    {
      std::size_t task_result = o->task_result_;

      if (more_handlers && !one_thread_)
        wake_one_thread_and_unlock(lock);
      else
        lock.unlock();

      // Ensure the count of outstanding work is decremented on block exit.
      work_cleanup on_exit = { this, &lock, &this_thread };
      (void)on_exit;

      // Complete the operation. May throw an exception. Deletes the object.
      o->complete(*this, ec, task_result);

      return 1;
    }
  }
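To make the scenario concrete, here is roughly the kind of program I have in mind (my own sketch, not asio code; the handlers and data are made up). One io_service is driven by two threads, and each handler queues the next one as its last action, so at most one handler runs at a time, forming an implicit strand, yet consecutive handlers may land on different threads:

#include <boost/asio.hpp>
#include <thread>
#include <vector>

boost::asio::io_service io;
std::vector<int> data;            // shared data, never accessed concurrently

void step2()
{
  // May run on thread B: reads what step1 wrote on thread A.
  std::size_t n = data.size();
  (void)n;
}

void step1()
{
  // May run on thread A: modifies the shared data with no lock held.
  data.push_back(42);
  io.post(&step2);                // the next handler is queued only now
}

int main()
{
  io.post(&step1);
  std::thread t1([] { io.run(); });
  std::thread t2([] { io.run(); });
  t1.join();
  t2.join();
}

Here step1 may execute on one thread and step2 on the other, but their accesses to data never overlap in time.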
In do_run_one, the mutex is always unlocked before the handler is executed. With an implicit strand the handlers will not run concurrently, but the problem remains: thread A runs a handler that modifies some data, and thread B runs the next handler, which reads the data modified by thread A. Without the mutex held, how does thread B see the changes thread A made to that data? Unlocking the mutex before the handler executes does not establish a happens-before relationship between the two threads' accesses to the data touched by the handlers. Digging further, I found that handler execution uses something called fenced_block:
completion_handler* h(static_cast<completion_handler*>(base));
ptr p = { boost::addressof(h->handler_), h, h };
BOOST_ASIO_HANDLER_COMPLETION((h));
// Make a copy of the handler so that the memory can be deallocated before
// the upcall is made. Even if we're not about to make an upcall, a
// sub-object of the handler may be the true owner of the memory associated
// with the handler. Consequently, a local copy of the handler is required
// to ensure that any owning sub-object remains valid until after we have
// deallocated the memory here.
Handler handler(BOOST_ASIO_MOVE_CAST(Handler)(h->handler_));
p.h = boost::addressof(handler);
p.reset();
// Make the upcall if required.
// Make the upcall if required.
if (owner)
{
  fenced_block b(fenced_block::half);
  BOOST_ASIO_HANDLER_INVOCATION_BEGIN(());
  boost_asio_handler_invoke_helpers::invoke(handler, handler);
  BOOST_ASIO_HANDLER_INVOCATION_END;
}
What is this? I know that a fence is a synchronization primitive supported by C++11, but this fence is written entirely by asio itself. Does this fenced_block do the work of synchronizing the data?
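For reference, my current reading of the C++11 variant is roughly the following (my own reconstruction from skimming the headers, not the actual asio source; asio also has compiler- and platform-specific variants that use builtins or inline assembly instead):

#include <atomic>

// Sketch of what the std_fenced_block variant appears to do; "half" and
// "full" describe the fencing on entry, and both issue a release on exit.
class fenced_block_sketch
{
public:
  enum half_t { half };
  enum full_t { full };

  // "Half" fenced block: no fence on construction.
  explicit fenced_block_sketch(half_t)
  {
  }

  // "Full" fenced block: acquire fence on construction.
  explicit fenced_block_sketch(full_t)
  {
    std::atomic_thread_fence(std::memory_order_acquire);
  }

  // Release fence when the block is left, i.e. after the handler has run.
  ~fenced_block_sketch()
  {
    std::atomic_thread_fence(std::memory_order_release);
  }
};

If that reading is correct, the fenced_block::half object in the invocation code above would issue a release fence right after the handler returns, but I am not sure that alone explains the visibility guarantee I am asking about.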
Update
After googling and reading this and this, I see that asio does use memory fence primitives to synchronize data between threads, which is faster than keeping the mutex locked until the handler has finished executing (hence the speed difference on x86). In fact, the Java volatile keyword is implemented by inserting a memory barrier after a write to, and before a read of, the variable, which is what establishes the happens-before relationship.
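As a toy illustration of that mechanism (my own example, not asio code), a release/acquire fence pair is enough to create a happens-before edge without any mutex:

#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                    // plain, non-atomic data
std::atomic<bool> ready(false);

void producer()                     // plays the role of thread A's handler
{
  payload = 42;                     // ordinary write
  std::atomic_thread_fence(std::memory_order_release);
  ready.store(true, std::memory_order_relaxed);
}

void consumer()                     // plays the role of thread B's handler
{
  while (!ready.load(std::memory_order_relaxed))
    ;                               // spin until the flag is observed
  std::atomic_thread_fence(std::memory_order_acquire);
  assert(payload == 42);            // guaranteed by the fence pair
}

int main()
{
  std::thread a(producer);
  std::thread b(consumer);
  a.join();
  b.join();
}

Everything producer wrote before its release fence is visible to consumer after its acquire fence, once consumer has observed the flag value stored after the release.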
I will accept an answer that briefly describes asio's memory fence implementation, or that adds whatever I have missed or misunderstood.