
I have a threading question, and I'd describe my threading background as moderate.

Suppose I have the following (over-simplified) design and behavior:

Object ObjectA has a reference to object ObjectB and a method MethodA(). Object ObjectB has a reference to ObjectA, an array of elements ArrayB, and a method MethodB().

ObjectA is responsible for instantiating ObjectB, and ObjectB.ObjectA points back to its instantiator.

Now, whenever certain conditions are met, a new element is added to ObjectB.ArrayB and a new thread is started for that element, call it ThreadB_x, where x runs from 1 to ObjectB.ArrayB.Length. Each such thread calls ObjectB.MethodB() to pass along some data, which in turn calls ObjectB.ObjectA.MethodA() for data processing.

So, multiple threads call the same method, ObjectB.MethodB(), and very likely at the same time. MethodB contains a lot of code that creates and initializes new objects, so I don't think there is a problem there. But then the method calls ObjectB.ObjectA.MethodA(), and I have no idea what happens inside it. Judging from the results I get, nothing seems to go wrong, but I'd like to be sure.

For now, I have wrapped the call to ObjectB.ObjectA.MethodA() in a lock statement inside ObjectB.MethodB(), so I believe this ensures that calls to MethodA() will not collide, although I'm not 100% sure. But what happens if each ThreadB_x calls ObjectB.MethodB() many times, and very, very fast? Will I end up with a queue of calls waiting for ObjectB.ObjectA.MethodA() to finish?
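The arrangement described above can be sketched roughly as follows (a minimal Java sketch, since the original code is not shown; the body of methodA, the lock object, and the class shapes are all assumptions):

```java
class ObjectA {
    private int processed = 0;

    // What the real MethodA does is unknown in the question; here it just
    // mutates state, which is exactly why the caller serializes access.
    void methodA() {
        processed++;
    }

    int getProcessed() { return processed; }
}

class ObjectB {
    final ObjectA objectA;
    private final Object methodALock = new Object();

    ObjectB(ObjectA objectA) { this.objectA = objectA; }

    void methodB() {
        // ... per-thread object creation and initialization (uncontended) ...
        synchronized (methodALock) {   // Java's analogue of C#'s lock statement
            objectA.methodA();         // at most one thread in here at a time;
        }                              // the others block and queue on the monitor
    }
}
```

With this shape, the answer to the asker's last question is yes: threads arriving while the monitor is held block on it, effectively forming a queue of pending calls.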

Thanks.


1 Answer


Your question is very difficult to answer because of the lack of information. It depends on the average time spent in methodA, how many times this method is called per thread, how many cores are allocated to the process, the OS scheduling policy, to name a few parameters.

All things being equal, as the number of threads grows toward infinity, you can easily imagine that the probability of two threads requesting access to a shared resource simultaneously will tend to one. This probability grows faster in proportion to the amount of time spent on the shared resource. That intuition is probably the reason for your question.

The main idea of multithreading is to parallelize code which can be effectively computed concurrently, and to avoid contention as much as possible. In your setup, if methodA is not pure, i.e. if it may change the state of the process (or, in C++ parlance, if it cannot be made const), then it is a source of contention. Recall that a function can only be pure if it uses only pure functions or constants in its body.
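The pure/not-pure distinction can be made concrete with a small, hypothetical Java illustration (methodA's real body is unknown, so these are stand-ins):

```java
class PurityExample {
    // Pure: the result depends only on the argument and nothing is mutated.
    // Any number of threads may call this simultaneously with no synchronization.
    static int square(int x) {
        return x * x;
    }

    static long total = 0;

    // Not pure: it mutates shared state. 'total += x' is a read-modify-write,
    // so concurrent callers can lose updates unless access is synchronized.
    static void addToTotal(int x) {
        total += x;
    }
}
```

Only the second kind of method turns callers into contenders for a shared resource.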

One way of dealing with a shared resource is to protect it with a mutex, as you've done in your code. Another way is to turn its use into an async service, with one thread handling it and the other threads submitting computation requests to that thread. In effect, you end up with an explicit queue of requests, but the threads making those requests are free to work on something else in the meantime. The goal is always to maximize computation time, as opposed to thread-management time, which is paid each time a thread gets rescheduled.
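That async-service idea can be sketched in Java (names here are hypothetical; a single-threaded ExecutorService plays the role of the one owning thread plus the explicit request queue):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class MethodAService {
    // One worker thread owns the shared resource; submit() enqueues requests.
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private int state = 0;  // touched only by the worker thread, so no lock needed

    // Callers return immediately with a Future instead of blocking on a lock.
    Future<Integer> requestMethodA(int data) {
        return worker.submit(() -> {
            state += data;  // the "methodA" work, serialized by construction
            return state;
        });
    }

    void shutdown() { worker.shutdown(); }
}
```

A caller does `Future<Integer> f = service.requestMethodA(42);`, carries on with other work, and only blocks in `f.get()` at the point where it actually needs the result.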

Of course, this is not always possible, e.g. when the result of methodA belongs to a strongly ordered chain of computation.

Answered 2013-03-04T13:56:39.080