
I have a graph that has a few endpoints: f0(g(x)) and f1(g(x)). I could make a graph with an edge from g to f0 and g to f1, but if I do wait_for_all() it'll calculate both f0 and f1. But sometimes I only want to know f0(x) and other times I want f1(x), and sometimes I want to know both. Assuming g, f0, and f1 are all expensive to calculate, I'd like to be able to build a graph and call y0 = f0.run_and_wait() and then earlier, later, or at about the same time call y1 = f1.run_and_wait().
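For concreteness, the setup looks roughly like the sketch below (the type and function names are placeholders, not my actual code): pushing one input into the g node triggers both f0 and f1.

#include <tbb/flow_graph.h>

// Placeholder types and stand-ins for the expensive functions.
struct X {}; struct G {}; struct Y {};
G compute_g(const X&)  { return {}; }
Y compute_f0(const G&) { return {}; }
Y compute_f1(const G&) { return {}; }

int main() {
    tbb::flow::graph graph;
    tbb::flow::function_node<X, G> g_node(graph, tbb::flow::serial,
        [](const X& x) { return compute_g(x); });
    tbb::flow::function_node<G, Y> f0_node(graph, tbb::flow::serial,
        [](const G& v) { return compute_f0(v); });
    tbb::flow::function_node<G, Y> f1_node(graph, tbb::flow::serial,
        [](const G& v) { return compute_f1(v); });

    // Edges g -> f0 and g -> f1: one try_put runs BOTH endpoints,
    // which is what I want to avoid when only one result is needed.
    tbb::flow::make_edge(g_node, f0_node);
    tbb::flow::make_edge(g_node, f1_node);

    g_node.try_put(X{});
    graph.wait_for_all();
}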

One approach is to not use a tbb flow graph and instead have f0 and f1 both call g, but that means two calls to g. Another approach is to have g do internal caching, but then if both calls to g happen at the same time, either both threads do the work or one thread blocks while the other does work. My understanding is that that goes against tbb's notion of non-blocking tasks.
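The caching variant I have in mind is something like this sketch (just one possible implementation; CachedG and compute_g are made-up names), where a mutex-guarded cache forces the second concurrent caller to block until the first finishes:

#include <mutex>
#include <optional>

struct X {}; struct G {};
G compute_g(const X&) { return {}; }  // stand-in for the expensive work

// Memoizing wrapper: the first caller computes the result; a concurrent
// second caller blocks on the mutex until the cached value is available.
class CachedG {
public:
    const G& get(const X& x) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!cached_) cached_ = compute_g(x);
        return *cached_;
    }
private:
    std::mutex mutex_;
    std::optional<G> cached_;
};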

I think maybe there's a way to use async_node to allow one thread to block on g, but that feels like a kludge.

Is there a tbb way to have nodes pull from their parent nodes on demand?


1 Answer


Have you thought about using a parallel_for and a task_group there?

#include <tbb/task_group.h>
#include <tbb/parallel_for_each.h>
#include <initializer_list>

G g(const X &x);        // the expensive intermediate computation
void f0(const G& g);    // expensive endpoint 0
void f1(const G& g);    // expensive endpoint 1

void runAndWait(const X &x, std::initializer_list<void (*)(const G&)> functions) {
  tbb::task_group tg;   // named tg so it does not shadow the function g
  tg.run_and_wait([&]() {
    // g is called exactly once, no matter how many endpoints were requested.
    G intermediate = g(x);
    // One task per requested endpoint; they may run in parallel.
    tbb::parallel_for_each(functions, [&](auto &f) {
      f(intermediate);
    });
  });
}

parallel_for_each spawns tasks as needed. You only call g once; whether one or two calls follow depends on the size of the initializer list.
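For example (purely illustrative, assuming f0 and f1 have the signatures declared above):

X x{};
runAndWait(x, {f0});       // g(x) runs once, then only f0
runAndWait(x, {f0, f1});   // g(x) runs once, f0 and f1 may run concurrently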

answered Jan 18, 2021 at 11:38