
As I understand ForkJoinPool, the pool creates a fixed number of threads (default: the number of cores) and never creates more threads than that (unless the application indicates a need for them by using managedBlock).
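
For reference, here is a minimal sketch of the managedBlock mechanism mentioned above (not from the program in question; the QueueTaker class and its queue are hypothetical, modeled on the example in the ForkJoinPool.ManagedBlocker javadoc):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ForkJoinPool;

// Wraps a potentially blocking queue.take() so that the pool knows the
// caller may block and can activate or create a spare thread to compensate.
class QueueTaker<E> implements ForkJoinPool.ManagedBlocker {
    private final BlockingQueue<E> queue;
    private volatile E item = null;

    QueueTaker(BlockingQueue<E> queue) { this.queue = queue; }

    public boolean block() throws InterruptedException {
        if (item == null)
            item = queue.take();   // may block; the pool has been warned
        return true;
    }

    public boolean isReleasable() {
        return item != null || (item = queue.poll()) != null;
    }

    public E getItem() { return item; }
}

// Usage inside a task running in the pool:
// QueueTaker<String> taker = new QueueTaker<>(queue);
// ForkJoinPool.managedBlock(taker);
// String s = taker.getItem();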

However, using ForkJoinPool.getPoolSize() I found that in a program creating 30,000 tasks (RecursiveAction), the ForkJoinPool executing those tasks uses 700 threads on average (the thread count was sampled every time a task was created). The tasks don't do any I/O, but pure computation; the only inter-task synchronization is calling ForkJoinTask.join() and accessing AtomicBooleans, i.e. there are no thread-blocking operations.

Since join() does not block the calling thread as I understand it, there is no reason why any thread in the pool should ever block, and hence (I had assumed) there should be no reason to create any further threads (which is obviously happening nevertheless).

So, why does ForkJoinPool create so many threads? What factors determine the number of threads created?

I had hoped that this question could be answered without posting code, but here it comes upon request. This code is an excerpt from a program of four times its size, reduced to the essential parts; it does not compile as it is. If desired, I can of course post the full program, too.

The program searches a maze for a path from a given start point to a given end point using depth-first search. A solution is guaranteed to exist. The main logic is in the compute() method of SolverTask, a RecursiveAction that starts at some given point and continues with all neighboring points reachable from the current point. Rather than creating a new SolverTask at each branching point (which would create far too many tasks), it pushes all neighbors except one onto a backtracking stack for later processing and continues with only the one neighbor that was not pushed. Once it reaches a dead end that way, the point most recently pushed onto the backtracking stack is popped, and the search continues from there (cutting back the path built from the task's starting point accordingly). A new task is created once a task finds its backtracking stack larger than a certain threshold; from that time on, the task, while continuing to pop from its backtracking stack until that is exhausted, will not push any further points onto its stack when reaching a branching point, but create a new task for each such point. Thus, the size of the tasks can be tuned via the stack-limit threshold.

The numbers quoted above ("30,000 tasks, an average of 700 threads") were measured while searching a maze of 5000x5000 cells. So, here is the essential code:

class SolverTask extends RecursiveTask<ArrayDeque<Point>> {
// Once the backtrack stack has reached this size, the current task
// will never add another cell to it, but create a new task for each
// newly discovered branch:
private static final int MAX_BACKTRACK_CELLS = 100*1000;

/**
 * @return a path through the maze from the local start to the end,
 *         or null if no such path was found
 */
@Override
public ArrayDeque<Point> compute() {
    // Is this task still accepting new branches for processing on its own,
    // or will it create new tasks to handle those?
    boolean stillAcceptingNewBranches = true;
    Point current = localStart;
    ArrayDeque<Point> pathFromLocalStart = new ArrayDeque<Point>();  // Path from localStart to (including) current
    ArrayDeque<PointAndDirection> backtrackStack = new ArrayDeque<PointAndDirection>();
    // Used as a stack: Branches not yet taken; solver will backtrack to these branching points later

    Direction[] allDirections = Direction.values();

    while (!current.equals(end)) {
        pathFromLocalStart.addLast(current);
        // Collect current's unvisited neighbors in random order: 
        ArrayDeque<PointAndDirection> neighborsToVisit = new ArrayDeque<PointAndDirection>(allDirections.length);  
        for (Direction directionToNeighbor: allDirections) {
            Point neighbor = current.getNeighbor(directionToNeighbor);

            // contains() and hasPassage() are read-only methods and thus need no synchronization
            if (maze.contains(neighbor) && maze.hasPassage(current, neighbor) && maze.visit(neighbor))
                neighborsToVisit.add(new PointAndDirection(neighbor, directionToNeighbor.opposite));
        }
        // Process unvisited neighbors
        if (neighborsToVisit.size() == 1) {
            // Current node is no branch: Continue with that neighbor
            current = neighborsToVisit.getFirst().getPoint();
            continue;
        }
        if (neighborsToVisit.size() >= 2) {
            // Current node is a branch
            if (stillAcceptingNewBranches) {
                current = neighborsToVisit.removeLast().getPoint();
                // Push all neighbors except one on the backtrack stack for later processing
                for(PointAndDirection neighborAndDirection: neighborsToVisit) 
                    backtrackStack.push(neighborAndDirection);
                if (backtrackStack.size() > MAX_BACKTRACK_CELLS)
                    stillAcceptingNewBranches = false;
                // Continue with the one neighbor that was not pushed onto the backtrack stack
                continue;
            } else {
                // Current node is a branch point, but this task does not accept new branches any more: 
                // Create new task for each neighbor to visit and wait for the end of those tasks
                SolverTask[] subTasks = new SolverTask[neighborsToVisit.size()];
                int t = 0;
                for(PointAndDirection neighborAndDirection: neighborsToVisit)  {
                    SolverTask task = new SolverTask(neighborAndDirection.getPoint(), end, maze);
                    task.fork();
                    subTasks[t++] = task;
                }
                for (SolverTask task: subTasks) {
                    ArrayDeque<Point> subTaskResult = null;
                    try {
                        subTaskResult = task.join();
                    } catch (CancellationException e) {
                        // Nothing to do here: Another task has found the solution and cancelled all other tasks
                    }
                    catch (Exception e) {
                        e.printStackTrace();
                    }
                    if (subTaskResult != null) { // subtask found solution
                        pathFromLocalStart.addAll(subTaskResult);
                        // No need to wait for the other subtasks once a solution has been found
                        return pathFromLocalStart;
                    }
                } // for subTasks
            } // else (not accepting any more branches) 
        } // if (current node is a branch)
        // Current node is dead end or all its neighbors lead to dead ends:
        // Continue with a node from the backtracking stack, if any is left:
        if (backtrackStack.isEmpty()) {
            return null; // No more backtracking available: No solution exists => end of this task
        }
        // Backtrack: Continue with cell saved at latest branching point:
        PointAndDirection pd = backtrackStack.pop();
        current = pd.getPoint();
        Point branchingPoint = current.getNeighbor(pd.getDirectionToBranchingPoint());
        // DEBUG System.out.println("Backtracking to " +  branchingPoint);
        // Remove the dead end from the top of pathFromLocalStart, i.e. all cells after branchingPoint:
        while (!pathFromLocalStart.peekLast().equals(branchingPoint)) {
            // DEBUG System.out.println("    Going back before " + pathFromLocalStart.peekLast());
            pathFromLocalStart.removeLast();
        }
        // continue while loop with newly popped current
    } // while (current ...
    if (!current.equals(end)) {         
        // this task was interrupted by another one that already found the solution 
        // and should end now therefore:
        return null;
    } else {
        // Found the solution path:
        pathFromLocalStart.addLast(current);
        return pathFromLocalStart;
    }
} // compute()
} // class SolverTask

@SuppressWarnings("serial")
public class ParallelMaze  {

// for each cell in the maze: Has the solver visited it yet?
private final AtomicBoolean[][] visited;

/**
 * Atomically marks this point as visited unless visited before
 * @return whether the point was visited for the first time, i.e. whether it could be marked
 */
boolean visit(Point p) {
    return  visited[p.getX()][p.getY()].compareAndSet(false, true);
}

public static void main(String[] args) throws InterruptedException {
    ForkJoinPool pool = new ForkJoinPool();
    // width, height, start, end and solution are fields/constants defined
    // elsewhere in the full program (this excerpt does not compile as-is):
    ParallelMaze maze = new ParallelMaze(width, height, new Point(width-1, 0), new Point(0, height-1));
    // Start initial task
    long startTime = System.currentTimeMillis();
     // since SolverTask.compute() expects its starting point already visited, 
    // must do that explicitly for the global starting point:
    maze.visit(maze.start);
    maze.solution = pool.invoke(new SolverTask(maze.start, maze.end, maze));
    // One solution is enough: Stop all tasks that are still running
    pool.shutdownNow();
    pool.awaitTermination(Integer.MAX_VALUE, TimeUnit.DAYS);
    long endTime = System.currentTimeMillis();
    System.out.println("Computed solution of length " + maze.solution.size() + " to maze of size " + 
            width + "x" + height + " in " + ((float)(endTime - startTime))/1000 + "s.");
}
} // class ParallelMaze

5 Answers


Related questions on Stack Overflow:

ForkJoinPool stalls during invokeAll/join

ForkJoinPool seems to waste a thread

I made a runnable stripped-down version of what is happening (the JVM arguments I used: -Xms256m -Xmx1024m -Xss8m):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class Test1 {

    private static ForkJoinPool pool = new ForkJoinPool(2);

    private static class SomeAction extends RecursiveAction {

        private int counter;         //recursive counter
        private int childrenCount=80;//amount of children to spawn
        private int idx;             // just for displaying

        private SomeAction(int counter, int idx) {
            this.counter = counter;
            this.idx = idx;
        }

        @Override
        protected void compute() {

            System.out.println(
                "counter=" + counter + "." + idx +
                " activeThreads=" + pool.getActiveThreadCount() +
                " runningThreads=" + pool.getRunningThreadCount() +
                " poolSize=" + pool.getPoolSize() +
                " queuedTasks=" + pool.getQueuedTaskCount() +
                " queuedSubmissions=" + pool.getQueuedSubmissionCount() +
                " parallelism=" + pool.getParallelism() +
                " stealCount=" + pool.getStealCount());
            if (counter <= 0) return;

            List<SomeAction> list = new ArrayList<>(childrenCount);
            for (int i=0;i<childrenCount;i++){
                SomeAction next = new SomeAction(counter-1,i);
                list.add(next);
                next.fork();
            }


            for (SomeAction action:list){
                action.join();
            }
        }
    }

    public static void main(String[] args) throws Exception{
        pool.invoke(new SomeAction(2,0));
    }
}

Apparently, when you perform a join, the current thread sees that the required task is not yet completed and takes another task for itself to execute.

This happens in java.util.concurrent.ForkJoinWorkerThread#joinTask.

However, this new task spawns more of the same tasks, and those cannot find threads in the pool, because the threads are locked up in join. And since the pool has no way of knowing how much time those threads will need until they are released (a thread could be in an infinite loop or deadlocked forever), new threads are spawned (compensating for the joined threads, as Louis Wasserman mentioned): see java.util.concurrent.ForkJoinPool#signalWork.

So, to prevent this scenario, you need to avoid the recursive spawning of tasks.

For example, if in the above code you set the initial parameter to 1, the active thread count stays at 2, even if you increase childrenCount tenfold.

Also note that while the number of active threads increases, the number of running threads stays less than or equal to the parallelism.
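
A common mitigation (a hedged sketch reusing the SomeAction fields from the snippet above, not code from the original program) is to keep the current thread working on one child directly instead of forking everything and then blocking in join() right away:

@Override
protected void compute() {
    if (counter <= 0) return;
    List<SomeAction> forked = new ArrayList<>(childrenCount - 1);
    // Fork all children except the last one ...
    for (int i = 0; i < childrenCount - 1; i++) {
        SomeAction next = new SomeAction(counter - 1, i);
        forked.add(next);
        next.fork();
    }
    // ... and compute the last child in the current thread, so this worker
    // does useful work before it has to block.
    new SomeAction(counter - 1, childrenCount - 1).compute();
    // Join in reverse fork order: the most recently forked task sits on top
    // of this worker's deque and can often be popped and run locally.
    for (int i = forked.size() - 1; i >= 0; i--)
        forked.get(i).join();
}

This does not eliminate blocking joins entirely (the tasks are still spawned recursively), but it delays the point at which all workers end up parked in join().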

Answered 2013-11-21T17:54:30.700

From the source comments:

Compensating: Unless there are already enough live threads, method tryPreBlock() may create or re-activate a spare thread to compensate for blocked joiners until they unblock.

I think what's happening is that you're not finishing any of the tasks very quickly, and since there are no available worker threads at the moment a new task is submitted, a new thread gets created.

Answered 2012-05-29T10:59:31.737

Strict, fully strict, and terminally strict have to do with processing a directed acyclic graph (DAG). You can google those terms to get a full understanding of them; that is the type of processing this framework was designed to handle. Look at the code in the API for Recursive...: the framework relies on your compute() code to do other compute() links and then do a join(). Each task does a join(), just like processing a DAG.

You are not doing DAG processing. You are forking many new tasks and waiting (join()) on each one. Have a read of the source code. It is horribly complex, but you may be able to figure it out. The framework does not do proper task management. Where is it going to put a waiting task when it does a join()? There is no suspended queue; that would require a monitor thread to constantly look at the queue to see what has finished. This is why the framework uses "continuation threads". When one task does a join(), the framework assumes it is waiting for a single lower task to finish. When many join() calls are pending, the thread cannot continue, so a helper or continuation thread needs to exist.

As noted above, you need a scatter-gather type of fork-join process. There you can fork as many tasks as needed.
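
For contrast, here is a minimal sketch of the strict, DAG-shaped divide-and-conquer the framework is designed for (an array-summing example of my own, not from the question): each task forks one half, computes the other half itself, and joins only its single direct child.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {          // small enough: sum sequentially
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                          // scatter: hand one half to the pool
        long rightSum = right.compute();      // keep working on the other half
        return left.join() + rightSum;        // gather: join one direct child only
    }
}

// Usage: long total = new ForkJoinPool().invoke(new SumTask(array, 0, array.length));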

Answered 2012-06-01T14:09:47.150

Neither of the two code snippets posted by Holger Peine and elusive-code actually follows the recommended practice that appeared in the javadoc for the 1.8 release:

In the most typical usages, a fork-join pair act like a call (fork) and return (join) from a parallel recursive function. As is the case with other forms of recursive calls, returns (joins) should be performed innermost-first. For example, a.fork(); b.fork(); b.join(); a.join(); is likely to be substantially more efficient than joining a before b.

In both cases the FJPool was instantiated via the default constructor. That leads to construction of the pool with asyncMode = false, which is the default:

@param asyncMode if true, establishes local first-in-first-out scheduling mode for forked tasks that are never joined. This mode may be more appropriate than default locally stack-based mode in applications in which worker threads only process event-style asynchronous tasks. For default value, use false.
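
For reference, opting into FIFO mode requires the four-argument constructor; a sketch (the parallelism value is only illustrative):

import java.util.concurrent.ForkJoinPool;

public class AsyncPoolDemo {
    public static void main(String[] args) {
        // asyncMode = true switches the workers' local queues to FIFO, which
        // suits event-style tasks that are never joined; the default (false)
        // gives the locally stack-based (LIFO) mode discussed below.
        ForkJoinPool asyncPool = new ForkJoinPool(
                Runtime.getRuntime().availableProcessors(),  // parallelism
                ForkJoinPool.defaultForkJoinWorkerThreadFactory,
                null,    // no custom UncaughtExceptionHandler
                true);   // asyncMode
        asyncPool.shutdown();
    }
}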

With the default (asyncMode = false), the work queue is effectively LIFO:
head -> | t4 | t3 | t2 | t1 | ... | <- tail

So in the snippets they fork() all the tasks, pushing them onto the stack, and then join() them in the same order, that is, from the deepest task (t1) to the topmost one (t4), effectively blocking until some other thread steals (t1), then (t2), and so on. Since there are enough tasks to block all pool threads (task_count >> pool.getParallelism()), compensation kicks in, as Louis Wasserman described.
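
Applied to the join loop of the Test1 snippet above, the recommendation would mean joining in the reverse of fork order (a sketch, not benchmarked here):

// Join innermost-first: the task forked last sits on top of this worker's
// deque, so joining it first lets the worker pop and run it locally
// instead of blocking while it waits for a steal.
for (int i = list.size() - 1; i >= 0; i--) {
    list.get(i).join();
}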

Answered 2017-07-12T21:14:56.127

It is worth noting that the output of the code posted by elusive-code depends on the Java version. Running the code on Java 8, I saw this output:

...
counter=0.73 activeThreads=45 runningThreads=5 poolSize=49 queuedTasks=105 queuedSubmissions=0 parallelism=2 stealCount=3056
counter=0.75 activeThreads=46 runningThreads=1 poolSize=51 queuedTasks=0 queuedSubmissions=0 parallelism=2 stealCount=3158
counter=0.77 activeThreads=47 runningThreads=3 poolSize=51 queuedTasks=0 queuedSubmissions=0 parallelism=2 stealCount=3157
counter=0.74 activeThreads=45 runningThreads=3 poolSize=51 queuedTasks=5 queuedSubmissions=0 parallelism=2 stealCount=3153

But running the same code on Java 11, the output is different:

...
counter=0.75 activeThreads=1 runningThreads=1 poolSize=2 queuedTasks=4 queuedSubmissions=0 parallelism=2 stealCount=0
counter=0.76 activeThreads=1 runningThreads=1 poolSize=2 queuedTasks=3 queuedSubmissions=0 parallelism=2 stealCount=0
counter=0.77 activeThreads=1 runningThreads=1 poolSize=2 queuedTasks=2 queuedSubmissions=0 parallelism=2 stealCount=0
counter=0.78 activeThreads=1 runningThreads=1 poolSize=2 queuedTasks=1 queuedSubmissions=0 parallelism=2 stealCount=0
counter=0.79 activeThreads=1 runningThreads=1 poolSize=2 queuedTasks=0 queuedSubmissions=0 parallelism=2 stealCount=0
Answered 2019-12-06T11:59:31.227