
I have a Sequence (from File.walkTopDown) and I need to run a long-running operation on each item. I'd like to use Kotlin best practices / coroutines, but I either get no parallelism, or way too much parallelism and hit a "too many open files" IO error.

File("/Users/me/Pictures/").walkTopDown()
    .onFail { file, ex -> println("ERROR: $file caused $ex") }
    .filter { ... only big images... }
    .map { file ->
        async { // I *think* I want async and not "launch"...
            ImageProcessor.fromFile(file)
        }
    }

This doesn't seem to run in parallel, and my multi-core CPU never goes above the equivalent of one CPU. Is there a way with coroutines to run "NumberOfCores parallel operations" worth of deferred jobs?

I looked at Multithreading using Kotlin Coroutines, which first creates ALL the jobs and then joins them, but that means completing the Sequence / file-tree walk before the heavy-processing join step, and that seems... iffy! Splitting it into a collect step and a process step means the collection could run far ahead of the processing.

val jobs = ... the Sequence above...
    .toSet()
println("Found ${jobs.size}")
jobs.forEach { it.await() }

7 Answers


This isn't specific to your problem, but it does answer the question of "how to limit kotlin coroutines maximum concurrency".

Edit: As of kotlinx.coroutines 1.6.0 (https://github.com/Kotlin/kotlinx.coroutines/issues/2919), you can use limitedParallelism, e.g. Dispatchers.IO.limitedParallelism(123).

Old solution: I thought about using newFixedThreadPoolContext at first, but 1) it's deprecated and 2) it would use threads, and I don't think that's necessary or desirable (same with Executors.newFixedThreadPool().asCoroutineDispatcher()). This solution, using a Semaphore, may have flaws I'm not aware of, but it's very simple:

import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

/**
 * Maps the inputs using [transform] at most [maxConcurrency] at a time until all Jobs are done.
 */
suspend fun <TInput, TOutput> Iterable<TInput>.mapConcurrently(
    maxConcurrency: Int,
    transform: suspend (TInput) -> TOutput,
) = coroutineScope {
    val gate = Semaphore(maxConcurrency)
    this@mapConcurrently.map {
        async {
            gate.withPermit {
                transform(it)
            }
        }
    }.awaitAll()
}
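
For the question's scenario, usage could look roughly like the sketch below. The names are illustrative only: processBigImages is a made-up wrapper, the it.isFile filter stands in for the elided "only big images" check, and ImageProcessor is the class from the question. Because mapConcurrently is defined on Iterable, the walk is collected with toList() first, so the traversal completes before processing begins.

// Sketch only: collect the matching files, then process at most
// availableProcessors() of them at a time.
suspend fun processBigImages() =
    File("/Users/me/Pictures/").walkTopDown()
        .onFail { file, ex -> println("ERROR: $file caused $ex") }
        .filter { it.isFile } // hypothetical stand-in for the "only big images" filter
        .toList()
        .mapConcurrently(Runtime.getRuntime().availableProcessors()) { file ->
            ImageProcessor.fromFile(file) // ImageProcessor is from the question
        }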

Tests (apologies, they use Spek, hamcrest, and kotlin test):

import kotlinx.coroutines.ExperimentalCoroutinesApi
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.test.TestCoroutineDispatcher
import org.hamcrest.MatcherAssert.assertThat
import org.hamcrest.Matchers.greaterThanOrEqualTo
import org.hamcrest.Matchers.lessThanOrEqualTo
import org.spekframework.spek2.Spek
import org.spekframework.spek2.style.specification.describe
import java.util.concurrent.atomic.AtomicInteger
import kotlin.test.assertEquals

@OptIn(ExperimentalCoroutinesApi::class)
object AsyncHelpersKtTest : Spek({
    val actionDelay: Long = 1_000 // arbitrary; obvious if non-test dispatcher is used on accident
    val testDispatcher = TestCoroutineDispatcher()

    afterEachTest {
        // Clean up the TestCoroutineDispatcher to make sure no other work is running.
        testDispatcher.cleanupTestCoroutines()
    }

    describe("mapConcurrently") {
        it("should run all inputs concurrently if maxConcurrency >= size") {
            val concurrentJobCounter = AtomicInteger(0)
            val inputs = IntRange(1, 2).toList()
            val maxConcurrency = inputs.size

            // https://github.com/Kotlin/kotlinx.coroutines/issues/1266 has useful info & examples
            runBlocking(testDispatcher) {
                print("start runBlocking $coroutineContext\n")

                // We have to run this async so that the code afterwards can advance the virtual clock
                val job = launch {
                    testDispatcher.pauseDispatcher {
                        val result = inputs.mapConcurrently(maxConcurrency) {
                            print("action $it $coroutineContext\n")

                            // Sanity check that we never run more in parallel than max
                            assertThat(concurrentJobCounter.addAndGet(1), lessThanOrEqualTo(maxConcurrency))

                            // Allow for virtual clock adjustment
                            delay(actionDelay)

                            // Sanity check that we never run more in parallel than max
                            assertThat(concurrentJobCounter.getAndAdd(-1), lessThanOrEqualTo(maxConcurrency))
                            print("action $it after delay $coroutineContext\n")

                            it
                        }

                        // Order is not guaranteed, thus a Set
                        assertEquals(inputs.toSet(), result.toSet())
                        print("end mapConcurrently $coroutineContext\n")
                    }
                }
                print("before advanceTime $coroutineContext\n")

                // Start the coroutines
                testDispatcher.advanceTimeBy(0)
                assertEquals(inputs.size, concurrentJobCounter.get(), "All jobs should have been started")

                testDispatcher.advanceTimeBy(actionDelay)
                print("after advanceTime $coroutineContext\n")
                assertEquals(0, concurrentJobCounter.get(), "All jobs should have finished")
                job.join()
            }
        }

        it("should run one at a time if maxConcurrency = 1") {
            val concurrentJobCounter = AtomicInteger(0)
            val inputs = IntRange(1, 2).toList()
            val maxConcurrency = 1

            runBlocking(testDispatcher) {
                val job = launch {
                    testDispatcher.pauseDispatcher {
                        inputs.mapConcurrently(maxConcurrency) {
                            assertThat(concurrentJobCounter.addAndGet(1), lessThanOrEqualTo(maxConcurrency))
                            delay(actionDelay)
                            assertThat(concurrentJobCounter.getAndAdd(-1), lessThanOrEqualTo(maxConcurrency))
                            it
                        }
                    }
                }

                testDispatcher.advanceTimeBy(0)
                assertEquals(1, concurrentJobCounter.get(), "Only one job should have started")

                val elapsedTime = testDispatcher.advanceUntilIdle()
                print("elapsedTime=$elapsedTime")
                assertThat(
                    "Virtual time should be at least as long as if all jobs ran sequentially",
                    elapsedTime,
                    greaterThanOrEqualTo(actionDelay * inputs.size)
                )
                job.join()
            }
        }

        it("should handle cancellation") {
            val jobCounter = AtomicInteger(0)
            val inputs = IntRange(1, 2).toList()
            val maxConcurrency = 1

            runBlocking(testDispatcher) {
                val job = launch {
                    testDispatcher.pauseDispatcher {
                        inputs.mapConcurrently(maxConcurrency) {
                            jobCounter.addAndGet(1)
                            delay(actionDelay)
                            it
                        }
                    }
                }

                testDispatcher.advanceTimeBy(0)
                assertEquals(1, jobCounter.get(), "Only one job should have started")

                job.cancel()
                testDispatcher.advanceUntilIdle()
                assertEquals(1, jobCounter.get(), "Only one job should have run")
                job.join()
            }
        }
    }
})

Per https://play.kotlinlang.org/hands-on/Introduction%20to%20Coroutines%20and%20Channels/09_Testing, you may also need to adjust the compiler arguments for the tests to run:

compileTestKotlin {
    kotlinOptions {
        // Needed for runBlocking test coroutine dispatcher?
        freeCompilerArgs += "-Xuse-experimental=kotlin.Experimental"
        freeCompilerArgs += "-Xopt-in=kotlin.RequiresOptIn"
    }
}
testImplementation 'org.jetbrains.kotlinx:kotlinx-coroutines-test:1.4.1'
Answered 2020-11-28T04:51:45.910

The problem with your first snippet is that it doesn't run at all - remember, a Sequence is lazy, and you have to use a terminal operation such as toSet() or forEach(). Additionally, you need to limit the number of threads that can be used for that task by constructing a newFixedThreadPoolContext context and using it in async:

val pictureContext = newFixedThreadPoolContext(nThreads = 10, name = "reading pictures in parallel")

File("/Users/me/Pictures/").walkTopDown()
    .onFail { file, ex -> println("ERROR: $file caused $ex") }
    .filter { ... only big images... }
    .map { file ->
        async(pictureContext) {
            ImageProcessor.fromFile(file)
        }
    }
    .toList()
    .forEach { it.await() }

Edit: Also, you have to use a terminal operator (toList) before awaiting the results.
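
(Note: newFixedThreadPoolContext creates its own threads, so if the pool is only needed for this one pass, closing it afterwards with pictureContext.close() avoids leaking those threads.)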

Answered 2017-12-07T08:23:00.880

I got it working with a Channel. But maybe I'm being redundant with your way?

val pipe = ArrayChannel<Deferred<ImageFile>>(20)
launch {
    while (!(pipe.isEmpty && pipe.isClosedForSend)) {
        imageFiles.add(pipe.receive().await())
    }
    println("pipe closed")
}
File("/Users/me/").walkTopDown()
        .onFail { file, ex -> println("ERROR: $file caused $ex") }
        .forEach { pipe.send(async { ImageFile.fromFile(it) }) }
pipe.close()
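
As an aside, ArrayChannel has since been folded into the Channel(capacity) factory in newer kotlinx.coroutines releases. A rough equivalent with the current API might look like the sketch below (assuming the kotlinx.coroutines Channel/Deferred imports, a surrounding CoroutineScope, and the imageFiles list and ImageFile class from the snippet above); the polling loop is replaced by a plain for-loop over the channel, which ends once the channel is closed and drained:

// Rough sketch with the current channel API (not the answer's original code):
// a buffered channel of 20 Deferreds bounds how far the walk can run ahead.
val pipe = Channel<Deferred<ImageFile>>(20)
val consumer = launch {
    for (deferred in pipe) { // completes when pipe is closed and empty
        imageFiles.add(deferred.await())
    }
    println("pipe closed")
}
File("/Users/me/").walkTopDown()
    .onFail { file, ex -> println("ERROR: $file caused $ex") }
    .forEach { pipe.send(async { ImageFile.fromFile(it) }) }
pipe.close()
consumer.join()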
Answered 2017-12-09T01:49:13.117

This doesn't preserve the order of the projection, but it does limit the throughput to at most maxDegreeOfParallelism. Expand and extend as you see fit.

suspend fun <TInput, TOutput> (Collection<TInput>).inParallel(
        maxDegreeOfParallelism: Int,
        action: suspend CoroutineScope.(input: TInput) -> TOutput
): Iterable<TOutput> = coroutineScope {

    val list = this@inParallel

    if (list.isEmpty())
        return@coroutineScope listOf<TOutput>()

    val brake = Channel<Unit>(maxDegreeOfParallelism)
    val output = Channel<TOutput>()
    val counter = AtomicInteger(0)

    this.launch {

        repeat(maxDegreeOfParallelism) {
            brake.send(Unit)
        }

        for (input in list) {

            val task = this.async {
                action(input)
            }

            this.launch {
                val result = task.await()
                output.send(result)
                val completed = counter.incrementAndGet()
                if (completed == list.size) {
                    output.close()
                } else brake.send(Unit)
            }

            brake.receive()
        }
    }

    val results = mutableListOf<TOutput>()
    for (item in output) {
        results.add(item)
    }

    return@coroutineScope results
}

Example usage:

val output = listOf(1, 2, 3).inParallel(2) {
    it + 1
} // Note that output may not be in same order as list.
Answered 2020-01-08T10:47:51.537

This caps the coroutines to a fixed set of workers. I'd recommend watching https://www.youtube.com/watch?v=3WGM-_MnPQA

package com.example.workers

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.ReceiveChannel
import kotlinx.coroutines.channels.produce
import kotlin.system.measureTimeMillis

class ChannellibgradleApplication

fun main(args: Array<String>) {
    val myList = mutableListOf(3000, 1200, 1400, 3000, 1200, 1400, 3000)
    runBlocking {
        // Producer: feeds every work item into a channel.
        val myChannel = produce(CoroutineName("MyInts")) {
            myList.forEach { send(it) }
        }

        println("Starting coroutineScope")
        val time = measureTimeMillis {
            coroutineScope {
                // Two workers drain the channel, so at most two items are processed at once.
                val workers = 2
                repeat(workers) {
                    launch(CoroutineName("Sleep 1")) { theHardWork(myChannel) }
                }
            }
        }
        println("Ending coroutineScope $time ms")
    }
}

suspend fun theHardWork(channel: ReceiveChannel<Int>) {
    for (m in channel) {
        println("Starting Sleep $m")
        delay(m.toLong())
        println("Ending Sleep $m")
    }
}
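
Adapted to the question's file-processing case, the same worker pattern might look roughly like the sketch below. This is an assumption-laden illustration: ImageProcessor is the class from the question, the it.isFile filter stands in for the elided "only big images" check, and the number of simultaneously open files is bounded by the number of workers.

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.produce
import java.io.File

fun main() = runBlocking {
    // Producer walks the tree and feeds files into a channel.
    val files = produce(Dispatchers.IO) {
        File("/Users/me/Pictures/").walkTopDown()
            .filter { it.isFile } // stand-in for the question's "only big images" filter
            .forEach { send(it) }
    }
    // A fixed number of workers drain the channel, so at most `workers`
    // files are being processed (and open) at any one time.
    val workers = Runtime.getRuntime().availableProcessors()
    repeat(workers) {
        launch(Dispatchers.IO) {
            for (file in files) {
                ImageProcessor.fromFile(file) // ImageProcessor is from the question
            }
        }
    }
}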
Answered 2019-09-22T00:52:27.783

To limit the parallelism to some value, there is the limitedParallelism function, available since version 1.6.0 of the kotlinx.coroutines library. It can be called on a CoroutineDispatcher object. So to limit the threads used for parallel execution, we can write something like:

val parallelismLimit = Runtime.getRuntime().availableProcessors()
val limitedDispatcher = Dispatchers.Default.limitedParallelism(parallelismLimit)
val scope = CoroutineScope(limitedDispatcher) // we can set limitedDispatcher for the whole scope

scope.launch { // or we can set limitedDispatcher for a coroutine launch(limitedDispatcher)
    File("/Users/me/Pictures/").walkTopDown()
        .onFail { file, ex -> println("ERROR: $file caused $ex") }
        .filter { ... only big images... }
        .map { file ->
            async {
                ImageProcessor.fromFile(file)
            }
        }.toList().awaitAll()
}

ImageProcessor.fromFile(file) will be executed in parallel using at most parallelismLimit threads.
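
Unlike a fixed thread pool, limitedParallelism does not create any new threads; it returns a view of the original dispatcher that caps how many of its threads may be in use at once, so creating one view per use case is cheap.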

Answered 2021-12-31T14:05:23.160

Why not use the asFlow() operator and then flatMapMerge?

someCoroutineScope.launch(Dispatchers.Default) {
    File("/Users/me/Pictures/").walkTopDown()
        .asFlow()
        .filter { ... only big images... }
        .flatMapMerge(concurrencyLimit) { file ->
            flow {
emit(runInterruptible { ImageProcessor.fromFile(file) })
            }
        }.catch { ... }
        .collect()
}

That way you can limit the number of simultaneously open files while still processing them concurrently.
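
(Note: if concurrencyLimit is omitted, flatMapMerge falls back to its default concurrency, DEFAULT_CONCURRENCY, which is 16 unless overridden by a system property, so an explicit limit is still the safer choice for the "too many open files" problem.)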

Answered 2021-12-16T13:29:02.190