
I'm working on a project that requires me to extract a large amount of information from some files. The format and most of the details of the project aren't relevant to my question. What I mostly don't understand is how to share this dictionary with all the processes in a process pool.

Here is my code (variable names changed and most of the code removed; you only need to see this part):

import json

import multiprocessing
from multiprocessing import Pool, Lock, Manager

import glob
import os

def record(thing, map):

    with mutex:
        if thing in map:
            map[thing] += 1
        else:
            map[thing] = 1


def getThing(file, n, map):
    #do stuff
    thing = file.read()
    record(thing, map)


def init(l):
    global mutex
    mutex = l

def main():

    #create a manager to manage shared dictionaries
    manager = Manager()

    #get the list of filenames to be analyzed
    fileSet1=glob.glob("filesSet1/*")
    fileSet2=glob.glob("fileSet2/*")

    #create a global mutex for the processes to share
    l = Lock()   

    map = manager.dict()
    #create a process pool, give it the global mutex, and max cpu count-1 (manager is its own process)
    with Pool(processes=multiprocessing.cpu_count()-1, initializer=init, initargs=(l,)) as pool:
        pool.map(lambda file: getThing(file, 2, map), fileSet1) #This line is what I need help with

main()

As far as I understand it, that lambda function should work. The line I need help with is: pool.map(lambda file: getThing(file, 2, map), fileSet1). It gives me an error there. The error is "AttributeError: Can't pickle local object 'main.<locals>.<lambda>'".

Any help would be appreciated!


1 Answer


To execute tasks in parallel, multiprocessing "pickles" the task function. In your case, this "task function" is lambda file: getThing(file, 2, map).

Unfortunately, lambda functions cannot be pickled in Python by default (see also this Stack Overflow post). Let me illustrate the problem with a minimal example:

import multiprocessing

l = range(12)

def not_a_lambda(e):
    print(e)

def main():
    with multiprocessing.Pool() as pool:
        pool.map(not_a_lambda, l)        # Case (A)
        pool.map(lambda e: print(e), l)  # Case (B)

main()

In case (A) we have a proper, free function that can be pickled, so the pool.map operation works. In case (B) we have a lambda function, and a crash occurs.
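You can observe the same failure without a pool at all, by pickling the two kinds of function directly. A small self-contained sketch (the function name here is just for illustration): pickle serializes a module-level function as a reference to its qualified name, while a lambda has no importable name, so pickling it raises an error.

```python
import pickle

def free_function(e):
    # module-level function: pickle stores a reference to its qualified name
    return e

# works: the name "free_function" can be looked up again at unpickling time
payload = pickle.dumps(free_function)

# fails: a lambda has no importable name to store a reference to
try:
    pickle.dumps(lambda e: e)
except (pickle.PicklingError, AttributeError) as err:
    print("cannot pickle:", err)
```

This is exactly what happens inside pool.map before any worker process even starts, which is why the traceback points at the pickling step rather than at your task code.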

One possible solution is to use a proper module-scope function (like my not_a_lambda). Another solution is to rely on a third-party module such as dill to extend the pickling capabilities; in that case you would use, e.g., pathos as a drop-in replacement for the regular multiprocessing module. Finally, you can create a Worker class that collects your shared state as members. This could look like this:

import multiprocessing

class Worker:
    def __init__(self, mutex, map):
        self.mutex = mutex
        self.map = map

    def __call__(self, e):
        print("Hello from Worker e=%r" % (e, ))
        with self.mutex:
            k, v = e
            self.map[k] = v
        print("Goodbye from Worker e=%r" % (e, ))

def main():
    manager = multiprocessing.Manager()
    mutex = manager.Lock()
    map = manager.dict()

    # each worker process receives a pickled copy of this Worker instance;
    # its mutex and map members are manager proxies, so all copies still
    # operate on the same underlying lock and dictionary. Make sure you
    # don't access / modify the shared state without locking the mutex.
    worker = Worker(mutex, map)

    items = [("a", 1), ("b", 2), ("c", 3)]
    with multiprocessing.Pool() as pool:
        pool.map(worker, items)

main()
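For completeness, the first option (a module-scope function) can also carry the extra arguments from your question via functools.partial, since a partial object pickles fine as long as the function it wraps is picklable. A minimal sketch of that route, with hypothetical names, and with a manager lock passed as an argument instead of the initializer trick (an assumption for brevity; ordinary multiprocessing.Lock objects cannot be passed this way):

```python
import multiprocessing
from functools import partial

def record(thing, n, map, mutex):
    # module-level function: picklable by its qualified name
    with mutex:
        map[thing] = map.get(thing, 0) + n

def main():
    manager = multiprocessing.Manager()
    mutex = manager.Lock()   # manager locks are proxies and can be pickled
    map = manager.dict()

    things = ["a", "b", "a", "c", "a", "b"]
    # partial binds the shared state; the result pickles because record does
    task = partial(record, n=1, map=map, mutex=mutex)

    with multiprocessing.Pool() as pool:
        pool.map(task, things)

    print(dict(map))  # per-thing counts, e.g. 'a' -> 3, 'b' -> 2, 'c' -> 1

if __name__ == "__main__":
    main()
```

This keeps your original structure (a free record function plus shared manager state) while avoiding both the lambda and the global-mutex initializer.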
answered 2019-03-02T18:22:44.900