I am trying to find the length of a Dask dataframe column with len(dataframe[column]), but every time I try to do this I get an error:

distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
distributed.nanny - WARNING - Restarting worker
Traceback (most recent call last):
  File "C:\Users\thakneh\AppData\Local\Continuum\anaconda3\lib\multiprocessing\queues.py", line 238, in _feed
    send_bytes(obj)
  File "C:\Users\thakneh\AppData\Local\Continuum\anaconda3\lib\multiprocessing\connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "C:\Users\thakneh\AppData\Local\Continuum\anaconda3\lib\multiprocessing\connection.py", line 280, in _send_bytes
    ov, err = _winapi.WriteFile(self._handle, buf, overlapped=True)
BrokenPipeError: [WinError 232] The pipe is being closed
distributed.nanny - ERROR - Nanny failed to start process
Traceback (most recent call last):
  File "C:\Users\thakneh\AppData\Local\Continuum\anaconda3\lib\site-packages\distributed\nanny.py", line 575, in start
    await self.process.start()
  File "C:\Users\thakneh\AppData\Local\Continuum\anaconda3\lib\site-packages\distributed\process.py", line 34, in _call_and_set_future
    res = func(*args, **kwargs)
  File "C:\Users\thakneh\AppData\Local\Continuum\anaconda3\lib\site-packages\distributed\process.py", line 202, in _start
    process.start()
  File "C:\Users\thakneh\AppData\Local\Continuum\anaconda3\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\thakneh\AppData\Local\Continuum\anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\thakneh\AppData\Local\Continuum\anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\thakneh\AppData\Local\Continuum\anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\thakneh\AppData\Local\Continuum\anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "C:\Users\thakneh\AppData\Local\Continuum\anaconda3\lib\multiprocessing\connection.py", line 948, in reduce_pipe_connection
    dh = reduction.DupHandle(conn.fileno(), access)
  File "C:\Users\thakneh\AppData\Local\Continuum\anaconda3\lib\multiprocessing\connection.py", line 170, in fileno
    self._check_closed()
  File "C:\Users\thakneh\AppData\Local\Continuum\anaconda3\lib\multiprocessing\connection.py", line 136, in _check_closed
    raise OSError("handle is closed")
OSError: handle is closed
distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting

My Dask dataframe has 10 million rows. Is there any way to resolve this error?
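For context, roughly what I am doing looks like this (the file path and column name here are placeholders, and I am running a local distributed cluster, which is where the distributed.nanny log lines come from):

from dask.distributed import Client
import dask.dataframe as dd

client = Client()  # local cluster; workers are supervised by nannies
dataframe = dd.read_csv("data/*.csv")  # placeholder source
print(len(dataframe["column"]))  # this is the call that makes the workers restart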

1 Answer

I don't think finding the length of a column will be that simple, because Dask may be building the dataframe from several sources. It is similar to the reason you can get a dataframe's .head() cheaply, but .tail() takes some extra work.

Since you are working with such a large dataframe, I believe that calling len() on it makes Python load the whole thing into memory.

I have two suggestions, but I am not entirely sure they won't trigger the same exception.

Using pipe

Let's see whether this works: you can try calling pipe on your column and passing len to it. Maybe this works:

dataframe["column"].pipe(len)

For reference, here is the pipe documentation.
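(One note on what pipe actually does: series.pipe(len) simply calls len(series), so under the hood it is the same call you already tried. I am leaving the suggestion in anyway, but I would not necessarily expect a different memory profile from it.)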

Partitions

One thing I think might help is splitting the column into partitioned chunks, which could keep memory usage low. The only catch is that you will have to do some guesswork about the size of those partitions.

The other thing you have to keep track of is the length of each partition, which can get a bit messy; I feel there must be a better way to do this (see the map_partitions sketch after the docs link below).

length = 0

# len() on a slice of partitions only computes those partitions
length += len(dataframe["column"].partitions[:10000])
length += len(dataframe["column"].partitions[10000:20000])

Of course, you can use a loop to make the code cleaner.
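A rough sketch of that loop version, accumulating one partition's length at a time:

length = 0
for i in range(dataframe.npartitions):
    # each len() call only has to materialize a single partition
    length += len(dataframe["column"].partitions[i])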

For reference, here is the documentation for dataframe.partitions.
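On the "better way" I mentioned: a common Dask idiom (I have not tested it on a dataframe of your size) is to let map_partitions count each partition lazily, so that only small per-partition integers ever travel back to your process:

# each partition collapses to a single int before anything is gathered
counts = dataframe["column"].map_partitions(len)
total = counts.compute().sum()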

Please let me know if any of these work; I hope this helps.

answered 2021-01-03T17:50:24.937