
I converted an example dataframe to a .arrow file using pyarrow:

import numpy as np
import pandas as pd
import pyarrow as pa

df = pd.DataFrame({"a": [10, 2, 3]})
df['a'] = pd.to_numeric(df['a'], errors='coerce')
table = pa.Table.from_pandas(df)
writer = pa.RecordBatchFileWriter('test.arrow', table.schema)
writer.write_table(table)
writer.close()

This creates a file test.arrow.

df.info()
    <class 'pandas.core.frame.DataFrame'>
    RangeIndex: 3 entries, 0 to 2
    Data columns (total 1 columns):
    a    3 non-null int64
    dtypes: int64(1)
    memory usage: 104.0 bytes
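
As a sanity check, the file can be read back with pyarrow itself via pa.ipc.open_file, and it shows all three rows:

reader = pa.ipc.open_file('test.arrow')
print(reader.read_all().to_pandas())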

Then in NodeJS, I load the file with ArrowJS: https://arrow.apache.org/docs/js/

const fs = require('fs');
const arrow = require('apache-arrow');

const data = fs.readFileSync('test.arrow');
const table = arrow.Table.from(data);

console.log(table.schema.fields.map(f => f.name));
console.log(table.count());
console.log(table.get(0));

This prints something like:

[ 'a' ]
0
null

I expected this table to have a length of 3, and table.get(0) to give the first row rather than null.

Here is what the table looks like from console.log(table._schema):

[ Int_ [Int] { isSigned: true, bitWidth: 16 } ]
Schema {
  fields:
   [ Field { name: 'a', type: [Int_], nullable: true, metadata: Map {} } ],
  metadata:
   Map {
     'pandas' => '{"index_columns": [{"kind": "range", "name": null, "start": 0, "stop": 5, "step": 1}], "column_indexes": [{"name": null, "field_name": null, "pandas_type": "unicode", "numpy_type": "object", "metadata": {"encoding": "UTF-8"}}], "columns": [{"name": "a", "field_name": "a", "pandas_type": "int16", "numpy_type": "int16", "metadata": null}], "creator": {"library": "pyarrow", "version": "0.15.0"}, "pandas_version": "0.22.0"}' },
  dictionaries: Map {} }

Any idea why it isn't reading the data as expected?


1 Answer


As Wes mentioned in the Apache JIRA, this is caused by a format change in Arrow 0.15: all Arrow libraries, not just PyArrow, run into this problem when they send IPC files to older versions of Arrow. The fix is to upgrade ArrowJS to 0.15.0 so that you can round-trip between the other Arrow libraries and the JS library.
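Assuming you manage the JS dependency with npm (the question loads it via require('apache-arrow')), the upgrade would be:

$ npm install apache-arrow@0.15.0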

If you can't upgrade for some reason, you can use one of the following workarounds instead:

Pass use_legacy_format=True as a kwarg to RecordBatchFileWriter:

with pa.RecordBatchFileWriter('file.arrow', table.schema, use_legacy_format=True) as writer:
    writer.write_table(table)

Set the environment variable ARROW_PRE_0_15_IPC_FORMAT to 1:

$ export ARROW_PRE_0_15_IPC_FORMAT=1
$ python
>>> import pyarrow as pa
>>> table = pa.Table.from_pydict({"a": [1, 2, 3], "b": [4, 5, 6]})
>>> with pa.RecordBatchFileWriter('file.arrow', table.schema) as writer:
...   writer.write_table(table)
...
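
Setting the variable from inside Python before the writer is created should also work, since pyarrow reads the flag from the process environment. A sketch, not part of the original answer:

import os
os.environ['ARROW_PRE_0_15_IPC_FORMAT'] = '1'  # must be set before the writer is created

import pyarrow as pa

table = pa.Table.from_pydict({"a": [1, 2, 3], "b": [4, 5, 6]})
with pa.RecordBatchFileWriter('file.arrow', table.schema) as writer:
    writer.write_table(table)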

Or downgrade PyArrow to 0.14.x:

$ conda install -c conda-forge pyarrow=0.14.1
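
If you install with pip instead of conda, the equivalent pin is:

$ pip install pyarrow==0.14.1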