
I used Petastorm's rowgroup indexer to build an index over a column of a petastorm dataset. Afterwards, the size of the metadata file increased significantly, and PyArrow can no longer load the dataset, failing with this error:

OSError: Couldn't deserialize thrift: TProtocolException: Exceeded size limit

This is the code I use to load the dataset:

from pyarrow import parquet as pq

dataset_path = "path/to/dataset/"

dataset = pq.ParquetDataset(path_or_paths=dataset_path)
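
A possible mitigation on the reading side: newer pyarrow releases expose thrift_string_size_limit and thrift_container_size_limit keyword arguments that raise the deserialization cap the error refers to. This is a minimal sketch, assuming a pyarrow version whose ParquetDataset accepts these arguments (check help(pq.ParquetDataset) for your version); the limit values are illustrative, not recommendations:

from pyarrow import parquet as pq

dataset_path = "path/to/dataset/"

# Raise the thrift limits behind "Exceeded size limit". Both keyword
# arguments are assumptions about the installed pyarrow version.
dataset = pq.ParquetDataset(
    path_or_paths=dataset_path,
    thrift_string_size_limit=1 << 30,     # bytes of string data in the footer
    thrift_container_size_limit=1 << 24,  # elements per thrift container
)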

The code used to index the materialized petastorm dataset:

from pyspark.sql import SparkSession
from petastorm.etl.rowgroup_indexers import SingleFieldIndexer
from petastorm.etl.rowgroup_indexing import build_rowgroup_index

dataset_url = "file:///path/to/dataset"

spark = SparkSession.builder.appName("demo").getOrCreate()

# Build a single-field index over COLUMN1; petastorm serializes the
# index into the dataset-level Parquet metadata, which is why the
# metadata file grows.
indexer = [SingleFieldIndexer(index_name="my_index", index_field="COLUMN1")]

build_rowgroup_index(dataset_url, spark.sparkContext, indexer)
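
To see how much the index inflated the footer, the summary metadata file can be inspected with pyarrow. This is a minimal sketch, assuming petastorm wrote the index into the dataset's _metadata summary file (depending on the petastorm version it may be _common_metadata instead); if the footer already exceeds the thrift limit, pq.read_metadata fails with the same error, and os.path.getsize is the fallback:

import os
from pyarrow import parquet as pq

metadata_path = os.path.join("path/to/dataset", "_metadata")

# Footer size on disk (works even when thrift parsing fails).
print(os.path.getsize(metadata_path))

# Parse the footer: serialized_size is the thrift footer size in bytes,
# and the key-value metadata is where the serialized index lives.
meta = pq.read_metadata(metadata_path)
print(meta.serialized_size)
print(list((meta.metadata or {}).keys()))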