
I have about 5,000 .gzip files (each about 1 MB). Each of these files contains data in jsonlines format. Here is what it looks like:

{"category_id":39,"app_id":12731}
{"category_id":45,"app_id":12713}
{"category_id":6014,"app_id":13567}

I want to parse these files and convert them into a pandas DataFrame. Is there a way to speed this process up? Here is my code, but it's a bit slow (about 0.5 s per file):

import pandas as pd
import jsonlines
import gzip
import os
import io


path = 'data/apps/'
files = os.listdir(path)

result = []
for n, file in enumerate(files):
    print(n, file)
    with open(f'{path}/{file}', 'rb') as f:
        data = f.read()

    unzipped_data = gzip.decompress(data)

    decoded_data = io.BytesIO(unzipped_data)
    reader = jsonlines.Reader(decoded_data)

    for line in reader:
        if line['category_id'] == 6014:
            result.append(line)


df = pd.DataFrame(result)

1 Answer


This should let you read each file line by line without loading the whole decompressed file into memory first.

import pandas as pd
import json
import gzip
import os


path = 'data/apps/'
files = os.listdir(path)

result = []
for n, file in enumerate(files):
    print(n, file)
    with gzip.open(f'{path}/{file}') as f:
        for line in f:
            data = json.loads(line)
            if data['category_id'] == 6014:
                result.append(data)


df = pd.DataFrame(result)
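If this is still too slow, most of the remaining time is spent decompressing and parsing 5,000 files one after another, and that work parallelizes naturally per file. Below is a sketch using the standard library's `multiprocessing.Pool`; the directory layout and the `category_id == 6014` filter come from the code above, while the function names `parse_file` and `load_all` are my own:

```python
import gzip
import json
import os
from multiprocessing import Pool

import pandas as pd


def parse_file(filepath):
    # Stream one gzip jsonlines file, keeping only category 6014 rows.
    rows = []
    with gzip.open(filepath) as f:
        for line in f:
            data = json.loads(line)
            if data['category_id'] == 6014:
                rows.append(data)
    return rows


def load_all(path):
    # Parse every file under `path` in parallel, one worker task per file.
    filepaths = [os.path.join(path, f) for f in os.listdir(path)]
    with Pool() as pool:
        chunks = pool.map(parse_file, filepaths)
    # Flatten the per-file lists of dicts into a single DataFrame.
    return pd.DataFrame([row for chunk in chunks for row in chunk])
```

Usage would be `df = load_all('data/apps/')`. Since the work is CPU-bound (gzip decompression plus JSON parsing), processes rather than threads are the right tool here; the speedup should scale roughly with the number of cores, minus the cost of pickling the matching rows back to the parent process.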
answered 2020-03-23T14:38:54