
I am parsing data from a large 800 GB csv file. For each row of data, I save it as a pandas DataFrame.

readcsvfile = csv.reader(csvfile)
for i, line in enumerate(readcsvfile):
    # parse line into a dictionary of csv_field:value pairs, "dictionary_line"
    # save as a pandas DataFrame
    df = pd.DataFrame(dictionary_line, index=[i])

Now, I want to save this in HDF5 format and query the h5 as if it were the entire csv file.

import pandas as pd
store = pd.HDFStore("pathname/file.h5")

hdf5_key = "single_key"

csv_columns = ["COL1", "COL2", "COL3", "COL4",..., "COL55"]

So far, my approach has been:

import pandas as pd
store = pd.HDFStore("pathname/file.h5")

hdf5_key = "single_key"

csv_columns = ["COL1", "COL2", "COL3", "COL4",..., "COL55"]
readcsvfile = csv.reader(csvfile)
for i, line in enumerate(readcsvfile):
    # parse line into a dictionary of csv_field:value pairs, "dictionary_line"
    # save as a pandas DataFrame
    df = pd.DataFrame(dictionary_line, index=[i])
    store.append(hdf5_key, df, data_columns=csv_columns, index=False)

That is, I try to append each DataFrame df to the HDF5 file under a single key. However, this fails with:

  Attribute 'superblocksize' does not exist in node: '/hdf5_key/_i_table/index'

So, I could instead try saving everything into one pandas DataFrame first, i.e.

import pandas as pd
store = pd.HDFStore("pathname/file.h5")

hdf5_key = "single_key"

csv_columns = ["COL1", "COL2", "COL3", "COL4",..., "COL55"]
readcsvfile = csv.reader(csvfile)
total_df = pd.DataFrame()
for i, line in enumerate(readcsvfile):
    # parse line into a dictionary of csv_field:value pairs, "dictionary_line"
    # save as a pandas DataFrame
    df = pd.DataFrame(dictionary_line, index=[i])
    total_df = pd.concat([total_df, df])   # accumulate one big DataFrame

and then store it in HDF5 format:

    store.append(hdf5_key, total_df, data_columns=csv_columns, index=False)

However, I don't think I have enough RAM/storage to hold all the csv rows in total_df before writing it out to HDF5.

So, how can I append each "single-row" df to the HDF5 file so that it ends up as one big DataFrame (like the original csv)?

EDIT: Here is a concrete example of a csv file with mixed data types:

 order    start    end    value    
 1        1342    1357    category1
 1        1459    1489    category7
 1        1572    1601    category23
 1        1587    1599    category2
 1        1591    1639    category1
 ....
 15        792     813    category13
 15        892     913    category5
 ....

1 Answer


Your code should work. You can try the following code:

import pandas as pd
import numpy as np

store = pd.HDFStore("file.h5", "w")
hdf5_key = "single_key"
csv_columns = ["COL%d" % i for i in range(1, 56)]
for i in range(10):
    df = pd.DataFrame(np.random.randn(1, len(csv_columns)), columns=csv_columns)
    store.append(hdf5_key, df, data_columns=csv_columns, index=False)
store.close()

If this code works, then the problem is with your data.
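One data-related pitfall worth ruling out, given the string columns in the example table above: string columns in an appendable table have a fixed maximum width, set by the first append unless you reserve space with min_itemsize. A hedged sketch (the width of 20 is an assumption about the longest category name):

```python
import pandas as pd

store = pd.HDFStore("file.h5", "w")

# Mixed-dtype rows shaped like the example table in the question.
df = pd.DataFrame({"order": [1, 1],
                   "start": [1342, 1459],
                   "end":   [1357, 1489],
                   "value": ["category1", "category7"]})

# Reserve room for the longest string that will ever be appended to
# "value"; a later append with a longer string would otherwise raise.
store.append("single_key", df, data_columns=True, index=False,
             min_itemsize={"value": 20})
store.close()
```

Later appends to the same key must keep the same columns and compatible dtypes, which is another common cause of append failures on real data.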

Answered 2016-10-10T00:57:38.330