The PyTables library, and the HDFStore object (which is built on top of PyTables), both offer indexing to the user.
For PyTables alone, we create an HDF5 file as follows (from the docs):
from tables import *
class Particle(IsDescription):
    identity = StringCol(itemsize=22, dflt=" ", pos=0)  # character string
    idnumber = Int16Col(dflt=1, pos=1)                   # short integer
    speed = Float32Col(dflt=1, pos=2)                    # single-precision float
# Open a file in "w"rite mode
fileh = open_file("objecttree.h5", mode = "w")
# Get the HDF5 root group
root = fileh.root
# Create the groups
group1 = fileh.create_group(root, "group1")
group2 = fileh.create_group(root, "group2")
# Now, create an array in root group
array1 = fileh.create_array(root, "array1", ["string", "array"], "String array")
# Create a new table in group1
table1 = fileh.create_table(group1, "table1", Particle)
# Get the record object associated with the table:
row = table1.row
# Fill the table with 10 records
for i in range(10):
    # First, assign the values to the Particle record
    row['identity'] = 'This is particle: %2d' % (i)
    row['idnumber'] = i
    row['speed'] = i * 2.
    # This injects the record values
    row.append()
# Flush the table buffers
table1.flush()
# Finally, close the file (this also will flush all the remaining buffers!)
fileh.close()
The user indexes a column with Column.create_index(), for example:
indexrows = table.cols.var1.create_index()
indexrows = table.cols.var2.create_index()
indexrows = table.cols.var3.create_index()
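My guess is that, for the Particle table created above, the equivalent calls would look something like the sketch below (the var1/var2/var3 names clearly belong to some other example table, and this is an assumption on my part rather than something I have confirmed in the docs):

# Assumed sketch: index the 'speed' and 'identity' columns of table1, then query them
indexrows = table1.cols.speed.create_index()
indexrows = table1.cols.identity.create_index()
fast_rows = table1.read_where('speed > 4.0')  # in-kernel search that should be able to use the index on 'speed'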
For the latter case, the user instantiates an HDFStore object and then selects the columns to be indexed:
store = HDFStore('file1.hd5')
key = "key_name"
index_columns = ["column1", "column2"]
store.append(key,... data_columns=index_columns)
Here we index two columns, which should optimize our searches.
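For context, the kind of search those data columns are meant to speed up would presumably look like the following (df, the column contents, and the filter values are placeholders of mine, not taken from the snippet above):

import pandas as pd

# Toy frame whose column names match the index_columns used above
df = pd.DataFrame({"column1": range(10), "column2": list("aabbccddee")})
store = pd.HDFStore("file1.hd5")
store.append("key_name", df, data_columns=["column1", "column2"])
# A where-clause select can then filter on the data columns on disk,
# instead of loading the whole table into memory first
subset = store.select("key_name", where="column1 > 5 & column2 == 'e'")
store.close()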
Two questions:
(1) It is not actually clear to me how the index (or indexes) gets set up in the PyTables example (the first one). No columns named var1/var2/var3 are defined above; as far as I can tell, the table has three fields: identity, idnumber, and speed. Suppose I want an index on speed and on identity. How is that done?
(2) Are there any benchmarks comparing pandas-based indexing with PyTables-based indexing? Is one faster than the other? Does one use more disk space than the other (i.e., produce a larger HDF5 file)?
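Absent published numbers, my fallback would be to measure it directly, roughly along these lines (the file names, row count, and query threshold are all made up for illustration, and the timings will obviously depend on chunk sizes, compression, and so on):

import os
import time

import numpy as np
import pandas as pd
import tables

n = 1_000_000
values = np.random.rand(n)

# pandas / HDFStore side: 'speed' stored as an indexed data column
df = pd.DataFrame({"speed": values})
with pd.HDFStore("bench_pandas.h5", mode="w") as store:
    store.append("data", df, data_columns=["speed"])
t0 = time.perf_counter()
with pd.HDFStore("bench_pandas.h5") as store:
    hits = store.select("data", where="speed > 0.999")
print("pandas:", time.perf_counter() - t0, "s,",
      os.path.getsize("bench_pandas.h5"), "bytes on disk")

# plain PyTables side: same column, indexed with Column.create_index()
with tables.open_file("bench_pytables.h5", mode="w") as f:
    tbl = f.create_table("/", "data", {"speed": tables.Float64Col()})
    rec = np.empty(n, dtype=[("speed", "f8")])
    rec["speed"] = values
    tbl.append(rec)
    tbl.cols.speed.create_index()
t0 = time.perf_counter()
with tables.open_file("bench_pytables.h5") as f:
    hits = f.root.data.read_where("speed > 0.999")
print("PyTables:", time.perf_counter() - t0, "s,",
      os.path.getsize("bench_pytables.h5"), "bytes on disk")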