
I'm starting to learn how to use theano with Lasagne, and began with the MNIST example. Now I want to try my own example: I have a train.csv file in which every row starts with a 0 or 1 representing the correct answer, followed by 773 zeros and ones representing the input. I don't understand how to turn this file into the numpy arrays expected by the load_database() function. This is part of the original function for the MNIST database:

...

with gzip.open(filename, 'rb') as f:
    data = pickle_load(f, encoding='latin-1')

# The MNIST dataset we have here consists of six numpy arrays:
# Inputs and targets for the training set, validation set and test set.
X_train, y_train = data[0]
X_val, y_val = data[1]
X_test, y_test = data[2]

...

# We just return all the arrays in order, as expected in main().
# (It doesn't matter how we do this as long as we can read them again.)
return X_train, y_train, X_val, y_val, X_test, y_test

I need to get X_train (the inputs) and y_train (the first element of every row) from my csv file.

Thanks!


1 Answer


You can use numpy.genfromtxt() or numpy.loadtxt() as follows:

import numpy

# note: in scikit-learn >= 0.20 this lives in sklearn.model_selection
# (with a slightly different KFold API)
from sklearn.cross_validation import KFold

# each row: the label in column 0, then the 773 input values
Xy = numpy.genfromtxt('yourfile.csv', delimiter=",")

# the next section provides the required
# training-validation split, but
# you can do it manually too, if you want

skf = KFold(len(Xy))

# take the first fold as the train/validation split
for train_index, valid_index in skf:
    ind_train, ind_valid = train_index, valid_index
    break

Xy_train, Xy_val = Xy[ind_train], Xy[ind_valid]

# first column is the target, the remaining 773 columns are the inputs
X_train = Xy_train[:, 1:]
y_train = Xy_train[:, 0]

X_val = Xy_val[:, 1:]
y_val = Xy_val[:, 0]


...

# you can simply ignore the test sets in your case
return X_train, y_train, X_val, y_val #, X_test, y_test

In the snippet above we skipped returning the test set.

Now you can import the dataset into your main module or script or whatever, but be aware that you also have to remove all the test-related parts from it.
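For example, the call site in your script could then look roughly like this (a minimal sketch; load_dataset() stands for your adapted loading function, and the shapes assume the 773-column layout described above):

# hypothetical usage after dropping the test arrays
X_train, y_train, X_val, y_val = load_dataset()

print(X_train.shape)   # (n_training_rows, 773)
print(y_train.shape)   # (n_training_rows,)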

Alternatively, you can simply pass the valid sets as the test set:

# you can simply pass the valid sets as `test` set
return X_train, y_train, X_val, y_val, X_val, y_val

In the latter case we don't have to care whether the main module refers to the test set anywhere, but as scores (if any) you will get the validation scores twice, i.e. also reported as test scores.

Note: I don't know which MNIST example that one is, but after you have your data prepared as above, you will probably also have to make further modifications in your training module to fit it to your data, e.g. the input shape of the data and the output shape, i.e. the number of classes; in your case the former is 773 and the latter is 2.
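For illustration only, a minimal Lasagne network definition adapted to those shapes could look roughly like the sketch below (the hidden layer size and the function name are assumptions, not taken from the example). Depending on the example you started from, you may also need the input variable to be T.matrix('inputs') instead of a 4D tensor, and the labels cast to integers (e.g. y_train.astype(numpy.int32)) for the categorical cross-entropy loss.

import lasagne

def build_mlp(input_var=None):
    # each row has 773 binary features, so the input shape is (batch_size, 773)
    l_in = lasagne.layers.InputLayer(shape=(None, 773), input_var=input_var)
    # one hidden layer; 100 units is an arbitrary choice for this sketch
    l_hid = lasagne.layers.DenseLayer(
        l_in, num_units=100,
        nonlinearity=lasagne.nonlinearities.rectify)
    # two softmax output units, one per class (labels 0 and 1)
    l_out = lasagne.layers.DenseLayer(
        l_hid, num_units=2,
        nonlinearity=lasagne.nonlinearities.softmax)
    return l_out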

Answered 2015-08-02T17:17:43.177