I'd like to try a SKFLOW recurrent neural network on some time-series data with real-valued features, as a binary classification problem. Each row of my data contains 57 features (variables), and I want to look at the previous 2 samples and the next 2 samples when making the prediction for each row.
My data looks like this:
sample -2: f1, f2, f3, f4, ... f57
sample -1: f1, f2, f3, f4, ... f57
current sample: f1, f2, f3, f4, ... f57
sample +1: f1, f2, f3, f4, ... f57
sample +2: f1, f2, f3, f4, ... f57
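In case it helps, here is roughly how I build the windowed array with numpy; this is just a sketch, and raw_data, labels, and WINDOW are my own placeholder names:

import numpy as np

WINDOW = 5          # samples -2, -1, current, +1, +2
N_FEATURES = 57

# raw_data: [n_rows, 57] array of real-valued features
# labels:   [n_rows] array of 0/1 targets
windows = np.stack([raw_data[i:i + WINDOW]
                    for i in range(len(raw_data) - WINDOW + 1)])
# Each window gets the label of its centre ("current") row.
window_labels = labels[WINDOW // 2 : len(labels) - WINDOW // 2]

For reference, this is the skflow text classification example I'm starting from: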
import numpy as np
import skflow

MAX_DOCUMENT_LENGTH = 10

vocab_processor = skflow.preprocessing.VocabularyProcessor(MAX_DOCUMENT_LENGTH)
X_train = np.array(list(vocab_processor.fit_transform(X_train)))
X_test = np.array(list(vocab_processor.transform(X_test)))

n_words = len(vocab_processor.vocabulary_)
print('Total words: %d' % n_words)
### Models
EMBEDDING_SIZE = 50
# Customized function to transform batched X into embeddings
def input_op_fn(X):
    # Convert indexes of words into embeddings.
    # This creates an embeddings matrix of [n_words, EMBEDDING_SIZE] and then
    # maps word indexes of the sequence into [batch_size, sequence_length,
    # EMBEDDING_SIZE].
    word_vectors = skflow.ops.categorical_variable(X, n_classes=n_words,
        embedding_size=EMBEDDING_SIZE, name='words')
    # Split into a list of embeddings per word, while removing doc length dim.
    # word_list results in a list of tensors [batch_size, EMBEDDING_SIZE].
    word_list = skflow.ops.split_squeeze(1, MAX_DOCUMENT_LENGTH, word_vectors)
    return word_list

# Single direction GRU with a single layer
classifier = skflow.TensorFlowRNNClassifier(rnn_size=EMBEDDING_SIZE,
    n_classes=15, cell_type='gru', input_op_fn=input_op_fn,
    num_layers=1, bidirectional=False, sequence_length=None,
    steps=1000, optimizer='Adam', learning_rate=0.01, continue_training=True)
It seems like I should just be able to modify input_op_fn to make this work, but I'm not sure how to correctly convert my numpy array into the tensors that skflow.TensorFlowRNNClassifier expects. This is what it looks like in the text classification example:
>>> word_vectors.get_shape()
TensorShape([Dimension(560000), Dimension(10), Dimension(50)])
>>> len(word_list)
10
If I'm interpreting the text problem correctly, then for my problem it would be TensorShape([Dimension(# rows), Dimension(57), Dimension(3)]).
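What I had in mind (untested; windows_train, windows_test, and y_train are just placeholders from the windowing step above) is to flatten each [5, 57] window into a single row for fit(), and then undo the flattening inside input_op_fn instead of doing the embedding lookup:

import numpy as np
import tensorflow as tf
import skflow

N_TIMESTEPS = 5     # samples -2, -1, current, +1, +2
N_FEATURES = 57

# Flatten each [5, 57] window into one row so skflow gets a 2-D float array.
X_train = windows_train.reshape([-1, N_TIMESTEPS * N_FEATURES]).astype(np.float32)
X_test = windows_test.reshape([-1, N_TIMESTEPS * N_FEATURES]).astype(np.float32)

def input_op_fn(X):
    # Undo the flattening: [batch_size, 5 * 57] -> [batch_size, 5, 57].
    series = tf.reshape(X, [-1, N_TIMESTEPS, N_FEATURES])
    # Split along the time dimension into a list of N_TIMESTEPS tensors,
    # each [batch_size, N_FEATURES], which is what the RNN cell expects.
    return skflow.ops.split_squeeze(1, N_TIMESTEPS, series)

classifier = skflow.TensorFlowRNNClassifier(rnn_size=N_FEATURES,
    n_classes=2, cell_type='gru', input_op_fn=input_op_fn,
    num_layers=1, bidirectional=False, sequence_length=None,
    steps=1000, optimizer='Adam', learning_rate=0.01, continue_training=True)

classifier.fit(X_train, y_train)

I'm not sure whether rnn_size should have anything to do with the number of features, so that value is just a guess. Is this the right way to hand real-valued sequences to TensorFlowRNNClassifier, or is there a better way to write input_op_fn for this case?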