
This is a piece of code I got from GitHub for a Hierarchical Attention Network; it was originally written for Keras 1.2.2. I now have to change it so that it compiles under Keras 2.0.5, but it produces the error message below and I cannot resolve it.

The original code is as follows:

MAX_SENT_LENGTH = 100
MAX_SENTS = 20 
MAX_NB_WORDS = 276176
EMBEDDING_DIM = 128
VALIDATION_SPLIT = 0.1
# Feed the data
# Here you have source data

x_train = np.load('./data/X_full_train_data.npy')
y_train = np.load('./data/X_full_train_labels.npy')
x_val = np.load('./data/X_full_test_data.npy')
y_val = np.load('./data/X_full_test_labels.npy')

np.random.seed(10)
shuffle_indices = np.random.permutation(np.arange(len(y_train)))
x_train = x_train[shuffle_indices]
y_train = y_train[shuffle_indices]

shuffle_indices = np.random.permutation(np.arange(len(y_val)))
x_val = x_train[shuffle_indices]
y_val = y_train[shuffle_indices]



with open("./data/W.npy", "rb") as fp:
    embedding_weights = np.load(fp)


# here you feed the embedding matrix
embedding_layer = Embedding(MAX_NB_WORDS,
                        EMBEDDING_DIM,
                        weights=[embedding_weights],
                        input_length=MAX_SENT_LENGTH,
                        trainable=True)
# building the Hierarchical Attention Network

class AttLayer(Layer):

    def __init__(self, **kwargs):
        self.init = initializers.get('normal')

        super(AttLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        assert len(input_shape)==3
        self.W = self.init((input_shape[-1],))
        self.trainable_weights = [self.W]
        super(AttLayer, self).build(input_shape)  

    def call(self, x, mask=None):

        eij = K.tanh(K.dot(x, self.W))

        ai = K.exp(eij)
        weights = ai/K.sum(ai, axis=1).dimshuffle(0,'x')
        weighted_input = x*weights.dimshuffle(0,1,'x')
        ret = weighted_input.sum(axis=1)
        return ret

    #def get_output_shape_for(self, input_shape):
    def compute_output_shape(self,input_shape):

        return (input_shape[0], input_shape[-1])



sentence_input = Input(shape=(MAX_SENT_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sentence_input)
l_lstm = Bidirectional(GRU(100, return_sequences=True))(embedded_sequences)

l_dense = TimeDistributed(Dense(200))(l_lstm)
l_att = AttLayer()(l_lstm)
sentEncoder = Model(sentence_input, l_att)
review_input = Input(shape=(MAX_SENTS,MAX_SENT_LENGTH), dtype='int32')
review_encoder = TimeDistributed(sentEncoder)(review_input)
l_lstm_sent = Bidirectional(GRU(100, return_sequences=True))(review_encoder)
l_dense_sent = TimeDistributed(Dense(200))(l_lstm_sent)
l_att_sent = AttLayer()(l_lstm_sent)
preds = Dense(3, activation='softmax')(l_att_sent)
model = Model(input=review_input, output=preds)
model.compile(loss='binary_crossentropy',
          optimizer='rmsprop',
          metrics=['categorical_accuracy'])

print("model fitting - Hierachical attention network")
print(model.summary())

model.fit(x_train, y_train, nb_epoch=10, batch_size=32, validation_data=(x_val,y_val))

predictions = model.predict(x_val)
score, acc = model.evaluate(x_val, y_val,batch_size=32)

Then I get the following error:

textClassifierHATT.py:235: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.

model.fit(x_train, y_train, nb_epoch=10, batch_size=32, validation_data=(x_val,y_val))


Traceback (most recent call last):
  File "textClassifierHATT.py", line 235, in <module>
    model.fit(x_train, y_train, nb_epoch=10, batch_size=32, validation_data=(x_val,y_val))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py", line 1575, in fit
    self._make_train_function()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py", line 960, in _make_train_function
    loss=self.total_loss)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/optimizers.py", line 226, in get_updates
    accumulators = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/optimizers.py", line 226, in <listcomp>
    accumulators = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/backend/theano_backend.py", line 275, in int_shape
    raise TypeError('Not a Keras tensor:', x)
TypeError: ('Not a Keras tensor:', Elemwise{add,no_inplace}.0)

The Keras model compiles successfully in model.compile(), but model.fit() raises this error, and I don't understand at all why it happens. Can anyone tell me how to modify the code so that it runs on Keras 2.0? Thank you very much.


2 Answers


The problem is in the build method of your custom layer. According to the Keras documentation, you need to create the weights with the self.add_weight function:

def build(self, input_shape):
    assert len(input_shape) == 3
    self.W = self.add_weight(name='kernel',
                             shape=(input_shape[-1],),
                             initializer='normal',
                             trainable=True)
    super(AttLayer, self).build(input_shape)

That, plus a couple of API changes:

  • The input and output arguments of Model have been renamed: Model(inputs=..., outputs=...)
  • The nb_epoch argument of fit is now called epochs (a combined sketch follows this list)
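
Putting the two fixes together, here is a minimal sketch of what the updated pieces could look like under Keras 2.0.x. Only the parts that change are shown; the rest of the model from the question (review_input, preds, x_train, etc.) is assumed to stay the same, and the Layer import path is the 2.0.x location:

import numpy as np
from keras import backend as K
from keras.engine.topology import Layer  # Layer lives here in Keras 2.0.x
from keras.models import Model


class AttLayer(Layer):
    def __init__(self, **kwargs):
        super(AttLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        assert len(input_shape) == 3
        # Register the weight via add_weight so Keras tracks it as a proper
        # trainable Keras variable; assigning a raw tensor to
        # self.trainable_weights is what triggers "Not a Keras tensor" in fit().
        self.W = self.add_weight(name='kernel',
                                 shape=(input_shape[-1],),
                                 initializer='normal',
                                 trainable=True)
        super(AttLayer, self).build(input_shape)

    # call() and compute_output_shape() can stay as in the question


# Keras 2 renames the Model constructor arguments ...
model = Model(inputs=review_input, outputs=preds)
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['categorical_accuracy'])

# ... and fit() takes epochs instead of nb_epoch
model.fit(x_train, y_train, epochs=10, batch_size=32,
          validation_data=(x_val, y_val))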
Answered on 2017-10-30

The data supplied for training is not a tensor; it is passed in as NumPy arrays. Try converting the NumPy arrays to tensors with:

import tensorflow as tf
tf.convert_to_tensor(
    value, dtype=None, dtype_hint=None, name=None
)

Then pass them to the model for training.
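
For example, applied to the arrays loaded in the question (the dtype choices here are assumptions based on how the data appears to be used, and the variable names follow the question):

import numpy as np
import tensorflow as tf

x_train = np.load('./data/X_full_train_data.npy')
y_train = np.load('./data/X_full_train_labels.npy')

# Wrap the NumPy arrays as TensorFlow tensors before handing them to fit()
x_train_t = tf.convert_to_tensor(x_train, dtype=tf.int32)    # padded word indices
y_train_t = tf.convert_to_tensor(y_train, dtype=tf.float32)  # one-hot labels

model.fit(x_train_t, y_train_t, epochs=10, batch_size=32)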

Answered on 2020-10-01