I'm trying to implement an attention model based on this one, but I don't want my model to look at only a single frame to decide that frame's attention; I want a model that looks at the whole sequence of frames. So what I'm doing is multiplying each frame by a sequence vector, which is the output of an LSTM (return_sequences=False).
These are the modified methods:
def build(self, input_shape):
    self.W = self.add_weight((input_shape[-1],),
                             initializer=self.init,
                             name='{}_W'.format(self.name))
    if self.lstm_size is None:
        self.lstm_size = input_shape[-1]
    self.vec_lstm = LSTM(self.lstm_size, return_sequences=False)
    self.vec_lstm.build(input_shape)
    self.seq_lstm = LSTM(self.lstm_size, return_sequences=True)
    self.seq_lstm.build(input_shape)
    self.trainable_weights = [self.W] + self.vec_lstm.trainable_weights + self.seq_lstm.trainable_weights
    super(Attention2, self).build(input_shape)  # Be sure to call this somewhere!
def call(self, x, mask=None):
    vec = self.vec_lstm(x)
    seq = self.seq_lstm(x)
    #
    eij = # combine seq and vec somehow?
    #
    eij = K.dot(eij, self.W)
    eij = K.tanh(eij)
    a = K.exp(eij)
    a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())
    a = K.expand_dims(a)
    weighted_input = x * a
    attention = K.sum(weighted_input, axis=1)
    return attention
Naive code that combines the two tensors would be:

eij = np.zeros((batch_size, sequence_length, frame_size))
for i, one_seq in enumerate(seq):
    for j, timestep in enumerate(one_seq):
        eij[i, j] = timestep * vec[i]
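For what it's worth, the double loop above is just broadcasting vec across the time axis, so it can be verified with a NumPy one-liner (shapes below are illustrative; the same idea maps to the Keras backend via K.expand_dims):

```python
import numpy as np

# Illustrative shapes: (batch, time, features) and (batch, features)
batch_size, sequence_length, frame_size = 2, 4, 3
rng = np.random.default_rng(0)
seq = rng.standard_normal((batch_size, sequence_length, frame_size))
vec = rng.standard_normal((batch_size, frame_size))

# Loop version from the post
eij_loop = np.zeros((batch_size, sequence_length, frame_size))
for i, one_seq in enumerate(seq):
    for j, timestep in enumerate(one_seq):
        eij_loop[i, j] = timestep * vec[i]

# Vectorized version: insert a time axis into vec and broadcast
eij = seq * vec[:, np.newaxis, :]

assert np.allclose(eij, eij_loop)
```

Both versions produce the same (batch, time, features) tensor of elementwise products.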
I'd appreciate help implementing this with the Keras backend.
Thanks!