
I am trying to rewrite a piece of tflearn code using Keras.

The goal is to combine two inputs, where one of them skips the first layer. The following code works in tflearn:

    import tensorflow as tf
    import tflearn

    # Two different inputs.
    inputs = tflearn.input_data(shape=[None, 10])
    action = tflearn.input_data(shape=[None, 10])

    #First layer used only by the inputs
    net = tflearn.fully_connected(inputs, 400, activation='relu')

    # Add the action tensor in the 2nd hidden layer
    # Use two temp layers to get the corresponding weights and biases
    t1 = tflearn.fully_connected(net, 300)
    t2 = tflearn.fully_connected(action, 300)

    # Combine the two layers using the weights from t1 and t2 and the bias from t2
    net = tflearn.activation(tf.matmul(net,t1.W) + tf.matmul(action, t2.W) + t2.b, activation='relu')

I am trying to replicate this in Keras with the following code:

    import tensorflow as tf
    from keras.models import Sequential
    from keras.layers import Dense, Merge, Activation

    # Two different inputs.
    inputs = tf.placeholder(tf.float32, [None, 10])
    action = tf.placeholder(tf.float32, [None, 10])

    #First layer used only by the inputs
    t1 = Sequential()
    t1.add(Dense(400, activation='relu', input_shape=(1,10)))

    # Add the action tensor in the 2nd hidden layer
    # Use two temp layers to get the corresponding weights and biases
    t1.add(Dense(300))

    t2 = Sequential()
    t2.add(Dense(300, input_shape=(1,10)))

    # Combine the two layers
    critnet = Sequential()
    critnet.add(Merge([t1, t2], mode='sum'))
    critnet.add(Activation('relu'))

    # Create the net using the inputs and action placeholder
    net = critnet([inputs, action])

The Keras code behaves differently. How can I combine the two layers in Keras to get the same result as in tflearn?


1 Answer


You could use a Lambda layer that takes your 2 layers as input and use keras.backend to merge them the same way. I think there is K.dot for matmul.
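For illustration, here is a minimal sketch of that idea using the Keras functional API (assuming Keras 2.x; the model name `critic` and the layer sizes just mirror the question). Since `tf.matmul(net, t1.W)` is the same computation as a bias-free Dense layer, the two branches are built as Dense layers and merged with a Lambda; `K.dot` is the backend matmul if you want the literal translation instead.

    from keras.models import Model
    from keras.layers import Input, Dense, Lambda, Activation

    # Two different inputs.
    inputs = Input(shape=(10,))
    action = Input(shape=(10,))

    # First layer used only by `inputs`.
    net = Dense(400, activation='relu')(inputs)

    # tf.matmul(net, t1.W) is equivalent to a Dense layer without bias,
    # and tf.matmul(action, t2.W) + t2.b to a regular Dense layer.
    t1 = Dense(300, use_bias=False)(net)
    t2 = Dense(300)(action)

    # Lambda layer that takes the two branches and merges them with the
    # backend; swap in K.dot here if you need an explicit matmul.
    merged = Lambda(lambda x: x[0] + x[1], output_shape=(300,))([t1, t2])
    out = Activation('relu')(merged)

    critic = Model(inputs=[inputs, action], outputs=out)

The summed pre-activation then uses t1's and t2's kernels plus t2's bias before the ReLU, matching the tflearn expression, and all weights remain trainable because both branches stay on the path from inputs to output.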

Answered 2017-03-16T20:56:20.840