I am developing a Siamese neural network model. Below are the two arrays I need to feed into the Siamese network: I have two sets of inputs, each example of size 30, so one input of each pair comes from the left side and the other from the right side.
import numpy as np

EXAMPLES=10000
FEATURES=30
LEFT=np.random.random((EXAMPLES,FEATURES))
RIGHT=np.random.random((EXAMPLES,FEATURES))
LABELS=[]
for i in range(EXAMPLES):
    LABELS.append(np.random.randint(0,2))
LABELS=np.asarray(LABELS)
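As a side note, the label loop above can be vectorized: NumPy's `randint` accepts a `size` argument, so the Python loop and the `asarray` call are not needed. A minimal sketch of the same data setup:

```python
import numpy as np

EXAMPLES = 10000
FEATURES = 30

LEFT = np.random.random((EXAMPLES, FEATURES))
RIGHT = np.random.random((EXAMPLES, FEATURES))
# Draw all binary labels in one call instead of appending in a loop
LABELS = np.random.randint(0, 2, size=EXAMPLES)
```

This produces the same shapes: `(10000, 30)` for each input array and `(10000,)` for the labels.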
Now I want to build a Siamese neural network model consisting of two (shared) sub-networks, and train it on this data as described above.
SIAMESE_MODEL
from keras.models import Sequential, Model
from keras.layers import Input, Dense, Lambda
from keras.optimizers import Adam
from keras import backend as K

inputShape=Input(shape=(FEATURES,))
left_input = Input(FEATURES,)
right_input = Input(FEATURES,)
model = Sequential()
model.add(Dense(20, activation='relu', input_shape=inputShape))
model.add(Dense(10, activation='relu'))
model.add(Dense(5, activation='relu'))
model.summary()
encoded_l = model(left_input)
encoded_r = model(right_input)
L1_layer = Lambda(lambda tensors:K.abs(tensors[0] - tensors[1]))
L1_distance = L1_layer([encoded_l, encoded_r])
prediction = Dense(1,activation='sigmoid')(L1_distance)
siamese_net = Model(inputs=[left_input,right_input],outputs=prediction)
siamese_net.compile(loss="mse",optimizer=Adam(lr=0.001))
siamese_net.fit(x=[LEFT,RIGHT],y=LABELS,batch_size=64,epochs=100)
ERROR
TypeError Traceback (most recent call last)
<ipython-input-64-71752a24704a> in <module>
1 inputShape=Input(shape=(FEATURES,))
----> 2 left_input = Input(FEATURES,)
3 right_input = Input(FEATURES,)
4
5 model = Sequential()
~\Anaconda3\envs\tf-gpu\lib\site-packages\keras\engine\input_layer.py in Input(shape, batch_shape, name, dtype, sparse, tensor)
170 'dimension.')
171 if shape is not None and not batch_shape:
--> 172 batch_shape = (None,) + tuple(shape)
173 if not dtype:
174 dtype = K.floatx()
TypeError: 'int' object is not iterable
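For context on the traceback above: `Input(FEATURES,)` passes the bare integer `30` positionally as `shape`, and Keras then evaluates `(None,) + tuple(shape)`; `tuple(30)` fails because an int is not iterable, while `tuple((30,))` works. The failure can be reproduced in plain Python, without Keras:

```python
FEATURES = 30

# Mimics what Input(FEATURES,) triggers inside Keras: tuple(30) raises TypeError
try:
    batch_shape = (None,) + tuple(FEATURES)
except TypeError as e:
    print("TypeError:", e)

# Mimics Input(shape=(FEATURES,)): tuple((30,)) is fine
batch_shape = (None,) + tuple((FEATURES,))
print(batch_shape)  # (None, 30)
```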
Can someone guide me on the correct way to build a Siamese network for 2D matrices? There are resources available for images, but I haven't found anything helpful for this kind of problem.
Regards
UPDATE: Yes, the first problem was the shape of the inputs, which I wasn't specifying correctly. After making the following change, it runs fine:
inputShape=Input(shape=(FEATURES,))
left_input = Input(shape=(FEATURES,))
right_input = Input(shape=(FEATURES,))
I am now getting the following as the Siamese model summary:
siamese_net.summary()
Model: "model_6"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_23 (InputLayer) (None, 30) 0
__________________________________________________________________________________________________
input_24 (InputLayer) (None, 30) 0
__________________________________________________________________________________________________
sequential_11 (Sequential) (None, 5) 885 input_23[0][0]
input_24[0][0]
__________________________________________________________________________________________________
lambda_3 (Lambda) (None, 5) 0 sequential_11[1][0]
sequential_11[2][0]
__________________________________________________________________________________________________
dense_35 (Dense) (None, 1) 6 lambda_3[0][0]
==================================================================================================
Total params: 891
Trainable params: 891
Non-trainable params: 0
My question is: the other layers (the ones with 20 and 10 units) have now disappeared from the summary. Is something wrong?
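(The post ends here without an accepted answer in the source, so the following is only an illustrative check, not the asker's code.) The parameter counts in the summary suggest the layers have not disappeared: Keras collapses a nested `Sequential` sub-model into a single row (`sequential_11`), and its 885 parameters account for all three Dense layers. A quick arithmetic check, using the standard Dense parameter formula `inputs * units + units`:

```python
# Parameter count for a fully-connected layer: weights plus biases
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# The three Dense layers of the shared Sequential sub-model (30 -> 20 -> 10 -> 5)
seq_params = dense_params(30, 20) + dense_params(20, 10) + dense_params(10, 5)
print(seq_params)  # 620 + 210 + 55 = 885, matching the sequential_11 row

# The final Dense(1) head on the 5-dimensional L1 distance
total = seq_params + dense_params(5, 1)
print(total)  # 891, matching "Total params: 891"
```

Since 885 is exactly the sum for Dense(20), Dense(10), and Dense(5), the hidden layers are still present inside the sub-model; calling `model.summary()` on the inner `Sequential` (as the code already does) lists them individually.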