I prepared a small dataset for this project. It gives a
ValueError: Layer weight shape (43, 100) not compatible with provided weight shape (412457, 400)
error. I think there is a problem with the tokenizer.
X and y for train_test_split:
from sklearn.model_selection import train_test_split

X = []
sentences = list(titles["title"])
for sen in sentences:
    X.append(preprocess_text(sen))

y = titles['Unnamed: 1']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
The tokenizer is here:
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(X_train)

X_train = tokenizer.texts_to_sequences(X_train)
X_test = tokenizer.texts_to_sequences(X_test)

vocab_size = len(tokenizer.word_index) + 1  # vocab_size is 43 on this dataset

maxlen = 100
X_train = pad_sequences(X_train, padding='post', maxlen=maxlen)
X_test = pad_sequences(X_test, padding='post', maxlen=maxlen)
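As a quick sanity check (my addition, not part of the original code), the padded sequences only contain indices below vocab_size, which is exactly what the Embedding layer's first dimension has to match:

# Illustrative check: the tokenizer's indices run 1..len(tokenizer.word_index),
# so every value in X_train must be < vocab_size (here 43).
print(X_train.shape)  # (num_titles, 100)
print(X_train.max())  # must be < vocab_size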
So, my pretrained word2vec model has shape (412457, 400).
from numpy import array
from numpy import asarray
from numpy import zeros
from gensim.models import KeyedVectors

# load the pretrained word2vec vectors (412457 words x 400 dimensions)
embeddings_dictionary = KeyedVectors.load_word2vec_format('drive/My Drive/trmodel', binary=True)
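If I understand correctly, the usual GloVe-style recipe would distill the full (412457, 400) matrix down to one row per tokenizer index before handing it to the model. A minimal sketch of that step (embedding_matrix is my name, not from my original code):

# Sketch: build a (vocab_size, 400) matrix aligned with the tokenizer's word indices.
embedding_dim = embeddings_dictionary.vector_size  # 400
embedding_matrix = zeros((vocab_size, embedding_dim))
for word, index in tokenizer.word_index.items():
    try:
        embedding_matrix[index] = embeddings_dictionary[word]  # word2vec lookup
    except KeyError:
        pass  # words missing from the word2vec vocabulary keep the all-zeros row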
I used my own pretrained word2vec model instead of GloVe (vocab_size: 43, embedding dim 100, weights from embeddings_dictionary.vectors):
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers.recurrent import LSTM

model = Sequential()
# This is the line that fails: the layer is declared as (43, 100), but the
# weights passed in are the full word2vec matrix, which is (412457, 400).
embedding_layer = Embedding(vocab_size, 100, weights=[embeddings_dictionary.vectors], input_length=maxlen, trainable=False)
model.add(embedding_layer)
model.add(LSTM(128))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
ValueError: Layer weight shape (43, 100) not compatible with provided weight shape (412457, 400)
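If I understand the shape rule correctly, a consistent variant would declare and supply the same (vocab_size, 400) shape on both sides, using the embedding_matrix sketch from above:

# Sketch: declared shape and supplied weights are both (vocab_size, 400).
embedding_layer = Embedding(vocab_size, embedding_dim,
                            weights=[embedding_matrix],
                            input_length=maxlen, trainable=False)

Is that what I am supposed to pass instead of embeddings_dictionary.vectors?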