
I am trying to build a semantic similarity search using Google's Universal Sentence Encoder from TensorFlow Hub, which, as far as I understand, takes a lowercased tokenized string and outputs a 512-dimensional embedding vector.

Main problem

Everything except the table initialization runs in under a second:

session.run(tf.global_variables_initializer())  # runs in less than a second
session.run(tf.tables_initializer())            # takes 15+ seconds

The lines above take about 20 seconds in total. Is there any way to speed up the table initialization (so that later, in actual use, user input can be converted into embedding vectors quickly)?


The code is simple:

import os
import pickle  # just for saving/loading the vectorized data
from math import acos, pi

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # suppress all logs except errors

embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")

def search_converted(query_text, file_path="file_path"):
    with tf.Session() as session:
        session.run(tf.global_variables_initializer())  # initialize global variables
        session.run(tf.tables_initializer())  # initialize tables
        message_embeddings = session.run(embed([query_text]))  # turn the query text into an embedding vector
    similarities = []  # list of similarity scores (float values)
    with open(file_path, "rb") as fl:  # pickle containing the embedding vectors and their readable form
        pckl = pickle.load(fl)
        for col in pckl[0]:  # precomputed embedding vectors
            similarities.append(1 - acos(np.inner(message_embeddings[0], col) / (
                        np.linalg.norm(message_embeddings[0]) * np.linalg.norm(col))) / pi)  # append angular-distance similarity
    return (pckl[1][similarities.index(max(similarities))], max(similarities))  # return the most similar string and its similarity score

The code above is only for testing (I know it is not optimal for real use). It opens a pickled file containing the arrays and picks the most similar string from them.
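The pickle is expected to hold a pair of parallel lists: pckl[0] with the precomputed embedding vectors and pckl[1] with the corresponding readable strings. Roughly, such a file could be built like this (a simplified sketch, not the exact script I used; the corpus strings are just placeholders):

import pickle
import tensorflow as tf
import tensorflow_hub as hub

corpus = ["How do I reset my password?", "Where is the nearest store?"]  # example strings

embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    session.run(tf.tables_initializer())
    vectors = session.run(embed(corpus))  # shape: (len(corpus), 512)

with open("file_path", "wb") as fl:
    # Index 0: embedding vectors; index 1: the readable strings they came from.
    pickle.dump((vectors, corpus), fl)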


In short: how can I speed up the table initialization so that I can use this library in practice?
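The usage pattern I am aiming for is roughly the following: pay the initialization cost once, then answer many queries from the same long-lived session (a sketch of the idea, not something I have working yet):

import tensorflow as tf
import tensorflow_hub as hub

embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")

# Build the graph once; feed query strings through a placeholder.
text_input = tf.placeholder(dtype=tf.string, shape=[None])
embedding_op = embed(text_input)

session = tf.Session()
session.run(tf.global_variables_initializer())  # fast
session.run(tf.tables_initializer())            # slow, but paid only once at startup

def embed_query(query_text):
    # Reuses the already-initialized session, so each call is fast.
    return session.run(embedding_op, feed_dict={text_input: [query_text]})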

