I want to create a Siamese network to compare the similarity of two strings.

I'm trying to follow this tutorial. The example works with images, but I want to use a string representation (at the character level), and I'm stuck on preprocessing the text.

Suppose I have two inputs:

string_a = ["one","two","three"]
string_b = ["four","five","six"]

I need to prepare them as input for my model. To do this, I need to:

  • create a tokenizer
  • create a tf.data.Dataset
  • preprocess this dataset (tokenize the inputs)

So I'm trying the following approach:

    import tensorflow as tf
    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    #create a tokenizer
    tok = Tokenizer(char_level=True,oov_token="?")
    tok.fit_on_texts(string_a+string_b)
    char_index = tok.word_index
    maxlen = max([len(x) for x in tok.texts_to_sequences(string_a+string_b)])
    
    # create a dataset
    dataset_a = tf.data.Dataset.from_tensor_slices(string_a)
    dataset_b = tf.data.Dataset.from_tensor_slices(string_b)
    
    dataset = tf.data.Dataset.zip((dataset_a,dataset_b))
    
    # preprocessing functions
    def tokenize_string(data,tokenizer,max_len):
        """vectorize string with a given tokenizer
        """
        sequence = tokenizer.texts_to_sequences(data)
        return_seq = pad_sequences(sequence,maxlen=max_len,padding="post",truncating="post")
        return return_seq[0]
    
    def preprocess_couple(string_1,string_2):
        """given 2 strings, tokenize them and return an array
        """
        return (
            tokenize_string([string_1], tok, maxlen),
            tokenize_string([string_2], tok, maxlen)
        )
    
    #shuffle and preprocess dataset
    dataset = dataset.shuffle(buffer_size=2)
    dataset = dataset.map(preprocess_couple)

But I get an error:

AttributeError: in user code:

    <ipython-input-29-b920d389ea82>:29 preprocess_couple  *
        tokenize_string([string_2], tok, maxlen)
    <ipython-input-29-b920d389ea82>:20 tokenize_string  *
        sequence = tokenizer.texts_to_sequences(data)
    C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\embargo_text\lib\site-packages\keras_preprocessing\text.py:281 texts_to_sequences  *
        return list(self.texts_to_sequences_generator(texts))
    C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\embargo_text\lib\site-packages\keras_preprocessing\text.py:306 texts_to_sequences_generator  **
        text = text.lower()
    C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\embargo_text\lib\site-packages\tensorflow\python\framework\ops.py:401 __getattr__
        self.__getattribute__(name)

The state of the dataset before applying the preprocess_couple function looks like this:

(<tf.Tensor: shape=(), dtype=string, numpy=b'two'>, <tf.Tensor: shape=(), dtype=string, numpy=b'five'>)
(<tf.Tensor: shape=(), dtype=string, numpy=b'three'>, <tf.Tensor: shape=(), dtype=string, numpy=b'six'>)
(<tf.Tensor: shape=(), dtype=string, numpy=b'one'>, <tf.Tensor: shape=(), dtype=string, numpy=b'four'>)

I think this error comes from the fact that the strings are converted to tensors by the from_tensor_slices function. But what is the correct way to preprocess this data for input?
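
Indeed, the same AttributeError shows up if you hand the tokenizer a tensor directly; texts_to_sequences expects plain Python strings (a minimal sketch in eager mode, reusing the `tok` fitted above):

    # texts_to_sequences calls .lower() on each element (see the traceback),
    # and a tf.Tensor has no such attribute -- only a Python str does.
    t = tf.constant("one")
    # tok.texts_to_sequences([t])     # AttributeError: ... no attribute 'lower'
    tok.texts_to_sequences(["one"])   # works, returns a list of index lists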

1 Answer

I don't quite follow what you're actually trying to achieve, but if you want to convert your text into vectors, this will help:

    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    def process(data):
        # fit a character-level tokenizer on the given texts
        tok = Tokenizer(char_level=True, oov_token="?")
        tok.fit_on_texts(data)
        # pad every sequence to the length of the longest one
        maxlen = max([len(x) for x in tok.texts_to_sequences(data)])
        data = tok.texts_to_sequences(data)
        data = pad_sequences(data, maxlen=maxlen, padding='post')
        return data
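
One way to wire this into your original pipeline (a sketch, assuming string_a and string_b should share a single vocabulary and padding length) is to vectorize before building the dataset, so that map() never has to call the tokenizer on string tensors:

    import tensorflow as tf

    # Fit once on the combined corpus so both sides get the same character
    # indices and the same padded length, then split the result back.
    vectors = process(string_a + string_b)
    vec_a, vec_b = vectors[:len(string_a)], vectors[len(string_a):]

    # Build the pipeline from the already-vectorized integer arrays.
    dataset = tf.data.Dataset.from_tensor_slices((vec_a, vec_b))
    dataset = dataset.shuffle(buffer_size=2)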