
Here is the code that generates the embeddings and reduces their dimensionality:

import tensorflow_hub as hub

embed_fn = None  # cached TF Hub encoder, loaded lazily on first call

def generate_embeddings(text):
    global embed_fn
    if embed_fn is None:
        embed_fn = hub.load(module_url)  # module_url is defined elsewhere
    embedding = embed_fn(text).numpy()
    return embedding


from sklearn.decomposition import IncrementalPCA

def pca():
    embeddings = generate_embeddings(df)  # embed once, reuse for fit and transform
    pca = IncrementalPCA(n_components=64, batch_size=1024)
    pca.fit(embeddings)
    features_train = pca.transform(embeddings)
    return features_train

When I run this on 100,000 records, it raises this error:

ResourceExhaustedError:  OOM when allocating tensor with shape[64338902,512] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
     [[{{node StatefulPartitionedCall/StatefulPartitionedCall/EncoderDNN/EmbeddingLookup/EmbeddingLookupUnique/GatherV2}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
 [Op:__inference_restored_function_body_15375]

Function call stack:
restored_function_body


2 Answers


This shows that you are hitting the memory limit (note the failed allocation in the traceback is on the CPU). Either reduce the batch_size or the size of the network layers.
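As a sketch of what "reduce the batch size" can look like at the embedding step, assuming the generate_embeddings function from the question (the helper name and the chunk size of 512 texts are my own choices):

import numpy as np

def generate_embeddings_batched(texts, batch_size=512):
    # Encode the texts in fixed-size chunks so the encoder never has to
    # materialize one huge [n_texts, 512] activation tensor at once.
    chunks = []
    for start in range(0, len(texts), batch_size):
        chunks.append(generate_embeddings(list(texts[start:start + batch_size])))
    return np.vstack(chunks)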

Answered 2020-12-05T13:35:38.120

Since the data is larger than system memory, it cannot be loaded all at once, so pass it in chunks (batches); only one batch of data is loaded into memory at a time.

import numpy as np
import tensorflow_hub as hub
from sklearn.decomposition import PCA

embed_fn = hub.load(module_url)  # load the encoder once instead of once per batch

def generate_embeddings(text):
    embedding = embed_fn(text).numpy()
    return embedding

def gen_pca(batch):
    # Fit PCA on this batch's embeddings only; see the note below on the
    # limitation of fitting a separate PCA per batch.
    gen = generate_embeddings(batch)
    pca = PCA(n_components=64)
    pca.fit(gen)
    features_train = pca.transform(gen)
    return features_train


def run():
    ex=[]
    for batch in np.array_split(df['text'], 100):
        ex.extend(gen_pca(batch))
    return ex
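
One caveat with the loop above: each batch gets its own PCA fit, so the 64 components differ from batch to batch and the projected features are not directly comparable. A sketch of a variant that fits a single scikit-learn IncrementalPCA across all batches via partial_fit (the function name run_incremental is mine; df and generate_embeddings are as above):

from sklearn.decomposition import IncrementalPCA

def run_incremental():
    ipca = IncrementalPCA(n_components=64)
    batches = np.array_split(df['text'], 100)
    # First pass: update one shared PCA model batch by batch.
    for batch in batches:
        ipca.partial_fit(generate_embeddings(batch))
    # Second pass: project every batch with the same fitted components.
    ex = []
    for batch in batches:
        ex.extend(ipca.transform(generate_embeddings(batch)))
    return ex

This encodes each batch twice; caching the embeddings (e.g. on disk) would avoid the second encoding pass at the cost of storage.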

Answered 2021-01-19T07:27:04.160