
I am using UMAP (https://umap-learn.readthedocs.io/en/latest/#) to reduce the dimensionality of my data. My dataset contains 4,700 samples, each with 1.2 million features (which I would like to reduce). However, even with 32 CPUs and 120 GB of RAM this takes quite a long time. In particular, constructing the embedding is slow, and the verbose output has not changed in the last 3.5 hours:

UMAP(dens_frac=0.0, dens_lambda=0.0, low_memory=False, n_neighbors=10,
     verbose=True)
Construct fuzzy simplicial set
Mon Jul  5 09:43:28 2021 Finding Nearest Neighbors
Mon Jul  5 09:43:28 2021 Building RP forest with 59 trees
Mon Jul  5 10:06:10 2021 metric NN descent for 20 iterations
     1  /  20
     2  /  20
     3  /  20
     4  /  20
     5  /  20
    Stopping threshold met -- exiting after 5 iterations
Mon Jul  5 10:12:14 2021 Finished Nearest Neighbor Search
Mon Jul  5 10:12:25 2021 Construct embedding

Is there any way to speed this process up? I am already using a sparse matrix (scipy.sparse.lil_matrix) as described here: https://umap-learn.readthedocs.io/en/latest/sparse.html. I have also installed pynndescent (as described here: https://github.com/lmcinnes/umap/issues/416). My code is as follows:

from scipy.sparse import lil_matrix
import numpy as np
import umap.umap_ as umap

term_dok_matrix = np.load('term_dok_matrix.npy')
term_dok_mat_lil = lil_matrix(term_dok_matrix, dtype=np.float32)

test = umap.UMAP(a=None, angular_rp_forest=False, b=None,
     force_approximation_algorithm=False, init='spectral', learning_rate=1.0,
     local_connectivity=1.0, low_memory=False, metric='euclidean',
     metric_kwds=None, n_neighbors=10, min_dist=0.1, n_components=2, n_epochs=None, 
     negative_sample_rate=5, output_metric='euclidean',
     output_metric_kwds=None, random_state=None, repulsion_strength=1.0,
     set_op_mix_ratio=1.0, spread=1.0, target_metric='categorical',
     target_metric_kwds=None, target_n_neighbors=-1, target_weight=0.5,
     transform_queue_size=4.0, unique=False, verbose=True).fit_transform(term_dok_mat_lil)

Are there any tricks or ideas for making this computation faster? Are there parameters I could change? Does it help that my matrix consists only of zeros and ones (i.e. every non-zero entry in the matrix is a one)?


1 Answer


With 1.2 million features and only 4,700 samples you are going to be better off precomputing the full distance matrix and passing that in with metric="precomputed". Currently UMAP is expending a lot of work computing approximate nearest neighbors of those 1.2-million-long vectors; a brute-force distance computation over 4,700 samples will be a lot faster.
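A minimal sketch of that precomputed-distance approach, assuming the same term_dok_matrix.npy file as in the question; the variable names and the use of sklearn.metrics.pairwise_distances here are illustrative choices, not from the original post:

from scipy.sparse import csr_matrix
from sklearn.metrics import pairwise_distances
import numpy as np
import umap.umap_ as umap

# Load the dense 0/1 term matrix and store it as CSR for fast row access
term_matrix = csr_matrix(np.load('term_dok_matrix.npy'), dtype=np.float32)

# Brute-force the full 4700 x 4700 pairwise distance matrix; this is cheap
# compared with approximate nearest-neighbour search over 1.2 million features
dist = pairwise_distances(term_matrix, metric='euclidean', n_jobs=-1)

# Hand the precomputed distances to UMAP instead of the raw feature matrix
embedding = umap.UMAP(metric='precomputed', n_neighbors=10, min_dist=0.1,
                      n_components=2, verbose=True).fit_transform(dist)

The 4,700 × 4,700 distance matrix is about 88 MB as float32, which is negligible next to the 120 GB of RAM available.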

Answered 2021-07-06T03:01:26.393