I have always found it faster to exploit scipy's sparse matrices and vectorised operations rather than relying on Python's set functions. Here is a simple function that converts a DataFrame edge list into a sparse adjacency matrix (directed or undirected):
import numpy as np
import scipy.sparse as spar

def sparse_adjmat(df, N=None, directed=False, coli='i', colj='j'):
    # figure out the size of the matrix if not given
    if N is None:
        N = df[[coli, colj]].values.max() + 1
    # build a directed sparse adjacency matrix
    adjmat = spar.csr_matrix((np.ones(df.shape[0], dtype=int),
                              (df[coli].values, df[colj].values)),
                             shape=(N, N))
    # for undirected graphs, force the adjacency matrix to be symmetric
    if not directed:
        adjmat[df[colj].values, df[coli].values] = 1
    return adjmat
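The core trick is the COO-style `csr_matrix` constructor, which places a 1 at each `(i, j)` pair. A minimal sketch with a hypothetical toy edge list (0→1, 1→2, 0→2), not taken from the data above:

```python
import numpy as np
import scipy.sparse as spar

# hypothetical toy edge list: edges 0->1, 1->2, 0->2
rows = np.array([0, 1, 0])
cols = np.array([1, 2, 2])

# one entry of 1 per edge; shape is (largest index + 1) in each dimension
A = spar.csr_matrix((np.ones(3, dtype=int), (rows, cols)), shape=(3, 3))

print(A.toarray())
# [[0 1 1]
#  [0 0 1]
#  [0 0 0]]
```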
The per-node Jaccard similarity is then just a few vectorised operations on the binary adjacency matrices:
def sparse_jaccard(m1, m2):
    intersection = m1.multiply(m2).sum(axis=1)
    a = m1.sum(axis=1)
    b = m2.sum(axis=1)
    jaccard = intersection / (a + b - intersection)
    # force jaccard to be 0 when a + b - intersection is 0 (0/0 gives NaN)
    jaccard = np.nan_to_num(jaccard)
    return np.asarray(jaccard).flatten()
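Row by row this reproduces the set formula |A∩B| / (|A| + |B| − |A∩B|). A minimal check on two hand-built binary matrices (toy data, my own example): row 0 compares neighbourhoods {0, 1} and {1, 2}, so the result is 1/3; row 1 compares two empty neighbourhoods, and the 0/0 case is mapped to 0.

```python
import numpy as np
import scipy.sparse as spar

m1 = spar.csr_matrix(np.array([[1, 1, 0], [0, 0, 0]]))
m2 = spar.csr_matrix(np.array([[0, 1, 1], [0, 0, 0]]))

intersection = m1.multiply(m2).sum(axis=1)  # per-row |A ∩ B|
a = m1.sum(axis=1)                          # per-row |A|
b = m2.sum(axis=1)                          # per-row |B|
jaccard = np.nan_to_num(intersection / (a + b - intersection))
result = np.asarray(jaccard).flatten()      # row 0 -> 1/3, row 1 -> 0
```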
For comparison, I made a random pandas edge-list generator and wrapped your code in the following functions:
import pandas as pd

def erdos_renyi_df(N=100, m=400):
    df = pd.DataFrame(np.random.randint(0, N, size=(m, 2)), columns=['i', 'j'])
    df.drop_duplicates(['i', 'j'], inplace=True)
    df.sort_values(['i', 'j'], inplace=True)
    df.reset_index(inplace=True, drop=True)
    return df
def compute_jaccard_index(set_1, set_2):
    n = len(set_1.intersection(set_2))
    return n / float(len(set_1) + len(set_2) - n)
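For instance, on two small sets of my own choosing, this gives the expected value: {1, 2, 3} and {2, 3, 4} share {2, 3}, with union {1, 2, 3, 4}, so the Jaccard index is 2/4:

```python
def compute_jaccard_index(set_1, set_2):
    # |intersection| / |union|, with the union computed as |A| + |B| - |A ∩ B|
    n = len(set_1.intersection(set_2))
    return n / float(len(set_1) + len(set_2) - n)

print(compute_jaccard_index({1, 2, 3}, {2, 3, 4}))  # 0.5
```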
def set_based_jaccard(df1, df2):
    JI = []
    for i in pd.unique(df1['i']):
        tmp1 = df1[df1['i'] == i]
        tmp2 = df2[df2['i'] == i]
        set_1 = set(tmp1['j'])
        set_2 = set(tmp2['j'])
        JI.append(compute_jaccard_index(set_1, set_2))
    return JI
We can then compare running times by generating two random networks:
N = 10**3
m = 4*N
df1 = erdos_renyi_df(N,m)
df2 = erdos_renyi_df(N,m)
and computing the per-node Jaccard similarity with the set-based method:
%timeit set_based_jaccard(df1,df2)
1.54 s ± 113 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
and with the sparse method (including the overhead of converting to sparse matrices):
%timeit sparse_jaccard(sparse_adjmat(df1, N=N, directed=True, coli='i', colj='j'), sparse_adjmat(df2, N=N, directed=True, coli='i', colj='j'))
1.71 ms ± 109 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
As you can see, the sparse-matrix code is roughly 1000 times faster.