
I need to rewrite this code with numexpr. It computes the (squared) Euclidean distances from each row of a data matrix [rows x cols] to a vector [1 x cols]:

d = ((data-vec)**2).sum(axis=1)

How can this be done? Or is there perhaps another, faster way?

My complication is that I use hdf5, and the data matrix is read from it in batches. For example, this code gives the error: objects are not aligned.

# naive NumPy solution; could this be parallelized?
def test_bruteforce_knn():
    h5f = tables.open_file(fileName)

    t0 = time.time()
    d = np.empty((rows*batches,))
    for i in range(batches):
        d[i*rows:(i+1)*rows] = ((h5f.root.carray[i*rows:(i+1)*rows]-vec)**2).sum(axis=1)
    print(time.time() - t0)
    ndx = d.argsort()
    print(ndx[:k])

    h5f.close()

# using the dot-product trick (doesn't work; error: objects are not aligned)
def test_bruteforce_knn():
    h5f = tables.open_file(fileName)

    t0 = time.time()
    d = np.empty((rows*batches,))
    for i in range(batches):
        d[i*rows:(i+1)*rows] = (np.einsum('ij,ij->i', h5f.root.carray[i*rows:(i+1)*rows],
                                          h5f.root.carray[i*rows:(i+1)*rows])
                                + np.dot(vec, vec)
                                - 2 * np.dot(h5f.root.carray[i*rows:(i+1)*rows], vec))
    print(time.time() - t0)
    ndx = d.argsort()
    print(ndx[:k])

    h5f.close()
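
(Most likely the "objects are not aligned" error comes from calling np.dot while vec is still shaped (1, cols): both np.dot(vec, vec) and np.dot(carray_slice, vec) then have mismatched 2-D shapes. A minimal sketch of the same loop with a flattened vector, under that assumption and reusing the variables defined above:)

    v = vec.ravel()                      # 1-D version of vec, shape (cols,)
    for i in range(batches):
        chunk = h5f.root.carray[i*rows:(i+1)*rows]
        d[i*rows:(i+1)*rows] = (np.einsum('ij,ij->i', chunk, chunk)
                                + np.dot(v, v)
                                - 2 * np.dot(chunk, v))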

Using numexpr: it seems that numexpr does not understand h5f.root.carray[i*rows:(i+1)*rows]. Does it have to be assigned to a plain array first?

import numexpr as ne

def test_bruteforce_knn():
    h5f = tables.open_file(fileName)

    t0 = time.time()
    d = np.empty((rows*batches,))
    for i in range(batches):
        ne.evaluate("sum((h5f.root.carray[i*rows:(i+1)*rows] - vec) ** 2, axis=1)")
    print(time.time() - t0)
    ndx = d.argsort()
    print(ndx[:k])

    h5f.close()
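
(A sketch of what that reassignment could look like; this is my assumption, not part of the original post. numexpr evaluates its expression string over plain arrays bound to simple variable names, so each batch has to be read into a local NumPy array first. fileName, rows, batches, vec and k are assumed to be defined as above:)

def test_bruteforce_knn():
    h5f = tables.open_file(fileName)

    t0 = time.time()
    d = np.empty((rows*batches,))
    for i in range(batches):
        # read the batch into an ordinary NumPy array so numexpr can reference it
        chunk = h5f.root.carray[i*rows:(i+1)*rows]
        d[i*rows:(i+1)*rows] = ne.evaluate("sum((chunk - vec) ** 2, axis=1)")
    print(time.time() - t0)
    ndx = d.argsort()
    print(ndx[:k])

    h5f.close()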

1 Answer


There is a potentially fast method (for very large arrays) that uses only NumPy; it is what scikit-learn uses:

def squared_row_norms(X):
    # From http://stackoverflow.com/q/19094441/166749
    return np.einsum('ij,ij->i', X, X)

def squared_euclidean_distances(data, vec):
    data2 = squared_row_norms(data)
    vec2 = squared_row_norms(vec)
    d = np.dot(data, vec.T).ravel()
    d *= -2
    d += data2
    d += vec2
    return d

This is based on the fact that (x - y)² = x² + y² - 2xy, which holds for vectors as well: ‖x - y‖² = ‖x‖² + ‖y‖² - 2⟨x, y⟩.

Test:

>>> data = np.random.randn(10, 40)
>>> vec = np.random.randn(1, 40)
>>> ((data - vec) ** 2).sum(axis=1)
array([  96.75712686,   69.45894306,  100.71998244,   80.97797154,
         84.8832107 ,   82.28910021,   67.48309433,   81.94813371,
         64.68162331,   77.43265692])
>>> squared_euclidean_distances(data, vec)
array([  96.75712686,   69.45894306,  100.71998244,   80.97797154,
         84.8832107 ,   82.28910021,   67.48309433,   81.94813371,
         64.68162331,   77.43265692])
>>> from sklearn.metrics.pairwise import euclidean_distances
>>> euclidean_distances(data, vec, squared=True).ravel()
array([  96.75712686,   69.45894306,  100.71998244,   80.97797154,
         84.8832107 ,   82.28910021,   67.48309433,   81.94813371,
         64.68162331,   77.43265692])

Profile:

>>> data = np.random.randn(1000, 40)
>>> vec = np.random.randn(1, 40)
>>> %timeit ((data - vec)**2).sum(axis=1)
10000 loops, best of 3: 114 us per loop
>>> %timeit squared_euclidean_distances(data, vec)
10000 loops, best of 3: 52.5 us per loop

Using numexpr is also possible, but it doesn't seem to give any speedup for 1000 points (and at 10000 points it's not much better either):

>>> %timeit ne.evaluate("sum((data - vec) ** 2, axis=1)")
10000 loops, best of 3: 142 us per loop
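
(A sketch of how this could be plugged back into the batched HDF5 loop from the question; this is my addition, assuming fileName, rows, batches, vec and k are defined as in the question:)

def test_bruteforce_knn():
    h5f = tables.open_file(fileName)

    t0 = time.time()
    d = np.empty((rows*batches,))
    for i in range(batches):
        chunk = h5f.root.carray[i*rows:(i+1)*rows]   # one batch as a plain array
        d[i*rows:(i+1)*rows] = squared_euclidean_distances(chunk, vec)
    print(time.time() - t0)
    ndx = d.argsort()          # smallest squared distances first
    print(ndx[:k])

    h5f.close()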