
Given NxM feature vectors as a numpy matrix, is there any routine to cluster them with the K-means algorithm using the L1 distance (Manhattan distance)?

4 Answers


Here is a K-means algorithm that uses the L1 distance (Manhattan distance). For generality, the feature vectors are represented as lists, which are easy to convert to a numpy matrix.

    import random

    # Manhattan (L1) distance between two equal-length vectors
    def L1(v1, v2):
        if len(v1) != len(v2):
            print("error")
            return -1
        return sum(abs(v1[i] - v2[i]) for i in range(len(v1)))

    # k-means with L1 distance.
    # rows refers to the NxM feature vectors
    def kcluster(rows, distance=L1, k=4):  # Cited from Programming Collective Intelligence
        # Determine the minimum and maximum values of each feature
        ranges = [(min([row[i] for row in rows]), max([row[i] for row in rows])) for i in range(len(rows[0]))]

        # Create k randomly placed centroids
        clusters = [[random.random() * (ranges[i][1] - ranges[i][0]) + ranges[i][0] for i in range(len(rows[0]))] for j in range(k)]

        lastmatches = None
        for t in range(100):
            print('Iteration %d' % t)
            bestmatches = [[] for i in range(k)]

            # Find which centroid is the closest for each row
            for j in range(len(rows)):
                row = rows[j]
                bestmatch = 0
                for i in range(k):
                    d = distance(clusters[i], row)
                    if d < distance(clusters[bestmatch], row):
                        bestmatch = i
                bestmatches[bestmatch].append(j)

            # If the results are the same as last time, this is complete
            if bestmatches == lastmatches:
                break
            lastmatches = bestmatches

            # Move the centroids to the average of their members
            for i in range(k):
                avgs = [0.0] * len(rows[0])
                if len(bestmatches[i]) > 0:
                    for rowid in bestmatches[i]:
                        for m in range(len(rows[rowid])):
                            avgs[m] += rows[rowid][m]
                    for j in range(len(avgs)):
                        avgs[j] /= len(bestmatches[i])
                    clusters[i] = avgs
        return bestmatches
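
A minimal usage sketch, assuming the numpy matrix is first converted to a list of lists (the data and k here are made up for illustration):

    import numpy as np

    X = np.random.rand(100, 5)   # hypothetical NxM feature matrix
    rows = X.tolist()            # the functions above work on plain lists

    memberships = kcluster(rows, distance=L1, k=4)
    # memberships[i] holds the row indices assigned to centroid i
    print([len(m) for m in memberships])

Note that this routine assigns points by L1 distance but still moves each centroid to the mean of its members; replacing the mean with the per-coordinate median would give the update that actually minimizes the L1 cost (K-medians).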
answered 2012-04-14T15:17:35.470

I don't think this is explicitly provided in scipy, but you should take a look at the following:

http://projects.scipy.org/scipy/ticket/612
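
As an aside, scipy does expose the Manhattan distance itself as 'cityblock' in scipy.spatial.distance, so the nearest-centroid assignment step can at least be vectorized even without a dedicated routine; a minimal sketch with made-up data:

    import numpy as np
    from scipy.spatial.distance import cdist

    X = np.random.rand(100, 5)   # hypothetical NxM feature matrix
    centers = X[np.random.choice(len(X), 4, replace=False)]

    # L1 distance from every point to every centroid, then nearest-centroid labels
    labels = cdist(X, centers, metric='cityblock').argmin(axis=1)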

answered 2011-06-06T14:48:50.690

There is code under is-it-possible-to-specify-your-own-distance-function-using-scikits-learn-k-means that works with any of the 20-odd metrics in scipy.spatial.distance. See also L1-or-L.5-metrics-for-clustering; could you comment on your results with L1 vs. L2?
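
The linked code is not reproduced here, but a compact sketch of a metric-parameterized K-means loop built on scipy.spatial.distance.cdist could look like the following (using the per-coordinate median update for 'cityblock' and the mean otherwise is my own assumption, since the mean is only the natural centroid for L2):

    import numpy as np
    from scipy.spatial.distance import cdist

    def kcluster_metric(X, k=4, metric='cityblock', iters=100, seed=0):
        """K-means-style clustering with any scipy.spatial.distance metric name."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            # Assign each point to its nearest centroid under the chosen metric
            labels = cdist(X, centers, metric=metric).argmin(axis=1)
            # Median minimizes the L1 cost; mean is the usual L2 update
            update = np.median if metric == 'cityblock' else np.mean
            new_centers = np.array([update(X[labels == i], axis=0)
                                    if np.any(labels == i) else centers[i]
                                    for i in range(k)])
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        return labels, centers

    labels, centers = kcluster_metric(np.random.rand(100, 5), k=4, metric='cityblock')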

answered 2011-06-12T09:51:15.490

Take a look at pyclustering. There you can find a k-means implementation that can be configured to use the L1 distance. However, you have to convert your numpy array to a list.

How to install pyclustering:

    pip3 install pyclustering

Code snippet copied from pyclustering:

    from pyclustering.cluster.kmeans import kmeans, kmeans_visualizer
    from pyclustering.cluster.center_initializer import kmeans_plusplus_initializer
    from pyclustering.samples.definitions import FCPS_SAMPLES
    from pyclustering.utils import read_sample
    from pyclustering.utils.metric import distance_metric, type_metric

    sample = read_sample(FCPS_SAMPLES.SAMPLE_TWO_DIAMONDS)

    # k-means++ seeding for the initial centers, then k-means with the Manhattan metric
    initial_centers = kmeans_plusplus_initializer(sample, 2).initialize()
    manhattan_metric = distance_metric(type_metric.MANHATTAN)
    kmeans_instance = kmeans(sample, initial_centers, metric=manhattan_metric)
    kmeans_instance.process()
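
To run this on the NxM numpy matrix from the question, convert it to a list of lists first and read the cluster memberships back as index lists; a small sketch under that assumption (X is a made-up feature matrix):

    import numpy as np
    from pyclustering.cluster.kmeans import kmeans
    from pyclustering.cluster.center_initializer import kmeans_plusplus_initializer
    from pyclustering.utils.metric import distance_metric, type_metric

    X = np.random.rand(100, 5)   # hypothetical NxM feature matrix
    sample = X.tolist()          # pyclustering expects plain lists of lists

    initial_centers = kmeans_plusplus_initializer(sample, 4).initialize()
    metric = distance_metric(type_metric.MANHATTAN)
    kmeans_instance = kmeans(sample, initial_centers, metric=metric)
    kmeans_instance.process()
    clusters = kmeans_instance.get_clusters()   # row indices grouped per cluster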

answered 2021-06-06T07:08:02.893