
I want to use a GMM to cluster the classic iris dataset. I got the dataset from:

https://gist.github.com/netj/8836201

My program so far is the following:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.mixture import GaussianMixture as mix
from sklearn.model_selection import StratifiedKFold

def main():
    data = pd.read_csv("iris.csv", header=None)

    data = data.iloc[1:]  # drop the header row contained in the file

    # Encode the species column (index 4) as integer class codes.
    data[4] = data[4].astype("category")
    data[4] = data[4].cat.codes

    target = np.array(data.pop(4))
    X = np.array(data).astype(float)

    kf = StratifiedKFold(n_splits=10, shuffle=True, random_state=1234)

    train_ind, test_ind = next(kf.split(X, target))
    X_train = X[train_ind]
    y_train = target[train_ind]

    gmm_calc(X_train, "full", y_train)

def gmm_calc(X_train, cov, y_train):
    print(X_train)
    print(y_train)
    n_classes = len(np.unique(y_train))
    model = mix(n_components=n_classes, covariance_type=cov)
    # Attempted supervised initialization (see the update below).
    model.means_ = np.array([X_train[y_train == i].mean(axis=0)
                             for i in range(n_classes)])
    model.fit(X_train)
    y_predict = model.predict(X_train)
    print(cov, " ", y_train)
    print(cov, " ", y_predict)
    print(np.mean(y_predict == y_train) * 100)

The problem I have is that when I try to count the coincidences y_predict == y_train, every run of the program gives a different result. For example:

First run:

full   [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]
full   [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0 2 2 2 2 2 2 2 2 2
 2 2 2 0 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
0.0

Second run:

full   [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]
full   [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0
 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]
33.33333333333333

Third run:

full   [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]
full   [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1
 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]
98.51851851851852

So, as you can see, the result differs on every run. I found some code on the internet:

https://scikit-learn.org/0.16/auto_examples/mixture/plot_gmm_classifier.html

but there, in the full-covariance case, they get about 82% accuracy on the training set. What am I doing wrong here?

Thanks

UPDATE: I found that the internet example uses GMM rather than the new GaussianMixture. I also found that in the example the GMM parameters are initialized in a supervised way: classifier.means_ = np.array([X_train[y_train == i].mean(axis=0) for i in xrange(n_classes)])

I have put the modified code above, but it still changes the results every time I run it, whereas with the GMM library this does not happen.
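For reference, with the new GaussianMixture the same supervised initialization would go through the means_init constructor parameter (fit() re-initializes the model, so assigning model.means_ beforehand has no effect). A minimal sketch under that assumption, reusing X_train and y_train from above:

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_supervised_init(X_train, y_train):
    """Fit a GMM whose components start at the per-class feature means."""
    classes = np.unique(y_train)
    # Per-class feature means, shape (n_classes, n_features).
    init_means = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    model = GaussianMixture(n_components=len(classes),
                            covariance_type="full",
                            means_init=init_means)  # consumed by fit()
    return model.fit(X_train)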


2 Answers


1) The GMM classifier uses the Expectation-Maximization algorithm to fit a mixture of Gaussians: the Gaussian components are initially centered at random data points, and the algorithm then moves them until it converges to a local optimum. Because of the random initialization, the result can differ on every run. So you also have to use the random_state parameter of GMM (or try a higher number of initializations, n_init, and expect more similar results.)
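As a minimal sketch of that fix (random_state and n_init are standard GaussianMixture parameters; mix is the alias imported in the question):

from sklearn.mixture import GaussianMixture as mix

# A fixed seed makes the random EM initialization reproducible; n_init=10
# restarts EM from 10 random initializations and keeps the best fit.
model = mix(n_components=3, covariance_type="full",
            random_state=1234, n_init=10)
model.fit(X_train)              # X_train as in the question
y_predict = model.predict(X_train)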

2) The accuracy issue arises because GMM (like kmeans) simply fits n Gaussians and reports the "number" of the Gaussian component each point belongs to; this number differs from run to run. You can see in your predictions that the clusters are the same, but their labels are swapped: (1,2,0) -> (1,0,2) -> (0,1,2); the last combination happens to coincide with the proper classes, which is why you got a 98% score. If you plot them, you can see that the Gaussians themselves tend to stay the same across runs, e.g.

[figure: the fitted GMM component ellipses over the iris data]

You can use clustering metrics that take this into account:

>>> from sklearn import metrics
>>> [round(i, 5) for i in (metrics.homogeneity_score(y_predict, y_train),
...                        metrics.completeness_score(y_predict, y_train),
...                        metrics.v_measure_score(y_predict, y_train),
...                        metrics.adjusted_rand_score(y_predict, y_train),
...                        metrics.adjusted_mutual_info_score(y_predict, y_train))]
[0.86443, 0.8575, 0.86095, 0.84893, 0.85506]
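Alternatively, the permutation can be undone explicitly: find the one-to-one relabeling of the clusters that best matches the true classes, then score that. A sketch of that idea using scipy's linear_sum_assignment (the Hungarian solver) on the confusion matrix; y_train and y_predict are as above:

import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import confusion_matrix

def best_map_accuracy(y_true, y_pred):
    """Accuracy under the best one-to-one relabeling of the clusters."""
    cm = confusion_matrix(y_true, y_pred)     # cm[i, j]: true class i predicted as j
    rows, cols = linear_sum_assignment(-cm)   # maximize the total overlap
    return cm[rows, cols].sum() / cm.sum()

# Invariant to how the clusters happen to be numbered on a given run.
print(best_map_accuracy(y_train, y_predict))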

Plotting code, from https://scikit-learn.org/stable/auto_examples/mixture/plot_gmm_covariances.html (note the code differs between scikit-learn versions; if you use an older one you need to replace the make_ellipses function):

model = mix(n_components=len(np.unique(y_train)), covariance_type="full", verbose=0, n_init=100)
X_train = X_train.astype(float)
model.fit(X_train)
y_predict = model.predict(X_train)

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

def make_ellipses(gmm, ax):
    # Draw one ellipse per component over the first two feature dimensions.
    for n, color in enumerate(['navy', 'turquoise', 'darkorange']):
        if gmm.covariance_type == 'full':
            covariances = gmm.covariances_[n][:2, :2]
        elif gmm.covariance_type == 'tied':
            covariances = gmm.covariances_[:2, :2]
        elif gmm.covariance_type == 'diag':
            covariances = np.diag(gmm.covariances_[n][:2])
        elif gmm.covariance_type == 'spherical':
            covariances = np.eye(gmm.means_.shape[1]) * gmm.covariances_[n]
        v, w = np.linalg.eigh(covariances)
        u = w[0] / np.linalg.norm(w[0])
        angle = np.arctan2(u[1], u[0])
        angle = 180 * angle / np.pi  # convert to degrees
        v = 2. * np.sqrt(2.) * np.sqrt(v)
        ell = mpl.patches.Ellipse(gmm.means_[n, :2], v[0], v[1],
                                  angle=180 + angle, color=color)
        ell.set_clip_box(ax.bbox)
        ell.set_alpha(0.5)
        ax.add_artist(ell)


def plot(model, X, y, y_predict):

    h = plt.subplot(1, 1, 1)
    plt.subplots_adjust(bottom=.01, top=0.95, hspace=.15, wspace=.05,
                    left=.01, right=.99)
    make_ellipses(model, h)
    for n, color in enumerate(['navy', 'turquoise', 'darkorange']):
        plt.scatter(X[y == n][:, 0], X[y == n][:, 1], color=color, marker='x')
    plt.text(0.05, 0.9, 'Accuracy: %.1f' % (np.mean(y_predict == y) * 100),
             transform=h.transAxes)

    plt.show()
plot(model, X_train, y_train, y_predict)
Answered 2018-11-11T21:40:22.497

A late reply to your question, but it may benefit others: as @hellpanderr posted, use random_state=1 in GMM.

Answered 2021-06-09T10:18:14.090