
There should be no difference in results between the scikit-learn and dask-ml implementations when the same LogisticRegression is run on the same data.

Versions: scikit-learn=0.21.2, dask-ml=1.0.0

First, using the dask-ml LogisticRegression:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn import metrics
from dask_yarn import YarnCluster
from dask.distributed import Client
from dask_ml.linear_model import LogisticRegression
import dask.dataframe as dd
import dask.array as da
digits = load_digits()
x_train, x_test, y_train, y_test = train_test_split(digits.data, digits.target, test_size=0.25, random_state=0)
lr = LogisticRegression(solver_kwargs={"normalize":False})
lr.fit(x_train, y_train)
score = lr.score(x_test, y_test)
print(score)
predictions = lr.predict(x_test)
cm = metrics.confusion_matrix(y_test, predictions)
print(cm)

Now using the scikit-learn LogisticRegression:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn import metrics
from dask_yarn import YarnCluster
from dask.distributed import Client
from sklearn.linear_model import LogisticRegression
import dask.dataframe as dd
import dask.array as da
digits = load_digits()
x_train, x_test, y_train, y_test = train_test_split(digits.data, digits.target, test_size=0.25, random_state=0)
lr = LogisticRegression()
lr.fit(x_train, y_train)
score = lr.score(x_test, y_test)
print(score)
predictions = lr.predict(x_test)
cm = metrics.confusion_matrix(y_test, predictions)
print(cm)

Score and confusion matrix from scikit-learn:

0.9533333333333334
[[37  0  0  0  0  0  0  0  0  0]
 [ 0 39  0  0  0  0  2  0  2  0]
 [ 0  0 41  3  0  0  0  0  0  0]
 [ 0  0  1 43  0  0  0  0  0  1]
 [ 0  0  0  0 38  0  0  0  0  0]
 [ 0  1  0  0  0 47  0  0  0  0]
 [ 0  0  0  0  0  0 52  0  0  0]
 [ 0  1  0  1  1  0  0 45  0  0]
 [ 0  3  1  0  0  0  0  0 43  1]
 [ 0  0  0  1  0  1  0  0  1 44]]

Score and confusion matrix from dask-ml:

0.09555555555555556
[[ 0 37  0  0  0  0  0  0  0  0]
 [ 0 43  0  0  0  0  0  0  0  0]
 [ 0 44  0  0  0  0  0  0  0  0]
 [ 0 45  0  0  0  0  0  0  0  0]
 [ 0 38  0  0  0  0  0  0  0  0]
 [ 0 48  0  0  0  0  0  0  0  0]
 [ 0 52  0  0  0  0  0  0  0  0]
 [ 0 48  0  0  0  0  0  0  0  0]
 [ 0 48  0  0  0  0  0  0  0  0]
 [ 0 47  0  0  0  0  0  0  0  0]]

1 Answer


Dask-ml, as of dask_ml==1.0.0, does not support multiclass logistic regression. Using a slightly modified version of the original example, if you print the predictions from the fitted dask-ml LogisticRegression classifier, you will see that it returns a boolean array filled with True.

from sklearn.datasets import load_digits
from dask_ml.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
lr = LogisticRegression(solver_kwargs={"normalize": False})
lr.fit(X, y)
predictions = lr.predict(X)
print('predictions = {}'.format(predictions))

Output:

predictions = [ True  True  True ...  True  True  True]

This is why the dask-ml and scikit-learn confusion matrices differ from each other.
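Not part of the original answer, but a minimal sketch of the same point from the other direction: when the target is reduced to a single binary label (the case dask-ml 1.0.0 does support), the predictions should no longer be uniformly True. The is_zero recoding below is purely illustrative.

from sklearn.datasets import load_digits
from dask_ml.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
is_zero = (y == 0).astype(int)  # illustrative binary target: "is this digit a 0?"
lr = LogisticRegression(solver_kwargs={"normalize": False})
lr.fit(X, is_zero)
predictions = lr.predict(X)
print('predictions = {}'.format(predictions))  # expected: a mix of True and False, not all True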

There is a related open issue on GitHub: https://github.com/dask/dask-ml/issues/386
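The original answer does not propose a workaround, but as a hedged sketch: since the digits data fits in memory, scikit-learn's multiclass-capable LogisticRegression can be kept and its joblib-based parallelism pointed at the Dask cluster. This mainly pays off for estimators or meta-estimators that actually use joblib internally (for example a grid search); the Client() call and max_iter=1000 below are illustrative assumptions, not part of the answer.

import joblib
from dask.distributed import Client
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

client = Client()  # assumption: a local cluster; could also be Client(YarnCluster(...)) as in the question

X, y = load_digits(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

lr = LogisticRegression(solver="lbfgs", multi_class="auto", max_iter=1000)
with joblib.parallel_backend("dask"):  # route any joblib-backed work to the Dask scheduler
    lr.fit(x_train, y_train)
print(lr.score(x_test, y_test))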

Answered 2019-08-06T02:41:58.227