
I recently discovered this amazing library for ML interpretability. I decided to build a simple xgboost classifier using a toy dataset from sklearn and draw a force_plot.

To understand the plot, the library's documentation says:

The above explanation shows features each contributing to push the model output from the base value (the average model output over the training dataset we passed) to the model output. Features pushing the prediction higher are shown in red, those pushing the prediction lower are in blue (these force plots are introduced in our Nature BME paper).

So it seems to me that base_value should be the same as clf.predict(X_train).mean(), which equals 0.637. However, that is not what the plot shows; the number is not even within [0, 1]. I tried taking the log in different bases (10, e, 2), assuming it would be some kind of monotonic transformation... but still no luck. How can I get this base_value?

!pip install shap

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
import pandas as pd
import shap

X, y = load_breast_cancer(return_X_y=True)
X = pd.DataFrame(data=X)
y = pd.DataFrame(data=y)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_train, y_train)

print(clf.predict(X_train).mean())

# load JS visualization code to notebook
shap.initjs()

explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_train)

# visualize the first prediction's explanation (use matplotlib=True to avoid Javascript)
shap.force_plot(explainer.expected_value, shap_values[0,:], X_train.iloc[0,:])

1 Answer


To get to base_value in raw space (when link="identity"), you need to unwind class labels --> to probabilities --> to raw scores. Note that the default loss is "deviance", so the raw score is the inverse sigmoid (logit) of the probability:

import numpy as np

# probabilities
y = clf.predict_proba(X_train)[:,1]
# raw scores, default link="identity"
y_raw = np.log(y/(1-y))
# expected raw score
print(np.mean(y_raw))
print(np.isclose(explainer.expected_value, np.mean(y_raw), 1e-12))
2.065861773054686
[ True]

The relevant plot for the 0th data point, in raw space:

shap.force_plot(explainer.expected_value[0], shap_values[0,:], X_train.iloc[0,:], link="identity")

[image: force plot in raw space]
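As a quick sanity check (my addition, not part of the original answer): SHAP values are additive in raw space, so the base value plus the sum of the per-feature contributions for a row should reproduce the model's raw score, which for GradientBoostingClassifier is what clf.decision_function returns:

import numpy as np

# model's raw (log-odds) prediction for the 0th training row
raw_pred = clf.decision_function(X_train)[0]

# SHAP additivity: base value + sum of per-feature contributions
reconstructed = explainer.expected_value[0] + shap_values[0, :].sum()

print(np.isclose(raw_pred, reconstructed))  # expect True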

If you wish to switch to the sigmoid probability space (link="logit"):

from scipy.special import expit, logit
# probabilities
y = clf.predict_proba(X_train)[:,1]
# expected raw base value
y_raw = logit(y).mean()
# expected probability, i.e. base value in probability space
print(expit(y_raw))
0.8875405774316522

The relevant plot for the 0th data point, in probability space:
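The call that produces it is the same as before, just with link="logit" (it also appears in the full example below):

shap.force_plot(explainer.expected_value[0], shap_values[0,:], X_train.iloc[0,:], link="logit")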

[image: force plot in probability space]

Note that base_value, from shap's point of view, is what they call the baseline probability if no data is available; it is not what a reasonable person would define in the absence of independent variables (which would be 0.6373626373626373 in this case).
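To see that contrast concretely (a small sketch of my own, reusing y_train and explainer from the code above):

from scipy.special import expit

# the "reasonable person" baseline: class frequency, no features used
print(y_train.values.mean())               # 0.6373626373626373

# shap's baseline, mapped from raw space to probability space
print(expit(explainer.expected_value[0]))  # 0.8875405774316522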


Fully reproducible example:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
import pandas as pd
import shap
print(shap.__version__)

X, y = load_breast_cancer(return_X_y=True)
X = pd.DataFrame(data=X)
y = pd.DataFrame(data=y)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_train, y_train.values.ravel())

# load JS visualization code to notebook
shap.initjs()

explainer = shap.TreeExplainer(clf, model_output="raw")
shap_values = explainer.shap_values(X_train)

from scipy.special import expit, logit
# probabilities
y = clf.predict_proba(X_train)[:,1]
# expected raw base value
y_raw = logit(y).mean()
# expected probability, i.e. base value in probability space
print("Expected raw score (before sigmoid):", y_raw)
print("Expected probability:", expit(y_raw))

# visualize the first prediction's explanation (use matplotlib=True to avoid Javascript)
shap.force_plot(explainer.expected_value[0], shap_values[0,:], X_train.iloc[0,:], link="logit")

Output:

0.36.0
Expected raw score (before sigmoid): 2.065861773054686
Expected probability: 0.8875405774316522

[image: force plot in probability space from the full example]

Answered 2020-11-03T04:53:15.927