
I have an MLPRegressor that works very well with my dataset. Here is a stripped-down version of my code, with some unnecessary things removed:

from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, StandardScaler, RobustScaler
from sklearn import preprocessing
import pandas as pd
import numpy as np
from sklearn import tree
from sklearn.tree import export_graphviz
from datetime import datetime

def str_to_num(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.fit_transform(arr)
    return new_arr

def compare_values(arr1, arr2):
    thediff = 0
    thediffs = []
    for thing1, thing2 in zip(arr1, arr2):
        thediff = abs(thing1 - thing2)
        thediffs.append(thediff)

    return thediffs

def minmaxscale(data):
    scaler = MinMaxScaler()
    df_scaled = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
    return df_scaled

data = pd.read_csv('reg.csv')
label = data['TOTAL']
data = data.drop('TOTAL', axis=1)
data = minmaxscale(data)

mlp = MLPRegressor(
    activation = 'tanh',
    alpha = 0.005,
    learning_rate = 'invscaling',
    learning_rate_init = 0.01,
    max_iter = 200,
    momentum = 0.9,
    solver = 'lbfgs',
    warm_start = True
)

X_train, X_test, y_train, y_test = train_test_split(data, label, test_size = 0.2)
mlp.fit(X_train, y_train)
preds = mlp.predict(X_test)
score = compare_values(y_test, preds)
print("Score: ", np.average(score))

And it works great! It produces: Score: 7.246851606714535
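
For reference, that score is just the average absolute difference between predictions and targets, i.e. mean absolute error, so it should match sklearn's built-in metric:

from sklearn.metrics import mean_absolute_error

# Equivalent to np.average(compare_values(y_test, preds))
print("MAE:", mean_absolute_error(y_test, preds))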

However, I would like to look at feature importance for this model. I know that isn't usually the point of a neural network, but there is a business reason for it, so it's necessary. I discovered LIME through the LIME paper and want to use it. Since this is regression, I tried to follow the example here.

So I added the following lines:

import lime
import lime.lime_tabular

categorical_features = np.argwhere(np.array([len(set(data[:,x])) for x in range(data.shape[1])]) <= 10).flatten()

explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train, 
    feature_names=X_train.columns, 
    class_names=['TOTAL'], 
    verbose=True,
    categorical_features = categorical_features, 
    mode='regression')

But now I get this error:

Traceback (most recent call last):
  File "c:\Users\jerry\Desktop\mlp2.py", line 65, in <module>
    categorical_features = np.argwhere(np.array([len(set(data[:,x])) for x in range(data.shape[1])]) <= 10).flatten()
  File "c:\Users\J39304\Desktop\mlp2.py", line 65, in <listcomp>
    categorical_features = np.argwhere(np.array([len(set(data[:,x])) for x in range(data.shape[1])]) <= 10).flatten()
  File "C:\Python35-32\lib\site-packages\pandas\core\frame.py", line 2927, in __getitem__
    indexer = self.columns.get_loc(key)
  File "C:\Python35-32\lib\site-packages\pandas\core\indexes\base.py", line 2657, in get_loc
    return self._engine.get_loc(key)
  File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
  File "pandas\_libs\index.pyx", line 110, in pandas._libs.index.IndexEngine.get_loc
TypeError: '(slice(None, None, None), 0)' is an invalid key

Why am I getting this error, and what should I do? I don't understand how to integrate LIME correctly.

I've seen others with this problem, but I can't figure out how to solve it.


1 Answer


I needed to convert everything to numpy arrays first:

class_names = X_train.columns
X_train = X_train.to_numpy()
X_test = X_test.to_numpy()
y_train = y_train.to_numpy()
y_test = y_test.to_numpy()
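
The original error came from indexing a pandas DataFrame with NumPy-style data[:, x]; a DataFrame's __getitem__ doesn't accept a (slice, int) tuple, which is exactly the TypeError in the traceback. Once the data is a plain ndarray, that kind of indexing is fine. A minimal illustration, using a throwaway DataFrame just for demonstration:

import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})

# df[:, 0]                       # TypeError: '(slice(None, None, None), 0)' is an invalid key
print(df.iloc[:, 0].to_numpy())  # positional column access on a DataFrame goes through .iloc
print(df.to_numpy()[:, 0])       # or convert to an ndarray first, then index normally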

Then from there, feed it to the explainer:

explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train, 
    feature_names=class_names, 
    class_names=['TOTAL'], 
    verbose=True, 
    mode='regression')

exp = explainer.explain_instance(X_test[5], mlp.predict)
exp = exp.as_list()
print(exp)
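
If you still want the categorical_features argument from the original attempt, the same computation works once X_train is an ndarray (numpy-style [:, x] column indexing is valid there). A sketch, keeping the original assumption that columns with 10 or fewer distinct values are categorical:

# Recompute on the ndarray, where numpy-style column indexing works
categorical_features = np.argwhere(
    np.array([len(set(X_train[:, x])) for x in range(X_train.shape[1])]) <= 10
).flatten()

explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train,
    feature_names=class_names,
    class_names=['TOTAL'],
    categorical_features=categorical_features,
    verbose=True,
    mode='regression')
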
answered 2019-08-30 at 14:09