
I am working on a logistic regression case where the target variable is imbalanced. To address this I used SMOTE (Synthetic Minority Over-sampling Technique), but every time I run the regression model I get different numbers in my confusion matrix. I set the random_state parameter when calling both SMOTE and LogisticRegression, yet it doesn't help. Even my features are identical on every iteration. On one run I got my best recall of 0.81 and an AUC of 0.916, but those values never reappear. In some runs the false positive and false negative counts shoot up drastically, which suggests a very poor classifier.

Please point out what I am doing wrong here; the code snippet is below.

# Imports
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

# Feature Selection
features = [ 'FEMALE','MALE','SINGLE','UNDER_WEIGHT','OBESE','PROFESSION_ANYS',
            'PROFESSION_PROF_UNKNOWN']

# Set X and Y Variables
X5 = dataframe[features]

# Target variable
Y5 = dataframe['PURCHASE']

# Train/test split, then oversample the training data with SMOTE
from imblearn.over_sampling import SMOTE
os = SMOTE(random_state = 4)

X5_train, X5_test, Y5_train, Y5_test = train_test_split(X5, Y5, test_size=0.20)
# fit_sample is the older imblearn API; newer releases name it fit_resample
os_data_X5, os_data_Y5 = os.fit_sample(X5_train, Y5_train)
columns = X5_train.columns

os_data_X5 = pd.DataFrame(data = os_data_X5, columns = columns )
os_data_Y5 = pd.DataFrame(data = os_data_Y5, columns = ['PURCHASE'])

# Instantiate Logistic Regression model (L1 penalty, balanced class weights;
# liblinear solver so the L1 penalty is supported)
logreg_5 = LogisticRegression(random_state = 4, penalty='l1', solver='liblinear', class_weight = 'balanced')

# Fit the model on the oversampled training data (ravel the target to a 1-D array)
logreg_5.fit(os_data_X5, os_data_Y5.values.ravel())

# Make predictions on test data set
Y5_pred = logreg_5.predict(X5_test)

# Make Confusion Matrix to compare results against actual values
cnf_matrix = metrics.confusion_matrix(Y5_test, Y5_pred)
cnf_matrix
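
For reference, the recall and AUC figures I quote above come from calls along these lines (a minimal sketch using the standard sklearn.metrics functions; this part is not in the snippet above):

# Recall and AUC on the test set (sketch; not part of the original snippet)
recall = metrics.recall_score(Y5_test, Y5_pred)
auc = metrics.roc_auc_score(Y5_test, logreg_5.predict_proba(X5_test)[:, 1])
print(recall, auc)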

