
I know you can set scale_pos_weight for an imbalanced binary dataset. But how do you handle a multi-class problem with an imbalanced dataset? I have gone through https://datascience.stackexchange.com/questions/16342/unbalanced-multiclass-data-with-xgboost/18823 but I don't quite understand how to set the weight parameter in the DMatrix.

Can anyone explain this in detail?


1 Answer


For an imbalanced dataset, I used per-sample weights in XGBoost, where the weights are an array that assigns each sample a weight according to the class it belongs to.

import numpy as np

def CreateBalancedSampleWeights(y_train, largest_class_weight_coef):
    # y_train is assumed to contain integer class labels 0..n_classes-1
    classes = np.unique(y_train, axis = 0)
    classes.sort()
    class_samples = np.bincount(y_train)
    total_samples = class_samples.sum()
    n_classes = len(class_samples)
    # each class weight is inversely proportional to its frequency
    weights = total_samples / (n_classes * class_samples * 1.0)
    class_weight_dict = {key : value for (key, value) in zip(classes, weights)}
    # scale down the majority class (assumed here to be classes[1]) by its frequency
    class_weight_dict[classes[1]] = class_weight_dict[classes[1]] * largest_class_weight_coef
    # map every training sample to the weight of its class
    sample_weights = [class_weight_dict[y] for y in y_train]
    return sample_weights
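
For intuition, here is a quick toy run (hypothetical labels, not from the original answer) where class 1 is the majority class, matching the code's assumption that classes[1] is the most frequent one:

    # hypothetical toy labels: class 1 appears 5 times out of 8 samples
    y_toy = np.array([0, 1, 1, 1, 1, 1, 2, 2])
    coef_toy = np.bincount(y_toy).max() / len(y_toy)   # 0.625

    toy_weights = CreateBalancedSampleWeights(y_toy, coef_toy)
    print(toy_weights)  # minority classes 0 and 2 get larger weights than class 1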

Just pass the target column and the occurrence rate of the most frequent class (if the most frequent class accounts for 75 out of 100 samples, that is 0.75):

    largest_class_weight_coef = max(df_copy['Category'].value_counts().values) / df_copy.shape[0]

    # pass y_train as a numpy array
    weight = CreateBalancedSampleWeights(y_train, largest_class_weight_coef)

    # And then use it like this: sample weights go to fit(), not the constructor
    from xgboost import XGBClassifier

    xg = XGBClassifier(n_estimators=1000, max_depth=20)
    xg.fit(X_train, y_train, sample_weight=weight)  # X_train is the matching feature matrix
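
The question also mentions the weight argument of DMatrix: with the native training API, the same per-sample weights can be attached to the DMatrix directly. A minimal sketch, assuming X_train, y_train and the weight list from above, and (as an assumption) three classes:

    import xgboost as xgb

    # attach the per-sample weights to the DMatrix itself
    dtrain = xgb.DMatrix(X_train, label=y_train, weight=weight)

    # num_class=3 is an assumption; set it to the number of classes in your data
    params = {'objective': 'multi:softprob', 'num_class': 3, 'max_depth': 20}
    booster = xgb.train(params, dtrain, num_boost_round=1000)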

That's it :)

Answered 2019-12-05T12:05:41.933