
I fully realize I may end up embarrassed for having missed something obvious, but this has me stuck. I'm tuning an LGBM model with Optuna, and my notebook is flooded with warning messages. How can I suppress them while keeping errors (and, ideally, the trial results)? Code below:

import optuna
import warnings

from lightgbm import LGBMRegressor
from sklearn.metrics import mean_squared_error

optuna.logging.set_verbosity(optuna.logging.ERROR)
warnings.filterwarnings('ignore')

def objective(trial):    
    list_bins = [25, 50, 75, 100, 125, 150, 175, 200, 225, 250,500,750,1000]   

    param = {
        'lambda_l1': trial.suggest_loguniform('lambda_l1', 1e-8, 10.0),
        'lambda_l2': trial.suggest_loguniform('lambda_l2', 1e-8, 10.0),
        'colsample_bytree': trial.suggest_categorical('colsample_bytree', [0.3,0.4,0.5,0.6,0.7,0.8,0.9, 1.0]),
        'subsample': trial.suggest_categorical('subsample', [0.4,0.5,0.6,0.7,0.8,1.0]),
        'learning_rate': trial.suggest_categorical('learning_rate', [0.006,0.008,0.01,0.014,0.017,0.02,0.05]),
        'max_depth': trial.suggest_categorical('max_depth', [10,20,50,100]),
        'num_leaves' : trial.suggest_int('num_leaves', 2, 1000),
        'feature_fraction': trial.suggest_uniform('feature_fraction', 0.1, 1.0),
        'bagging_fraction': trial.suggest_uniform('bagging_fraction', 0.1, 1.0),
        'bagging_freq': trial.suggest_int('bagging_freq', 1, 15),
        'min_child_samples': trial.suggest_int('min_child_samples', 1, 300),
        'cat_smooth' : trial.suggest_int('cat_smooth', 1, 256),
        'cat_l2' : trial.suggest_int('cat_l2', 1, 256),
        'max_bin': trial.suggest_categorical('max_bin', list_bins)
    }
    

    # X_train, y_train, X_test, y_test and cat_features are defined elsewhere
    model = LGBMRegressor(**param, objective='regression', metric='rmse',
                          boosting_type='gbdt', verbose=-1, random_state=42,
                          n_estimators=20000,
                          cat_feature=[x for x in range(len(cat_features))])

    model.fit(X_train, y_train, eval_set=[(X_test, y_test)],
              early_stopping_rounds=200, verbose=False)
    
    preds = model.predict(X_test)
    
    rmse = mean_squared_error(y_test, preds, squared=False)
    
    return rmse


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=300)

print("Number of finished trials: {}".format(len(study.trials)))

print("Best trial:")
trial = study.best_trial

print("  Value: {}".format(trial.value))

print("  Params: ")
for key, value in trial.params.items():
    print("    {}: {}".format(key, value))
    

What I'm trying to suppress is:

[LightGBM] [Warning] feature_fraction is set=0.7134336417771784, colsample_bytree=0.4 will be ignored. Current value: feature_fraction=0.7134336417771784
[LightGBM] [Warning] lambda_l1 is set=0.0001621506831365743, reg_alpha=0.0 will be ignored. Current value: lambda_l1=0.0001621506831365743
[LightGBM] [Warning] bagging_fraction is set=0.8231149550002105, subsample=0.5 will be ignored. Current value: bagging_fraction=0.8231149550002105
[LightGBM] [Warning] bagging_freq is set=4, subsample_freq=0 will be ignored. Current value: bagging_freq=4
[LightGBM] [Warning] lambda_l2 is set=0.00010964883369301453, reg_lambda=0.0 will be ignored. Current value: lambda_l2=0.00010964883369301453
[LightGBM] [Warning] feature_fraction is set=0.3726043373358532, colsample_bytree=0.3 will be ignored. Current value: feature_fraction=0.3726043373358532
[LightGBM] [Warning] lambda_l1 is set=1.4643061619613147, reg_alpha=0.0 will be ignored. Current value: lambda_l1=1.4643061619613147

2 Answers


I know this is a late response, but I recently ran into a similar problem using Optuna with XGBoost, and I was able to turn the warnings off with simplefilter, like this:

from warnings import simplefilter
simplefilter("ignore", category=RuntimeWarning)

I see you are already using the warnings module with 'ignore'. I didn't do that, but simplefilter worked for me.
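
If a global filter feels too broad, here is a minimal sketch of a scoped variant (my own addition, not part of the original answer; noisy_step is a hypothetical stand-in for the training call) that confines the suppression to a single trial using warnings.catch_warnings:

import warnings

def noisy_step():
    # Hypothetical stand-in for model training; emits a RuntimeWarning.
    warnings.warn("example warning", RuntimeWarning)
    return 0.0

def objective(trial):
    # Ignore RuntimeWarning only while this trial runs; the previous
    # filter state is restored when the with-block exits.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", category=RuntimeWarning)
        return noisy_step()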

Answered 2021-08-21T01:33:13.737

You should pass 'verbosity': -1 in the params dict that you later pass to lightgbm.train(). In addition, passing verbose_eval=False to lightgbm.train() is also necessary.

Like this:

params = {
    ...
    'verbosity': -1
}
gbm = lgbm.train(
    params,
    ...
    verbose_eval=False,
    ...)
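
Note that the "[LightGBM] [Warning] ..." lines in the question are printed by LightGBM's native backend, not through Python's warnings machinery, which is why warnings.filterwarnings alone does not silence them and verbosity -1 is needed. Since the question uses the sklearn-style LGBMRegressor rather than lightgbm.train(), here is a minimal self-contained sketch of the same idea (the synthetic make_regression data, the trimmed search space, and the smaller n_estimators are my own placeholders, not the questioner's setup):

import optuna
from lightgbm import LGBMRegressor
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the questioner's dataset.
X, y = make_regression(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

def objective(trial):
    param = {
        'learning_rate': trial.suggest_float('learning_rate', 0.006, 0.05, log=True),
        'num_leaves': trial.suggest_int('num_leaves', 2, 256),
    }
    # verbose=-1 is forwarded to LightGBM's verbosity parameter and
    # suppresses the native "[LightGBM] [Warning] ..." output.
    model = LGBMRegressor(**param, n_estimators=200, random_state=42, verbose=-1)
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    return mean_squared_error(y_test, preds, squared=False)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=5)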

Answered 2022-01-10T00:05:59.403