I'm using Optuna + CatBoost to tune and train some boosted trees. I'd like to know the correct way to optimize my hyperparameters so that accuracy is maximized (rather than log loss minimized).
My code currently looks like this:
import catboost as cb
import numpy as np
import optuna

def objective(trial):
    param = {
        'depth': trial.suggest_int('depth', 4, 9),
        'learning_rate': trial.suggest_float('learning_rate', 1e-5, 1e-1),
        'iterations': trial.suggest_int('iterations', 100, 7500),
        'loss_function': 'Logloss',
        'custom_loss': 'Accuracy'
    }
    for step in range(50):
        cv = cb.cv(train_pool, param, fold_count=3, verbose=1000)
        acc = np.max(cv['test-Accuracy-mean'])
        trial.report(acc, step)
        if trial.should_prune():
            raise optuna.TrialPruned()
    return acc

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=50)
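For reference, I've also been considering a variant that keeps Logloss as the training objective but sets eval_metric='Accuracy', runs the 3-fold CV once per trial instead of in a loop, and samples the learning rate on a log scale. This is just a rough sketch of what I have in mind, reusing the same train_pool as above:

import catboost as cb
import numpy as np
import optuna

def objective(trial):
    param = {
        'depth': trial.suggest_int('depth', 4, 9),
        # log scale, since useful learning rates span several orders of magnitude
        'learning_rate': trial.suggest_float('learning_rate', 1e-5, 1e-1, log=True),
        'iterations': trial.suggest_int('iterations', 100, 7500),
        'loss_function': 'Logloss',   # trees are still fit on log loss
        'eval_metric': 'Accuracy',    # but CV evaluation is reported as accuracy
    }
    # one 3-fold CV per trial; score = best mean test accuracy over boosting iterations
    cv = cb.cv(train_pool, param, fold_count=3, verbose=False)
    return np.max(cv['test-Accuracy-mean'])

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=50)

As far as I can tell, the cv output column names follow the metric names, so 'test-Accuracy-mean' should still be present with eval_metric='Accuracy', but I'm not sure whether this is actually any better than using custom_loss, which is part of what I'm asking.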
Is this the right way to tune hyperparameters so that accuracy is maximized rather than log loss minimized?