
There are a few other questions similar to this one, but none of their solutions seem to apply here. I am using LightGBM together with Scikit-Optimize's BayesSearchCV.

full_pipeline = skl.Pipeline(steps=[('preprocessor', pre_processor),
                                    ('estimator', lgbm.sklearn.LGBMClassifier())])
scorer = make_scorer(fl.lgb_focal_f1_score)
lgb_tuner = sko.BayesSearchCV(full_pipeline, hyper_space, cv=5, refit=True, n_iter=num_calls, scoring=scorer)
lgb_tuner.fit(balanced_xtrain, balanced_ytrain)
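
The aliases skl, lgbm, sko and fl above correspond roughly to imports along these lines (fl is assumed to be a custom focal-loss module, and its name below is only a placeholder; pre_processor, hyper_space and num_calls are defined elsewhere):

import sklearn.pipeline as skl
import lightgbm as lgbm
import skopt as sko
from sklearn.metrics import make_scorer
import focal_loss as fl  # placeholder name for the custom focal-loss scorer module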

Training runs for a while and then fails with the following error:

Traceback (most recent call last):
  File "/var/training.py", line 134, in <module>
    lgb_tuner.fit(balanced_xtrain, balanced_ytrain)
  File "/usr/local/lib/python3.6/site-packages/skopt/searchcv.py", line 694, in fit
    groups=groups, n_points=n_points_adjusted
  File "/usr/local/lib/python3.6/site-packages/skopt/searchcv.py", line 579, in _step
    self._fit(X, y, groups, params_dict)
  File "/usr/local/lib/python3.6/site-packages/skopt/searchcv.py", line 423, in _fit
    for parameters in parameter_iterable
  File "/usr/local/lib/python3.6/site-packages/joblib/parallel.py", line 1041, in __call__
    if self.dispatch_one_batch(iterator):
  File "/usr/local/lib/python3.6/site-packages/joblib/parallel.py", line 859, in dispatch_one_batch
    self._dispatch(tasks)
  File "/usr/local/lib/python3.6/site-packages/joblib/parallel.py", line 777, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)
  File "/usr/local/lib/python3.6/site-packages/joblib/_parallel_backends.py", line 208, in apply_async
    result = ImmediateResult(func)
  File "/usr/local/lib/python3.6/site-packages/joblib/_parallel_backends.py", line 572, in __init__
    self.results = batch()
  File "/usr/local/lib/python3.6/site-packages/joblib/parallel.py", line 263, in __call__
    for func, args, kwargs in self.items]
  File "/usr/local/lib/python3.6/site-packages/joblib/parallel.py", line 263, in <listcomp>
    for func, args, kwargs in self.items]
  File "/usr/local/lib/python3.6/site-packages/sklearn/model_selection/_validation.py", line 531, in _fit_and_score
    estimator.fit(X_train, y_train, **fit_params)
  File "/usr/local/lib/python3.6/site-packages/sklearn/pipeline.py", line 335, in fit
    self._final_estimator.fit(Xt, y, **fit_params_last_step)
  File "/usr/local/lib/python3.6/site-packages/lightgbm/sklearn.py", line 857, in fit
    callbacks=callbacks, init_model=init_model)
  File "/usr/local/lib/python3.6/site-packages/lightgbm/sklearn.py", line 617, in fit
    callbacks=callbacks, init_model=init_model)
  File "/usr/local/lib/python3.6/site-packages/lightgbm/engine.py", line 252, in train
    booster.update(fobj=fobj)
  File "/usr/local/lib/python3.6/site-packages/lightgbm/basic.py", line 2467, in update
    return self.__boost(grad, hess)
  File "/usr/local/lib/python3.6/site-packages/lightgbm/basic.py", line 2503, in __boost
    ctypes.byref(is_finished)))
  File "/usr/local/lib/python3.6/site-packages/lightgbm/basic.py", line 55, in _safe_call
    raise LightGBMError(decode_string(_LIB.LGBM_GetLastError()))
lightgbm.basic.LightGBMError: Check failed: (best_split_info.left_count) > (0) at /__w/1/s/python-package/compile/src/treelearner/serial_tree_learner.cpp, line 651 .

Some answers to similar questions suggest this can be caused by running on a GPU, but I have no GPU available. I don't know what else could cause it or how to go about fixing it. Can anyone suggest anything?


1 Answer


I believe this was caused by a mistake in my hyperparameter bounds, which allowed one hyperparameter to be set to zero when it never should be, although I'm not sure which one it was.
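
For anyone hitting the same error: the practical fix is to make sure every lower bound in the search space stays at or above the parameter's legal minimum, so the optimiser can never propose zero. A rough sketch of what such a space could look like (the parameter names and ranges below are illustrative, not the exact space used here):

from skopt.space import Integer, Real

# Illustrative search space for BayesSearchCV over a Pipeline; keys are
# prefixed with the pipeline step name ('estimator__...').  Parameters that
# must stay strictly positive get lower bounds at or above their legal minimum.
hyper_space = {
    'estimator__num_leaves':        Integer(2, 256),   # must be >= 2
    'estimator__min_child_samples': Integer(1, 100),   # must be >= 1
    'estimator__learning_rate':     Real(1e-3, 0.3, prior='log-uniform'),
    'estimator__subsample':         Real(0.5, 1.0),    # 0 would be invalid
    'estimator__colsample_bytree':  Real(0.5, 1.0),
}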

answered 2021-01-21T10:29:12.013