ray.tune.integration.xgboost.TuneReportCallback
- class ray.tune.integration.xgboost.TuneReportCallback(metrics: Optional[Union[str, List[str], Dict[str, str]]] = None, results_postprocessing_fn: Optional[Callable[[Dict[str, Union[float, List[float]]]], Dict[str, float]]] = None)
Bases: ray.tune.integration.xgboost.TuneCallback
XGBoost to Ray Tune reporting callback
Reports metrics to Ray Tune.
- Parameters
metrics – Metrics to report to Tune. If this is a list, each item describes the metric key reported to XGBoost, and it will be reported under the same name to Tune. If this is a dict, each key will be the name reported to Tune and the respective value will be the metric key reported to XGBoost. If this is None, all metrics will be reported to Tune under their default names as obtained from XGBoost. (A short sketch of the three forms follows this parameter list.)
results_postprocessing_fn – An optional Callable that takes in the dict that will be reported to Tune (after it has been flattened) and returns a modified dict that will be reported instead. Can be used, e.g., to average results across CV folds when using xgboost.cv. (A sketch of this follows the example below.)
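For illustration, here is a minimal sketch of how the three forms of metrics map to Tune metric names. The evaluation set name "eval" and the "logloss" metric are assumptions for this sketch, not fixed by the API:

from ray.tune.integration.xgboost import TuneReportCallback

# List form: the XGBoost metric key is also used as the Tune metric name.
TuneReportCallback(["eval-logloss"])

# Dict form: report the XGBoost key "eval-logloss" to Tune under the name "loss".
TuneReportCallback({"loss": "eval-logloss"})

# None (default): report every metric XGBoost produces under its default name.
TuneReportCallback()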
Example:
import xgboost as xgb

from ray.tune.integration.xgboost import TuneReportCallback

config = {
    # ...
    "eval_metric": ["auc", "logloss"]
}

# Report only log loss to Tune after each validation epoch:
bst = xgb.train(
    config,
    train_set,
    evals=[(test_set, "eval")],
    verbose_eval=False,
    callbacks=[TuneReportCallback({"loss": "eval-logloss"})])
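As a further illustration of results_postprocessing_fn, the following is a minimal sketch that averages per-fold results when using xgboost.cv. The helper average_cv_folds, the metric key "test-logloss", and the params/dtrain variables are assumptions for the sketch, not part of this API:

import numpy as np
import xgboost as xgb

from ray.tune.integration.xgboost import TuneReportCallback

def average_cv_folds(results):
    # Collapse any per-fold lists of values into a single mean per metric.
    return {key: float(np.mean(value)) for key, value in results.items()}

# `params` and `dtrain` are assumed to be defined elsewhere.
cv_results = xgb.cv(
    params,
    dtrain,
    nfold=5,
    verbose_eval=False,
    callbacks=[
        TuneReportCallback(
            {"mean_logloss": "test-logloss"},
            results_postprocessing_fn=average_cv_folds,
        )
    ])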