ray.tune.integration.xgboost.TuneReportCheckpointCallback

class ray.tune.integration.xgboost.TuneReportCheckpointCallback(*args: Any, **kwargs: Any)

Bases: TuneCallback

XGBoost callback to save checkpoints and report metrics.

Saves checkpoints after each validation step. Also reports metrics to Ray Train or Ray Tune.

Parameters:
  • metrics – Metrics to report to Tune. If this is a list, each item describes the metric key reported to XGBoost, and it will be reported under the same name to Tune. If this is a dict, each key will be the name reported to Tune and the respective value will be the metric key reported to XGBoost. If this is None, all metrics will be reported to Tune under their default names as obtained from XGBoost.

  • filename – Filename of the checkpoint within the checkpoint directory. Defaults to “checkpoint”.

  • frequency – How often to save checkpoints. Defaults to 0 (no checkpoints are saved during training). A checkpoint is always saved at the end of training.

  • results_postprocessing_fn – An optional Callable that takes in the dict that will be reported to Tune (after it has been flattened) and returns a modified dict that will be reported instead. Can be used, e.g., to average results across CV folds when using xgboost.cv (see the sketch after the example below).

Example:

import xgboost as xgb
from ray.tune.integration.xgboost import TuneReportCheckpointCallback

config = {
    # ...
    "eval_metric": ["auc", "logloss"]
}

# `train_set` and `test_set` are assumed to be xgboost.DMatrix objects.
# Report only log loss to Tune after each validation epoch.
# Save the checkpoint as `xgboost.mdl`.
bst = xgb.train(
    config,
    train_set,
    evals=[(test_set, "eval")],
    verbose_eval=False,
    callbacks=[TuneReportCheckpointCallback(
        {"loss": "eval-logloss"}, filename="xgboost.mdl")])

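For context, here is a minimal end-to-end sketch of how the metric reported by this callback drives a Tune search. The dataset, search space, and the train_fn name are illustrative choices, not part of this API.

import xgboost as xgb
from ray import tune
from ray.tune.integration.xgboost import TuneReportCheckpointCallback
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

def train_fn(config):
    # Illustrative data setup; any pair of DMatrix objects works here.
    data, labels = load_breast_cancer(return_X_y=True)
    train_x, test_x, train_y, test_y = train_test_split(
        data, labels, test_size=0.25)
    train_set = xgb.DMatrix(train_x, label=train_y)
    test_set = xgb.DMatrix(test_x, label=test_y)
    xgb.train(
        config,
        train_set,
        evals=[(test_set, "eval")],
        verbose_eval=False,
        # Report eval-logloss to Tune as "loss"; checkpoint every iteration.
        callbacks=[TuneReportCheckpointCallback(
            {"loss": "eval-logloss"}, frequency=1)])

tuner = tune.Tuner(
    train_fn,
    param_space={
        "objective": "binary:logistic",
        "eval_metric": ["logloss"],
        "max_depth": tune.randint(1, 9),
    },
    tune_config=tune.TuneConfig(metric="loss", mode="min", num_samples=4),
)
results = tuner.fit()
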
Methods