ray.tune.integration.xgboost.TuneReportCheckpointCallback

class ray.tune.integration.xgboost.TuneReportCheckpointCallback(metrics: Optional[Union[str, List[str], Dict[str, str]]] = None, filename: str = 'checkpoint', frequency: int = 5, results_postprocessing_fn: Optional[Callable[[Dict[str, Union[float, List[float]]]], float]] = None)

Bases: ray.tune.integration.xgboost.TuneCallback

XGBoost report and checkpoint callback

Saves checkpoints after each validation step. Also reports metrics to Tune, which is needed for checkpoint registration.

Parameters
  • metrics – Metrics to report to Tune. If this is a list, each item describes the metric key reported to XGBoost, and it will be reported under the same name to Tune. If this is a dict, each key will be the name reported to Tune and the respective value will be the metric key reported to XGBoost. If this is None, all metrics will be reported to Tune under their default names as obtained from XGBoost.

  • filename – Filename of the checkpoint within the checkpoint directory. Defaults to “checkpoint”.

  • frequency – How often to save checkpoints. By default, a checkpoint is saved every five iterations.

  • results_postprocessing_fn – An optional Callable that takes in the dict that will be reported to Tune (after it has been flattened) and returns a modified dict that will be reported instead. Can be used to, e.g., average results across CV folds when using xgboost.cv (see the sketch below).

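A minimal sketch of such a postprocessing function, assuming that per-fold results arrive as lists of floats in the flattened dict (verify the key layout against your own runs); the function name and usage are illustrative only:

from typing import Dict, List, Union

def average_cv_folds(
        results: Dict[str, Union[float, List[float]]]) -> Dict[str, float]:
    # Replace each per-fold list of metric values with its mean so that
    # a single scalar per metric is reported to Tune.
    return {
        key: sum(value) / len(value) if isinstance(value, list) else value
        for key, value in results.items()
    }

# Hypothetical usage:
# TuneReportCheckpointCallback(results_postprocessing_fn=average_cv_folds)
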
Example:

import xgboost as xgb
from ray.tune.integration.xgboost import TuneReportCheckpointCallback

config = {
    # ...
    "eval_metric": ["auc", "logloss"]
}

# `train_set` and `test_set` are xgb.DMatrix objects prepared beforehand.
# Report only log loss to Tune after each validation epoch.
# Save the model as `xgboost.mdl`.
bst = xgb.train(
    config,
    train_set,
    evals=[(test_set, "eval")],
    verbose_eval=False,
    callbacks=[TuneReportCheckpointCallback(
        {"loss": "eval-logloss"}, "xgboost.mdl")])
after_iteration(model: xgboost.core.Booster, epoch: int, evals_log: Dict)

Run after each iteration. Return True when training should stop.