ray.tune.integration.xgboost.TuneReportCheckpointCallback

class ray.tune.integration.xgboost.TuneReportCheckpointCallback(*args: Any, **kwargs: Any)

Bases: RayReportCallback

XGBoost callback to save checkpoints and report metrics for Ray Tune.

Parameters:
  • metrics – Metrics to report to Ray Tune. If this is a list, each item is a metric key reported by XGBoost, and it will be reported to Ray Tune under the same name. This can also be a dict of {<key-to-report>: <xgboost-metric-key>}, which can be used to rename XGBoost's default metrics.

  • filename – Customize the saved checkpoint file type by passing a filename. Defaults to “model.ubj”.

  • frequency – How often to save checkpoints, in terms of iterations. Defaults to 0 (no checkpoints are saved during training).

  • checkpoint_at_end – Whether to save a checkpoint at the end of training.

  • results_postprocessing_fn – An optional Callable that takes in the metrics dict that will be reported (after it has been flattened) and returns a modified dict. For example, this can be used to average results across CV folds when using xgboost.cv; see the sketch after this list.
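
For illustration, a minimal sketch of both accepted metrics forms plus a results_postprocessing_fn. The "-mean" key suffix below assumes xgboost.cv's aggregated metric naming (e.g. "test-logloss-mean") and is illustrative only, not something this callback guarantees:

from ray.tune.integration.xgboost import TuneReportCheckpointCallback

# List form: report XGBoost's "eval-logloss" under the same name.
TuneReportCheckpointCallback(metrics=["eval-logloss"], frequency=1)

# Dict form: report XGBoost's "eval-logloss" to Ray Tune as "loss".
TuneReportCheckpointCallback(metrics={"loss": "eval-logloss"}, frequency=1)

# Hypothetical postprocessing for xgboost.cv: keep only the aggregated
# means, dropping the standard deviations, before reporting to Ray Tune.
def keep_means(results: dict) -> dict:
    return {k: v for k, v in results.items() if k.endswith("-mean")}

TuneReportCheckpointCallback(
    metrics=["test-logloss-mean"],
    results_postprocessing_fn=keep_means,
)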

Examples

Reporting checkpoints and metrics to Ray Tune when running many independent xgboost trials (without data parallelism within a trial). To make the example run end to end, the sketch below fills in the training data with scikit-learn's breast cancer dataset; any DMatrix works in its place.

import xgboost
from sklearn.datasets import load_breast_cancer

from ray.tune import Tuner
from ray.tune.integration.xgboost import TuneReportCheckpointCallback

def train_fn(config):
    # Small, self-contained dataset; substitute your own DMatrix objects.
    data, labels = load_breast_cancer(return_X_y=True)
    train_set = xgboost.DMatrix(data[:400], label=labels[:400])
    eval_set = xgboost.DMatrix(data[400:], label=labels[400:])

    # Report log loss to Ray Tune after each validation epoch.
    xgboost.train(
        {"objective": "binary:logistic", "eval_metric": ["logloss"], **config},
        train_set,
        evals=[(eval_set, "eval")],
        callbacks=[
            TuneReportCheckpointCallback(
                metrics={"loss": "eval-logloss"}, frequency=1
            )
        ],
    )

tuner = Tuner(train_fn)
results = tuner.fit()
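
After tuning, the best trial's booster can be restored from its checkpoint. A minimal sketch continuing the example above, assuming the checkpoint directory contains the default "model.ubj" file written by this callback and that "loss" was the reported metric:

import os

import xgboost

best_result = results.get_best_result(metric="loss", mode="min")
with best_result.checkpoint.as_directory() as checkpoint_dir:
    # Load the saved model file back into an XGBoost booster.
    bst = xgboost.Booster(model_file=os.path.join(checkpoint_dir, "model.ubj"))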

PublicAPI (beta): This API is in beta and may change before becoming stable.