ray.train.xgboost.XGBoostTrainer

class ray.train.xgboost.XGBoostTrainer(*args, **kwargs)

Bases: XGBoostTrainer

A Trainer for data parallel XGBoost training.

This Trainer runs the XGBoost training loop in a distributed manner using multiple Ray Actors.

Note

XGBoostTrainer does not modify or otherwise alter the workings of the XGBoost distributed training algorithm. Ray only provides orchestration, data ingest, and fault tolerance. For more information on XGBoost distributed training, refer to the XGBoost documentation.

Example

import ray
from ray.train import ScalingConfig
from ray.train.xgboost import XGBoostTrainer

# Build a small in-memory dataset with feature column "x" and label column "y".
train_dataset = ray.data.from_items(
    [{"x": x, "y": x + 1} for x in range(32)]
)
trainer = XGBoostTrainer(
    label_column="y",
    params={"objective": "reg:squarederror"},
    scaling_config=ScalingConfig(num_workers=3),
    datasets={"train": train_dataset},
)
result = trainer.fit()
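
The returned Result object exposes the last reported metrics and the final checkpoint. A minimal follow-up sketch (the exact metric keys depend on the objective and eval_metric, so the key named in the comment is an assumption):

print(result.metrics)           # last reported metrics, e.g. a "train-rmse" key
checkpoint = result.checkpoint  # final checkpoint, if one was saved
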
Parameters:
  • datasets – The Ray Datasets to use for training and validation. Must include a “train” key denoting the training dataset. All non-training datasets are used as separate validation sets, each reporting a separate metric (see the sketch after this parameter list).

  • label_column – Name of the label column. A column with this name must be present in the training dataset.

  • params – XGBoost training parameters. Refer to XGBoost documentation for a list of possible parameters.

  • num_boost_round – Target number of boosting iterations (trees in the model). Note that unlike in xgboost.train, this is the target number of trees: if you set num_boost_round=10 and pass a model that has already been trained for 5 iterations, it is trained for 5 more iterations, not 10 more.

  • scaling_config – Configuration for how to scale data parallel training.

  • run_config – Configuration for the execution of the training run.

  • dataset_config – The configuration for ingesting the input datasets. By default, all the Ray Datasets are split equally across workers. See DataConfig for more details.

  • resume_from_checkpoint – A checkpoint to resume training from.

  • metadata – Dict that should be made available in checkpoint.get_metadata() for checkpoints saved from this Trainer. Must be JSON-serializable.

  • **train_kwargs – Additional kwargs passed to xgboost.train() function.
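
As a hedged illustration of how the datasets and num_boost_round parameters interact (the "valid" key and the 80/20 split below are illustrative choices, not required names):

import ray
from ray.train import ScalingConfig
from ray.train.xgboost import XGBoostTrainer

dataset = ray.data.from_items([{"x": x, "y": x + 1} for x in range(32)])
# Any key other than "train" becomes a separate validation set,
# each reporting its own metrics.
train_ds, valid_ds = dataset.train_test_split(test_size=0.2)
trainer = XGBoostTrainer(
    label_column="y",
    params={"objective": "reg:squarederror"},
    num_boost_round=20,  # target total number of trees in the final model
    scaling_config=ScalingConfig(num_workers=2),
    datasets={"train": train_ds, "valid": valid_ds},
)
result = trainer.fit()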

PublicAPI (beta): This API is in beta and may change before becoming stable.

Methods

as_trainable

Converts self to a tune.Trainable class.

can_restore

Checks whether a given directory contains a restorable Train experiment.

fit

Runs training.

get_dataset_config

Returns a copy of this Trainer's final dataset configs.

get_model

Retrieve the XGBoost model stored in this checkpoint (usage sketch after this methods list).

preprocess_datasets

Deprecated.

restore

Restores a DataParallelTrainer from a previously interrupted or failed run (usage sketch after this methods list).

setup

Called during fit() to perform initial setup on the Trainer.
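
A minimal usage sketch for get_model, assuming a completed run whose Result holds a checkpoint (result as in the fit() example above):

# Retrieve the trained xgboost.Booster from the final checkpoint.
booster = XGBoostTrainer.get_model(result.checkpoint)

A minimal sketch for can_restore and restore, assuming a hypothetical experiment directory (the path below is illustrative; datasets that cannot be serialized may need to be re-passed via restore's datasets argument):

experiment_path = "~/ray_results/my_xgb_experiment"  # hypothetical path
if XGBoostTrainer.can_restore(experiment_path):
    trainer = XGBoostTrainer.restore(experiment_path)
    result = trainer.fit()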