ray.train.lightgbm.LightGBMTrainer
- class ray.train.lightgbm.LightGBMTrainer(*args, **kwargs)
Bases: LightGBMTrainer
A Trainer for data parallel LightGBM training.
This Trainer runs the LightGBM training loop in a distributed manner using multiple Ray Actors.
If you would like to take advantage of LightGBM’s built-in handling for features with the categorical data type, consider applying the Categorizer preprocessor to set the dtypes in the dataset (see the sketch after the example below).
Note
LightGBMTrainer does not modify or otherwise alter the working of the LightGBM distributed training algorithm. Ray only provides orchestration, data ingest and fault tolerance. For more information on LightGBM distributed training, refer to the LightGBM documentation.
Example
import ray
from ray.train.lightgbm import LightGBMTrainer
from ray.train import ScalingConfig

train_dataset = ray.data.from_items(
    [{"x": x, "y": x + 1} for x in range(32)]
)
trainer = LightGBMTrainer(
    label_column="y",
    params={"objective": "regression"},
    scaling_config=ScalingConfig(num_workers=3),
    datasets={"train": train_dataset},
)
result = trainer.fit()
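For the categorical-feature handling mentioned above, here is a minimal sketch pairing the Categorizer preprocessor with the trainer. The "color" column and the toy data are illustrative, not part of this API; the preprocessor is fit and applied to the dataset before it is passed to the trainer.

import ray
from ray.data.preprocessors import Categorizer
from ray.train import ScalingConfig
from ray.train.lightgbm import LightGBMTrainer

# Toy dataset with one string-valued feature column ("color" is illustrative).
train_dataset = ray.data.from_items(
    [{"color": ["red", "green", "blue"][i % 3], "y": i % 2} for i in range(32)]
)

# Fit the Categorizer so the "color" column gets a pandas categorical dtype,
# letting LightGBM use its built-in categorical feature handling.
preprocessor = Categorizer(columns=["color"])
train_dataset = preprocessor.fit_transform(train_dataset)

trainer = LightGBMTrainer(
    label_column="y",
    params={"objective": "binary"},
    scaling_config=ScalingConfig(num_workers=2),
    datasets={"train": train_dataset},
)
result = trainer.fit()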
- Parameters:
datasets – The Ray Datasets to use for training and validation. Must include a “train” key denoting the training dataset. All non-training datasets will be used as separate validation sets, each reporting a separate metric (see the sketch after this parameter list).
label_column – Name of the label column. A column with this name must be present in the training dataset.
params – LightGBM training parameters passed to lightgbm.train(). Refer to the LightGBM documentation for a list of possible parameters.
num_boost_round – Target number of boosting iterations (trees in the model). Note that unlike in lightgbm.train, this is the target number of trees, meaning that if you set num_boost_round=10 and pass a model that has already been trained for 5 iterations, it will be trained for 5 more iterations, instead of 10 more.
scaling_config – Configuration for how to scale data parallel training.
run_config – Configuration for the execution of the training run.
resume_from_checkpoint – A checkpoint to resume training from.
metadata – Dict that should be made available in checkpoint.get_metadata() for checkpoints saved from this Trainer. Must be JSON-serializable.
**train_kwargs – Additional kwargs passed to the lightgbm.train() function.
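To illustrate the datasets parameter above, here is a minimal sketch with one validation set. The dataset key "valid" and the "metric": "l2" entry are illustrative choices; every non-"train" dataset is evaluated separately and reports its own metrics.

import ray
from ray.train import ScalingConfig
from ray.train.lightgbm import LightGBMTrainer

train_dataset = ray.data.from_items([{"x": x, "y": x + 1} for x in range(32)])
valid_dataset = ray.data.from_items([{"x": x, "y": x + 1} for x in range(32, 40)])

trainer = LightGBMTrainer(
    label_column="y",
    # "metric" selects which LightGBM evaluation metrics are computed for
    # each validation set; "l2" is just an illustrative choice.
    params={"objective": "regression", "metric": "l2"},
    scaling_config=ScalingConfig(num_workers=2),
    # Every key other than "train" is treated as a separate validation set,
    # each reporting its own metrics.
    datasets={"train": train_dataset, "valid": valid_dataset},
)
result = trainer.fit()
print(result.metrics)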
PublicAPI (beta): This API is in beta and may change before becoming stable.
Methods
as_trainable – Converts self to a tune.Trainable class.
can_restore – Checks whether a given directory contains a restorable Train experiment.
fit – Runs training.
get_dataset_config – Returns a copy of this Trainer's final dataset configs.
get_model – Retrieve the LightGBM model stored in this checkpoint.
preprocess_datasets – Deprecated.
restore – Restores a DataParallelTrainer from a previously interrupted/failed run.
setup – Called during fit() to perform initial setup on the Trainer.
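As a usage note for the get_model entry above, here is a minimal sketch of pulling the trained booster out of a result checkpoint; it assumes trainer was constructed as in the example earlier on this page.

from ray.train.lightgbm import LightGBMTrainer

result = trainer.fit()
# Retrieve the lightgbm.Booster stored in the checkpoint produced by training.
booster = LightGBMTrainer.get_model(result.checkpoint)
print(booster.num_trees())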