Hyperparameter tuning with XGBoostTrainer
In this example, we will go through how you can use Ray AIR to run a distributed hyperparameter experiment to find optimal hyperparameters for an XGBoost model.
What we’ll cover:
How to load data from an Sklearn example dataset
How to initialize an XGBoost trainer
How to define a search space for regular XGBoost parameters
How to fetch the best obtained result from the tuning run
How to fetch a dataframe to do further analysis on the results
We’ll use the Covertype dataset provided by sklearn to train a multiclass classification task using XGBoost.
In this dataset, we try to predict the forest cover type (e.g. “lodgepole pine”) from cartographic variables, like the distance to the closest road or the hillshade at different times of the day. The features are binary, discrete, and continuous, and thus well suited for a decision-tree-based classification task.
You can find more information about the dataset on the dataset homepage.
We will train XGBoost models on this dataset. Because model training performance can be influenced by hyperparameter choices, we will generate several different configurations and train them in parallel. Notably, each of these trials will itself start a distributed training job to speed up training. All of this happens automatically within Ray AIR.
First, let’s make sure we have all dependencies installed:
!pip install -q "ray[air]" scikit-learn
Then we can start with some imports.
import pandas as pd
from sklearn.datasets import fetch_covtype
import ray
from ray import tune
from ray.air import RunConfig, ScalingConfig
from ray.train.xgboost import XGBoostTrainer
from ray.tune.tune_config import TuneConfig
from ray.tune.tuner import Tuner
We’ll define a utility function to create a Dataset from the Sklearn dataset. We expect the target column to be in the dataframe, so we’ll add it to the dataframe manually.
def get_training_data() -> ray.data.Dataset:
    data_raw = fetch_covtype()
    df = pd.DataFrame(data_raw["data"], columns=data_raw["feature_names"])
    df["target"] = data_raw["target"]
    return ray.data.from_pandas(df)
train_dataset = get_training_data()
2022-05-13 12:31:51,444 INFO services.py:1484 -- View the Ray dashboard at http://127.0.0.1:8265
Let’s take a look at the schema here:
print(train_dataset)
Dataset(num_blocks=1, num_rows=581012, schema={Elevation: float64, Aspect: float64, Slope: float64, Horizontal_Distance_To_Hydrology: float64, Vertical_Distance_To_Hydrology: float64, Horizontal_Distance_To_Roadways: float64, Hillshade_9am: float64, Hillshade_Noon: float64, Hillshade_3pm: float64, Horizontal_Distance_To_Fire_Points: float64, Wilderness_Area_0: float64, Wilderness_Area_1: float64, Wilderness_Area_2: float64, Wilderness_Area_3: float64, Soil_Type_0: float64, Soil_Type_1: float64, Soil_Type_2: float64, Soil_Type_3: float64, Soil_Type_4: float64, Soil_Type_5: float64, Soil_Type_6: float64, Soil_Type_7: float64, Soil_Type_8: float64, Soil_Type_9: float64, Soil_Type_10: float64, Soil_Type_11: float64, Soil_Type_12: float64, Soil_Type_13: float64, Soil_Type_14: float64, Soil_Type_15: float64, Soil_Type_16: float64, Soil_Type_17: float64, Soil_Type_18: float64, Soil_Type_19: float64, Soil_Type_20: float64, Soil_Type_21: float64, Soil_Type_22: float64, Soil_Type_23: float64, Soil_Type_24: float64, Soil_Type_25: float64, Soil_Type_26: float64, Soil_Type_27: float64, Soil_Type_28: float64, Soil_Type_29: float64, Soil_Type_30: float64, Soil_Type_31: float64, Soil_Type_32: float64, Soil_Type_33: float64, Soil_Type_34: float64, Soil_Type_35: float64, Soil_Type_36: float64, Soil_Type_37: float64, Soil_Type_38: float64, Soil_Type_39: float64, target: int32})
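If you want to peek at a few actual rows rather than just the schema, you can take a small sample from the Dataset. This is a quick sketch; the exact row representation in the output may differ between Ray versions.

# Take a look at two sample rows (output format may vary by Ray version)
sample_rows = train_dataset.take(2)
print(sample_rows)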
Since we’ll be training a multiclass prediction model, we have to pass some information to XGBoost. For instance, XGBoost expects us to provide the number of classes, and multiclass-enabled evaluation metrics.
For a good overview of commonly used hyperparameters, see our tutorial in the docs.
# XGBoost specific params
params = {
    "tree_method": "approx",
    "objective": "multi:softmax",
    "eval_metric": ["mlogloss", "merror"],
    "num_class": 8,
    "min_child_weight": 2
}
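In case you are wondering about num_class=8: the Covertype target labels are 1-indexed (values 1 through 7), while XGBoost’s multi:softmax objective expects class ids in the range [0, num_class). A quick sanity check on the raw labels, as a small sketch that re-fetches the data:

import numpy as np
from sklearn.datasets import fetch_covtype

# The Covertype labels run from 1 to 7, so with 0-indexed class ids we need num_class=8.
raw_labels = fetch_covtype()["target"]
print(np.unique(raw_labels))  # [1 2 3 4 5 6 7]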
With these parameters in place, we’ll create a Ray AIR XGBoostTrainer.

Note that we pass in a scaling_config to configure the distributed training behavior of each individual XGBoost training job. We want to distribute training across 2 workers. We also keep some CPU resources free for Ray Data operations.

The label_column specifies which column in the dataset contains the target values. params are the XGBoost training params defined above - we can tune these later! The datasets dict contains the dataset we would like to train on. Lastly, we pass the number of boosting rounds to XGBoost.
trainer = XGBoostTrainer(
    scaling_config=ScalingConfig(num_workers=2, _max_cpu_fraction_per_node=0.9),
    label_column="target",
    params=params,
    datasets={"train": train_dataset},
    num_boost_round=10,
)
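As a side note, this trainer could already be used on its own: calling trainer.fit() would kick off a single distributed training run with the default params. We sketch this below but don’t execute it here, since we want to tune the parameters instead.

# A single (non-tuned) training run would look like this - not executed in this example:
# single_result = trainer.fit()
# print(single_result.metrics["train-merror"])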
We can now create the Tuner with a search space to override some of the default parameters in the XGBoost trainer.
Here, we just want to tune the XGBoost max_depth and min_child_weight parameters. Note that we specifically set min_child_weight=2 in the default XGBoost trainer - this value will be overwritten during tuning.

We configure Tune to minimize the train-mlogloss metric. In random search, this doesn’t affect the evaluated configurations, but it will affect our default results fetching for analysis later.

By the way, the name train-mlogloss is provided by the XGBoost library - train is the name of the dataset and mlogloss is the metric we passed in the XGBoost params above. Trainables can report any number of results (in this case we report 2), but most search algorithms only act on one of them - here we chose the mlogloss.
tuner = Tuner(
    trainer,
    run_config=RunConfig(verbose=1),
    param_space={
        "params": {
            "max_depth": tune.randint(2, 8),
            "min_child_weight": tune.randint(1, 10),
        },
    },
    tune_config=TuneConfig(num_samples=8, metric="train-mlogloss", mode="min"),
)
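The search space is not limited to these two keys. For example, you could additionally sample the learning rate - the following is a sketch of an extended param_space, assuming you also want to tune XGBoost’s eta parameter:

# An extended search space (sketch): also sample the learning rate on a log scale
param_space = {
    "params": {
        "max_depth": tune.randint(2, 8),
        "min_child_weight": tune.randint(1, 10),
        "eta": tune.loguniform(1e-3, 3e-1),
    },
}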
Let’s run the tuning. This will take a few minutes to complete.
results = tuner.fit()
Current time: 2022-05-13 12:35:33 (running for 00:03:37.49)
Memory usage on this node: 10.0/16.0 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/16 CPUs, 0/0 GPUs, 0.0/6.73 GiB heap, 0.0/2.0 GiB objects
Current best trial: 4ab2f_00007 with train-mlogloss=0.560217 and parameters={'params': {'max_depth': 7, 'min_child_weight': 4}}
Result logdir: /Users/kai/ray_results/XGBoostTrainer_2022-05-13_12-31-55
Number of trials: 8/8 (8 TERMINATED)
(GBDTTrainable pid=62456) UserWarning: Dataset 'train' has 1 blocks, which is less than the `num_workers` 2. This dataset will be automatically repartitioned to 2 blocks.
(GBDTTrainable pid=62456) 2022-05-13 12:32:02,793 INFO main.py:980 -- [RayXGBoost] Created 2 new actors (2 total actors). Waiting until actors are ready for training.
(GBDTTrainable pid=62464) UserWarning: Dataset 'train' has 1 blocks, which is less than the `num_workers` 2. This dataset will be automatically repartitioned to 2 blocks.
(GBDTTrainable pid=62463) UserWarning: Dataset 'train' has 1 blocks, which is less than the `num_workers` 2. This dataset will be automatically repartitioned to 2 blocks.
(GBDTTrainable pid=62465) UserWarning: Dataset 'train' has 1 blocks, which is less than the `num_workers` 2. This dataset will be automatically repartitioned to 2 blocks.
(GBDTTrainable pid=62466) UserWarning: Dataset 'train' has 1 blocks, which is less than the `num_workers` 2. This dataset will be automatically repartitioned to 2 blocks.
(GBDTTrainable pid=62463) 2022-05-13 12:32:05,102 INFO main.py:980 -- [RayXGBoost] Created 2 new actors (2 total actors). Waiting until actors are ready for training.
(GBDTTrainable pid=62466) 2022-05-13 12:32:05,204 INFO main.py:980 -- [RayXGBoost] Created 2 new actors (2 total actors). Waiting until actors are ready for training.
(GBDTTrainable pid=62464) 2022-05-13 12:32:05,338 INFO main.py:980 -- [RayXGBoost] Created 2 new actors (2 total actors). Waiting until actors are ready for training.
(GBDTTrainable pid=62465) 2022-05-13 12:32:07,164 INFO main.py:980 -- [RayXGBoost] Created 2 new actors (2 total actors). Waiting until actors are ready for training.
(GBDTTrainable pid=62456) 2022-05-13 12:32:10,549 INFO main.py:1025 -- [RayXGBoost] Starting XGBoost training.
(_RemoteRayXGBoostActor pid=62495) [12:32:10] task [xgboost.ray]:6975277392 got new rank 1
(_RemoteRayXGBoostActor pid=62494) [12:32:10] task [xgboost.ray]:4560390352 got new rank 0
(raylet) Spilled 2173 MiB, 22 objects, write throughput 402 MiB/s. Set RAY_verbose_spill_logs=0 to disable this message.
(GBDTTrainable pid=62463) 2022-05-13 12:32:17,848 INFO main.py:1025 -- [RayXGBoost] Starting XGBoost training.
(_RemoteRayXGBoostActor pid=62523) [12:32:18] task [xgboost.ray]:4441524624 got new rank 0
(_RemoteRayXGBoostActor pid=62524) [12:32:18] task [xgboost.ray]:6890641808 got new rank 1
(GBDTTrainable pid=62465) 2022-05-13 12:32:21,253 INFO main.py:1025 -- [RayXGBoost] Starting XGBoost training.
(GBDTTrainable pid=62466) 2022-05-13 12:32:21,529 INFO main.py:1025 -- [RayXGBoost] Starting XGBoost training.
(_RemoteRayXGBoostActor pid=62563) [12:32:21] task [xgboost.ray]:4667801680 got new rank 1
(_RemoteRayXGBoostActor pid=62562) [12:32:21] task [xgboost.ray]:6856360848 got new rank 0
(_RemoteRayXGBoostActor pid=62530) [12:32:21] task [xgboost.ray]:6971527824 got new rank 0
(_RemoteRayXGBoostActor pid=62532) [12:32:21] task [xgboost.ray]:4538321232 got new rank 1
(GBDTTrainable pid=62464) 2022-05-13 12:32:21,937 INFO main.py:1025 -- [RayXGBoost] Starting XGBoost training.
(_RemoteRayXGBoostActor pid=62544) [12:32:21] task [xgboost.ray]:7005661840 got new rank 1
(_RemoteRayXGBoostActor pid=62543) [12:32:21] task [xgboost.ray]:4516088080 got new rank 0
(raylet) Spilled 4098 MiB, 83 objects, write throughput 347 MiB/s.
(GBDTTrainable pid=62456) 2022-05-13 12:32:41,289 INFO main.py:1109 -- Training in progress (31 seconds since last restart).
(GBDTTrainable pid=62463) 2022-05-13 12:32:48,617 INFO main.py:1109 -- Training in progress (31 seconds since last restart).
(GBDTTrainable pid=62465) 2022-05-13 12:32:52,110 INFO main.py:1109 -- Training in progress (31 seconds since last restart).
(GBDTTrainable pid=62466) 2022-05-13 12:32:52,448 INFO main.py:1109 -- Training in progress (31 seconds since last restart).
(GBDTTrainable pid=62464) 2022-05-13 12:32:52,692 INFO main.py:1109 -- Training in progress (31 seconds since last restart).
(GBDTTrainable pid=62456) 2022-05-13 12:33:11,960 INFO main.py:1109 -- Training in progress (61 seconds since last restart).
(GBDTTrainable pid=62463) 2022-05-13 12:33:19,076 INFO main.py:1109 -- Training in progress (61 seconds since last restart).
(GBDTTrainable pid=62464) 2022-05-13 12:33:23,409 INFO main.py:1109 -- Training in progress (61 seconds since last restart).
(GBDTTrainable pid=62465) 2022-05-13 12:33:23,420 INFO main.py:1109 -- Training in progress (62 seconds since last restart).
(GBDTTrainable pid=62466) 2022-05-13 12:33:23,541 INFO main.py:1109 -- Training in progress (62 seconds since last restart).
(GBDTTrainable pid=62463) 2022-05-13 12:33:23,693 INFO main.py:1519 -- [RayXGBoost] Finished XGBoost training on training data with total N=581,012 in 78.74 seconds (65.79 pure XGBoost training time).
(GBDTTrainable pid=62464) 2022-05-13 12:33:24,802 INFO main.py:1519 -- [RayXGBoost] Finished XGBoost training on training data with total N=581,012 in 79.62 seconds (62.85 pure XGBoost training time).
(GBDTTrainable pid=62648) UserWarning: Dataset 'train' has 1 blocks, which is less than the `num_workers` 2. This dataset will be automatically repartitioned to 2 blocks.
(GBDTTrainable pid=62651) UserWarning: Dataset 'train' has 1 blocks, which is less than the `num_workers` 2. This dataset will be automatically repartitioned to 2 blocks.
(GBDTTrainable pid=62648) 2022-05-13 12:33:38,788 INFO main.py:980 -- [RayXGBoost] Created 2 new actors (2 total actors). Waiting until actors are ready for training.
(GBDTTrainable pid=62651) 2022-05-13 12:33:38,766 INFO main.py:980 -- [RayXGBoost] Created 2 new actors (2 total actors). Waiting until actors are ready for training.
(GBDTTrainable pid=62456) 2022-05-13 12:33:42,168 INFO main.py:1109 -- Training in progress (92 seconds since last restart).
(GBDTTrainable pid=62456) 2022-05-13 12:33:46,177 INFO main.py:1519 -- [RayXGBoost] Finished XGBoost training on training data with total N=581,012 in 103.54 seconds (95.60 pure XGBoost training time).
(GBDTTrainable pid=62651) 2022-05-13 12:33:51,825 INFO main.py:1025 -- [RayXGBoost] Starting XGBoost training.
(_RemoteRayXGBoostActor pid=62670) [12:33:51] task [xgboost.ray]:4623186960 got new rank 1
(_RemoteRayXGBoostActor pid=62669) [12:33:51] task [xgboost.ray]:4707639376 got new rank 0
(GBDTTrainable pid=62648) 2022-05-13 12:33:52,036 INFO main.py:1025 -- [RayXGBoost] Starting XGBoost training.
(_RemoteRayXGBoostActor pid=62672) [12:33:52] task [xgboost.ray]:4530073552 got new rank 1
(_RemoteRayXGBoostActor pid=62671) [12:33:52] task [xgboost.ray]:6824757200 got new rank 0
(GBDTTrainable pid=62466) 2022-05-13 12:33:54,229 INFO main.py:1109 -- Training in progress (92 seconds since last restart).
(GBDTTrainable pid=62465) 2022-05-13 12:33:54,355 INFO main.py:1109 -- Training in progress (93 seconds since last restart).
(GBDTTrainable pid=62730) UserWarning: Dataset 'train' has 1 blocks, which is less than the `num_workers` 2. This dataset will be automatically repartitioned to 2 blocks.
(GBDTTrainable pid=62730) 2022-05-13 12:34:04,708 INFO main.py:980 -- [RayXGBoost] Created 2 new actors (2 total actors). Waiting until actors are ready for training.
(GBDTTrainable pid=62466) 2022-05-13 12:34:11,126 INFO main.py:1519 -- [RayXGBoost] Finished XGBoost training on training data with total N=581,012 in 126.08 seconds (109.48 pure XGBoost training time).
(GBDTTrainable pid=62730) 2022-05-13 12:34:15,175 INFO main.py:1025 -- [RayXGBoost] Starting XGBoost training.
(_RemoteRayXGBoostActor pid=62753) [12:34:15] task [xgboost.ray]:4468564048 got new rank 1
(_RemoteRayXGBoostActor pid=62752) [12:34:15] task [xgboost.ray]:6799468304 got new rank 0
(GBDTTrainable pid=62648) 2022-05-13 12:34:22,167 INFO main.py:1109 -- Training in progress (30 seconds since last restart).
(GBDTTrainable pid=62651) 2022-05-13 12:34:22,147 INFO main.py:1109 -- Training in progress (30 seconds since last restart).
(GBDTTrainable pid=62465) 2022-05-13 12:34:24,646 INFO main.py:1109 -- Training in progress (123 seconds since last restart).
(GBDTTrainable pid=62465) 2022-05-13 12:34:24,745 INFO main.py:1519 -- [RayXGBoost] Finished XGBoost training on training data with total N=581,012 in 137.75 seconds (123.36 pure XGBoost training time).
(GBDTTrainable pid=62651) 2022-05-13 12:34:40,173 INFO main.py:1519 -- [RayXGBoost] Finished XGBoost training on training data with total N=581,012 in 61.63 seconds (48.34 pure XGBoost training time).
(GBDTTrainable pid=62730) 2022-05-13 12:34:45,745 INFO main.py:1109 -- Training in progress (31 seconds since last restart).
(GBDTTrainable pid=62648) 2022-05-13 12:34:52,543 INFO main.py:1109 -- Training in progress (60 seconds since last restart).
(GBDTTrainable pid=62648) 2022-05-13 12:35:14,888 INFO main.py:1519 -- [RayXGBoost] Finished XGBoost training on training data with total N=581,012 in 96.35 seconds (82.83 pure XGBoost training time).
(GBDTTrainable pid=62730) 2022-05-13 12:35:16,197 INFO main.py:1109 -- Training in progress (61 seconds since last restart).
(GBDTTrainable pid=62730) 2022-05-13 12:35:33,441 INFO main.py:1519 -- [RayXGBoost] Finished XGBoost training on training data with total N=581,012 in 88.89 seconds (78.26 pure XGBoost training time).
2022-05-13 12:35:33,610 INFO tune.py:753 -- Total run time: 218.52 seconds (217.48 seconds for the tuning loop).
Now that we obtained the results, we can analyze them. For instance, we can fetch the best observed result according to the configured metric and mode and print it:
# This will fetch the best result according to the `metric` and `mode` specified
# in the `TuneConfig` above:
best_result = results.get_best_result()
print("Best result error rate", best_result.metrics["train-merror"])
Best result error rate 0.196929
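Besides the metrics, the Result object also carries the configuration that produced it, so we can directly see which hyperparameters won. A small sketch:

# The sampled hyperparameters of the best trial
print(best_result.config["params"])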
For more sophisticated analysis, we can get a pandas dataframe with all trial results:
df = results.get_dataframe()
print(df.columns)
Index(['train-mlogloss', 'train-merror', 'time_this_iter_s',
'should_checkpoint', 'done', 'timesteps_total', 'episodes_total',
'training_iteration', 'trial_id', 'experiment_id', 'date', 'timestamp',
'time_total_s', 'pid', 'hostname', 'node_ip', 'time_since_restore',
'timesteps_since_restore', 'iterations_since_restore', 'warmup_time',
'config/params/max_depth', 'config/params/min_child_weight', 'logdir'],
dtype='object')
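For instance, we can sort the trials by the metric we optimized and look at the corresponding hyperparameters - a short sketch using plain pandas:

# Sort all trials by the final training logloss and show the tuned hyperparameters
cols = ["train-mlogloss", "train-merror", "config/params/max_depth", "config/params/min_child_weight"]
print(df.sort_values("train-mlogloss")[cols])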
As an example, let’s group the results per min_child_weight parameter and fetch the minimal obtained values:
groups = df.groupby("config/params/min_child_weight")
mins = groups.min()
for min_child_weight, row in mins.iterrows():
print("Min child weight", min_child_weight, "error", row["train-merror"], "logloss", row["train-mlogloss"])
Min child weight 1 error 0.262468 logloss 0.69843
Min child weight 2 error 0.311035 logloss 0.79498
Min child weight 3 error 0.240916 logloss 0.651457
Min child weight 4 error 0.196929 logloss 0.560217
Min child weight 6 error 0.219665 logloss 0.608005
Min child weight 7 error 0.311035 logloss 0.794983
Min child weight 8 error 0.311035 logloss 0.794983
As you can see in our example run, a min_child_weight of 4 showed the best prediction performance, with an error rate of 0.196929. That’s the same as results.get_best_result() gave us!
results.get_dataframe() returns the last reported results per trial. If you want to obtain the best ever observed results instead, you can pass the filter_metric and filter_mode arguments to results.get_dataframe(). In our example, we’ll filter the minimum ever observed train-merror for each trial:
df_min_error = results.get_dataframe(filter_metric="train-merror", filter_mode="min")
df_min_error["train-merror"]
0 0.262468
1 0.310307
2 0.310307
3 0.219665
4 0.240916
5 0.220801
6 0.310307
7 0.196929
Name: train-merror, dtype: float64
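To see which min_child_weight each of these per-trial minima belongs to, you can select the config columns from the same dataframe - a sketch, assuming the filtered dataframe keeps the same columns as before:

# Pair each trial's best-ever error with its sampled min_child_weight
print(df_min_error[["train-merror", "config/params/min_child_weight"]])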
The best ever observed train-merror is 0.196929, the same as the minimum error in our grouped results. This is expected, as the classification error in XGBoost usually goes down over time - meaning our last results are usually the best results.
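If you prefer a visual impression, the same dataframe lends itself to quick plots. Here is a sketch using the pandas plotting API; it assumes matplotlib is installed, which the analysis above doesn’t require:

# Scatter plot of tree depth vs. final training logloss (requires matplotlib)
ax = df.plot.scatter(x="config/params/max_depth", y="train-mlogloss")
ax.set_xlabel("max_depth")
ax.set_ylabel("final train-mlogloss")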
And that’s how you analyze your hyperparameter tuning results. If you would like to have access to more analytics, please feel free to file a feature request, e.g. as a GitHub issue or on our Discuss platform!