ray.tune.Tuner
- class ray.tune.Tuner(trainable: str | Callable | Type[Trainable] | BaseTrainer | None = None, *, param_space: Dict[str, Any] | None = None, tune_config: TuneConfig | None = None, run_config: RunConfig | None = None, _tuner_kwargs: Dict | None = None, _tuner_internal: TunerInternal | None = None, _entrypoint: AirEntrypoint = AirEntrypoint.TUNER)
Tuner is the recommended way of launching hyperparameter tuning jobs with Ray Tune.
- Parameters:
trainable – The trainable to be tuned. A minimal function-trainable sketch follows this parameter list.
param_space – Search space of the tuning job. Note that both the preprocessor and the datasets can be tuned here.
tune_config – Tuning-algorithm-specific configs. Refer to ray.tune.tune_config.TuneConfig for more info.
run_config – Runtime configuration that is specific to individual trials. If passed, this will overwrite the run config passed to the Trainer, if applicable. Refer to ray.train.RunConfig for more info.
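As referenced above, the trainable can also be a plain Python function. A minimal sketch, assuming a toy objective (the `objective` function and its `lr` search space are illustrative, not part of the API):

```python
from ray import tune
from ray.tune import Tuner

# Illustrative objective: any function that accepts a config dict works.
def objective(config):
    # Stand-in for a real training loop.
    score = (config["lr"] - 0.01) ** 2
    # Function trainables may return a final metrics dict.
    return {"score": score}

tuner = Tuner(
    objective,
    param_space={"lr": tune.loguniform(1e-4, 1e-1)},
    tune_config=tune.TuneConfig(metric="score", mode="min", num_samples=10),
)
results = tuner.fit()
print(results.get_best_result().config)
```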
Usage pattern:
```python
from sklearn.datasets import load_breast_cancer

from ray import tune
from ray.data import from_pandas
from ray.train import RunConfig, ScalingConfig
from ray.train.xgboost import XGBoostTrainer
from ray.tune.tuner import Tuner


def get_dataset():
    data_raw = load_breast_cancer(as_frame=True)
    dataset_df = data_raw["data"]
    dataset_df["target"] = data_raw["target"]
    dataset = from_pandas(dataset_df)
    return dataset


trainer = XGBoostTrainer(
    label_column="target",
    params={},
    datasets={"train": get_dataset()},
)

param_space = {
    "scaling_config": ScalingConfig(
        num_workers=tune.grid_search([2, 4]),
        resources_per_worker={
            "CPU": tune.grid_search([1, 2]),
        },
    ),
    # You can even grid search various datasets in Tune.
    # "datasets": {
    #     "train": tune.grid_search(
    #         [ds1, ds2]
    #     ),
    # },
    "params": {
        "objective": "binary:logistic",
        "tree_method": "approx",
        "eval_metric": ["logloss", "error"],
        "eta": tune.loguniform(1e-4, 1e-1),
        "subsample": tune.uniform(0.5, 1.0),
        "max_depth": tune.randint(1, 9),
    },
}

tuner = Tuner(
    trainable=trainer,
    param_space=param_space,
    run_config=RunConfig(name="my_tune_run"),
)
results = tuner.fit()
```
To retry a failed tune run, you can then do:

```python
tuner = Tuner.restore(results.experiment_path, trainable=trainer)
tuner.fit()
```
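By default, restoring resumes only unfinished trials; errored trials can be retried as well via additional flags. A minimal sketch, assuming `results` and `trainer` from the run above:

```python
tuner = Tuner.restore(
    results.experiment_path,
    trainable=trainer,
    resume_errored=True,  # resume errored trials from their latest checkpoints
    # restart_errored=True would instead restart them from scratch
)
tuner.fit()
```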
`results.experiment_path` can be retrieved from the ResultGrid object. It also appears in the log output of your first run.

PublicAPI (beta): This API is in beta and may change before becoming stable.
Methods
- `__init__`: Configure and construct a tune run.
- `can_restore`: Checks whether a given directory contains a restorable Tune experiment.
- `fit`: Executes hyperparameter tuning job as configured and returns result.
- `get_results`: Get results of a hyperparameter tuning run.
- `restore`: Restores Tuner after a previously failed run.
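As a sketch of how these methods compose, assuming an `experiment_path` saved from an earlier run and the same `trainer` as above:

```python
from ray.tune import Tuner

# can_restore() checks whether the directory holds a restorable experiment.
if Tuner.can_restore(experiment_path):
    tuner = Tuner.restore(experiment_path, trainable=trainer)
    tuner.fit()                    # resumes unfinished trials by default
    results = tuner.get_results()  # same ResultGrid that fit() returned
```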