ray.tune.Tuner#
- class ray.tune.Tuner(trainable: str | Callable | Type[Trainable] | BaseTrainer | None = None, *, param_space: Dict[str, Any] | None = None, tune_config: TuneConfig | None = None, run_config: RunConfig | None = None, _tuner_kwargs: Dict | None = None, _tuner_internal: TunerInternal | None = None, _entrypoint: AirEntrypoint = AirEntrypoint.TUNER)[source]#
Tuner is the recommended way of launching hyperparameter tuning jobs with Ray Tune.
- Parameters:
trainable – The trainable to be tuned.
param_space – Search space of the tuning job. See Working with Tune Search Spaces.
tune_config – Tuning-specific configs, such as setting custom search algorithms and trial scheduling algorithms.
run_config – Job-level run configuration, which includes configs for persistent storage, checkpointing, fault tolerance, etc.
Usage pattern:
```python
import ray.tune
from ray.tune import Tuner

def trainable(config):
    # Your training logic here
    ray.tune.report({"accuracy": 0.8})

tuner = Tuner(
    trainable=trainable,
    param_space={"lr": ray.tune.grid_search([0.001, 0.01])},
    run_config=ray.tune.RunConfig(name="my_tune_run"),
)
results = tuner.fit()
```
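Beyond param_space and run_config, the tune_config argument controls how trials are searched and scheduled. The following is a minimal sketch, not from the original docs: the metric name, sample count, and ASHA scheduler settings are illustrative assumptions.

```python
import ray.tune
from ray.tune import Tuner
from ray.tune.schedulers import ASHAScheduler

def trainable(config):
    # Placeholder training logic that reports a single metric.
    ray.tune.report({"accuracy": config["lr"] * 10})

tuner = Tuner(
    trainable=trainable,
    param_space={"lr": ray.tune.uniform(0.001, 0.1)},
    tune_config=ray.tune.TuneConfig(
        metric="accuracy",          # metric name is an assumption for this sketch
        mode="max",
        num_samples=8,              # run 8 sampled trials
        scheduler=ASHAScheduler(),  # early-stop underperforming trials
    ),
    run_config=ray.tune.RunConfig(name="my_tune_run"),
)
results = tuner.fit()
```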
To retry a failed Tune run, you can then do:

```python
tuner = Tuner.restore(results.experiment_path, trainable=trainable)
tuner.fit()
```
results.experiment_path can be retrieved from the ResultGrid object. It can also be easily seen in the log output from your first run.

PublicAPI (beta): This API is in beta and may change before becoming stable.
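If the original run had errored trials, Tuner.restore accepts flags to control how they are handled. A hedged sketch, assuming the resume_errored keyword is available in your installed Ray version:

```python
from ray.tune import Tuner

tuner = Tuner.restore(
    results.experiment_path,   # path reported by the first run's ResultGrid
    trainable=trainable,
    resume_errored=True,       # assumed flag: re-run trials that previously errored
)
results = tuner.fit()
```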
Methods
- Tuner.__init__: Configure and construct a tune run.
- Tuner.can_restore: Checks whether a given directory contains a restorable Tune experiment.
- Tuner.fit: Executes hyperparameter tuning job as configured and returns result.
- Tuner.get_results: Get results of a hyperparameter tuning run.
- Tuner.restore: Restores Tuner after a previously failed run.
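A brief sketch tying the methods above together; the experiment path is a hypothetical placeholder, and trainable is assumed to be defined as in the usage pattern above.

```python
from ray.tune import Tuner

experiment_path = "~/ray_results/my_tune_run"  # hypothetical path for illustration
if Tuner.can_restore(experiment_path):
    tuner = Tuner.restore(experiment_path, trainable=trainable)
    # Results of a previously finished run can be read without calling fit() again.
    result_grid = tuner.get_results()
    print(result_grid.get_best_result(metric="accuracy", mode="max"))
```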