Tune Search Algorithms (tune.search)#
Tune's Search Algorithms are wrappers around open-source optimization libraries for efficient hyperparameter selection.
Each library has a specific way of defining the search space - please refer to their documentation for more details.
Tune will automatically convert search spaces passed to Tuner
to the library format in most cases.
You can utilize these search algorithms as follows:
from ray import tune
from ray.tune.search.hyperopt import HyperOptSearch
tuner = tune.Tuner(my_function, tune_config=tune.TuneConfig(search_alg=HyperOptSearch(...)))
results = tuner.fit()
Saving and Restoring Tune Runs#
Certain search algorithms implement save/restore, allowing what they have learned to be reused across multiple tuning runs.
search_alg = HyperOptSearch()
tuner_1 = tune.Tuner(
trainable,
tune_config=tune.TuneConfig(search_alg=search_alg))
results_1 = tuner_1.fit()
search_alg.save("./my-checkpoint.pkl")
# Restore the saved state onto another search algorithm
search_alg2 = HyperOptSearch()
search_alg2.restore("./my-checkpoint.pkl")
tuner_2 = tune.Tuner(
trainable,
tune_config=tune.TuneConfig(search_alg=search_alg2))
results_2 = tuner_2.fit()
Tune automatically saves its state inside the current experiment folder (the "Result Dir") during tuning.
Note that if you have two Tune runs with the same experiment folder,
the previous state checkpoint will be overwritten. You can
avoid this by making sure air.RunConfig(name=...)
is set to a unique
identifier.
search_alg = HyperOptSearch()
tuner_1 = tune.Tuner(
cost,
tune_config=tune.TuneConfig(
num_samples=5,
search_alg=search_alg),
run_config=air.RunConfig(
verbose=0,
name="my-experiment-1",
local_dir="~/my_results"
))
results = tuner_1.fit()
search_alg2 = HyperOptSearch()
search_alg2.restore_from_dir(
os.path.join("~/my_results", "my-experiment-1"))
Random search and grid search (tune.search.basic_variant.BasicVariantGenerator)#
The default and most basic way to do hyperparameter search is via random and grid search.
Ray Tune does this through the BasicVariantGenerator
class that generates trial variants given a search space definition.
The BasicVariantGenerator is used by default if no search algorithm is passed to Tuner.
- class ray.tune.search.basic_variant.BasicVariantGenerator(points_to_evaluate: Optional[List[Dict]] = None, max_concurrent: int = 0, constant_grid_search: bool = False, random_state: Optional[Union[numpy.random._generator.Generator, numpy.random.mtrand.RandomState, int]] = None)[source]#
Uses Tune's variant generation for resolving variables.
This is the default search algorithm used if no other search algorithm is specified.
- Parameters
points_to_evaluate – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want to run first to help the algorithm make better suggestions for future parameters. Needs to be a list of dicts containing the configurations.
max_concurrent – Maximum number of concurrently running trials. If 0 (default), no maximum is enforced.
constant_grid_search – If this is set to True, Ray Tune will first try to sample random values and keep them constant over grid search parameters. If this is set to False (default), Ray Tune will sample new random parameters in each grid search condition; see the example at the end of this section.
random_state – Seed or numpy random generator to use for reproducible results. If None (default), will use the global numpy random generator (np.random). Please note that full reproducibility cannot be guaranteed in a distributed environment.
Example:
from ray import tune

# This will automatically use the `BasicVariantGenerator`
tuner = tune.Tuner(
    lambda config: config["a"] + config["b"],
    tune_config=tune.TuneConfig(
        num_samples=4
    ),
    param_space={
        "a": tune.grid_search([1, 2]),
        "b": tune.randint(0, 3)
    },
)
tuner.fit()
In the example above, 8 trials will be generated: for each sample (num_samples=4), each of the grid search variants for a will be sampled once. The b parameter will be sampled randomly.
The generator accepts a pre-set list of points that should be evaluated. The points will replace the first samples of each experiment passed to the BasicVariantGenerator.
Each point will replace one sample of the specified num_samples. If grid search variables are overwritten with the values specified in the presets, the number of samples will thus be reduced.
Example:
from ray import tune
from ray.tune.search.basic_variant import BasicVariantGenerator

tuner = tune.Tuner(
    lambda config: config["a"] + config["b"],
    tune_config=tune.TuneConfig(
        search_alg=BasicVariantGenerator(points_to_evaluate=[
            {"a": 2, "b": 2},
            {"a": 1},
            {"b": 2}
        ]),
        num_samples=4
    ),
    param_space={
        "a": tune.grid_search([1, 2]),
        "b": tune.randint(0, 3)
    },
)
tuner.fit()
The example above will produce six trials via four samples:
- The first sample will produce one trial with a=2 and b=2.
- The second sample will produce one trial with a=1 and b sampled randomly.
- The third sample will produce two trials, one for each grid search value of a. It will be b=2 for both of these trials.
- The fourth sample will produce two trials, one for each grid search value of a. b will be sampled randomly and independently for both of these trials.
PublicAPI: This API is stable across Ray releases.
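To illustrate constant_grid_search, here is a minimal sketch mirroring the examples above; it assumes only a trivial lambda trainable:

from ray import tune
from ray.tune.search.basic_variant import BasicVariantGenerator

# Random values for `b` are sampled once and held constant across
# both grid search variants of `a`.
tuner = tune.Tuner(
    lambda config: config["a"] + config["b"],
    tune_config=tune.TuneConfig(
        search_alg=BasicVariantGenerator(constant_grid_search=True),
        num_samples=4,
    ),
    param_space={
        "a": tune.grid_search([1, 2]),
        "b": tune.randint(0, 3),
    },
)
tuner.fit()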
Ax (tune.search.ax.AxSearch)#
- class ray.tune.search.ax.AxSearch(space: Optional[Union[Dict, List[Dict]]] = None, metric: Optional[str] = None, mode: Optional[str] = None, points_to_evaluate: Optional[List[Dict]] = None, parameter_constraints: Optional[List] = None, outcome_constraints: Optional[List] = None, ax_client: Optional[AxClient] = None, **ax_kwargs)[source]#
Uses Ax to optimize hyperparameters.
Ax is a platform for understanding, managing, deploying, and automating adaptive experiments. Ax provides an easy-to-use interface with BoTorch, a flexible, modern library for Bayesian optimization in PyTorch. More information can be found at https://ax.dev/.
To use this search algorithm, you must install Ax and sqlalchemy:
$ pip install ax-platform sqlalchemy
- Parameters
space – Parameters in the experiment search space. Required elements in the dictionaries are: "name" (name of this parameter, string), "type" (type of the parameter: "range", "fixed", or "choice", string), "bounds" for range parameters (list of two values, lower bound first), "values" for choice parameters (list of values), and "value" for fixed parameters (single value).
metric – Name of the metric used as objective in this experiment. This metric must be present in the raw_data argument to log_data. This metric must also be present in the dict reported/returned by the Trainable. If None but a mode was passed, the ray.tune.result.DEFAULT_METRIC will be used by default.
mode – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute. Defaults to "max".
points_to_evaluate – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want to run first to help the algorithm make better suggestions for future parameters. Needs to be a list of dicts containing the configurations.
parameter_constraints – Parameter constraints, such as "x3 >= x4" or "x3 + x4 >= 2".
outcome_constraints – Outcome constraints of form "metric_name >= bound", like "m1 <= 3."
ax_client – Optional AxClient instance. If this is set, do not pass any values to these parameters: space, metric, parameter_constraints, outcome_constraints.
**ax_kwargs – Passed to AxClient instance. Ignored if ax_client is not None.
Tune automatically converts search spaces to Axβs format:
from ray import tune
from ray.air import session
from ray.tune.search.ax import AxSearch

config = {
    "x1": tune.uniform(0.0, 1.0),
    "x2": tune.uniform(0.0, 1.0)
}

def easy_objective(config):
    for i in range(100):
        intermediate_result = config["x1"] + config["x2"] * i
        session.report({"score": intermediate_result})

ax_search = AxSearch()

tuner = tune.Tuner(
    easy_objective,
    tune_config=tune.TuneConfig(
        search_alg=ax_search,
        metric="score",
        mode="max",
    ),
    param_space=config,
)
tuner.fit()
If you would like to pass the search space manually, the code would look like this:
from ray import tune
from ray.air import session
from ray.tune.search.ax import AxSearch

parameters = [
    {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
    {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
]

def easy_objective(config):
    for i in range(100):
        intermediate_result = config["x1"] + config["x2"] * i
        session.report({"score": intermediate_result})

ax_search = AxSearch(space=parameters, metric="score", mode="max")

tuner = tune.Tuner(
    easy_objective,
    tune_config=tune.TuneConfig(
        search_alg=ax_search,
    ),
)
tuner.fit()
Bayesian Optimization (tune.search.bayesopt.BayesOptSearch)#
- class ray.tune.search.bayesopt.BayesOptSearch(space: Optional[Dict] = None, metric: Optional[str] = None, mode: Optional[str] = None, points_to_evaluate: Optional[List[Dict]] = None, utility_kwargs: Optional[Dict] = None, random_state: int = 42, random_search_steps: int = 10, verbose: int = 0, patience: int = 5, skip_duplicate: bool = True, analysis: Optional[ExperimentAnalysis] = None)[source]#
Uses fmfn/BayesianOptimization to optimize hyperparameters.
fmfn/BayesianOptimization is a library for Bayesian Optimization. More info can be found here: https://github.com/fmfn/BayesianOptimization.
This searcher will automatically filter out any NaN, inf or -inf results.
You will need to install fmfn/BayesianOptimization via the following:
pip install bayesian-optimization
This algorithm requires setting a search space using the BayesianOptimization search space specification.
- Parameters
space – Continuous search space. Parameters will be sampled from this space which will be used to run trials.
metric – The training result objective value attribute. If None but a mode was passed, the anonymous metric _metric will be used by default.
mode – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
points_to_evaluate – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want to run first to help the algorithm make better suggestions for future parameters. Needs to be a list of dicts containing the configurations.
utility_kwargs – Parameters to define the utility function. The default value is a dictionary with three keys: kind: ucb (Upper Confidence Bound), kappa: 2.576, xi: 0.0.
random_state – Used to initialize BayesOpt.
random_search_steps – Number of initial random searches. This is necessary to avoid initial local overfitting of the Bayesian process.
analysis – Optionally, the previous analysis to integrate.
verbose – Sets verbosity level for the BayesOpt package.
Tune automatically converts search spaces to BayesOptSearchβs format:
from ray import tune
from ray.tune.search.bayesopt import BayesOptSearch

config = {
    "width": tune.uniform(0, 20),
    "height": tune.uniform(-100, 100)
}

bayesopt = BayesOptSearch(metric="mean_loss", mode="min")
tuner = tune.Tuner(
    my_func,
    tune_config=tune.TuneConfig(
        search_alg=bayesopt,
    ),
    param_space=config,
)
tuner.fit()
If you would like to pass the search space manually, the code would look like this:
from ray import tune
from ray.tune.search.bayesopt import BayesOptSearch

space = {
    'width': (0, 20),
    'height': (-100, 100),
}

bayesopt = BayesOptSearch(space, metric="mean_loss", mode="min")
tuner = tune.Tuner(
    my_func,
    tune_config=tune.TuneConfig(
        search_alg=bayesopt,
    ),
)
tuner.fit()
BOHB (tune.search.bohb.TuneBOHB)#
BOHB (Bayesian Optimization HyperBand) is an algorithm that both terminates bad trials and also uses Bayesian Optimization to improve the hyperparameter search. It is available from the HpBandSter library.
Importantly, BOHB is intended to be paired with a specific scheduler class: HyperBandForBOHB.
In order to use this search algorithm, you will need to install HpBandSter and ConfigSpace:
$ pip install hpbandster ConfigSpace
See the BOHB paper for more details.
- class ray.tune.search.bohb.TuneBOHB(space: Optional[Union[Dict, ConfigSpace.ConfigurationSpace]] = None, bohb_config: Optional[Dict] = None, metric: Optional[str] = None, mode: Optional[str] = None, points_to_evaluate: Optional[List[Dict]] = None, seed: Optional[int] = None, max_concurrent: int = 0)[source]#
BOHB suggestion component.
Requires HpBandSter and ConfigSpace to be installed. You can install both with: pip install hpbandster ConfigSpace.
This should be used in conjunction with HyperBandForBOHB.
- Parameters
space – Continuous ConfigSpace search space. Parameters will be sampled from this space which will be used to run trials.
bohb_config – Configuration for the HpBandSter BOHB algorithm.
metric – The training result objective value attribute. If None but a mode was passed, the anonymous metric _metric will be used by default.
mode – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
points_to_evaluate – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want to run first to help the algorithm make better suggestions for future parameters. Needs to be a list of dicts containing the configurations.
seed – Optional random seed to initialize the random number generator. Setting this should lead to identical initial configurations at each run.
max_concurrent – Number of maximum concurrent trials. If this Searcher is used in a ConcurrencyLimiter, the max_concurrent value passed to it will override the value passed here. Set to <= 0 for no limit on concurrency.
Tune automatically converts search spaces to TuneBOHBβs format:
from ray import tune
from ray.tune.schedulers import HyperBandForBOHB
from ray.tune.search.bohb import TuneBOHB

config = {
    "width": tune.uniform(0, 20),
    "height": tune.uniform(-100, 100),
    "activation": tune.choice(["relu", "tanh"])
}

algo = TuneBOHB(metric="mean_loss", mode="min")
bohb = HyperBandForBOHB(
    time_attr="training_iteration",
    metric="mean_loss",
    mode="min",
    max_t=100)

tune.run(my_trainable, config=config, scheduler=bohb, search_alg=algo)
If you would like to pass the search space manually, the code would look like this:
import ConfigSpace as CS

from ray import tune
from ray.tune.schedulers import HyperBandForBOHB
from ray.tune.search.bohb import TuneBOHB

config_space = CS.ConfigurationSpace()
config_space.add_hyperparameter(
    CS.UniformFloatHyperparameter("width", lower=0, upper=20))
config_space.add_hyperparameter(
    CS.UniformFloatHyperparameter("height", lower=-100, upper=100))
config_space.add_hyperparameter(
    CS.CategoricalHyperparameter(
        name="activation", choices=["relu", "tanh"]))

algo = TuneBOHB(
    config_space, metric="mean_loss", mode="min")
bohb = HyperBandForBOHB(
    time_attr="training_iteration",
    metric="mean_loss",
    mode="min",
    max_t=100)

tune.run(my_trainable, scheduler=bohb, search_alg=algo)
BlendSearch (tune.search.flaml.BlendSearch)#
BlendSearch is an economical hyperparameter optimization algorithm that combines local search with global search. It is backed by the FLAML library. It allows the users to specify a low-cost initial point as input if such a point exists.
In order to use this search algorithm, you will need to install flaml:
$ pip install 'flaml[blendsearch]'
See the BlendSearch paper and the FLAML BlendSearch documentation for more details.
- ray.tune.search.flaml.BlendSearch#
alias of flaml.BlendSearch
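A minimal usage sketch, assuming flaml is installed; my_trainable is a stand-in trainable reporting mean_loss, and low_cost_partial_config follows FLAML's documented interface:

from ray import tune
from ray.tune.search.flaml import BlendSearch

algo = BlendSearch(
    metric="mean_loss",
    mode="min",
    # Optional low-cost starting point, per FLAML's interface.
    low_cost_partial_config={"width": 0},
)
tuner = tune.Tuner(
    my_trainable,
    tune_config=tune.TuneConfig(search_alg=algo, num_samples=10),
    param_space={
        "width": tune.uniform(0, 20),
        "height": tune.uniform(-100, 100),
    },
)
tuner.fit()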
CFO (tune.search.flaml.CFO)#
CFO (Cost-Frugal hyperparameter Optimization) is a hyperparameter search algorithm based on randomized local search. It is backed by the FLAML library. It allows the users to specify a low-cost initial point as input if such a point exists.
In order to use this search algorithm, you will need to install flaml:
$ pip install flaml
See the CFO paper and the FLAML CFO documentation for more details.
- ray.tune.search.flaml.CFO#
alias of flaml.CFO
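Usage mirrors BlendSearch; a sketch under the same assumptions (my_trainable is a stand-in, low_cost_partial_config follows FLAML's interface):

from ray import tune
from ray.tune.search.flaml import CFO

algo = CFO(
    metric="mean_loss",
    mode="min",
    # Optional low-cost starting point, per FLAML's interface.
    low_cost_partial_config={"width": 0},
)
tuner = tune.Tuner(
    my_trainable,
    tune_config=tune.TuneConfig(search_alg=algo, num_samples=10),
    param_space={
        "width": tune.uniform(0, 20),
        "height": tune.uniform(-100, 100),
    },
)
tuner.fit()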
Dragonfly (tune.search.dragonfly.DragonflySearch)#
- class ray.tune.search.dragonfly.DragonflySearch(optimizer: Optional[str] = None, domain: Optional[str] = None, space: Optional[Union[Dict, List[Dict]]] = None, metric: Optional[str] = None, mode: Optional[str] = None, points_to_evaluate: Optional[List[Dict]] = None, evaluated_rewards: Optional[List] = None, random_state_seed: Optional[int] = None, **kwargs)[source]#
Uses Dragonfly to optimize hyperparameters.
Dragonfly provides an array of tools to scale up Bayesian optimisation to expensive large-scale problems, including high-dimensional optimisation, parallel evaluations in synchronous or asynchronous settings, multi-fidelity optimisation (using cheap approximations to speed up the optimisation process), and multi-objective optimisation. For more info:
Dragonfly Website: https://github.com/dragonfly/dragonfly
Dragonfly Documentation: https://dragonfly-opt.readthedocs.io/
To use this search algorithm, install Dragonfly:
$ pip install dragonfly-opt
This interface requires using FunctionCallers and optimizers provided by Dragonfly.
This searcher will automatically filter out any NaN, inf or -inf results.
- Parameters
optimizer – Optimizer provided from dragonfly. Choose an optimiser that extends BlackboxOptimiser. If this is a string, domain must be set and optimizer must be one of [random, bandit, genetic].
domain – Optional domain. Should only be set if you don't pass an optimizer as the optimizer argument. Has to be one of [cartesian, euclidean].
space – Search space. Should only be set if you don't pass an optimizer as the optimizer argument. Defines the search space and requires a domain to be set. Can be automatically converted from the param_space dict passed to tune.Tuner().
metric – The training result objective value attribute. If None but a mode was passed, the anonymous metric _metric will be used by default.
mode – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
points_to_evaluate – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want to run first to help the algorithm make better suggestions for future parameters. Needs to be a list of dicts containing the configurations.
evaluated_rewards – If you have previously evaluated the parameters passed in as points_to_evaluate you can avoid re-running those trials by passing in the reward attributes as a list so the optimiser can be told the results without needing to re-compute the trial. Must be the same length as points_to_evaluate.
random_state_seed – Seed for reproducible results. Defaults to None. Please note that setting this to a value will change the global random state for numpy on initialization and loading from checkpoint.
Tune automatically converts search spaces to Dragonflyβs format:
from ray import tune
from ray.tune.search.dragonfly import DragonflySearch

config = {
    "LiNO3_vol": tune.uniform(0, 7),
    "Li2SO4_vol": tune.uniform(0, 7),
    "NaClO4_vol": tune.uniform(0, 7)
}

df_search = DragonflySearch(
    optimizer="bandit",
    domain="euclidean",
    metric="objective",
    mode="max")

tuner = tune.Tuner(
    my_func,
    tune_config=tune.TuneConfig(
        search_alg=df_search
    ),
    param_space=config
)
tuner.fit()
If you would like to pass the search space/optimizer manually, the code would look like this:
from ray import tune
from ray.tune.search.dragonfly import DragonflySearch

space = [{
    "name": "LiNO3_vol",
    "type": "float",
    "min": 0,
    "max": 7
}, {
    "name": "Li2SO4_vol",
    "type": "float",
    "min": 0,
    "max": 7
}, {
    "name": "NaClO4_vol",
    "type": "float",
    "min": 0,
    "max": 7
}]

df_search = DragonflySearch(
    optimizer="bandit",
    domain="euclidean",
    space=space,
    metric="objective",
    mode="max")

tuner = tune.Tuner(
    my_func,
    tune_config=tune.TuneConfig(
        search_alg=df_search
    ),
)
tuner.fit()
- save(checkpoint_path: str)[source]#
Save state to path for this search algorithm.
- Parameters
checkpoint_path – File where the search algorithm state is saved. This path should be used later when restoring from file.
Example:

search_alg = Searcher(...)
tuner = tune.Tuner(
    cost,
    tune_config=tune.TuneConfig(
        search_alg=search_alg,
        num_samples=5
    ),
    param_space=config
)
results = tuner.fit()
search_alg.save("./my_favorite_path.pkl")

Changed in version 0.8.7: Save is automatically called by Tuner().fit(). You can use Tuner().restore() to restore from an experiment directory such as /ray_results/trainable.
- restore(checkpoint_path: str)[source]#
Restore state for this search algorithm.
- Parameters
checkpoint_path – File where the search algorithm state is saved. This path should be the same as the one provided to "save".
Example:

search_alg.save("./my_favorite_path.pkl")

search_alg2 = Searcher(...)
search_alg2 = ConcurrencyLimiter(search_alg2, 1)
search_alg2.restore(checkpoint_path)
tuner = tune.Tuner(
    cost,
    tune_config=tune.TuneConfig(
        search_alg=search_alg2,
        num_samples=5
    ),
)
tuner.fit()
HEBO (tune.search.hebo.HEBOSearch)#
- class ray.tune.search.hebo.HEBOSearch(space: Optional[Union[Dict, hebo.design_space.design_space.DesignSpace]] = None, metric: Optional[str] = None, mode: Optional[str] = None, points_to_evaluate: Optional[List[Dict]] = None, evaluated_rewards: Optional[List] = None, random_state_seed: Optional[int] = None, max_concurrent: int = 8, **kwargs)[source]#
Uses HEBO (Heteroscedastic Evolutionary Bayesian Optimization) to optimize hyperparameters.
HEBO is a cutting-edge black-box optimization framework created by Huawei's Noah's Ark Lab. More info can be found here: https://github.com/huawei-noah/HEBO/tree/master/HEBO.
space can either be a HEBO DesignSpace object or a dict of Tune search spaces.
Please note that the first few trials will be random and used to kickstart the search process. In order to achieve good results, we recommend setting the number of trials to at least 16.
The maximum number of concurrent trials is determined by the max_concurrent argument. Trials will be done in batches of max_concurrent trials. If this Searcher is used in a ConcurrencyLimiter, the max_concurrent value passed to it will override the value passed here.
- Parameters
space – A dict mapping parameter names to Tune search spaces or a HEBO DesignSpace object.
metric – The training result objective value attribute. If None but a mode was passed, the anonymous metric _metric will be used by default.
mode – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
points_to_evaluate – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want to run first to help the algorithm make better suggestions for future parameters. Needs to be a list of dicts containing the configurations.
evaluated_rewards – If you have previously evaluated the parameters passed in as points_to_evaluate you can avoid re-running those trials by passing in the reward attributes as a list so the optimiser can be told the results without needing to re-compute the trial. Must be the same length as points_to_evaluate. (See tune/examples/hebo_example.py)
random_state_seed – Seed for reproducible results. Defaults to None. Please note that setting this to a value will change global random states for numpy and torch on initialization and loading from checkpoint.
max_concurrent – Number of maximum concurrent trials. If this Searcher is used in a ConcurrencyLimiter, the max_concurrent value passed to it will override the value passed here.
**kwargs – The keyword arguments will be passed to HEBO().
Tune automatically converts search spaces to HEBOβs format:
from ray import tune
from ray.tune.search.hebo import HEBOSearch

config = {
    "width": tune.uniform(0, 20),
    "height": tune.uniform(-100, 100)
}

hebo = HEBOSearch(metric="mean_loss", mode="min")
tuner = tune.Tuner(
    trainable_function,
    tune_config=tune.TuneConfig(
        search_alg=hebo
    ),
    param_space=config
)
tuner.fit()
Alternatively, you can pass a HEBO DesignSpace object manually to the Searcher:

from ray import tune
from ray.tune.search.hebo import HEBOSearch
from hebo.design_space.design_space import DesignSpace

space_config = [
    {'name': 'width', 'type': 'num', 'lb': 0, 'ub': 20},
    {'name': 'height', 'type': 'num', 'lb': -100, 'ub': 100},
]
space = DesignSpace().parse(space_config)

hebo = HEBOSearch(space, metric="mean_loss", mode="min")
tuner = tune.Tuner(
    trainable_function,
    tune_config=tune.TuneConfig(
        search_alg=hebo
    )
)
tuner.fit()
HyperOpt (tune.search.hyperopt.HyperOptSearch)#
- class ray.tune.search.hyperopt.HyperOptSearch(space: Optional[Dict] = None, metric: Optional[str] = None, mode: Optional[str] = None, points_to_evaluate: Optional[List[Dict]] = None, n_initial_points: int = 20, random_state_seed: Optional[int] = None, gamma: float = 0.25)[source]#
A wrapper around HyperOpt to provide trial suggestions.
HyperOpt is a Python library for serial and parallel optimization over awkward search spaces, which may include real-valued, discrete, and conditional dimensions. More info can be found at http://hyperopt.github.io/hyperopt.
HyperOptSearch uses the Tree-structured Parzen Estimators algorithm, though it can be trivially extended to support any algorithm HyperOpt supports.
To use this search algorithm, you will need to install HyperOpt:
pip install -U hyperopt
- Parameters
space – HyperOpt configuration. Parameters will be sampled from this configuration and will be used to override parameters generated in the variant generation process.
metric – The training result objective value attribute. If None but a mode was passed, the anonymous metric _metric will be used by default.
mode – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
points_to_evaluate – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want to run first to help the algorithm make better suggestions for future parameters. Needs to be a list of dicts containing the configurations.
n_initial_points – Number of random evaluations of the objective function before starting to approximate it with Tree Parzen Estimators. Defaults to 20.
random_state_seed – Seed for reproducible results. Defaults to None.
gamma – Parameter governing the Tree Parzen Estimators suggestion algorithm. Defaults to 0.25.
Tune automatically converts search spaces to HyperOptβs format:
from ray import tune
from ray.tune.search.hyperopt import HyperOptSearch

config = {
    'width': tune.uniform(0, 20),
    'height': tune.uniform(-100, 100),
    'activation': tune.choice(["relu", "tanh"])
}

current_best_params = [{
    'width': 10,
    'height': 0,
    'activation': "relu",
}]

hyperopt_search = HyperOptSearch(
    metric="mean_loss", mode="min",
    points_to_evaluate=current_best_params)

tuner = tune.Tuner(
    trainable,
    tune_config=tune.TuneConfig(
        search_alg=hyperopt_search
    ),
    param_space=config
)
tuner.fit()
If you would like to pass the search space manually, the code would look like this:
from hyperopt import hp

from ray import tune
from ray.tune.search.hyperopt import HyperOptSearch

space = {
    'width': hp.uniform('width', 0, 20),
    'height': hp.uniform('height', -100, 100),
    'activation': hp.choice("activation", ["relu", "tanh"])
}

current_best_params = [{
    'width': 10,
    'height': 0,
    'activation': "relu",
}]

hyperopt_search = HyperOptSearch(
    space, metric="mean_loss", mode="min",
    points_to_evaluate=current_best_params)

tuner = tune.Tuner(
    trainable,
    tune_config=tune.TuneConfig(
        search_alg=hyperopt_search
    ),
)
tuner.fit()
- save(checkpoint_path: str) None [source]#
Save state to path for this search algorithm.
- Parameters
checkpoint_path – File where the search algorithm state is saved. This path should be used later when restoring from file.
Example:

search_alg = Searcher(...)
tuner = tune.Tuner(
    cost,
    tune_config=tune.TuneConfig(
        search_alg=search_alg,
        num_samples=5
    ),
    param_space=config
)
results = tuner.fit()
search_alg.save("./my_favorite_path.pkl")

Changed in version 0.8.7: Save is automatically called by Tuner().fit(). You can use Tuner().restore() to restore from an experiment directory such as /ray_results/trainable.
- restore(checkpoint_path: str) None [source]#
Restore state for this search algorithm.
- Parameters
checkpoint_path – File where the search algorithm state is saved. This path should be the same as the one provided to "save".
Example:

search_alg.save("./my_favorite_path.pkl")

search_alg2 = Searcher(...)
search_alg2 = ConcurrencyLimiter(search_alg2, 1)
search_alg2.restore(checkpoint_path)
tuner = tune.Tuner(
    cost,
    tune_config=tune.TuneConfig(
        search_alg=search_alg2,
        num_samples=5
    ),
)
tuner.fit()
Nevergrad (tune.search.nevergrad.NevergradSearch)#
- class ray.tune.search.nevergrad.NevergradSearch(optimizer: Optional[Union[Optimizer, Type[Optimizer], ConfiguredOptimizer]] = None, optimizer_kwargs: Optional[Dict] = None, space: Optional[Dict] = None, metric: Optional[str] = None, mode: Optional[str] = None, points_to_evaluate: Optional[List[Dict]] = None)[source]#
Uses Nevergrad to optimize hyperparameters.
Nevergrad is an open source tool from Facebook for derivative free optimization. More info can be found at: https://github.com/facebookresearch/nevergrad.
You will need to install Nevergrad via the following command:
$ pip install nevergrad
- Parameters
optimizer – Optimizer class provided from Nevergrad. See here for available optimizers: https://facebookresearch.github.io/nevergrad/optimizers_ref.html#optimizers. This can also be an instance of a ConfiguredOptimizer. See the section on configured optimizers in the above link.
optimizer_kwargs – Kwargs passed in when instantiating the optimizer.
space – Nevergrad parametrization to be passed to optimizer on instantiation, or list of parameter names if you passed an optimizer object.
metric – The training result objective value attribute. If None but a mode was passed, the anonymous metric _metric will be used by default.
mode – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
points_to_evaluate – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want to run first to help the algorithm make better suggestions for future parameters. Needs to be a list of dicts containing the configurations.
Tune automatically converts search spaces to Nevergradβs format:
import nevergrad as ng

from ray import tune
from ray.tune.search.nevergrad import NevergradSearch

config = {
    "width": tune.uniform(0, 20),
    "height": tune.uniform(-100, 100),
    "activation": tune.choice(["relu", "tanh"])
}

current_best_params = [{
    "width": 10,
    "height": 0,
    "activation": "relu",
}]

ng_search = NevergradSearch(
    optimizer=ng.optimizers.OnePlusOne,
    metric="mean_loss",
    mode="min",
    points_to_evaluate=current_best_params)

tune.run(my_trainable, config=config, search_alg=ng_search)
If you would like to pass the search space manually, the code would look like this:
import nevergrad as ng

from ray import tune
from ray.tune.search.nevergrad import NevergradSearch

space = ng.p.Dict(
    width=ng.p.Scalar(lower=0, upper=20),
    height=ng.p.Scalar(lower=-100, upper=100),
    activation=ng.p.Choice(choices=["relu", "tanh"])
)

ng_search = NevergradSearch(
    optimizer=ng.optimizers.OnePlusOne,
    space=space,
    metric="mean_loss",
    mode="min")

tune.run(my_trainable, search_alg=ng_search)
- save(checkpoint_path: str)[source]#
Save state to path for this search algorithm.
- Parameters
checkpoint_path – File where the search algorithm state is saved. This path should be used later when restoring from file.
Example:

search_alg = Searcher(...)
tuner = tune.Tuner(
    cost,
    tune_config=tune.TuneConfig(
        search_alg=search_alg,
        num_samples=5
    ),
    param_space=config
)
results = tuner.fit()
search_alg.save("./my_favorite_path.pkl")

Changed in version 0.8.7: Save is automatically called by Tuner().fit(). You can use Tuner().restore() to restore from an experiment directory such as /ray_results/trainable.
- restore(checkpoint_path: str)[source]#
Restore state for this search algorithm.
- Parameters
checkpoint_path – File where the search algorithm state is saved. This path should be the same as the one provided to "save".
Example:

search_alg.save("./my_favorite_path.pkl")

search_alg2 = Searcher(...)
search_alg2 = ConcurrencyLimiter(search_alg2, 1)
search_alg2.restore(checkpoint_path)
tuner = tune.Tuner(
    cost,
    tune_config=tune.TuneConfig(
        search_alg=search_alg2,
        num_samples=5
    ),
)
tuner.fit()
Optuna (tune.search.optuna.OptunaSearch)#
- class ray.tune.search.optuna.OptunaSearch(space: Optional[Union[Dict[str, optuna.distributions.BaseDistribution], List[Tuple], Callable[[optuna.trial.Trial], Optional[Dict[str, Any]]]]] = None, metric: Optional[Union[str, List[str]]] = None, mode: Optional[Union[str, List[str]]] = None, points_to_evaluate: Optional[List[Dict]] = None, sampler: Optional[optuna.samplers.BaseSampler] = None, seed: Optional[int] = None, evaluated_rewards: Optional[List] = None)[source]#
A wrapper around Optuna to provide trial suggestions.
Optuna is a hyperparameter optimization library. In contrast to other libraries, it employs define-by-run style hyperparameter definitions.
This Searcher is a thin wrapper around Optunaβs search algorithms. You can pass any Optuna sampler, which will be used to generate hyperparameter suggestions.
Multi-objective optimization is supported.
- Parameters
space – Hyperparameter search space definition for Optuna's sampler. This can be either a dict with parameter names as keys and optuna.distributions as values, or a Callable - in which case, it should be a define-by-run function using optuna.trial to obtain the hyperparameter values. The function should return either a dict of constant values with names as keys, or None. For more information, see https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/002_configurations.html.
Warning
No actual computation should take place in the define-by-run function. Instead, put the training logic inside the function or class trainable passed to tune.Tuner().
metric – The training result objective value attribute. If None but a mode was passed, the anonymous metric _metric will be used by default. Can be a list of metrics for multi-objective optimization.
mode – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute. Can be a list of modes for multi-objective optimization (corresponding to metric).
points_to_evaluate – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want to run first to help the algorithm make better suggestions for future parameters. Needs to be a list of dicts containing the configurations.
sampler – Optuna sampler used to draw hyperparameter configurations. Defaults to MOTPESampler for multi-objective optimization with Optuna<2.9.0, and TPESampler in every other case.
Warning
Please note that with Optuna 2.10.0 and earlier the default MOTPESampler/TPESampler suffer from performance issues when dealing with a large number of completed trials (approx. >100). This will manifest as a delay when suggesting new configurations. This is an Optuna issue and may be fixed in a future Optuna release.
seed – Seed to initialize sampler with. This parameter is only used when sampler=None. In all other cases, the sampler you pass should be initialized with the seed already.
evaluated_rewards – If you have previously evaluated the parameters passed in as points_to_evaluate you can avoid re-running those trials by passing in the reward attributes as a list so the optimiser can be told the results without needing to re-compute the trial. Must be the same length as points_to_evaluate.
Warning
When using evaluated_rewards, the search space space must be provided as a dict with parameter names as keys and optuna.distributions instances as values. The define-by-run search space definition is not yet supported with this functionality.
Tune automatically converts search spaces to Optunaβs format:
from ray import tune
from ray.tune.search.optuna import OptunaSearch

config = {
    "a": tune.uniform(6, 8),
    "b": tune.loguniform(1e-4, 1e-2)
}

optuna_search = OptunaSearch(
    metric="loss",
    mode="min")

tuner = tune.Tuner(
    trainable,
    tune_config=tune.TuneConfig(
        search_alg=optuna_search,
    ),
    param_space=config,
)
tuner.fit()
If you would like to pass the search space manually, the code would look like this:
import optuna

from ray import tune
from ray.tune.search.optuna import OptunaSearch

space = {
    "a": optuna.distributions.UniformDistribution(6, 8),
    "b": optuna.distributions.LogUniformDistribution(1e-4, 1e-2),
}

optuna_search = OptunaSearch(
    space,
    metric="loss",
    mode="min")

tuner = tune.Tuner(
    trainable,
    tune_config=tune.TuneConfig(
        search_alg=optuna_search,
    ),
)
tuner.fit()

# Equivalent Optuna define-by-run function approach:

def define_search_space(trial: optuna.Trial):
    trial.suggest_float("a", 6, 8)
    trial.suggest_float("b", 1e-4, 1e-2, log=True)
    # training logic goes into trainable, this is just
    # for search space definition

optuna_search = OptunaSearch(
    define_search_space,
    metric="loss",
    mode="min")

tuner = tune.Tuner(
    trainable,
    tune_config=tune.TuneConfig(
        search_alg=optuna_search,
    ),
)
tuner.fit()
Multi-objective optimization is supported:
import optuna

from ray import tune
from ray.tune.search.optuna import OptunaSearch

space = {
    "a": optuna.distributions.UniformDistribution(6, 8),
    "b": optuna.distributions.LogUniformDistribution(1e-4, 1e-2),
}

# Note you have to specify metric and mode here instead of
# in tune.TuneConfig
optuna_search = OptunaSearch(
    space,
    metric=["loss1", "loss2"],
    mode=["min", "max"])

# Do not specify metric and mode here!
tuner = tune.Tuner(
    trainable,
    tune_config=tune.TuneConfig(
        search_alg=optuna_search,
    ),
)
tuner.fit()
You can pass configs that will be evaluated first using points_to_evaluate:

import optuna

from ray import tune
from ray.tune.search.optuna import OptunaSearch

space = {
    "a": optuna.distributions.UniformDistribution(6, 8),
    "b": optuna.distributions.LogUniformDistribution(1e-4, 1e-2),
}

optuna_search = OptunaSearch(
    space,
    points_to_evaluate=[{"a": 6.5, "b": 5e-4}, {"a": 7.5, "b": 1e-3}],
    metric="loss",
    mode="min")

tuner = tune.Tuner(
    trainable,
    tune_config=tune.TuneConfig(
        search_alg=optuna_search,
    ),
)
tuner.fit()

Avoid re-running evaluated trials by passing the rewards together with points_to_evaluate:

import optuna

from ray import tune
from ray.tune.search.optuna import OptunaSearch

space = {
    "a": optuna.distributions.UniformDistribution(6, 8),
    "b": optuna.distributions.LogUniformDistribution(1e-4, 1e-2),
}

optuna_search = OptunaSearch(
    space,
    points_to_evaluate=[{"a": 6.5, "b": 5e-4}, {"a": 7.5, "b": 1e-3}],
    evaluated_rewards=[0.89, 0.42],
    metric="loss",
    mode="min")

tuner = tune.Tuner(
    trainable,
    tune_config=tune.TuneConfig(
        search_alg=optuna_search,
    ),
)
tuner.fit()
New in version 0.8.8.
SigOpt (tune.search.sigopt.SigOptSearch)#
You will need to use the SigOpt experiment and space specification to specify your search space.
- class ray.tune.search.sigopt.SigOptSearch(space: Optional[List[Dict]] = None, name: str = 'Default Tune Experiment', max_concurrent: int = 1, connection: None = None, experiment_id: Optional[str] = None, observation_budget: Optional[int] = None, project: Optional[str] = None, metric: Optional[Union[str, List[str]]] = 'episode_reward_mean', mode: Optional[Union[str, List[str]]] = 'max', points_to_evaluate: Optional[List[Dict]] = None, **kwargs)[source]#
A wrapper around SigOpt to provide trial suggestions.
You must install SigOpt and have a SigOpt API key to use this module. Store the API token as an environment variable SIGOPT_KEY as follows:

pip install -U sigopt
export SIGOPT_KEY= ...
You will need to use the SigOpt experiment and space specification.
This searcher manages its own concurrency. If this Searcher is used in a ConcurrencyLimiter, the max_concurrent value passed to it will override the value passed here.
- Parameters
space – SigOpt configuration. Parameters will be sampled from this configuration and will be used to override parameters generated in the variant generation process. Not used if an existing experiment_id is given.
name – Name of experiment. Required by SigOpt.
max_concurrent – Number of maximum concurrent trials supported based on the user's SigOpt plan. Defaults to 1. If this Searcher is used in a ConcurrencyLimiter, the max_concurrent value passed to it will override the value passed here.
connection – An existing connection to SigOpt.
experiment_id – Optional, if given will connect to an existing experiment. This allows for a more interactive experience with SigOpt, such as prior beliefs and constraints.
observation_budget – Optional, can improve SigOpt performance.
project – Optional, project name to assign this experiment to. SigOpt can group experiments by project.
metric (str or list(str)) – If str then the training result objective value attribute. If list(str) then a list of metrics that can be optimized together. SigOpt currently supports up to 2 metrics.
mode – If experiment_id is given then this field is ignored. If str then must be one of {min, max}. If list then must be comprised of {min, max, obs}. Determines whether objective is minimizing or maximizing the metric attribute. If metric is a list then mode must be a list of the same length as metric.
Example:

space = [
    {
        'name': 'width',
        'type': 'int',
        'bounds': {
            'min': 0,
            'max': 20
        },
    },
    {
        'name': 'height',
        'type': 'int',
        'bounds': {
            'min': -100,
            'max': 100
        },
    },
]

algo = SigOptSearch(
    space, name="SigOpt Example Experiment",
    metric="mean_loss", mode="min")

Example:

space = [
    {
        'name': 'width',
        'type': 'int',
        'bounds': {
            'min': 0,
            'max': 20
        },
    },
    {
        'name': 'height',
        'type': 'int',
        'bounds': {
            'min': -100,
            'max': 100
        },
    },
]

algo = SigOptSearch(
    space, name="SigOpt Multi Objective Example Experiment",
    metric=["average", "std"], mode=["max", "min"])
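To wire either searcher into a run, a sketch following the Tuner pattern used elsewhere on this page (my_trainable is a stand-in, and a valid SIGOPT_KEY is assumed):

from ray import tune

tuner = tune.Tuner(
    my_trainable,
    tune_config=tune.TuneConfig(
        search_alg=algo,  # the SigOptSearch instance defined above
        num_samples=10,
    ),
)
tuner.fit()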
Scikit-Optimize (tune.search.skopt.SkOptSearch)#
- class ray.tune.search.skopt.SkOptSearch(optimizer: Optional[skopt.optimizer.optimizer.Optimizer] = None, space: Optional[Union[List[str], Dict[str, Union[Tuple, List]]]] = None, metric: Optional[str] = None, mode: Optional[str] = None, points_to_evaluate: Optional[List[Dict]] = None, evaluated_rewards: Optional[List] = None, convert_to_python: bool = True)[source]#
Uses Scikit Optimize (skopt) to optimize hyperparameters.
Scikit-optimize is a black-box optimization library. Read more here: https://scikit-optimize.github.io.
You will need to install Scikit-Optimize to use this module.
pip install scikit-optimize
This Search Algorithm requires you to pass in a skopt Optimizer object.
This searcher will automatically filter out any NaN, inf or -inf results.
- Parameters
optimizer – Optimizer provided from skopt.
space – A dict mapping parameter names to valid parameters, i.e. tuples for numerical parameters and lists for categorical parameters. If you passed an optimizer instance as the optimizer argument, this should be a list of parameter names instead.
metric – The training result objective value attribute. If None but a mode was passed, the anonymous metric _metric will be used by default.
mode – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
points_to_evaluate – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want to run first to help the algorithm make better suggestions for future parameters. Needs to be a list of dicts containing the configurations.
evaluated_rewards – If you have previously evaluated the parameters passed in as points_to_evaluate you can avoid re-running those trials by passing in the reward attributes as a list so the optimiser can be told the results without needing to re-compute the trial. Must be the same length as points_to_evaluate. (See tune/examples/skopt_example.py)
convert_to_python – SkOpt outputs numpy primitives (e.g. np.int64) instead of Python types. If this setting is set to True, the values will be converted to Python primitives.
Tune automatically converts search spaces to SkOptβs format:
from ray import tune
from ray.tune.search.skopt import SkOptSearch

config = {
    "width": tune.uniform(0, 20),
    "height": tune.uniform(-100, 100)
}

current_best_params = [
    {
        "width": 10,
        "height": 0,
    },
    {
        "width": 15,
        "height": -20,
    }
]

skopt_search = SkOptSearch(
    metric="mean_loss",
    mode="min",
    points_to_evaluate=current_best_params)

tuner = tune.Tuner(
    trainable_function,
    tune_config=tune.TuneConfig(
        search_alg=skopt_search
    ),
    param_space=config
)
tuner.fit()
If you would like to pass the search space/optimizer manually, the code would look like this:
from ray import tune
from ray.tune.search.skopt import SkOptSearch

space = {
    "width": (0, 20),
    "height": (-100, 100)
}

current_best_params = [
    {"width": 10, "height": 0},
    {"width": 15, "height": -20},
]

skopt_search = SkOptSearch(
    space=space,
    metric="mean_loss",
    mode="min",
    points_to_evaluate=current_best_params)

tuner = tune.Tuner(
    trainable_function,
    tune_config=tune.TuneConfig(
        search_alg=skopt_search
    ),
)
tuner.fit()
- save(checkpoint_path: str)[source]#
Save state to path for this search algorithm.
- Parameters
checkpoint_path – File where the search algorithm state is saved. This path should be used later when restoring from file.
Example:

search_alg = Searcher(...)
tuner = tune.Tuner(
    cost,
    tune_config=tune.TuneConfig(
        search_alg=search_alg,
        num_samples=5
    ),
    param_space=config
)
results = tuner.fit()
search_alg.save("./my_favorite_path.pkl")

Changed in version 0.8.7: Save is automatically called by Tuner().fit(). You can use Tuner().restore() to restore from an experiment directory such as /ray_results/trainable.
- restore(checkpoint_path: str)[source]#
Restore state for this search algorithm.
- Parameters
checkpoint_path – File where the search algorithm state is saved. This path should be the same as the one provided to "save".
Example:

search_alg.save("./my_favorite_path.pkl")

search_alg2 = Searcher(...)
search_alg2 = ConcurrencyLimiter(search_alg2, 1)
search_alg2.restore(checkpoint_path)
tuner = tune.Tuner(
    cost,
    tune_config=tune.TuneConfig(
        search_alg=search_alg2,
        num_samples=5
    ),
)
tuner.fit()
ZOOpt (tune.search.zoopt.ZOOptSearch)#
- class ray.tune.search.zoopt.ZOOptSearch(algo: str = 'asracos', budget: Optional[int] = None, dim_dict: Optional[Dict] = None, metric: Optional[str] = None, mode: Optional[str] = None, points_to_evaluate: Optional[List[Dict]] = None, parallel_num: int = 1, **kwargs)[source]#
A wrapper around ZOOpt to provide trial suggestions.
ZOOpt is a library for derivative-free optimization. It is backed by the ZOOpt package. Currently, Asynchronous Sequential RAndomized COordinate Shrinking (ASRacos) is implemented in Tune.
To use ZOOptSearch, install zoopt (>=0.4.1): pip install -U zoopt.
Tune automatically converts search spaces to ZOOpt's format:
from ray import air, tune
from ray.tune.search.zoopt import ZOOptSearch

config = {
    "iterations": 10,  # evaluation times
    "width": tune.uniform(-10, 10),
    "height": tune.uniform(-10, 10)
}

zoopt_search_config = {
    "parallel_num": 8,  # how many workers to run in parallel
}

zoopt_search = ZOOptSearch(
    algo="Asracos",  # only supports ASRacos currently
    budget=20,  # must match `num_samples` in `tune.TuneConfig()`
    metric="mean_loss",
    mode="min",
    **zoopt_search_config
)

tuner = tune.Tuner(
    my_objective,
    tune_config=tune.TuneConfig(
        search_alg=zoopt_search,
        num_samples=20
    ),
    run_config=air.RunConfig(
        name="zoopt_search",
        stop={"timesteps_total": 10}
    ),
    param_space=config
)
tuner.fit()
If you would like to pass the search space manually, the code would look like this:
from ray import air, tune
from ray.tune.search.zoopt import ZOOptSearch
from zoopt import ValueType

dim_dict = {
    "height": (ValueType.CONTINUOUS, [-10, 10], 1e-2),
    "width": (ValueType.DISCRETE, [-10, 10], False),
    "layers": (ValueType.GRID, [4, 8, 16])
}

config = {
    "iterations": 10,  # evaluation times
}

zoopt_search_config = {
    "parallel_num": 8,  # how many workers to run in parallel
}

zoopt_search = ZOOptSearch(
    algo="Asracos",  # only supports ASRacos currently
    budget=20,  # must match `num_samples` in `tune.TuneConfig()`
    dim_dict=dim_dict,
    metric="mean_loss",
    mode="min",
    **zoopt_search_config
)

tuner = tune.Tuner(
    my_objective,
    tune_config=tune.TuneConfig(
        search_alg=zoopt_search,
        num_samples=20
    ),
    run_config=air.RunConfig(
        name="zoopt_search",
        stop={"timesteps_total": 10}
    ),
    param_space=config
)
tuner.fit()
- Parameters
algo – The algorithm to use from zoopt. Currently, only ASRacos is supported.
budget – Number of samples.
dim_dict – Dimension dictionary. For continuous dimensions: (continuous, search_range, precision); for discrete dimensions: (discrete, search_range, has_order); for grid dimensions: (grid, grid_list). More details can be found in the zoopt package.
metric – The training result objective value attribute. If None but a mode was passed, the anonymous metric _metric will be used by default.
mode – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
points_to_evaluate – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want to run first to help the algorithm make better suggestions for future parameters. Needs to be a list of dicts containing the configurations.
parallel_num – How many workers to run in parallel. Note that the initial phase may start fewer workers than this number. More details can be found in the zoopt package.
- save(checkpoint_path: str)[source]#
Save state to path for this search algorithm.
- Parameters
checkpoint_path – File where the search algorithm state is saved. This path should be used later when restoring from file.
Example:

search_alg = Searcher(...)
tuner = tune.Tuner(
    cost,
    tune_config=tune.TuneConfig(
        search_alg=search_alg,
        num_samples=5
    ),
    param_space=config
)
results = tuner.fit()
search_alg.save("./my_favorite_path.pkl")

Changed in version 0.8.7: Save is automatically called by Tuner().fit(). You can use Tuner().restore() to restore from an experiment directory such as /ray_results/trainable.
- restore(checkpoint_path: str)[source]#
Restore state for this search algorithm.
- Parameters
checkpoint_path – File where the search algorithm state is saved. This path should be the same as the one provided to "save".
Example:

search_alg.save("./my_favorite_path.pkl")

search_alg2 = Searcher(...)
search_alg2 = ConcurrencyLimiter(search_alg2, 1)
search_alg2.restore(checkpoint_path)
tuner = tune.Tuner(
    cost,
    tune_config=tune.TuneConfig(
        search_alg=search_alg2,
        num_samples=5
    ),
)
tuner.fit()
Repeated Evaluations (tune.search.Repeater)#
Use ray.tune.search.Repeater
to average over multiple evaluations of the same
hyperparameter configurations. This is useful in cases where the evaluated
training procedure has high variance (e.g., in reinforcement learning).
By default, Repeater
will take in a repeat
parameter and a search_alg
.
The search_alg
will suggest new configurations to try, and the Repeater
will run repeat
trials of the configuration. It will then average the
search_alg.metric
from the final results of each repeated trial.
Warning
It is recommended to not use Repeater
with a TrialScheduler.
Early termination can negatively affect the average reported metric.
- class ray.tune.search.Repeater(searcher: ray.tune.search.searcher.Searcher, repeat: int = 1, set_index: bool = True)[source]#
A wrapper algorithm for repeating trials of same parameters.
Set tune.TuneConfig(num_samples=...) to be a multiple of repeat. For example, set num_samples=15 if you intend to obtain 3 search algorithm suggestions and repeat each suggestion 5 times. Any leftover trials (num_samples mod repeat) will be ignored.
It is recommended that you do not run an early-stopping TrialScheduler simultaneously.
- Parameters
searcher – Searcher object that the Repeater will optimize. Note that the Searcher will only see 1 trial among multiple repeated trials. The result/metric passed to the Searcher upon trial completion will be averaged among all repeats.
repeat – Number of times to generate a trial with a repeated configuration. Defaults to 1.
set_index – Sets a tune.search.repeater.TRIAL_INDEX in the Trainable/Function config which corresponds to the index of the repeated trial. This can be used for seeds, as shown below. Defaults to True.
Example:
from ray import tune
from ray.tune.search import Repeater
from ray.tune.search.bayesopt import BayesOptSearch

search_alg = BayesOptSearch(...)
re_search_alg = Repeater(search_alg, repeat=10)

# Repeat 2 samples 10 times each.
tuner = tune.Tuner(
    trainable,
    tune_config=tune.TuneConfig(
        search_alg=re_search_alg,
        num_samples=20,
    ),
)
tuner.fit()
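Because set_index=True injects tune.search.repeater.TRIAL_INDEX into each trial's config, the repeat index can serve as a seed. A sketch (the width parameter and mean_loss metric are illustrative):

import numpy as np

from ray.air import session
from ray.tune.search.repeater import TRIAL_INDEX

def trainable(config):
    # Seed each repetition of the same configuration with its
    # repeat index so only the random draw differs between repeats.
    np.random.seed(config[TRIAL_INDEX])
    score = config["width"] + np.random.normal()
    session.report({"mean_loss": score})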
PublicAPI: This API is stable across Ray releases.
ConcurrencyLimiter (tune.search.ConcurrencyLimiter)#
Use ray.tune.search.ConcurrencyLimiter
to limit the amount of concurrency when using a search algorithm.
This is useful when a given optimization algorithm does not parallelize very well (like a naive Bayesian Optimization).
- class ray.tune.search.ConcurrencyLimiter(searcher: ray.tune.search.searcher.Searcher, max_concurrent: int, batch: bool = False)[source]#
A wrapper algorithm for limiting the number of concurrent trials.
Certain Searchers have their own internal logic for limiting the number of concurrent trials. If such a Searcher is passed to a ConcurrencyLimiter, the max_concurrent of the ConcurrencyLimiter will override the max_concurrent value of the Searcher. The ConcurrencyLimiter will then let the Searcher's internal logic take over.
- Parameters
searcher – Searcher object that the ConcurrencyLimiter will manage.
max_concurrent – Maximum concurrent samples from the underlying searcher.
batch – Whether to wait for all concurrent samples to finish before updating the underlying searcher.
Example:
from ray import tune
from ray.tune.search import ConcurrencyLimiter
from ray.tune.search.hyperopt import HyperOptSearch

search_alg = HyperOptSearch(metric="accuracy")
search_alg = ConcurrencyLimiter(search_alg, max_concurrent=2)

tuner = tune.Tuner(
    trainable,
    tune_config=tune.TuneConfig(
        search_alg=search_alg
    ),
)
tuner.fit()
PublicAPI: This API is stable across Ray releases.
Custom Search Algorithms (tune.search.Searcher)#
If you are interested in implementing or contributing a new Search Algorithm, provide the following interface:
- class ray.tune.search.Searcher(metric: Optional[str] = None, mode: Optional[str] = None)[source]#
Bases:
object
Abstract class for wrapping suggesting algorithms.
Custom algorithms can extend this class easily by overriding the suggest method to provide generated parameters for the trials.
Any subclass that implements __init__ must also call the constructor of this class: super(Subclass, self).__init__(...).
To track suggestions and their corresponding evaluations, the method suggest will be passed a trial_id, which will be used in subsequent notifications.
Not all implementations support multi objectives.
- Parameters
metric – The training result objective value attribute. If a list, then a list of training result objective value attributes.
mode – If a string, one of {min, max}. If a list, then a list of max and min; determines whether objective is minimizing or maximizing the metric attribute. Must match type of metric.
from ray import tune
from ray.tune.search import Searcher

class ExampleSearch(Searcher):
    def __init__(self, metric="mean_loss", mode="min", **kwargs):
        super(ExampleSearch, self).__init__(
            metric=metric, mode=mode, **kwargs)
        self.optimizer = Optimizer()  # placeholder for your optimizer
        self.configurations = {}

    def suggest(self, trial_id):
        configuration = self.optimizer.query()
        self.configurations[trial_id] = configuration
        return configuration

    def on_trial_complete(self, trial_id, result, **kwargs):
        configuration = self.configurations[trial_id]
        if result and self.metric in result:
            self.optimizer.update(configuration, result[self.metric])

tuner = tune.Tuner(
    trainable_function,
    tune_config=tune.TuneConfig(
        search_alg=ExampleSearch()
    )
)
tuner.fit()
DeveloperAPI: This API may change across minor Ray releases.
- set_search_properties(metric: Optional[str], mode: Optional[str], config: Dict, **spec) bool [source]#
Pass search properties to searcher.
This method acts as an alternative to instantiating search algorithms with their own specific search spaces. Instead they can accept a Tune config through this method. A searcher should return True if setting the config was successful, or False if it was unsuccessful, e.g. when the search space has already been set.
- Parameters
metric – Metric to optimize.
mode – One of ["min", "max"]. Direction to optimize.
config – Tune config dict.
**spec – Any kwargs for forward compatibility. Info like Experiment.PUBLIC_KEYS is provided through here.
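A minimal sketch of a searcher implementing this hook; it assumes the base class stores the metric and mode on _metric and _mode, and all names are illustrative:

from typing import Dict, Optional

from ray.tune.search import Searcher

class MySearcher(Searcher):
    def __init__(self, metric=None, mode=None, space=None):
        super(MySearcher, self).__init__(metric=metric, mode=mode)
        self._space = space

    def set_search_properties(
        self, metric: Optional[str], mode: Optional[str],
        config: Dict, **spec
    ) -> bool:
        if self._space:
            # The search space was already set; signal failure.
            return False
        self._space = config
        if metric:
            self._metric = metric
        if mode:
            self._mode = mode
        return True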
- on_trial_result(trial_id: str, result: Dict) None [source]#
Optional notification for result during training.
Note that by default, the result dict may include NaNs or may not include the optimization metric. It is up to the subclass implementation to preprocess the result to avoid breaking the optimization process.
- Parameters
trial_id – A unique string ID for the trial.
result – Dictionary of metrics for current training progress. Note that the result dict may include NaNs or may not include the optimization metric. It is up to the subclass implementation to preprocess the result to avoid breaking the optimization process.
- on_trial_complete(trial_id: str, result: Optional[Dict] = None, error: bool = False) None [source]#
Notification for the completion of trial.
Typically, this method is used for notifying the underlying optimizer of the result.
- Parameters
trial_id – A unique string ID for the trial.
result – Dictionary of metrics for current training progress. Note that the result dict may include NaNs or may not include the optimization metric. It is up to the subclass implementation to preprocess the result to avoid breaking the optimization process. Upon errors, this may also be None.
error – True if the training process raised an error.
- suggest(trial_id: str) Optional[Dict] [source]#
Queries the algorithm to retrieve the next set of parameters.
- Parameters
trial_id – Trial ID used for subsequent notifications.
- Returns
Configuration for a trial, if possible. If FINISHED is returned, Tune will be notified that no more suggestions/configurations will be provided. If None is returned, Tune will skip the querying of the searcher for this step.
- Return type
dict | FINISHED | None
- add_evaluated_point(parameters: Dict, value: float, error: bool = False, pruned: bool = False, intermediate_values: Optional[List[float]] = None)[source]#
Pass results from a point that has been evaluated separately.
This method allows for information from outside the suggest - on_trial_complete loop to be passed to the search algorithm. This functionality depends on the underlying search algorithm and may not be always available.
- Parameters
parameters – Parameters used for the trial.
value – Metric value obtained in the trial.
error – True if the training process raised an error.
pruned – True if trial was pruned.
intermediate_values – List of metric values for intermediate iterations of the result. None if not applicable.
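For example, a sketch of feeding an externally evaluated configuration to a searcher that implements this method (OptunaSearch is used here; the space and values are illustrative):

import optuna

from ray.tune.search.optuna import OptunaSearch

space = {"a": optuna.distributions.UniformDistribution(6, 8)}
searcher = OptunaSearch(space, metric="loss", mode="min")

# Feed in a configuration evaluated outside of Tune so the sampler
# can learn from it without re-running the trial.
searcher.add_evaluated_point(parameters={"a": 6.5}, value=0.42)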
- add_evaluated_trials(trials_or_analysis: Union[Trial, List[Trial], ExperimentAnalysis], metric: str)[source]#
Pass results from trials that have been evaluated separately.
This method allows for information from outside the suggest - on_trial_complete loop to be passed to the search algorithm. This functionality depends on the underlying search algorithm and may not be always available (same as add_evaluated_point).
- Parameters
trials_or_analysis – Trials to pass results from to the searcher.
metric – Metric name reported by trials used for determining the objective value.
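A sketch of replaying completed trials from a previous experiment directory (the path, space, and metric name are illustrative, and this assumes the underlying searcher supports externally evaluated points):

import optuna

from ray.tune import ExperimentAnalysis
from ray.tune.search.optuna import OptunaSearch

# Load a previous experiment's results; the path is illustrative.
analysis = ExperimentAnalysis("~/ray_results/my-experiment-1")

space = {"a": optuna.distributions.UniformDistribution(6, 8)}
searcher = OptunaSearch(space, metric="loss", mode="min")

# Seed the searcher with the results of all previously completed trials.
searcher.add_evaluated_trials(analysis, metric="loss")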
- save(checkpoint_path: str)[source]#
Save state to path for this search algorithm.
- Parameters
checkpoint_path – File where the search algorithm state is saved. This path should be used later when restoring from file.
Example:

search_alg = Searcher(...)
tuner = tune.Tuner(
    cost,
    tune_config=tune.TuneConfig(
        search_alg=search_alg,
        num_samples=5
    ),
    param_space=config
)
results = tuner.fit()
search_alg.save("./my_favorite_path.pkl")

Changed in version 0.8.7: Save is automatically called by Tuner().fit(). You can use Tuner().restore() to restore from an experiment directory such as /ray_results/trainable.
- restore(checkpoint_path: str)[source]#
Restore state for this search algorithm.
- Parameters
checkpoint_path – File where the search algorithm state is saved. This path should be the same as the one provided to "save".
Example:

search_alg.save("./my_favorite_path.pkl")

search_alg2 = Searcher(...)
search_alg2 = ConcurrencyLimiter(search_alg2, 1)
search_alg2.restore(checkpoint_path)
tuner = tune.Tuner(
    cost,
    tune_config=tune.TuneConfig(
        search_alg=search_alg2,
        num_samples=5
    ),
)
tuner.fit()
- set_max_concurrency(max_concurrent: int) bool [source]#
Set max concurrent trials this searcher can run.
This method will be called on the wrapped searcher by the ConcurrencyLimiter. It is intended to allow for searchers which have custom, internal logic handling max concurrent trials to inherit the value passed to the ConcurrencyLimiter.
If this method returns False, it signifies that no special logic for handling this case is present in the searcher.
- Parameters
max_concurrent – Number of maximum concurrent trials.
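A sketch of a searcher with internal concurrency logic honoring the limiter; all names are illustrative:

from ray.tune.search import Searcher

class BatchedSearcher(Searcher):
    def __init__(self, metric=None, mode=None, max_concurrent=8):
        super(BatchedSearcher, self).__init__(metric=metric, mode=mode)
        self._max_concurrent = max_concurrent

    def set_max_concurrency(self, max_concurrent: int) -> bool:
        # Adopt the limit from the wrapping ConcurrencyLimiter; returning
        # True tells it to defer to this searcher's internal logic.
        self._max_concurrent = max_concurrent
        return True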
- save_to_dir(checkpoint_dir: str, session_str: str = 'default')[source]#
Automatically saves the given searcher to the checkpoint_dir.
This is automatically used by Tuner().fit() during a Tune job.
- Parameters
checkpoint_dir – Filepath to experiment dir.
session_str – Unique identifier of the current run session.
- restore_from_dir(checkpoint_dir: str)[source]#
Restores the state of a searcher from a given checkpoint_dir.
Typically, you should use this function to restore from an experiment directory such as /ray_results/trainable.

tuner = tune.Tuner(
    cost,
    run_config=air.RunConfig(
        name=self.experiment_name,
        local_dir="~/my_results",
    ),
    tune_config=tune.TuneConfig(
        search_alg=search_alg,
        num_samples=5
    ),
    param_space=config
)
tuner.fit()

search_alg2 = Searcher()
search_alg2.restore_from_dir(
    os.path.join("~/my_results", self.experiment_name))
- property metric: str#
The training result objective value attribute.
- property mode: str#
Specifies if minimizing or maximizing the metric.
If contributing, make sure to add test cases and an entry in the function described below.
Shim Instantiation (tune.create_searcher)#
There is also a shim function that constructs the search algorithm based on the provided string. This can be useful if the search algorithm you want to use changes often (e.g., specifying the search algorithm via a CLI option or config file).
- tune.create_searcher(**kwargs)#
Instantiate a search algorithm based on the given string.
This is useful for swapping between different search algorithms.
- Parameters
search_alg β The search algorithm to use.
metric β The training result objective value attribute. Stopping procedures will use this attribute.
mode β One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
**kwargs β Additional parameters. These keyword arguments will be passed to the initialization function of the chosen class.
- Returns
The search algorithm.
- Return type
Searcher
Example
>>> from ray import tune
>>> search_alg = tune.create_searcher('ax')
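Building on this, a sketch of choosing the searcher from a string (e.g. a CLI flag) and passing it to a Tuner; trainable is a stand-in:

from ray import tune

search_alg = tune.create_searcher(
    "hyperopt", metric="mean_loss", mode="min")
tuner = tune.Tuner(
    trainable,
    tune_config=tune.TuneConfig(
        search_alg=search_alg,
        num_samples=10,
    ),
)
tuner.fit()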
PublicAPI (beta): This API is in beta and may change before becoming stable.